This essay considers the production and consumption of aesthetic value as a thermodynamic system. This view formalizes the philosophical and mathematical mechanics of the observations of The Innovator's Dilemma. The S-curve is mathematically related to the gamma distribution. That explains why the gammatone filters of animal senses are able to detect complex forces in real time. This applies to work-measuring systems at any scale, from the shapes of individual raindrops, to the individual work value of a person, to the work value of the global economy, and is falsifiable at all scales.
Any group of work applied over time, made up of components with regular time patterns, has its value realized over time in the form of a gamma distribution. The theory is:
Each individual type of work is representable as a different Poisson point process. Aggregating many such processes yields another Poisson process, and the waiting time until a given number of events of a Poisson process follows the gamma distribution. Therefore, each type of work can be projected to complete across time along a gamma distribution, with the average completion time determined by its shape and scale parameters.
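A minimal simulation sketch of this step, assuming (for illustration only) that each work item consists of k sequential steps whose durations are the exponential waiting times of a Poisson point process, so that the total completion time is gamma distributed:

```python
# Sketch: work made of Poisson-timed steps has gamma-distributed completion times.
# Assumption (for illustration only): each work item is k sequential steps whose
# durations are i.i.d. exponential with rate lam, i.e. the waiting times of a
# Poisson point process. The sum of k such exponentials is Gamma(shape=k, scale=1/lam).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, lam, n_items = 5, 2.0, 100_000        # 5 steps per item, 2 steps per unit of time

step_durations = rng.exponential(scale=1 / lam, size=(n_items, k))
completion_times = step_durations.sum(axis=1)          # total time per work item

# Compare the empirical distribution against the analytic Gamma(k, 1/lam).
distance, _ = stats.kstest(completion_times, stats.gamma(a=k, scale=1 / lam).cdf)
print(f"KS distance vs Gamma(shape={k}, scale={1 / lam}): {distance:.4f}")
```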
Even though the time scale (shape parameter) and value scale (scale parameter) of each type of work varies, the sum of the component gamma distributions still closely resembles a single gamma distribution representing the average parameters.
Therefore, the cumulative value, though appearing similar to an S-curve, is better modeled as the cumulative distribution function (CDF) of the gamma distribution:
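For reference, a minimal statement of the gamma CDF and PDF, assuming the standard shape-and-scale parameterization:

```latex
% Gamma CDF and PDF in the shape--scale parameterization (k = shape, \theta = scale):
F(t; k, \theta) = \frac{\gamma\!\left(k, \tfrac{t}{\theta}\right)}{\Gamma(k)},
\qquad
f(t; k, \theta) = \frac{t^{\,k-1} e^{-t/\theta}}{\Gamma(k)\,\theta^{k}}, \qquad t > 0.
```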
The Innovator's Dilemma describes the process with an S-curve. The CDF of the gamma distribution looks much like an S-curve, but does not arbitrarily stop; it continues to accumulate less and less over its unending tail. The gamma CDF better explains reality, which is evidence of a better mechanical understanding of work as aggregations of Poisson point processes. Understanding this time-causal mechanism enables an understanding of how different sets of work are disrupted across time.
Expecting a linear production of value from a constant application of work also explains why The Dip is perceived at the beginning of the application of work. Comparing (subtracting) a linear slope of expectation against the gamma CDF produces an initial dip that is followed by a bump that tapers over time. Theoretically, shortening the iteration time of work components predicts a mitigation of The Dip.
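A minimal sketch of that comparison, with illustrative (not fitted) parameters, and a linear expectation assumed to reach 100% at the gamma's 95th percentile:

```python
# Sketch of "The Dip": subtract a linear expectation of cumulative value from the
# gamma CDF of realized value. Parameters are illustrative, not fitted; the linear
# expectation is assumed to reach 100% at the gamma's 95th percentile.
import numpy as np
from scipy import stats

k, theta = 4.0, 1.0                                  # assumed shape and scale of the work
work = stats.gamma(a=k, scale=theta)
t = np.linspace(0.0, 12.0, 1201)

realized = work.cdf(t)                               # cumulative value actually delivered
expected = np.clip(t / work.ppf(0.95), 0.0, 1.0)     # straight-line expectation of delivery
gap = realized - expected

print("deepest dip  at t =", round(t[np.argmin(gap)], 2), "gap =", round(gap.min(), 3))
print("highest bump at t =", round(t[np.argmax(gap)], 2), "gap =", round(gap.max(), 3))
```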
The Innovator's Dilemma is that the established company has difficulty executing on a new set of work, since its organization has been optimized for the old set of work. Mathematically, due to that mismatched organizational (structural) bias, the old company's cumulative gamma distribution function for new work has a shape value (scale over time) that is far larger than the new company's shape value.
That bias gives the new company a time advantage, where smaller companies can grow within The Dip of larger companies. This also explains why the Big Tech companies of today are structured in an extremely federated manner, where organizational silos are actually preferred. That is, overall efficiency in the macrocosm is sacrificed to protect innovation in the microcosm.
The explanation provided by The Innovator's Dilemma for why value growth diminishes (in the second half of the S-curve, the tail of the gamma distribution) is that the technology surpasses market demand. Once the technology addresses the entire market, the total addressable market can become saturated, an event known as market saturation, the limit approached by the tail of the gamma CDF. That addressable space of the market is what defines the limit on the total amount of valuable work uncovered by the innovative technological perspective.
A particular view of a market, or interface with a market, has a particular set of work that needs to be applied to it. Disruption occurs when the game is changed, by which it is meant that the market interface has changed. That is, to reach a new level of efficiency, the market needs to be interfaced differently. The new market interface requires a new set of work to be applied to the market. This new set of work is representable by a different distribution of gamma events, expressing a whole new value distribution.
That is, each distinct perspective on a set of work is representable by a different gamma distribution. A new efficiency-enabling perspective is what is required to identify a new set of work, with its accompanying applied value. The aesthetic formulation of value is that a change in perspective changes value. For example, the demand pricing inherent in capitalist theory is such an aesthetic formulation of value.
A newly discovered perspective also discovers new value. Innovation is the application of the newly discovered perspective to produce new value. The application follows the discovery, which leads to the gamma shape of value.
A newly discovered perspective can also devalue existing applications of work. A downgraded stock price is a frequent occurrence, often a result of a noted achievement of an innovative competitor. This is also a reflection of demand pricing, and of aesthetic valuation.
This explains why strategic growth involves focusing on smaller subsets of work in the new market interface, and increasing the scope of work over time: because each smaller component of work will also be gamma distributed, which will also have an exponential head. This means that the work can be serialized, and completed in an order suitable to market demands. Serializing the work, focusing on smaller pieces of work, provides an additional time-efficiency advantage by increasing vertical scale (per-operation efficiency) that lowers the shape value of the work's gamma distribution.
This is why the role of each startup as a new type of platform with new value is so critical to producing exponential value reflected in the head of the cumulative gamma distribution function. Note that the exponential distribution is a special case of the gamma distribution that expresses only the exponentially decreasing tail, but the first quartile, the first half of the head, grows exponentially. The task of disruption is to define a new interface to the market that produces a different set of values. Each new set of values yields a new exponential growth opportunity. What is needed is both the new platform, and its market interface, produced in manageable iterations.
Many philosophers have presaged The Innovator's Dilemma. About a century earlier, Nietzsche described The Revaluation of All Values, which exists in eternal recurrence (is recursive). About a century earlier than that, there is the Hegelian Dialectic, commonly known as Thesis, Antithesis, Synthesis. However, Hegel used the terms Abstract, Negative, Concrete, and his Negative step is described as an overcoming, the same word Nietzsche often used.
Nietzsche described the old values as coming from old idols with anti-values, where everything coming from them is no longer correct, even though the old values were previously correct for society. He wrote how the anti-values get captured in history itself, and how mere perspective changes the meaning of the old history. That is, it appears as if history is rewritten, when it is merely seen in a new light (a new dawn).
Now, the terms innovation, disruption, and revolution are used, and old technology (from the old idols) is seen as backward rather than forward. The form of each is the same: the goal remains the same, but many old values are inverted to effect an overcoming.
There is even a suggestion of technology as the driving force of disruption. Nietzsche posits that a scientific will-to-truth (a Gay Science) is required for overcoming the natural will-to-power. His description of the overman (Übermensch) in Thus Spoke Zarathustra (a fictional elaboration of The Gay Science) reflects afflictions commonly associated with engineers, symptoms along the autism spectrum, and his description of the last men reflects those happily educated in anti-values to be overcome by the overman. This is why a tone poem representation of Nietzsche's Thus Spoke Zarathustra, by Richard Strauss, is the opening music to Stanley Kubrick's 2001: A Space Odyssey. Recall the abstract scenes with inverted hues (inverted values), and the final scene near Jupiter representing a rebirth aligning with a slow, boring death (a dawn after a twilight).
The will-to-truth is the technological force that changes the medium itself (creates new markets), while the will-to-power is the economic force that moves value along each medium (rebalances market efficiency, regulating supply and demand). This aligns with The Free Energy Principle, a thermodynamic entropy representation, where will-to-truth is free energy, and will-to-power is bound energy, operating under the principles of both entropy and thermodynamics. See Entropy and free-energy based interpretation of the laws of supply and demand.
What this means is that technological and economical forces work together in a self-regulatory system. Since the development of new technology increases supply, a demand vacuum is created. The task is to connect the supply with demand: to connect each new platform (free energy) with its new market interface (bound energy). This result from sociological analysis is the same result as from statistical analysis. The micro-statistical view of samples (Poisson Point Process) ultimately connects with the macro-statistical view of systems (thermodynamics with gamma-shaped energy fluxes) by way of the value-dynamics of Existentialist moral relativism.
Innovators are necessarily relativist in the axis of absolute value, as innovators represent the derivative, generative functions of value. Any sustained force having an exponential effect is by definition a derivative, generative function of integrated, absolute value.
In the domain of programming, a common practice is test-driven development. That is to define an interface, to develop a test for that interface, and then to develop an implementation to pass the test. Test-driven development allows systems to be defined by interface first, which compartmentalizes both design and implementation, easing the development of large-scale systems.
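A minimal sketch of that flow, with hypothetical names: interface first, then a test against the interface, then an implementation that passes the test.

```python
# Sketch of the TDD flow: interface first, then a test written against the
# interface, then an implementation that passes the test. Names are hypothetical.
from typing import Protocol

class Greeter(Protocol):                   # 1. the interface
    def greet(self, name: str) -> str: ...

def test_greeter(impl: Greeter) -> None:   # 2. the test, written only to the interface
    assert impl.greet("Ada") == "Hello, Ada"

class EnglishGreeter:                      # 3. an implementation developed to pass the test
    def greet(self, name: str) -> str:
        return f"Hello, {name}"

test_greeter(EnglishGreeter())
print("interface test passed")
```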
By interface, it is meant that the interface can be completely abstract, not necessarily constructed as an interface in the platform, but just a logic of interaction that might be de facto represented by the test. Therefore, the same can be done for an interface to a market.
This is like a user story, but is actually a marketer story, where the pitch is the de facto representation of the interface to the market. Wording the story properly is important. Wording it for the marketer is what is actually needed. A marketer story can use rationale that has direct value to the application of technology, even if it is not directly valuable to the user. This leads to less awkward rationale, and very clear business goals. It can be an input to product management, making it easier for product management to develop and prioritize user stories.
The marketer story specifically represents the application of technology to the market. That application is a flow of free energy (the technology) into bound energy (the market), a flow of supply into demand. In the case of disruption, the supply, the technology, provides a new perspective of the demand, creating new demand through the manifestation of a new type of demand: a new market. A user story cannot represent this without assuming the marketer’s perspective. Therefore, the marketer story is a dependency of the application of disruptive technology.
User stories are about consumer application use. Marketer stories are about consumer market onboarding. The sales pitch joins them. Disruptive technology is typically ahead of a user's own understanding of possible usages. The sales pitch speaks to the new lifestyle of the game-changing market.
For example, internet-connected and GPS-connected map technology solved a well-understood user need, but moved the usage from an offline lifestyle to an online lifestyle. So-called conquest marketing (for the conquest of a new market) is typically pitched with a message of a new lifestyle. Disruptive technology always implicates lifestyle.
Piano tuning apps first existed on hardware platforms (rack mounts, and table tops), then on laptop software platforms, and now on phone software platforms. The price has remained constant ($1,000), and the basic method of piano tuning has remained constant.
On each platform, the apps did the same thing, but each time the platform shifted, the apps needed to shift platforms to where the market moved. Each time, the high-level user stories remained the same, but the marketing stories changed, which affected the low-level platform user stories. Each change in market required a new set of work, with a new cumulative gamma distribution function. Each time a platform was changed, the efficiency of the app improved, and the market share shifted to the product that first shifted to the new platform market.
Even in this simple story of market adaptation, the marketer story is the key to market share, and market onboarding (to the laptop, then to the phone) is the key problem.
The topic of this essay is the difference between the object orientation of logical objects, and the object orientation of ontological objects. The object orientation of logical objects is now able to be formalized against lambda calculus (Elegant Objects phi calculus). That reveals a formal boundary around the opposite side of a dichotomy: ontological objects.
It is explained how the ontological philosophies of Ludwig Wittgenstein and Martin Heidegger can explain the existential distinction between the two types of objects.
It is explained how the scientific philosophy of Karl Popper can explain the empirical distinction between the two objects.
It is explained how the philological philosophy of Friedrich Nietzsche can explain the aesthetic distinction between the two types of objects.
Philosophy | Philosopher | Role |
---|---|---|
Ontological | Ludwig Wittgenstein and Martin Heidegger | Existential |
Scientific | Karl Popper | Empirical |
Philological | Friedrich Nietzsche | Aesthetic |
An ontological formalization of ontological objects is proposed via a dichotomy with logical objects, based on the fundamental theorem of calculus and the discrete integrations of finite-difference calculus.
A scientific formalization of ontological objects in software verification is proposed via the principles of Falsifiability and Reproducibility.
A philological formalization of ontological objects in platform development is proposed via the Hellenic Dionysian versus Apollonian perspective as used in Greek tragedy and operatic composition.
Formalization | Principle |
---|---|
Ontological | Finite-Difference Calculus |
Scientific | Falsifiability and Reproducibility |
Philological | Greek Tragedy and Operatic Composition |
It is explained how logical objects are for optimizing vertical scale through immutability, and how ontological objects are for optimizing horizontal scale through mutability.
It is explained what role ontological processing plays in application server platforms, and why application developers often falsely believe that logical processing is an ideal.
An ontologically-aware design for an application server platform is demonstrated in the mnvkd platform.
Object Type | Scale Type | Mutability | Stack Layer Type |
---|---|---|---|
Logical | Vertical | Immutable | Application Code |
Ontological | Horizontal | Mutable | Platform Code |
Consider that logic is a branch of philosophy. At the time when Kurt Gödel introduced Incompleteness Theory, the philosophical considerations were widely discussed because the issue formed from the core philosophical problem of Principia Mathematica. The full title of his paper is On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which speaks directly to the philosophical problem.
How did Kurt Gödel know to address this problem? The philosophical problem was first called out in Ludwig Wittgenstein's Tractatus Logico-Philosophicus:
4.1212 What can be shown cannot be said.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 4.1212
That is, where symbols (what can be said) can't reach, evaluate values (what can be shown). Conversely, where values can't demonstrate, evaluate symbols. Wittgenstein was defining the philosophical scope of logic. What he means is formalized:
6.1 The propositions of logic are tautologies.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 6.1
6.23 If two expressions are connected by the sign of equality, this means that they can be substituted for one another. But whether this is the case must show itself in the two expressions themselves. It characterizes the logical form of two expressions, that they can be substituted for one another.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 6.23
This is actually the philosophical spring of the modern formulation of the Scientific Method: what cannot be deduced (propositions that cannot be decided) must be repeatably demonstrated. That is, science is about Gödel's undecidable propositions: what cannot be solved with symbols, symbols being Wittgenstein's tautologies, what can be said. Such propositions must instead be handled with Wittgenstein's pictures, what can be shown, that is, what can be solved with evaluation.
This is why after Incompleteness Theory came Karl Popper's principles of Falsifiability and Reproducibility. Falsifiability relies on the fact that reproduced verification, or value-based evaluation, is often easier than symbolic solutions. Reproducibility addresses the fact that verifications must be demonstrated, not merely stated, so the statements must be falsifiable rather than logical.
The P versus NP problem is often informally described similarly: whether every problem whose solution can be quickly verified can also be quickly solved (Wikipedia). Technically, it classifies sets of computing problems to represent these two classes of problems, so it is a mathematical subset of this philosophical principle. However, it is still described informally in conceptually the same way as Wittgenstein: symbolic statements versus verifying demonstrations. Therefore, an ontological formality is more complete than a mathematical formality.
Computationally, an ontological distinction exists between code and data; that is, instruction memory (the functions, the symbols) and data memory (the objects, the values, the pictures). This distinction is obscured by the Von Neumann Architecture, where code and data are both stored in the same memory. Operating systems now attempt to isolate instructions from program data memory with, for example, write-xor-execute (W^X) protections.
Compilers attempt to logically validate code during compile-time rather than run-time. They do this validation symbolically, not by distinctly evaluating each possible input and output of each function, which would be an empirical, run-time validation.
Run-time testing is often a functional test of a single input and output for the purpose of code coverage, the proportion of lines of code the test executes. The coverage acts as the falsifiable theory of the test: the coverage of all conditionals represents the control of all variables — the falsifiable demonstration.
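A minimal sketch of that idea, with illustrative code: one concrete input/output pair per branch, so that full branch coverage is the falsifiable demonstration.

```python
# Sketch: one concrete input/output pair per branch, so that covering every
# conditional is the falsifiable demonstration described above. Illustrative only.
def clamp(x: float, low: float, high: float) -> float:
    if x < low:            # branch 1
        return low
    if x > high:           # branch 2
        return high
    return x               # branch 3

assert clamp(-1.0, 0.0, 1.0) == 0.0    # exercises branch 1
assert clamp(2.0, 0.0, 1.0) == 1.0     # exercises branch 2
assert clamp(0.5, 0.0, 1.0) == 0.5     # exercises branch 3
print("all branches covered")
```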
Consider that, in computing, reproducibility is needed for complex system evaluation precisely because evaluation problems are non-symbolic in nature. Reproducibility is the flip side to symbolically analyzing decidability. The Scientific Method is a solution because it is an Empirical System, the insight of Karl Popper. Only an empirical system can solve a paradox of decidability, which is a paradox of a rational system. And now we are back to Empiricism versus Rationalism, and therefore back to the Ancient Greek dichotomy of Being versus Becoming.
Distributed computing platforms are about dealing with The Whole from the perspective of computational limits, The Limited Whole. Again, this goes back to a philosophical problem identified in Ludwig Wittgenstein's Tractatus Logico-Philosophicus: "To view the world sub specie aeterni [from the point of view of eternity] is to view it as a whole—a limited whole. Feeling the world as a limited whole—it is this that is mystical." Symbols cannot span beyond subjective perspective. It is actually values, what can be shown, that do so.
Consider the difference between Function Signature Interfaces and Input/Output Interfaces. In functional interfaces, the values are immutable. In I/O interfaces, the values are mutable channels. I/O interfaces present a barrier to Lambda Calculus. Why? Because "what can be shown cannot be said": they are undecidable because they are mutable. The word shown is used because the problem is empirical.
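A minimal sketch of that contrast, using only the standard library: a function-signature interface over immutable values, versus an I/O interface whose result depends on the mutable state of a channel.

```python
# Sketch of the contrast: a function-signature interface over immutable values
# versus an I/O interface over a mutable channel. Standard library only.
import io

def total(prices: tuple[float, ...]) -> float:
    # Function-signature interface: the same input values always give the same output.
    return sum(prices)

def read_total(channel: io.TextIOBase) -> float:
    # I/O interface: the result depends on the current, mutable state of the channel.
    return sum(float(line) for line in channel)

print(total((1.0, 2.0, 3.0)))             # always 6.0
chan = io.StringIO("1.0\n2.0\n3.0\n")
print(read_total(chan))                   # 6.0 now...
print(read_total(chan))                   # ...but 0 on a second read: the channel state moved
```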
Consider that testing across systems, between I/O endpoints, needs integration testing. This is what puts distributed platforms in an entirely different class of computational problems: distributed problems are literally NP problems. In fact, the fundamental distributed issues of partitioning, availability, consistency, and latency are expressed in the non-deterministic spatiotemporal errors of I/O.
Principia Mathematica to Kurt Gödel's Incompleteness Theory is a route through discrete integrated infinities. It is a completely symbolic route, with layers of symbols, looking at infinite spaces. This symbolic path is taken by Alan Turing and Alonzo Church.
But what if a route is taken through the calculus of finite differences, operating on discrete integrations of expressed value with complex stateful methods?
A quantum world appears, not of quantum physics, but of quantum computation:
This is, instead of a symbolic analysis of computing, a value analysis of computing. Technique becomes isolation and messaging, of memory spaces and procedural protocols. This is where art actually enters computing, where compositional issues come to the fore. Computer programmers are typically far away from this, but should not be, since it is an essential part of the practice: compositional issues are formalized by the philosophy of aesthetics.
Paradigms often form dichotomies of apparent polar opposites, but apparent polar opposites are often two valid perspectives of similar subject matter. The limits of each perspective only apply when the perspectives are treated as mutually exclusive. Programming paradigms are often this way.
Paradigms often form into a related group standing in dichotomous polar opposition to another related group. The attempt to mutually include polar opposing paradigms is often pressured by the interdependencies within each polar group. That is, when connecting to a polar opposing paradigm, the interfaces are often more suited to the original pole, rather than to the paradigm of the opposing pole. This presents a paradox of dichotomous pressure.
A solution to this paradox is to build an aesthetic platform of paradigmatic layers. Platforms are aesthetic constructs, not logical constructs. Consider the aesthetic platform of Ancient Greek Tragedy. Friedrich Nietzsche's first book is The Birth of Tragedy Out of The Spirit of Music, one of the most famous works of philology, a properly aesthetic analysis of Ancient Greek literature. It explains that, during the Hellenic period, two opposing perspectives are presented, a sort of philosophical argument played out in a story: The Dionysian and The Apollonian.
The expression of Dionysus, the god of worldly eternal chaos.
The expression of Apollo, the god of individual temporal order.
Subject | Dionysian | Apollonian |
---|---|---|
Play Scope | Theme | Scene |
Play Subject | The Setting | The Characters |
Play Voice | Vocal Chorus | Vocal Dialog |
Musical Genre | Dance | Song |
Musical Voice | Musical Harmony | Musical Melody |
Physical Time | The Eternal | The Temporal |
Physical Space | Being | Becoming |
Politics | The City | The Individual |
Computing | Value | Symbol |
All art has the same problem as distributed computing.
The issue is locality. The canvas is small. The stage is small. A musical track is short. That is why scene is such a focus in art. Likewise, computer memory is limited. Disk size is limited. Processor resources are limited.
In fact, a Turing Machine abstracts away these limitations axiomatically. That is like taking the Mona Lisa out of her scene. The Mona Lisa is more famous in the art community for its scene work. Most of a computer's time is spent moving bits around. A purely functional view completely ignores this problem axiomatically. However, this is where Object Orientation shines.
Functional and Procedural programming paradigms have different corresponding object types. These object types and their associated paradigms correspond via their Immutability and Mutability to Greek Eternity and Temporality, and therefore Theme and Scene, forming a complete aesthetic compositional platform.
Object Type | Method Type | Data Type | Scale Type | Mutability |
---|---|---|---|---|
Logical | Functional | Symbol | Vertical | Immutable |
Ontological | Procedural | Value | Horizontal | Mutable |
The Elegant Objects principles relate object orientation 1:1 to lambda calculus, calling it phi calculus. All well and good, except that the community around Elegant Objects advocates limiting object orientation to only this functional expression. The argument is that functional object orientation is superior to non-functional object orientation as a matter of principle, because of what functional provides. I show that it is a valid perspective, especially on the side of application logic, although incomplete by itself when considering the whole application stack.
One of the principles of Elegant Objects is that objects are not data containers. The opposite of Logical Objects are these data containers: Ontological Objects. There is an aesthetic difference between these two types of objects as forms of expression, where ontological objects represent values, and logical objects represent symbols. An application is typically developed symbolically, on top of a platform that distributes values.
What is not understood by the functional-only community is what object orientation is actually for — the opposite of what functional is for (the distinction):
The strict opposite of functional is procedural. But the optimization is, as lambda calculus is to functional, object orientation is to procedural (the relationship):
Lambda calculus is therefore not a complete calculus without object orientation.
This is why, even with something functional, like Erlang, the goal of a platform is to provide a functional interface, but through the platform itself being implemented in object orientation. That is, Erlang is a horizontal resource-management platform, so that application logic can cleanly sit on top. It is a server, but the server internals are invisible to the application.
To put it simply, the object orientation of mutable platform resources enables the function orientation of immutable application logic. It is not just the compiler working to translate immutable to mutable. It is the platform working to translate mutable to immutable. The interface of the platform is where mutable and immutable meet. It is where horizontal scale becomes the foundation for vertical scale.
It is easier and easier to miss this because platform authors have put more and more work into abstracting away resource management. Application programming is now so high-level that functional purists get instantly frustrated with any resource management issue, pushing it to DevOps, and relying more and more on senior programmers, who worked on these platforms, being placed in DevOps positions out of need.
This is a natural part of the imperative-declarative cycles as platform ecosystems mature. As an ecosystem becomes commoditized, its imperative aspects become more declarative, until a new imperative ecosystem appears on top of it, and the cycle continues.
But as each layer builds up, overheads increase, and the platform work becomes to flatten the layers. It acts as Hegelian Thesis-Antithesis-Synthesis.
For example, instead of using AWS Lambda functions (free-form code), functionality is now being provided by platform-specific functions (modules), creating a flatter vertical with lower overhead. A platform-specific messaging layer is often used to optimize within a larger portable messaging layer, like using a gRPC gateway instead of an HTTP gateway (event handlers). The flow of data moves higher up the stack (when growing up), and becomes more local.
Heidegger makes a clear distinction between two types of Being. Through this, a distinction is made between two types of objects. He explains that the German language has terms that other languages do not, which affect discourse.
For example, immutable config objects defy my intuition. When people speak, adding constraining descriptions to things, they don't intend that previous antecedent nouns are separate forms. For example, the statement "Charlie wore a red shirt and blue jeans." is not "There is a human wearing a red shirt. That human is now also wearing blue jeans. Forget the human who once wore a red shirt only. The human that remains is Charlie."
By creating an object for everything, a lot of extra platform work is required. It makes no sense, because you are building up a single coherent state. The state object should be labelled with an identifier to convey identity. The functional chaining where the identity state is merely an arbitrary intermediary expression is not clear.
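A minimal sketch of the two styles in the Charlie example, with hypothetical names: the immutable style creates a new object per added description, while the mutable style keeps one identified object that accumulates a single coherent state.

```python
# Sketch of the two styles in the Charlie example (hypothetical names): the
# immutable style creates a new object per added description, while the mutable
# style keeps one identified object and builds a single coherent state.
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class PersonSnapshot:                     # immutable: every description is a new object
    name: str
    clothes: tuple[str, ...] = ()

p1 = PersonSnapshot("Charlie")
p2 = replace(p1, clothes=p1.clothes + ("red shirt",))
p3 = replace(p2, clothes=p2.clothes + ("blue jeans",))   # p1 and p2 linger as intermediaries

@dataclass
class Person:                             # mutable: one identity, one accumulating state
    name: str
    clothes: list[str] = field(default_factory=list)

charlie = Person("Charlie")
charlie.clothes.append("red shirt")
charlie.clothes.append("blue jeans")      # still the same Charlie
print(p3, charlie)
```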
Charlie is the ontical object (the being), and his description is the ontological description (the Dasein Being, with a capital B). Think of the ontological Dasein Being as a type value, and the ontical object as a type instance value. Think of ontological Dasein Being as a noun, and ontical being as a verb that places the noun into existence. That is, the ontological is a priori (before the fact), and the ontical is a posteriori (after the fact).
Don't conflate them with abstract and concrete. An ontological value is still a full expression of a value, just not necessarily instantiated in space. That is, it is fully evaluated within its boundary, just not placed in a space. That is, it has equality, just not identity. It is important to make this distinction for the sake of mutability.
Consider also the difference between a method and a function. A method is the actual mutation, whereas a function is a mapping of values. For example, in 2+2=4, the method is the +2, and the function is add(2, 2) == 4. That is, the ontological is about state changes happening in space: equality. The ontical is about placing in space: identity.
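A minimal sketch of that distinction, with illustrative names: the function maps values (equality), while the method mutates the state of an identified object (identity).

```python
# Sketch of the method/function distinction in the 2 + 2 = 4 example: the function
# maps values (equality), while the method performs the actual mutation (identity).
def add(a: int, b: int) -> int:
    return a + b                          # function: a mapping of values, no state touched

class Counter:
    def __init__(self, value: int) -> None:
        self.value = value
    def add(self, amount: int) -> None:
        self.value += amount              # method: "the +2", a state change in place

assert add(2, 2) == 4                     # equality of values
c = Counter(2)
c.add(2)                                  # identity: the same counter, now holding 4
assert c.value == 4
```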
The point of working with state directly is that a larger equality can be constructed over time. When working with a single ontological object, mutability is desired, so that fewer objects are created, by only creating one ontological value that is given ontical identity.
There is a difference between “objects” and “structs”, where objects are the labels, and structs are the values. Classes are not values, as only complete types are values. Classes belong more to categorization, and are orthogonal to ontology.
The elegant objects argument is that objects are not actual data containers, but rather logical statements of functional processes. But some objects definitely are data containers. In these, the data is itself an expression, a physical interface rather than a logical interface, like a picture rather than a text. They are not logical values, but rather they are ontological values.
Ontological objects trace back to Wittgenstein's Picture Theory, which English speakers tend not to realize was the actual point. Bertrand Russell's English introduction gets hung up on how Wittgenstein resolved his logical paradoxes as an improvement in logic, and gets distracted from the ontological advances. It is easy to miss, because Wittgenstein was effectively creating a new, modern ontology. And later, in unpublished writings, he came up with a more symbolic theory that seemed to counter Picture Theory. I am proposing that they are actually two poles of a dichotomy.
It is a curious fact that the same small book that introduced the complete truth table of logic gates also introduced picture theory, the foundation of object orientation. By saying "What can be shown cannot be said," he was saying that expression objects are orthogonal to the tautological optimization (later formulated as lambda calculus) of logical function.
Pictures are copiable complete types, not logical abstractions. The ideas in the pictures are logical abstractions, but the expressed values are ontological values. Working directly with expression is the technical point. Functional programming, rather, attempts to be independent of expression.
Immutable objects rely on lambda calculus to reformulate the configuration into the imperative, stateful form, but that often doesn't even work. In dynamically typed languages, the compiler often cannot do it, because values are references, often null, and not complete, encapsulated values.
Since that encapsulation is missing, adopting a practice from, say, Scala into TypeScript makes no sense, because it can't actually purify the side effects of these functions. Remember, the type protections of TypeScript normally only affect the transpile step, to prevent type-related coding errors, so they don't actually improve the underlying JS compilation happening at JIT runtime, unless special refactorings are implemented at the transpile step.
So you can only gain the Scala-like improvements if you are using a JVM- or LLVM-based platform that uses TypeScript syntax (some exist; the EO advocates are from the JVM world). The fact that it is metaprogramming is hidden from you, and the main difficulty of metaprogramming is the very fact that it is an orthogonal layer.
Working directly with state does not need compiler optimization. This is why scripting languages are often referred to as configuration languages, and why Python (which is insanely slow) does so well in high-performance computing as a wrapper around extremely accelerated processes: most of the complexity is in the configuration of state. What is important is to keep a crystal-clear separation between what is mutable and what is immutable, which matters for both performance and security. A functional-only stack is a design problem.
Look at generational garbage collection, and the way Lua garbage collects. There is a clear separation between object and function. The first generation is application function. After that is long-lived configuration. The Lua stack is for application function. Lua tables are for configuration, and only table values are garbage collected — they equate to later generations, or bridge to the first generation that goes on the stack.
That first generation, the stack generation, is the ontical generation, the generation of application usage lifecycle. In the first decade of The Web, all service processes were on-demand. From inetd, to CGI, to Apache SAPI modules, a process or interpreter would be created and destroyed per request. Garbage collection was simply tearing down the whole interpreter's memory mappings.
PHP minimally garbage collected for years, not even having a way to deal with reference cycles. Every object was assumed to be first generation. Most values were passed by-value. Tracking an object lifecycle was pointless. Most everything lived on a stack. This is precisely where lambda calculus excels.
But the new object-oriented platforms, which manage distributed resource life-cycles, are completely different. The old, on-demand tools don't work. This is why PHP was replaced with Ruby. Ruby is the complete opposite of PHP:
Python had a similar role to Perl, but in the context of data processing, especially as data moved from text files to databases. Python's major advancement was all of the ways to interface with C to optimize the number crunching, the application logic. C flipped from being the platform language to the application language. And now applications even use hardware-accelerated parallel processing via GPU, where slow Python is still the configuration platform.
With this understanding, the emergence of Golang makes sense. It formalizes resource management all the way to the kernel, trying to remove configuration as an operational problem. The application can be statically built with a dynamic platform runtime. It doesn't run out of memory; rather, allocation blocks until memory is available. Everything is oriented around blocking logic, to the point where even garbage collection blocks. It is not possible to optimize past a block with lambda calculus. The solution is to statically compile everything, taking all the wins you can get.
Looking into more implementation specifics, you can see what is actually happening. Those who advocate for elegant objects reject object-relational mapping. The replacement is that objects are to speak SQL, because they are interface, not expression. What is actually being done is removing object-oriented side effects from the objects, turning them into functional expressions, because the SQL itself is independent of the state.
However, people who know futures and monads know how an actual translation into functional actually looks. The EO advocates are changing the semantics to suit their syntactic arguments. They are literally skipping over the state machines, thereby skipping state altogether. They have to do this in their argument because keeping the state machines reveals how much more awkward sequential flow is than a simple thread. If they switched to stateful thread architecture, the syntax would be non-functional with the same performance, but far better clarity, and no need for closures.
It is the same argument as the 12 factor app, where state is pushed as far back as possible. It is ignored that all of the difficult work is carried out in complex, distributed resources, with extremely high operation overhead. Every single resource dimension is separately scaled to a separate backend plane. This means each joining of state incurs a huge latency and throughput overhead.
12 factor apps are easy to scale, but have horrible vertical efficiency. The costs are especially burdensome when trying to grow past the prototype stage, and greatly eat into margins. This technique suits revenue only, and requires massive refactoring to focus on profitability. VC-funded firms love this, but VCs should hate this because it risks profitability, and likewise the ability to exit at >= 1x when growth does not take off.
The EO advocates are so dogmatic that one of their principles is that their virtual objects must represent and model physical objects independent of application need. I recognized this for what it is: a form of Platonic Essentialism.
To put it simply, Platonic Essentialism is the syllogistic logic made fun of in Monty Python's "How do you know she is a witch?" scene, where coherent objects don't really exist, or exist only in Plato's allegory of the Cave. The limits of speaking are considered informative of the real world. Like, for example, Categorical Imperatives (Kant is the archetypical essentialist, pun intended).
In the history of philosophy (affecting computer science), there was a rift in logical analysis caused by the Cold War, where the West leaned toward British analytic philosophy and broke collaboration with German continental philosophy, which broke apart groups like the Vienna Circle that were trying to resolve differences between the two. The academic void still exists. The easiest way to see it is to look where the influences of Turing and Wittgenstein diverge. Turing provided logical tools. Wittgenstein demonstrated their limits (in a computing sense). Wittgenstein is basically saying "Look at all the ways mutations happen," while Turing is basically saying "Look at how immutable everything really is," and it is yet again a Being vs. Becoming issue that sets up yet another cycle of battling over Platonic Essentialism (of which Elegant Objects is an instance).
All application servers are serverless. That is their job. They provide a gateway interface that abstracts away what they do. For example: inetd, CGI, Apache's SAPI modules, Java's servlets, Python's WSGI, Ruby's Rack.
Serverless just means that server visibility is limited. This explains why serverless became popular in Node communities: node-express. The express framework is designed around using middleware to build a complete server. Server visibility is the point. It is not an actual application server. It is a server toolkit that is used to write applications.
Serverless platforms are a reaction to that. They are anti-server because of the type of server that exists in the ecosystem. What the ecosystem actually needs is a serverless interface. Enter React. In React, everything is immutable to the point where even state-fetching functions are immutable by default. The changing of state is the exception. React went fully stateless.
However, if you look carefully at the server platform node-express in this light, you can see where immutability and mutability are highly mixed. The routing is immutable, but the contexts the routes handle are specifically not. In traditional server APIs, the middleware is handled by libraries that have been dependency-injected. Express uses inversion of control instead, where the server injects that state. However, the typical practice with inversion of control is to be very careful about standardizing interfaces.
For example, even TLS context in CGI is de facto standardized by Apache's environment provided to SAPI modules. You never need middleware or libraries for working with TLS, and you never need to coordinate with arbitrary server dependencies.
So people in the JavaScript community have an idea of a server that applies functional optimization to the configuration layer, the wrong layer. The only worse server I have seen is Ruby's Unicorn, both of these somehow being worse than Apache.
But what makes servers better? The answer is locality of reference. This is the opposite of partitioning. Rather than scattering, this is gathering. This is where the distinction between functional and object orientation, and the difference between what is immutable and what is mutable, becomes key.
Computers are not Turing machines. Turing machines are logical constructs. Computers emulate Turing machines with many layers of local memory-space caching. Functional thinking is for logical thinking on top of this physical emulation. When dealing with the organization of computing resources, that is where immutable functional ends, and where mutable object oriented begins.
This emulation of a Turing machine is actually incomplete, requiring explicit management. The Operating System performs this management just so that processes are able to run at all. Applications also do a lot of this work, specific to how data is organized, and at web scale, this problem spans many physical systems.
For example, it was popular three decades ago to attempt single-system image clustering, where an operating system managed many physical computers as one system image, where the image is a single Turing memory space. This turned out to be horribly inefficient because the data needed was almost never on the same system that wanted to process it.
The 12 factor app is actually another attempt at that. The data is, by design, never on the originating system. The cost of computing is getting cheaper, while the cost of running a web app is skyrocketing.
In the last decade, the software development community has gotten so used to working with cloud resources that months can go by without thinking about horizontal resources, as long as efficiency is ignored. These days, the main way to improve efficiency is not to focus on vertical scale, but to focus on horizontal distribution, and on messaging between systems.
Cloud vendors, due to their cost-plus business model, have an incentive not to (a disincentive to) increase total efficiency, but rather an incentive to decrease vertical scale while increasing horizontal scale, to actually scale those cost inefficiencies.
Normally, the application of a new technology at-scale increases total efficiency. What is happening in this case? The propagation of the ignorance of software platform technique. The idea that functional purity is an ideal in-itself is the device to enable that purpose.
It is deceptive because functional optimization used to be a method for increasing vertical scale, because it results in smaller mutation windows: the mutation of data correlates linearly to computational costs.
TCO case studies from the early days of The Cloud conflated the containerization aspects with the cloud aspects. The benefit of cloud-enabling a system is really the containerization of a system. Containerization enables reproducible building and deployment, controlling environments in ways that symbolic analysis cannot. It commoditizes the physical compute resources.
So why would a cloud vendor want to commoditize their physical resources? They don't. What they are actually doing is productizing stateful services. If you need to rely on a productized stateful service, you are limited to using compute resources next to that service, because distance from that state correlates to latency.
Nowadays, the main argument is not cost, but what defines core business. This leaves space for Big Tech to dominate on cost for their own services, while renting out their unused resources for even more profit.
It may be a surprise, but cloud vendors don't regularly use their own clouds. They use servers with serverless interfaces, not serverless resources. What is the difference? Serverless resources are billing-accounted functions. That is, they need a way to charge for logical operations, not computing operations, because the computing resources are a commodity.
The mnvkd platform is an example of a purely ontological object design for an application server, providing extreme locality of reference in both data and code structure. Every object is completely encapsulated. A child object never touches a parent object. The communication back to the parent object is always local state that the parent object inspects from the child object. The memory of peer objects and parent objects isn't even available to a child object.
Just like an operating system kernel, a virtual kernel deals with virtual process scheduling and blocking, but virtual thread state is entirely local to the virtual process, including the scheduling of virtual threads within the virtual process. This means that:
This is complete isolation enabling a completely modular inversion of control. Any parent structure can be swapped out. For example, the virtual processes could be compiled into WASM, and executed anywhere, but with common thread-like semantics, driven by the same code.
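A hypothetical sketch of that isolation pattern, in Python rather than mnvkd's actual C API: a child only mutates its own local state, and the parent inspects that state when it schedules the child.

```python
# Hypothetical sketch of the isolation pattern (not mnvkd's actual API): a child
# never touches its parent; it only mutates its own local state, which the parent
# inspects when it schedules the child.
class ChildProcess:
    def __init__(self) -> None:
        self.outbox: list[str] = []       # local state only; no reference to any parent

    def step(self) -> None:
        self.outbox.append("progress")    # communicate by updating local state

class ParentKernel:
    def __init__(self, children: list[ChildProcess]) -> None:
        self.children = children

    def schedule(self) -> None:
        for child in self.children:
            child.step()
            while child.outbox:           # the parent pulls state out of the child
                print("observed:", child.outbox.pop(0))

ParentKernel([ChildProcess(), ChildProcess()]).schedule()
```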
Locality of reference enforces state parsimony, guaranteeing optimal execution. Instead of using functional expression to prove optimal execution symbolically, the locality enforces optimal engineering practices.
In practice, debugging is extremely straightforward. There is no concern over misbehavior since the behavior can only happen in one place. The combinatorial complexity is theoretically large, but in practice, limiting the space actually greatly helps debug the symbolic methods within an object.
In practice, runtime performance is astonishing. The application functions run as cache-hot as possible, being automatically cache-aware. I/O is automatically aggregated and lazily evaluated. Tiny embedded processors perform nearly as well as huge server processors. The efficiency of hardware feels 10 years ahead. This is simply because the platform is actually doing its job of resource management, and is designed aesthetically rather than logically.
Database consistency is characterized by these two principles:
The simplest way to summarize them is that CAP theory explains the issues in relation to space (that is, partitioning), whereas PACELC theory explains the issues in relation to time (that is, latency). Consistency and availability issues apply to partitions of particular space and time.
Generally, consistency is solved for particular usage needs. It will be shown how time is used to solve problems of space, and how space is used to solve problems of time. Solving for a variable means removing it from the answer side of the equation.
Relational databases were designed in the mainframe era, the era of centralization, before parallelization, and before distribution was a consideration. Networks were so slow that sending work to another system would have been ridiculous. Even building on the host system would have been faster than downloading the build artifacts. The data was only on another system due to storage limitations. Can't be having replicas everywhere.
Such a centralized database is designed for managing contending access to local resources that it centrally manages. The problem is simply: many users to one storage system. Therefore, clustering a relational database requires global consistency, since the partitioning does not extend to each user.
Relational data was born around the same time Unix was born. The Postgres lineage comes from Ingres, before SQL existed. It was another decade until clustering a database was even on the radar, as local networks got faster, and another decade after that before it started becoming a normal enterprise concern, as the Internet got faster.
So naturally, migrating a design from centralized to distributed requires a form of consistency that maintains the single system image nature. That is, all local physical resources act as a single global logical resource. This is called Strong Consistency.
When designing a decentralized system, Strong Consistency doesn't even need to be a requirement. Instead, the goal is for operations on logical objects or localities to be consistent. This is called Causal Consistency.
The main difference between the two is that Strong Consistency coheres across a universal order: coherence by time itself. If time is coherent, then all else will be coherent. Causal Consistency, by contrast, coheres order within a local space, like a partition or an object.
Therefore, they can be characterized as:
That is, in a relational database, even if operations are ordered, different partitions will have access to the coherent ordering at different times. A relational database needs Strong Consistency that synchronizes across time.
To have all nodes in space cohere in time requires latency. It is actually the physical space-time relationship (the speed of light) that limits this.
Spatially-distributed operations need to be ordered at times in the future so that each partition can synchronize the ordering of commands locally at the same time.
For example, if you wanted a node on Earth and a node on Mars to act at the same time (Strong Consistency), the latency between them is up to 22 minutes, so the command from Earth may need to be issued at least 44 minutes in advance, because it may take another 22 minutes to receive the acknowledgement. The read and the write can happen on either planet. The same thing happens in protocols initiated from the read instead of the write, waiting for the other planet to provide state from the past.
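The arithmetic of that example as a minimal sketch, assuming a worst-case one-way light delay of about 22 minutes:

```python
# Back-of-the-envelope sketch of the Earth/Mars example, assuming a worst-case
# one-way light delay of roughly 22 minutes.
ONE_WAY_MINUTES = 22

# A strongly consistent command must cover the command plus its acknowledgement.
command_lead_time = 2 * ONE_WAY_MINUTES
print(f"issue the command at least {command_lead_time} minutes in advance")   # 44
```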
But if all that is needed is that the operations maintain the same order on each planet (Causal Consistency), then the read and the write can be isolated to the same planet. The data written by Mars can be reorganized for a later read by Earth. But that planet isolation sacrifices the cluster-wide time-consistency across space-time to the other planet.
To gain that cluster-wide time-consistency, the overhead is expected to be an order of magnitude greater, like 10x more latency, often much more. But that isn't even the worst part. The more efficient your application, the greater the impact the platform's latency has on the proportion of total overhead.
While ordering by time ahead-of-time solves spatial problems, the converse is also true. Ordering by space ahead-of-time solves temporal problems.
There exists a dichotomy between an ordered index and a hashed index. If data is ordered by an ordinal value, nearby keys can be co-located. However, if data is keyed by a hashed identifier, any ordering is purposefully obscured by the hash function: the goal to avoid collisions also avoids locality. The goal is to evenly distribute partitions.
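A minimal sketch of the two index styles, with illustrative keys and partition counts: range partitioning keeps neighboring keys together, while hash partitioning deliberately scatters them.

```python
# Sketch: range partitioning by an ordered key keeps neighboring keys together,
# while hash partitioning deliberately scatters them. Keys and partition counts
# are illustrative.
import hashlib

def range_partition(key: str, boundaries: list[str]) -> int:
    return sum(key > b for b in boundaries)          # ordered: neighbors co-locate

def hash_partition(key: str, n_parts: int) -> int:
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % n_parts                 # hashed: neighbors scatter

keys = ["user:1000", "user:1001", "user:1002"]
print([range_partition(k, ["user:2000", "user:4000", "user:8000"]) for k in keys])  # [0, 0, 0]
print([hash_partition(k, 4) for k in keys])          # likely spread across partitions
```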
Think of it this way: an ordering is a kind of bias. To deal with a bias, there are two approaches, and they relate to map and reduce: either scatter the bias away evenly (like noise shaping), for horizontal scale, or exploit the bias as a local ordering, for vertical scale.
The functional expression of a distributed query is a pair of map and reduce functions. They represent the scatter and gather in a distributed process. Partitioning is the scattering map, and aggregating the partitions is a gathering reduce. When multiple distributed relations are accessed in sequence, the functional operators form a cyclical sequence of mapping and reducing. Each map-reduce cycle increases latency. That is, the operations are distributed in time to synchronize in space. That distribution in time is latency.
So what is the solution? To distribute in space to synchronize in time.
To do that, flatten the joining reduce-map connections in these map-reduce chains. The ideal is to have a single map, then a long line of processing, then a single reduce. The goal is to express global big data as local small data.

This is a principle called locality of reference: that is, staying locked to the same partition as long as possible, to make data small for as long as possible. For example, instead of joining by map-reduce-merge-map-reduce, the join can be map-merge-reduce, deleting the middle reduce and map.
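A minimal sketch of that flattening, with illustrative data: both relations are mapped (partitioned) by the same key once, merged locally within each partition, and reduced only at the end, instead of re-scattering between stages.

```python
# Sketch of the flattening: map (partition) both relations by the same key once,
# merge locally inside each partition, and reduce only at the end, instead of
# re-scattering between stages. Data and keying are illustrative.
from collections import defaultdict

orders = [("us", 10.0), ("eu", 5.0), ("us", 7.0)]    # (region, amount)
rates = {"us": 1.0, "eu": 1.1}                       # per-region conversion rates

# map: one scatter, partitioned by region.
partitions: dict[str, list[float]] = defaultdict(list)
for region, amount in orders:
    partitions[region].append(amount)

# merge: join locally within each partition, with no second scatter.
merged = {region: [amount * rates[region] for amount in amounts]
          for region, amounts in partitions.items()}

# reduce: a single gathering step at the end.
print(sum(sum(values) for values in merged.values()))
```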
Consider the benefit of staying post-map for a longer chain of processing, avoiding reduce-map cycles. The horizontal scaling benefit of mapping/partitioning can be held with no additional overhead, while the local ordering can be used across the entire chain to increase vertical scale.
A local ordering tends to make it possible to change an algorithm from O(n) to O(log n); or, if each event in the n is seen as O(1), the aggregation amortizes to O((log n) / n) per event, even in a real-time stream. This opens up the possibility of operating on arrays, or with stateful filters, or any other kind of aggregation.
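A minimal sketch of that effect, assuming a locally ordered partition: a lookup drops from a linear scan to a binary search.

```python
# Sketch: with a locally ordered partition, a lookup drops from a linear scan to a
# binary search, roughly O(log n) per event instead of O(n). Data is illustrative.
import bisect

partition = sorted([13, 2, 8, 21, 5, 34, 1])         # local ordering within the partition

def in_partition(value: int) -> bool:
    i = bisect.bisect_left(partition, value)         # O(log n) search
    return i < len(partition) and partition[i] == value

print(in_partition(8), in_partition(9))              # True False
```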
In fact, this is why Digital Signal Processing (DSP) tends to use data-flow programming, like GNU Radio, or Max/MSP, or even PulseAudio. There are even commodity event processing systems, like Apache NiFi. For example, the Fast Fourier Transform has log-linear (O(n log n)) complexity, and is the fundamental building block for most audio filters. Most FPGAs have an FFT block somewhere.
There are even some reduce/filter functions that are not sensitive to order, with minimal losses, like many statistical functions. In those cases, coherence can even be ignored, letting the natural statistical equivalence provide the coherence.
Orderings also have a tendency to have statistical dependence. That is, they often spatially correlate. For example, say there is a process that partitions by IP netblock. That can also effectively partition by country or even city. This can heavily boost cache-hit rates in joins.
Let's review the approaches with this in mind.
A best practice in relational databases is to design the schema normalized, where data has a normal place and lacks redundancy, then add denormalized indexes and views to that normal form to speed up access by common usage.
A query is materialized when the results are updated on-write, rather than being searched each time on-read. A view is just a table that is composed of a query of other tables. It is essentially a named query. A materialized view is not a logical query, but rather the physical results of a query updated on-write, ahead of the read. It is a sort of cache. Sometimes caches generated on-read are referred to as materialized views, but that conflates the distinct ideas. When updated on-write, then invalidation is removed as a problem. When updated on-read, and cached, then invalidation becomes a problem.
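A minimal sketch of the on-write/on-read distinction, with a dict standing in for both the base table and the view (illustrative only):

```python
# Sketch of the on-write/on-read distinction: a materialized view is maintained at
# write time, so reads never recompute and never worry about invalidation. A dict
# stands in for the base table and the view; illustrative only.
base_table: dict[str, float] = {}
materialized_total = {"total": 0.0}                  # the "view", updated on write

def write(key: str, value: float) -> None:
    old = base_table.get(key, 0.0)
    base_table[key] = value
    materialized_total["total"] += value - old       # maintain the view on write

def read_total() -> float:
    return materialized_total["total"]               # plain read, nothing to invalidate

write("a", 3.0); write("b", 4.0); write("a", 5.0)
print(read_total())                                  # 9.0
```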
Therefore, denormalization is a form of locality of reference, but only half, because it still references the global space when updated, and still incurs the latency hits; but instead of doing this on-read, it is done on-write. The problem remains that space is not fully reorganized. The normal form is still spatially scattered.
This is the relational database solution to latency. At best it moves a linear overhead to a logarithmic one, trading a constant per-query cost for resource contention on both reads and writes. Cost overheads increase with scale. After a modest amount of scale, the costs compound and put great pressure on the business model, limiting growth potential as margins become more of a problem.
The typical solution is to break the monolithic database into separate services, each handling one or a few materialized views, where each service has its own database, and communicates with other services to join data. This also allows for ownership federation.
But does this solve the contention problem? In fact, the same issue exists that each database is still spatially distributed across the relation. However, since each service is federated, there exists an alternative.
The relational aspects are directed by the service mesh, the messaging layer relationally connecting the federated services. Therefore, a sub-relational storage layer fits naturally beneath the messaging layer, but this time using Causal Consistency. Enter the document database.
A relational database is tabular, so even if materialized views are double-keyed, each sub-table needs to be a separate view that is dereferenced separately, manually joining with an IN clause. Relational databases are not designed for that much flattening.
A typical solution is to put a document store cache in front of a relational database. But since SQL cannot express the nested layers, every query needs to express IN-joined views in a document hierarchy, often with different consistency semantics anyway. GraphQL and document databases exist for this reason: they form these tables into graphs. GraphQL can sit in front of either relational tables, or a document database storing sub-graphs. Or the tabular database can be skipped entirely, and writes sent directly to the document sub-graphs, often fed by an event-driven data-flow event pipeline.
A relational database continually goes back to a synchronous store, via reduce-map. That wastes much of the benefit of using a document store with different consistency semantics, limits the aggregation possible in event processing, and limits locality of reference. The tabular operation and global view of a relational database carry extreme overheads in a distributed environment. When the data is distributed, a relational database is best reserved for digital ledgers: purely transactional data.
The primary aesthetic challenge of any art, including the craft of data management, is to make the big appear in a small space. This idea goes back to Ancient Greek Tragedy, where on a small stage, the Dionysian scenes carry Apollonian themes. Films do this on a television. A powerful artistic masterpiece is viewed on a small, rectangular piece of canvas. Social media creators produce for a tiny phone screen. Similarly, each computer node is its own little canvas of low dimensionality. The conception of a Turing Machine tries to abstract this away, but even under emulation, they are still tiny, physical devices that hold big, virtual visions.
The problem with distributed systems is that they are distributed. That is, they are not local. Time is sacrificed for space. As the algorithmic complexity is reduced, the operational overhead is increased. Things that used to be fast, with fewer resources, are now slow, with more resources. Everything around the Big O got simpler, while the Big O itself grew in complexity, layer after layer of stack, each a map-reduce, actually forming a long chain of reduce-map links.
Enter SQLite, The Serverless Database. SQLite is serverless in the sense that it is an embeddable library, not a service nor an application. It is a database toolkit, where the application can be its own database.
Many distributed databases are built on SQLite. It supports plug-ins for virtual tables and virtual file systems, used both for testing the core and for extension, which makes it a proper toolkit. An SQLite database is merely one file (the database pages) or two (when using a write-ahead log). It is trivial to store SQLite files anywhere.
Now, consider if a database were to only contain objects causally related. That is, a database only contains data for a single tenant (user, client, etc.). This is how SQLite is actually used in practice on small, embedded devices, like mobile phones. Most databases on mobile phones are powered by SQLite.
Can't servers do the same thing? Yes, they can. The SQLite databases on mobile phones are often synchronized across multiple devices held by the same tenant. This is trivial to do because the physical representation is so trivial. SQLite also has an ATTACH statement, which allows multiple databases to be attached to the same session, so data can be joined across the different databases, similarly to an enterprise database. But in this case, the journals can be materialized via a virtual file system in any way desired.
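A minimal sketch with the public SQLite C API (the file names, table names, and columns here are illustrative, not from the text): open one tenant database, ATTACH a second, and join across the two within one session.

```c
#include <stdio.h>
#include <sqlite3.h>

/* Print each row returned by the joined query. */
static int print_row(void *unused, int argc, char **argv, char **col) {
    (void)unused;
    for (int i = 0; i < argc; i++)
        printf("%s=%s ", col[i], argv[i] ? argv[i] : "NULL");
    printf("\n");
    return 0;
}

int main(void) {
    sqlite3 *db;

    /* Open the tenant's primary database file. */
    if (sqlite3_open("tenant.db", &db) != SQLITE_OK)
        return 1;

    /* Attach a second single-tenant journal under the alias "archive";
     * the file itself could be materialized by any virtual file system. */
    sqlite3_exec(db, "ATTACH DATABASE 'tenant_archive.db' AS archive;",
                 NULL, NULL, NULL);

    /* Join across the attached databases within one session. */
    sqlite3_exec(db,
        "SELECT o.id, a.note "
        "FROM orders AS o JOIN archive.order_notes AS a ON a.order_id = o.id;",
        print_row, NULL, NULL);

    sqlite3_close(db);
    return 0;
}
```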
This single-tenant-per-database model ends up being Strong Causal Consistency. By expressing Big Data as Small Data that is causally related, both temporal Strong Consistency and spatial Causal Consistency are obtained. The data is ordered the way it was when things were fast with few resources, reflecting the few resources actually required per tenant, just as before.
A system may isolate vertical layers using simulation, or a system may isolate horizontal layers using emulation. Operating systems isolate horizontally using emulation. Emulation is far faster than simulation.
This system isolates horizontally, enabling a true userland kernel with hardware protection while still using the same page table, vastly reducing the complexity, limiting it to scheduling and I/O. Moving scheduling and I/O to userland is where most of the speed improvements may be found. Instead of supporting a complete virtual kernel, this supports merely a userland scheduler with task-level isolation, rather than isolating an entire process with complete system access.
The mechanism builds on two kernel facilities:
- The mprotect() system call, and its kernel-side counterpart, allow memory pages to be temporarily masked from both userland and a system call.
- When a masked system call is invoked, a SIGSYS signal is generated. Instead of the default of killing the process, the process can redirect to a user-task error handling routine.

A new set of system calls is needed:
vkregister() (in the normal table): to set up the pathway back into the userland scheduler.
- Registers a virtual kernel scheduling callback function and a user-data state pointer and size. The callback:
  - Handles any sub-task isolation needed before the next vkcontinue().
  - Calls the next vkcontinue().
- Registers a list of page mappings that are virtual kernel privileged.
- Registers a kernel system call masking callback function registered to handle SIGSYS signals. The callback:
  - Raises an error to the subtask (or its supervisor), noting the abused system call.
  - Calls vkyield() on the subtask, giving control back to the virtual kernel to schedule the error handling continuation with vkcontinue(), which may continue a supervisor or the subtask itself.
vkcontinue() (in the normal table): to continue a protected task from the scheduler: a yield in.
- Uses mprotect() to deny access to pre-registered virtual kernel pages holding privileged state.
- Changes the system call table to only have a single system call that exits that protected mode: vkyield(). When an unavailable system call is invoked, the kernel generates a SIGSYS.
- Any regular system calls made during this protection mode will generate a SIGSYS against the process, which may be caught and handled by the error callback registered by vkregister().
vkyield() (in the isolated table): to yield a protected task back to the scheduler: a yield out.
- Uses mprotect() to allow access to pre-registered virtual kernel pages holding privileged state.
- Changes the system call table back to the original table with full access to the system.
- Calls the vkregister()-registered callback function with the registered user-data state pointer. The callback ultimately calls the next vkcontinue() to proceed where the protected task yielded.
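These calls are a proposal, not an existing kernel interface; the prototypes below are only one possible shape for the userland side, and every name, type, and signature in this sketch is an assumption.

```c
/* HYPOTHETICAL interface: vkregister()/vkcontinue()/vkyield() do not exist
 * in any current kernel. This sketch only fixes one possible shape for the
 * proposal above; every name, type, and signature is an assumption. */

/* Scheduler callback: runs with full access, prepares the next task's
 * isolation, and calls vkcontinue() on it. */
typedef void (*vk_sched_fn)(void *state);

/* SIGSYS masking callback: runs when an isolated task makes a forbidden
 * system call; reports the abuse, then yields back to the scheduler. */
typedef void (*vk_sigsys_fn)(void *state, int task_id, long syscall_nr);

/* vkregister(): in the normal table. Registers the scheduling callback,
 * the user-data state, the privileged page ranges to mask, and the
 * SIGSYS handling callback. */
int vkregister(vk_sched_fn sched, vk_sigsys_fn on_sigsys,
               void *state, unsigned long state_size,
               const void *priv_pages[], unsigned long npages);

/* vkcontinue(): in the normal table. Denies access to the privileged
 * pages, swaps to the isolated system call table (only vkyield() remains),
 * and resumes the protected task: a yield in. */
int vkcontinue(int task_id);

/* vkyield(): in the isolated table. Restores page access and the full
 * system call table, then enters the registered scheduler callback:
 * a yield out. */
int vkyield(void);
```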
Occam's Razor has been stated in many forms. For example:
The simpler answer is the correct one, all things being equal.
Do not posit without necessity.
I think of it as:
Simpler patterns occur more often.
Simpler combinations are more probable.
That last expression reveals the tautological self-evidence of the statement. But it is a bit of a stretch to get from the tautological expression to a statement about answers. It takes thinking of combinatorial complexity as related to search cost, like in computer science. There is a relationship, but not a causal one: not between cause and effect, but between system and effect, the interface between a structural medium and actions within it, between the eternal and the temporal. It is that old philosophical dichotomy of Being and Becoming.
With respect to values of complexity, a distinction exists between cardinal complexity, and ordinal complexity, which correspond philosophically to Being and Becoming. Cardinal complexity is structural, the count (cardinality) of structural elements. Ordinal complexity is aesthetic, the distance (ordinality) of the aesthetic path through the structural elements, the order in which the structure is traversed, or searched.
A search involves some kind of path ordering. Search cost is therefore aesthetic, ordinal complexity. That is, a complexity of predicate, or temporal causal function. The size of the search space is the structural, cardinal complexity, a complexity of subject, or eternal logical form.
It is assumed that there is a relationship, but the specific relationship depends on the search algorithm, and the linearity of that relationship is what is described as a time complexity in computer science, a type of ordinal complexity where time is the ordinal. However, many time complexity notations are only able to represent some derivative of the time complexity with missing variables, and therefore fail to properly integrate into a benchmark predictor. They are used because they are sometimes easier to formalize.
Philosophically, a search algorithm represents a philosophical method to Becoming. That is how life answers are bound to subjects, whether in real or virtual media.
In computer science, the linguistic association of a predicate to a subject is known as binding (the association) a method (the predicate) to an object (the subject). This leads to a simple way to understand this complexity dichotomy:
This aligns with the relationship between a noun and a verb as described in elementary school: The noun stays, but the verb goes away.
That is, the noun is eternal Being (without time, with form, without function, it stays), and the verb is temporal Becoming (with time, without form, with function, it goes away).
There is an unintuitively complementary idea in philosophy that is a logical extension of Occam's Razor: it is Eternal Recurrence.
I think of it as:
From the point of view of eternity, complex patterns still occur infinitely often.
Complex combinations have some probability.
Think of the set of even numbers, the set of odd numbers, and the set of both even and odd numbers. Even numbers and odd numbers occur at half the frequency of both even and odd numbers. However, each of these sets contains an infinite count of members (cardinality). These halves can be divided infinitely into ever-smaller subsets, and the same will be true. The selection of even or odd, or some other smaller set, has its own ordinal complexity, and therein lies the aesthetic significance. The philosophical realization is that all occurrences are significant from the point of view of eternity, because each is a subjective experience with the same value as any other subjective experience.
Can this idea be expressed, like Occam's Razor, in terms of answers? Just reciprocate the terms, revealing a converse, complementary idea.
The complex answer is the correct one, things being unequal.
Do posit with necessity.
Note how these statements change orientation, from without action to with action. The statements become temporal.
Think about what that means. The temporary circumstances may lead to the complicated choice being the right choice, with the full weight of eternal significance. Do not let the imperative of the category override the imperative of the case.
Think of the action not as a categorical binding (nor a method on a class), but as an execution of the state variable representing the entire machine state. It is that state of the entire machine which has eternal recurrence.
Think of a computer core image being executable: each environment is different, yet each is an eternal recurrence of the same state variable.
Occam's Razor is considered a foundational idea of Positivism. Eternal Recurrence is therefore the Existentialist complement of Positivism.
Logic is a set of tautologies. Wittgenstein explained it so well in his Tractatus Logico-Philosophicus (Treatise on Logical Philosophy) that it not only led to a philosophical view, it also developed into the logic behind the logic gates that are used in computing. That view of logic formalized the field by limiting the scope of the problem to something comprehensible.
5.101 The truth-functions of a given number of elementary propositions can always be set out in a schema of the following kind: [This table has been stylized and paraphrased with modern headers for clarity.]
Truth Table of Binary Logical Operators (✓ = True; ✗ = False). The Name, Description of Formula, and Symbolic Formula columns carry the text from the Tractatus Logico-Philosophicus.

| Term | Variable | Input | | | |
|---|---|---|---|---|---|
| Antecedent | p | ✓ | ✗ | ✓ | ✗ |
| Consequent | q | ✓ | ✓ | ✗ | ✗ |

| Function | Operator | Output | | | | Name | Description of Formula | Symbolic Formula |
|---|---|---|---|---|---|---|---|---|
| TRUE | p ⊤ q | ✓ | ✓ | ✓ | ✓ | Tautology | If p then p, and if q then q. | p⊃p · q⊃q |
| NAND | p ⌅ q | ✗ | ✓ | ✓ | ✓ | | Not both p and q. | ~(p · q) |
| CONVERSE | p ⊂ q | ✓ | ✗ | ✓ | ✓ | | If q then p. | q⊃p |
| IMPLY | p ⊃ q | ✓ | ✓ | ✗ | ✓ | | If p then q. | p⊃q |
| OR | p ∨ q | ✓ | ✓ | ✓ | ✗ | | p or q. | p∨q |
| NOT q | ¬ q | ✗ | ✗ | ✓ | ✓ | | Not q. | ~q |
| NOT p | ¬ p | ✗ | ✓ | ✗ | ✓ | | Not p. | ~p |
| XOR | p ⊻ q | ✗ | ✓ | ✓ | ✗ | | p or q, but not both. | p ·~q:∨:q ·~p |
| XNOR | p ≡ q | ✓ | ✗ | ✗ | ✓ | | If p then q, and if q then p. | p≡q |
| IS p | p | ✓ | ✗ | ✓ | ✗ | | p | p |
| IS q | q | ✓ | ✓ | ✗ | ✗ | | q | q |
| NOR | p ⊽ q | ✗ | ✗ | ✗ | ✓ | | Neither p nor q. | ~p ·~q or p\|q |
| NONCONVERSE | p ⊅ q | ✗ | ✗ | ✓ | ✗ | | p and not q. | p ·~q |
| NONIMPLY | p ⊄ q | ✗ | ✓ | ✗ | ✗ | | q and not p. | q ·~p |
| AND | p ∧ q | ✓ | ✗ | ✗ | ✗ | | q and p. | p · q |
| FALSE | p ⊥ q | ✗ | ✗ | ✗ | ✗ | Contradiction | p and not p, and q and not q. | p ·~p · q ·~q |
I will give the name truth-grounds of a proposition to those truth-possibilities of its truth-arguments that make it true.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 5.101
These binary logical operators accept two (hence binary) truth values (known as Boolean values, after George Boole), values of either true or false. They return one truth value. That is, they are functions with two Boolean valued arguments that return one Boolean value.
For example, a C library of truth table functions with Boolean values:
```c
#include <stdbool.h>

bool tautology    (bool p, bool q) { return true;      } // p ⊤ q
bool nand         (bool p, bool q) { return !(p & q);  } // p ⌅ q
bool converse     (bool p, bool q) { return !(!p & q); } // p ⊂ q
bool imply        (bool p, bool q) { return !(p & !q); } // p ⊃ q
bool or           (bool p, bool q) { return p | q;     } // p ∨ q
bool not_q        (bool p, bool q) { return !q;        } // ¬ q
bool not_p        (bool p, bool q) { return !p;        } // ¬ p
bool xor          (bool p, bool q) { return p ^ q;     } // p ⊻ q
bool xnor         (bool p, bool q) { return !(p ^ q);  } // p ≡ q
bool is_p         (bool p, bool q) { return p;         } // p
bool is_q         (bool p, bool q) { return q;         } // q
bool nor          (bool p, bool q) { return !p & !q;   } // p ⊽ q
bool nonconverse  (bool p, bool q) { return p & !q;    } // p ⊅ q
bool nonimply     (bool p, bool q) { return !p & q;    } // p ⊄ q
bool and          (bool p, bool q) { return p & q;     } // p ∧ q
bool contradiction(bool p, bool q) { return false;     } // p ⊥ q
```
Each Boolean has two combinations, true or false, and there are two Boolean arguments, so there are 4 possible inputs. With 4 possible inputs, there are 2 to 4th power (16) possible outputs. Therefore, there are 16 possible functions, each represented in the truth table.
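As a quick check (assuming the library above is in the same translation unit), the four input combinations can be enumerated for each function to print its output row, reproducing the table below:

```c
#include <stdbool.h>
#include <stdio.h>

typedef bool (*truth_fn)(bool, bool);

/* Print one row of outputs over inputs (p,q) = (0,0), (1,0), (0,1), (1,1). */
static void print_truth_row(const char *name, truth_fn f) {
    printf("%-15s %d %d %d %d\n", name,
           f(false, false), f(true, false), f(false, true), f(true, true));
}

int main(void) {
    print_truth_row("contradiction", contradiction);
    print_truth_row("and",           and);
    print_truth_row("xor",           xor);
    print_truth_row("or",            or);
    print_truth_row("imply",         imply);
    print_truth_row("tautology",     tautology);
    /* ...and likewise for the remaining ten functions. */
    return 0;
}
```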
| Function | Output (p=0, q=0) | (p=1, q=0) | (p=0, q=1) | (p=1, q=1) | Expression | Operation |
|---|---|---|---|---|---|---|
| contradiction() | 0 | 0 | 0 | 0 | false | p ⊥ q |
| and() | 0 | 0 | 0 | 1 | p & q | p ∧ q |
| nonimply() | 0 | 0 | 1 | 0 | !p & q | p ⊄ q |
| is_q() | 0 | 0 | 1 | 1 | q | q |
| nonconverse() | 0 | 1 | 0 | 0 | p & !q | p ⊅ q |
| is_p() | 0 | 1 | 0 | 1 | p | p |
| xor() | 0 | 1 | 1 | 0 | p ^ q | p ⊻ q |
| or() | 0 | 1 | 1 | 1 | p \| q | p ∨ q |
| nor() | 1 | 0 | 0 | 0 | !p & !q | p ⊽ q |
| xnor() | 1 | 0 | 0 | 1 | !(p ^ q) | p ≡ q |
| not_p() | 1 | 0 | 1 | 0 | !p | ¬ p |
| imply() | 1 | 0 | 1 | 1 | !(p & !q) | p ⊃ q |
| not_q() | 1 | 1 | 0 | 0 | !q | ¬ q |
| converse() | 1 | 1 | 0 | 1 | !(!p & q) | p ⊂ q |
| nand() | 1 | 1 | 1 | 0 | !(p & q) | p ⌅ q |
| tautology() | 1 | 1 | 1 | 1 | true | p ⊤ q |
In the domain of propositional logic, the arguments are named p and q. The first argument is the antecedent p, the cause. The second argument is the consequent q, the effect. A proposition is a declarative statement that is either true or false, but not both. Therefore, a proposition can be represented by a Boolean value, and a Boolean value can represent a proposition.
For example, the statement It is raining. is a proposition because it is either true or false, but not both. Let us call the statement the Boolean value p, and say p ⊻ ¬p, or xor(p, not(p)) == true, to declare that it is a Boolean proposition.
6.1 The propositions of logic are tautologies.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 6.1
6.23 If two expressions are combined by means of the sign of equality, that means that they can be substituted for one another. But it must be manifest in the two expressions themselves whether this is the case or not. When two expressions can be substituted for one another, that characterizes their logical form.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 6.23
For example, let the arguments be:
- p = It is raining.
- q = It is not raining.
- q = not(p)

Let the proposition be: It is raining or it is not raining[, but not both].
- It is raining. ⊻ It is not raining.
- It is raining. ⊻ ¬ It is raining.
- xor(p, not(p))
- tautology(p, q)
Since xor(p, not(p)) is always true, no matter the truth or falsity of p or q, it is logically equal to a tautology truth function. Therefore, the proposition It is raining or it is not raining is logically valid.
Let the proposition be: It is raining and it is not raining.
- It is raining. ∧ It is not raining.
- It is raining. ∧ ¬ It is raining.
- and(p, not(p))
- contradiction(p, q)
Since and(p, not(p)) is always false, no matter the truth or falsity of p or q, it is logically equal to a contradiction truth function. Therefore, the proposition It is raining and it is not raining is logically invalid.
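A small sketch reusing the truth-table library above (the function name here is invented): since a proposition only ranges over two values, both claims can be checked exhaustively.

```c
#include <assert.h>
#include <stdbool.h>

/* Exhaustive check: a proposition only ranges over two values, so both
 * claims can be verified by trying each value of p. xor() and and() are
 * the truth-table functions defined earlier. */
void check_raining_propositions(void) {
    for (int i = 0; i <= 1; i++) {
        bool p = (i != 0);
        assert(xor(p, !p) == true);    /* logically valid: a tautology       */
        assert(and(p, !p) == false);   /* logically invalid: a contradiction */
    }
}
```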
A proposition is a declarative statement that can be true or false. It is said to hold truth or falsity. But the role it plays in logic is to represent a condition of state. It is the existence of the condition of state that is expressed as truth or falsity.
4.121 Propositions cannot represent logical form: it is mirrored in them. What finds its reflection in language, language cannot represent. What expresses itself in language, we cannot express by means of language. Propositions show the logical form of reality. They display it.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 4.121
A proposition is described as not just a piece of logic, but as an axis orthogonal to logic, where logic is what is said, and propositions are what is shown.
4.1212 What can be shown cannot be said.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 4.1212
Etymologically, even the words proposition and statement are concerned with position and state. The propositions represent conditions of state. The constraint of the proposition, true ⊻ false, interfaces the represented state of the proposition with equality of position, interfacing the cardinal axis of state with the ordinal axis of its position.
Of the 16 functions in the binary truth table, only 2 of them, the tautology and contradiction functions, cover all that is inherently true (tautology), and all that is inherently false (contradiction). That leaves 14 out of 16 functions addressing a space of what is neither inherently true nor inherently false. That means that most ways of looking at truth at this lowest level are in the space of non-inherent truth. Such non-inherent truth is conditional, relative, existential truth.
The definition of logic as a set of tautologies is often considered the main point of the Tractatus. Before it, Logic was still conflated with metaphysical mysticism. Wittgenstein even used the word mystical when delineating what was not logical. After the Tractatus, Logic was finally able to be treated as purely rational.
6.44 It is not how things are in the world that is mystical, but that it exists.
6.45 To view the world sub specie aeterni [from the point of view of eternity] is to view it as a whole—a limited whole. Feeling the world as a limited whole—it is this that is mystical.
Ludwig Wittgenstein, Tractatus Logico-Philosophicus, 6.44-6.45
Wittgenstein carves out a clear boundary around logic, where the realm of not-logic begins. Then he describes what is outside of logic as a kind of relative (temporal) existence in an absolute (eternal) space. This was written by 1918, and published in 1921. Soon after, in 1927, Martin Heidegger published:
Being lies in the fact that something is, and in its Being as it is; in Reality; in presence-at-hand; in subsistence; in validity; in Dasein [being-there]; in the 'there is'.
Martin Heidegger, Being and Time, page 7 (Original German), page 25 (cited English translation by John Macquarrie and Edward Robinson)
An object being there is about where in reality there is an object. That is, it is not whether an object exists, but rather where an object exists. That is, existence pertains to a state of a space, the integrated state within a logical space.
What these ideas lead to is not just a separation of what can be shown or said, but a proposal for a third axis of Dasein being: what can be ordered; ontological ordinality; the device of creation. Declarative theory ends at what is ordered, while imperative technique begins at what is ordered, is expressed through what is shown, and finally becomes a term of what is said: a complete aesthetic cycle.
Typically, logic is treated as a medium like physics, as a low-level functional ruleset to derive all high-level effects. Logic and physics are both rulesets to represent a particular type of medium. Each says nothing about the particular state of any particular medium.
For example, each mathematical system, each a tautological form of logical rules, may or may not apply to a given medium, just as any model of physics may or may not apply to a given medium. If the model is found to apply through non-tautological observation, it is presumed to explain all state changes in the medium, as a set of derivative functions, as a generative model. It is that conditional if the model is found to apply that is represented as the decidability of undecidable propositions in incompleteness theory.
However, there is another aspect of incompleteness: a functional model does not account for temporal state, and it cannot fully integrate the final state without the context of a temporal state. The generative model needs parameters as any sequence needs a position. A logical medium is the set of functions without state, without the actual space of the medium: the code without the data, without even memory.
Existence enters as the positional aspect of the spatial integration. It is this first-mover, this ordinal key into a value sequence, the function parameter, to view the world [...] as a limited whole [...] that is mystical to Wittgenstein. It is where the value is, literally. And that is the scope of art.
With respect to media, logical views can be understood through immutability, while stateful views can be understood through mutability.
In computing, there is a practice of marking pages of memory holding functional code as write-disabled and execute-enabled, or effectively immutable, while leaving the pages of memory holding data as write-enabled and execute-disabled, effectively mutable. That is, the memory is either holding code-to-execute or data-to-manipulate, where code logic is treated as a set of immutable functions, and data is treated as mutable state.
W^X ("write xor execute", pronounced W xor X) is a security feature in operating systems and virtual machines. It is a memory protection policy whereby every page in a process's or kernel's address space may be either writable or executable, but not both. Without such protection, a program can write (as data "W") CPU instructions in an area of memory intended for data and then run (as executable "X"; or read-execute "RX") those instructions.
| | Functional, Immutable Code | Stateful, Mutable Data |
|---|---|---|
| Write Access | ✗ Disabled | ✓ Enabled |
| Execute Access | ✓ Enabled | ✗ Disabled |
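A hedged POSIX sketch of the same policy (the helper name is invented): a page is filled while it is writable and not executable, then flipped to executable and not writable, so the two permissions never coexist.

```c
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Allocate one page, write into it while it is writable (and not
 * executable), then flip it to executable (and not writable):
 * the W and X permissions never coexist. Caller keeps len <= page size. */
void *load_page(const void *code, size_t len) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    void *mem = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return NULL;

    memcpy(mem, code, len);   /* mutate the page as data: W, not X */

    if (mprotect(mem, page, PROT_READ | PROT_EXEC) != 0) {   /* X, not W */
        munmap(mem, page);
        return NULL;
    }
    return mem;               /* now an immutable, executable page */
}
```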
Our provisional aim is the Interpretation of time as the possible horizon for any understanding whatsoever of Being.
Martin Heidegger, Being and Time, page 1 (Original German), page 1 (cited English translation by John Macquarrie and Edward Robinson)
That W^X example is a direct analog, with code relating to function and data relating to state. But the other side of computing is the representational side, or databases. In the study of databases, there is a principle of Log-Table Duality that is used to control time-related access issues. It is understood that:
The time-independent journal is used to resolve time disparities across disparate time-expressions of the mutable table. This is an example of using an immutable expression to relate mutable expressions, where the immutable derives mutable integrated state.
Let's come back to databases for a bit. There is a fascinating duality between a log of changes and a table. The log is similar to the list of all credits and debits and bank processes; a table is all the current account balances. If you have a log of changes, you can apply these changes in order to create the table capturing the current state. This table will record the latest state for each key (as of a particular log time). There is a sense in which the log is the more fundamental data structure: in addition to creating the original table you can also transform it to create all kinds of derived tables.
| | Log | Table |
|---|---|---|
| Mutability | Immutable | Mutable |
| Type | Derivative Method | Integrated Object |
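A minimal sketch of that duality (the types and names here are invented): replaying an immutable log of keyed deltas, in order, derives the mutable table of current balances.

```c
#include <stddef.h>

#define MAX_ACCOUNTS 16

/* One immutable log entry: a credit or debit against an account key. */
typedef struct {
    int    key;      /* account index, 0 .. MAX_ACCOUNTS-1 */
    double delta;    /* credit (+) or debit (-) */
} log_entry;

/* Derive the mutable table (current balances) by applying the immutable
 * log in order. The log stays the fundamental record; the table is its
 * integrated state as of a particular log time. */
void replay(const log_entry *log, size_t n, double table[MAX_ACCOUNTS]) {
    for (size_t i = 0; i < MAX_ACCOUNTS; i++)
        table[i] = 0.0;
    for (size_t i = 0; i < n; i++)
        table[log[i].key] += log[i].delta;
}
```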
In philosophy, there is an ancient distinction between the eternal (without time) and the temporal (with time). There is a Platonic view that form is eternal, and function is temporal. In this respect, function is identified as temporal action.
However, there is both a logical and ontological view of form and function. A logical view of a function is as an eternal mapping between values, and a logical view of form is as an internally tautological set of mappings that form a structure. An ontological view of a function is as a temporal method on a mutable object value, and an ontological view of a form is as a temporal object in a mutable space of temporally ordered states.
| | Logical View | Ontological View |
|---|---|---|
| Mutability | Immutable | Mutable |
| Temporality | Eternal | Temporal |
| Form | Eternal, Immutable Form | Temporal, Mutable Object |
| Function | Eternal, Immutable Function | Temporal, Mutable Method |
A function is an eternal, immutable representation of a change, while a method is a temporal, mutable representation of a change. For example, in 2 + 2 = 4, the function is add(2, 2) == 4, or the mapping of (2, 2) as arguments to an add function that results in 4, where 2 + 2 is an eternal logical identity of 4. However, the method is what + or add() does, a temporal ontological action.
Each of the eternal and temporal perspectives has a different usage. Different schools of thought have a tendency to pick one perspective, leading to a possible combination of Four Schools of Functional Thought:
In the Platonic view, form is eternal, immutable form, but function is temporal, mutable method, where the mutable function is used to bridge the eternal with the temporal.
This is where form follows function comes from, since function is a priori (eternal) and form is a posteriori (temporal), the opposite of the Platonic view: form as the integrated state of the logical function. The key observation is that form is mutable, for anything subject to creation has to be mutable.
The mathematical view is completely logical, where both form and function are eternal, where the eternal is used to explain the temporal. However, with only a logical formalism, without an ontological formalism, the gap between logic and art has become so wide that even science is losing its formality.
The object-oriented view is ontological, where both form and function are temporal, object and method, where the medium is a completely immutable space, and the eternal is expressed as temporal.
There is another mode of functional thought that is not mutually exclusive (xor) with these views, but rather inclusive (or):
The technical view starts and ends with the temporal view, passing through the abstract, eternal, logical view. It is reification, representation, reproduction, all of the re- temporal ideas. It is perceiving the finite, conceiving the indefinite, then reducing the finite to the significant conception. That comes from seeing two different things the same way.
| View | Binary Op | Integration | Form | Object | Derivative | Function | Method |
|---|---|---|---|---|---|---|---|
| Ancient | xor | Form | ✓ | ✗ | Method | ✗ | ✓ |
| Modern | xor | Object | ✗ | ✓ | Function | ✓ | ✗ |
| Mathematical | xor | Form | ✓ | ✗ | Function | ✓ | ✗ |
| Object-Orientation | xor | Object | ✗ | ✓ | Method | ✗ | ✓ |
| Technical | or | Both | ✓ | ✓ | Both | ✓ | ✓ |
In the perspective of the Log-Table Duality, the Table is a mutable, integrated object, and the Log is an immutable, derivative function. In the perspective of the Write-Execute Duality, the Writable page is a mutable, integrated object, and the Executable page is an immutable, derivative function. Both of these are of the Modern View.
Mutable Object | Immutable Function |
---|---|
Table | Log |
Writable Page | Executable Page |
The logical analysis of a media space is therefore limited to a functional analysis. Functional analysis describes how one object changes into another, but it does not describe what the object is, where it exists, or even that it exists. Those are the existential, ontological questions: questions of value. To answer them, there must be an ontological analysis that reflects the state-of-affairs, the environmental conditions and state.
It is known in computer science that there are other types of analysis for media space. There is already a jargon of object-orientation which aligns with the jargon of ontology:
- Being of objects, per Martin Heidegger
- Dasein [being-there] of objects, per Martin Heidegger.
Those terms are about the media space state, not the media functions. The terms are about the data, and relationships between data, the organization of data. Object analysis is ontological, spatial analysis.
There is a fundamental orthogonality between the ontological and the ontical:
That which is ontically closest and well known, is ontologically the farthest and not known at all; and its ontological signification is constantly overlooked.
Martin Heidegger, Being and Time, page 43 (Original German), page 69 (cited English translation by John Macquarrie and Edward Robinson)
For each medium, this orthogonality is not just a pairing, but forms a tripling with logic itself, a complete hierarchy of state:
Ontological | Geometrical | Object Orientation | Biological |
---|---|---|---|
Logical Rules | Plane | Virtual Memory Space | Physical Environment |
Ontical Values | Values | Classes and Types | Species |
Ontological Objects | Positions | Object Instances | Organisms |
The mind is attuned to perceive ontological objects, and can conceive logical rules with time. However, the ontical values tend to be a function of the body, subject to the domain of organizational systems, and are difficult for the mind to formalize. Ontology is where the hard sciences end, and the soft sciences begin.
The great product of The Enlightenment is the development of Existential, Non-Platonic methods to begin the formalization of the soft sciences, like biology, psychology, and economics. Physics was able to be founded through mathematics before the Enlightenment. But the sciences of media, the soft sciences, needed a philosophy of science to develop. The first philosophy of science is Existentialism. Today, there is an Ontology of Object Orientation.
Each level of the ontological media hierarchy may be understood by a type of equality:
Saying the same thing differently is developed through tautologies, or logical, functional equalities.
Showing the same thing differently is developed through reproducibilities, or ontical, aesthetic equalities.
Ordering the same thing differently is developed through ordinalities, or ontological, relative equalities.
That is why, for example, many programming languages have 3 types of equalities:
- = or let: equality for logical comparison (when assignment is represented functionally as Static Single Assignment during compilation)
- == or eq: equality for value comparison
- === or is: equality for identity comparison (when each object is assigned a distinct memory allocation)

However, the analogy isn't perfect due to platform emulation shortcuts and limitations, so the meanings often have a subtle difference due to platform design artifacts. For example, in a copy-by-value system, the identity equality operator only tests value and embedded type, like with PHP, or with Python immutable strings.
That would seem like a layering violation, but it is an optimization technique that respects the layers. In computer science, the scopes of logic and ontology are made distinct by the property of mutability. That is, a functional analysis requires immutable properties over the course of the analysis, so mutable properties are translated to immutable properties when possible. This is what optimizing compilers are doing with Static Single Assignment, moving as much of the flow as possible to the logical layer to reduce costly mutations.
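C itself has no distinct identity operator, but the same three-way split can be sketched with assignment, value comparison, and pointer identity (the variable names below are illustrative, not from the text):

```c
#include <stdbool.h>
#include <string.h>

/* Three equalities, sketched in C terms (variable names illustrative):
 *   =    assignment: binding a name to a value
 *   ==   value comparison: here, strcmp over contents
 *   ===  identity comparison: here, pointer equality (same allocation) */
void three_equalities(void) {
    const char *a = "water";          /* "=" : bind the name a to a value  */
    char b[6];
    strcpy(b, "water");               /* a distinct allocation, same value */

    bool same_value    = (strcmp(a, b) == 0);  /* value equality: true      */
    bool same_identity = (a == b);             /* identity equality: false  */
    (void)same_value; (void)same_identity;
}
```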
Each mutation of state is an act of creation. And any technical process is an optimization of creation, enabled by logical equalities (tautologies) that connect ontical equalities (reifications).
The ability to work in virtual spaces provided by modern computing allows for purely ontological statements to be falsifiable for the first time in history. This is what finally enables the jump from logic to ontology. The properties of the medium itself are not just the beginning of a problem, but also the beginning of a new solution. It isn't just opening the door to physics through physical modeling and simulation, but allows for the foundation of a scientific field of metaphysics.
For the first time in history, philosophical problems may be formulated in a falsifiable manner, and Kantian limitations may be defeated. Analytic and Continental philosophy may finally be unified. That will be the next legacy of computer science.
I propose that Arts and Sciences are more properly founded on Ontology, not Logic. First, I start with a falsifiable set of ontological rules from which Arts and Sciences are derived.
The study of ontology is defined through media-independence, as opposed to the study of the physical:
| | Dependent | Independent |
|---|---|---|
| The Ontological | All-Media | Per-Media |
| The Physical | Per-Media | All-Media |
That is, the ontological is everything that is common to all media, and nothing that pertains to a particular medium, which is rather the physical. This provides a formal definition and falsifiable scope to all of ontology.
It follows that all of ontology may be understood through any medium, including the ontology of physical media.
That is to say that the question of the existence of the most significant things may be understood in the same way as the most trivial, the most assessable, or any desirable condition, as long as the interfaces of the compared ontological conditions are constant.
It follows that ontologicality may be falsified by demonstrating a property existing in some (one or more) media, but not some (one or more) others, because that would demonstrate media-dependence, falsifying the media-independence defining ontologicality.
Ontology is independent of physicality.
Since physicality is assigned to particular physical media, by definition physicality is media-dependent on each physical medium, and therefore by definition physicality is a physical property, not an ontological property.
Physical planes are merely virtual planes from an internal point of view.
That is, a plane appears physical to you if you are inside it, so real planes are contextual. Real planes are ontologically illusory in that they have no ontological significance, per the principle of ontological media-independence. Every plane appears physical to its physical contents.
However, the relative nature of physicality is media-independent, and therefore an ontological property.
A great modern example of an ontology of non-real planes, and therefore a type of metaphysics, is Stephen Wolfram's A New Kind of Science. He has applied complex physical generalities to simple cellular automata. In that way, he overcame the limitations of geometry by only simulating the common interface, not simulating the whole environment: the distinction between emulation and simulation.
The principle of ontological media-independence implies that this is a valid method of demonstration. Wolfram is re-orienting the problems as reproducible aesthetic constructs through logical constructs that simulate physical aesthetic properties, relating the artificial medium with the natural medium. That is, instead of logic, a logic of media is simulated.
All that is missing is to link what Wolfram does to ontology formally. This requires, as in the field of logic, a limiting definition of scope that serves as a way to define strict, formal relationships.
Demonstrations are processes of action. A space is explored not just through the organization of state, but through the interfaces of action. From production to reproduction to valuation, there are three ontological models of action:
A simulation is a media space model. The ecosystem is driven only by generic operations designed to discover effects. It is an application of theory. That is, simulation uses theory to discover. Theory is declarative. It describes immutable media function, equations. Representation is simulated.
An emulation is a media interface model. The ecosystem is driven by special operations designed to reproduce effects. It is an application of technique. That is, emulation uses technique to reproduce. Technique is imperative. It describes mutable object actions, methods of manipulation. Representation is emulated.
A virtualization is a media state model. The ecosystem is driven by stateful operations designed to produce effects. It is an application product instance. That is, virtualization is an instance of state of a particular emulation.
Representation | Media Model | Practice | Component |
---|---|---|---|
Simulation | Spatial | Declarative Theory for Discovery | Logical Function |
Emulation | Interface | Imperative Technique for Reproduction | Ontical Type Method |
Evaluation | State | Produce Value | Ontological Object Instance |
To that end, I propose to define Art as the set of all reifications (re-representations, or emulations). This is Wittgenstein's "logical picture", which Wittgenstein presented only as an idea. That is, art is about similar effects from different causes, or showing the same thing differently, or reproduction. That reproduction is essentially what emulation provides. Each production of art, each affect of art, expresses relational similarity. That reproduction is not logical equality, but aesthetic equality spanning the artistic, cross-media relations.
Logic is tautological: re-declaring the same thing. It is structure. It is duplicate Being. It can be simulated.
Art is reification: re-instantiating the same thing. It is reconstruction. It is duplicate Becoming. It can be emulated.
Reification is:
Composition is the production of a theme within a scene. Orchestration is producing the same theme within a different scene: cross-media composition. Reification technique is therefore developed from a theory of orchestration.
To evaluate a cross-media orchestration method, each medium-method is mapped to a common super-medium from which each sub-medium-method is evaluated. For example, a human can evaluate methods to perform an electric song as an acoustic song by listening to two recordings of each method. The recording medium is a super-medium. The single human listener is a super-medium. In computing, benchmark suites are often used to compare the performance of two different techniques of the same set of interface operations defined by the benchmark suite.
This can also be done between a virtual medium and a physical medium. That is, it is possible to emulate aspects of the physical world to better understand aspects of the physical world.
Looking at reification in Art, the scenes with common themes can be shared within the same presentation, or there may be the scenes on stage interacting with the scenes as observed. The genius of tragedy is the use of presentational themes to create mental themes.
Looking at Stephen Wolfram's A New Kind of Science, it is only missing the evaluation side to base it on ontology. That is, it is missing the theme interface that is constant between each scene, where he crosses scenes in the physical world with scenes in cellular automata. To create such a reification evaluator, it must be able to operate across media, meaning that the media need to be unified into the medium of the evaluator, by way of a super-medium. The body and mind of the audience act as an informal evaluator. The next step to formality is now clear.
Enumeration is the simplest evaluator. All quantization processes are such. Like standard I/O. The Unix Pipeline is for constructing aesthetic processes. Message passing systems in general are aesthetic processes. That is why they emerged first from engineering before entering science.
The field of machine learning currently makes use of common evaluators when evaluating the effectiveness of classification systems by comparing discriminatory systems against human-discriminated data sets. That is, the supervision is the super-media evaluator.
Investigations into the processes of each of our organs of sense, have in general three different parts. First we have to discover how the agent reaches the nerves to be excited, as the light for the eye and sound for the ear. This may be called the physical part of the corresponding physiological investigation. Secondly we have to investigate the various modes in which the nerves themselves are excited, giving rise to their various sensations, and finally the laws according to which these sensations result in mental images of determinate external objects, that is, in perceptions. Hence we have secondly a specially physiological investigation for sensations, and thirdly a specially psychological investigation of perceptions.
Hermann Helmholtz, Introduction to On The Sensations of Tone as a Physiological Basis for the Theory of Music, 1863
Helmholtz Layer | State Hierarchy | Action Hierarchy |
---|---|---|
Physical | Logical Rules | Simulation |
Physiological | Ontical Values | Emulation |
Psychological | Ontological Objects | Evaluation |
Helmholtz speaks about any sense organ. In this era of computing, sensory systems may be understood as artificial sense organs. As such, Helmholtz's observation applies generally to sensory systems.
Computer programs may be understood as sensory devices. Even the simplest programs may be understood in this way. For example, a process with input and output, or any automaton. This is independent of general intelligence or complexity. A mere fruit fly has a psychology, and so does a mere regular expression.
A physiological layer has an interface to the physical layer, the sensory organ, and an interface to the psychological layer, the sensory cortex. The pathway from the organ to the cortex flows from cardinality (counting value) to ordinality (ordering value); that is, from evaluating magnitude to evaluating position, from ontical value magnitude to ontological object position.
The I/O devices and processor are the physical hardware. The operating system driving the I/O devices quantizes the I/O, typically grouping it into events. The process holding the application and standard library usage is where objects are conceptualized from quantized I/O.
Sensory System | Computing Platform |
---|---|
Physical | Processor and I/O Devices |
Physiological | Operating System I/O Drivers |
Psychological | Application and Standard Library |
Computer networks have been studied and modeled for decades, and networks are a great example of a sensory system. The main model of a network is The Open Systems Interconnection (OSI) Model, which aligns as predicted with the Helmholtz layers:
| Sensory System | Computing System | OSI Layer Group | OSI Layer | OSI Unit |
|---|---|---|---|---|
| Physical | I/O Devices | Media Layers | 1 | Physical Symbols |
| Physical | I/O Devices | Media Layers | 2 | Data Link Frames |
| Physiological | I/O Drivers | Media Layers | 3 | Network Packets |
| Physiological | I/O Drivers | Host Layers | 4 | Transport Segment/Datagram |
| Psychological | I/O Libraries and Application | Host Layers | 5 | Session |
| Psychological | I/O Libraries and Application | Host Layers | 6 | Presentation |
| Psychological | I/O Libraries and Application | Host Layers | 7 | Application |
Temple Grandin, an autistic person whose brain has an augmented visual sensory system, writes about three types of specialized brains:
These three types correspond to the three Helmholtz Sensory Layers, even in the order described:
Sensory Layer | Thinking Function | Thinking Object | Ontological Media Layer |
---|---|---|---|
Physical | Visual Thinking | Pictures | Ontological: Media Data State |
Physiological | Music and Math Thinking | Patterns | Ontical: Media Data-Function Interface |
Psychological | Verbal Logic Thinking | Words | Logical: Media Function Implementation |
The symptoms of Autism Spectrum Disorder are often described as groups of various Sensory Processing Disorders. Instead of describing these sensory processing disorders as disorders, they are often described as differences in evolutionary optimization of different types of sensory processing problems. It is therefore intuitive for these types of processing to share general traits of any sensory system, and as such relate to the general Helmholtz Sensory Layers.
Notice that the Ontological Media Layers related to Helmholtz Sensory Layers are associated in reverse order to the relationship described in the table Ontological Hierarchy of Sensation, where the ontological layer is the psychological layer. That is because of the flow of input and output. Outside the mind (or sensory system), the ontological objects are physical, but inside the mind, ontological objects are the final evaluation of the psychological layer, the virtual physical plane.
Biological senses observe changes, not final effects. They observe forces, not attributes. When a person speaks in a room, the speaking reverberates within the room. The reverberation echoes the speaker, but each is heard completely differently. The speaker is heard as the source, and the reverberation is heard as the room. The sense cares nothing for a repeatable description of sound, but rather for what is happening. The sense isn't capable of merely hearing sounds: it actually listens.
When a person speaks the word water to those who know the meaning, what is heard is the value, the meaning, not the symbol, not the way water sounds to those who don't know the meaning. When one is asked to repeat what another says, it is easier to represent its meaning through one's own voice rather than to mimic the voice of the other. Speaking in one's own voice comes innately, but repeating the sequence of sounds requires cognition. That is the skill of the impressionist.
That is the same process by which a person is recognized in a painting, yet particulars specific to the person are easily overlooked. We instantly recognize Mona Lisa, but fail to notice the lack of eyebrows until it is pointed out to us.
The physiological pattern in biology is that the senses model the assumptions of the functional aspects of the medium. They detect against the grain of media function to detect the state within, and the actions upon, each medium.
The model training of machine learning is to learn the functional aspects of the medium, such that the execution of the model normalizes observations to invert the functional effects of the medium, to reveal the activity of the state variable of the medium, the generating function parameters.
That is, the modeling discovers the invariant rules of the medium, while the execution of the model discovers the equivariant values of the medium.
A sensor that is equivariant reveals the value-variants in objects. A sensor that is invariant reveals variants in environment, to ultimately reveal the objects it contains. That is, equivariants are logical equalities dependent with identity, while invariants are aesthetic equalities independent of identity.
It is the role of the Helmholtzian physiological layer of a sensor device to be oriented with an object-detecting bias. That is why optical sensors are based on edge detection: the edges are more constant to the object than the color. That is why convolutions often work well as a front-end to neural networks: a shape can be even more constant than an edge. That is why audio (and radio) sensors detect relative phase differences, not absolute phases, where a consistency of the phase difference denotes a signal value, and a variation in the phase difference denotes a signal change. The relative phase difference is independent of the position of the message, but dependent on the message contents.
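A minimal sketch of detecting against the grain of the medium (the function name is invented): a first-difference filter responds to changes rather than absolute levels, so it is invariant to a constant offset in the medium but equivariant to where the change occurs.

```c
#include <stddef.h>

/* First-difference "edge" detector: y[i] = x[i+1] - x[i].
 * A constant offset across the whole signal (the medium's bias) vanishes
 * from y (invariance), while a step in x appears in y exactly where it
 * occurs (equivariance to position). */
void first_difference(const double *x, size_t n, double *y) {
    for (size_t i = 0; i + 1 < n; i++)
        y[i] = x[i + 1] - x[i];
}
```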
The Arts are developed from human control over both equivariants and invariants, a formal expansion of the role of the body. In that way, experience is shared along with the conditions of experience. It answers not just what, but how. It is the difference between merely declaring the roads slick, and telling a story that demonstrates the slickness. In that way, stories are much more faithful to the truth than mere propositions.
The Arts relate in that way to the reproducibility of The Sciences, where reproducible demonstrations allow the development of personal, direct stories, even more faithful to the truth. But Art can also have the power of science when the story already directly relates to a personal story. In that way, fiction can be a better persuasive device than non-fiction without a laboratory to reproduce results.
Given reification theory, the tautologies of Logic are extended aesthetically by Art. Philosophically, Logic is a priori equality, and Art is a posteriori equality. That is to say that Logic is equality before (prior to) evaluation, and Art is equality after (posterior to) evaluation. That is, art is virtual media. Not evaluation of truth-value, as Wittgenstein's logic (which comes after), but evaluation as a virtual sign to affect meaning. After that comes its use as a conditional in a truth-value. That is, it is the variable itself, the signal itself, the flow of entropy through a system, the observation before the contemplation, what is seen before the thought.
This representational aspect of the human-condition is so baked into the experience of life that it is easily overlooked by analytical minds. It instead takes an experiential mind, a mind prepared to receive the vast depths of even the most shallow surfaces. That is because of the concept of virtual scope. The system of Art is not just physical, but arbitrary. Any canvas, any medium, even virtual media, are Art systems.
That can explain all activities of Art so well that symbolic question marks about things related to Art can be more easily seen as more-narrowly-scoped subsets of Art.
For example, Science. Science saw a lot of formal development in the last century. Science can be defined as the class of Art reproduction based on demonstrations of reproducibility. There is a general consensus that Science is about the reproduction of value based on the principle of reproducibility — not just logical reproducibility, not just natural reproducibility, but also artificial reproducibility.
The scientific man is the further development of the artistic man.
Friedrich Nietzsche, Human, All-Too-Human, 226
But for some reason, the concept of the reproduction of value alone (without necessarily a basis in reproducibility) is not generally recognized as a formal exercise. I attribute that to a lack of common study of Ontology in the sense of Heidegger. I think that a modern dialect of Heidegger's ideas in Being and Time can formalize the scope of Art, as a basis for developing Artistic theory for any compositional activity.
The aesthetic equality of Art involves the ontological distinction between identity and equality. A logical equality describes two equal expressions of the same identity. An aesthetic equality describes equal expressions of different identities. For example, an aesthetic equality may be comprised of two operands: 1. a real expression; 2. a virtual expression equating to the real expression. But that is not a logical equality, because the real and virtual identities are different instances, though having logically identical value. The instantiation is posterior evaluation.
Logical equalities can even be described as independent of identity, that they describe relationships between any identity. And aesthetic equalities can be described as dependent on identity. In that way, aesthetic reifications are an ontological extension of logic.
Think of Wittgenstein's truth-values. They represent a Boolean comparison of two represented conditions, one usually a representation of an observation of reality, and the other a proposed condition. Think of an if-statement in most programming languages.
Science is a method to develop theory about how classes of expressions reproduce in nature. On top of that is the development of skilled artistic technique, craft, or engineering. Engineering is artistic technique founded on scientific methods. That is, engineering is artistic technique that is founded on reproducible demonstrations. The Science is the production of the taxonomy, and the engineering is the consumption of the taxonomy. An engineer doesn't necessarily even construct things. The engineer details the parameters of the work based on consumption of the taxonomy. For example, the computer does the work, but the programmer provides the instructions, often as an exercise of engineering, but sometimes not.
A programmer could even operate with no engineering at all, even with great skill developed without such a taxonomy, but developed from raw experience. Sometimes an artisan uses skills developed inductively, not deduced from reproducibility. It should be recognized as a valid craft.
It is strange to even need to say that, but today the value of craft is almost inherently refuted by society in the lack of industrial scale, by an appearance of inefficiency, and by a lack of scientific basis. Capital is pushing for skills to be automated by machine learning. The interest in developing human skill is diminishing.
But some development of artistic craft is necessary for humans to understand values and systems. The lack of capital toward art is diminishing the design skills required to produce and operate high-scale mechanisms toward social values.
There is a distinction between micro-reifications which provide detail, and macro-reifications which provide context. This is the distinction between the theme (macro) and the scene (micro). Drama involves both. Macro-reifications tend to be invariant and micro-reification tend to be equivariant.
Operatic musical scores are macro-reifications that provide context and thematic coherence to the micro-reifications of the operatic scenes. That is, the score connects the scenes through the plot structure. The score doesn't correspond to exact movements of the characters, but reveals what isn't captured by the narrow scope of the theme activity. This also applies to film and even video games.
The highest form of dramatic art probably is video gaming. Video games require a strong sense of game-level orientation due to the dynamic nature, and need for input feedback. Video games rely on thematic elements in all available media. It is absolutely essential for gameplay that the player does not lose orientation when the player is in control, especially when complicated operations are required. Think of how effective it is in first-person games to have the equipped weapons in view.
Data structures are tautologies of reifications.
The logical complexity optimization should come after the aesthetic models have been found. Most design work is solving the aesthetic model.
A sensor system needs to work backwards from object media-value: starting with measuring media forces, into object-invariant forces (synchronizable forces), into object-actions, into object-properties.
What can be learned from physiology is not necessarily what to do, but an example of an efficient, workable system design. Physiology is a genetic deduction within biological media, while memetics are the inductions on top.
Induction is deduction from expectation built statistically from prior data. It is entirely correlational. To learn is not necessarily to study, as correlation is not necessarily cause. The genetic deductions represent that study in biological systems.
Lindeberg shows that idealized models of receptive fields need to separate the property effects from the positional effects. That is, separate the invariants from the equivariants. That is, the onset/offset and group are the identity-defining equivariants, but the formants and spectral properties are the invariant media-value of the identity. This properly separates the object from, and places it in, the medium.
The invariant detector pathway is deductive, while the equivariant detector pathway is inductive. Equivariant detectors, representing relative movements, are typically derivative functions driven by some method of holding prior state. That is, memory layers directly inside the nerve pathways.
Keep state on equivariant movements in invariant detectors. That is, when updating the frequency selected by a sensor, keep state on the change in frequency (the glissando percept) detected. Those changes act as vectors in a sequence graph, linking the spatial with the relational. That is also a dynamic depth of learning, where stale values drop into a historical log, like the hippocampus.