The Cognitive-Theoretic Model of the Universe:
A New Kind of Reality Theory
Intelligence in one sense may be defined as the ability to make choices (which efficiently satisfy certain constraints)...another definition may be the amount of global "topological" structure functionally represented by a local "algebra"...I think mathematical concepts such as hypersets, Hopf Algebras and Topoi are vital.
Efficiency is what matters...in Persian the word for 'Self' is "Khod", the word for 'God' is "Khodah", and the word for that which is without cause, unwarranted, gratuitous or unnecessary is "Bee-Khod" (literally 'without Self').
There is room for undecidability, which allows degrees of self-configurative freedom, but not inconsistency.
Some interesting papers:
"Studying the Hamiltonian of a living organism rather than just its biochemical components raises a property of fuzzy-like conservativity which contrasts with the status of physical objects. However, no physical structure is strictly conservative: perpetual motion does not exist, and all corpuscles have a limited duration of life. In a molecule, atoms have different Hamiltonians, and the Hamiltonian of the molecule itself is subject to the nature of its interactions with its environment.
In a more complex system like an ecosystem, all components of individual Hamiltonians interact in a dynamical steady state. It has been demonstrated (Bounias and Bonaly, 2000) that the state of such an ecosystem is determined by the properties of the orbit of each component (which includes species, habitat and resources) in the manifold of functions. All combinations of these parameters are non-linear in time, and the evolution of the system is logically determined by a non-linear convolution: this supports the result obtained here from a more fundamental approach involving the
moments of junction as differential elements of spacetime. The fuzzy-invariance component appearing in biological systems represents a term with topological meaning.
In effect, the convolution of bio-Hamiltonians correlates all their components in a compact space, since it is finite and discrete. The Heine-Borel-Lebesgue theorem states that from any open cover of a compact space a finite subcover can be extracted: the latter is necessarily finite and involves all possible correlations, of which some are actually reflected in a finite section of spacetime. This leaves a choice about which components are selected in a redundant system such as Life, and the presence of a fuzzy operator is therefore justified. On the other hand, while the invariance of moments originates in empirical observations, and remains to be formally proved from a completely independent theory, conservativity has been shown to be fulfilled through a continuum of the geometry of physical objects in a 4-manifold, where only their traces in 3-D sections have a physical meaning."
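For reference, the covering property the passage appeals to can be stated precisely; this is the standard Heine-Borel-Lebesgue formulation of compactness, not anything specific to the quoted paper:

```latex
% Compactness (Heine-Borel-Lebesgue property):
% every open cover of X admits a finite subcover.
\forall \{U_i\}_{i \in I} \text{ open with } X \subseteq \bigcup_{i \in I} U_i,\quad
\exists\, i_1, \dots, i_n \in I \text{ such that } X \subseteq U_{i_1} \cup \dots \cup U_{i_n}
```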
"Let us now turn our attention to the issue of freedom. We have seen that after the occurrence of an event the system may choose between behaving as if there has been a reduction process or not. That is, after the observation of the event, either the system simply behaves as if it were part of the universe and its state were that of the universe, or as if its state were given by the reduction postulate. In the first case the system keeps its entanglement with the rest of the universe (i.e. the environment); in the second it loses its entanglement. The availability of this choice opens the possibility of the existence of free acts. This type of act of the system will not imply any violation whatsoever of the laws of physics, understanding the latter as regularities in the observation of nature. It should be noted that this freedom in the system is not even ruled by a law of probabilities for the possible outcomes."
"The CTMU has a meta-Darwinian message: the universe evolves by hological self-replication and self-selection. Furthermore, because the universe is natural, its self-selection amounts to a cosmic form of natural selection. But by the nature of this selection process, it also bears description as intelligent self-design (the universe is “intelligent” because this is precisely what it must be in order to solve the problem of self-selection, the master-problem in terms of which all lesser problems are necessarily formulated). This is unsurprising, for intelligence itself is a natural phenomenon that could never have emerged in humans and animals were it not already a latent property of the medium of emergence. An object does not displace its medium, but embodies it and thus serves as an expression of its underlying syntactic properties. What is far more surprising, and far more disappointing, is the ideological conflict to which this has led. It seems that one group likes the term “intelligent” but is indifferent or hostile to the term “natural”, while the other likes “natural” but abhors “intelligent”. In some strange way, the whole controversy seems to hinge on terminology. Of course, it can be credibly argued that the argument actually goes far deeper than semantics… that there are substantive differences between the two positions. For example, some proponents of the radical Darwinian version of natural selection insist on randomness rather than design as an explanation for how new mutations are generated prior to the restrictive action of natural selection itself. But this is untenable, for in any traditional scientific context, “randomness” is synonymous with “indeterminacy” or “acausality”, and when all is said and done, acausality means just what it always has: magic. 
That is, something which exists without external or intrinsic cause has been selected for and brought into existence by nothing at all of a causal nature, and is thus the sort of something-from-nothing proposition favored, usually through voluntary suspension of disbelief, by frequenters of magic shows. Inexplicably, some of those taking this position nevertheless accuse of magical thinking anyone proposing to introduce an element of teleological volition to fill the causal gap. Such parties might object that by “randomness”, they mean not acausality but merely causal ignorance. However, if by taking this position they mean to belatedly invoke causality, then they are initiating a causal regress. Such a regress can take one of three forms: it can be infinite and open, it can terminate at a Prime Mover which itself has no causal explanation, or it can form some sort of closed cycle doubling as Prime Mover and that which is moved. But a Prime Mover has seemingly been ruled out by assumption, and an infinite open regress can be ruled out because its lack of a stable recursive syntax would make it impossible to form stable informational boundaries in terms of which to perceive and conceive of reality."
Professor Farnsworth is accused of being a creationist, but his attorney, Bender, pleads not guilty by reason of insanity.
Composition may be accidental/random/indeterminate, involuntary/default/determinate, or voluntary/purposeful/self-determinate.
I believe there is no pre-existing designer, rather design is a result of seeking homeostasis between intrinsic and extrinsic forces.
"Voluntary" exchanges promote efficiency, or trade-offs that fully exploit the ability to make choices; that ability is part of making decisions and of planning for the future within anticipatory models of the system.
I also think that we play a role in the formation of higher-order relations between "Natural Law" and "State"...though I wouldn't characterize it as conscious design...rather design is a constructive process which incorporates second-order "meta-cybernetic" interactions between individual and collective.
The station of these Manifestations is unique in creation. Their essential nature is twofold: they are at once human and divine. But they are not identical with God, the Creator, Who is Unknowable. Of God, Bahá'u'lláh has written,
He, in truth, hath, throughout eternity, been one in His Essence, one in His attributes, one in His works. Any and every comparison is applicable only to His creatures, and all conceptions of association are conceptions that belong solely to those that serve Him. Immeasurably exalted is His Essence above the descriptions of His creatures. He, alone, occupieth the Seat of transcendent majesty, of supreme and inaccessible glory. The birds of men's hearts, however high they soar, can never hope to attain the heights of His unknowable Essence. It is He Who hath called into being the whole of creation, Who hath caused every created thing to spring forth at His behest.1
Furthermore, Bahá'u'lláh, addressing God in a prayer, says:
Exalted, immeasurably exalted art Thou above any attempt to measure the greatness of Thy Cause, above any comparison that one may seek to make, above the efforts of the human tongue to utter its import! From everlasting Thou hast existed, alone with no one else beside Thee, and wilt, to everlasting, continue to remain the same, in the sublimity of Thine essence and the inaccessible heights of Thy glory.
And when Thou didst purpose to make Thyself known unto men, Thou didst successively reveal the Manifestations of Thy Cause, and ordained each to be a sign of Thy Revelation among Thy people, and the Day-Spring of Thine invisible Self amidst Thy creatures...2
Describing the relationship between the Manifestations of God and Their Creator, Bahá'u'lláh used the analogy of the mirror: God is as the Sun, and the Manifestations are as Mirrors that reflect that divine light -- but they are in no way to be considered as identical to that Sun:
These sanctified Mirrors...are, one and all, the Exponents on earth of Him Who is the central Orb of the universe, its Essence and ultimate Purpose. From Him proceed their knowledge and power; from Him is derived their sovereignty. The beauty of their countenance is but a reflection of His image, and their revelation a sign of His deathless glory.3
Bahá'u'lláh's central message for humanity in this day is one of unity and justice. "The best beloved of all things in My sight is justice,"4 He wrote, and "The earth is but one country, and mankind its citizens"5 in two often-quoted passages. He also stated, "The well-being of mankind, its peace and security, are unattainable unless and until its unity is firmly established."6 This is the prescription of God, the divine and all-knowing Physician, for our ailing world.
And we don't need Satan, either. We just don't believe in him. All wrong-doing, all sins are upon our own souls. When we are called to account for our trespasses, we must take full responsibility. As a Baha'i, you can't say, "The Devil made me do it." You can't say, "He tempted me, Lord."
While Christians do take responsibility for giving in to temptation, the sin doesn't originate within them. It's outside them, because, in their minds, Satan preys upon them.
So I don't believe in demons, and I don't believe in hell. Hell, after all, is not even a word you can find within the Bible, except in translation. It was taken from the Norse goddess, Hel, which was also the name of her dominion. In the sixth century, Pope Gregory the Great sent missionaries out into Europe to preach to the pagans. He was fascinated by pagan culture and folklore and admonished his followers to accommodate their culture, not force people into the Christian Faith. And so the missionaries began translating words with very different meanings--Sheol, Hades, Gehenna--into the one word: hell.
I'm getting too academic, too distant. I can't displace the pain I'm feeling this morning with scholastic ramblings. The point is that I don't believe in the "lake of fire." I don't believe in demons, no matter how vivid my dreams are. And I don't believe that a battle is being waged for my eternal soul."
"Fundamental principles of such an association of hereditary groups, not originally or necessarily related by blood, were
ISONOMAI, equality of assignment in material and social amenities,
ISEGORIA, equality of utterance,
ISOTELEIA, equality of function and responsibility.
The result was literally ELEUTHERIA, ‘grown-up-ness’ (to translate the Greek word for freedom); every member was his own master, so long as he was master of himself, of his own behaviour (that is) toward the others.”
"The philosophical principle, put forth by Monod (1971) and others, that under certain conditions biological evolution will form designs that are in accordance with the laws of nature is referred to as teleonomy."
"In the first of three articles, we review the philosophical foundations of an approach to quantum gravity based on a principle of representation-theoretic duality and a vaguely Kantian-Buddhist perspective on the nature of physical reality which I have called ‘relative realism’. Central to this is a novel answer to Plato's cave problem in which both the world outside the cave and the ‘set of possible shadow patterns’ in the cave have equal status. We explain the notion of constructions and ‘co’constructions in this context and how quantum groups arise naturally as a microcosm for the unification of quantum theory and gravity. More generally, reality is ‘created’ by choices made and forgotten that constrain our thinking, much as mathematical structures have a reality created by a choice of axioms, but the possible choices are not arbitrary and are themselves elements of a higher level of reality. In this way the factual ‘hardness’ of science is not lost while at the same time the observer is an equal partner in the process. We argue that the ‘ultimate laws’ of physics are then no more than the rules of looking at the world in a certain self-dual way, or conversely that going to deeper theories of physics is a matter of letting go of more and more assumptions. We show how this new philosophical foundation for quantum gravity leads to a self-dual and fractal-like structure that informs and motivates the concrete research reviewed in parts II, III. Our position also provides a kind of explanation of why things are quantized and why there is gravity in the first place, and possibly why there is a cosmological constant."
"NICK: Little egos, yes. That's one guess, that the mind is the software in the hardware of the brain. The other guess is that mind is somehow an emergent feature of certain complex biological systems--that it will arise whenever the biology gets complicated enough. Self-awareness is just an unsuspected evolutionary possibility of living meat. Elemental Mind explores the hypothesis that none of that is true. It's a long-shot--that mind is as fundamental to nature as light or electricity. It's all around in one form or another, and our minds are just specific examples of it, specific ways that the Universal Mind has manifested. So I'm looking for evidence for this sort of thing, and ways of making Elemental Mind more plausible. By the way, I tried to think of a word for the other kind of mind, and the best I could come up with is molecular mind. Molecular mind versus elemental mind. Molecular mind is where you put stuff together and make a mind, and elemental mind is where mind is already fundamental. So you don't have to make it, it's already there. All you have to do is have systems that will manifest it. So my latest project is to work on that, and make that make sense."
"Emergence is a universal phenomenon that can be defined mathematically in a very general way. This is useful for the study of scientifically legitimate explanations of complex systems, here defined as hyperstructures. A requirement is that observation mechanisms are considered within the general framework. Two notions of emergence are defined, and specific examples of these are discussed.
Key words: mathematical definition of emergence, complex systems, hyperstructures, observation mechanisms."
"The Memory Evolutive Neural Systems (or MENS) studied in this paper are a model for cognitive systems of animals, up to a theory of mind for man, which incorporates a basic level Neur formed by the neural system, and higher levels, deduced from it, representing an 'algebra of mental objects' (in the terms of Changeux, 1983). The main idea is that these higher levels emerge from the basis through iterative binding processes, so that a mental object appears as a family of synchronous assemblies of neurons, then of assemblies of assemblies of neurons, and so on. They develop over time through successive 'complexification processes', up to the formation of higher cognitive processes and consciousness. Their evolution is internally self-regulated and relies on the formation of a memory in which different data, experiences and procedures can be stored in a flexible manner, to be later recalled or actualized for better adaptation. The model takes account of exchanges of a physical kind with the environment, through receptors and effectors which confer on it a kind of "embodiment" (Varela, 1989), and, for higher animals, through education and cultural activities, stressing the role of society in the development of higher processes. The notion of self relies on the development of a permanent global invariant, the archetypal core, which integrates the main corporal, perceptual, behavioral, procedural and semantic experiences, with their emotional overtones; its self-maintained activation is at the root of consciousness, characterized in particular by temporal extension processes."
"With the model, Fontana and Buss obtained three levels of organization, with coexistence of self-maintaining organizations on the highest level. Eigen and Schuster introduced the notion of a hypercycle, which is an interrelated hierarchy of cyclic reaction networks. They considered the emergence of hypercycles as a possible mechanism for the creation of biological hierarchies. These systems can also be viewed as instances of hyperstructures [4, 5]."
"Nils Baas has been emphasizing for many years, in print and in private communication, the conviction that the usual notions of n-category, infinity-category and omega-category in higher category theory are not naturally suited for describing
extended cobordisms, such as appear in the tangle hypothesis in extended quantum field theory;
hierarchical systems, such as appear in …
The point is essentially that the directedness of morphisms and, related to that, the binary notion of source and target in categories and higher categories are notions alien to these contexts, which in applications have to be, and essentially are, removed again in a second step by adding extra structure and requiring further properties, such as various monoidal structures and dualities, which allow one to change the direction of morphisms, to collect objects together, etc.
In contrast to that, Baas pointed out that more naturally the above situations are thought of from the beginning in terms of hierarchies of what he calls bonds, where, quite generally, a bond is an object equipped with information of how a collection of sub-bonds sits inside it, bound by the bond."
"Teleologic Evolution (TE) is a process of alternating replication and selection through which the universe "creates itself" along with the life it contains. This process, called telic recursion, is neither random nor deterministic in the usual senses, but self-directed. Telic recursion occurs on global and local levels respectively associated with the evolution of nature and the evolution of life; the evolution of life thus mirrors that of the universe in which it occurs. TE improves on traditional approaches to teleology by extending the concept of nature in a way eliminating any need for "supernatural" intervention, and improves on neo-Darwinism by addressing the full extent of nature and its causal dynamics.
In the past, teleology and evolution were considered mutually exclusory. This was at least partially because they seem to rely on different models of causality. As usually understood, teleology appears to require a looping kind of causality whereby ends are immanent everywhere in nature, even at the origin (hence the causal loop). Evolution, on the other hand, seems to require a combination of ordinary determinacy and indeterminacy in which the laws of nature deterministically guide natural selection, while indeterminacy describes the "random" or "chance" dimension of biological mutation.
In contrast, the phrase teleologic evolution expresses an equivalence between teleology and evolution based on extended, refined concepts of nature and causality. This equivalence is expressed in terms of a self-contained logic-based model of reality identifying theory, universe and theory-universe correspondence, and depicting reality as a self-configuring system requiring no external creator. Instead, reality and its self-creative principle are identified through a contraction of the mapping which formerly connected the source and output of the teleology function. In effect, the creative principle itself becomes the ultimate form of reality.
The self-configuration of reality involves an intrinsic mode of causality, self-determinacy, which is logically distinct from conventional concepts of determinacy and indeterminacy but can appear as either from a localized vantage. Determinacy and indeterminacy can thus be viewed as "limiting cases" associated with at least two distinct levels of systemic self-determinacy, global-distributed and local-nondistributed. The former level appears deterministic while the latter, which accommodates creative input from multiple quasi-independent sources, dynamically adjusts to changing conditions and thus appears to have an element of "randomness".
According to this expanded view of causality, the Darwinian processes of replication and natural selection occur on at least two mutually-facilitative levels associated with the evolution of the universe as a whole and the evolution of organic life. In addition, human technological and sociopolitical modes of evolution may be distinguished, and human intellectual evolution may be seen to occur on collective and individual levels. Because the TE model provides logical grounds on which the universe may be seen to possess a generalized form of intelligence, all levels of evolution are to this extent intelligently directed, catalyzed and integrated."
"Dually, one can view processes occurring in nature as information processing. Such processes include self-assembly, developmental processes, gene regulation networks, protein-protein interaction networks, biological transport (active transport, passive transport) networks, and gene assembly in unicellular organisms. Efforts to understand biological systems also include engineering of semi-synthetic organisms, and understanding the universe itself from the point of view of information processing. Indeed, the idea was even advanced that information is more fundamental than matter or energy. The Zuse-Fredkin thesis, dating back to the 1960s, states that the entire universe is a huge cellular automaton which continuously updates its rules. Recently it has been suggested that the whole universe is a quantum computer that computes its own behaviour."
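The cellular-automaton picture invoked by the Zuse-Fredkin thesis is easy to make concrete. Below is a minimal sketch of an elementary cellular automaton in Python; the choice of Rule 110, the grid width, and the single-seed initial condition are illustrative assumptions, not details from the quoted passage:

```python
# Minimal elementary cellular automaton, in the spirit of the
# Zuse-Fredkin picture of the universe as a discrete update rule.

def step(cells, rule=110):
    """Apply one synchronous update of an elementary CA (periodic edges).

    Each cell's new value is the bit of `rule` indexed by the 3-cell
    neighborhood read as a binary number (left, self, right)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=16, steps=8, rule=110):
    """Evolve a single 'seed' cell and return the full history."""
    cells = [0] * width
    cells[width // 2] = 1
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Running this prints the familiar triangular growth pattern; swapping in other rule numbers (0-255) changes the dynamics, which is the point of the thesis: very simple local update rules can generate open-ended global structure.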
"In the cosmic organism theory (Chung, 2002a), different universes are different cosmic organs of the cosmic organism that is the multiverse. Different universes are different expressions of the common cosmic gene. The cosmic gene is the cosmic digital code (Chung, 2002b) in the cosmic dimension. The cosmic digital code includes the codes for the object structure and the space structure. In their analysis of the multiverse, Ellis, Kirchner, and Stoeger (2004) state that there is a definite causal connection, or "law of laws", relating all the universes in these multiverses. The law of laws can be described as the ultimate law underlying fundamental laws, which include relativity, quantum mechanics, and the laws governing the existence of different physical constants, dimensionality, particle content, and the size of the universe in the multiverse. The ultimate law connects the fundamental laws.
In the cosmic organism theory, the ultimate law is the cosmic gene. The cosmic organism theory follows Alfred North Whitehead's philosophy of organism. According to Whitehead (Whitehead, 1929), the actual world is a process, and the process is the becoming of actual entities. An actual entity is not an inert and permanent substance, but a relational process of becoming. Its ‘being’ is constituted by its ‘becoming’. Michel Bounias (2002) applied the Hamiltonian concept to living organisms in the evolutionary process.
The paper is divided into seven sections: the introduction, the cosmic digital code for the object structure, the digital code for the space structure, the cosmic dimension, cosmology, force fields, and the summary."
"Composition is of three kinds.
1. Accidental composition.
2. Involuntary composition.
3. Voluntary composition.
There is no fourth kind of composition. Composition is restricted to these three categories."
"Diagram 1: (1) Indeterminacy; (2) External determinacy; (3a) Self-determinacy; (3b) Intrinsic self-determinacy. (The effectual aspect of the object or event has simply been moved inside the causal aspect, permitting the internalization of the blue arrow of determinacy and making causality endomorphic.)
Determinacy and indeterminacy…at first glance, there seems to be no middle ground. Events are either causally connected or they are not, and if they are not, then the future would seem to be utterly independent of the past. Either we use causality to connect the dots and draw a coherent picture of time, or we settle for a random scattering of independent dots without spatial or temporal pattern and thus without meaning. At the risk of understatement, the philosophical effects of this assumed dichotomy have been corrosive in the extreme. No universe that exists or evolves strictly as a function of external determinacy, randomness or an alternation of the two can offer much in the way of meaning. Where freedom and volition are irrelevant, so is much of human experience and individuality.
But there is another possibility after all: self-determinacy. Self-determinacy is like a circuitous boundary separating the poles of the above dichotomy…a reflexive and therefore closed boundary, the formation of which involves neither preexisting laws nor external structure. Thus, it is the type of causal attribution suitable for a perfectly self-contained system. Self-determinacy is a deep but subtle concept, owing largely to the fact that unlike either determinacy or randomness, it is a source of bona fide meaning. Where a system determines its own composition, properties and evolution independently of external laws or structures, it can determine its own meaning, and ensure by its self-configuration that its inhabitants are crucially implicated therein."
"...according to the CTMU, intelligence inheres in the universe and is distributed over its contents. The universe isn't just assumed to be intelligent; it is shown to possess a collection of formal properties which together imply intelligence and volition (this involves the concept of "conspansive spacetime", which adds a certain kind of connectivity while making the universe evolve like a generative grammar as well as a dynamical system). It also involves the Telic Principle, analogous to the Anthropic Principle but tailor-made for an informationally self-contained system.
Since UBT is the absence of all constraint, it contains nothing but possibility. It's pure self-actualizative potential. Self-actualizative potential is not ordinary potential, which pertains only to states, but telic potential, which applies to "infocognitive" relationships of natural law and state. One has to be careful about the word "contains" in this context. Ordinarily, to be contained by something means to be somehow "bound" or "bounded" by it in a logical or geometric sense. Obviously, that's not the case here. UBT precludes any distributive binding constraint and can be likened to "anticonstraint"."
"Question: If God does in fact continuously create reality on a global level such that all prior structure must be relativized and reconfigured, is there any room for free-will?
Answer: Yes, but we need to understand that free will is stratified. As a matter of ontological necessity, God, being ultimately identified with UBT, has "free will" on the teleological level...i.e., has a stratified choice function with many levels of coherence, up to the global level (which can take all lower levels as parameters). Because SCSPL local processing necessarily mirrors global processing - there is no other form which it can take - secondary telors also possess free will. In the CTMU, free will equates to self-determinacy, which characterizes a closed stratified grammar with syntactic and telic-recursive levels; SCSPL telors cumulatively bind the infocognitive potential of the ontic groundstate on these levels as it is progressively "exposed" at the constant distributed rate of conspansion."
"More is different: The potential for complexity increases with cardinality; with large numbers of elements comes combinatorial variety and the potential for the sort of multilevel logical structure that typifies biological organisms and modern computers alike. This is a fundamental precept of complexity theory. Wheeler poses a question: “Will we someday understand time and space and all the other features that distinguish physics—and existence itself—as the self-generated organs of a self-synthesized information system?” ... And the CTMU describes the universe as just the sort of complex, teleologically self-variegating, self-synthesized information system prescribed by more is different, telic-recursively explicating multiplicity and diffeonesis from the unity and synesis of distributed SCSPL syntax, the (unique) CTMU counterpart of what has sometimes been called “the Implicate Order”."
"More is Different: Broken Symmetry and the Nature of the Hierarchical Structure of Science"
"This profound concept is the transactional choice principle. The same non-linearity that makes mass become infinite at the speed of light in special relativity causes particles to have two solutions, one travelling in each direction in space-time - forwards and backwards with opposite energies. All the forces are mediated by virtual particles which appear transiently out of the 'fabric' of quantum uncertainty and thus must have both an emitter (creator) and an absorber (annihilator). The transactional principle asserts that all particles, including the real ones which make up radiation and matter, are similarly linked and that a space-time handshaking occurs between emitter and absorber. The essential difference between real and virtual particles is that every possible virtual particle coexists, but only certain real outcomes occur in our experience. In this case, the boundary conditions say only one real interaction out of all the many possibilities can occur. The Feynman diagram and transaction are illustrated in fig 6 centre.
The transactional principle asserts that this choice is made through space-time handshaking and that the apparent 'randomness' of quantum uncertainty may mask a very complex web of such hand-shaking interactions across space-time, in which quantum 'information' is exchanged, both from future to past and from past to future. This principle neatly explains all the known mysteries of quantum non-locality. It may also unearth free-will.
Far from being an accidental irrelevancy in the universe at large, biomolecules are the final interaction in the cosmogenic wave-particle hierarchy, beginning with cosmic symmetry-breaking. The most energetic of these forces interact first to form composites like the proton and then in stars to form the atomic nuclei. These then combine chemically at lower energies (because the electromagnetic force is weaker) to form the next fractal hierarchy of interaction, atoms and molecules, and finally at lower energies still to form the global weak bonding associations of large biomolecular assemblies."
"A Different Universe is a very new book by Stanford physics professor (and Nobel winner) Robert Laughlin. His thesis is that we are leaving the age of reductionism and entering the age of emergence. By this he means that we may well have learned one set of fundamental laws of how matter works, but there are many limits to what we can do with these laws in terms of predicting higher-level collective phenomena."
"A novel theory of reflective consciousness, will and self is presented, based on modeling each of these entities using self-referential mathematical structures called hypersets. Pattern theory is used to argue that these exotic mathematical structures may meaningfully be considered as parts of the minds of physical systems, even finite computational systems. The hyperset models presented are hypothesized to occur as patterns within the "moving bubble of attention" of the human brain and any brainlike AI system. They appear to be compatible with both panpsychist and materialist views of consciousness, and probably other views as well."
A network automaton (plural network automata) is a mathematical system consisting of a network of nodes that evolves over time according to predetermined rules. It is similar in concept to a cellular automaton, but much less studied.
Stephen Wolfram's book A New Kind of Science, which is primarily concerned with cellular automata, briefly discusses network automata, and suggests (without positive evidence) that the universe might at the very lowest level be a network automaton.
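To make the idea concrete, here is a minimal sketch of a network automaton. The update rule (odd-parity state flipping, edges dropped between agreeing nodes) is entirely my own invention for illustration and does not come from Wolfram's book; the point is only that, unlike a cellular automaton, the connectivity itself evolves along with the node states.

```python
# Toy network automaton: nodes carry binary states, and the rule
# rewires the edge structure as well as updating states.

def step(states, edges):
    """One synchronous update: a node becomes active iff an odd number
    of its neighbours are active; an edge survives only while its two
    endpoints disagree. (Hypothetical rule, for illustration only.)"""
    new_states = {}
    for node in states:
        active = sum(states[m] for m in edges.get(node, ()))
        new_states[node] = active % 2
    new_edges = {n: [m for m in ms if new_states[n] != new_states[m]]
                 for n, ms in edges.items()}
    return new_states, new_edges

states = {0: 1, 1: 0, 2: 1}
edges = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
states, edges = step(states, edges)
```

Because the "geometry" (the edge set) is itself part of the evolving state, the distinction between the lattice and its contents dissolves, which is precisely what makes network automata harder to study than cellular automata.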
Chu Spaces: Automata with Quantum Aspects:
"Chu spaces are a recently developed model of concurrent computation extending automata theory to express branching time and true concurrency. They exhibit in a primitive form the quantum mechanical phenomena of complementarity and uncertainty. The complementarity arises as the duality of information and time, automata and schedules, and states and events. Uncertainty arises when we define a measurement to be a morphism and notice that increasing structure in the observed object reduces clarity of observation. For a Chu space this uncertainty can be calculated numerically in an attractively simple way directly from its form factor to yield the usual Heisenberg uncertainty relation. Chu spaces correspond to wavefunctions as vectors of Hilbert space, whose inner product operation is realized for Chu spaces as right residuation and whose quantum logic becomes Girard’s linear logic."
"Chu spaces generalize the notion of topological space by dropping the requirements that the set of open sets be closed under union and finite intersection, that the open sets be extensional, and that the membership predicate (of points in open sets) be two-valued. The definition of continuous function remains unchanged other than having to be worded carefully to continue to make sense after these generalizations."
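A Chu space over K is just a set of points A, a set of states X, and a matrix r: A × X → K. The tiny sketch below (my own toy example, not from Pratt's papers) encodes a two-point topological space as a Chu space over {0, 1}, with r(p, U) = 1 iff point p lies in open set U; the generalization described above consists in no longer requiring the columns to be closed under union and finite intersection.

```python
# A Chu space over {0,1}: points A, states X, matrix r: A x X -> {0,1}.
# Here the states happen to be the open sets of a topology on {a, b},
# so this particular Chu space *is* a topological space.

A = ["a", "b"]
X = [frozenset(), frozenset("a"), frozenset("ab")]  # the open sets
r = {(p, U): int(p in U) for p in A for U in X}     # membership matrix

# Each point's "row" is its pattern of membership across all states;
# extensionality here means no two points share a row.
rows = {p: tuple(r[(p, U)] for U in X) for p in A}
```

Dropping any of the three topological requirements leaves the row/column matrix intact, which is why the same machinery can represent relational structures as well.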
Chu spaces unify a wide range of mathematical structures, including the following:
Relational structures such as sets, directed graphs, posets, and small categories.
Algebraic structures such as groups, rings, fields, modules, vector spaces, lattices, and Boolean algebras.
Topologized versions of the above, such as topological spaces, compact Hausdorff spaces, locally compact abelian groups, and topological Boolean algebras.
Algebraic structures can be reduced to relational structures by a technique described below. Relational structures constitute a large class in their own right. However, when adding topology to relational structures, the topology cannot be incorporated into the relational structure but must continue to use open sets.
Chu spaces offer a uniform way of representing relational and topological structure simultaneously. This is because Chu spaces can represent relational structures via a generalization of topological spaces which allows them to represent topological structure at the same time using the same machinery."
"The concept of syndiffeonesis can be captured by asserting that the expression and/or existence of any difference relation entails a common medium and syntax, i.e. the rules of state and transformation characterizing the medium. It is from these rules that the relation derives its spatial and temporal characteristics as expressed within the medium. Thus, a syndiffeonic relation consists of a difference relation embedded in a relational medium whose distributed rules of structure and evolution support its existence.
Every syndiffeonic relation has synetic and diffeonic phases respectively exhibiting synesis and diffeonesis (sameness and difference, or distributivity and parametric locality), and displays two forms of containment, topological and descriptive. The medium is associated with the synetic phase, while the difference relation is associated with the diffeonic phase (because the rules of state and transformation of the medium are distributed over it, the medium is homogeneous, intrinsically possessing only relative extension by virtue of the difference relationships it contains). Because diffeonic relands are related to their common expressive medium and its distributive syntax in a way that combines aspects of union and intersection, the operation producing the medium from the relands is called unisection. The synetic medium represents diffeonic potential of which the difference relationship is an actualization.
Syndiffeonic relations can be regarded as elements of more complex infocognitive lattices with spatial and temporal (ordinal, stratificative) dimensions. Interpreted according to CTMU duality principles, infocognitive lattices comprise logical relationships of state and syntax. Regressing up one of these lattices by unisection ultimately leads to a syntactic medium of perfect generality and homogeneity…a universal, reflexive “syntactic operator”.
In effect, syndiffeonesis is a metalogical tautology amounting to self-resolving paradox. The paradox resides in the coincidence of sameness and difference, while a type-theoretic resolution inheres in the logical and mathematical distinction between them, i.e. the stratificative dimension of an infocognitive lattice. (Note: the type-theoretic resolution of this paradox is incomplete; full resolution requires MU, a kind of “meta-syndiffeonesis” endowing the infocognitive lattice of spacetime with a higher level of closure.) Thus, reducing reality to syndiffeonesis amounts to “paradoxiforming” it. This has an advantage: a theory and/or reality built of self-resolving paradox is immunized to paradox.
So far, we know that reality is a self-contained syndiffeonic relation. We also have access to an instructive sort of diagram that we can use to illustrate some of the principles which follow. So let us see if we can learn more about the kind of self-contained syndiffeonic relation that reality is.
The Principle of Linguistic Reducibility
Reality is a self-contained form of language. This is true for at least two reasons.
First, although it is in some respects material and concrete, reality conforms to the algebraic definition of a language. That is, it incorporates
(1) representations of (object-like) individuals, (space-like) relations and attributes, and (time-like) functions and operations;
(2) a set of “expressions” or perceptual states; and
(3) a syntax consisting of (a) logical and geometric rules of structure, and (b) an inductive-deductive generative grammar identifiable with the laws of state transition.
Second, because perception and cognition are languages, and reality is cognitive and perceptual in nature, reality is a language as well.
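The three-part "algebraic definition of a language" above can be caricatured in code. This is purely an illustrative toy of my own devising, not anything from the CTMU text: object-like individuals, a space-like relation, perceptual "expressions" as states, and a law of state transition playing the role of the generative grammar.

```python
# Toy reading of the algebraic definition of a language (illustrative only).

individuals = {"p1", "p2"}                 # (1) object-like individuals
relations = {("p1", "near", "p2")}         # (1) space-like relations

def tick(state):
    """(3b) A law of state transition acting as a 'grammar rule':
    uniform motion, (position, velocity) -> (position + velocity, velocity)."""
    x, v = state
    return (x + v, v)

expressions = [(0, 1)]                     # (2) the set of "expressions"/states
for _ in range(3):                         # generate further expressions
    expressions.append(tick(expressions[-1]))
```

The "grammar" here generates each perceptual state from the previous one, which is the sense in which laws of state transition are identified with a generative grammar in the passage above.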
While there have been many reductionist programs in science and philosophy, the promised reduction is always to the same thing: a theoretical language.
Theoretical reduction involves a regressive unbinding of progressive informational constraints in order to achieve increasingly basic explanations. Closed theoretical signatures are ripped open and reduced to more basic concepts that can be reformed into more basic and expressive signatures. However, the informational part of the regress terminates where further reduction would compromise intelligibility; there can be no further reductive regress through increasingly fundamental theoretic strata once the requirements of regression, reduction, theorization and stratification have themselves been lost. Beyond this point, infocognition gives way to informational and cognitive potential, or telesis.
The process of reducing distinctions to the homogeneous syntactic media that support them is called syndiffeonic regression. This process involves unisection, whereby the rules of structure and dynamics that respectively govern a set of distinct objects are reduced to a “syntactic join” in an infocognitive lattice of syntactic media. Unisection is a general form of reduction which implies that all properties realized within a medium are properties of the medium itself.
Where emergent properties are merely latent properties of the teleo-syntactic medium of emergence, the mysteries of emergent phenomena are reduced to just two: how are emergent properties anticipated in the syntactic structure of their medium of emergence, and why are they not expressed except under specific conditions involving (e.g.) degree of systemic complexity?"
I read 'On Intelligence' when it first came out; definitely significant, especially from an 'anticipatory computing' perspective...interesting how he makes predictions about how we make predictions.
"Enhanced neural activity in anticipation of a sensory event
1. In all areas of cortex, Hawkins (2004) predicts "we should find anticipatory cells", cells that fire in anticipation of a sensory event.
Note: As of 2005 mirror neurons have been observed to fire before an anticipated event."
"Besides these subjectively satisfying explanations, the framework also makes a number of testable predictions. For example, the important role that prediction plays throughout the sensory hierarchies calls for anticipatory neural activity in certain cells throughout sensory cortex. In addition, cells that 'name' certain invariants should remain active throughout the presence of those invariants, even if the underlying inputs change. The predicted patterns of bottom-up and top-down activity - with former being more complex when expectations are not met - may be detectable, for example by functional magnetic resonance imaging (fMRI)."
"Major hypotheses are that such propagation is more efficient in more intelligent brains, that the essential function of cognition is anticipation, and that activation cycles "up" and "down" between percepts and concepts in a bootstrapping fashion. The framework is inspired by recurrent connectionist models and by the "memory-prediction" framework proposed by the brain theorist Jeff Hawkins in his book "On Intelligence"."
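The memory-prediction loop can be caricatured in a few lines. The sketch below is an assumption-laden toy of my own, not Hawkins's actual model: a "higher level" stores sequence memory, predicts the next input, and only a surprise signal propagates upward, so bottom-up activity is larger when expectations are violated (exactly the fMRI-detectable pattern predicted above).

```python
# Toy memory-prediction sketch (hypothetical, not Hawkins's architecture):
# memory maps each input to the next input last seen after it.

memory = {}    # learned transitions: input -> predicted successor
errors = []    # 1 = prediction violated (surprise), 0 = prediction met

prev = None
for x in [1, 2, 1, 2, 1, 3]:
    predicted = memory.get(prev)        # top-down prediction
    errors.append(0 if predicted == x else 1)  # bottom-up surprise signal
    if prev is not None:
        memory[prev] = x                # update sequence memory
    prev = x
```

After the alternating pattern is learned, the surprise signal drops to zero; the novel final input revives it, mirroring the claim that bottom-up activity is "more complex when expectations are not met".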
"Under the heading anticipation, we encounter subjects such as preventive caching, robotics, advanced research in biology (defining the living) and medicine (especially genetically transmitted disease), along with fascinating studies in art (music, in particular). These make up a broad variety of fundamental and applied research focused on a controversial concept. Inspired by none other than Einstein–he referred to spooky actions at a distance, i.e., what became known as quantum non-locality–the title of the paper is meant to submit my hypothesis that such processes are related to quantum non-locality. The second goal of this paper is to offer a cognitive framework–based on my early work on mind processes (1988)–within which the variety of anticipatory horizons invoked today finds a grounding that is both scientifically relevant and epistemologically coherent. The third goal of this paper is to identify the broad conceptual categories under which we can identify progress made so far and possible directions to follow. The fourth and final goal is to submit a co-relation view of anticipation and to integrate the inclusive recursion in a logic of relations that handles co-relations.
Keywords: auto-suggestive memory, co-relation, non-locality, quantum semiotics, self-constitution, interactive computation"
Anticipatory Skin Conductance Responses: A Possible Example of Decision Augmentation Theory
"Studies of anomalous anticipatory effects in the human autonomic nervous system were first conducted by Vassy in the late 1960s, but reported later (Vassy, 1978). Vassy used electrodermal activity as a dependent variable in a classical conditioning experiment. The design investigated whether a conditioned response to a stimulus would appear before a randomly timed unconditioned electroshock stimulus.
They found significant differential anticipatory effects in skin conductance levels five seconds before the stimuli were randomly selected and displayed."
DAT (Decision Augmentation Theory) : a theory associated with an attempt to reconceptualise psychokinesis as a precognition-based selection process rather than one of actual influence.
"Teleonomy is the science of adaptation. It is "the quality of apparent purposefulness in living organisms that derives from their evolutionary adaptation". The term was coined to stand in contrast with teleology. A teleological process is one that is planned in a purposeful way by a sentient, intelligent being. Artifacts that emerge from such a process are the products of foresight, and intent. A teleonomic process, such as evolution, produces products of stunning intricacy without the benefit of such a guiding intelligence. Instead, it blindly accrues information about what has worked, exploiting feedback from the environment via the selection and survival of fitter coalitions of such insight. It unwittingly choreographs a grand audition of a horde of variations on what it has learned thus far, culling the also-rans, and casting the winners in its next production. It hoards hindsight, and uses it to make "predictions" about how to cope with the future."
Models, Archetypes, Teleonomy
"There are signs these days that something akin to the old notion of telos is about to be revived. Quietly discarded by the mainstream about the time Laplace declared God to be an unnecessary hypothesis, telos went the way of "arguments from design," and for a couple of centuries ceased to be a respectable topic in polite scientific company.
Our eyebrows should be raised, then, at seeing telos brought back into the discussion by a man noted for his advanced speculation about the nature of living systems. Robert Rosen, theoretical biologist at Dalhousie, has made his reputation by examining the role of modeling among organisms; in particular by discussing the distinction between representation and reality, and its pertinence to the development of perception in the living process. Although Rosen's framework is conventionally Darwinian, in a book called Anticipatory Systems he breaks with established doctrine by suggesting something biologically unorthodox, i.e., that "a change of state in the present occurs as a function of some predicted future state, and [that] the agency through which the prediction is made must be, in the broadest sense, a model..."
"Homeotely: The term homeotely signifies that subsystems will direct their behaviour in such a way that it is beneficial for the well-being of the overall system. When applied to the evolutionary process, it states that subsystems will develop in such a way that they are beneficial for the well-being of the overall system. At first glance, this sounds embarrassingly teleological. However, if we recognize the fact that the behaviour as well as the evolution of systems is guided by context-sensitive self-interest, teleology vanishes into thin air. Context-sensitive self-interest is a systemic evolutionary principle: organisms are forced by their selfish genes to seek nothing but their own advantage - but the environment in which they develop, or the system of which they are a subsystem, only allows a limited set of developments and patterns of behaviour, any breach of the rules being punished with elimination. For an animal endowed with choice this harsh law transforms into an ethical principle: since its behaviour is only partly genetically determined, the word sensitive assumes its active meaning, i.e. it refers to conscious reactions to perceived or anticipated effects of behaviour or development on the overall system. (LM, based on Edward Goldsmith, The Way)"
"Broadly speaking, what I propose to call metagenetics is the study of how the machinery of inheritance, which mediates all evolution, itself evolves. It is, so to speak, the 'why' science of genetics, which attempts to explain what geneticists, having discovered it, have long taken for granted."
"The genetic code sure is interesting. Irrespective of its origin, the code seems to be optimized for evolution and maintain its own functional integrity. Whatever the explanation for the origins of the code, whether intentional agency, only RV+NS, self-organization or a combination of these, the fact that these processes converged on a single, reasonably optimal code that is able to facilitate evolution makes it look like it was an inevitable result from the system. The system seems to be rigged and biased towards certain outcomes similar to the evolution of life. Why?"
"However, any homeostasis is impossible without reaction - because homeostasis is and must be a "feedback" phenomenon.
The phrase "reactive homeostasis" is simply short for "reactive compensation reestablishing homeostasis", that is to say, "reestablishing a point of homeostasis." - it should not be confused with a separate kind of homeostasis or a distinct phenomenon from homeostasis; it is simply the compensation (or compensatory) phase of homeostasis."
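The point that homeostasis is necessarily a feedback phenomenon, with "reactive homeostasis" naming its compensation phase, can be shown with a minimal proportional-feedback loop. The set point, gain, and perturbation values below are arbitrary illustration, not physiological data.

```python
# Minimal feedback sketch of homeostasis: a compensatory reaction
# proportional to the deviation pulls the variable back to its set point.

SET_POINT = 37.0   # e.g. a body-temperature-like set point (illustrative)
GAIN = 0.5         # strength of the compensatory reaction, 0 < GAIN < 1

def compensate(value):
    """The 'reactive' phase: respond to the measured deviation."""
    return value + GAIN * (SET_POINT - value)

value = 40.0       # perturbed state
history = [value]
for _ in range(10):
    value = compensate(value)
    history.append(value)
# The deviation shrinks geometrically; without the feedback term
# (GAIN = 0) no return to the set point occurs at all.
```

Note that the compensation is not a second kind of homeostasis: remove the reactive term and the loop is no longer homeostatic, which is exactly the claim in the passage above.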
"Entities such as the Web, mankind, life, the earth, the solar system, the Milky Way, and our universe are viewed as massive dissipative/replicative structures. This paper will examine the structure and process of massive dissipative/replicative structures. In addition, it will examine the concept of massive dissipative/replicative structures and what the necessary issues are in structuring the scientific understanding of the phenomena. The methodology of comparative complexity is suggested to help in the construction and analysis of scientific theories.
"The progress of science is the discovery at each step of a new order which gives unity to what had seemed unlike." -- Jacob Bronowski
"The concept of measure is intimately involved with the notion of number. Modeling, a sophisticated form of abstract description using mathematics and computation, both tied to the concept of number, and their advantages and disadvantages are exquisitely detailed by Robert Rosen in Life Itself, Anticipatory Systems, and Fundamentals of Measurement. One would have hoped that mathematics or computer simulations would reduce the need for word descriptions in scientific models. Unfortunately for scientific modeling, one cannot do as David Hilbert or Alonzo Church proposed: divorce semantics (e.g., symbolic words: referents to objects in reality) from syntax (e.g., symbolic numbers: referents to a part of a formal system of computation or entailment). One cannot do this, even in mathematics, without things becoming trivial (à la Kurt Gödel). It suffices to say that number theory (e.g., calculus), category theory, hypersets, and cellular automata, to mention a few, all have their limited uses. The integration between all of these formalisms will be necessary, plus rigorous attachment of words and numbers to show the depth and shallowness of the formal models. These rigorous attachments of words are ambiguous to a precise degree without the surrounding contexts. Relating precisely with these ambiguous words to these simple models will constitute an integration of a reasonable set of formalisms to help characterize reality."
Existence Itself: Towards the Phenomenology of Massive Dissipative/Replicative Structures by David M. Keirsey http://edgeoforder.org/pofdisstruct.html
Biological evolution — a semiotically constrained growth of complexity:
"Any living system possesses internal embedded description and exists as a superposition of different potential realisations, which are reduced in interaction with the environment. This reduction cannot be recursively deduced from the state in time present, it includes unpredictable choice and needs to be modelled also from the state in time future. Such non-recursive establishment of emerging configuration, after its memorisation via formation of reflective loop (sign-creating activity), becomes the inherited recursive action. It leads to increase of complexity of the embedded description, which constitutes the rules of generative grammar defining possible directions of open evolutionary process. The states in time future can be estimated from the point of their perfection, which represents the final cause in the Aristotelian sense and may possess a selective advantage. The limits of unfolding of the reflective process, such as the golden ratio and the golden wurf are considered as the canons of perfection established in the evolutionary process."
"My vision of biological organization is based on the principles of the quantum measurement theory, which can be considered as a mirrored image of theoretical biology. I define life as a self-organizing and self-generating activity of open non-equilibrium systems determined by their internal semiotic structure. Life by its existence (in self-reflecting loops) establishes basic physical parameters of the Universe [9,10]. Here you can find some citations from my works."
"Hypersets are shown to play the role of truth values of stable properties of nondeterministic dynamical systems. In fact, the universe of hereditarily finite hypersets, with the truth value as an atom added, is shown to be the subobject classifier of the category of simulations of nondeterministic dynamical systems."
"Let us begin with the assumption that there may exist a Conservation of Total Information 'law' for the entire universe. The motivation for this is based in the idea of conservation of total mass-energy for the universe regardless of the forms matter takes during the reconfiguration processes of matter within the framework of an expanding vacuum filled with growing quantum networks.
If all current visible structures floating on the sea of the vacuum constitute a very small percent of the total 'information' in the entire universe, and the 'expansion' of 'space' combined with local gravitationally driven aggregation of mass into 'information' sources and sinks (such as stars and planets, for instance) provides a means for 'computing' new configurations of matter (biological systems, for instance), then perhaps the remainder of the 'invisible information' is in the vacuum 'reservoir'. All unstable 'visible' physical systems such as atoms and molecules represent the building blocks for complex hierarchical systems."
"Murray Gell-Mann defines "Plectics" as the "...the study of simplicity and complexity. It includes the various attempts to define complexity; the study of roles of simplicity and complexity and of classical and quantum information in the history of the universe, the physics of information; the study of non-linear dynamics, including chaos theory, strange attractors, and self-similarity in complex non-adaptive systems in physical science; and the study of complex adaptive systems, including prebiotic chemical evolution, biological evolution, the behaviour of individual organisms, the functioning of ecosystems, the operation of mammalian immune systems, learning and thinking, the evolution of human languages, the rise and fall of human cultures, the behaviour of markets, and the operation of computers that are designed or programmed to evolve strategies - say, for playing chess, or solving problems."
Murray Gell-Mann is a founding member and currently a distinguished fellow at SFI as well as the Robert Andrews Millikan Professor Emeritus at the California Institute of Technology, where he joined the faculty in 1955. His research focuses on “plectics,” the study of simplicity and complexity, scaling, and the evolution of languages.
"a broad transdisciplinary subject covering aspects of simplicity and complexity as well as the properties of complex adaptive systems, including composite complex adaptive systems consisting of many adaptive agents."
Let's Call it Plectics:
"A decade ago, when the Santa Fe Institute was being organized, I coined a word for our principal area of research, a broad transdisciplinary subject covering aspects of simplicity and complexity as well as the properties of complex adaptive systems, including composite complex adaptive systems consisting of many adaptive agents. Unfortunately, I became discouraged about using the term after it met with a lukewarm response from a few of my colleagues. I comforted myself with the thought that perhaps a special name was unnecessary.
Perhaps I should have been more forceful. A name seems to be inevitable. Various authors are now toying with such neologisms as "complexology," which has a Latin head and a Greek tail and does not refer to simplicity. In this note, I should like to try to make up for lost time and put forward what I have long considered to be the best name for our area of study, if it has to have one.
It is appropriate that plectics refers to entanglement or the lack thereof, since entanglement is a key feature of the way complexity arises out of simplicity, making our subject worth studying. For example, all of us human beings and all the objects with which we deal are essentially bundles of simple quarks and electrons. If each of those particles had to be in its own independent state, we could not exist and neither could the other objects. It is the entanglement of the states of the particles that is responsible for matter as we know it. Likewise, if the parts of a complex system or the various aspects of a complex situation, all defined in advance, are studied carefully by experts on those parts or aspects and the results of their work are pooled, an adequate description of the whole system or situation does not usually emerge. The reason, of course, is that these parts or aspects are typically entangled with one another. We have to supplement the partial studies with a transdisciplinary "crude look at the whole," and practitioners of plectics often do just that.
I hope that it is not too late for the name "plectics" to catch on. We seem to need it."
"This paper is divided in two main parts. The first part (Sections 2 to 4) serves as an introduction to general system theory. Our aim is to present the evolution of system theory from a categorial viewpoint; subsequently (Sections 5 to 8) we shall study systems from the standpoint of a 'universal' Topos: logico-mathematical constructions that cover both the commutative and the non-commutative frameworks.
In so doing, we shall distinguish three major phases in the development of the theory (two already completed and one in front of us). The three phases will be respectively called "The Age of Equilibrium", "The Age of Complexity" and "The Age of Super-complexity". The first two may be taken as lasting from approximately 1850 to 1960, and the third as being rapidly developed from the 1940s (Eilenberg and Mac Lane, 1945) and the late 1950s and 60s (Rashevsky, 1954, 1967-1969; Rosen, 1958a, b).
Boundaries are peculiarly relevant to systems. They serve to distinguish what is internal to the system from what is external to it. By virtue of possessing boundaries, a system is an entity for which there is an interior and an exterior defined for such an entity. The initial datum, therefore, is that of a difference, of something which enables a (characteristic or essential) difference to be established between a system and its environment.
"While physicists often use this rule to explain the conservation of energy-momentum (or as Wheeler calls it, “momenergy”), it can be more generally interpreted with respect to information and constraint, or state and syntax. That is, the boundary is analogous to a constraint which separates an interior attribute satisfying the constraint from a complementary exterior attribute, thus creating an informational distinction." - Langan, PCID, 2002
"To meet this responsibility, the child requires internal sensors that provide information on exactly what is happening deep inside its growing body, preferably at the intracellular level, and that permit feedback."
Categorical Ontology of Complex Spacetime Structures: The Emergence of Life and Human Consciousness
A categorical ontology of space and time is presented for emergent biosystems, super-complex dynamics, evolution and human consciousness. Relational structures of organisms and the human mind are naturally represented in non-abelian categories and higher dimensional algebra. The ascent of man and other organisms through adaptation, evolution and social co-evolution is viewed in categorical terms as variable biogroupoid representations of evolving species. The unifying theme of local-to-global approaches to organismic development, evolution and human consciousness leads to novel patterns of relations that emerge in super- and ultra-complex systems in terms of colimits of biogroupoids, and more generally, as compositions of local procedures to be defined in terms of locally Lie groupoids. Solutions to such local-to-global problems in highly complex systems with ‘broken symmetry’ may be found with the help of generalized van Kampen theorems in algebraic topology such as the Higher Homotopy van Kampen theorem (HHvKT). Primordial organism structures are predicted from the simplest metabolic-repair systems extended to self-replication through autocatalytic reactions. The intrinsic dynamic ‘asymmetry’ of genetic networks in organismic development and evolution is investigated in terms of categories of many-valued, Łukasiewicz–Moisil logic algebras and then compared with those obtained for (non-commutative) quantum logics. The claim is defended in this essay that human consciousness is unique and should be viewed as an ultra-complex, global process of processes. The emergence of consciousness and its existence seem dependent upon an extremely complex structural and functional unit with an asymmetric network topology and connectivities—the human brain—that developed through societal co-evolution, elaborate language/symbolic communication and ‘virtual’, higher dimensional, non-commutative processes involving separate space and time perceptions.
Philosophical theories of the mind are approached from the theory of levels and ultra-complexity viewpoints which throw new light on previous representational hypotheses and proposed semantic models in cognitive science. Anticipatory systems and complex causality at the top levels of reality are also discussed in the context of the ontological theory of levels with its complex/entangled/intertwined ramifications in psychology, sociology and ecology. The presence of strange attractors in modern society dynamics gives rise to very serious concerns for the future of mankind and the continued persistence of a multi-stable biosphere. A paradigm shift towards non-commutative, or non-Abelian, theories of highly complex dynamics is suggested to unfold now in physics, mathematics, life and cognitive sciences, thus leading to the realizations of higher dimensional algebras in neurosciences and psychology, as well as in human genomics, bioinformatics and interactomics.
"The network of all such interactions is called the Interactome. Interactomics thus aims to compare such networks of interactions (i.e., interactomes) between and within species in order to find how the traits of such networks are either preserved or varied. From a mathematical, or mathematical biology viewpoint an interactome network is a graph or a category representing the most important interactions pertinent to the normal physiological functions of a cell or organism.
Interactomics is an example of "top-down" systems biology, which takes an overhead, as well as overall, view of a biosystem or organism. Large sets of genome-wide and proteomic data are collected, and correlations between different molecules are inferred. From the data new hypotheses are formulated about feedbacks between these molecules. These hypotheses can then be tested by new experiments.
Through the study of the interaction of all of the molecules in a cell the field looks to gain a deeper understanding of genome function and evolution than just examining an individual genome in isolation. Interactomics goes beyond cellular proteomics in that it not only attempts to characterize the interaction between proteins, but between all molecules in the cell."
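Since the passage above describes an interactome abstractly as a graph whose nodes are molecules and whose edges are interactions, a minimal sketch in Python may make the "preserved or varied" comparison concrete. The molecule names and the two toy "species" networks here are hypothetical illustrations, not data from any actual interactome study.

```python
# A minimal sketch of an interactome as an undirected graph:
# nodes are molecules, edges are interactions. All names hypothetical.

def make_interactome(interactions):
    """Build an undirected graph as a dict of adjacency sets."""
    graph = {}
    for a, b in interactions:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph

def shared_interactions(g1, g2):
    """Interactions (edges) conserved between two interactomes."""
    edges = lambda g: {frozenset((a, b)) for a in g for b in g[a]}
    return edges(g1) & edges(g2)

# Two toy 'species' interactomes with hypothetical molecule names.
species_a = make_interactome([("kinaseX", "receptorY"), ("receptorY", "geneZ")])
species_b = make_interactome([("kinaseX", "receptorY"), ("kinaseX", "geneZ")])

print(shared_interactions(species_a, species_b))
# prints the single conserved edge: kinaseX <-> receptorY
```

Comparing the edge sets of two such graphs is the simplest possible version of the interactomic question of which interactions are conserved across species; the categorical formulations quoted below generalize this far beyond plain graphs.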
I. C. Baianu, R. Brown, G. Georgescu & J. F. Glazebrook (2006). Complex Non-Linear Biodynamics in Categories, Higher Dimensional Algebra and Łukasiewicz–Moisil Topos: Transformations of Neuronal, Genetic and Neoplastic Networks. Axiomathes 16 (1-2).
A categorical, higher dimensional algebra and generalized topos framework for Łukasiewicz–Moisil Algebraic–Logic models of non-linear dynamics in complex functional genomes and cell interactomes is proposed. Łukasiewicz–Moisil Algebraic–Logic models of neural, genetic and neoplastic cell networks, as well as signaling pathways in cells are formulated in terms of non-linear dynamic systems with n-state components that allow for the generalization of previous logical models of both genetic activities and neural networks. An algebraic formulation of variable ‘next-state functions’ is extended to a Łukasiewicz–Moisil Topos with an n-valued Łukasiewicz–Moisil Algebraic Logic subobject classifier description that represents non-random and non-linear network activities as well as their transformations in developmental processes and carcinogenesis. The unification of the theories of organismic sets, molecular sets and Robert Rosen’s (M,R)-systems is also considered here in terms of natural transformations of organismal structures which generate higher dimensional algebras based on consistent axioms, thus avoiding well known logical paradoxes occurring with sets. Quantum bionetworks, such as quantum neural nets and quantum genetic networks, are also discussed and their underlying, non-commutative quantum logics are considered in the context of an emerging Quantum Relational Biology.
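The abstract above builds on n-valued Łukasiewicz–Moisil logics, in which a network component has n activity states rather than a Boolean on/off. A minimal sketch of the standard Łukasiewicz operations on the finite chain {0, 1, ..., n-1} may help; the choice n = 5, the gene names, and the toy next-state function are illustrative assumptions, not the paper's actual networks.

```python
# Łukasiewicz n-valued logic operations on the chain {0, ..., N-1}.
# N = 5 and the 'gene' example are hypothetical illustrations.

N = 5  # five activity states per network component

def neg(x):        # Łukasiewicz negation
    return (N - 1) - x

def conj(x, y):    # strong conjunction (bounded difference)
    return max(0, x + y - (N - 1))

def disj(x, y):    # strong disjunction (bounded sum)
    return min(N - 1, x + y)

def implies(x, y): # Łukasiewicz implication
    return min(N - 1, (N - 1) - x + y)

# A toy 'next-state function': gene C activates when A is high AND B is low.
def next_state_C(a, b):
    return conj(a, neg(b))

print(next_state_C(4, 1))  # 3: intermediate activation, not just on/off
```

The point of the many-valued setting is visible even here: with A fully active (4) and B weakly active (1), C lands at an intermediate level (3), a behavior a two-valued Boolean network cannot express.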
Quantum Genetics, Quantum Automata and Computation:
"Quantum Automata were introduced in a paper published in the Bulletin of Mathematical Biophysics, 33:339-354 (Baianu, 1971a). Categorical computations, both algebraic and topological, were also introduced the same year based on adjoint functor pairs in the theory of categories, functors and natural transformations (Baianu, 1971b).
The notions of topological semigroup, quantum automaton, or computer, were then suggested with a view to their potential applications to the analogous simulation of biological systems, and especially genetic activities and nonlinear dynamics in genetic networks. Further, detailed studies of nonlinear dynamics in genetic networks were carried out in categories of n-valued, Łukasiewicz Logic Algebras that showed significant dissimilarities (Baianu, 1977) from Boolean models of human neural networks (McCulloch and Pitts, 1943)."
To quote Robert Rosen:
“Ironically, the idea that life requires an explanation is a relatively new one. To the ancients life simply was; it was a given; a first principle..."
Function, Anticipation, Representation:
"Function emerges in certain kinds of far-from-equilibrium systems. One important kind of function is that of interactive anticipation, an adaptedness to temporal complexity. Interactive anticipation is the locus of the emergence of normative representational content, and, thus, of representation in general: interactive anticipation is the naturalistic core of the evolution of cognition. Higher forms of such anticipation are involved in the subsequent macro-evolutionary sequence of learning, emotions, and reflexive consciousness.
Keywords: representation, action selection, anticipation, far-from-equilibrium systems, self-maintenant systems
I argue that representation is emergent in a particular form of anticipatory function, a kind of anticipation involved in action selection. Specifically, representation is emergent in anticipations of what further actions and interactions are possible under current or indicated conditions. Developing and supporting that claim, plus arguing that various alternative approaches to representation are flawed, is the primary focus of this paper.
There are six steps in this development: 1) establishing the legitimacy of naturalistic emergence, 2) modeling the emergence of normative function, 3) modeling the emergence of primitive representation, 4) critiquing some alternative models of representation, 5) indicating the adequacy of this model of representation for more complex forms of representation, and 6) situating this model of representational phenomena in a broader framework of a macro-evolutionary sequence of anticipatory adaptations.
6.5 Reflexive Consciousness
An emotion system is a kind of meta-system that monitors uncertainty conditions in the flows of interactive and learning processes. In addition, it outputs a signal of that uncertainty into the interactive system. An emotion system, then, is a partial interactive system, with the first level interactive system as its interaction environment. If the inputs to such a meta-system come to be adequate to the functional flow at the first level, and the outputs come to be competent to modify and re-organize that flow, then a full second level, or meta-interactive system will have evolved.
Such a second level system, a second level interactive knowing system, will be able to track first level interactive processes and organization. It will be able to rehearse particular first level processes and contents. It will be able to "examine" first level interactive organization as a means to planning and anticipating the environment. These are just a few of the new capabilities that emerge with a second level knowing system."
Cognitive Robotics and the Philosophy of Knowledge
Director, Institute for Interactivist Studies
Papers: Pre- and Post-Publication:
"The bulk of theoretical and empirical work in the neurobiology of emotion indicates that isotelesis—the principle that any one function is served by several structures and processes—applies to emotion as it applies to thermoregulation, for example (Satinoff, 1982)...In light of the preceding discussion, it is quite clear that the processes that emerge in emotion are governed not only by isotelesis, but by the principle of polytelesis as well. The first principle holds that many functions, especially the important ones, are served by a number of redundant systems, whereas the second holds that many systems serve more than one function. There are very few organic functions that are served uniquely by one and only one process, structure, or organ. Similarly, there are very few processes, structures, or organs that serve one and only one purpose. Language, too, is characterized by the isotelic and polytelic principles; there are many words for each meaning and most words have more than one meaning. The two principles apply equally to a variety of other biological, behavioral, and social phenomena. Thus, there is no contradiction between the vascular and the communicative functions of facial efference; the systems that serve these functions are both isotelic and polytelic."
"Know that there are two kinds of knowledge: the knowledge of the essence of a thing and the knowledge of its qualities. The essence of a thing is known through its qualities; otherwise, it is unknown and hidden.
As our knowledge of things, even of created and limited things, is knowledge of their qualities and not of their essence, how is it possible to comprehend in its essence the Divine Reality, which is unlimited? ... Knowing God, therefore, means the comprehension and the knowledge of His attributes, and not of His Reality. This knowledge of the attributes is also proportioned to the capacity and power of man; it is not absolute."
"The Bahá'í teachings state that there is only one God and that his essence is absolutely inaccessible from the physical realm of existence and that, therefore, his reality is completely unknowable. Thus, all of humanity's conceptions of God which have been derived throughout history are mere manifestations of the human mind and not at all reflective of the nature of God's essence. While God's essence is inaccessible, a subordinate form of knowledge is available by way of mediation by divine messengers, known as Manifestations of God. The Manifestations of God reflect divine attributes, which are creations of God made for the purpose of spiritual enlightenment, onto the physical plane of existence. All physical beings reflect at least one of these attributes, and the human soul can potentially reflect all of them. Shoghi Effendi, the Guardian of the Bahá'í Faith, described God as inaccessible, omniscient, almighty, personal, and rational, and rejected pantheistic, anthropomorphic and incarnationist beliefs."
"When and how did the universe begin? Why are we here? Why is there something rather than nothing? What is the nature of reality? Why are the laws of nature so finely tuned as to allow for the existence of beings like ourselves? And, finally, is the apparent "grand design" of our universe evidence of a benevolent creator who set things in motion -- or does science offer another explanation?
In 'The Grand Design' they explain that according to quantum theory, the cosmos does not have just a single existence or history, but rather that every possible history of the universe exists simultaneously. When applied to the universe as a whole, this idea calls into question the very notion of cause and effect. But the "top-down" approach to cosmology that Hawking and Mlodinow describe would say that the fact that the past takes no definite form means that we create history by observing it, rather than that history creates us.
Along the way Hawking and Mlodinow question the conventional concept of reality, posing a "model-dependent" theory of reality as the best we can hope to find."
"Question: Or, alternatively, does God instantaneously or non-spatiotemporally—completely, consistently, and comprehensively—reconfigure and reconstitute reality’s info-cognitive objects and relations?
Answer: The GOD, or primary teleological operator, is self-distributed at points of conspansion. This means that SCSPL evolves through its coherent grammatical processors, which are themselves generated in a background-free way by one-to-many endomorphism. The teleo-grammatic functionality of these processors is simply a localized "internal extension" of this one-to-many endomorphism; in short, conspansive spacetime ensures consistency by atemporally embedding the future in the past. Where local structure conspansively mirrors global structure, and global distributed processing "carries" local processing, causal inconsistencies cannot arise; because the telic binding process occurs in a spacetime medium consisting of that which has already been bound, consistency is structurally enforced."
"Back here we were talking about the symmetry-breaking that takes place in mathematics by the choice of working in Set, which John attributed to nothing less than the ‘arrow of time’.
Why do many-to-one but not one-to-many relations get singled out for special treatment and dubbed ‘functions’? Because functions are supposed to be ‘deterministic’: the cause must determine the effect. We don’t care if the effect fails to determine the cause.
Now what is there to be said about the 2-category Cat and its three duals: Cat^op, Cat^co and Cat^coop?"
Big Toy Models: Representing Physical Systems As Chu Spaces:
"We pursue a model-oriented rather than axiomatic approach to the foundations of Quantum Mechanics, with the idea that new models can often suggest new axioms. This approach has often been fruitful in Logic and Theoretical Computer Science. Rather than seeking to construct a simplified toy model, we aim for a `big toy model', in which both quantum and classical systems can be faithfully represented - as well as, possibly, more exotic kinds of systems.
To this end, we show how Chu spaces can be used to represent physical systems of various kinds. In particular, we show how quantum systems can be represented as Chu spaces over the unit interval in such a way that the Chu morphisms correspond exactly to the physically meaningful symmetries of the systems - the unitaries and antiunitaries. In this way we obtain a full and faithful functor from the groupoid of Hilbert spaces and their symmetries to Chu spaces. We also consider whether it is possible to use a finite value set rather than the unit interval; we show that three values suffice, while the two standard possibilistic reductions to two values both fail to preserve fullness."
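The abstract above represents physical systems as Chu spaces, i.e. triples of points, states, and an evaluation into a value set, with morphisms given by an adjointness condition between a forward map on points and a backward map on states. A minimal sketch of that morphism condition may clarify the structure; the two toy spaces over K = {0, 1} are hypothetical illustrations, not the unit-interval quantum representation of the paper.

```python
# A Chu space here is (points, states, r) with r: points x states -> K,
# encoded as a dict. The toy spaces below are hypothetical examples.

def is_chu_morphism(f, g, r1, r2, points1, states2):
    """(f, g) is a Chu morphism (A, X, r1) -> (B, Y, r2) iff
    r2(f(a), y) == r1(a, g(y)) for every point a of A and state y of B."""
    return all(r2[(f[a], y)] == r1[(a, g[y])]
               for a in points1 for y in states2)

# Toy space A: points {a0, a1}, states {x0, x1}, values in K = {0, 1}.
r1 = {("a0", "x0"): 0, ("a0", "x1"): 1, ("a1", "x0"): 1, ("a1", "x1"): 0}
# Toy space B: the same shape with relabeled points/states.
r2 = {("b0", "y0"): 0, ("b0", "y1"): 1, ("b1", "y0"): 1, ("b1", "y1"): 0}

f = {"a0": "b0", "a1": "b1"}   # forward map on points
g = {"y0": "x0", "y1": "x1"}   # backward map on states

print(is_chu_morphism(f, g, r1, r2, ["a0", "a1"], ["y0", "y1"]))  # True
```

The adjointness condition is what makes Chu morphisms "physically meaningful" maps in the paper's sense: the forward action on points must be exactly mirrored by the backward action on states.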
Coalgebras, Chu Spaces, and Representations of Physical Systems
"By analogy with the extension of a type as the set of individuals of that type, we define the extension of an attribute as the set of states of an idealized observer of that attribute, observing concurrently with observers of other attributes. The attribute theoretic counterpart of an operation mapping individuals of one type to individuals of another is a dependency mapping states of one attribute to states of another. We integrate attributes with types via a symmetric but not self-dual framework of dipolar algebras or disheaves amounting to a type-theoretic notion of Chu space over a family of sets of qualia doubly indexed by type and attribute, for example the set of possible colors of a ball or heights of buildings. We extend the sheaf-theoretic basis for type theory to a notion of disheaf on a profunctor.
Applications for this framework include the Web Ontology Language OWL, UML, relational databases, medical information systems, geographic databases, encyclopedias, and other data-intensive areas standing to benefit from a precise ontological framework coherently accommodating types and attributes.
Keywords: Attribute, Chu space, ontology, presheaf, type."
The Yoneda Lemma without category theory: algebra and applications
"3.3 Ontology of properties and qualia
Three long-standing problems of philosophy are, in decreasing order of seniority, Cartesian dualism, the nature of properties or attributes, and the existence of qualia.
In 1929 C. I. Lewis, an early contributor to modal logic, wrote Mind and the World Order: Outline of a Theory of Knowledge in which he summarized his thinking about qualia as entities bridging the physically observable (as measured by scientific instruments) and the psychologically observable (as the sensations reported by human observers). Philosophers have since divided themselves into qualiaphiles such as Edmond Wright, editor of The Case for Qualia, a just-published score of qualia-friendly essays, and qualiaphobes such as Daniel Dennett who maintain that the concept is incoherent.
Communes are a new mathematical construct that provide a common solution to all three problems by giving a way of thinking about them. Since communes are well-defined, this allows the questions to be formulated more sharply as, how faithfully do communes capture the notions of mind, property, and quale? Communes also suggest novel ways of defining and organizing those notions so as to make them more consistent both individually and in combination with each other."
"Materialism arbitrarily excludes the possibility that reality has a meaningful nonmaterial aspect, objectivism arbitrarily excludes the possibility that reality has a meaningful subjective aspect, and although Cartesian dualism technically excludes neither, it arbitrarily denies that the mental and material, or subjective and objective, sides of reality share common substance.
How come the “one world” out of many observer-participants? Insofar as the term “observer-participants” embraces scientists and other human beings, this question invites a quasi-anthropological interpretation. Why should a universe consisting of separate observers with sometimes-conflicting agendas and survival imperatives display structural and nomological unity? Where observers are capable of creating events within the global unitary manifold of their common universe, why should they not be doing it strictly for themselves, each in his or her own universe, and never the twain shall meet? Where the observer-participant concept is generalized to include non-anthropic information-transducing systems, what is holding all of these systems together in a single unified reality?
A scientist employs empirical methods to make specific observations, applies general cognitive relationships from logic and mathematics in order to explain them, and comes off treating reality as a blend of perception and cognition. But this treatment lacks anything resembling an explicit justification. When a set of observations is explained with a likely set of equations interpreted therein, the adhesion between explanandum and explanation might as well be provided by rubber cement. I.e., scientific explanations and interpretations glue observations and equations together in a very poorly understood way. It often works like a charm…but why? One of the main purposes of reality theory is to answer this question.
The first thing to notice about this question is that it involves the process of attribution, and that the rules of attribution are set forth in stages by mathematical logic. The first stage is called sentential logic and contains the rules for ascribing the attributes true or false, respectively denoting inclusion or non-inclusion in arbitrary cognitive-perceptual systems, to hypothetical relationships in which predicates are linked by the logical functors not, and, or, implies, and if and only if. Sentential logic defines these functors as truth functions assigning truth values to such expressions irrespective of the contents (but not the truth values) of their predicates, thus effecting a circular definition of functors on truth values and truth values on functors. The next stage of attribution, predicate logic, ascribes specific properties to objects using quantifiers. And the final stage, model theory, comprises the rules for attributing complex relations of predicates to complex relations of objects, i.e. theories to universes. In addition, the form of attribution called definition is explicated in a theory-centric branch of logic called formalized theories, and the mechanics of functional attribution is treated in recursion theory.
In sentential logic, a tautology is an expression of functor-related sentential variables that is always true, regardless of the truth values assigned to its sentential variables themselves. A tautology has three key properties: it is universally (syntactically) true, it is thus self-referential (true even of itself and therefore closed under recursive self-composition), and its implications remain consistent under inferential operations preserving these properties. That is, every tautology is a self-consistent circularity of universal scope, possessing validity by virtue of closure under self-composition, comprehensiveness (non-exclusion of truth), and consistency (freedom from irresolvable paradox). But tautologies are not merely consistent unto themselves; they are mutually consistent under mutual composition, making sentential logic as much a “self-consistent circularity of universal scope” as any one of its tautologies. Thus, sentential logic embodies two levels of tautology, one applying to expressions and one applying to theoretical systems thereof. Predicate logic then extends the tautology concept to cover the specific acts of attribution represented by (formerly anonymous) sentential variables, and model theory goes on to encompass more complex acts of attribution involving more complex relationships.
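The truth-functional account of sentential logic in the passage above can be made concrete with a few lines of code: a tautology is an expression that comes out true under every assignment of truth values to its variables, which a machine can verify by exhaustive enumeration. This is only a sketch of the two-valued base case the text describes, not of the extended reality-theoretic logic it goes on to propose.

```python
# Tautology checking by truth-table enumeration: an expression is a
# tautology iff it is true under every assignment to its variables.

from itertools import product

def is_tautology(expr, variables):
    """expr is a function taking one bool argument per variable."""
    return all(expr(*vals)
               for vals in product([True, False], repeat=len(variables)))

# The functors 'implies' and 'iff' as truth functions:
implies = lambda p, q: (not p) or q
iff     = lambda p, q: p == q

print(is_tautology(lambda p: p or not p, ["p"]))                   # True
print(is_tautology(lambda p, q: implies(p and q, p), ["p", "q"]))  # True
print(is_tautology(lambda p, q: implies(p, p and q), ["p", "q"]))  # False
```

The syntactic universality the text emphasizes is exactly what the exhaustive check certifies: a tautology holds regardless of the contents of its sentential variables, since every combination of truth values has been tried.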
Reality theory is about the stage of attribution in which two predicates analogous to true and false, namely real and unreal, are ascribed to various statements about the real universe. In this sense, it is closely related to sentential logic. In particular, sentential logic has four main properties to be emulated by reality theory. The first is absolute truth; as the formal definition of truth, it is true by definition. The other properties are closure, comprehensiveness and consistency. I.e., logic is wholly based on, and defined strictly within the bounds of, cognition and perception; it applies to everything that can be coherently perceived or conceived; and it is by its very nature consistent, being designed in a way that precludes inconsistency. It is the basis of mathematics, being the means by which propositions are stated, proved or disproved, and it is the core of science, underwriting the integrity of rational and empirical methodology. Even so-called “nonstandard” logics, e.g. modal, fuzzy and many-valued logics, must be expressed in terms of fundamental two-valued logic to make sense. In short, two-valued logic is something without which reality could not exist. If it were eliminated, then true and false, real and unreal, and existence and nonexistence could not be distinguished, and the merest act of perception or cognition would be utterly impossible.
Thus far, it has been widely assumed that reality theory can be sought by the same means as any other scientific theory. But this is not quite true, for while science uses the epistemological equivalent of magic glue to attach its theories to its observations, reality theory must give a recipe for the glue and justify the means of application. That is, reality theory must describe reality on a level that justifies science, and thus occupies a deeper level of explanation than science itself. Does this mean that reality theory is mathematical? Yes, but since mathematics must be justified along with science, metamathematical would perhaps be a better description… and when all is said and done, this comes down to logic pure and simple. It follows that reality theory must take the form of an extended logic…in fact, a “limiting form” of logic in which the relationship between theory and universe, until now an inexhaustible source of destructive model-theoretic ambiguity, is at last reduced to (dual-aspect) monic form, short-circuiting the paradox of Cartesian dualism and eliminating the epistemological gap between mind and matter, theory and universe.
As complexity rises and predicates become theories, tautology and truth become harder to recognize. Because universality and specificity are at odds in practice if not in principle, they are subject to a kind of “logical decoherence” associated with relational stratification. Because predicates are not always tautological, they are subject to various kinds of ambiguity; as they become increasingly specific and complex, it becomes harder to locally monitor the heritability of consistency and locally keep track of the truth property in the course of attribution (or even after the fact). Undecidability, LSAT intractability and NP-completeness, predicate ambiguity and the Löwenheim–Skolem theorem, observational ambiguity and the Duhem-Quine thesis…these are some of the problems that emerge once the truth predicate “decoheres” with respect to complex attributive mappings. It is for reasons like these that the philosophy of science has fallen back on falsificationist doctrine, giving up on the tautological basis of logic, effectively demoting truth to provisional status, and discouraging full appreciation of the tautological-syntactic level of scientific inquiry even in logic and philosophy themselves.
This common attribute invalidates any assertion to the effect that the difference between the relands is “absolute” or “irreducible”; the mere fact that the difference can be linguistically or geometrically expressed implies that it is only partial and that both relands are manifestations of one and the same ontological medium. Where X and Y represent arbitrary parts or aspects of the difference relation called reality, this diagram graphically demonstrates that reality ultimately consists of a unitary ontological medium. Accordingly, reality theory must be a monic theory reducing reality to this medium (this idea is further developed in the Principle of Infocognitive Monism).
The primary transducers of the overall language of science are scientists, and their transductive syntax consists of the syntax of generalized scientific observation and theorization, i.e. perception and cognition. We may therefore partition or stratify this syntax according to the nature of the logical and nonlogical elements incorporated in syntactic rules. For example, we might develop four classes: a class corresponding to the fundamental trio space, time and object; a class containing the rules of logic and mathematics; a class consisting of the perceptual qualia in terms of which we define and extract experience, meaning and utility from perceptual and cognitive reality; and a class accounting for more nebulous feelings and emotions integral to the determination of utility for qualic relationships. For now, we might as well call these classes STOS, LMS, QPS and ETS, respectively standing for space-time-object syntax, logico-mathematical syntax, qualio-perceptual syntax, and emo-telic syntax, along with a high-level interrelationship of these components to the structure of which all or some of them ultimately contribute. Together, these ingredients comprise the Human Cognitive-Perceptual Syntax or HCS.
Lest the inclusion of utility, qualia or feelings seem “unscientific”, we need merely observe that it would be vastly more unscientific to ignore things that are subjectively known to exist on the wishful and rationally unjustifiable assumption that subjectivity and subjective predicates play no part in the self-definition of reality. Insofar as subjectivity merely refers to the coherent intrinsic identities of the elements of objective relationships, this would be logically absurd. But in any case, our aim at this point is merely to classify the basic elements in terms of which we view the world, whether or not they have thus far proven accessible to standard empirical methodology, and this means recognizing the reality of qualic and emotional predicates and adjoining the corresponding nonlogical constants to SCSPL syntax. If QPS and ETS predicates turn out to be reducible to more fundamental STOS/LMS predicates, then very well; it will permit a convenient reduction of the syntax. But this is certainly not something that can be decided in advance.
Cognitive-perceptual syntax consists of (1) sets, posets or tosets of attributes (telons), (2) perceptual rules of external attribution for mapping external relationships into telons, (3) cognitive rules of internal attribution for cognitive (internal, non-perceptual) state-transition, and (4) laws of dependency and conjugacy according to which perceptual or cognitive rules of external or internal attribution may or may not act in a particular order or in simultaneity.
To see how information can be beneficially reduced when all but information is uninformative by definition, one need merely recognize that information is not a stand-alone proposition; it is never found apart from syntax. Indeed, it is only by syntactic acquisition that anything is ever “found” at all. That which is recognizable only as syntactic content requires syntactic containment, becoming meaningful only as acquired by a syntactic operator able to sustain its relational structure; without attributive transduction, a bit of information has nothing to quantify. This implies that information can be generalized in terms of “what it has in common with syntax”, namely the syndiffeonic relationship between information and syntax." - Langan, 2002, PCID
"A fundamental principle of the Bahá'í Faith is the harmony of religion and science. Bahá'í scripture asserts that true science and true religion can never be in conflict.
`Abdu'l-Bahá, the son of the founder of the religion, stated that religion without science leads to superstition and that science without religion leads to materialism. He also admonished that true religion must conform to the conclusions of science.
This latter aspect of the principle seems to suggest that the religion must always accept current scientific knowledge as authoritative, but some Bahá'í scholars have suggested that this is not always the case. On some issues, the Bahá'í Faith subordinates the conclusions of current scientific thought to its own teachings, which the religion takes as fundamentally true. This is because, in the Bahá'í understanding, the present scientific view is not always correct, nor is truth limited only to what science can explain."