Saturday, December 4, 2010

Emergence, Condensed Matter Physics, Infothermodynamics


"Entropy should not and does not depend on our perception of order in the system. The amount of heat a system holds for a given temperature does not change depending on our perception of order. Entropy, like pressure and temperature, is an independent thermodynamic property of the system that does not depend on our observation.

Entropy As Diversity

A better word that captures the essence of entropy on the molecular level is diversity. Entropy represents the diversity of internal movement of a system. The greater the diversity of movement on the molecular level, the greater the entropy of the system. Order, on the other hand, may be simple or complex. A living system is complex. A living system has a high degree of order AND a high degree of entropy. A raccoon has more entropy than a rock. A living, breathing human being, more than a dried-up corpse."
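The "entropy as diversity" picture can be made concrete with Shannon entropy, which measures exactly this kind of diversity over a distribution. The two toy distributions below (a nearly frozen "rock" and a maximally diverse "raccoon") are invented for illustration; Shannon entropy is only an information-theoretic analogue of thermodynamic entropy:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: a measure of the diversity of a distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A "rock": almost all internal movement confined to one dominant mode.
rock = [0.97, 0.01, 0.01, 0.01]

# A "raccoon": movement spread evenly over many diverse internal modes.
raccoon = [0.25] * 4

print(shannon_entropy(rock))     # low diversity -> low entropy (~0.24 bits)
print(shannon_entropy(raccoon))  # uniform over 4 modes -> 2.0 bits (maximal)
```

Greater diversity of movement, greater entropy: the uniform distribution maximizes the measure.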

"Syntropy is another word for negative entropy, or negentropy (also extropy). I prefer syntropy because it is a positive term for an otherwise double negative phrase meaning the absence of an absence (or minus the minus of order). Syntropy might best be thought of as the "capacity for entropy," and increased certainty and structure. A technological or living system acts as an efficient drain for entropy -- the more organized, structured, and complex the organization, the faster the system can generate entropy. In other words the more syntropic it is, the more efficient it is in creating entropy. At the same time, the creation of entropy is what you get with the expenditure of energy, so this "urge" to drain entropy becomes a pump for order!"

"Things do not change; we change." - Henry David Thoreau

‎"Therefore, the seeker after the truth is not one who studies the writings of the ancients and, following his natural disposition, puts his trust in them, but rather the one who suspects his faith in them and questions what he gathers from... them, the one who submits to argument and demonstration, and not to the sayings of a human being whose nature is fraught with all kinds of imperfection and deficiency. Thus the duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and, applying his mind to the core and margins of its content, attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency." - Ibn al-Haytham

"In mathematics, Ibn al-Haytham builds on the mathematical works of Euclid and Thabit ibn Qurra, and goes on to systemize infinitesimal calculus, conic sections, number theory, and analytic geometry after linking algebra to geometry."

"According to Merriam-Webster, logic is the science that deals with the principles and criteria of validity of inference and demonstration. It is the science of the formal principles of reasoning. A logic consists of a first order language of types, together with an axiomatic system and a model-theoretic semantics."

Progressive Ontology Alignment for Meaning Coordination: An Information-Theoretic Foundation:

A Formal Model for Situated Semantic Alignment:

Semantic Alignment of Context-Goal Ontologies:

Several distributed systems need to interoperate and exchange information. Ontologies have gained popularity in the AI community as a means of enriching the description of information and making its context more explicit. Thus, to enable interoperability between systems, it is necessary to align the ontologies describing them in a sound manner. Our main interest is focused on ontologies describing system functionalities. We treat these as goals to achieve. In general, a goal is related to the realization of an action in a particular context. Therefore, we call ontologies describing goals and their context Context-Goal Ontologies. Most of the methodologies proposed to reach interoperability are semi-automatic; they are based on probabilities or statistics rather than on mathematical models. The purpose of this paper is to investigate an approach where the alignment of C-G Ontologies achieves automatic and semantic interoperability between distributed systems based on a mathematical model, "Information Flow".


This paper presents ongoing research in the field of extensional mappings between ontologies. Hitherto, the task of generating mappings between ontologies has focused on the intensional level of ontologies. The term intensional level refers to the set of concepts that are included in an ontology. However, an ontology that has been created for a specific task or application needs to be populated with instances. These comprise the extensional level of an ontology. This level is generally neglected during the ontology integration procedure. Thus, although methodologies of geographic ontology integration, ranging from alignment to true integration, have, in the course of years, presented a solid ground for information exchange, little has been done in exploring the relationships between the data. In this context, this research strives to set a framework for extensional mappings between ontologies using Information Flow.

Information Flow based Ontology Mapping

The use of ontology for knowledge organization is a common way to solve semantic heterogeneity. But when the categories of the related concepts of ontologies differ, semantic interoperation encounters new obstacles. A new theoretical model is needed to solve this kind of problem. Information flow (IF) theory seems to be a promising avenue to this end. In this paper, we analyze semantic heterogeneities between two data sources in the distributed Xu Beihong's Galleries and describe an instance-based approach to accomplish semantic interoperation, along with an IF-based formulation of the global semantics. The emphasis is on how to use IF theory to discover concept and instance duality in knowledge sharing. Finally, further interrelated problems are discussed.

Semantic Alignment in the Context of Agent Interactions

We provide the formal foundation of a novel approach to tackle semantic heterogeneity in multi-agent communication by looking at semantics related to interaction, in order to avoid dependency on a priori semantic agreements. We do not assume the existence of any ontologies, either local to the interacting agents or external to them, and we rely only on the interactions themselves to resolve terminological mismatches. In the approach taken in this paper we look at the semantics of messages exchanged during an interaction entirely from an interaction-specific point of view: messages are deemed semantically related if they trigger compatible interaction state transitions, where compatibility means that the interaction progresses in the same direction for each agent, even though each agent's partial view of the interaction (its interaction model) may be more constrained than the actual interaction that is happening. Our underlying claim is that semantic alignment is often relative to the particular interaction in which agents are engaged, and that in such cases the interaction state should be taken into account and brought into the alignment mechanism.

Formal Method for Aligning Goal Ontologies

Many distributed heterogeneous systems interoperate and exchange information. Currently, most systems are described in terms of ontologies. When ontologies are distributed, the problem of finding related concepts between them arises. This problem is undertaken by a process which defines rules to relate relevant parts of different ontologies, called “Ontology Alignment.” In the literature, most of the methodologies proposed to achieve ontology alignment are semi-automatic or conducted by hand. In the present paper, we propose an automatic and dynamic technique for aligning ontologies. Our main interest is focused on ontologies describing services provided by systems. In fact, the notion of service is a key one in the description and functioning of distributed systems. Under a teleological assumption, services are related to goals through the paradigm ‘service as goal achievement’, that is, through ontologies of services, or more precisely of goals. These ontologies are called “Goal Ontologies.” So, in this study we investigate an approach where the alignment of ontologies provides full semantic integration between distributed goal ontologies in the engineering domain, based on the Barwise and Seligman Information Flow (IF) model.
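The Barwise–Seligman apparatus these abstracts build on can be sketched minimally: a classification relates tokens to types, and an infomorphism is a contravariant pair of maps satisfying one fundamental condition. The two toy goal ontologies and all names below are invented for illustration, not taken from any of the papers:

```python
# A classification pairs tokens with the types that classify them
# (Barwise & Seligman, Information Flow). Token/type names here are invented.

class Classification:
    def __init__(self, incidence):
        # incidence: dict mapping each token -> set of types it satisfies
        self.incidence = incidence

    def classifies(self, token, typ):
        return typ in self.incidence.get(token, set())

def is_infomorphism(f_up, f_down, A, B):
    """Check the fundamental infomorphism condition between A and B:
    for every token b of B and type t of A,
    f_down(b) |=_A t  iff  b |=_B f_up(t)."""
    types_A = {t for ts in A.incidence.values() for t in ts}
    return all(
        A.classifies(f_down(b), t) == B.classifies(b, f_up(t))
        for b in B.incidence
        for t in types_A
    )

# Two toy goal ontologies describing the same services in different vocabularies.
A = Classification({"task1": {"move"}, "task2": {"grasp"}})
B = Classification({"job1": {"navigate"}, "job2": {"pick-up"}})

f_up = {"move": "navigate", "grasp": "pick-up"}.__getitem__   # types: A -> B
f_down = {"job1": "task1", "job2": "task2"}.__getitem__       # tokens: B -> A

print(is_infomorphism(f_up, f_down, A, B))  # True: the alignment is sound
```

The condition runs types one way and tokens the other; an alignment of goal ontologies is "sound" in the IF sense exactly when it passes this check for every token and type.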

“With a roadmap in place, the participating agencies and their countries will benefit enormously from a comprehensive, global approach to space exploration.”

‎"This article explores the pros and cons of both types of space exploration and hopefully will spark more discussion of this complex and highly political issue.

Unmanned Missions – going where no man has gone before - and maybe never will

Robotic space exploration has become the heavy lifter for serious space science. While shuttle launches and the International Space Station get all the media coverage, these small, relatively inexpensive unmanned missions are doing important science in the background.

Most scientists agree: both the shuttle (STS, the Space Transportation System) and the International Space Station are expensive and unproductive means of doing space science.
NASA has long touted the space station as the perfect platform to study space and the shuttle a perfect vehicle to build it. However, as early as 1990, 13 different science groups rejected the space station, citing huge expenses for small gains.

Shuttle disasters, first the Challenger and then Columbia’s catastrophic reentry in February 2003, have forced NASA to keep mum about crewed space exploration, and the International Space Station is on hold.
The last important media event promoting manned flight was Senator John Glenn’s ride in 1998 – ostensibly to do research on the effects of spaceflight on the human body, but widely seen by scientists as nothing but a publicity stunt.

Since each orbiter launch cost $420 million in 1998, it was the world’s most expensive publicity campaign to date. Proponents say the publicity is needed to support space program funding. Scientific groups assert the same money could have paid for two unmanned missions that do new science - not repeat similar experiments already performed by earlier missions.

Indeed, why do tests on the effects of zero gravity on humans anyway when they can sit comfortably behind consoles directing robotic probes from Earth?

Space is a hostile place for humans. All their needs must be met by bringing a hospitable environment up from a steep gravity well, the cost of which is enormous. Missions must be planned to avoid stressing our fragile organisms. We need food, water, and air, requiring complicated and heavy equipment. All this machinery needs to be monitored, reducing an astronaut’s available time to carry out experiments. Its sheer weight alone substantially reduces the useful payload.

The space shuttle is a hopelessly limited vehicle. It’s only capable of reaching low Earth orbit. Worse, the space station it services is placed in the same orbit, one that is not ideal for any type of space science. Because the station is so close to Earth, gravity constantly tugs at it, making it unstable for the fabrication of large crystals, part of NASA’s original plans but later nixed by the American Crystallographic Association.

To date, more than 20 scientific organizations worldwide have come out against the space station and are recommending the funds be used for more important unmanned missions.

NASA has gone so far as to create myths about economic spin-offs from manned spaceflight - the general idea being the enormous expense later results in useful technology that improves our lives. Items like Velcro, Tang and Teflon are popularly believed to have come from the space program or to have been invented by NASA. There is only one problem: they were not.

Shuttle launches are expensive: very expensive. Francis Slakey, a PhD physicist who writes for Scientific American about space said, “The shuttle’s cargo bay can carry 23,000 kilos (51,000 lbs) of payload and can return 14,500 kilos back to earth. Suppose that NASA loaded the Shuttle’s cargo bay with confetti to be launched into space. If every kilo of confetti miraculously turned into gold during the return trip, the mission would still lose $270 million.” This was written in 1999 when a shuttle flight cost $420 million.
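Slakey's confetti-to-gold figure can be sanity-checked with rough numbers. The gold price below (about $290 per troy ounce, a late-1990s level) is my assumption, not from the article:

```python
# Rough check of Slakey's confetti-to-gold argument with 1999-era numbers.
# The ~$290/troy-ounce gold price is an assumption (late-1990s level).

launch_cost = 420e6          # dollars per shuttle flight, 1999
return_payload_kg = 14_500   # kilograms the shuttle could bring back to Earth
gold_per_oz = 290            # USD per troy ounce (assumed)
oz_per_kg = 32.1507          # troy ounces per kilogram

gold_value = return_payload_kg * oz_per_kg * gold_per_oz
loss = launch_cost - gold_value
# gold value ~= $135M, net loss ~= $285M: in the ballpark of Slakey's $270M
print(f"gold value: ${gold_value/1e6:.0f}M, net loss: ${loss/1e6:.0f}M")
```

Even if the returned cargo were solid gold at period prices, the flight would run well over $200 million in the red, which is the point of the thought experiment.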

Currently, it’s estimated that the shuttle program’s average cost per flight has been about $1.3 billion over the program’s lifetime, and about $750 million per launch over its most recent five years of operations. This total includes development costs and numerous safety modifications. That means each shuttle launch could pay for two to three unmanned missions.

While recent failures have dented the success rate of unmanned missions, they have still managed to keep space programs alive - not just for the US, but for Russia, Japan and China as well.

Mars Pathfinder and the Mars Exploration Rovers have succeeded beyond the expectations of their designers and continue to deliver important data to earthbound scientists."

Part I [sections 2–4] draws out the conceptual links between modern conceptions of teleology and their Aristotelian predecessor, briefly outlines the mode of functional analysis employed to explicate teleology, and develops the notion of cybernetic organisation in order to distinguish teleonomic and teleomatic systems.... Part II is concerned with arriving at a coherent notion of intentionality...

Some Considerations Regarding Mathematical Semiosis:

More Than Life Itself: A Synthetic Continuation in Relational Biology:

Complexity, Artificial Life and Self-Organizing Systems Glossary: More

Praefatio: Unus non sufficit orbis xiii

Nota bene xxiii

Prolegomenon: Concepts from Logic 1
...In principio... 1
Subset 2
Conditional Statements and Variations 3
Mathematical Truth 6
Necessity and Sufficiency 11
Complements 14
Neither More Nor Less 17

PART I: Exordium 21

1 Praeludium: Ordered Sets 23
Mappings 23
Equivalence Relations 28
Partially Ordered Sets 31
Totally Ordered Sets 37

2 Principium: The Lattice of Equivalence Relations 39
Lattices 39
The Lattice X4


Mappings and Equivalence Relations 50
Linkage 54
Representation Theorems 59

3 Continuatio: Further Lattice Theory 61
Modularity 61
Distributivity 63
Complementarity 64
Equivalence Relations and Products 68
Covers and Diagrams 70
Semimodularity 74
Chain Conditions 75

PART II: Systems, Models, and Entailment 81

4 The Modelling Relation 83
Dualism 83
Natural Law 88
Model versus Simulation 91
The Prototypical Modelling Relation 95
The General Modelling Relation 100

5 Causation 105
Aristotelian Science 105
Aristotle’s Four Causes 109
Connections in Diagrams 114
In beata spe 127

6 Topology 131
Network Topology 131
Traversability of Relational Diagrams 138
The Topology of Functional Entailment Paths 142
Algebraic Topology 150
Closure to Efficient Causation 156


PART III: Simplex and Complex 161

7 The Category of Formal Systems 163
Categorical System Theory 163
Constructions in S 167
Hierarchy of S-Morphisms and Image Factorization 173
The Lattice of Component Models 176
The Category of Models 183
The $ and the : 187
Analytic Models and Synthetic Models 189
The Amphibology of Analysis and Synthesis 194

8 Simple Systems 201
Simulability 201
Impredicativity 206
Limitations of Entailment and Simulability 209
The Largest Model 212
Minimal Models 214
Sum of the Parts 215
The Art of Encoding 217
The Limitations of Entailment in Simple Systems 221

9 Complex Systems 229
Dichotomy 229
Relational Biology 233

PART IV: Hypotheses fingo 237

10 Anticipation 239
Anticipatory Systems 239
Causality 245
Teleology 248
Synthesis 250
Lessons from Biology 255
An Anticipatory System is Complex 256


11 Living Systems 259
A Living System is Complex 259
(M,R)-Systems 262
Interlude: Reflexivity 272
Traversability of an (M,R)-System 278
What is Life? 281
The New Taxonomy 284

12 Synthesis of (M,R)-Systems 289
Alternate Encodings of the Replication Component 289
Replication as a Conjugate Isomorphism 291
Replication as a Similarity Class 299
Traversability 303

PART V: Epilogus 309

13 Ontogenic Vignettes 311
(M,R)-Networks 311
Anticipation in (M,R)-Systems 318
Semiconservative Replication 320
The Ontogenesis of (M,R)-Systems 324

Appendix: Category Theory 329
Categories · Functors · Natural Transformations 330
Universality 348
Morphisms and Their Hierarchies 360
Adjoints 364

Bibliography 373

Acknowledgments 377

Index 379



Unus non sufficit orbis

In my mentor Robert Rosen’s iconoclastic masterwork Life Itself [1991], which dealt with the epistemology of life, he proposed a Volume 2 that was supposed to deal with the ontogeny of life. As early as 1990, before Life Itself (i.e., ‘Volume 1’) was even published (he had just then signed a contract with Columbia University Press), he mentioned to me in our regular correspondence that Volume 2 was “about half done”. Later, in his 1993 Christmas letter to me, he wrote:

...I’ve been planning a companion volume [to Life Itself] dealing with ontology. Well, that has seeped into every aspect of everything else, and I think I’m about to make a big dent in a lot of old problems. Incidentally, that book [Life Itself] has provoked a very large response, and I’ve been hearing from a lot of people, biologists and others, who have been much dissatisfied with prevailing dogmas, but had no language to articulate their discontents. On the other hand, I’ve outraged the “establishment”. The actual situation reminds me of when I used to travel in Eastern Europe in the old days, when everyone was officially a Dialectical Materialist, but unofficially, behind closed doors, nobody was a Dialectical Materialist.

Cheating the Millennium: The Mounting Explanatory Debts of Scientific Naturalism

2003 Christopher Michael Langan

1. Introduction: Thesis + Antithesis = Synthesis
2. Two Theories of Biological Causality
3. Causality According to Intelligent Design Theory
4. Causality According to Neo-Darwinism
5. A Deeper Look at Causality: The Connectivity Problem
6. The Dualism Problem
7. The Structure Problem
8. The Containment Problem
9. The Utility (Selection) Problem
10. The Stratification Problem
11. Synthesis: Some Essential Features of a Unifying Model of Nature and Causality
The Utility (Selection) Problem
As we have just noted, deterministic causality transforms the states of preexisting objects according to preexisting laws associated with an external medium. Where this involves or produces feedback, the feedback is of the conventional cybernetic variety; it transports information through the medium from one location to another and then back again, with transformations at each end of the loop. But where objects, laws and media do not yet exist, this kind of feedback is not yet possible. Accordingly, causality must be reformulated so that it can not only transform the states of natural systems, but account for self-deterministic relationships between states and laws of nature. In short, causality must become metacausality.([35]Metacausality is the causal principle or agency responsible for the origin or “causation” of causality itself (in conjunction with state). This makes it responsible for its own origin as well, ultimately demanding that it self-actualize from an ontological groundstate consisting of unbound ontic potential.)

Self-determination involves a generalized atemporal([36]Where time is defined on physical change, metacausal processes that affect potentials without causing actual physical changes are by definition atemporal.) kind of feedback between physical states and the abstract laws that govern them. Whereas ordinary cybernetic feedback consists of information passed back and forth among controllers and regulated entities through a preexisting conductive or transmissive medium according to ambient sensory and actuative protocols – one may think of the Internet, with its closed informational loops and preexisting material processing nodes and communication channels, as a ready example - self-generative feedback must be ontological and telic rather than strictly physical in character.([37]Telesis is a convergent metacausal generalization of law and state, where law relates to state roughly as the syntax of a language relates to its expressions through generative grammar…but with the additional stipulation that as a part of syntax, generative grammar must in this case generate itself along with state. Feedback between syntax and state may thus be called telic feedback.) That is, it must be defined in such a way as to “metatemporally” bring the formal structure of cybernetics and its physical content into joint existence from a primitive, undifferentiated ontological groundstate. To pursue our example, the Internet, beginning as a timeless self-potential, would have to self-actualize, in the process generating time and causality.

But what is this ontological groundstate, and what is a “self-potential”? For that matter, what are the means and goal of cosmic self-actualization? The ontological groundstate may be somewhat simplistically characterized as a complete abeyance of binding ontological constraint, a sea of pure telic potential or “unbound telesis”. Self-potential can then be seen as a telic relationship of two lower kinds of potential: potential states, the possible sets of definitive properties possessed by an entity along with their possible values, and potential laws (nomological syntax) according to which states are defined, recognized and transformed.([38]Beyond a certain level of specificity, no detailed knowledge of state or law is required in order to undertake a generic logical analysis of telesis.) Thus, the ontological groundstate can for most purposes be equated with all possible state-syntax relationships or “self-potentials”, and the means of self-actualization is simply a telic, metacausal mode of recursion through which telic potentials are refined into specific state-syntax configurations. The particulars of this process depend on the specific model universe – and in light of dual-aspect monism, the real self-modeling universe - in which the telic potential is actualized.

And now we come to what might be seen as the pivotal question: what is the goal of self-actualization?

‎Conveniently enough, this question contains its own answer: self-actualization, a generic analogue of Aristotelian final causation and thus of teleology, is its own inevitable outcome and thus its own goal.([39]To achieve causal closure with respect to final causation, a metacausal agency must self-configure in such a way that it relates to itself as the ultimate utility, making it the agency, act and product of its own self-configuration. This 3-way coincidence, called triality, follows from self-containment and implies that self-configuration is intrinsically utile, thus explaining its occurrence in terms of intrinsic utility.) Whatever its specific details may be, they are actualized by the universe alone, and this means that they are mere special instances of cosmic self-actualization. Although the word “goal” has subjective connotations – for example, some definitions stipulate that a goal must be the object of an instinctual drive or other subjective impulse – we could easily adopt a reductive or functionalist approach to such terms, taking them to reduce or refer to objective features of reality. Similarly, if the term “goal” implies some measure of design or pre-formulation, then we could easily observe that natural selection does so as well, for nature has already largely determined what “designs” it will accept for survival and thereby render fit.
Given that the self-containment of nature implies causal closure implies self-determinism implies self-actualization, how is self-actualization to be achieved? Obviously, nature must select some possible form in which to self-actualize. Since a self-contained, causally closed universe does not have the luxury of external guidance, it needs to generate an intrinsic self-selection criterion in order to do this. Since utility is the name already given to the attribute which is maximized by any rational choice function, and since a totally self-actualizing system has the privilege of defining its own standard of rationality([40]It might be objected that the term “rationality” has no place in the discussion…that there is no reason to assume that the universe has sufficient self-recognitional coherence or “consciousness” to be “rational”. However, since the universe does indeed manage to consistently self-recognize and self-actualize in a certain objective sense, and these processes are to some extent functionally analogous to human self-recognition and self-actualization, we can in this sense and to this extent justify the use of terms like “consciousness” and “rationality” to describe them. This is very much in the spirit of such doctrines as physical reductionism, functionalism and eliminativism, which assert that such terms devolve or refer to objective physical or functional relationships. Much the same reasoning applies to the term utility.), we may as well speak of this self-selection criterion in terms of global or generic self-utility. That is, the self-actualizing universe must generate and retrieve information on the intrinsic utility content of various possible forms that it might take.

The utility concept bears more inspection than it ordinarily gets. Utility often entails a subject-object distinction; for example, the utility of an apple in a pantry is biologically and psychologically generated by a more or less conscious subject of whom its existence is ostensibly independent, and it thus makes little sense to speak of its “intrinsic utility”. While it might be asserted that an apple or some other relatively non-conscious material object is “good for its own sake” and thus in possession of intrinsic utility, attributing self-interest to something implies that it is a subject as well as an object, and thus that it is capable of subjective self-recognition.([41]In computation theory, recognition denotes the acceptance of a language by a transducer according to its programming or “transductive syntax”. Because the universe is a self-accepting transducer, this concept has physical bearing and implications.) To the extent that the universe is at once an object of selection and a self-selective subject capable of some degree of self-recognition, it supports intrinsic utility (as does any coherent state-syntax relationship). An apple, on the other hand, does not seem at first glance to meet this criterion.

But a closer look again turns out to be warranted. Since an apple is a part of the universe and therefore embodies its intrinsic self-utility, and since the various causes of the apple (material, efficient and so on) can be traced back along their causal chains to the intrinsic causation and utility of the universe, the apple has a certain amount of intrinsic utility after all. This is confirmed when we consider that its taste and nutritional value, wherein reside its utility for the person who eats it, further its genetic utility by encouraging its widespread cultivation and dissemination. In fact, this line of reasoning can be extended beyond the biological realm to the world of inert objects, for in a sense, they too are naturally selected for existence. Potentials that obey the laws of nature are permitted to exist in nature and are thereby rendered “fit”, while potentials that do not are excluded.([42]The concept of potential is an essential ingredient of physical reasoning. Where a potential is a set of possibilities from which something is actualized, potential is necessary to explain the existence of anything in particular (as opposed to some other partially equivalent possibility).) So it seems that in principle, natural selection determines the survival of not just actualities but potentials, and in either case it does so according to an intrinsic utility criterion ultimately based on global self-utility.

‎It is important to be clear on the relationship between utility and causality. Utility is simply a generic selection criterion essential to the only cosmologically acceptable form of causality, namely self-determinism. The subjective gratification associated with positive utility in the biological and psychological realms is ultimately beside the point. No longer need natural processes be explained under suspicion of anthropomorphism; causal explanations need no longer implicitly refer to instinctive drives and subjective motivations. Instead, they can refer directly to a generic objective “drive”, namely intrinsic causality…the “drive” of the universe to maximize an intrinsic self-selection criterion over various relational strata within the bounds of its internal constraints.([43]Possible constraints include locality, uncertainty, blockage, noise, interference, undecidability and other intrinsic features of the natural world.) Teleology and scientific naturalism are equally satisfied; the global self-selection imperative to which causality necessarily devolves is a generic property of nature to which subjective drives and motivations necessarily “reduce”, for it distributes by embedment over the intrinsic utility of every natural system.

Intrinsic utility and natural selection relate to each other as both reason and outcome. When an evolutionary biologist extols the elegance or effectiveness of a given biological “design” with respect to a given function, as in “the wings of a bird are beautifully designed for flight”, he is really talking about intrinsic utility, with which biological fitness is thus entirely synonymous. Survival and its requisites have intrinsic utility for that which survives, be it an organism or a species; that which survives derives utility from its environment in order to survive and as a result of its survival. It follows that neo-Darwinism, a theory of biological causation whose proponents have tried to restrict it to determinism and randomness, is properly a theory of intrinsic utility and thus of self-determinism. Although neo-Darwinists claim that the kind of utility driving natural selection is non-teleological and unique to the particular independent systems being naturally selected, this claim is logically insupportable. Causality ultimately boils down to the tautological fact that on all possible scales, nature is both that which selects and that which is selected, and this means that natural selection is ultimately based on the intrinsic utility of nature at large.

The Stratification Problem
It is frequently taken for granted that neo-Darwinism and ID theory are mutually incompatible, and that if one is true, then the other must be false. But while this assessment may be accurate with regard to certain inessential propositions attached to the core theories like pork-barrel riders on congressional bills([44]Examples include the atheism and materialism riders often attached to neo-Darwinism, and the Biblical Creationism rider often mistakenly attached to ID theory.), it is not so obvious with regard to the core theories themselves. In fact, these theories are dealing with different levels of causality.

"[45] Cognitive-perceptual syntax consists of (1) sets, posets or tosets of attributes (telons), (2) perceptual rules of external attribution for mapping external relationships into telons, (3) cognitive rules of internal attribution for cognitive (internal, non-perceptual) state-transition, and (4) laws of dependency and conjugacy according to which perceptual or cognitive rules of external or internal attribution may or may not act in a particular order or in simultaneity."

"Measurement is aimed at assigning a value to “a quantity of a thing”. Therefore a clear statement of what a quantity is appears to be a required condition to interpret unambiguously the results of a measurement. However, the concept of quantity is seldom analyzed in detail in even the foundational works of metrology, and far too often, quantities are “defined” in terms of attributes, characteristics, qualities, etc. while leaving such terms in themselves undefined. The aim of this paper is to discuss the meaning of “quantity” (or, as it will be adopted here for the sake of generality, “attribute”) as it is used in measurement, also drawing several conclusions on the concept of measurement itself.

Author Keywords: Measurement theory; Measurement science; Concept of quantity"


"The fundamental mechanism of Operator Grammar is the dependency constraint: certain words (operators) require that one or more words (arguments) be present in an utterance. In the sentence John wears boots, the operator wears requires the presence of two arguments, such as John and boots. (This definition of dependency differs from other dependency grammars in which the arguments are said to depend on the operators.)

In each language the dependency relation among words gives rise to syntactic categories in which the allowable arguments of an operator are defined in terms of their dependency requirements. Class N contains words (e.g. John, boots) that do not require the presence of other words. Class ON contains the words (e.g. sleeps) that require exactly one word of type N. Class ONN contains the words (e.g. wears) that require two words of type N. Class OOO contains the words (e.g. because) that require two words of type O, as in John stumbles because John wears boots. Other classes include OO (is possible), ONNN (put), OON (with, surprise), ONO (know), ONNO (ask) and ONOO (attribute).

The categories in Operator Grammar are universal and are defined purely in terms of how words relate to other words, and do not rely on an external set of categories such as noun, verb, adjective, adverb, preposition, conjunction, etc. The dependency properties of each word are observable through usage and therefore learnable."
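The dependency constraint and the N/ON/ONN/OOO classes described above can be checked mechanically. The following is only a minimal sketch, not an implementation of Harris's grammar: the lexicon, the tuple encoding of utterances, and the `satisfies` function are all invented for illustration.

```python
# Minimal sketch of Operator Grammar's dependency constraint.
# Each word is assigned a class naming the argument classes it requires:
# "N" takes nothing; "ONN" takes two N-type arguments; "OOO" takes two
# O-type (operator-headed) utterances. Lexicon entries are illustrative.
LEXICON = {
    "John": "N", "boots": "N",
    "sleeps": "ON", "stumbles": "ON",
    "wears": "ONN",
    "because": "OOO",
}

def satisfies(word, args):
    """Check that `args` (sub-utterance trees) meet the dependency
    requirement encoded by the word's class."""
    required = LEXICON[word][1:]          # e.g. "ONN" -> needs "N", "N"
    if len(args) != len(required):
        return False
    for need, arg in zip(required, args):
        head_class = LEXICON[arg[0]]
        # An "O" slot accepts any operator class (ON, ONN, OOO, ...).
        if need == "N" and head_class != "N":
            return False
        if need == "O" and not head_class.startswith("O"):
            return False
        if not satisfies(arg[0], arg[1:]):
            return False
    return True

# "John wears boots" as an operator-argument tree:
s1 = ("wears", ("John",), ("boots",))
# "John stumbles because John wears boots":
s2 = ("because", ("stumbles", ("John",)), ("wears", ("John",), ("boots",)))

print(satisfies(s1[0], s1[1:]))         # True
print(satisfies(s2[0], s2[1:]))         # True
print(satisfies("wears", [("John",)]))  # False: wears needs two N arguments
```

Note that the check is purely word-to-word, mirroring the claim that the categories need no external notions of noun or verb.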

"As an example of the tautological nature of MAP, consider a hypothetical external scale of distance or duration in terms of which the absolute size or duration of the universe or its contents can be defined. Due to the analytic self-containment of reality, the functions and definitions comprising its self-descriptive manifold refer only to each other; anything not implicated in its syntactic network is irrelevant to structure and internally unrecognizable, while anything which is relevant is already an implicit ingredient of the network and need not be imported from outside. This implies that if the proposed scale is relevant, then it is not really external to reality; in fact, reality already contains it as an implication of its intrinsic structure.

In other words, because reality is defined on the mutual relevance of its essential parts and aspects, external and irrelevant are synonymous; if something is external to reality, then it is not included in the syntax of reality and is thus internally unrecognizable. It follows that with respect to that level of reality defined on relevance and recognition, there is no such thing as a “real but external” scale, and thus that the universe is externally undefined with respect to all measures including overall size and duration. If an absolute scale were ever to be internally recognizable as an ontological necessity, then this would simply imply the existence of a deeper level of reality to which the scale is intrinsic and by which it is itself intrinsically explained as a relative function of other ingredients. Thus, if the need for an absolute scale were ever to become recognizable within reality – that is, recognizable to reality itself - it would by definition be relative in the sense that it could be defined and explained in terms of other ingredients of reality. In this sense, MAP is a “general principle of relativity”.
The Principle of Attributive (Topological-Descriptive, State-Syntax) Duality

Where points belong to sets and lines are relations between points, a form of duality also holds between sets and relations or attributes, and thus between set theory and logic. Where sets contain their elements and attributes distributively describe their arguments, this implies a dual relationship between topological containment and descriptive attribution as modeled through Venn diagrams. Essentially, any containment relationship can be interpreted in two ways: in terms of position with respect to bounding lines or surfaces or hypersurfaces, as in point set topology and its geometric refinements (⊃T), or in terms of descriptive distribution relationships, as in the Venn-diagrammatic grammar of logical substitution (⊃D).
Because states express topologically while the syntactic structures of their underlying operators express descriptively, attributive duality is sometimes called state-syntax duality. As information requires syntactic organization, it amounts to a valuation of cognitive/perceptual syntax; conversely, recognition consists of a subtractive restriction of informational potential through an additive acquisition of information. TD duality thus relates information to the informational potential bounded by syntax, and perception (cognitive state acquisition) to cognition.

In a Venn diagram, the contents of circles reflect the structure of their boundaries; the boundaries are the primary descriptors. The interior of a circle is simply an “interiorization” or self-distribution of its syntactic “boundary constraint”. Thus, nested circles corresponding to identical objects display a descriptive form of containment corresponding to syntactic layering, with underlying levels corresponding to syntactic coverings.

This leads to a related form of duality, constructive-filtrative duality.
‎"However, this ploy does not always work. Due to the longstanding scientific trend toward physical reductionism, the buck often gets passed to physics, and because physics is widely considered more fundamental than any other scientific discipline, it has a hard time deferring explanatory debts mailed directly to its address. Some of the explanatory debts for which physics is holding the bag are labeled “causality”, and some of these bags were sent to the physics department from the evolutionary biology department. These debt-filled bags were sent because the evolutionary biology department lacked the explanatory resources to pay them for itself. Unfortunately, physics can’t pay them either.

The reason that physics cannot pay explanatory debts generated by various causal hypotheses is that it does not itself possess an adequate understanding of causality. This is evident from the fact that in physics, events are assumed to be either deterministic or nondeterministic in origin. Given an object, event, set or process, it is usually assumed to have come about in one of just two possible ways: either it was brought about by something prior and external to it, or it sprang forth spontaneously as if by magic. The prevalence of this dichotomy, determinacy versus randomness, amounts to an unspoken scientific axiom asserting that everything in the universe is ultimately either a function of causes external to the determined entity (up to and including the universe itself), or no function of anything whatsoever. In the former case there is a known or unknown explanation, albeit external; in the latter case, there is no explanation at all. In neither case can the universe be regarded as causally self-contained.

To a person unused to questioning this dichotomy, there may seem to be no middle ground. It may indeed seem that where events are not actively and connectively produced according to laws of nature, there is nothing to connect them, and thus that their distribution can only be random, patternless and meaningless. But there is another possibility after all: self-determinacy. Self-determinacy involves a higher-order generative process that yields not only the physical states of entities, but the entities themselves, the abstract laws that govern them, and the entire system which contains and coherently relates them. Self-determinism is the causal dynamic of any system that generates its own components and properties independently of prior laws or external structures. Because self-determinacy involves nothing of a preexisting or external nature, it is the only type of causal relationship suitable for a causally self-contained system.

In a self-deterministic system, causal regression leads to a completely intrinsic self-generative process. In any system that is not ultimately self-deterministic, including any system that is either random or deterministic in the standard extrinsic sense, causal regression terminates at null causality or does not terminate. In either of the latter two cases, science can fully explain nothing; in the absence of a final cause, even material and efficient causes are subject to causal regression toward ever more basic (prior and embedding) substances and processes, or if random in origin, toward primitive acausality. So given that explanation is largely what science is all about, science would seem to have no choice but to treat the universe as a self-deterministic, causally self-contained system. ([34]In any case, the self-containment of the real universe is implied by the following contradiction: if there were any external entity or influence that were sufficiently real to affect the real universe, then by virtue of its reality, it would by definition be internal to the real universe.)

And thus do questions about evolution become questions about the self-generation of causally self-contained, self-emergent systems. In particular, how and why does such a system self-generate?

Complex Systems from the Perspective of Category Theory: II. Covering Systems and Sheaves

"Motivated by foundational studies concerning the modelling and analysis of complex systems we propose a scheme based on category theoretical methods and concepts [1-7]. The essence of the scheme is the development of a coherent relativistic perspective in the analysis of information structures associated with the behavior of complex systems, effected by families of partial or local information carriers. It is claimed that the appropriate specification of these families, as being capable of encoding the totality of the content, engulfed in an information structure, in a preserving fashion, necessitates the introduction of compatible families, constituting proper covering systems of information structures. In this case the partial or local coefficients instantiated by contextual information carriers may be glued together forming a coherent sheaf theoretical structure [8-10], that can be made isomorphic with the original operationally or theoretically introduced information structure. Most importantly, this philosophical stance is formalized categorically, as an instance of the adjunction concept. In the same mode of thinking, the latter may be used as a formal tool for the expression of an invariant property, underlying the noetic picturing of an information structure attached formally with a complex system as a manifold. The conceptual grounding of the scheme is interwoven with the interpretation of the adjunctive correspondence between variable sets of information carriers and information structures, in terms of a communicative process of encoding and decoding."

Concurrent ontology and the extensional conception of attribute

"By analogy with the extension of a type as the set of individuals of that type, we define the extension of an attribute as the set of states of an idealized observer of that attribute, observing concurrently with observers of other attributes. The attribute-theoretic counterpart of an operation mapping individuals of one type to individuals of another is a dependency mapping states of one attribute to states of another. We integrate attributes with types via a symmetric but not self-dual framework of dipolar algebras or disheaves amounting to a type-theoretic notion of Chu space over a family of sets of qualia doubly indexed by type and attribute, for example the set of possible colors of a ball or heights of buildings. We extend the sheaf-theoretic basis for type theory to a notion of disheaf on a profunctor. Applications for this framework include the Web Ontology Language OWL, UML, relational databases, medical information systems, geographic databases, encyclopedias, and other data-intensive areas standing to benefit from a precise ontological framework coherently accommodating types and attributes.

Keywords: Attribute, Chu space, ontology, presheaf, type."
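A small concrete example may help with the abstract. Chu spaces are commonly presented as a set of points, a set of states, and a matrix of values over the pairs; the `Chu` class, the ball/attribute data, and the `dual` method below are illustrative assumptions of mine, not the paper's disheaf formalism.

```python
# A minimal sketch of a Chu space over a set K of "qualia": a set of
# individuals (points), a set of attribute-observer states (states), and
# a matrix r assigning a value in K to each (point, state) pair.
# The balls/colours data are invented for illustration.
class Chu:
    def __init__(self, points, states, r):
        self.points, self.states, self.r = points, states, r

    def value(self, point, state):
        return self.r[(point, state)]

    def dual(self):
        # Chu duality swaps the roles of points and states.
        flipped = {(s, p): v for (p, s), v in self.r.items()}
        return Chu(self.states, self.points, flipped)

balls = ["ball1", "ball2"]
attrs = ["colour", "size"]
r = {
    ("ball1", "colour"): "red",  ("ball1", "size"): "small",
    ("ball2", "colour"): "blue", ("ball2", "size"): "large",
}
space = Chu(balls, attrs, r)
print(space.value("ball1", "colour"))       # red
print(space.dual().value("size", "ball2"))  # large
```

The "symmetric but not self-dual" point in the abstract is visible even here: swapping points and states is mechanical, but the roles (individuals of a type versus observer states of an attribute) are not interchangeable in meaning.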

"Now going back to our subject and the facts upheld by materialists. They state that inasmuch as it is proven and upheld by science that the life of phenomena depends upon composition and their destruction upon disintegration, then where comes in the need or necessity of a Creator -- the self-subsistent Lord?

For if we see with our own eyes that these infinite beings go through myriads of compositions and in every composition appearing under a certain form showing certain characteristic virtues, then we are independent of any divine maker.

597. This is the argument of the materialists. On the other hand those who are informed of divine philosophy answer in the following terms:

Composition is of three kinds.

1. Accidental composition.
2. Involuntary composition.
3. Voluntary composition.

There is no fourth kind of composition. Composition is restricted to these three categories.

If we say that composition is accidental, this is philosophically a false theory, because then we have to believe in an effect without a cause, and philosophically, no effect is conceivable without a cause. We cannot think of an effect without some primal cause, and composition being an effect, there must naturally be a cause behind it.
738. Consequently, the great divine philosophers have had the following epigram: All things are involved in all things. For every single phenomenon has enjoyed the postulates of God, and in every form of these infinite electrons it has had its characteristics of perfection.

Thus this flower once upon a time was of the soil. The animal eats the flower or its fruit, and it thereby ascends to the animal kingdom. Man eats the meat of the animal, and there you have its ascent into the human kingdom, because all phenomena are divided into that which eats and that which is eaten. Therefore, every primordial atom of these atoms, singly and indivisibly, has had its coursings throughout all the sentient creation, going constantly into the aggregation of the various elements. Hence do you have the conservation of energy and the infinity of phenomena, the indestructibility of phenomena, changeless and immutable, because life cannot suffer annihilation but only change.

The apparent annihilation is this: that the form, the outward image, goes through all these changes and transformations. Let us again take the example of this flower. The flower is indestructible. The only thing that we can see, this outer form, is indeed destroyed, but the elements, the indivisible elements which have gone into the composition of this flower are eternal and changeless. Therefore the realities of all phenomena are immutable. Extinction or mortality is nothing but the transformation of pictures and images, so to speak -- the reality back of these images is eternal. And every reality of the realities is one of the bounties of God.

Some people believe that the divinity of God had a beginning.

They say that before this particular beginning man had no knowledge of the divinity of God. With this principle they have limited the operation of the influences of God.

For example, they think there was a time when man did not exist, and that there will be a time in the future when man will not exist. Such a theory circumscribes the power of God, because how can we understand the divinity of God except through scientifically understanding the manifestations of the attributes of God?

How can we understand the nature of fire except from its heat, its light? Were not heat and light in this fire, naturally we could not say that the fire existed.

Thus, if there was a time when God did not manifest His qualities, then there was no God, because the attributes of God presuppose the creation of phenomena. For example, by present consideration we say that God is the creator. Then there must always have been a creation -- since the quality of creator cannot be limited to the moment when some man or men realize this attribute. The attributes that we discover one by one -- these attributes themselves necessarily anticipated our discovery of them. Therefore, God has no beginning and no ending; nor is His creation limited ever as to degree. Limitations of time and degree pertain to the forms of things, not to their realities. The effulgence of God cannot be suspended. The sovereignty of God cannot be interrupted.

As long as the sovereignty of God is immemorial, therefore the creation of our world throughout infinity is presupposed. When we look at the reality of this subject, we see that the bounties of God are infinite, without beginning and without end."

"In practice, systems incorporating reactive planning tend to be autonomous systems proactively pursuing at least one, and often many, goals. What defines anticipation in an AI model is the explicit existence of an inner model of the environment for the anticipatory system (sometimes including the system itself). For example, if the phrase it will probably rain were computed online in real time, the system would be seen as anticipatory.

In 1985, Robert Rosen defined an anticipatory system as follows [1]:

A system containing a predictive model of itself and/or its environment, which allows it to change state at an instant in accord with the model's predictions pertaining to a later instant.

In Rosen's work, analysis of the example "It's raining outside, therefore take the umbrella" does involve a prediction. It involves the prediction that "If it is raining, I will get wet out there unless I have my umbrella". In that sense, even though it is already raining outside, the decision to take an umbrella is not a purely reactive thing. It involves the use of predictive models which tell us what will happen if we don't take the umbrella, when it is already raining outside.

To some extent, Rosen's definition of anticipation applies to any system incorporating machine learning. At issue is how much of a system's behaviour should or indeed can be determined by reasoning over dedicated representations, how much by on-line planning, and how much must be provided by the system's designers."
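Rosen's definition can be caricatured in a few lines: the agent's action at the current instant depends on its internal model's prediction about a later instant, not just on the current percept. Everything here (the model, the state names) is an invented toy, not Rosen's formalism.

```python
# A toy anticipatory agent in Rosen's sense: it holds an internal model
# of the environment and changes state now in accord with what the model
# predicts for a *later* instant.
def predictive_model(observation):
    """Internal model: predict a later consequence of the current state."""
    if observation == "raining":
        return "will_get_wet_without_umbrella"
    return "will_stay_dry"

def act(observation):
    # A purely reactive agent would condition only on `observation`;
    # this agent conditions on the model's prediction about the future.
    prediction = predictive_model(observation)
    if prediction == "will_get_wet_without_umbrella":
        return "take_umbrella"
    return "leave_umbrella"

print(act("raining"))  # take_umbrella
print(act("clear"))    # leave_umbrella
```

The umbrella decision in the quoted example is exactly this pattern: the trigger is not the rain itself but the predicted future state "I will get wet".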

"In order to take advantage of model-based behavior it is necessary to be able to properly describe the surroundings in terms of how they are perceived. Such description processes are inductive and not recursively describable. That a system can perceive and describe its own surroundings means further that it has a learning capability. Learning is the process of making order out of disorder, and this is precisely the most distinguishing quality of inductive inference. Genuine learning without inductive capability is impossible. The implication of this is that systems that have a model of their surroundings cannot be implemented on computers, nor can computers be learning devices, contrary to what is believed in the area of machine learning."

"Deduction. Apply a general principle to infer some fact.
Induction. Assume a general principle that explains many facts.
Abduction. Guess a new fact that implies some given fact.
Ibn Taymiyya admitted that deduction in mathematics is certain. But in any empirical subject, universal propositions can only be derived by induction, and induction must be guided by the same principles of evidence and relevance used in analogy. Figure 3 illustrates his argument: Deduction proceeds from a theory containing universal propositions. But those propositions must have earlier been derived by induction with the same criteria used for analogy. The only difference is that induction produces a theory as intermediate result, which is then used in a subsequent process of deduction. By using analogy directly, legal reasoning dispenses with the intermediate theory and goes straight from cases to conclusion. If the theory and the analogy are based on the same evidence, they must lead to the same conclusions.
In analogical reasoning, the question Q leads to the same schematic anticipation, but instead of triggering the if-then rules of some theory, the unknown aspects of Q lead to the cases from which a theory could have been derived. The case that gives the best match to the given case P may be assumed as the best source of evidence for estimating the unknown aspects of Q; the other cases show possible alternatives. For each new case P′, the same principles of evidence, relevance, and significance must be used. The same kinds of operations used in induction and deduction are used to relate the question Q to some corresponding part Q′ of the case P′. The closer the agreement among the alternatives for Q, the stronger the evidence for the conclusion. In effect, the process of induction creates a one-size-fits-all theory, which can be used to solve many related problems by deduction. Case-based reasoning, however, is a method of bespoke tailoring for each problem, yet the operations of stitching propositions are the same for both.

Creating a new theory that covers multiple cases typically requires new categories in the type hierarchy. To characterize the effects of analogies and metaphors, Way (1991) proposed dynamic type hierarchies, in which two or more analogous cases are generalized to a more abstract type T that subsumes all of them. The new type T also subsumes other possibilities that may combine aspects of the original cases in novel arrangements. Sowa (2000) embedded the hierarchies in an infinite lattice of all possible theories. Some of the theories are highly specialized descriptions of just a single case, and others are very general. The most general theory at the top of the lattice contains only tautologies, which are true of everything. At the bottom is the contradictory or absurd theory, which is true of nothing. The theories are related by four operators: analogy and the belief revision operators of contraction, expansion, and revision (Alchourrón et al. 1985). These four operators define pathways through the lattice (Figure 4), which determine all possible ways of deriving new theories from old ones."
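The three inference modes defined at the top of this passage can be sketched over toy facts. The bird/flies rule and the function names are illustrative only; real induction would of course be guided by the criteria of evidence and relevance discussed above, not by a bare `all(...)` check.

```python
# Sketch of the three inference modes from the quoted passage,
# over an invented rule "birds fly" encoded as (antecedent, consequent).
rule = ("bird", "flies")

def deduction(fact, rule):
    """Deduction: apply a general principle to infer some fact."""
    antecedent, consequent = rule
    return consequent if fact == antecedent else None

def induction(observations):
    """Induction: assume a general principle that explains many facts.
    Observations are (individual, flies?) pairs about birds."""
    if observations and all(flies for _, flies in observations):
        return ("bird", "flies")
    return None

def abduction(observed, rule):
    """Abduction: guess a new fact that implies some given fact."""
    antecedent, consequent = rule
    return antecedent if observed == consequent else None

print(deduction("bird", rule))                         # flies
print(induction([("tweety", True), ("polly", True)]))  # ('bird', 'flies')
print(abduction("flies", rule))                        # bird
```

Ibn Taymiyya's point, in these terms, is that `rule` is never given for empirical subjects: it is itself the output of `induction` over cases, so reasoning case-to-case by analogy rests on the same evidence as reasoning through the intermediate theory.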

"Landauer and Bellman define semiotics as “the study of the appearance (visual or otherwise), meaning, and use of symbols and symbol systems.” From their examination of classification by biological systems, they conclude that it would require a radical shift in how symbols are represented in computers to emulate the biological classification process in hardware. However, they argue that semiotic theory should provide the theoretical basis for just such a radical shift. Landauer and Bellman do not claim to have discovered the unifying semiotic principle of pattern-recognition, but they suggest that it must be inductive in character.

Indeed, the development of a unified inductive-learning model is the key to artificial intelligence. Induction is defined as a mode of reasoning that increases the information content of a given body of data. The application to pattern-recognition in general is obvious. An inductive pattern recognizer would learn the common characterizing attributes of all (possibly infinitely many) members of a class from observation of a finite (preferably small) set of samples from the class and a finite set of samples not from the class. The problem arises because the commonly used “learning” paradigms (neural nets, nearest neighbor algorithms, etc.) are based on identifying boundaries between classes, and are incapable of inductive learning.

How then should induction be performed? The leading thinkers in machine intelligence believe it should somehow emulate the process used in biological systems. That process appears to be model-based. Rosen provides an explanation for anticipatory behavior of biological systems in terms of interacting models.
Rosen shows that traditional reductionist modeling does not provide simple explanations for complex behavior. What seems to be complex behavior in such models is in fact an artifact of extrapolating the model outside its effective range. Genuine complex behavior must be described by anticipatory modeling. In Rosen’s words: “In particular, complex systems may contain sub-systems which act as predictive models of themselves and/or their environments, whose predictions regarding future behaviors can be utilized for modulation of present change of state. Systems of this type act in a truly anticipatory fashion, and possess many novel properties which have never been explored.” In other words, genuine complexity is characterized by anticipation.

What is the best way to obtain the models required for an AS? The simple answer is to observe reality to a finite extent and then to generalize from the observations. To do so is inherently to add information to the data, or to perform an induction. It requires the generation of a likely principle based on incomplete information, and the principle may later be improved in the light of increasing knowledge. Where several possible models might achieve a desired goal, the best choice is driven by the relative economy of different models in reaching the goal.


How might this be done in practice with noisy data? The most powerful method is Bayesian parameter estimation. Bayesian estimation drops irrelevant parameters without loss of precision in describing relevant parameters. It fully exploits prior knowledge. Most important, the computation of the most probable values of a parameter set incidentally includes the measure of the probability. That is, the calculation produces an estimate of its own goodness. By comparing the goodness of alternative models, the best available description of the underlying reality is obtained. This is the optimal method of obtaining a model from experimental data, or of predicting the occurrence of future events given knowledge from the past, and of improving the prediction of the future as knowledge of the past improves. Bayesian parameter estimation is a straightforward method of induction."
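The claim that Bayesian parameter estimation "produces an estimate of its own goodness" can be illustrated with a toy coin-bias problem: a grid approximation yields both the most probable parameter value and the evidence (marginal likelihood) used to compare alternative models. The data, the uniform prior, and the two candidate models are all invented for the sketch.

```python
# Toy Bayesian induction: fit a coin-bias model to noisy data and get,
# as a by-product, the evidence used to compare models.
def binom_like(heads, tails, p):
    # Likelihood of the data (up to a constant) given bias p.
    return (p ** heads) * ((1 - p) ** tails)

def evidence(heads, tails, grid):
    # Marginal likelihood under a uniform prior over the grid of p values.
    return sum(binom_like(heads, tails, p) for p in grid) / len(grid)

heads, tails = 8, 2
grid = [i / 100 for i in range(1, 100)]

# Model A: bias p free (uniform prior).  Model B: fair coin, p = 0.5.
ev_free = evidence(heads, tails, grid)
ev_fair = binom_like(heads, tails, 0.5)

best_p = max(grid, key=lambda p: binom_like(heads, tails, p))
print(round(best_p, 2))   # 0.8  (most probable bias given 8 heads, 2 tails)
print(ev_free > ev_fair)  # True: the evidence favours the biased-coin model
```

The comparison of `ev_free` against `ev_fair` is the "comparing the goodness of alternative models" step: the same calculation that locates the best parameter also scores the model as a whole.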

There are many interesting ways to use generalized manifolds.

"Briefly, in this scheme teleomatic systems are classified as end-resulting, teleonomic as end-directed, and teleological as end-seeking."

"The teleonomic logic of evolution dictates that if animals with a more accurate representation of their environment have a better chance of survival, then over time they will develop mental models that are congruent with the laws of physics. The world around us consists mainly of what are described mathematically as differentiable manifolds; these include the curves, surfaces and solids that form the substrate for equations involving space, time, frequency, mass and force. A computational manifold is an abstract computing element that can exactly model the physical quantity it describes. A computational map is a computational manifold together with a parameterization, a discretization and an encoding. These completely specify the mesh of numerical values that represent the manifold and permit its realization within a digital computer or a biological nervous system. Just as the current density in Maxwell’s equations or the mass density in the Navier-Stokes equation define derivatives over large numbers of discrete electrons and molecules, a computational manifold can define a continuum over a computational map composed of discrete living cells. The fidelity of the representation is dependent not just on the resolution, but also on how each computing element behaves. In particular, since the physical phenomena may contain discontinuities, the encoding must be amenable to the calculation of Lebesgue integrals.
Audio spectrograms, which can be viewed as images in a two-dimensional product space of frequency and time, are the principal components in a language simulacrum. Associating sounds with other manifolds, and those manifolds with other sounds, under the control of a continuous formal system, provides a framework for language and the communication of ideas. Areas of the neocortex can be modeled as associative memories that accept images as input addresses and produce images as outputs. Language and geometric reasoning simulacra comprising networks of CAMMs and projections from sensory-motor manifolds form the basis of conscious reflection. The mathematics that describes these surfaces and volumes of computation, the functional relationships between them, and the computations they perform provide a unified theory that can be used to model natural intelligence."

I came across the idea of varifolds a while back; it's interesting how they can be used in machine learning, managing complexity, or modeling... I was thinking they may be useful in bridging analog vs. digital approaches.

Learning on Varifolds:

"Popular manifold learning algorithms (e.g., [2, 3, 4]) typically assume that high dimensional data lie on a lower dimensional manifold. This is a reasonable assumption since many data are generated by physical processes having a few free parameters. Tangent spaces for manifolds are the generalization of tangent planes for surfaces, and are used for instance in [5] for machine learning by locally fitting the tangent space of each data point and extracting local geometric information. One problem with tangent spaces is that they may not exist for non-differentiable manifolds, and since data commonly do not come continuously, we do not have enough reason to believe that we can always achieve reliable tangent space estimation. Therefore, in this paper, we argue to use varifolds (see next section for a definition), which accommodate the discrete nature of data, instead of differentiable manifolds as the underlying models of data and present an algorithm based on hypergraphs that facilitate the computation required for typical machine learning applications. We organize the paper as follows: we review the basic concept of manifolds and varifolds in Section 2, and introduce hypergraphs in Section 3, followed by our main varifold learning algorithm in Section 4. We give experimental validations on toy and real data in Section 5 and finally conclude the paper with future research questions."
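The "locally fitting the tangent space" step that the paper contrasts with its varifold approach can be sketched in the plane: the principal eigenvector of a neighbourhood's covariance estimates the tangent direction, and this estimate breaks down exactly where the data are sparse or the underlying set is non-differentiable. The sample points below are synthetic, and this is the standard local-PCA step, not the paper's hypergraph algorithm.

```python
# Local tangent-line fit for 2D data: the principal direction of the
# neighbourhood covariance matrix estimates the tangent at a point.
import math

def tangent_direction(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Entries of the 2x2 covariance matrix of the centred neighbourhood.
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Angle of the principal eigenvector of a 2x2 symmetric matrix.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return math.cos(theta), math.sin(theta)

# Points sampled noisily near the line y = x: the fitted tangent should
# point along (1, 1)/sqrt(2), i.e. 45 degrees.
pts = [(0.0, 0.05), (0.5, 0.45), (1.0, 1.02), (1.5, 1.55), (2.0, 1.98)]
tx, ty = tangent_direction(pts)
print(abs(tx - ty) < 0.1)  # True: direction is close to 45 degrees
```

At a corner or a crossing, no single direction dominates the covariance and the fit is meaningless; that is the failure mode the varifold model, which carries a distribution over directions at each point, is designed to accommodate.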

Minimal Surfaces, Stratified Multivarifolds, and the Plateau Problem:

"Soap films, or, more generally, interfaces between physical media in equilibrium, arise in many applied problems in chemistry, physics, and also in nature. In applications, one finds not only two-dimensional but also multidimensional minimal surfaces that span fixed closed "contours" in some multidimensional Riemannian space. An exact mathematical statement of the problem of finding a surface of least area or volume requires the formulation of definitions of such fundamental concepts as a surface, its boundary, minimality of a surface, and so on. It turns out that there are several natural definitions of these concepts, which permit the study of minimal surfaces by different, and complementary, methods."
"Dao Trong Thi formulated the new concept of a multivarifold, which is the functional analog of a geometrical stratified surface and enables us to solve Plateau's problem in a homotopy class."

"Perhaps less colorfully, Cleland (1993) argues the related point that computational devices limited to discrete numbers (i.e. Turing machines) cannot compute many physically realized functions. Presumably this limitation is avoided by analog computers and brains.

Further arguments for cognitive continuity arise from a different sort of computational consideration. Consider a simple soap bubble, whose behavior can be used to compute extremely complex force-resolving functions (see Uhr, 1994). The interactions of molecular forces which can be used to represent certain macro-phenomena are far too complex for a digital computer to compute on a reasonable time scale. It is simply a fact that, in certain circumstances, analog computation is more efficient than digital computation. Coupled with arguments for the efficiency of many evolved systems, we should conclude that it is reasonable to expect the brain to be analog.
However, analog computation has a far more serious concern in regards to explaining cognitive function. If computation in the brain is fundamentally analog, serious problems arise as to how parts of the brain are able to communicate (Hammerstrom, 1995). It is notoriously difficult to ‘read off’ the results of an analog computation. Analog signals, because of their infinite information content, are extremely difficult to transmit in their entirety. Particularly since brain areas seem somewhat specialized in their computational tasks, there must be a means of sending understandable messages to other parts of the brain. Not only are analog signals difficult to transmit, analog computers have undeniably lower and more variable accuracy than digital computers. This is inconsistent with the empirical evidence for the reproducibility of neuronal responses (Mainen and Sejnowski, 1995).
Many who have discussed the continuity debate have arrived at an ecumenical conclusion similar in spirit to the following (Uhr 1994, p. 349):

The brain clearly uses mixtures of analog and digital processes. The flow and fusion of neurotransmitter molecules and of photons into receptors is quantal (digital); depolarization and hyperpolarization of neuron membranes is analog; transmission of pulses is digital; and global interactions mediated by neurotransmitters and slow waves appear to be both analog and digital.
In retrospect, Kant seems to have identified both the deeper source of the debate and, implicitly, its resolution. He notes that the tension between continuity and discreteness arises from a parallel tension between the certainty of theory and the uncertainty of how the real world is; in short, between theory and implementation. If we focus on theory, it is not clear that the continuity of the brain can be determined. But, since the brain is a real world system, implementational constraints apply. In particular, the effects of noise on limiting information transfer allow us to quantify the information transmission rates of real neurons. These rates are finite and discretely describable. Thus, a theory informed by implementation has solved our question, at least at one level of description.

Whether the brain is discrete at a higher level of description is still open to debate. Employing the provided definitions, it is possible to again suggest that the brain is discrete with a time step of 10ms as first proposed by Newell. Empirical evidence can then be brought to bear on this question, likely resulting in its being disproved. Similarly, one might propose that the brain is continuous at a similar level of description. Again, empirical evidence can be used to evaluate such a claim. However, what has been shown here is that the brain is not continuous ‘all the way down’. There are principled reasons for considering the cognitively relevant aspects of the brain to be discrete at a time step of about one millisecond. In a sense, this does not resolve all questions concerning the analogicity of the brain, but it resolves perhaps the most fundamental one: Can the brain ever be considered digital for explaining cognitive phenomena? The answer, it seems, is “Yes”."

Symbols, neurons, soap-bubbles and the neural computation underlying cognition

A wide range of systems appear to perform computation: what common features do they share? I consider three examples, a digital computer, a neural network and an analogue route finding system based on soap-bubbles. The common feature of these systems is that they have autonomous dynamics — their states will change over time without additional external influence. We can take advantage of these dynamics if we understand them well enough to map a problem we want to solve onto them. Programming consists of arranging the starting state of a system so that the effects of the system's dynamics on some of its variables correspond to the effects of the equations which describe the problem to be solved on their variables. The measured dynamics of a system, and hence the computation it may be performing, depend on the variables of the system we choose to attend to. Although we cannot determine which are the appropriate variables to measure in a system whose computation basis is unknown to us, I go on to discuss how grammatical classifications of computational tasks and symbolic machine reconstruction techniques may allow us to rule out some measurements of a system from contributing to computation of particular tasks. Finally I suggest that these arguments and techniques imply that symbolic descriptions of the computation underlying cognition should be stochastic and that symbols in these descriptions may not be atomic but may have contents in alternative descriptions.

"These programs all measure and utilize the P300 wave response in the brain, which was discussed in the previous post. The P300 wave response is an electrical signal that rises in the brain in response to recognition of ‘objects of significance’. The response is uber fast – the 300 in P300 stands for 300 milliseconds. Dr. Larry Farwell is credited as the ‘discoverer’ of the practical applications of this response. His Brainwave Fingerprinting technology uses P300 diagnostics for criminal justice and credibility assessment applications.

The NIA program uses the P300 wave to accelerate Imagery Intelligence (IMINT) analysis. IMINT analysts pore through large amounts of photographic imagery, and scan this imagery for ‘objects of significance’, which can mean Improvised Explosive Devices (IEDs) or other. Usually, the IMINT process is slow and tedious. An analyst has to crawl through piles of data, scanning every bit more or less slowly, virtually in search of needles in a haystack. The NIA program speeds up this process, a lot. An analyst wears an EEG cap and this cap measures the P300 response while the analysts are doing their IMINT tasks. If the analyst sees an ‘object of significance’ then software automatically tags that IMINT bit as relevant, and it is later given further scrutiny. Using this methodology, analysts can course through 10-20 images per second. There is no conscious scanning of the information – the entire process occurs subliminally. Pretty amazing!"

"Two competing methodologies in procedural content generation are teleological and ontogenetic. The teleological approach creates an accurate physical model of the environment and the process that creates the thing generated, and then simply runs the simulation, and the results should emerge as they do in nature.

The ontogenetic approach observes the end results of this process and then attempts to directly reproduce those results by ad hoc algorithms. Ontogenetic approaches are more commonly used in real-time applications such as games. (See "Shattering Reality," Game Developer, August 2006.)

A similar (overlapping) distinction is "top-down" versus "bottom-up": top-down algorithms directly modify/create things to obtain the desired result, and bottom-up algorithms create a result that may only have what you want as a side effect (I personally prefer that terminology since it's easier to remember which is which, but the two pairs may not refer to exactly the same thing).

This distinction is visible in map generation when you compare your generated map to your desired topology - whether all points can be reached, whether certain specific points (enter? exit? starting point? goal?) are connected to each other, whether there are choke points, etc.

If you're generating a map using Perlin Noise for height, which in turn determines which areas can be crossed (not those above / below a certain height, or with a certain slope), you can't guarantee anything about topology.

But many dungeon and maze generation algorithms directly place rooms and corridors so that they're connected (though some may screw up along the process).

There's a general tradeoff between having good-looking and interesting maps, versus preserving your topological constraints - or more generally, generating something challenging but not completely impossible. You can also loosen up the importance of topological constraints, for example with destructible environments (digging, bombs), or allowing the player to re-generate levels until he finds one he likes."

"The CTMU and Quantum Theory

The microscopic implications of conspansion are in remarkable accord with basic physical criteria. In a self-distributed (perfectly self-similar) universe, every event should mirror the event that creates the universe itself. In terms of an implosive inversion of the standard (Big Bang) model, this means that every event should to some extent mirror the primal event consisting of a condensation of Higgs energy distributing elementary particles and their quantum attributes, including mass and relative velocity, throughout the universe. To borrow from evolutionary biology, spacetime ontogeny recapitulates cosmic phylogeny; every part of the universe should repeat the formative process of the universe itself.

Thus, just as the initial collapse of the quantum wavefunction (QWF) of the causally self-contained universe is internal to the universe, the requantizative occurrence of each subsequent event is topologically internal to that event, and the cause spatially contains the effect. The implications regarding quantum nonlocality are clear. No longer must information propagate at superluminal velocity between spin-correlated particles; instead, the information required for (e.g.) spin conservation is distributed over their joint ancestral IED…the virtual 0-diameter spatiotemporal image of the event that spawned both particles as a correlated ensemble. The internal parallelism of this domain – the fact that neither distance nor duration can bind within it – short-circuits spatiotemporal transmission on a logical level. A kind of “logical superconductor”, the domain offers no resistance across the gap between correlated particles; in fact, the “gap” does not exist! Computations on the domain’s distributive logical relations are as perfectly self-distributed as the relations themselves.

Equivalently, any valid logical description of spacetime has a property called hology, whereby the logical structure of the NeST universal automaton – that is, logic in its entirety - distributes over spacetime at all scales along with the automaton itself. Notice the etymological resemblance of hology to holography, a term used by physicist David Bohm to describe his own primitive nonlocal interpretation of QM. The difference: while Bohm’s Pilot Wave Theory was unclear on the exact nature of the "implicate order" forced by quantum nonlocality on the universe - an implicate order inevitably associated with conspansion - the CTMU answers this question in a way that satisfies Bell's theorem with no messy dichotomy between classical and quantum reality. Indeed, the CTMU is a true localistic theory in which nothing outruns the conspansive mechanism of light propagation.

The implications of conspansion for quantum physics as a whole, including wavefunction collapse and entanglement, are similarly obvious. No less gratifying is the fact that the nondeterministic computations posited in abstract computer science are largely indistinguishable from what occurs in QWF collapse, where just one possibility out of many is inexplicably realized (while the CTMU offers an explanation called the Extended Superposition Principle or ESP, standard physics contains no comparable principle). In conspansive spacetime, time itself becomes a process of wave-particle dualization mirroring the expansive and collapsative stages of SCSPL grammar, embodying the recursive syntactic relationship of space, time and object.

There is no alternative to conspansion as an explanation of quantum nonlocality. Any nonconspansive, classically-oriented explanation would require that one of the following three principles be broken: the principle of realism, which holds that patterns among phenomena exist independently of particular observations; the principle of induction, whereby such patterns are imputed to orderly causes; and the principle of locality, which says that nothing travels faster than light. The CTMU, on the other hand, preserves these principles by distributing generalized observation over reality in the form of generalized cognition; making classical causation a stable function of distributed SCSPL grammar; and ensuring by its structure that no law of physics requires faster-than-light communication. So if basic tenets of science are to be upheld, Bell’s theorem must be taken to imply the CTMU.

As previously described, if the conspanding universe were projected in an internal plane, its evolution would look like ripples (infocognitive events) spreading outward on the surface of a pond, with new ripples starting in the intersects of their immediate ancestors. Just as in the pond, old ripples continue to spread outward in ever-deeper layers, carrying their virtual 0 diameters along with them. This is why we can collapse the past history of a cosmic particle by observing it in the present, and why, as surely as Newcomb’s demon, we can determine the past through regressive metric layers corresponding to a rising sequence of NeST strata leading to the stratum corresponding to the particle’s last determinant event. The deeper and farther back in time we regress, the higher and more comprehensive the level of NeST that we reach, until finally, like John Wheeler himself, we achieve “observer participation” in the highest, most parallelized level of NeST...the level corresponding to the very birth of reality."

Collective Intelligence
"First, the system must contain one or more agents
each of which we view as trying to maximize an associated private utility. Second, the system must have an associated world utility function that rates the possible behaviors of that overall system."

Collective Intelligence, Data Routing and Braess' Paradox:

"We consider the problem of designing the the utility functions of the utility-maximizing agents in a multi-agent system (MAS) so that they work synergistically to maximize a global utility. The particular problem domain we explore is the control of network routing by placing agents on all the routers in the network. Conventional approaches to this task have the agents all use the Ideal Shortest Path routing Algorithm (ISPA).We demonstrate that in many cases, due to the side-effects
of one agent’s actions on another agent’s performance, having agents use ISPA’s is suboptimal as far as global aggregate cost is concerned, even when they are only used to route infinitesimally small amounts of traffic. The utility functions of the individual agents are not “aligned” with the global utility, intuitively speaking. As a particular example of this we present an instance of Braess’ paradox in which adding new links to a network whose agents all use the ISPA results in a decrease
in overall throughput. We also demonstrate that load-balancing, in which the agents’ decisions are collectively made to optimize the global cost incurred by all traffic currently being routed, is suboptimal as far as global cost averaged across time is concerned. This is also due to “side effects”, in this case of current routing decision on future traffic. The mathematics of Collective Intelligence (COIN) is concerned precisely with the issue of avoiding such deleterious side-effects
in multi-agent systems, both over time and space. We present key concepts from that mathematics and use them to derive an algorithm whose ideal version should have better performance than that of having all agents use the ISPA, even in the infinitesimal limit. We present experiments verifying this, and also showing that a machine-learning-based version of this COIN algorithm in which costs are only imprecisely estimated via empirical means (a version potentially applicable in the real world) also outperforms the ISPA, despite having access to less information than does the ISPA. In particular, this COIN algorithm almost always avoids Braess’ paradox."

Science of Collectives:

"The relevance paradox occurs because people only seek information that they perceive is relevant to them. However there may be information (in its widest sense, data, perspectives, general truths etc) that is not perceived as relevant because the information seeker does not already have it, and only becomes relevant when he does have it. Thus the information seeker is trapped in a paradox.

In many cases where action or decision is required, it is obvious what information relevant to the matter at hand may be lacking - a military attack may not have maps so reconnaissance is undertaken, an engineering project may not have ground condition details, and these will be ascertained, a public health program will require a survey of which illnesses are prevalent, etc. However, in many significant instances across a wide range of areas, even when relevant information is readily available, the decision makers are not aware of its relevance, because they don't have the information which would make its relevance clear, so they don't look for it. This situation has been referred to as the relevance paradox. This occurs when an individual or a group of professionals are unaware of certain essential information which would guide them to make better decisions, and help them avoid inevitable, unintended and undesirable consequences. These professionals will seek only the information and advice they believe is the bare minimum amount required as opposed to what they actually need to fully meet their own or the organization's goals.

An analogy would be a short-sighted person who, unaware of the condition, cannot see the glasses they need until they have the glasses on their head - a situation that can only be resolved by a clear-sighted assistant placing them there.

The Relevance Paradox is cited as a cause of the increase in diseases in developing countries even while more money is being spent on them - "Relevance paradoxes occur because of implementation of projects without awareness of the social or individual tacit knowledge within a target community. The understanding of the individual and the social tacit knowledge in a given community, which is a function of knowledge emergence, is the foundation of effectiveness in leadership practice."
The notions of Information Routing Groups (IRGs) and Interlock research were designed to counter this paradox by the promotion of lateral communication and the flow of Tacit knowledge which in many cases consists of the unconscious and unwritten knowledge of what is relevant.

A related point is that in many cases, despite the existence of good library indexing systems and search engines, the way specific knowledge may be described is not obvious unless you already know the knowledge."

‎"Toward Principles for the Design of Ontologies Used for Knowledge Sharing.

Context: Introduces the notion that ontologies are designed artifacts and should be amenable to engineering methodologies. Proposes five design criteria for ontologies.

Abstract: Recent work in Artificial Intelligence is exploring the use of formal ontologies as a way of specifying content-specific agreements for the sharing and reuse of knowledge among software entities. We take an engineering perspective on the development of such ontologies. Formal ontologies are viewed as designed artifacts, formulated for specific purposes and evaluated against objective design criteria. We describe the role of ontologies in supporting knowledge sharing activities, and then present a set of criteria to guide the development of ontologies for these purposes. We show how these criteria are applied in case studies from the design of ontologies for engineering mathematics and bibliographic data. Selected design decisions are discussed, and alternative representation choices are evaluated against the design criteria."

Duality in Knowledge Sharing

"I propose a formalisation of knowledge sharing scenarios that aims at capturing the crucial role played by an existing duality between ontological theories to be merged and particular situations to be linked. I use diagrams in the Chu category and cocones and colimits over these diagrams to account for the reliability and optimality of knowledge sharing systems and show the advantage of this approach by re-analysing a system that shares knowledge between a probabilistic logic program and Bayesian belief networks."

Algebraic Logic
Categorical and Universal Algebra
Ordered Structures
Modular Description Logics
Privacy-Preserving Reasoning

"A community of practice (CoP) is, according to cognitive anthropologists Jean Lave and Etienne Wenger, a group of people who share an interest, a craft, and/or a profession. The group can evolve naturally because of the members' common interest in a particular domain or area, or it can be created specifically with the goal of gaining knowledge related to their field. It is through the process of sharing information and experiences with the group that the members learn from each other, and have an opportunity to develop themselves personally and professionally (Lave & Wenger 1991). CoPs can exist online, such as within discussion boards and newsgroups, or in real life, such as in a lunchroom at work, in a field setting, on a factory floor, or elsewhere in the environment.

While Lave and Wenger coined the term in the 1990s, this type of learning practice has existed for as long as people have been learning and sharing their experiences through storytelling."

Algorithms and Mechanism Design for Multi-Agent Systems:

"A scenario where multiple entities interact with a common environment to achieve individual and common goals either co-operatively or competitively can be classified as a Multi-Agent System. In this thesis, we concentrate on situations where the individual objectives do not align with each other or the common goal, and hence give rise to interesting game theoretic and optimization problems."

Assessing the Assumptions Underlying Mechanism Design for the Internet:

"The networking research community increasingly seeks to leverage mechanism design to create incentive mechanisms that align the interests of selfish agents with the interests of a principal designer."

I posted a couple of links on Mechanism Design theory (apparently "deep" enough to win a Nobel Prize in Economics in 2007); basically, it studies ways of aligning private and public utility functions through incentive-compatible mechanisms.

Mechanism Theory:

Mechanism Design: How to Implement Social Goals

"Maskin has devoted much of his career to the study and advancement of mechanism design theory, a sub-field of economics that studies the creation of systems which achieve a desired outcome even though participants act in their own self-interest without disclosing their preferences.

This is, Maskin said, "the engineering part of economic theory." Rather than considering existing institutions or procedures, "we begin with the outcomes we want and then we ask, are there institutions or procedures or mechanisms that we can build or design to achieve those goals?"
To use an example that Maskin borrowed from the Old Testament, imagine two siblings fighting over the last piece of cake. Their mother wants to end the squabble by making sure that the cake is divided in such a way that both children feel they're getting their fair share.

"Is there some mechanism that she can design which will lead to a fair division even though she herself doesn't know what a fair division consists of?" asks Maskin. One mechanism that resolves the problem is to have one child divide the cake and the other child choose a piece. Both children are happy, one because he is sure that he divided the cake exactly in half and the other because he is sure he picked the choicest piece."

"It is instructive that this year’s Nobel Prize in Economics was awarded to three economists whose collective intellectual contributions under the rubric of “Mechanism Design Theory” help us understand better the real world in which we humans interact. Their insights could help us create our own institutions that would encourage the development of desirable behaviors and traits in our people by realigning our private and public incentives accordingly.

To purist disciples of Adam Smith, the open marketplace, guided only by the omnipresent “invisible hand” that would smack those who make the wrong decisions and pat those who make the right ones, is the best mechanism to ensure this. However we all know that competition – and thus the marketplace – is hardly ever “pure.” Unrestrained, the human tendency is to collude and conspire. Unrestrained “pure” capitalism would produce only conscienceless capitalists of Dickens’s era. We still see those characters today, in such places as China, resulting in millions of children being poisoned by their cheap but dangerous toys.
Mechanism design theorists recognize the world as it is and take humans as we are. That is, we are neither saints nor satans and that we respond to incentives in what we believe to be in our best self interests, our public declarations notwithstanding. What we consider as incentives however may vary. To capitalists, interest income is a powerful incentive to save; to devout Muslims, an invitation to a life of sin and thus a definite disincentive!

A more monumental problem is that what we profess publicly may at times be at variance to what we believe or want privately, a phenomenon economist Timur Kuran refers to in his book, Private Truths, Public Lies, as “preference falsification.” This is the greatest barrier to formulating sound public policy.

The insight of mechanism design theory is in implicitly recognizing this and designing institutions that would best align public and private goals. This could be reconciling the seller wanting to maximize his profit and the buyer demanding the cheapest product; to universities upholding meritocracy and admitting only “top” students over the demands of influential alumni in “legacy” admissions favoring their children. On a broader public order, it could be the government wanting the greatest revenue from its broadwave spectrum to making sure that the public is well served."

Overcoming Communication Restrictions in Collectives:

Machine Learning, Game Theory, and Mechanism Design for a Networked World:

"Many of the key algorithmic challenges in the context of the internet require considering the objectives and interests of the different participants involved. These include problems ranging from pricing goods and resources, to improving search, to routing, and more generally to understanding how incentives of participants can be harnessed to improve the behavior of the overall system. As a result, Mechanism Design and Algorithmic Game Theory, which can be viewed as “incentive-aware algorithm design,” have become an increasingly important part of algorithmic research in recent years."

Games, Groups, and the Global Good:

"How do groups form, how do institutions come into being, and when do moral norms and practices emerge? This volume explores how game-theoretic approaches can be extended to consider broader questions that cross scales of organization, from individuals to cooperatives to societies. Game theory strategic formulation of central problems in the analysis of social interactions is used to develop multi-level theories that examine the interplay between individuals and the collectives they form. The concept of cooperation is examined at a higher level than that usually addressed by game theory, especially focusing on the formation of groups and the role of social norms in maintaining their integrity, with positive and negative implications. The authors suggest that conventional analyses need to be broadened to explain how heuristics, like concepts of fairness, arise and become formalized into the ethical principles embraced by a society."

"Collective intelligence has existed for at least as long as humans have. Tribes of hunter-gatherers, nations, and modern corporations all act collectively with varying degrees of intelligence. But this ancient phenomenon is now occurring in dramatically new forms. For example:

Google uses the knowledge millions of people have stored in the World Wide Web to provide remarkably useful answers to users' questions

Wikipedia motivates thousands of volunteers around the world to create the world's largest encyclopedia

Innocentive lets companies easily tap the talents of the global scientific community for innovative solutions to tough R&D problems

With new communication technologies, especially the Internet, huge numbers of people all over the planet can now work together in ways that were never before possible in the history of humanity. It is thus more important than ever for us to understand collective intelligence at a deep level so we can create and take advantage of these new possibilities.

That is the goal of the newly-named MIT Center for Collective Intelligence.

One way of framing the basic research question is:

How can people and computers be connected so that, collectively, they act more intelligently than any individuals, groups, or computers have ever done before?"

High-quality original submissions are welcome from all areas of contemporary AI; the following list of topics is indicative only.

Agents & Multi-agent systems
Agent Communication Languages & Protocols
Auctions & Mechanism Design
Coalition Formation
Computational Social Choice
Cooperation & Coordination
Game-Theoretic & Economic Foundations
Programming Languages & Environments for MAS

Case-Based Reasoning
Analogical Reasoning
Applications of CBR
Case Representation
Case Retrieval

Cognitive Modeling & Interaction
AI & Creativity
Ambient Intelligence
Artificial Life
Human-Computer Interaction
Human Experimentation
Instruction & Teaching
Intelligent User Interfaces
Perception & Performance
Personalisation & User Profiling
Philosophical Foundations
Recommender Systems
Skill Acquisition & Learning
User & Context Modelling
Usage Analysis & Usage Mining
Web Usability & Accessibility

Constraints & Search
Anytime Search
Constraint Optimisation
Constraint Programming
Constraint Propagation
Constraint Satisfaction
Distributed Constraint Solving
Global Constraint
Logic & Constraint Programming
Soft Constraints
User Interaction

Knowledge Representation and Reasoning
Abductive & Inductive Reasoning
Answer Set Programming
Automated Theorem Proving
Belief Revision
Common-Sense Reasoning
Decision Making
Description Logics
Logical Foundations
Logic Programming
Modal Logics
Model Checking
Non-Classical Logics
Nonmonotonic Reasoning
Ontologies & Ontology Languages
Reasoning about Actions & Change
Resource-Bounded Reasoning
Semantic Web
Spatial Reasoning
Temporal Reasoning
Verification & Validation

Machine Learning
Adaptive Systems
Bayesian Learning
Data Mining
Decision Tree & Rule Learning
Dimension Reduction
Ensemble Methods
Evolutionary Computing
Information Extraction
Kernel Methods
Knowledge Discovery
Machine Learning Applications
Multiagent Learning
Neural Networks
Reinforcement Learning
Unsupervised Learning

Model-Based Reasoning
Causal Reasoning
Diagnosis, Testing & Repair
Design & Configuration
Model-Based Reasoning & Diagnosis
Model-Based Systems
Qualitative Reasoning
Real-Time Systems

Natural Language Processing
Computational Morphology & Parsing
Computational Semantics & Pragmatics
Corpus-Based Language Models
Evaluation Methods for NLP
Intelligent Conversational Agents
Intelligent Information Retrieval
Lexicons & Ontologies
Linguistic Knowledge Acquisition & Discovery
Machine Learning for NLP
Opinion Mining & Sentiment Analysis
Paraphrasing & Textual Entailment
Semantic Search & Question Answering
Spoken & Multimodal Dialogue Systems
Text Mining & Information Extraction

Perception and Sensing
Active Vision & Sensory Planning
Image Processing
Model-Based Vision
Motion, Flow & Tracking
Object Recognition
Programming Environments & Languages
Statistical Models & Visual Learning
Task Planning & Execution

Planning and Scheduling
Applications of Planning & Scheduling
Classical Planning
Constraint-Based Planning & Scheduling
Heuristics for Planning
Hierarchical Planning & Scheduling
Interactive Planning & Scheduling
Knowledge Engineering for Planning & Scheduling
Markov Decision Processes
Planning & Learning
Planning with Incomplete Information
Planning with Uncertainty
Temporal Planning & Scheduling

Autonomous Systems
Estimation & Learning for Robotic Systems
Human-Robot Interaction
Motion Planning
Multi-Robot Systems
Robot Architectures
Robot Navigation
Robot Programming Environments & Languages
Search & Rescue Robots
Sensory Planning
Service Robots
Task Planning
Viewpoints & Modality Selection

Uncertainty in AI
Bayesian Networks
Belief Revision & Update
Conceptual Graphs
Graphical Models
Probabilistic Reasoning

Applications of AI
AI & Autonomous Vehicles
AI & Education
AI & Life Sciences/Medicine
AI & the Internet
AI & the Semantic Web
AI & Sensor Networks

Deductive coherence and norm adoption

"This paper is a contribution to the formalisation of Thagard’s coherence theory. The term coherence is defined as the quality or the state of cohering, especially a logical, orderly, and aesthetically consistent relationship of parts. A coherent set is interdependent such that every element in it contributes to the coherence. We take Thagard’s proposal of a coherence set as that of maximising satisfaction of constraints between elements and explore its use in normative multiagent systems. In particular, we interpret coherence maximisation as a decision-making criterion for norm adoption. We first provide a general coherence framework with the necessary computing tools. Later we introduce a proof-theoretic characterisation of a particular type of coherence, namely the deductive coherence based on Thagard’s principles, and derive a mechanism to compute coherence values between elements in a deductive coherence graph. Our use of graded logic helps us to incorporate reasoning under uncertainty, which is more realistic in the context of multiagent systems. We then conduct a case study where agents deliberate about norm adoption in a multiagent system where there is competition for a common resource. We show how a coherence-maximising agent decides to violate a norm guided by its coherence."

Has Foundationalism Failed?

The Coherence Theory of Truth:

Coherence, Truth, and the Development of Scientific Knowledge:

"What is the relation between coherence and truth? This paper rejects numerous answers to this question, including the following: truth is coherence; coherence is irrelevant to truth; coherence always leads to truth; coherence leads to probability, which leads to truth. I will argue that coherence of the right kind leads to at least approximate truth. The right kind is explanatory coherence, where explanation consists in describing mech- anisms. We can judge that a scientific theory is progressively approximating the truth if it is increasing its explanatory coherence in two key respects: broadening by explaining more phenomena and deepening by investigating layers of mechanisms. I sketch an explanation of why deepening is a good epistemic strategy and discuss the prospect of deepening knowledge in the social sciences and everyday life."

Inference to the Best Plan: A Coherence Theory of Decision

"In their introduction to this volume, Ram and Leake usefully distinguish between task goals and learning goals. Task goals are desired results or states in an external world, while learning goals are desired mental states that a learner seeks to acquire as part of the accomplishment of task goals. We agree with the fundamental claim that learning is an active and strategic process that takes place in the context of tasks and goals (see also Holland, Holyoak, Nisbett, and Thagard, 1986). But there are important questions about the nature of goals that have rarely been addressed. First, how can a cognitive system deal with incompatible task goals? Someone may want both to get lots of research done and to relax and have fun with his or her friends. Learning how to accomplish both these tasks will take place in the context of goals that cannot be fully realized together. Second, how are goals chosen in the first place and why are some goals judged to be more important than others? People do not simply come equipped with goals and priorities: we sometimes have to learn what is important to us by adjusting the importance of goals in the context of other compatible and incompatible goals. This paper presents a theory and a computational model of how goals can be adopted or rejected in the context of decision making. In contrast to classical decision theory, it views decision making as a process not only of choosing actions but also of evaluating goals. Our theory can therefore be construed as concerned with the goal-directed learning of goals."

‎"FOUNDATIONALISM in epistemology, the view that some beliefs can justifiably be held by inference from other beliefs, which themselves are justified directly—e.g., on the basis of rational intuition or sense perception. Beliefs about material objects or about the theoretical entities of science, for example, are not regarded as basic or foundational in this way but are held to require inferential support. Foundationalists have typically recognized self-evident truths and reports of sense-data as basic, in the sense that they do not need support from other beliefs. Such beliefs thus provide the foundations on which the edifice of knowledge can properly be built."

"COHERENTISM, Theory of truth according to which a belief is true just in case, or to the extent that, it coheres with a system of other beliefs. Philosophers have differed over the relevant sense of “cohere,” though most agree that it must be stronger than mere consistency. Among rival theories of truth, perhaps the oldest is the correspondence theory, which holds that the truth of a belief consists in its correspondence with independently existing facts. In epistemology, coherentism contrasts with foundationalism, which asserts that ordinary beliefs are justified if they are inferrable from a set of basic beliefs that are justified immediately or directly. Coherentism often has been combined with the idealist doctrine that reality consists of, or is knowable only through, ideas or judgments."

"IDEALISM, in philosophy, any view that stresses the central role of the ideal or the spiritual in the interpretation of experience. It may hold that the world or reality exists essentially as spirit or consciousness, that abstractions and laws are more fundamental in reality than sensory things, or, at least, that whatever exists is known in dimensions that are chiefly mental—through and as ideas.

Thus, the two basic forms of idealism are metaphysical idealism, which asserts the ideality of reality, and epistemological idealism, which holds that in the knowledge process the mind can grasp only the psychic or that its objects are conditioned by their perceptibility. In its metaphysics, idealism is thus directly opposed to materialism, the view that the basic substance of the world is matter and that it is known primarily through and as material forms and processes. In its epistemology, it is opposed to realism, which holds that in human knowledge objects are grasped and seen as they really are—in their existence outside and independently of the mind.
Bradley, F. H.As a philosophy often expressed in bold and expansive syntheses, idealism is also opposed to various restrictive forms of thought: to skepticism, with occasional exceptions as in the work of the modern British Hegelian F.H. Bradley; to positivism, which stresses observable facts and relations as opposed to ultimates and therefore spurns the speculative “pretensions” of every metaphysics; and often to atheism, since the idealist commonly extrapolates the concept of mind to embrace an infinite Mind. The essential orientation of idealism can be sensed through some of its typical tenets: “Truth is the whole, or the Absolute”; “to be is to be perceived”; “reality reveals its ultimate nature more faithfully in its highest qualities (mental) than in its lowest (material)”; “the Ego is both subject and object.”"

"Conation is a term that stems from the Latin conatus, meaning any natural tendency, impulse, striving, or directed effort. It is one of three parts of the mind, along with the affective and cognitive. In short, the cognitive part of the brain measures intelligence, the affective deals with emotions and the conative takes those thoughts and feelings to drive how you act on them."

"Nevertheless, for Thagard, there are still ways of shoring up coherence with varying degrees of vigour. Minimally, by taking the cue from the title of the book, one could concentrate on the action part, rather than thought, and emphasise the centrality of coherence in conative contexts. In planning tasks where the problem is not so much about truth or falsity but devising the most efficient way of reconciling various practical goals and objectives, approximations of most coherent plans are as good as the most coherent ones. Thus, from a practical perspective, coherence as a criterion of adequacy does play a principal part in our reasoning deliberations.
In conclusion, through his eclecticism and approximation algorithms, Thagard seems able to tout a viable notion of coherence. However, this is only achieved by extensively curtailing the traditional claims of coherentism and by conceding, for example, that ‘the formation of elements such as propositions and concepts and the construction of constraint relations between elements depend on processes to which coherence is only indirectly relevant.’ (p. 24) Also, even though he may appear to have vaccinated coherentism against the virus of isolation, i.e., there is no guarantee that the most coherent theory is also true, the susceptibility still remains the Achilles’ heel of coherentism. That is, approximating maximum coherence is not yet the same thing as approximating truth. Indeed, Thagard himself admits that for the coherentist project to succeed one needs ‘to see a much fuller account of the conditions under which progressively coherent theories can be said to approximate the truth.’ (p. 280) Therefore, in view of these pressing problems for coherentism, it seems rather too hasty to hypothesise, let alone to state categorically, that foundationalism “has undoubtedly failed”." - Majid Amini (Has Foundationalism Failed?)

"Moreover, when it comes to the task of clarifying the nature of coherence, an appeal can be made to many foundationalists. While there might not be much motivation to develop a position that one rejects, there is this: many foundationalists want to incorporate considerations about coherence. As we saw, they usually do this in one of two ways, either by allowing coherence to boost the level of justification enjoyed by beliefs that are independently justified in some non-coherentist fashion, or by stamping incoherent beliefs as unjustified. Defending these conditions on justification requires clarifying the nature of coherence. So, it is not just coherentists that have a stake in clarifying coherence."

‎"Please God, that we avoid the land of denial, and advance into the ocean of acceptance, so that we may perceive, with an eye purged from all conflicting elements, the worlds of unity and diversity, of variation and oneness, of limitation and detachment, and wing our flight unto the highest and innermost sanctuary of the inner meaning of the Word of God." (The Báb)

"If the flowers of a garden were all of one color, the effect would be monotonous to the eye; but if the colors are variegated, it is most pleasing and wonderful. The difference in adornment of color and capacity of reflection among the flowers gives the garden its beauty and charm. Therefore, although we are of different individualities, different in ideas and of various fragrances, let us strive like flowers of the same divine garden to live together in harmony. Even though each soul has its own individual perfume and color, all are reflecting the same light, all contributing fragrance to the same breeze which blows through the garden, all continuing to grow in complete harmony and accord. Become as waves of one sea, trees of one forest, growing in the utmost love, agreement and unity.
And when you pass by a garden wherein vegetable beds and plants, flowers and fragrant herbs are all combined so as to form a harmonious whole, this is an evidence that this plantation and this rose garden have been cultivated and arranged by the care of a perfect gardener, while when you see a garden in disorder, lacking arrangement and confused, this indicates that it has been deprived of the care of a skillful gardener, nay, rather, it is nothing but a mass of weeds." (‘Abdu'l-Baha)

"Because our three principles correspond to the 3 C’s, and because they all begin with the letter M, we might as well call them the “3 M’s”: M=R, MAP and MU, respectively standing for the Mind Equals Reality Principle, the Metaphysical Autology Principle, and the Multiplex Unity Principle.

The M=R principle, a tautological theoretical property that dissolves the distinction between theory and universe and thus identifies the real universe as a “self-reifying theory”, makes the syntax of this theory comprehensive by ensuring that nothing which can be cognitively or perceptually recognized as a part of reality is excluded for want of syntax.

MAP tautologically renders this syntax closed or self-contained in the definitive, descriptive and interpretational senses, and in conjunction with M=R, renders the universe perfectly self-contained in the bargain.

And MU tautologically renders this syntax, and the theory-universe complex it describes, coherent enough to ensure its own consistency (thus, the “C” corresponding to MU actually splits into two C’s, consistency and coherence, and we have four altogether). To each of these principles we may add any worthwhile corollaries that present themselves.

(Note that while this seems to imply that the 3 M’s are “axioms” and therefore independent, the premise of axiomatic independence is itself a rather flimsy concept. These principles are actually rather strongly related in the sense that they can to some extent be inferred from each other in a reality-theoretic context.)"

T.S. Eliot: "Time present and time past / Are both perhaps present in time future, / And time future contained in time past. / If all time is eternally present ..."

"CALO, as an adaptive agent, is incredibly complex. It includes task processors, hybrid theorem provers, and probabilistic inference engines; multiple learning components employing a wide range of logical and statistical techniques; and multiple heterogeneous, distributed information sources underlying the processing. Despite the sophistication, however, individual CALO components typically provide little transparency into the computation and reasoning being performed.
At CALO's heart is also the ability to take autonomous control. CALO must not only assist with user actions, but it must also act autonomously on behalf of its users.

As CALO plans for the achievement of abstract objectives, executes tasks, anticipates future needs, aggregates multiple sensors and information sources, and adapts its behavior over time, there is an underlying assumption that there will be a user in the loop whom CALO is serving. This user would need to understand the CALO's behavior and responses enough to participate in the mixed-initiative execution process and to adjust the autonomy inherent in CALO. The user would also need to trust the reasoning and actions performed by CALO, including that those actions are based on appropriate processes and on information that is accurate and current. CALO needs to be able to use these justifications to derive explanations describing how they arrived at a recommendation, including the ability to abstract away detail that may be irrelevant to the user's understanding and trust evaluation process. Further, with regards specifically to task processing, CALO needs to explain how and under what conditions it will execute a task, as well as how and why that procedure has been created or modified over time.

One significant challenge to explaining a cognitive assistant like CALO is that it, by necessity, includes task processing components that evaluate and execute tasks, as well as reasoning components that determine conclusions. Thus, a comprehensive explainer needs to explain task processing responses as well as more traditional reasoning systems, providing access to both inference and provenance information, which we refer to as knowledge provenance."

Knowledge Provenance Infrastructure

"The web lacks support for explaining information provenance. When web applications return answers, many users do not know what information sources were used, when they were updated, how reliable the source was, or what information was looked up versus derived. Support for information provenance is expected to be a harder problem in the Semantic Web where more answers result from some manipulation of information (instead of simple retrieval of information). Manipulation includes, among other things, retrieving, matching, aggregating, filtering, and deriving information possibly from multiple sources. This article defines a broad notion of information provenance called knowledge provenance that includes proof-like information on how a question answering system arrived at its answer(s). The article also describes an approach for a knowledge provenance infrastructure supporting the extraction,
maintenance and usage of knowledge provenance related to answers of web applications and services."

"Knowledge Provenance is proposed to address the problem about how to determine the validity and origin of information/knowledge on the web by means of modeling and maintaining information source and dependency, as well as trust structure. Four levels of Knowledge Provenance are introduced: Static, where the validity of knowledge does not change over time; Dynamic, where validity may change over time; Uncertain, where the truth values and trust relation are uncertain; and Judgmental: where the societal processes for determining certainty of knowledge are defined. An ontology, semantics and implementation using RDFS is provided for Static Knowledge Provenance."

‎"‎"Those who investigate the Cause of Bahá'u'lláh with sincerity readily appreciate that the Bahá'í community is a creative minority that is the embodiment of its Founder's vision of the future and of His indomitable Will to achieve it. Thro...ugh your love, your sacrifices, your services and your very lives, you have proven to be the true promoters of the progress of your dear homeland of which Bahá'u'lláh has written:

The horizon of Persia hath been illumined with the light of the heavenly Orb. Erelong will the Daystar of the supernal realm shine so brightly as to raise that land even unto the ethereal heights and to cause it to shed its radiance over the whole earth. The imperishable glory of bygone generations shall once more be manifest in such wise as to dazzle and bewilder the eyes....

Iran shall become a focal centre of divine splendours. Her darksome soil will become luminous and her land will shine resplendent. Although now wanting in name and fame, she will become renowned throughout the world; although now deprived, she will attain her highest hopes and aspirations; although now destitute and despondent, she will obtain abundant grace, achieve distinction and find abiding honour.
"We have reviewed briefly here the argument of 'Abdu'l-Bahá's great message because of the remarkable extent to which contemporary events vindicate its diagnosis and prescriptions. The insights it contains illumine both the situation in which the Iranian people currently find themselves and the related implications for you who are the followers of Bahá'u'lláh in that country. The message was a summons--to the country's leaders and the population alike--to free themselves from blind submission to dogma and to accept the need for fundamental changes in behaviour and attitude, most particularly a willingness to subordinate personal and group interests to the crying needs of society as a whole.

As you well know, the Master's appeal was ignored. Locked in the grip of an antiquated Qajar autocracy restrained only by its incompetence, Persia drifted ever deeper into stagnation. Venal politicians competed with one another for a share of the diminishing wealth of a country driven to the verge of bankruptcy. Worse still, a population that had once produced some of the greatest minds in the history of civilization--Cyrus, Darius, Rumi, Hafiz, Avicenna, Rhazes and countless others--had become the prey of a clerical caste, as ignorant as it was corrupt, whose petty privileges could be maintained only by arousing in the helpless masses an unreasoning fear of anything progressive.

Little wonder then that, taking advantage of the chaos that followed in the wake of the first world war, an ambitious army officer was able to seize power and establish a personal dictatorship. To him--as to his son after him-- deliverance from Persia's ills was assumed to lie in a systematic programme of "Westernization". Schools, public works, a trained bureaucracy and a well- equipped military served the needs of the new national government. Foreign investment was encouraged as a means of developing the country's impressive national resources. Women were freed from the worst of the restrictions that had prevented their development and were given opportunities for education and useful careers. Although the Majlis remained little more than a facade, hope rose that, in time, it might emerge as a genuine institution of democratic government.

What emerged, instead, through the single-minded exploitation of Iran's petroleum resources, was wealth on an almost unimaginable scale. In the absence of anything resembling a system of social justice, the chief effect was to vastly enrich a privileged and self-serving minority, while leaving the mass of the population little better off than they had been before. Treasured cultural symbols and the heroic episodes of a glorious past were resurrected merely to decorate the monumental vulgarity of a society whose moral foundations were built on the shifting sands of ambition and appetite. Protest, even the mildest and most reasonable, was smothered by a secret police unconstrained by any constitutional oversight.

In 1979 the Iranian people threw off this despotism and swept its counterfeit claims to modernity into history's dustbin. Their revolution was the achievement of the combined forces of many groups, but its driving force was the ideals of Islam. In place of wanton self-indulgence, people were promised lives of dignity and decency. Gross inequities of class and wealth would be overcome by the spirit of brotherhood enjoined by God. The natural resources with which providence has endowed so fortunate a land were declared to be the patrimony of the entire Iranian people, to be used to provide universal employment and education. A new "Islamic Constitution" ostensibly enshrined solemn guarantees of equality before the law for all citizens of the republic. Government would endeavour conscientiously to combine spiritual values with the principles of democratic choice.

How do such promises relate to the experience being described 25 years later by the great majority of Iran's population? From all sides today one hears cries of protest against endemic corruption, political manipulation, the mistreatment of women, a shameless violation of human rights and the suppression of thought. What is the effect on public consciousness, one must further ask, of appeals to the authority of the Holy Qur'an to justify policies that lead to such conditions?

* * *

Iran's crisis of civilization will be resolved neither by blind imitation of an obviously defective Western culture nor by retreat into medieval ignorance. The answer to the dilemma was enunciated on the very threshold of the crisis, in the clearest and most compelling language, by a distinguished Son of Iran Who is today honoured in every continent of the world, but sadly not in the land of His birth. Persia's poetic genius captures the irony: "I searched the wide world over for my Beloved, while my Beloved was waiting for me in my own home." The world's appreciation of Bahá'u'lláh came perhaps most explicitly into focus on 29 May 1992, the centenary of His death, when the Brazilian Chamber of Deputies met in solemn session to pay tribute to Him, to His teachings and to the services rendered to humanity by the community He founded. On that occasion, the Speaker of the Chamber and spokespersons from every party rose, successively, to express their profound admiration of One who was described in their addresses as the Author of "the most colossal religious work written by the pen of a single Man", a message that "reaches out to humanity as a whole, without petty differences of nationality, race, limits or belief".

What has been the response in His native land to a Figure whose influence has brought such honour to the name of Iran? From the middle years of the 19th century when He arose to champion the Cause of God, and despite the reputation His philanthropy and intellectual gifts had won, Bahá'u'lláh was made the object of a virulent campaign of persecution. In recognizing His mission, your forefathers had the imperishable glory of sharing in His sufferings. Throughout the ensuing decades, you who have remained faithful to His Cause, who have sacrificed for it and promoted its civilizing message to the most remote regions of the planet have known your own portion of abuse, bereavement and humiliation--each Bahá'í family in Iran.

One of the most appalling afflictions, in terms of its tragic consequences, has been the slander of Bahá'u'lláh's Cause perpetrated by that privileged caste to whom Persia's masses had been taught to look for guidance in spiritual matters. For over 150 years, every medium of public information-- pulpit, press, radio, television and even scholarly publication--has been perverted to create an image of the Bahá'í community and its beliefs that is grossly false and whose sole aim is to arouse popular contempt and antagonism. No calumny has been too vile; no lie too outrageous. At no point during those long years were you, the victims of this vilification, given an opportunity, however slight, to defend yourselves and to provide the facts that would have exposed such calculated poisoning of the public mind.
This is the real reason why Bahá'u'lláh was so desperately opposed by clergy and rulers who recognized in Him--correctly if only dimly--the Voice of a coming society of justice and enlightenment, in which they themselves would have no place. Nor should you have any doubt that it is this same fear that animates the successive waves of persecution you have long endured."

‎"Humanity's evolution has been marked by such progressive stages of social organization as family, tribe, city-state and nation. Bahá'u'lláh's express purpose was to usher in the next and ultimate stage, namely, world unity -- the harbinger of the Great Peace foretold in the world's religions. As the Word of God as revealed by Bahá'u'lláh is the source and impetus of the oneness of humankind, so the Covenant He has established is the organizing principle for its realization.

Bahá'u'lláh's Covenant guarantees both unity of understanding of His Faith's fundamental doctrines and actualization of that unity in the Bahá'í community's spiritual and social development. It is distinguished by its provision for authentic interpretation of the sacred texts and for an authorized system of administration, at the apex of which is an elected legislative body empowered to supplement the laws revealed by Bahá'u'lláh.
Shoghi Effendi expressed this view about the Covenant in a letter written on his behalf by his secretary:

As regards the meaning of the Bahá'í Covenant, the Guardian considers the existence of two forms of Covenant, both of which are explicitly mentioned in the literature of the Cause. First is the Covenant that every Prophet makes with humanity or, more definitely, with His people that they will accept and follow the coming Manifestation Who will be the reappearance of His reality. [Bahá'u'lláh states that a Manifestation will come not less than a thousand years after Him.] The second form of Covenant is such as the one Bahá'u'lláh made with His people that they should accept the Master [`Abdu'l-Bahá]. This is merely to establish and strengthen the succession of the series of Lights that appear after every Manifestation. Under the same category falls the Covenant the Master made with the Bahá'ís that they should accept His administration after Him.
A Covenant implies a solemn agreement between two parties. As already noted, Bahá'u'lláh's part of His Covenant is to bring us teachings that transform both the inner and outer conditions of life on earth, to provide us with an authoritative interpreter to keep us from misunderstanding God's will for us, and to give us guidance to establish institutions that will pursue the goals of the achievement of unity. Bahá'u'lláh's Covenant affects us at all levels of existence, from our social organizations to our individual lives.

As individuals, we in turn have the responsibility to observe the laws God has given to us to safeguard our dignity and to enable us to become the noble beings He created us to be -- to pray, to meditate, to read the Sacred Writings, to fast, to live a chaste life, to be trustworthy. It is our responsibility to show love towards each other, as imperfect as we may be; it is our obligation to love and to obey the institutions Bahá'u'lláh brought into being. Unless we do these things, we do not open ourselves to the benefits of Bahá'u'lláh's Covenant with us.

In an appealing collection of ethical writings called The Hidden Words, Bahá'u'lláh wrote, in the voice of the Divine:

"Love Me, that I may love thee. If thou lovest Me not, My love can in no wise reach thee. Know this, O servant."

This brief passage encapsulates the essence of the Covenant and our responsibility. It shows the Creator's abiding love for us as well as our freedom to choose whether to love Him in return -- and the consequences of that choice."

‎"I read in the Gulistan, or Flower Garden, of Sheik Sadi of Shiraz, that "they asked a wise man, saying: Of the many celebrated trees which the Most High God has created lofty and umbrageous, they call none azad, or free, excepting the cypress, which bears no fruit; what mystery is there in this?"

‎"He replied, Each has its appropriate produce, and appointed season, during the continuance of which it is fresh and blooming, and during their absence dry and withered; to neither of which states is the cypress exposed, being always flourishing; and of this nature are the azads, or religious independents. — Fix not thy heart on that which is transitory; for the Dijlah, or Tigris, will continue to flow through Bagdad after the race of caliphs is extinct: if thy hand has plenty, be liberal as the date tree; but if it affords nothing to give away, be an azad, or free man, like the cypress."

"Why self-reference is a specific feature of biological systems and not physical systems should now be evident: self-referential dynamics is an inherent and probably defining feature of evolutionary dynamics and thus biological systems. Thus, the question really is how self-referential dynamics arises as a universality class from the basic laws of microscopic physics, as an expression of nonequilibrium physics. Although there is a recognition of this sort of question in some of the literature on philosophy, artificial life and the evolution of language,
nothing approaching a serious calculation has been done to our knowledge.
The emphasis on redundancy as a motif suggests that a component of evolution is multifunctionalism. To see why, consider how a system is modeled, perhaps as a set of differential equations or lattice update rules. These rules themselves need to evolve: but how? We need
an additional set of rules describing the evolution of the original rules. But this upper level of rules itself needs to evolve. And so we end up with an infinite hierarchy, an inevitable reflection of the fact that the dynamic we are seeking is inherently self-referential. The way that the conundrum can be resolved is to begin with an infinite-dimensional dynamical system which spontaneously undergoes a sequence of symmetry-breaking or bifurcation events into lower-dimensional systems. Such transitions can be thought of as abstraction events: successive lower dimensional systems contain a representation
in them of upper levels of the original system. A precise mathematical prototype of such a construction can be constructed from consideration of the dynamics of functions on a closed one-dimensional interval, admittedly with very little direct biological interpretation, but with the positive outcome of generating a hierarchically entangled dynamical network. Such networks are
not simple tree structures, and this means that the nodes and links of the network drive each other in a way that a biologist would interpret as co-evolutionary. Another way to interpret such dynamical systems is that the elements are multifunctional: their input-output map depends on the state of the system, rather than being a constant
in time."(Life is physics: evolution as a collective phenomenon far from equilibrium)
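The closing remark -- that the elements are multifunctional, with an input-output map that depends on the state of the system rather than being constant in time -- can be illustrated with a toy dynamical system. This sketch is mine, not the paper's; the regime threshold and the two update rules are arbitrary choices made purely for illustration.

```python
# Toy "multifunctional" dynamics: each node's update rule switches with
# the global mean, so no node has a fixed input-output map in time.

def step(state):
    """One update of a 3-node system; the rule applied to every node
    depends on the current global mean of the state."""
    mean = sum(state) / len(state)
    if mean > 0.5:                                   # regime A: decay toward 0
        new = [0.5 * x for x in state]
    else:                                            # regime B: logistic-style growth
        new = [x + 0.4 * x * (1 - x) for x in state]
    return new, ("A" if mean > 0.5 else "B")

state, seen = [0.1, 0.2, 0.3], set()
for _ in range(40):
    state, regime = step(state)
    seen.add(regime)                                 # record which rules were active
```

Run long enough, the system alternates between the two regimes: growth pushes the mean over the threshold, decay pulls it back, so the very same nodes are governed by different rules at different times -- a crude version of the state-dependent input-output map the quote describes.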

"If the universe is really circular enough to support some form of “anthropic” argument, its circularity must be defined and built into its structure in a logical and therefore universal and necessary way. The Telic principle simply asserts that this is the case; the most fundamental imperative of reality is such as to force on it a supertautological, conspansive structure. Thus, the universe “selects itself” from unbound telesis or UBT, a realm of zero information and unlimited ontological potential, by means of telic recursion, whereby infocognitive syntax and its informational content are cross-refined through telic (syntax-state) feedback over the entire range of potential syntax-state relationships, up to and including all of spacetime and reality in general.

The Telic Principle differs from anthropic principles in several important ways. First, it is accompanied by supporting principles and models which show that the universe possesses the necessary degree of circularity, particularly with respect to time. In particular, the Extended Superposition Principle, a property of conspansive spacetime that coherently relates widely-separated events, lets the universe “retrodict” itself through meaningful cross-temporal feedback.

Moreover, in order to function as a selection principle, it generates a generalized global selection parameter analogous to “self-utility”, which it then seeks to maximize in light of the evolutionary freedom of the cosmos as expressed through localized telic subsystems which mirror the overall system in seeking to maximize (local) utility. In this respect, the Telic Principle is an ontological extension of so-called “principles of economy” like those of Maupertuis and Hamilton regarding least action, replacing least action with deviation from generalized utility."

"Robert Rosen (1985, p. 341), in the famous book Anticipatory Systems, tentatively defined the concept of an anticipatory system: a system containing a predictive model of itself and/or of its environment, which allows it to change state at an instant in accord with the model's predictions pertaining to a later instant.

Robert Rosen considers that anticipatory systems are related to the final causation of Aristotle. A future cause could produce an effect at the present time. Then the causality principle seems reversed. Robert Rosen relates some anticipatory systems to feedforward loops.

So, for such anticipatory systems, it is perhaps better to speak of a finality principle and to see the process at a non-local or global point of view instead of seeing locally the causality process. In cybernetics and control theory, a goal and objective, defined at the present time by an engineer, drives the future states of a system by feedback loops.

It is interesting to point out that in physics, recursive causal systems can be formally expressed in a global and equivalent way from the principle of least action of Maupertuis.

An important class of anticipatory system is a system with multiple potential future states for which the actualisation of one of these potential futures is determined by the events at each current time. Such an anticipatory system is thus a system without an explicit future objective.

An anticipatory system could be also a system which contains a set of possible responses to any potential or, even, unpredictable external events. In this sense, the co-operative dynamics of the immune systems, for example, is a self-organising system which can be considered as an anticipatory system. Then all learning and evolutionary systems belong also to this class of anticipatory system.
Many biologists would prefer a Lamarckian comprehension of evolution, answering the "why" question, rather than a Darwinian explanation answering the "how" question.

From my point of view, Darwinian and Lamarckian systems are two complementary models of evolutionary natural systems, similar to the two descriptions of physical systems: on one hand from a causal principle, and on the other hand from a least action principle.

In neo-Darwinism, random mutations create different species which are then selected by the environment. Thus a neo-Darwinian system is an anticipatory system with multiple potential mutations which are selected by the environment.

In Lamarckism, species possess organs with multiple potential functions which are then selected by the environment; the environment thus actualises particular functions for each organ. Thus a Lamarckian system is an anticipatory system with multiple potential functions, and the selection by the environment actualises some of these."
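Rosen's definition above -- a system that changes state now in accord with a model's prediction about a later instant -- can be sketched as a feedforward controller. This is my toy illustration, not Rosen's formalism: the cooling rate, the lookahead of three steps, and the threshold are all arbitrary assumptions.

```python
# Toy anticipatory system: the system holds a predictive model of its
# environment and changes state *now* according to what the model
# predicts for a *later* instant (feedforward, not feedback).

def predict(temp, steps_ahead):
    """Internal model: the room cools by 1 degree per step."""
    return temp - 1.0 * steps_ahead

class AnticipatoryHeater:
    def __init__(self, temp):
        self.temp = temp
        self.heater_on = False

    def step(self):
        # Act on the forecast for 3 steps ahead, not on the present reading.
        self.heater_on = predict(self.temp, steps_ahead=3) < 18.0
        self.temp += (2.0 if self.heater_on else 0.0) - 1.0  # heating minus cooling

room = AnticipatoryHeater(temp=20.0)
for _ in range(10):
    room.step()
```

The heater switches on while the room is still at a comfortable 20 degrees, because the model's prediction for a later instant falls below the threshold -- the "future cause" acting at the present time that the quote describes.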

"The book directly challenges nearly all conventional views of the future and illuminates the danger that lies ahead if we do not plan for the impact of rapidly advancing technology."

This part is interesting from a collective intelligence/mechanism design perspective:

"Creating a Virtual Job
At the most basic level, a job is essentially a set of incentives. As a person acts according to those incentives, he or she performs work that is currently required in order to produce products and services. In the economy of the future, if that work is no longer required, we will need to create “virtual” jobs. In other words, people will continue to earn income by acting in accordance with incentives, but their actions will not necessarily result in “work” in the traditional sense.

The income earned by individuals must be unequal and dependent on each individual’s success in acting according to the established incentives. This will insure that people are motivated to act in ways that benefit themselves as well as society as a whole. Most importantly, this system will get a reliable stream of income into the hands of consumers, and as we have seen, that is absolutely essential
in order to create sustained demand for mass market products and services and therefore drive the economy. If we can do that successfully, then the free market economy can continue to operate and generate broad-based prosperity indefinitely.

The obvious questions that arise next are: What should these incentives be, and who should set them? The basic incentives should be fairly obvious; we simply need to combine the best positive incentives that are currently built into the idea of a traditional job with additional incentives that directly address the externalities that our current system overlooks.
Setting the Incentives

Who should be responsible for defining these incentives and setting the associated income levels? Clearly, this would have to be a function of government, although it might be privatized to some degree (see below). In Chapter 3, I argued that we should consider creating an independent agency to administer the details of the tax code.

We would certainly not want our future incentive scheme to be directly influenced by special interests, so it therefore seems likely that the creation of another independent agency would make sense. A “National Incentives Board” could be set up to define and maintain income incentives. This agency would be staffed by professionals and would be able to adjust incentives over time in much the same way that the Federal Reserve controls interest rates." (The Lights in the Tunnel)

"What we’re seeing today, says Thomas Malone, a professor at the MIT Sloan School of Management and the author of the 2004 book “The Future of Work,” is a further shift. The growing freelance workforce, he argues, is made up of people who see themselves not as having a single job so much as having several at once. To describe the current change, Malone borrows an image that the sociologist Alvin Toffler used to describe the earlier one.

“One of the things [Toffler] said was that we should move from the idea of a career as a linear progression up the ranks in a single organization to that of a career as a portfolio of jobs that you hold over time in a series of different organizations,” says Malone. “What I’m just now realizing is that many people today see their career portfolio including a combination of jobs at the same time.”
Malone believes that new forms of freelancing will help drive this change. Companies like iStockphoto (a stock photograph and image site containing the work of over 70,000 artists), Threadless (a T-shirt design company where anyone can submit designs and evaluate others), and Elance (an online source of skilled freelance labor) are models of companies where not just secondary jobs but the core function of the business is outsourced to a diffuse online workforce. All are helping connect client companies and freelance laborers to each other easily, without a traditional intermediary and with stricter standards than online marketplaces like Craigslist."

"Leonid Hurwicz started the field in the 1960s when he considered a very policy-oriented problem. How should a planner reach a decision when the quality of the decision relies on information spread among a number of people? The mathematical formulation of this problem is at the heart of mechanism design theory. Among his key insights is the idea that any solution should take into account the incentives of self-interested agents, i.e., the people on whose information the decision relies must find it in their interest to reveal that information.

Hurwicz’ analysis spoke to the great intellectual debate of the time: was Capitalism or Socialism the better system? His results pointed to a major weakness of Socialism -- the lack of proper individual incentives. They did not, however, let Capitalism off lightly, because individual incentives are not always aligned with social incentives. They did help governments think about how best to regulate a capitalist economy.

Key contributions from Maskin and Myerson developed the theory further in the 1970s and 1980s. In its current form, mechanism design theory provides a general framework to study any collective decision problem. A mechanism design problem has three key inputs:

· A collective decision problem, such as the allocation of work in a team, the allocation of spectrum for mobile telephony or funding for public schools;

· A measure of quality to evaluate any candidate solution, for example efficiency, profits or social welfare;

· A description of the resources – informational or otherwise – held by the participants.

A mechanism specifies the set of messages that participants can use to transmit information and the decision that will be taken conditional on the messages that are sent. Once a mechanism is in place, participants effectively “play a game” where they send messages (e.g., a bid in an auction) as a function of their information. The goal is to find a mechanism with an equilibrium decision outcome (sometimes required to be unique) that is best according to the given measure of quality. The strength of mechanism design lies in its generality: any procedure, market-based or not, can be evaluated within a unified framework.

How mechanism design is used in economic policy-making

Today, mechanism design theory is part of the standard toolkit of every economist, and every economist uses it – consciously or unconsciously – almost daily. Mechanism design theory has affected virtually all areas of policy. Its policy implications lie at two levels. First, mechanism design theory tells us when markets or market-based institutions are likely to yield desirable outcomes (remember Adam Smith’s Invisible Hand?) and when other institutions will be better at achieving the desired goals. Second, mechanism design theory gives us guidance to design such alternative institutions when markets fail. In the rest of this article, we describe some of the policy areas affected by mechanism design."
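As a concrete instance of the framework just quoted: in a sealed-bid second-price (Vickrey) auction, the messages are bids, and the decision rule allocates the item to the highest bidder at the second-highest price. This is a standard textbook example of a mechanism, not one drawn from the article itself.

```python
# A minimal mechanism: messages are bids; the decision rule gives the
# item to the highest bidder at the second-highest bid (Vickrey auction).

def vickrey(bids):
    """bids: dict mapping bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]          # the winner pays the second-highest bid
    return winner, price

winner, price = vickrey({"alice": 10.0, "bob": 7.0, "carol": 4.0})
```

Because the price paid does not depend on the winner's own bid, shading or inflating a bid can only lose the item or win it at a loss; reporting one's true value is therefore a dominant strategy -- exactly the kind of incentive property that the "measure of quality" in a mechanism design problem is evaluated against.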

2.4.9 Arts and Sciences – Strange and astonishing things

In all matters moderation is desirable. If a thing is carried to excess, it will prove a source of evil. Consider the civilization of the West, how it hath agitated and alarmed the peoples of the world. An infernal engine hath been devised, and hath proved so cruel a weapon of destruction that its like none hath ever witnessed or heard. The purging of such deeply-rooted and overwhelming corruptions cannot be effected unless the peoples of the world unite in pursuit of one common aim and embrace one universal faith. Incline your ears unto the Call of this Wronged One and adhere firmly to the Lesser Peace.

Strange and astonishing things exist in the earth but they are hidden from the minds and the understanding of men. These things are capable of changing the whole atmosphere of the earth and their contamination would prove lethal.
(Baha'u'llah, Tablets of Baha'u'llah, p. 69)

2.4.10 Arts and Sciences – The hidden secrets of nature

Ponder and reflect: all sciences, arts, crafts, inventions and discoveries, have been once the secrets of nature and in conformity with the laws thereof must remain hidden; yet man through his discovering power interfereth with the laws of nature and transfereth these hidden secrets from the invisible to the visible plane. This again is interfering with the laws of nature.
(Abdu'l-Baha, Baha'i World Faith - Abdu'l-Baha Section, p. 339)

2.4.11 Arts and Sciences – The intellect of man

The sciences and arts, all inventions, crafts, trades and their products have come forth from the intellect of man. It is evident that within the human organism the intellect occupies the supreme station. Therefore, if religious belief, principle or creed is not in accordance with the intellect and the power of reason, it is surely superstition.
(Abdu'l-Baha, The Promulgation of Universal Peace, p. 63)

2.4.12 Arts and Sciences – Future applications

The enormous energy dissipated and wasted on war, whether economic or political, will be consecrated to such ends as will extend the range of human inventions and technical development, to the increase of the productivity of mankind, to the extermination of disease, to the extension of scientific research, to the raising of the standard of physical health, to the sharpening and refinement of the human brain, to the exploitation of the unused and unsuspected resources of the planet, to the prolongation of human life, and to the furtherance of any other agency that can stimulate the intellectual, the moral, and spiritual life of the entire human race.

A world federal system, ruling the whole earth and exercising unchallengeable authority over its unimaginably vast resources, blending and embodying the ideals of both the East and the West, liberated from the curse of war and its miseries, and bent on the exploitation of all the available sources of energy on the surface of the planet, a system in which Force is made the servant of Justice, whose life is sustained by its universal recognition of one God and by its allegiance to one common Revelation -- such is the goal towards which humanity, impelled by the unifying forces of life, is moving."
(Baha'u'llah, The Proclamation of Baha'u'llah, p. xii)

"Zeitgeist: The Movie is a 2007 documentary film by Peter Joseph that asserts a number of conspiracy theory-based ideas, including a mythological origin of Christianity, alternate theories for the parties responsible for the September 11th attacks, and finally, that bankers manipulate the international monetary system and the media in order to consolidate power."

I wouldn't take them seriously: even if they have some imaginative ideas in the Venus Project, they're approaching the problem from a mentality that tries to pin the blame while ignoring the complexity of the mechanisms at work.

A better use of resources would be exploring how economic theories such as mechanism design could be used to prevent unfair or unethical practices.

In terms of the "Future of Work", most of the interesting stuff is being done at MIT:

"Our basic research question is: How can people and computers be connected so that—collectively—they act more intelligently than any individuals, groups, or computers have ever done before?"

There is a lot of research on this stuff...I'm not so worried about the macroeconomic impact as the microeconomic underpinning.

The Future of the Electronic Marketplace

"The marketplace is the place of exchange between buyer and seller. Once one rode a mule to get there; now one rides the Internet. An electronic marketplace can span two rooms in the same building, or two continents. How individuals, firms, and organizations approach and define the electronic marketplace of the future depends on people's ability to ask the right questions now and to take advantage of the opportunities that will arise over the next few years.

The contributors to this volume are prime movers in major industries that are remaking themselves in order to shape the global marketplace. They examine the consumers' new powers to assess and exchange goods and services over unparalleled distances. They discuss the opportunities and risks posed by the new integration between manufacturer and consumer, by the erosion of centralized authority, by real-time choice in every financial contingency, and by the fact that travel and transportation have been delegated to the machine processes that can best handle them. They also reflect on how to set an intelligent value on the coming changes, on the tools and procedures required to create this new marketplace of marketplaces."

Collective Cognition
Mathematical Foundations of Distributed Intelligence

Many forms of individual cognition are enhanced by communication and collaboration with other intelligent agents. We propose to call this collective cognition, by analogy with the well known concept of collective action. People (and other intelligent agents) often "think better" in groups and sometimes think in ways which would be simply impossible for isolated individuals. Perhaps the most spectacular and important instance of collective cognition is modern science. An array of formal organizations and informal social institutions also can be considered means of collective cognition. For instance, Hayek famously argued that competitive markets effectively calculate an adaptive allocation of resources that could not be calculated by any individual market-participant. Hitherto the study of collective cognition has been qualitative, philosophical, even at times anecdotal. Only recently, we believe, have the tools fallen into place to initiate a rigorous, quantitative science of collective cognition. Moreover, it appears that soon there will be a real practical need for such a science.

Collective cognition involves an interaction among three elements -- the individual abilities of the agents, their shared knowledge, and their communication structure. Cognitive collectives therefore resemble many other complex systems which are collectives of goal-directed processes. Typically, the individual processes know little of the detailed dynamics and the state of the overall system and, therefore, must use adaptive techniques to achieve their goals. There are many naturally occurring examples, including human economies, human organizations, ecosystems, and even spin glasses. In addition, it has recently become clear that many of the engineered systems of the future must be of this type, with massively distributed computational elements. There is optimism in the multi-agent system (MAS) field that widely applicable solutions to large, distributed problems are close at hand. Some experts now believe that, in the information and telecommunications networks of today, we have nascent examples of artificial cognitive collectives.

Much scientific work has been done on the "forward problem" of deducing the global behavior that ensues for various choices of the design parameters for collectives. In the case of collective cognition, the forward problem concerns the evolution over time of the shared knowledge of the agents. This will be a major theme of the conference; we wish, for instance, to know just how analogous to organic evolution this knowledge evolution really is. The other theme of the conference, however, is the less well-understood inverse problem: starting with a desired global behavior for a collective, design the goals of constituent agents so as to produce that behavior. Much of what is known about this design problem concerns collectives of human beings, e.g., mechanism design in economics. While a crucial starting point, such work is limited by the peculiarities of human psychology, peculiarities which need not be shared by other, perhaps simpler and less intelligent, agents, or simply ones with radically different degrees of freedom.
Impact and Related Fields
There are many fields that have addressed aspects of collective cognition, from decentralized control theory to economics and game theory to social psychology. However, there are major differences in both the approach such fields take and the set of assumptions that form the basis of those fields. For example, the components of a multi-agent system may have many degrees of freedom that human beings lack, and lack many that human beings possess. Since MAS designers can, to a large extent, choose what degrees of freedom to give their agents, they have more flexibility in choosing policies for agent-agent interactions than (say) economists doing mechanism design. Furthermore, while game theory has established a strong theoretical basis, across several disciplines, for analyzing the equilibrium behavior of systems and how various equilibrium states relate to one another, there is little work on far-from-equilibrium behaviors and their robustness to perturbations.

Therefore, neither the direct application nor the simple extension of principles borrowed from existing fields is likely to provide the theoretical tools needed to understand and design the emergence of collective cognition in multi-agent systems. In this workshop we will address the design of systems that are intended to solve large distributed computational problems with little or no hand-tailoring through the collective and adaptive behavior of the agents comprising that system.
Cognitive science
Situated agents
Emergent computation
Bounded rationality
Institutional economics
Economies of information
Evolutionary game theory
Cognitive ethology
Collective phenomena in physics
Neural computation and distributed representations
Distributed computation
Mechanism design
General equilibrium theory
Population biology
Swarm intelligences
Reinforcement learning
Adaptive control
Cultural evolution
Cognitive sociology and the sociology of science
Telecommunications data routing"
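A minimal quantitative instance of a group "thinking better" than its members (my illustration, not the workshop's): averaging many independent noisy estimates shrinks the error roughly by the square root of the group size, which is the simplest version of the aggregation Hayek attributed to markets.

```python
# Collective estimation: 1000 agents each make a noisy guess at a hidden
# quantity; the group average lands far closer to the truth than a
# typical individual (error shrinks ~ 1/sqrt(N) for independent noise).
import random

random.seed(0)                     # fixed seed for reproducibility
truth = 100.0
guesses = [truth + random.gauss(0.0, 20.0) for _ in range(1000)]

collective = sum(guesses) / len(guesses)
collective_error = abs(collective - truth)
typical_error = sum(abs(g - truth) for g in guesses) / len(guesses)
```

With noise of standard deviation 20, a typical individual is off by roughly 16, while the collective estimate is off by well under 1 -- the "forward problem" in its most elementary form, before any communication structure or incentives are added.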

Collectives and the design of complex systems

"With the advent of extremely affordable computing power, the world is becoming filled with distributed systems of computationally sophisticated components. However, no current scientific discipline offers a thorough understanding of the relation of such "collectives" and how well they meet performance criteria. "Collectives and Design of Complex Systems" lays the foundation for the study of collective intelligence and how these entities can be developed to yield optimal performance. Part one describes how some information-processing problems can only be solved by the joint actions of large communities of computers, each running their own complex, decentralized machine-learning algorithms. Part two offers general analysis on the dynamics and structures of collectives. Finally, part three addresses economic, model-free, and control-theory approaches to designing these complex systems."

Predicting the Future
by Alan Kay

"Xerox PARC (a computer science think tank for which Kay was a founding principal in 1970) was set up in Palo Alto to be as far away from corporate headquarters in Stamford, Connecticut, as possible and still be in the continental U.S. We used to have visits from the Xerox executives -- usually in January and February -- and when we could get them off the tennis courts they would come into the building at PARC. Mainly they were worried about the future, and they would badger us about what's going to happen to us. Finally, I said: 'Look, the best way to predict the future is to invent it. This is the century in which you can be proactive about the future; you don't have to be reactive. The whole idea of having scientists and technology is that those things you can envision and describe can actually be built.' It was a surprise to them and it worried them."

The Early History of Smalltalk

"Most ideas come from previous ideas. The sixties, particularly in the ARPA community, gave rise to a host of notions about "human-computer symbiosis" through interactive time-shared computers, graphics screens and pointing devices. Advanced computer languages were invented to simulate complex systems such as oil refineries and semi-intelligent behavior. The soon-to-follow paradigm shift of modern personal computing, overlapping window interfaces, and object-oriented design came from seeing the work of the sixties as something more than a "better old thing." That is, more than a better way: to do mainframe computing; for end-users to invoke functionality; to make data structures more abstract. Instead the promise of exponential growth in computing/$/volume demanded that the sixties be regarded as "almost a new thing" and to find out what the actual "new things" might be. For example, one would compute with a handheld "Dynabook" in a way that would not be possible on a shared mainframe; millions of potential users meant that the user interface would have to become a learning environment along the lines of Montessori and Bruner; and needs for large scope, reduction in complexity, and end-user literacy would require that data and control structures be done away with in favor of a more biological scheme of protected universal cells interacting only through messages that could mimic any desired behavior.

Early Smalltalk was the first complete realization of these new points of view as parented by its many predecessors in hardware, language and user interface design. It became the exemplar of the new computing, in part, because we were actually trying for a qualitative shift in belief structures -- a new Kuhnian paradigm in the same spirit as the invention of the printing press -- and thus took highly extreme positions which almost forced these new styles to be invented."
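Kay's "protected universal cells interacting only through messages" can be sketched in a few lines -- in Python rather than Smalltalk, and purely as my illustration: internal state is reachable only through a generic message send, and unknown messages get a catch-all reply, loosely in the spirit of Smalltalk's doesNotUnderstand:.

```python
# Sketch of "cells interacting only through messages": state stays
# internal, and the only entry point is send(); unknown messages get a
# catch-all response rather than an error.

class Cell:
    def __init__(self):
        self._state = 0                      # protected internal state

    def send(self, message, *args):
        handler = getattr(self, "_on_" + message, None)
        if handler is None:
            return "does not understand"     # cf. Smalltalk's doesNotUnderstand:
        return handler(*args)

    def _on_increment(self, n):
        self._state += n
        return self._state

    def _on_value(self):
        return self._state

cell = Cell()
cell.send("increment", 5)
result = cell.send("value")
```

The point of the design is that a cell can respond to any message however it likes -- forward it, reinterpret it, refuse it -- which is what lets such cells "mimic any desired behavior" without exposing data or control structures.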

Success means different things to different people; survival, productivity, and happiness are all part of the equation, but for me it comes down to complementary forms of motivation.

"These are the four different kinds of motivation:

First, motivation can be intrinsic or extrinsic. Intrinsic motivation is when you want to do something. Extrinsic motivation is when somebody else tries to make you do something.

Secondly, there is positive and negative motivation. Positive motivation is when you want to get something - motivation towards some goal. Negative motivation is away from something you want to avoid.

Combine these two dimensions and we get four kinds of motivation.
Social value orientations are based on the assumption that individuals pursue different goals when making decisions for which the outcomes affect others. Social psychologists generally distinguish between five types of social value orientations. The main difference between each category is the extent to which one cares about his or her own payoffs and that of the other in social dilemma situations.

Altruistic: Desire to maximize the welfare of the other

Cooperative: Desire to maximize joint outcomes

Individualistic: Desire to maximize own welfare with no concern of that of the other

Competitive: Desire to maximize own welfare relative to that of the other

Aggressive: Desire to minimize the welfare of the other
2. You get diminishing returns - If the punishment or rewards stay at the same levels, motivation slowly drops off. To get the same motivation next time requires a bigger reward.

3. It hurts intrinsic motivation - Punishing or rewarding people for doing something removes their own innate desire to do it on their own. From now on you must punish/reward every time to get them to do it."
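The five social value orientations quoted above are often formalized as a pair of weights on one's own and the other's payoff. The weights below are conventional illustrative choices, not values taken from the quoted text.

```python
# Social value orientations as (w_self, w_other) weights on payoffs:
# utility = w_self * own + w_other * other. Weights are illustrative.

ORIENTATIONS = {
    "altruistic":      (0.0,  1.0),   # only the other's welfare counts
    "cooperative":     (1.0,  1.0),   # maximize joint outcomes
    "individualistic": (1.0,  0.0),   # only own welfare counts
    "competitive":     (1.0, -1.0),   # maximize own welfare relative to the other
    "aggressive":      (0.0, -1.0),   # minimize the other's welfare
}

def utility(orientation, own, other):
    w_self, w_other = ORIENTATIONS[orientation]
    return w_self * own + w_other * other

# A cooperator prefers the joint outcome (5, 5) to the selfish one (6, 0);
# an individualist ranks them the other way around.
coop_prefers_joint = utility("cooperative", 5, 5) > utility("cooperative", 6, 0)
indiv_prefers_joint = utility("individualistic", 5, 5) > utility("individualistic", 6, 0)
```

Framing the orientations as weights makes the connection to mechanism design direct: an incentive scheme that works for individualistic agents may fail, or backfire, when agents actually carry cooperative or competitive weights.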

Factors that promote intrinsic motivation

"What enhances intrinsic motivation? This webpage cites some research and lists the factors that create and sustain intrinsic motivation. The list includes:

Challenge - Being able to challenge yourself and accomplish new tasks.

Control - Having choice over what you do.

Cooperation - Being able to work with and help others.

Recognition - Getting meaningful, positive recognition for your work.
To these I would add:

Happiness at work - People who like their job and their workplace are much more likely to find intrinsic motivation.

Trust - When you trust the people you work with, intrinsic motivation is much easier."

How to avoid the Crowding Out of intrinsic motivation by extrinsic rewards

From a study by Tobias Assman on Incentives for Participation at

"The effects of external interventions on intrinsic motivation have been attributed to two psychological processes:

(a) Impaired self-determination. When individuals perceive an external intervention to reduce their self-determination, they substitute intrinsic motivation by extrinsic control. Following Rotter (1966), the locus of control shifts from the inside to the outside of the person affected. Individuals who are forced to behave in a specific way by outside intervention feel overjustified if they maintained their intrinsic motivation.

(b) Impaired self-esteem. When an intervention from outside carries the notion that the actor's motivation is not acknowledged, his or her intrinsic motivation is effectively rejected. The person affected feels that his or her involvement and competence is not appreciated, which debases its value. An intrinsically motivated person is deprived of the chance to display his or her own interest and involvement in an activity when someone else offers a reward, or a command, to undertake it. As a result of impaired self-esteem, individuals reduce effort.

The two processes identified allow us to derive the psychological conditions under which the crowding-out effect appears:

(1) External interventions crowd-out intrinsic motivation if the individuals affected perceive them to be controlling. In that case, both self-determination and self-esteem suffer, and the individuals react by reducing their intrinsic motivation in the activity controlled.

(2) External interventions crowd-in intrinsic motivation if the individuals concerned perceive it as supportive. In that case, self-esteem is fostered, and individuals feel that they are given more freedom to act, thus enlarging self-determination."

‎"In the standard Big Bang model, there's nothing cyclic; it has a beginning and it has no end.

"The philosophical question that's sensible to ask is 'what came before the Big Bang?'; and what they're striving for here is to do away with that 'there's nothing before' answer by making it cyclical."

‎"As to thy question whether the physical world is subject to any limitations, know thou that the comprehension of this matter dependeth upon the observer himself. In one sense, it is limited; in another, it is exalted beyond all limitations. The one true God hath everlastingly existed, and will everlastingly continue to exist. His creation, likewise, hath had no beginning, and will have no end. All that is created, however, is preceded by a cause. This fact, in itself, establisheth, beyond the shadow of a doubt, the unity of the Creator."
(Bahá'u'lláh, Gleanings from the Writings of Bahá'u'lláh, LXXXII, p. 162-163)

Time Travel and Consistency Constraints

"The possibility of time travel, as permitted in General Relativity, is responsible for constraining physical fields beyond what laws of nature would otherwise require. In the special case where time travel is limited to a single object returning to the past and interacting with itself, consistency constraints can be avoided if the dynamics is continuous and the object’s state space satisfies a certain topological requirement: that all null-homotopic mappings from the state-space to itself have some fixed point. Where consistency constraints do exist, no new physics is needed to enforce them. One needs only to accept certain global topological constraints as laws, something that is reasonable in any case.
3. What’s the Problem with Time Travel? The usual stock of time-travel paradoxes begs us to distinguish between global and local possibility. In stories where the protagonist time travels into the past to kill himself as a youngster, it is purportedly paradoxical how the time traveler can kill himself. Yet, the resolution of the paradox is well known. The time traveler can kill his earlier self in the local sense because, given only the local physics, the time traveler is no less capable of murder for his time traveling: he has his trusty sword, the requisite malice, etc. It is impossible for him, though, to kill himself in the global sense because there is no possible world (barring resurrection) where he dies as a youth and then later journeys back through time. Conflicting intuitions about the abilities of the time traveler arise only by equivocating between local and global possibility. The global/local distinction makes the world safe for time travel."
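The fixed-point requirement in the passage above can be illustrated with a toy computation (entirely my own construction, not from the quoted paper): treat the time traveler's round trip as a continuous map f on a compact state space, where f(x) is the state the object leaves with after interacting with its younger self. A self-consistent history is a fixed point f(x) = x, guaranteed to exist for continuous f on [0, 1], and it can be located numerically:

```python
# Toy model (my construction, not the quoted paper's): the object enters the
# past with state x and, after the interaction, emerges with state f(x).
# A self-consistent history satisfies f(x) = x; for continuous f mapping
# [0, 1] into itself a fixed point exists, found here by bisection on
# g(x) = f(x) - x, which is >= 0 at the left end and <= 0 at the right.

def find_consistent_state(f, lo=0.0, hi=1.0, tol=1e-9):
    """Return x with f(x) ~= x, assuming f maps [lo, hi] into itself."""
    g = lambda v: f(v) - v
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical interaction: the encounter damps the state toward 0.3
f = lambda x: 0.5 * x + 0.3
x_star = find_consistent_state(f)
print(round(x_star, 6))  # → 0.6, the unique self-consistent history here
```

With several interacting degrees of freedom the same argument runs through Brouwer's fixed-point theorem on the product space, which is essentially the topological condition (fixed points for null-homotopic self-maps) that the paper generalizes.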

IMHO, we have degrees of freedom within "self-determinacy", but since we are local "selves" embedded in a global "self", we are limited by the mutual coherence of our parts. Perhaps that one fixed iteration through time could be a cosmic form of "self-reference"?

A Quantum Computational Approach to the Quantum Universe

"We discuss an approach to quantum physics and cosmology based on Feynman's vision of the universe as a vast form of self-referential quantum computation, replacing the conventional geometric view of spacetime with a pre-geometric setting based on quantum information acquisition and exchange.
A particular issue in quantum cosmology is the "problem of time". The application of constraint mechanics and canonical quantization leads to the Wheeler-DeWitt equation, in which the state of the universe appears to be frozen in time. How can a universe which appears timeless on one level appear to be the dynamically changing universe that we see on the local human level? Directly related to this problem is the unresolved issue of what the "state of the universe" means, given that there are, by definition, no observers external to the universe."

4. The CTMU and Cosmology

A curious child often asks “why” questions, and when an answer is given, immediately asks another why question about the answer. Such a child is unsatisfied with superficial explanations, craving instead an ultimate rationale for existence. Example: “Why is grass green?” “Chlorophyll’s green.” “Why does grass have chlorophyll?” “Because it needs to photosynthesize.” “Why?” “Because it gets its energy from the sun.” “Why does the sun make energy?” “Because it’s a huge fusion reactor that takes energy from atoms.” “Why do atoms have energy?” “Because, as a man named Einstein showed, matter is energy.” “Why?” “Because that’s the way the universe is made.” “What’s the universe and who made it?” At this point, the weary adult has exhausted his scientific knowledge and must begin to deal with the most general and philosophically controversial abstractions in his mental vocabulary…or give up.

Stephen Hawking is among those who have proposed a way out of the regress. In collaboration with James Hartle, he decided to answer the last question - what is the universe and who made it? - as follows. “The universe made itself, and its structure is determined by its ability to do just that.” This is contained in the No Boundary Proposal, which Hawking describes thusly: “This proposal incorporates the idea that the universe is completely self-contained, and that there is nothing outside the universe. In a way, you could say that the boundary conditions of the universe are that there is no boundary.” To mathematically support this thesis, Hawking infuses the quantum wavefunction of the universe with a set of geometries in which space and time are on a par. The fact that time consists of a succession of individual moments thus becomes a consequence of spatial geometry, explaining the “arrow of time” by which time flows from past to future.

Unfortunately, despite the essential correctness of the “intrinsic cosmology” idea (to make the universe self-contained and self-explanatory), there are many logical problems with its execution. These problems cannot be solved simply by choosing a convenient set of possible geometries (structurings of space); one must also explain where these geometric possibilities came from. For his own part, Hawking explains them as possible solutions of the equations expressing the laws of physics. But if this is to be counted a meaningful explanation, it must include an account of how the laws of physics originated…and there are further requirements as well. They include the need to solve paradoxical physical conundrums like ex nihilo cosmogony (how something, namely the universe, can be created from nothing), quantum nonlocality (how subatomic particles can instantaneously communicate in order to preserve certain conserved physical quantities), accelerating cosmic expansion (how the universe can appear to expand when there is no external medium of expansion, and accelerate in the process to boot), and so on. Even in the hands of experts, the conventional picture of reality is too narrow to meaningfully address these issues. Yet it is too useful, and too accurate, to be “wrong”. In light of the fundamentality of the problems just enumerated, this implies a need for additional logical structure, with the extended picture reducing to the current one as a limiting case.

The CTMU takes the reflexive self-containment relationship invoked by Hawking and some of his cosmological peers and predecessors and explores it in depth, yielding the logical structures of which it is built. Together, these structures comprise an overall structure called SCSPL, acronymic for Self-Configuring Self-Processing Language. The natural terminus of the cosmological self-containment imperative, SCSPL is a sophisticated mathematical entity that possesses logical priority over any geometric explanation of reality, and thus supersedes previous models as a fundamental explanation of the universe we inhabit. In doing so, it relies on a formative principle essential to its nature, the Telic Principle. A logical analogue of teleology, the Telic Principle replaces the usual run of ontological hypotheses, including quasi-tautological anthropic principles such as “we perceive this universe because this universe supports our existence,” as the basis of cosmogony.

Endophysics literally means “physics from within”. It is the study of how observations are affected and limited by the observer's being within the universe. This is in contrast with the common exophysics assumption of a system observed from the “outside”.

Endophysics, Time, Quantum and the Subjective:


"The three oldest limit statements are — perhaps — the "flaming sword" of the Bible, the notion of "Maya's veil" in India and Anaximander's claim that the Whole (One) is unrecognizable from within (only a cut is recognizable).

The first modern limit statement also takes the form of an interface statement. Boscovich's (1755) principle of the difference predicts that "the impressions generated" are invariant under certain objective changes of the universe (as when it "breathes" (increases and decreases in size) along with the observer and all forces). A century later, Maxwell (1872) recognized that a being who is part of a Hamiltonian universe cannot violate the second law of thermodynamics by performing micro observations followed by appropriate shepherding actions (like watching a gas of cold uranium atoms and setting barriers or removing them in an educated fashion using advanced modern gadgetry), whereas a being who is outside the same universe — a "demon" — can easily do the same thing. He thereby was able to predict that micro motions will in general be "impalpable" (p. 154 of his Theory of Heat, 1872). After this prediction had come true, half a century later, Bohr (1927) claimed that his own "complementarity principle" marks the limit of what an internal observer of a classical-continuous world can hope to measure. Four years later, Gödel (1931) discovered his discrete limit (an inaccessibility in finitely many steps of certain implications of a formal system from within). Unlike Bohr's intuitive principle, Gödel's is a hard theorem. However, Bohr's is also harder in a sense, because it asserted a contradiction between the inside and the outside view, while Gödel only found that a separating boundary existed between reachable and unreachable truths. Bohr's limit would be a "distortion" limit, Gödel's an "inaccessibility" limit.

Distortion limits were subsequently also found by Ed Moore in 1956 (an analog to uncertainty in certain dissipative automata) and Donald Mackay in 1957 (irrefutability of certain false statements, like the attribution of free will, to deterministic automata). The subsequent limits of Bob Rosen in 1958 (to self-reproduction in category theory), Rolf Landauer in 1961 (to dissipation-free computation) and Ed Lorenz in 1964 (butterfly effect in chaotic dynamics) appear to be inaccessibility limits again. The same holds good for the finite combinatorial inaccessibility (NP-completeness) discovered in the 1970's. The most recent limit of Yukio Gunji (inconsistency of language games) appears to be a distortion limit again.

Do these findings justify the introduction of a new discipline? Two points are in favor of the idea. First, new limits become definable in the fold of the new paradigm. For example, "ultraperspective" (which is inaccessible to animals) plays a fundamental role in both mathematical economics and science. The very notion of limits implies the adoption of two perspectives simultaneously. Second, any distortion limit "splits" the single (exo) reality into many different internally valid (endo) realities. Some of the objective features become interface-specific. This allows a new question to be asked: Can the "mirage properties" implicit in a distortion limit always be identified ("tagged") from the inside, even though they of course cannot be removed by definition? If so, relativity may turn out to be a first example in physics. Related observer-objective phenomena should then occur and be identifiable on the micro level (micro relativity). The famous Goldstein–Kerner "no interaction theorem" (which precludes a frame-dependent description of gases) would acquire a deep significance. Two previously incompatible fundamental theories of physics, namely relativity and quantum mechanics, would be unexpectedly unified by limitology.

A third point is in favor of limitology: distortion limits always leave a loophole. They exist because an objective picture (which no longer depends on the observer) can only be obtained by making the observer explicit — a feat impossible to accomplish from the inside. Nevertheless the impossible can be achieved — on the modeling level. The "artificial universe approach" to the real world therefore qualifies as a new type of measurement. At the same time the computer acquires an unexpected fundamental role."

Endophysics, the fabric of time and the self-evolving universe:

"It is argued that a new understanding of time arises through a paradigm shift in physics, particularly in the areas of astrophysics and cosmology, viz. the change from the exo- to the endo-physical perspective. This claim is illustrated/substantiated by discussion of some of the currently most pressing issues of theoretical physics related to the evolution of the universe: the nature of time and the interpretation of quantum mechanics. It is demonstrated that the endo-physical (or first-person) view is a fertile ground for getting important insights into the structure of the perceived/subjective aspect of time as well as for inferring a possible connection of the latter with its physical counterpart. Concerning quantum mechanics, the endo-physical perspective sheds fresh light on its interpretation and the measurement problem. This opens up a window for novel cosmological scenarios, such as a description of the universe as a self-evolving quantum automaton."

The Inside and Outside Views of Life

"Biology is, better than anything else, about existence in time. Hence biological reality cannot be defined without reference to a temporally situated observer. The coupled or detached character of this observer (with respect to the system's own time variable) provides a link between the observer and the observed. This connection delimits the kinds of scientific descriptions that can be given at all by an observer. In particular, two fundamentally different forms of description, corresponding to different epistemological attitudes and different philosophies of science, called endo- and exo-physics, can be distinguished. Two old puzzles, the Omniscience Problem (illustrated here on the example of Internal Chemistry) and the Chameleon Problem (originally an argument against philosophical functionalism) are reconsidered in the light of these distinctions. As application, the question of in what sense computer models of life can be suitable for studying life is examined."

Endophysics and The Thoughtbody Environment – An outline For a Neo-computational Paradigm:

Structure of Space-Time:

Time and the dichotomy subjective/objective. An endo-physical point of view.

"The process of collecting information from the observed phenomena occurs in the subjective time of all individuals who communicate to each other their own interpretations. Intercommunication between individuals constitutes a loop of knowledge by which single views are mediated toward constructing an “objective view”, agreed within the human society. During the last centuries this process of intercommunication has been made extremely efficient by Science, but the generally adopted exo-physical perspective has often raised the achieved objective views to the rank of absolute concepts, stripping them of their evolving human nature and risking raising expectations that cannot be satisfied. It is argued here that an endo-physical outlook fits better with the actual situation of humans embedded in the world and heavily interacting with it. Within this framework, the dichotomy subjective-objective is conceptually overcome and the classical notion of “universal” time may be seen as one of the results of the loop of knowledge occurring inside the human society. A general developmental scenario is also discussed in connection with Eakins & Jaroszkiewicz’s “stage paradigm”, where classical spacetime and observers “emerge” in a universe running by quantum jumps."

INTERNALISM: excerpted from:

Uexküll’s work anticipates semiotics, autopoiesis, hierarchy theory, internalism, systems science, the information/dynamics dichotomy, as well as the social construction of knowledge aspect of postmodernism. I focus on his Theoretical Biology text. His central theory is that of function cycles, and his major effective image for evolutionary systems is the Umwelt. Internalism is the major current trend that function cycles relate to, and my discussion comes from that perspective, which I try to describe. I give a Peircean semiotic interpretation of function cycles. I attempt to relate Umwelt, an organism level concept, to the Hutchinsonian and Eltonian concepts of the ecological niche. This raises the question of evolution, and Uexküll’s view of it was developmental rather than Darwinian. I consider how including human culture in the Umwelt model raises the issue of Bertalanffy’s “deanthropomorphizing” of human Umwelten. Major problems raised are the relation between externalist and internalist discourses, and the exact nature of Uexküll’s Impulse Theory concept.

ecological niche, endophysics, function cycles, internalism, specification hierarchy, Umwelt, vitalism

Ecological Hypercycles - Covering a Planet with Life:

Semiotic Hypercycles Driving the Evolution of Language

The evolution of human symbolic capacity must have been very rapid even in some intermediate stage (e.g. the proto-symbolic behavior of Homo erectus). Such a rapid process requires a runaway model. The type of very selective and hyperbolically growing self-organization called “hypercycle” by Eigen and Schuster could explain the rapidity and depth of the evolutionary process, whereas traditional runaway models of sexual selection seem to be rather implausible in the case of symbolic evolution. We assume two levels: at the first level the species is adapted to ecological demands and accumulates the effects of this process in the genome. At the second level a kind of social/cultural knowledge is accumulated via a set of symbolic forms, one of which is language. Bühler’s model of three basic functions of signs can also be elaborated so that its cyclic structure becomes apparent. We assume that the hypercyclic process of semiosis and functional differentiation was triggered around 2 million years BP (with Homo erectus) and gained more and more speed with the species Homo sapiens and later. The consequences for the evolutionary stratification of human languages will be drawn in the last section of the paper. The basic aim of the paper is to provide a semiotic (and not just a linguistic) explanation of the origin of language which can be linked to relevant models in evolutionary biology and which exploits the possibilities contained in self-organizing systems.

Keywords Origin of life - Origin of language - Language capacity - Self-organization - Semiotic hypercycle - Cognitive hypercycle - Human ecology - Infinite semiosis - Biosemiotics

"Like Ouroboros, the mythical snake devouring its own tail, causality is cyclic, effects feed back into causes. First we make our buildings, then our buildings make us. As we dream, so we become. Urban conglomerates express primarily an anthropocentric world view; anthropocentric in the sense where our reality becomes honed down to the priorities and central preoccupations of human life alone, at the exclusion and detriment of its wider natural context. Cities of the Kali Yuga also reflect a cybernetic view of life, based upon mechanically determined processes of information and matter.

As people grow up within such environments, they are conditioned by all the philosophical presumptions and emotional charges that such urban assemblages express. This run-away process reflects a deep polarization throughout a greater part of the species mind, and in turn the ‘mind of Gaia’ which we are inextricably expressions of.

Anthropocentrism is fear. It is fear of other, it is fear of nature-crawling-under-your-skin, it is fear of being eaten, it is fear of disease. These are ancient sufferings and it is an old habit to shy away from anything associated with suffering. As the imagination of ego, the ’simulation of self’ crystallizes over countless generations, such a hallucination begins to actually manifest and embody itself in the walls, separations, divisions that sever the earth's boundless flows and so cut off the human essence from the pranic logos flowing through deep currents."

"Reaction-diffusion systems with reaction terms closely related to the hypercycle equation were studied in the context of morphogenesis and biological pattern formation. They yield dissipative structures in systems with suitable spatial boundaries and under the necessary chemical conditions to keep the system far away from thermodynamic equilibrium."
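The hypercycle equation referred to above is, in Eigen and Schuster's formulation, dx_i/dt = x_i(k_i x_{i-1} - phi) with phi = sum_j k_j x_j x_{j-1}, the mean replication flux that keeps the total frequency on the simplex. A minimal Euler-integration sketch (the rate constants and initial frequencies below are illustrative choices of mine, not values from any cited study):

```python
# Minimal sketch of the hypercycle equation: species i is catalyzed by its
# cyclic predecessor i-1, and the mean flux phi is subtracted so that the
# frequencies x_i always sum to 1 (they stay on the simplex).

def hypercycle_step(x, k, dt=0.01):
    n = len(x)
    phi = sum(k[i] * x[i] * x[i - 1] for i in range(n))  # mean replication flux
    # x[i - 1] with i = 0 wraps to the last species: the cycle closes
    return [x[i] + dt * x[i] * (k[i] * x[i - 1] - phi) for i in range(n)]

def simulate(x, k, steps=5000, dt=0.01):
    for _ in range(steps):
        x = hypercycle_step(x, k, dt)
    return x

x0 = [0.4, 0.3, 0.2, 0.1]   # illustrative initial frequencies on the simplex
k = [1.0, 1.0, 1.0, 1.0]    # illustrative equal catalytic rates
x = simulate(x0, k)
print(round(sum(x), 6))     # → 1.0: the subtraction of phi conserves the total
```

With equal rate constants the frequencies relax toward the uniform interior equilibrium for small member counts; hypercycles of five or more members are known to oscillate instead, which is one reason spatial reaction-diffusion versions produce patterns.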

Dynamics, Evolution and Information in Nonlinear Dynamical Systems of Replicators:

The Emergence of Simple Economies of Skill: A Hypercycle Approach to Economic Organization

Reverse Logistics and the Forming of Circular Economy Hypercycle Structure:

Community Ecology: Processes, Models, and Applications:

Community ecology is the study of the interactions between populations of co-existing species. Co-edited by two prominent community ecologists and featuring contributions from top researchers in the field, this book provides a survey of the state-of-the-art in both the theory and applications of the discipline. It pays special attention to topology, dynamics, and the importance of spatial and temporal scale while also looking at applications to emerging problems in human-dominated ecosystems (including the restoration and reconstruction of viable communities).

System Dynamics, Hypercycles and Psychosocial Self-organization: exploration of Chinese correlative understanding:

What is the role of training in organizational learning?

Training is the method to pass organizational memory and routines from one person to another. It helps maintain organizational memory in the organization and socialize new members. It also transmits needed skills, values, etc. to the entire organization, or targets skills in certain members.

What are organization-specific skills?

Organization-specific skills are those that cannot be acquired from outside the organization. These include:

* values, beliefs,

* internal procedures, rules

* organizational communication channels

* proprietary technologies

(Similar to the context-specific knowledge seen in the consultancy article.)

On the origin of R-species

"In this note we formalize certain aspects of the observation process in an attempt to link the logic of the observer with properties of the observable structures. It is shown that an observer with Boolean logic perceives her environment as a four-dimensional Lorentzian manifold, and that the intrinsic mathematics of the environment may differ from the classical mathematics (i.e. the mathematics of the topos of sets).


Topos theory offers an independent (of the set theory) approach to the foundations of mathematics. Topoi are categories with set-like objects, function-like arrows and Boolean-like logic algebras. Handling these generalized sets and functions in a topos may differ from that in classical mathematics (i.e. the topos Set of sets): there are non-classical versions of mathematics, each with its non-Boolean version of logic. One possible view on topoi is this: abstract worlds, universes for mathematical discourse, inhabitants (observers) of which may use non-Boolean logics in their reasoning. From this viewpoint the main business of classical physics is to construct models of the universe with a given bivalent Boolean model of the observer, and choose the most adequate one. In a sense, our task is inverse: with a given model of the universe, to construct models of the observer, and find out how the observer’s perception of the universe changes if his logic is changed. Thus, it is not the universe itself, but rather its differential is what interests us here."

‎"Service is the Fundamental Basis of Exchange."

At least in terms of human mobility, we're up to 93% predictable; I'm not sure if there's been a study on collective innovation.

"Networks are everywhere: from social networks and terrorist networks linking people through the World Wide Web and beyond to biological networks communicating within a cell and from linguistic networks describing how words relate to each other to networks tracking how diseases spread globally. What’s remarkable, says physicist Albert-Laszlo Barabasi—who studies such networks using the methods of statistical physics—is that these seemingly very different systems have many similar properties.
Statistical mechanics tries to predict properties of collections of particles, such as molecules in a gas. One can’t possibly track the behavior of the individual molecules, but using statistical mechanics, one can still calculate properties, such as the temperature and pressure, of the gas as a whole. “Networks are not so different in many ways,” says Barabasi. “It turns out that you can say things about the behavior of the whole network even if you just know about a few nodes. This is very similar to what happens in statistical mechanics.”
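Barabasi's statistical-physics view of networks is usually illustrated with his preferential-attachment growth model: each newcomer links to existing nodes with probability proportional to their current degree, producing the heavy-tailed degree distributions seen across very different systems. A stdlib-only sketch (the parameters `n`, `m`, and the small ring used as a seed graph are my own illustrative choices); the trick is that a uniform pick from a list where each node appears once per edge endpoint is exactly a degree-proportional pick:

```python
import random

def barabasi_albert(n, m=2, seed=1):
    """Grow an n-node network; each new node links to m distinct existing
    nodes chosen with probability proportional to degree (via the stub list)."""
    rng = random.Random(seed)
    edges = []
    stubs = []                          # node i appears deg(i) times here
    for i in range(m + 1):              # seed graph: a small ring of m+1 nodes
        j = (i + 1) % (m + 1)
        edges.append((i, j))
        stubs += [i, j]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))   # degree-proportional pick
        for t in targets:
            edges.append((new, t))
            stubs += [new, t]
    return edges

edges = barabasi_albert(500, m=2)
deg = {}
for a, b in edges:
    deg[a] = deg.get(a, 0) + 1
    deg[b] = deg.get(b, 0) + 1
print(len(edges))            # → 997 edges (3 in the seed ring + 497 * 2)
print(max(deg.values()))     # hubs: far above the mean degree of ~4
```

The point of the quote survives the simplicity of the model: the degree distribution of the whole is predictable even though no individual node's links are.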

"Barabási and colleagues used three months' worth of data from a cellphone network to track the cellphone towers each person's phone connected to each hour of the day, revealing their approximate location. They conclude that regardless of whether a person typically remains close to home or roams far and wide, their movements are theoretically predictable as much as 93 per cent of the time."
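The 93% figure in that study comes from bounding the best achievable prediction accuracy Pi by the entropy S of a person's location sequence via Fano's inequality, S = H(Pi) + (1 - Pi) log2(N - 1), where N is the number of distinct locations visited and H is the binary entropy. A sketch that inverts the bound numerically (the entropy and N values at the end are illustrative, not data from the study):

```python
import math

def binary_entropy(p):
    """H(p) in bits; defined as 0 at the endpoints."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def max_predictability(S, N, tol=1e-9):
    """Largest Pi in [1/N, 1] with H(Pi) + (1 - Pi) * log2(N - 1) = S.
    The left-hand side decreases on [1/N, 1], so bisection applies."""
    f = lambda p: binary_entropy(p) + (1 - p) * math.log2(N - 1) - S
    lo, hi = 1.0 / N, 1.0       # f(lo) = log2(N) - S >= 0, f(hi) = -S <= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative numbers: 0.8 bits of entropy over 50 visited locations
print(round(max_predictability(0.8, 50), 3))
```

Low entropy in the hourly location sequence is what forces the bound so high: even a "roamer" tends to revisit a few places on a regular schedule, so S stays small relative to log2(N).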

"Why does this work? This works because we all have limited attention spans and therefore we have to prioritize in order to get things done. In fact, this prioritization that we do is what causes the bursty activity that Barabasi explains so elegantly in his book."

con- (Latin: together, together with, with)
-science (Latin: to know, to learn; knowledge)

1. Sharing in the knowledge of, having cognizance of, being a witness to; mentally alive or awake.
2. Having internal perception or consciousness.
3. Aware of what one is doing or intending to do; having a purpose and intention in one's actions.
4. Objective or aware of one's consciousness; known to oneself, felt, sensible.
5. Etymology: from Latin conscius, "knowing, aware"; from conscire; from Latin scire, "to know"; probably a loan-translation of Greek syneidos.


"Consciousness". We have traveled what may seem a dizzying path. First, the elementary quantum phenomenon brought to a close by an irreversible act of amplification. Second, the resulting information expressed in the form of bits. Third, this information used by observer-participants via communication - to establish meaning. Fourth, from the past through the millennia to come, so many observer-participants, so many bits, so much exchange of information, as to build what we call existence.

Doesn't this it-from-bit view of existence seek to elucidate the physical world, about which we know something, in terms of an entity about which we know almost nothing, consciousness? And doesn't Marie Sklodowska Curie tell us, "Physics deals with things, not people?" Using such and such equipment, making such and such a measurement, I get such and such a number. Who I am has nothing to do with this finding. Or does it? Am I sleepwalking? Or am I one of those poor souls without the critical power to save himself from pathological science? Under such circumstances any claim to have "measured" something falls flat until it can be checked out with one's fellows. Checked how? Morton White reminds us how the community applies its tests of credibility, and in this connection quotes analyses by Chauncey Wright, Josiah Royce, and Charles Sanders Peirce.

Parmenides of Elea (c. 515 B.C.-450 B.C.) may tell us "What is... is identical with the thought that recognizes it." We, however, steer clear of the issues connected with "consciousness." The line between the conscious and the unconscious begins to fade in our day as computers evolve and develop - as mathematics has - level upon level upon level of logical structure. We may someday have to enlarge the scope of what we mean by a "who." This granted, we continue to accept - as an essential part of the concept of it from bit - Follesdal's guideline, "Meaning is the joint product of all the evidence that is available to those who communicate." What shall we say of a view of existence that appears, if not anthropomorphic in its use of the word "who", still overly centered on life and consciousness? It would seem more reasonable to dismiss for the present the semantic overtones of "who" and explore and exploit the insights to be won from the phrases, "communication" and "communication employed to establish meaning."

Follesdal's statement supplies not an answer, but the doorway to new questions. For example, man has not yet learned how to communicate with an ant. When he does, will the questions put to the world around by the ant and the answers that he elicits contribute their share, too, to the establishment of meaning? As another issue associated with communication, we have yet to learn how to draw the line between a communication network that is closed, or parochial, and one that is open. And how to use that difference to distinguish between reality and poker - or another game - so intense as to appear more real than reality. No term in Follesdal's statement poses greater challenge to reflection than "communication", a descriptor of a domain of investigation that enlarges in sophistication with each passing year."

"We present here a short summary of a formal approach which aims at the modelling of emergent processes in terms of a recursive algebra combining aspects of topos theory with (more classical) criteria for self-organizing "chaotic" phenomena. This approach deals mainly with very general problems of emergence, claiming its universality with respect to all worldly processes."

"This paper presents a synchronization-based, multi-process computational model of anticipatory systems called the Phase Web. It describes a self-organizing paradigm that explicitly recognizes and exploits the existence of a boundary between inside and outside, accepts and exploits intentionality, and uses explicit self-reference to describe, e.g., autopoiesis. The model explicitly connects computation to a discrete Clifford algebraic formalization that is in turn extended into homology and co-homology, wherein the recursive nature of objects and boundaries becomes apparent and itself subject to hierarchical recursion. Topsy, a computer program embodying the Phase Web, is currently being readied for release.

Keywords. Process, hierarchy, co-exclusion, co-occurrence, synchronization, system, autopoiesis, conservation, invariant, anticipatory, homology, co-homology, twisted isomorphism, phase web paradigm, Topsy, reductionism, emergence.
Anticipatory systems (Rosen, 1985) display a number of properties that, together, differentiate them strongly from other kinds of systems:

• They possess parts that interact locally to form a coherently behaving whole.

• The way in which these parts interact differs widely from system to system in detail, yet wholes with very different parts seem nevertheless to resemble each other qua their very wholeness.

• It is impossible to ignore the fact that such systems are situated in a surrounding environment.
Indeed, their interaction with their environment is so integral to what they are and do that it makes their very situatedness a defining characteristic.

• A critical behavior shared by these wholes is the ability to anticipate changes in their surrounding environment and react in a way that (hopefully) ensures their continuing existence, i.e., autopoiesis.

Attempting to get a handle on anticipatory systems computationally can mean different things to different people."

Distributed Computation as Hierarchy

"Also related is Pratt’s work with Chu spaces and automata [Pratt95]. Automata are a traditional way to view computation, and state and prove formal properties, although their very sequentiality is problematic when dealing with distributed computations.
We have shown how Pratt’s original insight, taken in a different direction, leads to a radically new way to describe distributed computations. The current function-oriented focus on strings of events is replaced by a new kind of hierarchical abstraction that is squarely based on the phenomena peculiar to concurrent systems. The result is a model of concurrent computation where the duality of state and action, the interplay of atomicity and hierarchy, and causality and sequence can be treated in an integrated and mathematically powerful framework. This same framework can also describe self-organizing and self-reflecting computations. Equally important is the fact that this strongly hierarchical approach can tell us how to reduce macroscopic algorithms to quantum-mechanical circuits, and how to design and produce micro-assembled quantum devices. It also offers a way to understand such extremely complex things as DNA-directed assembly and living systems, and in general, to connect computing to contemporary physical mathematics in a way that mathematical logic never can."

Asynchronous Cellular Automata for Pomsets:

"In a distributed system, some events may occur concurrently, meaning that they may occur in any order or simultaneously, or even that their executions may overlap. This is the case in particular when two events use independent sources. On the other hand, some events may causally depend on each other. For instance, the receiving of a message must follow its sending. Therefore, a distributed behavior may be abstracted as a pomset, that is, a set of events together with a partial order which describes causal dependencies of events and with a labeling function. In this paper, we mainly deal with pomsets without auto-concurrency: concurrent events must have different labels."
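The pomset idea above is easy to make concrete. Below is a minimal Python sketch, assuming a toy two-channel message exchange; the event names, labels, and helper functions are illustrative inventions, not taken from the cited paper.

```python
from itertools import product

# A pomset: a set of events, a partial order (causal dependencies), and labels.
# Toy scenario: two independent send/receive pairs.
events = {"s1", "r1", "s2", "r2"}
labels = {"s1": "send-a", "r1": "recv-a", "s2": "send-b", "r2": "recv-b"}
order = {("s1", "r1"), ("s2", "r2")}  # each receive causally follows its send

def transitive_closure(pairs):
    """Close the dependency relation under transitivity."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

causal = transitive_closure(order)

def concurrent(x, y):
    """Two events are concurrent iff neither causally precedes the other."""
    return x != y and (x, y) not in causal and (y, x) not in causal

# No auto-concurrency: concurrent events must carry different labels.
assert all(labels[x] != labels[y] for x in events for y in events if concurrent(x, y))

print(concurrent("s1", "s2"))  # True: events on independent channels
print(concurrent("s1", "r1"))  # False: a receive is ordered after its send
```

Concurrent events may be executed in any order (or overlapped) without changing the behavior; only the causal pairs constrain scheduling.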

"Thus, while ordinary disc...rete models of reality rely heavily on the language-processor distinction, SCSPL incurs no such debt. For example, cellular automaton models typically distinguish between a spatial array, the informational objects existing therein, and the distributed set of temporal state-transition rules by which the array and its contents are regulated. In contrast, SCSPL regards language and processor as aspects of an underlying infocognitive unity. By conspansive (ectomorphism-endomorphism) duality, SCSPL objects contain space and time in as real a sense as that in which spacetime contains the objects, resulting in a partial identification of space, time and matter. SCSPL is more than a reflexive programming language endowed with the capacity for computational self-execution; it is a protocomputational entity capable of telic recursion, and thus of generating its own informational and syntactic structure and dynamics."

"Are there general laws that govern the outlines of the evolution of any biosphere? Stuart Kauffman thinks so. Why is the universe complex? We really don't know, says Dr. Kauffman, but thinks his ideas at least represent some very good early science. He believes this is a universe so driven toward increased diversity that he proposes a Fourth Law of Thermodynamics: "The diversity of ways of making a living that organisms can achieve, tends to increase over time." This also may apply, he believes, to the formation of geological bodies, the evolution of galaxies -- the universe.

While a deep understanding of his ideas is grounded in statistical mechanics and a grasp of Darwin’s pre-adaptation (or Gould’s exaptation), suffice it to say the biosphere is creative, doing things we cannot see ahead of time. Innovation is real in the universe, says Dr. Kauffman, changing the way the universe unfolds. Our story, he thinks, is one of confronting something that is forever becoming. What it will end up being, however, we can never say ahead of time.

This drive toward complexity, Dr. Kauffman believes, is also true in technological evolution and in the econosphere. Dr. Kauffman is the founding scientist of Bios Group which uses complexity science to solve complex business problems. Businesses too, he says, are busy making their livings in a world that they co-construct. We cannot say ahead of time what the next adaptation -- or technological revolution -- will be, but businesses, like organisms, can learn to adapt.

It is utterly profound, Dr. Kauffman says, that we do not deduce our lives. We live them. We cannot predict. But we do adapt. What at first may seem terrifying -- that we cannot predict -- is really what life has been doing on this planet for the past 4.8 billion years.

The revolution? Humans finally confronting deep creativity.

Stuart Kauffman started with wondering about autonomous agents. That led him to what he thinks may well be a definition of life, but at least has allowed him to specify an autonomous agent that can be constructed and tested in the next 20 to 30 years.

At that point, since we will be unable to predict, we will finally, truly confront creativity. It’s then that humans -- autonomous agents all -- will most need new stories. We will need them to make sense out of the universe we will be co-creating."

"Regarding Stuart Kauffman's "Investigations:"

In essence the book ascribes to life in general two characteristics that are not characteristic of anything in the physical sciences. A living thing is (1) an "autonomous agent" that is a member of a group that is (2) always seeking, by some sort of adaptation, to move into a next "adjacent possible" niche.

Because members of a species are always working their ways into the "adjacent possibles," speciation occurs and complexity arises and increases at an ever increasing rate. In the physical world, there may be complexity, but the degree of complexity is constant. The natural laws of thermodynamics (so far!) dictate a predetermined constancy. While a physicist or chemist will usually know what is going to happen next, a biologist generally wonders what the critter will do next - because it is an autonomous agent.

Repeatedly Kauffman brings in sociology and economics because those are not only analogies of moving into adjacent possibles (new markets, for example) by autonomous agents (us), but are in fact examples of it - after all, the human brain and its desires and propensities are a part of "life," aren't they? And therefore they ought to obey the rules of the "fourth law", whatever it might turn out to be.

While chemicals react in average sorts of ways - are "main stream," people are always looking for an "edge" - and that edge is a boundary into the next adjacent possible. Similarly ALL other life-forms are looking for an edge - survival of the fittest.

Whenever the physical environment changes, there are newly adapted life-forms that see new edges and move into an adjacent possible adding a new life-form, which, with the original, now makes the general [universal] biology more complex.

Kauffman uses the word "adaptation" a bit more generally than do most of his biologist peers. To him mutation is a form of adaptation. It is more individual and rare, but in the long run it is an adaptation that allows moving permanently into a new adjacent possible.

Meanwhile the reshufflings of "on's" and "off's" of operons is a more general and more immediate sort of adaptation - but not permanent. Nevertheless, he even speaks of the quirkiness of the interrelations between the operons due to transcriptional, translational, and statistical mechanical errors. These, spread over all the members of a species, promote even more diversity (complexity) of behavior - leading to new "edges" and more movement into adjacent possibles."
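The combinatorial flavor of the "adjacent possible" can be sketched in a few lines. This is a deliberately crude toy, not Kauffman's model: items are sets, the adjacent possible is everything one combination-step away, and each step the system absorbs its whole frontier.

```python
from itertools import combinations

# Toy "adjacent possible": the actual is a set of items; the adjacent
# possible is every new item reachable by combining two existing items.
# All rules here are illustrative assumptions, not Kauffman's definitions.

def adjacent_possible(actual):
    """Everything one combination-step away from the current actual."""
    return {frozenset((a, b)) for a, b in combinations(actual, 2)} - actual

actual = {frozenset(("A",)), frozenset(("B",)), frozenset(("C",))}
sizes = []
for step in range(3):
    frontier = adjacent_possible(actual)
    sizes.append(len(frontier))
    actual |= frontier  # move into the adjacent possible

print(sizes)  # → [3, 12, 138]
```

Each move into the adjacent possible enlarges the next frontier faster than the last, which is the explosive growth of diversity the review describes.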

Entropy, optropy, and the fourth law of thermodynamics:

A review of From Complexity to Life: On the Emergence of Life and Meaning:

"In the introductory chapter by physicist Paul Davies, a short summary to the topics to be discussed throughout the text is provided. Particular attention is given to introducing what Davies calls the “emergentist worldview”. According to Davies “[t]he emergentist worldview seems to present us with a two fold task requiring the collaboration between the natural sciences, philosophy, and theology. The first is about the causal structure of our world… The second question relates to meaning: How does a sense of meaning emerge from a universe of inanimate matter subject to blind and purposeless forces.”

NASA explains how the experiment was conducted:

"The newly discovered microbe, strain GFAJ-1, is a member of a common group of bacteria, the Gammaproteobacteria. In the laboratory, the researchers successfully grew microbes from the lake on a diet that was very lean on phosphorus, but included generous helpings of arsenic. When researchers removed the phosphorus and replaced it with arsenic the microbes continued to grow. Subsequent analyses indicated that the arsenic was being used to produce the building blocks of new GFAJ-1 cells."

Isn't "information content" involved in part of our modern understanding of thermodynamics somehow? Order and disorder seems to also be involved in our understanding of "energy": the ability to perform work.

"Richard Feynman knew there is a difference between the two meanings of entropy. He discussed thermodynamic entropy in the section called "Entropy" of his
Lectures on Physics
published in 1963 (7), using physical units, joules per degree, and over a dozen equations (vol I section 44-6). He discussed the second meaning of entropy in a different section titled "Order and entropy" (vol I section 46-5) as follows:

So we now have to talk about what we mean by disorder and what we mean by order. Suppose we divide the space into little volume elements. If we have black and white molecules, how many ways could we distribute them among the volume elements so that white is on one side and black is on the other? On the other hand, how many ways could we distribute them with no restriction on which goes where? Clearly, there are many more ways to arrange them in the latter case. We measure "disorder" by the number of ways that the insides can be arranged, so that from the outside it looks the same. The logarithm of that number of ways is the entropy. The number of ways in the separated case is less, so the entropy is less, or the "disorder" is less."
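Feynman's counting argument can be run directly. A minimal sketch, assuming N white and N black molecules distributed over 2N volume elements; the toy value of N is an arbitrary choice for illustration.

```python
from math import comb, log

# Feynman's point: "disorder" is the number of microscopic arrangements
# that look the same from outside, and entropy is its logarithm.
N = 10  # N white and N black molecules in 2N volume elements

# Separated: whites confined to the left half, blacks to the right.
W_separated = comb(N, N) * comb(N, N)  # exactly 1 arrangement
# Unrestricted: whites may occupy any N of the 2N cells.
W_mixed = comb(2 * N, N)

S_separated = log(W_separated)  # entropy = log(number of ways)
S_mixed = log(W_mixed)

print(W_separated, W_mixed)   # → 1 184756
print(S_separated < S_mixed)  # fewer ways in the separated case → True
```

The separated configuration admits vastly fewer arrangements, so its entropy (the log of that count) is smaller, exactly as the quoted passage says.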

"By tracking the particle's motion using a video camera and then using image-analysis software to identify when the particle had rotated against the field; the researchers were able to raise the metaphorical barrier behind it by inverting the field's phase. In this way they could gradually raise the potential of the particle even though they had not imparted any energy to it directly.

Of course, there's no violation of the laws of thermodynamics here, as the energy needed to run all the macroscopic devices far, far outstrips the microscopic gains in electric potential. That said, the microscopic gains are a real breakthrough - on the nanoscale, the researchers had tapped into a full quarter of the information's energy content, by far the most ever accessed in one experiment, and the first real practical demonstration of the energy-information equivalence."

"In 1929, Leo Szilard invented a feedback protocol in which a hypothetical intelligence called Maxwell's demon pumps heat from an isothermal environment and transduces it to work. After an intense controversy that lasted over eighty years, it was finally clarified that the demon's role does not contradict the second law of thermodynamics, implying that we can convert information to free energy in principle. Nevertheless, experimental demonstration of this information-to-energy conversion has been elusive. Here, we demonstrate that a nonequilibrium feedback manipulation of a Brownian particle based on information about its location achieves a Szilard-type information-energy conversion. Under real-time feedback control, the particle climbs up a spiral-stairs-like potential exerted by an electric field and obtains free energy larger than the amount of work performed on it. This enables us to verify the generalized Jarzynski equality, or a new fundamental principle of "information-heat engine" which converts information to energy by feedback control."
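The scale of this "information-to-energy" conversion is easy to estimate. One bit of information about the particle is worth at most k_B T ln 2 of extractable work (the Szilard/Landauer bound); the room-temperature value below shows why the quoted article calls the microscopic gains tiny.

```python
from math import log

# Maximum work extractable from one bit of information at temperature T:
# w_max = k_B * T * ln(2)  (the Szilard / Landauer bound)
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact since the 2019 SI)
T = 300.0           # room temperature, K

w_max = k_B * T * log(2)         # joules per bit
print(f"{w_max:.3e} J per bit")  # on the order of 3e-21 J: minuscule
```

Running the macroscopic feedback apparatus costs many orders of magnitude more energy than this, which is why the experiment demonstrates a principle rather than a power source.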

"Energy, matter, and information equivalence

Shannon's efforts to find a way to quantify the information contained in, for example, an e-mail message, led him unexpectedly to a formula with the same form as Boltzmann's. Bekenstein summarizes that

"Thermodynamic entropy and Shannon entropy are conceptually equivalent: the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information one would need to implement any particular arrangement..." of matter and energy. The only salient difference between the thermodynamic entropy of physics and the Shannon's entropy of information is in the units of measure; the former is expressed in units of energy divided by temperature, the latter in essentially dimensionless "bits" of information, and so the difference is merely a matter of convention.

The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary."
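Bekenstein's "merely a matter of convention" remark corresponds to a one-line unit conversion. A minimal sketch; the example distribution is an arbitrary choice for illustration.

```python
from math import log2, log

# Shannon entropy in bits, and the same quantity in thermodynamic units:
# the conversion factor between the two conventions is k_B * ln(2).
k_B = 1.380649e-23  # Boltzmann constant, J/K

def shannon_bits(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

H = shannon_bits([0.5, 0.25, 0.25])  # 1.5 bits
S = H * k_B * log(2)                 # the same entropy in J/K
print(H, S)
```

The formulas are identical in form; only the choice of logarithm base and the factor of k_B distinguish the information-theoretic from the thermodynamic convention.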

"Ask anybody what the physical world is made of, and you are likely to be told "matter and energy."
Indeed, a current trend, initiated by John A. Wheeler of Princeton University, is to regard the physical world as made of information, with energy and matter as incidentals."

"I argued that a large fraction of practicing physicists, especially those in condensed matter physics, do not buy into this idea of a theory of everything, simply based on emergent phenomena that we observe in condensed matter physics."

"Abstract: In 1972, P.W. Anderson suggested that ‘More is Different’, meaning that complex physical systems may exhibit behavior that cannot be understood only in terms of the laws governing their microscopic constituents. We strengthen this claim by proving that many macroscopic observable properties of a simple class of physical systems (the infinite periodic Ising lattice) cannot in general be derived from a microscopic description. This provides evidence that emergent behavior occurs in such systems, and indicates that even if a ‘theory of everything’ governing all microscopic interactions were discovered, the understanding of macroscopic order is likely to require additional insights." (More Really Is Different)

Challenges in theoretical (quantum) physics?

"Dreams of a final theory" What theory describes the fundamental constituents of matter and their interactions? (What computational model is realized in nature?) Weinberg

"More is different" What emergent collective phenomena can arise in condensed matter? (What is the potential complexity of quantum many-body systems?) Anderson

"How come the quantum?" Why do the rules of quantum mechanics apply to Nature? (Is everything information?) Wheeler

Is There a Final Theory of Everything? (Interview with Steven Weinberg)

Part 1:

Part 2:

More is Different
Broken symmetry and the nature of the hierarchical structure of science:

A reading is given of Curie's Principle, that the symmetry of a cause is always preserved in its effects. The truth of the principle is demonstrated and its importance, under the proposed reading, is defended.

"As far as I see, all a priori statements in physics have their origin in symmetry." (Weyl, Symmetry, p. 126).

Symmetry, order, entropy and information

"Conditions of applicability of the laws established for thermodynamic entropy do not necessarily fit the entropy defined for information. Therefore, one must handle carefully the informational conclusions derived by mathematical analogies from the laws that hold for thermodynamic entropy.

Entropy, and the arrow of its change, are closely related to the arrows of the change of symmetry and of orderliness. Symmetry and order are interpreted in different ways in statistical thermodynamics, in symmetrology, and in evolution; and their relation to each other is also equivocal. Evolution means something quite different in statistical physics and in philosophical terms. Which of the different interpretations can be transferred to the description of information?

Entropy, introduced by Shannon on a mathematical analogy borrowed from thermodynamics, is a means to characterise information. One is looking for a possibly most general information theory. Generality of the sought theory can be qualified by its applicability to all (or at least most) kinds of information.

However, I express doubts whether entropy is a property that can characterise all kinds of information. Entropy plays an important role in information theory. This concept has been borrowed from physics, more precisely from thermodynamics, and applied to information by certain formal analogies. Several authors, having contributed to the FIS discussion and published papers in the periodical Entropy, emphasized the differences that also exist, in contrast to the analogies. Since the relations of entropy - as applied in information theory - to symmetry are taken from its physical origin, it is worth taking a glance at the ambiguous meaning of this term in physics in its relation to order and symmetry, respectively."

"Functional decomposition refers broadly to the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed (i.e., recomposed) from those parts by function composition. In general, this process of decomposition is undertaken either for the purpose of gaining insight into the identity of the constituent components (which may reflect individual physical processes of interest, for example), or for the purpose of obtaining a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity (i.e., independence or non-interaction).
Inevitability of hierarchy and modularity

There are many compelling arguments regarding the prevalence and necessity of hierarchy/modularity in nature (Koestler 1973). Simon (1996) points out that among evolving systems, only those that can manage to obtain and then reuse stable subassemblies (modules) are likely to be able to search through the fitness landscape with a reasonably quick pace; thus, Simon submits that "among possible complex forms, hierarchies are the ones that have the time to evolve." This line of thinking has led to the even stronger claim that although "we do not know what forms of life have evolved on other planets in the universe, we can safely assume that 'wherever there is life, it must be hierarchically organized'" (Koestler 1967). This would be a fortunate state of affairs since the existence of simple and isolable subsystems is thought to be a precondition for successful science (Fodor 1983). In any case, experience certainly seems to indicate that much of the world possesses hierarchical structure.

It has been proposed that perception itself is a process of hierarchical decomposition (Leyton 1992), and that phenomena which are not essentially hierarchical in nature may not even be "theoretically intelligible" to the human mind (McGinn 1994, Simon 1996). In Simon's words,

The fact then that many complex systems have a nearly decomposable, hierarchic structure is a major facilitating factor enabling us to understand, describe, and even "see" such systems and their parts. Or perhaps the proposition should be put the other way round. If there are important systems in the world that are complex without being hierarchic, they may to a considerable extent escape our observation and understanding. Analysis of their behavior would involve such detailed knowledge and calculations of the interactions of their elementary parts that it would be beyond our capacities of memory or computation."
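The decompose-then-recompose idea at the top of this passage is concisely expressible in code. A minimal sketch; the pipeline stages and the `compose` helper are illustrative inventions, not from the quoted sources.

```python
# Functional decomposition in miniature: a global function rebuilt exactly
# by composing independent (modular) parts, each studiable in isolation.

def compose(*fs):
    """Right-to-left function composition: compose(f, g)(x) == f(g(x))."""
    def composed(x):
        for f in reversed(fs):
            x = f(x)
        return x
    return composed

# Constituent processes (the modules).
normalize = lambda s: s.strip().lower()
tokenize = lambda s: s.split()
count = lambda tokens: len(tokens)

# The recomposed global function is exactly the pipeline of its parts.
word_count = compose(count, tokenize, normalize)
print(word_count("  More is Different  "))  # → 3
```

The compressed representation is only possible because the stages do not interact except through their interface values, which is the modularity condition the passage describes.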

Shannon-Boltzmann-Darwin: Redefining Information Pt. 1

"A scientifically adequate theory of semiotic processes must ultimately be founded on a theory of information that can unify the physical, biological, cognitive, and computational uses of the concept. Unfortunately, no such unification exists, and more importantly, the causal status of informational content remains ambiguous as a result. Lacking this grounding, semiotic theories have tended to be predominantly phenomenological taxonomies rather than dynamical explanations of the representational processes of natural systems. This paper argues that the problem of information that prevents the development of a scientific semiotic theory is the necessity of analyzing it as a negative relationship: defined with respect to absence. This is cryptically implicit in concepts of design and function in biology, acknowledged in psychological and philosophical accounts of intentionality and content, and is explicitly formulated in the mathematical theory of communication (aka “information theory”). Beginning from the base established by Claude Shannon, which otherwise ignores issues of content, reference, and evaluation, this two part essay explores its relationship to two other higher-order theories that are also explicitly based on an analysis of absence: Boltzmann’s theory of thermodynamic entropy (in Part 1) and Darwin’s theory of natural selection (in Part 2). This comparison demonstrates that these theories are both formally homologous and hierarchically interdependent. Their synthesis into a general theory of entropy and information provides the necessary grounding for theories of function and semiosis."

Part 1:

Part 2:

"The main idea that “asymmetry is the foundation of information”, as can be seen in the title, might be derived from John Collier’s statement that “information originates in symmetry breaking” and “causation is the transfer of information”, which appeared as article titles of two papers cited in this book. Interested readers may directly read these two papers, available at John’s homepage [2]."

Information as Asymmetry:

Book Review: Once Before Time

"String theory has the greatest support but has also proved highly frustrating. Its strength is in the elegant simplicity of the idea. All the particles in physics, including the graviton (the hypothetical particle that carries gravitational force), are represented by different modes of a single elementary object, the string. The attractive image of vibrating strings, however, hides horrendously complex mathematics, the need for nine or 10 dimensions, equations with billions of solutions (any of which could match reality) and a total absence of predictions that could be tested by experiment to add weight to the theory.

Loop quantum gravity is less well developed but takes a very different approach. In this theory it is not only matter that can be split down to atoms. Space itself, even with no matter present, is atomic. Some of the properties of these atoms of space can best be described mathematically using an extended, one-dimensional loop, hence the term "loop quantum gravity." Once more the math is ferocious, with space constructed from a latticework weave of one-dimensional components; like string theory, loop quantum gravity has yet to make a useful prediction that can be tested.

Although both theories attempt to explain the nature of space, time and the universe, they have emerged from totally different directions. String theory was developed by combining the way particles and forces are described, making use of the powerful influence that symmetry holds in basic physics and regarding space-time as a given. Loop quantum gravity stems from an analysis of the geometry of the universe, building everything, including space-time itself, from scratch.

These two theories, string and loop quantum gravity, produce fundamentally different views of the universe's origins. The existing theory about the big bang (and about black holes) involves singularities—points of collapse where standard physical measures like density become infinite, causing the equations to break down. String theory offers mechanisms to get around the singularity problem of the big bang: Physicists Paul Steinhardt and Neil Turok have proposed an "ekpyrotic" model, where two universes floating in multidimensional space collide to produce a new beginning for the universe. But string theory can still produce unmanageable infinities.

The biggest benefit of loop quantum gravity is that it doesn't involve these singularities and infinities. It predicts a quantum effect where the gravitational force becomes repulsive in the conditions around the big bang, producing a "big bounce" before a singularity can form. If this were the case, it would be possible in principle for some information to pass through this bounce from a previous incarnation of the universe. Loop quantum gravity offers the tantalizing possibility of a prehistory of everything.

Perhaps the best-known title in the popular science genre is Stephen Hawking's "A Brief History of Time." It is often said to be a book that many have started to read but few have finished. By comparison with "Once Before Time," Mr. Hawking's book is lightweight bedtime reading. There is no doubt that Mr. Bojowald will leave many casualties by the roadside—but those who persevere will find undoubted insights into one possible explanation of the universe at its most fundamental and will experience the work of top-level science as close to first-hand as is humanly possible for a nonscientist."

"Edward Witten is the most influential string theorists in the world, is now doing research into gravity that is decidedly non-string, and very similar to LQG.

here's a link

his research program is titled "Three-Dimensional Gravity Revisited"

and here's a discussion

Peter Woit writes "If one wants to interpret this new work in light of the the LQG/string theory wars, it’s worth noting that the technique used here, reexpressing gravity in terms of gauge theory variables and hoping to quantize in these variables instead of using strings, is one of the central ideas in the LQG program for quantizing 3+1d gravity."

Reflections on the fate of spacetime (Ed Witten):
"A system of Ising spins on the lattice indicated by blue dots is equivalent to another spin system on the "dual" lattice indicated by the red crosses. In string theory, analogous dualities of an underlying two-dimensional field theory result in dualities of spacetimes."

"In Section 1, using the ideas of the past two chapters, I will present the radical but necessary idea that self and reality are belief systems. Then, in Section 2, I will place this concept in the context of the theory of hypersets and situation semantics, giving for the first time a formal model of the universe in which mind and reality reciprocally contain one another. This "universal network" model extends the concept of the dual network, and explains how the cognitive equation might actually be considered as a universal equation."

"In the first of three articles, we review the philosophical foundations of an approach to quantum gravity based on a principle of representation-theoretic duality and a vaguely Kantian-Buddist perspective on the nature of physical reality which I have called 'relative realism'. Central to this is a novel answer to the Plato's cave problem in which both the world outside the cave and the 'set of possible shadow patterns' in the cave have equal status. We explain the notion of constructions and 'co'constructions in this context and how quantum groups arise naturally as a microcosm for the unification of quantum theory and gravity. More generally, reality is 'created' by choices made and forgotten that constrain our thinking much as mathematical structures have a reality created by a choice of axioms, but the possible choices are not arbitary and are themselves elements of a higher-level of reality. In this way the factual 'hardness' of science is not lost while at the same time the observer is an equal partner in the process."

The Tessellattice of Mother-Space as a Source and Generator of Matter and Physical Laws:

"Real physical space is derived from a mathematical space constructed as a tessellation lattice of primary balls, or superparticles. Mathematical characteristics, such as distance, surface and volume, generate in the fractal tessellation lattice the basic physical notions (mass, particle, the particle’s de Broglie wavelength and so on) and the corresponding fundamental physical laws. Submicroscopic mechanics developed in the tessellattice easily results in the conventional quantum mechanical formalism at a larger atomic scale and Newton’s gravitational law at a macroscopic scale.
Time is thus not a primary parameter, and the physical universe has no beginning: time is just related to ordered existence, not to existence itself. The topological space does not require any fundamental difference between reversible and steady-state phenomena, nor between reversible and irreversible processes. Rather, relations simply apply to non-linearly distributed topologies, and from rough to finest topologies."

The Cosmic Organism Theory

"We present the cosmic organism theory in which all visible and invisible matter has different cosmic genetic expressions. The cosmic gene includes codes for the object structure and the space structure. The cosmic digital code for the object structure consists of full object (1, 2, and 3 for particle, string, and membrane, respectively) and empty object (0) as anti de Sitter space (AdS). The tessellation lattice of empty objects is tessellattice. The decomposition of a full object in tessellattice results in the AdS/CFT (conformal field theory) duality. The digital code for the object structure accounts for the AdS/CFT duality, the dS/bulk duality, and gravity. The digital code for the space structure consists of 1 and 0 for attachment space and detachment space, respectively. Attachment space attaches to object permanently at zero speed or reversibly at the speed of light. Detachment space detaches from the object irreversibly at the speed of light. The combination of attachment space and detachment space results in miscible space, binary lattice space or binary partition space. Miscible space represents special relativity. Binary lattice space consists of multiple quantized units of attachment space separated from one another by detachment space. Binary lattice space corresponds to the nilpotent universal computational rewrite system (NUCRS) by Diaz and Rowlands. The gauge force fields and wavefunction are in binary lattice space. With tessellattice and binary lattice space, 11D brane is reducing to 4D particle surrounded by gravity and the gauge force fields. The cosmic dimension varies due to different speeds of light in different dimensional space-times and the increase of mass."

"Majid illustrates his idea with the notion of a self-dual bicrossproduct Hopf algebra, (although he doesn't believe that this specific mathematical structure is a candidate for a theory of everything). To understand what this is, we first need to understand what a Hopf algebra is."

The Duality of the Universe

"It is proposed that the universe is an instance of a mathematical structure which possesses a dual structure, and that this dual structure is the collection of all possible knowledge of the physical universe. In turn, the physical universe is then the dual space of the latter."

"Duality principles thus come in two common varieties, one transposing spatial relations and objects, and one transposing objects or spatial relations with mappings, functions, operations or processes. The first is called space-object (or S-O, or S<-->O) duality; the second, time-space (or T-S/O, or T<-->S/O) duality. In either case, the central feature is a transposition of element and a (spatial or temporal) relation of elements. Together, these dualities add up to the concept of triality, which represents the universal possibility of consistently permuting the attributes time, space and object with respect to various structures. From this, we may extract a third kind of duality: ST-O duality. In this kind of duality, associated with something called conspansive duality, objects can be “dualized” to spatiotemporal transducers, and the physical universe internally “simulated” by its material contents.

M=R, MU and hology are all at least partially based on duality.
In other words, telesis is a kind of “pre-spacetime” from which time and space, cognition and information, state-transitional syntax and state, have not yet separately emerged. Once bound in a primitive infocognitive form that drives emergence by generating “relievable stress” between its generalized spatial and temporal components - i.e., between state and state-transition syntax – telesis continues to be refined into new infocognitive configurations, i.e. new states and new arrangements of state-transition syntax, in order to relieve the stress between syntax and state through telic recursion (which it can never fully do, owing to the contingencies inevitably resulting from independent telic recursion on the parts of localized subsystems)."

Why should cognition and cosmology be related? Why not? After all, they share something in common: reality itself. This is also interesting from the perspective of implementing "strong" artificial intelligence.

What is a quantum group?

"The term "Quantum Group" is due to V. Drinfeld and it refers to special Hopf algebras, which are the non-trivial deformations ("quantizations") of the enveloping Hopf algebras of semisimple Lie algebras or of the algebras of regular functions on the corresponding algebraic groups. These objects first appeared in physics, namely in the theory of quantum integrable systems, in the 1980s, and were later formalized independently by Vladimir Drinfeld and Michio Jimbo."

"A quantum group is in the first place a remarkably nice object called a Hopf algebra, the axioms for which are so elegant that they were written down in the 1940s well before truly representative examples emerged from physics in the 1980s. So let us start with these elegant axioms, but with the caveat that it’s the modern examples and their further structure that really make the subject what it is. A Hopf algebra H obeys the following axioms:"
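The excerpt breaks off just before the axioms themselves. For reference, here is the standard textbook list (a summary in conventional notation, not the quoted author's exact wording): a Hopf algebra H over a field k carries a product m, unit η, coproduct Δ, counit ε, and antipode S, subject to:

```latex
% (H, m, \eta) is an associative unital algebra;
% (H, \Delta, \epsilon) is a coassociative counital coalgebra:
(\Delta \otimes \mathrm{id}) \circ \Delta = (\mathrm{id} \otimes \Delta) \circ \Delta,
\qquad
(\epsilon \otimes \mathrm{id}) \circ \Delta = \mathrm{id} = (\mathrm{id} \otimes \epsilon) \circ \Delta;
% \Delta and \epsilon are algebra homomorphisms (compatibility of the two structures);
% the antipode S : H \to H satisfies
m \circ (S \otimes \mathrm{id}) \circ \Delta
  = m \circ (\mathrm{id} \otimes S) \circ \Delta
  = \eta \circ \epsilon .
```

Self-duality, the property Majid's bicrossproduct construction is after, is visible here: the axioms are symmetric under swapping (m, η) with (Δ, ε).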

"Our core hypothesis here is that the abstract structures corresponding to free will, reflective consciousness and phenomenal self are effectively modeled using the mathematics of hypersets. As reviewed in [5] (or less technically in [6]), these are sets that allow circular membership structures."
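To make "circular membership" concrete, here is a small Python sketch (names and sets invented for illustration) that models a hyperset as a directed membership graph and detects the circularity that ordinary well-founded sets forbid, e.g. the canonical hyperset Omega = {Omega}:

```python
# A hyperset (non-well-founded set) modeled as a directed graph of
# membership: each name maps to the set of its members. Circular
# membership, forbidden for ordinary sets, is allowed here.
def has_circular_membership(membership, start):
    """Return True if `start` can reach itself along the membership relation."""
    stack, seen = list(membership.get(start, ())), set()
    while stack:
        node = stack.pop()
        if node == start:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(membership.get(node, ()))
    return False

# Omega = {Omega}: a "set" whose only member is itself.
omega = {"Omega": {"Omega"}}
# A well-founded set for contrast: X = {a, Y} with Y = {b}.
wf = {"X": {"a", "Y"}, "Y": {"b"}}
```

Here `has_circular_membership(omega, "Omega")` is True while `has_circular_membership(wf, "X")` is False; the first structure violates the axiom of foundation, which is exactly what hyperset theory (Aczel's anti-foundation axiom) licenses.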

"Quantum theory changed the assumptions about the relation between observer and observed but retained Newton's view of space and time. General relativity changed the latter but not the former. So, Lee Smolin said, in the year 2000, of the search for the unifying theory of quantum gravity.
No one can have access to total knowledge about events in the universe. So, we cannot always say whether a thing is true or false, as Aristotle's classical logic assumes. New systems of logic, acknowledging only partial information, dependent on the observer's situation, reflect the nature of society. One of these systems, topos theory, was found, by Fotini Markopoulou-Kalamara, to suit cosmology."

"So topos, or cosmological, logic is the right logic for understanding the human world."

"General relativity is in essence a dynamic on causal networks: it tells you how a causal network at one time (plus some extra information) gives rise to a different, related causal network at a subsequent time.

Finally let's reflect on what Smolin (see Three Roads to Quantum Gravity) calls the "strong holographic principle." His reasoning for this principle is subtle and involves the Bekenstein bound and related results, which state that all the information about the interior of some physical region may actually be thought of as being contained on the surface of that region. (He explains this better than I could, so I'll just refer you to his book.)

What the principle says is: a la Nietzsche, there are only surfaces. Re-read Nietzsche's Twilight of the Idols and you'll see that he presaged quantum gravity here, in a similar way to how he presaged quantum theory proper in his vision (in The Will to Power) of the universe as composed of a dancing swarm of discrete interacting quanta. Kant posited phenomena and noumena; Nietzsche saw only phenomena. Smolin also. Smolin views the universe as a collection of surfaces, each one defined as a relationship among other surfaces. Put in words like this, it sounds mystical and fuzzy, but there's math to back it up -- the words just hint at the mathematical reality.

But is each of these Smolin surfaces definitively known? No. Each one is probabilistically known. And if each of these surfaces is to be thought of as a relationship between other surfaces, then this means each of these surfaces is most directly modeled as a hyperset (see my prior blog posts on these mathematical constructs). (This is not how Smolin models things mathematically, but I doubt he'd be opposed, as he's used equally recondite math structures such as topoi.) So these surfaces should be modeled as probabilistic hypersets -- aka infinite-order probability distributions."

"The Super-Copernican Principle: Just as Copernicus displaced geocentricity with heliocentricity, showing by extension that no particular place in the universe is special and thereby repudiating “here-centeredness”, the Super-Copernican Principle says that no particular point in time is special, repudiating “now-centeredness”. Essentially, this means that where observer-participation functions retroactively, the participatory burden is effectively distributed throughout time. So although the “bit-size” of the universe is too great to have been completely generated by the observer-participants who have thus far existed, future generations of observer-participants, possibly representing modes of observer-participation other than that associated with human observation, have been and are now weighing in from the future. (The relevance of this principle to the Participatory Anthropic Principle is self-evident.)"

"THIRD CLUE The super-Copernican principle. This principle rejects now-centeredness in any account of existence as firmly as Copernicus repudiated here-centeredness. It repudiates most of all any tacit adoption of now-centeredness in assessing observer-participants and their number. What is an observer-participant? One who operates an observing device and participates in the making of meaning, meaning in the sense of Føllesdal, "Meaning is the joint product of all the evidence that is available." The investigator slices a rock and photographs the evidence for the heavy nucleus that arrived in the cosmic radiation of a billion years ago. Before he can communicate his findings, however, an asteroid atomizes his laboratory, his records, his rocks, and him. No contribution to meaning! Or at least no contribution then. A forensic investigation of sufficient detail and wit to reconstruct the evidence of the arrival of that nucleus is difficult to imagine. What about the famous tree that fell in the forest with no one around? It leaves a fallout of physical evidence so near at hand and so rich that a team of up-to-date investigators can establish what happened beyond all doubt. Their findings contribute to the establishment of meaning. "Measurements and observations," it has been said, "cannot be fundamental notions in a theory which seeks to discuss the early universe when neither existed." On this view the past has a status beyond all questions of observer-participancy. It from bit offers us a different vision: "reality is theory"; "the past has no evidence except as it is recorded in the present."

"Eternity is a very long time, especially towards the end." - Woody Allen

Intelligent universe: AI, ET, and the emerging mind of the cosmos By James Gardner

"Extraterrestrial Politics
By: Michael A. G. Michaud

The search for extraterrestrial intelligence may be the first step toward involving the human species in cosmic politics. SETI would perform an ancient and respectable political function: the gathering of intelligence about other political entities. It would be essential in establishing the receiving end of the political communications process. And it may enable intelligent beings to give politics a higher purpose.

The Relevant Universe

Each society has its own conception of the relevant universe. Such a shared conception may be evolving among informed humans around the globe. Our recently acquired ability to see the Earth from an extraterrestrial perspective has encouraged us to see our world as a closed, bounded system, and our ecology as a thin layer of minerals, liquid, and gas between the lifeless bulk of the Earth and the cold and dark of space. We have perceived the finiteness of our familiar environment, the ultimate limits to human growth on Earth. Our conception of ourselves as a species has sharpened; we now recognize our common origins and — as long as we all remain on this planet — our common destiny. We are coming to the conclusion that many problems must be solved at the species level, through cooperative action. Joint action on larger scales, for larger purposes, may be crucial for the long-term success of a species whose political subdivisions are armed with increasingly powerful technologies.

These perceptions may be driving the slow emergence of an ethos of the Earth. That ethos, in turn, may be one of the foundations of the conception of the species as a political entity — a global polis, as the ancient Greeks called the political society of the city-state. But this conception of the Earth and its human cargo sets us off from the cosmos, like a sealed spaceship flying through nothingness; it makes the external environment of the Earth seem irrelevant. This conception of the Earth as a closed world means that, for all practical purposes, our relevant universe is as closed and geocentric as the astronomical universe was before Copernicus.
Michael A. G. Michaud is Deputy Director of the Office of International Security Policy, a branch of the U.S. State Department in Washington. He was born in 1938, and received Bachelor's and Master's degrees from the University of California at Los Angeles in political science. He has been with the State Department ever since. His distinguished foreign service career spans a wide range of diplomatic and administrative assignments in Europe, Asia and Australia as well as in the United States. He is author of over 30 publications dealing with the societal consequences of SETI and spaceflight, as exemplified by: "The Consequences of Contact" (American Institute of Aeronautics and Astronautics Student Journal, Winter 1977-78), "Spaceflight and Immortality" (Life Extension Magazine, July/August 1977), and "Interstellar Negotiation", (Foreign Service Journal, December 1972). This same theme is also carried through to his memberships in organizations like the L-5 Society, The British Interplanetary Society, The World Future Society and the Sierra Club. He is also a member of the International Institute for Strategic Studies, The National Space Institute, The Royal Central Asian Society and the American Foreign Service Association. His unique combination of wide diplomatic experience, interest in outer space and communicative abilities make him a frequent invited participant on panel discussions related to the social aspects of space exploration."

"Michaud points to the limits of our technology as well as to SETI searches limited in their coverage. … He highlights the complexities, difficulties, and disappointments that go with trying to establish a code of conduct for the legal aspects of encountering aliens. … This is a timely book; there is not a dull word in it. Recommended." (P. Chapman-Rietschi, The Observatory, Vol. 127 (1200), October, 2007)

Contact with Alien Civilizations:

Mathematics 220A is the first quarter of a three-quarter introduction to mathematical logic. The first part of 220A will be an introduction to first order logic: semantics, formal deduction, and relations between the two. This part of the course will culminate with the fundamental Completeness, Compactness, and Skolem-Löwenheim Theorems.

"In mathematics and logic, a higher-order logic is distinguished from first-order logic in a number of ways. One of these is the type of variables appearing in quantifications; in first-order logic, roughly speaking, it is forbidden to quantify over predicates. See second-order logic for systems in which this is permitted. Another way in which higher-order logic differs from first-order logic is in the constructions allowed in the underlying type theory. A higher-order predicate is a predicate that takes one or more other predicates as arguments. In general, a higher-order predicate of order n takes one or more predicates of order n − 1 as arguments, where n > 1. A similar remark holds for higher-order functions.

Higher-order logic, abbreviated as HOL, is also commonly used to mean higher-order simple predicate logic; that is, the underlying type theory is simple, not polymorphic or dependent."
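The order hierarchy is easy to see in a language with first-class functions. A minimal Python sketch (predicate names invented): an order-1 predicate ranges over individuals, while a higher-order predicate takes predicates themselves as arguments, which is precisely what first-order quantification forbids.

```python
def is_even(n):
    """Order-1: a predicate over individuals."""
    return n % 2 == 0

def holds_for_all(pred, domain):
    """Order-2: a predicate whose first argument is itself a predicate,
    i.e. it quantifies over a predicate."""
    return all(pred(x) for x in domain)

def every_predicate_holds(preds, domain):
    """Order-3 flavor: quantifies over a whole family of predicates."""
    return all(holds_for_all(p, domain) for p in preds)
```

So `holds_for_all(is_even, [2, 4, 6])` is True while `holds_for_all(is_even, [2, 3])` is False; the statement being checked ("for all x in the domain, P(x)") treats P as a variable, which is the step up from first-order to higher-order logic.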

Girard, J.-Y. : Truth, modality and intersubjectivity, manuscript, January 2007.

Locus Solum: From the rules of logic to the logic of rules by Jean-Yves Girard, 2000.
The monograph below has been conceived as the project of giving reasonable foundations to logic, on the largest possible grounds, but not with the notorious reductionist connotation usually attached to "foundations". Locus Solum would like to be the common playground of logic, independent of systems, syntaxes, not to speak of ideologies. But wideness of scope is nothing here but the reward of sharpness of concern: I investigate the multiple aspects of a single artifact, the design. Designs are not that kind of syntax-versus-semantics whores that one can reshape according to the humour of the day: one cannot tamper with them, period. But what one can achieve with them, once their main properties (separation, associativity, stability) have been understood, is out of proportion with their seemingly banal definition.

Introduction to Ludics:

"Thomas Scheff defines intersubjectivity as "the sharing of subjective states by two or more individuals."

The term is used in three ways:

First, in its weakest sense intersubjectivity refers to agreement. There is intersubjectivity between people if they agree on a given set of meanings or a definition of the situation.

Second, and more subtly intersubjectivity refers to the "common-sense," shared meanings constructed by people in their interactions with each other and used as an everyday resource to interpret the meaning of elements of social and cultural life. If people share common sense, then they share a definition of the situation.

Third, the term has been used to refer to shared (or partially shared) divergences of meaning. Self-presentation, lying, practical jokes, and social emotions, for example, all entail not a shared definition of the situation, but partially shared divergences of meaning. Someone who is telling a lie is engaged in an intersubjective act because they are working with two different definitions of the situation. Lying is thus genuinely inter-subjective (in the sense of operating between two subjective definitions of reality).

Intersubjectivity emphasizes that shared cognition and consensus is essential in the shaping of our ideas and relations. Language, quintessentially, is viewed as communal rather than private. Therefore, it is problematic to view the individual as partaking in a private world, one which has a meaning defined apart from any other subjects."

"If a man finds that his nature tends or is disposed to one of these extremes, he should turn back and improve, so as to walk in the way of good people, which is the right way. The right way is the mean in each group of dispositions common to humanity; namely, that disposition which is equally distant from the two extremes in its class, not being nearer to the one than to the other."
— Maimonides

So we discover the truths that unite us, rather than the half-truths which divide us.

War in the Age of Intelligent Machines:

"Humanistic psychology theory developed in the 1950s, partially as a response to the abundance of military conflict that characterized the first half of the twentieth century.
The core belief of the humanistic psychology approach is that humans are inherently good, and that a belief in and respect for humanity is important for mental health.

Adjacent to this core belief are several other important tenets of the humanistic psychology perspective. The first is that the present is more important and more significant than either the past or the future. Therefore, it is more useful to explore what one can do in the here and now, rather than to make decisions based on what may happen in the future, or to constantly dwell on past experiences.

Second is the idea that every individual must take personal responsibility for their actions or lack of actions. In the humanistic approach to psychology, this sense of personal responsibility is crucial for good mental health. The third belief is the idea that everyone is inherently worthy of basic human respect and dignity, regardless of factors such as race, ethnicity, appearance, wealth, or actions.

The goal of the humanistic psychology approach is that by following these basic ideas, one can achieve happiness through personal growth. Both self-understanding and self-improvement are necessary for happiness. In addition, understanding that every individual has both personal and social responsibility fosters not only personal growth, but community and social involvement as well.

Abraham Maslow, an early proponent of humanistic psychology, believed that these ideas were in direct opposition to Freud’s theory of psychoanalysis. One of the core beliefs of Freud’s theories is that human drives and desires are subconscious and hidden, whereas to Maslow, humans are consciously aware of the motivations that drive their behavior. Essentially, Maslow believed, psychoanalysis accepts that most aspects of life are outside of individual control, whereas the humanistic approach was based in free will.

The humanistic approach to psychology has some strong points that make it a particularly useful theory in the modern world. This approach emphasizes the idea that everyone can contribute to improving their own mental and physical health, in whichever way is most useful to them. In addition, these theories take into account environmental factors in shaping personal experiences. The concept of all humans having the same rights to respect and dignity is also useful, in that it encourages racial and ethnic tolerance, as well as reinforcing the individual’s belief in their own self-worth.

Critics point out that the humanistic perspective has few standardized treatment approaches. This effect is largely the result of the importance that free will plays in humanist psychology, which makes devising standardized treatments extremely complicated. Another problem is that humanist theory is not a suitable treatment for people with organic mental illnesses such as schizophrenia or bipolar disorder, preventing it from being regarded as an all-encompassing school of thought.

Despite these criticisms, elements of humanistic psychology have been incorporated into many styles of therapy. The humanist approach, with its emphasis on personal responsibility, social responsibility, and social tolerance, makes it a useful basis for positive personal and social change. Therefore, even though this psychological theory may be inadequate in some respects, it provides some simple and practical tools for self-examination."

"David P. Ellerman (born March 14, 1943) is a philosopher and author who works in the fields of economics and political economy, social theory and philosophy, and in mathematics. He has written extensively on workplace democracy based on a modern treatment of the labor theory of property and the theory of inalienable rights as rights based on de facto inalienable capacities."

Author of "Helping People Help Themselves: From the World Bank to an Alternative Philosophy of Development Assistance (Evolving Values for a Capitalist World)"

The Volitional and Cognitive Sides of Helping Theory:

Helping People Help Themselves: Toward a Theory of Autonomy-Compatible Help:

Theory Summary
How can an outside party ("helper") assist those who are undertaking autonomous activities (the "doers") without overriding or undercutting their autonomy?

Help must start from the present situation of the doers.
Helpers must see the situation through the eyes of the doers.
Help cannot be imposed on the doers, as that directly violates their autonomy.
Nor can doers receive help as a benevolent gift, as that creates dependency.
Doers must be in the driver's seat.

"One major application of helping theory is to the problems of knowledge-based development assistance. The standard approach is that the helper, a knowledge-based development agency, has the 'answers' and disseminates them to the doers. This corresponds to the standard teacher-centered pedagogy. The alternative under helping theory is the learner-centered approach. The teacher plays the role of midwife, catalyst, and facilitator, building learning capacity in the learner-doers so that they can learn from any source, including their own experience."

"On the autonomy-respecting path [learner-centered approach], the helper helps the doers to help themselves by supplying not 'motivation' but perhaps resources to enable the doers to do what they were already own-motivated to do. On the knowledge side, the autonomy-respecting helper supplies not answers but helps build learning capacity (e.g., by enabling access to unbiased information and to hearing all sides of an argument) to enable the doers to learn from whatever source in a self-directed learning process."

"The shifting balance theory of evolution was first laid out by Wright in two papers in 1931 and 1932. In these papers, Wright argued that the optimum situation for evolutionary advance, in the sense of the population becoming progressively better adapted to its environment, would be when a large population was 'divided and subdivided into partially isolated local races of small size'. Wright illustrated these views through his concept of a 'field of gene combinations graded with respect to adaptive value', nowadays more commonly known as a fitness or adaptive landscape, or a surface of selective value."

"Shifting balance theory is a theory by Sewall Wright used to model evolution using both drift and selection. Wright assumes that when populations are very small, genetic drift and epistasis are very important. Shifting balance theory:

1. A species is divided into subpopulations small enough that random genetic drift is important
2. Drift in one subpopulation carries it past an adaptive valley to a new higher adaptive peak
3. This subpopulation has increased fitness. It makes more offspring, which spread the new adaptation to other populations through increased gene flow"
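The drift-across-a-valley step can be sketched in a few lines of Python. This is a toy Wright-Fisher model, not Wright's own formulation: the fitness values, deme size, and generation count are all invented, with an underdominant landscape (heterozygote less fit than either homozygote) so that p = 1 and p = 0 are two adaptive peaks separated by a valley, and a small deme size so drift can carry a deme across.

```python
import random

def next_gen(p, n, w_aa, w_ab, w_bb, rng):
    """One Wright-Fisher generation: deterministic selection on genotype
    fitnesses, then binomial drift in a deme of n diploids (2n gene copies)."""
    q = 1.0 - p
    w_bar = p * p * w_aa + 2 * p * q * w_ab + q * q * w_bb
    p_sel = (p * p * w_aa + p * q * w_ab) / w_bar  # frequency after selection
    draws = sum(1 for _ in range(2 * n) if rng.random() < p_sel)
    return draws / (2 * n)

def simulate_deme(p0, n, gens, rng, w_aa=1.0, w_ab=0.8, w_bb=1.2):
    """Underdominant landscape: peaks at p=1 (lower, w=1.0) and p=0
    (higher, w=1.2), valley at the heterozygote (w=0.8)."""
    p = p0
    for _ in range(gens):
        p = next_gen(p, n, w_aa, w_ab, w_bb, rng)
    return p

rng = random.Random(42)
# Thirty small demes all start near the LOWER peak (p0 = 0.9):
final = [simulate_deme(0.9, n=20, gens=200, rng=rng) for _ in range(30)]
crossed = sum(1 for p in final if p < 0.5)  # demes that drifted to the higher peak
```

With selection alone every deme would climb back to the nearby lower peak; in small demes, drift occasionally carries one past the valley (step 2 of the list above), after which selection pulls it up to the higher peak.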

"A new logic of partitions has been developed that is dual to ordinary logic when the latter is interpreted as the logic of subsets rather than the logic of propositions. For a finite universe, the logic of subsets gave rise to finite probability theory by assigning to each subset its relative cardinality as a Laplacian probability. The analogous development for the dual logic of partitions gives rise to a notion of logical entropy that is related in a precise manner to Claude Shannon’s entropy. In this manner, the new logic of partitions provides a logical-conceptual foundation for information-theoretic entropy or information content."
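The logical entropy the passage describes has a direct numerical form: for a partition pi of an n-element universe, h(pi) = 1 - sum over blocks B of (|B|/n)^2, the probability that two independent draws land in different blocks. A short Python sketch comparing it with Shannon entropy:

```python
from math import log2

def logical_entropy(partition):
    """Ellerman's logical entropy h(pi) = 1 - sum((|B|/n)^2): the chance
    that two independent random draws fall in distinct blocks."""
    n = sum(len(block) for block in partition)
    return 1.0 - sum((len(block) / n) ** 2 for block in partition)

def shannon_entropy(partition):
    """Shannon entropy of the same block probabilities, in bits."""
    n = sum(len(block) for block in partition)
    return -sum((len(b) / n) * log2(len(b) / n) for b in partition)

# Partition of {1,2,3,4} into two equal blocks:
pi = [{1, 2}, {3, 4}]
```

For this `pi`, `logical_entropy(pi)` is 0.5 and `shannon_entropy(pi)` is 1.0 bit; for the indiscrete partition `[{1, 2, 3, 4}]` both are 0. The "precise relation" the quote mentions: both are averages over the block probabilities p, with logical entropy averaging (1 - p) and Shannon entropy averaging log2(1/p).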

The Internet Economy as a Complex System:

Multiagent Systems Algorithmic, Game-Theoretic, and Logical Foundations
Imagine a personal software agent engaging in electronic commerce on your behalf. Say the task of this agent is to track goods available for sale in various online venues over time, and to purchase some of them on your behalf for an attractive price. In order to be successful, your agent will need to embody your preferences for products, your budget, and in general your knowledge about the environment in which it will operate. Moreover, the agent will need to embody your knowledge of other similar agents with which it will interact (e.g., agents who might compete with it in an auction, or agents representing store owners)—including their own preferences and knowledge. A collection of such agents forms a multiagent system. The goal of this book is to bring under one roof a variety of ideas and techniques that provide foundations for modeling, reasoning about, and building multiagent systems.

Somewhat strangely for a book that purports to be rigorous, we will not give a precise definition of a multiagent system. The reason is that many competing, mutually inconsistent answers have been offered in the past. Indeed, even the seemingly simpler question—What is a (single) agent?—has resisted a definitive answer. For our purposes, the following loose definition will suffice: Multiagent systems are those systems that include multiple autonomous entities with either diverging information or diverging interests, or both.
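The loose definition above (autonomous entities with diverging information or interests) is easy to instantiate. A minimal Python sketch, not from the book: buyer agents with private valuations (diverging information) competing for one item (diverging interests) in a sealed-bid second-price auction, a mechanism where bidding one's true valuation is optimal. All names and numbers are invented.

```python
class BuyerAgent:
    """An autonomous entity with private information (its valuation)."""
    def __init__(self, name, valuation, budget):
        self.name, self.valuation, self.budget = name, valuation, budget

    def bid(self):
        # Truthful bidding capped by budget; truthfulness is a dominant
        # strategy in a second-price auction.
        return min(self.valuation, self.budget)

def second_price_auction(agents):
    """Highest bidder wins but pays the second-highest bid."""
    bids = sorted(((a.bid(), a) for a in agents), key=lambda t: -t[0])
    winner, price = bids[0][1], bids[1][0]
    return winner.name, price

agents = [BuyerAgent("A", 120, 100),
          BuyerAgent("B", 80, 200),
          BuyerAgent("C", 95, 95)]
```

Here `second_price_auction(agents)` returns `("A", 95)`: A bids its budget cap of 100, wins, and pays C's bid of 95. The collection of such interacting agents is exactly a multiagent system in the book's sense.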

Substantive Assumptions and the Existence of Universal Knowledge Structures: A Logical Perspective

From Lawvere to Brandenburger-Keisler: interactive forms of diagonalization and self-reference

Samson Abramsky
(joint work with Jonathan Zvesper)
Oxford University Computing Laboratory

Diagonal arguments lie at the root of many fundamental phenomena in the foundations of logic and mathematics - and maybe physics too!

- In a classic application of category theory to foundations from 1969, Lawvere extracted a simple general argument in an abstract setting which lies at the basis of a remarkable range of results. This clearly exposes the role of a diagonal, i.e. a copying operation, in all these results.

- We shall briefly review some of this material. Our presentation will be based in part on the very nice expository paper by Noson Yanofsky.

- We will describe some current work in progress with Jonathan Zvesper on the ‘Brandenburger-Keisler paradox’ — an interactive (2-person) version of Russell’s paradox. We shall show how this can be reduced to a reformulation of Lawvere’s argument. We also give an extension to multi-agent situations.

- This raises some interesting issues: e.g. stronger assumptions are needed for the interactive version. Aim: uncovering some foundational aspects of the structure of interaction.
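Lawvere's diagonal argument, in its simplest (Cantor) form, can be run directly in Python. The sketch below (maps invented for illustration) shows that no map f from a set A to the predicates on A can be surjective: the diagonal predicate g(a) = not f(a)(a) disagrees with every f(a) at the point a itself.

```python
def diagonal_escapee(f):
    """Given f mapping each element to a predicate, build a predicate g
    that no f(a) can equal: g disagrees with f(a) on the input a."""
    return lambda a: not f(a)(a)

A = range(4)
# An arbitrary attempt at listing all predicates on A:
f = {0: lambda x: x % 2 == 0,
     1: lambda x: x > 1,
     2: lambda x: False,
     3: lambda x: True}.get

g = diagonal_escapee(f)
# By construction, g differs from every f(a) at the point a:
mismatches = [g(a) != f(a)(a) for a in A]
```

Every entry of `mismatches` is True, whatever f we start from; the "copying operation" Lawvere isolates is the double use of `a` in `f(a)(a)`. The Brandenburger-Keisler paradox is a two-agent elaboration of this same pattern, with "Ann believes that Bob believes that..." in place of set membership.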

Agreeing to Disagree

Interactive Epistemology I: Knowledge

Abstract. Formal Interactive Epistemology deals with the logic of knowledge and belief when there is more than one agent or "player." One is interested not only in each person's knowledge about substantive matters, but also in his knowledge about the others' knowledge. This paper examines two parallel approaches to the subject.
Key words: Epistemology, interactive epistemology, knowledge, common knowledge, semantic, syntactic, model
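The semantic approach Aumann uses is the partition model, which is small enough to sketch in Python (states and partitions here are an invented toy): an agent knows event E at state w exactly when the agent's information cell containing w is a subset of E.

```python
from itertools import chain

def knows(partition, event):
    """K(E): the set of states at which an agent with this information
    partition knows the event E (cells wholly contained in E)."""
    return set(chain.from_iterable(cell for cell in partition if cell <= event))

states = {1, 2, 3, 4}
alice = [{1, 2}, {3, 4}]    # Alice cannot tell 1 from 2, nor 3 from 4
bob   = [{1}, {2, 3}, {4}]  # Bob's information is finer at states 1 and 4

E = {1, 2, 3}
k_alice = knows(alice, E)          # states where Alice knows E
k_bob = knows(bob, E)              # states where Bob knows E
everyone_knows = k_alice & k_bob   # first level of interactive knowledge
```

Here `k_alice` is {1, 2}, `k_bob` is {1, 2, 3}, and `everyone_knows` is {1, 2}. Iterating the operator ("everyone knows that everyone knows...") and taking the limit yields common knowledge, the central notion of the paper.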

Interactive Epistemology II: Probability

On the Role Of Interactive Epistemology in Multiagent Planning

This paper focuses on the foundational role of interactive epistemology in the problem of generating plans for rational agents in multiagent settings. Interactive epistemology deals with the logic of knowledge and belief when there is more than one agent. In multiagent settings, we are interested in not only the agent’s knowledge of the state of the world, but also its belief over the other agents’ beliefs and their beliefs over others’. We adopt a probabilistic approach for formalizing the epistemology. This paper attempts to answer the question of why we should study the interactive epistemology of agents within the context of multiagent planning. In doing so, it motivates the need for a more detailed examination of the epistemological foundations of multiagent planning. We conclude this paper with a framework for multiagent planning that explicitly constructs and reasons with nested belief structures.

Logic, Interaction and Collective Agency

Questions of collective agency and collective intentionality are not new in philosophy, but in recent years they have increasingly been investigated using logical methods. The work of Bacharach [2], Sugden [6] and Tuomela [7], in particular, elegantly combines philosophical relevancy with a strong inclination towards logical modeling of decision and action.

Logical and algebraic approaches are also more and more present in foundations of decision theory (cf. the work of Jeffrey [5] and Bradley [3]), epistemic game theory (Brandenburger [4] and van Benthem [8]), and, of course, dynamic epistemic logic [9,10]. These are three areas where questions of group agency and interaction also naturally arise. In these fields, however, group agency is rather studied from the perspective of individual decision makers, with the aim of understanding how the former stems from mutual expectations of the latter.

Little is known, however, about the relationship between these views on interaction and collective agency. On the one hand, decision- and game-theoretical approaches build on resolutely individualistic premises, and study the logic of individual belief and preferences in interactive decision making. On the other hand, the aforementioned philosophical theories take a more collectivist standpoint, focusing on how decision makers engage in “group”, “team” or “we-mode” of reasoning, which is often claimed to involve irreducibly collective attitudes.

This course will introduce these bodies of literature in order to clarify their relationship, both from a logical and a conceptual point of view. It will first cover recent development in the foundations of decision theory and epistemic foundations of equilibrium play in interaction. It will then move to the three philosophical theories of group agency mentioned above and, using logic as a common denominator, try to understand how they relate to decision- and game-theoretical approaches.

The free-energy principle: a unified brain theory?

A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective.

‎"Underneath anger is hurt, but underneath hurt is love."

‎"For my part if I have any creed it is so to live as to preserve and increase the susceptibilities of my nature to noble impulses -- first to observe if any light shine on me, and then faithfully to follow it.
Our religion is where our love is." ~ Correspondence, To Isaiah T. Williams, September 8, 1841 (Henry David Thoreau)

“There is no remedy for love but to love more.”

“I went to the woods because I wanted to live deliberately; I wanted to live deep and suck out all the marrow of life, to put to rout all that was not life, and not, when I had come to die, discover that I had not lived.”

“How vain it is to sit down to write when you have not stood up to live.” ~ Henry David Thoreau

Perhaps "love" is a strange loop, it is the cause and the effect.

“No tower of turtles,” advised William James. Existence is not a globe supported by an elephant, supported by a turtle, supported by yet another turtle, and so on. In other words, no infinite regress. No structure, no plan of organization, no framework of ideas underlaid by yet another structure or level of ideas, underlaid by yet another level, and yet another, ad infinitum, down to bottomless blackness. To endlessness no alternative is evident but a loop, such as: physics gives rise to observer-participancy; observer-participancy gives rise to information; and information gives rise to physics.

Is existence thus built on “insubstantial nothingness”? Rutherford and Bohr made a table no less solid when they told us it was 99.9 percent emptiness. Thomas Mann may exaggerate when he suggests that “we are actually bringing about what seems to be happening to us,” but Leibniz reassures us that “although the whole of this life were said to be nothing but a dream and the physical world nothing but a phantasm, I should call this dream or phantasm real enough if, using reason well, we were never deceived by it.”

"A strange loop is a hierarchy of levels, each of which is linked to at least one other by some type of relationship. A strange loop hierarchy, however, is "tangled" (Hofstadter refers to this as a "heterarchy"), in that there is no well defined highest or lowest level; moving through the levels one eventually returns to the starting point, i.e., the original level. Examples of strange loops that Hofstadter offers include: many of the works of M. C. Escher, the information flow network between DNA and enzymes through protein synthesis and DNA replication, and self-referential Gödelian statements in formal systems.

In I Am a Strange Loop, Hofstadter defines strange loops as follows:

“ And yet when I say "strange loop", I have something else in mind — a less concrete, more elusive notion. What I mean by "strange loop" is — here goes a first stab, anyway — not a physical circuit but an abstract loop in which, in the series of stages that constitute the cycling-around, there is a shift from one level of abstraction (or structure) to another, which feels like an upwards movement in a hierarchy, and yet somehow the successive "upward" shifts turn out to give rise to a closed cycle. That is, despite one's sense of departing ever further from one's origin, one winds up, to one's shock, exactly where one had started out. In short, a strange loop is a paradoxical level-crossing feedback loop. (pp. 101-102)"

Algorithmic Thermodynamics:

Examples of super-recursive algorithms include (Burgin 2005: 132):

limiting recursive functions and limiting partial recursive functions (E.M. Gold)
trial and error predicates (Hilary Putnam)
inductive inference machines (Carl Smith)
inductive Turing machines, which perform computations similar to computations of Turing machines and produce their results after a finite number of steps (Mark Burgin)
limit Turing machines, which perform computations similar to computations of Turing machines but their final results are limits of their intermediate results (Mark Burgin)
trial-and-error machines (Ja. Hintikka and A. Mutanen)
general Turing machines (J. Schmidhuber)
Internet machines (van Leeuwen, J. and Wiedermann, J.)
evolutionary computers, which use DNA to produce the value of a function (Darko Roglic)
fuzzy computation (Jirí Wiedermann)
evolutionary Turing machines (Eugene Eberbach)
Examples of algorithmic schemes include:

Turing machines with arbitrary oracles (Alan Turing)
Transrecursive operators (Borodyanskii and Burgin)
machines that compute with real numbers (L. Blum, F. Cucker, M. Shub, and S. Smale)
neural networks based on real numbers (Hava Siegelmann)
For examples of practical super-recursive algorithms, see the book of Burgin.
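
The flavor of these super-recursive schemes can be conveyed with a tiny sketch (my own toy illustration, not drawn from Burgin): a limiting-recursive or trial-and-error predicate in the sense of Gold and Putnam emits a sequence of revisable guesses, and its "result" is the value the guesses converge to, not any single printed answer.

```python
# A trial-and-error (limiting-recursive) predicate: the machine emits
# guesses g(0), g(1), ...; the computed value is the limit of the guesses.

def guesses(digit, digit_stream, stages):
    """Guess at stage n: 'digit occurs among the first n+1 digits'."""
    seen = False
    out = []
    for n in range(stages):
        if digit_stream(n) == digit:
            seen = True
        out.append(seen)        # later stages may revise earlier guesses
    return out

def one_seventh_digit(n):
    """n-th decimal digit of 1/7 = 0.142857142857..."""
    return int("142857"[n % 6])

# "Does the digit 5 ever occur in the expansion of 1/7?"  The guesses
# start at False and flip to True at stage 4, never to change again:
g = guesses(5, one_seventh_digit, 12)
print(g)    # [False, False, False, False, True, True, ...]
```

Each individual guess is ordinarily computable; what makes the scheme super-recursive is that no stage ever needs to announce that the limit has been reached.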

AI, Granular Computing, and Automata with Structured Memory

Abstract: We examine artificial intelligence (AI) problems with the aid of a granular-computing mathematical model. The model is based on Turing machines and recursive algorithms. Through extension to inductive Turing machines, we add super-recursive algorithms enhanced with structured memory to these settings. We consider such AI problems as machine learning, text recognition and advanced computation in the context of granular computational models. Using the elaborated model, we demonstrate that granulation gives a powerful means for learning and computing. The paper shows that it provides both for increasing the efficiency of computation and for extending the scope of computable, decidable and learnable information.

Key-Words: artificial intelligence, granular computing, structured memory, machine learning, concurrency

1 Introduction

A granule is a commonly understood characteristic of information. It includes items such as classes, clusters, subsets, components, blocks, groups, and intervals. Granular Computing signifies making use of such elements to construct efficient means to deal with huge amounts of data, information and knowledge. In other words, it describes a computational paradigm for complex applications. (Data usually indicates numeric quantities; information may add to that probabilistic qualities; knowledge indicates difficult-to-quantify understanding that may come from human experience.)

The basic notions and principles of granular computing have appeared in a variety of fields, though under different names: granularity in artificial intelligence (AI); information hiding in programming; divide-and-conquer algorithms in theoretical computer science; cluster analysis; interval computing; fuzzy and rough set theory; neutrosophic computing; quotient space theory; belief functions; machine learning; and databases. However, the usual theoretical models of computer science (Turing machines, partial recursive functions, random access machines, neural networks, cellular automata and others) do not provide efficient means for modeling granular computing. Here we suggest using Turing machines with structured memory and inductive Turing machines with structured memory as theoretical models of granular computing. We also explain how to adapt these advanced automata to problems of AI.
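
As a concrete (and deliberately simplistic) illustration of granulation, raw numeric data can be replaced by interval granules, and computation can then proceed over the granules instead of the individual numbers; the function name and interval scheme below are my own.

```python
# Granulation sketch: map each value to the interval granule containing it.

def granulate(values, width):
    """Partition the line into intervals of the given width and group
    each value into the granule (interval) that contains it."""
    granules = {}
    for v in values:
        lo = (v // width) * width
        granules.setdefault((lo, lo + width), []).append(v)
    return granules

data = [0.2, 0.7, 1.1, 1.3, 2.8, 2.9, 3.0]
for interval, members in sorted(granulate(data, 1.0).items()):
    print(interval, members)
# (0.0, 1.0) [0.2, 0.7]
# (1.0, 2.0) [1.1, 1.3]
# (2.0, 3.0) [2.8, 2.9]
# (3.0, 4.0) [3.0]
```

Whatever is computed downstream sees only four granules rather than seven numbers, which is the efficiency gain granular computing trades on.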

Information Granulation and Approximation in a Decision-Theoretic Model of Rough Sets:

Granular Computing: An Introduction

"Granular computing (GrC) is an emerging computing paradigm of information processing. It concerns the processing of complex information entities called information granules, which arise in the process of data abstraction and derivation of knowledge from information. Generally speaking, information granules are collections of entities that usually originate at the numeric level and are arranged together due to their similarity, functional or physical adjacency, indistinguishability, coherency, or the like.

At present, granular computing is more a theoretical perspective than a coherent set of methods or principles. As a theoretical perspective, it encourages an approach to data that recognizes and exploits the knowledge present in data at various levels of resolution or scales. In this sense, it encompasses all methods which provide flexibility and adaptability in the resolution at which knowledge or information is extracted and represented."

‎"The talk introduces a framework of quotient space theory of problem solving. In the theory, a problem (or problem space) is represented as a triplet, including the universe, its structure and attributes. The worlds with different grain si...ze are represented by a set of quotient spaces. The basic characteristics of different grain-size worlds are presented. Based on the model, the computational complexity of hierarchical problem solving is discussed."

Granular Computing based on Rough Sets, Quotient Space Theory, and Belief Functions
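
Of the three formalisms named above, rough sets are the easiest to sketch in a few lines. Granules are the blocks of an equivalence relation on the universe, and any target set is approximated from below (granules wholly inside it) and from above (granules that merely intersect it); the example data here are mine.

```python
# Rough-set lower and upper approximations over a granulated universe.

def approximations(granules, target):
    target = set(target)
    lower, upper = set(), set()
    for g in granules:
        g = set(g)
        if g <= target:
            lower |= g      # elements certainly in the target
        if g & target:
            upper |= g      # elements possibly in the target
    return lower, upper

granules = [{1, 2}, {3, 4}, {5, 6, 7}, {8}]
X = {2, 3, 4, 8}
lo, up = approximations(granules, X)
print(sorted(lo))   # [3, 4, 8]: granules {3,4} and {8} lie inside X
print(sorted(up))   # [1, 2, 3, 4, 8]: {1,2} also intersects X through 2
```

The gap between the two approximations, here {1, 2}, is exactly the knowledge lost at this level of granularity.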

The Information Age/Information Knowledge and the New Economy:

‎"An information economy is where the productivity and competitiveness of units or agents in the economy (be they firms, regions or nations) depend mainly on their capacity to generate, process, and apply efficiently knowledge-based information. It is also described as an economy where information is both the currency and the product."

Services Science, Management, Engineering and Computing

• Services Science
• Service Modeling and Implementation
• Service Delivery, Deployment and Maintenance Service
• Value Chains and Innovation Lifecycle
• Service-Oriented Architecture (SOA) Industry Standards and Solution Stacks
• Service-based Grid/Utility/Autonomic Computing
• Mobile Service Computing
• Services Computing in Mobile Ad-hoc NETworks (MANETs)
• Service Level Agreements (SLAs) Negotiation, Automation and Orchestration
• Service Security, Privacy and Trust
• Quality of Services (QoS) and Cost of Services (CoS)
• Ontology and Semantic Web for Services Computing
• Services Repository and Registry
• Formal Methods for SOA
• Service Discovery
• Services Engineering Practices and Case Studies

Services-Centric Business Models

• Business Service Analysis, Strategy, Design, Development and Deployment
• Service-Oriented Business Consulting Methodology and Utilities
• Intra- or Inter- Enterprise for Business-to-Business Service Control
• Service Revenue Models and Utility Computing, e.g., Fee-for-Transaction and Fee-for-Service
• Service Strategic Alliance and Partners
• Service Network Economic Structures and Effects
• Ontology and Business Service Rules
• Trust and Loyalty in Services-Centric Business Models
• Cultural, Language, Social and Legal Obstacles in Services-Centric Business Models
• Commercialization of Services Computing Technologies
• Industry Service Solution Patterns
• Service Interaction Patterns
• Case Studies in Services-Centric Business Models (e.g., healthcare, financial, aviation, etc)

Business Process Integration and Management

• Mathematical Foundation of Business Process Modeling, Integration and Management
• Business Process Modeling Methodology and Integration Architecture
• Collaborative Business Processes
• Extended Business Collaboration (eBC) Architecture and Solutions
• Business Process-Based Business Transformation and Transition
• Enabling Technologies for Business Process Integration and Management
• Performance Management and Analysis for Business Process Integration and Management
• Security, Privacy and Trust in Business Process Management
• Return On Investment (ROI) of Business Process Integration and Management
• Requirements Analysis of Business Process Integration and Management
• Enterprise Modeling and Application Integration Services, e.g. Enterprise Service Bus
• Monitoring of Services, Process Mining, and Quality of Service
• Case Studies in Business Process Integration and Management
• SOA Tools, Solutions and Services

SOA Tooling Practices and Examples

• Systematic Design Method for SOA Solutions
• SOA based Consulting Services and Design Services
• SOA Delivery Excellence
• Service-Oriented Computing


• Information Theory
• Learning Theory
• Computational Intelligence
• Soft Computing
• Artificial Neural Networks
• Evolutionary Computation
• Fuzzy and Rough Set
• Knowledge Representation and Reasoning
• Pattern Recognition
• Artificial Life
• Model-based Diagnosis
• Human-Computer Interface
• Information Retrieval
• Natural Language Processing
• Speech Understanding and Interaction
• Text Analysis and Text Understanding
• Information Security
• Knowledge Engineering and Management
• Knowledge Grid
• Knowledge discovery & Data mining
• Web Intelligence
• Business Intelligence
• Semantic Web
• Ontology
• Intelligent Agent
• Multi-agent Systems
• Expert and Decision Support Systems
• Artificial Immune Systems
• Bioinformatics and Biological Computation
• Intelligent Control
• Image Processing and Computer Vision Systems
• Multimedia & Signal Processing
• E-Commerce and E-business
• E-Services and E-Learning
• Management Information System
• Geographic Information System

Our point is to be of service, to participate in the exchange of value, and to co-create knowledge by strategically revealing our preferences, which the system uses to design more "socially optimal" rules that consider the compatibility of incentives among interacting agents. If you ask me, "our point" is to continue having unlimited and diverse wants within the bounds of resource constraints.

‎"e-Government (short for electronic government, also known as e-gov, digital government, online government, or connected government) is digital interaction between a government and citizens (G2C), government and businesses (G2B), and between government agencies (G2G). This digital interaction consists of governance, information and communication technology(ICT), business process re-engineering(BPR), and e-citizen at all levels of government (city, state/provence, national, and international)."

"Mobility represents a natural human state, hence mobile commerce (m-commerce) has been hailed as the complement, and in many aspects a natural successor to traditional desk-bound e-commerce. The proliferation of smartphones and wireless networks has brought the dream of seamless mobility and always-on connectivity closer to reality – albeit not without new challenges related to technologies, platforms, systems, applications, and business models to name but a few.

The goal of this special issue is to report frontier research addressing the current status and future prospects of m-commerce and to reflect on business innovation and social transformation through mobility. The issue aims to offer an integrated view of the field by presenting approaches originating from and drawing upon multiple disciplines.

The special issue welcomes theoretical, experimental, or survey-based studies that make a significant novel contribution to the field. Submissions should describe original, previously unpublished research, not currently under review by another conference or journal. Topics include, but are not limited to, the following:
• Theoretical foundations of m-commerce
• Mobile-driven business models
• User behavior and innovation diffusion in m-commerce
• Context-aware and location-aware business
• Mobile communities and social networking
• Strategies, policies, economics and societal implications of m-commerce
• Privacy, security, trust and regulation in m-commerce
• Innovation in horizontal and vertical applications: mHealth; mGovernment; mLearning; mEntertainment; mPayments; mMarketing/Advertising; mSCM/CRM; mParticipation; and others"

NASA Finds Amino Acids on Impossible Meteorite, Improves Chances E.T. Exists:

Cosmic ancestry holds that life is neither the product of supernatural creation, nor is it spontaneously generated through abiogenesis, but that it has always existed in the universe. It claims that the evolutionary progression from simpler to more complex organisms utilises pre-existing genetic information and does not compose this information as it occurs.

According to the theory, higher life forms, including intelligent life, descend ultimately from pre-existing life which was at least as advanced as the descendants. The genetic programs for the evolution of such higher forms may have been delivered to biospheres, such as the Earth's, within viruses or bacteria in the same manner as proposed by other versions of panspermia. The genetic programs may then be installed by known gene transfer mechanisms. Also, according to cosmic ancestry, life initiates Gaian processes that may environmentally alter biospheres.

Cosmic ancestry is opposed to both neo-Darwinism and Intelligent design theories. Its assertions require the universe to be ageless.

To these considerations could be added the ontological argument: life, like the rest of the Universe, exists simply because it can. Since this has presumably always been so, it would follow that both are indeed ageless. As Richard Feynman has said, "Anything that is not forbidden [by physical law] is compulsory".

"D: Sure. In May of 2001 we traveled to Frenchman Mountain here in Nevada to begin a real project looking for a bio-marker, for a possible precursor virus. It was a rather prosaic study, looking for evidence of panspermia. During the course of that initial investigation we came across some anomalous activity in some of our data sets. That anomalous activity was ultimately tracked down to very unusual electromagnetic activity associated with silicon oxides. And we have since tracked that anomalous activity to any silicon oxides present in minerals... to wit, the activity is the presence of an emission of electromagnetic bundles containing information. We are presently attempting to further define the nature of that electromagnetic anomalous activity. But we have in fact determined that the activity is associated with cells within the terrestrial environment... and that they have effects upon cells in our terrestrial environment, up to and including modifying the genetic material of extant cells in our environment.

K: Are you saying living cells?

D: Yes.

K: Uh huh.

D: We have, together with these electromagnetic emissions, that we have defined with relative precision to date, to be specific varieties of what we termed as particles. We had to call them something. They’re bundles of electromagnetic material, confined discrete bundles, that we believe are possibly related to... as far back as the ancient Pavitrakas of the Hindus... subtle matter particles, which could be imparted into our environment and effect changes. Thus far, we have not observed negative changes, meaning, the effects of these subtle matter particles, if you will, have not affected our environment negatively."

In the process of generating diverse "means" to mutually bound "ends", complex systems break the symmetry of subject-predicate and institute the flow of information.

The Nature of Existence:

‎"In Greek mathematics there is a clear distinction between the geometric and algebraic. Overwhelmingly, the Greeks assumed a geometric position wherever possible. Only in the later work of Diophantus do we see algebraic methods of signific...ance. On the other hand, the Babylonians assumed just as definitely, an algebraic viewpoint. They allowed operations that were forbidden in Greek mathematics and even later until the 16th century of our own era. For example, they would freely multiply areas and lengths, demonstrating that the units were of less importance. Their methods of designating unknowns, however, does invoke units. First, mathematical expression was strictly rhetorical, symbolism would not come for another two millenia with Diophantus, and then not significantly until Vieta in the 16th century."

"Not knowing how to deal with this new kind of magnitude, the Pythagoreans refused to consider it a number at all, in effect regarding the diagonal of a square as a numberless quantity. Thus a rift was opened between geometry and arithmetic, and by extension between geometry and algebra. This rift wold persist for the next two thousand years and was to become a major impediment to the further development of mathematics. It was only in the seventeenth century, with the invention of analytic (coordinate) geometry by Rene Descartes, that the divide was finally bridged.

According to the legend, the Pythagoreans were so shaken by the discovery that the square root of 2 was irrational that they vowed to keep it a closely guarded secret, perhaps fearing it might have adverse effects on the populace. But one of them, a person by the name of Hippasus, resolved to reveal the discovery to the world. Enraged by this breach of loyalty, his fellows cast him overboard from the boat they were sailing, and his body rests to this day at the bottom of the Mediterranean."

"The majority of recovered clay tablets date from 1800 to 1600 BC, and cover topics which include fractions, algebra, quadratic and cubic equations and the Pythagorean theorem. The Babylonian tablet YBC 7289 gives an approximation to the square root of 2 accuate to five decimal places."

"In the early 11th century, Ibn al-Haytham (Alhazen) was able to solve by purely algebraic means certain cubic equations, and then to interpret the results geometrically. Subsequently, Omar Khayyám discovered the general method of solving cubic equations by intersecting a parabola with a circle.
Omar Khayyám saw a strong relationship between geometry and algebra, and was moving in the right direction when he helped to close the gap between numerical and geometric algebra with his geometric solution of the general cubic equations, but the decisive step in analytic geometry came later with René Descartes.

Persian mathematician Sharafeddin Tusi (born 1135) did not follow the general development that came through al-Karaji's school of algebra but rather followed Khayyam's application of algebra to geometry. He wrote a treatise on cubic equations, entitled Treatise on Equations, which represents an essential contribution to another algebra which aimed to study curves by means of equations, thus inaugurating the study of algebraic geometry."
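
Khayyám's construction translates directly into modern coordinates. To solve x^3 + a^2*x = a^2*b, intersect the parabola x^2 = a*y with the circle x^2 + y^2 = b*x: substituting y = x^2/a into the circle's equation recovers the cubic. The numeric sketch below (with my own choice of a and b) verifies that the root of the cubic is the x-coordinate of the intersection point.

```python
def khayyam_root(a, b, steps=60):
    """Root of f(x) = x^3 + a^2*x - a^2*b on (0, b), found by bisection
    (f(0) = -a^2*b < 0 and f(b) = b^3 > 0)."""
    f = lambda x: x**3 + a*a*x - a*a*b
    lo, hi = 0.0, b
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

a, b = 2.0, 3.0
x = khayyam_root(a, b)      # root of x^3 + 4x = 12, about 1.7225
y = x * x / a               # lift the root onto the parabola x^2 = a*y

print(x**3 + a*a*x)         # ~12.0 = a^2 * b: x solves the cubic
print(x*x + y*y - b*x)      # ~0.0: the point (x, y) also lies on the circle
```

Algebraically the two checks are equivalent, which is exactly why the geometric intersection solves the algebraic equation.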

“It is ironic that gap junctions connect together neurons and glia, at least transiently, into a sort of reticular syncytium— Golgi’s idea overthrown by Cajal’s demonstration of discrete neurons and chemical synapses. Gap junction assemblies of transiently woven-together neurons have been termed “hyper-neurons” and suggested as a neural correlate of consciousness.”

Von Schweber Living Systems

‎"In recent years gamma synchrony has indeed been shown to derive not from axonal spikes and axonal-dendritic synapses, but from post-synaptic activities of dendrites. Specifically, gamma synchrony/40 Hz is driven by networks of cortical inter-neurons connected by dendro-dendritic “electrotonic” gap junctions, windows between adjacent cells. Groups of neurons connected by gap junctions share a common membrane and fire synchronously, behaving (as Eric Kandel says) “like one giant neuron.” Gap junctions have long been recognized as prevalent and important in embryological brain development, but gradually diminish in number (and presumably importance) as the brain matures. Five years ago gap junctions were seen as irrelevant to cognition and consciousness. However more recently, relatively sparse gap junction networks in adult brain have been appreciated and shown to mediate gamma synchrony/40 Hz.1-11 Such networks are transient, coming and going like the wind (and Hebbian assemblies), as gap junctions form, open, close and reform elsewhere (regulated by intraneuronal activities). Therefore neurons (and glia) fused together by gap junctions form continually varying syncytia, or Hebbian “hyper-neurons” whose common membranes depolarize coherently and may span wide regions of cortex. (The internal cytoplasm of hyper-neurons is also continuous, prompting suggestions they may host brain-wide quantum states.) By virtue of their relation to gamma synchrony, gap junction hyper-neurons may be the long-sought neural correlate of consciousness (NCC)."

"Distinguishing between The Grid and Computing Fabrics

(i) a Computing Fabric denotes an architectural pattern applicable at multiple levels of scale, from global networks to networks on a chip (which are beginning to appear) whereas the Grid is an architecture specifically applied to campus through global networks (a consequence is that a Computing Fabric may be denser than a Grid as Rob noted),

(ii) a Computing Fabric admits to irregularity, where some regions of connected nodes may be very closely coupled and present a strong single system image while other regions may be very loosely coupled letting distribution concerns show through (the Grid by comparison is uniform), and finally

(iii) a Computing Fabric is dynamically reconfigurable and fluid whereas a Grid may be rigid or require periods of downtime for repair, upgrade, or reconfiguration."

A multi-stage roadmap is presented by which to incrementally transform today's static, rigid Enterprise Architecture into a dynamically fluid and fully netcentric architecture that enables automated interoperability without requiring uniformity. A concrete worked example demonstrates the recommendations of the first stage.

‎"What do we gain?

So what do we gain out of computing fabrics? Well, there's an unwritten law that the more memory and the more computing power you can apply to a problem, the better the chance of solving it. Of course, there's also the corollary that all computing problems expand to fit available memory, but we'll ignore that theory for the purpose of this discussion. So, using the fabrics model, massive concurrent processing combined with a huge, tightly-coupled address space will, theoretically, make it possible to solve huge computing problems. For example, the von Schwebers talk of applications such as massive data warehouses and advanced, distributed supply-chain management systems.

But there are challenges to this architectural approach as well. For example, programming for these things may well be a bear. Historically, multi-processor systems have always had a non-linearly degrading performance curve. In other words, if one processor performed at the obvious 100% of single-processor performance, adding a second processor might move the performance to 160% of single-processor performance, and a third might move it only up to 200% of single processor performance.

In part, this is due to the overhead of coordinating all the actions between the processors and in part due to the fact that not all aspects of program execution lend themselves to parallelization. In fact, like in project management where tasks are on a critical path, meaning one task is dependent on a previous task, many programming tasks must be solved linearly throughout much of the algorithm."
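
The degrading curve described above is the shape predicted by Amdahl's law: if a fraction p of the work parallelizes and the remainder is the serial critical path, n processors give a speedup of S(n) = 1 / ((1 - p) + p/n). The choice p = 0.75 below is mine, picked because it reproduces the article's 160% and 200% figures exactly.

```python
# Amdahl's law: speedup on n processors when a fraction p is parallel.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# With p = 0.75, the returns diminish sharply, just as described:
for n in (1, 2, 3, 8, 1000):
    print(n, round(amdahl_speedup(0.75, n), 2))
# 1 -> 1.0, 2 -> 1.6, 3 -> 2.0, 8 -> 2.91, 1000 -> 3.99 (limit 1/(1-p) = 4)
```

No matter how many processors are added, the serial 25% caps the speedup at 4x, which is why the coordination overhead and critical-path effects the passage mentions dominate so quickly.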

"The simplest way to think about it is the next-generation architecture for enterprise servers. Fabric computing combines powerful server capabilities and advanced networking features into a single server structure. ... In the fabric computing view, resources are no longer tied to a single machine. A customer buying a typical server does not know exactly how to configure it or what applications to run. In our systems, you're not locked into a predetermined set of assets. You can reconfigure on the fly without adding software layers that slow everything down. Everything is done on hardware at full speed. Remember, we're not talking about just changing CPU memory. We're talking about changing the network I/O. It reduces a lot of the complexity that customers struggle with. You no longer reconfigure machine by machine. You have complete control of the entire fabric.

If it sounds like grid computing, that's because essentially it is an evolution of it - though it's hard to determine which came first. It turns out the term "fabric computing" dates back to at least 1998, when Erick and Linda von Schweber, founders of the Infomaniacs, published an article in eWEEK using the term, describing it as "a new architecture" that would "erase the distinctions between network and computer" by linking "thousands of processors and storage devices into a single system," according to a 2002 eWEEK article."

‎"I swear by my life and my love of it that I will never live for the sake (end, objective, purpose) of another man, nor ask another man to live for mine. I recognize no obligations toward men except one: to respect their freedom and to take no part in a slave society."

Autoselection, or "selfishness" as Dawkins and others describe it, actually entails the ability to design/adapt the constraints or rules of the game according to one's values, so that outcomes resulting from self-interested agents do not sabotage "external" yet crucially linked network "hypercycles".

"Key Brain Regions for Rule Development

Hypothalamus, Midbrain, and Brain Stem

The organism’s basic physiological needs are represented in deep subcortical structures that are shared with other animals and that have close connections with visceral and endocrine systems. These regions include several nuclei of the hypothalamus and brain stem.

This part of the brain does not have a neat structure that lends itself readily to modeling; consequently, these areas have been largely neglected by recent modelers despite their functional importance to animals, including humans. Yet these deep subcortical areas played key roles in some of the earliest physiologically based neural networks. Kilmer, McCulloch, and Blum (1969), in perhaps the first computer-simulated model of a brain region, placed the organism’s gross modes of behavior (e.g., eating, drinking, sex, exploration, etc.) in the midbrain and brainstem reticular formation. The representations of these behavioral modes, whose structure was suggested by the “poker-chip” anatomy of that brain region, were organized in a mutually inhibiting fashion. A similar idea appears in the selective attention network theory of Grossberg (1975), who placed what he called a sensory-drive heterarchy in the hypothalamus. Different drives in the heterarchy compete for activation, influenced both by connections with the viscera (giving advantage in the competition to those drives that most need to be satisfied) and with the cortex (giving advantage to those drives for which related sensory cues are available).

This idea of a competitive-cooperative network of drives or needs has not yet been verified but seems to have functional utility for behavioral and cognitive modeling. Rule making and learning are partly based on computations regarding what actions might lead to the satisfaction of those needs that survive the competition. Yet the competition among needs is not necessarily winner-take-all, and the best decisions in complex situations are those that go some way toward fulfilling a larger number of needs.

But what do we mean by “needs”? There is considerable behavioral evidence that the word should be expanded beyond the purely physiological ones such as hunger, thirst, sex, and protection. The term should also include the needs for social connections, aesthetic and intellectual stimulation, esteem, and self-fulfillment, for example.
The idea of biological drives and purposes beyond those promoting survival and reproduction goes back at least to the psychologist Abraham Maslow’s notion of the hierarchy of needs, a concept that has been widely misunderstood. The word “hierarchy” has been misinterpreted to mean that satisfaction of the lower-level needs must strictly precede any effort to fulfill higher-level needs, an interpretation Maslow explicitly denied (Maslow, 1968, p. 26). But neural network modeling, based on dynamical systems, allows for a more flexible meaning for the word “hierarchy” (or, as McCulloch and Grossberg have preferred, heterarchy). It is a hierarchy in the sense that there is a competitive-cooperative network with biases (see, e.g., Grossberg & Levine, 1975). This means there tends to be more weight toward the lower-level needs if those are unfulfilled, or if there is too much uncertainty about their anticipated fulfillment (see Figure 1 for a schematic diagram). However, the bias toward lower-level need fulfillment is a form of risk aversion, and there are substantial individual personality differences in risk aversion or risk seeking that can either mitigate or accentuate the hierarchical biases.

But this still leaves some wide-open questions for the biologically based neural modeler or the neuroscientist. We know something (though our knowledge is not yet definitive) about deep subcortical loci for the survival- and reproduction-oriented needs. But are there also deep subcortical loci for the bonding, aesthetic, knowledge, or esteem needs? If there are, it seems likely that these needs are shared with most other mammals. Data such as those of Buijs and Van Eden (2000) suggest that the answer might be yes, at least for the bonding needs. These researchers found a hypothalamic site, in a region called the paraventricular nucleus, for the production of oxytocin, a key hormone for social bonding and for pleasurable aspects of interpersonal interactions (including orgasm, grooming, and possibly massage). Also, recent work reviewed by McClure, Gilzenrat, & Cohen (2006) points to a strong role of the locus coeruleus, a noradrenergic nucleus that is part of the reticular formation, in promoting exploratory behavior.

The deeper subcortical structures of the brain played a strong role in qualitative theories by pioneering behavioral neuroscientists who studied the interplay of instinct, emotion, and reason (e.g., MacLean, 1970; Nauta, 1971; Olds, 1955), and continue to play a strong role in clinical psychiatric observations. Yet these phylogenetically old areas of the subcortex are often neglected by neural modelers, except for some who model classical conditioning (e.g., Brown, Bullock, & Grossberg, 1999; Klopf, 1982). The time is ripe now for a more comprehensive theory of human conduct that will reconnect with some of these pioneering theories from the 1960s and 1970s."

If Maslow's hierarchical pyramid (or cone) is viewed from the top down rather than from the side, it can be read as concentric rings: rather than merely ending with self-actualization, one must in a sense also begin with it. In a heterarchy this would create a strange loop.
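The biased competitive-cooperative network of needs described in the quoted passage can be sketched numerically. The following toy shunting network is only an illustration of the idea: the five-need encoding, the bias vector, and all parameter values are assumptions, not a model taken from Grossberg & Levine (1975).

```python
# Toy sketch of a biased competitive-cooperative ("shunting") network
# over five needs, ordered low -> high (physiological ... self-fulfillment).
# Activations compete via lateral inhibition; a bias vector weights
# lower-level needs more heavily, but the bias is a tilt, not a strict
# ordering.  All names and values here are illustrative assumptions.

def step(x, inputs, bias, dt=0.01, decay=1.0, ceiling=1.0, inhib=2.0):
    """One Euler step of the shunting competitive dynamics."""
    total = sum(x)
    new = []
    for xi, ii, bi in zip(x, inputs, bias):
        excite = (ceiling - xi) * bi * ii       # biased, bounded self-excitation
        inhibit = xi * inhib * (total - xi)     # competition from the other needs
        new.append(xi + dt * (-decay * xi + excite - inhibit))
    return [max(0.0, v) for v in new]

bias = [1.5, 1.2, 1.0, 0.8, 0.6]               # favors lower-level needs
inputs = [0.2, 0.2, 0.2, 0.2, 0.9]             # strong unmet "self-fulfillment" signal

x = [0.1] * 5
for _ in range(2000):
    x = step(x, inputs, bias)

winner = max(range(5), key=lambda i: x[i])
print("dominant need index:", winner)
```

With a weaker high-level input or a steeper bias vector, the lowest-level need dominates instead; the point is that dominance emerges from the competition plus bias, exactly the "heterarchy rather than strict hierarchy" reading above.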

Aspect-oriented quantum monism and the hard problem

Reductive explanation traditionally begins by at best ignoring, and at worst discarding, the mental and phenomenal aspects of the world, subsequently proceeding to reduce what remains - complex high-level physical phenomena - to simpler lower-level physical phenomena. We revise reductive explanation, taking the world as it comes in all its aspects, and immediately proceed to reduce high-level multi-aspect phenomena to low-level multi-aspect phenomena. A reductive, quasi-functional (relational) theory of phenomenal consciousness becomes possible if we accept that the most fundamental element of the world, Planck's quantum of action (h bar), presents not only a physical aspect but also a phenomenal aspect, and is more precisely termed the *quantum of observed interaction*. We first review evidence from quantum physics, e.g., consideration of the measurement problem, that supports the position that quantum theory is *about* phenomenal experience and knowledge. We borrow concepts and terminology from aspect-orientation, the more dynamic and agile outgrowth of object-oriented software design. We also abstract and redeploy a relationship, proposed independently by Formal Concept Analysis and Objectivist epistemology, that holds between a concept and its units. Next we explore the scale or scales, from subatomic to human and beyond, at which the quantum of observed interaction applies. We then argue that advances in psycho-neurobiology, particularly quantum brain theories, e.g., Penrose-Hameroff Orch OR, provide means by which the quantum of observed interaction is modulated by a brain so as to form the rich and varied phenomenal manifold of our conscious experience, including all manner and variety of qualia, the impression of world and of self.
Lastly we present the beginnings of a formal approach to framing the relationship between the physical and phenomenal aspects of the quantum of observed interaction, e.g., a Galois connection between these aspects, between quanta and qualia, and the lattice structure of "quacepts" so constituted.
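The Galois connection invoked here between quanta and qualia is the standard order-theoretic construction used in Formal Concept Analysis; a minimal statement of it (a textbook gloss, not taken from the abstract itself) is:

```latex
% Antitone Galois connection between two posets.
Given posets $(P,\le)$ and $(Q,\le)$, a pair of antitone maps
$f\colon P \to Q$ and $g\colon Q \to P$ forms a Galois connection iff
\[
  q \le f(p) \iff p \le g(q) \qquad \text{for all } p \in P,\; q \in Q.
\]
The composites $g \circ f$ and $f \circ g$ are then closure operators,
and their fixed points form dually isomorphic complete lattices; this is
the kind of lattice structure the proposed ``quacepts'' would inherit.
```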

Key Words:
monism; reductive explanation; hard problem; qualia; aspect-oriented; formal; quacept; concept; quantum of observed interaction; phenomenal manifold; relational; phenomenal consciousness; qualia modulation

Relational Galois Connections:

Interface of global and local semantics in a self-navigating system based on the concept lattice:

If the syntax–semantics interaction is driven by the interface, it also interrupts the interaction in its own right. Because the syntax is verified to be isomorphic to the semantics, the interaction is open to a diagonal argument leading to a contradiction. That is why it is necessary to introduce a particular interface to drive the interaction and make it possible despite the contradiction. In this context we propose a system implementing the syntax–semantics loop by using a concept lattice and a particular weak quantifier. This system is expressed as a self-navigating system which wanders in a two-dimensional space, encounters landmarks, and constructs relationships among the landmarks, to which its movement decisions are referred. The syntax of this system is defined as two-dimensional movement, and the semantics is defined as a concept lattice [B. Ganter, R. Wille, Formal Concept Analysis, Springer, Berlin, 1999] constructed, via a Galois connection, from the binary relation between landmarks and some of their properties. To implement the interface driving and interrupting the interaction between syntax and semantics, we divide the semantics into local and global concept lattices, and introduce a weak quantifier to connect a local lattice with the global one. Because the contradiction results from the diagonal argument, i.e., from using a normal quantifier, the use of the quantifier is restricted depending on the situation so as to avoid contradiction. It is shown through simulation studies that, owing to the role of the weak quantifier, our self-navigating system is both robust and open to emergent properties.
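The concept lattice at the heart of this system can be computed directly from a binary relation with the two Galois derivation operators of Ganter & Wille. The tiny "landmark x property" context below is invented for illustration; only the construction itself is from FCA.

```python
# Minimal FCA sketch: enumerate the formal concepts of a tiny
# "landmark x property" context using the two derivation (Galois)
# operators.  The objects, attributes, and incidence relation are
# invented for illustration.

from itertools import combinations

objects = ["tree", "rock", "pond"]
attributes = ["tall", "wet", "solid"]
incidence = {("tree", "tall"), ("tree", "solid"),
             ("rock", "solid"), ("pond", "wet")}

def up(objs):      # objects -> attributes they all share (the intent)
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def down(attrs):   # attributes -> objects having all of them (the extent)
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

# A formal concept is a pair (extent, intent) closed under both maps.
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        intent = up(set(objs))
        extent = down(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```

Ordering the resulting concepts by inclusion of extents yields the concept lattice; the paper's "local" and "global" lattices would be built this way from sublocations and from the whole space, respectively.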

Observational Heterarchy as Phenomenal Computing (2010)

We propose the notion of phenomenal computing as a dynamical pair consisting of a computing system and the environment in which computation is executed. It is expressed as a formal model of observational heterarchy that inherits robustness against structural crisis. An observational heterarchy consists of two different categories connected by pre-adjoint functors, where inter-category operations are defined as pre-functors. Owing to this attribute of the pre-functor, the model exhibits robust behavior against perpetual structural change.

In general: dynamical wholeness. What is wholeness? In science we usually approach it through the notion of parts and whole, yet that notion can be misleading: starting from parts and whole, it merely looks as if we could comprehend wholeness. A concept is generally defined as the equivalence between Intent and Extent.
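The "equivalence between Intent and Extent" can be stated precisely in the standard FCA notation (again a textbook gloss, not from the quoted text):

```latex
% Formal concept of a context (G, M, I): G objects, M attributes,
% I a subset of G x M (the incidence relation).
For $A \subseteq G$ and $B \subseteq M$ define the derivation operators
\[
  A' = \{\, m \in M \mid \forall g \in A:\ (g,m) \in I \,\}, \qquad
  B' = \{\, g \in G \mid \forall m \in B:\ (g,m) \in I \,\}.
\]
A pair $(A,B)$ is a formal concept iff $A' = B$ and $B' = A$: the
extent $A$ and the intent $B$ determine each other, which is precisely
the equivalence between Intent and Extent.
```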

Chu Spaces, Concept Lattices and Information Systems in nDimensions:

Grothendieck topologies on Chu spaces:

Mediating secure information flow policies:

A Category of Discrete Closure Systems:

A Categorial View at Generalized Concept Lattices: Generalized Chu Spaces

Concrete Groups and Axiomatic Theories II:

The Galois correspondence is in the spirit of Klein’s Erlanger Programm: a structure has a group of transformations, but in a logical sense the group determines the structure. Frankly, I found this sort of surprising at first: theories of structures would seem to be something very general, and groups rather more specific. But I found out recently that the logician Alfred Tarski was also interested in applying Klein’s Erlanger Programm to logic, in his “What are Logical Notions?”

Generalized Kripke Frames:

General Stone Duality:

Category '99 List of Contributed Talks:

Morphisms in Context:

Morphisms constitute a general tool for modelling complex relationships
between mathematical objects in a disciplined fashion. In Formal Concept Analysis (FCA), morphisms can be used for the study of structural properties of knowledge represented in formal contexts, with applications to data transformation and merging. In this paper we present a comprehensive treatment of some of the most important morphisms in FCA and their relationships, including dual bonds, scale measures, infomorphisms, and their respective relations to Galois connections. We summarize our results in a concept lattice that cumulates the relationships among the considered morphisms. The purpose of this work is to lay a foundation for applications of FCA in ontology research and similar areas, where morphisms help formalize the interplay among distributed knowledge bases.

A Partial Order Approach to Decentralized Control:

A category of L-Chu correspondences and Galois functor:

Dualities between Nets and Automata Induced by Schizophrenic Objects:

The so-called synthesis problem for nets, which consists in deciding whether a given graph is isomorphic to the case graph of some net, and then constructing the net, has been solved in the literature for various types of nets, ranging from elementary nets to Petri nets. The common principle for the synthesis is the idea of regions in graphs, representing possible extensions of places in nets. When the synthesis problem has a solution, the set of regions viewed as properties of states provides a set-theoretic representation of the transition system, where transition systems and nets can be viewed respectively as the extensional versus the intensional description of discrete event systems. Namely, a state of a transition system can be represented by the set of properties it satisfies, and an event which is given in extension by a set of transitions between states may also be described intensionally by the set of properties it modifies. The fact that an event is enabled in a state can then be deduced from the representations of the event and the state.
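The extensional/intensional duality described here can be sketched concretely: represent each state by the set of regions (properties) it satisfies, and each event by the regions it exits and enters. The two-state toy transition system below is invented for illustration; only the region idea itself is from the abstract.

```python
# Region sketch for net synthesis: states are represented extensionally
# by the regions they satisfy; events intensionally by the regions they
# exit (preconditions) and enter.  The toy system s0 --a--> s1 --b--> s0
# and its regions are invented for illustration.

states = {"s0", "s1"}
regions = {"r0": {"s0"}, "r1": {"s1"}}     # each region: a set of states

events = {"a": {"exits": {"r0"}, "enters": {"r1"}},
          "b": {"exits": {"r1"}, "enters": {"r0"}}}

def props(state):
    """Extensional view: the set of regions a state satisfies."""
    return {r for r, members in regions.items() if state in members}

def enabled(event, state):
    """An event is enabled iff the state holds every region it exits."""
    return events[event]["exits"] <= props(state)

def fire(event, state_props):
    """Successor-state properties, computed purely from representations."""
    ev = events[event]
    return (state_props - ev["exits"]) | ev["enters"]

print("a enabled in s0:", enabled("a", "s0"))
print("firing a from s0 gives:", sorted(fire("a", props("s0"))))
```

Enabledness and firing are computed entirely from the property-set representations, without consulting the original transition relation, which is exactly the deduction the abstract describes.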