Tuesday, March 1, 2011

Value Propositions, Utility Substitutions, Telentropic Exchange, Anticipatory Design, Scanning Operators, Service Systems, Cognitive Informatics

"Creating a value proposition is part of business strategy. Kaplan and Norton say "Strategy is based on a differentiated customer value proposition. Satisying customers is the source of sustainable value creation."

Developing a value proposition is based on a review and analysis of the benefits, costs and value that an organization can deliver to its customers, prospective customers, and other constituent groups within and outside the organization. It is also a positioning of value, where Value = Benefits - Cost (cost includes risk)"
http://en.wikipedia.org/wiki/Value_proposition
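
As a toy illustration of the Value = Benefits - Cost relation above (with cost including risk), a minimal Python sketch; the figures and the risk-as-expected-cost treatment are my own assumptions, not from the article:

# Toy illustration: Value = Benefits - Cost, where cost includes an expected-risk component.
def value_proposition(benefits, direct_cost, risk_probability, risk_impact):
    """Return net value, treating risk as an expected cost."""
    expected_risk_cost = risk_probability * risk_impact
    return benefits - (direct_cost + expected_risk_cost)

# Example: an offering worth 100 to the customer, priced at 60,
# with a 10% chance of a failure costing 50.
print(value_proposition(benefits=100, direct_cost=60,
                        risk_probability=0.1, risk_impact=50))  # 35.0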

"In logic and philosophy, the term proposition (from the word "proposal") refers to either (a) the "content" or "meaning" of a meaningful declarative sentence or (b) the pattern of symbols, marks, or sounds that make up a meaningful declarative sentence. The meaning of a proposition includes having the quality or property of being either true or false, and as such propositions are claimed to be truthbearers.

The existence of propositions in sense (a) above, as well as the existence of "meanings", is disputed by some philosophers. Where the concept of a "meaning" is admitted, its nature is controversial. In earlier texts writers have not always made it sufficiently clear whether they are using the term proposition in sense of the words or the "meaning" expressed by the words. To avoid the controversies and ontological implications, the term sentence is often now used instead of proposition to refer to just those strings of symbols that are truthbearers, being either true or false under an interpretation. Strawson advocated the use of the term "statement", and this is the current usage in mathematical logic."
http://en.wikipedia.org/wiki/Proposition

"Propositions, as ways of "measuring" semantic information by the topic-ful, turn out to be more like dollars than like numbers[...]There are no real, natural universal units of either economic value or semantic information."
http://www.consciousentities.com/dennett.htm

"Inasmuch as science is observational or perceptual in nature, the goal of providing a scientific model and mechanism for the evolution of complex systems ultimately requires a supporting theory of reality of which perception itself is the model (or theory-to-universe mapping). Where information is the abstract currency of perception, such a theory must incorporate the theory of information while extending the information concept to incorporate reflexive self-processing in order to achieve an intrinsic (self-contained) description of reality. This extension is associated with a limiting formulation of model theory identifying mental and physical reality, resulting in a reflexively self-generating, self-modeling theory of reality identical to its universe on the syntactic level. By the nature of its derivation, this theory, the Cognitive Theoretic Model of the Universe or CTMU, can be regarded as a supertautological reality-theoretic extension of logic.
...
The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is self-descriptive or autologous, intrinsically and retroactively defined within the system, and “pre-informational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively self-configures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic self-utility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal.

A system that evolves by means of telic recursion – and ultimately, every system must either be, or be embedded in, such a system as a condition of existence – is not merely computational, but protocomputational. That is, its primary level of processing configures its secondary (computational and informational) level of processing by telic recursion. Telic recursion can be regarded as the self-determinative mechanism of not only cosmogony, but a natural, scientific form of teleology.
...
The ultimate “boundary of the boundary” of the universe is UBT, a realm of zero constraint and infinite possibility where neither boundary nor content exists. The supertautologically-closed universe buys internal diffeonesis only at the price of global synesis, purchasing its informational distinctions only at the price of coherence.
...
Moreover, in order to function as a selection principle, it generates a generalized global selection parameter analogous to “self-utility”, which it then seeks to maximize in light of the evolutionary freedom of the cosmos as expressed through localized telic subsystems which mirror the overall system in seeking to maximize (local) utility. In this respect, the Telic Principle is an ontological extension of so-called “principles of economy” like those of Maupertuis and Hamilton regarding least action, replacing least action with deviation from generalized utility."
http://www.ctmu.net/

"In economics, utility is a measure of relative satisfaction. Given this measure, one may speak meaningfully of increasing or decreasing utility, and thereby explain economic behavior in terms of attempts to increase one's utility. Utility is often modeled to be affected by consumption of various goods and services, possession of wealth and spending of leisure time.

The doctrine of utilitarianism saw the maximization of utility as a moral criterion for the organization of society. According to utilitarians, such as Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873), society should aim to maximize the total utility of individuals, aiming for "the greatest happiness for the greatest number of people". Another theory forwarded by John Rawls (1921–2002) would have society maximize the utility of the individual initially receiving the minimum amount of utility."
http://en.wikipedia.org/wiki/Utility
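
The contrast between Bentham/Mill-style total-utility maximization and Rawls's focus on the worst-off individual can be made concrete with a small sketch; the candidate allocations are invented for illustration:

# Hypothetical utility allocations across three individuals.
allocations = {
    "A": [10, 10, 10],   # equal shares
    "B": [25, 8, 2],     # highest total, lowest minimum
    "C": [12, 11, 9],    # slightly unequal
}

# Utilitarian criterion: greatest total utility.
utilitarian_choice = max(allocations, key=lambda k: sum(allocations[k]))
# Rawlsian (maximin) criterion: maximize the utility of the worst-off individual.
rawlsian_choice = max(allocations, key=lambda k: min(allocations[k]))

print(utilitarian_choice)  # "B": greatest total utility (35)
print(rawlsian_choice)     # "A": best minimum utility (10)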

"Reflective equilibrium is a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments. Although he did not use the term, philosopher Nelson Goodman introduced the method of reflective equilibrium as an approach to justifying the principles of inductive logic. The term 'reflective equilibrium' was coined by John Rawls and popularized in his celebrated A Theory of Justice as a method for arriving at the content of the principles of justice.

Rawls argues that human beings have a "sense of justice" which is both a source of moral judgment and moral motivation. In Rawls's theory, we begin with "considered judgments" that arise from the sense of justice. These may be judgments about general moral principles (of any level of generality) or specific moral cases. If our judgments conflict in some way, we proceed by adjusting our various beliefs until they are in "equilibrium," which is to say that they are stable, not in conflict, and provide consistent practical guidance. Rawls argues that a set of moral beliefs in ideal reflective equilibrium describes or characterizes the underlying principles of the human sense of justice.

An example of the method of reflective equilibrium may be useful. Suppose Zachary believes in the general principle of always obeying the commands in the Bible, and mistakenly thinks that these are completely encompassed by every Old Testament command. Suppose also that he thinks that it is not ethical to stone people to death merely for being Wiccan. These views may come into conflict (see Exodus 22:18, but see John 8:7). If they do, Zachary will then have several choices. He can discard his general principle in search of a better one (for example, only obeying the Ten Commandments), modify his general principle (for example, choosing a different translation of the Bible, or including Jesus' teaching from John 8:7 "If any of you is without sin, let him be the first to cast a stone" into his understanding), or change his opinions about the point in question to conform with his theory (by deciding that witches really should be killed). Whatever the decision, he has moved toward reflective equilibrium."
http://en.wikipedia.org/wiki/Reflective_equilibrium

"In philosophy, especially that of Aristotle, the golden mean is the desirable middle between two extremes, one of excess and the other of deficiency. For example courage, a virtue, if taken to excess would manifest as recklessness and if deficient as cowardice.

To the Greek mentality, it was an attribute of beauty. Both ancients and moderns realized that there is a close association in mathematics between beauty and truth. The poet John Keats, in his Ode on a Grecian Urn, put it this way:

"Beauty is truth, truth beauty," -- that is all
Ye know on earth, and all ye need to know.

The Greeks believed there to be three 'ingredients' to beauty: symmetry, proportion, and harmony. This triad of principles infused their life. They were very much attuned to beauty as an object of love and something that was to be imitated and reproduced in their lives, architecture, Paideia and politics. They judged life by this mentality.

In Chinese philosophy, a similar concept, Doctrine of the Mean, was propounded by Confucius; Buddhist philosophy also includes the concept of the middle way."
http://en.wikipedia.org/wiki/Golden_mean_(philosophy)

"The term intentionality was introduced by Jeremy Bentham as a principle of utility in his doctrine of consciousness for the purpose of distinguishing acts that are intentional and acts that are not. The term was later used by Edmund Husserl in his doctrine that consciousness is always intentional, a concept that he undertook in connection with theses set forth by Franz Brentano regarding the ontological and psychological status of objects of thought. It has been defined as "aboutness", and according to the Oxford English Dictionary it is "the distinguishing property of mental phenomena of being necessarily directed upon an object, whether real or imaginary". It is in this sense and the usage of Husserl that the term is primarily used in contemporary philosophy."
http://en.wikipedia.org/wiki/Intentionality

"The intentional stance is a term coined by philosopher Daniel Dennett for the level of abstraction in which we view the behavior of a thing in terms of mental properties. It is part of a theory of mental content proposed by Dennett, which provides the underpinnings of his later works on free will, consciousness, folk psychology, and evolution.

"Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do." (Daniel Dennett, The Intentional Stance, p. 17)"
http://en.wikipedia.org/wiki/Intentional_stance
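
Dennett's recipe reads almost like pseudocode. A toy rendering in Python (the beliefs, desires and actions are invented; this is an illustration of the stance, not an implementation of Dennett's theory):

# Ascribe beliefs and desires to an agent, then predict the action that
# best furthers its goals given those beliefs.
ascribed_beliefs = {"food_left": 0.8, "food_right": 0.2}   # agent's assumed probabilities
ascribed_desires = {"food": 1.0, "nothing": 0.0}           # agent's assumed utilities

def expected_utility(action):
    p_food = ascribed_beliefs["food_left"] if action == "go_left" else ascribed_beliefs["food_right"]
    return p_food * ascribed_desires["food"] + (1 - p_food) * ascribed_desires["nothing"]

predicted_action = max(["go_left", "go_right"], key=expected_utility)
print(predicted_action)  # "go_left": what the rational agent ought to do, hence what we predict it will do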

The Intentional Stance
http://consc.net/mindpapers/2.1b

The Intentional Stance: Developmental and Neurocognitive Perspectives
http://ase.tufts.edu/cogstud/incbios/griffinr/datapubs/griffin&bc-dennett.pdf

Anticipation, Design and Interaction
http://www.youtube.com/watch?v=s7l4VF7asXs

Dynamic ontology as an ontological framework of anticipatory systems
http://www.emeraldinsight.com/journals.htm?articleid=1864162&show=abstract

Strong anticipation: Multifractal cascade dynamics modulate scaling in synchronization behaviors

"Previous research on anticipatory behaviors has found that the fractal scaling of human behavior may attune to the fractal scaling of an unpredictable signal [Stephen DG, Stepp N, Dixon JA, Turvey MT. Strong anticipation: Sensitivity to long-range correlations in synchronization behavior. Physica A 2008;387:5271–8]. We propose to explain this attunement as a case of multifractal cascade dynamics [Schertzer D, Lovejoy S. Generalised scale invariance in turbulent phenomena. Physico-Chem Hydrodyn J 1985;6:623–5] in which perceptual-motor fluctuations are coordinated across multiple time scales. This account will serve to sharpen the contrast between strong and weak anticipation: whereas the former entails a sensitivity to the intermittent temporal structure of an unpredictable signal, the latter simply predicts sensitivity to an aggregate description of an unpredictable signal irrespective of actual sequence. We pursue this distinction through a reanalysis of Stephen et al.’s data by examining the relationship between the widths of singularity spectra for intertap interval time series and for each corresponding interonset interval time series. We find that the attunement of fractal scaling reported by Stephen et al. was not the trivial result of sensitivity to temporal structure in aggregate but reflected a subtle sensitivity to the coordination across multiple time scales of fluctuation in the unpredictable signal."
http://dx.doi.org/10.1016/j.chaos.2011.01.005
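
The weak/strong contrast — sensitivity to an aggregate description versus sensitivity to actual temporal structure — can be illustrated crudely: shuffling a strongly autocorrelated series leaves its aggregate statistics unchanged but destroys the sequence information. This toy AR(1) example is my own and is far simpler than the multifractal analysis in the paper:

import numpy as np

rng = np.random.default_rng(0)

# Generate a strongly autocorrelated AR(1) series.
n, phi = 10000, 0.9
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

shuffled = rng.permutation(x)

def lag1_autocorr(series):
    return np.corrcoef(series[:-1], series[1:])[0, 1]

# Aggregate description (mean, variance) is essentially identical...
print(x.mean(), x.var(), shuffled.mean(), shuffled.var())
# ...but the temporal structure survives only in the original ordering.
print(lag1_autocorr(x))         # ~0.9
print(lag1_autocorr(shuffled))  # ~0.0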

Introduction to the Natural Anticipator and the Artificial Anticipator

"This short communication deals with the introduction of the concept of anticipator, which is one who anticipates, in the framework of computing anticipatory systems. The definition of anticipation deals with the concept of program. Indeed, the word program, comes from “pro-gram” meaning "to write before" by anticipation, and means a plan for the programming of a mechanism, or a sequence of coded instructions that can be inserted into a mechanism, or a sequence of coded instructions, as genes or behavioural responses, that is part of an organism. Any natural or artificial programs are thus related to anticipatory rewriting systems, as shown in this paper."
http://orbi.ulg.ac.be/handle/2268/81299
http://tinyurl.com/47ogqta

How General is Nilpotency?

"Evidence is presented for the generality, criticality and importance of nilpotence and the associated criteria of Pauli exclusion, quantum phase factor and quantum holographic signal processing in relation to calculation, problem solving and optimum control of Heisenberg uncertainty in Quantum Interaction.
...
Nilpotent logic rather than digital logic reflects this by making the universal automatically the mirror image of the particular, because the universe is constrained to have zero totality. This clearly operates in the case of quantum mechanics. The question that then emerges is how far any system (e.g. life, consciousness, galactic formation, chemistry) which has a strong degree of self-organization can manage to achieve this by being modeled on a nilpotent structure. The work of Hill and Rowlands (2007), and Marcer and Rowlands (2007), suggests that this is possible in a wide variety of contexts. The reason is that the nilpotency does not stem from quantum mechanics initially, but from fundamental conditions of optimal information processing which are prior to physics, chemistry and biology, and even to mathematics."
http://www.naturescode.org.uk/files/HowgeneralANPA_(2).pdf
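
For readers unfamiliar with the term: an operator N is nilpotent if some power of it is zero, which is the algebraic face of Pauli exclusion in this line of work. A minimal numerical illustration, which is only an analogy and not the specific algebra used by Rowlands and colleagues:

import numpy as np

# A strictly upper-triangular matrix is nilpotent: N @ N = 0.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.allclose(N @ N, 0))   # True: the square is the zero matrix

# "Exclusion-like" reading: applying the same nilpotent (raising) operator
# twice annihilates the state -- you cannot occupy the same level twice.
state = np.array([0.0, 1.0])
print(N @ state)               # [1. 0.]  first application succeeds
print(N @ (N @ state))         # [0. 0.]  second application gives zero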

The 'Logic' of Self-Organizing Systems

A totally new computational grammatical structure has been developed which encompasses the general class of self-organizing systems. It is based on a universal rewrite system and the principle of nilpotency, where a system and its environment have a space-time variation defined by the phase, which preserves the dual mirror-image relationship between the two.
http://www.aaai.org/ocs/index.php/FSS/FSS10/paper/download/2188/2674

A Computational Path to the Nilpotent Dirac Equation

"Using a rewrite approach we introduce a computational path to a nilpotent form of the Dirac equation. The system is novel in allowing new symbols to be added to the initial alphabet and starts with just one symbol, representing ‘nothing’, and two fundamental rules: create, a process which adds news symbols, and conserve, a process which examines the effect of any new symbol on those that currently exist. With each step a new sub-alphabet of an infinite universal alphabet is created. The implementation may be iterative, where a sequence of algebraic properties is required of the emerging subalphabets. The path proceeds from nothing through conjugation, complexification, and dimensionalisation to a steady (nilpotent) state in which no fundamentally new symbol is needed. Many simple ways of implementing the computational path exist.

Keywords: rewrite system, substitution system, nilpotent, Dirac equation, universal alphabet.

Rewrite systems are synonymous with computing in the sense that most software is written in a language that must be rewritten as characters for some hardware to interpret. Formal rewrite (substitution or production) systems are pieces of software that take an object usually represented as a string of characters and using a set of rewrite rules (which define the system) generate a new string representing an altered state of the object. If required, a second realisation system takes the string and produces a visualisation or manifestation of the objects being represented. Each step of such rewrite systems sees one or more character entities of the complex object, defined in terms of symbols drawn from a finite alphabet Σ, being mapped using rewrite rules of the form L→R, into other character entities. Some stopping mechanism is defined to identify the end of one step and the start of the next (for example we can define that for each character entity or group of entities in a string, and working in a specific order, we will apply every rule that applies). It is usual in such systems to halt the execution of the entire system if some goal state is reached (e.g. all the character entities are in some normal form); if no changes are generated; if changes are cycling; or after a specified number of iterations. The objects being rewritten and differing stopping mechanisms determine different families of rewrite system, and in each family, alternative rules and halting conditions may result in strings representing differing species of object. Allowing new rules to be added dynamically to the existing set and allowing rules to be invoked in a stochastic fashion are means whereby more complexity may be introduced. For examples of various types of rewrite system see: von Koch (1905), Chomsky (1956), Naur et al (1960), Mandelbrot (1982), Wolfram (1985), Prusinkiewicz and Lindenmayer (1990), Dershowitz and Plaisted (2001), Marti-Oliet and Meseguer (2002), etc."
http://www.naturescode.org.uk/files/IJCAS-Diaz-Rowlands.pdf
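
A minimal string-rewrite system of the L→R kind described above, sketched in Python; the rules and halting condition are arbitrary examples, not the create/conserve rules of the paper:

# Apply rules of the form L -> R until a normal form is reached or an
# iteration cap is hit (one rule application per step, first match wins).
def rewrite(s, rules, max_steps=100):
    for step in range(max_steps):
        for left, right in rules:
            if left in s:
                s = s.replace(left, right, 1)
                break
        else:                       # no rule applied: normal form reached
            return s, step
    return s, max_steps             # halted by the iteration cap

rules = [("ab", "b"), ("ba", "a")]
print(rewrite("abba", rules))       # ('a', 3): reduced step by step to a normal form

# Rules can also be added dynamically, e.g. rules.append(("aa", "a")),
# which is the mechanism mentioned above for introducing more complexity.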

Vicious Circles in Orthogonal Term Rewrite Systems
http://web.mac.com/janwillemklop/Site/Bibliography_files/94.viciouscircles-entcs.pdf

The Universe from Nothing: A Mathematical Lattice of Empty Sets
http://arxiv.org/abs/physics/0309102

Lattice Duality: The Origin of Probability and Entropy
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.58.8541

Intelligent machines in the twenty-first century: foundations of inference and inquiry

"The last century saw the application of Boolean algebra to the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. Recent advances in our understanding of the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we recently identified the algebra of questions as the free distributive algebra, which will now allow us to work with questions in a way analogous to that which Boolean algebra enables us to work with logical statements. In this paper, we examine the foundations of inference and inquiry. We begin with a history of inferential reasoning, highlighting key concepts that have led to the automation of inference in modern machine–learning systems. We then discuss the foundations of inference in more detail using a modern viewpoint that relies on the mathematics of partially ordered sets and the scaffolding of lattice theory. This new viewpoint allows us to develop the logic of inquiry and introduce a measure describing the relevance of a proposed question to an unresolved issue. Last, we will demonstrate the automation of inference, and discuss how this new logic of inquiry will enable intelligent machines to ask questions. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them not only to make inferences from data, but also to decide which question to ask, which experiment to perform, or which measurement to take given what they have learned and what they are designed to understand."

Keywords: Inference, Probability, Entropy, Bayesian Methods, Lattice Theory, Machine Intelligence
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.59.1028&rep=rep1&type=pdf
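
A small sketch of the two ingredients the abstract pairs together: inference (Bayesian updating) and inquiry (choosing the question whose answer is expected to reduce uncertainty most). The hypotheses, likelihoods and "questions" below are invented, and expected information gain is used as the relevance measure — one natural reading of the abstract, not necessarily Knuth's exact definition:

import numpy as np

prior = np.array([0.5, 0.3, 0.2])          # prior over three hypotheses

# Likelihoods P(answer | hypothesis) for two candidate binary questions.
# Rows: hypotheses; columns: answer = yes / no.
questions = {
    "Q1": np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]),
    "Q2": np.array([[0.6, 0.4], [0.6, 0.4], [0.5, 0.5]]),
}

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def posterior(prior, likelihood_column):
    unnorm = prior * likelihood_column
    return unnorm / unnorm.sum()

def expected_information_gain(prior, lik):
    """Expected reduction in entropy from asking the question."""
    gain = 0.0
    for a in range(lik.shape[1]):                  # each possible answer
        p_answer = np.sum(prior * lik[:, a])
        gain += p_answer * (entropy(prior) - entropy(posterior(prior, lik[:, a])))
    return gain

for name, lik in questions.items():
    print(name, expected_information_gain(prior, lik))
# Q1 separates the hypotheses better, so it scores higher: ask Q1 first.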

Entropy/Information Theory Publications:
http://knuthlab.rit.albany.edu/pubs-entropy.html

Using Cognitive Entropy to Manage Uncertain Concepts in Formal Ontologies:
http://www.cs.us.es/~tchavez/53270315.pdf

Betweenness, Metrics and Entropies in Lattices
http://www.cs.umb.edu/~dsim/papersps/bme.pdf

Scanning the structure of ill-known spaces: Part 1. Founding principles about mathematical constitution of space

Necessary and sufficient conditions allowing a previously unknown space to be explored through scanning operators are reexamined with respect to measure theory. Generalized conceptions of distances and dimensionality evaluation are proposed, together with their conditions of validity and range of application to topological spaces. The existence of a Boolean lattice with fractal properties originating from nonwellfounded properties of the empty set is demonstrated. This lattice provides a substrate with both discrete and continuous properties, from which existence of physical universes can be proved, up to the function of conscious perception. Spacetime emerges as an ordered sequence of mappings of closed 3-D Poincaré sections of a topological 4-space provided by the lattice. The possibility of existence of spaces with fuzzy dimension or with adjoined parts with decreasing dimensions is raised, together with possible tools for their study. The work provides the introductory foundations supporting a new theory of space whose physical predictions (suppressing the opposition of quantum and relativistic approaches) and experimental proofs are presented in detail in Parts 2 and 3 of the study.
http://arxiv.org/abs/physics/0211096

"That is to say, there may be properties of the known universe that can only be known or explained - "scanned" - only by dimensional probes of more than three dimensions."
http://tinyurl.com/4shlsfr

Topology in Computer Science: Constructivity; Asymmetry and Partiality; Digitization
http://www.dagstuhl.de/Reports/00/00231.pdf

Partiality I: Embedding Relation Algebras

"As long as no cooperation between processes is supposed to take place, one may consider them separately and need not ask for the progress of the respective other processes. If a composite result of processes is to be delivered, it is important in which way the result is built, only by non-strict/continuous “accumulation” (i.e., open for partial evaluation) or with additional intermittent strict/non-continuous “transactions”.

We define the concept of partiality to cope with partial availability. To this end relations are handled under the aspect that orderings are defined in addition to the identities in every relation algebra. Only continuous functions with respect to these orderings are considered to regulate transfer of partialities."
http://homepage.mac.com/titurel/Papers/SchmidtJLAP.pdf

"7. The CTMU and Teleology

Historically, the Telic Principle can be understood as a logical analogue of teleology incorporating John Archibald Wheeler’s Observer Participation Thesis (approximately, “man participates in the ongoing quantum-scale creation of reality by observing it and thereby collapsing the wavefunction representing its potential”). More directly, the Telic Principle says that reality is a self-configuring entity that emerges from a background of unbound potential as a protean recursive construct with a single imperative: self-actualization. In other words, existence and its amplification is the tautological raison d’être of the cosmos. The phrase “raison d’être” has a literal significance; in order to exist, a self-contained universe must configure itself to recognize its own existence, and to configure itself in this way it must, by default, generate and parameterize its own self-configuration and self-recognition functions. This describes a situation in which the universe generates its own generalized utility: to self-configure, the universe must have a “self-actualization criterion” by which to select one of many possible structures or “futures” for itself, and this criterion is a generalized analogue of human utility…its raison d’être.

In addition to generalized utility and generalized volition (teleology), the universe also possesses generalized cognition (coherent self-recognition). By any reasonable definition of the term “mental”, this makes the universe mental in a generalized sense, where “generalized” means that these attributes conform to general functional descriptions of what humans do in the process of volition, cognition and mentation. The “coherent self-recognition” feature of reality appears as an explicit feature of conspansive spacetime, a model-theoretic dual of the expanding cosmos. Whereas the expanding cosmos is simplistically depicted in terms of a model called ERSU, short for Expanding Rubber-Sheet Universe, conspansive spacetime is depicted by a model-theoretic dual of ERSU called USRE, short for the Universe as a Self-Representational Entity. While ERSU is a product of Cartesian mind-matter dualism that effectively excludes mind in favor of matter, USRE, which portrays the universe as a “self-simulation”, is a form of dual-aspect monism according to which reality is distributively informational and cognitive in nature.

It is important to understand that the CTMU does not arbitrarily “project” human attributes onto the cosmos; it permits the logical deduction of necessary general attributes of reality, lets us identify any related human attributes derived from these general attributes, and allows us to explain the latter in terms of the former. CTMU cosmology is thus non-anthropomorphic. Rather, it uses an understanding of the cosmological medium of sentience to explain the mental attributes inherited by sentient organisms from the cosmos in which they have arisen. Unlike mere anthropomorphic reasoning, this is a logically correct description of human characteristics in terms of the characteristics of the universe from which we derive our existence."
http://www.megafoundation.org/CTMU/Articles/Nexus.html

"As our knowledge of things, even of created and limited things, is knowledge of their qualities and not of their essence, how is it possible to comprehend in its essence the Divine Reality, which is unlimited? For the substance of the essence of anything is not comprehended, but only its qualities. For example, the substance of the sun is unknown, but is understood by its qualities, which are heat and light. The substance of the essence of man is unknown and not evident, but by its qualities it is characterized and known. Thus everything is known by its qualities and not by its essence. Although the mind encompasses all things, and the outward beings are comprehended by it, nevertheless these beings with regard to their essence are unknown; they are only known with regard to their qualities.

Then how can the eternal everlasting Lord, who is held sanctified from comprehension and conception, be known by His essence? That is to say, as things can only be known by their qualities and not by their essence, it is certain that the Divine Reality is unknown with regard to its essence, and is known with regard to its attributes. Besides, how can the phenomenal reality embrace the Pre-existent Reality? For comprehension is the result of encompassing -- embracing must be, so that comprehension may be -- and the Essence of Unity surrounds all, and is not surrounded.

Also the difference of condition in the world of beings is an obstacle to comprehension. For example: this mineral belongs to the mineral kingdom; however far it may rise, it can never comprehend the power of growth. The plants, the trees, whatever progress they may make, cannot conceive of the power of sight or the powers of the other senses; and the animal cannot imagine the condition of man, that is to say, his spiritual powers. Difference of condition is an obstacle to knowledge; the inferior degree cannot comprehend the superior degree. How then can the phenomenal reality comprehend the Pre-existent Reality? Knowing God, therefore, means the comprehension and the knowledge of His attributes, and not of His Reality. This knowledge of the attributes is also proportioned to the capacity and power of man; it is not absolute. Philosophy consists in comprehending the reality of things as they exist, according to the capacity and the power of man. For the phenomenal reality can comprehend the Pre-existent attributes only to the extent of the human capacity. The mystery of Divinity is sanctified and purified from the comprehension of the beings, for all that comes to the imagination is that which man understands, and the power of the understanding of man does not embrace the Reality of the Divine Essence. All that man is able to understand are the attributes of Divinity, the radiance of which appears and is visible in worlds and souls.

When we look at the worlds and the souls, we see wonderful signs of the divine perfections, which are clear and apparent; for the reality of things proves the Universal Reality. The Reality of Divinity may be compared to the sun, which from the height of its magnificence shines upon all the horizons and each horizon, and each soul, receives a share of its radiance. If this light and these rays did not exist, beings would not exist; all beings express something, and partake of some ray and portion of this light. The splendors of the perfections, bounties, and attributes of God shine forth and radiate from the reality of the Perfect Man, that is to say, the Unique One, the universal Manifestation of God. Other beings receive only one ray, but the universal Manifestation is the mirror for this Sun, which appears and becomes manifest in it, with all its perfections, attributes, signs, and wonders."
http://bcca.org/bahaivision/BWF/0712mansknowledgeofgod.html

"Duality principles thus come in two common varieties, one transposing spatial relations and objects, and one transposing objects or spatial relations with mappings, functions, operations or processes. The first is called space-object (or S-O, or S<-->O) duality; the second, time-space (or T-S/O, or T<-->S/O) duality. In either case, the central feature is a transposition of element and a (spatial or temporal) relation of elements. Together, these dualities add up to the concept of triality, which represents the universal possibility of consistently permuting the attributes time, space and object with respect to various structures. From this, we may extract a third kind of duality: ST-O duality. In this kind of duality, associated with something called conspansive duality, objects can be “dualized” to spatiotemporal transducers, and the physical universe internally “simulated” by its material contents.
...
Deterministic computational and continuum models of reality are recursive in the standard sense; they evolve by recurrent operations on state from a closed set of “rules” or “laws”. Because the laws are invariant and act deterministically on a static discrete array or continuum, there exists neither the room nor the means for optimization, and no room for self-design. The CTMU, on the other hand, is conspansive and telic-recursive; because new state-potentials are constantly being created by evacuation and mutual absorption of coherent objects (syntactic operators) through conspansion, metrical and nomological uncertainty prevail wherever standard recursion is impaired by object sparsity. This amounts to self-generative freedom, hologically providing reality with a “self-simulative scratchpad” on which to compare the aggregate utility of multiple self-configurations for self-optimizative purposes.

Standard recursion is “Markovian” in that when a recursive function is executed, each successive recursion is applied to the result of the preceding one. Telic recursion is more than Markovian; it self-actualizatively coordinates events in light of higher-order relationships or telons that are invariant with respect to overall identity, but may display some degree of polymorphism on lower orders. Once one of these relationships is nucleated by an opportunity for telic recursion, it can become an ingredient of syntax in one or more telic-recursive (global or agent-level) operators or telors and be “carried outward” by inner expansion, i.e. sustained within the operator as it engages in mutual absorption with other operators. Two features of conspansive spacetime, the atemporal homogeneity of IEDs (operator strata) and the possibility of extended superposition, then permit the telon to self-actualize by “intelligently”, i.e. telic-recursively, coordinating events in such a way as to bring about its own emergence (subject to various more or less subtle restrictions involving available freedom, noise and competitive interference from other telons). In any self-contained, self-determinative system, telic recursion is integral to the cosmic, teleo-biological and volitional levels of evolution.
...
Where emergent properties are merely latent properties of the teleo-syntactic medium of emergence, the mysteries of emergent phenomena are reduced to just two: how are emergent properties anticipated in the syntactic structure of their medium of emergence, and why are they not expressed except under specific conditions involving (e.g.) degree of systemic complexity?"
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf

"More complex anticipatory capabilities, which are referred to as mental simulations, permit the prediction and processing of expected stimuli in advance. For example, Hesslow (2002) describes how rats are able to ‘plan in simulation’ and compare alternative paths in a T-maze before acting in practice. This capability can be implemented by means of the above described internal forward models. While internal models typically run on-line with action to generate predictions of an action’s effects, in order to produce mental simulations they can be run off-line, too, i.e., they can ’chain’ multiple short-term predictions and generate lookahead predictions for an arbitrary number of steps. By ’simulating’ multiple possible course of events and comparing their outcomes, and agent can select ’the best’ plan in advance (see fig. 2)."
http://www.istc.cnr.it/doc/1a_0000b_20080724d_anticipation.pdf
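
A sketch of the off-line use of a forward model described above: chain one-step predictions for each candidate plan (say, the two arms of a T-maze), compare the predicted outcomes, and act on the better one. The "world model" and rewards here are invented for illustration:

# Toy mental simulation: run a forward model off-line over candidate
# action sequences and pick the plan with the best predicted outcome.
forward_model = {                    # (state, action) -> predicted next state
    ("start", "left"):  "left_arm",
    ("start", "right"): "right_arm",
    ("left_arm", "go"): "food",
    ("right_arm", "go"): "empty",
}
reward = {"food": 1.0, "empty": 0.0}

def simulate(plan, state="start"):
    """Chain one-step predictions (look-ahead) without acting."""
    for action in plan:
        state = forward_model[(state, action)]
    return reward.get(state, 0.0)

plans = [("left", "go"), ("right", "go")]
best = max(plans, key=simulate)
print(best)   # ('left', 'go') -- chosen in simulation before acting in practice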

Redundancy in Systems Which Entertain a Model of Themselves: Interaction Information and the Self-Organization of Anticipation

"Mutual information among three or more dimensions (μ* = –Q) has been considered as interaction information. However, Krippendorff [1,2] has shown that this measure cannot be interpreted as a unique property of the interactions and has proposed an alternative measure of interaction information based on iterative approximation of maximum entropies. Q can then be considered as a measure of the difference between interaction information and redundancy generated in a model entertained by an observer. I argue that this provides us with a measure of the imprint of a second-order observing system—a model entertained by the system itself—on the underlying information processing. The second-order system communicates meaning hyper-incursively; an observation instantiates this meaning-processing within the information processing. The net results may add to or reduce the prevailing uncertainty. The model is tested empirically for the case where textual organization can be expected to contain intellectual organization in terms of distributions of title words, author names, and cited references."
http://www.mdpi.com/1099-4300/12/1/63/pdf
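
For concreteness, the three-way quantity the abstract starts from can be computed directly from a joint distribution. Below, Q is taken as the co-information I(X;Y) - I(X;Y|Z), so that μ* = -Q in the abstract's notation; sign conventions for "interaction information" vary in the literature. The XOR example is a standard illustration, not taken from the paper:

import itertools
import numpy as np

# Joint distribution p(x, y, z): x and y are fair coins, z = x XOR y.
p = {}
for x, y in itertools.product([0, 1], repeat=2):
    p[(x, y, x ^ y)] = 0.25

def H(var_indices):
    """Shannon entropy (bits) of the marginal over the given variable indices."""
    marg = {}
    for outcome, prob in p.items():
        key = tuple(outcome[i] for i in var_indices)
        marg[key] = marg.get(key, 0.0) + prob
    return -sum(q * np.log2(q) for q in marg.values() if q > 0)

I_XY   = H([0]) + H([1]) - H([0, 1])                    # I(X;Y)
I_XY_Z = H([0, 2]) + H([1, 2]) - H([0, 1, 2]) - H([2])  # I(X;Y|Z)
Q = I_XY - I_XY_Z                                       # co-information

print(I_XY, I_XY_Z, Q)   # 0.0, 1.0, -1.0: pairwise independent, yet jointly dependent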

Telentropy: Uncertainty in the biomatrix

"Teleonics is a systemic approach for the study and management of complex living systems, such as human beings, families, communities, business organisations and even countries and international relationships. The approach and its applications have been described in several publications, quoted in the paper. The units of teleonics are teleons, viz, end-related, autonomous process systems. An indication of malfunction in teleons is a high level of telentropy that can be caused by many factors, among which the most common are the lack of well defined goals, inefficient governance, inappropriate interference and undeclared sharing of subsystems between teleons. These factors, as well as other modes of telentropy generation and transfer are described, together with some suggestions about ways to avoid them."
http://www.informaworld.com/smpp/content~db=all~content=a922786098

"Stressors that challenge homeostasis, often regarded as the most urgent of needs, are the best known. When an organism's competence to maintain homeostasis within a specific range is exceeded, responses are evoked that enable the organism to cope by either removing the stressor or facilitating coexistence with it (Antelman and Caggiula, 1990). While many stressors can evoke dramatic neural and endocrine responses, a more modest or “subclinical” response may be exhibited in response to milder stimuli. These responses may build on or extend homeostatic mechanisms or they may be more or less tightly linked to homeostatic responses in a hierarchical manner creating a functional continuum. For example, such a hierarchical system was described for thermoregulation in mammals by Satinoff (1978) in which more recently evolved regulatory mechanisms are invoked when more conservative ones are unable to restore balance."
http://icb.oxfordjournals.org/content/42/3/508.full

"We have developed the proposal by Satinoff (1978) of a parallel hierarchical system, parallel in that each effector could be assigned to its own controller, and hierarchical in that some controllers have a greater capacity to influence thermoregulation than others, to include subsystem controllers responsible for the autoregulation of elements such as scrotal and brain temperature (see Mitchell and Laburn, 1997). Thus, if autoregulation fails or is overwhelmed, then a higher-ranking system can be invoked to regulate the temperature of the subsystem by regulating the whole system containing it (see Fig. 5)."
http://tinyurl.com/4apgpuu

"The bulk of theoretical and empirical work in the neurobiology of emotion indicates that isotelesis—the principle that any one function is served by several structures and processes—applies to emotion as it applies to thermoregulation, for example (Satinoff, 1982)...In light of the preceding discussion, it is quite clear that the processes that emerge in emotion are governed not only by isotelesis, but by the principle of polytelesis as well. The first principle holds that many functions, especially the important ones, are served by a number of redundant systems, whereas the second holds that many systems serve more than one function. There are very few organic functions that are served uniquely by one and only one process, structure, or organ. Similarly, there are very few processes, structures, or organs that serve one and only one purpose. Language, too, is characterized by the isotelic and polytelic principles; there are many words for each meaning and most words have more than one meaning. The two principles apply equally to a variety of other biological, behavioral, and social phenomena. Thus, there is no contradiction between the vascular and the communicative functions of facial efference; the systems that serve these functions are both isotelic and polytelic."
http://tinyurl.com/4dt4gqs

"In other words, telesis is a kind of “pre-spacetime” from which time and space, cognition and information, state-transitional syntax and state, have not yet separately emerged. Once bound in a primitive infocognitive form that drives emergence by generating “relievable stress” between its generalized spatial and temporal components - i.e., between state and state-transition syntax – telesis continues to be refined into new infocognitive configurations, i.e. new states and new arrangements of state-transition syntax, in order to relieve the stress between syntax and state through telic recursion (which it can never fully do, owing to the contingencies inevitably resulting from independent telic recursion on the parts of localized subsystems). As far as concerns the primitive telic-recursive infocognitive MU form itself, it does not “emerge” at all except intrinsically; it has no “external” existence except as one of the myriad possibilities that naturally exist in an unbounded realm of zero constraint.

Telic recursion occurs in two stages, primary and secondary (global and local). In the primary stage, universal (distributed) laws are formed in juxtaposition with the initial distribution of matter and energy, while the secondary stage consists of material and geometric state-transitions expressed in terms of the primary stage. That is, where universal laws are syntactic and the initial mass-energy distribution is the initial state of spacetime, secondary transitions are derived from the initial state by rules of syntax, including the laws of physics, plus telic recursion. The primary stage is associated with the global telor, reality as a whole; the secondary stage, with internal telors (“agent-level” observer-participants). Because there is a sense in which primary and secondary telic recursion can be regarded as “simultaneous”, local telors can be said to constantly “create the universe” by channeling and actualizing generalized utility within it."
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf

"The notion of classifying topos is part of a trend, begun by Lawvere, of viewing a mathematical theory as a category with suitable exactness properties and which contains a “generic model”, and a model of the theory as a functor which preserves those properties. This is described in more detail at internal logic and type theory, but here are some simple examples to give the flavor. "
http://ncatlab.org/nlab/show/classifying+topos

Space and time in the context of equilibrium-point theory

"Advances to the equilibrium-point (EP) theory and solutions to several classical problems of action and perception are suggested and discussed. Among them are (1) the posture–movement problem of how movements away from a stable posture can be made without evoking resistance of posture-stabilizing mechanisms resulting from intrinsic muscle and reflex properties; (2) the problem of kinesthesia or why our sense of limb position is fairly accurate despite ambiguous positional information delivered by proprioceptive and cutaneous signals; (3) the redundancy problems in the control of multiple muscles and degrees of freedom. Central to the EP hypothesis is the notion that there are specific neural structures that represent spatial frames of reference (FRs) selected by the brain in a task-specific way from a set of available FRs. The brain is also able to translate or/and rotate the selected FRs by modifying their major attributes—the origin, metrics, and orientation—and thus substantially influence, in a feed-forward manner, action and perception. The brain does not directly solve redundancy problems: it only limits the amount of redundancy by predetermining where, in spatial coordinates, a task-specific action should emerge and allows all motor elements, including the environment, to interact to deliver a unique action, thus solving the redundancy problem (natural selection of action). The EP theory predicts the existence of specific neurons associated with the control of different attributes of FRs and explains the role of mirror neurons in the inferior frontal gyrus and place cells in the hippocampus."
http://onlinelibrary.wiley.com/doi/10.1002/wcs.108/full

Scoring Rules, Generalized Entropy, and Utility Maximization

Information measures arise in many disciplines, including forecasting (where scoring rules are used to provide incentives for probability estimation), signal processing (where information gain is measured in physical units of relative entropy), decision analysis (where new information can lead to improved decisions), and finance (where investors optimize portfolios based on their private information and risk preferences). In this paper, we generalize the two most commonly used parametric families of scoring rules and demonstrate their relation to well-known generalized entropies and utility functions, shedding new light on the characteristics of alternative scoring rules as well as duality relationships between utility maximization and entropy minimization.
http://faculty.fuqua.duke.edu/~rnau/scoring_rules_and_generalized_entropy.pdf
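
The duality mentioned in the abstract can be seen in the simplest case: under the logarithmic scoring rule, the forecaster's maximal expected score is the negative Shannon entropy of the true distribution, so maximizing expected score and minimizing a relative-entropy gap coincide. A small numerical check (illustrative only; the paper treats much more general families):

import numpy as np

def log_score(reported, outcome):
    """Logarithmic scoring rule: reward the log of the probability given to what happened."""
    return np.log(reported[outcome])

def expected_score(true_p, reported):
    return sum(true_p[i] * log_score(reported, i) for i in range(len(true_p)))

true_p = np.array([0.7, 0.2, 0.1])

# Truthful reporting attains the maximum, which equals -H(true_p) in nats...
print(expected_score(true_p, true_p), -np.sum(true_p * np.log(true_p)))

# ...and any distorted report scores strictly worse (the gap is the KL divergence).
distorted = np.array([0.5, 0.3, 0.2])
print(expected_score(true_p, distorted))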

A conversion between utility and information

"Rewards typically express desirabilities or preferences over a set of alternatives. Here we propose that rewards can be defined for any probability distribution based on three desiderata, namely that rewards should be real-valued, additive and order-preserving, where the latter implies that more probable events should also be more desirable. Our main result states that rewards are then uniquely determined by the negative information content. To analyze stochastic processes, we define the utility of a realization as its reward rate. Under this interpretation, we show that the expected utility of a stochastic process is its negative entropy rate. Furthermore, we apply our results to analyze agent-environment interactions. We show that the expected utility that will actually be achieved by the agent is given by the negative cross-entropy from the input-output (I/O) distribution of the coupled interaction system and the agent's I/O distribution. Thus, our results allow for an information-theoretic interpretation of the notion of utility and the characterization of agent-environment interactions in terms of entropy dynamics."
http://arxiv.org/abs/0911.5106
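
The abstract's central identifications — reward as negative information content, expected utility as negative entropy, achieved expected utility as negative cross-entropy — are easy to check numerically for a single discrete distribution. A sketch under the definitions as summarized above; the particular distributions are invented:

import numpy as np

env_p   = np.array([0.6, 0.3, 0.1])   # outcome distribution of the coupled interaction system
agent_q = np.array([0.5, 0.3, 0.2])   # the agent's own I/O distribution

def reward(q, i):
    """Reward of outcome i = negative information content under q (in nats)."""
    return np.log(q[i])

# Expected utility of the agent's own process = negative entropy of q.
expected_utility = sum(agent_q[i] * reward(agent_q, i) for i in range(len(agent_q)))
print(np.isclose(expected_utility, np.sum(agent_q * np.log(agent_q))))   # True

# Expected utility actually achieved when outcomes follow the environment
# = negative cross-entropy between env_p and agent_q.
achieved = sum(env_p[i] * reward(agent_q, i) for i in range(len(env_p)))
print(np.isclose(achieved, np.sum(env_p * np.log(agent_q))))             # True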

"In 1993, Gerard’t Hooft published Dimensional Reduction in Quantum Gravity, in
which he made the first direct comparison of theories of quantum gravity to holograms:

We would like to advocate here a somewhat extreme point of view. We suspect that
there simply are not more degrees of freedom to talk about than the ones one can draw
on a surface, as given by eq. (3). The situation can be compared with a hologram of a
3-dimensional image on a 2-dimensional surface."
http://physics.ucsc.edu/~jeff/holographic.pdf

"On the surface, holographic reduced representations are utterly different from logical unification. But I can't help feeling that, at a deeper level, they are closely related. And there is a categorical formulation of logical unification, described in the first reference below, by Rydeheard and Burstall. They say their formulation is derived from an observation by Goguen. So it may be based (I'm not an expert) on the ideas in the second reference:

David Rydeheard and Rod Burstall, Computational Category Theory. Prentice Hall, 1988. See Chapter 8.
http://www.cs.man.ac.uk/~david/categories/book/book.pdf

Joseph Goguen, What is unification? A categorical view of substitution, equation and solution. In Resolution of Equations in Algebraic Structures, 1: Algebraic Techniques, Academic Press, 1989.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.9221

So, can we extend that categorical formulation to holographic reduced representations? I don't know. But if we could, we would better understand how they are related to logic programming, and we might gain new tools for analogical reasoning. It's worth trying."
http://drdobbs.com/blogs/228700165#unihrr
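
For readers unfamiliar with them, holographic reduced representations (Plate's HRRs) bind role and filler vectors by circular convolution and unbind by circular correlation; the recovered filler is noisy but recognizably closer to the original than to anything else. A minimal sketch of that binding/unbinding cycle (nothing here touches the categorical-unification question raised above):

import numpy as np

rng = np.random.default_rng(1)
n = 1024

def cconv(a, b):        # circular convolution (binding)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):        # circular correlation (approximate unbinding)
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

role   = rng.normal(0, 1 / np.sqrt(n), n)
filler = rng.normal(0, 1 / np.sqrt(n), n)
other  = rng.normal(0, 1 / np.sqrt(n), n)

trace = cconv(role, filler)      # bound pair, same dimensionality as its parts
recovered = ccorr(role, trace)   # unbind the trace with the role

print(cosine(recovered, filler))  # high: a noisy copy of the filler
print(cosine(recovered, other))   # near 0: unrelated vector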

Utility, rationality and beyond: from behavioral finance to informational finance
http://books.google.com/books?id=_LFdBxG9w-kC&lr=&source=gbs_navlinks_s

"In the era of knowledge-driven economy, technological innovation is a key character. The thesis describes the connotation, purpose and core topics of service science through implementing knowledge management, and finally put forward the suggestion of improving the technological innovation capacity through knowledge management and service science."
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5301012

"The essential difference is that in a knowledge economy, knowledge is a product, while in a knowledge-based economy, knowledge is a tool. This difference is not yet well distinguished in the subject matter literature. They both are strongly interdisciplinary, involving economists, computer scientists, engineers, mathematicians, chemists and physicists, as well as cognitivists, psychologists and sociologists.

Various observers describe today's global economy as one in transition to a "knowledge economy," as an extension of an "information society." The transition requires that the rules and practices that determined success in the industrial economy need rewriting in an interconnected, globalized economy where knowledge resources such as know-how and expertise are as critical as other economic resources. According to analysts of the "knowledge economy," these rules need to be rewritten at the levels of firms and industries in terms of knowledge management and at the level of public policy as knowledge policy or knowledge-related policy."
http://en.wikipedia.org/wiki/Knowledge_economy

"Baumol's cost disease (also known as the Baumol Effect) is a phenomenon described by William J. Baumol and William G. Bowen in the 1960s. It involves a rise of salaries in jobs that have experienced no increase of labor productivity in response to rising salaries in other jobs which did experience such labor productivity growth. This goes against the theory in classical economics that wages are always closely tied to labor productivity changes.

The rise of wages in jobs without productivity gains is caused by the necessity to compete for employees with jobs that did experience gains and hence can naturally pay higher salaries, just as classical economics predicts. For instance, if the banking industry pays its bankers 19th century style salaries, the bankers may decide to quit and get a job at an automobile factory where salaries are commensurate to high labor productivity. Hence, bankers' salaries are increased not due to labor productivity increases in the banking industry, but rather due to productivity and wage increases in other industries."
http://en.wikipedia.org/wiki/Baumol's_cost_disease
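
A worked numerical example of the mechanism (figures invented): one sector's productivity keeps rising and its wages follow, the stagnant sector must match those wages to keep workers, so its unit cost climbs even though its output per worker is unchanged:

# Two sectors; only one gets productivity gains, but both must pay the going wage.
wage = 10.0                      # common starting wage
progressive_output = 100.0       # units per worker, rising sector
stagnant_output = 20.0           # units per worker, no productivity growth

for year in range(3):
    unit_cost_prog = wage / progressive_output
    unit_cost_stag = wage / stagnant_output
    print(year, round(wage, 2), round(unit_cost_prog, 3), round(unit_cost_stag, 3))
    progressive_output *= 1.5    # productivity grows 50% a year in one sector...
    wage *= 1.5                  # ...and wages in BOTH sectors rise to match.

# Unit cost stays flat in the progressive sector but climbs steadily in the
# stagnant one: Baumol's cost disease.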

"Ever since Harvard sociologist Daniel Bell published his book, The Coming of Post-Industrial Society, in 1973, there has been a strong sense of inevitability about the rise and dominance of services in the world’s advanced economies. And, in general, people have concluded that this is a good thing. But there’s danger lurking in services. At this point in their evolution, they’re less efficient and productive than modern manufacturing and farming. Also, while manufacturing took over 200 years before its “quality revolution,” services have only been dominant for a few decades and have yet to figure out quality. These issues could mean big trouble not just for developed countries but for the entire global economy.

Some of today’s top thinkers about services are sounding alarms. Robert Morris, head of service research at IBM Research, says that unless services become more scientific and technical, economic growth could stagnate. Henry Chesbrough, the UC Berkeley professor who coined the term “open innovation,” says this is a major issue facing the world economy long term. He calls it the “commodity services trap.”

Underpinning their thinking is an economic theory called Baumol’s disease. The idea is that as services become an ever larger piece of the economy, they consume an ever larger share of the human and capital resources – but don’t create enough value in return. Think of an electricity generation plant that consumes more energy than it produces. “Productivity and quality of services isn’t growing comparably to other sectors, including manufacturing and agriculture, so the danger is that it swamps the economy – employment, the share of GDP, and what people have to pay for,” says Morris. “The world economy could stall.”

Developed nations are particularly vulnerable to Baumol’s disease. In Europe and the United States, a lot of progress has been made in the past decade in improving the efficiency of IT services, but other service industries are frightfully inefficient and ineffective: think government, health care and education.

So while adding jobs is vitally important to countries that are still reeling from the economic meltdown, if the jobs that are added are commodity service jobs, long term, it’s adding to the inefficiency of the economy. That’s why governments need to invest aggressively in science and education and technology to improve services in spite of their budget deficits.

One area that deserves investment is service science. It’s the academic discipline that IBM (with help from Chesbrough) began promoting in 2002. A multidisciplinary approach, service science addresses Baumol’s disease head on by using the ideas and skills of computer science, engineering, social science and business management to improve the productivity, quality and innovation in services. Many of the techniques that have already been developed in computer, mathematical and information sciences can be directly applied to helping services. But new breakthroughs and better interactions with behavioral and business sciences are also essential, because services are, and always will be, people-centric businesses.

Today, more than 450 universities worldwide offer some sort of service science program. But much more can and should be done to avoid falling into the commodity services trap. Otherwise, the post-industrial society could take on a post-apocalyptic tinge."
http://asmarterplanet.com/blog/category/smarter-systems

Innovation in services: a review of the debate and a research agenda
http://www.slideshare.net/rooteranalysis/articulo-3-innovacionservicios

The service paradox and endogenous economic growth
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=358350

‎"In this blog we will explore how recent ideas in cognitive science can be used to develop new products, services and organizations that enhance how we think and feel.

We want exciting, beautiful, easy-to-use things. We ask our artifacts (anything that is designed) to make us smarter, reflect our values, invoke the respect and admiration of others and involve our friends and family when appropriate. We want all of this on top of whatever it is they are supposed to do.

The basic functionality of any artifact is now table stakes. What designers must do is go beyond the basics and deliver the aesthetic, emotional, experiential, profound and even transformational. We must make the ordinary extraordinary in an authentic way. In many respects, that has always been the goal of design and exceptional designers achieve it (somehow) every day.

But it goes beyond that.

There are things that we design that fail to achieve their intended purpose because they don’t reflect sufficient understanding of how the mind works. And the consequence can be dire. Take for example weight loss or chronic disease management programs that are designed to change our behaviors but fail to do so. The cost of that design failure is very high.

Over the last two decades there has been an explosion in what we know about how the mind works. Significant advances in the neuro and cognitive sciences and a wide range of emerging high-potential fields including neuroeconomics, cognitive ergonomics, behavioral finance, augmented cognition and others promise to provide the principles, models and tools needed to systematically design artifacts that not only support cognition but actually make it better.

Cognitive design seeks to paternalistically harness these insights and translate them into improved products, services, change programs, workflow, organizational designs, workspaces and any other artifact that impacts how we think and feel. Cognitive design, like human factors, interactive design and most other modern design movements, looks to put the latest findings from the human sciences to work. But it goes further than that.

It goes further by insisting that the scope and orientation of the design problem itself must change. The central idea is in fact somewhat radical:

We need a new design stance that says we are not just designing the functionality of the artifact but we are also designing the mental states of the user.

In this sense the mental functioning and states of the end user are every bit as much a part of the design problem and specification as are the more traditional considerations of feature, function and form. We seek to break down the distinction between an artifact and the user’s reaction to it by including both as the “thing to be designed”. Now it is feature, function, form and mental state. The fact that we have the science and soon the practice to do this is both exciting and worrisome.

We will cover both the promise and the peril (ethical considerations) of cognitive design in this blog."
http://newvaluestreams.com/wordpress/?page_id=2

"I am hoping soon to start work on the final draft of a book whose working title has been Cognitive Design. This book is about the design and standardization of sets of things – such as the letters of the alphabet, the fifty United States, the notes in the musical octave, the different grades you can give your students, or the stops on a subway line. Every person deals with one or another of these sets of things on a daily basis, and for many people they hold a sort of fascination. We sometimes forget that societies, cultures, and the human beings within them – not nature – designed these sets, chose labels for their members, and made them into standards. Many people have a sense that these different sets have something in common – but most would be hard-pressed to say what that is. My book lays out the answer. I submitted the most recent draft of it as my Ph.D. dissertation at Rutgers University in April 2005."
http://www.ianwatson.org/contrast_set_design_overview.html

Cognitive Design Features on Traffic Signs
http://www.engineeringletters.com/issues_v14/issue_1/EL_14_1_3.pdf

6 cognitive design principles (simplicity, consistency, organization, natural order, clarity, and attractiveness)
http://www.ncbi.nlm.nih.gov/pubmed/18359412

"We are a multidisciplinary research center devoted to the study of medical decision-making, cognitive foundations of health behaviors and the effective use of computer-based information technologies.Our research is deeply rooted in theories and methods of cognitive science, with a strong focus on the analysis of medical error, development of models of decision-making, and design and evaluation of effective human-computer interactions. These studies are guided by a concern for improving the performance of individuals and teams in the health care system."
http://www.uthouston.edu/cognitive-informatics/

Cognitive Informatics: Exploring the theoretical foundations for Natural Intelligence, Neural Informatics, Autonomic Computing, and Agent Systems:
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=978138C997A258571CD8F3F58A44D558?doi=10.1.1.89.2133&rep=rep1&type=pdf