Saturday, September 11, 2010

Generalized Utility, Principles of Economy, Telic Recursion, Second-Order Cybernetics, Universal Harsanyi Type Spaces

George Soros, Reflexivity and Market Reversals

"No matter how complex, the underlying basis of almost every economic theory is that markets search for prices that create a balance between supply and demand. Consequently, when all participants act rationally, free markets and the economy are stable.

George Soros does not agree.

His theory of reflexivity suggests that, sometimes, markets are inherently unstable. The underlying forces create positive, self-reinforcing feedback loops that cause prices to diverge wildly from equilibrium. Reflexivity helps explain why this happens and is the philosophy that he uses to identify these unstable environments. When prices move to the extreme, he bets on a reversal. As evidenced by his investment track record, Mr. Soros has applied his theories with great success.

In this paper, we examine the ideas behind reflexivity and discuss how they result in parabolic price patterns. The belief that markets simply tend to overshoot in search for equilibrium is inadequate. When destabilizing forces take hold, businesses, industries and financial markets move along a relentless path away from equilibrium, sometimes creating a virtuous spiral of prosperity, and other times a vicious cycle of economic destruction.
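To make the feedback mechanism concrete, here is a toy simulation of my own (not Soros's model; all parameters are illustrative assumptions). Trend-followers amplify the last price move while value investors pull the price back toward a fixed fundamental; when the trend term dominates, the price leaves equilibrium on exactly the kind of relentless path described above.

```python
def simulate(feedback, reversion, steps=50, fundamental=100.0, start=101.0):
    """Simulate a price series with a trend-following (reflexive) term
    and a mean-reverting (equilibrating) term. Purely illustrative."""
    prices = [fundamental, start]  # seed with the fundamental as the prior price
    for _ in range(steps):
        trend = feedback * (prices[-1] - prices[-2])    # trend-followers amplify the last move
        pull = reversion * (fundamental - prices[-1])   # value investors pull toward fundamentals
        prices.append(prices[-1] + trend + pull)
    return prices

stable = simulate(feedback=0.3, reversion=0.5)   # reversion dominates: converges to 100
unstable = simulate(feedback=1.2, reversion=0.0)  # no stabilizing force: runaway trend
print(round(stable[-1], 4), round(unstable[-1], 1))
```

With the first parameter set the deviation from the fundamental shrinks every step; with the second it compounds, which is the self-reinforcing regime in which a reflexivity trader would start watching for the reversal.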


Mr. Soros calls it his life’s work, and has written several books1,2 on the topic. Even so, he admits to receiving as much criticism as praise for his theories on the economy and financial markets. In a 1994 speech3, Soros attempted to explain his concept of reflexivity in the following statement,

“There is an active relationship between thinking and reality, as well as the passive one which is the only one recognized by natural science and, by way of false analogy, also by economic theory. I call the passive relationship the “cognitive function” and the active relationship the “participating function,” and the interaction between the two functions I call “reflexivity.”

Reflexivity is, in effect, a two-way feedback mechanism in which reality helps shape the participants’ thinking and the participants’ thinking helps shape reality in an unending process…”

Wow. Clearly his talents as a practitioner are superior to his ability to explain his ideas, and enlighten others. Our goal in this paper is to simplify and clarify. We will even be so bold as to draw some conclusions about the current market environment based on reflexivity."

Adding a bit more context from Alan Murray’s piece:

"Even the best-managed companies aren’t protected from this destructive clash between whirlwind change and corporate inertia. When I asked members of The Wall Street Journal’s CEO Council, a group of chief executives who meet each year to deliberate on issues of public interest, to name the most influential business book they had read, many cited Clayton Christensen’s “The Innovator’s Dilemma.” That book documents how market-leading companies have missed game-changing transformations in industry after industry—computers (mainframes to PCs), telephony (landline to mobile), photography (film to digital), stock markets (floor to online)—not because of “bad” management, but because they followed the dictates of “good” management. They listened closely to their customers. They carefully studied market trends. They allocated capital to the innovations that promised the largest returns. And in the process, they missed disruptive innovations that opened up new customers and markets for lower-margin, blockbuster products."

Information is being generated at ever increasing speeds. Executives hoping to draw lessons from those profiled by Christensen would do well to realize that thought leadership is all around them. Our Foresight platform facilitates innovation management by extracting collective thought leadership from throughout (and, if the business case requires it, beyond) the corporation.

More Murray:

"Information gathering also needs to be broader and more inclusive. Former Procter & Gamble CEO A.G. Lafley’s demand that the company cull product ideas from outside the company, rather than developing them all from within, was a step in this direction. (It even has a website for submitting ideas.) The new model will have to go further. New mechanisms will have to be created for harnessing the “wisdom of crowds.” Feedback loops will need to be built that allow products and services to constantly evolve in response to new information. Change, innovation, adaptability, all have to become orders of the day."

I wonder whether, as civilizations advance, the conception of "utility" and its voluntary exchange would also tend to shift from the more concrete to the more abstract.

While I think it's too soon for such a change in the international monetary system, I think the dollar will eventually be replaced by a new global reserve standard.

Philosophical speculation... I think of "utility" as conceptual/semantic, while the means/medium of expressing and exchanging it is grammatical/syntactic. If there were a way to find "information-preserving transformations" that accurately preserve intrinsic interpretations of value(s) in spite of extrinsic differences in representation, there would be no need to establish a uniform global currency, merely a reliable method of translating between relative/absolute notions of utility.

First off, what is utility?

"The main principle in the theory of value is expressed in the common phrase, "A thing is worth what it will fetch," — that is, what some one will give for it; the value depending on the will of the purchaser, as determined by his judgment. ... In this is seen, not only the certain distinction between value and utility, but one of the most beneficent laws of the science, which may be stated as follows: Value moves, diminished constantly by the substitution of the gratuitous agencies of Nature, by the ingenuity and industry of man. Utility remains fast-anchored in the wants of man and the properties of matter. This is the primary fact. But value moves again, — not to increase, but to multiply. Values are no greater, but there are more of them. The factor that multiplies is the ever-growing wants of man. Now, utility begins to move, expanding with the enlargement of man's activities, and the increase of the fruits of labor. Here we have the promise that the human race is destined to a constant augmentation of utilities, bringing in a great amelioration of its condition. Man is relieved from part of his labor only to feel new wants, and so, through fresh efforts, to find greater satisfactions in life."


"The Future of Humanity Institute (FHI) is a unique multidisciplinary research institute at the University of Oxford. FHI belongs to the Faculty of Philosophy and the James Martin 21st Century School.


The Future of Humanity Institute’s mission is to bring careful thinking to bear on big-picture questions about humanity and its prospects. The Institute’s work focuses on how future technology might affect fundamental parameters of the human condition, the risks and opportunities involved, and the epistemic, moral, and prioritization issues that confront actors who pursue long-range global objectives. We currently pursue four interlinked research programs:

  • Global catastrophic risks: What are the biggest threats to global civilization and human well-being? How can the human species survive the 21st century?
  • Human enhancement: How can medicine and technology be used to enhance basic biological capacities, such as cognition and lifespan? Can enhancement be ethical and wise?
  • Applied epistemology and rationality: How can we make better decisions under conditions of profound uncertainty and high stakes? How can we reduce bias and human error in our decision-making?
  • Future technologies: What would be the impacts of potentially transformative technologies such as advanced nanotechnology and artificial intelligence?

Despite the great theoretical and practical importance of these issues, they have received scant academic attention. FHI enables a few outstanding and creative intellects to work on these pivotal problems in close collaboration. Our goal is to pioneer research that demonstrates how such problems can be rigorously and fruitfully investigated.

The Institute combines world-class philosophical expertise with strong multidisciplinary scientific capability. Founded in 2005, the Institute has established itself as the world leader in its fields of research. The Institute has previously worked successfully on several closely related subjects and has pioneered quite a few ideas and approaches of relevance to the present Programme, including, inter alia, foundational work on existential risks, the concept of crucial considerations, the whole brain emulation roadmap, meta-level uncertainty in risk assessment, fundamental normative uncertainty, observation selection effects, human enhancement ethics and the reversal test, the singleton hypothesis, global catastrophic risks, and some approaches to machine intelligence safety analysis.

Our research staff is drawn from a variety of fields, including physics, neuroscience, economics, and philosophy. Several of us have an academic background in more than one discipline. We use whatever intellectual tools we judge most likely to be effective for the specific problem at hand, often combining the techniques of analytic philosophy with those of theoretical and empirical scientific inquiry.

FHI also works to promote public engagement and informed discussion in government, industry, academia, and the not-for-profit sector."

"Utility is taken to be correlative to Desire or Want. It has been already argued that desires cannot be measured directly, but only indirectly, by the outward phenomena to which they give rise: and that in those cases with which economics is chiefly concerned the measure is found in the price which a person is willing to pay for the fulfilment or satisfaction of his desire. (Marshall 1920:78)"

"The term is also used to refer to a game with moves that consist of creating or modifying the rules of another game, the target or subject game, to maximize the utility of the resulting rule set. Thus, we could play a metagame of optimizing the rules of "chess-like" games to maximize the satisfaction of play, and perhaps arrive at the rules of standard chess as an optimum. This is related to mechanism design theory in which the metagame would be to create or make changes in the management rules or policy of an organization to maximize its effectiveness or profitability. Constitutional design can be seen as a metagame of assembling the provisions of a written constitution to optimize a balance of values such as justice, liberty, and security, with the constitution being the rules of the game of government that would result."

"From the point of view of the older institutionalism, new institutionalism tries to explain institutional change as merely another instance of utility maximization. Old institutionalism, on the contrary, seeks to articulate reasons for institutional change in terms of social and political volition."

Great Expectations. Part I: On the Customizability of Generalized Expected Utility:

"We propose a generalization of expected utility that we call generalized EU (GEU), where a decision maker's beliefs are represented by plausibility measures, and the decision maker's tastes are represented by general (i.e.,not necessarily real-valued) utility functions. We show that every agent, ``rational'' or not, can be modeled as a GEU maximizer. We then show that we can customize GEU by selectively imposing just the constraints we want. In particular, we show how each of Savage's postulates corresponds to constraints on GEU."
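The standard special case of GEU, in which beliefs are an ordinary probability measure and tastes are real-valued, is easy to sketch in code. This toy is my own illustration of the "decision rule as expectation" shape, not code from the paper; the framework itself replaces both ingredients with plausibility measures and general utility functions.

```python
def expected_utility(act, beliefs, utility):
    """beliefs: {state: probability}; utility(act, state) -> real."""
    return sum(p * utility(act, s) for s, p in beliefs.items())

def best_act(acts, beliefs, utility):
    """Pick the act maximizing expected utility over the belief measure."""
    return max(acts, key=lambda a: expected_utility(a, beliefs, utility))

# Illustrative decision problem: carry an umbrella or not.
beliefs = {"rain": 0.3, "sun": 0.7}
payoff = {("umbrella", "rain"): 5, ("umbrella", "sun"): 2,
          ("no_umbrella", "rain"): -4, ("no_umbrella", "sun"): 6}
choice = best_act(["umbrella", "no_umbrella"],
                  beliefs, lambda a, s: payoff[(a, s)])
print(choice)  # → no_umbrella  (EU 3.0 vs. 2.9)
```

Imposing or relaxing Savage-style constraints then amounts to restricting what `beliefs` and `utility` are allowed to be, which is the customization the abstract describes.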

Great Expectations. Part II: Generalized Expected Utility as a Universal Decision Rule:

"Many different rules for decision making have been introduced in the literature. We show that a notion of generalized expected utility proposed in Part I of this paper is a universal decision rule, in the sense that it can represent essentially all other decision rules."

Utility and Entropy

"In this paper we study an astonishing similarity between the utility representation problem in economics and the entropy representation problem in thermodynamics."

Multi-competence Cybernetics: The Study of Multiobjective Artificial Systems and Multi-fitness Natural Systems:

"Multiobjective problems involve several competing measures of solution quality, and multiobjective evolutionary algorithms (MOEAs) and multiobjective problem solving have become important topics of research in the evolutionary computation community over the past 10 years. This is an advanced text aimed at researchers and practitioners in the area of search and optimization. The book focuses on how MOEAs and related techniques can be used to solve problems, particularly in the disciplines of science and engineering. Contributions by leading researchers deal with the concepts of problem, solution, objective, constraint, utility and preference, and show how these concepts are being investigated in current practice. The book is distinguished from other texts on MOEAs in that it is not primarily about the algorithms, nor specific applications, but about the concepts and processes involved in solving problems using a multiobjective approach. Each chapter contributes to the central, deep concepts and themes of the book: evaluating the utility of the multiobjective approach; discussing alternative problem formulations; showing how problem formulation affects the search process; and examining solution selection and decision-making. The book will be of benefit to researchers, practitioners and graduate students engaged with the underlying general theories involved in the multiobjective approach in fields such as natural computing and heuristics."

Incentive Compatibility

"In typical strategic interactions under incomplete information, different types (of a player) can choose from among a menu of different actions (strategies) that comprises the possibility that they mimic the behavior of other types (of the same, or of another player). Incentive compatibility conditions ensure that different types (of each player) align themselves such that they can be identified by their equilibrium choices. Typically, they are used to prevent some type from profiting by copying another type's action (given the other types do not disguise themselves behind others' choices). More generally, incentive compatibility conditions force a desired constellation of choices to form a strategic equilibrium for a given array of types. In particular, they might as well ensure that it be worthwhile for different types to choose the same action (the types pool on an action). Yet in most economic problems, incentive compatibility conditions serve to induce a strategic equilibrium which reveals the players' private information by having them choose different 'characteristic' equilibrium actions, i.e. they have the types 'sort themselves out'."
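The canonical concrete example of incentive compatibility (a standard textbook case, not drawn from the quote above) is the sealed-bid second-price auction: no bidder type gains by mimicking another type's bid, so truthful bidding satisfies the conditions described. A brute-force check:

```python
def second_price_outcome(bids, values, bidder):
    """Utility of `bidder` given everyone's bids and private values:
    the winner pays the second-highest bid, losers get zero."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    if winner != bidder:
        return 0.0
    second = max(b for i, b in enumerate(bids) if i != winner)
    return values[bidder] - second

values = [10.0, 7.0, 3.0]   # the private types
truthful = values[:]        # truthful strategy: bid your value
for i in range(len(values)):
    honest = second_price_outcome(truthful, values, i)
    for mimic in values:    # deviation: mimic another type's bid
        bids = truthful[:]
        bids[i] = mimic
        # incentive compatibility: no mimicry beats honesty
        assert second_price_outcome(bids, values, i) <= honest
print("truthful bidding is a best response for every type here")
```

The assertions are exactly the incentive compatibility conditions for this small game: each type's equilibrium choice reveals its value, and the types "sort themselves out."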

“What you believe to be true will control you, whether it’s true or not.” —Jeremy LaBorde

Epistemic Game Theory: Beliefs and Types

Harsanyi's Utilitarian Theorem: A Simpler Proof and Some Ethical Connotations:

Type spaces and conceptions of utility are intimately linked. Future informatics-based commerce will have to develop sensitivity to local contexts. The Internet is paving the way for such "interoperability" standards; the semantic conception of utility is subject to "more is different," as is any exchange-regulated form of interaction.

‎"The global derivatives market is a main pillar of the international financial system and the economy as a whole. Today, businesses around the world use derivatives to effectively hedge risks and reduce uncertainty about future prices. Derivatives contribute to economic growth and increase the efficiency of markets by improving price discovery for assets. It is important to note that derivatives did not cause the financial crisis and need to be differentiated from securities, e.g. equities, bonds or structured securities (ABS, CDOs, CLOs etc.). Nevertheless, the derivatives market has certainly been affected by and has played a role in the recent market turbulences.
The most important benefit of derivatives is the ability to manage market risk, i.e. to lower the actual market risk level to the desired one. This task of minimizing or eliminating risk, often called hedging, means that derivatives can safeguard corporates and financial institutions against unwanted price movements.
A second essential function fulfilled by derivatives is price discovery, allowing investors to trade on future price expectations. By trading in derivatives, investors effectively disclose their beliefs on future prices and increase the amount of information available to all market participants. In this way, derivatives enhance valuation and thereby allocation efficiency."
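The hedging point is easy to see with numbers (mine, purely illustrative): a producer who sells short a forward contract swaps an uncertain spot revenue for a certain one.

```python
# Illustrative scenarios for the future spot price of one unit of output.
scenarios = [80.0, 100.0, 120.0]
forward_price = 100.0  # price agreed today for delivery at maturity

unhedged = [spot for spot in scenarios]  # revenue if sold at the future spot
# Hedged revenue: spot sale plus the short forward's payoff (forward - spot).
hedged = [spot + (forward_price - spot) for spot in scenarios]

print(unhedged)  # revenue varies with the spot price: [80.0, 100.0, 120.0]
print(hedged)    # revenue locked at the forward price: [100.0, 100.0, 100.0]
```

The derivative does not make the price risk disappear from the market; it transfers it to the counterparty, who presumably wants the opposite exposure — which is the risk-management function the passage describes.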

"Several traditions in cybernetics have existed side by side since its beginning. One is concerned with circular causality, manifest in technological developments--notably in the design of computers and automata--and finds its intellectual expression in theories of computation, regulation and control. Another tradition, which emerged from human and social concerns, emphasizes epistemology--how we come to know-- and explores theories of self-reference to understand such phenomena as autonomy, identity, and purpose. Some cyberneticians seek to create a more humane world, while others seek merely to understand how people and their environment have co-evolved. Some are interested in systems as we observe them, others in systems that do the observing. Some seek to develop methods for modeling the relationships among measurable variables. Others aim to understand the dialogue that occurs between models or theories and social systems. Early work sought to define and apply principles by which systems may be controlled. More recent work has attempted to understand how systems describe themselves, control themselves, and organize themselves. Despite its short history, cybernetics has developed a concern with a wide range of processes involving people as active organizers, as sharing communicators, and as autonomous, responsible individuals."

"Cybernetics, as we all know, can be described in many ways. My cybernetics is neither mathematical nor formalized. The way I would describe it today is this: Cybernetics is the art of creating equilibrium in a world of possibilities and constraints."


"The theory of interconnectedness of possible dynamic self-regulated systems with their subsystems"


"Today's cybernetics is at the root of major revolutions in biology, artificial intelligence, neural modeling, psychology, education, and mathematics. At last there is a unifying framework that suspends long-held differences between science and art, and between external reality and internal belief."


"The word was first used by Plato to mean 'the art of steering' or 'the art of governing'. It was adopted in the 1940s at MIT to refer to a way of thinking about how complex systems coordinate themselves in action: "the science of control and communication, in the animal and the machine", as Wiener put it. Cybernetics was originally formulated as a way of producing mathematical descriptions of systems and machines. It solved the paradox of how fictional goals can have real-world effects by showing that information alone (detectable differences) can bring order to systems when that information is in a feedback relation with that system. This essentially bootstraps perception (detection of differences) into purpose.

In broad terms, cybernetics incorporates the following three key ideas: systemic dynamicity; homeostasis around a value; and recursive feedback."
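"Homeostasis around a value" and "recursive feedback" have a minimal executable form: a proportional controller steering a temperature toward a set point. This sketch is my own, with illustrative numbers; perception is the detected difference (error), and purpose is the feedback action that corrects it.

```python
def regulate(temp, setpoint=21.0, gain=0.5, steps=20):
    """Proportional feedback: each step, sense the error and correct
    a fraction of it. Returns the temperature trajectory."""
    history = [temp]
    for _ in range(steps):
        error = setpoint - temp   # perception: the detectable difference
        temp += gain * error      # action: feedback reducing that difference
        history.append(temp)
    return history

trace = regulate(temp=15.0)
print(round(trace[-1], 3))  # → 21.0 (the system homeostats around the set point)
```

With `gain=0.5` the deviation halves every step; a gain above 2 would overcorrect and diverge, which is the same stability question the reflexivity discussion earlier in this post turns on.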


"First order cybernetics: The cybernetics of observed systems.

Second order cybernetics: The cybernetics of observing systems."


"the study of justified intervention"

"Variety: a measure of the number of possible states or actions

Entropy: a probabilistic measure of variety

Self-organization: the spontaneous reduction of entropy in a dynamic system

Control: maintenance of a goal by active compensation of perturbations

Model: a representation of processes in the world that allows predictions

Constructivism: the philosophy that models are not passive reflections of reality, but active constructions by the subject

Cybernetics is the science that studies the abstract principles of organization in complex systems. It is concerned not so much with what systems consist of, but how they function.

Cybernetics focuses on how systems use information, models, and control actions to steer towards and maintain their goals, while counteracting various disturbances. Being inherently transdisciplinary, cybernetic reasoning can be applied to understand, model and design systems of any kind: physical, technological, biological, ecological, psychological, social, or
any combination of those. Second-order cybernetics in particular studies the role of the (human) observer in the construction of models of systems and other observers."
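The glossary terms above can be made concrete in a few lines (the distributions are my illustrative assumptions): variety counts possible states, entropy weights them by probability, and self-organization shows up as a fall in entropy as one state comes to dominate.

```python
from math import log2

def variety(states):
    """Variety: the number of distinct possible states."""
    return len(set(states))

def entropy(probs):
    """Shannon entropy in bits: a probabilistic measure of variety."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(variety(["on", "off", "standby"]))   # → 3
print(entropy([0.25, 0.25, 0.25, 0.25]))   # → 2.0 (maximal for 4 states)
print(entropy([0.97, 0.01, 0.01, 0.01]))   # well under 1 bit: an "organized" system
```

A dynamic system whose state distribution moves from the uniform case toward the concentrated one is, in this vocabulary, spontaneously reducing its entropy, i.e. self-organizing.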

"Equal Liberty and Justice Under Law:

Isonomia — equal law — is the historical and philosophical foundation of liberty, justice, and constitutional democracy. Aristotle considered it the core ingredient of a civilization that seeks to promote individual and societal happiness. First ordained by the ancient Athenian lawgiver Solon (c. 638-558 B.C.), isonomia was later championed by the Roman Republic's finest orator, Cicero; but it was subsequently eclipsed for a millennium, until (in effect) "rediscovered" in the eleventh century A.D. by the founders of the Western Legal Tradition, the law students of Bologna who synthesized the Greek genius for systematic thought, the Roman genius for pragmatic administration, and the medieval Judeo-Christian-Islamic preoccupation with the "uses" of faith and reason to secure a common humanity under a common deity.

Empathy, Liberty, Equality. These concepts are often linked, a triune "much of a muchness"; there is a good reason for this; it is rooted in the history of the idea of isonomia, the parent of demokratia.

Our most precious legacy is our understanding — originating in our evolving capacity for empathy — that we must be equal in our liberties and hence equal in the restraints upon our liberties.

We must be equal under the law, and hence equal in the making of laws. Liberty and law are coevolving, and any "constitutional democracy" worthy of this oh-so-precious name must reflect that sequence of foundational ideas.

I believe that Socrates' metaphor captures and "domesticates" the deep wisdoms of that ancient Greek trinity of ontology, epistemology, and teleology; he bends philosophy to his will, to human purposes, in ways that still ring true.

The art of the helmsman requires integrating knowledge of the changeless stars with knowledge of the naturally-changing — the winds and waves of circumstance — in order to choose, and act, and react, with reference to the angle of the rudder, the trim of the sail, but more ... to change these (a) in relation to each other, for each affects the other, and (b) in relation to achieving an ultimate goal, such as safe passage across open seas to prosperous harbors.

This is the art of cybernetics — goal-focused governance that cultivates and harvests feedback, continuously monitoring "progress" in light of hierarchies of facts and values, including ultimate objectives. Law is the quintessential cybernetic calling."

"The Letter of the Law is the literal meaning of the law (the actual wording of the law in the original text). The Spirit of the Law is the intent or purpose of the law (what the law-maker intended the law to do)."

"All levels of analysis from action theory to moral philosophy, from moral philosophy to practical reasoning, from practical reasoning to doctrinal considerations have in common an underlying teleological structure in terms of what goals are at stake, and how they are ordered. This teleological structure is isomorphic for all the above levels of analysis. This structure and this structure alone is the final arbitrator of legal doctrine, whether it is right or wrong, and whether it is correctly or incorrectly applied to a specific event. ... A structural teleological analysis of the law of tort will reveal that the goals of freedom of action and the prevention of physical harm to persons and property often come into conflict in the context of individual action, and consequently must be ordered against each other. ... Through human experience in the context of human action, we learn what we value and how we order values when they come into conflict such that we must choose one or the other. We then develop moral justificatory practices and practical reasons which we generalize as policies to justify the imposition of our value structure. These in turn are reflected in legal doctrine. There is thus an isomorphic teleological structure in terms of goals and their ordering which underlies legal doctrine. The prima facie duty doctrine, limited by a test of proximity, negates the common sense, empirically-based distinctions between physical injury to persons and property and pure economic loss, and the distinction between causing harm and failing to prevent it from happening. This doctrine runs contrary to the deep teleological structure of ordering of values which comes out of human experience.

The Yale Law Journal recently published a piece entitled The Most-Cited Articles from the Yale Law Journal. Nearly all of the named articles entailed a teleological analysis of law or an area of law in terms of what I have been calling "deep structure." Why is it that the kinds of analysis which lawyers, students, academics and judges find the most interesting seldom appear in legal argument or in the text of legal judgements? The reason is simple. The information costs of proving claims made in terms of policy and practical reasons are too high to be feasible in a court of law.

At the level of analysis of human action we recognize that there are many actions which everyone does most of the time. Because they are performed by a lot of people a lot of the time, limitations on such actions place limitations on our freedom to act. The risks of physical harm to persons or property fall reciprocally on nearly everyone. We are prepared, therefore, to impose no higher standard of care than that of reasonableness. Other actions, however, are done only by particular persons, on particular occasions, or in particular places. The risks of such actions are, therefore, non-reciprocal, and we can limit them to a greater degree without interfering with freedom of action in general. This set of goals and their ordering is isomorphically reflected in moral justificatory theories of fault and causation or strict liability. At the level of doctrine it is reflected in rules such as that of Rylands v. Fletcher. The teleological structure thus runs through all levels of analysis from action theory, moral theory and practical reasoning to doctrinal structure. If one takes most of the analyses of the most cited articles of the Yale Law Journal, one will see that the authors have identified a teleological structure which is isomorphic for the underlying philosophical, economic, practical, and doctrinal levels of the subject matter with which they are dealing."

"In linguistics, and especially the study of syntax in the tradition of generative grammar (also known as transformational grammar), the deep structure of a linguistic expression is a theoretical construct that seeks to unify several related structures. For example, the sentences "Pat loves Chris" and "Chris is loved by Pat" mean roughly the same thing and use similar words. Some linguists, in particular Noam Chomsky, have tried to account for this similarity by positing that these two sentences are distinct surface forms that derive from a common deep structure."

The Deep Structure of Law and Morality:

Utility maximization, morality, and religion

"John Rawls (1958, 1971) proposed two principles for creating a just society: (1) each person should be allowed the maximum amount of liberty that is compatible with everyone else having the maximum amount of liberty and (2) inequalities should be allowed only if it is reasonable to believe that the inequalities will be most beneficial to the least well off. The first of these principles takes precedence over the second, and the first requires that everyone's "liberty" be considered equally.

Joseph Fletcher, who developed situation ethics, said that when I make a choice, it should be the most loving option that I have (Thompson 2003, 51-52). Love requires that I place others on at least an equal basis with myself.

The categorical imperative, as developed by Immanuel Kant (1927a, b), requires two things: (1) that I make only choices that I could change into universal laws for everyone and (2) that I treat other people only as ends, and never as means. Changing choices into universal law would result in giving equal weight to everyone. The second of these rules is logically inconsistent with utility maximization because, according to utility theory, everything I do is a means to increase my own utility.

For example, utility theory would say that my giving money to a homeless person is done to increase my own utility, which implies that I am treating the homeless person as a means, not an end. All of these ethical theories directly imply that, to be moral, I must (as a necessary but not always a sufficient condition) place everyone else on the same level as myself when making choices. In contrast, utility maximization, as taught by economists, places 100 percent of the emphasis on the chooser's utility.

Under utility maximization, I will do things that increase the utility of others if and only if that increases my utility. My utility is primary. Most important, it is theoretically impossible to imagine everyone having utility functions in which everyone else's utility has a weight equal to the chooser's utility because such utility functions would be infinitely recursive. This does not mean that it is impossible to be moral.

Instead, being moral requires that I use rationality to separate my thinking from my own utility function. If I can divorce my thinking from my own utility function and view everyone on an equal basis, then I am more likely to make choices that produce the maximum good for the maximum number of people. Furthermore, then I can make choices that treat people as ends, not means."

"The Human Use of Human Beings is a book by Norbert Wiener. It was first published in 1950 and revised in 1954. Wiener was the founding thinker of cybernetics theory and an influential advocate of automation. Human Use argues for the benefits of automation to society. It analyzes the meaning of productive communication and discusses ways for humans and machines to cooperate, with the potential to amplify human power and release people from the repetitive drudgery of manual labor, in favor of more creative pursuits in knowledge work and the arts. He explores how such changes might harm society through dehumanization or subordination of our species, and offers suggestions on how to avoid such risks."

Morality, Maximization, and Economic Behavior:

"On the surface, Bentham's doctrine bears a resemblance to the ancient Greek philosophy of hedonism, which also held that moral duty is fulfilled in the gratification of pleasure-seeking interests. But hedonism prescribes individual actions without reference to the general happiness. Utilitarianism added to hedonism the ethical doctrine that human conduct should be directed toward maximizing the happiness of the greatest number of people. "The greatest happiness for the greatest number" was the watch phrase of the utilitarians, those who came to share in Bentham's philosophy. Among them were such personalities as Edwin Chadwick and the father-son combination of James and John Stuart Mill. This group championed legislation plus social and religious sanctions that punished individuals for harming others in the pursuit of their own happiness. Bentham defined his principle in the following fashion:

By the principle of utility is meant that principle which approves or disapproves of every action whatsoever, according to the tendency which it appears to have to augment or diminish the happiness of the party whose interest is in question... not only of every action of a private individual, but of every measure of government (Principles of Morals and Legislation, p. 17).

What is noteworthy about this declaration is the very minimal distinction Bentham made between morals and legislation. His self-conceived mission was to make the theory of morals and legislation scientific in the Newtonian sense. As Newton's revolutionary physics hinged on the universal principle of attraction (i.e., gravity), Bentham's theory of morals swung on the principle of utility. Newton's roundabout influence on the social sciences was felt in other ways as well. The nineteenth century was one with a passion for measurement. In the social sciences, Bentham rode the crest of this new wave. If pleasure and pain could be measured in some objective sense, then every legislative act could be judged on welfare considerations. This achievement required a conception of the general interest, which Bentham readily undertook to provide."

Lemuel: Hamid, maximization as one would see it under many constraints?

Hamid: Yes, under many constraints (polytely), some of which are redundant (isotely); yet in some way all are mutually dependent, in heterarchical/hierarchical relationships representing the ordering of values, which are spatially distributed and "inter-temporal". Through "cross-acquisition" of orderings, there are certain invariants which they share in common (koinotely).

Lemuel: That would make me wonder what constraints utilitarians put into their equations to maximize utility. As always, when more constraints than usual are included, people react.

Hamid: Indeed they do, predictably in many cases, which is why it's important to design the constraints in ways that, to some degree, anticipate and minimize deviations from local-global complementarity.
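The dialogue above is about maximization under many constraints, some redundant. As a minimal sketch of that idea, here is a toy brute-force maximization; the utility function, bounds, and constraints are invented for illustration and not drawn from any particular utilitarian model:

```python
# Toy sketch of maximization under multiple constraints ("polytely").
# The numbers and functions here are invented for illustration.

def utility(x, y):
    """A simple joint utility over two quantities."""
    return 3 * x + 2 * y

def feasible(x, y):
    budget = x + y <= 10          # a binding constraint
    capacity = x <= 6             # binding for x
    redundant = x + y <= 20       # "isotely": already implied by the budget
    return budget and capacity and redundant

best = max(
    ((x, y) for x in range(11) for y in range(11) if feasible(x, y)),
    key=lambda p: utility(*p),
)
print(best, utility(*best))  # (6, 4) 26
```

Note that dropping the redundant constraint leaves the optimum unchanged, which is exactly what makes it isotelic.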

Anticipatory Topoi:


Develop a geometric/logical approach to some concepts related to anticipatory systems (viewed as transition systems)

Logical/geometric characterisation of an anticipatory operator on systems.

"True, there is a fundamental difference between our ability to explore human behavior, or study bacteria or electrons. Bacteria don't get annoyed at you when you put them under a microscope. The moon will not sue you for landing a spacecraft on its face. Electrons are not subject to privacy laws. Yet, none of us want to submit to the invasive inquiry to which we subject our bacteria, our planets, or our electrons--aiming to know everything about us, all the time. In Bursts: The Hidden Pattern Behind Everything We Do, I try to convince you that this is about to change, with profound consequences.

Indeed, today just about everything we do leaves digital breadcrumbs in some database. Our e-mails are preserved in the log files of our e-mail provider; when, where, and what we shop for, our taste and our ability to pay for it is catalogued by our credit card provider; our face and fashion is remembered by countless surveillance cameras installed everywhere, from shopping malls to street corners. While we often choose not to think about it, the truth is that we are under multiple microscopes and our life, with minute resolution, can be pieced together from these mushrooming databases. And the measurements my research group and other scientists have performed on some of these datasets show something rather unexpected: they not only track our past, but they reveal our future as well. Indeed, by studying the communication and movement of millions of individuals through the electronic records they left behind, like mobile phone records, we have found a huge degree of predictability of individual behavior. The measurements told us that to those familiar with our past, our future acts should rarely be a surprise.

As we follow our impulses and daily priorities, we rarely realize that we submit ourselves to mathematically precise laws that describe our activities. The patterns are by no means new--they drove human behavior for centuries, dominating everything from wars to Einstein's correspondence. Our ability to collect these patterns is new, however, allowing us to extract the laws that govern some of our most intimate moments. And as we did that, we learned that everything we do, we do in bursts--brief periods of intensive activity followed by long periods of nothingness. These bursts are so essential to human nature that trying to avoid them is not only foolish, but futile as well."
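Barabási's "bursts" (short periods of intense activity separated by long lulls) can be quantified with the burstiness coefficient B = (σ − μ)/(σ + μ) over inter-event times, which is near 0 for Poisson-like activity and approaches 1 for heavy-tailed, bursty activity. A small simulation sketch, with all distribution parameters chosen only for illustration:

```python
import random
import statistics

random.seed(0)

def burstiness(intervals):
    """Burstiness coefficient (sigma - mu)/(sigma + mu):
    near 0 for Poisson-like activity, toward 1 for bursty activity."""
    mu = statistics.mean(intervals)
    sigma = statistics.pstdev(intervals)
    return (sigma - mu) / (sigma + mu)

# Poisson-like process: exponential inter-event times.
poisson = [random.expovariate(1.0) for _ in range(10000)]

# Bursty process: heavy-tailed (Pareto) inter-event times.
bursty = [random.paretovariate(1.5) for _ in range(10000)]

print(round(burstiness(poisson), 2))  # close to 0
print(round(burstiness(bursty), 2))   # substantially larger
```

The heavy tail is what produces the "long periods of nothingness" the quote describes: most intervals are short, but a few are enormous.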

"Overcoming Bias began in November ‘06 as a group blog on the general theme of how to move our beliefs closer to reality, in the face of our natural biases such as overconfidence and wishful thinking, and our bias to believe we have corrected for such biases, when we have done no such thing."

Semantic Enhancement of Legal Information… Are We Up for the Challenge?


Since 1960, we have fought for a world of deep electronic documents-- with side-by-side intercomparison and frictionless re-use of copyrighted material. We have an exact and simple structure. The Xanadu model handles automatic version management and rights management through deep connection. Today's popular software simulates paper. The World Wide Web (another imitation of paper) trivializes our original hypertext model with one-way ever-breaking links and no management of version or contents. WE FIGHT ON.

"Reflection is a processing paradigm whereby a computer programme is characterized by its ability to modify itself during its own execution. Such a programme can be said to have the ability to "observe" itself and to change its own structure and behavior."
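The reflection paradigm described above is concrete in dynamic languages. A minimal sketch in Python, where a running program "observes" an object's structure and then rewrites part of its own behavior mid-execution:

```python
# Minimal sketch of reflection: a program inspects itself and
# modifies its own structure and behavior during execution.

class Counter:
    def step(self):
        return 1

c = Counter()
print(c.step())  # 1

# "Observe" the object: enumerate its public callable attributes.
methods = [name for name in dir(c) if not name.startswith("_")]
assert "step" in methods

# Modify structure during execution: replace the method on the class.
Counter.step = lambda self: 2
print(c.step())  # 2
```

The same instance now behaves differently because the program altered its own definition at runtime, which is the "self-observation plus self-modification" pair the quote refers to.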

"Reflective equilibrium is a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments. Although he did not use the term, philosopher Nelson Goodman introduced the method of reflective equilibrium as an approach to justifying the principles of inductive logic."


"In his recent book Self-Modifying Systems in Biology and Cognitive Science (1991), George Kampis has outlined a new approach to the dynamics of complex systems. The key idea is that the Church-Turing thesis applies only to simple systems. Complex biological and psychological systems, Kampis proposes, must be modeled as nonprogrammable, self-referential systems called "component-systems."

In this chapter I will approach Kampis's component-systems with an appreciative but critical eye. And this critique will be followed by the construction of an alternative model of self-referential dynamics which I call "self-generating systems" theory. Self-generating systems were devised independently of component-systems, and the two classes of systems have their differences. But on a philosophical level, both formal notions are getting at the same essential idea. Both concepts are aimed at describing systems that, in some sense, construct themselves. As I will show in later chapters, this is an idea of the utmost importance to the study of complex psychological dynamics."

"With respect to the origin of any self-determinative, perfectly self-contained system, the feedback is ontological in nature and therefore more than cybernetic. Accordingly, ontological feedback bears description as “precybernetic” or “metacybernetic”. Indeed, because of their particularly close relationship, the theories of information, computation and cybernetics are all in line for a convergent extension… an extension that can, in a reality-theoretic context, lay much of the groundwork for a convergent extension of all that is covered by their respective formalisms.

Ordinary feedback, describing the evolution of mechanical (and with somewhat less success, biological) systems, is cyclical or recursive. The system and its components repeatedly call on internal structures, routines and actuation mechanisms in order to acquire input, generate corresponding internal information, internally communicate and process this information, and evolve to appropriate states in light of input and programming. However, where the object is to describe the evolution of a system from a state in which there is no information or programming (information-processing syntax) at all, a new kind of feedback is required: telic feedback.

Diagram 2: The upper diagram illustrates ordinary cybernetic feedback between two information transducers exchanging and acting on information reflecting their internal states. The structure and behavior of each transducer conforms to a syntax, or set of structural and functional rules which determine how it behaves on a given input. To the extent that each transducer is either deterministic or nondeterministic (within the bounds of syntactic constraint), the system is either deterministic or “random up to determinacy”; there is no provision for self-causation below the systemic level. The lower diagram, which applies to coherent self-designing systems, illustrates a situation in which syntax and state are instead determined in tandem according to a generalized utility function assigning differential but intrinsically-scaled values to various possible syntax-state relationships. A combination of these two scenarios is partially illustrated in the upper diagram by the gray shadows within each transducer.

The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is self-descriptive or autologous, intrinsically and retroactively defined within the system, and “pre-informational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively self-configures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic self-utility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal.
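Langan's telic feedback has no standard implementation, but the structural contrast he draws (state evolving under a fixed law versus law and state being selected "in tandem" to maximize a generalized utility) can be loosely illustrated. In this toy sketch, all rules, states, and the scoring function are invented; it is an analogy for the joint search over syntax-state pairs, nothing more:

```python
# Loose toy analogue of "syntax and state determined in tandem":
# instead of fixing one rule and evolving a state under it, search
# jointly over (rule, state) pairs for the pair that maximizes an
# invented "generalized utility". All ingredients are made up.

rules = {
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}
states = range(-3, 4)

def gen_utility(rule_name, x):
    """Invented score over a rule-state pair."""
    y = rules[rule_name](x)
    return y - abs(x)  # reward the output, penalize large states

best = max(((r, x) for r in rules for x in states),
           key=lambda p: gen_utility(*p))
print(best, gen_utility(*best))
```

The point of the sketch is only that the optimizer ranges over the rule as well as the state, whereas ordinary dynamical models optimize (or merely evolve) the state under a rule held fixed.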

A system that evolves by means of telic recursion – and ultimately, every system must either be, or be embedded in, such a system as a condition of existence – is not merely computational, but protocomputational. That is, its primary level of processing configures its secondary (computational and informational) level of processing by telic recursion. Telic recursion can be regarded as the self-determinative mechanism of not only cosmogony, but a natural, scientific form of teleology.
Telic recursion occurs in two stages, primary and secondary (global and local). In the primary stage, universal (distributed) laws are formed in juxtaposition with the initial distribution of matter and energy, while the secondary stage consists of material and geometric state-transitions expressed in terms of the primary stage. That is, where universal laws are syntactic and the initial mass-energy distribution is the initial state of spacetime, secondary transitions are derived from the initial state by rules of syntax, including the laws of physics, plus telic recursion. The primary stage is associated with the global telor, reality as a whole; the secondary stage, with internal telors (“agent-level” observer-participants). Because there is a sense in which primary and secondary telic recursion can be regarded as “simultaneous”, local telors can be said to constantly “create the universe” by channeling and actualizing generalized utility within it.

Diagram 13: The above diagram illustrates the relationship of primary and secondary telic recursion, with the latter “embedded in” or expressed in terms of the former. The large circles and arrows represent universal laws (distributed syntax) engaged in telic feedback with the initial state of spacetime (initial mass-energy distribution), while the small circles and arrows represent telic feedback between localized contingent aspects of syntax and state via conspansion. The primary stage maximizes global generalized utility on an ad hoc basis as local telors freely and independently maximize their local utility functions. The primary-stage counterparts of inner expansion and requantization are called coinversion and incoversion. It is by virtue of telic recursion that the SCSPL universe can be described as its own self-simulative, self-actualizative “quantum protocomputer”.

Deterministic computational and continuum models of reality are recursive in the standard sense; they evolve by recurrent operations on state from a closed set of “rules” or “laws”. Because the laws are invariant and act deterministically on a static discrete array or continuum, there exists neither the room nor the means for optimization, and no room for self-design. The CTMU, on the other hand, is conspansive and telic-recursive; because new state-potentials are constantly being created by evacuation and mutual absorption of coherent objects (syntactic operators) through conspansion, metrical and nomological uncertainty prevail wherever standard recursion is impaired by object sparsity. This amounts to self-generative freedom, hologically providing reality with a “self-simulative scratchpad” on which to compare the aggregate utility of multiple self-configurations for self-optimizative purposes.

Standard recursion is “Markovian” in that when a recursive function is executed, each successive recursion is applied to the result of the preceding one. Telic recursion is more than Markovian; it self-actualizatively coordinates events in light of higher-order relationships or telons that are invariant with respect to overall identity, but may display some degree of polymorphism on lower orders. Once one of these relationships is nucleated by an opportunity for telic recursion, it can become an ingredient of syntax in one or more telic-recursive (global or agent-level) operators or telors and be “carried outward” by inner expansion, i.e. sustained within the operator as it engages in mutual absorption with other operators. Two features of conspansive spacetime, the atemporal homogeneity of IEDs (operator strata) and the possibility of extended superposition, then permit the telon to self-actualize by “intelligently”, i.e. telic-recursively, coordinating events in such a way as to bring about its own emergence (subject to various more or less subtle restrictions involving available freedom, noise and competitive interference from other telons). In any self-contained, self-determinative system, telic recursion is integral to the cosmic, teleo-biological and volitional levels of evolution.
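The "Markovian" property named above has a simple computational reading: each step of a standard recursion sees only the previous result. A history-dependent update, by contrast, chooses its next value in light of the whole trajectory. This sketch illustrates only that distinction, not telic recursion itself; both update functions are invented examples:

```python
# Standard ("Markovian") recursion: each step depends only on the
# immediately preceding result. Contrast: an update that depends on
# the full history of the trajectory.

def markov_steps(f, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(f(xs[-1]))   # sees only the last state
    return xs

def history_steps(g, x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs))       # sees the entire history so far
    return xs

print(markov_steps(lambda x: 2 * x, 1, 4))    # [1, 2, 4, 8, 16]
print(history_steps(lambda h: sum(h), 1, 4))  # [1, 1, 2, 4, 8]
```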
The Telic Principle differs from anthropic principles in several important ways. First, it is accompanied by supporting principles and models which show that the universe possesses the necessary degree of circularity, particularly with respect to time. In particular, the Extended Superposition Principle, a property of conspansive spacetime that coherently relates widely-separated events, lets the universe “retrodict” itself through meaningful cross-temporal feedback.

Moreover, in order to function as a selection principle, it generates a generalized global selection parameter analogous to “self-utility”, which it then seeks to maximize in light of the evolutionary freedom of the cosmos as expressed through localized telic subsystems which mirror the overall system in seeking to maximize (local) utility. In this respect, the Telic Principle is an ontological extension of so-called “principles of economy” like those of Maupertuis and Hamilton regarding least action, replacing least action with deviation from generalized utility." - Langan, 2002, PCID

HYJ: There is a why contained in every because, and vice versa; this allows them to mutually refine each other until reflective equilibrium is found. By doing so, purpose becomes a global invariant, like the principle of least action... maybe we are here to figure out why we are here; in the process, values multiply, constraints are redesigned, local/global utility tends to maximize, and choices are enabled, waiting to be made.

"In physics, the principle of least action or more accurately principle of stationary action is a variational principle which, when applied to the action of a mechanical system, can be used to obtain the equations of motion for that system. The principle led to the development of the Lagrangian and Hamiltonian formulations of classical mechanics.

The principle remains central in modern physics and mathematics, being applied in the theory of relativity, quantum mechanics and quantum field theory, and a focus of modern mathematical investigation in Morse theory. This article deals primarily with the historical development of the idea; a treatment of the mathematical description and derivation can be found in the article on the action. The chief examples of the principle of stationary action are Maupertuis' principle and Hamilton's principle."
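In symbols, the principle of stationary action says that the physical trajectory extremizes the action functional, and Hamilton's form of the principle yields the Euler-Lagrange equations of motion:

```latex
S[q] = \int_{t_1}^{t_2} L\bigl(q, \dot{q}, t\bigr)\, dt,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0.
```

Here $L$ is the Lagrangian of the system and $q(t)$ its generalized coordinates; Langan's analogy above replaces the action being extremized with deviation from generalized utility.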

"While an ordinary grammar recursively processes information or binds informational potential to an invariant syntax that distributes over its products, Γ grammar binds telesis, infocognitive potential ranging over possible relationships of syntax and state, by cross-refining syntax and its informational content through telic recursion. Telic recursion is the process responsible for configuring the syntax-content relationships on which standard informational recursion is based; its existence is an ontological requirement of reality. The telic-recursive cross-refinement of syntax and content is implicit in the “seed” of Γ-grammar, the MU form, which embodies the potential for perfect complementarity of syntax and state, law and matter.

Since this potential can only be specifically realized through the infocognitive binding of telesis, and localized telic binding is freely and independently effected by localized, mutually decoherent telic operators, deviations from perfect complementarity are ubiquitous. SCSPL evolution, which can be viewed as an attempt to help this complementarity emerge from its potential status in MU, incorporates a global (syntactic) invariant that works to minimize the total deviation from perfect complementarity of syntax and state as syntactic operators freely and independently bind telesis. This primary SCSPL invariant, the Telic Principle, takes the form of a selection function with a quantitative parameter, generalized utility, related to the deviation. The Telic Principle can be regarded as the primary component of SCSPL syntax…the spatiotemporally distributed self-selective “choice to exist” coinciding with MU."

"Christopher Michael Langan's HI Q & A

Q: What are your thoughts on moral relativism? Can an action be classified as *objectively* evil or is it only relatively so, depending on one's viewpoint? (Based on a question posted to the Ultranet by Mike Hess after the 9-11 attacks.)

A: Moral relativism says that utility is context-sensitive...that whether an act results in positive net utility (is "good") or negative net utility (is "evil") cannot be decided except with respect to an arbitrary psycho-social frame in which utility is defined, and that frames are essentially incommensurate.

However, since one thing can have utility in more than one frame, intersecting content provides a basis for entanglement of utility functions. For example, if there are two hungry people A and B on a desert island and nothing to eat but one mango hanging from a tree, their individual utility functions both acquire the mango as an argument. Indeed, where teamwork has utility - and this is the rule in human affairs - A and B are acquired by each other's utility function (e.g., suppose that the only way A or B can reach the mango is to support or be supported by the other from below).

In a system dominated by competition and cooperation - a system like the real world - this cross-acquisition is a condition of interaction. But given a system with interacting elements, we have a systemic identity, i.e. a distributive self-transformation applying symmetrically to every element (frame) in the system, and this implies the existence of a mutual transformation relating different elements and ultimately rendering them commensurate after all. So "absolute moral relativism" fails in interactive real-world contexts. It's a logical absurdity."
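The mango example can be cast as a tiny coordination game. The payoff numbers below are invented for illustration (the passage gives none); the point is that once each player's utility function "acquires" the other, joint cooperation becomes a stable outcome:

```python
# Toy payoff table for the desert-island mango example: A and B each
# Cooperate (support the other) or Not; only joint cooperation reaches
# the mango. Payoff values are invented for illustration.

payoff = {
    ("C", "C"): (1, 1),   # both eat (they share the mango)
    ("C", "N"): (0, 0),
    ("N", "C"): (0, 0),
    ("N", "N"): (0, 0),
}

def is_nash(a, b):
    """Neither player can gain by unilaterally switching."""
    ua, ub = payoff[(a, b)]
    best_a = all(ua >= payoff[(a2, b)][0] for a2 in "CN")
    best_b = all(ub >= payoff[(a, b2)][1] for b2 in "CN")
    return best_a and best_b

equilibria = [(a, b) for a in "CN" for b in "CN" if is_nash(a, b)]
print(equilibria)  # [('C', 'C'), ('N', 'N')]
```

Cross-acquisition shows up in the payoff structure itself: each player's utility depends on the other's choice, which is what makes mutual support an equilibrium.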

"Why, if there exists a spiritual metalanguage in which to establish the brotherhood of man through the unity of sentience, are men perpetually at each others' throats? Unfortunately, most human brains, which comprise a particular highly-evolved subset of the set of all reality-subsystems, do not fire in strict S-isomorphism much above the object level. Where we define one aspect of "intelligence" as the amount of global structure functionally represented by a given s∈S, brains of low intelligence are generally out of accord with the global syntax D(S). This limits their capacity to form true representations of S (global reality) by syntactic autology [d(S) ⊃d d(S)] and make rational ethical calculations. In this sense, the vast majority of men are not well-enough equipped, conceptually speaking, to form perfectly rational worldviews and societies; they are deficient in education and intellect, albeit remediably so in most cases. This is why force has ruled in the world of man…why might has always made right, despite its marked tendency to violate the optimization of global utility derived by summing over the sentient agents of S with respect to space and time."

"Type spaces are mathematical structures used in theoretical parts of economics and game theory. They are used to model settings where agents are described by their types, and these types give us “beliefs about the world”, “beliefs about each other's beliefs about the world”, “beliefs about each other's beliefs about each other's beliefs about the world”, etc. That is, the formal concept of a type space is intended to capture in one structure an unfolding infinite hierarchy related to interactive belief."
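A finite type space makes the belief hierarchy computable: each type carries a probability distribution over the other agent's types, and composing those distributions yields second-order beliefs. All types and probabilities below are invented for illustration:

```python
# Minimal finite type space sketch: two agents, two types each.
# A type's "belief" is a distribution over the other agent's types;
# composing beliefs yields the second level of the hierarchy.
# All numbers are invented for illustration.

# Agent 1's types and their beliefs about agent 2's types.
belief_1 = {"t1": {"s1": 0.7, "s2": 0.3},
            "t2": {"s1": 0.2, "s2": 0.8}}
# Agent 2's types and their beliefs about agent 1's types.
belief_2 = {"s1": {"t1": 0.5, "t2": 0.5},
            "s2": {"t1": 0.9, "t2": 0.1}}

def second_order(t):
    """Agent 1 type t's expectation of agent 2's belief that 1 is t1."""
    return sum(p * belief_2[s]["t1"] for s, p in belief_1[t].items())

print(round(second_order("t1"), 2))  # 0.62
```

Iterating the same composition generates the third level and beyond, which is the "unfolding infinite hierarchy" the quote packs into a single structure.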

"We first present the concept of duality appearing in order theory, i.e. the notions of dual isomorphism and of Galois connection. Then we describe two fundamental dualities, the duality extension/intention associated with a binary relation between two sets, and the duality between implicational systems and closure systems. Finally we present two «concrete» dualities occurring in social choice and in choice functions theories." Some order dualities in logic, games and choices
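The extension/intension duality mentioned above is easy to exhibit concretely: any binary relation between objects and attributes induces an antitone Galois connection whose composite is a closure operator (the basic construction of formal concept analysis). The objects, attributes, and relation in this sketch are made up for illustration:

```python
# The extent/intent Galois connection induced by a binary relation R
# between objects and attributes. Objects, attributes, and R are
# invented for illustration.

objects = {"o1", "o2", "o3"}
attributes = {"a", "b", "c"}
R = {("o1", "a"), ("o1", "b"), ("o2", "b"), ("o3", "b"), ("o3", "c")}

def intent(objs):
    """Attributes shared by every object in objs."""
    return {m for m in attributes if all((g, m) in R for g in objs)}

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {g for g in objects if all((g, m) in R for m in attrs)}

# (intent, extent) is an antitone Galois connection, so the composite
# extent(intent(.)) is a closure operator on sets of objects.
print(sorted(intent({"o1"})))            # ['a', 'b']
print(sorted(extent(intent({"o2"}))))    # ['o1', 'o2', 'o3']
```

Note the antitone behavior: enlarging a set of objects can only shrink its shared intent, and vice versa, which is exactly the duality between extension and intension the quote describes.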

"Game Theory can be roughly divided into two broad areas: non-cooperative (or strategic) games and co-operative (or coalitional) games. The meanings of these terms are self-evident, although John Nash claimed that one should be able to reduce all co-operative games into some non-cooperative form. This position is what is known as the "Nash Programme". Within the non-cooperative literature, a distinction can also be made between "normal" form games (static) and "extensive" form games (dynamic)."[94]

"It is argued that virtually all coalitional, strategic and extensive game formats as currently employed in the extant game-theoretic literature may be presented in a natural way as discrete nonfull, or even (under a suitable choice of morphisms) as full, subcategories of Chu(Poset, 2)." On Game Formats and Chu Spaces

Lawrence S. Moss: Articles
