Tuesday, October 19, 2010

Service Science, Mechanism Design, Implicit Functions, Fractal Time, Nonlinear Objectives, Swarm Telentropy, Positive Psychology

Image: http://digitalserviceinnovations.wordpress.com/

Enterprise Design Process:
Business Processes
Johan Strümpfer
Enterprise Design






*A regulated set of relationships
*Interacting and interrelated parts
*Parts organised for a purpose
*A whole with novel features




*Relationships that remain unchanged
*Duration of interest
*Stability and relative change

Process view: PURPOSE


-Functional division
-The whole is integrated at the top
-Optimisation of the parts yields optimisation of the whole

-Process division
-The whole is integrated at the bottom
-Optimisation of the whole is different from optimisation of the parts

*Organise around outcomes, not tasks
*Let output consumers produce output
*Integrate information processing with real work producing the information
*Place decision making where work is performed and build control into process
*Treat geographically dispersed resources as centralised
*Link parallel activities instead of integrating results
*Capture information once and at source
M. Hammer, HBR, 1990

*Re-work the transformation, not the output.
*Singular (insular) view (process) of the organisational structure
*Substitution of one basis for organisation for another
*Heavy dependence on IT perspective
*Patchwork of (some good) concepts; lacks rigour
*Design orientation
*Transcends current boundaries
*Promotes questioning --- What framework?
*Stretches value chain thinking

Endo, Exo, Centro-teleon

(Gyuri Jaros & Anakrion Cloete)
Woven mat of processes:

*Sets of connected activities aimed at purpose
*Interlinked and intersecting processes
*Production processes
*Support processes




-Low telentropy = good chance of achieving goal
-High telentropy = low chance of achieving goal
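One way to make these two bullets precise is to model telentropy as the surprisal of goal achievement. The formula below is an illustrative assumption of mine, not something defined in the Biomatrix literature quoted here:

```python
import math

def telentropy(p_goal: float) -> float:
    """Surprisal (in bits) of achieving the goal.

    NOTE: this is one plausible formalisation, chosen because it matches
    the bullets above monotonically: a high chance of success gives low
    telentropy, a vanishing chance gives unboundedly high telentropy.
    """
    if not 0.0 < p_goal <= 1.0:
        raise ValueError("p_goal must be in (0, 1]")
    return -math.log2(p_goal)

# Low telentropy = good chance of achieving the goal, and vice versa.
print(telentropy(0.9))   # small value
print(telentropy(0.1))   # large value
```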


*DESIGN A DESIGN: Model of what ought to be
*CRITICAL REFLECTION: Template for questioning design and reality
*ALIGNMENT: Building up SHARED model of how business works
*PARTICIPATION: Framework for participative design

Biomatrix Theory is a process and web-based systems theory. It is a meta theory which integrates the major systems approaches, models and theoretical concepts developed by other systems thinkers into one coherent theoretical framework. This integration is made possible by the unique conceptual contributions of Biomatrix theory.

The term Biomatrix is derived from the words bios (life) and matrix (mould, womb or pattern). Thus, it literally means pattern of life, or how life is organised. We use the term Biomatrix to describe the whole web of life or the web of all interacting systems on earth.

The fundamental unit of observation of Biomatrix Theory is purposeful, structured and regulated process, which is referred to as “activity system” (in some of our research articles we also call it process system or teleon). Activity systems link up with each other to form supply chains across and along levels in the systems hierarchy. These supply chains interact with each other in a multitude of ways. In fact, one can view the whole web of life (i.e. the Biomatrix) as a web of interacting supply chains. This gives rise to a web-based view of the world. At various points in the web the interaction of activity systems becomes dense and gives rise to field-like entity systems.

Besides making unique conceptual contributions, Biomatrix Theory also integrates the various systems concepts, models and approaches of other systems thinkers (e.g. of the systems dynamics and ideal systems redesign schools, amongst others) into one coherent meta-systems theory. This integration of the field of General Systems Theory into Biomatrix Theory is a synergistic integration, whereby - to paraphrase a famous systems dictum - Biomatrix Theory is more than the sum of the conceptual parts derived from the various other systems approaches.

Biomatrix Theory makes some unique conceptual contributions to systems thinking, namely:
A distinction between activity and entity systems. Analogous to a fishing net that consists of threads and knots, the Biomatrix is a web that consists of activity systems (i.e. thread-like or vector-like systems) and entity systems (i.e. knot-like or field-like systems). Examples of an entity system are the planet, a society, an organization, an individual, a cell and an atom, while activity system refers to the various activities or functions performed by these entity systems. An entity system emerges from a field of interacting activity systems, yet is more than the sum of its participating activity systems.

This distinction between activity and entity systems is important, because the design and management of entity and activity systems involve different methods and theoretical guiding principles.

Entity systems are characterised by a three-fold organisation. In terms of their purpose, entity systems consist of three types of activity systems, namely outward, inward and self-directed systems.

Amongst others, this three-fold organisation gives rise to a generic organisational structure, namely a three-dimensional process matrix. (We regard this matrix structure as the new organisational structure of the information age, replacing that of the traditional hierarchy of the industrial age.)

Entity systems and activity systems interact and co-produce each other. Activity systems give rise to entity systems and vice versa. Thereby all systems co-produce each other and co-evolve.

This implies that continuous change is inevitable and that systems need to be designed and structured to manage ongoing change without losing stability, similar to the surfer who needs to keep moving to effect a stable ride, wave after wave.
Co-production and co-evolution occur across levels. An entity system emerges from the interaction with systems in the outer and inner environment and with itself. Thus, systems co-evolve across three levels.

This implies that a systems intervention needs to span three levels - the interaction of a system with its outer environment, its inner environment and itself (e.g. self-reference, self-reflection and self-management). Systems emerge in the middle from the co-production across three levels. This concept of the emerging middle is a contribution of Biomatrix Theory to evolutionary theory in general.

The Biomatrix consists of three interacting sub-webs. One can distinguish between three types of systems - systems that evolved in nature, systems that emerge from the mind of sentient beings and their interaction with each other (i.e. psychological and social systems) and systems produced by them (i.e. technological systems). We refer to these qualitatively different systems as the naturosphere, psycho-sociosphere and technosphere.

In spite of sharing the same organizational principles, these three types of systems also show differences in organization, thereby requiring different problem (dis)solving approaches and interventions. Managing the interface between them raises issues of carrying capacity and sustainability, amongst others.

Biomatrix Theory emphasises the duality of process and structure. Analogous to the wave-particle duality in physics, Biomatrix Theory emphasises the dual and complementary aspects of the process and structure perspectives of a system and outlines the organising principles associated with each.

This duality of perspectives gives rise to a worldview that balances change and stability, connectivity and containment, amongst others.
Biomatrix Theory emphasises the duality of organisation in time and space. The existence and continuity of the Biomatrix in time and space gives rise to different organising principles in terms of time and space.

Harmonious co-existence between systems requires management from both perspectives. Likewise, the sustainable development of systems must be managed from a temporal and spatial perspective.

Systems link up with each other through tapping. A contribution offered by one system to another needs to be tapped by the receiving system in order to continue. Thus tapping facilitates the continuity of flow of substance, purpose and regulation across system boundaries. The tapping interface also highlights the boundaries between systems.

Without tapping there is no continuity of systems. If tapping does not take place, it can be mediated. During tapping the responsibility shifts from one system to another which has governance implications (e.g. power issues).

The substance of a system comprises mei fields. The substance of a system is an interacting field of matter, energy and information, or mei fields. It is also referred to as the resources of the system.

Process and supply chain design and management need to consider the optimisation of mei flow and the splitting of mei fields during processing into products and by-products, which become part of different supply chains. The mei composition is also of relevance in resource management.

Systems have a conceptual and physical reality. Analogous to a house that is built (i.e. in physical reality) according to a plan (i.e. its conceptual reality), the physical (Mei) reality of a system is in-formed (i.e. put into form) according to its conceptual (meI) reality. Both types of systems are real, with feedback loops between them.

A fault in the conceptual reality of a system will lead to a faulty physical reality of the system. A systems redesign represents a change in the conceptual reality of the system. A systemic performance management system in an organisation links the two realities, allowing continuous improvement of both.

There are seven forces of organisation in a system. Biomatrix Theory identifies seven aspects of systems organisation, namely ethos, aims, process, structure, governance, substance and environmental interaction. Each of these aspects represents a different force that co-produces the overall organisation of a system.

Optimal development of systems requires the development of the system in terms of each of the seven organising aspects (whereby each aspect is associated with different change management approaches), as well as the management of coherence and integration between the different systems aspects.

One can distinguish two types of change within a system. The seven forces of organisation interact with each other to give rise to two fundamentally different flows of change, namely a clockwise flow of intended change and a counter-clockwise one of inherent change.

This distinction of different types of change provides an understanding of how systems develop, change and transform and how one needs to manage change within a system.
The various spatial organising principles of Biomatrix Theory give rise to a generic systems dynamics model. The three types of activity systems within an entity system, the hierarchical organisation of entity systems, the continuity of activity systems along and across levels, and the multi-dimensionality of systems provide a generic systems dynamics. More specifically, the generic systems dynamics within the Biomatrix involves a multi-dimensional inward, outward and self-directed flow of purpose and its associated flow of substance. This generic systems dynamics provides a generic framework to analyse the flow and impact of change throughout the Biomatrix. It prompts the systematic, as well as systemic, identification of the variables of a systems dynamics model.

The generic systems dynamics allows for multi-dimensional interaction analysis along and across levels which is useful in both, systems analysis and systems (re)design.
Systems have a teleonic nature. Biomatrix Theory suggests that systems are teleonic, meaning that their activities are driven by a “purpose”. This purpose can be evolved, emergent or designed.

“Without vision, the systems perish”. A change in teleos (purpose or aim) will lead to a fundamental change of the system.

There is telentropy in a system. Until the outcomes of the activity have actually been achieved, systems have telentropy, implying uncertainty of outcomes. Put differently, the concept of telentropy links the teleonic (i.e. conceptual) reality of a system to its physical reality as expressed by the mei flow and configuration of the system, whereby telentropy refers to a misalignment or gap between the two. Because of the interaction of systems, this telentropy is passed from one system to another, following the generic systems dynamics of the Biomatrix.

Telentropy needs to be managed. The method of tracing the nature and flow of telentropy through the generic systems dynamics of the Biomatrix is referred to as telentropy tracing. It is also a useful tool for problem analysis within and across systems and for optimising the interaction between systems across system boundaries (e.g. in supply chain management).
The spatial and temporal organising principles give rise to frameworks for problem analysis and problem (dis)solving. The spatial organisation of the Biomatrix provides various frameworks for systems analysis and ideal systems (re)design, while the temporal organising principles provide a generic methodology for managing change in a systemic manner.

This facilitates the (dis)solving of any type of organisational, societal or ecological problem, as well as the restoration of systems in nature and the transformation of social systems. It provides the methodology to develop strategies for dissolving society's most pervasive and perplexing problems (e.g. poverty, ecological deterioration, unsustainable societal development, pandemics and infrastructure problems), as well as methods to transform organisations and governments into learning organisations capable of implementing those strategies.

The use of the concept autopoiesis in the theory of viable systems:

Multidisciplinary System Design Optimization:

Isoperformance: Analysis and Design of Complex
Systems with Known or Desired Outcomes

Abstract. Tradeoffs between performance, cost and risk frequently arise during analysis and design of complex systems. Many such systems have both human and technological components and can be described by mathematical input-output models. Oftentimes such systems have known or desired outcomes or behaviors. This paper proposes “isoperformance” as an alternative approach for analyzing and designing systems by working backwards from a set of desired performance targets to a set of acceptable solutions. This is in contrast to the traditional “forward” process, which starts first in the design space and attempts to predict performance in objective space. Isoperformance can quantify and visualize the tradeoffs between determinants (independent design variables) of a known or desired outcome. For deterministic systems, performance invariant contours can be computed using sensitivity analysis and contour following. In the case of stochastic systems, the isoperformance curves can be obtained by regression analysis, given a statistically representative data set. Examples from opto-mechanical systems design and human factors are presented to illustrate specific applications of the method.


Isoperformance is a methodology for obtaining a performance invariant set of analysis or design solutions. These solutions approximate performance invariant contours or surfaces based on an empirical or deterministic system model. The word isoperformance by itself is used interchangeably with the isoperformance approach.
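The backward, target-first logic can be sketched with a toy deterministic model. Everything below is invented for illustration (the paper itself applies sensitivity analysis and contour following to real opto-mechanical and human-factors models):

```python
def performance(x1, x2):
    """Hypothetical two-variable performance model, invented for this
    sketch; it is monotone decreasing in both design variables."""
    return 100.0 / (1.0 + x1 * x2)

def iso_contour(target, x1_values, x2_lo=0.01, x2_hi=100.0, tol=1e-6):
    """Work backwards from a performance target: for each x1, bisect
    for the x2 that keeps performance exactly on target."""
    points = []
    for x1 in x1_values:
        lo, hi = x2_lo, x2_hi
        if not (performance(x1, hi) <= target <= performance(x1, lo)):
            continue  # target unreachable at this x1
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if performance(x1, mid) > target:
                lo = mid  # performance still too high: push x2 up
            else:
                hi = mid
        points.append((x1, 0.5 * (lo + hi)))
    return points

# Every (x1, x2) pair on the contour delivers the same performance of 10.0,
# so the designer can trade x1 against x2 at constant performance.
contour = iso_contour(target=10.0, x1_values=[1, 2, 3, 4, 5])
```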


“The experience of the 1960’s has shown that for military aircraft the cost of the final increment of performance usually is excessive in terms of other characteristics and that the overall system must be optimized, not just performance.”

Swarm Intelligence in the Optimization of Concurrent Service Systems:

Operational costs of service systems

A service system is a configuration of technology and organizational networks designed with the intention of providing service to the end users. Practical service systems include hospitals, banks, ticket-issuing and reservation offices, restaurants, ATMs, etc. The managerial authorities are often pressed to drastically reduce the operational costs of active and fully functioning service systems, while the system designers are forced to design (new) service systems operating at minimal costs. Both these situations involve system optimization.

Any optimization problem involves the objective to be optimized and a set of constraints. In this study, we seek to minimize the total cost (tangible and intangible) to the system. The total cost can be divided into two broad categories: the cost associated with incoming customers having to wait for the service (waiting cost) and that associated with the personnel (servers) engaged in providing service (service cost). Waiting cost is the estimate of the loss to business, as some customers might not be willing to wait for the service and may decide to go to the competing organizations, while service cost is mainly due to the salaries paid to employees.

Business enterprises and companies often mistakenly “throw” capacity at a problem by adding manpower or equipment to reduce the waiting costs. However, too much capacity decreases the profit margin by increasing the production and/or service costs. The managerial staff, therefore, is required to balance the two costs and make a decision about the provision of an optimum level of service. In recent years, customer satisfaction has become a major issue in marketing research and a number of customer satisfaction measurement techniques have been proposed.
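The waiting-cost versus service-cost balance described above is the classic M/M/c staffing problem. A minimal sketch, with all rates and cost figures invented for illustration:

```python
from math import factorial

def erlang_c(c, a):
    """Erlang C: probability an arrival must wait in an M/M/c queue
    with offered load a = lambda/mu (requires a < c for stability)."""
    if a >= c:
        return 1.0
    num = (a ** c / factorial(c)) * (c / (c - a))
    den = sum(a ** k / factorial(k) for k in range(c)) + num
    return num / den

def total_cost(c, lam, mu, waiting_cost, server_cost):
    """Hourly cost of staffing c servers: waiting cost per queued
    customer-hour plus salary cost per server-hour."""
    a = lam / mu
    if a >= c:
        return float('inf')  # queue would grow without bound
    lq = erlang_c(c, a) * a / (c - a)  # mean number waiting in queue
    return waiting_cost * lq + server_cost * c

# Illustrative numbers (not from the text): 8 arrivals/hour, each server
# handles 3/hour, waiting costs $25 per customer-hour, servers cost $12/hour.
lam, mu = 8.0, 3.0
best = min(range(1, 15),
           key=lambda c: total_cost(c, lam, mu, waiting_cost=25.0, server_cost=12.0))
```

Sweeping the server count makes the trade-off explicit: too few servers and waiting cost explodes, too many and salaries dominate, which is exactly the "throwing capacity at the problem" mistake the passage warns against.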

Mechanism Design Theory:

"In mid-October, the Nobel prize for economics was awarded to Leo Hurwicz, Eric Maskin and Roger Myerson for their work on mechanism design. Newspaper economics correspondents made what they could of this news, but obviously had only the vaguest idea of what mechanism design is.

Mechanism design is too important to get this kind of treatment." http://www.prospectmagazine.co.uk/2007/11/rulesofthegame/

"In this paper we study the existence of Pareto equilibria of a multicriteria metagame. A theorem on existence of a Pareto equilibrium and a theorem on existence of a Nash equilibrium with weights are presented, which improve and extend some known results in the theory of games with multiple payoffs. Also relations between a Pareto equilibrium and other solution concepts of an optimization problem with multiple criteria are discussed." http://www.springerlink.com/content/c3v35023j7027vl3

"Multiobjective problems involve several competing measures of solution quality, and multiobjective evolutionary algorithms (MOEAs) and multiobjective problem solving have become important topics of research in the evolutionary computation community over the past 10 years. This is an advanced text aimed at researchers and practitioners in the area of search and optimization. The book focuses on how MOEAs and related techniques can be used to solve problems, particularly in the disciplines of science and engineering. Contributions by leading researchers deal with the concepts of problem, solution, objective, constraint, utility and preference, and show how these concepts are being investigated in current practice. The book is distinguished from other texts on MOEAs in that it is not primarily about the algorithms, nor specific applications, but about the concepts and processes involved in solving problems using a multiobjective approach. Each chapter contributes to the central, deep concepts and themes of the book: evaluating the utility of the multiobjective approach; discussing alternative problem formulations; showing how problem formulation affects the search process; and examining solution selection and decision-making. The book will be of benefit to researchers, practitioners and graduate students engaged with the underlying general theories involved in the multiobjective approach in fields such as natural computing and heuristics." Multi-competence Cybernetics: The Study of Multiobjective Artificial Systems and Multi-fitness Natural Systems, [175], [176], [177]

Encyclopedia of Optimization

Multiobjective Water Resource Planning: http://tinyurl.com/2d4ykaw

"Machine learning usually has to achieve multiple targets, which are often conflicting with each other. For example in feature selection, minimizing the number of features and the maximizing feature quality are two conflicting objectives. It is also well realized that model selection has to deal with the trade-off between model complexity and approximation or classification accuracy. Traditional learning algorithms attempt to deal with multiple objectives by combining them into a scalar cost function so that multi-objective machine learning problems are reduced to single-objective problems. Recently, increasing interest has been shown in applying Pareto-based multi-objective optimization to machine learning, particularly inspired by the successful developments in evolutionary multi-objective optimization. It has been shown that the multi-objective approach to machine learning is particularly successful in 1) improving the performance of the traditional single-objective machine learning methods 2) generating highly diverse multiple Pareto-optimal models for constructing ensembles and, 3) in achieving a desired trade-off between accuracy and interpretability of neural networks or fuzzy systems. Multi-objective machine learning covers the following main aspects:

Multi-objective clustering, feature extraction and feature selection

Multi-objective model selection to improve the performance of learning models, such as neural networks, support vector machines, decision trees, and fuzzy systems

Multi-objective model selection to improve the interpretability of learning models, e.g., to extract symbolic rules from neural networks, or to improve the interpretability of fuzzy systems

Multi-objective generation of ensembles

Multi-objective learning to deal with tradeoffs between plasticity and stability, long-term and short-term memories, specialization and generalization."Multi-Objective Machine Learning, International Journal of Systemics, Cybernetics and Informatics, M. R. Meybodi
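The Pareto-based approach the passage describes can be sketched in a few lines: instead of collapsing objectives into one scalar, keep every non-dominated candidate. All model scores below are invented:

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (model complexity, classification error) pairs: the front
# exposes the accuracy/interpretability trade-off instead of hiding it
# inside a single weighted score.
models = [(3, 0.20), (5, 0.10), (8, 0.09), (5, 0.25), (10, 0.09)]
front = pareto_front(models)
```

Here (5, 0.25) is dominated by (5, 0.10) and (10, 0.09) by (8, 0.09); the surviving front offers the decision-maker genuinely different trade-offs, which is the point of steps 1)-3) above.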

"In typical strategic interactions under incomplete information, different types (of a player) can choose from among a menu of different actions (strategies) that comprises the possibility that they mimic the behavior of other types (of the same, or of another player). Incentive compatibility conditions ensure that different types (of each player) align themselves such that they can be identified by their equilibrium choices. Typically, they are used to prevent that some type profits from copying another type's action (given the other types do not disguise themselves behind others' choices). More generally, incentive compatibility conditions force a desired constellation of choices to form a strategic equilibrium for a given array of types. In particular, they might as well ensure that it be worthwhile for different types to choose the same action (the types pool on an action). Yet in most economic problems, incentive compatibility conditions serve to induce a strategic equilibrium which reveals the players' private information by having them choose different 'characteristic' equilibrium actions, i.e. they have the types 'sort themselves out'."Incentive compatibility (SFB 504: Glossary)

"In a distributed/open environment where agents are self-interested and goal oriented, they might pursue any means available to them to maximize their own utility. That could lead to undesirable situations where some agents would try to influence the solving process towards solutions that are more preferable to them, but not necessarily acceptable to others, or suboptimal in any case. It would be desirable for the agents to behave truthfully during the entire solving process; otherwise the optimality/fairness of the final solution is not assured.

We will investigate possible ways to design algorithms/protocols that are incentive compatible in the sense that it is always individually rational for each rational agent to behave truthfully. To this end, we will look at the distributed problem solving process from a game theoretic perspective, and use promising schemes like the Clarke tax or side payments to devise mechanisms that motivate the agents to behave truthfully.

Again, as with the previous point, collusion is a major concern. Coalitions of malicious agents acting together in a coordinated fashion could possibly circumvent the measures we are trying to take in order to ensure that everybody behaves truthfully. For example, in the case of the Clarke tax, it has been shown that even a coalition of two agents can manipulate the system in such a way that their preferred outcome is chosen, and they still pay no tax.

We are going to investigate possible ways to achieve incentive compatibility and at the same time ensure to the largest extent possible that coalitions of malicious agents cannot manipulate the system in their own interest. Possible methods are either 'naturally' robust against collusion, or rendered so through cryptographic solutions or randomization techniques." Incentive Compatibility, [178]
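The Clarke tax mentioned in the quote can be sketched directly: choose the outcome maximising total reported value, then charge each agent the loss its presence imposes on the others. The valuations below are invented:

```python
def clarke_mechanism(valuations):
    """Clarke (pivotal) mechanism. `valuations` is a list of dicts
    {outcome: value}, one per agent. Returns the chosen outcome and
    each agent's tax: the harm that agent's presence causes the rest.
    Truthful reporting is a dominant strategy for a lone agent, though
    (as the quote notes) colluding pairs can defeat the scheme."""
    outcomes = list(valuations[0])

    def best(vals):
        return max(outcomes, key=lambda o: sum(v[o] for v in vals))

    chosen = best(valuations)
    taxes = []
    for i in range(len(valuations)):
        others = valuations[:i] + valuations[i + 1:]
        alt = best(others)  # what the others would pick without agent i
        taxes.append(sum(v[alt] for v in others) - sum(v[chosen] for v in others))
    return chosen, taxes

# Hypothetical reported valuations over two public outcomes A and B.
vals = [{'A': 10, 'B': 0}, {'A': 0, 'B': 8}, {'A': 5, 'B': 2}]
chosen, taxes = clarke_mechanism(vals)
# Only agent 0 is pivotal (without it, B would win), so only it pays a tax.
```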

Innovation and Incentives

"The Design Decisions Laboratory was established at Carnegie Mellon University by Professor Jeremy J. Michalek in 2005. The lab develops theories and tools to understand and assist decision-making in design and product development. The group is interested in the preferences and economics that drive design tradeoff decisions as well as the impact of those decisions on public and private stakeholders. Drawing upon research in economics, econometrics, marketing and public policy as well as engineering and design optimization, the lab pursues three primary thrust areas: Systems Optimization: Develop fundamental knowledge and new methods for multidisciplinary design and complex systems optimization; Design for Market Systems: Measure and model consumer choice in the marketplace to optimize engineering systems for profitability; and Green Design & Environmental Policy: Study the effects of economics, competition and public policy on design decisions and the resulting environmental impact of those decisions." Design Decisions Wiki

"Back in the early ‘90s the term ‘Product Semantics’ coined by Klaus Krippendorff and Reinhart Butter helped to define the meaning of information transferred by product designers through product forms. They state that the mantra of product semantics is not “form follows function” but rather “form follows meaning” and that designers are part of a two-part equation of designer and user. One of the obvious problems with Product Semantics theory is that there can never be a truly one-to-one direct translation between the designer’s intended meanings and the meanings interpreted by users. The term ‘teleosemantics’ comes from information theories in genetic research. Teleosemantic theory generally serves as a means to elucidate an involvement relationship between organisms and their environments. This paper proposes the argument that the process of designing is ‘teleosemantic’ by nature paralleling teleosemantic theories of DNA as information systems and that the inner workings within the gene parallels the relationship between designers and users. When looking at product design through a Product Teleosemantic lens, a designer’s intentions would no longer be seen as invalidated by misinterpretation but rather validated by reinterpretations that lead to new ways of product usage." Product Teleosemantics: The Next Stage in the Evolution of Product Semantics, [179]

"Since its coinage in 1984, the use of “product semantics” has mushroomed. In 2009, a Google search identified over 18,000 documents referring to it. The semantics of artifacts has become of central importance in courses taught at leading design departments of many universities all over the world...It has also permeated other disciplines, notably ergonomics, marketing, cognitive engineering. Reviews can be found by writers on design theory, design history, corporate strategy, national design policy, design science studies, participatory design, interaction design, human-computer interaction, and cybernetics."[180]

"The term affordance was coined by Gibson as a part of the theory of direct perception, also known as the Ecological Approach, to refer to the actionable properties between the environment and the organism that lives in the environment. According to the paradigms of cognitive psychology, human behaviors, such as thinking, acting and perceiving, are guided by mental schemata or cognitive model, which are mainly based on their previous experience and knowledge. In contrast, Gibson’s theory of direct perception stresses that attributes of an object could provide effective perceptual information about the object itself. In short, “The object offers what it does because it is what it is”. Essential to this theory is “the reciprocal relationship between animal and environment”, and the notion of affordance was developed to express the property of the environment in relation to the organism that lives within."[181]

Multiattribute Utility Theory (designdecisions wiki), Multiple Criteria Decision Making, Multiattribute Utility Theory: The Next Ten Years, [182]

"Sarit Kraus is concerned here with the cooperation and coordination of intelligent agents that are self-interested and usually owned by different individuals or organizations. Conflicts frequently arise, and negotiation is one of the main mechanisms for reaching agreement. Kraus presents a strategic-negotiation model that enables autonomous agents to reach mutually beneficial agreements efficiently in complex environments. The model, which integrates game theory, economic techniques, and heuristic methods of artificial intelligence, can be automated in computer systems or applied to human situations. The book provides both theoretical and experimental results." Strategic Negotiation in Multiagent Environments

An Introduction to Multi-Objective Optimization, [183], [184], Multiobjective Optimization: Interactive and Evolutionary Approaches

Applying Multi-Objective Evolutionary Computing to Auction Mechanism Design, Utilitarian Mechanism Design for Multi-Objective Optimization, [185],

Multiobjective Programming and Goal Programming, [186], [187]

Two-Level of Nondominated Solutions Approach to Multiobjective Particle Swarm Optimization:

In multiobjective particle swarm optimization (MOPSO) methods, selecting the local best and the global best for each particle of the population has a great impact on the convergence and diversity of solutions, especially when optimizing problems with a high number of objectives. This paper presents a two-level of nondominated solutions approach to MOPSO. The ability of the proposed approach to detect the true Pareto optimal solutions and capture the shape of the Pareto front is evaluated through experiments on well-known non-trivial test problems. The diversity of the nondominated solutions obtained is demonstrated through different measures. The proposed approach has been assessed through a comparative study with the reported results in the literature.

Categories and Subject Descriptors
I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search – heuristic methods.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=

"This school of thought contends that the PSO algorithm and its parameters must be chosen so as to properly balance between exploration and exploitation to avoid premature convergence to a local optimum yet still ensure a good rate of convergence to the optimum."

"Groups collaborate to create value that their members cannot create through individual effort. Collaboration, however, engenders interpersonal, social, political, cognitive, and technical challenges. Groups can improve key outcomes using collaboration technologies, but any technology that can be used well can also be used badly; IS/IT artifacts do not assure successful collaboration. The value of a collaboration technology can only be realized in the larger context of a collaboration system, a combination of actors, hardware, software, knowledge, and work practices to advance groups toward their goals. Designers of collaboration systems must therefore address many issues when creating a new collaboration system. This track seeks new work from researchers in many disciplines to foster a growing body of exploratory, theoretical, experimental, and applied research that could inform design and deployment choices for collaboration systems. We seek papers that address individual, group, organizational, and social factors that affect outcomes of interest among people making joint efforts toward a group goal. We look for papers from a range of epistemological and methodological perspectives."

"This is the home page for the Service Science space.

The science of studying service is evolving. The call-to-action was heard around the world. Many institutions have integrated or are beginning to integrate their studies with the service perspective. These pages offer a collection of resources for your use for the development of courses, case studies and degree curricula. We continue to update the site as new developments occur."

Steps Towards a Science of Service Systems:
"A service system can be understood as a system composed of people and technologies that adaptively computes and adjusts to the changing value of knowledge in the system."

A Research Manifesto for Services Science:

"There is, apart from temporal empirical knowledge (i.e. implying duration), a further, non-temporal access to cognition of temporal structures. A non-temporal access enables us to explain subjectively (in each case) varying empirical knowledge of duration, as well as insight and precognition."

"Towards the end of the book, Yau makes a point that I very much agree with: fundamental physics may get (or may already have gotten) to the point where it can no longer rely upon frequent inspiration from unexpected experimental results, and when that happens one avenue left to try is to get inspiration from mathematics:

"So that’s where we stand today, with various leads being chased down – only a handful of which have been discussed here – and no sensational results yet. Looking ahead, Shamit Kachru, for one, is hopeful that the range of experiments under way, planned, or yet to be devised will afford many opportunities to see new things. Nevertheless, he admits that a less rosy scenario is always possible, in the event that we live in a frustrating universe that affords little, if anything, in the way of empirical clues…

What we do next, after coming up empty-handed in every avenue we set out, will be an even bigger test than looking for gravitational waves in the CMB or infinitesimal twists in torsion-balance measurements. For that would be a test of our intellectual mettle. When that happens, when every idea goes south and every road leads to a dead end, you either give up or try to think of another question you can ask – questions for which there might be some answers.

Edward Witten, who, if anything, tends to be conservative in his pronouncements, is optimistic in the long run, feeling that string theory is too good not to be true. Though, in the short run, he admits, it’s going to be difficult to know exactly where we stand. “To test string theory, we will probably have to be lucky,” he says. That might sound like a slender thread upon which to pin one’s dreams for a theory of everything – almost as slender as a cosmic string itself. But fortunately, says Witten, “in physics there are many ways of being lucky.”

I have no quarrel with that statement and more often than not, tend to agree with Witten, as I’ve generally found this to be a wise policy. But if the physicists find their luck running dry, they might want to turn to their mathematical colleagues, who have enjoyed their fair share of that commodity as well.""


"What is topology? Is it like geometry?

Geometry is specific and topology is general. Topologists study larger patterns and categories of shapes. For example, in geometry, a cube and a sphere are distinct. But in topology they are the same because you can deform one into the other without cutting through the surface. The torus, a sphere with a hole in the middle, is a different form. It is clearly distinct from the sphere because you cannot deform a torus into a sphere no matter how you twist it.
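The sphere/torus distinction has a precise invariant behind it: the Euler characteristic of a closed orientable surface of genus $g$ (the number of holes),

\[ \chi = 2 - 2g, \]

so the sphere ($g = 0$) has $\chi = 2$ while the torus ($g = 1$) has $\chi = 0$. Because $\chi$ is unchanged by continuous deformation, no amount of twisting turns one into the other.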
Does that mean geometry and topology are really two perspectives on the same thing?

Yes. It is like Chinese literature. A poem might describe a farewell between lovers. But in the language of the poem, instead of a man and woman, there is a willow tree, where the leaves are soft and hanging down. The way the branch is hanging down is like the feeling of the man and the woman wanting to be together. Geometry gives us a structure of that willow tree that is solid and extensive. Topology describes the overall shape of the tree without the details—but without the tree to start with, we would have nothing.

It has always amazed me to observe how different groups of people look at the same subject. My friends in physics look at space-time purely from the perspective of real physics, yet the general theory of relativity describes space-time in terms of geometry, because that’s how Einstein looked at the problem.

When you looked at the world through the lens of geometry and topology, what did you learn?

That nonlinear equations were fundamental because in nature, curves abound. Climate isn’t linear. If the wind blows stronger that way, it may cause more trouble over there; it may even depend on the geometry of the earth. Usually you see the stock market described by linear equations and straight lines, but that is not really correct. The stock market fluctuates up and down in a nonlinear way. The Einstein equation described the curvature of the universe, and it was nonlinear. I ended up learning nonlinear equations from a master, although I didn’t know he was a master at the time. His name was Charles Morrey, and he was a classical gentleman. He always dressed in suits in class. He was a very nice man. Even if I was the only one there, he would lecture to me, just as if he were lecturing to the whole class."

"Studies of Nonlinear Phenomena in Life Science
Why a Watched Kettle Never Boils
by Susie Vrobel (Institute for Fractal Research, Germany)

This book provides an interdisciplinary introduction to the notion of fractal time, starting from scratch with a philosophical and perceptual puzzle. How subjective duration varies, depending on the way we embed current content into contexts, is explained.

The complexity of our temporal perspective depends on the number of nestings performed, i.e. on the number of contexts taken into account. This temporal contextualization is described against the background of the notion of fractal time. Our temporal interface, the Now, is portrayed as a fractal structure which arises from the distribution of content and contexts in two dimensions: the length and the depth of time. The leitmotif of the book is the notion of simultaneity, which determines the temporal structure of our interfaces.

Recent research results are described which present and discuss a number of distorted temporal perspectives. It is suggested that dynamical diseases arise from unsuccessful nesting attempts, i.e. from failed contextualization. Successful nesting, by contrast, manifests itself in a “win-win handshake” between the observer-participant and his chosen context. The answer as to why a watched kettle never boils has repercussions in many a discipline. It would be of immense interest to anyone who works in the fields of cognitive and complexity sciences, psychology and the neurosciences, social medicine, philosophy and the arts.

When Time Slows Down
Subjective Duration
The Fractal Structure of the Now: Time's Length, Depth and Density
Fractal Temporal Perspectives
Corrective Distortions
The View from Within: In-Forming Boundaries
Contextualization: Extended Observer-Participants
Temporal Binding: Synchronizing Perceptions
Nesting Speed: Global vs Local Perspectives
Duration: Distributing Content and Context
Modifying Duration I: Nesting and De-Nesting
Modifying Duration II: Time Condensation
Defining Boundaries: Why is It Always Now?
Outlook: Here There be Dragons.

Readership: Cognitive scientists, philosophers working on the topic of time, cyberneticists and systems theorists focusing on nested systems and connectivity, mathematicians and logicians working on fractals and nested systems, psychologists and psychoanalysts interested in contextualization abilities, psycholinguists and neuro-scientists working on synchronization, medical practitioners focusing on integrative health care, theoretical physicists concerned with time, nonlinear dynamics, causality and connectedness and teachers contemplating the effect of temporal contextualization.

Human electroencephalograms seen as fractal time series:
Mathematical analysis and visualization

The paper presents a novel technique of nonlinear spectral analysis, which has been used for processing encephalograms of humans. This technique is based on the concept of generalized entropy of a given probability distribution, known as the Rényi entropy, which allows defining the set of generalized fractal dimensions of an encephalogram (EEG) and determining fractal spectra of encephalographic signals. Unlike the Fourier spectra, the spectra of fractal dimensions contain information on both frequency and amplitude characteristics of the EEG and can be used together with well-accepted techniques of EEG analysis as an enhancement of the latter. Powered by volume visualization of brain activity, the method provides new clues for understanding the mental processes in humans.

2005 Elsevier Ltd. All rights reserved.
Keywords: Fractal time series; Generalized entropy; EEG; Visualization; FRep; Implicit functions
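The Rényi entropy the paper builds on is simple to compute for a discrete distribution; a minimal sketch of the order-q entropy (the paper's full pipeline over EEG time series, and its volume visualization, are not reproduced here):

```python
import math

def renyi_entropy(p, q):
    """Rényi entropy of order q (q >= 0, q != 1) of a discrete distribution p.
    Varying q weights large vs. small probabilities differently, which is
    what yields a spectrum of generalized dimensions rather than one number;
    as q -> 1 the value converges to the Shannon entropy."""
    return math.log(sum(pi ** q for pi in p if pi > 0)) / (1 - q)

# For a uniform distribution every order gives the same value, log(4).
# Non-uniform (multifractal) signals are the interesting case: there the
# entropy genuinely varies with q.
h2 = renyi_entropy([0.25] * 4, 2)
```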

"General overview

Some researchers in this field posit that positive psychology can be delineated into three overlapping areas of research:
Research into the Pleasant Life, or the "life of enjoyment", examines how people optimally experience, forecast, and savor the positive feelings and emotions that are part of normal and healthy living (e.g. relationships, hobbies, interests, entertainment, etc.).

The study of the Good Life, or the "life of engagement", investigates the beneficial effects of immersion, absorption, and flow that individuals feel when optimally engaged with their primary activities. These states are experienced when there is a positive match between a person's strength and the task they are doing, i.e. when they feel confident that they can accomplish the tasks they face. (See related concept, Self-efficacy)

Inquiry into the Meaningful Life, or "life of affiliation", questions how individuals derive a positive sense of well-being, belonging, meaning, and purpose from being part of and contributing back to something larger and more permanent than themselves (e.g. nature, social groups, organizations, movements, traditions, belief systems).

These categories appear to be neither widely disputed nor adopted by researchers across the 12 years that this academic area has been in existence.
"The development of the Character Strengths and Virtues (CSV) handbook represents the first attempt on the part of the research community to identify and classify the positive psychological traits of human beings. Much like the Diagnostic and Statistical Manual of Mental Disorders (DSM) of general psychology, the CSV provides a theoretical framework to assist in understanding strengths and virtues and for developing practical applications for positive psychology. This manual identifies six classes of virtue (i.e., "core virtues"), made up of twenty-four measurable character strengths.

The introduction of CSV suggests that these six virtues are considered good by the vast majority of cultures and throughout history and that these traits lead to increased happiness when practiced. Notwithstanding numerous cautions and caveats, this suggestion of universality hints that in addition to trying to broaden the scope of psychological research to include mental wellness, the leaders of the positive psychology movement are challenging moral relativism and suggesting that we are "evolutionarily predisposed" toward certain virtues, that virtue has a biological basis.

The organization of these virtues and strengths is as follows:

Wisdom and Knowledge: creativity, curiosity, open-mindedness, love of learning, perspective, innovation

Courage: bravery, persistence, integrity, vitality
Humanity: love, kindness, social intelligence

Justice: citizenship, fairness, leadership

Temperance: forgiveness and mercy, humility, prudence, self control

Transcendence: appreciation of beauty and excellence, gratitude, hope, humor, spirituality

The organization of these virtues into six groups is contested. It has been suggested that the 24 strengths identified are more accurately grouped into just three or four categories: Intellectual Strengths, Interpersonal Strengths, and Temperance Strengths; or alternatively Interpersonal Strengths, Fortitude, Vitality, and Cautiousness.

Positive experiences


Mindfulness, defined as actively searching for novelty, is also characterized as non-judging, non-striving, accepting, patient, trusting, open, curious, and letting go. Its benefits include reduction of stress, anxiety, depression, and chronic pain.


Flow, or a state of absorption in one's work, is characterized by intense concentration, loss of self-awareness, a feeling of control, and a sense that "time is flying." Flow is an intrinsically rewarding experience, and it can also help one achieve a goal (e.g. winning a game) or improve skills (e.g. becoming a better chess player).


Spirituality is associated with mental health, managing substance abuse, marital functioning, parenting, and coping. It has been suggested that spirituality also leads to finding purpose and meaning in life. This research on the benefits of spirituality is limited, however, to mostly studies using cross-sectional questionnaires.

Positive futures


Self-efficacy is one's belief in one's ability to accomplish a task by one's own efforts. Low self-efficacy is associated with depression; high self-efficacy can help one overcome abuse, overcome eating disorders, and maintain a healthy lifestyle. High self-efficacy also improves the immune system, aids in stress management, and decreases pain. A related but somewhat differing concept is Personal effectiveness, which is primarily concerned with the methodologies of planning and implementation of accomplishment.

Learned optimism

Learned optimism is the idea that a talent for joy, like any other, can be cultivated. It is contrasted with learned helplessness. Learning optimism is done by consciously challenging self-talk that describes a negative event as a personal failure permanently affecting all areas of the person's life."

If the "how's" of psychological health were "one size fits all" (may be good for an industrial economist) then we'd be(come) a rather boring species.

Thursday, October 7, 2010

Biocosms, Self-Duality, Mirrorhouses, Conspansion, Illuminationism, Relative Realism, Monistic Idealism, Constructivist Semiotics, Artificial Life

"Research on the geometry of language is highly suggestive in this regard (Van Fraassen, 1980; Van Fraassen and Hooker, 1976). If indeed the brain has learned to utilize itself, then this is what we pass along from generation to generation, and why knowledge cannot be inherited. Our children inherit only the structure and with it the potentialities of re-structure."

"Language resides on the neurological arrangement of the brain and this neural wiring is somehow utilized in the process of language generation and reception. Using the lexicon of molecular biology, one might posit that instead of nouns and verbs and objects, language generation entails in part the strength of ion transference across synaptic clefts. To view a schematic of a neuron, with its dendrites and synapses, is to 'see' grammar and, should we capture the actual exchange of molecules across the cleft, to 'see' language at work. This assumption poses immediate difficulties, as there exist approximately 100 billion neurons in the brain, each of which is extraordinarily complex in its composition. Compounding this, we also know that neurons are connected to other neurons in widely different patterns which can be determined by an individual's experience, and that to trace a cause or effect, or to trace the connectivity of a neuron, is a labyrinthine affair. As noted above, strict localization of information or processes seems doubtful.

There are also different types of synaptic transmission (voltage-gated, ligand-gated); many kinds of neurotransmitters (glutamate, the catecholamines, and serotonin, for example). There also exists a system of neuropeptides that play different roles in brain information processing (cholecystokinin) and more simplified chemicals (acetylcholine). The act of transmittal from synaptic terminal to receiver frequently involves additional stages of biosynthesis. "Second messengers" may form, adding a further modulation onto the process.

An elementary schematic of neurotransmission inspires a few observations. Firstly, it is obvious that even with the technical tools of the late twentieth century, the material brain remains hauntingly complex. Indeed, because of the sophistication of modern analysis the complexity has become better known and appreciated. Secondly, when discussing language, we may also begin to appreciate why the most complex human tool ever devised is so complex. This in itself argues for a 'bottom up' theory of language generation and interpretation. Language has rules, but these rules are a) themselves made up of the material brain, i.e., are structuring devices that ultimately reside in the learned correspondence of a vast array of neurons b) operate along the material neuronal pathways of the brain and c) are thus determined by the biological structure of the brain.

Acknowledging this, it is much easier to understand how language can alter moods, change attitudes, influence behavioral patterns, or result in deep conceptual conversion. Words can cause anger, peacefulness, elevate pulse frequency, precipitate the hot rush of adrenaline surges or make us feel colder by inducing fright. Language can act as a chemical change because it is itself comprised of chemical changes. Biosynthesis is required at some level to read this page, just as it was required to compose it. Because of the strong moods and emotions language can induce, it is probable that language uses both the cortical and limbic system in the brain (Cytowic, 1989). Words are not incorporeal.

What does this mean for the understanding of language? The brain itself is the product of evolutionary biology; each of our brains has developed from birth in what can be described as a highly complex sequencing of information transfer that begins with DNA protein and advances through various interpretation and translation procedures to build the brain as well as other differentiated organs. In many accounts of human development at the molecular level, terms usually associated with language study are used, such as translation, transcription, information exchange, and semantics.

The creation and growth of a biological life form is a series of re-interpretations of code that can result in differentiation of parts that function coherently. We speak of the 'language of genes,' and the 'language of proteins,' but much more rarely of the protein components, or the salts, or synaptic gates of War and Peace. Rather than impose upon language a further abstraction, we might see language as a result of these biological processes that should reflect in some ways these processes. The rules of language would not thus be innate linguistic rules for language generation that are separate from other kinds of brain function, but generalities and predispositions that govern evolution and other instances of information exchange in the living organism.

One correlative, however, we do ascribe to information transfer whether it is via proteins or literary texts is the use of symbols. Natural language is symbolic, as it is representative of something it describes, and strings of natural language components--the context of word groupings--can further symbolize. Thus the word 'chat' in Madame Bovary may symbolize on one level a living feline, or a category of felines, and with repeated contextual occurrences in the novel also come to symbolize infidelity. That the relation between the genotype and phenotype is largely symbolic may also have bearing here, just as the observation that only matter-symbol entities evolve."

“What I am saying, in essence, is that in attempting to explain the linkage between life, intelligence and the anthropic qualities of the cosmos, we have been looking through the wrong end of the telescope. My Selfish Biocosm hypothesis asserts that life and intelligence are, in fact, the primary cosmic phenomena and that everything else—the constants of nature, the dimensionality of the universe, the origin of carbon and other elements in the hearts of giant supernovas, the pathway traced by biological evolution—is secondary and derivative. I doubt that a traditional cosmologist or astrophysicist would have reached this conclusion. I was able to do so only because I am an outsider.”


"If there is no God -- no outside transcendent being who designed and created the cosmos and life -- from whence did it all come and how are we to find meaning in an apparently meaningless universe? The answer is derived from science, specifically the new sciences of chaos and complexity theory that attempt to formulate natural explanations for these apparent supernatural phenomena. In this creative consilience of cosmology, evolutionary biology, and complexity theory, James Gardner courageously speculates about how it all could have come about and what it could possibly all mean using only the tools of science. Biocosm is breathtaking in its scope and its subject -- the cosmos and everything in it -- is far grander than the anthropocentric proscenium on which theistic world views play themselves out." - Michael Shermer, Publisher of Skeptic Magazine, monthly columnist for Scientific American, and author of Why People Believe Weird Things.

"The Super-Copernican Principle: Just as Copernicus displaced geocentricity with heliocentricity, showing by extension that no particular place in the universe is special and thereby repudiating "here-centeredness", the Super-Copernican Principle says that no particular point in time is special, repudiating "now-centeredness".

Essentially, this means that where observer-participation functions retroactively, the participatory burden is effectively distributed throughout time. So although the “bit-size” of the universe is too great to have been completely generated by the observer-participants who have thus far existed, future generations of observer-participants, possibly representing modes of observer-participation other than that associated with human observation, have been and are now weighing in from the future.

(The relevance of this principle to the Participatory Anthropic Principle is self-evident.)
Deterministic computational and continuum models of reality are recursive in the standard sense; they evolve by recurrent operations on state from a closed set of “rules” or “laws”. Because the laws are invariant and act deterministically on a static discrete array or continuum, there exists neither the room nor the means for optimization, and no room for self-design.

The CTMU, on the other hand, is conspansive and telic-recursive; because new state-potentials are constantly being created by evacuation and mutual absorption of coherent objects (syntactic operators) through conspansion, metrical and nomological uncertainty prevail wherever standard recursion is impaired by object sparsity. This amounts to self-generative freedom, hologically providing reality with a “self-simulative scratchpad” on which to compare the aggregate utility of multiple self-configurations for self-optimizative purposes.
If the universe is really circular enough to support some form of “anthropic” argument, its circularity must be defined and built into its structure in a logical and therefore universal and necessary way. The Telic principle simply asserts that this is the case; the most fundamental imperative of reality is such as to force on it a supertautological, conspansive structure. Thus, the universe “selects itself” from unbound telesis or UBT, a realm of zero information and unlimited ontological potential, by means of telic recursion, whereby infocognitive syntax and its informational content are cross-refined through telic (syntax-state) feedback over the entire range of potential syntax-state relationships, up to and including all of spacetime and reality in general.

The Telic Principle differs from anthropic principles in several important ways. First, it is accompanied by supporting principles and models which show that the universe possesses the necessary degree of circularity, particularly with respect to time. In particular, the Extended Superposition Principle, a property of conspansive spacetime that coherently relates widely-separated events, lets the universe “retrodict” itself through meaningful cross-temporal feedback.

Moreover, in order to function as a selection principle, it generates a generalized global selection parameter analogous to “self-utility”, which it then seeks to maximize in light of the evolutionary freedom of the cosmos as expressed through localized telic subsystems which mirror the overall system in seeking to maximize (local) utility. In this respect, the Telic Principle is an ontological extension of so-called “principles of economy” like those of Maupertuis and Hamilton regarding least action, replacing least action with deviation from generalized utility.
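For reference, the "principles of economy" invoked here are the classical variational principles; Hamilton's principle, for instance, selects physical trajectories as stationary points of the action functional,

\[ S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\, dt, \qquad \delta S = 0, \]

and the Telic Principle, as described above, swaps this action for a generalized-utility functional to be maximized.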

In keeping with its clear teleological import, the Telic Principle is not without what might be described as theological ramifications. For example, certain properties of the reflexive, self-contained language of reality – that it is syntactically self-distributed, self-reading, and coherently self-configuring and self-processing – respectively correspond to the traditional theological properties omnipresence, omniscience and omnipotence. While the kind of theology that this entails neither requires nor supports the intercession of any “supernatural” being external to the real universe itself, it does support the existence of a supraphysical being (the SCSPL global operator-designer) capable of bringing more to bear on localized physical contexts than meets the casual eye. And because the physical (directly observable) part of reality is logically inadequate to explain its own genesis, maintenance, evolution or consistency, it alone is incapable of properly containing the being in question."

Sounds very similar to the CTMU. The difference may be that the CTMU would be described as Model-'Interdependent' Realism.

"This multiplicity of distinct theories prompts the authors to declare that the only way to understand reality is to employ a philosophy called "model-dependent realism". Having declared that "philosophy is dead", the authors unwittingly develop a theory familiar to philosophers since the 1980s, namely "perspectivalism". This radical theory holds that there doesn't exist, even in principle, a single comprehensive theory of the universe. Instead, science offers many incomplete windows onto a common reality, one no more "true" than another. In the authors' hands this position bleeds into an alarming anti-realism: not only does science fail to provide a single description of reality, they say, there is no theory-independent reality at all. If either stance is correct, one shouldn't expect to find a final unifying theory like M-theory - only a bunch of separate and sometimes overlapping windows."


"Stephen Hawking and Thomas Hertog have a new paper out, called Populating the Landscape: A Top Down Approach. It contains his version of the anthropic landscape idea, based on his “no-boundary” idea of quantum cosmology (sometimes also referred to as the “Hartle-Hawking wavefunction”), and he refers to it as “top-down cosmology”.

Here’s part of the summary:

In a top down approach one computes amplitudes for alternative histories of the universe with final boundary conditions only.

The boundary conditions act as late time constraints on the alternatives and select the subclass of histories that contribute to the amplitude of interest. This enables one to test the proposal, by searching among the conditional probabilities for predictions of future observations with probabilities near one. In top down cosmology the histories of the universe thus depend on the precise question asked, i.e. on the set of constraints that one imposes…

The top down approach we have described leads to a profoundly different view of cosmology, and the relation between cause and effect. Top down cosmology is a framework in which one essentially traces the histories backwards, from a spacelike surface at the present time. The no boundary histories of the universe thus depend on what is being observed, contrary to the usual idea that the universe has a unique, observer independent history. In some sense no boundary initial conditions represent a sum over all possible initial states. This is in sharp contrast with the bottom-up approach, where one assumes there is a single history with a well defined starting point and evolution.


We have also discussed the anthropic principle. This can be implemented in top down cosmology, through the specification of final boundary conditions that select histories where life emerges. Anthropic reasoning within the top down approach is reasonably well-defined, and useful to the extent that it provides a qualitative understanding for the origin of certain late time conditions that one finds are needed in top down cosmology."
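Schematically (a sketch of the standard top-down recipe, not a quotation from the paper), the conditional probabilities in question have the form

\[ P(\alpha \mid C) = \frac{p(\alpha, C)}{\sum_{\beta} p(\beta, C)}, \]

where $C$ denotes the final (late-time) boundary conditions and the sum runs over the alternative histories consistent with $C$; a "prediction" is any $\alpha$ for which $P(\alpha \mid C)$ is near one.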

"Monistic Idealism (or just Idealism) is a metaphysical theory which states that consciousness, not matter, is the ground of all being. It is a monistic theory because it holds that there is only one type of thing in the universe, and a form of idealism because it holds that one thing to be consciousness. In India this concept is central to Vedanta philosophy.

Monistic idealism rejects any notion of consciousness being an "accident" or the mere side product of material interactions. Instead, consciousness comes before matter; it is the fundamental wellspring from which reality is created. In the words of physicist Amit Goswami, who wrote a book The Self-Aware Universe (1993) on this concept:

The current worldview has it that everything is made of matter, and everything can be reduced to the elementary particles of matter, the basic constituents — building blocks — of matter. And cause arises from the interactions of these basic building blocks or elementary particles; elementary particles make atoms, atoms make molecules, molecules make cells, and cells make the brain. But all the way, the ultimate cause is always the interactions between the elementary particles. This is the belief — all cause moves from the elementary particles. This is what we call "upward causation." So in this view, what human beings — you and I — think of as our free will does not really exist. It is only an epiphenomenon or secondary phenomenon, secondary to the causal power of matter. And any causal power that we seem to be able to exert on matter is just an illusion. This is the current paradigm.

Now, the opposite view is that everything starts with consciousness. That is, consciousness is the ground of all being. In this view, consciousness imposes "downward causation." In other words, our free will is real. When we act in the world we really are acting with causal power. This view does not deny that matter also has causal potency — it does not deny that there is causal power from elementary particles upward, so there is upward causation — but in addition it insists that there is also downward causation. It shows up in our creativity and acts of free will, or when we make moral decisions. In those occasions we are actually witnessing downward causation by consciousness."

"The ultimate “boundary of the boundary” of the universe is UBT, a realm of zero constraint and infinite possibility where neither boundary nor content exists. The supertautologically-closed universe buys internal diffeonesis only at the price of global synesis, purchasing its informational distinctions only at the price of coherence. No question, no answer reflects the fact that reality consists not of mere information, but infocognition, and that information on state is crucially linked to and dependent on syntax…the syntax of the “questions” asked of itself by the self-configuring universe. Due to the self-configurative freedom inherited by reality from UBT, the dynamically self-configuring universe displays uncertainty and complementarity and thus cannot be locked into locally-determinate answers for all possible questions at once, while the extended self-connectivity of conspansive spacetime unavoidably implicates the environment in the Q&A."

Conspansion and Conation have similarities:

"Psychology has traditionally identified and studied three components of mind: cognition, affect, and conation (Huitt, 1996; Tallon, 1997). Cognition refers to the process of coming to know and understand; the process of encoding, storing, processing, and retrieving information. It is generally associated with the question of "what" (e.g., what happened, what is going on now, what is the meaning of that information).

Affect refers to the emotional interpretation of perceptions, information, or knowledge. It is generally associated with one’s attachment (positive or negative) to people, objects, ideas, etc. and asks the question "How do I feel about this knowledge or information?"

Conation refers to the connection of knowledge and affect to behavior and is associated with the issue of "why." It is the personal, intentional, planful, deliberate, goal-oriented, or striving component of motivation, the proactive (as opposed to reactive or habitual) aspect of behavior (Baumeister, Bratslavsky, Muraven & Tice, 1998; Emmons, 1986). It is closely associated with the concept of volition, defined as the use of will, or the freedom to make choices about what to do (Kane, 1985; Mischel, 1996). It is absolutely critical if an individual is to successfully engage in self-direction and self-regulation."

"There is much discussion in education and psychology about the need to "develop the whole child." But what exactly does that mean? In order to answer that question one needs to have some view of the nature of a human being and the purpose of this life. The concept of "Becoming a Brilliant Star" is designed to address this issue. This page and associated links provide access to materials used in an Institute course and deepening programs that address issues of human development and education from the perspective of Bahá'í scripture. The primary audiences include parents, educators and others responsible for developing the qualities of children and youth. Two additional databases are available that provide information based on scripture from other world religions as well as from the perspective of science.

The intent is to demonstrate the unity of science and religion as it relates to human growth and development.

The Brilliant Star graphic shown below has as its focus three critical issues that face young people today: vision, character and competence. Vision relates to dreams and goals of what is possible and desirable to do. Character has to do with the habits or patterns of thinking, feeling, willing and behaving that link to issues of right and wrong, justice and equity, and morality. Competence has to do with the knowledge, values, attitudes and skills that relate to successful performance. All three issues are intertwined and difficult to separate, although our experience suggests these can be observed separately in people.

The Brilliant Star is comprised of ten domains of competence, plus
character and style. Five of the domains are in essence more internal:

spirit (soul, connection with the divine, purpose of life);
body (action, doing, physical, relation to nature), and
three faculties of mind traditionally identified in psychology:

cognition (thinking, reasoning, intelligence),
affect (feeling, emotions, values) and
conation (commitment, will, volition).

Five of the domains are more external:

family (mate selection, marriage relationship, parenting),
friends (human relationships in small groups),
work and career (arts and professions),
wealth and finances (true wealth, material wealth, stewardship), and
sociocultural (institutional relationships, peace, unity of humankind).

Moral character and personal style are central to development of competence in each of the ten domains and are shown in the middle of the star. Vision relates to all of these domains in that one's ideas about possibilities and desires lead one to set goals and strive for excellence in each area."

"Replacing Cartesian dualism with an advanced form of dual-aspect monism, the CTMU treats the abstract and mathematical, and the concrete and physical, as coincident aspects of the same reality.

Reality becomes a self-distributed “hological” system whose essential structure is replicated everywhere within it as mathematical rules of self-recognition and self-processing. Hology, the central attribute of any self-recognizing, self-processing entity, is a logical species of self-similarity according to which such an entity distributes over itself as rules of structure and evolution…rules that inhere in, and are obeyed by, every interacting part of the system.

Thus, what the system becomes is always consistent with what it already is (and vice versa); its causal integrity is tautologically preserved. In the CTMU, these rules – the syntax of the language spoken to reality by reality itself - are understood to be largely mathematical in nature.

The theoretic vantage of the CTMU is essentially logical, with an accent on model theory. Its perspective is associated with the mathematical discipline governing the formulation and validation of theories, namely logic, with emphasis on the branch of logic which deals with the mapping of theories to their universes, namely model theory. This elevates it to a higher level of discourse than ordinary scientific theories, which are simply compact mathematical descriptions of observational data, and even most mathematical theories, which are compact mathematical descriptions of mathematical objects, structures and processes. This is reflected in the name of the theory; “CTMU” is just a way of saying “the metatheory that describes a model, or valid interpretation, of the theory of cognition, including logic and mathematics, in the real universe (and vice versa).”

Stephen Hawking is among those who have proposed a way out of the regress. In collaboration with James Hartle, he decided to answer the last question - what is the universe and who made it? - as follows. “The universe made itself, and its structure is determined by its ability to do just that.” This is contained in the No Boundary Proposal, which Hawking describes thusly: “This proposal incorporates the idea that the universe is completely self-contained, and that there is nothing outside the universe. In a way, you could say that the boundary conditions of the universe are that there is no boundary.” To mathematically support this thesis, Hawking infuses the quantum wavefunction of the universe with a set of geometries in which space and time are on a par. The fact that time consists of a succession of individual moments thus becomes a consequence of spatial geometry, explaining the “arrow of time” by which time flows from past to future.

Unfortunately, despite the essential correctness of the “intrinsic cosmology” idea (to make the universe self-contained and self-explanatory), there are many logical problems with its execution.

These problems cannot be solved simply by choosing a convenient set of possible geometries (structurings of space); one must also explain where these geometric possibilities came from. For his own part, Hawking explains them as possible solutions of the equations expressing the laws of physics. But if this is to be counted a meaningful explanation, it must include an account of how the laws of physics originated…and there are further requirements as well. They include the need to solve paradoxical physical conundrums like ex nihilo cosmogony (how something, namely the universe, can be created from nothing), quantum nonlocality (how subatomic particles can instantaneously communicate in order to preserve certain conserved physical quantities), accelerating cosmic expansion (how the universe can appear to expand when there is no external medium of expansion, and accelerate in the process to boot), and so on. Even in the hands of experts, the conventional picture of reality is too narrow to meaningfully address these issues. Yet it is too useful, and too accurate, to be “wrong”. In light of the fundamentality of the problems just enumerated, this implies a need for additional logical structure, with the extended picture reducing to the current one as a limiting case.

The CTMU takes the reflexive self-containment relationship invoked by Hawking and some of his cosmological peers and predecessors and explores it in depth, yielding the logical structures of which it is built. Together, these structures comprise an overall structure called SCSPL, acronymic for Self-Configuring Self-Processing Language. The natural terminus of the cosmological self-containment imperative, SCSPL is a sophisticated mathematical entity that possesses logical priority over any geometric explanation of reality, and thus supersedes previous models as a fundamental explanation of the universe we inhabit. In doing so, it relies on a formative principle essential to its nature, the Telic Principle. A logical analogue of teleology, the Telic Principle replaces the usual run of ontological hypotheses, including quasi-tautological anthropic principles such as “we perceive this universe because this universe supports our existence,” as the basis of cosmogony."

"Infocognition is the monic substance that results from removing the Cartesian distinction between self (mind) and other (external reality).

space and time are generalized information and cognition respectively; subjectivity is simply reflexive infocognition or spacetime...an infocognitive domain for which the spatiotemporal radius is minimal and thus coherent

Enlarge the radius relative to a given conspansive layer of spacetime, and the system decoheres into subject-object interaction."

"In Section 1, using the ideas of the past two chapters, I will present the radical but necessary idea that self and reality are belief systems.

Then, in Section 2, I will place this concept in the context of the theory of hypersets and situation semantics, giving for the first time a formal model of the universe in which mind and reality reciprocally contain one another. This "universal network" model extends the concept of the dual network, and explains how the cognitive equation might actually be considered as a universal equation."

Essays on Life Itself (Robert Rosen):

Reflections on the fate of spacetime:

"Telesis, which can be characterized as “infocognitive potential”, is the primordial active medium from which laws and their arguments and parameters
emerge by mutual refinement or telic recursion. In other words, telesis is a kind of “pre-spacetime” from which time and space, cognition and information, state-transitional syntax and state, have not yet separately emerged. Once bound in a primitive infocognitive form that drives emergence by generating “relievable stress” between its generalized spatial and temporal components - i.e., between state and state-transition syntax – telesis continues to be refined into new infocognitive configurations, i.e. new states and new arrangements of state-transition syntax, in order to relieve the stress between syntax and state through telic recursion (which it can never fully do, owing to the contingencies inevitably resulting from independent telic recursion on the parts of localized subsystems). As far as concerns the primitive telic-recursive infocognitive MU form itself, it does not “emerge” at all except intrinsically; it has no “external” existence except as one of the myriad possibilities that naturally exist in an unbounded realm of zero constraint."

Theoretical Framework for Learnable Universe

• The framework described here does not depend on the specifics of the laws of physics in our universe

• Requirements for dynamics (Stapp’s terminology)
– Schrödinger evolution between observations
– Dirac probabilities to answer questions posed to Nature
– Heisenberg process evolves toward

» Choose to observe using value of information
» Choose what to observe using maximum expected utility

• Requirements for structure of Hamiltonian
– Infinite dimensional
– Can be approximated by sequence of finite dimensional models
– Self-similar structure


• Conscious agents construct representations

• Conscious agents learn better representations over time

• Common mathematics and algorithms for
– Simulating physical systems
– Learning complex representations

» Many parameters
» High degree of conditional independence
» High degree of self-similarity

• As physical system evolves to minimize free energy its conscious subsystems evolve to construct better representations of the system they inhabit
– Maximum physical entropy corresponds to maximum simultaneous knowledge of (UT,ET)
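The "choose what to observe using maximum expected utility" requirement can be illustrated with a toy value-of-information calculation in Python (our sketch, not from the cited framework): an agent ranks candidate measurements by how much each is expected to reduce its uncertainty over competing hypotheses.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_info_gain(prior, likelihoods):
    """Expected entropy reduction from a measurement.
    likelihoods[outcome][hypothesis] = P(outcome | hypothesis)."""
    gain = entropy(prior)
    for lik in likelihoods:
        p_outcome = sum(l * p for l, p in zip(lik, prior))
        if p_outcome == 0:
            continue
        posterior = [l * p / p_outcome for l, p in zip(lik, prior)]
        gain -= p_outcome * entropy(posterior)
    return gain

prior = [0.5, 0.5]  # two hypotheses, maximally uncertain

# Measurement A is decisive; measurement B is uninformative.
meas_A = [[1.0, 0.0], [0.0, 1.0]]   # outcome reveals the hypothesis
meas_B = [[0.5, 0.5], [0.5, 0.5]]   # outcome says nothing

gains = {"A": expected_info_gain(prior, meas_A),
         "B": expected_info_gain(prior, meas_B)}
print(max(gains, key=gains.get))           # A
print(round(gains["A"], 3), round(gains["B"], 3))  # 1.0 0.0
```

The agent chooses A: observing it is expected to be worth one full bit, while B is worth nothing.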

"Notes on Self-Representing, and Other, Information Structures:

Is The Universe Deterministic?

Some people spend many hours in their youth wondering if the universe is deterministic. Some of these people end their pondering with the thought that, if we believe the universe is deterministic, then we behave differently than if we believe it isn’t, or than if we had never thought about the question in the first place.

It is well known in computation theory that there does not exist a Turing Machine M that can predict, for every Turing Machine M′ and every input i to M′, whether M′ will eventually halt on i. Yet the reason is not that the instructions in M or M′ are vaguely defined. In keeping with one of the themes of these notes, we may say it is as though there isn’t enough room in the space of Turing Machines for Machines such as M.

Thus, even if God does not play dice, there may be non-determinism in the universe, simply because there are not enough bits for the left hand always to represent what the right is doing."
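The diagonal argument behind this can be sketched in Python. The halting oracle here is hypothetical (no real one can exist); the point is that any concrete candidate oracle is refuted by a program built from it:

```python
def make_contrarian(halts):
    """Given a purported halting oracle halts(program, input),
    return a program that defeats the oracle on its own source."""
    def contrarian(program):
        if halts(program, program):
            while True:   # loop forever precisely when the oracle says "halts"
                pass
        return "halted"   # halt precisely when the oracle says "loops"
    return contrarian

def fake_oracle(program, inp):
    """A candidate oracle that claims every program halts on every input."""
    return True

contrarian = make_contrarian(fake_oracle)
# fake_oracle claims contrarian(contrarian) halts, but by construction that
# run would loop forever -- the oracle is refuted without ever running it.
print(fake_oracle(contrarian, contrarian))  # True
```

Whatever answer an oracle gives about `contrarian` run on itself, the construction makes the opposite happen, so no total oracle can exist.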

"I’ve traced out what seemed to me an interesting path. First I stumbled upon Bart Jacobs’ book Introduction to Coalgebra: Towards Mathematics of States and Observations. This I’d thoroughly recommend. Let me give you some nuggets from it:

The duality with algebras forms a source of inspiration and of opposition: there is a “hate-love” relationship between algebra and coalgebra. (p. v)

As already mentioned, ultimately, stripped to its bare minimum, a programming language involves both a coalgebra and an algebra. A program is an element of the algebra that arises (as so-called initial algebra) from the programming language that is being used. Each language construct corresponds to certain dynamics, captured via a coalgebra. The program’s behaviour is thus described by a coalgebra acting on the state space of the computer. (p. v)"

"A rule of thumb is: data types are algebras, and state-based systems are coalgebras. But this does not always give a clear-cut distinction.

For instance, is a stack a data type or does it have a state? In many cases however, this rule of thumb works: natural numbers are algebras (as we are about to see), and machines are coalgebras. Indeed, the latter have a state that can be observed and modified. (pp. 47-8)

Initial algebras are special, just like final coalgebras. Initial algebras (in Sets) can be built as so-called term models: they contain everything that can be built from the operations themselves, and nothing more. Similarly, we saw that final coalgebras consist of observations only. (p. 48)"
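Jacobs' rule of thumb can be made concrete in a small Python sketch (ours, not from the book): natural numbers as terms consumed by a fold (the unique map out of the initial algebra), and a machine as a coalgebra observed by stepping a state.

```python
# Algebra side: natural numbers as a data type built from Zero and Succ.
Zero = ("Zero",)
def Succ(n): return ("Succ", n)

def fold_nat(zero, succ, n):
    """Catamorphism: interpret a Nat term in any algebra (zero, succ)."""
    if n[0] == "Zero":
        return zero
    return succ(fold_nat(zero, succ, n[1]))

three = Succ(Succ(Succ(Zero)))
print(fold_nat(0, lambda x: x + 1, three))  # 3

# Coalgebra side: a state-based machine, i.e. a function
# state -> (observation, next_state), known only through observations.
def counter(state):
    return (state, state + 1)

def observe(coalg, state, n):
    """Run the machine n steps, collecting its observable outputs."""
    out = []
    for _ in range(n):
        obs, state = coalg(state)
        out.append(obs)
    return out

print(observe(counter, 0, 4))  # [0, 1, 2, 3]
```

The fold builds meaning out of how a term was constructed; the machine is known only by what it emits — the "hate-love" duality in miniature.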

The Duality of the Universe:

"It is proposed that the physical universe is an instance of a mathematical structure which possesses a dual structure, and that this dual structure is the collection of all possible knowledge of the physical universe. In turn, the physical universe is then the dual space of the latter."

Event-State Duality: The Enriched Case

"Enriched categories have been applied in the past to both event-oriented true concurrency models and state-oriented information systems, with no evident relationship between the two. Ordinary Chu spaces expose a natural duality between partially ordered temporal spaces (pomsets, event structures), and partially ordered information systems. Barr and Chu's original definition of Chu spaces however was for the general V-enriched case, with ordinary Chu spaces arising for V = Set (equivalently V = Pos at least for biextensional Chu spaces). We extend time-information duality to the general enriched case, and apply it to put on a common footing event structures, higher-dimensional automata (HDAs), a cancellation-based approach to branching time, and other models treatable by enriching either event (temporal) space or state (information) space."
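At its most elementary, a Chu space is just a matrix relating points to states, and the time-information duality the abstract invokes is matrix transposition. A minimal Python illustration (our names, not the paper's notation):

```python
# A Chu space (A, r, X): points A (events), states X (information states),
# and a matrix r : A x X -> K. The dual transposes the matrix, exchanging
# the event-oriented and state-oriented views.
def chu_dual(points, states, r):
    """Return the dual Chu space: states become points and vice versa."""
    r_t = {(x, a): r[(a, x)] for a in points for x in states}
    return states, points, r_t

points = ["e1", "e2"]          # events
states = ["s1", "s2", "s3"]    # information states
r = {("e1","s1"): 1, ("e1","s2"): 0, ("e1","s3"): 1,
     ("e2","s1"): 0, ("e2","s2"): 1, ("e2","s3"): 1}

d_points, d_states, d_r = chu_dual(points, states, r)
print(d_points)            # ['s1', 's2', 's3']
print(d_r[("s3", "e2")])   # 1

# Dualizing twice recovers the original matrix.
_, _, r2 = chu_dual(d_points, d_states, d_r)
print(r2 == r)             # True
```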

"Any set that can be constructed by adding elements to the space between two brackets can be defined by restriction on the set of all possible sets. Restriction involves the Venn-like superposition of constraints that are subtractive in nature; thus, it is like a subtractive color process involving the stacking of filters. Elements, on the other hand, are additive, and the process of constructing sets is thus additive; it is like an additive color process involving the illumination of the color elements of pixels in a color monitor. CF (constructive-filtrative) duality simply asserts the general equivalence of these two kinds of process with respect to logico-geometric reality.

CF duality captures the temporal ramifications of TD (topological-descriptive, state-syntax, attributive) duality, relating geometric operations on point sets to logical operations on predicates.

Essentially, CF duality says that any geometric state or continuous transformation is equivalent to an operation involving the mutual “filtration” of intersecting hological state-potentials. States and objects, instead of being constructed from the object level upward, can be regarded as filtrative refinements of general, internally unspecified higher-order relations.

CF duality is necessary to show how a universe can be “zero-sum”; without it, there is no way to refine the objective requisites of constructive processes “from nothingness”. In CTMU cosmogony, “nothingness” is informationally defined as zero constraint or pure freedom (unbound telesis or UBT), and the apparent construction of the universe is explained as a self-restriction of this potential. In a realm of unbound ontological potential, defining a constraint is not as simple as merely writing it down; because constraints act restrictively on content, constraint and content must be defined simultaneously in a unified syntax-state relationship." - Langan, 2002, PCID, pg. 26-27
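For ordinary finite sets, the constructive and filtrative descriptions are easy to exhibit side by side. A toy Python sketch (ours, not Langan's formalism) builds the same set once by adding elements and once by stacking subtractive constraints on a universe:

```python
# Constructive: add elements one by one (the additive color process).
constructed = set()
for x in [2, 4, 6, 8]:
    constructed.add(x)

# Filtrative: start from an unrestricted universe and superpose
# constraints, each subtractive (like stacking color filters).
universe = set(range(10))
constraints = [lambda x: x % 2 == 0,   # keep evens
               lambda x: x > 0]        # drop zero
filtered = {x for x in universe if all(c(x) for c in constraints)}

print(sorted(constructed))      # [2, 4, 6, 8]
print(sorted(filtered))         # [2, 4, 6, 8]
print(constructed == filtered)  # True
```

The same extension is reached from below (by addition) and from above (by restriction), which is the elementary content of the claimed equivalence.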

"Thus, conspansive duality relates two complementary views of the universe, one based on the external (relative) states of a set of objects, and one based on the internal structures and dynamics of objects considered as language processors. The former, which depicts the universe as it is usually understood in physics and cosmology, is called ERSU, short for Expanding Rubber Sheet Universe, while the latter is called USRE (ERSU spelled backwards), short for Universe as a Self-Representational Entity. Simplistically, ERSU is like a set, specifically a topological-geometric point set, while USRE is like a self-descriptive nomological language. Whereas ERSU expands relative to the invariant sizes of its contents, USRE “conspands”, holding the size of the universe invariant while allowing object sizes and time scales to shrink in mutual proportion, thus preserving general covariance." - Langan, PCID, 2002

Reflections on a Self-Representing Universe:

"To put some flesh on this, the kind of duality I am talking about in this post is typified in physics by position-momentum or wave-particle duality. Basically, the structure of addition in flat space X is represented by waves f. Here f is expressed numerically as the momentum of the wave. But the allowed f themselves form a space X*, called ‘momentum space’. The key to the revolution of quantum mechanics was to think of X and X* as equally real, allowing Heisenberg to write down his famous Heisenberg commutation relations between position and momentum. The key was to stop thinking of waves as mere representations of a geometrical reality X but as elements in their own right of an equally real X*. The idea that physics should be position-momentum symmetric was proposed by the physicist Max Born around the birth of quantum mechanics and is called Born Reciprocity. This in turn goes back (probably) to ideas of Ernst Mach."
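The position-momentum duality described here is, in the discrete setting, just the Fourier transform and its inverse. A pure-Python sketch: a signal perfectly localized in position spreads uniformly over momentum space, and transforming back loses nothing:

```python
import cmath

def dft(xs):
    """Discrete Fourier transform: position space X -> momentum space X*."""
    n = len(xs)
    return [sum(xs[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(fs):
    """Inverse transform: momentum space X* -> position space X."""
    n = len(fs)
    return [sum(fs[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# A delta function: perfectly localized in position...
signal = [1, 0, 0, 0]
spectrum = dft(signal)
print([abs(f) for f in spectrum])   # [1.0, 1.0, 1.0, 1.0] -- spread over all momenta

# ...and the two spaces are equally real: transforming back recovers everything.
recovered = idft(spectrum)
print([round(abs(x), 10) for x in recovered])  # [1.0, 0.0, 0.0, 0.0]
```

Neither representation is privileged: the wave picture carries exactly the information of the point picture, which is the elementary sense in which X and X* are "equally real".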

"Relative Realism. Are attributions of any special realism uniform across contexts? Relative Realism holds that realism is relative to context of use. Abell accepts this also, for what is relevant depends on the state of the spectator - information in P is relevant to S. It is also predicted by McMahon's account of One Realism if users differ in picture-reading skills - that is, if they have different object-centered descriptions in iconic memory and so see different configurations in picture surfaces. Lifelike Realism requires a specification of the features that are central in our encounters with real objects. Do those features vary historically or culturally? If they do, then incompatible attributions of Lifelike Realism are true in different contexts. Thus Armstrong's account is also consistent with Relative Realism."

Algebraic Approach to Quantum Gravity: relative realism

"In the first of three articles, we review the philosophical foundations of an approach to quantum gravity based on a principle of representation-theoretic duality and a vaguely Kantian-Buddhist perspective on the nature of physical reality which I have called `relative realism'. Central to this is a novel answer to the Plato's cave problem in which both the world outside the cave and the `set of possible shadow patterns' in the cave have equal status. We explain the notion of constructions and `co'constructions in this context and how quantum groups arise naturally as a microcosm for the unification of quantum theory and gravity. More generally, reality is `created' by choices made and forgotten that constrain our thinking much as mathematical structures have a reality created by a choice of axioms, but the possible choices are not arbitrary and are themselves elements of a higher level of reality. In this way the factual `hardness' of science is not lost while at the same time the observer is an equal partner in the process. We argue that the `ultimate laws' of physics are then no more than the rules of looking at the world in a certain self-dual way, or conversely that going to deeper theories of physics is a matter of letting go of more and more assumptions. We show how this new philosophical foundation for quantum gravity leads to a self-dual and fractal-like structure that informs and motivates the concrete research reviewed in parts II, III. Our position also provides a kind of explanation of why things are quantized and why there is gravity in the first place, and possibly why there is a cosmological constant."

"Rather, I think that this deepest and most long-standing of all problems in fundamental physics still needs a revolutionary new idea or two for which we are still grasping. More revolutionary even than time-reversal. Far more revolutionary and imaginative than string theory. In this post I’ll take a personal shot at an idea — a new kind of duality principle that I think might ultimately relate gravity and information."

"3. Relative Realism

What the bicrossproduct models illustrate is the following general proposition:

there is no absolute physical reality as usually considered; rather, it is we who, in order to make sense of the Universe, artificially impose a division into ‘abstract laws of nature’ (in the form of abstract structures deemed to exist) and ‘measurements’ made or experiments done to illustrate them. I believe this division is necessary but it is also arbitrary in that the true picture of physical reality should be formulated in such a way as to be independent of this division."

"Illuminationism is a doctrine in theology according to which the process of human thought needs to be aided by God. It is the oldest and most influential alternative to naturalism in the theory of mind and epistemology. It was an important feature of ancient Greek philosophy, Neoplatonism, medieval philosophy, and in particular, the Illuminationist school of Persian Islamic philosophy."

"In the “Hidden Words” Bahá’u’lláh says, “Justice is to be loved above all.” Praise be to God, in this country the standard of justice has been raised; a great effort is being made to give all souls an equal and a true place.

615. When in the course of evolution, the stage of thought and reason has been reached, the human mind acts as a mirror reflecting the glory of God.

The face of nature is illumined, the grass, the stones, the hills and valleys shine; but they shine not of themselves, but because they reflect the rays of the sun. It is the sun which shines. In the same way, our minds reflect God. Those who live thinking good thoughts, doing good deeds, and with love in their hearts -- the minds of these become ever clearer, reflecting more and more perfectly the love of God, while the minds of those who live in ignorance and desire are clouded and obscured, and give forth His light but meagerly.

A stone reflects but slightly the rays of the sun; but if a mirror be held up, though it be small, the whole of the sun will be reflected in it, because the mirror is clear and bright. Just so it is with the minds of men and the Sun of Reality. The great Masters and Teachers so purified their minds by the love of God and of men that they became like polished mirrors, reflecting faithfully the Glory of God.

‘Abbas Effendi, His Life and Teachings,
by Myron H. Phelps, pp. 153-157."

"609. God is Love and Peace. God is Truth. God is Omniscience. God is without beginning and without end. God is uncreated and uncreating, yet the Source, the Causeless Cause. God is pure Essence, and cannot be said to be anywhere or in any place.

God is infinite; and as terms are finite, the nature of God cannot be expressed in terms, but as man desires to express God in some way, he calls God “Love” and “Truth,” because these are the highest things he knows. Life is eternal; so man, in order to express God’s infinity, calls God “Life.” But these things in themselves are not God. God is the Source of all, and all things that are, are mirrors reflecting His Glory.

But while God does not create, the first principle of God, Love, is the creative principle. Love is an outpour from God, and is pure spirit. It is one aspect of the Logos, the Holy Spirit. It is the immediate cause of the laws which govern nature, the endless verities of nature which science has uncovered. In brief, it is Divine Law and a Manifestation of God. This Manifestation of God is active, creative, spiritual. It reflects the positive aspect of God.

There is another Manifestation of God which is characterized by passivity, quiescence, inactivity. In itself it is without creative power. It reflects the negative aspect of God. This Manifestation is matter.

Matter, reflecting the negative aspect of God, is self-existent, eternal, and fills all space. Spirit, flowing out from God, permeates all matter. This spirit, Love, reflecting the positive and active aspect of God, impresses its nature upon the atoms and elements. By its power, they are attracted to each other under certain ordered relations, and thus, uniting and continuing to unite, give birth to worlds and systems of worlds. The same laws working under developed conditions bring into existence living beings. Spirit is the life of the form, and the form is shaped by the spirit. The evolution of life and form proceeds hand in hand. The powers of spirit are evolved by the experiences of the form, and the plasticity of the matter of the form is developed by the activity of the spirit. Working up through the mineral and vegetable kingdoms, sense-perception is reached in the animal, and the perfection of form is attained in man.

610. The forms or bodies of component parts, infinite in variety, which in the course of evolution spirit builds as the vehicles of its expression, are, because of the instability of matter, subject to dissolution. As they disappear, others are built following the same patterns, carrying on the characteristics of each.

611. Sense-perception gives rise to desire, desire to will, will to action, and action again to sense-perception. This chain ever repeats itself, and so the powers of thought, memory, reason, and the emotional capacities are evolved in spirit. These powers and capacities of spirit, expressed in individual human beings, constitute human characters.

Through these successive evolutionary steps, spirit develops characters having Divine attributes. The positive, creative aspect of God is reflected in them. Individuality is derived from expression in individual form. Self-consciousness accompanies individualized character, and the being thus endowed has the potentiality of rising to the knowledge of God.

Characters inspired by the universal human spirit continue in lines of specific developing types, as did species in the vegetable and animal kingdoms.

612. Similar types recur again and again, but without a continuing individual life from one human being to another. This recurrence may be likened to that of the seasons. Spring, summer, autumn and winter return in succession, each season the counterpart of the like season in the previous year -- the same, yet not the same. So flowers and fruits come this year from like seed or from the same bush or tree as those of last year, each in the line of succession of its kind, the same in essence, but differing in substance."

Glocal Memory: A New Perspective on Knowledge Representation, Neurodynamics, Distributed Cognition, and the Nature of Mind:

Mirror Neurons, Mirrorhouses, and the Algebraic Structure of the Self:

"The GOD, or primary teleological operator, is self-distributed at points of conspansion. This means that SCSPL evolves through its coherent grammatical processors, which are themselves generated in a background-free way by one-to-many endomorphism. The teleo-grammatic functionality of these processors is simply a localized "internal extension" of this one-to-many endomorphism; in short, conspansive spacetime ensures consistency by atemporally embedding the future in the past. Where local structure conspansively mirrors global structure, and global distributed processing "carries" local processing, causal inconsistencies cannot arise; because the telic binding process occurs in a spacetime medium consisting of that which has already been bound, consistency is structurally enforced."

"Idealism is the philosophical theory which maintains that the ultimate nature of reality is based on the mind or ideas. In the philosophy of perception, idealism is contrasted with realism, in which the external world is said to have an apparent absolute existence. Epistemological idealists (such as Kant) claim that the only things which can be directly known for certain are just ideas (abstraction). In literature, idealism means the thoughts or the ideas of the writer."

"Sometimes Hegel used the terms, immediate-mediate-concrete, to describe his triads. The most abstract concepts are those that present themselves to our consciousness immediately. For example, the notion of Pure Being for Hegel was the most abstract concept of all. The negative of this infinite abstraction would require an entire Encyclopedia, building category by category, dialectically, until it culminated in the category of Absolute Mind or Spirit (since the German word, 'Geist', can mean either 'Mind' or 'Spirit')."

‎"Algebraic semiotics is a new approach to meaning and representation, and in particular to user interface design, that builds on five important insights from the last hundred years:

Semiotics: Signs are not isolated items; they come in systems, and the structure of a sign is to a great extent inherited from the system to which it belongs. Signs do not have pre-given "Platonic" meanings, but rather their meaning is relational, because signs are always interpreted in particular contexts. (The first sentence reflects the influence of Saussure, the second that of Peirce.)

Social Context: Signs are used by people as part of their participation in social groups; meaning is primarily a social phenomenon; its purpose is communication. (This reflects some concerns of post-structuralism.)

Morphisms: If some class of objects is interesting, then structure preserving maps or morphisms of those objects are also interesting - perhaps even more so. For semiotics, these morphisms are representations. Objects and morphisms together form structures known as categories.

Blending and Colimits: If some class of objects is interesting, then putting those objects together in various ways is probably also interesting. Morphisms can be used to indicate that certain subobjects are to be shared in such constructions, and colimits of various kinds are a category theoretic formalization of ways to put objects together. In cognitive linguistics, blending has been identified as an important way to combine conceptual systems.

Algebraic Specification: Sign systems and their morphisms can be described and studied in a precise way using semantic methods based on equational logic that were developed for the theory of abstract data types."
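The morphism idea above can be made concrete in code. The following is a minimal, hypothetical Python sketch (not Goguen's actual formalism or tooling) of a sign system as a set of signs with a structural relation, and a representation as a structure-preserving map between two such systems:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignSystem:
    """A toy sign system: a set of signs plus a 'part-of' relation
    that gives the system its structure (a deliberate simplification)."""
    signs: frozenset
    part_of: frozenset  # pairs (a, b): sign a is a constituent of sign b

def is_morphism(f: dict, src: SignSystem, dst: SignSystem) -> bool:
    """A representation is a structure-preserving map: every source sign
    is mapped into the target system, and part-of structure is preserved."""
    if set(f) != set(src.signs):
        return False
    if not set(f.values()) <= set(dst.signs):
        return False
    return all((f[a], f[b]) in dst.part_of for (a, b) in src.part_of)

# A 'document' sign system represented in a 'web page' sign system.
doc = SignSystem(frozenset({"title", "body", "document"}),
                 frozenset({("title", "document"), ("body", "document")}))
page = SignSystem(frozenset({"h1", "div", "page"}),
                  frozenset({("h1", "page"), ("div", "page")}))

rep = {"title": "h1", "body": "div", "document": "page"}
print(is_morphism(rep, doc, page))  # True: the representation preserves structure
```

In this sketch a user-interface design is literally a morphism: a mapping of signs that keeps the constituent structure intact. Names like `part_of` are illustrative assumptions, not part of algebraic semiotics proper.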

Systems Science via Computational Semiotics and Generalized Information Theory:

Semiotic Aspects of Generalized Bases of Data:

"Even though most mathematicians do not accept the constructivist's thesis, that only mathematics done based on constructive methods is sound, constructive methods are increasingly of interest on non-ideological grounds. For example, constructive proofs in analysis may ensure witness extraction, in such a way that working within the constraints of the constructive methods may make finding witnesses to theories easier than using classical methods. Applications for constructive mathematics have also been found in typed lambda calculi, topos theory and categorical logic,which are notable subjects in foundational mathematics and computer science. In algebra, for such entities as toposes and Hopf algebras, the structure supports an internal language that is a constructive theory; working within the constraints of that language is often more intuitive and flexible than working externally by such means as reasoning about the set of possible concrete algebras and their homomorphisms.
Physicist Lee Smolin writes in Three Roads to Quantum Gravity that topos theory is "the right form of logic for cosmology" (page 30) and "In its first forms it was called 'intuitionistic logic'" (page 31). "In this kind of logic, the statements an observer can make about the universe are divided into at least three groups: those that we can judge to be true, those that we can judge to be false and those whose truth we cannot decide upon at the present time" (page 28).
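Smolin's three groups of statements can be mimicked with a three-valued, Kleene-style logic in which "cannot decide yet" propagates through the connectives. This is a rough sketch of the idea, not a formalization of the internal logic of a topos:

```python
# Three truth values: True, False, and None for "cannot decide at present".
def k_and(a, b):
    """Strong Kleene conjunction: one False side settles it as False;
    otherwise any undecided side leaves the whole undecided."""
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None

def k_not(a):
    """Negation: an undecided statement stays undecided."""
    return None if a is None else (not a)

def k_or(a, b):
    """Disjunction via De Morgan: one True side settles it as True."""
    return k_not(k_and(k_not(a), k_not(b)))

print(k_and(True, None))  # None: conjunction stays undecided
print(k_or(True, None))   # True: one true disjunct decides it
print(k_not(None))        # None: negating the undecided decides nothing
```

Note that `k_or(a, k_not(a))` is `None` when `a` is `None`: the law of excluded middle fails for undecided statements, which is the intuitionistic flavor Smolin is pointing at.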

"Constructivism is a theory of knowledge (epistemology) that argues that humans generate knowledge and meaning from an interaction between their experiences and their ideas. During infancy, it is an interaction between their experiences and their reflexes or behavior-patterns. Piaget called these systems of knowledge schemata."

"Constructivist epistemology is an epistemological perspective in philosophy about the nature of scientific knowledge. Constructivists maintain that scientific knowledge is constructed by scientists and not discovered from the world. Constructivists claim that the concepts of science are mental constructs proposed in order to explain our sensory experience. Constructivism believes that there is no single valid methodology and there are other methodologies for social science: qualitative research. It thus is opposed to positivism, which is a philosophy that holds that the only authentic knowledge is that which is based on actual sense experience."

"In the discipline of international relations, constructivism is the claim that significant aspects of international relations are historically and socially contingent, rather than inevitable consequences of human nature or other essential characteristics of world politics."

"A semiotic approach to culture views culture as a knowledge system. From this perspective, cultural forms have both symbolic and cognitive dimensions. As symbolic forms, culture comprises a set of objectively observable public institutions. A particular kind of handshake, a set of kinship terms, a method of preparing sago, an origin myth, an arrangement of house space, a conception of femaleness are all examples of possible cultural conventions. As a cognitive construct, culture comprises forms of knowledge embodied in cognitive models or schemata. It is by means of cultural schemata that objective cultural forms become available to the mind as one of its constituting features. Culture thus has a kind of double life as an objective social fact in the world and as a dimension of subjective experience. A semiotic view of culture invites us to bridge these perspectives."

Information Flow: A Web of Constraints on a Giant Global Graph:

The semiotics of control and modeling relations in complex systems

Distributed Knowledge Systems and Modeling Team, Modeling, Algorithms, and Informatics Group (CCS-3), Los Alamos National Laboratory, MS B265, Los Alamos, NM 87545, USA

"We provide a conceptual analysis of ideas and principles from the systems theory discourse which underlie Pattee's semantic or semiotic closure, which is itself foundational for a school of theoretical biology derived from systems theory and cybernetics, and is now being related to biological semiotics and explicated in the relational biological school of Rashevsky and Rosen. Atomic control systems and models are described as the canonical forms of semiotic organization, sharing measurement relations, but differing topologically in that control systems are circularly and models linearly related to their environments. Computation in control systems is introduced, motivating hierarchical decomposition, hybrid modeling and control systems, and anticipatory or model-based control. The semiotic relations in complex control systems are described in terms of relational constraints, and rules and laws are distinguished as contingent and necessary functional entailments, respectively. Finally, selection as a meta-level of constraint is introduced as the necessary condition for semantic relations in control systems and models.

Author Keywords: Models; Control; Semiotics; Semantic closure; Systems theory; Cybernetics"

Semiotic Brains and Artificial Minds: How Brains Make Up Material Cognitive Systems

Biological Evolution: A Semiotically Constrained Growth of Complexity

"Any living system possesses internal embedded description and exists as a superposition of different potential realisations, which are reduced in interaction with the environment. This reduction cannot be recursively deduced from the state in time present, it includes unpredictable choice and needs to be modelled also from the state in time future. Such non-recursive establishment of emerging configuration, after its memorisation via formation of reflective loop (sign-creating activity), becomes the inherited recursive action. It leads to increase of complexity of the embedded description, which constitutes the rules of generative grammar defining possible directions of open evolutionary process. The states in time future can be estimated from the point of their perfection, which represents the final cause in the Aristotelian sense and may possess a selective advantage. The limits of unfolding of the reflective process, such as the golden ratio and the golden wurf are considered as the canons of perfection established in the evolutionary process."

Chaos, Holofractals, (Ultra-)Complexity, Hyperincursion, Autopoiesis, Biological Information, Cybersemiosis, Systems Biology, Natural Computing, Intelligent Self-Design, Transductive Evolution, Superorganisms, Local-Global Confluence & Temporal Feedback Models:

Open Problems in Artificial Life:

‎"The challenges are classified under three broad issues: the transition to life, the evolutionary potential of life, and the relation between life and mind and culture. The challenges falling under the third issue are necessarily more speculative and open-ended, so this whole list may best be viewed as ten challenges plus four areas of investigation. Moreover, some of the questions about mind and culture interweave
scientific and nonscientific issues. Those issues are still important, though, not least because addressing them is probably the best way to clarify what in this area can be
known scientifically.

A. How does life arise from the nonliving?

1. Generate a molecular proto-organism in vitro.
2. Achieve the transition to life in an artificial chemistry in silico.
3. Determine whether fundamentally novel living organizations can exist.
4. Simulate a unicellular organism over its entire lifecycle.
5. Explain how rules and symbols are generated from physical dynamics in living systems.

B. What are the potentials and limits of living systems?

6. Determine what is inevitable in the open-ended evolution of life.
7. Determine minimal conditions for evolutionary transitions from specific to generic response systems.
8. Create a formal framework for synthesizing dynamical hierarchies at all scales.
9. Determine the predictability of evolutionary consequences of manipulating organisms and ecosystems.
10. Develop a theory of information processing, information flow, and information generation for evolving systems.

C. How is life related to mind, machines, and culture?

11. Demonstrate the emergence of intelligence and mind in an artificial living system.
12. Evaluate the influence of machines on the next major evolutionary transition of life.
13. Provide a quantitative model of the interplay between cultural and biological evolution.
14. Establish ethical principles for artificial life."


Supposedly, "ethical principles" are meant to avoid problems. But constraints influence outcomes, and if the desired outcomes are held invariant, the constraints themselves will be changed as needed.

"In the Baha'i consultation process, people are taught to present their ideas to the group for consideration, but not to hold fast to their own opinions.

The goal is to achieve a spirit of cooperative and harmonious decision-making. The ultimate decision may be a new idea bearing little resemblance to the original separate opinions of the people participating. The final outcome is often greater than the sum of the parts. Individual opinions and personalities do not prevail, and good, non-partisan, decisions are made. This process diminishes any dominance of personalities and strengthens the sense of community."