Enterprise Design Process:
*PARTS INTERACTING AROUND AN OVERARCHING BUSINESS PURPOSE
*NOT A CONGLOMERATE
*NOT NECESSARILY A GROUP WITH PARTS MORE OR LESS IN THE SAME BUSINESS
*NOT A FINANCIAL HOLDING
*THE DELIBERATE ARRANGEMENT OF FACTORS INTO A SYSTEM
*THE INTEGRATION OF INTERACTIONS INTO A REGULATED WHOLE
*A regulated set of relationships
*Interacting and interrelated parts
*Parts organised for a purpose
*a Whole with novel features
DEFINITION OF STRUCTURE
*Relationships that remain unchanged
*Duration of interest
*Stability and relative change
Process view: PURPOSE
INTRODUCES CONCEPT OF ENTERPRISE AS SYSTEM AS LINKED PROCESSES
BROADENS SCOPE OF POSSIBLE INTERVENTIONS
BASES OF DIFFERENTIATION AND INTEGRATION
-The whole is integrated at the top
-Optimisation of the parts yields optimisation of the whole
-The whole is integrated at the bottom
-Optimisation of the whole is different from optimisation of the parts
*Organise around outcomes, not tasks
*Let output consumers produce output
*Integrate information processing with real work producing the information
*Place decision making where work is performed and build control into process
*Treat geographically dispersed resources as centralised
*Link parallel activities instead of integrating results
*Capture information once and at source
M. Hammer, HBR, 1990
CHARACTERISTICS OF BUSINESS RE-ENGINEERING
*Re-work the transformation, not the output.
*Singular (insular) view (process) of the organisational structure
*Substitution of one basis for organisation for another
*Heavy dependence on IT perspective
*Patchwork of (some good) concepts; lacks rigour
*Transcends current boundaries
*Promotes questioning --- What framework?
*Stretches value chain thinking
Endo, Exo, Centro-teleon
(Gyuri Jaros & Anakrion Cloete)
Woven mat of processes:
*Sets of connected activities aimed at purpose
*Interlinked and intersecting processes
*INPUT, TRANSFORMATION, OUTPUT
*HAS PURPOSE AND GOALS
*MEASURES OF PERFORMANCE
*RIGIDITY, FLEXIBILITY & REDUNDANCY
*INVERSE OF LIKELIHOOD OF ACHIEVING ITS GOAL
-Low telentropy = good chance of achieving goal
-High telentropy = low chance of achieving goal
*TELENTROPY “=“ STRESS
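The telentropy bullets above can be made concrete with a toy measure. This is a hedged illustration only: the mapping below (the information-theoretic surprisal, -log p) is my own choice for "inverse of likelihood of achieving the goal"; the Biomatrix literature does not prescribe this formula, and `p_goal` is an assumed probability of goal achievement.

```python
import math

def telentropy(p_goal):
    """Toy telentropy measure: grows as goal achievement becomes less likely.

    Illustrative assumption: use the surprisal -log(p). The notes only
    say telentropy is the inverse of the likelihood of achieving the
    goal; they do not give a formula.
    """
    if not 0.0 < p_goal <= 1.0:
        raise ValueError("p_goal must be in (0, 1]")
    return -math.log(p_goal)

low = telentropy(0.9)   # good chance of achieving the goal -> low telentropy
high = telentropy(0.1)  # poor chance -> high telentropy ("stress")
```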
PURPOSE OF DESIGN PROCESS
*DESIGN A DESIGN: Model of what ought to be
*CRITICAL REFLECTION: Template for questioning design and reality
*ALIGNMENT: Building up SHARED model of how business works
*PARTICIPATION: Framework for participative design
Biomatrix Theory is a process and web-based systems theory. It is a meta theory which integrates the major systems approaches, models and theoretical concepts developed by other systems thinkers into one coherent theoretical framework. This integration is made possible by the unique conceptual contributions of Biomatrix theory.
The term Biomatrix is derived from the words bios (life) and matrix (mould, womb or pattern). Thus, it literally means pattern of life, or how life is organised. We use the term Biomatrix to describe the whole web of life or the web of all interacting systems on earth.
The fundamental unit of observation of Biomatrix Theory is purposeful, structured and regulated process, which is referred to as “activity system” (or in some of our research articles we also call it process system or teleon). Activity systems link up with each other to form supply chains across and along levels in the systems hierarchy. These supply chains interact with each other in a multitude of ways. In fact, one can view the whole web of life (i.e. the Biomatrix) as a web of interacting supply chains. This gives rise to a web-based view of the world. At various points in the web the interaction of activity systems becomes dense and gives rise to field-like entity systems.
Besides making unique conceptual contributions, Biomatrix Theory also integrates the various systems concepts, models and approaches of other systems thinkers (e.g. of the systems dynamics and ideal systems redesign schools, amongst others) into one coherent meta-systems theory. This integration of the field of General Systems Theory into Biomatrix Theory is a synergistic integration, whereby - to paraphrase a famous systems dictum - Biomatrix Theory is more than the sum of the conceptual parts derived from the various other systems approaches.
Biomatrix Theory makes some unique conceptual contributions to systems thinking, namely:
A distinction between activity and entity systems. Analogous to a fishing net that consists of threads and knots, the Biomatrix is a web that consists of activity systems (i.e. thread-like or vector-like systems) and entity systems (i.e. knot-like or field-like systems). Examples of an entity system are the planet, a society, an organization, an individual, a cell and an atom, while activity system refers to the various activities or functions performed by these entity systems. An entity system emerges from a field of interacting activity systems, yet is more than the sum of its participating activity systems.
This distinction between activity and entity systems is important, because the design and management of entity and activity systems involve different methods and theoretical guiding principles.
Entity systems are characterised by a three-fold organisation. Entity systems consist of three types of activity systems, namely outward, inward and self-directed systems in terms of their purpose.
Amongst others, this three-fold organisation gives rise to a generic organisational structure, namely a three-dimensional process matrix. (We regard this matrix structure as the new organisational structure of the information age, replacing that of the traditional hierarchy of the industrial age.)
Entity systems and activity systems interact and co-produce each other. Activity systems give rise to entity systems and vice versa. Thereby all systems co-produce each other and co-evolve.
This implies that continuous change is inevitable and that systems need to be designed and structured to manage ongoing change without losing stability, similar to the surfer who needs to keep moving to effect a stable ride, wave after wave.
Co-production and co-evolution occur across levels. An entity system emerges from the interaction with systems in the outer and inner environment and with itself. Thus, systems co-evolve across three levels.
This implies that a systems intervention needs to span three levels - the interaction of a system with its outer environment, its inner environment and itself (e.g. self-reference, self-reflection and self-management). Systems emerge in the middle from the co-production across three levels. This concept of the emerging middle is a contribution of Biomatrix Theory to evolutionary theory in general.
The Biomatrix consists of three interacting sub-webs. One can distinguish between three types of systems - systems that evolved in nature, systems that emerge from the mind of sentient beings and their interaction with each other (i.e. psychological and social systems) and systems produced by them (i.e. technological systems). We refer to these qualitatively different systems as the naturosphere, psycho-sociosphere and technosphere.
In spite of sharing the same organizational principles, these three types of systems also show differences in organization, thereby requiring different problem (dis)solving approaches and interventions. Managing the interface between them raises issues of carrying capacity and sustainability, amongst others.
Biomatrix Theory emphasises the duality of process and structure. Analogous to the wave-particle duality in physics, Biomatrix Theory emphasises the duality as well as complementing aspects of the process and structure perspective of a system and outlines the organising principles associated with each.
This duality of perspectives gives rise to a worldview that balances change and stability, connectivity and containment, amongst others.
Biomatrix Theory emphasizes the duality of organization in time and space. The existence and continuity of the Biomatrix in time and space give rise to different organising principles in terms of time and space.
Harmonious co-existence between systems requires management from both perspectives. Likewise, the sustainable development of systems must be managed from a temporal and spatial perspective.
Systems link up with each other through tapping. A contribution offered by a system to another system needs to be tapped by the receiving system in order to continue. Thus tapping facilitates the continuity of flow of substance, purpose and regulation across system boundaries. The tapping interface also highlights the boundaries between systems.
Without tapping there is no continuity of systems. If tapping does not take place, it can be mediated. During tapping the responsibility shifts from one system to another which has governance implications (e.g. power issues).
The substance of a system is comprised of mei fields. The substance of a system is an interacting field of matter, energy and information or mei fields. It is also referred to as the resources of the system.
Process and supply chain design and management need to consider the optimisation of mei flow and the splitting of mei fields during processing into products and by-products, which become part of different supply chains. The mei composition is also of relevance in resource management.
Systems have a conceptual and physical reality. Analogous to a house that is built (i.e. in physical reality) according to a plan (i.e. its conceptual reality), the physical (Mei) reality of a system is in-formed (i.e. put into form) according to its conceptual (meI) reality. Both types of systems are real with feedback loops between them.
A fault in the conceptual reality of a system will lead to a faulty physical reality of the system. A systems redesign represents a change in the conceptual reality of the system. A systemic performance management system in an organisation links the two realities, allowing continuous improvement of both.
There are seven forces of organisation in a system. Biomatrix Theory identifies seven aspects of systems organisation, namely ethos, aims, process, structure, governance, substance and environmental interaction. Each of these aspects represents a different force that co-produces the overall organisation of a system.
Optimal development of systems requires the development of the system in terms of each of the seven organising aspects (whereby each aspect is associated with different change management approaches), as well as the management of coherence and integration between the different systems aspects.
One can distinguish two types of change within a system. The seven forces of organisation interact with each other to give rise to two fundamentally different flows of change, namely a clockwise flow of intended change and a counter-clockwise one of inherent change.
This distinction of different types of change provides an understanding of how systems develop, change and transform and how one needs to manage change within a system.
The various spatial organising principles of Biomatrix Theory give rise to a generic systems dynamics model. The three types of activity systems within an entity system, the hierarchical organisation of entity systems, the continuity of activity systems along and across levels, and the multi-dimensionality of systems provide a generic systems dynamics. More specifically, the generic systems dynamics within the Biomatrix involves a multi-dimensional inward, outward and self-directed flow of purpose and its associated flow of substance. This generic systems dynamics provides a generic framework to analyse the flow and impact of change throughout the Biomatrix. It prompts the systematic, as well as systemic, identification of the variables of a systems dynamics model.
The generic systems dynamics allows for multi-dimensional interaction analysis along and across levels, which is useful in both systems analysis and systems (re)design.
Systems have a teleonic nature. Biomatrix Theory suggests that systems are teleonic, meaning that their activities are driven by a “purpose”. This purpose can be evolved, emergent or designed.
“Without vision, the systems perish”. A change in teleos (purpose or aim) will lead to a fundamental change of the system.
There is telentropy in a system. Until the outcomes of the activity have actually been achieved, systems have telentropy, implying uncertainty of outcomes. Put differently, the concept of telentropy links the teleonic (i.e. conceptual) reality of a system to its physical reality as expressed by the mei flow and configuration of the system, whereby telentropy refers to a misalignment or gap between the two. Because of the interaction of systems, this telentropy is passed from one system to another, following the generic systems dynamics of the Biomatrix.
Telentropy needs to be managed. The method of tracing the nature and flow of telentropy through the generic systems dynamics of the Biomatrix is referred to as telentropy tracing. It is also a useful tool in problem analysis within and across systems and to optimise the interaction between systems across systems boundaries (e.g. in supply chain management).
The spatial and temporal organising principles give rise to frameworks for problem analysis and problem (dis)solving. The spatial organisation of the Biomatrix provides various frameworks for systems analysis and ideal systems (re)design, while the temporal organising principles provide a generic methodology for managing change in a systemic manner.
This facilitates the (dis)solving of any type of organisational, societal or ecological problem, as well as the restoration of systems in nature and the transformation of social systems. It provides the methodology to develop strategies for dissolving society's most pervasive and perplexing problems (e.g. poverty, ecological deterioration, unsustainable societal development, pandemics and infrastructure problems), as well as methods to transform organisations and governments into learning organisations capable of implementing those strategies.
The use of the concept of autopoiesis in the theory of viable systems:
Multidisciplinary System Design Optimization:
Isoperformance: Analysis and Design of Complex Systems with Known or Desired Outcomes
Abstract. Tradeoffs between performance, cost and risk frequently arise during analysis and design of complex systems. Many such systems have both human and technological components and can be described by mathematical input-output models. Oftentimes such systems have known or desired outcomes or behaviors. This paper proposes “isoperformance” as an alternative approach for analyzing and designing systems by working backwards from a set of desired performance targets to a set of acceptable solutions. This is in contrast to the traditional “forward” process, which starts first in the design space and attempts to predict performance in objective space. Isoperformance can quantify and visualize the tradeoffs between determinants (independent design variables) of a known or desired outcome. For deterministic systems, performance invariant contours can be computed using sensitivity analysis and contour following. In the case of stochastic systems, the isoperformance curves can be obtained by regression analysis, given a statistically representative data set. Examples from opto-mechanical systems design and human factors are presented to illustrate specific applications of the method.
Isoperformance is a methodology for obtaining a performance invariant set of analysis or design solutions. These solutions approximate performance invariant contours or surfaces based on an empirical or deterministic system model. The word isoperformance by itself is used interchangeably with the isoperformance approach.
“The experience of the 1960’s has shown that for military aircraft the cost of the final increment of performance usually is excessive in terms of other characteristics and that the overall system must be optimized, not just performance.”
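The "backward" idea above can be sketched for a deterministic model. Everything here is invented for illustration: the model perf(x, y) = x * y and the grid are assumptions, and real applications compute contours via sensitivity analysis and contour following on the actual system model.

```python
def isoperformance_pairs(target, x_grid):
    """Toy deterministic model: perf(x, y) = x * y.

    Working backwards from a performance target, return (x, y) design
    points that all achieve exactly that target - a performance
    invariant (isoperformance) contour.
    """
    return [(x, target / x) for x in x_grid if x != 0]

# Every point on the contour delivers the same performance, so the
# designer can trade x against y using cost or risk instead.
contour = isoperformance_pairs(12.0, [1.0, 2.0, 3.0, 4.0, 6.0])
```

Since all solutions on the contour meet the target, selection among them shifts from performance to the "other characteristics" the aircraft quote warns about.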
Swarm Intelligence in the Optimization of Concurrent Service Systems:
Operational costs of service systems
A service system is a configuration of technology and organizational networks designed with the intention of providing service to the end users. Practical service systems include hospitals, banks, ticket-issuing and reservation offices, restaurants, ATMs, etc. The managerial authorities
are often pressed to drastically reduce the operational costs of active and fully functioning service systems, while the system designers are forced to design (new) service systems operating at minimal costs. Both these situations involve system optimization.
Any optimization problem involves the objective to be optimized and a set of constraints. In this study, we seek to minimize the total cost (tangible and intangible) to the system. The total cost can be divided into two broad categories - cost associated with the incoming customers
having to wait for the service (waiting cost) and that associated with the personnel (servers) engaged in providing service (service cost). Waiting cost is the estimate of the loss to business as some customers might not be willing to wait for the service and may decide to go to the competing organizations, while serving cost is mainly due to the salaries paid to employees.
Business enterprises and companies often mistakenly “throw” capacity at a problem by adding manpower or equipment to reduce the waiting costs. However, too much capacity decreases the profit margin by increasing the production and/or service costs. The managerial staff, therefore, is required to balance the two costs and make a decision about the provision of an optimum level of service. In recent years, customer satisfaction has become a major issue in marketing research and a number of customer satisfaction measurement techniques have been proposed.
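The waiting-cost/service-cost balance described above can be sketched with the standard M/M/c (Erlang C) queueing model. The rates and cost figures below are invented for illustration; a real study would estimate them from data.

```python
from math import factorial

def erlang_c(arrival, service, servers):
    """Probability that an arriving customer must wait (Erlang C, M/M/c)."""
    a = arrival / service              # offered load in Erlangs
    rho = a / servers                  # server utilisation
    if rho >= 1:
        return 1.0                     # unstable queue: everyone waits
    top = a**servers / factorial(servers) / (1 - rho)
    bottom = sum(a**k / factorial(k) for k in range(servers)) + top
    return top / bottom

def total_cost(arrival, service, servers, wait_cost, server_cost):
    """Total cost per unit time = staffing cost + expected waiting cost."""
    rho = arrival / (servers * service)
    if rho >= 1:
        return float("inf")
    p_wait = erlang_c(arrival, service, servers)
    lq = p_wait * rho / (1 - rho)      # mean number of waiting customers
    return server_cost * servers + wait_cost * lq

# Invented figures: 10 arrivals/hour, 4 services/hour per server,
# waiting valued at 100 per customer-hour, a server costs 20 per hour.
costs = {s: total_cost(10, 4, s, wait_cost=100, server_cost=20)
         for s in range(3, 9)}
best = min(costs, key=costs.get)       # staffing level balancing both costs
```

Note how "throwing capacity at the problem" (large `servers`) drives the staffing term up faster than it drives the waiting term down.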
Mechanism Design Theory:
"In mid-October, the Nobel prize for economics was awarded to Leo Hurwicz, Eric Maskin and Roger Myerson for their work on mechanism design. Newspaper economics correspondents made what they could of this news, but obviously had only the vaguest idea of what mechanism design is.
Mechanism design is too important to get this kind of treatment." http://www.prospectmagazine.co.uk/2007/11/rulesofthegame/
"In this paper we study the existence of Pareto equilibria of a multicriteria metagame. A theorem on existence of a Pareto equilibrium and a theorem on existence of a Nash equilibrium with weights are presented, which improve and extend some known results in the theory of games with multiple payoffs. Also relations between a Pareto equilibrium and other solution concepts of an optimization problem with multiple criteria are discussed." http://www.springerlink.com/content/c3v35023j7027vl3
"Multiobjective problems involve several competing measures of solution quality, and multiobjective evolutionary algorithms (MOEAs) and multiobjective problem solving have become important topics of research in the evolutionary computation community over the past 10 years. This is an advanced text aimed at researchers and practitioners in the area of search and optimization. The book focuses on how MOEAs and related techniques can be used to solve problems, particularly in the disciplines of science and engineering. Contributions by leading researchers deal with the concepts of problem, solution, objective, constraint, utility and preference, and show how these concepts are being investigated in current practice. The book is distinguished from other texts on MOEAs in that it is not primarily about the algorithms, nor specific applications, but about the concepts and processes involved in solving problems using a multiobjective approach. Each chapter contributes to the central, deep concepts and themes of the book: evaluating the utility of the multiobjective approach; discussing alternative problem formulations; showing how problem formulation affects the search process; and examining solution selection and decision-making. The book will be of benefit to researchers, practitioners and graduate students engaged with the underlying general theories involved in the multiobjective approach in fields such as natural computing and heuristics." Multi-competence Cybernetics: The Study of Multiobjective Artificial Systems and Multi-fitness Natural Systems
Multiobjective Water Resource Planning: http://tinyurl.com/2d4ykaw
"Machine learning usually has to achieve multiple targets, which are often conflicting with each other. For example, in feature selection, minimizing the number of features and maximizing feature quality are two conflicting objectives. It is also well realized that model selection has to deal with the trade-off between model complexity and approximation or classification accuracy. Traditional learning algorithms attempt to deal with multiple objectives by combining them into a scalar cost function so that multi-objective machine learning problems are reduced to single-objective problems. Recently, increasing interest has been shown in applying Pareto-based multi-objective optimization to machine learning, particularly inspired by the successful developments in evolutionary multi-objective optimization. It has been shown that the multi-objective approach to machine learning is particularly successful in 1) improving the performance of the traditional single-objective machine learning methods, 2) generating highly diverse multiple Pareto-optimal models for constructing ensembles and 3) achieving a desired trade-off between accuracy and interpretability of neural networks or fuzzy systems. Multi-objective machine learning covers the following main aspects:
Multi-objective clustering, feature extraction and feature selection
Multi-objective model selection to improve the performance of learning models, such as neural networks, support vector machines, decision trees, and fuzzy systems
Multi-objective model selection to improve the interpretability of learning models, e.g., to extract symbolic rules from neural networks, or to improve the interpretability of fuzzy systems
Multi-objective generation of ensembles
Multi-objective learning to deal with tradeoffs between plasticity and stability, long-term and short-term memories, specialization and generalization." Multi-Objective Machine Learning, International Journal of Systemics, Cybernetics and Informatics, M. R. Meybodi
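The Pareto-based approach running through the list above rests on nondominated filtering. A minimal sketch, using an invented error-vs-model-size tradeoff (both objectives minimised):

```python
def dominates(a, b):
    """True if a is at least as good as b in every objective and
    strictly better in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical models scored as (classification error, model size).
models = [(0.10, 50), (0.12, 20), (0.08, 90), (0.12, 60), (0.15, 15)]
front = pareto_front(models)  # (0.12, 60) is dominated by (0.10, 50)
```

The surviving front contains the highly diverse Pareto-optimal models from which, for example, an ensemble could be built.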
"In typical strategic interactions under incomplete information, different types (of a player) can choose from among a menu of different actions (strategies) that comprises the possibility that they mimic the behavior of other types (of the same, or of another player). Incentive compatibility conditions ensure that different types (of each player) align themselves such that they can be identified by their equilibrium choices. Typically, they are used to prevent that some type profits from copying another type's action (given the other types do not disguise themselves behind others' choices). More generally, incentive compatibility conditions force a desired constellation of choices to form a strategic equilibrium for a given array of types. In particular, they might as well ensure that it be worthwhile for different types to choose the same action (the types pool on an action). Yet in most economic problems, incentive compatibility conditions serve to induce a strategic equilibrium which reveals the players' private information by having them choose different 'characteristic' equilibrium actions, i.e. they have the types 'sort themselves out'." Incentive compatibility (SFB 504: Glossary)
"In a distributed/open environment where agents are self-interested and goal oriented, they might pursue any means available to them to maximize their own utility. That could lead to undesirable situations where some agents would try to influence the solving process towards solutions that are more preferable to them, but not necessarily acceptable to others, or sub optimal in any case. It would be desirable for the agents to behave truthfully during the entire solving process, otherwise the optimality/fairness of the final solution is not assured.
We will investigate possible ways to design algorithms/protocols that are incentive compatible in the sense that it's always individually rational for each rational agent to behave truthfully. To this end, we will look at the distributed problem solving process from a game theoretic perspective, and use promising schemes like the Clarke tax or side payments to devise mechanisms that motivate the agents to behave truthfully.
Again, as with the previous point, collusion is a problem of big concern. Coalitions of malicious agents acting together in a coordinated fashion could possibly circumvent the measures we are trying to take in order to ensure that everybody behaves truthfully. For example, in the case of the Clarke tax, it has been shown that even a coalition of two agents can manipulate the system in such a way that their preferred outcome is chosen, and they still have to pay no tax.
We are going to investigate possible ways to achieve incentive compatibility and at the same time ensure to the largest extent possible that coalitions of malicious agents cannot manipulate the system in their own interest. Possible methods are either “naturally robust” against collusion, or rendered so through cryptographic solutions or randomization techniques." Incentive Compatibility
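A minimal sketch of the Clarke (pivotal) mechanism referred to above, for a single collective choice between outcomes. The agents, outcomes and valuations are invented; this shows only the basic tax computation, not the collusion-resistance issue the passage raises.

```python
def clarke_mechanism(valuations):
    """Clarke (pivotal) mechanism for one collective choice.

    valuations: one dict per agent mapping outcome -> declared value.
    The outcome maximising total declared value is chosen; each agent
    pays the harm its presence imposes on the others' welfare.
    """
    outcomes = list(valuations[0])
    def total(o, exclude=None):
        return sum(v[o] for i, v in enumerate(valuations) if i != exclude)
    chosen = max(outcomes, key=total)
    taxes = []
    for i in range(len(valuations)):
        best_without_i = max(total(o, exclude=i) for o in outcomes)
        taxes.append(best_without_i - total(chosen, exclude=i))
    return chosen, taxes

# Invented example: three agents choose between outcomes "A" and "B".
prefs = [{"A": 10, "B": 0}, {"A": 0, "B": 6}, {"A": 0, "B": 5}]
chosen, taxes = clarke_mechanism(prefs)  # "B" wins 11 to 10; agents 1 and 2 are pivotal
```

Only the pivotal agents (those whose presence changes the outcome) pay a tax, which is what makes truthful declaration individually rational in the single-agent case.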
"The Design Decisions Laboratory was established at Carnegie Mellon University by Professor Jeremy J. Michalek in 2005. The lab develops theories and tools to understand and assist decision-making in design and product development. The group is interested in the preferences and economics that drive design tradeoff decisions as well as the impact of those decisions on public and private stakeholders. Drawing upon research in economics, econometrics, marketing and public policy as well as engineering and design optimization, the lab pursues three primary thrust areas: Systems Optimization: Develop fundamental knowledge and new methods for multidisciplinary design and complex systems optimization; Design for Market Systems: Measure and model consumer choice in the marketplace to optimize engineering systems for profitability; and Green Design & Environmental Policy: Study the effects of economics, competition and public policy on design decisions and the resulting environmental impact of those decisions." Design Decisions Wiki
"Back in the early ‘90s the term ‘Product Semantics’ coined by Klaus Krippendorf and Reinhart Butter helped to define the meaning of information transferred by product designers through product forms. They state that the mantra of product semantics is not “form follows function” but rather “form follows meaning” and that designers are part of a two-part equation of designer and user. One of the obvious problems with Product Semantics theory is that there can never be a truly one-to-one direct translation from the designer’s intended meanings and the meanings interpreted by users. The term ‘teleosemantics’ comes from information theories in genetic research. Teleosemantic theory generally serves as a means to elucidate an involvement relationship between organisms and their environments. This paper proposes the argument that the process of designing is ‘teleosemantic’ by nature paralleling teleosemantic theories of DNA as information systems and that the inner workings within the gene parallels the relationship between designers and users. When looking at product design through a Product Teleosemantic lens, a designer’s intentions would no longer be seen as invalidated by misinterpretation but rather validated by reinterpretations that lead to new ways of product usage." Product Teleosemantics: The Next Stage in the Evolution of Product Semantics, 
"Since its coinage in 1984, the use of “product semantics” has mushroomed. In 2009, a Google search identified over 18,000 documents referring to it. The semantics of artifacts has become of central importance in courses taught at leading design departments of many universities all over the world... It has also permeated other disciplines, notably ergonomics, marketing, cognitive engineering. Reviews can be found by writers on design theory, design history, corporate strategy, national design policy, design science studies, participatory design, interaction design, human-computer interaction, and cybernetics."
"The term affordance was coined by Gibson as a part of the theory of direct perception, also known as the Ecological Approach, to refer to the actionable properties between the environment and the organism that lives in the environment. According to the paradigms of cognitive psychology, human behaviors, such as thinking, acting and perceiving, are guided by mental schemata or cognitive model, which are mainly based on their previous experience and knowledge. In contrast, Gibson’s theory of direct perception stresses that attributes of an object could provide effective perceptual information about the object itself. In short, “The object offers what it does because it is what it is”. Essential to this theory is “the reciprocal relationship between animal and environment”, and the notion of affordance was developed to express the property of the environment in relation to the organism that lives within."
"Sarit Kraus is concerned here with the cooperation and coordination of intelligent agents that are self-interested and usually owned by different individuals or organizations. Conflicts frequently arise, and negotiation is one of the main mechanisms for reaching agreement. Kraus presents a strategic-negotiation model that enables autonomous agents to reach mutually beneficial agreements efficiently in complex environments. The model, which integrates game theory, economic techniques, and heuristic methods of artificial intelligence, can be automated in computer systems or applied to human situations. The book provides both theoretical and experimental results." Strategic Negotiation in Multiagent Environments
Two-Level of Nondominated Solutions Approach to Multiobjective Particle Swarm Optimization:
In multiobjective particle swarm optimization (MOPSO) methods, selecting the local best and the global best for each particle of the population has a great impact on the convergence and diversity of solutions, especially when optimizing problems with a high number of objectives. This paper presents a two-level of nondominated solutions approach to MOPSO. The ability of the proposed approach to detect the true Pareto optimal solutions and capture
the shape of the Pareto front is evaluated through experiments on well-known non-trivial test problems. The diversity of the nondominated solutions obtained is demonstrated through different measures. The proposed approach has been assessed through a comparative study with the reported results in the literature.
Categories and Subject Descriptors
I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search – heuristic methods.
"This school of thought contends that the PSO algorithm and its parameters must be chosen so as to properly balance between exploration and exploitation to avoid premature convergence to a local optimum yet still ensure a good rate of convergence to the optimum."
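The exploration/exploitation balance in the quote is set by the canonical PSO velocity update, sketched below. The parameter values (w, c1, c2) are common defaults, not values taken from the quoted source, and the deterministic stand-in for the random draws exists only to make the example checkable.

```python
import random

def pso_step(position, velocity, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5, rng=random):
    """One canonical PSO velocity/position update for a single particle.

    w  : inertia weight  (larger -> more exploration)
    c1 : cognitive pull toward the particle's own best position (pbest)
    c2 : social pull toward the swarm's best position (gbest)
    """
    new_x, new_v = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        r1, r2 = rng.random(), rng.random()
        v_next = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_v.append(v_next)
        new_x.append(x + v_next)
    return new_x, new_v

class _Half:
    """Deterministic stand-in for the random draws (always 0.5)."""
    @staticmethod
    def random():
        return 0.5

x, v = pso_step([0.0], [1.0], [2.0], [4.0], rng=_Half)
```

Shrinking w too far makes the swarm converge prematurely on a local optimum; raising it too far slows convergence, which is exactly the tradeoff the quote describes.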
"Groups collaborate to create value that their members cannot create through individual effort. Collaboration, however, engenders interpersonal, social, political, cognitive, and technical challenges. Groups can improve key outcomes using collaboration technologies, but any technology that can be used well can also be used badly; IS/IT artifacts do not assure successful collaboration. The value of a collaboration technology can only be realized in the larger context of a collaboration system, a combination of actors, hardware, software, knowledge, and work practices to advance groups toward their goals. Designers of collaboration systems must therefore consider many issues when creating a new collaboration system. This track seeks new work from researchers in many disciplines to foster a growing body of exploratory, theoretical, experimental, and applied research that could inform design and deployment choices for collaboration systems. We seek papers that address individual, group, organizational, and social factors that affect outcomes of interest among people making joint efforts toward a group goal. We look for papers from the range of epistemological and methodological perspectives."
"This is the home page for the Service Science space.
The science of studying service is evolving. The call-to-action was heard around the world. Many institutions have integrated or are beginning to integrate their studies with the service perspective. These pages offer a collection of resources for your use for the development of courses, case studies and degree curricula. We continue to update the site as new developments occur."
Steps Towards a Science of Service Systems:
"A service system can be understood as a system composed of people and technologies that adaptively computes and adjusts to the changing value of knowledge in the system."
A Research Manifesto for Services Science:
"There is, apart from temporal empirical knowledge (i.e. implying duration), a further, non-temporal access to cognition of temporal structures. A non-temporal access enables us to explain subjectively (in each case) varying empirical knowledge of duration, as well as insight and precognition."
"Towards the end of the book, Yau makes a point that I very much agree with: fundamental physics may get (or may already have gotten) to the point where it can no longer rely upon frequent inspiration from unexpected experimental results, and when that happens one avenue left to try is to get inspiration from mathematics:
"So that’s where we stand today, with various leads being chased down – only a handful of which have been discussed here – and no sensational results yet. Looking ahead, Shamit Kachru, for one, is hopeful that the range of experiments under way, planned, or yet to be devised will afford many opportunities to see new things. Nevertheless, he admits that a less rosy scenario is always possible, in the event that we live in a frustrating universe that affords little, if anything, in the way of empirical clues…
What we do next, after coming up empty-handed in every avenue we set out, will be an even bigger test than looking for gravitational waves in the CMB or infinitesimal twists in torsion-balance measurements. For that would be a test of our intellectual mettle. When that happens, when every idea goes south and every road leads to a dead end, you either give up or try to think of another question you can ask – questions for which there might be some answers.
Edward Witten, who, if anything, tends to be conservative in his pronouncements, is optimistic in the long run, feeling that string theory is too good not to be true. Though, in the short run, he admits, it’s going to be difficult to know exactly where we stand. “To test string theory, we will probably have to be lucky,” he says. That might sound like a slender thread upon which to pin one’s dreams for a theory of everything – almost as slender as a cosmic string itself. But fortunately, says Witten, “in physics there are many ways of being lucky.”
I have no quarrel with that statement and more often than not, tend to agree with Witten, as I’ve generally found this to be a wise policy. But if the physicists find their luck running dry, they might want to turn to their mathematical colleagues, who have enjoyed their fair share of that commodity as well.""
"What is topology? Is it like geometry?
Geometry is specific and topology is general. Topologists study larger patterns and categories of shapes. For example, in geometry, a cube and a sphere are distinct. But in topology they are the same because you can deform one into the other without cutting through the surface. The torus, a sphere with a hole in the middle, is a different form. It is clearly distinct from the sphere because you cannot deform a torus into a sphere no matter how you twist it.
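The sphere/torus distinction described in that answer is exactly what a topological invariant captures. The Euler characteristic V − E + F is the classic example; the cell counts below are standard textbook values, not taken from the interview:

```python
def euler_characteristic(V, E, F):
    """V - E + F for a cell decomposition of a surface. The value is a
    topological invariant: it is unchanged by the continuous deformations
    described above, so it can tell a sphere from a torus."""
    return V - E + F

# A cube's surface is topologically a sphere: 8 vertices, 12 edges, 6 faces.
sphere_chi = euler_characteristic(8, 12, 6)   # 2
# A minimal cell structure on the torus: 1 vertex, 2 edges, 1 face.
torus_chi = euler_characteristic(1, 2, 1)     # 0
```

Because 2 ≠ 0, no amount of stretching turns one surface into the other, which is the point the interview makes informally.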
Does that mean geometry and topology are really two perspectives on the same thing?
Yes. It is like Chinese literature. A poem might describe a farewell between lovers. But in the language of the poem, instead of a man and woman, there is a willow tree, where the leaves are soft and hanging down. The way the branch is hanging down is like the feeling of the man and the woman wanting to be together. Geometry gives us a structure of that willow tree that is solid and extensive. Topology describes the overall shape of the tree without the details—but without the tree to start with, we would have nothing.
It has always amazed me to observe how different groups of people look at the same subject. My friends in physics look at space-time purely from the perspective of real physics, yet the general theory of relativity describes space-time in terms of geometry, because that’s how Einstein looked at the problem.
When you looked at the world through the lens of geometry and topology, what did you learn?
That nonlinear equations were fundamental because in nature, curves abound. Climate isn’t linear. If the wind blows stronger that way, it may cause more trouble over there; it may even depend on the geometry of the earth. Usually you see the stock market described by linear equations and straight lines, but that is not really correct. The stock market fluctuates up and down in a nonlinear way. The Einstein equation described the curvature of the universe, and it was nonlinear. I ended up learning nonlinear equations from a master, although I didn’t know he was a master at the time. His name was Charles Morrey, and he was a classical gentleman. He always dressed in suits in class. He was a very nice man. Even if I was the only one there, he would lecture to me, just as if he were lecturing to the whole class."
" Studies of Nonlinear Phenomena in Life Science
Why a Watched Kettle Never Boils
by Susie Vrobel (Institute for Fractal Research, Germany)
This book provides an interdisciplinary introduction to the notion of fractal time, starting from scratch with a philosophical and perceptual puzzle. It explains how subjective duration varies depending on the way we embed current content into contexts.
The complexity of our temporal perspective depends on the number of nestings performed, i.e. on the number of contexts taken into account. This temporal contextualization is described against the background of the notion of fractal time. Our temporal interface, the Now, is portrayed as a fractal structure which arises from the distribution of content and contexts in two dimensions: the length and the depth of time. The leitmotif of the book is the notion of simultaneity, which determines the temporal structure of our interfaces.
Recent research results are described which present and discuss a number of distorted temporal perspectives. It is suggested that dynamical diseases arise from unsuccessful nesting attempts, i.e. from failed contextualization. Successful nesting, by contrast, manifests itself in a “win-win handshake” between the observer-participant and his chosen context. The answer as to why a watched kettle never boils has repercussions in many a discipline. It would be of immense interest to anyone who works in the fields of cognitive and complexity sciences, psychology and the neurosciences, social medicine, philosophy and the arts.
When Time Slows Down
The Fractal Structure of the Now: Time's Length, Depth and Density
Fractal Temporal Perspectives
The View from Within: In-Forming Boundaries
Contextualization: Extended Observer-Participants
Temporal Binding: Synchronizing Perceptions
Nesting Speed: Global vs Local Perspectives
Duration: Distributing Content and Context
Modifying Duration I: Nesting and De-Nesting
Modifying Duration II: Time Condensation
Defining Boundaries: Why is It Always Now?
Outlook: Here There be Dragons.
Readership: Cognitive scientists, philosophers working on the topic of time, cyberneticists and systems theorists focusing on nested systems and connectivity, mathematicians and logicians working on fractals and nested systems, psychologists and psychoanalysts interested in contextualization abilities, psycholinguists and neuro-scientists working on synchronization, medical practitioners focusing on integrative health care, theoretical physicists concerned with time, nonlinear dynamics, causality and connectedness and teachers contemplating the effect of temporal contextualization.
Human electroencephalograms seen as fractal time series:
Mathematical analysis and visualization
The paper presents a novel technique of nonlinear spectral analysis, which has been used for processing encephalograms of humans. This technique is based on the concept of generalized entropy of a given probability distribution, known as the Rényi entropy, which allows defining the set of generalized fractal dimensions of an encephalogram (EEG)
and determining fractal spectra of encephalographic signals. Unlike the Fourier spectra, the spectra of fractal dimensions contain information on both the frequency and amplitude characteristics of the EEG and can be used together with well-accepted techniques of EEG analysis as an enhancement of the latter. Powered by volume visualization of the brain activity, the method provides new clues for understanding the mental processes in humans.
© 2005 Elsevier Ltd. All rights reserved.
Keywords: Fractal time series; Generalized entropy; EEG; Visualization; FRep; Implicit functions
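The Rényi entropy that abstract builds on has a simple closed form for a discrete distribution. A minimal sketch follows; the EEG-specific fractal-dimension pipeline from the paper is not reproduced here:

```python
import math

def renyi_entropy(p, q):
    """Rényi entropy of order q for a discrete probability distribution p.
    As q -> 1 it reduces to the Shannon entropy; varying q over a range
    yields the spectrum of generalized (fractal) dimensions the paper uses."""
    p = [x for x in p if x > 0]          # 0 * log 0 is taken as 0
    if abs(q - 1.0) < 1e-9:              # Shannon limit at q = 1
        return -sum(x * math.log(x) for x in p)
    return math.log(sum(x ** q for x in p)) / (1.0 - q)
```

A quick sanity check: for the uniform distribution over n outcomes the value is log n for every order q, consistent with the definition.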
Some researchers in this field posit that positive psychology can be delineated into three overlapping areas of research:
Research into the Pleasant Life, or the "life of enjoyment", examines how people optimally experience, forecast, and savor the positive feelings and emotions that are part of normal and healthy living (e.g. relationships, hobbies, interests, entertainment, etc.).
The study of the Good Life, or the "life of engagement", investigates the beneficial effects of immersion, absorption, and flow that individuals feel when optimally engaged with their primary activities. These states are experienced when there is a positive match between a person's strengths and the task they are doing, i.e. when they feel confident that they can accomplish the tasks they face. (See related concept, Self-efficacy)
Inquiry into the Meaningful Life, or "life of affiliation", questions how individuals derive a positive sense of well-being, belonging, meaning, and purpose from being part of and contributing back to something larger and more permanent than themselves (e.g. nature, social groups, organizations, movements, traditions, belief systems).
These categories appear to be neither widely disputed nor adopted by researchers across the 12 years that this academic area has been in existence.
"The development of the Character Strengths and Virtues (CSV) handbook represents the first attempt on the part of the research community to identify and classify the positive psychological traits of human beings. Much like the Diagnostic and Statistical Manual of Mental Disorders (DSM) of general psychology, the CSV provides a theoretical framework to assist in understanding strengths and virtues and for developing practical applications for positive psychology. This manual identifies six classes of virtue (i.e., "core virtues"), made up of twenty-four measurable character strengths.
The introduction of CSV suggests that these six virtues are considered good by the vast majority of cultures and throughout history and that these traits lead to increased happiness when practiced. Notwithstanding numerous cautions and caveats, this suggestion of universality hints that in addition to trying to broaden the scope of psychological research to include mental wellness, the leaders of the positive psychology movement are challenging moral relativism and suggesting that we are "evolutionarily predisposed" toward certain virtues, that virtue has a biological basis.
The organization of these virtues and strengths is as follows:
Wisdom and Knowledge: creativity, curiosity, open-mindedness, love of learning, perspective, innovation
Courage: bravery, persistence, integrity, vitality
Humanity: love, kindness, social intelligence
Justice: citizenship, fairness, leadership
Temperance: forgiveness and mercy, humility, prudence, self control
Transcendence: appreciation of beauty and excellence, gratitude, hope, humor, spirituality
It should be noted that the organization of these virtues into six groups is contested. It has been suggested that the 24 strengths identified are more accurately grouped into just 3 or 4 categories: Intellectual Strengths, Interpersonal Strengths, and Temperance Strengths; or, alternatively, Interpersonal Strengths, Fortitude, Vitality, and Cautiousness.
Mindfulness, defined as actively searching for novelty, is also characterized as non-judging, non-striving, accepting, patient, trusting, open, curious, and letting go. Its benefits include reduction of stress, anxiety, depression, and chronic pain.
Flow, or a state of absorption in one's work, is characterized by intense concentration, loss of self-awareness, a feeling of control, and a sense that "time is flying." Flow is an intrinsically rewarding experience, and it can also help one achieve a goal (e.g. winning a game) or improve skills (e.g. becoming a better chess player).
Spirituality is associated with mental health, managing substance abuse, marital functioning, parenting, and coping. It has been suggested that spirituality also leads to finding purpose and meaning in life. This research on the benefits of spirituality is limited, however, to mostly studies using cross-sectional questionnaires.
Self-efficacy is one's belief in one's ability to accomplish a task by one's own efforts. Low self-efficacy is associated with depression; high self-efficacy can help one overcome abuse, overcome eating disorders, and maintain a healthy lifestyle. High self-efficacy also improves the immune system, aids in stress management, and decreases pain. A related but somewhat differing concept is Personal effectiveness which is primarily concerned with the methodologies of planning and implementation of accomplishment.
Learned optimism is the idea that a talent for joy, like any other, can be cultivated. It is contrasted with learned helplessness. Learning optimism is done by consciously challenging self talk if it describes a negative event as a personal failure that permanently affects all areas of the person's life."
If the "hows" of psychological health were "one size fits all" (maybe good for an industrial economist), then we'd be(come) a rather boring species.