Koino-: common, mutual, shared + telos: constraint, boundary, initiation.
Koinotely: consummate closure with all its results to function at full-capacity effectiveness.
Conterminous: within a common boundary.
Self-Adjoint: closed under the involution operation.
Intentionality: the circular process of generalization/abstraction of input and specification/concretization of output among scale-invariant infocognitive operators (telors) and attributive relations (telons).
Monday, December 19, 2011
Relativistic, Event-Driven, Component-Based, Service-Oriented Modeling
Image: Service Oriented Model
http://www.w3.org/TR/ws-arch/#service_oriented_model
"Where a system determines its own composition, properties and evolution independently of external laws or structures, it can determine its own meaning, and ensure by its self-configuration that its inhabitants are crucially implicated therein.
...
A (Minkowski) spacetime diagram is a kind of “event lattice” in which nodes represent events and their connective worldlines represent the objects that interact in those events. The events occur at the foci of past and future light cones to which the worldlines are internal. If one could look down the time axis of such a diagram at a spacelike cross section, one would see something very much like a Venn diagram with circles corresponding to lightcone cross sections. This rotation of the diagram corresponds to conspansive dualization, converting a spatiotemporal lattice of worldlines and events to a layered series of Venn diagrams.
...
Uniting the theory of reality with an advanced form of computational language theory, the CTMU describes reality as a Self-Configuring Self-Processing Language or SCSPL, a reflexive intrinsic language characterized not only by self-reference and recursive self-definition, but full self-configuration and self-execution (reflexive read-write functionality). SCSPL reality embodies a dual-aspect monism consisting of infocognition, self-transducing information residing in self-recognizing SCSPL elements called syntactic operators."
http://www.ctmu.net/
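To make the "rotation" described above concrete, here is a minimal Python sketch (my own illustration, not from the quoted sources) that takes a spacelike slice of a toy event lattice and reads off each light cone's cross-section as a circle; overlapping circles are the Venn-diagram intersections the passage mentions. Units with c = 1 and the event coordinates are assumptions for illustration only.

# Sketch: spacelike cross-section of light cones as Venn-style circles.
# Units with c = 1; the events below are hypothetical examples.
import math

events = [  # (t, x, y) coordinates of events in a Minkowski diagram
    (0.0, 0.0, 0.0),
    (1.0, 2.0, 1.0),
    (2.5, -1.0, 0.5),
]

def lightcone_cross_section(events, t_slice):
    """Return (center, radius) circles where each event's future light cone
    meets the spacelike plane t = t_slice."""
    circles = []
    for (t, x, y) in events:
        if t_slice >= t:                           # only cones that have reached the slice
            circles.append(((x, y), t_slice - t))  # radius = c * (t_slice - t), c = 1
    return circles

def overlap(c1, c2):
    """Two cross-section circles overlap iff their centers are closer than
    the sum of their radii -- the 'Venn diagram' intersections."""
    (x1, y1), r1 = c1
    (x2, y2), r2 = c2
    return math.hypot(x1 - x2, y1 - y2) < r1 + r2

circles = lightcone_cross_section(events, t_slice=3.0)
for i in range(len(circles)):
    for j in range(i + 1, len(circles)):
        print(i, j, "overlap" if overlap(circles[i], circles[j]) else "disjoint")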
"Using the concept of adjunctive correspondence, for the comprehension of the structure of a complex system, developed in Part I, we introduce the notion of covering systems consisting of partially or locally defined adequately understood objects. This notion incorporates the necessary and sufficient conditions for a sheaf theoretical representation of the informational content included in the structure of a complex system in terms of localization systems. Furthermore, it accommodates a formulation of an invariance property of information communication concerning the analysis of a complex system.
Keywords: Complex Systems, Information Structures, Localization Systems, Coverings, Adjunction, Sheaves."
http://philsci-archive.pitt.edu/1237/1/axiomath2.pdf
"Relativistic programming (RP) is a style of concurrent programming where instead of trying to avoid conflicts between readers and writers (or writers and writers in some cases) the algorithm is designed to tolerate them and get a correct result regardless of the order of events. Also, relativistic programming algorithms are designed to work without the presences of a global order of events. That is, there may be some cases where one thread sees two events in a different order than another thread (hence the term relativistic because in Einstein's theory of special relativity the order of events is not always the same to different viewers)."
http://en.wikipedia.org/wiki/Relativistic_programming
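As a rough illustration of the reader/writer tolerance described above, here is a small read-copy-update-style sketch in Python (my own example, not taken from the article): the writer publishes fresh copies instead of locking, so readers never block and different readers may legitimately observe different versions.

# Sketch of a read-copy-update (RCU) flavor of relativistic programming:
# readers never block and may observe different versions; the single writer
# publishes a new copy instead of mutating shared state in place.
import threading, time

class Box:
    def __init__(self, value):
        self._value = value          # readers see whichever version is current

    def read(self):
        return self._value           # no lock: a reader tolerates concurrent writers

    def update(self, fn):
        # copy-then-publish: build the new version privately, then swap the reference
        new_value = fn(dict(self._value))
        self._value = new_value      # single reference assignment is the "publish"

shared = Box({"count": 0})

def reader(name):
    for _ in range(3):
        print(name, "sees", shared.read()["count"])
        time.sleep(0.01)

def writer():
    for _ in range(3):
        shared.update(lambda d: {**d, "count": d["count"] + 1})
        time.sleep(0.01)

threads = [threading.Thread(target=reader, args=(f"r{i}",)) for i in range(2)]
threads.append(threading.Thread(target=writer))
for t in threads: t.start()
for t in threads: t.join()

Running it shows the point: the two readers may print different counts at the "same" step, because there is no global order imposed on their observations.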
"Event-driven architecture can complement service-oriented architecture (SOA) because services can be activated by triggers fired on incoming events. This paradigm is particularly useful whenever the sink does not provide any self-contained executive.
SOA 2.0 evolves the implications of SOA and EDA architectures to a richer, more robust level by leveraging previously unknown causal relationships to form a new event pattern. This new business intelligence pattern triggers further autonomous human or automated processing that adds exponential value to the enterprise by injecting value-added information into the recognized pattern, which could not have been achieved previously.
Computing machinery and sensing devices (like sensors, actuators, controllers) can detect state changes of objects or conditions and create events which can then be processed by a service or system. Event triggers are conditions that result in the creation of an event."
http://en.wikipedia.org/wiki/Event_driven_architecture
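A minimal sketch of the EDA/SOA pairing just described, in Python: a state change becomes an event, and the services subscribed to that event type are activated by the trigger. The event names and services are hypothetical.

# Sketch of event-driven activation of services: a sensor-style trigger
# publishes an event, and subscribed services are activated by it.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, service):
        self._subscribers[event_type].append(service)

    def publish(self, event_type, payload):
        for service in self._subscribers[event_type]:   # the trigger fires the services
            service(payload)

bus = EventBus()
bus.subscribe("temperature.changed", lambda e: print("logging service:", e))
bus.subscribe("temperature.changed",
              lambda e: print("alert service: too hot!") if e["value"] > 30 else None)

# A state change detected by a sensor becomes an event on the bus.
bus.publish("temperature.changed", {"sensor": "s1", "value": 31})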
"An individual software component is a software package, a Web service, or a module that encapsulates a set of related functions (or data).
All system processes are placed into separate components so that all of the data and functions inside each component are semantically related (just as with the contents of classes). Because of this principle, it is often said that components are modular and cohesive."
http://en.wikipedia.org/wiki/Component-based_software_engineering
"Service-oriented modeling typically strives to create models that provide a comprehensive view of the analysis, design, and architecture of all 'Software Entities' in an organization, which can be understood by individuals with diverse levels of business and technical understanding. Service-oriented modeling typically encourages viewing software entities as 'assets' (service-oriented assets), and refers to these assets collectively as 'services'."
http://en.wikipedia.org/wiki/Service-oriented_modeling
Sunday, April 17, 2011
Algorithmic Mechanism Design, Granular Problem Structures, Computing Fabric Networks, Adaptive Ontological Grammars, Logistical Engineering Decisions
"Ontology describes the types of things that exist, and rules that govern them; a data model defines records about things, and is the basis for a database design. Not all the information in ontology may be needed (or can even be held) in a data model and there are a number of choices that need to be made. For example, some of the ontology may be held as reference data instead of as entity types [6].
Ontologies are promised a bright future. In this paper we propose that, as ontologies are closely related to modern object-oriented software engineering, it is natural to adapt existing object-oriented software development methodologies to the task of ontology development. There is some similarity between descriptive ontologies and database schemas, and object-oriented conceptual data models are good candidates for ontology modeling; however, there are differences between the constructs in object models and those in current ontology proposals, namely object structure, object identity, generalization hierarchies, defined constructs, views, and derivations. We can view ontology design as an extension of logical database design, which means that training object-oriented software developers for the task could be a promising approach. An ontology is comparable to a database schema, but it represents a much richer information model than a normal database schema, and also a richer information model than a UML class/object model. Object modeling's focus on identity and behavior is completely different from the relational model's focus on information.
It is possible to adapt existing object-oriented software development methodologies to ontology development. The object model of a system consists of objects, identified from the text description, and structural linkages between them corresponding to existing or established relationships. Ontologies provide metadata schemas, offering a controlled vocabulary of concepts. At the center of both object models and ontologies are objects within a given problem domain. The difference is that while the object model should contain explicitly shown structural dependencies between objects in a system, including their properties, relationships, events and processes, the ontologies are based on related terms only. On the other hand, the object model refers to the collections of concepts used to describe the generic characteristics of objects in object-oriented languages. Because an ontology is accepted as a formal, explicit specification of a shared conceptualization, ontologies can be naturally linked with object models, which represent a system-oriented map of related objects.
An ontology structure holds definitions of concepts, binary relationships between concepts, and attributes. Relationships may be symmetric, transitive and have an inverse. Minimum and maximum cardinality constraints for relations and attributes may be specified. Concepts and relationships can be arranged in two distinct generalization hierarchies [14]. Concepts, relationship types and attributes abstract from concrete objects or values and thus describe the schema (the ontology); on the other hand, concrete objects populate the concepts, concrete values instantiate the attributes of these objects, and concrete relationships instantiate the relationship types. Three types of relationship may be used between classes: generalization, association, aggregation."
http://www.sersc.org/journals/IJSEIA/vol3_no1_2009/5.pdf
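The ontology constructs listed in the last paragraph (concepts, a generalization hierarchy, binary relationships with cardinalities and inverses) can be sketched as plain data structures. This is an illustrative Python sketch, not the paper's formalism; all names are invented.

# Minimal data-structure sketch of the ontology constructs described above:
# concepts, a generalization hierarchy, and binary relationships with
# optional cardinality constraints and inverses.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Concept:
    name: str
    parent: Optional["Concept"] = None       # generalization hierarchy
    attributes: dict = field(default_factory=dict)

@dataclass
class Relationship:
    name: str
    domain: Concept
    range: Concept
    min_card: int = 0                         # minimum cardinality constraint
    max_card: Optional[int] = None            # None = unbounded
    inverse: Optional[str] = None             # name of the inverse relationship

thing   = Concept("Thing")
person  = Concept("Person", parent=thing)
company = Concept("Company", parent=thing)

works_for = Relationship("worksFor", domain=person, range=company,
                         min_card=0, max_card=1, inverse="employs")

def ancestors(c: Concept):
    """Walk the generalization hierarchy upwards."""
    while c.parent is not None:
        c = c.parent
        yield c

print([a.name for a in ancestors(person)])    # ['Thing']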
"We present a UML Profile for the description of service oriented applications.
The profile focuses on style-based design and reconfiguration aspects
at the architectural level. Moreover, it has a formal support in terms of an approach
called Architectural Design Rewriting, which enables formal analysis of
the UML specifications. We show how our prototypical implementation can be
used to analyse and verify properties of a service oriented application."
http://www.albertolluch.com/papers/adr.uml4soa.pdf
"Semantic Web Services, like conventional web services, are the server end of a client–server system for machine-to-machine interaction via the World Wide Web. Semantic services are a component of the semantic web because they use markup which makes data machine-readable in a detailed and sophisticated way (as compared with human-readable HTML which is usually not easily "understood" by computer programs).
The mainstream XML standards for interoperation of web services specify only syntactic interoperability, not the semantic meaning of messages. For example, Web Services Description Language (WSDL) can specify the operations available through a web service and the structure of data sent and received but cannot specify semantic meaning of the data or semantic constraints on the data. This requires programmers to reach specific agreements on the interaction of web services and makes automatic web service composition difficult.
...
Choreography is concerned with describing the externally visible behavior of services, as a set of message exchanges optionally following a Message Exchange Pattern (MEP), from the functionality consumer's point of view.
Orchestration deals with describing how a number of services, two or more, cooperate and communicate with the aim of achieving a common goal."
http://en.wikipedia.org/wiki/Semantic_Web_Services
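A toy Python sketch of the orchestration/choreography distinction quoted above: orchestration is shown as a coordinator composing two stand-in services toward one goal, while choreography is only described in comments, since it specifies the externally visible exchange rather than a central program. The services here are placeholders, not a real web-service API.

# Orchestration vs. choreography, in miniature. The services are stand-ins.
def geocode(address):            # stand-in service 1
    return {"lat": 52.0, "lon": 4.3}

def forecast(coords):            # stand-in service 2
    return {"temp_c": 18}

# Orchestration: one composite service drives the message exchanges
# toward the common goal.
def weather_for(address):
    return forecast(geocode(address))

print(weather_for("Delft"))

# Choreography would instead specify only the externally visible exchange,
# e.g. "a GeocodeResponse message is followed by a ForecastRequest message",
# leaving each participant to enact its own part of the pattern.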
An Analysis of Service Ontologies:
http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1028&context=pajais
Information and Service Design Lecture Series
The Information and Service Design (ISD) Lecture Series brings together practitioners and researchers from various disciplines to report on their activities in the fields of information modeling, information delivery, service design, and the challenges of integrating these activities. The ISD Lecture Series started as the Service Science, Management, and Engineering (SSME) Lecture Series in the fall 2006 and spring 2007 semesters, and the rebranded fall 2007 lecture series will continue to survey the service landscape and explore issues of law, privacy, semantics, business models, and education.
http://dret.net/lectures/isd-fall07/
Fuzzy Logic as Interfacing Media for Constraint Propagation Based on Theories of Chu Space and Information Flow:
http://tinyurl.com/newdirectionsinroughsetsdatami
Chu Spaces: Towards New Foundations for Fuzzy Logic and Fuzzy Control, with Applications to Information Flow on the World Wide Web:
http://www.cs.utep.edu/vladik/1999/tr99-12.pdf
Chu Spaces - A New Approach to Diagnostic Information Fusion:
http://digitalcommons.utep.edu/cgi/viewcontent.cgi?article=1535&context=cs_techrep&sei-redir=1#search="chu+space+information+flow+games"
"Preference is a key area where analytic philosophy meets philosophical logic. I start with two related issues: reasons for preference, and changes in preference, first mentioned in von Wright’s book The Logic of Preference but not thoroughly explored there. I show how these two issues can be handled together in one dynamic logical framework, working with structured two-level models, and I investigate the resulting dynamics of reason-based preference in some detail. Next, I study the foundational issue of entanglement between preference and beliefs, and relate the resulting richer logics to belief revision theory and decision theory."
http://www.springerlink.com/content/aw4w76p772007g47/
COGNITION AS INTERACTION
"Many cognitive activities are irreducibly social, involving interaction between several different agents. We look at some examples of this in linguistic communication and games, and show how logical methods provide exact models for the relevant information flow and world change. Finally, we discuss possible connections in this arena between logico-computational approaches and experimental cognitive science."
http://www.illc.uva.nl/Publications/ResearchReports/PP-2005-10.text.pdf
http://staff.science.uva.nl/~johan/research.html
"Recent psychological research suggests that the individual human mind may
be effectively modeled as involving a group of interacting social actors: both various subselves representing coherent aspects of personality; and virtual actors embodying “internalizations of others.” Recent neuroscience research suggests the further hypothesis that these internal actors may in many cases be neurologically associated with collections of mirror neurons. Taking up this theme, we study the mathematical and conceptual structure of sets of inter-observing actors, noting that this structure is mathematically isomorphic to the structure of physical entities called “mirrorhouses.”
Mirrorhouses are naturally modeled in terms of abstract algebras such as quaternions and octonions (which also play a central role in physics), which leads to the conclusion that the presence within a single human mind of multiple inter-observing actors naturally gives rise to a mirrorhouse-type cognitive structure and hence to a quaternionic and octonionic algebraic structure as a significant aspect of human intelligence. Similar conclusions would apply to nonhuman intelligences such as AI’s, we suggest, so long as these intelligences included empathic social modeling (and/or other cognitive dynamics leading to the creation of simultaneously active subselves or other internal autonomous actors) as a significant component."
http://www.goertzel.org/dynapsyc/2007/mirrorself.pdf
Ontology Engineering, Universal Algebra, and Category Theory:
...
"We merely take the set-theoretic intersection of the classes of the ontologies, and the set-theoretic intersection of the relations between those classes. Of course, the foregoing is too naive. Two different ontologies may have two common classes X and Y, and in each ontology there may be a functional relation X → Y, but the two relations may be semantically different. Perhaps we could require that the relations carry the same name when we seek their intersection, but this brings us back to the very problem ontologies are intended to control — people use names inconsistently.
Instead, we take seriously the diagram A ← C → B, developed with insightful intervention to encode the common parts of A and B and record that encoding via the two maps. Thus an f : X → Y will only appear in C when there are corresponding semantically equivalent functions in A and B, whatever they might be called. The requirement to check for semantic equivalence is unfortunate but unavoidable."
http://www.mta.ca/~rrosebru/articles/oeua.pdf
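A small Python sketch of the span A ← C → B idea from the quote: the shared ontology C is built from human-curated semantic equivalences rather than name matching, and the two maps record how C's relations correspond to relations in A and B. The example ontologies and the equivalence table are made up.

# Sketch of the span A <- C -> B: C holds only relations that a reviewer has
# judged semantically equivalent, and the two maps record the embeddings.
A_relations = {"hasAuthor": ("Book", "Person"), "hasPublisher": ("Book", "Org")}
B_relations = {"writtenBy": ("Book", "Person"), "soldBy": ("Book", "Shop")}

# Curated, not name-based: the reviewer asserts these are the same relation.
semantic_equivalences = [("hasAuthor", "writtenBy")]

C_relations = {}
map_to_A, map_to_B = {}, {}
for a_name, b_name in semantic_equivalences:
    if A_relations[a_name] == B_relations[b_name]:     # same domain/range
        shared = f"c_{a_name}"                         # fresh name in C
        C_relations[shared] = A_relations[a_name]
        map_to_A[shared] = a_name                      # the map C -> A
        map_to_B[shared] = b_name                      # the map C -> B

print(C_relations)   # {'c_hasAuthor': ('Book', 'Person')}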
"Foundational issues
Basic ontological categories and relations
Formal comparison and evaluation of ontologies
Ontology, language, cognition
Ontology and natural-language semantics
Formal semantics of discourse and dialogue relations
Ontology and lexical resources
Ontology, visual perception and multimedia
Ontology-driven information systems
Methodologies for ontology development
Ontology-driven conceptual modeling
Ontology-driven information integration & retrieval
Application domains
Organizations, social reality, law
Business services and e-government
Industrial artifacts, manufacturing, engineering design
Formal models of interaction and cooperation"
http://www.loa-cnr.it/Research.html
"The Economics Group in the Electrical Engineering and Computer Science Department at Northwestern University studies the interplay between the algorithmic, economic, and social aspects of the Internet and related systems, and develops ways to facilitate users' interactions in these systems. This work draws upon a wide variety of techniques from theoretical and experimental computer science to traditional economic frameworks. By applying these techniques to economic and social systems in place today, we can shed light on interesting phenomena and, ideally, provide guidance for future developments of these systems. This interdisciplinary effort is undertaken jointly with the Managerial Economics and Decision Sciences Department in the Kellogg School of Management, The Center for Mathematical Studies in Economics and Management Science, and other institutions at Northwestern University and the greater Chicago area.
Specific topics under active research include: algorithmic mechanism design, applied mechanism design, bounded rationality, market equilibria, network protocol design, prediction markets, security, social networks, and wireless networking. They are described in more detail below.
--------------------------------------------------------------------------------
Algorithmic Mechanism Design. Mechanism design (in economics) studies how to design economic systems (a.k.a., mechanisms) that have good properties. Algorithm design (in computer science) studies how to design algorithms with good computational properties. Algorithmic mechanism design combines economic and algorithmic analyses in gauging the performance of a mechanism, with optimization techniques for finding the mechanism that optimizes (or approximates) the desired objective. (Contact: Hartline, Immorlica)
Applied Mechanism Design. Internet services are fundamentally economic systems, and the proper design of Internet services requires reconciling the advice of mechanism design theory, which often formally applies only in idealized models, with practical needs in real markets and other settings. The interplay between theory and practice is fundamental for making the theoretical work relevant and developing guidelines for designing Internet systems, including auctions like eBay and Google's AdWords auction, Internet routing protocols, reputation systems, file sharing protocols, etc. For example, the problem of auctioning advertisements on Internet search engines such as Yahoo!, Google, and Live Search exemplifies many of the theoretical challenges in mechanism design, as well as being directly relevant to a multi-billion dollar industry. Close collaboration with industry in this area is a focus. (Contact: Hartline, Immorlica)
Bounded Rationality. Traditional economic models assume that all parties understand the full consequences of their actions given the information they have, but often computational constraints limit their ability to make fully rational decisions. Properly modeling bounded rationality can help us better understand equilibrium in games, forecast testing, preference revelation, voting and new ways to model difficult concepts like unforeseen consequences in contracts and brand awareness. (Contact: Fortnow, Vohra)
Market Equilibria. The existence of equilibria has been the most important property for a "working" market or economy, and the most studied one in Economics. Existence of a desirable equilibrium is not enough; additionally, the market must converge naturally and quickly to this equilibrium. Of special interest here are decentralized algorithms, ones that do not rely on a central agent, that demonstrate how individual actions in a market may actually work in harmony to reach an equilibrium. (Contact: Zhou)
Network Protocol Design. Much of the literature on designing protocols assumes that all or most participants are honest and blindly follow instructions, whereas the rest may be adversarial. This is particularly apparent in distributed computing, where the aim is to achieve resilience against faults, and in cryptography, where the aim is to achieve security against a malicious adversary. A crucial question is whether such resilient protocols can also be made faithful -- that is, whether the (often unrealistic) assumption of honest participants can be weakened to an assumption of self-interested ones. Another question is whether one may be able to side-step impossibility results on the resilience of protocols by considering adversarial coalitions who do not act maliciously but rather in accordance with their own self-interests. (Contact: Gradwohl)
Prediction Markets. Prediction markets are securities based usually on a one-time binary event, such as whether Hillary Clinton will win the 2008 election. Market prices have been shown to give a more accurate prediction of the probability of an event happening than polls and experts. Several corporations also use prediction markets to predict sales and release dates of products. Our research considers theoretical models of these markets to understand why they seem to aggregate information so well, "market scoring rules" that allow prediction markets even with limited liquidity, the effect of market manipulation and ways to design markets and their presentation to maximize the usefulness of their predictive value. (Contact: Fortnow)
Security. With the increasing popularity of the Internet and the penetration of cyber-social networks, the security of computation and communication systems has become a critical issue. The traditional approach to system security assumes unreasonably powerful attackers and often ends up with intractability. An economic approach to system security models the attacker as a rational agent who can be thwarted if the cost of attack is high. (Contact: Zhou)
Social Networks. A social network is a collection of entities (e.g., people, web pages, cities), together with some meaningful links between them. The hyperlink structure of the world-wide-web, the graph of email communications at a company, the co-authorship graph of a scientific community, and the friendship graph on an instant messenger service are all examples of social networks. In recent years, explicitly represented social networks have become increasingly common online, and increasingly valuable to the Internet economy. As a result, researchers have unprecedented opportunities to develop testable theories of social networks pertaining to their formation and the adoption and diffusion of ideas/technologies within these networks. (Contact: Immorlica)
Wireless Networking. Recent advances in reconfigurable radios can potentially enable wireless applications to locate and exploit underutilized radio frequencies (a.k.a., spectrum) on the fly. This has motivated the consideration of dynamic policies for spectrum allocation, management, and sharing; a research initiative that spans the areas of wireless communications, networking, game theory, and mechanism design. (Contact: Berry, Honig)"
http://econ.eecs.northwestern.edu/
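As a concrete taste of the mechanism design theme above, here is the textbook second-price (Vickrey) auction in Python, a mechanism in which truthful bidding is a dominant strategy. This is a standard illustration, not code from the Northwestern group; the bidders and values are invented.

# A classic mechanism-design example: the second-price (Vickrey) auction,
# in which bidding one's true value is a dominant strategy.
def second_price_auction(bids):
    """bids: dict bidder -> bid. Returns (winner, price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0   # winner pays the second-highest bid
    return winner, price

bids = {"alice": 10.0, "bob": 7.5, "carol": 6.0}
print(second_price_auction(bids))   # ('alice', 7.5)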
"Like economics, operations research, and industrial engineering, computer science deals with both the modeling of complex real world systems and the creation of new tools and methods that improve the systems' performance. However, in the former three, the notion of social costs or economic costs is more prominent in the models, and an improved system is more productive from a socioeconomic cost standpoint. In computer science, cost is usually associated with the "difficulty" or computational complexity, and the space (storage) or time (processing) complexity of an algorithm or a system. Mechanism design is one area of research that involves computer scientists where there are clear efforts to bridge the two notions of costs."
http://tinyurl.com/servicessciencefundamentalscha
Mechanism Design for "Free" but "No Free Disposal" Services: The Economics of Personalization Under Privacy Concerns
http://mansci.journal.informs.org/cgi/content/abstract/56/10/1766
Service Science
http://faculty.ucmerced.edu/pmaglio/2007/Lecture1.pdf
DECISION TECHNOLOGY, MOBILE TECHNOLOGIES, AND SERVICE SCIENCE
Advanced Analytics Services for Managerial Decision Support
Fuzzy Logic And Soft Computing In Service And Management Support
Information Systems & Decision Technologies for Sustainable Development
Intelligent Decision Support for Logistics and Supply Chain Management
Mobile Value Services / Mobile Business / Mobile Cloud
NETWORK DSS: Mobile Social and Sensor Networks for Man-
Multicriteria Decision Support Systems
Service Science, Systems and Cloud Computing Services
Service Science, Management and Engineering (SSME)
Visual Analysis of Massive Data for Decision Support and Operational Management
http://www.hicss.hawaii.edu/hicss_45/45dt.htm
"Decision Engineering is a framework that unifies a number of best practices for organizational decision making. It is based on the recognition that, in many organizations, decision making could be improved if a more structured approach were used. Decision engineering seeks to overcome a decision making "complexity ceiling", which is characterized by a mismatch between the sophistication of organizational decision making practices and the complexity of situations in which those decisions must be made. As such, it seeks to solve some of the issues identified around complexity theory and organizations. In this sense, Decision Engineering represents a practical application of the field of complex systems, which helps organizations to navigate the complex systems in which they find themselves.
Despite the availability of advanced process, technical, and organizational decision making tools, decision engineering proponents believe that many organizations continue to make poor decisions. In response, decision engineering seeks to unify a number of decision making best practices, creating a shared discipline and language for decision making that crosses multiple industries, both public and private organizations, and that is used worldwide.[1]
To accomplish this ambitious goal, decision engineering applies an engineering approach, building on the insight that it is possible to design the decision itself using many principles previously used for designing more tangible objects like bridges and buildings. This insight was previously applied to the engineering of software—another kind of intangible engineered artifact—with significant benefits.[2]
As in previous engineering disciplines, the use of a visual design language representing decisions is emerging as an important element of decision engineering, since it provides an intuitive common language readily understood by all decision participants. Furthermore, a visual metaphor[3] improves the ability to reason about complex systems[4] as well as to enhance collaboration.
In addition to visual decision design, there are two other aspects of engineering disciplines that aid mass adoption. These are: 1) the creation of a shared language of design elements and 2) the use of a common methodology or process, as illustrated in the diagram above."
http://en.wikipedia.org/wiki/Decision_engineering
"Granular computing is gradually changing from a label to a new field of study. The driving forces, the major schools of thought, and the future research directions on granular computing are examined. A triarchic theory of granular computing is outlined. Granular computing is viewed as an interdisciplinary study of human-inspired computing, characterized by structured thinking, structured problem solving,
and structured information processing.
...
Zhang and Zhang [44] propose a quotient space theory of problem solving. The basic idea is to granulate a problem space by considering relationships between states. Similar states are grouped together at a higher level. This produces a hierarchical description and representation of a problem. Like other theories of hierarchical problem solving, quotient space theory searches for a solution using multiple levels, from abstract-level spaces to the ground-level space. Again, invariant properties are considered, including truth preservation downwards in the hierarchy and falsity preservation upwards in the hierarchy. The truth-preservation property is similar to the monotonicity property.
These hierarchical problem solving methods may be viewed as structured problem solving. Structured programming, characterized by top-down design and step-wise refinement based on multiple levels of detail, is another example. The principles employed may serve as a methodological basis of granular computing."
http://www2.cs.uregina.ca/~yyao/PAPERS/ieee_grc_08.pdf
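A toy Python sketch in the spirit of the quotient space idea quoted above: the ground-level state space is granulated into groups, a path is found first in the abstract (quotient) graph, and the ground-level search is then restricted to the corridor that abstract path defines. The graph and grouping are invented for illustration.

# Hierarchical (quotient-space-style) problem solving on a toy graph.
from collections import deque

ground = {   # ground-level state space (an undirected graph)
    "a1": ["a2"], "a2": ["a1", "b1"],
    "b1": ["a2", "b2"], "b2": ["b1", "c1"],
    "c1": ["b2", "c2"], "c2": ["c1"],
}
group_of = {"a1": "A", "a2": "A", "b1": "B", "b2": "B", "c1": "C", "c2": "C"}

def bfs(graph, start, goal):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Quotient (abstract) graph: an edge between groups iff some ground edge crosses them.
quotient = {}
for s, nbrs in ground.items():
    for n in nbrs:
        if group_of[s] != group_of[n]:
            quotient.setdefault(group_of[s], set()).add(group_of[n])
quotient = {g: sorted(ns) for g, ns in quotient.items()}

abstract_path = bfs(quotient, "A", "C")          # solve coarsely first ...
allowed = {s for s, g in group_of.items() if g in abstract_path}
restricted = {s: [n for n in ns if n in allowed] for s, ns in ground.items() if s in allowed}
ground_path = bfs(restricted, "a1", "c2")        # ... then refine within the corridor
print(abstract_path, ground_path)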
“Granular computing, in our view, attempts to extract the commonalities from existing fields to establish a set of generally applicable principles, to synthesize their results into an integrated whole, and to connect fragmentary studies in a unified framework. Granular computing at the philosophical level concerns structured thinking, and at the application level concerns structured problem solving. While structured thinking provides guidelines and leads naturally to structured problem solving, structured problem solving implements the philosophy of structured thinking.” (Yao, 2005, Perspectives of Granular Computing [25])
“We propose that Granular Computing is defined as a structured combination of algorithmic abstraction of data and non-algorithmic, empirical verification of the semantics of these abstractions. This definition is general in that it does not specify the mechanism of algorithmic abstraction, nor does it elaborate on the techniques of experimental verification. Instead, it highlights the essence of combining computational and non-computational information processing.” (Bargiela and Pedrycz, 2006, The roots of Granular Computing [2])
http://www2.cs.uregina.ca/~yyao/PAPERS/three_perspective.pdf
"What makes fabric computing different from grid computing is that there is a level of entropy where the state of the fabric system is dynamic. The rules that govern a CAS are simple. Elements can self-organize, yes are able to generate a large variety of outcomes and will grow organically from emergence. The fabric allows for the consumer to generate or evolve a large number of solutions from the basic building blocks provided by the fabric. If the elements of the fabric are disrupted, the fabric should maintain the continuity of service. This provides resiliency. In addition, if the tensile strength of the fabric is stressed by a particular work unit within a particular area of the fabric, the fabric must be able scale to distribute the load and adapt to provide elasticity and flexibility."
http://blogs.msdn.com/b/zen/archive/2011/02/28/architecture-considerations-around-fabric-computing.aspx
"Distributed shared-memory architectures
Each cell of a Computing Fabric must present the image of a single system, even though it can consist of many nodes, because this greatly eases programming and system management. SSI (Single System Image) is the reason that symmetric multiprocessors have become so popular among the many parallel processing architectures.
However, all processors in an SMP (symmetric multiprocessing) system, whether a two-way Pentium Pro desktop or a 64-processor Sun Ultra Enterprise 10000 server, share and have uniform access to centralized system memory and secondary storage, with costs rising rapidly as additional processors are added, each requiring symmetric access.
The largest SMPs have no more than 64 processors because of this, although that number could double within the next two years. By dropping the requirement of uniform access, CC-NUMA (Cache Coherent--Non-Uniform Memory Access) systems, such as the SGI/Cray Origin 2000, can distribute memory throughout a system rather than centralize memory as SMPs do, and they can still provide each processor with access to all the memory in the system, although now nonuniformly.
CC-NUMA is a type of distributed shared memory. Nonuniform access means that, theoretically, memory local to a processor can be addressed far faster than the memory of a remote processor. Reality, however, is not so clear-cut. An SGI Origin's worst-case latency of 800 nanoseconds is significantly better (shorter) than the several-thousand-nanosecond latency of most SMP systems. The net result is greater scalability for CC-NUMA, with current implementations, such as the Origin, reaching 128 processors. That number is expected to climb to 1,024 processors within two years. And CC-NUMA retains the single system image and single address space of an SMP system, easing programming and management.
Although clusters are beginning to offer a single point for system management, they don't support a true single system image and a single address space, as do CC-NUMA and SMP designs. Bottom line: CC-NUMA, as a distributed shared-memory architecture, enables an easily programmed single system image across multiple processors and is compatible with a modularly scalable interconnect, making it an ideal architecture for Computing Fabrics."
http://www.infomaniacs.com/Pubs/PCWeek_CompFabrics_Technologies.htm
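A back-of-the-envelope Python sketch of the NUMA trade-off described above: effective latency is a locality-weighted mix of local and remote access times. The local latency of 300 ns and the 2000 ns stand-in for the quote's "several-thousand-nanosecond" SMP figure are assumptions for illustration only.

# Rough model: effective latency = frac_local * local_ns + frac_remote * remote_ns.
def effective_latency_ns(frac_local, local_ns, remote_ns):
    return frac_local * local_ns + (1.0 - frac_local) * remote_ns

uniform_smp = 2000.0                      # illustrative stand-in for "several thousand ns"
for frac_local in (0.5, 0.8, 0.95):
    cc_numa = effective_latency_ns(frac_local, local_ns=300.0, remote_ns=800.0)
    print(f"{frac_local:.0%} local: CC-NUMA ~{cc_numa:.0f} ns vs SMP ~{uniform_smp:.0f} ns")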
"The main advantages of fabrics are that a massive concurrent processing combined with a huge, tightly-coupled address space makes it possible to solve huge computing problems (such as those presented by delivery of cloud computing services) and that they are both scalable and able to be dynamically reconfigured.[2]
Challenges include a non-linearly degrading performance curve, whereby adding resources does not linearly increase performance which is a common problem with parallel computing and maintaining security."
http://en.wikipedia.org/wiki/Fabric_computing
Service Science
Content Management System for Collaborative Knowledge Sharing
http://autproject.com/projectaut/
Duality in Knowledge Sharing:
http://www.iiia.csic.es/~marco/pubs/schorlemmer02a.pdf
Tuesday, March 1, 2011
Value Propositions, Utility Substitutions, Telentropic Exchange, Anticipatory Design, Scanning Operators, Service Systems, Cognitive Informatics
"Creating a value proposition is part of business strategy. Kaplan and Norton say "Strategy is based on a differentiated customer value proposition. Satisying customers is the source of sustainable value creation."
Developing a value proposition is based on a review and analysis of the benefits, costs and value that an organization can deliver to its customers, prospective customers, and other constituent groups within and outside the organization. It is also a positioning of value, where Value = Benefits - Cost (cost includes risk)"
http://en.wikipedia.org/wiki/Value_proposition
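The quoted positioning formula, Value = Benefits - Cost with risk folded into cost, as a tiny Python sketch; the numbers are placeholders.

# Value = Benefits - Cost, where cost includes risk (as in the quote above).
def value_proposition(benefits, cost, risk):
    return benefits - (cost + risk)        # risk is treated as part of cost

print(value_proposition(benefits=120.0, cost=70.0, risk=15.0))   # 35.0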
"In logic and philosophy, the term proposition (from the word "proposal") refers to either (a) the "content" or "meaning" of a meaningful declarative sentence or (b) the pattern of symbols, marks, or sounds that make up a meaningful declarative sentence. The meaning of a proposition includes having the quality or property of being either true or false, and as such propositions are claimed to be truthbearers.
The existence of propositions in sense (a) above, as well as the existence of "meanings", is disputed by some philosophers. Where the concept of a "meaning" is admitted, its nature is controversial. In earlier texts writers have not always made it sufficiently clear whether they are using the term proposition in sense of the words or the "meaning" expressed by the words. To avoid the controversies and ontological implications, the term sentence is often now used instead of proposition to refer to just those strings of symbols that are truthbearers, being either true or false under an interpretation. Strawson advocated the use of the term "statement", and this is the current usage in mathematical logic."
http://en.wikipedia.org/wiki/Proposition
"Propositions, as ways of "measuring" semantic information by the topic-ful, turn out to be more like dollars than like numbers[...]There are no real, natural universal units of either economic value or semantic information."
http://www.consciousentities.com/dennett.htm
"Inasmuch as science is observational or perceptual in nature, the goal of providing a scientific model and mechanism for the evolution of complex systems ultimately requires a supporting theory of reality of which perception itself is the model (or theory-to-universe mapping). Where information is the abstract currency of perception, such a theory must incorporate the theory of information while extending the information concept to incorporate reflexive self-processing in order to achieve an intrinsic (self-contained) description of reality. This extension is associated with a limiting formulation of model theory identifying mental and physical reality, resulting in a reflexively self-generating, self-modeling theory of reality identical to its universe on the syntactic level. By the nature of its derivation, this theory, the Cognitive Theoretic Model of the Universe or CTMU, can be regarded as a supertautological reality-theoretic extension of logic.
...
The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is self-descriptive or autologous, intrinsically and retroactively defined within the system, and “pre-informational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively self-configures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic self-utility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal.
A system that evolves by means of telic recursion – and ultimately, every system must either be, or be embedded in, such a system as a condition of existence – is not merely computational, but protocomputational. That is, its primary level of processing configures its secondary (computational and informational) level of processing by telic recursion. Telic recursion can be regarded as the self-determinative mechanism of not only cosmogony, but a natural, scientific form of teleology.
...
The ultimate “boundary of the boundary” of the universe is UBT, a realm of zero constraint and infinite possibility where neither boundary nor content exists. The supertautologically-closed universe buys internal diffeonesis only at the price of global synesis, purchasing its informational distinctions only at the price of coherence.
...
Moreover, in order to function as a selection principle, it generates a generalized global selection parameter analogous to “self-utility”, which it then seeks to maximize in light of the evolutionary freedom of the cosmos as expressed through localized telic subsystems which mirror the overall system in seeking to maximize (local) utility. In this respect, the Telic Principle is an ontological extension of so-called “principles of economy” like those of Maupertuis and Hamilton regarding least action, replacing least action with deviation from generalized utility."
http://www.ctmu.net/
"In economics, utility is a measure of relative satisfaction. Given this measure, one may speak meaningfully of increasing or decreasing utility, and thereby explain economic behavior in terms of attempts to increase one's utility. Utility is often modeled to be affected by consumption of various goods and services, possession of wealth and spending of leisure time.
The doctrine of utilitarianism saw the maximization of utility as a moral criterion for the organization of society. According to utilitarians, such as Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873), society should aim to maximize the total utility of individuals, aiming for "the greatest happiness for the greatest number of people". Another theory forwarded by John Rawls (1921–2002) would have society maximize the utility of the individual initially receiving the minimum amount of utility."
http://en.wikipedia.org/wiki/Utility
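A tiny Python sketch contrasting the two aggregation rules mentioned above: the utilitarian rule maximizes total utility, while a Rawlsian maximin rule maximizes the utility of the worst-off individual. The candidate allocations are hypothetical.

# Utilitarian total utility vs. Rawlsian maximin over two toy allocations.
allocations = {
    "A": [10, 10, 10],     # even
    "B": [25, 5, 5],       # higher total, lower minimum
}

utilitarian = max(allocations, key=lambda k: sum(allocations[k]))
rawlsian    = max(allocations, key=lambda k: min(allocations[k]))
print("utilitarian picks", utilitarian)   # B (total 35 > 30)
print("maximin picks", rawlsian)          # A (minimum 10 > 5)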
"Reflective equilibrium is a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments. Although he did not use the term, philosopher Nelson Goodman introduced the method of reflective equilibrium as an approach to justifying the principles of inductive logic. The term 'reflective equilibrium' was coined by John Rawls and popularized in his celebrated A Theory of Justice as a method for arriving at the content of the principles of justice.
Rawls argues that human beings have a "sense of justice" which is both a source of moral judgment and moral motivation. In Rawls's theory, we begin with "considered judgments" that arise from the sense of justice. These may be judgments about general moral principles (of any level of generality) or specific moral cases. If our judgments conflict in some way, we proceed by adjusting our various beliefs until they are in "equilibrium," which is to say that they are stable, not in conflict, and provide consistent practical guidance. Rawls argues that a set of moral beliefs in ideal reflective equilibrium describes or characterizes the underlying principles of the human sense of justice.
An example of the method of reflective equilibrium may be useful. Suppose Zachary believes in the general principle of always obeying the commands in the Bible, and mistakenly thinks that these are completely encompassed by every Old Testament command. Suppose also that he thinks that it is not ethical to stone people to death merely for being Wiccan. These views may come into conflict (see Exodus 22:18, but see John 8:7). If they do, Zachary will then have several choices. He can discard his general principle in search of a better one (for example, only obeying the Ten Commandments), modify his general principle (for example, choosing a different translation of the Bible, or including Jesus' teaching from John 8:7 "If any of you is without sin, let him be the first to cast a stone" into his understanding), or change his opinions about the point in question to conform with his theory (by deciding that witches really should be killed). Whatever the decision, he has moved toward reflective equilibrium."
http://en.wikipedia.org/wiki/Reflective_equilibrium
"In philosophy, especially that of Aristotle, the golden mean is the desirable middle between two extremes, one of excess and the other of deficiency. For example courage, a virtue, if taken to excess would manifest as recklessness and if deficient as cowardice.
To the Greek mentality, it was an attribute of beauty. Both ancients and moderns realized that there is a close association in mathematics between beauty and truth. The poet John Keats, in his Ode on a Grecian Urn, put it this way:
"Beauty is truth, truth beauty," -- that is all
Ye know on earth, and all ye need to know.
The Greeks believed there to be three 'ingredients' to beauty: symmetry, proportion, and harmony. This triad of principles infused their life. They were very much attuned to beauty as an object of love and something that was to be imitated and reproduced in their lives, architecture, Paideia and politics. They judged life by this mentality.
In Chinese philosophy, a similar concept, Doctrine of the Mean, was propounded by Confucius; Buddhist philosophy also includes the concept of the middle way."
http://en.wikipedia.org/wiki/Golden_mean_(philosophy)
"The term intentionality was introduced by Jeremy Bentham as a principle of utility in his doctrine of consciousness for the purpose of distinguishing acts that are intentional and acts that are not. The term was later used by Edmund Husserl in his doctrine that consciousness is always intentional, a concept that he undertook in connection with theses set forth by Franz Brentano regarding the ontological and psychological status of objects of thought. It has been defined as "aboutness", and according to the Oxford English Dictionary it is "the distinguishing property of mental phenomena of being necessarily directed upon an object, whether real or imaginary". It is in this sense and the usage of Husserl that the term is primarily used in contemporary philosophy."
http://en.wikipedia.org/wiki/Intentionality
"The intentional stance is a term coined by philosopher Daniel Dennett for the level of abstraction in which we view the behavior of a thing in terms of mental properties. It is part of a theory of mental content proposed by Dennett, which provides the underpinnings of his later works on free will, consciousness, folk psychology, and evolution.
"Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do." (Daniel Dennett, The Intentional Stance, p. 17)"
http://en.wikipedia.org/wiki/Intentional_stance
The Intentional Stance
http://consc.net/mindpapers/2.1b
The Intentional Stance: Developmental and Neurocognitive Perspectives
http://ase.tufts.edu/cogstud/incbios/griffinr/datapubs/griffin&bc-dennett.pdf
Anticipation, Design and Interaction
http://www.youtube.com/watch?v=s7l4VF7asXs
Dynamic ontology as an ontological framework of anticipatory systems
http://www.emeraldinsight.com/journals.htm?articleid=1864162&show=abstract
Strong anticipation: Multifractal cascade dynamics modulate scaling in synchronization behaviors
"Previous research on anticipatory behaviors has found that the fractal scaling of human behavior may attune to the fractal scaling of an unpredictable signal [Stephen DG, Stepp N, Dixon JA, Turvey MT. Strong anticipation: Sensitivity to long-range correlations in synchronization behavior. Physica A 2008;387:5271–8]. We propose to explain this attunement as a case of multifractal cascade dynamics [Schertzer D, Lovejoy S. Generalised scale invariance in turbulent phenomena. Physico-Chem Hydrodyn J 1985;6:623–5] in which perceptual-motor fluctuations are coordinated across multiple time scales. This account will serve to sharpen the contrast between strong and weak anticipation: whereas the former entails a sensitivity to the intermittent temporal structure of an unpredictable signal, the latter simply predicts sensitivity to an aggregate description of an unpredictable signal irrespective of actual sequence. We pursue this distinction through a reanalysis of Stephen et al.’s data by examining the relationship between the widths of singularity spectra for intertap interval time series and for each corresponding interonset interval time series. We find that the attunement of fractal scaling reported by Stephen et al. was not the trivial result of sensitivity to temporal structure in aggregate but reflected a subtle sensitivity to the coordination across multiple time scales of fluctuation in the unpredictable signal."
http://dx.doi.org/10.1016/j.chaos.2011.01.005
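For readers who want a handle on "fractal scaling" in behavioral time series, here is a minimal detrended fluctuation analysis (DFA) sketch in Python. DFA is a generic monofractal estimator, not the multifractal singularity-spectrum analysis the authors actually perform; it is included only to make the notion of a scaling exponent concrete. Requires numpy.

# Minimal DFA: estimate a scaling exponent alpha from a time series.
import numpy as np

def dfa_alpha(x, scales=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())                     # integrated profile
    fluct = []
    for n in scales:
        segments = len(y) // n
        rms = []
        for i in range(segments):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluct.append(np.mean(rms))
    # scaling exponent = slope of log F(n) against log n
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(0)
print(dfa_alpha(rng.normal(size=4096)))   # white noise: alpha near 0.5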
Introduction to the Natural Anticipator and the Artificial Anticipator
"This short communication deals with the introduction of the concept of anticipator, which is one who anticipates, in the framework of computing anticipatory systems. The definition of anticipation deals with the concept of program. Indeed, the word program, comes from “pro-gram” meaning "to write before" by anticipation, and means a plan for the programming of a mechanism, or a sequence of coded instructions that can be inserted into a mechanism, or a sequence of coded instructions, as genes or behavioural responses, that is part of an organism. Any natural or artificial programs are thus related to anticipatory rewriting systems, as shown in this paper."
http://orbi.ulg.ac.be/handle/2268/81299
http://tinyurl.com/47ogqta
How General is Nilpotency?
"Evidence is presented for the generality, criticality and importance of nilpotence and the associated criteria of Pauli exclusion, quantum phase factor and quantum holographic signal processing in relation to calculation, problem solving and optimum control of Heisenberg uncertainty in Quantum Interaction.
...
Nilpotent logic rather than digital logic reflects this by making the universal automatically the mirror image of the particular because the universe is constrained to have zero totality. This clearly operates in the case of quantum mechanics. The question that then emerges is how much can any system (e.g. life, consciousness, galactic formation, chemistry), which has a strong degree of self-organization manage to achieve this by being modeled on a nilpotent structure. The work of Hill and Rowlands (2007), and Marcer and Rowlands (2007), suggests that this is possible in a wide variety of contexts. The reason is that the nilpotency does not stem from quantum mechanics initially, but from fundamental conditions of optimal information processing which are prior to physics, chemistry and biology, and even to mathematics."
http://www.naturescode.org.uk/files/HowgeneralANPA_(2).pdf
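Nilpotency in its most elementary concrete form, for orientation: an operator that is nonzero yet squares to zero. The 2x2 matrix below is the standard textbook example (it is not the nilpotent Dirac operator of the papers above). Requires numpy.

# A nilpotent operator: N is not zero, but N @ N is.
import numpy as np

N = np.array([[0, 1],
              [0, 0]])
print(N @ N)                 # [[0 0], [0 0]]  -> N is nilpotent of index 2
print(np.any(N))             # True: N itself is not the zero matrix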
The 'Logic' of Self-Organizing Systems
A totally new computational grammatical structure has been developed which encompasses the general class of self-organizing systems. It is based on a universal rewrite system and the principle of nilpotency, where a system and its environment have a space-time variation defined by the phase, which preserves the dual mirror-image relationship between the two.
http://www.aaai.org/ocs/index.php/FSS/FSS10/paper/download/2188/2674
A Computational Path to the Nilpotent Dirac Equation
"Using a rewrite approach we introduce a computational path to a nilpotent form of the Dirac equation. The system is novel in allowing new symbols to be added to the initial alphabet and starts with just one symbol, representing ‘nothing’, and two fundamental rules: create, a process which adds news symbols, and conserve, a process which examines the effect of any new symbol on those that currently exist. With each step a new sub-alphabet of an infinite universal alphabet is created. The implementation may be iterative, where a sequence of algebraic properties is required of the emerging subalphabets. The path proceeds from nothing through conjugation, complexification, and dimensionalisation to a steady (nilpotent) state in which no fundamentally new symbol is needed. Many simple ways of implementing the computational path exist.
Keywords. Rewrite system, substitution system, nilpotent, Dirac equation, universal alphabet.
Rewrite systems are synonymous with computing in the sense that most software is written in a language that must be rewritten as characters for some hardware to interpret. Formal rewrite (substitution or production) systems are pieces of software that take an object usually represented as a string of characters and using a set of rewrite rules (which define the system) generate a new string representing an altered state of the object. If required, a second realisation system takes the string and produces a visualisation or manifestation of the objects being represented. Each step of such rewrite systems sees one or more character entities of the complex object, defined in terms of symbols drawn from a finite alphabet Σ, being mapped using rewrite rules of the form L→R, into other character entities. Some stopping mechanism is defined to identify the end of one step and the start of the next (for example we can define that for each character entity or group of entities in a string, and working in a specific order, we will apply every rule that applies). It is usual in such systems to halt the execution of the entire system if some goal state is reached (e.g. all the character entities are in some normal form); if no changes are generated; if changes are cycling; or after a specified number of iterations. The objects being rewritten and differing stopping mechanisms determine different families of rewrite system, and in each family, alternative rules and halting conditions may result in strings representing differing species of object. Allowing new rules to be added dynamically to the existing set and allowing rules to be invoked in a stochastic fashion are means whereby more complexity may be introduced. For examples of various types of rewrite system see: von Koch (1905), Chomsky (1956), Naur et al (1960), Mandelbrot (1982), Wolfram (1985), Prusinkiewicz and Lindenmayer (1990), Dershowitz and Plaisted (2001), Marti-Oliet and Meseguer (2002), etc."
http://www.naturescode.org.uk/files/IJCAS-Diaz-Rowlands.pdf
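A tiny string rewrite system in Python along the lines the abstract describes: a finite alphabet, rules of the form L→R, and a stopping mechanism (normal form reached or a step limit hit). The rules are invented and have nothing to do with the Dirac construction itself.

# Minimal string rewrite system: rules L -> R plus a halting condition.
rules = [("ab", "ba"), ("bb", "b")]      # L -> R pairs (illustrative only)

def rewrite_step(s, rules):
    for left, right in rules:            # apply the first rule that matches
        if left in s:
            return s.replace(left, right, 1), True
    return s, False

def rewrite(s, rules, max_steps=100):
    for _ in range(max_steps):           # halting condition: step limit ...
        s, changed = rewrite_step(s, rules)
        if not changed:                  # ... or normal form reached
            break
    return s

print(rewrite("abab", rules))            # 'baa' (a normal form for these rules)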
Vicious Circles in Orthogonal Term Rewrite Systems
http://web.mac.com/janwillemklop/Site/Bibliography_files/94.viciouscircles-entcs.pdf
The Universe from Nothing: A Mathematical Lattice of Empty Sets
http://arxiv.org/abs/physics/0309102
Lattice Duality: The Origin of Probability and Entropy
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.58.8541
Intelligent machines in the twenty-first century: foundations of inference and inquiry
"The last century saw the application of Boolean algebra to the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. Recent advances in our understanding of the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we recently identified the algebra of questions as the free distributive algebra, which will now allow us to work with questions in a way analogous to that which Boolean algebra enables us to work with logical statements. In this paper, we examine the foundations of inference and inquiry. We begin with a history of inferential reasoning, highlighting key concepts that have led to the automation of inference in modern machine–learning systems. We then discuss the foundations of inference in more detail using a modern viewpoint that relies on the mathematics of partially ordered sets and the scaffolding of lattice theory. This new viewpoint allows us to develop the logic of inquiry and introduce a measure describing the relevance of a proposed question to an unresolved issue. Last, we will demonstrate the automation of inference, and discuss how this new logic of inquiry will enable intelligent machines to ask questions. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them not only to make inferences from data, but also to decide which question to ask, which experiment to perform, or which measurement to take given what they have learned and what they are designed to understand."
Keywords: Inference, Probability, Entropy, Bayesian Methods, Lattice Theory, Machine Intelligence
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.59.1028&rep=rep1&type=pdf
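One way to make the paper's "relevance of a proposed question to an unresolved issue" concrete is expected information gain: score a question by how much, on average, its answer reduces the entropy over the competing hypotheses. The Python sketch below uses an invented two-hypothesis toy issue and is only an illustration of the general idea, not the authors' lattice-theoretic relevance measure.

# Expected information gain of a question about an unresolved issue.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def expected_information_gain(prior, likelihoods):
    """prior: P(hypothesis). likelihoods: rows = hypotheses,
    columns = P(answer | hypothesis) for each possible answer."""
    n_answers = len(likelihoods[0])
    gain = entropy(prior)
    for a in range(n_answers):
        p_answer = sum(prior[h] * likelihoods[h][a] for h in range(len(prior)))
        if p_answer == 0:
            continue
        posterior = [prior[h] * likelihoods[h][a] / p_answer for h in range(len(prior))]
        gain -= p_answer * entropy(posterior)
    return gain

prior = [0.5, 0.5]                              # two rival hypotheses
decisive_question = [[0.9, 0.1], [0.1, 0.9]]    # answers track the hypothesis
useless_question  = [[0.5, 0.5], [0.5, 0.5]]    # answers are uninformative
print(expected_information_gain(prior, decisive_question))  # ~0.53 bits
print(expected_information_gain(prior, useless_question))   # 0.0 bits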
Entropy/Information Theory Publications:
http://knuthlab.rit.albany.edu/pubs-entropy.html
Using Cognitive Entropy to Manage Uncertain Concepts in Formal Ontologies:
http://www.cs.us.es/~tchavez/53270315.pdf
Betweenness, Metrics and Entropies in Lattices
http://www.cs.umb.edu/~dsim/papersps/bme.pdf
Scanning the structure of ill-known spaces: Part 1. Founding principles about mathematical constitution of space
Necessary and sufficient conditions allowing a previously unknown space to be explored through scanning operators are reexamined with respect to measure theory. Generalized conceptions of distances and dimensionality evaluation are proposed, together with their conditions of validity and range of application to topological spaces. The existence of a Boolean lattice with fractal properties originating from nonwellfounded properties of the empty set is demonstrated. This lattice provides a substrate with both discrete and continuous properties, from which existence of physical universes can be proved, up to the function of conscious perception. Spacetime emerges as an ordered sequence of mappings of closed 3-D Poincaré sections of a topological 4-space provided by the lattice. The possibility of existence of spaces with fuzzy dimension or with adjoined parts with decreasing dimensions is raised, together with possible tools for their study. The work provides the introductory foundations supporting a new theory of space whose physical predictions (suppressing the opposition of quantum and relativistic approaches) and experimental proofs are presented in detail in Parts 2 and 3 of the study.
http://arxiv.org/abs/physics/0211096
"That is to say, there may be properties of the known universe that can only be known or explained - "scanned" - only by dimensional probes of more than three dimensions."
http://tinyurl.com/4shlsfr
Topology in Computer Science: Constructivity; Asymmetry and Partiality; Digitization
http://www.dagstuhl.de/Reports/00/00231.pdf
Partiality I: Embedding Relation Algebras
"As long as no cooperation between processes is supposed to take place, one may consider them separately and need not ask for the progress of the respective other processes. If a composite result of processes is to be delivered, it is important in which way the result is built, only by non-strict/continuous “accumulation” (i.e., open for partial evaluation) or with additional intermittent strict/non-continuous “transactions”.
We define the concept of partiality to cope with partial availability. To this end relations are handled under the aspect that orderings are defined in addition to the identities in every relation algebra. Only continuous functions with respect to these orderings are considered to regulate transfer of partialities."
http://homepage.mac.com/titurel/Papers/SchmidtJLAP.pdf
"7. The CTMU and Teleology
Historically, the Telic Principle can be understood as a logical analogue of teleology incorporating John Archibald Wheeler’s Observer Participation Thesis (approximately, “man participates in the ongoing quantum-scale creation of reality by observing it and thereby collapsing the wavefunction representing its potential”). More directly, the Telic Principle says that reality is a self-configuring entity that emerges from a background of unbound potential as a protean recursive construct with a single imperative: self-actualization. In other words, existence and its amplification is the tautological raison d’être of the cosmos. The phrase “raison d’être” has a literal significance; in order to exist, a self-contained universe must configure itself to recognize its own existence, and to configure itself in this way it must, by default, generate and parameterize its own self-configuration and self-recognition functions. This describes a situation in which the universe generates its own generalized utility: to self-configure, the universe must have a “self-actualization criterion” by which to select one of many possible structures or “futures” for itself, and this criterion is a generalized analogue of human utility…its raison d’être.
In addition to generalized utility and generalized volition (teleology), the universe also possesses generalized cognition (coherent self-recognition). By any reasonable definition of the term “mental”, this makes the universe mental in a generalized sense, where “generalized” means that these attributes conform to general functional descriptions of what humans do in the process of volition, cognition and mentation. The “coherent self-recognition” feature of reality appears as an explicit feature of conspansive spacetime, a model-theoretic dual of the expanding cosmos. Whereas the expanding cosmos is simplistically depicted in terms of a model called ERSU, short for Expanding Rubber-Sheet Universe, conspansive spacetime is depicted by a model-theoretic dual of ERSU called USRE, short for the Universe as a Self-Representational Entity. While ERSU is a product of Cartesian mind-matter dualism that effectively excludes mind in favor of matter, USRE, which portrays the universe as a “self-simulation”, is a form of dual-aspect monism according to which reality is distributively informational and cognitive in nature.
It is important to understand that the CTMU does not arbitrarily “project” human attributes onto the cosmos; it permits the logical deduction of necessary general attributes of reality, lets us identify any related human attributes derived from these general attributes, and allows us to explain the latter in terms of the former. CTMU cosmology is thus non-anthropomorphic. Rather, it uses an understanding of the cosmological medium of sentience to explain the mental attributes inherited by sentient organisms from the cosmos in which they have arisen. Unlike mere anthropomorphic reasoning, this is a logically correct description of human characteristics in terms of the characteristics of the universe from which we derive our existence."
http://www.megafoundation.org/CTMU/Articles/Nexus.html
"As our knowledge of things, even of created and limited things, is knowledge of their qualities and not of their essence, how is it possible to comprehend in its essence the Divine Reality, which is unlimited? For the substance of the essence of anything is not comprehended, but only its qualities. For example, the substance of the sun is unknown, but is understood by its qualities, which are heat and light. The substance of the essence of man is unknown and not evident, but by its qualities it is characterized and known. Thus everything is known by its qualities and not by its essence. Although the mind encompasses all things, and the outward beings are comprehended by it, nevertheless these beings with regard to their essence are unknown; they are only known with regard to their qualities.
Then how can the eternal everlasting Lord, who is held sanctified from comprehension and conception, be known by His essence? That is to say, as things can only be known by their qualities and not by their essence, it is certain that the Divine Reality is unknown with regard to its essence, and is known with regard to its attributes. Besides, how can the phenomenal reality embrace the Pre-existent Reality? For comprehension is the result of encompassing --embracing must be, so that comprehension may be --and the Essence of Unity surrounds all, and is not surrounded.
Also the difference of condition in the world of beings is an obstacle to comprehension. For example: this mineral belongs to the mineral kingdom; however far it may rise, it can never comprehend the power of growth. The plants, the trees, whatever progress they may make, cannot conceive of the power of sight or the powers of the other senses; and the animal cannot imagine the condition of man, that is to say, his spiritual powers. Difference of condition is an obstacle to knowledge; the inferior degree cannot comprehend the superior degree. How then can the phenomenal reality comprehend the Pre-existent Reality? Knowing God, therefore, means the comprehension and the knowledge of His attributes, and not of His Reality. This knowledge of the attributes is also proportioned to the capacity and power of man; it is not absolute. Philosophy consists in comprehending the reality of things as they exist, according to the capacity and the power of man. For the phenomenal reality can comprehend the Pre-existent attributes only to the extent of the human capacity. The mystery of Divinity is sanctified and purified from the comprehension of the beings, for all that comes to the imagination is that which man understands, and the power of the understanding of man does not embrace the Reality of the Divine Essence. All that man is able to understand are the attributes of Divinity, the radiance of which appears and is visible in worlds and souls.
When we look at the worlds and the souls, we see wonderful signs of the divine perfections, which are clear and apparent; for the reality of things proves the Universal Reality. The Reality of Divinity may be compared to the sun, which from the height of its magnificence shines upon all the horizons and each horizon, and each soul, receives a share of its radiance. If this light and these rays did not exist, beings would not exist; all beings express something, and partake of some ray and portion of this light. The splendors of the perfections, bounties, and attributes of God shine forth and radiate from the reality of the Perfect Man, that is to say, the Unique One, the universal Manifestation of God. Other beings receive only one ray, but the universal Manifestation is the mirror for this Sun, which appears and becomes manifest in it, with all its perfections, attributes, signs, and wonders."
http://bcca.org/bahaivision/BWF/0712mansknowledgeofgod.html
"Duality principles thus come in two common varieties, one transposing spatial relations and objects, and one transposing objects or spatial relations with mappings, functions, operations or processes. The first is called space-object (or S-O, or S<-->O) duality; the second, time-space (or T-S/O, or T<-->S/O) duality. In either case, the central feature is a transposition of element and a (spatial or temporal) relation of elements. Together, these dualities add up to the concept of triality, which represents the universal possibility of consistently permuting the attributes time, space and object with respect to various structures. From this, we may extract a third kind of duality: ST-O duality. In this kind of duality, associated with something called conspansive duality, objects can be “dualized” to spatiotemporal transducers, and the physical universe internally “simulated” by its material contents.
...
Deterministic computational and continuum models of reality are recursive in the standard sense; they evolve by recurrent operations on state from a closed set of “rules” or “laws”. Because the laws are invariant and act deterministically on a static discrete array or continuum, there exists neither the room nor the means for optimization, and no room for self-design. The CTMU, on the other hand, is conspansive and telic-recursive; because new state-potentials are constantly being created by evacuation and mutual absorption of coherent objects (syntactic operators) through conspansion, metrical and nomological uncertainty prevail wherever standard recursion is impaired by object sparsity. This amounts to self-generative freedom, hologically providing reality with a “self-simulative scratchpad” on which to compare the aggregate utility of multiple self-configurations for self-optimizative purposes.
Standard recursion is “Markovian” in that when a recursive function is executed, each successive recursion is applied to the result of the preceding one. Telic recursion is more than Markovian; it self-actualizatively coordinates events in light of higher-order relationships or telons that are invariant with respect to overall identity, but may display some degree of polymorphism on lower orders. Once one of these relationships is nucleated by an opportunity for telic recursion, it can become an ingredient of syntax in one or more telic-recursive (global or agent-level) operators or telors and be “carried outward” by inner expansion, i.e. sustained within the operator as it engages in mutual absorption with other operators. Two features of conspansive spacetime, the atemporal homogeneity of IEDs (operator strata) and the possibility of extended superposition, then permit the telon to self-actualize by “intelligently”, i.e. telic-recursively, coordinating events in such a way as to bring about its own emergence (subject to various more or less subtle restrictions involving available freedom, noise and competitive interference from other telons). In any self-contained, self-determinative system, telic recursion is integral to the cosmic, teleo-biological and volitional levels of evolution.
..
Where emergent properties are merely latent properties of the teleo-syntactic medium of emergence, the mysteries of emergent phenomena are reduced to just two: how are emergent properties anticipated in the syntactic structure of their medium of emergence, and why are they not expressed except under specific conditions involving (e.g.) degree of systemic complexity?"
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf
"More complex anticipatory capabilities, which are referred to as mental simulations, permit the prediction and processing of expected stimuli in advance. For example, Hesslow (2002) describes how rats are able to ‘plan in simulation’ and compare alternative paths in a T-maze before acting in practice. This capability can be implemented by means of the above described internal forward models. While internal models typically run on-line with action to generate predictions of an action’s effects, in order to produce mental simulations they can be run off-line, too, i.e., they can ’chain’ multiple short-term predictions and generate lookahead predictions for an arbitrary number of steps. By ’simulating’ multiple possible course of events and comparing their outcomes, and agent can select ’the best’ plan in advance (see fig. 2)."
http://www.istc.cnr.it/doc/1a_0000b_20080724d_anticipation.pdf
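The passage describes running a forward model off-line to chain predictions and compare alternative plans before acting. Below is a minimal sketch under invented assumptions (a one-dimensional state, a toy action set and goal); it is not the cited implementation.

```python
# Minimal sketch of 'planning in simulation': a forward model normally used
# on-line to predict the next state is chained off-line over candidate action
# sequences, and the agent selects the plan whose simulated outcome scores
# best. The dynamics, goal and action set are invented for illustration.

GOAL = 10

def forward_model(state, action):
    """Predicted next state for a given action (toy 1-D dynamics)."""
    return state + {"left": -1, "stay": 0, "right": +2}[action]

def simulate(state, plan):
    """Chain one-step predictions off-line for an arbitrary number of steps."""
    for action in plan:
        state = forward_model(state, action)
    return state

def choose_plan(state, plans):
    """Compare simulated outcomes and pick 'the best' plan in advance."""
    return min(plans, key=lambda plan: abs(GOAL - simulate(state, plan)))

plans = [("right",) * 4, ("right", "right", "stay", "left"), ("left",) * 4]
print(choose_plan(0, plans))   # the plan whose predicted end state is nearest the goal
```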
Redundancy in Systems Which Entertain a Model of Themselves: Interaction Information and the Self-Organization of Anticipation
"Mutual information among three or more dimensions (μ* = –Q) has been considered as interaction information. However, Krippendorff [1,2] has shown that this measure cannot be interpreted as a unique property of the interactions and has proposed an alternative measure of interaction information based on iterative approximation of maximum entropies. Q can then be considered as a measure of the difference between interaction information and redundancy generated in a model entertained by an observer. I argue that this provides us with a measure of the imprint of a second-order observing system—a model entertained by the system itself—on the underlying information processing. The second-order system communicates meaning hyper-incursively; an observation instantiates this meaning-processing within the information processing. The net results may add to or reduce the prevailing uncertainty. The model is tested empirically for the case where textual organization can be expected to contain intellectual organization in terms of distributions of title words, author names, and cited references."
http://www.mdpi.com/1099-4300/12/1/63/pdf
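Interaction information can be computed directly from a joint distribution via inclusion-exclusion over marginal entropies. The sketch below does this for the XOR relation, where the three-way term comes out negative (pure synergy); sign conventions differ between authors, so the number should be read against the convention used in the paper.

```python
# Worked example of three-way interaction information computed from a joint
# distribution, here the XOR relation Z = X xor Y with X, Y independent fair
# bits. Uses the inclusion-exclusion form
#   I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(XY) - H(XZ) - H(YZ) + H(XYZ).
import math
from collections import defaultdict

# joint distribution p(x, y, z) for z = x xor y
joint = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}

def H(margin_indices):
    """Entropy (bits) of the marginal over the given coordinate indices."""
    marg = defaultdict(float)
    for outcome, p in joint.items():
        marg[tuple(outcome[i] for i in margin_indices)] += p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

I3 = H((0,)) + H((1,)) + H((2,)) - H((0, 1)) - H((0, 2)) - H((1, 2)) + H((0, 1, 2))
print(I3)  # -1.0: X and Y are unconditionally independent yet fully coupled given Z
```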
Telentropy: Uncertainty in the biomatrix
"Teleonics is a systemic approach for the study and management of complex living systems, such as human beings, families, communities, business organisations and even countries and international relationships. The approach and its applications have been described in several publications, quoted in the paper. The units of teleonics are teleons, viz, end-related, autonomous process systems. An indication of malfunction in teleons is a high level of telentropy that can be caused by many factors, among which the most common are the lack of well defined goals, inefficient governance, inappropriate interference and undeclared sharing of subsystems between teleons. These factors, as well as other modes of telentropy generation and transfer are described, together with some suggestions about ways to avoid them."
http://www.informaworld.com/smpp/content~db=all~content=a922786098
"Stressors that challenge homeostasis, often regarded as the most urgent of needs, are the best known. When an organism's competence to maintain homeostasis within a specific range is exceeded, responses are evoked that enable the organism to cope by either removing the stressor or facilitating coexistence with it (Antelman and Caggiula, 1990). While many stressors can evoke dramatic neural and endocrine responses, a more modest or “subclinical” response may be exhibited in response to milder stimuli. These responses may build on or extend homeostatic mechanisms or they may be more or less tightly linked to homeostatic responses in a hierarchical manner creating a functional continuum. For example, such a hierarchical system was described for thermoregulation in mammals by Satinoff (1978) in which more recently evolved regulatory mechanisms are invoked when more conservative ones are unable to restore balance."
http://icb.oxfordjournals.org/content/42/3/508.full
"We have developed the proposal by Satinoff (1978) of a parallel hierarchical system, parallel in that each effector could be assigned to its own controller, and hierarchical in that some controllers have a greater capacity to influence thermoregulation than others, to include subsystem controllers responsible for the autoregulation of elements such as scrotal and brain temperature (see Mitchell and Laburn, 1997). Thus, if autoregulation fails or is overwhelmed, then a higher-ranking system can be invoked to regulate the temperature of the subsystem by regulating the whole system containing it (see Fig. 5)."
http://tinyurl.com/4apgpuu
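A hedged sketch of the parallel hierarchical idea in these passages: a limited local controller acts first, and a higher-ranking controller is invoked only when local autoregulation is overwhelmed. The setpoints, correction limits and tolerance are invented numbers, not the authors' model.

```python
# Sketch of a parallel hierarchical regulation scheme in the spirit of the
# passages above: each subsystem has its own limited controller, and when
# local autoregulation is overwhelmed, a higher-ranking controller acts on
# the whole system. Gains, limits and tolerance are invented numbers.

def regulate(error, controllers):
    """Apply controllers in rank order; escalate while error remains."""
    for name, max_correction in controllers:
        correction = max(-max_correction, min(max_correction, -error))
        error += correction
        print(f"{name}: corrected {correction:+.1f}, residual error {error:+.1f}")
        if abs(error) < 0.1:          # within tolerance: stop escalating
            break
    return error

# lower-ranking, limited local controller first; whole-system controller last
controllers = [("local autoregulation (e.g. a single subsystem)", 1.0),
               ("whole-body thermoregulation", 5.0)]
regulate(error=3.0, controllers=controllers)
```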
"The bulk of theoretical and empirical work in the neurobiology of emotion indicates that isotelesis—the principle that any one function is served by several structures and processes—applies to emotion as it applies to thermoregulation, for example (Satinoff, 1982)...In light of the preceding discussion, it is quite clear that the processes that emerge in emotion are governed not only by isotelesis, but by the principle of polytelesis as well. The first principle holds that many functions, especially the important ones, are served by a number of redundant systems, whereas the second holds that many systems serve more than one function. There are very few organic functions that are served uniquely by one and only one process, structure, or organ. Similarly, there are very few processes, structures, or organs that serve one and only one purpose. Language, too, is characterized by the isotelic and polytelic principles; there are many words for each meaning and most words have more than one meaning. The two principles apply equally to a variety of other biological, behavioral, and social phenomena. Thus, there is no contradiction between the vascular and the communicative functions of facial efference; the systems that serve these functions are both isotelic and polytelic."
http://tinyurl.com/4dt4gqs
"In other words, telesis is a kind of “pre-spacetime” from which time and space, cognition and information, state-transitional syntax and state, have not yet separately emerged. Once bound in a primitive infocognitive form that drives emergence by generating “relievable stress” between its generalized spatial and temporal components - i.e., between state and state-transition syntax – telesis continues to be refined into new infocognitive configurations, i.e. new states and new arrangements of state-transition syntax, in order to relieve the stress between syntax and state through telic recursion (which it can never fully do, owing to the contingencies inevitably resulting from independent telic recursion on the parts of localized subsystems). As far as concerns the primitive telic-recursive infocognitive MU form itself, it does not “emerge” at all except intrinsically; it has no “external” existence except as one of the myriad possibilities that naturally exist in an unbounded realm of zero constraint.
Telic recursion occurs in two stages, primary and secondary (global and local). In the primary stage, universal (distributed) laws are formed in juxtaposition with the initial distribution of matter and energy, while the secondary stage consists of material and geometric state-transitions expressed in terms of the primary stage. That is, where universal laws are syntactic and the initial mass-energy distribution is the initial state of spacetime, secondary transitions are derived from the initial state by rules of syntax, including the laws of physics, plus telic recursion. The primary stage is associated with the global telor, reality as a whole; the secondary stage, with internal telors (“agent-level” observer-participants). Because there is a sense in which primary and secondary telic recursion can be regarded as “simultaneous”, local telors can be said to constantly “create the universe” by channeling and actualizing generalized utility within it."
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf
"The notion of classifying topos is part of a trend, begun by Lawvere, of viewing a mathematical theory as a category with suitable exactness properties and which contains a “generic model”, and a model of the theory as a functor which preserves those properties. This is described in more detail at internal logic and type theory, but here are some simple examples to give the flavor. "
http://ncatlab.org/nlab/show/classifying+topos
Space and time in the context of equilibrium-point theory
"Advances to the equilibrium-point (EP) theory and solutions to several classical problems of action and perception are suggested and discussed. Among them are (1) the posture–movement problem of how movements away from a stable posture can be made without evoking resistance of posture-stabilizing mechanisms resulting from intrinsic muscle and reflex properties; (2) the problem of kinesthesia or why our sense of limb position is fairly accurate despite ambiguous positional information delivered by proprioceptive and cutaneous signals; (3) the redundancy problems in the control of multiple muscles and degrees of freedom. Central to the EP hypothesis is the notion that there are specific neural structures that represent spatial frames of reference (FRs) selected by the brain in a task-specific way from a set of available FRs. The brain is also able to translate or/and rotate the selected FRs by modifying their major attributes—the origin, metrics, and orientation—and thus substantially influence, in a feed-forward manner, action and perception. The brain does not directly solve redundancy problems: it only limits the amount of redundancy by predetermining where, in spatial coordinates, a task-specific action should emerge and allows all motor elements, including the environment, to interact to deliver a unique action, thus solving the redundancy problem (natural selection of action). The EP theory predicts the existence of specific neurons associated with the control of different attributes of FRs and explains the role of mirror neurons in the inferior frontal gyrus and place cells in the hippocampus."
http://onlinelibrary.wiley.com/doi/10.1002/wcs.108/full
Scoring Rules, Generalized Entropy, and Utility Maximization
Information measures arise in many disciplines, including forecasting (where scoring rules are used to provide incentives for probability estimation), signal processing (where information gain is measured in physical units of relative entropy), decision analysis (where new information can lead to improved decisions), and finance (where investors optimize portfolios based on their private information and risk preferences). In this paper, we generalize the two most commonly used parametric families of scoring rules and demonstrate their relation to well-known generalized entropies and utility functions, shedding new light on the characteristics of alternative scoring rules as well as duality relationships between utility maximization and entropy minimization.
http://faculty.fuqua.duke.edu/~rnau/scoring_rules_and_generalized_entropy.pdf
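The duality the abstract points to can be checked numerically in a few lines: under the logarithmic scoring rule, a forecaster's expected score is the negative cross-entropy, so it is maximized by truthful reporting, where it equals the negative entropy. The probabilities below are made up for illustration.

```python
# Numeric sketch of the link between proper scoring rules and entropy: under
# the logarithmic score, the expected score of a reported distribution q when
# outcomes are drawn from p is -H(p, q), maximized at q = p where it equals
# the negative entropy -H(p). Numbers are illustrative.
import math

def log_score(q, outcome):            # reward for reported distribution q
    return math.log(q[outcome])

def expected_score(p, q):
    return sum(p[i] * log_score(q, i) for i in range(len(p)))

p = [0.7, 0.2, 0.1]                   # "true" distribution of outcomes
honest = expected_score(p, p)         # equals -H(p)
hedged = expected_score(p, [1/3, 1/3, 1/3])
print(honest, hedged, honest > hedged)          # truthful reporting scores higher
print(-sum(pi * math.log(pi) for pi in p))      # H(p), i.e. -honest
```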
A conversion between utility and information
"Rewards typically express desirabilities or preferences over a set of alternatives. Here we propose that rewards can be defined for any probability distribution based on three desiderata, namely that rewards should be real-valued, additive and order-preserving, where the latter implies that more probable events should also be more desirable. Our main result states that rewards are then uniquely determined by the negative information content. To analyze stochastic processes, we define the utility of a realization as its reward rate. Under this interpretation, we show that the expected utility of a stochastic process is its negative entropy rate. Furthermore, we apply our results to analyze agent-environment interactions. We show that the expected utility that will actually be achieved by the agent is given by the negative cross-entropy from the input-output (I/O) distribution of the coupled interaction system and the agent's I/O distribution. Thus, our results allow for an information-theoretic interpretation of the notion of utility and the characterization of agent-environment interactions in terms of entropy dynamics."
http://arxiv.org/abs/0911.5106
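A quick numeric check of the stated correspondence, assuming the reward of an event is defined as its log-probability: the expected reward under the true distribution is the negative entropy, while an agent whose model differs from the environment achieves only the negative cross-entropy. The distributions are invented.

```python
# Minimal numeric check of the correspondence described above, assuming the
# reward of an event x is log p(x): the expected reward under p is the
# negative entropy, and the reward actually achieved by an agent that models
# the environment with q is the negative cross-entropy, which is never higher.
import math

p = {"a": 0.5, "b": 0.25, "c": 0.25}   # actual event distribution
q = {"a": 0.25, "b": 0.5, "c": 0.25}   # the agent's model of it

def expected_reward(actual, model):
    # reward of outcome x is log2 model(x); the expectation is -cross-entropy
    return sum(actual[x] * math.log2(model[x]) for x in actual)

print(expected_reward(p, p))   # -H(p)    = -1.5 bits
print(expected_reward(p, q))   # -H(p, q) = -1.75 bits, strictly lower
```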
"In 1993, Gerard’t Hooft published Dimensional Reduction in Quantum Gravity, in
which he made the first direct comparison of theories of quantum gravity to holograms:
We would like to advocate here a somewhat extreme point of view. We suspect that
there simply are not more degrees of freedom to talk about than the ones one can draw
on a surface, as given by eq. (3). The situation can be compared with a hologram of a
3-dimensional image on a 2-dimensional surface."
http://physics.ucsc.edu/~jeff/holographic.pdf
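For a sense of scale, the bound 't Hooft alludes to can be evaluated with the Bekenstein-Hawking form S = A / (4 l_p^2): the information capacity of a region grows with its boundary area measured in Planck units, not with its volume. A back-of-envelope sketch with rounded constants:

```python
# Back-of-envelope illustration of the holographic point above: the number of
# degrees of freedom (entropy in units of k_B, via S = A / (4 l_p^2))
# associated with a region is set by its boundary area in Planck units.
# Constants are rounded approximations.
hbar = 1.055e-34     # J s
G    = 6.674e-11     # m^3 kg^-1 s^-2
c    = 2.998e8       # m s^-1

planck_area = hbar * G / c**3          # l_p^2, about 2.6e-70 m^2

def holographic_entropy(area_m2):
    """Maximum entropy (in units of k_B) for a region bounded by this area."""
    return area_m2 / (4 * planck_area)

print(holographic_entropy(1.0))        # ~1e69 for a one-square-metre boundary
```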
"On the surface, holographic reduced representations are utterly different from logical unification. But I can't help feeling that, at a deeper level, they are closely related. And there is a categorical formulation of logical unification, described in the first reference below, by Rydeheard and Burstall. They say their formulation is derived from an observation by Goguen. So it may be based (I'm not an expert) on the ideas in the second reference:
David Rydeheard and Rod Burstall, Computational Category Theory. Prentice Hall, 1988. See Chapter 8.
http://www.cs.man.ac.uk/~david/categories/book/book.pdf
Joseph Goguen, What is unification? A categorical view of substitution, equation and solution. In Resolution of Equations in Algebraic Structures, 1: Algebraic Techniques, Academic Press, 1989.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.9221
So, can we extend that categorical formulation to holographic reduced representations? I don't know. But if we could, we would better understand how they are related to logic programming, and we might gain new tools for analogical reasoning. It's worth trying."
http://drdobbs.com/blogs/228700165#unihrr
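Holographic reduced representations themselves are straightforward to sketch in the style Plate describes: binding by circular convolution, unbinding by circular correlation, and cleanup by comparison against known vectors. The dimension and random vectors below are arbitrary choices; nothing here is taken from Rydeheard, Burstall or Goguen.

```python
# Sketch of holographic reduced representations: binding is circular
# convolution, unbinding is circular correlation, and the noisy result is
# identified by nearest-neighbour comparison against known items.
import numpy as np

rng = np.random.default_rng(0)
D = 1024

def vec():                     # random HRR vector, elements ~ N(0, 1/D)
    return rng.normal(0, 1 / np.sqrt(D), D)

def bind(a, b):                # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, b):              # circular correlation (approximate inverse)
    return np.real(np.fft.ifft(np.fft.fft(c) * np.conj(np.fft.fft(b))))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

role, filler, other = vec(), vec(), vec()
trace = bind(role, filler)                 # store the role/filler pair
retrieved = unbind(trace, role)            # query the trace with the role
print(cosine(retrieved, filler))           # similar to the stored filler (well above chance)
print(cosine(retrieved, other))            # near zero: unrelated item
```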
Utility, rationality and beyond: from behavioral finance to informational finance
http://books.google.com/books?id=_LFdBxG9w-kC&lr=&source=gbs_navlinks_s
"In the era of knowledge-driven economy, technological innovation is a key character. The thesis describes the connotation, purpose and core topics of service science through implementing knowledge management, and finally put forward the suggestion of improving the technological innovation capacity through knowledge management and service science."
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5301012
"The essential difference is that in a knowledge economy, knowledge is a product, while in a knowledge-based economy, knowledge is a tool. This difference is not yet well distinguished in the subject matter literature. They both are strongly interdisciplinary, involving economists, computer scientists, engineers, mathematicians, chemists and physicists, as well as cognitivists, psychologists and sociologists.
Various observers describe today's global economy as one in transition to a "knowledge economy," as an extension of an "information society." The transition requires that the rules and practices that determined success in the industrial economy need rewriting in an interconnected, globalized economy where knowledge resources such as know-how and expertise are as critical as other economic resources. According to analysts of the "knowledge economy," these rules need to be rewritten at the levels of firms and industries in terms of knowledge management and at the level of public policy as knowledge policy or knowledge-related policy."
http://en.wikipedia.org/wiki/Knowledge_economy
"Baumol's cost disease (also known as the Baumol Effect) is a phenomenon described by William J. Baumol and William G. Bowen in the 1960s. It involves a rise of salaries in jobs that have experienced no increase of labor productivity in response to rising salaries in other jobs which did experience such labor productivity growth. This goes against the theory in classical economics that wages are always closely tied to labor productivity changes.
The rise of wages in jobs without productivity gains is caused by the necessity to compete for employees with jobs that did experience gains and hence can naturally pay higher salaries, just as classical economics predicts. For instance, if the banking industry pays its bankers 19th century style salaries, the bankers may decide to quit and get a job at an automobile factory where salaries are commensurate to high labor productivity. Hence, bankers' salaries are increased not due to labor productivity increases in the banking industry, but rather due to productivity and wage increases in other industries."
http://en.wikipedia.org/wiki/Baumol's_cost_disease
"Ever since Harvard sociologist Daniel Bell published his book, The Coming of Post-Industrial Society, in 1973, there has been a strong sense of inevitability about the rise and dominance of services in the world’s advanced economies. And, in general, people have concluded that this is a good thing. But there’s danger lurking in services. At this point in their evolution, they’re less efficient and productive than modern manufacturing and farming. Also, while manufacturing took over 200 years before its “quality revolution,” services have only been dominant for a few decades and have yet to figure out quality. These issues could mean big trouble not just for developed countries but for the entire global economy.
Some of today’s top thinkers about services are sounding alarms. Robert Morris, head of service research at IBM Research, says that unless services become more scientific and technical, economic growth could stagnate. Henry Chesbrough, the UC Berkeley professor who coined the term “open innovation,” says this is a major issue facing the world economy long term. He calls it the “commodity services trap.”
Underpinning their thinking is an economic theory called Baumol’s disease. The idea is that as services become an ever larger piece of the economy, they consume an ever larger share of the human and capital resources–but don’t create enough value, in return. Think of an electricity generation plant that consumes more energy than it produces. “Productivity and quality of services isn’t growing comparably to other sectors, including manufacturing and agriculture, so the danger is that it swamps the economy–employment, the share of GDP, and what people have to pay for,” says Morris. “The world economy could stall.”
Developed nations are particularly vulnerable to Baumol's disease. In Europe and the United States, a lot of progress has been made in the past decade in improving the efficiency of IT services, but other service industries are frightfully inefficient and ineffective: think government, health care and education.
So while adding jobs is vitally important to countries that are still reeling from the economic meltdown, if the jobs that are added are commodity service jobs, long term, it’s adding to the inefficiency of the economy. That’s why governments need to invest aggressively in science and education and technology to improve services in spite of their budget deficits.
One area that deserves investment is service science. It's the academic discipline that IBM (with help from Chesbrough) began promoting in 2002. A multidisciplinary approach, service science addresses Baumol's disease head on by using the ideas and skills of computer science, engineering, social science and business management to improve the productivity, quality and innovation in services. Many of the techniques that have already been developed in computer, mathematical and information sciences can be directly applied to helping services. But new breakthroughs and the better interactions with behavioral and business sciences are also essential, because services are, and always will be, people-centric businesses.
Today, more than 450 universities worldwide offer some sort of service science program. But much more can and should be done to avoid falling into the commodity services trap. Otherwise, the post-industrial society could take on a post-apocalyptic tinge."
http://asmarterplanet.com/blog/category/smarter-systems
Innovation in services: a review of the debate and a research agenda
http://www.slideshare.net/rooteranalysis/articulo-3-innovacionservicios
The service paradox and endogenous economic growth
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=358350
"In this blog we will explore how recent ideas in cognitive science can be used to develop new products, services and organizations that enhance how we think and feel.
We want exciting, beautiful, easy-to-use things. We ask our artifacts (anything that is designed) to make us smarter, reflect our values, invoke the respect and admiration of others and involve our friends and family when appropriate. We want all of this on top of whatever it is they are supposed to do.
The basic functionality of any artifact is now table stakes. What designers must do is go beyond the basics and deliver the aesthetic, emotional, experiential, profound and even transformational. We must make the ordinary extraordinary in an authentic way. In many respects, that has always been the goal of design and exceptional designers achieve it (somehow) everyday.
But it goes beyond that.
There are things that we design that fail to achieve their intended purpose because they don’t reflect sufficient understanding of how the mind works. And the consequence can be dire. Take for example weight loss or chronic disease management programs that are designed to change our behaviors but fail to do so. The cost of that design failure is very high.
Over the last two decades there has been an explosion in what we know about how the mind works. Significant advances in the neuro and cognitive sciences and a wide range of emerging high-potential fields including neuroeconomics, cognitive ergonomics, behavioral finance, augmented cognition and others promise to provide the principles, models and tools needed to systematically design artifacts that not only support cognition but actually make it better.
Cognitive design seeks to paternalistically harness these insights and translate them into improved products, services, change programs, workflow, organizational designs, workspaces and any other artifact that impacts how we think and feel. Cognitive design, like human factors, interactive design and most other modern design movements looks to put the latest findings from the human sciences to work. But it goes further than that.
It goes further by insisting that the scope and orientation of the design problem itself must change. The central idea is in fact somewhat radical:
We need a new design stance that says we are not just designing the functionality of the artifact but we are also designing the mental states of the user.
In this sense the mental functioning and states of the end user are every bit as much a part of the design problem and specification as are the more traditional considerations of feature, function and form. We seek to break down the distinction between an artifact and the user’s reaction to it by including both as the “thing to be designed”. Now it is feature, function, form and mental state. The fact that we have the science and soon the practice to do this is both exciting and worrisome.
We will cover both the promise and the peril (ethical considerations) of cognitive design in this blog."
http://newvaluestreams.com/wordpress/?page_id=2
"I am hoping soon to start work on the final draft of a book whose working title has been Cognitive Design. This book is about the design and standardization of sets of things – such as the letters of the alphabet, the fifty United States, the notes in the musical octave, the different grades you can give your students, or the stops on a subway line. Every person deals with one or another of these sets of things on a daily basis, and for many people they hold a sort of fascination. We sometimes forget that societies, cultures, and the human beings within them – not nature – designed these sets, chose labels for their members, and made them into standards. Many people have a sense that these different sets have something in common – but most would be hard-pressed to say what that is. My book lays out the answer. I submitted the most recent draft of it as my Ph.D. dissertation at Rutgers University in April 2005."
http://www.ianwatson.org/contrast_set_design_overview.html
Cognitive Design Features on Traffic Signs
http://www.engineeringletters.com/issues_v14/issue_1/EL_14_1_3.pdf
6 cognitive design principles (simplicity, consistency, organization, natural order, clarity, and attractiveness)
http://www.ncbi.nlm.nih.gov/pubmed/18359412
"We are a multidisciplinary research center devoted to the study of medical decision-making, cognitive foundations of health behaviors and the effective use of computer-based information technologies.Our research is deeply rooted in theories and methods of cognitive science, with a strong focus on the analysis of medical error, development of models of decision-making, and design and evaluation of effective human-computer interactions. These studies are guided by a concern for improving the performance of individuals and teams in the health care system."
http://www.uthouston.edu/cognitive-informatics/
Cognitive Informatics: Exploring the theoretical foundations for Natural Intelligence, Neural Informatics, Autonomic Computing, and Agent Systems:
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=978138C997A258571CD8F3F58A44D558?doi=10.1.1.89.2133&rep=rep1&type=pdf
"Developing a value proposition is based on a review and analysis of the benefits, costs and value that an organization can deliver to its customers, prospective customers, and other constituent groups within and outside the organization. It is also a positioning of value, where Value = Benefits - Cost (cost includes risk)"
http://en.wikipedia.org/wiki/Value_proposition
"In logic and philosophy, the term proposition (from the word "proposal") refers to either (a) the "content" or "meaning" of a meaningful declarative sentence or (b) the pattern of symbols, marks, or sounds that make up a meaningful declarative sentence. The meaning of a proposition includes having the quality or property of being either true or false, and as such propositions are claimed to be truthbearers.
The existence of propositions in sense (a) above, as well as the existence of "meanings", is disputed by some philosophers. Where the concept of a "meaning" is admitted, its nature is controversial. In earlier texts writers have not always made it sufficiently clear whether they are using the term proposition in sense of the words or the "meaning" expressed by the words. To avoid the controversies and ontological implications, the term sentence is often now used instead of proposition to refer to just those strings of symbols that are truthbearers, being either true or false under an interpretation. Strawson advocated the use of the term "statement", and this is the current usage in mathematical logic."
http://en.wikipedia.org/wiki/Proposition
"Propositions, as ways of "measuring" semantic information by the topic-ful, turn out to be more like dollars than like numbers[...]There are no real, natural universal units of either economic value or semantic information."
http://www.consciousentities.com/dennett.htm
"Inasmuch as science is observational or perceptual in nature, the goal of providing a scientific model and mechanism for the evolution of complex systems ultimately requires a supporting theory of reality of which perception itself is the model (or theory-to-universe mapping). Where information is the abstract currency of perception, such a theory must incorporate the theory of information while extending the information concept to incorporate reflexive self-processing in order to achieve an intrinsic (self-contained) description of reality. This extension is associated with a limiting formulation of model theory identifying mental and physical reality, resulting in a reflexively self-generating, self-modeling theory of reality identical to its universe on the syntactic level. By the nature of its derivation, this theory, the Cognitive Theoretic Model of the Universe or CTMU, can be regarded as a supertautological reality-theoretic extension of logic.
...
The currency of telic feedback is a quantifiable self-selection parameter, generalized utility, a generalized property of law and state in the maximization of which they undergo mutual refinement (note that generalized utility is self-descriptive or autologous, intrinsically and retroactively defined within the system, and “pre-informational” in the sense that it assigns no specific property to any specific object). Through telic feedback, a system retroactively self-configures by reflexively applying a “generalized utility function” to its internal existential potential or possible futures. In effect, the system brings itself into existence as a means of atemporal communication between its past and future whereby law and state, syntax and informational content, generate and refine each other across time to maximize total systemic self-utility. This defines a situation in which the true temporal identity of the system is a distributed point of temporal equilibrium that is both between and inclusive of past and future. In this sense, the system is timeless or atemporal.
A system that evolves by means of telic recursion – and ultimately, every system must either be, or be embedded in, such a system as a condition of existence – is not merely computational, but protocomputational. That is, its primary level of processing configures its secondary (computational and informational) level of processing by telic recursion. Telic recursion can be regarded as the self-determinative mechanism of not only cosmogony, but a natural, scientific form of teleology.
...
The ultimate “boundary of the boundary” of the universe is UBT, a realm of zero constraint and infinite possibility where neither boundary nor content exists. The supertautologically-closed universe buys internal diffeonesis only at the price of global synesis, purchasing its informational distinctions only at the price of coherence.
...
Moreover, in order to function as a selection principle, it generates a generalized global selection parameter analogous to “self-utility”, which it then seeks to maximize in light of the evolutionary freedom of the cosmos as expressed through localized telic subsystems which mirror the overall system in seeking to maximize (local) utility. In this respect, the Telic Principle is an ontological extension of so-called “principles of economy” like those of Maupertuis and Hamilton regarding least action, replacing least action with deviation from generalized utility."
http://www.ctmu.net/
"In economics, utility is a measure of relative satisfaction. Given this measure, one may speak meaningfully of increasing or decreasing utility, and thereby explain economic behavior in terms of attempts to increase one's utility. Utility is often modeled to be affected by consumption of various goods and services, possession of wealth and spending of leisure time.
The doctrine of utilitarianism saw the maximization of utility as a moral criterion for the organization of society. According to utilitarians, such as Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873), society should aim to maximize the total utility of individuals, aiming for "the greatest happiness for the greatest number of people". Another theory forwarded by John Rawls (1921–2002) would have society maximize the utility of the individual initially receiving the minimum amount of utility."
http://en.wikipedia.org/wiki/Utility
"Reflective equilibrium is a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments. Although he did not use the term, philosopher Nelson Goodman introduced the method of reflective equilibrium as an approach to justifying the principles of inductive logic. The term 'reflective equilibrium' was coined by John Rawls and popularized in his celebrated A Theory of Justice as a method for arriving at the content of the principles of justice.
Rawls argues that human beings have a "sense of justice" which is both a source of moral judgment and moral motivation. In Rawls's theory, we begin with "considered judgments" that arise from the sense of justice. These may be judgments about general moral principles (of any level of generality) or specific moral cases. If our judgments conflict in some way, we proceed by adjusting our various beliefs until they are in "equilibrium," which is to say that they are stable, not in conflict, and provide consistent practical guidance. Rawls argues that a set of moral beliefs in ideal reflective equilibrium describes or characterizes the underlying principles of the human sense of justice.
An example of the method of reflective equilibrium may be useful. Suppose Zachary believes in the general principle of always obeying the commands in the Bible, and mistakenly thinks that these are completely encompassed by every Old Testament command. Suppose also that he thinks that it is not ethical to stone people to death merely for being Wiccan. These views may come into conflict (see Exodus 22:18, but see John 8:7). If they do, Zachary will then have several choices. He can discard his general principle in search of a better one (for example, only obeying the Ten Commandments), modify his general principle (for example, choosing a different translation of the Bible, or including Jesus' teaching from John 8:7 "If any of you is without sin, let him be the first to cast a stone" into his understanding), or change his opinions about the point in question to conform with his theory (by deciding that witches really should be killed). Whatever the decision, he has moved toward reflective equilibrium."
http://en.wikipedia.org/wiki/Reflective_equilibrium
"In philosophy, especially that of Aristotle, the golden mean is the desirable middle between two extremes, one of excess and the other of deficiency. For example courage, a virtue, if taken to excess would manifest as recklessness and if deficient as cowardice.
To the Greek mentality, it was an attribute of beauty. Both ancients and moderns realized that there is a close association in mathematics between beauty and truth. The poet John Keats, in his Ode on a Grecian Urn, put it this way:
"Beauty is truth, truth beauty," -- that is all
Ye know on earth, and all ye need to know.
The Greeks believed there to be three 'ingredients' to beauty: symmetry, proportion, and harmony. This triad of principles infused their life. They were very much attuned to beauty as an object of love and something that was to be imitated and reproduced in their lives, architecture, Paideia and politics. They judged life by this mentality.
In Chinese philosophy, a similar concept, Doctrine of the Mean, was propounded by Confucius; Buddhist philosophy also includes the concept of the middle way."
http://en.wikipedia.org/wiki/Golden_mean_(philosophy)
"The term intentionality was introduced by Jeremy Bentham as a principle of utility in his doctrine of consciousness for the purpose of distinguishing acts that are intentional and acts that are not. The term was later used by Edmund Husserl in his doctrine that consciousness is always intentional, a concept that he undertook in connection with theses set forth by Franz Brentano regarding the ontological and psychological status of objects of thought. It has been defined as "aboutness", and according to the Oxford English Dictionary it is "the distinguishing property of mental phenomena of being necessarily directed upon an object, whether real or imaginary". It is in this sense and the usage of Husserl that the term is primarily used in contemporary philosophy."
http://en.wikipedia.org/wiki/Intentionality
"The intentional stance is a term coined by philosopher Daniel Dennett for the level of abstraction in which we view the behavior of a thing in terms of mental properties. It is part of a theory of mental content proposed by Dennett, which provides the underpinnings of his later works on free will, consciousness, folk psychology, and evolution.
"Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do." (Daniel Dennett, The Intentional Stance, p. 17)"
http://en.wikipedia.org/wiki/Intentional_stance
The Intentional Stance
http://consc.net/mindpapers/2.1b
The Intentional Stance: Developmental and Neurocognitive Perspectives
http://ase.tufts.edu/cogstud/incbios/griffinr/datapubs/griffin&bc-dennett.pdf
Anticipation, Design and Interaction
http://www.youtube.com/watch?v=s7l4VF7asXs
Dynamic ontology as an ontological framework of anticipatory systems
http://www.emeraldinsight.com/journals.htm?articleid=1864162&show=abstract
Strong anticipation: Multifractal cascade dynamics modulate scaling in synchronization behaviors
"Previous research on anticipatory behaviors has found that the fractal scaling of human behavior may attune to the fractal scaling of an unpredictable signal [Stephen DG, Stepp N, Dixon JA, Turvey MT. Strong anticipation: Sensitivity to long-range correlations in synchronization behavior. Physica A 2008;387:5271–8]. We propose to explain this attunement as a case of multifractal cascade dynamics [Schertzer D, Lovejoy S. Generalised scale invariance in turbulent phenomena. Physico-Chem Hydrodyn J 1985;6:623–5] in which perceptual-motor fluctuations are coordinated across multiple time scales. This account will serve to sharpen the contrast between strong and weak anticipation: whereas the former entails a sensitivity to the intermittent temporal structure of an unpredictable signal, the latter simply predicts sensitivity to an aggregate description of an unpredictable signal irrespective of actual sequence. We pursue this distinction through a reanalysis of Stephen et al.’s data by examining the relationship between the widths of singularity spectra for intertap interval time series and for each corresponding interonset interval time series. We find that the attunement of fractal scaling reported by Stephen et al. was not the trivial result of sensitivity to temporal structure in aggregate but reflected a subtle sensitivity to the coordination across multiple time scales of fluctuation in the unpredictable signal."
http://dx.doi.org/10.1016/j.chaos.2011.01.005
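As a simplified point of reference, a single (monofractal) scaling exponent can be estimated from how the fluctuation of block sums grows with block size; the multifractal analysis in the paper examines a whole spectrum of such exponents rather than this one aggregate number. A sketch on synthetic data:

```python
# Rough sketch of estimating a scaling exponent from a behavioural time
# series (e.g. intertap intervals): the standard deviation of block sums is
# fit against block size on log-log axes. A single exponent is only an
# aggregate description; the cited work examines a spectrum of exponents.
import numpy as np

rng = np.random.default_rng(1)
series = rng.normal(size=8192)                 # synthetic, uncorrelated "intervals"

def scaling_exponent(x, block_sizes=(4, 8, 16, 32, 64, 128)):
    sizes, fluctuations = [], []
    for n in block_sizes:
        blocks = x[: len(x) // n * n].reshape(-1, n).sum(axis=1)
        sizes.append(n)
        fluctuations.append(blocks.std())
    slope, _ = np.polyfit(np.log(sizes), np.log(fluctuations), 1)
    return slope                                # ~0.5 for uncorrelated noise

print(scaling_exponent(series))
```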
Introduction to the Natural Anticipator and the Artificial Anticipator
"This short communication deals with the introduction of the concept of anticipator, which is one who anticipates, in the framework of computing anticipatory systems. The definition of anticipation deals with the concept of program. Indeed, the word program, comes from “pro-gram” meaning "to write before" by anticipation, and means a plan for the programming of a mechanism, or a sequence of coded instructions that can be inserted into a mechanism, or a sequence of coded instructions, as genes or behavioural responses, that is part of an organism. Any natural or artificial programs are thus related to anticipatory rewriting systems, as shown in this paper."
http://orbi.ulg.ac.be/handle/2268/81299
http://tinyurl.com/47ogqta
How General is Nilpotency?
"Evidence is presented for the generality, criticality and importance of nilpotence and the associated criteria of Pauli exclusion, quantum phase factor and quantum holographic signal processing in relation to calculation, problem solving and optimum control of Heisenberg uncertainty in Quantum Interaction.
...
Nilpotent logic rather than digital logic reflects this by making the universal automatically the mirror image of the particular because the universe is constrained to have zero totality. This clearly operates in the case of quantum mechanics. The question that then emerges is how much can any system (e.g. life, consciousness, galactic formation, chemistry), which has a strong degree of self-organization manage to achieve this by being modeled on a nilpotent structure. The work of Hill and Rowlands (2007), and Marcer and Rowlands (2007), suggests that this is possible in a wide variety of contexts. The reason is that the nilpotency does not stem from quantum mechanics initially, but from fundamental conditions of optimal information processing which are prior to physics, chemistry and biology, and even to mathematics."
http://www.naturescode.org.uk/files/HowgeneralANPA_(2).pdf
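Nilpotency itself is a simple algebraic property: a non-zero operator whose square is zero. The toy matrix below shows only that bare property; Rowlands' nilpotent construction uses a richer algebra in which the vanishing square is read as the mass-shell condition E^2 - p^2 - m^2 = 0.

```python
# Minimal illustration of nilpotency: an operator that is non-zero yet squares
# to zero. This toy matrix only shows the bare algebraic property, not the
# richer algebra used in the nilpotent Dirac construction cited above.
import numpy as np

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])    # strictly upper-triangular, hence nilpotent

print(N @ N)                  # the zero matrix: N is nilpotent of index 2
print(np.allclose(N @ N, 0))  # True
```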
The 'Logic' of Self-Organizing Systems
A totally new computational grammatical structure has been developed which encompasses the general class of self-organizing systems. It is based on a universal rewrite system and the principle of nilpotency, where a system and its environment have a space-time variation defined by the phase, which preserves the dual mirror-image relationship between the two.
http://www.aaai.org/ocs/index.php/FSS/FSS10/paper/download/2188/2674
A Computational Path to the Nilpotent Dirac Equation
"Using a rewrite approach we introduce a computational path to a nilpotent form of the Dirac equation. The system is novel in allowing new symbols to be added to the initial alphabet and starts with just one symbol, representing ‘nothing’, and two fundamental rules: create, a process which adds news symbols, and conserve, a process which examines the effect of any new symbol on those that currently exist. With each step a new sub-alphabet of an infinite universal alphabet is created. The implementation may be iterative, where a sequence of algebraic properties is required of the emerging subalphabets. The path proceeds from nothing through conjugation, complexification, and dimensionalisation to a steady (nilpotent) state in which no fundamentally new symbol is needed. Many simple ways of implementing the computational path exist.
Keywords. Rewrite system, substitution system, nilpotent, Dirac equation, universal
alphabet.
Rewrite systems are synonymous with computing in the sense that most software is
written in a language that must be rewritten as characters for some hardware to
interpret. Formal rewrite (substitution or production) systems are pieces of software that take an object usually represented as a string of characters and using a set of rewrite rules (which define the system) generate a new string representing an altered state of the object. If required, a second realisation system takes the string and produces a visualisation or manifestation of the objects being represented. Each step of such rewrite systems sees one or more character entities of the complex object, defined in terms of symbols drawn from a finite alphabet Σ, being mapped using rewrite rules of the form L→R, into other character entities. Some stopping mechanism is defined to identify the end of one step and the start of the next (for example we can define that for each character entity or group of entities in a string, and working in a specific order, we will apply every rule that applies). It is usual in such systems to halt the execution of the entire system if some goal state is reached (e.g. all the character entities are in some normal form); if no changes are generated; if changes are cycling; or after a specified number of iterations. The objects being rewritten and differing stopping mechanisms determine different families of rewrite system, and in each family, alternative rules and halting conditions may result in strings representing differing species of object. Allowing new rules to be added dynamically to the existing set and allowing rules to be invoked in a stochastic fashion are means whereby more complexity may be introduced. For examples of various types of rewrite system see: von Koch (1905), Chomsky (1956), Naur et al (1960), Mandelbrot (1982), Wolfram (1985), Prusinkiewicz and Lindenmayer (1990), Dershowitz and Plaisted (2001), Marti-Oliet and Meseguer (2002), etc."
http://www.naturescode.org.uk/files/IJCAS-Diaz-Rowlands.pdf
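A minimal sketch of the generic rewrite loop described above (parallel application of L→R rules with the usual halting conditions); the toy Lindenmayer-style rules are an illustrative assumption, not the create/conserve rules of the nilpotent construction:

```python
# Minimal string-rewrite sketch: symbols from a finite alphabet are mapped
# in parallel through rules of the form L -> R, with the halting conditions
# mentioned above (no change, cycling, or an iteration cap).
# The rules below are an illustrative Lindenmayer-style toy, not the
# create/conserve rules of the nilpotent Dirac construction.

def rewrite(state, rules, max_steps=10):
    seen = {state}
    for _ in range(max_steps):
        nxt = "".join(rules.get(ch, ch) for ch in state)  # apply every rule that applies
        if nxt == state:        # normal form: no rule changed anything
            break
        if nxt in seen:         # changes are cycling
            break
        seen.add(nxt)
        state = nxt
    return state

# Toy example: 'a' -> 'ab', 'b' -> 'a' generates Fibonacci-length strings.
print(rewrite("a", {"a": "ab", "b": "a"}, max_steps=6))
```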
Vicious Circles in Orthogonal Term Rewrite Systems
http://web.mac.com/janwillemklop/Site/Bibliography_files/94.viciouscircles-entcs.pdf
The Universe from Nothing: A Mathematical Lattice of Empty Sets
http://arxiv.org/abs/physics/0309102
Lattice Duality: The Origin of Probability and Entropy
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.58.8541
Intelligent machines in the twenty-first century: foundations of inference and inquiry
"The last century saw the application of Boolean algebra to the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. Recent advances in our understanding of the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we recently identified the algebra of questions as the free distributive algebra, which will now allow us to work with questions in a way analogous to that which Boolean algebra enables us to work with logical statements. In this paper, we examine the foundations of inference and inquiry. We begin with a history of inferential reasoning, highlighting key concepts that have led to the automation of inference in modern machine–learning systems. We then discuss the foundations of inference in more detail using a modern viewpoint that relies on the mathematics of partially ordered sets and the scaffolding of lattice theory. This new viewpoint allows us to develop the logic of inquiry and introduce a measure describing the relevance of a proposed question to an unresolved issue. Last, we will demonstrate the automation of inference, and discuss how this new logic of inquiry will enable intelligent machines to ask questions. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them not only to make inferences from data, but also to decide which question to ask, which experiment to perform, or which measurement to take given what they have learned and what they are designed to understand."
Keywords: Inference, Probability, Entropy, Bayesian Methods, Lattice Theory, Machine Intelligence
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.59.1028&rep=rep1&type=pdf
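A small sketch of the two halves of the programme described above: automated inference by Bayes' rule, and a relevance measure for a proposed question. Knuth derives relevance on the lattice of questions itself; the sketch below uses the familiar expected-information-gain surrogate, and the prior and likelihoods are illustrative assumptions:

```python
# Sketch: automated inference (Bayes' rule) plus a simple relevance measure
# for a proposed question, taken here as the expected reduction in entropy
# of the unresolved issue once the answer arrives.

from math import log2

def entropy(p):
    return -sum(x * log2(x) for x in p if x > 0)

def posterior(prior, likelihood):
    joint = [pr * lk for pr, lk in zip(prior, likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

def relevance(prior, question):
    """Expected entropy drop if we ask `question`, given as rows P(answer | hypothesis)."""
    expected_h = 0.0
    for a in range(len(question[0])):
        lk = [row[a] for row in question]        # likelihood of answer a under each hypothesis
        p_answer = sum(pr * l for pr, l in zip(prior, lk))
        if p_answer > 0:
            expected_h += p_answer * entropy(posterior(prior, lk))
    return entropy(prior) - expected_h

prior = [0.5, 0.3, 0.2]                          # three rival hypotheses (illustrative)
q = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]         # P(yes/no | hypothesis) for one question
print(relevance(prior, q))                       # bits of expected information gain
```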
Entropy/Information Theory Publications:
http://knuthlab.rit.albany.edu/pubs-entropy.html
Using Cognitive Entropy to Manage Uncertain Concepts in Formal Ontologies:
http://www.cs.us.es/~tchavez/53270315.pdf
Betweeness, Metrics and Entropies in Lattices
http://www.cs.umb.edu/~dsim/papersps/bme.pdf
Scanning the structure of ill-known spaces: Part 1. Founding principles about mathematical constitution of space
Necessary and sufficient conditions allowing a previously unknown space to be explored through scanning operators are reexamined with respect to measure theory. Generalized conceptions of distances and dimensionality evaluation are proposed, together with their conditions of validity and range of application to topological spaces. The existence of a Boolean lattice with fractal properties originating from nonwellfounded properties of the empty set is demonstrated. This lattice provides a substrate with both discrete and continuous properties, from which existence of physical universes can be proved, up to the function of conscious perception. Spacetime emerges as an ordered sequence of mappings of closed 3-D Poincaré sections of a topological 4-space provided by the lattice. The possibility of existence of spaces with fuzzy dimension or with adjoined parts with decreasing dimensions is raised, together with possible tools for their study. The work provides the introductory foundations supporting a new theory of space whose physical predictions (suppressing the opposition of quantum and relativistic approaches) and experimental proofs are presented in detail in Parts 2 and 3 of the study.
http://arxiv.org/abs/physics/0211096
"That is to say, there may be properties of the known universe that can only be known or explained - "scanned" - only by dimensional probes of more than three dimensions."
http://tinyurl.com/4shlsfr
Topology in Computer Science: Constructivity; Asymmetry and Partiality; Digitization
http://www.dagstuhl.de/Reports/00/00231.pdf
Partiality I: Embedding Relation Algebras
"As long as no cooperation between processes is supposed to take place, one may consider them separately and need not ask for the progress of the respective other processes. If a composite result of processes is to be delivered, it is important in which way the result is built, only by non-strict/continuous “accumulation” (i.e., open for partial evaluation) or with additional intermittent strict/non-continuous “transactions”.
We define the concept of partiality to cope with partial availability. To this end relations are handled under the aspect that orderings are defined in addition to the identities in every relation algebra. Only continuous functions with respect to these orderings are considered to regulate transfer of partialities."
http://homepage.mac.com/titurel/Papers/SchmidtJLAP.pdf
"7. The CTMU and Teleology
Historically, the Telic Principle can be understood as a logical analogue of teleology incorporating John Archibald Wheeler’s Observer Participation Thesis (approximately, “man participates in the ongoing quantum-scale creation of reality by observing it and thereby collapsing the wavefunction representing its potential”). More directly, the Telic Principle says that reality is a self-configuring entity that emerges from a background of unbound potential as a protean recursive construct with a single imperative: self-actualization. In other words, existence and its amplification is the tautological raison d’être of the cosmos. The phrase “raison d’être” has a literal significance; in order to exist, a self-contained universe must configure itself to recognize its own existence, and to configure itself in this way it must, by default, generate and parameterize its own self-configuration and self-recognition functions. This describes a situation in which the universe generates its own generalized utility: to self-configure, the universe must have a “self-actualization criterion” by which to select one of many possible structures or “futures” for itself, and this criterion is a generalized analogue of human utility…its raison d’être.
In addition to generalized utility and generalized volition (teleology), the universe also possesses generalized cognition (coherent self-recognition). By any reasonable definition of the term “mental”, this makes the universe mental in a generalized sense, where “generalized” means that these attributes conform to general functional descriptions of what humans do in the process of volition, cognition and mentation. The “coherent self-recognition” feature of reality appears as an explicit feature of conspansive spacetime, a model-theoretic dual of the expanding cosmos. Whereas the expanding cosmos is simplistically depicted in terms of a model called ERSU, short for Expanding Rubber-Sheet Universe, conspansive spacetime is depicted by a model-theoretic dual of ERSU called USRE, short for the Universe as a Self-Representational Entity. While ERSU is a product of Cartesian mind-matter dualism that effectively excludes mind in favor of matter, USRE, which portrays the universe as a “self-simulation”, is a form of dual-aspect monism according to which reality is distributively informational and cognitive in nature.
It is important to understand that the CTMU does not arbitrarily “project” human attributes onto the cosmos; it permits the logical deduction of necessary general attributes of reality, lets us identify any related human attributes derived from these general attributes, and allows us to explain the latter in terms of the former. CTMU cosmology is thus non-anthropomorphic. Rather, it uses an understanding of the cosmological medium of sentience to explain the mental attributes inherited by sentient organisms from the cosmos in which they have arisen. Unlike mere anthropomorphic reasoning, this is a logically correct description of human characteristics in terms of the characteristics of the universe from which we derive our existence."
http://www.megafoundation.org/CTMU/Articles/Nexus.html
"As our knowledge of things, even of created and limited things, is knowledge of their qualities and not of their essence, how is it possible to comprehend in its essence the Divine Reality, which is unlimited? For the substance of the essence of anything is not comprehended, but only its qualities. For example, the substance of the sun is unknown, but is understood by its qualities, which are heat and light. The substance of the essence of man is unknown and not evident, but by its qualities it is characterized and known. Thus everything is known by its qualities and not by its essence. Although the mind encompasses all things, and the outward beings are comprehended by it, nevertheless these beings with regard to their essence are unknown; they are only known with regard to their qualities.
Then how can the eternal everlasting Lord, who is held sanctified from comprehension and conception, be known by His essence? That is to say, as things can only be known by their qualities and not by their essence, it is certain that the Divine Reality is unknown with regard to its essence, and is known with regard to its attributes. Besides, how can the phenomenal reality embrace the Pre-existent Reality? For comprehension is the result of encompassing --embracing must be, so that comprehension may be --and the Essence of Unity surrounds all, and is not surrounded.
Also the difference of condition in the world of beings is an obstacle to comprehension. For example: this mineral belongs to the mineral kingdom; however far it may rise, it can never comprehend the power of growth. The plants, the trees, whatever progress they may make, cannot conceive of the power of sight or the powers of the other senses; and the animal cannot imagine the condition of man, that is to say, his spiritual powers. Difference of condition is an obstacle to knowledge; the inferior degree cannot comprehend the superior degree. How then can the phenomenal reality comprehend the Pre-existent Reality? Knowing God, therefore, means the comprehension and the knowledge of His attributes, and not of His Reality. This knowledge of the attributes is also proportioned to the capacity and power of man; it is not absolute. Philosophy consists in comprehending the reality of things as they exist, according to the capacity and the power of man. For the phenomenal reality can comprehend the Pre-existent attributes only to the extent of the human capacity. The mystery of Divinity is sanctified and purified from the comprehension of the beings, for all that comes to the imagination is that which man understands, and the power of the understanding of man does not embrace the Reality of the Divine Essence. All that man is able to understand are the attributes of Divinity, the radiance of which appears and is visible in worlds and souls.
When we look at the worlds and the souls, we see wonderful signs of the divine perfections, which are clear and apparent; for the reality of things proves the Universal Reality. The Reality of Divinity may be compared to the sun, which from the height of its magnificence shines upon all the horizons and each horizon, and each soul, receives a share of its radiance. If this light and these rays did not exist, beings would not exist; all beings express something, and partake of some ray and portion of this light. The splendors of the perfections, bounties, and attributes of God shine forth and radiate from the reality of the Perfect Man, that is to say, the Unique One, the universal Manifestation of God. Other beings receive only one ray, but the universal Manifestation is the mirror for this Sun, which appears and becomes manifest in it, with all its perfections, attributes, signs, and wonders."
http://bcca.org/bahaivision/BWF/0712mansknowledgeofgod.html
"Duality principles thus come in two common varieties, one transposing spatial relations and objects, and one transposing objects or spatial relations with mappings, functions, operations or processes. The first is called space-object (or S-O, or S<-->O) duality; the second, time-space (or T-S/O, or T<-->S/O) duality. In either case, the central feature is a transposition of element and a (spatial or temporal) relation of elements. Together, these dualities add up to the concept of triality, which represents the universal possibility of consistently permuting the attributes time, space and object with respect to various structures. From this, we may extract a third kind of duality: ST-O duality. In this kind of duality, associated with something called conspansive duality, objects can be “dualized” to spatiotemporal transducers, and the physical universe internally “simulated” by its material contents.
...
Deterministic computational and continuum models of reality are recursive in the standard sense; they evolve by recurrent operations on state from a closed set of “rules” or “laws”. Because the laws are invariant and act deterministically on a static discrete array or continuum, there exists neither the room nor the means for optimization, and no room for self-design. The CTMU, on the other hand, is conspansive and telic-recursive; because new state-potentials are constantly being created by evacuation and mutual absorption of coherent objects (syntactic operators) through conspansion, metrical and nomological uncertainty prevail wherever standard recursion is impaired by object sparsity. This amounts to self-generative freedom, hologically providing reality with a “self-simulative scratchpad” on which to compare the aggregate utility of multiple self-configurations for self-optimizative purposes.
Standard recursion is “Markovian” in that when a recursive function is executed, each successive recursion is applied to the result of the preceding one. Telic recursion is more than Markovian; it self-actualizatively coordinates events in light of higher-order relationships or telons that are invariant with respect to overall identity, but may display some degree of polymorphism on lower orders. Once one of these relationships is nucleated by an opportunity for telic recursion, it can become an ingredient of syntax in one or more telic-recursive (global or agent-level) operators or telors and be “carried outward” by inner expansion, i.e. sustained within the operator as it engages in mutual absorption with other operators. Two features of conspansive spacetime, the atemporal homogeneity of IEDs (operator strata) and the possibility of extended superposition, then permit the telon to self-actualize by “intelligently”, i.e. telic-recursively, coordinating events in such a way as to bring about its own emergence (subject to various more or less subtle restrictions involving available freedom, noise and competitive interference from other telons). In any self-contained, self-determinative system, telic recursion is integral to the cosmic, teleo-biological and volitional levels of evolution.
...
Where emergent properties are merely latent properties of the teleo-syntactic medium of emergence, the mysteries of emergent phenomena are reduced to just two: how are emergent properties anticipated in the syntactic structure of their medium of emergence, and why are they not expressed except under specific conditions involving (e.g.) degree of systemic complexity?"
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf
"More complex anticipatory capabilities, which are referred to as mental simulations, permit the prediction and processing of expected stimuli in advance. For example, Hesslow (2002) describes how rats are able to ‘plan in simulation’ and compare alternative paths in a T-maze before acting in practice. This capability can be implemented by means of the above described internal forward models. While internal models typically run on-line with action to generate predictions of an action’s effects, in order to produce mental simulations they can be run off-line, too, i.e., they can ’chain’ multiple short-term predictions and generate lookahead predictions for an arbitrary number of steps. By ’simulating’ multiple possible course of events and comparing their outcomes, and agent can select ’the best’ plan in advance (see fig. 2)."
http://www.istc.cnr.it/doc/1a_0000b_20080724d_anticipation.pdf
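A minimal sketch of the off-line chaining idea: a one-step forward model is rolled out over candidate action sequences and the agent acts on the plan with the best predicted outcome. The T-maze transition table and rewards are toy assumptions, not data from the paper:

```python
# Off-line mental simulation sketch: chain a one-step forward model to score
# candidate plans before acting (cf. Hesslow's rats comparing T-maze arms).

FORWARD_MODEL = {                  # (state, action) -> predicted next state
    ("start", "left"): "left_arm",
    ("start", "right"): "right_arm",
    ("left_arm", "forward"): "food",
    ("right_arm", "forward"): "empty",
}
REWARD = {"food": 1.0, "empty": 0.0}

def simulate(plan, state="start"):
    """Roll the forward model off-line over a whole action sequence."""
    for action in plan:
        state = FORWARD_MODEL.get((state, action), state)
    return REWARD.get(state, 0.0)

plans = [("left", "forward"), ("right", "forward")]
best = max(plans, key=simulate)    # compare predicted outcomes, then act
print(best)                        # -> ('left', 'forward')
```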
Redundancy in Systems Which Entertain a Model of Themselves: Interaction Information and the Self-Organization of Anticipation
"Mutual information among three or more dimensions (μ* = –Q) has been considered as interaction information. However, Krippendorff [1,2] has shown that this measure cannot be interpreted as a unique property of the interactions and has proposed an alternative measure of interaction information based on iterative approximation of maximum entropies. Q can then be considered as a measure of the difference between interaction information and redundancy generated in a model entertained by an observer. I argue that this provides us with a measure of the imprint of a second-order observing system—a model entertained by the system itself—on the underlying information processing. The second-order system communicates meaning hyper-incursively; an observation instantiates this meaning-processing within the information processing. The net results may add to or reduce the prevailing uncertainty. The model is tested empirically for the case where textual organization can be expected to contain intellectual organization in terms of distributions of title words, author names, and cited references."
http://www.mdpi.com/1099-4300/12/1/63/pdf
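A small sketch of three-way interaction information computed from a joint distribution; sign conventions differ across the literature (the paper's Q versus μ* = –Q tracks exactly this ambiguity), and the XOR-style distribution is an illustrative assumption:

```python
# Three-way interaction information from a joint distribution:
# I(X;Y;Z) = I(X;Y) - I(X;Y|Z), under one common sign convention.

from itertools import product
from math import log2
from collections import defaultdict

def mutual_info(p_xy):
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in p_xy.items():
        px[x] += p
        py[y] += p
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in p_xy.items() if p > 0)

def interaction_info(p_xyz):
    p_xy, pz = defaultdict(float), defaultdict(float)
    for (x, y, z), p in p_xyz.items():
        p_xy[(x, y)] += p
        pz[z] += p
    i_xy = mutual_info(dict(p_xy))                       # I(X;Y)
    i_xy_given_z = 0.0                                   # I(X;Y|Z)
    for z0, pz0 in pz.items():
        cond = {(x, y): p / pz0 for (x, y, z), p in p_xyz.items() if z == z0}
        i_xy_given_z += pz0 * mutual_info(cond)
    return i_xy - i_xy_given_z

# Z = X XOR Y with fair independent bits: pairwise independent, jointly dependent.
p = {(x, y, x ^ y): 0.25 for x, y in product([0, 1], repeat=2)}
print(interaction_info(p))   # -> -1.0 bit (pure synergy under this convention)
```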
Telentropy: Uncertainty in the biomatrix
"Teleonics is a systemic approach for the study and management of complex living systems, such as human beings, families, communities, business organisations and even countries and international relationships. The approach and its applications have been described in several publications, quoted in the paper. The units of teleonics are teleons, viz, end-related, autonomous process systems. An indication of malfunction in teleons is a high level of telentropy that can be caused by many factors, among which the most common are the lack of well defined goals, inefficient governance, inappropriate interference and undeclared sharing of subsystems between teleons. These factors, as well as other modes of telentropy generation and transfer are described, together with some suggestions about ways to avoid them."
http://www.informaworld.com/smpp/content~db=all~content=a922786098
"Stressors that challenge homeostasis, often regarded as the most urgent of needs, are the best known. When an organism's competence to maintain homeostasis within a specific range is exceeded, responses are evoked that enable the organism to cope by either removing the stressor or facilitating coexistence with it (Antelman and Caggiula, 1990). While many stressors can evoke dramatic neural and endocrine responses, a more modest or “subclinical” response may be exhibited in response to milder stimuli. These responses may build on or extend homeostatic mechanisms or they may be more or less tightly linked to homeostatic responses in a hierarchical manner creating a functional continuum. For example, such a hierarchical system was described for thermoregulation in mammals by Satinoff (1978) in which more recently evolved regulatory mechanisms are invoked when more conservative ones are unable to restore balance."
http://icb.oxfordjournals.org/content/42/3/508.full
"We have developed the proposal by Satinoff (1978) of a parallel hierarchical system, parallel in that each effector could be assigned to its own controller, and hierarchical in that some controllers have a greater capacity to influence thermoregulation than others, to include subsystem controllers responsible for the autoregulation of elements such as scrotal and brain temperature (see Mitchell and Laburn, 1997). Thus, if autoregulation fails or is overwhelmed, then a higher-ranking system can be invoked to regulate the temperature of the subsystem by regulating the whole system containing it (see Fig. 5)."
http://tinyurl.com/4apgpuu
"The bulk of theoretical and empirical work in the neurobiology of emotion indicates that isotelesis—the principle that any one function is served by several structures and processes—applies to emotion as it applies to thermoregulation, for example (Satinoff, 1982)...In light of the preceding discussion, it is quite clear that the processes that emerge in emotion are governed not only by isotelesis, but by the principle of polytelesis as well. The first principle holds that many functions, especially the important ones, are served by a number of redundant systems, whereas the second holds that many systems serve more than one function. There are very few organic functions that are served uniquely by one and only one process, structure, or organ. Similarly, there are very few processes, structures, or organs that serve one and only one purpose. Language, too, is characterized by the isotelic and polytelic principles; there are many words for each meaning and most words have more than one meaning. The two principles apply equally to a variety of other biological, behavioral, and social phenomena. Thus, there is no contradiction between the vascular and the communicative functions of facial efference; the systems that serve these functions are both isotelic and polytelic."
http://tinyurl.com/4dt4gqs
"In other words, telesis is a kind of “pre-spacetime” from which time and space, cognition and information, state-transitional syntax and state, have not yet separately emerged. Once bound in a primitive infocognitive form that drives emergence by generating “relievable stress” between its generalized spatial and temporal components - i.e., between state and state-transition syntax – telesis continues to be refined into new infocognitive configurations, i.e. new states and new arrangements of state-transition syntax, in order to relieve the stress between syntax and state through telic recursion (which it can never fully do, owing to the contingencies inevitably resulting from independent telic recursion on the parts of localized subsystems). As far as concerns the primitive telic-recursive infocognitive MU form itself, it does not “emerge” at all except intrinsically; it has no “external” existence except as one of the myriad possibilities that naturally exist in an unbounded realm of zero constraint.
Telic recursion occurs in two stages, primary and secondary (global and local). In the primary stage, universal (distributed) laws are formed in juxtaposition with the initial distribution of matter and energy, while the secondary stage consists of material and geometric state-transitions expressed in terms of the primary stage. That is, where universal laws are syntactic and the initial mass-energy distribution is the initial state of spacetime, secondary transitions are derived from the initial state by rules of syntax, including the laws of physics, plus telic recursion. The primary stage is associated with the global telor, reality as a whole; the secondary stage, with internal telors (“agent-level” observer-participants). Because there is a sense in which primary and secondary telic recursion can be regarded as “simultaneous”, local telors can be said to constantly “create the universe” by channeling and actualizing generalized utility within it."
http://www.megafoundation.org/CTMU/Articles/Langan_CTMU_092902.pdf
"The notion of classifying topos is part of a trend, begun by Lawvere, of viewing a mathematical theory as a category with suitable exactness properties and which contains a “generic model”, and a model of the theory as a functor which preserves those properties. This is described in more detail at internal logic and type theory, but here are some simple examples to give the flavor. "
http://ncatlab.org/nlab/show/classifying+topos
Space and time in the context of equilibrium-point theory
"Advances to the equilibrium-point (EP) theory and solutions to several classical problems of action and perception are suggested and discussed. Among them are (1) the posture–movement problem of how movements away from a stable posture can be made without evoking resistance of posture-stabilizing mechanisms resulting from intrinsic muscle and reflex properties; (2) the problem of kinesthesia or why our sense of limb position is fairly accurate despite ambiguous positional information delivered by proprioceptive and cutaneous signals; (3) the redundancy problems in the control of multiple muscles and degrees of freedom. Central to the EP hypothesis is the notion that there are specific neural structures that represent spatial frames of reference (FRs) selected by the brain in a task-specific way from a set of available FRs. The brain is also able to translate or/and rotate the selected FRs by modifying their major attributes—the origin, metrics, and orientation—and thus substantially influence, in a feed-forward manner, action and perception. The brain does not directly solve redundancy problems: it only limits the amount of redundancy by predetermining where, in spatial coordinates, a task-specific action should emerge and allows all motor elements, including the environment, to interact to deliver a unique action, thus solving the redundancy problem (natural selection of action). The EP theory predicts the existence of specific neurons associated with the control of different attributes of FRs and explains the role of mirror neurons in the inferior frontal gyrus and place cells in the hippocampus."
http://onlinelibrary.wiley.com/doi/10.1002/wcs.108/full
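A toy sketch of the equilibrium-point idea: the controller only shifts the resting value (λ) of a spring-like muscle-reflex system, and the movement emerges from the intrinsic dynamics settling at the new equilibrium. Stiffness, damping and the ramp are illustrative assumptions, not the paper's model:

```python
# Toy lambda-style sketch: the brain specifies only a shift of the equilibrium
# point; the limb trajectory emerges from spring-like muscle/reflex dynamics.

def simulate_ep(lambda_target, steps=2000, dt=0.001, k=50.0, b=10.0, m=1.0):
    x, v = 0.0, 0.0                          # joint position and velocity
    lam = 0.0                                # current equilibrium point
    for _ in range(steps):
        lam = min(lambda_target, lam + 2.0 * dt * lambda_target)  # ramped shift of lambda
        force = -k * (x - lam) - b * v       # spring-like muscle force plus reflex damping
        v += (force / m) * dt
        x += v * dt
    return x

print(simulate_ep(1.0))   # the limb settles close to the new equilibrium, ~1.0
```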
Scoring Rules, Generalized Entropy, and Utility Maximization
Information measures arise in many disciplines, including forecasting (where scoring rules are used to provide incentives for probability estimation), signal processing (where information gain is measured in physical units of relative entropy), decision analysis (where new information can lead to improved decisions), and finance (where investors optimize portfolios based on their private information and risk preferences). In this paper, we generalize the two most commonly used parametric families of scoring rules and demonstrate their relation to well-known generalized entropies and utility functions, shedding new light on the characteristics of alternative scoring rules as well as duality relationships between utility maximization and entropy minimization.
http://faculty.fuqua.duke.edu/~rnau/scoring_rules_and_generalized_entropy.pdf
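A minimal numerical sketch of the scoring-rule/entropy duality mentioned in the abstract: under truthful forecasting, the expected logarithmic score is the negative Shannon entropy, and the expected quadratic (Brier-style) score is one minus the quadratic (Gini) entropy. The forecast itself is an illustrative assumption:

```python
# Two standard proper scoring rules and their expected values under a
# truthful forecast, which recover (negative) generalized entropies.

from math import log2

def log_score(p, outcome):
    return log2(p[outcome])

def quadratic_score(p, outcome):
    return 2 * p[outcome] - sum(q * q for q in p)

def expected_score(p, score):
    # Truthful forecaster: outcomes occur with the forecast probabilities.
    return sum(p[i] * score(p, i) for i in range(len(p)))

p = [0.7, 0.2, 0.1]                        # illustrative forecast
print(expected_score(p, log_score))        # = -H(p), the negative Shannon entropy
print(expected_score(p, quadratic_score))  # = sum(p_i^2) = 1 - quadratic (Gini) entropy
```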
A conversion between utility and information
"Rewards typically express desirabilities or preferences over a set of alternatives. Here we propose that rewards can be defined for any probability distribution based on three desiderata, namely that rewards should be real-valued, additive and order-preserving, where the latter implies that more probable events should also be more desirable. Our main result states that rewards are then uniquely determined by the negative information content. To analyze stochastic processes, we define the utility of a realization as its reward rate. Under this interpretation, we show that the expected utility of a stochastic process is its negative entropy rate. Furthermore, we apply our results to analyze agent-environment interactions. We show that the expected utility that will actually be achieved by the agent is given by the negative cross-entropy from the input-output (I/O) distribution of the coupled interaction system and the agent's I/O distribution. Thus, our results allow for an information-theoretic interpretation of the notion of utility and the characterization of agent-environment interactions in terms of entropy dynamics."
http://arxiv.org/abs/0911.5106
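A small sketch of the conversion stated above: reward as log-probability (negative information content), expected utility under the true distribution as negative entropy, and the utility an agent actually achieves as the negative cross-entropy between the coupled-system distribution and the agent's own I/O distribution. The two distributions are illustrative assumptions:

```python
# Utility/information conversion sketch: reward(x) = log p(x), so expected
# utility is a negative (cross-)entropy.

from math import log2

def reward(p_x):
    return log2(p_x)                       # negative information content

def expected_utility(true_p, agent_p):
    # Negative cross-entropy; equals -H(true) when agent_p == true_p.
    return sum(t * reward(a) for t, a in zip(true_p, agent_p))

true_p  = [0.5, 0.25, 0.25]                # coupled agent-environment I/O distribution
agent_p = [0.4, 0.4, 0.2]                  # agent's own I/O distribution
print(expected_utility(true_p, true_p))    # -H(true) = -1.5 bits
print(expected_utility(true_p, agent_p))   # always <= the value above
```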
"In 1993, Gerard’t Hooft published Dimensional Reduction in Quantum Gravity, in
which he made the first direct comparison of theories of quantum gravity to holograms:
We would like to advocate here a somewhat extreme point of view. We suspect that
there simply are not more degrees of freedom to talk about than the ones one can draw
on a surface, as given by eq. (3). The situation can be compared with a hologram of a
3-dimensional image on a 2-dimensional surface."
http://physics.ucsc.edu/~jeff/holographic.pdf
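Eq. (3) of ’t Hooft's paper is not reproduced here; as a back-of-the-envelope illustration of the area scaling he describes, the sketch below uses the textbook Bekenstein-Hawking form of the holographic bound, S ≤ A/4 in Planck units:

```python
# Holographic bound, textbook form: the information in a region is capped by
# its boundary area in Planck units, S <= A / (4 * l_p^2) nats.

from math import pi, log

HBAR = 1.054571817e-34      # J s
G    = 6.67430e-11          # m^3 kg^-1 s^-2
C    = 2.99792458e8         # m / s
L_P2 = HBAR * G / C**3      # Planck length squared, in m^2

def holographic_bound_bits(radius_m):
    area = 4 * pi * radius_m ** 2
    return area / (4 * L_P2) / log(2)   # convert nats to bits

print(f"{holographic_bound_bits(1.0):.3e} bits for a 1-metre sphere")
```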
"On the surface, holographic reduced representations are utterly different from logical unification. But I can't help feeling that, at a deeper level, they are closely related. And there is a categorical formulation of logical unification, described in the first reference below, by Rydeheard and Burstall. They say their formulation is derived from an observation by Goguen. So it may be based (I'm not an expert) on the ideas in the second reference:
David Rydeheard and Rod Burstall, Computational Category Theory. Prentice Hall, 1988. See Chapter 8.
http://www.cs.man.ac.uk/~david/categories/book/book.pdf
Joseph Goguen, What is unification? A categorical view of substitution, equation and solution. In Resolution of Equations in Algebraic Structures, 1: Algebraic Techniques, Academic Press, 1989.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.9221
So, can we extend that categorical formulation to holographic reduced representations? I don't know. But if we could, we would better understand how they are related to logic programming, and we might gain new tools for analogical reasoning. It's worth trying."
http://drdobbs.com/blogs/228700165#unihrr
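For readers unfamiliar with holographic reduced representations, a minimal sketch of Plate's binding/unbinding scheme: role/filler pairs are bound by circular convolution and approximately unbound by circular correlation, so a fixed-width vector holds a superposed, compressed structure. Dimensions and the random seed are illustrative assumptions:

```python
# Holographic reduced representations (Plate): bind by circular convolution,
# approximately unbind by circular correlation, both via the FFT.

import numpy as np

def bind(a, b):
    # Circular convolution.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, cue):
    # Circular correlation = convolution with the involution of the cue.
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(cue))))

rng = np.random.default_rng(0)
d = 1024
role, filler, other = (rng.normal(0, 1 / np.sqrt(d), d) for _ in range(3))

trace = bind(role, filler) + bind(other, other)   # superpose two bound pairs
recovered = unbind(trace, role)                   # noisy reconstruction of `filler`

print(np.dot(recovered, filler))   # similarity with the true filler: close to 1
print(np.dot(recovered, other))    # similarity with an unrelated vector: near 0
```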
Utility, rationality and beyond: from behavioral finance to informational finance
http://books.google.com/books?id=_LFdBxG9w-kC&lr=&source=gbs_navlinks_s
"In the era of knowledge-driven economy, technological innovation is a key character. The thesis describes the connotation, purpose and core topics of service science through implementing knowledge management, and finally put forward the suggestion of improving the technological innovation capacity through knowledge management and service science."
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5301012
"The essential difference is that in a knowledge economy, knowledge is a product, while in a knowledge-based economy, knowledge is a tool. This difference is not yet well distinguished in the subject matter literature. They both are strongly interdisciplinary, involving economists, computer scientists, engineers, mathematicians, chemists and physicists, as well as cognitivists, psychologists and sociologists.
Various observers describe today's global economy as one in transition to a "knowledge economy," as an extension of an "information society." The transition requires that the rules and practices that determined success in the industrial economy need rewriting in an interconnected, globalized economy where knowledge resources such as know-how and expertise are as critical as other economic resources. According to analysts of the "knowledge economy," these rules need to be rewritten at the levels of firms and industries in terms of knowledge management and at the level of public policy as knowledge policy or knowledge-related policy."
http://en.wikipedia.org/wiki/Knowledge_economy
"Baumol's cost disease (also known as the Baumol Effect) is a phenomenon described by William J. Baumol and William G. Bowen in the 1960s. It involves a rise of salaries in jobs that have experienced no increase of labor productivity in response to rising salaries in other jobs which did experience such labor productivity growth. This goes against the theory in classical economics that wages are always closely tied to labor productivity changes.
The rise of wages in jobs without productivity gains is caused by the necessity to compete for employees with jobs that did experience gains and hence can naturally pay higher salaries, just as classical economics predicts. For instance, if the banking industry pays its bankers 19th century style salaries, the bankers may decide to quit and get a job at an automobile factory where salaries are commensurate to high labor productivity. Hence, bankers' salaries are increased not due to labor productivity increases in the banking industry, but rather due to productivity and wage increases in other industries."
http://en.wikipedia.org/wiki/Baumol's_cost_disease
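A toy two-sector illustration of the mechanism just described: productivity grows only in manufacturing, but wages in both sectors track the manufacturing wage, so the stagnant service sector's unit labor cost keeps rising. Growth rates and starting values are illustrative assumptions:

```python
# Toy Baumol effect: productivity grows only in manufacturing, yet services
# must pay the going wage, so their unit labor cost drifts upward.

mfg_productivity, svc_productivity = 1.0, 1.0   # output per worker-hour
wage = 1.0                                      # common wage, set by the productive sector

for year in range(30):
    mfg_productivity *= 1.03                    # 3% annual productivity growth
    wage *= 1.03                                # wages follow manufacturing productivity
    # svc_productivity stays flat.

print(f"manufacturing unit labor cost: {wage / mfg_productivity:.2f}")  # ~1.00, unchanged
print(f"service unit labor cost:       {wage / svc_productivity:.2f}")  # has risen ~2.4x
```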
"Ever since Harvard sociologist Daniel Bell published his book, The Coming of Post-Industrial Society, in 1973, there has been a strong sense of inevitability about the rise and dominance of services in the world’s advanced economies. And, in general, people have concluded that this is a good thing. But there’s danger lurking in services. At this point in their evolution, they’re less efficient and productive than modern manufacturing and farming. Also, while manufacturing took over 200 years before its “quality revolution,” services have only been dominant for a few decades and have yet to figure out quality. These issues could mean big trouble not just for developed countries but for the entire global economy.
Some of today’s top thinkers about services are sounding alarms. Robert Morris, head of service research at IBM Research, says that unless services become more scientific and technical, economic growth could stagnate. Henry Chesbrough, the UC Berkeley professor who coined the term “open innovation,” says this is a major issue facing the world economy long term. He calls it the “commodity services trap.”
Underpinning their thinking is an economic theory called Baumol’s disease. The idea is that as services become an ever larger piece of the economy, they consume an ever larger share of the human and capital resources–but don’t create enough value, in return. Think of an electricity generation plant that consumes more energy than it produces. “Productivity and quality of services isn’t growing comparably to other sectors, including manufacturing and agriculture, so the danger is that it swamps the economy–employment, the share of GDP, and what people have to pay for,” says Morris. “The world economy could stall.”
Developed nations are particularly vulnerable to Baumol's disease. In Europe and the United States, a lot of progress has been made in the past decade in improving the efficiency of IT services, but other service industries are frightfully inefficient and ineffective: think government, health care and education.
So while adding jobs is vitally important to countries that are still reeling from the economic meltdown, if the jobs that are added are commodity service jobs, long term, it’s adding to the inefficiency of the economy. That’s why governments need to invest aggressively in science and education and technology to improve services in spite of their budget deficits.
One area that deserves investment is service science. It's the academic discipline that IBM (with help from Chesbrough) began promoting in 2002. A multidisciplinary approach, service science addresses Baumol’s disease head on by using the ideas and skills of computer science, engineering, social science and business management to improve the productivity, quality and innovation in services. Many of the techniques that have already been developed in computer, mathematical and information sciences can be directly applied to helping services. But new breakthroughs and better interactions with behavioral and business sciences are also essential, because services are, and always will be, people-centric businesses.
Today, more than 450 universities worldwide offer some sort of service science program. But much more can and should be done to avoid falling into the commodity services trap. Otherwise, the post-industrial society could take on a post-apocalyptic tinge.
http://asmarterplanet.com/blog/category/smarter-systems
Innovation in services: a review of the debate and a research agenda
http://www.slideshare.net/rooteranalysis/articulo-3-innovacionservicios
The service paradox and endogenous economic growth
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=358350
"In this blog we will explore how recent ideas in cognitive science can be used to develop new products, services and organizations that enhance how we think and feel.
We want exciting, beautiful, easy-to-use things. We ask our artifacts (anything that is designed) to make us smarter, reflect our values, invoke the respect and admiration of others and involve our friends and family when appropriate. We want all of this on top of whatever it is they are supposed to do.
The basic functionality of any artifact is now table stakes. What designers must do is go beyond the basics and deliver the aesthetic, emotional, experiential, profound and even transformational. We must make the ordinary extraordinary in an authentic way. In many respects, that has always been the goal of design, and exceptional designers achieve it (somehow) every day.
But it goes beyond that.
There are things that we design that fail to achieve their intended purpose because they don’t reflect sufficient understanding of how the mind works. And the consequence can be dire. Take for example weight loss or chronic disease management programs that are designed to change our behaviors but fail to do so. The cost of that design failure is very high.
Over the last two decades there has been an explosion in what we know about how the mind works. Significant advances in the neuro and cognitive sciences and a wide range of emerging high-potential fields including neuroeconomics, cognitive ergonomics, behavioral finance, augmented cognition and others promise to provide the principles, models and tools needed to systematically design artifacts that not only support cognition but actually make it better.
Cognitive design seeks to paternalistically harness these insights and translate them into improved products, services, change programs, workflow, organizational designs, workspaces and any other artifact that impacts how we think and feel. Cognitive design, like human factors, interactive design and most other modern design movements looks to put the latest findings from the human sciences to work. But it goes further than that.
It goes further by insisting that the scope and orientation of the design problem itself must change. The central idea is in fact somewhat radical:
We need a new design stance that says we are not just designing the functionality of the artifact but we are also designing the mental states of the user.
In this sense the mental functioning and states of the end user are every bit as much a part of the design problem and specification as are the more traditional considerations of feature, function and form. We seek to break down the distinction between an artifact and the user’s reaction to it by including both as the “thing to be designed”. Now it is feature, function, form and mental state. The fact that we have the science and soon the practice to do this is both exciting and worrisome.
We will cover both the promise and the peril (ethical considerations) of cognitive design in this blog."
http://newvaluestreams.com/wordpress/?page_id=2
"I am hoping soon to start work on the final draft of a book whose working title has been Cognitive Design. This book is about the design and standardization of sets of things – such as the letters of the alphabet, the fifty United States, the notes in the musical octave, the different grades you can give your students, or the stops on a subway line. Every person deals with one or another of these sets of things on a daily basis, and for many people they hold a sort of fascination. We sometimes forget that societies, cultures, and the human beings within them – not nature – designed these sets, chose labels for their members, and made them into standards. Many people have a sense that these different sets have something in common – but most would be hard-pressed to say what that is. My book lays out the answer. I submitted the most recent draft of it as my Ph.D. dissertation at Rutgers University in April 2005."
http://www.ianwatson.org/contrast_set_design_overview.html
Cognitive Design Features on Traffic Signs
http://www.engineeringletters.com/issues_v14/issue_1/EL_14_1_3.pdf
6 cognitive design principles (simplicity, consistency, organization, natural order, clarity, and attractiveness)
http://www.ncbi.nlm.nih.gov/pubmed/18359412
"We are a multidisciplinary research center devoted to the study of medical decision-making, cognitive foundations of health behaviors and the effective use of computer-based information technologies.Our research is deeply rooted in theories and methods of cognitive science, with a strong focus on the analysis of medical error, development of models of decision-making, and design and evaluation of effective human-computer interactions. These studies are guided by a concern for improving the performance of individuals and teams in the health care system."
http://www.uthouston.edu/cognitive-informatics/
Cognitive Informatics: Exploring the theoretical foundations for Natural Intelligence, Neural Informatics, Autonomic Computing, and Agent Systems:
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=978138C997A258571CD8F3F58A44D558?doi=10.1.1.89.2133&rep=rep1&type=pdf