Top-down causation

January 9, 2013

Quotes from George Ellis’ FQXi 2012 Contest Essay (2nd prize winner)

http://www.fqxi.org/community/forum/topic/1337#post_56189

The degree of complexity that can arise by bottom-up causation alone is strictly limited. Sand piles, the game of life, bird flocks, or any dynamics governed by a local rule [28] do not compare in complexity with a single cell or an animal body. The same is true in physics: spontaneously broken symmetry is powerful [16], but not as powerful as symmetry breaking that is guided top-down to create ordered structures (such as brains and computers). Some kind of coordination of effects is needed for such complexity to emerge.

I suggest top-down effects from these [upper] levels is the key to the rise of genuine complexity (such as computers and human beings)

Hypothesis: bottom up emergence by itself is strictly limited in terms of the complexity it can give rise to. Emergence of genuine complexity is characterised by a reversal of information flow from bottom up to top down [27].

But can we really rule out the possibility that this ‘kind of coordination of effects’ itself spontaneously emerges in an artificial system such as a cellular automaton? Would it not be possible to observe this type of high, ‘biological’ complexity emerging in a simulation of an artificial system like Wolfram’s rule 110 automaton, provided we are willing to wait for a sufficiently long (likely astronomical) time?
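For concreteness, here is a minimal Python sketch of the purely local, bottom-up dynamics in question, namely elementary cellular automaton rule 110; the lattice width, number of steps and single-cell initial condition are arbitrary illustrative choices:

RULE = 110
# next state of a cell, given its (left, centre, right) neighbourhood
rule_table = {(a, b, c): (RULE >> (4 * a + 2 * b + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(row):
    """One synchronous update of the whole row (cyclic boundary)."""
    n = len(row)
    return [rule_table[(row[(i - 1) % n], row[i], row[(i + 1) % n])] for i in range(n)]

width, steps = 64, 32
row = [0] * width
row[width // 2] = 1                      # a single black cell in the middle
for _ in range(steps):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)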

Anthropic Principle

October 30, 2012

I’ve read about the Anthropic Principle in several books and articles, but the explanations are often obscure or ambiguous. A quick, clarifying introduction to the subject is given, in my opinion, by Richard Dawkins in this video:

Lee Smolin’s cosmological natural selection conjecture — the idea that universes reproduce via black holes and are subject to natural selection — is also briefly mentioned. The relevant paper would be ‘Did the universe evolve?’, Smolin 1992.

In ‘A Brief History of Time’, Stephen Hawking writes:

‘There are either many different universes or many different regions of a single universe, each with its own initial configuration and, perhaps, with its own set of laws of science. In most of these universes the conditions would not be right for the development of complicated organisms; only in the few universes that are like ours would intelligent beings develop and ask the question: “Why is the universe the way we see it?” The answer is then simple: if it had been different, we would not be here.’

Wolfram-Hilbert sequence in process algebra

September 3, 2012

Several techniques for the experimental investigation of the computing power of various formal models, from Cellular Automata to Turing Machines, have been proposed by Stephen Wolfram in his massive ‘New Kind of Science’ (NKS) effort. Various visual complexity indicators can be used for revealing the emergent ‘internal shapes’ of computations and for nicely exposing their constant, periodic, nested/fractal, pseudo-random, and even more sophisticated dynamics.

Having focused my research activity for over two decades on process algebra and related topics (specifically on the LOTOS specification language, whose definition has been influenced mainly by Milner’s CCS and Hoare’s CSP), when I started looking at the topic of emergence in computation I found it natural to begin by investigating visual complexity indicators for process algebraic languages.

In process algebra the behaviour of a system — usually a dynamic set of interacting, concurrent entities, or ‘processes’, e.g. a computer communication protocol — is described by an algebraic ‘behaviour expression’, or ‘bex’, formed by special operators; these are meant to express action, sequentiality, parallelism, synchronization, communication, choice, and so on.

A process algebra always possesses a well-defined syntax and semantics. The Structural Operational Semantics of a process algebra formally defines, via axioms and inference rules, the transition relation:

bex — action —> bex’

for any syntactically correct bex. Note that, for the same bex, we may well have

bex — otherAction —> otherBex’.
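
As a concrete (if toy) illustration, the following Python sketch derives all transitions of a bex for an invented fragment with only action prefix and choice; the encoding and the example bex are my own, not the actual LOTOS semantics:

# Toy fragment with action prefix ('a; P') and choice ('P [] Q'); not LOTOS itself.
# transitions() plays the role of the SOS rules: given a bex, it returns all
# pairs (action, bex') such that  bex --action--> bex'.

STOP = ('stop',)                         # the inactive process

def prefix(action, bex):                 # a; P
    return ('prefix', action, bex)

def choice(p, q):                        # P [] Q
    return ('choice', p, q)

def transitions(bex):
    kind = bex[0]
    if kind == 'stop':
        return []                        # stop offers no transitions
    if kind == 'prefix':                 # (a; P) --a--> P
        return [(bex[1], bex[2])]
    if kind == 'choice':                 # the transitions of P plus those of Q
        return transitions(bex[1]) + transitions(bex[2])
    raise ValueError('unknown operator')

# A branching bex:  (a; stop) [] (b; stop)  has two outgoing transitions.
bex = choice(prefix('a', STOP), prefix('b', STOP))
for action, after in transitions(bex):
    print(f'bex --{action}--> {after}')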

Indeed, in general, behaviors branch, like trees. But when a specification is deterministic, and we drop the action labels of the transition relation, the behavior of the specified system reduces to a sequence S = (bex1, bex2, …, bexN, …) of expressions. If we then code the different behavioral operators by square cells of different color, or different grey level, each bex becomes a finite, 1-D array of such cells; by stacking the cell arrays of all the expressions in sequence S we obtain a pictorial representation of the computation similar to the diagrams used in NKS for illustrating cellular automata behaviors.
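
A minimal Python sketch of this pictorial encoding might look as follows; the operator alphabet, the shade mapping and the toy run are invented for illustration, and are not the coding used in the paper:

# Each bex is taken here to be a plain string of behavioural-operator symbols;
# each symbol is mapped to a cell shade (ASCII characters standing in for grey
# levels), and the rows of all bexes in S are stacked, top to bottom, as in an
# NKS cellular-automaton diagram.

SHADE = {';': '#',    # action prefix
         '+': '+',    # choice
         '|': '=',    # parallel composition
         '0': '.',    # inactive process (stop)
         'a': ' ',    # an action name
         'b': ' '}    # another action name

def render(sequence_of_bexes):
    """Print one row of shaded cells per bex, top to bottom, left-aligned."""
    width = max(len(bex) for bex in sequence_of_bexes)
    for bex in sequence_of_bexes:
        print(''.join(SHADE.get(symbol, '?') for symbol in bex).ljust(width))

# A toy deterministic run: each bex is obtained from the previous one by one step.
S = ['a;b;0', 'b;0', '0']
render(S)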

In my 2007 JLAP paper (see list of publications) I have studied the power of some fundamental process algebraic operators in this light. In particular, I have shown that pseudo-random diagrams can be obtained by a subset of process algebraic operators that is provably non-universal, thus providing an argument against Wolfram’s conjecture that pseudo-randomness is an indicator of computational universality.

The cited paper also includes a process algebraic specification that simulates Elementary Cellular Automaton 110 (which is universal).

The picture below shows the grey-level-coded deterministic computation of a process algebraic specification of the Wolfram-Hilbert pseudo-random numeric sequence:

a[0] = 1

a[n+1] = 3/2 * a[n]          if a[n] is even
a[n+1] = 3/2 * (a[n] + 1)    if a[n] is odd

(The pseudo-randomness of the sequence becomes apparent by checking, for example, the parities of its elements, which do not seem to follow any definite pattern.)
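
For the curious reader, here is a minimal Python sketch that generates the first terms of the sequence defined above and lists their parities (the number of terms printed is an arbitrary choice):

def next_term(a):
    return 3 * a // 2 if a % 2 == 0 else 3 * (a + 1) // 2

a, terms = 1, []
for _ in range(15):
    terms.append(a)
    a = next_term(a)

print(terms)                                             # 1, 3, 6, 9, 15, 24, ...
print(['even' if t % 2 == 0 else 'odd' for t in terms])  # no evident pattern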

One can retrieve the a[i]’s by checking the heights of the growing triangular shapes appearing at the right edge of the diagram.

Tommaso Bolognesi

August 30, 2012

I am a senior researcher at ISTI, an Institute of CNR located in the CNR Pisa Area. I am a member of the FM&&T group. Check the links at the top of the page for further information.


It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time … So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequerboard with all its apparent complexities.

Richard Feynman, 1964.