One of the attractive features of the algorithmic, network-based spacetime ideas discussed in this Blog and in the NKS book is that they immediately suggest a range of computing experiments that any curious, scientifically oriented mind can carry out rather easily (maybe using Mathematica).

The requirement of full abstraction — that everything should emerge from this universal computation without plugging in features such as known physical constants — appears to offer an accessible entry point to the fascinating quest for the ultimate laws of nature, for scientists as well as for amateurs unfamiliar with current quantum gravity proposals.

In this respect, the massive exploration, by experiment and without preconceptions, of the ‘computational universe’, and of undirected or directed graph-rewriting systems in particular (for space and spacetime, respectively), might still be a sensible item on the agenda.

However, the work that I have myself carried out in the last few years (https://tommasobolognesi.wordpress.com/publications/), in particular on trivalent networks and on algorithmic causal sets, has increasingly convinced me that brute-force ‘universe hunting’ will not progress significantly unless the following issues are addressed.

1. Closer contact with the Causal Set Programme.

Since the late 1980s, Bombelli, Sorkin, Lee, Rideout, Reid, Henson, Dowker and others have explored *stochastic* causal sets, and have collected a number of results from which research on algorithmic, deterministic causal sets (or ‘networks’) can greatly benefit.

For example, consider the fundamental requirement of Lorentz invariance, whose transposition from the continuous to the discrete setting is anything but trivial.

The NKS take on this is based on assimilating the different inertial frames in continuous spacetime to the different total orders of the rewrite events that build up a discrete spacetime via a string rewrite system. The idea is quite appealing, in light of the existence of ‘confluent’ rewrite systems for which the final partial order is unique and subsumes all the different total-order realisations (‘causal invariance’).

Nevertheless, a 2006 paper by Bombelli, Henson and Sorkin [‘Discreteness without symmetry breaking: a theorem’, https://arxiv.org/abs/gr-qc/0605006] proves that a directed graph aiming at ‘Lorentzianity’ – intended now as one that does not support the identification of a preferred reference frame (or direction) – cannot have finite-degree nodes. (One refers here to the transitively *reduced*, Hasse graph, not to the transitive closure.) In other words, the node degrees of algorithmic causets *must* grow unbounded. I suspect that meeting this requirement may involve substantial rethinking of deterministic causet construction techniques…
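The flavour of this requirement can be appreciated with a small experiment of the kind advocated at the start of this post. The sketch below (my own illustration, not taken from the cited paper) sprinkles points uniformly into a causal diamond of 1+1 Minkowski space, in light-cone coordinates, and measures the mean node degree of the transitively *reduced* (Hasse) graph; rerunning with growing sprinkling density shows the link degree creeping upward rather than saturating.

```python
import random

def sprinkle(n, seed=0):
    # Uniform sprinkling into a causal diamond of 1+1 Minkowski space,
    # in light-cone coordinates (u, v): x precedes y iff u_x < u_y and v_x < v_y.
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]

def causal_pairs(pts):
    # The full causal relation (transitive closure) as a set of index pairs.
    return {(i, j) for i in range(len(pts)) for j in range(len(pts))
            if pts[i][0] < pts[j][0] and pts[i][1] < pts[j][1]}

def links(pts, rel):
    # Transitive reduction (Hasse graph): keep (i, j) only if no third
    # element k lies causally strictly between i and j.
    return {(i, j) for (i, j) in rel
            if not any((i, k) in rel and (k, j) in rel for k in range(len(pts)))}

def mean_link_degree(n, seed=0):
    pts = sprinkle(n, seed)
    ls = links(pts, causal_pairs(pts))
    return 2 * len(ls) / n   # each link contributes to the degree of two nodes

for n in (50, 150, 300):
    print(n, round(mean_link_degree(n), 2))
```

The growth is slow (roughly logarithmic in the density, for this 2D toy), but it never stops: exactly the behaviour that finite-degree deterministic constructions fail to reproduce.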

2. New paradigms promoting multi-level hierarchies of emergence

The manifesto of the computational universe conjecture can be summarized in one sentence: Complexity in Nature = Emergence in deterministic computation.

And the most effective example of emergence in computation is probably represented by the digital particles of Elementary Cellular Automaton (ECA) 110.

However, as far as I know, nobody has ever been able to set up a simple program that exhibits more than one level of emergence. In ECA 110, Level 0 is represented by the Boolean function of three variables that defines local cell behaviour, and Level 1 is represented by the emergent particle interaction rules. No Level 2 emerges in which particle interactions yield a new layer of entities and rules.
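The two levels are easy to exhibit side by side with a minimal simulator (a sketch in Python; the rule-table indexing follows the standard Wolfram numbering, with the neighbourhood read as a 3-bit index):

```python
def eca_step(rule, row):
    # One synchronous update of an elementary CA with periodic boundary.
    # Level 0: the 8-entry Boolean rule table, encoded in the bits of `rule`.
    n = len(row)
    return [(rule >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, row, steps):
    rows = [row]
    for _ in range(steps):
        rows.append(eca_step(rule, rows[-1]))
    return rows

# Level 1 (the particles and their interaction rules) is visible only in the
# space-time diagram, not in the rule table itself.
width = 64
row0 = [0] * width
row0[width // 2] = 1
diagram = run(110, row0, 40)
for r in diagram[:5]:
    print("".join(".#"[c] for c in r))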

G. Ellis has observed that simple programs (such as those considered in the NKS book) cannot boost complexity up to a level akin to, say, that of the biosphere: that would require some radically new concept. One suggested possibility is ‘backward causation’.

Another (possibly related) ingredient that might boost complexity is the appearance, as the computation unfolds, of elementary ‘agents’ endowed with some form of autonomy or initiative, able to interfere with the initial rules of the game – the very rules from which they have emerged! This resonates with the idea of self-modifying code.

How can one incorporate these or similar concepts (with notions of ‘observation’/‘consciousness’ pulling at the top) in simple models of computation, or promote their emergence? A very hard question indeed! But I currently see no other way out of the dead end in which the more ‘traditional’ NKS-type experiments on discrete spacetime are stuck.

]]>

Notions of observation play a central role in Physics, but also in Theoretical Computer Science, notably in Process Algebra. The importance in Physics of the mutual influence between observer and observed phenomena is well recognized, and yet the properties of the former are in general only fuzzily specified, in spite (or perhaps because) of the fact that they may include sophisticated cognitive and operative skills.

Our purpose is to transpose the observer/observed interplay into a simple formal context that greatly facilitates their being treated on an equal footing. We plan to identify observers and observed entities in the context of discrete models of spacetime – in particular, algorithmic causal sets (causets). We shall define simple forms of observer, with associated synchronisation/observation mechanisms, and detect their emergence in dynamic causets. The search can be automated by model checkers using spatial or spatio-temporal logics.

We shall focus on the internal (frog’s) view, as opposed to the external (bird’s) view, by assimilating frogs to simple causet substructures, e.g. fattened causal chains provided with some persistent identity (akin to ‘digital particles’ in cellular automata), that synchronise and communicate with their environment. We shall investigate what ‘observation’ means to these entities and how they might ‘subjectively’ picture their neighborhood or remote environment. These partial observations may be reminiscent, in spirit, of stroboscopic sampling or Poincaré maps.

The emergent features of various models of computation have been widely investigated, notably for the spatio-temporal diagrams of cellular automata, but always under an external viewpoint. Our aim is to restart this analysis under the radically different perspective of an internal (proto-)observer, with a focus on stochastic and deterministic, labelled or unlabelled causets. Simply stated, our goal is to discover qualitative differences between the bird’s and the frog’s view. The issue becomes more challenging when considering different classes of observers of the same phenomena. Identifying their different viewpoints leads to considerations on invariance, akin to Lorentz covariance.

Suitable notions of abstraction shall be considered, since an observer may be blind to the tiniest causet details but sensitive to a coarse-grained version of it. Additionally, ‘smart’ observers may be required to tell apart regular from random-like causet regions.

In our investigation of observations within causets, we will adopt techniques that proved successful in theoretical computer science and process calculi, such as partially ordered Event Structures and category-theoretical models. Coalgebras, in particular, provide flexible notions of observation. Causality and locality are achieved using the emerging nominal computation model, which endows observers with primitive forms of naming, whose power is balanced by a finite-memory principle. We consider it an interesting additional research question to explore the impact of such basic computational constraints on algorithmic causal sets.

]]>

In this proposal we are particularly interested in event concepts from process algebras such as Milner’s Calculus of Communicating Systems (CCS) and Hoare’s Communicating Sequential Processes (CSP), and related languages (e.g. LOTOS), since their high abstraction level makes it possible and attractive to investigate the impact of these event notions, and associated constructions, in the apparently remote field of causal sets, intended as discrete models of spacetime.

While, in line with General Relativity, a spacetime event in a causal set is simply a node in a directed, acyclic graph (DAG), in process algebras events represent the building blocks of more elaborate structures called processes. Events may occur ‘internally’ – inside a process – or as part of the interaction with another process; they may be atomic or involve the exchange of data items; they may or may not induce changes in the local states of the interacting parties; they may be organized in temporal/causal patterns and be encapsulated in processes to be used, in turn, as building blocks of more complex event patterns.

In spite of the richer event-related constructions offered by process algebras, the formal, ‘true concurrency semantics’ of the latter maps the syntax of an algebraic specification, no matter how complex, into a DAG, thus providing a common basis for comparing the event patterns arising in the two fields.
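To make this common basis tangible, here is a deliberately minimal sketch (the operator set is reduced to sequence and parallel composition, with no choice or synchronisation; real process algebras are far richer) that maps a term to the causal edges of its event DAG:

```python
# Terms are built from atomic actions (strings), ';' (sequence) and
# '|' (parallel), as nested tuples. Sequencing induces causal edges from
# every maximal event of P to every minimal event of Q; parallel
# composition induces none.

def events(t):
    if isinstance(t, str):
        return {t}
    op, p, q = t
    return events(p) | events(q)

def minimal(t):
    if isinstance(t, str):
        return {t}
    op, p, q = t
    return minimal(p) if op == ';' else minimal(p) | minimal(q)

def maximal(t):
    if isinstance(t, str):
        return {t}
    op, p, q = t
    return maximal(q) if op == ';' else maximal(p) | maximal(q)

def edges(t):
    # Causal edges of the event DAG induced by the term structure.
    if isinstance(t, str):
        return set()
    op, p, q = t
    e = edges(p) | edges(q)
    if op == ';':
        e |= {(x, y) for x in maximal(p) for y in minimal(q)}
    return e

term = ('|', (';', 'a', 'b'), (';', 'c', 'd'))   # (a;b) | (c;d)
print(sorted(edges(term)))   # [('a', 'b'), ('c', 'd')]
```

The resulting DAG of `(a;b) | (c;d)` consists of two independent causal chains: exactly the kind of partially ordered structure one would look for, in emergent form, inside an algorithmic causal set.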

In particular, we are interested in exploring algorithmic causal sets – those obtained by deterministic rather than stochastic procedures – and in verifying the extent to which their emergent properties match the structured behavioral patterns typical of process algebra.

One may argue that an event happens when a trace of its occurrence gets recorded somewhere. In process algebra, event occurrences may indeed affect the local states of the interacting processes. Can we meaningfully separate event threads in algorithmic causal sets? Can we detect the emergence of process-like substructures in them – maybe a subgraph with some kind of boundary? Can such ‘processes’ be stateful too? Can we distinguish between their internal and external events? More generally, which subset of the compact set of behavioural operators of process algebras (sequential and parallel composition, choice, etc.) can find a counterpart in the emergent structures of algorithmic causal sets?

]]>

This is the abstract of ‘Let’s consider two spherical chickens’, my contribution to the FQXi 2015 Essay Contest ‘Trick or Truth? The Mysterious Connection Between Physics and Mathematics’, which obtained a Third Prize and a mention for ‘Most Creative Presentation’, out of 203 submissions.

I wish to dedicate this essay and these results to the memory of my father Giampaolo, who was very amused by the spherical-chickens concept, which refers to the habit of some theoretical physicists of over-simplifying when developing models of real systems. As a chemist, he must have considered himself immune to this attitude!

]]>

We set up a systematic study of the Cellular Automaton Interpretation of quantum mechanics. We hope to inspire more physicists to do so, to consider seriously the possibility that quantum mechanics as we know it is not a fundamental, mysterious, impenetrable feature of our physical world, but rather an instrument to statistically describe a world where the physical laws, at their most basic roots, are not quantum mechanical at all. Sure, we do not know how to formulate the most basic laws at present, but we are collecting indications that a classical world underlying quantum mechanics does exist. Our models show how to put quantum mechanics on hold when we are constructing models such as string theory and “quantum” gravity, and this may lead to much improved understanding of our world at the Planck scale.

]]>


http://www.fqxi.org/community/forum/topic/1337#post_56189

The degree of complexity that can arise by bottom-up causation alone is strictly limited. Sand piles, the game of life, bird flocks, or any dynamics governed by a local rule [28] do not compare in complexity with a single cell or an animal body. The same is true in physics: spontaneously broken symmetry is powerful [16], but not as powerful as symmetry breaking that is guided top-down to create ordered structures (such as brains and computers). Some kind of coordination of effects is needed for such complexity to emerge.

I suggest top-down effects from these [upper] levels is the key to the rise of genuine complexity (such as computers and human beings)

Hypothesis: bottom up emergence by itself is strictly limited in terms of the complexity it can give rise to. Emergence of genuine complexity is characterised by a reversal of information flow from bottom up to top down [27].

But can we really rule out the possibility that this ‘kind of coordination of effects’ itself spontaneously emerges in an artificial system such as a cellular automaton? Would it not be possible to observe this type of high, ‘biological’ complexity emerge in a simulation of an artificial system like Wolfram’s automaton no. 110, provided we are willing to wait for a sufficiently long (likely astronomical) time?

]]>

Lee Smolin’s cosmological natural selection conjecture — the idea that universes evolve by reproducing across black holes, and are subject to natural selection — is also briefly mentioned. The relevant paper would be ‘Did the universe evolve?’, Smolin 1992.

In ‘A Brief History of Time’, Stephen Hawking writes:

‘There are either many different universes or many different regions of a single universe, each with its own initial configuration and, perhaps, with its own set of laws of science. In most of these universes the conditions would not be right for the development of complicated organisms; only in the few universes that are like ours would intelligent beings develop and ask the question: “Why is the universe the way we see it?” The answer is then simple: if it had been different, we would not be here.’

]]>

Having focused my research activity for over two decades on process algebra and related topics (specifically on the LOTOS specification language, whose definition has been influenced mainly by Milner’s CCS and Hoare’s CSP), when I started looking at the topic of emergence in computation I found it natural to begin by investigating visual complexity indicators for process algebraic languages.

In *process algebra* the behaviour of a system — usually, a dynamic set of interacting, concurrent entities, or ‘processes’, e.g. a computer communication protocol — is described by an algebraic ‘behavior expression’ – a ‘bex’ – formed by special operators; these are meant to express action, sequentiality, parallelism, synchronization, communication, choice, and so on.

A process algebra always possesses a well-defined syntax *and* semantics. The Structural Operational Semantics of a process algebra formally defines, via axioms and inference rules, the transition relation:

bex — action —> bex’

for any syntactically correct bex. Note that, for the same bex, we may well have

bex — otherAction —> otherBex’.

Indeed, in general, behaviors branch, like trees. But when a specification is deterministic, and we drop the action labels of the transition relation, the behavior of the specified system reduces to a sequence S = (bex1, bex2, …, bexN, …) of expressions. If we then code the different behavioral operators by square cells of different colors, or different grey levels, each bex becomes a finite, 1-D array of such cells; by stacking the cell arrays of all the expressions in sequence S we obtain a pictorial representation of the computation similar to the diagrams used in NKS for illustrating cellular automata behaviors.
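The coding step can be sketched in a few lines. The operator tokens, grey-level codes, and the toy trace below are purely illustrative (they are not the operator set or the encoding of the JLAP paper); the point is only the mechanics of turning a sequence of expressions into an NKS-style diagram:

```python
# Map each behavioural operator symbol to a grey level and stack the
# successive expressions of a deterministic run into a 2-D diagram.
PALETTE = {';': 1, '|': 2, '[]': 3, 'stop': 0}   # illustrative codes

def row_of(bex_tokens, width):
    r = [PALETTE[t] for t in bex_tokens]
    return r + [0] * (width - len(r))   # pad each row to a fixed width

trace = [['stop'],                      # a hypothetical sequence S of bexes
         [';', 'stop'],
         [';', '|', 'stop'],
         [';', '|', '[]', 'stop']]
width = max(len(b) for b in trace)
diagram = [row_of(b, width) for b in trace]
for r in diagram:
    print(r)
```

Rendering `diagram` with one square cell per entry yields exactly the kind of picture described above.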

In my 2007 JLAP paper (see list of publications) I studied the power of some fundamental process-algebraic operators in this light. In particular, I showed that pseudo-random diagrams can be obtained from a subset of process-algebraic operators that is provably non-universal, thus providing an argument against Wolfram’s conjecture that pseudo-randomness is an indicator of computational universality.

The cited paper also includes a process algebraic specification that simulates Elementary Cellular Automaton 110 (which *is* universal).

The picture below shows the grey-level-coded deterministic computation of a process algebraic specification of the Hilbert-Wolfram pseudorandom numeric sequence:

a[0] = 1

a[n+1] = 3/2 * a[n]       if a[n] is even

a[n+1] = 3/2 * (a[n]+1)   if a[n] is odd

(The pseudo-randomness of the sequence is apparent from, for example, the parities of its elements, which do not seem to follow any definite pattern.)
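The recurrence is easy to run directly; a quick Python check of the first terms and their parities:

```python
def next_term(a):
    # a[n+1] = 3/2 * a[n] if a[n] is even, 3/2 * (a[n]+1) if a[n] is odd;
    # both branches yield an integer, so integer division is exact.
    return 3 * a // 2 if a % 2 == 0 else 3 * (a + 1) // 2

seq = [1]
for _ in range(9):
    seq.append(next_term(seq[-1]))
print(seq)                    # [1, 3, 6, 9, 15, 24, 36, 54, 81, 123]
print([a % 2 for a in seq])   # parities: no evident pattern
```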

One can retrieve the a[i]’s by checking the heights of the growing triangular shapes appearing at the right edge of the diagram.

]]>