
Algorithmic spacetime the NKS-way

September 28, 2016

[see also http://blog.stephenwolfram.com/2015/12/what-is-spacetime-really/]

One of the attractive features of the algorithmic, network-based spacetime ideas discussed in this Blog and in the NKS book is that they immediately suggest a range of computing experiments that any curious, scientifically oriented mind can carry out rather easily (maybe using Mathematica).

The requirement of full abstraction (everything should emerge from this universal computation, without plugging in features such as known physical constants) appears to offer an accessible entry point to the fascinating quest for the ultimate laws of nature, both for scientists and for amateurs unfamiliar with current quantum gravity proposals.

In this respect, the massive exploration, by experiment and without preconceptions, of the 'computational universe', and of undirected/directed graph-rewriting systems in particular (for space and spacetime, respectively), might still be a sensible item on the agenda.

However, the work that I have myself carried out in the last few years (https://tommasobolognesi.wordpress.com/publications/), in particular on trivalent networks and on algorithmic causal sets, has increasingly convinced me that brute-force 'universe hunting' will not progress significantly unless the following issues are addressed.

1. Closer contact with the Causal Set Programme.

Since the late 1980s, Bombelli, Sorkin, Lee, Rideout, Reid, Henson, Dowker and others have explored *stochastic* causal sets, and have collected a number of results from which research on algorithmic, deterministic causal sets (or 'networks') can greatly benefit.
For example, consider the fundamental requirement of Lorentz invariance, whose transposition from the continuous to the discrete setting is anything but trivial.
The NKS take on this assimilates the different inertial frames of continuous spacetime to the different total orders of the rewrite events that build up a discrete spacetime via a string rewrite system. The idea is quite appealing, in light of the existence of 'confluent' rewrite systems for which the final partial order is unique and subsumes all the different total-order realisations ('causal invariance').
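As a toy illustration of causal invariance, consider the confluent string rewrite system with the single rule 'ba' -> 'ab' (a Python sketch; the function names are my own): whichever match is rewritten first, every maximal run reaches the same final string, while the total orders of rewrite events differ.

```python
def rewrite_run(s, pick):
    """Apply the rule 'ba' -> 'ab' until no match remains.

    `pick` chooses which match to rewrite at each step; returns the final
    string together with the sequence of rewrite positions (a total order
    of the rewrite events)."""
    s = list(s)
    events = []
    while True:
        matches = [i for i in range(len(s) - 1) if s[i] == 'b' and s[i + 1] == 'a']
        if not matches:
            return ''.join(s), events
        i = pick(matches)
        s[i], s[i + 1] = 'a', 'b'   # one rewrite event at position i
        events.append(i)

leftmost = lambda m: m[0]
rightmost = lambda m: m[-1]

# Two different scheduling policies: same final string (confluence),
# different total orders of events.
final_L, ev_L = rewrite_run("bbaa", leftmost)
final_R, ev_R = rewrite_run("bbaa", rightmost)
print(final_L, ev_L)
print(final_R, ev_R)
```

Non-overlapping rewrites commute, which is exactly why the final outcome does not depend on the chosen total order.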
Nevertheless, a 2006 paper by Bombelli, Henson and Sorkin ['Discreteness without symmetry breaking: a theorem', https://arxiv.org/abs/gr-qc/0605006] proves that a directed graph aiming at 'Lorentzianity' (intended here as one that does *not* single out a preferred reference frame, or direction) cannot have finite-degree nodes. (The statement refers to the transitively *reduced*, Hasse graph, not to the transitive closure.) In other words, the node degrees of algorithmic causets *must* grow unbounded. I suspect that meeting this requirement may involve substantial rethinking of deterministic causet construction techniques…
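The unbounded-degree phenomenon is easy to observe numerically in the *stochastic* setting the theorem addresses. The sketch below (illustrative code, my own naming) sprinkles points uniformly into a causal diamond of 1+1 Minkowski space, in light-cone coordinates, and measures the average out-degree of the transitively reduced graph; the average link degree keeps growing with the sprinkling density.

```python
import random

def sprinkle(n, seed=1):
    """n points uniform in the unit causal diamond of 1+1 Minkowski space,
    written in light-cone coordinates (u, v) in [0,1]^2."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]

def precedes(p, q):
    # In light-cone coordinates, p is in the causal past of q
    # iff both coordinates strictly increase.
    return p[0] < q[0] and p[1] < q[1]

def avg_link_degree(pts):
    """Average out-degree in the transitively *reduced* (Hasse) graph:
    only 'links' count, i.e. q in the future of p with no r in between."""
    total = 0
    for p in pts:
        future = [q for q in pts if precedes(p, q)]
        total += sum(1 for q in future
                     if not any(precedes(r, q) for r in future))
    return total / len(pts)

degs = [avg_link_degree(sprinkle(n)) for n in (50, 200, 400)]
print([round(d, 2) for d in degs])   # average link degree grows with density
```

In 2D the growth is roughly logarithmic in the number of sprinkled points; the point of the theorem is that no finite bound can ever be imposed without breaking Lorentz invariance.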

2. New paradigms promoting multi-level hierarchies of emergence

The manifesto of the computational universe conjecture can be summarized in one sentence: Complexity in Nature = Emergence in deterministic computation.
And the most effective example of emergence in computation is probably represented by the digital particles of Elementary Cellular Automaton (ECA) 110.

However, as far as I know, nobody has ever been able to set up a simple program that exhibits more than one level of emergence. In ECA 110, Level 0 is represented by the Boolean function of three variables that defines local cell behaviour, and Level 1 is represented by the emergent particle interaction rules. No Level 2 emerges in which particle interactions yield a new layer of entities and rules.

G. Ellis has observed that simple programs (such as those considered in the NKS book)
cannot boost complexity up to a level akin to, say, the biosphere: that would require some radically new concept. One suggested possibility is ‘backward causation’.

Another (possibly related) ingredient that might boost complexity is the appearance, as the computation unfolds, of elementary 'agents' provided with some form of autonomy/initiative, able to interfere with the initial rules of the game, the very rules from which they have emerged! This resonates with the idea of self-modifying code.
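To make the idea tangible, here is a deliberately naive toy (not a proposal from the literature; every choice in it is arbitrary): an ECA whose 8-bit rule table is itself rewritten whenever the emergent global state, here simply the density of live cells, crosses a threshold.

```python
def step(row, rule):
    """One ECA step with periodic boundary; `rule` is an 8-bit lookup table."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n] << 2 | row[i] << 1 | row[(i + 1) % n])) & 1
            for i in range(n)]

# Toy self-modification: the state produced by the rule feeds back into
# the rule table itself (flip one arbitrary bit whenever the density of
# live cells exceeds 1/2).
row, rule, history = [0] * 20 + [1] + [0] * 20, 110, []
for t in range(30):
    row = step(row, rule)
    if 2 * sum(row) > len(row):
        rule ^= 1 << (t % 8)        # the emergent state rewrites its own law
    history.append(rule)
print(history[0], history[-1])
```

This is self-modification imposed by hand, of course; the hard open problem stated above is to have agents *emerge* that do something of this kind on their own.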

How can one incorporate these or similar concepts (with notions of 'observation'/'consciousness' pulling at the top) into simple models of computation, or promote their emergence? A very hard question indeed! But I currently see no other way out of the dead end in which the more 'traditional' NKS-type experiments on discrete spacetime are stuck.
