Tommaso Bolognesi
I am a senior researcher at ISTI, an Institute of CNR located in the CNR Pisa Area. I am a member of the FM&&T group. Check the links at the top of the page for further information.
It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time … So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequerboard with all its apparent complexities.
Richard Feynman, 1964.
Mutual information in weakly-coupled process-algebraic mutual observers
The notions of observer and observation play a central role in Physics, but also in Theoretical Computer Science, notably in Process Algebra. In the introduction to his fundamental 1980 book on CCS, R. Milner states that the two central ideas of the formalism – observation and synchronised communication – are really one: the only way to observe a system is to communicate with it, and to place two system components in communication is to let them observe each other. Various forms of the parallel composition operator enable one to compose processes that behave as independent transition systems, except at some user-specified events, which must be executed in sync by all interacting parties.
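As a toy illustration of this synchronisation discipline, here is a minimal Python sketch (all names are invented for this example, not taken from any existing library) of a CSP/LOTOS-style parallel composition of two labelled transition systems: actions outside the synchronisation set are interleaved, while actions inside it must be executed jointly.

```python
from itertools import product

def compose(lts1, lts2, sync):
    """CSP/LOTOS-style parallel composition of two labelled transition
    systems, synchronizing on the actions in `sync` and interleaving
    all the others. An LTS is a list of (state, action, state) triples."""
    states1 = {s for (s, _, _) in lts1} | {s for (_, _, s) in lts1}
    states2 = {s for (s, _, _) in lts2} | {s for (_, _, s) in lts2}
    trans = []
    for (p, q) in product(states1, states2):
        for (s1, a, t1) in lts1:
            if s1 != p:
                continue
            if a in sync:
                # synchronized event: both components must offer it
                for (s2, b, t2) in lts2:
                    if s2 == q and b == a:
                        trans.append(((p, q), a, (t1, t2)))
            else:
                # internal move of the first component: interleave
                trans.append(((p, q), a, (t1, q)))
        for (s2, a, t2) in lts2:
            if s2 == q and a not in sync:
                # internal move of the second component: interleave
                trans.append(((p, q), a, (p, t2)))
    return trans

# two tiny processes that can only 'observe' each other via the shared gate 'obs'
P = [(0, 'work', 1), (1, 'obs', 0)]
Q = [(0, 'obs', 1), (1, 'log', 0)]
for t in compose(P, Q, sync={'obs'}):
    print(t)
```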
We plan to develop code for exploring applications of various notions of information (Shannon entropy; mutual, effective and integrated information) in the context of small networks of strongly/weakly interacting processes – ones in which internal transitions are, respectively, less/more frequent than external synchronizations.
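As a first step, here is a minimal sketch of the two simplest such measures: Shannon entropy, and the mutual information between the action streams of two processes, estimated from a trace of jointly observed action pairs (function names and example traces are invented for illustration).

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of an empirical distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c)

def mutual_information(pairs):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from a list of
    jointly observed (x, y) action pairs."""
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return entropy(px) + entropy(py) - entropy(joint)

# invented traces of two mutual observers: a weakly coupled pair
# (mostly independent internal moves i1, i2) vs. a strongly coupled
# one (each shared 'obs' event is seen by both parties at once)
weak   = [('i1', 'i2'), ('i1', 'obs'), ('obs', 'i2'), ('i1', 'i2')]
strong = [('obs', 'obs'), ('i1', 'i2'), ('obs', 'obs'), ('obs', 'obs')]
print(mutual_information(weak), mutual_information(strong))
```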
Algorithmic spacetime, the NKS way
[see also http://blog.stephenwolfram.com/2015/12/what-is-spacetime-really/]
One of the attractive features of the algorithmic, network-based spacetime ideas discussed in this Blog and in the NKS book is that they immediately suggest a range of computing experiments that any curious, scientifically oriented mind can carry out rather easily (maybe using Mathematica).
The requirement of full abstraction — that everything should emerge from this universal computation without plugging in features such as known physical constants — appears to offer an accessible entry point to the fascinating quest for the ultimate laws of nature, for scientists as well as for amateurs not familiar with current quantum gravity proposals.
In this respect, the massive exploration, by experiment and without preconceptions, of the ‘computational universe’, and of undirected/directed graph-rewriting systems in particular (for space/spacetime), might still be a sensible item on the agenda.
However, the work that I have myself carried out in the last few years (https://tommasobolognesi.wordpress.com/publications/), in particular on trivalent networks and on algorithmic causal sets, has increasingly convinced me that brute-force ‘universe hunting’ will not progress significantly unless the following issues are addressed.
1. Closer contact with the Causal Set Programme
Since the late 1980s, Bombelli, Sorkin, Lee, Rideout, Reid, Henson, Dowker and others have explored *stochastic* causal sets, and have collected a number of results from which research on algorithmic, deterministic causal sets (or ‘networks’) can greatly benefit.
For example, consider the fundamental requirement of Lorentz invariance, whose transposition from the continuous to the discrete setting is anything but trivial.
The NKS take on this is based on assimilating the different inertial frames of continuous spacetime to the different total orders of the rewrite events that build up a discrete spacetime via a string rewrite system. The idea is quite appealing, in light of the existence of ‘confluent’ rewrite systems for which the final partial order is unique and subsumes all the different total-order realisations (‘causal invariance’).
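A minimal sketch of this construction, with a toy rule chosen purely for illustration: every rewrite event becomes a node of the causal network, and an edge is drawn from the event that produced a symbol to the event that consumes it. With the confluent sorting rule BA → AB, the run below yields a diamond-shaped partial order; only leftmost scheduling is implemented, but confluence guarantees that other schedulings are total-order refinements of the same partial order.

```python
def causal_network(s, lhs, rhs, max_events=20):
    """Build the causal network of a sequential (leftmost) string
    rewrite system lhs -> rhs. Each rewrite event becomes a node; an
    edge (e1, e2) means event e2 consumed a symbol produced by e1."""
    cells = [(c, 0) for c in s]           # event 0 = initial condition
    edges, event = set(), 0
    while event < max_events:
        pos = ''.join(c for c, _ in cells).find(lhs)
        if pos < 0:
            break                          # no redex left: halt
        event += 1
        for _, producer in cells[pos:pos + len(lhs)]:
            edges.add((producer, event))   # causal dependency
        cells[pos:pos + len(lhs)] = [(c, event) for c in rhs]
    return edges

# 'BABA' sorts to 'AABB' in three events; the causal edges form the
# diamond {(0,1), (0,2), (1,3), (2,3)} -- a tiny discrete spacetime
print(sorted(causal_network('BABA', 'BA', 'AB')))
```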
Nevertheless, a 2006 paper by Bombelli, Henson and Sorkin [‘Discreteness without symmetry breaking: a theorem’, https://arxiv.org/abs/gr-qc/0605006] proves that a directed graph aiming at ‘Lorentzianity’ – intended here as one that does *not* support the identification of a preferred reference frame (or direction) – cannot have finite-degree nodes. (One refers here to the transitively *reduced*, Hasse graph, not to its transitive closure.) In other words, the node degrees of algorithmic causets *must* grow unbounded. I suspect that meeting this requirement may involve substantial rethinking of deterministic causet construction techniques…
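The effect can be seen concretely in the standard stochastic construction. A minimal, brute-force sketch (pure Python, illustrative sizes only): sprinkle points uniformly into a causal diamond of 1+1 Minkowski space, extract the links of the transitively reduced (Hasse) graph, and watch the maximal degree creep upward as the sprinkling density grows.

```python
import random

def sprinkle(n, seed=0):
    """n points uniformly sprinkled into a causal diamond of 1+1
    Minkowski space, in light-cone coordinates (u, v)."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n)]

def links(points):
    """Edges of the Hasse graph: x -> y is a link iff x causally
    precedes y with no sprinkled point strictly in between."""
    prec = lambda a, b: a[0] < b[0] and a[1] < b[1]   # causal order
    return [(x, y) for x in points for y in points
            if prec(x, y) and not any(prec(x, z) and prec(z, y)
                                      for z in points)]

# the maximal out-degree grows (roughly logarithmically, in 1+1
# dimensions) with the number of sprinkled points, i.e. with density
for n in (50, 100, 200):
    pts = sprinkle(n)
    out_degree = {p: 0 for p in pts}
    for x, _ in links(pts):
        out_degree[x] += 1
    print(n, max(out_degree.values()))
```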
2. New paradigms promoting multi-level hierarchies of emergence
The manifesto of the computational universe conjecture can be summarized in one sentence: Complexity in Nature = Emergence in deterministic computation.
And the most effective example of emergence in computation is probably represented by the digital particles of Elementary Cellular Automaton (ECA) 110.
However, as far as I know, nobody has ever been able to set up a simple program that exhibits more than one level of emergence. In ECA 110, Level 0 is represented by the boolean function of three variables that defines local cell behaviour, and Level 1 by the emergent particle interaction rules. No Level 2 emerges in which particle interactions yield a new layer of entities and rules.
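For concreteness, Level 0 fits in a couple of lines of Python, while Level 1, the digital particles, only becomes visible in the printed spacetime diagram; a minimal sketch:

```python
import random

RULE = 110  # Level 0: a boolean function of three variables

def step(row):
    """One synchronous update of an elementary cellular automaton
    (cyclic boundary): neighbourhood (l, c, r) indexes a bit of RULE."""
    n = len(row)
    return [(RULE >> (row[(i - 1) % n] * 4 + row[i] * 2
                      + row[(i + 1) % n])) & 1
            for i in range(n)]

# Level 1 (particles moving over a periodic background) emerges only
# in the spacetime diagram, printed here row by row from a random start
random.seed(1)
row = [random.randint(0, 1) for _ in range(64)]
for _ in range(32):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```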
G. Ellis has observed that simple programs (such as those considered in the NKS book) cannot boost complexity up to a level akin to, say, that of the biosphere: this would require some radically new concept. One suggested possibility is ‘backward causation’.
Another (possibly related) ingredient that might boost complexity is the appearance, as the computation unfolds, of elementary ‘agents’ provided with some form of autonomy/initiative, able to interfere with the initial rules of the game – the same rules from which they have emerged! This resonates with the idea of self-modifying code.
How can one incorporate these or similar concepts (with notions of ‘observation’/‘consciousness’ pulling at the top) into simple models of computation, or promote their emergence? A very hard question indeed! But at present I see no other way out of the dead end in which the more ‘traditional’ NKS-type experiments on discrete spacetime are stuck.
Internal observers in causet-based algorithmic spacetime
(T.B. and Vincenzo Ciancia)
Notions of observation play a central role in Physics, but also in Theoretical Computer Science, notably in Process Algebra. The importance in Physics of the mutual influences between observer and observed phenomena is well recognized, and yet the properties of the former are in general fuzzily specified, in spite (or because) of the fact that they may include sophisticated cognitive and operative skills.
Our purpose is to transpose the observer/observed interplay into a simple formal context that greatly facilitates treating the two on an equal footing. We plan to identify observers and observed entities in the context of discrete models of spacetime – in particular, algorithmic causal sets (causets). We shall define simple forms of observer, with associated synchronisation/observation mechanisms, and detect their emergence in dynamic causets. The search can be automated by model checkers using spatial or spatio-temporal logics.
We shall focus on the internal (frog’s) view, as opposed to the external (bird’s) view, by assimilating frogs to simple causet substructures, e.g. fattened causal chains provided with some persistent identity (akin to ‘digital particles’ in cellular automata), that synchronise and communicate with their environment. We shall investigate what ‘observation’ means to these entities and how they might ‘subjectively’ picture their neighborhood or remote environment. These partial observations may be reminiscent, in spirit, of stroboscopic sampling or Poincaré maps.
The emergent features of various models of computation have been widely investigated, notably for the spatio-temporal diagrams of cellular automata, but always under an external viewpoint. Our aim is to restart this analysis under the radically different perspective of an internal (proto-)observer, with a focus on stochastic and deterministic, labelled or unlabelled causets. Simply stated, our goal is to discover qualitative differences between the bird’s and the frog’s view. The issue becomes more challenging when considering different classes of observers of the same phenomena. Identifying their different viewpoints leads to considerations on invariance, akin to Lorentz covariance.
Suitable notions of abstraction shall be considered, since an observer may be blind to the tiniest causet details but sensitive to a coarse-grained version of it. Additionally, ‘smart’ observers may be required to tell apart regular from random-like causet regions.
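One standard way to coarse-grain a causet, borrowed from the causal set literature, is random sampling: keep each element with some probability p and restrict the causal order to the survivors. A minimal sketch (representing the order as a transitively closed set of pairs is an assumption made here for simplicity):

```python
import random

def coarse_grain(relations, p, seed=0):
    """Coarse-grain a causet given as a set of (x, y) pairs meaning
    'x causally precedes y' (assumed transitively closed): keep each
    element with probability p, restrict the order to the survivors."""
    rng = random.Random(seed)
    elements = {x for pair in relations for x in pair}
    kept = {x for x in elements if rng.random() < p}
    return {(x, y) for (x, y) in relations if x in kept and y in kept}

# a 4-chain 0 < 1 < 2 < 3 (as a transitive closure), sampled at p = 0.5
chain = {(i, j) for i in range(4) for j in range(4) if i < j}
print(coarse_grain(chain, 0.5))
```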
In our investigation of observations within causets, we will adopt techniques that have proved successful in theoretical computer science and process calculi, such as partially ordered Event Structures and category-theoretical models. Coalgebras, in particular, provide flexible notions of observation. Causality and locality are achieved using the emerging nominal computation model, which endows observers with primitive forms of naming, whose power is balanced by a finite-memory principle. We consider it an interesting additional research question to explore the impact of such basic computational constraints on algorithmic causal sets.
Event patterns: from process algebra to algorithmic causal sets
Notions of event and event occurrence play a central role in various areas of computer science and ICT (Information and Communication Technology).
In this proposal we are particularly interested in event concepts from process algebras such as Milner’s Calculus of Communicating Systems (CCS) and Hoare’s Communicating Sequential Processes (CSP), and related languages (e.g. LOTOS), since their high abstraction level makes it possible and attractive to investigate the impact of these event notions, and associated constructions, in the apparently remote field of causal sets, intended as discrete models of spacetime.
While, in line with General Relativity, a spacetime event in a causal set is simply a node in a directed, acyclic graph (DAG), in process algebras events represent the building blocks of more elaborate structures called processes. Events may occur ‘internally’ – inside a process – or as part of the interaction with another process; they may be atomic or involve the exchange of data items; they may or may not induce changes in the local states of the interacting parties; they may be organized in temporal/causal patterns and be encapsulated in processes to be used, in turn, as building blocks of more complex event patterns.
In spite of the richer event-related constructions offered by process algebras, the formal, ‘true concurrency semantics’ of the latter maps the syntax of an algebraic specification, no matter how complex, into a DAG, thus providing a common basis for comparing the event patterns arising in the two fields.
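A minimal sketch of this mapping, for a toy process fragment with only atomic actions, sequential composition and (synchronisation-free) parallel composition; the term syntax and function names are invented for illustration. Each term denotes a pomset, i.e. a DAG of events:

```python
import itertools

fresh = itertools.count()  # globally unique event identifiers

def pomset(term):
    """Map a toy process term to a DAG of events.
    Terms: ('act', a) | ('seq', P, Q) | ('par', P, Q).
    Returns (events, edges, minima, maxima); an event is (id, action)."""
    kind = term[0]
    if kind == 'act':
        e = (next(fresh), term[1])
        return [e], set(), {e}, {e}
    evP, edP, minP, maxP = pomset(term[1])
    evQ, edQ, minQ, maxQ = pomset(term[2])
    events, edges = evP + evQ, edP | edQ
    if kind == 'seq':
        # every maximal event of P causally precedes every minimal of Q
        edges |= {(x, y) for x in maxP for y in minQ}
        return events, edges, minP, maxQ
    # 'par': disjoint union of the two DAGs, no causality added
    return events, edges, minP | minQ, maxP | maxQ

# (a; b) in parallel with (c; d): four events, just two causal edges
term = ('par', ('seq', ('act', 'a'), ('act', 'b')),
               ('seq', ('act', 'c'), ('act', 'd')))
_, edges, _, _ = pomset(term)
print(sorted(edges))
```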
In particular, we are interested in exploring algorithmic causal sets – those obtained by deterministic rather than stochastic procedures – and in verifying the extent to which their emergent properties match the structured behavioral patterns typical of process algebra.
One may argue that an event happens when a trace of its occurrence gets recorded somewhere. In process algebra, event occurrences may indeed affect the local states of the interacting processes. Can we meaningfully separate event threads in algorithmic causal sets? Can we detect the emergence of process-like substructures in them – maybe a subgraph with some kind of boundary? Can such ‘processes’ be stateful too? Can we distinguish between their internal and external events? More generally, which subset of the compact set of behavioural operators of process algebras (sequential and parallel composition, choice, etc.) can find a counterpart in the emergent structures of algorithmic causal sets?
FQXi Essay 2015
“Confronted with a Pythagorean jingle derived from simple ratios, a sequence of 23 moves from knot theory, and the interaction between a billiard-ball and a zero-gravity field, a young detective soon realizes that three crimes could have been avoided if math were not so unreasonably effective in describing our physical world. Why is this so? Asimov’s fictional character Prof. Priss confirms to the detective that there is some truth in Tegmark’s Mathematical Universe Hypothesis, and reveals to him that all mathematical structures entailing self-aware substructures (SAS) are computable and isomorphic. The boss at the investigation agency is not convinced and proposes his own views on the question.”
This is the abstract of ‘Let’s consider two spherical chickens’, my contribution to the FQXi 2015 Essay Contest ‘Trick or Truth? The Mysterious Connection Between Physics and Mathematics’, which obtained a Third Prize and the mention for ‘Most Creative Presentation’, out of 203 submissions.
I wish to dedicate this essay and these results to the memory of my father Giampaolo, who was very amused by the spherical-chickens concept, which pokes fun at the habit of some theoretical physicists of making over-simplifications when developing models of real systems. As a chemist, he must have considered himself immune from this attitude!
Excerpt from Gerard ’t Hooft: ‘The Cellular Automaton Interpretation of Quantum Mechanics’ – June 10, 2014
We set up a systematic study of the Cellular Automaton Interpretation of quantum mechanics. We hope to inspire more physicists to do so, to consider seriously the possibility that quantum mechanics as we know it is not a fundamental, mysterious, impenetrable feature of our physical world, but rather an instrument to statistically describe a world where the physical laws, at their most basic roots, are not quantum mechanical at all. Sure, we do not know how to formulate the most basic laws at present, but we are collecting indications that a classical world underlying quantum mechanics does exist. Our models show how to put quantum mechanics on hold when we are constructing models such as string theory and “quantum” gravity, and this may lead to much improved understanding of our world at the Planck scale.
Quote from Erwin Schroedinger (Nobel Prize in Physics, 1933)
We feel clearly that we are only now beginning to collect reliable material for welding together, into a single whole, the sum of all our knowledge; but, on the other hand, it has become almost impossible for a single mind to master more than a small specialized portion of it. I see no other way out of this dilemma (lest we give up our goal forever) than that some of us should venture to attempt a synthesis of facts and theories, albeit with second-hand and incomplete knowledge of some of them, and run the risk of being laughed at. (quoted in E. Klein, Sette volte la rivoluzione, 2006)
Demonstration on space growth in De Sitter spacetime
A demonstration illustrating the exponential growth of space in De Sitter spacetime has been added to the Wolfram Demonstrations site. Find it here.
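For reference, a minimal statement of what ‘exponential growth of space’ means here: in the flat slicing of De Sitter spacetime the metric is

$$ds^2 = -dt^2 + e^{2Ht}\, d\vec{x}^{\,2}, \qquad H = \sqrt{\Lambda/3},$$

so that a comoving spatial volume grows as $V(t) \propto e^{3Ht}$; this is the behaviour the demonstration visualizes.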
Top-down causation
Quotes from George Ellis’ FQXi 2012 Contest Essay (2nd prize winner)
http://www.fqxi.org/community/forum/topic/1337#post_56189
The degree of complexity that can arise by bottom-up causation alone is strictly limited. Sand piles, the game of life, bird flocks, or any dynamics governed by a local rule [28] do not compare in complexity with a single cell or an animal body. The same is true in physics: spontaneously broken symmetry is powerful [16], but not as powerful as symmetry breaking that is guided top-down to create ordered structures (such as brains and computers). Some kind of coordination of effects is needed for such complexity to emerge.
I suggest top-down effects from these [upper] levels is the key to the rise of genuine complexity (such as computers and human beings)
Hypothesis: bottom up emergence by itself is strictly limited in terms of the complexity it can give rise to. Emergence of genuine complexity is characterised by a reversal of information flow from bottom up to top down [27].
But can we really rule out the possibility that this ‘kind of coordination of effects’ might itself spontaneously emerge in an artificial system such as a cellular automaton? Would it not be possible to observe this type of high, ‘biological’ complexity emerging in a simulation of an artificial system like Wolfram’s automaton no. 110, provided we are willing to wait for a sufficiently long (likely astronomical) time?
Anthropic Principle
I’ve read about the Anthropic Principle in several books and articles, but explanations are often obscure or ambiguous. A quick, clarifying introduction to the subject is given, in my opinion, by Richard Dawkins in this video:
Lee Smolin’s cosmological natural selection conjecture — the idea that universes evolve by reproducing across black holes, and are subject to natural selection — is also briefly mentioned. The relevant paper would be ‘Did the universe evolve?’, Smolin 1992.
In ‘A Brief History of Time’, Stephen Hawking writes:
‘There are either many different universes or many different regions of a single universe, each with its own initial configuration and, perhaps, with its own set of laws of science. In most of these universes the conditions would not be right for the development of complicated organisms; only in the few universes that are like ours would intelligent beings develop and ask the question: “Why is the universe the way we see it?” The answer is then simple: if it had been different, we would not be here.’