© Springer Nature Switzerland AG 2019
Fabio Scardigli, Gerard ’t Hooft, Emanuele Severino and Piero Coda, Determinism and Free Will, https://doi.org/10.1007/978-3-030-05505-9_2

Free Will in the Theory of Everything

Fabio Scardigli (1), Gerard ’t Hooft (2), Emanuele Severino (3) and Piero Coda (4)
(1)
Department of Mathematics, Politecnico of Milano, Milano, Italy
(2)
Institute for Theoretical Physics, Utrecht University, Utrecht, The Netherlands
(3)
Brescia, Italy
(4)
Istituto Universitario Sophia, Firenze, Italy
 
 
Fabio Scardigli (Corresponding author)

Abstract

From what is known today about the elementary particles of matter and the forces that control their behaviour, it may be observed that a host of obstacles to our further understanding remain to be overcome. Most researchers conclude that drastically new concepts must be investigated, new starting points are needed, older structures and theories, in spite of their successes, will have to be overthrown, and new, superintelligent questions will have to be asked and investigated. In short, they say that we shall need new physics. Here, we argue in a different manner. Today, no prototype, or toy model, of any so-called theory of everything exists, because the demands required of such a theory appear to be conflicting. The demands that we propose include locality, special and general relativity, together with a fundamental finiteness not only of the forces and amplitudes, but also of nature’s set of dynamical variables. We claim that the two ingredients we have today, quantum field theory and general relativity, do indeed go a long way towards satisfying such elementary requirements. Putting everything together in a grand synthesis is like solving a gigantic puzzle. We argue that we need the correct analytical tools to solve this puzzle. Finally, it seems obvious that this solution will leave room neither for ‘divine intervention’ nor for ‘free will’, an observation that, all by itself, can be used as a clue. We claim that this reflects on our understanding of the deeper logic underlying quantum mechanics.

Theories of Everything

What is a ‘theory of everything’? We begin by emphasising that, when physicists use this term, it should not be taken in a literal sense. It would be preposterous for any domain of science to claim that it can lead to formalisms that explain ‘everything’. When we use this phrase, we have a deductive chain of exposition in mind, implying that there are ‘fundamental’ laws describing space, time, matter, forces, and dynamics at the tiniest conceivable distance scale. Using advanced mathematics, these laws prescribe how elementary particles behave, how they exchange energy, momentum, and charges, and how they bind together to form larger structures, such as atoms, molecules, solids, liquids, and gases. The laws have the potential to explain the basic features of nuclear physics, astrophysics, cosmology, and materials science. With statistical methods they explain the basis of thermodynamics and more. Further logical chains of reasoning connect this knowledge to chemistry, the life sciences, and so on. Somewhat irreverently, some might suggest that a ‘theory of everything’ thus lies at the basis of most of the other sciences, but of course this is not the case, and we must avoid giving the impression that the other sciences are thought of as less ‘fundamental’. In practice, a theory of everything would not much affect the rest of science, simply because each of the elements of such a deductive chain would be far too complex and far too poorly understood to be of any practical value. The theory applies to ‘everything’ only in a formal sense.

What do physicists imagine a ‘theory of everything’ to look like? Should it be a ‘grand unified theory’ of all particles and forces? If so, we are still a long way off, since the relevant distance scale at which fundamental modifications to our present theoretical views are expected to be needed is the so-called Planck length, some $$10^{ - 33}$$ cm, which is more than a billion times a billion times smaller than anything that can be studied directly in laboratory experiments. Is it ‘quantised gravity’? Deep and fundamental problems arise when we try to apply the principles of quantum mechanics to the gravitational force. Forces and quantum mechanical amplitudes tend to infinity, and the remedies for that, as proposed so far, still seem to be very primitive. Since they lead us out of the perturbative regime, calculations are imprecise, and accurate definitions explaining what we are talking about are still lacking.

Is it ‘superstring theory’? The problem here is that this theory hinges largely on ‘conjectures’. Typically, it is not understood how most of these conjectures should be proven, and many researchers are more interested in producing new conjectures than in proving old ones, as proving them seems to be well nigh impossible. When trying to do so, one discovers that the logical basis of such theories is still quite weak. One often hears the argument that, although we do not quite understand the theory ‘yet’, the theory is so smart that it knows how it works itself. Or again, its mathematics is so beautiful and coherent that it ‘must be true’. In the view of the present author, such arguments are more dubious than often realised, if the history of science is anything to go by.

Finally, many researchers are tempted to depart from the established paths and try ‘completely new and different’ starting points. In many cases, these are not based on sound reasoning and healthy philosophies, and the chances of success appear to be minimal. The reader may not realise it at first glance, but the present paper is a plea for rigorous reasoning and for carefully keeping established scientific results in mind.

Is humanity smart enough to fathom the complexities of the laws of nature? If history can serve as a clue, the answer is: perhaps. We are equipped with brains that have evolved a little bit since we descended from the apes, hardly more than a million years ago, and we have managed to unravel some of nature’s secrets way beyond what is needed to build our houses, hunt for food, fight off our enemies, and copulate. In terms of cosmic time units, a million years is not much, and our brains may or may not have had enough opportunity to evolve to a state where they can carry out this particular new task. However, we may just about manage to figure things out, making numerous mistakes on our way. And the nice thing about science is that mistakes can be corrected, so we do stand a reasonable chance.

Today’s attempts at formulating ‘theories of everything’ must look extremely clumsy in the eyes of beings whose brains have had more time, say another few million years, to evolve further. The present author is convinced that many of the starting points researchers have investigated up to now are totally inappropriate, but that cannot be helped. We are just baboons who have only barely arrived on the scene of science. Using my own limited brain power, I am proposing a somewhat different starting point.

The following two sections, the main body of my lecture, may look peculiar, contemplating the laws of nature from an unconventional vantage point. We argue that the fundamental laws of nature appear to be chosen in an extremely efficient way. The only thing that may seem not to agree with our philosophy is quantum mechanics. On the other hand, quantum mechanics does also appear to be an extremely efficient theory. Without quantum mechanics, we would not have been able to construct meaningful theories for atoms and sub-atomic particles.

The good thing about quantum mechanics is the simple fact that many of nature’s variables that used to take continuously varying values in classical physics now turn out to be quantised. In Sect. 5 we summarise observations concerning the mathematical coherence of quantum mechanics. One could ascribe the very special logical structure of quantum mechanics to the inherent discreteness of the physical variables it describes. Now this special form of logic also seems to force us to abandon the notion of definiteness of observables, as if nothing can be absolutely certain in a quantum system. But looking deeper into the mathematical structure of the theory, one can question such conclusions. The author has somewhat different views on quantum mechanics, which we briefly explain in Sect. 6.

Our conclusion will be that our world may well be superdeterministic, so that, in a formal sense, free will and divine intervention are both outlawed. However, we emphasise that, in daily life, nobody will suffer from the consequences of such an observation; it pertains to the deeper fundamental nature of physical laws.

God’s Assignment

Imagine that you are God. Your assignment is: run a universe. Your universe may look like a big aquarium, containing things like stars and planets, plants, animals, humans, and more elementary objects such as atoms and sub-atomic particles. To make it all work, you will want billions or more of all of these. You may steer all these objects in any way you like, and you want interesting things to happen. What should you do?

You would have a problem. To tell every individual object in your universe what to do will require a massive amount of administration. Suppose you want to be efficient, isn’t there an easier way? The answer is yes. You declare that there are rules. Every object, every particle this object is made of, moves around as directed by laws of nature. Now, there are only two things left to be done: design laws of nature, and obtain a powerful computer to help you implement the laws of nature. Let us assume that you have such a powerful computer. Then the question is: how do you choose the laws of nature?
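The division of labour just described — a fixed rule, an initial state, and a computer that merely applies the rule everywhere — can be sketched in a few lines. The following is purely an illustrative toy (an elementary cellular automaton, not any model proposed in this text): the ‘law of nature’ is an eight-entry lookup table, and everything that happens follows deterministically from it.

```python
# Toy illustration: a universe as a fixed rule applied to a row of cells.
# The "law of nature" is the 8-entry lookup table encoded in `rule`;
# the "computer" just applies it to every cell, generation after generation.

def step(cells, rule=110):
    """Apply an elementary cellular-automaton rule to one generation."""
    n = len(cells)
    out = []
    for i in range(n):
        # Each cell's next state depends only on its immediate neighbours
        # (a crude form of the locality demanded later in the text).
        pattern = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> pattern) & 1)
    return out

# A very simple "initial state": one live cell in a small, closed universe.
universe = [0] * 16
universe[8] = 1
for _ in range(5):
    universe = step(universe)
```

Running this twice from the same initial state gives identical histories — no administration of individual objects is needed, only the rule and the starting configuration.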

Stars, planets, and people are quite complex, so you do not want the rules to be too simple, since then nothing of interest will happen in your universe. Computer scientists would have ideas about designing rules, a software routine, a program, telling you how your universe evolves, depending on the laws you feed it with, but to make your universe sufficiently realistic, their programs will tend to become lengthy, complex, and ugly. You want to be more demanding.

So, being God, you have a second great idea. Before formulating your laws of nature, you decide about a couple of demands that you impose upon your laws of nature. Tell your computer scientists and mathematicians that they must give you the simplest laws of nature that comply with your demands. While listening to what your computer scientists and mathematicians tell you about the viability of your demands, you copy the rules they formulate. You impose the rules, and you press the button.

In this paper, it will be argued that very simple demands can be imposed, and that at least some of these demands already lead to a structure that may well resemble our universe. The construction that will eventually emerge will be called the ‘theory of everything’. It describes everything that happens in this universe.

Now, it will appear at first sight that the first demand suggested here will not be obeyed by the actual universe. But these are only appearances. Remember that our brains were not designed for this, so keep your prejudices in check for the moment. We claim to be able to make three observations:
  • The set of demands that we will formulate now are nearly inevitable and non-negotiable.

  • Even though the demands are simple, the mathematical structure of the rules, or laws of physics, will turn out to be remarkably complex, possibly too complex for simple humans to grasp.

  • As far as we do understand them, the resulting rules do resemble the laws of nature governing our actual universe. In fact, it may well be that they lead exactly to our universe.

This is my projected path towards a ‘theory of everything’.

Demands and Rules

Demand #1: Our rules must be unambiguous. At every instant, the rules lead to a single, unambiguous prescription of what will happen next.

Here, most physicists will already object: What about quantum mechanics? Our favoured theory for the sub-atomic, atomic, and molecular interactions dictates that these respond according to chance. The probabilities are dictated precisely by the theory, but there is no single, unambiguous response.

I would make three points here. The first is that this would be a natural demand for our God. As soon as he admits ambiguities in the prescribed motion, he will be thrown back to the position where gigantic amounts of administration are needed: what will be the ‘actual’ events when particles collide? Or alternatively, God would have to do the administration for infinitely many universes all at once. This would be extremely inefficient, and when you think of it, quite unnecessary. God would much prefer a single outcome for any of his calculations. This, by the way, would also entail that his computer be a classical computer, not a quantum computer (Zuse 1969; Fredkin et al. 2003; Feinstein 2017).

The second point is this. Look at the universe we live in. The ambiguities we have are in the theoretical predictions about what happens when particles collide. What actually happens is that every particle involved chooses exactly one path. So God’s administrator must be using a rule for making up his mind when subatomic particles collide.

The third point is that there are ways around this problem. Mathematically, it is quite conceivable that a theory exists that underlies quantum mechanics (’t Hooft 2016). This theory will only allow single, unambiguous outcomes. The only problem is that, at present, we do not know how to calculate these outcomes. I am aware of the large numbers of baboons around me whose brains have arrived at different conclusions: they proved that hidden variables do not exist. But the theorems applied in these proofs contain small print. It is not taken into account that the particles and all other objects in our aquarium will tend to be strongly correlated. They howl at me that this is ‘super-determinism’, and would lead to ‘conspiracy’. Yet I see no objections against super-determinism, while ‘conspiracy’ is an ill-defined concept, which only exists in the eyes of the beholder. A few more observations on this topic are made in Sect. 6.

Demand #2: We must have causality: every event must have a cause, and these causes must all lie in the past, not in the future.

A demand of this sort is mandatory. What it really means is that, when our God looks up his rules to figure out what is supposed to happen next, he should never be confronted with a circular situation, or in other words, he should always know in which order the rules must be applied. Whatever that order is can be used to define time. So now we can distinguish future from past. Only the past events are relevant for what happens next, and whatever they dictate, will only affect the future. This principle has been instrumental in helping us understand quantum field theories, for instance.

Demand #3: Efficiency: Not all events in the past, but only a few of them, should dictate an event in the present.

This suggests that there is a power limitation in God’s laptop. We cannot have a situation where the complete past history of every particle is necessary to determine the behaviour of a given particle in the future. If, for computing the behaviour of one particle, we only need the data concerning a few particles in its immediate environment, then the calculation will go through a lot more quickly. We are simply asking for a maximum of efficiency; there are still a lot of calculations to do.

This demand will now also lead to our first rule, or law of physics:

Rule #1: Locality. All the configurations one needs to know to determine the behaviour of an object at a given spot must lie in its vicinity.

This means that we will have to define distances. Only points at very small distances from a given point are relevant to what happens there. What ‘vicinity’ really means still has to be defined. In our universe it is defined by stating that space is three-dimensional, with a Euclidean definition of distance. Details will be left for later.

Now we still have to decide how things interact, but before that, we have to decide how things can move. This is a delicate subject. In our universe, things that can stand still will be allowed to move along straight lines, with, at first sight, any speed. This makes things difficult for God’s computer programmer. A programmer will find it easy to define objects that move with a predefined speed in a predefined direction, but what are the rules if something may move with any speed in any direction? A deceptively simple-looking answer comes in the form of a new law of nature:

Rule #2: Velocity. Any object for which it has been decided how it behaves when at rest, will behave very similarly when it moves along a straight line in any direction, with any constant speed (within limits, see the next rule).

The rules governing its behaviour at this velocity must be derivable in a simple way from the rules governing its behaviour when it stands still. Think of someone sitting in a train, playing chess. How this person feels, how he moves his arm while moving a pawn, as well as the rules for chess, all are the same when the train moves as they are when the train stands still.

This is an important rule, since it builds an enormous amount of structure and complexity into our universe. The relative positions of things can now change with time, but things can also collide, they can have moving parts, etc. At first sight, the price we pay for this added complexity may seem to be mild, since we get moving things for free, if we know how they function when standing still.

But there is a problem. Should we accept all speeds, or should there be a speed limit? If we don’t impose a speed limit, we run into trouble. The trouble is with locality. If things move infinitely fast, they can be simultaneously here and far away. In practice, this also means that there will be trouble with our demand of locality and/or efficiency. Many of our particles will move so fast that our computer needs infinite processing speed. This we should not allow, so we impose:

Rule #3: There is a speed limit. Call it $$c$$, the speed of light.

This fits well with locality: the only neighbours that interact with a particle of matter are the ones that can be reached by a light signal within a limited time step.

We know that there is a speed limit in our universe: the speed of light. Thus, we saved efficiency and locality, but now there is a new problem. The person in the train moves his arm. The train may be moving more slowly than the speed limit, but what about the arm? Well, God’s mathematicians tell God that this problem can be solved, but a number of amendments must be made to rules #2 and #3. First, let us slow the arm down:

Rule #3a: Things that go faster will experience time going more slowly.

This slows down the arm, but it is not quite enough. We also need:

Rule #3b: Things in motion will also contract in the forward direction.

This makes the arm shorter, but again it does not help quite enough. A more drastic measure:

Rule #3c: Inside moving things, clocks will no longer go synchronously.

In combination with rules #3a and #3b, this works: a person inside the train may stick his arm out, but as seen from outside the train, the arm reaches the pawn at the moment that the body has nearly overtaken the arm. Mathematicians tell us that we need all three of these amendments to rule #3, and now the logic works out fine; the arm will not exceed the speed limit.
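The claim that the arm cannot beat the speed limit can be checked numerically. The sketch below uses the standard velocity-addition formula of special relativity, $$w = (u + v)/(1 + uv/c^{2})$$, which is the quantitative content of amendments #3a–#3c taken together (the formula is textbook physics, not derived here):

```python
# Hedged numerical check: with rules #3a-#3c in place, a train's speed u
# and an arm's speed v (relative to the train) combine relativistically,
# and the result never exceeds the speed limit c.

C = 1.0  # work in units where the speed limit c = 1

def combine(u, v, c=C):
    """Relativistic addition of two collinear velocities."""
    return (u + v) / (1 + u * v / c**2)

train = 0.9 * C   # train at 90% of the speed limit
arm = 0.9 * C     # arm thrust forward at 90% of c relative to the train

w = combine(train, arm)
assert w < C      # fast, but still below the limit
```

Even two speeds of $$0.9\,c$$ each combine to only about $$0.994\,c$$; naive addition would have given $$1.8\,c$$.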

Physicists have learned about this rule, with its amendments, along somewhat different lines of reasoning, but the result is the same: Einstein’s special relativity. As we see here, Einstein’s relativity theory could have been deduced from purely logical arguments, if our brains had been hundreds of times smarter. In our world, it was arrived at by Hendrik Antoon Lorentz, by studying the laws of electromagnetism. We see now why we could never have understood our universe if there hadn’t been special relativity.

We still haven’t tried to determine how things behave, even if everything stands still. This is because there is another problem. Think of an object, such as a lump of sugar. It is extended in space. Because of locality, one side of our lump of sugar should behave independently from the other side. Should it not be possible to break the lump of sugar in half? And the pieces we then get, should we not be able to break these in half again? And so on? Can we break these pieces in half forever?

This question was already raised by the Greek philosophers Democritus, Leucippus, and Epicurus around 400 BC. In a stroke of genius, they stumbled upon the right answer: No, this series of divisions will stop. There will be a smallest quantity of sugar. They called these smallest quantities ‘atoms’. So it was these Greeks who first tried the concept of quantisation. They quantised matter, purely by using their brains. Our God is forced to assume something like this as well, since, if things could be divided into pieces ad infinitum, this would imply infinite complexity, which needs to be avoided.

In the name of our efficiency requirement, we must quantise as well. All objects in this universe can be broken into smallest units. The ‘atoms of sugar’ are now known as ‘molecules’, but that’s just a detail. Molecules were found to be composed of smaller things, and these are now called ‘atoms’, which can be divided further: the smallest possible objects are now called elementary particles. Note that, in modern theories of elementary particles, these particles are considered to occupy single points in space. A point cannot be divided in two. However, when a particle, such as an electron, emits another particle, for instance a photon, then in particle physics we say that the photon is created at that spot; it does not hide inside an electron. In conclusion, it turns out that we need:

Rule #4: Matter is quantised. The smallest quanta of matter are the elementary particles.

But for God’s mathematicians, these quanta of matter have caused considerable trouble. The quanta will probably carry mass, energy, and momentum, but even if they are point-like, we sometimes do need a property replacing the notion of size. It was found how to do this: for all particles with finite amounts of momentum, there is a natural smallest size limit. The math needed is called quantum mechanics.

Quantum mechanics works, but it is complicated. Yet, up to this point, today’s physicists and mathematicians have discovered how to combine these rules in a working configuration. In particular, adding special relativity, rule #3, took us nearly 50 years, so it wasn’t easy. The result was called ‘quantum field theory’. There is one problem with quantum field theory: it is not known where the forces come from; this leaves us with lots of freedom, as it is not known how God made his decisions here.

Thus, we have to introduce one more concept: forces. In our universe, it must be possible to change the velocities of objects, so we decide on a rule of the following type:

Rule #5: Forces. If it is known how an object behaves while moving with constant speed on a straight line, it should be possible to deduce how it behaves while moving with a varying speed on a curved line.

The primary force that can be deduced in this way is gravity. In our world, we know that other forces exist, but these may be due to secondary effects resulting from complex behaviour at ultrashort distances. Again, there will be a price to pay: the best way to add the notion of curved lines in the logic of our rules is to have curvature in the fabric of space-time itself. One may then create the situation that the fundamental differences between straight lines and curved lines disappear; on curved spaces, straight lines do not exist.

There is also another advantage: curved universes have no fixed size; they can expand. This means that our universe may begin by being very tiny and very simple, and grow all by itself, just by the action of our rules. In our universe, this situation occurs. The theory describing these aspects is Einstein’s theory of general relativity.

This leads us to one more rule:

Rule #6: God must tell his computer what the initial state is.

Again, efficiency and simplicity will demand that the simplest possible choice is made here. This is an example of Occam’s rule. Perhaps the simplest possible initial state is a single particle inside an infinitesimally small universe.

Final step:

Rule #7: Combine all these rules into one computer program to calculate how this universe evolves.

So we’re done. God’s work is finished. Just push the button. However, we have reached a level where our monkey brains are at a loss. Rules #5 to 7 have proven to be too difficult for us. The theory of general relativity manages to take Rule #5 into account, but unfortunately does not handle quantum mechanics, rule #4, correctly.

Why is this so difficult? Quite possibly, more rules will have to be invented to reach a coherent evolution law, but up to now, we have been confronted with this question: can we implement all the rules given above in a single, working scheme? What comes out will be of secondary importance. Perhaps a framework will be found with many possibilities (a ‘multiverse’). In that case, more rules will have to be invented to single out one preferred choice. So far, it seems that the requirements we mention above have all been taken into consideration in the laws of physics of our universe. This is why this author suspects that the given rules make a lot of sense.

We note that the actual laws of physics known to hold in our universe are quite close to what we have constructed purely by mental considerations. Of course, the author admits that this will be attributed to hindsight, but we claim that a super-intelligent entity could perhaps have ‘guessed’ nature’s laws of physics from such first principles. This would be important to know, since it would encourage us to use similar reasoning to guess the remaining physical laws, not yet known to us today.

The idea of underlying laws that are completely mechanical, while what we currently know as quantum mechanics should be an emergent feature of the universe, has been suggested several times (Zuse 1969; Fredkin et al. 2003; Feinstein 2017), but there are deep problems with it, which will be addressed now.

Free Will

Note what has motivated the demands formulated in Sect. 3: unambiguity, simplicity, efficiency, and finiteness. In particular this last demand, finiteness, is not (yet?) completely implemented in the known laws of nature today. There are various things that can go out of control due to infinities. In quantum field theories, we managed to keep one kind of infinity under control: the infinities in the quantum amplitudes and all physical effects associated with them. This means that the effects of forces in the theory stay finite and computable.

This is important. However, we always need to restrict ourselves to approximations, in this case, perturbation expansion techniques. An infinity that we left aside because, on the face of things, it did no harm, is the infinity of all the relevant dynamical physical variables. For us this seems to cause no problems, as long as our integrals converge, but for a ‘God’ who wishes to stay in control of everything, this is not an option: the total number of independent variables must be finite. His laptop must be able to compute exactly what happens in a finite stretch of time. Here, our arguments seem to favour a universe that is spatially compact rather than unbounded.

The reader might accuse me of an ill-motivated religious standpoint, but we do note that the rules we arrived at by using this standpoint are remarkably effective in generating laws of nature that are known to work quite well.

Then, the reader might point out that no classical laptop at all can compute quantum mechanical amplitudes with infinite precision, and insist that a quantum laptop would be needed. This, however, might be the result of an elementary incompleteness in our present understanding of quantum mechanics, as we argued some time ago (’t Hooft 2016). There is every reason to suspect that a novel theory underlying quantum mechanics will be required. To satisfy our demand of unambiguity, all phenomena must be entirely computable, not left to chance.

If this is right, the laws of nature we arrive at leave no room for two things:
  • divine intervention, and

  • free will.

We claim that there would be very little justification for the existence of either. If we allowed for divine intervention, for instance in all quantum mechanical phenomena, our theory would leave such a gigantic amount of arbitrariness in its prescriptions that all the laws of physics would seem to be there for no particular reason. We would find ourselves back at square one. As for free will, the argument is very similar. If quantum mechanics left room for free will, there would be far too much room for it. There is every reason to suspect that today’s voids in the theory of quantum mechanics will be filled by additional laws.

Most importantly, quantum mechanics itself can be used to show us how the voids might be filled in. It is not hard to imagine versions of our dynamical theories where quantum mechanics as it appears today can be aptly described in classical terminology, but we need these missing laws.

It is important to note that quantum mechanics accurately predicts the statistics we observe when experiments are repeated many times. If there are additional laws that decide about individual events, these laws must reproduce the statistics as predicted by quantum mechanics alone. This implies that the question whether the additional laws exist or not will not be decidable experimentally. Physicists who are content with a theory that never gives better answers than statistical ones will categorically reject speculations concerning hidden variables, but religious people who assume that our universe is reigned over by some God should require quantum mechanics to be supplemented with theories of evolving hidden degrees of freedom, in such a way that all events that take place can be attributed to something that has happened nearby in the past.

Whether divine intervention takes place or not, and whether our actions are controlled by ‘free will’ or not, will never be decidable in practice. It is thus suggested here that, since we succeeded in guessing the reasons for many of nature’s laws, we may as well assume that the remaining laws, to be discovered in the near or distant future, will also be found to agree with similar fundamental demands. Thus, our suspicion of the absence of free will can be used to guess how to take the next step in our science.

Quantum Mechanics

Today’s scientists have not yet reached that point. All dynamical laws in the world of (sub)atomic particles have been found to be controlled by quantum mechanics. Quantum mechanics appears to add a sense of ‘uncertainty’ to all dynamical variables describing these particles: the positions and momenta of particles cannot be sharply defined at the same time, the components of the spin vector contain a similar notion of uncertainty, and sometimes the creation of a particle can be confused with the annihilation of an antiparticle—and so on. This is actually not a shortcoming of the theory, because in spite of these apparent uncertainties, the statistical properties of the elementary particles can be determined very precisely. So what is going on?

The rules according to which quantum mechanics works are precisely formulated in what is sometimes called the Copenhagen interpretation. This is not the place to explain what the Copenhagen rules are, but they can be summarised by stating that the behaviour of a particle can be described as completely as if there were no ‘uncertainties’ at all. Instead, we have variables that do not commute:
$$x \cdot p - p \cdot x = [\,x, p\,] = \mathrm{i}\hbar\;.$$
(1)

In practice, what this means is that, when a system is described quantum mechanically, we apply a number system that is more general than in classical physics, but just as applicable. In fact, this number system is more useful than the old, commuting numbers, when it is used to refer to quantities that are quantised, i.e., that only come in integer multiples of fixed packages.
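This more general number system can be made concrete in a small numerical sketch. Below, a finite truncation of the standard harmonic-oscillator ladder operators is used to build matrices $x$ and $p$ that realise the commutation relation (1) with $\hbar = 1$; the truncation size `N` and all variable names are illustrative choices, not part of the text.

```python
import numpy as np

# Toy check of the commutation relation x*p - p*x = i*hbar (with hbar = 1),
# using a finite N x N truncation of the harmonic-oscillator ladder operators.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator
x = (a + a.conj().T) / np.sqrt(2)            # position
p = 1j * (a.conj().T - a) / np.sqrt(2)       # momentum

comm = x @ p - p @ x
# Away from the truncation edge, the commutator is i times the identity:
print(np.round(comm[:N-1, :N-1], 10))
```

Only the last diagonal entry deviates, an artefact of cutting the infinite matrices off at finite size; in the full, infinite-dimensional system the relation holds exactly.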

Thus, imagine a system that allows its dynamical variables only to occur in distinct states, typically indicated by integer numbers, $$|1\rangle, |2\rangle, |3\rangle$$ …. The non-commuting numbers that we use can then be called observables when they simply describe the state the system is in, like indicating the value that a particular integer has. When they are applied to replace a state by another state, they are called operators; for instance, $$a|n\rangle = |n - 1\rangle$$. The manipulations we use to handle these numbers act the same way regardless of whether we are dealing with observables or operators. This makes quantum mechanics extremely flexible, but it sometimes obscures the situation when we are unable to distinguish observables from operators.
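The distinction between observables and operators can be illustrated on a four-state toy system; the matrices below are a minimal sketch, with the shift operator taken literally as $a|n\rangle = |n-1\rangle$ as in the text (normalisation factors omitted).

```python
import numpy as np

# A four-state system with basis states |1>, |2>, |3>, |4>.
dim = 4
basis = np.eye(dim)                       # column k-1 represents |k>

# An observable: the diagonal matrix whose eigenvalue on |n> is the integer n.
n_obs = np.diag(np.arange(1, dim + 1))

# An operator: the pure shift a, with a|n> = |n-1> (and a|1> = 0).
a = np.diag(np.ones(dim - 1), k=1)

state = basis[:, 2]                       # the state |3>
print(n_obs @ state)                      # 3 * |3>: reads off the integer
print(a @ state)                          # |2>: replaces the state by another
```

Both objects are handled by the same matrix algebra, which is exactly the point made above: the formal manipulations do not distinguish between the two roles.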

A quantum transformation replaces observables by operators, or more often, mixes the two types completely. One ends up having to use wave functions to describe the states a system can be in. The beauty of this formalism is that such numbers, the non-commuting numbers, regain continuity even if the original system was discrete, and thereby allow us to use the machinery of advanced mathematics. This leads to such powerful results that few physicists are ready to return to the original system of discrete physical states describing ‘reality’. After most quantum transformations, reality is replaced by the more abstract notion of a wave function. This notion only appears to be abstract if we ask ‘What is going on here?’, but in practice serves us very well if we only ask ‘What will the result of this experiment be?’

In fact, according to the Copenhagen interpretation, questions such as ‘What is going on here?’ are ill-posed questions, as they cannot be answered by doing experiments. In practice, therefore, we usually refrain from asking such questions. All that matters is the reproduction of the answers given by experiments.

Nevertheless, our question ‘What is going on here?’ is not ill-posed. We can always attempt to find answers of principle: we do not know what is going on here, but we can imagine very precisely what it could be. Something is going on, and the assumption that there is something going on that might explain what happens next, even if we cannot be certain what it was, may be used as an important constraint in constructing theories. A typical example is the Standard Model of the sub-atomic particles. This model was established partly by doing experiments with elementary particles, but also by imagining how these particles should behave. It makes sense to use the assumption that every particle behaves in a completely deterministic way, even though its behaviour cannot be completely determined by any known observation technique. The assumption that particles behave in such a way that a completely deterministic theory is responsible is not a crazy assumption, but it requires guesswork. In principle, such guesses could help us to guess correctly what the next stage for the Standard Model might be.

Bell’s Theorem

The reasoning summarised above, which we explained more elaborately in (’t Hooft 2016), may seem to be logical, yet it is nearly universally rejected by researchers in quantum mechanics. The reason is that there seems to exist a rigorous proof of the contrary statement: experiments can be carried out, for which standard quantum theory provides very firm predictions concerning their outcomes, predictions that have indeed been confirmed by experiment, while they do not allow for any ‘ontological’ description at all. The question ‘What is going on here?’ cannot be answered at all without running into apparent contradictions.

For a complete description of J. S. Bell’s Gedanken experiment, we refer to the literature, and references therein (Bell 1964; Clauser et al. 1969). Here, we summarise. Bell’s starting point is that experimenters can put small particles in any quantum state they like, and this appears to be true in most cases. In particular, we can put a pair of particles in an entangled quantum state. Photons, for example, are described not only by a plane wave that determines in which direction a photon goes, but also by their polarisation state. Photons can be linearly polarised or circularly polarised, but there are always exactly two possibilities for the polarisation: vertically or horizontally, or alternatively, left or right circularly polarised.

An atom can be put in an excited state in such a way that it emits two photons, which, together, form only one possible quantum state: if one photon is found to be vertically polarised, the other will necessarily be vertically polarised as well, and if one photon is circularly polarised to the left, the other is polarised to the left as well. In this case, the two photons form an entangled state. Together, however, this is only a single, allowed quantum state that the pair of photons can be in.
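The fact that this is a single quantum state, identical in every linear polarisation basis, can be checked directly. The sketch below verifies that the (unnormalised) entangled state $|HH\rangle + |VV\rangle$ is invariant under a common rotation of both photons' polarisation bases; variable names are ours.

```python
import numpy as np

# The two-photon entangled state |HH> + |VV> (unnormalised; a factor
# 1/sqrt(2) would normalise it).
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
pair = np.kron(H, H) + np.kron(V, V)

def rot(theta):
    """Rotation of the linear polarisation basis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta = 0.3                              # any angle gives the same result
rotated = np.kron(rot(theta), rot(theta)) @ pair
print(np.allclose(rotated, pair))        # True: the state is rotation invariant
```

This rotational invariance is why, whichever axis one photon is measured along, the other photon is found polarised along the same axis.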

Far from the decaying atom, Bell now imagines two detectors, called Alice and Bob, each monitoring one of the photons. They both work with linear polarisation filters, checking the polarisation of the photon that they found. They do a series of experiments, and afterwards compare their results. They do not disclose in advance how they will rotate their polarisation filters. Now, whenever the two polarisation filters happen to be aligned, it turns out that they both measure the same polarisation of their photons. When the two polarisation filters form an angle of $$45^{\circ}$$, they find the two photons to be totally uncorrelated. But when the relative angle is $$22.5^{\circ}$$ or $$67.5^{\circ}$$, they find a relatively high correlation of the two polarisations. In classical physics, no simple model can be constructed that reproduces this kind of correlation pattern.
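The pattern described here follows from the standard quantum prediction for photon pairs: counting agreement as $+1$ and disagreement as $-1$, the expected correlation is $E = \cos(2\Delta\theta)$, where $\Delta\theta$ is the relative filter angle. The function name below is ours.

```python
import numpy as np

# Quantum prediction for the correlation between Alice's and Bob's
# outcomes as a function of the relative filter angle (in degrees).
def correlation(delta_deg):
    return np.cos(2 * np.radians(delta_deg))

for delta in [0.0, 22.5, 45.0, 67.5]:
    print(f"{delta:5.1f} deg -> E = {correlation(delta):+.3f}")
```

At $0^{\circ}$ the outcomes agree perfectly ($E = 1$), at $45^{\circ}$ they are uncorrelated ($E = 0$), and at $22.5^{\circ}$ and $67.5^{\circ}$ the correlation has the large magnitude $\tfrac{1}{2}\sqrt{2} \approx 0.71$, positive in one case and negative in the other.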

The only way to describe a conceivable model of ‘what really happens’, is to admit that the two photons emitted by this atom know in advance what Bob’s and Alice’s settings will be, or that, when doing the experiment, Bob and/or Alice know something about the photon or about the other observer. Phrased more precisely, the model asserts that the photon’s polarisation is correlated3 with the filter settings later to be chosen by Alice and Bob. We can compute what kind of correlation is needed. One finds that the correlation is a pure three-body correlation: if we average over all possible polarisations of the photon pair, we find that Alice’s and Bob’s settings are uncorrelated. If we average over all possible settings Alice can choose, then Bob’s settings and the polarisation of the photons are again uncorrelated, and vice versa.

But this three-body correlation is said to be impossible. How can the photons know, in advance, what Bob and Alice will do? In deriving his inequalities, Bell, and later Clauser, Horne, Shimony, and Holt (Clauser et al. 1969), assumed that the polarisation state of the entangled photons was independent of the settings chosen by Alice and Bob. This assumption has been discussed many times in the literature, and it was generally concluded to be inevitable. But, as we shall argue, such correlations can exist, and they must be strong. This invalidates the derivation of the Bell and CHSH inequalities, and that is why the inequalities can be violated. What remains to be done is to explain how this could have happened, because the required correlations seem to run against common sense; they appear to contradict the notion that, after the photons have been emitted, both Alice and Bob have the free will to choose any setting they like. But have they?
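The size of the conflict can be made quantitative. The CHSH combination $S = E(a,b) - E(a,b') + E(a',b) + E(a',b')$ obeys $|S| \le 2$ for any model satisfying the independence assumption, while the quantum prediction $E = \cos(2(a-b))$ for photons reaches $2\sqrt{2}$ at the standard settings; the short computation below checks this.

```python
import numpy as np

# Quantum correlation for photon pairs at filter angles a and b (degrees).
def E(a_deg, b_deg):
    return np.cos(2 * np.radians(a_deg - b_deg))

# Standard CHSH settings for Alice (a, a2) and Bob (b, b2).
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)          # about 2.828 = 2*sqrt(2), exceeding the classical bound 2
```

The experimentally confirmed value $2\sqrt{2}$ is what rules out all models that keep the independence assumption; dropping that assumption, as argued here, is what reopens the question.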

It is easy to say that they have not. If we adhere to a deterministic model, it is clear that the polarisation of the photon, as well as the settings chosen by Alice and Bob, have been determined by the initial state of the universe, together with deterministic equations of motion. But this is not the complete answer to our problem. How do we make a model for these photons?

Apparently, what quantum mechanics dictates is a strong 3-body correlation. The three points in space and time that are correlated may well all be spatially separated from one another. This means that no signal can have been transmitted from one to the other, but this is not a problem. It is well known in the quantum theory of sub-atomic particles that correlations need not vanish outside the light cone.4 The real problem here is that Alice’s and Bob’s settings are classical, and the quantized atom was there first. What kind of model can bring about such strong correlations, even if they are 3-point correlations, when two of the variables considered are classical?

If this is the way to look at the problem raised by Bell’s theorem, we can limit ourselves to a more elementary question. Consider just a single, polarised photon. It may have been emitted by some quasar, billions of years ago. An observer detects it after it has passed a polarisation filter. The photon either passes or it does not. In both cases, the ‘true polarisation state’ of the photon was either in line with the observer’s filter, or orthogonal to it, but not in any other direction. It seems as if the quasar, billions of years ago, already knew that these were the two polarisation directions the photon had to choose from. This will be a strange aspect of any model that we might want to apply.

And now for what this author believes to be the correct answer, both for the single photon problem and the Bell experiment. Our theory is that there does exist a true, ontological state, for all atoms and all photons to be in. All ontological states form an orthonormal set, the elements of an ontological basis. The universe started out in such a state, and its evolution law is such that, at all times in the future, the universe will still be in an ontological state. Regardless of which ontological initial state we start from, the state in the future will be an ontological one as well, that is, not a quantum superposition of different ontological states. What we have here, is a conservation law, the conservation of ontology. It selects out which quantum superpositions can be allowed and which not, just because, according to our model, the evolution law is ontological.
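A minimal sketch of this conservation law: suppose the evolution over one time step is a permutation of the ontological basis states. A permutation matrix is unitary, so it defines a perfectly legitimate quantum evolution, yet it can never turn a basis state into a superposition of basis states. The four-state example below is a toy illustration of ours, not a model from the text.

```python
import numpy as np

# Toy model of the conservation of ontology: evolution by a permutation
# of four ontological basis states.
perm = [2, 0, 3, 1]                       # state k evolves into state perm[k]
U = np.zeros((4, 4))
for k, target in enumerate(perm):
    U[target, k] = 1.0

# The permutation matrix is unitary, hence a valid quantum evolution law:
assert np.allclose(U @ U.T, np.eye(4))

state = np.eye(4)[:, 0]                   # start in ontological state |0>
for _ in range(3):
    state = U @ state
    print(state)                          # always a single basis vector
```

At every step the state remains a single basis vector; superpositions of ontological states are never generated, which is the toy analogue of the claim that the universe, once in an ontological state, stays in one.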

An ontological photon can be polarised in any sort of way, but it cannot evolve into any superposition of ontological states, and this law is universal, it holds for all states the universe can be in. The outcomes of both Alice’s and Bob’s measurements are ontological, so this ensures that the photons they look at, including ones that can have travelled billions of years, have been ontological at all times. What is not widely known is that this conservation rule is also respected by the Schrödinger equation, so that no modification of quantum mechanics is necessary.

The effect of this law is so strong that it looks like ‘conspiracy’, but it involves no more conspiracy than the law of conservation of angular momentum. The correlation function needed in a simple model for Bell’s Gedanken experiment was calculated in (’t Hooft 2016). Our argument is similar to several raised earlier, such as (Vervoort 2013).

Conclusion

The author agrees with Bell’s and CHSH’s inequalities, as well as their conclusions, given their assumptions. We do not agree with the assumptions, however. The main assumption is that Alice and Bob choose what to measure, and that this should not be correlated with the ontological state of the entangled particles emitted by the source. However, when either Alice or Bob change their minds ever so slightly in choosing their settings, they decide to look for photons in different ontological states. The free will they do have only refers to the ontological state that they want to measure; this they can draw from the chaotic nature of the classical underlying theory. They do not have the free will, the option, to decide to measure a photon that is not ontological. What will happen instead is that, if they change their minds, the universe will go to a different ontological state than before, which includes a modification of the state it was in billions of years ago.5 Only minute changes were necessary, but these are enough to modify the ontological state the entangled photons were in when emitted by the source.

More concretely perhaps, Alice’s and Bob’s settings can and will be correlated with the state of the particles emitted by the source, not because of retrocausality or conspiracy, but because these three variables do have variables in their past light cones in common. The change needed to realise a universe with the new settings must also imply changes in the overlapping regions of these three past light cones. This is because the universe is ontological at all times.