Periodic Reporting for period 3 - Qosmology (Quantum Effects in Early Universe Cosmology)
Reporting period: 2021-09-01 to 2023-02-28
Why is it so hard to unify gravity and quantum theory? To illustrate this, it helps to recall the bizarreness of the quantum world. According to quantum theory, nature explores all possibilities to determine the probability with which an event occurs. So when a ball is thrown, absolutely all conceivable trajectories are considered, but only a few weigh in significantly. (In the case of the ball, in fact, only the familiar classical trajectory is relevant.) When applied to space-time, however, it gets tricky. All possible space-time evolutions must be considered, but the trouble is that on tiny scales, the fabric of space-time tends to become very messy to the extent that it is typically impossible to reliably calculate anything. And to understand the big bang, we would certainly need to know what happens with space-time on tiny scales.
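The "sum over all possibilities" described above is Feynman's path integral. Schematically, and in standard notation that the report itself does not spell out, the quantum amplitude for reaching a given final configuration is

```latex
A \;=\; \int \mathcal{D}[g]\; e^{\, i S[g]/\hbar} ,
```

where the integral runs over all space-time geometries \(g\) and \(S[g]\) is the gravitational action. Geometries near a stationary point of \(S\) (the classical solution) add up coherently, which is why the thrown ball effectively follows a single trajectory; on tiny scales, however, wildly fluctuating geometries contribute with comparable weight, and the integral becomes unmanageable.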
Faced with those difficulties, three cosmologists had a truly ingenious and elegant idea. They were James Hartle and Stephen Hawking, with their "no-boundary" proposal, and Alexander Vilenkin, with his "tunneling" model, both theories formulated in the early 1980s. The idea is that one should not consider all possible space-time evolutions but only those that have a smooth initial space-time geometry. You can picture this by imagining that the early universe was rounded off like the surface of a ball, not only in space but also in time. Time would have had no edge, hence the name of the proposal. This idea has two highly desirable consequences. The first is that it might actually allow one to calculate things: since the geometries are forced to be smooth initially, one might not have to deal with the small-scale messiness. The second is that, by providing a theory of what effectively replaces the big bang, it would tell us the most likely starting point of the universe. Still, until recently, it remained difficult to work out the true consequences of this idea. Despite the simplification of having to deal only with these "no-boundary" geometries, it is not easy to calculate how all the different space-time evolutions sum up, or to determine which ones are the most important. Some aspects of the calculations done since the 1980s were rigorous; others were based on guesses or, to make it sound less dubious, on "intuition."
About four years ago, my collaborators Job Feldbrugge and Neil Turok, both of the Perimeter Institute in Canada, and I realised that there exists a mathematical framework, developed by mathematicians over the last 100 years, that is perfectly suited to performing this kind of calculation reliably. This framework is called Picard-Lefschetz theory, after the two mathematicians who initiated the study of these techniques; only over the last decade have physicists become aware of its existence. Using these methods, we encountered a surprise. When we imposed the condition that the universe starts from zero size, the resulting universes (once they had grown to a large size) inevitably developed large fluctuations. In other words, the geometries that develop strong irregularities contributed the most to the final answer. This implies that one would get a highly crumpled universe popping out of nothing and presumably collapsing again right away, rather than expanding into the vast universe we know.
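To give a flavour of the method (this is a toy textbook illustration added here, not the project's actual computation): Picard-Lefschetz theory prescribes how to deform an oscillatory, barely convergent integral onto "steepest-descent" contours, along which the integrand decays and the dominant contributions become manifest. The simplest example is the Fresnel-type integral of exp(i x^2):

```python
import numpy as np

# Toy illustration of the contour deformation at the heart of Picard-Lefschetz
# theory (a standard textbook example, not the project's calculation).
# Target: I = integral of exp(i x^2) for x in [0, infinity), which oscillates
# forever along the real axis.  Rotating the contour to x = exp(i pi/4) * s
# turns it into a damped Gaussian integral:
#   I = exp(i pi/4) * integral of exp(-s^2) ds = exp(i pi/4) * sqrt(pi) / 2.

def fresnel_steepest_descent(n=100_000, s_max=10.0):
    """Evaluate I on the rotated (steepest-descent) contour via the midpoint rule."""
    ds = s_max / n
    s = (np.arange(n) + 0.5) * ds
    rotation = np.exp(1j * np.pi / 4)              # direction of the relevant contour
    return rotation * np.sum(np.exp(-s**2)) * ds   # dx = rotation * ds

exact = np.exp(1j * np.pi / 4) * np.sqrt(np.pi) / 2
approx = fresnel_steepest_descent()
print(abs(approx - exact))   # numerical error only; the rotated integral converges fast
```

On the rotated contour the integrand is a plain Gaussian, so even a simple quadrature rule converges rapidly; attempting the same integral along the real axis would require delicate cancellations between endlessly oscillating contributions.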
This was the puzzling situation we found ourselves in at the beginning of this project. The main aim of the project is to determine whether the no-boundary idea can be reformulated so that it works, or, failing that, to find alternative descriptions of the big bang and the emergence of space and time.
Our answer was to reformulate the no-boundary proposal: instead of demanding that the universe starts at zero size, one specifies its initial expansion rate. There are two consequences to this idea. The first is that, in principle, one now knows nothing about the initial size of the universe: since we specified the expansion rate, we could not simultaneously demand that the universe started at zero size, as this would have been incompatible with Heisenberg's uncertainty principle. However, according to the rules of quantum theory each universe gets a different probability, and it turns out that the universe that not only starts off smoothly but also happens to start at zero size is indeed the most likely one. This is a nice surprise, as it means that we truly have a theory of the initial conditions of the universe. The second aspect is that our condition that the expansion rate should be such that the universe is very smooth at the "beginning" demands that at the beginning there was only space, and no time. This is because the requirement of smoothly rounding off the universe is very much like demanding that the geometry is that of the surface of a ball, and the surface of a ball is evidently a surface in space. Thus, according to this theory, time must emerge from space. In other words, as the universe grows, one space direction turns into a time direction. This is a rather mysterious aspect of the no-boundary proposal, which we intend to study in more detail in the future.
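The "rounding off" can be made concrete in standard notation (which the report itself does not use). For a universe dominated by a cosmological constant, with Hubble rate \(H\), the no-boundary geometry is half of a round four-sphere,

```latex
ds^2 \;=\; d\tau^2 + \frac{1}{H^2}\sin^2(H\tau)\, d\Omega_3^2 , \qquad 0 \le \tau \le \frac{\pi}{2H},
```

in which all four directions, including \(\tau\), are spatial and the universe shrinks smoothly to zero size at \(\tau = 0\). At \(\tau = \pi/(2H)\) this geometry joins onto the Lorentzian, expanding de Sitter solution \(ds^2 = -dt^2 + H^{-2}\cosh^2(Ht)\, d\Omega_3^2\): the direction \(\tau\) has turned into the time direction \(t\), which is the sense in which time emerges from space.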
The description I gave above was a little oversimplified, in that it might have given the impression that the universe came out completely smooth, without any features. This is not quite true: small fluctuations are allowed to grow, it is just that they are most likely very small. However, this is just what is needed in order for structure (i.e. stars, galaxies, etc.) to form. We need primordial fluctuations, otherwise the universe would be completely empty. Hence it is a good thing that the no-boundary proposal allows such small fluctuations; in fact, it allows for precisely the right kind of fluctuations to explain the distribution of galaxies in our universe. Still, the no-boundary proposal is just an overall framework; the details of how such fluctuations develop can differ. Together with another member of the group, Jerome Quintin, I found a useful way of characterising the fluctuations stemming from different models. Different models can be thought of as giving different prescriptions for how to produce such fluctuations. These prescriptions can then be thought of as being like different computer programs for what the universe should do. In other words, one can say that in some sense the universe automatically executes a kind of computer program containing the instructions on how to produce these perturbations. But then one can ask: just how complicated would a computer have to be in order to reproduce what the universe did? This could not be an ordinary computer; rather, it would have to be a quantum computer, very much like those currently being built by the most advanced technology companies, just much more complicated. What Dr. Quintin and I calculated was just how complicated such a quantum computer would have to be to simulate different models of the fluctuations in the early universe. We found that overall such computers would have to be pretty complicated, but that different models would require vastly different sizes of quantum computers.
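As a rough, hypothetical sketch of how such a complexity estimate can scale (the formulas below are order-of-magnitude textbook statements, not the actual calculation by Dr. Quintin and myself): cosmological perturbations end up in squeezed quantum states, with a mode of wavenumber k acquiring a squeezing parameter of roughly ln(aH/k) after it exits the horizon, and in many definitions the circuit complexity of preparing a Gaussian state grows roughly linearly with the squeezing.

```python
import numpy as np

# Hypothetical toy estimate (NOT the computation described in the text), assuming:
#   (i)  a mode of comoving wavenumber k gets squeezing r_k ~ ln(a H / k)
#        once it is outside the horizon;
#   (ii) per-mode circuit complexity grows linearly with r_k.

def squeezing(a, H, k):
    """Toy squeezing parameter of a mode of comoving wavenumber k."""
    return max(np.log(a * H / k), 0.0)   # zero before horizon exit

def toy_complexity(a, H, ks):
    """Toy total complexity: sum of per-mode squeezings (arbitrary units)."""
    return sum(squeezing(a, H, k) for k in ks)

ks = np.logspace(-3.0, 0.0, 50)           # illustrative range of wavenumbers
c1 = toy_complexity(a=1.0, H=1.0, ks=ks)
c2 = toy_complexity(a=10.0, H=1.0, ks=ks)
print(c1, c2)                             # complexity grows as the universe expands
```

Even in this crude sketch the complexity depends on the assumed prescription for the per-mode squeezing, which is the sense in which different models of the early universe can demand vastly different quantum computers.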
This provides a new way of classifying early universe models. What we do not know at present is whether it would be better or worse to have a model that is complicated to simulate - this is a question that remains for the future.
Our main advance was to find a good mathematical description of quantum theory applied to the big bang. Now the big question is what the result actually means for observations of the universe. Quantum theory gives probabilities for various outcomes, and this is well understood when applied to experiments on Earth, where one has a laboratory in which everything can be prepared as desired, and where (at least in principle) experiments can be repeated as often as one likes. But in cosmology it is much trickier to work out what a quantum description of the universe means: after all, we are immersed in the universe (i.e. we are inside the experiment), and moreover we only have one universe, so the experiment cannot be repeated. In this context it remains unresolved how to interpret the quantum description, known as the wave function of the universe. One long-term goal of this project is to make progress on this thorny issue. A further, and probably related, question is how, out of this quantum beginning, the universe came to behave classically. In other words, even though the beginning of the universe was likely a highly quantum event, we know that the later universe can be described very well by classical physics. But exactly how and when did the transition from quantum to classical physics occur? And are there any remnants of the quantum era that left an imprint we can look out for with telescopes and satellites?
A final point of great interest for this project is the question of whether different consistent theories of the beginning of the universe can be formulated. So far we have made good progress on the so-called no-boundary proposal of Hartle and Hawking, which can be regarded as the best-understood theory of initial conditions. But perhaps other such theories are possible. In fact, we have already found a different possible theory of initial conditions in the context of a special theory of gravity known as quadratic gravity. It will be worthwhile to study such alternative proposals further, to see if there are different ways in which the universe could have come into being. String theory, a promising candidate for combining relativity with quantum physics, suggests that there might be rather different possibilities: for example, the universe might have existed without being describable in terms of local geometry, i.e. in terms of what happens at specific locations in space and time; instead, there could have been a phase in which only the overall shape of the universe was important. Out of this phase the standard expanding universe would then have emerged, and this emergence would have looked like the big bang to us. At the moment such ideas are highly speculative, but they show that we must be ready to explore entirely new avenues. We will keep our minds open!