Cosmology is a funny kind of science. At one end of the spectrum, cosmologists can explain vast swathes of the Universe with remarkable accuracy. Unfortunately, at the point where this explanation breaks down, cosmological theories don't fail gently, letting you coast to a safe halt on the shoulder of the inflationary-universe highway. No, three tires blow out at once, leaving you spinning toward a high-velocity crash barrier, followed by an unsympathetic truck.
How does this happen? It comes down to the problem that quantum mechanics and gravity do not share a common theoretical framework. To describe the very earliest times of the Universe, a unified theory of quantum mechanics and gravity appears to be necessary, but there are many competing ideas for how to achieve this. Now, scientists have had a chance to digest data from the Planck mission to explore the cosmic microwave background, and some of these ideas are being tested against this new data.
Unfortunately, the test results are not as clear as we'd like them to be.
Who sent Planck on a mission?
The Planck satellite was launched by the European Space Agency in 2009. It followed in the path of NASA's Wilkinson Microwave Anisotropy Probe, mapping the cosmic microwave background at higher resolution and sensitivity. The cosmic microwave background is light from the earliest moments of the Universe. It is light that comes to us from the moment the Universe went from an opaque plasma (a gas of charged particles, like protons and electrons) to a transparent neutral gas, mostly hydrogen with some helium and a trace of lithium thrown in.
For the most part, the cosmic microwave background is uniformly dull, like the glow of a light through frosted glass. On close inspection, though, the glow varies very slightly depending on the direction we look. These slight variations are our window into what the Universe was doing before it became transparent. The survival of these variations tells us that the Universe had to undergo a period of extremely fast expansion. It also tells us that the Universe was not an evenly distributed ball of particles; rather, there were density fluctuations that set the stage for stars and galaxies to form.
The Planck mission ended in 2013, but data processing and validation take a long time. The full Planck data was only released in 2015, and scientists have been picking over the details ever since. This data, along with new data trickling in from BICEP2 and other recent experiments, is the best that cosmologists have for distinguishing between different theories of quantum gravity.
Cold, dark theories
All our observational evidence is dominated by events that happened late enough in the life of the Universe that quantum gravity was no longer required. It is therefore difficult to link this observational data directly to specific aspects of a given theory, as it holds only indirect hints of events that happened earlier in time. There are challenges on the theory side, too. To test a theory against data, you have to extract specific predictions from it. That turns out to be rather challenging, because many parts of these theories are mathematically intractable.
For the new analysis, the two theories that the researchers compared were the "standard" approach to quantum gravity and holographic theory. The standard approach makes use of quantum field theory, which, when simplified to the point of predictions, results in a range of "cold dark matter plus inflation" models.
These models are already known to fit the Planck data superbly. But in recent years, there has been substantial progress in calculating some of the more annoying parts of quantum field theory. These are referred to as loop corrections, and they can be very difficult to compute. The researchers used an additional loop correction to constrain the parameters that go into the model, which narrowed the range of possible quantum field theory models that fit the data.
The theory doesn't mean it's a hologram
Holographic theory is not that different from quantum field theory. (Though, as with quantum field theory, I won't claim to understand it.) It seems to me that holographic theory is a computational tool that allows one to obtain solutions to the field theory equations. It differs from the standard approach in that it yields different solutions, so a different set of models can be compared. The idea behind the holographic principle is this: the Universe has, simplifying a bit, four dimensions (three of space, one of time) plus forces, including gravity. The holographic principle tells us that we can achieve the same physics by reducing the number of dimensions by one and removing gravity.
If you want to calculate something, you do it in the three-dimensional space without gravity, for which we have relatively well-behaved equations. Afterward, you transform back to four dimensions, and there is your answer. Except, as with the standard quantum field theories, most of the theory is intractable, so you can't grab answers from it directly; instead, you have to simplify. Nevertheless, holographic theory does allow you to predict the cosmic microwave background.
The question then is: does one of these theories do a better job than the other?
As you will be stunned to read, they are pretty much the same. Both fit the data very well. If you take the full Planck data set, quantum field theory does a better job. But you can't compare that result directly to the holographic approach, because the simplification used to obtain solutions from holographic theory breaks down for one part of the data.
By analyzing a subset of the Planck data, in which the results from holographic theory were not influenced by the simplification, the researchers found that holographic theory fit the data as well as, or maybe even marginally better than, quantum field theory. Unfortunately, that margin does not rise to statistical significance, so the best we can say is that holographic theory is as good as quantum field theory, at least when it isn't breaking down.
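To get a feel for what "fit as well, but not significantly better" means, here's a rough sketch of the kind of comparison involved: compute a goodness-of-fit statistic (chi-squared) for each model's prediction against the same data, and ask whether the difference between the two is large enough to matter. Everything here is invented for illustration; the numbers are toy values, not real Planck measurements, and the researchers' actual analysis was far more sophisticated.

```python
import math

def chi_squared(observed, predicted, sigma):
    """Goodness of fit: sum of squared, noise-weighted residuals."""
    return sum(((o - p) / s) ** 2
               for o, p, s in zip(observed, predicted, sigma))

# Toy "measurements" with uncertainties (hypothetical, not Planck data).
data    = [1.00, 0.95, 1.10, 1.02, 0.98]
sigma   = [0.05, 0.05, 0.05, 0.05, 0.05]
model_a = [1.01, 0.96, 1.08, 1.00, 0.99]  # stand-in for a QFT prediction
model_b = [1.02, 0.94, 1.07, 1.03, 0.97]  # stand-in for a holographic prediction

chi_a = chi_squared(data, model_a, sigma)
chi_b = chi_squared(data, model_b, sigma)

# A chi-squared difference much smaller than 1 per data point is
# statistically insignificant: neither model is preferred.
print(f"model A: {chi_a:.2f}, model B: {chi_b:.2f}, gap: {abs(chi_a - chi_b):.2f}")
```

The point of the sketch is the last comment: when two models land within a hair of each other on this kind of statistic, the honest conclusion is a tie, which is exactly where quantum field theory and holographic theory ended up on the restricted data set.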
In the end, what can we conclude? Some versions of quantum field theory do not fit the data, so they can be excluded. Simply put, that's the way it's going to be. Each new hit of data will allow us to restrict the allowable parameter space for the competing theories, slowly narrowing them down. I doubt there will ever be a moment when some new data lets us eliminate whole classes of models in one go.