The “information paradox” surrounding black holes has sucked in many noteworthy physicists over the years. For more than three decades Stephen Hawking of Cambridge University in the UK insisted that any information associated with particles swallowed by black holes is forever lost, even though this violates the rule of quantum mechanics that information cannot be destroyed. When, four years ago, Hawking famously made a volte-face and conceded that information can be recovered after all, not everyone was convinced.

 

The latest buzzword on the astro-black-hole scene is - firewalls. It is a theoretical idea related to the black hole horizon. If you have read an article about black holes, you probably came away with the impression that nothing really special happens to an infalling observer when he or she crosses the event horizon. You are not aware of it, but there is no return from that point; you only get torn apart much later, once you approach the black hole singularity. Then, in the summer of 2012, a new twist was introduced. Ahmed Almheiri, Donald Marolf, Joseph Polchinski and James Sully wrote a paper titled "Black Holes: Complementarity or Firewalls?" in which they argue that the following three statements cannot all be true:

  • Hawking radiation is in a pure state,
  • the information carried by the radiation is emitted from the region near the horizon, with low energy effective field theory valid beyond some microscopic distance from the horizon, and
  • the infalling observer encounters nothing unusual at the horizon.

 

Perhaps the most conservative resolution is that the infalling observer burns up at the horizon. Alternatives would seem to require novel dynamics that nevertheless cause notable violations of semiclassical physics at macroscopic distances from the horizon. The authors considered thought experiments with entangled qubits falling into the black hole - constructed out of the s-wave or other waves in the spherical harmonic decomposition - and concluded that the only sensible conclusion is that when a black hole becomes "old" (i.e. when it has radiated away one-half of its initial Bekenstein-Hawking entropy), its event horizon turns into a firewall that destroys everything that reaches it. Polchinski wrote a guest post over at Discover and shared more details on this idea. It is fair to say that the new idea comes from the urge to simultaneously satisfy the demands of quantum mechanics and the aspiration that black holes not destroy information.
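For reference, the Bekenstein-Hawking entropy that defines this halfway point is the standard area law; for a Schwarzschild black hole of mass M it reads (a textbook formula, quoted here for convenience rather than taken from the firewall paper):

S_{\rm BH} = \frac{k_B c^3 A}{4 \hbar G}, \qquad A = \frac{16 \pi G^2 M^2}{c^4} \quad\Rightarrow\quad S_{\rm BH} = \frac{4 \pi k_B G M^2}{\hbar c}.

An "old" black hole in this sense is one whose remaining horizon entropy has fallen to half of its initial value.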

 


 

The story starts with another thought experiment, made by Stephen Hawking in 1976, in which he envisioned a black hole forming from ordinary matter and then evaporating into radiation via the process he had discovered two years earlier. According to the usual laws of quantum mechanics, the state of a system at any time is described by a wavefunction. Hawking argued that after the evaporation there is no definite wavefunction, but just a density matrix. Roughly speaking, this means that there are many possible wavefunctions, each with some probability (this is also known as a mixed state). In addition to the usual uncertainty that comes with quantum mechanics, there is the additional uncertainty of not knowing what the wavefunction is: information has been lost.

Hawking had thrown down a gauntlet that was impossible to ignore, arguing for a fundamental change in the rules of quantum mechanics that allowed information loss. A common reaction was that he had simply not been careful enough, and that, as for ordinary thermal systems, the apparent mixed nature of the final state came from not keeping track of everything rather than from a fundamental property. But a black hole is different from a lump of burning coal: it has a horizon beyond which information cannot escape, and many attempts to turn up a mistake in Hawking's reasoning failed. If ordinary quantum mechanics is to be preserved, the information behind the horizon has to get out, but this would be tantamount to sending information faster than light. Eventually it came to be realized that quantum mechanics in its usual form could be preserved only if our understanding of spacetime and locality broke down in a big way. So Hawking may have been wrong about what had to give (and he conceded in 2004), but he was right about the most important thing: his argument required a change in some fundamental principle of physics.
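To make the pure-versus-mixed distinction concrete, here is a minimal single-qubit illustration (my own textbook example, not part of Hawking's argument):

\rho_{\rm pure} = |\psi\rangle\langle\psi| \quad \text{with} \quad |\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|0\rangle + |1\rangle\right), \qquad \mathrm{Tr}\,\rho_{\rm pure}^2 = 1,

\rho_{\rm mixed} = \tfrac{1}{2}|0\rangle\langle 0| + \tfrac{1}{2}|1\rangle\langle 1|, \qquad \mathrm{Tr}\,\rho_{\rm mixed}^2 = \tfrac{1}{2} < 1.

The trace of the squared density matrix equals 1 only for a pure state; anything smaller signals a genuine mixture. Hawking's claim was that evaporation takes a pure initial state into something like the mixed state above, which unitary evolution can never do.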

 


 

To get a closer look at the argument for information loss, suppose that an experimenter outside the black hole takes an entangled pair of spins and throws the first spin into the black hole. The equivalence principle tells us that nothing exceptional happens at the horizon, so the spin passes freely into the interior. But now the outside of the black hole is entangled with the inside, and by itself the outside is in a mixed state. The spin inside can’t escape, so when the black hole decays, the mixed state on the outside is all that is left. In fact, this process is happening all the time without the experimenter being involved: the Hawking evaporation is actually due to production of entangled pairs, with one of each pair escaping and one staying behind the horizon, so the outside state always ends up mixed.
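In equations, the same step is a standard partial-trace computation (the labels are mine):

|\Phi\rangle = \tfrac{1}{\sqrt{2}}\left(|\uparrow\rangle_{\rm in}|\uparrow\rangle_{\rm out} + |\downarrow\rangle_{\rm in}|\downarrow\rangle_{\rm out}\right), \qquad \rho_{\rm out} = \mathrm{Tr}_{\rm in}\,|\Phi\rangle\langle\Phi| = \tfrac{1}{2}\left(|\uparrow\rangle\langle\uparrow| + |\downarrow\rangle\langle\downarrow|\right).

Tracing over the spin behind the horizon leaves the outside in the maximally mixed state, and every Hawking pair contributes in exactly the same way; when the hole finally disappears, this mixedness is all that is left.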

 

A couple of outs might come to mind. Perhaps the dynamics at the horizon copies the spin as it falls in and sends the copy out with the later Hawking radiation. However, such copying is not consistent with the superposition principle of quantum mechanics; this is known as the no-cloning theorem. Or, perhaps the information inside escapes at the last instant of evaporation, when the remnant black hole is Planck-sized and we no longer have a classical geometry. Historically, this was the third of the main alternatives: (1) information loss, (2) information escaping with the Hawking radiation, and (3) remnants, with subvariations such as stable and long-lived remnants. The problem with remnants is that these very small objects would need an enormous number of internal states, as many as the original black hole, and this leads to its own problems.
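The no-cloning theorem is a one-line consequence of linearity; a standard sketch (a generic quantum-mechanical fact, not specific to black holes):

Suppose a unitary U could copy an arbitrary qubit, U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle. Linearity then forces

U\left(\alpha|0\rangle + \beta|1\rangle\right)|0\rangle = \alpha|0\rangle|0\rangle + \beta|1\rangle|1\rangle,

whereas a true copy of the superposition would be

\left(\alpha|0\rangle + \beta|1\rangle\right)\otimes\left(\alpha|0\rangle + \beta|1\rangle\right) = \alpha^2|0\rangle|0\rangle + \alpha\beta\,|0\rangle|1\rangle + \alpha\beta\,|1\rangle|0\rangle + \beta^2|1\rangle|1\rangle.

The two agree only when \alpha or \beta vanishes, so no horizon dynamics can duplicate a generic infalling state into the outgoing radiation.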

 


 

In 1993 Lenny Susskind (working with Larus Thorlacius and John Uglum and building on ideas of Gerard ‘t Hooft and John Preskill) tried to make precise the kind of nonlocal behavior that would be needed in order to avoid information loss. Their principle of black hole complementarity requires that different observers see the same bit of information in different places. An observer outside the black hole will see it in the Hawking radiation, and an observer falling into the black hole will see it inside. This sounds like cloning, but it is different: there is only one bit in the Hilbert space, we just cannot say where it is; locality is given up, not quantum mechanics. Another aspect of the complementarity argument is that the external observer sees the horizon as a hot membrane that can radiate information, while an infalling observer sees nothing there. In order for this to work, it must be that no observer can see the bit in both places, and various thought experiments seemed to support this.

 


At the time, this seemed like an intriguing proposal, but not convincingly superior to information loss or remnants. But in 1997 Juan Maldacena discovered AdS/CFT duality, which constructs gravity in a particular kind of spacetime box, anti-de Sitter space, in terms of a dual quantum field theory. You will find more details about it here.

 

The dual description of a black hole is in terms of a hot plasma, supporting the intuition that a black hole should not be so different from any other thermal system. This dual system respects the rules of ordinary quantum mechanics, and does not seem to be consistent with remnants, so we get the information out with the Hawking radiation.

 

This is consistent too with the argument that locality must be fundamentally lost: the dual picture is holographic, formulated in terms of field theory degrees of freedom that are projected on the boundary of the space rather than living inside it. Indeed, the miracle here is that gravitational physics looks local at all, not that this sometimes fails. AdS/CFT duality was discovered largely from trying to solve the information paradox. After Andy Strominger and Cumrun Vafa showed that the Bekenstein-Hawking entropy of black branes could be understood statistically in terms of D-branes, people began to ask what happens to the information in the two descriptions, and this led to seeming coincidences that Maldacena crystallized as a duality. As with a real experiment, the measure of a thought experiment is whether it teaches us about new physics, and Hawking's had succeeded in a major way.

 

For AdS/CFT, there are still some big questions: precisely how does the bulk spacetime emerge, and how do we extend the principle out of the AdS box, to cosmological spacetimes? Can we get more mileage here from the information paradox? We seem to know now that the information gets out, but we do not know the mechanism - the point at which Hawking's original argument breaks down. And it seemed that we no longer had the kind of sharp alternatives that drove the information paradox. Black hole complementarity, though it did not provide a detailed explanation of how different observers see the same bit, seemed to avoid all paradoxes.

 

Earlier this year, Polchinski and his students Ahmed Almheiri and Jamie Sully set out to sharpen the meaning of black hole complementarity, starting with some simple bit models of black holes that had been developed by Samir Mathur and Steve Giddings. But they quickly found a problem. Susskind had nicely laid out a set of postulates, and they found that these could not all be true at once. The postulates are

(a) Purity: the black hole information is carried out by the Hawking radiation,

(b) Effective Field Theory (EFT): semiclassical gravity is valid outside the horizon, and

(c) No Drama: an observer falling into the black hole sees no high energy particles at the horizon.

 

EFT and No Drama are based on the fact that the spacetime curvature is small near and outside the horizon, so there is no way that strong quantum gravity effects should occur. Postulate (b) also has another implication: the external observer interprets the information as being radiated from an effective membrane at (or microscopically close to) the horizon. This fits with earlier observations that the horizon has effective dynamical properties like viscosity and conductivity. Purity has an interesting consequence, which was developed in a 1993 paper of Don Page and further in a 2007 paper of Patrick Hayden and John Preskill. Consider the first two-thirds of the Hawking photons and then the last third. The early photons have vastly more states available, so in a typical pure state every possible state of the late photons will be paired with a different state of the early radiation. We say that any late Hawking photon is fully entangled with some subsystem of the early radiation. However, No Drama requires that this same Hawking mode, when it is near the horizon, be fully entangled with a mode behind the horizon. This is a property of the vacuum in quantum field theory: if we divide space into two halves (here at the horizon), there is strong entanglement between the two sides. The authors used the EFT assumption implicitly in propagating the Hawking mode backwards from infinity, where we look for purity, to the horizon, where we look for drama; this backwards propagation also blue-shifts the mode, so it has very high energy. The result is effectively illegal cloning, but unlike in earlier thought experiments a single observer can see both bits: he can measure the early radiation and then jump in and see the copy behind the horizon.
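One way to phrase the clash is monogamy of entanglement. A short statement (my own paraphrase, with B a late Hawking mode, R the early radiation, and A the partner mode behind the horizon): if B is maximally entangled with a subsystem R' of R, the joint state of B and R' is pure, so the global state factorizes as |\phi\rangle_{BR'} \otimes |\chi\rangle_{\rm rest}, and B can then share no entanglement whatsoever with the interior mode A. Purity demands the first entanglement, No Drama demands the second, and both cannot hold for the same mode.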

 

Once the paper was out, puzzlement over this idea continued. Within days Leonard Susskind replied, and a day later he released a second version of his manuscript. Two weeks after that he withdrew the paper because he "no longer believed the argument was right". He is now a believer in the firewall, though Polchinski and Susskind are still debating whether it forms at the Page time (half the black hole lifetime) or much faster, on the so-called fast-scrambling time. The argument for the latter is that this is the time scale over which most black hole properties reach equilibrium. The argument for the former is that self-entanglement of the horizon should be the origin of the interior spacetime, and this runs out only at the Page time.

A week after the initial paper, Raphael Bousso replied, arguing that Polchinski et al. were sloppy about the information that various observers, especially the infalling one, may access; once one realizes that each observer can only evaluate his own "causal diamond", the alleged contradictions can no longer be derived. Daniel Harlow posted another reply four days after Bousso, but soon afterwards withdrew his paper, just like Susskind. Yasunori Nomura, Jaime Varela, and Sean J. Weinberg argued along somewhat similar lines to Harlow: one must be careful when constructing the map between the unitary quantum mechanics of the qubits on one side and the semiclassical world on the other. Their paper now exists in a version v3 but, unlike Harlow's, has not been withdrawn. Samir D. Mathur and David Turton disagree with the firewall too. Polchinski et al. assumed that an observer near the event horizon can learn a great deal about the Hawking radiation while staying outside the stretched horizon; Mathur and Turton argue that he must actually go all the way to the true horizon, so all the answers depend on Planckian physics.

Borun D. Chowdhury and Andrea Puhm are so far the closest to the original paper; they claim that the objections raised by the other critics are irrelevant. Chowdhury and Puhm declare that it is important to get rid of the observer-centric description and talk about decoherence. Once that is done, Alice burns when she is a low-energy packet but may keep on living in the complementary fuzzball picture when she is a high-energy excitation. Iosif Bena, Andrea Puhm and Bert Vercnocke weighed in as well. Amit Giveon and Nissan Itzhaki became supporters of the firewall. Tom Banks and Willy Fischler use Banks's somewhat incomprehensible axiomatic framework, holographic spacetime, and conclude that it does not imply any firewalls. In the words of Amos Ori, a small black hole behaves as a black hole remnant. For Ram Brustein the event horizon is the wrong concept: it exists only in the classical theory, while in the quantum theory the black hole's Compton wavelength is nonzero, which, he believes, creates a region near the horizon where densities are inevitably high and quantum gravity is needed to predict what happens.

Lubos Motl seems to follow the same trail as Raphael Bousso. Bousso (and some others) want to say that an infalling observer sees the mode entangled with a mode behind the horizon, while the asymptotic observer sees it entangled with the early radiation. This is an appealing idea, but the problem is that the infalling observer can measure the mode and send a signal to infinity, giving a contradiction. Bousso now realizes this and is trying to find an improved version. The precise entanglement statement in the original paper is an inequality known as strong subadditivity of entropy, discussed with references in the Wikipedia article on von Neumann entropy.
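Spelled out with the labels A = early radiation, B = a late outgoing Hawking mode, C = its interior partner (my summary of the standard form of the argument), the inequality reads

S_{AB} + S_{BC} \geq S_B + S_{ABC}.

No Drama puts B and C together in the local vacuum, a pure state, so S_{BC} = 0 and S_{ABC} = S_A, which reduces the inequality to S_{AB} \geq S_A + S_B. Purity after the Page time instead requires the late mode to be purified by the early radiation, so S_{AB} < S_A. Since S_B > 0 for a thermally occupied mode, the two requirements cannot both hold.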

 

Where is this going? So far there is no argument that the firewall is visible outside the black hole, so perhaps there are no observational consequences there. For cosmology one might try to extend the analysis to cosmological horizons, but there is no analogous information problem there, so it would be a guess. The information paradox may have emerged once again. The black hole interior will always remain a mostly inaccessible place for most (lucky) people, so these questions will most likely remain theoretical - forever.

 

 

Credits: Sean Carroll, Joe Polchinski, Lubos Motl, Wikipedia, arXiv