If you want to see a hologram, you don't have to look much farther than your wallet. There are holograms on most driver's licenses, ID cards and credit cards. If you're not old enough to drive or use credit, you can still find holograms around your home. They're part of CD, DVD and software packaging, among many other products. These holograms aren't very impressive. You can see changes in colors and shapes when you move them back and forth, but they usually just look like sparkly pictures or smears of color. Even the mass-produced holograms that feature movie and comic book heroes can look more like green photographs than amazing 3-D images.


Yoda holographic it is...

On the other hand, large-scale holograms, illuminated with lasers or displayed in a darkened room with carefully directed lighting, are incredible. They're two-dimensional surfaces that show absolutely precise, three-dimensional images of real objects. You don't even have to wear special glasses or look through a View-Master to see the images in 3D. If you look at these holograms from different angles, you see objects from different perspectives, just as you would if you were looking at a real object. Some holograms even appear to move as you walk past them and look at them from different angles. Others change colors or include views of completely different objects, depending on how you look at them. Is there any relation between holograms and physics? Or, even better, to our multiverse story? It turns out there is!


The holographic principle is a property of quantum gravity and string theories which states that the description of a volume of space can be thought of as encoded on a boundary of that region - preferably a light-like boundary such as a gravitational horizon. First proposed by Gerard 't Hooft, it was given a precise string-theory interpretation by Leonard Susskind, who combined his own ideas with those of 't Hooft and Charles Thorn. Thorn had observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges in what would now be called a holographic way. In a larger and more speculative sense, the theory suggests that the entire universe can be seen as a two-dimensional information structure "painted" on the cosmological horizon, such that the three dimensions we observe are only an effective description at macroscopic scales and low energies. Cosmological holography has not been made mathematically precise, partly because the cosmological horizon has a finite area and grows with time. The holographic principle was inspired by black hole thermodynamics, which implies that the maximal entropy in any region scales with the radius squared, not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects that have fallen into the hole can be entirely contained in surface fluctuations of the event horizon. The holographic principle also resolves the black hole information paradox within the framework of string theory. Confused? Don't be - we'll start with black holes, yet another singularity in our Universe.


John Wheeler once said our Universe - matter and radiation - should be viewed as secondary, as carriers of a more abstract and fundamental entity - information. Information forms an irreducible kernel at the heart of reality.  From this perspective, the universe can be thought of as an information processor. It takes information regarding how things are now and produces information delineating how things will be at the next now, and the now after that. Our senses become aware of such processing by detecting how the physical environment changes over time. But the physical environment itself is emergent; it arises from the fundamental ingredient, information, and evolves according to the fundamental rules, the laws of physics.  Now, let's step into black hole territory.


I doubt you've never heard of black holes. If nothing else, you've heard of a region of space whose gravitational pull is so strong that nothing, not even light, can escape it - hence "black." Building on Einstein's earlier work on general relativity, Karl Schwarzschild did some calculations and found something no one expected or had seen up to that point: if enough mass were crammed into a small enough ball, a gravitational abyss would form. At the time such objects were called dark stars, then frozen stars; in the end it was the earlier-mentioned John Wheeler who coined the name "black hole," which has stuck ever since. Early on, Einstein didn't much like the whole idea. For a star as massive as the Sun to be a black hole, it would need to be squeezed into a ball about three kilometers across; a body as massive as the Earth would become a black hole only if squeezed to about a centimeter across. Hard to imagine such a thing, isn't it? Yet in the decades since, astronomers have gathered overwhelming observational evidence that black holes are both real and numerous. There is wide agreement that a great many galaxies are powered by an enormous black hole at their center; the Milky Way revolves around a black hole whose mass is about three million times that of our Sun.
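The squeeze Schwarzschild's calculation demands is easy to check for yourself. Here's a minimal sketch (my own illustration, using standard constants) that evaluates the Schwarzschild radius r_s = 2GM/c² for the Sun and the Earth:

```python
# Schwarzschild radius r_s = 2GM/c^2: how small a mass must be squeezed
# to become a black hole.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius (in meters) below which a given mass forms a black hole."""
    return 2 * G * mass_kg / c**2

M_sun   = 1.989e30   # kg
M_earth = 5.972e24   # kg

print(f"Sun:   {schwarzschild_radius(M_sun) / 1000:.1f} km")   # ~3 km
print(f"Earth: {schwarzschild_radius(M_earth) * 1000:.1f} mm") # ~9 mm
```

The numbers match the text: roughly three kilometers for the Sun and about a centimeter for the Earth.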


A 19th-century branch of physics called thermodynamics (today part of statistical mechanics) gave rise to some of the fundamental laws of physics we know today. One of the most important is the Second Law of Thermodynamics. Sometimes things get clearer through example rather than definition; we'll do that here using the steam engine (the innovation that initially drove thermodynamics).



The core of a steam engine is a vat of water vapor that expands when heated, driving the engine's piston forward, and contracts when cooled, returning the piston to its initial position, ready to drive forward once again. In the late 19th and early 20th centuries, physicists worked out the molecular underpinnings of matter, which among other things provided a microscopic picture of the steam’s action. As steam is heated, its H2O molecules pick up increasing speed and career into the underside of the piston. The hotter they are, the faster they go and the bigger the push. To understand the steam’s force we do not need the details of which particular molecules happen to have this or that velocity or which happen to hit the piston precisely here or there. To figure out the piston’s push, we need only the average number of molecules that will hit it in a given time interval, and the average speed they’ll have when they do.


Now, these are much coarser data, but it's exactly such pared-down information that's useful. In crafting mathematical methods for systematically sacrificing detail in favor of such higher-level aggregate understanding, physicists honed a wide range of techniques and developed a number of powerful concepts. One such concept is entropy, which characterizes how finely arranged (or not) the constituents of a given system need to be for it to have the overall appearance that it does. When something is highly disordered, like a kid's room usually is, a great many possible rearrangements of its constituents leave its overall appearance intact. If you have an untidy room and you shuffle the items strewn across the floor (like tossed toys), the room will look the same. But when something is highly ordered, like a tidy room, even small rearrangements are easily detected. Take any system and count the number of ways its constituents can be rearranged without affecting its gross, overall, macroscopic appearance. That number is the system's entropy. If there's a large number of such rearrangements, entropy is high: the system is highly disordered. If the number of such rearrangements is small, entropy is low: the system is highly ordered. If you wave your hand through a vat of steam, you rearrange millions of H2O molecules, yet the steam looks much the same - undisturbed. Now imagine another form of H2O molecules - ice cubes. Try to rearrange those and you'll see the difference immediately. The entropy of the steam is high (many rearrangements leave it looking the same); the entropy of the ice is low (few rearrangements leave it looking the same).


The Second Law of Thermodynamics states that, over time, the total entropy of a system will increase. By definition, a higher-entropy configuration can be realized through many more microscopic arrangements than a lower-entropy configuration. As a system evolves, it's overwhelmingly likely to pass through higher-entropy states since, simply put, there are more of them. Ice melting in a warm room is a common example of increasing entropy, described in 1862 by Rudolf Clausius as an increase in the disgregation of the ice's molecules. The idea is general. Glass shattering, a candle burning, ink spilling, perfume pervading: these are different processes, but the statistical considerations are the same. In each, order degrades to disorder, and does so because there are so many ways to be disordered. Being statistical, the Second Law does not say that entropy can't decrease, only that it is extremely unlikely to do so. The milk molecules you just poured into your coffee might, as a result of their random motions, coalesce into a floating figurine of Santa Claus. But don't hold your breath - a floating milk Santa has very low entropy. Similar considerations hold for the vast majority of high-to-low-entropy evolutions, making the Second Law appear inviolable. How does this apply to black holes?
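The statistical character of the Second Law is easy to watch in a toy model. The sketch below is my own illustrative setup (the Ehrenfest urn model, not anything from the text): N gas particles hop randomly between the two halves of a box, and we track the entropy, the log of the number of microstates compatible with each macrostate "k particles on the left." Starting from the perfectly ordered state, entropy climbs toward its maximum and stays there:

```python
import math, random

random.seed(1)
N = 100                       # number of gas particles
left = N                      # start fully ordered: all particles on the left

def entropy(k):
    """Log of the number of microstates with k particles on the left."""
    return math.log(math.comb(N, k))

history = [entropy(left)]
for step in range(2000):
    # Pick a random particle; it sits on the left with probability left/N.
    if random.random() < left / N:
        left -= 1             # it hops to the right
    else:
        left += 1             # it hops to the left
    history.append(entropy(left))

print(f"initial entropy: {history[0]:.2f}")   # 0.00 (a single microstate)
print(f"final entropy:   {history[-1]:.2f}")  # near the maximum, ln C(100,50) ~ 66.8
```

Entropy can fluctuate down for a step or two, exactly as the Second Law permits, but a return to the all-left state is astronomically unlikely.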

Wheeler noticed that black holes would seem to violate this law. No matter how much entropy a system has - if it falls into a black hole, the entropy seems gone. Since nothing escapes from a black hole, the system's disorder would appear permanently lost. A black hole would seem to be entropy-free.


According to basic thermodynamics, there's a close association between entropy and temperature. Temperature is a measure of the average motion of an object's constituents: hot objects have fast-moving constituents, cold objects have slow-moving constituents. Entropy is a measure of the possible rearrangements of these constituents that, from a macroscopic viewpoint, would go unnoticed. Both entropy and temperature thus depend on aggregate features of an object's constituents; they go hand in hand. Any object with a nonzero temperature radiates. Hot coal radiates visible light; we humans, typically, radiate in the infrared. If a black hole has a nonzero temperature, it too should radiate. But that conflicts blatantly with the established understanding that nothing can escape a black hole's gravitational grip. So the initial conclusion was that black holes do not have a temperature. Black holes do not harbor entropy. Black holes are entropy sinkholes. In their presence, the Second Law of Thermodynamics fails. Ouch! And then Stephen Hawking stepped in.

In 1971, Stephen Hawking realized that black holes obey a particular law. If you have a collection of black holes with various masses and sizes, some engaged in stately orbital waltzes, others pulling in nearby matter and radiation, and still others crashing into each other, the total surface area of the black holes increases over time. By "surface area" Hawking meant the area of each black hole's event horizon. Now, there are many results in physics that ensure quantities don't change over time (conservation of energy, conservation of charge, conservation of momentum, and so on), but there are very few that require quantities to increase. It was natural, then, to consider a possible relation between Hawking's result and the Second Law. If we envision that, somehow, the surface area of a black hole is a measure of the entropy it contains, then the increase in total surface area could be read as an increase in total entropy.

To fully assess the nature of black holes and understand how they interact with matter and radiation, we must include quantum considerations. Hawking studied how quantum fields would behave in a very particular spacetime arena: that created by the presence of a black hole. A well-known feature of quantum fields in ordinary, empty, uncurved spacetime is that their jitters allow pairs of particles, for instance an electron and its antiparticle the positron, to momentarily erupt out of the nothingness, live briefly, and then smash into each other, with mutual annihilation the result. This process, called quantum pair production, has been intensively studied both theoretically and experimentally, and is thoroughly understood. Characteristic of quantum pair production is that while one member of the pair has positive energy, the law of energy conservation dictates that the other must have an equal amount of negative energy. So far so good. Over and over again, quantum jitters result in particle pairs being created and annihilated, created and annihilated, and so on. Hawking reconsidered such ubiquitous quantum jitters near the event horizon of a black hole. He found that sometimes events look much as they ordinarily do: pairs of particles are randomly created; they quickly find each other; they are destroyed. But every so often something new happens. If the particles are formed sufficiently close to the black hole's edge, the negative-energy member can get sucked in while the positive-energy one escapes into space. To someone watching from afar, the escaping particles look like radiation, a form since named Hawking radiation. The other particle, the one that falls into the black hole, also has a detectable impact. Much as a black hole's mass increases when it absorbs anything that carries positive energy, its mass decreases when it absorbs anything that carries negative energy. The black hole thus emits a steady outward stream of radiation while its mass gets ever smaller.
When quantum considerations are included, black holes are thus not completely black.


In recent years, physicists have been toying with laboratory experiments that imitate the physics of an event horizon. The event horizon marks the point where escape from a black hole is impossible because the velocity required exceeds the speed of light, the cosmic speed limit. Analogue black holes have a similar point that cannot be crossed because the speed required is too great. Unlike in a real black hole, however, this "horizon" is not created by intense gravity - we do not know how to synthesise a black hole - but by some other mechanism, utilising sound or light waves, for example. Yet no one had seen photons resembling Hawking radiation emerging from these analogues until 2010. To create their lab-scale event horizon (as in the picture above), Daniele Faccio, Francesco Belgiorno and their colleagues focused ultrashort pulses of infrared laser light at a wavelength of 1055 nanometres into a piece of glass. The extremely high intensity of these pulses - trillions of times that of sunlight - temporarily skews the properties of the glass. In particular, it boosts the glass's refractive index, the extent to which the glass slows down light travelling through it. The result is a moving point of very high refractive index, equivalent to a physical hill, which acts as a horizon. Photons entering the glass behind this "hill", including ones that are part of a virtual pair, slow as they climb the hill and are unable to pass through it. Relative to the slow-moving pulse, they have come to a stop and remain behind the pulse until it has passed through the glass's length. To see if this lab-made event horizon was producing any Hawking radiation, the researchers placed a light detector next to the glass, perpendicular to the laser beam to avoid being swamped by its light. Some of the photons they detected were due to the infrared laser interacting with defects in the glass: this generates light at known wavelengths, for example between 600 and 700 nanometres.
But mysterious, "extra" photons also showed up at wavelengths of between 850 and 900 nanometres in some runs, and around 300 nanometres in others, depending on the exact amount of energy that the laser pulse was carrying. Because this relationship between the wavelength observed and pulse energy fits nicely with theoretical calculations based on separating pairs of virtual photons, Faccio's team concludes that the extra photons must be Hawking radiation.  Hawking radiation is also popping up in other, less direct black hole imitators. A team led by Silke Weinfurtner announced in August 2010 that they had observed a water-wave version of Hawking radiation in an experiment involving waves slowed to a halt to form a horizon.

Back to black holes: as particles stream from just outside the horizon, they fight an uphill battle to escape the strong gravitational pull. In doing so, they expend energy and cool down substantially. Hawking calculated that an observer far from the black hole would find that the temperature of the resulting "tired" radiation is inversely proportional to the black hole's mass. A huge black hole, like the one at the center of our galaxy, has a temperature that's less than a trillionth of a degree above absolute zero. A black hole with the mass of the Sun would have a temperature less than a millionth of a degree. For a black hole's temperature to be high enough to barbecue the family dinner, its mass would need to be about a ten-thousandth of the Earth's. But the magnitude of a black hole's temperature is secondary. Although the radiation coming from distant astrophysical black holes won't light up the night sky, the fact that they do have a temperature, that they do emit radiation, suggests that black holes do have entropy. Hawking's theoretical calculations determining a given black hole's temperature and the radiation it emits gave him all the data he needed to determine the amount of entropy the black hole should contain, according to the standard laws of thermodynamics. And the answer he found is proportional to the surface area of the black hole. By the end of 1974, the Second Law was law once again.
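The inverse relation between temperature and mass is captured by Hawking's formula T = ħc³/(8πGMk_B). A quick sketch with standard constants (the galactic-center mass of ~4 million Suns is an illustrative round figure):

```python
import math

hbar  = 1.055e-34   # reduced Planck constant, J s
c     = 2.998e8     # speed of light, m/s
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.381e-23   # Boltzmann constant, J/K
M_sun = 1.989e30    # kg

def hawking_temperature(mass_kg):
    """Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B), in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"Solar-mass hole:          {hawking_temperature(M_sun):.1e} K")       # ~6e-8 K
print(f"Galactic-center (~4e6 M): {hawking_temperature(4e6 * M_sun):.1e} K") # ~2e-14 K
```

The solar-mass result is indeed well under a millionth of a degree, and the supermassive one well under a trillionth, just as the text says.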

Nevertheless, with time another question arose - where is the entropy stored? This is how information came to play a crucial role. If we extend the previous definition of entropy, seen as a measure of disorder, we can say: entropy measures the additional information hidden within the microscopic details of a system which, should you have access to it, would distinguish the configuration at the micro level from all the macro look-alikes. Let's say you clean up your room, including your coin collection, which was previously scattered across the floor. This collection hides high entropy. Each coin can be either heads or tails. With 2 coins you have 4 possible configurations, with 3 coins you have 8, and so on. With 1000 coins that would be 2^1000 combinations. At the macroscopic level none of this matters for making the room tidy, but it all adds up to the entropy of the system. So, the entropy of a system is related to the number of indistinguishable rearrangements of its constituents, but properly speaking is not equal to that number itself. The relationship is expressed by a mathematical operation called a logarithm (using logarithms has the advantage of letting us work with more manageable numbers).

Now, ask yourself: what is information? Research by mathematicians, physicists, and computer scientists has made this precise. Their investigations established that the most useful measure of information content is the number of distinct yes-no questions the information can answer. The coins' information answers 1000 such questions: Is the first coin heads? Yes. Is the second coin heads? Yes. Is the third coin heads? No. Is the fourth coin heads? No. And so on. A datum that can answer a single yes-no question is called a bit - short for binary digit, meaning a 0 or a 1, which you can think of as a numerical representation of yes or no. The heads-tails arrangement of the 1000 coins thus contains 1000 bits of information. The value of the entropy and the amount of hidden information are equal: with entropy defined as the logarithm (base 2) of the number of rearrangements - 2^1000, giving 1000 in this case - the entropy is the number of yes-no questions any one such sequence answers. So, a system's entropy is the number of yes-no questions that its microscopic details have the capacity to answer, and thus entropy is a measure of the system's hidden information content.
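The coin counting above is easy to reproduce. A minimal sketch (my own illustration; base-2 logarithms are used so that entropy comes out directly in bits):

```python
import math, random

random.seed(0)
n_coins = 1000

# One particular microstate: a definite heads/tails sequence.
microstate = [random.choice("HT") for _ in range(n_coins)]

# Number of indistinguishable arrangements: any heads/tails sequence
# looks macroscopically like "a pile of 1000 coins".
arrangements = 2 ** n_coins

# Entropy as a base-2 logarithm = number of yes-no questions = bits.
entropy_bits = math.log2(arrangements)

print(arrangements > 10**300)   # True: an astronomically large count
print(entropy_bits)             # 1000.0
```

The logarithm tames the unwieldy count 2^1000 (a number with 302 digits) down to the manageable figure 1000, exactly the number of yes-no questions one sequence answers.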


When Hawking worked out the detailed quantum mechanical argument linking a black hole's entropy to its surface area he also provided an algorithm for calculating it.  He showed mathematically that the entropy of a black hole equals the number of Planck-sized cells that it takes to cover its event horizon. It’s as if each cell carries one bit, one basic unit of information. 


Take the event horizon of a black hole and divide it into a grid-like pattern in which the sides of each cell are one Planck length (10^-33 cm) long. Hawking proved mathematically that the black hole's entropy is the number of such cells needed to cover its event horizon - the black hole's surface area, that is, as measured in square Planck units (10^-66 square cm per cell). In the language of hidden information, it's as if each such cell secretly carries a single bit, a 0 or a 1, that provides the answer to a single yes-no question delineating some aspect of the black hole's microscopic makeup. This picture brings another question into focus: why would the amount of information be dictated by the area of the black hole's surface? The information contained in a library is determined by what's inside the building, not by the building's surface. Nevertheless, when it comes to black holes, the information storage capacity is determined not by the volume of the interior but by the area of the surface, and this comes straight from the mathematics. This is somewhat hard to grasp, as in everyday routine we do not deal with such micro details. It came as a surprise (since confirmed by both string theory and loop quantum gravity), but also as the first hint of holography - information storage capacity determined by the area of a bounding surface and not by the volume interior to that surface. This hint would evolve into a new way of thinking, leading to some exciting ideas and further questioning of our understanding of reality.
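The cell-counting picture can be put into numbers. A rough sketch below tiles the horizon of a solar-mass black hole with Planck-area cells (like the text, it glosses over the exact factor of 4 in the Bekenstein-Hawking formula S = A/4 in Planck units, so read the result as an order of magnitude):

```python
import math

G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8      # speed of light, m/s
l_p   = 1.616e-35    # Planck length, m
M_sun = 1.989e30     # kg

def horizon_planck_cells(mass_kg):
    """Number of Planck-area cells tiling a black hole's event horizon."""
    r_s  = 2 * G * mass_kg / c**2     # Schwarzschild radius, m
    area = 4 * math.pi * r_s**2       # horizon area, m^2
    return area / l_p**2              # cells of area l_p^2 each

print(f"{horizon_planck_cells(M_sun):.1e}")   # ~4e77 cells (bits) for one solar mass
```

Roughly 10^77 bits for a single solar-mass black hole: vastly more entropy than the star it formed from, which is why horizon area dominates the entropy bookkeeping.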


Imagine being in a spaceship. As you float in free fall toward a black hole, there is no way for you to tell when you have passed the hole's event horizon - the point of no return; you simply continue falling toward the singularity at the center. To make this easier to imagine, let us assume this is a really big black hole, so that the gravitational squeeze is still something you do not feel once you have passed the event horizon. Yes, bigger black holes are gentler than smaller ones. With a small one, the first thing you'll likely notice as you approach the hole is the tidal forces. Tidal forces are nothing more than the difference in gravitational force between the near and far side of an object, and they aren't particular to black holes (the tidal force of the Moon on the Earth causes tides, hence the name). For any reasonably sized black hole (less than thousands of Suns), the tidal force between different parts of your body will exceed your body's ability to stay intact, so you'll be pulled apart in the up-down direction. For more obscure reasons, you'll also be crushed from the sides. These two effects combined are called spaghettification. Assuming that you somehow survive spaghettification, or that you're falling into a supermassive black hole, you can look forward to some bizarre time effects. The point at which tidal forces destroy an object or kill a person depends on the black hole's size. For a supermassive black hole, such as those found at a galaxy's center, this point lies within the event horizon, so an astronaut may cross the event horizon without noticing any squashing and pulling (although it's only a matter of time, because once inside the event horizon, falling toward the center is inevitable). For small black holes, whose Schwarzschild radius is much closer to the singularity, the tidal forces would kill even before the astronaut reaches the event horizon. In our example we focus on a supermassive black hole.
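The size dependence of tidal forces is straightforward to quantify. Near a mass M, the difference in gravitational pull across a body of length L goes as 2GML/r³; evaluating this Newtonian estimate at the event horizon gives a sketch like the following (the two masses are my own illustrative choices):

```python
G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8      # speed of light, m/s
M_sun = 1.989e30     # kg
body_length = 2.0    # a person, roughly 2 m head to toe

def tidal_acceleration_at_horizon(mass_kg):
    """Difference in pull (m/s^2) across a 2 m body at the event horizon,
    using the Newtonian estimate 2*G*M*L / r_s^3."""
    r_s = 2 * G * mass_kg / c**2
    return 2 * G * mass_kg * body_length / r_s**3

print(f"10 M_sun hole:  {tidal_acceleration_at_horizon(10 * M_sun):.1e} m/s^2")   # ~2e8: lethal
print(f"4e6 M_sun hole: {tidal_acceleration_at_horizon(4e6 * M_sun):.1e} m/s^2")  # ~1e-3: unnoticeable
```

Because r_s grows with M, the tidal stretch at the horizon falls off as 1/M²: crossing a stellar-mass horizon shreds you, while crossing a supermassive one is gentler than standing in an elevator.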
An example of spaghettification is shown in the picture below.

If information is stored on the surface of a black hole, the event horizon, it feels a bit strange that we can pass this invisible barrier without any apparent notice, doesn't it? If, as you pass through the horizon of a black hole, you find nothing there - nothing at all to distinguish it from empty space - how can it store information? The answer lies in something called duality (briefly mentioned when discussing Brane Worlds). Duality refers to a situation in which there are complementary perspectives that seem completely different, yet are intimately connected through a shared physical anchor (we used the Albert-Marilyn image to illustrate it). Let us apply this to our journey to the black hole. One essential perspective is yours as you freely fall toward the black hole. Another is that of a distant observer, watching your journey through a (powerful) telescope. The remarkable thing is that as you pass uneventfully through the black hole's horizon, the distant observer perceives a very different sequence of events. The discrepancy has to do with the earlier-mentioned Hawking radiation. When the distant observer measures the Hawking radiation's temperature, he finds it to be tiny; let's say it's 10^-13 K, indicating that the black hole is roughly the size of the one at the center of the Milky Way. But the distant observer knows that the radiation is cold only because the photons, traveling to him from just outside the horizon, have expended their energy valiantly fighting against the black hole's gravitational pull (the photons are "tired"). As you get ever closer to the black hole's horizon, you'll encounter ever-fresher photons, ones that have only just begun their journey and so are more energetic and hotter. As the distant observer watches you approach to within a hair's breadth of the horizon, he sees your body bombarded by increasingly intense Hawking radiation - until finally all that's left is your charred remains. Your experience is completely different, though.
You don't see or feel any of this hot radiation. Because your free-fall motion cancels the effects of gravity, your experience is indistinguishable from that of floating in empty space (so you don't suddenly burst into flames). The conclusion is that from your perspective, you pass seamlessly through the horizon and head toward the black hole's singularity, while from the distant observer's perspective, you are immolated by a scorching corona that surrounds the horizon.

Confused? Let's try again. It's been established for decades that "time moves slower the lower" (GPS satellites, for example, have to account for an extra 45 microseconds every day due to their altitude). Also, one way to think about gravity is as a "bending" of the time direction downward. In this way, anything that moves forward in time will also naturally move downward. At the event horizon of a black hole (the outer boundary), time literally points straight down. As a result, escaping from a black hole is no more difficult than going back in time. Once you're inside, all directions literally point toward the singularity in the center (since no matter what direction you move in, it will be toward the future). We don't experience time moving at different rates or being position dependent, so when we start talking about messed-up spacetime it's useful to look at things from more than one point of view, as we did above. So, as someone falls in, they will move slower and slower through time. They will appear redder, colder, and dimmer. As they approach the event horizon their movement through time will halt, and they fade completely from view. Technically, you'll never actually see someone fall into a black hole; you'll just see them get really close. That's our distant observer's view. From an insider's perspective (falling into the black hole), things farther from the black hole move through time faster, so the rest of the universe will speed up from your point of view. As a result, the rest of the universe becomes bluer, hotter, and brighter. If the black hole is large enough - as in our example - you do not feel uncomfortable at all falling into it. Falling into the black hole is defined by passing the event horizon, the point of no return, where the velocity you would need in order to escape is larger than the velocity of light. You are trapped, but as long as you do not try to escape, you may not notice anything unusual for quite a while.
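The "45 microseconds a day" figure is the gravitational part of the GPS clock correction, and it follows from the weak-field time-dilation rate Δt/t ≈ GM/(rc²). A minimal sketch (the orbital radius is the standard ~26,570 km GPS value; the smaller special-relativistic correction from orbital speed, about -7 μs/day, is deliberately ignored here):

```python
G       = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c       = 2.998e8     # speed of light, m/s
M_earth = 5.972e24    # kg
R_earth = 6.371e6     # m, clock on Earth's surface
R_gps   = 2.657e7     # m, GPS orbital radius (~20,200 km altitude)

# Fractional rate difference between clocks at the two radii:
# the clock higher in the gravity well runs faster ("slower the lower").
rate_diff = (G * M_earth / c**2) * (1 / R_earth - 1 / R_gps)

seconds_per_day = 86400
gain_us_per_day = rate_diff * seconds_per_day * 1e6
print(f"{gain_us_per_day:.1f} microseconds/day")   # ~45.7
```

Without this correction, GPS positions would drift by kilometers per day, since light covers about 300 m per microsecond.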


OK, so we have two different descriptions here of the same event - sounds like duality business, doesn't it? This is hard to square with ordinary logic - the logic by which you are either alive or not alive. But note that the two perspectives can never confront each other. You can't get out of the black hole and prove to the distant observer that you are alive. And the distant observer can't jump into the black hole and confront you with evidence that you're not, obviously. What about information? From your perspective, all your information, stored in your body and brain and in the laptop you're holding, passes with you through the black hole's horizon. From the perspective of the distant observer, all the information you carry is absorbed by the layer of radiation incessantly bubbling just above the horizon. The bits contained in your body, brain, and laptop would be preserved, but would become thoroughly scrambled as they joined, jostled, and intermingled with the sizzling hot horizon. Which means that to the distant observer, the event horizon is a real place, populated by real things that give physical expression to the information symbolically depicted in the picture above, where we presented a grid of bits across the black hole. The conclusion is that the distant observer infers that a black hole's entropy is determined by the area of its horizon because the horizon is where the entropy is stored. Still, it is unexpected that the storage capacity isn't set by the volume but rather by the surface. Which brings us to the next question: what is the maximum amount of information that can be stored within a region of space?


Imagine adding matter to a region of space until you reach a critical juncture. At some point, the region will be so thoroughly stuffed that were you to add even a single grain of sand, the interior would go dark as the region turned into a black hole (to visualize it, imagine adding dots with a pen to a piece of paper). When that happens - game over. A black hole's size is determined by its mass, so if you try to increase the information storage capacity by adding yet more matter, the black hole will respond by growing larger - you can't increase the black hole's information capacity without forcing the black hole to enlarge. The amount of information contained within a region of space, stored in any objects of any design, is always less than the area of the surface that surrounds the region (measured in square Planck units). If you max out a region's storage capacity, you'll create a black hole, but as long as you stay under the limit, no black hole will form. You may wonder, with all this nanotechnology business going on, whether we are in any danger of reaching the limit any time soon. The answer is no; a stack of five off-the-shelf terabyte hard drives fits comfortably within a sphere of radius 50 centimeters, whose surface is covered by about 10^70 Planck cells. The surface's storage capacity is thus about 10^70 bits, which is about a billion, trillion, trillion, trillion, trillion terabytes, and so enormously exceeds anything you can buy.
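The 10^70 figure is nothing more than the sphere's surface area measured in Planck units. A quick sketch:

```python
import math

l_p    = 1.616e-35    # Planck length, m
radius = 0.5          # 50 cm sphere holding the hard drives

area          = 4 * math.pi * radius**2   # surface area, m^2
capacity_bits = area / l_p**2             # one bit per Planck cell

terabyte_bits = 8e12                      # bits in one terabyte
print(f"{capacity_bits:.1e} bits")                       # ~1.2e70
print(f"{capacity_bits / terabyte_bits:.1e} terabytes")  # ~1.5e57
```

That 10^57 terabytes is the "billion, trillion, trillion, trillion, trillion terabytes" of the text (10^9 followed by four factors of 10^12), so five real hard drives don't even scratch the holographic bound.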


Susskind and 't Hooft stressed that the lesson should be general: since the information required to describe physical phenomena within any given region of space can be fully encoded by data on a surface that surrounds the region, there's reason to think that the surface is where the fundamental physical processes actually happen. According to this view, our familiar 3D reality would then be likened to a holographic projection of those distant 2D physical processes. If this line of reasoning is correct, then there are physical processes taking place on some distant surface that, much as a puppeteer pulls strings, are fully linked to the processes taking place in your fingers, arms, and brain as you read these words. In the words of Brian Greene (as for much of this post, too), our experiences here, and that distant reality there, would form the most interlocked of parallel worlds - Holographic Parallel Universes.


That familiar reality may be mirrored, or perhaps even produced, by phenomena taking place on a faraway, lower-dimensional surface ranks among the most unexpected developments in all of theoretical physics. But how confident should we be that the holographic principle is right?  In 1998, the young Argentine scientist Juan Maldacena made an amazing discovery which rocked the world.  Though I only became aware of it some 10 years later, I haven't stopped thinking about it since.  To me, Maldacena was the new Einstein.  He was only 30 years old when he made the announcement that later left me breathless.  What did Maldacena find?  Maldacena provided the first mathematical example of Holographic Parallel Universes.  He achieved this by considering string theory in a universe whose shape differs from ours but for the purpose at hand proves easier to analyze. In a precise mathematical sense, the shape has a boundary, an impenetrable surface that completely surrounds its interior. By zeroing in on this surface, Maldacena argued convincingly that everything taking place within the specified universe is a reflection of laws and processes acting themselves out on the boundary.  Although Maldacena's method may not seem directly applicable to a universe with the shape of ours, his results are decisive because they established a mathematical proving ground in which ideas regarding holographic universes could be made explicit and investigated quantitatively.  Most exciting of all, there's now evidence that a link between these theoretical insights and physics in our universe can be forged.  Let's peek into Maldacena's work.


Branes are objects of multiple dimensions that exist within the full 10D space required by string theory. In the language of string theorists, this full space is called the bulk.  In 1995, Joe Polchinski proved that it wasn't possible to avoid them: any consistent version of M-theory had to include higher-dimensional branes.  Now, imagine a stack of three-branes, so closely spaced that they appear as a single monolithic slab (as in the picture below), and let's see how strings behave there.  As we saw in Brane Worlds, there are two types of strings - open snippets and closed loops.  Endpoints of open strings can move within and through branes but not off them, while closed strings have no ends and so can move freely through the entire spatial expanse.  This means closed strings can move through the bulk of space.  Maldacena's first step was to confine his mathematical attention to strings that have low energy - that is, ones that vibrate relatively slowly.  Why?  Because the force of gravity between any two objects is proportional to the mass of each; the same is true for the force of gravity acting between any two strings. Strings that have low energy have small mass, and so they hardly respond to gravity at all. By focusing on low-energy strings, Maldacena was thus suppressing gravity's influence, and that brings a substantial simplification.  In string theory, gravity is transmitted from place to place by closed loops. Eliminating the force of gravity amounts to eliminating the influence of closed strings on anything they might encounter (such as the open string snippets living on the brane stack).  By ensuring that the two kinds of strings wouldn't affect each other, Maldacena was ensuring that they could be analyzed independently.

Then Maldacena changed perspective and considered the three-branes as a single object.  Previous research had established that as you stack more and more branes together, their collective gravitational field grows. Ultimately, the slab of branes behaves much like a black hole, but one that's brane-shaped (and so is called a black brane). As with a black hole, if you get too close to a black brane, you can't escape. And if you stay far away but are watching something approach a black brane, the light you'll receive will be exhausted from its having fought against the black brane's gravity, making the object appear to have less energy and to be moving more slowly.  With this new perspective, he realized that the low-energy physics involved two components that could be analyzed independently:

  • slowly vibrating closed strings, moving anywhere in the bulk of space, are the most obvious low-energy carriers
  • the second component relies on the presence of the black brane. Imagine you are far from the black brane and have in your possession a closed string that's vibrating with an arbitrarily large amount of energy. Then, imagine lowering the string toward the event horizon while you maintain a safe distance. The black brane will make the string's energy appear ever lower; the light you'll receive will make the string look as though it's in a slow-motion movie. The second low-energy carriers are thus any and all vibrating strings that are sufficiently close to the black brane's event horizon.


The final move was to compare the two perspectives. Maldacena noted that because they describe the same brane stack, only from different points of view, they must agree (remember duality). Each description involves low-energy closed strings moving through the bulk of space, so this part of the agreement is manifest. But the remaining part of each description must also agree. The remaining part of the first description consists of low-energy open strings moving on the three-branes. Low-energy strings are well described by point-particle quantum field theory, and that is the case here. The particular kind of quantum field theory involves a number of sophisticated mathematical ingredients, but two vital characteristics are readily understood. The absence of closed strings ensures the absence of the gravitational field. And, because the strings can move only on the tightly sandwiched three-dimensional branes, the quantum field theory lives in three spatial dimensions (in addition to the one dimension of time, for a total of four spacetime dimensions).  The remaining part of the second description consists of closed strings, executing any vibrational pattern, as long as they are close enough to the black branes' event horizon to appear lethargic (that is, to appear to have low energy). Such strings, although limited in how far they stray from the black stack, still vibrate and move through nine dimensions of space (in addition to one dimension of time, for a total of ten spacetime dimensions). And because this sector is built from closed strings, it contains the force of gravity.  However different the two perspectives might seem, they're describing one and the same physical situation, so they must agree.  This is very much like what we saw with black holes.
Nevertheless, this leads to a bizarre conclusion: a particular nongravitational, point-particle quantum field theory in four spacetime dimensions (the first perspective) describes the same physics as strings, including gravity, moving through a particular swath of ten spacetime dimensions (the second perspective).  The gravity of the black brane slab imparts a curved shape to the ten-dimensional spacetime swath in its vicinity (this curved spacetime is called anti-de Sitter space); the black brane slab is itself the boundary of this space. And so, Maldacena showed that string theory within the bulk of this spacetime shape is identical to a quantum field theory living on its boundary. This is holography come to life.


Still confused?  Nothing to worry about - it takes time and some background to swallow this.  All of us are familiar with Euclidean geometry, where space is flat (that is, not curved). It is the geometry of figures drawn on flat sheets of paper. To a very good approximation, it is also the geometry of the world around us: parallel lines never meet, and all the rest of Euclid’s axioms hold. We are also familiar with some curved spaces. Curvature comes in two forms, positive and negative. The simplest space with positive curvature is the surface of a sphere. A sphere has constant positive curvature. That is, it has the same degree of curvature at every location (unlike an egg, say, which has more curvature at the pointy end). The simplest space with negative curvature is called hyperbolic space, which is defined as space with constant negative curvature. This kind of space has long fascinated scientists and artists alike.   By including time in the game, physicists can similarly consider spacetimes with positive or negative curvature. The simplest spacetime with positive curvature is called de Sitter space, after Willem de Sitter, the Dutch physicist who introduced it. Many cosmologists believe that the very early universe was close to being a de Sitter space. The far future may also be de Sitter-like because of cosmic acceleration. Conversely, the simplest negatively curved spacetime is called anti-de Sitter space. It is similar to hyperbolic space except that it also contains a time direction. Unlike our universe, which is expanding, anti-de Sitter space is neither expanding nor contracting. It looks the same at all times. Despite that difference, anti-de Sitter space turns out to be quite useful in the quest to form quantum theories of spacetime and gravity.  The idea is as follows: a quantum gravity theory in the interior of an anti-de Sitter spacetime is completely equivalent to an ordinary quantum particle theory living on the boundary. 
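To make the distinction concrete, here is one standard way to write down the two spacetimes. This is a sketch only: H is a constant expansion rate, R is the curvature radius, and the coordinates are one common choice among several.

```latex
% de Sitter space: constant positive curvature, exponentially expanding
ds^2 = -dt^2 + e^{2Ht}\left(dx^2 + dy^2 + dz^2\right)

% anti-de Sitter space (Poincare coordinates): constant negative curvature,
% static in t; the boundary where the dual particle theory lives sits at z = 0
ds^2 = \frac{R^2}{z^2}\left(-dt^2 + dx^2 + dy^2 + dz^2\right)
```

Note how the anti-de Sitter metric has no time dependence at all, matching the statement that it looks the same at all times, while the de Sitter metric grows exponentially with t.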
If true, this equivalence means that we can use a quantum particle theory (which is relatively well understood) to define a quantum gravity theory (which is not).  To make an analogy, imagine you have two copies of a movie, one on reels of 70-millimeter film and one on a DVD.  The two formats are utterly different, the first a linear ribbon of celluloid with each frame recognizably related to scenes of the movie as we know it, the second a two-dimensional platter with rings of magnetized dots that would form a sequence of 0s and 1s if we could perceive them at all. Yet both "describe" the same movie!  Similarly, the two theories, superficially utterly different in content, describe the same universe. The DVD looks like a metal disk with some glints of rainbowlike patterns. The boundary particle theory "looks like" a theory of particles in the absence of gravity. From the DVD, detailed pictures emerge only when the bits are processed the right way. From the boundary particle theory, quantum gravity and an extra dimension emerge when the equations are analyzed the right way.


What does it really mean for the two theories to be equivalent? First, for every entity in one theory, the other theory has a counterpart. The entities may be very different in how they are described by the theories: one entity in the interior might be a single particle of some type, corresponding on the boundary to a whole collection of particles of another type, considered as one entity. Second, the predictions for corresponding entities must be identical. Thus, if two particles have a 40 percent chance of colliding in the interior, the two corresponding collections of particles on the boundary should also have a 40 percent chance of colliding.


The particles that live on the boundary interact in a way that is very similar to how quarks and gluons interact in reality (quarks are the constituents of protons and neutrons; gluons generate the strong nuclear force that binds the quarks together - in other words, gluons are the glue for quarks). Quarks have a kind of charge that comes in three varieties (called colors), and the interaction is called chromodynamics. The difference between the boundary particles and ordinary quarks and gluons is that the particles have a large number of colors, not just three. Gerard ’t Hooft studied such theories and predicted that the gluons would form chains that behave much like the strings of string theory. The precise nature of these strings remained elusive, but in 1981 Alexander M. Polyakov noticed that the strings effectively live in a higher-dimensional space than the gluons do. In our holographic theories that higher-dimensional space is the interior of anti-de Sitter (AdS) space.  To understand where the extra dimension comes from, start by considering one of the gluon strings on the boundary. This string has a thickness, related to how much its gluons are smeared out in space. When physicists calculate how these strings on the boundary of AdS space interact with one another, they get a very odd result: two strings with different thicknesses do not interact very much with each other. It is as though the strings were separated spatially. One can reinterpret the thickness of the string to be a new spatial coordinate that goes away from the boundary. Thus, a thin boundary string is like a string close to the boundary, whereas a thick boundary string is like one far away from the boundary (see the picture above). The extra coordinate is precisely the coordinate needed to describe motion within the 4D AdS spacetime.
From the perspective of an observer in the spacetime, boundary strings of different thickness appear to be strings (all of them thin) at different radial locations. The number of colors on the boundary determines the size of the interior. To have a spacetime as large as the visible universe, the theory must have about 10^60 colors.  It turns out that one type of gluon chain behaves in the 4D spacetime as the graviton - the fundamental quantum particle of gravity. In this description, gravity in 4D is an emergent phenomenon arising from particle interactions in a gravityless, 3D world (physicists have known since 1974 that string theories always give rise to quantum gravity).


Edward Witten on one side, and Steven Gubser, Igor Klebanov, and Alexander Polyakov on the other, supplied the next level of understanding. They established a precise mathematical dictionary for translating between the two perspectives: given a physical process on the brane boundary, the dictionary showed how it would appear in the bulk interior, and vice versa. In a hypothetical universe, then, the dictionary rendered the holographic principle explicit. On the boundary of this universe, information is embodied by quantum fields. When the information is translated by the mathematical dictionary, it reads as a story of stringy phenomena happening in the universe's interior.  We can say that boundary physics gives rise to bulk physics.


An everyday hologram bears no resemblance to the 3D image it produces. On its surface appear only various lines, arcs, and swirls etched into the plastic. Yet a complex transformation, carried out operationally by shining a laser through the plastic, turns those markings into a recognizable 3D image. Which means that the plastic hologram and the 3D image embody the same data, even though the information in one is unrecognizable from the perspective of the other.


Similarly, examination of the quantum field theory on the boundary of Maldacena's universe shows that it bears no obvious resemblance to the string theory inhabiting the interior. Even a physicist presented with both theories, without being told of the connection, would more than likely conclude that they were unrelated.


Nevertheless, the mathematical dictionary linking the two makes explicit that anything taking place in one has an incarnation in the other.

As a particularly impressive example, Witten investigated what an ordinary black hole in the interior of Maldacena's universe would look like from the perspective of the boundary theory (the boundary theory does not include gravity, so a black hole necessarily translates into something very unlike a black hole). Witten's result showed that a black hole is the holographic projection of something thoroughly ordinary: a bath of hot particles in the boundary theory. Like a real hologram and the image it generates, the two theories - a black hole in the interior and a hot quantum field theory on the boundary - bear no apparent resemblance to each other, and yet they embody identical information.


In analyzing the relationship between quantum field theory on the boundary and string theory in the bulk, Maldacena realized that when the coupling of one theory was small, that of the other was large, and vice versa (the approximation techniques used in physics are accurate only if the relevant coupling constant is a small number).  The natural test, and a possible means of proving that the two theories are secretly identical, is to perform independent calculations in each theory and then check for equality. But this is difficult to do, since when perturbative methods work for one, they fail for the other.  If we accept the duality, though, we can look at this from another perspective: the duality gives us a framework in which a calculation that would normally involve a large coupling constant can be translated into one with a small coupling constant, where we can actually get to a result.  And in recent years an experimentally testable result has emerged!


Black holes are predicted to emit Hawking radiation. This radiation comes out of the black hole at a specific temperature. For all ordinary physical systems, a theory called statistical mechanics explains temperature in terms of the motion of the microscopic constituents (for example, this theory explains the temperature of a glass of water or the temperature of the sun).  So, what about the temperature of a black hole? To understand it, we would need to know what the microscopic constituents of the black hole are and how they behave. Only a theory of quantum gravity can tell us that.  Some aspects of the thermodynamics of black holes have raised doubts as to whether a quantum-mechanical theory of gravity could be developed at all. It seemed as if quantum mechanics itself might break down in the face of effects taking place in black holes. For a black hole in an AdS spacetime, we now know that quantum mechanics remains intact, thanks to the boundary theory. Such a black hole corresponds to a configuration of particles on the boundary. The number of particles is very large, and they are all zipping around, so that theorists can apply the usual rules of statistical mechanics to compute the temperature. The result is the same as the temperature that Hawking computed by very different means, indicating that the results can be trusted. Most important, the boundary theory obeys the ordinary rules of quantum mechanics; no inconsistency arises. Physicists have also used the holographic correspondence in the opposite direction - employing known properties of black holes in the interior spacetime to deduce the behavior of quarks and gluons at very high temperatures on the boundary.
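For a sense of the scales involved, the Hawking temperature of an ordinary flat-space black hole can be computed directly. This is a sketch using the standard Schwarzschild formula, not the modified AdS version relevant to the boundary calculation:

```python
import math

# Hawking temperature of a Schwarzschild black hole:
#   T = hbar * c^3 / (8 * pi * G * M * k_B)
HBAR = 1.055e-34   # J*s
C = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
K_B = 1.381e-23    # J/K
M_SUN = 1.989e30   # kg

def hawking_temperature(mass_kg: float) -> float:
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

print(f"{hawking_temperature(M_SUN):.2e} K")  # ~6e-8 K for a solar-mass black hole
```

The temperature is inversely proportional to the mass, which is why astrophysical black holes are colder than the cosmic microwave background, and why explaining even this tiny temperature microscopically requires quantum gravity.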

The Relativistic Heavy Ion Collider (RHIC) is one of two existing heavy-ion colliders (the other being the LHC) and the only spin-polarized proton collider in the world. It is located at Brookhaven National Laboratory in Upton (NY). By using RHIC to collide ions traveling at relativistic speeds, physicists study the primordial form of matter that existed in the universe shortly after the Big Bang. By colliding spin-polarized protons, the spin structure of the proton is explored. In 2010, RHIC physicists published results of temperature measurements from earlier experiments which concluded that temperatures in excess of 4 trillion kelvins had been achieved in gold ion collisions.  These collision temperatures resulted in the breakdown of "normal matter" and the creation of a liquid-like quark-gluon plasma.

Because the nuclei contain many protons and neutrons, the collisions create a commotion of particles that can be more than 200,000 times as hot as the sun's core.  That's hot enough to melt the protons and neutrons into a fluid of quarks and the gluons that act between them. The quark-gluon plasma is likely the form of matter that briefly existed soon after the big bang.  The challenge is that the quantum field theory (quantum chromodynamics) describing the hot soup of quarks and gluons has a large value for its coupling constant, and that compromises the accuracy of the perturbative methods used in calculations.   For example, as any fluid flows (water, molasses, or the quark-gluon plasma), each layer of the fluid exerts a drag force on the layers flowing above and below. The drag force is known as shear viscosity.  Experiments at RHIC measured the shear viscosity of the quark-gluon plasma, and the results are far smaller than those predicted by the perturbative quantum field theory calculations.  Can we use duality here?  If we introduce the holographic principle, the perspective taken is to imagine that everything we experience lies in the interior of spacetime while processes mirroring those experiences take place on a distant boundary. If we reverse that perspective, we get our universe (more precisely, the quarks and gluons in our universe) living on the boundary, and so that's where the RHIC experiments take place. Invoking Maldacena, his result shows that the RHIC experiments (described by quantum field theory) have an alternative mathematical description in terms of strings moving in the bulk. Difficult calculations in the boundary description (where the coupling is large) are translated into easier calculations in the bulk description (where the coupling is small).  This is exactly what Pavel Kovtun, Andrei Starinets, and Dam Son did.  They did the math, and the results they found come impressively close to the experimental data.
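The number Kovtun, Son, and Starinets extracted is worth writing down: their holographic calculation gives a universal value of hbar/(4*pi*k_B) for the ratio of shear viscosity to entropy density. Computing it in SI units takes two lines:

```python
import math

# Kovtun-Son-Starinets (KSS) value for shear viscosity / entropy density,
# derived holographically:  eta / s = hbar / (4 * pi * k_B)
HBAR = 1.055e-34  # J*s
K_B = 1.381e-23   # J/K

kss_value = HBAR / (4 * math.pi * K_B)
print(f"eta/s = {kss_value:.2e} K*s")  # ~6.1e-13 K*s
```

Everyday fluids like water sit orders of magnitude above this value, while the RHIC measurements of the quark-gluon plasma land remarkably close to it, which is why the plasma is often called a nearly "perfect" fluid.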
This is impressive because the boundary theory doesn't model our universe fully (it doesn't contain the gravitational force), but that doesn't compromise the comparison with RHIC data, because in those experiments the particles have such small mass (even when traveling near light speed) that the gravitational force plays virtually no role. It turns out that analyzing quarks and gluons by using a higher-dimensional theory of strings can be viewed as a potent string-based mathematical trick.  Unlike previous multiverse models, this one speaks to how we experience reality itself: our everyday experience gives us one shape of the universe, while research into the secrets of matter and how things work suggests another.  These two descriptions are essentially the same, and the mathematics of duality is used as the translator.  Parallel mathematics describing parallel worlds (universes).  Obviously, this model can be applied to any previous multiverse model, as those spoke more about where parallel worlds would exist, while this model is more a description of how an existing universe works (no matter which one it is).


Many questions about the holographic theories remain to be answered. In particular, does anything similar hold for a universe like ours in place of the AdS space?  A crucial aspect of AdS space is that it has a boundary where time is well defined.  The boundary has existed and will exist forever. An expanding universe, like ours, that comes from a big bang does not have such a well-behaved boundary. Consequently, it is not clear how to define a holographic theory for our universe; there is no convenient place to put the hologram.  An important lesson that one can draw from the holographic conjecture, however, is that quantum gravity, which has perplexed some of the best minds on the planet for decades, can be very simple when viewed in terms of the right variables.


The encoding of information on the 2D event horizon surface is similar to that in a black hole, as mentioned before. What's special in this case is the realization that the amount of information on the surface must match the number of bits contained inside the volume of the universe. Since the surface can hold far fewer bits than the volume could naively accommodate, the world inside must be made up of grains bigger than the smallest space-time unit of a Planck length (which applies on the surface) - at around 10^-14 cm instead of 10^-33 cm (the first value being the limit of current gravitational-wave detectors and the second being the Planck length). Or, to put it another way, a holographic universe's grainy structure is much easier to detect. Quantum effects will cause the space-time quanta to convulse wildly, resulting in noise picked up by some gravitational-wave detectors like GEO600 (in fact, such noise was picked up already in 2008). If this interpretation is proven to be correct, it will be ranked at the same level of achievement as the discovery of the CMB, which also first appeared as noise in a microwave detector.
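The jump from 10^-33 cm to roughly 10^-14 cm comes from a simple counting argument, sketched below. The horizon radius here is an assumed round number, and the whole thing is order-of-magnitude only, but it lands within an order of magnitude of the quoted figure:

```python
import math

# Counting argument behind the graininess scale: if the bits on the horizon
# (area / Planck length^2) must also describe the interior volume, the
# effective 3D cell size is (volume / surface_bits)^(1/3), far above the
# Planck length. L is an assumed order-of-magnitude horizon radius.
PLANCK_LENGTH = 1.616e-35  # m
L = 1e26                   # m, assumed horizon scale

surface_bits = 4 * math.pi * L**2 / PLANCK_LENGTH**2
volume = (4 / 3) * math.pi * L**3
grain = (volume / surface_bits) ** (1 / 3)

print(f"effective grain size: {grain:.1e} m")  # ~2e-15 m, vs Planck's 1.6e-35 m
```

The grain works out to roughly 10^-15 m (10^-13 cm): some twenty orders of magnitude coarser than the Planck length, which is what would put it within reach of interferometers.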

GEO600 is the only experiment in the world able to test this controversial theory at this time. Unlike the other large laser interferometers, GEO600 reacts particularly sensitively to lateral movement of the beam splitter because it is constructed using the principle of signal recycling. Normally this is inconvenient, but signal recycling is needed to compensate for the shorter arm lengths compared to other detectors. The holographic noise, however, produces exactly such a lateral signal, and so the disadvantage becomes an advantage in this case. In September 2011 it was announced that GEO600 would start using the "squeezed light" method, its first application outside the lab.  The light from a squeezed laser radiates much more calmly than light from a conventional laser source, so the sensitivity of GEO600 will be raised by some 150%.

The noise picked up by GEO600 in 2008 put many on alert.  This signal isn't a noise source that's been overlooked; rather, it appears to be quantum fluctuations in the fabric of space-time itself. This is where things start to get interesting.  It is possible that noise at these scales is caused by a holographic projection from the horizon of our universe. A good analogy is to think about how an image becomes more and more blurry or pixelated the more you zoom in on it. The projection starts off at Planck-scale lengths at the Universe's event horizon, but it becomes blurry in our local space-time.  Over at Fermilab, a holometer (meaning holographic interferometer) is being built to verify this idea.


Carefully prepared laser light travels to a beam splitter, which reflects about half the light toward a mirror at the end of one arm and transmits the rest to a mirror on the second arm. The light from both mirrors bounces back to the beam splitter, where half is again reflected and half transmitted. A photodiode measures the total intensity of the combined light from the two arms, which provides an extremely sensitive measure of the position difference of the beam splitter in two directions.  The holometer as constructed at Fermilab will include two interferometers in evacuated 6-inch steel tubes about 40 meters long. Optical systems in each one "recycle" laser light to create a very steady, intense laser wave with about a kilowatt of laser power to maximize the precision of the measurement. The outputs of the two photodiodes are correlated to measure the holographic jitter of the spacetime the two machines share. The holometer will measure jitter as small as a few billionths of a billionth of a meter.  The holometer should start collecting data in 2012 and could show results in two days or two years, depending on the fine-tuning needed. Regardless of whether evidence of a holographic existence materializes, the experiment will develop laser technology for new dark matter experiments and help test potential background noise for the next generation of experiments searching for gravitational waves.
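The photodiode readout described above follows the standard two-arm interference formula I = I0 * cos^2(delta_phi / 2). A toy model of that relationship, where the wavelength and normalization are illustrative assumptions rather than Fermilab's actual parameters:

```python
import math

# Toy model of a two-arm interferometer readout: the photodiode intensity
# depends on the phase difference accumulated between the arms. A change in
# arm length difference shifts the phase and therefore the measured intensity.
WAVELENGTH = 1.064e-6  # m, a common laser wavelength (assumed, not the real spec)
I0 = 1.0               # normalized input intensity

def photodiode_intensity(arm_length_difference: float) -> float:
    # Round-trip path difference of 2*d gives a phase of 4*pi*d/lambda.
    delta_phi = 4 * math.pi * arm_length_difference / WAVELENGTH
    return I0 * math.cos(delta_phi / 2) ** 2

print(photodiode_intensity(0.0))             # 1.0: arms equal, bright fringe
print(photodiode_intensity(WAVELENGTH / 4))  # ~0: quarter-wave offset, dark fringe
```

Operating between these two extremes, where the intensity slope is steepest, is what lets a photodiode translate sub-nanometer jitter of the beam splitter into a measurable intensity signal.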


Credits: Brian Greene, Juan Maldacena, Sascha Vongehr, Stephen Hawking, Scientific American, MIT, Wikipedia, arXiv, Fermilab, Symmetry Magazine


Related posts:

Deja vu Universe

Landscape Multiverse

Many worlds

Simulation Argument