
Richard Phillips Feynman was an American physicist known for his work in the path integral formulation of quantum mechanics, the theory of QED and the physics of the superfluidity of supercooled liquid helium, as well as in particle physics. For his contributions to the development of QED, Feynman, jointly with Julian Schwinger and Sin-Itiro Tomonaga, received the Nobel Prize in Physics in 1965. He developed a widely used pictorial representation scheme for the mathematical expressions governing the behavior of subatomic particles, which later became known as Feynman diagrams. During his lifetime, Feynman became one of the best-known scientists in the world.


When you listen to Feynman it is impossible not to be taken by the simplicity of his thinking and the marvelous insight this man had.  He looks a bit like a Dirty Harry of science, but listening to his interviews and lectures on YouTube is something I could probably do for days.  BBC has also placed parts of "Fun to Imagine", now in high resolution, on its site. There are other sites too (example).


Listening to Feynman is a very inspiring experience and will hardly leave you without any emotion. 


Feynman was also known for the many memorable quotes he delivered during his life.  He was also known as the 'Great Explainer' because of his passion for helping non-scientists to imagine something of the beauty and order of the universe as he saw it.  It was exactly this will to describe things in a simple way, understandable to ordinary people, that drove him to create a set of diagrams explaining relationships in particle physics.  Today we call those Feynman diagrams.


The interaction of sub-atomic particles can be complex and difficult to understand intuitively, and the Feynman diagrams allow for a simple visualization of what would otherwise be a rather arcane and abstract formula.  Feynman first introduced his diagrams in the late 1940s as a bookkeeping device for simplifying lengthy calculations in QED.  Soon the diagrams gained adherents throughout the fields of nuclear and particle physics. Not long thereafter, other theorists adopted - and subtly adapted - Feynman diagrams for solving many-body problems in solid-state theory. By the end of the 1960s, some physicists even used versions of Feynman's line drawings for calculations in gravitational physics. With the diagrams' aid, entire new calculational vistas opened for physicists. Theorists learned to calculate things that many had barely dreamed possible before WW II. It might be said that physics can progress no faster than physicists' ability to calculate. Thus, in the same way that computer-enabled computation might today be said to be enabling a genomic revolution, Feynman diagrams helped to transform the way physicists saw the world, and their place in it.


Feynman introduced his novel diagrams in a private, invitation-only meeting at the Pocono Manor Inn in rural Pennsylvania during the spring of 1948. Twenty-eight theorists had gathered at the inn for several days of intense discussions about problems they were trying to address, and Feynman offered his view using his diagrams.  If you are into the details, David Kaiser wrote a great overview of it, which is online and can be found here.  The simplicity of these diagrams has a certain aesthetic appeal, though as one might imagine there are many layers of meaning behind them. The good news is that it's really easy to understand the first few layers, and today you will learn how to draw your own Feynman diagrams and interpret their physical meaning.  You do not need to know any fancy-schmancy math or physics to do this, which for most people reading this is good news.


A Feynman diagram is a representation of quantum field theory processes in terms of particle paths.  You can draw two kinds of lines, a straight line with an arrow or a wiggly line. 



You can draw these pointing in any direction.  The rules are:

  • a straight line going from left to right represents an electron
  • a straight line going from right to left represents a positron (the electron's anti-particle)
  • a wiggly line represents a photon
  • you may only connect these lines where two lines with arrows meet a single wiggly line
  • up-and-down (vertical) displacement in a diagram indicates particle motion, but no attempt is made to show direction or speed, except schematically
  • any vertex (a point where three lines meet) represents an electromagnetic interaction
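To make the connection rules concrete, here is a small sketch in Python (my own toy illustration, not any standard physics package) that checks whether a proposed vertex obeys them:

```python
# A QED vertex must join exactly three lines: one arrow pointing into
# the vertex, one arrow pointing out of it, and one wiggly photon line.
# Each line is modeled as a (kind, direction) pair; photons have no arrow.

def valid_qed_vertex(lines):
    arrows_in = sum(1 for kind, direction in lines
                    if kind == "arrow" and direction == "in")
    arrows_out = sum(1 for kind, direction in lines
                     if kind == "arrow" and direction == "out")
    photons = sum(1 for kind, _ in lines if kind == "photon")
    return len(lines) == 3 and arrows_in == 1 and arrows_out == 1 and photons == 1

# An electron emitting a photon: allowed.
print(valid_qed_vertex([("arrow", "in"), ("arrow", "out"), ("photon", None)]))  # True
# Two photons meeting a single arrow: not allowed.
print(valid_qed_vertex([("arrow", "in"), ("photon", None), ("photon", None)]))  # False
```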


Of course, the Feynman rules are much broader, but then again this is not a physics class.


The particle trajectories are represented by the lines of the diagram, which can be squiggly or straight, with an arrow or without, depending on the type of particle.


A point where lines connect to other lines is an interaction vertex, and this is where the particles meet and interact: by emitting or absorbing new particles, deflecting one another, or changing type.


The picture on the right shows an electron at a vertex emitting a photon and continuing on its way.


Note that the orientation of the arrows is important!


You must have exactly one arrow going into the vertex and exactly one arrow coming out.  Using the rules above, can you say what the picture on the right means?  It's easy: an electron (arrow coming from the bottom left) and a positron (arrow coming from the top right) meet and annihilate (disappear, as this is what happens when matter and anti-matter meet), producing a photon (wiggly line).

As with anything else, the diagram and the whole representation may become very complicated to follow without exercise.  Over time these diagrams have been extended to represent the whole particle zoo out there.  Here is an example on the left side.  What does it say?  It says that an electron and a positron annihilate, producing a virtual photon (represented by the blue wavy line) that becomes a quark-antiquark pair. Then one radiates a gluon (represented by the green spiral).


So, these diagrams tell us a story about how a set of particles interact. We read the diagrams from left to right, so if you have up-and-down lines you should shift them a little so they slant in either direction. This left-to-right reading is important since it determines our interpretation of the diagrams. Matter particles with arrows pointing from left to right are electrons or any other fermion if noted. Matter particles with arrows pointing in the other direction are positrons or any other anti-matter particle if noted. In fact, you can think about the arrow as pointing in the direction of the flow of electric charge.


But here comes a cool thing; the interaction with a photon encodes information about the conservation of electric charge: for every arrow coming in, there must be an arrow coming out.  Not just that, we can also rotate the interaction so that it tells a different story; we will take the electron-positron annihilation example from above and rotate it.  Here are a few examples of the different ways one can interpret the single interaction (reading from left to right):



In essence, we rotated the picture and created four new interactions. Dare to say what they are?  It's easy.  These are to be interpreted as:

  • an electron emits a photon and keeps going
  • a positron absorbs a photon and keeps going
  • an electron and positron annihilate into a photon
  • a photon spontaneously produces an electron and positron
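The four readings above can be captured in a little Python lookup (a toy sketch of my own, with hypothetical names) keyed on which particles enter the vertex and which leave it:

```python
# The same electron/positron/photon vertex, read four ways depending on
# which particles are incoming (left side) and which are outgoing (right).

INTERPRETATIONS = {
    (("electron",), ("electron", "photon")):
        "an electron emits a photon and keeps going",
    (("photon", "positron"), ("positron",)):
        "a positron absorbs a photon and keeps going",
    (("electron", "positron"), ("photon",)):
        "an electron and positron annihilate into a photon",
    (("photon",), ("electron", "positron")):
        "a photon spontaneously produces an electron and positron",
}

def interpret(incoming, outgoing):
    key = (tuple(sorted(incoming)), tuple(sorted(outgoing)))
    return INTERPRETATIONS.get(key, "not a single-vertex QED process")

print(interpret(["electron", "positron"], ["photon"]))
```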


Because Feynman diagrams represent terms in a quantum calculation, the intermediate stages in any diagram cannot be observed. Physicists call the particles that appear in intermediate, unobservable, stages of a process "virtual particles". Only the initial and final particles in the diagram represent observable objects, and these are called "real particles". 


In the diagrams above, on the left side of a diagram we have "incoming particles" - these are the particles that are about to crash into each other to do something interesting. For example, in accelerators where protons and neutrons are collided, these "incoming particles" are the quarks and gluons. On the right side of a diagram we have "outgoing particles", the things which are detected after an interesting interaction.  Both "incoming" and "outgoing" particles are "real particles".  What about the internal lines? These represent "virtual particles" that are never directly observed. They are created quantum mechanically and disappear quantum mechanically, serving only to allow a given set of interactions that turn the incoming particles into the outgoing particles.


Again, using the electron-positron example, we can describe this using diagrams as follows:

In the first diagram the electron and positron annihilate into a photon which then produces another electron-positron pair. In the second diagram an electron tosses a photon to a nearby positron (without ever touching the positron).


In physics, Compton scattering is a type of scattering that X-rays and gamma rays (both photons, with different energy ranges) undergo in matter. The inelastic scattering of photons in matter results in a decrease in energy (increase in wavelength) of an X-ray or gamma-ray photon, called the Compton effect. Part of the energy of the X/gamma ray is transferred to a scattering electron, which recoils and is ejected from its atom (which becomes ionized, meaning the atom is no longer neutral but charged), and the rest of the energy is taken by the scattered, "degraded" photon. Inverse Compton scattering also exists, in which a charged particle transfers part of its energy to a photon.  In such a case, the electron can become a "virtual particle", as seen below.

Above we see a process where light (the photon) and an electron bounce off each other, and here it is the electron that is the "virtual particle".


By reading these diagrams from left to right, we interpret the x axis as time. You can think of each vertical slice as a moment in time. The y axis is roughly the space direction.  The path that particles take through actual space is determined not only by the interactions (which are captured by Feynman diagrams), but the kinematics (which is not). For example, one would still have to impose things like momentum and energy conservation. The point of the Feynman diagram is to understand the interactions along a particle’s path, not the actual trajectory of the particle in space.


Feynman diagrams can be used to show some rather complicated relationships in a rather easy way.  Without getting into those, I will close this example section with three more diagrams; two for how the Higgs may be produced at the LHC and one special.



The first above is a Feynman diagram of one way the Higgs boson may be produced at the LHC; two gluons convert to two top/anti-top quark pairs, which then combine to make a neutral Higgs.  The second is another way the Higgs boson may be produced at the LHC; two quarks each emit a W or Z boson, which combine to make a neutral Higgs.  The third one is easy and just a joke.


Feynman developed two rare forms of cancer, dying shortly after a final attempt at surgery in 1988, aged 69. His last recorded words are noted as "I'd hate to die twice. It's so boring."  What a genius!




Credits: Wikipedia, BBC, David Kaiser, CERN, Flip Tanedo


Fundamental interaction

Posted by Hrvoje Crvelin Nov 26, 2011

In particle physics, fundamental interactions (sometimes called interactive forces) are the ways that elementary particles interact with one another. An interaction is fundamental when it cannot be described in terms of other interactions. The four known fundamental interactions, all of which are non-contact forces, are electromagnetism, the strong interaction, the weak interaction (the latter two also known as the "strong" and "weak nuclear force" respectively) and gravitation. With the possible exception of gravitation, these interactions can usually be described in a set of calculational approximation methods known as perturbation theory, as being mediated by the exchange of gauge bosons between particles.  It is believed that in the early moments of the Universe there was a symmetry, and thus all forces could be explained (or seen) as one (again, with the possible exception of gravity).  Since then the symmetry has broken (I mentioned this partially while discussing the Higgs) and four individual forces have been identified.



According to the present understanding, there are four fundamental interactions or forces: gravitation, electromagnetism, the weak interaction, and the strong interaction. Their magnitude and behavior vary greatly. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. The modern (perturbative) quantum mechanical view of the fundamental forces (other than gravity) is that particles of matter (fermions) do not directly interact with each other, but rather carry a charge, and exchange virtual particles (gauge bosons), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges, and gluons mediate the interaction of color charges (which, by the way, has nothing to do with colors of course).

We see the consequences of gravitation every day.  We feel gravitation all the time.  Throw a ball in the air and it comes back thanks to gravitation.  Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries in his famous experiment of dropping balls from the Tower of Pisa. Galileo showed that gravitation accelerates all objects at the same rate. This was a major departure from Aristotle's belief that heavier objects accelerate faster. Galileo correctly postulated air resistance as the reason that lighter objects may fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity.
In 1687, Sir Isaac Newton published Principia and stated "I deduced that the forces which keep the planets in their orbs must [be] reciprocally as the squares of their distances from the centers about which they revolve: and thereby compared the force requisite to keep the Moon in her Orb with the force of gravity at the surface of the Earth; and found them answer pretty nearly."  Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets.  Still, a discrepancy in Mercury's orbit pointed out flaws in Newton's theory. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit.


Although Newton's theory has been superseded, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is a much simpler theory to work with than general relativity, and gives sufficiently accurate results for most applications involving sufficiently small masses, speeds and energies.  Today, we send spacecraft to nearby planets using Newton's formulas.

In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. The starting point for general relativity is the equivalence principle, which equates free fall with inertial motion, and describes free-falling inertial objects as being accelerated relative to non-inertial observers on the ground. In Newtonian physics, however, no such acceleration can occur unless at least one of the objects is being operated on by a force. Einstein proposed that spacetime is curved by matter, and that free-falling objects are moving along locally straight paths (called geodesics) in curved spacetime. Like Newton's first law of motion, Einstein's theory states that if a force is applied on an object, it would deviate from a geodesic. For instance, we are no longer following geodesics while standing because the mechanical resistance of the Earth exerts an upward force on us, and we are non-inertial on the ground as a result. This explains why moving along the geodesics in spacetime is considered inertial.
In the decades after the discovery of general relativity it was realized that general relativity is incompatible with quantum mechanics.  Despite the success of its predictions, general relativity is thus believed to be incomplete. Still, it is possible to describe gravity in the framework of quantum field theory like the other fundamental forces, such that the attractive force of gravity arises due to exchange of virtual gravitons, in the same way as the electromagnetic force arises from exchange of virtual photons. This reproduces general relativity in the classical limit. Still, this approach fails at short distances of the order of the Planck length, where a more complete theory of quantum gravity is required. Many believe the complete theory to be string theory (currently M-theory and F-theory).  On the other hand, it may be a background-independent theory such as loop quantum gravity or causal dynamical triangulation, or any other candidate currently being developed by a number of scientists around the world.


Every planetary body (Earth included, of course) is surrounded by its own gravitational field, which exerts an attractive force on all objects. Assuming a spherically symmetrical planet, the strength of this field at any given point is proportional to the planetary body's mass and inversely proportional to the square of the distance from the center of the body.  We know that from Newton.  The strength of the gravitational field is numerically equal to the acceleration of objects under its influence, and its value at the Earth's surface (g), as a standard average, is approximately 9.81 m/s².  This means that, ignoring air resistance, an object falling freely near the Earth's surface increases its velocity by 9.81 m/s for each second of its descent. An object starting from rest will attain a velocity of 9.81 m/s after one second, 19.6 m/s after two seconds, etc. Also, again ignoring air resistance, any and all objects, when dropped from the same height, will hit the ground at the same time. According to Newton's third law, the Earth itself experiences a force equal in magnitude and opposite in direction to that which it exerts on a falling object. This means that the Earth also accelerates towards the object until they collide. Because the mass of the Earth is huge, however, the acceleration imparted to the Earth by this opposite force is negligible in comparison to the object's. If the object doesn't bounce after it has collided with the Earth, each of them then exerts a repulsive contact force on the other which effectively balances the attractive force of gravity and prevents further acceleration.
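The free-fall numbers above are easy to reproduce; here is a minimal sketch assuming constant g = 9.81 m/s² and no air resistance:

```python
# Free fall near the Earth's surface, ignoring air resistance.
# Velocity grows by g each second, regardless of the object's mass.

G_SURFACE = 9.81  # m/s^2, standard average at the Earth's surface

def fall_velocity(t):
    """Velocity (m/s) of an object dropped from rest after t seconds."""
    return G_SURFACE * t

def fall_distance(t):
    """Distance (m) covered after t seconds: d = g * t^2 / 2."""
    return 0.5 * G_SURFACE * t ** 2

print(fall_velocity(1))   # 9.81 m/s after one second
print(fall_velocity(2))   # 19.62 m/s after two seconds
print(fall_distance(2))   # distance covered in those two seconds
```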



Gravitation is by far the weakest of the four interactions. This is why it gets ignored when doing particle physics.  The weakness of gravity can easily be demonstrated by suspending a pin using a simple magnet (such as a refrigerator magnet). The magnet is able to hold the pin against the gravitational pull of the entire Earth.  Yet gravitation is very important for macroscopic objects and over macroscopic distances for the following reasons:

  • gravitation is the only interaction that acts on all particles having mass
  • gravitation has an infinite range (like electromagnetism but unlike strong and weak interaction)
  • gravitation cannot be absorbed, transformed, or shielded against
  • gravitation always attracts and never repels (though in inflation theories we use term repulsive gravitation to describe fast expansion)
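That weakness can be put into numbers. The following back-of-the-envelope sketch compares the electric and gravitational forces between two electrons; since both follow an inverse-square law, the ratio is independent of distance:

```python
# Ratio of electric to gravitational force between two electrons.
# Both forces fall off as 1/r^2, so r cancels out of the ratio.

K_E = 8.988e9          # Coulomb constant, N*m^2/C^2
G = 6.674e-11          # gravitational constant, N*m^2/kg^2
E_CHARGE = 1.602e-19   # elementary charge, C
M_ELECTRON = 9.109e-31 # electron mass, kg

ratio = (K_E * E_CHARGE**2) / (G * M_ELECTRON**2)
print(f"electric force is ~{ratio:.1e} times stronger")  # ~4.2e+42
```

Forty-two orders of magnitude: this is why gravity is simply ignored in particle physics calculations.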


The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies, black holes, and the expansion of the universe. Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits, as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground; and we can only jump so high.  As shown in the picture at the top, the current field of science dealing with gravity is called quantum gravity, and its mediator (force-carrying particle) is the graviton, which to date remains hypothetical.


Electromagnetism is the interaction responsible for practically all the phenomena encountered in daily life. Ordinary matter takes its form as a result of intermolecular forces between individual molecules in matter. Electromagnetism attracts electrons to an atomic nucleus to form atoms, which are the building blocks of molecules. This governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms, which are in turn determined by the interaction between electromagnetic force and the momentum of the electrons.  Electromagnetism manifests as both electric fields and magnetic fields. Both fields are simply different aspects of electromagnetism, and hence are intrinsically related. Thus, a changing electric field generates a magnetic field; conversely a changing magnetic field generates an electric field. This effect is called electromagnetic induction, and is the basis of operation for electrical generators, induction motors, and transformers.


When electricity is passed through a wire, a magnetic field is created around the wire. Looping the wire increases the magnetic field. Adding an iron core greatly increases the effect and creates an electromagnet (without the iron core it is usually called a solenoid).  The most interesting feature of the electromagnet is that when the electrical current is turned off, the magnetism is also turned off (especially true if the core is made of soft iron). Being able to turn the magnetism on and off has led to many amazing inventions and applications. A theory of electromagnetism, known as classical electromagnetism, was developed by various physicists over the course of the 19th century, culminating in the work of James Clerk Maxwell, who unified the preceding developments into a single theory in 1873, when he published A Treatise on Electricity and Magnetism, in which the interactions of positive and negative charges were shown to be regulated by one force.  He also discovered the electromagnetic nature of light.


In classical electromagnetism, the electromagnetic field obeys a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.


There are four main interactions (experimentally confirmed) between electricity and magnetism:

  • Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: unlike charges attract, like ones repel
  • Magnetic poles (or states of polarization at individual points) attract or repel one another in a similar way and always come in pairs: every north pole is yoked to a south pole
  • An electric current in a wire creates a circular magnetic field around the wire, its direction (clockwise or counter-clockwise) depending on that of the current
  • A current is induced in a loop of wire when it is moved towards or away from a magnetic field, or a magnet is moved towards or away from it, the direction of current depending on that of the movement
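The first rule, the inverse-square law for electric charges, can be sketched directly; a toy illustration using Coulomb's law:

```python
# Coulomb's law: F = k * q1 * q2 / r^2.  Doubling the distance cuts the
# force to a quarter; opposite signs would give attraction (negative F).

K_E = 8.988e9  # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Force in newtons between point charges q1, q2 (coulombs) r meters apart.
    Positive = repulsion (like charges), negative = attraction."""
    return K_E * q1 * q2 / r**2

f_near = coulomb_force(1e-6, 1e-6, 0.1)  # two like microcoulomb charges, 10 cm apart
f_far = coulomb_force(1e-6, 1e-6, 0.2)   # same charges, twice as far
print(f_near, f_far, f_near / f_far)     # the force drops by a factor of 4
```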


Electromagnetism is infinite-ranged like gravity, but vastly stronger, and therefore describes a number of macroscopic phenomena of everyday experience such as friction, rainbows, lightning, and all human-made devices using electric current, such as television, lasers, and computers.  Light, in other words the world we experience through our eyes, is nothing more than electromagnetic radiation (of course, light is defined as the visible spectrum of electromagnetic radiation; other parts include radio waves and X-rays).  The thinking that happens inside your brain can be traced to chemical signals passing between neurons, and those chemicals move the way they do because of electromagnetism.



The picture above is real; it depicts an eruption of the Chaiten volcano in Chile in May 2008.  Lightning caused by volcanoes is usually called a dirty thunderstorm. Electrical charges are generated when rock fragments, ash, and ice particles in a volcanic plume collide and produce static charges, just as ice particles collide in regular thunderstorms.  Volcanic eruptions also release large amounts of water, which may help fuel these thunderstorms.


While the speed of light (c) is nowadays linked to Albert Einstein, it can also be derived from Maxwell's equations. Einstein's 1905 theory of special relativity, however, which flows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism, on the very nature of time and space.  Einstein also explained the photoelectric effect (for which he won the Nobel Prize in Physics in 1921) by hypothesizing that light is transmitted in quanta, which we now call photons. Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism. Further work in the 1940s, by Richard Feynman, Freeman Dyson, Julian Schwinger and Sin-Itiro Tomonaga, completed this theory (Feynman, Schwinger and Tomonaga won the Nobel Prize for it in 1965).  This theory is nowadays called quantum electrodynamics (QED), the revised theory of electromagnetism. QED and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling, a quantum mechanical phenomenon where a particle tunnels through a barrier that it classically could not surmount, which is necessary for everyday electronic devices such as transistors to function.
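The claim that c drops out of Maxwell's equations can be checked numerically: c = 1/√(μ₀ε₀), using only the two classical field constants:

```python
import math

# Speed of light from Maxwell's equations: c = 1 / sqrt(mu_0 * epsilon_0).
MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, N/A^2 (classical value)
EPSILON_0 = 8.854e-12       # vacuum permittivity, F/m

c = 1 / math.sqrt(MU_0 * EPSILON_0)
print(f"c = {c:.4e} m/s")   # ~2.998e8 m/s, matching the measured speed of light
```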

QED does not predict what is going to happen. However, what it does predict is the probability that something will happen. What are the chances that a particular magnitude is the one that will be exhibited? Because the QED results match experiments to such a high degree of accuracy (around one part in 10¹²), it has been considered one of the most accurate physical theories ever created.  Within this framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons too, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails totally to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem".



In the Standard Model of particle physics the weak interaction is theorized as being caused by the exchange (i.e. emission or absorption) of W and Z bosons. Since the mass of these particles is on the order of 80 GeV, the uncertainty principle dictates a range of about 10⁻¹⁸ meters, which is about 0.1% of the diameter of a proton.  Most particles will decay by a weak interaction over time; beta decay, a form of radioactivity, is one of the best-known consequences of the weak force.  It also has one unique property called quark flavor changing; it allows quarks to swap their "flavor" (one of six) for another.
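The range estimate quoted above follows from the uncertainty-principle relation r ≈ ħc / (mc²); a quick sketch:

```python
# Rough range of a force mediated by a massive boson, from the
# uncertainty principle: r ~ hbar*c / (m*c^2).

HBAR_C = 197.327  # MeV*fm (hbar*c in convenient nuclear units)

def force_range_m(mass_mev):
    """Approximate range in meters for a mediator of the given mass (MeV)."""
    range_fm = HBAR_C / mass_mev   # femtometers
    return range_fm * 1e-15        # meters

print(force_range_m(80_000))  # W boson, ~80 GeV -> ~2.5e-18 m
```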


Weak interaction is crucial to the structure of the universe because:

  • the Sun would not burn without it, since the weak interaction causes the transmutation of a proton into a neutron so that deuterium can form and deuterium fusion can take place
  • it is necessary for the buildup of heavy nuclei


The role of the weak force in the transmutation of quarks makes it the interaction involved in many decays of nuclear particles which require a change of a quark from one flavor to another. It was in radioactive decay such as beta decay that the existence of the weak interaction was first revealed. Such decay also makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14. It can also create radioluminescence. The weak interaction is the only process in which a quark can change to another quark, or a lepton to another lepton.  The weak interaction acts between both quarks and leptons, whereas the strong force does not act between leptons. Leptons have no color, so they do not participate in the strong interactions; neutrinos have no charge, so they experience no electromagnetic forces; but all of them join in the weak interactions.


The weak force normally applies to big, heavy atoms on the atomic scale (yet the atoms are not big enough to see, of course). What happens is that the strong force can only cover a very, very teeny tiny area - about the size of an atom's nucleus. When the nucleus gets too big, the strong force cannot hold the nucleus together against the repelling like charges inside the atom (the positive protons). Then, the weak force causes the atom to break up into smaller atoms and particles, releasing electromagnetic energy. Some of the atom's mass has been transformed into energy. Such a reaction is known as a nuclear chain reaction. Once the atoms are broken apart, the strong force can take control again and keep those atoms together.  Sometimes this process can take a very long time, and some of the particles or gamma rays that they give off are harmful to life forms.

Enrico Fermi was an Italian-born, naturalized American physicist particularly known for his work on the development of the first nuclear reactor, Chicago Pile-1, and for his contributions to the development of quantum theory, nuclear and particle physics, and statistical mechanics. He was awarded the 1938 Nobel Prize in Physics for his work on induced radioactivity.


The weak force was originally described, in the 1930s, by Fermi's theory of a contact four-fermion interaction, which is to say, a force with no range (i.e. entirely dependent on physical contact). However, it is now best described as a field, having very short range. In 1968, the electromagnetic force and the weak interaction were unified, when they were shown to be two aspects of a single force, now termed the electroweak force.  The discovery of the W and Z particles in 1983 was hailed as a confirmation of the theories which connect the weak force to the electromagnetic force in electroweak unification.


Electromagnetism and weak interaction appear to be very different at everyday low energies so they can be modeled using two different theories.



However, above the unification energy, on the order of 100 GeV, they would merge into a single electroweak force.  In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons. The weak interaction is the only known interaction which does not conserve parity (P symmetry); it is left-right asymmetric (this brought the Nobel Prize in 1957 to Chen Ning Yang and Tsung-Dao Lee). The weak interaction also violates CP symmetry - the symmetry of physical laws under transformations that involve the inversions of charge and parity; this violation is now believed to be key in the search for an answer to why matter dominates over anti-matter (this brought the Nobel Prize in 1980 to James Cronin and Val Fitch) - but it does conserve CPT (the fundamental symmetry of physical laws under transformations that involve the inversions of charge, parity and time simultaneously).  Electroweak theory is very important for modern cosmology, particularly for how the universe evolved. This is because shortly after the Big Bang, the temperature was above approximately 10¹⁵ K, and the electromagnetic force and weak force were merged into a combined electroweak force.  For contributions to the unification of the weak and electromagnetic interaction between elementary particles, Abdus Salam, Sheldon Glashow and Steven Weinberg were awarded the Nobel Prize in Physics in 1979.
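The quoted temperature follows from converting the ~100 GeV unification energy with the Boltzmann constant (E = k_B T):

```python
# Temperature corresponding to the ~100 GeV electroweak unification
# energy, via E = k_B * T.

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

energy_ev = 100e9                  # 100 GeV expressed in eV
temperature = energy_ev / K_B_EV
print(f"T ~ {temperature:.2e} K")  # ~1.16e+15 K, the "10^15 K" quoted above
```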


Finally, we come to the strong nuclear force.  This is the most complicated interaction, mainly because of the way it varies with distance. At distances greater than 10 femtometers (1 femtometer = 10⁻¹⁵ meters), the strong force is practically unobservable. Moreover, it holds only inside the atomic nucleus (discovered in 1911). In fact, the force had to be strong enough to squeeze the protons into a volume that is about 10⁻¹⁵ of that of the entire atom. If you consider that the nuclei of all atoms (except hydrogen) contain more than one proton, and each proton carries a positive charge, then why would the nuclei of these atoms stay together?  The protons must feel a repulsive force from the other neighboring protons. This is where the strong nuclear force comes in. The strong nuclear force is created between nucleons by the exchange of particles called mesons. This exchange can be likened to constantly hitting a ping-pong ball or a tennis ball back and forth between two people. As long as this meson exchange can happen, the strong force is able to hold the participating nucleons together.  The nucleons must be extremely close together in order for this exchange to happen. The distance required is about the diameter of a proton or a neutron. If a proton or neutron can get closer than this distance to another nucleon, the exchange of mesons can occur, and the particles will stick to each other. If they can't get that close, the strong force is too weak to make them stick together, and other competing forces (usually the electromagnetic force) can influence the particles to move apart.


The exact origin of the strong force (holding compound atomic nuclei together) is not yet a completely settled matter. Some authors attribute this force to the exchange of virtual mesons between protons and neutrons (as in the original theory of Yukawa), while others claim this old model has been superseded by the modern theory of quantum chromodynamics (QCD), and attribute the binding of nucleons to a magnetic analog of the color charge, originating in the exchange of gluons between quarks.


In the case of approaching protons/nuclei, the closer they get, the more they feel the repulsion from the other proton/nucleus (the electromagnetic force). As a result, in order to get two protons/nuclei close enough to begin exchanging mesons, they must be moving extremely fast (which means the temperature must be really high), and/or they must be under immense pressure so that they are forced to get close enough to allow the exchange of mesons to create the strong force. Now, back to the nucleus. One thing that helps reduce the repulsion between protons within a nucleus is the presence of neutrons. Since they have no charge, they don't add to the repulsion already present, and they help separate the protons from each other so they don't feel as strong a repulsive force from any other nearby protons. Also, the neutrons are a source of more strong force for the nucleus, since they participate in the meson exchange. These factors, coupled with the tight packing of protons in the nucleus so that they can exchange mesons, create enough strong force to overcome their mutual repulsion and force the nucleons to stay bound together. The preceding explanation shows why it is easier to bombard a nucleus with neutrons than with protons. Since the neutrons have no charge, as they approach a positively charged nucleus they will not feel any repulsion. They can therefore easily "break" through the electrostatic repulsion barrier to begin exchanging mesons with the nucleus, thus becoming incorporated into it. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive particle, whose mass is approximately 100 MeV.
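Yukawa's estimate can be reproduced in a few lines: the uncertainty principle limits a virtual carrier of mass m to a range of roughly its reduced Compton wavelength, ħ/(mc), so mc² ≈ ħc / range. A quick sketch in Python (the 1.4 fm range is an illustrative value, not taken from the text):

```python
# Estimate the mass of Yukawa's meson from the range of the strong force.
# Range r ~ hbar / (m c)  =>  m c^2 ~ (hbar c) / r
HBAR_C_MEV_FM = 197.327  # hbar * c in MeV * femtometers

def yukawa_mass_mev(range_fm):
    """Rest energy (in MeV) of a carrier mediating a force of the given range."""
    return HBAR_C_MEV_FM / range_fm

# Illustrative range of ~1.4 fm, about a nucleon diameter:
print(f"{yukawa_mass_mev(1.4):.0f} MeV")  # ~141 MeV, close to the pion's 139.6 MeV
```

The answer lands within a few percent of the measured pion mass, which is why the pion discovered in 1947 was accepted as Yukawa's predicted particle.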


It was later discovered that protons and neutrons were not fundamental particles, but were made up of constituent particles called quarks.


The strong attraction between nucleons was the side-effect of a more fundamental force that bound the quarks together in the protons and neutrons. The theory of quantum chromodynamics explains that quarks carry what is called a color charge, although it has no relation to visible color.  Quarks with unlike color charge attract one another as a result of the strong interaction, which is mediated by particles called gluons.


Fractionally charged quarks were first proposed in 1964, independently by Murray Gell-Mann and George Zweig.


Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of QCD as simple models for the interactions of quarks.


QCD is a theory of fractionally charged quarks interacting by means of 8 photon-like particles called gluons. The gluons interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances, but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances.


Unlike all other forces (electromagnetic, weak, and gravitational), the strong force does not diminish in strength with increasing distance. After a limiting distance (about the size of a hadron) has been reached, it remains at a strength of about 10,000 N, no matter how much further the quarks are separated. In QCD this phenomenon is called color confinement; it implies that only hadrons, not individual free quarks, can be observed. The explanation is that the amount of work done against a force of 10,000 N (about the weight of a one-metric-ton mass on the surface of the Earth) is enough to create particle-antiparticle pairs within a very short distance of an interaction. In simple terms, the very energy applied to pull two quarks apart will turn into new quarks that pair up again with the original ones. The failure of all experiments that have searched for free quarks is considered to be evidence for this phenomenon.  Quarks are never alone today, but in the first second of the universe, "free" quarks are believed to have been present. With current experiments we have reached energy levels where we are able to create quark-gluon plasma (QGP), sometimes called quark soup.  So far both RHIC and the LHC have reported QGP occurrence (the picture below shows the RHIC detection).
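The two numbers quoted above can be sanity-checked with a few lines of arithmetic: 10,000 N against the weight of a metric ton, and the work that force does per femtometer of separation, compared with hadronic energy scales. A rough order-of-magnitude sketch:

```python
# Order-of-magnitude check on the confinement numbers quoted above.
G = 9.81                     # m/s^2, standard gravity
J_PER_MEV = 1.602176634e-13  # joules per MeV

weight_1_tonne = 1000 * G    # weight of one metric ton, in newtons
print(f"{weight_1_tonne:.0f} N")  # ~9810 N, i.e. roughly 10,000 N

# Work done against a constant 10,000 N over one femtometer:
work_J = 10_000 * 1e-15
work_MeV = work_J / J_PER_MEV
print(f"{work_MeV:.0f} MeV")      # ~62 MeV per femtometer of separation
# A few femtometers of stretching therefore supplies the ~280 MeV
# needed to create a pion-antipion pair - the "string" snaps into new quarks.
```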



Since 2008, there has been discussion about a hypothetical precursor state of the quark-gluon plasma, the so-called "Glasma", in which the dressed particles are condensed into some kind of glassy (or amorphous) state, below the genuine transition between the confined state and the plasma liquid. This would be analogous to the formation of metallic glasses, or amorphous alloys, below the genuine onset of the liquid metallic state.


For decades, theoretical physicists have been able to explain the Universe in terms of four fundamental forces: the electromagnetic force, which causes electricity and magnetism; the weak nuclear force, which moderates some nuclear decays; the strong nuclear force, which binds quarks together inside atomic nuclei; and gravity. All except gravity have been incorporated into the Standard Model of particle physics.


There are signs today that an even more fundamental theory may be out there. At high energies, electromagnetism and the weak force merge into a single electroweak force; and, at even higher energies, some as-yet-untested supersymmetric theories combine the electroweak and strong nuclear forces. Tests at the LHC may provide evidence for this combined strong and electroweak force.  But gravity remains a stubborn holdout against efforts to create a theory of everything.



Grand Unified Theories (GUTs) are proposals to show that all of the fundamental interactions, other than gravity, arise from a single interaction with symmetries that break down at low energy levels. GUTs predict relationships among constants of nature. GUTs also predict gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces, a prediction verified at the Large Electron-Positron Collider in 1991 for supersymmetric theories. Theories of everything, which integrate GUTs with a theory of quantum gravity, face a greater barrier, because no quantum gravity theory - and this includes string theory, loop quantum gravity, and twistor theory - has secured wide acceptance. Some theories look for a graviton to complete the Standard Model's list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that spacetime itself may have a quantum aspect to it.


Can only four forces control the entire universe?  A bold claim, but then again - putting aside microscopic processes happening inside atoms, everything we see can be accounted for in terms of particles interacting through just gravity and electromagnetism. From the orbits of the planets to the flexing of your muscles, every movement in the macroscopic world arises from the interplay of these two aspects of nature.  Interestingly enough, both act macroscopically and have infinite range.  On the other hand, if you want to invent a new force of nature, you have to specify three things: which particles feel the force, how strong it is, and the range over which it interacts. Once you’ve fixed these properties, you know everything important about your hypothetical force, and you can set about tracking it down.


Some theories beyond the Standard Model include a hypothetical fifth force, and the search for such a force is an ongoing line of experimental research in physics. In supersymmetric theories, there are particles that acquire their masses only through supersymmetry-breaking effects, and these particles, known as moduli, can mediate new forces. Another reason to look for new forces is the recent discovery that the expansion of the universe is accelerating (attributed to dark energy), giving rise to a need to explain a nonzero cosmological constant, and possibly to other modifications of general relativity. Fifth forces have also been suggested to explain phenomena such as CP violation, dark matter, and dark flow.  The force is generally believed to have roughly the strength of gravity (i.e. it is much weaker than electromagnetism or the nuclear forces) and to have a range of anywhere from less than a millimeter to cosmological scales. The idea is difficult to test, because gravity is such a weak force: the gravitational interaction between two objects is only significant when one has a great mass. Therefore, it takes very precise equipment to measure gravitational interactions between objects that are small compared to the Earth. Nonetheless, in 1986 a fifth force, operating on municipal scales (i.e. with a range of about 100 meters), was reported by researchers who were reanalyzing results of Loránd Eötvös from earlier in the century. Over a number of years, other experiments have failed to duplicate this result.



Credits: Wikipedia, Ron Kurtus, Sean Carroll, Tech-FAQ, Geoff Brumfiel

Hrvoje Crvelin

What is Vacuum?

Posted by Hrvoje Crvelin Nov 19, 2011

After reading this you should show some respect to vacuum - thus I used capital letters in the title.  After all, if there was no vacuum, most likely you would not see anything you see today at all.  Hm, wait a moment, I hear you say, isn't vacuum sort of empty space?  Tabula rasa, nothingness, just nothing at all?  As it turns out, this is one of the greatest misconceptions. Although the classical vacuum is a void, the quantum vacuum is a virtual "soup" of particle-antiparticle pairs that interact with real atoms to produce the Lamb shift (a slight energy shift in atomic levels) and the Casimir effect (the attraction of two plates in a vacuum).  It was the Casimir effect that, a few years ago, captured my imagination and left me wondering - what is vacuum?  I guess for that I need to explain what the Casimir effect is first.


My interest in this subject started with the atom.  We all know the atom is very small.  You may wear superglasses, but you still won't see it.  It is just too small.  Check it out:


Having said that, and illustrated above, you may see there is a bit of empty space between the nucleus and the atom's border line.  To get an idea, consider the following: if a baseball were the size of the Earth, its atoms would be the size of grapes. If an atom were fourteen stories tall, its nucleus would be about the size of a grain of salt and its electrons would be about the size of dust particles.   Now, if you do some simple math by calculating the volume of an atom and how much space is used by the electrons and nucleus, you end up with a sort of shocking fact - 99.999999999999% of an atom's volume is just empty space.  The human body consists of ~7 × 10²⁷ atoms arranged in a highly aperiodic physical structure. Although 41 chemical elements are commonly found in the body's construction, carbon, hydrogen, oxygen and nitrogen comprise 99% of its atoms. Fully 87% of human body atoms are either hydrogen or oxygen.  When you calculate it, that's loads of empty space.  As a kid that made me wonder - why are we so solid?  Can I walk through the wall and secretly view the girl next door under the shower?  Since then, I have grown up and wonder about different questions (yeah, right).  Anyway, despite the emptiness inside, there are forces in action keeping atomic bonds in place and making things look and feel solid.   When your hand meets the table, the force fields in the atoms of your hand come up against the equally strong fields in the atoms of the table. The mutual repulsion of these billions of tiny, but immensely strong, force fields prevents your hand penetrating the table, giving rise to the appearance of solidness. But however real it may seem, this solidness is only how things appear to us; it is not an intrinsic part of matter.
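The "simple math" mentioned above is just a ratio of cubed radii. A sketch in Python, using illustrative hydrogen-like radii (assumed values, not taken from the text):

```python
# How empty is an atom? Compare nuclear and atomic volumes for hydrogen.
r_atom = 5.29e-11     # m, Bohr radius (illustrative)
r_nucleus = 0.88e-15  # m, approximate proton charge radius (illustrative)

# Volume scales with the cube of the radius, so the filled fraction is tiny:
filled_fraction = (r_nucleus / r_atom) ** 3
empty_fraction = 1 - filled_fraction

print(filled_fraction)  # ~5e-15 of the volume holds the nucleus
print(empty_fraction)   # the rest - more than 99.999999999999% - is empty
```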


But, what about vacuum?  Vacuum is referred to as empty space... no matter, they say.  That surely means no atoms too.  Pure nothing, right?  Well, a perfect vacuum would be one with no particles in it at all, which is impossible to achieve in practice. Physicists often discuss ideal test results that would occur in a perfect vacuum, which they simply call "vacuum" or "free space", and use the term partial vacuum to refer to an actual imperfect vacuum as one might have in a laboratory or in space.  The quality of a vacuum refers to how closely it approaches a perfect vacuum. For example, a typical vacuum cleaner produces enough suction to reduce air pressure by around 20%. Much higher-quality vacuums are possible. Ultra-high vacuum chambers, common in chemistry, physics, and engineering, operate below one trillionth (10⁻¹²) of atmospheric pressure (100 nPa), and can reach around 100 particles/cm³. Outer space is an even higher-quality vacuum, with the equivalent of just a few hydrogen atoms per cubic meter on average. However, even if every single atom and particle could be removed from a volume, it would still not be "empty" due to vacuum fluctuations, dark energy, and other phenomena in quantum physics. In modern particle physics, the vacuum is considered the ground state of matter.  Humans and animals exposed to vacuum will lose consciousness after a few seconds and die of hypoxia within minutes, but the symptoms are not nearly as graphic as commonly depicted in media and popular culture.  Animal experiments show that rapid and complete recovery is normal for exposures shorter than 90 seconds, while longer full-body exposures are fatal and resuscitation has never been successful.  RIP.


In quantum mechanics and quantum field theory, the vacuum is defined as the state with the lowest possible energy. This is a state with no matter particles (hence the name), and also no photons, no gravitons, etc. As described above, this state is impossible to achieve experimentally (even if every matter particle could somehow be removed from a volume, it would be impossible to eliminate all the blackbody photons).  This hypothetical vacuum state often has interesting and complex properties. For example, it contains vacuum fluctuations (virtual particles that hop into and out of existence). It has a finite energy too, called vacuum energy. Vacuum fluctuations are an essential and ubiquitous part of quantum field theory. Some readily apparent effects of vacuum fluctuations include the Lamb shift and the Casimir effect.  The Lamb shift is a small difference in energy between two energy levels of the hydrogen atom in quantum electrodynamics (QED). The interaction between the electron and the vacuum causes a tiny energy shift, and this shift was first measured in 1947.  The Casimir effect is a catchier thing (the effect was predicted by the Dutch physicist Hendrik Casimir in 1948, hence the name).


If all the air is pumped out of a chamber, then we say that we have a vacuum, meaning that there is no matter inside and hence zero energy. But down at the quantum level, even the empty vacuum is a busy place. Quantum jitters, as described when talking about the holographic principle, create particles out of nowhere.  Obviously, some sort of energy is needed for that.  Casimir showed how to harness this process to extract energy from the vacuum even though it has nothing to give.  Now, if we borrow money from a bank, we must soon pay it back. The rules of quantum mechanics, as expressed within the Heisenberg uncertainty principle, operate in a similar way. But unlike a bank loan, where we are free to choose the period over which we make the repayments, the uncertainty principle is rather more strict. It states that energy can be borrowed from the vacuum provided it is paid back very quickly. The more energy that is borrowed, the quicker the debt must be repaid.

Now consider what is going on in a vacuum if we could zoom down to the microscopic level. Among the particles that are forming from this borrowed energy are photons (the particles of light). What's more, photons of all energies are being created, with the higher energy ones, corresponding to short wavelength light, able to stick around for much less time than the lower energy, longer wavelength ones. Thus at any given moment, the vacuum contains many of these photons (and other particles) and yet will have an average energy equal to zero, since each particle has only temporarily borrowed the energy needed for it to be created.
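The bank-loan picture can be made quantitative: the uncertainty principle ΔE·Δt ≳ ħ/2 gives a rough upper bound on how long a virtual photon of a given energy can exist. A small sketch (the sample energies are illustrative):

```python
# How long can a virtual photon "borrow" its energy from the vacuum?
# Heisenberg: delta_E * delta_t >~ hbar/2  =>  delta_t ~ hbar / (2 * E)
HBAR = 1.054571817e-34  # J*s
EV = 1.602176634e-19    # joules per electron-volt

def max_lifetime(energy_ev):
    """Rough upper bound on a virtual photon's lifetime, in seconds."""
    return HBAR / (2 * energy_ev * EV)

# A visible-light photon (~2 eV) vs a gamma-ray photon (~1 MeV):
print(max_lifetime(2))    # ~1.6e-16 s
print(max_lifetime(1e6))  # ~3.3e-22 s - higher energy, much shorter stay
```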


Casimir showed how the vacuum can be coaxed into giving up a tiny amount of its energy permanently. This is achieved by taking two flat metal plates and placing them very close to each other inside a vacuum. When the gap between the plates is not equal to a whole number of half-wavelengths corresponding to photons of a particular energy, those photons will not be able to form in the gap because they will not fit. This is a rather difficult concept to appreciate, since we must consider both the wave nature of light (wavelengths) and its particle nature (photons) at the same time. Nevertheless, the number of photons forming in the vacuum between the plates is less than the number on the other side of the plates, and the gap will therefore have a lower energy. But since the vacuum outside the gap has zero energy already, the region between the plates must have less than zero (or negative) energy. This causes the two plates to be pushed together with a very weak force that has nevertheless been experimentally measured.  Get it?


Here is another take; if mirrors are placed facing each other in a vacuum, some of the waves will fit between them, bouncing back and forth, while others will not. As the two mirrors move closer to each other, the longer waves will no longer fit - the result being that the total amount of energy in the vacuum between the plates will be a bit less than the amount elsewhere in the vacuum. Thus, the mirrors will attract each other, just as two objects held together by a stretched spring will move together as the energy stored in the spring decreases.




The Casimir force is too small to be observed for plates that are not within microns of each other. Two mirrors with an area of 1 cm² separated by a distance of about 1 μm feel an attractive Casimir force of about 10⁻⁷ N. Although this force seems very small, at distances of less than a micrometer the Casimir force becomes the strongest force between two neutral objects! At separations of 10 nanometers - roughly 100 times the size of an atom - the Casimir effect produces a force that is the equivalent of 1 atmosphere of pressure. The resurgence of interest in the Casimir force comes because micromechanical devices on the scale of tens of nanometers must accommodate its effects.
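For the ideal-metal case, the quoted numbers follow from Casimir's formula F = π²ħcA/(240d⁴). A quick check of the figures in the paragraph above:

```python
import math

# Ideal Casimir force between two parallel conducting plates:
#   F = pi^2 * hbar * c * A / (240 * d^4)
HBAR = 1.054571817e-34  # J*s
C = 2.99792458e8        # m/s

def casimir_force(area_m2, gap_m):
    """Attractive force (N) between ideal plates of the given area and gap."""
    return math.pi**2 * HBAR * C * area_m2 / (240 * gap_m**4)

# The example above: 1 cm^2 plates separated by 1 micrometer.
print(f"{casimir_force(1e-4, 1e-6):.1e} N")  # ~1.3e-7 N, matching the quoted ~10^-7 N
```

Note the d⁻⁴ dependence: shrinking the gap from 1 μm to 10 nm boosts the pressure by a factor of 10⁸, which is why the effect dominates at nanometer scales.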


Casimir's original theory applied only to ideal metals and dielectric materials; however, in the 1950s and '60s, the Russian physicist Evgeny Lifshitz extended Casimir's theory to include real metals and found that the forces at work could be repulsive as well as attractive. Because of his contribution, the Casimir effect is now also known as the Casimir-Lifshitz effect.   To date, only the attractive form of the effect has been studied in detail and without any immediate practical application. But the emergence of nanoscale devices has brought to light a drawback of the Casimir–Lifshitz effect: it can cause tiny pieces of machinery, such as microscopic cogs, to stick together. As such devices continue to shrink, the consequences of the effect will need to be taken more seriously.


The Casimir effect has been linked to the possibility of faster-than-light (FTL) travel because of the fact that the region inside a Casimir cavity has negative energy density. Zero energy density, by definition, is the energy density of normal "empty space." Since the energy density between the conductors of a Casimir cavity is less than normal, it must be negative. Regions of negative energy density are thought to be essential to a number of hypothetical faster-than-light propulsion schemes, including stable wormholes and the Alcubierre warp drive.


There is another interesting possibility for breaking the light-barrier by an extension of the Casimir effect. Light in normal empty space is "slowed" by interactions with the unseen waves or particles with which the quantum vacuum seethes. But within the energy-depleted region of a Casimir cavity, light should travel slightly faster because there are fewer obstacles. A few years ago, K. Scharnhorst of the Alexander von Humboldt University in Berlin published calculations showing that, under the right conditions, light can be induced to break the usual light-speed barrier. Under normal laboratory conditions this increase in speed is incredibly small, but future technology may afford ways of producing a much greater Casimir effect in which light can travel much faster. If so, it might be possible to surround a space vehicle with a "bubble" of highly energy-depleted vacuum, in which the spacecraft could travel at FTL velocities, carrying the bubble along with it.


And now, for the piece which motivated me to write this blog.   Forty years ago - in 1970, to be more precise - G. Moore suggested that a mirror undergoing relativistic motion could convert virtual photons into directly observable real photons. This effect was later named the dynamical Casimir effect (DCE). In 2011, this was finally confirmed!  Scientists have succeeded in creating light from vacuum; in an innovative experiment, they managed to capture some of the photons that are constantly appearing and disappearing in the vacuum.  Here is the reference to the paper, which has now been published in Nature.


In essence, at slow speeds, the sea of virtual particles can easily adapt to the mirror's movement and continue to come into existence in pairs and then disappear as they annihilate each other.  But when the speed of the mirror begins to match the speed of the photons, in other words at relativistic speeds, some photons become separated from their partners and so do not get annihilated. These virtual photons then become real and the mirror begins to produce light.  That's the theory. The problem in practice is that it's hard to get an ordinary mirror moving at anything like relativistic speeds.


As it's not possible to get a mirror to move fast enough, the scientists involved developed another method for achieving the same effect; instead of varying the physical distance to a mirror, they varied the electrical distance to an electrical short circuit that acts as a mirror for microwaves.  The "mirror" consists of a quantum electronic component referred to as a SQUID (superconducting quantum interference device), which is extremely sensitive to magnetic fields. By changing the direction of the magnetic field several billion times a second, the scientists were able to make the "mirror" vibrate at a speed of up to 25% of the speed of light.  The result was that photons appeared in pairs from the vacuum, which they were able to measure in the form of microwave radiation. They were also able to establish that the radiation had precisely the same properties that quantum theory says it should have when photons appear in pairs in this way.

The picture above shows virtual photons bouncing off a "mirror" that vibrates at high speed. The round mirror in the picture is a symbol, and beneath it is the quantum electronic component (referred to as a SQUID) which acts as the mirror. This makes real photons appear (in pairs) in vacuum.


What happens during the experiment is that the "mirror" transfers some of its kinetic energy to virtual photons, which helps them to materialise. According to quantum mechanics, there are many different types of virtual particles in vacuum, as mentioned earlier. The reason why photons appear in the experiment is that they lack mass.  Relatively little energy is therefore required in order to excite them out of their virtual state.  In principle, one could also create other particles from vacuum, such as electrons or protons, but that would require a lot more energy. The scientists may find the photons that appear in pairs in the experiment interesting to study in closer detail (e.g. for use in the research field of quantum information, which includes the development of quantum computers).  Still, the main value of the experiment is that it increases our understanding of basic physical concepts, such as vacuum fluctuations - the constant appearance and disappearance of virtual particles in vacuum.  It is also believed that vacuum fluctuations may have a connection with "dark energy", which drives the accelerated expansion of the universe, so these are exciting times for science.


So what is vacuum?  Be prepared for more surprises as nothingness is buzzing with life...


Credits: arXiv, Wikipedia, Chalmers press release

Hrvoje Crvelin

Life after Higgs

Posted by Hrvoje Crvelin Nov 17, 2011

In the world of science, you are either an elementary particle or you are a hadron. An elementary particle is one that can't be broken down (yet) into smaller particles. Scientists refer to elementary particles as "fundamental".  There are three types of elementary particles: quarks, leptons, and bosons. Hadrons are made of quarks and therefore are not fundamental.  There are two main families of hadrons: the baryons and the mesons. Protons and neutrons are part of the baryon family. In the meson family there are several hadrons named mostly after Greek letters such as omega, eta, chi, and psi. Mesons are also made up of various combinations of quarks.


We can also say that particles that interact by the strong interaction are called hadrons. This general classification includes mesons and baryons but specifically excludes leptons, which do not interact by the strong force. The weak interaction acts on both hadrons and leptons.  Baryons are massive particles which are made up of three quarks in the standard model. This class of particles includes the proton and neutron. Other baryons are the lambda, sigma, xi, and omega particles. Baryons are distinct from mesons in that mesons are composed of only two quarks.



Hadrons are viewed as being composed of quarks, either as quark-antiquark pairs (mesons) or as three quarks (baryons). There is much more to the picture than this, however, because the constituent quarks are surrounded by a cloud of gluons, the exchange particles for the color force.


A property of quarks labeled color is an essential part of the quark model. The force between quarks is called the color force. Since quarks make up the baryons, and the strong interaction takes place between baryons, you could say that the color force is the source of the strong interaction, or that the strong interaction is like a residual color force which extends beyond the proton or neutron to bind them together in a nucleus.


Color is the strong interaction analog to charge in the electromagnetic force. The term "color" was introduced to label a property of the quarks which allowed apparently identical quarks to reside in the same particle, for example, two "up" quarks in the proton. To allow three particles to coexist and satisfy the Pauli exclusion principle, a property with three values was needed. The idea of three primary colors like red, green, and blue making white light was attractive, and language about "colorless" particles sprang up. It has nothing whatever to do with real color, but provides three distinct quantum states. The property can be considered something like a "color charge" with three distinct values, with only color neutral particles allowed. The terms "color force" and even "quantum chromodynamics" have been used, extending the identification with color terms. The antiquarks have anti-colors, so the mesons can be colorless by having a red and an "anti-red" quark. The idea of color is supported by the fact that all commonly observed particles have either three quarks (baryons) or two (mesons), the combinations which can be "colorless" or "color neutral" with the three values of color. This does not exclude "di-baryons" with 6 quarks and other combinations of more than three. The only experimental indication of the presence of such particles is recent evidence for a penta-quark particle.

Confused?  In particle physics, the term particle zoo is used colloquially to describe the relatively extensive list of known elementary particles, which almost look like hundreds of species in a zoo.  The situation was particularly confusing in the late 1960s, before the discovery of quarks, when hundreds of strongly interacting particles (hadrons) were known. It turned out later that they were not elementary but rather composites of the quarks.


Just as chemistry has its periodic table of elements, physics has its own table to organize the particle zoo out there.  This table, or better said arrangement, is called the Standard Model.  These particles are divided into two major sections: fermions and bosons. As always, there are specific properties one has to obey to be part of one group or the other.  Indeed, a fermion is any particle which obeys Fermi-Dirac statistics (and follows the Pauli exclusion principle). Fermions contrast with bosons, which obey Bose-Einstein statistics.  In shorter and more basic wording, we can say fermions are particles with half-integer spin where no two identical fermions may occupy the same quantum state simultaneously (for example, no two electrons in a single atom can have the same four quantum numbers).  Bosons, on the other hand, are integer-spin particles not subject to the Pauli exclusion principle (any number of identical bosons can occupy the same quantum state - like photons, for example).  We see there are two characteristics here that determine whether a particle is a boson or a fermion.
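The classification rule just described - integer spin means boson, half-integer spin means fermion - is simple enough to express directly. A toy sketch (the spin values listed are the standard ones, but the code itself is only illustrative):

```python
# Classify particles by spin: integer spin -> boson, half-integer -> fermion.
SPINS = {
    "electron": 0.5, "muon": 0.5, "up quark": 0.5, "neutrino": 0.5,
    "photon": 1.0, "gluon": 1.0, "Z boson": 1.0, "Higgs": 0.0,
}

def classify(spin):
    """Return 'boson' for integer spin, 'fermion' for half-integer spin."""
    return "boson" if spin == int(spin) else "fermion"

for name, spin in SPINS.items():
    print(f"{name} (spin {spin}): {classify(spin)}")
```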


A fermion can be an elementary particle (for example an electron) or it can be a composite particle (for example a proton).  Fermions have properties, such as charge and mass, which can be seen in everyday life. They also have other properties, such as spin, weak charge, hypercharge, and colour charge, whose effects do not usually appear in everyday life. These properties are given numbers called quantum numbers.  There are 12 different types of fermions. Each type is called a "flavor." Their names are:

  • Quarks — up, down, strange, charm, bottom, top (2 up quarks and 1 down quark make a proton, and 2 down quarks and 1 up quark make a neutron)
  • Leptons — electron, muon, tau, electron neutrino, muon neutrino, tau neutrino


All bosons have an integer spin so many of them can be in the same place at the same time. There are two types of bosons:

  • Gauge bosons
  • Higgs boson


Gauge bosons are what make the fundamental forces of nature possible (we are not yet sure if gravity works through a gauge boson). Every force that acts on fermions happens because gauge bosons are moving between the fermions, carrying the force. Bosons follow a theory called Bose-Einstein statistics. The Standard Model says that there are 12 gauge bosons:

  • 8 kinds of gluons
  • photon
  • W+, W-, and Z


The fundamental members of the boson family include photons, gravitons, and gluons. Photons are little packets of electromagnetic radiation, that is, light; gravitons are presumed to be responsible for gravitational force; and gluons are responsible for (you may have guessed this one) gluing and holding together the other fundamental particles that comprise hadrons.

There are four basic known forces of nature. These forces affect fermions, and are carried by bosons traveling between those fermions. The Standard Model explains three of these four forces.

  • Strong force; this force holds quarks together to make hadrons such as protons and neutrons. The strong force is carried by gluons. The theory of quarks, the strong force, and gluons is called quantum chromodynamics (QCD). The residual strong force holds protons and neutrons together to make the nucleus of every atom. This force is carried by mesons, which are made up of a quark and an antiquark.
  • Weak force; this force can change the flavor of a fermion and causes beta decay. The weak force is carried by three gauge bosons: W+, W-, and the Z boson.
  • Electromagnetic force; this force explains electricity, magnetism, and other electromagnetic waves including light. This force is carried by the photon. The combined theory of the electron, photon, and electromagnetism is called quantum electrodynamics.
  • Gravity; this is the only fundamental force that is not explained by the Standard Model. It may be carried by a particle called the graviton. Physicists are looking for the graviton, but they have not found it yet.


The strong and weak forces are only seen inside the nucleus of an atom, and they only work over very tiny distances: distances that are about as far as a proton is wide. The electromagnetic force and gravity work over any distance, but the strength of these forces goes down as the affected objects get farther apart. The force falls with the square of the distance between the affected objects: for example, if two objects move 2 times as far away from each other, the force of gravity between them becomes 4 times weaker (2^2 = 4).
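The inverse-square scaling is easy to check numerically; here is a minimal sketch using Newton's law of gravitation (the masses and distances are arbitrary illustration values):

```python
def gravitational_force(m1, m2, r):
    """Newton's law: F = G * m1 * m2 / r^2 (SI units)."""
    G = 6.674e-11  # gravitational constant, N m^2 / kg^2
    return G * m1 * m2 / r**2

# Doubling the separation weakens the force by a factor of 2^2 = 4.
near = gravitational_force(1000.0, 1000.0, 1.0)
far = gravitational_force(1000.0, 1000.0, 2.0)
print(near / far)  # 4.0
```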


All these particles have been seen either in nature or in the laboratory. The Standard Model also predicts that there is a Higgs boson. The Standard Model says that fermions have mass (they are not just pure energy) because Higgs bosons travel back and forth between them.  The Higgs boson is the only elementary particle in the Standard Model that physicists have not yet found.  So, the Standard Model at the end looks like the following.


The Higgs boson is the only elementary particle predicted by the Standard Model that has not been observed in particle physics experiments. It is a necessary requirement of the so-called Higgs mechanism, the part of the SM which explains how most of the known elementary particles obtain their mass. For example, the Higgs mechanism would explain why the W and Z bosons, which mediate weak interactions, are massive whereas the related photon, which mediates electromagnetism, is massless. The Higgs boson is expected to be in a class of particles known as scalar bosons (vector bosons are particles with integer spin equal to one and scalar bosons have spin 0). 


So, does this mean the whole fuss about the Higgs is because the Standard Model's particles are massless, and we explain the masses observed in experiments through interaction with the Higgs field?  Yes!  How?  In the Standard Model the fundamental particles are initially massless. The masses are then generated through interactions with hypothetical scalar fields called Higgs fields, without violating the gauge symmetry. At least one of these Higgs fields should be visible as a massive scalar boson called the Higgs particle (H), which is yet to be discovered.  The heaviest quark (top) was found in 1995 and the existence of the tau neutrino was confirmed in July 2000, completing our experimental knowledge of all three quark-lepton families.


In physics, the conservation of energy means that the total amount of energy in an isolated system remains constant, although it may change forms, e.g. friction turns kinetic energy into thermal energy. In thermodynamics, the first law of thermodynamics is a statement of the conservation of energy for thermodynamic systems. In short, the law of conservation of energy states that energy cannot be created or destroyed; it can only be changed from one form to another, such as when electrical energy is changed into heat energy.  It is this very same mechanism which gave birth to the Higgs field idea and to how massless particles gain mass.  Einstein long ago showed that energy and mass are interconvertible (remember, E = mc^2); energy can become mass and vice versa.  If massless particles gain their mass through interaction with the Higgs field, we see their mass differs for some reason.  Check the Standard Model table above and you will see, for example, that the electron is mostly energy, the muon is a bit more mass, the W is more mass still, and the top quark is almost all mass.  So how does this work?  The idea is that the Higgs field is everywhere and, like any field in quantum mechanics, it consists of field particles - in this case the Higgs boson. When an electron passes through, there is a small amount of friction, a very very tiny one, so the electron has low mass and is mostly energy.  When a muon gets through, there is a bit more friction; as a consequence, part of its kinetic energy is transformed to mass.  The same goes for the W boson.  Finally, the top quark meets loads of friction, where most of its energy gets converted to mass.

One way to think about this is to invoke the celebrity-evening picture.  At a dinner attended by a few celebrities and many less familiar faces, you see the media surrounding the celebs, robbing them of their speed as they try to move through the room.  The less familiar faces feel less friction and obviously move faster.


From the point of view of elementary particle theory: all energy has mass; however, not all mass is necessarily a form of energy. The rest mass of an elementary particle is a measure of its "stickiness" (aka coupling constant) to the Higgs field.


It's a nice theory, but how do you test and prove it?  With accelerators.  A particle accelerator is a device that uses electromagnetic fields to propel charged particles to high speeds and to contain them in well-defined beams. We are surrounded by accelerators; an ordinary CRT television set is a simple form of accelerator.  Two accelerators hunting the Higgs are Fermilab's Tevatron and CERN's LHC.  The idea behind this is: if the hypothesis of Higgs bosons coming in and out of existence is correct, it should be possible to create and destroy Higgs bosons.  Scientists predict that a direct collision between two protons traveling at nearly the speed of light should force the creation of a Higgs boson.  The energy of each collision at the LHC will reach 14 trillion electron volts (TeV). This represents an enormous amount of energy when one considers the scale: the kinetic energy of a flying mosquito is about 1.6 x 10^-7 joules, or one TeV, but here it is packed into a minuscule volume. The search for the Higgs is a statistical hunt that involves looking at the particles that emanate from the high-energy collisions of protons inside the LHC, measuring their energies and directions of flight, as well as other parameters, and trying to assess whether it is likely that some of these particles result from the decay of a Higgs boson created by the collision. These assessments carry a probability measure, such as 95%, 99%, or - as traditionally required in particle physics for a "definitive" conclusion about the existence of a new particle - 99.99997% (this is the infamous "five-sigma" requirement).
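The "five-sigma" figure quoted above comes straight from the Gaussian distribution; a quick sketch of how the percentage is obtained (using the one-sided tail, as particle physicists conventionally do):

```python
import math

def one_sided_tail(n_sigma):
    """Probability of a pure background fluctuation at least n_sigma above the mean."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

tail = one_sided_tail(5)
print(tail)        # roughly 2.9e-07: the chance of a background fluke
print(1 - tail)    # roughly 0.9999997, i.e. the 99.99997% quoted above
```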

There are two primary scenarios: one that involves a high-mass Higgs boson (heavier than 130 GeV, up to around 600 GeV), and one that predicts a low-mass Higgs (between 114 GeV and 129 GeV).  Current results, by both Fermilab and CERN's LHC, have managed to show that the Higgs does not have a mass in the range between 156 and 177 GeV (Fermilab) or between 145 GeV and 466 GeV (LHC).  Put this together and it becomes clear that the Higgs is running out of space to hide (if it exists in the first place).  The big majority of physicists today expect the Higgs to be within the range 114 GeV - 145 GeV, or not to exist at all.  The LHC is expected to probe this range sometime within 2012 - exciting times ahead indeed.


Here is another view on the Higgs - from the symmetry point of view.  In physics, symmetry includes all features of a physical system that exhibit the property of symmetry - that is, under certain transformations, aspects of these systems are "unchanged", according to a particular observation.  A good example would be position in space; it doesn't matter where in the world you set up your experiment to measure the charge of the electron, you should get the same answer. Of course, if your experiment is to measure the Earth's gravitational field, you might think that you do get a different answer by moving somewhere else in space. But the rules of the game are that everything has to move - you, the experiment, and even the Earth!  If you do that, the gravitational field should indeed be the same.  So, symmetry is there. Symmetry can be sneaky, though, and thus hidden or broken.  The loss of the observable manifestation of a symmetry is called spontaneous symmetry breaking.  How do such symmetries get hidden?


We know several symmetry groups.  An example of such a group is the circle group, the rotations of a circle about its axis, called U(1).  The rotations have a product (adding angles together), an identity (the rotation of 0 degrees), and an inverse (a rotation in the opposite direction).  It's called U(1) because this group of transformations is represented by the set of all unitary matrices of dimension 1.  The Standard Model has 3 such symmetry groups: U(1), SU(2), and SU(3).  They represent 3 of the fundamental forces of nature: electromagnetic, weak nuclear, and strong nuclear respectively.  SU stands for special unitary, and SU(n) is the group of special unitary matrices of dimension n.  The Standard Model says that the generators of these symmetry groups actually represent particles!  For U(1) there is 1 generator, which is the photon.  SU(2) has 3 generators, which are the Z, W+, and W- particles.  SU(3) has 8 generators, which are the 8 different gluons.  As a first example, let's see what it means to break U(1) symmetry.  If someone hands you a perfect circle, it is impossible to differentiate any point on the circle from any other.  However, if one breaks the symmetry by marking a point, then all points can be differentiated by describing how far they are from the marked point.  The classic example of hidden symmetry in physics is the so-called weak interactions of particle physics: the interactions by which, for example, a neutron decays into a proton, an electron, and an anti-neutrino. It turns out that a very elegant understanding of the weak interactions emerges if we imagine that there is actually a symmetry (namely SU(2)) between certain particles; examples include the up and down quarks, as well as the electron and the electron neutrino (this is the insight for which Glashow, Salam and Weinberg won the Nobel Prize in 1979).
If this electroweak symmetry were manifest, it would be impossible to tell the difference between ups and downs, or between electrons and their neutrinos.  But we can.  At the high energies of the early universe, the weak nuclear force and the electromagnetic force are thought to converge into one electroweak force. As the temperature of the universe fell below a certain point, the two forces suddenly became separate. This "electroweak symmetry breaking" can be explained in terms of a field (hint: the Higgs field) shifting from an effectively empty high-energy state to its ground state, filling space with a field that gives some particles their mass.  So, there is something about the vacuum - empty space itself - which knows the difference between an up quark and a down quark, and it's the influence of the vacuum on these particles that makes them look different to us.
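Since U(1) is just the set of 1x1 unitary matrices, i.e. complex numbers of modulus one, the group structure described above can be sketched in a few lines (a toy illustration, not physics):

```python
import cmath

def u1(theta):
    """A 1x1 unitary matrix: a complex phase of modulus 1."""
    return cmath.exp(1j * theta)

a, b = u1(0.3), u1(0.5)

# Product: composing two rotations adds their angles.
assert cmath.isclose(a * b, u1(0.3 + 0.5))
# Identity: the rotation by 0 degrees.
assert cmath.isclose(u1(0.0), 1 + 0j)
# Inverse: a rotation in the opposite direction undoes the first.
assert cmath.isclose(a * u1(-0.3), 1 + 0j)
# Unitarity in one dimension: every element has modulus 1.
print(abs(a))
```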


If the vacuum is not invariant under some symmetry, there must be some field making it not invariant, by taking on a "vacuum expectation value". In other words, this field likes to have a non-zero value even in its lowest-energy state. That's not what we're used to; the electromagnetic field, for example, has its minimum energy when the field itself is zero. But "zero" doesn't break any symmetries; it's only when a field has a nonzero value in the vacuum that it can affect different particles in different ways.  To visualize what is going on, we use the so-called Mexican hat model (the picture below on the right is an animated gif, but most likely you need to click on it to get it going - at least in my experience with Google Chrome).


The picture above on the left shows that the top of the hat is the point of highest symmetry; everywhere you look you get the same view.  It is, however, unstable.  A ball placed at this point would roll down and eventually stop at a point on the hat's rim.  These points represent the several states of minimum energy that characterize the Higgs field.  The right picture above is a graph of the potential energy of a set of two fields φ1 and φ2. Fields like to sit at the minimum of their potentials; notice that in this example the minimum is not at zero, but along a circle at the brim of the hat. Notice also that there is a symmetry - we can rotate the hat, and everything looks the same. But in reality the field would actually be sitting at some particular point in the brim of the hat. The point is that you should imagine yourself sitting there along with the field, in the brim of the hat. As noted before, if you were at the peak in the center of the potential, the symmetry would be manifest - spin around, and everything looks the same. But there in the brim, the symmetry is hidden - spin around, and things look dramatically different in different directions. The symmetry is still there, but it is nonlinearly realized (hidden).


Fields can oscillate back and forth, and in quantum field theory, what you see when you look at an oscillating field is a set of particles. Furthermore, the amount of curvature in the potential tells you the mass of the particle. Sitting at the brim of the hat, there are two directions in which you can oscillate (as shown by the animated gif) - a flat direction along the brim, and a highly curved direction moving radially away from the center. That's one massless particle (motion along the brim of the hat) and one quite massive particle (radial motion). The fact that there will always be a massless particle when you have spontaneous symmetry breaking is called Goldstone's theorem, and the particle itself is called the Nambu-Goldstone boson.  So where is the massless particle?  The point is that not all symmetries are created equal. Sometimes you have a "global" symmetry, which is an honest equivalence between two or more different fields. Breaking global symmetries really does give rise to Nambu-Goldstone bosons. But other times you have gauge symmetries, which aren't really symmetries at all - they are just situations in which it's useful to introduce more fields than really exist, along with a symmetry between them, to make a more elegant description of the physics. Gauge symmetries come along with gauge bosons, which are massless force-carrying particles like the photon and the gluons.  And here's the secret of the Higgs mechanism: when you spontaneously break a gauge symmetry, the would-be Nambu-Goldstone boson gets "eaten" by the gauge bosons (the W and Z bosons)!  What you thought would be a massless spin-1 gauge boson and a massless spin-0 NG boson shows up as a single particle, a massive spin-1 gauge boson. In the case of the weak interactions, these massive gauge bosons are the two charged W particles and the neutral Z particle.
While it's true that the gauge bosons eat the would-be massless NG boson, what about the massive particle corresponding to radial oscillations in the Mexican hat potential? That should be there, and we call it the Higgs boson. So, from the symmetry point of view, there's no doubt something is breaking the symmetry. The question worth asking is: can we imagine breaking the symmetry without introducing any new particles? The experiments will have the final say, as they tend to do.
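The Mexican hat story can be checked with a few lines of arithmetic: take an illustrative potential V(r) = -mu2*r^2 + lam*r^4 (the parameter values below are made up, purely for illustration), find the brim, and verify that the radial direction is curved (a massive mode) while the potential is flat along the brim (the would-be Goldstone mode):

```python
import math

mu2, lam = 1.0, 0.25   # illustrative parameters, not physical values

def V(r):
    """Mexican hat potential as a function of the radial field value r."""
    return -mu2 * r**2 + lam * r**4

# The minimum sits on the brim, at r = sqrt(mu2 / (2*lam)), not at r = 0.
r_brim = math.sqrt(mu2 / (2 * lam))

def second_derivative(f, x, h=1e-4):
    """Curvature by central finite differences; curvature ~ (mass of the mode)^2."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

radial_curvature = second_derivative(V, r_brim)   # positive -> massive (Higgs-like) mode
print(r_brim, radial_curvature)
# Moving along the brim changes only the angle, not r, so V is unchanged there:
# zero curvature in that direction -> the massless Nambu-Goldstone mode.
```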


There is one interesting story about the Higgs boson and the search for it: a few years ago Stephen Hawking was widely reported in the press to have placed a provocative public bet that the LHC (along with all particle accelerators that preceded it) would never find the Higgs boson.  If no positive results about the Higgs should come out, Stephen Hawking - betting against the entire world of physics, as it were - would be able to cash in on his wager. So, what if we do not find the Higgs?  It would mean the Standard Model is not complete, and there is no shortage of theories taking a Higgsless approach (so-called Higgsless models).  At the moment those are:

  • Technicolor models
  • Extra-dimensional Higgsless models
  • Models of composite W and Z vector bosons
  • Top quark condensate
  • "Unitary Weyl gauge"
  • Asymptotically safe weak interactions based on some nonlinear sigma models
  • "Regular Charge Monopole Theory"
  • Preon and models inspired by preons such as Ribbon model of Standard Model particles
  • Symmetry breaking driven by non-equilibrium dynamics of quantum fields above the electroweak scale
  • Unparticle physics and the unhiggs
  • In the theory of superfluid vacuum, masses of elementary particles can arise as a result of interaction with the physical vacuum, similarly to the gap generation mechanism in superconductors


So, plenty of ideas to consider. Experiments will decide in the end.  The Standard Model has been successfully tested both at high and low energies, and the only hint of a deviation from its predictions is neutrino oscillations. Nevertheless, there are still many open questions even without the Higgs: why do quarks and leptons have such different masses? How small are the neutrino masses? Why are there three and only three families? What causes the pattern of mixing between families? Why are the discrete symmetries C (particle <-> antiparticle), P (space reflection), T (time reversal), or the combination CP violated for part of the interactions? It is generally expected that phenomena outside the Standard Model will be found at the TeV scale, which may answer some of the above questions. The existing elementary particles might again turn out to be composite, but the most promising option for an extension of the Standard Model at present seems to be the existence of a "supersymmetry" which connects bosons and fermions. Supersymmetry has very attractive theoretical features, such as unification of the coupling constants at an energy of about 10^15 GeV and a solution to the problem of why the Higgs mass is not of that magnitude. Phenomenologically it predicts a light Higgs particle and allows the decay muon -> electron + photon, or an electric dipole moment of the neutron. However, supersymmetry also implies the existence of many additional particles, the supersymmetric partners to the known fermions and bosons. They have not been seen experimentally up to now, but are searched for in present and future colliders (this would also be a broken symmetry, as particles with the properties required for an unbroken one haven't been observed).


In addition to all the particles that make up matter, there also exist particles of antimatter. For instance, the antiparticle of the negatively charged electron is a similar particle with a positive charge, called the positron. There are also antiprotons, antineutrons, and anti almost anything else (I wondered what an antiphoton would be - as it turns out, the photon is its own antiparticle).  Then we have dark matter, still a puzzle.  All these are not part of the Standard Model, but they might be one day in the future. Finding the Higgs (or not) will surely open some new book pages (and close old ones).


Credits: Wikipedia, Uncle John's Bathroom Reader Plunges Into the Universe, HyperPhysics, ETH IPP, PBS/Brian Greene, Christopher Lester, Amir D. Aczel, Charis Anastopoulos, Sean Carroll

Hrvoje Crvelin

Simulation argument

Posted by Hrvoje Crvelin Nov 15, 2011
The Simulation Hypothesis (simulation argument or simulism) proposes that reality is a simulation and those affected are generally unaware of this. The concept is reminiscent of René Descartes' Evil Genius, but posits a more futuristic simulated reality. The same fictional technology appears, in part or in whole, in the science fiction films Star Trek, Dark City, The Thirteenth Floor, The Matrix, Open Your Eyes, Vanilla Sky, Total Recall, and Inception.  I think in recent times the idea took quite a swing thanks to the Matrix trilogy, though the public took it in a somewhat wrong direction (mostly thanks to the storyline).  I usually dismiss automatically ideas hyped by movies, but somewhere in 2008 a friend sent me a link to the simulation argument site, which made some sense to me, though I saw it (and I still do) more as a philosophical approach to reality.  In 2011, I accepted the fact that there would be nothing strange or unexpected if we were running within a simulation.  I will try to explain why this is so and how it relates to the multiverse concepts I have been focusing on so far.

In its current form, the Simulation Argument began in 2003 with the publication of a paper by Nick Bostrom.  David J. Chalmers took it a bit further in his The Matrix as Metaphysics analysis, where he identified three separate hypotheses which, when combined, give what he terms the Matrix Hypothesis: the notion that reality is but a computer simulation:

  • The Creation Hypothesis states "Physical space-time and its contents were created by beings outside physical space-time"
  • The Computational Hypothesis states "Microphysical processes throughout space-time are constituted by underlying computational processes"
  • The Mind-Body Hypothesis states "mind is constituted by processes outside physical space-time, and receives its perceptual inputs from and sends its outputs to processes in physical space-time"


For the sake of the argument and discussion, I will just mention there is also the dream argument, which contends that futuristic technology is not required to create a simulated reality; rather, all that is needed is a human brain. More specifically, the mind's ability to create simulated realities during REM sleep affects the statistical likelihood of our own reality being simulated.  I remember thinking about this idea as a teenager, without even knowing someone might have taken it much farther than I could possibly know back then.  Nevertheless, I plan to focus mostly on the computational process in this blog (more precisely, on virtual world simulation or, in the words of Nick Bostrom, ancestor simulations).


So far, when talking about multiple and parallel universes, we mostly relied on mathematics and its laws and what they tell us.  These models of parallel worlds simply came out of it, pretty much like many other hard-to-believe theoretical predictions which would eventually be confirmed at some later stage.  Per se, this doesn't mean that every prediction is right, but some of these ideas are logical, and discoveries in the past 100 years have laid the ground in such a manner that a major number of serious physicists today stand behind these ideas (or at least one of them).  Nevertheless, let's forget the math for a moment and change roles.  Can we create a universe?  We are pretty sure that the processes involved during the big bang were such that we can't recreate them, and even if we could, we would have a hard time following what is going on (think of inflation).  What do we do then?  We create models.  Computer models.  Simulations.  Playing god would simply prove irresistible, wouldn't it?  (Something Michio Kaku likes to point out to be the future anyway.)

To make it clear, we are now not talking about real universes from our point of view, but rather virtual ones.  You probably had more than once (or at least once) a dream which at the moment felt so real.  You might have had a high temperature and hallucinations as well.  The bottom line is, if we modify normal brain function just a bit, though the outside world remains stable, our perception of it does not. This raises a classic philosophical question: since all of our experiences are filtered and analyzed by our respective brains, how sure are we that our experiences reflect what's real?  How do you know you're reading this sentence, and not floating in a vat on a distant planet, with alien scientists stimulating your brain to produce the thoughts and experiences you deem real?  The branch of philosophy that deals with this is called epistemology (a term introduced by James Frederick Ferrier).  It addresses the following questions:

  • What is knowledge?
  • How is knowledge acquired?
  • How do we know what we know?


The bottom line is that you can’t know for sure! We see our world through our senses, which stimulate our brain in ways our neural circuitry has evolved to interpret. If someone artificially stimulates our brain so as to elicit electrical crackles exactly like those produced by eating pizza, reading this sentence, or skydiving, the experience will be indistinguishable from the real thing. Experience is dictated by brain processes, not by what activates those processes.


OK, so let's take this a step further.  We know the brain can be stimulated, and to be able to do so we should at least be able to match what we see as our current brain processing power.  Next, we should have enough processing power to stimulate the brains of all other beings.  On further thought, we need processing power to simulate and stimulate all processes happening within at least the active region of objects in the simulation, each object here being a fundamental ingredient.  Of course, not every single particle in the universe would need to be addressed (think of the role of the observer discussed in Many Worlds).  But wait a minute.  If we are part of such a simulation, why should we believe anything we read in neurobiology texts - the texts would be simulations too, written by simulated biologists, whose findings would be dictated by the software running the simulation and thus could easily be irrelevant to the workings of "real" brains. Well, this is a valid point (philosophy always likes to leave open questions), but let's assume whoever simulates reality wishes to simulate it as real as it is.  While I'm agnostic and tend to leave God out of any discussion, it is hard not to quote here the famous line "So God created man in his own image, in the image of God he created him; male and female he created them."



Now, try to forget the above lines and imagine you are real - which most likely won't be a problem, as that is what you thought so far anyway.  What is the processing speed of the human brain, and how does it compare with the capacity of computers?  This is a difficult question.  Our brain is still pretty much unknown territory, and only recently have some serious efforts been made in that direction.  I first felt we might be onto something when I listened to a Henry Markram lecture I found on youtube back in 2009.  Henry is leading the Blue Brain Project, with the goal of reconstructing the brain piece by piece and building a virtual brain in a supercomputer. The virtual brain would be a tool giving neuroscientists a new understanding of the brain and a better understanding of neurological diseases.  The Blue Brain project began in 2005 with an agreement between the EPFL and IBM, which supplied the BlueGene/L supercomputer acquired by EPFL to build the virtual brain.


Now, the computing power needed is considerable. Each simulated neuron requires the equivalent of a laptop; a model of the whole brain would need billions of such laptops.  Nevertheless, supercomputing technology is rapidly approaching a level where simulating the whole brain becomes a concrete possibility.  As a first step, the project succeeded in simulating a rat cortical column.  This neuronal network, the size of a pinhead, recurs repeatedly in the cortex. A rat's brain has about 100,000 columns of on the order of 10,000 neurons each. In humans, the numbers are dizzying - a human cortex may have as many as two million columns, each with on the order of 100,000 neurons.


The human retina, a light-sensitive tissue lining the inner surface of the eye, has 100 million neurons (it is smaller than a dime and about as thick as a few sheets of paper) and is one of the best-studied neuronal clusters. The robotics researcher Hans Moravec has estimated that for a computer-based retinal system to be on a par with that of humans, it would need to execute about a billion operations each second. To scale up from the retina's volume to that of the entire brain requires a factor of roughly 100,000. Moravec suggests that effectively simulating a brain would require a comparable increase in processing power, for a total of about 100 million million (10^14) operations per second. Independent estimates based on the number of synapses in the brain and their typical firing rates yield processing speeds within a few orders of magnitude of this result, about 10^17 operations per second. Although it's difficult to be more precise, this gives a sense of the numbers that come into play.  Currently (2H 2011), Japan's K computer, built by Fujitsu, is the fastest in the world; it achieves a speed of 10.51 petaflops (a petaflop is 10^15 operations per second).  This statistic will most likely change in the near future.  If we use the faster estimate for brain speed, we find that a hundred million laptops - or roughly ten machines like the K computer - approach the processing power of a human brain.
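Putting the numbers from this paragraph side by side (these are only the rough estimates quoted above, not measurements; the laptop figure is a ballpark assumption):

```python
# Moravec's retina-based estimate, scaled up to the whole brain:
retina_ops = 1e9            # operations/second for a human-level retinal system
retina_to_brain = 1e5       # rough volume factor from retina to whole brain
brain_low = retina_ops * retina_to_brain    # ~1e14 ops/s
brain_high = 1e17                           # synapse-based estimate, ops/s

k_computer = 10.51e15       # K computer, 10.51 petaflops (2H 2011)
laptop = 1e9                # ballpark ops/s for an ordinary laptop

print(brain_low)                    # 1e14
print(brain_high / laptop)          # 1e8: a hundred million laptops
print(brain_high / k_computer)      # ~9.5: roughly ten K computers
```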


Now, such comparisons are likely naïve; the mysteries of the brain are manifold, and speed is only one gross measure of function. But most everyone agrees that one day we will have raw computing capacity equal to, and likely far in excess of, what biology has provided. An obvious unknown is whether we will ever leverage such power into a radical fusion of mind and machine.  Dualist theories, of which there are many varieties, maintain that there's an essential nonphysical component vital to mind. Physicalist theories of mind, of which there are also many varieties, deny this, emphasizing instead that underlying each unique subjective experience is a unique brain state. Functionalist theories go further in this direction, suggesting that what really matters to making a mind are the processes and functions - the circuits, their interconnections, their relationships - and not the particulars of the physical medium within which these processes take place.  Physicalists would agree that were you to faithfully replicate your brain by whatever means - molecule by molecule, atom by atom - the end product would indeed think and feel as you do. Functionalists would agree that were you to focus on higher-level structures - replicating all your brain connections, preserving all brain processes while changing only the physical substrate through which they occur - the same conclusion would hold. Dualists would disagree on both counts.  The possibility of artificial sentience clearly relies on a functionalist viewpoint. The earlier-mentioned Henry Markram anticipates that before 2020 the Blue Brain Project, leveraging processing speeds that are projected to increase by a factor of more than a million, will achieve a full simulated model of the human brain.
It needs to be said that Blue Brain's goal is not to produce artificial sentience, but rather to provide a new investigative tool for developing treatments for various forms of mental illness; still, Markram has gone out on a limb to speculate that, when completed, Blue Brain may very well have the capacity to speak and to feel.  What if we apply this to a virtual universe model?


The history of technological innovation suggests that iteration by iteration, the simulations would gain verisimilitude, allowing the physical and experiential characteristics of the artificial worlds to reach convincing levels of nuance and realism. Whoever was running a given simulation would decide whether the simulated beings knew that they existed within a computer; simulated humans who surmised that their world was an elaborate computer program might find themselves taken away by simulated technicians in white coats and confined to simulated locked wards. But probably the vast majority of simulated beings would consider the possibility that they're in a computer simulation too silly to warrant attention.  Even if you accept the possibility of artificial sentience, you may be persuaded that the overwhelming complexity of simulating an entire civilization, or just a smaller community, renders such feats beyond computational reach.  One may usefully distinguish between two types of simulation: in an extrinsic simulation, the consciousness is external to the simulation, whereas in an intrinsic simulation the consciousness is entirely contained within it and has no presence in the external reality.  It's time to play with numbers.



Scientists have estimated that a present-day high-speed computer the size of the earth could perform anywhere from 10^33 to 10^42 operations per second. If we assume that our earlier estimate of 10^17 operations per second for a human brain is correct, then an average brain performs about 10^24 total operations in a single hundred-year life span. Multiply that by the roughly 100 billion people who have ever walked the planet, and the total number of operations performed by every human brain since Ardi is about 10^35. Using the conservative estimate of 10^33 operations per second, we see that the collective computational capacity of the human species could be achieved with a run of less than two minutes on an earth-sized computer with today's technology.
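A quick back-of-the-envelope check of these figures (all of them the estimates quoted above, not measurements; the variable names are mine):

```python
# The text's estimates, restated; none of these are measured values.
ops_per_lifetime = 10**24             # total operations by one brain in ~100 years
people_ever = 100 * 10**9             # roughly 100 billion humans, ever
earth_computer_ops_per_sec = 10**33   # conservative end of the 10^33-10^42 range

total_human_ops = ops_per_lifetime * people_ever   # ~10^35
runtime_seconds = total_human_ops / earth_computer_ops_per_sec
print(runtime_seconds)   # 100.0 - well under two minutes
```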

Quantum computing has the capacity to increase processing speeds by spectacular factors (although we are still very far from mastering this application of quantum mechanics).  Researchers have estimated that a quantum computer no bigger than a laptop has the potential to perform the equivalent of all human thought since the dawn of our species in a tiny fraction of a second.  Again, this is just simulating brain operations; to simulate not just individual minds but also their interactions with one another and with an evolving environment, the computational load would grow orders of magnitude larger. On the other hand, a sophisticated simulation could be optimized with minimal impact on quality. For example, simulated humans on a simulated Earth won't be bothered if the computer simulates only things lying within the cosmic horizon (we can't see beyond that range anyway, so why simulate it?).  Further, the simulation might render stars beyond the sun only during simulated nights, and then only when the simulated local weather resulted in clear skies (imposing some load balancing too). When no one is looking, the computer's celestial simulator routines could take a break from working out the appropriate stimulus to provide to each and every person who might look skyward.  Remember the discussion between Bohr and Einstein described in the Many Worlds entry? It's exactly that!  A well-structured program would keep track of the mental states and intentions of its simulated inhabitants, and so would anticipate, and appropriately respond to, any impending stargazing. The same goes for simulating cells, molecules and atoms.
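The "compute only what's observed" optimization described above is just lazy evaluation with caching. Here is a minimal sketch; render_sky and its parameters are invented for the illustration, not part of any real simulation engine:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def render_sky(region, night, clear_weather):
    """Hypothetical celestial routine: do the expensive star rendering
    only when a simulated observer could actually see the result."""
    if not (night and clear_weather):
        return None              # nobody can see stars; skip the work entirely
    return f"stars:{region}"     # stand-in for an expensive computation

# The simulator calls render_sky only when an inhabitant looks up;
# repeated looks at the same region reuse the cached result for free.
print(render_sky("orion", night=True, clear_weather=True))   # stars:orion
print(render_sky("orion", night=True, clear_weather=False))  # None
```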


The march toward increasingly powerful computers, running ever more sophisticated programs, is inexorable. Even with today's rudimentary technology, the fascination of creating simulated environments is strong; with more capability, it's hard to imagine anything but more intense interest. The question is not whether our descendants will create simulated computer worlds - we're already doing it. The unknown is how realistic the worlds will become.  At this point Nick Bostrom makes a simple but powerful observation.  Our descendants are bound to create an immense number of simulated universes, filled with a great many self-aware, conscious inhabitants. If someone can come home at night, kick back, and fire up the create-a-universe software, it's easy to envision that they'll not only do so, but do so often.  One future day, a cosmic census that takes account of all sentient beings might find that the number of flesh-and-blood humans pales in comparison with those made of chips and bytes, or their future equivalents. If the ratio of simulated humans to real humans were colossal, then brute statistics suggests that we are not in a real universe. The odds would overwhelmingly favor the conclusion that you and everyone else are living within a simulation.  That's a shocking, and seemingly unavoidable, observation.
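Bostrom's statistical point reduces to simple counting. With an assumed, purely illustrative ratio of a million simulated minds per biological one:

```python
# Toy version of the counting argument; the ratio is an assumption
# chosen for illustration, not a figure from Bostrom.
simulated_minds = 10**6   # assumed simulated minds per biological mind
real_minds = 1

# If you could equally well be any of these minds, the chance you are
# one of the biological ones is vanishingly small.
p_real = real_minds / (real_minds + simulated_minds)
print(p_real)   # roughly one in a million
```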


Once we conclude that there's a high likelihood that we're living in a computer simulation, how do we trust anything (including the very reasoning that led to the conclusion)?  Will the sun rise tomorrow?  Maybe, as long as whoever is running the simulation doesn't pull the plug or run into a BSOD. Are all our memories trustworthy? They seem so, but whoever is at the keyboard may have a penchant for adjusting them from time to time.  Logic alone can't ensure that we're not in a computer simulation.

Maybe sentience can't be simulated - full stop. Or maybe, as Bostrom also suggests, civilizations en route to the technological mastery necessary to create sentient simulations will inevitably turn that technology inward and destroy themselves.  Or maybe, when our distant descendants gain the capacity to create simulated universes, they will choose not to do so, perhaps for moral reasons, or simply because other currently inconceivable pursuits prove so much more interesting that, much as we noted with universe creation, universe simulation falls by the wayside.  There are numerous loopholes, but are they large enough?  And if you were living in a simulation, could you figure that out?  Let's examine some possibilities.


The Simulator might choose to let you in on the secret. Or maybe this revelation would happen on a worldwide scale, with giant windows and a booming voice surrounding the planet, announcing that there is in fact an All Powerful Programmer up in the heavens. But even if your Simulator shied away from exhibitionism, less obvious clues might turn up. Simulations allowing for sentient beings would certainly have reached a minimum fidelity threshold, but as with designer clothes and cheap knockoffs, quality and consistency would likely vary. For example, one approach to programming simulations (the "emergent strategy") would draw on the accumulated mass of human knowledge, judiciously invoking relevant perspectives as dictated by context. Collisions between protons in particle accelerators would be simulated using quantum field theory. The trajectory of a batted ball would be simulated using Newton's laws. The reactions of a mother watching her child's first steps would be simulated by melding insights from biochemistry, physiology, and psychology. The actions of governmental leaders would fold in political theory, history, and economics. Being a patchwork of approaches focused on different aspects of simulated reality, the emergent strategy would need to maintain internal consistency as processes nominally construed to lie in one realm spilled over into another. A psychiatrist needn't fully grasp the cellular, chemical, molecular, atomic, and subatomic processes underlying brain function.  But in simulating a person, the challenge for the emergent strategy would be to consistently meld coarse and fine levels of information, ensuring for example that emotional and cognitive functions interface sensibly with physicochemical data. Simulators employing emergent strategies would have to iron out mismatches arising from the disparate methods, and they'd need to ensure that the meshing was smooth.
This would require fiddles and tweaks which, to an inhabitant, might appear as sudden, baffling changes to the environment with no apparent cause or explanation. And the meshing might fail to be fully effective; the resulting inconsistencies could build over time, perhaps becoming so severe that the world became incoherent, and the simulation crashed.


A possible way to obviate such challenges would be to use a different approach, called the "ultra-reductionist strategy".  Here, the simulation would proceed by a single set of fundamental equations, much as physicists imagine is the case for the real universe. Such simulations would take as input a mathematical theory of matter and the fundamental forces and a choice of "initial conditions" (how things were at the starting point of the simulation); the computer would then evolve everything forward in time, thereby avoiding the meshing issues of the emergent approach. But these simulations have their own set of problems. If the equations our descendants have in their possession are similar to those we work with today - involving numbers that can vary continuously - then the simulations would necessarily invoke approximations. To exactly follow a number as it varies continuously, we would need to track its value to an infinite number of decimal places (say, a number varying from .9 to 1 passes through values like .9, .97, .971, .9713, .97131, .971312, and so on, with an arbitrarily large number of digits required for full accuracy). That's something a computer with finite resources can't manage: it will run out of time and memory. So, even if the deepest equations were used, computer-based calculations would inevitably be approximate, allowing errors to build up over time.  Round-off errors, accumulated over a great many computations, can yield inconsistencies. Of course, a Simulator might wish to conceal herself too.  As inconsistencies started to build, she might reset the program and erase the inhabitants' memory of the anomalies. So it would seem a stretch to claim that a simulated reality would reveal its true nature through glitches and irregularities.
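Ordinary floating-point arithmetic already shows the round-off buildup described above: 0.1 has no exact finite binary representation, so repeatedly adding it drifts away from the exact answer.

```python
# Sum 0.1 ten times; exact arithmetic would give exactly 1.0.
total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)       # False
print(abs(total - 1.0))   # a tiny but nonzero accumulated error
```

Ten additions leave an error around 10^-16; a simulation performing 10^35 operations has vastly more opportunities for such errors to compound.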


If and when we do generate simulated worlds, with apparently sentient inhabitants, an essential question will arise: Is it reasonable to believe that we have become the very first creators of sentient simulations? Perhaps yes, but if we're keen to go with the odds, we must consider alternative explanations that, in the grand scheme of things, don't require us to be so extraordinary. Once we accept that idea, we're led to consider that we too may be in a simulation, since that's the status of the vast majority of sentient beings in a Simulated Multiverse. Evidence for artificial sentience and for simulated worlds is grounds for rethinking the nature of your own reality.  So, it may be just a matter of time before we come to that point.
Why stop there?  There's a philosophical perspective, coming from the structural realist school of thought, suggesting physicists may have fallen prey to a false dichotomy between mathematics and physics. For example, it is common for theoretical physicists to speak of mathematics providing a quantitative language for describing physical reality. But maybe, this perspective suggests, math is more than just a description of reality - maybe math is reality. A computer simulation, after all, is nothing but a chain of mathematical manipulations that take the computer's state at one moment and, according to specified mathematical rules, evolve those bits through subsequent arrangements.

The deeper point of this perspective is that the computer simulation is an inessential intermediate step, a mere mental stepping-stone between the experience of a tangible world and the abstraction of mathematical equations.  The mathematics itself (through the relationships it creates, the connections it establishes, and the transformations it embodies) contains you, your actions and your thoughts. You don't need the computer - you are in the mathematics.  In this way of thinking, everything you're aware of is the experience of mathematics. Reality is how math feels.


Max Tegmark calls this the Mathematical Universe Hypothesis (MUH, also known as the Ultimate Ensemble) and says that the deepest description of the universe should not require concepts whose meaning relies on human experience or interpretation. Reality transcends our existence and so shouldn't, in any fundamental way, depend on ideas of our making. Tegmark's view is that mathematics is precisely the language for expressing statements that shed human contagion.  As per Tegmark, nothing can possibly distinguish a body of mathematics from the universe it depicts.  Were there some feature that did distinguish math from the universe, that feature would have to be non-mathematical.  But, according to this line of thought, if the feature were non-mathematical, it must bear a human imprint, and so can't be fundamental. Thus, there's no distinguishing what we conventionally call the mathematical description of reality from its physical embodiment - they are the same.


Originally, there was a bit of an inconsistency between this model and Gödel's incompleteness theorems.  Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" (essentially, a computer program) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, a corollary of the first, shows that such a system cannot demonstrate its own consistency.  Tegmark's response is to offer a new hypothesis "that only Gödel-complete (fully decidable) mathematical structures have physical existence. This drastically shrinks the Level IV multiverse, essentially placing an upper limit on complexity, and may have the attractive side effect of explaining the relative simplicity of our universe." Tegmark goes on to note that although conventional theories in physics are Gödel-undecidable, the actual mathematical structure describing our world could still be Gödel-complete, and "could in principle contain observers capable of thinking about Gödel-incomplete mathematics, just as finite-state digital computers can prove certain theorems about Gödel-incomplete formal systems like Peano arithmetic." Later on, Tegmark gives a more detailed response, proposing as an alternative to the MUH the more restricted "Computable Universe Hypothesis" (CUH), which only includes mathematical structures that are simple enough that Gödel's theorem does not require them to contain any undecidable or uncomputable theorems.
Tegmark admits that this approach faces "serious challenges", including: (a) it excludes much of the mathematical landscape; (b) the measure on the space of allowed theories may itself be uncomputable; and (c) "virtually all historically successful theories of physics violate the CUH".  His approach is also known as "shut up and calculate", and it introduces Level IV of the multiverse - where everything, at bottom, is mathematical structure.


This is closely related to another question you may have heard before: did we discover mathematics, or did we invent it?  For centuries people have debated whether mathematics - like scientific truths - is discoverable, or simply invented by the minds of our great mathematicians. Each side of the coin raises its own question. For those who believe mathematical truths are purely discoverable: where, exactly, are you looking? And for those on the other side of the court: why can't a mathematician simply announce to the world that he has invented 2 + 2 to equal 5?  The Classical Greek philosopher Plato was of the view that math was discoverable, and that it is what underlies the very structure of our universe. He believed that by following the timeless, inbuilt logic of math, a person would discover truths independent of human observation and free of the transient nature of physical reality.  Obviously, if you accept the mathematical universe then you accept the Platonic view too.

Albert Einstein said: "The most incomprehensible thing about the universe is that it is comprehensible." Physicist Eugene Wigner wrote of "the unreasonable effectiveness of mathematics" in science. So is mathematics invented by humans, like cars and computers, music and art? Or is mathematics discovered, always out there, somewhere, like mysterious islands waiting to be found?  The question probes the deepest secrets of existence.  Roger Penrose, one of the world's most distinguished mathematicians, says that "people often find it puzzling that something abstract like mathematics could really describe reality." But you cannot understand atomic particles and structures, such as gluons and electrons, he says, except with mathematics.  Penrose, Mark Balaguer and others tend to be aware of the other side too.  So, is mathematics invented or discovered? Here's what we know. Mathematics describes the physical world with remarkable precision. Why? There are two possibilities.  First, math somehow underlies the physical world, generates it. Or second, math is a human construct for describing certain regularities in nature, and because there is so much possible mathematics, some equations are bound to fit.  As for the essence of mathematics, there are four possibilities, only one of which can really be true. Math could be: physical, in the real world, actually existing; mental, in the mind, only a human construct; Platonic, nonphysical, nonmental abstract objects; or fictional, anti-realist, utterly made up. Math is physical or mental or Platonic or fictional. Choose only one.


As to the question of whether we are living in a simulated reality or a "real" one, the answer may be "indistinguishable". Physicist Bin-Guang Ma proposed the theory of "relativity of reality", though the notion has been suggested in other contexts, such as ancient philosophy (Zhuangzi's 'Butterfly Dream') and psychoanalysis. The theory generalizes the relativity principle in physics, which is mainly about the relativity of motion: motion has no absolute meaning, since to say whether something is moving or at rest one must adopt a reference frame; without one, the state of rest or uniform motion cannot be told apart. A similar property is suggested for reality: without a reference world, one cannot tell whether the world one is living in is real or simulated. Therefore, there is no absolute meaning for reality. As in Einstein's relativity, the theory rests on two fundamental principles.

  • All worlds are equally real.
  • Simulated events and simulating events coexist.


The first principle (equally real) says that all worlds are equal in reality, even partially simulated ones (if there are living beings in them, they feel the same level of reality we feel). In this theory, the question "are we living in a simulated reality or a real one?" is meaningless, because the two are indistinguishable in principle. The "equally real" principle doesn't mean that we cannot differentiate a concrete computer simulation from our own world, since when we talk about a computer simulation we already have a reference world (the world we are in).  Coupled with the second principle ("coexistence"), the theory posits a space-time transformation between two across-reality objects (one in the real world and the other in the virtual world), which is an example of an interreality (mixed-reality) system. The first "interreality physics" experiment may be the one conducted by V. Gintautas and A. W. Hubler, in which a mixed-reality correlation between two pendula (one real and one virtual) was indeed observed.


Going back to our simulation, there are two classes of computation a computer can face:

  • computable functions, which are functions that can be evaluated by a computer running through a finite set of discrete instructions
  • noncomputable functions, which are well-defined problems that cannot be solved by any computational procedure


A computer trying to calculate a noncomputable function will churn away indefinitely without coming to an answer, regardless of its speed or memory capacity. Imagine a simulated universe in which a computer is programmed to provide a wonderfully efficient simulated chef who provides meals for all those simulated inhabitants - and only those simulated inhabitants - who don't cook for themselves. The question is: whom does the computer charge with feeding the chef? Think about it, and it makes your head hurt. The chef can't cook for himself, as he only cooks for those who don't cook for themselves; but if the chef doesn't cook for himself, he is among those for whom he is meant to cook.  The successful universes constituting the Simulated Multiverse would therefore have to be based on computable functions.  Then the simplest explanation of our universe is the simplest program that computes it. In 1997, Jürgen Schmidhuber pointed out that the simplest such program actually computes all possible universes with all types of physical constants and laws, not just ours. His essay also talks about universes simulated within parent universes in nested fashion, and about universal complexity-based measures on possible universes.  It is hard not to see parallels here with the Ultimate multiverse and the anthropic principle discussed earlier in previous multiverse blog entries.  Here is a clip where Schmidhuber talks about all computable universes at the World Science Festival 2011.
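The chef puzzle is a version of Russell's barber paradox, and the inconsistency can be made concrete with a brute-force check (the variable names are mine, purely for illustration):

```python
# Rule: the chef cooks for exactly those who don't cook for themselves.
# Try both possible answers for the chef's own status and keep only
# the ones consistent with the rule.
consistent = []
for chef_cooks_for_himself in (False, True):
    # By the rule, the chef is served by the chef iff the chef
    # does NOT cook for himself.
    chef_is_served_by_chef = not chef_cooks_for_himself
    if chef_cooks_for_himself == chef_is_served_by_chef:
        consistent.append(chef_cooks_for_himself)

print(consistent)   # [] - neither truth value works; the rule is contradictory
```

No assignment satisfies the rule, which is why a program required to compute it would never settle on an answer.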


Our reality is far from what it seems, but we should be open-minded.  Mathematics has so far been embraced as a framework which not only explains what we know, but also points us toward strange new paths to explore and discover.  It would be a mistake not to note that this has happened before, and it will surely happen again.  We like to believe in testable experiments, but sometimes mathematics is all there is - until we find something else... And so the story continues, and with technological advances we have some exciting times ahead.  It is exactly these kinds of theories, and their testing, that make me wish I could live forever...

Credits: Brian Greene, Michio Kaku, Nick Bostrom, David J. Chalmers, Blue Brain Project, Max Tegmark, Josh Hill, Jürgen Schmidhuber


Related posts:

Deja vu Universe



Landscape Multiverse

Many worlds

Holographic Principle to Multiverse Reality

If you want to see a hologram, you don't have to look much farther than your wallet. There are holograms on most driver's licenses, ID cards and credit cards. If you're not old enough to drive or use credit, you can still find holograms around your home. They're part of CD, DVD and software packaging, among many other things. These holograms aren't very impressive. You can see changes in colors and shapes when you move them back and forth, but they usually just look like sparkly pictures or smears of color. Even the mass-produced holograms that feature movie and comic book heroes can look more like green photographs than amazing 3-D images.


Yoda holographic it is...

On the other hand, large-scale holograms, illuminated with lasers or displayed in a darkened room with carefully directed lighting, are incredible. They're two-dimensional surfaces that show absolutely precise, three-dimensional images of real objects. You don't even have to wear special glasses or look through a View-Master to see the images in 3D.  If you look at these holograms from different angles, you see objects from different perspectives, just like you would if you were looking at a real object. Some holograms even appear to move as you walk past them and look at them from different angles. Others change colors or include views of completely different objects, depending on how you look at them. Is there any relation between holograms and physics?  Or even better, to our multiverse story?  It turns out - there is!


The holographic principle is a property of quantum gravity and string theories which states that the description of a volume of space can be thought of as encoded on a boundary to the region - preferably a light-like boundary like a gravitational horizon. First proposed by Gerard 't Hooft, it was given a precise string-theory interpretation by Leonard Susskind who combined his ideas with previous ones of Gerard 't Hooft and Charles Thorn. Thorn observed in 1978 that string theory admits a lower dimensional description in which gravity emerges from it in what would now be called a holographic way.  In a larger and more speculative sense, the theory suggests that the entire universe can be seen as a two-dimensional information structure "painted" on the cosmological horizon, such that the three dimensions we observe are only an effective description at macroscopic scales and at low energies. Cosmological holography has not been made mathematically precise, partly because the cosmological horizon has a finite area and grows with time.  The holographic principle was inspired by black hole thermodynamics, which implies that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects which have fallen into the hole can be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory.  Confused?  Don't be - we'll start with black holes - yet another singularity in our Universe.


John Wheeler once said our Universe - matter and radiation - should be viewed as secondary, as carriers of a more abstract and fundamental entity - information. Information forms an irreducible kernel at the heart of reality.  From this perspective, the universe can be thought of as an information processor. It takes information regarding how things are now and produces information delineating how things will be at the next now, and the now after that. Our senses become aware of such processing by detecting how the physical environment changes over time. But the physical environment itself is emergent; it arises from the fundamental ingredient, information, and evolves according to the fundamental rules, the laws of physics.  Now, let's step into black hole territory.


I doubt you've never heard of black holes.  If nothing else, you've heard of a region of space with such strong gravitational pull that nothing can escape it - including light, which is why it is black.  Building on Einstein's earlier work on general relativity, Karl Schwarzschild did some calculations and found something no one had expected or seen up to that point: if enough mass were crammed into a small enough ball, a gravitational abyss would form.  At the time these objects were called dark stars, then frozen stars; in the end it was the earlier-mentioned John Wheeler who coined the name "black hole", which has been in use ever since.  Early on, Einstein didn't really like the whole idea.  For a star as massive as the sun to be a black hole, it would need to be squeezed into a ball about three kilometers across; a body as massive as the earth would become a black hole only if squeezed to about a centimeter across.  It is hard to imagine such a thing, isn't it?  Yet, in the decades since, astronomers have gathered overwhelming observational evidence that black holes are both real and numerous. There is wide agreement that a great many galaxies are powered by an enormous black hole at their center; the Milky Way revolves around a black hole whose mass is about three million times that of our Sun.


A 19th-century branch of physics called thermodynamics (today statistical mechanics) gave rise to some of the fundamental laws of physics we know today.  One of the most important is the second law of thermodynamics.  Sometimes things get clearer through example rather than definition; we'll do that here with the steam engine (the innovation that initially drove thermodynamics).



The core of a steam engine is a vat of water vapor that expands when heated, driving the engine's piston forward, and contracts when cooled, returning the piston to its initial position, ready to drive forward once again. In the late 19th and early 20th centuries, physicists worked out the molecular underpinnings of matter, which among other things provided a microscopic picture of the steam’s action. As steam is heated, its H2O molecules pick up increasing speed and career into the underside of the piston. The hotter they are, the faster they go and the bigger the push. To understand the steam’s force we do not need the details of which particular molecules happen to have this or that velocity or which happen to hit the piston precisely here or there. To figure out the piston’s push, we need only the average number of molecules that will hit it in a given time interval, and the average speed they’ll have when they do.


Now, these are much coarser data, but it's exactly such pared-down information that's useful.  In crafting mathematical methods for systematically sacrificing detail in favor of such higher-level aggregate understanding, physicists honed a wide range of techniques and developed a number of powerful concepts. One such concept is entropy, which characterizes how finely arranged (or not) the constituents of a given system need to be for it to have the overall appearance it does.  When something is highly disordered, like a kid's room usually is, a great many possible rearrangements of its constituents leave its overall appearance intact. If you have an untidy room and you rearrange the items strewn across the floor (like tossed toys), the room will look the same. But when something is highly ordered, like a tidy room, even small rearrangements are easily detected.  Take any system and count the number of ways its constituents can be rearranged without affecting its gross, overall, macroscopic appearance. That number is the system's entropy.  If there's a large number of such rearrangements, entropy is high: the system is highly disordered. If the number of such rearrangements is small, entropy is low: the system is highly ordered.  If you wave your hand through a vat of steam, you rearrange millions of H2O molecules, yet the steam still looks much the same - undisturbed.  Now imagine another form of H2O: ice cubes.  Try to rearrange those and you see the difference immediately.  The entropy of the steam is high (many rearrangements will leave it looking the same); the entropy of the ice is low (few rearrangements will leave it looking the same).
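The counting idea can be made concrete with a toy model: here 100 two-state "toys" stand in for the room, a macrostate is just how many are face-up, and the entropy-like quantity is the logarithm of the number of indistinguishable rearrangements (a sketch, not a physical calculation):

```python
from math import comb, log

# Number of microscopic arrangements (microstates) per macrostate:
W_disordered = comb(100, 50)   # "messy": any 50 of 100 toys face-up
W_ordered = comb(100, 0)       # "tidy": exactly one way, all face-down

# Entropy ~ log of the number of arrangements (Boltzmann's S = k ln W,
# with the constant dropped for the toy model).
print(log(W_disordered))   # about 67 - enormously many ways to be messy
print(log(W_ordered))      # 0.0 - exactly one way to be perfectly tidy
```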


The Second Law of Thermodynamics states that, over time, the total entropy of a system will increase.  By definition, a higher-entropy configuration can be realized through many more microscopic arrangements than a lower-entropy configuration. As a system evolves, it's overwhelmingly likely to pass through higher-entropy states since, simply put, there are more of them.  Ice melting in a warm room is a common example of increasing entropy, described in 1862 by Rudolf Clausius as an increase in the disgregation of the ice molecules.  The idea is general. Glass shattering, a candle burning, ink spilling, perfume pervading: these are different processes, but the statistical considerations are the same. In each, order degrades to disorder, and does so because there are so many ways to be disordered.  Being statistical, the Second Law does not say that entropy can't decrease, only that it is extremely unlikely to do so. The milk molecules you just poured into your coffee might, as a result of their random motions, coalesce into a floating figurine of Santa Claus. But don't hold your breath - a floating milk Santa has very low entropy.  Similar considerations hold for the vast majority of high-to-low-entropy evolutions, making the Second Law appear inviolable.  How does this apply to black holes?

Wheeler noticed that black holes would seem to violate this law.  No matter how much entropy a system has, once it falls into a black hole that entropy is seemingly gone. Since nothing escapes from a black hole, the system's disorder would appear permanently lost. Black holes would seem to be entropy-free.


According to basic thermodynamics, there's a close association between entropy and temperature. Temperature is a measure of the average motion of an object's constituents: hot objects have fast-moving constituents, cold objects have slow-moving constituents. Entropy is a measure of the possible rearrangements of these constituents that, from a macroscopic viewpoint, would go unnoticed. Both entropy and temperature thus depend on aggregate features of an object's constituents; they go hand in hand. Any object with a nonzero temperature radiates. Hot coal radiates visible light; we humans, typically, radiate in the infrared. If a black hole has a nonzero temperature, it too should radiate. But that conflicts blatantly with the established understanding that nothing can escape a black hole's gravitational grip. So the initial conclusion was that black holes do not have a temperature. Black holes do not harbor entropy. Black holes are entropy sinkholes. In their presence, the Second Law of Thermodynamics fails.  Ouch!  And then Stephen Hawking stepped in.

In 1971, Stephen Hawking realized that black holes obey a particular law.  If you have a collection of black holes with various masses and sizes, some engaged in stately orbital waltzes, others pulling in nearby matter and radiation, and still others crashing into each other, the total surface area of the black holes increases over time. By "surface area" Hawking meant the area of each black hole's event horizon. Now, there are many results in physics that ensure quantities don't change over time (conservation of energy, conservation of charge, conservation of momentum, and so on), but there are very few that require quantities to increase. It was natural, then, to consider a possible relation between Hawking's result and the Second Law. If we envision that, somehow, the surface area of a black hole is a measure of the entropy it contains, then the increase in total surface area could be read as an increase in total entropy.

eventhor.jpg

To fully assess the nature of black holes and understand how they interact with matter and radiation, we must include quantum considerations.  Hawking studied how quantum fields would behave in a very particular spacetime arena: that created by the presence of a black hole. A well-known feature of quantum fields in ordinary, empty, uncurved spacetime is that their jitters allow pairs of particles, for instance an electron and its antiparticle the positron, to momentarily erupt out of the nothingness, live briefly, and then smash into each other, with mutual annihilation the result. This process, called quantum pair production, has been intensively studied both theoretically and experimentally, and is thoroughly understood.  Characteristic of quantum pair production is that while one member of the pair has positive energy, the law of energy conservation dictates that the other must have an equal amount of negative energy. So far so good.  Over and over again, quantum jitters result in particle pairs being created and annihilated, created and annihilated, and so on.  Hawking reconsidered such ubiquitous quantum jitters near the event horizon of a black hole. He found that sometimes events look much as they ordinarily do: pairs of particles are randomly created; they quickly find each other; they are destroyed. But every so often something new happens. If the particles are formed sufficiently close to the black hole's edge, the one carrying negative energy can get sucked in while the other escapes into space. To someone watching from afar, the escaping particles look like radiation, a form since named Hawking radiation. The other particle, the one that falls into the black hole, also has a detectable impact. Much as a black hole's mass increases when it absorbs anything that carries positive energy, so its mass decreases when it absorbs anything that carries negative energy. The black hole emits a steady outward stream of radiation as its mass gets ever smaller.
When quantum considerations are included, black holes are thus not completely black.


In recent years, physicists have been toying with laboratory experiments that imitate the physics of an event horizon. This marks the point where escape from a black hole is impossible because the velocity required exceeds the speed of light, the cosmic speed limit.  Analogue black holes have a similar point that cannot be crossed because the speed required is too great. Unlike in a real black hole, however, this "horizon" is not created by intense gravity, since we do not know how to synthesise a black hole, but by some other mechanism - utilising sound or light waves, for example. However, no one had seen photons resembling Hawking radiation emerging from these analogues until 2010. To create their lab-scale event horizon (as in the picture above), Daniele Faccio, Francesco Belgiorno and their colleagues focused ultrashort pulses of infrared laser light at a wavelength of 1055 nanometres into a piece of glass. The extremely high intensity of these pulses - trillions of times that of sunlight - temporarily skews the properties of the glass. In particular, it boosts the glass's refractive index, the extent to which the glass slows down light travelling through it.  The result is a moving point of very high refractive index, equivalent to a physical hill, which acts as a horizon. Photons entering the glass behind this "hill", including ones that are part of a virtual pair, slow as they climb the hill and are unable to pass through it. Relative to the slow-moving pulse, they have come to a stop and remain behind the pulse until it has passed through the glass's length. To see if this lab-made event horizon was producing any Hawking radiation, the researchers placed a light detector next to the glass, perpendicular to the laser beam to avoid being swamped by its light. Some of the photons they detected were due to the infrared laser interacting with defects in the glass: this generates light at known wavelengths, for example between 600 and 700 nanometres.
But mysterious, "extra" photons also showed up at wavelengths of between 850 and 900 nanometres in some runs, and around 300 nanometres in others, depending on the exact amount of energy that the laser pulse was carrying. Because this relationship between the wavelength observed and pulse energy fits nicely with theoretical calculations based on separating pairs of virtual photons, Faccio's team concludes that the extra photons must be Hawking radiation.  Hawking radiation is also popping up in other, less direct black hole imitators. A team led by Silke Weinfurtner announced in August 2010 that they had observed a water-wave version of Hawking radiation in an experiment involving waves slowed to a halt to form a horizon.

blackhole3.jpg

Back to black holes: as particles stream from just outside the event horizon, they fight an uphill battle to escape the strong gravitational pull. In doing so, they expend energy and cool down substantially. Hawking calculated that an observer far from the black hole would find that the temperature of the resulting "tired" radiation is inversely proportional to the black hole's mass. A huge black hole, like the one at the center of our galaxy, has a temperature that's less than a trillionth of a degree above absolute zero. A black hole with the mass of the Sun would have a temperature less than a millionth of a degree. For a black hole's temperature to be high enough to barbecue the family dinner, its mass would need to be about a ten-thousandth of the Earth's. But the magnitude of a black hole's temperature is secondary. Although the radiation coming from distant astrophysical black holes won't light up the night sky, the fact that they do have a temperature, that they do emit radiation, suggests black holes do have entropy. Hawking's theoretical calculations determining a given black hole's temperature and the radiation it emits gave him all the data he needed to determine the amount of entropy the black hole should contain, according to the standard laws of thermodynamics.  And the answer he found is proportional to the surface area of the black hole. By the end of 1974, the Second Law was law once again.
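The inverse relation between temperature and mass quoted above is easy to check numerically. A minimal sketch using the standard Hawking formula T = ħc³ / (8πGMk_B), with SI constants (the 4.3 million solar masses for the galactic-center black hole is an assumption for illustration):

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J*s
c = 2.998e8         # speed of light, m/s
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23     # Boltzmann constant, J/K
M_sun = 1.989e30    # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature in kelvin: inversely proportional to mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(M_sun))          # ~6e-8 K: under a millionth of a degree
print(hawking_temperature(4.3e6 * M_sun))  # galactic-center scale: ~1.4e-14 K
```

Both figures line up with the text: a solar-mass hole sits below a millionth of a degree, and a supermassive one below a trillionth.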

Nevertheless, in time another question arose - where is the entropy stored? This is how information came to play a crucial role.  If we extend the previous definition of entropy, seen as a measure of disorder, we can say that entropy measures the additional information hidden within the microscopic details of a system which, should you have access to it, would distinguish the configuration at the micro level from all of its macro look-alikes.  Let's say you clean up your room, including your coin collection that was previously scattered across the floor.  This collection hides high entropy.  Each coin can show either heads or tails. With 2 coins you have 4 possible configurations, with 3 coins you have 8 possible configurations, and so on. With 1000 coins that would be 2^1000 combinations.  At the macroscopic level, the particular arrangement makes no difference to how tidy the room is, but it all adds up to the entropy of the system.  So, the entropy of a system is related to the number of indistinguishable rearrangements of its constituents, but properly speaking it is not equal to that number itself. The relationship is expressed by a mathematical operation called a logarithm (using logarithms has the advantage of allowing us to work with more manageable numbers).
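The coin counting above can be sketched in a few lines; the doubling per coin and the logarithm relation are exactly the arithmetic just described (the function names are mine):

```python
import math

def config_count(num_coins):
    """Each coin doubles the number of head/tail arrangements."""
    return 2 ** num_coins

def entropy_bits(num_microstates):
    """Entropy as the logarithm (base 2) of the microstate count."""
    return math.log2(num_microstates)

print(config_count(2))                   # 4 configurations
print(config_count(3))                   # 8 configurations
print(entropy_bits(config_count(1000)))  # 1000.0 - far more manageable than 2**1000
```

The last line shows why the logarithm is convenient: 2^1000 has about 300 digits, while its base-2 logarithm is simply 1000.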

Now, ask yourself: what is information?  Research by mathematicians, physicists, and computer scientists has made this precise. Their investigations have established that the most useful measure of information content is the number of distinct yes-no questions the information can answer. The coins' information answers 1000 such questions: Is the first coin heads? Yes. Is the second coin heads? Yes. Is the third coin heads? No. Is the fourth coin heads? No. And so on. A datum that can answer a single yes-no question is called a bit - short for binary digit, meaning a 0 or a 1, which you can think of as a numerical representation of yes or no. The heads-tails arrangement of the 1000 coins thus contains 1000 bits of information.  The value of the entropy and the amount of hidden information are equal. With entropy defined as the logarithm of the number of such rearrangements - 1000 in this case - entropy is the number of yes-no questions any one such sequence answers.  So, a system's entropy is the number of yes-no questions that its microscopic details have the capacity to answer, and so the entropy is a measure of the system's hidden information content.
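The one-yes-no-question-per-coin bookkeeping can be made concrete (the particular heads/tails sequence here is an arbitrary illustration, chosen to match the yes, yes, no, no answers above):

```python
# One particular arrangement of 1000 coins: H = heads, T = tails.
config = "HHTT" + "T" * 996

# Each coin answers exactly one yes/no question: "is coin i heads?"
answers = [c == "H" for c in config]

print(len(answers))  # 1000 questions answered -> 1000 bits
print(answers[:4])   # [True, True, False, False]
```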


When Hawking worked out the detailed quantum mechanical argument linking a black hole's entropy to its surface area, he also provided an algorithm for calculating it.  He showed mathematically that the entropy of a black hole equals the number of Planck-sized cells that it takes to cover its event horizon. It's as if each cell carries one bit, one basic unit of information.


Take the event horizon of a black hole and divide it into a grid-like pattern in which the sides of each cell are one Planck length (10^-33 cm) long. Hawking proved mathematically that the black hole's entropy is the number of such cells needed to cover its event horizon - the black hole's surface area, that is, as measured in square Planck units (10^-66 square cm per cell). In the language of hidden information, it's as if each such cell secretly carries a single bit, a 0 or a 1, that provides the answer to a single yes-no question delineating some aspect of the black hole's microscopic makeup.  This picture brings another question into focus: why would the amount of information be dictated by the area of the black hole's surface?  The information contained in a library is determined by what's inside the building, not by the building's surface.  Nevertheless, when it comes to black holes, the information storage capacity is determined not by the volume of the interior but by the area of the surface - and this comes straight from the mathematics.  This is somewhat hard to grasp, as in everyday routine we do not deal with such micro details.  It came as a surprise (since confirmed by both string theory and loop quantum gravity), but also as the first hint of holography - information storage capacity determined by the area of a bounding surface and not by the volume interior to that surface. This hint would evolve into a new way of thinking, leading to some exciting ideas and further questioning our understanding of reality.
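This Planck-cell counting is straightforward to carry out. A sketch for a solar-mass black hole, following the text's simplified one-bit-per-cell rule (which drops numerical factors such as the 1/4 in the exact Bekenstein-Hawking formula):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.0546e-34  # reduced Planck constant, J*s
M_sun = 1.989e30   # solar mass, kg

planck_area = hbar * G / c**3  # square of the Planck length, ~2.6e-70 m^2

def horizon_area(mass_kg):
    """Event-horizon area of a Schwarzschild black hole."""
    r_s = 2 * G * mass_kg / c**2   # Schwarzschild radius
    return 4 * math.pi * r_s**2

def entropy_in_bits(mass_kg):
    """Simplified rule: one bit per Planck-sized cell covering the horizon."""
    return horizon_area(mass_kg) / planck_area

print(entropy_in_bits(M_sun))  # ~4e77 bits for a solar-mass black hole
```

Even with the dropped factors, the order of magnitude (around 10^77 bits for one solar mass) shows how staggeringly much information a black hole's horizon can account for.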


Imagine being in a spaceship.  As you float in free fall toward a black hole, there is no way for you to tell when you have passed the hole's event horizon - the point of no return; you simply continue falling freely toward the singularity at the center. To make this easier to imagine, let us assume it is a really big black hole, so that the gravitational squeeze is still nothing you feel once you have passed the event horizon.  Yes, bigger black holes are gentler than smaller ones. With a small one, the first thing you'll likely notice as you approach the hole is the tidal forces.  Tidal forces are nothing more than the difference in gravitational force between the near and far side of an object, and they aren't particular to black holes (the tidal force of the Moon on the Earth causes tides - hence the name).  For any reasonably sized black hole (less than thousands of Suns), the tidal force between different parts of your body will be greater than your body's ability to stay intact, so you'll be pulled apart in the up-down direction.  For much more obscure reasons, you'll also be crushed from the sides.  These two effects combined are called spaghettification.  Assuming that you somehow survive spaghettification, or that you're falling into a supermassive black hole, you can look forward to some bizarre time effects.  The point at which tidal forces destroy an object or kill a person depends on the black hole's size. For a supermassive black hole, such as those found at a galaxy's center, this point lies within the event horizon, so an astronaut may cross the event horizon without noticing any squashing and pulling (although it's only a matter of time, because once inside an event horizon, falling towards the center is inevitable). For small black holes, whose Schwarzschild radius is much closer to the singularity, the tidal forces would kill even before the astronaut reaches the event horizon.  In our example we focus on a supermassive black hole.
An example of spaghettification is shown in the picture below.
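The claim that bigger black holes are gentler follows from a simple Newtonian estimate of the head-to-foot tidal pull at the horizon. A sketch (the 2 m body length is an assumption, and the Newtonian formula is only an order-of-magnitude guide near a horizon):

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M_sun = 1.989e30 # solar mass, kg

def tidal_accel_at_horizon(mass_kg, body_length=2.0):
    """Newtonian head-to-foot tidal acceleration (m/s^2) at the event
    horizon: da ~ 2*G*M*L / r_s**3, with r_s = 2*G*M/c**2."""
    r_s = 2 * G * mass_kg / c**2
    return 2 * G * mass_kg * body_length / r_s**3

print(tidal_accel_at_horizon(M_sun))        # ~2e10 m/s^2: lethal well outside the horizon
print(tidal_accel_at_horizon(4e6 * M_sun))  # ~1e-3 m/s^2: you would never notice
```

Because the horizon radius grows with mass, the tidal strain at the horizon falls off as 1/M², which is why crossing a supermassive black hole's horizon is uneventful while a stellar-mass one shreds you first.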

Spaghettification2.jpg

If information is stored on the surface of a black hole, the event horizon, it feels a bit strange that we can pass this invisible barrier without any apparent notice, doesn't it?  If, as you pass through the horizon of a black hole, you find nothing there - nothing at all to distinguish it from empty space - how can it store information?  The answer lies in something called duality (briefly mentioned when discussing Brane Worlds).  Duality refers to a situation in which there are complementary perspectives that seem completely different, and yet are intimately connected through a shared physical anchor (we used the Albert-Marilyn image to illustrate it).  Let us apply this to our journey to the black hole.  One essential perspective is yours as you freely fall toward the black hole.  Another is that of a distant observer, watching your journey through a (powerful) telescope. The remarkable thing is that as you pass uneventfully through the black hole's horizon, the distant observer perceives a very different sequence of events. The discrepancy has to do with the earlier-mentioned Hawking radiation.  When the distant observer measures the Hawking radiation's temperature, he finds it to be tiny; let's say it's 10^-13 K, indicating that the black hole is roughly the size of the one at the center of the Milky Way. But the distant observer knows that the radiation is cold only because the photons, traveling to him from just outside the horizon, have expended their energy valiantly fighting against the black hole's gravitational pull (the photons are "tired").  As you get ever closer to the black hole's horizon, you'll encounter ever-fresher photons, ones that have only just begun their journey and so are more energetic and hotter.  As the distant observer watches you approach to within a hair's breadth of the horizon, he sees your body bombarded by increasingly intense Hawking radiation - until finally all that's left is your charred remains.  Your experience is completely different, though.
You neither see nor feel any of this hot radiation. Again, because your free-fall motion cancels the effects of gravity, your experience is indistinguishable from that of floating in empty space (so you don't suddenly burst into flames). The conclusion is that from your perspective you pass seamlessly through the horizon and head toward the black hole's singularity, while from the distant observer's perspective you are immolated by a scorching corona that surrounds the horizon.

Confused?  Let's try again. It's been established for decades that "time moves slower the lower" (GPS satellites, for example, have to deal with an additional 45 microseconds every day due to their altitude).  One way to think about gravity is as a "bending" of the time direction downward.  In this way, anything that moves forward in time will also naturally move downward.  At the event horizon of a black hole (the outer boundary), time literally points straight down.  As a result, escaping from a black hole is no more difficult than going back in time.  Once you're inside, all directions literally point toward the singularity in the center (since any direction you move in will be toward the future).  We don't normally experience time moving at different rates or being position-dependent, so when we start talking about messed-up spacetime it's useful to look at things from more than one point of view, as we did above.  So, as someone falls in, they will move slower and slower through time.  They will appear redder, colder, and dimmer.  As they approach the event horizon, their movement through time will halt as they fade completely from view.  Technically, you'll never actually see someone fall into a black hole; you'll just see them get really close.  That's our distant observer's view.  From an insider's perspective (falling into the black hole), things farther from the black hole move through time faster, so the rest of the universe will speed up from your point of view.  As a result, the rest of the universe becomes bluer, hotter, and brighter.  If the black hole is large enough - as in our example - you do not feel uncomfortable at all falling into it. Falling into the black hole is defined by passing the event horizon, the point of no return, where the velocity you would need in order to escape is larger than the velocity of light. You are trapped, but as long as you do not try to escape, you may not notice anything unusual for quite a while.
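The GPS figure quoted above can be recovered from the weak-field time-dilation estimate Δτ/τ ≈ (GM/c²)(1/r₁ − 1/r₂); a quick check (the orbital radius is rounded, and this counts only the gravitational effect, not the partially offsetting special-relativistic slowdown of the moving satellite):

```python
GM = 3.986e14       # Earth's gravitational parameter, m^3/s^2
c = 2.998e8         # speed of light, m/s
r_ground = 6.371e6  # Earth's surface radius, m
r_gps = 2.657e7     # GPS orbital radius (~20,200 km altitude), m

# Clocks higher in the gravitational potential run faster:
# "time moves slower the lower".
fractional_rate = GM / c**2 * (1 / r_ground - 1 / r_gps)

microsec_per_day = fractional_rate * 86400 * 1e6
print(microsec_per_day)  # ~45.7 microseconds gained per day
```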


OK, so we have two different descriptions here of the same event - sounds like the duality business, doesn't it?  This is hard to square with ordinary logic - the logic by which you are either alive or not alive. But the different perspectives can never confront each other. You can't get out of the black hole and prove to the distant observer that you are alive. And obviously the distant observer can't jump into the black hole and confront you with evidence that you didn't survive.  What about information? From your perspective, all your information, stored in your body and brain and in the laptop you're holding, passes with you through the black hole's horizon. From the perspective of the distant observer, all the information you carry is absorbed by the layer of radiation incessantly bubbling just above the horizon. The bits contained in your body, brain, and laptop would be preserved, but would become thoroughly scrambled as they joined, jostled, and intermingled with the sizzling hot horizon. Which means that to the distant observer, the event horizon is a real place, populated by real things that give physical expression to the information symbolically depicted in the picture above, where we presented a grid of bits across the black hole.  The conclusion is that the distant observer infers that a black hole's entropy is determined by the area of its horizon because the horizon is where the entropy is stored. Still, it is unexpected that the storage capacity is set not by the volume but by the surface.  Which brings us to the next question: what is the maximum amount of information that can be stored within a region of space?


Imagine adding matter to the region until you reach a critical juncture. At some point, the region will be so thoroughly stuffed that were you to add even a single grain of sand, the interior would go dark as the region turned into a black hole (to visualize it, imagine filling a piece of paper with dots from a pen). When that happens - game over. A black hole's size is determined by its mass, so if you try to increase the information storage capacity by adding yet more matter, the black hole will respond by growing larger - you can't increase the black hole's information capacity without forcing the black hole to enlarge.  The amount of information contained within a region of space, stored in any objects of any design, is always less than the area of the surface that surrounds the region (measured in square Planck units).  If you max out a region's storage capacity, you'll create a black hole, but as long as you stay under the limit, no black hole will form.  With all the nanotechnology business going on, you may wonder whether we are in any danger of reaching this limit any time soon.  The answer is no; a stack of five off-the-shelf terabyte hard drives fits comfortably within a sphere of radius 50 centimeters, whose surface is covered by about 10^70 Planck cells. The surface's storage capacity is thus about 10^70 bits, which is about a billion, trillion, trillion, trillion, trillion terabytes, and so enormously exceeds anything you can buy.
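The hard-drive comparison is easy to reproduce; a sketch under the same simplified one-bit-per-Planck-cell rule used for black hole entropy above:

```python
import math

hbar = 1.0546e-34  # reduced Planck constant, J*s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

planck_area = hbar * G / c**3  # ~2.6e-70 m^2

r = 0.5  # 50 cm sphere around the stack of hard drives
surface_bits = 4 * math.pi * r**2 / planck_area
print(surface_bits)  # ~1.2e70 bits of holographic capacity

bits_per_terabyte = 8e12
print(surface_bits / bits_per_terabyte)  # ~1.5e57 terabytes
```

That 10^57 terabytes is indeed a billion, trillion, trillion, trillion, trillion terabytes (10^9 x 10^12 x 10^12 x 10^12 x 10^12), so five terabyte drives don't come close to the bound.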


Susskind and 't Hooft stressed that the lesson should be general: since the information required to describe physical phenomena within any given region of space can be fully encoded by data on a surface that surrounds the region, there's reason to think that the surface is where the fundamental physical processes actually happen. Our familiar 3D reality would then be likened to a holographic projection of those distant 2D physical processes.  If this line of reasoning is correct, then there are physical processes taking place on some distant surface that, much like a puppeteer pulling strings, are fully linked to the processes taking place in your fingers, arms, and brain as you read these words. In the words of Brian Greene (from whom much of this post borrows), our experiences here, and that distant reality there, would form the most interlocked of parallel worlds - Holographic Parallel Universes.


That familiar reality may be mirrored, or perhaps even produced, by phenomena taking place on a faraway, lower-dimensional surface ranks among the most unexpected developments in all of theoretical physics. But how confident should we be that the holographic principle is right?  In 1998, the young Argentinian scientist Juan Maldacena made an amazing discovery that rocked the world.  Though I only became aware of it some 10 years later, I haven't stopped thinking about it since.  To me, Maldacena was a new Einstein.  He was only 30 years old when he made the announcement that would later leave me breathless.  What did Maldacena find?  Maldacena provided the first mathematical example of Holographic Parallel Universes.  He achieved this by considering string theory in a universe whose shape differs from ours but which, for the purpose at hand, proves easier to analyze. In a precise mathematical sense, the shape has a boundary, an impenetrable surface that completely surrounds its interior. By zeroing in on this surface, Maldacena argued convincingly that everything taking place within the specified universe is a reflection of laws and processes acting themselves out on the boundary.  Although Maldacena's method may not seem directly applicable to a universe with the shape of ours, his results are decisive because they established a mathematical proving ground in which ideas regarding holographic universes could be made explicit and investigated quantitatively.  Most exciting of all, there's now evidence that a link between these theoretical insights and physics in our universe can be forged.  Let's peek into Maldacena's work.


Branes are objects of multiple dimensions that exist within the full 10D space required by string theory. In the language of string theorists, this full space is called the bulk.  In 1995, Joe Polchinski proved that it wasn't possible to avoid them: any consistent version of M-theory had to include higher-dimensional branes.  Now, imagine a stack of three-branes, so closely spaced that they appear as a single monolithic slab (as in the picture below), and let's see how strings would behave there.  If you read Brane Worlds, you'll recall we encountered two types of strings - open snippets and closed loops.  Endpoints of open strings can move within and through branes but not off them, while closed strings have no ends and so can move freely through the entire spatial expanse.  This means closed strings can move through the bulk of space.  Maldacena's first step was to confine his mathematical attention to strings that have low energy - that is, ones that vibrate relatively slowly.  Why?  Because the force of gravity between any two objects is proportional to the mass of each; the same is true for the force of gravity acting between any two strings. Strings that have low energy have small mass, and so they hardly respond to gravity at all. By focusing on low-energy strings, Maldacena was thus suppressing gravity's influence. That brings a substantial simplification.  In string theory, gravity is transmitted from place to place by closed loops. Eliminating the force of gravity is like eliminating the influence of closed strings on anything they might encounter (such as the open string snippets living on the brane stack).  By ensuring that the two kinds of strings wouldn't affect each other, Maldacena was ensuring that they could be analyzed independently.

bulk.png

Then Maldacena changed perspective and considered these three-branes as a single object.  Previous research had established that as you stack more and more branes together, their collective gravitational field grows. Ultimately, the slab of branes behaves much like a black hole, but one that's brane-shaped (and so is called a black brane). As with a black hole, if you get too close to a black brane, you can't escape. And if you stay far away but are watching something approach a black brane, the light you'll receive will be exhausted from having fought against the black brane's gravity (making the object appear to have less energy and to be moving more slowly).  With this new perspective, he realized that the low-energy physics involved two components that could be analyzed independently:

  • slowly vibrating closed strings, moving anywhere in the bulk of space, are the most obvious low-energy carriers
  • the second component relies on the presence of the black brane. Imagine you are far from the black brane and have in your possession a closed string that's vibrating with an arbitrarily large amount of energy. Then imagine lowering the string toward the event horizon while you maintain a safe distance. The black brane will make the string's energy appear ever lower; the light you'll receive will make the string look as though it's in a slow-motion movie. The second low-energy carriers are thus any and all vibrating strings that are sufficiently close to the black brane's event horizon.


The final move was to compare the two perspectives. Maldacena noted that because they describe the same brane stack, only from different points of view, they must agree (remember, duality). Each description involves low-energy closed strings moving through the bulk of space, so this part of the agreement is manifest. But the remaining part of each description must also agree. The remaining part of the first description consists of low-energy open strings moving on the three-branes. Low-energy strings are well described by point-particle quantum field theory, and that is the case here. The particular kind of quantum field theory involved requires a number of sophisticated mathematical ingredients, but two vital characteristics are readily understood. The absence of closed strings ensures the absence of the gravitational field. And, because the strings can move only on the tightly sandwiched three-dimensional branes, the quantum field theory lives in three spatial dimensions (in addition to the one dimension of time, for a total of four spacetime dimensions).  The remaining part of the second description consists of closed strings, executing any vibrational pattern, as long as they are close enough to the black branes' event horizon to appear lethargic (that is, to appear to have low energy). Such strings, although limited in how far they stray from the black stack, still vibrate and move through nine dimensions of space (in addition to one dimension of time, for a total of ten spacetime dimensions). And because this sector is built from closed strings, it contains the force of gravity.  However different the two perspectives might seem, they're describing one and the same physical situation, so they must agree.  This is much like what we saw with black holes.
Nevertheless, this leads to a bizarre conclusion: a particular nongravitational, point-particle quantum field theory in four spacetime dimensions (the first perspective) describes the same physics as strings, including gravity, moving through a particular swath of ten spacetime dimensions (the second perspective).  The gravity of the black brane slab imparts a curved shape to the ten-dimensional spacetime swath in its vicinity (this curved spacetime is called anti-de Sitter space); the black brane slab is itself the boundary of this space. And so Maldacena showed that string theory within the bulk of this spacetime shape is identical to a quantum field theory living on its boundary. This is holography come to life.


Still confused?  Nothing to worry about - it takes time and some background to swallow this.  All of us are familiar with Euclidean geometry, where space is flat (that is, not curved). It is the geometry of figures drawn on flat sheets of paper. To a very good approximation, it is also the geometry of the world around us: parallel lines never meet, and all the rest of Euclid’s axioms hold. We are also familiar with some curved spaces. Curvature comes in two forms, positive and negative. The simplest space with positive curvature is the surface of a sphere. A sphere has constant positive curvature. That is, it has the same degree of curvature at every location (unlike an egg, say, which has more curvature at the pointy end). The simplest space with negative curvature is called hyperbolic space, which is defined as space with constant negative curvature. This kind of space has long fascinated scientists and artists alike.   By including time in the game, physicists can similarly consider spacetimes with positive or negative curvature. The simplest spacetime with positive curvature is called de Sitter space, after Willem de Sitter, the Dutch physicist who introduced it. Many cosmologists believe that the very early universe was close to being a de Sitter space. The far future may also be de Sitter-like because of cosmic acceleration. Conversely, the simplest negatively curved spacetime is called anti-de Sitter space. It is similar to hyperbolic space except that it also contains a time direction. Unlike our universe, which is expanding, anti-de Sitter space is neither expanding nor contracting. It looks the same at all times. Despite that difference, anti-de Sitter space turns out to be quite useful in the quest to form quantum theories of spacetime and gravity.  The idea is as follows: a quantum gravity theory in the interior of an anti-de Sitter spacetime is completely equivalent to an ordinary quantum particle theory living on the boundary. 
If true, this equivalence means that we can use a quantum particle theory (which is relatively well understood) to define a quantum gravity theory (which is not).  To make an analogy, imagine you have two copies of a movie, one on reels of 70-millimeter film and one on a DVD.  The two formats are utterly different, the first a linear ribbon of celluloid with each frame recognizably related to scenes of the movie as we know it, the second a two-dimensional platter with rings of magnetized dots that would form a sequence of 0s and 1s if we could perceive them at all. Yet both "describe" the same movie!  Similarly, the two theories, superficially utterly different in content, describe the same universe. The DVD looks like a metal disk with some glints of rainbowlike patterns. The boundary particle theory "looks like" a theory of particles in the absence of gravity. From the DVD, detailed pictures emerge only when the bits are processed the right way. From the boundary particle theory, quantum gravity and an extra dimension emerge when the equations are analyzed the right way.


What does it really mean for the two theories to be equivalent? First, for every entity in one theory, the other theory has a counterpart. The entities may be very different in how they are described by the theories: one entity in the interior might be a single particle of some type, corresponding on the boundary to a whole collection of particles of another type, considered as one entity. Second, the predictions for corresponding entities must be identical. Thus, if two particles have a 40 percent chance of colliding in the interior, the two corresponding collections of particles on the boundary should also have a 40 percent chance of colliding.


The particles that live on the boundary interact in a way that is very similar to how quarks and gluons interact in reality (quarks are the constituents of protons and neutrons; gluons generate the strong nuclear force that binds the quarks together - in other words, gluons are glue for quarks). Quarks have a kind of charge that comes in three varieties (called colors) and the interaction is called chromodynamics. The difference between the boundary particles and ordinary quarks and gluons is that the particles have a large number of colors, not just three. Gerard 't Hooft studied such theories and predicted that the gluons would form chains that behave much like the strings of string theory. The precise nature of these strings remained elusive, but in 1981 Alexander M. Polyakov noticed that the strings effectively live in a higher-dimensional space than the gluons do. In our holographic theories that higher-dimensional space is the interior of anti-de Sitter (AdS) space.

To understand where the extra dimension comes from, start by considering one of the gluon strings on the boundary. This string has a thickness, related to how much its gluons are smeared out in space. When physicists calculate how these strings on the boundary of AdS space interact with one another, they get a very odd result: two strings with different thicknesses do not interact very much with each other. It is as though the strings were separated spatially. One can reinterpret the thickness of the string to be a new spatial coordinate that goes away from the boundary. Thus, a thin boundary string is like a string close to the boundary, whereas a thick boundary string is like one far away from the boundary (see the picture above). The extra coordinate is precisely the coordinate needed to describe motion within the 4D AdS spacetime.
From the perspective of an observer in the spacetime, boundary strings of different thickness appear to be strings (all of them thin) at different radial locations. The number of colors on the boundary determines the size of the interior. To have a spacetime as large as the visible universe, the theory must have about 10^60 colors.  It turns out that one type of gluon chain behaves in the 4D spacetime as the graviton - the fundamental quantum particle of gravity. In this description, gravity in 4D is an emergent phenomenon arising from particle interactions in a gravityless, 3D world (physicists have known since 1974 that string theories always give rise to quantum gravity). 


Edward Witten on one side and Steven Gubser, Igor Klebanov, and Alexander Polyakov on the other, supplied the next level of understanding. They established a precise mathematical dictionary for translating between the two perspectives: given a physical process on the brane boundary, the dictionary showed how it would appear in the bulk interior, and vice versa. In a hypothetical universe, then, the dictionary rendered the holographic principle explicit. On the boundary of this universe, information is embodied by quantum fields. When the information is translated by the mathematical dictionary, it reads as a story of stringy phenomena happening in the universe's interior.  We can say that boundary physics gives rise to bulk physics.


An everyday hologram bears no resemblance to the 3D image it produces. On its surface appear only various lines, arcs, and swirls etched into the plastic. Yet a complex transformation, carried out operationally by shining a laser through the plastic, turns those markings into a recognizable 3D image. This means that the plastic hologram and the 3D image embody the same data, even though the information in one is unrecognizable from the perspective of the other.


Similarly, examination of the quantum field theory on the boundary of Maldacena's universe shows that it bears no obvious resemblance to the string theory inhabiting the interior. Even a physicist presented with both theories, without being told of the connection, would more than likely conclude that they were unrelated.


Nevertheless, the mathematical dictionary linking the two makes explicit that anything taking place in one has an incarnation in the other.

As a particularly impressive example, Witten investigated what an ordinary black hole in the interior of Maldacena's universe would look like from the perspective of the boundary theory (the boundary theory does not include gravity, so a black hole necessarily translates into something very unlike a black hole). Witten's result showed that such a black hole is the holographic projection of something thoroughly ordinary: a bath of hot particles in the boundary theory. Like a real hologram and the image it generates, the two theories - a black hole in the interior and a hot quantum field theory on the boundary - bear no apparent resemblance to each other, and yet they embody identical information.


In analyzing the relationship between quantum field theory on the boundary and string theory in the bulk, Maldacena realized that when the coupling of one theory was small, that of the other was large, and vice versa (perturbative approximation techniques are accurate only when the relevant coupling constant is a small number).  The natural test, and a possible means of proving that the two theories are secretly identical, is to perform independent calculations in each theory and then check for equality. But this is difficult to do, since when perturbative methods work for one, they fail for the other.  If we accept the duality, however, we can turn this obstacle into a tool: a problem that involves a large coupling constant in one description can be translated into the other description, where the coupling is small and the calculation becomes tractable.  And in recent years an experimentally testable result has emerged!


Black holes are predicted to emit Hawking radiation. This radiation comes out of the black hole at a specific temperature. For all ordinary physical systems, a theory called statistical mechanics explains temperature in terms of the motion of the microscopic constituents (for example, this theory explains the temperature of a glass of water or the temperature of the sun).  So, what about the temperature of a black hole? To understand it, we would need to know what the microscopic constituents of the black hole are and how they behave. Only a theory of quantum gravity can tell us that.

Some aspects of the thermodynamics of black holes have raised doubts as to whether a quantum-mechanical theory of gravity could be developed at all. It seemed as if quantum mechanics itself might break down in the face of effects taking place in black holes. For a black hole in an AdS spacetime, we now know that quantum mechanics remains intact, thanks to the boundary theory. Such a black hole corresponds to a configuration of particles on the boundary. The number of particles is very large, and they are all zipping around, so that theorists can apply the usual rules of statistical mechanics to compute the temperature. The result is the same as the temperature that Hawking computed by very different means, indicating that the results can be trusted. Most important, the boundary theory obeys the ordinary rules of quantum mechanics; no inconsistency arises.

Physicists have also used the holographic correspondence in the opposite direction - employing known properties of black holes in the interior spacetime to deduce the behavior of quarks and gluons at very high temperatures on the boundary.
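The statistical-mechanics claim above - that temperature is nothing but the motion of microscopic constituents - can be illustrated in a few lines. In this sketch (the choice of argon atoms, the 300 K target, and the sample size are assumptions for the example), velocities are drawn from the Maxwell-Boltzmann distribution and the temperature is then recovered purely from the particles' kinetic energies:

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 6.6335e-26       # mass of one argon atom, kg (assumed for the example)
T_target = 300.0     # temperature we sample at, K

rng = np.random.default_rng(0)
# Maxwell-Boltzmann: each velocity component is Gaussian with variance k_B * T / m
sigma = np.sqrt(k_B * T_target / m)
v = rng.normal(0.0, sigma, size=(1_000_000, 3))   # a million particles, 3 components each

# Recover the temperature from the motion alone: T = 2 <E_kin> / (3 k_B)
E_kin = 0.5 * m * (v**2).sum(axis=1)
T_measured = 2.0 * E_kin.mean() / (3.0 * k_B)
print(round(T_measured, 1))   # close to 300.0
```

Nothing about "temperature" was fed into the last two lines; it emerges from averaging over the microscopic motion, which is exactly what the boundary theory lets theorists do for a black hole.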

The Relativistic Heavy Ion Collider (RHIC) is one of two existing heavy-ion colliders (the other being the LHC) and the only spin-polarized proton collider in the world. It is located at Brookhaven National Laboratory in Upton, NY. By using RHIC to collide ions traveling at relativistic speeds, physicists study the primordial form of matter that existed in the universe shortly after the Big Bang. By colliding spin-polarized protons, the spin structure of the proton is explored. In 2010, RHIC physicists published results of temperature measurements from earlier experiments which concluded that temperatures in excess of 4 trillion kelvins had been achieved in gold ion collisions.  These collision temperatures resulted in the breakdown of "normal matter" and the creation of a liquid-like quark-gluon plasma.

Because the nuclei contain many protons and neutrons, the collisions create a commotion of particles that can be more than 200,000 times as hot as the sun's core.  That's hot enough to melt the protons and neutrons into a fluid of quarks and the gluons that act between them. Quark-gluon plasma is likely the form of matter that briefly existed soon after the big bang.  The challenge is that the quantum field theory (quantum chromodynamics) describing the hot soup of quarks and gluons has a large value for its coupling constant, and that compromises the accuracy of the perturbative methods used in calculations.  For example, as any fluid flows (water, molasses, or the quark-gluon plasma), each layer of the fluid exerts a drag force on the layers flowing above and below. The drag force is known as shear viscosity.  Experiments at RHIC measured the shear viscosity of the quark-gluon plasma, and the results are far smaller than those predicted by the perturbative quantum field theory calculations.

Can we use duality here?  If we invoke the holographic principle, the usual perspective is to imagine that everything we experience lies in the interior of spacetime while processes mirroring those experiences take place on a distant boundary. If we reverse that perspective, our universe (more precisely, the quarks and gluons in our universe) lives on the boundary, and so that's where the RHIC experiments take place. Maldacena's result then shows that the RHIC experiments (described by quantum field theory) have an alternative mathematical description in terms of strings moving in the bulk. Difficult calculations in the boundary description (where the coupling is large) are translated into easier calculations in the bulk description (where the coupling is small).  This is exactly what Pavel Kovtun, Andrei Starinets, and Dam Son did.  They did the math, and the results they found come impressively close to the experimental data.
This is impressive because the boundary theory doesn't model our universe fully (it doesn't contain the gravitational force), but that doesn't compromise the comparison with RHIC data, because in those experiments the particles have such small mass (even when traveling near light speed) that the gravitational force plays virtually no role. Analyzing quarks and gluons by using a higher-dimensional theory of strings can thus be viewed as a potent string-based mathematical trick.  Unlike previous multiverse models, this one holds that our notion and experience of reality is shaped by how the brain deciphers the world around us.  Our everyday experience gives us one picture of the universe; research into the secrets of matter and how things work suggests another world.  These two descriptions are essentially the same, and the mathematics of the duality serves as the translator: parallel mathematics describing parallel worlds (universes).  Obviously, this model can be combined with any previous multiverse model, as those spoke more about where a parallel world would exist, while this one is more a description of how the existing universe works (whichever one it is).
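As a quick sanity check on the numbers quoted above, the ratio between the RHIC collision temperature and the sun's core temperature can be computed directly (the solar core value of about 15.7 million kelvins is a standard-solar-model figure brought in for the comparison, not a number from the text):

```python
T_rhic = 4e12        # gold-ion collision temperature reported by RHIC in 2010, kelvins
T_sun_core = 1.57e7  # sun's core temperature, kelvins (standard solar model estimate)

ratio = T_rhic / T_sun_core
print(f"{ratio:,.0f} times hotter")   # about 255,000 - consistent with "more than 200,000 times"
```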


Many questions about the holographic theories remain to be answered. In particular, does anything similar hold for a universe like ours in place of the AdS space?  A crucial aspect of AdS space is that it has a boundary where time is well defined.  The boundary has existed and will exist forever. An expanding universe, like ours, that comes from a big bang does not have such a well-behaved boundary. Consequently, it is not clear how to define a holographic theory for our universe; there is no convenient place to put the hologram.  An important lesson that one can draw from the holographic conjecture, however, is that quantum gravity, which has perplexed some of the best minds on the planet for decades, can be very simple when viewed in terms of the right variables.


The encoding of information on the 2D event horizon surface is similar to that in a black hole, as mentioned before. What's special in this case is the realization that the amount of information on the surface must match the number of bits contained inside the volume of the universe. Since the volume can hold more bits than the surface, the world inside must be made up of grains bigger than the smallest space-time unit, the Planck length - at around 10^-14 cm instead of 10^-33 cm (the first value being the limit of current gravitational wave detectors, the second being the Planck length). Or, to put it another way, the grainy structure of a holographic universe is much easier to detect. Quantum effects will cause the space-time quanta to convulse wildly, resulting in the noise picked up by some gravitational wave detectors like GEO600 (in fact, such noise was already picked up in 2008). If this interpretation is proven to be correct, it will be ranked at the same level of achievement as the discovery of the CMB, which also first appeared as noise in a microwave detector. 
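The grain-size figure above follows from a simple counting argument, sketched here under stated assumptions: if the information inside a region of radius R must fit on its surface at one bit per Planck area, then equating the R^3/l^3 grains in the volume with the R^2/l_P^2 bits on the surface gives a grain size l of roughly (l_P^2 * R)^(1/3). The Planck length and horizon radius below are rounded order-of-magnitude values:

```python
# Holographic grain-size estimate:
#   bits in volume  ~ R**3 / l**3
#   bits on surface ~ R**2 / l_P**2
# Equating the two gives  l ~ (l_P**2 * R) ** (1/3)
l_P = 1.6e-33   # Planck length, cm
R = 4.4e28      # radius of the observable universe, cm (rounded, assumed)

grain = (l_P**2 * R) ** (1.0 / 3.0)
print(f"{grain:.1e} cm")   # roughly 5e-13 cm, within an order of magnitude of the 10^-14 cm quoted
```

The estimate lands close to the detectable scale mentioned in the text, which is exactly why a holographic universe would be so much grainier, and so much easier to probe, than one pixelated at the Planck length itself.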

GEO600 is the only experiment in the world able to test this controversial theory at this time. Unlike the other large laser interferometers, GEO600 reacts particularly sensitively to lateral movement of the beam splitter because it is constructed using the principle of signal recycling. Normally this is inconvenient, but we need the signal recycling to compensate for the shorter arm lengths compared to other detectors. The holographic noise, however, produces exactly such a lateral signal, and so the disadvantage becomes an advantage in this case. In September 2011 it was announced that GEO600 would start using the "squeezed light" method, its first application outside the laboratory.  The light from a squeezed laser fluctuates much less than light from a conventional laser source, so the sensitivity of GEO600 will be raised by some 150%.

The noise picked up by GEO600 in 2008 put many on alert.  This signal isn't a noise source that's been overlooked; it appears to be quantum fluctuations in the fabric of space-time itself. This is where things start to get interesting.  It is possible that noise at these scales is caused by a holographic projection from the horizon of our universe. A good analogy is to think about how an image becomes more and more blurry or pixelated the more you zoom in on it. The projection starts off at Planck-scale lengths at the universe's event horizon, but it becomes blurry in our local space-time.  Over at Fermilab, the holometer (meaning holographic interferometer) is being built to verify this idea.


Carefully prepared laser light travels to a beam splitter, which reflects about half the light toward a mirror at the end of one arm and transmits the rest to a mirror on the second arm. The light from both mirrors bounces back to the beam splitter, where half is again reflected and half transmitted. A photodiode measures the total intensity of the combined light from the two arms, which provides an extremely sensitive measure of the position difference of the beam splitter in two directions.  The holometer as constructed at Fermilab will include two interferometers in evacuated 6-inch steel tubes about 40 meters long. Optical systems (not shown above) in each one "recycle" laser light to create a very steady, intense laser wave with about a kilowatt of laser power to maximize the precision of the measurement. The outputs of the two photodiodes are correlated to measure the holographic jitter of the spacetime the two machines share. The holometer will measure jitter as small as a few billionths of a billionth of a meter.  The holometer should start collecting data in 2012 and could show results in two days or two years, depending on the fine-tuning needed. Regardless of whether evidence of a holographic existence materializes, the experiment will develop laser technology for new dark matter experiments and help test potential background noise for the next generation of experiments searching for gravitational waves.
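The correlation step described above can be sketched numerically (the amplitudes and sample count are invented for illustration; this is not Fermilab's analysis code): each interferometer's output is dominated by its own independent noise, but when the two streams are multiplied and averaged, the independent parts cancel toward zero while any jitter common to both machines survives:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000   # number of correlated samples (invented for the sketch)

# Tiny jitter shared by both interferometers, buried in much larger independent noise
common = rng.normal(0.0, 0.1, n)            # shared signal (arbitrary units)
out_a = common + rng.normal(0.0, 1.0, n)    # output of interferometer A
out_b = common + rng.normal(0.0, 1.0, n)    # output of interferometer B

# Cross-correlation: the independent noise averages away, leaving an estimate
# of the variance of the shared jitter (0.1**2 = 0.01)
cross = (out_a * out_b).mean()
print(round(cross, 3))   # close to 0.01
```

This is why the Holometer uses two interferometers sharing the same patch of spacetime: neither machine alone could dig a signal that small out of its own noise floor, but the correlation between them can.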


Credits: Brian Greene, Juan Maldacena, Sascha Vongehr, Stephen Hawking, Scientific American, MIT, Wikipedia, arXiv, Fermilab, Symmetry Magazine


Related posts:

Deja vu Universe



Landscape Multiverse

Many worlds

Simulation Argument
