
Hrvoje Crvelin

Exoplanets galore II

Posted by Hrvoje Crvelin Oct 28, 2011

My first post on this blog space was about exoplanets.  In case you forgot, exoplanet is short for extrasolar planet, which stands for a planet orbiting a star outside our Solar System.  Given the human life span and our search for knowledge, we can fairly say that until recently we could only guess such planets existed, but with advances in technology we are finding an ever increasing number of them.  This blog post is an update since my previous post.  In summary, I stated that I expected more and more of them to be found, and stressed their importance for finding new corners of space suitable for life forms similar to ours.  Those would need to lie within the so called habitable zone.


The habitable zone may also be referred to as the life zone, Comfort Zone, Green Belt or Goldilocks Zone.  A Goldilocks planet is a planet that falls within a star's habitable zone, and the name is often specifically used for planets close to the size of Earth. The name comes from the story of Goldilocks and the Three Bears, in which a little girl chooses from sets of three items, ignoring the ones that are too extreme (large or small, hot or cold, etc.), and settling on the one in the middle, which is "just right". Likewise, a planet following this Goldilocks Principle is one that is neither too close nor too far from a star to rule out liquid water on its surface and thus life (as humans understand it) on the planet. However, planets within a habitable zone that are unlikely to host life (e.g., gas giants) may also be called Goldilocks planets. The best example of a Goldilocks planet is the Earth itself.


Kepler, a NASA Discovery mission launched successfully on March 6, 2009, will help scientists determine just how many Earthlike planets may exist in our galactic neighborhood.  Kepler detects planets indirectly, using the "transit" method. A transit occurs each time a planet crosses the line of sight between its parent star and the observer. When this happens, the planet blocks some of the light from its star, resulting in a periodic dimming. This periodic signature is used to detect the planet and to determine its size and its orbit.
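To see why the transit method works, a quick back-of-envelope calculation helps: the fractional dimming equals the ratio of the planet's disk area to the star's disk area.  Here is a minimal Python sketch of that idea (the radii are just standard reference values, not Kepler data):

```python
# Sketch: fractional dimming during a transit is roughly the ratio of the
# planet's disk area to the star's disk area, (R_planet / R_star)^2.
R_SUN = 696_000.0     # solar radius, km
R_EARTH = 6_371.0     # Earth radius, km
R_JUPITER = 69_911.0  # Jupiter radius, km

def transit_depth(r_planet_km, r_star_km=R_SUN):
    """Fractional drop in stellar flux while the planet is in transit."""
    return (r_planet_km / r_star_km) ** 2

print(f"Earth-like:   {transit_depth(R_EARTH):.6f}")    # ~0.000084 (84 ppm)
print(f"Jupiter-like: {transit_depth(R_JUPITER):.4f}")  # ~0.0101 (about 1%)
```

The tiny Earth-like number (tens of parts per million) is exactly why Kepler needs such a sensitive photometer and long, repeated observations.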


Wesley A. Traub, author of a study of data obtained by the Kepler mission, released a paper by the end of September 2011 with some pretty amazing results.  In the paper, he estimates that on average 34% (+/-14%) of Sun-like stars have terrestrial planets in that Goldilocks zone.  WOW!  Let's take the worst case scenario here (20%) - that would be every fifth star.  But on average, taking the statistical error into account, the results show this to be every third one.  That's beyond my best wish!


OK, now the "not so fast" moment.  This is a paper based on math and current observations, and some water will need to pass under the bridge before other people do their own analysis.  Also, this result may or may not become more relevant as more data becomes available for analysis (the results in the paper are based on observations spanning only 136 days) and is finally verified using some other observational method.  Phil Plait, the "Bad Astronomer" blogger, had a take on the paper's results and I will use some of his thoughts here.


As the paper itself states, there are a couple of biases introduced in the calculations, and one you must already have thought of comes from the fact that the calculations are based on 136 days of observations (all that was available at the time of analysis).  That length of time is too short to conclusively find planets in their stars' habitable zones, so Wesley was forced to look at only short-period planets (with periods of 42 days or less), much closer to their stars, and extrapolate the data from there.  He looked at stars similar to the Sun, with a range from somewhat hotter to somewhat cooler - roughly F, G, and K stars.  These letters come from stellar classification, which is sometimes remembered as "Oh Be A Fine Girl (or Guy), Kiss Me!", since the letters O, B, A, F, G, K, M represent spectral classes.  You can find more information here. The stars analyzed (F, G, and K) comprise very roughly a quarter of the stars in the Milky Way, or something like 50 billion stars total (a rough estimate).


Moving on, Wesley then looked at data for all planets detected - terrestrials (simply said, Earth sized), ice giants (like Uranus and Neptune), and gas giants (like Jupiter) - getting their size and orbital period.  Then he found the ratio of terrestrial planets to all the planets seen (this ratio was found for planets somewhat close in to their stars, due to the observational period length noted before).  This was plotted versus distance from their parent stars. He then found an equation (called a mathematical fit) that did a good job predicting the shape of the plot. Once we have this, it's easy enough to extrapolate it out to the distance of the habitable zones of the stars.  Assuming Traub is correct, and based on the rough estimate of F, G and K stars, there could be 15 billion warm terrestrial planets in our galaxy alone!
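Just to show where a number like that comes from, here is the arithmetic as a Python sketch.  All inputs are the rough figures quoted above, so treat the output as order-of-magnitude only:

```python
# Back-of-envelope version of the estimate in the text (all inputs rough).
FGK_STARS = 50e9     # rough count of F, G, K stars in the Milky Way
FRACTION = 0.34      # Traub's mean estimate for terrestrials in the zone
UNCERTAINTY = 0.14   # the +/- 14% quoted in the paper

best = FGK_STARS * FRACTION
low = FGK_STARS * (FRACTION - UNCERTAINTY)
high = FGK_STARS * (FRACTION + UNCERTAINTY)
print(f"{low/1e9:.0f} to {high/1e9:.0f} billion, best ~{best/1e9:.0f} billion")
```

The rounder figure of 15 billion quoted above sits comfortably inside that band; the spread between 10 and 24 billion shows how much the +/- 14% uncertainty matters.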


Now, extrapolation is always dangerous because you can't be sure your fit behaves well outside the range in which you calculated it. As Phil says in his analysis, imagine you took a census of 1000 people ages 0 to 17 and made a fit to their height versus age. You'd find their height gets bigger with time, in general. But if you extrapolate that out to someone who is 40 years old, you might estimate they'll be 4 meters tall, which makes no sense.  And since we do not know very well how planets form in their solar systems (and how they move around afterwards), these results should be taken with a grain of salt.  Nevertheless, at the end of the day, this might turn out to be on the right track, and only time will tell.  The Kepler mission was launched exactly for this purpose, and with more data available we will know more for sure.
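Phil's height-versus-age warning is easy to reproduce.  Below is a toy Python example with invented numbers: a perfectly reasonable linear fit on ages 0-17 predicts an absurd height at age 40:

```python
# Toy illustration of the extrapolation warning: fit height vs. age on
# ages 0-17, then (unwisely) extrapolate to age 40. Numbers are invented.
ages = list(range(18))
heights = [50 + 6.8 * a for a in ages]  # ~50 cm at birth, ~6.8 cm per year

# Ordinary least-squares fit, done by hand
n = len(ages)
mean_a = sum(ages) / n
mean_h = sum(heights) / n
slope = sum((a - mean_a) * (h - mean_h) for a, h in zip(ages, heights)) / \
        sum((a - mean_a) ** 2 for a in ages)
intercept = mean_h - slope * mean_a

print(f"Predicted height at 40: {intercept + slope * 40:.0f} cm")  # ~322 cm
```

The fit is excellent inside its range and nonsense outside it, which is exactly the caveat that applies to extrapolating short-period planet counts out to the habitable zone.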


Astrobiology is the study of the origin, evolution, distribution, and future of life in the universe. This interdisciplinary field encompasses the search for habitable environments in our Solar System and habitable planets outside our Solar System, the search for evidence of prebiotic chemistry, laboratory and field research into the origins and early evolution of life on Earth, and studies of the potential for life to adapt to challenges on Earth and in outer space. Astrobiology addresses the question of whether life exists beyond Earth, and how humans can detect it if it does.  In looking for Earth-like planets around other stars, astrobiologists search for planets that can support liquid water. So these planets must have a temperature in the relatively narrow range that exists on Earth. The general thinking is that these conditions can only exist at a certain distance from the star (the habitable zone).  The next study (paper), released by the end of October 2011, claims the habitable zone around red dwarfs to be dramatically bigger than previously thought - by some 30%.


In our Solar System, the habitable zone stretches from about 0.7 to 3 AU, approximately from the orbit of Venus to about twice the orbit of Mars. The AU (astronomical unit) is a unit of length equal to about 149,597,870.7 km, approximately the mean Earth–Sun distance.  The size and temperature of the star are crucial, but much depends on conditions on the exoplanet itself, in particular how much light is reflected back into space - the albedo.  Albedo is the fraction of solar energy (shortwave radiation) reflected from a planet back into space. It is a measure of the reflectivity of the planet's surface. Ice, especially with snow on top of it, has a high albedo: most sunlight hitting the surface bounces back towards space. Water is much more absorbent and less reflective. So, if there is a lot of water, more solar radiation is absorbed by the ocean than when ice dominates.
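The interplay between distance, star brightness and albedo can be captured with the standard equilibrium-temperature formula.  Here is a minimal Python sketch; it deliberately ignores the greenhouse effect and other real-world complications, so the numbers are idealized:

```python
import math

# Equilibrium temperature of a planet from stellar luminosity, distance,
# and albedo. Greenhouse warming and other climate effects are ignored.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def equilibrium_temp(distance_au, albedo, luminosity=L_SUN):
    d = distance_au * AU
    return (luminosity * (1 - albedo) / (16 * math.pi * SIGMA * d**2)) ** 0.25

# Earth with its ~0.3 albedo gives the textbook ~255 K equilibrium value
print(f"Earth (albedo 0.3): {equilibrium_temp(1.0, 0.3):.0f} K")
# Lower albedo means more absorbed light, hence a warmer planet (~271 K)
print(f"Earth (albedo 0.1): {equilibrium_temp(1.0, 0.1):.0f} K")
```

Earth's ~255 K equilibrium value is the standard textbook result; the greenhouse effect lifts the actual mean surface temperature to roughly 288 K.  The albedo comparison shows why reflectivity matters so much for where the habitable zone sits.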

Scientists from the National Centre for Atmospheric Science in Reading (UK) and the NASA Ames Research Centre have, in a new study, pointed out an important new factor that dramatically extends the habitable zone around an important class of stars.  They say that the amount of light that snow and ice reflect depends on the fraction emitted at different wavelengths. The Sun produces much of its light at visible wavelengths. The albedos at these wavelengths for snow and ice are 0.8 and 0.5 respectively.  But the vast majority of stars are red dwarfs, and these emit far more of their light at longer wavelengths.  A red dwarf star is a small and relatively cool star of the main sequence, of either late K or M spectral type.  Red dwarfs constitute the vast majority of stars and have a mass of less than half that of the Sun (down to about 0.075 solar masses; below that are brown dwarfs) and a surface temperature of less than 4000 K. The albedos of ice and snow on planets orbiting M-stars are much lower than their values on Earth.

The scientists calculated the albedo for snow and ice on planets orbiting two nearby red dwarfs - Gliese 436 (just 33 light years from us) and GJ 1214 (40 light years away).  Both are known to have exoplanets, although not in the habitable zone. The wavelengths that these stars emit mean that snow and ice there have albedos of about 0.4 and 0.1 respectively.  In other words, water-bearing planets orbiting these stars ought to absorb far more energy than Earth. Therefore, this extends the radius of the potential habitable zone.  The outer edge of the habitable zone around M-stars may be 10-30% further away from the parent star than previously thought.  And not only are red dwarfs by far the most common type of star, they are also the most likely to provide us with our first view of Earth 2.0 (if we haven't seen it already). That's because they are smaller, which makes it easier to see planets orbiting close to them.  Having an extended zone makes it just that little bit more likely that we'll find another Earth sooner rather than later.
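As a very rough illustration of why lower albedo pushes the zone outward: holding the equilibrium temperature fixed, the absorbed flux scales as (1 - albedo)/d², so the matching distance scales as the square root of (1 - albedo).  The Python sketch below uses this toy scaling for a planet entirely covered in snow or ice, which is why it overshoots the paper's more careful 10-30% figure (real planets are only partly ice-covered, and the full climate model matters):

```python
import math

# Toy scaling, not the paper's model: at fixed equilibrium temperature,
# absorbed flux (1 - albedo)/d^2 is constant, so the distance at which a
# given temperature occurs scales as sqrt(1 - albedo).
def distance_ratio(albedo_new, albedo_old):
    return math.sqrt((1 - albedo_new) / (1 - albedo_old))

# Snow: albedo ~0.8 around a Sun-like star vs ~0.4 around these red dwarfs
print(f"Fully snow-covered limit: {distance_ratio(0.4, 0.8):.2f}x")  # ~1.73x
# Ice: ~0.5 vs ~0.1
print(f"Fully ice-covered limit:  {distance_ratio(0.1, 0.5):.2f}x")  # ~1.34x
```

Even this crude upper bound shows the direction of the effect: darker snow and ice around red dwarfs let the habitable zone reach further out.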



Credits: NASA, Wikipedia, arXiv, Phil Plait

Related posts: Exoplanets galore I

Hrvoje Crvelin

Many Worlds

Posted by Hrvoje Crvelin Oct 19, 2011

With this blog post I continue my series of posts about the multiverse.  So far we have touched on the following models:

- quilted universe

- inflationary universe

- braneworld universe

- landscape multiverse


Before I stepped into branes I did a bit of an introduction to the basics of string theory and dimensions.  What lies ahead of us now is another small introduction, to something you have probably heard of before - quantum mechanics.  You have probably heard of it as something that makes SciFi reality or where Star Trek is possible.  When I first met the content of quantum mechanics, I thought this must be some crackpot idea where, due to our limited understanding and knowledge, we create a framework where "everything" is possible.  The deeper I dived into the matter, the more it turned out this is a real thing and something scientists have been working on for almost a century.  Some quantum discoveries are pretty amazing and will lead you to question what reality is.  The bottom line is, reality is not what you see and feel, but you shouldn't worry about it, as quantum reality still does not affect your usual day-to-day rhythm.  But over the past decades, achievements in the field of quantum research have been pretty breathtaking, and most recently we see achievements in the field of quantum computing as well (though there is a long ride yet).  The best way to understand the quantum story is to take a quick trip into the past.  Brian Greene, in his book The Fabric of the Cosmos (highly recommended, by the way), does a nice job giving a historical overview of events.

Einstein's general relativity (I plan to spend one entire post on the relativity topic once I'm done with the multiverse story) and quantum mechanics have been the two greatest achievements of physics in the 20th century.  As it happens, both were born in the same era.  They rely on very different types of mathematics and have completely separate rules and underlying principles. General relativity breaks down at singularities and closed time loops, while quantum mechanics fails to describe the force of gravity within its framework.  This is why today we hear a lot about efforts in the science community to develop a theory of quantum gravity. But the world at the quantum level as described by quantum mechanics is so strange that it is hard to find any comparison to what we experience in our ordinary lives. The great and late Richard Feynman (I encourage you to watch videos of him on YouTube) once said: "I think I can safely say that nobody understands quantum mechanics" and "If you think you understand quantum mechanics, you don't understand quantum mechanics."  Let's explore the quantum path now.


Classical physics teaches us that if you know where every particle is, how fast it is going and in what direction, then you can use the laws of physics to predict everything.  It's like those physics tests in elementary school where, based on speed (or acceleration) and current location, you must predict where an object will be in the future.  Quantum mechanics breaks this whole concept by saying this is not possible.  That doesn't mean quantum mechanics is incomplete or bad, but rather that it gives new insight into the world we are part of.  Quantum mechanics states we can't know both the exact location and the exact velocity of any single particle. We can know one of those, but not both.  The more precisely I know the speed, the less I will know about the location, and vice versa.  What quantum mechanics shows is that the best we can ever do is predict the probability that an experiment will turn out this way or that. And this has been verified through decades of accurate experiments.  If you have 100 boxes, each with 1 electron inside, and you along with 99 friends were to open them and measure the electron positions, you would each find a different result.  Do it all again and the overall pattern would look very similar. The regularity isn't evident in any single measurement; you can't predict where any given electron will be. Instead, the regularity is found in the statistical distribution of many measurements. The regularity shows the probability (likelihood) of finding an electron at any particular location.  Quantum mechanics applies not just to electrons but to all types of particles.
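The 100-boxes thought experiment is easy to mimic with a toy simulation.  The bell-curve distribution below is purely illustrative (a real electron's distribution comes from its wavefunction); the point is that individual results differ while the histograms look alike:

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the run is reproducible

# Toy model of the 100-boxes thought experiment: each "measurement" draws a
# position from the same probability distribution (a bell curve here, purely
# for illustration - not a real electron wavefunction).
def measure_positions(n):
    return [round(random.gauss(0.0, 1.0)) for _ in range(n)]

run1 = measure_positions(100)
run2 = measure_positions(100)
print("Run 1, first five:", run1[:5])  # individual results look random...
print("Run 2, first five:", run2[:5])  # ...and differ between runs
print("Run 1 histogram:", sorted(Counter(run1).items()))
print("Run 2 histogram:", sorted(Counter(run2).items()))  # similar shape
```

No single measurement is predictable, but the histograms of many measurements converge to the same shape - the regularity lives in the statistics.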


Let's push our imagination a bit further.  Think of two objects.  If you have two birds in the sky each flying in its own direction, two men walking on opposite sides of the street, a TV remote control and a TV - any two objects with some space between them - we usually see those objects as independent of each other.  In order to influence each other they have to do something to traverse the space between them.  If I'm the person on one side of the street while you are on the other side, I need to traverse the space to reach you - either by walking over, yelling across the street, or whatever method I find suitable.  Whatever it is, something from here, where I am, has to go over there, where you are.  And this is in essence how objects influence each other, as they never share the same location.  Physicists call this feature of the universe locality, emphasizing the point that you can directly affect only things that are next to you, that are local.  For the past few decades, scientists have done experiments which have established another truth - you can do something here (e.g. point A) that has a direct influence there (e.g. point B) without anything being sent from here to there.  Voodoo?  Nope - just quantum mechanics.  Roughly speaking, and particle wise, even though two particles are widely separated, quantum mechanics shows that whatever one particle does, the other will do too.  No matter what the distance between them.


To quote Greene's example, if you are wearing a pair of sunglasses, quantum mechanics shows that there is a 50-50 chance that a particular photon - like one that is reflected toward you from the surface of a lake or from an asphalt roadway - will make it through your glare-reducing polarized lenses: when the photon hits the glass, it randomly "chooses" between reflecting back and passing through. The astounding thing is that such a photon can have a partner photon that has sped miles away in the opposite direction and yet, when confronted with the same 50-50 probability of passing through another polarized sunglass lens, will somehow do whatever the initial photon does. Even though each outcome is determined randomly and even though the photons are far apart in space, if one photon passes through, so will the other.  This is the kind of nonlocality predicted by quantum mechanics.  This property is called quantum entanglement.


This "spooky action at a distance", in Einstein's words who didn't like it at all, is a serious blow to our conception of how the world really works. In 1964, physicist John Bell (CERN) showed just how serious this is. He calculated a mathematical inequality that encapsulated the maximum correlation between the states of remote particles in experiments in which three "reasonable" conditions hold:

  1. Experimenters have free will in setting things up as they want
  2. Particle properties being measured are real and pre-existing, not just popping up at the time of measurement
  3. No influence travels faster than the speed of light (the so called cosmic speed limit)


As many experiments since have shown, quantum mechanics regularly violates Bell's inequality, yielding levels of correlation way above those possible if his conditions hold. This opens several dilemmas and is great ground for philosophical discussion.  Do we not have free will, meaning something, somehow, predetermines what measurements are taken?  Are the properties of quantum particles not real - implying that nothing is real at all, but exists merely as a result of our perception?  Or is there really an influence that travels faster than light?  In 2008 physicist Nicolas Gisin and his colleagues at the University of Geneva showed that, if reality and free will hold, the speed of transfer of quantum states between entangled photons held in two villages 18 kilometers apart was somewhere above 10 million times the speed of light.  Take your pick.
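The quantum prediction that violates Bell's bound can be written down in a few lines.  In the commonly used CHSH form of the inequality, local realism caps the correlation combination S at 2, while quantum mechanics for a spin singlet (whose correlation between measurement angles a and b is E(a,b) = -cos(a-b)) reaches 2√2 at suitably chosen angles:

```python
import math

# CHSH form of Bell's inequality: under the "reasonable" (local-realist)
# conditions, S = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| <= 2.
# Quantum mechanics for a spin singlet predicts E(a,b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Measurement angles that maximize the quantum violation
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(f"Quantum S = {S:.3f} (local realism caps S at 2)")  # 2*sqrt(2) ~ 2.828
```

That gap between 2 and 2.828 is exactly what the experiments mentioned above keep measuring - and why at least one of Bell's three conditions has to give.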


My introduction to quantum entanglement was through the Elegant Universe series, based upon the book of the same title by Brian Greene.  It describes the so-called double-slit experiment, which shows the point, and this experiment has since captured my imagination about quantum reality (or what reality really is).  To get some basics, we have to start with waves.  In nature we know many kinds of waves (electromagnetic, acoustic, etc.), so we stick to something easy to visualize - a water wave.  Throw a pebble into the water and you get a wave.  A water wave disturbs the flat surface of the water by creating regions where the water level is higher than usual and regions where it is lower than usual. The highest part of a wave is called its peak and the lowest part is called its trough. A typical wave involves a periodic succession: peak followed by trough followed by peak, and so forth. If two waves head toward each other (if you throw two pebbles into water at nearby locations, producing outward-moving waves that run into each other), when they cross there results an important effect known as interference.  The picture below shows it.


When a peak of one wave and a peak of the other cross, the height of the water is even greater, being the sum of the two peak heights. Similarly, when a trough of one wave and a trough of the other cross, the depression in the water is even deeper, being the sum of the two depressions. And here is the most important combination: when a peak of one wave crosses the trough of another, they tend to cancel each other out, as the peak tries to make the water go up while the trough tries to drag it down. If the height of one wave's peak equals the depth of the other's trough, there will be perfect cancellation when they cross, so the water at that location will not move at all. 
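This peak-meets-peak and peak-meets-trough arithmetic is literally just pointwise addition, as a tiny sketch shows:

```python
import math

# Two equal waves: their superposition is just the pointwise sum.
def wave(x, phase=0.0):
    return math.sin(x + phase)

x = math.pi / 2  # a peak of the first wave
in_phase = wave(x) + wave(x)               # peak meets peak -> doubled
out_of_phase = wave(x) + wave(x, math.pi)  # peak meets trough -> cancels
print(f"In phase:     {in_phase:.2f}")     # 2.00
print(f"Out of phase: {out_of_phase:.2f}") # 0.00
```

Those two cases - reinforcement and perfect cancellation - are all the ingredients needed for the interference fringes that show up next.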



Let's now go into the history.  The year is 1803 and our hero is called Thomas Young.  This is the year he performed the famous two-slit experiment (also referred to as Young's experiment).   In the basic version of the experiment, a coherent light source such as a laser beam illuminates a thin plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen - a result that would not be expected if light consisted strictly of particles. However, at the screen, the light is always found to be absorbed as though it were composed of discrete particles or photons. This establishes the principle known as wave–particle duality.

So far so good.  We learned in school about the dual nature of light being both waves and particles, so there is nothing new here.  Now we jump to the 20th century.  In 1927, Clinton Davisson and Lester Germer fired a beam of electrons (no apparent connection to waves) at a piece of nickel crystal. While the details are less important, what matters is that this experiment is equivalent to firing a beam of electrons at a barrier with two slits. When the experimenters allowed the electrons that passed through the slits to travel onward to a phosphor screen, where their impact location was recorded by a tiny flash (the same kind of flash responsible for the picture on an old television screen), the results were anything but expected! (At this point you see where this is leading us, don't you?)   Let's take this in slow steps.



A gun (obeying classical physics) sprays bullets towards a target. Before they reach the target, they must pass through a screen with two slits. If bullets go through the slits they will most likely land directly behind the slit, but if they come in at a slight angle, they will land slightly to the sides. The resulting pattern is a map of the likelihood of a bullet landing at each point.


The picture on the left shows that the two-slit pattern happens to be simply the sum of the patterns for each slit considered separately: if half the bullets were fired with only the left slit open and then half were fired with just the right slit open, the result would be the same.


Thinking of the electrons as bullets, you'd naturally expect their impact positions to line up with the two slits.  This is sane and logical thinking.  If you imagine a house with two windows and you are shooting paintballs from outside, the picture on the left is the pattern you would get on the wall facing you inside the house.  This is what we expect particles to do.


With waves, however, the result is very different, because of interference. If the slits were opened one at a time, the pattern would resemble that for bullets: two distinct peaks. But when both slits are open, the waves pass through both slits at once and interfere with each other: where they are in phase they reinforce each other; where they are out of phase they cancel each other out.
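A toy numerical model makes the difference concrete: treat each open slit as a source of a complex amplitude e^(ikr), where r is the distance to a point on the screen, and square the summed amplitude to get intensity.  All the numbers here are arbitrary illustration values:

```python
import cmath, math

# Toy two-slit model: each open slit contributes a complex amplitude
# e^{i k r}; intensity is the squared magnitude of the summed amplitude.
WAVELENGTH = 1.0
K = 2 * math.pi / WAVELENGTH
SLIT_SEP = 5.0       # distance between the slits
SCREEN_DIST = 100.0  # slits-to-screen distance

def intensity(y, slits_open):
    amp = 0j
    for slit_y in slits_open:
        r = math.hypot(SCREEN_DIST, y - slit_y)
        amp += cmath.exp(1j * K * r)
    return abs(amp) ** 2

slits = (-SLIT_SEP / 2, SLIT_SEP / 2)
# Center of the screen: equal path lengths, amplitudes add -> bright fringe
print(f"Both slits open, center: {intensity(0.0, slits):.2f}")  # ~4.0
# One slit at a time: intensities (not amplitudes) add - no fringes possible
one_by_one = intensity(0.0, slits[:1]) + intensity(0.0, slits[1:])
print(f"One at a time, center:   {one_by_one:.2f}")             # 2.0
```

With both slits open the amplitudes add before squaring (giving 4 at the central bright fringe, and zero where they cancel), while the slits-opened-one-at-a-time case adds intensities and can never produce fringes - precisely the wave-versus-bullets distinction.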


This sort of pattern is expected for photons - particles of light - as shown by Young.  Nevertheless, in 1924 Louis de Broglie questioned this and introduced something called the matter wave.  In 1926, Erwin Schrodinger published an equation describing how this matter wave should evolve - the matter wave equivalent of Maxwell's equations - and used it to derive the energy spectrum of hydrogen. That same year Max Born published his now-standard interpretation that the square of the amplitude of the matter wave gives the probability of finding the particle at a given place. This interpretation was in contrast to de Broglie's own interpretation, in which the wave corresponds to the physical motion of a localized particle.


In 1927, Davisson and Germer decided to test the whole thing.


Now the quantum paradox: Electrons, like bullets, strike the target one at a time. Yet, like waves, they create an interference pattern.  If each electron passes individually through one slit, with what does it interfere? Although each electron arrives at the target at a single place and time, it seems that each has passed through - or somehow felt the presence of both slits at once. Thus, the electron is understood in terms of a wave-particle duality.


Once again, we see this even if we fire electrons one by one - sequentially!  We see that even individual, particulate electrons, moving to the screen independently, separately, one by one, build up the interference pattern characteristic of waves.  If an individual electron is also a wave, what is it that is waving?  As noted above, in 1926 it was Schrodinger who made a first guess, but it was a year later that Born nailed it.  The wave, Born proposed, is a probability wave.


The wave-particle duality is the central mystery of quantum mechanics - the one to which all others can ultimately be reduced.



To understand what a probability wave means, picture a snapshot of a water wave that shows regions of high intensity (near the peaks and troughs) and regions of low intensity (near the flatter transition regions between peaks and troughs). The higher the intensity, the greater the potential the water wave has for exerting force on nearby ships or on coastline structures. The probability waves envisioned by Born also have regions of high and low intensity, but the meaning he ascribed to these wave shapes was rather unexpected: the size of the wave at any given point in space is proportional to the probability that the electron is located at that point in space. Places where the probability wave is large are locations where the electron is most likely to be found. Places where the probability wave is small are locations where the electron is unlikely to be found. And places where the probability wave is zero are locations where the electron will not be found.
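Born's rule in miniature: probabilities come from the square of the wave's amplitude, normalized so they sum to 1.  The amplitudes below are made-up numbers, purely for illustration:

```python
# Born's rule in miniature: probabilities are proportional to the *square*
# of the wave's amplitude at each location (toy numbers, not a real
# wavefunction).
amplitudes = {"here": 0.8, "there": 0.5, "way over there": 0.1}

norm = sum(a ** 2 for a in amplitudes.values())
probabilities = {loc: a ** 2 / norm for loc, a in amplitudes.items()}

for loc, p in probabilities.items():
    print(f"P({loc}) = {p:.3f}")
print("Total:", round(sum(probabilities.values()), 10))  # 1.0
```

Squaring before normalizing is the key step: it is the squared size of the wave, not the size itself, that matches the measured frequencies in repeated experiments.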


We have reached a conclusion which is far from what our common sense tells us.  But it doesn't stop there.  This is just the tip of the iceberg.  Things get even stranger as we dive deeper into quantum weirdness.

OK, so there is this wave which describes the electron's position, and we call it a probability wave.   Just as matter can't be everywhere at the same time, neither can the electrons making up that matter.  The probability wave says there is the highest probability for the electron to be where we finally observe it.  At first glance this doesn't sound like science, though it does explain what was observed above.


No one has ever directly seen a probability wave, and conventional quantum mechanical reasoning says that no one ever will. Instead, scientists use mathematical equations (developed by Schrodinger, Niels Bohr, Werner Heisenberg, Paul Dirac, and others) to figure out what the probability wave should look like in a given situation. They then test such theoretical calculations by comparing them with experimental results in the following way. After calculating the purported probability wave for the electron in a given experimental setup, they carry out identical versions of the experiment over and over again from scratch, each time recording the measured position of the electron. Sometimes we find the electron here, sometimes there, and every so often we find it way over there. If quantum mechanics is right, the number of times we find the electron at a given point should be proportional to the size (actually, the square of the size) of the probability wave that we calculated at that point. Nine decades of experiments have shown that the predictions of quantum mechanics are confirmed to spectacular precision.


Every probability wave extends throughout all of space, throughout the entire Universe. In many circumstances, a particle's probability wave quickly drops very close to zero outside some small region, indicating the overwhelming likelihood that the particle is in that region. In such cases, in the areas where the particle is unlikely to be, the probability wave is quite flat and near the value zero.  Nevertheless, so long as the probability wave somewhere in some distant galaxy has a nonzero value, no matter how small, there is a tiny but genuine - nonzero - chance that the electron could be found there.  Looking at the picture below and thinking of yourself: the electrons making you up are inside you - no discussion there.  But they all have probability waves, and the place where each one sits inside you corresponds to the most probable position on its wave, while the chance of one of your electrons being on Mars or in the Andromeda galaxy is very small, falling in the part of the wave where the probability is very low.


Regardless of improvements in data collection or in computer power, the best we can ever do, according to quantum mechanics, is predict the probability of this or that outcome. The best we can ever do is predict the probability that an electron, or a proton, or a neutron, or any other of nature's constituents, will be found here or there.


OK, if this is how it works, then the following logical question arises: what makes a position on the wave the most probable one?  We saw that an electron passing through the double slit is sort of interfering with itself (it is the probability wave which travels through the slits, which can be seen as the same electron being at two different places at the same time).  What makes the electron materialize where it really is at the end - when we measure it (or in other words, when we observe it)?  What happens if we measure at both slits - can we see the same electron passing through both slits at the same time?  How do all these potential electrons on the same probability wave know when the one at the most probable position has materialized, so they can vanish (do they vanish at all)?  These are interesting questions, and quite sound ones too - and of course scientists have made tests leading to even bigger surprises.


We do not directly encounter the probabilistic aspects of quantum mechanics in day-to-day life.  To get a sense of it, think of an electron you just exhaled in the room where you are reading this.  What are the chances of that electron appearing on Mars the next moment?  They are not zero, but very, very small.  This is because, on the scale set by atoms, Mars is so far away that the probability is already tiny.  Next, there are a lot of electrons, as well as protons and neutrons, making up the air in your room. The chance that all of these particles will do what is extremely unlikely even for one is just too small.  Thus the probability of the discussed outcome is low - near zero (but never zero).  Einstein didn't find this whole story amusing and simply didn't agree that reality might have such bizarre elements.  Einstein argued: what could be more natural than to expect a particle to be located at, or at the very least near, where it's found a moment later?

The Copenhagen interpretation was the first general attempt to understand the world of atoms as it is represented by quantum mechanics. The founding father was mainly the Danish physicist Niels Bohr, but Werner Heisenberg, Max Born and other physicists also made important contributions to the overall understanding of the atomic world that is associated with the name of the capital of Denmark (in fact Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics).  It was this interpretation which Einstein opposed.  According to Bohr and the Copenhagen interpretation of quantum mechanics, before one measures the electron's position there is no sense in even asking where it is. It does not have a definite position. The probability wave encodes the likelihood that the electron, when examined suitably, will be found here or there, and that truly is all that can be said about its position. Period. The electron has a definite position in the usual intuitive sense only at the moment we "look" at it - at the moment when we measure its position - identifying its location with certainty. But before (and after) we do that, all it has are potential positions described by a probability wave that, like any wave, is subject to interference effects. It's not that the electron has a position and that we don't know the position before we do our measurement.



Rather, contrary to what you'd expect, the electron simply does not have a definite position before the measurement is taken.  This is a radically strange reality. In this view, when we measure the electron's position we are not measuring an objective, preexisting feature of reality. Rather, the act of measurement is deeply enmeshed in creating the very reality it is measuring.


This raises the question of the observer's role.  The quantum world cannot be perceived directly, but only through the use of instruments. And so there is a problem with the fact that the act of measuring disturbs the energy and position of subatomic particles. This is called the measurement problem.


The best known is the "paradox" of Schrödinger's cat: a cat apparently evolves into a linear superposition of states that can be characterized as an "alive cat" and states that can be described as a "dead cat". Each of these possibilities is associated with a specific nonzero probability amplitude; the cat seems to be in a "mixed" state. However, a single, particular observation of the cat does not measure the probabilities: it always finds either a living cat or a dead cat. After the measurement the cat is definitively alive or dead. The question is: how are the probabilities converted into an actual, sharply well-defined outcome?



Einstein didn't really buy this.  He believed in a universe that exists completely independently of human observation. "Do you really believe that the moon is not there unless we are looking at it?" he asked.  Bohr and company answered that if no one is looking at the moon - if no one is "measuring its location by seeing it" - then there is no way for us to know whether it's there, so there is no point in asking the question.  Einstein was still fuming at this bizarre concept.  His biggest attack on quantum weirdness was aimed at something called the uncertainty principle (a direct consequence of quantum mechanics), introduced by Werner Heisenberg in 1927.


It says, roughly speaking, that the physical features of the microscopic realm (particle positions, velocities, energies, angular momenta, etc.) can be divided into two groups. And as Heisenberg discovered, knowledge of the first feature from the first group fundamentally compromises your ability to have knowledge about the first feature from the second group; knowledge of the second feature from the first group fundamentally compromises your ability to have knowledge of the second feature from the second group, and so on. As an example, the more precisely you know where a particle is, the less precisely you can possibly know its speed. Similarly, the more precisely you know how fast a particle is moving, the less you can possibly know about where it is. You can determine with precision certain physical features of the microscopic realm, but in so doing you eliminate the possibility of precisely determining certain other, complementary features.
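For the position/momentum pair, this trade-off has a compact standard textbook form (not specific to this post):

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

The smaller the position uncertainty Δx, the larger the momentum uncertainty Δp must be, and vice versa; ħ is the reduced Planck constant.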



Einstein liked simple things.  Together with two colleagues at Princeton, Nathan Rosen and Boris Podolsky, he found what appeared to be a serious inconsistency in one of the cornerstones of quantum theory - the uncertainty principle.  Remember, the very act of observing a particle also disturbs it, Heisenberg argued. If a physicist measures a particle's position, for example, he will also lose information about its velocity in the process, and vice versa. Einstein, Podolsky, and Rosen disagreed, and they suggested a simple thought experiment to explain why: imagine that a particle decays into two smaller particles of equal mass and that these two daughter particles fly apart in opposite directions. To conserve momentum, both particles must have identical speeds. If you measure the velocity or position of one particle, you will know the velocity or position of the other - and you will know it without disturbing the second particle in any way. The second particle, in other words, can be precisely measured at all times.  Bohr argued that Einstein's thought experiment was meaningless: if the second particle was never directly measured, it was pointless to talk about its properties before or after the first particle was measured. It wasn't until 1982, when the French physicist Alain Aspect constructed a working experiment based on these ideas, that Bohr's side was vindicated (in fact, it was John Bell who first turned the debate into a testable prediction in 1964, but the technology available then didn't allow it to be tested). In 1935 Einstein was convinced that he had refuted quantum mechanics. He was wrong.  Why?  You could have chosen to measure the right-moving particle's velocity.  Had you done so, you would have disturbed its position; on the other hand, had you chosen to measure its position, you would have disturbed its velocity. If you don't have both of these attributes of the right-moving particle in hand, you don't have them for the left-moving particle either.
Thus, there is no conflict with the uncertainty principle at all.


What all these tests throughout the years have shown is one really strange feature.  Even though quantum mechanics shows that particles randomly acquire this or that property when measured, we learn that the randomness can be linked across space.  Entangled particles are like a pair of magical dice, one thrown in Atlantic City and the other in Las Vegas, each of which randomly comes up one number or another, yet the two of which somehow manage always to agree - except that entangled particles require no magic. Entangled particles, even though spatially separate, do not operate autonomously.  How they stay entangled remains a mystery.  Today, we have successfully tested and observed this phenomenon with electrons, photons, even molecules as large as "buckyballs".


Let's give it a try with photons now.  Particles have a property called spin.  It is rotational motion akin to a soccer ball's spinning around as it heads toward the goal.  Electrons and photons can spin only clockwise or counterclockwise at one never-changing rate about any particular axis; a particle's spin axis can change directions, but its rate of spin cannot slow down or speed up. Quantum uncertainty applied to spin shows that just as you can't simultaneously determine the position and the velocity of a particle, so also you can't simultaneously determine the spin of a particle about more than one axis.  The experiments show that, from the viewpoint of an experimenter in the laboratory, at the precise moment one photon's spin is measured, the other photon immediately takes on the same spin property. If something were traveling from one photon to the other, alerting it that the first photon's spin had been determined through a measurement, it would have to travel between the photons instantaneously, conflicting with the speed limit set by special relativity.  The two photons, even though spatially separate, were (and still are) part of one physical system.  And so it's really not that a measurement on one photon forces another distant photon to take on identical properties. Rather, the two photons are so intimately bound up that it is justified to consider them - even though they are spatially separate - as parts of one physical entity. Then we can say that one measurement on this single entity affects both photons at once.  When special relativity says that nothing can travel faster than the speed of light, the "nothing" refers to familiar matter or energy.  But here it doesn't appear that any matter or energy is traveling between the two photons, and so there isn't anything whose speed we are led to measure.
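A toy sketch of that agreement (my illustration, not a faithful model of quantum mechanics: a shared predetermined outcome is exactly the kind of local hidden-variable picture that Bell's theorem rules out once you measure along different axes - but for measurements along the same axis it reproduces the observed perfect correlation):

```python
import random

def entangled_pair():
    """Toy model: the pair shares one outcome (hidden-variable caricature).

    Real entangled photons cannot be modeled this way for arbitrary
    measurement angles (Bell's theorem), but for measurements along
    the SAME axis this mimics the observed perfect correlation.
    """
    spin = random.choice(["clockwise", "counterclockwise"])
    return spin, spin  # both photons yield the same result

# Each pair's outcome is random...
results = [entangled_pair() for _ in range(1000)]
# ...yet the two distant measurements always agree.
assert all(a == b for a, b in results)
```

The per-pair randomness plus across-pair agreement is the strange combination the text describes; where the toy model and real quantum mechanics part ways is precisely what Bell's 1964 analysis and Aspect's 1982 experiment probed.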


At the end of the day, two widely separated particles, each of which is governed by the randomness of quantum mechanics, somehow stay sufficiently "in touch" so that whatever one does, the other instantly does too. And that seems to suggest that some kind of faster-than-light something is operating between them.  According to standard quantum mechanics, when we perform a measurement and find a particle to be here, we cause its probability wave to change: the previous range of potential outcomes is reduced to the one actual result that our measurement finds.  Back to the probability wave - this means all potential positions of the electron, for example, materialize into the one we measured at that moment.


Physicists say the measurement causes the probability wave to collapse, and the larger the initial probability wave at some location, the larger the likelihood that the wave will collapse to that point - that is, the larger the likelihood that the particle will be found at that point. In the standard approach, the collapse happens instantaneously across the whole universe.  Remember, the probability wave extends across the whole universe.  This means that once you find the particle here, the probability of its being found anywhere else immediately drops to zero, and this is reflected in an immediate collapse of the probability wave. All the potential outcomes riding the same wave are connected: once the wave collapses, every other potential "same" particle on that wave is killed off.  At the same moment.  Across the whole universe.  When I first learned about this feature it reminded me of oldish computer games where you would have a big surface on which to play, but objects would materialize only as you moved the screen to different regions.  A more modern example would be Google Maps: depending on which region you choose, you see objects being created for the selected area while the rest disappears.  The mathematics of quantum mechanics makes this qualitative discussion precise, and real-life experiments confirm it precisely. Nevertheless, nearly a century later, no one understands how or even whether the collapse of a probability wave really happens.
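The collapse rule itself is easy to sketch on a discretized wave (a minimal sketch; the function name and the five-point discretization are my own, purely for illustration): the chance of collapsing to a point is proportional to the squared wave height there, and afterwards the wave is zero everywhere else.

```python
import random

def collapse(amplitudes):
    """Sketch of the Born rule on a discretized probability wave.

    The probability of finding the particle at position i is
    proportional to |amplitudes[i]|**2; after the measurement the
    wave is 1 at the found position and 0 everywhere else.
    """
    weights = [abs(a) ** 2 for a in amplitudes]
    i = random.choices(range(len(amplitudes)), weights=weights)[0]
    collapsed = [0.0] * len(amplitudes)
    collapsed[i] = 1.0
    return i, collapsed

# A wave sharply peaked at position 2: the particle is usually found there.
position, wave = collapse([0.1, 0.2, 0.9, 0.2, 0.1])
print(position, wave)
```

Running this repeatedly finds position 2 about 89% of the time (0.9² out of the total squared height), which is the "larger wave, larger likelihood" statement made quantitative.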


If quantum theory is right and the world unfolds probabilistically, why is Newton’s nonprobabilistic framework so good at predicting the motion of things from baseballs to planets to stars?  This is because while Newton's laws predict precisely the trajectory of a baseball, quantum theory offers only the most minimal refinement, saying there's a nearly 100 percent probability that the ball will land where Newton says it should, and a nearly 0 percent probability that it won't.  Also, probability wave for a macroscopic object is generally narrowly peaked. The probability wave for a microscopic object, say, a single particle, is typically widely spread.  According to quantum theory, the smaller an object, the more spread-out its probability wave typically is.  And that’s why it's the microrealm where the probabilistic nature of reality comes to the fore.


Can we see the probability waves on which quantum mechanics relies?  No. As seen before, standard approach to quantum mechanics, developed by Bohr and his group, and called the Copenhagen interpretation in their honor, envisions that whenever you try to see a probability wave, the very act of observation thwarts your attempt.  When you look at an electron’s probability wave, where "look" means "measure its position", the electron responds by snapping to attention and coalescing at one definite location. Correspondingly, the probability wave surges to 100 percent at that spot, while collapsing to 0 percent everywhere else.



Schrödinger's equation, the mathematical engine of quantum mechanics, dictates how the shape of a probability wave evolves in time. But the instantaneous collapse of a wave at all but one point does not emerge from Schrödinger's math. So was the Copenhagen approach right?  What exactly counts as a measurement was also a bit unclear.  Will a sidelong glance from a mouse suffice, as Einstein once asked? How about a computer's probe, or even a nudge from a bacterium or virus? Do these "measurements" cause probability waves to collapse?


We and computers and bacteria and viruses and everything else material are made of molecules and atoms, which are themselves composed of particles like electrons and quarks. Schrödinger's equation works for electrons and quarks, and all evidence points to its working for things made of these constituents (regardless of the number of particles involved).  This means that Schrödinger's equation should continue to apply during a measurement, which after all is just one collection of particles (the person, the equipment, the computer...) coming into contact with another (the particle or particles being measured). But Schrödinger's equation doesn't allow waves to collapse.  Here is why.

First row (a) represents the probability wave function for an electron at time t to be at some location.  Second row (b) shows this wave at time t+1.  You can decompose the wave form from the first column, for both (a) and (b), into two simpler pieces.  We can divide it into any number of pieces if we want.  What we get in return is a less complex picture where the sum of the parts gives us back the initial wave we started to observe.  In mathematics this feature is called linearity.  If we check the first row, for example, we see two spikes representing two positions where the electron is highly likely to be found.  What the Copenhagen approach dictates is that at the moment of measurement all but one of the spikes collapse.  On the other hand, nowhere does Schrödinger's equation show any cause of, or route to, such a collapse.  If this reasoning is right and probability waves do not collapse, how do we pass from the range of possible outcomes that exist before a measurement to the single outcome the measurement reveals? What happens to a probability wave during a measurement that allows a familiar, definite, unique reality to materialize?
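The linearity feature can be checked numerically. Below is a minimal sketch (the 2x2 matrix U is illustrative, not derived from any particular physical system): evolving the sum of two wave pieces gives exactly the same result as summing the separately evolved pieces.

```python
def evolve(wave):
    """Toy linear evolution: multiply the wave by a fixed matrix U.
    Schrödinger evolution is linear in exactly this sense."""
    U = [[0.6 + 0.8j, 0.0], [0.0, 0.8 - 0.6j]]  # illustrative unitary matrix
    return [sum(U[r][c] * wave[c] for c in range(2)) for r in range(2)]

spike_a = [1.0, 0.0]   # wave piece peaked at position A
spike_b = [0.0, 1.0]   # wave piece peaked at position B
whole   = [1.0, 1.0]   # the full wave = spike_a + spike_b

evolved_whole = evolve(whole)
evolved_parts = [x + y for x, y in zip(evolve(spike_a), evolve(spike_b))]

# Linearity: evolving the whole equals summing the evolved pieces.
assert all(abs(u - v) < 1e-12 for u, v in zip(evolved_whole, evolved_parts))
```

Nothing in a linear map like this can erase one spike while keeping the other; that is the mathematical heart of the "no collapse in Schrödinger's equation" argument.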



In 1954, Hugh Everett III came to realize something different.  He found that a proper understanding of the theory might require a vast network of parallel universes. Everett's approach is today called the Many Worlds interpretation. History says his insight was not accepted by his peers.  Then in 1967 Bryce DeWitt picked his idea up, and around 1970 placed it back in the spotlight.  In essence, Everett proposed that each spike yields a reality in which the electron materializes.  If you measured the position of an electron whose probability wave has any number of spikes - for example, five - the result would be five parallel realities differing only by the location of the electron.  And since you are measuring its position, that means there must also be five of you, each of whom experiences the electron materializing at a different location.  In this approach, everything that is possible is materialized in its own separate world.

One vital aspect of the Many Worlds theory is that when the universe splits, the person is unaware of himself in the other version of the universe. This means that the you who used a condom during a one-night stand and ended up living happily ever after is completely unaware of the version of yourself who didn't and now faces the Jerry Springer show, and vice versa.


The above is called the Many Worlds approach to quantum mechanics.  And since each "world" here is really a universe of its own, the Many Worlds approach gives us a Quantum Multiverse.  Surprising as it may sound, the Many Worlds approach is, in some ways, the most conservative framework for defining quantum physics.  As Brian Greene puts it, it all comes down to two stories: a mathematical one and a physical one.


Every mathematical symbol in Newton's equations has a direct and transparent physical-world translation.  For example, x is the ball's position, v is the ball's velocity. By the time we get to quantum mechanics, translation becomes far more subtle. The mathematics of Many Worlds, unlike that of Copenhagen, is pure, simple, and constant. Schrödinger's equation determines how probability waves evolve over time, and it is never set aside - it is always in effect (it guides the shape of probability waves, causing them to shift, morph and undulate over time). Schrödinger's equation takes the particles' initial probability wave shape as input and then provides the wave's shape at any future time as output. And that, according to this approach, is how the universe evolves. It is not the mathematical part which brings the multiverse to life - it is the physical story.  If you apply linearity as discussed above and evolve each wave peak, you end up with equally valid future states for the same observed system (e.g. a particle), which can only be true if each is realized in its own reality.  If the electron's original probability wave had four spikes, or five, or a hundred, or any number, the wave evolution would result in four, or five, or a hundred, or any number of universes.


When we consider a probability wave for a single electron that has two (or more) spikes, we usually don't speak of two (or more) worlds.  Usually we speak of one world (ours) with an electron whose position is ambiguous. In the Many Worlds view, when we measure or observe that electron, we speak in terms of multiple worlds. Isn't this confusing?  This is the point where we go back to our double-slit experiment.  We saw before that an electron's probability wave encounters the barrier, and two wave fragments make it through the slits and travel onward to the detector screen.  Are these two also two different realities?


Placing detectors at the slits to determine which one a particle is passing through destroys the interference pattern on the screen behind. This is a manifestation of Werner Heisenberg's uncertainty principle, which states that it is not possible to precisely measure both the position (which of the two slits has been traversed) and the momentum (represented by the interference pattern) of a particle.  Physicists say that the probability waves have decohered. Once decoherence sets in, the waves for each outcome evolve independently, and each can thus be called a world or a universe of its own. For the case at hand, in one such universe the electron goes through the left slit and the detector displays left; in another universe the electron goes through the right slit and the detector records right.  And once two or more waves can't affect one another, they become mutually invisible; each "thinks" the others have disappeared.
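The two cases can be sketched numerically in an idealized setup (equal-amplitude waves from each slit; every parameter below is made up for illustration): with coherent waves the amplitudes add before squaring and fringes appear; once the which-path information decoheres the waves, the probabilities simply add and the pattern is flat.

```python
import math

def intensities(num_points=200, wavelength=1.0, slit_sep=5.0, screen_dist=100.0):
    """Idealized two-slit screen pattern (toy small-angle model).

    coherent:   |psi1 + psi2|**2 -> fringes (interference)
    decohered:  |psi1|**2 + |psi2|**2 -> no fringes (which-path known)
    """
    k = 2 * math.pi / wavelength
    coherent, decohered = [], []
    for n in range(num_points):
        x = -20.0 + 40.0 * n / (num_points - 1)   # position on the screen
        phase = k * slit_sep * x / screen_dist    # path-length difference
        psi2 = complex(math.cos(phase), math.sin(phase))
        coherent.append(abs(1 + psi2) ** 2)       # swings between ~0 and ~4
        decohered.append(1.0 + 1.0)               # probabilities just add: flat 2
    return coherent, decohered

coherent, decohered = intensities()
print(max(coherent) - min(coherent))   # large swing: fringes
print(max(decohered) - min(decohered)) # zero swing: no fringes
```

The cross term that produces the fringes is exactly what decoherence destroys, which is why "measuring the slit" and "losing the pattern" are two sides of the same coin.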


This raises the following question: how can we speak of some outcomes being likely and others being unlikely if they all take place?  Consider a situation in which the probability wave heights are unequal. If the wave is a hundred times larger at X than at Y, then quantum mechanics predicts that you are a hundred times more likely to find the electron at X. But in the Many Worlds approach, your measurement still generates one you who sees X and another you who sees Y; the odds based on counting the number of yous are thus still 50:50 - the wrong result. The reason is that the number of yous who see one result or another is determined by the number of spikes in the probability wave, while the quantum mechanical probabilities are determined by something else - not by the number of spikes but by their relative heights. And it's these predictions, the quantum mechanical predictions, which have been convincingly confirmed by experiments.  Nevertheless, the question of whether the reality at X is somehow a hundred times more genuine than the one at Y, as in the example above, persists and remains unanswered even today. The lack of consensus on the crucial question of how to treat probability in the Many Worlds approach continues.  And this difficulty can easily be seen outside the quantum world: when you roll a die, we all agree that you have a 1 in 6 chance of getting a 3, and so we'd predict that over the course of 1200 rolls the number 3 will turn up about 200 times.  But since it's possible, in fact likely, that the number of 3s will deviate from 200, what does the prediction mean? We want to say that it's highly probable that 1/6 of the outcomes will be 3s, but if we do that, then we have defined the probability of getting a 3 by invoking the concept of probability. We have gone circular.
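The die example is easy to play with numerically; a quick sketch: repeating the 1200-roll experiment many times shows counts that cluster around 200 but rarely hit it exactly, which is precisely the gap between "about 200" and any single run.

```python
import random

random.seed(0)  # reproducible illustration

counts = []
for trial in range(100):            # repeat the 1200-roll experiment 100 times
    rolls = [random.randint(1, 6) for _ in range(1200)]
    counts.append(rolls.count(3))   # how many 3s this time?

# The counts cluster near 200 (the 1-in-6 prediction) but fluctuate:
average = sum(counts) / len(counts)
print(min(counts), max(counts), average)
```

Saying the counts "probably" land near 200 is the circularity the text points out: the only way to cash out the prediction is with another probability statement.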


For all these controversies, quantum mechanics itself remains as successful as any theory in the history of ideas.  The Many Worlds interpretation is a clean mathematical model emerging from quantum mechanics.  Still, it may be disturbing to some people, since it takes out of our hands any power over the quantum universe. Instead, we are merely passengers of the "splits" that take place with each possible outcome. In essence, under the Many Worlds theory, our idea of cause and effect goes out the window.  Many quantum features have already been demonstrated in reality (e.g. superposition, to name one), and since the math fits like a glove, it is hard not to wonder where this theory will stand some 100 to 200 years from now.


The ability to predict behavior is a big part of physics' power, but the heart of physics would be lost if it didn't give us a deep understanding of the hidden reality underlying what we observe. And should the Many Worlds approach be right, it might change our philosophy of life for good.  The Many Worlds interpretation of quantum mechanics can help you cope with your realities already now - check the following comic.


Credits: Brian Greene, James Schombert, Wikipedia, Nature, Forskning och Framsteg, Sunny Kalara


Related posts:

Deja vu Universe



Landscape Multiverse

Holographic Principle to Multiverse Reality

Simulation Argument

Hrvoje Crvelin

Landscape multiverse

Posted by Hrvoje Crvelin Oct 13, 2011

While reading the following article you may wonder whether this is science or philosophy (more likely it is philosophy of science).  You are not alone.  The "Landscape Multiverse" combines string theory and inflation to give us bubble universes in many dimensions. A number of physicists don't like the string landscape/multiverse idea. Leonard Susskind said in 2006:

Why is it that so many physicists find these ideas alarming? Well, they do threaten physicists' fondest hope, the hope that some extraordinarily beautiful mathematical principle will be discovered: a principle that would completely and uniquely explain every detail of the laws of particle physics (and therefore nuclear, atomic, and chemical physics). The enormous Landscape of Possibilities inherent in our best theory seems to dash that hope.


What further worries many physicists is that the Landscape may be so rich that almost anything can be found: any combination of physical constants, particle masses, etc. This, they fear, would eliminate the predictive power of physics. Environmental facts are nothing more than environmental facts. They worry that if everything is possible, there will be no way to falsify the theory - or, more to the point, no way to confirm it. Is the danger real? We shall see.

In our exploration of the multiverse idea, this is however an unavoidable stop (before we gaze into the wonderful world of quantum mechanics). Popular descriptions of the landscape seem to imply that the landscape exists because string theory gives different results depending on what geometry you choose for the "extra" dimensions, and that the landscape is basically supposed to be a collection of every conceivable way of folding up those extra dimensions.


Have you read the article about distance measuring?  We saw that Einstein introduced the cosmological constant to make space static.  He thought gravity would lead space to contract and wanted to add something that would serve as repulsive gravity to keep space static, just as -1 and +1 give you 0.  Later on, Hubble came along with redshift and Einstein abandoned the whole idea.  Nevertheless, by the end of the 20th century this constant was back in the headlines once we figured out that space is expanding at an accelerating pace.  Scientists were also able to figure out the numbers.  And since the constant is the same everywhere and applies the same push to every cubic centimeter of space, this leads to the following: as space expanded after the Big Bang, the distance between matter in the universe grew too.  Remember that matter provides gravity.  As space continues to expand, the gravity provided by matter dilutes, which in turn gives more effect to the repulsive gravity coming from the cosmological constant (or dark energy if you want).
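The dilution argument can be sketched numerically. The numbers below are illustrative placeholders, not real cosmological data (though the ratio loosely echoes the observed dark-energy/matter split): matter density falls as the cube of the scale factor, while the cosmological constant's density stays fixed, so the constant eventually dominates.

```python
def densities(scale_factor, matter_now=1.0, lambda_now=2.2):
    """Toy densities vs. cosmic scale factor a (a=1 today; units arbitrary).

    Matter dilutes as a**-3 (same matter, more volume); the cosmological
    constant contributes the same energy to every unit of space.
    """
    return matter_now / scale_factor ** 3, lambda_now

# Early on (small a) matter dominates; later the constant takes over.
for a in (0.25, 0.5, 1.0, 2.0):
    matter, lam = densities(a)
    print(a, matter, lam, "matter wins" if matter > lam else "lambda wins")
```

Halving the scale factor multiplies the matter density by eight while leaving the constant untouched, which is the whole mechanism behind dark energy's growing dominance.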


This is why dark energy continues to be the dominant content of our Universe. Scientists express the cosmological constant's value as a multiple of the so-called Planck mass (about 2.17651x10^-8 kg) per cubic Planck length (a cube that measures about 10^-33 cm on each side and so has a volume of about 10^-99 cubic cm). In these units, the cosmological constant's measured value is about 1.38x10^-123.
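As a back-of-the-envelope check (my arithmetic, not the author's; I take the standard Planck length of about 1.616x10^-33 cm), converting that Planck-unit value into everyday units lands at a few times 10^-27 kg per cubic meter, close to the measured dark-energy density:

```python
planck_mass = 2.17651e-8       # kg, as quoted above
planck_length = 1.616e-33      # cm (standard value; assumption, not from the post)
planck_volume_cm3 = planck_length ** 3   # ~4.2e-99 cubic cm

lambda_planck = 1.38e-123      # cosmological constant in Planck units, as quoted

density_kg_per_cm3 = lambda_planck * planck_mass / planck_volume_cm3
density_kg_per_m3 = density_kg_per_cm3 * 1e6   # 10^6 cubic cm per cubic meter

print(density_kg_per_m3)  # a few times 10^-27 kg per cubic meter
```

That such a tiny number in Planck units matches the observed acceleration is exactly the puzzle the following paragraphs take up.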


Because of quantum uncertainty (I will cover the Heisenberg uncertainty principle in the next blog) and the jitters experienced by all quantum fields (quantum jitters are the incessant self-creation and self-annihilation of subatomic particles in empty space - or what we used to think was empty space), even empty space is full of microscopic activity.  These quantum jitters harbor energy, and they are everywhere. Since the cosmological constant is nothing but energy that fills space, quantum field jitters provide a microscopic mechanism that generates a cosmological constant.  How much energy is contained in these quantum jitters? When theorists calculated the answer, they got a ridiculous result: an infinite amount of energy in every volume of space. This is related to the Planck size and what happens to jitters below this size; to describe them properly we would require a framework that joins quantum mechanics and general relativity (which shifts the discussion to string theory).  But scientists found a more pragmatic response: they simply disregarded the calculations for jitters on scales smaller than the Planck length. If you ignore jitters shorter than the Planck length, you're left with only a finite number, so the total energy they contribute to a region of empty space is also finite.  Even so, the energy level calculated was still far too high.  The puzzle continued.  Then, back in 1987, Steven Weinberg came up with a very small value for the cosmological constant; very small, but not zero.  It was the anthropic principle that got him to that conclusion.


The anthropic principle was proposed in Poland in 1973. It was proposed by Brandon Carter, who had the audacity to proclaim that humanity did indeed hold a special place in the Universe.  Carter was not, however, claiming that the Universe was our own personal playground, made specifically with humanity in mind. The version of the anthropic principle that he proposed, which is now referred to as the Weak Anthropic Principle (WAP), stated only that by our very existence as carbon-based intelligent creatures, we impose a sort of selection effect on the Universe. For example, in a Universe where just one of the fundamental constants that govern nature was changed - say, the strength of gravity - we wouldn't be here to wonder why gravity is the strength it is.  There is one arena in which we do play an absolutely indispensable role: our own observations.  Because of this position, we must take into account what statisticians call selection bias.  For example, if you are interviewing a group of refugees who have endured astoundingly harsh conditions during their trek to safety, you might conclude that they are among the hardiest ethnicities on the planet. Yet, when you learn the devastating fact that you are speaking with less than 1 percent of those who started out, you realize that such a deduction is biased because only the phenomenally strong survived the journey.  So, selection bias occurs when the individuals or groups being compared are different. Two main factors that can contribute to selection bias are self-selection, when the sample selects itself, and convenience sampling, when individuals are selected because they are easy to obtain. To help ensure external validity, subjects in a study should be very similar to the population to which the study results will be applied.  Biased observations can launch you on meaningless quests to explain things that a broader, more representative view renders moot.


If we fail to take proper account of the impact such intrinsic limitations have on our observations, then, as in the examples above, we can draw wildly erroneous conclusions, including some that may impel us on fruitless journeys. 

Imagine you wish to know why Earth orbits the sun at the distance it does. Well, we have the laws of gravity, which explain orbits in general; the only specific thing left to explain is the specific distance.  At a very different distance, things would be very different here on Earth.  This reveals the inbuilt bias: the very fact that we measure the distance from our planet to the sun mandates that the result we find must be within the limited range compatible with our own existence.  If Earth were the only planet in the solar system, or the only planet in the universe, you still might feel compelled to wonder why.  But the Earth is not the only planet in the universe, let alone in the solar system. There are many others. And this fact casts such questions in a very different light.

Perhaps a more down-to-earth example: when you enter a shoe shop and ask for shoes in your size, you do not find it unusual that they have them.  You do not question whether there is a deeper meaning to the fact that they have exactly the shoes you want in the size you wear.  Once you learn they stock the whole range of sizes, all questions as to why your size was there disappear.  Just as it's no big surprise that among all the shoes in the shop there's at least one pair that fits you, so it's no big surprise that among all the planets in all the solar systems in all the galaxies there's at least one at the right distance from its host star to yield a climate conducive to our form of life. And it's on one of those planets, of course, that we live. We simply couldn't evolve or survive on the others.  Simple as that.  So, there is no fundamental reason why our planet is at its specific distance from the Sun.  We just happen to be on one of the planets at that distance where life could evolve, and there is no need to search for any deeper meaning behind it.


How does this translate to our universe?  We know there are a few specific and constant numbers in the universe (the mass of the electron, the strength of the electromagnetic force, the gravitational constant, the speed of light, etc.).  We do not know why they are the way they are, but the key question is: should we really care?  If we apply what we discussed before, to ask why the constants have their particular values is to ask the wrong kind of question. There is no law dictating their values; their values can and do vary across the multiverse. Our intrinsic selection bias ensures that we find ourselves in that part of the multiverse in which the constants have the values with which we're familiar, simply because we're unable to exist in the parts of the multiverse where the values are different.  Why do we mention the multiverse now?  Simple: as in the example of the shoe shop carrying the whole range of sizes, we require a whole range of different universes with different values for the constants.  This assumes:

  • our universe is part of a multiverse
  • from universe to universe in the multiverse, the constants take on a broad range of possible values
  • for most variations of the constants away from the values we measure, life as we know it would fail to take hold


For many of nature’s constants, even modest variations would render life as we know it impossible. Make the gravitational constant stronger, and stars burn up too quickly for life on nearby planets to evolve. Make it weaker and galaxies don’t hold together. Make the electromagnetic force stronger, and hydrogen atoms repel each other too strongly to fuse and supply power to stars. But what about the cosmological constant? Does life’s existence depend on its value? This is the issue Steven Weinberg decided to address in 1987.


Formation of life is a complex process about which our understanding is in its earliest stages. Weinberg recognized it was hopeless to determine how one or another value of the cosmological constant directly impacts the steps that breathe life into matter. Instead of giving up, Weinberg had a nice insight: he introduced a proxy for the formation of life - the formation of galaxies. Without galaxies, the formation of stars and planets would be compromised, with a devastating impact on the chance that life might emerge. This was a useful approach, as it shifted the focus to determining the impact that cosmological constants of various sizes would have on galaxy formation, and that was a problem Weinberg could solve.  While the precise details of galaxy formation are an active area of research, the basics are known. A clump of matter forms here or there, and by virtue of being more dense than its surroundings, it exerts a greater gravitational pull on nearby matter and thus grows larger still (a kind of snowball effect). The cycle continues feeding on itself to ultimately produce a swirling mass of gas and dust, from which stars and planets coalesce. Weinberg realized that a cosmological constant with a large value would disrupt the clumping process (repulsive gravity would thwart galactic formation). He worked out the idea mathematically and found that a cosmological constant any larger than a few hundred times the current cosmological density of matter (a few protons per cubic meter) would disrupt the formation of galaxies.  The math further shows that the only universes that could have galaxies, and hence the only universes we could potentially inhabit, are ones in which the cosmological constant is no larger than Weinberg's limit, which in Planck units is about 10^-121.  This was the first time someone had come up with a value for the cosmological constant that was neither infinite nor absurdly large.


This brings us to another interesting point.  If you imagine that the cosmological constant can take values between 0 and 1 in Planck units, in tiny increments, you soon end up with at least 10^124 possible universes.  This is a LARGE number.  To get an idea of how large, consider the following:

  • number of cells in your body - 10^13
  • number of seconds since the Big Bang - 10^18
  • number of photons in the observable universe - 10^88
  • number of different forms for the extra dimensions in string theory - 10^500

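To get an intuition for how thoroughly 10^124 dwarfs the other numbers on that list, a quick back-of-the-envelope comparison (the quantities are just the orders of magnitude quoted above):

```python
# Orders of magnitude from the list above (all approximate).
cells_in_body = 10**13
seconds_since_big_bang = 10**18
photons_in_observable_universe = 10**88
possible_universes = 10**124  # cosmological-constant values in Planck-unit increments

# Even the photon count is a vanishing fraction of the possible universes:
ratio = possible_universes // photons_in_observable_universe
print(ratio)  # 10**36 - a trillion-trillion-trillion-fold gap
```

In other words, for every photon in the observable universe there would be 10^36 distinct cosmological-constant values to account for.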

There are three possible alternatives from the anthropic principle:

  1. There exists one possible Universe "designed" with the goal of generating and sustaining "observers" (theological universe)
  2. Observers are necessary to bring the Universe into being (participatory universe)
  3. An ensemble of other different universes is necessary for the existence of our Universe (multiple universes)

The Inflationary Multiverse contains a vast, ever increasing number of bubble universes. The idea is that when inflationary cosmology and string theory are melded, the process of eternal inflation sprinkles string theory’s 10^500 possible forms for the extra dimensions across the bubbles - one form for the extra dimensions per bubble universe - providing a cosmological framework that realizes all possibilities. By this reasoning, we live in that bubble whose extra dimensions yield a universe, cosmological constant and all, that's hospitable to our form of life and whose properties agree with observations. 


In string theory, the range of possible universes is richer still. The shape of the extra dimensions determines the physical features within a given bubble universe, and so the possible "resting places" (various valleys) now represent the possible shapes the extra dimensions can take. To accommodate all possible forms for these dimensions, the mountain terrain therefore needs a lush assortment of valleys, ledges, and outcroppings. Such a landscape is called the string landscape.


The string landscape can be visualized schematically as a mountainous terrain in which different valleys represent different forms for the extra dimensions, and altitude represents the cosmological constant’s value.  The picture above is just a simplified view, but it suggests that universes with different forms for the extra dimensions are part of a connected terrain.  And this is where a process called quantum tunneling comes into play.  Quantum tunneling refers to the quantum mechanical phenomenon where a particle tunnels through a barrier that it classically could not surmount (a sort of passing through the wall).


Imagine an electron encountering a solid barrier, say a thick slab of steel, that classical physics predicts it can't penetrate. A hallmark of quantum mechanics is that the rigid classical notion of "can't penetrate" often translates into the softer quantum declaration of "has a small but nonzero probability of penetrating" (quantum mechanics makes everything probable, no matter how unlikely it is).  This has been observed and confirmed for electrons in 2007 by researchers at the Max Planck Institute. The reason is that the quantum jitters of a particle allow it, every so often, to suddenly materialize on the other side of an otherwise impervious barrier. The moment at which such quantum tunneling happens is random; the best we can do is predict the likelihood that it will take place. But the math says that if you wait long enough, penetration through just about any barrier will happen. And it does happen! If it didn't, the sun wouldn't shine: for hydrogen nuclei to get close enough to fuse, they must tunnel through the barrier created by the electromagnetic repulsion of their protons, for example.
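We can even put a rough number on "small but nonzero." A minimal sketch using the standard WKB estimate for a rectangular barrier, T ≈ exp(-2κL) with κ = √(2m(V−E))/ħ; the barrier height and width below are illustrative assumptions, not values from the text:

```python
import math

# WKB estimate of the probability that a particle tunnels through
# a rectangular barrier of height V, width L, given energy E:
#   T ≈ exp(-2 * kappa * L),  kappa = sqrt(2*m*(V - E)) / hbar
hbar = 1.055e-34   # reduced Planck constant, J*s
m_e = 9.11e-31     # electron mass, kg
eV = 1.602e-19     # one electron-volt in joules

def tunneling_probability(barrier_height_eV, energy_eV, width_m):
    kappa = math.sqrt(2 * m_e * (barrier_height_eV - energy_eV) * eV) / hbar
    return math.exp(-2 * kappa * width_m)

# An electron with no spare energy hitting a 1 eV barrier 1 nm thick
# (assumed numbers): classically forbidden, quantum-mechanically rare.
T = tunneling_probability(1.0, 0.0, 1e-9)
print(T)  # tiny but nonzero - "can't penetrate" becomes "rarely penetrates"
```

Thicker or taller barriers drive the probability down exponentially, which is why a macroscopic slab of steel is, for all practical purposes, impenetrable - yet never strictly so.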

The same principle can be applied to our bubble universe, but theory shows that we then get bubbles within bubbles. The result is a more intricate version of the Swiss cheese multiverse we found in our earlier encounter with eternal inflation. In that version, we had two types of regions: the "cheesy" ones that were undergoing inflationary expansion and the "holes" that weren't, which represented separate universes. That was a direct reflection of the simplified landscape with a single mountain whose base we assumed to be at sea level. The richer string theory landscape, as in the picture showing the string landscape, with its sundry peaks and valleys corresponding to different values of the cosmological constant, gives rise to many different regions.  This in turn yields bubbles inside of bubbles inside of bubbles. Ultimately, the relentless series of quantum tunnelings through the mountainous string landscape realizes every possible form for the extra dimensions in one or another bubble universe. This is called the Landscape Multiverse.  When the string landscape combines with eternal inflation, all possible forms for the extra dimensions, including those with such a small cosmological constant, are brought to life.  And according to this line of thought, it is in one of those bubbles that we live.


What about all other features?  The cosmological constant is just one.  What about the other constants of Nature?   Researchers surveying the string landscape have found that these numbers, just like the cosmological constant, also vary from place to place, and hence - at least in our current understanding of string theory - are not uniquely determined.



The generic notion of a multiverse seems to lie beyond testability. After all, we're considering universes other than our own, but since we have access only to this one, we might as well be talking about ghosts.  So, how do we test this theory?   Bubbles can collide.  Such an impact would send shock waves rippling through space, generating modifications to the pattern of hot and cold regions in the CMB, and that should be observable.  Scientists are now working out the detailed fingerprint such a disruption would leave, laying the groundwork for observations that could one day provide evidence that our universe has collided with others - evidence that other universes are out there.


Nevertheless, we have to bear in mind the fact that in the 20th century fundamental science came to rely increasingly on inaccessible features.  Think of the cosmic horizon, for example.

Objects that have always been beyond our cosmic horizon are objects that we have never observed and never will observe; conversely, they have never observed us, and never will. Objects that at some time in the past were within our cosmic horizon but have been dragged beyond it by spatial expansion are objects that we once could see but never will again. Yet we can agree that such objects are as real as anything tangible, and so are the realms they inhabit.


When quantum mechanics invokes probability waves, its impressive ability to describe things we can measure, such as the behavior of atoms and subatomic particles, compels us to embrace the ethereal reality it posits. When general relativity predicts the existence of places we can't observe, its phenomenal successes in describing those things we can observe, such as the motion of planets and the trajectory of light, compels us to take the predictions seriously.  So for confidence in a theory to grow we don't require that all of its features be verifiable; a robust and varied assortment of confirmed predictions is enough.  The Brane, Cyclic, and Landscape Multiverses are based on string theory, so they suffer multiple uncertainties. Remarkable as string theory may be, rich as its mathematical structure may have become, the dearth of testable predictions, and the concomitant absence of contact with observations or experiments, relegates it to the realm of scientific speculation.  

Currently, the best information about the primordial universe comes from the CMB. As stated before, a collision will produce inhomogeneities in the early stages of cosmology inside our bubble, which are then imprinted as temperature and polarization fluctuations of the CMB. One can look for these fingerprints of a bubble collision in data from the WMAP or Planck satellites.


Since a collision affects only a portion of our bubble interior, and because the colliding bubbles are nearly spherical, the signal is confined to a disc on the CMB sky.  Imagine now two merging soap bubbles; their intersection is a ring. The effect of the collision inside the disc is very broad because it has been stretched by inflation. In addition, there might be a jump in the temperature at the boundary of the disc.


In 2010, scientists ran models of such a collision and found a few interesting facts. The existence of a temperature discontinuity at the boundary of the disc greatly increases our ability to make a detection.  However, they did not find any circular temperature discontinuities in the WMAP data.  BUT!  They did find four features in the WMAP data that are better explained by the bubble collision hypothesis than by the standard hypothesis of fluctuations.


One of the features identified is the famous Cold Spot, which has been claimed as evidence for a number of theories including textures, voids, primordial inhomogeneities, and various other candidates. Note that some recent work, however, has called into question the statistical significance of this cold spot. 

While identifying the four features consistent with being bubble collisions was an exciting result, these features are on the edge of sensitivity thresholds, and so should be considered only as a hint that there might be bubble collisions to find in future data analysis.  One of many dilemmas facing physicists is that humans are very good at cherry-picking patterns in the data that may just be coincidence. However, the team's algorithm is much harder to fool, imposing very strict rules on whether the data fits a pattern or whether the pattern is down to chance.

The good news is that we can do much more with data from the Planck satellite, which has better resolution and lower noise than the WMAP experiment.  Launched in 2009, the Planck satellite is probing the entire sky at microwave wavelengths from 0.35 mm to one cm. By measuring the CMB at these wavelengths, Planck has already provided (and will continue to provide) an unprecedented view of the sky and new insights into existing theories.


Credits: Brian Greene, Leonard Susskind, Delia Schwartz-Perlov, Matt Johnson, ESA, Wikipedia, Encyclopedia Britannica


Related posts:

Deja vu Universe



Many worlds

Holographic Principle to Multiverse Reality

Simulation Argument

In 1998 the announcement was made that the expansion of our universe is accelerating. This achievement has now been officially honored, with the 2011 Nobel Physics Prize going to Saul Perlmutter, Adam Riess, and Brian Schmidt.


This brought to my attention that many people may lack details on how we measure distances in space (as such measurements led to this discovery in the first place).  It is an interesting process which involves our current knowledge of observed processes and events on Earth and within our Universe.  This article will shed some light on this process and should also be a good introduction to the next article, which will deal with the so-called Landscape Multiverse.  You may find this strange, but this discovery brought Albert Einstein back to the headlines too.  So I will start with the good old man and what he saw as the greatest blunder of his life.

Einstein remarked that the introduction of the cosmological term was the biggest blunder of his life.   This remark has become part of the folklore of physics, but was he right?  He had introduced the term into his general theory of relativity in 1917 to force the equations to yield a static universe - something he believed to be the case.  Einstein's original equations were trying to tell him otherwise, but his blindness lost him the chance to make one of the great predictions in physics. Then, 12 years later, Edwin Hubble discovered that the universe is not static - it is actually expanding.  So Einstein scrapped his idea of a cosmological constant and dismissed it as his biggest blunder.  In 1998, however, two teams of scientists (the 2011 Nobel prize winners) discovered that the universe is not only expanding, but its expansion is actually accelerating - going faster and faster.  So there had to be some other force that had overcome the force of gravity and is driving the universe into an exponential acceleration. This opposing force is what scientists call dark energy, and it is believed to constitute roughly 74 percent of the universe.

Scientists can then use the value of the acceleration to figure out the density of dark energy, which they then use to calculate what is called the w-parameter. For Einstein's cosmological constant to be correct, the w-parameter must equal -1, and so far, the results of the ESSENCE project seem to confirm that it is indeed very close to -1.  Today, the cosmological constant is seen as physically equivalent to vacuum energy (the underlying background energy that exists in space even when the space is devoid of matter).  So Einstein is back - big time.

Today we estimate the Universe to be 13.7 billion years old, plus or minus about 130 million years (based on WMAP observations).  This is perhaps a good moment to explain how we calculate these distances (it's a challenge - no doubt about that) - in a few moments the inclusion of a supernova in the picture above will become clearer.


One of the first techniques for doing so was parallax.  Focus on some object and close one eye.  Then, at the same time, open the closed eye and close the open one.  The object appears to jump from side to side. This "jump" happens because your left and right eyes, being spaced apart, have to point at different angles to focus on the same spot. For objects that are farther away, the jumping is less noticeable, because the difference in angle gets smaller. This simple observation can be made quantitative, providing a precise correlation between the difference in angle between the lines of sight of your two eyes - the parallax - and the distance of the object you're viewing.  As noted before, at long distances parallax is small - actually too small to be reliable.  But there’s a way around this: measure the position of a star on two occasions, some six months apart, thus using the two locations of the Earth in place of the two locations of your eyes. The larger separation of the observing locations increases the parallax; it’s still small, but in some cases big enough to be measured.  Today this technique has been refined and is now undertaken by satellites, allowing accurate distance measurements of stars up to a few thousand light-years away (beyond that, angular differences again become too small).
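The six-month-baseline geometry boils down to a one-line formula: a star's distance in parsecs is simply the reciprocal of its parallax angle in arcseconds. A minimal sketch (the Proxima Centauri parallax below is the standard published value, used here only as a sanity check):

```python
# Distance from stellar parallax, using the Earth-Sun baseline
# measured six months apart: d [parsecs] = 1 / p [arcseconds].
PC_TO_LY = 3.262  # light-years per parsec

def distance_ly(parallax_arcsec):
    """Distance in light-years from a parallax angle in arcseconds."""
    return (1.0 / parallax_arcsec) * PC_TO_LY

# Proxima Centauri shows a parallax of about 0.768 arcseconds:
print(distance_ly(0.768))  # ~4.25 light-years, the nearest star
```

The reciprocal relationship makes the limitation obvious: at a few thousand light-years the parallax shrinks below a milliarcsecond, and measurement noise swamps the signal.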

With the parallax limit reached, and space being far bigger, we have to use something else to measure distances beyond it.  Another approach, which does have the capacity to measure yet greater celestial distances, is based on an even simpler idea. The farther away you move a light-emitting object, be it a car's headlights or a blazing star, the more the emitted light will spread out during its journey toward you, and so the dimmer it will appear. By comparing an object’s apparent brightness (how bright it appears when observed from Earth) with its intrinsic brightness (how bright it would appear if observed from close by), you can thus work out its distance.  Obviously there is a catch here - how do you figure out the intrinsic brightness of astrophysical objects? Is a star dim because it’s especially distant or because it just doesn't give off much light? The answer lies in so-called standard candles.  A standard candle is a class of astrophysical objects, such as supernovae or variable stars, which have known luminosity due to some characteristic quality possessed by the entire class of objects.
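The "spreading out" is just the inverse-square law: the light dilutes over the surface of a sphere of radius d, so flux F = L / (4πd²), and inverting that gives the distance. A minimal sketch, sanity-checked against the Sun (its luminosity and the solar flux at Earth are standard textbook values):

```python
import math

# Standard-candle distance from the inverse-square law:
#   F = L / (4*pi*d^2)   =>   d = sqrt(L / (4*pi*F))
def luminosity_distance(intrinsic_luminosity_W, observed_flux_W_m2):
    """Distance in meters from known luminosity and measured flux."""
    return math.sqrt(intrinsic_luminosity_W / (4 * math.pi * observed_flux_W_m2))

# Sanity check with the Sun: L ≈ 3.828e26 W, flux at Earth ≈ 1361 W/m^2.
d = luminosity_distance(3.828e26, 1361.0)
print(d)  # ~1.5e11 m - one astronomical unit, as it should be
```

The same arithmetic works for a supernova billions of light-years away; the only hard part, as the text says, is knowing L in the first place - which is exactly what a standard candle supplies.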


I will focus on supernovae, but first a few words about stars. Certain stars that have used up their main supply of hydrogen fuel are unstable and pulsate.  Some types of pulsating variable stars, such as Cepheids, exhibit a definite relationship between their period and their intrinsic luminosity. Such period-luminosity relationships are invaluable to astronomers, as they are a vital method in calculating distances within and beyond our galaxy.  During the first decade of the 1900s Henrietta Leavitt, studying photographic plates of the Large and Small Magellanic Clouds, compiled a list of 1777 periodic variables. Eventually she classified 47 of these in the two clouds as Cepheid variables and noticed that those with longer periods were brighter than the shorter-period ones. She correctly inferred that, as the stars were in the same distant clouds, they were all at much the same relative distance from us. Any difference in apparent magnitude was therefore related to a difference in absolute magnitude. When she plotted her results for the two clouds she noted that they formed distinct relationships between brightness and period.  Her plot showed what is now known as the period-luminosity relationship: Cepheids with longer periods are intrinsically more luminous than those with shorter periods.  The Danish astronomer Ejnar Hertzsprung quickly realised the significance of this discovery. By measuring the period of a Cepheid from its light curve, the distance to that Cepheid could be determined.  Harlow Shapley, by using a larger number of Cepheids, was able to deduce the size of our galaxy.  In 1924 Edwin Hubble detected Cepheids in the Andromeda nebula M31 and the Triangulum nebula M33. Using these he determined that their distances were 900,000 and 850,000 light years respectively. He thus established conclusively that these "spiral nebulae" were in fact other galaxies and not part of our Milky Way. 
This was a momentous discovery and dramatically expanded the scale of the known Universe. Hubble later went on to observe the redshift of galaxies and propose that this was due to their recession velocity, with more distant galaxies moving away at a higher speed than nearby ones. This relationship is now called Hubble's Law and is interpreted to mean that the Universe is expanding.
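The period-to-distance chain can be sketched in a few lines: the period gives an absolute magnitude M via a period-luminosity relation, and comparing M with the apparent magnitude m gives the distance through the distance modulus. The coefficients below are one published calibration used purely for illustration - real calibrations vary, and the observed magnitudes here are invented:

```python
import math

# One commonly quoted Cepheid period-luminosity calibration (assumed here
# for illustration; different surveys fit slightly different coefficients):
#   M = -2.43 * (log10(P_days) - 1) - 4.05
def absolute_magnitude(period_days):
    return -2.43 * (math.log10(period_days) - 1) - 4.05

# Distance modulus: m - M = 5*log10(d_pc) - 5  =>  d_pc = 10**((m - M + 5)/5)
def distance_pc(apparent_mag, period_days):
    M = absolute_magnitude(period_days)
    return 10 ** ((apparent_mag - M + 5) / 5)

# A hypothetical 10-day Cepheid observed at apparent magnitude 10:
print(distance_pc(10.0, 10.0))  # M = -4.05, giving roughly 6.5 kpc
```

This is exactly Hertzsprung's insight in computational form: the light curve's period, which is easy to measure, stands in for the intrinsic brightness you cannot measure directly.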


In recent times, the most fruitful method has made use of a kind of stellar explosion called a Type Ia supernova. A Type Ia supernova occurs when a white dwarf star pulls material from the surface of a companion, typically a nearby red giant that it’s orbiting. Well-developed physics of stellar structure establishes that if the white dwarf pulls away enough material (so that its total mass increases to about 1.4 times that of the sun), it can no longer support its own weight. The bloated dwarf star collapses, setting off an explosion so violent that the light generated rivals the combined output of the other 100 billion or so stars residing in the galaxy it inhabits.  This makes these supernovae ideal standard candles.  Because the explosions are so powerful, we can see them out to very large distances. Crucially, because the explosions are all the result of the same physical process, they all share a very similar peak intrinsic brightness. Again, there is a challenge: in a typical galaxy they take place only once every few hundred years.  How do you catch them in the act? By using telescopes equipped with wide-field-of-view detectors capable of simultaneously examining thousands of galaxies, the researchers were able to locate dozens of Type Ia supernovae, which could then be closely observed with more conventional telescopes. On the basis of how bright each appeared, the teams were able to calculate the distance to dozens of galaxies situated billions of light-years away.

The picture above shows SN 1994D, a Type Ia supernova in the galaxy NGC 4526.  I mentioned above a "very similar peak of intrinsic brightness."  Indeed, there is certain variability.  For the most part, this variability would not produce systematic errors in measurement studies as long as researchers use large numbers of observations and apply the standard corrections. Some supernovae are intrinsically brighter than others but fade more slowly, and this correlation between brightness and the width of the light curve allows astronomers to apply a correction to standardize their observations. So astronomers can measure the light curve of a Type Ia supernova, calculate its intrinsic brightness, and then determine how far away it is, since apparent brightness diminishes with distance (just as a candle appears dimmer at a distance than it does up close).


When we're talking about distances on such fantastically large scales, and in the context of a universe that's continually expanding, the question inevitably arises of which distance the astronomers are actually measuring. Is it the distance between the locations we and a given galaxy each occupied eons ago, when the galaxy emitted the light we're just now seeing? Is it the distance between our current location and the location the galaxy occupied eons ago, when it emitted the light we're just now seeing? Or is it the distance between our current location and the galaxy's current location?  Brian Greene has given the best explanation of this subject I have had the chance to read, and I will share it here.


Imagine you want to know the distances, as the crow flies, among three cities - New York, Los Angeles, and Austin - so you measure their separation on a map of the US. You find that New York is 39 cm from Los Angeles; Los Angeles is 19 cm from Austin; and Austin is 24 cm from New York. You then convert these measurements into real-world distances by looking at the map's legend, which provides a conversion factor - 1 cm = 100 km - allowing you to conclude that the three cities are about 3,900 km, 1,900 km, and 2,400 km apart. Now imagine that the Earth’s surface doubled in size. Your map of the US would continue to be perfectly valid, as long as you made one important change: the conversion factor should now read 1 cm = 200 km.  Similar considerations apply to the expanding cosmos. Galaxies don't move under their own power.  Rather, like the cities in the example above, they race apart because the substrate in which they're embedded - space itself - is expanding.  This means that had some cosmic cartographer mapped galaxy locations billions of years ago, the map would be as valid today as it was then; we would just need to update the legend.  The cosmological conversion factor is called the universe’s scale factor.  In an expanding universe, the scale factor increases with time.

Whenever you think about the expanding universe, picture an unchanging cosmic map. Now, consider light from a supernova explosion in a distant galaxy. When we compare the supernova’s apparent brightness with its intrinsic brightness, we are measuring the dilution of the light's intensity between emission and reception, arising from its having spread out on a large sphere during the journey. By measuring the dilution, we determine the size of the sphere - its surface area - and from that the sphere’s radius. This radius traces the light's entire trajectory, and so its length equals the distance the light has traveled.  That's simple - high school geometry.  Now the challenge: during the light’s journey, space has continually expanded. The only change this requires to the static cosmic map is a regular updating of the scale factor recorded in the legend. And since we have just now received the supernova's light - since it has just now completed its journey - we must use the scale factor that's just now written in the map's legend to translate the separation on the map (the trajectory from the supernova to us) into the physical distance traveled.  The result is the distance now between us and the current location of the galaxy. When we compare the intrinsic brightness of a supernova with its apparent brightness, we are therefore determining the distance now between us and the galaxy. Those are the distances the two groups of astronomers measured back in 1998.
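The map-and-legend idea reduces to a single multiplication: physical distance = map (comoving) separation × scale factor, where only the scale factor changes with time. A minimal sketch with made-up illustrative numbers:

```python
# The cosmic-map analogy: map (comoving) separations never change;
# only the legend's conversion factor - the scale factor - does.
def physical_distance(map_separation, scale_factor):
    return map_separation * scale_factor

# Same map separation read off at emission time vs. today
# (illustrative numbers: the universe has doubled since emission).
map_sep = 100.0            # arbitrary comoving map units
a_then, a_now = 0.5, 1.0
print(physical_distance(map_sep, a_then))  # separation back at emission
print(physical_distance(map_sep, a_now))   # the "distance now" astronomers quote
```

The supernova measurement uses the legend "as printed today" (a_now), which is why it yields the distance to the galaxy's current location rather than to where it was when the light left.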


While the above sounds logical and healthy, one question remains to be addressed.  We see that the expansion is not uniform, or more precisely not constant; it used to be slower in the past and now it is accelerating.  How do we correlate this?  The color of photons - their wavelength - is determined by the energy they carry. Light, in general, is produced in a manner similar to chemiluminescent reactions: in quantized amounts by the displacement of electrons in individual atoms. Electrons circle the nucleus in fixed orbits; however, when atoms are energized, the electrons "jump" from one energy level to another before returning to their ground state.  A photon of light is produced whenever an electron in a higher-than-normal orbit falls back to its normal orbit. During the fall from high energy to normal energy, the electron emits a photon with very specific characteristics: a frequency, or color, that exactly matches the energy the electron loses in the fall.

Astronomers use telescopes to gather light from distant objects, and from the colors they find - the particular wavelengths of light they measure - they can identify the chemical composition of the sources. This is also a powerful tool in the discovery of new elements.  In 1868 Pierre Janssen and Joseph Norman Lockyer (independently) examined light from the outermost shell of the sun during a solar eclipse. Peeking just beyond the moon's rim, they found a mysterious bright emission with a wavelength that no one could reproduce in the laboratory using known substances. The unknown substance was helium, which thus claims the singular distinction of being the only element discovered in the Sun before it was found on Earth. Just as you can be uniquely identified by the pattern of lines making up your fingerprint, so an atomic species is uniquely identified by the pattern of wavelengths of the light it emits (and also absorbs).


Astronomers who examined the wavelengths of light gathered from more and more distant astrophysical sources became aware of a peculiar feature. Although the collection of wavelengths resembled those familiar from laboratory experiments with well-known atoms such as hydrogen and helium, they were all somewhat longer. From one distant source, the wavelengths might be 3% longer; from another source, 12% longer; from a third, 21% longer. Astronomers named this effect redshift, in recognition that ever longer wavelengths of light, at least in the visible part of the spectrum, become ever redder.  What causes the wavelengths to stretch?  The well-known answer is that the universe is expanding. Imagine a light wave undulating its way from a distant galaxy toward Earth. As we plot the light’s progress across our unchanging map, we see a uniform succession of wave crests, one following another, as the undisturbed wave train heads toward our telescope. The uniformity of the waves might lead you to think that the wavelength of the light when emitted (the distance between successive wave crests) will be the same as when it’s received. But the delightfully interesting part of the story comes into focus when we use the map’s legend to convert map distances into real distances.  Because the universe is expanding, the map's conversion factor is larger when the light concludes its journey than it was at inception. The implication is that although the light's wavelength as measured on the map is unchanging, when converted to real distances, the wavelength grows. When we finally receive the light, its wavelength is longer than when it was emitted. It’s as if light waves are threads stitched through a piece of spandex. Just as stretching the spandex stretches the stitching, so expanding the spatial fabric stretches the light waves. We can be quantitative. 
If the wavelength appears stretched by 3%, then the universe is 3% larger now than it was when the light was emitted; if the light appears stretched by 21%, then the universe has stretched 21% since the light began its journey. Redshift measurements thus tell us about the size of the universe when the light we are now examining was emitted, as compared with the size of the universe today.
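Being quantitative here means one ratio: the redshift z compares observed to emitted wavelength, and 1 + z equals the factor by which the universe has grown since emission. A minimal sketch with the 21% example from the text:

```python
# Redshift relates observed and emitted wavelengths, and through the
# spandex analogy, the universe's scale factors at reception and emission:
#   1 + z = lambda_observed / lambda_emitted = a_now / a_emitted
def redshift(lambda_observed, lambda_emitted):
    return lambda_observed / lambda_emitted - 1.0

# A spectral line emitted at wavelength 1.00 (arbitrary units) and
# observed at 1.21 - stretched by 21%:
z = redshift(1.21, 1.00)
print(z)      # 0.21
print(1 + z)  # the universe is 1.21 times larger now than at emission
```

This is why a catalog of supernova redshifts, paired with brightness-based distances, lets astronomers reconstruct how fast the universe was growing during different epochs.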

A series of redshift measurements of various Type Ia supernovae enables us to calculate how quickly the universe was growing over various intervals in the past. With those data, astronomers can determine the space expansion rate.  When light travels in an expanding universe, it covers a given distance partly because of its intrinsic speed through space, but partly also because of the stretching of space itself. You can compare this with what happens on an airport's moving walkway: without increasing your intrinsic speed, you travel farther than you otherwise would because the walkway augments your motion.


After checking, and rechecking, and checking again, our Nobel prize winners released their conclusions.  For the last 7 billion years, contrary to long-held expectations, the expansion of space has not been slowing down. It’s been speeding up.  What does that mean?


If you throw an apple into the air, you will see it go up and then start falling back to the ground.  This is due to gravity, and it gave Newton his insight centuries ago when he postulated the first laws of motion.  However, if the apple's speed increased after you threw it upward, you’d conclude that something was pushing it away from the Earth’s surface. The supernova researchers similarly concluded that the unexpected speeding up of the cosmic exodus required something to push outward, something to overwhelm the inward pull of attractive gravity.  This is the very job description that makes the cosmological constant, and the repulsive gravity to which it gives rise, the ideal candidate. The supernova observations thus brought the cosmological constant back into the limelight through the raw power of data.  With accelerated expansion, space will continue to spread indefinitely, dragging distant galaxies ever farther and ever faster away. A hundred billion years from now, any galaxies not now resident in our neighborhood (a gravitationally bound cluster of about a dozen galaxies called our "local group") will exit our cosmic horizon. We'll be floating in a static sea of darkness.

With the advance of technology and our understanding of processes, events and their connections, we continue our search for other standard candles and ways of measurement.  In the second half of 2011, Darach Watson at the Dark Cosmology Centre at the University of Copenhagen in Denmark, and a few colleagues, said they had come up with an entirely new kind of standard candle - one that measures the distance to active galactic nuclei, for example.


Active galactic nuclei are galaxies with a central supermassive black hole that emits intense radiation. When this radiation hits nearby gas clouds, it ionises them, causing them to emit a characteristic light of their own.  In recent years, astronomers have found that they can see both the emissions from the supermassive black hole and the emissions from the gas clouds. These are obviously related, but the time it takes for radiation to reach the clouds means that changes there lag those in the supermassive black hole.  This delay, which can be measured with a technique called reverberation mapping, is then a clear measure of the radius of the cloud.  Since the flux of the radiation from the black hole drops as an inverse square law, the brightness of these clouds also depends on their radius. So a good measure of their radius also gives an indication of their intrinsic brightness.
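The first step of the chain is plain light-travel-time arithmetic: the measured lag times the speed of light gives the cloud radius, and the inverse-square argument then ties that radius to the nucleus's intrinsic brightness. A minimal sketch with an illustrative, made-up lag:

```python
# Reverberation mapping, step one: the lag between variability in the
# black hole's emission and the gas clouds' response is a light-travel
# time, so the cloud radius is simply R = c * lag.
C = 2.998e8          # speed of light, m/s
LY = 9.461e15        # meters per light-year

def cloud_radius_m(lag_seconds):
    return C * lag_seconds

# An assumed 30-day lag between nucleus and cloud variability:
lag = 30 * 86400.0
R = cloud_radius_m(lag)
print(R / LY)  # the cloud sits ~30 light-days (~0.08 ly) from the nucleus

# Step two (schematic): since the ionizing flux reaching the cloud drops
# as 1/R^2, a bigger R implies a more luminous nucleus - luminosity
# scales roughly as R**2, which is what turns the lag into a standard candle.
```

With an intrinsic luminosity inferred this way, the distance follows from the same inverse-square comparison of intrinsic and apparent brightness used for supernovae.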

As described above, redshift (and blueshift) may be characterized by the relative difference between the observed and emitted wavelengths (or frequencies) of an object. In astronomy, it is customary to refer to this change using a dimensionless quantity called z.  Watson and his colleagues have used this technique to measure the distance to 38 active galactic nuclei at distances of up to z=4. That's significantly further than is possible with Type Ia supernovae, whose distances cannot be accurately measured beyond z=1.7.

To say that this is interesting is putting it mildly. When Cepheid variables were identified as standard candles in the early part of the 20th century, Edwin Hubble used them to show that the Universe was expanding. When type Ia supernovae were identified as standard candles in the early 1990s, astronomers used them to discover that the expansion of the Universe is accelerating.  So what of the prospects for this new method?  Active galactic nuclei are among the brightest objects in the universe. Astronomers can see them at distances of up to about z=7, which corresponds to just 750 million years after the Big Bang.  The more we can peek into our past, the more we can learn about the path which has taken us to our present.  Knowing this path may well provide invaluable data to predict what the future holds.

Credits: Brian Greene, arXiv, Wikipedia, NASA, NY Times, USGS

We are pretty much used to the word weather.  Back in my childhood days, weather determined what wardrobe I should prepare for morning school; was it going to be rainy or not, for example.  As you grow up you learn that weather (and the weather forecast) depends not only on what season it is, but rather on specific atmospheric conditions.  The atmosphere itself can be divided into several layers, and most of the weather phenomena affecting us happen in the troposphere.  Weather does occur in the stratosphere too and can affect weather lower down in the troposphere, but the exact mechanisms are poorly understood.  Common weather phenomena include wind, cloud, rain, snow, fog and dust storms. Less common events include natural disasters such as tornadoes, hurricanes, typhoons and ice storms. Weather occurs primarily due to density (temperature and moisture) differences between one place and another.  We do not possess efficient ways to control weather, as the factors influencing it go beyond our control (that's probably a good thing, given our current stage of conscience), but there has been limited small-scale success with certain modifications using cloud seeding.  There are also other forms of modification, consequences of humanity's evolutionary path and doings on Earth, affecting weather conditions overall (acid rain, anthropogenic pollutants and climate change).


Weather is not limited to planetary bodies. Like all stars, the Sun's corona is constantly being lost to space, creating what is essentially a very thin atmosphere throughout the Solar System. The movement of mass ejected from the Sun is known as the solar wind.  Inconsistencies in this wind and larger events on the surface of the star, such as coronal mass ejections (CMEs), form a system that has features analogous to conventional weather systems (such as pressure and wind) and is generally known as space weather. CMEs have been tracked as far out in the solar system as Saturn (and we are much closer).  The activity of this system can affect planetary atmospheres and occasionally surfaces. This will be exactly the focus of this blog entry, as we currently live in times where we might experience potential havoc from such activity, and the consequences would be far more devastating than ever before.



Every eleven years, our Sun goes through something called a solar cycle. The picture above shows a complete solar cycle imaged by the Sun-orbiting SOHO spacecraft, launched in 1995. A solar cycle is caused by the changing magnetic field of the Sun, and varies from solar maximum, when sunspot, CME, and flare phenomena are most frequent, to solar minimum, when such activity is relatively infrequent. Solar minimums occurred in 1996 and 2007, while the last solar maximum occurred in 2001; the next is expected to peak between 2012 and 2014.  In 2011 we started to see increased activity, as expected.  So, what are these sunspots, flares, CMEs and geomagnetic storms?  Why should we care about them, since they happen every 11 years and we've been around much longer?


Sunspots are temporary phenomena on the photosphere of the Sun that appear visibly as dark spots compared to surrounding regions. They are caused by intense magnetic activity. Although they are at temperatures of roughly 3000–4500 K (2727–4227 °C), the contrast with the surrounding material at about 5,780 K leaves them clearly visible as dark spots. Solar flares, in turn, are powerful storms on the Sun that occur when energy stored in twisted magnetic fields (usually above sunspots) is suddenly released; flares produce a burst of radiation from radio waves to X-rays and gamma rays.  Similar phenomena indirectly observed on stars are commonly called starspots, and both light and dark spots have been measured (so our Sun is not unique, of course). Sunspots expand and contract as they move across the surface of the Sun and can be as large as 80,000 km in diameter. That's pretty big!  They may also travel at relative speeds of a few hundred m/s when they first emerge onto the solar photosphere.  Sunspots tend to appear in pairs and are only temporary. Some of the smaller ones (a few thousand km wide) may last less than a day, while larger ones may last a week or two.


Like magnets, sunspots also have two poles, causing so-called coronal loops on the surface - see the illustration by Randy Russel on the left.  An active region on the Sun is an area with an especially strong magnetic field. Sunspots frequently form in active regions. Active regions appear bright in X-ray and ultraviolet images. Solar activity, in the form of solar flares and coronal mass ejections (CMEs), is often associated with active regions.  Solar cycles have been counted since 1755, with various impacts on the Earth's surface.  Since then, scientists have struggled to predict the size of future maxima, and failed. Solar maxima can be intense, as in 1958, or barely detectable, as in 1805.  In 2010 NASA launched the five-year SDO mission to observe these effects and their influence on Earth and near space.


Counting sunspots is not as straightforward as it sounds, either.  There are two official sunspot numbers in common use. The first, the daily "Boulder Sunspot Number", is computed by the NOAA Space Environment Center using a formula devised by Rudolf Wolf in 1848.  The Boulder number (reported daily by SpaceWeather) is usually about 25% higher than the second official index, the "International Sunspot Number", published daily by the Solar Influences Data Center in Belgium. Both are calculated from the same basic formula, but they incorporate data from different observatories.  Last week, the active region named AR1302 gave us some really huge sunspots.




Note: both pictures have the Earth superimposed for scale.

Swedish scientists studying 2010 sunspots found that plasma within the penumbra (the filamentary region of the sunspot surrounding the dark central umbra) is circulating vertically, rising or falling at various locations with velocities of roughly one kilometer per second, or more than 3,000 kilometers per hour. To use the human eye as an analogue, the umbra is the pupil and the penumbra is the iris. That rise and fall is evidence for convection in the penumbra, a phenomenon that had been predicted by computer simulations of sunspot dynamics but that the researchers say had not been observationally confirmed. The convection occurs as hot plasma rises from below, radiates away its heat, and sinks into the sun again as it cools (that's what classic convection really is; hot gas or liquid always rises, since it is less dense than cold gas or liquid, which always sinks).  If you wish to see how sunspots appear and travel across the surface, click here (note: it gets interesting after 18 seconds).


Is there a way to predict sunspots?  As of 2011, there is.  Well, there was always a way, but we figured it out in 2011.  At the beginning of the year, a scientist at NOAA's Space Weather Prediction Center and her colleagues found a technique for predicting solar flares two to three days in advance with unprecedented accuracy.  The long-sought clue to prediction lies in changes in twisting magnetic fields beneath the surface of the sun in the days leading up to a flare.  About half a year later, it was announced that scientists had used data tracking sound waves inside the Sun to see sunspots forming 60,000 kilometers deep in the Sun’s interior, fully two days before the spots erupt onto the surface.  Inside the Sun, hot plasma (gas stripped of one or more electrons) rises and cooler plasma sinks. As it moves, it generates turbulence. This in turn creates acoustic waves (sounds) that travel through the Sun. As these waves move through the solar interior, regions with different densities make them speed up or slow down. By mapping how long it takes a wave to move between two points, the density of the material between them can be measured.  On board SOHO and SDO are instruments that can measure the changes in the solar surface as these sound waves move around. By careful analysis that included eliminating a lot of noise sources, the scientists were able to watch the sound waves change speed as they passed through the volume of plasma below the Sun’s surface where sunspots were rising. They could detect this material 60,000 km deep.  These nascent sunspots move upward at some 1000 to 2000 km/h.  At that speed, it takes about two days to rise to the surface, and that’s what has been seen: the subsurface spot erupts through to the surface, where we can see it directly.  Given the potential (and negative) influence of solar activity, it makes a big difference if we can predict those two days ahead (more on that later).
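The quoted two-day warning follows directly from the depth and rise speed; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the two-day warning quoted above.
depth_km = 60_000                              # depth at which nascent spots are detected
speed_slow_kmh, speed_fast_kmh = 1_000, 2_000  # reported range of rise speeds

hours_max = depth_km / speed_slow_kmh  # slowest spots: 60 hours (2.5 days)
hours_min = depth_km / speed_fast_kmh  # fastest spots: 30 hours (1.25 days)
print(f"rise time: {hours_min:.0f} to {hours_max:.0f} hours")  # rise time: 30 to 60 hours
```

So "about two days" sits right in the middle of the 30-60 hour window.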


A solar flare occurs when magnetic energy that has built up in the solar atmosphere is suddenly released. How does that work?  Simple!  Remember those loops above, rising from sunspots?  The magnetic field stores energy which is contained within the loops. If the loops get tangled together, they can snap and release their energy in one sudden burst - that's your flare.  If you have a rotating sunspot, or even a cluster of those, you should expect a flare for sure.  Once a flare goes off, radiation is emitted across virtually the entire electromagnetic spectrum, from radio waves at the long-wavelength end, through optical emission, to X-rays and gamma rays at the short-wavelength end. The amount of energy released is the equivalent of millions of 100-megaton hydrogen bombs exploding at the same time. The first solar flare recorded in astronomical literature was on September 1, 1859.  Again as an analogue, you may imagine this as a volcano on the surface of the Sun, though the energy is ten million times greater than the energy released in a volcanic explosion (on the other hand, it is less than 1/10 of the total energy emitted by the Sun every second).
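The energy figures above are easy to sanity-check. Assuming a large flare releases on the order of 10^25 joules (an order-of-magnitude assumption on my part; the text does not give an absolute figure), we can compare it with a 100-megaton bomb and with the Sun's total per-second output:

```python
FLARE_ENERGY_J   = 1e25      # assumed order-of-magnitude energy of a large flare
MEGATON_TNT_J    = 4.184e15  # energy released by one megaton of TNT
SUN_LUMINOSITY_W = 3.846e26  # total solar output per second (solar luminosity)

# How many 100-megaton hydrogen bombs does the flare correspond to?
bombs = FLARE_ENERGY_J / (100 * MEGATON_TNT_J)
# What fraction of one second of total solar output is that?
fraction = FLARE_ENERGY_J / SUN_LUMINOSITY_W

print(f"{bombs:.1e} hundred-megaton bombs")             # tens of millions
print(f"{fraction:.3f} of one second of solar output")  # well under 1/10
```

Both claims check out: tens of millions of bombs, yet only a few percent of what the Sun radiates every second.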


Scientists classify solar flares according to their X-ray brightness in the wavelength range 1 to 8 Angstroms (an Angstrom is a unit of length equal to one ten-billionth of a meter). There are 3 main categories: X-class flares are big; they are major events that can trigger planet-wide radio blackouts and long-lasting radiation storms. M-class flares are medium-sized; they can cause brief radio blackouts that affect Earth's polar regions. Minor radiation storms sometimes follow an M-class flare. Compared to X- and M-class events, C-class flares are small, with few noticeable consequences here on Earth.  In 2011 we started to see an increased number of active regions (AR) and have already had a few X-class flares (the first X-class flare of solar cycle 24 happened on 15th March 2011).
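These letter classes correspond to peak X-ray flux thresholds in the standard GOES scheme, each class ten times stronger than the one below it, with a number giving the position within the class (an X2 is twice an X1). A small sketch; the function is mine, but the thresholds are the usual GOES ones:

```python
def goes_class(peak_flux):
    """Classify a flare from its peak 1-8 Angstrom X-ray flux (W/m^2).

    GOES scheme: A < 1e-7, B < 1e-6, C < 1e-5, M < 1e-4, X >= 1e-4,
    with a multiplier giving the position within the class.
    """
    for letter, threshold in (("X", 1e-4), ("M", 1e-5),
                              ("C", 1e-6), ("B", 1e-7)):
        if peak_flux >= threshold:
            return f"{letter}{peak_flux / threshold:.1f}"
    return f"A{peak_flux / 1e-8:.1f}"

print(goes_class(2e-4))    # X2.0 - major event, planet-wide blackout risk
print(goes_class(5.6e-6))  # C5.6 - small, few consequences on Earth
```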


Until recently, a solar flare was described as having 3 stages, but in 2011 this was extended to 4 based on observations by SDO (only the first three were previously known and well documented).  Let's take a look at those by reviewing the picture below.


First is the precursor stage, where the release of magnetic energy is triggered, usually above sunspots. Soft X-ray emission is detected in this stage. In the second, or impulsive, stage, protons and electrons are accelerated to energies exceeding 1 MeV.  These high-energy particles encounter denser atmospheric layers, heating the plasma - electrons, protons, and smaller amounts of heavier atomic nuclei - which is hurled up into the corona. There it forms large loops along the magnetic field lines (coronal loops). This phase can last minutes or even hours, and the amount of radiation released (radio waves, hard X-rays, and gamma rays) in this time is, according to NASA, comparable to millions of hydrogen bombs. The gradual build-up and decay of soft X-rays can be detected in the third, decay stage, at which point the Sun’s corona is losing brightness. A CME often occurs at this time (the Sun then flings gigantic amounts of hot plasma into space).  In the newly discovered fourth phase, seen in about 15% of eruptions, the eruption continues along the magnetic field lines. There is no longer an increase in X-rays, but the energy released is huge.  The intensity recorded in these late-phase flares is usually dimmer than the main X-ray peak, but the late phase goes on much longer, sometimes for multiple hours, so it puts out just as much total energy as the main flare, which typically lasts only a few minutes.



The duration of these stages can be as short as a few seconds or as long as an hour.  Solar flares extend out to the layer of the Sun called the corona. The corona is the outermost atmosphere of the Sun, consisting of highly rarefied gas. This gas normally has a temperature of a few million degrees K. Inside a flare, the temperature typically reaches 10 or 20 million degrees K, and can be as high as 100 million degrees K.  Solar flares occur only in active regions (AR).


Photo: An X-class flare began at 3:48 AM EDT on August 9, 2011 and peaked at 4:05 AM. The flare burst from sun spot region AR11263. The image here was captured by NASA's SDO in extreme ultraviolet light at 131 Angstroms.

A solar prominence is a towering arc of material lifted off the Sun’s surface by intense magnetic fields. A prominence forms over a timescale of about a day, and a stable prominence may persist in the corona for several months, looping hundreds of thousands of miles into space. Scientists are still researching how and why prominences are formed.  To give you an idea of how strong the magnetic forces are, a prominence can have a mass upwards of a hundred billion tons, and be lifted thousands of kilometers off the Sun’s surface despite a crushing gravity nearly 30 times that of Earth. When a prominence is viewed from a different perspective, so that it is seen against the Sun instead of against space, it appears darker than the surrounding background. This formation is instead called a solar filament.



The red-glowing looped material is plasma, a hot gas comprised of electrically charged hydrogen and helium. The prominence plasma flows along a tangled and twisted structure of magnetic fields generated by the sun’s internal dynamo. An erupting prominence occurs when such a structure becomes unstable and bursts outward, releasing the plasma.


A coronal mass ejection (CME) is a massive burst of solar wind, other light-isotope plasma, and magnetic fields rising above the solar corona or being released into space.  CMEs release huge quantities of matter and electromagnetic radiation into space above the Sun's surface, either near the corona (sometimes called a solar prominence) or farther into the planetary system and beyond (an interplanetary CME). The ejected material is a plasma consisting primarily of electrons and protons, but may contain small quantities of heavier elements such as helium, oxygen, and even iron. It is associated with enormous changes and disturbances in the coronal magnetic field.



You can imagine the outer solar atmosphere, the corona, as structured by strong magnetic fields. Where these fields are closed, often above sunspot groups, the confined solar atmosphere can suddenly and violently release bubbles of gas and magnetic fields - and that's your CME.  A large CME can contain a billion tons of matter that can be accelerated to several million miles per hour in a spectacular explosion. Solar material streams out through the interplanetary medium, impacting any planet or spacecraft in its path. CMEs are sometimes associated with flares but can occur independently.  Recent scientific research has shown that the phenomenon of magnetic reconnection is responsible for CMEs and solar flares. Magnetic reconnection is the name given to the rearrangement of magnetic field lines when two oppositely directed magnetic fields are brought together. This rearrangement is accompanied by a sudden release of energy stored in the original oppositely directed fields.  The picture shows a CME as viewed by the SDO on June 7, 2011.

All of the above travels through space and may reach our planet.  How does this affect us?  Solar flares impact Earth only when they occur on the side of the Sun facing Earth. Because flares are made of photons, they travel out directly from the flare site, so if we can see the flare, we can be impacted by it. CMEs are seen as large clouds of plasma and magnetic field that erupt from the Sun. These clouds can erupt in any direction, and then continue on in that direction, plowing right through the solar wind. Only when the cloud is aimed at Earth will the CME hit Earth and cause impacts.  High-speed solar wind streams come from areas on the Sun known as coronal holes. These holes can form anywhere on the Sun, and usually only when they are closer to the solar equator do the winds they produce impact Earth.  Solar energetic particles are high-energy charged particles, primarily thought to be released by shocks formed at the front of coronal mass ejections and solar flares. When a CME cloud plows through the solar wind, high-velocity solar energetic particles can be produced, and because they are charged, they must follow the magnetic field lines that pervade the space between the Sun and the Earth. Therefore, only the charged particles that follow magnetic field lines intersecting the Earth will result in impacts.


The Earth's magnetosphere is created by our magnetic field and protects us from most of the particles the sun emits. When a CME or high-speed stream arrives at Earth it buffets the magnetosphere. If the arriving solar magnetic field is directed southward it interacts strongly with the oppositely oriented magnetic field of the Earth. The Earth's magnetic field is then peeled open like an onion allowing energetic solar wind particles to stream down the field lines to hit the atmosphere over the poles. At the Earth's surface a magnetic storm is seen as a rapid drop in the Earth's magnetic field strength. This decrease lasts about 6 to 12 hours, after which the magnetic field gradually recovers over a period of several days.


Usually, we read there was some kind of explosion on the Sun and that an aurora will be visible in 2 to 3 days, but by now you probably understand there is more to it.  An aurora is a natural light display in the sky, particularly in the high-latitude (Arctic and Antarctic) regions, caused by the collision of energetic charged particles with atoms in the high-altitude atmosphere (thermosphere). The charged particles originate in the magnetosphere and solar wind and are directed by the Earth's magnetic field into the atmosphere.  They can be very nice indeed.  Usually you see them in photos taken from the ground; below is one taken from space, on board the ISS.  And if you are after video, I recently saw a cool timelapse made in Finland which shows auroras too - just click here.


Another group of people who like geomagnetic storms are geologists.  Earth's magnetic field is used by geologists to determine subterranean rock structures. For the most part, these geodetic surveyors are searching for oil, gas, or mineral deposits. They can accomplish this only when Earth's field is quiet, so that true magnetic signatures can be detected. Other geophysicists prefer to work during geomagnetic storms, when strong variations in the Earth's normal subsurface electric currents allow them to sense subsurface oil or mineral structures. This technique is called magnetotellurics. For these reasons, many surveyors use geomagnetic alerts and predictions to schedule their mapping activities.


But there is more to this than nice pictures and geology.  A big blast can shake the Earth’s magnetic field, inducing a current in the ground that can actually overload power lines. We can get blackouts from such things, and it has happened before. This is a real problem that can do millions or even billions of dollars of infrastructure damage (including money lost from the economy during the blackout).  Earth's magnetic field does shield us, as seen in the picture below, but not from everything, and not from higher energy levels headed directly towards us.



Some effects we would not like to experience:

  • Radiation hazard - intense solar flares release very-high-energy particles that can cause radiation poisoning to humans (and mammals in general) in the same way as low-energy radiation from nuclear blasts.  Earth's atmosphere and magnetosphere provide adequate protection at ground level, but astronauts in space are exposed to potentially lethal doses of radiation. Solar protons with energies greater than 30 MeV are particularly hazardous. In October 1989, the Sun produced enough energetic particles that an astronaut standing on the Moon at the time, wearing only a space suit and caught out in the brunt of the storm, would probably have died; the expected dose would be about 7000 rem (astronauts who had time to reach shelter beneath the lunar soil would have absorbed only slight amounts of radiation).  The cosmonauts on the Mir station were subjected to daily doses of about twice the yearly dose on the ground, and during the solar storm at the end of 1989 they absorbed their full-year radiation dose limit in just a few hours.  An increasing number of international business flights cross Earth's Arctic to save time, fuel and money. Solar proton events can also produce elevated radiation aboard aircraft flying at high altitudes. Although these risks are small, monitoring of solar proton events by satellite instrumentation allows the occasional exposure to be monitored and evaluated, and eventually the flight paths and altitudes adjusted to lower the absorbed dose of the flight crews.


  • Biology - there is a growing body of evidence that changes in the geomagnetic field affect biological systems. Studies indicate that physically stressed human biological systems may respond to fluctuations in the geomagnetic field. Interest and concern in this subject have led the International Union of Radio Science to create a new commission entitled Commission K - Electromagnetics in Biology and Medicine.  Possibly the most closely studied of the variable Sun's biological effects has been the degradation of homing pigeons' navigational abilities during geomagnetic storms. Pigeons and other migratory animals, such as dolphins and whales, have internal biological compasses composed of the mineral magnetite wrapped in bundles of nerve cells. This gives them the sense known as magnetoception. While probably not their primary method of navigation, there have been many pigeon race smashes, a term used when only a small percentage of birds return home from a release site.  Because these losses have occurred during geomagnetic storms, pigeon handlers have learned to ask for geomagnetic alerts and warnings as an aid to scheduling races.
  • Communication disruptions - many communication systems use the ionosphere to reflect radio signals over long distances. Ionospheric storms can affect radio communication at all latitudes. TV and commercial radio stations are little affected by solar activity, but ground-to-air, ship-to-shore, shortwave broadcast, and amateur radio (mostly the bands below 30 MHz) are frequently disrupted. Radio operators using HF bands rely upon solar and geomagnetic alerts to keep their communication circuits up and running.  Some military detection or early warning systems are also affected by solar activity. The over-the-horizon radar bounces signals off the ionosphere to monitor the launch of aircraft and missiles from long distances. During geomagnetic storms, this system can be severely hampered by radio clutter. Some submarine detection systems use the magnetic signatures of submarines as one input to their locating schemes. Geomagnetic storms can mask and distort these signals.  The telegraph lines in the past were affected by geomagnetic storms as well. Geomagnetic storms affect also long-haul telephone lines, including undersea cables unless they are fiber optic.  Damage to communications satellites can disrupt non-terrestrial telephone, television, radio and Internet links (oh no!).
  • Navigation system disruptions - systems such as GPS and LORAN are adversely affected when solar activity disrupts their signal propagation. Think about ships and airplanes and you may see where this leads (or not, if you use GPS during a geomagnetic storm).
  • Satellite hardware damage - geomagnetic storms and increased solar ultraviolet emission heat Earth's upper atmosphere, causing it to expand. The heated air rises, and the density at the orbit of satellites up to about 1,000 km increases significantly. This results in increased drag on satellites in space, causing them to slow and change orbit slightly. Unless LEO (low Earth orbit) satellites are routinely boosted to higher orbits, they slowly fall, and eventually burn up in Earth's atmosphere. Skylab is an example of a spacecraft reentering Earth's atmosphere prematurely in 1979 as a result of higher-than-expected solar activity. During the great geomagnetic storm of March 1989, four of the Navy's navigational satellites had to be taken out of service for up to a week, the U.S. Space Command had to post new orbital elements for over 1000 objects affected, and the Solar Maximum Mission satellite fell out of orbit in December the same year.  The vulnerability of the satellites depends on their position as well. As technology has allowed spacecraft components to become smaller, their miniaturized systems have become increasingly vulnerable to the more energetic solar particles.  Another problem for satellite operators is differential charging. During geomagnetic storms, the number and energy of electrons and ions increase. When a satellite travels through this energized environment, the charged particles striking the spacecraft cause different portions of the spacecraft to be differentially charged. Eventually, electrical discharges can arc across spacecraft components, harming and possibly disabling them. Bulk charging (also called deep charging) occurs when energetic particles, primarily electrons, penetrate the outer covering of a satellite and deposit their charge in its internal parts. If sufficient charge accumulates in any one component, it may attempt to neutralize by discharging to other components. This discharge is potentially hazardous to the satellite's electronic systems.
  • Electric grid - this is probably the one we should keep an eye on the most.  When magnetic fields move about in the vicinity of a conductor such as a wire, a geomagnetically induced current is produced in the conductor. This happens on a grand scale during geomagnetic storms (the same mechanism also influences telephone and telegraph lines) on all long transmission lines. Power companies which operate long transmission lines (many kilometers in length) are thus subject to damage by this effect. Notably, this chiefly includes operators in China, North America, and Australia; the European grid consists mainly of shorter transmission cables, which are less vulnerable to damage. The (nearly direct) currents induced in these lines by geomagnetic storms are harmful to electrical transmission equipment, especially generators and transformers - they induce core saturation, constraining performance (as well as tripping various safety devices), and cause coils and cores to heat up. This heat can disable or destroy them, even inducing a chain reaction that can overload and blow transformers throughout a system.
  • Pipelines - rapidly fluctuating geomagnetic fields can produce geomagnetically induced currents in pipelines. Flow meters in the pipeline can transmit erroneous flow information, and the corrosion rate of the pipeline is dramatically increased. If engineers incorrectly attempt to balance the current during a geomagnetic storm, corrosion rates may increase even more.


According to a 2008 study by the Metatech corporation, a storm with a strength comparable to that of 1921 would leave 130 million people without power and damage 350 transformers, at a cost totaling 2 trillion dollars. A massive solar flare could knock out electric power for months.


Some past events which should keep us on alert (and which keep fueling the imagination of doomsday followers):

  • September 2, 1859, disruption of telegraph service.
  • One of the best-known examples of space weather events is the collapse of the Hydro-Québec power network on March 13, 1989 due to geomagnetically induced currents (GICs). The GICs caused a transformer failure that led to a general blackout lasting more than 9 hours and affecting over 6 million people. The geomagnetic storm causing this event was itself the result of a CME ejected from the Sun on March 9, 1989.
  • Today, airlines fly over 7500 polar routes per year. These routes take aircraft to latitudes where satellite communication cannot be used, and flight crews must rely instead on high-frequency (HF) radio to maintain communication with air traffic control, as required by federal regulation. During certain space weather events, solar energetic particles spiral down geomagnetic field lines in the polar regions, where they increase the density of ionized gas, which in turn affects the propagation of radio waves and can result in radio blackouts. These events can last for several days, during which time aircraft must be diverted to latitudes where satellite communications can be used.
  • No large solar energetic particle event has happened during a manned space mission. However, one such large event happened on August 7, 1972, between the Apollo 16 and Apollo 17 lunar missions. Had this event happened during one of those missions, the dose of particles hitting an astronaut outside Earth's protective magnetic field could have been life-threatening.


In 2006, Mausumi Dikpati of the National Center for Atmospheric Research (NCAR) stated that the next sunspot cycle would be 30% to 50% stronger than the previous one. If that were correct, the years ahead could produce a burst of solar activity second only to the historic Solar Max of 1958.  The claim is based on the fact that the Sun has its own conveyor belt, just like Earth's, called the Great Conveyor Belt.  It takes about 40 years for the belt to complete one loop, and the speed varies anywhere from a 50-year pace (slow) to a 30-year pace (fast).  When the belt is turning "fast", it means that lots of magnetic fields are being swept up, and that a future sunspot cycle is going to be intense. This is a basis for forecasting, but at the moment it is just an assumption.


Related to the Great Conveyor Belt: in 2008-2009, sunspots almost completely disappeared for two years. Solar activity dropped to hundred-year lows;  Earth's upper atmosphere cooled and collapsed; the Sun’s magnetic field weakened, allowing cosmic rays to penetrate the Solar System in record numbers. It was a big event, and solar physicists openly wondered where all the sunspots had gone.


A vast system of plasma currents called ‘meridional flows’ (akin to ocean currents on Earth) travels along the Sun's surface, plunges inward around the poles, and pops up again near the Sun's equator.  These looping currents play a key role in the 11-year solar cycle.  When sunspots begin to decay, surface currents sweep up their magnetic remains and pull them down inside the star; 300,000 km below the surface, the Sun’s magnetic dynamo amplifies the decaying magnetic fields.  Re-animated sunspots become buoyant and bob up to the surface like a cork in water - a new solar cycle is born.  According to this model, the trouble with sunspots actually began back in the late 1990s, during the upswing of Solar Cycle 23.  At that time, the conveyor belt sped up.  The fast-moving belt rapidly dragged sunspot corpses down to the Sun's inner dynamo for amplification. At first glance, this might seem to boost sunspot production, but no: when the remains of old sunspots reached the dynamo, they rode the belt through the amplification zone too hastily for full re-animation.  Sunspot production was stunted.

Later, in the 2000s, according to the model, the Conveyor Belt slowed down again, allowing magnetic fields to spend more time in the amplification zone, but the damage was already done.  New sunspots were in short supply.  Adding insult to injury, the slow-moving belt did little to assist re-animated sunspots on their journey back to the surface, delaying the onset of Solar Cycle 24.


While Solar Max is relatively brief, lasting a few years and punctuated by episodes of violent flaring that are over and done in days, Solar Minimum can grind on for many years. The famous Maunder Minimum of the 17th century lasted 70 years and coincided with the deepest part of Europe's Little Ice Age. Researchers are still struggling to understand the connection.  One thing seems clear: during long minima, strange things happen. In 2008-2009, the sun's global magnetic field weakened and the solar wind subsided. Cosmic rays normally held at bay by the sun's windy magnetism surged into the inner solar system.  Ironically, during the deepest solar minimum in a century, space became a more dangerous place to travel.  At the same time, the heating action of UV rays normally provided by sunspots was absent, so Earth's upper atmosphere began to cool and collapse.  Space junk stopped decaying as rapidly as usual and started accumulating in Earth orbit.  And so on...


For the past few years, especially as we approach the peak of solar activity in the current cycle, I see more and more media releasing stories with incorrect information.  Bad news sells, and this sucks.  What sucks even more is the increasing number of people buying it.  It was exactly one such article, read on a Croatian news portal, that made me write this one, as I expect more and more misinformation to appear soon.  RTFM!  Solar activity has a regular cycle, with peaks approximately every 11 years. Near these activity peaks, solar flares can cause some interruption of satellite communications, although engineers are learning how to build electronics that are protected against most solar storms. But there is no special risk associated with the forthcoming peak of the cycle. The next solar maximum will occur in the 2012-2014 time frame and is predicted to be an average solar cycle, no different from previous cycles throughout history.
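The "peaks approximately every 11 years" rhythm makes the back-of-the-envelope check easy. A minimal sketch, assuming the previous maximum (Solar Cycle 23) peaked around 2001 and a fixed 11-year period; real cycles actually vary from roughly 9 to 14 years, so this only gives a ballpark:

```python
# Back-of-the-envelope projection of future solar maxima from a fixed
# 11-year cycle. The 2001 anchor and exact period are simplifying
# assumptions for illustration; real cycle lengths vary (~9-14 years).
def projected_maxima(last_max=2001, period=11, count=3):
    return [last_max + period * i for i in range(1, count + 1)]

print(projected_maxima())  # -> [2012, 2023, 2034]
```

Note how the first projected value lands inside the 2012-2014 window quoted above; that consistency, not precision, is the point of the sketch.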


To follow real-time Sun activity, visit the SpaceWeather web site; to engage in the search for and tracking of sunspot activity, check the Solar Stormwatch web site.


Credits: SOHO, DSO, NASA, Wikipedia, Phil Plait, Randy Russel, Jean-Pierre Brahic, Alan Friedman, Ron Garan, SpaceWeather
