
The 2012 phenomenon comprises a range of eschatological beliefs according to which cataclysmic or transformative events will occur on 21 December 2012. This date is regarded as the end-date of a 5125-year-long cycle in the Mesoamerican Long Count calendar. Various astronomical alignments and numerological formulae have been proposed as pertaining to this date, though none have been accepted by mainstream scholarship - yet various crackpots around the world accept it as fact. A New Age interpretation of this transition is that the date marks the start of a time in which Earth and its inhabitants may undergo a positive physical or spiritual transformation, and that 2012 may mark the beginning of a new era. Others suggest that the 2012 date marks the end of the world or a similar catastrophe. Scenarios suggested for the end of the world include the arrival of the next solar maximum, an interaction between Earth and the black hole at the centre of the galaxy (no idea what drugs these people use), or Earth's collision with an object such as an asteroid or a planet called "Nibiru" (the so-called 12th planet - fans of Zecharia Sitchin will know this by heart; I used to read his books as a kid too).


So what really happens on 21st December this year? Several films and documentaries have promoted the idea that the ancient Mayan calendar predicts that doomsday is less than two months away, on December 21, 2012. Well, it is going to be a Friday, and I assume it will be yet another "Friday finally" day. No one knows for sure what kind of weather to expect, but rest assured that no end of the world will strike either.




Above you see a date inscription for the Mayan Long Count. It is a calendar. Nothing else. Just like our year will finish on 31st December 2012, their cycle will end on 21st December 2012. What happens then? The next cycle starts. Simple as that. I suspect that on that day, due to numerous crackpots around the globe, it will be a bit more dangerous to be outside, for a simple psychological reason - the same one which explains why people are more likely to get hurt (or hurt others) on Friday the 13th if they really believe in it. Even the Maya are not so crazy as to believe it, so it came as no surprise when, last week, Guatemala's Mayan people accused the government and tour groups of perpetuating the myth that their calendar foresees the imminent end of the world for monetary gain. I believe NASA did a rather good video about it and you can see it below:





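The 5125-year figure quoted for the Long Count follows from simple arithmetic: the cycle ending on 21 December 2012 spans 13 b'ak'tuns of 144000 days each. A quick check in Python:

```python
# Mayan Long Count arithmetic: the cycle ending on 21 December 2012
# spans 13 b'ak'tuns, each b'ak'tun being 144,000 days.
DAYS_PER_BAKTUN = 144_000
BAKTUNS_PER_CYCLE = 13

total_days = BAKTUNS_PER_CYCLE * DAYS_PER_BAKTUN
years = total_days / 365.2425  # mean length of a Gregorian year in days

print(total_days)    # 1872000 days
print(round(years))  # ~5125 years, matching the cycle length quoted above
```

After which, like an odometer rolling over, the count simply continues into the next cycle.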
So, is there an end? Most likely yes, but there is no way for us to predict the moment. And the question, of course, is what kind of end. The end of the human race? The end of planet Earth? The end of the Universe? Let's start with the last one. 1998 really messed us up, theoretically. Until then, we knew the expansion of the Universe had to be slowing down - well, theoretically. But then Hubble showed us truly distant supernovae, and we faced the uncomfortable reality that the Universe was actually expanding more slowly in the past than it is now. That meant gravity has not been slowing the Universe's expansion; the expansion has been accelerating. What could cause that? No one knew, but theorists were on the case, and all of the explanations became a blanket dark energy hypothesis. It also meant we don't even know what we don't know. Dark energy (unknown) became 70% of existence, dark matter (unknown) another 25%, and what we know as matter is about 5%. The Big Rip is one hypothetical possibility for our cosmic doomsday, proposed by a group at the Institute of Theoretical Physics at the Chinese Academy of Sciences. Dark energy has given them the means to end the Universe the same way a hot Big Bang and inflation began it. According to their calculations, the Big Rip happens 16.7 billion years after the Big Bang. Plenty of time to ignore it.


The end of the Earth will certainly come much sooner, but not, in terms of generational life spans, anytime soon. We might get hit by some bigger rock from space, but right now the odds of anything like that are small for the next few decades. We may start a global war, or the natural cycle of Earth's climate change may mess with us and our technology at a pace we can't cope with. Extinctions have happened throughout Earth's history, so there is nothing unnatural about them. Most extinctions occurred naturally, prior to Homo sapiens walking the Earth: it is estimated that 99.9% of all species that have ever existed are now extinct. Mass extinctions are relatively rare events; however, isolated extinctions are quite common. Only recently have extinctions been recorded, and scientists have become alarmed at the high rates of recent extinctions. Most species that become extinct are never scientifically documented. Some scientists estimate that up to half of presently existing species may become extinct by 2100. It is difficult to estimate the trajectory that biodiversity might have taken without human impact, but scientists at the University of Bristol estimate that biodiversity might increase exponentially without human influence. The most popular extinct group of animals is most likely the dinosaurs.




There have been at least five mass extinctions in the history of life on Earth, and four in the last 3.5 billion years, in which many species have disappeared in a relatively short period of geological time. A massive volcanic eruptive event is considered to be one likely cause of the "Great Dying" about 252 million years ago, which is estimated to have killed 90% of species existing at the time (about 95 percent of marine life and 70 percent of terrestrial life). There is also evidence to suggest this event was preceded by another mass extinction known as Olson's Extinction. The Cretaceous–Paleogene extinction event occurred 65 million years ago, at the end of the Cretaceous period, and is best known for having wiped out the non-avian dinosaurs, among many other species.


Scientists have uncovered a lot about Earth's greatest extinction event, which took place 252 million years ago when rapid climate change wiped out nearly all marine species and a majority of those on land. Through the analysis of various dating techniques on well-preserved sedimentary sections from South China to Tibet, researchers determined that the mass extinction peaked about 252.28 million years ago and lasted less than 200000 years, with most of the extinction lasting about 20000 years. Although the cause of this event is a mystery, it has been speculated that the eruption of a large swath of volcanic rock in Russia called the Siberian Traps was a trigger for the extinction. Based on their findings, the team estimated that between 6300 and 7800 gigatonnes of sulfur, between 3400 and 8700 gigatonnes of chlorine, and between 7100 and 13700 gigatonnes of fluorine were released from magma in the Siberian Traps at the end of the Permian period. Now, researchers have discovered a new culprit likely involved in the annihilation: an influx of mercury into the ecosystem. No one had ever looked to see if mercury was a potential culprit. This was a time of the greatest volcanic activity in Earth's history, and we know today that the largest source of mercury comes from volcanic eruptions. Researchers estimate that the mercury released then could have been up to 30 times greater than that from today's volcanic activity, making the event truly catastrophic. During the late Permian, the natural buffering system in the ocean became overloaded with mercury, contributing to the loss of 95 per cent of life in the sea.



There's some general sense that the event happens, there's some aftermath and then things return to normal, but things don't return to what they were before. They operate at a different pace, sometimes more rapidly, other times more slowly. Evolutionary rates shift, and that shift is permanent until the next mass extinction.


The long-term evolutionary patterns of species diversification following mass extinctions are poorly understood. Paleontologists have extensively debated whether diversity has increased over the last 251 million years, which followed the most devastating mass extinction in Earth history. There's been a lot of talk about the evolutionary role of mass extinctions, but it's like the weather. Everyone talks about it, but no one does much about it.


During the late Pleistocene, 40000 to 10000 years ago, North America lost over 50 percent of its large mammal species. These species included mammoths, mastodons, and giant ground sloths, among many others. In total, 35 different genera (groups of species) disappeared, with a wide range of habitat preferences and feeding habits. What event or factor could cause such a mass extinction? The many hypotheses that have been developed over the years include: abrupt climate change, a comet impact, human overkill and disease. Some researchers believe that it may be a combination of these factors, one of them, or none.


A particular issue that has also contributed to this debate concerns the chronology of the extinctions. The existing fossil record is incomplete, making it difficult to tell whether the extinctions occurred as a gradual process or took place as a synchronous event. In addition, it was previously unclear whether species are missing from the terminal Pleistocene because they had already gone extinct or because they simply have not been found yet. However, new findings indicate that the extinction is best characterized as a sudden event that took place between 13.8 and 11.4 thousand years ago. The massive extinction coincides precisely with human arrival on the continent, abrupt climate change, and a possible extraterrestrial impact event. It remains possible that any one of these, or all of them, contributed to the sudden extinctions.



The survivors of a mass extinction, and the world they inherit, are so different from what went before that the rate of evolution is permanently changed. At least one paper, by Dr Zhong-Qiang Chen (China University of Geosciences in Wuhan) and Professor Michael Benton (University of Bristol), claims that recovery from the "Great Dying" lasted some 10 million years. There were apparently two reasons for the delay: the sheer intensity of the crisis, and the continuing grim conditions on Earth after the first wave of extinction. We often see mass extinctions as entirely negative, but even in this most devastating case life did recover, after many millions of years, and new groups emerged. The event reset evolution, and one could argue that we might not be here without it, as we find ourselves at the end of that chain at the moment.


Here is a funny video featuring Neil deGrasse Tyson:





At least one aspect of the 2012 end-of-the-world hype is, for some people, all too real: the fear. And the message from educated people is clear - fear not.



Credits: Hank Campbell, Wikipedia, University of Calgary, Carnegie Institution, University of Chicago, University of Bristol, George Washington University

Hrvoje Crvelin

It's full of stars...!

Posted by Hrvoje Crvelin Oct 29, 2012

Using a whopping nine-gigapixel image from the VISTA infrared survey telescope at ESO's Paranal Observatory, an international team of astronomers has created a catalogue of more than 84 million stars in the central parts of the Milky Way! This gigantic dataset contains more than ten times more stars than previous studies and is a major step forward for the understanding of our home galaxy. The image gives viewers an incredible, zoomable view of the central part of our galaxy. It is so large that, if printed with the resolution of a typical book, it would be 9 metres long and 7 metres tall. This huge picture is 108500 by 81500 pixels and contains nearly nine billion pixels. It was created by combining thousands of individual images from VISTA, taken through three different infrared filters, into a single monumental mosaic.
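Those numbers are easy to verify: multiplying the pixel dimensions gives just under nine billion pixels, and assuming a typical book is printed at about 300 dots per inch (my assumption; ESO does not state the exact figure) reproduces the quoted print size:

```python
# Dimensions of the VISTA bulge mosaic as quoted by ESO.
width_px, height_px = 108_500, 81_500

total_pixels = width_px * height_px
print(total_pixels)  # 8842750000, i.e. nearly nine gigapixels

# Assumed "typical book" print resolution of 300 dots per inch.
DPI = 300
METRES_PER_INCH = 0.0254
print(round(width_px / DPI * METRES_PER_INCH, 1))   # ~9.2 m long
print(round(height_px / DPI * METRES_PER_INCH, 1))  # ~6.9 m tall
```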


Understanding the formation and evolution of the Milky Way's bulge is vital for understanding the galaxy as a whole. However, obtaining detailed observations of this region is not an easy task. Observations of the bulge of the Milky Way are very hard because it is obscured by dust. To peer into the heart of the galaxy, researchers needed to observe in infrared light, which is less affected by the dust. The large mirror, wide field of view and very sensitive infrared detectors of ESO's 4.1-metre Visible and Infrared Survey Telescope for Astronomy (VISTA) make it by far the best tool for this job. The team of astronomers is using data from the VISTA Variables in the Via Lactea programme, one of six public surveys carried out with VISTA. This is one of the biggest astronomical images ever produced. The team has now used these data to compile the largest catalogue of the central concentration of stars in the Milky Way ever created.


To help analyse this huge catalogue the brightness of each star is plotted against its colour for about 84 million stars to create a colour-magnitude diagram. This plot contains more than ten times more stars than any previous study and it is the first time that this has been done for the entire bulge. Colour-magnitude diagrams are very valuable tools that are often used by astronomers to study the different physical properties of stars such as their temperatures, masses and ages. Each star occupies a particular spot in this diagram at any moment during its lifetime. Where it falls depends on how bright it is and how hot it is. Since the new data gives us a snapshot of all the stars in one go, we can now make a census of all the stars in this part of the Milky Way. The new colour-magnitude diagram of the bulge contains a treasure trove of information about the structure and content of the Milky Way. One interesting result revealed in the new data is the large number of faint red dwarf stars. These are prime candidates around which to search for small exoplanets using the transit method.
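As a minimal sketch of the idea (with made-up magnitudes, not VISTA data): each star contributes one point to the diagram, with its colour index - the difference between its magnitudes in two filters, here the infrared J and Ks bands used by VISTA - on one axis and its brightness on the other. Cooler, redder stars have a larger J - Ks:

```python
# Hypothetical (J, Ks) infrared magnitudes for a handful of stars;
# a real colour-magnitude diagram does this for tens of millions.
stars = {
    "star_a": (12.1, 11.9),  # small J - Ks: a hotter, bluer star
    "star_b": (14.6, 13.2),  # large J - Ks: a cool red dwarf candidate
    "star_c": (13.0, 12.6),
}

# One (colour, brightness) point per star: colour index J - Ks vs Ks magnitude.
cmd_points = {name: (j - ks, ks) for name, (j, ks) in stars.items()}

for name, (colour, ks_mag) in sorted(cmd_points.items()):
    print(f"{name}: J-Ks = {colour:.2f}, Ks = {ks_mag:.1f}")
```

Plot those points for all 84 million stars and the different stellar populations, faint red dwarfs included, separate into distinct regions of the diagram.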


Full billion-star image: (304 MB 39300x3750 pixels) (27MB 19650x1875 pixels) (14 MB 19650x1875 pixels)


Shot of star-forming area in Milky Way:


Detail of star-forming area in Milky Way (white box outlined in image above):


More details can be found here.



Credits: ESO

Hrvoje Crvelin

Perfect Storm

Posted by Hrvoje Crvelin Oct 28, 2012

In October 1991, the swordfishing boat Andrea Gail returns to port in Gloucester, Massachusetts with a poor catch. Desperate for money, Captain Billy Tyne convinces the Andrea Gail crew to join him for one more late-season fishing expedition. They head out past their usual fishing grounds, leaving a developing thunderstorm behind them. Initially unsuccessful, they head to the Flemish Cap, where their luck improves. At the height of their fishing the ice machine breaks; the only way to sell their catch before it spoils is to hurry back to shore. After debating whether to sail through the building storm or to wait it out, the crew decide to risk the storm. However, between the Andrea Gail and Gloucester lies a confluence of two powerful weather fronts and a hurricane, which the Andrea Gail crew underestimate.


After repeated warnings from other ships, the Andrea Gail loses her antenna, forcing a fellow ship to call in a Mayday. An Air National Guard rescue helicopter responds, but after failing to perform a midair refuel, the helicopter crew ditch the aircraft before it crashes, and all but one of the crew members are rescued by a Coast Guard vessel, the Tamaroa. The Andrea Gail endures various problems: 12 m waves crashing onto the deck, a broken stabilizer ramming the side of the ship, and two men thrown overboard. The crew decide to turn around to avoid further damage from the storm. After doing so, the vessel encounters an enormous rogue wave. They apply full power to ride over the wave; it seems that they may make it over, but the wave starts to break and the boat flips over. There were no survivors.


Above is what happens in the movie The Perfect Storm. It was released in 2000 and received mixed reviews, while being a huge success in cinemas. However, while the story claims to be based on factual events, it is not. The details do not matter here; it is enough to say that the motive for the story came from something called a perfect storm, which really did happen in 1991. So what was this perfect storm, and what does the term mean at all?


A perfect storm is an expression that describes an event where a rare combination of circumstances aggravates a situation drastically. The term is also used to describe an actual phenomenon that happens to occur in such a confluence, resulting in an event of unusual magnitude. It sounds like a lottery, but with bad luck as the outcome. It is important to say that the perfect storm is not something we first became aware of in 1991; the earliest use of the expression in the meteorological sense goes back to 1936. Still, in 1993, journalist Sebastian Junger planned to write a book about the 1991 Halloween Nor'easter storm. In the course of his research, he spoke with Bob Case, who had been a deputy meteorologist in the Boston office of the National Weather Service at the time of the storm. Case described to Junger the confluence of three different weather-related phenomena that combined to create what Case referred to as the "perfect situation" to generate such a storm:

  • warm air from a low-pressure system coming from one direction,
  • a flow of cool and dry air generated by a high-pressure system from another direction, and
  • tropical moisture provided by Hurricane Grace.


From that, Junger keyed on Case's use of the word perfect and coined the phrase perfect storm, choosing to use The Perfect Storm as the title of his book. Junger published his book The Perfect Storm in 1997 and its success brought the phrase into popular culture. Its adoption was accelerated with the release of the 2000 feature film adaptation of Junger's book. Since the release of the movie, the phrase has grown to mean any event where a situation is aggravated drastically by an exceptionally rare combination of circumstances. So what happened in 1991?




It was a nor'easter that absorbed Hurricane Grace and ultimately evolved into a small hurricane late in its life cycle. The initial area of low pressure developed off Atlantic Canada on October 28. Forced southward by a ridge to its north, it reached its peak intensity as a large and powerful cyclone. The storm lashed the East Coast of the United States with high waves and coastal flooding, before turning to the southwest and weakening. Moving over warmer waters, the system transitioned into a subtropical cyclone before becoming a tropical storm. It executed a loop off the Mid-Atlantic states and turned toward the northeast. On November 1 the system evolved into a full-fledged hurricane with peak winds of 120 km/h, although the National Hurricane Center left it unnamed to avoid confusion amid media interest in the predecessor extratropical storm. Damage from the Perfect Storm totaled over $200 million (1991 USD) and the death toll was thirteen. Most of the damage occurred while the storm was extratropical, after waves up to 10 m struck the coastline from Canada to Florida and southeastward to Puerto Rico. In Massachusetts, where damage was heaviest, over 100 homes were destroyed or severely damaged. To the north, more than 100 homes were affected in Maine. More than 38000 people were left without power, and along the coast high waves inundated roads and buildings.


Although the 1991 Halloween Nor'easter was a powerful storm by any measure, there have been other storms that exceeded its strength. Actually, to my European friends this may look a bit silly due to our cultural differences. For example, my hometown is Rijeka in Croatia. Not far away there is a small city, Senj, which is known for a wind called the bora. The bora is a strong wind which I have felt a few times, and it's not much fun. However, we see little damage because we live in stone houses and in general the infrastructure is fit for the environment. If such a force hit my hometown, I could see a few badly made roofs blown away or cars on the streets damaged by falling trees. Actually, the whole Adriatic coast may be subject to this during winter. If you have ever used the Croatian highway to the sea, then most likely you passed through a tunnel called Sveti Rok, where bora winds have reached 220 km/h. Not far from there, near the island of Pag, a record 250 km/h was measured at the bridge connecting the island with the rest of the coast. However, human environments differ, and the one in the US is different from the one over in Europe, so when a natural force as described above strikes, you get different outcomes, and this has to be taken into consideration (even if I always get flashbacks of the Three Little Pigs). And no one knows what the outcome will be of what might be the next perfect storm - the one about to hit the coast in a few hours.




NASA's Suomi-NPP satellite took the above image of the monster storm in the infrared while it was making a mess over Cuba, and already this picture gives you an idea of its dimensions. It's huge! Current predictions are that it may be even bigger and more damaging than the one in 1991. This is because Sandy is a hurricane in its own right, but there is also a nor'easter, a low pressure system, off the coast farther north. Together, these two systems can produce a much larger storm capable of dropping a lot of rain and flooding inland areas (unrelated, much of the snow and rain caused the Slovenian/Croatian nuclear power plant to be shut down when a nearby river swelled). On top of that, of course, there are also high winds. The system is also slow moving, potentially making things a lot worse. That gives it more time to do damage, but we're also approaching the full Moon on October 29. It's not the Moon's phase that matters, but the position: when it is aligned with the Sun in the sky (either at full or new phase) the tidal force from the Moon aligns with that of the Sun, adding together. The tides from the Sun are about half the strength of the Moon's, but together (called a spring tide) they can increase the chance of flooding because high tide is slightly higher than normal. On October 26th, the ISS took the following photo:




You can see the eye of the storm. On the left is the SpaceX Dragon capsule, which today landed back on Earth (in the ocean). New York City's subway, bus and train system is to be suspended as preparations are stepped up for the arrival of Hurricane Sandy. As many as 375000 people have been ordered to evacuate low-lying areas, and schools will be shut. Up to 60 million people could be affected by the storm. If you planned to fly to affected areas - forget it. Nearly three quarters of the coast along the Delmarva Peninsula is very likely to experience beach and dune erosion as Hurricane Sandy makes landfall, while overwash is expected along nearly half of the shoreline.
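The spring-tide effect mentioned a little earlier is easy to quantify: tidal acceleration scales as mass over distance cubed, so plugging in standard values for the Sun and Moon confirms that the solar tide is about half the lunar one, and that the two add at full or new Moon:

```python
# Tidal acceleration scales as mass / distance^3.
M_SUN,  D_SUN  = 1.989e30, 1.496e11   # kg, m
M_MOON, D_MOON = 7.342e22, 3.844e8    # kg, m

tide_sun  = M_SUN  / D_SUN**3
tide_moon = M_MOON / D_MOON**3

ratio = tide_sun / tide_moon
print(round(ratio, 2))      # ~0.46: the solar tide is about half the lunar tide

# At full or new Moon the two align (spring tide); at quarter phases
# they partly cancel (neap tide).
print(round(1 + ratio, 2))  # spring tide: ~1.46x the lunar tide alone
print(round(1 - ratio, 2))  # neap tide:   ~0.54x
```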




The USGS coastal change model forecasting likely dune erosion and overwash from the storm can be viewed online here. Of course, this is also a good opportunity for science. Working with various partner agencies such as NOAA, FEMA, and the U.S. Army Corps of Engineers, the US Geological Survey is securing storm-tide sensors, frequently called storm-surge sensors, to piers and poles in areas where the storm is expected to make landfall. The instruments being installed will record the precise time the storm-tide arrived, how ocean and inland water levels changed during the storm, the depth of the storm-tide throughout the event, and how long it took for the water to recede. This information will be used to assess storm damage, discern between wind and flood damage, and improve computer models used to forecast future coastal inundation. In addition, rapid deployment gauges will be installed at critical locations to provide real-time information to forecast floods and coordinate flood-response activities in the affected areas. The sensors augment a network of existing U.S. Geological Survey streamgages, which are part of the permanent network of more than 7500 streamgages nationwide. Of the sensors deployed specifically for Sandy, eight have real-time capability that will allow viewing of the storm-tide as the storm approaches and makes landfall. Besides water level, some of these real-time gauges include precipitation and wind sensors that will transmit all data hourly. All data collected by these sensors and the existing USGS streamgage network will be available via the USGS Storm-Tide Mapper link.


An animation of NOAA's GOES-13 satellite observations from Oct. 26-28, 2012 is available here and shows Hurricane Sandy moving out of the Bahamas as its western clouds spread over the U.S. eastern seaboard. The circulation is evident over the Atlantic Ocean as Sandy moved northward. By the end of the video you can see the eye of the storm forming.




To follow what is going on, use some of the links within this article and check the National Hurricane Center. You may experience disruption to electricity and water, so make sure you at least have some water and candles standing by. If communications are down, make sure to turn off wireless options on your mobile, or perhaps the whole phone - it will save battery. Most important, follow the advice of those who are there to help you. Good luck!



Credits: NOAA, Phil Plait, CNN, BBC, NASA, US Geological Survey

Here is the last post in the Solar System series, and this one is about the so-called belts and clouds. Yes, our solar system is also home to a number of regions populated by smaller objects. The asteroid belt, which lies between Mars and Jupiter, is similar to the terrestrial planets as it is composed mainly of rock and metal. Beyond Neptune's orbit lie the Kuiper belt and scattered disc, linked populations of trans-Neptunian objects composed mostly of ices such as water, ammonia and methane. The Oort cloud is a hypothesized spherical cloud of comets which may lie roughly 50000 AU, or nearly a light-year, from the Sun. The outer limit of the Oort cloud defines the cosmographical boundary of the Solar System and the region of the Sun's gravitational dominance.


Asteroids are small Solar System bodies composed mainly of refractory rocky and metallic minerals, with some ice. The asteroid belt occupies the orbit between Mars and Jupiter, between 2.3 and 3.3 AU from the Sun. It is thought to consist of remnants from the Solar System's formation that failed to coalesce because of the gravitational interference of Jupiter. Asteroids range in size from hundreds of kilometres across to microscopic. All asteroids except the largest, Ceres, are classified as small Solar System bodies, but some asteroids such as Vesta and Hygiea may be reclassified as dwarf planets if they are shown to have achieved hydrostatic equilibrium. The asteroid belt contains tens of thousands, possibly millions, of objects over one kilometre in diameter. Despite this, the total mass of the asteroid belt is unlikely to be more than a thousandth of that of the Earth. The asteroid belt is very sparsely populated; spacecraft routinely pass through without incident. Asteroids in the asteroid belt are divided into asteroid groups and families based on their orbital characteristics. Asteroid moons are asteroids that orbit larger asteroids. They are not as clearly distinguished as planetary moons, sometimes being almost as large as their partners. The asteroid belt also contains main-belt comets, which may have been the source of Earth's water.




The asteroid belt formed from the primordial solar nebula as a group of planetesimals, the smaller precursors of the planets, which in turn formed protoplanets. Between Mars and Jupiter, however, gravitational perturbations from the giant planet imbued the protoplanets with too much orbital energy for them to accrete into a planet. Collisions became too violent, and instead of fusing together, the planetesimals and most of the protoplanets shattered. As a result, most of the asteroid belt's mass has been lost since the formation of the Solar System. Some fragments can eventually find their way into the inner Solar System, leading to meteorite impacts with the inner planets. Asteroid orbits continue to be appreciably perturbed whenever their period of revolution about the Sun forms an orbital resonance with Jupiter.


The Kuiper Belt is a disc-shaped region of icy objects beyond the orbit of Neptune - billions of kilometers from our sun. Pluto and Eris are the best known of these icy worlds. There may be hundreds more of these ice dwarfs out there. It is similar to the asteroid belt, although it is far larger - 20 times as wide and 20 to 200 times as massive. Like the asteroid belt, it consists mainly of small bodies, or remnants from the Solar System's formation. While most asteroids are composed primarily of rock and metal, Kuiper belt objects are composed largely of frozen volatiles (termed "ices"), such as methane, ammonia and water. The classical belt is home to at least three dwarf planets: Pluto, Haumea, and Makemake. Some of the Solar System's moons, such as Neptune's Triton and Saturn's Phoebe, are also believed to have originated in the region.




Since the belt was discovered in 1992, the number of known Kuiper belt objects (KBOs) has increased to over a thousand, and more than 100000 KBOs over 100 km in diameter are believed to exist. The Kuiper Belt was first postulated - most famously by Gerard Kuiper - by planetary scientists back in the 1930s, '40s and '50s. But it took until 1992 for technology to mature sufficiently to find another object (outside the Pluto system) orbiting the Sun beyond Neptune. The Kuiper belt was initially thought to be the main repository for periodic comets, those with orbits lasting less than 200 years. However, studies since the mid-1990s have shown that the classical belt is dynamically stable, and that comets' true place of origin is the scattered disc, a dynamically active zone created by the outward motion of Neptune 4.5 billion years ago. The Kuiper Belt and the even more distant Oort Cloud are believed to be the home of the comets that orbit our sun. The Kuiper belt should not be confused with the hypothesized Oort cloud, which is a thousand times more distant. The objects within the Kuiper belt, together with the members of the scattered disc and any potential Hills cloud or Oort cloud objects, are collectively referred to as trans-Neptunian objects (TNOs). The plot below shows one aspect of Kuiper Belt structure: different numbers of bodies orbit at different distances. The graph includes just the known bodies, which make up a tiny fraction of the grand total.




There are at least three big solar system lessons we have learned from the Kuiper Belt:

  • That our planetary system is much larger than we used to think. In fact, we were largely unaware of the Kuiper Belt - the largest structure in our solar system - until it was discovered 20 years ago. It's akin to not having maps of the Earth that included the Pacific Ocean as recently as 1992.
  • That the locations, orbital eccentricities and inclinations of the planets in our solar system (and other solar systems as well) can change with time. In some cases this even produces wholesale migrations of planets. We have firm evidence that many KBOs (including some large ones like Pluto) were born much closer to the Sun, in the region where the giant planets now orbit.
  • And, perhaps most surprisingly, that our solar system, and very likely many others, was very good at making small planets, which dominate the planetary population. Today we know of more than a dozen dwarf planets in the Solar System, and those dwarfs already outnumber the gas giants and terrestrial planets combined. But it is estimated that the ultimate number of dwarf planets we will discover in the Kuiper Belt and beyond may well exceed 10000.


The Oort cloud is thought to comprise two separate regions: a spherical outer Oort cloud and a disc-shaped inner Oort cloud, or Hills cloud. Objects in the Oort cloud are largely composed of ices, such as water, ammonia, and methane. Astronomers believe that the matter composing the Oort cloud formed closer to the Sun and was scattered far out into space by the gravitational effects of the giant planets early in the Solar System's evolution. However, citing the Southwest Research Institute, NASA published a 2010 article that includes the following quotation:


We know that stars form in clusters. The Sun was born within a huge community of other stars that formed in the same gas cloud. In that birth cluster, the stars were close enough together to pull comets away from each other via gravity.


It is therefore speculated that the Oort cloud is, at least partly, the product of an exchange of materials between the Sun and its sister stars as they formed and drifted apart.



Although no confirmed direct observations of the Oort cloud have been made, astronomers believe that it is the source of all long-period and Halley-type comets entering the inner Solar System and many of the centaurs and Jupiter-family comets as well. The outer Oort cloud is only loosely bound to the Solar System, and thus is easily affected by the gravitational pull both of passing stars and of the Milky Way Galaxy itself. These forces occasionally dislodge comets from their orbits within the cloud and send them towards the inner Solar System.


Based on their orbits, most of the short-period comets may come from the scattered disc, but some may still have originated from the Oort cloud. Although the Kuiper belt and the scattered disc have been observed and mapped, only four currently known trans-Neptunian objects - 90377 Sedna, 2000 CR105, 2006 SQ372, and 2008 KV42 - are considered possible members of the inner Oort cloud.


When the New Horizons spacecraft makes its close flybys of the Pluto system and smaller KBOs, and new giant telescopes come on line to probe the sky, we will learn even more. New information about the Solar System is important to astrobiologists who are trying to determine how our system evolved to support the only habitable planet yet known - Earth. This knowledge is useful in determining where else in the Universe habitable planets might exist. Why? Think of lithopanspermia, the idea that basic life forms are distributed throughout the universe via meteorite-like planetary fragments cast forth by disruptions such as volcanic eruptions and collisions with other matter. Eventually, another planetary system's gravity traps these roaming rocks, which can result in a mingling that transfers any living cargo. Researchers based at Princeton University, the University of Arizona and the Centro de Astrobiología in Spain used a low-velocity process called weak transfer to provide the strongest support yet for lithopanspermia. Under weak transfer, a slow-moving planetary fragment meanders into the outer edge of the gravitational pull, or weak stability boundary, of a planetary system. The system has only a loose grip on the fragment, meaning the fragment can escape and be propelled into space, drifting until it is pulled in by another planetary system.


Previous research on this possible phenomenon suggests that the speed with which solid matter hurtles through the cosmos makes the chances of being snagged by another object highly unlikely. But the Princeton, Arizona and CAB researchers reconsidered lithopanspermia under a low-velocity process called weak transfer wherein solid materials meander out of the orbit of one large object and happen into the orbit of another. In this case, the researchers factored in velocities 50 times slower than previous estimates, or about 100 meters per second. Using the star cluster in which our sun was born as a model, the team conducted simulations showing that at these lower speeds the transfer of solid material from one star's planetary system to another could have been far more likely than previously thought. The researchers suggest that of all the boulders cast off from our solar system and its closest neighbor, five to 12 out of 10000 could have been captured by the other. Earlier simulations had suggested chances as slim as one in a million.


This new research says the opposite of most previous work. It says that lithopanspermia might have been very likely, and it may be the first paper to demonstrate that. If this mechanism is true, it has implications for life in the universe as a whole. This could have happened anywhere.


Low velocities offer very high probabilities for the exchange of solid material via weak transfer and the timing of such an exchange could be compatible with the actual development of our solar system, as well as with the earliest known emergence of life on Earth. The researchers report that the solar system and its nearest planetary-system neighbor could have swapped rocks at least 100 trillion times well before the sun struck out from its native star cluster.



Furthermore, existing rock evidence shows that basic life forms could indeed date from the sun's birth cluster days - and have been hardy enough to survive an interstellar journey and eventual impact. The conclusion is that the weak transfer mechanism makes lithopanspermia a viable hypothesis because it would have allowed large quantities of solid material to be exchanged between planetary systems, and involves timescales that could potentially allow the survival of microorganisms embedded in large boulders.


Chaotic in nature, weak transfer happens when a slow-moving object such as a meteorite wanders into the outer edge of the gravitational pull of a larger object with a low relative velocity, such as a star or massive Jupiter-like planet. The smaller object partially orbits the large object, but the larger object has only a loose grip on it. This means the smaller object can escape and be propelled into space, drifting until it is pulled in by another large object. Weak transfer was first demonstrated with the Japanese lunar probe Hiten in 1991. A mechanical malfunction left the probe with insufficient fuel to enter the moon's orbit the traditional way, which is to approach at high speed then fire retrorockets to slow down. Instead, mission planners sent Hiten on a slow, low-energy trajectory that allowed the Moon's gravity to capture it.


Star birth clusters satisfy two requirements for weak transfer. First, the sending and receiving planetary systems must contain a massive planet that captures the passing solid matter in the weak-gravity boundary between itself and its parent star. Earth's solar system qualifies, and several other stars in the sun's birth cluster would too. Second, both planetary systems must have low relative velocities. In the sun's stellar cluster, between 1000 and 10000 stars were gravitationally bound to one another for hundreds of millions of years, each with a velocity of no more than a sluggish one kilometer per second. Researchers simulated 5 million trajectories between single-star planetary systems - in a cluster with 4300 stars - under three conditions: the solid matter's "source" and "target" stars were both the same mass as the sun; the target star was only half the sun's mass; or the source star was half the sun's mass. For lithopanspermia to happen, however, microorganisms first have to survive the long, radiation-soaked journey through space.


A 2009 paper an international team published in the Astrophysical Journal determined how long microorganisms could survive in space based on the size of the solid matter hosting them. That group's computer simulations showed that survival times ranged from 12 million years for a boulder up to 3 centimeters in diameter, to 500 million years for a solid object 2.67 meters across. The researchers estimated that under weak transfer, solid matter that had escaped one planet would need tens of millions of years to finally collide with another one. This falls within the lifespan of the sun's birth cluster, but means that lithopanspermia by weak transfer would have been limited to planetary fragments at least one meter, or about three feet, in size. As for the actual transfer of life, the researchers suggest that roughly 300 million lithopanspermia events could have occurred between our solar system and the closest planetary system. But even if microorganisms survived the trip to Earth, the planet had to be ready to receive them. The researchers reference rock-dating evidence suggesting that Earth contained water when the solar system was only 288 million years old and that very early life might have emerged before the solar system was 718 million years old. The sun's birth cluster - assumed to be roughly the same age as Earth's solar system - slowly broke apart when the solar system was approximately 135 million to 535 million years old.
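As a rough illustration of those two survival figures, you can interpolate between them. The power-law form below is purely an assumption for illustration - the 2009 study used detailed modelling, not this fit - but it gives a feel for the numbers:

```python
import math

# Two data points quoted from the 2009 Astrophysical Journal study:
# microbes inside a 3 cm boulder survive ~12 Myr; inside a 2.67 m
# object, ~500 Myr.
d1, t1 = 0.03, 12e6     # diameter (m), survival time (yr)
d2, t2 = 2.67, 500e6

# Assumed power law t = A * d**k through the two points (illustrative only)
k = math.log(t2 / t1) / math.log(d2 / d1)
A = t1 / d1 ** k

def survival_years(diameter_m):
    """Rough interpolated survival time (years) for a rock of a given size."""
    return A * diameter_m ** k
```

For the one-meter fragments the paragraph gives as the lower size limit, this crude fit suggests survival times on the order of 200 million years - comfortably longer than the tens of millions of years a weak-transfer crossing is estimated to take.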


In addition, the sun could have been ripe for weak transfer up to 700 million years after the solar system formed. So, if life arose on Earth shortly after surface water was available, there were possibly about 400 million years when life could have journeyed from Earth to another habitable world, and vice versa. If life had an early start in other planetary systems and developed before the sun's birth cluster dispersed, life on Earth may have originated beyond our solar system. Studying clouds and belts may give us some answers on these topics.



Credits: Wikipedia, NASA, Astrobio, Princeton University

Hrvoje Crvelin

Keck does Uranus

Posted by Hrvoje Crvelin Oct 26, 2012

I recently published an article about Uranus. The planet Uranus, known since Voyager's 1986 flyby as a bland, featureless blue-green orb, is beginning to show its face. By using a new technique with the telescopes of the Keck Observatory, astronomers have created the most richly detailed, highest-resolution images ever taken of the giant ice planet in the near infrared, revealing an incredible array of atmospheric detail and more complex weather. The planet, in fact, looks like many of the solar system's other large planets - the gas giants Jupiter and Saturn, and the ice giant Neptune. The planet has bands of circulating clouds, massive swirling hurricanes and an unusual swarm of convective features at its north pole.


Saturn's south pole is characterized by a polar vortex or hurricane, surrounded by numerous small cloud features that are indicative of strong convection and analogous to the heavily precipitating clouds encircling the eye of terrestrial hurricanes. A similar phenomenon may be present on Neptune, based upon Keck observations of that planet. And we may see a vortex at Uranus' pole when the pole comes into full view too. This study was led by Larry Sromovsky, a planetary scientist at the University of Wisconsin, Madison.




Uranus is so far away - 30 times farther from the Sun than Earth - that even with the best of telescopes, almost no detail can be seen. By combining multiple images of the planet taken by the Keck II telescope on the summit of Hawaii's 14000-foot Mauna Kea volcano, the team was able to reduce the noise and tease out weather features that are otherwise obscured. Researchers used two different filters over two observing nights to characterize cloud features at different altitudes.


The images above reveal an astonishing amount of complexity in Uranus' atmosphere. We knew the planet was active, but until now, much of the activity had been masked by the noise in the data. The astronomers found that in the planet's deep atmosphere, composed of hydrogen, helium and methane, winds blow mainly in east-west directions at speeds up to 560 miles per hour, in spite of the small amounts of energy available to drive them. Its atmosphere is the coldest in our solar system, with cloud-top temperatures in the minus 360-degree Fahrenheit range, partly due to Uranus' great distance from the sun. Sunlight there is about 900 times weaker than on Earth, so you don't have the same intensity of solar energy driving the system as we do here. Thus, the atmosphere of Uranus must operate as a very efficient machine with very little dissipation.


Yet it undergoes dramatic variations that seem to defy that requirement. Large weather systems, which are probably much less violent than the storms we know on Earth, behave in bizarre ways on Uranus. Some stay at fixed latitudes and undergo large variations in activity, while others drift towards the equator while undergoing great changes in size and shape. A key to understanding these behaviors was a better measurement of the wind field surrounding them. That required detecting smaller, more widely distributed features to better sample the atmospheric flow. One new feature found by the group is a scalloped band of clouds just south of Uranus' equator. The band may indicate atmospheric instability or wind shear. This is new, and we don't fully understand what it means, as we haven't seen it anywhere else on Uranus.



Credits: University of California, University of Wisconsin

Hrvoje Crvelin

Grandmother hypothesis

Posted by Hrvoje Crvelin Oct 25, 2012

Do your kids drive you crazy sometimes? So what do you do? Call the grandparents? You think that will make your life easier? You bet! And not only that - you will live longer too! Computer simulations provide new mathematical support for the "grandmother hypothesis" - a famous theory that humans evolved longer adult lifespans than apes because grandmothers helped feed their grandchildren.


The grandmother hypothesis says that when grandmothers help feed their grandchildren after weaning, their daughters can produce more children at shorter intervals; the children become younger at weaning but older when they first can feed themselves and when they reach adulthood; and women end up with postmenopausal lifespans just like ours. By allowing their daughters to have more children, a few ancestral females who lived long enough to become grandmothers passed their longevity genes to more descendants, who had longer adult lifespans as a result.




The simulations indicate that with only a little bit of grandmothering - and without any assumptions about human brain size - animals with chimpanzee lifespans evolve human-like lifespans in less than 60000 years. Female chimps rarely live past their child-bearing years, usually into their 30s and sometimes their 40s. Human females often live decades past their child-bearing years. The findings showed that from the time adulthood is reached, the simulated creatures initially lived another 25 years, like chimps, yet after 24000 to 60000 years of grandmothers caring for grandchildren, the creatures who reached adulthood lived another 49 years - as do human hunter-gatherers.


The hypothesis stemmed from observations by Kristen Hawkes and James O'Connell in the 1980s when they lived with Tanzania's Hadza hunter-gatherer people and watched older women spend their days collecting tubers and other foods for their grandchildren. Except for humans, all other primates and mammals collect their own food after weaning. But as human ancestors evolved in Africa during the past 2 million years, the environment changed, growing drier with more open grasslands and fewer forests - forests where newly weaned infants could collect and eat fleshy fruits on their own. So moms had two choices. They could either follow the retreating forests, where foods were available that weaned infants could collect, or continue to feed the kids after the kids were weaned. That is a problem for mothers because it means you can't have the next kid while you are occupied with this one. That opened a window for the few females whose childbearing years were ending - grandmothers - to step in and help, digging up potato-like tubers and cracking hard-shelled nuts in the increasingly arid environment. Those are tasks newly weaned apes and human ancestors couldn't handle as infants. The primates who stayed near food sources that newly weaned offspring could collect are our great ape cousins. The ones that began to exploit resources little kids couldn't handle opened this window for grandmothering and eventually evolved into humans.


Evidence that grandmothering increases grandchildren's survival is seen in 19th and 20th century Europeans and Canadians, and in the Hadza and some other African peoples. But it is possible that the benefits grandmothers provide to their grandchildren might be the result of long postmenopausal lifespans that evolved for other reasons, so the new study set out to determine if grandmothering alone could result in the evolution of ape-like life histories into the long postmenopausal lifespans seen in humans. The new study isn't the first to attempt to model or simulate the grandmother effect. A 1998 study by Hawkes and colleagues took a simpler approach, showing that grandmothering accounts for differences between humans and modern apes in life-history events such as age at weaning, age at adulthood and longevity. A recent simulation by other researchers said there were too few females living past their fertile years for grandmothering to affect lifespan in human ancestors. The new study grew from Hawkes' skepticism about that finding. Unlike Hawkes' 1998 study, the new study simulated evolution over time, asking: if you start with a life history like the one we see in great apes, and then you add grandmothering, what happens?


The simulations measured the change in adult longevity - the average lifespan from the time adulthood begins. Chimps that reach adulthood (age 13) live an average of another 15 or 16 years. People in developed nations who reach adulthood (at about age 19) live an average of another 60 years or so - to the late 70s or low 80s. The extension of adult lifespan in the new study involves evolution in prehistoric time; increasing lifespans in recent centuries have been attributed largely to clean water, sewer systems and other public health measures. The researchers were conservative, making the grandmother effect "weak" by assuming that a woman couldn't be a grandmother until age 45 or after age 75, that she couldn't care for a child until age 2, and that she could care only for one child and that it could be any child, not just her daughter's child. Based on earlier research, the simulation assumed that any newborn had a 5 percent chance of a gene mutation that could lead to either a shorter or a longer lifespan. The simulation begins with only 1 percent of women living to grandmother age and able to care for grandchildren, but by the end of the 24000 to 60000 simulated years, the results are similar to those seen in human hunter-gatherer populations: about 43 percent of adult women are grandmothers. The new study found that from adulthood, additional years of life doubled from 25 years to 49 years over the simulated 24000 to 60000 years. The difference in how fast the doubling occurred depends on different assumptions about how much a longer lifespan costs males: Living longer means males must put more energy and metabolism into maintaining their bodies longer, so they put less vigor into competing with other males over females during young adulthood. The simulation tested three different degrees to which males are competitive in reproducing.
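The flavor of such a simulation can be sketched in a toy model. Everything below - the fertility bonus, the 30-year helper threshold, the mutation step - is invented for illustration and is not the researchers' actual code; it only shows how a heritable lifespan trait can ratchet upward once long-lived females raise their descendants' numbers:

```python
import random

def mean_adult_lifespan(generations=300, pop_size=500, seed=42):
    """Toy grandmother-effect model (all parameters illustrative).

    Each female carries a heritable 'adult lifespan' trait. Females whose
    trait exceeds the childbearing span (here 30 adult years) survive to
    help feed grandchildren, which raises their expected offspring count.
    """
    random.seed(seed)
    pop = [25.0] * pop_size                    # chimp-like: ~25 adult years
    for _ in range(generations):
        next_gen = []
        while len(next_gen) < pop_size:
            mother = random.choice(pop)
            kids = 3 if mother > 30.0 else 2   # grandmothering bonus
            for _ in range(kids):
                child = mother
                if random.random() < 0.05:     # 5% mutation chance per birth
                    child += random.uniform(-2.0, 2.0)
                next_gen.append(max(5.0, child))
        pop = next_gen[:pop_size]
    return sum(pop) / len(pop)
```

Starting from a chimp-like 25 adult years, lineages that mutate past the helper threshold leave more offspring, so the population mean tends to drift upward over the generations, qualitatively mirroring the 25-to-49-year shift reported in the real study.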


The competing "hunting hypothesis" holds that as resources dried up for human ancestors in Africa, hunting became better than foraging for finding food, and that led to natural selection for bigger brains capable of learning better hunting methods and clever use of hunting weapons. Women formed "pair bonds" with men who brought home meat. Many anthropologists argue that increasing brain size in our ape-like ancestors was the major factor in humans developing lifespans different from apes. But the new computer simulation ignored brain size, hunting and pair bonding, and showed that even a weak grandmother effect can make the simulated creatures evolve from chimp-like longevity to human longevity. So Hawkes believes the shift to longer adult lifespan caused by grandmothering "is what underlies subsequent important changes in human evolution, including increasing brain size". If you are a chimpanzee, gorilla or orangutan baby, your mom is thinking about nothing but you. But if you are a human baby, your mom has other kids she is worrying about, and that means there is now selection on you - which was not on any other apes - to much more actively engage her: "Mom! Pay attention to me!"



Credits: University of Utah

It's a big claim, but Washington University in St. Louis planetary scientist Frédéric Moynier says his group has discovered evidence that the Moon was born in a flaming blaze of glory when a body the size of Mars collided with the early Earth. The evidence might not seem all that impressive to a nonscientist: a tiny excess of a heavier variant of the element zinc in Moon rocks. But the enrichment probably arose because heavier zinc atoms condensed out of the roiling cloud of vaporized rock created by a catastrophic collision faster than lighter zinc atoms, and the remaining vapor escaped before it could condense. Scientists have been looking for this kind of sorting by mass, called isotopic fractionation, ever since the Apollo missions first brought Moon rocks to Earth, and Moynier and his students are the first to find it.




The Moon rocks, geochemists discovered, while otherwise chemically similar to Earth rocks, were woefully short on volatiles (easily evaporated elements). A giant impact explained this depletion, whereas alternative theories for the Moon's origin did not.


But a creation event that allowed volatiles to slip away should also have produced isotopic fractionation (see recent article).


Scientists looked for fractionation but were unable to find it, leaving the impact theory of origin in limbo - neither proved nor disproved - for more than 30 years. The magnitude of the fractionation researchers measured in lunar rocks is 10 times larger than what we see in terrestrial and martian rocks, so it's an important difference.



According to the Giant Impact Theory, proposed in its modern form at a conference in 1975, Earth's moon was created in an apocalyptic collision between a planetary body called Theia and the early Earth. This collision was so powerful it is hard for mere mortals to imagine, but the asteroid that killed the dinosaurs is thought to have been the size of Manhattan, whereas Theia is thought to have been the size of the planet Mars. The smashup released so much energy it melted and vaporized Theia and much of the proto-Earth's mantle. The Moon then condensed out of the cloud of rock vapor, some of which also re-accreted to Earth. This seemingly outlandish idea gained traction because computer simulations showed a giant collision could have created an Earth-Moon system with the right orbital dynamics and because it explained a key characteristic of the Moon rocks. Once geochemists got Moon rocks into the lab, they quickly realized that the rocks are depleted in what geochemists call "moderately volatile" elements. They are very poor in sodium, potassium, zinc, and lead. But if the rocks were depleted in volatiles because they had been vaporized during a giant impact, we should also have seen isotopic fractionation.


When a rock is melted and then evaporated, the light isotopes enter the vapor phase faster than the heavy isotopes, so you end up with a vapor enriched in the light isotopes and a solid residue enriched in the heavier isotopes. If you lose the vapor, the residue will be enriched in the heavy isotopes compared to the starting material. The trouble was that scientists who looked for isotopic fractionation couldn't find it. To make sure the effect was global, the team analyzed 20 samples of lunar rocks, including ones from the Apollo 11, Apollo 12, Apollo 15, and Apollo 17 missions - all of which went to different locations on the Moon - and one lunar meteorite. What the researchers wanted were the basalts, because they're the ones that came from inside the Moon and would be more representative of the Moon's composition. But lunar basalts have different chemical compositions, including a wide range of titanium concentrations. Isotopes can also fractionate during the solidification of minerals from a melt. The effect should be very, very tiny, but to make sure this wasn't what they were seeing, they analyzed both titanium-rich and titanium-poor basalts, which are at the two extremes of the range of chemical composition on the Moon.
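The process described here is classic Rayleigh distillation: if part of the melt evaporates and the vapor is lost, the heavy-to-light isotope ratio of the residue grows as the remaining fraction shrinks. A minimal sketch (the fractionation factor alpha below is illustrative, not the measured value for lunar zinc):

```python
def residue_enrichment(f, alpha):
    """Rayleigh distillation: heavy/light isotope ratio of the solid residue,
    relative to the starting material, after a fraction (1 - f) of the melt
    has evaporated. alpha < 1 means the light isotope evaporates faster,
    so the residue becomes enriched in the heavy isotope as f shrinks."""
    return f ** (alpha - 1.0)

def delta_permil(f, alpha):
    """Same enrichment in the geochemists' per-mil (parts per thousand) notation."""
    return (residue_enrichment(f, alpha) - 1.0) * 1000.0
```

With an illustrative alpha of 0.999, boiling off half the material enriches the residue by roughly 0.7 per mil, and the enrichment grows steeply as more material is lost - which is why heavy devolatilization of Moon rocks should leave a measurable heavy-isotope fingerprint.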


For comparison, they also analyzed 10 Martian meteorites. A few had been found in Antarctica but the others were from the collections at the Field Museum, the Smithsonian Institution and the Vatican. Mars, like Earth, is very rich in volatile elements. Because there is a decent amount of zinc inside the rocks, they only needed a tiny bit to test for fractionation, and so these samples were easier to get. Compared to terrestrial or martian rocks, the lunar rocks Moynier and his team analyzed have much lower concentrations of zinc but are enriched in the heavy isotopes of zinc. Earth and Mars have isotopic compositions like those of chondritic meteorites, which are thought to represent the original composition of the cloud of gas and dust from which the solar system formed. The simplest explanation for these differences is that conditions during or after the formation of the Moon led to more extensive volatile loss and isotopic fractionation than was experienced by Earth or Mars. The isotopic homogeneity of the lunar materials, in turn, suggests that isotopic fractionation resulted from a large-scale process rather than one that operated only locally. Given these lines of evidence, the most likely large-scale event is wholesale melting during the formation of the Moon. The zinc isotopic data therefore supports the theory that a giant impact gave rise to the Earth-Moon system.


But it gets better. Around the same time the above was published, Science published another paper online. A major challenge to the Giant Impact Theory has been that Earth and Moon have identical oxygen isotope compositions, even though earlier impact models indicated they should differ substantially. In a paper published October 17 in the journal Science online, a new model by the Southwest Research Institute (SwRI), motivated by accompanying work by others on the early dynamical history of the Moon, accounts for this similarity in composition while also yielding an appropriate mass for Earth and the Moon. New models developed by Dr. Robin M. Canup involve much larger impactors than were previously considered. In the new simulations, both the impactor and the target are of comparable mass, with each containing about 4 to 5 times the mass of Mars. The near symmetry of the collision causes the disk's composition to be extremely similar to that of the final planet's mantle over a relatively broad range of impact angles and speeds, consistent with the Earth-Moon compositional similarities.




The new impacts produce an Earth that is rotating 2 to 2.5 times faster than implied by the current angular momentum of the Earth-Moon system, which is contained in both Earth's rotation and the Moon's orbit. However, in an accompanying paper, Dr. Matija Ćuk and Dr. Sarah T. Stewart show that a resonant interaction between the early Moon and the Sun - known as the evection resonance - could have decreased the angular momentum of the Earth-Moon system by this amount soon after the Moon-forming impact. By allowing for a much higher initial angular momentum for the Earth-Moon system, Ćuk and Stewart's work allows for impacts that for the first time can directly produce an appropriately massive disk with a composition equal to that of the planet's mantle. In addition to the impacts identified in Canup's paper, Ćuk and Stewart show that impacts involving a much smaller, high-velocity impactor colliding with a target that is rotating very rapidly due to a prior impact can also produce a disk-planet system with similar compositions. The ultimate likelihood of each impact scenario will need to be assessed by improved models of terrestrial planet formation, as well as by a better understanding of the conditions required for the evection resonance mechanism.


The above papers also have implications for the origin of the Earth, because the origin of the Moon was a big part of the origin of the Earth. Without the stabilizing influence of the Moon, Earth would probably be a very different sort of place. Planetary scientists think Earth would spin more rapidly, days would be shorter, weather more violent, and climate more chaotic and extreme. In fact it might have been such a harsh world that it would have been unfit for the evolution of our favorite species: us. The next stage of this research is to investigate why Earth is not similarly depleted of zinc and similar volatile elements, a line of exploration which could lead to answers about how and why Earth is mostly covered by water. Where did all the water on Earth come from? This is a very important question, because if we are looking for life on other planets we have to recognize that similar conditions are probably required. So understanding how planets obtain such conditions is critical for understanding how life ultimately occurs on a planet.



Credits: Nature, Washington University in St. Louis, Southwest Research Institute

Hrvoje Crvelin

Eye in the sky VI

Posted by Hrvoje Crvelin Oct 23, 2012

Usually Eye in the sky shows images or videos made in space, most of the time from the ISS, but it would be foolish to miss all those photos of the distant universe made by our equipment out there (or even here on Earth). So, I will start with the ISS, but will continue with telescope photos. Now check this out:




This looks like Photoshop, but it's not. So let's try to figure out what it is. I already hinted the first photo would be from the ISS, so the big blue behind is our Earth. Those gold panels are solar panels. But those 3 cubes floating around are a bit of a puzzle. Man-made space trash? Borg? Perhaps Photoshop after all? Nope, those are satellites too! They are called CubeSats, they are about 10 cm on a side and have a mass up to a little over a kilo. Even though they’re teeny, they can be packed with a lot of equipment. Typical mission payloads are pretty diverse, from testing hardware for communications and satellite attitude control, to taking images (and other observations) of Earth, monitoring the satellite’s radiation environment, and even detecting dust in space. Because they’re small and relatively cheap (under $100000 including launch), space missions using CubeSats can be done by smaller institutions, including schools. The three above are amateur radio satellites: they transmit a signal amateur operators on the ground can pick up. You can find more pictures and technical info at the UK Amateur Radio Satellite webpage.


Observations using the Atacama Large Millimeter/submillimeter Array have revealed an unexpected spiral structure in the material around the old star R Sculptoris. This feature has never been seen before and is probably caused by a hidden companion star orbiting the star. This slice through the new ALMA data reveals the shell around the star, which shows up as the outer circular ring, as well as a very clear spiral structure in the inner material.




Phil Plait did great explanation for this photo. In his own words, ALMA looks at light far too low energy for our eyes to see; it’s actually out past infrared in the spectrum. Cold dust and gas emits light at this wavelength, including carbon monoxide. That molecule is created copiously in red giants and shines brightly in the submillimeter, making it easy to see with ‘scopes like ALMA. That’s nice, because CO can be used as a tracer for other, harder to detect molecules like hydrogen. Looking at CO really tells you a lot about what’s going on in the gas and dust. When a star like the Sun (either a bit less massive, or up to about 8 times as massive) ages, the core heats up, which causes the outer part of the star to expand (like a hot air balloon), turning it into a red giant. The details are complicated - read this post on a similar star where I explain it in more detail (and you want to because the details are awesome) - but the bottom line is that helium builds up in a thin shell outside the star’s core, where it fuses into carbon. The fusion rate is insanely sensitive to temperature, and periodic imbalances in temperature cause vast and very sudden increases in the fusion rate – and by sudden I mean over a timescale of just a handful of years, the blink of an eye to a star. Called a thermal pulse, this huge fireball of energy is dumped into the star’s interior, blows upward like a tsunami, and then blasts material clear off the star’s surface. The result is an epic paroxysm which blows out a massive wave of material, expanding in a sphere around the star. After a few years, you get an eerie detached shell of expanding material, like a smoke ring trillions of kilometers across. OK, so that’s the thin shell thing on the outside. So what’s the deal with R Sculptoris that makes that freaky inner spiral pattern? R Sculptoris has a companion. It’s probably a smaller star, like a red dwarf or even something more like the Sun. 
It orbits the red giant once every 350 years - or, to be more accurate, the two stars orbit each other. As material from the red giant expands outward, the combined spin of the two stars' dance forms the spiral pattern. It's like a rotating garden sprinkler. Each drop of water from the sprinkler goes straight outward, but each at a slightly different angle - like one drop heading due north, the next slightly east of north, the next slightly east of that, and so on around the compass points. To our eye, this looks like an expanding spiral, even though it's made of individual drops moving radially out from the center. It's an illusion of sorts. The ESO folks made a good video about it.

R Sculptoris has both a shell and a spiral because the thermal pulse had a very sudden, sharp beginning, but continued to drive material off the star's surface for many years after that first wave. That initial blast expanded rapidly - so rapidly compared to the orbital period of the second star that it was essentially instantaneous. In other words, there wasn't time to imprint the spiral pattern on it; it expanded outward in a spherical wave. It eventually collided with material previously ejected by the star, compressing it, and formed the thin shell. But back at the star the pulse continued, blowing off more material over time. That was slow enough that the orbital motion of the two stars affected the expansion, creating the spiral pattern.

Using the ALMA observations combined with models of the star's behavior, the scientists have found that this last pulse occurred 1800 years ago and lasted for 200 years. They were also able to estimate the amount of material blown away from the star: about 0.3% of the mass of the Sun. It may not sound like much, but in other words the mass ejected by this star in this two-century-long event was a thousand times the mass of the Earth. Oh, and all this material was blasted outward at 50000 kilometers per hour. Impressed now?
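The sprinkler analogy is easy to play with in code. Below is a minimal sketch of my own, with made-up units (the real system has a 350-year orbital period): each parcel of gas flies straight out from the centre, yet a snapshot of all the parcels traces a spiral. The same script sanity-checks the quoted mass figure - 0.3% of a solar mass really is about a thousand Earth masses.

```python
import math

def sprinkler_positions(n_parcels, speed, emit_interval, spin_rate, t_now):
    """Positions of parcels emitted radially from a rotating source.

    Each parcel moves in a straight line from the centre; the spiral
    appearance comes only from the rotation of the emission angle.
    """
    points = []
    for k in range(n_parcels):
        t_emit = k * emit_interval
        age = t_now - t_emit
        if age <= 0:
            continue
        angle = spin_rate * t_emit   # direction fixed at launch time
        r = speed * age              # purely radial motion afterwards
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

pts = sprinkler_positions(n_parcels=50, speed=1.0, emit_interval=1.0,
                          spin_rate=0.2, t_now=50.0)
# The oldest parcel is farthest out, along its original launch direction;
# together the parcels trace a spiral although each moved straight out.
print(pts[0])  # (50.0, 0.0)

# Sanity check on the quoted mass: 0.3% of the Sun is ~1000 Earths.
M_SUN, M_EARTH = 1.989e30, 5.972e24  # kg, standard values
ratio = 0.003 * M_SUN / M_EARTH
print(round(ratio))  # roughly 1000
```

Plotting `pts` would show the apparent spiral, even though no parcel ever moves sideways - exactly the illusion described above.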


Finally, impressive work was done by Ethan Siegel, who summarized the little story of our eye in the sky called Hubble. One of the bravest things we ever did with Hubble was to find a patch of sky with absolutely nothing in it - no bright stars, no nebulae, and no known galaxies - and observe it. Not just for a few minutes, or an hour, or even for a day. But for 11 days (between 2003 and 2004). As you know, in the dark you take longer exposures when taking pictures, and suddenly you get to see things. It was a risk, but a worthwhile one. The result was something called the Hubble Ultra Deep Field (HUDF).





The image we got contained an estimated 10000 galaxies. By extrapolating these results over the entire sky (which is some 10 million times larger), we were able to figure out that there are at least 100 billion galaxies in the entire Universe. In August and September 2009, the HUDF was expanded using the infrared channel of the recently attached Wide Field Camera 3 (WFC3). When combined with existing HUDF data, astronomers were able to identify a new list of potentially very distant galaxies.


Now, ever since those days people have wondered: what if we observed the same patch even longer than 11 days? That's exactly what NASA did, this time looking for a total of 23 days over the last decade - more than twice as long as the Ultra Deep Field - at an even smaller region of space. On September 25, 2012, NASA released a further refined version of the Ultra Deep Field, dubbed the eXtreme Deep Field (XDF). The XDF reveals galaxies that span back 13.2 billion years in time, including a galaxy theorized to have formed only 450 million years after the Big Bang.




The picture above may look familiar to you, even though you've probably never seen it before. The eXtreme Deep Field is actually a part of the Ultra Deep Field, which you can see for yourself if you rescale both images and rotate them by 4.7 degrees relative to one another - and that's exactly what Ethan Siegel did. The XDF has far more galaxies in it than the HUDF does in a comparable region of space. Take a look for yourself.




The HUDF is very impressive, especially considering that this is just a blank patch of featureless sky. But there are maybe 75% more galaxies-per-patch-of-sky in the XDF. Applying the XDF results to the entire sky, we find that there are more like 176 billion galaxies in the entire Universe - a huge increase over the previous estimate from the HUDF. How do we arrive at such a number? For starters, the area of the XDF is just a tiny, tiny fraction of the area of the full Moon.




If you assume that the XDF is a typical region of outer space, you can calculate how many XDFs it would take to fill the entire night sky; it's about 32 million. Multiply by the number of galaxies you find in the XDF - which is around 5500 - and that's how we arrive at 176 billion galaxies, at least, in the Universe. But there's more to the story than that. We're looking at a region of space that has very few nearby galaxies - galaxies whose light takes less than a few billion years to reach us. We selected a deliberately low-density portion of the nearby Universe, remember? The XDF has found many more galaxies whose light has traveled between 5 and 9 billion years to reach us - relatively dim galaxies that the HUDF simply couldn't pick up. But where it really shines is in the early Universe, at finding galaxies whose light has been on its way for more than 9 billion years; that is where the majority of the new galaxies were found. But even the XDF is not optimized for finding these galaxies; we'd need an infrared space telescope for that, which is what James Webb is going to be. When that comes around, one should not be surprised to find that there are maybe even close to a trillion galaxies in the observable Universe; we just don't have the tools to find them all yet. The new Hubble eXtreme Deep Field is the deepest view into the Universe. Ever.
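The arithmetic behind both estimates is plain multiplication, using only the figures quoted above (patches-per-sky and galaxies-per-patch from the article):

```python
# Back-of-the-envelope galaxy counts, using the article's figures.
hudf_galaxies = 10_000
hudf_patches = 10_000_000        # the whole sky is ~10 million HUDFs
print(hudf_galaxies * hudf_patches)   # 100 billion

xdf_galaxies = 5_500
xdf_patches = 32_000_000         # the whole sky is ~32 million XDFs
print(xdf_galaxies * xdf_patches)     # 176 billion
```

Both are lower bounds, of course: they assume every patch of sky looks like the one patch we stared at.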



Credits: Phil Plait, UK Amateur Radio society, ALMA, NASA, ESA, Ethan Siegel

Hrvoje Crvelin

Are you talking to me?

Posted by Hrvoje Crvelin Oct 22, 2012

It all started in 1984 when Sam Ridgway of the National Marine Mammal Foundation and others began to notice some unusual sounds in the vicinity of the whale and dolphin enclosure. As they describe it, it sounded as though two people were conversing in the distance, just out of range of their understanding. Those unusually familiar sounds were traced back to one white whale in particular only some time later when a diver surfaced from the whale enclosure to ask his colleagues an odd question: "Who told me to get out?"



They deduced that those utterances came from a most surprising source: a white whale by the name of NOC. That whale had lived among dolphins and other white whales and had often been in the presence of humans. In fact, there had been other anecdotal reports of whales sounding like humans before, but in this case Ridgway's team wanted to capture some real evidence.


They recorded the whale's sounds, revealing a rhythm similar to human speech and fundamental frequencies several octaves lower than typical whale sounds - much closer to those of the human voice. The whale's voice prints were similar to a human voice and unlike the whale's usual sounds. The sounds they heard were clearly an example of vocal learning by the white whale.


That's all the more remarkable because whales make sounds via their nasal tract, not in the larynx as humans do. To make those human-like sounds, NOC had to vary the pressure in his nasal tract while making other muscular adjustments and inflating the vestibular sac in his blowhole, the researchers found. In other words, it wasn't easy. Sadly, after 30 years at the National Marine Mammal Foundation, NOC passed away five years ago. But the sound of his voice lives on and can be heard here or here. Ed Yong wrote a nice post about this too.

Hrvoje Crvelin

Shame on you Italy!

Posted by Hrvoje Crvelin Oct 22, 2012

Amazing as it may sound, in the early 21st century Italy has introduced - a witch hunt! The Apennines, the belt of mountains that runs down through the center of Italy, are riddled with faults, and the "Eagle" city of L'Aquila has been hammered time and time again by earthquakes. Sadly, the issue is not "if" but "when" the next tremor will occur in L'Aquila, so if you are smart you should not linger in those parts. But it is simply not possible to be precise about the timing of future events. Science does not possess that power. The best it can do is talk in terms of risk and probabilities - the likelihood that an event of a certain magnitude might occur at some point in the future. And that's it. Still, Italy decided to prosecute some of its leading geophysicists for failing to predict exactly what would happen in L'Aquila on 6 April 2009. The authorities who pursued the seven defendants stressed that the case was never about the power of prediction - it was about what was interpreted to be an inadequate characterization of the risks, of being misleadingly reassuring about the dangers that faced the city. Really? Let's check this out.




Prosecutors said the defendants gave a falsely reassuring statement before the quake, while the defense maintained there was no way to predict major quakes. The 6.3-magnitude quake devastated the city and killed 309 people. Many smaller tremors had rattled the area in the months before the quake that destroyed much of the historic centre. The seven - all members of the National Commission for the Forecast and Prevention of Major Risks - were accused of having provided "inexact, incomplete and contradictory" information about the danger of the tremors felt ahead of the 6 April 2009 quake. It took Judge Marco Billi slightly more than four hours to reach the verdict in the trial, which had begun in September 2011. The six Italian scientists and a government official were sentenced to six years in jail in L'Aquila for multiple manslaughter. The word "manslaughter" means the judge believes that the seismologists, and not Mother Nature, actually killed the 309 victims. The judge also ordered the defendants to pay court costs and damages - more than nine million euros ($11.7 million) - to survivors and inhabitants. Among those convicted were some of Italy's most prominent and internationally respected seismologists and geological experts. Did they just make it illegal for scientists to be wrong?


The issue here is the miscommunication of science, and we should not be putting responsible scientists who gave measured, scientifically accurate information in prison. The best estimate at the time was that the low-level seismicity was not likely to herald a bigger quake, but there are no certainties in this game. Earthquakes are inherently unpredictable and there is nothing you can do. Period. Some scientists have warned that the case might set a damaging precedent, deterring experts from sharing their knowledge with the public for fear of being targeted in lawsuits. If the scientific community is to be penalized for making predictions that turn out to be incorrect, or for not accurately predicting an event that subsequently occurs, then scientific endeavor will be restricted to certainties only, and the benefits associated with findings from medicine to physics will be stalled. Imagine the following: if a doctor tells someone who is terminally ill that he may be able to save them and they die on the table, will the doctor be charged with manslaughter even though the patient was going to die anyway? Or what if the outlook is optimistic, but the patient dies after all? What if someone gets hit by lightning when there was no rain in the forecast? So what next? Weather forecasters jailed for assault because they failed to predict hail? Bookmakers jailed for fraud because the winner wasn't the favorite? As most insurance companies class natural disasters as "Acts of God", can we also see the Pope jailed? Perhaps it is no coincidence that the trial of Galileo 400 years ago, for upsetting the natural order, was also in Italy. "This is a historic sentence, above all for the victims," said lawyer Wania della Vigna, who represents 11 plaintiffs, including the family of an Israeli student who died when a student residence collapsed on top of him. No, dear Wania, this is a historic failure.
"It also marks a step forward for the justice system and I hope it will lead to change, not only in Italy but across the world," she said. Dream on. Media headlines are full of stories of affected families who lost their loved ones to this natural event that couldn't be predicted - not back in 2009, and not now. They are happy with what the court did and they "feel more safe now". Really? Crackpots!


Excluding Greece, Italy has some of the highest levels of fraud in society in Europe. It is systemic, from the highest levels to the lowest. This judgement simply entrenches the idea that you can get away with murder if you can blame the people least able to effect any change. Perhaps one should not expect much from a system which made it possible for characters like Berlusconi to be at the top. If the scientists had predicted a bad quake, would they have been sued for wasting people's time on unnecessary evacuations if none had happened? There is basically no way of predicting powerful earthquakes at all. What these people did was totally fine: they shared their prediction about (i.e. estimated probability of) the earthquake based on their best knowledge of seismology. The large earthquake can't even be shown to have been "caused" or "predicted" by the smaller tremors that preceded it. It could have been independent of them, too. As far as demonstrable evidence goes, these people are being convicted for a random, unexpected and unpredictable natural catastrophe. You may wonder why ask for scientists' opinion on whether one is coming in the first place? If a scientist says "I have no reason to see it being any more likely than any other day", is the interpretation "no risk"? That can't be true, since today we have no way of predicting earthquakes - our knowledge and science do not allow it (yet). So what is the deal here?


Lawyers have said that they will appeal against the sentence. As convictions are not definitive until after at least one level of appeal in Italy, it is unlikely any of the defendants will immediately face prison. And hopefully they won't.

Astrophotographer Christoph Malin made the following video - a sort of timelapse from the ISS.





A popular party trick is to fill a glass bottle with water and hit the top of the bottle with an open hand, causing the bottom of the bottle to break open. The bottles break only when filled with still water - not when filled with the fizzy stuff, and not when empty. The question is: why? The following white paper explains why, and a link to the low-resolution video is here. To get the high-resolution video, click here.



If Northern lights are your thing, do not miss video below.





Finally, Paul Nicklen describes his most amazing experience as a National Geographic photographer - coming face-to-face with one of Antarctica's most vicious predators - some may see this as proof that we are recognized as predators too.







Credits: Phil Plait, arXiv, National Geographic

Hrvoje Crvelin

I, robot V

Posted by Hrvoje Crvelin Oct 20, 2012

It's a bold leap from the pre-programmed factory robots and remote-controlled drones we are most familiar with today. Chengyu Cao, an assistant professor of mechanical engineering, and his research team are creating a new generation of smart machines - devices that are fully autonomous and capable of navigating their way through our complex world unassisted. These machines will not only be able to travel untethered from one point to another in space and perform tasks; they will be able to "think" on their own, using artificial intelligence to adjust to unforeseen obstacles and situations in their environment - a tree, a building, a sudden gust of wind or a change in tidal current - without human intervention. It is the stuff of which science fiction movies are made.




Cao's ultimate goal is to have a vehicle like a helicopter that will work in very complex environments such as urban areas with multiple large buildings, or a submersible that can traverse a complex ocean floor. That is currently a challenge to an autonomous system. UConn's Igor Parsadanov, a trained remote control pilot, is assisting the research team with design and trial applications. He sees great advantages in creating autonomous vehicles that can perform dangerous tasks like deep sea rescue, wilderness firefighting, and underwater exploration without the need for human life support systems or the risk of casualties. These vehicles use global positioning systems, cameras, advanced sensors, and light detecting and ranging (LIDAR) technology to navigate through their environment. In this way, the system allows the vehicle to adjust its course by itself.





While robots have long been invaluable when it comes to doing all sorts of heavy lifting, they lack a gentle touch. Hefting around auto parts is easy enough, but transporting eggs or glassware poses a significant challenge. Scientists have now, however, made a flexible plastic robot tentacle that can, among other dexterities, pick flowers without crushing them, the latest of several robot appendages made of softer materials and able to accomplish delicate tasks. The researchers control the tentacle by pumping air through three separate channels, giving it a wide range of motion and letting it reconfigure to grasp a variety of objects without being limited by the shape of its grip. The parts for the bot - mostly elastomer tubing - cost less than $10, far cheaper than the complex components of many far less flexible robotic hands. To get a glimpse of the hand in action, watch it pick up a horseshoe in the clip below:




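A common way to reason about a three-channel pneumatic tentacle like the one above is as a continuum actuator whose bend direction is the vector sum of the channel pressures, with the channels spaced 120 degrees apart. This is my own illustrative toy model, not the researchers' control code - all names and numbers below are made up:

```python
import math

def bend_vector(p1, p2, p3):
    """Net bend direction for three pneumatic channels 120 degrees apart.

    Toy model: each channel pulls the bend toward its own azimuth with a
    strength proportional to its pressure; the vector sum is the result.
    """
    azimuths = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)
    pressures = (p1, p2, p3)
    x = sum(p * math.cos(a) for p, a in zip(pressures, azimuths))
    y = sum(p * math.sin(a) for p, a in zip(pressures, azimuths))
    return x, y

# Equal pressures cancel: the tentacle stays straight.
print(bend_vector(1.0, 1.0, 1.0))
# Only channel 1 pressurized: the tip bends along channel 1's azimuth.
print(bend_vector(1.0, 0.0, 0.0))  # (1.0, 0.0)
```

The nice property of three channels is exactly what the article hints at: by mixing just three pressures you can steer the bend to any direction around the compass, which is what lets the tentacle wrap around arbitrarily shaped objects.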

Rethink Robotics introduced Baxter to the manufacturing sector with the following note: Baxter can ignite a revolution in breaking down the cost and safety barriers holding back automation in American manufacturing. The Boston-based company says the $22000 (list price) robot is a fraction of the cost of traditional industrial robots, "with zero integration required". Baxter has been expressly designed to work on assembly lines and take over mindless, menial tasks. It has two arms, each with seven degrees of freedom, and a reach similar to that of a human. It can load, unload, sort, pack, unpack, snap-fit, grind and polish. What is not at all mindless about Baxter is its design: for an industrial robot, Baxter enjoys an incredible lightness of non-being. Baxter has thick, round arms, but they are not heavy. The arm moves in a fluid motion. When you hold the cuff, the robot goes into gravity-compensation "zero-force mode", as if the arm is floating.



The company offers Baxter with two kinds of grippers to choose from. Electric parallel grippers enable Baxter to pick up objects of varying sizes. Vacuum cup grippers are meant for hard-to-grasp objects, such as smooth, nonporous or relatively flat items. While Baxter is not the ideal choice for tasks that require an extremely strong or fast industrial robot, it is smart enough to adapt to changes. The robot uses vision to locate and grasp objects, and can be programmed to perform a new task just by holding its arms and moving them to the desired position. The robot can continue to work even after missing a pick-up or dropping a part. It can visually detect parts and adapt to variations in part placement and conveyor speed. If Baxter drops an object, it knows to get another before trying to finish the task. Another differentiator is that, while Baxter is smart, it does not come with a steep learning curve. One of the argued barriers to industrial adoption of robots has been the training required to operate them: the thought of employees having to train in programming and in interacting with new robotic equipment, eating up time and money. Rethink's team claims Baxter units can be retasked in a matter of minutes. No custom application code is required to get it started, so no costly software or manufacturing engineers are needed to program it. Baxter is taught via a graphical user interface and through direct manipulation of its robot arms. Nontechnical, hourly workers can train and retrain Baxter right on the line.


As for safety, the designers gave Baxter sensors to detect people within contact distance and trigger the robot to slow to safe operation speeds. If Baxter's power supply were cut, its arms would relax slowly. Employees would have time to move out of the way. Here is the video:





An interesting robot was on display at Asia's biggest tech fair - the Combined Exhibition of Advanced Technologies (CEATEC) at Makuhari, near Tokyo. Keio University has taught this robot to successfully copy the brush strokes of a master of calligraphy. A perfect copy of a work by long-dead artists such as Monet or Picasso is not possible, as the robot needs a living model to imitate, applying the same pressure and making the same gestures, but the technology could be used in complex surgery or mechanics. In Japan, where the population is quickly ageing, there are fears that valuable skills may not be handed down to younger generations - so robots come into the game.




Robots are increasingly being used in place of humans to explore hazardous and difficult-to-access environments, but they aren't yet able to interact with their environments as well as humans. If today's most sophisticated robot was trapped in a burning room by a jammed door, it would probably not know how to locate and use objects in the room to climb over any debris, pry open the door, and escape the building. A research team led by Mike Stilman (Georgia Institute of Technology) hopes to change that by giving robots the ability to use objects in their environments to accomplish high-level tasks. Their goal is to develop a robot that behaves like MacGyver, the television character from the 1980s who solved complex problems and escaped dangerous situations by using everyday objects and materials he found at hand. This project is challenging because there is a critical difference between moving objects out of the way and using objects to make a way. Researchers in the robot motion planning field have traditionally used computerized vision systems to locate objects in a cluttered environment to plan collision-free paths, but these systems have not provided any information about the objects' functions. To create a robot capable of using objects in its environment to accomplish a task, Stilman plans to develop an algorithm that will allow a robot to identify an arbitrary object in a room, determine the object's potential function, and turn that object into a simple machine that can be used to complete an action. Actions could include using a chair to reach something high, bracing a ladder against a bookshelf, stacking boxes to climb over something, and building levers or bridges from random debris. By providing the robot with basic knowledge of rigid body mechanics and simple machines, the robot should be able to autonomously determine the mechanical force properties of an object and construct motion plans for using the object to perform high-level tasks. 
For example, exiting a burning room with a jammed door would require a robot to travel around any fire, use an object in the room to apply sufficient force to open the stuck door, and locate an object in the room that will support its weight while it moves to get out of the room. Such skills could be extremely valuable in the future as robots work side-by-side with military personnel to accomplish challenging missions.




The US Navy prides itself on recruiting, training and deploying the country's most resourceful and intelligent men and women. Now that robotic systems are becoming more pervasive as teammates for warfighters in military operations, humans must ensure that the machines, too, are both intelligent and resourceful. A hybrid reasoning system that embeds the physics-based algorithms within a cognitive architecture will create a more general, efficient and structured control system for robots, accruing more benefits than either approach alone. After the researchers develop and optimize the hybrid reasoning system using computer simulations, they plan to test the software on Golem Krang, a humanoid robot designed and built in Stilman's laboratory to study whole-body robotic planning and control.


Marvel Comics' fictional superhero, Iron Man, uses a powered armor suit that gives him superhuman strength. While NASA's X1 robotic exoskeleton can't do what you see in the movies, the latest robotic space technology spinoff, derived from NASA's Robonaut 2 project, may someday help astronauts stay healthier in space, with the added benefit of helping paraplegics walk here on Earth. NASA and the Florida Institute for Human and Machine Cognition (IHMC), with the help of engineers from Oceaneering Space Systems of Houston, have jointly developed a robotic exoskeleton called X1 - a robot that a human could wear over his or her body either to assist or inhibit movement in the leg joints. In inhibit mode, the robotic device would be used as an in-space exercise machine to supply resistance against leg movement. The same technology could be used in reverse on the ground, potentially helping some individuals walk for the first time.



Robotics is playing a key role aboard the ISS and will continue to be critical as we move toward human exploration of deep space. What's extraordinary about space technology and work with projects like Robonaut are the unexpected possibilities space tech spinoffs may have right here on Earth. It's exciting to see a NASA-developed technology that might one day help people with serious ambulatory needs begin to walk again, or even walk for the first time. Worn over the legs with a harness that reaches up the back and around the shoulders, X1 has 10 degrees of freedom, or joints - four motorized joints at the hips and the knees, and six passive joints that allow for sidestepping, turning and pointing, and flexing a foot. There also are multiple adjustment points, allowing the X1 to be used in many different ways. X1 currently is in a research and development phase, where the primary focus is design, evaluation and improvement of the technology. NASA is examining the potential for the X1 as an exercise device to improve crew health both aboard the space station and during future long-duration missions to an asteroid or Mars. Without taking up valuable space or weight during missions, X1 could replicate common crew exercises, which are vital to keeping astronauts healthy in microgravity. In addition, the device has the ability to measure, record and stream back, in real-time, data to flight controllers on Earth, giving doctors better feedback on the impact of the crew's exercise regimen. As the technology matures, X1 also could provide a robotic power boost to astronauts as they work on the surface of distant planetary bodies. Coupled with a spacesuit, X1 could provide additional force when needed during surface exploration, improving the ability to walk in a reduced gravity environment, providing even more bang for its small bulk.


Now, imagine a computer chip that can assemble itself. According to Eric M. Furst, professor of chemical and biomolecular engineering at the University of Delaware, engineers and scientists are closer to making this and other scalable forms of nanotechnology a reality as a result of new milestones in using nanoparticles as building blocks in functional materials. The research team studied paramagnetic colloids while periodically applying an external magnetic field at different intervals. With just the right frequency and field strength, the team was able to watch the particles transition from a random, solid-like material into highly organized crystalline structures, or lattices. This development is exciting because it provides insight into how researchers can build organized structures - crystals of particles - using directing fields, and it may prompt new discoveries into how we can get materials to organize themselves. Because gravity plays a role in how the particles assemble or disassemble, the research team studied the suspensions aboard the International Space Station (ISS) through collaborative efforts with NASA scientists and astronauts. One interesting observation was how the structure formed by the particles slowly coarsened, then rapidly grew and separated - similar to the way oil and water separate when combined - before realigning into a crystalline structure. Now that we have a particle that responds to an electric field, we can use these principles to guide that assembly into structures with useful properties, such as in photonics. The work could prove important in manufacturing, where the ability to pre-program and direct the self-assembly of functional materials is highly desired.


In the 2012 Bot Prize competition, the true winner may be the one who makes the most mistakes. In this match, video game avatars directed by artificial intelligence compete to see which comes across as most human in a fight against real human players. This year, for the first time, human participants mistook the two bots for humans more than half the time, a feat researchers attribute to the fact that these bots were programmed to be less-than-perfect players.




Tokyo Institute of Technology researchers used fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people. Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can 'think' and 'see' in the same way as humans. In these tests, participants were asked to look at pictures of animals and hand tools together with an auditory or written (orthographic) description. They were asked to silently 'label' each pictured object with certain properties whilst undergoing an fMRI brain scan. The resulting scans were analysed using algorithms that identified patterns relating to the two separate semantic groups (animal or tool). After 'training' the algorithms in this way using some of the auditory session data, the computer correctly identified the remaining scans 80-90% of the time. Similar results were obtained with the orthographic session data. A cross-modal approach - training the computer using auditory data but testing it with orthographic data - reduced performance to 65-75%. Continued research in this area could lead to systems that allow people to speak through a computer simply by thinking about what they want to say.
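The decoding idea - train a pattern classifier on scans from one modality, then test it on another - can be caricatured with a nearest-centroid classifier on synthetic vectors. This is purely an illustrative sketch with fabricated data, not the researchers' actual pipeline; real fMRI decoding uses far more sophisticated machine learning.

```python
import random

random.seed(0)

def synth_scan(prototype, noise):
    """A fake 'scan': the category's activation prototype plus noise."""
    return [v + random.gauss(0, noise) for v in prototype]

# Two semantic categories with distinct (made-up) activation prototypes.
protos = {"animal": [1.0] * 10 + [0.0] * 10,
          "tool":   [0.0] * 10 + [1.0] * 10}

# "Train" on one modality: centroids of simulated auditory-session scans.
train = {c: [synth_scan(p, 0.3) for _ in range(30)]
         for c, p in protos.items()}
centroids = {c: [sum(col) / len(col) for col in zip(*scans)]
             for c, scans in train.items()}

def classify(scan):
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(scan, centroids[c]))
    return min(centroids, key=dist)

# "Test" cross-modally: noisier simulated orthographic-session scans.
test = [(c, synth_scan(p, 0.6)) for c, p in protos.items()
        for _ in range(30)]
accuracy = sum(classify(s) == c for c, s in test) / len(test)
print(f"cross-modal accuracy: {accuracy:.0%}")
```

The toy version succeeds because the two synthetic prototypes are well separated; the real experiment's 65-75% cross-modal accuracy reflects how much noisier and more overlapping genuine brain activation patterns are.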


Also in Japan, researchers showed off the latest incarnation of HAL, the Hybrid Assistive Limb, a full body suit that could eventually be used by workers dismantling the crippled Fukushima nuclear plant. HAL - coincidentally the name of the evil supercomputer in Stanley Kubrick's "2001: A Space Odyssey" - has a network of sensors that monitor the electric signals coming from the wearer's brain. It uses these to activate the robot's limbs in concert with the worker's, taking weight off his or her muscles. Yoshiyuki Sankai, professor of engineering at the University of Tsukuba, said this means the 60 kg tungsten vest workers at Fukushima have to wear is almost unnoticeable. He said the outer layer of the robot suit also blocks radiation, while fans inside it circulate air to keep the wearer cool, and a computer can monitor their heart-rate and breathing for signs of fatigue. The robot is manufactured by Cyberdyne, a company unrelated to the fictional firm responsible for the Terminator in the 1984 film of the same name.




HAL was on display as part of Japan Robot Week, which also featured small robots that run on caterpillar tracks designed to move across difficult terrain and gather information in places where it is not safe for humans.



Credits: Discover blog, University of Delaware, Rethink Robotics,  Ashley P. Taylor, UConn, Tokyo Institute of Technology, NASA, Georgia Institute of Technology, AFP

Hrvoje Crvelin

Topology and physics

Posted by Hrvoje Crvelin Oct 17, 2012

Topology is the study of shape, in particular the properties that are preserved when a shape is squeezed, stretched and battered but not torn or ripped. In the past, topology was little more than an amusing diversion for mathematicians doodling about the difference between donuts and dumplings. But that is beginning to change. In recent years, physicists have begun to use topology to explain some of the most important puzzles at the frontiers of physics. Back in 1970, a young physicist working in the Soviet Union made a counterintuitive prediction. Vitaly Efimov, now at the University of Washington in the US, showed that quantum objects that cannot form into pairs could nevertheless form into triplets. In 2006, a group in Austria found the first example of such a so-called Efimov state in a cold gas of cesium atoms.




That's curious - surely the bonds that allow three particles to bond together should also allow two to become linked? Actually, no, and topology explains why. The reason is that the mathematical connection between these quantum particles takes the form of a Borromean ring (see picture above): three circles intertwined in such a way that cutting one releases the other two. Only three rings can be connected in this way, not two. Note that this construction works in 3D, but not in 2D.


But this kind of topological curiosity is merely the tip of the iceberg if Xiao-Gang Wen at the Perimeter Institute for Theoretical Physics is right in what he writes. He combined topology, symmetry and quantum mechanics in a new theory that predicts the existence of new states of matter, unifies various puzzling phenomena in solid state physics and allows the creation of artificial vacuums populated with artificial photons and electrons. Sounds a bit too much and too good. Wen begins by explaining the fundamental role of symmetry in the basic states of matter such as liquids and solids.


A symmetry is a property that remains invariant under a transformation of some kind. In a liquid, for example, atoms are randomly distributed and so the liquid looks the same if it is displaced in any direction by any distance. Physicists say it has a continuous translation symmetry. However, when a liquid freezes, the atoms become locked into a crystal lattice and a different symmetry applies. In this case, the lattice only appears the same if it is displaced along the crystal axis by a specific distance. So the material now has discrete translation symmetry and the original symmetry is broken. In other words, when the material undergoes a phase change, it also undergoes a change in symmetry, a process that physicists call symmetry breaking. But in addition to the four ordinary phases of matter - liquid, solid, gas and plasma - physicists have discovered many quantum phases of matter such as superconductivity, superfluidity and so on. These phases are also the result of symmetry breaking but symmetry alone cannot explain what's going on. It turns out that the mathematics of quantum mechanics has topological properties that, when combined with symmetry, explain how these phases form. This kind of work has led to the discovery of additional phases of matter such as topological conductors and insulators.


The important point here is that the properties of these systems are guaranteed not by the ordinary laws of physics but by the topological properties of quantum mechanics, just like the Borromean rings that explain the Efimov states described earlier. Xiao-Gang Wen's approach is to explore the properties of matter when the topological links between particles become much more general and complex. He generalises these links, thinking of them as strings that can connect many particles together. In fact, he considers the way many strings can form net-like structures that have their own emergent properties. Wen has published papers about string-nets before, so he is obviously expanding on that work now. So what kind of emergent properties do these string-nets have? It turns out that string-nets are not so different from ordinary matter. String-nets can support waves, which Wen says are formally equivalent to photons.


This makes string-nets a kind of "quantum ether" through which electromagnetic waves travel. That's a big and bold claim. Wen also says that various properties of string-nets are equivalent to fundamental particles such as electrons, and that it may be possible to derive the properties of other particles too. Of course, no theory is worth more than a bag of beans unless it makes testable predictions about the universe. Wen's ideas will take some digesting, and the implications he discusses need to be firmed up into specific experimental predictions. Physicists have known for many decades that symmetry plays a powerful role in the laws of physics. In fact, it's fair to say that symmetry has changed the way we think about the universe. It's just possible that adding topology to the mix could be equally revolutionary. Of course, some people have already explored these ideas before...



Credits: arXiv, Technology Review

Hrvoje Crvelin

Forget Jurassic Park!

Posted by Hrvoje Crvelin Oct 15, 2012

Observe the following picture:




What you see is a spider attacking a wasp. No big deal there. However, the attack was interrupted: moments after the wasp's capture, both were overtaken by a flow of tree resin and preserved in amber for the next 100 million years, while their species and their dinosaur contemporaries from the Early Cretaceous period went extinct.


Fast forward to 1993 and your nearby cinema. That was the year Jurassic Park hit the big screen. The movie's plot revolves around billionaire John Hammond (CEO of InGen), who has created Jurassic Park: a theme park populated with dinosaurs cloned from DNA extracted from insects preserved in prehistoric amber. Actually, it is not a movie about him, but rather about the park and how things go wrong. Now, how realistic is this? After all, we have insects and amber as seen above. Few researchers have given credence to claims that samples of dinosaur DNA have survived to the present day, but no one knew just how long it would take for genetic material to fall apart. Now, a study of fossils found in New Zealand is laying the matter to rest - and putting an end to hopes of cloning dinosaurs.




After cell death, enzymes start to break down the bonds between the nucleotides that form the backbone of DNA, and micro-organisms speed the decay. In the long run, however, reactions with water are thought to be responsible for most bond degradation. Groundwater is almost ubiquitous, so DNA in buried bone samples should, in theory, degrade at a set rate. Determining that rate has been difficult because it is rare to find large sets of DNA-containing fossils with which to make meaningful comparisons. To make matters worse, variable environmental conditions such as temperature, degree of microbial attack and oxygenation alter the speed of the decay process. But palaeogeneticists led by Morten Allentoft (University of Copenhagen) and Michael Bunce (Murdoch University) examined 158 DNA-containing leg bones belonging to three species of extinct giant birds called moa. The bones, which were between 600 and 8000 years old, had been recovered from three sites within 5 kilometers of each other, with nearly identical preservation conditions including a temperature of 13.1 ºC. By comparing the specimens' ages and degrees of DNA degradation, the researchers calculated that DNA has a half-life of 521 years. That means that after 521 years, half of the bonds between nucleotides in the backbone of a sample would have broken; after another 521 years half of the remaining bonds would have gone; and so on. The team predicts that even in a bone at an ideal preservation temperature of −5 ºC, effectively every bond would be destroyed after a maximum of 6.8 million years. The DNA would cease to be readable much earlier - perhaps after roughly 1.5 million years, when the remaining strands would be too short to give meaningful information. Cryo fans rejoice. This confirms the widely held suspicion that claims of DNA from dinosaurs and ancient insects trapped in amber are - incorrect. 
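Taking the 521-year per-bond half-life at face value (and ignoring the temperature and environmental complications the researchers stress), the decay is a plain exponential, and a few lines of Python show why 100-million-year-old amber DNA is hopeless:

```python
HALF_LIFE = 521.0  # years, the half-life reported in the moa-bone study

def intact_fraction(years):
    """Fraction of nucleotide bonds still intact after a given time."""
    return 0.5 ** (years / HALF_LIFE)

print(intact_fraction(521))    # one half-life  -> 0.5
print(intact_fraction(1042))   # two half-lives -> 0.25
print(intact_fraction(100e6))  # Jurassic Park's 100-million-year amber -> 0.0
```

At 100 million years the surviving fraction underflows to zero: there is simply nothing left to read.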


The calculations in the latest study were quite straightforward, but many questions remain. For example, can these findings be reproduced in very different environments such as permafrost and caves? Moreover, the researchers found that age differences accounted for only 38.6% of the variation in DNA degradation between moa-bone samples. Other factors that impact DNA preservation are clearly at work. Storage following excavation, soil chemistry and even the time of year when the animal died are all likely contributing factors that will need looking into. Jurassic Park IV is expected to hit the big screen sometime in summer 2014 - let's see what storyline will be used then.



Credits: Nature

If you follow this blog, you probably know enough about stars: how they come to life and how they die. Nevertheless, here is a 14-minute video which covers everything from the birth of a star until the star is ready to die. Quite nice and funny.





Did you ever wonder what the weight of a shadow is? No? Well, check the following video and explore the physics behind it.



"Imagine you have an orchestra together, but everyone is playing their own tune, until they begin to follow a conductor. In a normal solid, every atom has its own behavior until very close to absolute zero. Then quantum mechanics takes over and dictates everyone to play the same tune". That's physics professor Moses Chan's musical metaphor for his discovery that atoms in a solid can condense into what he likes to call "one giant atom," a new phase of matter called a supersolid. A supersolid is a spatially ordered material with superfluid properties. Superfluidity is a special quantum state of matter in which a substance flows with zero viscosity. Finally, after an eight-year debate over its existence, we are one step closer to concluding that this material hasn't been found after all. Normally the atoms of a solid form a regular lattice, giving them a rigid structure, but quantum theory predicts that this can change in some ultra-cold solids. Under these conditions, quantum effects should start to dominate, causing some of the atoms to pass through the lattice, flowing like a frictionless liquid. Detecting this strange state isn't easy, though.


In 2004, Chan and his colleague Eunseong Kim cooled helium that they had collected inside a kind of porous glass called Vycor. They placed the cold helium-saturated glass in a suspended chamber designed to alternate the direction of its spin, between clockwise and anticlockwise. As they cooled the set-up to close to absolute zero, the oscillations became faster, suggesting that more and more of the helium was no longer travelling with the moving glass, but instead standing still inside the chamber. This was a tell-tale sign of the lack of friction that is characteristic of a supersolid. The pair later repeated the experiment - but this time started with a chunk of solid helium, known as bulk helium, removing the need for the glass. Again, they found that as the set-up got really cold, the oscillations got faster. Soon after, the results were replicated by a number of groups, though the size of the effect varied inconsistently.
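The logic of the torsional-oscillator measurement is worth unpacking: the period of such an oscillator is T = 2π√(I/k), so if some of the helium decouples from the moving cell, the effective moment of inertia I drops and the oscillation speeds up. A toy calculation (all numbers here are invented for illustration, not taken from the actual experiment):

```python
import math

def period(moment_of_inertia, torsion_const):
    """Period of a torsional oscillator: T = 2*pi*sqrt(I/k)."""
    return 2 * math.pi * math.sqrt(moment_of_inertia / torsion_const)

# Illustrative numbers only:
I_cell = 1.0e-7    # kg*m^2, empty cell plus glass
I_helium = 5.0e-9  # kg*m^2, helium inside the cell
k = 1.0e-3         # N*m/rad, torsion rod stiffness

T_normal = period(I_cell + I_helium, k)

# If a fraction of the helium decouples (stops moving with the cell),
# the effective moment of inertia drops and the oscillation speeds up.
decoupled = 0.01
T_super = period(I_cell + (1 - decoupled) * I_helium, k)

print(T_normal > T_super)  # True: decoupling -> faster oscillation
```

The catch, as the rest of the story shows, is that a stiffer helium solid (quantum plasticity) changes the effective spring constant k and mimics exactly the same speed-up.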


Then cracks in the supersolidity claims started to emerge. In 2007, John Beamish suggested bulk helium may become much stiffer than expected at low temperatures and that this alone could account for the faster oscillations, without the need for supersolidity. This effect was later dubbed quantum plasticity. However, it did not seem like quantum plasticity could explain the original 2004 result as non-bulk helium inside porous glass would not stiffen in the same way. There was, however, a possibility that a gap between the aluminium chamber and the porous Vycor could have allowed a thin layer of bulk helium to form there, even in the original experiment. Now to rule this out, Chan and his current colleague Duk Kim have redesigned the original Vycor experiment. This time, they sealed the glass with a thin layer of epoxy resin and inserted the helium through a very thin tube. This meant only a tiny fraction of the helium could become a bulk solid - and so any speeding up due to quantum plasticity would be negligible. Chan and Duk Kim found that this set-up completely eliminated the changes in oscillation rate that they had originally observed. That suggests that all of the speeding up in the original experiment must have been due to bulk helium forming a quantum plastic, not supersolidity as originally claimed in 2004. To be absolutely sure, Chan says he still has to design an experiment that suppresses the stiffening effect in bulk helium, just to check if there are any residual signs of supersolidity, but he strongly expects there won't be. Ultimately, we now seem to understand what's going on.



Credits: New Scientist, Wikipedia

Hrvoje Crvelin

Shifting dates... #5

Posted by Hrvoje Crvelin Oct 13, 2012

The remarkably well-preserved fossil of an extinct arthropod shows that anatomically complex brains evolved earlier than previously thought and have changed little over the course of evolution. According to neurobiologist Nicholas Strausfeld, the fossil is the earliest known to show a brain. Embedded in mudstones deposited during the Cambrian period 520 million years ago in what today is the Yunnan Province in China, the approximately 7.6 cm long fossil, which belongs to the species Fuxianhuia protensa, represents an extinct lineage of arthropods combining an advanced brain anatomy with a primitive body plan. The fossil provides a "missing link" that sheds light on the evolutionary history of arthropods, the taxonomic group that comprises crustaceans, arachnids and insects. The researchers call their find "a transformative discovery" that could resolve a long-standing debate about how and when complex brains evolved. No one expected such an advanced brain would have evolved so early in the history of multicellular animals. According to Strausfeld, paleontologists and evolutionary biologists have yet to agree on exactly how arthropods evolved, especially on what the common ancestor looked like that gave rise to insects.



There has been a very long debate about the origin of insects. Until now, scientists have favored one of two scenarios: some believe that insects evolved from an ancestor that gave rise to the malacostracans, a group of crustaceans that includes crabs and shrimp, while others point to a lineage of less commonly known crustaceans called branchiopods, which include, for example, brine shrimp. Because the brain anatomy of branchiopods is much simpler than that of malacostracans, they have been regarded as the more likely ancestors of the arthropod lineage that would give rise to insects.


However, the discovery of a complex brain anatomy in an otherwise primitive organism such as Fuxianhuia makes this scenario unlikely. The shape of the fossilized brain matches that of a comparably sized modern malacostracan. Researchers argue the fossil supports the hypothesis that branchiopod brains evolved from a previously complex to a more simple architecture instead of the other way around. This hypothesis arose from neurocladistics, a field pioneered by Strausfeld that attempts to reconstruct the evolutionary relationships among organisms based on the anatomy of their nervous system. Conventional cladistics, on the other hand, usually looks to an organism's overall morphology or molecular data such as DNA sequences.


Strausfeld has catalogued about 140 character traits detailing the neural anatomies of almost 40 arthropod groups. There have been all sorts of indications that branchiopods shouldn't be the ancestors of insects. Many thought the proof of the pudding would be a fossil showing a malacostracan-like brain in a creature that lived long before the origin of the branchiopods; and bingo! - that is exactly what this is. This brain actually comprises three successive neuropils in the optic regions, which is a trait of malacostracans, not branchiopods. Neuropils are portions of the arthropod brain that serve particular functions, such as collecting and processing input from sensory organs. For example, scent receptors in the antennae are wired to the olfactory neuropils, while the eyes connect to neuropils in the optic lobes. When Strausfeld traced the fossilized outlines of Fuxianhuia's brain, he realized it had three optic neuropils on each side that once were probably connected by nerve fibers in a crosswise pattern, as occurs in insects and malacostracans. The brain was also composed of three fused segments, whereas in branchiopods only two segments are fused. In branchiopods, there are always only two visual neuropils and they are not linked by crossing fibers. In principle, Fuxianhuia's is a very modern brain in an ancient animal.


The fossil supports the idea that once a basic brain design had evolved, it changed little over time. Instead, peripheral components such as the eyes, the antennae and other appendages, sensory organs, etc., underwent great diversification and specialized in different tasks but all plugged into the same basic circuitry. It is remarkable how constant the ground pattern of the nervous system has remained for probably more than 550 million years. The basic organization of the computational circuitry that deals, say, with smelling, appears to be the same as the one that deals with vision, or mechanical sensation.



Credits: Nature, University of Arizona

Dwarf planet is the name used to classify some objects in the solar system. The world was introduced to dwarf planets on August 24, 2006, when petite Pluto was stripped of its planet status and reclassified as a dwarf planet. What differentiates a dwarf planet from a planet? For the most part, they are identical, but there's one key difference: A dwarf planet hasn't "cleared the neighborhood" around its orbit, which means it has not become gravitationally dominant and it shares its orbital space with other bodies of a similar size. At the same meeting the IAU also defined the term planet for the first time.




Some astronomers think that the term "dwarf planet" is too confusing and needs to be changed. And once again we have to go back to Pluto (not the dog). For almost 50 years Pluto was thought to be larger than Mercury, but with the discovery in 1978 of Pluto's moon Charon, it became possible to measure Pluto's mass accurately and determine that it is much smaller than the initial estimates. It was roughly one-twentieth the mass of Mercury, which made Pluto by far the smallest planet. Although it was still more than ten times as massive as the largest object in the asteroid belt, Ceres, its mass was one-fifth that of Earth's Moon. Furthermore, having some unusual characteristics such as large orbital eccentricity and a high orbital inclination, it became evident it was a completely different kind of body from any of the other planets. In the 1990s, astronomers began to find objects in the same region of space as Pluto (now known as the Kuiper belt), and some even farther away. Many of these shared some of the key orbital characteristics of Pluto, and Pluto started being seen as the largest member of a new class of objects, the plutinos. This led some astronomers to stop referring to Pluto as a planet. Several terms including minor planet, subplanet, and planetoid started to be used for the bodies now known as dwarf planets. By 2005, three other bodies comparable to Pluto in terms of size and orbit (Quaoar, Sedna, and Eris) had been reported in the scientific literature. It became clear that either they would also have to be classified as planets, or Pluto would have to be reclassified. Astronomers were also confident that more objects as large as Pluto would be discovered, and the number of planets would start growing quickly if Pluto were to remain a planet. To make things a bit more confusing, a dwarf planet is not a planet, but a dwarf star is a star. It has since been agreed that it is too late for a change, so the term dwarf planet has remained.





The five dwarf planets, in order from the Sun, are:

  • Ceres
  • Pluto
  • Haumea
  • Makemake
  • Eris


Two of these, Ceres and Pluto, are known through direct observation. The other three are thought to be massive enough to be in hydrostatic equilibrium even if they are dense (primarily rocky) and at the lower end of their estimated diameters. Eris is more massive than Pluto; Haumea and Makemake were accepted as dwarf planets based on their absolute magnitudes. When it comes to mass, the ratio below should give you an idea of what we are talking about.




The dwarf planets, unlike the terrestrial and gas giant planets, populate more than one region of the solar system. Ceres is in the asteroid belt, while the others are in the trans-Neptune region. Because we thought it was a planet, we tend to know quite a few things about Pluto.


In 2011, the highly sensitive Cosmic Origins Spectrograph aboard the Hubble Space Telescope discovered a strong ultraviolet-wavelength absorber on Pluto's surface, providing new evidence that points to the possibility of complex hydrocarbon and/or nitrile molecules lying on the surface. Pluto has become significantly redder, while its illuminated northern hemisphere is getting brighter. These changes are most likely consequences of surface ices sublimating on the sunlit pole and then refreezing on the other pole as the dwarf planet heads into the next phase of its 248-year-long seasonal cycle. The dramatic change in color apparently took place in a two-year period, from 2000 to 2002. The Hubble images will remain our sharpest view of Pluto until NASA's New Horizons probe is within six months of its Pluto flyby. Hubble resolves surface variations a few hundred miles across, which are too coarse for understanding surface geology. But in terms of surface color and brightness Hubble reveals a complex-looking and variegated world with white, dark-orange and charcoal-black terrain. The overall color is believed to be a result of ultraviolet radiation from the distant sun breaking up methane that is present on Pluto's surface, leaving behind a dark and red carbon-rich residue. Below is the most detailed view to date of the entire surface of the dwarf planet Pluto, as constructed from multiple NASA Hubble Space Telescope photographs taken from 2002 to 2003. The center disk (180 degrees) has a mysterious bright spot that is unusually rich in carbon monoxide frost.




When Hubble pictures taken in 1994 are compared with a new set of images taken in 2002 to 2003, astronomers see evidence that the northern polar region has gotten brighter, while the southern hemisphere has gotten darker. These changes hint at very complex processes affecting the visible surface, and the new data will be used in continued research. The Hubble pictures underscore that Pluto is not simply a ball of ice and rock but a dynamic world that undergoes dramatic atmospheric changes. These are driven by seasonal changes that are as much propelled by the planet's 248-year elliptical orbit as its axial tilt, unlike Earth where the tilt alone drives seasons. The seasons are very asymmetric because of Pluto's elliptical orbit. Spring transitions to polar summer quickly in the northern hemisphere because Pluto is moving faster along its orbit when it is closer to the sun. Ground-based observations, taken in 1988 and 2002, show that the mass of the atmosphere doubled over that time. This may be due to warming and sublimating nitrogen ice. It has been known since the 1980s that Pluto also has a tenuous atmosphere, which consists of a thin envelope of mostly nitrogen, with traces of methane and probably carbon monoxide. As Pluto moves away from the Sun, during its 248 year-long orbit, its atmosphere gradually freezes and falls to the ground. In periods when it is closer to the Sun - as it is now - the temperature of Pluto's solid surface increases, causing the ice to sublimate into gas. In contrast to the Earth's atmosphere, most, if not all, of Pluto's atmosphere is undergoing a temperature inversion: the temperature is higher, the higher in the atmosphere you look. The change is about 3 to 15 degrees per kilometre. On Earth, under normal circumstances, the temperature decreases through the atmosphere by about 6 degrees per kilometre. 
The reason why Pluto's surface is so cold is linked to the existence of Pluto's atmosphere, and is due to the sublimation of the surface ice; much like sweat cools the body as it evaporates from the surface of the skin, this sublimation has a cooling effect on the surface of Pluto. In this respect, Pluto shares some properties with comets, whose coma and tails arise from sublimating ice as they approach the Sun.
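The quoted lapse rates are easy to play with: on Earth the temperature falls about 6 °C per kilometre of altitude, while on Pluto it rises by roughly 3 to 15 °C per kilometre. A quick sketch of the two linear profiles (the surface temperatures used here are rough illustrative values, not measurements from the articles):

```python
def temperature_at_altitude(surface_temp_c, lapse_rate_c_per_km, altitude_km):
    """Linear temperature profile: T(h) = T_surface + lapse_rate * h."""
    return surface_temp_c + lapse_rate_c_per_km * altitude_km

# Earth: cooling with altitude (~ -6 degC/km in the troposphere).
print(temperature_at_altitude(15.0, -6.0, 5.0))    # 15 - 30 = -15 degC at 5 km

# Pluto: temperature inversion, warming with altitude (take ~ +10 degC/km).
print(temperature_at_altitude(-230.0, 10.0, 5.0))  # -230 + 50 = -180 degC at 5 km
```

Same formula, opposite sign of the lapse rate - that sign flip is the whole "inversion".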




The first moon of Pluto, Charon, was discovered in 1978. Nix and Hydra were found using Hubble in 2006, and a fourth moon, called just P4, was found last year, in 2011. This year, 2012, we discovered a fifth - P5 - see the picture above. The new discovery provides additional clues for unraveling how the Pluto system formed and evolved. The favored theory is that all the moons are relics of a collision between Pluto and another large Kuiper belt object billions of years ago.


Despite being infamously demoted from its status as a major planet, Pluto (and its largest companion Charon) recently posed as a surrogate extrasolar planetary system to help astronomers produce exceptionally high-resolution images with the Gemini North 8-meter telescope (picture below). Using a method called reconstructive speckle imaging, the researchers took the sharpest ground-based snapshots ever obtained of Pluto and Charon in visible light, which hint at the exoplanet verification power of a large state-of-the-art telescope when combined with speckle imaging techniques.




Ceres is the only dwarf planet in the inner Solar System and the largest asteroid. It is a rock-ice body some 950 km in diameter, and though the smallest identified dwarf planet, it constitutes a third of the mass of the asteroid belt. Discovered on 1 January 1801 by Giuseppe Piazzi, it was the first asteroid to be identified, though it was classified as a planet at the time. The Cererian surface is probably a mixture of water ice and various hydrated minerals such as carbonates and clays. It appears to be differentiated into a rocky core and icy mantle, and may harbour an ocean of liquid water under its surface. From Earth, the apparent magnitude of Ceres ranges from 6.7 to 9.3, and hence even at its brightest it is still too dim to be seen with the naked eye except under extremely dark skies. Below is a photo taken by Hubble.




No space probes have visited any of the dwarf planets. This will change if NASA's Dawn and New Horizons missions reach Ceres and Pluto, respectively, as planned in 2015. Dawn was also slated to orbit and observe another probably not sphere-like but still potential dwarf planet, Vesta (here and here), in 2011.



Haumea is just one-third the mass of Pluto and it was discovered in 2004. Haumea's extreme elongation makes it unique among known dwarf planets. Although its shape has not been directly observed, calculations from its light curve suggest it is an ellipsoid, with its major axis twice as long as its minor. Nonetheless, its gravity is believed sufficient for it to have relaxed into hydrostatic equilibrium, thereby meeting the definition of a dwarf planet. This elongation, along with its unusually rapid rotation, high density, and high albedo (from a surface of crystalline water ice), are thought to be the results of a giant collision, which left Haumea the largest member of a collisional family that includes several large trans-Neptunian objects and its two known moons.

Makemake is perhaps the largest Kuiper belt object (KBO) in the classical population with a diameter that is probably about 2/3 the size of Pluto. Makemake has no known satellites, which makes it unique among the largest KBOs and means that its mass can only be estimated. Its extremely low average temperature (−243.2 °C) means its surface is covered with methane, ethane, and possibly nitrogen ices. It was discovered on March 31, 2005.

Eris is the most massive known dwarf planet in the Solar System and the ninth most massive body known to orbit the Sun directly. It is estimated to be 2326 (±12) km in diameter and 27% more massive than Pluto. Eris was discovered in January 2005. It has one known moon - Dysnomia. With the exception of some comets, Eris and Dysnomia are currently the most distant known natural objects in the Solar System. Because Eris appeared to be larger than Pluto, its discoverers and NASA initially described it as the Solar System’s tenth planet. Given the error bars in the different size estimates, it is currently uncertain whether Eris or Pluto has the larger diameter.



Credits: Wikipedia, NASA, ESA

Hrvoje Crvelin

Firewalls in the sky

Posted by Hrvoje Crvelin Oct 9, 2012

The “information paradox” surrounding black holes has sucked in many noteworthy physicists over the years. For more than three decades Stephen Hawking of Cambridge University in the UK insisted that any information associated with particles swallowed by black holes is forever lost, despite this going against the rule of quantum mechanics that information cannot be destroyed. When Hawking famously made a volte-face four years ago - conceding that information can be recovered after all - not everyone was convinced.


The latest buzzword on the astro-black-hole scene is - firewalls. This is a theoretical idea related to the concept of a black hole. If you read the article about black holes, you probably realized that nothing really special happens to an infalling observer when he or she crosses a black hole event horizon. You are not aware of it, but there is no return from that point. You get torn apart only later, once you approach the black hole singularity, which may happen much, much later. Then summer 2012 came and a new twist was introduced. Ahmed Almheiri, Donald Marolf, Joseph Polchinski and James Sully wrote a paper titled "Black Holes: Complementarity or Firewalls?" in which they argue that the following three statements cannot all be true:

  • Hawking radiation is in a pure state,
  • the information carried by the radiation is emitted from the region near the horizon, with low energy effective field theory valid beyond some microscopic distance from the horizon, and
  • the infalling observer encounters nothing unusual at the horizon.


Perhaps the most conservative resolution is that the infalling observer burns up at the horizon. Alternatives would seem to require novel dynamics that nevertheless cause notable violations of semiclassical physics at macroscopic distances from the horizon. The authors considered some thought experiments about entangled qubits that fall into the black hole - constructed out of the s-wave or other waves in the spherical harmonic decomposition - and decided that the only sensible conclusion is that when a black hole becomes "old" (i.e. when it emits or loses one-half of its initial Bekenstein-Hawking entropy), its event horizon gets transformed into a firewall that destroys everything that gets there. Polchinski made a guest blog over at Discover and shared more details on this idea. It is fair to say that the new idea comes from the urge to simultaneously satisfy the demands of quantum mechanics and the aspiration that black holes don't destroy information.
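For a sense of scale, the Bekenstein-Hawking entropy mentioned above is S = 4πGM²k_B/(ħc), and since it scales as M², a black hole that has radiated away half of its entropy has mass M/√2. A rough back-of-the-envelope in Python (SI constants, a solar-mass black hole as the example):

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant
hbar = 1.0546e-34  # reduced Planck constant
c = 2.998e8        # speed of light
k_B = 1.381e-23    # Boltzmann constant

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy S = 4*pi*G*M^2*k_B / (hbar*c), in J/K."""
    return 4 * math.pi * G * mass_kg**2 * k_B / (hbar * c)

M_sun = 1.989e30  # kg
S = bh_entropy(M_sun)
print(f"S ~ {S:.2e} J/K, or ~ {S / k_B:.2e} in units of k_B")  # ~1e77 k_B

# The "old" black hole of the firewall argument: S scales as M^2,
# so losing half the entropy means the mass has dropped to M/sqrt(2).
M_old = M_sun / math.sqrt(2)
print(bh_entropy(M_old) / S)  # ~0.5
```

That ~10^77 k_B dwarfs the entropy of the star the hole formed from, which is part of why Hawking's argument was so hard to evade.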




The story starts with another thought experiment, this time made by Stephen Hawking in 1976, where he envisioned a black hole forming from ordinary matter and then evaporating into radiation via the process he had discovered two years before. According to the usual laws of quantum mechanics, the state of a system at any time is described by a wavefunction. Hawking argued that after the evaporation there is not a definite wavefunction, but just a density matrix. Roughly speaking, this means that there are many possible wavefunctions, with some probability for each (this is also known as a mixed state). In addition to the usual uncertainty that comes with quantum mechanics, there is the additional uncertainty of not knowing what the wavefunction is: information has been lost. Hawking had thrown down a gauntlet that was impossible to ignore, arguing for a fundamental change in the rules of quantum mechanics that allowed information loss. A common reaction was that he had just not been careful enough, and that as for ordinary thermal systems the apparent mixed nature of the final state came from not keeping track of everything, rather than from a fundamental property. But a black hole is different from a lump of burning coal: it has a horizon beyond which information cannot escape, and many attempts to turn up a mistake in Hawking's reasoning failed. If ordinary quantum mechanics is to be preserved, the information behind the horizon has to get out, but this is tantamount to sending information faster than light. Eventually it came to be realized that quantum mechanics in its usual form could be preserved only if our understanding of spacetime and locality broke down in a big way. So Hawking may have been wrong about what had to give (and he conceded in 2004), but he was right about the most important thing: his argument required a change in some fundamental principle of physics.




To get a closer look at the argument for information loss, suppose that an experimenter outside the black hole takes an entangled pair of spins and throws the first spin into the black hole. The equivalence principle tells us that nothing exceptional happens at the horizon, so the spin passes freely into the interior. But now the outside of the black hole is entangled with the inside, and by itself the outside is in a mixed state. The spin inside can’t escape, so when the black hole decays, the mixed state on the outside is all that is left. In fact, this process is happening all the time without the experimenter being involved: the Hawking evaporation is actually due to production of entangled pairs, with one of each pair escaping and one staying behind the horizon, so the outside state always ends up mixed.


A couple of outs might come to mind. Perhaps the dynamics at the horizon copies the spin as it falls in and sends the copy out with the later Hawking radiation. However, such copying is not consistent with the superposition principle of quantum mechanics; this is known as the no-cloning theorem. Or perhaps the information inside escapes at the last instant of evaporation, when the remnant black hole is Planck-sized and we no longer have a classical geometry. Historically, this was the third of the main alternatives: (1) information loss, (2) information escaping with the Hawking radiation, and (3) remnants, with subvariations such as stable and long-lived remnants. The problem with remnants is that these very small objects need an enormous number of internal states, as many as the original black hole had, and this leads to its own problems.




In 1993 Lenny Susskind (working with Larus Thorlacius and John Uglum and building on ideas of Gerard 't Hooft and John Preskill) tried to make precise the kind of nonlocal behavior that would be needed in order to avoid information loss. Their principle of black hole complementarity requires that different observers see the same bit of information in different places. An observer outside the black hole will see it in the Hawking radiation, and an observer falling into the black hole will see it inside. This sounds like cloning, but it is different: there is only one bit in the Hilbert space, we just can't say where it is - locality is given up, not quantum mechanics. Another aspect of the complementarity argument is that the external observer sees the horizon as a hot membrane that can radiate information, while an infalling observer sees nothing there. In order for this to work, it must be that no observer can see the bit in both places, and various thought experiments seemed to support this.



At the time, this seemed like an intriguing proposal, but not convincingly superior to information loss or remnants. But in 1997 Juan Maldacena discovered AdS/CFT duality, which constructs gravity in a particular kind of spacetime box, anti-de Sitter space, in terms of a dual quantum field theory. You will find more details about it here.


The dual description of a black hole is in terms of a hot plasma, supporting the intuition that a black hole should not be so different from any other thermal system. This dual system respects the rules of ordinary quantum mechanics, and does not seem to be consistent with remnants, so we get the information out with the Hawking radiation.


This is consistent too with the argument that locality must be fundamentally lost: the dual picture is holographic, formulated in terms of field theory degrees of freedom that are projected on the boundary of the space rather than living inside it. Indeed, the miracle here is that gravitational physics looks local at all, not that this sometimes fails. AdS/CFT duality was discovered largely from trying to solve the information paradox. After Andy Strominger and Cumrun Vafa showed that the Bekenstein-Hawking entropy of black branes could be understood statistically in terms of D-branes, people began to ask what happens to the information in the two descriptions, and this led to seeming coincidences that Maldacena crystallized as a duality. As with a real experiment, the measure of a thought experiment is whether it teaches us about new physics, and Hawking's had succeeded in a major way.


For AdS/CFT, there are still some big questions: precisely how does the bulk spacetime emerge, and how do we extend the principle out of the AdS box, to cosmological spacetimes? Can we get more mileage here from the information paradox? On the one hand, we seem to know now that the information gets out, but we do not know the mechanism, the point at which Hawking’s original argument breaks down. But it seemed that we no longer had the kind of sharp alternatives that drove the information paradox. Black hole complementarity, though it did not provide a detailed explanation of how different observers see the same bit, seemed to avoid all paradoxes.


Earlier this year, Polchinski and his students Ahmed Almheiri and Jamie Sully set out to sharpen the meaning of black hole complementarity, starting with some simple bit models of black holes that had been developed by Samir Mathur and Steve Giddings. But quickly they found a problem. Susskind had nicely laid out a set of postulates, and they found that these could not all be true at once. The postulates are

(a) Purity: the black hole information is carried out by the Hawking radiation,

(b) Effective Field Theory (EFT): semiclassical gravity is valid outside the horizon, and

(c) No Drama: an observer falling into the black hole sees no high energy particles at the horizon.


EFT and No Drama are based on the fact that the spacetime curvature is small near and outside the horizon, so there is no way that strong quantum gravity effects should occur. Postulate (b) also has another implication: the external observer interprets the information as being radiated from an effective membrane at (or microscopically close to) the horizon. This fits with earlier observations that the horizon has effective dynamical properties like viscosity and conductivity. Purity has an interesting consequence, which was developed in a 1993 paper by Don Page and further in a 2007 paper by Patrick Hayden and Preskill. Consider the first two-thirds of the Hawking photons and then the last third. The early photons have vastly more states available. In a typical pure state, then, every possible state of the late photons will be paired with a different state of the early radiation. We say that any late Hawking photon is fully entangled with some subsystem of the early radiation. However, No Drama requires that this same Hawking mode, when it is near the horizon, be fully entangled with a mode behind the horizon. This is a property of the vacuum in quantum field theory: if we divide space into two halves (here at the horizon), there is strong entanglement between the two sides. The authors used the EFT assumption implicitly in propagating the Hawking mode backwards from infinity, where we look for purity, to the horizon, where we look for drama; this propagation backwards also blue-shifts the mode, so it has very high energy. So this is effectively illegal cloning, but unlike earlier thought experiments, a single observer can see both bits, by measuring the early radiation and then jumping in and seeing the copy behind the horizon.
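To make the entanglement accounting above a bit more tangible, here is a minimal toy calculation in Python (my own illustration, not anything from the paper): the reduced state of one member of a Bell pair, a stand-in for a Hawking mode and its partner behind the horizon, carries a full bit of entanglement entropy. Monogamy of entanglement then forbids the same mode from also being fully entangled with the early radiation, which is the heart of the contradiction.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # ignore zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

# Bell pair (|00> + |11>)/sqrt(2): toy stand-in for a Hawking mode
# and its partner mode behind the horizon
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_AB = np.outer(bell, bell)

# Reduced state of one member of the pair: trace out the other qubit
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(von_neumann_entropy(rho_A))  # 1.0 bit: the mode is maximally entangled
```

One maximally mixed reduced state means one full bit of entanglement across the horizon; a late Hawking photon cannot carry that same full bit of entanglement with the early radiation as well.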


Once the paper was out, puzzlement over the idea spread. Within days Leonard Susskind replied, and a day later he released a second version of his manuscript. Two weeks later, he withdrew the paper because he "no longer believed the argument was right". He is now a believer in the firewall, though Polchinski and Susskind are still debating whether it forms at the Page time (half the black hole lifetime) or much faster, on the so-called fast-scrambling time. The argument for the latter is that this is the time scale over which most black hole properties reach equilibrium. The argument for the former is that self-entanglement of the horizon should be the origin of the interior spacetime, and this runs out only at the Page time. A week after the initial paper, Raphael Bousso replied, arguing that Polchinski et al. were sloppy about the information that various observers, especially the infalling one, may access. When one realizes that they can only evaluate the "causal diamond", all the proofs of contradictions become impossible. Daniel Harlow posted another reply four days after Bousso, but soon afterwards he withdrew the paper, just like Susskind. Yasunori Nomura, Jaime Varela, and Sean J. Weinberg argued in a way that is somewhat similar to Harlow: one must be careful when constructing the map between the unitary quantum mechanics with the qubits on one side and the semiclassical world on the other. Their paper now exists in version v3 but, unlike Harlow's paper, it hasn't been withdrawn yet. Samir D. Mathur and David Turton disagree with the firewall, too. Polchinski et al. assumed that an observer near the event horizon may say a lot about the Hawking radiation even if he only looks outside the stretched horizon; Mathur and Turton say that he must actually go all the way to the real horizon, and all the answers therefore depend on Planckian physics. Borun D. Chowdhury and Andrea Puhm are the closest ones so far to the original paper.
They claim that all the critics of Polchinski et al. are just babbling irrelevant nonsense. Chowdhury and Puhm declare that it is important to get rid of the observer-centric description and talk about decoherence. When that is done, Alice burns when she is a low-energy packet, but she may keep on living in the complementary fuzzball picture when she is a high-energy excitation. Iosif Bena, Andrea Puhm and Bert Vercnocke answered the call too. Amit Giveon and Nissan Itzhaki became supporters of the firewall. Tom Banks and Willy Fischler use Tom's somewhat incomprehensible axiomatic framework, holographic spacetime, and they conclude that this framework doesn't imply any firewalls. In the words of Amos Ori, a small black hole behaves as a black hole remnant. For Ram Brustein the event horizon is a wrong concept; it only exists in the classical theory. In the quantum theory, the black hole's Compton wavelength is nonzero, which, the author believes, creates a region near the horizon where the densities are inevitably high and quantum gravity is needed to predict what happens in this new extreme region. Lubos Motl seems to follow the same trail as Raphael Bousso. Bousso (and some others) want to say that an infalling observer sees the mode entangled with a mode behind the horizon, while the asymptotic observer sees it entangled with the early radiation. This is an appealing idea, but the problem is that the infalling observer can measure the mode and send a signal to infinity, giving a contradiction. Bousso now realizes this and is trying to find an improved version. The precise entanglement statement in the original paper is an inequality known as strong subadditivity of entropy, discussed with references in the Wikipedia article on von Neumann entropy.


Where is this going? So far, there is no argument that the firewall is visible outside the black hole, so perhaps there are no observational consequences there. For cosmology, one might try to extend this analysis to cosmological horizons, but there is no analogous information problem there, so it's a guess. The information paradox might have emerged once again. The black hole interior will always remain an inaccessible place for anyone lucky enough to stay out of it, so these questions will most likely remain theoretical - forever.



Credits: Sean Carroll, Joe Polchinski, Lubos Motl, Wikipedia, arXiv


An interesting paper has appeared on arXiv, written by Marco Lagi, Yavni Bar-Yam and Yaneer Bar-Yam. They focus on the recent US drought, the overall food crisis and its link with social unrest. Last year, social unrest swept the world like a forest fire. Many places suffered unusual riots, from the Arab Spring in North Africa and the Middle East to the streets of London and Manchester in the UK. It's not easy to pinpoint the cause of riots, but these authors published a fascinating analysis saying that the unrest could be blamed on a single factor - the price of food.


Their conclusion is based on a comparison between the variation of food prices over time and the frequency of riots. This seems to show that when food prices rise above a certain threshold, riots are much more likely. It stands to reason that people become desperate when they can't feed themselves or their families. High food prices simply create the conditions in which riots can flourish. Then almost anything can trigger them.


At the end of 2010, food prices reached a peak. Within days, the events that later became known as the Arab Spring began to unfold. Since then, global food prices have dropped, but this is about to change. The authors claim food prices are set to peak again because of the drought in the Midwestern US, which has devastated crops and led to a dramatic rise in the price of US corn and soya beans. Since US corn production represents about 40% of the global total, this is likely to trigger a big rise in food prices around the world, they say. And given the extreme social events that occurred last time food prices peaked, we should brace ourselves for the worst.
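The core of the authors' claim is a simple threshold model: riots become much more likely once a food price index crosses a critical level. As an illustration only (the index values below are made up, and the roughly-210 threshold figure comes from the authors' earlier work, not from this post), flagging risky months might look like:

```python
# Hypothetical sketch of the threshold idea: flag months in which a
# food price index sits above a critical level. The numbers here are
# invented for illustration, not real FAO data.
prices = {"2010-11": 205, "2010-12": 223, "2011-01": 231,
          "2011-02": 237, "2011-06": 216, "2012-06": 201}
THRESHOLD = 210  # the authors place the danger zone around this level

risky = [month for month, p in prices.items() if p > THRESHOLD]
print(risky)  # months where, per the model, unrest becomes far more likely
```

The point of the model is not that high prices cause riots directly, but that above the threshold almost any spark can set them off.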




There are a couple of factors that may mitigate the problem. The first is corn-to-ethanol conversion, which in 2011 accounted for 40% of the US corn market. This obviously puts pressure on food prices. The second is greater speculation, which has caused unsustainable peaks and troughs in prices. US authorities have taken action to reduce the effect of both these processes. At the end of 2011, the US government allowed ethanol subsidies to expire (although it still guaranteed demand for 37% of the US corn crop). At the same time, financial regulators have agreed to limit speculative trading on certain foods from the end of 2012, although the extent of these limits and their potential impact is much debated. The big question is to what extent these changes will prevent a spike in food prices - the authors of the paper do not seem to be optimistic. Their big fear is that speculation will push food prices beyond the threshold before the limits come into effect. In that case, the level of earlier riot-inducing bubbles will be reached before the end of 2012 and prices will continue to rise much higher. Let's hope they are wrong (oh wait, Turkey just attacked Syria).



Credits: arXiv, Technology Review

It is just amazing how environment and time affect and reflect our thinking. Razib Khan just posted an article in which he quoted a piece from the NYT containing an amazing statement:


As it happens, in the ’80s, the psychologists Betty Hart and Todd R. Risley spent years cataloging the number of words spoken to young children in dozens of families from different socioeconomic groups, and what they found was not only a disparity in the complexity of words used, but also astonishing differences in sheer number. Children of professionals were, on average, exposed to approximately 1500 more words hourly than children growing up in poverty. This resulted in a gap of more than 32 million words by the time the children reached the age of 4.


Wow. Simply amazing. However, this article is not about the power of the home environment. Rather, we focus here on human sensory perception and memory. Ask adults from the industrialized world what number is halfway between 1 and 9, and most will say 5. But pose the same question to small children, or to people living in some traditional societies, and they're likely to answer 3. Cognitive scientists theorize that this is because it's actually more natural for humans to think logarithmically than linearly: 3^0 is 1, and 3^2 is 9, so logarithmically the number halfway between them is 3^1, or 3. Neural circuits seem to bear out that theory. For instance, psychological experiments suggest that multiplying the intensity of some sensory stimuli causes a linear increase in perceived intensity.
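The 1-to-9 example is easy to verify yourself: halfway in log space is the geometric mean, not the arithmetic mean. A two-line check:

```python
import math

def linear_midpoint(a, b):
    """Halfway on a linear scale: the arithmetic mean."""
    return (a + b) / 2

def log_midpoint(a, b):
    """Halfway on a logarithmic scale: the geometric mean."""
    return math.sqrt(a * b)

print(linear_midpoint(1, 9))  # 5.0 -> the adult, "linear" answer
print(log_midpoint(1, 9))     # 3.0 -> the child's "logarithmic" answer
```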




In a paper that appeared online last week in the Journal of Mathematical Psychology, researchers from MIT's Research Laboratory of Electronics use the techniques of information theory to demonstrate that, given certain assumptions about the natural environment and the way neural systems work, representing information logarithmically rather than linearly reduces the risk of error. The new work was led by John Sun, working with Vivek Goyal, Lav Varshney (of IBM's Watson Research Center) and Grace Wang. Although this problem seems far removed from what we do naturally, that's actually not the case. We do a lot of media compression, and media compression, for the most part, is very well motivated by psychophysical experiments. So when they came up with MP3 compression, when they came up with JPEG, they used a lot of these perceptual things: what do you perceive well, what don't you perceive well?


One of the researchers' assumptions is that if you were designing a nervous system for humans living in the ancestral environment - with the aim that it accurately represent the world around them - the right type of error to minimize would be relative error, not absolute error. After all, being off by four matters much more if the question is whether there are one or five hungry lions in the tall grass around you than if the question is whether there are 96 or 100 antelope in the herd you've just spotted. Researchers demonstrate that if you're trying to minimize relative error, using a logarithmic scale is the best approach under two different conditions: One is if you're trying to store your representations of the outside world in memory; the other is if sensory stimuli in the outside world happen to fall into particular statistical patterns. If you're trying to store data in memory, a logarithmic scale is optimal if there's any chance of error in either storage or retrieval, or if you need to compress the data so that it takes up less space. The researchers believe that one of these conditions probably pertains - there's evidence in the psychological literature for both - but they're not committed to either. They do feel, however, that the pressures of memory storage probably explain the natural human instinct to represent numbers logarithmically.
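A quick numerical sketch (my own, not from the paper) shows why a logarithmic scale wins when relative error is what matters: given the same number of memory "codewords", log spacing keeps the worst-case relative error roughly uniform across the whole range, while linear spacing is disastrous for small stimuli.

```python
import numpy as np

rng = np.random.default_rng(0)
stimuli = rng.uniform(1.0, 1000.0, 10_000)  # intensities spanning 3 decades
levels = 32                                  # a small memory "codebook"

lin_grid = np.linspace(1.0, 1000.0, levels)                  # linear spacing
log_grid = np.exp(np.linspace(0.0, np.log(1000.0), levels))  # log spacing

def worst_relative_error(grid, x):
    # store each stimulus as its nearest codeword, measure |x - q| / x
    q = grid[np.abs(x[:, None] - grid[None, :]).argmin(axis=1)]
    return float(np.max(np.abs(x - q) / x))

print(worst_relative_error(lin_grid, stimuli))  # huge for small stimuli
print(worst_relative_error(log_grid, stimuli))  # bounded and uniform
```

With linear spacing, a stimulus near the bottom of the range can be stored with an error comparable to the stimulus itself (one vs. five lions), while the log codebook never misrepresents any value by more than about 11% here.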


In their paper, the MIT researchers also look at the statistical patterns that describe volume fluctuations in human speech. As it turns out, those fluctuations are well approximated by a normal distribution - a bell curve - but only if they're represented logarithmically. Under such circumstances, the researchers show, logarithmic representation again minimizes the relative error. Researchers' information-theoretic model also fits the empirical psychological data in other ways. One is that it predicts the point at which human sensory discrimination will break down. With sound volume, for instance, experimental subjects can make very fine distinctions within a range of values, but experimentally, when we get to the edges, there are breakdowns. Similarly, the model does a better job than its predecessors of describing brain plasticity. It provides a framework in which a straightforward application of Bayes' theorem - the cornerstone of much modern statistical analysis - accurately predicts the extent to which predilections hard-wired into the human nervous system can be revised in light of experience.


There's a whole bunch of different animal species and a whole bunch of different sensory mechanisms, like hearing and vision, and different aspects of all of them, and then taste, and smell, and so on, all of which follow exactly the same law - a logarithmic relationship between stimulus intensity and perceived intensity. Biology is very variable, right? So how come all these organisms come up with the same law? And how come the law is so precise? It's a major philosophical problem...



Credits: MIT

This week the competition was strong, so the top position is shared. The first video explains why the sky is dark at night. You might think this is because the Sun is not staring at us then, but bear in mind that the Universe is filled with other stars, so it shouldn't be a dark place. The answer is contained in the rather funny and educational video below, made by the great folks behind minutephysics:





Ikeguchi Laboratories has posted an interesting video. We see 32 metronomes on a table, all set to the same tempo but started at slightly different times. What may come unexpected is that although they begin "out of phase", after about 2 minutes they all lock onto the same phase and synchronize (almost all - there's a rebel that takes an extra minute to sync). Why is that? The key is that the metronomes are not on a solid table, but on a slightly flexible platform hanging from a string, and during the video you can see it moving. Thus, as a metronome's pendulum rod changes direction, it imparts a small force to the platform, which leads to small motions of the platform. The moving platform then gives small nudges back to the metronomes. These forces tend to push the other metronomes to speed up or slow down to match the timing of the original metronome, bringing the metronomes "in phase". For those looking for more details, click here.
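This kind of locking can be illustrated with the standard Kuramoto model of coupled phase oscillators. This is a generic synchronization sketch of my own, not a physical model of the metronomes and platform; the mean-field coupling term plays the role of the shared platform.

```python
import numpy as np

def kuramoto(n=32, coupling=2.0, dt=0.01, steps=10_000, seed=1):
    """Mean-field Kuramoto model: n phase oscillators, each nudged by all
    the others - a toy analogue of metronomes coupled through a platform."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2 * np.pi, n)   # random start times
    omega = rng.normal(2 * np.pi, 0.01, n)   # nearly identical tempos
    for _ in range(steps):
        # each oscillator feels the average "pull" of all the others
        pull = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta += (omega + coupling * pull) * dt
    # order parameter r: 0 = completely out of phase, 1 = fully in phase
    return float(abs(np.mean(np.exp(1j * theta))))

print(kuramoto())  # approaches 1.0 once the oscillators lock
```

Start the simulation with random phases and the order parameter climbs toward 1, just as the metronomes drift into step; the "rebel" in the video presumably corresponds to an oscillator that starts near the unstable anti-phase configuration and takes longer to be pulled in.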





And finally, something that everyone should learn in early elementary school - our current view of the Standard Model. This time, the introduction comes from Don Lincoln of Fermilab.





Posted by Hrvoje Crvelin Oct 6, 2012

Last year I wrote my first blog post - it was about Sun activity, and coronal mass ejections (CME) were covered too. A coronal mass ejection is a massive burst of solar wind, other light isotope plasma, and magnetic fields rising above the solar corona or being released into space. A CME releases huge quantities of matter and electromagnetic radiation into space above the Sun's surface, either near the corona (sometimes called a solar prominence) or farther into the planetary system or beyond (interplanetary CME). The ejected material is a plasma consisting primarily of electrons and protons, but it may contain small quantities of heavier elements such as helium, oxygen, and even iron. It is associated with enormous changes and disturbances in the coronal magnetic field.


Fast forward to the present, and we find that magnetic fields near sunspot AR1582 slowly erupted on Oct 5th, sparking a B7-class solar flare and hurling a CME toward Earth. The Solar and Heliospheric Observatory (SOHO) captured the image below of the expanding cloud. Although Earth is in the line of fire, it won't be a direct hit. Instead, the CME will deliver a glancing blow to our planet's magnetic field. NOAA forecasters estimate a 20% chance of polar geomagnetic storms when the cloud arrives on Oct. 8th. High-latitude sky watchers should be alert for auroras, especially during the hours around local midnight.




Four decades of active research and debate by the solar physics community have failed to bring consensus on what drives the sun's powerful coronal mass ejections. Nature Physics has published a paper which claims to have settled this issue. The new findings, based on state-of-the-art computer simulations, show the intricate connection between motions in the sun's interior and these eruptions, and could lead to better forecasting of hazardous space weather conditions. Geomagnetic storms caused by CMEs can disrupt power grids, satellites that operate global positioning systems and telecommunication networks, pose a threat to astronauts in outer space, lead to rerouting of flights over the polar regions, and cause spectacular auroras. The storms occur when a solar eruption hits Earth's protective magnetic bubble, or magnetosphere.


The Nature Physics paper provides an explanation of the origin of fast ejections of magnetized plasma from the sun's atmosphere and associated X-ray emissions. It thus demonstrates a fundamental connection between the magnetic processes inside the sun's interior and the formation of CMEs. Through this type of computer modeling we are able to understand how invisible bundles of magnetic field rise from under the surface of the sun into interplanetary space and propagate towards Earth with potentially damaging results. These fundamental phenomena cannot be observed even with the most advanced instruments on board NASA satellites but they can be revealed by numerical simulations.


A long-standing goal of the solar physics community has been the forecasting of solar eruptions and predictions of their impact on Earth. In the paper, the authors note, "the model described here enables us not only to capture the magnetic evolution of the CME, but also to calculate the increased X-ray flux directly, which is a significant advantage over the existing models". If confirmed, that's going to be nice progress.

Here we are in the second month now, and what an exciting month this has been for Curiosity! A Martian day - known as a Sol - is slightly longer than Earth's, at 24 hours and 39 minutes. Temperatures have risen above freezing during the day for more than half of the Martian Sols since REMS (Curiosity's Rover Environmental Monitoring Station) started recording data. Because Mars's atmosphere is much thinner than Earth's and its surface much drier, the effects of solar heating are much more pronounced. At night the air temperature sinks drastically, reaching a minimum of -70 degrees Celsius just before dawn. Average daytime air temperatures have reached a peak of 6 degrees Celsius at 2pm local time. Looking very similar to the iconic first footprint on the Moon from the Apollo 11 landing, this new raw image from the Curiosity rover on Mars shows one of the first "scuff" marks from the rover's wheels on a small sandy ridge. This image was taken today by Curiosity's right Navcam on Sol 57 (2012-10-03 19:08:27 UTC).
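That extra 39 minutes adds up quickly for the team running the rover on "Mars time". A back-of-the-envelope sketch using the rounded sol length quoted above (the commonly used figure adds about another 35 seconds):

```python
# How Mars local time drifts against an Earth clock, using the rounded
# sol length of 24 h 39 min from the post
SOL = 24 * 3600 + 39 * 60   # seconds in one sol (rounded)
DAY = 24 * 3600             # seconds in one Earth day

drift_per_sol_min = (SOL - DAY) / 60
print(drift_per_sol_min)           # 39.0 minutes later each sol
print(57 * (SOL - DAY) / 3600)     # ~37 hours of drift by Sol 57
```

By Sol 57, a clock keeping Mars local time has slipped about a day and a half relative to Earth, which is why mission planners' working hours creep around the Earth clock.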




Just a day after completing its first month on Mars, Curiosity extended its robotic arm in the first of six to ten consecutive days of planned activities to test the 2.1-meter arm and the tools it manipulates. The work done at the landing location will prepare Curiosity and the team for using the arm to place two of the science instruments onto rock and soil targets. In addition, the activities represent the first steps in preparing to scoop soil, drill into rocks, process collected samples and deliver samples into analytical instruments. After the arm characterization activities, Curiosity was planned to proceed for a few weeks eastward toward Glenelg. The science team selected that area as likely to offer a good target for Curiosity's first analysis of powder collected by drilling into a rock.




As part of the tests, I guess the ones people like the most are those made with the camera (though the real tests and results are expected from the sensitive gear Curiosity carries with it). On 9th September Curiosity took the following self-portrait. The picture was taken by the Mars Hand Lens Imager (MAHLI), a camera mounted on the end of the robot arm. It's designed to look up close at specimens of rocks or whatever else the rover happens to see as it rolls across Mars. It has a transparent dust cover on it, which is why the image is a bit fuzzy.




For some unknown reason this photo reminded me of a movie from my youth called Electric Dreams. It is one of those movies I got to listen to first and only later see (thanks to its very good soundtrack). Anyway, back to Curiosity. Since the camera had been behind a dust-covered shield, engineers back here on Earth sent signals for the cover to flip open, and then they took a photo of the ground. Check it out.




Yeehaw! A dust-free photo with clearly visible rocks. You might not find this as exciting as some Mars panorama, but remember that, geology-wise, this is more important. Even just looking at how the rocks are laid out can be telling; water flowing over a rocky area redistributes rocks in certain patterns, and that can be seen right away in pictures. Of course, drilling down will also be highly valuable. And if you want more, here is more.




On September 13th, Curiosity captured an eclipse from Mars. The brief animation below, made from ten raw subframe images acquired with Curiosity's Mastcam, shows the silhouette of Mars' moon Phobos as it slipped in front of the Sun's limb. The entire animation spans a real time of about 15 minutes. Curiosity's capture was no lucky shot, as mission engineers had the Mastcam already positioned to catch the event.




On the 19th, Curiosity drove up to a football-size rock that will be the first the rover's arm examines. The rock lies about halfway between the rover's landing site, Bradbury Landing, and a location called Glenelg. The rock has been named "Jake Matijevic". Jacob Matijevic was the surface operations systems chief engineer for the Mars Science Laboratory and the project's Curiosity rover. He passed away on August 20th, at age 64. Matijevic was also a leading engineer for all of the previous NASA Mars rovers: Sojourner, Spirit and Opportunity. This rock is strange; just look at the different patterns and colors on each side. But this is an optical illusion, since the shape of the rock is pyramid-like. The rock is about 25 centimeters tall and 40 centimeters wide. The rover team has assessed it as a suitable target for the first use of Curiosity's contact instruments on a rock. Data gathering on this rock was completed on 24th September.




And while Curiosity was on its way to check Mars' pulse, I noticed a wave of articles about Mars not being what we thought it to be. This came as a surprise, to be honest, but both sides should speak and arguments should be put to the test. For example, earlier research had theorised that certain minerals detected on the surface of the Red Planet indicated the presence of clay formed when water weathered surface rock some 3.7 billion years ago. This would also have meant the planet was warmer and wetter then, boosting the chances that it could have nurtured life forms. But new research by a team from France and the United States says the minerals, including iron and magnesium, may instead have been deposited by water-rich lava, a mixture of molten and part-molten rock from beneath the planet's surface.



In the picture, particles of clay cover the surfaces of crystals in the subaerial basalt flow of the Mururoa Guyot (French Polynesia). Similar clays may have formed in the basaltic rocks of the Noachian crust on Mars (in yellow), which were probably not totally degassed. They could not have formed in Hesperian rocks (in green), which were totally degassed.


If the theory is correct, it would imply that early Mars may not have been as habitable as previously thought at the time when life was taking hold on Earth. However, only on-the-spot examination of Mars' clay minerals can provide conclusive proof of their origin.


Sounds like another tough job for Curiosity.


And then, on the 27th, a bombshell hit the media outlets: Curiosity found evidence of an ancient Martian stream. The finding site lies between the north rim of Gale Crater and the base of Mount Sharp, a mountain inside the crater. Earlier imaging of the region from Mars orbit allows for additional interpretation of the gravel-bearing conglomerate. Curiosity's telephoto camera snapped shots of three rocky outcrops. One of them, called "Goulburn", had been excavated by the rover's own landing gear. The other two were natural outcrops dubbed "Link" and "Hottah". All three, and Hottah in particular, were made of thin layers of rock that had been cemented together. When the rover zoomed in, it saw rounded pebbles in the conglomerates and in the surrounding gravel that were fairly large - up to a few centimetres in diameter. On Earth, roundness is a tell-tale sign that rocks have been transported a long way, since their angular edges get smoothed out as they tumble. The Mars rocks are too big to have been blown by wind, so the team concluded they must have been moved by flowing water. This dovetails with orbital images hinting that the rover landed on an alluvial fan, a feature formed on Earth by water flows.




Hottah looks like someone jack-hammered up a slab of city sidewalk, but it's really a tilted block of an ancient streambed. The gravels in the conglomerates at both outcrops range in size from a grain of sand to a golf ball. Some are angular, but many are rounded. The shapes tell you they were transported, and the sizes tell you they couldn't have been transported by wind - they were transported by water flow. The science team may use Curiosity to learn the elemental composition of the material that holds the conglomerate together, revealing more characteristics of the wet environment that formed these deposits. The stones in the conglomerate provide a sampling from above the crater rim, so the team may also examine several of them to learn about the broader regional geology. The slope of Mount Sharp in Gale Crater remains the rover's main destination. Clay and sulfate minerals detected there from orbit can be good preservers of the carbon-based organic chemicals that are potential ingredients for life. A long-flowing stream can be a habitable environment.




How long ago was the water there? On Earth, the most reliable way to measure the age of an alluvial fan is radiocarbon dating - but that requires organic carbon, which we haven't yet found on Mars. And even if Curiosity found some, its on-board chemistry lab isn't quite up to the task. The best Mars scientists can do is estimate the age of the surrounding surface by counting craters: broadly speaking, the older an area is, the more craters it has accumulated over time.
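As an aside, the Earth-side method is simple enough to sketch. Here is a minimal illustration of the radiocarbon idea, using the standard textbook half-life - nothing mission-specific:

```python
import math

# Living matter maintains a known ratio of carbon-14; after death the
# C-14 decays with a half-life of about 5730 years, so the remaining
# fraction dates the sample.
HALF_LIFE = 5730.0  # years, carbon-14

def radiocarbon_age(remaining_fraction):
    """Age in years, from the fraction of the original C-14 still present."""
    return HALF_LIFE / math.log(2) * math.log(1.0 / remaining_fraction)

# A sample retaining a quarter of its C-14 is two half-lives old:
age = radiocarbon_age(0.25)  # 11460 years
```

No organic carbon means no remaining fraction to measure - which is exactly why this trick is unavailable on Mars.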


On the other hand, preliminary data from the Curiosity Mars Science Laboratory, presented at the European Planetary Science Conference on 28 September, indicate that the Gale Crater landing site might be drier than expected. The Dynamic Albedo of Neutrons (DAN) instrument on board Curiosity is designed to detect the location and abundance of water thanks to the way hydrogen (one of water's components) reflects neutrons. When neutrons hit heavy particles, they bounce off with little loss in energy, but when they hit hydrogen atoms (which are much lighter and have approximately the same mass as neutrons), they lose half of their energy. The DAN instrument works by firing a pulse of neutrons at the ground beneath the rover and detecting the way it is reflected. The intensity of the reflection depends on the proportion of water in the ground, while the time the pulse takes to reach the detector is a function of the depth at which the water is located. The prediction based on previous measurements using the Mars Odyssey orbiter was that the soil in Gale Crater would be around 6% water. But the preliminary results from Curiosity show only a fraction of this.
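The physics behind that "lose half of their energy" remark can be sketched with the standard elastic-collision formula; this is a generic illustration of why hydrogen matters so much to DAN, not the instrument's actual data processing:

```python
# In a head-on elastic collision, a neutron hitting a nucleus of mass
# number A keeps the fraction ((A - 1) / (A + 1))**2 of its kinetic energy.
def retained_fraction(A):
    return ((A - 1) / (A + 1)) ** 2

# Hydrogen (A = 1): a head-on hit transfers all the neutron's energy;
# averaged over impact angles, a neutron loses about half per collision.
h = retained_fraction(1)
# A heavy nucleus such as iron (A = 56) barely slows the neutron at all:
fe = retained_fraction(56)
```

That contrast is why the reflected neutron signal is such a sensitive proxy for hydrogen - and hence water - in the ground.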



One possible explanation of the discrepancy lies in the variability of water content across the surface of Mars. There are large-scale variations, with polar regions in particular having high abundances of water, but also substantial local differences even within individual regions on Mars. The Mars Odyssey spacecraft can only measure water abundance averaged over an area of around 300 by 300 kilometres - it cannot make high-resolution maps. It may therefore be that Odyssey's figure for Gale Crater is an accurate (but somewhat misleading) average of significantly varying hydrogen abundances in different parts of the crater. Indeed, over the small distance that the rover has already covered, DAN has observed variations in the detector counting rates that may indicate different levels of hydrogen in the ground, hinting that this is likely the case. Curiosity's ability to probe the water content of the Martian soil in specific locations, rather than as averages over broad regions, allows for a far more precise and detailed understanding of the distribution of water ice on Mars.


When you take a look at Mars, you probably wouldn't think it looks like a nice place to live. It's dry, it's dusty, and there's practically no atmosphere. But some scientists think that Mars may once have been a much nicer place, with a thicker atmosphere, cloudy skies, and possibly even liquid water flowing over the surface. So how do you go from that to what we see in the pictures above? NASA's MAVEN spacecraft will give us a clearer idea of how Mars lost its atmosphere, and scientists think that several processes have had an impact. Below is a video for a quick dive-in.





Frozen carbon dioxide, better known as "dry ice", requires temperatures of about minus 125 Celsius, which is much colder than needed for freezing water. And there is plenty of it at Mars' poles! Carbon-dioxide snow reminds scientists that although some parts of Mars may look quite Earth-like, the Red Planet is very different. Recent data from the Mars Reconnaissance Orbiter provide the first definitive detections of carbon-dioxide snow clouds. The presence of carbon-dioxide ice in Mars' seasonal and residual southern polar caps has been known for decades, but snow is different. One line of evidence for snow is that the carbon-dioxide ice particles in the clouds are large enough to fall to the ground during the lifespan of the clouds. Another comes from observations made with the instrument pointed toward the horizon instead of down at the surface. The infrared spectral signature of the clouds viewed from this angle is clearly that of carbon-dioxide ice particles, and they extend down to the surface. Observing this way, the instruments can distinguish the particles in the atmosphere from the dry ice on the surface.


MAVEN will be the first spacecraft to make direct measurements of the Martian atmosphere, and the first mission to Mars specifically designed to help scientists understand the past - and ongoing - escape of CO2 and other gases into space. MAVEN will orbit Mars for at least one Earth year, about half a Martian year, providing information on how, and how fast, atmospheric gases are being lost to space today, and inferring from those detailed studies what happened in the past. Studying how the Martian atmosphere was lost to space can reveal clues about the impact that change had on the Martian climate and on geologic and geochemical conditions over time, all of which are important in understanding whether Mars ever had an environment able to support life. MAVEN will carry eight science instruments to take measurements of the upper Martian atmosphere. It is scheduled to launch in 2013, with a launch window from November 18 to December 7, 2013; Mars orbit insertion will follow in mid-September 2014.




When the Mars Science Laboratory's Curiosity rover landed on August 6, it was another step forward in the effort to eventually send humans to the Red Planet. Using the lessons of the Apollo era and of robotic missions to Mars, NASA scientists and engineers are studying the challenges and hazards involved in any extraterrestrial landing. The journey to Mars or beyond is plagued with technological problems. Among the most challenging is finding a way to protect humans from the high-energy particles that would otherwise raise radiation exposure to unacceptable levels. On Earth, humans are protected by the atmosphere, the mass of the Earth itself and the Earth's magnetic field. In low Earth orbit, astronauts lose the protection of the atmosphere, and radiation levels are consequently higher by two orders of magnitude. In deep space, astronauts also lose the protecting effect of the Earth's mass and its magnetic field, raising levels a further five times - beyond the limits humans can withstand over the 18 months or so it would take to get to Mars or the asteroids. An obvious way to protect astronauts is with an artificial magnetic field that would steer charged particles away, but previous studies have concluded that ordinary magnets would be too big and heavy to be practical on a space mission. Superconducting magnets, however, are more powerful, more efficient and less massive, making them much better candidates for protecting humans. The only problem is that nobody has yet built and tested a superconducting magnet in space. For now, then, humans are expensive cargo that adds little if any value when it comes to science in space. The message is clear: if we want the best return on our space-bound money, we'll be better off sending robots for the foreseeable future and taking better care of things here on Earth.
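Putting the numbers quoted above together gives a feel for the scale of the problem. A back-of-the-envelope sketch - the multipliers are the rough order-of-magnitude figures from the text, not real dosimetry data:

```python
# Scaling the radiation exposure figures quoted in the text
# (two orders of magnitude in LEO, a further ~5x in deep space).
earth_dose = 1.0                # take the ground-level dose rate as our unit
leo_dose = earth_dose * 100     # atmosphere gone: ~100x higher
deep_space_dose = leo_dose * 5  # Earth's mass and magnetic field gone too

# A Mars-bound crew would face roughly this multiple of the dose they
# would absorb staying home for the same 18 months:
relative = deep_space_dose / earth_dose
```

Several hundred times the terrestrial dose rate, sustained for a year and a half, is what any shielding scheme has to beat.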


And there is some Earth-bound science around this mission to Mars too. Since a Martian day is slightly longer than ours, teams controlling surface craft like the Phoenix lander and the Curiosity rover must shift their sleep cycles to match, and it's a lot harder than it sounds: that extra fraction of an hour means their sleep schedules creep every day, so while 1 pm might be the middle of the night one week, it will have become breakfast time a couple of weeks later. Staying on Mars time is so grueling that staff for the Sojourner rover in 1997 bailed on the schedule a third of the way through the mission. But there may be ways of making the shift more graceful. In a recently published study following personnel of the Phoenix mission in 2008, a team of researchers provided training sessions to facilitate the switch to Mars time, including tips on when to drink coffee and when to nap, and then explored whether exposure to blue light, which works on photosensitive cells in the eye involved in circadian management, resulted in better adjustment. Sixteen subjects turned on a blue light at their desks at the beginning of each work "day" and provided the researchers with a log of their activities and fatigue levels, along with frequent urine samples; the researchers used the levels of a hormone in the urine to evaluate whether the subjects were adapting biologically to Martian time.
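The drift is easy to quantify. A small sketch using the approximate length of a Mars sol (about 24 h 39 min 35 s):

```python
from datetime import timedelta

# A Mars sol is roughly 24 h 39 min 35 s long.
SOL = timedelta(hours=24, minutes=39, seconds=35)
EARTH_DAY = timedelta(hours=24)

# A "Mars time" schedule slips by the difference every single Earth day:
drift_per_day = SOL - EARTH_DAY          # 39 min 35 s

week_drift = drift_per_day * 7           # several hours after one week
two_week_drift = drift_per_day * 14      # over nine hours after two weeks
```

So a shift that starts at 1 pm really has wandered into a completely different part of the day within a couple of weeks.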




When the hormone levels were plotted, the researchers found that they supported the idea that the subjects were adapting to the longer day. Plotting the data under the assumption of a 24-hour day resulted in a garbled picture with no pattern visible; trying a slightly longer day revealed the characteristic rise and fall of the hormone levels that scientists have come to expect. Was this because of the blue light, though, or was it the result of the basic sleep training? Several subjects who didn't use the blue lights managed to make the switch to Mars time all the same, and the team suggests that there were enough factors beyond their control - for instance, subjects' exposure to the sun outside of work - to muddy the data with regard to the blue light. They point out, though, that previous studies have shown that sleep scheduling without light manipulation won't shift people to Martian time. They hope that in future studies they'll be able to go further - replacing all the bulbs in the office, for instance, with blue ones.
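The "garbled at 24 hours, clean at the Mars day" effect is easy to demonstrate with made-up data. Here the hormone rhythm is simulated as a clean cosine with a Mars-day period (a toy model, nothing from the actual study), and a simple cosine fit scores how well each candidate period explains it:

```python
import math

MARS_DAY = 24.65  # hours, approximate length of a Martian sol

# Simulated hormone levels sampled every 3 hours for about 40 days:
times = [3.0 * i for i in range(320)]
levels = [math.cos(2 * math.pi * t / MARS_DAY) for t in times]

def fit_strength(period):
    """Amplitude of the best cosine fit at a candidate period (0..1)."""
    c = sum(l * math.cos(2 * math.pi * t / period) for t, l in zip(times, levels))
    s = sum(l * math.sin(2 * math.pi * t / period) for t, l in zip(times, levels))
    return 2 * math.hypot(c, s) / len(times)

earth_fit = fit_strength(24.0)    # wrong period: the rhythm washes out
mars_fit = fit_strength(MARS_DAY) # right period: the rhythm stands out
```

Fitting at the true ~24.65-hour period recovers the rhythm almost perfectly, while the 24-hour fit sees next to nothing - the same qualitative picture the researchers describe.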




Two days ago Curiosity checked in on Mars using the mobile application Foursquare (see picture above). This marks the first check-in on another planet. Users on Foursquare can keep up with Curiosity as the rover checks in at key locations and posts photos and tips, all while exploring the Red Planet. NASA has been on Foursquare since 2010 through a strategic partnership with the platform. This partnership, launched with astronaut Doug Wheelock's first-ever check-in from the International Space Station, has allowed users to connect with NASA and enabled them to explore the universe and re-discover Earth. The partnership launched the NASA Explorer badge for Foursquare users, encouraging them to explore NASA-related locations across the country. It also included the launch of a NASA Foursquare page, where the agency provides official tips and information about the nation's space program.




Credits: NASA, CNRS, Phil Plait, Universe Today, Technology Review, Europlanet, New Scientist, Veronique Greenwood

Hrvoje Crvelin

Quantum causality

Posted by Hrvoje Crvelin Oct 4, 2012

In everyday life (and in classical physics) events are ordered in time: a cause can only influence an effect in its future, not in its past. As a simple example, imagine a person, Alice, walking into a room and finding there a piece of paper. After reading what is written on the paper, Alice erases the message and leaves her own message on the piece of paper. Another person, Bob, walks into the same room at some other time and does the same: he reads, erases and re-writes some message on the paper. If Bob enters the room after Alice, he will be able to read what she wrote; however, Alice will not have a chance to know Bob's message. In this case, Alice's writing is the "cause" and what Bob reads is the "effect". Each time the two repeat the procedure, only one will be able to read what the other wrote. Even if they don't have watches and don't know who entered the room first, they can deduce it from what they write and read on the paper. For example, Alice might write "Alice was here today (03-10-2012 8:48)", so if Bob reads the message, he will know that he came to the room after her.


As long as only the laws of classical physics are allowed, the order of events is fixed: either Bob or Alice is first to enter the room and leave a message for the other. When quantum mechanics enters into play, however, the picture may change drastically. According to quantum mechanics, objects can lose their well-defined classical properties - a particle, for example, can be at two different locations at the same time. In quantum physics this is called a "superposition".



Now an international team of physicists led by Caslav Brukner from the University of Vienna has shown that even the causal order of events can be in such a superposition. If - in our example - Alice and Bob have a quantum system instead of an ordinary piece of paper to write their messages on, they can end up in a situation where each of them can read a part of the message written by the other.


Effectively, one has a superposition of two situations: "Alice enters the room first and leaves a message before Bob" and "Bob enters the room first and leaves a message before Alice".


Such a superposition, however, has not been considered in the standard formulation of quantum mechanics since the theory always assumes a definite causal order between events. But if we believe that quantum mechanics governs all phenomena, it is natural to expect that the order of events could also be indefinite, similarly to the location of a particle or its velocity.


This work provides an important step towards understanding that definite causal order might not be a mandatory property of nature. The real challenge is finding out where in nature we should look for superpositions of causal orders. Obviously, designing quantum computer systems just got a bit more difficult.



Credits: University of Vienna

Hrvoje Crvelin

Trash in space #2

Posted by Hrvoje Crvelin Oct 3, 2012

Back in May I wrote an article about trash in space. I'm a big fan of sending trash to space - just send it straight into the Sun. A one-way ticket. What could possibly go wrong? Well, things go wrong when not planned well, and then they tend to stick in orbit around Earth as debris. NASA estimates that more than 21000 fragments of orbital debris larger than 10 centimeters are stuck in Earth's orbit, and experts worry that orbiting junk is becoming a growing problem for the space industry.


The ISS currently hosts six astronauts - three Russians, two Americans and one Japanese. Mission Control Center spokeswoman Nadyezhda Zavyalova said today that the Russian Zvezda module will fire booster rockets to carry out the operation on Thursday at 07:22 am Moscow time (0322 GMT). The space station performs evasive maneuvers when the likelihood of a collision exceeds one in 10000, and that seems to be the case right now.




Interestingly enough, 10 days ago NASA stated this was not needed, but it is not clear to me whether they were referring to the same pieces of debris (fragments from an Indian rocket and an old Russian satellite). Back in January this year evasive maneuvers were made too. That was the 13th time since 1998 that the station has moved because of debris.


UPDATE: Mission Control Center said in a statement carried by Russian news agencies that the fragment of space debris would fly by too far away to pose any danger to the space outpost, so the plan to fire booster rockets and carry out the maneuver on Thursday at 07:22 am Moscow time (0322 GMT) was canceled.

Hrvoje Crvelin

Fighting CO

Posted by Hrvoje Crvelin Oct 2, 2012

Carbon monoxide (CO) is a toxic gas that can prove fatal at high concentrations. The gas is most commonly associated with faulty domestic heating systems - something you can read about in newspapers around the world on an almost daily basis - and car fumes - a method rather well known from 70s and 80s TV crime shows, where a car left running in a closed garage was a favourite murder weapon. Carbon monoxide is often referred to as "the silent killer": it has no smell, taste or colour - and if it's there, it will get you.




But carbon monoxide is also produced within our bodies through the normal activity of cells. Why doesn't that kill us? How do organisms manage to control this internal carbon monoxide production so that it does no harm? Carbon monoxide molecules should be able to readily bind with protein molecules found in blood cells, known as haemproteins. When they do, for instance during high-concentration exposure from inhaling, they impair normal cellular functions such as oxygen transportation, cell signaling and energy conversion. It is this that causes the fatal effects of carbon monoxide poisoning. The haemproteins provide an ideal 'fit' for the CO molecules, like a hand fitting a glove, so the natural production of the gas, even at low concentrations, should in theory bind to the haemproteins and poison the organism - except it doesn't.


A year ago, researchers from the University of Manchester and their colleagues at the University of Liverpool and Eastern Oregon University identified the mechanism whereby cells protect themselves from the toxic effects of the gas at these lower concentrations. Working with a simple bacterial haemprotein, they were able to show that when the haemprotein 'senses' the toxic gas being produced within the cell, it changes its structure through a burst of energy, and the carbon monoxide molecule then struggles to bind with it at these low concentrations. This mechanism of linking the CO binding process to a highly unfavourable energetic change in the haemprotein's structure provides an elegant means by which organisms avoid being poisoned by carbon monoxide derived from natural metabolic processes. Similar mechanisms coupling an energetic structural change with gas release may have broad implications for the functioning of a wide variety of haemprotein systems. For example, haemproteins bind other gas molecules, including oxygen and nitric oxide, and the binding of these gases to haemproteins is important in the natural functions of the cell. Without this structural change, carbon monoxide would bind to the haemprotein almost a million times more tightly, which would prevent the haemprotein's natural cellular function.
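To put a number on "a million times more tightly": in equilibrium terms, that affinity ratio corresponds to a free-energy difference of RT·ln(10^6). A quick textbook-style estimate of my own - not a figure from the paper itself:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # body temperature, K
ratio = 1e6  # "almost a million times more tightly"

# Free-energy penalty the structural change imposes on CO binding:
delta_g_kj = R * T * math.log(ratio) / 1000.0  # roughly 36 kJ/mol
```

A few tens of kJ/mol is comparable to a couple of hydrogen bonds - a modest structural price for not being poisoned by your own metabolism.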


There's still so much to learn from Nature and therefore exploring Nature should be our priority.



Credits: University of Manchester
