
Hrvoje Crvelin

Music and language

Posted by Hrvoje Crvelin Apr 29, 2012

Music is an art form whose medium is sound and silence. Its common elements are pitch (which governs melody and harmony), rhythm (and its associated concepts of tempo, meter, and articulation), dynamics, and the sonic qualities of timbre and texture. Just like math, it is hard to say whether music was invented or simply found to exist. Language, on the other hand, may refer either to the specifically human capacity for acquiring and using complex systems of communication, or to a specific instance of such a system. The origin of language is largely unknown, and there are several competing assumptions. Some theories are based on the idea that language is so complex that one cannot imagine it simply appearing from nothing in its final form, but that it must have evolved from earlier pre-linguistic systems among our pre-human ancestors. These can be called continuity-based theories. The opposite viewpoint is that language is such a unique human trait that it cannot be compared to anything found among non-humans, and that it must therefore have appeared fairly suddenly in the transition from pre-hominids to early man. These can be called discontinuity-based theories. Similarly, some theories see language mostly as an innate faculty that is largely genetically encoded, while others see it as a system that is largely cultural, learned through social interaction. Currently the only prominent proponent of a discontinuity theory of human language origins is Noam Chomsky.

 

chomsky.jpg

Chomsky proposes that "some random mutation took place, maybe after some strange cosmic ray shower, and it reorganized the brain, implanting a language organ in an otherwise primate brain". While cautioning against taking this story too literally, Chomsky insists that "it may be closer to reality than many other fairy tales that are told about evolutionary processes, including language". Sometime in November last year Chomsky gave an interview to Discover - you can read more here.

 

In the interview Chomsky states that if you look at the archaeological record, a creative explosion shows up in a narrow window, somewhere between 150000 and roughly 75000 years ago. All of a sudden, there's an explosion of complex artifacts, symbolic representation, measurement of celestial events, complex social structures - a burst of creative activity that almost every expert on prehistory assumes must have been connected with the sudden emergence of language.

 

And it doesn’t seem to be connected with physical changes; the articulatory and acoustic (speech and hearing) systems of contemporary humans are not very different from those of 600000 years ago. There was a rapid cognitive change.

 

And nobody knows why.

 

Continuity-based theories are currently held by a majority of scholars, but they vary in how they envision this development. Those who see language as being mostly innate, for example Steven Pinker, hold the precedents to be animal cognition, whereas those who see language as a socially learned tool of communication, such as Michael Tomasello, see it as having developed from animal communication, either primate gestural or vocal communication. Other continuity-based models see language as having developed from music.

 

As per Mark Changizi, we're fish out of water, living in radically unnatural environments and behaving ridiculously for a great ape. So, if one were interested in figuring out which things are fundamentally part of what it is to be human, then those million crazy things we do these days would not be on the list. But what would be on the list? Language is the pinnacle of usefulness, and was key to our domination of the Earth (and the Moon). And music is arguably the pinnacle of the arts. Language and music are fantastically complex, and we are brilliantly capable of absorbing them, and from a young age. That's how we know we're meant to be doing them, i.e., how we know we evolved brains for engaging in language and music. But what if we're not, in fact, meant to have language and music? What if our endless yapping and music-filled hours each day are deeply unnatural behaviors for our species?

 

Mark's take on this is that both language and music are not part of our core - that we never evolved by natural selection to engage in them. The reason we have such a head for language and music is not that we evolved for them, but, rather, that language and music evolved - culturally evolved over millennia - for us. Our brains aren't shaped for these pinnacles of humankind. Rather, these pinnacles of humankind are shaped to be good for our brains. If language and music have shaped themselves to be good for non-linguistic and amusical brains, then what would their shapes have to be?

 

lm2.jpg

 

We have auditory systems which have evolved to be brilliantly capable of processing the sounds of nature, and language and music would need to mimic those sorts of sounds in order to harness our brain. Mark bases his whole book on this subject. The two most important classes of auditory stimuli for humans are:

  • events among objects (most commonly solid objects), and
  • events among humans (for example human behavior).

 

In his research, Mark has shown that the signature sounds in these two auditory domains drive the sounds we humans use in

  • speech and
  • music, respectively.

 

For example, the principal source of modulation of pitch in the natural world comes from the Doppler shift, where objects moving toward you have a high pitch and objects moving away have a low pitch; from these pitch modulations a listener can hear an object's direction of movement relative to his or her position. In the book Mark provides a battery of converging evidence that melody in music has culturally evolved to sound like (often exaggerations of) the Doppler shifts of a person moving in one's midst. Consider first that a mover's pitch will modulate within a fixed range, the top and bottom pitches occurring when the mover is headed, respectively, toward and away from you. Do melodies confine themselves to fixed ranges? They tend to, and tessitura is the musical term for this range. In the book Mark runs through a variety of specific predictions. For the full set of arguments for language and music you'll have to read the book, but the preliminary conclusion of the research is that human speech sounds like solid-object events, and music sounds like human behavior! That's just what we would expect if we were never meant to do language and music. Language and music have the fingerprints of being unnatural (of not having their origins via natural selection) and the giveaway is, ironically, that their shapes are natural (have the structure of natural auditory events).
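
To make the Doppler point concrete, here is a minimal Python sketch - my own illustration, not anything from Changizi's book - of the pitch a stationary listener hears from a mover walking past. The hummed frequency, walking speed and geometry are made-up numbers; the point is only that the perceived pitch stays within a fixed range, peaking on approach and bottoming out on retreat, much like a tessitura.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def doppler_pitch(f_source, mover_xy, velocity_xy, listener_xy=(0.0, 0.0)):
    """Perceived frequency of a moving sound source for a stationary listener."""
    dx = listener_xy[0] - mover_xy[0]
    dy = listener_xy[1] - mover_xy[1]
    dist = math.hypot(dx, dy) or 1e-9  # avoid division by zero at the listener
    # Radial speed toward the listener (positive while approaching).
    v_radial = (velocity_xy[0] * dx + velocity_xy[1] * dy) / dist
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND - v_radial)

# A walker humming 440 Hz strides past at 2 m/s, passing 2 m to one side.
for t in range(-5, 6):
    f = doppler_pitch(440.0, (2.0 * t, 2.0), (2.0, 0.0))
    print(f"t={t:+d}s  perceived pitch = {f:6.2f} Hz")
```

Running it shows a small pitch contour gliding from about 442.5 Hz on approach, through 440 Hz at the closest point, down to about 437.5 Hz on retreat - a fixed range whose extremes mark "coming toward you" and "going away".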

 

We also find this for another core capability that we know we're not "meant" to do - reading. Writing was invented much too recently for us to have specialized reading mechanisms in the brain (although there are new hints of early writing as old as 30000 years), and yet reading has the hallmarks of instinct. Mark's research suggests that language and music aren't any more part of our biological identity than reading is. Counterintuitively, then, we aren't "supposed" to be speaking and listening to music. They aren't part of our "core" after all. Or, at least, they aren't part of the core of Homo sapiens as the species originally appeared. But it seems reasonable to insist that, whether or not language and music are part of our natural biological history, they are indeed at the core of what we take to be centrally human now. Being human today is quite a different thing than being the original Homo sapiens.

 

lm3.jpg

 

Almost a month ago, Geoffrey Miller and Gary Marcus had a public discussion on whether music is an instinct or a cultural invention, respectively. In recent years, archaeologists have dug up prehistoric instruments, neuroscientists have uncovered brain areas that are involved in improvisation, and geneticists have identified genes that might help in the learning of music. Yet basic questions persist: Is music a deep biological adaptation in its own right, or is it a cultural invention based mostly on our other capacities for language, learning, and emotion? Marcus goes on to say that the oldest known musical artifacts are some bone flutes that are only 35000 years old, a blink in evolutionary time. And although kids are drawn to music early, they still prefer language when given a choice, and it takes years before children learn something as basic as the fact that minor chords are sad. Of course, music is universal now, but so are mobile phones, and we know that mobile phones aren't evolved adaptations. When we think about music, it's important to remember that an awful lot of features we take for granted in Western music - like harmony and 12-bar blues structure, to say nothing of pianos or synthesizers - simply didn't exist 1000 years ago. When ethnomusicologists have traded notes to try to figure out what's universal about music, there's been surprisingly little consensus. Some forms of music are all about rhythm, with little pitch, for example. Another thing to consider is that music is not quite universal even within cultures. At least 10% of our population is "tone deaf", unable to reproduce the pitch contours of even familiar songs. Everybody learns to talk, but not everybody learns to sing, let alone play an instrument. Some people, like Sigmund Freud, have no interest in music at all. Music is surely common, but not quite as universal as language.

 

On the other hand, the bone flutes are at least 35000 years old, but vocal music might be a lot older, given the fossil evidence on human and Neanderthal vocal tracts. Thirty-five thousand years sounds short in evolutionary terms, but it's still more than a thousand human generations, which is plenty of time for selection to shape a hard-to-learn cultural skill into a talent for music in some people, even if music did originate as a purely cultural invention. Maybe that's not enough time to make music into a finely tuned mental ability like language, but nobody knows yet how long these things take. Whether or not Neanderthals sang, music remains relatively recent in evolutionary terms, less than a 10th of a percent of the time that mammals have been on the planet. Still, we know responsiveness to music starts in the womb, and kids show a keen interest in music. We're born to listen for language, and music sounds sort of like language, so kids might respond because of that. But given the choice, infants prefer speech to instrumental music, and they analyze language more carefully than music. Video games, television shows and iPhones are all cultural artifacts that were shaped to be irresistible to human brains, and that provoke strong emotions like music does, but that doesn't mean that human brains were shaped to be attracted to them. There doesn't seem to be any part of the brain that is fully dedicated to music, and most (if not all) of the areas involved in music seem to have "day jobs" doing other things, like analyzing auditory input (temporal cortex), emotion (the amygdala) or linguistic structure (Broca's area). You see much the same diversity of brain regions active when people play video games. Face recognition has a long evolutionary history, and a specific brain region (the fusiform gyrus) attached, but music, like reading, seems to co-opt areas that already had other functions.

 

lm4.jpg

Geoffrey, on the other hand, argues that the traditional way to show that sexual selection shaped a trait is to look for big sex differences in the trait. But that's a bad strategy when you're dealing with a mutual-choice species like ours. In humans, both sexes are choosy - at least about forming the long-term relationships that produce most children - and both sexes display behavioral ornaments to each other, from music, arts, and jokes, to religious ideologies and moral virtues. You see a lot of music in semi-monogamous songbirds and gibbons too, with both sexes singing. So it's a mistake to assume that sexual selection for music required proto-Hendrix virtuosos to attract hundreds of female groupies. All you need is ancestors who fell in love partly on the basis of musical talent, among many other romantically attractive traits. But if you had mutual mate choice for music, wouldn't you expect dedicated neural circuits for music, like the brain areas for song learning and song production in songbirds, hummingbirds and parrots, that don't exist in non-singing birds?

 

Maybe, if we evolved music millions of years ago like they did. But since we're the only great apes with any aptitude for rhythm or melody, human music is probably much more recent: not enough time for such specialization of brain structure. And the songbirds never evolved language. If they had, we'd probably see overlapping brain areas for music and speech in their brains, just like ours. Which would have led their scientist-songbirds to argue that birdsong is just a side-effect of birdspeech.

 

One counterintuitive principle is that for sexually selected mental traits like music to work well as signals of general brain function and intelligence, they need to recruit a lot of different brain areas and mental abilities. Otherwise they wouldn't be very informative about the brain's general health. If musical talent didn't depend on general intelligence, general mental health, and general learning ability, it wouldn't be worth paying much attention to when you're choosing a mate. Content analyses show that pop song lyrics have usually concerned lust, love, or jealousy - around the world, at least throughout the 20th century. There's an emotional resonance to courtship music that you just don't see with purely cultural inventions. So why haven't we found any genes that are specifically tied to music? That's not surprising from a sexual selection perspective. For music to work as a "good genes" indicator in mate choice, it needs to recruit a lot of different genes, gene-regulatory systems and biochemical pathways. You shouldn't expect just a few "music genes" that explain most musical talent, but thousands of contributing genes. But that's not why we haven't found any music genes yet. Nobody's really looked. There's very little gene-hunting work on music, and hardly any twin research on the heritability of musical talent. There are two kinds of music genes that could matter: the music-talent genes that explain individual differences in musical talent among humans, and the music-capacity genes that explain why we have musical abilities at all compared to most other mammals. The music-talent genes might number in the tens of thousands. We already know there are more than half a million DNA base-pair differences that contribute to general intelligence differences between people, and a similar number might influence musical intelligence. But those music-talent genes will be much easier to identify using standard molecular genetics methods.

 

The music-capacity genes that distinguish musical humans from non-musical chimps might be far fewer in number, but much harder to identify. If we can identify them, though, and if they also exist in the Neanderthal genome (which is being pieced together now from fossil DNA), we'd know that music is probably at least 200000 years old, because we had diverged from Neanderthals by then. So it's true that music doesn't fossilize, but we still might learn when music evolved from the genetics. If we could really show decisively that Neanderthals could sing, that sort of genetic evidence would certainly help, but unless we find genes that are specifically tied to music, it might be hard to go in the other direction: to deduce whether Neanderthals could sing based on their genomes. Chimpanzees are much less interested in music than humans are, but we still haven't been able to link that to a particular genetic difference.

 

Of course, as Mark suggests, music might just be an illusion of instinct caused by cultural evolution. Once humans were sufficiently smart and social that cultural evolution could pick up steam, a new blind watchmaker was let loose on the world, one that could muster designs worthy of natural selection, and in a fraction of the time. Cultural selection could shape our artifacts to co-opt our innate capabilities. If the origin of music lies in nature-harnessing, then it will have many or all of the signature signs of instinct. But it won't be an instinct. Instead, it will be a product of cultural evolution, of nature-harnessing. And it won't be a mere invention that we must learn. In a sense, the brain doesn't have anything to learn - cultural evolution did all the learning instead, figuring out just the right stimulus shapes that would flow right into our emotional centers and get us hooked. For some further discussion on this topic click here.

 

So, what is it to be human? Unlike the original Homo sapiens, we're grown in a radically different petri dish. Our habitat is filled with cultural artifacts - the two heavyweights being language and music - designed to harness our brains' ancient capabilities and transform them into new ones. Humans are more than Homo sapiens. Humans are Homo sapiens who have been nature-harnessed into an altogether novel creature, one designed in part via natural selection, but also in part via cultural evolution.

 

 

Credits: Wikipedia, Discover Magazine, Noam Chomsky, Mark Changizi, Geoffrey Miller, Gary Marcus

Hrvoje Crvelin

South Atlantic Anomaly

Posted by Hrvoje Crvelin Apr 29, 2012

Space sometimes looks like a twilight zone to us, but we do not have to go that far to find one. The radiation belts are regions of high-energy particles, mainly protons and electrons, held captive by the magnetic influence of the Earth. They have two main sources. A small but very intense "inner belt" (some call it "the Van Allen Belt" because it was discovered in 1958 by James Van Allen) is trapped within 6500 km or so of the Earth's surface. It consists mainly of high-energy protons (10-50 MeV) and is a by-product of cosmic radiation, a thin drizzle of very fast protons and nuclei which fills all of our galaxy. In addition there exist electrons and protons (and also oxygen particles from the upper atmosphere) given moderate energies (say 1-100 keV) by processes inside the domain of the Earth's magnetic field. Some of these electrons produce the polar aurora ("northern lights") when they hit the upper atmosphere, but many get trapped, and among those, protons and positive particles have most of the energy.

 

Another point of particular interest to high-energy astrophysics is the South Atlantic Anomaly (SAA). We all know that the Earth's magnetic axis is not the same as its rotational axis. As the Earth's molten, ferromagnetic liquid core churns, it generates a magnetic field, and the north-south axis of this field is tilted about 16° from the rotational axis. The north magnetic pole is in the north Canadian islands, but it moves around a lot, and it's currently headed northwest at about 64 km per year. Here's the part that many people don't know. While the north magnetic pole is about 7° from the north rotational pole, the south magnetic pole is about 25° from the south rotational pole. A line drawn from the north magnetic pole to the south does not pass through the center of the Earth. Our magnetic field is torus shaped, like a giant donut around the Earth. But it's not only tilted, it's also pulled to one side, such that one inner surface of the donut is more squished up against the side of the Earth than the other. It's this offset that causes the South Atlantic Anomaly to sit at just one spot on the Earth. This is a region of very high particle flux about 250 km above the Atlantic Ocean off the coast of Brazil, and it is a result of the fact that the Earth's rotational and magnetic axes are not aligned. The particle flux is so high in this region that the detectors on our satellites must often be shut off (or at least placed in a "safe" mode) to protect them from the radiation. Below is a map of the SAA at an altitude of around 560 km. The map was produced by ROSAT by monitoring the presence of charged particles. The dark red area shows the extent of the SAA. The green to yellow to orange areas show Earth's particle belts.

 

saa1.jpg

 

The South Atlantic Anomaly comes about because the Earth's field is not completely symmetric. If we were to represent it by a compact magnet (which reproduces the main effect, not the local wiggles), that magnet would not be at the center of the Earth but a few hundred km away, in the direction away from the "anomaly". Thus the anomaly is the region most distant from the "source magnet" and its magnetic field (at any given height) is thus relatively weak. The reason trapped particles don't reach the atmosphere is that they are repelled (sort of) by strong magnetic fields, and the weak field in the anomaly allows them to reach further down than elsewhere.
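
A toy calculation makes the offset-magnet picture above easy to see. It assumes nothing beyond the 1/r³ fall-off of a dipole field with distance from the dipole itself; the offset distance below is only a rough, illustrative figure, not a measured value.

```python
# Why an offset dipole weakens the field on one side: the field falls off
# as 1/r^3 with distance from the dipole, not from the Earth's center, so
# shifting the effective dipole a few hundred km away from the South
# Atlantic leaves that side with a weaker field at the same altitude,
# letting trapped particles dip lower there.

R_EARTH = 6371.0   # km
ALTITUDE = 250.0   # km, roughly where the SAA is felt
OFFSET = 500.0     # km, illustrative displacement of the effective dipole

def relative_field(distance_km):
    return 1.0 / distance_km ** 3  # arbitrary units; only the ratio matters

r_orbit = R_EARTH + ALTITUDE
near_side = relative_field(r_orbit - OFFSET)  # side the dipole shifted toward
far_side = relative_field(r_orbit + OFFSET)   # the South Atlantic side

print(f"field ratio (far side / near side): {far_side / near_side:.2f}")
# prints ~0.64: roughly a third weaker over the anomaly in this toy model
```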

 

The shape of this anomaly changes over time. Since its initial discovery in 1958, the southern limits of the SAA have remained roughly constant, while a long-term expansion has been measured to the northwest, the north, the northeast, and the east. Additionally, the shape and particle density of the anomaly vary on a diurnal basis, with the greatest particle density corresponding roughly to local noon. At an altitude of approximately 500 km, it spans from -50° to 0° geographic latitude and from -90° to +40° longitude. The highest-intensity portion of the SAA drifts to the west at a speed of about 0.3° per year. The drift rate is very close to the rotation differential between the Earth's core and its surface, estimated to be between 0.3 and 0.5° per year. Current literature suggests that a slow weakening of the geomagnetic field is one of several causes for the changes in the SAA's borders since its discovery. As the geomagnetic field continues to weaken, the inner Van Allen belt gets closer to the Earth, with a commensurate enlargement of the anomaly at given altitudes.

 

saa2.png

Now, what are the effects of this monster? The South Atlantic Anomaly is of great significance to astronomical satellites and other spacecraft that orbit the Earth at several hundred kilometers altitude; these orbits take satellites through the anomaly periodically, exposing them to several minutes of strong radiation caused by the trapped protons in the inner Van Allen belt. The ISS, orbiting with an inclination of 51.6°, requires extra shielding to deal with this problem. The Hubble Space Telescope does not take observations while passing through the anomaly. Astronauts are also affected by this region, which is said to be the cause of the peculiar "shooting stars" (phosphenes) seen in their visual field. One of the current ISS astronauts, Don Pettit, describes these effects in his blog. The eye's retina is an amazing structure - it's more impressive than film or a CCD camera chip, and it reacts to more than just light. It also reacts to cosmic rays, which are plentiful in space. When a cosmic ray happens to pass through the retina it causes the rods and cones to fire, and you perceive a flash of light that is really not there. The triggered cells are localized around the spot where the cosmic ray passes, so the flash has some structure. A perpendicular ray appears as a fuzzy dot. A ray at an angle appears as a segmented line. Sometimes the tracks have side branches, giving the impression of an electric spark. The rate at which these flashes are seen varies with orbital position, Don continues. When passing through the anomaly, where the flux of cosmic rays is 10 to 100 times greater than along the rest of the orbital path, eye flashes will increase from one or two every 10 minutes to several per minute.

 

saa3.jpg

Passing through the South Atlantic Anomaly is thought to be the reason for the early failures of the Globalstar network's satellites. The PAMELA experiment, while passing through the anomaly, detected antiproton levels that were orders of magnitude higher than expected from normal particle decay. This suggests the Van Allen belt confines antiparticles produced by the interaction of the Earth's upper atmosphere with cosmic rays. NASA has reported that modern laptops have crashed when space shuttle flights passed through the anomaly, and Don has since confirmed this in his blog, adding that cameras suffer too. During the Apollo missions, astronauts saw these flashes after their eyes had become dark-adapted. When it was dark, they reported a flash every 2.9 minutes on average. Only one Apollo crew member involved in the experiments did not report seeing the phenomenon: Apollo 16's Command Module Pilot Ken Mattingly, who stated that he had poor night vision.

 

There are experiments on board the ISS to monitor how much radiation the crew is receiving. One experiment is the Phantom Torso, a mummy-looking mock-up of the human body which determines the distribution of radiation doses inside the human body at various tissues and organs.

 

There’s also the Alpha Magnetic Spectrometer experiment, a particle physics experiment module that is mounted on the ISS. It is designed to search for various types of unusual matter by measuring cosmic rays, and hopefully will also tell us more about the origins of both those crazy flashes seen in space, and also the origins of the Universe.

 

We know that the South Atlantic Anomaly is hazardous to electronic equipment and to humans who spend time inside it. We know that it dips down close to the Earth. But although the anomaly is a dangerous place, its edges are pretty well defined. The closest it ever gets to the Earth's surface is about 200 km, and at that height it's very small. Your commercial airplane certainly won't reach that altitude. And with everything we know about it today, it is hardly a twilight zone either.

 

 

Credits: NASA, Wikipedia, Don Pettit, Astrobiology Magazine, Brian Dunning

Hrvoje Crvelin

Eye in the sky I

Posted by Hrvoje Crvelin Apr 28, 2012

Today was a rainy day here in the Netherlands, and for some reason I felt like listening to The Alan Parsons Project. The Alan Parsons Project was a British progressive rock band, active between 1975 and 1990, consisting of singer Eric Woolfson and keyboardist Alan Parsons surrounded by a varying number of session musicians. Behind the revolving lineup and the regular sidemen, the true core of the Project was the duo of Parsons and Woolfson. Woolfson was a lawyer by profession, but also a composer and pianist. Parsons was a successful producer and accomplished engineer. Almost all songs on the band's albums are credited to this duo. Though they started around the time I started to walk, I only began listening to their music shortly before I got to university (which was after they stopped working together). That is not to say that I hadn't heard of them before; I had, but I was not aware of their whole opus. One of the songs you might have heard is called Sirius/Eye In The Sky (it is really two songs, where the first is a sort of instrumental intro). The title gave me the idea to post pictures made in space - but not the usual stuff made by satellites, as those pictures are all processed. I'm talking about pictures which do not need to be processed, like those made by astronauts on the ISS or by satellites in Earth's orbit, set against our tiny home. Enjoy!

 

eye001.jpg

Picture: The ISS makes one revolution every 90 minutes (the Moon takes 28 days). As a result, long-exposure pictures taken from the Station show star trails as circular arcs, with the center of rotation being the poles of the Station's orbit (perpendicular to the orbital plane).

Credits: ISS, Don Pettit

 

 

eye002.jpg

Picture: Don's star trail images are made by taking a time exposure of about 10 to 15 minutes. However, with modern digital cameras, 30 seconds is about the longest exposure possible, due to electronic detector noise effectively snowing out the image. To achieve the longer exposures, Don does what many amateur astronomers do: he takes multiple 30-second exposures, then "stacks" them using imaging software, thus producing the longer exposure.

Credits: ISS, Don Pettit
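
The stacking Don describes is easy to try at home. Here is a minimal Python sketch using NumPy and Pillow (the folder and file names are hypothetical): it combines a series of 30-second frames with a per-pixel maximum, the usual "lighten" blend that joins each star's short arcs into one long trail.

```python
import glob

import numpy as np
from PIL import Image

# Combine many short exposures into one long star-trail image.
# A per-pixel maximum keeps the brightest value each star ever left
# at a pixel, so the short arcs link up into continuous trails.
frames = sorted(glob.glob("exposures/*.jpg"))  # hypothetical 30 s frames
assert frames, "no input frames found"

stack = None
for path in frames:
    frame = np.asarray(Image.open(path), dtype=np.uint8)
    stack = frame if stack is None else np.maximum(stack, frame)

Image.fromarray(stack).save("star_trails.jpg")
```

The maximum blend (rather than averaging) is what makes this work for star trails: averaging would wash faint stars out against the sky background, while the maximum preserves every bright track.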

 

 

eye003.jpg

Picture: The sky is not the limit for producing artistic compositions. Put a camera on a tripod, point at a dark starry sky, and hold the shutter open for about 10 minutes, and the image will show stars as circular arcs. Normally, these star trails are created as the Earth rotates on its axis, with the center being close to either Polaris, the north star, or the Southern Cross, depending on which hemisphere you are in.

Credits: ISS, Don Pettit

 

 

eye004.jpg

 

Picture: The Automated Transfer Vehicle docking with the International Space Station to resupply the astronauts.

Credits: ISS, Don Pettit

 

 

eye005.jpg

Picture: Aurora

Credits: ISS, Don Pettit

 

 

 

 

Video: More auroras

Credits: ISS, Phil Plait

 

 

 

 

Video: One of the most beautiful aurora takes from the ISS

Credits: ISS, Michael Koenig

 

 

 

 

Video: A time-lapse taken from the front of the International Space Station as it orbits our planet at night. This movie begins over the Pacific Ocean and continues over North and South America before entering daylight near Antarctica.

Credits: ISS, James Drake

In physics, quasiparticles and collective excitations (which are closely related) are emergent phenomena that occur when a microscopically complicated system such as a solid behaves as if it contained different (fictitious) weakly interacting particles in free space. For example, as an electron travels through a semiconductor, its motion is disturbed in a complex way by its interactions with all of the other electrons and nuclei; however, it behaves approximately like an electron with a different mass traveling unperturbed through free space. This "electron" with a different mass is called an "electron quasiparticle". In an even more surprising example, the aggregate motion of electrons in the valence band of a semiconductor is the same as if the semiconductor instead contained positively charged quasiparticles called holes.

 

Other quasiparticles or collective excitations include phonons (particles derived from the vibrations of atoms in a solid), plasmons (particles derived from plasma oscillations), and many others. These fictitious particles are typically called "quasiparticles" if they are fermions (like electrons and holes), and "collective excitations" if they are bosons (like phonons and plasmons), although the precise distinction is not universally agreed upon. For a full list of quasiparticles, click here.

 

Last week it was announced that an electron had been observed to decay into two separate parts, each carrying a particular property of the electron: a spinon carrying its spin - the property making the electron behave as a tiny compass needle - and an orbiton carrying its orbital moment - which arises from the electron's motion around the nucleus. These newly created particles, however, cannot leave the material in which they have been produced. Sadly, the announcement said nothing about the holon - the quasiparticle resulting from electron spin-charge separation that should carry the charge.

 

electron4.jpg

All electrons have a property called "spin," which can be viewed as the presence of tiny magnets at the atomic scale and which thereby gives rise to the magnetism of materials. In addition to this, electrons orbit around the atomic nuclei along certain paths, the so-called electronic "orbitals". Usually, both of these quantum physical properties (spin and orbital) are attached to each particular electron. In an experiment performed at the Paul Scherrer Institute (Switzerland), these properties have now been separated.

 

It had been known for some time that, in particular materials, an electron can in principle be split, but until now the empirical evidence for this separation into independent spinons and orbitons was lacking. Now that we know where exactly to look for them, we are bound to find these new particles in many more materials.

 

The electron's break-up into two new particles has been gleaned from measurements on the copper-oxide compound Sr2CuO3. This material has the distinguishing feature that the particles in it are constrained to move only in one direction, either forwards or backwards. Using X-rays, scientists have lifted some of the electrons belonging to the copper atoms in Sr2CuO3 to orbitals of higher energy, corresponding to motion of the electron around the nucleus with higher velocity. After this stimulation with X-rays, the electrons split into two parts. In this study, the fundamental spin and orbital moments have been observed, for the first time, to separate from each other.

 

In the experiment, X-rays from the Swiss Light Source (SLS) are fired at Sr2CuO3. By comparing the properties (energy and momentum) of the X-rays before and after the collision with the material, the properties of the newly produced particles can be traced. These experiments not only require very intense X-rays, with an extremely well-defined energy, to have an effect on the electrons of the copper atoms, but also extremely high-precision X-ray detectors. In this respect, the SLS at the Paul Scherrer Institute is leading the world at the moment.

 

Observation of the electron splitting apart may also have important implications for another current research field - that of high-temperature superconductivity. Due to the similarities in the behaviour of electrons in Sr2CuO3 and in copper-based superconductors, understanding the way electrons decay into other types of particles in these systems might offer new pathways towards improving our theoretical understanding of high-temperature superconductivity.

 

 

Credits: Paul Scherrer Institut (PSI)

What are we doing in space? That was the exact title of a speech given by Brian May at the STARMUS festival. It is a somewhat hard speech, and the first time I heard it my impression was that Brian was wrong. He expressed reservations about us going into space as a species. My first reaction was a bit of a shock. Moreover, Brian is a passionate lover of astronomy, so why would he say that in the first place? The speech mostly identifies facts which do not speak in favor of the human race, and asks the epic question of why things would be different if we ever get into space. Below is a weak YouTube recording of the speech, but you are better off looking for a transcript. The February edition of Astronomy magazine contains the whole speech.

 

 

 

 

The fact that political, business, and military or militaristic philosophy have always driven man's exploration is a sad truth of history. Man has explored this world, finding new resources and new lands to expand into and colonize. Would we ever have gotten to the Moon if there had been no Cold War? Perhaps, but not as quickly as we did. But it happened. The collapse of the manned space program has given wings to the private sector to send spacecraft into space. Think of Space X. If you didn't think it would be possible in your lifetime to take a ride through the universe, you might need to think this one over again: a private space flight is scheduled for early May. The man behind the success of Space X is Elon Musk.

 

Elon Musk was on the team that created PayPal. That got him going. Then he created Tesla Motors. Now it's Space X and its flying pride, called "Dragon". Elon is an exceptional person. He is not only funding these projects, but is involved as much as possible. Take the Space X "Dragon" as an example - he is its chief designer. Maybe I'm giving him too much credit - after all, the mission hasn't started yet - but given the success so far I do hope he succeeds. Of course, it is not just Space X - Space X is just one of many private companies competing to replace NASA's defunct space shuttle program as a means of delivering cargo and, eventually, crews to the ISS, or possibly to other locations. Once the private sector jumps in, you know it is there to stay. The private sector is more about exploitation than exploration. Exploration is good if it leads to exploitation. Governments might be there to supply a certain framework to protect the common interest, but they have interests too. Brian asks why go out there when there is so much here on Earth to be fixed? I believe it is unstoppable that we will leave this planet and, before that, start exploitation elsewhere (the Moon, Mars, you name it). And I doubt things will be any better out there than they are here at home. Our present home.

 

sm0.jpg

Two days ago I read about U.S. space entrepreneurs' announced venture to launch robotic prospectors into space in hopes of extracting water and precious metals from asteroids. This generated its share of excitement this week. Phil Plait has been excited for sure - please check here. He spoke with Chris Lewicki (Planetary Resources' president and chief engineer). Phil also raised the question of profit and got the following in return: "The investors aren't making decisions based on a business plan or a return on investment.... they're basing their decisions on our vision". I'd like to believe that, but I don't. Perhaps my mind is corrupted by daily news and history, but surely history repeats itself and we haven't found a way to escape it yet. Do not get me wrong, I do see many good things coming out of this, but I also fear the consequences. And with alarming climate change going against us, I feel uneasy knowing that future infrastructure might exist outside Earth, where only a select few might be taken to a safe place using this same private infrastructure being built now. It sounds like a story from one of those cheap SciFi movies, and I hope I'm wrong. Not because of me, but because of my kids.

 

Meanwhile, the buzz about this space venture spread, and all sorts of comments could be seen. I certainly found interesting one discussing the legal framework for this sort of voyage. The applicable legal system, both in terms of U.S. and international law, must be improved and expanded before any space-mined products are brought back to Earth to sell on the market, according to Frans von der Dunk, who is a professor of space law at the University of Nebraska-Lincoln and an international expert in the field. Neither the public interests, ranging from security, safety and the environment to protecting Neil Armstrong's footsteps, nor the interests of the company in securing its investments are properly protected, he said. Consequently, there is no legal certainty that those activities would not become seriously challenged. If the plans by Planetary Resources succeed, it would create its fair share of confusion about mining rights in space - from who owns what to how business interests beyond Earth's orbit would be specifically protected. Frans cited the 1967 Outer Space Treaty, which forms the basis of international space law and to which all space-faring nations are party. The treaty says that outer space constitutes a "global commons". This means that extraterrestrial bodies can never be part of one country such as the United States, which in turn means that U.S. laws to protect public or private business interests likely cannot be applied. This prompts several questions: What rights of protection would the mining company have against others wishing to "intrude", given that a global commons is in principle open to everyone? And who is going to be held liable - and to what extent - when mining activities cause damage to other space activities, or are harmed by them?

 

 

Credits: Brian May, Phil Plait, University of Nebraska-Lincoln

Bacteria are a large domain of prokaryotic microorganisms. Typically a few micrometres in length, bacteria have a wide range of shapes, ranging from spheres to rods and spirals. Bacteria are present in most habitats on Earth, growing in soil, acidic hot springs, radioactive waste, water, and deep in the Earth's crust, as well as in organic matter and the live bodies of plants and animals, providing outstanding examples of mutualism in the digestive tracts of humans, termites and cockroaches. There are typically 40 million bacterial cells in a gram of soil and a million bacterial cells in a millilitre of fresh water; in all, there are approximately five nonillion (a 5 followed by 30 zeroes) bacteria on Earth, forming a biomass that exceeds that of all plants and animals. The study of bacteria is known as bacteriology, a branch of microbiology.

 

There are approximately ten times as many bacterial cells in the human flora as there are human cells in the body, with large numbers of bacteria on the skin and as gut flora. The vast majority of the bacteria in the body are rendered harmless by the protective effects of the immune system, and a few are beneficial. However, a few species of bacteria are pathogenic and cause infectious diseases, including cholera, syphilis, anthrax, leprosy, and bubonic plague. The most common fatal bacterial diseases are respiratory infections, with tuberculosis alone killing about 2 million people a year, mostly in sub-Saharan Africa. In developed countries, antibiotics are used to treat bacterial infections and in agriculture, so antibiotic resistance is becoming common. In industry, bacteria are important in sewage treatment and the breakdown of oil spills, the production of cheese and yogurt through fermentation, the recovery of gold, palladium, copper and other metals in the mining sector, as well as in biotechnology, and the manufacture of antibiotics and other chemicals.

 

Once regarded as plants constituting the class Schizomycetes, bacteria are now classified as prokaryotes. Unlike cells of animals and other eukaryotes, bacterial cells do not contain a nucleus and rarely harbour membrane-bound organelles. Although the term bacteria traditionally included all prokaryotes, the scientific classification changed after the discovery in the 1990s that prokaryotes consist of two very different groups of organisms that evolved independently from an ancient common ancestor. These evolutionary domains are called Bacteria and Archaea.

 

bacteria1.jpg

 

The caverns of Lechuguilla Cave are some of the strangest on the planet. Its acid-carved passages extend for over 120 miles. They’re filled with a wonderland of straws, balloons, plates, stalactites of rust, and chandeliers of crystal. Parts of Lechuguilla have been cut off from the surface for four to seven million years, and the life-forms there - mainly bacteria and other microbes - have charted their own evolutionary courses. Gerry Wright from McMaster University in Canada has found that many of these cave bacteria can resist our antibiotics. They have been living underground for as long as modern humans have existed, but they can fend off our most potent weapons. The bacteria there have barely been exposed to humans, much less our antibiotics. And since Lechuguilla’s rock is also impermeable, and no sources of water flow into its caverns, it’s very unlikely that antibiotics could have washed in from the surface. Despite their isolation, the cave bacteria collectively resisted almost every type of antibiotic that we currently use. That includes last-resort drugs like daptomycin, which are used to treat hard-to-kill infections. Around two-thirds of the strains resisted three or four different families, and three of them could shrug off 14 different classes.

 

The team says their discovery supports the idea that antibiotic resistance long predates the rise of modern medicine. This shouldn't be surprising. As Ed Yong wrote last year, many antibiotics come from natural sources, or are tweaked versions of such chemicals. Penicillin, the first to be synthesised, famously comes from a mould that surreptitiously landed on Alexander Fleming's plate. Daptomycin comes from a bacterium called Streptomyces roseosporus. These chemicals are not human inventions: they're weapons that microbes evolved to keep each other at bay. They're also ancient. They evolved between 40 million and 2 billion years ago, and it's extremely likely that counter-measures have existed for just as long. Indeed, just last year, Wright's group found the oldest known examples of resistance genes, in bacteria from 30000-year-old frozen soil samples. So, in some ways, Wright's new study simply tells us what many already knew. But it also provided some surprises. While many of the cave bacteria resisted antibiotics with the same strategies as their surface relatives, others used tactics that are new to science. One species, for example, resisted daptomycin by cracking the drug apart at a critical point. These discoveries could act as an early warning system. Bacteria can transfer genes between one another very easily, so the tricks that environmental bacteria use could easily end up in species that kill people in clinics. Studies like this give us early reconnaissance on unfamiliar weapons that could one day fall into enemy hands. But while all this is safely far away from us, there is one place where frozen microbial life might just regain its freedom - Earth's glaciers.

 

bacteria2.jpg

Melting polar ice has a worrisome list of consequences - methane gas release, rising sea levels and the liberation of long-frozen, 750000-year-old microbes. Scientists are concerned about how they might affect the environment. Thawing ice sheets will allow ancient microbial genes to mix with modern ones, flooding the oceans with never-before-seen types of organisms. The biggest effect of these newly liberated microbes will likely be seen in the oceans. Earth's glaciers and sub-glacial sediments contain more microbial cells and carbon than all the lakes and rivers on the surface of the planet - a huge load of organic matter that, if thawed, would end up in the sea. Microorganisms have a remarkable ability to survive in the ice, staying minimally active while repairing DNA damage from radiation or oxidation. Bacteria as old as 750000 years have been thawed and revived from glacial ice before. Read more about the possibly big effect of these small organisms at Scientific American.

 

But there is a brighter side to this story. Bacteria were their own worst enemies for eons before humans arrived. They competed over resources, and killed each other with chemicals. These microscopic wars might furnish modern bacteria with ways of resisting our drugs, but they might also provide us with new drugs. And we keep finding unconventional uses for bacteria - for example, self-healing concrete.

 

Dr Alan Richardson is using a ground-borne bacterium - Bacillus megaterium - to create calcite, a crystalline form of natural calcium carbonate. This can then be used to block the concrete's pores, keeping out water and other damaging substances to prolong the life of the concrete. The bacteria are grown on a nutrient broth of yeast, minerals and urea and are then added to the concrete. With a food source in the concrete, the bacteria breed and spread, acting as a filler to seal the cracks and prevent further deterioration. It is hoped the research could lead to a cost-effective cure for "concrete cancer", and it has enormous commercial potential. While further research is needed, Dr Richardson is hopeful that the repair mortar will also be effective on existing structures. So-called "concrete cancer" - the swelling and breaking of concrete - is estimated to cause billions of pounds' worth of damage to buildings.

 

 

Credits: Ed Yong, Sarah Zhang, Northumbria University

Some time ago, I did a series of posts about the particle zoo we have today. It is no longer the zoo it used to be, for sure. Let's say our understanding and grouping of particles has evolved. We can describe all matter with two types of particles - quarks and electrons. Quarks make up protons and neutrons. Join these folks together and you get atoms. Join them together and you get molecules. And so on... Sean Carroll is writing a new book called "The Particle At the End of the Universe". This is a popular-level book on the Large Hadron Collider (LHC) and the search for the Higgs boson. He also made a wonderful flowchart for the book. Check it out (make sure to click to get the full size):

 

zoo.png

I did cover most if not all of these in previous articles in the Particle Zoo series. But the above flowchart is rather cool and gives you an easy overview of our little zoo. There is one catch, though. The Higgs remains an assumption, and while the LHC may give us a strong indication of a Higgs at 125 GeV, cautious scientists warn that signals like that have been seen in the past and then simply vanished. And once we know something is there, the research into what sort of particle it is begins (yes, it could be something else and not the Higgs after all). Of course, there is another potential situation. What if we find a new particle? That's exactly what the headlines have been telling us these past few days. So what did we find? We found a new hadron (a baryon, to be more precise)!

 

In particle physics, the baryon family refers to particles that are made up of three quarks. Quarks form a group of six particles that differ in their masses and charges. The two lightest quarks, the so-called "up" and "down" quarks, form the two atomic components, protons and neutrons. All baryons that are composed of the three lightest quarks ("up", "down" and "strange") are known. Only very few baryons with heavy quarks have been observed to date. They can only be generated artificially in particle accelerators, as they are heavy and very unstable. In other words, matter can form in different energy states. The most stable one - that is, the one that survives the longest before decaying - is the so-called "ground state", in which particles have the lowest possible energy. States with higher energy are called "excited states". They are still allowed by Nature, but they are unstable. The higher the formation energy (i.e. the mass), the more unstable they are.

 

In the course of proton collisions in the LHC at CERN, physicists Claude Amsler, Vincenzo Chiochia and Ernest Aguiló from the University of Zurich's Physics Institute managed to detect a baryon with one light and two heavy quarks. The observed particle, called Xi_b^*, comprises one "up", one "strange" and one "bottom" quark (usb), is electrically neutral and has a spin of 3/2. Its mass is comparable to that of a lithium atom. The new discovery means that two of the three baryons predicted by theory in the usb composition have now been observed. The discovery was based on data gathered by the CMS detector, which the University of Zurich was involved in developing. The new particle cannot be detected directly, as it is too unstable to be registered by the detector. However, Xi_b^* breaks up in a known cascade of decay products. Ernest Aguiló identified traces of the respective decay products in the measurement data and was able to reconstruct the decay cascades starting from Xi_b^* decays. A total of 21 Xi_b^* baryon decays were found - statistically sufficient to rule out a mere fluctuation.
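
To get a feel for what "statistically sufficient" means in a counting experiment like this, here is a back-of-the-envelope Python sketch. The background expectation below is a made-up placeholder - the actual CMS background estimate is not quoted in this post - so this only illustrates the kind of arithmetic involved, not the real analysis.

```python
import math

def counting_significance(observed, expected_background):
    """Rough significance (in sigma) of an excess of counts, using the
    simple Gaussian approximation S = (N - B) / sqrt(B)."""
    return (observed - expected_background) / math.sqrt(expected_background)

# 21 observed decays over a hypothetical background of 3 expected events:
print(f"{counting_significance(21, 3.0):.1f} sigma")  # ~10.4 with these toy numbers
```

Particle physics conventionally requires about 5 sigma before claiming an observation; with a small expected background, 21 clean decays clears that bar comfortably, which is why a fluctuation can be ruled out.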

 

The discovery of the new particle confirms the theory of how quarks bind and therefore helps us understand the strong interaction, one of the four basic forces of physics, which determines the structure of matter. Does this discovery make any difference to the flowchart above? No. This is a composite particle, and those are not listed there. Besides, theory already predicted its existence, and what CMS brought was yet another confirmation that our theory works fine for now. Nevertheless, congratulations to CMS!

 

 

Credits: Sean Carroll, University of Zurich, Antonella del Rosso

Hrvoje Crvelin

Magneto among us

Posted by Hrvoje Crvelin Apr 28, 2012
magneto1.PNG

Magneto is a fictional character (that's good news!) that appears in comic books published by Marvel Comics. He is the central villain of the X-Men comics, as well as the TV shows and the films. A powerful mutant with the ability to generate and control magnetic fields, in his early appearances, his motive was simple megalomania, but writers have since fleshed out his character and origin, revealing him to be a Jewish Holocaust survivor whose actions are driven by the purpose of protecting the mutant race from suffering a similar fate. His role in comics has varied from supervillain to antihero to superhero.

 

Sir Ian McKellen has portrayed Magneto through the X-Men film series, while Michael Fassbender plays a younger version of the character in the film X-Men: First Class. Magneto was ranked number 1 by IGN's Top 100 Comic Book Villains list, was listed number 17 in Wizard's Top 100 Greatest Villains Ever list, and was ranked as the 9th Greatest Comic Book Character Ever in Wizard's list of the 200 Greatest Comic Book Characters of All Time, the second highest villain on that list.

 

Being Magneto would be cool. Not as a bad guy, of course, but having those powers, that is. But as we said, this is a fictional character, so just forget about it. There's nothing on this planet that has those powers. But there is something that comes close. Not in terms of wielding magnetism as a power, but in terms of making use of it.

 

Of all the super-senses that animals possess, the ability to sense the Earth’s magnetic field must be the most puzzling. We’ve known that birds can do it since the 1960s, but every new attempt to understand this ability - known as magnetoreception - just seems to complicate matters even further. The first clues to the basis of magnetoreception came from a surprising source. In 1975, some bacteria that live in the mud on sea floors were found to contain chains of crystals of iron compounds. As these chains line up with Earth’s field, they align the bacteria along with them, ensuring they swim downwards, away from oxygen-rich waters. Essentially, each bacterium is a tiny compass. That suggested some animals might have cells containing similar crystals, whose movement would allow the animals to sense magnetic fields. Finding such cells, though, proved far from easy. Senses are typically linked to openings in the body that allow organs like eyes, ears and tongues to make contact with the outside world. Magnetic fields, however, pass freely through bone and tissue, so the receptors could be anywhere. Among birds, magnetic crystals were first discovered in homing pigeons and bobolinks. Nerve endings in the skin inside the upper beak contain lots of bullet-shaped structures rich in iron. It took decades to prove they really are used for magnetoreception, though.  In 2010, similar structures were found in robins, garden warblers and domestic chickens. These species hail from diverse lineages, so it now appears that iron-based magnetoreception is common to most, if not all, birds.

 

While some researchers were hunting for magnetic crystals in animals, others took a very different approach. Biophysicist Klaus Schulten had been studying some unusual chemical reactions that can be affected by magnetism. He realised that if similar reactions took place in living things, it might enable them to detect magnetism. Electrons normally dance round a molecule in pairs, but light can break this happy tango by shunting an electron from one molecule to another. The result is a pair of radicals - molecules with a solo electron. Electrons have a quantum property called spin, and in a radical pair the spins of the two unpaired electrons are linked; they either spin together or in opposite directions. The angle of a magnetic field can affect the flipping of the electrons from one of these spin states to the other, and in doing so, it can affect the outcome or the speed of chemical reactions involving the radical pair.

 

magneto3.jpg

 

Schulten came up with the idea that radical pairs might help to explain magnetoreception back in 1978. His first paper on it was rejected by Science with a note that read, "A less bold scientist might have designated this idea to the waste paper basket." Instead, Schulten published his idea in an obscure journal and kept on refining it. He realised that because the formation of a radical pair needs light, it probably takes place in the eye. If cells in the retina contained a molecule that formed radical pairs, and each molecule was aligned the same way within the cell, the angle of these molecules - and thus their behaviour in a magnetic field - would change across the bird's hemispherical retina. If the bird could somehow detect the changing patterns across the retina as it moved, it would thus be able to sense the Earth's magnetic field. A series of studies in the 1980s and 1990s provided some support. They showed that the compass of several bird species requires light. It does not need much light - night-migrating birds like robins get enough - but it does need some. What's more, they found the light has to be from the blue-green end of the spectrum. As far as anyone knew, though, no molecule capable of forming radical pairs existed in the eye. Then in 1998, Schulten heard about cryptochromes, proteins found in plants and animals that detect blue light. Their main role appears to be keeping internal clocks running on time. What struck Schulten, though, is that when light hits a cryptochrome, the protein transfers one of its electrons to a smaller molecule called FAD - potentially creating a radical pair. In 2000, Schulten published an updated version of the radical pair hypothesis, arguing that the magnetic compass involves cryptochrome and thus depends on blue-green light. It was predicted that the compass could be disrupted by high-frequency magnetic fields, which interfere with the flips between spin states. In 2004, it was shown that high-frequency magnetic fields can indeed prevent robins from orientating themselves correctly. The same is true of other birds, too. Then in 2007, Miriam Liedvogel found that a cryptochrome from the garden warbler can produce a radical pair under blue light that lasts for milliseconds, more than long enough to be affected by the Earth's magnetic field. By knocking out genes, it was shown that the compass of fruit flies relies on cryptochromes, and the same appears true for some other insects, including butterflies.

 

The tight connection between vision and magnetoreception suggests that birds can literally see magnetic fields. Schulten has suggested that the fields might appear as areas of light and shade superimposed on top of what birds normally see. This could explain why, in 2010, Katrin Stapput managed to disorientate robins by covering their right eyes with frosted goggles. Birds may use lines and edges to distinguish between what they actually see and the more fuzzy overlaid magnetic information. If the underlying image is blurred, the birds may no longer be able to distinguish between image and overlay. Stapput covered only the right eye because earlier studies had found that a robin’s compass is confined to its right eye, and the same appears true for many migratory birds. That may seem surprising, but since having two compasses provides no extra information, there is no reason to have one in each eye.

 

magneto2.jpg

So far, only garden warblers are known to have compasses in both eyes. The notion that birds have a heads-up display for their compass is evocative, but still speculative.

 

If birds’ compasses are located in their eyes, though, why do they also have iron-based magnetoreceptors in their beaks? It turns out that birds actually have two magnetic senses. By monitoring nerve activity, researchers have shown the magnetoreceptors in the beak respond to changes in the intensity of the magnetic field, rather than its direction. How, exactly, is not clear. The crystals could be attached to stretch receptors that pick up the tiny forces involved. Alternatively, the moving crystals could open or close molecular gates on the surface of nerve cells, triggering signals. Either way, the ability to sense the strength of a magnetic field could be even more helpful than having a compass. Field strength varies from place to place because of varying amounts of magnetic material in Earth’s crust, and it is highest at the poles and lowest at the equator. As birds fly around, they could build up a mental map of these magnetic hills and valleys. To get an idea of how useful such maps could be, imagine being dropped in mountainous terrain in thick mist, and trying to get to a specific location. A compass alone would be of little use. With an altimeter and a contour map instead, you could both pinpoint your location and work out which way to go.
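
The equator-to-pole intensity gradient is smooth enough to serve as one axis of such a map. A toy dipole model makes the point (a sketch with round numbers; the real field has local anomalies on top of this):

```python
import math

# Toy dipole model of Earth's field strength:
# B(lat) = B_eq * sqrt(1 + 3 * sin(lat)^2), with B_eq ~ 30 microtesla.
B_EQ = 30.0  # field strength at the magnetic equator, microtesla

def field_strength(mag_lat_deg):
    """Field intensity in microtesla at a given magnetic latitude."""
    s = math.sin(math.radians(mag_lat_deg))
    return B_EQ * math.sqrt(1 + 3 * s * s)

for lat in (0, 30, 60, 90):
    print(f"magnetic latitude {lat:2d} deg -> ~{field_strength(lat):.0f} uT")
# Roughly 30 uT at the equator rising to ~60 uT at the poles: a gradient
# a bird's intensity sense could read the way we read contour lines.
```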

 

The idea that birds create magnetic maps is supported by studies like those on Australian silvereyes done in the 1990s. Researchers exposed the birds to a strong pulse that altered the magnetism of iron crystals in their beaks but left the eye compass unaffected. In juvenile birds that had just left the nest, this made no difference - they still tried to head in the right direction. Birds that had migrated before, however, all headed in the wrong direction after the pulse. This suggests that the juvenile birds were relying on the compass in their eyes, whereas the experienced birds were trying to navigate based on their mental magnetic map, using the intensity receptor in their beaks. Of course, in natural situations birds use a whole range of cues for navigation, not just magnetism. An insight into how they combine these different kinds of information came from a recent study on night-migrating thrushes. When the thrushes were exposed to artificial magnetic fields at sunset, they flew in the wrong direction during the night when released. After seeing the next sunset, however, they corrected their courses. So it appears some birds calibrate their magnetic compasses against the sun each day.

 

In 2012, Le-Qing Wu and David Dickman found neurons in a pigeon’s brain that encode the properties of a magnetic field. They buzz in different ways depending on how strong the field is, and which direction it’s pointing in. This is important: scientists had identified parts of the brain involved in magnetoreception, but until now no one had managed to nail down the actual neurons responsible for the sense. It’s a key puzzle piece that has been missing for a very long time. But this discovery doesn’t solve the magnetoreception puzzle. If anything, it makes it more complex, as it conflicts with the idea of two separate magnetic detectors (eye and beak). And it looks like the new magnetic neurons don’t hook up to either of these. If these neurons are responding to magnetic fields, which part of the bird is feeding them their information? Is there a third sensor?

 

Wu and Dickman found their neurons by placing pigeons in a set of coils that can produce bespoke magnetic fields. First, they programmed the coils to cancel out the Earth’s magnetic field around the pigeon’s head. Next, they created fields of their own, and gradually altered their strength and direction. As the fields shifted, Wu and Dickman recorded the activity of individual neurons in the pigeons’ vestibular brainstem - an area that connects the brain and spine, and is involved in balance. Based on earlier experiments, they knew that neurons in this area fire when pigeons are using their magnetic sense. The duo found 53 neurons that fire at different strengths depending on how strong the magnetic fields around them are. They’re most sensitive to a range of intensity that’s naturally produced by the Earth’s actual magnetic field. They also fire differently depending on where the field is pointing along the horizon (the azimuth), where it points above or below the horizon (the elevation), and the direction it points in (the polarity). The last bit was a surprise. Earlier experiments from the 1970s showed that birds aren’t sensitive to the polarity of the Earth’s magnetic field.  But Wu and Dickman’s experiments suggest otherwise. As “north” moves around the bird’s head, the neurons fire at their fullest in one direction, and at their weakest in the opposite one.
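
The paper itself is not quoted here with a formula, but the reported behaviour - peak firing when "north" points one way, minimal firing the opposite way - is what a simple direction-tuned cell would produce. A toy model of my own, purely for illustration:

```python
import math

# Toy tuning curve for a polarity-sensitive neuron (my own sketch,
# not Wu and Dickman's actual fit): firing peaks when the field points
# along the cell's preferred azimuth and bottoms out opposite to it.
def firing_rate(field_azimuth_deg, preferred_deg, base=20.0, gain=15.0):
    """Firing rate in spikes/s as a smooth function of field direction."""
    delta = math.radians(field_azimuth_deg - preferred_deg)
    return base + gain * math.cos(delta)

for az in (0, 90, 180, 270):
    rate = firing_rate(az, preferred_deg=0)
    print(f"field azimuth {az:3d} deg -> {rate:.0f} spikes/s")
# Peak at 0 deg, trough at 180 deg: a cell like this distinguishes north
# from south, i.e. it is sensitive to polarity - the surprising part.
```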

 

How do birds use this information to navigate? It’s easy to guess. Sensing the azimuth tells the bird where to head, just like a compass. Sensing the elevation provides information about latitude. Sensing intensity could tell the bird where exactly it is, because the strength of the Earth’s magnetic field varies from place to place, often at a very fine scale. This is all plausible in theory, but how it works in practice is another matter. And there is an even bigger mystery - where’s the sensor? If these neurons are processing magnetic fields, what’s feeding them with information? Where’s the compass? If the magnetic neurons in the brainstem aren’t getting their signals from the eye or the beak, what’s the alternative? Dickman thinks that the answer lies in the inner ear, and that’s where he is currently looking.

 

magneto4.jpg

What about other animals? The compasses of lobsters, fish and mammals like the naked mole rat definitely do not rely on the radical pair mechanism and are probably iron-based. The compass of sharks and rays, meanwhile, is thought to rely on a different mechanism entirely: electromagnetic induction. As they swim through a magnetic field, it induces electric currents in a sensory organ - but it remains unclear how sharks achieve the extraordinary sensitivity needed to detect Earth’s weak field.
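
The induction idea can at least be checked for plausibility with a quick calculation (illustrative numbers of my own; the oft-quoted sensitivity of shark electroreceptors is on the order of a few nanovolts per centimetre):

```python
# A conductor moving through a magnetic field sees a motional electric
# field E = v * B. Is that within a shark's reach?
v = 1.0          # swimming speed, m/s
B = 50e-6        # Earth's field strength, tesla

E = v * B                       # induced field, volts per metre
E_nV_per_cm = E * 1e9 / 100     # same thing in nanovolts per centimetre

threshold = 5.0  # assumed sensitivity of the ampullae of Lorenzini, nV/cm
print(f"induced field: {E_nV_per_cm:.0f} nV/cm "
      f"(~{E_nV_per_cm / threshold:.0f}x the assumed threshold)")
# ~500 nV/cm against a ~5 nV/cm threshold: faint, but comfortably above
# what the ampullae can detect, so induction is at least plausible.
```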

 

 

Credits: Wikipedia, Ed Yong, New Scientist, Nature

Hrvoje Crvelin

Dust eruption

Posted by Hrvoje Crvelin Apr 27, 2012

Strange title, isn't it? I'm not going to talk about chaos in my room though; I will talk about J180956.27-330500.2. It is a catalog name of course, and it is a star. The star, catalogued as WISE J180956.27-330500.2 to be precise, was discovered in images taken during the WISE survey in 2010, the most detailed infrared survey to date of the entire celestial sky. It stood out from other objects because it glowed brightly with infrared light. When compared to images taken more than 20 years ago, astronomers found the star was 100 times brighter. Results indicate the star recently exploded with copious amounts of fresh dust, equivalent in mass to our planet Earth. The star is heating the dust and causing it to glow with infrared light. Observing this period of explosive change while it is actually ongoing is very rare - these dust eruptions probably occur only once every 10000 years in the lives of old stars, and they are thought to last less than a few hundred years each time. It's the blink of an eye in cosmological terms. Astronomers know of one other star currently pumping out massive amounts of dust. Called Sakurai's Object, this star is farther along in the aging process than the one discovered by WISE.

 

dust1.jpg

The aging star is in the "red giant" phase of its life (for example, our own sun will expand into a red giant in about 5 billion years). When a star begins to run out of fuel, it cools and expands. As the star puffs up, it sheds layers of gas that cool and congeal into tiny dust particles. This is one of the main ways dust is recycled in our universe, making its way from older stars to newborn solar systems. The other way, in which the heaviest of elements are made, is through the deathly explosions, or supernovae, of the most massive stars. Evolved stars, which this one appears to be, contribute about 50 percent of the particles that make up humans. See here for more details.

 

Researchers calculated the star appears to have brightened dramatically since 1983. The WISE data show the dust has continued to evolve over time, with the star now hidden behind a very thick veil. Researchers now plan to follow up with space and ground based telescopes to confirm its nature and to better understand how older stars recycle dust back into the cosmos.

 

 

Credits: NASA

Hrvoje Crvelin

Egg nebula

Posted by Hrvoje Crvelin Apr 27, 2012

The preplanetary nebula phase is a short period in the cycle of stellar evolution, and has nothing to do with planets. Over a few thousand years, the hot remains of the aging star in the center of the nebula heat it up, excite the gas, and make it glow as a subsequent planetary nebula. The short lifespan of preplanetary nebulae means there are relatively few of them in existence at any one time. Moreover, they are very dim, requiring powerful telescopes to be seen. This combination of rarity and faintness means they were only discovered comparatively recently.

 

The Egg Nebula, the first to be discovered, was first spotted less than 40 years ago, and many aspects of this class of object remain shrouded in mystery. At the center of the image below, and hidden in a thick cloud of dust, is the nebula's central star. While we can't see the star directly, four searchlight beams of light coming from it shine out through the nebula. It is thought that ring-shaped holes in the thick cocoon of dust, carved by jets coming from the star, let the beams of light emerge through the otherwise opaque cloud. The precise mechanism by which stellar jets produce these holes is not known for certain, but one possible explanation is that a binary star system, rather than a single star, exists at the center of the nebula.

 

egg.jpg

 

The image above was produced from exposures in visible and infrared light from Hubble's Wide Field Camera 3. The onion-like layered structure of the more diffuse cloud surrounding the central cocoon is caused by periodic bursts of material being ejected from the dying star. The bursts typically occur every few hundred years.

 

The distance to the Egg Nebula is only known very approximately, the best guess placing it at around 3000 light-years from Earth. This in turn means that astronomers do not have any accurate figures for the size of the nebula (it may be larger and further away, or smaller but nearer).
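
The size-distance degeneracy is simple geometry: for a fixed angular extent on the sky, the inferred physical size scales linearly with the assumed distance. A quick sketch (the angular size used here is a made-up placeholder, purely to show the scaling):

```python
import math

# Physical size = distance * angular size (small-angle approximation).
angular_size_arcsec = 30.0   # hypothetical angular extent, for illustration

theta = math.radians(angular_size_arcsec / 3600.0)  # arcsec -> radians
for d_ly in (2000, 3000, 4000):
    print(f"assumed distance {d_ly} ly -> extent ~{d_ly * theta:.2f} ly")
# Shift the assumed distance and the inferred nebula size shifts with it,
# which is why the Egg Nebula's size is as uncertain as its distance.
```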

 

 

Credits: NASA

Hrvoje Crvelin

Fomalhaut

Posted by Hrvoje Crvelin Apr 25, 2012

Fomalhaut, a star twice as massive as our Sun and around 25 light years away, has been of keen interest to astronomers for many years. With an age of only a few hundred million years it is a fairly young star, and in the 1980s was shown to be surrounded by relatively large amounts of dust by the IRAS infrared satellite. Now Herschel, with its unprecedented resolution, has produced the best ever far-infrared images of the system. The star itself is surrounded by hot gas and dust, and there is a warm, dusty disc surrounding it as well. But the most interesting feature is a belt of dusty material on the outer edges of the system.

 

Fomalhaut.jpg

The belt of dust is relatively far from the star itself, at more than 100 times the distance of Earth from the Sun. This makes it very cold, at around -200 Celsius, with around half of it being made of water ice. This disc is similar to the Kuiper Belt in our Solar System, which lies beyond the planet Neptune, but is much, much younger. As well as relatively large objects, such as Pluto, our Kuiper Belt also contains millions of much smaller objects.

 

The illustration shows the size of the debris disc observed around the star Fomalhaut, as compared to the size of the Kuiper Belt and Asteroid Belt in the Solar System. The upper image shows the star Fomalhaut surrounded by its debris disc, which is located at a radius of about 130 astronomical units (AU) from the star. A candidate planet, Fomalhaut b, detected around the star in 2008, is also indicated; this detection is, however, still under debate. The central image shows the outer Solar System, with the Kuiper Belt, a rich reservoir of icy objects located beyond the orbit of Neptune and thought to be the source of short-period comets. The Kuiper Belt extends from 30 to 50 AU from the Sun. The lower image shows the inner Solar System, including the asteroid belt, which is located between the orbits of Mars and Jupiter and extends from roughly 2 to 4 AU from the Sun.

The dusty belt around Fomalhaut is confined into a fairly narrow ring and is also off-centre relative to the star, both of which imply that there could be planets orbiting close to it. In 2008 the Hubble Space Telescope provided possible evidence for a planet orbiting within it - though that has yet to be confirmed. The way that the dust absorbs, emits and scatters the starlight can be used to deduce the size of the grains. The infrared observations with Herschel have found that the dust absorbs light as if it were made of very small grains, just a few thousandths of a millimetre across. Meanwhile the Hubble Space Telescope images indicate that it scatters light in the same way as much larger particles. These two properties are satisfied if the dust grains are "fluffy", being made of small particles loosely stuck together to make larger ones.

 

A significant problem with such fluffy grains is that the smaller ones should be blown out of the system by the intense light from Fomalhaut itself. The fact that they are present implies that there is a continuous supply of small particles, most likely produced by the continual collisions and disintegration of larger asteroid-sized objects. Such a ring would contain many icy comets, but to produce the amount of dust seen by Herschel requires the equivalent of around 2000 1 km sized comets to be destroyed every day. That is an extremely large number. Such a large number of collisions implies that there are trillions of comets in the ring in total, containing enough material to make over one hundred Earths.
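
To put a scale on that grinding rate, here is some rough arithmetic of my own (a 1 km comet modelled as a sphere of radius 500 m with a loosely packed icy density of about 500 kg/m^3 - both assumed values):

```python
import math

radius = 500.0    # metres (a "1 km sized" comet)
density = 500.0   # kg/m^3, loosely packed ice and dust (assumed)
per_day = 2000    # comets destroyed per day, from the Herschel estimate

comet_mass = (4 / 3) * math.pi * radius**3 * density
print(f"one comet: ~{comet_mass:.1e} kg")
print(f"daily grind: ~{comet_mass * per_day:.1e} kg of feedstock per day")
# ~2.6e11 kg per comet and ~5e14 kg per day: a collisional cascade on
# this scale is what keeps the ring continuously supplied with dust.
```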

 

 

Credits: UK Space Agency, ESA

Hrvoje Crvelin

Dead sea dead?

Posted by Hrvoje Crvelin Apr 25, 2012

Rapidly dropping water levels of the Dead Sea - the lowest point on Earth's surface, heralded for its medicinal properties - have been a source of ecological concern for years. Now a drilling project led by researchers from Tel Aviv University and Hebrew University reveals that water levels have risen and fallen by hundreds of meters over the last 200000 years.

 

dsd1.jpg

 

Researchers drilled 460 meters beneath the sea floor and extracted sediments spanning 200000 years. The material recovered revealed the region's past climatic conditions and may allow researchers to forecast future changes. Layers of salt indicated several periods of dryness and very little rainfall, causing water to recede and salt to gather at the center of the lake. During the last interglacial period, approximately 120000 years ago, the sea came close to drying up entirely, the researchers found, with another period of extreme dryness taking place about 13000 years ago.

 

Today, the Dead Sea lies 426 meters below sea level and is receding rapidly. Despite this historical precedent, there is still cause for concern. In the past the change was climate-driven, the result of natural conditions; today, the lake is threatened by human activity.

 

 

Credits: Tel Aviv University

Hrvoje Crvelin

Got cancer? Eat pizza!

Posted by Hrvoje Crvelin Apr 24, 2012

OK, this is a morbid title, I will have to admit that, and I apologize. But after reading the original source - or better said, after I started to read it - this was the first thought I had. Do you know what the second leading cause of cancer death in American men is? OK, that's easy to guess with a title like the one above: prostate cancer. So, how is this related to pizza!? Not directly, and certainly not something a doctor would prescribe, but oregano, the common pizza and pasta seasoning herb, has long been known to possess a variety of beneficial health effects. And now a new study by researchers at Long Island University (LIU) indicates that an ingredient of this spice could potentially be used to treat prostate cancer.

 

pizza.jpg

 

Prostate cancer is a type of cancer that starts in the prostate gland and usually occurs in older men. Recent data shows that about 1 in 36 men will die of prostate cancer. Estimated new cases and deaths from this disease in the US in 2012 alone are 241740 and 28170, respectively. Current treatment options for patients include surgery, radiation therapy, hormone therapy, chemotherapy, and immune therapy. Unfortunately, these are associated with considerable complications and/or severe side effects. It is a nightmare for men, just as breast cancer is for women.

 

Dr. Supriya Bavadekar at LIU's Arnold & Marie Schwartz College of Pharmacy and Health Sciences, is currently testing carvacrol, a constituent of oregano, on prostate cancer cells. The results of her study demonstrate that the compound induces apoptosis in these cells. Apoptosis is programmed cell death, or simply "cell suicide". Dr. Bavadekar and her group are presently trying to determine the signaling pathways that the compound employs to bring about cancer cell suicide. So far, we know that oregano possesses anti-bacterial as well as anti-inflammatory properties, but its effects on cancer cells really elevate the spice to the level of a super-spice like turmeric.

 

Though the study is at its preliminary stage, initial data indicates a huge potential in terms of carvacrol's use as an anti-cancer agent. A significant advantage is that oregano is commonly used in food and has a "Generally Recognized As Safe" status in the US.

 

Some researchers have previously shown that eating pizza may cut down cancer risk. This effect has been mostly attributed to lycopene, a substance found in tomato sauce, but in light of this new discovery it feels as if oregano seasoning may have played a role. If the study continues to yield positive results, this super-spice may represent a very promising therapy for patients with prostate cancer.

 

 

Credits: Federation of American Societies for Experimental Biology (FASEB)

Hrvoje Crvelin

Cosmic rays

Posted by Hrvoje Crvelin Apr 24, 2012

Cosmic rays are energetic charged subatomic particles, originating in outer space. They may produce secondary particles that penetrate the Earth's atmosphere and surface. The term ray is historical, as cosmic rays were thought to be electromagnetic radiation. Most primary cosmic rays (those that enter the atmosphere from deep space) are composed of familiar stable subatomic particles that normally occur on Earth, such as protons, atomic nuclei, or electrons. However, a very small fraction are stable particles of antimatter, such as positrons or antiprotons, and the precise nature of this remaining fraction is an area of active research. Of the nuclei, about 89% are simple protons or hydrogen nuclei, 10% are helium nuclei or alpha particles, and 1% are the nuclei of heavier elements. Nuclei together constitute 99% of the cosmic rays, while solitary electrons (much like beta particles, although their ultimate source is unknown) constitute most of the remaining 1%. Although cosmic rays were discovered 100 years ago, their origin remains one of the most enduring mysteries in physics.

 

cr1.jpg

Gamma-ray bursts (GRBs) are flashes of gamma rays associated with extremely energetic explosions that have been observed in distant galaxies. They are the most luminous electromagnetic events known to occur in the universe. Bursts can last from ten milliseconds to several minutes; a typical burst lasts 20–40 seconds. The initial burst is usually followed by a longer-lived "afterglow" emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave and radio). Most observed GRBs are believed to consist of a narrow beam of intense radiation released during a supernova event, as a rapidly rotating, high-mass star collapses to form a neutron star, quark star, or black hole. A subclass of GRBs (the "short" bursts) appear to originate from a different process. This may be the merger of binary neutron stars and perhaps specifically the development of resonance between the crust and core of such stars as a result of the massive tidal forces experienced in the seconds leading up to their collision, causing the entire crust of the star to shatter.

 

The sources of most GRBs are billions of light years away from Earth, implying that the explosions are both extremely energetic (a typical burst releases as much energy in a few seconds as the Sun will in its entire 10-billion-year lifetime) and extremely rare (a few per galaxy per million years). All observed GRBs have originated from outside the Milky Way galaxy, although a related class of phenomena, soft gamma repeater flares, are associated with magnetars within the Milky Way. It has been hypothesized that a gamma-ray burst in the Milky Way, pointing directly towards the Earth, could cause a mass extinction event.

 

GRBs were first detected in 1967 by the Vela satellites, a series of satellites designed to detect covert nuclear weapons tests. Hundreds of theoretical models were proposed to explain these bursts in the years following their discovery, such as collisions between comets and neutron stars. Little information was available to verify these models until the 1997 detection of the first X-ray and optical afterglows and direct measurement of their redshifts using optical spectroscopy. These discoveries, and subsequent studies of the galaxies and supernovae associated with the bursts, clarified the distance and luminosity of GRBs, definitively placing them in distant galaxies and connecting long GRBs with the deaths of massive stars.

 

Some rare cosmic rays pack an astonishing wallop, with energies prodigiously greater than particles in human-made accelerators like the Large Hadron Collider (LHC). Their sources are unknown, although scientists favor active galactic nuclei or GRBs. If so, GRBs should produce ultra-high-energy neutrinos, but scientists searching for these with IceCube, the giant neutrino telescope at the South Pole, have found exactly zero. And so the mystery deepens.

 

cr2.jpg

 

The IceCube neutrino telescope encompasses a cubic kilometer of clear Antarctic ice under the South Pole, a volume seeded with an array of 5160 sensitive digital optical modules (DOMs) that precisely track the direction and energy of speeding muons, massive cousins of the electron that are created when neutrinos collide with atoms in the ice. The IceCube Collaboration recently announced the results of an exhaustive search for high-energy neutrinos that would likely be produced if the violent extragalactic explosions known as gamma-ray bursts (GRBs) are the source of ultra-high-energy cosmic rays.

 

According to a leading model, we would have expected to see 8.4 events corresponding to GRB production of neutrinos in the IceCube data used for this search, but we didn't see any, which indicates that GRBs are not the source of ultra-high-energy cosmic rays. This result represents a coming-of-age of neutrino astronomy. IceCube, while still under construction, was able to rule out 15 years of predictions and has begun to challenge one of only two major possibilities for the origin of the highest-energy cosmic rays, namely gamma-ray bursts and active galactic nuclei. While not finding a neutrino signal originating from GRBs was disappointing, this is the first neutrino astronomy result that is able to strongly constrain extra-galactic astrophysics models, and therefore marks the beginning of an exciting new era of neutrino astronomy.
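
How damning is "expected 8.4, saw zero"? Under a bare-bones Poisson model (my own sanity check, far simpler than the collaboration's full statistical analysis), the chance of such an outcome is tiny:

```python
import math

expected = 8.4                  # events predicted by the fireball model
p_zero = math.exp(-expected)    # Poisson probability of observing 0 events

print(f"chance of seeing no events if the model holds: {p_zero:.2e}")
# ~2.2e-4, roughly a 1-in-4500 fluke, which is why the null result puts
# such strong pressure on GRBs as the source of these cosmic rays.
```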

 

We know nature is capable of accelerating elementary particles to macroscopic energies and there are basically only two ideas on how she does this: in gravitationally driven particle flows near the supermassive black holes at the centers of active galaxies, and in the collapse of stars to a black hole, seen by astronomers as GRBs. In active galactic nuclei (AGNs) the black holes suck in matter and eject enormous particle jets, perpendicular to the galactic disk, which could act as strong linear accelerators. Some GRBs are thought to be collapses of supermassive stars - hypernova - while others are thought to be collisions of black holes with other black holes or neutron stars. Both types produce brief but intense blasts of radiation. The massive fireballs move away from the explosion at nearly the speed of light, releasing most of their energy as gamma rays. The fireballs that give rise to this radiation might also accelerate particles to very high energies through a jet mechanism similar to that in AGNs, although compressed into a much smaller volume. A fireball produced in a black-hole collision or by the collapse of a gigantic star can form jets in which protons and heavier nuclei are accelerated and shock waves produce a burst of gamma rays. The fireball model also predicts the creation of very high energy neutrinos, which ought to be detectable shortly after the gamma-ray burst becomes visible from Earth.

 

cr3.jpg

Accelerated protons in a GRB's jets should interact with the intense gamma-ray background and strong magnetic fields to produce neutrinos with energies about five percent of the proton energy, together with much higher-energy neutrinos near the end of the acceleration process. Neutrinos come in three different types that change and mix as they travel to Earth; the total flux can be estimated from the muon neutrinos that IceCube concentrates on. The muons these neutrinos create can travel up to 10 kilometers through the Antarctic ice. Thus many neutrino interactions occur outside the actual dimensions of the IceCube array but are nevertheless visible to IceCube's detectors, effectively enlarging the telescope's aperture. The picture shows a fireball, produced in a black-hole collision or by the collapse of a gigantic star, forming jets in which protons and heavier nuclei are accelerated while shock waves produce a burst of gamma rays.

 

Are AGNs the real source of the highest-energy cosmic rays? IceCube has looked for neutrinos from active galactic nuclei, but as yet the data sets are not sensitive enough to set significant limits. For now, IceCube has nothing to say on the subject - beyond the fact that the fireball model of GRBs can't meet the specs.

 

For a link to the paper describing this research, click here.

 

 

Credits: Wikipedia, DOE/Lawrence Berkeley National Laboratory, Nature magazine

Hrvoje Crvelin

No God spot

Posted by Hrvoje Crvelin Apr 24, 2012

A belief in God is deeply embedded in the human brain, which is programmed for religious experiences, according to a study that analyses why religion is a universal human feature that has encompassed all cultures throughout history. I'm not a religious person, though I have been exposed to religion, so perhaps something is wrong with my program. Some critics, such as Richard Dawkins and Pascal Boyer, contend that religion is nothing more than a social construct that primitive humans evolved to improve their odds of survival. Dawkins and others have posited that a pre-disposition to believe in superstitions and religion could enhance the survival of the human species, by enhancing fear of imagined (and sometimes real) dangers, and thus increasing the likelihood that humans would take pre-emptive defensive measures.

 

Scientists have speculated that the human brain features a "God spot," one distinct area of the brain responsible for spirituality. Research in 2009 found that several areas of the brain are involved in religious belief, one within the frontal lobes of the cortex - which are distinctively developed in humans - and another in the more evolutionary-ancient regions deeper inside the brain, which humans share with apes and other primates. The study found that people of different religious persuasions and beliefs, as well as atheists, all tended to use the same circuits in the brain to solve a perceived moral conundrum, which were also the same ones used when religiously-inclined people dealt with issues related to God. The findings support the idea that the brain has evolved to be sensitive to any form of belief that improves the chances of survival, and they suggest the brain is inherently sensitive to believing in almost anything if there are grounds for doing so. This work was followed by a study where scientists tried to stimulate the temporal lobes with a rotating magnetic field. Michael Persinger, from Laurentian University in Ontario, found that he could artificially create the experience of religious feelings in 80% of volunteers.

 

gs1.jpg

 

In 2012, we are a step further as far as the search for the God spot goes. University of Missouri researchers have completed research that indicates spirituality is a complex phenomenon, and multiple areas of the brain are responsible for the many aspects of spiritual experiences. Based on a previously published study that indicated spiritual transcendence is associated with decreased right parietal lobe functioning, MU researchers replicated their findings. In addition, the researchers determined that other aspects of spiritual functioning are related to increased activity in the frontal lobe. Researchers found a neuropsychological basis for spirituality, but it's not isolated to one specific area of the brain. Spirituality is a much more dynamic concept that uses many parts of the brain. Certain parts of the brain play more predominant roles, but they all work together to facilitate individuals' spiritual experiences.

 

gs4.jpg

 

In the most recent study, researchers studied 20 people with traumatic brain injuries affecting the right parietal lobe, the area of the brain situated a few inches above the right ear. They surveyed participants on characteristics of spirituality, such as how close they felt to a higher power and if they felt their lives were part of a divine plan. Researchers found that the participants with more significant injury to their right parietal lobe showed an increased feeling of closeness to a higher power.

 

Neuropsychology researchers consistently have shown that impairment on the right side of the brain decreases one's focus on the self. Since research shows that people with this impairment are more spiritual, this suggests spiritual experiences are associated with a decreased focus on the self. This is consistent with many religious texts that suggest people should concentrate on the well-being of others rather than on themselves.

 

The right side of the brain is associated with self-orientation, whereas the left side is associated with how individuals relate to others. Although this research studied people with brain injury, previous studies of Buddhist meditators and Franciscan nuns with normal brain function have shown that people can learn to minimize the functioning of the right side of their brains to increase their spiritual connections during meditation and prayer. In addition, researchers measured the frequency of participants' religious practices, such as how often they attended church or listened to religious programs. They measured activity in the frontal lobe and found a correlation between increased activity in this part of the brain and increased participation in religious practices. This finding indicates that spiritual experiences are likely associated with different parts of the brain.

 

 

Credits: University of Missouri-Columbia

I've been busy lately and behind schedule with some of the things I wanted to write (some big ones like the next Big Bang article, a climate one and a few others). Instead of catching up with those, I decided to empty the buffer with a few smaller ones, and the first one which comes to my mind is related to matter. I will confess that the motivation was primarily due to some recent news about dark matter (which has been making headlines this past week), but this article - as the title suggests - will address much more. All of you have heard of matter. After all, matter is all around us. You might have heard of antimatter too. And then there is dark matter somewhere in space, or perhaps everywhere. And to add to the list, we say matter equals energy? Does that mean there is anti-energy and dark energy too? What is this all about? Within this article I will try to compose a text which addresses these questions, hopefully using what we know and what has been said by those who deal with this on a daily basis - scientists. (Most of the theory here is borrowed from the wonderful work of Matt Strassler.)

 

Matter is anything that occupies space and has rest mass (or invariant mass). It is a general term for the substance of which all physical objects consist. Typically, matter includes atoms and other particles which have mass. Mass is said by some to be the amount of matter in an object and volume is the amount of space occupied by an object, but this definition confuses mass and matter, which are not the same. Different fields use the term in different and sometimes incompatible ways; there is no single agreed scientific meaning of the word "matter", even though the term "mass" is better-defined. Contrary to the previous view that equates mass and matter, a major difficulty in defining matter consists in deciding what forms of energy (all of which have mass) are not matter. In general, massless particles such as photons and gluons are not considered forms of matter, even though when these particles are trapped in systems at rest, they contribute energy and mass to them. For example, almost 99% of the mass of ordinary atomic matter consists of mass associated with the energy contributed by the gluons and the kinetic energy of the quarks which make up nucleons. In this view, most of the mass of ordinary "matter" consists of mass which is not contributed by matter particles. By now, your head is already spinning, right?
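
That last claim is easy to put numbers on. A crude illustration (approximate current-quark and proton masses; binding subtleties ignored):

```python
# How much of a proton's mass comes from its quarks' rest mass?
m_up, m_down = 2.2, 4.7   # approximate quark masses, MeV/c^2
m_proton = 938.3          # proton mass, MeV/c^2

quark_rest_mass = 2 * m_up + m_down   # proton = two up quarks + one down
fraction = quark_rest_mass / m_proton

print(f"quark rest masses: {quark_rest_mass:.1f} MeV of {m_proton} MeV "
      f"({fraction:.1%})")
# ~1%: the other ~99% is energy - gluon field energy and quark motion -
# which is exactly the point made above.
```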

 

matter04.jpg

Matter is commonly said to exist in four states (or phases): solid, liquid, gas and plasma. However, advances in experimental techniques have realized other phases, previously only theoretical constructs, such as Bose-Einstein condensates and fermionic condensates. A focus on an elementary-particle view of matter also leads to new phases of matter, such as the quark-gluon plasma. In the realm of cosmology, extensions of the term matter are invoked to include dark matter and dark energy, concepts introduced to explain some odd phenomena of the observable universe. These exotic forms of "matter" do not refer to matter as "building blocks", but rather to currently poorly understood forms of mass and energy.

 

Matter (no matter how you define it) is a class of objects that you will find in the universe, while mass and energy are not objects; they are properties that every object in the universe can have. Mass and energy are in deep interplay as you will see below.

So, matter is always some kind of stuff, but which stuff depends on context. Energy is not itself stuff; it is something that all stuff has. When talking about energy, at this level, we can refresh our memory with what we learned in elementary school and say that we distinguish the following energies of a single system (a small numeric example follows the list):

  • mass-energy (also known as rest energy) which comes out of famous Einstein equation and describes energy object has when at rest (not moving). This energy can be either 0 or some higher value.
  • motion energy (also known as kinetic energy) which describes energy of moving object where faster object has more such energy and if two objects are equally fast then heavier one will have more of it. This energy can be either 0 or some higher value.
  • relationship energy (also known as potential or stored energy) which describes stored energy in relationship among objects like stretching spring, water behind dam, gravitational interactions, etc. This energy can be either positive or negative.
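
The mass-energy bullet is worth a number. Evaluating Einstein's E = mc^2 for one kilogram at rest (pure arithmetic, just to put a scale on "mass-energy"):

```python
c = 299_792_458.0   # speed of light, m/s
m = 1.0             # mass at rest, kilograms

E = m * c**2        # rest energy, joules
print(f"rest energy of 1 kg: {E:.3e} J")
# ~9e16 joules - on the order of 20 megatons of TNT, all of it sitting
# quietly in a single kilogram of anything.
```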

 

Let's start with something simple. Imagine an object or collection of objects - let's call it a "system of objects" - that has a certain amount of energy at this moment (all the mass-energy, all the motion-energy, the stored energy of all types, etc.), and the parts of the system interact with each other but with nothing else. Then at the end of the day the amount of energy those objects will have is the same as the amount they have now. Total energy is conserved, or, as we say, the total amount does not change. It can change from one form to another, but if you keep track of all the forms, you'll find at the end just what you had at the start (total amount wise). The interactions in which energy transformations happen are mostly decays, and scientists state there are some rules, as we know them today, that govern those (a toy rule-checker is sketched after the list):

  1. a particle must decay to two or more particles.
  2. the mass of a decaying particle must exceed the sum of the masses of the particles produced in its decay
  3. the total electric charge before and after a decay must match
  4. the total number of “fermions” before and after the decay can change only by an even number
  5. the total number of quarks minus the total number of antiquarks must not change in a decay
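
Here is the toy rule-checker promised above: a sketch, not a physics engine (the particle table is simplified, neutrino masses are treated as zero, and the mass rule ignores binding effects):

```python
PARTICLES = {
    # name: (mass in MeV/c^2, electric charge, is_fermion)
    "muon":          (105.7, -1, True),
    "electron":      (0.511, -1, True),
    "neutrino":      (0.0,    0, True),   # treated as ~massless here
    "anti-neutrino": (0.0,    0, True),
}

def decay_allowed(parent, children):
    """Check rules 1-4 above for a proposed decay."""
    p_mass, p_charge, p_fermion = PARTICLES[parent]
    mass_sum = sum(PARTICLES[c][0] for c in children)
    charge_sum = sum(PARTICLES[c][1] for c in children)
    fermion_count = sum(PARTICLES[c][2] for c in children)
    return (len(children) >= 2                         # rule 1: two or more children
            and p_mass > mass_sum                      # rule 2: parent heavier
            and p_charge == charge_sum                 # rule 3: charge conserved
            and (fermion_count - p_fermion) % 2 == 0)  # rule 4: even fermion change

# The real muon decay passes all four rules:
print(decay_allowed("muon", ["electron", "neutrino", "anti-neutrino"]))  # True
# A two-body decay to electron + neutrino fails rule 4 (the fermion
# count would change by one, an odd number):
print(decay_allowed("muon", ["electron", "neutrino"]))                   # False
```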

 

Scientists believe the first four statements above to be exact, while the fifth is nearly exact (many theorists believe it can be violated, but in practice - think of proton decay - a violation has never been observed). These rules (and they are not the only ones, but they do cover most of it) help us understand why:

  • photons are stable
  • electrons are stable
  • protons are stable or very long-lived
  • at least one type of neutrino is stable or very long-lived

 

This applies to all known particles - except the neutron. There is no rule preventing neutron decay, and indeed it does decay, after on average about 15 minutes, to a proton, an electron, and an anti-neutrino. Why is it so long-lived? This is partly because the proton and the neutron have masses that are so nearly equal. Although the neutron has mass-energy of almost a GeV, the mass-energy of the neutron is only about 0.0008 GeV larger than the sum of the mass-energies of a proton, an electron and an anti-neutrino. Decay rates become very slow when the children from a decay have masses that add up very close to that of the parent; that's not surprising, since by rule 2 the decay rate has to decrease to zero once the children have more mass than the parent. But the really odd thing is that if you put a neutron in certain atomic nuclei, it becomes stable! Helium, for instance, has two protons and two neutrons. Even though a neutron by itself lives a quarter of an hour, a helium nucleus will live for the age of the universe and longer. In fact this is true for the neutrons in the nuclei of all of the stable elements in the periodic table. To be sure, these are not all the rules. Think of dark matter; if dark matter consists of a new particle, which rule helps that particle remain stable?
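
The neutron's sliver of an energy budget is easy to verify from the measured masses (standard values, with the anti-neutrino mass taken as essentially zero):

```python
# Masses in MeV/c^2:
m_neutron, m_proton, m_electron = 939.565, 938.272, 0.511

surplus = m_neutron - (m_proton + m_electron)
print(f"energy left over for motion: {surplus:.3f} MeV "
      f"({surplus / m_neutron:.4%} of the neutron's mass-energy)")
# ~0.78 MeV out of ~940 MeV - a tiny surplus, which is one reason the
# free neutron's decay is so slow (minutes, not fractions of a second).
```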

 

The situation with momentum is similar to that of energy. Pick any direction in space; momentum along that direction is conserved. And since there are three dimensions of space, with three independent directions you could go, there are three independent conservation laws. You can pick whichever three directions you like, as long as they are different. For instance, you can choose the three conservation laws to be momentum in a north-south direction, momentum in an east-west direction, and momentum in an up-down direction. Or you can pick three others, such as toward-and-away from the sun, along-and-opposite the earth’s orbit, and up-and-down out of the plane of the solar system. The most common form of momentum is just that due to simple motion of objects, and it’s more or less what you might think intuitively: if an object is moving in a certain direction, then it has momentum in that direction, and the faster it moves, the more momentum it has.  And a heavy object has more momentum than a light object if the two are traveling at the same speed. One interesting consequence of the conservation of momentum is that if you have a system of objects sitting stationary in front of you (that is, the system as a whole isn’t moving) then it will continue to remain stationary unless something from outside the system pushes on it. The reason is that if it is stationary its total momentum is zero, and since momentum is conserved, it will remain zero forever, as long as nothing from outside the system affects it.

 

When it comes to mass, momentum and energy, we state the following (click here for a more detailed article - I suggest reading it; a worked example follows the list):

  • energy and momentum of an isolated physical system are conserved (the total energy and the total momentum of an isolated system don't change over time) from every observer's point of view
  • but different observers, if they are moving relative to one another, will assign a different amount of energy and momentum to the system
  • the sum of the masses of the objects that make up a physical system is not conserved; it may change
  • but the mass of any object is something that all observers will agree on
  • the mass of a physical system of objects is not the sum of the masses of the objects that make up that system
  • the mass of a system of objects is the only thing on our list that is both conserved and agreed upon by all observers
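
To see the last two bullets in action, take two photons flying in opposite directions (a sketch in units where c = 1, energies in MeV):

```python
import math

# Two massless photons, equal energies, opposite directions:
E1, p1 = 100.0, +100.0   # photon 1: energy, momentum along x
E2, p2 = 100.0, -100.0   # photon 2: energy, momentum along x

E_total = E1 + E2
p_total = p1 + p2
M_system = math.sqrt(E_total**2 - p_total**2)   # invariant mass of the pair

print("sum of constituent masses: 0 MeV")
print(f"mass of the two-photon system: {M_system:.0f} MeV")
# 200 MeV: two massless particles form a system that has mass, so the
# system's mass is clearly not the sum of its parts' masses.
```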

 

What about anti-matter? Every type of particle has an anti-particle. Usually this is a distinct type of particle, but it can happen that the anti-particle and the particle are the same. Only particles satisfying certain conditions (for example, if they are electrically neutral) may be their own antiparticles. The only examples so far from the list of elementary particles are photons, Z particles, gluons and the graviton - and, possibly, the three neutrinos. Every other particle has a distinct anti-particle, with the same mass but opposite electric charge. Do not get confused by names; for example, the neutron is an electrically neutral particle that is not its own antiparticle - like the proton, the neutron contains more quarks than anti-quarks, whereas the anti-neutron contains more anti-quarks than quarks. What is important to know is that for all known particles the anti-particle has been observed experimentally.

 

For those particles that differ from their anti-particles, the names of the anti-particles are usually pretty obvious (up anti-quark, anti-neutrino, anti-tau), with the exception of the anti-electron, which is usually called the positron. Anti-matter gained popularity thanks to sci-fi and its claim that "matter and anti-matter annihilate into pure energy". The reality is, of course, more complex - if you put a particle and an anti-particle together, almost all their properties cancel. For instance, the electric charge of a muon plus the electric charge of an anti-muon equals zero; the former is negative, the latter positive, but they are equal in size and so they cancel perfectly. The only things that don't cancel are their masses and energies, and that's where the catch is. Mass isn't "conserved"; mass can appear or disappear. The only thing that is definitely going to stick around is energy. Energy is conserved: however much you start with, you will end with the same amount. An example Matt Strassler gives is the following: imagine a box with a muon and an anti-muon at rest, each in its corner, looking at the other. The energy we have is the mass-energy of both (M each), and the motion energy is 0 as they are at rest. Now, the muon and anti-muon can transform into photons (and since the photon is also an "anti-photon", there is nothing strange about this). Unlike the muons, the photons move. But we also know the photon is said to be massless. So what happened? Rest energy went into motion energy.

 

matter01.png

So, the total energy remains the same - it is conserved (the picture also shows the photons having opposite momenta, which cancel each other at the end). Of course, a muon and anti-muon may annihilate into something else too, like an electron and positron. What we observe in that case is that the two new particles are no longer massless (both the electron and positron have mass), but still the total energy is conserved by the same principle. Check the following picture.

 

matter02.png

What we see from above is that the new particles, the electron and positron, have mass (m), and their motion energy is the mass-energy of the original particle (M for the muon) minus the mass-energy of the new particle (m). Once again, energy is conserved (but mass is not). The momenta of the positron and electron cancel each other, just as before.
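
Putting numbers on the picture (standard particle masses, units of MeV with c = 1):

```python
M = 105.7   # muon (and anti-muon) mass-energy, MeV
m = 0.511   # electron (and positron) mass-energy, MeV

# muon + anti-muon at rest -> electron + positron:
# each child carries total energy M, of which m is mass-energy.
motion_energy = M - m
print(f"each of e- and e+ flies off with {motion_energy:.1f} MeV of motion energy")
# Energy before: 2M. Energy after: 2m (mass) + 2(M - m) (motion) = 2M.
# Energy is conserved; mass (2M -> 2m) is not.
```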

 

Now let's take this step further. An electron at rest and a positron at rest can turn into two photons, just as a muon and anti-muon can.  In fact we can do the whole calculation just by going back to the muon case - there’s really no difference.

 

matter03.png

What about an electron and positron turning into a muon and anti-muon? If they are initially at rest, this is not possible, as there is not enough energy. However, if they have large motion energy and they collide, then this is possible - as long as there is enough energy to make the muon and anti-muon. And this is the key behind the research done at accelerators in physics for discovering new particles; we smash a particle and its anti-particle together with very high motion-energy, in hopes that they will turn into a heavy particle that we've never before observed, along with its anti-particle. From the above, we can draw some very simple rules (the threshold arithmetic is sketched after the list):

  • particle and its anti-particle that are stationary can annihilate to make a particle and its antiparticle as long as the initial particle is heavier than the final particle
  • particle and its anti-particle that are stationary cannot annihilate to make a particle and its antiparticle if the final particle is heavier than the initial particle
  • particle and its anti-particle that are moving relative to each other can annihilate to make a heavier particle and its antiparticle if they have sufficient motion-energy
  • if the mass-energy plus the motion-energy of the particle equals the mass-energy of the heavier particle, then the heavy particle and anti-particle pair will be produced stationary
  • if the mass-energy plus the motion-energy of the particle is greater than the mass-energy of the heavier particle, then the excess energy will go into motion-energy of the heavy particle and anti-particle pair
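
The promised threshold arithmetic, for the textbook case of an electron-positron collider producing muon pairs (a sketch with symmetric head-on beams, units of MeV with c = 1):

```python
m_e, m_mu = 0.511, 105.7   # electron and muon mass-energies, MeV

# In a symmetric head-on collision the total momentum is zero, so each
# beam particle must carry total energy >= the muon's mass-energy.
required_motion = m_mu - m_e
print(f"minimum motion energy per beam: {required_motion:.1f} MeV")
# At exactly this energy the muon pair is produced at rest (fourth bullet
# above); any surplus becomes the pair's motion energy (last bullet).
```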

 

At this point, let's go back to the earlier statement - every particle has its own anti-particle. If so, how come there is usually more matter than anti-matter? First, we need to define what matter is, or more precisely, what we consider to be matter (electrons and protons and neutrons). If we and the Earth and the Sun had been made entirely from gluons, all of which are their own anti-particles, then what we would in that case have called "matter" wouldn't have had an anti-matter counterpart. We could still have talked about anti-particles - the universe would still have electrons and positrons in it, but what we call "matter" would have been a different thing. However, it happens that we are made from particles which do have distinct anti-particles. And that means that anti-matter is distinct from matter. At that point we can notice that our universe seems to have very little anti-matter in it, and this is the puzzle that theories of baryogenesis attempt to solve.

 

At the end of March 2012, an international collaboration of scientists reported a landmark calculation of the decay process of a kaon into two pions, using breakthrough techniques on some of the world's fastest supercomputers. This is the same subatomic particle decay explored in a 1964 Nobel Prize-winning experiment performed at the U.S. Department of Energy's Brookhaven National Laboratory (BNL), which revealed the first experimental evidence of charge-parity (CP) violation - a lack of symmetry between particles and their corresponding antiparticles that may hold the answer to the question "Why are we made of matter and not antimatter?" When the universe began, did it start with more particles than antiparticles, or did it begin in a symmetrical way, with equal numbers of particles and antiparticles that, through CP violation or a similar mechanism, ended up with more matter than antimatter? Either way, the universe today is composed almost exclusively of matter with virtually no antimatter to be found. Scientists seeking to understand this asymmetry frequently look for subtle violations in predictions of processes described by the Standard Model. One property of these processes, CP symmetry, can be explored by comparing two particle decays - the decay of a particle observed directly and the decay of its anti-particle, viewed in mirror reflection. "C" refers to the exchange of a particle and its antiparticle (which is exactly the same but with opposite charge). "P" specifies the mirror reflection of this decay. But as the Nobel Prize-winning experiments showed, the two decays are not always symmetrical: in some cases you end up with extra particles (matter) and CP symmetry is "violated". Exploring the precise details of the kaon decay process could help elucidate how and why this happens. The new calculation of one aspect of this decay, which required creating unique new computer techniques to use on some of the world's fastest supercomputers, was carried out by physicists from Brookhaven National Laboratory, Columbia University, the University of Connecticut, the University of Edinburgh, the Max-Planck-Institut für Physik, the RIKEN BNL Research Center (RBRC), the University of Southampton, and Washington University. The calculation builds upon extensive theoretical studies done since the first 1964 experiment and much more recent experiments done at CERN and at Fermi National Accelerator Laboratory.

 

The unprecedented accuracy of the measured experimental values - which incorporate distances as minute as one thousandth of a femtometer (one femtometer is 1/1000000000000000th of a meter, about the size of the nucleus of a hydrogen atom) - allowed the collaboration to follow the process in extreme detail: the decay of individual quarks and the flitting in and out of existence of other subatomic particles. Viewing the picture from farther away - a few tenths of a femtometer - this basic process is obscured by a sea of quark-antiquark pairs and a cloud of the gluons that hold them together. At this distance, the gluons begin to bind the quarks into the observed particles. The last part of the problem is to show the behavior of the quarks as they orbit each other, moving at nearly the speed of light through a swarm formed from gluons and further pairs of quarks and antiquarks, and at last forming the pions of the decay under study. To "translate" the mathematics needed to describe these interactions into a computational problem required the creation of powerful numerical methods and advances in technology that made possible the present generation of massively parallel supercomputers with peak computational speeds of hundreds of teraflops (a teraflop computer can perform one million million operations per second). The actual kaon decay described by the calculation spans distance scales of nearly 18 orders of magnitude, from the shortest distances of one thousandth of a femtometer - far below the size of an atom, within which one type of quark decays into another - to the everyday scale of meters over which the decay is observed in the lab. This range is similar to a comparison of the size of a single bacterium and the size of our entire solar system. Ouch! This calculation, when compared with predictions from the Standard Model, allows the scientists to determine another remaining unknown quantity important to understanding kaon decay and its relation to CP violation. A direct calculation of this remaining unknown quantity and a higher precision recalculation of the present result will be the focus of future research, requiring even more computing power. New IBM BlueGene/Q machines are expected to have 10 to 20 times the performance of the current machines, and with this dramatic boost in computing power we can get a more accurate version of the present calculation, and other important details will come within reach.

 

Then this month, another piece of news broke: a Majorana particle seems to have been detected. Scientists at TU Delft's Kavli Institute and the Foundation for Fundamental Research on Matter (FOM Foundation) have succeeded for the first time in detecting a Majorana particle. In the 1930s, the brilliant Italian physicist Ettore Majorana deduced from quantum theory the possibility of the existence of a very special particle, a particle that is its own anti-particle: the Majorana fermion. Majorana fermions are very interesting - not only because their discovery opens up a new and uncharted chapter of fundamental physics; they may also play a role in cosmology. Further, scientists view these particles as fundamental building blocks for the quantum computer. Contrary to an "ordinary" quantum computer, a quantum computer based on Majorana fermions is exceptionally stable and barely sensitive to external influences.

 

matter05.jpg

It is theoretically possible to detect a Majorana fermion with a particle accelerator such as the one at CERN. The current LHC appears to be insufficiently sensitive for that purpose but, according to physicists, there is another possibility: Majorana fermions can also appear in properly designed nanostructures. In 2010, two different groups of theorists came up with a solution using nanowires, superconductors and a strong magnetic field. TU Delft happened to be very familiar with those ingredients through earlier research. Microsoft approached Leo Kouwenhoven to help lead a special FOM programme in search of Majorana fermions, resulting in a successful outcome. The team combined an extremely small nanowire with a superconducting material and a strong magnetic field. The measurements of the particle at the ends of the nanowire cannot be explained otherwise than through the presence of a pair of Majorana fermions.

 

A proposed theory assumes that the mysterious dark matter, which forms the greatest part of the universe, is composed of Majorana fermions. What is dark matter? We can tell from studying the motions of stars and other techniques that most of the mass of a galaxy comes from something that doesn't shine, and lots of hard work has been done to prove that known particles behaving in ordinary ways cannot be responsible. To explain this effect, various speculations have been proposed, and many have been shown (through observation of how galaxies look and behave, typically) to be wrong. Of the survivors, one of the leading contenders is that dark matter is made from heavy particles of an unknown type. But we don't know much more than that as yet. Experiments may soon bring us new insights, though this is not guaranteed. Also, there may not be any meaning to dark anti-matter, as the particles of dark matter (like photons) may well be their own anti-particles. Physicists have been racing to find out with detectors of various kinds, and more than one group says it has found evidence that dark matter fills our solar system in quantities even more vast than many theorists expect. If they're right, the Earth and everything on it is ploughing its way through a dense sea of dark matter at this very instant (and there is at least one paper discussing collisions with the human body).

 

Around the same time, it was reported that about fifty photons coming mostly from the Galactic center seem to be peaked at 130 GeV. The local significance is 4.6 sigma, which is huge; when the look-elsewhere tax is imposed, the significance is a bit weniger (lower), about 3.3 sigma. The paper discussing it, by Christoph Weniger, states that the observation of a gamma-ray line in the cosmic-ray fluxes would be a smoking-gun signature for dark matter annihilation or decay in the Universe. Only dark matter can produce a monochromatic photon line; all standard cosmic phenomena we're aware of produce a continuous spectrum of photons that can usually be well approximated by a power law. On the other hand, a gamma-ray line can easily be produced by annihilation of weak-scale dark matter particles in the galactic center. Today, the average velocity of dark matter particles in our galaxy is about 1/1000 of the speed of light, so they are practically at rest from the point of view of relativistic kinematics. If two dark matter particles meet and annihilate into 2 photons (or 1 photon plus 1 other neutral particle), conservation of momentum implies that the energy of the outgoing photons must be equal to the dark matter mass. Therefore an observation of a gamma-ray line from the galactic center would be considered a smoking-gun signal of dark matter, and as a bonus it would give us an estimate of the mass of the dark matter particle.
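That last step is just kinematics; a quick sketch (χ here is my notation for the dark matter particle): two particles essentially at rest have total energy 2m_χc² and total momentum of approximately zero, so for χχ → γγ the two photons fly out back to back, each carrying

\[ E_\gamma = m_\chi c^2 \]

which is why a line at 130 GeV translates directly into a dark matter mass of about 130 GeV.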

 

And because the reported mass is so close to the expectations for a light lightest superpartner, and because SUSY is such an excessively natural source of WIMPs (weakly interacting massive particles, a leading opinion about the qualitative properties of dark matter particles), the existence of this signal increases the probability that nature hasn't overlooked low-energy supersymmetry as a handy and beautiful tool to achieve many vital goals it has faced. The Fermi satellite will unfortunately need years to conclusively settle the question whether this signal is real or a fluke. One should note the analysis has been performed not by the Fermi collaboration but by an outsider. In fact, a similar analysis by the Fermi collaboration itself found no significant gamma-ray line signal (but using less data and less fancy statistical methods). You can also find a bit more here.

 

Then, just one day later, the most accurate study so far of the motions of stars in the Milky Way has found no evidence for dark matter in a large volume around the Sun. According to widely accepted theories, the solar neighbourhood was expected to be filled with dark matter, a mysterious invisible substance that can only be detected indirectly by the gravitational force it exerts. But a new study by a team of astronomers in Chile has found that these theories just do not fit the observational facts. This may mean that attempts to directly detect dark matter particles on Earth are unlikely to be successful.

 

matter06.jpg

 

By very carefully measuring the motions of many stars, particularly those away from the plane of the Milky Way, the team could work backwards to deduce how much matter is present. The motions are a result of the mutual gravitational attraction of all the material, whether normal matter such as stars, or dark matter. Astronomers' existing models of how galaxies form and rotate suggest that the Milky Way is surrounded by a halo of dark matter. They are not able to precisely predict what shape this halo takes, but they do expect to find significant amounts in the region around the Sun. But only very unlikely shapes for the dark matter halo - such as a highly elongated form - can explain the lack of dark matter uncovered in the new study. Theories predict that the average amount of dark matter in the Sun's part of the galaxy should be in the range 0.4-1.0 kilograms of dark matter in a volume the size of Earth. The new measurements find 0.00±0.07 kilograms of dark matter in a volume the size of Earth. The new results also mean that attempts to detect dark matter on Earth by trying to spot the rare interactions between dark matter particles and "normal" matter are unlikely to be successful. Despite the new results, the Milky Way certainly rotates much faster than the visible matter alone can account for. So, if dark matter is not present where we expected it, a new solution for the missing mass problem must be found. On the other hand, Sean Carroll indicates the biggest issue with this paper is that researchers actually do not measure the dark matter distribution near the Sun; they try to measure it in a region between 1500 and 4000 parsecs below the galactic plane (which is actually pretty far away), and then fit to a model and extrapolate to what we should have nearby. This kind of procedure relies on our understanding of the vertical structure of the galactic disk, which isn’t all that great. So it’s definitely an intriguing result, one that should be taken seriously and followed up by other surveys, but nothing to lose sleep over just yet.
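As a sanity check on those numbers (my back-of-envelope arithmetic, using the commonly quoted local density of roughly 0.4 GeV/cm³, Earth's volume of about 1.1 × 10²⁷ cm³, and 1 GeV/c² ≈ 1.8 × 10⁻²⁷ kg):

\[ M \approx \rho_{\rm DM}\, V_\oplus \approx 0.4\ \tfrac{\mathrm{GeV}}{\mathrm{cm}^3} \times 1.1\times10^{27}\ \mathrm{cm}^3 \approx 4\times10^{26}\ \mathrm{GeV} \approx 0.8\ \mathrm{kg} \]

which indeed lands inside the predicted 0.4-1.0 kilogram range.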

 

So what to make of it? We have 2 different claims regarding dark matter above. The truth is, they do not cancel each other out. The truth is that the distribution of dark matter in our galaxy is very poorly known. You may have bigger clumps of dark matter in the galactic center and smaller ones elsewhere. Nothing wrong there. Both claims above use novel techniques, and their analyses have not been repeated by anyone else. At this point you should understand that both are tentative, and (based on the history of radical claims) the odds are against them - both might be wrong. But they are surely exciting. The idea of a higher concentration of dark matter within the galactic center is not new; Phil Plait recently wrote about Abell 520. Abell 520 is more than one cluster: it's actually a collision between two or more clusters. In the picture below, gas has been colored green so you can see it (invisible to the eye, the X-rays were detected by the Chandra Observatory). The orange glow is from stars in galaxies (as seen by the Canada-France-Hawaii and Subaru telescopes). The blue is actually a map of the dark matter made using Hubble observations. The gravity of dark matter distorts the light passing through from more distant galaxies, making it possible to map out the location of the otherwise invisible stuff.

 

matter07.jpg

 

The problem is, there's a clear peak in the dark matter right in the middle of the cluster, not off to the sides as expected. It looks as if the dark matter slammed to a halt in the middle of the collision instead of sailing on. The trouble is we just don't have enough examples of cluster collisions to know how weird Abell 520 really is. But these are exciting times for those who follow dark matter research, with more and more data coming in. The image is emerging slowly, but sooner or later we will get the full picture.

 

And dark energy? We recently discovered that the universe is expanding faster and faster, not slower and slower as was the case when it was younger. What is presumably responsible is called "dark energy", but unfortunately, it's actually not energy. It is tension, not energy - a combination of pressure and energy density. So why do people call it "energy"? Because it sounds cool. Scientists know exactly what is being referred to, so this terminology causes no problem on the technical side; most of the public doesn't care exactly what is being referred to, so arguably there's no big problem on the non-technical side either. But if you really want to know what's going on, it's important to know that dark energy isn't a dark form of energy, but something more subtle. We don't yet know what is responsible for the dark energy whose presence we infer from the accelerating universe. And it may be quite a while before we do.
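The "tension" point can be made precise with one standard equation (the Friedmann acceleration equation; nothing here is specific to any particular dark energy model):

\[ \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) \]

Accelerated expansion (ä > 0) requires ρ + 3p/c² < 0, i.e. a pressure more negative than -ρc²/3. A cosmological constant sits at p = -ρc², and negative pressure is exactly what tension is - hence "tension, not energy".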

 

 

Credits: Wikipedia, Matt Strassler, DOE/Brookhaven National Laboratory, University of Southampton, Delft University of Technology, Christoph Weniger, Lubos Motl, Jester, ESO, Sean Carroll

Hrvoje Crvelin

/dev/frandom

Posted by Hrvoje Crvelin Apr 19, 2012

Randomness has somewhat differing meanings as used in various fields. It also has common meanings which are connected to the notion of predictability (or lack thereof) of events. The Oxford English Dictionary defines random as "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard". This concept of randomness suggests a non-order or non-coherence in a sequence of symbols or steps, such that there is no intelligible pattern or combination. A random number generator (RNG) is a computational or physical device designed to generate a sequence of numbers or symbols that lack any pattern (appear random). Researchers at The Australian National University have developed the fastest random number generator in the world by listening to the "sounds of silence" (not the song!).

 

frandom.jpg

The researchers have tuned their very sensitive light detectors to listen to vacuum - a region of space that is empty. Vacuum was once thought to be completely empty, dark, and silent until the discovery of the modern quantum theory. Since then scientists have discovered that vacuum is an extent of space that has virtual sub-atomic particles spontaneously appearing and disappearing (virtual particles).

 

It is the presence of these virtual particles that gives rise to random noise. This "vacuum noise" is omnipresent and may affect, and ultimately pose a limit to, the performance of fibre optic communication, radio broadcasts and computer operation.

 

Random number generation has many uses in information technology. Global climate prediction, air traffic control, electronic gaming, encryption, and various types of computer modelling all rely on the availability of unbiased, truly random numbers. To date, most random number generators are based on computer algorithms. Although computer generated random numbers can be useful, knowing the input conditions to the algorithm will lead to predictable and reproducible output, thus making the numbers not truly random. To overcome this issue, random number generators relying on inherently random physical processes, such as radioactive decay and chaotic behaviour in circuits, have been developed.
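To see the predictability problem concretely, here is a minimal shell sketch: seeding bash's built-in pseudo-random generator with the same value reproduces the same "random" sequence every time (the actual values differ between bash versions, but within one version the two runs print identical output).

# seed bash's built-in PRNG with a fixed value, twice;
# both lines print the exact same "random" triple
RANDOM=42; echo "$RANDOM $RANDOM $RANDOM"
RANDOM=42; echo "$RANDOM $RANDOM $RANDOM"

A physical source like vacuum noise has no seed to know in the first place.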

 

Vacuum noise is one of the ultimate sources of randomness because it is intrinsically broadband and its unpredictability is guaranteed by quantum theory. Because of this, researchers were able to generate billions of random numbers every second. According to the researchers, they could easily push this technology even faster, but they have already reached the capacity of their Internet connection. The random number generator is online and can be accessed from anywhere, anytime around the world by clicking here. Moreover, anyone who downloads live random numbers from the ANU website gets a fresh and unique sequence of numbers, different from that of all other users.
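If you want to pull numbers from the service programmatically rather than through the web page, the ANU group exposes a simple JSON interface; the exact endpoint and parameters below are my recollection of their public API, so treat them as an assumption and check the site's documentation:

# fetch five live quantum random bytes (0-255) as JSON
# (endpoint/parameters assumed from the ANU QRNG public docs)
curl -s 'https://qrng.anu.edu.au/API/jsonI.php?length=5&type=uint8'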

 

 

Credits: Australian National University (ANU)

Hrvoje Crvelin

Auxin

Posted by Hrvoje Crvelin Apr 19, 2012

Have you ever heard of auxin, or do you know what it is? Probably not. Me neither. Did you ever wonder why stems grow upwards and roots downwards? Why plants always seem to turn towards the light, and climbing plants run up the trellis rather than down? Well, you might have done so, but you would most likely answer that this was due to the sun.

 

The correct answer is - auxin. And it is not a simple one, since plant hormones - and auxin is a plant hormone - are regulated by complex combinations of various processes. Elke Barbez, Jürgen Kleine-Vehn and Jirí Friml recently identified an important new link in the transport of auxin through the plant, resulting in auxin being stored at specific sites. The results were published in the journal Nature.

 

auxin.jpg

Darwin was already interested in auxin in the 19th century. Only in recent years, however, has the hormone started to relinquish its secrets, thanks to intensive molecular research. Auxin is produced in the young, growing parts of plants and then transported throughout the plant (to a low-lying stem, for example). The stem needs to straighten out as soon as possible to be able to absorb the sun's rays efficiently; therefore more auxin will be delivered to the underside of the stem than to the topside, resulting in the underside growing faster and the stem straightening out. For the same reason, plants in front of windows will always turn to the light. This dynamic regulation of auxin transport allows plants to take optimal advantage of local and changing conditions.

 

The transport of auxin through the plant plays a vital role. And, from all appearances, it is not a simple matter. Researchers identified an important new link and means of transport for auxin: PILS proteins. PILS proteins are vital for auxin-dependent plant growth and regulate the intracellular storage of the hormone. It is exactly this compartmentalizing of auxin that seems functionally important for the various developmental processes. Higher auxin levels at the right moment and in the right place result in better growth and greater yields. Better regulation of auxin levels would make plants grow more efficiently. The researchers hope to contribute to the development of more efficient growing processes by continuing to unravel auxin transport processes.

 

 

Credits:  VIB (the Flanders Institute for Biotechnology)

I think we do not need to go back far in time to recall some of the recent incidents at sea when it comes to oil spills. One of the most frequently mentioned is the 2010 Gulf spill at the BP platform. Oil spills can be controlled by chemical dispersion, combustion, mechanical containment, and/or adsorption. Spills may take weeks, months or even years to clean up. The environmental effects are devastating.

 

oil1.jpg

Oil penetrates into the structure of the plumage of birds and the fur of mammals, reducing its insulating ability, and making them more vulnerable to temperature fluctuations and much less buoyant in the water. Oil can impair a bird's ability to fly, preventing it from foraging or escaping from predators. As they preen, birds may ingest the oil coating their feathers, irritating the digestive tract, altering liver function, and causing kidney damage. Together with their diminished foraging capacity, this can rapidly result in dehydration and metabolic imbalance. Some birds exposed to petroleum also experience changes in their hormonal balance, including changes in their luteinizing protein. The majority of birds affected by oil spills die without human intervention. Some studies have suggested that less than one percent of oil-soaked birds survive, even after cleaning, although the survival rate can also exceed ninety percent, as in the case of the Treasure oil spill. Heavily furred marine mammals exposed to oil spills are affected in similar ways. Oil coats the fur of sea otters and seals, reducing its insulating effect, and leading to fluctuations in body temperature and hypothermia. Oil can also blind an animal, leaving it defenseless. The ingestion of oil causes dehydration and impairs the digestive process. Animals can be poisoned, and may die from oil entering the lungs or liver. You just wish you could become Steven Seagal and kick some ***. Of course, humans are affected too. The Deepwater Horizon oil spill in the Gulf of Mexico in April 2010, for example, will have a large economic impact on the U.S. Gulf fisheries. A new study published in the Canadian Journal of Fisheries and Aquatic Sciences says that over 7 years this oil spill could have a $US8.7 billion impact on the economy of the Gulf of Mexico. This includes losses in revenue, profit, and wages, and close to 22 000 jobs could be lost. Obviously we depend on oil, so is there a way to fight this problem once it happens?

 

Apparently, there is. A new type of sponge that loves oil as much as it hates water could make a big difference when cleaning up an oil spill. Researchers at Rice University and Penn State University say the tiny sponge they've developed can absorb 100 times its weight in oil. The sponge is made out of carbon nanotubes (of course), with extra boron atoms added at the junctions to boost its absorbency. One of the main reasons it works so well is that adding a bit of boron to the carbon while growing the nanotubes turns them into solid, spongy, reusable blocks, which is what lets the sponge soak up oil spilled in water. Watch a video about the sponge below.

 

 

 

 

The researchers believe the sponge could someday play a significant role in cleaning up oil spills.

 

 

Credits: Nature magazine, Wikipedia

Hrvoje Crvelin

Tarantula nebula II

Posted by Hrvoje Crvelin Apr 18, 2012

Remember the Tarantula nebula? That wonderful image, I just learned, has been released in its full size and you may wish to download it. I use it as my background already. However, I had to resize it. You will most certainly have to do the same. Why? Because it comes as a 267MB file and the picture has a resolution of 20323x16259 pixels!!!! I can say for sure that this image can be used as a desktop background for many, many generations of big screens to come. You have been warned. If you wish to proceed - click here (once again, it is ENORMOUS!!!!).

Hrvoje Crvelin

IRAC

Posted by Hrvoje Crvelin Apr 18, 2012

NASA's Spitzer Space Telescope was launched on August 25, 2003 from Florida's Cape Canaveral Air Force Base. Drifting in a unique Earth-trailing orbit around the Sun, Spitzer sees an optically invisible universe dominated by dust and stars. This is because it observes the universe around us at infrared wavelengths. It can image nebulae of cold dust, peer inside obscured dust clouds where new stars are forming, and detect faint emissions from very distant galaxies. The planned mission period was 2.5 years, with a pre-launch expectation that the mission could extend to five or slightly more years, until the onboard liquid helium supply was exhausted. This occurred on 15 May 2009. Without liquid helium to cool the telescope to the very cold temperatures needed to operate, most instruments are no longer usable. However, the two shortest wavelength modules of the IRAC camera are still operable with the same sensitivity as before the cryogen was exhausted, and will continue to be used in the so-called Spitzer Warm Mission.

 

irac1.jpg

 

Spitzer carries three instruments on-board:

  • IRAC (Infrared Array Camera), an infrared camera which operates simultaneously on four wavelengths (3.6 µm, 4.5 µm, 5.8 µm and 8 µm). Each module uses a 256 × 256 pixel detector - the short wavelength pair use indium antimonide technology, the long wavelength pair use arsenic-doped silicon impurity band conduction technology. The two shorter wavelength bands (3.6 µm & 4.5 µm) for this instrument remain productive after LHe depletion in the spring of 2009, at the telescope equilibrium temperature of around 30 K, so IRAC continues to operate as the "Spitzer Warm Mission".
  • IRS (Infrared Spectrograph), an infrared spectrometer with four sub-modules which operate at the wavelengths 5.3-14 µm (low resolution), 10-19.5 µm (high resolution), 14-40 µm (low resolution), and 19-37 µm (high resolution). Each module uses a 128x128 pixel detector - the short wavelength pair use arsenic-doped silicon blocked impurity band technology, the long wavelength pair use antimony-doped silicon blocked impurity band technology.
  • MIPS (Multiband Imaging Photometer for Spitzer), three detector arrays in the far infrared (128 × 128 pixels at 24 µm, 32 × 32 pixels at 70 µm, 2 × 20 pixels at 160 µm). The 24 µm detector is identical to one of the IRS short wavelength modules. The 70 µm detector uses gallium-doped germanium technology, and the 160 µm detector also uses gallium-doped germanium, but with mechanical stress added to each pixel to lower the bandgap and extend sensitivity to this long wavelength.

 

To commemorate 1000 days of infrared wonders, the program has released a gallery of the 10 best IRAC images. They are stunning! The warm-mission images particularly highlight the continuing capabilities of Spitzer. NASA's Senior Review Panel has recommended extending the Spitzer warm mission through 2015. They specifically commended the Spitzer team for telescope improvements that have made it a powerful instrument for science, especially in exoplanet studies.

 

During its 1000-day undertaking, IRAC used its two shortest-wavelength infrared sensors. However, some of the images include data collected during the cold mission, when all four of its infrared sensors could function. Enjoy!

 

irac2.jpg

IRAC not only probes what is known - it also has uncovered some mysterious objects like this so-called "tornado" nebula.

 

Because the camera is sensitive to light emitted from shocked molecular hydrogen (seen here in green), astronomers think that this strange beast is the result of an outflowing jet of material from a young star that has generated shock waves in surrounding gas and dust.

irac3.jpg

The famous nebula in Orion, located about 1340 light-years from Earth, is actively making new stars today.

 

Although the optical nebula is dominated by the light from four massive, hot young stars, IRAC reveals many other young stars still embedded in their dusty womb.

 

It also finds a long filament of star-forming activity containing thousands of young protostars.

 

Some of these stars may host still-forming planets.

 

This image was taken during Spitzer's warm mission.

irac4.jpg

After a long life of hydrogen-burning nuclear fusion, stars move into later life states whose details depend on their masses.

 

This IRAC image of the Helix Nebula barely spots the star itself at the center, but clearly shows how the aging star has ejected material into space around it, creating a "planetary nebula".

 

The Helix Nebula is located 650 light-years away in the constellation Aquarius.

 

This image was taken during Spitzer's warm mission.

irac5.jpg

The early universe contained only hydrogen and helium. No other chemical elements existed.

 

All of the elements needed for life were created later in the nuclear furnaces of stars, and then ejected into space.

 

IRAC studies how stars mature. It can observe how the processes of stellar evolution affect the environment.

 

The Trifid Nebula hosts stars at all stages of life, surrounded by gas and dust that form a beautiful roseate nebula.

 

It's located 5400 light-years away in the constellation Sagittarius.

irac6.jpg

Within galaxies like the Milky Way, giant clouds of gas and dust coalesce under the influence of gravity until new stars are born.

 

IRAC can both measure the warm dust and peer deeply into it to study the processes at work.

 

In this giant cloud several stellar nurseries can be seen, some still within the tips of dusty "mountains of creation".

 

This image shows the eastern edge of a region known as W5, near the Perseus constellation 7000 light-years away.

irac7.jpg

After blowing away its natal material, the young star cluster seen here emits winds and harsh ultraviolet light that sculpt the remnant cloud into fantastic shapes.

 

Astronomers are not sure when that activity suppresses future star formation by disruption, and when it facilitates star formation through compression.

 

The cluster, known as DR22, is in the constellation Cygnus the Swan.

 

This image was taken during Spitzer's warm mission.

irac8.jpg

IRAC has systematically imaged the entire Milky Way disk, assembling a composite photograph containing billions of pixels with infrared emission from everything in this relatively narrow plane.

 

The image here shows five end-to-end strips spanning the center of our galaxy.

 

This image covers only one-third of the whole galactic plane.

irac9.jpg

Collisions play an important role in galaxy evolution.

 

These two galaxies - the Whirlpool and its companion - are relatively nearby at a distance of just 23 million light-years from Earth.

 

IRAC sees the main galaxy as very red due to warm dust - a sign of active star formation that probably was triggered by the collision.

irac10.jpg

Star formation helps shape a galaxy's structure through shock waves, stellar winds, and ultraviolet radiation.

 

In this image of the nearby Sombrero Galaxy, IRAC clearly sees a dramatic disk of warm dust (red) caused by star formation around the central bulge (blue).

 

The Sombrero is located 28 million light-years away in the constellation Virgo.

irac11.jpg

The many points of light in this field aren't stars but entire galaxies.

 

A few, like the mini-tadpole at upper right, are only hundreds of millions of light-years away so their shapes can be discerned.

 

The most distant galaxies are too far away and appear as dots. Their light is seen as it was over ten billion years ago, when the universe was young.

 

Images in higher resolution can be seen here.

 

 

Credits: NASA, JPL-Caltech, Harvard-Smithsonian Center for Astrophysics

Hrvoje Crvelin

To infinity...

Posted by Hrvoje Crvelin Apr 17, 2012

Mathematics is strange. Especially when it comes to certain hard to comprehend things like infinity. While most people believe that if infinity exists there should be just one, it turns out there are infinitely many infinities - though the route there is subtler than it first looks. Imagine the numbers 1, 2, 3, 4, 5, 6... going on to infinity. That's what you learn in school. Now take away the odd numbers and you get the even numbers, which also go on to infinity. The other way around, take away the even numbers, and you end up with the infinite series of odd numbers. Here is the surprise: these three collections, different as they are, are the same size of infinity, because their members can be paired off one-to-one. Genuinely bigger infinities do exist, though - Cantor showed that the set of all subsets of any set is strictly larger than the set itself, and repeating that step gives an endless ladder of ever larger infinities. Infinitely insane. Mathematics - you either love it or you don't. At the moment this is a theoretical mathematical study, but two researchers from Complutense University of Madrid have recently proved that, under certain conditions (one of the conditions being that the field is generated by current loops situated on the same plane), magnetic fields can send particles to infinity.
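A compact sketch of both claims (standard set theory, nothing specific to the Madrid result):

\[ f(n) = 2n \ \text{ pairs } \{1,2,3,\dots\} \text{ one-to-one with } \{2,4,6,\dots\}, \]
\[ |\mathbb{N}| < |\mathcal{P}(\mathbb{N})| < |\mathcal{P}(\mathcal{P}(\mathbb{N}))| < \cdots \]

The first line is why the naturals, evens and odds all share one "countable" infinity; the second (Cantor's theorem on power sets) is where the infinite tower of ever larger infinities actually comes from.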

 

If a particle "escapes" to infinity it means two things: that it will never stop, and "something else". Regarding the first, the particle can never stop, but it can be trapped, doing circles forever around a point, never leaving an enclosed space. However, the "something else" goes beyond the established limits. If we imagine a spherical surface with a large radius, the particle will cross the surface going away from it, however big the radius may be. Scientists have confirmed through equations that some particles can escape to infinity. One condition is that the charges move under the influence of a magnetic field created by current loops on the same plane. Other requirements should also be met: the particle should be at some point on this plane, with its initial speed parallel to it and far enough away from the loops.
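The "never stops" part is a one-line consequence of the magnetic force doing no work (a textbook fact, not something specific to this study):

\[ \frac{d}{dt}\left(\tfrac{1}{2}m|\mathbf{v}|^2\right) = q\,(\mathbf{v}\times\mathbf{B})\cdot\mathbf{v} = 0 \]

The force is always perpendicular to the velocity, so the particle's speed is constant forever - it can change direction, circle, or fly off, but never slow down.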

 

infinity.jpg

These may not be the only conditions under which particles escape to infinity - there could be others - but in this case it has been confirmed that the phenomenon occurs. The researchers recognise that the ideal conditions for this study are "with a magnetic field and nothing else". Reality always has other variables to be considered, such as friction, so in practice actually reaching infinity remains a distant possibility. The movement of particles in magnetic fields is a very significant problem in fields such as applied and plasma physics. For example, one of the challenges faced by scientists who study nuclear energy is the confinement of particles by magnetic fields. Accelerators such as the LHC also use magnetic fields to accelerate particles. In these conditions the particles do not escape to infinity, but remain doing circles until they acquire the speed that the experiments need.

 

 

Credits: Plataforma SINC

Hrvoje Crvelin

Tarantula nebula

Posted by Hrvoje Crvelin Apr 17, 2012

I touched on the subject of nebulae while talking about the Big Bang here - check it out if you missed it. The Tarantula nebula (also known as 30 Doradus, or NGC 2070) is the brightest star-forming region in our galactic neighbourhood and home to the most massive stars ever seen. The nebula resides 170000 light-years away in the Large Magellanic Cloud, a small satellite galaxy of our Milky Way. No known star-forming region in our galaxy is as large or as prolific as 30 Doradus. The image below comprises one of the largest mosaics ever assembled from Hubble photos and consists of observations taken by Hubble's Wide Field Camera 3 and Advanced Camera for Surveys, combined with observations from the European Southern Observatory's MPG/ESO 2.2-metre telescope that trace the location of glowing hydrogen and oxygen (this image has been released to celebrate Hubble's 22nd anniversary, by the way). The stars in this image add up to a total mass millions of times bigger than that of our Sun. The image is roughly 650 light-years across and contains some rambunctious stars, from one of the fastest rotating stars to the speediest and most massive runaway star. The colours come from the glowing hot gas that dominates regions of the image. Red signifies hydrogen gas and blue, oxygen.

 

tarantula.jpg

The nebula is close enough to Earth that Hubble can resolve individual stars, giving astronomers important information about the stars' birth and evolution. Many small galaxies have more spectacular starbursts, but the Large Magellanic Cloud's 30 Doradus is one of the only star-forming regions that astronomers can study in detail. The star-birthing frenzy in 30 Doradus may be partly fueled by its close proximity to its companion galaxy, the Small Magellanic Cloud. The image reveals the stages of star birth, from embryonic stars a few thousand years old still wrapped in dark cocoons of dust and gas to behemoths that die young in supernova explosions. 30 Doradus is a star-forming factory, churning out stars at a furious pace over millions of years. The Hubble image shows star clusters of various ages, from about 2 million to about 25 million years old.

 

The region's sparkling centerpiece is a giant, young star cluster named NGC 2070, only 2 million to 3 million years old. Its stellar inhabitants number roughly 500000! The cluster is a hotbed for young, massive stars. Its dense core, known as RMC 136, is packed with some of the heftiest stars found in the nearby Universe, weighing more than 100 times the mass of our Sun.

 

The massive stars are carving deep cavities in the surrounding material by unleashing a torrent of ultraviolet light, which is etching away the enveloping hydrogen gas cloud in which the stars were born. The image reveals a fantasy landscape of pillars, ridges, and valleys. Besides sculpting the gaseous terrain, the brilliant stars also may be triggering a successive generation of offspring. When the radiation hits dense walls of gas, it creates shocks, which may be generating a new wave of star birth.

 

 

Credits: Wikipedia, ESA, Hubble

Hrvoje Crvelin

Bubble shooting Oracle

Posted by Hrvoje Crvelin Apr 13, 2012

It's been more than 6 months now (somewhere in September last year) since I started to write this blog, and the idea was that I would mix news from the world of quantum mechanics with my NetWorker expertise. Things turned out a bit different, but for no specific reason. I plan to come back to NetWorker, and as the first post dedicated to this I will share a little story about troubleshooting I have been doing since last December. I was not working 24/7 on this issue, but it took more time than it should have. Within this post I plan to share the problem details and the troubleshooting path I took, along with the resolution at the end.

 

First, a few words about my existing backup environment. I have a backup server in a cluster (2 vPars, HPUX 11v3) running between two sites (active-active) and 4 additional storage nodes (2 per site; one Linux based and one HPUX vPar based). The NetWorker version I run is 7.5SP3 patch 5 (oldies but goldies - and I do not plan to upgrade before this September). To make it simple from the start, let me just say that this issue has nothing to do with hardware - you may have a single backup server running Windows if you want. Most of our database backups fall into 3 categories: Oracle, SQL and SAP (where the last one is more of an application, but OK). All our Oracle instances are 11R1 (more details later) and most of them are part of a geoRAC installation. We have several of those. For backup I typically use the NetWorker module for Oracle (NMO) 5.0 build 347. Most of my backups (and that's some 99%) go to VTL (EDL 4406) and Oracle backups are no exception. With that in mind, let's kick off this story.

 

futex1.jpg

For reasons not important to this story, we were in the process of migrating the old DWH RAC to a new one. While the major change was network-wise, we also changed the Linux boxes to Red Hat 5.5 (5.2 before); hardware-wise we were now using G6 blades, the RAC itself had 4 nodes per site (8 in total), and we would use a VIP dedicated to backup initiation. The NetWorker version and module would not change, so I installed NetWorker client 7.5.3.5 and NMO 5.0b347 on top of that. Installation is straightforward and nothing outside what has already been covered by EMC documentation. Libraries were relinked on each RAC node to be used (for backup we would use 2). We used ASMLib, and as storage we used EMC Symmetrix.

 

We originally went for Oracle 11.1.079, but we had some issues so the database was later downgraded to 11.1.075, and due to bug 827955.1 we later moved to 11.1.072. But since I wanted to run tests at an early stage, I configured everything while still running 11.1.079. Since both Oracle and Linux people were tweaking and tuning different parameters, I didn't expect everything to go smoothly - and indeed, it didn't.

 

During a run of the backup I noticed that one session hung. It just hung there doing nothing. OK, that may happen if someone is working on the database or the host or whatever... so, kill the pending session and do it again. Second run - all OK. Good, enabling scheduled backup and going home. The next morning, one session had hung again. Hm. Let's check the parameters... the nsrnmo parameters are the standard ones I use elsewhere, nothing unusual:

 

ORACLE_HOME=/oracle/base/product/db111079_h1
PATH=/bin:/usr/sbin:/usr/bin:$ORACLE_HOME/bin
ORACLE_SID=<DB_SID>
ORACLE_USER=oradba
NSR_RMAN_ARGUMENTS="msglog '/oracle/base/admin/DB_SID/rman/log/msglog_DB_SID_data.log' append"
NSR_SB_DEBUG_FILE=/nsr/applogs/nsrnmostart.log

 

I have two nsrnmo commands, one for db backups and one for arch logs, to split the log files; I usually call them nsrnmo_DBSID_[arch|data]. Anyway, nothing wrong there. Let's check the RMAN script I use (DB backup):

 

connect rcvcat login/pass@CNT
connect target login/pass@CNT;
run {
      allocate channel t1 type 'SBT_TAPE' connect 'login/pass@CNT_NODE1'
      parms 'ENV=(NSR_SERVER=fappnw01.corp.vattenfall.com, NSR_DATA_VOLUME_POOL=dbR, NSR_CLIENT=VIP_NODE)';
      allocate channel t2 type 'SBT_TAPE' connect 'login/pass@CNT_NODE1'
      parms 'ENV=(NSR_SERVER=fappnw01.corp.vattenfall.com, NSR_DATA_VOLUME_POOL=dbR, NSR_CLIENT=VIP_NODE)';
      allocate channel t3 type 'SBT_TAPE' connect 'login/pass@CNT_NODE1'
      parms 'ENV=(NSR_SERVER=fappnw01.corp.vattenfall.com, NSR_DATA_VOLUME_POOL=dbR, NSR_CLIENT=VIP_NODE)';
      allocate channel t4 type 'SBT_TAPE' connect 'login/pass@CNT_NODE2'
      parms 'ENV=(NSR_SERVER=fappnw01.corp.vattenfall.com, NSR_DATA_VOLUME_POOL=dbR, NSR_CLIENT=VIP_NODE)';
      allocate channel t5 type 'SBT_TAPE' connect 'login/pass@CNT_NODE2'
      parms 'ENV=(NSR_SERVER=fappnw01.corp.vattenfall.com, NSR_DATA_VOLUME_POOL=dbR, NSR_CLIENT=VIP_NODE)';
      allocate channel t6 type 'SBT_TAPE' connect 'login/pass@CNT_NODE2'
      parms 'ENV=(NSR_SERVER=fappnw01.corp.vattenfall.com, NSR_DATA_VOLUME_POOL=dbR, NSR_CLIENT=VIP_NODE)';
      sql "BEGIN sys.dbms_system.ksdwrt(2, ''RMAN INCREMENTAL 0 BACKUP ON ''||to_char(sysdate, ''DD-MM-YYYY HH24:MI:SS'')|| '' - SCN BEFORE - '' || dbms_flashback.get_system_change_number);END;";
      backup full filesperset 64 force database include current controlfile format 'ORCL:/FULL_DB_%d_%u/';
      sql "BEGIN sys.dbms_system.ksdwrt(2, ''RMAN INCREMENTAL 0 BACKUP ON ''||to_char(sysdate, ''DD-MM-YYYY HH24:MI:SS'')|| '' - SCN AFTER - '' || dbms_flashback.get_system_change_number);END;";
      sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
      change archivelog all validate;
      backup filesperset 64 archivelog all not backed up 1 times format 'ORCL:/ARCHLOG_%d_%u/';
      delete noprompt archivelog all backed up 1 times to device type 'SBT_TAPE';
      release channel t1;
      release channel t2;
      release channel t3;
      release channel t4;
      release channel t5;
      release channel t6;
}

 

 

So, this is a fairly simple script, but we tend to keep things simple. I have some 100+ databases and they all use the same approach and the same script format. There might be a difference in the number of channels allocated (which usually depends on the size of the database in my case), but they are all the same. If you check the logs, you see - nothing. It is as if someone froze the whole session. When something like this happens, you start to think about what is different compared to the original DWH setup. The Linux version is different, but we already use it on other RACs. The Oracle version is different, but we use it on other RACs (with that same Linux version). Storage is a bit different, as we used EMC instead of NetApp and FC instead of NFS, but there was nothing pointing in that direction. What about the backup itself? Well, it is exactly the same. So it has to be something with either Oracle or Linux or maybe even the network. It's not the backup for sure. At least that's what I thought.

 

But, talking to my Oracle colleague, I found he did not share this opinion. From his point of view, everything works in Oracle. Actually, if he does RMAN to disk he only gets success. My record was running the backup 3 times in a row before I got a hang. And when he checks the status in Oracle, all he sees there is a WAIT on sbtwrite2. And that's coming from the backup software. So, sorry my friend, he said, but it all comes back to you. And since this was a production environment, we made the decision to dump backups as RMAN to disk on an NFS share until we fixed this. OK, this means war! I mean challenge (yeah right, it is war!). I really hate when it all comes back to NetWorker and there is nothing to hold on to in the logs. It is always something silly, but this problem did have the potential to be a real pain. Actually it was already a pain, as I could not do RMAN to the storage node while RMAN to disk by Oracle worked flawlessly. Things didn't look good for me, so I had to start testing. A summary of what has changed between the original and the new DWH is the following:

  • RedHat 5.2 to RedHat 5.5
  • Storage from Dell Equalogic using ASM to EMC Symmetrix using ASM with ASMLib
  • Oracle version (in both cases 11R1, but different patch levels)

 

Having stated the above, I must also say that:

  • 80% of our Oracle landscape is now running on RH 5.5 without any issues
  • a certain % of Oracle installations run the same Oracle level and patch as the affected one
  • the nature of the issue does not seem to indicate it could be in any way related to the storage layout (except more IO, as the network pipe is fatter in this case so throughput is higher)
  • all our databases are created from a template database using RMAN

 

I started to work on these tests just before I was about to leave for vacation - around a week before and a week or two after New Year. But both my Oracle colleague and I continued to test as time allowed. Since the database is 2.5TB, it takes some 2 hours to back up, and then you see whether something is hanging or not (or, if you connect to Oracle, you just check if you have waits). I did notice that I could reproduce hangs more often in the mornings than in the afternoons, but there was never anything I could call 100% reproducible. Sometimes one session would fail. Sometimes two. Sometimes from the same node, sometimes from both. Sometimes going to one drive and sometimes to both. It was just messy. And with the primary victim now backing up to NFS and being outside my reach, the only thing I could do was set up a dedicated test environment and use the acceptance copy for tests. First, I tested it against the standard backup environment and voila - the issue was there. Good! This meant the acceptance environment was my new victim. You may wonder if this is a second RAC - no, it is the same one. For DWH, we used 4 nodes on site A for production and 4 nodes on site B for acceptance. The fact that I could immediately reproduce the issue with the other 4 nodes on the other site made me more convinced that something specific to this very setup was wrong. Time to fight this fuzzy puzzle.

 

futex5.png

 

I was lucky to have a test environment. Actually, it is a standalone Linux blade on which I had installed NetWorker 8 alpha code to see what it is like. But just as I had installed it, I had to change it, install 7.5.3.5, and have a test backup server running against the acceptance DWH. Needless to say, I could get the issue again, and that only meant the issue had nothing to do with the original backup server being on HPUX ia64 (not that I believed that anyway). It was vacation time and not much was tested, but what was tested was:

  • checking that load does not affect the operation (since I would get a higher % of failures, like the one described, in morning tests)
  • checking there are no error messages on the VTL HBA - there are none
  • checking system logs on all involved hosts - no related messages there
  • checking if running with different RMAN channels makes any difference - it does not
  • checking if disk backup (Oracle) works fine - it does (****!)
  • running this from one host instead of two - it does not matter - the issue is still there
  • playing with the RMAN script and testing with different filesperset values (as that way you can control how long a session lasts) - same thing
  • checking both PowerLink and MetaLink - nothing helpful at this point
  • testing both NMO 5.0b311 and NMDA 1.2 - both have the same issue (one thing I did notice is that my backups were a bit faster using NMDA than NMO)
  • checking if running backup against the EDL when it is 100% idle works - nope, you can still reproduce it
  • cursing and using some strong language - yes, but it didn't help

 

At this point I was frustrated and ready to get back from my holidays.

 

futex4.jpg

 

After getting back from vacation I decided to take things more seriously and use only NMDA. Performance-wise it looks even better than NMO, but issue-wise it still has the same problem. When you look in Oracle with an SQL query, you see that Oracle reports the stale session as alive, but waiting on sbtwrite2. A process trace showed that the whole thing would hang with the following:

 

futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL

 

The above can usually mean a lot of different things and as such it is not very helpful. It is actually the worst thing that can happen, as then the DBA comes in with a smile and says this is your issue. Initially we interpreted the futex line as the process sleeping forever until it gets a WAKE UP signal from NetWorker that never arrives. Because the very same binaries were used elsewhere, I was still sure this had nothing to do with NetWorker, and the whole sleep thingy made me think this could somehow be network related too. To be honest, it could be anything, and there is only one way to find out the truth. So, time to use debug. I won't enumerate all the tests here, especially since there were many, but I will list the most important ones along with the findings.
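For reference, this is the kind of inspection involved - a minimal sketch, not the exact commands used (the v$session/v$process columns and strace flags are standard; the program filter and trace path are illustrative choices of mine):

# map RMAN channel sessions to their OS process IDs (run as SYSDBA)
sqlplus -s / as sysdba <<'EOF'
SELECT s.sid, s.event, p.spid
  FROM v$session s, v$process p
 WHERE s.paddr = p.addr
   AND s.program LIKE 'rman%';
EOF

# then attach to a stuck channel process and capture its system calls
strace -f -p 10694 -o /tmp/channel_10694.trace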

 

futex2.jpg

 

 

Group test 1: running NMDA with debug 6

 

You can easily see when a hang happens from the timestamps on the NMDA debug files:

 

[root@ACC_NODE1 logs]# ls -ltra

total 961632

drwxrwxrwx 4 root   root          4096 Jan 11 11:52 ..

drwxrwxrwx 3 root   root          4096 Jan 11 12:21 .

-rw-rw-rw- 1 oradba oinstall 191809382 Jan 11 13:01 libnsrora_Oracle_2012_01_11.12_21_49.10694.log

-rw-rw-rw- 1 oradba oinstall 244457256 Jan 11 13:18 libnsrora_Oracle_2012_01_11.12_21_46.10616.log

-rw-rw-rw- 1 oradba oinstall 255931720 Jan 11 13:18 libnsrora_Oracle_2012_01_11.12_21_45.10594.log

-rw-rw-rw- 1 oradba oinstall 291507508 Jan 11 13:18 libnsrora_Oracle_2012_01_11.12_21_44.10572.log

 

[root@ACC_NODE3 tmp]# ls -ltra

total 1032320

drwxrwxrwx 4 root   root          4096 Jan 11 11:52 ..

drwxrwxrwx 3 root   root          4096 Jan 11 12:21 .

-rw-rw-rw- 1 oradba oinstall 129702255 Jan 11 12:51 libnsrora_Oracle_2012_01_11.12_21_49.26325.log

-rw-rw-rw- 1 oradba oinstall 315376885 Jan 11 13:17 libnsrora_Oracle_2012_01_11.12_21_48.26299.log

-rw-rw-rw- 1 oradba oinstall 356607452 Jan 11 13:17 libnsrora_Oracle_2012_01_11.12_21_47.26279.log

-rw-rw-rw- 1 oradba oinstall 254329018 Jan 11 13:17 libnsrora_Oracle_2012_01_11.12_21_47.26204.log

 

 

Do you see the pattern yet?  We'll get there.  Anyway, at the end of the day, I had 4 sessions hanging:

 

[root@ACC_NODE1 logs]# ls -ltra

total 2074096

drwxrwxrwx 4 root   root          4096 Jan 11 11:52 ..

drwxrwxrwx 3 root   root          4096 Jan 11 12:21 .

-rw-rw-rw- 1 oradba oinstall 191809382 Jan 11 13:01 libnsrora_Oracle_2012_01_11.12_21_49.10694.log

-rw-rw-rw- 1 oradba oinstall 571260844 Jan 11 14:11 libnsrora_Oracle_2012_01_11.12_21_45.10594.log

-rw-rw-rw- 1 oradba oinstall 682042138 Jan 11 14:17 libnsrora_Oracle_2012_01_11.12_21_44.10572.log

-rw-rw-rw- 1 oradba oinstall 676644922 Jan 11 14:25 libnsrora_Oracle_2012_01_11.12_21_46.10616.log

 

[root@ACC_NODE3 tmp]# ls -ltra

total 1766728

drwxrwxrwx 4 root   root          4096 Jan 11 11:52 ..

drwxrwxrwx 3 root   root          4096 Jan 11 12:21 .

-rw-rw-rw- 1 oradba oinstall 129702255 Jan 11 12:51 libnsrora_Oracle_2012_01_11.12_21_49.26325.log

-rw-rw-rw- 1 oradba oinstall 355514174 Jan 11 13:21 libnsrora_Oracle_2012_01_11.12_21_48.26299.log

-rw-rw-rw- 1 oradba oinstall 677667874 Jan 11 13:45 libnsrora_Oracle_2012_01_11.12_21_47.26279.log

-rw-rw-rw- 1 oradba oinstall 644436476 Jan 11 14:18 libnsrora_Oracle_2012_01_11.12_21_47.26204.log

 

It is one thing when you are looking yourself and another when someone is telling you the story. If I told you that the first hints are already above in italic, you might have found the first pattern. But I didn't check timestamps with the ls command; I got to learn about the pattern the other way around - by looking at what is inside the debug log files. First, let's check the RMAN log (the very end):

 

futex6.jpg

When we compare the start of the session we see:

futex7.jpg

OK, so the bad boys are channels number 2, 6, 7 and 8. At this point you go to Oracle, run an SQL query based on the SID to find each channel's SPID, and kill the SPID at OS level. As a consequence, RMAN reports a failed channel, fails over to the next one, and then restarts the failed session. Now, sometimes this causes the backup to complete and sometimes the restarted session hangs again. No apparent reason or pattern there yet. When you check the logs, though, you see that the hang happens at a different place each time - random. For example:

 

# tail libnsrora_Oracle_2012_01_11.12_21_49.10694.log

(pid = 10694) (date = 01/11/12 13:01:49) nwora_sess_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) Leaving sbtwrite2 (0)

(pid = 10694) (date = 01/11/12 13:01:49) Entering sbtwrite2()

(pid = 10694) (date = 01/11/12 13:01:49) nwora_sess_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_nw_sess_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_stream_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_stream_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_nw_sess_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) nwora_sess_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) Leaving sbtwrite2 (0)

 

The first time I saw this I was a bit worried, as there it was again - sbtwrite2. By checking the other hung sessions and their logs, I could see that the last line reported is random and could be any line of what I would call the sbtwrite loop:

 

[…]

(pid = 10694) (date = 01/11/12 13:01:49) Entering sbtwrite2()

(pid = 10694) (date = 01/11/12 13:01:49) nwora_sess_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_nw_sess_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_stream_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_stream_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_nw_sess_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) nwora_sess_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) Leaving sbtwrite2 (0)

(pid = 10694) (date = 01/11/12 13:01:49) Entering sbtwrite2()

(pid = 10694) (date = 01/11/12 13:01:49) nwora_sess_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_nw_sess_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_stream_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_stream_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_nw_sess_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) nwora_sess_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) Leaving sbtwrite2 (0)

(pid = 10694) (date = 01/11/12 13:01:49) Entering sbtwrite2()

(pid = 10694) (date = 01/11/12 13:01:49) nwora_sess_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_nw_sess_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_stream_write: Entering.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_stream_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) lnm_nw_sess_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) nwora_sess_write: Exiting.

(pid = 10694) (date = 01/11/12 13:01:49) Leaving sbtwrite2 (0)

[…]

 

To me, being so random meant this probably had no significance for the case and I should move on to the second set of tests. Because of that, I stopped exploring details in this group test (though the devil is in the details).

 

futex8.jpg

 

Group test 2: running NMDA with debug 6 and trace with debug 2

 

I modified the RMAN script and started the second test. The only modification was adding debug=2 to the channel line (my reconstruction of such a line follows the list below). I was a bit afraid of this test because:

  • I knew it was going to be slow
  • if there is a race condition due to high I/O, you usually do not see the error during debug sessions
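For illustration, here is a channel line from the script above with the modification applied - this is my reconstruction from the description, so take the exact placement of the clause as an assumption rather than the verbatim script:

      allocate channel t1 type 'SBT_TAPE' connect 'login/pass@CNT_NODE1'
      parms 'ENV=(NSR_SERVER=fappnw01.corp.vattenfall.com, NSR_DATA_VOLUME_POOL=dbR, NSR_CLIENT=VIP_NODE)' debug=2;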

 

Well, I guess it is not a race condition, as I got it again.

 

7PGA80020I-nst(J) sdlt320    TST00001     writing at 35 MB/s, 779 GB, 4 sessions

7PGA80020J-nst(J) sdlt320    TST00004     writing at 51 MB/s, 926 GB, 4 sessions

 

 

Sessions:

VIP_NODE:RMAN:ORCL:/FULL_DB_A03ORK_6mn0hnrb/ saving to pool 'dbR' (TST00001) 222 GB

VIP_NODE:RMAN:ORCL:/FULL_DB_A03ORK_6nn0hnrc/ saving to pool 'dbR' (TST00001) 222 GB

VIP_NODE:RMAN:ORCL:/FULL_DB_A03ORK_6pn0hnrc/ saving to pool 'dbR' (TST00002) 112 GB

VIP_NODE:RMAN:ORCL:/FULL_DB_A03ORK_6on0hnrc/ saving to pool 'dbR' (TST00001) 221 GB

VIP_NODE:RMAN:ORCL:/FULL_DB_A03ORK_6qn0hnrd/ saving to pool 'dbR' (TST00004) 235 GB

VIP_NODE:RMAN:ORCL:/FULL_DB_A03ORK_6rn0hnrd/ saving to pool 'dbR' (TST00004) 234 GB

VIP_NODE:RMAN:ORCL:/FULL_DB_A03ORK_6sn0hnrd/ saving to pool 'dbR' (TST00004) 222 GB

 

The hung session came from ACC_NODE3 and I could see the following (italic marks the hung session):

 

[root@ACC_NODE3 logs]# ls -ltra

total 1685080

drwxrwxrwx 4 root   root          4096 Jan 11 11:52 ..

-rw-r--r-- 1 oradba oinstall         0 Jan 11 12:21 nmda_oracle.messages.raw

drwxrwxrwx 2 root   root          4096 Jan 12 01:03 .

-rw-rw-rw- 1 oradba oinstall 237087399 Jan 12 04:53 libnsrora_Oracle_2012_01_12.01_03_30.26860.log

-rw-rw-rw- 1 oradba oinstall 494572172 Jan 12 07:50 libnsrora_Oracle_2012_01_12.01_03_35.27011.log

-rw-rw-rw- 1 oradba oinstall 495547028 Jan 12 07:50 libnsrora_Oracle_2012_01_12.01_03_33.26983.log

-rw-rw-rw- 1 oradba oinstall 496588058 Jan 12 07:50 libnsrora_Oracle_2012_01_12.01_03_31.26950.log

 

[root@ACC_NODE3 logs]# tail libnsrora_Oracle_2012_01_12.01_03_30.26860.log

(pid = 26860) (date = 01/12/12 04:53:31) lnm_nw_sess_write: Exiting.

(pid = 26860) (date = 01/12/12 04:53:31) nwora_sess_write: Exiting.

(pid = 26860) (date = 01/12/12 04:53:31) Leaving sbtwrite2 (0)

(pid = 26860) (date = 01/12/12 04:53:31) Entering sbtwrite2()

(pid = 26860) (date = 01/12/12 04:53:31) nwora_sess_write: Entering.

(pid = 26860) (date = 01/12/12 04:53:31) lnm_nw_sess_write: Entering.

(pid = 26860) (date = 01/12/12 04:53:31) lnm_stream_write: Entering.

(pid = 26860) (date = 01/12/12 04:53:31) lnm_stream_write: Exiting.

(pid = 26860) (date = 01/12/12 04:53:31) lnm_nw_sess_write: Exiting.

(pid = 26860) (date = 01/12/12 04:53:31) nwora_sess_write: Exiting.

 

At the end of the day I had 2 sessions hanging; it was one session per site:

 

[root@ACC_NODE1 logs]# ls -la

total 2546784

drwxrwxrwx 2 root   root          4096 Jan 12 01:03 .

drwxrwxrwx 4 root   root          4096 Jan 11 11:52 ..

-rw-rw-rw- 1 oradba oinstall 682041814 Jan 12 10:31 libnsrora_Oracle_2012_01_12.01_03_26.14762.log

-rw-rw-rw- 1 oradba oinstall 568082215 Jan 12 09:03 libnsrora_Oracle_2012_01_12.01_03_27.14784.log

-rw-rw-rw- 1 oradba oinstall 676644598 Jan 12 10:28 libnsrora_Oracle_2012_01_12.01_03_29.14896.log

-rw-rw-rw- 1 oradba oinstall 678551808 Jan 12 10:29 libnsrora_Oracle_2012_01_12.01_03_34.15149.log

-rw-r--r-- 1 oradba oinstall         0 Jan 11 12:21 nmda_oracle.messages.raw

 

[root@ACC_NODE3 logs]# ls -la

total 2210988

drwxrwxrwx 2 root   root          4096 Jan 12 01:03 .

drwxrwxrwx 4 root   root          4096 Jan 11 11:52 ..

-rw-rw-rw- 1 oradba oinstall 237087399 Jan 12 04:53 libnsrora_Oracle_2012_01_12.01_03_30.26860.log

-rw-rw-rw- 1 oradba oinstall 677667874 Jan 12 09:56 libnsrora_Oracle_2012_01_12.01_03_31.26950.log

-rw-rw-rw- 1 oradba oinstall 666704510 Jan 12 09:50 libnsrora_Oracle_2012_01_12.01_03_33.26983.log

-rw-rw-rw- 1 oradba oinstall 680344424 Jan 12 09:59 libnsrora_Oracle_2012_01_12.01_03_35.27011.log

 

Remember I asked you above, where multiple sessions were also hanging, whether you see the pattern? Do you see it now? I could see that the channels which hung were t2 on ACC_NODE1 and t4 on ACC_NODE3. No special pattern there, but when you look inside the debug files and compare the timestamps of the last entries written, you see the following:

  • 09:03:31 @ ACC_NODE1 for t2
  • 04:53:31 @ ACC_NODE3 for t4

 

The pattern in this specific run seems to be HH:m3:ss (where ss is the same for both sessions). At this point I checked crontab for both the root and oradba users, but there was nothing that could be correlated to these times. I checked the cron job log, but still, nothing happened at these specific times. Still, the chances of this being pure coincidence are slim, do you agree? OK, I know the drill; I restarted both channels by killing them, took the rest of the day off and went to sleep. Next morning, I found one of the restarted channels frozen again. Time? 11:43:34. OK, so the seconds might not match exactly, but the minute pattern is still there.

 

To make sure I was not being fooled by pure coincidence, I knew I had to run more tests. But I already had a number of test results from previous runs, so I decided to explore those first. Here is the data for the test with 4 hangs:

 

# for file in `ls -1 lib*`

> do tail -1 $file

> done

(pid = 10572) (date = 01/11/12 18:54:08) nwora_sbt_free_sbtinfo2_mem: Exiting.

(pid = 10594) (date = 01/11/12 14:11:46) Entering sbtwrite2()

(pid = 10616) (date = 01/11/12 18:54:08) nwora_sbt_free_sbtinfo2_mem: Exiting.

(pid = 26204) (date = 01/11/12 18:54:08) nwora_sbt_free_sbtinfo2_mem: Exiting.

(pid = 26279) (date = 01/11/12 18:54:08) nwora_sbt_free_sbtinfo2_mem: Exiting.

(pid = 26299) (date = 01/11/12 13:21:48) Leaving sbtwrite2 (0)

(pid = 10694) (date = 01/11/12 13:01:49) Leaving sbtwrite2 (0)

(pid = 26325) (date = 01/11/12 12:51:49) nwora_sess_write: Entering.

 

 

 

OK, there is a sort of WTF in the air right now. But that WTF is about to become WTF big time. Check this out:

 

# for file in `ls -1 lib*`; do echo $file; head -1 $file; tail -1 $file; done

libnsrora_Oracle_2012_01_11.12_21_44.10572.log

(pid = 10572) (date = 01/11/12 12:21:44) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 10572) (date = 01/11/12 18:54:08) nwora_sbt_free_sbtinfo2_mem: Exiting.

libnsrora_Oracle_2012_01_11.12_21_45.10594.log

(pid = 10594) (date = 01/11/12 12:21:45) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 10594) (date = 01/11/12 14:11:46) Entering sbtwrite2()

libnsrora_Oracle_2012_01_11.12_21_46.10616.log

(pid = 10616) (date = 01/11/12 12:21:46) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 10616) (date = 01/11/12 18:54:08) nwora_sbt_free_sbtinfo2_mem: Exiting.

libnsrora_Oracle_2012_01_11.12_21_47.26204.log

(pid = 26204) (date = 01/11/12 12:21:47) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 26204) (date = 01/11/12 18:54:08) nwora_sbt_free_sbtinfo2_mem: Exiting.

libnsrora_Oracle_2012_01_11.12_21_47.26279.log

(pid = 26279) (date = 01/11/12 12:21:47) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 26279) (date = 01/11/12 18:54:08) nwora_sbt_free_sbtinfo2_mem: Exiting.

libnsrora_Oracle_2012_01_11.12_21_48.26299.log

(pid = 26299) (date = 01/11/12 12:21:48) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 26299) (date = 01/11/12 13:21:48) Leaving sbtwrite2 (0)

libnsrora_Oracle_2012_01_11.12_21_49.10694.log

(pid = 10694) (date = 01/11/12 12:21:49) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 10694) (date = 01/11/12 13:01:49) Leaving sbtwrite2 (0)

libnsrora_Oracle_2012_01_11.12_21_49.26325.log

(pid = 26325) (date = 01/11/12 12:21:49) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 26325) (date = 01/11/12 12:51:49) nwora_sess_write: Entering.

 

I can't describe the ecstasy I was in once I saw this. There is a definite and reproducible pattern (once a frozen session occurs): the hang happens at a multiple of 10 minutes after the time the session started. Holy poo!!! Could this be true!? Let's check the actual session I was testing, with two obvious hangs (note that due to restarted sessions things are a bit trickier to see here, as some of the end lines you will see also froze, but at a different time):

 

[root@ACC_NODE1 logs]# for file in `ls -1`; do echo $file; head -1 $file; tail -1 $file; done

libnsrora_Oracle_2012_01_12.01_03_26.14762.log

(pid = 14762) (date = 01/12/12 01:03:26) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 14762) (date = 01/13/12 13:02:16) Entering sbtwrite2()

libnsrora_Oracle_2012_01_12.01_03_27.14784.log

(pid = 14784) (date = 01/12/12 01:03:27) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 14784) (date = 01/12/12 09:03:31) nwora_sess_write: Exiting.

libnsrora_Oracle_2012_01_12.01_03_29.14896.log

(pid = 14896) (date = 01/12/12 01:03:29) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 14896) (date = 01/12/12 11:43:34) nwora_sess_write: Exiting.

libnsrora_Oracle_2012_01_12.01_03_34.15149.log

(pid = 15149) (date = 01/12/12 01:03:34) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 15149) (date = 01/12/12 10:29:34) Leaving sbtinfo2 (0)

 

[root@ACC_NODE3 logs]# for file in `ls -1`; do echo $file; head -1 $file; tail -1 $file; done

libnsrora_Oracle_2012_01_12.01_03_30.26860.log

(pid = 26860) (date = 01/12/12 01:03:30) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 26860) (date = 01/12/12 04:53:31) nwora_sess_write: Exiting.

libnsrora_Oracle_2012_01_12.01_03_31.26950.log

(pid = 26950) (date = 01/12/12 01:03:31) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 26950) (date = 01/12/12 09:56:48) Leaving sbtinfo2 (0)

libnsrora_Oracle_2012_01_12.01_03_33.26983.log

(pid = 26983) (date = 01/12/12 01:03:33) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 26983) (date = 01/12/12 09:50:20) Leaving sbtinfo2 (0)

libnsrora_Oracle_2012_01_12.01_03_35.27011.log

(pid = 27011) (date = 01/12/12 01:03:35) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 27011) (date = 01/12/12 09:59:27) Leaving sbtinfo2 (0)

 

 

This is strange indeed. Not just my words, but also those of the Oracle DBAs. And the fact about those 10 minutes seemed to suggest this was an issue with the backup application more than anything else. My confidence was shaken - I admit that - but now more than ever I wanted to know what was behind this story. The fact that a session would freeze t*600s after the start of the backup session meant that either Oracle or NetWorker or both were heavily involved, as these two are the only ones that would care about the start time. The 10-minute thing also smelled like nsrmmd. Indeed, in both the test and production environments I was using a Linux storage node for these specific backups, so perhaps the storage node package was broken? Or missing something? If so, why did all other Oracle backups using the same (production) storage node work fine? Network? Blade? It could still be too many things, and the 10-minute thingy just raised the bar of the mystery. The DBAs suspected Linux or NetWorker, I suspected everyone, and the Linux guys said let's sniff the network - it's always them anyway.
futex9.jpg
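
If you want to sanity-check that multiple-of-600s claim yourself, it falls straight out of the head/tail timestamps above. A minimal sketch, assuming GNU date (the values are the t2 ones from ACC_NODE1):

# t2 on ACC_NODE1: started 01:03:27, last entry written 09:03:31
start=$(date -d "2012-01-12 01:03:27" +%s)
end=$(date -d "2012-01-12 09:03:31" +%s)
echo "elapsed: $((end - start))s, remainder modulo 600s: $(( (end - start) % 600 ))s"
# elapsed: 28804s, remainder modulo 600s: 4s
# t4 on ACC_NODE3 (01:03:30 -> 04:53:31) gives 13801s elapsed, remainder 1s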

 

As this was getting slippery, I decided to open a support ticket for NetWorker. Sometimes you spend hours on something like this and then the voice on the other side says "known issue" and it's fixed the next moment. I didn't expect that, but I had to protect myself. Besides, I planned to run some real debugging now, and to decipher that output you need someone close to engineering, so I just had to play the game.

 

After opening a ticket with the reseller, who forwarded it to the vendor, there was a short period where I had to listen to some initial nonsense from first- or second-level support. I never understood why, but it seems that even if you prepare the case with all the details and logs, they still want you to do the very same thing all over again and test a few things which are obviously unrelated. Sometimes you just have to comply if you wish to reach engineering as soon as possible, as that set of strange requirements and tests - based on multiple past cases - is most likely something the poor support personnel have to work through before they can pass it to an SME or further. Boring, time wasting, and almost always the rule, though there are exceptions. The only thing you can do is to sort of comply, but actually drive the case forward yourself, get them on Webex more often and turn the pressure back on them. Sounds like a plan.

 

Support immediately found 7.5.3.5 to be out of support. I had no plans to upgrade that version at least until the ticket I had open was addressed, but I was ready to test a newer one in the test environment. Before getting there, some more debug data was provided, and this data was used to ask support to supply a modified binary (library) with debugging symbols included. If you are into the details, here is how the data looks when using the original binaries.

 

[root@ACC_NODE3 logs]# head -1 libnsrora_Oracle_2012_01_17.15_17_10.7226.log;tail -1 libnsrora_Oracle_2012_01_17.15_17_10.7226.log

(pid = 7226) (date = 01/17/12 15:17:10) @(#) Module Name:  NetWorker Module for Databases and Applications v1.2.0

(pid = 7226) (date = 01/17/12 16:47:11) Entering sbtwrite2()

 

Data:

[root@ACC_NODE3 logs]# gdb

GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-23.el5)

Copyright (C) 2009 Free Software Foundation, Inc.

License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.

There is NO WARRANTY, to the extent permitted by law.  Type "show copying"

and "show warranty" for details.

This GDB was configured as "x86_64-redhat-linux-gnu".

For bug reporting instructions, please see:

<http://www.gnu.org/software/gdb/bugs/>.

(gdb) attach 7226

Attaching to process 7226

Reading symbols from /oracle/base/product/db111072_h1/bin/oracle...(no debugging symbols found)...done.

Reading symbols from /oracle/base/product/db111072_h1/lib/libskgxp11.so...(no debugging symbols found)...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libskgxp11.so

Reading symbols from /lib64/librt.so.1...(no debugging symbols found)...done.

Loaded symbols for /lib64/librt.so.1

Reading symbols from /oracle/base/product/db111072_h1/lib/libnnz11.so...(no debugging symbols found)...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libnnz11.so

Reading symbols from /oracle/base/product/db111072_h1/lib/libclsra11.so...(no debugging symbols found)...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libclsra11.so

Reading symbols from /oracle/base/product/db111072_h1/lib/libdbcfg11.so...(no debugging symbols found)...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libdbcfg11.so

Reading symbols from /oracle/base/product/db111072_h1/lib/libhasgen11.so...(no debugging symbols found)...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libhasgen11.so

Reading symbols from /oracle/base/product/db111072_h1/lib/libskgxn2.so...(no debugging symbols found)...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libskgxn2.so

Reading symbols from /oracle/base/product/db111072_h1/lib/libocr11.so...(no debugging symbols found)...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libocr11.so

Reading symbols from /oracle/base/product/db111072_h1/lib/libocrb11.so...(no debugging symbols found)...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libocrb11.so

Reading symbols from /oracle/base/product/db111072_h1/lib/libocrutl11.so...(no debugging symbols found)...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libocrutl11.so

Reading symbols from /usr/lib64/libaio.so.1...(no debugging symbols found)...done.

Loaded symbols for /usr/lib64/libaio.so.1

Reading symbols from /lib64/libdl.so.2...(no debugging symbols found)...done.

Loaded symbols for /lib64/libdl.so.2

Reading symbols from /lib64/libm.so.6...(no debugging symbols found)...done.

Loaded symbols for /lib64/libm.so.6

Reading symbols from /lib64/libpthread.so.0...(no debugging symbols found)...done.

[Thread debugging using libthread_db enabled]

Loaded symbols for /lib64/libpthread.so.0

Reading symbols from /lib64/libnsl.so.1...(no debugging symbols found)...done.

Loaded symbols for /lib64/libnsl.so.1

Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.

Loaded symbols for /lib64/libc.so.6

Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.

Loaded symbols for /lib64/ld-linux-x86-64.so.2

Reading symbols from /lib64/libnss_files.so.2...(no debugging symbols found)...done.

Loaded symbols for /lib64/libnss_files.so.2

Reading symbols from /oracle/base/product/db111072_h1/lib/libnque11.so...(no debugging symbols found)...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libnque11.so

Reading symbols from /opt/oracle/extapi/64/asm/orcl/1/libasm.so...(no debugging symbols found)...done.

Loaded symbols for /opt/oracle/extapi/64/asm/orcl/1/libasm.so

Reading symbols from /oracle/base/product/db111072_h1/lib/libobk.so...done.

Loaded symbols for /oracle/base/product/db111072_h1/lib/libobk.so

Reading symbols from /usr/lib/nsr/apps/lib64/libcommonssl2.so...done.

Loaded symbols for /usr/lib/nsr/apps/lib64/libcommonssl2.so

Reading symbols from /usr/lib64/gconv/ISO8859-1.so...(no debugging symbols found)...done.

Loaded symbols for /usr/lib64/gconv/ISO8859-1.so

Reading symbols from /lib64/libnss_dns.so.2...(no debugging symbols found)...done.

Loaded symbols for /lib64/libnss_dns.so.2

Reading symbols from /lib64/libresolv.so.2...(no debugging symbols found)...done.

Loaded symbols for /lib64/libresolv.so.2

0x0000003f9ecdfade in __lll_lock_wait_private () from /lib64/libc.so.6

(gdb) bt

#0  0x0000003f9ecdfade in __lll_lock_wait_private () from /lib64/libc.so.6

#1  0x0000003f9ec8d1cd in _L_lock_1685 () from /lib64/libc.so.6

#2  0x0000003f9ec8cf17 in __tz_convert () from /lib64/libc.so.6

#3  0x0000000001ee575e in sldxgd ()

#4  0x00000000034f107d in nldatxtmil ()

#5  0x00000000034f0fa0 in nldatxt ()

#6  0x00000000076b12f9 in nstimexp ()

#7  0x00000000064d935e in ltmdvp ()

#8  0x00000000064d9277 in ltmdrv ()

#9  0x00000000064be754 in sltmdf ()

#10 0x00000000064adab9 in sslsstehdlr ()

#11 0x00000000064acf7d in sslsshandler ()

#12 <signal handler called>

#13 0x0000003f9ecc5055 in _xstat () from /lib64/libc.so.6

#14 0x0000003f9ec8d73f in __tzfile_read () from /lib64/libc.so.6

#15 0x0000003f9ec8c65f in tzset_internal () from /lib64/libc.so.6

#16 0x0000003f9ec8d100 in tzset () from /lib64/libc.so.6

#17 0x0000003f9ec91a24 in strftime_l () from /lib64/libc.so.6

#18 0x0000003f9ec9213b in strftime_l () from /lib64/libc.so.6

#19 0x00002b2f821940dd in lg_strftime (buf=0x7fffcc205310 "", bufsize=8192, fmt=<value optimized out>, tm=0x18fc3d38) at utf8.c:1574

#20 0x00002b2f8205aee6 in nwora_sbt_trace (Globals=0x2b2f81823668, level=1, format=0x2b2f8219f46e "Entering sbtwrite2()") at errutil.c:118

#21 0x00002b2f8206aff9 in sbtwrite2 (ctx=0x2b2f81823668, flags=0, buf=0x2b2f83a2a000) at sbt2.c:1792

#22 0x0000000006adfc69 in skgfwrt ()

#23 0x00000000039e5c86 in ksfq_go ()

#24 0x00000000039e598b in ksfq_aio ()

#25 0x0000000000994d11 in ksfqwr ()

#26 0x0000000005a2d400 in krbb1qwr ()

#27 0x0000000005a1f5d5 in krbbpc ()

#28 0x00000000074b6114 in krbibpc ()

#29 0x0000000007d92654 in pevm_icd_call_common ()

#30 0x0000000007d89bfa in pfrinstr_ICAL ()

#31 0x0000000007d8901f in pfrrun_no_tool ()

#32 0x0000000002b762c3 in pfrrun ()

#33 0x0000000002b87859 in plsql_run ()

#34 0x000000000708e78c in pricar ()

#35 0x000000000707f8a6 in pricbr ()

#36 0x0000000007087009 in prient2 ()

#37 0x0000000007086126 in prient ()

#38 0x0000000006f9966e in kkxrpc ()

#39 0x0000000005fd34fe in kporpc ()

#40 0x0000000007b038d8 in opiodr ()

#41 0x0000000007cbf904 in ttcpip ()

#42 0x00000000010e1b11 in opitsk ()

#43 0x00000000010e452e in opiino ()

#44 0x0000000007b038d8 in opiodr ()

#45 0x00000000010dd890 in opidrv ()

#46 0x0000000001839eca in sou2o ()

#47 0x0000000000975953 in opimai_real ()

#48 0x000000000183f481 in ssthrdmain ()

#49 0x000000000097587f in main ()

(gdb)

 

[root@ACC_NODE3 ~]# gstack 7226

#1  0x0000003f9ec8d1cd in _L_lock_1685 () from /lib64/libc.so.6

#2  0x0000003f9ec8cf17 in __tz_convert () from /lib64/libc.so.6

#3  0x0000000001ee575e in sldxgd ()

#4  0x00000000034f107d in nldatxtmil ()

#5  0x00000000034f0fa0 in nldatxt ()

#6  0x00000000076b12f9 in nstimexp ()

#7  0x00000000064d935e in ltmdvp ()

#8  0x00000000064d9277 in ltmdrv ()

#9  0x00000000064be754 in sltmdf ()

#10 0x00000000064adab9 in sslsstehdlr ()

#11 0x00000000064acf7d in sslsshandler ()

#12 <signal handler called>

#13 0x0000003f9ecc5055 in _xstat () from /lib64/libc.so.6

#14 0x0000003f9ec8d73f in __tzfile_read () from /lib64/libc.so.6

#15 0x0000003f9ec8c65f in tzset_internal () from /lib64/libc.so.6

#16 0x0000003f9ec8d100 in tzset () from /lib64/libc.so.6

#17 0x0000003f9ec91a24 in strftime_l () from /lib64/libc.so.6

#18 0x0000003f9ec9213b in strftime_l () from /lib64/libc.so.6

#19 0x00002b2f821940dd in lg_strftime () from /oracle/base/product/db111072_h1/lib/libobk.so

#20 0x00002b2f8205aee6 in nwora_sbt_trace () from /oracle/base/product/db111072_h1/lib/libobk.so

#21 0x00002b2f8206aff9 in sbtwrite2 () from /oracle/base/product/db111072_h1/lib/libobk.so

#22 0x0000000006adfc69 in skgfwrt ()

#23 0x00000000039e5c86 in ksfq_go ()

#24 0x00000000039e598b in ksfq_aio ()

#25 0x0000000000994d11 in ksfqwr ()

#26 0x0000000005a2d400 in krbb1qwr ()

#27 0x0000000005a1f5d5 in krbbpc ()

#28 0x00000000074b6114 in krbibpc ()

#29 0x0000000007d92654 in pevm_icd_call_common ()

#30 0x0000000007d89bfa in pfrinstr_ICAL ()

#31 0x0000000007d8901f in pfrrun_no_tool ()

#32 0x0000000002b762c3 in pfrrun ()

#33 0x0000000002b87859 in plsql_run ()

#34 0x000000000708e78c in pricar ()

#35 0x000000000707f8a6 in pricbr ()

#36 0x0000000007087009 in prient2 ()

#37 0x0000000007086126 in prient ()

#38 0x0000000006f9966e in kkxrpc ()

#39 0x0000000005fd34fe in kporpc ()

#40 0x0000000007b038d8 in opiodr ()

#41 0x0000000007cbf904 in ttcpip ()

#42 0x00000000010e1b11 in opitsk ()

#43 0x00000000010e452e in opiino ()

#44 0x0000000007b038d8 in opiodr ()

#45 0x00000000010dd890 in opidrv ()

#46 0x0000000001839eca in sou2o ()

#47 0x0000000000975953 in opimai_real ()

#48 0x000000000183f481 in ssthrdmain ()

#49 0x000000000097587f in main ()

 

Of course, strace on the process would show the famous FUTEX_WAIT_PRIVATE thingy. Meanwhile I was expanding my tests to isolate this further.
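
If you only want to catch that call as it happens, strace can filter on a single syscall. A minimal sketch (12345 is a placeholder for the hung channel's PID):

# show only futex calls made by the process
strace -e trace=futex -p 12345
# once the hang hits, the last line just sits there unfinished:
# futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>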

 

 

Group test 3: running NMDA with debug 6 and debug 9 on nsrexecd


This test showed nothing that could be linked to the 10-minute interval observed in the second test group. That's sort of good news, as it rules out the backup application daemons on the client, while the issue itself is 99% client-side triggered.

 

The things you need to set for NMDA to give you debug output are:

 

NSR_DEBUG_LEVEL = 6

NSR_DIAGNOSTIC_DEST = /tmp/NMDA.H

NSR_DPRINTF=TRUE

 

To run nsrexecd in debug mode, you can either alter the running daemon with dbgcommand or just run nsrexecd with the -D switch from the CLI.

 

Approach 1:

[root@ACC_NODE1 config]# pgrep nsrexecd

3947

 

[root@ACC_NODE1 config]# dbgcommand -p 3947 Debug=9 #Debug=0 will place it back to normal mode

 

 

Approach 2:

nsr_shutdown #stop NetWorker

/usr/sbin/nsrexecd -D9 >> /tmp/some.log 2>&1 #Ctrl-C when done and start it again normally via the startup script

 

 

Group test 4: running NMDA with debug 6 and debug 9 on backup server

 

This is similar to the above, except that in my test environment - where I carry out these tests - the storage node runs on the same server as the backup server. Placing nsrd into debug mode should do the trick. I prefer the second approach shown above when it comes to a test backup server, but if you do this in production and can't afford downtime, then dbgcommand is the way to go (make sure to ask support what to place into debug mode, as there are some gotchas with the dbgcommand approach).
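
For completeness, the nsrd variant of Approach 1 looks just the same. A sketch with a made-up hostname and PID (and, as said, check with support before doing this in production):

[root@BACKUP_SRV ~]# pgrep nsrd
4242

[root@BACKUP_SRV ~]# dbgcommand -p 4242 Debug=9 #Debug=0 will place it back to normal mode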

 

This set of tests showed no correlation to what at this point seems to be a futex hang on the client side (associated with the RMAN Oracle process). This is rather good news to me, as the finger that was pointed at NetWorker no longer holds water. Still, while I'm isolating components, I'm far away from the real issue. Failure to find a link within the backup application infrastructure (server- and client-side daemons) throws the ball back, and Linux and Oracle are suspects again.

 

Meanwhile, my support ticket was stuck, with both the other side and me being stubborn. NetWorker support wanted me to upgrade NetWorker, run tests that would go to DBO instead of VTL, and rerun the whole thing. I eventually gave in to speed things up. We also went ahead and opened a ticket with Oracle. Having both EMC and Oracle support in the same bowl made me wonder how this would look.

 

 

Group test 5: same as before, but now with 7.6.3.2 on client (instead of 7.5.3.5)

 

Interestingly enough, this combination does not run at all. When I say combination, I mean NMDA v1.2 and 7.6.3.x on the client against a server running 7.5.3.5. Use NMO and it will work. I suspect some "behind the scenes" integration exists between NMDA and 7.6.3.2 (and later), most likely due to DD Boost integration, and when talking to the server it does not take into account that the server might be "older". Since this failure coincided with an eval license expiration on my test server, I spent a few days checking what "did I do wrong" until I figured out what it was in the first place. Epic fail. For those into the details, this is the error:

 

RMAN-03009: failure of backup command on t1 channel at 02/13/2012 14:18:31

ORA-27192: skgfcls: sbtclose2 returned error - failed to close file

ORA-19511: Error received from media manager layer, error text:

   Authentication error; why = Failed (unspecified error) (0:5:25)

continuing other job steps, job failed will not be re-run

 

 

 

Group test 6: same as 5, but the server is running 7.6.3.2 too (instead of 7.5.3.5)

 

Voila, backup works... but my issue is still there. Just as I thought, and as I had debated with support. Can we now focus on the trace logs? Seems not, as the support engineer would like to compare the NetWorker logs from 7.5.3.5 with the new ones from 7.6.3.2 and see if there is any difference. It sounds as if they suspect some sort of silent issue which would be picked up by the newer code. I didn't mind asking the reseller to escalate it further down the chain.

 

In the meantime, one thing occurred to me: I see the futex hang when I strace an already broken process - why not strace the process from the very start? Then, perhaps, I can catch those 10-minute intervals and see what they are. Can you guess what group test 7 is?

 

Oh, and Oracle came back too. They saw sbtwrite and threw the ball over to EMC. The engineer claimed that from RMAN's point of view the hang happens because it is not getting confirmation that the write has completed, so it cannot send the next chunk of data. Therefore this must be looked at by the media manager software vendor (EMC). From my point of view that's nonsense, but I can't expect this guy to know everything, so we sent him the EMC ticket number. He came back immediately, stating that EMC should complete their analysis and have something pointing towards Oracle before they would continue. Lazy bastards. I certainly didn't agree that everything pointed to the NetWorker code, but then if I were a DBA or an Oracle support person I might have done the same.

 

futex11.jpg
futex10.jpg

 

The task now was to build up as many test results as possible, draw a conclusion, make EMC agree with it and throw it back to Oracle. For that I needed the futex thing traced and an engineer to say a few words about it. Since the trace runs against an Oracle process, it was a bit ambitious to expect EMC to be familiar with what it means, but it was a long shot worth trying. I love this game!

 

 

Group test 7: running NMDA with debug 1 and strace on oracle processes

 

I do not remember the exact day, but I do remember it was night, I was about to go to sleep, and I started this test. I ran it with NMDA debug level 1, since level 9 or 1 is the same to me. From the generated file I could see the PID, and I would run strace against it. Here is an example:

 

strace -o $PID.log -p $PID

 

This attaches the trace to the PID, and output goes to the file specified by the -o switch. In this case there is no need for a fancier line, since no child processes are created. If there were, you would need to run something like the following, where -ff makes strace follow forks and write one output file per process (named after the -o prefix plus each PID):

 

strace -ff -v -o $PID -p $PID

 

Next morning, all 8 sessions were hanging. Now, that was strange. I usually get a few, 3 at most (out of 8), but not all 8. And the hangs appeared to have happened quickly - within some 20 minutes of the backup starting. Did I miss something? Going through the logs did not reveal much (and we are talking about thousands of lines here), so I just repeated the test. And the results repeated themselves too.

 

Meanwhile, I did get a comment from EMC that, from the stack trace, it looks like Oracle is in a deadlock. It would not seem to have anything to do with the server side (as expected). The weird thing, according to EMC, is that a signal is raised while executing nwora_sbt_trace, causing the deadlock to happen. I could not see that myself, but then I'm not an expert in this area. I was left with a promise to receive a diagnostic binary. That took almost two weeks (!?).

 

And then....

 

And then something beautiful happened. You remember that the production part of the DWH was flawlessly using RMAN to disk. This was triggered through a cron job, but over time I moved it to NetWorker, as the DBAs had had one incident and no alerts. Another incident happened. I took a look and saw something familiar:

 

[oradba@PRD_NODE1 /home/oradba] $ sqlplus / as sysdba

 

SQL*Plus: Release 11.1.0.7.0 - Production on Fri Mar 9 11:09:37 2012

 

Copyright (c) 1982, 2008, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production

With the Partitioning, Real Application Clusters, OLAP, Data Mining

and Real Application Testing options

 

SQL> COLUMN CLIENT_INFO FORMAT a30

COLUMN SID FORMAT 999

COLUMN SPID FORMAT 9999

 

SELECT s.SID, p.SPID, s.CLIENT_INFO

FROM   V$PROCESS p, V$SESSION s

WHERE  p.ADDR = s.PADDR

AND    CLIENT_INFO LIKE 'rman%'

/

SQL> SQL> SQL> SQL>   2    3    4    5 

SID SPID                           CLIENT_INFO

---- ------------------------ ------------------------------

246 12014                         rman channel=ORA_DISK_7

431 12024                         rman channel=ORA_DISK_8

376 11943                         rman channel=ORA_DISK_1

440 11953                         rman channel=ORA_DISK_2

346 12034                         rman channel=ORA_DISK_9

272 11967                         rman channel=ORA_DISK_3

353 11971                         rman channel=ORA_DISK_4

452 11987                         rman channel=ORA_DISK_5

437 11991                         rman channel=ORA_DISK_6

259 12046                         rman channel=ORA_DISK_10

 

10 rows selected.

 

SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production

With the Partitioning, Real Application Clusters, OLAP, Data Mining

and Real Application Testing options

[oradba@PRD_NODE1 /home/oradba P03ORK11] $ logout

[root@PRD_NODE1 log]# strace -p 12014

Process 12014 attached - interrupt to quit

read(20,  <unfinished ...>

Process 12014 detached

[root@PRD_NODE1 log]# strace -p 12024

Process 12024 attached - interrupt to quit

read(20,  <unfinished ...>

Process 12024 detached

[root@PRD_NODE1 log]# strace -p 11943

Process 11943 attached - interrupt to quit

read(21,  <unfinished ...>

Process 11943 detached

[root@PRD_NODE1 log]# strace -p 11953

Process 11953 attached - interrupt to quit

read(20,  <unfinished ...>

Process 11953 detached

[root@PRD_NODE1 log]# strace -p 12034

Process 12034 attached - interrupt to quit

futex(0x3eb1953594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

Process 12034 detached

[root@PRD_NODE1 log]# strace -p 11967

Process 11967 attached - interrupt to quit

read(20,  <unfinished ...>

Process 11967 detached

[root@PRD_NODE1 log]# strace -p 11971

Process 11971 attached - interrupt to quit

read(20,  <unfinished ...>

Process 11971 detached

[root@PRD_NODE1 log]# strace -p 11987

Process 11987 attached - interrupt to quit

read(20,  <unfinished ...>

Process 11987 detached

[root@PRD_NODE1 log]# strace -p 11991

Process 11991 attached - interrupt to quit

read(20,  <unfinished ...>

Process 11991 detached

[root@PRD_NODE1 log]# strace -p 12046

Process 12046 attached - interrupt to quit

read(20,  <unfinished ...>

Process 12046 detached

 

 

From the Oracle side, when inspecting the sessions, we would see this in the WAIT state:

 

  1  select inst_id,

  2         sid,

  3         seq#,

  4         event,

  5         seconds_in_wait,

  6         to_char(sysdate - numtodsinterval(seconds_in_wait, 'second'), 'dd-mm-yyyy hh24:mi:ss') as started,

  7         state from gv$session_wait

  8  where  seconds_in_wait > 100

  9  and    sid in (246,431,376,440,346,272,353,452,437,259)

10* order by inst_id, sid

SQL> /

 

   INST_ID        SID       SEQ# EVENT                                                            SECONDS_IN_WAIT STARTED             STATE

---------- ---------- ---------- ---------------------------------------------------------------- --------------- ------------------- -------------------

         1        246      45211 SQL*Net message from client                                                63478 08-03-2012 19:31:03 WAITING

         1        259      41740 SQL*Net message from client                                                63899 08-03-2012 19:24:02 WAITING

         1        272      31068 SQL*Net message from client                                                64934 08-03-2012 19:06:47 WAITING

         1        346      16784 RMAN backup & recovery I/O                                                 67725 08-03-2012 18:20:16 WAITED SHORT TIME

         1        353      33961 SQL*Net message from client                                                64934 08-03-2012 19:06:47 WAITING

         1        376      29171 SQL*Net message from client                                                65773 08-03-2012 18:52:48 WAITING

         1        431      39824 SQL*Net message from client                                                63879 08-03-2012 19:24:22 WAITING

         1        437      37382 SQL*Net message from client                                                63949 08-03-2012 19:23:12 WAITING

         1        440      31315 SQL*Net message from client                                                65529 08-03-2012 18:56:52 WAITING

         1        452      33374 SQL*Net message from client                                                65328 08-03-2012 19:00:13 WAITING

 

I know this signature! It's that futex thingy again. And it happens with Oracle's RMAN to disk! No NetWorker!!! Well, I do use savepnpc, but that really does not matter. Still, if I went to the NetWorker support folks with this information my case might go cold, and I needed as much as I could get to shoot towards Oracle support. So I shared this info with our DBA team, who didn't like it. Well, no one did, including myself. We updated the Oracle ticket and decided we would do some more testing and data analysis. We also hoped the new diagnostic binary from EMC would help further. At this point I knew that whatever the issue was, it had no connection to NetWorker, but with SBT_TAPE it happens far more frequently than with disk, and attaching strace to an Oracle channel's OS PID can be used as a catalyst to reproduce the issue. That last bit got us thinking, and we believed the issue was most likely caused by some queue at OS level used by Oracle.

 

 

Group test 8: same as above with diagnostic binary

 

Tests with the diagnostic binary didn't bring any enlightenment. At least not to me. But I kept going through the more than a few TB of logs collected so far from all the tests, searching for additional clues and patterns. I finally found one, and it was that cryptic hex number in the futex hang line. As I went through the logs from previous tests again and again, I noticed that the hex number is the same each time a hang occurs. On different hosts it is different, but it is still the same each time for that specific host.

 

[root@ACC_NODE1 ]# grep FUTEX 1*

10175.log:futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

10211.log:futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

12708.log:futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

12730.log:futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

12752.log:futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

12835.log:futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

16905.log:futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

16927.log:futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

16949.log:futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

16984.log:futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

 

[root@ACC_NODE3 ]# grep FUTEX 1* 2*

15687.log:futex(0x3f9ef53594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

15715.log:futex(0x3f9ef53594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

15736.log:futex(0x3f9ef53594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

15756.log:futex(0x3f9ef53594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

28550.log:futex(0x3f9ef53594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

28590.log:futex(0x3f9ef53594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

28625.log:futex(0x3f9ef53594, FUTEX_WAIT_PRIVATE, 2, NULL <unfinished ...>

 

And if you look more closely, you will see that the last 5 digits are the same even across different hosts. I felt this was hugely important and asked support if they could tell us what the number was and what significance there would be in these values being the same.
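
Pulling the addresses out to compare them is a one-liner. A sketch, run in the directory holding the strace logs named as above:

# unique futex addresses seen in the hangs on this host
grep -ho 'futex(0x[0-9a-f]*' *.log | sort -u
# and just their last 5 hex digits, for comparing across hosts
grep -ho 'futex(0x[0-9a-f]*' *.log | sort -u | grep -o '.....$'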

 

Not being patient, I spoke with our DBAs and suggested two tests:

  1. We restart Oracle on all nodes, repeat the test and see if hex value changes
  2. We restart all blades, repeat the test and see if hex value changes

 

The idea behind this was very simple: if the hex value changed after an Oracle restart, I would mostly blame Oracle. If not, I would reboot the machine, and if the hex value then changed I would blame the OS. If it did not change at all, could it be hardware? At this point our Linux gurus joined the brainstorming and we discussed how memory gets allocated for processes and under what conditions this value could be fixed (though I was still missing a clear picture of what it was exactly; with hindsight, identical low digits across hosts would fit the same object sitting at a fixed offset inside a shared library such as libc, mapped at a different base address on each host - but we did not know that yet).

 

At this point you may wonder what we see in the trace before the hang. Here are the last 50 lines:

 

[root@ACC_NODE1 logs]# tail -50 18908.log.18908

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18908) (date = 03/14/12 1"..., 69) = 69

close(29)                               = 0

munmap(0x2b6b1e19b000, 4096)            = 0

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_56.18908.log", F_OK) = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_56.18908.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

fstat(29, {st_dev=makedev(253, 5), st_ino=11698185, st_mode=S_IFREG|0666, st_nlink=1, st_uid=301, st_gid=205, st_blksize=4096, st_blocks=53800, st_size=27509651, st_atime=2012/03/14-11:11:56, st_mtime=2012/03/14-11:21:56, st_ctime=2012/03/14-11:21:56}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b6b1e19b000

lseek(29, 0, SEEK_CUR)                  = 0

fcntl(29, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18908) (date = 03/14/12 1"..., 70) = 70

close(29)                               = 0

munmap(0x2b6b1e19b000, 4096)            = 0

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_56.18908.log", F_OK) = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_56.18908.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

fstat(29, {st_dev=makedev(253, 5), st_ino=11698185, st_mode=S_IFREG|0666, st_nlink=1, st_uid=301, st_gid=205, st_blksize=4096, st_blocks=53800, st_size=27509721, st_atime=2012/03/14-11:11:56, st_mtime=2012/03/14-11:21:56, st_ctime=2012/03/14-11:21:56}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b6b1e19b000

lseek(29, 0, SEEK_CUR)                  = 0

fcntl(29, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18908) (date = 03/14/12 1"..., 69) = 69

close(29)                               = 0

munmap(0x2b6b1e19b000, 4096)            = 0

writev(32, [{"\200\0\0\0\0\4\0P\233\232\n\215\0\0\0\0\0\0\0\2\0\5\363\330\0\0\0011\0\0\0'"..., 88}, {"\6\302\0\0\221\217\1\23\362\331\346{\276\7\1\4C-\4\0\1\0\0\0\205O\1\0\362\331\346{"..., 262144}], 2) = 262232

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_56.18908.log", F_OK) = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_56.18908.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

fstat(29, {st_dev=makedev(253, 5), st_ino=11698185, st_mode=S_IFREG|0666, st_nlink=1, st_uid=301, st_gid=205, st_blksize=4096, st_blocks=53800, st_size=27509790, st_atime=2012/03/14-11:11:56, st_mtime=2012/03/14-11:21:56, st_ctime=2012/03/14-11:21:56}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b6b1e19b000

lseek(29, 0, SEEK_CUR)                  = 0

fcntl(29, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:55, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

--- SIGALRM (Alarm clock) @ 0 (0) ---

rt_sigprocmask(SIG_BLOCK, [], NULL, 8)  = 0

times({tms_utime=2085, tms_stime=6137, tms_cutime=0, tms_cstime=0}) = 1197978005

 

 

 

I wasn't sure at the time what SIGALRM was, but whenever the futex hang happens, I see it right before. For example, if I focus on the log above (which ran for some 10 minutes), I see:

 

[root@ACC_NODE1 logs]# grep SIGALRM 18908.log.18908

--- SIGALRM (Alarm clock) @ 0 (0) ---

--- SIGALRM (Alarm clock) @ 0 (0) ---

 

[…]

--- SIGALRM (Alarm clock) @ 0 (0) ---

rt_sigprocmask(SIG_BLOCK, [], NULL, 8)  = 0

times({tms_utime=2085, tms_stime=6137, tms_cutime=0, tms_cstime=0}) = 1197977996

setitimer(ITIMER_REAL, {it_interval={0, 0}, it_value={0, 90000}}, NULL) = 0

rt_sigprocmask(SIG_UNBLOCK, [], NULL, 8) = 0

rt_sigreturn(0x1)                       = 62356

writev(32, [{"002155662\022000000005009735214\024000"..., 199876}], 1) = 199876

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_56.18908.log", F_OK) = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_56.18908.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

[…]

--- SIGALRM (Alarm clock) @ 0 (0) ---

rt_sigprocmask(SIG_BLOCK, [], NULL, 8)  = 0

times({tms_utime=2085, tms_stime=6137, tms_cutime=0, tms_cstime=0}) = 1197978005

futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL

 

The above might not be a good example, as these two are almost next to each other (time-wise). So I checked another log, for a session that ran 20 minutes:

 

[root@ACC_NODE1 logs]# grep -n SIGALRM 18813.log.18813

5733903:--- SIGALRM (Alarm clock) @ 0 (0) ---

5734828:--- SIGALRM (Alarm clock) @ 0 (0) ---

12209648:--- SIGALRM (Alarm clock) @ 0 (0) ---

12210655:--- SIGALRM (Alarm clock) @ 0 (0) ---

 

[root@ACC_NODE1 logs]# grep -A 20 -B 20 SIGALRM 18813.log.18813

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18813) (date = 03/14/12 1"..., 69) = 69

close(29)                               = 0

munmap(0x2b076c0c5000, 4096)            = 0

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", F_OK) = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

fstat(29, {st_dev=makedev(253, 5), st_ino=11698183, st_mode=S_IFREG|0666, st_nlink=1, st_uid=301, st_gid=205, st_blksize=4096, st_blocks=52376, st_size=26781199, st_atime=2012/03/14-11:11:52, st_mtime=2012/03/14-11:21:51, st_ctime=2012/03/14-11:21:51}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b076c0c5000

lseek(29, 0, SEEK_CUR)                  = 0

fcntl(29, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18813) (date = 03/14/12 1"..., 70) = 70

close(29)                               = 0

munmap(0x2b076c0c5000, 4096)            = 0

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", F_OK) = 0

--- SIGALRM (Alarm clock) @ 0 (0) ---

rt_sigprocmask(SIG_BLOCK, [], NULL, 8)  = 0

times({tms_utime=1774, tms_stime=5325, tms_cutime=0, tms_cstime=0}) = 1197977574

setitimer(ITIMER_REAL, {it_interval={0, 0}, it_value={0, 90000}}, NULL) = 0

rt_sigprocmask(SIG_UNBLOCK, [], NULL, 8) = 0

rt_sigreturn(0x1)                       = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

fstat(29, {st_dev=makedev(253, 5), st_ino=11698183, st_mode=S_IFREG|0666, st_nlink=1, st_uid=301, st_gid=205, st_blksize=4096, st_blocks=52376, st_size=26781269, st_atime=2012/03/14-11:11:52, st_mtime=2012/03/14-11:21:51, st_ctime=2012/03/14-11:21:51}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b076c0c5000

lseek(29, 0, SEEK_CUR)                  = 0

fcntl(29, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18813) (date = 03/14/12 1"..., 69) = 69

close(29)                               = 0

munmap(0x2b076c0c5000, 4096)            = 0

writev(32, [{"\200\0\0\0\0\4\0P\345\250\0\332\0\0\0\0\0\0\0\2\0\5\363\330\0\0\0\315\0\0\0'"..., 88}, {"\6\302\0\0\361\206\1\6\22\275S\216\276\7\2\4\360\307\2\0\2\0\0\0\20\222/\0.\271S\216"..., 262144}], 2) = 262232

--

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18813) (date = 03/14/12 1"..., 70) = 70

close(29)                               = 0

munmap(0x2b076c0c5000, 4096)            = 0

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", F_OK) = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

fstat(29, {st_dev=makedev(253, 5), st_ino=11698183, st_mode=S_IFREG|0666, st_nlink=1, st_uid=301, st_gid=205, st_blksize=4096, st_blocks=52384, st_size=26785035, st_atime=2012/03/14-11:11:52, st_mtime=2012/03/14-11:21:52, st_ctime=2012/03/14-11:21:52}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b076c0c5000

lseek(29, 0, SEEK_CUR)                  = 0

fcntl(29, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:21:42, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18813) (date = 03/14/12 1"..., 69) = 69

close(29)                               = 0

munmap(0x2b076c0c5000, 4096)            = 0

writev(32, [{"\200\0\0\0\0\4\0P\336\250\0\332\0\0\0\0\0\0\0\2\0\5\363\330\0\0\0\315\0\0\0'"..., 88}, {"\6\302\0\0\341\206\1\f\217\235\220\216\276\7\1\4\274\5\6\0\1\0\0\0g\225/\0\35\234\220\216"..., 262144}], 2) = 231608

--- SIGALRM (Alarm clock) @ 0 (0) ---

rt_sigprocmask(SIG_BLOCK, [], NULL, 8)  = 0

times({tms_utime=1774, tms_stime=5325, tms_cutime=0, tms_cstime=0}) = 1197977583

rt_sigprocmask(SIG_BLOCK, [ALRM], NULL, 8) = 0

times({tms_utime=1774, tms_stime=5325, tms_cutime=0, tms_cstime=0}) = 1197977583

setitimer(ITIMER_REAL, {it_interval={0, 0}, it_value={600, 0}}, NULL) = 0

rt_sigprocmask(SIG_UNBLOCK, [ALRM], NULL, 8) = 0

rt_sigprocmask(SIG_BLOCK, [ALRM], NULL, 8) = 0

times({tms_utime=1774, tms_stime=5325, tms_cutime=0, tms_cstime=0}) = 1197977583

setitimer(ITIMER_REAL, {it_interval={0, 0}, it_value={600, 0}}, NULL) = 0

rt_sigprocmask(SIG_UNBLOCK, [ALRM], NULL, 8) = 0

setitimer(ITIMER_REAL, {it_interval={0, 0}, it_value={600, 0}}, NULL) = 0

rt_sigprocmask(SIG_UNBLOCK, [], NULL, 8) = 0

rt_sigreturn(0x1)                       = 231608

writev(32, [{"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 30624}], 1) = 30624

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", F_OK) = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

fstat(29, {st_dev=makedev(253, 5), st_ino=11698183, st_mode=S_IFREG|0666, st_nlink=1, st_uid=301, st_gid=205, st_blksize=4096, st_blocks=52384, st_size=26785104, st_atime=2012/03/14-11:11:52, st_mtime=2012/03/14-11:21:52, st_ctime=2012/03/14-11:21:52}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b076c0c5000

lseek(29, 0, SEEK_CUR)                  = 0

--

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18813) (date = 03/14/12 1"..., 70) = 70

close(29)                               = 0

munmap(0x2b076c0c5000, 4096)            = 0

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", F_OK) = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

fstat(29, {st_dev=makedev(253, 5), st_ino=11698183, st_mode=S_IFREG|0666, st_nlink=1, st_uid=301, st_gid=205, st_blksize=4096, st_blocks=104912, st_size=53654369, st_atime=2012/03/14-11:11:52, st_mtime=2012/03/14-11:31:52, st_ctime=2012/03/14-11:31:52}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b076c0c5000

lseek(29, 0, SEEK_CUR)                  = 0

fcntl(29, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18813) (date = 03/14/12 1"..., 69) = 69

close(29)                               = 0

munmap(0x2b076c0c5000, 4096)            = 0

writev(32, [{"\200\0\0\0\0\4\0P\307\345\377\331\0\0\0\0\0\0\0\2\0\5\363\330\0\0\0\315\0\0\0'"..., 88}, {"\6\302\0\0\321\fC\34\333\371\334s\232\5\1\4Dp\1\0\1\0\0\0t\305$\0\r\345\334s"..., 262144}], 2) = 178160

--- SIGALRM (Alarm clock) @ 0 (0) ---

rt_sigprocmask(SIG_BLOCK, [], NULL, 8)  = 0

times({tms_utime=3854, tms_stime=13392, tms_cutime=0, tms_cstime=0}) = 1198037574

setitimer(ITIMER_REAL, {it_interval={0, 0}, it_value={0, 90000}}, NULL) = 0

rt_sigprocmask(SIG_UNBLOCK, [], NULL, 8) = 0

rt_sigreturn(0x1)                       = 178160

writev(32, [{"\200\2\301\2\2\301\2\1\200\7xo\10\32\24&0\7xo\10\32\24&0\4\303\rc`,\0"..., 84072}], 1) = 84072

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", F_OK) = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

fstat(29, {st_dev=makedev(253, 5), st_ino=11698183, st_mode=S_IFREG|0666, st_nlink=1, st_uid=301, st_gid=205, st_blksize=4096, st_blocks=104912, st_size=53654438, st_atime=2012/03/14-11:11:52, st_mtime=2012/03/14-11:31:52, st_ctime=2012/03/14-11:31:52}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b076c0c5000

lseek(29, 0, SEEK_CUR)                  = 0

fcntl(29, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18813) (date = 03/14/12 1"..., 68) = 68

close(29)                               = 0

--

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b076c0c5000

lseek(29, 0, SEEK_CUR)                  = 0

fcntl(29, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

write(29, "(pid = 18813) (date = 03/14/12 1"..., 62) = 62

close(29)                               = 0

munmap(0x2b076c0c5000, 4096)            = 0

access("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", F_OK) = 0

open("/nsr/apps/logs/libnsrora_Oracle_2012_03_14.11_11_52.18813.log", O_WRONLY|O_APPEND) = 29

fcntl(29, F_GETFL)                      = 0x8401 (flags O_WRONLY|O_APPEND|O_LARGEFILE)

fstat(29, {st_dev=makedev(253, 5), st_ino=11698183, st_mode=S_IFREG|0666, st_nlink=1, st_uid=301, st_gid=205, st_blksize=4096, st_blocks=104920, st_size=53658534, st_atime=2012/03/14-11:11:52, st_mtime=2012/03/14-11:31:52, st_ctime=2012/03/14-11:31:52}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b076c0c5000

lseek(29, 0, SEEK_CUR)                  = 0

fcntl(29, F_SETLKW, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/14-11:31:39, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

--- SIGALRM (Alarm clock) @ 0 (0) ---

rt_sigprocmask(SIG_BLOCK, [], NULL, 8)  = 0

times({tms_utime=3854, tms_stime=13393, tms_cutime=0, tms_cstime=0}) = 1198037583

futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL

 

 

futex12.gif

And as I was about to test the restart/reboot theory, another update from EMC engineering kicked in.

 

I received a new libnsrora.so library. According to EMC, this was an issue they had seen before with old Oracle server versions: the claim is that the problem existed in Oracle 9.2.x but was supposedly fixed in the 10.x release. Well, I had 11R1. All my databases are 11R1 and there are hundreds of them. It is just this new DWH which fails, so I didn't really believe this story.

 

In the received library, EMC temporarily disabled the signal handler during sbtwrite2() to see if this would work around the issue until Oracle provided a better solution. Hm, I doubted it, but while I waited for my change slot to restart Oracle (and later the blades) I could run a quick test.
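I don't know exactly what EMC changed inside libnsrora.so, but the general technique is easy to sketch: block SIGALRM for the duration of the write call, so Oracle's timer handler cannot fire mid-call. A minimal sketch in C, with a stand-in real_sbtwrite2() of my own invention (the real sbtwrite2() signature and the actual EMC change are not public):

  #include <signal.h>

  /* stand-in for the real media-management write entry point; the
     actual sbtwrite2() signature is defined by Oracle's SBT API */
  static int real_sbtwrite2(void *buf, unsigned long size)
  {
      (void)buf; (void)size;
      return 0;
  }

  static int guarded_sbtwrite2(void *buf, unsigned long size)
  {
      sigset_t block, saved;
      int rc;

      sigemptyset(&block);
      sigaddset(&block, SIGALRM);              /* Oracle's timer signal    */
      sigprocmask(SIG_BLOCK, &block, &saved);  /* hold SIGALRM back...     */

      rc = real_sbtwrite2(buf, size);          /* ...while data is written */

      sigprocmask(SIG_SETMASK, &saved, NULL);  /* pending alarm fires here */
      return rc;
  }

  int main(void)
  {
      char buf[256];
      return guarded_sbtwrite2(buf, sizeof buf);
  }

A pending alarm would then be delivered only after the write returns, which is exactly why such a library could sidestep the hang while leaving the question of side effects (delayed timers) open.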

 

To my surprise - no issues! Holy kung-fu and pandas! With strace running against the Oracle channels I could previously always get sessions to hang - 50% of them within the 10 minute mark. Now, after hours of running - no session hangs. I had to re-run this several times to believe it. Each time it worked. OK, if this was an Oracle regression, I expected the hex number to change after an Oracle reboot.

 

I had a workaround now - a library with signal handlers disabled. It worked, though I had no clue what consequences that might have. Anyway, the weekend came and so did my change window. I put back the library with signal handlers enabled, the DBAs restarted Oracle, and I could start my test again. The first test after the Oracle restart on each RAC node resulted in - the same hex value. This came as a surprise, as I was quite sure it would change. Beep, wrong.

 

The second test, after the RAC nodes were restarted (OS level), resulted in - no change again. This came as a surprise, as I was really sure one of these would cause that hex value to change. But perhaps that was a silly thing to expect, given I had no clue what the value really stood for.

 

In parallel to this, I was able to link the issue to another RAC cluster, where it happened very rarely - for the same database, once every 2 or 3 weeks. That database was small (a few hundred MB) and idle. Totally idle. There was absolutely nothing going on with that database, yet it would get hung sessions once every 2 or 3 weeks. To make the whole thing even nuttier, the last time I observed a hang for this database it happened during the maintenance channel, when we do cleanup of the RMAN catalog. That once again spoke in favor of this having nothing to do with the backup application, but that alone didn't really help.

 

Before the grand finale, let's summarize what we have learned so far:

  • this issue seems to be random in nature as far as occurrence goes
  • this issue can be forced to happen if you run strace against the Oracle process serving an RMAN channel - this may indicate a load issue
  • this issue is very aggressive on one RAC cluster setup while rather rare on the other one, which is, as far as load goes, more utilized
  • when the hang happens we see a FUTEX_WAIT_PRIVATE call pending
  • when checking traces we see SIGALRM just before the hang
  • EMC suspects a regression in Oracle and provided a library with disabled signal handlers - this seems to help
  • the futex hangs seem to be on the same address all the time on a given host (and the last 5 digits are the same across all hosts)

 

My biggest clue so far was the signal handlers, so I had to use that information somehow. Once again I dived into the trace outputs and the horizons of endless Internet searches. After a couple of hours I found something.

 

The fact that this happens with Oracle alone may indicate an Oracle issue, a Linux issue, or a joint issue between the two, triggered by some parameter or whatever. The fact that only specific databases fail, on a cluster where other databases run without problems, further suggests some hard-to-spot setting. Finally, there is the puzzling thing EMC did in their library with the signal handlers, pointing out this was an issue with Oracle 9.2.x fixed in 10.x - though I still didn't get a reference to the original issue. On the OS front, I went through a forest of cases reported by others for many applications, and the closest hit I got was a glibc issue. Apparently, if you use ctime() in signal handlers you may get issues with futex. If I check the strace closely, I see the following:

 

stat("/etc/localtime", {st_dev=makedev(253, 0), st_ino=131133, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=4096, st_blocks=16, st_size=2917, st_atime=2012/03/19-00:02:01, st_mtime=2011/11/04-17:39:56, st_ctime=2011/11/04-17:39:56}) = 0

--- SIGALRM (Alarm clock) @ 0 (0) ---

rt_sigprocmask(SIG_BLOCK, [], NULL, 8)  = 0

times({tms_utime=8321, tms_stime=29798, tms_cutime=0, tms_cstime=0}) = 434144570

futex(0x326b353594, FUTEX_WAIT_PRIVATE, 2, NULL

 

/etc/localtime here seems to be the key, as the code path that reads it takes __libc_lock_lock() in glibc. A reference to this problem can be found here. While this seems to fit the description of the problem, I do not see why it affects only specific databases (glibc is generic), nor why strace on a process can serve as a catalyst for the hang.

 

At this point everyone joined the discussion: DBAs, OS guys, HW guys (the blades were under the loop, as issues had been seen only on RACs running on G6 systems) and me, of course. Oracle support was still silent despite all the findings, and there was nothing more to ask of EMC, as they had done their part of the job very well.

 

futex3.jpg

After the above finding, new elements in the discussion included potential tests with different kernel versions, glibc versions or even blades, but each theory had pros and cons and it felt as if something was still missing. We decided to escalate it big time to Oracle. At the same time the EMC engineer came back with some more notes. According to him/her, the deadlock is not caused by SIGALRM itself; here is what happens when you see the hang:

  1. NMDA calls a libc function (in this case localtime_r) during program execution; localtime_r calls __tz_convert, which locks the tz data structures before setting them.
  2. The function call is interrupted by SIGALRM before the lock is released.
  3. The Oracle signal handler function registered for SIGALRM is called.
  4. Oracle's signal handler also calls localtime_r, which ends up waiting for the lock to be released: futex(<addr>, FUTEX_WAIT_PRIVATE, ...)

 

When this happens, neither the function call nor the signal handler will make any further progress, and this results in a deadlock. The only way to avoid it is not to call localtime_r() in the signal handler - localtime_r() is not on the list of async-signal-safe functions in the IEEE (POSIX) standard, but it looks like Oracle is not following that. I also got an explanation for the hex value I had been puzzled by: on the same system architecture, the offset of the tz data structures is likely to be the same within the same program's address space, so the same value is expected. Now I feel stupid.
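The mechanism is easy to reproduce outside Oracle. Below is a minimal C sketch of my own (not EMC's or Oracle's code) that does exactly what the four steps above describe - both the main loop and the SIGALRM handler call localtime_r(); sooner or later the alarm lands while the tz lock is held, and the process parks forever in futex(..., FUTEX_WAIT_PRIVATE, ...):

  #include <signal.h>
  #include <string.h>
  #include <time.h>
  #include <unistd.h>

  /* BUG ON PURPOSE: localtime_r() is not async-signal-safe, so calling
     it from a signal handler can deadlock on glibc's internal tz lock. */
  static void on_alarm(int sig)
  {
      struct tm tm;
      time_t now = time(NULL);
      (void)sig;
      localtime_r(&now, &tm);   /* may block forever on the tz lock */
      alarm(1);                 /* re-arm, like a periodic timer    */
  }

  int main(void)
  {
      struct sigaction sa;
      memset(&sa, 0, sizeof sa);
      sa.sa_handler = on_alarm;
      sigaction(SIGALRM, &sa, NULL);
      alarm(1);

      for (;;) {                /* main code also hammers the tz lock */
          struct tm tm;
          time_t now = time(NULL);
          localtime_r(&now, &tm);
      }
  }

It can take a while to hit the window, since the lock is held only briefly on each call - which matches how intermittent the production hangs were.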

 

The engineer further quoted some forums that nicely describe the problem in an application-independent way:

 

  int x;                             /* stands in for the mutex   */
  void foo() { while (x) sleep(1);   /* wait until "lock" is free */
               x += 1; sleep(60);    /* take "lock", hold it...   */
               x -= 1; }             /* ...then release it        */

 

If this function is interrupted while it is inside 'sleep(60)', and the interrupt handler calls it again, then this function (and the interrupt handler) will *never* make any further progress. If you understand that, then substitute 'x' with a mutex, and you'll understand exactly the nature of the deadlock you are observing. Good - time for the DBAs to fire up an email/phone call to the Oracle folks.

 

Finally, two weeks ago, we got a query from an Oracle guy asking the following: is Dead Connection Detection (DCD) enabled in Oracle Net? You can find out in sqlnet.ora on the database server; DCD is enabled if it has

 

SQLNET.EXPIRE_TIME=n

 

where n > 0. We did have it. And guess what - it was set to 10 minutes. We had it on both RACs: the one giving us this problem like crazy and the one where we saw it very rarely. On the others, where no issues were seen, it was disabled. Once we said we had it, we didn't hear back from Oracle, but we decided to take our chances and test it. Sometimes you get these hint questions and you know what to do next.

 

Quickly, the DBA changed the setting and restarted Oracle, and once again I started my test. Outcome:

 

futex13.jpg

I had to test this a few times to believe it. Note that this backup took some time, as strace on an Oracle process slows down the overall speed big time. Finally, we found what the problem was. I had gone to Metalink a few times before, but no matter how much I searched I never found a match for this specific issue. The closest thing I found was an issue with 11R2 (see Bug 11807012). But this time, knowing exactly what to look for, I found an article which seems to explain our issue: Bug 6918493. And it affects only 11R1 (just what we have).

 

As per the Oracle article, using Net dead connection detection (DCD) by setting SQLNET.EXPIRE_TIME > 0 in the SQLNET.ORA file can cause an OS-level mutex hang under nsexptim -> ... -> localtime_r, and possibly under other substacks under nsexptim. The hang is very intermittent in nature, as it depends on a specific timing scenario. If the hung process is holding a critical resource, such as a latch, this can lead to a wider hang scenario. The workaround is to disable DCD.
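In sqlnet.ora terms, the workaround amounts to a one-line change, something like this (a sketch - exact file contents vary per site):

  # sqlnet.ora on the database server
  # before (DCD enabled, probing idle connections every 10 minutes):
  SQLNET.EXPIRE_TIME=10

  # after (workaround for Bug 6918493 - DCD disabled):
  SQLNET.EXPIRE_TIME=0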

 

It took some time to get there, but I did. You might be running 11R2 and not have this exact issue, but fear not - that one was a regression too.

In mid-February CERN, the European Organization for Nuclear Research, announced that the Large Hadron Collider (LHC) will run with a beam energy of 4 TeV this year, 0.5 TeV higher than in 2010 and 2011. This decision was taken by CERN management following the annual performance workshop. It is accompanied by a strategy to optimise LHC running to deliver the maximum possible amount of data in 2012 before the LHC goes into a long shutdown to prepare for higher energy running. The data target for 2012 is 15 inverse femtobarns for ATLAS and CMS, three times higher than in 2011.

As per Sean Carroll, think of it this way: imagine the protons entering a detector are shooting at a tiny target with some fixed size, measured in units of area. Then we can measure the luminosity by counting the number of protons passing through that area at a fixed moment of time, i.e. the number of protons per square centimeter per second. That's at any one moment; if we integrate up over the course of a year, the "per second" disappears and leaves us with the total number of protons that have passed through the target area, i.e. a certain number of protons per square centimeter. But that number would be enormously huge, so rather than using square centimeters, particle physicists like to use "barns", defined as 10^-24 cm^2. But even measuring the luminosity in inverse barns would give really big numbers, so they go for inverse femtobarns (1 fb = 10^-39 cm^2). Long story short: 10 inverse femtobarns is equivalent to 10^40 protons passing through a 1 cm^2 target area. Bear in mind that last year they exceeded predictions by a factor of five, but we should not expect the same again this year. Reaching 15/fb will be a great result and anything more will be spectacular.
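A quick sanity check of the units (my own arithmetic, not from the CERN announcement):

\[
1\,\mathrm{b} = 10^{-24}\,\mathrm{cm}^2,\qquad
1\,\mathrm{fb} = 10^{-15}\,\mathrm{b} = 10^{-39}\,\mathrm{cm}^2,\qquad
10\,\mathrm{fb}^{-1} = \frac{10}{10^{-39}\,\mathrm{cm}^2} = 10^{40}\,\mathrm{cm}^{-2}.
\]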

 

When the LHC started operating in 2010, they chose the lowest safe beam energy consistent with the physics they wanted to do. After two good years of operational experience with beam, and many additional measurements made during 2011, there is confidence to safely move up a notch, and thereby extend the physics reach of the experiments before the LHC goes into its first long shutdown.

 

lhc2.jpg

The LHC's excellent performance in 2010 and 2011 has brought tantalising hints of new physics, notably narrowing the range of masses available to the Higgs particle to a window of just 16 GeV. Within this window, both the ATLAS and CMS experiments have seen hints that a Higgs might exist in the mass range 124-126 GeV. However, to turn those hints into a discovery, or to rule out the Standard Model Higgs particle altogether (note, this would not rule out the Higgs particle as such - just the simplest form of the Higgs particle consistent with current Standard Model physics), requires one more year's worth of data.

 

In April, the LHC brought two stable beams of 4 TeV protons into collision - the first collisions both of this year, after the winter shutdown, and at that energy. This came after weeks of preparation by the LHC's team of operators, technicians and accelerator physicists. The experimenters have also been working hard to complete all necessary work on their detectors in time and to test new software. Bringing large detectors like ATLAS out of hibernation is a delicate task and new problems are bound to keep showing up. The whole detector is made of 7000 tons of delicate and complex equipment, 4000 km of cables of all sorts and as many kilometers of tubing, all bringing voltages or special fluids to the detector or taking information out. This in part explains why nearly 4000 people are now involved in each of the ATLAS and CMS collaborations, the two largest LHC experiments. ALICE has about a thousand researchers, while LHCb has around 1500.

 

lhc3.jpg

The schedule announced back in February foresees beams running through to November. There will then be a long technical stop of around 20 months, with the LHC restarting close to its full design energy late in 2014 and operating for physics at the new high energy in early 2015. The ultimate goal is to reach an amazing 14 TeV. Meanwhile, results are still coming out from last year’s run. Sadly, they’re doing a great job at constraining possible new physics, but no convincing discoveries as yet. The next big public landmark for presenting new results will be the 2012 International Conference on High Energy Physics, which starts on July 4 in Melbourne, Australia.

 

 

Credits: CERN, Pauline Gagnon, Ken Bloom

Hrvoje Crvelin

Optical flow

Posted by Hrvoje Crvelin Apr 7, 2012

Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene. The concept of optical flow was first studied in the 1940s and ultimately published by American psychologist James J. Gibson as part of his theory of affordance. Optical flow techniques such as motion detection, object segmentation, time-to-collision and focus of expansion calculations, motion compensated encoding, and stereo disparity measurement utilize this motion of the objects' surfaces and edges.

 

There are clear mathematical relationships between the magnitude of the optic flow and where the object is in relation to you. If you double the speed at which you travel, the optic flow you see will also double. If an object is brought twice as close to you, the optic flow will again double. The optic flow also varies with the angle between your direction of travel and the direction of the object you are looking at. Suppose you are travelling forward. The optic flow is fastest when the object is to your side by 90 degrees, or directly above or below you. The closer the object is to the forward or backward direction, the less the optic flow. An object directly in front of you will have no optic flow, and will appear to stand still.
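All three of those observations can be packed into one line (my own summary formula, not from the original article): for an observer moving at speed v, an object at distance d and at angle \(\alpha\) from the direction of travel produces an angular flow of roughly

\[
\omega \approx \frac{v}{d}\,\sin\alpha .
\]

Doubling v doubles the flow, halving d doubles it, and it is largest at 90 degrees and zero straight ahead. With that in mind, consider the following image.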

 

of1.jpg

 

The figure above shows what the optic flow might look like from an aircraft flying over a rocky desert (or over Mars). The blue arrows show the optic flow that would be seen by a camera or a passenger on the aircraft. Looking downward, there is a strong optic flow pattern due to the ground and rocks on the ground. The optic flow is fastest directly below the aircraft. It is especially fast where the tall rock protrudes from the ground. A sensor on the aircraft that responds to optic flow would be able to see this optic flow pattern and recognize the presence of the tall rock. The meaning is clear: “Look out below!!!”

 

Looking forward, there is another optic flow pattern due to the upcoming rock and anything else the aircraft might be approaching. The blue circle directly at the center shows the "focus of expansion" or FOE. The FOE tells the aircraft the specific direction it is flying (if you are travelling in a straight line, the optic flow is zero in the directly forward direction). The aircraft sees a large optic flow to the right of the FOE, which is due to the large rock on the left-hand side of this picture. The aircraft also sees smaller optic flow patterns in the downward-front direction, due to the ground. Towards its upper left, it sees no optic flow because this region of the visual field only has the sky. The forward optic flow pattern reveals that the aircraft will fly close by the big rock, perhaps dangerously close. If the optic flow on the aircraft's right grows larger, then the aircraft should take that as a hint to turn away.

 

Now that you have the basics, let's turn to birds. The extraordinary ability of birds and bats to fly at speed through cluttered environments such as forests has long fascinated pretty much everyone. It raises an obvious question: how do these creatures do it? Clearly they must recognise obstacles and exercise the necessary fine control over their movements to avoid collisions while still pursuing their goal. And they must do this at extraordinary speed. From a conventional command-and-control point of view, this is a hard task. Object recognition and distance judgement are both hard problems, and route planning is even tougher. Even with the vast computing resources that humans have access to, it's not at all obvious how to tackle this problem. So how flying animals manage it with immobile eyes, fixed-focus optics and much more limited data processing is something of a puzzle.

 

of2.png

Ken Sebesta and John Baillieul at Boston University revealed how they've cracked it. They believe flying animals use a relatively simple algorithm to steer through clutter, and that this has allowed them to derive a fundamental law that determines the limits of agile flight. Their approach relies on an idea called optical flow sensing, which has been the subject of growing attention in recent years. The idea here is to think of the field of view, not as a set of discrete objects at different distances, but simply as an array of points that move across the field of vision. The rate of movement across the field of view depends on factors such as the size and distance of the object as well as the speed of flight. However, the optics of eyesight significantly simplifies certain calculations about this system. In particular, it allows a very simple determination of an imminent collision. It turns out that, given an eyeball flying at a constant velocity towards an object, the rate of change of the object's image size on the eyeball retina determines the time to impact. That's a simple calculation that requires no knowledge of the object's size, distance or even of the closing speed. It then becomes relatively straightforward to determine when a collision is imminent and to adjust course accordingly. That's something that can be done with direct feedback from the optical system in a highly efficient way.
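The derivation is only a couple of lines (my own sketch of the standard "tau" argument, not taken from the paper): for an object of physical size S at distance d, closing at speed v, the angular size is \(\theta \approx S/d\), so

\[
\dot\theta = \frac{S\,v}{d^2}, \qquad
\tau = \frac{\theta}{\dot\theta} = \frac{d}{v},
\]

which is exactly the time to impact - and S, d and v have all dropped out of anything the eye needs to measure separately.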

 

The work that Sebesta and Baillieul have done is to generalise this calculation for any point in the visual field, and to calculate not just when a collision is imminent but when the eyeball passes the object. They then apply this method to the visual field as a whole to determine when collisions are likely, and to create a control system that allows course adjustments to be made. The conclusion is that the optical flow approach leads to a fundamental limit on the agility of high-speed flight. The factors that determine this are the size and density of the obstacles in the clutter field and a quantity that Sebesta and Baillieul call the steering authority, essentially the flier's turning radius.

 

That's a fascinating result. It places a fundamental limit on the ability of any flier to navigate an environment at speed. And it also allows the development of a relatively straightforward algorithm for achieving this limit, or something close to it, using image data feedback. Sebesta and Baillieul are already exploiting this in a custom-built UAV based on the popular quadcopter airframe, equipped with motion sensors, an onboard camera, and a Gumstix Fire single-board computer. That opens up the possibility of autonomous micro-air vehicles swooping and diving through cluttered environments like sparrowhawks through a forest. And doing it in the not too distant future.

 

 

Credits: Wikipedia, Centeye, arXiv, Technology Review

Hrvoje Crvelin

Are you bald?

Posted by Hrvoje Crvelin Apr 7, 2012

For some reason, men tend to be very attached to that hair. I'm talking about the hairy part of the head, of course. And in case it is still not clear, it is the head attached to your neck (and the neck to your shoulders - just in case you are a total pervert and need everything detailed). There are dedicated social sites for bald men, brotherhood unions, consulting sites, etc. Well, yeah, it is true, men lose their hair more often than women, but so what? Most of the time, guys look better without hair anyway. Don't take my word for it - ask women, if you don't believe what you see in the mirror. Surely, that applies to totally bald men; those partially bald sometimes look silly and, granted, this may be a source of much frustration. Although male pattern baldness affects some 80% of Caucasian men by age 70, it's remained a puzzle to scientists. Dear (partially) bald men, there is hope!

 

bald1.jpg

Researchers have identified a biological pathway previously unknown to have a role in male pattern hair loss. Published in Science Translational Medicine, the study finds that a lipid compound called prostaglandin D2 (PGD2) has a role in inhibiting hair growth and is likely to lead to new hair growth products based on prostaglandin biology.

 

And it happened by chance. Rogaine was originally a drug for high blood pressure and Propecia was for prostate enlargement. In the new study, however, researchers identified the prostaglandin molecule that inhibits hair growth in men, which it turns out could provide a target for future drugs designed to treat baldness. The first thing researchers did was find a good use for the scalp fragments, usually discarded, from men undergoing hair transplant surgery. Comparing bald and non-bald tissue from these scalp parts, they discovered that the bald scalp had ten times as much PGD2 and elevated levels of PTGDS (the enzyme that makes PGD2) compared to normal scalp. The gene for PTGDS is also expressed more when there's lots of testosterone floating around, which may explain why baldness is so endemic to men.

 

Picture: A lipid called prostaglandin (green) is made by cells in human hair follicles like this one. A new study shows that the the hormone-like molecule inhibits stem cells (red) needed to make hair grow.

 

bald2.jpg

Once scientists identified PGD2 (that green thing above) as a potential culprit in baldness, trials in mice were the next step (poor mice). They found that mutant mice with unusually high levels of PGD2 also had the atrophied hair follicles of bald men and grew less fur. When the researchers put PGD2 on the skin of live mice, as well as on human follicles they'd grown in a dish, they found the molecule inhibited hair growth there, too.

 

This could be useful in other areas too. There are times when you want hairless skin, e.g. women's legs (fetishists excepted). This could be a safe, painless hair removal/prevention drug. I would like to stop shaving my beard. Certain parts of the industry might disagree with me, but the customer's always right.

 

I saw a few funny comments on Discover about this... I will make a guess and assume I know the sex behind each:

  • Assuming male: "Every scientist on Earth needs to work on this until we have cured it. Then move to STDS."
  • Assuming female: "Who cares what’s on top of the head. It’s what’s inside the head that counts."

 

 

Credits: Tina Hesman Saey, Sarah Zhang, Nature, Discover

Hrvoje Crvelin

Google glasses

Posted by Hrvoje Crvelin Apr 7, 2012

There are a few companies out there, big names, with really cool R&D, and some of their work looks pretty futuristic - but it is exactly those products that bring technology closer to us. Think of Nokia, Apple, Google, Sony, etc. I will dedicate this post to the latest cool thing coming out of Google. It is called Google glass (well, they call it Project Glass).

 

gglass1.jpg

Google released a video on Wednesday showing how wearers of the high-tech specs can free their hands from technology - keeping everything at eye level. Like a smartphone attached to your head, everything - a phone call, a check-in, GPS navigation, social networks, operational status - is just a tap away. The idea itself is not new; long ago it went under the futuristic banner of a first step into VR (virtual reality). However, Google - so far - promises to bring this product to the masses, available for everyone's consumption. Just like Apple made a giant step with the iPad, Google now has a chance to take it further with this product. So what exactly is Project Glass? Let's check the video first.

 

 

 

 

Looks cool. I can see this product being very useful in a daily data center operator role. Actually, if good enough, it might replace the mobile phone too. According to a report in The New York Times, the glasses - or Google goggles, if you like - will hit shelves by the end of this year, and retail for somewhere between $250 and $600. Guess which glasses people will be wearing the most next summer?

 

The idea here is not new. Even if we forget about those VR helmets used in labs, we already have something similar - Oakley Thump glasses with an mp3 player - see the picture below.

 

gglass2.jpg

Google obviously did more here. Walking around with Google's glasses will make you feel like the Terminator or Robocop or the Six Million Dollar Man. Of course, I can hear hordes of conspiracy theorists shouting about secret agencies measuring vibrations from our brains and collecting our thoughts. Well, get a life, people. This is another product which might speed things up on the market and be a massive hit. Hopefully there is going to be some sort of API for information exchange and processing, not reserved only for Google. As usual, in the beginning there is no standard and everything is reserved to the company developing the idea, so I doubt we will see an API-friendly product. But expect many people talking about privacy issues, and the real challenge will be how to regulate this - as you are not really going to know what people are seeing behind those glasses (and that will be private).

 

These glasses are being built in the Google X offices, a secretive laboratory near Google's main Mountain View, California, campus where engineers and scientists are also working on robots and space elevators. The glasses will use the same Android software that powers Android smartphones and tablets. Like smartphones and tablets, the glasses will be equipped with GPS and motion sensors. They will also contain a camera and audio inputs and outputs.

 

I'm not a big fan of having an antenna on my head though. I do not feel comfortable with 3G, 4G and WiFi signals right next to my head. Sure, a mobile phone is not far from that, but I never put the phone (at least since I've used a smartphone) next to my head - I just put the person on speaker. And I'm not a 3G junkie, so while the phone is somewhere next to my body, I switch it off. Indeed, there is no study showing any ill effects yet, but I can tell several seconds in advance that my phone is about to ring (call or text), as I tend to feel some sort of tickling on the skin next to the device. It may be perfectly OK, and perhaps my skin is just sensitive, but I choose to avoid it. So I can see all the coolness and benefits of the soon-to-be-released Google product, but I will skip it while I can, for my own personal reasons. With that in mind, I wonder what the suggested maximum length of use is going to be. If you imagine this being used for gaming too, that is a fairly important piece of information.

 

Whatever it ends up being, it finally looks like something interesting instead of the boring Apple vs Samsung race. Once Google releases the product, I assume the patent wars might continue (though based on what has been done so far I would expect all players to be fairly ready for the next products). But I really look forward to seeing how this product may shape our technology usage. From this to the next step - pretty much the same thing, but within contact lenses - is not far away. We already have good advances in nanotechnology today, so this is not hard to imagine. Actually, Michio Kaku has been saying this for years.

 

 

 

 

 

 

Credits: Google, New York Times, Michio Kaku

Hrvoje Crvelin

Dipolaritons

Posted by Hrvoje Crvelin Apr 7, 2012

In physics, polaritons are quasiparticles resulting from strong coupling of electromagnetic waves with an electric or magnetic dipole-carrying excitation. They are an expression of the common quantum phenomenon known as level repulsion, also known as the avoided crossing principle. Polaritons describe the crossing of the dispersion of light with any interacting resonance. Thus, a polariton is the result of the mixing of a photon with an excitation of a material. I recently heard someone mention dipolaritons and I thought that was a typo. A quick search led me to interesting work where dipolaritons are mentioned. The heroes of this story are scientists at the Cavendish Laboratory in Cambridge who used light to help push electrons through a classically impenetrable barrier.

 

dipol1.jpg

Particles cannot normally pass through walls, but if they are small enough, quantum mechanics says that it can happen. This occurs in radioactive decay and in many chemical reactions, as well as in scanning tunneling microscopes. While quantum tunneling is at the heart of the peculiar wave nature of particles, this is the first time that it has been controlled by light.

 

This marriage between photons and electrons is fated because the light is in the form of cavity photons, packets of light trapped to bounce back and forth between mirrors which sandwich the electrons oscillating through their wall. The offspring of this marriage are actually new indivisible particles, made of both light and matter, which disappear through the slab-like walls of semiconductor at will. One of the features of these new particles, which the research team christened "dipolaritons" (aha!), is that they are stretched out in a specific direction rather like a bar magnet. And just like magnets, they feel extremely strong forces between each other. Such strongly interacting particles are behind a whole slew of recent interest from semiconductor physicists who are trying to make condensates, the equivalent of superconductors and superfluids that travel without loss, in semiconductors.

 

Being in two places at once, these new electronic particles hold the promise of transferring ideas from atomic physics into practical devices, using quantum mechanics visible to the eye.

 

 

Credits: Wikipedia, University of Cambridge

Hrvoje Crvelin

Do you remember?

Posted by Hrvoje Crvelin Apr 7, 2012

There is no doubt that most of us would like to live forever, or at least prolong our currently estimated natural life span. I surely would. But, at present, this is just a dream, and it would cause more pain than benefit, so even if we had the technology to do it we surely would not see it on the market. Once we accept that we have a certain "life: use before..." period, another question comes to mind. Would you like to know, or at least have some sort of hint, that you will die soon? For some reason, most people I talk to say no. We have such a fear of, or stance towards, death that we simply do not wish to know when it will happen. It does not matter whether the prediction says you would die tomorrow or in 30 years. On the other hand, if you formulate the question as: would you like to know whether you will live to 100 and beyond - then everyone would like to know. It is obvious why; we all would like to live as long as possible, but we do not wish to know when we will die, as that's just too depressing and distracting. Living to 100 or beyond is still more the exception than the rule. So if the answer is yes, that's great, and if the answer is no, no big deal - it means nothing (unless you are 99 years old). But indeed, is there any sign that can help us predict that the end is near?

 

death1.jpg

If you look at the picture above, you would think of war and say that in war you can get hurt at any time and there is no way to predict that. The same applies to normal daily life, as you can get hit by a car or something. So the only tool is statistics, really. But let's set aside things like fights or accidents, as they are not "natural" - our body is natural and it decays in a natural way. Is there any sign to tell us that our natural life is soon to end? According to recent studies - yes. New research finds that a person's memory declines at a faster rate in the 2.5 years before death than at any other time after memory problems first begin. A second study shows that keeping mentally fit through board games or reading may be the best way to preserve memory during late life.

 

death2.jpg

 

For the study, 174 Catholic priests, nuns and monks without memory problems had their memory tested yearly for six to 15 years before death. After death, scientists examined their brains for hallmarks of Alzheimer's disease called plaques and tangles. In the first study, study author Robert S. Wilson used the end of life as a reference point for research on memory decline, rather than birth or the start of the study. The study found that at an average of about 2.5 years before death, different memory and thinking abilities tended to decline together, at rates that were 8 to 17 times faster than before this terminal period. Higher levels of plaques and tangles were linked to an earlier onset of this terminal period, but not to the rate of memory decline during it. The findings suggest that the changes in mental abilities during the two to three years before death are not driven directly by processes related to Alzheimer's disease; instead, the memory and other cognitive decline may involve some biological changes in the brain specific to the end of life. The study by Wilson and his co-authors deepens our understanding of terminal cognitive decline.

 

death3.jpg

The second study, also conducted by Wilson, focused on mental activities and involved 1076 people with an average age of 80 who were free of dementia. Participants underwent yearly memory exams for about five years. They reported how often they read the newspaper, wrote letters, visited a library and played board games such as chess or checkers. Frequency of these mental activities was rated on a scale of one to five, one meaning once a year or less and five representing every day or almost every day. The results showed that people's participation in mentally stimulating activities and their mental functioning declined at similar rates over the years. The researchers also found that they could predict participants' level of cognitive functioning by looking at their level of mental activity the year before but that level of cognitive functioning did not predict later mental activity. The results suggest a cause and effect relationship: that being mentally active leads to better cognitive health in old age.

 

Moral of the story? Keep yourself alive!

 

 

Credits: American Academy of Neurology

Hrvoje Crvelin

Made in space: glass

Posted by Hrvoje Crvelin Apr 6, 2012

In 1998 NASA researchers released a statement indicating that thin fibers of an exotic glass called ZBLAN are clearer when made in near weightlessness than on Earth under gravity's effects. ZBLAN is part of the family of heavy-metal fluoride glasses. Ordinary glass is based on silica, molecules of silicon dioxide (like sand or quartz), plus other compounds to get different qualities (most eyeglasses, though, are made of special plastics). ZBLAN is fluorine combined with metals: zirconium, barium, lanthanum, aluminum, and sodium (Zr, Ba, La, Al, Na, hence the name).

 

Most glass-making research has focused on the silica family, partly because it is easiest to make, especially for optical fibers that carry large volumes of information transmitted by lasers. Silica is good at transmitting visible light reasonably well (hence its use in lenses and windows), is moderately good with near-infrared light, and turns black in the deeper infrared spectrum. For about 20 years, optical scientists have known that exotic blends like ZBLAN can transmit better than silica glass. In fact, a perfect ZBLAN glass should transmit light near the theoretical best allowed by matter.

 

zblan.jpg

 

The challenge has been getting to the theoretical minimum absorption (that is, the minimum loss of signal). ZBLAN tends to crystallize, so long stretches cannot be made for communications fibers. Even when short stretches are made for other purposes, internal crystals act as partial mirrors, reflecting some of the light and bending the rest. Even a few crystals inside an optical fiber can seriously degrade its performance.

 

Now, Dr Martin Castillo from Queensland University of Technology's (QUT) Science and Engineering Faculty, and researcher for the university's micro-gravity drop tower, has partnered with the United States Air Force to fund world-first research into the development of ZBLAN glass. True ZBLAN glass fibres can only be made in the absence of gravity. Synthesizing this material in the absence of gravity can overcome the issue with crystallization.

 

This special glass can potentially be drawn into a solid fibre, and signals would be able to be transmitted over much greater distances than in current silicate glass fibres. The result is the potential elimination of power-consuming amplifiers and repeaters, while significantly increasing bandwidth.

 

Research will first be conducted at QUT's micro-gravity drop tower in an experiment that will see the glass undergo ~2.1 seconds of microgravity over a 21.3 meter drop inside a drag shield.
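Those two numbers are consistent, by the way - a quick back-of-the-envelope check of free fall from h = 21.3 m (my arithmetic, not QUT's):

\[
t = \sqrt{\frac{2h}{g}} = \sqrt{\frac{2 \times 21.3\,\mathrm{m}}{9.81\,\mathrm{m/s^2}}} \approx 2.1\,\mathrm{s}.
\]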

 

 

Credits: NASA, Wikipedia, Queensland University of Technology

Hrvoje Crvelin

Fire! Walk with me...

Posted by Hrvoje Crvelin Apr 6, 2012

The fossil record of fire first appears with the establishment of a land-based flora in the Middle Ordovician period, 470 million years ago, permitting the accumulation of oxygen in the atmosphere as never before, as the new hordes of land plants pumped it out as a waste product. When this concentration rose above 13%, it permitted the possibility of wildfire. Wildfire is first recorded in the Late Silurian fossil record, 420 million years ago, by fossils of charcoalified plants. Apart from a controversial gap in the Late Devonian, charcoal has been present ever since. The level of atmospheric oxygen is closely related to the prevalence of charcoal: clearly oxygen is the key factor in the abundance of wildfire. Fire also became more abundant when grasses radiated and became the dominant component of many ecosystems, around 6 to 7 million years ago; these grasses provided tinder which allowed for the more rapid spread of fire. These widespread fires may have initiated a positive feedback process, whereby they produced a warmer, drier climate more conducive to fire.

 

The ability to control fire was a dramatic change in the habits of early humans. Making fire to generate heat and light made it possible for people to cook food, increasing the variety and availability of nutrients. The heat produced would also help people stay warm in cold weather, enabling them to live in cooler climates. Fire also kept nocturnal predators at bay. Evidence of cooked food is found from 1.9 million years ago, although fire was probably not used in a controlled fashion until 400000 years ago. Evidence becomes widespread around 50 to 100 thousand years ago, suggesting regular use from this time; interestingly, resistance to air pollution started to evolve in human populations at a similar point in time. The use of fire became progressively more sophisticated, with its being used to create charcoal and to control wildlife from tens of thousands of years ago.

 

But now, there is a possible twist. An international team led by the University of Toronto and Hebrew University has identified the earliest known evidence of the use of fire by human ancestors. Microscopic traces of wood ash, alongside animal bones and stone tools, were found in a layer dated to one million years ago at the Wonderwerk Cave in South Africa. The analysis pushes back the timing of human use of fire by several hundred thousand years, suggesting that human ancestors as early as Homo erectus may have begun using fire as part of their way of life.

 

fire1.jpg

Wonderwerk is a massive cave located near the edge of the Kalahari where earlier excavations by Peter Beaumont had uncovered an extensive record of human occupation. A research project has been doing detailed analysis of the material from Beaumont's excavation along with renewed field work on the Wonderwerk site.

 

Analysis of sediment by lead authors Francesco Berna and Paul Goldberg of Boston University revealed ashed plant remains and burned bone fragments, both of which appear to have been burned locally rather than carried into the cave by wind or water. The researchers also found extensive evidence of surface discoloration that is typical of burning.

 

The control of fire would have been a major turning point in human evolution. The impact of cooking food is well documented, but the impact of control over fire would have touched all elements of human society. Socializing around a camp fire might actually be an essential aspect of what makes us human.

 

Credits: Wikipedia, University of Toronto

Hrvoje Crvelin

Big Bang V: Puberty

Posted by Hrvoje Crvelin Apr 6, 2012

Fifth in the series, Big Bang V: Puberty deals with some early and yet somewhat enigmatic events and objects like supernovae, black holes, pulsars, quasars and similar. I do not get into the details of how they happened, as this is still highly debated. This is by far the longest post I have made so far, but it is worth reading, as it gives an overview of some of the earliest structures we see today. We believe these (or most of them) may have happened before galaxies appeared - just before 13.2 billion years ago (as that is the oldest galaxy we see at the moment). If you missed them, previous articles on this subject were:

Big Bang I: Dawn of time

Big Bang II: First cry of baby Universe

Big Bang III: Origins of creation (reionization)

Big Bang IV: First starlight

 

In the latest of the previous posts, first starlight, we saw the creation of the first stars and how the first light got ignited. We also suspect those were really big stars, and as such their lifetimes were somewhat short. As we have seen, big stars end up in explosions (supernovae) and, depending on the mass left behind, may even produce a black hole. As with anything first, there is a certain level of uncertainty, and current research is resolving this puzzle on a daily basis. If you read the last article you might have noticed that gas filaments caused the first stars to be born, but this might have happened on a large scale, so the question here is: what came first - galaxies or black holes? Did we already have galaxy structures in place before the first black holes got created? Did the black holes come first, helping to build galaxies by pulling material towards them, or did they arise in the centre of already formed galaxies? It is an interesting question, and for a long time it wasn't addressed. According to the latest data, it turns out black holes came first.

 

Earlier studies had revealed an intriguing link between the masses of black holes and the central "bulges" of stars and gas in galaxies. Generally, the black hole's mass was seen to be about a 1000th of the mass of the surrounding galactic bulge. This indicated an interactive relationship between the black hole and the bulge. What was not clear was whether one grew before the other, or whether they grew together. New radio telescope observations reaching back almost to the birth of the first galaxies may now have answered that question.

 

bb34.jpg

Radio waves received from these galaxies, travelling at the speed of light, were emitted only about a billion years after the Big Bang which started the universe. These young distant galaxies had much larger black holes in relation to their bulge mass than older and closer galaxies. The implication is that the black holes started growing first. The next challenge is to work out how the black hole and the bulge affect each other's growth. To understand how the universe got to be the way it is today, we must understand how the first stars and galaxies were formed when the universe was young. The bottom line is that the final mass of a black hole is not primordial; it is determined during the galaxy formation process. Supermassive black holes (SMBHs) with masses of 10^6 to 10^9.5 solar masses reside in the centers of most galaxies, including the Milky Way.

 

Astronomers plumb the depths of the universe, and probe its history, by measuring how much the light from an object has been stretched by the expansion of space. This is called the redshift value or "z". In general, the greater the observed "z" value for a galaxy, the more distant it is in time and space as observed from our own Milky Way. Before Hubble was launched, astronomers could only see galaxies out to a z of approximately 1, corresponding to halfway across the universe. The original Hubble Deep Field taken in 1995 leapfrogged to z=4, or roughly 90 percent of the way back to the beginning of time. The Advanced Camera for Surveys (ACS) produced the Hubble Ultra Deep Field of 2004, pushing back the limit to z~6. Hubble's first infrared camera, the Near Infrared Camera and Multi-Object Spectrometer, reached out to z=7. The WFC3 first took us back to z~8, and has now plausibly penetrated for the first time to z=10. The very first stars may have formed between z of 30 and 15. Observations of quasars with redshifts z > 6 imply that SMBHs must have already existed at such high redshifts. Stellar explosions can produce black holes with masses of up to 10-15 solar masses, but there is no mechanism by which such small objects could grow to become SMBHs, except if dark matter has a sufficient self-interaction to facilitate a rapid transfer of angular momentum and kinetic energy (see this paper for a more detailed discussion). The early formation of SMBHs, which is necessary to account for high-redshift quasars, implies that SMBHs may have preceded star formation. As seen before, the masses of SMBHs exhibit a remarkable correlation with the bulge masses of their host galaxies. The bulge mass is 1000 times larger than the black hole mass, and the proportionality holds over some four orders of magnitude.
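For reference, "z" is defined by how much a known spectral line has been stretched (the standard textbook definition, added here for clarity):

\[
1 + z = \frac{\lambda_{\mathrm{observed}}}{\lambda_{\mathrm{emitted}}},
\]

so z = 1 means every wavelength arrives twice as long as it left.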

 

Investigating this black hole-galaxy mass correlation at different distances, and thus at different times in cosmic history, allows astronomers to study galaxy and black hole evolution in action. For galaxies further away than 5 billion light-years (corresponding to a redshift of z > 0.5), studies of the relationship between the central black hole and the galaxy face considerable difficulties. The typical objects of study are so-called active galaxies, and there are well-established methods to estimate the mass of such a galaxy's central black hole. It is the galaxy's mass itself that is the challenge: at such distances, standard methods of estimating a galaxy's mass become exceedingly uncertain or fail altogether. The Max Planck Institute for Astronomy succeeded in directly "weighing" both a galaxy and its central black hole at such a great distance using a sophisticated and novel method. The galaxy, known to astronomers by the number J090543.56+043347.3 (which encodes the galaxy's position in the sky), is at a distance of 8.8 billion light-years from Earth (redshift z = 1.3). The key idea is the following: a galaxy's stars and gas clouds orbit the galactic centre; for instance, our Sun orbits the centre of the Milky Way galaxy once every 250 million years. The stars' different orbital speeds are a direct function of the galaxy's mass distribution. Determine orbital speeds and you can determine the galaxy's total mass. Using this information, the researchers reconstructed the galaxy's dynamical mass.

Picture: The star shape indicates the position of the galaxy's active nucleus; the surrounding contour lines indicate brightness levels of light emitted by the nucleus. Dark blue pixels indicate gas moving towards us at a speed of 250 km/s, dark red pixels gas moving away from us at 350 km/s.

 

bb37.jpg
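The "determine orbital speeds, determine the mass" step is just Newtonian dynamics; roughly (my own simplification, ignoring the detailed mass modelling the team actually did):

\[
M(<r) \approx \frac{v^2\, r}{G},
\]

that is, the mass enclosed within radius r follows from the orbital speed v measured at that radius.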

 

Now that we know black holes might have appeared first, we need to get to the point of creating one. Cosmologists believe that the lightest chemical elements - hydrogen and helium - were created shortly after the Big Bang, together with some lithium, while almost all other elements were formed later, in stars. Supernova explosions spread the stellar material into the interstellar medium, making it richer in metals. New stars form from this enriched medium, so they have higher amounts of metals in their composition than the older stars. Therefore, the proportion of metals in a star tells us how old it is. Sometimes we get some mysteries along the way too. Last year a faint star in the constellation of Leo, called SDSS J102915+172927, was found to have the lowest amount of elements heavier than helium (what astronomers call "metals") of all stars yet studied. It has a mass smaller than that of the Sun and is probably more than 13 billion years old. A widely accepted theory predicts that stars like this, with low mass and extremely low quantities of metals, shouldn't exist, because the clouds of material from which they formed could never have condensed. But there it is. Also very surprising was the lack of lithium in SDSS J102915+172927. Such an old star should have a composition similar to that of the Universe shortly after the Big Bang, with a few more metals in it. But researchers found that the proportion of lithium in the star was at least fifty times less than expected in the material produced by the Big Bang. Below is a picture of the star.

 

bb36.jpg

 

And while there are a few more candidates matching this story, we will mostly focus on the main line of the story. And that means we start with a star ending its life in a supernova, creating the seed for new generations. Supernovas - stars in the process of exploding - open a window onto the history of the elements of Earth's periodic table as well as the history of the universe. There are two possible routes to a supernova: either a massive star may run out of fuel, ceasing to generate fusion energy in its core, and collapsing inward under the force of its own gravity to form a neutron star or a black hole; or a white dwarf star may accumulate (accrete) material from a companion star until it reaches a critical mass and undergoes a thermonuclear explosion. In either case, the resulting supernova explosion expels much or all of the stellar material with velocities of up to 10% of the speed of light.

 

bb46.jpg

 

All of those heavier than oxygen were formed in nuclear reactions that occurred during these explosions. Supernovae are classified into one of two primary types. White dwarfs which gain matter via accretion collapse once they approach the Chandrasekhar limit of 1.38 solar masses, thus yielding a Type Ia supernova (used as a standard candle). Accretion of matter can be accomplished in a variety of ways, including via a close binary star companion or a merger with another white dwarf. In contrast, types Ib and Ic involve large stars which have exhausted their available fuel and collapse due to gravity. Type II supernovae involve much more massive stars (at least nine solar masses) where nuclear fusion follows a steady path from lighter to progressively heavier elements (such as hydrogen to helium, which is then converted to carbon, etc.) until nuclear fusion is no longer possible at the core due to the iron and nickel that have accumulated, leading to a huge core collapse and an ensuing stellar explosion. Spectroscopy has also played a key role in identifying the type of supernova one observes and, in fact, now forms the basis for their classification. More specifically, type Ia supernovae are characterized by the absence of hydrogen emission lines in their spectra, in contrast to type II, which exhibit strong hydrogen emission lines. Furthermore, type I supernovae are further subdivided on the basis of the presence of a silicon line (615 nm, type Ia), a helium line (type Ib) or neither (type Ic) in their spectra.

 

bb42.jpg

 

An exploding star known as a Type Ia supernova plays a key role in our understanding of the universe. Studies of Type Ia supernovae led to the discovery of dark energy. Yet the cause of this variety of exploding star remains elusive. All evidence points to a white dwarf that feeds off its companion star, gaining mass, growing unstable, and ultimately detonating. But does that white dwarf draw material from a Sun-like star, an evolved red giant star, or from a second white dwarf? Or is something more exotic going on? Clues can be collected by searching for "cosmic crumbs" left over from the white dwarf's last meal. There are two different models for how Type Ia supernovae are created from this type of binary system. In the so-called double-degenerate (or DD) model, the orbit between two white dwarf stars shrinks until the lighter star's path is disrupted and it moves close enough for some of its matter to be absorbed into the primary white dwarf and initiate an explosion. In the so-called single-degenerate (or SD) model, the white dwarf slowly accretes mass from a different, non-white-dwarf type of star, until it reaches an ignition point. There are three potential methods for the transfer of mass and - depending on which one is used - the second star is likely to be a red giant, a helium star, or a so-called subgiant or main-sequence star.

 

Two comprehensive studies of SN 2011fe - the closest Type Ia supernova in the past two decades - provide new evidence that the white dwarf progenitor was a particularly picky eater, leading scientists to conclude that the companion star was not likely a Sun-like star or an evolved giant. This supernova occurred in the Pinwheel galaxy, which is located near the "Big Dipper" within the Ursa Major constellation. Early detection gave astronomers the extraordinary opportunity to observe the evolution of the brightness and spectra of the energy emitted from the explosion over time. Based on these data, researchers were able to approximate how big the star was and when it exploded, in addition to details about the companion star in the system. Researchers examined SN 2011fe with a suite of instruments in wavelengths ranging from X-rays to radio. They saw no sign of stellar material recently devoured by the white dwarf. Instead, the explosion occurred in a remarkably clean environment. Additional studies using NASA's Swift satellite, which examined a large number of more distant Type Ia supernovae, appear to rule out giant stars as companions for the white-dwarf progenitors. Taken together, these studies suggest that Type Ia supernovae likely originate from a more exotic scenario, possibly the explosive merger of two white dwarfs. Observations of the early stages of the supernova presented by Lawrence Berkeley Laboratory showed direct evidence that the primary star was a type of white dwarf called a carbon-oxygen white dwarf. The images below, from Swift's Ultraviolet/Optical Telescope (UVOT), show the nearby spiral galaxy M101 before and after the appearance of SN 2011fe (circled, right), which was discovered on Aug. 24, 2011. At a distance of 21 million light-years, it was the nearest Type Ia supernova since 1986. Left: View constructed from images taken in March and April 2007. Right: The supernova was so bright that most UVOT exposures were short, so this view includes imagery from August through November 2011 to better show the galaxy.

 

bb41.jpg

 

These explosions, which can outshine their galaxy for weeks, release large and consistent amounts of energy at visible wavelengths. These qualities make them among the most valuable tools for measuring distance in the universe. Because astronomers know the intrinsic brightness of Type Ia supernovae, how bright they appear directly reveals how far away they are. Thanks to unprecedented X-ray and ultraviolet data from Swift, we have a clearer picture of what's required to blow up these stars. The studies suggest the companion to the white dwarf is either a smaller, younger star similar to our sun or another white dwarf. For more details, click here. Very sensitive and early radio and X-ray observations, presented in a separate paper in The Astrophysical Journal, show no evidence of interaction with surrounding material. Combining these data with an analysis of historical images, we can rule out luminous red giants and the vast majority of helium stars for the second star in the binary system before the explosion. These clues mean that the secondary star was either another white dwarf, as in the DD model, or a subgiant or main-sequence star, as created by one of the three SD model methods. Analysis of the matter ejected by the supernova's explosion suggests that the second star is less likely to be another white dwarf. Thus, the solution to the mystery of SN 2011fe's origin was thought to be a primary white dwarf accreting matter from a neighboring subgiant or main-sequence star. Many possible explanations have been suggested, and all but one of these require that a companion star near the exploding white dwarf be left behind after the explosion. So, a possible way to distinguish between the various progenitor models is to look deep into the center of an old supernova remnant and find (or fail to find) the ex-companion star.
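
To make the standard-candle point concrete: since the intrinsic brightness is known, the distance follows from the distance modulus, m - M = 5 log10(d) - 5, with d in parsecs. A quick Python sketch, taking a peak absolute magnitude of about -19.3 for a Type Ia and a peak apparent magnitude of about 10, roughly what SN 2011fe reached (both numbers are illustrative assumptions, not measurements from any one paper):

# Standard-candle distance from the distance modulus m - M = 5*log10(d) - 5.
M_PEAK = -19.3   # assumed typical peak absolute magnitude of a Type Ia

def distance_parsecs(apparent_mag, absolute_mag=M_PEAK):
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

d = distance_parsecs(10.0)   # peak apparent magnitude ~10, as for SN 2011fe
print(f"{d:.2e} parsecs (~{d * 3.26 / 1e6:.0f} million light-years)")

That comes out around 24 million light-years, in the same ballpark as the 21 million light-years quoted above.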

 

bb43.jpg

 

The star system that produces the Type Ia thermonuclear supernova had been suspected to be a closely orbiting pair of white dwarf stars that spiral inward toward an explosive collision. Finally, LSU Professor of Physics & Astronomy Bradley Schaefer and graduate student Ashley Pagnotta used Hubble Space Telescope images of a supernova remnant named SNR 0509-67.5 to demonstrate the absence of any possible surviving companion star to the exploding white dwarf, allowing the rejection of all possible classes of progenitors except the close pair of white dwarfs. Any such result naturally requires extensive data processing and analysis as well as detailed theory calculations before it can be considered final. When finished, the central region of SNR 0509-67.5 was found to be starless to a very deep limit (visual magnitude 26.9). The faintest possible ex-companion star for all models (except the double degenerate) is a factor of 50 times brighter than the observed limit, which rules out all explanations except the pair of white dwarf stars.
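
Because the magnitude scale is logarithmic, that "factor of 50" is easy to translate: a brightness ratio f corresponds to 2.5 log10(f) magnitudes. A small Python sketch using the numbers from the paragraph above:

import math

limit_mag = 26.9   # depth of the Hubble search (visual magnitude)
factor = 50        # faintest predicted ex-companion vs. the observed limit

delta_mag = 2.5 * math.log10(factor)   # ~4.2 magnitudes
print(f"any surviving companion should be magnitude {limit_mag - delta_mag:.1f} or brighter")

So a surviving companion would sit at roughly magnitude 22.7, comfortably above the detection threshold - and nothing was there.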

 

The University of Pittsburgh took it a step further this year. There were obvious reasons to suspect that Type Ia supernovae come from the merging of a double white dwarf, but the biggest question was whether there were enough double white dwarfs out there to produce the number of supernovae that we see. Because white dwarfs are extremely small and faint, there is no hope of seeing them in distant galaxies. Therefore, researchers turned to the only place where they could be seen: the part of the Milky Way Galaxy within about a thousand light years of the sun. To find a star's companion, the team needed two spectra to measure the relative velocity between the two. However, the Sloan Digital Sky Survey (SDSS) only took one spectrum of most objects. The team decided to make use of a little-known feature of the SDSS spectra to separate each one into three or more subspectra. Although the reprocessing of the data was challenging, the team was able, within a year, to compile a list of more than 4000 white dwarfs, each of which had two or more high-quality subspectra. They found 15 double white dwarfs in the local neighborhood and then used computer simulations to calculate the rate at which double white dwarfs would merge. Then, they compared the number of merging white dwarfs here to the number of Type Ia supernovae seen in distant galaxies that resemble the Milky Way. The result was that, on average, one double white dwarf merger occurs in the Milky Way about every century.

 

bb44.jpg

 

The image above is a mosaic showing 99 of the nearly 4000 white dwarfs examined. Out of those four thousand, fifteen turned out to be double white dwarfs. The merger rate implied by that number is remarkably close to the rate of Type Ia supernovae we observe in galaxies like our own, which suggests that the merger of a double white dwarf system is a plausible explanation for Type Ia supernovae.

 

While I spent some time on Type Ia, that doesn't mean the others are to be neglected. Recently one type IIb made headlines too - Cassiopeia A. Using very long observations of Cassiopeia A (or Cas A), a team of scientists has mapped the distribution of elements in the supernova remnant in unprecedented detail. Now, check the following picture first.

 

bb45.jpg

 

An artist's illustration on the left shows a simplified picture of the inner layers of the star that formed Cas A just before it exploded, with the predominant concentrations of different elements represented by different colors: iron in the core (blue), overlaid by sulfur and silicon (green), then magnesium, neon and oxygen (red). The image from NASA's Chandra X-ray Observatory on the right uses the same color scheme to show the distribution of iron, sulfur and magnesium in the supernova remnant. The data show that the distributions of sulfur and silicon are similar, as are the distributions of magnesium and neon. Oxygen, which according to theoretical models is the most abundant element in the remnant, is difficult to detect because the X-ray emission characteristic of oxygen ions is strongly absorbed by gas along the line of sight to Cas A, and because almost all the oxygen ions have had all their electrons stripped away. A comparison of the illustration and the Chandra element map shows clearly that most of the iron, which according to theoretical models of the pre-supernova was originally on the inside of the star, is now located near the outer edges of the remnant. Surprisingly, there is no evidence from X-ray (Chandra) or infrared (Spitzer Space Telescope) observations for iron near the center of the remnant, where it was formed. Also, much of the silicon and sulfur, as well as the magnesium, is now found toward the outer edges of the still-expanding debris. The distribution of the elements indicates that a strong instability in the explosion process somehow turned the star inside out. That's pretty cool, isn't it?

 

The most ancient explosions, far enough away that their light is reaching us only now, can be difficult to spot. Last year researchers uncovered a record-breaking number of supernovas in the Subaru Deep Field, a patch of sky the size of a full moon. Supernovas are nature's "element factories". During these explosions, elements are both formed and flung into interstellar space, where they serve as raw materials for new generations of stars and planets. Closer to home, these elements are the atoms that form the ground we stand on, our bodies, and the iron in the blood that flows through our veins. By tracking the frequency and types of supernova explosions back through cosmic time, astronomers can reconstruct the universe's history of element creation.

 

bb38.jpg

In order to observe the 150000 galaxies of the Subaru Deep Field, the team used the Japanese Subaru Telescope in Hawaii. The telescope's light-collecting power, sharp images, and wide field of view allowed the researchers to overcome the challenge of viewing such distant supernovas. By "staring" with the telescope at the Subaru Deep Field, the faint light of the most distant galaxies and supernovas accumulated over several nights at a time, forming a long and deep exposure of the field.

 

Over the course of observations, the team "caught" the supernovas in the act of exploding, identifying 150 supernovas in all. Out of the 150 supernovas observed, 12 were among the most distant and ancient ever seen.

 

According to the analysis, thermonuclear supernovas, also called Type Ia, were exploding about five times more frequently 10 billion years ago than they are today. These supernovas are a major source of iron in the universe, the main component of Earth's core and an essential ingredient of the blood in our bodies.

 

In 2011, only fourteen days after the explosion of a star in the M51 galaxy, coordinated telescopes around Europe took a photograph of the cosmic explosion in great detail - equivalent to seeing a golf ball on the surface of the moon. This is the earliest high-resolution image of a supernova explosion. From this photograph, we can determine the expansion velocity of the shock wave created in the explosion. With this precision, we can look for the progenitor star in earlier photographs of the galaxy, as well as better plan our future observations.

 

bb39.jpg

 

The most recent observation of a supernova comes from the M95 galaxy and is called SN 2012aw. On March 16th, 2012, news broke of a possible supernova. Soon, this was confirmed. M95 is about 35-40 million light years away, and is part of a small group of a couple of dozen galaxies called the Leo I group. This is a type II supernova. The exploding star sits right on a spiral arm, as seen in the picture below. I also attached a video giving some more details.

 

bb40.jpg

 

 

 

 

Neutron stars are extremely dense balls of matter only a few kilometers across, the collapsed remnants of the cores of stars that went supernova. By dense, we mean dense: imagine taking a mountain and crushing it down in size to where it could fit in your hand. Or think of it this way: a cubic centimeter (roughly the size of a sugar cube or a die) of neutron star material would have about the same mass as all the cars in the US combined. Dense means dense! Simply check the picture below, which compares the size of a neutron star to Manhattan; a neutron star packs more mass than the sun into a sphere just 10 to 15 miles wide.
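
Those comparisons sound hyperbolic, so let's do the arithmetic. A back-of-the-envelope Python sketch, assuming a typical 1.4-solar-mass neutron star with a 12 km radius (representative values, not measurements of any particular star):

import math

M_SUN = 1.989e30   # kg
M = 1.4 * M_SUN    # assumed neutron star mass
R = 12e3           # assumed radius, meters

density = M / ((4 / 3) * math.pi * R ** 3)   # kg per cubic meter
cube_kg = density * 1e-6                     # mass of one cubic centimeter
print(f"density ~ {density:.1e} kg/m^3")
print(f"one cm^3 ~ {cube_kg:.1e} kg (~{cube_kg / 1e9:.0f} million tonnes)")

That works out to a few hundred million tonnes per sugar cube, which is indeed the ballpark of all the cars in the US put together.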

 

bb49.jpg

 

Being so dense, the gravity of a neutron star is nearly beyond comprehension. If you let something drop from a height onto the star's surface, that material will be moving at a large fraction of the speed of light upon impact. The energy release is monumental; a marshmallow traveling at that speed would explode like a nuclear weapon. There could be exotic kinds of particles or states of matter, such as quark matter, in the centers of neutron stars, but it's impossible to create them in the lab. The only way to find out is to understand neutron stars. The warping of space-time by the neutron star's powerful gravity, an effect of Einstein's general theory of relativity, shifts the neutron star's iron line to longer wavelengths. We see these asymmetric lines from many black holes, but this is the confirmation that neutron stars can produce them as well. It shows that the way neutron stars accrete matter is not very different from that of black holes, and it gives us a new tool to probe Einstein's theory too. Another study saw gas whipping around just outside the neutron star's surface, and since the inner part of the disk obviously can't orbit any closer than the neutron star's surface, these measurements give us a maximum size for the neutron star's diameter. The neutron stars can be no larger than 29 to 33 km across, results that agree with other types of measurements.
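
The impact-speed claim is easy to check: an object dropped from far away hits the surface at essentially the escape velocity, v = sqrt(2GM/R). A Newtonian Python sketch with the same assumed mass and radius as before (relativity matters at these speeds, so treat the result as an estimate):

import math

G = 6.674e-11        # gravitational constant, SI units
C = 2.998e8          # speed of light, m/s
M = 1.4 * 1.989e30   # assumed neutron star mass, kg
R = 12e3             # assumed radius, meters

v = math.sqrt(2 * G * M / R)   # free-fall impact speed from far away
print(f"impact speed ~ {v:.1e} m/s ({v / C:.0%} of the speed of light)")

Roughly 60% of the speed of light - a large fraction indeed.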

 

bb59.png

 

In general, compact stars of less than 1.38 solar masses (the Chandrasekhar limit) are white dwarfs, and above 2 to 3 solar masses (the Tolman-Oppenheimer-Volkoff limit), a quark star might be created; however, this is uncertain. Gravitational collapse will usually occur on any compact star between 10 and 25 solar masses and produce a black hole. Current understanding of the structure of neutron stars is defined by existing mathematical models. The inner structure might be derived by analyzing observed frequency spectra of stellar oscillations. On the basis of current models, the matter at the surface of a neutron star is composed of ordinary atomic nuclei crushed into a solid lattice with a sea of electrons flowing through the gaps between them. It is possible that the nuclei at the surface are iron, due to iron's high binding energy per nucleon. It is also possible that heavy element cores, such as iron, simply sink beneath the surface, leaving only light nuclei like helium and hydrogen cores. If the surface temperature exceeds 10⁶ kelvins (as in the case of a young pulsar), the surface should be fluid instead of the solid phase observed in cooler neutron stars (temperature below 10⁶ kelvins). The "atmosphere" of the star is hypothesized to be at most several micrometers thick, and its dynamics are fully controlled by the star's magnetic field. Below the atmosphere one encounters a solid "crust". This crust is extremely hard and very smooth (with maximum surface irregularities of ~5 mm), because of the extreme gravitational field. Proceeding inward, one encounters nuclei with ever increasing numbers of neutrons; such nuclei would decay quickly on Earth, but are kept stable by tremendous pressures. Proceeding deeper, one comes to a point called neutron drip, where neutrons leak out of nuclei and become free neutrons. In this region, there are nuclei, free electrons, and free neutrons. The nuclei become smaller and smaller until the core is reached, by definition the point where they disappear altogether. The composition of the superdense matter in the core remains uncertain. One model describes the core as superfluid neutron-degenerate matter (mostly neutrons, with some protons and electrons). More exotic forms of matter are possible, including degenerate strange matter (containing strange quarks in addition to up and down quarks), matter containing high-energy pions and kaons in addition to neutrons, or ultra-dense quark-degenerate matter.

 

bb50.jpg

 

A neutron star is the closest thing to a black hole that astronomers can observe directly, crushing half a million times more mass than Earth into a sphere no larger than a city. In October 2010, a neutron star near the center of our galaxy erupted with hundreds of X-ray bursts that were powered by a barrage of thermonuclear explosions on the star's surface. NASA's Rossi X-ray Timing Explorer (RXTE) captured the month-long fusillade in extreme detail. Using this data, an international team of astronomers has been able to bridge a long-standing gap between theory and observation. At low rates of accretion, this system displays the familiar X-ray pattern of fuel build-up and explosion: a strong spike of emission followed by a long lull as the fuel layer reforms. At higher accretion rates, where a greater volume of gas is falling onto the star, the character of the pattern changes: the emission spikes are smaller and occur more often. But at the highest rates, the strong spikes disappeared and the pattern transformed into gentle waves of emission. This is a sign of marginally stable nuclear fusion, where the reactions take place evenly throughout the fuel layer, just as theory predicted. Obviously rotation has an impact too. The above makes sense for a model where rotation is not very fast. Faster rotation would introduce friction between the neutron star's surface and its fuel layers, and this frictional heat may be sufficient to alter the rate of nuclear burning in all other bursting neutron stars previously studied.

 

bb57.jpg

 

Pulsars were discovered in 1967, and that discovery earned the Nobel Prize in 1974. A pulsar (a portmanteau of "pulsating star") is a highly magnetized, rotating neutron star that emits a beam of electromagnetic radiation. This radiation can only be observed when the beam of emission is pointing towards the Earth, much the way a lighthouse can only be seen when the light is pointed in the direction of an observer, and this is responsible for the pulsed appearance of the emission. Neutron stars are very dense and have short, regular rotational periods. They appear to pulse because the magnetic axis is not aligned with the axis of rotation, so the pole comes in and out of view as the neutron star rotates. This produces a very precise interval between pulses, ranging from roughly milliseconds to seconds for an individual pulsar. The precise periods of pulsars make them useful tools. Observations of a pulsar in a binary neutron star system were used to indirectly confirm the existence of gravitational radiation.

 

bb47.jpg

 

Pulsars are among the most exotic celestial bodies known. They have diameters of about 20 kilometres, but at the same time roughly the mass of our sun. A sugar-cube-sized piece of their ultra-compact matter on Earth would weigh hundreds of millions of tons. A sub-class of them, known as millisecond pulsars, spin up to several hundred times per second around their own axes. Millisecond pulsars are strongly magnetized, old neutron stars in binary systems which have been spun up to high rotational frequencies by accumulating mass and angular momentum from a companion star. Today we know of about 200 such pulsars with spin periods between 1.4 and 10 milliseconds. These are located in both the Galactic Disk and in Globular Clusters. Previous studies reached the paradoxical conclusion that some millisecond pulsars are older than the universe itself. Through numerical calculations based on stellar evolution and accretion torques, astrophysicist Thomas Tauris demonstrated that millisecond pulsars lose about half of their rotational energy in the so-called Roche-lobe decoupling phase. This phase describes the termination of mass transfer in the binary system. Hence, radio-emitting millisecond pulsars should spin slightly slower than their progenitors, X-ray emitting millisecond pulsars which are still accreting material from their donor star. This is exactly what the observational data seem to suggest. Furthermore, these new findings help explain why some millisecond pulsars appear to have characteristic ages exceeding the age of the Universe and perhaps why no sub-millisecond radio pulsars exist. The key feature of the new results is that it has now been demonstrated how the spinning pulsar is able to break out of its so-called equilibrium spin. At this epoch the mass-transfer rate decreases, which causes the magnetospheric radius of the pulsar to expand and thereby expel the infalling matter like a propeller. This causes the pulsar to lose additional rotational energy and thus slow down its spin rate.
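
The "older than the universe" paradox comes from the standard spin-down age estimate, τ = P / (2Ṗ), which assumes the pulsar was born spinning much faster than today - a poor assumption for a recycled pulsar. A Python sketch with representative (made-up) numbers:

SECONDS_PER_YEAR = 3.156e7

def characteristic_age_years(P, Pdot):
    # Spin-down (characteristic) age: tau = P / (2 * Pdot)
    return P / (2 * Pdot) / SECONDS_PER_YEAR

P = 3e-3       # assumed spin period: 3 milliseconds
Pdot = 2e-21   # assumed period derivative, seconds per second

print(f"characteristic age ~ {characteristic_age_years(P, Pdot):.1e} years")

With these numbers the formula returns about 24 billion years, well beyond the roughly 14-billion-year age of the universe - illustrating why the formula, not the star, is at fault.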

 

bb48.jpg

 

The Discover blog recently pointed to a nice article by the BBC. On Earth, GPS gives us a highly accurate way of determining position. This works because GPS satellites provide a set of clocks, and the relative timings of their signals can be translated into positions. Out in deep space, of course, these clocks are useless for this purpose, and the best we can currently do is compare the timing of signals as they are measured back on Earth by different detectors, with limited accuracy. The further away a spacecraft is, the worse this method gets. Werner Becker (Max Planck Institute for Extraterrestrial Physics) realized that the universe comes equipped with its own set of exquisite clocks - pulsars - the timing of which can, in principle, be used to guide spacecraft in a similar way to how GPS is used here on Earth. A significant obstacle to making this work today is that detecting signals from the pulsars requires X-ray detectors that are compact enough to be easily carried on spacecraft. However, it turns out the relevant technology is also needed by the next generation of X-ray telescopes, and should be ready in twenty years or so. Research pays off.

 

bb75.jpg

The Crab pulsar is a rapidly spinning neutron star, the collapsed core of a massive star that exploded in a spectacular supernova in the year 1054, leaving behind the brilliant Crab Nebula, with the pulsar at its heart. It is one of the most intensively studied objects in the sky. Rotating about 30 times a second, the pulsar has an intense, co-rotating magnetic field from which it emits beams of radiation. The beams sweep around like a lighthouse beacon because they are not aligned with the star's rotation axis. So although the beams are steady, they are detected on Earth as rapid pulses of radiation. Scientists have long agreed on a general picture of what causes pulsar emission. Electromagnetic forces created by the star's rapidly rotating magnetic field accelerate charged particles to near the speed of light, producing radiation over a broad spectrum. But the details remain a mystery. After many years of observations and results from the Crab, we thought we had an understanding of how it worked, and the models predicted an exponential decay of the emission spectrum above around 10 GeV. So it came as a real surprise when we found pulsed gamma-ray emission at energies above 100 GeV.

 

bb58.jpg

 

Then, a month ago, this value got four times higher. This was confirmed by the two MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) telescopes on the Canary island of La Palma. They observed the pulsar in the region of very high energy gamma radiation from 25 up to 400 gigaelectronvolts (GeV), a region that was previously difficult to access with high energy instruments - 50 to 100 times higher than theorists thought possible. These latest observations are difficult for astrophysicists to explain. There must be processes behind this that are as yet unknown. A few years ago, the MAGIC telescopes detected gamma rays of energy ≥ 25 GeV from the Crab pulsar. This was very unexpected, since the available EGRET satellite data showed that the spectrum ceases at much lower energies. However, at the very high energies MAGIC demonstrated a few orders of magnitude higher sensitivity compared to the satellite missions. At the time, scientists concluded that the radiation must have been produced at least 60 kilometres above the surface of the neutron star. This is because high-energy gamma rays are so effectively shielded by the star's magnetic field that a source very close to the star could not be detected. As a consequence, that measurement ruled out one of the main theories of high energy gamma-ray emission from the Crab pulsar. The recent measurements by MAGIC, together with those of the orbiting Fermi satellite at much lower energies, provide an uninterrupted spectrum of the pulses from 0.1 GeV to 400 GeV. These clear observational results create major difficulties for most of the existing pulsar theories, which predict significantly lower limits for the highest energy emission. A new theoretical model developed by a MAGIC team associate explains the phenomenon with a cascade-like process which produces secondary particles that are able to overcome the barrier of the pulsar's magnetosphere. Another possible explanation links the puzzling emission to the similarly enigmatic physics of the pulsar wind - a current of electrons, positrons and electromagnetic radiation which ultimately develops into the Crab Nebula. However, even though the above models are able to provide explanations for the extremely high energy and the shortness of the pulses, further refinements are necessary to achieve good agreement with observations. Astrophysicists hope that future observations will improve the statistical precision of the data and help solve the mystery. This could shed new light on pulsars and on the Crab Nebula itself, one of the most studied objects in our Milky Way.

 

But pulsars are also puzzles for other reasons. The conventional view is that their magnetic field arises from the movement of charged particles as they rotate. These charged particles ought to behave like a superfluid and so should end up becoming aligned with the axis of rotation. That's clearly not the case.

 

What's more, these kinds of superfluid currents are likely to be highly unstable, generating wobbles in the magnetic field. But pulsars are well known for being amazingly stable.

 

Another problem is how pulsars end up with magnetic fields that are so strong. The conventional view is that the process of collapse during a supernova somehow concentrates the original star's field.

 

However, a star loses much of its material when it explodes as a supernova and this presumably carries away much of its magnetic field too.

 

But some pulsars have fields as high as 10¹² Tesla, far more than can be explained by this process.

bb56.jpg

 

Johan Hansson and Anna Ponga at Lulea University of Technology in Sweden pointed out that there is another way for magnetic fields to form, other than the movement of charged particles. This other process is the alignment of the magnetic fields of the body's components, which is how ferromagnets form. Their suggestion is that when a neutron star forms, the neutron magnetic moments become aligned because this is the lowest energy configuration of the nuclear forces between them. When this alignment takes place, a powerful magnetic field effectively becomes frozen in place. This makes neutron stars giant permanent magnets (Hansson and Ponga call them neutromagnets). A neutromagnet would be hugely stable, just like a permanent ferromagnet. The field would be likely to align with the star's original field, which, although much weaker, acts as a seed when the field forms. Significantly, this needn't be in the same direction as the axis of spin. What's more, since neutron stars all have about the same mass (sort of), Hansson and Ponga can calculate the maximum strength of the fields they ought to generate. This number turns out to be about 10¹² Tesla, exactly the value observed in the highest strength fields around neutron stars. That immediately solves several of the outstanding puzzles about pulsars in a remarkably simple way. The theory is testable too - it predicts that neutron stars cannot have magnetic fields greater than 10¹² Tesla, so the discovery of a neutron star with a stronger field would immediately scupper it. This idea also raises some questions of its own; the Pauli exclusion principle would, at first sight, seem to exclude the possibility of neutrons being aligned in this way. But Hansson and Ponga point to laboratory experiments which suggest that nuclear spins can become ordered, like ferromagnets. One should remember that the nuclear physics at these extreme circumstances and densities is not known a priori, so several unexpected properties (such as "neutromagnetism") might apply. Keep in mind this idea is speculative, but it surely contains elegance and explanatory power that make it worth pursuing in significantly more detail.

 

bb53.jpg

One of the most studied objects in the sky, the Crab Nebula is powered by a pulsar. This composite image of the Crab Nebula uses data from the Chandra X-ray Observatory (x-ray image in blue), Hubble Space Telescope (optical image in red and yellow), and Spitzer Space Telescope (infrared image in purple).

 

At this point you should ask yourself: what is a nebula? A nebula is an interstellar cloud of dust, hydrogen, helium and other ionized gases. Originally, "nebula" was a general name for any extended astronomical object, including galaxies beyond the Milky Way. Nebulae are often star-forming regions, such as the Eagle Nebula. This nebula is depicted in one of NASA's most famous images, the "Pillars of Creation".

 

In these regions the formations of gas, dust, and other materials "clump" together to form larger masses, which attract further matter, and eventually will become massive enough to form stars. The remaining materials are then believed to form planets, and other planetary system objects.

 

Deep in the heart of the southern Milky Way lies a stellar nursery called the Carina Nebula. It is about 7500 light-years from Earth in the constellation of Carina. This cloud of glowing gas and dust is one of the closest incubators of very massive stars to Earth and includes several of the brightest and heaviest stars known. One of them, the mysterious and highly unstable star Eta Carinae, was the second brightest star in the entire night sky for several years in the 1840s and is likely to explode as a supernova in the near future, by astronomical standards. The Carina Nebula is a perfect laboratory for astronomers studying the violent births and early lives of stars. Using the VLT, hundreds of individual images have been combined to create this picture, which is the most detailed infrared mosaic of the nebula ever taken and one of the most dramatic images ever created by the VLT. It shows not just the brilliant massive stars, but hundreds of thousands of much fainter stars that were previously invisible.

 

The dazzling star Eta Carinae itself appears at the lower left of the picture. It is surrounded by clouds of gas that are glowing under the onslaught of fierce ultraviolet radiation. Across the image there are also many compact blobs of dark material that remain opaque even in the infrared. These are the dusty cocoons in which new stars are forming. Over the last few million years this region of the sky has formed large numbers of stars both individually and in clusters. The bright star cluster close to the centre of the picture is called Trumpler 14. And towards the left side of the image a small concentration of stars that appear yellow can be seen. This grouping was seen for the first time in this new data from the VLT: these stars cannot be seen in visible light at all. This is just one of many new objects revealed for the first time in this spectacular panorama.

bb54.jpg

 

Nebulae come in all sorts of shapes. Below is SH2-284, a star-forming nebula. The image is false color, where each hue represents a different part of the infrared spectrum. Blue and teal come mostly from stars, while red and yellow are dust. Green comes from a very specific kind of material called polycyclic aromatic hydrocarbons (PAHs) - long-chain carbon molecules which are essentially soot. PAHs are made in various ways, but are abundant where stars are being born, and that's what we're seeing here. There's a cluster of young stars in the center of this cloud, and they're so hot they're eating out the inside of the cloud, creating that cavity you can see. Like so many of these structures, the clock is ticking: many of those stars will explode, and when they do they'll tear the cloud apart - this rainbow cloud only has a few million years left before it's gone.

 

bb55.jpg

 

Researchers using NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) have captured an infrared image of the last exhalations of a dying sun-like star. It is named M2-9 (planetary nebula Minkowski 2-9). The SOFIA images provide our most complete picture of the outflowing material on its way to being recycled into the next generation of stars and planets. Objects such as M2-9 (see picture below) are called planetary nebulae due to a mistake made by early astronomers, who discovered these objects while sweeping the sky with small telescopes. Many of these nebulae have the color, shape and size of Uranus and Neptune, so they were dubbed planetary nebulae. The name persists despite the fact that these nebulae are now known to be distant clouds of material, far beyond our solar system, which are shed by stars about the size of our sun undergoing upheavals during their final life stages.

 

b72.jpg

 

Nebulae can be big. Really, really big. Take for example these two: the Orion Nebula and the Dragonfish Nebula. The Orion Nebula is one of the biggest, most active star-forming regions in the Milky Way galaxy. It has enough gas to form thousands of stars like the Sun, and it's one of the brightest and closest such gas clouds in the sky. The stars in the nebula are about 1 million years old. The nebula is about 14 light-years across. If you look on the Internet, you will find some amazing pictures of it. Enjoy the view.

 

bb74.jpg

 

And now the Dragonfish. The beast! It's something like 450 light years across! Compare that to the Orion Nebula's 14 light year width and you get the picture. It's also incredibly massive: it may have a total mass exceeding 100000 times the Sun's, and may contain millions of stars. Even from other galaxies, it must be one of the most obvious features in the Milky Way. Yet, ironically, it's very difficult to see at all from Earth. It's located over 30000 light years away, on the other side of the galaxy. There's a vast amount of interstellar material (like dust) between us and it, absorbing its light, so in optical light it's essentially invisible. But infrared light can pierce that fog, and the image below was taken using NASA's Spitzer Space Telescope, designed to look in the infrared.

 

bb73.jpg

Astronomers used a different infrared telescope to look at the individual stars in the nebula, and found that it has an incredible 400+ O-type stars, the most massive stars that can exist. These stars are young, hot, massive, and blast out ultraviolet light. That’s what’s making this huge gas cloud glow, and in fact the cloud is expanding under the influence of the terrible flood of radiation. Those stars will eventually explode in the next million years or so, one after another, blasting out radiation and material that will dwarf even what they’re putting out right now. That will eventually tear through the nebula, ramming it, causing parts of it to collapse and form new stars, and other parts to dissipate entirely.

 

On Christmas 2010, the light from a gamma-ray burst reached Earth and was detected by NASA's orbiting Swift satellite - GRB 101225A. It lasted a staggering half hour, when most GRBs are over within seconds, or a few minutes at most. Follow-up observations came pouring in from telescopes on and above the Earth, and the next weird thing was found: the fading glow from the burst seemed to be coming from good old-fashioned heat - some type of material heated to unbelievable temperatures. Usually, the afterglow is dominated by other forces, like rapidly moving super-intense magnetic fields that accelerate gigatons of subatomic particles to huge speeds, but in this case it looked like a regular old explosion. So what could have caused this burst? Normally, GRBs are the birth cries of black holes. When a giant star explodes, or two tiny but ultra-dense neutron stars merge, they can form a black hole and send vast amounts of gamma rays (super high-energy light) sleeting out into the Universe. In this case, though, something different happened, and two ideas of what was behind it are emerging, both involving neutron stars. According to the first one, a comet or other large chunk of material was orbiting a neutron star. It got too close, broke apart, and fell onto the surface. As each piece hit, it released far more energy than all the nukes on Earth combined - by a factor of millions - sending huge amounts of light into space. That explains both the flash of the GRB detected last year and the fact that the afterglow was in the form of heat; the vast energy of the repeated slamming impacts of the comet chunks would've heated the material (and the neutron star) to millions of degrees. The other idea is that the neutron star was orbiting a normal star, gradually stripping away and swallowing its companion's material until it eventually reached its core too. The mind-numbingly powerful gravity would've squeezed that stuff as it fell onto the neutron star, and the gravity became so intense that not even neutron-star stuff could resist: the neutron star itself collapsed into a black hole, releasing a flash of energy focused into two tightly collimated beams that lasted for a few seconds, equal to the Sun's total lifetime of energy release. This wave of energy slammed into the material previously ejected from the normal star, heating it and causing the long afterglow seen last year. The following video, made by NASA, reflects both scenarios:

 

 

 

 

This brings us to black holes - the ultimate mystery of the Universe. I already touched on the subject of black holes when talking about the Holographic Universe. The basics of black holes can be found there, but I will go through the necessary bits here too. A black hole is a concentration of mass great enough that the force of gravity prevents anything from escaping it except through quantum tunnelling behaviour (known as Hawking radiation). The gravitational field is so strong that the escape velocity near it exceeds the speed of light. This implies that nothing, not even light, can escape its gravity. This makes the object invisible to the rest of the universe, hence the word "black". Around a black hole there is a mathematically defined surface called an event horizon that marks the point of no return. Quantum mechanics predicts that black holes emit radiation like a black body with a finite temperature. This temperature is inversely proportional to the mass of the black hole, making it difficult to observe this radiation for black holes of stellar mass or greater.
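
Two numbers from that paragraph are easy to compute: the radius of the event horizon for a non-rotating hole, r = 2GM/c², and the Hawking temperature, T = ħc³/(8πGMk), which falls as the mass grows. A Python sketch for an illustrative 10-solar-mass black hole:

import math

G, C = 6.674e-11, 2.998e8         # SI units
HBAR, KB = 1.055e-34, 1.381e-23
M_SUN = 1.989e30

def schwarzschild_radius_m(mass_kg):
    return 2 * G * mass_kg / C ** 2

def hawking_temperature_k(mass_kg):
    # Inversely proportional to mass, as stated above.
    return HBAR * C ** 3 / (8 * math.pi * G * mass_kg * KB)

M = 10 * M_SUN
print(f"horizon radius ~ {schwarzschild_radius_m(M) / 1e3:.0f} km")
print(f"Hawking temperature ~ {hawking_temperature_k(M):.1e} K")

About 30 km across and a few billionths of a kelvin - which is exactly why this radiation has never been observed for stellar-mass black holes.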

 

bb60.jpeg

 

Near a black hole, just outside the event horizon, there's some incredible stuff going on. First off, the matter. Every black hole has a flat disc of matter orbiting it at an incredibly high speed. Due to forces like friction and gravitational tides, the matter gets ripped apart into individual molecules, atoms, and subatomic particles, and eventually spirals in towards the center. So we wind up with - somewhat close to the event horizon - a bunch of small, fast-moving, and charged particles.

 

In the early days of the universe, a mere 700 to 800 million years after the Big Bang, most things were small. The first stars and galaxies were just beginning to form and grow in isolated parts of the universe. According to astrophysical theory, black holes found during this era should also be small, in proportion with the galaxies in which they reside. Supermassive black holes are the largest black holes, with masses billions of times larger than that of the sun. Typical black holes have masses only up to 30 times larger than the sun's. Astrophysicists have determined that supermassive black holes can form when two galaxies collide and their two black holes merge into one. These galaxy collisions happened in the later years of the universe, but not in the early days: in the first few hundred million years after the Big Bang, galaxies were too few and too far apart to merge. Recent observations from the Sloan Digital Sky Survey (SDSS) have shown that this isn't the case - enormous supermassive black holes existed as early as 700 million years after the Big Bang. Computer simulations, completed using supercomputers at the National Institute for Computational Sciences and the Pittsburgh Supercomputing Center and viewed using GigaPan Time Machine technology, show that thin streams of cold gas flowed uncontrolled into the centers of the first black holes, causing them to grow faster than anything else in the universe. Btw, GigaPan Time Machine technology is really cool stuff; it allows the researchers to view their simulation as if it were a movie. You can easily pan across the simulated universe as it forms, and zoom in to events that look interesting, revealing greater detail than could be seen using a telescope. The picture below shows the projected gas density over the whole volume ("unwrapped" into 2D) in the large-scale (background) image. The two images on top show two zoom-ins, each by a further factor of 10, of the region where the most massive black hole - the first quasar - is formed. The black hole is at the center of the image and is being fed by cold gas streams.

 

bb66.jpg

 

As researchers zoomed in to the creation of the first supermassive black holes, they saw something unexpected. Normally, when cold gas flows toward a black hole it collides with other gas in the surrounding galaxy. This causes the cold gas to heat up and then cool back down before it enters the black hole. This process, called shock heating, would stop black holes in the early universe from growing fast enough to reach the masses we see. Instead, researchers saw in their simulation thin streams of cold dense gas flowing along the filaments that give structure to the universe and straight into the center of the black holes at breakneck speed, making for cold, fast food for the black holes. This uncontrolled consumption caused the black holes to grow exponentially faster than the galaxies in which they reside. These results could also shed light on how the first galaxies formed, giving more clues to how the universe came to be.

 

By pointing Chandra at a patch of sky for more than six weeks, astronomers obtained what is known as the Chandra Deep Field South (CDFS). When combined with very deep optical and infrared images from NASA's Hubble Space Telescope, the new Chandra data allowed astronomers to search for black holes in 200 distant galaxies, dating from when the universe was between about 800 million and 950 million years old. The observations found that between 30 and 100 percent of the distant galaxies contain growing supermassive black holes. Extrapolating these results from the small observed field to the full sky implies at least 30 million supermassive black holes in the early universe. This is a factor of 10000 larger than the estimated number of quasars in the early universe. Because these black holes are nearly all enshrouded in thick clouds of gas and dust, optical telescopes frequently cannot detect them. However, the high energies of X-ray light can penetrate these veils, allowing the black holes inside to be studied.
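
The extrapolation itself is plain area scaling: count the growing black holes in the field, divide by the field's area, and multiply by the area of the whole sky. A Python sketch (the field size and detection count here are rough assumptions for illustration, not the study's exact numbers):

FULL_SKY_DEG2 = 41253   # total sky area in square degrees
field_deg2 = 0.13       # assumed deep-field area, square degrees
detections = 100        # assumed growing black holes found (~50% of 200)

estimate = detections * FULL_SKY_DEG2 / field_deg2
print(f"~{estimate:.0e} early supermassive black holes across the sky")

With these inputs the estimate lands in the tens of millions, matching the quoted figure.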

 

bb67.jpg

 

Although evidence for parallel growth of black holes and galaxies has been established at closer distances, the new Chandra results show that this connection starts earlier than previously thought, perhaps right from the origin of both. Most astronomers think that in the present-day universe, black holes and galaxies are somehow symbiotic in how they grow. It has been suggested that early black holes would play an important role in clearing away the cosmic "fog" of neutral, or uncharged, hydrogen that pervaded the early universe when temperatures cooled down after the Big Bang. However, the Chandra study shows that blankets of dust and gas stop ultraviolet radiation generated by the black holes from traveling outwards to perform this "reionization". Therefore, stars, and not growing black holes, are likely to have cleared this fog at cosmic dawn.

 

Early times and black holes are indeed a puzzle. The farthest black hole we have found lies in a galaxy called J1120+0641, and the light we see from the galaxy's center was emitted only 740 million years after the Big Bang, when the universe was only 1/18th of its current age. Using the IRAM array of millimetre-wave telescopes in the French Alps, a team of European astronomers from Germany, the UK and France discovered a large reservoir of gas and dust, including significant quantities of carbon, in the galaxy that surrounds the most distant supermassive black hole known. This is quite unexpected, as the chemical element carbon is created via nuclear fusion of helium in the centres of massive stars and ejected into the galaxy when these stars end their lives in dramatic supernova explosions. The presence of so much carbon confirms that massive star formation must have occurred in the short period between the Big Bang and the time we are observing the galaxy. From the emission from the dust, researchers were able to show that the galaxy is still forming stars at a rate 100 times higher than in our Milky Way. The astronomers are excited about the fact that this source is also visible from the southern hemisphere, where the Atacama Large Millimeter/submillimeter Array (ALMA), which will be the world's most advanced sub-millimetre/millimetre telescope array, is currently under construction in Chile. Observations with ALMA will enable a detailed study of the structure of this galaxy, including the way the gas and dust move within it.

 

bb69.jpg

 

The image above shows the bright emission from carbon and dust in the galaxy surrounding the most distant supermassive black hole known. At a distance corresponding to 740 million years after the Big Bang, the carbon line, which is emitted by the galaxy at infrared wavelengths (unobservable from the ground), is redshifted by the expansion of the Universe to millimetre wavelengths, where it can be observed using facilities such as the IRAM Plateau de Bure Interferometer.
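
The "redshifted to millimetre wavelengths" part is just λ_observed = λ_rest × (1 + z). For J1120+0641 the redshift is about 7.1, and the carbon line in question sits near 158 µm in the rest frame (both values quoted from memory, so treat them as approximate):

lambda_rest_um = 158.0   # rest-frame ionized-carbon line, micrometers (approx.)
z = 7.1                  # approximate redshift of J1120+0641

lambda_obs_mm = lambda_rest_um * (1 + z) / 1000
print(f"observed wavelength ~ {lambda_obs_mm:.2f} mm")

About 1.3 mm - squarely in the band the IRAM interferometer observes.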

 

But OK, what about "normal" black holes? Although exotic by everyday standards, black holes are everywhere. The lowest-mass black holes are formed when very massive stars reach the end of their lives, ejecting most of their material into space in a supernova explosion and leaving behind a compact core that collapses into a black hole. There are thought to be millions of these low-mass black holes distributed throughout every galaxy. Despite their ubiquity, they can be hard to detect, as they do not emit light and so are normally seen through their action on the objects around them, for example by dragging in material that heats up in the process and emits X-rays. But despite this, the overwhelming majority of black holes have remained undetected. In recent years, researchers have made some progress in finding ordinary black holes in binary systems by looking for the X-ray emission produced when they suck in material from their companion stars. So far these objects have been relatively close by, either in our own Milky Way Galaxy or in nearby galaxies of the Local Group. Researchers used the orbiting Chandra X-ray observatory to make six 100000-second-long exposures of Centaurus A, detecting an object with 50000 times the X-ray brightness of our Sun. A month later, it had dimmed by more than a factor of 10, and later still by a factor of more than 100, becoming undetectable. This behaviour is characteristic of a low-mass black hole in a binary system during the final stages of an outburst and is typical of similar black holes in the Milky Way. It implies that the team made the first detection of a normal black hole this far away (12 million light years), opening up the opportunity to characterise the black hole population of other galaxies.

 

bb68.jpg

 

The yellow arrow in the picture above identifies the position of the black hole transient inside Centaurus A. The location of the object is coincident with gigantic dust lanes that obscure visible and X-ray light from large regions of Centaurus A. Other interesting X-ray features include the central active nucleus, a powerful jet and a large lobe that covers most of the lower-right of the image. There is also a lot of hot gas. In the image, red indicates low energy, green represents medium energy, and blue represents high energy light.

 

There are two ways to grow a supermassive black hole: with gas clouds and with stars. Sometimes there's gas and sometimes there is not - we know that from observations of other galaxies. But there are always stars. A new study led by a University of Utah astrophysicist proposes a new explanation for the growth of supermassive black holes in the centers of most galaxies: they repeatedly capture and swallow single stars from pairs of stars that wander too close. So, while gas provided the initial kick, it was stars that kept the feeding going. A binary pair of stars orbiting each other is essentially a single object much bigger than the size of the individual stars, so it is going to interact with the black hole more efficiently. The binary doesn't have to get nearly as close for one of the stars to get ripped away and captured. But proving the theory will require more powerful telescopes to find three key signs: large numbers of small stars captured near supermassive black holes, more observations of stars being "shredded" by gravity from black holes, and large numbers of "hypervelocity stars" that are flung from galaxies at more than 1.5 million km/h when their binary partners are captured. Astrophysicists have long debated how supermassive black holes grew during the 14 billion years since the universe began in a great expansion of matter and energy named the Big Bang. One side believes black holes grow larger mainly by sucking in vast amounts of gas; the other side says they grow primarily by capturing and sucking in stars. The new theory about binary stars - a pair of stars that orbit each other - arose from earlier research to explain hypervelocity stars, which have been observed leaving our Milky Way galaxy at speeds ranging from 1.5 million to 2.9 million km/h, compared with the roughly 560000 km/h speed of most stars.

 

bb71.jpg

The picture above is an artist's conception of a supermassive black hole (lower left) with its tremendous gravity capturing one star (bluish, center) from a pair of binary stars, while hurling the second star (yellowish, upper right) away at a hypervelocity of more than 1 million mph. The grayish blobs are other stars captured in a cluster near the black hole. They appear distorted because the black hole's gravity curves spacetime and thus bends the starlight. The hypervelocity stars we see come from binary stars that stray close to the galaxy's massive black hole. The hole peels off one binary partner, while the other partner - the hypervelocity star - gets flung out in a gravitational slingshot. The calculations show how the model's rate of binary capture and consumption can explain how the Milky Way's supermassive black hole has at least doubled, and perhaps quadrupled, in mass during the past 5 billion to 10 billion years. When the researchers considered the number of stars near the Milky Way's center, their speed, and the odds that they will encounter the supermassive black hole, they estimated that one binary star will be torn apart by the hole's gravity every 1000 years. During the last 10 billion years, that would mean the Milky Way's supermassive black hole ate 10 million solar masses - more than enough to account for the hole's actual size of 4 million solar masses. Confirmation of the theory must await more powerful orbiting and ground-based telescopes. Future observations should settle the fate of this model for good.
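
The mass bookkeeping in that estimate is worth checking, since it is just three numbers multiplied together (the capture interval and time span are from the paragraph above; the one-solar-mass-per-capture figure is my simplifying assumption):

capture_interval_years = 1000   # one binary torn apart every ~1000 years
elapsed_years = 10e9            # the last 10 billion years
msun_per_capture = 1.0          # assume ~1 solar mass swallowed per capture

captures = elapsed_years / capture_interval_years
print(f"{captures:.0e} captures -> ~{captures * msun_per_capture:.0e} solar masses")

Ten million solar masses - comfortably more than the hole's measured 4 million.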

 

bb61.png

A black hole, like all star-type objects, has a magnetic field. Unlike the Earth, which has a magnetic field of about 0.6 Gauss at the surface, or the Sun, which can reach a field strength of up to 4000 Gauss in a sunspot, a black hole can have magnetic fields in excess of 10¹² Gauss. Go figure! One of the basic laws of physics is that, if you hold your right hand in an "L" shape (fingers together, thumb out) and point your fingers in the direction of the magnetic field and your thumb in the direction the particle is moving, your palm "pushes" the particle perpendicular to both directions. This causes black holes to suck these particles up and down, perpendicular to the disk, and shoot them out at ultra-high speeds in two jets. Scientists study jets to learn more about the extreme environments around black holes. Much has been learned about the material feeding black holes, called accretion disks, and the jets themselves, through studies using X-rays, gamma rays and radio waves. But key measurements of the brightest part of the jets, located at their bases, have been difficult despite decades of work.
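
That hand rule is the Lorentz force, F = qv × B: the force is perpendicular to both the velocity and the field. A minimal numerical Python sketch:

def cross(a, b):
    # Cross product of two 3-vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

q = 1.602e-19         # charge of a proton, coulombs
v = (1e7, 0.0, 0.0)   # velocity along x, m/s
B = (0.0, 1.0, 0.0)   # magnetic field along y, tesla

F = tuple(q * component for component in cross(v, B))
print(F)   # the force points along +z, perpendicular to both v and B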

In 2011, theoretical physicist and black hole guru Kip Thorne unveiled what he considers a new way to visualize how black holes stretch and bend the fabric of space-time. The approach relies on imaginary lines of force called tendex and vortex lines - roughly the gravitational equivalents of the electromagnetic field lines that dictate the arrangement of iron filings around a magnet. Tendex lines radiate from all objects with mass; they describe how gravity compresses or extends space-time. Vortex lines surround rotating objects and depict how space-time becomes twisted, like water swirling down a drain.

bb62.jpg

 

You have most likely heard of Stephen Hawking. Hawking is important for many things; he triggered the holographic principle, for example, which came from his remarks about information and black holes. He also established what we today call Hawking radiation. Hawking radiation is black body radiation that is predicted to be emitted by black holes due to quantum effects near the event horizon. Hawking's work followed his visit to Moscow in 1973, where Soviet scientists Yakov Zeldovich and Alexei Starobinsky showed him that, according to the quantum mechanical uncertainty principle, rotating black holes should create and emit particles. Hawking radiation reduces the mass and the energy of the black hole and is therefore also known as black hole evaporation. Because of this, black holes that lose more mass than they gain through other means are expected to shrink and ultimately vanish. In June 2008, NASA launched the GLAST satellite, which searches for the terminal gamma-ray flashes expected from evaporating primordial black holes. In September 2010, a signal closely related to black hole Hawking radiation was claimed to have been observed in a laboratory experiment involving optical light pulses; however, the results remain unverified and debated. In the event that speculative large extra dimension theories are correct, CERN's Large Hadron Collider may be able to create micro black holes and observe their evaporation (micro black holes are predicted to be larger net emitters of radiation than larger black holes and should shrink and dissipate faster). In 2010, a team of Italian scientists fired a laser beam into a hunk of glass to create what they believe is an optical analogue of the Hawking radiation that many physicists expect is emitted by black holes. It remains under debate whether they are correct or not (click here to see why).
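
The "micro black holes dissipate faster" remark follows from the evaporation timescale, which grows with the cube of the mass: t ≈ 5120πG²M³/(ħc⁴). A Python sketch comparing a solar-mass hole with a one-gram one (idealized numbers, ignoring anything the hole might swallow along the way):

import math

G, C, HBAR = 6.674e-11, 2.998e8, 1.055e-34
M_SUN = 1.989e30

def evaporation_time_s(mass_kg):
    # Hawking evaporation lifetime: t = 5120 * pi * G^2 * M^3 / (hbar * c^4)
    return 5120 * math.pi * G ** 2 * mass_kg ** 3 / (HBAR * C ** 4)

for m in (M_SUN, 1e-3):   # one solar mass vs. one gram
    print(f"M = {m:.1e} kg -> t ~ {evaporation_time_s(m):.1e} s")

A solar-mass hole needs around 10⁶⁷ years; a one-gram hole is gone in about 10⁻²⁵ seconds.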

 

Speaking of Hawking and black holes, he is known to make bets on things he believes. In 1974 he made one close to this subject: back then he bet that Cygnus X-1 did not contain a black hole. Using several telescopes, both ground-based and in orbit, scientists have unravelled longstanding mysteries about the object called Cygnus X-1, a famous binary-star system discovered to be strongly emitting X-rays nearly a half-century ago. The system consists of a black hole and a companion star from which the black hole is drawing material. The scientists' efforts yielded the most accurate measurements ever of the black hole's mass and spin rate. Though Cygnus X-1 has been studied intensely since its discovery, previous attempts to measure its mass and spin suffered from the lack of a precise measurement of its distance from Earth. This has changed now. We now know that Cygnus X-1 is one of the most massive stellar black holes in the Milky Way - 15 times more massive than our Sun - and is spinning more than 800 times per second.

 

bb70.jpg

Picture above: On the left, an optical image from the Digitized Sky Survey shows Cygnus X-1, outlined in a red box. Cygnus X-1 is located near large active regions of star formation in the Milky Way, as seen in this image that spans some 700 light-years. An artist's illustration on the right depicts what astronomers think is happening within the Cygnus X-1 system. Cygnus X-1 is a so-called stellar-mass black hole, a class of black holes that comes from the collapse of a massive star. The black hole pulls material from a massive, blue companion star toward it. This material forms a disk (shown in red and orange) that rotates around the black hole before falling into it or being redirected away from the black hole in the form of powerful jets.

 

A black hole’s outer boundary, known as the event horizon, is a point of no return. Once trapped inside, nothing - not even light - can escape. At the center is a core, known as a singularity, that is infinitely small and dense, an affront to all known laws of physics. Since no energy, and hence no information, can ever leave that dark place, it seems quixotic to try peering inside. As with Las Vegas, what happens in a black hole stays in a black hole. But one man, Andrew Hamilton, wishes to challenge that. A black hole, Hamilton realized, could be thought of as a kind of Big Bang in reverse. Instead of exploding outward from an infinitesimally small point, spewing matter and energy and space to create the cosmos, a black hole pulls everything inward toward a single, dense point. Whether in a black hole or in the Big Bang, the ultimate point - the singularity - is where everything started and where it all might end.

 

bb63.jpg

 

Hamilton took the known attributes of black holes and plugged them into a basic computer graphics program. All it involved was applying Einstein’s relativity equations, which describe how light rays would bend as they approach a black hole. Hamilton’s first, simple movies were broad and cartoonish, but they served their purpose: showing how different kinds of black holes might look as you approached them from the outside and then ventured in. In one animation, the observer flew by a star system and plunged across a black hole’s event horizon, represented by a spherical red grid. Another movie offered a glimpse of an alternate universe, shown in pink, before the observer met his end at the singularity. In a third, the event horizon split in two as the observer entered the interior - a bizarre effect (later validated by Hamilton) that initially convinced some critics that these simulations must be flawed (below are two animated gifs, but animation might require browser refresh to work).

 

bb64.gifbb65.gif
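
The renderer behind these animations integrates the full geodesic equations, but the underlying physics starts from gravitational light bending. In the weak-field limit there is a classic closed-form answer - a ray passing mass M at impact parameter b is deflected by about 4GM/(c²b) - which this quick sketch checks against the famous solar value:

```python
import math

# Weak-field gravitational light bending: deflection ~ 4*G*M / (c^2 * b)
# radians for a ray passing mass M at impact parameter b. (Hamilton's
# renderer solves the full geodesics; this is only the leading-order term.)

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def deflection_angle(M, b):
    return 4 * G * M / (c**2 * b)

# Sanity check - light grazing the Sun: the classic ~1.75 arcsecond result
# confirmed during the 1919 eclipse expedition.
R_sun = 6.96e8
print(math.degrees(deflection_angle(M_sun, R_sun)) * 3600)  # ~1.75
```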

 

In 2001 the Denver Museum of Nature and Science was building a new planetarium with a state-of-the-art digital projection system, and they needed help developing eye-popping shows. Hamilton spent his time developing visualization software far more powerful than the off-the-shelf program he had been using. His final software package had more than 100000 lines of code, and it attracted attention. In 2002 he was invited to collaborate on a Nova documentary about black holes. That is when Hamilton had to face the painful truth that all his visualizations to date had been based on calculations done by others. In Einstein's geometric conception of gravity, a massive body like the Sun dents the fabric of space-time, much as a large person deforms the surface of a trampoline. Earth follows the curved shape of the warped space around the Sun, which is why it moves in a (nearly) circular orbit; this description has been experimentally verified to high precision. Ten linked equations (Einstein's field equations) describe precisely how space-time is curved for any given distribution of matter and energy, even for something as extreme as a black hole. Relativity is confusing enough for conventional objects; it is far stranger for a black hole because such an object does not merely dent space-time - it creates a discontinuity, a bottomless pit in the middle of an otherwise smooth fabric.
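
For the curious, those ten linked equations compress into a single tensor equation (written here in LaTeX notation):

R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}

The left-hand side measures how space-time is curved; the right-hand side tallies the matter and energy doing the curving. Both sides are symmetric 4x4 tensors, which is where the count of ten independent equations comes from.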

 

Hamilton tried to make the problem more manageable by looking at black holes from a different perspective. He proposed a new analogy to describe what happens when something, or someone, approaches a black hole’s event horizon, likening it to a waterfall crashing into an abyss. A fish can swim near the edge and safely slip away - unless it gets too close, in which case it will be dragged over the precipice no matter how hard it resists. Similarly, any object or even any kind of energy is swept across the event horizon by a “waterfall” of space that is constantly cascading into the black hole. If a flashlight sailed over the edge of that metaphorical waterfall, not only the flashlight but also its light beam would be pulled in. Hamilton describes a black hole as a place where space is falling faster than light (no object can move through space faster than light, but there is no restriction on how quickly space itself can move).
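
The waterfall picture has an exact mathematical counterpart: in so-called Gullstrand-Painlevé ("river") coordinates, space flows into a non-spinning black hole at speed √(2GM/r), hitting exactly the speed of light at the event horizon. A small sketch:

```python
import math

# Hamilton's "waterfall" in numbers: in Gullstrand-Painleve ("river")
# coordinates, space flows into a non-spinning black hole at
# v = sqrt(2*G*M/r), reaching the speed of light exactly at the horizon.

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

M   = 15 * M_sun
r_s = 2 * G * M / c**2              # event horizon radius

def infall_speed_over_c(r):
    return math.sqrt(2 * G * M / r) / c

for factor in (4.0, 2.0, 1.0, 0.5):
    print(factor, infall_speed_over_c(factor * r_s))
# 4.0 -> 0.50, 2.0 -> 0.71, 1.0 -> 1.00, 0.5 -> 1.41:
# inside the horizon, space falls inward faster than light itself.
```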

 


The more Hamilton worked with his computer models, the more he realized just how strange the interior of a black hole is. A charged black hole actually has a secondary boundary - an inner horizon - inside the main event horizon that defines the hole's outer limit. Physics legend Roger Penrose had been the first to show that something bizarre must happen at that inner horizon, because all the matter and energy falling into a black hole piles up there. The inner horizon may be the most energetic and violently unstable place in the universe. Building on the groundbreaking work of physicists Eric Poisson and Werner Israel, Hamilton describes the conditions at the inner horizon as an "inflationary instability". It is inflationary because everything - mass, energy, pressure - keeps growing exponentially. And it is unstable because, according to Hamilton's calculations, the surface - the inner horizon - cannot sustain itself and must ultimately collapse.

 

Continuing his quest for realism, Hamilton considered the case of a black hole that rotates (as every known object in the universe does, and perhaps the universe itself) and plugged it into his computer models. When a particle falls into a black hole and approaches the inner horizon, it is diverted into one of two narrowly focused, laserlike beams. If the particle enters in the direction opposite that of the black hole's rotation, it joins an "ingoing beam" that has positive energy and moves forward in time. But here is the real brainteaser: if the particle enters in the same direction as the black hole's spin, it joins an "outgoing beam" that has negative energy and moves backward in time.

 

Trying to make physical sense of these abstract insights, Hamilton discovered that the inner horizon acts as an astonishingly powerful particle accelerator, shooting the ingoing and outgoing beams past each other at nearly the speed of light. A person moving with the outgoing beam (if such a thing were possible) would think he was moving away from the black hole when he was, from an outsider's perspective, actually being pulled toward its center - the same place that someone traveling with the ingoing beam would inevitably go. Even though both parties are moving toward the center, the extreme curvature of space-time would cause them to feel as though they were falling in different directions. This particle accelerator has another peculiar attribute: once started, it never stops. The faster the streams move, the more energy there is; the more energy there is, the more gravity there is, and the faster the particles accelerate.
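
That last feedback loop - more energy means more gravity means faster acceleration - is the signature of exponential runaway. A toy illustration, in no way Hamilton's actual model, where the growth rate of the beams' energy is simply proportional to the energy already present:

```python
# Toy runaway feedback: growth rate proportional to current energy gives
# exponential "inflation". Arbitrary units; illustration only.

energy, rate, dt = 1.0, 1.0, 0.01
for _ in range(500):
    energy += rate * energy * dt    # more energy -> more gravity -> faster growth
print(energy)                       # ~e^5 after 5 time units, and still climbing
```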

 

It is then not such a far-fetched idea that a black hole's inner accelerator could spawn entire new universes. According to some cosmological models, our universe began as a blip of extreme energy within some other, preexisting universe (a brane world collision), which then bubbled off to create a whole reality of its own. Something like this could occur inside a black hole, with a baby universe forming as a tiny bubble at the inner horizon. For a moment this infant would be connected to its "mother" by a kind of umbilical cord, a minuscule wormhole. Then the baby universe would break off to pursue a destiny completely removed from ours. That means if there is anywhere in our universe where baby universes are being created, it is likely happening inside black holes - and this inflationary zone near the inner horizon is where the process may occur. Needless to say, not everyone agrees with this, but when it comes to probing the inside of a black hole, theory is the only available tool - and it is reliable only up to a certain point. If interested in visual work by Andrew Hamilton, click here.

 

A quasi-stellar radio source (quasar) is a very energetic and distant active galactic nucleus. Quasars are extremely luminous and were first identified as high-redshift sources of electromagnetic energy, including radio waves and visible light, that were point-like, similar to stars, rather than extended sources similar to galaxies. While the nature of these objects was controversial until as recently as the early 1980s, there is now a scientific consensus that a quasar is a compact region in the center of a massive galaxy surrounding its central supermassive black hole. Its size is 10-10000 times the Schwarzschild radius of the black hole. Quasars are among the brightest objects in the universe, far outshining the total starlight of their host galaxies; the output of light is equivalent to one trillion suns. The host galaxies themselves can be hard or even impossible to see when the central quasar outshines them, which makes it difficult to estimate a host galaxy's mass from the collective brightness of its stars. Gravitational lensing candidates are therefore invaluable: the amount of distortion in the lens can be used to estimate the host galaxy's mass. Interestingly, light from quasars might also have something in common with ordinary light bulbs - more on that below.
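
To put "10-10000 Schwarzschild radii" on a human scale, assume (hypothetically) a billion-solar-mass central black hole:

```python
# Scale of a quasar's emitting region, for an assumed 10^9 solar-mass
# central black hole. Schwarzschild radius r_s = 2*G*M/c^2.

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
AU, light_year = 1.496e11, 9.461e15

M   = 1e9 * M_sun
r_s = 2 * G * M / c**2
print(r_s / AU)                     # ~20 AU: the horizon alone spans a solar system
print(10 * r_s / AU)                # ~200 AU: inner edge of the quoted range
print(10000 * r_s / light_year)     # ~3 light-years: still tiny for a trillion suns
```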

 

bb52.jpg

 

Astronomers have determined that quasars are incredibly variable, with some quasars quadrupling in brightness in the span of just a few hours. Although rarely that dramatic, variability in light output is seen in nearly all quasars, with average quasars changing in brightness by 10 to 15 percent over the course of one year. Recently, researchers at Illinois and NCSA found that this variability is related both to the mass of the black hole at the center of the quasar and to the efficiency of the quasar at converting gravitational potential energy into light. Using data obtained by the Sloan Digital Sky Survey, the researchers monitored the brightness and estimated the central black hole mass of more than 2500 quasars, observed over a period of four years. They found that, for a given brightness, quasars with large black hole masses are more variable than those with low black hole masses. Quasars with more massive black holes have more gravitational energy that can potentially be extracted, energy we would see as optical light. If two quasars have the same brightness, the one with the larger black hole mass is actually less efficient at converting this gravitational energy into light, and these less efficient quasars have more variable light output. It could be a little like flickering light bulbs - the bulbs that are the most variable are those that are currently the least efficient.
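
The efficiency idea fits in one formula: luminosity L = η·Ṁ·c², where Ṁ is the accretion rate and η the conversion efficiency. A sketch with illustrative numbers (η ≈ 0.1 is a commonly quoted ballpark, not a measured value for any particular quasar):

```python
# Accretion-powered luminosity: L = eta * Mdot * c^2, where Mdot is the
# accretion rate and eta the fraction of infalling rest-mass energy that
# comes out as light. Illustrative numbers only.

c, M_sun, L_sun = 2.998e8, 1.989e30, 3.846e26
year = 3.156e7                     # seconds

L    = 1e12 * L_sun                # "one trillion suns", as quoted earlier
eta  = 0.1                         # assumed conversion efficiency
Mdot = L / (eta * c**2)            # required accretion rate, kg/s

print(Mdot * year / M_sun)         # ~0.7 solar masses swallowed per year
# Halve eta and the same brightness needs twice the fuel - one way a less
# efficient quasar can end up with a more variable light output.
```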

 

Quasars are believed to be powered by accretion of material onto supermassive black holes in the nuclei of distant galaxies, making them luminous versions of the general class of objects known as active galaxies. As stars and interstellar gas fall toward the black holes, they swirl around them and are then swallowed up - but not before giving off bright light at nearly all wavelengths of the electromagnetic spectrum. No other currently known mechanism appears able to explain the vast energy output and rapid variability. In seeking to understand how, and when, galaxies such as our own formed, astronomers often turn to quasars. Because quasars are extremely bright, they can be seen at much larger distances from Earth than other galaxies, and so allow us to peer into the early history of the Universe (we link them to early galaxies - indeed, some regard them as early galaxies themselves). In 2010, the discovery of two quasars in the distant Universe that apparently have no hot dust in their environments provided evidence that such systems represent the first generation of their family. Quasars are powered by supermassive black holes; one could say that what a pulsar is to a neutron star, a quasar is to a black hole. Of course, related to quasars you might have heard of blazars too.

 

bb51.jpg

Blazars and quasars are both subclasses of active galactic nuclei (AGN). They are intrinsically the same object - a supermassive black hole with a surrounding accretion disk, producing a jet - seen at different orientation angles with respect to the jet's axis: a blazar is essentially a quasar whose jet happens to point almost directly at us.

 

What is the most distant quasar we have ever found? The current record holder, named ULAS J1120+0641, is around 100 million years younger than the previously known most distant quasar. It lies at a redshift of 7.1, which corresponds to looking back in time to a Universe that was only 770 million years old - only five per cent of its current age. Prior to this discovery, the most distant quasar known had a redshift of 6.4, the equivalent of a Universe that was 870 million years old.
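
The redshift-to-age translation is routine with a cosmology library. Here is a sketch using astropy's built-in Planck13 parameter set; the exact ages shift slightly with the chosen cosmological parameters, which is why the figures below do not match the quoted ones to the last digit:

```python
# Convert quasar redshifts into the age of the Universe at emission.
# Requires the astropy package; ages depend on the parameter set chosen.
from astropy.cosmology import Planck13 as cosmo

for z in (6.4, 7.1):               # previous record holder vs ULAS J1120+0641
    print(z, cosmo.age(z))         # roughly 0.86 Gyr and 0.75 Gyr
```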

 

The observations show that the mass of the black hole at the centre of the new quasar was about two billion times that of the Sun. This very high mass is hard to explain. Current theories for the growth of supermassive black holes predict a slow build-up in mass as the compact object pulls in matter from its surroundings. According to these models, the mass of the quasar's black hole should not be higher than one quarter of the value now determined for ULAS J1120+0641.
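
There is a simple way to see the problem. Even swallowing matter at the maximum (Eddington) rate, a black hole's mass can only grow exponentially, with an e-folding ("Salpeter") time of roughly 50 million years for a standard efficiency η ≈ 0.1. A sketch, with an assumed 100-solar-mass stellar seed:

```python
import math

# Eddington-limited growth: M(t) = M_seed * exp(t / t_salpeter), where
# t_salpeter = (eta / (1 - eta)) * sigma_T * c / (4 * pi * G * m_p).

G       = 6.674e-11
c       = 2.998e8
m_p     = 1.673e-27     # proton mass, kg
sigma_T = 6.652e-29     # Thomson cross-section, m^2
Myr     = 3.156e13      # seconds

eta   = 0.1             # assumed radiative efficiency
t_sal = (eta / (1 - eta)) * sigma_T * c / (4 * math.pi * G * m_p)
print(t_sal / Myr)                            # ~50 Myr per e-folding

seed, final = 100.0, 2e9                      # solar masses: assumed seed vs ULAS J1120+0641
print(t_sal * math.log(final / seed) / Myr)   # ~840 Myr - longer than the
                                              # Universe was old at z = 7.1
```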

 

Quasars also provide some clues as to the end of the Big Bang's reionization. The oldest quasars (redshift ≥ 6) display a Gunn-Peterson trough and have absorption regions in front of them, indicating that the intergalactic medium at that time was neutral gas. More recent quasars show no such absorption region; rather, their spectra contain a spiky area known as the Lyman-alpha forest. This indicates that the intergalactic medium has undergone reionization into plasma, and that neutral gas exists only in small clouds. Quasars also show evidence of elements heavier than helium, indicating that galaxies underwent a massive phase of star formation, creating Population III stars, between the time of the Big Bang and the first observed quasars. These results sit uneasily with the theory that black holes and galaxies become more massive only gradually, through gravitational mergers, as the universe evolves.
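
Where does one look for the Gunn-Peterson trough? Neutral hydrogen absorbs at the Lyman-alpha line, 121.6 nm in the rest frame, and cosmic redshift stretches that to (1 + z) × 121.6 nm in the observed spectrum. A quick sketch:

```python
# Observed position of Lyman-alpha absorption for a quasar at redshift z:
# lambda_observed = (1 + z) * 121.6 nm.

lyman_alpha_nm = 121.6

for z in (6.4, 7.1):
    print(z, (1 + z) * lyman_alpha_nm)  # ~900 nm and ~985 nm: near-infrared
```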

 

While often ignored, there is also the question of the magnetic field's role in the early universe. Why is the gas found between galaxies, or between the stars of the same galaxy, magnetized? Recently astrophysicists have put forward the first potential explanation for this phenomenon: an initially weak magnetic field could have been amplified by turbulent motions, like those that take place within the Earth and the Sun, and which must have existed in the primordial universe.

 

bb35.jpg

According to simulations, this turbulence produced an exponential growth of the magnetic field. Calculations show that this phenomenon is possible even under extreme physical conditions, such as those encountered shortly after the Big Bang, when the first stars formed.

 

3D digital simulations reveal how magnetic field lines can be drawn out, twisted and folded by turbulent "flows".

 

Just as electricity generates a magnetic field through the movement of charged particles, the charges themselves are subjected to a force as they move through a magnetic field.

 

According to the astrophysicists, the interaction between a magnetic field and turbulent energy - a kind of kinetic energy generated by turbulence - can amplify an initially weak field, converting it into a strong field.
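
A toy model - nothing like the real 3D simulations - captures the shape of that amplification: turbulence stretches and folds field lines, so a weak seed field grows exponentially until it becomes strong enough to push back on the flow and saturates. A sketch with arbitrary units:

```python
# Toy small-scale dynamo: exponential growth of a weak seed field B with
# saturation once B resists the turbulent flow (logistic growth). All
# units are arbitrary; this only illustrates the shape of the process.

gamma = 1.0           # growth rate set by the turbulence
B_sat = 1.0           # field strength at which back-reaction halts growth
B, dt = 1e-12, 0.01   # an extremely weak primordial seed field

snapshots = {}
for step in range(1, 5001):
    B += gamma * B * (1 - B / B_sat) * dt
    if step in (1000, 2500, 5000):
        snapshots[step * dt] = B

print(snapshots)
# {10.0: ~2e-8, 25.0: ~0.07, 50.0: ~1.0}: pure exponential growth at first
# (about 27 e-foldings), then saturation at the strong-field value.
```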

 

The researchers hope that their work will shed light on the properties of the very first stars and galaxies to form in the universe.

 

 

 

Credits: US National Radio Astronomy Observatory, NASA, Wikipedia, CNRS, ESO, Max-Planck-Gesellschaft, Phil Plait, Harvard-Smithsonian Center for Astrophysics, National Science Foundation, UCSB, Nature, Louisiana State University, Space Telescope Science Institute, University of Pittsburgh, University of Nottingham, University of Arizona, WISE, Max Planck Institute for Physics, Andrew Hamilton, Discover magazine, Carnegie Mellon University, Royal Astronomical Society (RAS), University of Utah, ***** Vargas, Maritxu Poyal, Mark Trodden
