Friday, 8 April 2011

From Eternity to Here

From Eternity to Here: The Quest for the Ultimate Theory of Time by Sean Carroll

Daily Telegraph, 9 April 2011


“What is time?” It’s the sort of question asked by philosophers, physicists and, sooner or later, children. While reading From Eternity to Here I was relieved that my eight-year-old was actually asking “What is the time?” That was a question I could answer. As for the other, most of us would side with St Augustine: “If no one asks me, I know. If I wish to explain it to one that asketh, I know not.”

St Augustine, having tackled original sin, contemplated the nature of time and concluded that “neither future nor past exists, and it is inexact language to speak of three times – past, present and future”. There, in a nutshell, is the problem that Sean Carroll, a theoretical physicist at the California Institute of Technology, explores in this fascinating book. Why is there a past, present and future? In other words, why is there an “arrow of time”?

Before Einstein, it had long been assumed that time and space were fixed and distinct, the stage on which the never-ending drama of the cosmos was played out. Einstein discovered space and time were not absolute and unchanging, that spatial distances and time intervals between events depended on the relative motion of observers. He found that space and time were woven together to form the fabric of the universe: space-time.

Yet there is one crucial difference between space and time. While it is possible to move in any direction in space, the ticks of a clock forever march time forward. This inexorable flight of time’s arrow from past to present to future is bound up with the second law of thermodynamics. Put simply, the amount of disorder, what physicists call entropy, increases with the passage of time.

Breaking eggs to make an omelette, stirring milk into coffee or spilling wine all exhibit, says Carroll, “the fundamental irreversibility that is the hallmark of the arrow of time”. It is the increase in entropy, in the disorderliness of the world, which makes these everyday events irreversible and separates the past from the future. Eggs can’t spontaneously unscramble or spilt wine jump back into the bottle because that would lead to a decrease in entropy. But why should entropy always increase?
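The statistical reasoning behind this can be put in one line. Boltzmann’s formula, a standard result rather than anything quoted from Carroll here, links entropy to the number of microscopic arrangements compatible with what we see:

\[ S = k_B \ln W \]

Here S is the entropy, k_B is Boltzmann’s constant and W counts those arrangements. A scrambled egg or a milky coffee corresponds to vastly more arrangements than the tidy starting state, so the disordered outcome is overwhelmingly the more probable one; that, in statistical terms, is why entropy goes up.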

“Understanding the arrow of time is a matter of understanding the origin of the universe,” Carroll argues. For him the reason we can’t unscramble an egg is due to the low entropy conditions in the early universe some 13 billion years ago. Attempting to explain how such a low entropy state was possible has led Carroll to become one of an increasing number of physicists who in recent years have begun to question whether the Big Bang was really the beginning of the universe. For him it is “simply a plausible hypothesis, not a result established beyond reasonable doubt” and it is conceivable that space and time extend beyond the moment that we identify as “the Big Bang”.

Traditionally, questions about what was there “before the Big Bang” have been dismissed as meaningless: since space and time were deemed to have been created at the Big Bang, there simply was no “before”. Instead of the universe, theorists now talk of the “multiverse” and “baby universes” that Carroll believes provide “a natural mechanism for creating more and more entropy in the universe”.

From Eternity to Here is not for the faint-hearted, but it’s a rewarding read because there are no answers yet to some of science’s toughest questions. “There are ideas, and some ideas seem more promising than others, but all of them are somewhat vague, and we certainly haven’t yet put the final pieces together,” admits Carroll as he guides the reader through some of the most exotic parts of the landscape of modern theoretical physics and cosmology: from evaporating black holes to wormhole construction, from the many worlds interpretation to cosmic inflation.

But the question remains: “what is time?” The response of the American physicist John Wheeler is worth remembering: “Time is nature’s way of keeping everything from happening at once.”

Thursday, 24 March 2011

The Book of Universes

The Book of Universes by John D Barrow

Independent, 25 March 2011


"Einstein explained his theory to me every day and on my arrival I was fully convinced that he understood it," reported Chaim Weizmann. He would become the first president of Israel, but in 1921 was accompanying Einstein on a transatlantic voyage to New York. The theory in question was general relativity, in which gravity is due to the warping of space caused by the presence of mass. The Earth moves around the Sun not because some mysterious invisible force pulls it, but because of the warping of space due to the Sun's enormous mass.

"The theory is beautiful beyond comparison," Einstein wrote. When, in November 1919, British astronomers announced that they had discovered that gravity bends light – as predicted by general relativity – it made headlines around the world. Yet buried within his greatest achievement was what Einstein called "my greatest blunder".

He knew that his equations could be solved in a number of different ways, with each solution representing a model of a possible universe. Like everyone else at the time, Einstein believed that the actual universe was eternal and unchanging. So he introduced a term (his "greatest blunder") into the equations that ensured exactly that. It was left to others, a Russian mathematician and then a Belgian Jesuit priest, to find and take seriously the solutions that pointed to an expanding universe. Soon this non-static model attracted some experimental support.

In the 1920s, the American astronomer Edwin Hubble discovered two remarkable facts. First, what we had long assumed to be the universe was actually our host galaxy and there were many other such "island universes". Second, he found that light from these distant galaxies was stretched towards the red end of the visible spectrum. This so-called redshift is evidence that these galaxies are moving away from our own Milky Way and that the universe is expanding.
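Hubble’s finding is usually summed up in two textbook relations, given here as a gloss rather than anything spelled out in Barrow’s book:

\[ z = \frac{\lambda_{\text{observed}} - \lambda_{\text{emitted}}}{\lambda_{\text{emitted}}}, \qquad v = H_0 d \]

The redshift z measures how much the light’s wavelength has been stretched, v is the speed at which a galaxy recedes, d is its distance and H_0 is the Hubble constant. The farther away a galaxy lies, the faster it retreats, which is precisely what a uniformly expanding universe implies.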

Eventually, this led theorists to a universe that exploded into being in a Big Bang some 13 billion years ago from a single point, called a singularity, which was infinitely hot and dense. Add a surge of accelerated expansion only a trillion trillion trillion trillionth of a second after the Big Bang that lasted for only a trillion trillion trillionth of a second, plus the discovery that 96 per cent of the universe is made up of dark matter and dark energy, and we arrive at the most popular model of our universe.

In the 20th century, cosmology became a bona fide scientific discipline, but there remains plenty of room for some metaphysical speculation. What exactly do we mean by "universe"? Is the universe everything that has existed, does exist and will ever exist? asks Cambridge cosmologist John Barrow. What about including all that cannot exist? After all, as he points out, some medieval philosophers "were drawn to this sort of completeness, adding everything that has existed, does exist and will not exist to the catalogue of what was, is and will be".

Barrow and his colleagues are not only interested in the structure and history of our universe. There are other universes that live inside black holes, or are chaotically unpredictable or allow time travel into the past. However, the most mind-numbing concept of all only emerged in the 1990s: the never-ending "multiverse" – the universe of all possible universes. There can be few better guides to the bewildering array of potential universes, and none so readable or entertaining.

Wednesday, 23 March 2011

The Meeting of Minds

The Meeting of Minds
Nature.com, 23 March 2011

I first saw the photograph of those gathered at the fifth Solvay conference, which was held in Brussels from 24 to 29 October 1927, in a biography of Albert Einstein. This was in 1979, when I was just 16. I wondered what brought these people together, and soon learned that the picture included most of the key players involved in the discovery of the quantum, and the subsequent development of quantum physics. With 17 of the 29 invited eventually earning a Nobel Prize, the conference was one of the most spectacular meetings of minds ever held.


When I was 18, I was given a print of the above photograph as a present. Many years later I began to think about it as a possible starting point for a book about the quantum. In the photograph there are nine seated in the front row. Eight men, and one woman; six have Nobel Prizes in either physics or chemistry. The woman has two, one for physics, awarded in 1903, and another for chemistry, awarded in 1911. It could only be Marie Curie. In the centre, the place of honour, sits Albert Einstein. Looking straight ahead, gripping the chair with his right hand, he seems ill at ease. Is it the winged collar and tie that are causing him discomfort, or is it what he has heard during the preceding week? At the end of the second row, on the right, is Niels Bohr, looking relaxed with a half-whimsical smile. It had been a good conference for him. Nevertheless, Bohr would be returning to Denmark disappointed that he had failed to convince Einstein to adopt his Copenhagen interpretation of what quantum mechanics revealed about the nature of reality.

Instead of yielding, Einstein had spent the week attempting to show that quantum mechanics was inconsistent, that Bohr's 'Copenhagen interpretation' was flawed. Einstein said years later that:

This theory reminds me a little of the system of delusions of an exceedingly intelligent paranoic, concocted of incoherent elements of thoughts.

It was Max Planck, sitting on Marie Curie's right, holding his hat and cigar, who discovered the quantum. In 1900 he was forced to accept that the energy of light, and all other forms of electromagnetic radiation, could only be emitted or absorbed by matter in bits, bundled up in various sizes. 'Quantum' was the name Planck gave to an individual packet of energy, with 'quanta' being the plural. The quantum of energy was a radical break with the long-established idea that energy was emitted or absorbed continuously, like water flowing from a tap. In the everyday world of the macroscopic, where the physics of Newton ruled supreme, water could drip from a tap, but energy was not exchanged in droplets of varying size. However, the atomic and subatomic level of reality was the domain of the quantum.

Bohr discovered that the energy of an electron inside an atom was 'quantised'; it could possess only certain amounts of energy and not others. The same was true of other physical properties, as the microscopic realm was found to be lumpy and discontinuous. It was not some shrunken version of the large-scale world that we humans inhabit, where physical properties vary smoothly and continuously, where going from A to C means passing through B. Quantum physics, however, revealed that an electron in an atom can be in one place and then, as if by magic, reappear in another without ever being anywhere in between, by emitting or absorbing a quantum of energy.

By the early 1920s, it had long been apparent that the advance of quantum physics on an ad hoc, piecemeal basis had left it without solid foundations or a logical structure. Out of this state of confusion and crisis emerged a bold new theory, known as quantum mechanics, with Werner Heisenberg and Erwin Schrödinger, third and sixth from the right in the back row, leading the way. In 1927 Heisenberg made a discovery so at odds with common sense that he initially struggled to grasp its significance. The uncertainty principle said that if you want to know the exact velocity of a particle, then you cannot know its exact location, and vice versa.
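In its now-standard mathematical form, which the passage above only paraphrases, the uncertainty principle reads:

\[ \Delta x \, \Delta p \ge \frac{\hbar}{2} \]

where Δx is the uncertainty in a particle's position, Δp the uncertainty in its momentum and ħ the reduced Planck constant. Squeeze one of the two uncertainties towards zero and the other is forced to grow without limit; no experiment can evade the trade-off.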

Bohr believed he knew how to interpret the equations of quantum mechanics: what the theory was saying about the nature of reality. Questions about cause and effect, or whether the moon exists when no one is looking at it, had been the preserve of philosophers since the time of Plato and Aristotle. However, after the emergence of quantum mechanics they were being discussed by the twentieth century's greatest physicists.

The debate that began between Einstein and Bohr at the Solvay conference in 1927 raised issues that continue to preoccupy many physicists and philosophers to this day: what is the nature of reality, and what kind of description of reality should be regarded as meaningful? 'No more profound intellectual debate has ever been conducted', claimed the scientist and novelist CP Snow. 'It is a pity that the debate, because of its nature, can't be common currency.'

When Einstein and Bohr first met in Berlin in 1920, each found an intellectual sparring partner who would, without bitterness or rancour, push and prod the other into refining and sharpening his thinking about the quantum. 'It was a heroic time,' recalled Robert Oppenheimer, who was a student in the 1920s. 'It was a period of patient work in the laboratory, of crucial experiments and daring action, of many false starts and many untenable conjectures. It was a time of earnest correspondence and hurried conferences, of debate, criticism and brilliant mathematical improvisation. For those who participated it was a time of creation.'

Planck, Einstein, Bohr, Heisenberg, Schrödinger, Born, Pauli, de Broglie, Dirac – the leading lights of the quantum revolution – are all there in that picture.

Sunday, 20 March 2011

Geek Nation

Geek Nation: How Indian science is taking over the world by Angela Saini

Financial Times, 19-20 March 2011

‘It shall be the duty of every citizen of India to develop the scientific temper, humanism and the spirit of inquiry and reform.’ The inclusion of this statement in the Indian constitution, which came into effect on January 26 1950, was insisted upon by Jawaharlal Nehru, India’s first prime minister.

Nehru’s ‘scientific temper’ is a wonderfully concise phrase, which describes his vision of a nation in which people could think independently, employ logic and understand the scientific method. In a land of religion, Nehru put his faith in science and technology. He believed that it was ‘science alone that can solve the problems of hunger and poverty, insanitation and illiteracy, of superstition and deadening custom and tradition’ and that the ‘future belongs to science and to those who make friends with science’. Nehru wanted a nation of geeks.

‘Wherever in the world we live, Indians and people of Indian origin are famous for being swots, nerds, dweebs, boffins, and dorks,’ writes Angela Saini in Geek Nation. A British science journalist of Indian parentage, Saini spent six months in India exploring Nehru’s geek nation almost 50 years after his death.

With a population approaching 1.2 billion, India has the largest pool of scientists and engineers in the world. While the literacy rate hovers around a dismal 60 per cent, some 400 universities produce two million graduates every year, including a staggering 600,000 engineers, the most sought-after of whom come from the 16 Indian Institutes of Technology (IITs). Yet, instead of discovering hothouses of intellectual curiosity and innovation, Saini found drones, not geeks. The relentless pressure on India’s students is ‘disabling imaginations’ and driving hundreds to suicide.

From the vast Soviet-style Bhabha Atomic Research Centre to the Academy of Sanskrit Research, ‘the geeky and the bizarre’ sit side-by-side; wacky ideas are more easily tolerated than in the west. Indians, Saini observes, have ‘a unique freedom to explore the edges of what’s believed to be possible’.

Indian science is far from taking over the world: it currently contributes less than 3 per cent of global research output, lagging far behind the US and UK. Yet an increasing number of Indian researchers, having established reputations abroad, are returning home to lead a younger generation.

Saini’s vivid portrait of hi-tech India reveals a country in a hurry. No one knows how long it will take, but India’s present economic expansion is a reminder that more than 1,000 years ago it had a scientific culture as advanced as any in the world. ‘The Empires of the future,’ Winston Churchill once said, ‘are going to be the empires of the mind.’

Wednesday, 2 March 2011

The man who went nuclear


The Man Who Went Nuclear: How Ernest Rutherford Ushered in the Atomic Age


Independent, 3 March 2011

Did the nuclear age begin in 1942, when Chicago Pile-1, a reactor built in a squash court, went "critical" by achieving a self-sustaining chain reaction? Or was it on 16 July 1945 in the Jemez mountains in New Mexico, when "The Gadget", the first atomic bomb, was successfully tested and Robert Oppenheimer quoted the Bhagavad Gita? Maybe it was June 1954, when the Russian Obninsk nuclear station first generated electricity for the grid.

In reality, it was during a meeting of the Manchester Literary and Philosophical Society that the nuclear age was announced, on Tuesday, 7 March 1911, by Professor Ernest Rutherford, the 39-year-old head of physics at Manchester University. Rutherford, the descendant of Scottish emigrants, was born in 1871 in Spring Grove, New Zealand, and it was from this scattered rural community on the north coast of the South Island that his aptitude for science and maths led in 1895 to a coveted place at Cambridge. There, under the direction of JJ Thomson, Rutherford established a reputation as a fine experimentalist with a study of X-rays.

Though surrounded at Cambridge by all the excitement generated by Thomson's discovery of the electron in 1897, Rutherford opted to investigate radioactivity and soon found that there were two distinct types of radiation emitted from uranium, which he called alpha and beta, before a third was discovered, called gamma rays.

Aged just 27, in 1898, he was appointed professor of physics at McGill University in Montreal, Canada. Among his successes over the next nine years the most important was the discovery, with his collaborator Frederick Soddy, that radioactivity was the transformation of one element into another due to the emission of an alpha or beta particle. Rutherford regarded "all science as either physics or stamp collecting" but saw the funny side when he received the 1908 Nobel prize for chemistry for this seminal work. By then he was in Manchester.

"Youthful, energetic, boisterous, he suggested anything but the scientist," was how Chaim Weizmann, then a chemist but later the first president of Israel, remembered Rutherford in Manchester. "He talked readily and vigorously on any subject under the sun, often without knowing anything about it. Going down to the refectory for lunch, I would hear the loud, friendly voice rolling up the corridor."

At the time Rutherford was busy using the alpha particle to probe and unlock the secrets of the atom. But what exactly is an alpha particle? It was a question that Rutherford and his German colleague Hans Geiger answered. It was a helium ion; that is, a helium atom that had been stripped of its two electrons. Rutherford had noticed, while still in Montreal, that some alpha particles passing through thin sheets of metal were slightly deflected, causing fuzziness on a photographic plate. It was something he asked Geiger to investigate.

As instructed by Rutherford, he fired beams of alpha particles at some gold foil and, from the tiny flashes of light produced when they struck a zinc sulphide screen, discovered that a few "were deflected through quite an appreciable angle". Soon afterwards Rutherford assigned a research project to a promising undergraduate called Ernest Marsden: "Why not let him see if any alpha particles can be scattered through a large angle?" Marsden found some alpha particles bouncing straight back after hitting the gold foil and Rutherford was shocked: "It was almost as incredible as if you had fired a 15-inch shell at a piece of tissue paper and it came back and hit you."

Marsden and Geiger made comparative measurements using different metals and discovered exactly the same large-angle scattering. In June 1909 they published their extraordinary results, but with Rutherford unable to offer any kind of explanation they attracted little interest.

After decades of intense arguments, by 1910 the reality of atoms was established beyond reasonable doubt. The most widely accepted atomic model was Thomson's so-called "plum pudding". Its ingredients consisted of a ball of diffuse "positive electricity" in which negatively charged electrons were embedded like plums in a pudding. But Rutherford knew that the atom of his old mentor couldn't explain alpha particle scattering. The probability that the accumulated effect of a number of tiny ricochets off electrons in Thomson's atom resulted in even one alpha particle being scattered backwards was almost zero. By December 1910, Rutherford believed that, given the mass and energy of an alpha particle, the large deflections must be the result of a single collision with an atom. It led him "to devise an atom superior to J.J.'s", he said at the time.

Rutherford's atom consisted of a tiny central core containing virtually all the atomic mass, which he later called the nucleus, but it occupied only a minute volume, "like a fly in a cathedral". Most alpha particles would pass straight through Rutherford's atom in any "collision", since they were too far from the tiny nucleus at its heart to suffer any deflection. But if an alpha particle approached the nucleus head-on, the repulsive force between the two would cause it to recoil straight back like a ball bouncing off a brick wall. Rutherford said that such direct hits were "like trying to shoot a gnat in the Albert Hall at night". Rutherford's model allowed him to make definite predictions, using a simple formula he had derived, about the fraction of scattered alpha particles to be found at any angle of deflection.
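The essential content of that formula, given here in its modern textbook statement rather than in Rutherford's original notation, is a very steep fall-off with angle:

\[ N(\theta) \propto \frac{1}{\sin^4(\theta/2)} \]

where N(θ) is the number of alpha particles scattered through the angle θ; the constant of proportionality depends on the charge of the target nucleus and the energy of the incoming alpha particles. It was this distinctive dependence that could be checked against experiment.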

Experimental checks performed by Geiger and Marsden confirmed the predictions, but few physicists beyond Manchester gave any serious attention to the nuclear atom. Although Rutherford did not explicitly suggest a planetary model of the atom, there were those who knew that's exactly what it was. For most, that settled the matter: Rutherford's atom was fatally flawed. A model of the atom with electrons moving around the nucleus, like planets orbiting the sun, would collapse. Any object moving in a circle undergoes acceleration, and a charged particle, like an electron, continuously loses energy in the form of radiation as it accelerates. An electron in orbit around the nucleus would spiral into it. Rutherford's atom was unstable and the existence of the material world was compelling evidence against it. Enter Niels Bohr.

Arriving in Manchester in March 1912 to learn about radioactivity, the 27-year-old Dane soon began thinking about how to prevent Rutherford's nuclear atom from collapsing. His solution employed the quantum – the idea that energy comes in packets. Bohr argued that electrons inside an atom could only move in certain orbits in which they did not radiate energy and therefore couldn't spiral into the nucleus. Bohr said that each orbit had a certain energy associated with it, so all the allowed orbits were in effect a series of energy levels, like the rungs of a ladder. Moving between levels, the famous quantum leap, required an electron to absorb or emit a quantum of energy equivalent to the difference in energy between the two levels.
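Bohr's rule for the quantum leap can be written down compactly; the following is the standard form, offered as a gloss on the description above:

\[ E_{\text{photon}} = h\nu = E_m - E_n \]

where E_m and E_n are the energies of the higher and lower rungs of the ladder, ν is the frequency of the light emitted or absorbed and h is Planck's constant. For hydrogen the allowed rungs work out to E_n = -13.6\,\text{eV}/n^2, which is why atoms emit light only at sharply defined frequencies.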

"It is difficult to overestimate the scientific importance of the discovery of the nucleus," says Sean Freeman, professor of nuclear physics at Manchester University. "Rutherford's insight, imagination and attention to detail enabled him to make revolutionary discoveries using rather rudimentary technology by modern standards. He was a true pioneer."

One of his most important achievements was made in his spare time while Rutherford was developing methods for detecting submarines during the First World War – he split the atom. Arriving late for a committee meeting one day, Rutherford didn't apologise, but announced: "I have been engaged in experiments which suggest that the atom can be artificially disintegrated. If it is true, it is of far greater importance than a war!" It was 1919 before he published the results showing that the nucleus contained positively charged particles, which he called protons, by knocking them out of nitrogen nuclei using alpha particles – thereby effectively splitting the nucleus and hence the atom. It was the last work he did at Manchester before moving to Cambridge to take over from Thomson as head of the Cavendish Laboratory.

It was there that in 1932 his colleagues John Cockcroft and Ernest Walton "split the atom" using the world's first particle accelerator. Also at the Cavendish, James Chadwick used Rutherford's suggestion that there was probably another constituent to heavier nuclei to discover the neutron. The particle plays the central role in establishing a nuclear chain reaction. The three men were among the 11 former students and colleagues of Rutherford who would win the Nobel prize.

Another of those 11 was Niels Bohr, who said that Rutherford never spoke more angrily to him than he did one evening at a Royal Society dinner. He had overheard Bohr refer to him by his title (Rutherford was awarded a peerage in 1931) and angrily asked the Dane loudly: "Do you Lord me?" Rutherford never cared for the honours and was indifferent to academic or social standing. What mattered most to him were the results of experiments. "I was brought up to look at the atom as a nice hard fellow, red or grey in colour, according to taste," he once said. It was a model he replaced with an atom that began the nuclear age.

Saturday, 12 February 2011

The 4% Universe

The 4% Universe: Dark matter, dark energy and the race to discover the rest of reality by Richard Panek

Times, 12 February 2011


For Galileo seeing was believing. When in 1609 he learnt of the Dutch invention of the telescope, he quickly constructed his own. With no reason to think there was anything to find, he searched the night sky and found that there was far more to the universe than meets the naked eye. He saw that the Moon had mountains, the Sun had spots and he observed the phases of Venus. With the discovery of Jupiter’s moons, Galileo found hard evidence that not all heavenly bodies revolved around the Earth. In March 1610 he published The Starry Messenger, his report of what he had seen. All 500 copies were sold within a week.

Four centuries later Galileo’s successors know that they cannot see, even using their dazzling variety of modern telescopes, an astonishing 96 per cent of the universe. The tiny fraction that is visible to their fine-tuned instruments is the stuff that we and all the countless planets, stars and galaxies are made from. Get rid of us and of everything else we’ve ever thought of as the universe, and very little would change. ‘We’re just a bit of pollution,’ one cosmologist says. We may be irrelevant, but the rest of reality has been dubbed ‘dark’ and for the American science writer Richard Panek it ‘could go down in history as the ultimate semantic surrender’. For this is not ‘dark’ as in distant or invisible, but ‘dark’ as in unknown – for now at least.

Yet what is known is that almost a quarter of what can’t be seen is something called dark matter. Although its very nature is a mystery, its presence is discernible through its gravitational effect on the movement of galaxies. Without dark matter the astronomical data doesn’t make sense.

From a derelict iron mine in Minnesota to mountaintop observatories, at a pace that would shame many a thriller writer, Panek tells the story of the quest to unlock the secrets of dark matter and the particles that make it up. These weakly interacting massive particles, or WIMPs, have proven so elusive that the possibility that two were detected in November 2009 caused great excitement.

Dark matter is less than half the tale Panek wants to tell. For three quarters of the unknown universe consists of an even stranger substance called dark energy. Its existence was inferred, once again, from the circumstantial evidence gathered by astronomers measuring what could be seen. They didn’t need Sherlock Holmes to remind them that after eliminating the impossible, whatever remains, no matter how improbable, is the truth.

In the late 1990s two rival teams set out to collect data on distant supernovae in an attempt to determine the rate at which the universe was expanding. It was assumed that the pull of gravity would act as a brake on the pace of expansion. To their disbelief they found that space-time was being pushed apart faster than ever before. Something was overwhelming the force of gravity to drive the expansion. Dark energy was winning the cosmic tug-of-war.

With a future Nobel prize at stake, disputes and arguments over who did what and when were inevitable. Panek provides a behind-the-scenes glimpse of science in the raw as alliances are forged and friendships strained. There is a new universe to explore and the latest experiments reveal it is 13.75 billion years old and made up of 72.8 per cent dark energy, 22.7 per cent dark matter and 4.5 per cent ordinary matter. These numbers are ‘an exquisitely precise accounting of the depths of our ignorance,’ says Panek. ‘It’s 1610 all over again.’

Thursday, 10 February 2011

Culture Under The Microscope

Culture Under The Microscope
The Visceral Exhibition @ The Science Gallery, Dublin.

Independent, 10 February 2011




What is Life? It's a question that the quantum physicist Erwin Schrödinger tackled in three famous lectures given at Trinity College, Dublin. The first, on 5 February 1943, was heard by an audience that included the entire Irish cabinet led by Éamon de Valera.

Schrödinger is remembered today for making vivid the weirdness of the quantum world with his famous cat-in-the-box thought experiment. Schrödinger's cat is neither dead nor alive but exists in a superposition of states until we open the box and look. Yet when his Trinity College lectures were published they became influential in persuading many young physicists that Schrödinger's methods might solve some of the problems in the developing field of molecular biology. James Watson and Francis Crick cited the book as a key inspiration for the research that led them to the discovery of the double-helix structure of DNA.

"Schrödinger with his mythical 'semi-living' cat, could be described as a pioneer of BioArt," says Dr Michael John Gorman, the director of Dublin's Science Gallery, which is also located in Trinity College. His tongue is firmly in his cheek as he accompanies 40 people on the short walk from his gallery to the Schrödinger Theatre, to discuss what life is. This is one of the many activities surrounding the gallery's latest exhibition, Visceral: The Living Art Experiment.

"BioArt" was a term coined in 1997 as a number of artists abandoned paints and brush in favour of cells, fragments of DNA, proteins and living tissue. Visceral, a month-long exhibition uses new technologies, tissue and neural engineering to explore the question "what is life?" People may be put off by some of the 15 works, some of which use human tissue as book covers or retinal cells to project film. Gorman admits there is something a little queasy about creating artworks from living tissue. "The very idea of tissue-engineering becoming an art form makes us squirm," he says. However, Visceral is all about provoking the sort of instinctive gut reaction that Gorman hopes will gets visitors asking questions about the ethical implications of manipulating living material and what we mean by "living".

The exhibition's curator, Oron Catts, believes that the "logic that drives things like nanotechnology, synthetic biology and even things like neuroengineering needs to be scrutinised and explored by people other than just scientists and engineers". It was one of the reasons that Catts helped to set up SymbioticA, an artistic lab dedicated to a hands-on engagement with the life sciences based at the University of Western Australia in Perth.

"Our interest is in life," says Catts, "not only art or science." Yet the exhibition demonstrates the depth of the potential of interactions between art and science. For Gorman, nothing illustrates this better than Silent Barrage, the largest work on show. The product of a collaboration between Neurotica, a group of five artists, and Dr Steve Potter of the Georgia Institute of Technology in Atlanta, its a cutting edge piece of neural engineering. It consists of an array of robotic poles hooked up to neurons from the brains of rats in Potter's lab.

The array responds to the way visitors move through it and sends signals back to the neurons. These neurons then fire, making the robotic poles shudder up and down. Depending on the amount of audience activity, the neurons can undergo what is called a "barrage" – when they start firing in a chaotic fashion. This is exactly what happens during an epileptic seizure. With epilepsy affecting over 450,000 people in the UK alone, the scientists involved hope that the data collected might lead to a better understanding of the process by which cells are calmed and seizures mitigated. And it's not the only exhibit that promises something scientifically tangible.

The battlefield of Kathy High's Blood Wars is a Petri dish, with the combatants being white blood cells drawn from two different people. After a few hours slugging it out, one set of cells will have destroyed the other. The "winner" of each cellular battle goes on to fight another participant. The concept may sound sinister to some with concerns about eugenics, but it is an ingenious attempt to engage in the age-old debates surrounding traits inherited through blood.

Catts says that cell lines create a form of immortality since they can live beyond the life of the donor. I'm reminded of the story told by Rebecca Skloot in her bestselling book, The Immortal Life of Henrietta Lacks. Known to scientists as HeLa, Lacks died in 1951 but her cancer cells were taken without her knowledge and became one of the most important tools in medicine. The Vision Splendid, a work by Alicia King, consists of two sealed glass jars, connected by tubes, that contain nutrients and cultured human tissue. The cells were those of an unknown African-American girl aged 13. You're left wondering who owns the stuff our bodies are made of. If that worries you, then Catts offers a way to ease your troubles.

The Semi-Living Worry Dolls by Catts and Ionat Zurr are a modern version of the famous Guatemalan worry dolls, constructed out of degradable polymer on which cells are grown in micro-gravity conditions. You can whisper your troubles to them through a microphone as the cells gradually replace the polymer completely, transforming the piece from fabric to tissue.

With the Irish general election rescheduled for the closing date of Visceral on 25 February, there's a rumour going around that the Silent Barrage installation may be able to predict the outcome – if political candidates are willing to present themselves to the cultured rat neurons in person.





Friday, 4 February 2011

Unnatural

Unnatural: The Heretical Idea of Making People by Philip Ball

Guardian, 5 February 2011

The award of the Nobel Prize, when it came in October 2010, was long overdue. By then there was more than three decades' worth of growing evidence to back up the claim of two British men. Around 4 million people, none older than 33, were living proof of their pioneering work in developing the technique of in vitro fertilisation (IVF). Sadly for the gynaecologist and surgeon Dr Patrick Steptoe, who died in 1988, the Nobel isn't awarded posthumously. Therefore the sole recipient was the physiologist Professor Robert Edwards, who at 85 was too ill to travel to Stockholm to collect the prize in person.

Since the birth of Louise Brown on 25 July 1978, IVF has helped and offered hope to some of the 10% of all couples worldwide who suffer from infertility. Yet the birth of the "first test tube baby" at Oldham General hospital outraged many for being the product of an unnatural interference by scientists in the creation of a human being. Repeatedly having to fend off charges that he was playing God, Edwards once complained that the early public response to IVF was conditioned by "fantasies of horror and disaster, and visions of white-coated, heartless men, breeding and rearing embryos in the laboratory to bring forth Frankenstein genetic monsters".

Philip Ball, who in Critical Mass explored how one thing leads to another, points out in his latest book, Unnatural, that traditionally the "natural" end of sex is procreation since the latter requires the former. However, religious objections to IVF, Ball argues, invoke this reasoning in reverse: the natural beginning of procreation is sex – not sex in terms of sperm meets egg, but in the anatomical sense. Hence, Ball's interest in exploring what lies beyond the "this bit goes in here" method. The result is a fascinating and impressive cultural history of anthropoeia – the centuries of myths and tales about the artificial creation of people.

Ball explores what these fables reveal about contemporary views on life, humanity and technology as modern science has turned the fantasy of making people into reality. From the homunculus of the medieval alchemists and the clay golem of Jewish legend to Frankenstein's monster and the babies in jars of Huxley's Brave New World, Ball ranges far and wide to show that the idea that making life is either hubristic or "unnatural" is a relatively recent one.

Until the Enlightenment, it was widely assumed that it was possible to make lower forms of life. For example, a process called bougonia in which bees were created using the carcasses of dead oxen was once accepted as fact. It was only in the 19th century that "spontaneous generation", the belief that life could spring forth from inanimate matter without the need for seeds, eggs or parents, was finally discredited. If there were any doubts about such practices, explains Ball, then they were about the quality and character of "artificial life" – was it inferior, equivalent, or better than "natural" life?

The ultimate "unnatural" act is the artificial creation of humans, since it challenges the conviction that we are God's chosen. Yet Ball makes a persuasive case when he suggests that the response of the medieval mind to the idea of artificial human life was very different from the horror it now typically engenders. This indicates that feelings of revulsion about these "unnatural" creations are not inevitable.

The prefix "un" was only attached to acts that were deemed reprehensible because they were contra naturam, against nature. However, people in the middle ages saw nothing intrinsically wrong in creating human and other forms of life. The problem for them was rather, as the 12th-century Muslim scholar Averroes said, that organisms made by art were like alchemist's gold, a kind of fake. In short, any "unnatural" creation lacked a soul.

Doubts about the possibility of an artificial person having a soul are still with us, though given a modern spin. The fabricated being is denied genuine humanity. He or she is thought to be soulless: lacking in love, warmth and human feeling. This same failing is now imputed to human clones – 21st-century reincarnations of Frankenstein's monster, as the very term carries connotations of spiritual vacancy. A skilled practitioner of the book-length essay, Ball can also be wonderfully succinct: "'Soul' has become a kind of watermark of humanity, a defence against the awful thought that we could be manufactured."

Debates about the pros and cons of human embryo research, cloning and the like require a focus on issues that are rooted in the particularities of our time and culture. Ball's thoughtful book is a reminder that as we try and deal with how to enable and assist people into being, we need to understand and then conquer our fears surrounding the very idea of making people.

Wednesday, 19 January 2011

Incoming!


Incoming!: or, Why we should stop worrying and learn to love the meteorite by Ted Nield

New Scientist, 21 January 2011


From AD 218 to 222 the Roman empire worshipped a meteorite. This bizarre episode ended when the transsexual priest-emperor Elagabalus was hacked to bits and hurled into the Tiber. This is just one of the many stories Ted Nield skilfully weaves into his entertaining history of meteorites.

In July 2010, two spectators at a cricket match in Sussex in the south of England witnessed an extremely rare meteor strike. The rock, 12 centimetres long, broke in two when it hit the ground, with a piece ricocheting into the chest of one man. Luckily he was unharmed. Nield reckons the "global risk of death by extraterrestrial impact to be a negligible 1 in 720,000". Meteorites pose little threat, but, says Nield, "we humans have transplanted into meteorites the geological aliens, the heart of our own times, as we searched them for signs of times to come".

Saturday, 1 January 2011

The Many Worlds of Hugh Everett III

The Many Worlds of Hugh Everett III: Multiple universes, mutual assured destruction, and the meltdown of a nuclear family by Peter Byrne

It’s a sobering fact that the world we live in would be a very different place but for the discovery of quantum mechanics in the 1920s. Yet for the next sixty years most physicists accepted that the theory denied the existence of reality at the atomic and sub-atomic level. This strange state of affairs led the Nobel Prize-winning American physicist Murray Gell-Mann to describe quantum mechanics as ‘that mysterious, confusing discipline which none of us really understands but which we know how to use’. And use it we have, for without it there would be no computers, mobile phones, televisions and numerous other everyday gadgets.

Gell-Mann blamed the celebrated Danish physicist Niels Bohr, accusing him of having ‘brain-washed a whole generation of physicists into believing that the problem had been solved’. The problem being the vexed one that lies at the heart of quantum mechanics: what does the theory tell us about the nature of reality?

Albert Einstein and Bohr spent nearly thirty years locked in a debate over this question. Bohr believed he had the answer. There simply was no objective reality, but only ‘an abstract quantum mechanical description’. For Bohr, Schrödinger’s famous mythical cat trapped in a box with a vial of poison was neither dead nor alive but in a ghostly mixture of quantum states that ranged from being totally dead to completely alive and every conceivable combination in between until the box was opened. It was the act of observation/measurement, i.e. opening the box, that decided the fate of the cat. Einstein thought this was absurd. He believed that the cat was simply either dead or alive and to find out which all one had to do was look in the box.

Hugh Everett III was a twenty-four-year-old student at Princeton University when Einstein died in April 1955. By then Bohr’s so-called Copenhagen interpretation had become quantum orthodoxy that few were prepared to challenge. However, Everett did exactly that in his PhD thesis when he demonstrated it was theoretically possible to treat each and every possible outcome of a quantum experiment as actually existing in an alternative parallel reality. According to Everett, this meant that the moment the box containing Schrödinger’s cat was opened the universe split in two: one in which the cat was dead and another in which it was still alive and kicking.

H.G. Wells wrote one of the first stories about parallel universes. In his 1922 Men Like Gods there exists an alternate world with ‘no parliament, no politics, no private wealth, no business competition, no police, no prison’. The Utopians who inhabit this world had shared our bloody past until history inexplicably branched. Everett was an avid reader of science fiction and believed that his theory was the simplest explanation of quantum mechanics, but accepting it was ‘a matter of taste’. Bohr and his inner circle rejected Everett’s heretical ‘many worlds’ as unpalatable and it was ignored.

Instead of pursuing an academic career and fighting for his idea, Everett joined the Pentagon in the late 1950s to apply game theory to strategic nuclear war planning. At the time the Pentagon was busy analysing the cost-benefits of global and limited nuclear wars and calculating nuclear blast and fallout kill ratios. Appropriately for a man whose favourite film was Dr Strangelove, Everett helped bring the world perilously close to the brink of nuclear destruction. His work was instrumental in the development of the no-win scenario of mutual assured destruction (MAD) that defined the Cold War as it became the strategic posture of both the Americans and the Soviets. Peter Byrne, in this vivid and thoroughly researched portrayal of a quintessential cold war techno-warrior, found no evidence that Everett ever had the slightest doubt about effectively playing God with all life on Earth.

Byrne does an admirable job of weaving together quantum mechanics, nuclear war games and the disintegration of a dysfunctional family in this tale of a talented scientist but morally compromised man. In pursuing what mattered to him, Everett was indifferent to the feelings of others and cared little what harm his actions caused those closest to him – one of his two children committed suicide. His achievements in physics and for the military were matched by his shortcomings as a husband and father.

Everett, who died of a heart attack aged fifty-one in 1982, did not live to see his many worlds interpretation taken seriously by physicists as they struggled to explain the mystery of how the universe came into being. A poll conducted just over a decade ago revealed that only four out of ninety physicists voted for Bohr’s Copenhagen interpretation, but thirty favoured the modern version of Everett’s many worlds. ‘There is no question that there is an unseen world,’ Woody Allen once said. ‘The problem is how far is it from mid-town and how late is it open’.