The science of selfhood

Zoë Wanamaker as Lorna in Nick Payne’s Elegy.

“The brain is wider than the sky,
For, put them side by side,
The one the other will contain,
With ease, and you beside”

Emily Dickinson, Complete Poems, 1924

What does it mean to be a self? And what happens to the social fabric of life, to our ethics and morality, when the nature of selfhood is called into question?

In neuroscience and psychology, the experience of ‘being a self’ has long been a central concern. One of the most important lessons, from decades of research, is that there is no single thing that is the self. Rather, the self is better thought of as an integrated network of processes that distinguish self from non-self at many different levels. There is the bodily self – the experience of identifying with and owning a particular body, which at a more fundamental level involves the amorphous experience of being a self-sustaining organism. There is the perspectival self, the experience of perceiving the world from a particular first-person point-of-view. The volitional self involves experiences of intention and agency, of urges to do this-or-that (or, perhaps more importantly, to refrain from doing this-or-that) and of being the cause of things that happen.

At higher levels we encounter narrative and social selves. The narrative self is where the ‘I’ comes in, as the experience of being a continuous and distinctive person over time. This narrative self – the story we tell ourselves about who we are – is built from a rich set of autobiographical memories that are associated with a particular subject. Finally, the social self is that aspect of my self-experience and personal identity that depends on my social milieu, on how others perceive and behave towards me, and on how I perceive myself through their eyes and minds.

In daily life, it can be hard to differentiate these dimensions of selfhood. We move through the world as seemingly unified wholes, our experience of bodily self seamlessly integrated with our memories from the past, and with our experiences of volition and agency. But introspection can be a poor guide. Many experiments and neuropsychological case studies tell a rather different story, one in which the brain actively and continuously generates and coordinates these diverse aspects of self-experience.

The many ways of being a self can come apart in surprising and revealing situations. For example, it is remarkably easy to alter the experience of bodily selfhood. In the so-called ‘rubber hand illusion,’ I ask you to focus your attention on a fake hand while your real hand is kept out of sight. If I then simultaneously stroke your real hand and the fake hand with a soft paintbrush, you may develop the uncanny feeling that the fake hand is now, somehow, part of your body. A more dramatic disturbance of the experience of body ownership happens in somatoparaphrenia, a condition in which people experience that part of their body is no longer theirs, that it belongs to someone else – perhaps their doctor or family member. Both these examples involve changes in brain activity, in particular within the ‘temporo-parietal junction’, showing how even very basic aspects of personal identity are actively constructed by the brain.

Moving through levels of selfhood, autoscopic hallucinations involve seeing oneself from a different perspective, much like ‘out of body’ experiences. In akinetic mutism, people seem to lack any experiences of volition or intention (and do very little), while in schizophrenia or anarchic hand syndrome, people can experience their intentions or voluntary actions as having external causes. At the other end of the spectrum, disturbances of social self emerge in autism, where difficulties in perceiving others’ states of mind seem to be a core problem, though the exact nature of the autistic condition is still much debated.

When it comes to the ‘I’, memory is the key. Specifically, autobiographical memory: the recollection of personal experiences of people, objects, places, and other episodes from an individual’s life. While there are as many types of memory as there are varieties of self (for example, we have separate memory processes for facts, for the short term and the long term, and for skills that we learn), autobiographical memories are those most closely associated with our sense of personal identity. This is well illustrated by some classic medical cases in which, as a result of surgery or disease, the ability to lay down new memories is lost. In 1953 Henry Molaison (also known as the patient HM) had large parts of his medial temporal lobes removed in order to relieve severe epilepsy. From 1957 until his death in 2008, HM was studied closely by the neuropsychologist Brenda Milner, yet he was never able to remember meeting her. In 1985 the accomplished musician Clive Wearing suffered a severe viral brain disease that affected similar parts of his brain. Now 77, he frequently believes he has just awoken from a coma, spending each day in a constant state of re-awakening.

Surprisingly, both HM and Wearing remained able to learn new skills, forming new ‘procedural’ memories, despite never recalling the learning process itself. Wearing could still play the piano, and conduct his choir, though he would immediately forget having done so. The music appears to carry him along from moment to moment, restoring his sense of self in a way his memory no longer can. And his love for his wife Deborah seems undiminished, so that he expresses an enormous sense of joy on seeing her, even though he cannot tell whether their last meeting was years, or seconds, in the past. Love, it seems, persists when much else is gone.

For people like HM and Clive Wearing, memory loss has been unintended and unwanted. But as scientific understanding develops, could we be moving towards a world where specific memories and elements of our identity can be isolated or removed through medical intervention? And could the ability to lay down new memories ever be surgically restored? Some recent breakthroughs suggest these developments may not be all that far-fetched.

In 2013, Jason Chan and Jessica LaPaglia, from Iowa State University, showed that specific human memories could indeed be deleted. They took advantage of the fact that when memories are explicitly recalled they become more vulnerable. By changing details about a memory, while it was being remembered, they induced a selective amnesia which lasted for at least 24 hours. Although an important advance, this experiment was limited by relying on ‘non-invasive’ methods – which means not using drugs or directly interfering with the brain.

More recent animal experiments have shown even more striking effects. In a ground-breaking 2014 study at the University of California, using genetically engineered mice, Sadegh Nabavi and colleagues managed to block and then re-activate a specific memory. They used a powerful (invasive) technique called optogenetics to activate (or inactivate) the biochemical processes determining how neurons change their connectivity. And elsewhere in California, Ted Berger is working on the first prototypes of so-called ‘hippocampal prostheses’ which replace a part of the brain essential for memory with a computer chip. Although these advances are still a long way from implementation in humans, they show an extraordinary potential for future medical interventions.

The German philosopher Thomas Metzinger believes that “no such things as selves exist in the world”. Modern neuroscience may be on his side, with memory being only one thread in the rich tapestry of processes shaping our sense of selfhood. At the same time, the world outside the laboratory is still full of people who experience themselves – and each other – as distinct, integrated wholes. How the new science of selfhood will change this everyday lived experience, and society with it, is a story that is yet to be told.

Originally commissioned for the Donmar Warehouse production of Elegy, with support from The Wellcome Trust.  Reprinted in the programme notes and in Nick Payne’s published script.

Tracing the edges of consciousness


As a scientist, consciousness has always fascinated me. But understanding consciousness is not a project for science alone. Throughout history, philosophers, artists, storytellers, and musicians have all wondered about the apparent miracle of conscious awareness. Even today, while science might give us our best shot at figuring out the brain – the organ of experience – we need, more than ever, a melding of the arts and sciences, of contemporary and historical approaches, to understand what consciousness really is, to grasp what we mean by, as Mark Haddon eloquently puts it, “Life in the first person.”

This quote comes from Haddon’s beautiful introductory essay to a major new exhibition at the Wellcome Collection in London. Curated by Emily Sargent, States of Mind: Tracing the edges of consciousness “examines perspectives from artists, psychologists, philosophers and neuroscientists to interrogate our understanding of the conscious experience”. It’s a fantastic exhibition, with style and substance, and I feel very fortunate to have been involved as an advisor from its early stages.

What’s so special about consciousness?

Consciousness is at once the most familiar and the most mysterious aspect of our existence. Conscious experiences define our lives, but the private, subjective, what-it-is-like character of these experiences seems to resist scientific enquiry. Somehow, within each of our brains the combined activity of many billions of neurons, each one a tiny biological machine, is giving rise to a conscious experience. Your conscious experience: right here, right now, reading these words. How does this happen? Why is life in the first person?

In one sense, this seems like the kind of mystery ripe for explanation. Borrowing again from Mark Haddon, the raw material of consciousness is not squirreled away deep inside an atom, it’s not happening 14 billion years ago, and it’s not hiding out on the other side of the universe. It’s right here in front of – or rather behind – our eyes. That said, the brain is a remarkably complex object. It’s not so much the sheer number of neurons (though there are about 90 billion). It’s the complexity of its wiring: there are so many connections that if you counted one every second it would take you 3 million years to finish. Is it not possible that an object of such extraordinary complexity should be capable of extraordinary things?

People have been thinking about consciousness since they’ve been thinking at all. Hippocrates, the founder of modern medicine, said: “Men ought to know that from the brain, and from the brain only, arise our pleasures, joys, laughter and jests, as well as our sorrows, pains, griefs and tears … Madness comes from its moistness.” (Aristotle, by the way, got it wrong, thinking the brain hadn’t much to do with consciousness at all.)

Fast forward to Francis Crick, whose ‘astonishing hypothesis’ in the 1990s deliberately echoed Hippocrates: “You, your joys and your sorrows, your memories and your ambitions … and so on … are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules”. Crick, who I was lucky enough to meet during my time in America, was working on the neurobiology of consciousness even on the day he died. You will see some of his personal notes, and his perplexing plasticine brain models, in States of Mind.


Descartes: view of posterior of brain, from De Homine. Wellcome Collection

A major landmark in thinking about consciousness is of course Descartes, who in the 17th Century distinguished between “mind stuff” (res cogitans) and “matter stuff” (res extensa), so giving rise to the now infamous mind-body problem and the philosophy of dualism. It’s a great thrill to see an original copy of Descartes’ De Homine as part of this exhibition. Its modern incarnation as David Chalmers’ so-called ‘hard problem’ has recently gained enough cultural notoriety even to inspire a Tom Stoppard play (though for my money Alex Garland’s screenplay for Ex Machina is the more perspicuous). The idea of the hard problem is this: even if we knew everything about how the operations of the brain give rise to perception, cognition, learning, and behaviour, a problem would still remain: why and how should any of this be associated with consciousness at all? Why is life in the first person?

Defining consciousness

How to define consciousness? One simple definition is that for a conscious organism there is something it is like to be that organism. Or, one can simply say that consciousness is what disappears when we fall into a dreamless sleep, and what returns when we wake up or start dreaming. A bit more formally, for conscious organisms there exists a continuous (though interruptible) stream of conscious scenes – a phenomenal world – which has the character of being subjective and private. The material in States of Mind can help us encounter these ideas with a bit more clarity and force, by focusing on the edges – the liminal boundaries – of consciousness.

First there is conscious level: the difference between being awake and, let’s say, under general anaesthesia. Here, neuroscience now tells us that there is no single ‘generator’ of consciousness in the brain; rather, being conscious depends on highly specific ways in which different parts of the brain speak to each other. Aya Ben Ron’s film of patients slipping away under anaesthesia is a beautiful exploration of this process, as is the whole section on ‘SLEEP | AWAKE’.

Then there is conscious content: what we are conscious of, when we are conscious. These are the perceptions, thoughts, and emotions that populate our ever-flowing stream of awareness. Here, current research is revealing that our perceptual world is not simply an internal picture of some external reality. Rather, conscious perception depends on the brain’s best guesses, or hypotheses, about the causes of sensory data. Perception is therefore a continuously creative act that is tightly bound up with imagination, so that our experience of the world is a kind of ‘controlled hallucination’, a fantasy that – usually, but not always – coincides with reality. The material on synaesthesia in States of Mind beautifully illuminates this process by showing how, for some of us, these perceptual fantasies can be very different – that we all have our own distinctive inner universes. You can even try training yourself to become a ‘synaesthete’ with a demo of some of our own research, developed for this exhibition. Many thanks to Dr. David Schwartzman of the Sackler Centre for making this happen.
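
The ‘best guess’ idea can be made concrete with a few lines of arithmetic. The sketch below is a minimal illustration of Bayesian inference about the cause of an ambiguous sensory signal; the scenario, numbers, and variable names are mine, invented for illustration rather than taken from any model in the exhibition or from our research.

```python
# A toy illustration of perception as inference: the brain's "best guess"
# about the hidden cause of a sensory signal. The scenario and all numbers
# are invented purely for illustration.

prior = {"wind": 0.9, "intruder": 0.1}       # expectations before any data
likelihood = {"wind": 0.3, "intruder": 0.8}  # P(creaking sound | cause)

# Bayes' rule: the posterior is proportional to likelihood times prior.
unnormalised = {cause: likelihood[cause] * prior[cause] for cause in prior}
evidence = sum(unnormalised.values())
posterior = {cause: value / evidence for cause, value in unnormalised.items()}

print(posterior)  # {'wind': ~0.77, 'intruder': ~0.23}
# The percept corresponds to the best guess - here "wind" - but shift the
# prior or the likelihood and the 'controlled hallucination' shifts too.
```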


Alphabet in Colour: Illustrating Vladimir Nabokov’s grapheme-colour synaesthesia, by Jean Holabird.

Finally there is conscious self – the specific experience of being me, or being you. While this might seem easy to take for granted, the experience of being a self requires explanation just as much as any other kind of experience. It too has its edges, its border regions. Here, research is revealing that conscious selfhood, though experienced as unified, can come apart in many different ways. For example, our experience of being and having a particular body can dissociate from our experience of being a person with a name and a specific set of memories. Conscious selfhood, like all conscious perception, is therefore another controlled hallucination maintained by the brain. The section BEING | NOT BEING dramatically explores some of these issues, for example by looking at amnesia with Shona Illingworth, and with Adrian Owen’s seminal work on the possibility of consciousness even after severe brain injury.

This last example brings up an important point. Besides the allure of basic science, there are urgent practical motivations for studying consciousness. Neurological and psychiatric disorders are increasingly common and can often be understood as disturbances of conscious experience. Consciousness science promises new approaches and perhaps new treatments for these deeply destructive problems. Scoping out further boundary areas, studying the biology of consciousness can shed new light on awareness in newborn infants and in non-human animals, informing ethical debates in these areas. Above all, consciousness science carries the promise of understanding more about our place in nature. Following the tradition of Copernicus and Darwin, a biological account of conscious experience will help us see ourselves as part of, not apart from, the rest of the universe.


Santiago Ramon y Cajal, distinguishing the reticular theory (left) from the neuron doctrine (right).  From the Instituto Cajal, Madrid.

Let’s finish by returning to this brilliant exhibition, States of Mind. What I found most remarkable are the objects that Emily Sargent has collected together. Whether it’s Descartes’ De Homine, Ramon y Cajal’s delicate ink drawings of neurons, or Francis Crick’s notebooks and models, these objects bring home and render tangible the creativity and imagination which people have brought to bear in their struggle to understand consciousness, over hundreds of years. For me, this brings a new appreciation and wonder to our modern attempts to tackle this basic mystery of life. Emily Dickinson, my favourite poet of neuroscience, put it like this: “The brain is wider than the sky, for – put them side by side – the one the other will contain, with ease, and you – beside.”

States of Mind is at the Wellcome Collection in London from Feb 4th until October 16th 2016 and is curated by Emily Sargent. Sackler Centre researchers, in particular David Schwartzman and myself,  helped out as scientific advisors. This text is lightly adapted from a speech I gave at the opening event on Feb 3rd 2016. Watch this space, and visit the exhibition website, for news about special events on consciousness that will happen throughout the year.

Ex Machina: A shot in the arm for smart sci-fi


Alicia Vikander as Ava in Alex Garland’s Ex Machina

It’s a rare thing to see a movie about science that takes no prisoners intellectually. Alex Garland’s Ex Machina is just that: a stylish, spare and cerebral psycho-techno-thriller, which gives a much-needed shot in the arm for smart science fiction.

Reclusive billionaire genius Nathan, played by Oscar Isaac, creates Ava, an intelligent and very attractive robot played by Alicia Vikander. He then struggles with the philosophical and ethical dilemmas his creation poses, while all hell breaks loose. Many twists and turns add nuance to the plot, which centres on the evolving relationships between the balletic Ava and Caleb (Domhnall Gleeson), a hotshot programmer invited by Nathan to be the “human component in a Turing test”, and between Caleb and Nathan, as Ava’s extraordinary capabilities become increasingly apparent.

Everything about this movie is good. Compelling acting (with only three speaking parts), exquisite photography and set design, immaculate special effects, a subtle score and, above all, a hugely imaginative screenplay combine under Garland’s precise direction to deliver a cinematic experience that grabs you and never lets go.

The best science fiction often tackles the oldest questions. At the heart of Ex Machina is one of our toughest intellectual knots, that of artificial consciousness. Is it possible to build a machine that is not only intelligent but also sentient: that has consciousness, not only of the world but also of its own self? Can we construct a modern-day Golem, that lumpen being of Jewish folklore which is shaped from unformed matter and can both serve humankind and turn against it? And if we could, what would happen to us?

In Jewish folklore, the Golem is an animate being shaped from unformed matter.

Putting aside the tedious business of actually building a conscious AI, we face the challenge of figuring out whether the attempt succeeds. The standard reference for this sort of question is Alan Turing’s eponymous test, in which a human judge interrogates both a candidate machine and another human. A machine passes the test when the judge consistently fails to distinguish between them.

While the Turing test has provided a trope for many AI-inspired movies (such as Spike Jonze’s excellent Her), Ex Machina takes things much further. In a sparkling exchange between Caleb and Nathan, Garland nails the weakness of Turing’s version of the test, a focus on the disembodied exchange of messages, and proposes something far more interesting. “The challenge is to show you that she’s a robot. And see if you still feel she has consciousness,” Nathan says to Caleb.

This shifts the goalposts in a vital way. What matters is not whether Ava is a machine. It is not even whether Ava, even though a machine, can be conscious. What matters is whether Ava makes a conscious person feel that Ava is conscious. The brilliance of Ex Machina is that it reveals the Turing test for what it really is: a test of the human, not of the machine. And Garland is not necessarily on our side.

Nathan (Oscar Isaac) and Caleb (Domhnall Gleeson) discuss deep matters of AI

Is consciousness a matter of social consensus? Is it more relevant whether people believe (or feel) that something (or someone) is conscious than whether it is in fact actually conscious? Or, does something being “actually conscious” rest on other people’s beliefs about it being conscious, or on its own beliefs about its consciousness (beliefs that may themselves depend on how it interprets others’ beliefs about it)? And exactly what is the difference between believing and feeling in situations like this?

It seems to me that my consciousness, here and now, is not a matter of social consensus or of my simply believing or feeling that I am conscious. It seems to me, simply, that I am conscious here and now. When I wake up and smell the coffee, there is a real experience of coffee-smelling going on.

But let me channel Ludwig Wittgenstein, one of the greatest philosophers of the 20th century, for a moment. What would it seem like if it seemed to me that my being conscious were a matter of social consensus or beliefs or feelings about my own conscious status? Is what it “seems like” to me relevant at all when deciding how consciousness comes about or what has consciousness?

Before vanishing completely into a philosophical rabbit hole, it is worth saying that questions like these are driving much influential current research on consciousness. Philosophers and scientists like Daniel Dennett, David Rosenthal and Michael Graziano defend, in various ways, the idea that consciousness is somehow illusory and what we really mean in saying we are conscious is that we have certain beliefs about mental states, that these states have distinctive functional properties, or that they are involved in specific sorts of attention.

Another theoretical approach accepts that conscious experience is real and sees the problem as one of determining its physical or biological mechanism. Some leading neuroscientists, such as Giulio Tononi and, more recently, Christof Koch, take consciousness to be a fundamental property, much like mass-energy and electrical charge, that is expressed through localised concentrations of “integrated information”. And others, like philosopher John Searle, believe that consciousness is an essentially biological property that emerges in some systems but not in others, for reasons as-yet unknown.

In the film we hear about Searle’s Chinese Room thought experiment. His premise was that researchers had managed to build a computer, programmed in English, that could respond to written Chinese with written Chinese so convincingly that it easily passed the Turing test, persuading a human Chinese speaker that the program understood and spoke Chinese. Does the machine really “understand” Chinese (Searle called this “strong AI”) or is it only simulating the ability (“weak” AI)? There is also a nod to the notional ‘Mary’, the scientist who, while knowing everything about the physics and biology of colour vision, has only ever experienced black, white and shades of grey. What happens when she sees a red object for the first time? Will she learn anything new? Does consciousness exceed the realms of knowledge?

All of the above illustrates how academically savvy and intellectually provocative Ex Machina is. Hat-tips here to Murray Shanahan, professor of cognitive robotics at Imperial College London, and writer and geneticist Adam Rutherford, whom Garland did well to enlist as science advisers.

Not every scene invites deep philosophy of mind, with the film encompassing everything from ethics, the technological singularity, Ghostbusters and social media to the erosion of privacy, feminism and sexual politics within its subtle scope. But when it comes to riffing on the possibilities and mysteries of brain, mind and consciousness, Ex Machina doesn’t miss a trick.

As a scientist, it is easy to moan when films don’t stack up against reality, but there is usually little to be gained from nitpicking over inaccuracies and narrative inventions. Such criticisms can seem petty and reinforcing of the stereotype of scientists as humourless gatekeepers of facts and hoarders of equations. But these complaints sometimes express a sense of missed opportunity rather than injustice, a sense that intellectual riches could have been exploited, not sidelined, in making a good movie. AI, neuroscience and consciousness are among the most vibrant and fascinating areas of contemporary science, and what we are discovering far outstrips anything that could be imagined out of thin air.

In his directorial debut, Garland has managed to capture the thrill of this adventure in a film that is effortlessly enthralling, whatever your background. This is why, on emerging from it, I felt lucky to be a neuroscientist. Here is a film that is better because of, and not despite, its engagement with its intellectual inspiration.


The original version of this piece was published as a Culture Lab article in New Scientist on Jan 21. I am grateful to the New Scientist for permission to reproduce it here, and to Liz Else for help with editing. I will be discussing Ex Machina with Dr. Adam Rutherford at a special screening of the film at the Edinburgh Science Festival (April 16, details and tickets here).

There’s more to geek-chic than meets the eye, but not in The Imitation Game

Benedict Cumberbatch as Alan Turing in The Imitation Game. (Spoiler alert: this post reveals some plot details.)

World War Two was won not just with tanks, guns, and planes, but by a crack team of code-breakers led by the brilliant and ultimately tragic figure of Alan Turing. This is the story as told in The Imitation Game, a beautifully shot and hugely popular film which nonetheless left me nursing a deep sense of missed opportunity. True, Benedict Cumberbatch is brilliant, spicing his superb Holmes with a dash of Russell Crowe’s John Nash (A Beautiful Mind) to propel geek rapture into yet higher orbits. (See also Eddie Redmayne and Stephen Hawking.)

The rest was not so good. The clunky acting might reflect a screenplay desperate to humanize and popularize what was fundamentally a triumph of the intellect. But what got to me most was the treatment of Turing himself. On one hand there is the perhaps cinematically necessary canonisation of individual genius, sweeping aside so much important context. On the other there is the saccharine treatment of Turing’s open homosexuality (with compensatory boosting of Keira Knightley’s Joan Clarke) and the egregious scenes in which he stands accused of both treason and cowardice by association with Soviet spy John Cairncross, whom he likely never met. The requisite need for a bad guy also does a disservice to Turing’s Bletchley Park boss Alastair Denniston, who while a product of old-school classics-inspired cryptography nonetheless recognized and supported Turing and his crew. Historical jiggery-pokery is of course to be expected in any mass-market biopic, but the story as told in The Imitation Game becomes much less interesting as a result.

Alan Turing as himself

I studied at King’s College, Cambridge, Turing’s academic home and also where I first encountered the basics of modern day computer science and artificial intelligence (AI). By all accounts Turing was a genius, laying the foundations for these disciplines but also for other areas of science, which – like AI – didn’t even exist in his time. His theories of morphogenesis presaged contemporary developmental biology, explaining how leopards get their spots. He was a pioneer of cybernetics, an inspired amalgam of engineering and biology that after many years in the academic hinterland is once again galvanising our understanding of how minds and brains work, and what they are for. One can only wonder what more he would have done, had he lived.

There is a breathless moment in the film where Joan Clarke (or poor spy-hungry and historically-unsupported Detective Nock, I can’t remember) wonders whether Turing, in cracking Enigma, has built his ‘universal machine’. This references Turing’s most influential intellectual breakthrough, his conceptual design for a machine that was not only programmable but re-programmable, that could execute any algorithm, any computational process.

The Universal Turing Machine formed the blueprint for modern-day computers, but the machine that broke Enigma was no such thing. The ‘Bombe’, as it was known, was based on Polish prototypes (the bomba kryptologiczna) and was co-designed with Gordon Welchman, whose critical ‘diagonal board’ innovation is attributed in the film to the suave Hugh Alexander (Welchman doesn’t appear at all). Far from being a universal computer, the Bombe was designed for a single specific purpose – to rapidly run through as many settings of the Enigma machine as possible.

A working rebuilt Bombe at Bletchley Park, containing 36 Enigma equivalents. The (larger) Bombe in The Imitation Game was a high point – a beautiful piece of historical reconstruction.

The Bombe is half the story of Enigma. The other half is pure cryptographic catnip. Even with a working Bombe the number of possible machine settings to be searched each day (the Germans changed all the settings at midnight) was just too large. The code-breakers needed a way to limit the combinations to be tested. And here Turing and his team inadvertently pioneered the principles of modern-day ‘Bayesian’ machine learning, by using prior assumptions to constrain possible mappings between a cipher and its translation. For Enigma, the breakthroughs came on realizing that no letter could encode itself, and that German operators often used the same phrases in repeated messages (“Heil Hitler!”). Hugh Alexander, diagonal boards aside, was supremely talented at this process which Turing called ‘banburismus’, on account of having to get printed ‘message cards’ from nearby Banbury.
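
To see how much work the ‘no letter could encode itself’ constraint does, here is a minimal sketch (my own illustration, with invented strings, not anything from Bletchley): it slides a suspected crib along an intercepted ciphertext and discards every alignment that the constraint rules out, which is exactly the kind of pruning that made the daily search tractable.

```python
# Illustrative only: how the fact that Enigma never enciphered a letter as
# itself prunes the possible alignments of a suspected plaintext ("crib")
# against an intercepted ciphertext. The strings below are invented.

def possible_crib_positions(ciphertext: str, crib: str) -> list:
    """Return the offsets at which the crib could align with the ciphertext."""
    positions = []
    for offset in range(len(ciphertext) - len(crib) + 1):
        segment = ciphertext[offset:offset + len(crib)]
        # Any position where a crib letter coincides with the ciphertext
        # letter is impossible, so the whole alignment can be discarded.
        if all(c != p for c, p in zip(segment, crib)):
            positions.append(offset)
    return positions

intercept = "QFZWRWIVTYRESXBFOGKUHQBAISE"  # invented intercept
crib = "WETTERBERICHT"                     # "weather report", a stock phrase
print(possible_crib_positions(intercept, crib))
```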

In this way the Bletchley code-breakers combined extraordinary engineering prowess with freewheeling intellectual athleticism, to find a testable range of Enigma settings, each and every day, which were then run through the Bombe until a match was found.

A Colossus Mk 2 in operation. The Mk 2, with 2400 valves, came into service on June 1st 1944

Though it gave the allies a decisive advantage, the Bombe was not the first computer, not the first ‘digital brain’. This honour belongs to Colossus, also built at Bletchley Park, and based on Turing’s principles, but constructed mainly by Tommy Flowers, Jack Good, and Bill Tutte. Colossus was designed to break the even more heavily encrypted communications the Germans used later in the war: the Tunny cipher. After the war the intense secrecy surrounding Bletchley Park meant that all Colossi (and Bombi) were dismantled or hidden away, depriving Turing, Flowers – and many others – of recognition and setting back the computer age by years. It amazes me that full details about Colossus were only released in 2000.

Turing’s seminal 1950 paper, describing the ‘Imitation Game’ experiment

The Imitation Game of the title is a nod to Turing’s most widely known idea: a pragmatic answer to the philosophically challenging and possibly absurd question, “can machines think”. In one version of what is now known as the Turing Test, a human judge interacts with two players – another human and a machine – and must decide which is which. Interactions are limited to disembodied exchanges of pieces of text, and a candidate machine passes the test when the judge consistently fails to distinguish the one from the other. It is unfortunate but in keeping with the screenplay that Turing’s code-breaking had little to do with his eponymous test.

It is completely understandable that films simplify and rearrange complex historical events in order to generate widespread appeal. But The Imitation Game focuses so much on a distorted narrative of Turing’s personal life that the other story – a thrilling ‘band of brothers’ tale of winning a war by inventing the modern world – is pushed out into the wings. The assumption is that none of this puts bums on seats. But who knows, there might be more to geek-chic than meets the eye.

Should we fear the technological singularity?


Could wanting the latest mobile phone for Christmas lead to human extermination? Existential risks to our species have long been part of our collective psyche – in the form of asteroid impacts, pandemics, global nuclear cataclysm, and more recently, climate change. The idea is not simply that humans and other animals could be wiped out, but that basic human values and structures of society would change so as to become unrecognisable.

Last week, Stephen Hawking claimed that technological progress, while perhaps intended for human betterment, might lead to a new kind of existential threat in the form of self-improving artificial intelligence (AI). This worry is based on the “law of accelerating returns”, which applies when the rate at which technology improves is proportional to how good the technology is, yielding exponential – and unpredictable – advances in its capabilities. The idea is that a point might be reached where this process leads to wholesale and irreversible changes in how we live. This is the technological singularity, a concept made popular by AI maverick and Google engineering director Ray Kurzweil.
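
Stated a little more formally (the notation here is mine, not Kurzweil’s or Hawking’s), the law says that capability $x$ grows at a rate proportional to its current level,

$$\frac{dx}{dt} = kx \quad\Longrightarrow\quad x(t) = x_0\,e^{kt},$$

so capability multiplies by the same factor over every fixed interval. Strictly, exponential growth never blows up in finite time; the stronger ‘singularity’ intuition is that a technology which improves its own design increases $k$ as it goes, giving faster-than-exponential growth. For instance, under the assumption $dx/dt = kx^{2}$ the solution $x(t) = x_0/(1 - k x_0 t)$ diverges at the finite time $t = 1/(k x_0)$.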

We are already familiar with accelerating returns in the rapid development of computer power (“Moore’s law”), and Kurzweil’s vision of the singularity is actually a sort of utopian techno-rapture. But there are scarier scenarios where exponential technological growth might exceed our ability to foresee and prevent unintended consequences. Genetically modified food is an early example of this worry, but now the spotlight is on bio- and nano-technology, and – above all – AI, the engineering of artificial minds.

Moore’s law: the exponential growth in computational power since 1900.

A focus on AI might seem weird given how disappointing present-day ‘intelligent robots’ are. They can hardly vacuum your living room, let alone take over the world, and reports that the famous Turing Test for AI has been passed are greatly exaggerated. Yet AI has developed a surprising behind-the-scenes momentum. New ‘deep learning’ algorithms have been developed which, when coupled with vast amounts of data, show remarkable abilities to tackle everyday problems like speech comprehension and face recognition. As well as world-beating chess players like Deep Blue, we have Apple Siri and Google Now helping us navigate our messy and un-chesslike environments in ways that mimic our natural cognitive abilities. Huge amounts of money have followed, with Google this year paying £400M for AI start-up DeepMind in a deal which Google executive chairman Eric Schmidt heralded as enabling products that are “infinitely more intelligent”.

"Hello Dave".

What if the ability to engineer artificial minds leads to these minds engineering themselves, developing their own goals, and bootstrapping themselves beyond human understanding and control? This dystopian prospect has been mined by many sci-fi movies – think Blade Runner, HAL in 2001, Terminator, Matrix – but while sci-fi is primarily for entertainment, the accelerating developments in AI give pause for thought. Enter Hawking, who now warns that “the full development of AI could spell the end of the human race”. He joins real-world-Iron-Man Elon Musk and Oxford philosopher Nick Bostrom in declaring AI the most serious existential threat we face. (Hawking in fact used the term ‘singularity’ long ago to describe situations where the laws of physics break down, like at the centre of a black hole).

However implausible a worldwide AI revolution might seem, Holmes will tell you there is all the difference in the world between the impossible and the merely improbable. Even if highly unlikely, the seismic impact of a technological singularity is such that it deserves to be taken seriously, both in estimating and mitigating its likelihood, and in planning potential responses. Cambridge University’s new Centre for the Study of Existential Risk has been established to do just this, with Hawking and ex-Astronomer Royal Sir Martin Rees among the founders.

Dystopian eventualities aside, the singularity concept is inherently interesting because it pushes us to examine what we mean by being human (as my colleague Murray Shanahan argues in a forthcoming book). While intelligence is part of the story, being human is also about having a body and an internal physiology; we are self-sustaining flesh bags. It is also about consciousness; we are each at the centre of a subjective universe of experience. Current AI has little to say about these issues, and it is far from clear whether truly autonomous and self-driven AI is possible in their absence. The ethical minefield deepens when we realize that AIs becoming conscious would entail ethical responsibilities towards them, regardless of their impact on us.

At the moment, AI like any powerful technology has the potential for good and ill, long before any singularity is reached. On the dark side, AI gives us the tools to wreak our own havoc by distancing ourselves from the consequences of our actions. Remote controlled military drones already reduce life-and-death decisions to the click of a button: with enhanced AI there would be no need for the button. On the side of the angels, AI can make our lives healthier and happier, and our world more balanced and sustainable, by complementing our natural mental prowess with the unprecedented power of computation. The pendulum may swing from the singularity-mongers to the techno-mavens; we should listen to both, but proceed serenely with the angels.

This post is an amended version of a commissioned comment for The Guardian: Why we must not stall technological progress, despite its threat to humanity, published on December 03, 2014. It was part of a flurry of comments occasioned by a BBC interview with Stephen Hawking, which you can listen to here. I’m actually quite excited to see Eddie Redmayne’s rendition of the great physicist.

The Human Brain Project risks becoming a missed opportunity


The brain is much on our minds at the moment. David Cameron is advocating a step-change in dementia research, brain-computer interfaces promise new solutions to paralysis, and the ongoing plight of Michael Schumacher has reminded us of the terrifying consequences of traumatic brain injury. Articles in scholarly journals and in the media are decorated with magical images of the living brain, like the one shown below, to illuminate these stories. Yet, when asked, most neuroscientists will say we still know very little about how the brain works, or how to fix it when it goes wrong.

A diffusion tensor image showing some of the main pathways along which brain connections are organized.

The €1.2bn Human Brain Project (HBP) is supposed to change all this. Funded by the European Commission, the HBP brings together more than 80 research institutes in a ten-year endeavour to unravel the mysteries of the brain, and to emulate its powers in new technologies. Following examples like the Human Genome Project and the Large Hadron Collider (where Higgs’ elusive boson was finally found), the idea is that a very large investment will deliver very significant results. But now a large contingent of prominent European neuroscientists are rebelling against the HBP, claiming that its approach is doomed to fail and will undermine European neuroscience for decades to come.

Stepping back from the fuss, it’s worth thinking whether the aims of the HBP really make sense. Sequencing the genome and looking for Higgs were both major challenges, but in these cases the scientific community agreed on the objectives, and on what would constitute success. There is no similar consensus among neuroscientists.

It is often said that the adult human brain is the most complex object in the universe. It contains about 90 billion neurons and a thousand times more connections, so that if you counted one connection each second it would take about three million years to finish. The challenge for neuroscience is to understand how this vast, complex, and always changing network gives rise to our sensations, perceptions, thoughts, actions, beliefs, desires, our sense of self and of others, our emotions and moods, and all else that guides our behaviour and populates our mental life, in health and in disease. No single breakthrough could ever mark success across such a wide range of important problems.
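
As a sanity check on those figures (taking ‘a thousand times more connections’ at face value, and about $3.15\times10^{7}$ seconds in a year):

$$9\times10^{10}\ \text{neurons}\times 10^{3}\approx 9\times10^{13}\ \text{connections},\qquad \frac{9\times10^{13}\ \text{s}}{3.15\times10^{7}\ \text{s/yr}}\approx 2.9\times10^{6}\ \text{years},$$

which is indeed roughly three million years of counting.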

The central pillar of the HBP approach is to build computational simulations of the brain. Befitting the huge investment, these simulations would be of unprecedented size and detail, and would allow brain scientists to integrate their individual findings into a collective resource. What distinguishes the HBP – besides the money – is its aggressively ‘bottom up’ approach: the vision is that by taking care of the neurons, the big things – thoughts, perceptions, beliefs, and the like – will take care of themselves. As such, the HBP does not set out to test any specific hypothesis or collection of hypotheses, marking another distinction with common scientific practice.

Could this work? Certainly, modern neuroscience is generating an accelerating data deluge demanding new technologies for visualisation and analysis. This is the ‘big data’ challenge now common in many settings. It is also clear that better pictures of the brain’s wiring diagram (the ‘connectome’) will be essential as we move ahead. On the other hand, more detailed simulations don’t inevitably lead to better understanding. Strikingly, we don’t fully understand the brain of the tiny worm Caenorhabditis elegans even though it has only 302 neurons and the wiring diagram is known exactly. More generally, a key ability in science is to abstract away from the specifics to see more clearly what underlying principles are at work. In the limit, a perfectly accurate model of the brain may become as difficult to understand as the brain itself, as Borges long ago noted when describing the tragic uselessness of the perfectly detailed map.

Jorge Luis Borges at Harvard University, 1967/8

Neuroscience is, and should remain, a broad church. Understanding the brain does not reduce to simulating the collective behaviour of all its minuscule parts, however interesting a part of the final story this might become. Understanding the brain means grasping complex interactions cross-linking many different levels of description, from neurons to brain regions to individuals to societies. It means complementing bottom-up simulations with new theories describing what the brain is actually doing, when its neurons are buzzing merrily away. It means designing elegant experiments that reveal how the mind constructs its reality, without always worrying about the neuronal hardware underneath. Sometimes, it means aiming directly for new treatments for devastating neurological and psychiatric conditions like coma, paralysis, dementia, and depression.

Put this way, neuroscience has enormous potential to benefit society, well deserving of high profile and large-scale support. It would be a great shame if the Human Brain Project, through its singular emphasis on massive computer simulation, ends up as a lightning rod for dissatisfaction with ‘big science’ rather than fostering a new and powerfully productive picture of the biological basis of the mind.

This article first appeared online in The Guardian on July 8 2014.  It appeared in print in the July 9 edition, on page 30 (comment section).

Post publication notes:

The HBP leadership have published a response to the open letter here. I didn’t find it very convincing. There have been a plethora of other commentaries on the HBP, as it comes up to its first review.  I can’t provide an exhaustive list but I particularly liked Gary Marcus’ piece in the New York Times (July 11). There was also trenchant criticism in the editorial pages of Nature.  Paul Verschure has a nice TED talk addressing some of the challenges facing big data, encompassing the HBP.


The importance of being Eugene: What (not) passing the Turing test really means

Eugene Goostman, chatbot.

Could you tell the difference between a non-native-English-speaking 13-year-old Ukrainian boy and a computer program? On Saturday, at the Royal Society, one in three human judges was fooled. So, it has been widely reported, the iconic Turing Test has been passed and a brave new era of Artificial Intelligence (AI) begins.

Not so fast. While this event marks a modest improvement in the abilities of so-called ‘chatbots’ to engage fluently with humans, real AI requires much more.

Here’s what happened. At a competition held in central London, thirty judges (including politician Lord Sharkey, computer scientist Kevin Warwick, and Red Dwarf actor Robert Llewellyn) interacted with ‘Eugene Goostman’ in a series of five-minute text-only exchanges. As a result, 33% of the judges (reports do not yet say which, though tweets implicate Llewellyn) were persuaded that ‘Goostman’ was real. The other 67%  were not. It turns out that ‘Eugene Goostman’ is not a teenager from Odessa, but a computer program, a ‘chatbot’ created by computer engineers Vladimir Veselov and Eugene Demchenko. According to his creators, ‘Goostman’ was ‘born’ in 2001, owns a pet guinea pig, and has a gynaecologist father.

The Turing Test, devised by computer science pioneer and codebreaker Alan Turing, was proposed as a practical alternative to the philosophically challenging and possibly absurd question, “can machines think”. In one popular interpretation, a human judge interacts with two players – a human and a machine – and must decide which is which. A candidate machine passes the test when the judge consistently fails to distinguish the one from the other. Interactions are limited to exchanges of strings of text, to make the competition fair (more on this later; it’s also worth noting that Turing’s original idea was more complex than this, but let’s press on). While there have been many previous attempts and prior claims about passing the test, the Goostman-bot arguably outperformed its predecessors, leading Warwick to noisily proclaim “We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday”.

Alan Turing’s seminal 1950 paper

This is a major overstatement which does grave disservice to the field of AI. While Goostman may represent progress of a sort – for instance this year’s competition did not place any particular restrictions on conversation topics – some context is badly needed.

An immediate concern is that Goostman is gaming the system. By imitating a non-native speaker, the chatbot can make its clumsy English expected rather than unusual. Hence its reaction to winning the prize: “I feel about beating the Turing test in quite convenient way”. And its assumed age of thirteen lowers expectations about satisfactory responses to questions. As Veselov put it “Thirteen years old is not too old to know everything and not too young to know nothing.” While Veselov’s strategy is cunning, it also shows that the Turing test is as much a test of the judges’ abilities to make suitable inferences, and to ask probing questions, as it is of the capabilities of intelligent machinery.

More importantly, fooling 33% of judges over 5 minute sessions was never the standard intended by Alan Turing for passing his test – it was merely his prediction about how computers might fare within about 50 years of his proposal. (In this, as in much else, he was not far wrong: the original Turing test was described in 1950.) A more natural criterion, as emphasized by the cognitive scientist Stevan Harnad, is for a machine to be consistently indistinguishable from human counterparts over extended periods of time, in other words to have the generic performance capacity of a real human being. This more stringent benchmark is still a long way off.

Perhaps the most significant limitation exposed by Goostman is the assumption that ‘intelligence’ can be instantiated in the disembodied exchange of short passages of text. On one hand this restriction is needed to enable interesting comparisons between humans and machines in the first place. On the other, it simply underlines that intelligent behaviour is intimately grounded in the tight couplings and blurry boundaries separating and joining brains, bodies, and environments. If Saturday’s judges had seen Goostman, or even an advanced robotic avatar voicing its responses, there would be no question of any confusion. Indeed, robots that are today physically most similar to humans tend to elicit sensations like anxiety and revulsion, not camaraderie. This is the ‘uncanny valley’ – a term coined by robotics professor Masahiro Mori in 1970 (with a nod to Freud) and exemplified by the ‘geminoids’ built by Hiroshi Ishiguro.

Hiroshi Ishiguro and his geminoid.  Another imitation game.

A growing appreciation of the importance of embodied, embedded intelligence explains why nobody is claiming that human-like robots are among us, or are in any sense imminent. Critics of AI consistently point to the notable absence of intelligent robots capable of fluent interactions with people, or even with mugs of tea. In a recent blog post I argued that new developments in AI are increasingly motivated by the near forgotten discipline of cybernetics, which held that prediction and control were at the heart of intelligent behaviour – not barefaced imitation as in Turing’s test (and, from a different angle, in Ishiguro’s geminoids). While these emerging cybernetic-inspired approaches hold great promise (and are attracting the interest of tech giants like Google) there is still plenty to be done.

These ideas have two main implications for AI. The first is that true AI necessarily involves robotics. Intelligent systems are systems that flexibly and adaptively interact with complex, dynamic, and often social environments. Reducing intelligence to short context-free text-based conversations misses the target by a country mile. The second is that true AI should focus not only on the outcome (i.e., whether a machine or robot behaves indistinguishably from a human or other animal) but also on the process by which the outcome is attained. This is why considerable attention within AI has always been paid to understanding, and simulating, how real brains work, and how real bodies behave.

How the leopard got its spots: Turing’s chemical basis of morphogenesis.

Turing of course did much more than propose an interesting but ultimately unsatisfactory (and often misinterpreted) intelligence test. He laid the foundations for modern computer science, he saved untold lives through his prowess in code breaking, and he refused to be cowed by the deep prejudices against homosexuality prevalent in his time, losing his own life in the bargain. He was also a pioneer in theoretical biology: his work in morphogenesis showed how simple interactions could give rise to complex patterns during animal development. And he was a central figure in the emerging field of cybernetics, where he recognized the deep importance of embodied and embedded cognition. The Turing of 1950 might not recognize much of today’s technology, but he would not have been fooled by Goostman.

[postscript: while Warwick & co have been very reluctant to release the transcript of Goostman’s 2014 performance, this recent Guardian piece has some choice dialogue from 2012, where Goostman polled at 28%, not far off Saturday’s 33%. This piece was updated on June 12 following a helpful dialogue with Aaron Sloman].

The amoral molecule


The cuddle drug, the trust hormone, the moral molecule: oxytocin (OXT) has been called all these things and more. You can buy nasal sprays of the stuff online with the promise that some judicious squirting will make people trust you more. In a recent book neuroscientist-cum-economist Paul Zak goes the whole hog, saying that if we only let ourselves be guided by this “moral molecule”, prosperity and social harmony will certainly ensue.

Behind this outlandish and rather ridiculous claim lies some fascinating science. The story starts with the discovery that injecting female virgin rats with OXT triggers maternal instincts, and that these same instincts in mother rats are suppressed when OXT is blocked.  Then came the finding of different levels of OXT receptors in two closely related species of vole. The male prairie vole, having high levels, is monogamous and helps look after its little vole-lets.  Male meadow voles, with many fewer receptors, are aggressive loners who move from one female to the next without regard for their offspring. What’s more, genetically manipulating meadow voles to express OXT receptors turns them into monogamous prairie-vole-a-likes. These early rodent studies showed that OXT plays an important and previously unsuspected role in social behaviour.

Studies of oxytocin and social cognition really took off about ten years ago when Paul Zak, Ernst Fehr, and colleagues began manipulating OXT levels in human volunteers while they played a variety of economic and ‘moral’ games in the laboratory.  These studies showed that OXT, usually administered by a few intranasal puffs, could make people more trusting, generous, cooperative, and empathetic.

For example, in the so-called ‘ultimatum game’ one player (the proposer) is given £10 and offers a proportion of it to a second player (the responder) who has to decide whether or not to accept. If the responder accepts, both players get their share; if not, neither gets anything. Since these are one-off encounters, rational analysis says that the responder should accept any non-zero proposal, since something is better than nothing. In practice what happens is that offers below about £3 are often rejected, presumably because the desire to punish ‘unfair’ offers outweighs the allure of a small reward. Strikingly, a few whiffs of OXT make proposers more generous, by almost 50% in some cases. And the same thing happens in other similar situations, like the ‘trust game’: OXT seems to make people more cooperative and pro-social.
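
To spell out the payoff logic of the one-shot game, here is a minimal sketch; the threshold values are illustrative, chosen only to echo the roughly £3 rejection point mentioned above, and nothing here models OXT itself.

```python
# A toy version of the one-shot ultimatum game described above, to make the
# payoff logic explicit. The threshold values are illustrative only.

STAKE = 10.0  # pounds handed to the proposer

def ultimatum(offer: float, accept_threshold: float) -> tuple:
    """Return (proposer_payoff, responder_payoff) for a single round."""
    if offer >= accept_threshold:
        return STAKE - offer, offer  # accepted: both players are paid
    return 0.0, 0.0                  # rejected: neither player gets anything

# A purely 'rational' responder accepts any non-zero offer...
print(ultimatum(offer=1.0, accept_threshold=0.01))  # (9.0, 1.0)

# ...but real responders typically reject offers below roughly 3 pounds,
# giving up a small gain in order to punish a perceived unfair split.
print(ultimatum(offer=1.0, accept_threshold=3.0))   # (0.0, 0.0)
```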

Even more exciting are recent findings that OXT can help reduce negative experiences and promote social interactions in conditions like autism and schizophrenia. In part this could be due to OXT’s general ability to reduce anxiety, but there’s likely more to the story than this. It could also be that OXT enhances the ability to ‘read’ emotional expressions, perhaps by increasing their salience. Although clinical trials have so far been inconclusive there is at least some hope for new OXT-based pharmacological treatments (though not cures) for these sometimes devastating conditions.

These discoveries are eye-opening and apparently very hopeful. What’s not to like?

The main thing not to like is the idea that there could be such a simple relationship between socially conditioned phenomena like trust and morality and the machinations of a single molecule. The evolutionary biologist Leslie Orgel said it well with his 'third rule': "Biology is more complicated than you imagine, even when you take Orgel's third rule into account". Sure enough, the emerging scientific story says things are far from simple.

Carsten de Dreu of the University of Amsterdam has published a series of important studies showing that whether oxytocin has a prosocial or an antisocial effect seems to depend critically on whom the interactions are between. In one study, OXT was found to increase generosity within a participant's ingroup (i.e., among participants judged as similar) but to actually decrease it in interactions with outgroup members. Another study produced even more dramatic results: here, OXT infusion led volunteers to adopt more derogatory attitudes towards outgroup members, even when ingroup and outgroup composition was determined arbitrarily. OXT can even increase social conformity, as shown in a recent study in which volunteers were divided into two groups and had to judge the attractiveness of arbitrary shapes: under OXT, their ratings drifted towards those of their own group.

All this should make us deeply suspicious of claims that OXT is any kind of 'moral molecule'. So where do we go from here? A crucial next step is to try to understand how the complex interplay between OXT and behaviour is mediated by the brain. Work in this area has already begun: the research on autism, for example, has shown that OXT infusion leads autistic brains to differentiate more sharply between emotional and non-emotional stimuli. This work complements emerging social neuroscience studies showing how social stereotypes can affect even very basic perceptual processes. In one example, current studies in our lab indicate that outgroup faces (e.g., Moroccan faces for Caucasian Dutch subjects) are literally harder to see than ingroup faces.

Neuroscience has come in for a lot of recent criticism for reductionist ‘explanations’ in which complex cognitive phenomena are identified with activity in this-or-that brain region.  Following this pattern, talk of ‘moral molecules’ is, like crime in multi-storey car-parks, wrong on so many levels. There are no moral molecules, only moral people (and maybe moral societies).  But let’s not allow this kind of over-reaching to blind us to the progress being made when sufficient attention is paid to the complex hierarchical interactions linking molecules to minds.  Neuroscience is wonderfully exciting and has enormous potential for human betterment.  It’s just not the whole story.

This piece is based on a talk given at Brighton’s Catalyst Club as part of the 2014 Brighton Science Festival.

 

All watched over by search engines of loving grace

google-deepmind-artificial-intelligence

Google’s shopping spree has continued with the purchase of the British artificial intelligence (AI) start-up DeepMind, acquired for an eye-watering £400M ($650M).  This is Google’s 8th biggest acquisition in its history, and the latest in a string of purchases in AI and robotics. Boston Dynamics, an American company famous for building agile robots capable of scaling walls and running over rough terrain (see BigDog here), was mopped up in 2013. And there is no sign that Google is finished yet. Should we be excited or should we be afraid?

Probably both. AI and robotics have long promised brave new worlds of helpful robots (think Wall-E) and omniscient artificial intelligences (think HAL), which remain conspicuously absent. Undoubtedly, the combined resources of Google's in-house skills and its new acquisitions will drive progress in both these areas. Experts have accordingly fretted about military robotics and speculated about how DeepMind might help us make better lasagne. But perhaps something bigger is going on, something with roots extending back to the middle of the last century and the now largely forgotten discipline of cybernetics.

The founders of cybernetics included some of the leading lights of the age: John von Neumann (designer of the digital computer), Alan Turing, the British roboticist Grey Walter, and even figures like the psychiatrist R.D. Laing and the anthropologist Margaret Mead. They were led by the brilliant and eccentric Norbert Wiener and Warren McCulloch in the USA, and Ross Ashby in the UK. The fundamental idea of cybernetics was to treat biological systems as machines. The aim was not to build artificial intelligence per se, but rather to understand how machines could appear to have goals and act with purpose, and how complex systems could be controlled by feedback. Although the brain was the primary focus, cybernetic ideas were applied much more broadly – to economics, ecology, even management science. Yet cybernetics faded from view as the digital computer took centre stage, and it has remained hidden in the shadows ever since. Well, almost hidden.

One of the most important innovations of 1940s cybernetics was the neural network: the idea that logical operations could be implemented in networks of brain-cell-like elements wired up in particular ways. Neural networks lay dormant, like the rest of cybernetics, until being rediscovered in the 1980s as the basis of powerful new 'machine learning' algorithms capable of extracting meaningful patterns from large quantities of data. DeepMind's technologies are based on just these principles, and indeed some of their algorithms originate in the pioneering neural network research of Geoffrey Hinton (another Brit), whose company DNN Research was also recently bought by Google and who is now a Google Distinguished Researcher.
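
To see the original idea in miniature, here is a small Python sketch of McCulloch-Pitts-style threshold units wired into logic gates. The weights and thresholds are standard textbook choices, offered purely as an illustration; they have nothing to do with DeepMind's or Hinton's actual code.

```python
# McCulloch-Pitts style units: a 'neuron' fires (outputs 1) when the
# weighted sum of its inputs reaches its threshold. Wiring a few such
# units together implements logical operations.

def unit(inputs, weights, threshold):
    """A brain-cell-like threshold element."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0


def AND(a, b):
    return unit([a, b], weights=[1, 1], threshold=2)


def OR(a, b):
    return unit([a, b], weights=[1, 1], threshold=1)


def NOT(a):
    return unit([a], weights=[-1], threshold=0)


def XOR(a, b):
    # No single unit can compute XOR; a small *network* of units can,
    # which is the historical point about the wiring mattering.
    return AND(OR(a, b), NOT(AND(a, b)))


if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"{a} {b} -> AND={AND(a, b)} OR={OR(a, b)} XOR={XOR(a, b)}")
```

Modern 'deep' networks replace these hand-set weights and thresholds with values learned from data, but the idea of wiring simple elements into something more capable is the same.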

What sets Hinton and DeepMind apart is that their algorithms reflect an increasingly prominent theory about brain function. (DeepMind's founder, the ex-chess-prodigy and computer games maestro Demis Hassabis, set up his company shortly after taking a Ph.D. in cognitive neuroscience.) This theory, which came from cybernetics, says that the brain's neural networks achieve perception, learning, and behaviour through repeated application of a single principle: predictive control. Put simply, the brain learns about the statistics of its sensory inputs, and about how these statistics change in response to its own actions. In this way, the brain can build a model of its world (which includes its own body) and figure out how to control its environment in order to achieve specific goals. What's more, exactly the same principle can be used to develop robust and agile robots, as seen in BigDog and its friends.
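
As a toy illustration of predictive control (and only that: the one-dimensional 'world', single unknown gain, and simple error-driven update are my simplifying assumptions, not anything from DeepMind's algorithms), the sketch below has an agent learn how its sensor responds to its actions and then use that learned model to steer the sensor towards a goal.

```python
# Toy predictive control: learn a forward model of how actions change a
# sensory reading, then act so that the predicted reading hits a goal.
import random

TRUE_GAIN = 0.7  # hidden property of the world: each unit of action shifts the sensor by 0.7
GOAL = 5.0       # the sensory state the agent wants to bring about


def step_world(sense, action):
    """The real (unknown-to-the-agent) sensory consequence of acting."""
    return sense + TRUE_GAIN * action + random.gauss(0, 0.05)


def run(n_steps=50, learning_rate=0.3):
    sense, gain_estimate = 0.0, 0.2  # start with a poor model of the world
    for t in range(n_steps):
        # Act: choose the action the current model predicts will reach the goal.
        action = (GOAL - sense) / max(gain_estimate, 0.05)
        predicted = sense + gain_estimate * action
        sense = step_world(sense, action)

        # Learn: nudge the model so as to shrink the prediction error.
        prediction_error = sense - predicted
        gain_estimate += learning_rate * prediction_error * action / (1.0 + action * action)

        if t % 10 == 0:
            print(f"t={t:2d}  sense={sense:7.2f}  model gain={gain_estimate:.2f}")


if __name__ == "__main__":
    run()
```

Within a few dozen steps the model's estimate of the gain approaches the true value and the sensory reading settles near the goal: prediction and control flowing from the same learned model, which is the cybernetic point.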

Put all this together and the cybernetic ideal resurfaces: exploiting deep similarities between biological entities and machines. These similarities go far beyond superficial (and faulty) assertions that brains are computers; rather, they reflect the recognition that prediction and control lie at the very heart of both effective technologies and successful biological systems. This means that Google's activity in AI and robotics should not be considered separately, but instead as part of a larger view of how technology and nature interact: Google's deep mind has deep roots.

What might this mean for you and me? Many of the original cyberneticians held out a utopian prospect of a new harmony between people and computers, well captured by Richard Brautigan's 1967 poem – All Watched Over By Machines of Loving Grace – and recently re-examined in Adam Curtis' powerful though breathless documentary of the same name. As Curtis argued, these original cybernetic dreams were dashed against the complex realities of the real world. Will things be different now that Google is in charge? One thing that is certain is that the simple idea of a 'search engine' will seem increasingly antiquated. As the data deluge of our modern world accelerates, the concept of 'search' will become inseparable from ideas of prediction and control. This really is both scary and exciting.

The limpid subtle peace of the ecstatic brain

In Dostoevsky's "The Idiot", Prince Myshkin experiences repeated epileptic seizures accompanied by "an incredible hitherto unsuspected feeling of bliss and appeasement", so that "All my problems, doubts and worries resolved themselves in a limpid subtle peace, with a feeling of understanding and awareness of the 'Supreme Principle of life'". Such 'ecstatic epileptic seizures' have been described many times since (usually with less lyricism), but only now is the brain basis of these supremely meaningful experiences becoming clear, thanks to remarkable new studies by Fabienne Picard and her colleagues at the University of Geneva.

Ecstatic seizures, besides being highly pleasurable, involve a constellation of other symptoms, including an increased vividness of sensory perception, heightened feelings of self-awareness – of being "present" in the world – a feeling of time standing still, and an apparent clarity of mind in which all things seem suddenly to make perfect sense. For some people this clarity involves a realization that a 'higher power' (or Supreme Principle) is responsible, though for atheists such beliefs usually recede once the seizure has passed.

In the brain, epilepsy is an electrical storm. Waves of synchronized electrical activity spread through the cortex, usually emanating from one or more specific regions where the local neural wiring may have gone awry.  While epilepsy can often be treated by medicines, in some instances surgery to remove the offending chunk of brain tissue is the only option. In these cases it is now becoming common to insert electrodes directly into the brains of surgical candidates, to better localize the ‘epileptic focus’ and to check that its removal would not cause severe impairments, like the loss of language or movement.  And herein lie some remarkable new opportunities.

Recently, Dr. Picard used just this method to record brain activity from a 23-year-old woman who has experienced ecstatic seizures since the age of 12. Picard found that her seizures involved electrical brain-storms centred on a particular region called the ‘anterior insula cortex’.  The key new finding was that electrical stimulation of this region, using the same electrodes, directly elicited ecstatic feelings – the first time this has been seen. These new data provide important support for previous brain-imaging studies which have shown increased blood flow to the anterior insula in other patients during similar episodes.

The anterior insula (named from the Latin for 'island') is a particularly fascinating lump of brain tissue. We have long known that it is involved in how we perceive the internal state of our body, and that these perceptions underlie emotional experiences. More recent evidence suggests that the subjective sensation of the passing of time depends on insular activity. It also seems to be the place where perceptions of the outside world are integrated with perceptions of our body, perhaps supporting basic forms of self-consciousness and underpinning how we experience our relation to the world. Strikingly, abnormal activity of the insula is associated with pathological anxiety (the opposite of ecstatic 'certainty') and with symptoms of depersonalization and derealisation, in which the self and world are drained of subjective reality (the opposite of ecstatic perceptual vividness and enhanced self-awareness). Anatomically, the anterior insula is among the most highly developed brain regions in humans when compared to other animals, and it even houses a special kind of 'Von Economo' neuron. These and other findings are motivating new research, including experiments here at the Sackler Centre for Consciousness Science, which aim to further illuminate the role of the insula in weaving the fabric of our experienced self. The finding that electrical stimulation of the insula can lead to ecstatic experiences and enhanced self-awareness is an important advance in this direction.

Picard’s work brings renewed scientific attention to the richness of human experience, the positive as well as the negative, the spiritual as well as the mundane. The finding that ecstatic experiences can be induced by direct brain stimulation may seem both fascinating and troubling, but taking a scientific approach does not imply reducing these phenomena to the buzzing of neurons. Quite the opposite: our sense of wonder should be increased by perceiving connections between the peaks and troughs of our emotional lives and the intricate neural conversations on which they, at least partly, depend.