Tracing the edges of consciousness


As a scientist, consciousness has always fascinated me. But understanding consciousness is not a project for science alone. Throughout history, philosophers, artists, storytellers, and musicians have all wondered about the apparent miracle of conscious awareness. Even today, while science might give us our best shot at figuring out the brain – the organ of experience – we need, more than ever, a melding of the arts and sciences, of contemporary and historical approaches, to understand what consciousness really is, to grasp what we mean by, as Mark Haddon eloquently puts it, “Life in the first person.”

This quote comes from Haddon’s beautiful introductory essay to a major new exhibition at the Wellcome Collection in London. Curated by Emily Sargent, States of Mind: Tracing the edges of consciousness “examines perspectives from artists, psychologists, philosophers and neuroscientists to interrogate our understanding of the conscious experience”. It’s a fantastic exhibition, with style and substance, and I feel very fortunate to have been involved as an advisor from its early stages.

What’s so special about consciousness?

Consciousness is at once the most familiar and the most mysterious aspect of our existence. Conscious experiences define our lives, but the private, subjective ‘what-it-is-likeness’ of these experiences seems to resist scientific enquiry. Somehow, within each of our brains the combined activity of many billions of neurons, each one a tiny biological machine, is giving rise to a conscious experience. Your conscious experience: right here, right now, reading these words. How does this happen? Why is life in the first person?

In one sense, this seems like the kind of mystery ripe for explanation. Borrowing again from Mark Haddon, the raw material of consciousness is not squirreled away deep inside an atom, it’s not happening 14 billion years ago, and it’s not hiding out on the other side of the universe. It’s right here in front of – or rather behind – our eyes. That said, the brain is a remarkably complex object. It’s not so much the sheer number of neurons (though there are about 90 billion). It’s the complexity of its wiring: there are so many connections that, if you counted one every second, it would take you 3 million years to finish. Is it not possible that an object of such extraordinary complexity should be capable of extraordinary things?

People have been thinking about consciousness since they’ve been thinking at all. Hippocrates, the founder of modern medicine, said: “Men ought to know that from the brain, and from the brain only, arise our pleasures, joys, laughter and jests, as well as our sorrows, pains, griefs and tears … Madness comes from its moistness.” (Aristotle, by the way, got it wrong, thinking the brain hadn’t much to do with consciousness at all.)

Fast forward to Francis Crick, whose ‘astonishing hypothesis’ in the 1990s deliberately echoed Hippocrates: “You, your joys and your sorrows, your memories and your ambitions … and so on … are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules”. Crick, who I was lucky enough to meet during my time in America, was working on the neurobiology of consciousness even on the day he died. You will see some of his personal notes, and his perplexing plasticine brain models, in States of Mind.


Descartes: view of posterior of brain, from De Homine. Wellcome Collection

A major landmark in thinking about consciousness is of course Descartes, who in the 17th century distinguished between “mind stuff” (res cogitans) and “matter stuff” (res extensa), so giving rise to the now infamous mind-body problem and the philosophy of dualism. It’s a great thrill to see an original copy of Descartes’ De Homine as part of this exhibition. Its modern incarnation as David Chalmers’ so-called ‘hard problem’ has recently gained enough cultural notoriety even to inspire a Tom Stoppard play (though for my money Alex Garland’s screenplay for Ex Machina is the more perspicuous). The idea of the hard problem is this: even if we knew everything about how the operations of the brain give rise to perception, cognition, learning, and behaviour, a problem would still remain: why and how should any of this be associated with consciousness at all? Why is life in the first person?

Defining consciousness

How to define consciousness? One simple definition is that for a conscious organism there is something it is like to be that organism. Or, one can simply say that consciousness is what disappears when we fall into a dreamless sleep, and what returns when we wake up or start dreaming. A bit more formally, for conscious organisms there exists a continuous (though interruptible) stream of conscious scenes – a phenomenal world – which has the character of being subjective and private. The material in States of Mind can help us encounter these ideas with a bit more clarity and force, by focusing on the edges – the liminal boundaries – of consciousness.

First there is conscious level: the difference between being awake and, let’s say, under general anaesthesia. Here, neuroscience now tells us that there is no single ‘generator’ of consciousness in the brain; rather, being conscious depends on highly specific ways in which different parts of the brain speak to each other. Aya Ben Ron’s film of patients slipping away under anaesthesia is a beautiful exploration of this process, as is the whole section on ‘SLEEP | AWAKE’.

Then there is conscious content: what we are conscious of, when we are conscious. These are the perceptions, thoughts, and emotions that populate our ever-flowing stream of awareness. Here, current research is revealing that our perceptual world is not simply an internal picture of some external reality. Rather, conscious perception depends on the brain’s best guesses, or hypotheses, about the causes of sensory data. Perception is therefore a continuously creative act that is tightly bound up with imagination, so that our experience of the world is a kind of ‘controlled hallucination’, a fantasy that – usually, but not always – coincides with reality. The material on synaesthesia in States of Mind beautifully illuminates this process by showing how, for some of us, these perceptual fantasies can be very different – that we all have our own distinctive inner universes. You can even try training yourself to become a ‘synaesthete’ with a demo of some of our own research, developed for this exhibition. Many thanks to Dr. David Schwartzman of the Sackler Centre for making this happen.


Alphabet in Colour: Illustrating Vladimir Nabokov’s grapheme-colour synaesthesia, by Jean Holabird.

Finally there is conscious self – the specific experience of being me, or being you. While this might seem easy to take for granted, the experience of being a self requires explanation just as much as any other kind of experience. It too has its edges, its border regions. Here, research is revealing that conscious selfhood, though experienced as unified, can come apart in many different ways. For example, our experience of being and having a particular body can dissociate from our experience of being a person with a name and a specific set of memories. Conscious selfhood, like all conscious perception, is therefore another controlled hallucination maintained by the brain. The section BEING | NOT BEING dramatically explores some of these issues, for example by looking at amnesia with Shona Illingworth, and with Adrian Owen’s seminal work on the possibility of consciousness even after severe brain injury.

This last example brings up an important point. Besides the allure of basic science, there are urgent practical motivations for studying consciousness. Neurological and psychiatric disorders are increasingly common and can often be understood as disturbances of conscious experience. Consciousness science promises new approaches and perhaps new treatments for these deeply destructive problems. Scoping out further boundary areas, studying the biology of consciousness can shed new light on awareness in newborn infants and in non-human animals, informing ethical debates in these areas. Above all, consciousness science carries the promise of understanding more about our place in nature. Following the tradition of Copernicus and Darwin, a biological account of conscious experience will help us see ourselves as part of, not apart from, the rest of the universe.


Santiago Ramon y Cajal, distinguishing the reticular theory (left) from the neuron doctrine (right).  From the Instituto Cajal, Madrid.

Let’s finish by returning to this brilliant exhibition, States of Mind. What I found most remarkable are the objects that Emily Sargent has collected together. Whether it’s Descartes’ De Homine, Ramon y Cajal’s delicate ink drawings of neurons, or Francis Crick’s notebooks and models, these objects bring home and render tangible the creativity and imagination which people have brought to bear in their struggle to understand consciousness, over hundreds of years. For me, this brings a new appreciation and wonder to our modern attempts to tackle this basic mystery of life. Emily Dickinson, my favourite poet of neuroscience, put it like this: “The brain is wider than the sky, for – put them side by side – the one the other will contain, with ease, and you – beside.”

States of Mind is at the Wellcome Collection in London from Feb 4th until October 16th 2016 and is curated by Emily Sargent. Sackler Centre researchers, in particular David Schwartzman and myself, helped out as scientific advisors. This text is lightly adapted from a speech I gave at the opening event on Feb 3rd 2016. Watch this space, and visit the exhibition website, for news about special events on consciousness that will happen throughout the year.

States of Mind at the Wellcome Collection


YellowPinkBlue by Ann Veronica Janssens

From October 2015 until October 2016 the Wellcome Collection in London is curating an exhibition called States of Mind: Tracing the Edges of Consciousness. It has been launched with a brilliant piece of installation art by Ann Veronica Janssens (until 3rd Jan 2016). In YellowPinkBlue the entire gallery space is invaded by coloured mist, to focus attention on the process of perception itself so that one becomes subsumed by the experience of seeing. I’m excited to be contributing in various ways to States of Mind, via the Sackler Centre (more on that soon). To start with, here is the text I wrote for Janssens’s remarkable piece.

What in the world is consciousness?

Right now an apparent miracle is unfolding. Within your brain, the electrochemical activity of many billions of richly interconnected brain cells – each one a tiny biological machine – is giving rise to a conscious experience. Your conscious experience: right here, right now, reading these words.

It is all too easy to go about our daily lives, having conscious experiences, without appreciating how remarkable it is that we have these experiences at all. Ann Veronica Janssens’s piece returns us to the sheer wonder of being conscious. By stripping away many of the features that permeate our normal conscious lives, the raw fact of experiencing is given renewed emphasis.

People have wondered about consciousness since they’ve wondered about anything. Hippocrates, the Greek founder of modern medicine, rightly identified the brain as the organ of experience (though Aristotle didn’t agree). In the 17th century, Descartes divided the universe into ‘mind stuff’ (res cogitans) and ‘matter stuff’ (res extensa), giving birth to the philosophy of dualism and the confounding ‘mind–body’ problem of how the two relate. In the 19th century, when psychology first emerged as a science, understanding consciousness was its primary objective. Though largely sidelined during the 20th century, the challenge of revealing the biological basis of consciousness is now firmly re-established for our times.

Janssens’s piece reminds us of the important distinction in science between being conscious at all (conscious level: the difference between being awake and being in a dreamless sleep or under anaesthesia) and what we are conscious of (conscious content: the perceptions, thoughts and emotions that populate our conscious mind). There is also conscious selfhood – the specific experience of being me (or you). Each of these aspects of consciousness can be traced to specific mechanisms in the brain that neuroscientists, in cahoots with researchers from many other disciplines, are now starting to unravel. There are many exciting ideas in play, ranging from the dependence of conscious level on how different parts of the brain speak to each other, to understanding conscious content as determined by the brain’s ‘best guess’ of the causes of ambiguous and noisy sensory signals. Crucially, these ideas have allowed consciousness science to progress from the philosopher’s armchair to the research laboratory.

Besides the allure of basic science, there are important practical motivations for studying consciousness. Neurological and psychiatric disorders are increasingly common and can often be framed as disturbances of conscious experience. Consciousness science promises new approaches and perhaps new treatments for these scourges of modern society. New theories and experiments can also shed light on consciousness in newborns and in non-human animals, adding critical information to important ethical debates in these areas. But above all, consciousness science carries the promise of understanding more about our place in nature. Following Darwin and Copernicus, a biological account of conscious experience will help us see ourselves as part of, not apart from, the rest of the universe.

Anil Seth, Professor of Cognitive and Computational Neuroscience
Co-Director, Sackler Centre for Consciousness Science, University of Sussex

A first draft of a digital brain: The Human Brain Project’s new simulation


Today, Henry Markram and colleagues have released one of the first of a raft of substantial new results emerging from the controversial Human Brain Project (HBP). The paper, Reconstruction and Simulation of Neocortical Microcircuitry, appears in the journal Cell.

As one of the first concrete outputs emerging from this billion-euro endeavour this had to be a substantial piece of work, and it is. The paper describes a digital reconstruction of ~31,000 neurons (with ~8 million connections and ~37 million synapses) of a tiny part of the somatosensory cortex of the juvenile rat brain. What is unique about this simulation is not the number of neurons (31,000 is pretty modest by today’s standards), but the additional detail included. Simulated neurons are given specific morphological, chemical, and electrical characteristics, and are precisely positioned in 3D space so that they form biologically realistic connections. This level of detail is at the heart of the HBP strategy, and it underlies the claim that the simulation is a ‘reconstruction’ of neural tissue, not just an abstract model of neuronal connectivity.
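The HBP reconstruction itself is of course far beyond anything that fits in a blog post, but the basic idea of simulating a network of spiking neurons can be sketched with a toy leaky integrate-and-fire model. Everything below is illustrative (random weights, made-up parameters), not the morphologically detailed model from the paper:

```python
import numpy as np

# Toy leaky integrate-and-fire network: illustrative parameters only,
# nothing like the HBP's detailed reconstruction.
rng = np.random.default_rng(0)
n_neurons, n_steps = 100, 1000
dt, tau = 1.0, 20.0                      # timestep and membrane constant (ms)
v_rest, v_th, v_reset = -65.0, -50.0, -65.0
weights = rng.normal(0.0, 0.5, (n_neurons, n_neurons))
np.fill_diagonal(weights, 0.0)           # no self-connections

v = np.full(n_neurons, v_rest)
spiked = np.zeros(n_neurons, dtype=bool)
total_spikes = 0
for _ in range(n_steps):
    drive = rng.normal(16.0, 4.0, n_neurons)   # noisy external input (mV)
    synaptic = weights @ spiked                # input from last step's spikes
    v += (dt / tau) * (v_rest - v + drive + synaptic)
    spiked = v >= v_th                         # threshold crossing
    total_spikes += int(spiked.sum())
    v[spiked] = v_reset                        # reset after a spike

mean_rate_hz = total_spikes / n_neurons / (n_steps * dt / 1000.0)
print(f"mean firing rate: {mean_rate_hz:.1f} Hz")
```

The point of the contrast: a sketch like this treats every neuron as an identical point; the HBP model gives each simulated neuron its own morphology, chemistry, and position in 3D space, which is exactly what makes it a ‘reconstruction’ rather than an abstract network.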

So how good is it? Certainly, the simulation detail is extremely impressive, as is the wealth of experimental data that is accounted for. Particularly striking is the ability both to predict general features of neocortical dynamics – like the existence of ‘soloist’ and ‘chorister’ neurons – and to inspire specific new experiments that further validated the simulation. It is also promising that Markram & co managed to interpolate their sparse experimental data in order to fully specify the model, without losing the fidelity of the model to the real ‘target’ system.

The authors admit this is a ‘first step’ and the results are certainly intriguing. But the real question is whether the aggressively ‘bottom up’ approach of the HBP will, by itself, yield the transformational understanding of neuroscience that it has promised. Modelling work in science – whether computational or mathematical – is about finding the right level of abstraction to best explore and understand some natural principle, or test some specific hypothesis.  A model that relies on incorporating as much detail as possible could lead to a simulation that is almost as hard to understand as the target system. Jorge Luis Borges long ago noted the tragic uselessness of the perfectly detailed map in his short story ‘On Exactitude in Science’.

For this reason alone, it’s hard to be confident that the HBP approach — impressive as it is at the level of a tiny volume of immature cortex — will scale up to deliver real insights about how brains, bodies and environments mesh together in generating complex adaptive behaviour (and perception, and thought, and consciousness). On the other hand, as detailed as the current simulation is, it still neglects very basic and undoubtedly important aspects of the brain – including glial cells, vasculature, receptors, and the like. This goes to show that even the most detailed simulation models still have to make abstractions. In the present model, decisions about what is included and excluded seem to be made more according to practical criteria (what is possible?) than theoretically principled criteria (what are we trying to explain with this model?).

Can the HBP be extended both downwards (to encompass the so-far excluded but potentially critical details of neuronal microstructure) and upwards (to a whole brain and organism level, including sensorimotor interactions with bodies and environments)? The jury is still out. So let’s applaud this Herculean effort to simulate a tiny part of a tiny brain, but let’s also keep in mind that the HBP won’t solve neuroscience all by itself, and only time will tell whether it will play a significant role in unravelling the properties of the most complex object in the known universe.

The original article is here: Markram et al (2015). Cell 163:1-37.
Some of the above comments appear in a New Scientist commentary by Jessica Hamzelou, published 08/10/2015:  Digital version of piece of rat brain fires like the real thing.

Brain Twisters

Brain Twisters, the follow-up to the Royal Society Prize-winning ‘EyeBenders’, is now out! Another co-production with author Clive Gifford.

Here’s the blurb: Trick your senses and baffle your brain with this crazy book of mind tricks and neuroscience information. Find out how magicians make use of “inattentional blindness” when doing magic tricks, and why you miss details that are hidden in plain sight. Discover why your memory isn’t as good as you think, and how it’s possible to remember things that never actually happened. This astonishing science book presents a wide range of brain games and mind tricks, and explains how these reveal the working processes of the brain. It will engage and entertain, and leave you wondering: do you really know your own mind?

I really enjoyed working on this book with Clive. It’s more ambitious than EyeBenders – taking on the whole of the brain rather than just optical illusions. But I think the end result works brilliantly – though let’s see what the kids think!

Can we figure out the brain’s wiring diagram?


The human brain, it is often said, is the most complex object in the known universe. Counting all the connections among its roughly 90 billion neurons, at the rate of one each second, would take about 3 million years – and just counting these connections says nothing about their intricate patterns of connectivity. A new study, published this week in Proceedings of the National Academy of Sciences USA, shows that mapping out these patterns is likely to be much more difficult than previously thought — but also shows what we need to do, to succeed.
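That back-of-the-envelope figure is easy to check. Assuming a round 100 trillion connections (a common textbook estimate, not a number from the study), counting one per second gives roughly three million years:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60         # ≈ 31.5 million seconds

connections = 1e14                             # ~100 trillion synapses (rough estimate)
years = connections / SECONDS_PER_YEAR         # counting one connection per second
print(f"about {years / 1e6:.1f} million years")   # → about 3.2 million years
```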

Characterizing the detailed point-to-point connectivity of the brain is increasingly recognized as a key objective for neuroscience. Many even think that without knowing the ‘connectome’ – the brain’s wiring diagram – we will never understand how its electrochemical alchemy gives rise to our thoughts, actions, perceptions, beliefs, and ultimately to our consciousness. There is a good precedent for thinking along these lines. Biology has been galvanized by sequencing of the genome (of humans and of other species), and genetic medicine is gathering pace as whole-genome sequencing becomes fast and cheap enough to be available to the many, not just the few. Big-science big-money projects like the Human Genome Project were critical to these developments. Similar efforts in brain science – like the Human Connectome Project in the US and the Human Brain Project in Europe – are now receiving vast amounts of funding (though not without criticism, especially in the European case). The hope is that the genetic revolution can be replicated in neuroscience, delivering step changes in our understanding of the brain and in our ability to treat neurological and psychiatric disorders.

Mapping the networks of the human brain relies on non-invasive neuroimaging methods that can be applied without risk to living people. These methods almost exclusively depend on ‘diffusion magnetic resonance imaging (dMRI) tractography’. This technology measures, for each location (or ‘voxel’) in the brain, the direction in which water is best able to diffuse. Taking advantage of the fact that water diffuses more easily along the fibre bundles connecting different brain regions than across them, dMRI tractography has been able to generate accurate, informative, and surprisingly beautiful pictures of the major superhighways in the brain.
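At its simplest, streamline tractography just follows the locally preferred diffusion direction, step by step, from a seed point. A deliberately minimal 2-D sketch (with a made-up direction field standing in for real dMRI data) shows the core loop:

```python
import numpy as np

# Hypothetical 2-D "principal diffusion direction" field: a uniform
# left-to-right fibre bundle on a 20x20 voxel grid (made-up data).
grid = 20
directions = np.zeros((grid, grid, 2))
directions[..., 0] = 1.0                  # every voxel's fibre runs along x

def track(seed, directions, step=0.5, max_steps=200):
    """Follow the local principal direction from a seed point."""
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    for _ in range(max_steps):
        i, j = int(pos[0]), int(pos[1])
        if not (0 <= i < grid and 0 <= j < grid):
            break                         # streamline left the volume
        pos = pos + step * directions[i, j]
        path.append(pos.copy())
    return np.array(path)

streamline = track((0.5, 10.0), directions)
print(len(streamline), streamline[-1])    # crosses the grid along x
```

Real tractography works in 3D, interpolates a full diffusion tensor or orientation distribution per voxel, and stops on anisotropy and curvature criteria, but the failure mode discussed below is already visible in this toy: the tracker can only go where the local directions lead it.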


Diffusion MRI of the human brain. Source: Human Connectome Project.

But identifying these neuronal superhighways is only a step towards the connectome. Think of a road atlas: knowing only about motorways may tell you how cities are connected, but it’s not going to tell you how to get from one particular house to another. The assumption in neuroscience has been that as brain scanning improves in resolution and as tracking algorithms gain sophistication, dMRI tractography will be able to reveal the point-to-point long-range anatomical connectivity needed to construct the full connectome.

In a study published this week we challenge this assumption, showing that basic features of brain anatomy pose severe obstacles to measuring cortical connectivity using dMRI. The study, a collaboration between the University of Sussex in the UK and the National Institutes of Health (NIH) in the US, applied dMRI tractography to ultra-high resolution dMRI data obtained from extensive scanning of the macaque monkey brain – data of much higher quality than can be presently obtained from human studies. Our analysis, led by Profs. Frank Ye and David Leopold of NIH and Ph.D student Colin Reveley of Sussex, took a large number of starting points (‘seed voxels’) in the brain, and investigated which other parts of the brain could be reached using dMRI tractography.

The result: roughly half of the brain could not be reached, meaning that even our best methods for mapping the connectome aren’t up to the job. What’s more, by looking carefully at the actual brain tissue where tractography failed, we were able to figure out why. Lying just beneath many of the deep valleys in the brain (the ‘sulci’ – but in some other places too), are dense weaves of neuronal fibres (‘white matter’) running largely parallel to the cortical surface. The existence of these ‘superficial white matter fibre systems’, as we call them, prevents the tractography algorithms from detecting where small tributaries leave the main neuronal superhighways, cross into the cortical grey matter, and reach their destinations. Back to the roads: imagine that small minor roads occasionally leave the main motorways, which are flanked by other major roads busy with heavy traffic. If we tried to construct a detailed road atlas by measuring the flow of vehicles, we might well miss these small but critical branching points.


This image shows, on a colour scale, the ‘reachability’ of different parts of the brain by diffusion tractography.

Identifying the connectome remains a central objective for neuroscience, and non-invasive brain imaging – especially dMRI – is a powerful technology that is improving all the time. But a comprehensive and accurate map of brain connectivity is going to require more than simply ramping up scanning resolution and computational oomph, a message that mega-budget neuroscience might usefully heed. This is not bad news for brain research. Solving a problem always requires fully understanding what the problem is, and our findings open new opportunities and objectives for studies of brain connectivity. Still, it goes to show that the most complex object in the universe is not quite ready to give up all its secrets.

Colin Reveley, Anil K. Seth, Carlo Pierpaoli, Afonso C. Silva, David Yu, Richard C. Saunders, David A. Leopold*, and Frank Q. Ye (2015). Superficial white-matter fiber systems impede detection of long-range cortical connections in diffusion MR tractography. Proc. Natl. Acad. Sci. USA. doi:10.1073/pnas.1418198112

*David A. Leopold is the corresponding author.

Ex Machina: A shot in the arm for smart sci-fi


Alicia Vikander as Ava in Alex Garland’s Ex Machina

IT’S a rare thing to see a movie about science that takes no prisoners intellectually. Alex Garland’s Ex Machina is just that: a stylish, spare and cerebral psycho-techno-thriller, which gives a much-needed shot in the arm for smart science fiction.

Reclusive billionaire genius Nathan, played by Oscar Isaac, creates Ava, an intelligent and very attractive robot played by Alicia Vikander. He then struggles with the philosophical and ethical dilemmas his creation poses, while all hell breaks loose. Many twists and turns add nuance to the plot, which centres on the evolving relationships between the balletic Ava and Caleb (Domhnall Gleeson), a hotshot programmer invited by Nathan to be the “human component in a Turing test”, and between Caleb and Nathan, as Ava’s extraordinary capabilities become increasingly apparent.

Everything about this movie is good. Compelling acting (with only three speaking parts), exquisite photography and set design, immaculate special effects, a subtle score and, above all, a hugely imaginative screenplay combine under Garland’s precise direction to deliver a cinematic experience that grabs you and never lets go.

The best science fiction often tackles the oldest questions. At the heart of Ex Machina is one of our toughest intellectual knots, that of artificial consciousness. Is it possible to build a machine that is not only intelligent but also sentient: that has consciousness, not only of the world but also of its own self? Can we construct a modern-day Golem, that lumpen being of Jewish folklore which is shaped from unformed matter and can both serve humankind and turn against it? And if we could, what would happen to us?


In Jewish folklore, the Golem is an animate being shaped from unformed matter.

Putting aside the tedious business of actually building a conscious AI, we face the challenge of figuring out whether the attempt succeeds. The standard reference for this sort of question is Alan Turing’s eponymous test, in which a human judge interrogates both a candidate machine and another human. A machine passes the test when the judge consistently fails to distinguish between them.

While the Turing test has provided a trope for many AI-inspired movies (such as Spike Jonze’s excellent Her), Ex Machina takes things much further. In a sparkling exchange between Caleb and Nathan, Garland nails the weakness of Turing’s version of the test, a focus on the disembodied exchange of messages, and proposes something far more interesting. “The challenge is to show you that she’s a robot. And see if you still feel she has consciousness,” Nathan says to Caleb.

This shifts the goalposts in a vital way. What matters is not whether Ava is a machine. It is not even whether Ava, even though a machine, can be conscious. What matters is whether Ava makes a conscious person feel that Ava is conscious. The brilliance of Ex Machina is that it reveals the Turing test for what it really is: a test of the human, not of the machine. And Garland is not necessarily on our side.


Nathan (Oscar Isaac) and Caleb (Domhnall Gleeson) discuss deep matters of AI

Is consciousness a matter of social consensus? Is it more relevant whether people believe (or feel) that something (or someone) is conscious than whether it is in fact actually conscious? Or, does something being “actually conscious” rest on other people’s beliefs about it being conscious, or on its own beliefs about its consciousness (beliefs that may themselves depend on how it interprets others’ beliefs about it)? And exactly what is the difference between believing and feeling in situations like this?

It seems to me that my consciousness, here and now, is not a matter of social consensus or of my simply believing or feeling that I am conscious. It seems to me, simply, that I am conscious here and now. When I wake up and smell the coffee, there is a real experience of coffee-smelling going on.

But let me channel Ludwig Wittgenstein, one of the greatest philosophers of the 20th century, for a moment. What would it seem like if it seemed to me that my being conscious were a matter of social consensus or beliefs or feelings about my own conscious status? Is what it “seems like” to me relevant at all when deciding how consciousness comes about or what has consciousness?

Before vanishing completely into a philosophical rabbit hole, it is worth saying that questions like these are driving much influential current research on consciousness. Philosophers and scientists like Daniel Dennett, David Rosenthal and Michael Graziano defend, in various ways, the idea that consciousness is somehow illusory and what we really mean in saying we are conscious is that we have certain beliefs about mental states, that these states have distinctive functional properties, or that they are involved in specific sorts of attention.

Another theoretical approach accepts that conscious experience is real and sees the problem as one of determining its physical or biological mechanism. Some leading neuroscientists such as Giulio Tononi, and recently, Christof Koch, take consciousness to be a fundamental property, much like mass-energy and electrical charge, that is expressed through localised concentrations of “integrated information”. And others, like philosopher John Searle, believe that consciousness is an essentially biological property that emerges in some systems but not in others, for reasons as-yet unknown.

In the film we hear about Searle’s Chinese Room thought experiment. His premise was that researchers had managed to build a computer programmed in English that can respond to written Chinese with written Chinese so convincingly it easily passes the Turing test, persuading a human Chinese speaker that the program understands and speaks Chinese. Does the machine really “understand” Chinese (Searle called this “strong AI”) or is it only simulating the ability (“weak” AI)? There is also a nod to the notional “Mary”, the scientist who, while knowing everything about the physics and biology of colour vision, has only ever experienced black, white and shades of grey. What happens when she sees a red object for the first time? Will she learn anything new? Does consciousness exceed the realms of knowledge?

All of the above illustrates how academically savvy and intellectually provocative Ex Machina is. Hat-tips here to Murray Shanahan, professor of cognitive robotics at Imperial College London, and writer and geneticist Adam Rutherford, whom Garland did well to enlist as science advisers.

Not every scene invites deep philosophy of mind, with the film encompassing everything from ethics, the technological singularity, Ghostbusters and social media to the erosion of privacy, feminism and sexual politics within its subtle scope. But when it comes to riffing on the possibilities and mysteries of brain, mind and consciousness, Ex Machina doesn’t miss a trick.

As a scientist, it is easy to moan when films don’t stack up against reality, but there is usually little to be gained from nitpicking over inaccuracies and narrative inventions. Such criticisms can seem petty, reinforcing the stereotype of scientists as humourless gatekeepers of facts and hoarders of equations. But these complaints sometimes express a sense of missed opportunity rather than injustice – a sense that intellectual riches could have been exploited, not sidelined, in making a good movie. AI, neuroscience and consciousness are among the most vibrant and fascinating areas of contemporary science, and what we are discovering far outstrips anything that could be imagined out of thin air.

In his directorial debut, Garland has managed to capture the thrill of this adventure in a film that is effortlessly enthralling, whatever your background. This is why, on emerging from it, I felt lucky to be a neuroscientist. Here is a film that is better because of, and not despite, its engagement with its intellectual inspiration.

The original version of this piece was published as a Culture Lab article in New Scientist on Jan 21. I am grateful to the New Scientist for permission to reproduce it here, and to Liz Else for help with editing. I will be discussing Ex Machina with Dr. Adam Rutherford at a special screening of the film at the Edinburgh Science Festival (April 16, details and tickets here).

Open your MIND

Open MIND is a brand new collection of original research publications on the mind, brain, and consciousness. It is now freely available online. The collection contains 118 articles in all, from 90 senior and junior researchers, in the always-revealing format of target articles, commentaries, and responses.

This innovative project is the brainchild of Thomas Metzinger and Jennifer Windt, of the MIND group of the Johannes Gutenberg University in Mainz, Germany (Windt has since moved to Monash University in Melbourne). The MIND group was set up by Metzinger in 2003 to catalyse the development of young German philosophers by engaging them with the latest developments in philosophy of mind, cognitive science, and neuroscience. Open MIND celebrates the 10th anniversary of the MIND group, in a way that is so much more valuable to the academic community than ‘just another meeting’ with its quick-burn excitement and massive carbon footprint. Editors Metzinger and Windt explain:

“With this collection, we wanted to make a substantial and innovative contribution that will have a major and sustained impact on the international debate on the mind and the brain. But we also wanted to create an electronic resource that could also be used by less privileged students and researchers in countries such as India, China, or Brazil for years to come … The title ‘Open MIND’ stands for our continuous search for a renewed form of academic philosophy that is concerned with intellectual rigor, takes the results of empirical research seriously, and at the same time remains sensitive to ethical and social issues.”

As a senior member of the MIND group, I was lucky enough to contribute a target article, which was commented on by Wanja Wiese, one of the many talented graduate students working with Metzinger and a junior MIND group member. My paper marries concepts in cybernetics and predictive control with the increasingly powerful perspective of ‘predictive processing’ or the Bayesian brain, with a focus on interoception and embodiment. I’ll summarize the main points in a different post, but you can go straight to the target paper, Wanja’s commentary, and my response.

Open MIND is a unique resource in many ways. The Editors were determined to maximize its impact, so, unlike in many otherwise similar projects, the original target papers were not circulated prior to launch. This means there is a great deal of highly original material now available to be discovered. The entire project was compressed into about 10 months, from submission of initial drafts to publication this week of the complete collection, so the content is completely up-to-date. Open MIND also shows how excellent scientific publication can sidestep the main publishing houses, given the highly developed resources now available, coupled of course with extreme dedication and hard work. The collection was assembled, rigorously reviewed, edited, and produced entirely in-house – a remarkable achievement.

Thomas Metzinger with the Open MIND student team

Thomas Metzinger with the Open MIND student team

Above all, Open MIND opened a world of opportunity for its junior members, the graduate students and postdocs who were involved in every stage of the project: soliciting and reviewing papers, editing, preparing commentaries, and organizing the final collection. As Metzinger and Windt say:

“The whole publication project is itself an attempt to develop a new format for promoting junior researchers, for developing their academic skills, and for creating a new type of interaction between senior and junior group members.”

The results of Open MIND are truly impressive and will undoubtedly make a lasting contribution to the philosophy of mind, especially in its most powerful multidisciplinary and empirically grounded forms.

Take a look, and open your mind too.

Open MIND contributors: Adrian John Tetteh Alsmith, Michael L. Anderson, Margherita Arcangeli, Andreas Bartels, Tim Bayne, David H. Baßler, Christian Beyer, Ned Block, Hannes Boelsen, Amanda Brovold, Anne-Sophie Brüggen, Paul M. Churchland, Andy Clark, Carl F. Craver, Holk Cruse, Valentina Cuccio, Brian Day, Daniel C. Dennett, Jérôme Dokic, Martin Dresler, Andrea R. Dreßing, Chris Eliasmith, Maximilian H. Engel, Kathinka Evers, Regina Fabry, Sascha Fink, Vittorio Gallese, Philip Gerrans, Ramiro Glauer, Verena Gottschling, Rick Grush, Aaron Gutknecht, Dominic Harkness, Oliver J. Haug, John-Dylan Haynes, Heiko Hecht, Daniela Hill, John Allan Hobson, Jakob Hohwy, Pierre Jacob, J. Scott Jordan, Marius Jung, Anne-Kathrin Koch, Axel Kohler, Miriam Kyselo, Lana Kuhle, Victor A. Lamme, Bigna Lenggenhager, Caleb Liang, Ying-Tung Lin, Christophe Lopez, Michael Madary, Denis C. Martin, Mark May, Lucia Melloni, Richard Menary, Aleksandra Mroczko-Wąsowicz, Saskia K. Nagel, Albert Newen, Valdas Noreika, Alva Noë, Gerard O’Brien, Elisabeth Pacherie, Anita Pacholik-Żuromska, Christian Pfeiffer, Iuliia Pliushch, Ulrike Pompe-Alama, Jesse J. Prinz, Joëlle Proust, Lisa Quadt, Antti Revonsuo, Adina L. Roskies, Malte Schilling, Stephan Schleim, Tobias Schlicht, Jonathan Schooler, Caspar M. Schwiedrzik, Anil Seth, Wolf Singer, Evan Thompson, Jarno Tuominen, Katja Valli, Ursula Voss, Wanja Wiese, Yann F. Wilhelm, Kenneth Williford, Jennifer M. Windt.

Open MIND press release.
The cybernetic Bayesian brain: from interoceptive inference to sensorimotor contingencies
Perceptual presence in the Kuhnian-Popperian Bayesian brain
Inference to the best prediction

There’s more to geek-chic than meets the eye, but not in The Imitation Game

Benedict Cumberbatch as Alan Turing in The Imitation Game. (Spoiler alert: this post reveals some plot details.)

World War Two was won not just with tanks, guns, and planes, but by a crack team of code-breakers led by the brilliant and ultimately tragic figure of Alan Turing. This is the story as told in The Imitation Game, a beautifully shot and hugely popular film which nonetheless left me nursing a deep sense of missed opportunity. True, Benedict Cumberbatch is brilliant, spicing his superb Holmes with a dash of Russell Crowe’s John Nash (A Beautiful Mind) to propel geek rapture into yet higher orbits. (See also Eddie Redmayne and Stephen Hawking.)

The rest was not so good. The clunky acting might reflect a screenplay desperate to humanize and popularize what was fundamentally a triumph of the intellect. But what got to me most was the treatment of Turing himself. On one hand there is the perhaps cinematically necessary canonisation of individual genius, sweeping aside so much important context. On the other there is the saccharine treatment of Turing’s open homosexuality (with compensatory boosting of Keira Knightley’s Joan Clarke) and the egregious scenes in which he stands accused of both treason and cowardice by association with Soviet spy John Cairncross, whom he likely never met. The requisite need for a bad guy also does a disservice to Turing’s Bletchley Park boss Alastair Denniston, who, while a product of old-school classics-inspired cryptography, nonetheless recognized and supported Turing and his crew. Historical jiggery-pokery is of course to be expected in any mass-market biopic, but the story as told in The Imitation Game becomes much less interesting as a result.

Alan Turing as himself

I studied at King’s College, Cambridge, Turing’s academic home and also where I first encountered the basics of modern-day computer science and artificial intelligence (AI). By all accounts Turing was a genius, laying the foundations for these disciplines but also for other areas of science, which – like AI – didn’t even exist in his time. His theories of morphogenesis presaged contemporary developmental biology, explaining how leopards get their spots. He was a pioneer of cybernetics, an inspired amalgam of engineering and biology that after many years in the academic hinterland is once again galvanising our understanding of how minds and brains work, and what they are for. One can only wonder what more he would have done, had he lived.

There is a breathless moment in the film where Joan Clarke (or poor spy-hungry and historically-unsupported Detective Nock, I can’t remember) wonders whether Turing, in cracking Enigma, has built his ‘universal machine’. This references Turing’s most influential intellectual breakthrough, his conceptual design for a machine that was not only programmable but re-programmable, that could execute any algorithm, any computational process.

The Universal Turing Machine formed the blueprint for modern-day computers, but the machine that broke Enigma was no such thing. The ‘Bombe’, as it was known, was based on Polish prototypes (the bomba kryptologiczna) and was co-designed with Gordon Welchman whose critical ‘diagonal board’ innovation is in the film attributed to the suave Hugh Alexander (Welchman doesn’t appear at all). Far from being a universal computer the Bombe was designed for a single specific purpose – to rapidly run through as many settings of the Enigma machine as possible.
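By way of illustration – my own sketch, nothing from the film or from Bletchley – the essence of a Turing machine (a tape, a read/write head, and a finite table of rules) fits in a few lines of Python:

```python
# A minimal Turing machine interpreter (illustrative sketch only).
# Rules map (state, symbol) -> (symbol_to_write, move, next_state).
def run(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape, indexed by position
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells.get(pos, blank))]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A three-rule machine that flips every bit, halting at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(flip, "1011"))  # -> 0100
```

A universal machine is then just such an interpreter whose rule table is itself encoded on the tape – which is exactly what the single-purpose Bombe was not.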

A working rebuilt Bombe at Bletchley Park, containing 36 Enigma equivalents. The (larger) Bombe in The Imitation Game was a high point – a beautiful piece of historical reconstruction.

The Bombe is half the story of Enigma. The other half is pure cryptographic catnip. Even with a working Bombe the number of possible machine settings to be searched each day (the Germans changed all the settings at midnight) was just too large. The code-breakers needed a way to limit the combinations to be tested. And here Turing and his team inadvertently pioneered the principles of modern-day ‘Bayesian’ machine learning, by using prior assumptions to constrain possible mappings between a cipher and its translation. For Enigma, the breakthroughs came on realizing that no letter could encode itself, and that German operators often used the same phrases in repeated messages (“Heil Hitler!”). Hugh Alexander, diagonal boards aside, was supremely talented at this process which Turing called ‘banburismus’, on account of having to get printed ‘message cards’ from nearby Banbury.
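That first constraint is surprisingly powerful on its own. Here is a toy sketch of how “no letter could encode itself” prunes the possible alignments of a guessed plaintext against a ciphertext (the strings are invented for illustration; real cribs were stock phrases like “WETTERBERICHT”):

```python
# Slide a guessed plaintext (a "crib") along a ciphertext, keeping only
# alignments consistent with Enigma's quirk that no letter could ever
# encipher to itself. Any position where crib and ciphertext share a
# letter is impossible and can be discarded before trying the Bombe.
def valid_positions(ciphertext, crib):
    return [
        i
        for i in range(len(ciphertext) - len(crib) + 1)
        if all(c != p for c, p in zip(ciphertext[i:i + len(crib)], crib))
    ]

print(valid_positions("XAYBCZ", "ABC"))  # -> [0, 3]
```

Each discarded alignment means whole families of machine settings that never need to be tested – prior knowledge doing the work before the engineering takes over.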

In this way the Bletchley code-breakers combined extraordinary engineering prowess with freewheeling intellectual athleticism, to find a testable range of Enigma settings, each and every day, which were then run through the Bombe until a match was found.

A Colossus Mk 2 in operation. The Mk 2, with 2400 valves, came into service on June 1st 1944

Though it gave the allies a decisive advantage, the Bombe was not the first computer, not the first ‘digital brain’. This honour belongs to Colossus, also built at Bletchley Park, and based on Turing’s principles, but constructed mainly by Tommy Flowers, Jack Good, and Bill Tutte. Colossus was designed to break the even more heavily encrypted communications the Germans used later in the war: the Tunny cipher. After the war the intense secrecy surrounding Bletchley Park meant that all Colossi (and Bombi) were dismantled or hidden away, depriving Turing, Flowers – and many others – of recognition and setting back the computer age by years. It amazes me that full details about Colossus were only released in 2000.

Turing’s seminal 1950 paper, describing the ‘Imitation Game’ experiment

The Imitation Game of the title is a nod to Turing’s most widely known idea: a pragmatic answer to the philosophically challenging and possibly absurd question, “Can machines think?”. In one version of what is now known as the Turing Test, a human judge interacts with two players – another human and a machine – and must decide which is which. Interactions are limited to disembodied exchanges of pieces of text, and a candidate machine passes the test when the judge consistently fails to distinguish the one from the other. It is unfortunate, but in keeping with the screenplay, that Turing’s code-breaking had little to do with his eponymous test.

It is completely understandable that films simplify and rearrange complex historical events in order to generate widespread appeal. But The Imitation Game focuses so much on a distorted narrative of Turing’s personal life that the other story – a thrilling ‘band of brothers’ tale of winning a war by inventing the modern world – is pushed out into the wings. The assumption is that none of this puts bums on seats. But who knows, there might be more to geek-chic than meets the eye.

Should we fear the technological singularity?


Could wanting the latest mobile phone for Christmas lead to human extermination? Existential risks to our species have long been part of our collective psyche – in the form of asteroid impacts, pandemics, global nuclear cataclysm, and more recently, climate change. The idea is not simply that humans and other animals could be wiped out, but that basic human values and structures of society would change so as to become unrecognisable.

Last week, Stephen Hawking claimed that technological progress, while perhaps intended for human betterment, might lead to a new kind of existential threat in the form of self-improving artificial intelligence (AI). This worry is based on the “law of accelerating returns”, which applies when the rate at which technology improves is proportional to how good the technology is, yielding exponential – and unpredictable – advances in its capabilities. The idea is that a point might be reached where this process leads to wholesale and irreversible changes in how we live. This is the technological singularity, a concept made popular by AI maverick and Google engineering director Ray Kurzweil.
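As a toy model (mine, not Hawking’s or Kurzweil’s), the law of accelerating returns is just growth at a rate proportional to current capability, dx/dt = k·x, which integrates to an exponential:

```python
import math

# Toy model of the "law of accelerating returns": capability grows at a
# rate proportional to its current level, dx/dt = k*x, whose closed-form
# solution is the exponential x(t) = x0 * exp(k*t).
def capability(x0, k, t):
    return x0 * math.exp(k * t)

# With a doubling time of two years (Moore's-law-like), k = ln(2)/2:
k = math.log(2) / 2
print(capability(1.0, k, 10))  # five doublings: ~32
```

Each fixed interval multiplies capability by the same factor, which is why such curves look unremarkable early on and then seem to explode.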

We are already familiar with accelerating returns in the rapid development of computer power (“Moore’s law”), and Kurzweil’s vision of the singularity is actually a sort of utopian techno-rapture. But there are scarier scenarios where exponential technological growth might exceed our ability to foresee and prevent unintended consequences. Genetically modified food is an early example of this worry, but now the spotlight is on bio- and nano-technology, and – above all – AI, the engineering of artificial minds.

Moore's law: the exponential growth in computational power since 1900.

A focus on AI might seem weird given how disappointing present-day ‘intelligent robots’ are. They can hardly vacuum your living room let alone take over the world, and reports that the famous Turing Test for AI has been passed are greatly exaggerated. Yet AI has developed a surprising behind-the-scenes momentum. New ‘deep learning’ algorithms have been developed which, when coupled with vast amounts of data, show remarkable abilities to tackle everyday problems like speech comprehension and face recognition. As well as world-beating chess players like Deep Blue, we have Apple Siri and Google Now helping us navigate our messy and un-chesslike environments in ways that mimic our natural cognitive abilities. Huge amounts of money have followed, with Google this year paying £400M for AI start-up DeepMind in a deal which Google CEO Eric Schmidt heralded as enabling products that are “infinitely more intelligent”.

"Hello Dave".

What if the ability to engineer artificial minds leads to these minds engineering themselves, developing their own goals, and bootstrapping themselves beyond human understanding and control? This dystopian prospect has been mined by many sci-fi movies – think Blade Runner, HAL in 2001, Terminator, Matrix – but while sci-fi is primarily for entertainment, the accelerating developments in AI give pause for thought. Enter Hawking, who now warns that “the full development of AI could spell the end of the human race”. He joins real-world-Iron-Man Elon Musk and Oxford philosopher Nick Bostrom in declaring AI the most serious existential threat we face. (Hawking in fact used the term ‘singularity’ long ago to describe situations where the laws of physics break down, like at the centre of a black hole).

However implausible a worldwide AI revolution might seem, Holmes will tell you there is all the difference in the world between the impossible and the merely improbable. Even if highly unlikely, the seismic impact of a technological singularity is such that it deserves to be taken seriously, both in estimating and mitigating its likelihood, and in planning potential responses. Cambridge University’s new Centre for the Study of Existential Risk has been established to do just this, with Hawking and Astronomer Royal Sir Martin Rees among the founders.

Dystopian eventualities aside, the singularity concept is inherently interesting because it pushes us to examine what we mean by being human (as my colleague Murray Shanahan argues in a forthcoming book). While intelligence is part of the story, being human is also about having a body and an internal physiology; we are self-sustaining flesh bags. It is also about consciousness; we are each at the centre of a subjective universe of experience. Current AI has little to say about these issues, and it is far from clear whether truly autonomous and self-driven AI is possible in their absence. The ethical minefield deepens when we realize that AIs becoming conscious would entail ethical responsibilities towards them, regardless of their impact on us.

At the moment, AI like any powerful technology has the potential for good and ill, long before any singularity is reached. On the dark side, AI gives us the tools to wreak our own havoc by distancing ourselves from the consequences of our actions. Remote-controlled military drones already reduce life-and-death decisions to the click of a button: with enhanced AI there would be no need for the button. On the side of the angels, AI can make our lives healthier and happier, and our world more balanced and sustainable, by complementing our natural mental prowess with the unprecedented power of computation. The pendulum may swing from the singularity-mongers to the techno-mavens; we should listen to both, but proceed serenely with the angels.

This post is an amended version of a commissioned comment for The Guardian: Why we must not stall technological progress, despite its threat to humanity, published on December 03, 2014. It was part of a flurry of comments occasioned by a BBC interview with Stephen Hawking, which you can listen to here. I’m actually quite excited to see Eddie Redmayne’s rendition of the great physicist.