Ex Machina: A shot in the arm for smart sci-fi


Alicia Vikander as Ava in Alex Garland’s Ex Machina

IT’S a rare thing to see a movie about science that takes no prisoners intellectually. Alex Garland’s Ex Machina is just that: a stylish, spare and cerebral psycho-techno-thriller that delivers a much-needed shot in the arm for smart science fiction.

Reclusive billionaire genius Nathan, played by Oscar Isaac, creates Ava, an intelligent and very attractive robot played by Alicia Vikander. He then struggles with the philosophical and ethical dilemmas his creation poses, while all hell breaks loose. Many twists and turns add nuance to the plot, which centres on the evolving relationships between the balletic Ava and Caleb (Domhnall Gleeson), a hotshot programmer invited by Nathan to be the “human component in a Turing test”, and between Caleb and Nathan, as Ava’s extraordinary capabilities become increasingly apparent.

Everything about this movie is good. Compelling acting (with only three speaking parts), exquisite photography and set design, immaculate special effects, a subtle score and, above all, a hugely imaginative screenplay combine under Garland’s precise direction to deliver a cinematic experience that grabs you and never lets go.

The best science fiction often tackles the oldest questions. At the heart of Ex Machina is one of our toughest intellectual knots, that of artificial consciousness. Is it possible to build a machine that is not only intelligent but also sentient: that has consciousness, not only of the world but also of its own self? Can we construct a modern-day Golem, that lumpen being of Jewish folklore which is shaped from unformed matter and can both serve humankind and turn against it? And if we could, what would happen to us?

In Jewish folklore, the Golem is an animate being shaped from unformed matter.

Putting aside the tedious business of actually building a conscious AI, we face the challenge of figuring out whether the attempt succeeds. The standard reference for this sort of question is Alan Turing’s eponymous test, in which a human judge interrogates both a candidate machine and another human. A machine passes the test when the judge consistently fails to distinguish between them.

While the Turing test has provided a trope for many AI-inspired movies (such as Spike Jonze’s excellent Her), Ex Machina takes things much further. In a sparkling exchange between Caleb and Nathan, Garland nails the weakness of Turing’s version of the test, a focus on the disembodied exchange of messages, and proposes something far more interesting. “The challenge is to show you that she’s a robot. And see if you still feel she has consciousness,” Nathan says to Caleb.

This shifts the goalposts in a vital way. What matters is not whether Ava is a machine. It is not even whether Ava, even though a machine, can be conscious. What matters is whether Ava makes a conscious person feel that Ava is conscious. The brilliance of Ex Machina is that it reveals the Turing test for what it really is: a test of the human, not of the machine. And Garland is not necessarily on our side.

Nathan (Oscar Isaac) and Caleb (Domhnall Gleeson) discuss deep matters of AI

Is consciousness a matter of social consensus? Is it more relevant whether people believe (or feel) that something (or someone) is conscious than whether it is in fact actually conscious? Or, does something being “actually conscious” rest on other people’s beliefs about it being conscious, or on its own beliefs about its consciousness (beliefs that may themselves depend on how it interprets others’ beliefs about it)? And exactly what is the difference between believing and feeling in situations like this?

It seems to me that my consciousness, here and now, is not a matter of social consensus or of my simply believing or feeling that I am conscious. It seems to me, simply, that I am conscious here and now. When I wake up and smell the coffee, there is a real experience of coffee-smelling going on.

But let me channel Ludwig Wittgenstein, one of the greatest philosophers of the 20th century, for a moment. What would it seem like if it seemed to me that my being conscious were a matter of social consensus or beliefs or feelings about my own conscious status? Is what it “seems like” to me relevant at all when deciding how consciousness comes about or what has consciousness?

Before vanishing completely into a philosophical rabbit hole, it is worth saying that questions like these are driving much influential current research on consciousness. Philosophers and scientists like Daniel Dennett, David Rosenthal and Michael Graziano defend, in various ways, the idea that consciousness is somehow illusory and what we really mean in saying we are conscious is that we have certain beliefs about mental states, that these states have distinctive functional properties, or that they are involved in specific sorts of attention.

Another theoretical approach accepts that conscious experience is real and sees the problem as one of determining its physical or biological mechanism. Some leading neuroscientists, such as Giulio Tononi and, more recently, Christof Koch, take consciousness to be a fundamental property, much like mass-energy and electrical charge, that is expressed through localised concentrations of “integrated information”. And others, like philosopher John Searle, believe that consciousness is an essentially biological property that emerges in some systems but not in others, for reasons as yet unknown.

In the film we hear about Searle’s Chinese Room thought experiment. His premise was that researchers had managed to build a computer, programmed in English, that can respond to written Chinese with written Chinese so convincingly it easily passes the Turing test, persuading a human Chinese speaker that the program understands and speaks Chinese. Does the machine really “understand” Chinese (Searle called this “strong AI”) or is it only simulating the ability (“weak” AI)? There is also a nod to the notional scientist “Mary”, who, while knowing everything about the physics and biology of colour vision, has only ever experienced black, white and shades of grey. What happens when she sees a red object for the first time? Will she learn anything new? Does consciousness exceed the realms of knowledge?

All of the above illustrates how academically savvy and intellectually provocative Ex Machina is. Hat-tips here to Murray Shanahan, professor of cognitive robotics at Imperial College London, and writer and geneticist Adam Rutherford, whom Garland did well to enlist as science advisers.

Not every scene invites deep philosophy of mind, with the film encompassing everything from ethics, the technological singularity, Ghostbusters and social media to the erosion of privacy, feminism and sexual politics within its subtle scope. But when it comes to riffing on the possibilities and mysteries of brain, mind and consciousness, Ex Machina doesn’t miss a trick.

As a scientist, I find it easy to moan when films don’t stack up against reality, but there is usually little to be gained from nitpicking over inaccuracies and narrative inventions. Such criticisms can seem petty, reinforcing the stereotype of scientists as humourless gatekeepers of facts and hoarders of equations. But these complaints sometimes express a sense of missed opportunity rather than injustice, a sense that intellectual riches could have been exploited, not sidelined, in making a good movie. AI, neuroscience and consciousness are among the most vibrant and fascinating areas of contemporary science, and what we are discovering far outstrips anything that could be imagined out of thin air.

In his directorial debut, Garland has managed to capture the thrill of this adventure in a film that is effortlessly enthralling, whatever your background. This is why, on emerging from it, I felt lucky to be a neuroscientist. Here is a film that is better because of, not despite, its engagement with its intellectual inspiration.


The original version of this piece was published as a Culture Lab article in New Scientist on Jan 21. I am grateful to the New Scientist for permission to reproduce it here, and to Liz Else for help with editing. I will be discussing Ex Machina with Dr. Adam Rutherford at a special screening of the film at the Edinburgh Science Festival (April 16, details and tickets here).

Open your MIND

Open MIND is a brand new collection of original research publications on the mind, brain, and consciousness. It is now freely available online. The collection contains altogether 118 articles from 90 senior and junior researchers, in the always-revealing format of target articles, commentaries, and responses.

This innovative project is the brainchild of Thomas Metzinger and Jennifer Windt, of the MIND group of the Johannes Gutenberg University in Mainz, Germany (Windt has since moved to Monash University in Melbourne). The MIND group was set up by Metzinger in 2003 to catalyse the development of young German philosophers by engaging them with the latest developments in philosophy of mind, cognitive science, and neuroscience. Open MIND celebrates the 10th anniversary of the MIND group, in a way that is so much more valuable to the academic community than ‘just another meeting’ with its quick-burn excitement and massive carbon footprint. Editors Metzinger and Windt explain:

“With this collection, we wanted to make a substantial and innovative contribution that will have a major and sustained impact on the international debate on the mind and the brain. But we also wanted to create an electronic resource that could also be used by less privileged students and researchers in countries such as India, China, or Brazil for years to come … The title ‘Open MIND’ stands for our continuous search for a renewed form of academic philosophy that is concerned with intellectual rigor, takes the results of empirical research seriously, and at the same time remains sensitive to ethical and social issues.”

As a senior member of the MIND group, I was lucky enough to contribute a target article, which was commented on by Wanja Wiese, one of the many talented graduate students with Metzinger and a junior MIND group member. My paper marries concepts in cybernetics and predictive control with the increasingly powerful perspective of ‘predictive processing’ or the Bayesian brain, with a focus on interoception and embodiment. I’ll summarize the main points in a different post, but you can go straight to the target paper, Wanja’s commentary, and my response.

Open MIND is a unique resource in many ways. The Editors were determined to maximize its impact, so, unlike in many otherwise similar projects, the original target papers have not been circulated prior to launch. This means there is a great deal of highly original material now available to be discovered. The entire project was compressed into about 10 months from submission of initial drafts, to publication this week of the complete collection. This means the original content is completely up-to-date. Also, Open MIND  shows how excellent scientific publication can  sidestep the main publishing houses, given the highly developed resources now available, coupled of course with extreme dedication and hard work. The collection was assembled, rigorously reviewed, edited, and produced entirely in-house – a remarkable achievement.

Thomas Metzinger with the Open MIND student team

Above all, Open MIND opened a world of opportunity for its junior members, the graduate students and postdocs who were involved in every stage of the project: soliciting and reviewing papers, editing, preparing commentaries, and organizing the final collection. As Metzinger and Windt say:

“The whole publication project is itself an attempt to develop a new format for promoting junior researchers, for developing their academic skills, and for creating a new type of interaction between senior and junior group members.”

The results of Open MIND are truly impressive and will undoubtedly make a lasting contribution to the philosophy of mind, especially in its most powerful multidisciplinary and empirically grounded forms.

Take a look, and open your mind too.

Open MIND contributors: Adrian John Tetteh Alsmith, Michael L. Anderson, Margherita Arcangeli, Andreas Bartels, Tim Bayne, David H. Baßler, Christian Beyer, Ned Block, Hannes Boelsen, Amanda Brovold, Anne-Sophie Brüggen, Paul M. Churchland, Andy Clark, Carl F. Craver, Holk Cruse, Valentina Cuccio, Brian Day, Daniel C. Dennett, Jérôme Dokic, Martin Dresler, Andrea R. Dreßing, Chris Eliasmith, Maximilian H. Engel, Kathinka Evers, Regina Fabry, Sascha Fink, Vittorio Gallese, Philip Gerrans, Ramiro Glauer, Verena Gottschling, Rick Grush, Aaron Gutknecht, Dominic Harkness, Oliver J. Haug, John-Dylan Haynes, Heiko Hecht, Daniela Hill, John Allan Hobson, Jakob Hohwy, Pierre Jacob, J. Scott Jordan, Marius Jung, Anne-Kathrin Koch, Axel Kohler, Miriam Kyselo, Lana Kuhle, Victor A. Lamme, Bigna Lenggenhager, Caleb Liang, Ying-Tung Lin, Christophe Lopez, Michael Madary, Denis C. Martin, Mark May, Lucia Melloni, Richard Menary, Aleksandra Mroczko-Wąsowicz, Saskia K. Nagel, Albert Newen, Valdas Noreika, Alva Noë, Gerard O’Brien, Elisabeth Pacherie, Anita Pacholik-Żuromska, Christian Pfeiffer, Iuliia Pliushch, Ulrike Pompe-Alama, Jesse J. Prinz, Joëlle Proust, Lisa Quadt, Antti Revonsuo, Adina L. Roskies, Malte Schilling, Stephan Schleim, Tobias Schlicht, Jonathan Schooler, Caspar M. Schwiedrzik, Anil Seth, Wolf Singer, Evan Thompson, Jarno Tuominen, Katja Valli, Ursula Voss, Wanja Wiese, Yann F. Wilhelm, Kenneth Williford, Jennifer M. Windt.


Open MIND press release.
The cybernetic Bayesian brain: from interoceptive inference to sensorimotor contingencies
Perceptual presence in the Kuhnian-Popperian Bayesian brain
Inference to the best prediction

There’s more to geek-chic than meets the eye, but not in The Imitation Game


Benedict Cumberbatch as Alan Turing in The Imitation Game. (Spoiler alert: this post reveals some plot details.)

World War Two was won not just with tanks, guns, and planes, but by a crack team of code-breakers led by the brilliant and ultimately tragic figure of Alan Turing. This is the story as told in The Imitation Game, a beautifully shot and hugely popular film which nonetheless left me nursing a deep sense of missed opportunity. True, Benedict Cumberbatch is brilliant, spicing his superb Holmes with a dash of Russell Crowe’s John Nash (A Beautiful Mind) to propel geek rapture into yet higher orbits. (See also Eddie Redmayne and Stephen Hawking.)

The rest was not so good. The clunky acting might reflect a screenplay desperate to humanize and popularize what was fundamentally a triumph of the intellect. But what got to me most was the treatment of Turing himself. On one hand there is the perhaps cinematically necessary canonisation of individual genius, sweeping aside so much important context. On the other there is the saccharine treatment of Turing’s open homosexuality (with compensatory boosting of Keira Knightley’s Joan Clarke) and the egregious scenes in which he stands accused of both treason and cowardice by association with Soviet spy John Cairncross, whom he likely never met. The requisite need for a bad guy does disservice also to Turing’s Bletchley Park boss Alastair Denniston, who, while a product of old-school, classics-inspired cryptography, nonetheless recognized and supported Turing and his crew. Historical jiggery-pokery is of course to be expected in any mass-market biopic, but the story as told in The Imitation Game becomes much less interesting as a result.

Alan Turing as himself

I studied at King’s College, Cambridge, Turing’s academic home and also where I first encountered the basics of modern day computer science and artificial intelligence (AI). By all accounts Turing was a genius, laying the foundations for these disciplines but also for other areas of science, which – like AI – didn’t even exist in his time. His theories of morphogenesis presaged contemporary developmental biology, explaining how leopards get their spots. He was a pioneer of cybernetics, an inspired amalgam of engineering and biology that after many years in the academic hinterland is once again galvanising our understanding of how minds and brains work, and what they are for. One can only wonder what more he would have done, had he lived.

There is a breathless moment in the film where Joan Clarke (or poor spy-hungry and historically-unsupported Detective Nock, I can’t remember) wonders whether Turing, in cracking Enigma, has built his ‘universal machine’. This references Turing’s most influential intellectual breakthrough, his conceptual design for a machine that was not only programmable but re-programmable, that could execute any algorithm, any computational process.

The Universal Turing Machine formed the blueprint for modern-day computers, but the machine that broke Enigma was no such thing. The ‘Bombe’, as it was known, was based on Polish prototypes (the bomba kryptologiczna) and was co-designed with Gordon Welchman whose critical ‘diagonal board’ innovation is in the film attributed to the suave Hugh Alexander (Welchman doesn’t appear at all). Far from being a universal computer the Bombe was designed for a single specific purpose – to rapidly run through as many settings of the Enigma machine as possible.

A working rebuilt Bombe at Bletchley Park, containing 36 Enigma equivalents. The (larger) Bombe in The Imitation Game was a high point – a beautiful piece of historical reconstruction.

The Bombe is half the story of Enigma. The other half is pure cryptographic catnip. Even with a working Bombe the number of possible machine settings to be searched each day (the Germans changed all the settings at midnight) was just too large. The code-breakers needed a way to limit the combinations to be tested. And here Turing and his team inadvertently pioneered the principles of modern-day ‘Bayesian’ machine learning, by using prior assumptions to constrain possible mappings between a cipher and its translation. For Enigma, the breakthroughs came on realizing that no letter could encode itself, and that German operators often used the same phrases in repeated messages (“Heil Hitler!”). Hugh Alexander, diagonal boards aside, was supremely talented at this process which Turing called ‘banburismus’, on account of having to get printed ‘message cards’ from nearby Banbury.
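
To make the ‘no letter could encode itself’ constraint concrete, here is a minimal sketch (my own illustration, in Python; nothing like Bletchley’s actual machinery or methods) of how that single property prunes the search: a guessed plaintext phrase, or ‘crib’, can only line up with a stretch of ciphertext in which no letter coincides with itself, so most candidate alignments can be thrown away before any machine settings are tested.

```python
# Toy illustration: Enigma never encrypted a letter as itself, so a guessed
# plaintext phrase (a 'crib') cannot align with any stretch of ciphertext in
# which a letter coincides with itself. Ruling out such alignments shrinks
# the space of settings that something like the Bombe would have to test.

def possible_crib_positions(ciphertext: str, crib: str) -> list:
    """Return the alignments not excluded by the self-encryption constraint."""
    allowed = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            allowed.append(start)
    return allowed

ciphertext = "QFZWRAHIVLERJBXPOHTLERWQSZV"   # made-up intercept, for illustration
crib = "HEILHITLER"                          # phrase operators were known to reuse
print(possible_crib_positions(ciphertext, crib))
```

Constraints like this, combined with the Bombe’s sheer speed, are what made the daily search tractable.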

In this way the Bletchley code-breakers combined extraordinary engineering prowess with freewheeling intellectual athleticism, to find a testable range of Enigma settings, each and every day, which were then run through the Bombe until a match was found.

A Colossus Mk 2 in operation. The Mk 2, with 2400 valves, came into service on June 1st 1944

Though it gave the allies a decisive advantage, the Bombe was not the first computer, not the first ‘digital brain’. This honour belongs to Colossus, also built at Bletchley Park, and based on Turing’s principles, but constructed mainly by Tommy Flowers, Jack Good, and Bill Tutte. Colossus was designed to break the even more heavily encrypted communications the Germans used later in the war: the Tunny cipher. After the war the intense secrecy surrounding Bletchley Park meant that all Colossi (and Bombes) were dismantled or hidden away, depriving Turing, Flowers – and many others – of recognition and setting back the computer age by years. It amazes me that full details about Colossus were only released in 2000.

Turing’s seminal 1950 paper, describing the ‘Imitation Game’ experiment

The Imitation Game of the title is a nod to Turing’s most widely known idea: a pragmatic answer to the philosophically challenging and possibly absurd question, “can machines think?”. In one version of what is now known as the Turing Test, a human judge interacts with two players – another human and a machine – and must decide which is which. Interactions are limited to disembodied exchanges of pieces of text, and a candidate machine passes the test when the judge consistently fails to distinguish the one from the other. It is unfortunate, but in keeping with the screenplay, that Turing’s code-breaking had little to do with his eponymous test.

It is completely understandable that films simplify and rearrange complex historical events in order to generate widespread appeal. But The Imitation Game focuses so much on a distorted narrative of Turing’s personal life that the other story – a thrilling ‘band of brothers’ tale of winning a war by inventing the modern world – is pushed out into the wings. The assumption is that none of this puts bums on seats. But who knows, there might be more to geek-chic than meets the eye.

Should we fear the technological singularity?


Could wanting the latest mobile phone for Christmas lead to human extermination? Existential risks to our species have long been part of our collective psyche – in the form of asteroid impacts, pandemics, global nuclear cataclysm, and more recently, climate change. The idea is not simply that humans and other animals could be wiped out, but that basic human values and structures of society would change so as to become unrecognisable.

Last week, Stephen Hawking claimed that technological progress, while perhaps intended for human betterment, might lead to a new kind of existential threat in the form of self-improving artificial intelligence (AI). This worry is based on the “law of accelerating returns”, which applies when the rate at which technology improves is proportional to how good the technology is, yielding exponential – and unpredictable – advances in its capabilities. The idea is that a point might be reached where this process leads to wholesale and irreversible changes in how we live. This is the technological singularity, a concept made popular by AI maverick and Google engineering director Ray Kurzweil.
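
In rough mathematical terms (my paraphrase rather than Kurzweil’s own formulation), the law of accelerating returns amounts to assuming that a technology’s rate of improvement is proportional to its current capability, which immediately yields exponential growth:

```latex
\frac{dC}{dt} = k\,C \quad\Longrightarrow\quad C(t) = C_0\, e^{k t}
```

Because the growth is exponential, small changes in the rate constant k produce enormous differences in how soon any given threshold is crossed, which is one reason predictions about the timing of a singularity vary so wildly.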

We are already familiar with accelerating returns in the rapid development of computer power (“Moore’s law”), and Kurzweil’s vision of the singularity is actually a sort of utopian techno-rapture. But there are scarier scenarios where exponential technological growth might exceed our ability to foresee and prevent unintended consequences. Genetically modified food is an early example of this worry, but now the spotlight is on bio- and nano-technology, and – above all – AI, the engineering of artificial minds.

Moore’s law: the exponential growth in computational power since 1900.

A focus on AI might seem weird given how disappointing present-day ‘intelligent robots’ are. They can hardly vacuum your living room let alone take over the world, and reports that the famous Turing Test for AI has been passed are greatly exaggerated. Yet AI has developed a surprising behind-the-scenes momentum. New ‘deep learning’ algorithms have been developed which, when coupled with vast amounts of data, show remarkable abilities to tackle everyday problems like speech comprehension and face recognition. As well as world-beating chess players like Deep Blue, we have Apple Siri and Google Now helping us navigate our messy and un-chesslike environments in ways that mimic our natural cognitive abilities. Huge amounts of money have followed, with Google this year paying £400M for AI start-up DeepMind in a deal which Google’s executive chairman Eric Schmidt heralded as enabling products that are “infinitely more intelligent”.

"Hello Dave".

“Hello Dave”.

What if the ability to engineer artificial minds leads to these minds engineering themselves, developing their own goals, and bootstrapping themselves beyond human understanding and control? This dystopian prospect has been mined by many sci-fi movies – think Blade Runner, HAL in 2001, Terminator, Matrix – but while sci-fi is primarily for entertainment, the accelerating developments in AI give pause for thought. Enter Hawking, who now warns that “the full development of AI could spell the end of the human race”. He joins real-world-Iron-Man Elon Musk and Oxford philosopher Nick Bostrom in declaring AI the most serious existential threat we face. (Hawking in fact used the term ‘singularity’ long ago to describe situations where the laws of physics break down, like at the centre of a black hole).

However implausible a worldwide AI revolution might seem, Holmes will tell you there is all the difference in the world between the impossible and the merely improbable. Even if highly unlikely, the seismic impact of a technological singularity is such that it deserves to be taken seriously, both in estimating and mitigating its likelihood, and in planning potential responses. Cambridge University’s new Centre for the Study of Existential Risk has been established to do just this, with Hawking and ex-Astronomer Royal Sir Martin Rees among the founders.

Dystopian eventualities aside, the singularity concept is inherently interesting because it pushes us to examine what we mean by being human (as my colleague Murray Shanahan argues in a forthcoming book). While intelligence is part of the story, being human is also about having a body and an internal physiology; we are self-sustaining flesh bags. It is also about consciousness; we are each at the centre of a subjective universe of experience. Current AI has little to say about these issues, and it is far from clear whether truly autonomous and self-driven AI is possible in their absence. The ethical minefield deepens when we realize that AIs becoming conscious would entail ethical responsibilities towards them, regardless of their impact on us.

At the moment, AI like any powerful technology has the potential for good and ill, long before any singularity is reached. On the dark side, AI gives us the tools to wreak our own havoc by distancing ourselves from the consequences of our actions. Remote controlled military drones already reduce life-and-death decisions to the click of a button: with enhanced AI there would be no need for the button. On the side of the angels, AI can make our lives healthier and happier, and our world more balanced and sustainable, by complementing our natural mental prowess with the unprecedented power of computation. The pendulum may swing from the singularity-mongers to the techno-mavens; we should listen to both, but proceed serenely with the angels.

This post is an amended version of a commissioned comment for The Guardian: Why we must not stall technological progress, despite its threat to humanity, published on December 03, 2014. It was part of a flurry of comments occasioned by a BBC interview with Stephen Hawking, which you can listen to here. I’m actually quite excited to see Eddie Redmayne’s rendition of the great physicist.

Training synaesthesia: How to see things differently in half-an-hour a day

Image courtesy of Phil Wheeler Illustrations

Can you learn to see the world differently? Some people already do. People with synaesthesia experience the world very differently indeed, in a way that seems linked to creativity, and which can shed light on some of the deepest mysteries of consciousness. In a paper published in Scientific Reports, we describe new evidence suggesting that non-synaesthetes can be trained to experience the world much like natural synaesthetes. Our results have important implications for understanding individual differences in conscious experiences, and they extend what we know about the flexibility (‘plasticity’) of perception.

Synaesthesia means that an experience of one kind (like seeing a letter) consistently and automatically evokes an experience of another kind (like seeing a colour), when the normal kind of sensory stimulation for the additional experience (the colour) isn’t there. This example describes grapheme-colour synaesthesia, but this is just one among many fascinating varieties. Other synaesthetes experience numbers as having particular spatial relationships (spatial form synaesthesia, probably the most common of all). And there are other more unusual varieties like mirror-touch synaesthesia, where people experience touch on their own bodies when they see someone else being touched, and taste-shape synaesthesia, where triangles might taste sharp, and ellipses bitter.

The richly associative nature of synaesthesia, and the biographies of famous case studies like Vladimir Nabokov and Wassily Kandinsky (or, as the Daily Wail preferred: Lady Gaga and Pharrell Williams), has fuelled its association with creativity and intelligence. Yet the condition is remarkably common, with recent estimates suggesting about 1 in 23 people have some form of synaesthesia. But how does it come about? Is it in your genes, or is it something you can learn?

It is widely believed that Kandinsky was synaesthetic. For instance he said: “Colour is the keyboard, the eyes are the harmonies, the soul is the piano with many strings. The artist is the hand that plays, touching one key or another, to cause vibrations in the soul”

As with most biological traits the truth is: a bit of both. But this still raises the question of whether being synaesthetic is something that can be learnt, even as an adult.

There is a rather long history of attempts to train people to be synaesthetic. Perhaps the earliest example was by E.L. Kelly who in 1934 published a paper with the title: An experimental attempt to produce artificial chromaesthesia by the technique of the conditioned response. While this attempt failed (the paper says it is “a report of purely negative experimental findings”) things have now moved on.

More recent attempts, for instance the excellent work of Olympia Colizoli and colleagues in Amsterdam, have tried to mimic (grapheme-colour) synaesthesia by having people read books in which some of the letters are always coloured in with particular colours. They found that it was possible to train people to display some of the characteristics of synaesthesia, like being slower to name coloured letters when they were presented in a colour conflicting with the training (the ‘synaesthetic Stroop’ effect). But crucially, until now no study has found that training could lead to people actually reporting synaesthesia-like conscious experiences.

An extract from the ‘coloured reading’ training material, used in our study, and similar to the material used by Colizoli and colleagues. The text is from James Joyce. Later in training we replaced some of the letters with (appropriately) coloured blocks to make the task even harder.

Our approach was based on brute force. We decided to dramatically increase the length and rigour of the training procedure that our (initially non-synaesthetic) volunteers undertook. Each of them (14 in all) came in to the lab for half-an-hour each day, five days a week, for nine weeks! On each visit they completed a selection of training exercises designed to cement specific associations between letters and colours. Crucially, we adapted the difficulty of the tasks to each volunteer and each training session, and we also gave them financial rewards for good performance. Over the nine-week regime, some of the easier tasks were dropped entirely, and other more difficult tasks were introduced. Our volunteers also had homework to do, like reading the coloured books. Our idea was that the training must always be challenging, in order to have a chance of working.

The results were striking. At the end of the nine-week exercise, our dedicated volunteers were tested for behavioural signs of synaesthesia, and – crucially – were also asked about their experiences, both inside and outside the lab. Behaviourally they all showed strong similarities with natural-born synaesthetes. This was most striking in measures of ‘consistency’, a test which requires repeated selection of the colour associated with a particular letter, from a palette of millions.

The consistency test for synaesthesia. This example is from David Eagleman’s popular ‘synaesthesia battery’.

Natural-born synaesthetes show very high consistency: the colours they pick (for a given letter) are very close to each other in colour space, across repeated selections. This is important because consistency is very hard to fake. The idea is that synaesthetes can simply match a colour to their experienced ‘concurrent’, whereas non-synaesthetes have to rely on less reliable visual memory, or other strategies.
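
To give a feel for how a consistency score of this kind can be computed, here is a simplified sketch (mine, not the exact scoring used in Eagleman’s battery or in our paper): average the distances between repeated colour choices for each letter, so that tight clusters give low scores and scattered choices give high ones.

```python
# Simplified consistency measure for letter-colour associations: the mean
# pairwise distance between repeated colour picks, averaged over letters.
# Real batteries typically work in a perceptual colour space; plain RGB is
# used here only to keep the sketch self-contained.
from itertools import combinations
from math import dist

def consistency_score(picks):
    """picks maps each letter to a list of (R, G, B) choices across repeats."""
    per_letter = []
    for letter, colours in picks.items():
        pairs = list(combinations(colours, 2))
        per_letter.append(sum(dist(a, b) for a, b in pairs) / len(pairs))
    return sum(per_letter) / len(per_letter)

synaesthete_like = {"a": [(250, 10, 10), (245, 15, 8), (252, 12, 14)],
                    "b": [(10, 10, 240), (12, 8, 235), (9, 14, 244)]}
guessing_like    = {"a": [(250, 10, 10), (30, 200, 40), (90, 90, 90)],
                    "b": [(10, 10, 240), (200, 180, 20), (120, 0, 120)]}

print(consistency_score(synaesthete_like))  # small distance: highly consistent
print(consistency_score(guessing_like))     # large distance: memory or guessing
```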

Our trained quasi-synaesthetes passed the consistency test with flying colours (so to speak). They also performed much like natural synaesthetes on a whole range of other behavioural tests, including synaesthetic Stroop, and a ‘synaesthetic conditioning’ task which shows that trained colours can elicit automatic physiological responses, like increases in skin conductance. Most importantly, most (8/14) of our volunteers described colour experiences much like those of natural synaesthetes (only 2 reported no colour phenomenology at all). Strikingly, some of these experiences took place even outside the lab:

“When I was walking into campus I glanced at the University of Sussex sign and the letters were coloured” [according to their trained associations]

Like natural synaesthetes, some of our volunteers seemed to experience the concurrent colour ‘out in the world’ while others experienced the colours more ‘in the head’:

“When I am looking at a letter I see them in the trained colours”

“When I look at the letter ‘p’ … its like the inside of my head is pink”

For grapheme colour synaesthetes, letters evoke specific colour experiences. Most of our trained quasi-synaesthetes reported similar experiences. This image is however quite misleading. Synaesthetes (natural born or not) also see the letters in their actual colour, and they typically know that the synaesthetic colour is not ‘real’. But that’s another story.

These results are very exciting, suggesting for the first time that with sufficient training, people can actually learn to see the world differently. Of course, since they are based on subjective reports about conscious experiences, they are also the hardest to independently verify. There is always the slight worry that our volunteers said what they thought we wanted to hear. Against this worry, we were careful to ensure that none of our volunteers knew the study was about synaesthesia (and on debrief, none of them did!). Also, similar ‘demand characteristic’ concerns could have affected other synaesthesia training studies, yet none of these led to descriptions of synaesthesia-like experiences.

Our results weren’t just about synaesthesia. A fascinating side effect was that our volunteers registered a dramatic increase in IQ, gaining an average of about 12 IQ points (compared to a control group which didn’t undergo training). We don’t yet know whether this increase was due to the specifically synaesthetic aspects of our regime, or just intensive cognitive training in general. Either way, our findings provide support for the idea that carefully designed cognitive training could enhance normal cognition, or even help remedy cognitive deficits or decline. More research is needed on these important questions.

What happened in the brain as a result of our training? The short answer is: we don’t know, yet. While in this study we didn’t look at the brain, other studies have found changes in the brain after similar kinds of training. This makes sense: changes in behaviour or in perception should be accompanied by neural changes of some kind. At the same time, natural-born synaesthetes appear to have differences both in the structure of their brains, and in their activity patterns. We are now eager to see what kind of neural signatures underlie the outcome of our training paradigm. The hope is, that because our study showed actual changes in perceptual experience, analysis of these signatures will shed new light on the brain basis of consciousness itself.

So, yes, you can learn to see the world differently. To me, the most important aspect of this work is that it emphasizes that each of us inhabits our own distinctive conscious world. It may be tempting to think that while different people – maybe other cultures – have different beliefs and ways of thinking, still we all see the same external reality. But synaesthesia – along with emerging theories of ‘predictive processing’ – shows that the differences go much deeper. We each inhabit our own personalised universe, albeit one which is partly defined and shaped by other people. So next time you think someone is off in their own little world: they are.


The work described here was led by Daniel Bor and Nicolas Rothen, and is just one part of an energetic inquiry into synaesthesia taking place at Sussex University and the Sackler Centre for Consciousness Science. With Jamie Ward and (recently) Julia Simner also working here, we have a uniquely concentrated expertise in this fascinating area. In other related work I have been interested in why synaesthetic experiences lack a sense of reality and how this gives an important clue about the nature of ‘perceptual presence’. I’ve also been working on the phenomenology of spatial form synaesthesia, and whether synaesthetic experiences can be induced through hypnosis. And an exciting brain imaging study of natural synaesthetes will shortly hit the press! Nicolas Rothen is an authority on the relationship between synaesthesia and memory, and Jamie Ward and Julia Simner have way too many accomplishments in this field to mention. (OK, Jamie has written the most influential review paper in the area – featuring a lot of his own work – and Julia (with Ed Hubbard) has written the leading textbook. That’s not bad to start with.)


Our paper, Adults can be Trained to Acquire Synesthetic Experiences (sorry for US spelling) is published (open access, free!) in Scientific Reports, part of the Nature family. The authors were Daniel Bor, Nicolas Rothen, David Schwartzman, Stephanie Clayton, and Anil K. Seth. There has been quite a lot of media coverage of this work, for instance in the New Scientist and the Daily Fail. Other coverage is summarized here.

Eye Benders: the science of seeing and believing, wins Royal Society prize!


An unexpected post. I’m very happy to have learnt today that the book Eye Benders has won the 2014 Royal Society Young People’s Book Prize. Eye Benders was written by Clive Gifford (main author) and me (consultant). It was published by Ivy Press, who are also the redoubtable publishers of the so-far-prizeless but nonetheless worthy 30 Second Brain. A follow-up to Eye Benders, Brain Twister, is in the works: more brain, fewer optical illusions, but the same high-quality young-person neuroscience fare.

The Royal Society says this about the prize: “Each year the Royal Society awards a prize to the best book that communicates science to young people. The prize aims to inspire young people to read about science and promotes the best science writing for the under-14s.”

This year, the shortlist was chosen by Professor James Hough FRS, Dr Rhaana Starling, Mr Michael Heyes, Professor Iain Stewart and Dr Anjana Ahuja. Well done all, good shortlisting.  More importantly, the winner was chosen by groups of young persons themselves.  Here is what some of the 2014 young people had to say about Eye Benders:

Matt, 12 said “Science from a different perspective. Factual and interesting – a spiral of a read!”

Beth, 14 said “It was way, way cool!”

Ethan, 12 said “The illustrations were absolutely amazing”

Joe, 12 said “A great, well written and well thought-out book; the illustrations are clear, detailed and amazing. The front cover is beautiful.”

Felix, 10 said “Eye popping and mind-blowingly fun!”

So there it is. Matt and friends have spoken, and here is a picture of Clive accepting the award in Newcastle (alas I wasn’t there), accompanied by a young person being enthused:


Here’s a sneak at what the book looks like, on the inside:


A personal note: I remember well going through the final layouts for Eye Benders, heavily dosed on painkillers in hospital in Barcelona following emergency surgery, while at the same time my father was entering his final weeks back in Oxfordshire. A dark time. It’s lovely, if bittersweet, to see something like this emerge from it.

Other coverage:

GrrlScientist in The Guardian.
Optical illusion book wins Royal Society prize
Clive shares some of the best Eye Benders illusions online
Royal Society official announcement
University of Sussex press release

I just dropped in (to see what condition my condition was in): How ‘blind insight’ changes our view of metacognition


Image from 30 Second Brain, Ivy Press, available at all good booksellers.

Neuroscientists have long appreciated that people can make accurate decisions without knowing they are doing so. This is particularly impressive in blindsight: a phenomenon in which people with damage to the visual parts of their brain can still make accurate visual discriminations while claiming not to see anything. But even in normal life it is quite possible to make good decisions without having reliable insight into whether you are right or wrong.

In a paper published this week in Psychological Science, our research group – led by Ryan Scott – has for the first time shown the opposite phenomenon: blind insight. This is the situation in which people know whether or not they’ve made accurate decisions, even though they can’t make decisions accurately!

This is important because it changes how we think about metacognition. Metacognition, strictly speaking, is ‘knowing about knowing’. When we make a perceptual judgment, or a decision of any kind, we typically have some degree of insight into whether our decision was correct or not. This is metacognition, which in experiments is usually measured by asking people how confident they are in a previous decision. Good metacognitive performance is indicated by high correlations between confidence and accuracy, which can be quantified in various ways.

Most explanations of metacognition assume that metacognitive judgements are based on the same information as the original (‘first-order’) decision. For example, if you are asked to decide whether a dim light was present or not, you might make a (first-order) judgment based on signals flowing from your eyes to your brain. Perhaps your brain sets a threshold below which you will say ‘No’ and above which you will say ‘Yes’. Metacognitive judgments are typically assumed to work on the same data. If you are asked whether you were guessing or were confident, maybe you will set additional thresholds a bit further apart. The idea is that your brain may need more sensory evidence to be confident in judging that a dim light was in fact present, than when merely guessing that it was.

This way of looking at things is formalized by signal detection theory (SDT). The nice thing about SDT is that it can give quantitative mathematical expressions for how well a person can make both first-order and metacognitive judgements, in ways which are not affected by individual biases to say ‘yes’ or ‘no’, or ‘guess’ versus ‘confident’. (The situation is a bit trickier for metacognitive confidence judgements but we can set these details aside for now: see here for the gory details). A simple schematic of SDT is shown below.


Signal detection theory. The ‘signal’ refers to sensory evidence and the curves show hypothetical probability distributions for stimulus present (solid line) and stimulus absent (dashed line). If a stimulus (e.g., a dim light) is present, then the sensory signal is likely to be stronger (higher) – but because sensory systems are assumed to be noisy (probabilistic), some signal is likely even when there is no stimulus. The difficulty of the decision is shown by the overlap of the distributions. The best strategy for the brain is to place a single ‘decision criterion’ midway between the peaks of the two distributions, and to say ‘present’ for any signal above this threshold, and ‘absent’ for any signal below. This determines the ‘first order decision’. Metacognitive judgements are then specified by additional ‘confidence thresholds’ which bracket the decision criterion. If the signal lies in between the two confidence thresholds, the metacognitive response is ‘guess’; if it lies to the two extremes, the metacognitive response is ‘confident’. The mathematics of SDT allow researchers to calculate ‘bias free’ measures of how well people can make both first-order and metacognitive decisions (these are called ‘d-primes’). As well as providing a method for quantifying decision making performance, the framework is also frequently assumed to say something about what the brain is actually doing when it is making these decisions. It is this last assumption that our present work challenges.
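
For the programmatically minded, here is a toy simulation of the single-signal SDT picture just described (the parameters are arbitrary choices of mine): one noisy signal drives both the first-order ‘present/absent’ decision, via a decision criterion, and the confidence report, via a guess zone around that criterion.

```python
# Toy signal detection theory (SDT) simulation: a single noisy signal drives
# both the first-order decision (via a decision criterion) and the
# metacognitive report (via confidence thresholds bracketing that criterion).
import random

random.seed(1)
N = 20000
D_PRIME = 1.0            # separation between 'absent' and 'present' distributions
CRITERION = D_PRIME / 2  # first-order threshold, midway between the two peaks
GUESS_ZONE = 0.5         # signals within this distance of the criterion -> 'guess'

correct_total = confident_total = confident_correct = 0
for _ in range(N):
    present = random.random() < 0.5
    signal = random.gauss(D_PRIME if present else 0.0, 1.0)
    say_present = signal > CRITERION
    correct = (say_present == present)
    confident = abs(signal - CRITERION) > GUESS_ZONE
    correct_total += correct
    confident_total += confident
    confident_correct += (confident and correct)

print("first-order accuracy:", correct_total / N)                             # well above 0.5
print("accuracy on confident trials:", confident_correct / confident_total)   # higher still
```

On this one-signal account, confidence goes to trials where the signal falls far from the criterion, so accuracy on confident trials tends to sit above overall accuracy; that link is exactly what the blind insight result, described below, breaks.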

On SDT it is easy to see that one can make above-chance first order decisions while displaying low or no metacognition. One way to do this would be to set your metacognitive thresholds very far apart, so that you are always guessing. But there is no way, on this theory (without making various weird assumptions), that you could be at chance in your first-order decisions, yet above chance in your metacognitive judgements about these decisions.

Surprisingly, until now, no-one had actually checked to see whether this could happen in practice. This is exactly what we did, and this is exactly what we found. We analysed a large amount of data from a paradigm called artificial grammar learning, which is a workhorse in psychological laboratories for studying unconscious learning and decision-making. In artificial grammar learning people are shown strings of letters and have to decide whether each string belongs to ‘grammar A’ or ‘grammar B’. Each grammar is just an arbitrary set of rules determining allowable patterns of letters. Over time, most people can learn to classify letter strings at better than chance. However, over a large sample, there will always be some people that can’t: for these unfortunates, their first-order performance remains at ~50% (in SDT terms they have a d-prime not different from zero).


Artificial grammar learning. Two rule sets (shown on the left) determine which letter strings belong to ‘grammar A’ or ‘grammar B’. Participants are first shown examples of strings generated by one or the other grammar (training). Importantly, they are not told about the grammatical rules, and in most cases they remain unaware of them. Nonetheless, after some training they are able to successfully (i.e., above chance) classify novel letter strings appropriately (testing).
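
For readers who haven’t met the paradigm, here is a toy sketch of what an artificial grammar can look like in code (an invented finite-state grammar of my own, purely for illustration; it is not one of the rule sets used in the experiments):

```python
# Toy 'artificial grammar': a small finite-state machine whose transitions
# define which letter strings count as grammatical. Illustrative only.
import random

# state -> list of (letter, next_state); next_state None means the string may end
GRAMMAR_A = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 2)],
    2: [("V", 3), ("X", 1)],
    3: [("V", None), ("S", None)],
}

def generate(grammar, max_len=8):
    """Random walk through the grammar, emitting one letter per transition."""
    state, letters = 0, []
    while state is not None and len(letters) < max_len:
        letter, state = random.choice(grammar[state])
        letters.append(letter)
    return "".join(letters)

def is_grammatical(string, grammar):
    """True if every letter of the string can be traced through the transitions."""
    states = {0}
    for letter in string:
        states = {nxt for s in states if s is not None
                  for (lt, nxt) in grammar[s] if lt == letter}
        if not states:
            return False
    return True

print([generate(GRAMMAR_A) for _ in range(5)])   # grammatical training strings
print(is_grammatical("TSXV", GRAMMAR_A))         # True
print(is_grammatical("TVVS", GRAMMAR_A))         # False: 'V' cannot follow 'T'
```

Participants never see rules like these, only the strings they generate, which is what makes above-chance classification (and confidence about it) so interesting.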

Crucially, subjects in our experiments were asked to make confidence judgments along with their first-order grammaticality judgments. Focusing on those subjects who remained at chance in their first-order judgements, we found that they still showed above-chance metacognition. That is, they were more likely to be confident when they were (by chance) right, than when they were (by chance) wrong. We call this novel finding blind insight.

The discovery of blind insight changes the way we think about decision-making. Our results show that theoretical frameworks based on SDT are, at the very least, incomplete. Metacognitive performance during blind insight cannot be explained by simply setting different thresholds on a single underlying signal. Additional information, or substantially different transformations of the first-order signal, are needed. Exactly what is going on remains an open question. Several possible mechanisms could account for our results. One exciting possibility appeals to predictive processing, which is the increasingly influential idea that perception depends on top-down predictions about the causes of sensory signals. If top-down influences are also involved in metacognition, they could carry the additional information needed for blind insight. This would mean that metacognition, like perception, is best understood as a process of probabilistic inference.
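
To see in miniature why some extra information is needed, here is a deliberately artificial simulation (my own toy construction, not a model proposed in the paper): first-order responses are driven by pure noise, while confidence reflects whether the response agrees with a second, genuinely informative signal. The result is chance-level accuracy with above-chance metacognition, i.e. the blind insight pattern.

```python
# Toy 'blind insight' pattern: the first-order decision is based on pure noise
# (so accuracy is at chance), but confidence is computed by comparing that
# decision against a second signal that does carry information about the true
# category. Confidence then predicts correctness even though decisions don't.
import random

random.seed(2)
N = 20000
correct_total = confident_total = confident_correct = 0
for _ in range(N):
    grammar_a = random.random() < 0.5                 # true category of the string
    noise = random.gauss(0.0, 1.0)                    # all the first-order response sees
    informative = random.gauss(1.0 if grammar_a else -1.0, 1.0)  # reaches metacognition only
    respond_a = noise > 0                             # chance-level first-order decision
    correct = (respond_a == grammar_a)
    confident = ((informative > 0) == respond_a)      # confident when the signals agree
    correct_total += correct
    confident_total += confident
    confident_correct += (confident and correct)

print("first-order accuracy:", correct_total / N)                             # ~0.5, i.e. chance
print("accuracy on confident trials:", confident_correct / confident_total)   # well above 0.5
```

Nothing about this toy commits the paper’s authors to a particular mechanism; it simply shows that the pattern cannot arise from thresholds on a single signal, but falls out easily once a second source of information reaches the metacognitive judgment.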


In predictive processing theories of brain function, perception depends on top-down predictions (blue) about the causes of sensory signals. Sensory signals carry ‘prediction errors’ (magenta) which update top-down predictions according to principles of Bayesian inference. Maybe a similar process underlies metacognition. Image from 30 Second Brain, Ivy Press.

This brings us to consciousness (of course). Metacognitive judgments are often used as a proxy for consciousness, on the logic that confident decisions are assumed to be based on conscious experiences of the signal (e.g., the dim light was consciously seen), whereas guesses signify that the signal was processed only unconsciously. If metacognition involves top-down inference, this raises the intriguing possibility that metacognitive judgments actually give rise to conscious experiences, rather than just provide a means for reporting them. While speculative, this idea fits neatly with the framework of predictive processing which says that top-down influences are critical in shaping the nature of perceptual contents.

The discovery of blindsight many years ago has substantially changed the way we think about vision. Our new finding of blind insight may similarly change the way we think about metacognition, and about consciousness too.

The paper is published open access (i.e. free!) in Psychological Science. The authors were Ryan Scott, Zoltan Dienes, Adam Barrett, Daniel Bor, and Anil K Seth. There are also accompanying press releases and coverage:

Sussex study reveals how ‘blind insight’ confounds logic.  (University of Sussex, 13/11/2014)
People show ‘blind insight’ into decision making performance (Association for Psychological Science, 13/11/2014)

The Human Brain Project risks becoming a missed opportunity

Image concept of a network of neurons in the human brain.

The brain is much on our minds at the moment. David Cameron is advocating a step-change in dementia research, brain-computer interfaces promise new solutions to paralysis, and the ongoing plight of Michael Schumacher has reminded us of the terrifying consequences of traumatic brain injury. Articles in scholarly journals and in the media are decorated with magical images of the living brain, like the one shown below, to illuminate these stories. Yet, when asked, most neuroscientists will say we still know very little about how the brain works, or how to fix it when it goes wrong.

A diffusion tensor image showing some of the main pathways along which brain connections are organized.

The €1.2bn Human Brain Project (HBP) is supposed to change all this. Funded by the European Commission, the HBP brings together more than 80 research institutes in a ten-year endeavour to unravel the mysteries of the brain, and to emulate its powers in new technologies. Following examples like the Human Genome Project and the Large Hadron Collider (where Higgs’ elusive boson was finally found), the idea is that a very large investment will deliver very significant results. But now a large contingent of prominent European neuroscientists are rebelling against the HBP, claiming that its approach is doomed to fail and will undermine European neuroscience for decades to come.

Stepping back from the fuss, it’s worth thinking whether the aims of the HBP really make sense. Sequencing the genome and looking for Higgs were both major challenges, but in these cases the scientific community agreed on the objectives, and on what would constitute success. There is no similar consensus among neuroscientists.

It is often said that the adult human brain is the most complex object in the universe. It contains about 90 billion neurons and a thousand times more connections, so that if you counted one connection each second it would take about three million years to finish. The challenge for neuroscience is to understand how this vast, complex, and always changing network gives rise to our sensations, perceptions, thoughts, actions, beliefs, desires, our sense of self and of others, our emotions and moods, and all else that guides our behaviour and populates our mental life, in health and in disease. No single breakthrough could ever mark success across such a wide range of important problems.
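
The arithmetic behind that figure is straightforward:

```latex
9\times10^{10}\ \text{neurons} \times 10^{3}\ \text{connections per neuron}
  \approx 9\times10^{13}\ \text{connections},
\qquad
\frac{9\times10^{13}\ \text{s}}{3.15\times10^{7}\ \text{s/year}} \approx 2.9\times10^{6}\ \text{years}.
```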

The central pillar of the HBP approach is to build computational simulations of the brain. Befitting the huge investment, these simulations would be of unprecedented size and detail, and would allow brain scientists to integrate their individual findings into a collective resource. What distinguishes the HBP – besides the money – is its aggressively ‘bottom up’ approach: the vision is that by taking care of the neurons, the big things – thoughts, perceptions, beliefs, and the like – will take care of themselves. As such, the HBP does not set out to test any specific hypothesis or collection of hypotheses, marking another distinction with common scientific practice.

Could this work? Certainly, modern neuroscience is generating an accelerating data deluge demanding new technologies for visualisation and analysis. This is the ‘big data’ challenge now common in many settings. It is also clear that better pictures of the brain’s wiring diagram (the ‘connectome’) will be essential as we move ahead. On the other hand, more detailed simulations don’t inevitably lead to better understanding. Strikingly, we don’t fully understand the brain of the tiny worm Caenorhabditis elegans even though it has only 302 neurons and the wiring diagram is known exactly. More generally, a key ability in science is to abstract away from the specifics to see more clearly what underlying principles are at work. In the limit, a perfectly accurate model of the brain may become as difficult to understand as the brain itself, as Borges long ago noted when describing the tragic uselessness of the perfectly detailed map.

Jorge Luis Borges at Harvard University, 1967/8

Neuroscience is, and should remain, a broad church. Understanding the brain does not reduce to simulating the collective behaviour of all its minuscule parts, however interesting a part of the final story this might become. Understanding the brain means grasping complex interactions cross-linking many different levels of description, from neurons to brain regions to individuals to societies. It means complementing bottom-up simulations with new theories describing what the brain is actually doing, when its neurons are buzzing merrily away. It means designing elegant experiments that reveal how the mind constructs its reality, without always worrying about the neuronal hardware underneath. Sometimes, it means aiming directly for new treatments for devastating neurological and psychiatric conditions like coma, paralysis, dementia, and depression.

Put this way, neuroscience has enormous potential to benefit society, well deserving of high profile and large-scale support. It would be a great shame if the Human Brain Project, through its singular emphasis on massive computer simulation, ends up as a lightning rod for dissatisfaction with ‘big science’ rather than fostering a new and powerfully productive picture of the biological basis of the mind.

This article first appeared online in The Guardian on July 8 2014.  It appeared in print in the July 9 edition, on page 30 (comment section).

Post publication notes:

The HBP leadership have published a response to the open letter here. I didn’t find it very convincing. There have been a plethora of other commentaries on the HBP, as it comes up to its first review.  I can’t provide an exhaustive list but I particularly liked Gary Marcus’ piece in the New York Times (July 11). There was also trenchant criticism in the editorial pages of Nature.  Paul Verschure has a nice TED talk addressing some of the challenges facing big data, encompassing the HBP.


The importance of being Eugene: What (not) passing the Turing test really means

Eugene Goostman, chatbot.

Could you tell the difference between a non-native-English-speaking 13-year-old Ukrainian boy and a computer program? On Saturday, at the Royal Society, one out of three human judges was fooled. So, it has been widely reported, the iconic Turing Test has been passed and a brave new era of Artificial Intelligence (AI) begins.

Not so fast. While this event marks a modest improvement in the abilities of so-called ‘chatbots’ to engage fluently with humans, real AI requires much more.

Here’s what happened. At a competition held in central London, thirty judges (including politician Lord Sharkey, computer scientist Kevin Warwick, and Red Dwarf actor Robert Llewellyn) interacted with ‘Eugene Goostman’ in a series of five-minute text-only exchanges. As a result, 33% of the judges (reports do not yet say which, though tweets implicate Llewellyn) were persuaded that ‘Goostman’ was real. The other 67% were not. It turns out that ‘Eugene Goostman’ is not a teenager from Odessa, but a computer program, a ‘chatbot’ created by computer engineers Vladimir Veselov and Eugene Demchenko. According to his creators, ‘Goostman’ was ‘born’ in 2001, owns a pet guinea pig, and has a gynaecologist father.

The Turing Test, devised by computer science pioneer and codebreaker Alan Turing, was proposed as a practical alternative to the philosophically challenging and possibly absurd question, “can machines think?”. In one popular interpretation, a human judge interacts with two players – a human and a machine – and must decide which is which. A candidate machine passes the test when the judge consistently fails to distinguish the one from the other. Interactions are limited to exchanges of strings of text, to make the competition fair (more on this later; it’s also worth noting that Turing’s original idea was more complex than this, but let’s press on). While there have been many previous attempts and prior claims about passing the test, the Goostman-bot arguably outperformed its predecessors, leading Warwick to noisily proclaim “We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday”.
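For the procedurally minded, here is a minimal sketch of that protocol in Python. It is purely illustrative: the placeholder reply functions, the simulated judge, and the thirty-session tally are all invented for this sketch, and bear no relation to how the Royal Society event was actually run.

```python
import random

# Toy sketch of the imitation-game protocol described above. The reply
# functions and the simulated judge are placeholders invented for
# illustration; in a real test the judge is a person conversing over
# text with a live human and a live chatbot.

def machine_reply(question):
    return "I'd rather not say. Do you like guinea pigs?"   # stand-in chatbot

def human_reply(question):
    return "Hmm, I'd have to think about that one."         # stand-in human

def run_session(questions):
    """One judge, one text-only session with two hidden players."""
    # Randomly assign the hidden players to channels A and B.
    if random.random() < 0.5:
        channels = {"A": machine_reply, "B": human_reply}
    else:
        channels = {"A": human_reply, "B": machine_reply}
    transcript = [(q, channels["A"](q), channels["B"](q)) for q in questions]
    # A human judge would now read the transcript and guess which channel
    # hides the machine; here we simulate a judge who cannot tell the two
    # apart and so guesses at random.
    guess = random.choice(["A", "B"])
    truth = "A" if channels["A"] is machine_reply else "B"
    return guess != truth   # True if the judge mistook the machine for the human

questions = ["Where are you from?",
             "What did you do yesterday?",
             "Explain a joke you like."]
fooled = sum(run_session(questions) for _ in range(30))
print(f"{fooled}/30 judges fooled ({100 * fooled / 30:.0f}%)")
```

Even this toy makes the point pursued below: whatever number comes out, it is a statement about the judges’ verdicts, not about anything going on inside the machine.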

Alan Turing’s seminal 1950 paper

This is a major overstatement which does a grave disservice to the field of AI. While Goostman may represent progress of a sort – for instance, this year’s competition did not place any particular restrictions on conversation topics – some context is badly needed.

An immediate concern is that Goostman is gaming the system. By imitating a non-native speaker, the chatbot can make its clumsy English expected rather than unusual. Hence its reaction to winning the prize: “I feel about beating the Turing test in quite convenient way”. And its assumed age of thirteen lowers expectations about satisfactory responses to questions. As Veselov put it, “Thirteen years old is not too old to know everything and not too young to know nothing.” While Veselov’s strategy is cunning, it also shows that the Turing test is as much a test of the judges’ abilities to make suitable inferences, and to ask probing questions, as it is of the capabilities of intelligent machinery.

More importantly, fooling 33% of judges over five-minute sessions was never the standard intended by Alan Turing for passing his test – it was merely his prediction about how computers might fare within about 50 years of his proposal. (In this, as in much else, he was not far wrong: the original Turing test was described in 1950.) A more natural criterion, as emphasized by the cognitive scientist Stevan Harnad, is for a machine to be consistently indistinguishable from human counterparts over extended periods of time – in other words, to have the generic performance capacity of a real human being. This more stringent benchmark is still a long way off.

Perhaps the most significant limitation exposed by Goostman is the assumption that ‘intelligence’ can be instantiated in the disembodied exchange of short passages of text. On one hand this restriction is needed to enable interesting comparisons between humans and machines in the first place. On the other, it simply underlines that intelligent behaviour is intimately grounded in the tight couplings and blurry boundaries separating and joining brains, bodies, and environments. If Saturday’s judges had seen Goostman, or even an advanced robotic avatar voicing its responses, there would have been no question of any confusion. Indeed, the robots that are today physically most similar to humans tend to elicit sensations like anxiety and revulsion, not camaraderie. This is the ‘uncanny valley’ – a term coined by robotics professor Masahiro Mori in 1970 (with a nod to Freud) and exemplified by the ‘geminoids’ built by Hiroshi Ishiguro.

Hiroshi Ishiguro and his geminoid. Another imitation game.

A growing appreciation of the importance of embodied, embedded intelligence explains why nobody is claiming that human-like robots are among us, or are in any sense imminent. Critics of AI consistently point to the notable absence of intelligent robots capable of fluent interactions with people, or even with mugs of tea. In a recent blog post I argued that new developments in AI are increasingly motivated by the near-forgotten discipline of cybernetics, which held that prediction and control were at the heart of intelligent behaviour – not barefaced imitation as in Turing’s test (and, from a different angle, in Ishiguro’s geminoids). While these emerging cybernetics-inspired approaches hold great promise (and are attracting the interest of tech giants like Google), there is still plenty to be done.

These ideas have two main implications for AI. The first is that true AI necessarily involves robotics. Intelligent systems are systems that flexibly and adaptively interact with complex, dynamic, and often social environments. Reducing intelligence to short context-free text-based conversations misses the target by a country mile. The second is that true AI should focus not only on the outcome (i.e., whether a machine or robot behaves indistinguishably from a human or other animal) but also on the process by which the outcome is attained. This is why considerable attention within AI has always been paid to understanding, and simulating, how real brains work, and how real bodies behave.

How the leopard got its spots: Turing’s chemical basis of morphogenesis.

Turing of course did much more than propose an interesting but ultimately unsatisfactory (and often misinterpreted) intelligence test. He laid the foundations for modern computer science, he saved untold lives through his prowess in code breaking, and he refused to be cowed by the deep prejudices against homosexuality prevalent in his time, losing his own life in the bargain. He was also a pioneer in theoretical biology: his work in morphogenesis showed how simple interactions could give rise to complex patterns during animal development. And he was a central figure in the emerging field of cybernetics, where he recognized the deep importance of embodied and embedded cognition. The Turing of 1950 might not recognize much of today’s technology, but he would not have been fooled by Goostman.
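To give a flavour of the morphogenesis work, here is a minimal reaction-diffusion sketch in Python/NumPy. It uses the Gray-Scott model – a later relative of Turing’s 1952 equations rather than his original formulation – and the parameter values are illustrative choices, not Turing’s; the point is simply that two diffusing, interacting chemicals can self-organise into leopard-like markings.

```python
import numpy as np

# Minimal reaction-diffusion sketch (Gray-Scott model): two chemicals U and V
# diffuse and react on a grid, and spot-like patterns emerge from an almost
# uniform starting state. Parameter values are illustrative only.

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
r = 8                                   # seed a small square of V in the centre
U[n//2-r:n//2+r, n//2-r:n//2+r] = 0.50
V[n//2-r:n//2+r, n//2-r:n//2+r] = 0.25

Du, Dv = 0.16, 0.08                     # diffusion rates (U spreads faster than V)
F, k = 0.035, 0.065                     # feed and kill rates

def laplacian(Z):
    # Five-point stencil with periodic (wrap-around) boundaries.
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(10000):                  # explicit Euler steps, dt = 1
    UVV = U * V * V
    U += Du * laplacian(U) - UVV + F * (1 - U)
    V += Dv * laplacian(V) + UVV - (F + k) * V

# V now holds a self-organised spotted pattern; view it with, e.g.,
# matplotlib.pyplot.imshow(V).
```

Different feed and kill rates give spots, stripes, or labyrinths – the same family of patterns Turing proposed as a chemical basis for animal markings.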

[Postscript: while Warwick & co have been very reluctant to release the transcript of Goostman’s 2014 performance, this recent Guardian piece has some choice dialogue from 2012, where Goostman polled at 28%, not far off Saturday’s 33%. This piece was updated on June 12 following a helpful dialogue with Aaron Sloman.]

Darwin’s Neuroscientist: Gerald M. Edelman, 1929-2014

Dr. Gerald M. Edelman, 1929-2014.

“The brain is wider than the sky.
For, put them side by side,
The one the other will contain,
With ease, and you beside.”

Dr. Gerald M. Edelman often used these lines from Emily Dickinson to introduce the deep mysteries of neuroscience and consciousness. Dr. Edelman (it was always ‘Dr.’), who has died in La Jolla, aged 84, was without doubt a scientific great. He was a Nobel laureate at the age of 43, a pioneer in immunology, embryology, molecular biology, and neuroscience, a shrewd political operator, and a Renaissance man of striking erudition who displayed a masterful knowledge of science, music, literature, and the visual arts, and who at one time could have been a concert violinist. A compelling raconteur, he quoted Woody Allen and Jascha Heifetz as readily as Linus Pauling and Ludwig Wittgenstein, and loved telling a good Jewish joke just as much as explaining the principles of neuronal selection. And he was my mentor from the time I arrived as a freshly minted Ph.D. at The Neurosciences Institute in San Diego, back in 2001. His influence in biology and the neurosciences is inestimable. While his loss marks the end of an era, his legacy is sure to continue.

Gerald Maurice Edelman was born in Ozone Park, New York City, in 1929, to parents Edward and Anna. He trained in medicine at the University of Pennsylvania, graduating cum laude in 1954. After an internship at the Massachusetts General Hospital and three years in the US Army Medical Corps in France, Edelman entered the doctoral program at Rockefeller University, New York. Staying at Rockefeller after his Ph.D., he became Associate Dean and Vincent Astor Distinguished Professor, and in 1981 he founded The Neurosciences Institute (NSI). In 1992 the NSI moved lock, stock, and barrel into new purpose-built laboratories in La Jolla, California, where Edelman continued as Director for more than twenty years. A dedicated man, he continued working at the NSI until a week before he died.

In 1972 Edelman won the Nobel Prize in Physiology or Medicine (shared with Rodney Porter, who worked independently) for showing how antibodies can recognize an almost infinite range of invading antigens. Edelman’s insight, the principles of which resonate throughout his entire career, was based on variation and selection: antibodies undergo a process of ‘evolution within the body’ in order to match novel antigens. Crucially, he performed definitive experiments on the chemical structure of antibodies to support his idea [1].

Dr. Edelman at Rockefeller University in 1972, explaining his model of gamma globulin.

Edelman then moved into embryology, discovering an important class of proteins known as ‘cell adhesion molecules’ [2]. Though this, too, was a major contribution, it was the biological basis of mind and consciousness – one of the ‘dark areas’ of science, where mystery reigned – that drew his attention for the rest of his long career. Over more than three decades Edelman developed his theory of neuronal group selection, also known as ‘neural Darwinism’, which again drew on principles of variation and selection, this time applied to brain development and dynamics [3-7]. The theory is rich and still underappreciated. At its heart is the realization that the brain is very different from a computer: as he put it, brains don’t work with ‘logic and a clock’. Instead, Edelman emphasized the rampantly ‘re-entrant’ connectivity of the brain, with massively parallel bidirectional connections linking most brain regions. Uncovering the implications of re-entry remains a profound challenge today.

The campus of The Neurosciences Institute in La Jolla, California.

Edelman was convinced that scientific breakthroughs require both sharp minds and inspiring environments. The NSI was founded as a monastery of science, supporting a small cadre of experimental and theoretical neuroscientists and enabling them to work on ambitious goals free from the immediate pressures of research funding and paper publication. This at least was the model, and Edelman struggled heroically to maintain its reality in the face of increasing financial pressures and the shifting landscape of academia. That he was able to succeed for so long attests to his political nous and focused determination as well as his intellectual prowess. I remember vividly the ritual lunches that exemplified life at the NSI. The entire scientific staff ate together at noon every day (except Fridays), at tables seemingly designed to hold just enough people so that the only common topic could be neuroscience; Edelman, of course, held court at one table, brainstorming and story-telling in equal measure. The NSI itself is a striking building, housing not only experimental laboratories but also a concert-grade auditorium. Science and art were, for Edelman, two manifestations of a fundamental urge towards creativity and beauty.

Edelman did not always take the easiest path through academic life. Among many rivalries, he enjoyed lively clashes with fellow Nobel laureate Francis Crick who, like Edelman himself, had turned his attention to the brain after resolving a central problem in a different area of biology. Crick once infamously referred to neural Darwinism as ‘neural Edelmanism’ [8], a criticism which nowadays seems less forceful as attention within the neurosciences increasingly focuses on neuronal population dynamics (just before his death in 2004, Crick met with Edelman and they put aside any remaining feelings of enmity). In 2003 both men published influential papers setting out their respective ideas on consciousness [9, 10]; these papers put the neuroscience of consciousness at last, and for good, back on the agenda.

The biological basis of consciousness had been central to Edelman’s scientific agenda from the late 1980s. Consciousness had long been considered beyond the reach of science; Edelman was at the forefront of its rehabilitation as a serious subject within biology. His approach was from the outset more subtle and sophisticated than those of his contemporaries. Rather than simply looking for ‘neural correlates of consciousness’ – brain areas or types of activity that happen to co-exist with conscious states – Edelman wanted to naturalize phenomenology itself. That is, he tried to establish formal mappings between phenomenological properties of conscious experience and homologous properties of neural dynamics. In short, this meant coming up with explanations rather than mere correlations, the idea being that such an approach would demystify the dualistic schism between ‘mind’ and ‘matter’ first invoked by Descartes. This approach was first outlined in his book The Remembered Present [5] and later amplified in A Universe of Consciousness, a work co-authored with Giulio Tononi [11]. It was this approach to consciousness that first drew me to the NSI and to Edelman, and I was not disappointed. These ideas, and the work they enabled, will continue to shape and define consciousness science for years to come.

My own memories of Edelman revolve entirely around life at the NSI. It was immediately obvious that he was not a distant boss who might leave his minions to get on with their research in isolation. He was generous with his time. I saw him almost every working day, and many discussions lasted long beyond their allotted duration. His dedication to detail sometimes took my breath away. On one occasion, while working on a paper together [12], I had fallen into the habit of giving him a hard copy of my latest effort each Friday evening. One Monday morning I noticed the appearance of a thick sheaf of papers on my desk. Over the weekend Edelman had cut and pasted – with scissors and glue, not Microsoft Word – paragraphs, sentences, and individual words, to almost entirely rewrite my tentative text. Needless to say, it was much improved.

The abiding memory of anyone who has spent time with Dr. Edelman is, however, not the scientific accomplishments, nor the achievements encompassed by the NSI, but instead the impression of an uncommon intellect moving more quickly and ranging more widely than seemed possible. The New York Times put it this way in a 2004 profile:

“Out of free-floating riffs, vaudevillian jokes, recollections, citations and patient explanations, out of the excited explosions of example and counterexample, associations develop, mental terrain is reordered, and ever grander patterns emerge.”

Dr. Edelman will long be remembered for his remarkably diverse scientific contributions, his strength of character, erudition, integrity, and humour, and for the warmth and dedication he showed to those fortunate enough to share his vision. He is survived by his wife, Maxine, and three children: David, Eric, and Judith.

Anil Seth
Professor of Cognitive and Computational Neuroscience
Co-Director, Sackler Centre for Consciousness Science
University of Sussex

This article has been republished in Frontiers in Consciousness Research, doi: 10.3389/fpsyg.2014.00896.

References

1 Edelman, G.M., Benacerraf, B., Ovary, Z., and Poulik, M.D. (1961) Structural differences among antibodies of different specificities. Proc Natl Acad Sci U S A 47, 1751-1758
2 Edelman, G.M. (1983) Cell adhesion molecules. Science 219, 450-457
3 Edelman, G.M. and Gally, J. (2001) Degeneracy and complexity in biological systems. Proc. Natl. Acad. Sci. USA 98, 13763-13768
4 Edelman, G.M. (1993) Neural Darwinism: selection and reentrant signaling in higher brain function. Neuron 10, 115-125.
5 Edelman, G.M. (1989) The remembered present. Basic Books
6 Edelman, G.M. (1987) Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, Inc.
7 Edelman, G.M. (1978) Group selection and phasic re-entrant signalling: a theory of higher brain function. In The Mindful Brain (Edelman, G.M. and Mountcastle, V.B., eds), MIT Press
8 Crick, F. (1989) Neural edelmanism. Trends Neurosci 12, 240-248
9 Edelman, G.M. (2003) Naturalizing consciousness: a theoretical framework. Proc Natl Acad Sci U S A 100, 5520-5524
10 Crick, F. and Koch, C. (2003) A framework for consciousness. Nature Neuroscience 6, 119-126
11 Edelman, G.M. and Tononi, G. (2000) A universe of consciousness : how matter becomes imagination. Basic Books
12 Seth, A.K., Izhikevich, E.M., Reeke, G.N., and Edelman, G.M. (2006) Theories and measures of consciousness: An extended framework. Proc Natl Acad Sci U S A 103, 10799-10804