A first draft of a digital brain: The Human Brain Project’s new simulation


Today, Henry Markram and colleagues have released one of the first of a raft of substantial new results emerging from the controversial Human Brain Project (HBP). The paper, Reconstruction and Simulation of Neocortical Microcircuitry, appears in the journal Cell.

As one of the first concrete outputs emerging from this billion-euro endeavour, this had to be a substantial piece of work, and it is. The paper describes a digital reconstruction of ~31,000 neurons (with ~8 million connections and ~37 million synapses) of a tiny part of the somatosensory cortex of the juvenile rat brain. What is unique about this simulation is not the number of neurons (31,000 is pretty modest by today’s standards), but the additional detail included. Simulated neurons are given specific morphological, chemical, and electrical characteristics, and are precisely positioned in 3D space so that they form biologically realistic connections. This level of detail is at the heart of the HBP strategy, and it underlies the claim that the simulation is a ‘reconstruction’ of neural tissue, not just an abstract model of neuronal connectivity.
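To give a flavour of what it means to give a simulated neuron ‘electrical characteristics’, here is a minimal sketch of a single leaky integrate-and-fire neuron. This is emphatically not the kind of morphologically detailed, multi-compartment model used in the paper, and all parameter values are illustrative assumptions of mine, not numbers from the study.

```python
# Toy single-compartment leaky integrate-and-fire neuron (illustrative only;
# not the HBP's detailed multi-compartment models, and not their parameters).

def simulate_lif(input_current_nA, dt_ms=0.1, tau_ms=20.0, r_mohm=10.0,
                 v_rest=-70.0, v_thresh=-50.0, v_reset=-65.0):
    """Return membrane voltage trace and spike times for a current injection."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current_nA):
        # dV/dt = (-(V - V_rest) + R*I) / tau  (leaky integration)
        v += dt_ms * (-(v - v_rest) + r_mohm * i_in) / tau_ms
        if v >= v_thresh:          # threshold crossing: emit a spike
            spikes.append(step * dt_ms)
            v = v_reset            # reset after the spike
        voltages.append(v)
    return voltages, spikes

# Example: 500 ms of constant 2.5 nA input produces regular spiking
volts, spike_times = simulate_lif([2.5] * 5000)
print(f"{len(spike_times)} spikes in 500 ms")
```

The reconstructed microcircuit strings tens of thousands of far richer models together, each with its own morphology and synaptic physiology, which is precisely where the claim to being a ‘reconstruction’ rather than a cartoon comes from.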

So how good is it? Certainly, the simulation detail is extremely impressive, as is the wealth of experimental data that is accounted for. Particularly striking is the ability both to predict general features of neocortical dynamics – like the existence of ‘soloist’ and ‘chorister’ neurons – and to inspire specific new experiments that further validated the simulation. It is also promising that Markram & co managed to interpolate their sparse experimental data in order to fully specify the model, without losing the fidelity of the model to the real ‘target’ system.

The authors admit this is a ‘first step’ and the results are certainly intriguing. But the real question is whether the aggressively ‘bottom up’ approach of the HBP will, by itself, yield the transformational understanding of neuroscience that it has promised. Modelling work in science – whether computational or mathematical – is about finding the right level of abstraction to best explore and understand some natural principle, or test some specific hypothesis.  A model that relies on incorporating as much detail as possible could lead to a simulation that is almost as hard to understand as the target system. Jorge Luis Borges long ago noted the tragic uselessness of the perfectly detailed map in his short story ‘On Exactitude in Science’.

For this reason alone, it’s hard to be confident that the HBP approach – impressive as it is at the level of a tiny volume of immature cortex – will scale up to deliver real insights about how brains, bodies and environments mesh together in generating complex adaptive behaviour (and perception, and thought, and consciousness). On the other hand, as detailed as the current simulation is, it still neglects very basic and undoubtedly important aspects of the brain – including glial cells, vasculature, receptors, and the like. This goes to show that even the most detailed simulation models still have to make abstractions. In the present model, decisions about what is included and excluded seem to be made more according to practical criteria (what is possible?) than theoretically principled criteria (what are we trying to explain with this model?).

Can the HBP be extended both downwards (to encompass the so-far excluded but potentially critical details of neuronal microstructure) and upwards (to a whole brain and organism level, including sensorimotor interactions with bodies and environments)? The jury is still out. So let’s applaud this Herculean effort to simulate a tiny part of a tiny brain, but let’s also keep in mind that the HBP won’t solve neuroscience all by itself, and only time will tell whether it will play a significant role in unravelling the properties of the most complex object in the known universe.


The original article is here: Markram et al (2015). Cell 163:1-37.
Some of the above comments appear in a New Scientist commentary by Jessica Hamzelou, published 08/10/2015:  Digital version of piece of rat brain fires like the real thing.

Brain Twisters

Brain Twisters, the follow-up to the Royal Society Prize-winning EyeBenders, is now out! Another co-production with author Clive Gifford.

Here’s the blurb: Trick your senses and baffle your brain with this crazy book of mind tricks and neuroscience information. Find out how magicians make use of “inattentional blindness” when doing magic tricks, and why you miss details that are hidden in plain sight. Discover why your memory isn’t as good as you think, and how it’s possible to remember things that never actually happened. This astonishing science book presents a wide range of brain games and mind tricks, and explains how these reveal the working processes of the brain. It will engage and entertain, and leave you wondering: do you really know your own mind?

I really enjoyed working on this book with Clive. It’s more ambitious than EyeBenders – taking on the whole of the brain rather than just optical illusions. But I think the end result works brilliantly – though let’s see what the kids think!

Can we figure out the brain’s wiring diagram?


The human brain, it is often said, is the most complex object in the known universe. Counting all the connections among its roughly 90 billion neurons, at the rate of one each second, would take about 3 million years – and just counting these connections says nothing about their intricate patterns of connectivity. A new study, published this week in Proceedings of the National Academy of Sciences USA, shows that mapping out these patterns is likely to be much more difficult than previously thought – but also shows what we need to do to succeed.
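For the curious, the back-of-envelope arithmetic behind that figure looks something like this. The ~1,000 connections per neuron is my own rough order-of-magnitude assumption, not a number from the study.

```python
# Back-of-envelope check of the 'three million years' figure.
neurons = 90e9
connections_per_neuron = 1e3                            # rough assumption
total_connections = neurons * connections_per_neuron    # ~9e13 connections
seconds_per_year = 60 * 60 * 24 * 365
years_to_count = total_connections / seconds_per_year   # one connection per second
print(f"{years_to_count / 1e6:.1f} million years")      # roughly 3 million years
```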

Characterizing the detailed point-to-point connectivity of the brain is increasingly recognized as a key objective for neuroscience. Many even think that without knowing the ‘connectome’ – the brain’s wiring diagram – we will never understand how its electrochemical alchemy gives rise to our thoughts, actions, perceptions, beliefs, and ultimately to our consciousness. There is a good precedent for thinking along these lines. Biology has been galvanized by sequencing of the genome (of humans and of other species), and genetic medicine is gathering pace as whole-genome sequencing becomes fast and cheap enough to be available to the many, not just the few. Big-science big-money projects like the Human Genome Project were critical to these developments. Similar efforts in brain science – like the Human Connectome Project in the US and the Human Brain Project in Europe – are now receiving vast amounts of funding (though not without criticism, especially in the European case) (see also here). The hope is that the genetic revolution can be replicated in neuroscience, delivering step changes in our understanding of the brain and in our ability to treat neurological and psychiatric disorders.

Mapping the networks of the human brain relies on non-invasive neuroimaging methods that can be applied without risk to living people. These methods almost exclusively depend on ‘diffusion magnetic resonance imaging (dMRI) tractography’. This technology measures, for each location (or ‘voxel’) in the brain, the direction in which water is best able to diffuse. Taking advantage of the fact that water diffuses more easily along the fibre bundles connecting different brain regions than across them, dMRI tractography has been able to generate accurate, informative, and surprisingly beautiful pictures of the major superhighways in the brain.

Diffusion MRI of the human brain. Source: Human Connectome Project.
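The simplest, deterministic flavour of streamline tractography can be sketched as follows: start from a seed voxel, step repeatedly along the locally preferred diffusion direction, and stop when diffusion becomes too isotropic or the path turns implausibly sharply. This is only a schematic illustration; the array names, thresholds and step size are my assumptions, not the algorithms used in the study described below, and real pipelines track from many seeds in both directions.

```python
import numpy as np

# Schematic deterministic streamline tractography (illustrative only).
def track_streamline(seed, principal_dirs, fa, step=0.5,
                     fa_stop=0.2, max_angle_deg=45, max_steps=2000):
    """principal_dirs: (X, Y, Z, 3) unit vectors; fa: (X, Y, Z) anisotropy map."""
    pos = np.asarray(seed, dtype=float)
    prev_dir = None
    path = [pos.copy()]
    for _ in range(max_steps):
        vox = tuple(np.round(pos).astype(int))
        if any(v < 0 or v >= s for v, s in zip(vox, fa.shape)):
            break                                    # left the image volume
        if fa[vox] < fa_stop:                        # too isotropic: stop tracking
            break
        d = principal_dirs[vox]
        if prev_dir is not None:
            if np.dot(d, prev_dir) < 0:              # keep a consistent orientation
                d = -d
            angle = np.degrees(np.arccos(np.clip(np.dot(d, prev_dir), -1.0, 1.0)))
            if angle > max_angle_deg:                # implausibly sharp turn: stop
                break
        pos = pos + step * d
        prev_dir = d
        path.append(pos.copy())
    return np.array(path)
```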

But identifying these neuronal superhighways is only a step towards the connectome. Think of a road atlas: knowing only about motorways may tell you how cities are connected, but it’s not going to tell you how to get from one particular house to another. The assumption in neuroscience has been that as brain scanning improves in resolution and as tracking algorithms gain sophistication, dMRI tractography will be able to reveal the point-to-point long-range anatomical connectivity needed to construct the full connectome.

In a study published this week we challenge this assumption, showing that basic features of brain anatomy pose severe obstacles to measuring cortical connectivity using dMRI. The study, a collaboration between the University of Sussex in the UK and the National Institutes of Health (NIH) in the US, applied dMRI tractography to ultra-high resolution dMRI data obtained from extensive scanning of the macaque monkey brain – data of much higher quality than can be presently obtained from human studies. Our analysis, led by Profs. Frank Ye and David Leopold of NIH and PhD student Colin Reveley of Sussex, took a large number of starting points (‘seed voxels’) in the brain, and investigated which other parts of the brain could be reached using dMRI tractography.

The result: roughly half of the brain could not be reached, meaning that even our best methods for mapping the connectome aren’t up to the job. What’s more, by looking carefully at the actual brain tissue where tractography failed, we were able to figure out why. Lying just beneath many of the deep valleys in the brain (the ‘sulci’ – but in some other places too), are dense weaves of neuronal fibres (‘white matter’) running largely parallel to the cortical surface. The existence of these ‘superficial white matter fibre systems’, as we call them, prevents the tractography algorithms from detecting where small tributaries leave the main neuronal superhighways, cross into the cortical grey matter, and reach their destinations. Back to the roads: imagine that small minor roads occasionally leave the main motorways, which are flanked by other major roads busy with heavy traffic. If we tried to construct a detailed road atlas by measuring the flow of vehicles, we might well miss these small but critical branching points.

This image shows, on a colour scale, the ‘reachability’ of different parts of the brain by diffusion tractography.

Identifying the connectome remains a central objective for neuroscience, and non-invasive brain imaging – especially dMRI – is a powerful technology that is improving all the time. But a comprehensive and accurate map of brain connectivity is going to require more than simply ramping up scanning resolution and computational oomph, a message that mega-budget neuroscience might usefully heed. This is not bad news for brain research. Solving a problem always requires fully understanding what the problem is, and our findings open new opportunities and objectives for studies of brain connectivity. Still, it goes to show that the most complex object in the universe is not quite ready to give up all its secrets.


Colin Reveley, Anil K. Seth, Carlo Pierpaoli, Afonso C. Silva, David Yu, Richard C. Saunders, David A. Leopold*, and Frank Q. Ye (2015). Superficial white-matter fiber systems impede detection of long-range cortical connections in diffusion MR tractography. Proc. Natl. Acad. Sci. USA. doi:10.1073/pnas.1418198112

*David A. Leopold is the corresponding author.

Ex Machina: A shot in the arm for smart sci-fi


Alicia Vikander as Ava in Alex Garland’s Ex Machina

IT’S a rare thing to see a movie about science that takes no prisoners intellectually. Alex Garland’s Ex Machina is just that: a stylish, spare and cerebral psycho-techno-thriller, which gives a much-needed shot in the arm for smart science fiction.

Reclusive billionaire genius Nathan, played by Oscar Isaac, creates Ava, an intelligent and very attractive robot played by Alicia Vikander. He then struggles with the philosophical and ethical dilemmas his creation poses, while all hell breaks loose. Many twists and turns add nuance to the plot, which centres on the evolving relationships between the balletic Ava and Caleb (Domhnall Gleeson), a hotshot programmer invited by Nathan to be the “human component in a Turing test”, and between Caleb and Nathan, as Ava’s extraordinary capabilities become increasingly apparent.

Everything about this movie is good. Compelling acting (with only three speaking parts), exquisite photography and set design, immaculate special effects, a subtle score and, above all, a hugely imaginative screenplay combine under Garland’s precise direction to deliver a cinematic experience that grabs you and never lets go.

The best science fiction often tackles the oldest questions. At the heart of Ex Machina is one of our toughest intellectual knots, that of artificial consciousness. Is it possible to build a machine that is not only intelligent but also sentient: that has consciousness, not only of the world but also of its own self? Can we construct a modern-day Golem, that lumpen being of Jewish folklore which is shaped from unformed matter and can both serve humankind and turn against it? And if we could, what would happen to us?

In Jewish folklore, the Golem is an animate being shaped from unformed matter.

Putting aside the tedious business of actually building a conscious AI, we face the challenge of figuring out whether the attempt succeeds. The standard reference for this sort of question is Alan Turing’s eponymous test, in which a human judge interrogates both a candidate machine and another human. A machine passes the test when the judge consistently fails to distinguish between them.

While the Turing test has provided a trope for many AI-inspired movies (such as Spike Jonze’s excellent Her), Ex Machina takes things much further. In a sparkling exchange between Caleb and Nathan, Garland nails the weakness of Turing’s version of the test, a focus on the disembodied exchange of messages, and proposes something far more interesting. “The challenge is to show you that she’s a robot. And see if you still feel she has consciousness,” Nathan says to Caleb.

This shifts the goalposts in a vital way. What matters is not whether Ava is a machine. It is not even whether Ava, even though a machine, can be conscious. What matters is whether Ava makes a conscious person feel that Ava is conscious. The brilliance of Ex Machina is that it reveals the Turing test for what it really is: a test of the human, not of the machine. And Garland is not necessarily on our side.

Nathan (Oscar Isaac) and Caleb (Domhnall Gleeson) discuss deep matters of AI

Is consciousness a matter of social consensus? Is it more relevant whether people believe (or feel) that something (or someone) is conscious than whether it is in fact actually conscious? Or, does something being “actually conscious” rest on other people’s beliefs about it being conscious, or on its own beliefs about its consciousness (beliefs that may themselves depend on how it interprets others’ beliefs about it)? And exactly what is the difference between believing and feeling in situations like this?

It seems to me that my consciousness, here and now, is not a matter of social consensus or of my simply believing or feeling that I am conscious. It seems to me, simply, that I am conscious here and now. When I wake up and smell the coffee, there is a real experience of coffee-smelling going on.

But let me channel Ludwig Wittgenstein, one of the greatest philosophers of the 20th century, for a moment. What would it seem like if it seemed to me that my being conscious were a matter of social consensus or beliefs or feelings about my own conscious status? Is what it “seems like” to me relevant at all when deciding how consciousness comes about or what has consciousness?

Before vanishing completely into a philosophical rabbit hole, it is worth saying that questions like these are driving much influential current research on consciousness. Philosophers and scientists like Daniel Dennett, David Rosenthal and Michael Graziano defend, in various ways, the idea that consciousness is somehow illusory and what we really mean in saying we are conscious is that we have certain beliefs about mental states, that these states have distinctive functional properties, or that they are involved in specific sorts of attention.

Another theoretical approach accepts that conscious experience is real and sees the problem as one of determining its physical or biological mechanism. Some leading neuroscientists such as Giulio Tononi, and recently, Christof Koch, take consciousness to be a fundamental property, much like mass-energy and electrical charge, that is expressed through localised concentrations of “integrated information”. And others, like philosopher John Searle, believe that consciousness is an essentially biological property that emerges in some systems but not in others, for reasons as-yet unknown.

In the film we hear about Searle’s Chinese Room thought experiment. His premise was that researchers had managed to build a computer programmed in English that can respond to written Chinese with written Chinese so convincingly it easily passes the Turing test, persuading a human Chinese speaker that the program understands and speaks Chinese. Does the machine really “understand” Chinese (Searle called this “strong AI”) or is it only simulating the ability (“weak” AI)? There is also a nod to the notional “Mary”, the scientist, who, while knowing everything about the physics and biology of colour vision, has only ever experienced black, white and shades of grey. What happens when she sees a red object for the first time? Will she learn anything new? Does consciousness exceed the realms of knowledge?

All of the above illustrates how academically savvy and intellectually provocative Ex Machina is. Hat-tips here to Murray Shanahan, professor of cognitive robotics at Imperial College London, and writer and geneticist Adam Rutherford, whom Garland did well to enlist as science advisers.

Not every scene invites deep philosophy of mind, with the film encompassing everything from ethics, the technological singularity, Ghostbusters and social media to the erosion of privacy, feminism and sexual politics within its subtle scope. But when it comes to riffing on the possibilities and mysteries of brain, mind and consciousness, Ex Machina doesn’t miss a trick.

As a scientist, it is easy to moan when films don’t stack up against reality, but there is usually little to be gained from nitpicking over inaccuracies and narrative inventions. Such criticisms can seem petty and reinforcing of the stereotype of scientists as humourless gatekeepers of facts and hoarders of equations. But these complaints sometimes express a sense of missed opportunity rather than injustice, a sense that intellectual riches could have been exploited, not sidelined, in making a good movie. AI, neuroscience and consciousness are among the most vibrant and fascinating areas of contemporary science, and what we are discovering far outstrips anything that could be imagined out of thin air.

In his directorial debut, Garland has managed to capture the thrill of this adventure in a film that is effortlessly enthralling, whatever your background. This is why, on emerging from it, I felt lucky to be a neuroscientist. Here is a film that is a better film because of, not despite, its engagement with its intellectual inspiration.


The original version of this piece was published as a Culture Lab article in New Scientist on Jan 21. I am grateful to the New Scientist for permission to reproduce it here, and to Liz Else for help with editing. I will be discussing Ex Machina with Dr. Adam Rutherford at a special screening of the film at the Edinburgh Science Festival (April 16, details and tickets here).

Open your MIND

Open MIND is a brand new collection of original research publications on the mind, brain, and consciousness. It is now freely available online. The collection contains altogether 118 articles from 90 senior and junior researchers, in the always-revealing format of target articles, commentaries, and responses.

This innovative project is the brainchild of Thomas Metzinger and Jennifer Windt, of the MIND group of the Johannes Gutenberg University in Mainz, Germany (Windt has since moved to Monash University in Melbourne). The MIND group was set up by Metzinger in 2003 to catalyse the development of young German philosophers by engaging them with the latest developments in philosophy of mind, cognitive science, and neuroscience. Open MIND celebrates the 10th anniversary of the MIND group, in a way that is so much more valuable to the academic community than ‘just another meeting’ with its quick-burn excitement and massive carbon footprint. Editors Metzinger and Windt explain:

“With this collection, we wanted to make a substantial and innovative contribution that will have a major and sustained impact on the international debate on the mind and the brain. But we also wanted to create an electronic resource that could also be used by less privileged students and researchers in countries such as India, China, or Brazil for years to come … The title ‘Open MIND’ stands for our continuous search for a renewed form of academic philosophy that is concerned with intellectual rigor, takes the results of empirical research seriously, and at the same time remains sensitive to ethical and social issues.”

As a senior member of the MIND group, I was lucky enough to contribute a target article, which was commented on by Wanja Wiese, one of the many talented graduate students with Metzinger and a junior MIND group member. My paper marries concepts in cybernetics and predictive control with the increasingly powerful perspective of ‘predictive processing’ or the Bayesian brain, with a focus on interoception and embodiment. I’ll summarize the main points in a different post, but you can go straight to the target paper, Wanja’s commentary, and my response.

Open MIND is a unique resource in many ways. The Editors were determined to maximize its impact, so, unlike in many otherwise similar projects, the original target papers have not been circulated prior to launch. This means there is a great deal of highly original material now available to be discovered. The entire project was compressed into about 10 months from submission of initial drafts, to publication this week of the complete collection. This means the original content is completely up-to-date. Also, Open MIND shows how excellent scientific publication can sidestep the main publishing houses, given the highly developed resources now available, coupled of course with extreme dedication and hard work. The collection was assembled, rigorously reviewed, edited, and produced entirely in-house – a remarkable achievement.

Thomas Metzinger with the Open MIND student team

Above all, Open MIND opened a world of opportunity for its junior members, the graduate students and postdocs who were involved in every stage of the project: soliciting and reviewing papers, editing, preparing commentaries, and organizing the final collection. As Metzinger and Windt say:

“The whole publication project is itself an attempt to develop a new format for promoting junior researchers, for developing their academic skills, and for creating a new type of interaction between senior and junior group members.”

The results of Open MIND are truly impressive and will undoubtedly make a lasting contribution to the philosophy of mind, especially in its most powerful multidisciplinary and empirically grounded forms.

Take a look, and open your mind too.

Open MIND contributors: Adrian John Tetteh Alsmith, Michael L. Anderson, Margherita Arcangeli, Andreas Bartels, Tim Bayne, David H. Baßler, Christian Beyer, Ned Block, Hannes Boelsen, Amanda Brovold, Anne-Sophie Brüggen, Paul M. Churchland, Andy Clark, Carl F. Craver, Holk Cruse, Valentina Cuccio, Brian Day, Daniel C. Dennett, Jérôme Dokic, Martin Dresler, Andrea R. Dreßing, Chris Eliasmith, Maximilian H. Engel, Kathinka Evers, Regina Fabry, Sascha Fink, Vittorio Gallese, Philip Gerrans, Ramiro Glauer, Verena Gottschling, Rick Grush, Aaron Gutknecht, Dominic Harkness, Oliver J. Haug, John-Dylan Haynes, Heiko Hecht, Daniela Hill, John Allan Hobson, Jakob Hohwy, Pierre Jacob, J. Scott Jordan, Marius Jung, Anne-Kathrin Koch, Axel Kohler, Miriam Kyselo, Lana Kuhle, Victor A. Lamme, Bigna Lenggenhager, Caleb Liang, Ying-Tung Lin, Christophe Lopez, Michael Madary, Denis C. Martin, Mark May, Lucia Melloni, Richard Menary, Aleksandra Mroczko-Wąsowicz, Saskia K. Nagel, Albert Newen, Valdas Noreika, Alva Noë, Gerard O’Brien, Elisabeth Pacherie, Anita Pacholik-Żuromska, Christian Pfeiffer, Iuliia Pliushch, Ulrike Pompe-Alama, Jesse J. Prinz, Joëlle Proust, Lisa Quadt, Antti Revonsuo, Adina L. Roskies, Malte Schilling, Stephan Schleim, Tobias Schlicht, Jonathan Schooler, Caspar M. Schwiedrzik, Anil Seth, Wolf Singer, Evan Thompson, Jarno Tuominen, Katja Valli, Ursula Voss, Wanja Wiese, Yann F. Wilhelm, Kenneth Williford, Jennifer M. Windt.


Open MIND press release.
The cybernetic Bayesian brain: from interoceptive inference to sensorimotor contingencies
Perceptual presence in the Kuhnian-Popperian Bayesian brain
Inference to the best prediction

There’s more to geek-chic than meets the eye, but not in The Imitation Game

Benedict Cumberbatch as Alan Turing in The Imitation Game. (Spoiler alert: this post reveals some plot details.)

World War Two was won not just with tanks, guns, and planes, but by a crack team of code-breakers led by the brilliant and ultimately tragic figure of Alan Turing. This is the story as told in The Imitation Game, a beautifully shot and hugely popular film which nonetheless left me nursing a deep sense of missed opportunity. True, Benedict Cumberbatch is brilliant, spicing his superb Holmes with a dash of Russell Crowe’s John Nash (A Beautiful Mind) to propel geek rapture into yet higher orbits. (See also Eddie Redmayne and Stephen Hawking.)

The rest was not so good. The clunky acting might reflect a screenplay desperate to humanize and popularize what was fundamentally a triumph of the intellect. But what got to me most was the treatment of Turing himself. On one hand there is the perhaps cinematically necessary canonisation of individual genius, sweeping aside so much important context. On the other there is the saccharine treatment of Turing’s open homosexuality (with compensatory boosting of Keira Knightley’s Joan Clarke) and the egregious scenes in which he stands accused of both treason and cowardice by association with Soviet spy John Cairncross, whom he likely never met. The requisite need for a bad guy does disservice also to Turing’s Bletchley Park boss Alastair Denniston, who, while a product of old-school classics-inspired cryptography, nonetheless recognized and supported Turing and his crew. Historical jiggery-pokery is of course to be expected in any mass-market biopic, but the story as told in The Imitation Game becomes much less interesting as a result.

Alan Turing as himself

I studied at King’s College, Cambridge, Turing’s academic home and also where I first encountered the basics of modern day computer science and artificial intelligence (AI). By all accounts Turing was a genius, laying the foundations for these disciplines but also for other areas of science, which – like AI – didn’t even exist in his time. His theories of morphogenesis presaged contemporary developmental biology, explaining how leopards get their spots. He was a pioneer of cybernetics, an inspired amalgam of engineering and biology that after many years in the academic hinterland is once again galvanising our understanding of how minds and brains work, and what they are for. One can only wonder what more he would have done, had he lived.

There is a breathless moment in the film where Joan Clarke (or poor spy-hungry and historically-unsupported Detective Nock, I can’t remember) wonders whether Turing, in cracking Enigma, has built his ‘universal machine’. This references Turing’s most influential intellectual breakthrough, his conceptual design for a machine that was not only programmable but re-programmable, that could execute any algorithm, any computational process.

The Universal Turing Machine formed the blueprint for modern-day computers, but the machine that broke Enigma was no such thing. The ‘Bombe’, as it was known, was based on Polish prototypes (the bomba kryptologiczna) and was co-designed with Gordon Welchman, whose critical ‘diagonal board’ innovation is in the film attributed to the suave Hugh Alexander (Welchman doesn’t appear at all). Far from being a universal computer, the Bombe was designed for a single specific purpose – to rapidly run through as many settings of the Enigma machine as possible.

A working rebuilt Bombe at Bletchley Park, containing 36 Enigma equivalents. The (larger) Bombe in The Imitation Game was a high point – a beautiful piece of historical reconstruction.

The Bombe is half the story of Enigma. The other half is pure cryptographic catnip. Even with a working Bombe, the number of possible machine settings to be searched each day (the Germans changed all the settings at midnight) was just too large. The code-breakers needed a way to limit the combinations to be tested. And here Turing and his team inadvertently pioneered the principles of modern-day ‘Bayesian’ machine learning, by using prior assumptions to constrain possible mappings between a cipher and its translation. For Enigma, the breakthroughs came on realizing that no letter could encode itself, and that German operators often used the same phrases in repeated messages (“Heil Hitler!”). Hugh Alexander, diagonal boards aside, was supremely talented at this process, which Turing called ‘banburismus’ on account of having to get printed ‘message cards’ from nearby Banbury.
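The ‘no letter can encode itself’ constraint is surprisingly powerful all on its own. Slide a guessed bit of plaintext (a ‘crib’, such as a weather report heading or that sign-off) along the ciphertext and throw away every alignment in which some letter would have to encrypt to itself. Here is a toy sketch of that pruning step; the strings are invented for illustration, not a real intercept.

```python
# Sketch of crib placement using the fact that Enigma never encrypts a letter
# to itself: any alignment where crib and ciphertext share a letter in the
# same position is impossible and can be discarded.

def possible_crib_positions(ciphertext, crib):
    positions = []
    for start in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[start:start + len(crib)]
        # A valid placement has no position where plaintext letter == cipher letter
        if all(c != p for c, p in zip(window, crib)):
            positions.append(start)
    return positions

# Toy example (invented strings):
print(possible_crib_positions("QFZWRWIVTYRESXBFOGKUHQBAISE", "WETTERBERICHT"))
```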

In this way the Bletchley code-breakers combined extraordinary engineering prowess with freewheeling intellectual athleticism, to find a testable range of Enigma settings, each and every day, which were then run through the Bombe until a match was found.

A Colossus Mk 2 in operation. The Mk 2, with 2400 valves, came into service on June 1st 1944

Though it gave the Allies a decisive advantage, the Bombe was not the first computer, not the first ‘digital brain’. This honour belongs to Colossus, also built at Bletchley Park, and based on Turing’s principles, but constructed mainly by Tommy Flowers, Jack Good, and Bill Tutte. Colossus was designed to break the even more strongly encrypted communications the Germans used later in the war: the Tunny cipher. After the war the intense secrecy surrounding Bletchley Park meant that all Colossi (and Bombi) were dismantled or hidden away, depriving Turing, Flowers – and many others – of recognition and setting back the computer age by years. It amazes me that full details about Colossus were only released in 2000.

Turing’s seminal 1950 paper, describing the ‘Imitation Game’ experiment

The Imitation Game of the title is a nod to Turing’s most widely known idea: a pragmatic answer to the philosophically challenging and possibly absurd question, “can machines think?”. In one version of what is now known as the Turing Test, a human judge interacts with two players – another human and a machine – and must decide which is which. Interactions are limited to disembodied exchanges of pieces of text, and a candidate machine passes the test when the judge consistently fails to distinguish the one from the other. It is unfortunate but in keeping with the screenplay that Turing’s code-breaking had little to do with his eponymous test.

It is completely understandable that films simplify and rearrange complex historical events in order to generate widespread appeal. But The Imitation Game focuses so much on a distorted narrative of Turing’s personal life that the other story – a thrilling ‘band of brothers’ tale of winning a war by inventing the modern world – is pushed out into the wings. The assumption is that none of this puts bums on seats. But who knows, there might be more to geek-chic than meets the eye.

Should we fear the technological singularity?


Could wanting the latest mobile phone for Christmas lead to human extermination? Existential risks to our species have long been part of our collective psyche – in the form of asteroid impacts, pandemics, global nuclear cataclysm, and more recently, climate change. The idea is not simply that humans and other animals could be wiped out, but that basic human values and structures of society would change so as to become unrecognisable.

Last week, Stephen Hawking claimed that technological progress, while perhaps intended for human betterment, might lead to a new kind of existential threat in the form of self-improving artificial intelligence (AI). This worry is based on the “law of accelerating returns”, which applies when the rate at which technology improves is proportional to how good the technology is, yielding exponential – and unpredictable – advances in its capabilities. The idea is that a point might be reached where this process leads to wholesale and irreversible changes in how we live. This is the technological singularity, a concept made popular by AI maverick and Google engineering director Ray Kurzweil.
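In its simplest reading, ‘accelerating returns’ just says that the rate of improvement is proportional to current capability, which gives exponential growth. A toy projection, assuming a two-year doubling time purely for illustration (not a forecast):

```python
# Toy illustration of 'accelerating returns': capability that doubles on a
# fixed timescale grows exponentially. The doubling time is an assumption.
capability = 1.0
doubling_time_years = 2.0
step_years = 2
for year in range(0, 21, step_years):
    print(f"year {year:2d}: capability x{capability:,.0f}")
    capability *= 2 ** (step_years / doubling_time_years)
```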

We are already familiar with accelerating returns in the rapid development of computer power (“Moore’s law”), and Kurzweil’s vision of the singularity is actually a sort of utopian techno-rapture. But there are scarier scenarios where exponential technological growth might exceed our ability to foresee and prevent unintended consequences. Genetically modified food is an early example of this worry, but now the spotlight is on bio- and nano-technology, and – above all – AI, the engineering of artificial minds.

Moore’s law: the exponential growth in computational power since 1900.

A focus on AI might seem weird given how disappointing present-day ‘intelligent robots’ are. They can hardly vacuum your living room let alone take over the world, and reports that the famous Turing Test for AI has been passed are greatly exaggerated. Yet AI has developed a surprising behind-the-scenes momentum. New ‘deep learning’ algorithms have been developed which, when coupled with vast amounts of data, show remarkable abilities to tackle everyday problems like speech comprehension and face recognition. As well as world-beating chess players like Deep Blue, we have Apple Siri and Google Now helping us navigate our messy and un-chesslike environments in ways that mimic our natural cognitive abilities. Huge amounts of money have followed, with Google this year paying £400M for AI start-up DeepMind in a deal which Google executive chairman Eric Schmidt heralded as enabling products that are “infinitely more intelligent”.

"Hello Dave".

“Hello Dave”.

What if the ability to engineer artificial minds leads to these minds engineering themselves, developing their own goals, and bootstrapping themselves beyond human understanding and control? This dystopian prospect has been mined by many sci-fi movies – think Blade Runner, HAL in 2001, Terminator, Matrix – but while sci-fi is primarily for entertainment, the accelerating developments in AI give pause for thought. Enter Hawking, who now warns that “the full development of AI could spell the end of the human race”. He joins real-world-Iron-Man Elon Musk and Oxford philosopher Nick Bostrom in declaring AI the most serious existential threat we face. (Hawking in fact used the term ‘singularity’ long ago to describe situations where the laws of physics break down, like at the centre of a black hole).

However implausible a worldwide AI revolution might seem, Holmes will tell you there is all the difference in the world between the impossible and the merely improbable. Even if highly unlikely, the seismic impact of a technological singularity is such that it deserves to be taken seriously, both in estimating and mitigating its likelihood, and in planning potential responses. Cambridge University’s new Centre for the Study of Existential Risk has been established to do just this, with Hawking and ex-Astronomer Royal Sir Martin Rees among the founders.

Dystopian eventualities aside, the singularity concept is inherently interesting because it pushes us to examine what we mean by being human (as my colleague Murray Shanahan argues in a forthcoming book). While intelligence is part of the story, being human is also about having a body and an internal physiology; we are self-sustaining flesh bags. It is also about consciousness; we are each at the centre of a subjective universe of experience. Current AI has little to say about these issues, and it is far from clear whether truly autonomous and self-driven AI is possible in their absence. The ethical minefield deepens when we realize that AIs becoming conscious would entail ethical responsibilities towards them, regardless of their impact on us.

At the moment, AI, like any powerful technology, has the potential for good and ill, long before any singularity is reached. On the dark side, AI gives us the tools to wreak our own havoc by distancing ourselves from the consequences of our actions. Remote controlled military drones already reduce life-and-death decisions to the click of a button: with enhanced AI there would be no need for the button. On the side of the angels, AI can make our lives healthier and happier, and our world more balanced and sustainable, by complementing our natural mental prowess with the unprecedented power of computation. The pendulum may swing from the singularity-mongers to the techno-mavens; we should listen to both, but proceed serenely with the angels.

This post is an amended version of a commissioned comment for The Guardian: Why we must not stall technological progress, despite its threat to humanity, published on December 03, 2014. It was part of a flurry of comments occasioned by a BBC interview with Stephen Hawking, which you can listen to here. I’m actually quite excited to see Eddie Redmayne’s rendition of the great physicist.

Training synaesthesia: How to see things differently in half-an-hour a day

Image courtesy of Phil Wheeler Illustrations

Can you learn to see the world differently? Some people already do. People with synaesthesia experience the world very differently indeed, in a way that seems linked to creativity, and which can shed light on some of the deepest mysteries of consciousness. In a paper published in Scientific Reports, we describe new evidence suggesting that non-synaesthetes can be trained to experience the world much like natural synaesthetes. Our results have important implications for understanding individual differences in conscious experiences, and they extend what we know about the flexibility (‘plasticity’) of perception.

Synaesthesia means that an experience of one kind (like seeing a letter) consistently and automatically evokes an experience of another kind (like seeing a colour), when the normal kind of sensory stimulation for the additional experience (the colour) isn’t there. This example describes grapheme-colour synaesthesia, but this is just one among many fascinating varieties. Other synaesthetes experience numbers as having particular spatial relationships (spatial form synaesthesia, probably the most common of all). And there are other more unusual varieties like mirror-touch synaesthesia, where people experience touch on their own bodies when they see someone else being touched, and taste-shape synaesthesia, where triangles might taste sharp, and ellipses bitter.

The richly associative nature of synaesthesia, and the biographies of famous case studies like Vladimir Nabokov and Wassily Kandinsky (or, as the Daily Wail preferred: Lady Gaga and Pharrell Williams), have fuelled its association with creativity and intelligence. Yet the condition is remarkably common, with recent estimates suggesting about 1 in 23 people have some form of synaesthesia. But how does it come about? Is it in your genes, or is it something you can learn?

It is widely believed that Kandinsky was synaesthetic. For instance he said: “Colour is the keyboard, the eyes are the harmonies, the soul is the piano with many strings. The artist is the hand that plays, touching one key or another, to cause vibrations in the soul”

As with most biological traits the truth is: a bit of both. But this still leaves open the question of whether being synaesthetic is something that can be learnt, even as an adult.

There is a rather long history of attempts to train people to be synaesthetic. Perhaps the earliest example was by E.L. Kelly, who in 1934 published a paper with the title: An experimental attempt to produce artificial chromaesthesia by the technique of the conditioned response. While this attempt failed (the paper says it is “a report of purely negative experimental findings”), things have now moved on.

More recent attempts, for instance the excellent work of Olympia Colizoli and colleagues in Amsterdam, have tried to mimic (grapheme-colour) synaesthesia by having people read books in which some of the letters are always coloured in with particular colours. They found that it was possible to train people to display some of the characteristics of synaesthesia, like being slower to name coloured letters when they were presented in a colour conflicting with the training (the ‘synaesthetic Stroop’ effect). But crucially, until now no study has found that training could lead to people actually reporting synaesthesia-like conscious experiences.

An extract from the ‘coloured reading’ training material, used in our study, and similar to the material used by Colizoli and colleagues. The text is from James Joyce. Later in training we replaced some of the letters with (appropriately) coloured blocks to make the task even harder.

Our approach was based on brute force. We decided to dramatically increase the length and rigour of the training procedure that our (initially non-synaesthetic) volunteers undertook. Each of them (14 in all) came in to the lab for half-an-hour each day, five days a week, for nine weeks! On each visit they completed a selection of training exercises designed to cement specific associations between letters and colours. Crucially, we adapted the difficulty of the tasks to each volunteer and each training session, and we also gave them financial rewards for good performance. Over the nine-week regime, some of the easier tasks were dropped entirely, and other more difficult tasks were introduced. Our volunteers also had homework to do, like reading the coloured books. Our idea was that the training must always be challenging, in order to have a chance of working.

The results were striking. At the end of the nine-week exercise, our dedicated volunteers were tested for behavioural signs of synaesthesia, and – crucially – were also asked about their experiences, both inside and outside the lab. Behaviourally they all showed strong similarities with natural-born synaesthetes. This was most striking in measures of ‘consistency’, a test which requires repeated selection of the colour associated with a particular letter, from a palette of millions.

The consistency test for synaesthesia. This example from David Eagleman’s popular ‘synaesthesia battery’.

Natural-born synaesthetes show very high consistency: the colours they pick (for a given letter) are very close to each other in colour space, across repeated selections. This is important because consistency is very hard to fake. The idea is that synaesthetes can simply match a colour to their experienced ‘concurrent’, whereas non-synaesthetes have to rely on less reliable visual memory, or other strategies.
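A consistency score can be boiled down to something very simple: for each letter, measure how spread out the repeatedly chosen colours are in some colour space, then average across letters. The sketch below uses plain Euclidean distance in RGB; real batteries (including Eagleman’s) use particular colour spaces and normalisations, so treat this as a generic illustration rather than the exact scoring used in our study.

```python
import itertools
import math

# Generic consistency score: mean pairwise distance between the colours chosen
# for each letter on repeated trials, averaged over letters. Lower = more
# consistent (more synaesthete-like).

def consistency_score(choices):
    """choices: dict mapping letter -> list of (r, g, b) tuples from repeated trials."""
    letter_scores = []
    for letter, colours in choices.items():
        dists = [math.dist(a, b) for a, b in itertools.combinations(colours, 2)]
        letter_scores.append(sum(dists) / len(dists))
    return sum(letter_scores) / len(letter_scores)

# Toy data: 'A' chosen consistently red-ish, 'B' chosen all over the place
toy = {"A": [(250, 10, 10), (245, 20, 5), (255, 0, 15)],
       "B": [(250, 10, 10), (10, 240, 30), (30, 20, 220)]}
print(round(consistency_score(toy), 1))
```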

Our trained quasi-synaesthetes passed the consistency test with flying colours (so to speak). They also performed much like natural synaesthetes on a whole range of other behavioural tests, including synaesthetic Stroop, and a ‘synaesthetic conditioning’ task which shows that trained colours can elicit automatic physiological responses, like increases in skin conductance. Most importantly, most (8/14) of our volunteers described colour experiences much like those of natural synaesthetes (only 2 reported no colour phenomenology at all). Strikingly, some of these experiences took place even outside the lab:

“When I was walking into campus I glanced at the University of Sussex sign and the letters were coloured” [according to their trained associations]

Like natural synaesthetes, some of our volunteers seemed to experience the concurrent colour ‘out in the world’ while others experienced the colours more ‘in the head’:

“When I am looking at a letter I see them in the trained colours”

“When I look at the letter ‘p’ … its like the inside of my head is pink”

For grapheme colour synaesthetes, letters evoke specific colour experiences. Most of our trained quasi-synaesthetes reported similar experiences. This image is however quite misleading. Synaesthetes (natural born or not) also see the letters in their actual colour, and they typically know that the synaesthetic colour is not ‘real’. But that’s another story.

These results are very exciting, suggesting for the first time that with sufficient training, people can actually learn to see the world differently. Of course, since they are based on subjective reports about conscious experiences, they are also the hardest to independently verify. There is always the slight worry that our volunteers said what they thought we wanted to hear. Against this worry, we were careful to ensure that none of our volunteers knew the study was about synaesthesia (and on debrief, none of them did!). Also, similar ‘demand characteristic’ concerns could have affected other synaesthesia training studies, yet none of these led to descriptions of synaesthesia-like experiences.

Our results weren’t just about synaesthesia. A fascinating side effect was that our volunteers registered a dramatic increase in IQ, gaining an average of about 12 IQ points (compared to a control group which didn’t undergo training). We don’t yet know whether this increase was due to the specifically synaesthetic aspects of our regime, or just intensive cognitive training in general. Either way, our findings provide support for the idea that carefully designed cognitive training could enhance normal cognition, or even help remedy cognitive deficits or decline. More research is needed on these important questions.

What happened in the brain as a result of our training? The short answer is: we don’t know, yet. While in this study we didn’t look at the brain, other studies have found changes in the brain after similar kinds of training. This makes sense: changes in behaviour or in perception should be accompanied by neural changes of some kind. At the same time, natural-born synaesthetes appear to have differences both in the structure of their brains, and in their activity patterns. We are now eager to see what kind of neural signatures underlie the outcome of our training paradigm. The hope is that, because our study showed actual changes in perceptual experience, analysis of these signatures will shed new light on the brain basis of consciousness itself.

So, yes, you can learn to see the world differently. To me, the most important aspect of this work is that it emphasizes that each of us inhabits our own distinctive conscious world. It may be tempting to think that while different people – maybe other cultures – have different beliefs and ways of thinking, still we all see the same external reality. But synaesthesia – along with emerging theories of ‘predictive processing’ – shows that the differences go much deeper. We each inhabit our own personalised universe, albeit one which is partly defined and shaped by other people. So next time you think someone is off in their own little world: they are.


The work described here was led by Daniel Bor and Nicolas Rothen, and is just one part of an energetic inquiry into synaesthesia taking place at Sussex University and the Sackler Centre for Consciousness Science. With Jamie Ward and (recently) Julia Simner also working here, we have a uniquely concentrated expertise in this fascinating area. In other related work I have been interested in why synaesthetic experiences lack a sense of reality and how this gives an important clue about the nature of ‘perceptual presence’. I’ve also been working on the phenomenology of spatial form synaesthesia, and whether synaesthetic experiences can be induced through hypnosis. And an exciting brain imaging study of natural synaesthetes will shortly hit the press! Nicolas Rothen is an authority on the relationship between synaesthesia and memory, and Jamie Ward and Julia Simner have way too many accomplishments in this field to mention. (OK, Jamie has written the most influential review paper in the area – featuring a lot of his own work – and Julia (with Ed Hubbard) has written the leading textbook. That’s not bad to start with.)


Our paper, Adults can be Trained to Acquire Synesthetic Experiences (sorry for US spelling) is published (open access, free!) in Scientific Reports, part of the Nature family. The authors were Daniel Bor, Nicolas Rothen, David Schwartzman, Stephanie Clayton, and Anil K. Seth. There has been quite a lot of media coverage of this work, for instance in the New Scientist and the Daily Fail. Other coverage is summarized here.

Eye Benders: the science of seeing and believing, wins Royal Society prize!


An unexpected post. I’m very happy to have learnt today that the book Eye Benders has won the 2014 Royal Society Young People’s Book Prize. Eye Benders was written by Clive Gifford (main author) and me (consultant). It was published by Ivy Press, who are also the redoubtable publishers of the so-far-prizeless but nonetheless worthy 30 Second Brain. A follow-up to Eye Benders, Brain Twisters, is in the works: more brain, fewer optical illusions, but the same high-quality young-person neuroscience fare.

The Royal Society says this about the prize: “Each year the Royal Society awards a prize to the best book that communicates science to young people. The prize aims to inspire young people to read about science and promotes the best science writing for the under-14s.”

This year, the shortlist was chosen by Professor James Hough FRS, Dr Rhaana Starling, Mr Michael Heyes, Professor Iain Stewart and Dr Anjana Ahuja. Well done all, good shortlisting.  More importantly, the winner was chosen by groups of young persons themselves.  Here is what some of the 2014 young people had to say about Eye Benders:

Matt, 12 said “Science from a different perspective. Factual and interesting – a spiral of a read!”

Beth, 14 said “It was way, way cool!”

Ethan, 12 said “The illustrations were absolutely amazing”

Joe, 12 said “A great, well written and well thought-out book; the illustrations are clear, detailed and amazing. The front cover is beautiful.”

Felix, 10 said “Eye popping and mind-blowingly fun!”

So there it is. Matt and friends have spoken, and here is a picture of Clive accepting the award in Newcastle (alas I wasn’t there), accompanied by a young person being enthused:


Here’s a sneak peek at what the book looks like, on the inside:


A personal note: I remember well going through the final layouts for Eye Benders, heavily dosed on painkillers in hospital in Barcelona following emergency surgery, while at the same time my father was entering his final weeks back in Oxfordshire. A dark time. It’s lovely, if bittersweet, to see something like this emerge from it.

Other coverage:

GrrlScientist in The Guardian.
Optical illusion book wins Royal Society prize
Clive shares some of the best Eye Benders illusions online
Royal Society official announcement
University of Sussex press release

I just dropped in (to see what condition my condition was in): How ‘blind insight’ changes our view of metacognition

Image from 30 Second Brain, Ivy Press, available at all good booksellers.

Neuroscientists have long appreciated that people can make accurate decisions without knowing they are doing so. This is particularly impressive in blindsight: a phenomenon where people with damage to the visual parts of their brain can still make accurate visual discriminations while claiming not to see anything. But even in normal life it is quite possible to make good decisions without having reliable insight into whether you are right or wrong.

In a paper published this week in Psychological Science, our research group – led by Ryan Scott – has for the first time shown the opposite phenomenon: blind insight. This is the situation in which people know whether or not they’ve made accurate decisions, even though they can’t make decisions accurately!

This is important because it changes how we think about metacognition. Metacognition, strictly speaking, is ‘knowing about knowing’. When we make a perceptual judgment, or a decision of any kind, we typically have some degree of insight into whether our decision was correct or not. This is metacognition, which in experiments is usually measured by asking people how confident they are in a previous decision. Good metacognitive performance is indicated by high correlations between confidence and accuracy, which can be quantified in various ways.
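In its crudest form, metacognitive sensitivity is just the trial-by-trial association between binary confidence reports and accuracy, for example expressed as a phi coefficient. Bias-free measures such as meta-d-prime are preferred in practice; the sketch below only conveys the basic idea and is not the analysis used in the paper.

```python
import math

# Minimal sketch: association between confidence (1 = confident, 0 = guess)
# and accuracy (1 = correct, 0 = error), as a phi coefficient over trials.

def phi_confidence_accuracy(confident, correct):
    a = sum(1 for c, k in zip(confident, correct) if c and k)          # confident & correct
    b = sum(1 for c, k in zip(confident, correct) if c and not k)      # confident & wrong
    c_ = sum(1 for c, k in zip(confident, correct) if not c and k)     # guess & correct
    d = sum(1 for c, k in zip(confident, correct) if not c and not k)  # guess & wrong
    denom = math.sqrt((a + b) * (c_ + d) * (a + c_) * (b + d))
    return (a * d - b * c_) / denom if denom else 0.0

# Toy data: confidence tracks accuracy reasonably well
conf = [1, 1, 0, 1, 0, 0, 1, 0]
corr = [1, 1, 0, 1, 1, 0, 0, 0]
print(round(phi_confidence_accuracy(conf, corr), 2))   # positive association
```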

Most explanations of metacognition assume that metacognitive judgements are based on the same information as the original (‘first-order’) decision. For example, if you are asked to decide whether a dim light was present or not, you might make a (first-order) judgment based on signals flowing from your eyes to your brain. Perhaps your brain sets a threshold below which you will say ‘No’ and above which you will say ‘Yes’. Metacognitive judgments are typically assumed to work on the same data. If you are asked whether you were guessing or were confident, maybe you will set additional thresholds a bit further apart. The idea is that your brain may need more sensory evidence to be confident in judging that a dim light was in fact present, than when merely guessing that it was.

This way of looking at things is formalized by signal detection theory (SDT). The nice thing about SDT is that it can give quantitative mathematical expressions for how well a person can make both first-order and metacognitive judgements, in ways which are not affected by individual biases to say ‘yes’ or ‘no’, or ‘guess’ versus ‘confident’. (The situation is a bit trickier for metacognitive confidence judgements but we can set these details aside for now: see here for the gory details). A simple schematic of SDT is shown below.

Signal detection theory. The ‘signal’ refers to sensory evidence and the curves show hypothetical probability distributions for stimulus present (solid line) and stimulus absent (dashed line). If a stimulus (e.g., a dim light) is present, then the sensory signal is likely to be stronger (higher) – but because sensory systems are assumed to be noisy (probabilistic), some signal is likely even when there is no stimulus. The difficulty of the decision is shown by the overlap of the distributions. The best strategy for the brain is to place a single ‘decision criterion’ midway between the peaks of the two distributions, and to say ‘present’ for any signal above this threshold, and ‘absent’ for any signal below. This determines the ‘first order decision’. Metacognitive judgements are then specified by additional ‘confidence thresholds’ which bracket the decision criterion. If the signal lies in between the two confidence thresholds, the metacognitive response is ‘guess’; if it lies to the two extremes, the metacognitive response is ‘confident’. The mathematics of SDT allow researchers to calculate ‘bias free’ measures of how well people can make both first-order and metacognitive decisions (these are called ‘d-primes’). As well as providing a method for quantifying decision making performance, the framework is also frequently assumed to say something about what the brain is actually doing when it is making these decisions. It is this last assumption that our present work challenges.

On SDT it is easy to see that one can make above-chance first order decisions while displaying low or no metacognition. One way to do this would be to set your metacognitive thresholds very far apart, so that you are always guessing. But there is no way, on this theory (without making various weird assumptions), that you could be at chance in your first-order decisions, yet above chance in your metacognitive judgements about these decisions.
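That prediction is easy to check with a small simulation of the standard picture: one noisy evidence value per trial, a decision criterion at zero, and a ‘confident’ response whenever the evidence falls far from the criterion. With d-prime set to zero, both first-order accuracy and the confidence-accuracy link sit at chance, which is exactly what blind insight violates. This is a sketch under equal-variance Gaussian assumptions, not our actual analysis code.

```python
import random

# Standard equal-variance SDT: with d-prime = 0, the model predicts chance
# first-order accuracy AND chance metacognition (confidence uninformative).

def simulate_sdt(d_prime, conf_width=0.5, n_trials=100_000, seed=1):
    rng = random.Random(seed)
    cc = cg = wc = wg = 0   # correct/wrong x confident/guess counts
    for _ in range(n_trials):
        present = rng.random() < 0.5
        evidence = rng.gauss(d_prime / 2 if present else -d_prime / 2, 1.0)
        decision = evidence > 0.0                   # first-order judgement
        confident = abs(evidence) > conf_width      # metacognitive judgement
        correct = decision == present
        if correct and confident: cc += 1
        elif correct: cg += 1
        elif confident: wc += 1
        else: wg += 1
    accuracy = (cc + cg) / n_trials
    acc_when_confident = cc / max(cc + wc, 1)
    acc_when_guessing = cg / max(cg + wg, 1)
    return accuracy, acc_when_confident, acc_when_guessing

print(simulate_sdt(d_prime=1.5))  # above-chance accuracy; confidence tracks accuracy
print(simulate_sdt(d_prime=0.0))  # chance accuracy; confidence tells you nothing
```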

Surprisingly, until now, no-one had actually checked to see whether this could happen in practice. This is exactly what we did, and this is exactly what we found. We analysed a large amount of data from a paradigm called artificial grammar learning, which is a workhorse in psychological laboratories for studying unconscious learning and decision-making. In artificial grammar learning people are shown strings of letters and have to decide whether each string belongs to ‘grammar A’ or ‘grammar B’. Each grammar is just an arbitrary set of rules determining allowable patterns of letters. Over time, most people can learn to classify letter strings at better than chance. However, over a large sample, there will always be some people that can’t: for these unfortunates, their first-order performance remains at ~50% (in SDT terms they have a d-prime not different from zero).

Artificial grammar learning. Two rule sets (shown on the left) determine which letter strings belong to ‘grammar A’ or ‘grammar B’. Participants are first shown examples of strings generated by one or the other grammar (training). Importantly, they are not told about the grammatical rules, and in most cases they remain unaware of them. Nonetheless, after some training they are able to successfully (i.e., above chance) classify novel letter strings appropriately (testing).

Crucially, subjects in our experiments were asked to make confidence judgments along with their first-order grammaticality judgments. Focusing on those subjects who remained at chance in their first-order judgements, we found that they still showed above-chance metacognition. That is, they were more likely to be confident when they were (by chance) right, than when they were (by chance) wrong. We call this novel finding blind insight.

The discovery of blind insight changes the way we think about decision-making. Our results show that theoretical frameworks based on SDT are, at the very least, incomplete. Metacognitive performance during blind insight cannot be explained by simply setting different thresholds on a single underlying signal. Additional information, or substantially different transformations of the first-order signal, are needed. Exactly what is going on remains an open question. Several possible mechanisms could account for our results. One exciting possibility appeals to predictive processing, which is the increasingly influential idea that perception depends on top-down predictions about the causes of sensory signals. If top-down influences are also involved in metacognition, they could carry the additional information needed for blind insight. This would mean that metacognition, like perception, is best understood as a process of probabilistic inference.
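To make the ‘probabilistic inference’ idea a little more concrete: in predictive processing, a top-down prediction acts as a prior and the incoming signal as a likelihood, and the two are combined in proportion to their precisions. A toy Gaussian update, with purely illustrative numbers and no claim to model metacognition itself:

```python
# Toy precision-weighted Bayesian update of the kind predictive processing
# appeals to: prediction (prior) and sensory signal (likelihood) are combined,
# each weighted by its precision (inverse variance).

def gaussian_update(prior_mean, prior_var, obs, obs_var):
    prior_precision, obs_precision = 1.0 / prior_var, 1.0 / obs_var
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean + obs_precision * obs) / post_precision
    return post_mean, 1.0 / post_precision

# A confident prediction is nudged only slightly by a noisy signal...
print(gaussian_update(prior_mean=10.0, prior_var=1.0, obs=14.0, obs_var=4.0))
# ...but pulled strongly towards a precise one.
print(gaussian_update(prior_mean=10.0, prior_var=1.0, obs=14.0, obs_var=0.25))
```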

In predictive processing theories of brain function, perception depends on top-down predictions (blue) about the causes of sensory signals. Sensory signals carry ‘prediction errors’ (magenta) which update top-down predictions according to principles of Bayesian inference. Maybe a similar process underlies metacognition. Image from 30 Second Brain, Ivy Press.

This brings us to consciousness (of course). Metacognitive judgments are often used as a proxy for consciousness, on the logic that confident decisions are assumed to be based on conscious experiences of the signal (e.g., the dim light was consciously seen), whereas guesses signify that the signal was processed only unconsciously. If metacognition involves top-down inference, this raises the intriguing possibility that metacognitive judgments actually give rise to conscious experiences, rather than just provide a means for reporting them. While speculative, this idea fits neatly with the framework of predictive processing which says that top-down influences are critical in shaping the nature of perceptual contents.

The discovery of blindsight many years ago has substantially changed the way we think about vision. Our new finding of blind insight may similarly change the way we think about metacognition, and about consciousness too.

The paper is published open access (i.e. free!) in Psychological Science. The authors were Ryan Scott, Zoltan Dienes, Adam Barrett, Daniel Bor, and Anil K Seth. There are also accompanying press releases and coverage:

Sussex study reveals how ‘blind insight’ confounds logic.  (University of Sussex, 13/11/2014)
People show ‘blind insight’ into decision making performance (Association for Psychological Science, 13/11/2014)