The Human Brain Project risks becoming a missed opportunity

Concept image of a network of neurons in the human brain.

The brain is much on our minds at the moment. David Cameron is advocating a step-change in dementia research, brain-computer interfaces promise new solutions to paralysis, and the ongoing plight of Michael Schumacher has reminded us of the terrifying consequences of traumatic brain injury. Articles in scholarly journals and in the media are decorated with magical images of the living brain, like the one shown below, to illuminate these stories. Yet, when asked, most neuroscientists will say we still know very little about how the brain works, or how to fix it when it goes wrong.

A diffusion tensor image showing some of the main pathways along which brain connections are organized.

The €1.2bn Human Brain Project (HBP) is supposed to change all this. Funded by the European Commission, the HBP brings together more than 80 research institutes in a ten-year endeavour to unravel the mysteries of the brain, and to emulate its powers in new technologies. Following examples like the Human Genome Project and the Large Hadron Collider (where Higgs’ elusive boson was finally found), the idea is that a very large investment will deliver very significant results. But now a large contingent of prominent European neuroscientists are rebelling against the HBP, claiming that its approach is doomed to fail and will undermine European neuroscience for decades to come.

Stepping back from the fuss, it’s worth asking whether the aims of the HBP really make sense. Sequencing the genome and looking for Higgs were both major challenges, but in these cases the scientific community agreed on the objectives, and on what would constitute success. There is no similar consensus among neuroscientists.

It is often said that the adult human brain is the most complex object in the universe. It contains about 90 billion neurons and a thousand times more connections, so that if you counted one connection each second it would take about three million years to finish. The challenge for neuroscience is to understand how this vast, complex, and always changing network gives rise to our sensations, perceptions, thoughts, actions, beliefs, desires, our sense of self and of others, our emotions and moods, and all else that guides our behaviour and populates our mental life, in health and in disease. No single breakthrough could ever mark success across such a wide range of important problems.
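
As a back-of-envelope check on the ‘three million years’ figure, here is a minimal sketch using the round numbers quoted above (illustrative figures only, not precise anatomical counts):

```python
# Rough arithmetic behind the 'three million years' claim
# (illustrative round numbers: ~90 billion neurons, ~1,000 connections each).
neurons = 90e9
connections = neurons * 1_000            # ~9 x 10^13 connections
seconds_per_year = 60 * 60 * 24 * 365    # ~3.15 x 10^7 seconds
years_to_count_them = connections / seconds_per_year
print(f"~{years_to_count_them / 1e6:.1f} million years")  # ~2.9 million years
```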

The central pillar of the HBP approach is to build computational simulations of the brain. Befitting the huge investment, these simulations would be of unprecedented size and detail, and would allow brain scientists to integrate their individual findings into a collective resource. What distinguishes the HBP – besides the money – is its aggressively ‘bottom-up’ approach: the vision is that by taking care of the neurons, the big things – thoughts, perceptions, beliefs, and the like – will take care of themselves. As such, the HBP does not set out to test any specific hypothesis or collection of hypotheses, marking another departure from common scientific practice.

Could this work? Certainly, modern neuroscience is generating an accelerating data deluge demanding new technologies for visualisation and analysis. This is the ‘big data’ challenge now common in many settings. It is also clear that better pictures of the brain’s wiring diagram (the ‘connectome’) will be essential as we move ahead. On the other hand, more detailed simulations don’t inevitably lead to better understanding. Strikingly, we don’t fully understand the brain of the tiny worm Caenorhabditis elegans even though it has only 302 neurons and the wiring diagram is known exactly. More generally, a key ability in science is to abstract away from the specifics to see more clearly what underlying principles are at work. In the limit, a perfectly accurate model of the brain may become as difficult to understand as the brain itself, as Borges long ago noted when describing the tragic uselessness of the perfectly detailed map.

Jorge Luis Borges at Harvard University, 1967/8

Neuroscience is, and should remain, a broad church. Understanding the brain does not reduce to simulating the collective behaviour of all its minuscule parts, however interesting a part of the final story this might become. Understanding the brain means grasping complex interactions cross-linking many different levels of description, from neurons to brain regions to individuals to societies. It means complementing bottom-up simulations with new theories describing what the brain is actually doing, when its neurons are buzzing merrily away. It means designing elegant experiments that reveal how the mind constructs its reality, without always worrying about the neuronal hardware underneath. Sometimes, it means aiming directly for new treatments for devastating neurological and psychiatric conditions like coma, paralysis, dementia, and depression.

Put this way, neuroscience has enormous potential to benefit society, well deserving of high profile and large-scale support. It would be a great shame if the Human Brain Project, through its singular emphasis on massive computer simulation, ends up as a lightning rod for dissatisfaction with ‘big science’ rather than fostering a new and powerfully productive picture of the biological basis of the mind.

This article first appeared online in The Guardian on July 8 2014.  It appeared in print in the July 9 edition, on page 30 (comment section).

Post-publication notes:

The HBP leadership have published a response to the open letter here. I didn’t find it very convincing. There have been a plethora of other commentaries on the HBP, as it comes up to its first review.  I can’t provide an exhaustive list but I particularly liked Gary Marcus’ piece in the New York Times (July 11). There was also trenchant criticism in the editorial pages of Nature.  Paul Verschure has a nice TED talk addressing some of the challenges facing big data, encompassing the HBP.


The importance of being Eugene: What (not) passing the Turing test really means

Eugene Goostman, chatbot.

Could you tell the difference between a non-native-English-speaking 13-year-old Ukrainian boy and a computer program? On Saturday, at the Royal Society, one out of three human judges was fooled. So, it has been widely reported, the iconic Turing Test has been passed and a brave new era of Artificial Intelligence (AI) begins.

Not so fast. While this event marks a modest improvement in the abilities of so-called ‘chatbots’ to engage fluently with humans, real AI requires much more.

Here’s what happened. At a competition held in central London, thirty judges (including politician Lord Sharkey, computer scientist Kevin Warwick, and Red Dwarf actor Robert Llewellyn) interacted with ‘Eugene Goostman’ in a series of five-minute text-only exchanges. As a result, 33% of the judges (reports do not yet say which, though tweets implicate Llewellyn) were persuaded that ‘Goostman’ was real. The other 67%  were not. It turns out that ‘Eugene Goostman’ is not a teenager from Odessa, but a computer program, a ‘chatbot’ created by computer engineers Vladimir Veselov and Eugene Demchenko. According to his creators, ‘Goostman’ was ‘born’ in 2001, owns a pet guinea pig, and has a gynaecologist father.

The Turing Test, devised by computer science pioneer and codebreaker Alan Turing, was proposed as a practical alternative to the philosophically challenging and possibly absurd question, “can machines think?”. In one popular interpretation, a human judge interacts with two players – a human and a machine – and must decide which is which. A candidate machine passes the test when the judge consistently fails to distinguish the one from the other. Interactions are limited to exchanges of strings of text, to make the competition fair (more on this later; it’s also worth noting that Turing’s original idea was more complex than this, but let’s press on). While there have been many previous attempts and prior claims about passing the test, the Goostman-bot arguably outperformed its predecessors, leading Warwick to noisily proclaim “We are therefore proud to declare that Alan Turing’s Test was passed for the first time on Saturday”.

Alan Turing’s seminal 1950 paper

This is a major overstatement, one that does a grave disservice to the field of AI. While Goostman may represent progress of a sort – for instance this year’s competition did not place any particular restrictions on conversation topics – some context is badly needed.

An immediate concern is that Goostman is gaming the system. By imitating a non-native speaker, the chatbot can make its clumsy English expected rather than unusual. Hence its reaction to winning the prize: “I feel about beating the Turing test in quite convenient way”. And its assumed age of thirteen lowers expectations about satisfactory responses to questions. As Veselov put it, “Thirteen years old is not too old to know everything and not too young to know nothing.” While Veselov’s strategy is cunning, it also shows that the Turing test is as much a test of the judges’ abilities to make suitable inferences, and to ask probing questions, as it is of the capabilities of intelligent machinery.

More importantly, fooling 33% of judges over five-minute sessions was never the standard intended by Alan Turing for passing his test – it was merely his prediction about how computers might fare within about 50 years of his proposal. (In this, as in much else, he was not far wrong: the original Turing test was described in 1950.) A more natural criterion, as emphasized by the cognitive scientist Stevan Harnad, is for a machine to be consistently indistinguishable from human counterparts over extended periods of time, in other words to have the generic performance capacity of a real human being. This more stringent benchmark is still a long way off.

Perhaps the most significant limitation exposed by Goostman is the assumption that ‘intelligence’ can be instantiated in the disembodied exchange of short passages of text. On one hand this restriction is needed to enable interesting comparisons between humans and machines in the first place. On the other, it simply underlines that intelligent behaviour is intimately grounded in the tight couplings and blurry boundaries separating and joining brains, bodies, and environments. If Saturday’s judges had seen Goostman, or even an advanced robotic avatar voicing its responses, there would have been no question of any confusion. Indeed, robots that are today physically most similar to humans tend to elicit sensations like anxiety and revulsion, not camaraderie. This is the ‘uncanny valley’ – a term coined by robotics professor Masahiro Mori in 1970 (with a nod to Freud) and exemplified by the ‘geminoids’ built by Hiroshi Ishiguro.

Hiroshi Ishiguro and his geminoid.  Another imitation game.

A growing appreciation of the importance of embodied, embedded intelligence explains why nobody is claiming that human-like robots are among us, or are in any sense imminent. Critics of AI consistently point to the notable absence of intelligent robots capable of fluent interactions with people, or even with mugs of tea. In a recent blog post I argued that new developments in AI are increasingly motivated by the near forgotten discipline of cybernetics, which held that prediction and control were at the heart of intelligent behaviour – not barefaced imitation as in Turing’s test (and, from a different angle, in Ishiguro’s geminoids). While these emerging cybernetic-inspired approaches hold great promise (and are attracting the interest of tech giants like Google) there is still plenty to be done.

These ideas have two main implications for AI. The first is that true AI necessarily involves robotics. Intelligent systems are systems that flexibly and adaptively interact with complex, dynamic, and often social environments. Reducing intelligence to short context-free text-based conversations misses the target by a country mile. The second is that true AI should focus not only on the outcome (i.e., whether a machine or robot behaves indistinguishably from a human or other animal) but also on the process by which the outcome is attained. This is why considerable attention within AI has always been paid to understanding, and simulating, how real brains work, and how real bodies behave.

How the leopard got its spots: Turing’s chemical basis of morphogenesis.

Turing of course did much more than propose an interesting but ultimately unsatisfactory (and often misinterpreted) intelligence test. He laid the foundations for modern computer science, he saved untold lives through his prowess in code breaking, and he refused to be cowed by the deep prejudices against homosexuality prevalent in his time, losing his own life in the bargain. He was also a pioneer in theoretical biology: his work in morphogenesis showed how simple interactions could give rise to complex patterns during animal development. And he was a central figure in the emerging field of cybernetics, where he recognized the deep importance of embodied and embedded cognition. The Turing of 1950 might not recognize much of today’s technology, but he would not have been fooled by Goostman.

[Postscript: while Warwick & co have been very reluctant to release the transcript of Goostman’s 2014 performance, this recent Guardian piece has some choice dialogue from 2012, where Goostman polled at 28%, not far off Saturday’s 33%. This piece was updated on June 12 following a helpful dialogue with Aaron Sloman.]

The amoral molecule


The cuddle drug, the trust hormone, the moral molecule: oxytocin (OXT) has been called all these things and more.  You can buy nasal sprays of the stuff online, with the promise that some judicious squirting will make people trust you more. In a recent book, neuroscientist-cum-economist Paul Zak goes the whole hog, saying that if we only let ourselves be guided by this “moral molecule”, prosperity and social harmony will certainly ensue.

Behind this outlandish and rather ridiculous claim lies some fascinating science. The story starts with the discovery that injecting female virgin rats with OXT triggers maternal instincts, and that these same instincts in mother rats are suppressed when OXT is blocked.  Then came the finding of different levels of OXT receptors in two closely related species of vole. The male prairie vole, having high levels, is monogamous and helps look after its little vole-lets.  Male meadow voles, with many fewer receptors, are aggressive loners who move from one female to the next without regard for their offspring. What’s more, genetically manipulating meadow voles to express OXT receptors turns them into monogamous prairie-vole-a-likes. These early rodent studies showed that OXT plays an important and previously unsuspected role in social behaviour.

Studies of oxytocin and social cognition really took off about ten years ago when Paul Zak, Ernst Fehr, and colleagues began manipulating OXT levels in human volunteers while they played a variety of economic and ‘moral’ games in the laboratory.  These studies showed that OXT, usually administered by a few intranasal puffs, could make people more trusting, generous, cooperative, and empathetic.

For example, in the so-called ‘ultimatum game’ one player (the proposer) is given £10 and offers a proportion of it to a second player (the responder), who has to decide whether or not to accept. If the responder accepts, both players get their share; if not, neither gets anything.  Since these are one-off encounters, rational analysis says that the responder should accept any non-zero proposal, since something is better than nothing.  In practice what happens is that offers below about £3 are often rejected, presumably because the desire to punish ‘unfair’ offers outweighs the allure of a small reward. Strikingly, a few whiffs of OXT make proposers more generous, by almost 50% in some cases. And the same thing happens in other similar situations, like the ‘trust game’: OXT seems to make people more cooperative and pro-social.
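
To make the structure of the game concrete, here is a minimal sketch of a single ultimatum round (illustrative only: the rejection threshold and the size of the ‘generosity boost’ are hypothetical parameters, not data from the studies described):

```python
def responder_accepts(offer, pot=10.0, rejection_threshold=0.3):
    """Accept any offer at or above a 'fairness' threshold.
    A purely 'rational' responder would set rejection_threshold=0,
    since in a one-off game something is always better than nothing."""
    return offer >= rejection_threshold * pot

def play_round(offer, pot=10.0):
    """Return (proposer_payoff, responder_payoff) for one ultimatum round."""
    if responder_accepts(offer, pot):
        return pot - offer, offer
    return 0.0, 0.0   # rejected: neither player gets anything

# Illustrative runs: a typical offer, an 'unfair' one, and an offer
# boosted by ~50%, loosely mirroring the reported effect of OXT.
print(play_round(3.0))   # (7.0, 3.0) - accepted
print(play_round(2.0))   # (0.0, 0.0) - rejected as unfair
print(play_round(4.5))   # (5.5, 4.5) - accepted, more generous split
```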

Even more exciting are recent findings that OXT can help reduce negative experiences and promote social interactions in conditions like autism and schizophrenia.  In part this could be due to OXT’s general ability to reduce anxiety, but there’s likely more to the story than this.  It could also be that OXT enhances the ability to ‘read’ emotional expressions, perhaps by increasing their salience.  Although clinical trials have so far been inconclusive there is at least some hope for new OXT-based pharmacological treatments (though not cures) for these sometimes devastating conditions.

These discoveries are eye-opening and apparently very hopeful. What’s not to like?


The main thing not to like is the idea that there could be such a simple relationship between socially-conditioned phenomena like trust and morality, and the machinations of a single molecule.  The evolutionary biologist Leslie Orgel said it well with his ‘third rule’: “Biology is more complicated than you imagine, even when you take Orgel’s third rule into account”.  Sure enough, the emerging scientific story says things are far from simple.

Carsten de Dreu of the University of Amsterdam has published a series of important studies showing that whether oxytocin has a prosocial effect, or an antisocial effect, seems to depend critically on who the interactions are between. In one study, OXT was found to increase generosity within a participant’s ingroup (i.e., among participants judged as similar) but to actually decrease it for interactions with outgroup members.  Another study produced even more dramatic results: here, OXT infusion led volunteers to adopt more derogatory attitudes to outgroup members, even when ingroup and outgroup compositions were determined arbitrarily. OXT can even increase social conformity, as shown in a recent study in which volunteers were divided into two groups and had to judge the attractiveness of arbitrary shapes.

All this should make us very suspicious of claims that OXT is any kind of ‘moral molecule’.  So where do we go from here? A crucial next step is to try to understand how the complex interplay between OXT and behaviour is mediated by the brain. Work in this area has already begun: the research on autism, for example, has shown that OXT infusion leads to autistic brains better differentiating between emotional and non-emotional stimuli.  This work complements emerging social neuroscience studies showing how social stereotypes can affect even very basic perceptual processes. In one example, current studies in our lab are indicating that outgroup faces (e.g., Moroccans for Caucasian Dutch subjects) are literally harder to see than ingroup faces.

Neuroscience has come in for a lot of recent criticism for reductionist ‘explanations’ in which complex cognitive phenomena are identified with activity in this-or-that brain region.  Following this pattern, talk of ‘moral molecules’ is, like crime in multi-storey car-parks, wrong on so many levels. There are no moral molecules, only moral people (and maybe moral societies).  But let’s not allow this kind of over-reaching to blind us to the progress being made when sufficient attention is paid to the complex hierarchical interactions linking molecules to minds.  Neuroscience is wonderfully exciting and has enormous potential for human betterment.  It’s just not the whole story.

This piece is based on a talk given at Brighton’s Catalyst Club as part of the 2014 Brighton Science Festival.


All watched over by search engines of loving grace


Google’s shopping spree has continued with the purchase of the British artificial intelligence (AI) start-up DeepMind, acquired for an eye-watering £400M ($650M).  This is Google’s 8th biggest acquisition in its history, and the latest in a string of purchases in AI and robotics. Boston Dynamics, an American company famous for building agile robots capable of scaling walls and running over rough terrain (see BigDog here), was mopped up in 2013. And there is no sign that Google is finished yet. Should we be excited or should we be afraid?

Probably both. AI and robotics have long promised brave new worlds of helpful robots (think Wall-E) and omniscient artificial intelligences (think HAL), which remain conspicuously absent. Undoubtedly, the combined resources of Google’s in-house skills and its new acquisitions will drive progress in both these areas. Experts have accordingly fretted about military robotics and speculated how DeepMind might help us make better lasagne. But perhaps something bigger is going on, something with roots extending back to the middle of the last century and the now forgotten discipline of cybernetics.

The founders of cybernetics included some of the leading lights of the age, among them John von Neumann (designer of the digital computer), Alan Turing, the British roboticist Grey Walter, and even people like the psychiatrist R.D. Laing and the anthropologist Margaret Mead.  They were led by the brilliant and eccentric figures of Norbert Wiener and Warren McCulloch in the USA, and Ross Ashby in the UK. The fundamental idea of cybernetics was to consider biological systems as machines. The aim was not to build artificial intelligence per se, but rather to understand how machines could appear to have goals and act with purpose, and how complex systems could be controlled by feedback. Although the brain was the primary focus, cybernetic ideas were applied much more broadly – to economics, ecology, even management science.  Yet cybernetics faded from view as the digital computer took centre stage, and has remained hidden in the shadows ever since.  Well, almost hidden.

One of the most important innovations of 1940s cybernetics was the neural network: the idea that logical operations could be implemented in networks of brain-cell-like elements wired up in particular ways. Neural networks lay dormant, like the rest of cybernetics, until being rediscovered in the 1980s as the basis of powerful new ‘machine learning’ algorithms capable of extracting meaningful patterns from large quantities of data. DeepMind’s technologies are based on just these principles, and indeed some of their algorithms originate in the pioneering neural network research of Geoffrey Hinton (another Brit), whose company DNN Research was also recently bought by Google and who is now a Google Distinguished Researcher.
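
To see what that 1940s idea amounts to, here is a minimal sketch of a McCulloch–Pitts-style threshold neuron implementing simple logical operations (illustrative weights and thresholds only; this is the historical idea, not DeepMind’s or Hinton’s actual algorithms):

```python
def threshold_unit(inputs, weights, threshold):
    """A McCulloch-Pitts-style neuron: 'fires' (returns 1) when the weighted
    sum of its binary inputs reaches the threshold, and stays silent otherwise."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Logical operations as single threshold units:
AND = lambda a, b: threshold_unit([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: threshold_unit([a, b], weights=[1, 1], threshold=1)
NOT = lambda a:    threshold_unit([a],    weights=[-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```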

What sets Hinton and DeepMind apart is that their algorithms reflect an increasingly prominent theory about brain function. (DeepMind’s founder, the ex-chess-prodigy and computer games maestro Demis Hassabis, set up his company shortly after taking a Ph.D. in cognitive neuroscience.) This theory, which came from cybernetics, says that the brain’s neural networks achieve perception, learning, and behaviour through repeated application of a single principle: predictive control.  Put simply, the brain learns about the statistics of its sensory inputs, and about how these statistics change in response to its own actions. In this way, the brain can build a model of its world (which includes its own body) and figure out how to control its environment in order to achieve specific goals. What’s more, exactly the same principle can be used to develop robust and agile robots, as seen in BigDog and its friends.
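
As a toy illustration of that predictive-control idea (a sketch under strong simplifying assumptions, not DeepMind’s actual technology): an agent that learns how its actions change a sensory signal, then uses that learned model to drive the signal towards a goal.

```python
import random

# Toy predictive control: the environment turns an action into a sensory
# outcome via an unknown gain; the agent learns the gain from prediction
# errors and chooses actions its model predicts will achieve the goal.

TRUE_GAIN = 2.5                      # hidden property of the environment

def world(action):
    return TRUE_GAIN * action + random.gauss(0, 0.05)   # noisy sensory feedback

estimated_gain = 1.0                 # the agent's initial (wrong) model
learning_rate = 0.1
goal = 10.0

for step in range(50):
    action = goal / estimated_gain           # act on the current model
    predicted = estimated_gain * action      # what the model expects to sense
    observed = world(action)                 # what is actually sensed
    error = observed - predicted             # prediction error drives learning
    estimated_gain += learning_rate * error * action / (1.0 + action ** 2)

print(f"learned gain ~ {estimated_gain:.2f} (true value {TRUE_GAIN})")
print(f"outcome with learned model ~ {world(goal / estimated_gain):.1f} (goal {goal})")
```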

Put all this together and the old cybernetic ideal resurfaces: exploiting deep similarities between biological entities and machines.  These similarities go far beyond superficial (and faulty) assertions that brains are computers; rather, they recognize that prediction and control lie at the very heart of both effective technologies and successful biological systems.  This means that Google’s activity in AI and robotics should not be considered separately, but instead as part of a larger view of how technology and nature interact: Google’s deep mind has deep roots.

What might this mean for you and me? Many of the original cyberneticians held out a utopian prospect of a new harmony between people and computers, well captured by Richard Brautigan’s 1967 poem – All Watched Over By Machines of Loving Grace – and recently re-examined in Adam Curtis’ powerful though breathless documentary of the same name.  As Curtis argued, these original cybernetic dreams were dashed against the complex realities of the real world. Will things be different now that Google is in charge?  One thing that is certain is that the simple idea of a ‘search engine’ will seem increasingly antiquated.  As the data deluge of our modern world accelerates, the concept of ‘search’ will become inseparable from ideas of prediction and control.  This really is both scary and exciting.

The limpid subtle peace of the ecstatic brain


In Dostoevsky’s “The Idiot”, Prince Mychkine experiences repeated epileptic seizures accompanied by “an incredible hitherto unsuspected feeling of bliss and appeasement”, so that “All my problems, doubts and worries resolved themselves in a limpid subtle peace, with a feeling of understanding and awareness of the ‘Supreme Principle of life’”. Such ‘ecstatic epileptic seizures’ have been described many times since (usually with less lyricism), but only now is the brain basis of these supremely meaningful experiences becoming clear, thanks to remarkable new studies by Fabienne Picard and her colleagues at the University of Geneva.

Ecstatic seizures, besides being highly pleasurable, involve a constellation of other symptoms including an increased vividness of sensory perceptions, heightened feelings of self-awareness – of being “present” in the world – a feeling of time standing still, and an apparent clarity of mind where all things seem suddenly to make perfect sense. For some people this clarity involves a realization that a ‘higher power’ (or Supreme Principle) is responsible, though for atheists such beliefs usually recede once the seizure has passed.

In the brain, epilepsy is an electrical storm. Waves of synchronized electrical activity spread through the cortex, usually emanating from one or more specific regions where the local neural wiring may have gone awry.  While epilepsy can often be treated by medicines, in some instances surgery to remove the offending chunk of brain tissue is the only option. In these cases it is now becoming common to insert electrodes directly into the brains of surgical candidates, to better localize the ‘epileptic focus’ and to check that its removal would not cause severe impairments, like the loss of language or movement.  And herein lie some remarkable new opportunities.

Recently, Dr. Picard used just this method to record brain activity from a 23-year-old woman who has experienced ecstatic seizures since the age of 12. Picard found that her seizures involved electrical brain-storms centred on a particular region called the ‘anterior insula cortex’.  The key new finding was that electrical stimulation of this region, using the same electrodes, directly elicited ecstatic feelings – the first time this has been seen. These new data provide important support for previous brain-imaging studies which have shown increased blood flow to the anterior insula in other patients during similar episodes.

The anterior insula (named from the Latin for ‘island’) is a particularly fascinating lump of brain tissue.  We have long known that it is involved in how we perceive the internal state of our body, and that these perceptions underlie emotional experiences. More recent evidence suggests that the subjective sensation of the passing of time depends on insular activity.  It also seems to be the place where perceptions of the outside world are integrated with perceptions of our body, perhaps supporting basic forms of self-consciousness and underpinning how we experience our relation to the world.  Strikingly, abnormal activity of the insula is associated with pathological anxiety (the opposite of ecstatic ‘certainty’) and with symptoms of depersonalization and derealisation, where the self and world are drained of subjective reality (the opposite of ecstatic perceptual vividness and enhanced self-awareness). Anatomically, the anterior insula is among the most highly developed brain regions in humans when compared to other animals, and it even houses a special kind of ‘von Economo’ neuron. These and other findings are motivating new research, including experiments here at the Sackler Centre for Consciousness Science, which aim to further illuminate the role of the insula in weaving the fabric of our experienced self. The finding that electrical stimulation of the insula can lead to ecstatic experiences and enhanced self-awareness marks an important advance in this direction.

Picard’s work brings renewed scientific attention to the richness of human experience, the positive as well as the negative, the spiritual as well as the mundane. The finding that ecstatic experiences can be induced by direct brain stimulation may seem both fascinating and troubling, but taking a scientific approach does not imply reducing these phenomena to the buzzing of neurons. Quite the opposite: our sense of wonder should be increased by perceiving connections between the peaks and troughs of our emotional lives and the intricate neural conversations on which they, at least partly, depend.