The amoral molecule


The cuddle drug, the trust hormone, the moral molecule: oxytocin (OXT) has been called all these things and more. You can buy nasal sprays of the stuff online on the promise that some judicious squirting will make people trust you more. In a recent book, neuroscientist-cum-economist Paul Zak goes the whole hog, claiming that if we only let ourselves be guided by this “moral molecule”, prosperity and social harmony will certainly ensue.

Behind this outlandish and rather ridiculous claim lies some fascinating science. The story starts with the discovery that injecting female virgin rats with OXT triggers maternal instincts, and that these same instincts in mother rats are suppressed when OXT is blocked.  Then came the finding of different levels of OXT receptors in two closely related species of vole. The male prairie vole, having high levels, is monogamous and helps look after its little vole-lets.  Male meadow voles, with many fewer receptors, are aggressive loners who move from one female to the next without regard for their offspring. What’s more, genetically manipulating meadow voles to express OXT receptors turns them into monogamous prairie-vole-a-likes. These early rodent studies showed that OXT plays an important and previously unsuspected role in social behaviour.

Studies of oxytocin and social cognition really took off about ten years ago when Paul Zak, Ernst Fehr, and colleagues began manipulating OXT levels in human volunteers while they played a variety of economic and ‘moral’ games in the laboratory.  These studies showed that OXT, usually administered by a few intranasal puffs, could make people more trusting, generous, cooperative, and empathetic.

For example, in the so-called ‘ultimatum game’ one player (the proposer) is given £10 and offers a proportion of it to a second player (the responder), who has to decide whether or not to accept. If the responder accepts, both players get their share; if not, neither gets anything. Since these are one-off encounters, rational analysis says that the responder should accept any non-zero proposal, since something is better than nothing. In practice, offers below about £3 are often rejected, presumably because the desire to punish ‘unfair’ offers outweighs the allure of a small reward. Strikingly, a few whiffs of OXT make proposers more generous, by almost 50% in some cases. And the same thing happens in other similar situations, like the ‘trust game’: OXT seems to make people more cooperative and pro-social.
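The game's logic is simple enough to sketch in a few lines of code. Here is a toy version (purely illustrative; the function name and the £3 threshold for a 'typical' human responder are my own shorthand for the behaviour described above, not anything taken from the studies themselves):

```python
def ultimatum_round(pot, offer, rejection_threshold=3):
    """One round of the ultimatum game.

    Returns (proposer_payoff, responder_payoff). The responder accepts
    any offer at or above their threshold; a rejection leaves both
    players with nothing.
    """
    if offer >= rejection_threshold:
        return pot - offer, offer   # both players keep their shares
    return 0, 0                     # 'unfair' offer punished: nobody gets anything

# A purely 'rational' responder (threshold just above zero) accepts anything:
print(ultimatum_round(10, 1, rejection_threshold=0.01))

# A typical human responder rejects low offers, at a cost to themselves:
print(ultimatum_round(10, 2))
```

The interesting point, of course, is that real responders behave like the second case, not the first: the threshold is set by a sense of fairness, not by payoff maximisation.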

Even more exciting are recent findings that OXT can help reduce negative experiences and promote social interactions in conditions like autism and schizophrenia. In part this could be due to OXT’s general ability to reduce anxiety, but there’s likely more to the story. It could also be that OXT enhances the ability to ‘read’ emotional expressions, perhaps by increasing their salience. Although clinical trials have so far been inconclusive, there is at least some hope for new OXT-based pharmacological treatments (though not cures) for these sometimes devastating conditions.

These discoveries are eye-opening and apparently very hopeful. What’s not to like?


The main thing not to like is the idea that there could be such a simple relationship between socially-conditioned phenomena like trust and morality, and the machinations of a single molecule. The evolutionary biologist Leslie Orgel said it well with his ‘third rule’: “Biology is more complicated than you imagine, even when you take Orgel’s third rule into account”. Sure enough, the emerging scientific story says things are far from simple.

Carsten de Dreu of the University of Amsterdam has published a series of important studies showing that whether oxytocin has a prosocial effect, or an antisocial effect, seems to depend critically on who the interactions are between. In one study, OXT was found to increase generosity within a participant’s ingroup (i.e., among participants judged as similar) but to actually decrease it for interactions with outgroup members.  Another study produced even more dramatic results: here, OXT infusion led volunteers to adopt more derogatory attitudes to outgroup members, even when ingroup and outgroup compositions were determined arbitrarily. OXT can even increase social conformity, as shown in a recent study in which volunteers were divided into two groups and had to judge the attractiveness of arbitrary shapes.

All this should make us look very suspiciously on claims that OXT is any kind of ‘moral molecule’. So where do we go from here? A crucial next step is to understand how the complex interplay between OXT and behaviour is mediated by the brain. Work in this area has already begun: the research on autism, for example, has shown that OXT infusion helps autistic brains differentiate better between emotional and non-emotional stimuli. This work complements emerging social neuroscience studies showing how social stereotypes can affect even very basic perceptual processes. In one example, studies in our lab indicate that outgroup faces (e.g., Moroccans for Caucasian Dutch subjects) are literally harder to see than ingroup faces.

Neuroscience has come in for a lot of recent criticism for reductionist ‘explanations’ in which complex cognitive phenomena are identified with activity in this-or-that brain region.  Following this pattern, talk of ‘moral molecules’ is, like crime in multi-storey car-parks, wrong on so many levels. There are no moral molecules, only moral people (and maybe moral societies).  But let’s not allow this kind of over-reaching to blind us to the progress being made when sufficient attention is paid to the complex hierarchical interactions linking molecules to minds.  Neuroscience is wonderfully exciting and has enormous potential for human betterment.  It’s just not the whole story.

This piece is based on a talk given at Brighton’s Catalyst Club as part of the 2014 Brighton Science Festival.

 

All watched over by search engines of loving grace


Google’s shopping spree has continued with the purchase of the British artificial intelligence (AI) start-up DeepMind, acquired for an eye-watering £400M ($650M). This is Google’s 8th biggest acquisition in its history, and the latest in a string of purchases in AI and robotics. Boston Dynamics, an American company famous for building agile robots capable of scaling walls and running over rough terrain (see BigDog), was mopped up in 2013. And there is no sign that Google is finished yet. Should we be excited or should we be afraid?

Probably both. AI and robotics have long promised brave new worlds of helpful robots (think Wall-E) and omniscient artificial intelligences (think HAL), which remain conspicuously absent. Undoubtedly, the combined resources of Google’s in-house skills and its new acquisitions will drive progress in both these areas. Experts have accordingly fretted about military robotics and speculated how DeepMind might help us make better lasagne. But perhaps something bigger is going on, something with roots extending back to the middle of the last century and the now forgotten discipline of cybernetics.

The founders of cybernetics included some of the leading lights of the age, including John Von Neumann (designer of the digital computer), Alan Turing, the British roboticist Grey Walter, and even people like the psychiatrist R.D. Laing and the anthropologist Margaret Mead. They were led by the brilliant and eccentric figures of Norbert Wiener and Warren McCulloch in the USA, and Ross Ashby in the UK. The fundamental idea of cybernetics was to consider biological systems as machines. The aim was not to build artificial intelligence per se, but rather to understand how machines could appear to have goals and act with purpose, and how complex systems could be controlled by feedback. Although the brain was the primary focus, cybernetic ideas were applied much more broadly – to economics, ecology, even management science. Yet cybernetics faded from view as the digital computer took centre stage, and has remained hidden in the shadows ever since. Well, almost hidden.

One of the most important innovations of 1940s cybernetics was the neural network, the idea that logical operations could be implemented in networks of brain-cell-like elements wired up in particular ways. Neural networks lay dormant, like the rest of cybernetics, until being rediscovered in the 1980s as the basis of powerful new ‘machine learning’ algorithms capable of extracting meaningful patterns from large quantities of data. DeepMind’s technologies are based on just these principles, and indeed some of their algorithms originate in the pioneering neural network research of Geoffrey Hinton (another Brit), whose company DNN Research was also recently bought by Google and who is now a Google Distinguished Researcher.
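That original 1940s insight – that logic can be wired into networks of brain-cell-like units – is easy to demonstrate. Here is a sketch of a McCulloch–Pitts-style threshold neuron, with weights and thresholds chosen by hand purely for illustration:

```python
def neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: fire (1) if the weighted sum of
    the inputs reaches the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical AND: both inputs must be active to reach the threshold.
AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)

# Logical OR: either input alone is enough.
OR = lambda a, b: neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b))
```

Modern machine learning keeps exactly this unit but, instead of hand-picking the weights, adjusts them automatically from data – which is what lets today’s networks extract patterns no one wired in by hand.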

What sets Hinton and DeepMind apart is that their algorithms reflect an increasingly prominent theory about brain function. (DeepMind’s founder, the ex-chess-prodigy and computer games maestro Demis Hassabis, set up his company shortly after taking a Ph.D. in cognitive neuroscience.) This theory, which came from cybernetics, says that the brain’s neural networks achieve perception, learning, and behaviour through repeated application of a single principle: predictive control. Put simply, the brain learns about the statistics of its sensory inputs, and about how these statistics change in response to its own actions. In this way, the brain can build a model of its world (which includes its own body) and figure out how to control its environment in order to achieve specific goals. What’s more, exactly the same principle can be used to develop robust and agile robotics, as seen in BigDog and its friends.
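The principle can be caricatured in a dozen lines of code. The toy agent below (every name and number here is invented for illustration, and bears no relation to DeepMind’s actual algorithms) maintains a prediction of its sensory input, updates that prediction from prediction errors (learning its world), and acts so as to push the world towards a goal state (control):

```python
def predictive_control(world_state, goal, steps=200,
                       learning_rate=0.2, action_gain=0.3):
    """Toy predictive-control loop over a one-dimensional 'world'."""
    prediction = 0.0
    for _ in range(steps):
        error = world_state - prediction            # prediction error
        prediction += learning_rate * error         # perception: update the model
        world_state += action_gain * (goal - prediction)  # action: steer the world
    return world_state, prediction

# The agent starts ignorant (prediction 0) in a world at state 5.0,
# and both its model and the world end up near the goal:
state, model = predictive_control(world_state=5.0, goal=1.0)
print(round(state, 3), round(model, 3))
```

The point of the caricature is that a single update rule does double duty: the same prediction-error signal that improves the model also drives action, which is the sense in which perception and control are two sides of one coin.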

Put all this together and the cybernetic ideal of exploiting deep similarities between biological entities and machines resurfaces. These similarities go far beyond superficial (and faulty) assertions that brains are computers; rather, they recognize that prediction and control lie at the very heart of both effective technologies and successful biological systems. This means that Google’s activity in AI and robotics should not be considered separately, but instead as part of a larger view of how technology and nature interact: Google’s deep mind has deep roots.

What might this mean for you and me? Many of the original cyberneticians held out a utopian prospect of a new harmony between people and computers, well captured by Richard Brautigan’s 1967 poem – All Watched Over By Machines of Loving Grace – and recently re-examined in Adam Curtis’ powerful though breathless documentary of the same name. As Curtis argued, these original cybernetic dreams were dashed against the complex realities of the real world. Will things be different now that Google is in charge? One thing that is certain is that the simple idea of a ‘search engine’ will seem increasingly antiquated. As the data deluge of our modern world accelerates, the concept of ‘search’ will become inseparable from ideas of prediction and control. This really is both scary and exciting.

The limpid subtle peace of the ecstatic brain


In Dostoevsky’s “The Idiot”, Prince Mychkine experiences repeated epileptic seizures accompanied by “an incredible hitherto unsuspected feeling of bliss and appeasement”, so that “All my problems, doubts and worries resolved themselves in a limpid subtle peace, with a feeling of understanding and awareness of the ‘Supreme Principal of life’”. Such ‘ecstatic epileptic seizures’ have been described many times since (usually with less lyricism), but only now is the brain basis of these supremely meaningful experiences becoming clear, thanks to remarkable new studies by Fabienne Picard and her colleagues at the University of Geneva.

Ecstatic seizures, besides being highly pleasurable, involve a constellation of other symptoms including an increased vividness of sensory perceptions, heightened feelings of self-awareness – of being “present” in the world – a feeling of time standing still, and an apparent clarity of mind where all things seem suddenly to make perfect sense. For some people this clarity involves a realization that a ‘higher power’ (or Supreme Principal) is responsible, though for atheists such beliefs usually recede once the seizure has passed.

In the brain, epilepsy is an electrical storm. Waves of synchronized electrical activity spread through the cortex, usually emanating from one or more specific regions where the local neural wiring may have gone awry.  While epilepsy can often be treated by medicines, in some instances surgery to remove the offending chunk of brain tissue is the only option. In these cases it is now becoming common to insert electrodes directly into the brains of surgical candidates, to better localize the ‘epileptic focus’ and to check that its removal would not cause severe impairments, like the loss of language or movement.  And herein lie some remarkable new opportunities.

Recently, Dr. Picard used just this method to record brain activity from a 23-year-old woman who has experienced ecstatic seizures since the age of 12. Picard found that her seizures involved electrical brain-storms centred on a particular region called the ‘anterior insula cortex’.  The key new finding was that electrical stimulation of this region, using the same electrodes, directly elicited ecstatic feelings – the first time this has been seen. These new data provide important support for previous brain-imaging studies which have shown increased blood flow to the anterior insula in other patients during similar episodes.

The anterior insula (named from the Latin for ‘island’) is a particularly fascinating lump of brain tissue. We have long known that it is involved in how we perceive the internal state of our body, and that these perceptions underlie emotional experiences. More recent evidence suggests that the subjective sensation of the passing of time depends on insular activity. It also seems to be the place where perceptions of the outside world are integrated with perceptions of our body, perhaps supporting basic forms of self-consciousness and underpinning how we experience our relation to the world. Strikingly, abnormal activity of the insula is associated with pathological anxiety (the opposite of ecstatic ‘certainty’) and symptoms of depersonalization and derealisation, where the self and world are drained of subjective reality (the opposite of ecstatic perceptual vividness and enhanced self-awareness). Anatomically the anterior insula is among the most highly developed brain regions in humans when compared to other animals, and it even houses a special kind of ‘Von Economo’ neuron. These and other findings are motivating new research, including experiments here at the Sackler Centre for Consciousness Science, which aim to further illuminate the role of the insula in weaving the fabric of our experienced self. The finding that electrical stimulation of the insula can lead to ecstatic experiences and enhanced self-awareness provides an important advance in this direction.

Picard’s work brings renewed scientific attention to the richness of human experience, the positive as well as the negative, the spiritual as well as the mundane. The finding that ecstatic experiences can be induced by direct brain stimulation may seem both fascinating and troubling, but taking a scientific approach does not imply reducing these phenomena to the buzzing of neurons. Quite the opposite: our sense of wonder should be increased by perceiving connections between the peaks and troughs of our emotional lives and the intricate neural conversations on which they, at least partly, depend.