Training synaesthesia: How to see things differently in half-an-hour a day

syn_brain_phillips
Image courtesy of Phil Wheeler Illustrations

Can you learn to see the world differently? Some people already do. People with synaesthesia experience the world very differently indeed, in a way that seems linked to creativity, and which can shed light on some of the deepest mysteries of consciousness. In a paper published in Scientific Reports, we describe new evidence suggesting that non-synaesthetes can be trained to experience the world much like natural synaesthetes. Our results have important implications for understanding individual differences in conscious experiences, and they extend what we know about the flexibility (‘plasticity’) of perception.

Synaesthesia means that an experience of one kind (like seeing a letter) consistently and automatically evokes an experience of another kind (like seeing a colour), when the normal kind of sensory stimulation for the additional experience (the colour) isn’t there. This example describes grapheme-colour synaesthesia, but this is just one among many fascinating varieties. Other synaesthetes experience numbers as having particular spatial relationships (spatial form synaesthesia, probably the most common of all). And there are other more unusual varieties like mirror-touch synaesthesia, where people experience touch on their own bodies when they see someone else being touched, and taste-shape synaesthesia, where triangles might taste sharp, and ellipses bitter.

The richly associative nature of synaesthesia, and the biographies of famous case studies like Vladimir Nabokov and Wassily Kandinsky (or, as the Daily Wail preferred: Lady Gaga and Pharrell Williams), have fuelled its association with creativity and intelligence. Yet the condition is remarkably common, with recent estimates suggesting that about 1 in 23 people have some form of synaesthesia. But how does it come about? Is it in your genes, or is it something you can learn?

kandinsky
It is widely believed that Kandinsky was synaesthetic. For instance he said: “Colour is the keyboard, the eyes are the harmonies, the soul is the piano with many strings. The artist is the hand that plays, touching one key or another, to cause vibrations in the soul”

As with most biological traits, the truth is: a bit of both. But this still raises the question of whether being synaesthetic is something that can be learnt, even as an adult.

There is a rather long history of attempts to train people to be synaesthetic. Perhaps the earliest example was by E.L. Kelly who in 1934 published a paper with the title: An experimental attempt to produce artificial chromaesthesia by the technique of the conditioned response. While this attempt failed (the paper says it is “a report of purely negative experimental findings”), things have now moved on.

More recent attempts, for instance the excellent work of Olympia Colizoli and colleagues in Amsterdam, have tried to mimic (grapheme-colour) synaesthesia by having people read books in which some of the letters are always coloured in with particular colours. They found that it was possible to train people to display some of the characteristics of synaesthesia, like being slower to name coloured letters when they were presented in a colour conflicting with the training (the ‘synaesthetic Stroop’ effect). But crucially, until now no study has found that training could lead to people actually reporting synaesthesia-like conscious experiences.

syn_reading
An extract from the ‘coloured reading’ training material, used in our study, and similar to the material used by Colizoli and colleagues. The text is from James Joyce. Later in training we replaced some of the letters with (appropriately) coloured blocks to make the task even harder.

Our approach was based on brute force. We decided to dramatically increase the length and rigour of the training procedure that our (initially non-synaesthetic) volunteers undertook. Each of them (14 in all) came in to the lab for half-an-hour each day, five days a week, for nine weeks! On each visit they completed a selection of training exercises designed to cement specific associations between letters and colours. Crucially, we adapted the difficulty of the tasks to each volunteer and each training session, and we also gave them financial rewards for good performance. Over the nine-week regime, some of the easier tasks were dropped entirely, and other more difficult tasks were introduced. Our volunteers also had homework to do, like reading the coloured books. Our idea was that the training must always be challenging, in order to have a chance of working.

The results were striking. At the end of the nine-week exercise, our dedicated volunteers were tested for behavioural signs of synaesthesia, and – crucially – were also asked about their experiences, both inside and outside the lab. Behaviourally they all showed strong similarities with natural-born synaesthetes. This was most striking in measures of ‘consistency’, a test which requires repeated selection of the colour associated with a particular letter, from a palette of millions.

consistency
The consistency test for synaesthesia. This example is taken from David Eagleman’s popular ‘synaesthesia battery’.

Natural-born synaesthetes show very high consistency: the colours they pick (for a given letter) are very close to each other in colour space, across repeated selections. This is important because consistency is very hard to fake. The idea is that synaesthetes can simply match a colour to their experienced ‘concurrent’, whereas non-synaesthetes have to rely on less reliable visual memory, or other strategies.
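As a rough illustration of this logic, consistency can be scored as the spread of a person’s repeated colour choices in colour space. The sketch below is a toy Python version of that idea; the scoring details, colour space, and thresholds of Eagleman’s actual battery differ, and all the numbers here are invented.

```python
# Toy consistency score, loosely modelled on the logic of consistency testing.
# Each letter's colour is picked three times; colours are RGB triples in [0, 1].
# Lower scores mean more consistent picks. Names and values are illustrative
# assumptions, not those of the real synaesthesia battery.

from itertools import combinations

def letter_consistency(picks):
    """Sum of pairwise Euclidean distances between repeated colour picks."""
    total = 0.0
    for (r1, g1, b1), (r2, g2, b2) in combinations(picks, 2):
        total += ((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2) ** 0.5
    return total

def consistency_score(all_picks):
    """Mean per-letter score: lower = more synaesthete-like."""
    return sum(letter_consistency(p) for p in all_picks.values()) / len(all_picks)

# A volunteer who picks nearly the same red for 'A' every time,
# versus one who picks three unrelated colours:
consistent = {"A": [(0.9, 0.1, 0.1), (0.88, 0.12, 0.1), (0.92, 0.1, 0.08)]}
inconsistent = {"A": [(0.9, 0.1, 0.1), (0.1, 0.9, 0.2), (0.2, 0.3, 0.9)]}

print(consistency_score(consistent))    # small: highly consistent
print(consistency_score(inconsistent))  # much larger: memory-based guessing
```

The point is simply that genuine letter–colour experiences produce tight clusters in colour space across repeated tests, while memory-based strategies do not.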

Our trained quasi-synaesthetes passed the consistency test with flying colours (so to speak). They also performed much like natural synaesthetes on a whole range of other behavioural tests, including the synaesthetic Stroop, and a ‘synaesthetic conditioning’ task which shows that trained colours can elicit automatic physiological responses, like increases in skin conductance. Most importantly, most (8/14) of our volunteers described colour experiences much like those of natural synaesthetes (only 2 reported no colour phenomenology at all). Strikingly, some of these experiences took place even outside the lab:

“When I was walking into campus I glanced at the University of Sussex sign and the letters were coloured” [according to their trained associations]

Like natural synaesthetes, some of our volunteers seemed to experience the concurrent colour ‘out in the world’ while others experienced the colours more ‘in the head’:

“When I am looking at a letter I see them in the trained colours”

“When I look at the letter ‘p’ … its like the inside of my head is pink”

syn_letters
For grapheme colour synaesthetes, letters evoke specific colour experiences. Most of our trained quasi-synaesthetes reported similar experiences. This image is however quite misleading. Synaesthetes (natural born or not) also see the letters in their actual colour, and they typically know that the synaesthetic colour is not ‘real’. But that’s another story.

These results are very exciting, suggesting for the first time that with sufficient training, people can actually learn to see the world differently. Of course, since they are based on subjective reports about conscious experiences, they are also the hardest to independently verify. There is always the slight worry that our volunteers said what they thought we wanted to hear. Against this worry, we were careful to ensure that none of our volunteers knew the study was about synaesthesia (and on debrief, none of them did!). Also, similar ‘demand characteristic’ concerns could have affected other synaesthesia training studies, yet none of these led to descriptions of synaesthesia-like experiences.

Our results weren’t just about synaesthesia. A fascinating side effect was that our volunteers registered a dramatic increase in IQ, gaining an average of about 12 IQ points (compared to a control group which didn’t undergo training). We don’t yet know whether this increase was due to the specifically synaesthetic aspects of our regime, or just intensive cognitive training in general. Either way, our findings provide support for the idea that carefully designed cognitive training could enhance normal cognition, or even help remedy cognitive deficits or decline. More research is needed on these important questions.

What happened in the brain as a result of our training? The short answer is: we don’t know, yet. While in this study we didn’t look at the brain, other studies have found changes in the brain after similar kinds of training. This makes sense: changes in behaviour or in perception should be accompanied by neural changes of some kind. At the same time, natural-born synaesthetes appear to have differences both in the structure of their brains, and in their activity patterns. We are now eager to see what kind of neural signatures underlie the outcome of our training paradigm. The hope is that, because our study showed actual changes in perceptual experience, analysis of these signatures will shed new light on the brain basis of consciousness itself.

So, yes, you can learn to see the world differently. To me, the most important aspect of this work is that it emphasizes that each of us inhabits our own distinctive conscious world. It may be tempting to think that while different people – maybe other cultures – have different beliefs and ways of thinking, still we all see the same external reality. But synaesthesia – along with emerging theories of ‘predictive processing’ – shows that the differences go much deeper. We each inhabit our own personalised universe, albeit one which is partly defined and shaped by other people. So next time you think someone is off in their own little world: they are.


The work described here was led by Daniel Bor and Nicolas Rothen, and is just one part of an energetic inquiry into synaesthesia taking place at Sussex University and the Sackler Centre for Consciousness Science. With Jamie Ward and (recently) Julia Simner also working here, we have a uniquely concentrated expertise in this fascinating area. In other related work I have been interested in why synaesthetic experiences lack a sense of reality and how this gives an important clue about the nature of ‘perceptual presence’. I’ve also been working on the phenomenology of spatial form synaesthesia, and whether synaesthetic experiences can be induced through hypnosis. And an exciting brain imaging study of natural synaesthetes will shortly hit the press! Nicolas Rothen is an authority on the relationship between synaesthesia and memory, and Jamie Ward and Julia Simner have way too many accomplishments in this field to mention. (OK, Jamie has written the most influential review paper in the area – featuring a lot of his own work – and Julia (with Ed Hubbard) has written the leading textbook. That’s not bad to start with.)


Our paper, Adults can be Trained to Acquire Synesthetic Experiences (sorry for US spelling) is published (open access, free!) in Scientific Reports, part of the Nature family. The authors were Daniel Bor, Nicolas Rothen, David Schwartzman, Stephanie Clayton, and Anil K. Seth. There has been quite a lot of media coverage of this work, for instance in the New Scientist and the Daily Fail. Other coverage is summarized here.

Eye Benders: the science of seeing and believing, wins Royal Society prize!

eyebenders_cover

An unexpected post.  I’m very happy to have learnt today that the book Eye Benders has won the 2014 Royal Society Young Person’s Book Prize.  Eye Benders was written by Clive Gifford (main author) and me (consultant).  It was published by Ivy Press, who are also the redoubtable publishers of the so-far-prizeless but nonetheless worthy 30 Second Brain. A follow-up to Eye Benders, Brain Twister, is in the works: more brain, fewer optical illusions, but the same high-quality young-person neuroscience fare.

The Royal Society says this about the prize: “Each year the Royal Society awards a prize to the best book that communicates science to young people. The prize aims to inspire young people to read about science and promotes the best science writing for the under-14s.”

This year, the shortlist was chosen by Professor James Hough FRS, Dr Rhaana Starling, Mr Michael Heyes, Professor Iain Stewart and Dr Anjana Ahuja. Well done all, good shortlisting.  More importantly, the winner was chosen by groups of young persons themselves.  Here is what some of the 2014 young people had to say about Eye Benders:

Matt, 12 said “Science from a different perspective. Factual and interesting – a spiral of a read!”

Beth, 14 said “It was way, way cool!”

Ethan, 12 said “The illustrations were absolutely amazing”

Joe, 12 said “A great, well written and well thought-out book; the illustrations are clear, detailed and amazing. The front cover is beautiful.”

Felix, 10 said “Eye-popping and mind-blowingly fun!”

So there it is. Matt and friends have spoken, and here is a picture of Clive accepting the award in Newcastle (alas I wasn’t there), accompanied by a young person being enthused:

eyebenders_award

Here’s a sneak peek at what the book looks like, on the inside:

eyebenders_sample

A personal note: I remember well going through the final layouts for Eye Benders, heavily dosed on painkillers in hospital in Barcelona following emergency surgery, while at the same time my father was entering his final weeks back in Oxfordshire. A dark time.  It’s lovely, if bittersweet, to see something like this emerge from it.

Other coverage:

GrrlScientist in The Guardian.
Optical illusion book wins Royal Society prize
Clive shares some of the best Eye Benders illusions online
Royal Society official announcement
University of Sussex press release

I just dropped in (to see what condition my condition was in): How ‘blind insight’ changes our view of metacognition

metacog

Image from 30 Second Brain, Ivy Press, available at all good booksellers.

Neuroscientists have long appreciated that people can make accurate decisions without knowing they are doing so. This is particularly impressive in blindsight: a phenomenon where people with damage to the visual parts of their brain can still make accurate visual discriminations while claiming to not see anything. But even in normal life it is quite possible to make good decisions without having reliable insight into whether you are right or wrong.

In a paper published this week in Psychological Science, our research group – led by Ryan Scott – has for the first time shown the opposite phenomenon: blind insight. This is the situation in which people know whether or not they’ve made accurate decisions, even though they can’t make decisions accurately!

This is important because it changes how we think about metacognition. Metacognition, strictly speaking, is ‘knowing about knowing’. When we make a perceptual judgment, or a decision of any kind, we typically have some degree of insight into whether our decision was correct or not. This is metacognition, which in experiments is usually measured by asking people how confident they are in a previous decision. Good metacognitive performance is indicated by high correlations between confidence and accuracy, which can be quantified in various ways.
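As a concrete illustration of one such quantification, the sketch below correlates trial-by-trial confidence with accuracy using a phi coefficient. This is purely illustrative: the data are invented, and the paper itself uses signal-detection measures rather than this simple correlation.

```python
# Illustrative metacognition measure: correlate binary confidence reports
# (0 = guess, 1 = confident) with binary accuracy (0 = wrong, 1 = right).
# The phi coefficient is just Pearson's r applied to binary data.
# Data and measure are illustrative assumptions, not from the study.

def phi(xs, ys):
    """Pearson correlation for two equal-length binary sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

confidence = [1, 1, 0, 0, 1, 0, 1, 0]
accuracy   = [1, 1, 0, 1, 1, 0, 1, 0]  # mostly right when confident

print(round(phi(confidence, accuracy), 2))  # → 0.77: decent metacognition
```

A participant whose confidence tracked accuracy perfectly would score 1.0; a participant whose confidence was unrelated to accuracy would score around 0.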

Most explanations of metacognition assume that metacognitive judgements are based on the same information as the original (‘first-order’) decision. For example, if you are asked to decide whether a dim light was present or not, you might make a (first-order) judgment based on signals flowing from your eyes to your brain. Perhaps your brain sets a threshold below which you will say ‘No’ and above which you will say ‘Yes’. Metacognitive judgments are typically assumed to work on the same data. If you are asked whether you were guessing or were confident, maybe you will set additional thresholds a bit further apart. The idea is that your brain may need more sensory evidence to be confident in judging that a dim light was in fact present, than when merely guessing that it was.

This way of looking at things is formalized by signal detection theory (SDT). The nice thing about SDT is that it can give quantitative mathematical expressions for how well a person can make both first-order and metacognitive judgements, in ways which are not affected by individual biases to say ‘yes’ or ‘no’, or ‘guess’ versus ‘confident’. (The situation is a bit trickier for metacognitive confidence judgements but we can set these details aside for now: see here for the gory details). A simple schematic of SDT is shown below.

sdt

Signal detection theory. The ‘signal’ refers to sensory evidence and the curves show hypothetical probability distributions for stimulus present (solid line) and stimulus absent (dashed line). If a stimulus (e.g., a dim light) is present, then the sensory signal is likely to be stronger (higher) – but because sensory systems are assumed to be noisy (probabilistic), some signal is likely even when there is no stimulus. The difficulty of the decision is shown by the overlap of the distributions. The best strategy for the brain is to place a single ‘decision criterion’ midway between the peaks of the two distributions, and to say ‘present’ for any signal above this threshold, and ‘absent’ for any signal below. This determines the ‘first order decision’. Metacognitive judgements are then specified by additional ‘confidence thresholds’ which bracket the decision criterion. If the signal lies in between the two confidence thresholds, the metacognitive response is ‘guess’; if it lies to the two extremes, the metacognitive response is ‘confident’. The mathematics of SDT allow researchers to calculate ‘bias free’ measures of how well people can make both first-order and metacognitive decisions (these are called ‘d-primes’). As well as providing a method for quantifying decision making performance, the framework is also frequently assumed to say something about what the brain is actually doing when it is making these decisions. It is this last assumption that our present work challenges.
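The scheme in the figure can be sketched in a few lines of simulation: a single noisy signal, one decision criterion for the first-order ‘present/absent’ call, and two confidence thresholds bracketing it for the ‘guess/confident’ call. The parameter values below are illustrative, not fitted to any experiment.

```python
# Minimal signal-detection sketch: Gaussian signal distributions, a decision
# criterion midway between the peaks, and symmetric confidence thresholds.
# All parameter values here are illustrative assumptions.

import random
from statistics import NormalDist

random.seed(1)

D_PRIME = 1.5            # separation of the two distributions, in noise SDs
CRITERION = D_PRIME / 2  # midway between the peaks
CONF_GAP = 0.5           # confidence thresholds sit this far either side

def trial(stimulus_present):
    signal = random.gauss(D_PRIME if stimulus_present else 0.0, 1.0)
    decision = signal > CRITERION                    # first-order judgment
    confident = abs(signal - CRITERION) > CONF_GAP   # metacognitive judgment
    return decision, confident

labelled = [(p, *trial(p)) for p in (True, False) * 5000]  # 10,000 trials
hits = sum(d for p, d, _ in labelled if p) / 5000
fas  = sum(d for p, d, _ in labelled if not p) / 5000

# Recover d-prime from hit and false-alarm rates: d' = z(H) - z(FA)
z = NormalDist().inv_cdf
dprime_est = z(hits) - z(fas)
print(round(dprime_est, 2))  # close to the generating value of 1.5

# On this model, confidence automatically tracks accuracy:
conf_acc  = [d == p for p, d, c in labelled if c]
guess_acc = [d == p for p, d, c in labelled if not c]
print(round(sum(conf_acc) / len(conf_acc), 2),
      round(sum(guess_acc) / len(guess_acc), 2))  # confident > guess accuracy
```

Note how, in this single-signal scheme, above-chance metacognition falls out of above-chance first-order performance; that coupling is exactly what the blind insight result breaks.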

On SDT it is easy to see that one can make above-chance first order decisions while displaying low or no metacognition. One way to do this would be to set your metacognitive thresholds very far apart, so that you are always guessing. But there is no way, on this theory (without making various weird assumptions), that you could be at chance in your first-order decisions, yet above chance in your metacognitive judgements about these decisions.

Surprisingly, until now, no-one had actually checked to see whether this could happen in practice. This is exactly what we did, and this is exactly what we found. We analysed a large amount of data from a paradigm called artificial grammar learning, which is a workhorse in psychological laboratories for studying unconscious learning and decision-making. In artificial grammar learning people are shown strings of letters and have to decide whether each string belongs to ‘grammar A’ or ‘grammar B’. Each grammar is just an arbitrary set of rules determining allowable patterns of letters. Over time, most people can learn to classify letter strings at better than chance. However, over a large sample, there will always be some people that can’t: for these unfortunates, their first-order performance remains at ~50% (in SDT terms they have a d-prime not different from zero).

agl

Artificial grammar learning. Two rule sets (shown on the left) determine which letter strings belong to ‘grammar A’ or ‘grammar B’. Participants are first shown examples of strings generated by one or the other grammar (training). Importantly, they are not told about the grammatical rules, and in most cases they remain unaware of them. Nonetheless, after some training they are able to successfully (i.e., above chance) classify novel letter strings appropriately (testing).
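A toy version of such a grammar can be written as a small finite-state machine: strings are legal walks from a start state to an end state. The transition table below is invented for illustration; it is not the grammar used in the study.

```python
# Toy finite-state grammar of the kind used in artificial grammar learning.
# Each state lists (letter, next-state) options; a string is grammatical if
# it traces a legal path from S1 to END. The table is an invented example.

import random

random.seed(0)

GRAMMAR_A = {
    "S1": [("T", "S2"), ("P", "S3")],
    "S2": [("S", "S2"), ("X", "S4")],
    "S3": [("T", "S3"), ("V", "S4")],
    "S4": [("V", "END"), ("P", "S3")],
}

def generate(grammar, start="S1"):
    """Random walk from the start state to END, collecting letters."""
    state, letters = start, []
    while state != "END":
        letter, state = random.choice(grammar[state])
        letters.append(letter)
    return "".join(letters)

def accepts(grammar, string, start="S1"):
    """A string is grammatical if it traces a legal path ending at END."""
    state = start
    for ch in string:
        if state == "END":
            return False
        nxt = dict(grammar[state]).get(ch)
        if nxt is None:
            return False
        state = nxt
    return state == "END"

strings = [generate(GRAMMAR_A) for _ in range(5)]
print(strings)                    # five strings, all legal under grammar A
print(accepts(GRAMMAR_A, "XXX"))  # False: no legal path starts with X
```

Participants, of course, never see the transition table; they only see example strings, which is what makes their above-chance classification (without awareness of the rules) so interesting.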

Crucially, subjects in our experiments were asked to make confidence judgments along with their first-order grammaticality judgments. Focusing on those subjects who remained at chance in their first-order judgements, we found that they still showed above-chance metacognition. That is, they were more likely to be confident when they were (by chance) right, than when they were (by chance) wrong. We call this novel finding blind insight.

The discovery of blind insight changes the way we think about decision-making. Our results show that theoretical frameworks based on SDT are, at the very least, incomplete. Metacognitive performance during blind insight cannot be explained by simply setting different thresholds on a single underlying signal. Additional information, or substantially different transformations of the first-order signal, are needed. Exactly what is going on remains an open question. Several possible mechanisms could account for our results. One exciting possibility appeals to predictive processing, which is the increasingly influential idea that perception depends on top-down predictions about the causes of sensory signals. If top-down influences are also involved in metacognition, they could carry the additional information needed for blind insight. This would mean that metacognition, like perception, is best understood as a process of probabilistic inference.

pp

In predictive processing theories of brain function, perception depends on top-down predictions (blue) about the causes of sensory signals. Sensory signals carry ‘prediction errors’ (magenta) which update top-down predictions according to principles of Bayesian inference. Maybe a similar process underlies metacognition. Image from 30 Second Brain, Ivy Press.

This brings us to consciousness (of course). Metacognitive judgments are often used as a proxy for consciousness, on the logic that confident decisions are assumed to be based on conscious experiences of the signal (e.g., the dim light was consciously seen), whereas guesses signify that the signal was processed only unconsciously. If metacognition involves top-down inference, this raises the intriguing possibility that metacognitive judgments actually give rise to conscious experiences, rather than just provide a means for reporting them. While speculative, this idea fits neatly with the framework of predictive processing which says that top-down influences are critical in shaping the nature of perceptual contents.

The discovery of blindsight many years ago has substantially changed the way we think about vision. Our new finding of blind insight may similarly change the way we think about metacognition, and about consciousness too.

The paper is published open access (i.e. free!) in Psychological Science. The authors were Ryan Scott, Zoltan Dienes, Adam Barrett, Daniel Bor, and Anil K Seth. There are also accompanying press releases and coverage:

Sussex study reveals how ‘blind insight’ confounds logic.  (University of Sussex, 13/11/2014)
People show ‘blind insight’ into decision making performance (Association for Psychological Science, 13/11/2014)

The Human Brain Project risks becoming a missed opportunity

Image concept of a network of neurons in the human brain.

The brain is much on our minds at the moment. David Cameron is advocating a step-change in dementia research, brain-computer interfaces promise new solutions to paralysis, and the ongoing plight of Michael Schumacher has reminded us of the terrifying consequences of traumatic brain injury. Articles in scholarly journals and in the media are decorated with magical images of the living brain, like the one shown below, to illuminate these stories. Yet, when asked, most neuroscientists will say we still know very little about how the brain works, or how to fix it when it goes wrong.

DTI-sagittal-fibers
A diffusion tensor image showing some of the main pathways along which brain connections are organized.

The €1.2bn Human Brain Project (HBP) is supposed to change all this. Funded by the European Commission, the HBP brings together more than 80 research institutes in a ten-year endeavour to unravel the mysteries of the brain, and to emulate its powers in new technologies. Following examples like the Human Genome Project and the Large Hadron Collider (where Higgs’ elusive boson was finally found), the idea is that a very large investment will deliver very significant results. But now a large contingent of prominent European neuroscientists are rebelling against the HBP, claiming that its approach is doomed to fail and will undermine European neuroscience for decades to come.

Stepping back from the fuss, it’s worth thinking whether the aims of the HBP really make sense. Sequencing the genome and looking for Higgs were both major challenges, but in these cases the scientific community agreed on the objectives, and on what would constitute success. There is no similar consensus among neuroscientists.

It is often said that the adult human brain is the most complex object in the universe. It contains about 90 billion neurons and a thousand times more connections, so that if you counted one connection each second it would take about three million years to finish. The challenge for neuroscience is to understand how this vast, complex, and always changing network gives rise to our sensations, perceptions, thoughts, actions, beliefs, desires, our sense of self and of others, our emotions and moods, and all else that guides our behaviour and populates our mental life, in health and in disease. No single breakthrough could ever mark success across such a wide range of important problems.

The central pillar of the HBP approach is to build computational simulations of the brain. Befitting the huge investment, these simulations would be of unprecedented size and detail, and would allow brain scientists to integrate their individual findings into a collective resource. What distinguishes the HBP – besides the money – is its aggressively ‘bottom up’ approach: the vision is that by taking care of the neurons, the big things – thoughts, perceptions, beliefs, and the like – will take care of themselves. As such, the HBP does not set out to test any specific hypothesis or collection of hypotheses, marking another distinction with common scientific practice.

Could this work? Certainly, modern neuroscience is generating an accelerating data deluge demanding new technologies for visualisation and analysis. This is the ‘big data’ challenge now common in many settings. It is also clear that better pictures of the brain’s wiring diagram (the ‘connectome’) will be essential as we move ahead. On the other hand, more detailed simulations don’t inevitably lead to better understanding. Strikingly, we don’t fully understand the brain of the tiny worm Caenorhabditis elegans even though it has only 302 neurons and the wiring diagram is known exactly. More generally, a key ability in science is to abstract away from the specifics to see more clearly what underlying principles are at work. In the limit, a perfectly accurate model of the brain may become as difficult to understand as the brain itself, as Borges long ago noted when describing the tragic uselessness of the perfectly detailed map.

jorge_luis_borges_por_paola_agosti
Jorge Luis Borges at Harvard University, 1967/8

Neuroscience is, and should remain, a broad church. Understanding the brain does not reduce to simulating the collective behaviour of all its minuscule parts, however interesting a part of the final story this might become. Understanding the brain means grasping complex interactions cross-linking many different levels of description, from neurons to brain regions to individuals to societies. It means complementing bottom-up simulations with new theories describing what the brain is actually doing, when its neurons are buzzing merrily away. It means designing elegant experiments that reveal how the mind constructs its reality, without always worrying about the neuronal hardware underneath. Sometimes, it means aiming directly for new treatments for devastating neurological and psychiatric conditions like coma, paralysis, dementia, and depression.

Put this way, neuroscience has enormous potential to benefit society, well deserving of high profile and large-scale support. It would be a great shame if the Human Brain Project, through its singular emphasis on massive computer simulation, ends up as a lightning rod for dissatisfaction with ‘big science’ rather than fostering a new and powerfully productive picture of the biological basis of the mind.

This article first appeared online in The Guardian on July 8 2014.  It appeared in print in the July 9 edition, on page 30 (comment section).

Post publication notes:

The HBP leadership have published a response to the open letter here. I didn’t find it very convincing. There have been a plethora of other commentaries on the HBP, as it comes up to its first review.  I can’t provide an exhaustive list but I particularly liked Gary Marcus’ piece in the New York Times (July 11). There was also trenchant criticism in the editorial pages of Nature.  Paul Verschure has a nice TED talk addressing some of the challenges facing big data, encompassing the HBP.


Darwin’s Neuroscientist: Gerald M. Edelman, 1929-2014

Image
Dr. Gerald M. Edelman, 1929-2014.

“The brain is wider than the sky.
For, put them side by side,
The one the other will include,
With ease, and you beside.”

Dr. Gerald M. Edelman often used these lines from Emily Dickinson to introduce the deep mysteries of neuroscience and consciousness. Dr. Edelman (it was always ‘Dr.’), who has died in La Jolla, aged 84, was without doubt a scientific great. He was a Nobel laureate at the age of 43, a pioneer in immunology, embryology, molecular biology, and neuroscience, a shrewd political operator, and a Renaissance man of striking erudition with a masterful knowledge of science, music, literature, and the visual arts, who at one time could have been a concert violinist. A compelling raconteur, he quoted Woody Allen and Jascha Heifetz as readily as Linus Pauling and Ludwig Wittgenstein, and loved telling a good Jewish joke just as much as explaining the principles of neuronal selection. And he was my mentor from the time I arrived as a freshly minted Ph.D. at The Neurosciences Institute in San Diego, back in 2001. His influence in biology and the neurosciences is inestimable. While his loss marks the end of an era, his legacy is sure to continue.

Gerald Maurice Edelman was born in Ozone Park, New York City, in 1929, to parents Edward and Anna. He trained in medicine at the University of Pennsylvania, graduating cum laude in 1954. After an internship at the Massachusetts General Hospital and three years in the US Army Medical Corps in France, Edelman entered the doctoral program at Rockefeller University, New York. Staying at Rockefeller after his Ph.D., he became Associate Dean and Vincent Astor Distinguished Professor, and in 1981 he founded The Neurosciences Institute (NSI). In 1992 the NSI moved lock, stock, and barrel into new purpose-built laboratories in La Jolla, California, where Edelman continued as Director for more than twenty years. A dedicated man, he continued working at the NSI until a week before he died.

In 1972 Edelman won the Nobel Prize in Physiology or Medicine (shared with Rodney Porter, who worked independently) for showing how antibodies can recognize an almost infinite range of invading antigens. Edelman’s insight, the principles of which resonate throughout his entire career, was based on variation and selection: antibodies undergo a process of ‘evolution within the body’ in order to match novel antigens. Crucially, he performed definitive experiments on the chemical structure of antibodies to support his idea [1].

Image
Dr. Edelman at Rockefeller University in 1972, explaining his model of gamma globulin.

Edelman then moved into embryology, discovering an important class of proteins known as ‘cell adhesion molecules’ [2]. Though this, too, was a major contribution, it was the biological basis of mind and consciousness – one of the ‘dark areas’ of science, where mystery reigned – that drew his attention for the rest of his long career. Over more than three decades Edelman developed his theory of neuronal group selection, also known as ‘neural Darwinism’, which again took principles of variation and selection, but here applied them to brain development and dynamics [3-7]. The theory is rich and still underappreciated. At its heart is the realization that the brain is very different from a computer: as he put it, brains don’t work with ‘logic and a clock’. Instead, Edelman emphasized the rampantly ‘re-entrant’ connectivity of the brain, with massively parallel bidirectional connections linking most brain regions. Uncovering the implications of re-entry remains a profound challenge today.

Image
The campus of The Neurosciences Institute in La Jolla, California.

Edelman was convinced that scientific breakthroughs require both sharp minds and inspiring environments. The NSI was founded as a monastery of science, supporting a small cadre of experimental and theoretical neuroscientists and enabling them to work on ambitious goals free from the immediate pressures of research funding and paper publication. This at least was the model, and Edelman struggled heroically to maintain its reality in the face of increasing financial pressures and the shifting landscape of academia. That he was able to succeed for so long attests to his political nous and focused determination as well as his intellectual prowess. I remember vividly the ritual lunches that exemplified life at the NSI. The entire scientific staff ate together at noon every day (except Fridays), at tables seemingly designed to hold just enough people so that the only common topic could be neuroscience; Edelman, of course, held court at one table, brainstorming and story-telling in equal measure. The NSI itself is a striking building, housing not only experimental laboratories but also a concert-grade auditorium. Science and art were, for Edelman, two manifestations of a fundamental urge towards creativity and beauty.

Edelman did not always take the easiest path through academic life. Among many rivalries, he enjoyed lively clashes with fellow Nobel laureate Francis Crick who, like Edelman himself, had turned his attention to the brain after resolving a central problem in a different area of biology. Crick once infamously referred to neural Darwinism as ‘neural Edelmanism’ [8], a criticism which nowadays seems less forceful as attention within the neurosciences increasingly focuses on neuronal population dynamics (just before his death in 2004, Crick met with Edelman and they put aside any remaining feelings of enmity). In 2003 both men published influential papers setting out their respective ideas on consciousness [9, 10]; these papers put the neuroscience of consciousness at last, and for good, back on the agenda.

The biological basis of consciousness had been central to Edelman’s scientific agenda from the late 1980s. Consciousness had long been considered beyond the reach of science; Edelman was at the forefront of its rehabilitation as a serious subject within biology. His approach was from the outset more subtle and sophisticated than those of his contemporaries. Rather than simply looking for ‘neural correlates of consciousness’ – brain areas or types of activity that happen to co-exist with conscious states – Edelman wanted to naturalize phenomenology itself. That is, he tried to establish formal mappings between phenomenological properties of conscious experience and homologous properties of neural dynamics. In short, this meant coming up with explanations rather than mere correlations, the idea being that such an approach would demystify the dualistic schism between ‘mind’ and ‘matter’ first invoked by Descartes. This approach was first outlined in his book The Remembered Present [5] and later amplified in A Universe of Consciousness, a work co-authored with Giulio Tononi [11]. It was this approach to consciousness that first drew me to the NSI and to Edelman, and I was not disappointed. These ideas, and the work they enabled, will continue to shape and define consciousness science for years to come.

My own memories of Edelman revolve entirely around life at the NSI. It was immediately obvious that he was not a distant boss who might leave his minions to get on with their research in isolation. He was generous with his time. I saw him almost every working day, and many discussions lasted long beyond their allotted duration. His dedication to detail sometimes took the breath away. On one occasion, while working on a paper together [12], I had fallen into the habit of giving him a hard copy of my latest effort each Friday evening. One Monday morning I noticed the appearance of a thick sheaf of papers on my desk. Over the weekend Edelman had cut and pasted – with scissors and glue, not Microsoft Word – paragraphs, sentences, and individual words, to almost entirely rewrite my tentative text. Needless to say, it was much improved.

The abiding memory of anyone who has spent time with Dr. Edelman is, however, neither the scientific accomplishments nor the achievements encompassed by the NSI, but the impression of an uncommon intellect moving more quickly and ranging more widely than seemed possible. The New York Times put it this way in a 2004 profile:

“Out of free-floating riffs, vaudevillian jokes, recollections, citations and patient explanations, out of the excited explosions of example and counterexample, associations develop, mental terrain is reordered, and ever grander patterns emerge.”

Dr. Edelman will long be remembered for his remarkably diverse scientific contributions, his strength of character, erudition, integrity, and humour, and for the warmth and dedication he showed to those fortunate enough to share his vision. He is survived by his wife, Maxine, and three children: David, Eric, and Judith.

Anil Seth
Professor of Cognitive and Computational Neuroscience
Co-Director, Sackler Centre for Consciousness Science
University of Sussex

This article has been republished in Frontiers in Consciousness Research, doi: 10.3389/fpsyg.2014.00896

References

1 Edelman, G.M., Benacerraf, B., Ovary, Z., and Poulik, M.D. (1961) Structural differences among antibodies of different specificities. Proc Natl Acad Sci U S A 47, 1751-1758
2 Edelman, G.M. (1983) Cell adhesion molecules. Science 219, 450-457
3 Edelman, G.M. and Gally, J. (2001) Degeneracy and complexity in biological systems. Proc. Natl. Acad. Sci. USA 98, 13763-13768
4 Edelman, G.M. (1993) Neural Darwinism: selection and reentrant signaling in higher brain function. Neuron 10, 115-125
5 Edelman, G.M. (1989) The remembered present. Basic Books
6 Edelman, G.M. (1987) Neural Darwinism: The Theory of Neuronal Group Selection. Basic Books, Inc.
7 Edelman, G.M. (1978) Group selection and phasic re-entrant signalling: a theory of higher brain function. In The Mindful Brain (Edelman, G.M. and Mountcastle, V.B., eds), MIT Press
8 Crick, F. (1989) Neural edelmanism. Trends Neurosci 12, 240-248
9 Edelman, G.M. (2003) Naturalizing consciousness: a theoretical framework. Proc Natl Acad Sci U S A 100, 5520-5524
10 Crick, F. and Koch, C. (2003) A framework for consciousness. Nature Neuroscience 6, 119-126
11 Edelman, G.M. and Tononi, G. (2000) A universe of consciousness : how matter becomes imagination. Basic Books
12 Seth, A.K., Izhikevich, E.M., Reeke, G.N., and Edelman, G.M. (2006) Theories and measures of consciousness: An extended framework. Proc Natl Acad Sci U S A 103, 10799-10804

 

The amoral molecule

Image

The cuddle drug, the trust hormone, the moral molecule: oxytocin (OXT) has been called all these things and more. You can buy nasal sprays of the stuff online on the promise that some judicious squirting will make people trust you more. In a recent book, neuroscientist-cum-economist Paul Zak goes the whole hog, saying that if we only let ourselves be guided by this “moral molecule”, prosperity and social harmony will certainly ensue.

Behind this outlandish and rather ridiculous claim lies some fascinating science. The story starts with the discovery that injecting female virgin rats with OXT triggers maternal instincts, and that these same instincts in mother rats are suppressed when OXT is blocked.  Then came the finding of different levels of OXT receptors in two closely related species of vole. The male prairie vole, having high levels, is monogamous and helps look after its little vole-lets.  Male meadow voles, with many fewer receptors, are aggressive loners who move from one female to the next without regard for their offspring. What’s more, genetically manipulating meadow voles to express OXT receptors turns them into monogamous prairie-vole-a-likes. These early rodent studies showed that OXT plays an important and previously unsuspected role in social behaviour.

Studies of oxytocin and social cognition really took off about ten years ago when Paul Zak, Ernst Fehr, and colleagues began manipulating OXT levels in human volunteers while they played a variety of economic and ‘moral’ games in the laboratory.  These studies showed that OXT, usually administered by a few intranasal puffs, could make people more trusting, generous, cooperative, and empathetic.

For example, in the so-called ‘ultimatum game’ one player (the proposer) is given £10 and offers a proportion of it to a second player (the responder), who has to decide whether or not to accept. If the responder accepts, both players get their share; if not, neither gets anything. Since these are one-off encounters, rational analysis says that the responder should accept any non-zero proposal, since something is better than nothing. In practice what happens is that offers below about £3 are often rejected, presumably because the desire to punish ‘unfair’ offers outweighs the allure of a small reward. Strikingly, a few whiffs of OXT make proposers more generous, by almost 50% in some cases. And the same thing happens in other similar situations, like the ‘trust game’: OXT seems to make people more cooperative and pro-social.
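The payoff structure described above can be sketched in a few lines of Python. This is only an illustrative toy, not a model from the studies: the £3 rejection threshold stands in for the typical rejection point mentioned in the text, and real responders of course vary.

```python
# Toy sketch of one round of the one-shot ultimatum game.
# The rejection threshold of 3 (pounds) is an illustrative assumption.

POT = 10  # the proposer starts with £10

def play_round(offer, rejection_threshold=3):
    """Return (proposer_payout, responder_payout) for a single offer."""
    if offer >= rejection_threshold:   # responder accepts the split
        return POT - offer, offer
    return 0, 0                        # rejected: neither player gets anything

print(play_round(5))  # a 'fair' split is accepted: (5, 5)
print(play_round(2))  # a stingy offer is punished at a cost to both: (0, 0)
```

The second call shows why the behaviour is a puzzle for simple rational-choice analysis: the responder forgoes £2 purely to punish an ‘unfair’ proposer.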

Even more exciting are recent findings that OXT can help reduce negative experiences and promote social interactions in conditions like autism and schizophrenia. In part this could be due to OXT’s general ability to reduce anxiety, but there’s likely more to the story than this. It could also be that OXT enhances the ability to ‘read’ emotional expressions, perhaps by increasing their salience. Although clinical trials have so far been inconclusive, there is at least some hope for new OXT-based pharmacological treatments (though not cures) for these sometimes devastating conditions.

These discoveries are eye-opening and apparently very hopeful. What’s not to like?

Image

The main thing not to like is the idea that there could be such a simple relationship between socially-conditioned phenomena like trust and morality, and the machinations of a single molecule. The evolutionary biologist Leslie Orgel said it well with his ‘third rule’: “Biology is more complicated than you imagine, even when you take Orgel’s third rule into account”. Sure enough, the emerging scientific story says things are far from simple.

Carsten de Dreu of the University of Amsterdam has published a series of important studies showing that whether oxytocin has a prosocial effect, or an antisocial effect, seems to depend critically on who is interacting with whom. In one study, OXT was found to increase generosity within a participant’s ingroup (i.e., among participants judged as similar) but to actually decrease it in interactions with outgroup members. Another study produced even more dramatic results: here, OXT infusion led volunteers to adopt more derogatory attitudes towards outgroup members, even when ingroup and outgroup compositions were determined arbitrarily. OXT can even increase social conformity, as shown in a recent study in which volunteers were divided into two groups and had to judge the attractiveness of arbitrary shapes.

All this should make us look with deep suspicion on claims that OXT is any kind of ‘moral molecule’. So where do we go from here? A crucial next step is to try to understand how the complex interplay between OXT and behaviour is mediated by the brain. Work in this area has already begun: the research on autism, for example, has shown that OXT infusion leads autistic brains to better differentiate between emotional and non-emotional stimuli. This work complements emerging social neuroscience studies showing how social stereotypes can affect even very basic perceptual processes. In one example, current studies in our lab are indicating that outgroup faces (e.g., Moroccans for Caucasian Dutch subjects) are literally harder to see than ingroup faces.

Neuroscience has come in for a lot of recent criticism for reductionist ‘explanations’ in which complex cognitive phenomena are identified with activity in this-or-that brain region.  Following this pattern, talk of ‘moral molecules’ is, like crime in multi-storey car-parks, wrong on so many levels. There are no moral molecules, only moral people (and maybe moral societies).  But let’s not allow this kind of over-reaching to blind us to the progress being made when sufficient attention is paid to the complex hierarchical interactions linking molecules to minds.  Neuroscience is wonderfully exciting and has enormous potential for human betterment.  It’s just not the whole story.

This piece is based on a talk given at Brighton’s Catalyst Club as part of the 2014 Brighton Science Festival.