Evidence for a higher state of consciousness? Sort of.


Bicycle Day celebration blotter. Image by YttriumOx, CC BY-SA 3.0.

On April 19, 1943, seventy-four years ago to the day, Albert Hofmann conducted his now famous self-experiment on the psychological effects of LSD, a compound he had been the first to synthesize some years earlier. The anniversary is now called ‘Bicycle Day’ in honour of how Hofmann made his way home, and his experiment led to some remarkable descriptions:

“… Little by little I could begin to enjoy the unprecedented colors and plays of shapes that persisted behind my closed eyes. Kaleidoscopic, fantastic images surged in on me, alternating, variegated, opening and then closing themselves in circles and spirals, exploding in colored fountains, rearranging and hybridizing themselves in constant flux …”

In the decades that followed, academic research into LSD and other psychedelics was cast into the wilderness as worries about their recreational use held sway. Recently, however, the tide has started to turn. There is now gathering momentum for studies showing a remarkable clinical potential for psychedelics in treating recalcitrant psychiatric disorders, as well as experiments trying to understand how psychedelics exert their distinctive effects on conscious experience.

In a new paper published in Scientific Reports on this Bicycle Day anniversary, we describe a distinctive neuronal signature of the psychedelic state: a global increase in neuronal signal diversity. So – is this evidence for a ‘higher state’ of consciousness? And could it account for the nature of psychedelic experience? Let me answer these questions by summarizing what we did.

Our study analyzed data previously collected by Dr. Robin Carhart-Harris (Imperial College London) and Dr. Suresh Muthukumaraswamy (then at Cardiff, now at Auckland). These were magnetoencephalographic (MEG) brain-imaging data from healthy volunteers either in a normal waking state, or after having taken LSD, psilocybin (the active ingredient in magic mushrooms) or ketamine (which in low doses acts as a psychedelic; in high doses it has an anaesthetic effect). MEG data combine very high temporal resolution with much better spatial resolution than EEG (electroencephalography), allowing us to compute some relatively sophisticated mathematical measures of signal diversity. The participants in our study had passed strict ethical criteria, and were asked simply to rest quietly in the scanner during the experiment. Afterwards, they were asked various questions about what they had experienced.

With Carhart-Harris and Muthukumaraswamy, and with Dr. Adam Barrett and first-author Michael Schartner of the Sackler Centre for Consciousness Science here at Sussex, we chopped up the MEG data into small segments and for each segment calculated a range of different mathematical measures. The most interesting is called ‘Lempel-Ziv (LZ) complexity’, which measures the diversity of the data by working out how ‘compressible’ it is. A completely random data sequence is maximally diverse, since it cannot be compressed at all; a completely uniform data sequence is minimally diverse, since it compresses very easily. Indeed, variants of the Lempel-Ziv algorithm underlie many everyday lossless compression formats, such as ZIP archives and PNG images.
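To make the idea of diversity-as-incompressibility concrete, here is a minimal sketch in Python. It uses a simple dictionary-based Lempel-Ziv parse (in the style of LZ78) on a signal binarized at its mean, rather than the exact procedure used in the paper; the function names and the synthetic signals are illustrative assumptions. The intuition carries over: a diverse, noisy signal parses into many distinct phrases, while a regular signal parses into few.

```python
import random

def binarize(signal):
    """Threshold a real-valued signal at its mean, giving a 0/1 string."""
    mean = sum(signal) / len(signal)
    return ''.join('1' if x > mean else '0' for x in signal)

def lz_phrase_count(bits):
    """Count distinct phrases in a simple left-to-right Lempel-Ziv parse.

    Scanning the string, we grow the current phrase until it is one we have
    not seen before, record it, and start again. Diverse (hard-to-compress)
    strings yield many phrases; repetitive strings yield few.
    """
    phrases = set()
    current = ''
    for bit in bits:
        current += bit
        if current not in phrases:
            phrases.add(current)
            current = ''
    return len(phrases)

random.seed(1)
noisy = [random.gauss(0, 1) for _ in range(2000)]  # stand-in for a diverse signal
steady = [(i // 100) % 2 for i in range(2000)]     # slow, highly regular square wave
```

Running `lz_phrase_count(binarize(noisy))` gives a far higher count than the same measure on `steady`, which is the qualitative pattern the paper reports for psychedelic versus ordinary waking MEG signals (the real analysis normalizes the counts and uses multichannel data).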


Changes in LZ complexity under LSD, as compared to the waking state. Data are source-localized MEG. Image from Suresh Muthukumaraswamy; it appears as Figure 3 in the paper.

We found that MEG signals had a reliably higher level of LZ complexity – and hence signal diversity – for all three psychedelic compounds, with perhaps the strongest effects for LSD. The fact that we found the same pattern of results across all three compounds is both striking and reassuring: it means our results are unlikely to have arisen by chance.

Intuitively, these findings mean that the brain-on-psychedelics is less predictable, more random – and more diverse than in the normal waking state.

Our data can be thought of as evidence for a ‘higher’ state of consciousness only in this very specific way, and only in the context provided by other studies where a loss of consciousness has been associated with a reduction of neuronal diversity. For example, studies in our lab have shown reduced LZ complexity (reduced diversity) for both anaesthesia and for (non-dreaming) sleep. (Interestingly, levels of LZ returned to ‘normal’ during REM sleep when dreams are likely.) What’s striking about our results, in this context, is that increases in quantitative measures of conscious level, compared to the waking state, have never been found before.

Interpreting our data in terms of conscious level also makes sense since measures of signal diversity, like LZ, can be thought of as approximations to related quantities like the ‘perturbation complexity index’ (PCI). This measure captures the diversity of the brain’s response to an electromagnetic stimulus: think banging on the brain (using transcranial magnetic stimulation, which applies a sharp electromagnetic ‘bang’) and listening to the echo. Studies using PCI, pioneered by Prof. Marcello Massimini at the University of Milan, have found a remarkable sensitivity to changes in conscious level, and even an ability to predict residual consciousness in devastating neurological conditions like coma and the vegetative state. The differences between LZ and PCI are subtle, having mainly to do with whether they measure simple diversity or a mixture of diversity and ‘integration’ in brain dynamics.

More generally, measures of diversity are related to influential theories which associate consciousness with ‘integrated information’ or ‘causal density’ in the dynamics of the brain. While these theories specify even more complicated mathematical measures of conscious level, the fact that we see measurable increases in diversity so reliably across conscious states gives some support to these theories. Our results are also consistent with Robin Carhart-Harris’ ‘entropic brain’ theory, which proposes that the psychedelic state is associated with greater entropy, or uncertainty, in neural dynamics.

In this broader theoretical context, what’s interesting about our results is that they show that a measure of conscious level – previously applied to sleep and anesthesia – is also sensitive to differences in conscious content, as in the contrast between the psychedelic state and normal wakefulness. This helps shed some new light on an old debate in the science of consciousness – the relationship between conscious level (how conscious you are) and conscious content (what you’re conscious of, when you’re conscious).

Taking this research forward, we plan to understand more about how specific properties of neural dynamics relate to specific properties of psychedelic experiences. In the present study, we found some tentative correlations between changes in signal diversity and the degree to which people reported experiences like ‘ego dissolution’ and ‘vividness of imagination’. However, these correlations were not strong. One possible reason is that the subjective reports were taken outside the scanner, likely some time after the peak effect of the drug. Another possibility – which we are currently looking into – is that more fine-grained measures of information flow in the brain, like Granger causality, might be needed in order to closely map properties of psychedelic experience to changes in the brain.

Overall, our study adds to a growing body of work – much of which has been led by Carhart-Harris and colleagues –  that is now revealing the brain-basis of the psychedelic state. Our data show that a simple measure of neuronal signal diversity places the psychedelic state ‘above’ the normal waking state, in comparison to the lower diversity found in sleep and anesthesia. Taking this work forward stands to do much more than enhance our understanding of psychedelics. It may help expose how, why – and for whom – psychedelics may help alleviate the appalling suffering of psychiatric disorders like depression. And in the end, it may help us figure out how our normal everyday conscious experiences of the world, and the self, come to be.

After all, everything we experience – even when stone cold sober – is just a kind of ‘controlled hallucination.’ Our perceptions are just the brain’s “best guess” of what’s going on, reined in by sensory signals. It’s just that most of the time we agree with each other about our hallucinations, and call them reality.


‘Increased spontaneous MEG signal diversity for psychoactive doses of ketamine, LSD and psilocybin’ by Michael Schartner, Robin Carhart-Harris, Adam Barrett, Anil Seth and Suresh Muthukumaraswamy is published in Scientific Reports (7): 46421, 2017. It is freely available here as an open-access publication. I am the corresponding author.

The study has been extensively covered in the media. Particularly good pieces are in The Guardian, the New Scientist, and Wired. There is also a highly active Reddit thread, which on the day of publication was consistently on the Reddit homepage.

I would like to specifically acknowledge Michael Schartner and Adam Barrett in this post. Michael’s Ph.D. – awarded just a few months ago – was all about measuring signal diversity in various different conscious states (sleep, anaesthesia, psychedelia). Michael was primarily supervised by Dr. Barrett, who devoted his considerable mathematical expertise to the project. Very many thanks are also due to Robin Carhart-Harris and Suresh Muthukumaraswamy for generously engaging with this collaboration.

Carhart-Harris, Muthukumaraswamy and colleagues have published a number of other important studies on the neural basis of the psychedelic state.  See here and here – or just look on PubMed.

The real problem

What is the best way to understand consciousness? In philosophy, centuries-old debates continue to rage over whether the Universe is divided, following René Descartes, into ‘mind stuff’ and ‘matter stuff’. But the rise of modern neuroscience has seen a more pragmatic approach gain ground: an approach that is guided by philosophy but doesn’t rely on philosophical research to provide the answers. Its key is to recognise that explaining why consciousness exists at all is not necessary in order to make progress in revealing its material basis – to start building explanatory bridges from the subjective and phenomenal to the objective and measurable.

This is the start of an essay I recently wrote for the website aeon.co, which publishes an essay a day, focusing on ideas and culture.  The basic idea is to chart a pragmatic path for the scientific study of consciousness, respecting but not directly targeting the deep metaphysical mysteries so eloquently exposed by Chalmers’ famous distinction between the ‘easy’ and ‘hard’ problems.  Much of what I say has been said before (e.g., in the tradition of neurophenomenology) but I hope to bring things together in a new way and with a distinctive empirical angle.  Anyway, best make up your own mind – I’d be keen to hear what you think!

At the edges of awareness

Imagine this. Following a brain injury you lie in a hospital bed and from the outside you appear to be totally unconscious. You don’t respond to anything the doctors or your family say, you make no voluntary movements, and although you still go to sleep and wake up there seems to be nobody at home. But your ‘inner universe’ of conscious awareness still remains, perhaps flickering and inconsistent, but definitely there. How could anyone else ever know, and how could you ever communicate with your loved ones again?

Two new radio dramas, The Sky is Wider and Real Worlds, engage with these critical questions by drawing on the cutting edge of neurology and neuroscience. Recent advances have enabled researchers not only to diagnose ‘residual’ awareness following severe brain injuries, but also to open new channels of communication with behaviourally unresponsive patients. The key medical challenge is to distinguish the so-called ‘vegetative state’, in which there truly is no conscious awareness, from ‘minimally conscious’ or ‘locked-in’ conditions where some degree of consciousness persists (even normal consciousness, in the locked-in state), even though there are no outward signs.


Brain activity during mental imagery, in a behaviourally unresponsive patient and in a control subject. Source: MRC via The Guardian.

Linda Marshall Griffiths’ drama The Sky is Wider takes inspiration from an ‘active approach’ in which the neurologist asks questions of the patient and monitors their brain activity for signs of response. In a classic study from about ten years ago, Adrian Owen and his team asked behaviourally unresponsive patients to imagine either walking around their house or playing tennis, while their brains were scanned using functional MRI (which measures regional metabolic activity in the brain). These tasks were chosen because imagining these different behaviours activates different parts of the brain, so if we see these selective activations in a patient, we know that they have understood and are voluntarily following the instructions. If they can do this, they must be conscious. It turns out that between 10% and 20% of patients behaviourally diagnosed as being in the vegetative state can pass this test. Equally important, the same method can be used to establish simple communication by (for example) asking a patient to imagine playing tennis to answer ‘yes’ and walking around a house to answer ‘no’.
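The logic of this active approach can be caricatured in a few lines of code. The sketch below is entirely synthetic and hypothetical: it correlates a new ‘scan’ with two template activation patterns (tennis imagery versus spatial navigation) and returns whichever answer the closer match encodes. Real studies use fMRI statistics with far more care; this only illustrates why two distinct, reliable activation patterns are enough to support yes/no communication.

```python
import numpy as np

rng = np.random.default_rng(42)
n_voxels = 500  # toy 'brain' of 500 voxels (synthetic stand-in for fMRI data)

# Template activation patterns for the two imagery tasks (synthetic).
tennis_template = rng.normal(size=n_voxels)      # motor-imagery pattern -> 'yes'
navigation_template = rng.normal(size=n_voxels)  # spatial-imagery pattern -> 'no'

def decode_answer(scan):
    """Return 'yes' or 'no' depending on which template the scan matches best."""
    r_tennis = np.corrcoef(scan, tennis_template)[0, 1]
    r_navigation = np.corrcoef(scan, navigation_template)[0, 1]
    return 'yes' if r_tennis > r_navigation else 'no'

# A noisy scan from someone imagining tennis should decode as 'yes'.
scan = tennis_template + 0.5 * rng.normal(size=n_voxels)
```

Because the two imagery tasks engage well-separated brain regions, even a noisy single scan correlates far more strongly with the matching template than with the other one, which is what makes the method clinically usable.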

These developments represent a revolution in clinical neurology. Current research is increasing the efficiency of active approaches by using the more portable electroencephalography (EEG) instead of bulky and expensive MRI. ‘Passive’ techniques in which residual consciousness can be inferred without requiring patients to perform any task are also rapidly improving. These methods are important because active approaches may underestimate the incidence of residual awareness since not all conscious patients may understand or be able to follow verbal instructions.

Alongside these scientific developments we encounter pressing ethical questions. How should we treat patients in these liminal states of awareness? And given a means of communication, what kinds of questions should we ask? The Sky is Wider explores these challenging ethical issues in a compelling narrative which gives dramatic voice to the mysterious conditions of the vegetative and minimally conscious states.


 

In Real Worlds, Jane Rogers takes us several years into the future. Communication with behaviourally unresponsive patients is now far advanced and is based on amazing developments in ‘virtual reality’. The clinical context for this drama is the ‘locked-in syndrome’ where a patient may have more-or-less normal conscious experiences but completely lack the ability to move. In Real Worlds, a locked-in patient transcends these limitations by controlling a virtual reality avatar directly using brain signals. These avatars inhabit virtual worlds in which the avatars of different people can interact, while the ‘real’ person behind each may remain hidden and unknown.

This drama deliberately inhabits the realm of science fiction, but there is solid science behind it too. The development of so-called ‘brain-computer interfaces’ (BCIs) is moving fast. These interfaces combine brain-imaging methods (like EEG or fMRI, or sometimes more ‘invasive’ methods in which electrodes are inserted directly into the brain) with advanced machine-learning methods to perform a kind of ‘brain-reading’. The idea is to infer, from brain activity alone, intended movements, perceptions, and perhaps even thoughts. These decoded ‘thoughts’ can then be used to control robotic devices, or virtual avatars. In some cases, a person’s own body might be controlled via direct stimulation of muscles. Progress in this area has been remarkably rapid. In a landmark but rather showy example, the Brazilian neuroscientist Miguel Nicolelis used a BCI to allow a paralysed person to ‘kick’ the first ball of the 2014 football World Cup, through brain-control of a robotic exoskeleton. More recently, brain-reading methods have allowed a paralysed man to play Guitar Hero for the first time since his injury.

The other technology highlighted in Real Worlds is virtual reality (VR), which – thanks to its enormous consumer potential – is developing even more rapidly. All the major technology and AI companies are getting in on the act, and VR headsets are finally becoming cheap enough, comfortable enough, and powerful enough to define a new technological landscape. Here at the Sackler Centre for Consciousness Science at the University of Sussex, we are exploring how VR can help shed light on our normal conscious experience. In one example, we use a method called ‘augmented reality’ (AR) to project a ‘virtual’ body into the real world as seen through a camera mounted on the front of a VR headset. This experiment revealed how our perception of what is (and what is not) our own body can be easily manipulated, indicating that our experience of ‘body ownership’, which is so easy to take for granted, is in fact continuously and actively generated by the brain. In a second example, we developed a method called ‘substitutional reality’ in which a VR headset is coupled with panoramic video and audio taken from a real environment, manipulated in various ways. The resulting experiences are much more immersive than current computer-generated virtual environments and in some cases people cannot distinguish them from actually ‘real’ environments.


A ‘virtual reality’ hand, part of a Sackler Centre study exploring the mechanisms underlying experiences of body ownership. VR programming by Dr. Keisuke Suzuki.

Just as in the first drama, ethical questions risk outpacing the science and technology. As VR becomes increasingly immersive and pervasive, its potential to impact our real lives grows ever more powerful. While benefits are easy to imagine – for instance in bringing distant relatives together or enabling remote experiences of inaccessible places – there are also legitimate concerns. High on the list is what happens if people become increasingly unable to distinguish the real world from the virtual, whether in the moment or (more plausibly) in their memories. And what if people progressively withdraw from ‘reality’ as the available virtual worlds become more appealing places to be? Of course, simple dichotomies are unhelpful, since VR technologies are part of our real worlds, just like mobile phones and laptop computers. Jane Rogers’ Real Worlds explores these complex ethical issues by imagining VR as a future treatment – perhaps ‘prosthesis’ would be a better word – for disorders of consciousness like those encountered in The Sky is Wider.

Together, these dramas explore the human and societal consequences of existing and near-future clinical technologies. With artistic license they ask important questions that scientists and clinicians are not yet equipped to address. Ultimately, I think they convey an optimistic message, that we can understand and treat – if not cure – severely debilitating conditions that may otherwise have remained undiagnosed let alone treated. But they also lead us to consider, not just what we could do, but what we should do.


The Sky is Wider (written by Linda Marshall Griffiths) and Real Worlds (written by Jane Rogers) were produced by Nadia Molinari for BBC Radio 4. I acted as the scientific consultant. The original ideas were formulated during a 2014 Wellcome Trust ‘Experimental Stories’ workshop in a conversation between myself, Nadia, and Linda.

The science of selfhood

Zoë Wanamaker as Lorna in Nick Payne’s Elegy. Photo by Johan Persson.

“The brain is wider than the sky,
For, put them side by side,
The one the other would contain,
With ease, and you besides”

Emily Dickinson, Complete Poems, 1924

What does it mean to be a self? And what happens to the social fabric of life, to our ethics and morality, when the nature of selfhood is called into question?

In neuroscience and psychology, the experience of ‘being a self’ has long been a central concern. One of the most important lessons, from decades of research, is that there is no single thing that is the self. Rather, the self is better thought of as an integrated network of processes that distinguish self from non-self at many different levels. There is the bodily self – the experience of identifying with and owning a particular body, which at a more fundamental level involves the amorphous experience of being a self-sustaining organism. There is the perspectival self, the experience of perceiving the world from a particular first-person point of view. The volitional self involves experiences of intention and agency, of urges to do this-or-that (or, perhaps more importantly, to refrain from doing this-or-that) and of being the cause of things that happen.

At higher levels we encounter narrative and social selves. The narrative self is where the ‘I’ comes in, as the experience of being a continuous and distinctive person over time. This narrative self – the story we tell ourselves about who we are – is built from a rich set of autobiographical memories that are associated with a particular subject. Finally, the social self is that aspect of my self-experience and personal identity that depends on my social milieu, on how others perceive and behave towards me, and on how I perceive myself through their eyes and minds.

In daily life, it can be hard to differentiate these dimensions of selfhood. We move through the world as seemingly unified wholes, our experience of bodily self seamlessly integrated with our memories from the past, and with our experiences of volition and agency. But introspection can be a poor guide. Many experiments and neuropsychological case studies tell a rather different story, one in which the brain actively and continuously generates and coordinates these diverse aspects of self-experience.

The many ways of being a self can come apart in surprising and revealing situations. For example, it is remarkably easy to alter the experience of bodily selfhood. In the so-called ‘rubber hand illusion,’ I ask you to focus your attention on a fake hand while your real hand is kept out of sight. If I then simultaneously stroke your real hand and the fake hand with a soft paintbrush, you may develop the uncanny feeling that the fake hand is now, somehow, part of your body. A more dramatic disturbance of the experience of body ownership happens in somatoparaphrenia, a condition in which people experience that part of their body is no longer theirs, that it belongs to someone else – perhaps their doctor or family member. Both these examples involve changes in brain activity, in particular within the ‘temporo-parietal junction’, showing how even very basic aspects of personal identity are actively constructed by the brain.

Moving through levels of selfhood, autoscopic hallucinations involve seeing oneself from a different perspective, much like ‘out of body’ experiences. In akinetic mutism, people seem to lack any experiences of volition or intention (and do very little), while in schizophrenia or anarchic hand syndrome, people can experience their intentions or voluntary actions as having external causes. At the other end of the spectrum, disturbances of the social self emerge in autism, where difficulties in perceiving others’ states of mind seem to be a core problem, though the exact nature of the autistic condition is still much debated.

When it comes to the ‘I’, memory is the key. Specifically, autobiographical memory: the recollection of personal experiences of people, objects, places and other episodes from an individual’s life. While there are as many types of memory as there are varieties of self (for example, we have separate memory processes for facts, for the short term and the long term, and for skills that we learn), autobiographical memories are those most closely associated with our sense of personal identity. This is well illustrated by some classic medical cases in which, as a result of surgery or disease, the ability to lay down new memories is lost. In 1953 Henry Molaison (also known as the patient HM) had large parts of his medial temporal lobes removed in order to relieve severe epilepsy. From 1957 until his death in 2008, HM was studied closely by the neuropsychologist Brenda Milner, yet he was never able to remember meeting her. In 1985 the accomplished musician Clive Wearing suffered a severe viral brain disease that affected similar parts of his brain. Now 77, he frequently believes he has just awoken from a coma, spending each day in a constant state of re-awakening.

Surprisingly, both HM and Wearing remained able to learn new skills, forming new ‘procedural’ memories, despite never recalling the learning process itself. Wearing could still play the piano, and conduct his choir, though he would immediately forget having done so. The music appears to carry him along from moment to moment, restoring his sense of self in a way his memory no longer can. And his love for his wife Deborah seems undiminished, so that he expresses an enormous sense of joy on seeing her, even though he cannot tell whether their last meeting was years, or seconds, in the past. Love, it seems, persists when much else is gone.

For people like HM and Clive Wearing, memory loss has been unintended and unwanted. But as scientific understanding develops, could we be moving towards a world where specific memories and elements of our identity can be isolated or removed through medical intervention? And could the ability to lay down new memories ever be surgically restored? Some recent breakthroughs suggest these developments may not be all that far-fetched.

In 2013, Jason Chan and Jessica LaPaglia, from Iowa State University, showed that specific human memories could indeed be deleted. They took advantage of the fact that when memories are explicitly recalled they become more vulnerable. By changing details about a memory while it was being remembered, they induced a selective amnesia which lasted for at least 24 hours. Although an important advance, this experiment was limited by its reliance on ‘non-invasive’ methods – which means not using drugs or directly interfering with the brain.

More recent animal experiments have shown even more striking effects. In a ground-breaking 2014 study at the University of California, using genetically engineered mice, Sadegh Nabavi and colleagues managed to block and then re-activate a specific memory. They used a powerful (invasive) technique called optogenetics to activate (or inactivate) the biochemical processes determining how neurons change their connectivity. And elsewhere in California, Ted Berger is working on the first prototypes of so-called ‘hippocampal prostheses’ which replace a part of the brain essential for memory with a computer chip. Although these advances are still a long way from implementation in humans, they show an extraordinary potential for future medical interventions.

The German philosopher Thomas Metzinger believes that “no such things as selves exist in the world”. Modern neuroscience may be on his side, with memory being only one thread in the rich tapestry of processes shaping our sense of selfhood. At the same time, the world outside the laboratory is still full of people who experience themselves – and each other – as distinct, integrated wholes. How the new science of selfhood will change this everyday lived experience, and society with it, is a story that is yet to be told.

Originally commissioned for the Donmar Warehouse production of Elegy, with support from The Wellcome Trust.  Reprinted in the programme notes and in Nick Payne’s published script.

Tracing the edges of consciousness


As a scientist, consciousness has always fascinated me. But understanding consciousness is not a project for science alone. Throughout history, philosophers, artists, storytellers, and musicians have all wondered about the apparent miracle of conscious awareness. Even today, while science might give us our best shot at figuring out the brain – the organ of experience – we need, more than ever, a melding of the arts and sciences, of contemporary and historical approaches, to understand what consciousness really is, to grasp what we mean by, as Mark Haddon eloquently puts it, “Life in the first person.”

This quote comes from Haddon’s beautiful introductory essay to a major new exhibition at the Wellcome Collection in London. Curated by Emily Sargent, States of Mind: Tracing the edges of consciousness “examines perspectives from artists, psychologists, philosophers and neuroscientists to interrogate our understanding of the conscious experience”. It’s a fantastic exhibition, with style and substance, and I feel very fortunate to have been involved as an advisor from its early stages.

What’s so special about consciousness?

Consciousness is at once the most familiar and the most mysterious aspect of our existence. Conscious experiences define our lives, but their private, subjective, what-it-is-like character seems to resist scientific enquiry. Somehow, within each of our brains, the combined activity of many billions of neurons, each one a tiny biological machine, gives rise to a conscious experience. Your conscious experience: right here, right now, reading these words. How does this happen? Why is life in the first person?

In one sense, this seems like the kind of mystery ripe for explanation. Borrowing again from Mark Haddon, the raw material of consciousness is not squirreled away deep inside an atom, it’s not happening 14 billion years ago, and it’s not hiding out on the other side of the universe. It’s right here in front of – or rather behind – our eyes. That said, the brain is a remarkably complex object. It’s not so much the sheer number of neurons (though there are about 90 billion). It’s the complexity of its wiring: there are so many connections that if you counted one every second it would take you about three million years to finish. Is it not possible that an object of such extraordinary complexity should be capable of extraordinary things?

People have been thinking about consciousness since they’ve been thinking at all. Hippocrates, the founder of modern medicine, said: “Men ought to know that from the brain, and from the brain only, arise our pleasures, joys, laughter and jests, as well as our sorrows, pains, griefs and tears … Madness comes from its moistness.” (Aristotle, by the way, got it wrong, thinking the brain hadn’t much to do with consciousness at all.)

Fast forward to Francis Crick, whose ‘astonishing hypothesis’ in the 1990s deliberately echoed Hippocrates: “You, your joys and your sorrows, your memories and your ambitions … and so on … are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules”. Crick, who I was lucky enough to meet during my time in America, was working on the neurobiology of consciousness even on the day he died. You will see some of his personal notes, and his perplexing plasticine brain models, in States of Mind.


Descartes: view of posterior of brain, from De Homine. Wellcome Collection.

A major landmark in thinking about consciousness is of course Descartes, who in the 17th century distinguished between ‘mind stuff’ (res cogitans) and ‘matter stuff’ (res extensa), so giving rise to the now infamous mind-body problem and the philosophy of dualism. It’s a great thrill to see an original copy of Descartes’ De Homine as part of this exhibition. The problem’s modern incarnation, as David Chalmers’ so-called ‘hard problem’, has recently gained enough cultural notoriety to inspire a Tom Stoppard play (though for my money Alex Garland’s screenplay for Ex Machina is the more perspicuous). The idea of the hard problem is this: even if we knew everything about how the operations of the brain give rise to perception, cognition, learning, and behaviour, a problem would still remain: why and how should any of this be associated with consciousness at all? Why is life in the first person?

Defining consciousness

How to define consciousness? One simple definition is that for a conscious organism there is something it is like to be that organism. Or, one can simply say that consciousness is what disappears when we fall into a dreamless sleep, and what returns when we wake up or start dreaming. A bit more formally, for conscious organisms there exists a continuous (though interruptible) stream of conscious scenes – a phenomenal world – which has the character of being subjective and private. The material in States of Mind can help us encounter these ideas with a bit more clarity and force, by focusing on the edges – the liminal boundaries – of consciousness.

First there is conscious level: the difference between being awake and, let’s say, under general anaesthesia. Here, neuroscience now tells us that there is no single ‘generator’ of consciousness in the brain; rather, being conscious depends on highly specific ways in which different parts of the brain speak to each other. Aya Ben Ron’s film of patients slipping away under anaesthesia is a beautiful exploration of this process, as is the whole section on ‘SLEEP | AWAKE’.

Then there is conscious content: what we are conscious of, when we are conscious. These are the perceptions, thoughts, and emotions that populate our ever-flowing stream of awareness. Here, current research is revealing that our perceptual world is not simply an internal picture of some external reality. Rather, conscious perception depends on the brain’s best guesses, or hypotheses, about the causes of sensory data. Perception is therefore a continuously creative act that is tightly bound up with imagination, so that our experience of the world is a kind of ‘controlled hallucination’, a fantasy that – usually, but not always – coincides with reality. The material on synaesthesia in States of Mind beautifully illuminates this process by showing how, for some of us, these perceptual fantasies can be very different – that we all have our own distinctive inner universes. You can even try training yourself to become a ‘synaesthete’ with a demo of some of our own research, developed for this exhibition. Many thanks to Dr. David Schwartzman of the Sackler Centre for making this happen.

Alphabet in Colour: Illustrating Vladimir Nabokov’s grapheme-colour synaesthesia, by Jean Holabird.

Finally there is conscious self – the specific experience of being me, or being you. While this might seem easy to take for granted, the experience of being a self requires explanation just as much as any other kind of experience. It too has its edges, its border regions. Here, research is revealing that conscious selfhood, though experienced as unified, can come apart in many different ways. For example, our experience of being and having a particular body can dissociate from our experience of being a person with a name and a specific set of memories. Conscious selfhood, like all conscious perception, is therefore another controlled hallucination maintained by the brain. The section BEING | NOT BEING dramatically explores some of these issues, for example by looking at amnesia with Shona Illingworth, and with Adrian Owen’s seminal work on the possibility of consciousness even after severe brain injury.

This last example brings up an important point. Besides the allure of basic science, there are urgent practical motivations for studying consciousness. Neurological and psychiatric disorders are increasingly common and can often be understood as disturbances of conscious experience. Consciousness science promises new approaches and perhaps new treatments for these deeply destructive problems. Scoping out further boundary areas, studying the biology of consciousness can shed new light on awareness in newborn infants and in non-human animals, informing ethical debates in these areas. Above all, consciousness science carries the promise of understanding more about our place in nature. Following the tradition of Copernicus and Darwin, a biological account of conscious experience will help us see ourselves as part of, not apart from, the rest of the universe.

Santiago Ramon y Cajal, distinguishing the reticular theory (left) from the neuron doctrine (right).  From the Instituto Cajal, Madrid.

Let’s finish by returning to this brilliant exhibition, States of Mind. What I found most remarkable are the objects that Emily Sargent has collected together. Whether it's Descartes’ De Homine, Ramon y Cajal’s delicate ink drawings of neurons, or Francis Crick’s notebooks and models, these objects bring home and render tangible the creativity and imagination which people have brought to bear in their struggle to understand consciousness, over hundreds of years. For me, this brings a new appreciation and wonder to our modern attempts to tackle this basic mystery of life. Emily Dickinson, my favourite poet of neuroscience, put it like this: “The brain is wider than the sky, for – put them side by side – the one the other will contain, with ease, and you – beside.”

States of Mind is at the Wellcome Collection in London from Feb 4th until October 16th 2016 and is curated by Emily Sargent. Sackler Centre researchers, in particular David Schwartzman and myself, helped out as scientific advisors. This text is lightly adapted from a speech I gave at the opening event on Feb 3rd 2016. Watch this space, and visit the exhibition website, for news about special events on consciousness that will happen throughout the year.

States of Mind at the Wellcome Collection

YellowPinkBlue by Ann Veronica Janssens

From October 2015 until October 2016 the Wellcome Collection in London is curating an exhibition called States of Mind: Tracing the Edges of Consciousness.  It has been launched with a brilliant piece of installation art by Ann Veronica Janssens (until 3rd Jan 2016).  In YellowPinkBlue the entire gallery space is invaded by coloured mist, to focus attention on the process of perception itself so that one becomes subsumed by the experience of seeing.  I’m excited to be contributing in various ways to States of Mind, via the Sackler Centre (more on that soon). To start with, here is the text I wrote for Janssens’s remarkable piece.

What in the world is consciousness?

Right now an apparent miracle is unfolding. Within your brain, the electrochemical activity of many billions of richly interconnected brain cells – each one a tiny biological machine – is giving rise to a conscious experience. Your conscious experience: right here, right now, reading these words.

It is all too easy to go about our daily lives, having conscious experiences, without appreciating how remarkable it is that we have these experiences at all. Ann Veronica Janssens’s piece returns us to the sheer wonder of being conscious. By stripping away many of the features that permeate our normal conscious lives, the raw fact of experiencing is given renewed emphasis.

People have wondered about consciousness since they’ve wondered about anything. Hippocrates, the Greek founder of modern medicine, rightly identified the brain as the organ of experience (though Aristotle didn’t agree). In the 17th century, Descartes divided the universe into ‘mind stuff’ (res cogitans) and ‘matter stuff’ (res extensa), giving birth to the philosophy of dualism and the confounding ‘mind–body’ problem of how the two relate. In the 19th century, when psychology first emerged as a science, understanding consciousness was its primary objective. Though largely sidelined during the 20th century, the challenge of revealing the biological basis of consciousness is now firmly re-established for our times.

Janssens’s piece reminds us of the important distinction in science between being conscious at all (conscious level: the difference between being awake and being in a dreamless sleep or under anaesthesia) and what we are conscious of (conscious content: the perceptions, thoughts and emotions that populate our conscious mind). There is also conscious selfhood – the specific experience of being me (or you). Each of these aspects of consciousness can be traced to specific mechanisms in the brain that neuroscientists, in cahoots with researchers from many other disciplines, are now starting to unravel. There are many exciting ideas in play, ranging from the dependence of conscious level on how different parts of the brain speak to each other, to understanding conscious content as determined by the brain’s ‘best guess’ of the causes of ambiguous and noisy sensory signals. Crucially, these ideas have allowed consciousness science to progress from the philosopher’s armchair to the research laboratory.

Besides the allure of basic science, there are important practical motivations for studying consciousness. Neurological and psychiatric disorders are increasingly common and can often be framed as disturbances of conscious experience. Consciousness science promises new approaches and perhaps new treatments for these scourges of modern society. New theories and experiments can also shed light on consciousness in newborns and in non-human animals, adding critical information to important ethical debates in these areas. But above all, consciousness science carries the promise of understanding more about our place in nature. Following Darwin and Copernicus, a biological account of conscious experience will help us see ourselves as part of, not apart from, the rest of the universe.

Anil Seth, Professor of Cognitive and Computational Neuroscience
Co-Director, Sackler Centre for Consciousness Science, University of Sussex

Ex Machina: A shot in the arm for smart sci-fi

machina_a

Alicia Vikander as Ava in Alex Garland’s Ex Machina

IT’S a rare thing to see a movie about science that takes no prisoners intellectually. Alex Garland’s Ex Machina is just that: a stylish, spare and cerebral psycho-techno-thriller, which gives a much-needed shot in the arm for smart science fiction.

Reclusive billionaire genius Nathan, played by Oscar Isaac, creates Ava, an intelligent and very attractive robot played by Alicia Vikander. He then struggles with the philosophical and ethical dilemmas his creation poses, while all hell breaks loose. Many twists and turns add nuance to the plot, which centres on the evolving relationships between the balletic Ava and Caleb (Domhnall Gleeson), a hotshot programmer invited by Nathan to be the “human component in a Turing test”, and between Caleb and Nathan, as Ava’s extraordinary capabilities become increasingly apparent.

Everything about this movie is good. Compelling acting (with only three speaking parts), exquisite photography and set design, immaculate special effects, a subtle score and, above all, a hugely imaginative screenplay combine under Garland’s precise direction to deliver a cinematic experience that grabs you and never lets go.

The best science fiction often tackles the oldest questions. At the heart of Ex Machina is one of our toughest intellectual knots, that of artificial consciousness. Is it possible to build a machine that is not only intelligent but also sentient: that has consciousness, not only of the world but also of its own self? Can we construct a modern-day Golem, that lumpen being of Jewish folklore which is shaped from unformed matter and can both serve humankind and turn against it? And if we could, what would happen to us?

In Jewish folklore, the Golem is an animate being shaped from unformed matter.

Putting aside the tedious business of actually building a conscious AI, we face the challenge of figuring out whether the attempt succeeds. The standard reference for this sort of question is Alan Turing’s eponymous test, in which a human judge interrogates both a candidate machine and another human. A machine passes the test when the judge consistently fails to distinguish between them.

While the Turing test has provided a trope for many AI-inspired movies (such as Spike Jonze’s excellent Her), Ex Machina takes things much further. In a sparkling exchange between Caleb and Nathan, Garland nails the weakness of Turing’s version of the test, a focus on the disembodied exchange of messages, and proposes something far more interesting. “The challenge is to show you that she’s a robot. And see if you still feel she has consciousness,” Nathan says to Caleb.

This shifts the goalposts in a vital way. What matters is not whether Ava is a machine. It is not even whether Ava, even though a machine, can be conscious. What matters is whether Ava makes a conscious person feel that Ava is conscious. The brilliance of Ex Machina is that it reveals the Turing test for what it really is: a test of the human, not of the machine. And Garland is not necessarily on our side.

Nathan (Oscar Isaac) and Caleb (Domhnall Gleeson) discuss deep matters of AI

Is consciousness a matter of social consensus? Is it more relevant whether people believe (or feel) that something (or someone) is conscious than whether it is in fact actually conscious? Or, does something being “actually conscious” rest on other people’s beliefs about it being conscious, or on its own beliefs about its consciousness (beliefs that may themselves depend on how it interprets others’ beliefs about it)? And exactly what is the difference between believing and feeling in situations like this?

It seems to me that my consciousness, here and now, is not a matter of social consensus or of my simply believing or feeling that I am conscious. It seems to me, simply, that I am conscious here and now. When I wake up and smell the coffee, there is a real experience of coffee-smelling going on.

But let me channel Ludwig Wittgenstein, one of the greatest philosophers of the 20th century, for a moment. What would it seem like if it seemed to me that my being conscious were a matter of social consensus or beliefs or feelings about my own conscious status? Is what it “seems like” to me relevant at all when deciding how consciousness comes about or what has consciousness?

Before vanishing completely into a philosophical rabbit hole, it is worth saying that questions like these are driving much influential current research on consciousness. Philosophers and scientists like Daniel Dennett, David Rosenthal and Michael Graziano defend, in various ways, the idea that consciousness is somehow illusory and what we really mean in saying we are conscious is that we have certain beliefs about mental states, that these states have distinctive functional properties, or that they are involved in specific sorts of attention.

Another theoretical approach accepts that conscious experience is real and sees the problem as one of determining its physical or biological mechanism. Some leading neuroscientists such as Giulio Tononi, and recently, Christof Koch, take consciousness to be a fundamental property, much like mass-energy and electrical charge, that is expressed through localised concentrations of “integrated information”. And others, like philosopher John Searle, believe that consciousness is an essentially biological property that emerges in some systems but not in others, for reasons as-yet unknown.

In the film we hear about Searle’s Chinese Room thought experiment. His premise was that researchers had managed to build a computer programmed in English that can respond to written Chinese with written Chinese so convincingly it easily passes the Turing test, persuading a human Chinese speaker that the program understands and speaks Chinese. Does the machine really “understand” Chinese (Searle called this “strong AI”) or is it only simulating the ability (“weak AI”)? There is also a nod to the notional “Mary”, the scientist who, while knowing everything about the physics and biology of colour vision, has only ever experienced black, white and shades of grey. What happens when she sees a red object for the first time? Will she learn anything new? Does consciousness exceed the realms of knowledge?

All of the above illustrates how academically savvy and intellectually provocative Ex Machina is. Hat-tips here to Murray Shanahan, professor of cognitive robotics at Imperial College London, and writer and geneticist Adam Rutherford, whom Garland did well to enlist as science advisers.

Not every scene invites deep philosophy of mind, with the film encompassing everything from ethics, the technological singularity, Ghostbusters and social media to the erosion of privacy, feminism and sexual politics within its subtle scope. But when it comes to riffing on the possibilities and mysteries of brain, mind and consciousness, Ex Machina doesn’t miss a trick.

As a scientist, it is easy to moan when films don’t stack up against reality, but there is usually little to be gained from nitpicking over inaccuracies and narrative inventions. Such criticisms can seem petty and reinforcing of the stereotype of scientists as humourless gatekeepers of facts and hoarders of equations. But these complaints sometimes express a sense of missed opportunity rather than injustice, a sense that intellectual riches could have been exploited, not sidelined, in making a good movie. AI, neuroscience and consciousness are among the most vibrant and fascinating areas of contemporary science, and what we are discovering far outstrips anything that could be imagined out of thin air.

In his directorial debut, Garland has managed to capture the thrill of this adventure in a film that is effortlessly enthralling, whatever your background. This is why, on emerging from it, I felt lucky to be a neuroscientist. Here is a film that is a better film because of, and not despite, its engagement with its intellectual inspiration.


The original version of this piece was published as a Culture Lab article in New Scientist on Jan 21. I am grateful to the New Scientist for permission to reproduce it here, and to Liz Else for help with editing. I will be discussing Ex Machina with Dr. Adam Rutherford at a special screening of the film at the Edinburgh Science Festival (April 16, details and tickets here).

Open your MIND

Open MIND is a brand new collection of original research publications on the mind, brain, and consciousness. It is now freely available online. The collection contains altogether 118 articles from 90 senior and junior researchers, in the always-revealing format of target articles, commentaries, and responses.

This innovative project is the brainchild of Thomas Metzinger and Jennifer Windt, of the MIND group of the Johannes Gutenberg University in Mainz, Germany (Windt has since moved to Monash University in Melbourne). The MIND group was set up by Metzinger in 2003 to catalyse the development of young German philosophers by engaging them with the latest developments in philosophy of mind, cognitive science, and neuroscience. Open MIND celebrates the 10th anniversary of the MIND group, in a way that is so much more valuable to the academic community than ‘just another meeting’, with its quick-burn excitement and massive carbon footprint. Editors Metzinger and Windt explain:

“With this collection, we wanted to make a substantial and innovative contribution that will have a major and sustained impact on the international debate on the mind and the brain. But we also wanted to create an electronic resource that could also be used by less privileged students and researchers in countries such as India, China, or Brazil for years to come … The title ‘Open MIND’ stands for our continuous search for a renewed form of academic philosophy that is concerned with intellectual rigor, takes the results of empirical research seriously, and at the same time remains sensitive to ethical and social issues.”

As a senior member of the MIND group, I was lucky enough to contribute a target article, which was commented on by Wanja Wiese, one of the many talented graduate students with Metzinger and a junior MIND group member. My paper marries concepts in cybernetics and predictive control with the increasingly powerful perspective of ‘predictive processing’ or the Bayesian brain, with a focus on interoception and embodiment. I’ll summarize the main points in a different post, but you can go straight to the target paper, Wanja’s commentary, and my response.

Open MIND is a unique resource in many ways. The Editors were determined to maximize its impact, so, unlike in many otherwise similar projects, the original target papers have not been circulated prior to launch. This means there is a great deal of highly original material now available to be discovered. The entire project was compressed into about 10 months from submission of initial drafts to publication this week of the complete collection. This means the original content is completely up-to-date. Also, Open MIND shows how excellent scientific publication can sidestep the main publishing houses, given the highly developed resources now available, coupled of course with extreme dedication and hard work. The collection was assembled, rigorously reviewed, edited, and produced entirely in-house – a remarkable achievement.

Thomas Metzinger with the Open MIND student team

Above all, Open MIND opened a world of opportunity for its junior members, the graduate students and postdocs who were involved in every stage of the project: soliciting and reviewing papers, editing, preparing commentaries, and organizing the final collection. As Metzinger and Windt say:

“The whole publication project is itself an attempt to develop a new format for promoting junior researchers, for developing their academic skills, and for creating a new type of interaction between senior and junior group members.”

The results of Open MIND are truly impressive and will undoubtedly make a lasting contribution to the philosophy of mind, especially in its most powerful multidisciplinary and empirically grounded forms.

Take a look, and open your mind too.

Open MIND contributors: Adrian John Tetteh Alsmith, Michael L. Anderson, Margherita Arcangeli, Andreas Bartels, Tim Bayne, David H. Baßler, Christian Beyer, Ned Block, Hannes Boelsen, Amanda Brovold, Anne-Sophie Brüggen, Paul M. Churchland, Andy Clark, Carl F. Craver, Holk Cruse, Valentina Cuccio, Brian Day, Daniel C. Dennett, Jérôme Dokic, Martin Dresler, Andrea R. Dreßing, Chris Eliasmith, Maximilian H. Engel, Kathinka Evers, Regina Fabry, Sascha Fink, Vittorio Gallese, Philip Gerrans, Ramiro Glauer, Verena Gottschling, Rick Grush, Aaron Gutknecht, Dominic Harkness, Oliver J. Haug, John-Dylan Haynes, Heiko Hecht, Daniela Hill, John Allan Hobson, Jakob Hohwy, Pierre Jacob, J. Scott Jordan, Marius Jung, Anne-Kathrin Koch, Axel Kohler, Miriam Kyselo, Lana Kuhle, Victor A. Lamme, Bigna Lenggenhager, Caleb Liang, Ying-Tung Lin, Christophe Lopez, Michael Madary, Denis C. Martin, Mark May, Lucia Melloni, Richard Menary, Aleksandra Mroczko-Wąsowicz, Saskia K. Nagel, Albert Newen, Valdas Noreika, Alva Noë, Gerard O’Brien, Elisabeth Pacherie, Anita Pacholik-Żuromska, Christian Pfeiffer, Iuliia Pliushch, Ulrike Pompe-Alama, Jesse J. Prinz, Joëlle Proust, Lisa Quadt, Antti Revonsuo, Adina L. Roskies, Malte Schilling, Stephan Schleim, Tobias Schlicht, Jonathan Schooler, Caspar M. Schwiedrzik, Anil Seth, Wolf Singer, Evan Thompson, Jarno Tuominen, Katja Valli, Ursula Voss, Wanja Wiese, Yann F. Wilhelm, Kenneth Williford, Jennifer M. Windt.


Open MIND press release.
The cybernetic Bayesian brain: from interoceptive inference to sensorimotor contingencies
Perceptual presence in the Kuhnian-Popperian Bayesian brain
Inference to the best prediction

Training synaesthesia: How to see things differently in half-an-hour a day

Image courtesy of Phil Wheeler Illustrations

Can you learn to see the world differently? Some people already do. People with synaesthesia experience the world very differently indeed, in a way that seems linked to creativity, and which can shed light on some of the deepest mysteries of consciousness. In a paper published in Scientific Reports, we describe new evidence suggesting that non-synaesthetes can be trained to experience the world much like natural synaesthetes. Our results have important implications for understanding individual differences in conscious experiences, and they extend what we know about the flexibility (‘plasticity’) of perception.

Synaesthesia means that an experience of one kind (like seeing a letter) consistently and automatically evokes an experience of another kind (like seeing a colour), when the normal kind of sensory stimulation for the additional experience (the colour) isn’t there. This example describes grapheme-colour synaesthesia, but this is just one among many fascinating varieties. Other synaesthetes experience numbers as having particular spatial relationships (spatial form synaesthesia, probably the most common of all). And there are other more unusual varieties like mirror-touch synaesthesia, where people experience touch on their own bodies when they see someone else being touched, and taste-shape synaesthesia, where triangles might taste sharp, and ellipses bitter.

The richly associative nature of synaesthesia, and the biographies of famous case studies like Vladimir Nabokov and Wassily Kandinsky (or, as the Daily Wail preferred: Lady Gaga and Pharrell Williams), have fuelled its association with creativity and intelligence. Yet the condition is remarkably common, with recent estimates suggesting about 1 in 23 people have some form of synaesthesia. But how does it come about? Is it in your genes, or is it something you can learn?

It is widely believed that Kandinsky was synaesthetic. For instance he said: “Colour is the keyboard, the eyes are the harmonies, the soul is the piano with many strings. The artist is the hand that plays, touching one key or another, to cause vibrations in the soul”

As with most biological traits, the truth is: a bit of both. But this still raises the question of whether being synaesthetic is something that can be learnt, even as an adult.

There is a rather long history of attempts to train people to be synaesthetic. Perhaps the earliest example was by E. L. Kelly, who in 1934 published a paper with the title: An experimental attempt to produce artificial chromaesthesia by the technique of the conditioned response. While this attempt failed (the paper says it is “a report of purely negative experimental findings”) things have now moved on.

More recent attempts, for instance the excellent work of Olympia Colizoli and colleagues in Amsterdam, have tried to mimic (grapheme-colour) synaesthesia by having people read books in which some of the letters are always coloured in with particular colours. They found that it was possible to train people to display some of the characteristics of synaesthesia, like being slower to name coloured letters when they were presented in a colour conflicting with the training (the ‘synaesthetic Stroop’ effect). But crucially, until now no study has found that training could lead to people actually reporting synaesthesia-like conscious experiences.
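The ‘synaesthetic Stroop’ effect boils down to a simple comparison of naming latencies. Here is a toy sketch of the arithmetic only – the function name and the reaction times below are invented for illustration, not taken from Colizoli's study or from ours:

```python
from statistics import mean

def stroop_interference(congruent_rts, incongruent_rts):
    """Extra naming time (in ms) when a letter's printed colour conflicts
    with its trained colour: the signature of a synaesthetic Stroop effect."""
    return mean(incongruent_rts) - mean(congruent_rts)

# Invented naming latencies, in milliseconds
congruent = [520, 540, 510, 530]    # letter shown in its trained colour
incongruent = [600, 640, 590, 615]  # letter shown in a conflicting colour

effect = stroop_interference(congruent, incongruent)
assert effect > 0  # naming is slower under conflict
```

A positive interference score after training is one behavioural hint that the letter–colour associations have become automatic.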

An extract from the ‘coloured reading’ training material, used in our study, and similar to the material used by Colizoli and colleagues. The text is from James Joyce. Later in training we replaced some of the letters with (appropriately) coloured blocks to make the task even harder.

Our approach was based on brute force. We decided to dramatically increase the length and rigour of the training procedure that our (initially non-synaesthetic) volunteers undertook. Each of them (14 in all) came into the lab for half-an-hour each day, five days a week, for nine weeks! On each visit they completed a selection of training exercises designed to cement specific associations between letters and colours. Crucially, we adapted the difficulty of the tasks to each volunteer and each training session, and we also gave them financial rewards for good performance. Over the nine-week regime, some of the easier tasks were dropped entirely, and other more difficult tasks were introduced. Our volunteers also had homework to do, like reading the coloured books. Our idea was that the training must always be challenging, in order to have a chance of working.

The results were striking. At the end of the nine-week exercise, our dedicated volunteers were tested for behavioural signs of synaesthesia, and – crucially – were also asked about their experiences, both inside and outside the lab. Behaviourally they all showed strong similarities with natural-born synaesthetes. This was most striking in measures of ‘consistency’, a test which requires repeated selection of the colour associated with a particular letter, from a palette of millions.

The consistency test for synaesthesia. This example is from David Eagleman’s popular ‘synaesthesia battery’.

Natural-born synaesthetes show very high consistency: the colours they pick (for a given letter) are very close to each other in colour space, across repeated selections. This is important because consistency is very hard to fake. The idea is that synaesthetes can simply match a colour to their experienced ‘concurrent’, whereas non-synaesthetes have to rely on less reliable visual memory, or other strategies.
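The logic of the consistency measure can be sketched in a few lines. This is an illustrative toy version only – Eagleman's battery uses its own colour space and normalisation, and the colour picks below are invented – but it captures the idea that repeated selections are scored by how tightly they cluster:

```python
from itertools import combinations
from math import dist  # Euclidean distance (Python 3.8+)

def consistency_score(picks):
    """Mean pairwise distance between repeated colour picks for one letter.
    Each pick is an (R, G, B) tuple; a lower score means higher consistency."""
    pairs = list(combinations(picks, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

# Invented data: a synaesthete's three picks for 'A' cluster tightly...
synaesthete = [(250, 10, 12), (248, 14, 9), (252, 11, 15)]
# ...while someone relying on visual memory drifts across colour space.
control = [(250, 10, 12), (200, 60, 40), (170, 90, 80)]

assert consistency_score(synaesthete) < consistency_score(control)
```

In practice the test is scored against a threshold calibrated on known synaesthetes, but the principle is the same: tight clustering across repeated picks is very hard to fake from memory alone.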

Our trained quasi-synaesthetes passed the consistency test with flying colours (so to speak). They also performed much like natural synaesthetes on a whole range of other behavioural tests, including synaesthetic Stroop, and a ‘synaesthetic conditioning’ task which shows that trained colours can elicit automatic physiological responses, like increases in skin conductance. Most importantly, most (8/14) of our volunteers described colour experiences much like those of natural synaesthetes (only 2 reported no colour phenomenology at all). Strikingly, some of these experiences took place even outside the lab:

“When I was walking into campus I glanced at the University of Sussex sign and the letters were coloured” [according to their trained associations]

Like natural synaesthetes, some of our volunteers seemed to experience the concurrent colour ‘out in the world’ while others experienced the colours more ‘in the head’:

“When I am looking at a letter I see them in the trained colours”

“When I look at the letter ‘p’ … its like the inside of my head is pink”

For grapheme-colour synaesthetes, letters evoke specific colour experiences. Most of our trained quasi-synaesthetes reported similar experiences. This image is however quite misleading. Synaesthetes (natural born or not) also see the letters in their actual colour, and they typically know that the synaesthetic colour is not ‘real’. But that’s another story.

These results are very exciting, suggesting for the first time that with sufficient training, people can actually learn to see the world differently. Of course, since they are based on subjective reports about conscious experiences, they are also the hardest to independently verify. There is always the slight worry that our volunteers said what they thought we wanted to hear. Against this worry, we were careful to ensure that none of our volunteers knew the study was about synaesthesia (and on debrief, none of them did!). Also, similar ‘demand characteristic’ concerns could have affected other synaesthesia training studies, yet none of these led to descriptions of synaesthesia-like experiences.

Our results weren’t just about synaesthesia. A fascinating side effect was that our volunteers registered a dramatic increase in IQ, gaining an average of about 12 IQ points (compared to a control group which didn’t undergo training). We don’t yet know whether this increase was due to the specifically synaesthetic aspects of our regime, or just intensive cognitive training in general. Either way, our findings provide support for the idea that carefully designed cognitive training could enhance normal cognition, or even help remedy cognitive deficits or decline. More research is needed on these important questions.

What happened in the brain as a result of our training? The short answer is: we don’t know, yet. While in this study we didn’t look at the brain, other studies have found changes in the brain after similar kinds of training. This makes sense: changes in behaviour or in perception should be accompanied by neural changes of some kind. At the same time, natural-born synaesthetes appear to have differences both in the structure of their brains, and in their activity patterns. We are now eager to see what kind of neural signatures underlie the outcome of our training paradigm. The hope is that, because our study showed actual changes in perceptual experience, analysis of these signatures will shed new light on the brain basis of consciousness itself.

So, yes, you can learn to see the world differently. To me, the most important aspect of this work is that it emphasizes that each of us inhabits our own distinctive conscious world. It may be tempting to think that while different people – maybe other cultures – have different beliefs and ways of thinking, still we all see the same external reality. But synaesthesia – along with emerging theories of ‘predictive processing’ – shows that the differences go much deeper. We each inhabit our own personalised universe, albeit one which is partly defined and shaped by other people. So next time you think someone is off in their own little world: they are.


The work described here was led by Daniel Bor and Nicolas Rothen, and is just one part of an energetic inquiry into synaesthesia taking place at Sussex University and the Sackler Centre for Consciousness Science. With Jamie Ward and (recently) Julia Simner also working here, we have a uniquely concentrated expertise in this fascinating area. In other related work I have been interested in why synaesthetic experiences lack a sense of reality and how this gives an important clue about the nature of ‘perceptual presence’. I’ve also been working on the phenomenology of spatial form synaesthesia, and whether synaesthetic experiences can be induced through hypnosis. And an exciting brain imaging study of natural synaesthetes will shortly hit the press! Nicolas Rothen is an authority on the relationship between synaesthesia and memory, and Jamie Ward and Julia Simner have way too many accomplishments in this field to mention. (OK, Jamie has written the most influential review paper in the area – featuring a lot of his own work – and Julia (with Ed Hubbard) has written the leading textbook. That’s not bad to start with.)


Our paper, Adults can be Trained to Acquire Synesthetic Experiences (sorry for US spelling) is published (open access, free!) in Scientific Reports, part of the Nature family. The authors were Daniel Bor, Nicolas Rothen, David Schwartzman, Stephanie Clayton, and Anil K. Seth. There has been quite a lot of media coverage of this work, for instance in the New Scientist and the Daily Fail. Other coverage is summarized here.

I just dropped in (to see what condition my condition was in): How ‘blind insight’ changes our view of metacognition

metacog

Image from 30 Second Brain, Ivy Press, available at all good booksellers.

Neuroscientists have long appreciated that people can make accurate decisions without knowing they are doing so. This is particularly impressive in blindsight: a phenomenon where people with damage to the visual parts of their brain can still make accurate visual discriminations while claiming not to see anything. But even in normal life it is quite possible to make good decisions without having reliable insight into whether you are right or wrong.

In a paper published this week in Psychological Science, our research group – led by Ryan Scott – has for the first time shown the opposite phenomenon: blind insight. This is the situation in which people know whether or not they’ve made accurate decisions, even though they can’t make decisions accurately!

This is important because it changes how we think about metacognition. Metacognition, strictly speaking, is ‘knowing about knowing’. When we make a perceptual judgment, or a decision of any kind, we typically have some degree of insight into whether our decision was correct or not. This is metacognition, which in experiments is usually measured by asking people how confident they are in a previous decision. Good metacognitive performance is indicated by high correlations between confidence and accuracy, which can be quantified in various ways.
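The confidence–accuracy link can be made concrete with toy numbers. The sketch below (the trial counts are purely illustrative, not data from our study) computes the phi coefficient over a 2×2 table of confident/guess by correct/wrong trials – one simple way to quantify metacognitive performance, though unlike the SDT-based measures discussed below it is not bias-free:

```python
import math

# Hypothetical trial counts: rows = confident/guess, columns = correct/wrong.
confident_correct, confident_wrong = 40, 10
guess_correct, guess_wrong = 25, 25

# Phi coefficient: the Pearson correlation for two binary variables
# (here, 'was confident' and 'was correct').
a, b, c, d = confident_correct, confident_wrong, guess_correct, guess_wrong
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"phi = {phi:.2f}")
```

A positive phi indicates that confidence tracks accuracy; zero would mean confidence carries no information about whether the decision was right.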

Most explanations of metacognition assume that metacognitive judgements are based on the same information as the original (‘first-order’) decision. For example, if you are asked to decide whether a dim light was present or not, you might make a (first-order) judgment based on signals flowing from your eyes to your brain. Perhaps your brain sets a threshold below which you will say ‘No’ and above which you will say ‘Yes’. Metacognitive judgments are typically assumed to work on the same data. If you are asked whether you were guessing or were confident, maybe you will set additional thresholds a bit further apart. The idea is that your brain may need more sensory evidence to be confident in judging that a dim light was in fact present, than when merely guessing that it was.

This way of looking at things is formalized by signal detection theory (SDT). The nice thing about SDT is that it can give quantitative mathematical expressions for how well a person can make both first-order and metacognitive judgements, in ways which are not affected by individual biases to say ‘yes’ or ‘no’, or ‘guess’ versus ‘confident’. (The situation is a bit trickier for metacognitive confidence judgements but we can set these details aside for now: see here for the gory details). A simple schematic of SDT is shown below.

sdt

Signal detection theory. The ‘signal’ refers to sensory evidence and the curves show hypothetical probability distributions for stimulus present (solid line) and stimulus absent (dashed line). If a stimulus (e.g., a dim light) is present, then the sensory signal is likely to be stronger (higher) – but because sensory systems are assumed to be noisy (probabilistic), some signal is likely even when there is no stimulus. The difficulty of the decision is shown by the overlap of the distributions. The best strategy for the brain is to place a single ‘decision criterion’ midway between the peaks of the two distributions, and to say ‘present’ for any signal above this threshold, and ‘absent’ for any signal below. This determines the ‘first order decision’. Metacognitive judgements are then specified by additional ‘confidence thresholds’ which bracket the decision criterion. If the signal lies in between the two confidence thresholds, the metacognitive response is ‘guess’; if it lies to the two extremes, the metacognitive response is ‘confident’. The mathematics of SDT allow researchers to calculate ‘bias free’ measures of how well people can make both first-order and metacognitive decisions (these are called ‘d-primes’). As well as providing a method for quantifying decision making performance, the framework is also frequently assumed to say something about what the brain is actually doing when it is making these decisions. It is this last assumption that our present work challenges.
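The two-threshold picture described in the caption can be simulated directly. In the Python sketch below, all parameters (the distribution means, the decision criterion, and the confidence thresholds) are illustrative choices, not values from the paper; it draws noisy signals from ‘present’ and ‘absent’ distributions, applies a decision criterion plus bracketing confidence thresholds, and recovers both the first-order d-prime and the confidence–accuracy link that SDT predicts:

```python
import random
from statistics import NormalDist

random.seed(0)
N = 100_000
CRITERION = 0.5                # first-order decision criterion (midway)
CONF_LO, CONF_HI = -0.5, 1.5   # confidence thresholds bracketing it

hits = fas = n_present = n_absent = 0
conf_correct = conf_total = guess_correct = guess_total = 0

for _ in range(N):
    present = random.random() < 0.5
    # Sensory signal: Gaussian around 1 if the stimulus is present, else around 0.
    signal = random.gauss(1.0 if present else 0.0, 1.0)
    say_present = signal > CRITERION
    correct = say_present == present
    confident = signal < CONF_LO or signal > CONF_HI

    if present:
        n_present += 1
        hits += say_present
    else:
        n_absent += 1
        fas += say_present
    if confident:
        conf_total += 1
        conf_correct += correct
    else:
        guess_total += 1
        guess_correct += correct

# First-order d-prime: z(hit rate) - z(false-alarm rate)
z = NormalDist().inv_cdf
d_prime = z(hits / n_present) - z(fas / n_absent)
print(f"d-prime: {d_prime:.2f}")
print(f"accuracy when confident: {conf_correct / conf_total:.2f}")
print(f"accuracy when guessing:  {guess_correct / guess_total:.2f}")
```

With these settings confident trials come out substantially more accurate than guesses – exactly the pattern a single underlying signal with two sets of thresholds produces.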

Under SDT it is easy to see that one can make above-chance first-order decisions while displaying little or no metacognition. One way to do this would be to set your metacognitive thresholds very far apart, so that you are always guessing. But there is no way, on this theory (without making various weird assumptions), that you could be at chance in your first-order decisions yet above chance in your metacognitive judgements about those decisions.

Surprisingly, until now, no-one had actually checked to see whether this could happen in practice. This is exactly what we did, and this is exactly what we found. We analysed a large amount of data from a paradigm called artificial grammar learning, which is a workhorse in psychological laboratories for studying unconscious learning and decision-making. In artificial grammar learning people are shown strings of letters and have to decide whether each string belongs to ‘grammar A’ or ‘grammar B’. Each grammar is just an arbitrary set of rules determining allowable patterns of letters. Over time, most people can learn to classify letter strings at better than chance. However, over a large sample, there will always be some people who can’t: for these unfortunates, first-order performance remains at ~50% (in SDT terms they have a d-prime not different from zero).

agl

Artificial grammar learning. Two rule sets (shown on the left) determine which letter strings belong to ‘grammar A’ or ‘grammar B’. Participants are first shown examples of strings generated by one or the other grammar (training). Importantly, they are not told about the grammatical rules, and in most cases they remain unaware of them. Nonetheless, after some training they are able to successfully (i.e., above chance) classify novel letter strings appropriately (testing).
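A minimal version of such a grammar can be written as a regular expression. The pattern below is purely hypothetical – it is not one of the grammars used in the study – but it shows how an arbitrary finite-state rule set determines which letter strings count as ‘grammatical’:

```python
import re

# A toy 'grammar A' over the letters {M, T, V, R, X}: strings must start
# with MT or VX, contain zero or more TV or RR pairs, and end with RM or XX.
# (Hypothetical rules for illustration only.)
GRAMMAR_A = re.compile(r"(MT|VX)(TV|RR)*(RM|XX)")

def in_grammar_a(s: str) -> bool:
    """Return True if the whole string is generated by grammar A."""
    return GRAMMAR_A.fullmatch(s) is not None

print(in_grammar_a("MTRM"))    # True
print(in_grammar_a("VXRRXX"))  # True
print(in_grammar_a("MTVR"))    # False
```

Participants are never shown rules like these; they only see example strings, which is why any learning is typically implicit.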

Crucially, subjects in our experiments were asked to make confidence judgments along with their first-order grammaticality judgments. Focusing on those subjects who remained at chance in their first-order judgements, we found that they still showed above-chance metacognition. That is, they were more likely to be confident when they were (by chance) right, than when they were (by chance) wrong. We call this novel finding blind insight.
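The data pattern behind blind insight can be mimicked with a toy simulation. In the sketch below (the 60/40 confidence probabilities are arbitrary illustrative values, not estimates from our data), first-order answers are pure coin flips, yet confidence is generated as if it draws on information the first-order decision did not use – producing chance accuracy alongside above-chance metacognition, the combination that single-signal SDT rules out:

```python
import random

random.seed(1)
N = 10_000
n_correct = conf_correct = conf_total = guess_correct = guess_total = 0

for _ in range(N):
    # First-order judgement at chance: effectively a coin flip.
    correct = random.random() < 0.5
    # But confidence tracks correctness anyway: more likely to feel
    # confident on (accidentally) correct trials than on errors.
    confident = random.random() < (0.6 if correct else 0.4)

    n_correct += correct
    if confident:
        conf_total += 1
        conf_correct += correct
    else:
        guess_total += 1
        guess_correct += correct

print(f"first-order accuracy:   {n_correct / N:.2f}")
print(f"P(correct | confident): {conf_correct / conf_total:.2f}")
print(f"P(correct | guess):     {guess_correct / guess_total:.2f}")
```

The point of the toy is only to show what the pattern looks like, not how the brain achieves it: the confidence signal here is simply stipulated to carry extra information, which is the very thing that needs explaining.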

The discovery of blind insight changes the way we think about decision-making. Our results show that theoretical frameworks based on SDT are, at the very least, incomplete. Metacognitive performance during blind insight cannot be explained by simply setting different thresholds on a single underlying signal. Additional information, or substantially different transformations of the first-order signal, are needed. Exactly what is going on remains an open question. Several possible mechanisms could account for our results. One exciting possibility appeals to predictive processing, which is the increasingly influential idea that perception depends on top-down predictions about the causes of sensory signals. If top-down influences are also involved in metacognition, they could carry the additional information needed for blind insight. This would mean that metacognition, like perception, is best understood as a process of probabilistic inference.

pp

In predictive processing theories of brain function, perception depends on top-down predictions (blue) about the causes of sensory signals. Sensory signals carry ‘prediction errors’ (magenta) which update top-down predictions according to principles of Bayesian inference. Maybe a similar process underlies metacognition. Image from 30 Second Brain, Ivy Press.

This brings us to consciousness (of course). Metacognitive judgments are often used as a proxy for consciousness, on the logic that confident decisions are assumed to be based on conscious experiences of the signal (e.g., the dim light was consciously seen), whereas guesses signify that the signal was processed only unconsciously. If metacognition involves top-down inference, this raises the intriguing possibility that metacognitive judgments actually give rise to conscious experiences, rather than just provide a means for reporting them. While speculative, this idea fits neatly with the framework of predictive processing which says that top-down influences are critical in shaping the nature of perceptual contents.

The discovery of blindsight many years ago has substantially changed the way we think about vision. Our new finding of blind insight may similarly change the way we think about metacognition, and about consciousness too.

The paper is published open access (i.e. free!) in Psychological Science. The authors were Ryan Scott, Zoltan Dienes, Adam Barrett, Daniel Bor, and Anil K Seth. There are also accompanying press releases and coverage:

Sussex study reveals how ‘blind insight’ confounds logic.  (University of Sussex, 13/11/2014)
People show ‘blind insight’ into decision making performance (Association for Psychological Science, 13/11/2014)