Registered Reports now available in Neuroscience of Consciousness


I was in Dublin last week, for the biennial meeting of the British Neuroscience Association. Amid the usual buzz of new research findings – and an outstanding public outreach programme – something different was in the air. There is now an unstoppable momentum behind efforts to increase the credibility of research in psychology and neuroscience (and in other areas of science too), and this momentum was fully on show at the BNA. There was a ‘credibility zone’, nestled among the usual mess of posters and bookstalls, and a keynote lecture from Professor Uta Frith on the three ‘R’s and what they mean for neuroscientists: reproducibility, replicability, and reliability of research. The BNA itself has recently received £450K from the Gatsby Foundation to support a new ‘credibility in neuroscience’ programme. Science can only progress when we can trust its findings, and while outright fraud is rare, the implicit demands of the ‘publish or perish’ culture can easily lead to unreliable results, as various replication crises have amply revealed. Measures to counter these dangers are therefore more than welcome – they are necessary.


This is why I’m delighted that Neuroscience of Consciousness, part of the Oxford University Press family of journals, is now accepting Registered Report submissions. Registered Reports are a form of research article in which the methods and proposed analyses are written up and reviewed before the research is actually conducted. Typically, and as implemented in our journal, a ‘stage 1’ submission includes a detailed description of the study protocol. If this stage 1 submission is accepted after peer review, then a stage 2 manuscript, which includes the results and discussion, can be submitted. The key innovation of a Registered Report is that acceptance at stage 1 guarantees publication at stage 2, whichever way the results actually turn out – so long as the protocol specified at stage 1 has been properly followed. Also important is that RRs do not exclude exploratory analyses – they only require that such analyses are clearly flagged up. Of course, not all research will be suitable for the Registered Report format, but we do encourage researchers to use it whenever they can. I’m very pleased that we have a dedicated member of our editorial board, Professor Zoltan Dienes, who will handle Registered Report submissions and who can advise on the process.

Registered Reports are just one among many innovations aimed at improving the credibility of research. Another important development is the emphasis on pre-registration of research designs, so that planned analyses can be unambiguously separated from exploratory analyses. This may be suitable in many cases when a full Registered Report is not. Neuroscience of Consciousness strongly encourages all experimental studies, wherever possible, to be pre-registered. This can be quite easy to do with facilities like Open Science Framework and aspredicted.org. Better science can also be catalysed through publication of methods and resources papers, including datasets. Here again I’m delighted that Neuroscience of Consciousness has launched a new submission category – ‘methods and resources’ articles – to encourage this kind of work.

As many have already emphasized, this emerging ‘open science’ research culture is not about calling people out or being holier-than-thou. Like many others, I’ve faced my own challenges in getting to grips with this rapidly evolving landscape; these challenges will no doubt continue, and it’s been uncomfortable contemplating some work I’ve led or been involved with in the past. Collectively, though, we have a duty to improve our practice and deliver not only more robust results but also more robust methodologies for advancing scientific understanding. My own laboratory embraced an explicit open science policy several months ago, setting out heuristics for best practice across a number of different research methodologies. This policy came primarily from discussions among the researchers, rather than ‘top down’ from me as the overall lab head, and I’m grateful that it did. One thing that’s become clear is that lab heads and research group leaders would do well to reflect on their expectations of research fellows and graduate students. One well-designed pre-registered (ideally Registered Report) publication is worth n interesting-but-underpowered studies (choose your n). It goes without saying that these changed expectations must also filter through to funding bodies and appointment committees. I am confident that they will.

I prefer to think of these new developments in research practice and methodology as an exciting new opportunity, rather than as a scrambled response to a perceived crisis. And I’m greatly looking forward to seeing the first Registered Report appear in Neuroscience of Consciousness. Whichever way the results turn out.

*

(Many thanks to Chris Chambers for his advice and encouragement in setting up a Registered Report pipeline, to Rosie Chambers and Lucy Oates at OUP for making it happen, to Zoltan Dienes for agreeing to handle Registered Report submissions editorially, and to Warrick Roseboom, Peter Lush, Bence Palfi, Reny Baykova, and Maxine Sherman for leading open science discussions in our lab.)

 

Intentional binding without intentional action: A new take on an old idea

You press a light switch and the light comes on. What could be simpler than that? But notice something. As the light comes on, you probably have a feeling that, somehow, you caused that to happen. This experience of ‘being …

Guest blog: Phenomenological control: Response to imaginative suggestion predicts measures of mirror touch synaesthesia, vicarious pain, and the rubber hand illusion


This is a Guest Blog written by Peter Lush, postdoctoral researcher at the Sackler Centre for Consciousness Science, and lead author on this new study. It’s all about our new preprint. A key challenge for psychological research is how to …

Time perception without clocks


Salvador Dali, The Persistence of Memory, 1931

Our new paper, led by Warrick Roseboom, is out now (open access) in Nature Communications. It’s about time.

Over sixteen hundred years ago, though who knows how long exactly, Saint Augustine complained “What then is time? If no-one asks me, I know; if I wish to explain to one who asks, I know not.”

The nature of time is endlessly mysterious, in philosophy, in physics, and also in neuroscience. We experience the flow of time, we perceive events as being ordered in time and as having particular durations, yet there are no time sensors in the brain. The eye has rod and cone cells to detect light, the ear has hair cells to detect sound, but there are no dedicated ‘time receptors’ to be found anywhere. How, then, does the brain create the subjective sense of time passing?

Most neuroscientific models of time perception rely on some kind of internal timekeeper or pacemaker, a putative ‘clock in the head’ against which the flow of events can be measured. But despite considerable research, clear evidence for these neuronal pacemakers has been rather lacking, especially when it comes to psychologically relevant timescales of a few seconds to minutes.
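To make the pacemaker idea concrete, here is a toy sketch – my own illustration, not any specific model from the literature, with invented parameter values. An internal clock emits ticks at a noisy rate, and duration is read out by counting accumulated ticks:

```python
import numpy as np

def pacemaker_estimate(true_seconds, rate_hz=10.0, rate_jitter=0.1, seed=0):
    """Toy pacemaker-accumulator: an internal clock emits ticks at a noisy
    rate; duration is read out by counting ticks. All parameter values here
    are illustrative, not drawn from any published model."""
    rng = np.random.default_rng(seed)
    # The clock's rate fluctuates from trial to trial
    noisy_rate = rate_hz * rng.normal(1.0, rate_jitter)
    ticks = rng.poisson(noisy_rate * true_seconds)  # accumulated tick count
    return ticks / rate_hz                          # read out in seconds

# Estimates are roughly unbiased on average, but noisy trial to trial
estimates = [pacemaker_estimate(10.0, seed=s) for s in range(200)]
```

The point of the sketch is that everything hinges on the hypothetical tick generator – exactly the component for which, as noted above, clear neuronal evidence has been lacking.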

An alternative view, and one with substantial psychological pedigree, is that time perception is driven by changes in other perceptual modalities. These modalities include vision and hearing, and possibly also internal modalities like interoception (the sense of the body ‘from within’). This is the view we set out to test in this new study, initiated by Warrick Roseboom here at the Sackler Centre, and David Bhowmik at Imperial College London, as part of the recently finished EU H2020 project TIMESTORM.

*

Their idea was that one specific aspect of time perception – duration estimation – is based on the rate of accumulation of salient events in other perceptual modalities. More salient changes, longer estimated durations. Fewer salient changes, shorter durations. They set out to test this idea using a neural network model of visual object classification modified to generate estimates of salient changes when exposed to natural videos of varying lengths (Figure 1).


Figure 1. Experiment design. Both human volunteers (a, with eye tracking) and a pretrained object classification neural network (b) view a series of natural videos of different lengths (c), recorded in different environments (d). Activity in the classification networks is analysed for frame-to-frame ‘salient changes’ and records of salient changes are used to train estimates of duration – based on the physical duration of the video. These estimates are then compared with human reports. We also compare networks trained on gaze-constrained video input versus ‘full frame’ video input.

We first collected several hundred videos of five different environments and chopped them into varying lengths from 1 sec to ~1 min. The environments were quiet office scenes, café scenes, busy city scenes, outdoor countryside scenes, and scenes from the campus of Sussex University. We then showed the videos to human participants, who rated their apparent durations. We also collected eye tracking data while they viewed the videos. All in all we obtained over 4,000 duration ratings.

The behavioural data showed that people could do the task, and that – as expected – they underestimated long durations and overestimated short durations (Figure 2a). This ‘regression to the mean’ effect is known as Vierordt’s law and is very well established in the time perception literature. Our human volunteers also showed biases according to the video content, rating busy (e.g., city) scenes as lasting longer than non-busy (e.g., office) scenes of the same physical duration. This is just as expected, if duration estimation is based on accumulation of salient perceptual changes.

For the computational part, we used AlexNet, a pretrained deep convolutional neural network (DCNN) which has excellent object classification performance across 1,000 classes of object. We exposed AlexNet to each video, frame by frame. For each frame we examined activity in four separate layers of the network and compared it to the activity elicited by the previous frame. If the difference exceeded an adaptive threshold, we counted a ‘salient event’ and accumulated a unit of subjective time at that level. Finally, we used a simple machine learning tool (a support vector machine) to convert the record of salient events into an estimate of duration in seconds, in order to compare the model with human reports.  There are two important things to note here. The first is that the system was trained on the physical duration of the videos, not on the human estimates (apparent durations). The second is that there is no reliance on any internal clock or pacemaker at all (the frame rate is arbitrary – changing it doesn’t make any difference).
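In outline – and only in outline: the actual model uses AlexNet layer activations, a particular adaptive threshold, and support vector regression – the salient-event accumulation step can be sketched as follows. Here `features` is a generic stand-in for per-frame network activations, and the `decay` and `scale` parameters are invented for illustration, not the paper’s values:

```python
import numpy as np

def count_salient_events(features, decay=0.9, scale=1.5):
    """Count 'salient events': frames where the change in network features
    exceeds an adaptive threshold. `features` is (n_frames, n_units).
    Parameter values are illustrative only."""
    events = 0
    threshold = None
    prev = features[0]
    for frame in features[1:]:
        diff = np.linalg.norm(frame - prev)   # frame-to-frame change
        if threshold is None:
            threshold = diff                   # initialise on the first step
        if diff > threshold:
            events += 1                        # register a salient change
            threshold = scale * diff           # raise the threshold after an event
        else:
            # otherwise relax the threshold toward the recent change level
            threshold = decay * threshold + (1 - decay) * diff
        prev = frame
    return events
```

In the full model an accumulator along these lines runs over several network layers at once, and a final regression stage (support vector regression in the paper) maps the resulting event counts onto durations in seconds – which is where the training on physical video durations comes in.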


Fig 2. Main results. Human volunteers can do the task and show characteristic biases (a).  When the model is trained on ‘full-frame’ data it can also do the task, but the biases are even more severe (b). There is a much closer match to human data when the model input is constrained by human gaze data (c), but not when the gaze locations are drawn from different trials (d).

There were two key tests of the model.  Was it able to perform the task?  More importantly, did it reveal the same pattern of biases as shown by humans?

Figure 2(b) shows that the model indeed performed the task, classifying longer videos as longer than shorter videos. It also showed the same pattern of biases, though these were more exaggerated than for the human data (a). But – critically – when we constrained the video input to the model by where humans were looking, the match to human performance was incredibly close (c). (Importantly, this match went away if we used gaze locations from a different video, d). We also found that the model displayed a similar pattern of biases by content, rating busy scenes as lasting longer than non-busy scenes – just as our human volunteers did. Additional control experiments, described in the paper, rule out the possibility that these close matches could be achieved simply by changes within the video image itself, or by other trivial dependencies (e.g., on frame rate, or on the support vector regression step).

Altogether, these data show that our clock-free model of time perception, based on the dynamics of perceptual classification, provides a sufficient basis for capturing subjective duration estimation of visual scenes – scenes that vary in their content as well as in their duration. Our model works on a fully end-to-end basis, going all the way from natural video stimuli to duration estimation in seconds.

*

We think this work is important because it comprehensively illustrates an empirically adequate alternative to ‘pacemaker’ models of time perception.

Pacemaker models are undoubtedly intuitive and influential, but they raise the spectre of what Daniel Dennett has called the ‘fallacy of double transduction’. This is the false idea that perceptual systems somehow need to re-instantiate a perceived property inside the head in order for perception to work. Thus perceived redness might require something red-in-the-head, and perceived music might need a little band-in-the-head, together with a complicated system of intracranial microphones. Naturally no-one would explicitly sign up to this kind of theory, but it sometimes creeps unannounced into theories that rely too heavily on representations of one kind or another. And it seems that proposing a ‘clock in the head’ for time perception provides a prime example of an implicit double transduction. Our model neatly avoids the fallacy, and as we say in our Conclusion:

“That our system produces human-like time estimates based on only natural video inputs, without any appeal to a pacemaker or clock-like mechanism, represents a substantial advance in building artificial systems with human-like temporal cognition, and presents a fresh opportunity to understand human perception and experience of time.” (p.7).

We’re now extending this line of work by obtaining neuroimaging (fMRI) data during the same task, so that we can compare the computational model activity against brain activity in human observers (with Maxine Sherman). We’ve also recorded a whole array of physiological signatures – such as heart-rate and eye-blink data – to see whether we can find any reliable physiological influences on duration estimation in this task.  We can’t – and the preprint, with Marta Suarez-Pinilla – is here.

*

Major credit for this study to Warrick Roseboom who led the whole thing, with the able assistance of Zafeirios Fountas and Kyriacos Nikiforou with the modelling. Major credit also to David Bhowmik who was heavily involved in the conception and early stages of the project, and also to Murray Shanahan who provided very helpful oversight. Thanks also to the EU H2020 TIMESTORM project which supported this project from start to finish. As always, I’d also like to thank the Dr. Mortimer and Theresa Sackler Foundation, and the Canadian Institute for Advanced Research, Azrieli Programme in Brain, Mind, and Consciousness, for their support.

*

Roseboom, W., Fountas, Z., Nikiforou, K., Bhowmik, D., Shanahan, M.P., and Seth, A.K. (2019). Activity in perceptual classification networks as a basis for human subjective time perception. Nature Communications. 10:269.

 

Be careful what you measure: Comparing measures of integrated information


Our new paper on ‘measuring integrated information’ is out now, open access, in the journal Entropy. It’s part of a special issue dedicated to integrated information theory.

In consciousness research, ‘integrated information theory’, or IIT, has come to occupy a highly influential and rather controversial position. Acclaimed by some as the most important development in consciousness science so far, critiqued by others as too mathematically abstruse and empirically untestable, IIT is by turns both fascinating and frustrating. Certainly, a key challenge for IIT is to develop measures of ‘integrated information’ that can be usefully applied to actual data. These measures should capture, in empirically interesting and theoretically profound ways, the extent to which ‘a system generates more information than the sum of its parts’. Such measures are also of interest in many domains beyond consciousness science, for example in physics and engineering, where notions of ‘dynamical complexity’ are of more general importance.

Adam Barrett and I have been working towards this challenge for many years, both through approximations of the measure Φ (‘phi’, central to the various iterations of IIT) and through alternative measures like ‘causal density’. Alongside new work from other groups, there now exists a range of measures of integrated information – yet so far there has been no systematic comparison of how they perform on non-trivial systems.

This is what we provide in our new paper, led by Adam along with Pedro Mediano from Imperial College London.

*

We describe, using a uniform notation, six different candidate measures of integrated information (among which we count the related measure of ‘causal density’). We set out the intuitions behind each, and compare their properties across a series of criteria. We then explore how they behave on a variety of network models, some very simple, others a little bit more complex.
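To give a flavour of what such measures look like in practice, here is a minimal sketch of a whole-minus-sum style measure for a stationary two-node autoregressive system – the simplest kind of model we analyse. This is a simplified illustration only, not one of the six definitions in the paper: in particular it uses the atomic partition throughout and skips any search over partitions. It compares the time-lagged mutual information of the whole system with the sum of the per-variable time-lagged mutual informations, using the analytic Gaussian formulas:

```python
import numpy as np

def gaussian_mi(cov_x, cov_y, cov_joint):
    # Mutual information (in nats) between jointly Gaussian X and Y
    return 0.5 * np.log(np.linalg.det(cov_x) * np.linalg.det(cov_y)
                        / np.linalg.det(cov_joint))

def phi_whole_minus_sum(A, noise_cov, iters=1000):
    """Whole-minus-sum 'integrated information' of a stationary Gaussian
    AR(1) process x_{t+1} = A x_t + eps, eps ~ N(0, noise_cov).
    Illustrative only: atomic partition, no partition search."""
    n = A.shape[0]
    # Stationary covariance: fixed point of Sigma = A Sigma A^T + noise_cov
    sigma = np.eye(n)
    for _ in range(iters):
        sigma = A @ sigma @ A.T + noise_cov
    cross = sigma @ A.T                          # Cov(x_t, x_{t+1})
    joint = np.block([[sigma, cross], [cross.T, sigma]])
    mi_whole = gaussian_mi(sigma, sigma, joint)  # lagged MI of the whole
    mi_parts = 0.0                               # sum over single variables
    for i in range(n):
        s, c = sigma[i, i], cross[i, i]
        mi_parts += gaussian_mi(np.array([[s]]), np.array([[s]]),
                                np.array([[s, c], [c, s]]))
    return mi_whole - mi_parts

# A coupled 2-node system versus an uncoupled control (which gives ~0)
coupled = phi_whole_minus_sum(np.array([[0.4, 0.2], [0.2, 0.4]]), np.eye(2))
uncoupled = phi_whole_minus_sum(np.diag([0.4, 0.4]), np.eye(2))
```

Measures of this simple whole-minus-sum type are known to behave pathologically in some regimes – for instance they can go negative for strongly redundant systems – which is one reason the paper compares a range of more refined candidates.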

The most striking finding is that the measures all behave very differently – no two measures show consistent agreement across all our analyses. Here’s an example:


Diverse behavior of measures of integrated information. The six measures (plus two control measures) are shown in terms of their behavior on a simple 2-node network animated by autoregressive dynamics.

At first glance this seems worrying for IIT since, ideally, one would want conceptually similar measures to behave in similar ways when applied to empirical test-cases. Indeed, it is worrying if existing measures are used uncritically. However, by rigorously comparing these measures we are able to identify those which better reflect the underlying intuitions of ‘integrated information’, which we believe will be of some help as these measures continue to be developed and refined.

Integrated information, along with related notions of dynamical complexity and emergence, is likely to be an important pillar of our emerging understanding of complex dynamics in all sorts of situations – in consciousness research, in neuroscience more generally, and beyond biology altogether. Our new paper provides a firm foundation for the future development of this critical line of research.

*

One important caveat is necessary. We focus on measures that are, by construction, applicable to the empirical, or spontaneous, statistically stationary distribution of a system’s dynamics. This means we depart, by necessity, from the supposedly more fundamental measures of integrated information that feature in the most recent iterations of IIT. These recent versions of the theory appeal to the so-called ‘maximum entropy’ distribution since they are more interested in characterizing the ‘cause-effect structure’ of a system than in saying things about its dynamics. This means we should be very cautious about taking our results to apply to current versions of IIT. But, in recognizing this, we also return to where we started in this post. A major issue for the more recent (and supposedly more fundamental) versions of IIT is that they are extremely challenging to operationalize and therefore to put to an empirical test. Our work on integrated information departs from ‘fundamental’ IIT precisely because we prioritise empirical applicability. This, we think, is a feature, not a bug.

*

All credit for this study to Pedro Mediano and Adam Barrett, who did all the work. As always, I’d like to thank the Dr. Mortimer and Theresa Sackler Foundation, and the Canadian Institute for Advanced Research, Azrieli Programme in Brain, Mind, and Consciousness, for their support. The paper was published in Entropy on Christmas Day, which may explain why some of you might’ve missed it!  But it did make the cover, which is nice.

*

Mediano, P.A.M., Seth, A.K., and Barrett, A.B. (2019). Measuring integrated information: Comparison of candidate measures in theory and in simulation. Entropy, 21:17.

Can we figure out the brain’s wiring diagram?


The human brain, it is often said, is the most complex object in the known universe. Counting all the connections among its roughly 90 billion neurons, at the rate of one each second, would take about 3 million years – and just counting these connections says nothing about their intricate patterns of connectivity. A new study, published this week in Proceedings of the National Academy of Sciences USA, shows that mapping out these patterns is likely to be much more difficult than previously thought — but also shows what we need to do, to succeed.

Characterizing the detailed point-to-point connectivity of the brain is increasingly recognized as a key objective for neuroscience. Many even think that without knowing the ‘connectome’ – the brain’s wiring diagram – we will never understand how its electrochemical alchemy gives rise to our thoughts, actions, perceptions, beliefs, and ultimately to our consciousness. There is a good precedent for thinking along these lines. Biology has been galvanized by sequencing of the genome (of humans and of other species), and genetic medicine is gathering pace as whole-genome sequencing becomes fast and cheap enough to be available to the many, not just the few. Big-science big-money projects like the Human Genome Project were critical to these developments. Similar efforts in brain science – like the Human Connectome Project in the US and the Human Brain Project in Europe – are now receiving vast amounts of funding (though not without criticism, especially in the European case). The hope is that the genetic revolution can be replicated in neuroscience, delivering step changes in our understanding of the brain and in our ability to treat neurological and psychiatric disorders.

Mapping the networks of the human brain relies on non-invasive neuroimaging methods that can be applied without risk to living people. These methods almost exclusively depend on ‘diffusion magnetic resonance imaging (dMRI) tractography’. This technology measures, for each location (or ‘voxel’) in the brain, the direction in which water is best able to diffuse. Taking advantage of the fact that water diffuses more easily along the fibre bundles connecting different brain regions, than across them, dMRI tractography has been able to generate accurate, informative, and surprisingly beautiful pictures of the major superhighways in the brain.

Diffusion MRI of the human brain.  Source: Human Connectome Project.


But identifying these neuronal superhighways is only a step towards the connectome. Think of a road atlas: knowing only about motorways may tell you how cities are connected, but it’s not going to tell you how to get from one particular house to another. The assumption in neuroscience has been that as brain scanning improves in resolution and as tracking algorithms gain sophistication, dMRI tractography will be able to reveal the point-to-point long-range anatomical connectivity needed to construct the full connectome.

In a study published this week we challenge this assumption, showing that basic features of brain anatomy pose severe obstacles to measuring cortical connectivity using dMRI. The study, a collaboration between the University of Sussex in the UK and the National Institutes of Health (NIH) in the US, applied dMRI tractography to ultra-high resolution dMRI data obtained from extensive scanning of the macaque monkey brain – data of much higher quality than can be presently obtained from human studies. Our analysis, led by Profs. Frank Ye and David Leopold of NIH and Ph.D. student Colin Reveley of Sussex, took a large number of starting points (‘seed voxels’) in the brain, and investigated which other parts of the brain could be reached using dMRI tractography.

The result: roughly half of the brain could not be reached, meaning that even our best methods for mapping the connectome aren’t up to the job. What’s more, by looking carefully at the actual brain tissue where tractography failed, we were able to figure out why. Lying just beneath many of the deep valleys in the brain (the ‘sulci’ – but in some other places too), are dense weaves of neuronal fibres (‘white matter’) running largely parallel to the cortical surface. The existence of these ‘superficial white matter fibre systems’, as we call them, prevents the tractography algorithms from detecting where small tributaries leave the main neuronal superhighways, cross into the cortical grey matter, and reach their destinations. Back to the roads: imagine that small minor roads occasionally leave the main motorways, which are flanked by other major roads busy with heavy traffic. If we tried to construct a detailed road atlas by measuring the flow of vehicles, we might well miss these small but critical branching points.

This image shows, on a colour scale, the 'reachability' of different parts of the brain by diffusion tractography.


Identifying the connectome remains a central objective for neuroscience, and non-invasive brain imaging – especially dMRI – is a powerful technology that is improving all the time. But a comprehensive and accurate map of brain connectivity is going to require more than simply ramping up scanning resolution and computational oomph, a message that mega-budget neuroscience might usefully heed. This is not bad news for brain research. Solving a problem always requires fully understanding what the problem is, and our findings open new opportunities and objectives for studies of brain connectivity. Still, it goes to show that the most complex object in the universe is not quite ready to give up all its secrets.


Colin Reveley, Anil K. Seth, Carlo Pierpaoli, Afonso C. Silva, David Yu, Richard C. Saunders, David A. Leopold*, and Frank Q. Ye (2015). Superficial white-matter fiber systems impede detection of long-range cortical connections in diffusion MR tractography. Proc. Natl. Acad. Sci. USA. doi:10.1073/pnas.1418198112

*David A. Leopold is the corresponding author.

Open your MIND

Open MIND is a brand new collection of original research publications on the mind, brain, and consciousness. It is now freely available online. The collection contains altogether 118 articles from 90 senior and junior researchers, in the always-revealing format of target articles, commentaries, and responses.

This innovative project is the brainchild of Thomas Metzinger and Jennifer Windt, of the MIND group of the Johannes Gutenberg University in Mainz, Germany (Windt has since moved to Monash University in Melbourne). The MIND group was set up by Metzinger in 2003 to catalyse the development of young German philosophers by engaging them with the latest developments in philosophy of mind, cognitive science, and neuroscience. Open MIND celebrates the 10th anniversary of the MIND group, in a way that is so much more valuable to the academic community than ‘just another meeting’ with its quick-burn excitement and massive carbon footprint. Editors Metzinger and Windt explain:

“With this collection, we wanted to make a substantial and innovative contribution that will have a major and sustained impact on the international debate on the mind and the brain. But we also wanted to create an electronic resource that could also be used by less privileged students and researchers in countries such as India, China, or Brazil for years to come … The title ‘Open MIND’ stands for our continuous search for a renewed form of academic philosophy that is concerned with intellectual rigor, takes the results of empirical research seriously, and at the same time remains sensitive to ethical and social issues.”

As a senior member of the MIND group, I was lucky enough to contribute a target article, which was commented on by Wanja Wiese, one of the many talented graduate students working with Metzinger and a junior MIND group member. My paper marries concepts in cybernetics and predictive control with the increasingly powerful perspective of ‘predictive processing’ or the Bayesian brain, with a focus on interoception and embodiment. I’ll summarize the main points in a different post, but you can go straight to the target paper, Wanja’s commentary, and my response.

Open MIND is a unique resource in many ways. The Editors were determined to maximize its impact, so, unlike in many otherwise similar projects, the original target papers have not been circulated prior to launch. This means there is a great deal of highly original material now available to be discovered. The entire project was compressed into about 10 months from submission of initial drafts, to publication this week of the complete collection. This means the original content is completely up-to-date. Also, Open MIND shows how excellent scientific publication can sidestep the main publishing houses, given the highly developed resources now available, coupled of course with extreme dedication and hard work. The collection was assembled, rigorously reviewed, edited, and produced entirely in-house – a remarkable achievement.

Thomas Metzinger with the Open MIND student team


Above all Open MIND opened a world of opportunity for its junior members, the graduate students and postdocs who were involved in every stage of the project: soliciting and reviewing papers, editing, preparing commentaries, and organizing the final collection. As Metzinger and Windt say:

“The whole publication project is itself an attempt to develop a new format for promoting junior researchers, for developing their academic skills, and for creating a new type of interaction between senior and junior group members.”

The results of Open MIND are truly impressive and will undoubtedly make a lasting contribution to the philosophy of mind, especially in its most powerful multidisciplinary and empirically grounded forms.

Take a look, and open your mind too.

Open MIND contributors: Adrian John Tetteh Alsmith, Michael L. Anderson, Margherita Arcangeli, Andreas Bartels, Tim Bayne, David H. Baßler, Christian Beyer, Ned Block, Hannes Boelsen, Amanda Brovold, Anne-Sophie Brüggen, Paul M. Churchland, Andy Clark, Carl F. Craver, Holk Cruse, Valentina Cuccio, Brian Day, Daniel C. Dennett, Jérôme Dokic, Martin Dresler, Andrea R. Dreßing, Chris Eliasmith, Maximilian H. Engel, Kathinka Evers, Regina Fabry, Sascha Fink, Vittorio Gallese, Philip Gerrans, Ramiro Glauer, Verena Gottschling, Rick Grush, Aaron Gutknecht, Dominic Harkness, Oliver J. Haug, John-Dylan Haynes, Heiko Hecht, Daniela Hill, John Allan Hobson, Jakob Hohwy, Pierre Jacob, J. Scott Jordan, Marius Jung, Anne-Kathrin Koch, Axel Kohler, Miriam Kyselo, Lana Kuhle, Victor A. Lamme, Bigna Lenggenhager, Caleb Liang, Ying-Tung Lin, Christophe Lopez, Michael Madary, Denis C. Martin, Mark May, Lucia Melloni, Richard Menary, Aleksandra Mroczko-Wąsowicz, Saskia K. Nagel, Albert Newen, Valdas Noreika, Alva Noë, Gerard O’Brien, Elisabeth Pacherie, Anita Pacholik-Żuromska, Christian Pfeiffer, Iuliia Pliushch, Ulrike Pompe-Alama, Jesse J. Prinz, Joëlle Proust, Lisa Quadt, Antti Revonsuo, Adina L. Roskies, Malte Schilling, Stephan Schleim, Tobias Schlicht, Jonathan Schooler, Caspar M. Schwiedrzik, Anil Seth, Wolf Singer, Evan Thompson, Jarno Tuominen, Katja Valli, Ursula Voss, Wanja Wiese, Yann F. Wilhelm, Kenneth Williford, Jennifer M. Windt.


Open MIND press release.
The cybernetic Bayesian brain: from interoceptive inference to sensorimotor contingencies
Perceptual presence in the Kuhnian-Popperian Bayesian brain
Inference to the best prediction

Training synaesthesia: How to see things differently in half-an-hour a day

syn_brain_phillips
Image courtesy of Phil Wheeler Illustrations

Can you learn to see the world differently? Some people already do. People with synaesthesia experience the world very differently indeed, in a way that seems linked to creativity, and which can shed light on some of the deepest mysteries of consciousness. In a paper published in Scientific Reports, we describe new evidence suggesting that non-synaesthetes can be trained to experience the world much like natural synaesthetes. Our results have important implications for understanding individual differences in conscious experiences, and they extend what we know about the flexibility (‘plasticity’) of perception.

Synaesthesia means that an experience of one kind (like seeing a letter) consistently and automatically evokes an experience of another kind (like seeing a colour), when the normal kind of sensory stimulation for the additional experience (the colour) isn’t there. This example describes grapheme-colour synaesthesia, but this is just one among many fascinating varieties. Other synaesthetes experience numbers as having particular spatial relationships (spatial form synaesthesia, probably the most common of all). And there are other more unusual varieties like mirror-touch synaesthesia, where people experience touch on their own bodies when they see someone else being touched, and taste-shape synaesthesia, where triangles might taste sharp, and ellipses bitter.

The richly associative nature of synaesthesia, and the biographies of famous case studies like Vladimir Nabokov and Wassily Kandinsky (or, as the Daily Wail preferred: Lady Gaga and Pharrell Williams), have fuelled its association with creativity and intelligence. Yet the condition is remarkably common, with recent estimates suggesting about 1 in 23 people have some form of synaesthesia. But how does it come about? Is it in your genes, or is it something you can learn?

kandinsky
It is widely believed that Kandinsky was synaesthetic. For instance he said: “Colour is the keyboard, the eyes are the harmonies, the soul is the piano with many strings. The artist is the hand that plays, touching one key or another, to cause vibrations in the soul”

As with most biological traits the truth is: a bit of both. But this still leaves open the question of whether being synaesthetic is something that can be learnt, even as an adult.

There is a rather long history of attempts to train people to be synaesthetic. Perhaps the earliest example was by E.L. Kelly who in 1934 published a paper with the title: An experimental attempt to produce artificial chromaesthesia by the technique of the conditioned response. While this attempt failed (the paper says it is “a report of purely negative experimental findings”) things have now moved on.

More recent attempts, for instance the excellent work of Olympia Colizoli and colleagues in Amsterdam, have tried to mimic (grapheme-colour) synaesthesia by having people read books in which some of the letters are always coloured in with particular colours. They found that it was possible to train people to display some of the characteristics of synaesthesia, like being slower to name coloured letters when they were presented in a colour conflicting with the training (the ‘synaesthetic Stroop’ effect). But crucially, until now no study has found that training could lead to people actually reporting synaesthesia-like conscious experiences.
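To make the 'synaesthetic Stroop' measure concrete, here is a toy sketch (illustrative only, not the analysis code from any of these studies, and the reaction times are made up) of how the effect is typically quantified: naming times for letters shown in a colour that conflicts with the trained association are compared against congruent trials.

```python
# Toy illustration of quantifying a 'synaesthetic Stroop' effect.
# Reaction times (ms) are invented for the example; in a real study
# these would come from many trials per participant.
from statistics import mean

congruent_rts = [612, 598, 640, 605, 623]    # letter shown in its trained colour
incongruent_rts = [701, 688, 735, 694, 710]  # letter shown in a conflicting colour

# The Stroop effect is the slowdown on incongruent trials.
stroop_effect = mean(incongruent_rts) - mean(congruent_rts)
print(f"Synaesthetic Stroop effect: {stroop_effect:.0f} ms")
```

A positive difference of tens of milliseconds is the usual signature: the trained colour association interferes with naming the conflicting ink colour.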

syn_reading
An extract from the ‘coloured reading’ training material, used in our study, and similar to the material used by Colizoli and colleagues. The text is from James Joyce. Later in training we replaced some of the letters with (appropriately) coloured blocks to make the task even harder.

Our approach was based on brute force. We decided to dramatically increase the length and rigour of the training procedure that our (initially non-synaesthetic) volunteers undertook. Each of them (14 in all) came in to the lab for half-an-hour each day, five days a week, for nine weeks! On each visit they completed a selection of training exercises designed to cement specific associations between letters and colours. Crucially, we adapted the difficulty of the tasks to each volunteer and each training session, and we also gave them financial rewards for good performance. Over the nine-week regime, some of the easier tasks were dropped entirely, and other more difficult tasks were introduced. Our volunteers also had homework to do, like reading the coloured books. Our idea was that the training must always be challenging, in order to have a chance of working.

The results were striking. At the end of the nine-week exercise, our dedicated volunteers were tested for behavioural signs of synaesthesia, and – crucially – were also asked about their experiences, both inside and outside the lab. Behaviourally they all showed strong similarities with natural-born synaesthetes. This was most striking in measures of ‘consistency’, a test which requires repeated selection of the colour associated with a particular letter, from a palette of millions.

consistency
The consistency test for synaesthesia. This example from David Eagleman’s popular ‘synaesthesia battery’.

Natural-born synaesthetes show very high consistency: the colours they pick (for a given letter) are very close to each other in colour space, across repeated selections. This is important because consistency is very hard to fake. The idea is that synaesthetes can simply match a colour to their experienced ‘concurrent’, whereas non-synaesthetes have to rely on less reliable visual memory, or other strategies.
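The logic of the consistency measure can be sketched in a few lines. This is loosely modelled on the idea behind Eagleman's synaesthesia battery, not its actual scoring code: each letter gets several colour picks, and the score is the summed pairwise distance between those picks in colour space, so lower scores mean tighter, more synaesthete-like consistency. The colour space (RGB) and the example data here are assumptions for illustration.

```python
# Minimal sketch of a per-letter consistency score: sum of pairwise
# Euclidean distances between repeated colour picks (lower = more consistent).
from itertools import combinations

def consistency_score(picks):
    """picks: list of (r, g, b) tuples, each channel in [0, 1]."""
    total = 0.0
    for a, b in combinations(picks, 2):
        total += sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return total

# A synaesthete's three picks for one letter cluster tightly...
tight = [(0.90, 0.10, 0.10), (0.92, 0.12, 0.08), (0.88, 0.11, 0.12)]
# ...while picks made from visual memory alone tend to scatter.
loose = [(0.90, 0.10, 0.10), (0.60, 0.30, 0.20), (0.95, 0.50, 0.05)]

print(consistency_score(tight) < consistency_score(loose))
```

In practice a battery would aggregate such scores across all letters and compare them against a threshold; the point here is just why tightly clustered picks are so hard to fake without a genuine (or trained) concurrent.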

Our trained quasi-synaesthetes passed the consistency test with flying colours (so to speak). They also performed much like natural synaesthetes on a whole range of other behavioural tests, including synaesthetic Stroop, and a ‘synaesthetic conditioning’ task which shows that trained colours can elicit automatic physiological responses, like increases in skin conductance. Most importantly, most (8/14) of our volunteers described colour experiences much like those of natural synaesthetes (only 2 reported no colour phenomenology at all). Strikingly, some of these experiences took place even outside the lab:

“When I was walking into campus I glanced at the University of Sussex sign and the letters were coloured” [according to their trained associations]

Like natural synaesthetes, some of our volunteers seemed to experience the concurrent colour ‘out in the world’ while others experienced the colours more ‘in the head’:

“When I am looking at a letter I see them in the trained colours”

“When I look at the letter ‘p’ … its like the inside of my head is pink”

syn_letters
For grapheme colour synaesthetes, letters evoke specific colour experiences. Most of our trained quasi-synaesthetes reported similar experiences. This image is however quite misleading. Synaesthetes (natural born or not) also see the letters in their actual colour, and they typically know that the synaesthetic colour is not ‘real’. But that’s another story.

These results are very exciting, suggesting for the first time that with sufficient training, people can actually learn to see the world differently. Of course, since they are based on subjective reports about conscious experiences, they are also the hardest to independently verify. There is always the slight worry that our volunteers said what they thought we wanted to hear. Against this worry, we were careful to ensure that none of our volunteers knew the study was about synaesthesia (and debriefing confirmed that none of them had guessed!). Also, similar ‘demand characteristic’ concerns could have affected other synaesthesia training studies, yet none of these led to descriptions of synaesthesia-like experiences.

Our results weren’t just about synaesthesia. A fascinating side effect was that our volunteers registered a dramatic increase in IQ, gaining an average of about 12 IQ points (compared to a control group which didn’t undergo training). We don’t yet know whether this increase was due to the specifically synaesthetic aspects of our regime, or just intensive cognitive training in general. Either way, our findings provide support for the idea that carefully designed cognitive training could enhance normal cognition, or even help remedy cognitive deficits or decline. More research is needed on these important questions.

What happened in the brain as a result of our training? The short answer is: we don’t know, yet. While in this study we didn’t look at the brain, other studies have found changes in the brain after similar kinds of training. This makes sense: changes in behaviour or in perception should be accompanied by neural changes of some kind. At the same time, natural-born synaesthetes appear to have differences both in the structure of their brains, and in their activity patterns. We are now eager to see what kind of neural signatures underlie the outcome of our training paradigm. The hope is that, because our study showed actual changes in perceptual experience, analysis of these signatures will shed new light on the brain basis of consciousness itself.

So, yes, you can learn to see the world differently. To me, the most important aspect of this work is that it emphasizes that each of us inhabits our own distinctive conscious world. It may be tempting to think that while different people – maybe other cultures – have different beliefs and ways of thinking, still we all see the same external reality. But synaesthesia – along with emerging ideas about ‘predictive processing’ – shows that the differences go much deeper. We each inhabit our own personalised universe, albeit one which is partly defined and shaped by other people. So next time you think someone is off in their own little world: they are.


The work described here was led by Daniel Bor and Nicolas Rothen, and is just one part of an energetic inquiry into synaesthesia taking place at Sussex University and the Sackler Centre for Consciousness Science. With Jamie Ward and (recently) Julia Simner also working here, we have a uniquely concentrated expertise in this fascinating area. In other related work I have been interested in why synaesthetic experiences lack a sense of reality and how this gives an important clue about the nature of ‘perceptual presence’. I’ve also been working on the phenomenology of spatial form synaesthesia, and whether synaesthetic experiences can be induced through hypnosis. And an exciting brain imaging study of natural synaesthetes will shortly hit the press! Nicolas Rothen is an authority on the relationship between synaesthesia and memory, and Jamie Ward and Julia Simner have way too many accomplishments in this field to mention. (OK, Jamie has written the most influential review paper in the area – featuring a lot of his own work – and Julia (with Ed Hubbard) has written the leading textbook. That’s not bad to start with.)


Our paper, Adults can be Trained to Acquire Synesthetic Experiences (sorry for US spelling) is published (open access, free!) in Scientific Reports, part of the Nature family. The authors were Daniel Bor, Nicolas Rothen, David Schwartzman, Stephanie Clayton, and Anil K. Seth. There has been quite a lot of media coverage of this work, for instance in the New Scientist and the Daily Fail. Other coverage is summarized here.

Eye Benders: the science of seeing and believing, wins Royal Society prize!

eyebenders_cover

An unexpected post.  I’m very happy to have learnt today that the book Eye Benders has won the 2014 Royal Society Young Person’s Book Prize.  Eye Benders was written by Clive Gifford (main author) and me (consultant).  It was published by Ivy Press, who are also the redoubtable publishers of the so-far-prizeless but nonetheless worthy 30 Second Brain. A follow-up to Eye Benders, Brain Twister, is in the works: More brain, fewer optical illusions, but the same high-quality young-person-neuroscience fare.

The Royal Society says this about the prize: “Each year the Royal Society awards a prize to the best book that communicates science to young people. The prize aims to inspire young people to read about science and promotes the best science writing for the under-14s.”

This year, the shortlist was chosen by Professor James Hough FRS, Dr Rhaana Starling, Mr Michael Heyes, Professor Iain Stewart and Dr Anjana Ahuja. Well done all, good shortlisting.  More importantly, the winner was chosen by groups of young persons themselves.  Here is what some of the 2014 young people had to say about Eye Benders:

Matt, 12 said “Science from a different perspective. Factual and interesting – a spiral of a read!”

Beth, 14 said “It was way, way cool!”

Ethan, 12 said “The illustrations were absolutely amazing”

Joe, 12 said “A great, well written and well thought-out book; the illustrations are clear, detailed and amazing. The front cover is beautiful.”

Felix, 10 said “Eye popping and mind-blowingly fun!”

So there it is. Matt and friends have spoken, and here is a picture of Clive accepting the award in Newcastle (alas I wasn’t there), accompanied by a young person being enthused:

eyebenders_award

Here’s a sneak at what the book looks like, on the inside:

eyebenders_sample

A personal note: I remember well going through the final layouts for Eye Benders, heavily dosed on painkillers in hospital in Barcelona following emergency surgery, while at the same time my father was entering his final weeks back in Oxfordshire. A dark time.  It’s lovely, if bittersweet, to see something like this emerge from it.

Other coverage:

GrrlScientist in The Guardian.
Optical illusion book wins Royal Society prize
Clive shares some of the best Eye Benders illusions online
Royal Society official announcement
University of Sussex press release