Guest blog: Phenomenological control: Response to imaginative suggestion predicts measures of mirror touch synaesthesia, vicarious pain, and the rubber hand illusion

The Rubber Hand Illusion.  Credit: 30 Second Brain (Ivy Press). Edited by Anil Seth

This is a Guest Blog written by Peter Lush, postdoctoral researcher at the Sackler Centre for Consciousness Science, and lead author of this new study. It’s all about our new preprint.

A key challenge for psychological research is how to measure subjective experience. One domain in which this is particularly relevant is for experiences of ‘embodiment’. These experiences show widespread individual variation and can be surprisingly easy to manipulate. The rubber hand illusion, for example, is a famous effect in which a simple procedure generates experiences of ownership over a fake hand. Experience of the illusion can be measured either directly, through subjective reports of illusion experience, or indirectly, by changes in the felt position of the participant’s own hand. Scientists consider these measures to provide insight into the processes by which conscious experiences of embodiment come about.

However, such interpretations overlook the role of trait (i.e., stable individual) differences in the ability to generate experience to meet expectancies, which we call ‘phenomenological control’. If measures of the rubber hand illusion reflect the active generation of expected experience, then existing accounts of this and related effects will be incomplete or incorrect. Our new preprint, on PsyArXiv, reports the results of three large-scale studies (more than 1,000 participants in total) investigating the relationship between the ability to change experience to fit situational demands (phenomenological control) and established measures of embodiment. These results have implications not only for the interpretation of embodiment measures, but also for any research employing measures taken to reflect subjective experience.

Here are the theoretical motivations:

  • Many people are able to generate compelling experiences in response to expectancies arising from imaginative suggestion presented within the context of ‘hypnosis’. Hypnotic responding is voluntary (nobody can be forced to respond) but is experienced as involuntary. A wide range of experiences can be generated. Examples include visual, auditory or gustatory hallucinations, vivid dreams and apparently involuntary movements.
  • The extent to which individuals can control their phenomenology in response to imaginative suggestion is a normally distributed, stable trait, with good test-retest reliability over a 25-year period. Only a relatively small proportion of people (10-15%) are unable to respond successfully to imaginative suggestion. Therefore, the majority of participants in any scientific experiment are likely to have at least some phenomenological control abilities.
  • We know that the hypnotic context (e.g., the presence of a hypnotist or the use of induction procedures) is not required for response to imaginative suggestion.
  • The context of a scientific experiment (e.g., the presence of a scientist and the expectancies generated by participants’ preconceptions of science) may, like the hypnotic context, cause participants to engage in the control of phenomenology to meet their interpretations of the response expected by the experimenter or arising from the experimental procedure (for example, the synchronous brushing which is used to induce the rubber hand illusion may act as an implicit imaginative suggestion).
  • Such responding will be experienced as involuntary by the participant, and will generate convincing reports of changes in subjective experience.
  • Any test procedure in which the expectations of the experimenter are discernible to the participant may therefore reflect phenomenological control rather than the stated theoretical targets of interest.

Note that this proposal differs from common understanding of demand characteristics and experimenter effects, which are generally considered to lead to merely behavioural effects (e.g., social compliance). Subjects engaging in phenomenological control will report genuine experiences.

Hypnosis researchers employ standardised scales to measure response to imaginative suggestion within a hypnotic context. A high score on a hypnotisability scale shows that a participant has the ability to generate and control their phenomenology to meet the expectancies communicated by the ‘hypnotist’ through direct suggestion. Here we employed our Sussex Waterloo Scale of Hypnotisability (SWASH), which consists of ten imaginative suggestions for particular experiences (for example, the touch of a mosquito, a sweet or sour taste, hearing music, and involuntary movement). The most parsimonious theories of hypnotic responding argue that response to hypnotic suggestion involves a voluntary mental or physical act which is experienced as involuntary (e.g., Hilgard, 1977; Spanos, 1986; Dienes & Perner, 2007). For example, a successful response to a suggestion that one’s arm will move of its own accord involves generating the inaccurate phenomenology that a voluntary action is involuntary. Similarly, a suggested experience of hearing music would involve an intentional act of imagination which, again, is experienced as unintentional.

We tested our predictions on three embodiment measures. These effects were chosen because they involve striking changes in experience and therefore have much surface similarity with imaginative suggestion effects.

The rubber hand illusion

Figure 1. A large-scale rubber hand illusion study. We tested 353 participants in total, over the course of one week.    

The rubber hand illusion is perhaps the most well-known of all embodiment effects. To induce the illusion, a visible fake hand and the participant’s concealed real hand are stroked in synchrony, so that the felt touch of the brush on the real hand and the seen touch on the fake hand are closely matched. The level of agreement or disagreement with statements describing illusion experience is recorded on a scale from -3 (indicating strong disagreement) to +3 (indicating strong agreement). Expected effects in the rubber hand illusion may be easy to discern from the induction procedure alone, even when no verbal instructions are given; for example, it may be clear to participants that they are expected to feel the touch of the brush as located on the fake hand positioned in front of them.

We tested 353 participants, measuring both their SWASH hypnotisability score and their performance in the rubber hand illusion (Figure 1). Consistent with our predictions, hypnotisability scores predicted subjective report scores and also proprioceptive drift (a measure of changes in the felt position of the participant’s hand). Figure 2 shows that, on average, experience of both felt touch and ownership in the rubber hand illusion requires the ability to control phenomenology to meet expectancies. The 353 participants have been divided here into four groups by their hypnotisability score (error bars show 95% CIs). The figure shows agreement scores for the standard illusion statements (used by Botvinick & Cohen, 1998, and in many subsequent studies). Statements S1 (“It seemed as if I were feeling the touch of the paintbrush in the location where I saw the rubber hand touched”) and S2 (“It seemed as though the touch I felt was caused by the paintbrush touching the rubber hand”) describe experiences of felt touch, while statement S3 (“I felt as if the rubber hand were my hand”) describes an experience of ownership. The least hypnotisable quarter of participants did not on average agree with statements S2 and S3, but this group did agree with statement S1. This is probably attributable to ambiguous phrasing, as the statement can be interpreted as asking about participants’ mundane experience of touch on their own hand (see Botvinick and Cohen, 1998, in which all participants reported maximum agreement with this statement). In any case, factor analysis suggests that agreement with this statement does not reflect experience of embodiment (Longo et al., 2007).

Figure 2. Mean subjective report scores in participants grouped by hypnotisability score (lowest scores on the left side of the chart).
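For readers who want a concrete feel for this kind of group analysis, here is a minimal sketch in Python. It is illustrative only: the data file and column names are hypothetical, and the paper’s statistical treatment is more careful than this.

```python
# Minimal sketch of the group analysis behind Figure 2 (illustrative only).
# Assumes a hypothetical CSV with columns 'swash' (hypnotisability score)
# and 'S1', 'S2', 'S3' (illusion agreement ratings on the -3..+3 scale).
import pandas as pd
from scipy import stats

df = pd.read_csv("rhi_data.csv")  # hypothetical file name

# Divide the sample into four equal-sized groups by hypnotisability score
df["group"] = pd.qcut(df["swash"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])

for group, sub in df.groupby("group"):
    for stmt in ["S1", "S2", "S3"]:
        mean = sub[stmt].mean()
        # 95% CI from the t distribution (one simple choice among several)
        lo, hi = stats.t.interval(0.95, len(sub) - 1,
                                  loc=mean, scale=stats.sem(sub[stmt]))
        print(f"{group} {stmt}: mean {mean:+.2f}, 95% CI [{lo:+.2f}, {hi:+.2f}]")
```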

In summary, common direct and indirect measures of the rubber hand illusion are substantially related to hypnotisability and on average the illusion does not occur in people unable to respond to hypnotic suggestion.

It’s worth noting that the rubber hand illusion has also been associated with common physiological measures like skin conductance response (SCR), histamine reactivity and body temperature. One might think these measures would be immune to phenomenological control.  However, these physiological properties are known to be susceptible to imaginative suggestion (SCR; histamine reactivity; temperature). We therefore predict similar relationships between hypnotisability and these measures.

Mirror touch synaesthesia and vicarious pain

Mirror touch and vicarious pain are experiences of touch or pain in response to the witnessed touch or pain of another. In a research setting, these effects can be studied using videos showing painful stimuli, or showing touch to humans and inanimate objects. The primary measure is the proportion of videos which generate a felt touch or an experience of pain.

Again, as predicted, hypnotisability score predicted both vicarious pain and mirror touch response. Figure 3 shows the mean number of vicarious pain experiences reported for videos showing a range of apparently painful events (e.g., injections and sporting injuries) in 404 participants. A clear relationship between hypnotisability and vicarious pain response can be seen.

Figure 3.  Mean number of vicarious pain responses in participants grouped by hypnotisability score.

Figure 4 shows the results for mirror touch synaesthesia. Here, a sample of 154 participants was tested. Mirror touch synaesthetes (defined by response to 9 or more videos) were, on average, highly hypnotisable, with a mean score equal to the cut-off for the top 13% of SWASH scores.

Figure 4. Mean hypnotisability scores in participants grouped by the number of reports of mirror touch experience to video stimuli (no response, 1-8 videos and 9-16 videos).

Conclusions

Measures of three prominent embodiment effects – the rubber hand illusion, mirror touch synaesthesia, and vicarious pain – reflect the ability to generate compelling phenomenology in response to imaginative suggestion. At this stage, we do not know to what extent these effects are attributable to phenomenological control. Further work will be necessary to establish whether or not there are, for example, rubber hand illusion effects which do not require phenomenological control abilities. Note also that, if mirror experiences in everyday life are driven by phenomenological control, experimenter-derived expectancies may have a relatively minimal effect on measures of these experiences in the lab (because participants may respond this way to any similar visual stimulus, away from the scientific context).

Note that, because imaginative suggestion can produce changes in brain activity consistent with the suggestion given (e.g., activity in visual brain areas for suggested visual hallucination), phenomenological control may also account for the results of neuroimaging studies of these embodiment effects.

Our results demonstrate that the engagement of phenomenological control abilities to fulfil expectancies can occur within a scientific context and that such abilities may account for a range of subjective embodiment effects. Response to imaginative suggestion does not require a hypnotic induction, or even any hypnotic context. All that is required is the ability to control phenomenology and a context in which phenomenological control can be (unconsciously) interpreted as appropriate. Despite this, the majority of research into imaginative suggestion has been conducted within a hypnotic context, and as a result the possibility that scientific experiments present another such context in which phenomenological control abilities are engaged has been overlooked.

We are now developing a phenomenological control scale with which to investigate phenomenological control in many other effects across psychological science which could be influenced by the participant’s subjective experience. The results we present in this paper therefore suggest that empirical results across a broad range of fields in behavioural science may need to be reappraised.

These studies focus on the role of phenomenological control in existing effects. However, phenomenological control should not be seen merely as confounding existing theories and presenting problems for psychological science. Trait differences in the ability to influence perception by top-down influences are a valuable target for scientific investigations of conscious experience in their own right.

*

Lush, P.*, Botan, V., Scott, R. B., Seth, A. K., Ward, J., & Dienes, Z. (2019, April 16). Phenomenological control: response to imaginative suggestion predicts measures of mirror touch synaesthesia, vicarious pain and the rubber hand illusion. PsyArXiv. https://doi.org/10.31234/osf.io/82jav

This research was supported by the Dr Mortimer and Theresa Sackler Foundation, and the Canadian Institute for Advanced Research (CIFAR) Azrieli Programme on Brain, Mind, and Consciousness.

*Corresponding author, and author of this guest blog.

 

 

Taking back control

I was born in 1972, a year before the UK joined the European Economic Community. As I grew up, in rural South Oxfordshire, the idea of being part of a world beyond England helped keep me going, helped me believe things would get better. Half Indian and half Yorkshire, with a name that even I couldn’t pronounce properly, I looked forward to being part of a world with all the beauty and diversity of Europe, a world in which the threat of war and nationalism was receding not growing, war which had taken my grandfather before I knew him, before he knew me.

In 2016, the day after the referendum, I was giving a talk at a New Scientist event in London. I was up first, and began with some words about the sadness I felt about the result. Sadness about the UK turning away from the world with all its opportunities and challenges, and sadness about the national self-harm caused by the lies, greed, complacency, and desperation for power that had brought us to this point, to 52% vs 48%.

Now, despite myself, I am angry.

Apparently, Theresa May is preparing to bring her appalling deal back to parliament for a third ‘meaningful’ vote, running down the clock until there are no options left on the table, until there is no table. The deal on offer has not changed. To call the votes ‘meaningful’ is therefore the most moronic oxymoron I’ve ever heard. There is nothing meaningful in repeating a vote you lose (and lose by massive margins) until you get the result you want.

Of course, this is precisely the logic by which we are told it is unacceptable to go back to ask the people what they think. The people, we are told, have given their instructions, and we are compelled to carry them out whatever the cost. But while May’s ‘deal’ has not changed, the consequences of leaving the EU are now entirely and obviously different from the lies and false promises that people voted on during the referendum itself, in a campaign that is increasingly being revealed as riven with corruption and driven by dubious foreign and economic interests. (And yes, we need our own Mueller.)

To refuse a People’s Vote on the basis of it being a threat to democracy is hypocrisy of the lowest form.

There are many other reasons for sadness and anger. The shapeshifting of our politicians as they jockey for personal advantage amid their self-generated chaos. The airtime given to the far-right headbangers stirring up regressive nationalistic passions to deepen the divisions that are already tearing our country apart. The pandering to the Ulster Unionists and the threat to peace in Northern Ireland. The blatant lies coming from the government as they pull votes, add votes, trot out the same garbage about ‘taking back control’, attempt shameless bribes to get their way, and plough on to the cliff edge regardless. The absence of any effective opposition to what is the most disastrous leadership I or anyone can remember. Cameron and his mates fleeing the scene to chillax in Italy or Portugal or wherever. The disenfranchisement of the young, the back-burnering of all the non-Brexit government business that might actually matter, and all the time and money and hopes and dreams already burnt to ashes on the Brexit trash-fire.

It’s time for all this to stop.

Our society was and is unequal and the dominant neo-liberal complacency needed shaking up. But this is not the way to do it. We are more divided than ever, half of us sold lies and promises of an impossible future, the other half increasingly disconnected from and despairing of the direction we are headed. The EU, while not perfect, cannot be blamed. We brought this on ourselves. And now it’s clear that parliament, once something to be proud of, cannot form a majority for anything – at least not without May’s deadline-day gun-to-the-head and the prospective horror show of her deal rising like a zombie until it finally staggers over the line. This would not be a triumph of diplomacy and democracy. It would be a travesty.

It’s time to go back to the people. Let them take back control.

Time perception without clocks

Salvador Dalí, The Persistence of Memory, 1931

Our new paper, led by Warrick Roseboom, is out now (open access) in Nature Communications. It’s about time.

Around sixteen hundred years ago, though who knows how long exactly, Saint Augustine complained “What then is time? If no-one asks me, I know; if I wish to explain to one who asks, I know not.”

The nature of time is endlessly mysterious, in philosophy, in physics, and also in neuroscience. We experience the flow of time, we perceive events as being ordered in time and as having particular durations, yet there are no time sensors in the brain. The eye has rod and cone cells to detect light, the ear has hair cells to detect sound, but there are no dedicated ‘time receptors’ to be found anywhere. How, then, does the brain create the subjective sense of time passing?

Most neuroscientific models of time perception rely on some kind of internal timekeeper or pacemaker, a putative ‘clock in the head’ against which the flow of events can be measured. But despite considerable research, clear evidence for these neuronal pacemakers has been rather lacking, especially when it comes to psychologically relevant timescales of a few seconds to minutes.

An alternative view, and one with substantial psychological pedigree, is that time perception is driven by changes in other perceptual modalities. These modalities include vision and hearing, and possibly also internal modalities like interoception (the sense of the body ‘from within’). This is the view we set out to test in this new study, initiated by Warrick Roseboom here at the Sackler Centre, and Dave Bhowmik at Imperial College London, as part of the recently finished EU H2020 project TIMESTORM.

*

Their idea was that one specific aspect of time perception – duration estimation – is based on the rate of accumulation of salient events in other perceptual modalities. More salient changes, longer estimated durations. Fewer salient changes, shorter durations. They set out to test this idea using a neural network model of visual object classification, modified to generate estimates of salient changes when exposed to natural videos of varying lengths (Figure 1).

Figure 1. Experiment design. Both human volunteers (a, with eye tracking) and a pretrained object classification neural network (b) view a series of natural videos of different lengths (c), recorded in different environments (d). Activity in the classification networks is analysed for frame-to-frame ‘salient changes’ and records of salient changes are used to train estimates of duration – based on the physical duration of the video. These estimates are then compared with human reports. We also compare networks trained on gaze-constrained video input versus ‘full frame’ video input.

We first collected several hundred videos of five different environments and chopped them into varying lengths from 1 sec to ~1 min. The environments were quiet office scenes, café scenes, busy city scenes, outdoor countryside scenes, and scenes from the campus of Sussex University. We then showed the videos to human participants, who rated their apparent durations while we collected eye tracking data. All in all we obtained over 4,000 duration ratings.

The behavioural data showed that people could do the task, and that – as expected – they underestimated long durations and overestimated short durations (Figure 2a). This ‘regression to the mean’ effect is well established in the time perception literature, where it is known as Vierordt’s law. Our human volunteers also showed biases according to the video content, rating busy (e.g., city) scenes as lasting longer than non-busy (e.g., office) scenes of the same physical duration. This is just as expected if duration estimation is based on the accumulation of salient perceptual changes.

For the computational part, we used AlexNet, a pretrained deep convolutional neural network (DCNN) which has excellent object classification performance across 1,000 classes of object. We exposed AlexNet to each video, frame by frame. For each frame we examined activity in four separate layers of the network and compared it to the activity elicited by the previous frame. If the difference exceeded an adaptive threshold, we counted a ‘salient event’ and accumulated a unit of subjective time at that level. Finally, we used a simple machine learning tool (a support vector machine) to convert the record of salient events into an estimate of duration in seconds, in order to compare the model with human reports.  There are two important things to note here. The first is that the system was trained on the physical duration of the videos, not on the human estimates (apparent durations). The second is that there is no reliance on any internal clock or pacemaker at all (the frame rate is arbitrary – changing it doesn’t make any difference).
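To make the pipeline more concrete, here is a heavily simplified sketch of the salient-change accumulator in Python. This is my illustration, not the authors’ code: the layer indices, the distance metric, and the threshold dynamics are simplified stand-ins for the mechanisms detailed in the paper.

```python
# Sketch of a salient-change accumulator over AlexNet activations
# (illustrative only; not the implementation used in the paper).
import torch
import torchvision.models as models

alexnet = models.alexnet(weights="IMAGENET1K_V1").eval()

@torch.no_grad()
def layer_activations(frame):
    """Activations from four stages of AlexNet for one preprocessed frame.

    `frame` has shape (1, 3, 224, 224). The layer indices below are
    illustrative, not necessarily the four layers used in the paper.
    """
    acts, x = [], frame
    for i, layer in enumerate(alexnet.features):
        x = layer(x)
        if i in (2, 5, 9, 12):
            acts.append(x.flatten())
    return acts

@torch.no_grad()
def count_salient_events(frames, decay=0.95, boost=1.5):
    """Accumulate per-layer 'salient events' across a sequence of frames.

    An event is counted whenever the Euclidean distance between successive
    activations exceeds an adaptive threshold; the threshold rises after an
    event and decays otherwise -- a simplified stand-in for the paper's
    adaptive mechanism.
    """
    prev, thresholds, counts = None, None, None
    for frame in frames:
        acts = layer_activations(frame)
        if prev is not None:
            dists = [torch.dist(a, p).item() for a, p in zip(acts, prev)]
            if thresholds is None:
                thresholds = dists[:]        # initialise from first distances
                counts = [0] * len(dists)
            else:
                for k, d in enumerate(dists):
                    if d > thresholds[k]:
                        counts[k] += 1       # one unit of 'subjective time'
                        thresholds[k] *= boost
                    else:
                        thresholds[k] *= decay
        prev = acts
    # The per-layer event counts would then be regressed onto duration in
    # seconds (e.g., with sklearn.svm.SVR), trained on physical durations.
    return counts
```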

Figure 2. Main results. Human volunteers can do the task and show characteristic biases (a). When the model is trained on ‘full-frame’ data it can also do the task, but the biases are even more severe (b). There is a much closer match to human data when the model input is constrained by human gaze data (c), but not when the gaze locations are drawn from different trials (d).

There were two key tests of the model.  Was it able to perform the task?  More importantly, did it reveal the same pattern of biases as shown by humans?

Figure 2(b) shows that the model indeed performed the task, classifying longer videos as longer than shorter videos.  It also showed the same pattern of biases, though these were more exaggerated than for the human data (a).  But – critically – when we constrained the video input to the model by where humans were looking, the match to human performance was incredibly close (c). (Importantly, this match went away if we used gaze locations from a different video, d). We also found that the model displayed a similar pattern of biases by content, rating busy scenes as lasting longer than non-busy scenes – just as our human volunteers did. Additional control experiments, described in the paper, rule out that these close matches could be achieved just by changes within the video image itself, or by other trivial dependencies (e.g., on frame rate, or on the support vector regression step).

Altogether, these data show that our clock-free model of time perception, based on the dynamics of perceptual classification, provides a sufficient basis for capturing subjective duration estimation of visual scenes – scenes that vary in their content as well as in their duration. Our model works on a fully end-to-end basis, going all the way from natural video stimuli to duration estimation in seconds.

*

We think this work is important because it comprehensively illustrates an empirically adequate alternative to ‘pacemaker’ models of time perception.

Pacemaker models are undoubtedly intuitive and influential, but they raise the spectre of what Daniel Dennett has called the ‘fallacy of double transduction’. This is the false idea that perceptual systems somehow need to re-instantiate a perceived property inside the head in order for perception to work. Thus perceived redness might require something red-in-the-head, and perceived music might need a little band-in-the-head, together with a complicated system of intracranial microphones. Naturally no-one would explicitly sign up to this kind of theory, but it sometimes creeps unannounced into theories that rely too heavily on representations of one kind or another. And it seems that proposing a ‘clock in the head’ for time perception provides a prime example of an implicit double transduction. Our model neatly avoids the fallacy, and as we say in our Conclusion:

“That our system produces human-like time estimates based on only natural video inputs, without any appeal to a pacemaker or clock-like mechanism, represents a substantial advance in building artificial systems with human-like temporal cognition, and presents a fresh opportunity to understand human perception and experience of time.” (p.7).

We’re now extending this line of work by obtaining neuroimaging (fMRI) data during the same task, so that we can compare the computational model activity against brain activity in human observers (with Maxine Sherman). We’ve also recorded a whole array of physiological signatures – such as heart-rate and eye-blink data – to see whether we can find any reliable physiological influences on duration estimation in this task. We can’t – and the preprint reporting this, with Marta Suarez-Pinilla, is here.

*

Major credit for this study to Warrick Roseboom who led the whole thing, with the able assistance of Zafeirios Fountas and Kyriacos Nikiforou on the modelling. Major credit also to David Bhowmik who was heavily involved in the conception and early stages of the project, and also to Murray Shanahan who provided very helpful oversight. Thanks also to the EU H2020 TIMESTORM project which supported this work from start to finish. As always, I’d also like to thank the Dr. Mortimer and Theresa Sackler Foundation, and the Canadian Institute for Advanced Research, Azrieli Programme in Brain, Mind, and Consciousness, for their support.

*

Roseboom, W., Fountas, Z., Nikiforou, K., Bhowmik, D., Shanahan, M.P., and Seth, A.K. (2019). Activity in perceptual classification networks as a basis for human subjective time perception. Nature Communications. 10:269.

 

Be careful what you measure: Comparing measures of integrated information

Our new paper on ‘measuring integrated information’ is out now, open access, in the journal Entropy. It’s part of a special issue dedicated to integrated information theory.

In consciousness research, ‘integrated information theory’, or IIT, has come to occupy a highly influential and rather controversial position. Acclaimed by some as the most important development in consciousness science so far, critiqued by others as too mathematically abstruse and empirically untestable, IIT is by turns both fascinating and frustrating. Certainly, a key challenge for IIT is to develop measures of ‘integrated information’ that can be usefully applied to actual data. These measures should capture, in empirically interesting and theoretically profound ways, the extent to which ‘a system generates more information than the sum of its parts’. Such measures are also of interest in many domains beyond consciousness, for example in physics and engineering, where notions of ‘dynamical complexity’ are of more general importance.

Adam Barrett and I have been working towards this challenge for many years, both through approximations of the measure Φ (‘phi’, central to the various iterations of IIT) and through alternative measures like ‘causal density’. Alongside new work from other groups, there now exists a range of measures of integrated information – yet so far there has been no systematic comparison of how they perform on non-trivial systems.

This is what we provide in our new paper, led by Adam along with Pedro Mediano from Imperial College London.

*

We describe, using a uniform notation, six different candidate measures of integrated information (among which we count the related measure of ‘causal density’). We set out the intuitions behind each, and compare their properties across a series of criteria. We then explore how they behave on a variety of network models, some very simple, others a little bit more complex.
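To give a feel for what one of the simpler candidates looks like in practice, here is a sketch of a ‘whole-minus-sum’ integrated information estimate for a two-node autoregressive system, under Gaussian assumptions and the atomic (single-node) partition. This is my simplified illustration: the measures in the paper treat partitions, stationarity, and normalization far more carefully.

```python
# Sketch: 'whole-minus-sum' integrated information for a 2-node VAR(1),
# under Gaussian assumptions and the atomic partition (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

# Simulate x_t = A x_{t-1} + noise for a simple 2-node network
A = np.array([[0.4, 0.3],
              [0.3, 0.4]])
T = 200_000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.standard_normal(2)

past, present = x[:-1], x[1:]

def gaussian_mi(a, b):
    """Mutual information I(a; b) in nats for Gaussian data (columns = variables)."""
    def logdet(m):
        return np.log(np.linalg.det(np.atleast_2d(np.cov(m.T))))
    return 0.5 * (logdet(a) + logdet(b) - logdet(np.hstack([a, b])))

# Predictive information of the whole system, minus that of its parts
whole = gaussian_mi(past, present)
parts = sum(gaussian_mi(past[:, [i]], present[:, [i]]) for i in range(2))
phi_wms = whole - parts  # can even go negative -- one way candidate measures diverge
print(f"whole: {whole:.3f} nats, parts: {parts:.3f}, phi: {phi_wms:.3f}")
```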

The most striking finding is that the measures all behave very differently – no two measures show consistent agreement across all our analyses. Here’s an example:

Diverse behavior of measures of integrated information. The six measures (plus two control measures) are shown in terms of their behavior on a simple 2-node network animated by autoregressive dynamics.

At first glance this seems worrying for IIT since, ideally, one would want conceptually similar measures to behave in similar ways when applied to empirical test-cases. Indeed, it is worrying if existing measures are used uncritically. However, by rigorously comparing these measures we are able to identify those which better reflect the underlying intuitions of ‘integrated information’, which we believe will be of some help as these measures continue to be developed and refined.

Integrated information, along with related notions of dynamical complexity and emergence, is likely to be an important pillar of our emerging understanding of complex dynamics in all sorts of situations – in consciousness research, in neuroscience more generally, and beyond biology altogether. Our new paper provides a firm foundation for the future development of this critical line of research.

*

One important caveat is necessary. We focus on measures that are, by construction, applicable to the empirical, or spontaneous, statistically stationary distribution of a system’s dynamics. This means we depart, by necessity, from the supposedly more fundamental measures of integrated information that feature in the most recent iterations of IIT. These recent versions of the theory appeal to the so-called ‘maximum entropy’ distribution since they are more interested in characterizing the ‘cause-effect structure’ of a system than in saying things about its dynamics. This means we should be very cautious about taking our results to apply to current versions of IIT. But, in recognizing this, we also return to where we started in this post. A major issue for the more recent (and supposedly more fundamental) versions of IIT is that they are extremely challenging to operationalize and therefore to put to an empirical test. Our work on integrated information departs from ‘fundamental’ IIT precisely because we prioritise empirical applicability. This, we think, is a feature, not a bug.

*

All credit for this study to Pedro Mediano and Adam Barrett, who did all the work. As always, I’d like to thank the Dr. Mortimer and Theresa Sackler Foundation, and the Canadian Institute for Advanced Research, Azrieli Programme in Brain, Mind, and Consciousness, for their support. The paper was published in Entropy on Christmas Day, which may explain why some of you might’ve missed it!  But it did make the cover, which is nice.

*

Mediano, P.A.M., Seth, A.K., and Barrett, A.B. (2019). Measuring integrated information: Comparison of candidate measures in theory and in simulation. Entropy, 21:17.

What psychedelic research can, and cannot, tell us about consciousness

The short piece below first appeared in Scientific American (Observations) on October 26, 2018.  It is a coauthored piece, led by me with contributions from Michael Schartner, Enzo Tagliazucchi, Suresh Muthukumaraswamy, Robin Carhart-Harris, and Adam Barrett.  Since its appearance, both Dr. Kastrup and Prof. Kelly have responded. I attach links to their replies after our article, offering a few comments in further response (entirely my own point of view). These comments just offer additional clarifications – I stand fully by everything said in our Sci Am piece.


It’s not easy to strike the right balance when taking new scientific findings to a wider audience. In a recent opinion piece, Bernardo Kastrup and Edward F. Kelly point out that media reporting can fuel misleading interpretations through oversimplification, sometimes abetted by the scientists themselves. Media misinterpretations can be particularly contagious for research areas likely to pique public interest—such as the exciting new investigations of the brain basis of altered conscious experience induced by psychedelic drugs.

Unfortunately, Kastrup and Kelly fall foul of their own critique by misconstruing and oversimplifying the details of the studies they discuss. This leads them towards an anti-materialistic view of consciousness that has nothing to do with the details of the experimental studies—ours or others’.

Take, for example, their discussion of our recent study reporting increased neuronal “signal diversity” in the psychedelic state. In this study, we used “Lempel-Ziv” complexity—a standard algorithm used to compress data files—to measure the diversity of brain signals recorded using magnetoencephalography (MEG). Diversity in this sense is related to, though not entirely equivalent to, “randomness.” The data showed widespread increased neuronal signal diversity for three different psychedelics (LSD, psilocybin and ketamine), when compared to a placebo baseline. This was a striking result since previous studies using this measure had only reported reductions in signal diversity, in global states generally thought to mark “decreases” in consciousness, such as (non-REM) sleep and anesthesia.
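For the curious, here is a minimal sketch of the core computation. It is my illustration only: the study applies this channel-by-channel to source-localized MEG, and the binarization and normalization steps differ in detail.

```python
# Sketch: normalized Lempel-Ziv complexity of a binarized signal,
# using Kaspar-Schuster phrase counting (illustrative only).
import numpy as np

def lz76(s):
    """Count the number of distinct phrases in binary string s (LZ76).

    Assumes len(s) >= 2.
    """
    n = len(s)
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:              # no earlier substring extends: new phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def signal_diversity(x):
    """Normalized LZ complexity of a 1-D signal, binarized at its median."""
    med = np.median(x)
    s = "".join("1" if v > med else "0" for v in x)
    # Normalize by the complexity expected for a random string of length n
    return lz76(s) * np.log2(len(s)) / len(s)

# Toy comparison: a noisy signal is more 'diverse' than a regular oscillation
t = np.linspace(0, 10, 2000)
print(signal_diversity(np.random.default_rng(0).standard_normal(2000)))  # near 1
print(signal_diversity(np.sin(2 * np.pi * t)))                           # much lower
```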

Media reporting of this finding led to headlines such as “First evidence found that LSD produces ‘higher’ levels of consciousness” (The Independent, April 19, 2017)—playing on an ambiguity between cultural and scientific interpretations of “higher”—and generating just the kind of confusion that Kastrup and Kelly rightly identify as unhelpful.

Unfortunately, Kastrup and Kelly then depart from the details in misleading ways. They suggest that the changes in signal diversity we found are “small,” when it is not magnitude but statistical significance and effect size that matter. Moreover, even small changes to brain dynamics can have large effects on consciousness. And when they compare the changes reported in psychedelic states with those found in sleep and anesthesia, they neglect the important fact that these analyses were conducted on different data types (intracranial data and scalp-level EEG respectively—compared to source-localized MEG for the psychedelic data)—making quantitative comparisons very difficult.

Having set up the notion that the changes we observed were “small,” they then say, “To suggest that brain activity randomness explains psychedelic experiences seems inconsistent with the fact that these experiences can be highly structured and meaningful.” However, neither we nor others claim that “brain activity randomness” explains psychedelic experiences. Our finding of increased signal diversity is part of a larger mission to account for aspects of conscious experience in terms of physiological processes. In our view, higher signal diversity indicates a larger repertoire of physical brain states that very plausibly underpin specific aspects of psychedelic experience, such as a blending of the senses, dissolution of the “ego,” and hyper-animated imagination. As standard functional networks dissolve and reorganize, so too might our perceptual structuring of the world and self.

They go on: “In short, a formidable chasm still yawns between the extraordinary richness of psychedelic experiences and the modest alterations in brain activity patterns so far observed.” Here, their misrepresentations are again exposed. To call the alterations modest is to misread the statistics. To claim a “formidable chasm” is to misunderstand the incremental nature of consciousness research (and experimental research generally), to sideline the constraints and subtleties of the relevant analyses and to ignore the insights into psychedelic experience that such analyses provide.

Kastrup and Kelly’s final move is to take this presumed chasm as motivation for questioning “materialist” views, held by most neuroscientists, according to which conscious experiences—and mental states in general—are underpinned by brain states. Our study, like all other studies that explore relations between experiential states and brain states (whether about psychedelics or not), is entirely irrelevant to this metaphysical question.

These are not the only inaccuracies in the piece that deserve redress. For example, their suggestion that decreased “brain activity” is one of the more reliable findings of psychedelic research is incorrect. Aside from the well-known stimulatory effects of psychedelics on the excitatory glutamate system, early reports of decreased brain blood flow under psilocybin have not been well replicated: a subsequent study by the same team using a different protocol and drug kinetics (intravenous LSD) found only modest increases in brain blood flow confined to the visual cortex. In contrast, more informative dynamic measures have revealed more consistent findings, with network disintegration, increases in global connectivity and increased signal diversity/entropy appearing to be particularly reliable outcomes, replicated across studies and study teams.

Consciousness science remains a fragile business, poised precariously between grand ambition, conflicting philosophical worldviews, immediate personal relevance and the messy reality of empirical research. Psychedelic research in particular has its own awkward cultural and historical baggage. Against this background, it’s important to take empirical advances for what they are: yardsticks of iterative, self-correcting progress.

This research is providing a unique window onto mappings between mechanism and phenomenology, but we are just beginning to scratch the surface. At the same time—and perhaps more importantly—psychedelic research is demonstrating an exciting potential for clinical use, for example in alleviating depression, though larger and more rigorous studies are needed to confirm and contextualize the promising early findings.

Kastrup and Kelly are right to guard against overplaying empirical findings by the media. But by misrepresenting the explanatory reach of our findings in order to motivate metaphysical discussions irrelevant to our study, they risk undermining the hard-won legitimacy of a neuroscience of consciousness. Empirical consciousness science, based firmly on materialistic assumptions, is doing just fine. And unlike alternative perspectives that place themselves “beyond physicalism,” it will continue to shed light on one of our deepest mysteries through rigorous application of the scientific method.


You can read Dr. Kastrup’s response here, and Prof. Kelly’s here. In the spirit of constructive clarification I will offer a few additional comments on the parts of the work I was involved in: the signal diversity study and the general interpretation of how empirical work on the brain basis of psychedelic research speaks to metaphysical debates about the nature of consciousness. These comments relate mainly to Prof. Kelly’s critique.

(With respect to Dr. Kastrup’s comments I will simply offer, as he no doubt knows, that relating fMRI BOLD to neural activity – in terms of global baseline and regionally differentiated metabolics, functional neuronal connectivity, and so on – remains an area of extremely active research and rapid methodological innovation.)

1. Prof Kelly notes that we do not provide exact Ns for the data segments we used to compute measures of signal diversity. This is because they varied substantially between drug condition, participant, and analysis method. We do however clearly state that “[a]nalyses were performed using non-overlapping segments of length 2 sec for a total length between 2 min and 10 min of MEG recording per participant and state” (Schartner et al., 2017, p.5). These numbers lead to a total number of segments ranging from ~3,500 to ~27,000 per participant and per state (since we have 90 channels/sources per segment). These large numbers provide stable statistical inference (e.g., by the central limit theorem). Also, as mentioned above, the absolute scores on the diversity scale are not as meaningful as effect size and statistical significance. I’d also like to add that in our paper we go to great lengths to establish that our reported diversity changes do not trivially follow from well-known spectral changes in the drug conditions – this is part of the unavoidable computational sophistication of the method, when done properly.

2. When Prof. Kelly says that “relatively simple neuroimaging methods can easily distinguish between wakeful and drowsy states and other commonplace conditions” I do not disagree at all. Our paper was specifically interested in signal diversity as a metric of brain dynamics (and as mentioned above we take care to de-confound our diversity results from spectral changes). Also, we do not claim these diversity changes fully explain the extraordinary phenomenology of psychedelic states. However, I do believe that they contribute helpfully to the incremental empirical project of mapping, in explanatorily satisfying ways, between mechanism and phenomenology.  I defend the general approach in this 2016 Aeon article: ‘the “real” problem of consciousness’.

3. I also agree that the measures of signal diversity we apply are only part of the story when mapping between experiential richness and brain dynamics. My lab (and others too) have worked hard on developing empirically adequate measures of ‘neural complexity’, ‘causal density’, and ‘integrated information’ which are theoretically richer – but unfortunately, at least so far, not very robust when applied to actual data – and substantially more computationally sophisticated. See here for a recent preprint. We have to do what we can with the measures we have, while always striving to generate and validate better measures.

4. I do not buy the claim that near-death experiences provide an empirical challenge to physicalist neuroscience (as argued by Prof. Kelly). See my previous blog post on this issue (‘the brain’s last hurrah’).

5. No need to impute a bias towards physicalism to me! I explicitly and happily adopt physicalism as a pragmatic metaphysics for pursuing a (neuro)science of consciousness. I can do this while remaining agnostic about the actual ontological status of consciousness. The problem with many alternative metaphysics – in my view – is that they do not lead to testable propositions. Dr Kastrup and Prof Kelly are of course entirely entitled to their own metaphysics. I was merely objecting to their use of our psychedelic research in support of their metaphysics, because I think it is entirely irrelevant. I simply do not accept that there are any “evident tensions between physicalist expectations and the experimental results [from psychedelic neuroimaging]”.

6. Finally, we can hopefully all agree on the importance of forestalling, as far as possible, media misinterpretations. This is true whatever one’s metaphysics. And it’s why, when our diversity paper first appeared, I felt compelled to pen an immediate corrective right here in this blog (‘Evidence for a higher state of consciousness? Sort of’).


After posting, I realized I had not specifically responded to Bernardo’s initial reaction to our Sci Am piece. There is some overlap with the points above, but please allow me to correct this oversight here.

1. Clearing the semantic fog.  I hope I have made clear my intended distinction between ‘fully explain’ and ‘incrementally account for.’ Again my Aeon piece elaborates the strategy of refining explanatory mappings between mechanism and phenomenology.

2. Metaphysical claims. Our work is consistent with materialism and is motivated by it, but empirical studies like this are not suited to arbitrate between competing metaphysical positions (unless such positions state that there are no relations at all between brains and conscious experiences). Empirical studies like ours try to account for phenomenological properties in terms of mechanisms – but in doing so there is no need to make claims that one is addressing the (metaphysical) ‘hard problem’ of consciousness.  Kastrup and Kelly have written that “the psychedelic brain imaging research discussed here has brought us to a major theoretical decision point as to which framework best fits with all the available data” – where ‘physicalism’ is one among several (metaphysical) ‘frameworks’. I continue to think the research discussed here is irrelevant to this ‘decision point’, unless one is deciding to reject frameworks that postulate no relation between consciousness and the brain. The fact that the research is about psychedelics rather than (for example) psychophysics is neither here nor there.

3. What the researchers fail to address. I do not agree with the premise that there is an inconsistency between the dream state and the psychedelic state in terms of neural evidence. As noted above, measures of brain dynamics and activation are being continuously refined and innovated, and it is overly simplistic to characterise the relevant dimensions in terms of gross ‘level of activity’. Also, dreams and psychedelia are different. The point about ‘randomness’ I have addressed already (diversity is not presented as an exhaustive explanation of psychedelic phenomenology).

4. A surprising claim. I respectfully refrain from addressing these points about the MRI/MEG studies since I was not involved with them. This does not mean I condone Bernardo’s comments. I will only repeat that brute measures of increased/decreased brain activity are less informative than more sophisticated measures of neural dynamics and connectivity, and studies are accumulating to more precisely map brain changes in psychedelic states.

5. The issue of statistics. It is not meaningful to compare, quantitatively, ‘magnitudes’ in changes in subjective experience with magnitudes of statistical effect size as applied to (for example) our diversity measures.  We made this point already in our Sci Am piece.  I find it quite natural to suppose that a massively meaningful change in subjective experience might have a subtle neuronal signature in the brain (and as I have said, diversity/randomness is only a small part of any full ‘explanation’ anyway).

6. A non-sequitur. I do think it’s misleading to speak of a “formidable chasm” between “the magnitude of the subjective effects of a psychedelic trance and the accompanying physiological changes”, for the reasons given in point 5 above.

7. Final thoughts. I indeed hope we can all agree that psychedelic research is interesting, exciting, valuable, evolving, clinically important, and generally highly worthwhile.  I hope we can also agree, as mentioned above, that forestalling media misrepresentations is important.  On other matters I doubt there will be full agreement between my views (and those of my colleagues) and Bernardo’s and Edward’s.  They are certainly entitled to their metaphysics. I simply wish to point out (i) our studies do help build explanatory bridges between neural mechanism and psychedelic phenomenology, and (ii) they do not provide any additional reasons to entertain non-physicalist metaphysics.

And with that, I’m afraid I’ll have to draw a line under this interesting discussion – at least for my involvement. I hope it generates some light amid the heat.

 

Conscious spoons, really? Pushing back against panpsychism

So today I’d been planning to write about a new paper from our lab, just out in Neuropsychologia, in which we show how people without synaesthesia can be trained, over a few weeks, to have synaesthesia-like experiences – and that this training induces noticeable changes in their brains. It’s interesting stuff, and I will write about it later, but this morning I happened to read a recent piece by Olivia Goldhill in Quartz with the provocative title: “The idea that everything from spoons to stones are conscious is gaining academic credibility” (Quartz, Jan 27, 2018). This article had come up in a twitter discussion involving my colleague and friend Hakwan Lau about the challenge of maintaining the academic credibility of consciousness science, with Hakwan noting that provocative articles like this don’t often get the pushback they deserve.

So here’s some pushback.

Goldhill’s article is about panpsychism, which is the idea that consciousness is a fundamental property of the universe, present to some degree everywhere and in everything. Her article suggests that this view is becoming increasingly acceptable and accepted in academic circles, as so-called ‘traditional’ approaches (materialism and dualism) continue to struggle. On the contrary, although it’s true that panpsychism is being discussed more frequently and more openly these days, it remains very much a fringe proposition within consciousness science and is not taken seriously by many. Nor need it be, since consciousness science is getting along just fine without it. Let me explain how.

From hard problems to real problems

We should start with philosophy. Goldhill correctly identifies David Chalmers’ famous ‘hard problem of consciousness’ as a key origin of modern panpsychism. This is bolstered by Chalmers’ own apparently increasing sympathy with the view, as Goldhill’s article makes clear. Put simply, the ‘hard problem’ is about how and why physical interactions of any sort can give rise to conscious experiences. This is indeed a difficult problem, and the apparent unavailability of any current solution is why those who fixate on it might be tempted by the elixir of panpsychism: if consciousness is ‘here, there, and everywhere’ then there is no longer any hard problem to be solved.

But consciousness science has largely moved on from attempts to address the hard problem (though see IIT, below). This is not a failure, it’s a sign of maturity. Philosophically, the hard problem rests on conceivability arguments such as the possibility of imagining a philosophical ‘zombie’ – a behaviourally and perhaps physically identical version of me, or you, but which lacks any conscious experience, which has no inner universe. Conceivability arguments are generally weak since they often rest on failures of imagination or knowledge, rather than on insights into necessity. For example: the more I know about aerodynamics, the less I can imagine a 787 Dreamliner flying backwards. It cannot be done and such a thing is only ‘conceivable’ through ignorance about how wings work.

In practice, scientists researching consciousness are not spending their time (or their scarce grant money) worrying about conscious spoons, they are getting on with the job of mapping mechanistic properties (of brains, bodies, and environments) onto properties of consciousness. These properties can be described in many different ways, but include – for example – differences between normal wakeful awareness and general anaesthesia; experiences of identifying with and owning a particular body, or distinctions between conscious and unconscious visual perception. If you come to the primary academic meeting on consciousness science – the annual meeting of the Association for the Scientific Study of Consciousness (ASSC) – or read articles either in specialist journals like Neuroscience of Consciousness (I edit this, other journals are available) or in the general academic literature, you’ll find a wealth of work like this and very little – almost nothing – on panpsychism. You’ll find debates on the best way to test whether prefrontal cortex is involved in visual metacognition – but you won’t find any experiments on whether stones are aware. This, again, is maturity, not stagnation. It is also worth pointing out that consciousness science is having increasing impact in medicine, whether through improved methods for detecting residual awareness following brain injury, or via enhanced understanding of the mechanisms underlying psychiatric illness. Thinking about conscious spoons just doesn’t cut it in this regard.

A standard objection at this point is that empirical work touted as being about consciousness science is often about something else: perhaps memory, attention, or visual perception. Yes, some work in consciousness science may be criticized this way, but it is not generally the case. To the extent that the explanatory target of a study encompasses phenomenological properties, or differences between conscious states (e.g., dreamless sleep versus wakeful rest), it is about consciousness. And of course, consciousness is not independent of other cognitive and perceptual processes – so empirical work that focuses on visual perception can be relevant to consciousness even if it does not explicitly contrast conscious and unconscious states.

The next objection goes like this: OK, you may be able to account for properties of consciousness in terms of underlying mechanisms, but this is never going to explain why consciousness is part of the universe in the first place – it is never going to solve the hard problem. Therefore consciousness science is failing. There are two responses to this.

First, wait and see (and ideally do). By building increasingly sophisticated bridges between mechanism and phenomenology, the apparent mystery of the hard problem may dissolve. Certainly, if we stick with simplistic ‘explanations’ – for instance by associating consciousness simply with activity in (for example) the prefrontal cortex, everything may remain mysterious. But if we can explain (for example) the phenomenology of peripheral vision in terms of neurally-encoded predictions of expected visual uncertainty, perhaps we are getting somewhere. It is unwise to pronounce the insufficiency of mechanistic accounts of some putatively mysterious phenomenon before such mechanistic accounts have been fully developed. This is one reason why frameworks like predictive processing are exciting – they provide explanatorily powerful, computationally explicit, and empirically predictive concepts which can help link phenomenology and mechanism. Such concepts can help move beyond correlation towards explanation in consciousness science, and as we move further along this road the hard problem may lose its lustre.

Second, people often seem to expect more from a science of consciousness than they would ask of other scientific explanations. As long as we can formulate explanatorily rich relations between physical mechanisms and phenomenological properties, and as long as these relations generate empirically testable predictions which stand up in the lab (and in the wild), we are doing just fine. Riding behind many criticisms of current consciousness science are unstated intuitions that a mechanistic account of consciousness should be somehow intuitively satisfying, or even that it must allow some kind of instantiation of consciousness in an arbitrary machine. We don’t make these requirements in other areas of science, and indeed the very fact that we instantiate phenomenological properties ourselves, might mean that a scientifically satisfactory account of consciousness will never generate the intuitive sensation of ‘ah yes, this is right, it has to be this way’. (Thomas Metzinger makes this point nicely in a recent conversation with Sam Harris.)

Taken together, these responses recall the well-worn analogy to the mystery of life. Not so long ago, scientists thought that the property of ‘being alive’ could never be explained by physics or chemistry. That life had to be more than mere ‘mechanism’. But as biologists got on with the job of accounting for the properties of life in terms of physics and chemistry, the basic mystery of the ontological status of life faded away and people no longer felt the need to appeal to vitalistic concepts like ‘élan vital’. Now of course this analogy is imperfect, and from our current vantage it is impossible to say how closely it will stand up over time. Consciousness and life are not the same (though they may be more closely linked than people tend to think – another story!). But the basic point remains: instead of focusing on a possibly illusory big mystery – and thereby falling for the temptations of easy big solutions like panpsychism – the best strategy is to divide and conquer. Identify properties and account for them, and repeat. Chalmers himself describes something like this strategy when he talks about the ‘mapping problem’, and with tongue-somewhat-in-cheek I’ve called it ‘the real problem of consciousness’.

The lure of integrated information theory

A major boost for modern panpsychism has come from Giulio Tononi’s much discussed – and fascinating – integrated information theory of consciousness (IIT). This is a formal mathematical theory which attempts to derive constraints on the mechanisms of consciousness from axioms about phenomenology. It’s a complex theory (and apparently getting more complex all the time), but the relevance for panpsychism is straightforward. On IIT, any mechanism that integrates information in the right way exhibits consciousness to some degree. And the ability to integrate information is very general, since it depends only on the cause-effect structure of a system.

Tononi actually goes further than this, in a crucial but subtle way. For him, the (integrated) information that counts is based not only on what a system has done (i.e., what states it has been in), but on what a system could do (i.e., what states it could be in, even if it never has and never will occupy them). Technically, this is the difference between the empirical distribution of a system and its maximum entropy distribution. This feature of IIT not only makes it hard (usually impossible) to calculate for nontrivial systems, it also pushes the theory further towards panpsychism, because it implies an ontological status for certain forms of information – much like John Wheeler’s ‘it from bit’. If (integrated) information is real (and therefore more-or-less everywhere), and if consciousness is based on (integrated) information, then consciousness is also more-or-less everywhere: thus panpsychism.
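To make that distinction concrete, here is a minimal Python sketch (the state counts are invented purely for illustration, and nothing here is taken from IIT itself) contrasting the entropy of the states a toy system actually visited with the entropy of its full repertoire of possible states:

```python
import numpy as np

# A toy system with 8 possible states, of which only 3 were ever observed.
# The visit counts are made up purely for illustration.
n_possible = 8
observed_counts = np.array([70, 20, 10])

p_emp = observed_counts / observed_counts.sum()   # empirical distribution
h_emp = -(p_emp * np.log2(p_emp)).sum()           # entropy of what it did

h_max = np.log2(n_possible)                       # entropy of what it could do
print(f"{h_emp:.2f} vs {h_max:.2f} bits")         # ~1.16 vs 3.00 bits
```

The gap between the two numbers is the point: basing (integrated) information on the second quantity grants reality to possibilities a system never actually realizes.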

But this is not the only way to formulate IIT. Several years ago, Adam Barrett and I formulated a measure of integrated information which depends only on the empirical distribution of a system, and many competing measures now exist. These measures can be applied more easily in practice, and they do not directly imply panpsychism, because they can be interpreted as explanatory bridges between mechanism and phenomenology (in the ‘real problem’ sense), rather than as claims about what consciousness actually is. So when Goldhill writes that IIT “shares the panpsychist view that physical matter has innate conscious experience”, this is only true for the strong version of the theory articulated by Tononi himself. Other views are possible, and more empirically productive.
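For a flavour of how an empirical-distribution measure can work, here is a toy ‘whole minus parts’ computation in Python. This is not the published measure (which involves further ingredients, such as a search over partitions), just a sketch of the basic move: estimate, from observed data alone, how much a system’s past predicts its present over and above what its parts predict about themselves.

```python
import numpy as np

def gaussian_mi(a, b):
    """Mutual information (nats) between Gaussian variables a and b
    (rows = dimensions, columns = samples), from sample covariances."""
    a, b = np.atleast_2d(a), np.atleast_2d(b)
    cov = np.cov(np.vstack([a, b]))
    da = a.shape[0]
    det = np.linalg.det
    return 0.5 * np.log(det(cov[:da, :da]) * det(cov[da:, da:]) / det(cov))

# Toy two-node system in which each node is driven by the *other*
# node's past, so all predictive information crosses the partition.
rng = np.random.default_rng(0)
T = 20000
x = np.zeros((2, T))
for t in range(1, T):
    x[0, t] = 0.6 * x[1, t - 1] + rng.normal()
    x[1, t] = 0.6 * x[0, t - 1] + rng.normal()

past, present = x[:, :-1], x[:, 1:]
whole = gaussian_mi(past, present)
parts = sum(gaussian_mi(past[i], present[i]) for i in range(2))
print(whole - parts)   # > 0: the whole predicts itself beyond its parts
```

Everything here is computed from the observed time series; nothing appeals to states the system never visited.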

Back to science

This leads us to the main problem with panpsychism. It’s not that it sounds crazy; it’s that it cannot be tested. It does not lead to any feasible programme of experimentation, and progress in scientific understanding requires experiments and testability. Given this, it’s curious that Goldhill introduces us to Arthur Eddington, the physicist who experimentally confirmed Einstein’s (totally crazy-sounding) theory of general relativity. Eddington’s immense contribution to experimental physics should not give credence to his views on panpsychism; it should instead remind us of the essential imperative of formulating testable theories, however difficult such tests might be to carry out. (Modern physics is of course now facing a similar testability crisis with string theory.) And outlandish speculations about how quantum entanglement might lead to universe-wide consciousness have no place whatsoever in a rigorous and empirically grounded science of consciousness.

I can’t finish this post without noting that the current attention to panpsychism, especially in the media, has a lot to do with the views of some particularly influential figures in the field: Chalmers and Tononi, but also Christof Koch, whose early work with Francis Crick was fundamental in the rehabilitation of consciousness science in the late 1990s and who continues to be a major figure in the field. These people are all incredibly smart and have made extremely important contributions within consciousness science and beyond. I have learned a great deal from each, and I owe them intellectual debts I will never be able to repay. Having said that, their views on panpsychism are firmly in the minority and should not be over-weighted simply because of their historical contributions and current prominence. Whether there is something about having made such influential contributions that leads to a tendency to adopt countercultural (and difficult to test) views later on – well that’s for another day and another writer.

At the end of her piece, Goldhill quotes Chalmers quoting the philosopher John Perry, who says: “If you think about consciousness long enough, you either become a panpsychist or you go into administration.” Perhaps the problem lies in only thinking. We should complement thinking with the challenging empirical work of explaining properties of consciousness in terms of biophysical mechanisms. Then we can say: if you work on consciousness long enough, you either become a neuroscientist or you become a panpsychist. I know where I’d rather be – with my many colleagues who are not worrying about conscious spoons but who are trying, with growing success, to unravel the complex biophysical mechanisms that shape our subjective experiences of world and self. And now it’s high time I got back to that paper on training synaesthesia.

(For more general discussions about consciousness science, where it’s at and where we’re going, have a listen to my recent conversation with Sam Harris. Make sure you have time for it though, it clocks in at over three hours …)

Evidence for a higher state of consciousness? Sort of.


Bicycle Day Celebration Blotter.  By YttriumOx CC BY-SA 3.0

On April 19 1943, seventy-four years ago to the day, Albert Hofmann conducted his now famous self-experiment on the psychological effects of LSD, a compound he had been the first to synthesize some years earlier. The anniversary is now called ‘bicycle day’, in honour of how Hofmann made his way home, and his experiment led to some remarkable descriptions:

“… Little by little I could begin to enjoy the unprecedented colors and plays of shapes that persisted behind my closed eyes. Kaleidoscopic, fantastic images surged in on me, alternating, variegated, opening and then closing themselves in circles and spirals, exploding in colored fountains, rearranging and hybridizing themselves in constant flux …”

In the decades that followed, academic research into LSD and other psychedelics was cast into the wilderness as worries about their recreational use held sway. Recently, however, the tide has started to turn. There is now gathering momentum behind studies showing remarkable clinical potential for psychedelics in treating recalcitrant psychiatric disorders, as well as behind experiments trying to understand how psychedelics exert their distinctive effects on conscious experience.

In a new paper published in Scientific Reports on this bicycle day anniversary, we describe a distinctive neuronal signature of the psychedelic state: a global increase in neuronal signal diversity. So – is this evidence for a ‘higher state’ of consciousness? And could it account for the nature of psychedelic experience? Let me answer these questions by summarizing what we did.

Our study analyzed data previously collected by Dr. Robin Carhart-Harris (Imperial College London) and Dr. Suresh Muthukumaraswamy (then Cardiff, now at Auckland). These were magnetoencephalographic (MEG) brain-imaging data from healthy volunteers either in a normal waking state, or after having taken LSD, psilocybin (the active ingredient in magic mushrooms) or ketamine (which in low doses acts as a psychedelic – in high doses it has an anaesthetic effect). MEG data combine very high temporal resolution with much better spatial resolution than EEG (electroencephalography), allowing us to compute some relatively sophisticated mathematical measures of signal diversity. The participants in our study had passed strict ethical criteria, and were asked simply to rest quietly in the scanner during the experiment. Afterwards, they were asked various questions about what they had experienced.

With Carhart-Harris and Muthukumaraswamy, and with Dr. Adam Barrett and first author Michael Schartner of the Sackler Centre for Consciousness Science here at Sussex, we chopped up the MEG data into small segments and for each segment calculated a range of different mathematical measures. The most interesting is called ‘Lempel-Ziv (LZ) complexity’, which measures the diversity of the data by figuring out how compressible it is. A completely random data sequence is maximally diverse, since it cannot be compressed at all. A completely uniform data sequence is minimally diverse, since it is easy to compress. In fact, because of these properties, variants of the LZ algorithm are widely used in everyday lossless file compression, including formats for digital images.
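To give a feel for how this works, here is a small Python sketch. It uses an LZ78-style phrase count rather than the exact LZ76 variant computed in the paper, and a crude binarization around the mean, but the logic is the same: diverse signals parse into many short, never-before-seen phrases, while predictable signals parse into a few long ones.

```python
import numpy as np

def lz_phrase_count(seq):
    """Count distinct phrases in an LZ78-style parsing: scan the
    sequence, cutting off a new phrase each time the current substring
    has not been seen before. Random data yields many short phrases
    (high diversity); regular data yields a few long ones."""
    phrases, phrase = set(), ""
    for symbol in seq:
        phrase += symbol
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    if phrase:                     # count any leftover partial phrase
        phrases.add(phrase)
    return len(phrases)

def signal_diversity(x, seed=0):
    """Binarize a signal around its mean, count LZ phrases, and
    normalize by the count for a shuffled (maximally diverse) copy."""
    binary = "".join("1" if v > x.mean() else "0" for v in x)
    shuffled = "".join(np.random.default_rng(seed).permutation(list(binary)))
    return lz_phrase_count(binary) / lz_phrase_count(shuffled)

t = np.linspace(0, 10, 2000)
print(signal_diversity(np.sin(2 * np.pi * t)))                        # low
print(signal_diversity(np.random.default_rng(1).normal(size=2000)))   # ~1
```

In the paper, scores of this kind were computed across many channels and time segments, and normalized in a broadly similar way so that values are comparable across conditions.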


Changes in LZ complexity under LSD, as compared to the waking state.  Data are source-localized MEG.  Image from Suresh Muthukumaraswamy; it appears as Figure 3 in the paper.

We found that MEG signals had a reliably higher level of LZ complexity – and hence signal diversity – for all three psychedelic compounds, with perhaps the strongest effects for LSD. The fact that we found the same pattern of results across all three compounds is both striking and reassuring: it means our results are unlikely to have arisen by chance.

Intuitively, these findings mean that the brain-on-psychedelics is less predictable, more random, and more diverse than in the normal waking state.

Our data can be thought of as evidence for a ‘higher’ state of consciousness only in this very specific way, and only in the context provided by other studies in which a loss of consciousness has been associated with reduced neuronal diversity. For example, studies in our lab have shown reduced LZ complexity (reduced diversity) for both anaesthesia and for (non-dreaming) sleep. (Interestingly, levels of LZ returned to ‘normal’ during REM sleep, when dreams are likely.) What’s striking about our results, in this context, is that increases in quantitative measures of conscious level, compared to the waking state, have never been found before.

Interpreting our data in terms of conscious level also makes sense because measures of signal diversity, like LZ, can be thought of as approximations to related quantities like the ‘perturbation complexity index’ (PCI). This measure captures the diversity of the brain’s response to an electromagnetic stimulus: think of banging on the brain (using transcranial magnetic stimulation, which applies a sharp electromagnetic ‘bang’) and listening to the echo. Studies using PCI, pioneered by Prof. Marcello Massimini at the University of Milan, have found a remarkable sensitivity to changes in conscious level, and even an ability to predict residual consciousness in devastating neurological conditions like coma and the vegetative state. The differences between LZ and PCI are subtle, having mainly to do with whether they measure simple diversity or a mixture of diversity and ‘integration’ in brain dynamics.

More generally, measures of diversity are related to influential theories which associate consciousness with ‘integrated information’ or ‘causal density’ in the dynamics of the brain. While these theories specify even more complicated mathematical measures of conscious level, the fact that we see measurable increases in diversity so reliably across conscious states gives some support to these theories. Our results are also consistent with Robin Carhart-Harris’ ‘entropic brain’ theory, which proposes that the psychedelic state is associated with greater entropy or uncertainty in neural dynamics.

In this broader theoretical context, what’s interesting about our results is that they show that a measure of conscious level – previously applied to sleep and anesthesia – is also sensitive to differences in conscious content, as in the contrast between the psychedelic state and normal wakefulness. This helps shed some new light on an old debate in the science of consciousness – the relationship between conscious level (how conscious you are) and conscious content (what you’re conscious of, when you’re conscious).

Taking this research forward, we plan to understand more about how specific properties of neural dynamics relate to specific properties of psychedelic experiences. In the present study, we found some tentative correlations between changes in signal diversity and the degree to which people reported experiences like ‘ego dissolution’ and ‘vividness of imagination’. However, these correlations were not strong. One possible reason is that the subjective reports were taken outside the scanner, likely some time after the peak effect of the drug. Another possibility – which we are currently looking into – is that more fine-grained measures of information flow in the brain, like Granger causality, might be needed in order to closely map properties of psychedelic experience to changes in the brain.
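For the curious, Granger causality asks whether the past of one signal helps predict another signal beyond what that signal’s own past already provides. Here is a minimal sketch using statsmodels on synthetic data (the coupling strength and lag choices are arbitrary, purely for illustration):

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                      # 'source' signal
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * x[t - 1] + rng.normal()    # y is driven by x's past

# statsmodels expects a two-column array: [effect, putative cause].
# Small p-values indicate that past x improves prediction of y
# beyond y's own past.
grangercausalitytests(np.column_stack([y, x]), maxlag=2)
```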

Overall, our study adds to a growing body of work – much of which has been led by Carhart-Harris and colleagues – that is now revealing the brain-basis of the psychedelic state. Our data show that a simple measure of neuronal signal diversity places the psychedelic state ‘above’ the normal waking state, in comparison to the lower diversity found in sleep and anaesthesia. Taking this work forward stands to do much more than enhance our understanding of psychedelics. It may help expose how, why – and for whom – psychedelics may help alleviate the appalling suffering of psychiatric disorders like depression. And in the end, it may help us figure out how our normal everyday conscious experiences of the world, and the self, come to be.

After all, everything we experience – even when stone cold sober – is just a kind of ‘controlled hallucination.’ Our perceptions are just the brain’s “best guess” of what’s going on, reined in by sensory signals. It’s just that most of the time we agree with each other about our hallucinations, and call them reality.

‘Increased spontaneous MEG signal diversity for psychoactive doses of ketamine, LSD and psilocybin’ by Michael Schartner, Robin Carhart-Harris, Adam Barrett, Anil Seth and Suresh Muthukumaraswamy is published in Scientific Reports (7): 46421, 2017. It is freely available here as an open-access publication. I am the corresponding author.

The study has been extensively covered in the media. Particularly good pieces are in The Guardian, in the New Scientist, and in Wired. There is also a highly active Reddit thread, which on the day of publication was consistently on the Reddit homepage.

I would like to specifically acknowledge Michael Schartner and Adam Barrett in this post.  Michael’s Ph.D. – awarded just a few months ago – was all about measuring signal diversity in different conscious states (sleep, anaesthesia, psychedelia). Michael was primarily supervised by Dr. Barrett, who devoted his considerable mathematical expertise to the project. Very many thanks are also due to Robin Carhart-Harris and Suresh Muthukumaraswamy for generously engaging with this collaboration.

Carhart-Harris, Muthukumaraswamy and colleagues have published a number of other important studies on the neural basis of the psychedelic state.  See here and here – or just look on PubMed.

The real problem

What is the best way to understand consciousness? In philosophy, centuries-old debates continue to rage over whether the Universe is divided, following René Descartes, into ‘mind stuff’ and ‘matter stuff’. But the rise of modern neuroscience has seen a more pragmatic approach gain ground: an approach that is guided by philosophy but doesn’t rely on philosophical research to provide the answers. Its key is to recognise that explaining why consciousness exists at all is not necessary in order to make progress in revealing its material basis – to start building explanatory bridges from the subjective and phenomenal to the objective and measurable.

This is the start of an essay I recently wrote for the website aeon.co, which publishes an essay a day, focusing on ideas and culture.  The basic idea is to chart a pragmatic path for the scientific study of consciousness, respecting but not directly targeting the deep metaphysical mysteries so eloquently exposed by Chalmers’ famous distinction between the ‘easy’ and ‘hard’ problems.  Much of what I say has been said before (e.g., in the tradition of neurophenomenology) but I hope to bring things together in a new way and with a distinctive empirical angle.  Anyway, best make up your own mind – I’d be keen to hear what you think!

At the edges of awareness

Imagine this. Following a brain injury you lie in a hospital bed and from the outside you appear to be totally unconscious. You don’t respond to anything the doctors or your family say, you make no voluntary movements, and although you still go to sleep and wake up, there seems to be nobody at home. But your ‘inner universe’ of conscious awareness still remains, perhaps flickering and inconsistent, but definitely there. How could anyone else ever know, and how could you ever communicate with your loved ones again?

Two new radio dramas, The Sky is Wider and Real Worlds, engage with these critical questions by drawing on the cutting edge of neurology and neuroscience. Recent advances have enabled researchers not only to diagnose ‘residual’ awareness following severe brain injuries, but also to open new channels of communication with behaviourally unresponsive patients. The key medical challenge is to distinguish the so-called ‘vegetative state’, in which there truly is no conscious awareness, from ‘minimally conscious’ or ‘locked-in’ conditions, in which some degree of consciousness persists (even normal consciousness, in the locked-in state) despite there being no outward signs.


Brain activity during mental imagery, in a behaviourally unresponsive patient and in a control subject.  Source: MRC via The Guardian

Linda Marshall Griffiths’ drama The Sky is Wider takes inspiration from an ‘active approach’ in which the neurologist asks questions of the patient and monitors their brain activity for signs of response. In a classic study from about 10 years ago, Adrian Owen and his team asked behaviourally unresponsive patients to imagine either walking around their house or playing tennis, while their brains were scanned using functional MRI (which measures regional metabolic activity in the brain). These tasks were chosen because imagining these different behaviours activates different parts of the brain, so if we see these selective activations in a patient, we know that they have understood and are voluntarily following the instructions. If they can do this, they must be conscious. It turns out that between 10 and 20% of patients behaviourally diagnosed as being in the vegetative state can pass this test. Equally important, the same method can be used to establish simple communication by (for example) asking a patient to imagine playing tennis to answer ‘yes’ and walking around their house to answer ‘no’.
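The logic of this ‘active’ paradigm is simple enough to sketch in a few lines of Python. Everything below is simulated and illustrative (real analyses involve full fMRI preprocessing and statistics, not two toy region-of-interest time courses):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 120
blocks = np.zeros(n_scans, dtype=bool)
blocks[20:40] = blocks[60:80] = True       # 'imagine now' periods

# Simulate a patient answering 'yes' via tennis imagery: the motor
# imagery region activates during instruction blocks, the spatial
# imagery region does not. Region names are purely illustrative.
motor_roi = rng.normal(size=n_scans) + 1.5 * blocks
spatial_roi = rng.normal(size=n_scans)

def decode_answer(motor, spatial, blocks):
    """'yes' if motor-imagery activation dominates during the
    instructed blocks, 'no' if spatial-imagery activation does."""
    motor_contrast = motor[blocks].mean() - motor[~blocks].mean()
    spatial_contrast = spatial[blocks].mean() - spatial[~blocks].mean()
    return "yes" if motor_contrast > spatial_contrast else "no"

print(decode_answer(motor_roi, spatial_roi, blocks))   # -> 'yes'
```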

These developments represent a revolution in clinical neurology. Current research is increasing the efficiency of active approaches by using the more portable electroencephalography (EEG) instead of bulky and expensive MRI. ‘Passive’ techniques, in which residual consciousness can be inferred without requiring patients to perform any task, are also rapidly improving. These methods are important because active approaches may underestimate the incidence of residual awareness, since not all conscious patients may understand or be able to follow verbal instructions.

Alongside these scientific developments we encounter pressing ethical questions. How should we treat patients in these liminal states of awareness? And given a means of communication, what kinds of questions should we ask? The Sky is Wider explores these challenging ethical issues in a compelling narrative which gives dramatic voice to the mysterious conditions of the vegetative and minimally conscious states.

In Real Worlds, Jane Rogers takes us several years into the future. Communication with behaviourally unresponsive patients is now far advanced and is based on amazing developments in ‘virtual reality’. The clinical context for this drama is the ‘locked-in syndrome’ where a patient may have more-or-less normal conscious experiences but completely lack the ability to move. In Real Worlds, a locked-in patient transcends these limitations by controlling a virtual reality avatar directly using brain signals. These avatars inhabit virtual worlds in which the avatars of different people can interact, while the ‘real’ person behind each may remain hidden and unknown.

This drama deliberately inhabits the realm of science fiction, but there is solid science behind it too. The development of so-called ‘brain-computer interfaces’ (BCIs) is moving fast. These interfaces combine brain imaging methods (like EEG or fMRI, or sometimes more ‘invasive’ methods in which electrodes are inserted directly into the brain) with advanced machine learning methods to perform a kind of ‘brain-reading’. The idea is to infer, from brain activity alone, intended movements, perceptions, and perhaps even thoughts. These decoded ‘thoughts’ can then be used to control robotic devices, or virtual avatars. In some cases, a person’s own body might be controlled via direct stimulation of muscles. Progress in this area has been remarkably rapid. In a landmark but rather showy example, the Brazilian neuroscientist Miguel Nicolelis used a BCI to allow a paralysed person to ‘kick’ the first ball of the 2014 football World Cup, through brain-control of a robotic exoskeleton. More recently, brain-reading methods have allowed a paralysed man to play Guitar Hero for the first time since his injury.
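To make the ‘brain-reading’ idea concrete, here is a minimal sketch using simulated EEG rather than real recordings: extract band-power features from short epochs and train an off-the-shelf classifier to guess which of two imagined actions each epoch belongs to. Channel counts, frequency bands and effect sizes are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples, fs = 200, 8, 256, 128
labels = rng.integers(0, 2, n_epochs)            # two imagined actions
eeg = rng.normal(size=(n_epochs, n_channels, n_samples))

# Give one class a stronger 10 Hz rhythm on one channel, so that the
# classes are separable from band power (a toy stand-in for the real
# differences in sensorimotor rhythms during motor imagery).
mu = np.sin(2 * np.pi * 10 * np.arange(n_samples) / fs)
eeg[labels == 0, 2] += 0.8 * mu

def bandpower(epochs, fs, lo, hi):
    """Mean spectral power in [lo, hi] Hz, per epoch and channel."""
    freqs = np.fft.rfftfreq(epochs.shape[-1], 1 / fs)
    psd = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    return psd[..., (freqs >= lo) & (freqs <= hi)].mean(axis=-1)

features = np.log(bandpower(eeg, fs, 8, 12))     # log 8-12 Hz power
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, features, labels, cv=5).mean())  # well above chance
```

A real BCI adds the hard parts (artefact rejection, spatial filtering, online calibration, and feedback to the user), but the pipeline shape, features in and decoded intention out, is the same.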

The other technology highlighted in Real Worlds is virtual reality (VR), which – thanks to its enormous consumer potential – is developing even more rapidly. All the major technology and AI companies are getting in on the act, and VR headsets are finally becoming cheap enough, comfortable enough, and powerful enough to define a new technological landscape. Here at the Sackler Centre for Consciousness Science at the University of Sussex, we are exploring how VR can help shed light on our normal conscious experience. In one example, we use a method called ‘augmented reality’ (AR) to project a ‘virtual’ body into the real world, as seen through a camera mounted on the front of a VR headset. This experiment revealed how our perception of what is (and what is not) our own body can be easily manipulated, indicating that our experience of ‘body ownership’, which is so easy to take for granted, is in fact continuously and actively generated by the brain. In a second example, we developed a method called ‘substitutional reality’, in which a VR headset is coupled with panoramic video and audio taken from a real environment, manipulated in various ways. The resulting experiences are much more immersive than current computer-generated virtual environments, and in some cases people cannot distinguish them from actually ‘real’ environments.


A ‘virtual reality’ hand, part of a Sackler Centre study to explore the mechanisms underlying experiences of body ownership.  VR programming by Dr. Keisuke Suzuki.

Just as in the first drama, ethical questions risk outpacing the science and technology. As VR becomes increasingly immersive and pervasive, its potential to impact our real lives grows ever more powerful. While benefits are easy to imagine – for instance in bringing distant relatives together or enabling remote experiences of inaccessible places – there are also legitimate concerns. High on the list is what happens if people become increasingly unable to distinguish the real world from the virtual, whether in the moment or (more plausibly) in their memories. And what if people progressively withdraw from ‘reality’ as the available virtual worlds become more appealing places to be? Of course, simple dichotomies are unhelpful, since VR technologies are part of our real worlds, just like mobile phones and laptop computers. Jane Rogers’ Real Worlds explores these complex ethical issues by imagining VR as a future treatment – perhaps ‘prosthesis’ would be a better word – for disorders of consciousness like those encountered in The Sky is Wider.

Together, these dramas explore the human and societal consequences of existing and near-future clinical technologies. With artistic license, they ask important questions that scientists and clinicians are not yet equipped to address. Ultimately, I think they convey an optimistic message: that we can understand and treat – if not cure – severely debilitating conditions that may otherwise have remained undiagnosed, let alone treated. But they also lead us to consider not just what we could do, but what we should do.

The Sky is Wider (written by Linda Marshall Griffiths) and Real Worlds (written by Jane Rogers) were produced by Nadia Molinari for BBC Radio 4. I acted as the scientific consultant. The original ideas were formulated during a 2014 Wellcome Trust ‘Experimental Stories’ workshop in a conversation between myself, Nadia, and Linda.