Guest blog: Phenomenological control: Response to imaginative suggestion predicts measures of mirror touch synaesthesia, vicarious pain, and the rubber hand illusion


The Rubber Hand Illusion.  Credit: 30 Second Brain (Ivy Press). Edited by Anil Seth

This is a Guest Blog written by Peter Lush, postdoctoral researcher at the Sackler Centre for Consciousness Science and lead author of this new study.  It’s all about our new preprint.

A key challenge for psychological research is how to measure subjective experience. One domain in which this is particularly relevant is for experiences of ‘embodiment’. These experiences show widespread individual variation and can be surprisingly easy to manipulate. The rubber hand illusion, for example, is a famous effect in which a simple procedure generates experiences of ownership over a fake hand. Experience of the illusion can be measured either directly, through subjective reports of illusion experience, or indirectly, by changes in the felt position of the participant’s own hand. Scientists consider these measures to provide insight into the processes by which conscious experiences of embodiment come about.

However, such interpretations overlook the role of trait (i.e., stable individual) differences in the ability to generate experience to meet expectancies, which we call ‘phenomenological control’. If measures of the rubber hand illusion reflect the active generation of expected experience, then existing accounts of this and related effects will be incomplete or incorrect. Our new preprint, on PsyArXiv, reports the results of three large scale studies (more than 1000 participants in total) investigating the relationship between the ability to change experience to fit situational demands (phenomenological control) and established measures of embodiment. These results have implications not only for interpretation of embodiment measures, but also for any research employing measures taken to reflect subjective experience.

Here are the theoretical motivations:

  • Many people are able to generate compelling experiences in response to expectancies arising from imaginative suggestion presented within the context of ‘hypnosis’. Hypnotic responding is voluntary (nobody can be forced to respond) but is experienced as involuntary. A wide range of experiences can be generated. Examples include visual, auditory or gustatory hallucinations, vivid dreams and apparently involuntary movements.
  • The extent to which individuals can control their phenomenology in response to imaginative suggestion is a normally distributed, stable trait, with good test-retest reliability over a 25-year period. Only a relatively small number of people (10-15%) are unable to respond successfully to imaginative suggestion. Therefore, the majority of participants in any scientific experiment are likely to have at least some phenomenological control abilities.
  • We know that the hypnotic context (e.g., the presence of a hypnotist or the use of induction procedures) is not required for response to imaginative suggestion.
  • The context of a scientific experiment (e.g., the presence of a scientist and the expectancies generated by participants’ preconceptions of science) may, like the hypnotic context, cause participants to engage in the control of phenomenology to meet their interpretations of the response expected by the experimenter or arising from the experimental procedure (for example, the synchronous brushing which is used to induce the rubber hand illusion may act as an implicit imaginative suggestion).
  • Such responding will be experienced as involuntary by the participant, and will generate convincing reports of changes in subjective experience.
  • Any test procedure in which the expectations of the experimenter are discernible to the participant may therefore reflect phenomenological control rather than the stated theoretical targets of interest.

Note that this proposal differs from common understanding of demand characteristics and experimenter effects, which are generally considered to lead to merely behavioural effects (e.g., social compliance). Subjects engaging in phenomenological control will report genuine experiences.

Hypnosis researchers employ standardised scales to measure response to imaginative suggestion within a hypnotic context. A high score on a hypnotisability scale shows that a participant has the ability to generate and control their phenomenology to meet the expectancies communicated by the ‘hypnotist’ through direct suggestion. Here we employed our Sussex Waterloo Scale of Hypnotisability (SWASH), which consists of ten imaginative suggestions for particular experiences (for example, the touch of a mosquito, a sweet or sour taste, hearing music, and involuntary movement). The most parsimonious theories of hypnotic responding argue that response to hypnotic suggestion involves a voluntary mental or physical act which is experienced as involuntary (e.g., Hilgard, 1977; Spanos, 1986; Dienes & Perner, 2007). For example, a successful response to a suggestion that one’s arm will move of its own accord involves generating the inaccurate phenomenology that a voluntary action is involuntary. Similarly, a suggested experience of hearing music would involve an intentional act of imagination which, again, is experienced as unintentional.

We tested our predictions on three embodiment measures. These effects were chosen because they involve striking changes in experience and therefore have much surface similarity with imaginative suggestion effects.

The rubber hand illusion


Figure 1. A large-scale rubber hand illusion study. We tested 353 participants in total, over the course of one week.    

The rubber hand illusion is perhaps the most well-known of all embodiment effects. To induce the illusion, a visible fake hand and the participant’s concealed real hand are stroked in synchrony, so that the felt touch of the brush on the real hand and the seen touch on the fake hand are closely matched. Agreement or disagreement with statements describing illusion experience is recorded on a scale from -3 (indicating strong disagreement) to +3 (indicating strong agreement). Expected effects in the rubber hand illusion may be easy to discern from the induction procedure alone, even if no verbal instructions are given; for example, it may be clear to participants that they are expected to feel the touch delivered to their own hand as located on the fake hand positioned in front of them.

We tested 353 participants, measuring both their SWASH hypnotisability score and their performance in the rubber hand illusion (Figure 1). Consistent with our predictions, hypnotisability scores predicted subjective report scores and also proprioceptive drift (a measure of changes in the felt position of the participant’s hand). Figure 2 shows that, on average, experience of both felt touch and ownership in the rubber hand illusion requires the ability to control phenomenology to meet expectancies. The 353 participants have been divided here into four groups by their hypnotisability score (error bars show 95% CIs). The figure shows agreement scores for the standard illusion statements (used in Botvinick & Cohen, 1998, and in many subsequent studies). Statements S1 (“It seemed as if I were feeling the touch of the paintbrush in the location where I saw the rubber hand touched”) and S2 (“It seemed as though the touch I felt was caused by the paintbrush touching the rubber hand”) describe experiences of felt touch, while statement S3 (“I felt as if the rubber hand were my hand”) describes an experience of ownership.  The least hypnotisable quarter of participants did not on average agree with statements S2 and S3, but this group did agree with statement S1. This is probably attributable to ambiguous phrasing, as the statement can be interpreted as asking about participants’ mundane experience of touch on their own hand (see Botvinick and Cohen, 1998, in which all participants reported maximum agreement with this statement). In any case, factor analysis suggests that agreement with this statement does not reflect experience of embodiment (Longo et al., 2007).


Figure 2. Mean subjective report scores in participants grouped by hypnotisability score (lowest scores on the left side of the chart).
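
For a concrete sense of how a summary like Figure 2 can be put together, here is a minimal sketch in which participants are split into quartiles by SWASH score and mean agreement per statement is computed with error bars. The data and column names below are invented placeholders, not the study’s actual data or analysis code.

```python
import numpy as np
import pandas as pd

# Hypothetical per-participant data (column names are placeholders):
# a SWASH hypnotisability score plus agreement ratings (-3..+3) for S1-S3.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "swash": rng.uniform(0, 5, 353),
    "s1": rng.integers(-3, 4, 353),
    "s2": rng.integers(-3, 4, 353),
    "s3": rng.integers(-3, 4, 353),
})

# Divide the sample into four groups by hypnotisability score (quartiles).
df["swash_group"] = pd.qcut(df["swash"], q=4,
                            labels=["lowest", "low", "high", "highest"])

# Mean agreement per statement within each group, plus the standard error;
# an approximate 95% CI for the error bars is mean +/- 1.96 * sem.
summary = df.groupby("swash_group", observed=True)[["s1", "s2", "s3"]].agg(["mean", "sem"])
print(summary)
```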

In summary, common direct and indirect measures of the rubber hand illusion are substantially related to hypnotisability and on average the illusion does not occur in people unable to respond to hypnotic suggestion.

It’s worth noting that the rubber hand illusion has also been associated with common physiological measures like skin conductance response (SCR), histamine reactivity and body temperature. One might think these measures would be immune to phenomenological control.  However, these physiological properties are known to be susceptible to imaginative suggestion (SCR; histamine reactivity; temperature). We therefore predict similar relationships between hypnotisability and these measures.

Mirror touch synaesthesia and vicarious pain

Mirror touch and vicarious pain are experiences of touch or pain in response to witnessing touch or pain to another person. In a research setting, these effects can be studied using videos showing painful stimuli, or touch to humans and inanimate objects. The primary measure is the proportion of videos which generate an experience of felt touch or pain.

Again, as predicted, hypnotisability score predicted both vicarious pain and mirror touch response. Figure 3 shows the mean number of vicarious pain experiences reported for videos showing a range of apparently painful events (e.g., injections and sporting injuries) in 404 participants. A clear relationship between hypnotisability and vicarious pain response can be seen.


Figure 3.  Mean number of vicarious pain responses in participants grouped by hypnotisability score.

Figure 4 shows the results for mirror touch synaesthesia. Here, a sample of 154 participants were tested. Mirror touch synaesthetes (defined by response to 9 or more videos) were, on average, highly hypnotisable, with a mean score equal to the cut-off for the top 13% of SWASH scores.


Figure 4. Mean hypnotisability scores in participants grouped by the number of reports of mirror touch experience to video stimuli (no response, 1-8 videos and 9-16 videos).

Conclusions

Measures of three prominent embodiment effects – the rubber hand illusion, mirror touch synaesthesia, and vicarious pain – reflect the ability to generate compelling phenomenology in response to imaginative suggestion. At this stage, we do not know to what extent these effects are attributable to phenomenological control. Further work will be necessary to establish whether or not there are, for example, rubber hand illusion effects which do not require phenomenological control abilities. Note also that, if mirror experiences in everyday life are driven by phenomenological control, experimenter-derived expectancies may have a relatively minimal effect on measures of these experiences in the lab (because participants may respond this way to any similar visual stimulus, away from the scientific context).

Note that, because imaginative suggestion can produce changes in brain activity consistent with the suggestion given (e.g., activity in visual brain areas for suggested visual hallucination), phenomenological control may also account for the results of neuroimaging studies of these embodiment effects.

Our results demonstrate that the engagement of phenomenological control abilities to fulfil expectancies can occur within a scientific context and that such abilities may account for a range of subjective embodiment effects. Response to imaginative suggestion does not require a hypnotic induction, or even any hypnotic context. All that is required is the ability to control phenomenology and a context in which phenomenological control can be (unconsciously) interpreted as appropriate. Despite this, the majority of research into imaginative suggestion has been conducted within a hypnotic context, and as a result the possibility that scientific experiments present another such context in which phenomenological control abilities are engaged has been overlooked.

We are now developing a phenomenological control scale with which to investigate phenomenological control in the many other effects across psychological science that could be influenced by participants’ subjective experience. The results we present in this paper may therefore indicate that a reappraisal of empirical results will be necessary across a broad range of fields in behavioural science.

These studies focus on the role of phenomenological control in existing effects. However, phenomenological control should not be seen merely as confounding existing theories and presenting problems for psychological science. Trait differences in the ability to influence perception by top-down influences are a valuable target for scientific investigations of conscious experience in their own right.

*

Lush, P*., Botan, V., Scott, R. B., Seth, A.K., Ward, J., & Dienes, Z. (2019, April 16). Phenomenological control: response to imaginative suggestion predicts measures of mirror touch synaesthesia, vicarious pain and the rubber hand illusion. https://doi.org/10.31234/osf.io/82jav

This research was supported by the Dr Mortimer and Theresa Sackler Foundation, and the Canadian Institute for Advanced Research (CIFAR) Azrieli Programme on Brain, Mind, and Consciousness.

*Corresponding author, and author of this guest blog.

 

 

Time perception without clocks


Salvador Dali, The Persistence of Memory, 1931

Our new paper, led by Warrick Roseboom, is out now (open access) in Nature Communications. It’s about time.

Around sixteen hundred years ago, though who knows how long exactly, Saint Augustine complained “What then is time? If no-one asks me, I know; if I wish to explain to one who asks, I know not.”

The nature of time is endlessly mysterious, in philosophy, in physics, and also in neuroscience. We experience the flow of time, we perceive events as being ordered in time and as having particular durations, yet there are no time sensors in the brain. The eye has rod and cone cells to detect light, the ear has hair cells to detect sound, but there are no dedicated ‘time receptors’ to be found anywhere. How, then, does the brain create the subjective sense of time passing?

Most neuroscientific models of time perception rely on some kind of internal timekeeper or pacemaker, a putative ‘clock in the head’ against which the flow of events can be measured. But despite considerable research, clear evidence for these neuronal pacemakers has been rather lacking, especially when it comes to psychologically relevant timescales of a few seconds to minutes.

An alternative view, and one with substantial psychological pedigree, is that time perception is driven by changes in other perceptual modalities. These modalities include vision and hearing, and possibly also internal modalities like interoception (the sense of the body ‘from within’). This is the view we set out to test in this new study, initiated by Warrick Roseboom here at the Sackler Centre, and Dave Bhowmik at Imperial College London, as part of the recently finished EU H2020 project TIMESTORM.

*

The idea was that one specific aspect of time perception – duration estimation – is based on the rate of accumulation of salient events in other perceptual modalities. More salient changes mean longer estimated durations; fewer salient changes, shorter durations. Warrick set out to test this idea using a neural network model of visual object classification, modified to generate estimates of salient changes when exposed to natural videos of varying lengths (Figure 1).


Figure 1. Experiment design. Both human volunteers (a, with eye tracking) and a pretrained object classification neural network (b) view a series of natural videos of different lengths (c), recorded in different environments (d). Activity in the classification networks is analysed for frame-to-frame ‘salient changes’ and records of salient changes are used to train estimates of duration – based on the physical duration of the video. These estimates are then compared with human reports. We also compare networks trained on gaze-constrained video input versus ‘full frame’ video input.

We first collected several hundred videos of five different environments and chopped them into varying lengths from 1 sec to ~1 min. The environments were quiet office scenes, café scenes, busy city scenes, outdoor countryside scenes, and scenes from the campus of Sussex University.  We then showed the videos to some human participants, who rated their apparent durations. We also collected eye tracking data while they viewed the videos. All in all we obtained over 4,000 duration ratings.

The behavioural data showed that people could do the task, and that – as expected – they underestimated long durations and overestimated short durations (Figure 2a). This ‘regression to the mean’ effect is known as Vierordt’s law and is well established in the time perception literature. Our human volunteers also showed biases according to the video content, rating busy (e.g., city) scenes as lasting longer than non-busy (e.g., office) scenes of the same physical duration. This is just as expected if duration estimation is based on the accumulation of salient perceptual changes.

For the computational part, we used AlexNet, a pretrained deep convolutional neural network (DCNN) which has excellent object classification performance across 1,000 classes of object. We exposed AlexNet to each video, frame by frame. For each frame we examined activity in four separate layers of the network and compared it to the activity elicited by the previous frame. If the difference exceeded an adaptive threshold, we counted a ‘salient event’ and accumulated a unit of subjective time at that level. Finally, we used a simple machine learning tool (a support vector machine) to convert the record of salient events into an estimate of duration in seconds, in order to compare the model with human reports.  There are two important things to note here. The first is that the system was trained on the physical duration of the videos, not on the human estimates (apparent durations). The second is that there is no reliance on any internal clock or pacemaker at all (the frame rate is arbitrary – changing it doesn’t make any difference).
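
To make this accumulate-and-regress logic concrete, here is a minimal sketch. The per-frame ‘activations’ are random stand-ins for real network activity, the adaptive threshold is reduced to a simple decay-and-reset rule, and all parameter values are illustrative rather than those used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

def count_salient_events(acts, thr_max=2.0, thr_min=0.5, decay=0.97):
    """Count 'salient changes' in a sequence of per-frame activation vectors.

    A simple decay-and-reset rule stands in for the adaptive threshold
    described above: the threshold relaxes each frame and resets whenever
    the frame-to-frame change in activity exceeds it.
    """
    thr, events = thr_max, 0
    for prev, cur in zip(acts[:-1], acts[1:]):
        if np.linalg.norm(cur - prev) > thr:
            events += 1
            thr = thr_max                  # reset after a salient event
        else:
            thr = max(thr_min, thr * decay)
    return events

def fake_layer_activity(duration_s, fps=10, n_units=128):
    # Random-walk stand-in for the activity of one network layer; in the
    # real model this would come from layers of a pretrained classifier.
    n_frames = int(duration_s * fps)
    return np.cumsum(rng.normal(0, 0.1, (n_frames, n_units)), axis=0)

# Toy training set: accumulated event counts -> physical video duration.
# (The full model accumulates counts separately for several layers.)
durations = rng.uniform(1, 60, 100)
X = np.array([[count_salient_events(fake_layer_activity(d))] for d in durations])

# Regress event counts onto duration in seconds, trained on the *physical*
# durations (not on human reports), as in the study.
model = SVR(kernel="rbf", C=10.0).fit(X, durations)
print("estimated durations (s):", model.predict(X[:3]))
```

The essential design choice survives the simplification: duration is read out from accumulated classification-network dynamics, with no clock anywhere in the loop.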


Fig 2. Main results. Human volunteers can do the task and show characteristic biases (a).  When the model is trained on ‘full-frame’ data it can also do the task, but the biases are even more severe (b). There is a much closer match to human data when the model input is constrained by human gaze data (c), but not when the gaze locations are drawn from different trials (d).

There were two key tests of the model.  Was it able to perform the task?  More importantly, did it reveal the same pattern of biases as shown by humans?

Figure 2(b) shows that the model indeed performed the task, classifying longer videos as longer than shorter videos.  It also showed the same pattern of biases, though these were more exaggerated than for the human data (a).  But – critically – when we constrained the video input to the model by where humans were looking, the match to human performance was incredibly close (c). (Importantly, this match went away if we used gaze locations from a different video, d). We also found that the model displayed a similar pattern of biases by content, rating busy scenes as lasting longer than non-busy scenes – just as our human volunteers did. Additional control experiments, described in the paper, rule out that these close matches could be achieved just by changes within the video image itself, or by other trivial dependencies (e.g., on frame rate, or on the support vector regression step).

Altogether, these data show that our clock-free model of time-perception, based on the dynamics of perceptual classification, provides a sufficient basis for capturing subjective duration estimation of visual scenes – scenes that vary in their content as well as in their duration. Our model works on a fully end-to-end basis, going all the way from natural video stimuli to duration estimation in seconds.

*

We think this work is important because it comprehensively illustrates an empirically adequate alternative to ‘pacemaker’ models of time perception.

Pacemaker models are undoubtedly intuitive and influential, but they raise the spectre of what Daniel Dennett has called the ‘fallacy of double transduction’. This is the false idea that perceptual systems somehow need to re-instantiate a perceived property inside the head in order for perception to work. Thus perceived redness might require something red-in-the-head, and perceived music might need a little band-in-the-head, together with a complicated system of intracranial microphones. Naturally no-one would explicitly sign up to this kind of theory, but it sometimes creeps unannounced into theories that rely too heavily on representations of one kind or another. And it seems that proposing a ‘clock in the head’ for time perception provides a prime example of an implicit double transduction. Our model neatly avoids the fallacy, and as we say in our Conclusion:

“That our system produces human-like time estimates based on only natural video inputs, without any appeal to a pacemaker or clock-like mechanism, represents a substantial advance in building artificial systems with human-like temporal cognition, and presents a fresh opportunity to understand human perception and experience of time.” (p.7).

We’re now extending this line of work by obtaining neuroimaging (fMRI) data during the same task, so that we can compare the computational model activity against brain activity in human observers (with Maxine Sherman). We’ve also recorded a whole array of physiological signatures – such as heart-rate and eye-blink data – to see whether we can find any reliable physiological influences on duration estimation in this task.  We can’t – and the preprint, with Marta Suarez-Pinilla – is here.

*

Major credit for this study to Warrick Roseboom who led the whole thing, with the able assistance of Zafeirios Fountas and Kyriacos Nikiforou with the modelling. Major credit also to David Bhowmik who was heavily involved in the conception and early stages of the project, and also to Murray Shanahan who provided very helpful oversight. Thanks also to the EU H2020 TIMESTORM project which supported this project from start to finish. As always, I’d also like to thank the Dr. Mortimer and Theresa Sackler Foundation, and the Canadian Institute for Advanced Research, Azrieli Programme in Brain, Mind, and Consciousness, for their support.

*

Roseboom, W., Fountas, Z., Nikiforou, K., Bhowmik, D., Shanahan, M.P., and Seth, A.K. (2019). Activity in perceptual classification networks as a basis for human subjective time perception. Nature Communications. 10:269.

 

Be careful what you measure: Comparing measures of integrated information


Our new paper on ‘measuring integrated information’ is out now, open access, in the journal Entropy. It’s part of a special issue dedicated to integrated information theory.

In consciousness research, ‘integrated information theory’, or IIT, has come to occupy a highly influential and rather controversial position. Acclaimed by some as the most important development in consciousness science so far, critiqued by others as too mathematically abstruse and empirically untestable, IIT is by turns both fascinating and frustrating. Certainly, a key challenge for IIT is to develop measures of ‘integrated information’ that can be usefully applied to actual data. These measures should capture, in empirically interesting and theoretically profound ways, the extent to which ‘a system generates more information than the sum of its parts’. Such measures are also of interest in many domains beyond consciousness research, from physics through to engineering, where notions of ‘dynamical complexity’ are of more general importance.

Adam Barrett and I have been working towards this challenge for many years, both through approximations of the measure Φ (‘phi’, central to the various iterations of IIT) and through alternative measures like ‘causal density’. Alongside new work from other groups, there now exists a range of measures of integrated information – yet so far there has been no systematic comparison of how they perform on non-trivial systems.

This is what we provide in our new paper, led by Adam along with Pedro Mediano from Imperial College London.

*

We describe, using a uniform notation, six different candidate measures of integrated information (among which we count the related measure of ‘causal density’). We set out the intuitions behind each, and compare their properties across a series of criteria. We then explore how they behave on a variety of network models, some very simple, others a little bit more complex.

The most striking finding is that the measures all behave very differently – no two measures show consistent agreement across all our analyses. Here’s an example:


Diverse behavior of measures of integrated information. The six measures (plus two control measures) are shown in terms of their behavior on a simple 2-node network animated by autoregressive dynamics.
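
To give a flavour of what such measures look like in practice, here is a minimal sketch of one simple ‘whole-minus-sum’ style quantity on a simulated two-node autoregressive system: the time-delayed mutual information of the whole system minus the sum of the same quantity over its two parts. This is in the spirit of, but not identical to, the specific measures compared in the paper, and the coupling values are arbitrary.

```python
import numpy as np

def gaussian_mi(past, present):
    """Mutual information (nats) between two jointly Gaussian blocks,
    estimated from samples: I = 0.5 * log(|S_past| * |S_present| / |S_joint|)."""
    joint = np.hstack([past, present])
    S = np.cov(joint, rowvar=False)
    k = past.shape[1]
    det = np.linalg.det
    return 0.5 * np.log(det(S[:k, :k]) * det(S[k:, k:]) / det(S))

# Simulate a two-node autoregressive system: X_t = A @ X_{t-1} + noise.
rng = np.random.default_rng(0)
A = np.array([[0.4, 0.3],
              [0.3, 0.4]])      # cross-coupling is what 'integrates' the nodes
T = 100_000
X = np.zeros((T, 2))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.normal(0.0, 1.0, 2)

past, present = X[:-1], X[1:]

# Whole-minus-sum: information the whole system carries about its own past,
# minus what the two nodes individually carry about their own pasts.
phi_wms = gaussian_mi(past, present) - sum(
    gaussian_mi(past[:, [i]], present[:, [i]]) for i in range(2))
print(f"whole-minus-sum integrated information: {phi_wms:.3f} nats")
```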

At first glance this seems worrying for IIT since, ideally, one would want conceptually similar measures to behave in similar ways when applied to empirical test-cases. Indeed, it is worrying if existing measures are used uncritically. However, by rigorously comparing these measures we are able to identify those which better reflect the underlying intuitions of ‘integrated information’, which we believe will be of some help as these measures continue to be developed and refined.

Integrated information, along with related notions of dynamical complexity and emergence, is likely to be an important pillar of our emerging understanding of complex dynamics in all sorts of situations – in consciousness research, in neuroscience more generally, and beyond biology altogether. Our new paper provides a firm foundation for the future development of this critical line of research.

*

One important caveat is necessary. We focus on measures that are, by construction, applicable to the empirical, or spontaneous, statistically stationary distribution of a system’s dynamics. This means we depart, by necessity, from the supposedly more fundamental measures of integrated information that feature in the most recent iterations of IIT. These recent versions of the theory appeal to the so-called ‘maximum entropy’ distribution since they are more interested in characterizing the ‘cause-effect structure’ of a system than in saying things about its dynamics. This means we should be very cautious about taking our results to apply to current versions of IIT. But, in recognizing this, we also return to where we started in this post. A major issue for the more recent (and supposedly more fundamental) versions of IIT is that they are extremely challenging to operationalize and therefore to put to an empirical test. Our work on integrated information departs from ‘fundamental’ IIT precisely because we prioritise empirical applicability. This, we think, is a feature, not a bug.

*

All credit for this study to Pedro Mediano and Adam Barrett, who did all the work. As always, I’d like to thank the Dr. Mortimer and Theresa Sackler Foundation, and the Canadian Institute for Advanced Research, Azrieli Programme in Brain, Mind, and Consciousness, for their support. The paper was published in Entropy on Christmas Day, which may explain why some of you might’ve missed it!  But it did make the cover, which is nice.

*

Mediano, P.A.M., Seth, A.K., and Barrett, A.B. (2019). Measuring integrated information: Comparison of candidate measures in theory and in simulation. Entropy, 21:17

Can we figure out the brain’s wiring diagram?


The human brain, it is often said, is the most complex object in the known universe. Counting all the connections among its roughly 90 billion neurons, at the rate of one each second, would take about 3 million years – and just counting these connections says nothing about their intricate patterns of connectivity. A new study, published this week in Proceedings of the National Academy of Sciences USA, shows that mapping out these patterns is likely to be much more difficult than previously thought — but also shows what we need to do, to succeed.

Characterizing the detailed point-to-point connectivity of the brain is increasingly recognized as a key objective for neuroscience. Many even think that without knowing the ‘connectome’ – the brain’s wiring diagram – we will never understand how its electrochemical alchemy gives rise to our thoughts, actions, perceptions, beliefs, and ultimately to our consciousness. There is a good precedent for thinking along these lines. Biology has been galvanized by sequencing of the genome (of humans and of other species), and genetic medicine is gathering pace as whole-genome sequencing becomes fast and cheap enough to be available to the many, not just the few. Big-science big-money projects like the Human Genome Project were critical to these developments. Similar efforts in brain science – like the Human Connectome Project in the US and the Human Brain Project in Europe – are now receiving vast amounts of funding (though not without criticism, especially in the European case) (see also here). The hope is that the genetic revolution can be replicated in neuroscience, delivering step changes in our understanding of the brain and in our ability to treat neurological and psychiatric disorders.

Mapping the networks of the human brain relies on non-invasive neuroimaging methods that can be applied without risk to living people. These methods almost exclusively depend on ‘diffusion magnetic resonance imaging (dMRI) tractography’. This technology measures, for each location (or ‘voxel’) in the brain, the direction in which water is best able to diffuse. Taking advantage of the fact that water diffuses more easily along the fibre bundles connecting different brain regions, than across them, dMRI tractography has been able to generate accurate, informative, and surprisingly beautiful pictures of the major superhighways in the brain.
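
As a toy illustration of the underlying idea (and emphatically not the tractography pipeline used in the study), the direction of easiest diffusion in a voxel can be read off a fitted diffusion tensor as its principal eigenvector, with fractional anisotropy summarising how strongly diffusion is confined to that direction. The tensor values below are made up.

```python
import numpy as np

# Toy diffusion tensor for a single voxel (symmetric, positive definite).
# In real dMRI the tensor is fitted from many diffusion-weighted images;
# the values here are invented for illustration (units roughly mm^2/s).
D = np.array([[1.7, 0.1, 0.0],
              [0.1, 0.3, 0.0],
              [0.0, 0.0, 0.3]]) * 1e-3

evals, evecs = np.linalg.eigh(D)        # eigenvalues in ascending order
principal_direction = evecs[:, -1]      # direction in which water diffuses most easily

# Fractional anisotropy: ~0 when diffusion is equal in all directions,
# approaching 1 when diffusion is strongly confined to one axis
# (as inside a coherent fibre bundle).
mean_diffusivity = evals.mean()
fa = np.sqrt(1.5 * np.sum((evals - mean_diffusivity) ** 2) / np.sum(evals ** 2))

print("principal diffusion direction:", np.round(principal_direction, 3))
print("fractional anisotropy:", round(float(fa), 3))
```

Deterministic tractography then builds streamlines by stepping from voxel to voxel along these principal directions – and it is exactly this stepping that, as described below, superficial fibre systems can derail.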

Diffusion MRI of the human brain. Source: Human Connectome Project.

But identifying these neuronal superhighways is only a step towards the connectome. Think of a road atlas: knowing only about motorways may tell you how cities are connected, but it’s not going to tell you how to get from one particular house to another. The assumption in neuroscience has been that as brain scanning improves in resolution and as tracking algorithms gain sophistication, dMRI tractography will be able to reveal the point-to-point long-range anatomical connectivity needed to construct the full connectome.

In a study published this week we challenge this assumption, showing that basic features of brain anatomy pose severe obstacles to measuring cortical connectivity using dMRI. The study, a collaboration between the University of Sussex in the UK and the National Institutes of Health (NIH) in the US, applied dMRI tractography to ultra-high resolution dMRI data obtained from extensive scanning of the macaque monkey brain – data of much higher quality than can be presently obtained from human studies. Our analysis, led by Profs. Frank Ye and David Leopold of NIH and Ph.D student Colin Reveley of Sussex, took a large number of starting points (‘seed voxels’) in the brain, and investigated which other parts of the brain could be reached using dMRI tractography.

The result: roughly half of the brain could not be reached, meaning that even our best methods for mapping the connectome aren’t up to the job. What’s more, by looking carefully at the actual brain tissue where tractography failed, we were able to figure out why. Lying just beneath many of the deep valleys in the brain (the ‘sulci’ – but in some other places too), are dense weaves of neuronal fibres (‘white matter’) running largely parallel to the cortical surface. The existence of these ‘superficial white matter fibre systems’, as we call them, prevents the tractography algorithms from detecting where small tributaries leave the main neuronal superhighways, cross into the cortical grey matter, and reach their destinations. Back to the roads: imagine that small minor roads occasionally leave the main motorways, which are flanked by other major roads busy with heavy traffic. If we tried to construct a detailed road atlas by measuring the flow of vehicles, we might well miss these small but critical branching points.

This image shows, on a colour scale, the ‘reachability’ of different parts of the brain by diffusion tractography.

Identifying the connectome remains a central objective for neuroscience, and non-invasive brain imaging – especially dMRI – is a powerful technology that is improving all the time. But a comprehensive and accurate map of brain connectivity is going to require more than simply ramping up scanning resolution and computational oomph, a message that mega-budget neuroscience might usefully heed. This is not bad news for brain research. Solving a problem always requires fully understanding what the problem is, and our findings open new opportunities and objectives for studies of brain connectivity. Still, it goes to show that the most complex object in the universe is not quite ready to give up all its secrets.


Colin Reveley, Anil K. Seth, Carlo Pierpaoli, Afonso C. Silva, David Yu, Richard C. Saunders, David A. Leopold*, and Frank Q. Ye (2015). Superficial white-matter fiber systems impede detection of long-range cortical connections in diffusion MR tractography. Proc. Natl. Acad. Sci. USA. doi:10.1073/pnas.1418198112

*David A. Leopold is the corresponding author.

Open your MIND

Open MIND is a brand new collection of original research publications on the mind, brain, and consciousness. It is now freely available online. The collection contains altogether 118 articles from 90 senior and junior researchers, in the always-revealing format of target articles, commentaries, and responses.

This innovative project is the brainchild of Thomas Metzinger and Jennifer Windt, of the MIND group of the Johannes Gutenberg University in Mainz, Germany (Windt has since moved to Monash University in Melbourne). The MIND group was set up by Metzinger in 2003 to catalyse the development of young German philosophers by engaging them with the latest developments in philosophy of mind, cognitive science, and neuroscience. Open MIND celebrates the 10th anniversary of the MIND group, in a way that is so much more valuable to the academic community than ‘just another meeting’ with its quick-burn excitement and massive carbon footprint. Editors Metzinger and Windt explain:

“With this collection, we wanted to make a substantial and innovative contribution that will have a major and sustained impact on the international debate on the mind and the brain. But we also wanted to create an electronic resource that could also be used by less privileged students and researchers in countries such as India, China, or Brazil for years to come … The title ‘Open MIND’ stands for our continuous search for a renewed form of academic philosophy that is concerned with intellectual rigor, takes the results of empirical research seriously, and at the same time remains sensitive to ethical and social issues.”

As a senior member of the MIND group, I was lucky enough to contribute a target article, which was commented on by Wanja Wiese, one of the many talented graduate students with Metzinger and a junior MIND group member. My paper marries concepts in cybernetics and predictive control with the increasingly powerful perspective of ‘predictive processing’ or the Bayesian brain, with a focus on interoception and embodiment. I’ll summarize the main points in a different post, but you can go straight to the target paper, Wanja’s commentary, and my response.

Open MIND is a unique resource in many ways. The Editors were determined to maximize its impact, so, unlike in many otherwise similar projects, the original target papers were not circulated prior to launch. This means there is a great deal of highly original material now available to be discovered. The entire project was compressed into about 10 months, from submission of initial drafts to publication this week of the complete collection. This means the original content is completely up-to-date. Also, Open MIND shows how excellent scientific publication can sidestep the main publishing houses, given the highly developed resources now available, coupled of course with extreme dedication and hard work. The collection was assembled, rigorously reviewed, edited, and produced entirely in-house – a remarkable achievement.

Thomas Metzinger with the Open MIND student team

Above all, Open MIND opened a world of opportunity for its junior members, the graduate students and postdocs who were involved in every stage of the project: soliciting and reviewing papers, editing, preparing commentaries, and organizing the final collection. As Metzinger and Windt say:

“The whole publication project is itself an attempt to develop a new format for promoting junior researchers, for developing their academic skills, and for creating a new type of interaction between senior and junior group members.”

The results of Open MIND are truly impressive and will undoubtedly make a lasting contribution to the philosophy of mind, especially in its most powerful multidisciplinary and empirically grounded forms.

Take a look, and open your mind too.

Open MIND contributors: Adrian John Tetteh Alsmith, Michael L. Anderson, Margherita Arcangeli, Andreas Bartels, Tim Bayne, David H. Baßler, Christian Beyer, Ned Block, Hannes Boelsen, Amanda Brovold, Anne-Sophie Brüggen, Paul M. Churchland, Andy Clark, Carl F. Craver, Holk Cruse, Valentina Cuccio, Brian Day, Daniel C. Dennett, Jérôme Dokic, Martin Dresler, Andrea R. Dreßing, Chris Eliasmith, Maximilian H. Engel, Kathinka Evers, Regina Fabry, Sascha Fink, Vittorio Gallese, Philip Gerrans, Ramiro Glauer, Verena Gottschling, Rick Grush, Aaron Gutknecht, Dominic Harkness, Oliver J. Haug, John-Dylan Haynes, Heiko Hecht, Daniela Hill, John Allan Hobson, Jakob Hohwy, Pierre Jacob, J. Scott Jordan, Marius Jung, Anne-Kathrin Koch, Axel Kohler, Miriam Kyselo, Lana Kuhle, Victor A. Lamme, Bigna Lenggenhager, Caleb Liang, Ying-Tung Lin, Christophe Lopez, Michael Madary, Denis C. Martin, Mark May, Lucia Melloni, Richard Menary, Aleksandra Mroczko-Wąsowicz, Saskia K. Nagel, Albert Newen, Valdas Noreika, Alva Noë, Gerard O’Brien, Elisabeth Pacherie, Anita Pacholik-Żuromska, Christian Pfeiffer, Iuliia Pliushch, Ulrike Pompe-Alama, Jesse J. Prinz, Joëlle Proust, Lisa Quadt, Antti Revonsuo, Adina L. Roskies, Malte Schilling, Stephan Schleim, Tobias Schlicht, Jonathan Schooler, Caspar M. Schwiedrzik, Anil Seth, Wolf Singer, Evan Thompson, Jarno Tuominen, Katja Valli, Ursula Voss, Wanja Wiese, Yann F. Wilhelm, Kenneth Williford, Jennifer M. Windt.


Open MIND press release.
The cybernetic Bayesian brain: from interoceptive inference to sensorimotor contingencies
Perceptual presence in the Kuhnian-Popperian Bayesian brain
Inference to the best prediction

Training synaesthesia: How to see things differently in half-an-hour a day

Image courtesy of Phil Wheeler Illustrations

Can you learn to see the world differently? Some people already do. People with synaesthesia experience the world very differently indeed, in a way that seems linked to creativity, and which can shed light on some of the deepest mysteries of consciousness. In a paper published in Scientific Reports, we describe new evidence suggesting that non-synaesthetes can be trained to experience the world much like natural synaesthetes. Our results have important implications for understanding individual differences in conscious experiences, and they extend what we know about the flexibility (‘plasticity’) of perception.

Synaesthesia means that an experience of one kind (like seeing a letter) consistently and automatically evokes an experience of another kind (like seeing a colour), when the normal kind of sensory stimulation for the additional experience (the colour) isn’t there. This example describes grapheme-colour synaesthesia, but this is just one among many fascinating varieties. Other synaesthetes experience numbers as having particular spatial relationships (spatial form synaesthesia, probably the most common of all). And there are other more unusual varieties like mirror-touch synaesthesia, where people experience touch on their own bodies when they see someone else being touched, and taste-shape synaesthesia, where triangles might taste sharp, and ellipses bitter.

The richly associative nature of synaesthesia, and the biographies of famous case studies like Vladimir Nabokov and Wassily Kandinsky (or, as the Daily Wail preferred: Lady Gaga and Pharrell Williams), has fuelled its association with creativity and intelligence. Yet the condition is remarkably common, with recent estimates suggesting about 1 in 23 people have some form of synaesthesia. But how does it come about? Is it in your genes, or is it something you can learn?

It is widely believed that Kandinsky was synaesthetic. For instance he said: “Colour is the keyboard, the eyes are the harmonies, the soul is the piano with many strings. The artist is the hand that plays, touching one key or another, to cause vibrations in the soul”

As with most biological traits the truth is: a bit of both. But this still raises the question of whether being synaesthetic is something that can be learnt, even as an adult.

There is a rather long history of attempts to train people to be synaesthetic. Perhaps the earliest example was by E.L. Kelly, who in 1934 published a paper with the title: An experimental attempt to produce artificial chromaesthesia by the technique of the conditioned response. While this attempt failed (the paper says it is “a report of purely negative experimental findings”), things have now moved on.

More recent attempts, for instance the excellent work of Olympia Colizoli and colleagues in Amsterdam, have tried to mimic (grapheme-colour) synaesthesia by having people read books in which some of the letters are always coloured in with particular colours. They found that it was possible to train people to display some of the characteristics of synaesthesia, like being slower to name coloured letters when they were presented in a colour conflicting with the training (the ‘synaesthetic Stroop’ effect). But crucially, until now no study has found that training could lead to people actually reporting synaesthesia-like conscious experiences.

An extract from the ‘coloured reading’ training material, used in our study, and similar to the material used by Colizoli and colleagues. The text is from James Joyce. Later in training we replaced some of the letters with (appropriately) coloured blocks to make the task even harder.

Our approach was based on brute force. We decided to dramatically increase the length and rigour of the training procedure that our (initially non-synaesthetic) volunteers undertook. Each of them (14 in all) came in to the lab for half-an-hour each day, five days a week, for nine weeks! On each visit they completed a selection of training exercises designed to cement specific associations between letters and colours. Crucially, we adapted the difficulty of the tasks to each volunteer and each training session, and we also gave them financial rewards for good performance. Over the nine-week regime, some of the easier tasks were dropped entirely, and other more difficult tasks were introduced. Our volunteers also had homework to do, like reading the coloured books. Our idea was that the training must always be challenging, in order to have a chance of working.

The results were striking. At the end of the nine-week exercise, our dedicated volunteers were tested for behavioural signs of synaesthesia, and – crucially – were also asked about their experiences, both inside and outside the lab. Behaviourally they all showed strong similarities with natural-born synaesthetes. This was most striking in measures of ‘consistency’, a test which requires repeated selection of the colour associated with a particular letter, from a palette of millions.

The consistency test for synaesthesia. This example is from David Eagleman’s popular ‘synaesthesia battery’.

Natural-born synaesthetes show very high consistency: the colours they pick (for a given letter) are very close to each other in colour space, across repeated selections. This is important because consistency is very hard to fake. The idea is that synaesthetes can simply match a colour to their experienced ‘concurrent’, whereas non-synaesthetes have to rely on less reliable visual memory, or other strategies.
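
As a rough sketch of how consistency can be scored (details differ between test batteries; this is not the exact scoring used in the Eagleman battery or in our study), one can compute the mean pairwise distance in colour space between the repeated picks for a given letter:

```python
import numpy as np

def consistency_score(picks):
    """Mean pairwise distance between repeated colour choices.

    picks: array of shape (n_repeats, 3), RGB values scaled to 0-1,
    for one letter. Smaller scores mean more consistent choices.
    (Scoring details vary between batteries; this is a generic sketch.)
    """
    picks = np.asarray(picks, dtype=float)
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(picks) for b in picks[i + 1:]]
    return float(np.mean(dists))

# Three repeated picks for the letter 'A' by a consistent responder...
consistent = [[0.95, 0.10, 0.12], [0.93, 0.12, 0.10], [0.96, 0.08, 0.15]]
# ...and by someone choosing more or less at random.
inconsistent = [[0.95, 0.10, 0.12], [0.20, 0.70, 0.30], [0.10, 0.15, 0.90]]

print("consistent:  ", round(consistency_score(consistent), 3))
print("inconsistent:", round(consistency_score(inconsistent), 3))
```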

Our trained quasi-synaesthetes passed the consistency test with flying colours (so to speak). They also performed much like natural synaesthetes on a whole range of other behavioural tests, including the synaesthetic Stroop, and a ‘synaesthetic conditioning’ task which shows that trained colours can elicit automatic physiological responses, like increases in skin conductance. Most importantly, most (8/14) of our volunteers described colour experiences much like those of natural synaesthetes (only 2 reported no colour phenomenology at all). Strikingly, some of these experiences took place even outside the lab:

“When I was walking into campus I glanced at the University of Sussex sign and the letters were coloured” [according to their trained associations]

Like natural synaesthetes, some of our volunteers seemed to experience the concurrent colour ‘out in the world’ while others experienced the colours more ‘in the head’:

“When I am looking at a letter I see them in the trained colours”

“When I look at the letter ‘p’ … its like the inside of my head is pink”

For grapheme colour synaesthetes, letters evoke specific colour experiences. Most of our trained quasi-synaesthetes reported similar experiences. This image is however quite misleading. Synaesthetes (natural born or not) also see the letters in their actual colour, and they typically know that the synaesthetic colour is not ‘real’. But that’s another story.

These results are very exciting, suggesting for the first time that with sufficient training, people can actually learn to see the world differently. Of course, since they are based on subjective reports about conscious experiences, they are also the hardest to independently verify. There is always the slight worry that our volunteers said what they thought we wanted to hear. Against this worry, we were careful to ensure that none of our volunteers knew the study was about synaesthesia (and on debrief, none of them did!). Also, similar ‘demand characteristic’ concerns could have affected other synaesthesia training studies, yet none of these led to descriptions of synaesthesia-like experiences.

Our results weren’t just about synaesthesia. A fascinating side effect was that our volunteers registered a dramatic increase in IQ, gaining an average of about 12 IQ points (compared to a control group which didn’t undergo training). We don’t yet know whether this increase was due to the specifically synaesthetic aspects of our regime, or just intensive cognitive training in general. Either way, our findings provide support for the idea that carefully designed cognitive training could enhance normal cognition, or even help remedy cognitive deficits or decline. More research is needed on these important questions.

What happened in the brain as a result of our training? The short answer is: we don’t know, yet. While in this study we didn’t look at the brain, other studies have found changes in the brain after similar kinds of training. This makes sense: changes in behaviour or in perception should be accompanied by neural changes of some kind. At the same time, natural-born synaesthetes appear to have differences both in the structure of their brains, and in their activity patterns. We are now eager to see what kind of neural signatures underlie the outcome of our training paradigm. The hope is, that because our study showed actual changes in perceptual experience, analysis of these signatures will shed new light on the brain basis of consciousness itself.

So, yes, you can learn to see the world differently. To me, the most important aspect of this work is that it emphasizes that each of us inhabits our own distinctive conscious world. It may be tempting to think that while different people – maybe other cultures – have different beliefs and ways of thinking, still we all see the same external reality. But synaesthesia, along with other emerging theories of ‘predictive processing’ – shows that the differences go much deeper. We each inhabit our own personalised universe, albeit one which is partly defined and shaped by other people. So next time you think someone is off in their own little world: they are.


The work described here was led by Daniel Bor and Nicolas Rothen, and is just one part of an energetic inquiry into synaesthesia taking place at Sussex University and the Sackler Centre for Consciousness Science. With Jamie Ward and (recently) Julia Simner also working here, we have a uniquely concentrated expertise in this fascinating area. In other related work I have been interested in why synaesthetic experiences lack a sense of reality and how this gives an important clue about the nature of ‘perceptual presence’. I’ve also been working on the phenomenology of spatial form synaesthesia, and whether synaesthetic experiences can be induced through hypnosis. And an exciting brain imaging study of natural synaesthetes will shortly hit the press! Nicolas Rothen is an authority on the relationship between synaesthesia and memory, and Jamie Ward and Julia Simner have way too many accomplishments in this field to mention. (OK, Jamie has written the most influential review paper in the area – featuring a lot of his own work – and Julia (with Ed Hubbard) has written the leading textbook. That’s not bad to start with.)


Our paper, Adults can be Trained to Acquire Synesthetic Experiences (sorry for US spelling) is published (open access, free!) in Scientific Reports, part of the Nature family. The authors were Daniel Bor, Nicolas Rothen, David Schwartzman, Stephanie Clayton, and Anil K. Seth. There has been quite a lot of media coverage of this work, for instance in the New Scientist and the Daily Fail. Other coverage is summarized here.

Eye Benders: the science of seeing and believing, wins Royal Society prize!


An unexpected post.  I’m very happy to have learnt today that the book Eye Benders has won the 2014 Royal Society Young People’s Book Prize.  Eye Benders was written by Clive Gifford (main author) and me (consultant).  It was published by Ivy Press, who are also the redoubtable publishers of the so-far-prizeless but nonetheless worthy 30 Second Brain. A follow-up to Eye Benders, Brain Twister, is in the works: more brain, fewer optical illusions, but the same high-quality young-person neuroscience fare.

The Royal Society says this about the prize: “Each year the Royal Society awards a prize to the best book that communicates science to young people. The prize aims to inspire young people to read about science and promotes the best science writing for the under-14s.”

This year, the shortlist was chosen by Professor James Hough FRS, Dr Rhaana Starling, Mr Michael Heyes, Professor Iain Stewart and Dr Anjana Ahuja. Well done all, good shortlisting.  More importantly, the winner was chosen by groups of young persons themselves.  Here is what some of the 2014 young people had to say about Eye Benders:

Matt, 12 said “Science from a different perspective. Factual and interesting – a spiral of a read!”

Beth, 14 said “It was way, way cool!”

Ethan, 12 said “The illustrations were absolutely amazing”

Joe, 12 said “A great, well written and well thought-out book; the illustrations are clear, detailed and amazing. The front cover is beautiful.”

Felix, 10 said “Eye popping and mind-blowingly fun!”

So there it is. Matt and friends have spoken, and here is a picture of Clive accepting the award in Newcastle (alas I wasn’t there), accompanied by a young person being enthused:


Here’s a sneak peek at what the book looks like on the inside:


A personal note: I remember well going through the final layouts for Eye Benders, heavily dosed on painkillers in hospital in Barcelona following emergency surgery, while at the same time my father was entering his final weeks back in Oxfordshire. A dark time. It’s lovely, if bittersweet, to see something like this emerge from it.

Other coverage:

GrrlScientist in The Guardian.
Optical illusion book wins Royal Society prize
Clive shares some of the best Eye Benders illusions online
Royal Society official announcement
University of Sussex press release

I just dropped in (to see what condition my condition was in): How ‘blind insight’ changes our view of metacognition


Image from 30 Second Brain, Ivy Press, available at all good booksellers.

Neuroscientists have long appreciated that people can make accurate decisions without knowing they are doing so. This is particularly impressive in blindsight: a phenomenon in which people with damage to the visual parts of their brain can still make accurate visual discriminations while claiming not to see anything. But even in normal life it is quite possible to make good decisions without having reliable insight into whether you are right or wrong.

In a paper published this week in Psychological Science, our research group – led by Ryan Scott – has for the first time shown the opposite phenomenon: blind insight. This is the situation in which people know whether or not they’ve made accurate decisions, even though they can’t make decisions accurately!

This is important because it changes how we think about metacognition. Metacognition, strictly speaking, is ‘knowing about knowing’. When we make a perceptual judgment, or a decision of any kind, we typically have some degree of insight into whether our decision was correct or not. This is metacognition, which in experiments is usually measured by asking people how confident they are in a previous decision. Good metacognitive performance is indicated by high correlations between confidence and accuracy, which can be quantified in various ways.

Most explanations of metacognition assume that metacognitive judgements are based on the same information as the original (‘first-order’) decision. For example, if you are asked to decide whether a dim light was present or not, you might make a (first-order) judgment based on signals flowing from your eyes to your brain. Perhaps your brain sets a threshold below which you will say ‘No’ and above which you will say ‘Yes’. Metacognitive judgments are typically assumed to work on the same data. If you are asked whether you were guessing or were confident, maybe you will set additional thresholds a bit further apart. The idea is that your brain may need more sensory evidence to be confident in judging that a dim light was in fact present, than when merely guessing that it was.

This way of looking at things is formalized by signal detection theory (SDT). The nice thing about SDT is that it can give quantitative mathematical expressions for how well a person can make both first-order and metacognitive judgements, in ways which are not affected by individual biases to say ‘yes’ or ‘no’, or ‘guess’ versus ‘confident’. (The situation is a bit trickier for metacognitive confidence judgements but we can set these details aside for now: see here for the gory details). A simple schematic of SDT is shown below.

sdt

Signal detection theory. The ‘signal’ refers to sensory evidence and the curves show hypothetical probability distributions for stimulus present (solid line) and stimulus absent (dashed line). If a stimulus (e.g., a dim light) is present, then the sensory signal is likely to be stronger (higher) – but because sensory systems are assumed to be noisy (probabilistic), some signal is likely even when there is no stimulus. The difficulty of the decision is shown by the overlap of the distributions. The best strategy for the brain is to place a single ‘decision criterion’ midway between the peaks of the two distributions, and to say ‘present’ for any signal above this threshold, and ‘absent’ for any signal below. This determines the ‘first order decision’. Metacognitive judgements are then specified by additional ‘confidence thresholds’ which bracket the decision criterion. If the signal lies in between the two confidence thresholds, the metacognitive response is ‘guess’; if it lies to the two extremes, the metacognitive response is ‘confident’. The mathematics of SDT allow researchers to calculate ‘bias free’ measures of how well people can make both first-order and metacognitive decisions (these are called ‘d-primes’). As well as providing a method for quantifying decision making performance, the framework is also frequently assumed to say something about what the brain is actually doing when it is making these decisions. It is this last assumption that our present work challenges.
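
As a concrete illustration, here is a minimal sketch (in Python with SciPy; my own toy code, not anything from the paper) of the standard first-order d-prime calculation from a participant's trial counts:

```python
# A minimal sketch of the standard type-1 (first-order) d-prime calculation,
# not the paper's analysis code. d-prime estimates the separation between the
# 'stimulus present' and 'stimulus absent' signal distributions, in standard
# deviation units, from the hit rate and false-alarm rate.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """First-order d-prime from a 2x2 table of trial counts."""
    # Small correction keeps rates of exactly 0 or 1 finite after the z-transform
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 40 hits, 10 misses, 15 false alarms, 35 correct rejections
print(d_prime(40, 10, 15, 35))  # ~1.3, i.e. clearly above chance (d' = 0)
```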

On SDT it is easy to see that one can make above-chance first order decisions while displaying low or no metacognition. One way to do this would be to set your metacognitive thresholds very far apart, so that you are always guessing. But there is no way, on this theory (without making various weird assumptions), that you could be at chance in your first-order decisions, yet above chance in your metacognitive judgements about these decisions.
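
To make the argument concrete, here is a toy simulation (my own sketch, with arbitrary parameter choices, not the paper's model code) of the single-channel SDT model described above, with first-order sensitivity fixed at zero:

```python
# Toy single-channel SDT simulation with d' = 0: the sensory signal carries no
# information about the stimulus, and confidence is read off that same signal
# via symmetric confidence thresholds around the decision criterion.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000

stimulus = rng.integers(0, 2, n_trials)       # 0 = absent, 1 = present
signal = rng.normal(0.0, 1.0, n_trials)       # same distribution either way (d' = 0)

criterion = 0.0                               # first-order decision threshold
decision = (signal > criterion).astype(int)
confident = np.abs(signal - criterion) > 1.0  # signal outside the 'guess' zone

correct = decision == stimulus
print("first-order accuracy:     ", correct.mean())              # ~0.50
print("P(confident | correct):   ", confident[correct].mean())   # ~0.32
print("P(confident | incorrect): ", confident[~correct].mean())  # ~0.32
```

In this toy model, confidence is no more likely on correct than on incorrect trials: a participant at chance in their first-order judgements simply cannot show above-chance metacognition, which is what makes the finding below surprising.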

Surprisingly, until now, no-one had actually checked whether this could happen in practice. This is exactly what we did, and this is exactly what we found. We analysed a large amount of data from a paradigm called artificial grammar learning, which is a workhorse in psychological laboratories for studying unconscious learning and decision-making. In artificial grammar learning, people are shown strings of letters and have to decide whether each string belongs to ‘grammar A’ or ‘grammar B’. Each grammar is just an arbitrary set of rules determining allowable patterns of letters. Over time, most people can learn to classify letter strings at better than chance. However, in a large sample there will always be some people who can’t: for these unfortunates, first-order performance remains at ~50% (in SDT terms, they have a d-prime not different from zero).

agl

Artificial grammar learning. Two rule sets (shown on the left) determine which letter strings belong to ‘grammar A’ or ‘grammar B’. Participants are first shown examples of strings generated by one or the other grammar (training). Importantly, they are not told about the grammatical rules, and in most cases they remain unaware of them. Nonetheless, after some training they are able to successfully (i.e., above chance) classify novel letter strings appropriately (testing).
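
To give a flavour of the paradigm, here is a hypothetical finite-state grammar and string generator (purely illustrative; the actual grammars, letters and string lengths used in the studies differ):

```python
# A toy finite-state grammar: each state maps to the letters that may be
# emitted next and the state they lead to. 'END' marks a legal stopping point.
import random

GRAMMAR_A = {
    0: [("M", 1), ("V", 2)],
    1: [("T", 1), ("V", 3)],
    2: [("X", 2), ("T", 3)],
    3: [("R", 0), ("END", None)],
}

def generate_string(grammar, start=0, max_len=8):
    """Random walk through the grammar, collecting letters until END (or max_len)."""
    state, letters = start, []
    while state is not None and len(letters) < max_len:
        letter, state = random.choice(grammar[state])
        if letter == "END":
            break
        letters.append(letter)
    return "".join(letters)

print([generate_string(GRAMMAR_A) for _ in range(5)])  # e.g. ['MTV', 'VXT', 'MVRVT', ...]
```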

Crucially, subjects in our experiments were asked to make confidence judgments along with their first-order grammaticality judgments. Focusing on those subjects who remained at chance in their first-order judgements, we found that they still showed above-chance metacognition. That is, they were more likely to be confident when they were (by chance) right, than when they were (by chance) wrong. We call this novel finding blind insight.
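
The basic logic of that check can be sketched as follows (a deliberately simplified illustration using raw proportions; the paper itself uses bias-free signal-detection measures of metacognition):

```python
# Given one entry per test trial, ask: was this participant at chance in their
# grammaticality judgements, and yet more likely to report confidence when
# (by chance) correct than when (by chance) wrong?
import numpy as np

def blind_insight_check(correct, confident):
    correct = np.asarray(correct, bool)
    confident = np.asarray(confident, bool)
    return correct.mean(), confident[correct].mean(), confident[~correct].mean()

# Hypothetical trial data: accuracy at 50%, but confidence tracks correctness
acc, p_conf_correct, p_conf_wrong = blind_insight_check(
    correct=[1, 0, 1, 0, 1, 0, 1, 0],
    confident=[1, 0, 1, 0, 0, 0, 1, 1],
)
print(acc, p_conf_correct, p_conf_wrong)  # 0.5, 0.75, 0.25
```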

The discovery of blind insight changes the way we think about decision-making. Our results show that theoretical frameworks based on SDT are, at the very least, incomplete. Metacognitive performance during blind insight cannot be explained by simply setting different thresholds on a single underlying signal. Additional information, or substantially different transformations of the first-order signal, are needed. Exactly what is going on remains an open question. Several possible mechanisms could account for our results. One exciting possibility appeals to predictive processing, which is the increasingly influential idea that perception depends on top-down predictions about the causes of sensory signals. If top-down influences are also involved in metacognition, they could carry the additional information needed for blind insight. This would mean that metacognition, like perception, is best understood as a process of probabilistic inference.

pp

In predictive processing theories of brain function, perception depends on top-down predictions (blue) about the causes of sensory signals. Sensory signals carry ‘prediction errors’ (magenta) which update top-down predictions according to principles of Bayesian inference. Maybe a similar process underlies metacognition. Image from 30 Second Brain, Ivy Press.

This brings us to consciousness (of course). Metacognitive judgments are often used as a proxy for consciousness, on the logic that confident decisions are assumed to be based on conscious experiences of the signal (e.g., the dim light was consciously seen), whereas guesses signify that the signal was processed only unconsciously. If metacognition involves top-down inference, this raises the intriguing possibility that metacognitive judgments actually give rise to conscious experiences, rather than just provide a means for reporting them. While speculative, this idea fits neatly with the framework of predictive processing which says that top-down influences are critical in shaping the nature of perceptual contents.

The discovery of blindsight many years ago substantially changed the way we think about vision. Our new finding of blind insight may similarly change the way we think about metacognition, and about consciousness too.

The paper is published open access (i.e. free!) in Psychological Science. The authors were Ryan Scott, Zoltan Dienes, Adam Barrett, Daniel Bor, and Anil K Seth. There are also accompanying press releases and coverage:

Sussex study reveals how ‘blind insight’ confounds logic.  (University of Sussex, 13/11/2014)
People show ‘blind insight’ into decision making performance (Association for Psychological Science, 13/11/2014)

Accurate metacognition for visual sensory memory


I’m co-author on a new paper in Psychological Science – a collaboration between the Sackler Centre (me and Adam Barrett) and the University of Amsterdam (where I am a Visiting Professor).  The new study addresses the continuing debate about whether the apparently rich content of our visual scenes is somehow an illusion, as suggested by experiments like change blindness.  Here, we provide evidence in the opposite direction by showing that metacognition (literally, cognition about cognition) is equivalent for different kinds of visual memory, including visual ‘sensory’ memory, which reflects brief, unattended stimuli.  The results indicate that our subjective impression of seeing more than we can attend to is not an illusion, but an accurate reflection of the richness of visual perception.

Accurate Metacognition for Visual Sensory Memory Representations.

The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition (the degree of knowledge that subjects have about the correctness of their decisions) for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
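
As a rough illustration of the comparison at the heart of the study (not the authors' analysis, which uses bias-free SDT measures of metacognition), one can ask whether confidence tracks change-detection accuracy equally well for sensory-memory and working-memory trials:

```python
# Crude 'type-2' sensitivity: treat correctness as the signal and confidence as
# the response, then apply the usual d-prime formula. A purely illustrative
# stand-in for the bias-free metacognition measures used in the paper.
import numpy as np
from scipy.stats import norm

def type2_dprime(correct, confident):
    correct = np.asarray(correct, bool)
    confident = np.asarray(confident, bool)
    hit = (confident[correct].sum() + 0.5) / (correct.sum() + 1)
    fa = (confident[~correct].sum() + 0.5) / ((~correct).sum() + 1)
    return norm.ppf(hit) - norm.ppf(fa)

# Hypothetical per-trial data for two conditions of a change-detection task
sensory = {"correct": [1, 1, 0, 1, 0, 1, 1, 0], "confident": [1, 1, 0, 1, 0, 1, 0, 0]}
working = {"correct": [1, 0, 1, 1, 0, 1, 0, 1], "confident": [1, 0, 1, 1, 0, 0, 0, 1]}
print(type2_dprime(**sensory), type2_dprime(**working))  # similar values -> similar metacognition
```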