I just dropped in (to see what condition my condition was in): How ‘blind insight’ changes our view of metacognition

metacog

Image from 30 Second Brain, Ivy Press, available at all good booksellers.

Neuroscientists have long appreciated that people can make accurate decisions without knowing that they are doing so. This is particularly impressive in blindsight: a phenomenon in which people with damage to the visual parts of their brain can still make accurate visual discriminations while claiming not to see anything. But even in normal life it is quite possible to make good decisions without having reliable insight into whether you are right or wrong.

In a paper published this week in Psychological Science, our research group – led by Ryan Scott – has for the first time shown the opposite phenomenon: blind insight. This is the situation in which people know whether or not they’ve made accurate decisions, even though they can’t make decisions accurately!

This is important because it changes how we think about metacognition. Metacognition, strictly speaking, is ‘knowing about knowing’. When we make a perceptual judgment, or a decision of any kind, we typically have some degree of insight into whether our decision was correct or not. This is metacognition, which in experiments is usually measured by asking people how confident they are in a previous decision. Good metacognitive performance is indicated by high correlations between confidence and accuracy, which can be quantified in various ways.

Most explanations of metacognition assume that metacognitive judgements are based on the same information as the original (‘first-order’) decision. For example, if you are asked to decide whether a dim light was present or not, you might make a (first-order) judgment based on signals flowing from your eyes to your brain. Perhaps your brain sets a threshold below which you will say ‘No’ and above which you will say ‘Yes’. Metacognitive judgments are typically assumed to work on the same data. If you are asked whether you were guessing or were confident, maybe you will set additional thresholds a bit further apart. The idea is that your brain may need more sensory evidence to be confident in judging that a dim light was in fact present, than when merely guessing that it was.

This way of looking at things is formalized by signal detection theory (SDT). The nice thing about SDT is that it can give quantitative mathematical expressions for how well a person can make both first-order and metacognitive judgements, in ways which are not affected by individual biases to say ‘yes’ or ‘no’, or ‘guess’ versus ‘confident’. (The situation is a bit trickier for metacognitive confidence judgements but we can set these details aside for now: see here for the gory details). A simple schematic of SDT is shown below.

sdt

Signal detection theory. The ‘signal’ refers to sensory evidence and the curves show hypothetical probability distributions for stimulus present (solid line) and stimulus absent (dashed line). If a stimulus (e.g., a dim light) is present, then the sensory signal is likely to be stronger (higher) – but because sensory systems are assumed to be noisy (probabilistic), some signal is likely even when there is no stimulus. The difficulty of the decision is shown by the overlap of the distributions. The best strategy for the brain is to place a single ‘decision criterion’ midway between the peaks of the two distributions, and to say ‘present’ for any signal above this threshold, and ‘absent’ for any signal below. This determines the ‘first-order decision’. Metacognitive judgements are then specified by additional ‘confidence thresholds’ which bracket the decision criterion. If the signal lies in between the two confidence thresholds, the metacognitive response is ‘guess’; if it lies outside them, in either direction, the metacognitive response is ‘confident’. The mathematics of SDT allow researchers to calculate ‘bias-free’ measures of how well people can make both first-order and metacognitive decisions (these are called ‘d-primes’). As well as providing a method for quantifying decision-making performance, the framework is also frequently assumed to say something about what the brain is actually doing when it is making these decisions. It is this last assumption that our present work challenges.
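The set-up in the figure can be sketched in a few lines of code. The following is a minimal, illustrative simulation of the equal-variance SDT model just described (all parameter values here are arbitrary choices for the sketch, not values from the paper): sensory evidence is sampled from one of two overlapping Gaussians, a single criterion yields the first-order decision, and symmetric confidence thresholds yield the metacognitive response.

```python
import random
from statistics import NormalDist

random.seed(0)

D_PRIME = 1.0              # true separation between the two distributions (illustrative)
CRITERION = D_PRIME / 2    # optimal criterion, midway between the peaks
CONF_HALF_WIDTH = 0.5      # confidence thresholds bracket the criterion (illustrative)
N = 100_000

hits = false_alarms = n_present = n_absent = 0
correct_conf = total_conf = correct_guess = total_guess = 0

for _ in range(N):
    stimulus_present = random.random() < 0.5
    # Sensory evidence: noisy, with a higher mean when the stimulus is present
    signal = random.gauss(D_PRIME if stimulus_present else 0.0, 1.0)
    say_present = signal > CRITERION                        # first-order decision
    confident = abs(signal - CRITERION) > CONF_HALF_WIDTH   # metacognitive judgement
    if stimulus_present:
        n_present += 1
        hits += say_present
    else:
        n_absent += 1
        false_alarms += say_present
    correct = say_present == stimulus_present
    if confident:
        total_conf += 1
        correct_conf += correct
    else:
        total_guess += 1
        correct_guess += correct

# Bias-free first-order sensitivity: d' = z(hit rate) - z(false-alarm rate)
z = NormalDist().inv_cdf
d_est = z(hits / n_present) - z(false_alarms / n_absent)

acc_conf = correct_conf / total_conf
acc_guess = correct_guess / total_guess
print(f"estimated d-prime: {d_est:.2f}")           # recovers roughly the true value of 1.0
print(f"accuracy when confident: {acc_conf:.2f}")  # higher than accuracy when guessing
print(f"accuracy when guessing:  {acc_guess:.2f}")
```

Note that because the decision and the confidence judgement are read off the same signal, confident trials come out more accurate than guesses automatically; on this model, metacognition is parasitic on first-order sensitivity.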

Under SDT it is easy to see how one could make above-chance first-order decisions while displaying little or no metacognition: simply set your metacognitive thresholds very far apart, so that you are always guessing. But there is no way, on this theory (without making various weird assumptions), that you could be at chance in your first-order decisions yet above chance in your metacognitive judgements about those decisions.
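The same kind of sketch makes this impossibility vivid. If we set the first-order d-prime to zero, so that the two distributions coincide, then confidence computed from the same signal carries no information about accuracy (again purely illustrative, under standard equal-variance SDT assumptions):

```python
import random

random.seed(1)

N = 200_000
CRITERION = 0.0
CONF_HALF_WIDTH = 0.5   # illustrative confidence thresholds around the criterion

correct_conf = total_conf = correct_guess = total_guess = 0
for _ in range(N):
    stimulus_present = random.random() < 0.5
    # d-prime = 0: the signal distribution is identical whether or not the
    # stimulus is present, so first-order accuracy is stuck at chance
    signal = random.gauss(0.0, 1.0)
    say_present = signal > CRITERION
    confident = abs(signal - CRITERION) > CONF_HALF_WIDTH
    correct = say_present == stimulus_present
    if confident:
        total_conf += 1
        correct_conf += correct
    else:
        total_guess += 1
        correct_guess += correct

acc_conf = correct_conf / total_conf
acc_guess = correct_guess / total_guess
# Both accuracies sit at ~0.5: confidence based on the same (useless)
# signal cannot predict accuracy, so no blind insight is possible here
print(f"accuracy when confident: {acc_conf:.3f}")
print(f"accuracy when guessing:  {acc_guess:.3f}")
```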

Surprisingly, until now, no-one had actually checked whether this could happen in practice. This is exactly what we did, and this is exactly what we found. We analysed a large amount of data from a paradigm called artificial grammar learning, a workhorse of psychological laboratories for studying unconscious learning and decision-making. In artificial grammar learning, people are shown strings of letters and have to decide whether each string belongs to ‘grammar A’ or ‘grammar B’. Each grammar is just an arbitrary set of rules determining allowable patterns of letters. Over time, most people learn to classify letter strings at better than chance. However, in a large sample there will always be some people who can’t: for these unfortunates, first-order performance remains at ~50% (in SDT terms, a d-prime not different from zero).

agl

Artificial grammar learning. Two rule sets (shown on the left) determine which letter strings belong to ‘grammar A’ or ‘grammar B’. Participants are first shown examples of strings generated by one or the other grammar (training). Importantly, they are not told about the grammatical rules, and in most cases they remain unaware of them. Nonetheless, after some training they are able to successfully (i.e., above chance) classify novel letter strings appropriately (testing).

Crucially, subjects in our experiments were asked to make confidence judgments along with their first-order grammaticality judgments. Focusing on those subjects who remained at chance in their first-order judgements, we found that they still showed above-chance metacognition. That is, they were more likely to be confident when they were (by chance) right, than when they were (by chance) wrong. We call this novel finding blind insight.
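Above-chance metacognition of this kind amounts to an association between confidence and accuracy across trials. As a purely illustrative sketch (the paper itself uses SDT-based measures, and the trial counts below are hypothetical, not real data), a phi coefficient over a 2x2 table of confidence against accuracy captures the idea: a positive phi means the subject is more often confident when correct, even when first-order accuracy sits at exactly 50%.

```python
import math

def phi_coefficient(conf_correct, conf_wrong, guess_correct, guess_wrong):
    """Phi correlation between confidence and accuracy over a 2x2 table.

    A positive value means the person is more often confident when
    correct than when wrong, i.e. above-chance metacognition, and this
    is independent of whether first-order accuracy is above chance.
    """
    a, b, c, d = conf_correct, conf_wrong, guess_correct, guess_wrong
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Hypothetical counts for a subject at chance first-order accuracy
# (50 correct, 50 wrong trials) who is nonetheless more often
# confident on the correct trials: the signature of blind insight
print(phi_coefficient(conf_correct=30, conf_wrong=20,
                      guess_correct=20, guess_wrong=30))  # → 0.2
```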

The discovery of blind insight changes the way we think about decision-making. Our results show that theoretical frameworks based on SDT are, at the very least, incomplete. Metacognitive performance during blind insight cannot be explained by simply setting different thresholds on a single underlying signal. Additional information, or substantially different transformations of the first-order signal, are needed. Exactly what is going on remains an open question. Several possible mechanisms could account for our results. One exciting possibility appeals to predictive processing, which is the increasingly influential idea that perception depends on top-down predictions about the causes of sensory signals. If top-down influences are also involved in metacognition, they could carry the additional information needed for blind insight. This would mean that metacognition, like perception, is best understood as a process of probabilistic inference.

pp

In predictive processing theories of brain function, perception depends on top-down predictions (blue) about the causes of sensory signals. Sensory signals carry ‘prediction errors’ (magenta) which update top-down predictions according to principles of Bayesian inference. Maybe a similar process underlies metacognition. Image from 30 Second Brain, Ivy Press.

This brings us to consciousness (of course). Metacognitive judgments are often used as a proxy for consciousness, on the logic that confident decisions are based on conscious experiences of the signal (e.g., the dim light was consciously seen), whereas guesses signify that the signal was processed only unconsciously. If metacognition involves top-down inference, this raises the intriguing possibility that metacognitive judgments actually give rise to conscious experiences, rather than merely providing a means of reporting them. While speculative, this idea fits neatly with the framework of predictive processing, which says that top-down influences are critical in shaping the nature of perceptual contents.

The discovery of blindsight many years ago has substantially changed the way we think about vision. Our new finding of blind insight may similarly change the way we think about metacognition, and about consciousness too.

The paper is published open access (i.e. free!) in Psychological Science. The authors were Ryan Scott, Zoltan Dienes, Adam Barrett, Daniel Bor, and Anil K Seth. There are also accompanying press releases and coverage:

Sussex study reveals how ‘blind insight’ confounds logic.  (University of Sussex, 13/11/2014)
People show ‘blind insight’ into decision making performance (Association for Psychological Science, 13/11/2014)

Accurate metacognition for visual sensory memory

Image

I’m co-author on a new paper in Psychological Science – a collaboration between the Sackler Centre (me and Adam Barrett) and the University of Amsterdam (where I am a Visiting Professor).  The new study addresses the continuing debate about whether the apparently rich content of our visual scenes is somehow an illusion, as suggested by phenomena like change blindness.  Here, we provide evidence in the opposite direction, by showing that metacognition (literally, cognition about cognition) is equivalent for different kinds of visual memory, including visual ‘sensory’ memory, which reflects brief, unattended stimuli.  The results indicate that our subjective impression of seeing more than we can attend to is not an illusion, but an accurate reflection of the richness of visual perception.

Accurate Metacognition for Visual Sensory Memory Representations.

The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition – the degree of knowledge that subjects have about the correctness of their decisions – for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

The 30 Second Brain

Image

This week I’d like to highlight my new book, 30 Second Brain,  published by Icon Books on March 6th.  It is widely available in both the UK and the USA.  To whet your appetite here is a slightly amended version of the Introduction.

[New Scientist have just reviewed the book]

Understanding how the brain works is one of our greatest scientific quests.  The challenge is quite different from other frontiers in science.  Unlike the bizarre world of the very small in which quantum-mechanical particles can exist and not-exist at the same time, or the mind-boggling expanses of time and space conjured up in astronomy, the human brain is in one sense an everyday object: it is about the size and shape of a cauliflower, weighs about 1.5 kilograms, and has a texture like tofu.  It is the complexity of the brain that makes it so remarkable and difficult to fathom.  There are so many connections in the average adult human brain that, if you counted one each second, it would take you over 3 million years to finish.

Faced with such a daunting prospect it might seem as well to give up and do some gardening instead.  But the brain cannot be ignored.  As we live longer, more and more of us are suffering  – or will suffer – from neurodegenerative conditions like Alzheimer’s disease and dementia, and the incidence of psychiatric illnesses like depression and schizophrenia is also on the rise. Better treatments for these conditions depend on a better understanding of the brain’s intricate networks.

More fundamentally, the brain draws us in because the brain defines who we are.  It is much more than just a machine to think with. Hippocrates, the father of western medicine, recognized this long ago:  “Men ought to know that from nothing else but the brain come joys, delights, laughter and jests, and sorrows, griefs, despondency, and lamentations.” Much more recently Francis Crick – one of the major biologists of our time  – echoed the same idea: “You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules”.  And, perhaps less controversially but just as important, the brain is also responsible for the way we perceive the world and how we behave within it. So to understand the operation of the brain is to understand our own selves and our place in society and in nature, and by doing so to follow in the hallowed footsteps of giants like Copernicus and Darwin.

But how to begin?  From humble beginnings, neuroscience is now a vast enterprise involving scientists from many different disciplines and almost every country in the world.  The annual meeting of the ‘Society for Neuroscience’ attracts more than twenty thousand (and sometimes more than thirty thousand!) brain scientists each year, all intent on talking about their own specific discoveries and finding out what’s new.  No single person – however capacious their brain – could possibly keep track of such an enormous and fast-moving field.  Fortunately, as in any area of science, underlying all this complexity are some key ideas to help us get by.  Here’s where this book can help.

Within the pages of this book, leading neuroscientists will take you on a tour of fifty of the most exciting ideas in modern brain science, using simple plain English.  To start with, in ‘Building the brain’ we will learn about the basic components and design of the brain, and trace its history from birth (and before!), and over evolution.  ‘Brainy theories’ will introduce some of the most promising ideas about how the brain’s many billions of nerve cells (neurons) might work together.  The next chapter will show how new technologies are providing astonishing advances in our ability to map the brain and decipher its activity in time and space.  Then in ‘Consciousness’ we tackle the big question raised by Hippocrates and Crick, namely the still-mysterious relation between the brain and conscious experience – how does the buzzing of neurons transform into the subjective experience of being you, here, now, reading these words? Although the brain basis of consciousness happens to be my own particular research interest, much of the brain’s work is done below its radar – think of the delicate orchestration of muscles involved in picking up a cup, or in walking across the room.  So in the next chapter we will explore how the brain enables perception, action, cognition, and emotion, both with and without consciousness.  Finally, nothing – of course – ever stays the same. In the last chapter – ‘The changing brain’ – we will explore some very recent ideas about how the brain changes its structure and function throughout life, in both health and in disease.

Each of the 50 ideas is condensed into a concise, accessible and engaging ’30 second neuroscience’.  To get the main message across there is also a ‘3 second brainwave’, and a ‘3 minute brainstorm’ provides some extra food for thought on each topic. There are helpful glossaries summarizing the most important terms used in each chapter, as well as biographies of key scientists who helped make neuroscience what it is today.  Above all, I hope to convey that the science of the brain is just getting into its stride. These are exciting times and it’s time to put the old grey matter through its paces.

Update 29.04.14.  Foreign editions now arriving!

30SecBrainMontage

Interoceptive inference, emotion, and the embodied self

Image

Since this is a new blog, forgive a bit of a catch-up.  This is about a recent Trends in Cognitive Sciences opinion article I wrote, applying the framework of predictive processing/coding to interoception, emotion, and the experience of body ownership.  There’s a lot of interest at the moment in understanding how interoception (the sense of the internal state of the body) and exteroception (everything else) interact.  Hopefully this will contribute in some way.  The full paper is here.

Interoceptive inference, emotion, and the embodied self

ABSTRACT:  The concept of the brain as a prediction machine has enjoyed a resurgence in the context of the Bayesian brain and predictive coding approaches within cognitive science. To date, this perspective has been applied primarily to exteroceptive perception (e.g., vision, audition), and action. Here, I describe a predictive, inferential perspective on interoception: ‘interoceptive inference’ conceives of subjective feeling states (emotions) as arising from actively-inferred generative (predictive) models of the causes of interoceptive afferents. The model generalizes ‘appraisal’ theories that view emotions as emerging from cognitive evaluations of physiological changes, and it sheds new light on the neurocognitive mechanisms that underlie the experience of body ownership and conscious selfhood in health and in neuropsychiatric illness.

As always, a pre-copy-edited version is here.