For thousands of years people have wondered about the mystery of consciousness. How can anything made of physical stuff – a brain, for instance – be identical to, or give rise to, a subjective experience?
What started in Minneapolis has swept across America and now spreads around the globe. This moment is delivering a new opportunity to deal with an old enemy, to begin setting right many old wrongs. One thing that has become abundantly clear is that it is not enough to whisper support from the sidelines, to take solace in the self-assessment that ‘I am not racist’ and then sit back to see what happens. Each one of us must look to what we can do to make things better.
I am a Professor of Neuroscience in a medium-sized University in England. My research focuses on the neuroscience of perception and of consciousness – the science of how we experience the world (and the self), and of how ‘experience’ happens at all. These questions are deeply fascinating and have many practical implications. I am lucky to have a career that revolves around such interesting topics. I am keenly aware that this luck has not been equally available.
In my research group of nine, there are no Black people. There have been no Black people among the 22 doctoral students and postdoctoral researchers I have mentored since arriving at Sussex University more than a decade ago. There are no Black people on the editorial board of the academic journal I oversee – a board consisting of 29 researchers from around the world. I am co-director of an international research program: the CIFAR program on Brain, Mind, and Consciousness has no Black people among its 34 Fellows, Advisors, and Global Scholars. This is not acceptable. Unfortunately, it is also entirely normal for neuroscience as a field. When wandering the halls of the Society for Neuroscience Annual Meeting – which regularly draws about 30,000 people – seeing a Black person is always a rare event. This is wrong.
Beyond its exclusionary effects, this systemic bias has a pernicious influence on how neuroscience is done. In studies of brain disorders in America, minority groups make up a tiny percentage of the cohorts on which studies are conducted (less than 5%, according to 2018 figures from the Lieber Institute for Brain Development). When neuroscience excludes Black researchers, neuroscience neglects Black people.
How did we get here? For those of us in comparatively senior positions, it is tempting to put it all down to a lack of qualified Black applicants, and to the scarcity of Black researchers in neuroscience more generally. There is some truth to this, but simply re-describing the situation does not solve the problem.
What can be done? There is an urgent need to encourage and support Black students and researchers at all stages of their education, training, and professional development, through scholarships, mentorships, networking events, and so on. Here, organisations such as the Society for Black Brain and Behavioural Scientists are doing excellent work. But progress will be too slow if action is left only to those who have a direct stake.
I will make a commitment. Recognising that exclusion starts early, I will make time to mentor and advise Black students who are keen to find a way into cognitive neuroscience. The opportunities we take are defined by the opportunities we see, and having a personal connection into a new world can make a real difference. If anyone wants to take me up on this, all you have to do is email me here – include ‘stepping up’ in the subject line.
Besides this, wherever I have a leadership role I will develop strategies to encourage greater participation from and representation of Black people, extending the active programmes that already exist to promote equity, diversity, and inclusion. I will also sign up for bystander intervention training – a step that many of us can take to make sure we are not left useless on the sidelines when something goes down.
I was a brown-skinned boy growing up in white rural Oxfordshire. I’m not a stranger to racism. I had hoped that things would get better, that the future would naturally tend towards diversity and inclusivity, to the benefit of all. But I now understand that history doesn’t write itself. It’s time for each of us to do what we can to make things better.
I was born in 1972, a year before the UK joined the European Economic Community. As I grew up, in rural South Oxfordshire, the idea of being part of a world beyond England helped keep me going, helped me believe things would get better. Half Indian and half Yorkshire, with a name that even I couldn’t pronounce properly, I looked forward to being part of a world with all the beauty and diversity of Europe, a world in which the threat of war and nationalism was receding not growing, war which had taken my grandfather before I knew him, before he knew me.
In 2016, the day after the referendum, I was giving a talk at a New Scientist event in London. I was up first, and began with some words about the sadness I felt about the result. Sadness about the UK turning away from the world with all its opportunities and challenges, and sadness about the national self-harm caused by the lies, greed, complacency, and desperation for power that had brought us to this point, to 52% vs 48%.
Now, despite myself, I am angry.
Apparently, Theresa May is preparing to bring her appalling deal back to parliament for a third ‘meaningful’ vote, running down the clock until there are no options left on the table, until there is no table. The deal on offer has not changed. To call the votes ‘meaningful’ is therefore the most moronic oxymoron I’ve ever heard. There is nothing meaningful in repeating a vote you lose (and lose by massive margins) until you get the result you want.
Of course, this is precisely the logic by which we are told it is unacceptable to go back to ask the people what they think. The people, we are told, have given their instructions, and we are compelled to carry them out whatever the cost. But while May’s ‘deal’ has not changed, the consequences of leaving the EU are now entirely and obviously different from the lies and false promises that people voted on during the referendum itself, in a campaign that is increasingly being revealed as riven with corruption and driven by dubious foreign and economic interests. (And yes, we need our own Mueller.)
To refuse a People’s Vote on the basis that it is a threat to democracy is hypocrisy of the lowest form.
There are many other reasons for sadness and anger. The shapeshifting of our politicians as they jockey for personal advantage amid their self-generated chaos. The airtime given to the far-right headbangers stirring up regressive nationalistic passions to deepen the divisions that are already tearing our country apart. The pandering to the Ulster Unionists and the threat to peace in Northern Ireland. The blatant lies coming from the government as they pull votes, add votes, trot out the same garbage about ‘taking back control’, attempt shameless bribes to get their way, and plough on to the cliff edge regardless. The absence of any effective opposition to what is the most disastrous leadership I or anyone can remember. Cameron and his mates fleeing the scene to chillax in Italy or Portugal or wherever. The disenfranchisement of the young, the back-burnering of all the non-Brexit government business that might actually matter, and all the time and money and hopes and dreams already burnt to ashes on the Brexit trash-fire.
It’s time for all this to stop.
Our society was and is unequal and the dominant neo-liberal complacency needed shaking up. But this is not the way to do it. We are more divided than ever, half of us sold lies and promises of an impossible future, the other half increasingly disconnected from and despairing of the direction we are headed. The EU, while not perfect, cannot be blamed. We brought this on ourselves. And now it’s clear that parliament, once something to be proud of, cannot form a majority for anything – at least not without May’s deadline-day gun-to-the-head and the prospective horror show of her deal rising like a zombie until it finally staggers over the line. This would not be a triumph of diplomacy and democracy. It would be a travesty.
It’s time to go back to the people. Let them take back control.
The short piece below first appeared in Scientific American (Observations) on October 26, 2018. It is a coauthored piece, led by me with contributions from Michael Schartner, Enzo Tagliazucchi, Suresh Muthukumaraswamy, Robin Carhart-Harris, and Adam Barrett. Since its appearance, both Dr. Kastrup and Prof. Kelly have responded. I attach links to their replies after our article, offering a few comments in further response (entirely my own point of view). These comments just offer additional clarifications – I stand fully by everything said in our Sci Am piece.
It’s not easy to strike the right balance when taking new scientific findings to a wider audience. In a recent opinion piece, Bernardo Kastrup and Edward F. Kelly point out that media reporting can fuel misleading interpretations through oversimplification, sometimes abetted by the scientists themselves. Media misinterpretations can be particularly contagious for research areas likely to pique public interest—such as the exciting new investigations of the brain basis of altered conscious experience induced by psychedelic drugs.
Unfortunately, Kastrup and Kelly fall foul of their own critique by misconstruing and oversimplifying the details of the studies they discuss. This leads them towards an anti-materialistic view of consciousness that has nothing to do with the details of the experimental studies—ours or others.
Take, for example, their discussion of our recent study reporting increased neuronal “signal diversity” in the psychedelic state. In this study, we used “Lempel-Ziv” complexity—a standard algorithm used to compress data files—to measure the diversity of brain signals recorded using magnetoencephalography (MEG). Diversity in this sense is related to, though not entirely equivalent to, “randomness.” The data showed widespread increased neuronal signal diversity for three different psychedelics (LSD, psilocybin and ketamine), when compared to a placebo baseline. This was a striking result since previous studies using this measure had only reported reductions in signal diversity, in global states generally thought to mark “decreases” in consciousness, such as (non-REM) sleep and anesthesia.
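For readers who want a concrete sense of the measure, here is a minimal sketch of Lempel-Ziv complexity in Python, using the standard Kaspar-Schuster counting of the LZ76 parsing. This toy version is my own illustration of the core counting step only; in the actual study the inputs were binarized, source-localized MEG signals, and scores were normalized before comparison.

```python
def lz_complexity(s):
    """Count the number of distinct patterns encountered when parsing
    the (binary) string s left to right (Kaspar-Schuster LZ76 counting).
    Regular signals yield low counts; irregular signals yield high counts."""
    n = len(s)
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            # Current pattern still matches earlier data; extend it.
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            if k > k_max:
                k_max = k
            i += 1
            if i == l:
                # No earlier match possible: record a new pattern.
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

# A constant signal is maximally compressible; a periodic one barely less so;
# an irregular one much less.
print(lz_complexity('0' * 16))            # constant: very low
print(lz_complexity('01' * 8))            # periodic: low
print(lz_complexity('0110100110010110'))  # aperiodic: higher
```

Applied channel-by-channel to a binarized recording, higher counts indicate a richer repertoire of temporal patterns — the sense in which "diversity" is related to, but not the same as, randomness.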
Media reporting of this finding led to headlines such as “First evidence found that LSD produces ‘higher’ levels of consciousness” (The Independent, April 19, 2017)—playing on an ambiguity between cultural and scientific interpretations of “higher”—and generating just the kind of confusion that Kastrup and Kelly rightly identify as unhelpful.
Unfortunately, Kastrup and Kelly then depart from the details in misleading ways. They suggest that the changes in signal diversity we found are “small,” when it is not raw magnitude but statistical significance and effect size that matter. Moreover, even small changes to brain dynamics can have large effects on consciousness. And when they compare the changes reported in psychedelic states with those found in sleep and anesthesia, they neglect the important fact that these analyses were conducted on different data types (intracranial data and scalp-level EEG, respectively, compared with source-localized MEG for the psychedelic data), making quantitative comparisons very difficult.
Having set up the notion that the changes we observed were “small,” they then say, “To suggest that brain activity randomness explains psychedelic experiences seems inconsistent with the fact that these experiences can be highly structured and meaningful.” However, neither we nor others claim that “brain activity randomness” explains psychedelic experiences. Our finding of increased signal diversity is part of a larger mission to account for aspects of conscious experience in terms of physiological processes. In our view, higher signal diversity indicates a larger repertoire of physical brain states that very plausibly underpin specific aspects of psychedelic experience, such as a blending of the senses, dissolution of the “ego,” and hyper-animated imagination. As standard functional networks dissolve and reorganize, so too might our perceptual structuring of the world and self.
They conclude: “In short, a formidable chasm still yawns between the extraordinary richness of psychedelic experiences and the modest alterations in brain activity patterns so far observed.” Here, their misrepresentations are again exposed. To call the alterations modest is to misread the statistics. To claim a “formidable chasm” is to misunderstand the incremental nature of consciousness research (and experimental research generally), to sideline the constraints and subtleties of the relevant analyses and to ignore the insights into psychedelic experience that such analyses provide.
Kastrup and Kelly’s final move is to take this presumed chasm as motivation for questioning “materialist” views, held by most neuroscientists, according to which conscious experiences—and mental states in general—are underpinned by brain states. Our study, like all other studies that explore relations between experiential states and brain states (whether about psychedelics or not), is entirely irrelevant to this metaphysical question.
These are not the only inaccuracies in the piece that deserve redress. For example, their suggestion that decreased “brain activity” is one of the more reliable findings of psychedelic research is incorrect. Aside from the well-known stimulatory effects of psychedelics on the excitatory glutamate system, early reports of decreased brain blood flow under psilocybin have not been well replicated: a subsequent study by the same team using a different protocol and drug kinetics (intravenous LSD) found only modest increases in brain blood flow confined to the visual cortex. In contrast, more informative dynamic measures have revealed more consistent findings, with network disintegration, increases in global connectivity and increased signal diversity/entropy appearing to be particularly reliable outcomes, replicated across studies and study teams.
Consciousness science remains a fragile business, poised precariously between grand ambition, conflicting philosophical worldviews, immediate personal relevance and the messy reality of empirical research. Psychedelic research in particular has its own awkward cultural and historical baggage. Against this background, it’s important to take empirical advances for what they are: yardsticks of iterative, self-correcting progress.
This research is providing a unique window onto mappings between mechanism and phenomenology, but we are just beginning to scratch the surface. At the same time—and perhaps more importantly—psychedelic research is demonstrating an exciting potential for clinical use, for example in alleviating depression, though larger and more rigorous studies are needed to confirm and contextualize the promising early findings.
Kastrup and Kelly are right to guard against overplaying empirical findings by the media. But by misrepresenting the explanatory reach of our findings in order to motivate metaphysical discussions irrelevant to our study, they risk undermining the hard-won legitimacy of a neuroscience of consciousness. Empirical consciousness science, based firmly on materialistic assumptions, is doing just fine. And unlike alternative perspectives that place themselves “beyond physicalism,” it will continue to shed light on one of our deepest mysteries through rigorous application of the scientific method.
You can read Dr. Kastrup’s response here, and Prof. Kelly’s here. In the spirit of constructive clarification I will offer a few additional comments on the parts of the work I was involved in: the signal diversity study and the general interpretation of how empirical work on the brain basis of psychedelic research speaks to metaphysical debates about the nature of consciousness. These comments relate mainly to Prof. Kelly’s critique.
(With respect to Dr. Kastrup’s comments I will simply offer, as he no doubt knows, that relating fMRI BOLD to neural activity – in terms of global baseline and regionally differentiated metabolics, functional neuronal connectivity, and so on – remains an area of extremely active research and rapid methodological innovation.)
1. Prof. Kelly notes that we do not provide exact Ns for the data segments we used to compute measures of signal diversity. This is because they varied substantially between drug condition, participant, and analysis method. We do, however, clearly state that “[a]nalyses were performed using non-overlapping segments of length 2 sec for a total length between 2 min and 10 min of MEG recording per participant and state” (Schartner et al 2017, p.5). These numbers indeed lead to a total number of segments ranging from ~3,500 to ~27,000 per participant and per state (since we have 90 channels/sources per segment). Such large numbers provide stable statistical inference (e.g., by the central limit theorem). Also, as mentioned above, the absolute scores on the diversity scale are not as meaningful as effect size and statistical significance. I’d also like to add that in our paper we go to great lengths to establish that our reported diversity changes do not trivially follow from well-known spectral changes in the drug conditions – this is part of the unavoidable computational sophistication of the method, when done properly.
2. When Prof. Kelly says that “relatively simple neuroimaging methods can easily distinguish between wakeful and drowsy states and other commonplace conditions” I do not disagree at all. Our paper was specifically interested in signal diversity as a metric of brain dynamics (and as mentioned above we take care to de-confound our diversity results from spectral changes). Also, we do not claim these diversity changes fully explain the extraordinary phenomenology of psychedelic states. However, I do believe that they contribute helpfully to the incremental empirical project of mapping, in explanatorily satisfying ways, between mechanism and phenomenology. I defend the general approach in this 2016 Aeon article: ‘the “real” problem of consciousness’.
3. I also agree that the measures of signal diversity we apply are only part of the story when mapping between experiential richness and brain dynamics. My lab (and others too) have worked hard on developing empirically adequate measures of ‘neural complexity’, ‘causal density’, and ‘integrated information’ which are theoretically richer – but unfortunately, at least so far, not very robust when applied to actual data – and substantially more computationally demanding. See here for a recent preprint. We have to do what we can with the measures we have, while always striving to generate and validate better measures.
4. I do not buy the claim that near-death-experiences provide an empirical challenge to physicalist neuroscience (as argued by Prof. Kelly). See my previous blog post on this issue (‘the brain’s last hurrah‘).
5. There is no need to impute to me a bias towards physicalism! I explicitly and happily adopt physicalism as a pragmatic metaphysics for pursuing a (neuro)science of consciousness. I can do this while remaining agnostic about the actual ontological status of consciousness. The problem with many alternative metaphysics – in my view – is that they do not lead to testable propositions. Dr Kastrup and Prof Kelly are of course entirely entitled to their own metaphysics. I was merely objecting to their usage of our psychedelic research in support of their metaphysics, because I think it is entirely irrelevant. I simply do not accept that there are any “evident tensions between physicalist expectations and the experimental results [from psychedelic neuroimaging]”.
6. Finally, we can hopefully all agree on the importance of forestalling, as far as possible, media misinterpretations. This is true whatever one’s metaphysics. And it’s why, when our diversity paper first appeared, I felt compelled to pen an immediate corrective right here in this blog (‘Evidence for a higher state of consciousness? Sort of‘).
After posting, I realized I had not specifically responded to Bernardo’s initial reaction to our Sci Am piece. There is some overlap with the above points, but please anyway allow me to correct this oversight here.
1. Clearing the semantic fog. I hope I have made clear my intended distinction between ‘fully explain’ and ‘incrementally account for.’ Again my Aeon piece elaborates the strategy of refining explanatory mappings between mechanism and phenomenology.
2. Metaphysical claims. Our work is consistent with materialism and is motivated by it, but empirical studies like this are not suited to arbitrate between competing metaphysical positions (unless such positions state that there are no relations at all between brains and conscious experiences). Empirical studies like ours try to account for phenomenological properties in terms of mechanisms – but in doing so there is no need to make claims that one is addressing the (metaphysical) ‘hard problem’ of consciousness. Kastrup and Kelly have written that “the psychedelic brain imaging research discussed here has brought us to a major theoretical decision point as to which framework best fits with all the available data” – where ‘physicalism’ is one among several (metaphysical) ‘frameworks’. I continue to think the research discussed here is irrelevant to this ‘decision point’, unless one is deciding to reject frameworks that postulate no relation between consciousness and the brain. The fact that the research is about psychedelics rather than (for example) psychophysics is neither here nor there.
3. What the researchers fail to address. I do not agree with the premise that there is an inconsistency between the dream state and the psychedelic state in terms of neural evidence. As noted above, measures of brain dynamics and activation are being continuously refined and innovated and it is overly simplistic to characterise the relevant dimensions in terms of gross ‘level of activity’. Also, dreams and psychedelia are different. The point about ‘randomness’ I have addressed already (diversity is not presented as an exhaustive explanation of psychedelic phenomenology).
4. A surprising claim. I respectfully refrain from addressing these points about the MRI/MEG studies since I was not involved with them. This does not mean I condone Bernardo’s comments. I will only repeat that brute measures of increased/decreased brain activity are less informative than more sophisticated measures of neural dynamics and connectivity, and studies are accumulating to more precisely map brain changes in psychedelic states.
5. The issue of statistics. It is not meaningful to compare, quantitatively, ‘magnitudes’ in changes in subjective experience with magnitudes of statistical effect size as applied to (for example) our diversity measures. We made this point already in our Sci Am piece. I find it quite natural to suppose that a massively meaningful change in subjective experience might have a subtle neuronal signature in the brain (and as I have said, diversity/randomness is only a small part of any full ‘explanation’ anyway).
6. A non-sequitur. I do think it’s misleading to speak of a “formidable chasm” between “the magnitude of the subjective effects of a psychedelic trance and the accompanying physiological changes”, for the reasons given in point 5 above.
7. Final thoughts. I indeed hope we can all agree that psychedelic research is interesting, exciting, valuable, evolving, clinically important, and generally highly worthwhile. I hope we can also agree, as mentioned above, that forestalling media misrepresentations is important. On other matters I doubt there will be full agreement between my views (and those of my colleagues) and Bernardo’s and Edward’s. They are certainly entitled to their metaphysics. I simply wish to point out (i) our studies do help build explanatory bridges between neural mechanism and psychedelic phenomenology, and (ii) they do not provide any additional reasons to entertain non-physicalist metaphysics.
And with that, I’m afraid I’ll have to draw a line under this interesting discussion – at least for my involvement. I hope it generates some light amid the heat.
So today I’d been planning to write about a new paper from our lab, just out in Neuropsychologia, in which we show how people without synaesthesia can be trained, over a few weeks, to have synaesthesia-like experiences – and that this training induces noticeable changes in their brains. It’s interesting stuff, and I will write about it later, but this morning I happened to read a recent piece by Olivia Goldhill in Quartz with the provocative title: “The idea that everything from spoons to stones are conscious is gaining academic credibility” (Quartz, Jan 27, 2018). This article had come up in a Twitter discussion involving my colleague and friend Hakwan Lau about the challenge of maintaining the academic credibility of consciousness science, with Hakwan noting that provocative articles like this don’t often get the pushback they deserve.
So here’s some pushback.
Goldhill’s article is about panpsychism, which is the idea that consciousness is a fundamental property of the universe, present to some degree everywhere and in everything. Her article suggests that this view is becoming increasingly acceptable and accepted in academic circles, as so-called ‘traditional’ approaches (materialism and dualism) continue to struggle. On the contrary, although it’s true that panpsychism is being discussed more frequently and more openly these days, it remains very much a fringe proposition within consciousness science, taken seriously by few. Nor need it be, since consciousness science is getting along just fine without it. Let me explain how.
From hard problems to real problems
We should start with philosophy. Goldhill correctly identifies David Chalmers’ famous ‘hard problem of consciousness‘ as a key origin of modern panpsychism. This is bolstered by Chalmers’ own increasingly apparent sympathy with this view, as Goldhill’s article makes clear. Put simply, the ‘hard problem’ is about how and why physical interactions of any sort can give rise to conscious experiences. This is indeed a difficult problem, and the apparent unavailability of any current solution is why those who fixate on it might be tempted by the elixir of panpsychism: if consciousness is ‘here, there, and everywhere‘ then there is no longer any hard problem to be solved.
But consciousness science has largely moved on from attempts to address the hard problem (though see IIT, below). This is not a failure, it’s a sign of maturity. Philosophically, the hard problem rests on conceivability arguments such as the possibility of imagining a philosophical ‘zombie’ – a behaviourally and perhaps physically identical version of me, or you, but which lacks any conscious experience, which has no inner universe. Conceivability arguments are generally weak since they often rest on failures of imagination or knowledge, rather than on insights into necessity. For example: the more I know about aerodynamics, the less I can imagine a 787 Dreamliner flying backwards. It cannot be done and such a thing is only ‘conceivable’ through ignorance about how wings work.
In practice, scientists researching consciousness are not spending their time (or their scarce grant money) worrying about conscious spoons; they are getting on with the job of mapping mechanistic properties (of brains, bodies, and environments) onto properties of consciousness. These properties can be described in many different ways, but include – for example – differences between normal wakeful awareness and general anaesthesia; experiences of identifying with and owning a particular body; or distinctions between conscious and unconscious visual perception. If you come to the primary academic meeting on consciousness science – the annual meeting of the Association for the Scientific Study of Consciousness (ASSC) – or read articles either in specialist journals like Neuroscience of Consciousness (I edit this, other journals are available) or in the general academic literature, you’ll find a wealth of work like this and very little – almost nothing – on panpsychism. You’ll find debates on the best way to test whether prefrontal cortex is involved in visual metacognition – but you won’t find any experiments on whether stones are aware. This, again, is maturity, not stagnation. It is also worth pointing out that consciousness science is having increasing impact in medicine, whether through improved methods for detecting residual awareness following brain injury, or via enhanced understanding of the mechanisms underlying psychiatric illness. Thinking about conscious spoons just doesn’t cut it in this regard.
A standard objection at this point is that empirical work touted as being about consciousness science is often about something else: perhaps memory, attention, or visual perception. Yes, some work in consciousness science may be criticized this way, but it is not generally the case. To the extent that the explanatory target of a study encompasses phenomenological properties, or differences between conscious states (e.g., dreamless sleep versus wakeful rest), it is about consciousness. And of course, consciousness is not independent of other cognitive and perceptual processes – so empirical work that focuses on visual perception can be relevant to consciousness even if it does not explicitly contrast conscious and unconscious states.
The next objection goes like this: OK, you may be able to account for properties of consciousness in terms of underlying mechanisms, but this is never going to explain why consciousness is part of the universe in the first place – it is never going to solve the hard problem. Therefore consciousness science is failing. There are two responses to this.
First, wait and see (and ideally do). By building increasingly sophisticated bridges between mechanism and phenomenology, the apparent mystery of the hard problem may dissolve. Certainly, if we stick with simplistic ‘explanations’ – for instance by associating consciousness simply with activity in (for example) the prefrontal cortex, everything may remain mysterious. But if we can explain (for example) the phenomenology of peripheral vision in terms of neurally-encoded predictions of expected visual uncertainty, perhaps we are getting somewhere. It is unwise to pronounce the insufficiency of mechanistic accounts of some putatively mysterious phenomenon before such mechanistic accounts have been fully developed. This is one reason why frameworks like predictive processing are exciting – they provide explanatorily powerful, computationally explicit, and empirically predictive concepts which can help link phenomenology and mechanism. Such concepts can help move beyond correlation towards explanation in consciousness science, and as we move further along this road the hard problem may lose its lustre.
Second, people often seem to expect more from a science of consciousness than they would ask of other scientific explanations. As long as we can formulate explanatorily rich relations between physical mechanisms and phenomenological properties, and as long as these relations generate empirically testable predictions which stand up in the lab (and in the wild), we are doing just fine. Riding behind many criticisms of current consciousness science are unstated intuitions that a mechanistic account of consciousness should be somehow intuitively satisfying, or even that it must allow some kind of instantiation of consciousness in an arbitrary machine. We don’t make these requirements in other areas of science, and indeed the very fact that we instantiate phenomenological properties ourselves might mean that a scientifically satisfactory account of consciousness will never generate the intuitive sensation of ‘ah yes, this is right, it has to be this way’. (Thomas Metzinger makes this point nicely in a recent conversation with Sam Harris.)
Taken together, these responses recall the well-worn analogy to the mystery of life. Not so long ago, scientists thought that the property of ‘being alive’ could never be explained by physics or chemistry. That life had to be more than mere ‘mechanism’. But as biologists got on with the job of accounting for the properties of life in terms of physics and chemistry, the basic mystery of the ontological status of life faded away and people no longer felt the need to appeal to vitalistic concepts like ‘élan vital’. Now of course this analogy is imperfect, and from our current vantage it is impossible to say how closely it will stand up over time. Consciousness and life are not the same (though they may be more closely linked than people tend to think – another story!). But the basic point remains: instead of focusing on a possibly illusory big mystery – and thereby falling for the temptations of easy big solutions like panpsychism – the best strategy is to divide and conquer. Identify properties and account for them, and repeat. Chalmers himself describes something like this strategy when he talks about the ‘mapping problem’, and with tongue-somewhat-in-cheek I’ve called it ‘the real problem of consciousness’.
The lure of integrated information theory
A major boost for modern panpsychism has come from Giulio Tononi’s much discussed – and fascinating – integrated information theory of consciousness (IIT). This is a formal mathematical theory which attempts to derive constraints on the mechanisms of consciousness from axioms about phenomenology. It’s a complex theory (and apparently getting more complex all the time) but the relevance for panpsychism is straightforward. On IIT, any mechanism that integrates information in the right way exhibits consciousness to some degree. And the ability to integrate information is very general, since it depends only on the cause-effect structure of a system.
Tononi actually goes further than this, in a crucial but subtle way. For him, the (integrated) information that counts is based not only on what a system has done (i.e., what states it has been in), but on what a system could do (i.e., what states it could be in, even if it has never occupied, and never will occupy, these states). Technically, this is the difference between the empirical distribution of a system and its maximum entropy distribution. This feature of IIT not only makes it hard (usually impossible) to calculate for nontrivial systems, it pushes further towards panpsychism because it implies an ontological status for certain forms of information – much like John Wheeler’s ‘it from bit’. If (integrated) information is real (and therefore more-or-less everywhere), and if consciousness is based on (integrated) information, then consciousness is also more-or-less everywhere, thus panpsychism.
But this is not the only way to formulate IIT. Several years ago, Adam Barrett and I formulated a measure of integrated information which depends only on the empirical distribution of a system, and now many competing measures exist. These measures can be applied more easily in practice, and they do not directly imply panpsychism because they can be interpreted as explanatory bridges between mechanism and phenomenology (in the ‘real problem’ sense), rather than as claims about what consciousness actually is. So when Goldhill writes that IIT “shares the panpsychist view that physical matter has innate conscious experience” this is only true for the strong version of the theory articulated by Tononi himself. Other views are possible, and more empirically productive.
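The contrast between the two kinds of measure can be made concrete with a toy simulation. The sketch below is purely illustrative – it is not the published Φ measures, and the quantity `phi_like` and the helper `mutual_info` are names I have invented for this example. It estimates, from the empirical distribution alone, how much a two-unit system’s whole state predicts its own future over and above what its parts predict individually:

```python
import numpy as np
from collections import Counter

def mutual_info(xs, ys):
    """Empirical mutual information (in bits) between two paired state sequences."""
    xs, ys = list(xs), list(ys)
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * np.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Toy system: two binary units, each copying the *other* unit's previous
# state, with occasional random bit flips.
rng = np.random.default_rng(0)
T, noise = 200_000, 0.05
x = np.zeros((T, 2), dtype=int)
for t in range(1, T):
    x[t, 0] = x[t - 1, 1] ^ (rng.random() < noise)
    x[t, 1] = x[t - 1, 0] ^ (rng.random() < noise)

# Information the whole system carries about its own next state,
# versus the sum of what each part carries about its own next state.
whole = mutual_info((tuple(r) for r in x[:-1]), (tuple(r) for r in x[1:]))
parts = sum(mutual_info(x[:-1, i], x[1:, i]) for i in range(2))
phi_like = whole - parts  # positive: the whole out-predicts its parts
```

Here each unit’s future is carried entirely by the *other* unit, so `parts` is close to zero while `whole` is substantial, and `phi_like` comes out clearly positive – a cartoon of ‘integration’ computed from observed states only, with no appeal to a maximum entropy distribution.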
Back to science
This leads us to the main problem with panpsychism. It’s not that it sounds crazy, it’s that it cannot be tested. It does not lead to any feasible programme of experimentation. Progress in scientific understanding requires experiments and testability. Given this, it’s curious that Goldhill introduces us to Arthur Eddington, the physicist who experimentally confirmed Einstein’s (totally crazy-sounding) theory of general relativity. Eddington’s immense contribution to experimental physics should not give credence to his views on panpsychism, it should instead remind us of the essential imperative of formulating testable theories, however difficult such tests might be to carry out. (Modern physics is of course now facing a similar testability crisis with string theory.) And outlandish speculations about how quantum entanglement might lead to universe-wide consciousness have no place whatsoever in a rigorous and empirically grounded science of consciousness.
I can’t finish this post without noting that the current attention to panpsychism, especially in the media, has a lot to do with the views of some particularly influential figures in the field: Chalmers and Tononi, but also Christof Koch, whose early work with Francis Crick was fundamental in the rehabilitation of consciousness science in the late 1990s and who continues to be a major figure in the field. These people are all incredibly smart and have made extremely important contributions within consciousness science and beyond. I have learned a great deal from each, and I owe them intellectual debts I will never be able to repay. Having said that, their views on panpsychism are firmly in the minority and should not be over-weighted simply because of their historical contributions and current prominence. Whether there is something about having made such influential contributions that leads to a tendency to adopt countercultural (and difficult to test) views later on – well that’s for another day and another writer.
At the end of her piece, Goldhill quotes Chalmers quoting the philosopher John Perry who says: “If you think about consciousness long enough, you either become a panpsychist or you go into administration.” Perhaps the problem lies in only thinking. We should instead complement only thinking with the challenging empirical work of explaining properties of consciousness in terms of biophysical mechanisms. Then we can say: If you work on consciousness long enough, you either become a neuroscientist or you become a panpsychist. I know where I’d rather be – with my many colleagues who are not worrying about conscious spoons but who are trying, and little by little succeeding, to unravel the complex biophysical mechanisms that shape our subjective experiences of world and self. And now it’s high time I got back to that paper on training synaesthesia.
(For more general discussions about consciousness science, where it’s at and where we’re going, have a listen to my recent conversation with Sam Harris. Make sure you have time for it though, it clocks in at over three hours …)
Zoë Wanamaker as Lorna in Nick Payne’s Elegy.
“The brain is wider than the sky,
For, put them side by side,
The one the other will contain,
With ease, and you beside”
Emily Dickinson, Complete Poems, 1924
What does it mean to be a self? And what happens to the social fabric of life, to our ethics and morality, when the nature of selfhood is called into question?
In neuroscience and psychology, the experience of ‘being a self’ has long been a central concern. One of the most important lessons, from decades of research, is that there is no single thing that is the self. Rather, the self is better thought of as an integrated network of processes that distinguish self from non-self at many different levels. There is the bodily self – the experience of identifying with and owning a particular body, which at a more fundamental level involves the amorphous experience of being a self-sustaining organism. There is the perspectival self, the experience of perceiving the world from a particular first-person point-of-view. The volitional self involves experiences of intention and agency, of urges to do this-or-that (or, perhaps more importantly, to refrain from doing this-or-that) and of being the cause of things that happen.
At higher levels we encounter narrative and social selves. The narrative self is where the ‘I’ comes in, as the experience of being a continuous and distinctive person over time. This narrative self – the story we tell ourselves about who we are – is built from a rich set of autobiographical memories that are associated with a particular subject. Finally, the social self is that aspect of my self-experience and personal identity that depends on my social milieu, on how others perceive and behave towards me, and on how I perceive myself through their eyes and minds.
In daily life, it can be hard to differentiate these dimensions of selfhood. We move through the world as seemingly unified wholes, our experience of bodily self seamlessly integrated with our memories from the past, and with our experiences of volition and agency. But introspection can be a poor guide. Many experiments and neuropsychological case studies tell a rather different story, one in which the brain actively and continuously generates and coordinates these diverse aspects of self-experience.
The many ways of being a self can come apart in surprising and revealing situations. For example, it is remarkably easy to alter the experience of bodily selfhood. In the so-called ‘rubber hand illusion,’ I ask you to focus your attention on a fake hand while your real hand is kept out of sight. If I then simultaneously stroke your real hand and the fake hand with a soft paintbrush, you may develop the uncanny feeling that the fake hand is now, somehow, part of your body. A more dramatic disturbance of the experience of body ownership happens in somatoparaphrenia, a condition in which people experience that part of their body is no longer theirs, that it belongs to someone else – perhaps their doctor or family member. Both these examples involve changes in brain activity, in particular within the ‘temporo-parietal junction’, showing how even very basic aspects of personal identity are actively constructed by the brain.
Moving through levels of selfhood, autoscopic hallucinations involve seeing oneself from a different perspective, much like ‘out of body’ experiences. In akinetic mutism, people seem to lack any experiences of volition or intention (and do very little), while in schizophrenia or anarchic hand syndrome, people can experience their intentions or voluntary actions as having external causes. At the other end of the spectrum, disturbances of social self emerge in autism, where difficulties in perceiving others’ states of mind seem to be a core problem, though the exact nature of the autistic condition is still much debated.
When it comes to the ‘I’, memory is the key. Specifically, autobiographical memory: the recollection of personal experiences of people, objects, places, and other episodes from an individual’s life. While there are as many types of memory as there are varieties of self (for example, we have separate memory processes for facts, for the short term and the long term, and for skills that we learn), autobiographical memories are those most closely associated with our sense of personal identity. This is well illustrated by some classic medical cases in which, as a result of surgery or disease, the ability to lay down new memories is lost. In 1953 Henry Molaison (also known as the patient HM) had large parts of his medial temporal lobes removed in order to relieve severe epilepsy. From 1957 until his death in 2008, HM was studied closely by the neuropsychologist Brenda Milner, yet he was never able to remember meeting her. In 1985 the accomplished musician Clive Wearing suffered a severe viral brain disease that affected similar parts of his brain. Now 77, he frequently believes he has just awoken from a coma, spending each day in a constant state of re-awakening.
Surprisingly, both HM and Wearing remained able to learn new skills, forming new ‘procedural’ memories, despite never recalling the learning process itself. Wearing could still play the piano, and conduct his choir, though he would immediately forget having done so. The music appears to carry him along from moment to moment, restoring his sense of self in a way his memory no longer can. And his love for his wife Deborah seems undiminished, so that he expresses an enormous sense of joy on seeing her, even though he cannot tell whether their last meeting was years, or seconds, in the past. Love, it seems, persists when much else is gone.
For people like HM and Clive Wearing, memory loss has been unintended and unwanted. But as scientific understanding develops, could we be moving towards a world where specific memories and elements of our identity can be isolated or removed through medical intervention? And could the ability to lay down new memories ever be surgically restored? Some recent breakthroughs suggest these developments may not be all that far-fetched.
In 2013, Jason Chan and Jessica LaPaglia, from Iowa State University, showed that specific human memories could indeed be deleted. They took advantage of the fact that when memories are explicitly recalled they become more vulnerable. By changing details about a memory, while it was being remembered, they induced a selective amnesia which lasted for at least 24 hours. Although an important advance, this experiment was limited by relying on ‘non-invasive’ methods – which means not using drugs or directly interfering with the brain.
More recent animal experiments have shown even more striking effects. In a ground-breaking 2014 study at the University of California, using genetically engineered mice, Sadegh Nabavi and colleagues managed to block and then re-activate a specific memory. They used a powerful (invasive) technique called optogenetics to activate (or inactivate) the biochemical processes determining how neurons change their connectivity. And elsewhere in California, Ted Berger is working on the first prototypes of so-called ‘hippocampal prostheses’ which replace a part of the brain essential for memory with a computer chip. Although these advances are still a long way from implementation in humans, they show an extraordinary potential for future medical interventions.
The German philosopher Thomas Metzinger believes that “no such things as selves exist in the world”. Modern neuroscience may be on his side, with memory being only one thread in the rich tapestry of processes shaping our sense of selfhood. At the same time, the world outside the laboratory is still full of people who experience themselves – and each other – as distinct, integrated wholes. How the new science of selfhood will change this everyday lived experience, and society with it, is a story that is yet to be told.
Originally commissioned for the Donmar Warehouse production of Elegy, with support from The Wellcome Trust. Reprinted in the programme notes and in Nick Payne’s published script.
As a scientist, consciousness has always fascinated me. But understanding consciousness is not a project for science alone. Throughout history, philosophers, artists, storytellers, and musicians have all wondered about the apparent miracle of conscious awareness. Even today, while science might give us our best shot at figuring out the brain – the organ of experience – we need, more than ever, a melding of the arts and sciences, of contemporary and historical approaches, to understand what consciousness really is, to grasp what we mean by, as Mark Haddon eloquently puts it, “Life in the first person.”
This quote comes from Haddon’s beautiful introductory essay to a major new exhibition at the Wellcome Collection in London. Curated by Emily Sargent, States of Mind: Tracing the edges of consciousness “examines perspectives from artists, psychologists, philosophers and neuroscientists to interrogate our understanding of the conscious experience”. It’s a fantastic exhibition, with style and substance, and I feel very fortunate to have been involved as an advisor from its early stages.
What’s so special about consciousness?
Consciousness is at once the most familiar and the most mysterious aspect of our existence. Conscious experiences define our lives, but the private, subjective what-it-is-likeness of these experiences seems to resist scientific enquiry. Somehow, within each of our brains the combined activity of many billions of neurons, each one a tiny biological machine, is giving rise to a conscious experience. Your conscious experience: right here, right now, reading these words. How does this happen? Why is life in the first person?
In one sense, this seems like the kind of mystery ripe for explanation. Borrowing again from Mark Haddon, the raw material of consciousness is not squirreled away deep inside an atom, it’s not happening 14 billion years ago, and it’s not hiding out on the other side of the universe. It’s right here in front of – or rather behind – our eyes. That said, the brain is a remarkably complex object. It’s not so much the sheer number of neurons (though there are about 90 billion). It’s the complexity of its wiring: there are so many connections that if you counted one every second it would take you 3 million years to finish. Is it not possible that an object of such extraordinary complexity should be capable of extraordinary things?
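That counting claim checks out on the back of an envelope, assuming the commonly quoted rough figure of around 10^14 synaptic connections in a human brain (the figure is my assumption, not stated in the text):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365    # ~3.15e7 seconds
connections = 1e14                        # assumed rough synapse count
years = connections / SECONDS_PER_YEAR
print(f"{years / 1e6:.1f} million years")  # → 3.2 million years
```

One connection per second, without sleep or breaks, really does come to roughly three million years.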
People have been thinking about consciousness since they’ve been thinking at all. Hippocrates, the founder of modern medicine, said: “Men ought to know that from the brain, and from the brain only, arise our pleasures, joys, laughter and jests, as well as our sorrows, pains, griefs and tears … Madness comes from its moistness.” (Aristotle, by the way, got it wrong, thinking the brain hadn’t much to do with consciousness at all.)
Fast forward to Francis Crick, whose ‘astonishing hypothesis’ in the 1990s deliberately echoed Hippocrates: “You, your joys and your sorrows, your memories and your ambitions … and so on … are in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules”. Crick, who I was lucky enough to meet during my time in America, was working on the neurobiology of consciousness even on the day he died. You will see some of his personal notes, and his perplexing plasticine brain models, in States of Mind.
A major landmark in thinking about consciousness is of course Descartes, who in the 17th Century distinguished between “mind stuff” (res cogitans) and “matter stuff” (res extensa), so giving rise to the now infamous mind-body problem and the philosophy of dualism. It’s a great thrill to see an original copy of Descartes’ De Homine as part of this exhibition. Its modern incarnation as David Chalmers’ so-called ‘hard problem’ has recently gained enough cultural notoriety even to inspire a Tom Stoppard play (though for my money Alex Garland’s screenplay for Ex Machina is the more perspicuous). The idea of the hard problem is this: Even if we knew everything about how the operations of the brain give rise to perception, cognition, learning, and behaviour, a problem would still remain: why and how should any of this be associated with consciousness at all? Why is life in the first person?
How to define consciousness? One simple definition is that for a conscious organism there is something it is like to be that organism. Or, one can simply say that consciousness is what disappears when we fall into a dreamless sleep, and what returns when we wake up or start dreaming. A bit more formally, for conscious organisms there exists a continuous (though interruptible) stream of conscious scenes – a phenomenal world – which has the character of being subjective and private. The material in States of Mind can help us encounter these ideas with a bit more clarity and force, by focusing on the edges – the liminal boundaries – of consciousness.
First there is conscious level: the difference between being awake and, let’s say, under general anaesthesia. Here, neuroscience now tells us that there is no single ‘generator’ of consciousness in the brain; rather, being conscious depends on highly specific ways in which different parts of the brain speak to each other. Aya Ben Ron’s film of patients slipping away under anaesthesia is a beautiful exploration of this process, as is the whole section on ‘SLEEP | AWAKE’.
Then there is conscious content: what we are conscious of, when we are conscious. These are the perceptions, thoughts, and emotions that populate our ever-flowing stream of awareness. Here, current research is revealing that our perceptual world is not simply an internal picture of some external reality. Rather, conscious perception depends on the brain’s best guesses, or hypotheses, about the causes of sensory data. Perception is therefore a continuously creative act that is tightly bound up with imagination, so that our experience of the world is a kind of ‘controlled hallucination’, a fantasy that – usually, but not always – coincides with reality. The material on synaesthesia in States of Mind beautifully illuminates this process by showing how, for some of us, these perceptual fantasies can be very different – that we all have our own distinctive inner universes. You can even try training yourself to become a ‘synaesthete’ with a demo of some of our own research, developed for this exhibition. Many thanks to Dr. David Schwartzman of the Sackler Centre for making this happen.
Finally there is conscious self – the specific experience of being me, or being you. While this might seem easy to take for granted, the experience of being a self requires explanation just as much as any other kind of experience. It too has its edges, its border regions. Here, research is revealing that conscious selfhood, though experienced as unified, can come apart in many different ways. For example, our experience of being and having a particular body can dissociate from our experience of being a person with a name and a specific set of memories. Conscious selfhood, like all conscious perception, is therefore another controlled hallucination maintained by the brain. The section BEING | NOT BEING dramatically explores some of these issues, for example by looking at amnesia with Shona Illingworth, and with Adrian Owen’s seminal work on the possibility of consciousness even after severe brain injury.
This last example brings up an important point. Besides the allure of basic science, there are urgent practical motivations for studying consciousness. Neurological and psychiatric disorders are increasingly common and can often be understood as disturbances of conscious experience. Consciousness science promises new approaches and perhaps new treatments for these deeply destructive problems. Scoping out further boundary areas, studying the biology of consciousness can shed new light on awareness in newborn infants and in non-human animals, informing ethical debates in these areas. Above all, consciousness science carries the promise of understanding more about our place in nature. Following the tradition of Copernicus and Darwin, a biological account of conscious experience will help us see ourselves as part of, not apart from, the rest of the universe.
Let’s finish by returning to this brilliant exhibition, States of Mind. What I found most remarkable are the objects that Emily Sargent has collected together. Whether it’s Descartes’ De Homine, Ramón y Cajal’s delicate ink drawings of neurons, or Francis Crick’s notebooks and models, these objects bring home and render tangible the creativity and imagination which people have brought to bear in their struggle to understand consciousness, over hundreds of years. For me, this brings a new appreciation and wonder to our modern attempts to tackle this basic mystery of life. Emily Dickinson, my favourite poet of neuroscience, put it like this. “The brain is wider than the sky, for – put them side by side – the one the other will contain, with ease, and you – beside.”
States of Mind is at the Wellcome Collection in London from Feb 4th until October 16th 2016 and is curated by Emily Sargent. Sackler Centre researchers, in particular David Schwartzman and myself, helped out as scientific advisors. This text is lightly adapted from a speech I gave at the opening event on Feb 3rd 2016. Watch this space, and visit the exhibition website, for news about special events on consciousness that will happen throughout the year.
IT’S a rare thing to see a movie about science that takes no prisoners intellectually. Alex Garland’s Ex Machina is just that: a stylish, spare and cerebral psycho-techno-thriller, which gives a much-needed shot in the arm for smart science fiction.
Reclusive billionaire genius Nathan, played by Oscar Isaac, creates Ava, an intelligent and very attractive robot played by Alicia Vikander. He then struggles with the philosophical and ethical dilemmas his creation poses, while all hell breaks loose. Many twists and turns add nuance to the plot, which centres on the evolving relationships between the balletic Ava and Caleb (Domhnall Gleeson), a hotshot programmer invited by Nathan to be the “human component in a Turing test”, and between Caleb and Nathan, as Ava’s extraordinary capabilities become increasingly apparent.
Everything about this movie is good. Compelling acting (with only three speaking parts), exquisite photography and set design, immaculate special effects, a subtle score and, above all, a hugely imaginative screenplay combine under Garland’s precise direction to deliver a cinematic experience that grabs you and never lets go.
The best science fiction often tackles the oldest questions. At the heart of Ex Machina is one of our toughest intellectual knots, that of artificial consciousness. Is it possible to build a machine that is not only intelligent but also sentient: that has consciousness, not only of the world but also of its own self? Can we construct a modern-day Golem, that lumpen being of Jewish folklore which is shaped from unformed matter and can both serve humankind and turn against it? And if we could, what would happen to us?
Putting aside the tedious business of actually building a conscious AI, we face the challenge of figuring out whether the attempt succeeds. The standard reference for this sort of question is Alan Turing’s eponymous test, in which a human judge interrogates both a candidate machine and another human. A machine passes the test when the judge consistently fails to distinguish between them.
While the Turing test has provided a trope for many AI-inspired movies (such as Spike Jonze’s excellent Her), Ex Machina takes things much further. In a sparkling exchange between Caleb and Nathan, Garland nails the weakness of Turing’s version of the test, a focus on the disembodied exchange of messages, and proposes something far more interesting. “The challenge is to show you that she’s a robot. And see if you still feel she has consciousness,” Nathan says to Caleb.
This shifts the goalposts in a vital way. What matters is not whether Ava is a machine. It is not even whether Ava, even though a machine, can be conscious. What matters is whether Ava makes a conscious person feel that Ava is conscious. The brilliance of Ex Machina is that it reveals the Turing test for what it really is: a test of the human, not of the machine. And Garland is not necessarily on our side.
Is consciousness a matter of social consensus? Is it more relevant whether people believe (or feel) that something (or someone) is conscious than whether it is in fact actually conscious? Or, does something being “actually conscious” rest on other people’s beliefs about it being conscious, or on its own beliefs about its consciousness (beliefs that may themselves depend on how it interprets others’ beliefs about it)? And exactly what is the difference between believing and feeling in situations like this?
It seems to me that my consciousness, here and now, is not a matter of social consensus or of my simply believing or feeling that I am conscious. It seems to me, simply, that I am conscious here and now. When I wake up and smell the coffee, there is a real experience of coffee-smelling going on.
But let me channel Ludwig Wittgenstein, one of the greatest philosophers of the 20th century, for a moment. What would it seem like if it seemed to me that my being conscious were a matter of social consensus or beliefs or feelings about my own conscious status? Is what it “seems like” to me relevant at all when deciding how consciousness comes about or what has consciousness?
Before vanishing completely into a philosophical rabbit hole, it is worth saying that questions like these are driving much influential current research on consciousness. Philosophers and scientists like Daniel Dennett, David Rosenthal and Michael Graziano defend, in various ways, the idea that consciousness is somehow illusory and what we really mean in saying we are conscious is that we have certain beliefs about mental states, that these states have distinctive functional properties, or that they are involved in specific sorts of attention.
Another theoretical approach accepts that conscious experience is real and sees the problem as one of determining its physical or biological mechanism. Some leading neuroscientists such as Giulio Tononi, and recently, Christof Koch, take consciousness to be a fundamental property, much like mass-energy and electrical charge, that is expressed through localised concentrations of “integrated information”. And others, like philosopher John Searle, believe that consciousness is an essentially biological property that emerges in some systems but not in others, for reasons as-yet unknown.
In the film we hear about Searle’s Chinese Room thought experiment. His premise was that researchers had managed to build a computer programmed in English that can respond to written Chinese with written Chinese so convincingly it easily passes the Turing test, persuading a human Chinese speaker that the program understands and speaks Chinese. Does the machine really “understand” Chinese (Searle called this “strong AI”) or is it only simulating the ability (“weak” AI)? There is also a nod to the notional “Mary”, the scientist, who, while knowing everything about the physics and biology of colour vision, has only ever experienced black, white and shades of grey. What happens when she sees a red object for the first time? Will she learn anything new? Does consciousness exceed the realms of knowledge?
All of the above illustrates how academically savvy and intellectually provocative Ex Machina is. Hat-tips here to Murray Shanahan, professor of cognitive robotics at Imperial College London, and writer and geneticist Adam Rutherford, whom Garland did well to enlist as science advisers.
Not every scene invites deep philosophy of mind, with the film encompassing everything from ethics, the technological singularity, Ghostbusters and social media to the erosion of privacy, feminism and sexual politics within its subtle scope. But when it comes to riffing on the possibilities and mysteries of brain, mind and consciousness, Ex Machina doesn’t miss a trick.
As a scientist, it is easy to moan when films don’t stack up against reality, but there is usually little to be gained from nitpicking over inaccuracies and narrative inventions. Such criticisms can seem petty and reinforcing of the stereotype of scientists as humourless gatekeepers of facts and hoarders of equations. But these complaints sometimes express a sense of missed opportunity rather than injustice, a sense that intellectual riches could have been exploited, not sidelined, in making a good movie. AI, neuroscience and consciousness are among the most vibrant and fascinating areas of contemporary science, and what we are discovering far outstrips anything that could be imagined out of thin air.
In his directorial debut, Garland has managed to capture the thrill of this adventure in a film that is effortlessly enthralling, whatever your background. This is why, on emerging from it, I felt lucky to be a neuroscientist. Here is a film that is better because of, not despite, its engagement with its intellectual inspiration.
The original version of this piece was published as a Culture Lab article in New Scientist on Jan 21. I am grateful to the New Scientist for permission to reproduce it here, and to Liz Else for help with editing. I will be discussing Ex Machina with Dr. Adam Rutherford at a special screening of the film at the Edinburgh Science Festival (April 16, details and tickets here).
World War Two was won not just with tanks, guns, and planes, but by a crack team of code-breakers led by the brilliant and ultimately tragic figure of Alan Turing. This is the story as told in The Imitation Game, a beautifully shot and hugely popular film which nonetheless left me nursing a deep sense of missed opportunity. True, Benedict Cumberbatch is brilliant, spicing his superb Holmes with a dash of Russell Crowe’s John Nash (A Beautiful Mind) to propel geek rapture into yet higher orbits. (See also Eddie Redmayne and Stephen Hawking.)
The rest was not so good. The clunky acting might reflect a screenplay desperate to humanize and popularize what was fundamentally a triumph of the intellect. But what got to me most was the treatment of Turing himself. On one hand there is the perhaps cinematically necessary canonisation of individual genius, sweeping aside so much important context. On the other there is the saccharine treatment of Turing’s open homosexuality (with compensatory boosting of Keira Knightley’s Joan Clarke) and the egregious scenes in which he stands accused of both treason and cowardice by association with Soviet spy John Cairncross, whom he likely never met. The requisite bad guy also does a disservice to Turing’s Bletchley Park boss Alastair Denniston, who, while a product of old-school classics-inspired cryptography, nonetheless recognized and supported Turing and his crew. Historical jiggery-pokery is of course to be expected in any mass-market biopic, but the story as told in The Imitation Game becomes much less interesting as a result.
I studied at King’s College, Cambridge, Turing’s academic home and also where I first encountered the basics of modern-day computer science and artificial intelligence (AI). By all accounts Turing was a genius, laying the foundations for these disciplines but also for other areas of science, which – like AI – didn’t even exist in his time. His theories of morphogenesis presaged contemporary developmental biology, explaining how leopards get their spots. He was a pioneer of cybernetics, an inspired amalgam of engineering and biology that after many years in the academic hinterland is once again galvanising our understanding of how minds and brains work, and what they are for. One can only wonder what more he would have done, had he lived.
There is a breathless moment in the film where Joan Clarke (or poor spy-hungry and historically-unsupported Detective Nock, I can’t remember) wonders whether Turing, in cracking Enigma, has built his ‘universal machine’. This references Turing’s most influential intellectual breakthrough, his conceptual design for a machine that was not only programmable but re-programmable, that could execute any algorithm, any computational process.
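The idea of a single machine that can execute any computational process is easier to grasp with a toy example. Here is a minimal sketch of a Turing-machine simulator in Python (the function names and the sample program are my own illustration, not Turing’s notation): the transition table *is* the program, and swapping in a different table re-programs the same machine.

```python
# Minimal Turing machine simulator. The "program" is a transition table
# mapping (state, symbol) -> (new state, symbol to write, head move).
# Re-programming the machine just means supplying a different table.
def run(program, tape, state="start", blank="_", steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: cell index -> symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = program[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example program: invert a binary string (0 <-> 1), halting at the blank.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run(invert, "10110"))  # -> 01001
```

The same `run` function, given a different table, computes a different function entirely – which is the sense in which one machine can be universal.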
The Universal Turing Machine formed the blueprint for modern-day computers, but the machine that broke Enigma was no such thing. The ‘Bombe’, as it was known, was based on Polish prototypes (the bomba kryptologiczna) and was co-designed with Gordon Welchman whose critical ‘diagonal board’ innovation is in the film attributed to the suave Hugh Alexander (Welchman doesn’t appear at all). Far from being a universal computer the Bombe was designed for a single specific purpose – to rapidly run through as many settings of the Enigma machine as possible.
The Bombe is half the story of Enigma. The other half is pure cryptographic catnip. Even with a working Bombe the number of possible machine settings to be searched each day (the Germans changed all the settings at midnight) was just too large. The code-breakers needed a way to limit the combinations to be tested. And here Turing and his team inadvertently pioneered the principles of modern-day ‘Bayesian’ machine learning, by using prior assumptions to constrain possible mappings between a cipher and its translation. For Enigma, the breakthroughs came on realizing that no letter could encode itself, and that German operators often used the same phrases in repeated messages (“Heil Hitler!”). Hugh Alexander, diagonal boards aside, was supremely talented at this process which Turing called ‘banburismus’, on account of having to get printed ‘message cards’ from nearby Banbury.
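The “no letter encodes itself” constraint is surprisingly powerful, and easy to sketch. In the toy Python example below (the ciphertext string is made up purely for illustration), a suspected plaintext fragment – a crib such as “HEILHITLER” – is slid along the intercepted message, and any alignment where the same letter appears in both crib and ciphertext at the same position is ruled out, drastically shrinking the search space handed to the Bombe.

```python
# Enigma never encrypted a letter to itself, so a crib (guessed plaintext)
# cannot align with the ciphertext at any offset where the same letter
# occurs in both at the same position.
def valid_offsets(ciphertext, crib):
    return [
        i
        for i in range(len(ciphertext) - len(crib) + 1)
        if all(c != p for c, p in zip(ciphertext[i:], crib))
    ]

cipher = "ABCDHBCDABCDA"  # made-up ciphertext, for illustration only
print(valid_offsets(cipher, "HEILHITLER"))  # -> [1, 2, 3]
```

Offset 0 is rejected because the ‘H’ at position 4 of the ciphertext would line up with the ‘H’ in “HEILHITLER” – an impossibility on a real Enigma.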
In this way the Bletchley code-breakers combined extraordinary engineering prowess with freewheeling intellectual athleticism, to find a testable range of Enigma settings, each and every day, which were then run through the Bombe until a match was found.
Though it gave the allies a decisive advantage, the Bombe was not the first computer, not the first ‘digital brain’. This honour belongs to Colossus, also built at Bletchley Park, and based on Turing’s principles, but constructed mainly by Tommy Flowers, Jack Good, and Bill Tutte. Colossus was designed to break the even more heavily encrypted communications the Germans used later in the war: the Tunny cipher. After the war the intense secrecy surrounding Bletchley Park meant that all Colossi (and Bombes) were dismantled or hidden away, depriving Turing, Flowers – and many others – of recognition and setting back the computer age by years. It amazes me that full details about Colossus were only released in 2000.
The Imitation Game of the title is a nod to Turing’s most widely known idea: a pragmatic answer to the philosophically challenging and possibly absurd question, “can machines think?”. In one version of what is now known as the Turing Test, a human judge interacts with two players – another human and a machine – and must decide which is which. Interactions are limited to disembodied exchanges of pieces of text, and a candidate machine passes the test when the judge consistently fails to distinguish the one from the other. It is unfortunate but in keeping with the screenplay that Turing’s code-breaking had little to do with his eponymous test.
It is completely understandable that films simplify and rearrange complex historical events in order to generate widespread appeal. But The Imitation Game focuses so much on a distorted narrative of Turing’s personal life that the other story – a thrilling ‘band of brothers’ tale of winning a war by inventing the modern world – is pushed out into the wings. The assumption is that none of this puts bums on seats. But who knows, there might be more to geek-chic than meets the eye.
Could wanting the latest mobile phone for Christmas lead to human extermination? Existential risks to our species have long been part of our collective psyche – in the form of asteroid impacts, pandemics, global nuclear cataclysm, and more recently, climate change. The idea is not simply that humans and other animals could be wiped out, but that basic human values and structures of society would change so as to become unrecognisable.
Last week, Stephen Hawking claimed that technological progress, while perhaps intended for human betterment, might lead to a new kind of existential threat in the form of self-improving artificial intelligence (AI). This worry is based on the “law of accelerating returns”, which applies when the rate at which technology improves is proportional to how good the technology is, yielding exponential – and unpredictable – advances in its capabilities. The idea is that a point might be reached where this process leads to wholesale and irreversible changes in how we live. This is the technological singularity, a concept made popular by AI maverick and Google engineering director Ray Kurzweil.
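The “law of accelerating returns” is just the familiar mathematics of compound growth: if capability improves at a rate proportional to the capability itself, the result is exponential. The Python sketch below (a toy numerical illustration, with a doubling time chosen arbitrarily to echo Moore’s law) shows how quickly this outruns linear intuition.

```python
# If capability x improves at a rate proportional to itself (dx/dt = k*x),
# growth is exponential. Simple Euler integration makes the point numerically.
def improve(x0, k, years, dt=0.01):
    x = x0
    for _ in range(int(years / dt)):
        x += k * x * dt  # improvement rate proportional to current level
    return x

# k chosen so capability doubles roughly every two years (Moore's-law-like).
# Ten years of compounding gives about 32x the starting level, not 5x.
k = 0.3466  # approx ln(2)/2 per year, i.e. a two-year doubling time
print(round(improve(1.0, k, 10)))  # -> 32
```

The point of the singularity argument is that human foresight is calibrated to the linear case, and exponential processes keep looking negligible right up until they don’t.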
We are already familiar with accelerating returns in the rapid development of computer power (“Moore’s law”), and Kurzweil’s vision of the singularity is actually a sort of utopian techno-rapture. But there are scarier scenarios where exponential technological growth might exceed our ability to foresee and prevent unintended consequences. Genetically modified food is an early example of this worry, but now the spotlight is on bio- and nano-technology, and – above all – AI, the engineering of artificial minds.
A focus on AI might seem weird given how disappointing present-day ‘intelligent robots’ are. They can hardly vacuum your living room let alone take over the world, and reports that the famous Turing Test for AI has been passed are greatly exaggerated. Yet AI has developed a surprising behind-the-scenes momentum. New ‘deep learning’ algorithms have been developed which, when coupled with vast amounts of data, show remarkable abilities to tackle everyday problems like speech comprehension and face recognition. As well as world-beating chess players like Deep Blue, we have Apple Siri and Google Now helping us navigate our messy and un-chesslike environments in ways that mimic our natural cognitive abilities. Huge amounts of money have followed, with Google this year paying £400M for AI start-up DeepMind in a deal which Google CEO Eric Schmidt heralded as enabling products that are “infinitely more intelligent”.
What if the ability to engineer artificial minds leads to these minds engineering themselves, developing their own goals, and bootstrapping themselves beyond human understanding and control? This dystopian prospect has been mined by many sci-fi movies – think Blade Runner, HAL in 2001, Terminator, Matrix – but while sci-fi is primarily for entertainment, the accelerating developments in AI give pause for thought. Enter Hawking, who now warns that “the full development of AI could spell the end of the human race”. He joins real-world-Iron-Man Elon Musk and Oxford philosopher Nick Bostrom in declaring AI the most serious existential threat we face. (Hawking in fact used the term ‘singularity’ long ago to describe situations where the laws of physics break down, like at the centre of a black hole).
However implausible a worldwide AI revolution might seem, Holmes will tell you there is all the difference in the world between the impossible and the merely improbable. Even if highly unlikely, the seismic impact of a technological singularity is such that it deserves to be taken seriously, both in estimating and mitigating its likelihood, and in planning potential responses. Cambridge University’s new Centre for the Study of Existential Risk has been established to do just this, with Hawking and ex-Astronomer Royal Sir Martin Rees among the founders.
Dystopian eventualities aside, the singularity concept is inherently interesting because it pushes us to examine what we mean by being human (as my colleague Murray Shanahan argues in a forthcoming book). While intelligence is part of the story, being human is also about having a body and an internal physiology; we are self-sustaining flesh bags. It is also about consciousness; we are each at the centre of a subjective universe of experience. Current AI has little to say about these issues, and it is far from clear whether truly autonomous and self-driven AI is possible in their absence. The ethical minefield deepens when we realize that AIs becoming conscious would entail ethical responsibilities towards them, regardless of their impact on us.
At the moment, AI, like any powerful technology, has the potential for good and ill, long before any singularity is reached. On the dark side, AI gives us the tools to wreak our own havoc by distancing ourselves from the consequences of our actions. Remote controlled military drones already reduce life-and-death decisions to the click of a button: with enhanced AI there would be no need for the button. On the side of the angels, AI can make our lives healthier and happier, and our world more balanced and sustainable, by complementing our natural mental prowess with the unprecedented power of computation. The pendulum may swing from the singularity-mongers to the techno-mavens; we should listen to both, but proceed serenely with the angels.
This post is an amended version of a commissioned comment for The Guardian: Why we must not stall technological progress, despite its threat to humanity, published on December 03, 2014. It was part of a flurry of comments occasioned by a BBC interview with Stephen Hawking, which you can listen to here. I’m actually quite excited to see Eddie Redmayne’s rendition of the great physicist.