There’s more to geek-chic than meets the eye, but not in The Imitation Game

Benedict Cumberbatch as Alan Turing in The Imitation Game. (Spoiler alert: this post reveals some plot details.)

World War Two was won not just with tanks, guns, and planes, but by a crack team of code-breakers led by the brilliant and ultimately tragic figure of Alan Turing. This is the story as told in The Imitation Game, a beautifully shot and hugely popular film which nonetheless left me nursing a deep sense of missed opportunity. True, Benedict Cumberbatch is brilliant, spicing his superb Holmes with a dash of Russell Crowe’s John Nash (A Beautiful Mind) to propel geek rapture into yet higher orbits. (See also Eddie Redmayne and Stephen Hawking.)

The rest was not so good. The clunky acting might reflect a screenplay desperate to humanize and popularize what was fundamentally a triumph of the intellect. But what got to me most was the treatment of Turing himself. On one hand there is the perhaps cinematically necessary canonisation of individual genius, sweeping aside so much important context. On the other there is the saccharine treatment of Turing’s open homosexuality (with compensatory boosting of Keira Knightley’s Joan Clarke) and the egregious scenes in which he stands accused of both treason and cowardice by association with Soviet spy John Cairncross, whom he likely never met. The requisite need for a bad guy also does a disservice to Turing’s Bletchley Park boss Alastair Denniston, who, while a product of old-school, classics-inspired cryptography, nonetheless recognized and supported Turing and his crew. Historical jiggery-pokery is of course to be expected in any mass-market biopic, but the story as told in The Imitation Game becomes much less interesting as a result.

Alan Turing as himself

I studied at King’s College, Cambridge, Turing’s academic home and also where I first encountered the basics of modern-day computer science and artificial intelligence (AI). By all accounts Turing was a genius, laying the foundations not just for these disciplines but also for other areas of science, which – like AI – didn’t even exist in his time. His theories of morphogenesis presaged contemporary developmental biology, explaining how leopards get their spots. He was a pioneer of cybernetics, an inspired amalgam of engineering and biology that, after many years in the academic hinterland, is once again galvanising our understanding of how minds and brains work, and what they are for. One can only wonder what more he would have done, had he lived.

There is a breathless moment in the film where Joan Clarke (or poor spy-hungry and historically unsupported Detective Nock, I can’t remember) wonders whether Turing, in cracking Enigma, has built his ‘universal machine’. This references Turing’s most influential intellectual breakthrough: his conceptual design for a machine that was not only programmable but re-programmable, one that could execute any algorithm, any computational process.
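
To get a feel for what ‘universal’ means here, the sketch below separates a fixed simulator from the swappable table of rules it runs. It is only a toy (written in Python, my choice, with a deliberately trivial rule table for flipping bits), not Turing’s own formalism, but it captures the key move: programming the machine simply means handing it a different table.

```python
# A toy Turing machine simulator. The simulator itself is fixed; the behaviour
# comes entirely from the rule table it is given.
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state); '_' is a blank cell."""
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    for _ in range(max_steps):
        symbol = cells.get(head, "_")
        if (state, symbol) not in rules:     # no applicable rule: halt
            break
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# An illustrative 'program': invert every bit on the tape, then halt on a blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_turing_machine(flip_bits, "10110"))   # prints "01001"
```

The simulator never changes; only the rule table does, and that separation of fixed machinery from interchangeable program is the essence of universality.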

The Universal Turing Machine formed the blueprint for modern-day computers, but the machine that broke Enigma was no such thing. The ‘Bombe’, as it was known, was based on Polish prototypes (the bomba kryptologiczna) and was co-designed with Gordon Welchman, whose critical ‘diagonal board’ innovation is attributed in the film to the suave Hugh Alexander (Welchman doesn’t appear at all). Far from being a universal computer, the Bombe was designed for a single, specific purpose: to run rapidly through as many settings of the Enigma machine as possible.

A working rebuilt Bombe at Bletchley Park, containing 36 Enigma equivalents. The (larger) Bombe in The Imitation Game was a high point – a beautiful piece of historical reconstruction.

The Bombe is half the story of Enigma. The other half is pure cryptographic catnip. Even with a working Bombe, the number of possible machine settings to be searched each day (the Germans changed all the settings at midnight) was just too large. The code-breakers needed a way to limit the combinations to be tested. And here Turing and his team inadvertently pioneered the principles of modern-day ‘Bayesian’ machine learning, by using prior assumptions to constrain the possible mappings between a cipher and its translation. For Enigma, the breakthroughs came on realizing that no letter could encode itself, and that German operators often used the same phrases in repeated messages (“Heil Hitler!”). Hugh Alexander, diagonal boards aside, was supremely talented at this process, which Turing called ‘banburismus’ on account of having to get printed ‘message cards’ from nearby Banbury.
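
To see how far a single prior assumption can take you, here is a toy sketch (my own illustration, with an invented ciphertext and the classic ‘HEILHITLER’ crib) that uses only the fact that Enigma never maps a letter to itself to rule out positions where a guessed phrase could sit in an intercept. It is not banburismus itself, but it shows the flavour of using what you already know to shrink the search before the Bombe ever turned a rotor.

```python
# A toy illustration of crib placement: Enigma never encodes a letter as itself,
# so a guessed plaintext ("crib") cannot sit at any offset where one of its
# letters lines up with the same letter in the ciphertext.
def possible_crib_positions(ciphertext, crib):
    """Return the offsets at which the crib is not ruled out."""
    positions = []
    for offset in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[offset:offset + len(crib)]
        if all(c != p for c, p in zip(window, crib)):   # no self-encoding anywhere
            positions.append(offset)
    return positions

ciphertext = "QFZWRWIVTYRESXBFOGKUHQBAISE"   # invented for illustration, not a real intercept
crib = "HEILHITLER"                          # a phrase German operators really did repeat

print(possible_crib_positions(ciphertext, crib))
```

Each surviving offset proposes a set of plaintext-to-ciphertext letter pairings, and it was such pairings that the Bombe then tested against candidate machine settings, discarding any setting that contradicted them.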

In this way the Bletchley code-breakers combined extraordinary engineering prowess with freewheeling intellectual athleticism to find, each and every day, a testable range of Enigma settings, which were then run through the Bombe until a match was found.

A Colossus Mk 2 in operation. The Mk 2, with 2400 valves, came into service on June 1st 1944

Though it gave the Allies a decisive advantage, the Bombe was not the first computer, not the first ‘digital brain’. This honour belongs to Colossus, also built at Bletchley Park, and based on Turing’s principles, but constructed mainly by Tommy Flowers, Jack Good, and Bill Tutte. Colossus was designed to break the even more heavily encrypted communications the Germans used later in the war: the Tunny cipher. After the war the intense secrecy surrounding Bletchley Park meant that all Colossi (and Bombi) were dismantled or hidden away, depriving Turing, Flowers – and many others – of recognition and setting back the computer age by years. It amazes me that full details about Colossus were only released in 2000.

Turing’s seminal 1950 paper, describing the ‘Imitation Game’ experiment

The Imitation Game of the title is a nod to Turing’s most widely known idea: a pragmatic answer to the philosophically challenging and possibly absurd question, “Can machines think?”. In one version of what is now known as the Turing Test, a human judge interacts with two players – another human and a machine – and must decide which is which. Interactions are limited to disembodied exchanges of pieces of text, and a candidate machine passes the test when the judge consistently fails to distinguish the one from the other. It is unfortunate, if in keeping with the rest of the screenplay, that the title suggests a link: Turing’s code-breaking in fact had little to do with his eponymous test.
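
For concreteness, here is a bare-bones sketch of the test’s structure (my own toy framing, not a serious protocol): a judge converses, by text only, with two hidden respondents and must say which is the machine. The stand-in ‘players’ and the guessing judge are obviously fake; the point is only that the machine passes when the judge’s success rate cannot beat chance.

```python
import random

def imitation_game_trial(judge, human_reply, machine_reply, questions):
    """One trial: the judge sees two anonymised text transcripts and names the machine.
    Returns True if the judge gets it wrong, i.e. the machine passes this trial."""
    labels = ["A", "B"]
    random.shuffle(labels)                                      # hide who is who
    players = dict(zip(labels, [human_reply, machine_reply]))
    transcripts = {label: [(q, players[label](q)) for q in questions] for label in labels}
    guess = judge(transcripts)                                  # judge answers "A" or "B"
    truth = next(l for l in labels if players[l] is machine_reply)
    return guess != truth

# Throwaway stand-ins, purely to make the sketch runnable:
human_reply = lambda q: "I had toast for breakfast."
machine_reply = lambda q: "I had toast for breakfast."          # a perfect mimic, for illustration
judge = lambda transcripts: random.choice(["A", "B"])           # this judge can do no better than guess

trials = [imitation_game_trial(judge, human_reply, machine_reply,
                               ["What did you have for breakfast?"]) for _ in range(10000)]
print(sum(trials) / len(trials))   # hovers around 0.5: the judge is at chance, so the mimic 'passes'
```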

It is completely understandable that films simplify and rearrange complex historical events in order to generate widespread appeal. But The Imitation Game focuses so much on a distorted narrative of Turing’s personal life that the other story – a thrilling ‘band of brothers’ tale of winning a war by inventing the modern world – is pushed out into the wings. The assumption is that none of this puts bums on seats. But who knows, there might be more to geek-chic than meets the eye.

Should we fear the technological singularity?

Could wanting the latest mobile phone for Christmas lead to human extermination? Existential risks to our species have long been part of our collective psyche – in the form of asteroid impacts, pandemics, global nuclear cataclysm, and more recently, climate change. The idea is not simply that humans and other animals could be wiped out, but that basic human values and structures of society would change so as to become unrecognisable.

Last week, Stephen Hawking claimed that technological progress, while perhaps intended for human betterment, might lead to a new kind of existential threat in the form of self-improving artificial intelligence (AI). This worry is based on the “law of accelerating returns”, which applies when the rate at which technology improves is proportional to how good the technology is, yielding exponential – and unpredictable – advances in its capabilities. The idea is that a point might be reached where this process leads to wholesale and irreversible changes in how we live. This is the technological singularity, a concept made popular by AI maverick and Google engineering director Ray Kurzweil.
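
Written out (in my own notation rather than Kurzweil’s), the ‘proportional rate’ assumption is just the textbook recipe for exponential growth:

```latex
\frac{dC}{dt} = k\,C
\quad\Longrightarrow\quad
C(t) = C(0)\,e^{kt},
\qquad \text{doubling time} = \frac{\ln 2}{k}
```

Here C(t) stands for some measure of a technology’s capability and k > 0 for its improvement rate. Because each doubling takes the same fixed time, capability that looks unremarkable for decades can, on this assumption, become overwhelming within a few further doublings; that is the arithmetic behind the singularity worry.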

We are already familiar with accelerating returns in the rapid development of computer power (“Moore’s law”), and Kurzweil’s vision of the singularity is actually a sort of utopian techno-rapture. But there are scarier scenarios where exponential technological growth might exceed our ability to foresee and prevent unintended consequences. Genetically modified food is an early example of this worry, but now the spotlight is on bio- and nano-technology, and – above all – AI, the engineering of artificial minds.

Moore’s law: the exponential growth in computational power since 1900.

A focus on AI might seem weird given how disappointing present-day ‘intelligent robots’ are. They can hardly vacuum your living room, let alone take over the world, and reports that the famous Turing Test for AI has been passed are greatly exaggerated. Yet AI has developed a surprising behind-the-scenes momentum. New ‘deep learning’ algorithms have been developed which, when coupled with vast amounts of data, show remarkable abilities to tackle everyday problems like speech comprehension and face recognition. As well as world-beating chess players like Deep Blue, we have Apple’s Siri and Google Now helping us navigate our messy and un-chesslike environments in ways that mimic our natural cognitive abilities. Huge amounts of money have followed, with Google this year paying £400M for AI start-up DeepMind in a deal which Google’s executive chairman Eric Schmidt heralded as enabling products that are “infinitely more intelligent”.

"Hello Dave".

“Hello Dave”.

What if the ability to engineer artificial minds leads to these minds engineering themselves, developing their own goals, and bootstrapping themselves beyond human understanding and control? This dystopian prospect has been mined by many sci-fi movies – think Blade Runner, HAL in 2001, Terminator, The Matrix – but while sci-fi is primarily for entertainment, the accelerating developments in AI give pause for thought. Enter Hawking, who now warns that “the development of full artificial intelligence could spell the end of the human race”. He joins real-world Iron Man Elon Musk and Oxford philosopher Nick Bostrom in declaring AI the most serious existential threat we face. (Hawking in fact used the term ‘singularity’ long ago to describe situations where the laws of physics break down, like at the centre of a black hole.)

However implausible a worldwide AI revolution might seem, Holmes will tell you there is all the difference in the world between the impossible and the merely improbable. Even if highly unlikely, the seismic impact of a technological singularity is such that it deserves to be taken seriously, both in estimating and mitigating its likelihood, and in planning potential responses. Cambridge University’s new Centre for the Study of Existential Risk has been established to do just this, with Hawking and Astronomer Royal Sir Martin Rees among the founders.

Dystopian eventualities aside, the singularity concept is inherently interesting because it pushes us to examine what we mean by being human (as my colleague Murray Shanahan argues in a forthcoming book). While intelligence is part of the story, being human is also about having a body and an internal physiology; we are self-sustaining flesh bags. It is also about consciousness; we are each at the centre of a subjective universe of experience. Current AI has little to say about these issues, and it is far from clear whether truly autonomous and self-driven AI is possible in their absence. The ethical minefield deepens when we realize that AIs becoming conscious would entail ethical responsibilities towards them, regardless of their impact on us.

At the moment, AI, like any powerful technology, has the potential for good and ill, long before any singularity is reached. On the dark side, AI gives us the tools to wreak our own havoc by distancing ourselves from the consequences of our actions. Remote-controlled military drones already reduce life-and-death decisions to the click of a button: with enhanced AI there would be no need for the button. On the side of the angels, AI can make our lives healthier and happier, and our world more balanced and sustainable, by complementing our natural mental prowess with the unprecedented power of computation. The pendulum may swing from the singularity-mongers to the techno-mavens; we should listen to both, but proceed serenely with the angels.

This post is an amended version of a commissioned comment for The Guardian: Why we must not stall technological progress, despite its threat to humanity, published on December 03, 2014. It was part of a flurry of comments occasioned by a BBC interview with Stephen Hawking, which you can listen to here. I’m actually quite excited to see Eddie Redmayne’s rendition of the great physicist.