Should we fear the technological singularity?


Could wanting the latest mobile phone for Christmas lead to human extermination? Existential risks to our species have long been part of our collective psyche – in the form of asteroid impacts, pandemics, global nuclear cataclysm, and more recently, climate change. The idea is not simply that humans and other animals could be wiped out, but that basic human values and structures of society would change so as to become unrecognisable.

Last week, Stephen Hawking claimed that technological progress, while perhaps intended for human betterment, might lead to a new kind of existential threat in the form of self-improving artificial intelligence (AI). This worry is based on the “law of accelerating returns”, which applies when the rate at which technology improves is proportional to how good the technology is, yielding exponential – and unpredictable – advances in its capabilities. The idea is that a point might be reached where this process leads to wholesale and irreversible changes in how we live. This is the technological singularity, a concept made popular by AI maverick and Google engineering director Ray Kurzweil.
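
To see why proportional improvement implies exponential growth, note that the “law” is really just compound interest: if each year’s gain is a fixed fraction of the current capability, the level doubles on a regular schedule. Here is a minimal sketch in Python, using arbitrary illustrative constants rather than anything drawn from Kurzweil’s own projections:

```python
# Minimal illustration of "accelerating returns": each year's improvement is
# a fixed fraction of the current capability (dC/dt = k*C), so the level
# compounds exponentially. The constants are invented for illustration only.

k = 0.4           # assumed fractional improvement per year (arbitrary)
capability = 1.0  # arbitrary starting level

for year in range(21):
    if year % 5 == 0:
        print(f"year {year:2d}: capability = {capability:,.1f}")
    capability *= 1 + k  # this year's gain is proportional to the current level
```

Run as written, the level grows roughly 800-fold in twenty years; the point is not the numbers, which are made up, but the shape of the curve, which quickly outruns linear intuition.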

We are already familiar with accelerating returns in the rapid development of computer power (“Moore’s law”), and Kurzweil’s vision of the singularity is actually a sort of utopian techno-rapture. But there are scarier scenarios where exponential technological growth might exceed our ability to foresee and prevent unintended consequences. Genetically modified food is an early example of this worry, but now the spotlight is on bio- and nano-technology, and – above all – AI, the engineering of artificial minds.


Moore’s law: the exponential growth in computational power since 1900.

A focus on AI might seem weird given how disappointing present-day ‘intelligent robots’ are. They can hardly vacuum your living room, let alone take over the world, and reports that the famous Turing Test for AI has been passed are greatly exaggerated. Yet AI has developed a surprising behind-the-scenes momentum. New ‘deep learning’ algorithms have been developed which, when coupled with vast amounts of data, show remarkable abilities to tackle everyday problems like speech comprehension and face recognition. As well as world-beating chess players like Deep Blue, we have Apple Siri and Google Now helping us navigate our messy and un-chesslike environments in ways that mimic our natural cognitive abilities. Huge amounts of money have followed, with Google this year paying £400M for AI start-up DeepMind in a deal which Google executive chairman Eric Schmidt heralded as enabling products that are “infinitely more intelligent”.

"Hello Dave".

“Hello Dave”.

What if the ability to engineer artificial minds leads to these minds engineering themselves, developing their own goals, and bootstrapping themselves beyond human understanding and control? This dystopian prospect has been mined by many sci-fi movies – think Blade Runner, HAL in 2001, The Terminator, The Matrix – but while sci-fi is primarily for entertainment, the accelerating developments in AI give pause for thought. Enter Hawking, who now warns that “the development of full artificial intelligence could spell the end of the human race”. He joins real-world-Iron-Man Elon Musk and Oxford philosopher Nick Bostrom in declaring AI the most serious existential threat we face. (Hawking in fact used the term ‘singularity’ long ago to describe situations where the laws of physics break down, like at the centre of a black hole.)

However implausible a worldwide AI revolution might seem, Sherlock Holmes will tell you there is all the difference in the world between the impossible and the merely improbable. Even if highly unlikely, the seismic impact of a technological singularity is such that it deserves to be taken seriously, both in estimating and mitigating its likelihood and in planning potential responses. Cambridge University’s new Centre for the Study of Existential Risk has been established to do just this, with Hawking and Astronomer Royal Sir Martin Rees among the founders.

Dystopian eventualities aside, the singularity concept is inherently interesting because it pushes us to examine what we mean by being human (as my colleague Murray Shanahan argues in a forthcoming book). While intelligence is part of the story, being human is also about having a body and an internal physiology; we are self-sustaining flesh bags. It is also about consciousness; we are each at the centre of a subjective universe of experience. Current AI has little to say about these issues, and it is far from clear whether truly autonomous and self-driven AI is possible in their absence. The ethical minefield deepens when we realise that AIs becoming conscious would entail ethical responsibilities towards them, regardless of their impact on us.

At the moment, AI, like any powerful technology, has the potential for good and ill, long before any singularity is reached. On the dark side, AI gives us the tools to wreak our own havoc by distancing ourselves from the consequences of our actions. Remote-controlled military drones already reduce life-and-death decisions to the click of a button: with enhanced AI there would be no need for the button. On the side of the angels, AI can make our lives healthier and happier, and our world more balanced and sustainable, by complementing our natural mental prowess with the unprecedented power of computation. The pendulum may swing from the singularity-mongers to the techno-mavens; we should listen to both, but proceed serenely with the angels.

This post is an amended version of a commissioned comment for The Guardian: Why we must not stall technological progress, despite its threat to humanity, published on 3 December 2014. It was part of a flurry of comments occasioned by a BBC interview with Stephen Hawking, which you can listen to here. I’m actually quite excited to see Eddie Redmayne’s rendition of the great physicist.