Google’s shopping spree has continued with the purchase of the British artificial intelligence (AI) start-up DeepMind, acquired for an eye-watering £400M ($650M). This is Google’s 8th biggest acquisition in its history, and the latest in a string of purchases in AI and robotics. Boston Dynamics, an American company famous for building agile robots capable of scaling walls and running over rough terrain (see BigDog here), was mopped up in 2013. And there is no sign that Google is finished yet. Should we be excited or should we be afraid?
Probably both. AI and robotics have long promised brave new worlds of helpful robots (think Wall-E) and omniscient artificial intelligences (think HAL), which remain conspicuously absent. Undoubtedly, the combined resources of Google’s in-house skills and its new acquisitions will drive progress in both these areas. Experts have accordingly fretted about military robotics and speculated how DeepMind might help us make better lasagne. But perhaps something bigger is going on, something with roots extending back to the middle of the last century and the now forgotten discipline of cybernetics.
The founders of cybernetics included some of the leading lights of the age, including John von Neumann (designer of the digital computer), Alan Turing, the British roboticist Grey Walter, and even people like the psychiatrist R.D. Laing and the anthropologist Margaret Mead. They were led by the brilliant and eccentric figures of Norbert Wiener and Warren McCulloch in the USA, and Ross Ashby in the UK. The fundamental idea of cybernetics was to consider biological systems as machines. The aim was not to build artificial intelligence per se, but rather to understand how machines could appear to have goals and act with purpose, and how complex systems could be controlled by feedback. Although the brain was the primary focus, cybernetic ideas were applied much more broadly – to economics, ecology, even management science. Yet cybernetics faded from view as the digital computer took centre stage, and has remained hidden in the shadows ever since. Well, almost hidden.
One of the most important innovations of 1940s cybernetics was the neural network, the idea that logical operations could be implemented in networks of brain-cell-like elements wired up in particular ways. Neural networks lay dormant, like the rest of cybernetics, until being rediscovered in the 1980s as the basis of powerful new ‘machine learning’ algorithms capable of extracting meaningful patterns from large quantities of data. DeepMind’s technologies are based on just these principles, and indeed some of their algorithms originate in the pioneering neural network research of Geoffrey Hinton (another Brit), whose company DNNresearch was also recently bought by Google and who is now a Google Distinguished Researcher.
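To make this concrete, here is a minimal sketch (a toy example of my own, in Python, rather than anything from DeepMind's codebase) of a McCulloch-Pitts-style threshold neuron. Logical operations emerge from the wiring alone; the weights and thresholds are hand-picked for illustration, not learned.

```python
# A McCulloch-Pitts-style neuron: fire (output 1) when the weighted
# sum of the inputs reaches a threshold; stay silent (output 0) otherwise.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical operations implemented purely by choice of wiring:
AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    neuron([a], [-1], threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(1) == 0 and NOT(0) == 1
```

The 1980s rediscovery amounted, in essence, to letting the weights be learned from data rather than wired by hand.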
What sets Hinton and DeepMind apart is that their algorithms reflect an increasingly prominent theory about brain function. (DeepMind’s founder, the ex-chess-prodigy and computer games maestro Demis Hassabis, set up his company shortly after taking a Ph.D. in cognitive neuroscience.) This theory, which came from cybernetics, says that the brain’s neural networks achieve perception, learning, and behaviour through repeated application of a single principle: predictive control. Put simply, the brain learns about the statistics of its sensory inputs, and about how these statistics change in response to its own actions. In this way, the brain can build a model of its world (which includes its own body) and figure out how to control its environment in order to achieve specific goals. What’s more, exactly the same principle can be used to develop robust and agile robotics, as seen in BigDog and its friends.
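As a cartoon of that principle (again a toy of my own devising, not DeepMind's algorithm), imagine an agent that first learns how its actions change what it senses, and then inverts that learned model to choose an action that achieves a goal:

```python
import random

# Toy predictive control: learn a forward model of how actions change
# sensation, then invert the model to choose a goal-directed action.
true_effect = 2.0       # hidden: one unit of action shifts sensation by 2
estimated_effect = 0.0  # the agent's model of that relationship
lr = 0.1                # learning rate

# 1. Learn the model by acting and observing the consequences.
for _ in range(500):
    action = random.uniform(-1, 1)
    observed = true_effect * action + random.gauss(0, 0.05)  # noisy senses
    error = observed - estimated_effect * action             # prediction error
    estimated_effect += lr * error * action                  # delta-rule update

# 2. Control: pick the action the model predicts will reach the goal.
current, goal = 3.0, 10.0
action = (goal - current) / estimated_effect
print(f"learned effect ~ {estimated_effect:.2f}, chosen action ~ {action:.2f}")
```

Real brains and robots learn vastly richer models, of course, but the loop (predict, compare, update, act) is the same.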
Put all this together and the cybernetic ideal of exploiting deep similarities between biological entities and machines resurfaces. These similarities go far beyond superficial (and faulty) assertions that brains are computers; rather, they recognize that prediction and control lie at the very heart of both effective technologies and successful biological systems. This means that Google’s activities in AI and robotics should not be considered separately, but instead as part of a larger view of how technology and nature interact: Google’s deep mind has deep roots.
What might this mean for you and me? Many of the original cyberneticians held out a utopian prospect of a new harmony between people and computers, well captured by Richard Brautigan’s 1967 poem ‘All Watched Over by Machines of Loving Grace’ and recently re-examined in Adam Curtis’s powerful though breathless documentary of the same name. As Curtis argued, these original cybernetic dreams were dashed against the complex realities of the real world. Will things be different now that Google is in charge? One thing that is certain is that the simple idea of a ‘search engine’ will seem increasingly antiquated. As the data deluge of our modern world accelerates, the concept of ‘search’ will become inseparable from ideas of prediction and control. This really is both scary and exciting.
The author refers to “the now forgotten discipline of cybernetics.” Cybernetics is arguably one of the most important systems theories ever developed. If anyone was introduced to this theory and forgot it, then I imagine they either have dementia or a selective memory.
Well indeed. Perhaps ‘underappreciated’ would have been a more appropriate descriptor. To my mind, many people have been introduced to concepts (like neural networks for example) which have their roots firmly in cybernetics, but have not been introduced to cybernetics itself. I am currently teaching an MSc course on cybernetics to try to correct this at least a little.
You’re right. The connection you draw from AI and robotics back to cybernetics is very good.
Interesting…
But troubling…
Why should prediction be so central to control? When tied to a train track I predict the arrival of the train and my imminent demise, but I have no control. It seems to me that adaptive behaviour doesn’t require prediction, just competence. The latter doesn’t require the former.
My current opinion, perforce, is that the focus on predictive coding threatens to obscure some important questions about adaptive behaviour. For example, Friston says the reason we don’t just stare at the wall to minimize prediction errors is that there are internal constraints within the organism that make that not an option. By saying that, hasn’t he just dismissed the whole project of adaptive behaviour as an aside to his greater project?
Chris
Well, yes, Chris. Not all prediction requires control, but prediction can greatly benefit from control, as elaborated in the concept of active inference. (We can change the model to fit the world, or change the world to fit the model – or both!) The ‘dark room’ (or wall-stare) problem you allude to is, I think, a red herring. Friston’s response is that endless wall-staring is maladaptive on a longer time-scale than that involved in moment-to-moment perception, which seems reasonable. Though the point is well taken that repeatedly ‘explaining’ apparently maladaptive behaviour by positing alternative criteria does weaken explanatory power.
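For the programmatically inclined, here is a deliberately cartoonish sketch (my own toy, nothing like Friston's formal machinery) of the two routes running in the same loop: perception updates the model, action updates the world, and both reduce prediction error.

```python
import random

# Cartoon active inference: prediction error is reduced both by
# updating the model to fit the world (perception) and by acting on
# the world to fit the model's prior (action).
prior = 10.0   # the state the agent expects (and so 'desires') to sense
world = 0.0    # actual environmental state
belief = 0.0   # the agent's current estimate of the world

for _ in range(100):
    sensation = world + random.gauss(0, 0.1)  # noisy sensory input
    error = sensation - belief
    belief += 0.5 * error            # change the model to fit the world...
    world += 0.2 * (prior - belief)  # ...and change the world to fit the model

print(f"world ~ {world:.2f}, belief ~ {belief:.2f} (prior was {prior})")
```

With only the perception line, the agent merely tracks whatever the world happens to do; the action line is what makes the inference ‘active’.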
From a philosophy of science perspective we should perhaps consider the ‘free energy principle’ as a very general framework, and predictive coding as a specific example theory lying within that framework.
The least-energy principle stands very high in terms of efficiency of work, among many other cases. I use it myself to get things done efficiently, minimaxing outcomes against cost.
“This theory, which came from cybernetics, says that the brain’s neural networks achieve perception, learning, and behaviour through repeated application of a single principle: predictive control.”
This is remarkably similar to the simple event of the comparison process which the cortical cell columns perform again and again. It’s self-organizing; it creates order, creativity, language, mathematics, measurement, the conscience, and recognition via long-term memory; and indeed most of the higher cortical functions, including formal verbal logic, can be deduced from it.
It also explains what machines must be able to do to model human language, thinking, and creativity. I call it, metaphorically, Le Chanson Sans Fin, and it models a good bit of what Hofstadter wrote about, though it is far more general and expansive in what it explains. It may be able to create a unifying model of a great deal, because the process is self-comparing and creates self-recursive, consistent models. The comparison process is performed endlessly by the cortex; and interestingly enough, the EEG over the entire cortex shows the same alpha, beta, and theta rhythms, as if a single, simple process were being used, reiteratively.
The comparison process may be Bayesian.
Herb Wiggins MD, diplomate Am. Bd. Psychiatry/Neurology
Interesting
Wow what a great combo of words and color! How original. Thanks for posting.
That’s cool
Reblogged this on The International Blogspaper.
Cybernetics is interesting indeed. However, when people get lost in trying to imitate humanoids, they lose focus.
Cybernetics would first be useful as a program/software managing your home/apartment (domotics). I mean full control, not some useless commercial gadgets. That, coupled with managing your car’s systems (driving assistance, head-up display, maintenance), and with managing your personal agenda (work and social schedule).
Then, later on, these would manage social institutions, like schools, hospitals, clinics, libraries, public transport systems, town halls (cities), etc.
Those would be real steps toward realistic cybernetics.
Consequently, and because of all the related implications, the foremost problem to solve once and for all, before anything else, is how we define “work”. I think these topics won’t be accessible to us as a global society before everybody on the planet can feed, drink, and educate themselves.
CarlD.
http://roomancer3dintro.wordpress.com/2014/02/03/3d-virtual-virtual-living
Cybernetics is (was) never really about imitating humans, it was always about the control and prediction of complex systems, with particular regard to complex biological systems like the brain, and ecologies. Interestingly, some of the greatest impact of the first wave of cybernetics was in management science, where people like Stafford Beer (founding cybernetician based in Liverpool) devoted considerable effort to understanding how concepts like feedback could be applied within organizations.
Well done!
Being human, we understand the need for control and predictability, and can therefore model AI effectively by understanding ourselves in a deeper, more comprehensive way. The point at which science meets spirituality deserves further investigation.
Thanks for this engaging blog! I have a couple of comments. First, does a machine which appears intelligent, and can even act in goal-oriented ways, have consciousness? It’s a conundrum that we can’t really resolve, because much of the time we have a hard time understanding the nature of our own consciousness. Second, an innocent, childlike comment: your tag is sublime, nary a stone’s toss from Asketh and Access.
Well I think the question of consciousness is really quite distinct. But I don’t think that goal-directed behaviour is a good indicator of consciousness either in animals or machines. A key point of cybernetics was to show how goal-directed behaviour could emerge from very simple mechanisms.
Reblogged this on Eigyou Conseil, Connect Japan to the world.
Reblogged this on Socially Relevant and commented:
Well said #profound
Awesome post. Thanks for sharing.
http://tshirtlegend.com/
Good read. I would like to understand ‘predictive control’ better. Does it mean prediction from past memories here?
If you assume a computational model of consciousness, does that model have a homunculus which observes consciousness? My current hypothesis does not include a homunculus.
Reblogged this on @elmaverickfuego.
Reblogged this on ALLKNOL and commented:
All watched over by search engines of loving grace
Great post, thanks
Good article! Thanks for giving this post to us!
We always need to search for something in this world… run, run, run in search of the new!
Good #wordpress!
Pingback: There’s more to geek-chic than meets the eye, but not in The Imitation Game | NeuroBanter