Technology Planning and Analysis

Machine Learning: Myths, science fiction and the Singularity

“And then Hephaestus, Olympian god of fire, created a massive automaton out of bronze.” Known as Talos, the giant creation was to protect the island of Crete. “On seeing strangers approach, he enveloped himself in fire and engulfed them.” Or so the Greek myth goes.

Our fascination with machines that can think is rooted in our collective histories. Perhaps we have a deep psychological need to engage with beings greater than ourselves: super-intelligent, transcendent parent figures that relieve us of reliance on our own senses.

“[AI is] an ancient wish to forge the gods,” stated Pamela McCorduck in her 1979 book Machines Who Think. And unsurprisingly, fiction before and since has offered a steady stream of examples of how we have sought to recreate the idea of a higher power in a technological image.

When the techno-gods are good they are very good, like those portrayed in Iain M. Banks’ Culture; and when they are bad, they become Skynet, their evil deeds providing a suitable backdrop against which to express our own humanity.

Right now, we are told, we are on the brink of making such an intelligence real.

But is this true? The progress of so-called Artificial Intelligence has been constrained over the decades, not only by our ability to create algorithms that model how humans make calculated decisions, but also by the available processing power of computers. 

Marvin Minsky (may he rest in peace) built the first ‘neural’ network in 1951, and every generation of computing since has been coupled with a wave of interest in ‘computers that think’ — from AI itself in the 70s, to expert systems in the 80s, data analytics in the 90s and so on. Most recently we have seen a rise in the popularity of machine learning, which we will doubtless move on from in turn.

This repeated cycle is not a bad thing, as it reflects our increased trust in what computer algorithms can do on our behalf. As Nick Bostrom explained a decade ago, “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.”

A good illustration is Google: as Larry Page himself has said, AI is core to how the search engine works, albeit not as outwardly ‘human’ intelligence. (He also mentioned how dating site OKCupid uses algorithms to match people likely to be romantically compatible, suggesting that AI is already playing the fabled god of love.)

As computers become more powerful, so their capabilities progress. Indeed, they are already capable of many processing tasks that are way beyond the ken of humans. Even some measures of human intelligence — such as the IQ test — are being surpassed.

But, thus far, computers remain unable to act without instruction. The latest resurgence of AI has been captured by the term ‘machine learning’, itself coming hot on the heels of Big Data analytics. In both cases, the need for human intervention has been emphasised, either through recognition of the dearth of data scientists, or by providing user interfaces designed for business experts rather than technologists. 

“We’re looking to have machines do what they do well, and humans to do what they do well — to enable the machine and human to have a dialogue,” says Augustin Huret, founder of ‘augmented intelligence’ platform MondoBrain.

Will this situation change? The moment that this switches has been called the ‘Singularity’ — a distinct moment at which, as with Mary Shelley’s Frankenstein, super-beings spring into life. While it is still seen as decades away, this ‘big bang’ of computing has pretty much been taken as read.

It may never happen, for the simple reason that complexity is exponential and computer power, despite its stunning growth in the past 60 years, is ultimately linear. We will indeed arrive at a point where computers can emulate human communications, but the former will always remain a logical entity, driven by maths, whereas we are most certainly not. 
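The growth-rate argument above can be made concrete with a back-of-the-envelope sketch. The numbers and the Moore's-law-style doubling assumption here are mine, not the article's: if an algorithm's cost grows as 2^n, then even hardware whose speed doubles every two years only increases the largest solvable problem size by a constant amount per doubling.

```python
import math

def max_solvable_n(ops_per_second: float, budget_seconds: float = 1.0) -> int:
    """Largest n for which 2**n operations fit within the time budget."""
    return int(math.log2(ops_per_second * budget_seconds))

# Hypothetical 1 GHz baseline, speed doubling every two years:
# the solvable problem size n creeps up by only ~5 per decade.
for years in (0, 10, 20, 30):
    speedup = 2 ** (years / 2)
    print(years, max_solvable_n(1e9 * speedup))
# → 0 29, 10 34, 20 39, 30 44
```

In other words, exponential gains in raw power translate into merely linear gains in tractable problem size against an exponential workload — one way of reading the "exponential versus linear" point.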

This is no bad thing, and neither is our striving to create heavenly beings, which is driving a great deal of innovation. I have already written about an orchestration singularity, in which computers will eventually become smart enough to manage themselves; meanwhile driverless cars are an eminently logical progression.

No doubt we will achieve capabilities that are currently the stuff of science fiction. But will computers emerge that can act on a whim, or in any way irrationally? To do so would be illogical: either we'd have to program in such a facility, or the algorithms would need to decide that such a thing was necessary to their development. 

Both of which are unlikely: even Singularity advocate Ray Kurzweil argues that ‘human’ isn’t necessarily the goal. Machine intelligence, if and when it comes, will continue to develop like a flywheel that spins through accelerating cycles of self-improvement, taking on board increasingly broad problem spaces. 

Pretty much as now, in fact — in many ways computers are already there. And so are we, if we think about it hard enough.


Jon Collins

Jon Collins is an analyst and principal advisor at Inter Orbis. He has over 25 years’ experience in the tech sector, having worked as an IT manager, software consultant, project manager and training manager, among other roles. Jon’s published work covers security, governance and project management, but also includes books on music, including works on Rush, Mike Oldfield and Marillion.
