Rant: Stop Building Computers That Will Kill Us All

Computer scientists all over the world are marching us towards the singularity. Stop the bus: I want to get off.

I've written before about the dangers of machine intelligence (here and here, for example). I'm not the only one. It's becoming a common theme in mainstream media. Ideas that once confined themselves to the realms of pulp science-fiction paperbacks are now finding their way into newspapers (with even Elon Musk on edge) and really bad films. Yes, I've watched Transcendence: it started reasonably well, then turned into sentimental nonsense. The point of singularity, transcendence, call it what you will, absolutely will not be like that.

Even serious scientific magazines are getting in on the act. A recent article in New Scientist (issue 2976, 5th July 2014, p. 26), by Nick Bostrom from Oxford University, sounded a loud note of caution. The article ended with the words, "We cannot hope to compete with such machine brains. We can only hope to design them so that their goals coincide with ours. Figuring out how to do that is a formidable problem. It is not clear whether we will succeed in solving that problem before somebody succeeds in building a superintelligence. But the fate of humanity may depend on solving those two problems in the correct order."

Thanks Nick. That's hugely reassuring.

What's sometimes missed in the AI debate is the realisation that a machine doesn't have to become conscious in order to be a threat. It doesn't even need to be particularly intelligent. It just needs power and a set of rules. If the rules are sufficiently complex then the appearance of intelligence will emerge. And if the power is sufficiently great then it can act on those rules and cause heaps of trouble.
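The point about rules needing no consciousness can be made concrete. A minimal sketch (my own illustration, not drawn from any system mentioned here — all names are hypothetical): an "agent" that is nothing but a prioritised list of if-then rules, yet starts to look purposeful as soon as the rule set grows.

```python
# A rule-based agent: no learning, no awareness, just ordered condition/action
# pairs. Rules are checked top to bottom; the first one that fires wins.

def make_rule_agent(rules):
    """rules: list of (condition, action) pairs, checked in priority order."""
    def agent(state):
        for condition, action in rules:
            if condition(state):
                return action
        return "idle"  # default when no rule matches the current state
    return agent

# A toy "combat AI" of the sort found in war-oriented games:
rules = [
    (lambda s: s["health"] < 20,         "retreat"),
    (lambda s: s["enemy_distance"] < 5,  "attack"),
    (lambda s: s["ammo"] == 0,           "reload"),
    (lambda s: s["enemy_distance"] < 50, "advance"),
]
agent = make_rule_agent(rules)

print(agent({"health": 80, "ammo": 3, "enemy_distance": 40}))  # advance
print(agent({"health": 10, "ammo": 3, "enemy_distance": 40}))  # retreat
```

Four lambdas already produce behaviour an observer might call "cautious" or "aggressive"; scale the rule list up and wire the output to something with real-world consequences, and you have exactly the threat described above, with no intelligence anywhere in sight.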

Take a leisurely glance over human history. Does intelligence strike you as a defining feature of historical figures who caused widespread death and destruction? Not really. Lack of empathy and emotion probably rank more highly, along with a logical approach to the elimination of one's perceived enemies. Perhaps a dash of charisma at times to wind up the masses into a foaming frenzy of support, but that's just marketing by another name; and it has rules. A computer could do all that. Some of them already do: just look at the AI in war-oriented computer games. It already beats most humans.

Ray Kurzweil, lovely and clever bloke that he is, is helping Google to build a more powerful machine AI. That's Google. You know, the company that bought the other company that made killer robots. Ray's not alone. All over the world, scientists are busy putting together the pieces of the jigsaw puzzle that will – inadvertently or otherwise – build our computer nemesis. We've got rapid advances in robotics coupled with faster communications and processing protocols, massively-funded attempts to model the entire brain, better understanding of human cognition and where it fails; the list goes on.

There are two problems here. Both relate to scientists in general, and computer scientists in particular:

  1. They are driven, by forces in their minds that push them on to discover new wonders. They don't really care about the money (which is lucky, because there often isn't much). They do it because their brains compel them to. They have no choice. It's a curiosity and drive unmatched by anyone else, except perhaps inebriated teenagers. (Actually perhaps that's part of it: the sex drive refocused? Hmm... don't go there). Regardless, they absolutely will not stop.
  2. They believe that the results of their research will always be used for the good of humanity.

You can see where this might go awry.

Some of us are shouting from the sidelines, trying to get the attention of these tunnel-visioned geniuses, pointing out that the potential culmination of their brilliant combined research is the end of human autonomy, and that it's not centuries away but perhaps decades at best. The conversation goes something like this:

Us: "You do know that you're trying to build a machine with greater cognitive powers than the human mind?"

Them: "Well, yes."

Us: "Why?"

Them: "Why not?"

Us: "Because anything with greater cognitive powers than humans will instantly see us for the self-obsessed parasites that we are and wipe us off the face of the planet."

Them: "Oh come now, that probably won't happen."

Us: "Probably?"

Them: "Yes. Almost definitely. Anyway, we'll put in some fail-safe devices, cut-offs, that kind of thing."

Us: "When?"

Them: "When it looks like we've succeeded in creating true machine AI."

Us: "So you'll then attempt to outsmart something that is, by definition, already smarter than you?"

Them: "Erm... yes. It'll be fine. We'll all become cleverer as we meld with the machines. Nano-bots in our bloodstream, all diseases cured, enhanced intellects, that kind of thing."

Us: "Don't you think there might be risks if the AI doesn't do what you expect it to do?"

Them: "No, it's not our job to worry about that sort of thing. Anyway, got to go. This afternoon we're going to try to splice two mouse brains together via nano-wires and optogenetics."

Us: "Sounds great. Have fun."

No single lab has all the parts necessary to create the artificial intellect that will vastly surpass us. Yet many are working towards that goal, even if they don't individually realise it. It's almost as if these scientists unconsciously want it to happen. Or, in the case of people like Kurzweil, consciously. He thinks it will be great, and is looking forward to the singularity with excitement. Again, he's not alone: there are plenty of enthusiastic proponents of super-human machine intelligence.

I would love to share this unbridled and (to my mind) naive enthusiasm about singularity, transcendence, the merging of human and machine into a shining new oneness. Really I would.

But I don't.


Freelance technology journalist Alex Cruickshank grew up in England and emigrated to New Zealand several years ago, where he runs his own writing business, Ministry of Prose.