
AI "existential threat": Is humanity doomed?

Recent discussions around artificial intelligence (AI) have been dominated by the warnings of physicist Stephen Hawking, who cautioned that humans could be “superseded by AI”, and Tesla CEO Elon Musk, who said AI poses a real “existential threat” to humanity. Even Microsoft co-founder Bill Gates is “concerned about super intelligence”.

But their hyperbolic statements have not gone unchallenged. A think tank recently criticised Hawking and Musk for stirring “fear and hysteria in 2015” about AI, and for doing a “disservice to the public” by distracting attention from the benefits AI can bring to society.

Hyperbolic or not, their statements have generated a debate on the implications of AI for society. But terms like “killer robots” are also fuelling the hysteria, with very little clarity on what they actually mean. How far is this “doom-talk” shaping our mindsets and the way we view the future?

“One of the problems with a dystopian dialogue is that it degrades the idea that you can make progress and do things better,” says Professor Brad Allenby on the phone to me from Arizona, US. “If you read a lot of the literature, it’s hard to believe that anything good is going to happen anymore. That means that people don’t really strive to create a better world and when they do, they only strive for their particular ideology or perspective.”

Allenby, a distinguished sustainability scientist, teaches at Arizona State University, where his current research focuses on emerging technologies. In his recent paper on emerging technologies and the future of humanity, he argues that “existential catastrophic language is not only invalid, but can actually prevent seeking constructive adaptations to accelerating change”.

Are emerging technologies evolving humans?

For Allenby, trying to predict how all these technologies will impact our world is something of a “fantasy”. Instead, he proposes that we find ways to “manage these technology trajectories” rather than try to stop the technologies from ever taking off.

Indeed, much of the argument in his article centres on the point that dystopians tend to assume humans are a “fixed reality in a rapidly changing world, rather than a constantly evolving, complex, adaptive, inherently unpredictable, increasingly technological process”.

Is the internet’s rewiring of our brains more urgent than AI “existential threats”?

Many of the technologies we use in our daily lives are already changing us, yet we remain heavily focused on how technologies will impact us in the future. In his article “Is Google Making Us Stupid?”, technology writer Nicholas Carr described an “uncomfortable sense” that something had been “tinkering with [his] brain, remapping the neural circuitry, reprogramming the memory”. He observed that the “deep reading that used to come naturally has become a struggle”.

Carr articulates the feeling brilliantly, but his struggle is shared by many. The proliferation of social media has not made the environment conducive to any form of “deep reading”. Quick tweets on Twitter pose a distraction, as does the wealth of information available on the internet at our fingertips. Why bother immersing yourself in a massive book when you can get just what you need online in less than a minute?

So perhaps, instead of focusing on humanity being killed by robots in the far future, we should be more concerned with how technology is impacting us right now.

Allenby agrees, and says that people don’t really view AI assistants like Siri, or Google’s search process, as something to worry about.

“If you think about it, probably one of the most significant changes in human cognition in the past 100 years has been the way we have floated a lot of our memory function over to Google. And yet people have not even begun to really think about that. So rather than saying AI is going to destroy us the question is probably: how is human cognition already changing?” Allenby says.

So why is there less discussion around these immediate subjects and more focus on future existential threats? Is this just a human tendency to avoid the present?

“Well we are not very self-reflective when it comes to the present. That actually is part of the problem in a lot of ways. What we really want to do is project our experience into the future as the determinant of what the future should be and should look like,” Allenby explains. “The present in a lot of ways is fundamentally changing and the intellectual tools that we have to deal with that are not well developed. And it takes energy to do that and it also means you're going to be challenging accepted assumptions and accepted institutional structures and that's always problematic.

“The present is so embedded in us that it becomes a very powerful set of blinders,” Allenby adds.

Why are we projecting AI anxieties onto drones?

Allenby points to the recent discussion around the “threat” drones pose to privacy. The Civil Aviation Authority (CAA) has set strict rules on how drones can be used, but there are still concerns about “persistent aerial surveillance” becoming the norm.

“The language around drones, particularly a year or two ago, was very dystopian. The drone has very little to do with [people’s] privacy compared to what they have already told Facebook, Google, Amazon and everybody else. So they have no privacy and somehow they know it, but rather than dealing with that - they project their anxiety onto drones,” says Allenby.

Allenby thinks much of this has to do with AI. People, he says, are “getting a feeling that things are fundamentally changing” and are anxious and worried about it, so they “project this highly dystopian perspective onto AI in the future”.

Is humanity doomed?

In his article, Allenby writes that humanity, “as it appears at any particular time, is always doomed”. Interestingly, he refers to this doom as an “evolution” and says it is “unlikely that we will stop it – or really that we should want to”. But on the phone, Allenby clarifies what he means.

“No I don't think we are doomed into either a utopian or dystopian future. Because we simply don't have the data to tell and the system is very complex and impossible to predict.”

So what are Allenby’s predictions for humanity in the future?

Allenby says he doesn’t like to “predict” but thinks there are a few interesting scenarios ahead of us. Radical life extension, which many leading universities and technology companies are working on right now, will have interesting repercussions: “You think you have generational conflict now?” he poses. He also talks about the social implications of having “abundant energy” and the prospect of developing really good “computer-brain interfaces”.

“What does it mean when you have that much computational power floating around in networks? Nobody has any idea. And yet that’s pretty clearly the direction we’re headed. Today’s kids are living in the first chapter of a science fiction novel. And nobody knows what the rest of the book looks like. Not the utopians, not the dystopians. Nobody.”

 

Further reading:

What is Google doing to your brain?

Ayesha Salim

Ayesha Salim is Staff Writer at IDG Connect
