Experts on AI: Robotics Professor from Carnegie Mellon

Dabbala Rajagopal “Raj” Reddy is the Moza Bint Nasser Professor of Computer Science and Robotics at Carnegie Mellon University. He is renowned for his work in computer speech recognition, robotics and human-computer interaction, and for his efforts to bring digital technology to people on the other side of the “digital divide”.

In 1994 he won the Association for Computing Machinery (ACM) Turing Award, which is now celebrating its 50th anniversary. He answers five quick questions on AI below.

Does the misappropriation of the term AI annoy you?

The main misappropriation comes from the need for media sound bites. For example, Bill Gates made a comment about how he can imagine a certain set of circumstances in which some aspects of AI become dangerous, and all of a sudden his comments are translated into ‘Bill Gates believes AI is dangerous,’ when the reality of what was said is much more circumscribed.

What are the most important examples of AI in mainstream society today?

There are several examples of AI in mainstream society today. The most popular include IBM’s Deep Blue and Watson. Deep Blue was the first chess-playing AI system to win against a reigning world champion, while the Watson question-answering system gained recognition when it was featured on the game show Jeopardy! Other examples of popular AI include “any language to any language” translation systems from Google, speech-dialogue-based intelligent assistants such as Siri, Cortana and Alexa, and recent demonstrations of autonomous technology in self-driving vehicles.

What have been the biggest breakthroughs in AI in recent years, and what impact are they having in the real world?

Ten years ago, I would have said it wouldn’t be possible, in my lifetime, to recognise unrehearsed spontaneous speech from an open population but that’s exactly what Siri, Cortana and Alexa do. The same is happening with vision and robotics - we are by no means at the end of the activity in these areas, but we have enough working examples that society can benefit from these breakthroughs.

Beyond the recent breakthroughs, I will talk about some fundamental concepts that came out of AI research over the last 50 years, concepts that are still true and will continue to be true.

One of them is what Herb Simon got his Nobel Prize for, i.e. “human beings do not optimise but satisfice”. Meaning, humans don’t try to find optimal solutions; they simply find a solution which is good enough and go with it. That is the fundamental principle of AI - you don’t try to look for optimal results because, usually, they are “NP-complete” and effectively unsolvable in practice. AI systems simply try to find a solution that works.
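
As a rough illustration of that principle, here is a minimal, hypothetical Python sketch (not part of the interview; the candidate set, scoring function and aspiration threshold are all invented for illustration) contrasting exhaustive optimisation with satisficing:

def optimise(candidates, score):
    """Optimising: examine every candidate and keep the best one."""
    return max(candidates, key=score)

def satisfice(candidates, score, aspiration_level):
    """Satisficing: return the first candidate that is good enough."""
    for candidate in candidates:
        if score(candidate) >= aspiration_level:
            return candidate
    return None  # nothing met the aspiration level

if __name__ == "__main__":
    options = range(1, 1_000_000)
    quality = lambda x: x % 97              # arbitrary measure of how good an option is
    print(optimise(options, quality))       # looks at every option before answering
    print(satisfice(options, quality, 90))  # stops as soon as an acceptable option appears

The satisficing version does far less work and still returns an acceptable answer, which is the trade-off Simon described.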

Another fundamental idea is the concept of what makes someone an expert. This also goes back to Herb Simon and the study of various types of expertise, such as Malcolm Gladwell’s assertion that it takes 10,000 hours of practice to achieve mastery of a subject. In my opinion, however, that’s not precise enough. Instead, I believe that one would need to spend 10,000 hours of “mindful” activity to achieve mastery. Simply spending a lot of time on something doesn’t make you an expert.

Then there is the Knowledge-Search continuum. One way to solve a problem is through trial and error or Search, trying every possible combination until you’ve found the correct sequence or answer. Another is to know or learn through experience how to solve the problem. Search compensates for lack of Knowledge and Knowledge compensates for lack of Search.
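
The same trade-off can be sketched in a few lines of hypothetical Python (again illustrative only, not from the interview): finding the square root of a number either by pure Search over candidates or by looking it up in pre-built Knowledge.

def solve_by_search(target, universe):
    """Pure Search: try every candidate until one satisfies the goal."""
    for candidate in universe:
        if candidate * candidate == target:
            return candidate
    return None

def build_knowledge(universe):
    """Invest effort up front to build Knowledge: a table mapping problems to answers."""
    return {candidate * candidate: candidate for candidate in universe}

def solve_by_knowledge(target, table):
    """Pure Knowledge: no search at all, the answer is simply recalled."""
    return table.get(target)

if __name__ == "__main__":
    universe = range(10_000)
    print(solve_by_search(1_234_321, universe))  # Search compensates for lack of Knowledge
    table = build_knowledge(universe)
    print(solve_by_knowledge(1_234_321, table))  # Knowledge compensates for lack of Search

Building the table costs effort and memory up front but makes every later answer immediate; skipping it keeps memory low but pays with search effort on every query - the continuum in a nutshell.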


How well prepared is the AI community and society to deal with the deep ethical issues that come with using AI approaches in life-critical areas such as health and transportation?

The problem is not with AI but with humans who may misuse or abuse the technology. We’ve already seen the situation where AI has given the NSA and others the power to monitor and analyse our communications. You could say this invades our privacy and violates the Constitution, or you could say it protects us from terrorists. It’s up to us to decide how to use that power.

Another ethical issue we should be thinking about is how computational biology is using AI to create designer babies; AI techniques are helping create the tools to make this happen. Who wouldn’t opt to have a perfect, healthy child? But if you eliminate naturally occurring diversity, what might the consequences be?

We can also easily imagine a situation where AI technology widens the gap between the haves and have-nots. Imagine the rise of a super-intelligent species, what I call “homo connecticus,” that comes about not through genetic mutation but by a kind of extra-genetic evolution in which individuals are enhanced through thousands of intelligent assistants. Already in this country, 15% of people live below the poverty line, and capitalistic principles get in the way of helping them. If AI isn’t universally available, today’s homo sapiens could be the chimpanzees of future millennia.

Much has been made of the potential for AI in pop culture. What are some of the biggest myths you’ve seen? Can you think of examples where science fiction is getting close to reality?

The best example is Ray Kurzweil and Vernor Vinge’s description of the singularity, which I believe will happen. Where we disagree is on when it will happen. I think it won’t happen for at least another 100 years, if not longer.

Two of my favourite examples of science fiction in the movies are “Minority Report” and “Her”, not because they are completely realistic, but because they provide plausible scenarios of things that could happen. In my Turing talk, I speak about teleportation, time travel and immortality, but then I go on to redefine what I mean by those terms. For example, if we can observe things happening in 3D virtual reality without physically being there, that, in my mind, is teleportation, though of course that’s not the same definition you get from things like Star Trek.

The same thing happens in mathematics. If mathematicians don’t like a particular outcome, they will define a new world, such as that of the complex numbers, in which the facts they want hold true. The point is, if you don’t like the world that you are in, then make a world where what you are imagining is true. There are lots of possibilities, some reasonable and others perhaps not, but that depends on the date and time when you ask the question.
