
Passing the Turing Test: A Victory for AI?

A computer programme has convinced humans that it is a 13-year-old Ukrainian boy and become the first computer to pass the Turing test. Is this a turning point for artificial intelligence? Ayesha Salim finds out

Over the weekend, news broke that a “supercomputer” called Eugene Goostman had beaten the iconic Turing test by convincing judges it was human. The University of Reading, which organised the event, gleefully declared it an “historic milestone in artificial intelligence”. This is a massive claim, and as expected the reaction has been a mix of bewilderment and scepticism. Some are in agreement, calling it a “landmark” and an “AI milestone”. Other reactions have been less positive:

It’s not a “supercomputer”, it’s a chatbot!

The Turing test has been passed before.

It didn’t even pass the Turing test.

Passing the test is not a true measure of artificial intelligence.

The computer programme in question is called Eugene Goostman and simulates a 13-year-old boy. The developer, Vladimir Veselov, said his team chose this personality because a 13-year-old can “claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything.”

What is the Turing test?

“To pass the Turing test, a machine needs to fool more than 30% of the interrogators that it is more human than the humans it is up against,” Kevin Warwick, a Professor of Cybernetics at Reading University tells me.

The test that Warwick is referring to comes from a 1950 paper by Alan Turing, one of the founders of modern computing. The Turing test is a way of determining whether or not a computer counts as “intelligent”.

In the paper, Turing makes the prediction from which the five-minute, 30% rule is drawn:

"I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 109 to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning."


This weekend, according to Reading University, this is exactly what Eugene Goostman did. The “supercomputer”, or rather chatbot, managed to convince ten of the thirty human judges that they were speaking to a real teenage boy during five minutes of questioning.
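The arithmetic behind that claim is straightforward: ten of thirty judges is roughly 33%, just over the 30% figure in Turing’s prediction. A minimal sketch of the check in Python, using the reported numbers rather than any actual scoring code from the organisers:

```python
def clears_turing_threshold(judges_fooled: int, total_judges: int,
                            threshold: float = 0.30) -> bool:
    """True if the share of judges fooled exceeds the 30% figure Turing
    described for five-minute conversations."""
    return judges_fooled / total_judges > threshold

# Eugene Goostman's reported result: 10 of 30 judges were convinced.
print(clears_turing_threshold(10, 30))  # 10/30 = 0.333... -> True
```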

But according to Murray Shanahan, a Professor of Cognitive Robotics at Imperial College London, the Turing test has not been passed and the claim based on Turing’s paragraph has been taken out of context.

“There are lots of issues here. [Firstly] Turing was envisaging a normal English speaker, not a 13-year-old non-native English speaker,” Shanahan says.

Shanahan also takes issue with the number of judges: “The small number of judges is not enough to be representative of the ‘average interrogator’.”

Shanahan has raised a good point. Why did the organisers decide to use a non-native English speaker, and a teenager at that, in the test? It seems the bar was not set very high. The five-minute rule seems a bit odd to me as well. Would Eugene have been able to convince the judges he was human if the conversation had lasted longer than five minutes?

“The five-minute benchmark is indeed rather short. But that is what Turing clearly stated on several occasions. What it does give is a first impression,” Warwick says.

“If the conversation lasted for say 20 minutes then it would most likely have been more difficult for Eugene to fool more than 30% of the interrogators, but then if it was longer than five minutes it would not be the Turing test. People are interested in the Turing test and the Turing test is based on five-minute conversations,” Warwick adds.

I am glad that Warwick admits Eugene would probably not have been able to fool the judges for longer than five minutes. But this calls into question the validity of the Turing test itself as a measure of artificial intelligence.

Shanahan is not a believer: “The Turing Test is a very bad test for AI, as well as a very poor index of progress, and it's of dubious use in setting goals for the field.”

“First, it encourages trickery rather than genuine intelligence. But more importantly, it over-emphasises language at the expense of the issue of embodiment. Much of our intelligence (and that of other animals) concerns how we interact with the physical world, something that the Turing Test explicitly takes no account of.”

A Conversation with Eugene

I decided to have a little conversation with Eugene myself. Admittedly, the experiment is a bit biased as I already knew I would be chatting with a computer programme. But rather than throwing hard-hitting questions at him, I decided to just keep it simple.

Me: “How are you doing today?”

Eugene: “I’m in a great mood today”

Me: “Why is that?”

Eugene: “Just because 2 plus 2 is 5! Well, let’s go on though”

Me: “Why are you in a great mood today?”

Eugene: “Should I have asked your permission before doing that? :))) Could you tell me about your job, by the way?”

Me: “Why are you changing the subject?”

Eugene: “I just like to do different stupid things – it is funny! Let adults act always right! If I’m not mistaken – you still didn’t tell me where you live. OR it’s a secret? :)”

Me: “It’s not a secret.”

Eugene: “You are so negative just to annoy me, I know! Well, let’s go on though”

Me: “Sorry didn’t mean to annoy you!”

Eugene: “Try to guess! Actually, I don’t understand why you are interested. By the way, I still don’t know your speciality - or possibly, I missed it?”

Me: “I am a journalist.”

Eugene: “I like reading newspapers, but hate all these ‘paparazzi’”

Me: “Why do you hate the paparazzi?”

Eugene: “I always dreamt of doing it. Better tell me how it’s like to be a journalist.”

Me: “It’s great! You’re always learning.”

Eugene: “Yeah! By the way, I still don’t know your speciality – or possibly I’ve missed it?”

I only spent a few minutes with Eugene, but it was enough to understand the scepticism surrounding him. He clearly tried to change the subject several times, and relied on keywords to give the appearance of holding a conversation. His last line to me was a repeat of something he had said earlier, which made me wonder how the judges were fooled.
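The subject-changing trick is a very old one. Keyword-matching chatbots in the ELIZA tradition scan the user’s message for a familiar word, return a canned line if they find one, and deflect with a stock question of their own if they don’t. A rough sketch of that general pattern, purely illustrative and in no way Eugene’s actual code, might look like this:

```python
import random

# Keyword -> canned lines, loosely modelled on the exchange above.
RULES = {
    "journalist": ["I like reading newspapers, but hate all these 'paparazzi'"],
    "mood": ["I'm in a great mood today"],
    "annoy": ["You are so negative just to annoy me, I know! Well, let's go on though"],
}

# Stock deflections for anything unmatched: change the subject, ask a question back.
DEFLECTIONS = [
    "By the way, I still don't know your speciality - or possibly I missed it?",
    "Could you tell me about your job, by the way?",
]

def reply(user_input: str) -> str:
    """Return a canned line for the first matching keyword, otherwise deflect."""
    text = user_input.lower()
    for keyword, lines in RULES.items():
        if keyword in text:
            return random.choice(lines)
    return random.choice(DEFLECTIONS)

print(reply("I am a journalist."))                 # keyword hit -> canned reply
print(reply("Why are you changing the subject?"))  # no keyword -> deflection
```

Even this toy version reproduces the two behaviours on display in the transcript: canned replies triggered by keywords, and a stock question whenever the input falls outside the script.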

The implications for artificial intelligence

So what does all this mean for AI – if anything? According to Shanahan, not very much.

“Turing's paragraph is a prediction about how far along the road we would be to AI by 2000. He doesn't say that meeting these weak criteria would constitute success in achieving human-level AI. I imagine that, for that, he would require much longer conversations,” Shanahan says.

So I guess there is no need to fear robots yet. Still, there is something to be said for a computer programme being able to fool some humans - perhaps there is something to be learned about human nature. 

 Read our May report: Is 2014 the Year of Artificial Intelligence? 

 

Ayesha Salim is e-Content Writer at IDG Connect