Last week, headlines exploded on every device I own: “Computer passes the Turing Test.” If by some chance you missed this so-called technological milestone, you can read about it here: Computer passes Turing Test.
I, for one, am entirely skeptical. Alan Turing (for whom the Turing test is named) posited in 1950 that determining whether a computer could ‘think’ was not possible, but that determining whether a computer could IMITATE thinking was.
The recent test at the University of Reading pitted a computer imitating a 13-year-old boy against 30 judges, who had to determine whether the persona “Eugene Goostman” was a computer or a real 13-year-old boy. It does not give the result the testers want you to accept.
Only 10 of the 30 judges ruled that the AI (artificial intelligence persona) was a real boy. Pinocchio would be proud.
However, is this really passing the Turing test? I say, no. It is not.
Objection #1 – Only 1/3 of the judges believed they were talking to a real boy.
To begin with, the 30 judges were specifically told they were talking to a 13-year-old boy from Ukraine. This framing excused any awkward wording or delayed response as a natural human translation, comprehension, or syntax issue.
We’ll call that Objection #2 – The judges were predisposed to believe they were talking to a human.
Additionally, the Turing test supposes that a computer can imitate a human personality for a brief conversation, limited to text chat only. But as devices and computers become more and more connected to the internet (the so-called Internet of Things), the integration of search (Bing.com and Google) and AI personas into devices is inevitable. Take Cortana, for example: Bing already has a persona in Cortana, which currently runs on Windows Phones. More about Cortana here: Meet Cortana
These devices, with a persona attached, would then be instantaneous experts, connected to search engines and able to access subject matter that the ordinary person would have no real use for.
Wouldn’t it be possible, then, to ask a supposed 13-year-old boy a question via text chat that a 13-year-old boy should not be able to answer?
For example, if I asked what I assumed was a 13-year-old boy, “Who painted ‘The Second of May 1808 (The Charge of the Mamelukes)’?” and the ‘child’ immediately responded that Francisco Goya painted it in 1814, in the same two-month period in which he also painted “The Third of May 1808”, I might be a little suspicious that I wasn’t text chatting with a real child.
Numerous examples could be given here for a persona of any supposed age or sex. Questions could be posed that the persona should not reasonably be able to answer.
We’ll call this Objection #3 – Computer AI has access to information at speeds well beyond human comprehension. To successfully mimic a human, the AI would have to play dumb. This invalidates the Turing test.
This also makes the Turing test incredibly difficult to pass, or at worst, not really a valid test of computer AI mimicry at all.
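To make the "play dumb" idea concrete, here is a minimal, purely hypothetical sketch of what a chatbot designer would have to do: throttle an instant, encyclopedic answer down to a human typing speed, and hedge or refuse answers a 13-year-old shouldn't know. The function names, the typing rate, and the hedging phrases are all my own illustrative assumptions, not anything from the actual Eugene Goostman program.

```python
# Hypothetical sketch of "playing dumb": the AI knows the answer instantly,
# but must degrade and delay it to seem human. All names and numbers here
# are illustrative assumptions.

WORDS_PER_MINUTE = 40  # assumed typing speed of a 13-year-old

def play_dumb(answer: str, confidence: float) -> str:
    """Degrade an instant, encyclopedic answer into something human-plausible."""
    if confidence > 0.9:
        # A real teenager rarely answers obscure art-history trivia with certainty.
        return "hmm, not sure... maybe " + answer + "?"
    return "no idea, sorry"

def human_delay(text: str) -> float:
    """Seconds a human would plausibly need to type this reply."""
    return len(text.split()) / (WORDS_PER_MINUTE / 60.0)

reply = play_dumb("Francisco Goya", confidence=0.99)
delay = human_delay(reply)
# In a live chat the bot would stall with time.sleep(delay) before sending.
print(f"(after ~{delay:.0f}s) {reply}")
```

The point of the sketch is the asymmetry: every line exists to make the machine worse at the task, which is exactly why imitation, rather than intelligence, is what the test ends up measuring.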
A computer AI that can text chat, interact with humans naturally, play games, instruct, and converse does not need to pass as human, and probably cannot. Any long-term exposure to an AI will begin to reveal patterns and flaws in the personality, or a superior skill, that is NOT human.
Perhaps we need to better define what is human in order to understand what is not.
Objection #4 – We haven’t defined what human is.
For now… to quote my friend Andy Garlington (quoting WarGames):
“Shall we play a game?”