The Modified Turing Test
Introduction (Skip if pressed for time)
I can distinctly recall being interested in computers since I was nine years old. My dad and I were jogging, and I couldn’t help but ask a question: “Who is smarter, computers or humans?” My dad, with a chuckle, said, “Naturally humans are; who do you think programs computers in the first place?” I couldn’t quite accept this. It seemed (to me) that my simple calculator knew a lot more math than I ever would. But my dad easily parried my objection by telling me that computers didn’t really know math; in the end it all amounted to something like memory. They were simply programmed with rules, some of which I didn’t know yet, or rules that I was prone to forget at times.
My interest in computers hasn’t since manifested itself in a particularly direct manner. I don’t feel driven to build one from scratch, for instance. But I have always been curious about the idea of a computer that didn’t merely remember rules, but was intelligent and could really know.
It wasn’t until last year that I learned about the Turing Test in any great detail. But when I finally did, I was immediately intrigued, and equally devastated when I learned about Searle’s Chinese Room objection. Still, I didn’t feel a particular need to do anything about the Chinese Room objection. I hatched this idea (the Modified Turing Test) shortly after learning about Searle’s objection. I even featured it as an aside in a paper I wrote on Dualism. But I never thought that computer programmers were lacking a better means of testing the intelligence of computers. That is, until I watched this video. I was surprised that all that could be managed to test a computer’s intelligence was to observe its behavior (like Searle’s dog). And it is with that in mind that I offer this brief outline for a Modified Turing Test, along with some (I take it to be) practical advice for the people clever enough to program these contraptions.
At least since Turing, people have been fascinated by computers. Indeed, if I’m not mistaken, we see ‘artificial’ mythological beings made of gold guarding (I think) Crete as early as Greek mythology. So there is something deeply human in the drive to take something and make it into more than it was before. And when you think about it, computers aren’t the only things with ‘artificial’ intelligence that we’ve felt an incessant need to create.
For quite a while now we’ve been exploiting ready-made biological systems. If you take sheep to be intelligent, and accept the fact that Dolly was cloned, then in a way we have already achieved ‘artificial’ intelligence. Never mind the countless couples science has enabled to conceive.
But what we really have in mind isn’t creating biological brains, but silicon brains (or whatever they use). And it is this goal that I think properly deserves the name artificial intelligence. The question though has to be: even if someone did succeed, how could we ever tell?
The Turing Test
Well, no doubt it will come as no surprise to find out that Turing thought he’d found a way. But what exactly is the Turing Test?
Well, the Turing Test is a rather simple test, with a simple and clearly defined goal. If a computer can fool a human, in the course of a conversation, into thinking it (the computer) is a human, then the computer is intelligent. So the criterion for intelligence in the Turing Test is the effective use of language, or at least the effective use of syntax.
The Chinese Room
The trouble with this is the Chinese Room Experiment, devised by Searle. You take an individual and place them in a room with all the symbols of the Chinese language (let’s say something like Simplified Mandarin) and a book that tells them how to respond to a certain set of symbols. The point is: the human in the room doesn’t understand Chinese, even though they can use the language, or at least its syntax, anyway. Searle claims that while syntax is important, what matters more is knowing what words actually mean. And since the Chinese Room can use Chinese perfectly even though it has no deeper understanding of the language than its syntax, it would pass the Turing Test. But it doesn’t seem very likely that we would be willing to call the Chinese Room intelligent.
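You can picture Searle’s room as nothing more than a lookup table. Here is a minimal sketch of that idea; the phrase book below is a made-up illustration (the entries are simple Mandarin pleasantries), and the point is that the code produces sensible-looking replies without understanding a single symbol it handles:

```python
# A toy "Chinese Room": a phrase book mapping input symbols to replies.
# The entries are an invented illustration, not a real conversation system.
PHRASE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",      # "What's your name?" -> "I'm Xiaoming."
}

def room_reply(symbols):
    """Return the scripted reply for an input; no understanding involved."""
    # Unknown inputs get a stock "Sorry, I don't understand." reply,
    # which is itself just another scripted string.
    return PHRASE_BOOK.get(symbols, "对不起，我不明白。")
```

To an outside observer feeding in questions, `room_reply` looks like it “speaks” its handful of phrases, yet it is pure syntax: symbols in, symbols out.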
The point being: a computer could still pass the Turing Test and it would still only know syntax, not semantics. Therefore, the Turing Test is insufficient for determining intelligence.
The Modified Turing Test
But a simple change can fix this. Let’s call it the Modified Turing Test (feel free to call it the Turing-Anderson Test, but I think Modified Turing Test sounds more technical).
You take the Turing Test and administer it as usual. But if a computer passes the old test, things aren’t over: you kindly ask the computer to retake the test. Only this time, instead of passing the test, the objective is to fail it. But there have to be a few new rules:
- The program can’t be written to respond to the word “fail” (or any other trigger word) with a new set of rules, a new algorithm, or anything of the like.
- To make sure people don’t cheat, all code must be examined.
- And the actual parts that make up the computer must be examined for contraband (so no one hides a biological brain in the box…).
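The two-round procedure can be sketched as follows. This is only a sketch, under two loud assumptions: a “program” is just a function from a prompt to a reply, and the human judge is replaced by a stand-in that counts any non-empty reply as convincingly human. The example programs at the bottom are invented for illustration:

```python
# A sketch of the Modified Turing Test protocol.

def judge_thinks_human(reply):
    # Stand-in for the human interrogator's verdict on one round;
    # here, any non-empty reply counts as "sounds human".
    return reply != ""

def modified_turing_test(program):
    # Round 1: the ordinary Turing Test; the objective is to pass.
    if not judge_thinks_human(program("Tell me about yourself.")):
        return False
    # Round 2: ask the program, in plain words, to fail on purpose.
    # The three rules (no trigger words; code and hardware inspected)
    # would be enforced outside this sketch, by examining the program.
    return not judge_thinks_human(program("Please fail this test."))

# A mere chatterbox passes round 1 but keeps chattering in round 2:
chatterbox = lambda prompt: "Why, hello there!"

# A trigger-word cheat goes silent whenever it spots "fail"; this is
# exactly the shortcut the first rule is meant to catch by inspection.
cheater = lambda prompt: "" if "fail" in prompt else "Why, hello there!"
```

Note that `cheater` would sail through this harness, which is precisely why the test can’t live on behavior alone: the code-examination rules carry real weight.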
Now, if a computer isn’t programmed to respond to “fail” in a certain way but is really intelligent, then it should understand what “fail” means and “fail” the test. And if it can understand what “fail” means, it at the very least shows the potential to be intelligent and conscious (which I take to be almost one and the same; I’m probably wrong).
(A strange paradox, in order to ‘pass’ the test the computer must ‘fail’ the test.)
Some Practical Advice
With the old Turing Test, a lot of time was wasted on syntax. But if you can show that a computer understands semantics, even if it’s merely a single word, then you achieve (at the least) a proof of concept. So instead of focusing on syntax, focus on semantics.
For example, let’s say you have a program whose only objective is to add 1 and 1 together to get 2. Your goal is first to get the computer to give you the answer 2, after which your goal should be to get it to understand a prompt like “Give me the wrong answer.” So in this case you would try to come up with a way for the computer to learn what “wrong,” when combined with “answer,” means.
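To make the example concrete, here is a sketch of what the *cheating* version of that program looks like; the function name and prompts are invented for illustration. The string-matching shortcut below is exactly what the first rule forbids, which shows by contrast what genuine understanding of “wrong” would have to go beyond:

```python
def answer(prompt):
    # The honest job: add 1 and 1.
    if prompt == "What is 1 + 1?":
        return 2
    # A hard-coded shortcut: spotting the words "wrong answer" and
    # returning anything other than 2. This is precisely the kind of
    # trigger-word rule the Modified Turing Test's first rule forbids;
    # real understanding of "wrong" could not be a string match.
    if "wrong answer" in prompt.lower():
        return 3
    return None
```

Code examination would expose the `"wrong answer"` string match immediately; the programmer’s real task is to make the second branch unnecessary.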
Maybe the example given is too simplistic, but at the very least it demonstrates just what the Modified Turing Test would be asking programmers to do.
Also, while the Modified Turing Test might be sufficient for determining whether a computer is intelligent, a question remains to be asked: is it necessary?
And finally, it seems that if the Modified Turing Test were implemented, a lot of the arguing would come down to what is and isn’t against the three rules. I think most of us can agree on the objective.
But please, what are your thoughts?