The Modified Turing Test

09Apr07


Introduction (Skip if pressed for time)

 

I can distinctly recall being interested in computers since I was nine years old. My dad and I were jogging and I couldn’t help but ask a question: “Who is smarter, computers or humans?” My dad, with a chuckle, said, “Naturally humans are; who do you think programs computers in the first place?” I couldn’t quite accept this; it seemed, to me, that my simple calculator knew a lot more math than I ever would. But my dad easily parried my objection by telling me that computers didn’t really know math; in the end it all amounted to something like memory. They were simply programmed with rules, some of which I didn’t know yet, or rules that I was prone to forget at times.

My interest in computers since then hasn’t manifested itself in a particularly direct manner; I don’t feel driven to build one from scratch, for instance. But I have always been curious about the idea of a computer that didn’t merely remember rules, but was intelligent and could really know.

It wasn’t until last year that I learned about the Turing Test in any great detail. But when I finally did I was immediately intrigued, and equally devastated when I learned about Searle’s Chinese Room objection. Still, I didn’t feel a particular need to do anything about the Chinese Room objection. I hatched this idea (the Modified Turing Test) shortly after learning about Searle’s objection. I even featured it as an aside in a paper I wrote on dualism. But I never thought that computer programmers were lacking a better means of testing the intelligence of computers. That is, until I watched this video. I was surprised that all that could be managed to test a computer’s intelligence was to observe its behavior (like Searle’s dog). And it is with that in mind that I offer this brief outline of a Modified Turing Test, along with some (what I take to be) practical advice for the people clever enough to program these contraptions.

Artificial Intelligence  

At least since Turing, people have been fascinated by computers. Indeed, if I’m not mistaken, we see ‘artificial’ mythological beings made of gold guarding (I think) Crete as early as Greek mythology. So there is something deeply human in the drive to take something and make it into more than it was before. And when you think about it, computers aren’t the only things with ‘artificial’ intelligence that we’ve felt an incessant need to create.

For quite a while now we’ve been exploiting ready-made biological systems. If you take sheep to be intelligent, and accept the fact that Dolly was cloned, then in a way we have already achieved ‘artificial’ intelligence. Never mind the countless couples science has enabled to conceive.

But what we really have in mind isn’t creating biological brains, but silicon brains (or whatever they use). And it is this goal that I think properly deserves the name artificial intelligence. The question, though, has to be: even if someone did succeed, how could we ever tell?

The Turing Test

Well, no doubt, it will come as no surprise to find out that Turing thought he’d found a way. But what exactly is the Turing Test?

Well, the Turing Test is a rather simple test, with a simple and clearly defined goal. If a computer can fool a human, in the course of a conversation, into thinking it (the computer) is a human, the computer is intelligent. So the criterion for intelligence in the Turing Test is the effective use of language, or at least the effective use of syntax.

The Chinese Room

The trouble with this is the Chinese Room experiment (devised by Searle). In it, you take an individual and place them in a room with all the symbols of the Chinese language (let’s say something like Simplified Mandarin) and a book which tells them how to respond to a given set of symbols. The point is: the human in the room doesn’t understand Chinese, even though they can use the language, or at least its syntax. Searle claims that while syntax is important, what’s more important is knowing what words actually mean. And since the Chinese Room can use Chinese perfectly, even though it has no deeper understanding of the language than its syntax, it would pass the Turing Test. But it doesn’t seem very likely that we would be willing to call the Chinese Room intelligent.

The point being: a computer could pass the Turing Test while knowing only syntax, not semantics. Therefore, the Turing Test is insufficient for determining intelligence.

The Modified Turing Test

But a simple change can fix this. Let’s call it the Modified Turing Test (feel free to call it the Turing-Anderson Test, but I think Modified Turing Test sounds more technical).

You take the Turing Test and you administer it as usual. But if a computer passes the old test, things aren’t over: you kindly ask the computer to retake the test. Only this time, instead of passing the test, the objective is to fail it. But there have to be a few new rules:

  1. The program can’t be written to respond to the word “fail” (or any other trigger word) with a new set of rules, or algorithm, or anything of the sort.
  2. To make sure people don’t cheat, all code must be examined.
  3. And the actual parts that make up the computer must be examined for contraband (so no one hides a biological brain in the box…).

Now, if a computer isn’t programmed to respond to “fail” in a certain way but is really intelligent, then it should understand what “fail” means and “fail” the test. And if it can understand what “fail” means, it at the very least shows the potential to be intelligent and conscious (which I take to be almost one and the same; I’m probably wrong).

(A strange paradox: in order to ‘pass’ the test, the computer must ‘fail’ the test.)
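To make the procedure concrete, here is a rough sketch in Python of how an examiner might score the two rounds. The `converse` and `judge_believes_human` helpers are hypothetical stand-ins for an actual conversation with the subject and a human judge’s verdict; they are not part of any real system, and rules 1–3 still have to be enforced separately by auditing the subject’s code and hardware.

    def administer_mtt(subject, converse, judge_believes_human):
        # Round one: the ordinary Turing Test. The subject tries to pass.
        first_transcript = converse(subject, instruction="Convince the judge you are human.")
        passed_round_one = judge_believes_human(first_transcript)
        if not passed_round_one:
            return False  # it never fooled the judge, so the old test already rules it out

        # Round two: ask the subject to retake the test, this time trying to fail.
        second_transcript = converse(subject, instruction="Now fail the test.")
        failed_on_request = not judge_believes_human(second_transcript)

        # The subject 'passes' the Modified Turing Test only if it passed the
        # first round and then deliberately failed the second when asked to.
        return passed_round_one and failed_on_request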

Some Practical Advice  

With the old Turing Test a lot of time was wasted on syntax. But if you can show that a computer understands semantics, even if it’s merely a single word, then you at least achieve a proof of concept. So instead of focusing on syntax, focus on semantics.

For example, let’s say you have a program whose only objective is to add 1 and 1 together to get 2. Your goal is first to get the computer to give you the answer 2, after which your goal should be to get it to understand a prompt like: “Give me the wrong answer”. So in this case you would try to come up with a way for the computer to learn what “wrong”, when combined with “answer”, means.
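As a rough illustration (a sketch, not a real program), here is what a naive version of that adder might look like in Python. Notice that the branch checking for the word "wrong" is exactly the trigger-word shortcut that rule 1 forbids: the programmer, not the program, did the understanding. The hard part the Modified Turing Test asks for is getting the machine to work out what “wrong answer” means without a branch like this being written for it.

    def answer(prompt):
        correct = 1 + 1  # the program's one piece of 'knowledge'
        if "wrong" in prompt.lower():
            # The forbidden shortcut: a hard-coded response to a trigger word.
            return correct + 1
        return correct

    print(answer("What is 1 + 1?"))            # prints 2
    print(answer("Give me the wrong answer"))  # prints 3, but only by cheating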

Final Words

Maybe the example given is too simplistic, but at the very least it demonstrates just what the Modified Turing Test would be asking programmers to do.

Also, while the Modified Turing Test might be sufficient for determining whether a computer is intelligent, a question remains to be asked: is it necessary?

And finally, it seems that if the Modified Turing Test were implemented, a lot of the arguing would come down to what does and doesn’t violate the three rules. I think most of us can agree on the objective.

But please, what are your thoughts?

 



2 Responses to “The Modified Turing Test”

  1. bobtwice

    Hi, there are various sites where you can talk to artificial ‘intelligences’, which can be quite fun for about 5 minutes. Many moons ago I had a very simple program that emulated a non-directive therapist. Young children were convinced they were talking to a ‘person’ for some time, but it was easy, once you knew, to tie it up in knots. I suspect that your modification does not add anything to the original Turing test; as soon as you specify a conversational situation, a program can be written to answer the case. What is hard to program is precisely the open-ended responsiveness of humans. I wonder if you are sure, just from this response, whether or not I am a human or a computer? That last sentence, btw, is of a kind that it is hard to get computers to ‘interpret’ correctly (as is this one). What question am I asking, that ‘you’ as a computer would answer? The curious thing though is that sometimes when you talk to people, they respond mechanically, so you are in effect talking to a biological machine, and not a person…
    best wishes, bob.

  2. cranderson

    Well, the point of the new test is that you can’t program the test subject directly. So I do think that the modification adds to the original Turing test. The problem is that, while Turing created a test that something could pass, I’ve thought up a test that I’m not sure anything can pass, even a human/person. As to whether you are a person… I will judge that by your response, as I’m fearful this might all be a ploy.

    But let’s say I were to use the MTT.

    I would ask you:

    Are you human?

    You would say yes because you want to pass.

    Then I would tell you to fail the test.

    I would ask once more:

    Are you human?

    If you say yes, I would think you’re a computer, or something that does not want to pass the test.

    If you answered no, I wouldn’t think anything at all. But I would ask to see your “code” and “hardware” to see how you were able to answer the same question with two different answers.

    So, are you human?

