The Course of Reason

Did Deep(er) Blue Pass the Turing Test?

January 15, 2013

There is a very high probability that you are reading this on a computer. Computers are really cool. They are based on the work of an English guy named Alan Turing.

Alan Turing
Alan Turing, King of The Geeks.

He was really good at math and logic, and figured out how machines could carry out complex computations. It was just in the nick of time, too, because his work was crucial in cracking Nazi codes during World War II. In addition to helping save the world and forever changing it by laying the theoretical foundations of computing, he theorized about the limits of computers. A lot of people at the time wondered whether computers could become so complex as to someday be conscious. Turing's insight was that this question, as posed, can never be tested. A more interesting question is whether a machine could be programmed to behave as if it were conscious. More specifically, could a machine ever imitate a human mind so convincingly that a real human thought he was interacting with another real human? This is called the Turing test, and it has become a benchmark for artificial intelligence research.

You have probably encountered programs that fail the Turing test, especially if you've ever interacted with AIM chatbots. The conversation might start out normal, but it quickly becomes apparent that you are not talking to a real person.

Here is an excerpt from an actual conversation I had with a chatbot (I've changed my friend's screen name so that you Internet weirdos don't bother the person):

Actual Gmail conversation:

Friend: hey buddy

Me: hey what's up

Friend: haha I just scored 82 on this IQ test

Me: thats low-ish

Friend: [Link removed so you don't click it and get a virus] Can u take it plz and tell me what u get?

Me: nah

Friend: this one is supposed to be pretty accurate and I wanna see if I'm smart plus I know u wont beat me!! LOL

Me: IQ tests are bullshit

plus, you might have been hacked and are sending me a link to a virus

Friend: k

yay, someone to talk to lol

Me: remind me what your screen name means?

Friend: been a while, how are ya

Me: you?

Friend: been a while, how are ya

Me: exactly, you know?

Friend: hey buddy

Me: hey buddy

you have failed the turing test, robot

It just seemed really strange and non-human to me. I could tell something was fishy. It was nothing like talking to an actual person. That's why the chatbot failed the Turing test. Chatbots are probably a lot better now, but I doubt they can pass the Turing test very frequently, if at all. I'm quite confident that someday they will, though.
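The looping behavior in the transcript above is typical of simple canned-response bots. Here's a minimal sketch of how such a bot might work, and why it repeats itself (all the strings and names here are my own invention for illustration, not the actual bot's code):

```python
import random

# Canned opening lines and keyword-triggered replies, in the style of
# an old AIM spam bot. All strings are hypothetical examples.
OPENERS = ["hey buddy", "been a while, how are ya"]
REPLIES = {
    "iq": "haha I just scored 82 on this IQ test",
    "test": "this one is supposed to be pretty accurate",
}

def respond(message):
    """Return a canned reply; fall back to a random opener.

    The bot has no memory and no model of the conversation, so any
    message without a trigger keyword gets a recycled opener --
    which is exactly the repetition that gives it away.
    """
    for keyword, reply in REPLIES.items():
        if keyword in message.lower():
            return reply
    return random.choice(OPENERS)

print(respond("what did the IQ test say?"))          # keyword hit
print(respond("remind me what your screen name means?"))  # recycled opener
```

A human asked "remind me what your screen name means?" would answer the question; the bot, finding no trigger keyword, just loops back to "hey buddy." That's the Turing test failing in miniature.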

The reason I'm confident of this is because of Game 2 of Deep Blue vs. Garry Kasparov in their 1997 rematch. Kasparov played an earlier version of Deep Blue in 1996 and won the match convincingly, 4–2 (though Deep Blue took the first game, the first a computer had ever won against a reigning world champion under tournament conditions). That turned out to be the last time the best human beat the best chess computer in a match. Kasparov convincingly won Game 1 of the '97 rematch, but it was a six-game match, so what really mattered was the overall score.

computer vs Garry Kasparov
Computer (right) vs. Garry Kasparov (left)

For some context, you should know that Kasparov, and everyone else (except the team at IBM), basically thought that no chess engine would ever be able to beat a human grandmaster, because humans have creativity and other unquantifiable magic powers. So, when Deeper Blue, as the upgraded machine became known, proceeded to beat Kasparov, it showed us that computers can be programmed to do things that were traditionally considered solely within the domain of the human mind. Hence my confidence that chatbots will someday be extremely convincing conversationalists.

forever alonely
This may not be so Forever Alonely.

What's really fascinating about the '97 match is a particular move in Game 2. Move 37, to be precise. See, Kasparov was expecting Deeper Blue to play like a machine, much like we expect chatbots to talk like machines when we interact with them. He set a trap that, prior to this game, every chess-playing computer would have fallen for, and many still would today: he offered to sacrifice some material so that later in the game he would have opportunities to attack. This kind of long-term, quasi-fuzzy planning is called positional (or strategic) play: an evaluation of the board based on difficult-to-quantify concepts, as opposed to tactical play, which is based on concrete move-by-move calculation. Machines at the time were bad at positional play (compared to grandmasters), and Kasparov knew that only the best human grandmasters would have declined the material.

The best positional human player ever, and Kasparov's human nemesis, Anatoly Karpov.
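The tactical-versus-positional distinction can be made concrete with a toy evaluation function. Counting material is easy to program; positional judgment means layering on fuzzier terms like open files or king safety, each with a hand-tuned weight. A minimal sketch (the piece values are the conventional ones; the positional terms and weights are invented for illustration, and real engines tune hundreds of such features):

```python
# Conventional material values, measured in pawns.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(board):
    """Tactical baseline: sum of piece values, White minus Black.

    `board` is a toy representation: a list of (piece, color) tuples.
    """
    score = 0
    for piece, color in board:
        value = PIECE_VALUES.get(piece, 0)
        score += value if color == "white" else -value
    return score

def positional_score(board, open_files=0, king_exposed=False):
    """Material plus hand-tuned positional terms (weights invented here).

    A material-only engine happily grabs a pawn even when terms like
    these say the resulting position is bad -- which is roughly the
    trap Kasparov set, and Deeper Blue declined.
    """
    score = material_score(board)
    score += 0.25 * open_files   # e.g. rooks like open files
    if king_exposed:
        score -= 1.5             # long-term danger, no material lost yet
    return score

board = [("Q", "white"), ("P", "white"), ("R", "black")]
print(material_score(board))    # 9 + 1 - 5 = 5
print(positional_score(board, open_files=2, king_exposed=True))
```

The point of the sketch: the two functions can disagree about the same position, and which one your engine trusts determines whether it takes the bait.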

Deeper Blue didn't take the bait. Instead, it seemed to see what Kasparov was up to, and it stopped him from ever making the positional plays, to no immediate material benefit for itself.

Kasparov freaked out.

Everyone freaked out.

Kasparov was convinced that a human intelligence was behind the move.

Kasparov never recovered from the emotional effect the move had on him, and he went on to lose the match. He was preoccupied with his suspicion that IBM had cheated, that there were devious human minds behind Deeper Blue's strategic move. He demanded over and over again to see the computer logs of Deeper Blue's execution, but IBM refused; after all, they didn't want competitors to see under the hood. Kasparov took this refusal to cooperate as evidence of foul play, and he never regained his focus for the remaining games of the match. After every subsequent loss or draw, he'd spend the ensuing press conference demanding to see the logs from Game 2, move 37.

Kasparov is not an idiot. When it comes to assessing one's chess opponents, he is without a doubt the most qualified human. He was certain, absolutely certain, that there was a human mind behind move 37. When it comes to distinguishing human chess players from computer chess players, we can consider him an expert. But I think he was wrong here. I think the best explanation for this series of events is that Deeper Blue passed the Turing test with flying colors.

Kasparov continued to spiral out of control, eventually teaming up with machines and trying to take over the world.

Kasparov with friend
Kasparov with a Cyberdyne Systems Series 800 Model 101 Version 2.4, best of friends.

Okay, not really. But A.I. research continues to march on, building on the advances in automated decision-making trail-blazed by Turing and the Deep Blue team. The best human grandmasters don't stand a chance against the best chess engines of today, and we should expect similar results in other areas of automated reasoning. The better we are at building machines that seem to think, the more humble we should be about our own reasoning abilities. The human capacity for error is perhaps the only area in which we will always exceed that of machines.

Of course, IBM might have just cheated.


About the Author: Seth Kurtenbach

Seth Kurtenbach's photo
Seth Kurtenbach is pursuing his PhD in computer science at the University of Missouri. His current research focuses on the application of formal logic to questions about knowledge and rationality. He has his Master's degree in philosophy from the University of Missouri, and is growing an epic beard in order to maintain his philosophical powers. You can email Seth at or follow him on Twitter: @SJKur.





Creative Commons License

The Course of Reason is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.

CFI blog entries can be copied or distributed freely, provided:

  • Credit is given to the Center for Inquiry and the individual blogger
  • Either the entire entry is reproduced or an excerpt that is considered fair use
  • The copying/distribution is for noncommercial purposes