Garry Kasparov Tangled Up in Deep Blue

When world human chess champion Garry Kasparov lost his second match with Deep Blue in 1997, I said, and have continued to say and write, that Garry Kasparov will be remembered only for losing to the chess program known as Deep Blue. Many find this unpalatable, but, as Walter Cronkite used to say to end his CBS news broadcast, “That’s the way it is.”
Proof can be found on This Day in History under May 11:
“On May 11, 1997, chess grandmaster Garry Kasparov resigns after 19 moves in a game against Deep Blue, a chess-playing computer developed by scientists at IBM. This was the sixth and final game of their match, which Kasparov lost two games to one, with three draws.” (http://www.history.com/this-day-in-history/deep-blue-defeats-garry-kasparov-in-chess-match)
This is the only listing for Kasparov. There is absolutely nothing concerning any of his other chess accomplishments. It may be unfortunate for Garry, but this is how he leaves his mark on history: as a loser.
The above-mentioned article ends with, “The last game of the 1997 Kasparov v. Deep Blue match lasted only an hour. Deep Blue traded its bishop and rook for Kasparov’s queen, after sacrificing a knight to gain position on the board. The position left Kasparov defensive, but not helpless, and though he still had a playable position, Kasparov resigned–the first time in his career that he had conceded defeat. Grandmaster John Fedorowicz later gave voice to the chess community’s shock at Kasparov’s loss: “Everybody was surprised that he resigned because it didn’t seem lost. We’ve all played this position before. It’s a known position.” Kasparov said of his decision, “I lost my fighting spirit.”

Many have called Garry Kasparov the greatest chess player in the history of the game. I have always wondered why. I mean, if a player loses the biggest match of his life, a match in which he was fighting for the honor of the human race, how can anyone in their right mind consider him to be the greatest? Garry Kasparov will always be considered a loser by the public.

The chessgames.com website provides the final game of the match, naming it, “Tangled Up in Blue.”
Deep Blue (Computer) vs Garry Kasparov
“Tangled Up in Blue” (game of the day Sep-12-05)
IBM Man-Machine, New York USA (1997) · Caro-Kann Defense: Karpov, Modern Variation (B17) · 1-0
1. e4 c6 2. d4 d5 3. Nc3 dxe4 4. Nxe4 Nd7 5. Ng5 Ngf6 6. Bd3 e6 7. N1f3 h6 8. Nxe6 Qe7 9. O-O fxe6 10. Bg6+ Kd8 11. Bf4 b5 12. a4 Bb7 13. Re1 Nd5 14. Bg3 Kc8 15. axb5 cxb5 16. Qd3 Bc6 17. Bf5 exf5 18. Rxe7 Bxe7 19. c4 1-0
http://www.chessgames.com/perl/chessgame?gid=1070917

Garry Kasparov lost with the Karpov Variation. Cogitate on that for a moment. Consider why Kasparov would even consider playing a variation named for the previous World Champion, the man he had dethroned. The variation was totally out of character for Kasparov. The only way this makes any sense to me is that Garry Kasparov took a dive. To “take a dive” means to lose intentionally, as when a prizefighter loses because the fix is in, the way Sonny Liston was widely believed to have done when he hit the mat against Muhammad Ali, the former Cassius Clay, in their 1965 rematch. The match meant a great deal to IBM, especially winning it. How much was it worth to IBM? Jonathan Schaeffer of the Department of Computing Science at the University of Alberta, the man behind Chinook, the World Checkers Champion program that cannot lose (“Computer Checkers Program Is Invincible,” http://www.nytimes.com/2007/07/19/science/19cnd-checkers.html?_r=0; see also http://phys.org/news104073048.html), had this to say:
“The victory went around the world. IBM estimated it received $500 million of free publicity from the match, and IBM stock prices went up over $10 to reach a new high for the company.” (http://askeplaat.wordpress.com/534-2/deep-blue-vs-garry-kasparov/)

NPR featured a story on August 8, 2014, “Kasparov vs. Deep Blue.” It can be heard here: (http://www.npr.org/2014/08/08/338850323/kasparov-vs-deep-blue)
A transcript is also provided:
In 1997, Deep Blue, a computer designed by IBM, took on the undefeated world chess champion, Garry Kasparov. Kasparov lost. Some argued that computers had progressed to be “smarter” than humans.

GLYNN WASHINGTON, HOST:
And speaking of Stephanie Foo, she likes to take things literally. And I told her, I said Stephanie, don’t be such a drag. You’ve been smoking the company line, you got to loosen up, come on, the rules are meant to be broken – it’s time to rage against machine – lady rage, come on. Well, Stephanie – Stephanie promptly brought me a story about raging against a machine. The real machine and someone raging against it. Stephanie Foo, take it away.
STEPHANIE FOO, BYLINE: OK, yes. This story is about chess, but not just any chess game – one of the most famous ever.
(SOUNDBITE OF ARCHIVED RECORDING)
UNIDENTIFIED MAN: Deep Blue and Garry Kasparov, the world’s chess champions…
FOO: It’s 1997 – world chess champion Garry Kasparov versus Deep Blue, a computer designed by IBM. And for people who wanted to believe that the human brain was still stronger than computers, this was a huge deal.
(SOUNDBITE OF ARCHIVED RECORDING)
MAURICE ASHLEY: This is the international chess match, I’m Maurice Ashley. The future of humanity is on the line. Now the weather.
FOO: Now Kasparov has never lost a match – ever. He was destroying all the grandmasters at the age of 22. He’d even beaten Deep Blue once before, so he is going into this rematch totally confident, and true enough – bam – Kasparov wins game one easy.
(Applause)
FOO: But then game two is where everything starts to go wrong. In this game, Deep Blue is dominating. Kasparov is visibly frustrated. He’s rubbing his face, sighing, and then abruptly Kasparov just walks off the stage and quits – forfeits the game. The night after the game, his fans analyzed the match and figured something out – something Kasparov, an undefeated grandmaster, should have seen. If he had not stormed off the stage and just played his normal game, he could’ve tied Deep Blue.
(SOUNDBITE OF ARCHIVED RECORDING)
UNIDENTIFIED FEMALE: The match now stands at one game apiece.
FOO: Now, the match was best of six games, with Kasparov eventually losing the whole thing, but the turning point was when he forfeited that game. So since 1997, people have always speculated – what happened in game two? Did he quit because the computer was really so much smarter than he was? Then recently this book by Nate Silver came out called “The Signal And The Noise.” In it Murray Campbell, one of the engineers who created Deep Blue and who was at the match, comes out and says that he thinks he knows what really happened, and he says it starts in the first game – the game Kasparov won.
MURRAY CAMPBELL: Near the end of game one Kasparov had reached a very strong position. It was clear to any chess expert in the audience that Deep Blue was going to lose in the long run.
FOO: But here’s where it’s interesting. At the end of the game Deep Blue did something weird – it committed suicide.
CAMPBELL: Deep Blue was calculating a particular move that it could make that would prolong the game as long as possible. And then at the last second, it switched to a completely different move and played it.
UNIDENTIFIED MALE #2: Rook to D1.
CAMPBELL: And this particular move was really bad, and so it caused us to give up the game right away.
FOO: This really bad move confused Kasparov. Murray says he heard Kasparov’s team stayed up that night trying to analyze the logic behind that move – what it meant. The only thing was – there was no logic.
CAMPBELL: The more obvious explanation is that there was a bug.
UNIDENTIFIED MALE #2: Uh-oh.
FOO: A glitch – the kind of plot twist only a nerd could love.
CAMPBELL: Due to a bug in the program, unfortunately, it had played a random move.
FOO: But Kasparov didn’t know that, and Murray guesses that Kasparov was so caught up thinking the machine saw something that he didn’t that he lost it, and the whole rest of the match was a landslide.
CAMPBELL: My theory is that Kasparov might have seen the drawing opportunity but didn’t, because he was overestimating Deep Blue’s capability and assuming that it was incapable of making a mistake that would allow a draw. Deep Blue was very strong but wasn’t that strong. And I don’t know if this is true or not – I think we’ll never know unless Kasparov says so himself, but you probably won’t get to talk to him because he doesn’t like to talk about the subject.
FOO: Yeah, Kasparov spent years suggesting that IBM cheated, and he hasn’t really talked about the game for many years – until now.
MIG GREENGARD: You have to understand, he’s a little frustrated talking about this stuff over and over again sometimes.
FOO: That’s Mig Greengard. He’s been Kasparov’s aide, publicist and confidant for 14 years. And he’s here to speak on Kasparov’s behalf.
GREENGARD: He’s authorized me to talk with you about it. I talked with Garry about it…
FOO: It being the glitch.
GREENGARD: …And what he said to me – he said it’s ridiculous, that move had no impact on his subsequent play – and had no impact on him – that’s it, move on. So that’s all, really, that I can go with – it comes from the horse’s mouth.
FOO: So maybe Murray is wrong about the glitch, but, Mig says, he’s not wrong about Kasparov having a sort of mental breakdown – it just happened a little later. Mig told me that Kasparov was used to playing against computers. He thought he had them all figured out. Kasparov had certain traps that he would set, lures for computers, and computers would always fall for them. So in game two, Kasparov set his trap and waited.
GREENGARD: Because he had these assumptions that of course being a computer, it’s probably going to play this, this and this.
FOO: But it didn’t – it didn’t take the bait.
UNIDENTIFIED MAN #2: I see what you’re up to.
GREENGARD: It played something else.
FOO: Something good.
GREENGARD: Something that not only is not the predicted computer move – but a very, very strong move.
FOO: So you’re saying that this is the moment where basically he was psyched out.
GREENGARD: Right. It was just very – I think a very confusing, very disorienting experience to have to then sit down at the board not really knowing what you’re facing. Can I still try to trick it? Does it still play like a computer? Does it make mistakes at all? It was psychologically damaging to Garry in that he realized this was a whole new animal.
FOO: And then after that really awesome move, Deep Blue actually makes another bad move.
UNIDENTIFIED MALE #2: I guess I’ll play this.
FOO: This bad move is the one that allows Kasparov to tie. But Kasparov is too convinced he’s going to lose to see the fault.
GREENGARD: Like, well, no way the computer would allow that – that can’t be there. Whereas against a human you think, why not, maybe he made a mistake in his calculations – I’ll give it a shot. Against the computer – the computer gets the benefit of the doubt. How could something play like God, then play like an idiot in the same game?
FOO: In a way that’s like a total machine mistake though, right? Because since the machine doesn’t have a specific style or personality, each different move that it makes could be brilliant or idiotic.
GREENGARD: Sure, sure – of course when he resigned he didn’t know any of this – which itself was demoralizing and humiliating.
FOO: So essentially what Mig’s saying is that Deep Blue wasn’t necessarily as smart as we all thought. Deep Blue didn’t have this magnificent triumph over Kasparov; it was more that Kasparov forced himself to fail.
GREENGARD: It actually turned out to be a bit of a red herring as far as artificial intelligence goes. It turned out it didn’t have to emulate human thought to beat the world champion. It didn’t even have to play great chess, but it mostly revealed that humans aren’t perfect – humans make mistakes. It certainly – it turned out to be less complicated than we’d hoped. Deep Blue could calculate 200 million possible moves per second, but it was Kasparov who was overthinking it.
WASHINGTON: Thanks so much to Mig and Murray for helping us out on that piece. And of course, you’ve got to check out the almighty Nate Silver’s book “The Signal And The Noise.” And yes, that piece was produced by Stephanie Foo. We’ve got issues against the machines today on SNAP. And when we return, the man tries to corrupt me with all the free food I can stuff into my mouth. And we’re going to illegally destroy private property just because we can. On SNAP JUDGMENT the “Rage Against The Machine” episode continues. Rock on and stay tuned.
Copyright © 2014 NPR. All rights reserved. No quotes from the materials contained herein may be used in any media without attribution to NPR.

Amazing Game: Kasparov’s quickest defeat: IBM’s Deeper Blue (Computer) vs Garry Kasparov 1997

Bob Dylan – Tangled Up In Blue

Bob Dylan – Tangled Up In Blue – Live Oslo 2013

The Chess Detective

The US Open begins in a few days, which means the chess politicos are packing their bags, getting prepared to travel to Orlando to do their “moving” & “shaking.” For that reason I have decided to post some thoughts, and pose some questions, for the “pooh-bahs” and I need to do it now because once they arrive there will be no time for them to read and thoughtfully consider anything because they will be busy “schmoozing.”
Like many others I read with interest the June “Chess Life” cover article by Howard Goldowsky, “How To Catch A Chess Cheater.” I clicked on the links provided and read everything on the blog IM Ken Regan shares with R. J. Lipton, a Professor of Computer Science at Georgia Tech. Since the article appeared I have invested a considerable amount of time reading, and cogitating, about the issue of cheating at chess by using a program. Most people will not do this, and most readers may not have the time to read all of this long post, so I will give my conclusions up front in the hope it will spur some, especially those people in power who must confront one of the major issues facing the Royal game, to read on and learn what brought me to my conclusions.
The article was published in order to allay the fears and suspicions of the chess playing public. I am reminded here of the infamous statement by Secretary of State Al Haig following the assassination attempt on Ronald Reagan.
Al Haig asserted before reporters “I am in control here” as a result of Reagan’s hospitalization. The trouble was that he was not in control, according to the line of succession in the 25th Amendment of the US Constitution. (https://www.youtube.com/watch?v=yAhNzUbGVAA).

The incorrect statement was made to reassure We The People, as was the “official” announcement that Ronald Raygun had removed his oxygen mask and quipped to his doctors, “I hope you are all Republicans.” This is now written as “history.” The thing is that “Rawhide,” the name given to RR by the SS, had suffered a “sucking chest wound,” and no one, not even the “Gipper,” is able to talk after suffering such a wound. (https://en.wikipedia.org/wiki/Attempted_assassination_of_Ronald_Reagan)

Like the handlers of the wounded POTUS, the Chess Life article is an attempt by the powers that be to tell chess players the Chess Detective is on hand, so, “Don’t Worry, Be Happy.” (https://www.youtube.com/watch?v=d-diB65scQU)

How can one be happy, and not worry, when the Chess Detective, IM Ken Regan, says this: “An isolated move is almost uncatchable using my regular methods.”

The man with whom the Chess Detective shares a blog, Richard J. Lipton, writes, “How should we rate how well a player matches the chess engine as a method to detect cheating?
The short answer is very carefully.
Ken has spend (sic) years working on the right scoring method given these issues and others. I believe that his method of scoring is powerful, but is probably not the final answer to this vexing question.”

The only problem is that the Chess Detective has spent years using what is now obviously antiquated programs. Sorry Fritz, but your program was passed by those of Houdini, Komodo, and Stockfish, the top three chess programs in reverse order, quite some time ago. How is it possible for even the Chess Detective to discern possible program generated moves while using an inferior program?

Mr. Lipton continues, “The problem is what strategy might a cheater use? A naïve strategy is to always select the top ranked move, i.e. the move with the largest value. This strategy would be easy for Ken to detect. A superior strategy might be to select moves based on their values: higher values are selected more frequently. This clearly would be more difficult for Ken to detect, since it is randomized.”
“Another twist is the cheater could use a cutoff method. If several moves are above a value, then these could be selected with equal probability.”
“I could go on, but the key is that Ken is not able to assume that the cheater is using a known strategy. This makes the detection of cheating much harder, more interesting, and a still open problem. It is essentially a kind of two player game. Not a game of chess, but a game of the cheater against the detector; some player against Ken’s program.”

There you have it: the man with whom the Chess Detective shares a blog has no faith in the methods used by the Chess Detective. What follows is the post that has consumed much of my time for the last month or so.

The Chess Detective

An interview between Scott Simon of NPR and IM Ken Regan begins, “Ken Regan is a kind of chess detective. He’s a computer scientist and an international chess master, who played with the likes of Bobby Fischer as a kid. Which gives him particular skills to help recognize cheating in chess, which, he says, is becoming more common. Ken Regan has created a new algorithm to help detect chess cheating. He’s profiled this month in “U.S. Chess” magazine and joins us now from Buffalo. Thanks so much for being with us.” (http://www.npr.org/2014/06/21/324222845/how-to-catch-a-chess-cheater)

“SIMON: So how does somebody cheat in chess?
REGAN: The most common way is having the game on your smart phone or handheld device and going into the bathroom surreptitiously to check it.
SIMON: So people are consulting their smart phones, because there are algorithms that will tell them what the propitious next move is?
REGAN: Yes. There are chess engines that are very strong – stronger than any human player, apparently even running on the reduced hardware of a smart phone.
SIMON: Well, what are the odds of somebody being falsely accused?
REGAN: I deal with accusations, whispers, public statements, grouses that people make. And, usually, my model shows, no, this play really was within expectation. The other side is, yes, it’s a great danger that the statistics might falsely accuse someone. As a failsafe, I have taken data – many millions of pages of data from the entire history of chess, including all the performances by Bobby Fischer and Garry Kasparov. So I have an idea of the distribution of what happens by nature.”
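Regan’s “distribution of what happens by nature” amounts to a statistical baseline: how often ordinary human play agrees with the engine, and how much that rate naturally varies. A minimal sketch of the idea follows; the function, its parameters, and every number in it are illustrative assumptions of mine, not Regan’s actual model.

```python
from math import sqrt

def z_score(matches, moves, baseline_rate, baseline_sd_per_move):
    """How far a player's engine-match rate sits above a historical
    human baseline, measured in standard deviations. The baseline rate
    and per-move spread would come from a reference set of human games;
    the values used below are made up for illustration."""
    observed = matches / moves
    sd = baseline_sd_per_move / sqrt(moves)  # spread shrinks as the sample grows
    return (observed - baseline_rate) / sd

# A player matching the engine on 160 of 200 moves, against an assumed
# 55% human baseline, lands about 7 standard deviations out:
print(round(z_score(160, 200, 0.55, 0.5), 2))  # 7.07
```

The design point is the one Regan makes: no single game convicts anyone, but a match rate many standard deviations beyond what the whole history of human play exhibits is the kind of signal his baseline exists to expose.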

An article about a name from the past, “Seven things you should know about Alan Trefler” By Michael B. Farrell (http://www.bostonglobe.com/business/2014/07/05/seven-things-you-should-know-about-alan-trefler-founder-pegasystems/ZQULf5r9uUFkZZytsQB3WP/story.html), brought back memories of the man, now a billionaire, who, as an expert, was co-champion of the 1975 World Open. What would happen today if an expert did the same?

The cover story of the June 2014 Chess Life, “How To Catch A Chess Cheater: Ken Regan Finds Moves Out of Mind,” by Howard Goldowsky, was deemed so important one finds this preface, “The following is our June 2014 Chess Life cover story. Normally this would be behind our pay wall, but we feel this article about combating cheating in chess carries international importance.
This subject has profound implications for the tournament scene so we are making it available to all who are interested in fighting the good fight.”

~Daniel Lucas, Chess Life editor (http://www.uschess.org/content/view/12677/422/).

From the article, “According to Regan, since 2006 there has been a dramatic increase in the number of worldwide cheating cases. Today the incident rate approaches roughly one case per month, of which usually half involve teenagers. The current anti-cheating regulations of the world chess federation (FIDE) are too outdated to include guidance about disciplining illegal computer assistance, so Regan himself monitors most major events in real-time, including open events, and when a tournament director becomes suspicious for one reason or another and wants to take action, Regan is the first man to get a call.”
Dr. Regan, a deeply religious man, says, “Social networking theory is interesting. Cheating is about how often coincidence arises in the chess world.”
“Regan clicks a few times on his mouse and then turns his monitor so I can view his test results from the German Bundesliga. His face turns to disgust. “Again, there’s no physical evidence, no behavioral evidence,” he says. “I’m just seeing the numbers. I’ll tell you, people are doing it.”
Goldowsky writes, “Statistical evidence is immune to concealment. No matter how clever a cheater is in communicating with collaborators, no matter how small the wireless communications device, the actual moves produced by a cheater cannot be hidden.”

This is where Alan Trefler enters the conversation. Goldowsky goes on to write, “Nevertheless, non-cheating outliers happen from time to time, the inevitable false positives.”

“Outliers happen…” How would you like to tie for first as an expert today and have your integrity questioned in addition to having to strip naked and “bend over and spread ’em?”
The article moves on to a detailed analysis of how the “Chess Detective” determines whether or not cheating has occurred.

“Faced with a complex calculation, a player could sneak their smartphone into the bathroom for one move and cheat for only a single critical position. Former World Champion Viswanathan Anand said that one bit per game, one yes-no answer about whether a sacrifice is sound, could be worth 150 rating points.
“I think this is a reliable estimate,” says Regan. “An isolated move is almost uncatchable using my regular methods.”
But selective-move cheaters would be doing it on critical moves, and Regan has untested tricks for these cases. “If you’re given even just a few moves, where each time there are, say, four equal choices, then the probabilities of matching these moves become statistically significant. Another way is for an arbiter to give me a game and tell me how many suspect moves, and then I’ll try to tell him which moves, like a police lineup. We have to know which moves to look at, however, and, importantly—this is the vital part— there has to be a criterion for identifying these moves independent of the fact they match.”

What is the “criterion”? It is not mentioned.
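Regan’s four-equal-choices remark can at least be made concrete with a little arithmetic. The setup (a handful of positions, each with four equally plausible moves) is taken from his quote; the function `match_probability` and its defaults are my own illustration of the binomial reasoning, not part of his published method.

```python
from math import comb

def match_probability(n_moves, n_matched, choices=4):
    """Probability of matching at least n_matched of n_moves engine
    choices by pure chance, when each position offers `choices` equally
    plausible moves. This is a plain binomial tail with p = 1/choices."""
    p = 1.0 / choices
    return sum(comb(n_moves, k) * p**k * (1 - p)**(n_moves - k)
               for k in range(n_matched, n_moves + 1))

# Matching all of just five four-way choices by luck is already
# below one in a thousand: (1/4)^5
print(match_probability(5, 5))
```

This is why even a selective-move cheater leaves a statistical trace: a few agreements on genuinely ambiguous moves compound quickly into odds no honest coincidence explains, provided, as Regan stresses, the suspect moves are identified independently of the fact that they match.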

Dr. Regan is co-author, with Richard J. Lipton, of the blog “Gödel’s Lost Letter and P=NP.” I went to the blog and found this recent post by “rjlipton” (http://rjlipton.wordpress.com/2014/06/18/the-problem-of-catching-chess-cheaters/).

R.J. writes, “The easy (sic) of cheating is a major issue for organized chess. The number of cases in professional tournament play is, according to Ken, roughly one per month—one case happen at a tournament in Romania just last month. Ken knows this because he routinely runs his detection methods, more on those shortly, on most major tournaments.”
“How should we rate how well a player matches the chess engine as a method to detect cheating?
The short answer is very carefully.
Ken has spend (sic) years working on the right scoring method given these issues and others. I believe that his method of scoring is powerful, but is probably not the final answer to this vexing question.”
“The problem is what strategy might a cheater use? A naïve strategy is to always select the top ranked move, i.e. the move with the largest value. This strategy would be easy for Ken to detect. A superior strategy might be to select moves based on their values: higher values are selected more frequently. This clearly would be more difficult for Ken to detect, since it is randomized.”
“Another twist is the cheater could use a cutoff method. If several moves are above a value, then these could be selected with equal probability.”
“I could go on, but the key is that Ken is not able to assume that the cheater is using a known strategy. This makes the detection of cheating much harder, more interesting, and a still open problem. It is essentially a kind of two player game. Not a game of chess, but a game of the cheater against the detector; some player against Ken’s program.”
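Lipton’s three cheater strategies (always the top move, value-weighted choice, and a cutoff pool) are easy to simulate. The sketch below is an illustration only: the strategies come from his description, but the evaluation numbers, function name, and simulation are assumptions of mine.

```python
import random

def cheater_move(values, strategy, cutoff=0.3):
    """Pick a move index given engine evaluations `values` (higher is
    better), under the three strategies Lipton describes."""
    if strategy == "top":        # naive: always the engine's first choice
        return max(range(len(values)), key=values.__getitem__)
    if strategy == "weighted":   # better moves are chosen more often
        return random.choices(range(len(values)), weights=values)[0]
    if strategy == "cutoff":     # any move within `cutoff` of the best, equally
        best = max(values)
        pool = [i for i, v in enumerate(values) if v >= best - cutoff]
        return random.choice(pool)
    raise ValueError(strategy)

# How often each strategy matches the engine's top choice, over simulated
# positions whose evaluations are sorted so index 0 is the engine's choice:
random.seed(1)
for strategy in ("top", "weighted", "cutoff"):
    hits = sum(
        cheater_move(sorted((random.uniform(0, 1) for _ in range(4)),
                            reverse=True), strategy) == 0
        for _ in range(5000))
    print(strategy, hits / 5000)
```

Running this makes Lipton’s point visible: the naive strategy matches the engine’s first choice every time and is trivial to flag, while the randomized strategies match far less often and are correspondingly harder to distinguish from strong human play.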

This article provides links to several other posts concerning the subject of cheating at chess and I read each and every one. Here are a few excerpts:

“Chess is a game of complete information. There are no cards to hide that might be palmed, switched, or played illegally, no dice that could be loaded. So how is it possible to cheat at chess? Alas the complete information can be conveyed to a computer, and thanks to the exponential increase in computer power and smarter chess-playing algorithms, consumer hardware can play better than any human. Hence cheating in chess is possible, and unfortunately this year it has seemed to become common.” (http://rjlipton.wordpress.com/2013/09/17/littlewoods-law)

“The fear of players being fingered this way is remarked by Dylan McClain in today’s New York Times column:
“If every out-of-the-ordinary performance is questioned, bad feelings could permanently mar the way professional players approach chess.”

The threat can be stronger than its execution.

A question was posed to Dr. Regan:
“I think there is a bigger picture here. Why even play a strategy game that a computer, without any information or connectivity advantage, will win.”
IM Regan answered:
“Because it’s still fun, has a great history, and has more public participation all over the world than any time previously. Computers are still a step behind the best humans at the Japanese form of chess (Shogi), and human supremacy at Go is apparently not threatened in the near future. My best effort at a more computer-resistant “evolution” of Western chess is here.”
From: “The Crown Game Affair” by KWRegan January 13, 2013 (http://rjlipton.wordpress.com/2013/01/13/the-crown-game-affair/)

“I have, however, been even busier with a welter of actual cases, reporting on four to the full committee on Thursday. One concerned accusations made in public last week by Uzbek grandmaster Anton Filippov about the second-place finisher in a World Cup regional qualifier he won in Kyrgyzstan last month. My results do not support his allegations. Our committee is equally concerned about due-diligence requirements for complaints and curbing careless allegations, such as two against Austrian players in May’s European Individual Championship. A second connects to our deliberations on the highly sensitive matter of searching players, as was done also to Borislav Ivanov during the Zadar Open tournament last December. A third is a private case where I find similar odds as with Ivanov, but the fourth raises the fixing of an entire tournament, and I report it here.
Add to this a teen caught consulting an Android chess app in a toilet cubicle in April and a 12-year-old caught reading his phone in June, plus some cases I’ve heard only second-hand, and it is all scary and sad.
The Don Cup 2010 International was held three years ago in Azov, Russia, as a 12-player round-robin. The average Elo rating of 2395 made it a “Category 6” event with 7 points from 11 games needed for the IM norm, 8.5 for the GM norm. It was prominent enough to have its 66 games published in the weekly TWIC roundup, and they are also downloadable from FIDE’s own website. Half the field scored 7 or higher, while two tailenders lost all their games except for drawing each other and one other draw, while another beat only them and had another draw, losing eight games.
My informant suspected various kinds of “sandbagging”: throwing games in the current event, or having an artificially-inflated Elo rating from previous fixed events, so as to bring up the category. He noted some of the tailenders now have ratings 300 points below what they were then.
In this case I did not have to wait long for more-than-probability. Another member of our committee noticed by searching his million-game database that:
Six of the sixty-six games are move-by-move identical with games played in the 2008 World Computer Chess Championship.
For example, three games given as won by one player are identical with Rybka’s 28-move win over the program Jonny and two losses in 50 and 44 moves by the program Falcon to Sjeng and HIARCS, except one move is missing from the last. One of his victims has three lost games, while another player has two wins and another two losses. Indeed the six games are curiously close to an all-play-all cluster.”
From: “Thirteen Sigma” by KWRegan, July 27, 2013 (http://rjlipton.wordpress.com/2013/07/27/thirteen-sigma/)