While researching the Mechanics’ Institute Chess Club Newsletter tonight (it is ten pm, EST) I found this wonderful picture of a young Ruth Haring in #834 dated July 6, 2018:
One of my favorite Chess places on the internet is the Mechanics’ Institute Chess Club Newsletter, by IM John Donaldson. If you are new to Chess and unaware, the Mechanics’ Institute is located at 57 Post Street, in San Francisco, California. The newsletter is published almost every Friday, unless IMJD, as he is known, is out of town, as in being a team captain for the US Olympiad squad. The MIN is a veritable cornucopia of Chess information, and it continues to get better and better, if that is possible. The edition this week, #809, is no exception. For example we learn, “An article at the singer Joni Mitchell’s web site mentions she polished her talent at the Checkmate coffeehouse in Detroit in the mid-1960s.” I have just finished reading, Joni: The Anthology, edited by Barney Hoskins, and the just published, Reckless Daughter: A Portrait of Joni Mitchell, by David Yaffe, awaits.
John writes, “Few have done as much as Jude Acers to promote chess in the United States the last fifty years and he is still going strong. View one of his recent interviews here.” I love the sui generis Jude the Dude! For the link to the interview you must visit the MIN.
We also learn that, “Noted book dealer National Master Fred Wilson will open his doors at his new location at 41 Union Square West, Suite 718 (at 17th Street) on December 20.” In MIN # 804 we learned that, “Fred Wilson earns National Master at 71.”(!) Way to go Fred! Congratulations on becoming a NM while giving hope to all Seniors, and on the opening of your new location. There is also a nice picture of Fred included, along with many other pictures, some in color, which has really added pizazz to the venerable MIN!
There is more, much more, but I want to focus on: 2) Top Individual Olympiad Performers. John writes: “Outside of the World Championship the biennial Chess Olympiad is the biggest stage in chess. Although it is primarily a team event, individual accomplishment is noted, and no player better represented his country than the late Tigran Petrosian. The former World Champion scored 103 points in 129 games (79.8 percent) and lost only one individual game (on time) in a drawn rook ending to Robert Hubner in the 1972 Olympiad.
Garry Kasparov is not far behind with 64½ points in 82 games (78.7 percent), and unlike Petrosian his teams took gold in every Olympiad he played. Garry won gold but he did lose three games.
Two of the players who defeated Kasparov in Olympiads were present during the Champions Showdown in St. Louis last month: Yasser Seirawan and Veselin Topalov. The latter had an interesting story to tell about the third player to defeat Garry—Bulgarian Grandmaster Krum Georgiev.
According to Topalov, one could not accuse his countryman of being one of Caissa’s most devoted servants. Lazy is the word he used to describe Krum, who loved to play blitz rather than engage in serious study. However it was precisely this passion for rapid transit which helped him to defeat Garry.
Before the Malta Olympiad Georgiev was losing regularly in five-minute chess to someone Veselin referred to as a total patzer. He got so frustrated losing with White in the same variation, over and over again, that he analyzed the line in the 6.Bg5 Najdorf inside and out and came up with some interesting ideas. You guessed it—Garry played right into Georgiev’s preparation. Who says there is no luck in chess.”
The game is given so click on over to the MIN and play over a Kasparov loss in which he let the Najdorf down. (http://www.chessclub.org/news.php)
I want to focus on the part about there being no luck in Chess. After reading this, something went off in my brain about “Chess” & “Luck.” I stopped reading and racked my aging brain. Unfortunately, I could not recall where I had seen it, but it definitely registered. After a while I finished reading the MIN and took the dog for a walk, then returned to rest and take a nap. I could not sleep because my brain was still working, subconsciously, I suppose, on why “Chess” & “Luck” seemed to have so much meaning to me…It came to me in the shower. I have been a fan of Baseball since the age of nine, and I am also a Sabermetrician.
Chess and luck
In previous posts, I argued that there’s luck in golf, and that there’s luck in foul shooting in basketball.
But what about games of pure mental performance, like chess? Is there luck involved in chess? Can you win a chess game because you were lucky?
Start by thinking about a college exam. There’s definitely luck there. Hardly anybody has perfect mastery. A student is going to be stronger in some parts of the course material, and weaker in other parts.
Perhaps the professor has a list of 200 questions, and he randomly picks 50 of them for the exam. If those happen to be more weighted to the stuff you’re weak in, you’ll do worse.
Suppose you know 80 percent of the material, in the sense that, on any given question, you have an 80 percent chance of getting the right answer. On average, you’ll score 80 percent, or 40 out of 50. But, depending on which questions the professor picks, your grade will vary, possibly by a lot.
The standard deviation of your score is going to be 5.6 percentage points. That means the 95 percent confidence interval for your score is wide, stretching from 69 to 91.
And, if you’re comparing two students, 2 SD of the difference in their scores is even higher — 16 points. So if one student scores 80, and another student scores 65, you cannot conclude, with statistical significance, that the first student is better than the second!
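The arithmetic behind those figures is easy to check with a short simulation. Here is a quick Python sketch (my construction, not the author’s; the 80-percent student and the 50-question exam come straight from the text):

```python
import math
import random

def simulate_exam(p_known=0.8, n_questions=50, trials=20_000, seed=1):
    """Simulate exam scores for a student who answers each question
    correctly with probability p_known, independently."""
    rng = random.Random(seed)
    scores = [
        sum(rng.random() < p_known for _ in range(n_questions))
        for _ in range(trials)
    ]
    mean = sum(scores) / trials
    var = sum((s - mean) ** 2 for s in scores) / trials
    return mean, math.sqrt(var)

# Theoretical values: mean = 40 out of 50, and the SD of the
# percentage score is sqrt(0.8 * 0.2 / 50), about 5.66 points.
mean, sd = simulate_exam()
print(f"mean score: {mean:.1f}/50, SD: {sd / 50 * 100:.1f} points")

# Two independent students: variances add, so the SD of the
# difference is sqrt(2) times larger, and 2 SD is 16 points.
sd_diff_pct = math.sqrt(2) * math.sqrt(0.8 * 0.2 / 50) * 100
print(f"2 SD of the difference between two students: {2 * sd_diff_pct:.0f} points")
```

The simulation lands on the quoted 5.6-point SD, and the sqrt(2) scaling of the difference between two students is what produces the 16-point figure in the text.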
So, in a sense, exam writing is like coin tossing. You study as hard as you can to learn as much as you can — that is, to build yourself a coin that lands heads (right answer) as often as possible. Then, you walk in to the exam room, and flip the coin you’ve built, 50 times.
It’s similar for chess.
Every game of chess is different. After a few moves, even the most experienced grandmasters are probably looking at board positions they’ve never seen before. In these situations, different mental tasks become important. Some positions require you to look ahead many moves, while others require you to look ahead only a few. Some require you to exploit or defend a positional advantage, and some present you with differences in material. In some, you’re attacking, and in others, you’re defending.
That’s how it’s like an exam. If a game is 40 moves each, it’s like you’re sitting down at an exam where you’re going to have 40 questions, one at a time, but you don’t know what they are. Except for the first few moves, you’re looking at a board position you’ve literally never seen before. If it works out that the 40 board positions are the kind where you’re stronger, you might find them easy, and do well. If the 40 positions are “hard” for you — that is, if they happen to be types of positions where you’re weaker — you won’t do as well.
And, even if they’re positions where you’re strong, there’s luck involved: the move that looks the best might not truly *be* the best. For instance, it might be true that a certain class of move — say, “forking the opponent’s rook and bishop on the far side of the board, when the overall position looks roughly similar to this one” — is a good move 98 percent of the time. But, maybe in this case, because a certain pawn is on a5 instead of a4, it actually turns out to be a weaker move. Well, nobody can know the game down to that detail; there are on the order of 10^43 different board positions.
The best you can do is see that it *seems* to be a good move, that in situations that look similar to you, it would work out more often than not. But you’ll never know whether it’s 90 percent or 98 percent, and you won’t know whether this is one of the exceptions.
It’s like, suppose I ask you to write down a 14-digit number (that doesn’t start with zero), and, if it’s prime, I’ll give you $20. You have three minutes, and you don’t have a calculator, or extra paper. What’s your strategy? Well, if you know something about math, you’ll know you have to write an odd number. You’ll know it can’t end in 5. You might know enough to make sure the digits don’t add up to a multiple of 3.
After that, you just have to hope your number is prime. It’s luck.
But, if you’re a master prime finder … you can do better. You can also do a quick check to make sure it’s not divisible by 11. And, if you’re a grandmaster, you might have learned to do a test for divisibility by 7, 13, 17, and 19, and even further. In fact, your grandmaster rating might have a lot to do with how many of those extra tests you’re able to do in your head in those three minutes.
But, even if you manage to get through a whole bunch of tests, you still have to be lucky enough to have written a prime, instead of a number that turns out to be divisible by, say, 277, which you didn’t have time to test for.
A grandmaster has a better chance of outpriming a lesser player, because he’s able to eliminate more bad moves. But, there’s still substantial luck in whether or not he wins the $20, or even whether he beats an opponent in a prime-guessing tournament.
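The prime-guessing game above is easy to act out in code. Below is a Python illustration (my construction, not from the post): a deterministic Miller–Rabin test stands in for the $20 judge, and the cheap divisibility filters are the grandmaster’s head-checks. The witness set used is known to be exact for all numbers below 3.4 × 10^14, which covers every 14-digit number.

```python
import random

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin: the witness set {2,...,17} is
    exact for every n below 3.4 * 10**14."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def passes_cheap_tests(n: int) -> bool:
    """The 'grandmaster' filters from the text: odd, not ending in 5,
    digit-sum not a multiple of 3, and so on, expressed as
    divisibility checks for every prime up to 19."""
    return all(n % p for p in (2, 3, 5, 7, 11, 13, 17, 19))

rng = random.Random(0)
candidates = [rng.randrange(10**13, 10**14) for _ in range(20_000)]
patzer = [n for n in candidates if n % 2]                # odd only
master = [n for n in candidates if passes_cheap_tests(n)]

for name, pool in (("patzer", patzer), ("master", master)):
    hit = sum(map(is_prime, pool)) / len(pool)
    print(f"{name}: {hit:.1%} of surviving candidates are prime")
```

Near 10^14 roughly 3 percent of all integers are prime; writing an odd number doubles the hit rate, and screening out every prime factor up to 19 lifts it to somewhere around one in five or six. Better odds, but whether any particular surviving number is prime is still luck.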
On an old thread over at Tango’s blog, someone pointed this out: if you get two chess players of exactly equal skill, it’s 100 percent a matter of luck which one wins. That’s got to be true, right?
Well, maybe you’re not sure about “exactly equal skill.” You figure, it’s impossible to be *exactly* equal, so the guy who won was probably better! But, then, if you like, assume the players are exact clones of each other. If that still doesn’t work, imagine that they’re two computers, programmed identically.
Suppose the computers aren’t doing anything random inside their CPUs at all — they have a precise, deterministic algorithm for what move to make. How, then, can you say the result is random?
Well, it’s not random in the sense that it’s made of the ether of pure, abstract probability, but it’s random in the practical sense, the sense that the algorithm is complex enough that humans can’t predict the outcome. It’s random in the same way the second decimal of tomorrow’s Dow Jones average is random. Almost all computer randomization is deterministic — but not patterned or predictable. The winner of the computer chess game is random in the same way the hands dealt in online poker are random.
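That practical sense of “random” is easy to demonstrate. Here is a minimal Python sketch (my illustration, not the author’s): two “computers” seeded identically produce identical output every run, yet the stream looks random to anyone who doesn’t hold the seed and the algorithm, which is exactly how mainstream software randomization works.

```python
import random

# Two 'computers' running the identical deterministic algorithm:
# same seed in, same sequence of 'moves' out, every single run.
a = random.Random(2017)
b = random.Random(2017)

moves_a = [a.randrange(64) for _ in range(12)]
moves_b = [b.randrange(64) for _ in range(12)]
assert moves_a == moves_b  # fully deterministic

# Yet without the seed and algorithm, the sequence is unpredictable
# in the practical sense the text describes: deterministic, but not
# patterned in any way an observer could exploit.
print(moves_a)
```

The same property underlies the online-poker comparison: the deal is computed, not conjured, but no player at the table can do better than treating it as chance.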
In fact, I bet computer chess would make a fine random number generator. Take two computers, give them the same algorithm, which has to include something where the computer “learns” from past games (otherwise, you’ll just get the same positions over and over). Have them play a few trillion games, alternating black and white, to learn as much as they can. Then, play a tournament of an even number of games (so both sides can play white an equal number of times). If A wins, your random digit is a “1”. If B wins, your random digit is a “0”.
It’s not a *practical* random number generator, but I bet it would work. And it’s “random” in the sense that, no human being could predict the outcome in advance any faster than actually running the same algorithm himself.
I read the March 27, 2015 edition of the Mechanics’ Institute Chess Club Newsletter, #703, by John Donaldson, the day it was published (http://www.chessclub.org/news.php?n=703). Of particular interest was this, “The Chess Club and Scholastic Center of Saint Louis has done the chess world a favor by conducting a literature review to answer the question, “Does chess provide educational benefits?” (http://news.stlpublicradio.org/post/chess-literature-review-gives-base-claim-chess-educational-benefits)
I clicked on the link to find the headline: On Chess: Literature Review Gives Base To Claim Of Chess’ Educational Benefits. The article was written by Brian Jerauld, who “is the 2014 Chess Journalist of the Year, and the communications specialist for the Chess Club and Scholastic Center of St. Louis. He is a 2001 graduate of the University of Missouri-Columbia School of Journalism and has more than a decade of experience writing about boats, sports and other ways to relax.” It was dated Feb 12, 2015. Since I had posted a series of articles on this very subject during the latter part of February I could not help but wonder how this had been missed. Further pondering brought forth the question of why no one had commented on this study on the blog or on the USCF forum thread concerning my blog posts on the subject. (http://www.uschess.org/forums/viewtopic.php?f=23&t=21185&sid=95eabe87ba3a5d3ce890f4237d794c88)
Mr. Jerauld writes, “By now, the claim that chess comes packaged with hidden educational perks is a hype certainly heard around the world. And how could it not be believed? Just find some random piece of research that supports such big talk, tie it together with obvious, awesome-sounding hyperbole — like “decision-making skills” and “higher-order thinking” — and boom: You’ve got yourself some Grade-A propaganda.
Over the years, all this talk has given a rather rosy-colored narrative that always ends in support of chess curriculum implementation. But recently, the Chess Club and Scholastic Center of Saint Louis, whose scholastic program branched out to more than 3,000 students and hundreds of area schools last year, dropped the rhetoric and set out to discover if chess actually has an effect on its students.
Empirically, can all the talk walk the walk?”
Good question. Brian continues, “A year ago, the CCSCSL set out to apply a rigorous and critical eye of existing chess studies by commissioning Basis Policy Research, an independent research firm that focuses on K-12 educational exploration. The goal was to survey the entire landscape of existing chess research, digging back through more than four decades of random studies, and compile a literature review of what was actually known about chess’ impact on student outcomes.”
This is exactly the kind of study I wrote about in my post of February 27, 2015, Does Playing Chess Make You Smarter? (https://xpertchesslessons.wordpress.com/2015/02/27/does-playing-chess-make-you-smarter/) The study mentioned, Educational benefits of chess instruction: A critical review, by Fernand Gobet & Guillermo Campitelli, (http://www.brunel.ac.uk/~hsstffg/preprints/chess_and_education.PDF) is dated now, having been published a decade ago, so I looked forward with interest to new literature on the subject.

In trying to be scientifically objective, and also being a chess player, I relish a refutation. If new information refutes previous thinking, humans advance. For example, studies were done in the middle of the last century on the amount of radiation harmful to humans. During the course of my life the threshold has been steadily lowered, until now it is an accepted fact that even the lowest detectable amount of radiation is deleterious to a human being. Decades ago there were many studies done on the effects of smoking cigarettes, funded by the tobacco industry, which concluded smoking caused no problems whatsoever to a human being.

Over the years I have learned how difficult it is for old(er) people to change their preconceived ideas. For example, when I was young it was an accepted fact that an ulcer was caused by stress. It is now known that most ulcers are caused by a bacterium, Helicobacter pylori. My father was unable to wrap his mind around that fact, continuing to believe stress was the cause. There are many examples in the scientific community of a scientist of stature refusing to accept evidence contrary to that on which his career was built. Change is difficult, and some people will stubbornly cling to the old ways “come hell or high water.” I try not to be one of those people. I do not care who is right, or wrong, but what is right, and what is wrong.
During the telecast of the third round of the US Championships yesterday the new study was mentioned by Jennifer Shahade when she said, “St. Louis commissioned a study that showed…they didn’t know what the study was going to say. They wanted to find out what kind of connections chess had to academic performance, and surprisingly, the main connection was math and chess.” GM Yasser Seirawan said, “Really?” Jen continued, “Yes, specifically math and chess.” To which Yasser responded, “I remember the Margolis study where it was about reading.” This comes at the 3:42:30 mark of the broadcast.
I clicked on the link and downloaded the PDF in order to read the new study, Literature Review of Chess Studies, by Anna Nicotera and David Stuit, dated November 2014. (http://saintlouischessclub.org/sites/default/files/CCSCSL%20Literature%20Review%20of%20Chess%20Studies%20-%20November%202014.pdf)
“This literature review identified 51 studies of chess. Twenty-four of the 51 studies met a set of pre-determined criteria for eligibility and were included in analyses. Results from the literature review were categorized by the quality of the study design and organized by whether the studies examined after-school or in-school chess programs. The main findings from this literature review are:
1. After-school chess programs had a positive and statistically significant impact on student mathematics outcomes.
2. In-school chess interventions had a positive and statistically significant impact on student mathematics and cognitive outcomes.
Although the findings are interesting, they do come with this caveat, “While the two primary outcomes listed above are based on studies that used rigorous research design methodologies, the results should be interpreted cautiously given the small number of eligible studies that the pooled results encompass.”
This is a caveat as huge as the Grand Canyon. It is called a “small sample size.” If a baseball player goes one for three during a single game his batting average is .333, which is outstanding. This does not mean he will finish the season with a batting average of .333. Even if the batter hits .333 for a week, or even a month, it does not mean he will finish the long season hitting .333. If a chess player wins one tournament, even a so-called “Super tournament,” it does not mean he will become World Champion. Sofia Polgar had one superlative result, with what I seem to recall was a performance rating higher than any her sister Judit obtained in any single tournament. Yet Sofia did not attain the status Judit did, because that one tournament is a limited sample size.
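The baseball version of the argument is the same binomial arithmetic as the exam example earlier. A rough Python sketch (my numbers; the normal approximation is crude at three at-bats, where an exact binomial interval would be even less flattering):

```python
import math

def batting_ci(hits: int, at_bats: int, z: float = 1.96):
    """Normal-approximation 95% confidence interval for a batting
    average. A rough sketch: for tiny samples an exact binomial
    interval would be even wider."""
    avg = hits / at_bats
    se = math.sqrt(avg * (1 - avg) / at_bats)
    return avg, max(0.0, avg - z * se), min(1.0, avg + z * se)

# One game (1-for-3) versus a full season at the same .333 clip.
for hits, ab in ((1, 3), (167, 500)):
    avg, lo, hi = batting_ci(hits, ab)
    print(f"{hits}/{ab}: avg .{round(avg * 1000):03d}, "
          f"95% CI .{round(lo * 1000):03d} to .{round(hi * 1000):03d}")
```

One game at 1-for-3 is statistically consistent with anything from a benchwarmer to a Hall of Famer; a full season of 500 at-bats pins the average down to roughly forty points either way, which is why the pooled-studies caveat matters.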
The authors studied the same studies as did Fernand Gobet and Guillermo Campitelli. The discredited Ferguson study is included among those studied, as is the Margolis study mentioned by GM Yasser Seirawan. While considering this post and thinking about IM John Donaldson, GM Yasser Seirawan, and to an extent, NM Jennifer Shahade, I kept thinking about something Upton Sinclair wrote – “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
My thoughts also keep going back to something else written by Gobet and Campitelli: “In spite of these disagreements about the nature of transfer, some results are clear. In particular, recent research into expertise has clearly indicated that, the higher the level of expertise in a domain, the more limited the transfer will be (Ericsson & Charness, 1994). Moreover, reaching a high level of skill in domains such as chess, music or mathematics requires large amounts of practice to acquire the domain specific knowledge which determines expert performance. Inevitably, the time spent in developing such skills will impair the acquisition of other skills.”
Mr. Jerauld ends his article with, “The literature review is a huge first step for the CCSCSL, acknowledgment of the research that exists and laying the groundwork for future research that may be implemented through the club’s expansive scholastic initiative. The Basis Policy Research chess literature review has kicked off a new Research Portal, meant to serve as a repository to all global research — and to keep St. Louis on yet another forefront of chess.”
If this is truly “laying the groundwork for future research that may be implemented through the club’s expansive scholastic initiative,” I would suggest not only a rigorous, controlled scientific study with a control group, but also a study of the same size on the game of Go, as a counterbalance to the chess study. This would obviously double the amount of money needed to fund these studies, but we are talking about a man who has a BILLION DOLLARS, so money is no object if Rex Sinquefield actually wants a fair and objective study, not one in which the scientists aim to please the man with the deep pockets.