Computers revolutionized chess – Magnus Carlsen wins by being human (wsj.com)
97 points by wallflower on Dec 11, 2021 | 43 comments



Archive with full text of article: https://archive.md/4e50f


It was pretty interesting to read that Carlsen figured out how to defeat the opponent's training, and that the best move from the computer's point of view wasn't necessarily the best one to play.

I hosted Garry Kasparov at Google in 2017 (his London talk is the one on YouTube, not the visit I did), and he made a point of saying that there was no way to defeat the computer now, and the best players would work with the computer rather than against it. I'm not sure this is exactly what he meant by that, though.

The article's capsule summary, "Though Kasparov argued that Deep Blue had cheated" is pretty superficial. One thing he said was that IBM did not allow him to study Deep Blue's games ahead of time, as he would for any human opponent, and IBM suddenly changed their approach from a friendly, collaborative one to being out for blood. I'd have to go back & read the book again to remind myself, but I think he was also suspicious about IBM's handling of Deep Blue during the match.


> the best players would work with the computer rather than against it

He was certainly talking about "advanced chess"[1], where computers are used to make move suggestions but the human player actually selects the move to play. This combination of human+engine is consistently stronger than the strongest chess engines on their own. This fact also demonstrates that chess engines do not simply recommend "the best move".

Since chess is not mathematically solved, it is actually meaningless to speak of "the best move", unqualified. Certain positions are solved, such that in those positions an objectively best move does provably exist, but the vast majority of the game is still mysterious.

[1] https://en.wikipedia.org/wiki/Advanced_chess


> This combination of human+engine is consistently stronger than the strongest chess engines on their own.

This was true when computers were approaching/had just met the strength of the best humans.

I'm not aware of anyone having done it seriously recently, and at this point the computers are so insanely strong relative to a human that I think the human would just be along for the ride.


The best chess engines in the world operate at a ~3500 rating level. For context, Magnus Carlsen's peak rating was 2882 (i.e. the highest rating ever achieved by a human).

Combinations of human+engine operate at a ~3800 rating level.

I take your point, and it is a good one, but you've drawn too strong a conclusion. We haven't yet reached the point where the human adds nothing to the game.


> Combinations of human+engine operate at a ~3800 rating level.

They...really don't? There aren't competitive matches between human/computer teams at classical time controls.

There are correspondence games, which is presumably what you're referring to, which are officially played by humans but with unlimited access to computers. But those are played at a time control of multiple days per move. It's true that there still seems to be some human skill involved in leading the computer, balancing the outputs of different engines, finding moves which your opponent's engine may misevaluate. A beginner with a powerful computer won't win the ICCF World Championship. But firstly this is a very niche activity compared to the one Magnus Carlsen competes at, and secondly I don't see any evidence that an unassisted engine scores only 15% against top correspondence players, which would be necessary for a 300-point disparity. At the very least I'd expect it to draw almost all its games as White.
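
For anyone who wants to sanity-check that 15% figure: under the standard logistic Elo model, a 300-point gap works out to roughly a 15% expected score for the lower-rated side. A quick sketch (the 3500/3800 numbers are just the ones claimed upthread):

    # Standard Elo expected-score formula; nothing engine-specific here.
    def expected_score(rating_a, rating_b):
        # Expected score of player A against player B under the logistic Elo model
        return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

    print(expected_score(3500, 3800))  # ~0.15, i.e. ~15% for a 300-point deficit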


Human+StackOverflow operate at a high coding level.


only when using the gossip protocol :-)


It is known.


You can't compare ELO ratings from different populations. They are like different units.

It used to be the case that the human+computer combo was better than a computer alone, but it's most likely not the case anymore. Even if it is, the difference is minuscule. It seems you're a bit behind recent developments in computer chess. It's worth catching up: modern engines have way superior understanding (compared to human GMs) of positional chess and traditionally hard openings like the King's Indian.


How tech advances. In 2017, Kasparov said that the best computers were at 3200, and he was 2850.


Do you have details of recent tournaments between centaurs and engines? Would be interesting to study.


> objectively best move does provably exist

Against a perfectly rational opponent, sure. Is it impossible for a sub-optimal move to throw a human player off their game, confuse them, or otherwise emotionally manipulate them in a way that makes it more effective than the "solved" move would be?


Of course it is possible to exploit the psychology of the opponent. Bluffing does exist in chess, and it relies on your opponent's inability to realize that you've made a sub-optimal move. For example, at the highest levels, grandmasters often attempt to increase the amount of complications in a position, which is a way of making the claim that "I can calculate more moves, more quickly, than you"; if that isn't true, then it is a bluff, which doesn't mean it won't work.

But in a solved position, the best move is irrefutable. Perfect play by both opponents in a solved game always leads to a draw (e.g. tic-tac-toe, checkers). In chess, the best move in a solved position always also anticipates every possible response by the opponent. This is the entire point of opening theory, and is why chess players speak of "punishing" their opponents; if I play an ideal move in a solved position, and you play a less-than-ideal move in response, I have already gained an advantage.


> Perfect play by both opponents in a solved game always leads to a draw

That depends on the game. The classical example of a game that isn’t is Hex (https://en.wikipedia.org/wiki/Hex_(board_game)), where the player playing first will win under perfect play (yes, it isn’t solved in general, but the board may be small enough to have been solved.)

Connect Four on the standard-sized board also is a win for the player going first (https://en.wikipedia.org/wiki/Connect_Four#Mathematical_solu...). There, we even know how to do that.

For chess, the question of whether it is a draw is an open problem. (I think way fewer people would be surprised if it were proven to be a draw than if it were proven to be a win for either player, but there's no proof either way.)


When you talk of fully solved positions, that doesn't really include openings, does it? There are no opening moves that can guarantee a victory, so there are no solved openings. Truly solved moves are all in the late game, where it becomes possible to evaluate all possible moves and know for sure what is the best course of action.

I also think that even in truly solved positions, there are cases where the known best case is a draw or even defeat, but that assumes perfect play from your opponent. If you suspect your opponent can be lured into making mistakes, sub-optimal moves on your part could lead to a victory where in perfect play the outcome would be defeat or a draw.

To take an extreme case, the various 'check mate in 3 moves' openings do work against some players, though they are very bad moves against remotely decent opponents.


> If I play an ideal move in a solved position, and you play a less-than-ideal move in response, I have already gained an advantage.

That assumes the less-than-ideal move stays within your preparation. If, on the other hand, you dismissed that move during preparation because of its sub-optimal evaluation, then I have succeeded in getting you out of prep and into a middlegame with a small disadvantage but potentially a lot of play. If I can then steer the game down a very sharp line, your small advantage means a lot less due to the dynamics of the position.


Your response doesn’t apply to truly solved positions.


It does. There are plenty of truly solved positions that very, very few humans, if any, can play optimally.

https://en.wikipedia.org/wiki/Endgame_tablebase#Applications and https://tb7.chessok.com/articles/Top8DTM_eng have some examples.

Certainly, no human player knows how to play all of these.
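
If anyone wants to poke at those solved positions themselves, the python-chess library can probe Syzygy tablebases. A minimal sketch, assuming you've downloaded the tablebase files into a local directory (the "./syzygy" path here is made up):

    import chess
    import chess.syzygy

    # Sketch of probing a small position against local Syzygy tablebase files.
    # "./syzygy" is a placeholder directory; the tablebase files have to be
    # downloaded separately and only cover positions with few enough pieces.
    board = chess.Board("4k3/8/4K3/8/8/8/8/3Q4 w - - 0 1")  # KQ vs K

    with chess.syzygy.open_tablebase("./syzygy") as tablebase:
        print(tablebase.probe_wdl(board))  # 2 = win for the side to move
        print(tablebase.probe_dtz(board))  # distance to a zeroing (capture/pawn) move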


You talked about the opening but now you’re linking to endgame tablebases? Not at all comparable.


> Perfect play by both opponents in a solved game always leads to a draw

That assumes that both players are equal. In many games there is a benefit for one of the players. In such a game the effect of perfect play by one player is to win Inna's fee turns as possible, while for the other player the perfect play only prolongs the game. For instance in "connect four" the starting player should begin in center to enforce win.


> Inna's fee (from "in as few" I assume)

Whoa! That is some auto-correct you have there.


This is a very simplistic view. You can rate moves not only by the result under objective play but also by how hard they are to play against, using an objective measure (for example, move A is better than move B if, to draw against A, you need a top engine with 20 seconds per move, while against move B only 2 seconds are needed).

Saying all moves are the same is like saying all mazes are equally hard.
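
That kind of measure is easy to approximate with a UCI engine: for each candidate move, see how much thinking time the engine needs before it judges the resulting position as roughly equal. A crude sketch with python-chess (the binary name, time budget, and 0.3-pawn threshold are all arbitrary, and a proper version would play out full games rather than trust a single evaluation):

    import chess
    import chess.engine

    # Crude proxy for "how hard is this move to play against": double the engine's
    # thinking time until its evaluation of the resulting position settles near
    # equality, and record how long that took.
    def time_to_neutralize(board, move, engine, max_seconds=20):
        position = board.copy()
        position.push(move)
        t = 1
        while t <= max_seconds:
            info = engine.analyse(position, chess.engine.Limit(time=t))
            score = info["score"].relative.score(mate_score=100000)
            if abs(score) < 30:  # within ~0.3 pawns of equal
                return t
            t *= 2
        return None  # never neutralized within the budget: the "harder" move

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # placeholder binary
    board = chess.Board()
    for move in list(board.legal_moves)[:5]:
        print(move, time_to_neutralize(board, move, engine))
    engine.quit()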


That sounds optimistic given the advent of the neural nets, which seem to roughly match human positional judgement. Any assumptions from pre-2016 seem suspect.


Interestingly, at Google the situation with Kasparov and rooms was:

(1) Big rooms were very, very hard to reserve, and staff was constantly making them unavailable to us. Besides that, the window of time when the author is coming is usually very narrow.

(2) Googlers could watch from their desks, so you didn't have to go in person

(3) Even if you did go in person, you wouldn't get a free book (to get it autographed). In the early days, they did give out free books

(4) Besides not getting a free book, you couldn't buy one either, because Books, Inc. stopped coming with a box of books to sell. They weren't making enough money that way.

The net of 2, 3, 4 was that in-person attendance was less valuable.

(5) Very, very few celebrities could fill up Charlie's, the big cafeteria, that holds 400. I don't think he could.

(6) I got a room that holds 100 people for Kasparov, one of the larger ones available

(7) He complained that the room was too small


> IBM suddenly changed their approach from a friendly, collaborative one to being out for blood.

Which is exactly how DeepMind reacted when their bot was destroyed by a player who wasn't even a top-100 pro. No press releases, no nothing. Just nerd butt hurt.

There were plenty of articles about its prior wins, though. Funny how that works


Which bot? The one they were testing before the go match? If so this comment is beyond ridiculous.


Judit Polgár trained specifically against opponents she was expecting to face. She said that playing against computers is not fun because you have no one to figure out and confuse.


Magnus Carlsen is going to be an all time great in our history books. Potentially, even better than Fischer was.


As far as I understand, experts already rate him better than Fischer. It seems one may still claim Kasparov is greater; Carlsen even says so himself. It will be very interesting to observe his performance over the next 4 years!


I think he's very clearly "better" in that he plays objectively stronger chess moves and would beat 1972 Fischer if we had a time machine.

"Greater" is a different question, more one for sports fans than sports scientists. What do you need to be great at a sporting vocation? Pure playing strength? Longevity? If so, in years, or in number of championship matches won? Dominance over your peers? Cultural impact on the world outside of your sport? Some kind of adjusted playing strength where we hypothesize how strong players would be given all the same opportunities and training material? If so, should it be modern grandmaster games and engines, or that from an earlier era?

"All of the above" is a good answer, and depending how you weight them you could incontrovertibly claim any of Carlsen, or Kasparov, or Fischer is the "greatest". You could also try to squeeze out a case for Morphy, or Lasker, or Capablanca.


Kasparov states this well in an interview.

You can't compare the best in a given era, because the players now learnt from the players of that era, i.e. Carlsen grew up studying Kasparov's games, so it's simply a flawed comparison.

Instead he suggests a metric of "how far ahead of the field" they were. So Morphy and Fischer do really well on that metric, as do Kasparov, Lasker, and most of the other greats.

Fischer was unique in so many ways, but late-1980s Kasparov would have crushed him.

Carlsen may be the only player to have a reasonable claim to be his equal.


Kasparov's definition is pretty biased in his favour though. It's far easier to dominate through pure chess talent in an era before engine prep, because the skill difference expresses itself across many more moves than in modern games.


Kasparov, to the surprise of no one, comes up with a self-serving metric. If anything it should be the opposite: if you're so far ahead, it usually means your opposition is quite weak. Being at the top for 10 years today is way more difficult than it was in Kasparov's time, for example, and that was way more difficult than in Lasker's time. Especially taking into consideration that he enjoyed an enormous preparation advantage thanks to his team, while today any idiot with a high-end GPU or CPU can see all the best opening moves in seconds.

There is still an advantage in preparing the information in a way that's useful/easier to remember, and that's what the teams of top GMs do, but there are no purely chess secrets anymore. It's just much harder to be ahead of the field today.


Just a quick question: if finding the most obscure path is the holy grail, can't we just write a quick modification to the algorithm to find the lines with the most diverse variations? I don't know the details of how chess engines work, but it doesn't sound too difficult to write one.


I'm not sure I understand the question so I'll break down the quote from the article:

> It’s figuring out what it sees and dismisses that might still be useful. The dream of any computer-savvy chess player is to discover a string of moves that an engine doesn’t necessarily favor, yet taps into a line that their opponent hasn’t prepared.

The first part ("discover a string of moves that an engine doesn’t necessarily favor") is algorithmically trivial of course. Just print out all the lines the algorithm discarded. At this point I think you can even narrow your options based on a heuristic like, as you suggest, diversity of variations.
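
For concreteness, here's roughly what that first step looks like with python-chess and any UCI engine (a sketch; the binary name, depth, and MultiPV count are arbitrary):

    import chess
    import chess.engine

    # Ask the engine for its top N lines (MultiPV) and keep everything it ranks
    # below its first choice -- i.e. the candidate lines it "dismisses".
    # "stockfish" stands in for whatever UCI engine binary you actually use.
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    board = chess.Board()

    infos = engine.analyse(board, chess.engine.Limit(depth=20), multipv=8)
    for rank, info in enumerate(infos[1:], start=2):  # skip the engine's favourite
        line = " ".join(move.uci() for move in info["pv"][:6])
        print(rank, info["score"].relative, line)

    engine.quit()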

But here comes the difficult question: which of the remaining lines is "still useful"? That alone requires massive insight. But of course, tournaments at the GM level require hyper-specialized preparation; at the WCC, each of the two contestants has the luxury of obsessing over the other's style. At that point in their analysis, not only are they trying to evaluate what line is still useful but, even further, what line their opponent is likely to take.

This gamesmanship is sort of like boxers studying each other, figuring out that their opponent's chin is open after a jab and drilling the hell out of it to exploit that fact.

(Note: I'm not a ranked player, just an enthusiast. I've dabbled a bit with Chess engines during the WCC 2014 and a bit more beyond but my knowledge is in no way state-of-the-art.)


… a very smart human


chess has nothing to do with intelligence, unless you consider memorizing as intelligent.


Variations of this come up often on HN.

Please note that:

- in some (many? most?) places IQ tests do not include questions that require memorizing anything, either before or during the test. (I spoke to one colleague who was into such stuff, and he made a major point of the fact that asking for facts would disadvantage certain parts of the population.)

- Something is measured using such tests even if they don't contain math or facts, only sorting sequences or figuring out what elements fit in a sequence.

- We call that something "intelligence". We could call it something else: "Synthetic problem solving skill", "fu", or whatever, but we typically call it "intelligence".

- On average that "intelligence" also seems to be a good indicator of a person's performance in many situations. As far as I know, chess playing benefits greatly.

- People who play chess will again and again end up in situations they haven't seen before and cannot have memorized.

- in those situations, the same skills that are measured in an intelligence test will help a person calculate outcomes and choose one to their advantage


He's referring to Carlsen's prodigious fantasy football skill.


Because it is usually impossible to remember all possible permutations of a chess board, I disagree.

Humans handle large data sets with a cheat: we use creativity and predictive reasoning to avoid having to store EVERY piece of data we encounter. Inference is a large part of this. Most models I'm aware of consider these traits to be related to intelligence.


Watch a GM chess match. They’ll play memorized moves till a point and then they’ll be “out of the book” and will have to start dealing with novel positions outside of what they’ve learned or prepared prior. Do they memorize? Absolutely. Can they also play completely novel positions better than anyone else? Absolutely.


surely this is a setup to alphaChess?



