
AlphaGo Zero inspired the development of an open-source version, Leela Zero, from which Leela Chess Zero was forked by the same guy who made Stockfish.

Lots of people contribute what I imagine are significant amounts of CPU power/money to the Leela Chess Zero project [1].

Would love to see AlphaZero vs. Leela Chess Zero.

[1] https://training.lczero.org/

[edit] I've caused terrible confusion by melding Leela Go and Leela Chess, when Leela Chess was originally forked from Leela Go and that's basically where the similarities end.

Edited for a bit more clarity.




The great thing about these community-driven efforts is that it is indeed feasible to reproduce these super expensive projects. I'm a bystander now, as new maintainers have taken over, and they are doing a fantastic job pushing things forward.

This is also how Stockfish got to be the #1 engine. By being open source, and by having the testing framework (https://tests.stockfishchess.org) use donated computer time from volunteers, it was able to make fast, continuous progress. It flipped what was previously a disadvantage (if you are open source, everyone can copy your ideas) into an advantage, as you can't easily set up a fishtest-like system with an engine that isn't already developed in public.


I think KataGo is stronger than Leela Zero.

https://github.com/lightvector/KataGo


I was expecting another boring clone, but KataGo looks like a cool project with nice new ideas! Thanks for sharing.


It is indeed independent and has a lot of nice ideas and theory contributions. It turned up after I stopped actively competing in computer Go, so I can't tell how strong it is. It certainly has a lot of potential and I'd probably pick it as a base to work from nowadays.


KataGo is also cool for those of us who use bots to review our human games, because it has a score estimator. With Leela, a 0.5-point win and a 20.5-point win in the endgame can both amount to a 99.5% chance of winning, but to us (amateur) humans those are very different results.


There is a Go client built around KataGo for exactly that purpose: https://github.com/sanderland/katrain. It is distributed as a pip package, making it very frictionless to install.

With a tool like this, instead of waiting for a Go pro to visit our local Go club to review our kifu, I can have my game reviewed move by move until the end (not only the first n moves).

I can still have questions for pros, but they would be more specific.

Now, playing a superhumanly strong bot can be unfun. No matter how much effort you put into a move, you will just keep making your outcome worse with every move.

Another important use-case is that the AI can also tell you if a joseki is actually joseki, and how to refute a bad joseki move.


I'd love to see some CPU/GPU/TPU donations to @chewxy's "agogo"[1] which is AlphaGo (AlphaZero) re-implemented in Go as a sort of proof of concept / demo of Gorgonia[2].

[1]: https://github.com/gorgonia/agogo

[2]: https://github.com/gorgonia/gorgonia


As far as I know Gian-Carlo Pascutto is not among the original authors of Stockfish, though he did work on chess engines.

Perhaps you were confused because Leela Chess Zero was forked from Leela Zero (the neural-network Go engine by Pascutto), but it includes Stockfish's move generation logic.


I think OP was referring to Gary Linscott, who has made major contributions to Stockfish and also created an adaptation of Leela Zero to chess, now living under the Leela Chess Zero GitHub org. Apparently a different adaptation is now the officially sanctioned one; at least, his commits don't show up in the new lc0 repo.

https://github.com/official-stockfish/Stockfish/graphs/contr...

https://github.com/LeelaChessZero/lczero/graphs/contributors


He's also well known for his absolutely amazing work on audio fingerprinting.


There are several people who wrote Stockfish, including Tord Romstad, whose Glaurung was the initial base for Stockfish.

Glaurung was pretty innovative at the time.


AFAIK Garry Kasparov to this day does computer&human vs. computer&human chess research, and it's far from a solved problem.


If by "it's far from a solved problem" you mean that chess isn't a solved game that's true.

But Kasparov and others have given up on the idea that a human provides any unique insight into chess anymore. Computers are just better.


You are right if by "better" you mean "competitively stronger under tournament or rapid conditions". Humans are still way stronger strategically, and even competitively, if given enough time and resources to avoid tactical mistakes. So yes, humans still provide unique insight into chess every day in correspondence chess and analytic research.


Humans aren't stronger strategically anymore either, under any conditions.

In 2014 a heavily handicapped Stockfish beat the 5th-ranked player in the world (Nakamura) under tournament conditions, despite having no access to its opening book or endgame tablebases and giving a one-pawn handicap.


The match you are referring to was played under tournament conditions that clearly handicapped the human Grandmaster. I read from the report of the match [0] that "The total time for the match was more than 10 hours [...] The two decisive games lasted 147 and 97 moves." These unfavourable conditions clearly penalized the human, so the result can hardly be taken as meaningful regarding strategic superiority. From the quiet of my room I instead regularly find strategic plans that overcome my and my opponent's computers. Feel free to join the correspondence chess federation [1] to experience the joy and pain of strategic research!

[0] https://www.chess.com/news/view/stockfish-outlasts-nakamura-... [1] www.iccf.com


That's absurd. He works on human+chess research. He obviously hasn't given up on human insight.

What he has given up on is a single human beating a computer.


I haven't seen any recent (last 5 years or so) articles by him showing he's doing any work on it. There are a few recent articles where he talks about how humans should work with machine learning, but nothing specific.

When he recently came out of chess retirement he didn't talk about it at all in either 2017 or 2019:

https://www.telegraph.co.uk/news/2017/07/06/king-back-chess-...

https://www.theguardian.com/sport/2019/sep/06/chess-leonard-...

There's nothing recent about him on https://en.wikipedia.org/wiki/Advanced_chess


Can a human + computer beat just a computer?

I can't imagine a human doing anything besides making things worse, or at best breaking even.


It's much more difficult than it used to be, but I think there is still some value to human guidance, more as a "referee" than anything else.

Right now we have essentially two top-tier engines: traditional brute force with alpha-beta pruning (Stockfish), and ML (Leela). Both alone are incredibly strong, but they are strongest and weakest in different types of positions. A computer chess expert, who knows what kind of positions favor Stockfish and what kind favor Leela, could act as a "referee" between the two engines when they disagree, and when they are unanimous, simply accept the move.
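A minimal sketch of that referee loop, using the python-chess library and assuming locally installed UCI binaries named "stockfish" and "lc0"; the ask_human helper is hypothetical and just stands in for the expert's judgement:

    import chess
    import chess.engine

    def ask_human(board, sf_move, lc0_move):
        # Hypothetical referee step: in practice the expert weighs which
        # engine to trust in this type of position. Here we simply prompt.
        choice = input(f"Stockfish: {sf_move}, Leela: {lc0_move}. Pick s or l: ")
        return sf_move if choice.strip().lower().startswith("s") else lc0_move

    board = chess.Board()
    stockfish = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumed binary name
    leela = chess.engine.SimpleEngine.popen_uci("lc0")            # assumed binary name
    limit = chess.engine.Limit(time=30.0)  # per-move think time

    while not board.is_game_over():
        sf = stockfish.play(board, limit)
        lc = leela.play(board, limit)
        if sf.move == lc.move:
            move = sf.move                              # unanimous: accept it
        else:
            move = ask_human(board, sf.move, lc.move)   # referee on disagreement
        board.push(move)

    stockfish.quit()
    leela.quit()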

Ten years ago, a grandmaster driving a single engine could typically beat an equal strength engine. I don't think that's the case anymore.

But I think if you had someone who is an expert at computer chess (not so much a chess grandmaster), gave them Leela AND SF, and let them pick which one to use in the case of conflicts, they would score positive against either Leela or Stockfish in isolation.

Larry Kaufman designed his new opening repertoire book by doing exactly this: running Leela on 2 cores + a GPU, and Stockfish on 6 cores, and doing the conflict resolution with his own judgement.

The human can certainly no longer pull his own moves out of thin air, though.


Computer mastery of Go has reached the point where it is a difficult task for an expert (read: grandmaster level) human to even follow what is happening. It is totally implausible that a human could resolve a conflict between top engines in a meaningful way.

It is unlikely that chess is any different. Any superficial understanding by a human of which move is 'better' is just ignorance of the issues around evaluating a position. If you have statistical evidence, that is something. 'But I think' is not evidence.

It might be entertaining to have a human involved. It isn't going to help with winning games.


I can't speak for Go, but in chess the best players in the world still understand the nuances of a position better than the computer engines and, if occasionally proven wrong by the computer analysis, are able to understand the refutation and refine their strategic evaluation. I know this because it's what I've been doing for the past seven years in the realm of correspondence chess to gain the title of international master.


I don't even know the rules of Go, but I am a long-time chess enthusiast with a decent, but not top-level, understanding of chess (I am a FIDE master, I also play correspondence chess (which is human+engine), and I have a strong interest in computer chess).

I can absolutely guarantee you that a human (who is an expert in computer chess, someone like Larry Kaufman) plus engines will beat a single engine over the long run. With current tech and computing power, this is ONLY because we have brute-force (alpha-beta pruning) and ML engines at near-equal strength, with strengths and weaknesses in different types of positions, and because those strengths and weaknesses are understandable.

If we did not have AlphaZero, I don't think the human would be able to add anything at all currently.


He recently spoke quite dismissively of computer-augmented chess on the Lex Fridman podcast. Essentially, the computer knows best... so computer and human isn't meaningfully different from computer and rubber stamper.


I strongly disagree. The best correspondence chess players often improve on the computer suggestions. It takes time, energy and great strategic knowledge, but it's still possible.

Source: I’m a correspondence chess international master


Maybe the future of computer augmented chess is to form teams of a computer, a person, and a dog. The computer comes up with chess moves. The person feeds the dog. The dog makes sure the person does not touch the board.


Is human&computer better than computer only?


Not really; humans barely provide any insight that chess engines don't already consider. Deep Blue could evaluate 200 million positions... per second. And that's from 1997.

The few and rare times an engine gets funky are usually in endgame positions where the engine can't seem to find a sacrifice to win the game and will evaluate the current position as drawn. These cases are few, and I very much doubt that a human would be able to find these moves in an actual match.

Now if you’re talking about the way the chess engine learns, it can learn in two different ways: without human help (learning completely on its own giving it nothing but the rules which is how AlphaGo works), or with human aid (through chess theory accumulated over centuries of human matches that these engines have built in as part of their evaluations). Things get very interesting.

I'd recommend you look up a few games between AlphaGo and Stockfish, which embody these two different philosophies and battle it out tooth and nail. The matches are brilliant. I would say, though, that AlphaGo (learning the game entirely from scratch without human help) seems to have triumphed more often than Stockfish, and given the nature of these systems, I'd expect that trend to continue.


I'm not sure it's right to characterise Deep Blue or Stockfish as repositories of human chess theory. Fundamentally they were all based on a relatively simplistic function for calculating the value of a board position combined with the ability to evaluate more board positions further into the future than any human possibly could (plus a database of opening moves). That approach seems thoroughly non-human, and represents a victory of tactical accuracy over chess theory or strategy.

However I agree that the games between AlphaGo and Stockfish are really interesting. It strikes me that the AlphaGo version of chess looks a lot more human; it seems to place value on strategic ideas (activity, tempo, freedom of movement) that any human player would recognise.


I think you're right; I meant to say that chess engines usually have opening books built into them which derive from human chess theory, but you're absolutely right that they don't play in a human way.

It's kind of crazy how AlphaZero has managed the success it has. Stockfish calculates roughly 60 million positions per second and AlphaZero only about 60 thousand per second. Three orders of magnitude less, yet its brilliance is mesmerizing, tearing Stockfish apart in certain matches.


> ...learning completely on its own giving it nothing but the rules which is how AlphaGo works...

Not to be too picky, but it was AlphaGo _Zero_ that learned from the rules alone. AlphaGo learned from a large database of human played games: "...trained by a novel combination of supervised learning from human expert games". [1]

AlphaGo Zero, derived from AlphaGo, was "an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules". [2]

[1] https://www.nature.com/articles/nature16961

[2] https://pubmed.ncbi.nlm.nih.gov/29052630/


Also AlphaGo Zero never played chess, only go. It was AlphaZero that applied the same framework to other games including chess.

https://en.wikipedia.org/wiki/AlphaGo_Zero https://en.wikipedia.org/wiki/AlphaZero


> I’d recommend you to look up a few games between AlphaGo and Stockfish

Agadmator's youtube channel covers a bunch of those. https://www.youtube.com/watch?v=1yM0D1iZLrg


And some of the most amazing games are when AlphaGo is absolutely breaking chess "wisdom" left and right simply because it can see a forced solution on the horizon.

Pawn structure? BAH! King safety? CHARGE!

And then 75 moves later Stockfish is in zugzwang.


>Deep Blue could evaluate 200 million different moves... per second. And that’s from 1997.

And it still lost games to Kasparov. Which doesn't happen now; top engines haven't been beaten by humans since ~2006.


Not really. It was tried and it turns out that the best strategy for the human is to just do what the computer suggests.


Could you provide a source for this?


I suppose he can’t because it isn’t true at all. The best correspondence players usually improve significantly over the computer suggestions.

Source: I'm a correspondence chess international master


> The best correspondence players usually improve significantly over the computer suggestions.

I might be misunderstanding your claim, but how can humans playing correspondence chess beat Stockfish or Lc0?


In official correspondence games computer assistance is allowed, so most (if not all) players start their analysis with the computer's suggestions (Stockfish, Lc0 or others). Some players limit themselves to this and play the engine's move; others try to improve on it with their own contribution. If no human contribution were possible, correspondence chess would become a hardware fight, but results show that the best players can defeat "naive" opponents who simply rely on computer suggestions. In this sense, every correspondence chess win is a win over the opponent's hardware and engine.


Isn't it possible that you're not improving upon the engine's suggestions, but instead, your opponent is choosing suboptimal non-engine lines, and your engine is beating their weakened engine?


Occasionally it is possible. After seven years and more than one hundred games played, I can tell you that I have been surprised by my opponent's reply no more than a handful of times. By "surprised" I mean that he didn't play the engine's top choice. In fact, most of the time the best move in a given position is easily agreed on by any reasonable engine on any decent hardware. In a few critical moments in the game, the best move is not clear and there are two, three or more playable alternatives that lead to very different positions. In these cases the computer, after a long think (one or more hours), usually converges on one suggestion and sticks to it even if given more time (a sort of "horizon effect"). These are the moments where a human, after a long think, can overcome the computer suggestion and favor the 2nd or 3rd choice of the engine. So in brief, no: I can't recall a game where I've been gifted the win by a "weakened" move from my opponent; most of the time I have been confronted with the "engine-approved" suggestion and had to build my win by refuting it.


I assume that when you come across one of these novel moves, plug it into the computer, and give it time to search, it ultimately decides that it's superior?

Relatedly, can you give some examples of novel non-engine lines that turned out to be better than engine lines?


Sometimes, if you play a move and the first plies (i.e. half-moves) of the main variation, the computer starts "understanding" and its score changes accordingly. Those are the cases where more hardware power could be useful and make the engine realize the change from the starting position. More often, the "non-engine" move relies on some blindness of the engine, so the computer starts understanding its strength only when it's too late. In these cases it is unlikely that more power would bring benefits. Typical cases are:

- Fortresses [0]. One side has more material, but the position can't be won by the superior side. As the chess rules declare a draw only after 50 moves without captures or pawn pushes, current engines can't look that far ahead and keep manoeuvring without realizing the blocked nature of the position. Some engines have been programmed to address this problem, but their overall strength decreases significantly. (See the sketch after this list.)

- Threefold repetitions [1]. The engine believes the position is equal and moves the pieces in a, let's say, pseudorandom way. Only at some point does it realize the repetition can be avoided favourably by one side. This topic is also frequently discussed in programming forums, but no clear-cut solution has emerged yet.
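As a minimal sketch of probing the fortress case, assuming the python-chess library and a UCI binary named "stockfish" on the PATH: the FEN below is just a textbook wrong-bishop draw used as a stand-in (trivial positions like this are solved quickly; the fortresses that actually confuse engines have more material on the board). The loop simply watches how the reported score evolves with depth.

    import chess
    import chess.engine

    # Stand-in position: the textbook "wrong bishop + rook pawn" draw.
    # White is up a bishop and a pawn but cannot make progress.
    FORTRESS_FEN = "7k/8/5K1P/8/8/8/8/1B6 w - - 0 1"

    board = chess.Board(FORTRESS_FEN)
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumed binary name

    for depth in (10, 20, 30, 40):
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        # If the engine "sees" the fortress, the score converges toward 0.00;
        # otherwise it keeps reporting an advantage for the side up material.
        print(depth, info["score"].white())

    engine.quit()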

If you are looking for positions where human play is still better than the engine's, the opening phase is the most fruitful. Most theoretical lines were born of human creativity, and I doubt a chess engine will ever be able to navigate the intricacies of the Poisoned Pawn Variation of the Sicilian Najdorf [2] or the Marshall Attack of the Ruy Lopez [3]. Neural-network engines are strategically stronger than classical AB programs in the opening phase, but they suffer from occasional tactical blindness. Engine-engine competitions often use opening books to force the engines to play a prearranged variation, to increase the variability and reduce the draw percentage.

[0] https://en.wikipedia.org/wiki/Fortress_(chess) [1] https://en.wikipedia.org/wiki/Threefold_repetition [2] https://en.wikipedia.org/wiki/Poisoned_Pawn_Variation [3] https://en.wikipedia.org/wiki/Ruy_Lopez#Marshall_Attack


I'm interested because the experience in Go is that humans simply can't keep up.

What is the evidence that it isn't a hardware or software differential between the players? I can't think of an easy way to ensure that both players started with computer-suggested moves of the same quality.


There are a lot of engines rated way higher than the best humans, so in theory every suggestion of theirs should be enough to overcome any human opponent. In practice most (if not all) players rely on Stockfish and Lc0 (both open source). During a game, most of the time the "best" move is easily agreed on by every reasonable engine on any decent hardware. Only in a few cases during a game does the position offer two, three or more playable choices. In these cases, stronger hardware or a longer think rarely makes the computer change its mind. It's a sort of horizon effect where more power doesn't translate into really better analysis.

For example, in a given position you could have three candidate moves: M1, a calm continuation with a good advantage; M2, an exchange sacrifice (a rook for a bishop or a knight) for an attack; M3, a massive exchange of pieces entering a favorable endgame. If the three choices are that different, the computer usually can't dwell long enough to settle on a clear best move. Instead the human can evaluate the choices until one of them shows up as clearly best (for example, the endgame can be forcefully won). In these cases the computer suggestion becomes almost irrelevant, and only a naive player would make the choice based on some minimal score difference (which can vary unpredictably with hardware, software version or duration of analysis). So the quality of the starting suggestion is somewhat irrelevant if you plan to make a thoughtful choice.
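For what it's worth, the raw material for that kind of M1/M2/M3 decision is easy to pull out of any UCI engine via MultiPV. A minimal sketch, assuming the python-chess library and a binary named "stockfish"; the position and time budget are placeholders:

    import chess
    import chess.engine

    board = chess.Board()  # substitute the position under analysis
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumed binary name

    # Ask for the three best candidate lines instead of a single "best" move.
    infos = engine.analyse(board, chess.engine.Limit(time=60.0), multipv=3)

    for i, info in enumerate(infos, 1):
        line = board.variation_san(info["pv"][:6])   # first few moves of each line
        print(f"candidate {i}: {info['score'].white()}  {line}")

    engine.quit()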


I'm not sure about very recent chess engines, but for a long time, it was better. The human suggests several moves that would advance their strategy, and the computer dedicates its search time to evaluating the strength of those potential moves, which cuts down the search space considerably. It's called "advanced chess" or "centaur chess". https://en.wikipedia.org/wiki/Advanced_chess
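In python-chess terms (a sketch, assuming a local UCI binary named "stockfish"; the candidate moves are hypothetical), this "human proposes, engine checks" division of labour maps onto restricting the search to the human's candidates with root_moves:

    import chess
    import chess.engine

    board = chess.Board()  # position to analyse
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumed binary name

    # Candidate moves proposed by the human (hypothetical examples).
    candidates = [chess.Move.from_uci(u) for u in ("e2e4", "d2d4", "c2c4")]

    results = {}
    for move in candidates:
        # root_moves restricts the search to this single candidate, so all
        # of the engine's time goes into checking the human's idea.
        info = engine.analyse(board, chess.engine.Limit(time=10.0), root_moves=[move])
        results[move] = info["score"].pov(board.turn)  # score for the side to move
        print(board.san(move), results[move])

    best = max(results, key=lambda m: results[m].score(mate_score=100000))
    print("engine prefers:", board.san(best))
    engine.quit()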


No.

Computers are now as much better than Magnus Carlsen as he is better than a moderate amateur.

If even the best player overrides a move he's much more likely to be reducing the strength of the move than increasing it.


The ratings you are referring to are typically based on tournament or rapid games, where the limited time induces the human players into mistakes that the computer capitalizes on. Given enough time, or with a "blunder check" option, the best human players are still strategically stronger. In correspondence chess, where there is much more time at one's disposal, the human players can still improve on the computer suggestions.

Source: I’m a correspondence international chess master


Yeah, I was thinking about classical or standard time controls. In the last big cyborg tournament a few years ago, I remember computers coming in 1st and 2nd.

I wasn't thinking about correspondence but what was the latest large cyborg correspondence tournament?


I don't know the last one, but I recall the matches of the Hydra chess machine [0] in the early 2000s against GM Adams under tournament conditions (5½ to ½ for the machine) and against GM Nickel under correspondence conditions (2 to 0 for the human). Both Grandmasters were top players in their respective fields, so it showed very clearly how the time limitation impacted the competitive results. Nobody in the chess elite would claim that Hydra understood chess better than GM Adams, but he still lost resoundingly due to the inevitable mistakes caused by the relatively fast time control.

[0] https://en.wikipedia.org/wiki/Hydra_(chess)


But wasn't Hydra in 2005 ~2800 Elo, whereas the current best chess engines like Leela Chess Zero or Stockfish are ~4000 Elo?

Just realized that correspondence chess is cyborg chess; I didn't know computers were legal in correspondence chess, but it makes sense now. Reading about it, it sounds like it's less about knowing chess and more about understanding the applications you're using.


Chess engine ratings are not directly comparable to human ratings, as they are extracted from different pools. Hydra played relatively few games, so its rating estimate was somewhat approximate, but it was clearly "superhuman" (GM Adams was no. 7 in the world and only scored one draw in 6 games). Today Stockfish is awarded a rating of about 3500 [0] on typical PC hardware, but this rating comes from matches between engines, not against humans.

Regarding the argument of "knowing chess", it depends on your definition. I often use this analogy: correspondence chess is to tournament chess what the marathon is to track running. They require different skills and training, but I guarantee you that a lot of understanding is involved in correspondence chess, possibly more than in tournament chess.

[0] https://ccrl.chessdom.com/ccrl/4040/


Oh, I assumed it required quite a bit of chess knowledge and skill. But I assume what differentiates a good player from a great one isn't unassisted chess ability. Basically I'm wondering how well correspondence ratings track with unassisted ratings. It was my understanding that they don't track really well at the higher levels of correspondence chess.



