Hacker News
A step-by-step guide to building a simple chess AI (medium.com/lhartikk)
216 points by vinny12 on March 31, 2017 | hide | past | favorite | 41 comments



Love this article. I went through a lot of these same steps when I was tinkering with my own chess bot. Bitboard representation with bitshift operations can dramatically speed everything up (see https://chessprogramming.wikispaces.com/Bitboards). Another easy step is to integrate an opening book; the "Encyclopaedia of Chess Openings" provides a great set. Here is a link to a CSV of the openings for easy parsing: https://github.com/notnil/opening/blob/master/csv/openings_a...

Plug: Any gophers out there looking to build their own bot can use my chess engine https://github.com/notnil/chess

Edit: wording


Thank you!


No story about chess engines is complete without a mention of 1k Chess for the Sinclair ZX81:

http://users.ox.ac.uk/~uzdm0006/scans/1kchess/

How big is it? 672 bytes.

How much memory did that ZX81 have after the OS had claimed its share? 672 bytes.


Afaik it was an incomplete engine (didn't implement all the rules of the game), right? Still impressive, but important to keep that in mind.


It didn't understand castling, en passant and promotions. It would also only play white.

I did once try to pit it against a modern engine. It seemed to do reasonably well until the modern engine sneakily castled, and I was unable to tell 1k Chess about this, thus forcing it to forfeit the game.

Of course, since posting that, I found this:

http://nanochess.org/chess6.html

Chess for the PC; 392 bytes. You put it in your boot sector. Still no castling, en passant, or promotions, although there's an 831-byte version that has those.


> I'm having trouble beating a chess program I wrote. Not sure if I'm a bad player or the algorithm is decent

Most beginners will lose even to a chess engine that only looks one move ahead: the engine will not drop pieces in obvious ways, but the beginner will.

Chess.js etc. are great, btw. Last time I made a chess AI, most of the time was spent generating possible moves; en passant, promotion, and the like add complexity.
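
To make "one step ahead" concrete: assuming a position object with legal_moves(), make(), undo(), and a per-side material count (a hypothetical interface standing in for a library like chess.js, not any real API), the entire engine is one loop:

```python
# A one-ply "greedy material" engine over a hypothetical position
# interface: legal_moves(), make(move), undo(), material(side).

def best_move_one_ply(pos, side):
    """Pick the move maximizing our material minus the opponent's,
    looking exactly one move ahead."""
    best, best_score = None, float("-inf")
    for move in pos.legal_moves():
        pos.make(move)
        score = pos.material(side) - pos.material(not side)
        pos.undo()
        if score > best_score:
            best, best_score = move, score
    return best
```

This is enough to never hang a piece to an immediate one-move capture, which is exactly the mistake beginners make.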


I think sunfish (https://github.com/thomasahle/sunfish/blob/master/sunfish.py) would be a much better example of a simple chess AI (~100 lines of python code), even if it's not very well documented.

The article doesn't talk about very important concepts like quiescence search (https://chessprogramming.wikispaces.com/Quiescence+Search).

Sunfish doesn't use any of the "further improvements" proposed in the article and still performs much better.
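
In negamax form, the core of quiescence search fits in a few lines. A sketch over a hypothetical node interface (evaluate/captures are stand-ins for illustration, not sunfish's API); real engines also consider checks, delta pruning, and move ordering:

```python
# Minimal quiescence search sketch (negamax, fail-hard).
# node.evaluate(): static score from the side to move's perspective.
# node.captures(): child nodes reached by each capture.

def quiescence(node, alpha, beta):
    stand_pat = node.evaluate()       # assume we may always "stand pat"
    if stand_pat >= beta:
        return beta                   # fail-hard beta cutoff
    alpha = max(alpha, stand_pat)
    for child in node.captures():     # only search forcing moves
        score = -quiescence(child, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

The point is to never cut off the main search in the middle of a capture sequence, where a static evaluation would be wildly wrong.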


I spent some time trying to develop a chess AI a short time ago. I had this crazy idea that if I stored all of the possible moves in memory I'd only need to calculate the nth level of depth after every move.

This kills the computer.

After deciding it was best to simply re-evaluate every time, what caught me up was en passant, castling, and so forth. It's a challenging topic. I recommend it.


Interesting that it's deterministic. I can get it to play the same game again and again. Why is this not the case with more advanced chess engines?

Edit: Apparently single-threaded stockfish is deterministic. Maybe my experience with non-deterministic chess engines just has to do with the handicapping related to providing easier levels of play.


Never checked, but this somehow got me surprised: aren't all chess engines deterministic? The same program, given the same input, should give the same output unless it is specifically programmed to avoid that (and then we have PRNGs).


Some things that make chess engines non-deterministic:

1) Most chess engines use an opening book instead of calculating moves early in the game. These books are often programmed probabilistically, so an engine may have 4 reasonable responses to a given move and pick one randomly.

2) Many engines are programmed to "ponder", that is, to calculate speculatively during the opponent's turn. This means the time you take to make your move can affect the calculations performed by the engine.


You forgot two more things. They might not come up often, given the sheer number of possible positions in the early game, but they do make a difference if you play the same position every time:

- Opening book learning.
- Position learning.

Most decent chess engines have these features.


Many chess engines are (also) made for humans to play against.

Humans wouldn't want their opponent to play the exact same way every time. For a simple example, playing white, one wouldn't want the computer to always open 1. h4.

So, either the computer has to pick an opening, or the human opponent has to pick one.

Of course, that extends past the first move, so, assuming the computer to play deterministically, one would end up with the human opponent having to specify "let's play opening X, variant Y, as typically played by Z up to black's 15th move". It probably is easier to use a "set up a custom board" feature than to specify that.

And it wouldn't even stop there. The human specifying that particular opening likely wants to study that opening, not just the single continuation that the computer thinks to be the best one.

Also, as another poster said, a chess engine may use Monte Carlo simulation to pick a move, and that means using a random number generator.


A reply that was deleted mentioned that some engines use Monte Carlo simulations which are non-deterministic (?).

My experience with chess engines being non-deterministic is that if I pick the same opening I don't always play the same game with the same engine. I really might just be off here, and that variance might be related to random artificial handicapping.


Most chess engine opening books weight their branches to pick certain moves x% of the time, and give 'bad' moves zero or negative weight so they are never picked.

e.g. 1. e4, d4, c4, and Nf3 will get picked more often than 1. b3, c3, Nc3, and f4, and 1. g3 and e3 even less often (or not at all).
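
Such a book can be as little as a table of weighted moves. A sketch (the book contents and weights here are made up for illustration):

```python
import random

# Tiny weighted opening book sketch. Weights are relative frequencies;
# moves with weight 0 are effectively banned. Contents are made up.
BOOK = {
    "start": [("e4", 40), ("d4", 35), ("c4", 15), ("Nf3", 10), ("h4", 0)],
}

def book_move(position_key, rng=random):
    entries = [(m, w) for m, w in BOOK.get(position_key, []) if w > 0]
    if not entries:
        return None                   # out of book: fall back to search
    moves, weights = zip(*entries)
    return rng.choices(moves, weights=weights, k=1)[0]
```

Once the game leaves the book, the engine switches to its normal search, which is where the non-determinism from the book ends.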


Unless efforts are made for results to be deterministic, concurrent algorithms may complete their work / return their results in an ordering that is unstable (depending mostly on host OS and CPU management of time-slicing).


Stockfish isn't.


When you say that it is deterministic, do you win or do you lose the game?

If you always win with the same game, then instead of AI, we are rather talking about AS (Artificial Stupidity).


I'm very interested in the values assigned to pieces. Does anyone know the source or reasoning behind the table in step 2?

How are pieces' relative values calculated? What's to say a pawn is 1/3 of a knight or 1/5 of a rook?

And why is the king 900, not simply the total value of all other pieces, or slightly more? 8 pawns (10 ea), 2 knights (30 ea), 2 bishops (30 ea), 2 rooks (50 ea) and a queen (90) total 390 points. Why can't a king simply be 400 points? If it needed to be worth more than both sets (white and black) for some reason, why couldn't it be worth 800? The value 900 seems arbitrary to me.

If there's sound reasoning behind this can anyone recommend material related to deriving similar weighted values?


Check out this chess programming wiki article [1] for a (still fairly hand-wavy) rationale for similar piece values, and the Wikipedia article on chess piece relative values [2] for a more in-depth look at various weightings. Some modern approaches use analysis of a huge corpus of master games to come up with piece values, but many of the systems are based on intuition and empirical evidence.

[1] https://chessprogramming.wikispaces.com/Simplified+evaluatio...

[2] https://en.wikipedia.org/wiki/Chess_piece_relative_value
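
To make the scale concrete, here is a material-only evaluation using the article's step-2 values (pawn 10, knight/bishop 30, rook 50, queen 90, king 900). The board format is my own simplification for illustration, not the article's:

```python
# Material-only evaluation. Board is a plain dict {square: letter};
# uppercase is white, lowercase is black -- a toy representation.
VALUES = {"p": 10, "n": 30, "b": 30, "r": 50, "q": 90, "k": 900}

def material(board):
    """Positive means white is ahead, negative means black is."""
    score = 0
    for piece in board.values():
        value = VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score
```

The only thing that matters for search is the relative ordering of scores, which is why the king's exact number is largely arbitrary as long as it dwarfs everything else.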


Good parent question and interesting links, thanks, will take a look.

To follow on, though: shouldn't it technically be the case that piece values should, in theory, fluctuate based on the current situation?

For instance, my castle is worth zero to me if I'm throwing it so I can get you in check.

Well, I know modern chess engines take this stuff into account. Mostly just a random thought.


> To follow on though, shouldn't it technically be the case that piece values should in theory fluctuate based on current situation.

Yes. One common example is knights and bishops. Both are worth about 3 pawns, but knights are generally more useful in closed positions due to their ability to jump over pieces, while bishops are better in open positions since they can attack and move across long distances.


Fascinating.

So in some sense, and I'm just brain-farting here, the value of a piece is a weighted sum of all the possible squares it could occupy during move generation. Something like a rudimentary path trace.

Now trying my best not to get distracted building chess machines. :/


> values should in theory fluctuate based on current situation

The author touched on this a little bit by adding the position adjustments

> my castle is worth zero to me if I'm throwing it so I can get you in check.

I know what you mean, but castling into or through check isn't a legal move and shouldn't show up in your list of possible moves.
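
Those position adjustments are usually piece-square tables: a per-square bonus added on top of the base material value. A minimal sketch (the numbers here are illustrative, not the article's):

```python
# Piece-square table sketch: a pawn gains value as it advances.
# Rank 0 is white's back rank; the bonus numbers are illustrative.
PAWN_PST = [0, 5, 10, 20, 30, 40, 50, 0]  # bonus by rank, white's view

def pawn_value(rank, is_white, base=10):
    """Material value plus positional bonus; black reads the table
    from the opposite end of the board."""
    table_rank = rank if is_white else 7 - rank
    return base + PAWN_PST[table_rank]
```

This is the cheapest way to make values "fluctuate based on the situation" without any extra search.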


Right, although I didn't mean castling into check. I meant tempting a greedy algorithm by throwing away a high-value piece (which was always a weakness of early chess machines), though I realize the tree search would spot that.

Also, now that I'm awake, I've read the article fully.

Very cool, makes me want to try building one!


For those interested in understanding how chess programs work, also see Toledo Nanochess: The commented source code

http://www.lulu.com/shop/oscar-toledo-gutierrez/toledo-nanoc...

  Toledo Nanochess is the world's current smallest chess 
  program written in C language. Now for the first time is 
  published the complete documented source code. Also 
  including the documented source code of the JS1K 2010 
  Chess entry (2nd place winner)


Nice write up.

To learn Elm (and to learn more about chess engine implementations) I wrote a pure client-side chess engine and game. https://github.com/darrensiegel/elm-chess

You can play it live here: https://elm-chess.com/


I recently wrote a chess AI in javascript, and found that it wasn't able to evaluate nearly as many moves per second as I expected, even though the implementation was pretty light (though not extensively optimized).

My feeling is that for serious AIs that need to perform huge tree evaluations (whether classical alpha-beta or something closer to monte-carlo minimum regret searches) there's several orders of magnitudes of optimization that can be performed, huge gains from subtle tradeoffs (larger static memory allocations, memory caching strategies, etc) and deep heuristics that can lead to pruning search spaces by huge percentages.

Highly tuned implementations such as these - or the graphics visualizations from the demo scene - have always fascinated me as an art form.

In practice, though, the industry seems to value more code and more features over such incredible feats of superimplementation.


Here's an article explaining AlphaGo, which I suppose could lead to a better chess implementation [1].

[1] https://www.tastehit.com/blog/google-deepmind-alphago-how-it...


I am working on building a generic card-game environment/runner that can plug into different AI systems, and this is very helpful.

I've been looking for a reinforcement-learning version of this. How do you build an AI that plays Tic-Tac-Toe against itself to figure out the best strategy?
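
Tic-tac-toe is small enough that you can also skip learning entirely and have negamax play both sides to exhaustion — useful as the ground truth a self-play RL agent (e.g. tabular Q-learning) should converge toward. A sketch:

```python
from functools import lru_cache

# Solve tic-tac-toe exactly by letting negamax play both sides.
# Board is a 9-char string of "X", "O", "."; the value is from the
# perspective of the player to move: +1 win, 0 draw, -1 loss.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    opponent = "O" if player == "X" else "X"
    if winner(board) == opponent:
        return -1                     # the previous move won the game
    if "." not in board:
        return 0                      # draw
    return max(-value(board[:i] + player + board[i + 1:], opponent)
               for i, cell in enumerate(board) if cell == ".")
```

With perfect play the empty board evaluates to 0 (a draw), which is exactly what a converged self-play agent should also report.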


I made a few minimax-based AIs for games in uni, but I had always wondered how I would calculate the score. How are these point systems made? Are they just educated guesses? How far can minimax take you in terms of skill level?


> How are these point systems made?

I'm currently in the process of building my own chess AI (in Python), and as a score I'm using a weighted combination of the material and of the number of squares controlled by each side. I am planning to experiment with giving more weight to squares close to the opponent's king, and possibly with improving the evaluation function using reinforcement learning.

That said, my program is currently pretty weak; first I have to optimize its calculation speed so it can search deeper.
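
In outline the scoring function looks like this; the weights here are just illustrative placeholders, not tuned values:

```python
# Weighted evaluation sketch: material plus squares controlled,
# from white's perspective. Weights are illustrative placeholders.
W_MATERIAL, W_MOBILITY = 1.0, 0.1

def evaluate(material_diff, white_controlled, black_controlled):
    """material_diff: white material minus black material.
    *_controlled: number of squares each side attacks."""
    mobility_diff = white_controlled - black_controlled
    return W_MATERIAL * material_diff + W_MOBILITY * mobility_diff
```

The mobility weight has to stay small relative to the material weight, or the engine starts trading pieces for open lines.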


There's a common scoring system that is often used to help children who are learning to play chess (pawn is 1, knight and bishop are 3, etc).

A pure minimax engine isn't that hard to beat because the engine will be basically playing random moves during the opening phase. An opening repertoire + minimax on a modern computer will beat casual players (probably 1700-1900 Elo if I had to guess). You also have to do some good time management.


Why don't we have artificial intelligence? Well, I imagine that would require 'intelligence' to be something other than a delusional artifact of our neurophysiology in the first place...



Would anyone besides me like to step forward and admit they cannot even beat an opponent that makes random moves?


I'm not quite that bad at chess, but bad enough that I wrote a chess game that plays random moves so I could always beat it:

http://www.notesfromandy.com/2016/09/27/chessfidget/

That got boring, so I added very weak AI by connecting to the chess engine that comes preinstalled on macOS (inside Chess.app).


> connecting to the chess engine that comes preinstalled on macOS

Can you elaborate on this please? The presentation at your link doesn't say.


One of the preinstalled apps on macOS is /Applications/Chess.app. Turns out it uses an open-source chess engine called sjeng, the binary for which is inside the app bundle. Chess.app is itself open source, so I tweaked Apple's code for my own purposes. The code launches an sjeng process, opens a pipe to it, and sends commands over the pipe.

There's a couple of sentences about this in the _README file here (as well as my tweaked source files):

https://github.com/aglee/ChessFidget/tree/master/ChessFidget...

Fun fact: you can play chess in a terminal window by running /Applications/Chess.app/Contents/Resources/sjeng.ChessEngine. You can enter the `help` command for instructions. Note this will create four files with .lrn extensions in your home directory. I forget how to control this behavior -- you can Google "sjeng" for details.


Thanks for the explanation!


Huh. This is pretty simple stuff. I'm surprised there is such a market for posts like this.



