
The state space of chess is so huge that even a giant training set would be a _very_ sparse sample of the Stockfish-computed value function.
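
For a rough sense of scale (my numbers, not the paper's): Tromp's estimate puts the count of legal chess positions around 4.8e44, so even a hypothetical training set of ten billion Stockfish-evaluated positions covers a vanishing fraction of the space:

    # Back-of-envelope sketch of the sparsity claim. Both numbers are
    # illustrative assumptions, not figures from the paper.
    legal_positions = 4.8e44   # Tromp's estimate of legal chess positions
    training_samples = 1e10    # hypothetical Stockfish-labelled training set

    coverage = training_samples / legal_positions
    print(f"fraction of state space covered: {coverage:.1e}")  # ~2.1e-35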

So the network still needs to do some impressive generalization in order to "interpolate" between those samples.
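
To make "interpolate" concrete, here's a toy sketch (my own, not the paper's architecture or data): a small MLP is fit to stand-in Stockfish-style value targets on a handful of encoded positions, then queried on positions it never saw, which is exactly where the generalization has to happen.

    import torch
    import torch.nn as nn

    # Hypothetical board encoding: 12 piece planes x 64 squares, flattened.
    N_FEATURES = 12 * 64

    model = nn.Sequential(
        nn.Linear(N_FEATURES, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1),  # scalar value estimate for the position
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in data: random "positions" and random "evaluations". In the real
    # setting each x would encode a board and each y a Stockfish eval.
    x_train = torch.randint(0, 2, (4096, N_FEATURES)).float()
    y_train = torch.randn(4096, 1)

    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        opt.step()

    # The interesting part: asking for values of positions that were never in
    # the (tiny, relative to the state space) training sample.
    x_unseen = torch.randint(0, 2, (8, N_FEATURES)).float()
    print(model(x_unseen).squeeze(-1))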

I think so, anyway (didn't read the paper, but I worked on AlphaZero-like algorithms for a few years).



