What's very interesting is that the Komodo developers have implemented a Monte Carlo Tree Search version of their engine without neural nets for evaluation / move selection. This brand-new engine can actually compete at the top level (still much worse than Stockfish and slightly worse than Lc0) [1] [2].
The exact implementation details are probably kept secret, but the idea is to run a few plies of minimax / alpha-beta instead of completely random play in the playout phase of MCTS.
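To make the idea concrete, here is a minimal sketch (not Komodo's actual code, whose internals aren't public) of an MCTS loop where the playout step is replaced by a shallow fixed-depth alpha-beta search. The game interface (legal_moves, play, is_terminal, evaluate) is hypothetical, and evaluate is assumed to score the position from the side to move's point of view:

    import math
    import random

    def alpha_beta(state, depth, alpha=-math.inf, beta=math.inf):
        # Shallow negamax-style alpha-beta used in place of a random playout.
        # Returns a score from the point of view of the side to move.
        if depth == 0 or state.is_terminal():
            return state.evaluate()
        best = -math.inf
        for move in state.legal_moves():
            score = -alpha_beta(state.play(move), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:  # beta cutoff
                break
        return best

    class Node:
        def __init__(self, state, move=None, parent=None):
            self.state, self.move, self.parent = state, move, parent
            self.children, self.visits, self.value = [], 0, 0.0
            self.untried = list(state.legal_moves())

    def uct_select(node, c=1.4):
        # Best-first selection: maximize the standard UCT score.
        return max(node.children,
                   key=lambda ch: ch.value / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))

    def mcts(root_state, iterations=1000, playout_depth=4):
        root = Node(root_state)
        for _ in range(iterations):
            node = root
            # 1. Selection: walk down while fully expanded and non-terminal.
            while not node.untried and node.children:
                node = uct_select(node)
            # 2. Expansion: add one untried child.
            if node.untried:
                move = node.untried.pop(random.randrange(len(node.untried)))
                child = Node(node.state.play(move), move=move, parent=node)
                node.children.append(child)
                node = child
            # 3. "Playout": shallow alpha-beta instead of random moves.
            #    Negated so the score is from the parent mover's point of view.
            score = -alpha_beta(node.state, playout_depth)
            # 4. Backpropagation: flip the sign at each level (zero-sum game).
            while node is not None:
                node.visits += 1
                node.value += score
                score = -score
                node = node.parent
        # Play the most-visited move at the root.
        return max(root.children, key=lambda ch: ch.visits).move

The fixed playout_depth of 4 is just a placeholder; a real engine would presumably tune how deep and how selective these mini-searches are.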
This makes me think that the contribution of AlphaZero is not necessarily the neural nets, but rather MCTS as a successful method for searching the game tree efficiently.
You missed the point, then. Alpha-beta pruning requires domain knowledge of the game (a hand-crafted evaluation function); a neural-network evaluation doesn't. The advantage is that it's a general-purpose technique.
Yes, that's the main contribution of the experiment / paper. But prior to AlphaZero the chess community did not even consider investing in MCTS engines -- alpha-beta pruning was thought to be far superior. I'm thinking that we might see classical engines exploring this concept more, and maybe it's even a natural step to go from alpha-beta pruning + iterative deepening to 'best-first' search with MCTS.
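For contrast with the best-first loop sketched above, the outer loop of a classical alpha-beta engine is depth-first and exhaustive per depth. Again this is just a sketch over the same hypothetical interface, reusing alpha_beta from the previous snippet:

    def iterative_deepening(state, max_depth):
        # Classical driver: re-search the whole move list one ply deeper on
        # each pass, instead of growing a tree node-by-node as MCTS does.
        best_move = None
        for depth in range(1, max_depth + 1):
            best_score = -math.inf
            for move in state.legal_moves():
                score = -alpha_beta(state.play(move), depth - 1)
                if score > best_score:
                    best_score, best_move = score, move
        return best_move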
[1] http://tcec.chessdom.com/ [2] http://www.chessdom.com/komodo-mcts-monte-carlo-tree-search-...