Arimaa is a fascinating game; you can learn it quickly and immediately play against the best computer players and expect to win. The branching factor is extremely high (~17,000 legal moves vs. Chess's ~30), but each move consists of up to four steps in which the current player can move several different pieces one after another, so I've always thought a modified minimax-type algorithm might be able to 'deconstruct' each move into several nodes on a graph.
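To make the step-decomposition idea concrete, here's a minimal sketch: branch over individual steps rather than whole turns, and only flip the side to move once the step budget is spent. The `Game` interface and the trivial toy game below are my own illustrative stand-ins, not real Arimaa rules.

```python
def step_minimax(state, steps_left, depth, maximizing, game):
    """Minimax where each turn is decomposed into up to STEPS_PER_TURN step-nodes."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)
    if steps_left == 0:
        # Turn complete: hand over to the opponent with a fresh step budget.
        return step_minimax(state, game.STEPS_PER_TURN, depth - 1,
                            not maximizing, game)
    best = float('-inf') if maximizing else float('inf')
    for nxt in game.step_successors(state, maximizing):
        val = step_minimax(nxt, steps_left - 1, depth, maximizing, game)
        best = max(best, val) if maximizing else min(best, val)
    return best

class ToyGame:
    """Stand-in for Arimaa: each 'step' just nudges a score up or down."""
    STEPS_PER_TURN = 4
    def is_terminal(self, s): return False
    def evaluate(self, s): return s
    def step_successors(self, s, maximizing): return [s + 1, s - 1]
```

The point is that each tree node now has only a handful of children (one per legal step) instead of ~17,000 (one per legal four-step turn), at the cost of a deeper tree.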
Whilst extremely interesting, the amount of research into Arimaa pales in comparison to research into Go. Go has a branching factor of ~300, far above Chess but well below Arimaa. It is even easier to learn, yet harder for humans to develop an intuition for how strong any position is. It is starting to succumb to Monte Carlo Tree Search [1], with games played on the smaller 9x9 (vs. the standard 19x19) board.
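For reference, the UCT flavour of MCTS [1] fits in a few dozen lines: select a promising leaf via UCB1, expand it, play a random rollout, and back the result up the tree. The toy below plays Nim (take 1 or 2 stones, taking the last stone wins) rather than Go; all names are illustrative.

```python
import math, random

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children, self.wins, self.visits = [], 0, 0
        self.untried = [m for m in (1, 2) if m <= stones]

def rollout(stones):
    """Random playout; True if the player to move from `stones` wins."""
    mover_wins = False  # with 0 stones the player to move has already lost
    while stones > 0:
        stones -= random.choice([1, 2] if stones >= 2 else [1])
        mover_wins = not mover_wins
    return mover_wins

def mcts(stones, iterations=2000):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # Selection: descend via UCB1 while fully expanded.
        while not node.untried and node.children:
            node = max(node.children, key=lambda c: c.wins / c.visits +
                       math.sqrt(2 * math.log(node.visits) / c.visits))
        # Expansion: try one unexplored move.
        if node.untried:
            m = node.untried.pop()
            node.children.append(Node(node.stones - m, node, m))
            node = node.children[-1]
        # Simulation + backpropagation, alternating perspective per ply.
        win = not rollout(node.stones)  # from the mover-into-node's view
        while node.parent is not None:
            node.visits += 1
            node.wins += win
            win = not win
            node = node.parent
        node.visits += 1  # count the root visit for UCB1's log term
    return max(root.children, key=lambda c: c.visits).move
```

Note that nothing here evaluates a position beyond random playouts, which is both MCTS's appeal and, per the criticism below, its 'brute force' character.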
However, from my perspective, whilst MCTS is extremely interesting and has a wide array of applications, I'd love to see approaches to these problems that aren't based around an optimised 'brute force' algorithm.
When Deep Blue beat Kasparov, Douglas Hofstadter noted “It was a watershed event, but it doesn’t have to do with computers becoming intelligent”, adding “you can bypass deep thinking in playing chess, the way you can fly without flapping your wings” [2]. I somewhat feel like this criticism could be applied to MCTS and Go, and it'll be interesting to see whether the first algorithms that conquer Arimaa come from a different perspective or not.
> you can learn it quickly and immediately play against the best computer players and expect to win
That is not true at all. The best available bots on the Arimaa server are rated above 2000 Elo, which is far higher than a beginner can expect to be rated.
[1] http://en.wikipedia.org/wiki/Monte_Carlo_method#Artificial_i... [2] http://www-rci.rutgers.edu/~cfs/472_html/Intro/NYT_Intro/Che...