Go is the ultimate example of a game that is intuitively simple yet intellectually challenging. Its rules are far simpler than Arimaa's, and it's just as challenging for computers.
The scoring rules for Go are much more complex. I've seen plenty of cases where novices have struggled with end-game scoring, especially under Japanese rules.
Chinese and AGA scoring rules are essentially the same as Tromp-Taylor, just not expressed quite as concisely.
AGA rules actually have a neat hack which makes it so using either area or territory scoring gives you the same result (by making you give an opponent a stone as a capture every time you pass).
The reason Japanese counting is so complex is that there are situations where playing it out to determine life or death would hurt you, because you would need to fill in your own territory. So the rules have accumulated a whole set of special cases for determining the life or death of disputed groups without playing them out. Area scoring eliminates this problem, as does the AGA rule of giving your opponent an extra capture every time you pass.
Really, if people are having trouble with scoring, just teach them area scoring and tell them to play it out until they can score the game unambiguously.
I'm a very novice Go player, and I've only ever played with Japanese rules. Could you (and ig1) explain what's so wrong with them? I don't remember ever finding the score counting complex or unintuitive.
Under Japanese (and Korean) rules, situations can occur in the endgame in which it is difficult to determine whether a particular group is alive or dead, but the player who would need to play it out to determine the answer for certain may not want to do so, as it would mean filling in their own territory and reducing their score.
Japanese rules have a variety of special cases to deal with this problem, but most beginners don't know them. Many beginners also don't know exactly when to stop (since they're unsure whether a group is safe or not), and so may wind up reducing their score in the endgame just by trying to make sure a group is safe.
Under area scoring rules (Chinese, Tromp-Taylor, AGA, New Zealand, Ing, etc), you count the sum of your territory and your stones on the board, avoiding this problem. AGA has a hack that makes both scoring methods work the same; whenever you pass, you give your opponent an extra prisoner.
Territory scoring plus dead and captured stones is a bad idea. Contrast: to count by Tromp-Taylor rules, play out and explicitly kill (or agree to remove) all dead stones, fill every empty intersection that only reaches black with black, fill every empty intersection that only reaches white with white, then simply count up black and white stones on the board.
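That counting procedure is mechanical enough to sketch in a few lines. Here's a minimal illustration in Python, assuming dead stones have already been captured or removed by agreement, and representing the board as a simple coordinate-to-colour dict (all names here are my own, not from any real ruleset implementation):

```python
# Minimal sketch of Tromp-Taylor area counting, assuming dead stones
# are already off the board. `board` maps (row, col) -> 'B', 'W', or
# '.' for empty, and covers every point of a size x size grid.

def neighbors(p, size):
    r, c = p
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < size and 0 <= c + dc < size:
            yield (r + dr, c + dc)

def area_score(board, size):
    """Return (black_points, white_points) under area counting."""
    score = {'B': 0, 'W': 0}
    seen = set()
    for p, color in board.items():
        if color in score:
            score[color] += 1           # a player's own stones count
        elif p not in seen:
            # flood-fill this empty region, recording which colors it reaches
            region, reaches, stack = set(), set(), [p]
            while stack:
                q = stack.pop()
                if q in region:
                    continue
                region.add(q)
                for n in neighbors(q, size):
                    if board[n] == '.':
                        stack.append(n)
                    else:
                        reaches.add(board[n])
            seen |= region
            if reaches == {'B'}:        # empty points reaching only black
                score['B'] += len(region)
            elif reaches == {'W'}:      # ...or only white
                score['W'] += len(region)
            # a region reaching both colors scores for no one
    return score['B'], score['W']
```

An empty region that reaches both colours (as in seki) simply scores for no one, which is exactly the Tromp-Taylor treatment.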
Simple ko is a bad idea. Positional superko is much better.
Seki complicates things needlessly.
Relying on historical rulings, rather than defining unambiguous rules, is a bad idea.
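For what it's worth, positional superko is also trivial to enforce mechanically: remember every whole-board position seen so far and reject any move that recreates one. A minimal sketch (real engines would use Zobrist hashing rather than freezing the whole board, and the class and method names here are made up):

```python
# Positional-superko enforcement: a move is illegal if the position it
# produces (after any captures) has occurred earlier in the game,
# regardless of whose turn it is. Boards are lists of lists of
# 'B' / 'W' / '.' and get frozen into hashable tuples.

def position_key(board):
    return tuple(tuple(row) for row in board)

class SuperkoTracker:
    def __init__(self, initial_board):
        self.history = {position_key(initial_board)}

    def is_legal(self, resulting_board):
        """True unless this move recreates any earlier position."""
        return position_key(resulting_board) not in self.history

    def record(self, resulting_board):
        self.history.add(position_key(resulting_board))
```

Simple ko only forbids recreating the immediately preceding position; positional superko forbids recreating any earlier one, which also rules out longer cycles like triple ko.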
The worst thing about Go is that the scoring rules that are taught to everyone (the Japanese ones) don't make any sense unless you already understand the game well, so there's a terrible bootstrap problem.
Everyone should be taught Go with Chinese or Tromp-Taylor scoring.
(Note that all these rulesets give basically the same results for almost all games; it's just some edge cases that end up differing.)
I'd disagree about this, because whilst the rules of Go are indeed simple to learn, developing a heuristic for analysing a position on the board is not. With Go you'd not expect a new player to be able to beat a computer player right away, whereas with Arimaa that is entirely possible.
We aren't there yet, but Go is starting to give way to Monte Carlo Tree Search approaches; I don't believe Arimaa-playing computers are at the same level.
However, Go is certainly 'cleaner' and was not artificially constructed to intentionally be difficult for computers.
I had a more ambitious goal but was distracted by other things. You can read more about it here: http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=siteIssues...
One of the few things I don't like about Arimaa is that it is patented. That's probably the biggest reason I'm unlikely to commit any time to developing for it.
> On average there are over 17,000 possible moves compared to about 30 for chess; this significantly limits how deep computers can think, but does not seem to affect humans.
Interesting assertion that the branching factor doesn't seem to affect humans. I wonder why they think a large branching factor doesn't pose a problem for humans. Is there any evidence to support this?
On an only tangentially related note, read Zen and the Art of Motorcycle Maintenance. It explores why human beings are able to make quality decisions in the face of massive branching factors, such as making a chess move or (more relevant to the focus of the book) developing a scientific hypothesis. It's also just a great read.
The claim is that humans play better than computers, and computers struggle in the face of the high branching factor, therefore humans aren't affected by it.
The theory is that humans use "something else" to "intuitively" understand the strength of a position and what move to make. The same argument was made for chess. In chess it has become somewhat irrelevant because the low branching factor means computers can see so far ahead that any advantage humans have becomes useless.
As far as I know there isn't a generalised theory of what human intuition is doing. People have attempted to build specific models for chess that encoded "chess theory" (which is kinda-sorta formalised intuition), but those models have proven inferior to deep search algorithms.
Arimaa is a fascinating game; you can learn it quickly and immediately play against the best computer players and expect to win. The branching factor is extremely high (~17,000 vs Chess' 30), but each move consists of several phases where the current player can move several different pieces one after another, so I've always thought a modified minimax-type algorithm might be able to 'deconstruct' each move into several nodes on a graph.
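Some back-of-the-envelope arithmetic makes the 'deconstruction' idea concrete (illustrative numbers only, not exact Arimaa statistics):

```python
# If a full four-step Arimaa move has ~17,000 combinations, searching
# one step at a time brings the per-node branching factor down to
# roughly the fourth root of that. The tree still contains the same set
# of full moves, but move ordering, pruning, and transposition tables
# get far more traction on shallow, narrow nodes.

full_move_branching = 17_000
steps_per_move = 4

per_step_branching = full_move_branching ** (1 / steps_per_move)
print(f"approx. branching per step: {per_step_branching:.1f}")  # ~11.4
```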
Whilst extremely interesting, it seems the amount of research into Arimaa pales in comparison against research into Go. Go has a branching factor of ~300, so it sits far above Chess but well below Arimaa. It is even easier to learn but harder for humans to develop an intuitive understanding of how strong any position is. It is starting to succumb to Monte Carlo Tree Search [1], with games played on a smaller 9x9 (vs the standard 19x19) board.
However, from my perspective, whilst MCTS is extremely interesting and has a wide array of applications, I'd love to see approaches to these problems that aren't based on an optimised 'brute force' algorithm.
When Deep Blue beat Kasparov, Douglas Hofstadter noted “It was a watershed event, but it doesn’t have to do with computers becoming intelligent”, adding “you can bypass deep thinking in playing chess, the way you can fly without flapping your wings” [2]. I somewhat feel like this criticism could be applied to MCTS and Go, and it'll be interesting to see whether the first algorithms that conquer Arimaa come from a different perspective or not.
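For anyone unfamiliar with MCTS, the loop is simple enough to sketch. Below is a minimal, illustrative UCT-style implementation, demonstrated on toy Nim (take 1-3 stones; whoever takes the last stone wins) rather than Go. All names are my own and this is a sketch under simplifying assumptions, not a serious engine:

```python
# Minimal UCT-flavoured Monte Carlo Tree Search on Nim. The four phases
# are the usual: selection, expansion, simulation, backpropagation.
import math
import random

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

class Node:
    def __init__(self, stones, parent=None):
        self.stones = stones
        self.parent = parent
        self.children = {}   # move -> Node
        self.visits = 0
        self.wins = 0.0      # wins for the player who moved INTO this node

    def ucb1(self, c=1.4):
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(stones, iterations=2000, rng=random):
    root = Node(stones)
    for _ in range(iterations):
        node = root
        # 1. selection: descend while non-terminal and fully expanded
        while node.stones > 0 and len(node.children) == len(legal_moves(node.stones)):
            node = max(node.children.values(), key=Node.ucb1)
        # 2. expansion: add one untried child, if the node isn't terminal
        if node.stones > 0:
            m = rng.choice([m for m in legal_moves(node.stones)
                            if m not in node.children])
            node.children[m] = Node(node.stones - m, parent=node)
            node = node.children[m]
        # 3. simulation: random playout; taking the last stone wins
        stones_left = node.stones
        mover_won = True     # "mover" = player who moved into `node`
        while stones_left > 0:
            stones_left -= rng.choice(legal_moves(stones_left))
            mover_won = not mover_won
        # 4. backpropagation: alternate the credited player up the tree
        while node is not None:
            node.visits += 1
            node.wins += 1.0 if mover_won else 0.0
            mover_won = not mover_won
            node = node.parent
    # play the most-visited move from the root
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

The same four phases drive the Go programs discussed above; the hard engineering is in the playout policy and in scaling the tree, not in the skeleton itself.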
> you can learn it quickly and immediately play against the best computer players and expect to win
That is not true at all. The best available bots on the Arimaa server are rated above 2000 Elo, which is far higher than a beginner can expect to be rated.
I've played a few Arimaa rounds, it's very fun. For even more simple + challenging, try 2v2 speed chess where you give captured pieces to your teammate.
http://en.wikipedia.org/wiki/Go_(game)
http://en.wikipedia.org/wiki/Go_and_mathematics
http://en.wikipedia.org/wiki/Computer_Go
losethos mentioned this too, although he is hellbanned. His point was that Go's advantage is it engages the image-recognition abilities of our brains.