Arimaa – Intuitively simple, intellectually challenging (arimaa.com)
77 points by jonbaer on Aug 23, 2013 | 21 comments



The ultimate intuitively simple and intellectually challenging game is Go. Go's rules are far simpler than Arimaa's, and it's just as challenging for computers.

http://en.wikipedia.org/wiki/Go_(game)

http://en.wikipedia.org/wiki/Go_and_mathematics

http://en.wikipedia.org/wiki/Computer_Go

losethos mentioned this too, although he is hellbanned. His point was that Go's advantage is that it engages the image-recognition abilities of our brains.


The scoring rules for Go are much more complex. I've seen plenty of cases where novices have struggled with end-game scoring, especially under Japanese rules.


So use Tromp-Taylor rules. "A player's score is the number of points of her color, plus the number of empty points that reach only her color."
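That one sentence is essentially implementable as-is. A toy sketch (my own function name; the board is a list of strings, and there's no dead-stone handling, so you'd play everything out first):

```python
def tromp_taylor_score(board):
    """Tromp-Taylor area count. board: list of equal-length strings of
    'B', 'W', '.'. Returns (black_score, white_score)."""
    n = len(board)  # assumes a square board
    score = {"B": 0, "W": 0}
    seen = set()
    for y in range(n):
        for x in range(n):
            c = board[y][x]
            if c in score:
                score[c] += 1          # a stone counts for its own colour
                continue
            if (y, x) in seen:
                continue
            # flood-fill this empty region, recording which colours it reaches
            region, reaches, stack = [], set(), [(y, x)]
            seen.add((y, x))
            while stack:
                py, px = stack.pop()
                region.append((py, px))
                for qy, qx in ((py - 1, px), (py + 1, px),
                               (py, px - 1), (py, px + 1)):
                    if 0 <= qy < n and 0 <= qx < n:
                        q = board[qy][qx]
                        if q == "." and (qy, qx) not in seen:
                            seen.add((qy, qx))
                            stack.append((qy, qx))
                        elif q in score:
                            reaches.add(q)
            if len(reaches) == 1:      # empty points reaching only one colour
                score[reaches.pop()] += len(region)
    return score["B"], score["W"]
```

Stones count for themselves, and an empty region counts only when every stone it touches is one colour; empty regions touching both colours (or an empty board) score for nobody, exactly as the rule states.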

Sorry Japan, but Japanese Go rules are a mess and need to be retired.


That's fine unless you want to compete in a tournament that uses Japanese rules, or Chinese rules, or AGA rules, etc.

Simplified rule sets are fine (as is playing on a 9x9 board), but they don't represent Go as it's generally played.


Chinese and AGA scoring rules are essentially the same as Tromp-Taylor, just not expressed quite as concisely.

AGA rules actually have a neat hack which makes it so using either area or territory scoring gives you the same result (by making you give an opponent a stone as a capture every time you pass).

The reason Japanese counting is so complex is that there are situations where it would hurt you to play it out in order to determine life or death, as you would need to fill in your own territory, so they've developed a whole bunch of special cases to determine life or death for disputed groups without playing it out. Area scoring eliminates this problem, as does the AGA rule of giving your opponent an extra capture every time you pass.

Really, if people are having trouble with scoring, just teach them area scoring and tell them to play it out until they can score the game unambiguously.


I'm a very novice Go player, but I've only ever played with Japanese rules. Could you (and ig1) explain what's so wrong with them? I don't remember ever finding the score counting complex or unintuitive.


Under Japanese (and Korean) rules, situations occur in the endgame in which it may be difficult to determine whether a particular group is alive or dead, but the player who would need to play it out to determine this for certain may not want to, since doing so would fill in their own territory and reduce their score.

Japanese rules have a variety of special cases to deal with this problem, but most beginners don't know them, and many beginners may not know exactly when to stop (as they are unsure whether a group is safe or not), and so may wind up reducing their score in the endgame just by trying to make sure a group is safe.

Under area scoring rules (Chinese, Tromp-Taylor, AGA, New Zealand, Ing, etc), you count the sum of your territory and your stones on the board, avoiding this problem. AGA has a hack that makes both scoring methods work the same; whenever you pass, you give your opponent an extra prisoner.
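The pass-stone hack is easy to sanity-check with toy bookkeeping. A minimal sketch (simplified model with my own names, assuming both players end up with the same number of turns, which the AGA requirement that White passes last arranges):

```python
def score_diffs(moves_each, terr, passes, captured):
    """Black-minus-White margin under area and under AGA territory counting.

    moves_each : turns taken by each player (equal by assumption)
    terr[c]    : empty points surrounded by colour c at the end
    passes[c]  : times colour c passed (each pass hands the opponent a stone)
    captured[c]: stones of colour c captured during the game
    """
    # stones left on the board: every non-pass turn placed one, minus captures
    stones = {c: moves_each - passes[c] - captured[c] for c in "BW"}
    area_diff = (stones["B"] + terr["B"]) - (stones["W"] + terr["W"])
    # prisoners held = opponent stones captured + opponent pass stones
    prisoners = {"B": captured["W"] + passes["W"],
                 "W": captured["B"] + passes["B"]}
    terr_diff = (terr["B"] + prisoners["B"]) - (terr["W"] + prisoners["W"])
    return area_diff, terr_diff

# e.g. 50 turns each, assorted territory, passes and captures:
a, t = score_diffs(50, {"B": 12, "W": 9}, {"B": 1, "W": 2}, {"B": 3, "W": 4})
assert a == t   # both counting methods give the same margin
```

The equal-turn terms cancel algebraically, which is exactly what the pass stone is for: without it, territory counting would differ from area counting by the pass imbalance.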

Here's an overview of the different rulesets: http://www.britgo.org/rules/compare.html


Territory scoring plus dead and captured stones is a bad idea. Contrast: to count by Tromp-Taylor rules, play out and explicitly kill (or agree to remove) all dead stones, fill every empty intersection that only reaches black with black, fill every empty intersection that only reaches white with white, then simply count up black and white stones on the board.

Simple ko is a bad idea. Positional superko is much better.
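Positional superko is also trivial to state in code: keep a set of every whole-board position seen so far. A sketch (real engines key this on a Zobrist hash rather than the raw board, and the board strings here are made-up placeholders):

```python
seen_positions = set()

def position_is_legal(board):
    """board: string snapshot of the whole board after the candidate move.
    Under positional superko, recreating ANY earlier position is illegal."""
    if board in seen_positions:
        return False
    seen_positions.add(board)
    return True

assert position_is_legal("B.W..")       # first occurrence: fine
assert position_is_legal(".BW..")       # new position: fine
assert not position_is_legal("B.W..")   # immediate recapture cycle: forbidden
```

One rule covers simple ko, triple ko, and every longer cycle, with no history-of-the-last-move special case.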

Seki complicates things needlessly.

Relying on historical rulings, rather than defining unambiguous rules, is a bad idea.


This is why I gave up trying to play Go. I could never figure out why I lost or how the scoring worked.


The worst thing about Go is that the scoring rules that are taught to everyone (the Japanese ones) don't make any sense unless you already understand the game well, so there's a terrible bootstrap problem.

Everyone should be taught Go with Chinese or Tromp-Taylor scoring.

(Note that all these rulesets give basically the same results for almost all games; it's just some edge cases that end up differing.)


Out of curiosity, did you try to learn with a tutor/mentor, or by yourself?


I'd disagree with this: whilst the rules of Go are indeed simple to learn, developing a heuristic for analysing a position on the board is not. With Go you'd not expect a new player to be able to beat a computer player right away, whereas with Arimaa that is entirely possible.

We aren't there yet, but Go is starting to give way to Monte-Carlo Tree Search approaches; I don't believe that Arimaa playing computers are at the same level yet.

However, Go is certainly 'cleaner' and was not artificially constructed to intentionally be difficult for computers.


There is another game that is even simpler than Arimaa and is also played on an 8x8 board, the Game of the Amazons: http://en.wikipedia.org/wiki/Game_of_the_Amazons


If you'd like to get a quick grasp of the rules, here's a site I created for just that: http://personal.inet.fi/koti/egaga/arimaa-begin/tutorial.htm...

You can play against (a stupid) bot. Just click once on the golden piece to select it, and then where to move.

I've also developed an Arimaa game viewer, where you can analyse your games. The code is a mess but it might be useful for some. http://personal.inet.fi/koti/egaga/arimaa-viewer/arimaa.html

I had a more ambitious goal but was distracted by other things. You can read more about it here: http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=siteIssues... One of the few things I don't like about Arimaa is that it is patented; that's probably the biggest reason I'm unlikely to commit any more time to developing for it.


> On average there are over 17,000 possible moves compared to about 30 for chess; this significantly limits how deep computers can think, but does not seem to affect humans.

Interesting assertion that the branching factor doesn't seem to affect humans. I wonder why they think a large branching factor doesn't pose a problem for humans; is there any evidence to support this?
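For what it's worth, the back-of-the-envelope arithmetic on why the branching factor cripples brute-force search (30 and 17,000 are the article's figures) is straightforward:

```python
def nodes(branching, depth):
    """Rough size of a full-width game tree: branching ** depth."""
    return branching ** depth

# ~30 legal moves per position in chess vs ~17,000 per turn in Arimaa.
print(nodes(30, 4))       # 810,000 positions after four chess moves
print(nodes(17_000, 2))   # 289,000,000 positions after just two Arimaa moves
```

Two Arimaa moves already generate a few hundred times more positions than four chess moves, so a fixed node budget buys a computer far less lookahead, while humans evidently don't search this way at all.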


On an only tangentially related note, read Zen and the Art of Motorcycle Maintenance. It explores why human beings are able to make quality decisions in the face of massive branching factors, such as making a chess move or (more relevant to the focus of the book) developing a scientific hypothesis. It's also just a great read.


The claim is that humans play better than computers, and computers struggle in the face of the high branching factor, so therefore humans must not be affected by it.

The theory is that humans use "something else" to "intuitively" understand the strength of a position and what move to make. The same argument was made for chess; there it has become somewhat irrelevant, because the low branching factor means computers can see so much further ahead that any advantage humans have is overwhelmed.

As far as I know there isn't a generalised theory of what human intuition is doing. People have attempted to build specific models for chess that encoded "chess theory" (which is kinda-sorta formalised intuition), but those models have proven inferior to deep search algorithms.


The Wikipedia article might be a better introduction:

http://en.wikipedia.org/wiki/Arimaa


Arimaa is a fascinating game; you can learn it quickly and immediately play against the best computer players and expect to win. The branching factor is extremely high (~17,000 vs Chess' 30), but each move consists of several steps in which the current player moves several different pieces one after another, so I've always thought a modified minimax-type algorithm might be able to 'deconstruct' each move into several nodes on a graph.
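That decomposition idea can be illustrated with a toy model (nothing Arimaa-specific: a "move" here is just two steps that each add 1, 2 or 3 to a counter):

```python
from itertools import product

STEP_CHOICES = (1, 2, 3)
STEPS_PER_MOVE = 2

def best_whole_move(state):
    # branch over complete moves at once: 3**2 = 9 children per node
    return max(state + sum(m)
               for m in product(STEP_CHOICES, repeat=STEPS_PER_MOVE))

def best_stepwise(state, steps_left=STEPS_PER_MOVE):
    # branch one step at a time: only 3 children per node, over 2 plies
    if steps_left == 0:
        return state
    return max(best_stepwise(state + s, steps_left - 1)
               for s in STEP_CHOICES)

assert best_whole_move(0) == best_stepwise(0)
```

The step-by-step tree reaches exactly the same set of resulting positions while never branching more than 3 ways at once, which is the appeal for Arimaa: ~20-30 legal steps per ply over four plies instead of ~17,000 whole-move children. (Transpositions between step orders still have to be deduplicated for this to pay off in practice.)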

Whilst extremely interesting, it seems the amount of research into Arimaa pales in comparison to research into Go. Go has a branching factor of ~300, so it sits far above Chess but well below Arimaa. It is even easier to learn, but harder for humans to develop an intuitive sense of how strong any position is. It is starting to succumb to Monte Carlo Tree Search [1], with games played on a smaller 9x9 (vs the standard 19x19) board.
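For anyone curious what MCTS actually does, here's a minimal UCT loop on a toy game, one-pile Nim (take 1-3 stones, taking the last stone wins), rather than Go; the selection/expansion/simulation/backpropagation skeleton is the same one the Go programs use, with all the Go-specific machinery stripped out:

```python
import math, random

TAKE = (1, 2, 3)

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile, self.parent, self.move = pile, parent, move
        self.children, self.wins, self.visits = [], 0, 0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in TAKE if m <= self.pile and m not in tried]

def rollout(pile):
    """Random playout; True if the player to move at `pile` wins."""
    mover_wins = True
    while True:
        pile -= random.choice([m for m in TAKE if m <= pile])
        if pile == 0:
            return mover_wins
        mover_wins = not mover_wins

def uct_best_move(pile, iters=4000):
    root = Node(pile)
    for _ in range(iters):
        node = root
        # 1. selection: descend through fully expanded nodes by UCB1
        while not node.untried_moves() and node.children:
            node = max(node.children,
                       key=lambda c: c.wins / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
        # 2. expansion: add one untried child
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.pile - m, node, m)
            node.children.append(child)
            node = child
        # 3. simulation (pile 0 means the player who just moved won)
        mover_wins = node.pile > 0 and rollout(node.pile)
        # 4. backpropagation: each node stores wins for the player
        #    who moved INTO it, flipping perspective at every level
        while node is not None:
            node.visits += 1
            node.wins += 0 if mover_wins else 1
            mover_wins = not mover_wins
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

random.seed(0)
print(uct_best_move(5))  # with enough iterations this settles on taking 1
```

From a pile of 5 the winning move is to take 1 (leaving a multiple of 4), and the search finds it with nothing but random playouts: no Nim theory is encoded anywhere, which is exactly why the approach transfers to Go where no one can write a good evaluation function by hand.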

However, from my perspective, whilst MCTS is extremely interesting and has a wide array of applications, I'd love to see approaches to these problems that aren't based around an optimised 'brute force' algorithm.

When Deep Blue beat Kasparov, Douglas Hofstadter noted “It was a watershed event, but it doesn’t have to do with computers becoming intelligent”, adding “you can bypass deep thinking in playing chess, the way you can fly without flapping your wings” [2]. I somewhat feel like this criticism could be applied to MCTS and Go, and it'll be interesting to see whether the first algorithms that conquer Arimaa come from a different perspective or not.

[1] http://en.wikipedia.org/wiki/Monte_Carlo_method#Artificial_i... [2] http://www-rci.rutgers.edu/~cfs/472_html/Intro/NYT_Intro/Che...


> you can learn it quickly and immediately play against the best computer players and expect to win

That is not true at all. The best available bots on the Arimaa server are rated above 2000 Elo, which is far higher than beginners can expect to be rated.


I've played a few Arimaa rounds, it's very fun. For something even simpler and more challenging, try 2v2 speed chess where you give captured pieces to your teammate.

[0] http://en.wikipedia.org/wiki/Bughouse_chess

(It's one of the best games I've ever played and only lasts 90 seconds to play!)



