Hacker News

I agree, Go is a challenging game for computers but even more so for humans. However, for computers, Go is not more challenging than translation, speech to text, driving, or understanding a comic book. The progression of superhuman play in famous board games has gone checkers → chess → Go. The more a game relies on memory, tactics, and evaluation speed, the quicker it falls. Among commonly played perfect-information games, Go is the most difficult, both because of its large branching factor and because it has no simple evaluation heuristic. In each of these games, before there was superhuman play, there was super-amateur-human play for decades (including in Go).

Translation still has many failure cases. Speech to text cannot yet handle intonation and auto-driving cannot yet handle driving in places like India. And reading then summarizing a page of a comic book while walking across a room is currently impossible.




You ignored my post entirely. The fact that it took the most elite ML lab in the world to engineer this solution, using proprietary hardware never seen before that's orders of magnitude faster at evaluation than what's available to the rest of us, is a testament to how hard it is to beat Go. For a long time it was believed to be impossible.

I am not comparing translation or machine vision to AlphaGo; I am merely pointing out that Go comes with a broad set of challenges that you aren't even aware of and is a lot harder than you think.

AlphaGo can beat the next best Go-playing bot purely using its neural net ensemble, without using MCTS, for example. That's a pleasantly surprising result never seen before; that it can beat another bot without doing a single tree search during play and evaluation is also a testament to how impressive it is.


> You ignored my post entirely.

I did not. You said Go is in many respects much more challenging than machine translation, speech to text, and auto-driving. I merely pointed out that this is wrong, because a superhuman Go player exists while superhuman machine translation, speech to text, and auto-driving do not. Go is a perfect-information game with no shallow traps. Perfect information means that, unlike in poker, information sets are not cross-cutting, so algorithms can exploit the fact that backward induction is straightforward.

No shallow search traps and perfect information make things a lot easier from a computational perspective. Driving at a superhuman level would require a sophisticated forward model of the physics, before even considering predicting other drivers. Speech to text and fluent translation without brittle edge cases require hierarchical predictive models that capture long-term correlations and higher-order concepts. I'm not disputing that Go is hard, but the hurdles, a high branching factor and no evaluation heuristic, were the core difficulties. The DeepMind team's genius was training via reinforcement in a way that broke the correlations which get in the way of learning, and integrating rollouts with the neural nets (splitting evaluation into value and policy networks as they did). The rollouts and evaluation are what eat up so much electricity.
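A rough sketch of how a policy prior and a value estimate get combined during tree search (the PUCT-style selection rule; names and constants here are illustrative, not DeepMind's actual implementation):

```python
import math

# Illustrative sketch: PUCT-style move selection, which blends a
# value estimate (q) with an exploration bonus weighted by the
# policy network's prior for the move. Constants are made up.

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """Value estimate plus a prior-weighted exploration bonus
    that shrinks as the move gets visited more often."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

# Two candidate moves: one well-explored with a decent value, one
# unexplored but favored by the policy net. Early on, the policy
# prior dominates and pulls the search toward the fresh move.
explored = puct_score(q=0.6, prior=0.1, parent_visits=100, child_visits=50)
fresh    = puct_score(q=0.0, prior=0.6, parent_visits=100, child_visits=0)
print(fresh > explored)  # → True
```

This is the sense in which the policy net prunes the branching factor while the value net (plus rollouts) replaces the missing evaluation heuristic.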

> The fact that it took the most elite ML lab in the world to engineer this solution using proprietary hardware never seen before that's orders of magnitude faster at evaluation than what's available to the rest of us is a testament to how hard it is to beat Go.

AlphaGo can run on a GPU, just not (for now) as efficiently as on a TPU. DeepMind is indeed unmatched in output. But AlphaGo built on the 2006 breakthrough paper on tree-based bandit algorithms, and there was another important 2014 paper on applying conv-nets to Go. DeepMind did amazing work, but it did not come out of nowhere.
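For context, the 2006 line of work treated move selection at each node as a multi-armed bandit. A minimal sketch of the UCB1 rule at its core (the numbers below are illustrative):

```python
import math

# Minimal sketch: the UCB1 rule from the bandit literature, which
# the 2006 tree-search work applied at each node (UCT). Values
# below are illustrative.

def ucb1(mean_reward, total_plays, arm_plays, c=math.sqrt(2)):
    """Average reward plus an optimism bonus that shrinks as an
    arm (move) is sampled more often."""
    return mean_reward + c * math.sqrt(math.log(total_plays) / arm_plays)

# Two moves with the same observed win rate: the less-sampled one
# gets the larger exploration bonus, so the search revisits it.
a = ucb1(0.5, total_plays=100, arm_plays=10)
b = ucb1(0.5, total_plays=100, arm_plays=90)
print(a > b)  # → True
```

AlphaGo's contribution was, loosely, replacing the uniform exploration term here with learned priors and the random-rollout estimate with a learned value function.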

And sure, Go is hard. But from a computational perspective, it is still much easier than running up a hill or climbing a tree. Humans are just not very good at playing combinatorial games, so the ceiling is low.

> I am merely pointing out that it comes with a broad set of challenges that you aren't even aware of and is a lot harder than you think.

That is absolutely untrue. I have a decent understanding of the implementation and a strong understanding of the underlying algorithms.


Ten years ago, when I was learning Go, I could beat the strongest bot within months of learning the rules. Super-amateur human play from Go AI is barely 5 years old, if that.


That doesn't affect my core point: many of the things that humans have commonly associated with intelligence have been the first to fall. In hindsight it makes sense; we mistakenly assumed that there was such a thing as a universal ranking of difficulty, centered on what humans find hard to reason about.

More to your point, my decades remark used a weaker notion of "amateur": for each game, we've had something that could beat most humans for decades. But you're right, that's not a useful distinction.

If we look just at Go, the decades remark is somewhat of a stretch. Go has been especially difficult, requiring more intelligent algorithms to handle branching and state evaluation (and the latter in particular is a function too complex to fit in human consciousness).

But progress has been occurring for years. On 9x9 boards, MCTS bots have been stronger than most humans since about 2007, 10 years ago. For 19x19 it's true; if we count 4 or 5 dan as better than most amateurs, then that's 6 or 7 years.




