I posted in the earlier thread because this one wasn't up yet[1].

Some quick observations:

1. AlphaGo has apparently improved substantially since October. The idea that it could go from mid-level professional to world class in a matter of months is kind of shocking. Once you find an approach that works, progress can be fairly rapid.

2. I don't play Go, so it's perhaps unsurprising that I didn't really appreciate the intricacies of the match; being familiar with deep reinforcement learning didn't help much either. You can write a program that will crush humans at chess with tree search + position evaluation in a weekend (a minimal sketch follows these observations), and maybe build some intuition for how your agent "thinks" from that, plus playing a few games against it. Can you get the same level of insight into how AlphaGo makes its decisions? Even evaluating the forward prop of the value network for a single move would take a substantial amount of time by hand.

3. These sorts of results are amazing, but expect more of the same, more often, over the coming years. More people are getting into machine learning, better algorithms are being developed, and now that "deep learning research" constitutes a market segment for GPU manufacturers, the complexity of the networks we can implement and the datasets we can tackle will expand significantly.

4. It's still early in the series, but I imagine it's an amazing feeling for David Silver of DeepMind. I read Hamid Maei's 2009 thesis a while back, and some of the results presented there used Silver's implementation of the algorithms for Go[2]. Seven years between trying things out to see how well they work and beating one of the best human Go players. Surreal stuff.
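
To make the chess comparison in point 2 concrete, here's the flavor of "weekend" program I mean: alpha-beta tree search over a bare material count. A minimal sketch, leaning on the python-chess library for move generation; the piece values, mate score, and search depth are arbitrary illustrative choices, not a serious engine.

    import chess  # pip install python-chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def evaluate(board):
        # Material balance from the side-to-move's perspective.
        score = sum(PIECE_VALUES[p.piece_type] * (1 if p.color == chess.WHITE else -1)
                    for p in board.piece_map().values())
        return score if board.turn == chess.WHITE else -score

    def negamax(board, depth, alpha, beta):
        if board.is_checkmate():
            return -1000.0        # the side to move has been mated
        if board.is_game_over():
            return 0.0            # stalemate or other draw
        if depth == 0:
            return evaluate(board)
        best = -float("inf")
        for move in board.legal_moves:
            board.push(move)
            best = max(best, -negamax(board, depth - 1, -beta, -alpha))
            board.pop()
            alpha = max(alpha, best)
            if alpha >= beta:
                break             # prune: the opponent won't allow this line
        return best

    def best_move(board, depth=3):
        def score(move):
            board.push(move)
            s = -negamax(board, depth - 1, -float("inf"), float("inf"))
            board.pop()
            return s
        return max(board.legal_moves, key=score)

    # With a material-only evaluation every opening move scores the same,
    # so this just returns the first legal move from the start position.
    print(best_move(chess.Board(), depth=3))

The point being: the whole decision procedure fits on one screen and you can trace it by hand, which is exactly what you can't do with AlphaGo's value network.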

---

1. https://news.ycombinator.com/reply?id=11251526&goto=item%3Fi...

2. https://webdocs.cs.ualberta.ca/~sutton/papers/maei-thesis-20... (pages 49-51 or so)

3. Since I'm linking papers, why not peruse the one in Nature that describes AlphaGo? http://www.nature.com/nature/journal/v529/n7587/full/nature1...

Regarding 2, the point is that a valid move in Go is to place a stone on any empty intersection of the 19x19 grid each turn, so the number of valid moves is not at all comparable with chess.

For more information, see: https://en.wikipedia.org/wiki/Go_and_mathematics

Related: https://en.wikipedia.org/wiki/Shannon_number

Per those two links, the game-tree complexity of chess is estimated at about 10^120, while for Go it is around 10^700.

Not really in the same ballpark.
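
For a sense of where those exponents come from: game-tree complexity is roughly b^d, where b is the average branching factor and d the game length in plies. A back-of-the-envelope sketch (the branching factors and game lengths below are commonly quoted rough averages, not exact values):

    import math

    def tree_exponent(branching, plies):
        # Game-tree complexity ~ branching**plies; return x such that it is ~10**x.
        return plies * math.log10(branching)

    # Chess: ~35 legal moves per position, ~80 plies per game.
    print("chess: 10^%.0f" % tree_exponent(35, 80))    # ~10^124, near the Shannon number
    # Go: ~250 legal moves per position; game-length assumptions vary widely,
    # which is why quoted estimates range from ~10^360 up to ~10^700.
    print("go:    10^%.0f" % tree_exponent(250, 150))  # ~10^360
    print("go:    10^%.0f" % tree_exponent(250, 290))  # ~10^695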


Since the European match went 5-0, how do we know the bot wasn't just as good months ago?
