
>Hassabis at AAAI indicated DeepMind’s intent to try to train AlphaGo entirely with self-play. This would be more impressive, but until that happens, we may not know how much of AlphaGo’s performance depended on the availability of this dataset, which DeepMind gathered on its own from the KGS servers.
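
For context on the distinction being drawn here: schematically, the supervised stage imitates recorded human games, while pure self-play would bootstrap from games the program plays against itself, reinforcing the winner's moves. A toy sketch of the two regimes (all names here are hypothetical placeholders; this is not DeepMind's actual pipeline, which also involves value networks and tree search):

    # Hypothetical sketch contrasting the two training regimes;
    # none of this is DeepMind's code.
    import random

    class Policy:
        """Toy policy: a table of move preferences per state."""
        def __init__(self):
            self.prefs = {}

        def move(self, state, legal_moves):
            # Pick the most-preferred legal move, randomly if unseen.
            scores = self.prefs.get(state, {})
            return max(legal_moves, key=lambda m: scores.get(m, random.random()))

        def update(self, state, move, weight=1.0):
            self.prefs.setdefault(state, {})
            self.prefs[state][move] = self.prefs[state].get(move, 0.0) + weight

    def train_supervised(policy, kgs_games):
        # Imitation: push the policy toward the recorded human moves.
        for state, human_move in kgs_games:
            policy.update(state, human_move)

    def train_self_play(policy, play_game, num_games):
        # No human data: play against yourself, then reinforce the
        # winner's moves and penalize the loser's.
        for _ in range(num_games):
            moves, winner = play_game(policy, policy)
            for player, state, move in moves:
                policy.update(state, move, weight=1.0 if player == winner else -1.0)

    def play_game(p1, p2):
        # Stand-in for a real go engine: a trivial two-move game
        # whose winner is decided at random.
        moves = [("black", "empty_board", p1.move("empty_board", ["A", "B"])),
                 ("white", "after_black", p2.move("after_black", ["A", "B"]))]
        return moves, random.choice(["black", "white"])

    p = Policy()
    train_supervised(p, [("empty_board", "A")])
    train_self_play(p, play_game, num_games=100)

The open question in the quote is how far the second function alone can get without the first one providing a human prior.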

As an amateur (10k) go player who watched the Fan Hui games, I can say that AlphaGo seems to rely heavily on its training data. All of its moves feel very human, even in situations where strong humans find better, weird-looking moves. This is in contrast to chess AIs (and even other go AIs), which feel distinctly robotic in how they play. Watching a professional (9p) commentary on the games, it seems that Fan Hui lost not because of particularly good moves by AlphaGo, but because of specific mistakes he made (this is not unusual; most professional games come down to losing moves rather than winning moves). In this sense, AlphaGo seems to be playing at a human level, but with fewer mistakes.

This is certainly impressive, but for now AlphaGo seems to be a demonstration of synthesizing and automating expert human knowledge rather than of creating new knowledge.



