
This is a good blog post, and it articulates well many of the issues I have with calling current work on AI subsets "AI". From my point of view, the results of current work on AI subsets are impressive, but they hit self-proclaimed or poorly defined benchmarks and then declare AI status. Creating an advanced Go machine was very cool, but as Lee Sedol played more, he said he began to understand how AlphaGo "thought". The reddit thread here [1] has some interesting insight into why pulling back the curtain on current AI claims is appropriate. A lot of the time, "it's just calculations" is an apt description, since we don't really have a machine considering what it's given; it's driving toward a singular goal in the best way it can. In the case of AlphaGo, it disregarded Lee Sedol's move not because it registered the unusualness of it, but because it simply wrote the move off as sub-optimal. When you see behavior like this, where the systems can be algorithmically defeated, it's very difficult not to be dismissive of the claims of AI.
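To make the "it's just calculations" point concrete, here's a toy sketch (all move names and win-probability values are hypothetical, not AlphaGo's actual numbers): a purely value-driven agent picks whatever move its evaluation scores highest, and "unusualness" never enters the calculation at all.

```python
def choose_move(value_estimates):
    """Pick the move with the highest estimated win probability.

    value_estimates: dict mapping move name -> estimated win probability.
    Note there is no term for novelty or surprise; an unusual move with
    a low estimate is simply written off as sub-optimal.
    """
    return max(value_estimates, key=value_estimates.get)

# Hypothetical position: the "unusual" move gets a low estimate,
# so the agent dismisses it like any other bad move.
moves = {"usual_a": 0.52, "usual_b": 0.48, "unusual_wedge": 0.31}
print(choose_move(moves))  # -> usual_a
```

The point of the sketch is that nothing in this loop "considers" anything; the evaluation function is the whole story, which is why a move outside its training signal just reads as a mistake.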

Again, there is a lot of cool work happening in subsets of AI research, but I don't feel there is even a clear definition of what AI would entail. Spaghetti code to get a desired result doesn't really help either, since what matters is persistent, independent successes. To refer to Karpathy's article: it would have to make repeated assertions about, and show understanding of, similar photos with a high success rate to really be something spectacular.

[1] https://www.reddit.com/r/baduk/comments/4a7wl2/fascinating_i...
