There used to be a concept of "AI completeness": certain problems could only be solved by a true AI, which would then also be able to solve every other human-level problem. Doug Hofstadter wrote in Gödel, Escher, Bach that he believed grandmaster-level chess was AI-complete. That was obviously false in retrospect, and it's becoming increasingly obvious that there is no AI completeness at all.
This is a good blog post, and it articulates well many of the issues I have with calling current work on subsets of AI "AI". From my point of view, the results of that work are impressive, but they amount to hitting self-proclaimed or poorly defined benchmarks and then declaring AI status. Creating an advanced Go machine was very cool, but as Lee Sedol played more games, he said he began to understand how AlphaGo "thought". The reddit thread linked here [1] has some interesting insight into why pulling back the curtain on current AI claims is appropriate. A lot of the time, "it's just calculations" is fair, because we don't really have a machine considering what it's given; it's driving toward a singular goal in the best way possible. In AlphaGo's case, it disregarded Lee Sedol's move not out of any regard for its unusualness; it just wrote the move off as sub-optimal. When you see systems being algorithmically defeated like this, it's very difficult not to be dismissive of the claims of AI.
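To make the "it's just calculations" point concrete, here is a toy sketch (my own illustration, not AlphaGo's actual MCTS/policy-network pipeline) of a move picker that ranks moves purely by an estimated win probability. The move names and numbers are hypothetical; the point is that "unusualness" never enters the calculation, so a surprising move simply gets discarded as sub-optimal.

```python
# Toy illustration, not AlphaGo's real algorithm: rank moves only by an
# estimated win probability. Novelty or creativity of a move is invisible
# to the selection rule; a low estimate means the move is written off.

def pick_move(legal_moves, estimate_win_prob):
    """Return the move with the highest estimated win probability."""
    return max(legal_moves, key=estimate_win_prob)

# Hypothetical estimates standing in for a learned value function.
estimates = {"solid_extension": 0.54, "creative_wedge": 0.31, "safe_connect": 0.49}
best = pick_move(estimates.keys(), lambda move: estimates[move])
print(best)  # -> "solid_extension"; the "creative" move is dismissed
```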
Again, there is a lot of cool work happening in subsets of AI research, but I don't feel there is even a clear definition of what AI would entail. Spaghetti code tuned to get a desired result doesn't really help either, since what matters is persistent, independent successes. To refer to Karpathy's article, a system would have to make repeated, correct assessments of similar photos with a high success rate to really be something spectacular.
>it's becoming increasingly obvious that there is no AI completeness at all.
Bingo! The "general" part is the ability to learn new tasks and structures from environmental cues, and then to construct informed prior beliefs about those new tasks and structures using causal relations to previously observed tasks and structures. Nothing more!
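As a minimal sketch of what that could look like (my own toy framing, not a standard algorithm): pool estimates from previously observed tasks into a prior for a new task, weighted by a hypothetical "relatedness" score that stands in for the causal relations mentioned above.

```python
# Toy sketch: build an informed prior for a new task by pooling estimates
# from previously learned tasks, weighted by how related each one is.
# The relatedness weights here are hypothetical placeholders.

import numpy as np

def informed_prior(relations, task_estimates, default_mean=0.0):
    """Weighted pooling of old-task estimates into a prior mean for a new task.

    relations:      dict task_name -> relatedness weight in [0, 1]
    task_estimates: dict task_name -> point estimate learned earlier
    """
    weights = np.array([relations.get(t, 0.0) for t in task_estimates])
    values = np.array(list(task_estimates.values()))
    if weights.sum() == 0:
        return default_mean  # nothing related: fall back to an uninformative prior
    return float(np.average(values, weights=weights))

# Hypothetical numbers: two previously learned tasks inform a new one.
old_tasks = {"grasp_cup": 0.8, "grasp_ball": 0.6}
relations = {"grasp_cup": 0.9, "grasp_ball": 0.3}
print(informed_prior(relations, old_tasks))  # prior pulled toward the more related task
```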