I'm not sure. That's why we need to come up with a good, rigorous definition that doesn't elevate humanity, but is instead an objective, reasonable definition of intelligence that humans can agree upon. I'm doubtful that we can ever find that definition, though.
Right now, humans consider intelligence to be "whatever machines haven't done yet" (Tesler's Theorem), but as machine capabilities increase, there is a real possibility that humans may come to believe that intelligence doesn't exist at all (after all, if machines can do everything, and if machines are not intelligent, then nothing requires intelligence). [Source: https://plus.google.com/100656786406473859284/posts/Yp83aFwF...]
I do think that intelligence actually exists and that current AI can already do intelligent things, but the stuff that current AI can do won't match my vague understanding of the term "strong". If current trends continue indefinitely then, under that definition, we will never have Strong AI, but we will still have machines that do everything. At least, that's one possible way of thinking about intelligence.
But that's the thing: we don't have a good definition of intelligence at all (and I don't have one either), so we don't really know what's going on. We could invent Strong AI and never even recognize it, perhaps even dismissing it because it doesn't resemble what we think of as intelligence (much less "strong intelligence"). There's just so much that we don't know that talking about it is very difficult. AI is not just a field where you get to write pretty algorithms. It is also a philosophical field, and it is a shame that the philosophical and the practical aspects of AI are disconnected.
The crucial missing component, I think, is this: how does your intelligent system define goals?
Right now goal-setting is something machine intelligences do not, and cannot, do for themselves. Humans must define the bounds of a problem carefully before a robot brain can perform useful work (usually some kind of numerical optimization), as the sketch below illustrates.
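To make that division of labor concrete, here is a minimal sketch, assuming a toy objective, hand-picked bounds, and scipy.optimize.minimize (all my own illustrative choices, not anything proposed above): the human writes down what "good" means and where to look; the machine only searches within that pre-defined box.

    # The human decides the goal by writing the objective and its bounds;
    # the machine merely optimizes within them.
    import numpy as np
    from scipy.optimize import minimize

    def cost(x):
        # Human-chosen goal: get close to the point (1, 2) while keeping values small.
        return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2 + 0.1 * np.sum(x ** 2)

    # Human-chosen bounds on the search space.
    bounds = [(-5.0, 5.0), (-5.0, 5.0)]

    result = minimize(cost, x0=np.zeros(2), bounds=bounds)
    print(result.x)  # the "useful work": an optimum of a goal the machine never chose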
The preliminary problem, then, is: how do humans define goals?
And the final problem: construct an intelligence that is able to efficiently set and achieve goals that are broadly in line with human goals.
I think this statement of the problems neatly sums up my difficulty with the notion of "strong AI", or "AGI", or Robot God, or what-have-you, and the possibility that it might somehow be useful in the world.
Because the way humans set goals, I think, is through vague heuristics that are represented as narratives carried by culture and society; we hold these narratives and pass them back and forth to each other, through various tongues and modes of fashion.
This means that human desire is the product of a constantly-shifting stream of socialization, which we are all drinking from and pissing into at once. The only meaningful way to accurately represent this, I think, is through engagement in it. You must participate in culture to "get it". Where this participation breaks down ("let them eat cake"), we get strife.
Where does this leave the poor robot mind? It can only be "intelligent" in the way that we want when it can appreciate the horror of losing its daughter to a prison camp, when it can come to feel the memory of an inherited tragedy as both burr and serious weight. At this point we're just raising children again.
At any other point it's simply a dumb slave, doing exactly what we tell it - or a capricious, self-serving monster to be fought.
And since we don't know how humans decide their own goals (because knowing that would be a very revolutionary discovery that would immediately be used in a variety of other fields, including politics and advertising), we can never really establish a road map to building "strong AI"/"AGI"/"Robot Gods" (or even recognizing if we have built one by sheer accident). Clever. I like that.
There are probably ways to "cheat" your criteria, though, by having AI simulate the idea of discovering goals and acting on them, such as building a bot that searches Tweets on Twitter and then writes Tweets based on the Tweets it discovers (a rough sketch of that kind of cheat is below). But these are "cheats" and won't be universally accepted. We could argue, for instance, that this bot really has a higher-level goal of finding new goals and carrying them out, and is only coming up with "lower-level" goals based on its initial "higher-level" goal. So, again, you're probably right. We don't know how to have AI create goals on its own...we can only give goals to it.
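As a rough illustration of that sort of cheat (the fetched tweets are hard-coded here rather than pulled from Twitter, and the Markov-chain remixing is my own illustrative choice, not how any particular bot works), the bot's "goal discovery" amounts to remixing whatever text we pointed it at:

    import random
    from collections import defaultdict

    def build_chain(tweets):
        # Map each word to the words observed after it across the discovered tweets.
        chain = defaultdict(list)
        for tweet in tweets:
            words = tweet.split()
            for a, b in zip(words, words[1:]):
                chain[a].append(b)
        return chain

    def write_tweet(chain, max_words=20):
        # "Discover a goal": pick a starting word, then follow the chain.
        word = random.choice(list(chain.keys()))
        out = [word]
        while word in chain and len(out) < max_words:
            word = random.choice(chain[word])
            out.append(word)
        return " ".join(out)

    # Stand-in for the search step; a real bot would fetch these from Twitter.
    discovered = ["the robot wants a goal", "a goal is just a narrative", "the narrative shifts"]
    print(write_tweet(build_chain(discovered)))

The bot looks like it is choosing what to say, but every "goal" it acts on traces straight back to the higher-level goal we gave it.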
I would say that "dumb slave[s]" or "capricious, self-serving monster[s]" are still threats to worry about, though. Just because robots do what we tell them to doesn't mean that they will do what we want them to. Bugs and edge cases can exist in any system, and the more complex the system, the more likely it is for those bugs and edge cases to slip by unnoticed. These bugs/edge cases could lead to pretty catastrophic results. Managing complexity when programming AI would be a good area for "AI Advocates" to focus on.