Why are we so preoccupied with the notion of "artificial intelligence" in the first place?
Artificial intelligence, if it can even be defined, does not seem like a particularly valuable goal. Why is emulating human cognition the metric by which we assess the utility of machine learning systems?
I'd take a bunch of systems with superhuman abilities in specialized fields (driving, Go, etc.) over so-called "artificial intelligence" any day.
Driving and Go were considered waaay deep into "hard AI" just 10 years ago. Even if you mean to ask this question about "general artificial intelligence", you only need to go as far as speech or text comprehension for an example of something that needs to behave more or less like a person.
> Driving and Go were considered waaay deep into "hard AI" just 10 years ago.
Oh, I know that. In fact, I think there are some very useful and very hard tasks which even AGI (at least a basic one) would not be able to solve.
My point is that we should focus on applications, not person-ness. Whether a technique is similar to human cognition or not, or the result is similar to a person, is immaterial.
Put simply, I don't think it matters much if all our super-powerful machines are ultimately Blockheads.
There is value in trying to emulate the human mind: it teaches us how the human mind works. A proper simulation of human intelligence would be an invaluable building block for health fields like psychology. Of course this is a very different kind of AI, and the focus on application-specific measures of AI success has distracted from this goal.
No, being able to emulate a human is not necessary for understanding one and communicating with one. We can have great text comprehension (maybe even superhuman, won't that be fun?) and conversational interfaces without online learning, or without any overall model of interacting with the world.
> Because specialised systems are a subset of AGI (in the sense that GI is capable of producing them), hence AGI is seen as the ultimate goal.
I don't think one necessarily follows from the other though. We might be able to make an AGI with the intelligence of a human teenager. That doesn't necessarily mean it would be better at driving a truck than a more specialized algorithm.
> That would probably mean testing machine learning systems with something like Turing tests which no one is doing or advocating.
I see a lot of people complaining that various machine learning techniques won't lead to AGI and would never create a system which could pass the Turing test. Even this article laments that neural nets don't actually resemble human cognition.
> I don't think one necessarily follows from the other though. We might be able to make an AGI with the intelligence of a human teenager. That doesn't necessarily mean it would be better at driving a truck than a more specialised algorithm.
I agree that an AGI wouldn't necessarily be good at driving trucks. I meant it in the sense that an AGI would be capable of producing the "truck driving algorithm" (if we humans can do it, the AGI can do it too, almost by definition).
> I see a lot of people complaining that various machine learning techniques won't lead to AGI and would never create a system which could pass the Turing test.
Most of the complaining seems geared towards media and outsiders who portray X technique as being the "Solution to AGI", not towards the techniques themselves.
I think humans can do some cool things that we haven't yet figured out how to make an algorithm for. We can explore our environment, perform experiments, make hypotheses, and ultimately generate theories which have predictive power. Algorithms that can do that would be valuable and would have similar properties to human intellect.
One way to think of AI is what you say: highly specialized systems that have become very good at a specific task. This is very useful. It is also cool because if you can define your problem well and come up with a fitness function, the algorithm solves your problem for you.
Another way to think about AI, and this is a much more general type, is that it could possibly learn new things and create new theories by performing its own experiments, much like a human scientist does.
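To make the first kind concrete, here's a toy sketch of "define the problem with a fitness function and let the algorithm do the rest", using plain random search (the target value and the search strategy are just illustrative stand-ins, not any particular system):

    import random

    def fitness(x):
        # Illustrative objective: how close is x to a target of 42?
        # Higher is better; the optimizer never sees the target directly.
        return -abs(x - 42)

    def random_search(fitness_fn, n_iters=10000, low=-100, high=100):
        # The optimizer knows nothing about the problem except the score
        # the fitness function returns for each candidate it tries.
        best_x, best_score = None, float("-inf")
        for _ in range(n_iters):
            x = random.uniform(low, high)
            score = fitness_fn(x)
            if score > best_score:
                best_x, best_score = x, score
        return best_x

    print(random_search(fitness))  # lands near 42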
> Algorithms that can do that would be valuable and would have similar properties to human intellect.
I agree. A "scientist" algorithm would be quite cool and helpful.
Where I disagree is whether that scientist algorithm should necessarily have intelligence that in any way resembles humanity. For all I care, it doesn't even need to be able to talk to be useful.
I disagree with your disagreement. Talking - language - is intimately connected to "science", which, as a form of human knowledge, is about reducing the complexity of the universe down into semantic constructs that humans can appreciate. That is, "scientific knowledge" does not exist except in the form of human language. There is no other meaning to "science" (or "knowledge") except "making the universe intelligible to humans".
A machine must also labor under these constraints, and so ultimately must produce knowledge in the form of language/semantics in order to be doing "science".
Limiting science to knowledge expressed in human language seems unreasonable. I think that it's the degree to which a system can understand and predict the world that matters - not whether it can explain it to us.
Might as well say that German mathematics isn't real mathematics, because it's all incomprehensible gobbledegook to someone who doesn't speak German.
Scientific knowledge already isn't expressed in normal human language; scientists build up a language of mathematical symbols, diagrams, and domain specific terms to encode and communicate their understanding in a more effective format. After all, how well do you understand a thing if you can't explain it?
I suspect the real issue will be differences in mental skills... there might not be a human who can understand what the AI understands, even if it is very good at explaining.
Sure, but unless they're willing to communicate their benevolent intent to us, alien science should be viewed as hostile. What is the point of a machine science that is not amenable to human examination? How could we be sure that the machine is acting in concert with human interests?
A purely machine science is anathema; if the machine does science, it must participate in the community of human scientists to validate its conclusions, or else its 'knowledge', however encoded, is as obscure and terrifying as the path of an impending asteroid.
Generating theories which have predictive power is exactly what deep neural nets do. Reinforcement learning systems do explore their environment and perform experiments; it's just that we give them access to a limited environment and reasonably clear objectives.
Really, we cannot draw a clear line between a learning algorithm and intelligence. Any algorithm that can incorporate information from the world it is exposed to into itself and use it to predict that world is "intelligent" in a sense, and all machine learning algorithms do that.
>Generating theories which have predictive power is exactly what deep neural nets do.
Not really. Some symbolic/inductive systems do that. Traditional deep neural nets have a single model that is being adjusted to fit the distribution of input samples. That is not the same as creating a theory, let alone many theories.
> Reinforcement learning systems do explore their environment and perform experiments
Since when does trying something 1000 times with incremental changes qualify as experiments and exploration?
That single model is in the important respects as powerful as any model can hope to be, since it can approximate any continuous function. Sure, it's finite, but so is my computer, and my brain.
Sure, we can't guarantee that the ways we have of training this model will be able to learn these function approximations. But the same is true of my brain.
So why would multiple models be better? Any set of multiple models can be aggregated and called one model, anyway.
Being adjusted to fit the distribution of input samples sounds unimpressive, until you remember it can in principle do it for all distributions. And it learns to generalize from the input samples to unseen samples. Making a model to generalize from what we've seen to what we haven't yet seen is exactly the same as creating a theory, as I see it.
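For what it's worth, the "generalize from seen samples to unseen ones" part is easy to demo with any off-the-shelf regressor; here's a rough sketch with scikit-learn's MLPRegressor (the sine "law" and the hyperparameters are arbitrary choices for illustration, not anyone's actual setup):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # "Input samples": noisy observations of an underlying law the net never sees.
    rng = np.random.default_rng(0)
    x_train = rng.uniform(-3, 3, size=(500, 1))
    y_train = np.sin(x_train).ravel() + rng.normal(0, 0.1, size=500)

    # One model, adjusted to fit the distribution of input samples.
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    net.fit(x_train, y_train)

    # Query it on points it never saw; doing well here is the "theory-like"
    # part: generalization beyond the observed samples.
    x_unseen = np.linspace(-2.5, 2.5, 6).reshape(-1, 1)
    print(net.predict(x_unseen))
    print(np.sin(x_unseen).ravel())  # the underlying law, for comparison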
> Since when does trying something 1000 times with incremental changes qualify as experiments and exploration?
Since when doesn't it? If it's 100 times, is it experimentation and exploration then in your eyes? What about 10 times? Sounds like an arbitrary distinction to me.
The model behind ANNs defines how the system maps its inputs to its outputs. A theory, on the other hand, is something that describes an aspect of the environment/sample data regardless of the system's outputs. One benefit of having several theories is that you can compare, test and invalidate them.
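As a hedged toy sketch of what "several theories" buys you (the model families and synthetic data here are arbitrary stand-ins for real theories): train a few competing models on the same data and let held-out error invalidate the worse ones.

    from sklearn.datasets import make_regression
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.metrics import mean_squared_error

    X, y = make_regression(n_samples=400, n_features=5, noise=5.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Three competing "theories" of the same data.
    theories = {
        "linear": LinearRegression(),
        "tree": DecisionTreeRegressor(random_state=0),
        "knn": KNeighborsRegressor(),
    }

    # Held-out error is what lets us compare the theories and invalidate
    # the ones that describe the data worse.
    for name, model in theories.items():
        model.fit(X_train, y_train)
        print(name, mean_squared_error(y_test, model.predict(X_test)))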
...
If your definition of "experiment" includes stuff like practicing a baseball pitch 1000 times, it is too generic to be meaningful. Experimentation implies purposeful gathering of knowledge by trying different things.
RL systems do experiment by that definition. For instance, DeepMind's first Atari player used an epsilon-greedy exploration/exploitation strategy: choose the action the model suggests is best with probability 1 - e, choose a random action (that is, "try different things") with probability e.
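Roughly, that rule looks like this (a minimal sketch, not DeepMind's actual code; q_values stands in for whatever value the current model assigns to each action):

    import random

    def epsilon_greedy(q_values, epsilon=0.05):
        # Explore: with probability epsilon, "try different things" at random.
        if random.random() < epsilon:
            return random.randrange(len(q_values))
        # Exploit: otherwise take the action the current model rates best.
        return max(range(len(q_values)), key=lambda a: q_values[a])

    # e.g. epsilon_greedy([0.1, 0.7, 0.3]) usually returns 1, occasionally explores.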
I don't know what you mean by the "system's outputs" when talking about theories, but it seems to me that even by your definition, the weights of the ANN can be understood as a theory.
Having several theories is the same as having one theory: "one of these theories is more correct", or possibly "some combination of these theories is correct". If you define it as one theory instead of several, you can still improve it. (As I recall, some RL systems have multiple "heads", in a way corresponding to multiple theories.)
I really dislike this notion. It reminds me of either a giant evil computer hivemind or sexy female robots. I think it is more a pop-culture creation focused on sensational value that elicits fear or fascination than a clearly defined concept.