He says general AI is on the same spectrum as the AI technologies we have now, but is qualitatively different. I'm sorry, but that's a contradiction. If it's on the same spectrum, then the difference is just quantitative: a matter of where on the spectrum it lies. If it's qualitatively different, it's on another axis; another quality is in play.
His definition is also rubbish. Being useful at economically valuable work has nothing necessarily to do with intelligence. Writing implements are vital in pretty much all economic activities; before keyboards came along, many couldn't have been done at all without them.
Deep Learning is great, it's a revolution, but it's a fairly narrow technology. It solves one type of task fantastically well; it just happens that solving that type of task is applicable in many different problem domains, but it's still only one technique. At no point did he show how to draw a line from Deep Learning to General AI in any recognisable form. It just looks like a hook to get you to hear his pitch.
> He says general AI is on the same spectrum as the AI technologies we have now, but is qualitatively different.
No, it is not. The basic premise of fixed-wing aircraft has been the same from the Wright brothers to modern jets, yet the Wright brothers' Flyer was useless and a modern jet is not.
We have agents that can act in environments. His claim is that getting these agents to human-level intelligence is a matter of compute and architectural advancements that are not qualitatively different from what we have now. That just does not strike me as an absurd claim. We have systems that can learn reasonably robustly. We should accord significant probability to the claim that higher-level reasoning and perception can be learned with these same tools, given enough computing power.
He claims we cannot "rule out" near-term AGI. Let's define "rule out" as assigning it a probability of 1% or lower. I think he's given pretty good reasons to raise that probability to somewhere between 2% and 10%. For myself, 10-20% seems a reasonable range.
What claim are you responding to here? Simonh said:
> He says general AI is on the same spectrum as the AI technologies we have now, but is qualitatively different. I'm sorry, but that's a contradiction.
Which I agree with. How can two qualitatively different things be on the same spectrum? You later say yourself:
> His claim is that getting these agents to human-level intelligence is a matter of compute and architectural advancements that are not qualitatively different from what we have now.
Which seems to be the opposite of what simonh said, and it's confusing to say the least.
> His definition is also rubbish. Being useful at economically valuable work has nothing necessarily to do with intelligence. Writing implements are vital in pretty much all economic activities; before keyboards came along, many couldn't have been done at all without them.
> Deep Learning is great, it's a revolution, but it's a fairly narrow technology. It solves one type of task fantastically well; it just happens that solving that type of task is applicable in many different problem domains, but it's still only one technique. At no point did he show how to draw a line from Deep Learning to General AI in any recognisable form. It just looks like a hook to get you to hear his pitch.
It's a great pitch, but it's not about AGI.