Agreed, I think if they were to drop the real-time constraint for the sake of the robotics tasks, they could train a huge model with the lessons from PaLM and Chinchilla and probably slam dunk the weakly general AI benchmark.
I'm in the camp that thinks we're headed in a perpendicular direction and won't ever get to human-level AGI with current efforts, based on the simple idea that the basic tooling is wrong from first principles. I mean, most of the "progress" in AI has been due to getting better at building and understanding a single piece of technology: neural networks.
A lot of recent neuroscience findings have shown that human brains _aren't_ just giant neural networks; in fact, they are vastly more complex. Until we start thinking from the ground up about how to build and engineer systems that reflect the human brain, we're essentially wandering around in the dark with perhaps only a piece of what we _think_ is needed for intelligence. (I'm not saying the human brain is the best-engineered thing for intelligence either, but it is one of the best examples we have to model AI after, and that notion has largely been ignored.)
I generally think it's hubris to spit in the face of 4 billion years of evolution by thinking that some crafty neural net with X times more parameters will magically emerge as a truly generally intelligent entity - it will be a strange abomination at best.
I'm in the camp that thinks we're headed in a perpendicular direction and won't ever achieve powered flight with current efforts, based on the simple idea that the basic tooling is wrong from first principles. I mean, most of the "progress" in flight has been due to getting better at building and understanding a single piece of technology: fixed-wing aircraft.
A lot of recent powered-flight findings have shown that real birds _don't_ just use fixed wings; in fact, they flap their wings! Until we start thinking from the ground up about how to build and engineer systems that reflect the bird wing, we're essentially wandering around in the dark with perhaps only a piece of what we _think_ is needed for powered flight. (I'm not saying the bird wing is the best-engineered thing for powered flight either, but it is one of the best examples we have to model powered flight after, and that notion has largely been ignored.)
I generally think it's hubris to spit in the face of 4 billion years of evolution by thinking that some crafty fixed-wing aircraft with X times more wingspan and horsepower will magically emerge as truly capable of powered flight - it will be a strange abomination at best.
To be slightly less piquant:
A) Machine learning hasn't been focused on simple neural nets for quite some time.
B) There's no reason to believe that the organizational patterns that produce one general intelligence are the only ones capable of doing so. In fact, it's almost certainly not the case.
By slowly iterating, keeping the best work and discarding the rest, we're essentially hyper-evolving our technology the same way natural selection does. It seems inevitable that we'll arrive at least at a convergent evolution of general intelligence, in a tiny fraction of the time it took on the first go-around!
Do you think you’ll see true AGI in your lifetime? I certainly don’t think I’ll see it in mine.
No other domain of the sciences is “complete” yet. There are still unanswered questions in physics, biology, neuroscience, every field. Why are people so sure that AI will be the first field to be “completed”, and in such an astoundingly short amount of time on a historical scale?
What makes you think AGI means AI is a complete field? Wouldn't that be like saying engineering was completed once we worked out how to build a bridge without trial and error? Or that biology was completed when we worked out how DNA works plus enough organic chemistry? Obviously we won't complete the field in our lifetime, but that's because nothing is ever really complete.
I think I'd put the chances of AGI within my lifetime at 30%. Low, but high enough that it's worth thinking about and probably worth dumping money into.
> Until we start thinking from the ground up about how to build and engineer systems that reflect the human brain, we're essentially wandering around in the dark with perhaps only a piece of what we _think_ is needed for intelligence.
I've long wanted an approach based on a top-down architectural view of the human brain. By simulating the brain's different submodules (many of which are shared across all animal species), maybe we can make more progress. A toy sketch of what I mean is below.
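Here's a minimal sketch in Python of the shape of that idea. Everything in it is hypothetical and purely illustrative (the module names, the shared-workspace pattern, the toy rules); it's not a claim about how the brain or any real system actually works, just what "separately engineered submodules with defined interfaces" could look like versus one undifferentiated network.

```python
# Illustrative sketch only: each class stands in for a coarse brain subsystem,
# and the pieces communicate through a shared workspace each tick.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Workspace:
    """Shared state the submodules read from and write to each tick."""
    signals: Dict[str, object] = field(default_factory=dict)


class Perception:
    """Stand-in for sensory cortex: turns raw input into features."""
    def step(self, ws: Workspace, observation: str) -> None:
        ws.signals["features"] = observation.lower().split()


class EpisodicMemory:
    """Stand-in for a hippocampus-like store: keeps recent episodes."""
    def __init__(self) -> None:
        self.episodes: List[List[str]] = []

    def step(self, ws: Workspace) -> None:
        features = ws.signals.get("features", [])
        self.episodes.append(list(features))
        ws.signals["recall"] = self.episodes[-3:]  # crude recency-based recall


class MotorControl:
    """Stand-in for a cerebellum/motor module: picks a simple action."""
    def step(self, ws: Workspace) -> None:
        features = ws.signals.get("features", [])
        ws.signals["action"] = "approach" if "food" in features else "explore"


class Agent:
    """Wires the submodules together in a fixed top-down loop."""
    def __init__(self) -> None:
        self.ws = Workspace()
        self.perception = Perception()
        self.memory = EpisodicMemory()
        self.motor = MotorControl()

    def tick(self, observation: str) -> str:
        self.perception.step(self.ws, observation)
        self.memory.step(self.ws)
        self.motor.step(self.ws)
        return self.ws.signals["action"]


if __name__ == "__main__":
    agent = Agent()
    print(agent.tick("bright light ahead, nothing of interest"))  # -> explore
    print(agent.tick("food smell detected to the left"))          # -> approach
```

The point is only the structure: distinct submodules with defined interfaces and responsibilities, composed top-down, rather than one monolithic net trained end to end.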