I don't think many people were claiming that AI can't solve any scientific problems or can't perform creative work at all. That sounds like a big strawman. Before ML got big, there were already AI systems that created art.
What sceptics have actually been saying is that the first-step fallacy still applies. Getting 20% of the way to a goal is no indication at all that you'll get 100% of the way there, or as it's often put, you don't get to the moon by climbing up trees. For people who work with gradients and local maxima all day, that idea seems weirdly absent when it comes to the research itself. In the same sense, I don't have the impression that the goalpost of AGI has been moved up, but that it's been moved down. When Minsky et al. started working on AI more than half a century ago, the goal was nothing less than to put a mind into a machine. Today our fridges are 'AI powered', and when a neural net creates an image or some poetry, far more agency and intent is attributed to it than is actually there.
I think it was Andrew Ng, himself a very prominent ML researcher, who pointed out that concerns about AGI make about as much sense as worrying about overpopulation on Mars. We make models bigger, we fine-tune them, and they perform better. I don't think many AGI sceptics would be surprised by that. But I don't see any indication that they are moving towards human-level intellect at some exponential rate. If DALL-E suddenly started to discuss philosophy with me I'd be concerned; it producing a better image of a bear when you throw more parameters at it is exactly what we'd expect.
The notions that are crucial for distinguishing between intelligence and what large NNs are doing are generalization and abstraction. I'm impressed with DALL-E's ability to connect words to images and exploit the compositionality of language to model the compositionality of the physical world. Gato seems to be using the same trick across more domains.
But that's riding on human-created abstractions rather than creating abstractions. In terms of practical consequences, that means these systems won't learn new things unless humans learn them first and provide ample training data.
But someday we will develop systems that can learn their own abstractions, and teach themselves anything. Aligning those systems is imperative.
Yup, I think this pretty much describes the limitation of today's AIs. They are gigantic statistics machines at best. It's still amazing how far we can get with this technique, but we know where they stop getting better.
> concerns about AGI make about as much sense as worrying about overpopulation on Mars
I disagree strongly that this is an apt analogy. Planning strategies for dealing with overpopulation on Mars is contrived and unnecessary, whereas planning for AGI is more reasonable.
The creation of AGI is a more important event than the overpopulation of any given planet. There is good reason to believe that mishandling the creation of AGI would pose a permanent existential threat to humans. Overpopulation on Mars would only be an existential threat if we believed it would be followed by an exhaustion of resources leading to the extinction of all humans in our solar system. It is contrived to worry about that now.
There is no good way to know just how close or far we are from AGI, the way there would be for predicting overpopulation on Mars. In general, we have a strong grasp of the fundamental dynamics of overpopulation, whereas we don't yet have a strong grasp of how intelligence works.
People have been very bad at predicting when AI would be capable of accomplishing particular tasks. There have been many under- and over-estimates by prominent researchers. If progress is unpredictable, there is some significant chance we are closer to AGI than most people think.
AGI is both far more important and more probable than overpopulation of Mars in the next 20 years.
> But I don't think there is any indication that they are moving towards human level intellect at some exponential rate.
Is there any very strong indication that progress is plateauing, or that the current approach of deep learning is definitely not going to work? If your benchmark is simply "can it do X, or not?", it's not a very good benchmark for determining progress. That's why benchmarks usually have scores associated with them.
> If DALL-E suddenly started to discuss philosophy with me I'd be concerned
If DALL-E suddenly started discussing philosophy with you in a way that would concern you in that moment, you should have been concerned for years.
Self-driving cars come to mind as well. I remember 2015, when my friends would debate the self-driving trolley problem over lunch. We wondered whether society was ready for an owner-less car market; I seriously wondered if I would even need a license in the future, or if I should keep it just in case.