I hope you're right, but I worry that we'll get it done anyway. I hope it turns out that energy requirements and the difficulty of scaling are enough to give us time to figure out how to align these systems properly.
One big question in my mind: we can clearly train narrow AIs that are WAY more capable than us in their narrow domain. We started with calculators, then moved on to chess engines, Go, StarCraft, and right now we're at GPT-4. How is GPT-4 better than humans, and how is it lacking?
Ways it's better: it's much faster, has perfect English grammar, understands many languages at a passable (though not perfect) level, and has much more knowledge.
Ways it's about the same: it can code pretty well and can solve math problems pretty well. It has the theory of mind of about a 9-year-old in tests, I think? It can learn from examples (that one is pretty big, in my opinion!). It can pass a lot of tests that many people can't -- some of this is due to having a lot of book knowledge baked in, but it definitely is capable of some reasoning.
Ways it's worse: it has trouble counting, confabulates, and gets distracted. (Note that humans also get distracted and confabulate sometimes. We are definitely better at counting, though.)
Also, there are some things humans handle that GPT-4 just can't do. By itself, it can't talk, can't process music, and has no body; if it did, it probably wouldn't have any motor control to speak of.
I think what we should be wary of is hyper-focusing on the ways GPT-4 falls down. It's easy to point at those areas and laugh, while shrugging off the things it excels at. There's never going to be an AGI that's equivalent to a human -- by the time it's at least as good as a human at everything, it will be beyond us in most ways.
So I expect that if the trend of the past 5 years continues, even at half speed, we'll almost certainly have an AGI on our hands sometime around 2030. I think there's a decent chance it'll be superintelligent. Bill Gates says most people overestimate what they can do in 1 year and underestimate what they can do in 10 years, and I think that holds true for humanity as well.
By 2040 or 2050? I really hope we've either solved alignment or collectively decided that this tech is way too dangerous to use and managed to enforce a ban.
Just a reminder: both the StarCraft 2 and Go AIs (AlphaStar and AlphaGo, iirc) FAILED to achieve their goal of being better than all humans.
It's most obvious in the case of AlphaStar, where the AI could beat master players playing standard games on the ladder, but could easily be cheesed just like the ancient built-in AIs. But even in the much simpler game of Go, an amateur could beat the AI that won against world champions, though admittedly with the help of a computer. In both cases, the AIs look like newbies who don't understand the basics of the game they're playing.
In a way, cheesing the SC2 AI is similar to the "tl" hacks for LLMs. There's still no general solution, and we aren't getting any closer.