When an AI makes a silly math mistake, we say it's bad at math and laugh at how dumb it is. Some people extrapolate this to "they'll never get any better and will always be a dumb toy that gets things wrong." When I forget to carry a 1 in a math problem, we call it "human error," even if I make that mistake an embarrassing number of times over my lifetime.
Do I think LLMs are alive or close to ASI? No. Will they get there? If it's even possible at all, almost certainly one day. Do I think people severely underestimate AI's ability to solve problems while significantly overestimating their own? Absolutely, 10,000%.
If there's one thing I've learned from watching the AI discussion over the past 10-20 years, it's that people have overinflated egos and a crazy amount of hubris.
"Today is the worst that it will ever be" applies to an awfully large number of things that people work on creating and improving.