You're (somewhat abrasively) stating a version of my own opinion as well, but the "real threat" you mention is very real. While "AI" (really, machine learning) is not good at most things, it does appear to be very, very good at convincing people it is good at them (for whatever reason). The threat of it being put in charge of things when it has (quite literally) no idea what it is doing is not a small one.
Because a poorly implemented chatbot using someone else's LLM API isn't comparable to what you can accomplish with 10^n rounds of inference applied in a clever way. Computers are useless without error correction, and LLMs may be as well. That's not to say that LLMs will form their own goals, but that the people in control of them will be wielding dangerously capable agents.
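To make "rounds of inference in a clever way" concrete, here's a minimal sketch of self-consistency voting, about the simplest error-correction scheme you can layer on an LLM. Everything here is illustrative; query_llm is a hypothetical placeholder, not any real API:

    import collections

    def query_llm(prompt):
        # Hypothetical stand-in for a call to someone's LLM API; any real
        # client (with nonzero sampling temperature) would go here.
        raise NotImplementedError

    def majority_vote(prompt, n=32):
        # Sample the same question n times and keep the most common answer.
        # Individual samples are noisy; aggregating many of them is a crude
        # but surprisingly effective form of error correction.
        answers = [query_llm(prompt) for _ in range(n)]
        best, _count = collections.Counter(answers).most_common(1)[0]
        return best

Scale n up by orders of magnitude, add verifiers or tool use, and the gap between "a chatbot" and "an agent" gets wide fast.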
Your kid will be grown up in less than 20 years, not 100. But even so, in 100 years, will there be 4x as many people? Will humanity be consuming 10x the energy that we do today? Will we have computers that are a million times faster?
The point is, exponential progress is incredible, but at some point it ceases to be exponential. And the progress of the last 100 years was fueled by exponential population growth and exponential energy usage. We're already at +1.5C because of that; how hot will it be when your kid is grown up?
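Rough numbers, for scale (the growth rates here are illustrative assumptions, not forecasts):

    def grow(initial, annual_rate, years):
        # Compound growth: initial * (1 + rate)^years
        return initial * (1 + annual_rate) ** years

    # ~1% annual population growth (roughly today's rate) for 100 years:
    print(grow(1.0, 0.01, 100))    # ~2.7x as many people

    # ~2.3% annual energy growth (a rough historical figure) for 100 years:
    print(grow(1.0, 0.023, 100))   # ~9.7x the energy use

    # Moore's-Law-style doubling every 2 years for 100 years:
    print(2 ** (100 / 2))          # ~1.1e15x faster

Even modest-looking rates compound into "4x as many people" territory, and a naive Moore's Law extrapolation lands on an absurdity, which is exactly the point: exponentials don't stay exponential.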
If you look at the rate of change of humanity, it's been exponentially increasing.
If you look at the direction, it's not predictable. A very different set of things will come to pass.
A child born today will live O(100 years), and will live in a very different world than the one I live in today. Computation, in particular, is continuing to change. LLMs are a huge change, as is being interconnected, as are many other things. That's not "faster," like Moore's Law of yesteryear, but it is change.
Just a guess, but I'd say "this point" is some time after real signs of understanding and intelligence are displayed.
The concept of *money* and commerce might be a good place to start trying to teach this techno-parrot how to actually think.
A 5-year-old has way better thinking ability. Maybe we should regulate 5-year-olds as potentially dangerous. You never know: at "some point" one of them could easily decide to destroy humanity.
Once a technology has been developed and made available, it can be used by any number of governments and corporations to do... whatever the fuck they want. You may have the resources to say "no", but they have the resources to get millions of people to give an enthusiastic "yes". Most people will do whatever marketing campaigns and figures of authority tell them to do. Hold a radiating box by your brain a few hours a day and have it sit next to your crotch the rest of the time? Sure. Take 3-plus shots of a vaccine developed with new technology in record time? Of course. Get into a metal tube and soar through the skies like an absolute lunatic? You're the boss!
In some cases, like nuclear proliferation, a concerted effort by powerful actors can slow the spread of certain technologies. Otherwise, your "no" will amount to about as much as the anti-vaxxers'.
The only real threat is from people willing to trust AI.