The phrase "AGI owner" implies a person who can issue instructions and have the AGI do their bidding. Most likely there will never be any AGI owners, since no one knows how to program an AGI to follow instructions even given infinite computing power. It's not clear how connectionism / using gradient descent helps: No one knows how to write down a loss function for "following instructions" either. Until we find a solution for this, the first AI to not to be "obviously erroneously evil" won't be good. It will just be the first one that figured out that it should hide the fact that it's evil so the humans won't shut it off.
We humans have gotten too used to winning all the time against animals because of our intelligence. But when the other species is intelligent too, there's no guarantee that we win. We could easily be outcompeted and driven to extinction, as happens frequently in nature. We'd be Kasparov playing against Deep Blue: Fighting our hardest to survive, yet unable to think of a move that doesn't lead to checkmate.
All of this AGI risk stuff always hinges on the idea of us building an AGI, while nobody has any idea how to get there. I need to finish my PhD first, but writing a proper takedown of the "arguments" bubbling out of the hype machine is the first thing on my bucket list afterwards, with the TL;DR being "just because you can imagine it doesn't mean you can get there".
Google just released a paper that shows a language model beating the average human on >50% of tasks. I’d say we have a pretty good idea of how to get there.
Okay, so how do we go from "better than the average human in 50% of specific benchmarks" to "AGI that might lead to human extinction", then? Keep in mind the logarithmic improvement observed with current approaches.
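To put a number on "logarithmic improvement": scaling-law papers report loss falling as a power law in training compute, roughly L ≈ C^-alpha. A toy calculation (the exponent is a made-up placeholder, not a measured value):

    # Power-law scaling sketch: loss as a function of training compute.
    # alpha = 0.05 is a placeholder exponent, not from any real paper.
    alpha = 0.05

    def loss(compute: float) -> float:
        return compute ** -alpha

    for c in (1e3, 1e6, 1e9, 1e12):
        print(f"compute {c:.0e} -> loss {loss(c):.3f}")

    # Prints 0.708, 0.501, 0.355, 0.251: every 1000x more compute cuts
    # loss by the same constant factor, so each further step of
    # improvement costs exponentially more compute.

So even granting the benchmark result, extrapolating from it to "smarter than us at everything" is doing a lot of work.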