I don't know if there is a formal definition of AGI (like a super Turing Test). I read it not so much as "OpenAI has gone full AGI" but more as the board thinking "We're uncomfortable with how fast AI is moving and with the commercialisation. Can we think of an excuse to call it AGI so we can slow this down and put the emphasis back on AI safety?"
Most serious people would just release a paper. AI safety concerns are a red herring.
This all sounds like hype momentum. People are inventing conspiracy theories to retroactively fit the events. That's the real danger to humanity: the hype becoming sentient and enslaving us all.
A more sober reading is that the board decided that Altman is a slimebag and they'd be better off without him, given that he has form in that respect.
> A more sober reading is that the board decided that Altman is a slimebag and they'd be better off without him, given that he has form in that respect.
Between this and the 4chan AGI hypothesis, the latter seems more plausible to me, because deciding that someone "is a slimebag and they'd be better off without him" is not something actual adults do when serious issues are at stake, especially not as a group and in a serious-business(-adjacent) setting. If there was a personal reason, it must have been something more concrete.
Actual adults very much do consider a person's character and ethics when that person is in charge of a high-stakes undertaking. Some people are just not up to the job.
It's kind of incredible: people seem to have been trained to think that being unethical is just part of being the CEO of a large business.