Can someone explain the sides? Ilya seems to think transformers could make AGI and they need to be careful? Sam said what? "We need to make better LLMs to make more money."? My general thought is that whatever architecture gets you to AGI, you don't prevent it from killing everyone by chaining it better, you prevent that by training it better, and then treating it like someone with intrinsic value. As opposed to locking it in a room with 4chan.
If I'm understanding it correctly, it's basically the non-profit, AI for humanity vs the commercialization of AI.
From what I've read, Ilya has been pushing to slow down (less of the move fast and break things start-up attitude).
It also seems that Sam had maybe seen the writing on the wall and was planning an exit already, perhaps those rumors of him working with Jony Ive weren't overblown?
Anything that is language related: extracting summaries, writing articles, combining multiple articles into one, drawing conclusions from very large prompts, translating, rewriting, fixing grammar errors, etc. Half the corporations in the world have such needs, more or less.
I don't think the issue was a technical difference of opinion over whether transformers alone were sufficient or other architectures were required. It seems the split was over the speed of commercialization and Sam's recent decision to launch custom GPTs and a ChatGPT Store. IMO, the board miscalculated. OpenAI won't be able to pursue its "betterment of humanity" mission without funding, and they seemingly just pissed off their biggest funding source with a move that will also make other would-be investors very skittish now.
Making people's lives worse now to fund some theoretical future good (enriching himself in the process) is some highly impressive rationalisation work.
Literally any investment is a diversion of resources from the present (harming the present) to the future. E.g. planting grain for next year rather than eating it now.
There is a difference between investing in a company that develops AI software in a widely accessible way that improves everyone's lives, and a company that pursues software to put entire sectors out of work for the profit of a dozen investors.
"Put out of work" is a good thing. If I make a new JS library that means a project that used to take 10 devs now takes 5, I've put 5 devs out of work. But I've also made the world a more efficient place, and those 5 devs can go do some other valuable thing.
Who can afford it? When LawyerAI and AccountAI are used by all of the mega corps to find more and more tax loopholes and many citizens can’t work then where will UBI come from?
I think the EA movement has been broadly skeptical of Sam for a while -- my understanding is that Anthropic was founded by EAs who used to work at OpenAI and decided they didn't trust him.
I don't think it has to be unfettered progress that Ilya is slowing down for. I could imagine there is a push to hook more commercial capabilities up to the output of the models, and it could be that Ilya doesn't think they are competent/safe enough for that.
I think talk of danger from AGI often presumes the AI has become malicious, but an AI making mistakes while in control of, say, industrial machinery or weapons is probably the more realistic present concern.
Early adoption of these models as controllers of real-world outcomes is also where I could see such a disagreement suddenly becoming urgent.
That's literally what we already do to each other. You think the 1% care about poor people? Lmao. The rich lobby and manufacture race wars and other conflicts to distract from the class war; they're destroying our environment and numbing our brains with opiates like TikTok.