
Can someone explain the sides? Ilya seems to think transformers could make AGI and they need to be careful? Sam said what? "We need to make better LLMs to make more money."? My general thought is that whatever architecture gets you to AGI, you don't prevent it from killing everyone by chaining it better, you prevent that by training it better, and then treating it like someone with intrinsic value. As opposed to locking it in a room with 4chan.



If I'm understanding it correctly, it's basically the non-profit, AI for humanity vs the commercialization of AI.

From what I've read, Ilya has been pushing to slow down (less of the move fast and break things start-up attitude).

It also seems that Sam had maybe seen the writing on the wall and was planning an exit already; perhaps those rumors of him working with Jony Ive weren't overblown?

https://www.theverge.com/2023/9/28/23893939/jony-ive-openai-...


The non-profit path is dead in the water after everyone realized the true business potential of GPT models.


What is the business potential? It seems like no one can trust it for anything, so what do people actually use it for?


Anything that is language-related: extracting summaries, writing articles, combining multiple articles into one, drawing conclusions from really big prompts, translating, rewriting, fixing grammar errors, etc. Half of the corporations in the world have such needs, more or less.
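For a concrete sense of how low the barrier is, here's a minimal sketch of the kind of summarization call a company might wire up, using the OpenAI Python client. The model name, prompts, and file name are just illustrative placeholders:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Any text you want summarized; file name is hypothetical.
    article = open("press_release.txt").read()

    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize the user's text in three bullet points."},
            {"role": "user", "content": article},
        ],
    )

    print(resp.choices[0].message.content)

The same call shape covers most of the tasks above: only the system prompt changes (translate, rewrite, fix grammar, merge documents), which is why so many companies can slot it into existing workflows cheaply.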


It could easily make better decisions than these board members, for example.


> From what I've read, Ilya has been pushing to slow down

Wouldn’t a likely outcome in that case be that someone else overtakes them? Or are they so confident that they think it’s not a real threat?


I don't think the issue was a technical difference of opinion over whether transformers alone are sufficient or other architectures are required. It seems the split was over the speed of commercialization and Sam's recent decision to launch custom GPTs and a ChatGPT Store. IMO, the board miscalculated. OpenAI won't be able to pursue its "betterment of humanity" mission without funding, and they seemingly just pissed off their biggest funding source with a move that will also make other would-be investors very skittish now.


Making humanity’s current lives worse to fund some theoretical future good (enriching himself in the process) is some highly impressive rationalisation work.


Try to tell that to the Effective Altruism crowd.


Literally any investment is a diversion of resources from the present (harming the present) to the future. E.g., planting grains for next year rather than eating them now.


There is a difference between investing in a company that is developing AI software in a widely accessible way that improves everyone's lives, and a company that pursues software to put entire sectors out of work for the profit of a dozen investors.


"Put out of work" is a good thing. If I make a new js library which means a project that used to take 10 devs now takes 5 I've put 5 devs out of work. But ive also made the world a more efficient place and those 5 devs can go do some other valuable thing.


What percent of those devs don’t do a valuable thing and become homeless?

Maybe devs are a bad example, so replace them with “retail workers” in your statement if it helps.

Is “put out of work” a good thing with no practical limits?


Yes, the ideal is that when most jobs are genuinely automated, we can finally afford UBI.


Who can afford it? When LawyerAI and AccountAI are used by all of the mega-corps to find more and more tax loopholes, and many citizens can't work, then where will UBI come from?


And people with money will want to make UBI happen because...?


Here's the discussion on the EA forum if anyone is interested: https://forum.effectivealtruism.org/posts/HjgD3Q5uWD2iJZpEN/...

I think the EA movement has been broadly skeptical towards Sam for a while -- my understanding is that Anthropic was founded by EAs who used to work at OpenAI and decided they didn't trust Sam.


My thought exactly. Some people don’t have any problem with inflicting misery now for hypothetical future good.


> Making humanity’s current lives worse to fund some theoretical future good

Note that this clause would describe any government-funded research, for example.


> locking it in a room with 4chan.

Didn’t Microsoft already try this experiment a few years back with an AI chatbot?


> Didn’t Microsoft already try this experiment a few years back with an AI chatbot?

You may be thinking of Tay?

https://en.wikipedia.org/wiki/Tay_(chatbot)


That’s the one.


I don't think it's necessarily unfettered progress in general that Ilya wants to slow down. I could imagine there is a push to hook more commercial capabilities up to the output of the models, and it could be that Ilya doesn't think they are competent/safe enough for that.

I think danger from AGI often presumes the AI has become malicious, but the AI making mistakes while in control of say, industrial machinery, or weapons, is probably the more realistic present concern.

Early adoption of these models as controllers of real world outcomes is where I could see such a disagreement becoming suddenly urgent also.


> treating it like someone with intrinsic value

Do you think that if chickens treated us better, as beings with intrinsic value, we wouldn't kill them? For the folks worried about superhuman-AGI x-risk, that's the bigger argument.


I think if I was raised by chickens that treated me kindly and fairly, then yes, I would not harm chickens.


They'll treat you kindly and fairly, right up to your meeting with the axe.


That's literally what we already do to each other. You think the 1% care about poor people? Lmao, the rich lobby and manufacture race and other wars to distract from the class war; they're destroying our environment and numbing our brains with opiates like TikTok.


No disagreement here.



