
Perhaps you could argue that he wants to stick with Sam and the others because if they start a company that competes with OpenAI, there’s a real chance they catch up and surpass OpenAI. If you really want to be a voice for safety, it’ll be most effective if you’re on the winning team.



One funny detail is that the OpenAI charter states that, if this happens, they will stop their own work and help the organisation that is closest to achieving OpenAI's stated goal.


But now it may be that the regulations they've gotten in place will make it harder for any new upstarts to approach them.


Maybe Sam wants to build something for profit?


really?


https://openai.com/charter

Second paragraph of the "Long-term safety" section.


Depends how much research is driven by Ilya…


> If you really want to be a voice for safety, it’ll be most effective if you’re on the winning team.

If an AI said that, we'd call it "capability gain" and consider it a huge risk.


I dunno, the moat Sam tried to build might make it hard to build a competitor.


We are about to find out if the moats are indeed that strong.

xAI recently showed that training a decent-ish model is now a multi-month effort. Granted, GPT-4 is still further along than the others, but I'm curious how many months/resources that adds up to when you have the team that built it in the first place.

But also, starting another LLM company might be too obvious a thing to do. Maybe Sam has another trick up his sleeve? Though I suspect he'll stick with AI one way or the other.



