
They just chose a misleading name in the first place. It's pretty obvious that they couldn't properly pursue their core mission as a purely open research organization. The desired end result is too valuable.

And if they end up getting close to what they're really after, it's highly undesirable to end up in an arms race where only OpenAI's contributions are public.




> They just chose a misleading name in the first place.

It wasn't just the name, it was the whole rationale they presented for creating OpenAI.

Nick Bostrom was having his 15 minutes of fame with "Superintelligence" [1], and scenarios like an "AI arms race" and the sudden emergence of a super-intelligent singleton [2] were legion. OpenAI was founded on the premise that making leading-edge AI research public was the best way to mitigate that risk [3].

It was presented to the world as a "research institution which can prioritize a good outcome for all over its own self-interest" where "Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world" [4].

That lasted all of three years.

[1] https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

[2] https://www.nickbostrom.com/fut/singleton.html

[3] https://en.wikipedia.org/wiki/OpenAI#Strategy

[4] https://openai.com/blog/introducing-openai/


I don’t disagree with your summary of the facts; part of the premise of my comment is that I disagree with that stated rationale for starting OpenAI. I rather think that always publicizing bleeding-edge research would increase the risks.

Not sure if this is the main reason OpenAI went less open though, as I’m not an insider.



