Hacker News

> The board was a non profit board serving the mission. Mission was foremost. Employees are not.

They need employees to advance their stated mission.

> One of the comments a member made was, if the company was destroyed, it would still be consistent with serving the mission. Which is right.

I mean, that's a nice sound bite and everything, but the only scenario where blowing up the company seems consistent with their mission is one where OpenAI itself achieves a breakthrough in AGI and the board concludes that system cannot be made safe. Otherwise, to be relevant in guiding research towards AGI, they need to stay a going concern, and that means not running off 90% of the employee base.




> Otherwise, to be relevant in guiding research towards AGI, they need to stay a going concern, and that means not running off 90% of the employee base.

That's presumably why they agreed to find a solution. But at the same time, it shows that, in essence, entities with for-profit incentives find a way to get what they want. There certainly needs to be more thought and discussion about governance: how we govern AI, collectively as a species and within each company individually.


I don't really think we need more thought and discussion on creative structures for "governance" of this technology. We already have governance: we call them governments, and we elect representatives to run them. We don't rely on a few people on a self-appointed non-profit board.



