
Technically no, but when 90% of their employees threatened to quit, they would just be the board of nothing.



The board was a non-profit board serving the mission. The mission was foremost; the employees were not. One comment a board member made was that if the company was destroyed, it would still be consistent with serving the mission. Which is right.

The fallout showed that non-profit missions can't coexist with for-profit incentives. The power exerted by investors, and by employees (who also stood to benefit from the recent 70B round they were about to have), was too much.

And any disclaimer the investors got when investing in OpenAI was meaningless. It reportedly stated that they would be wise to view their investment as charity, and that they could potentially lose everything. There was also an AGI clause saying that all financial arrangements, including those Microsoft and other investors had made when investing in the company, would be reconsidered. In the end it was all worthless. Link to a Wired article with interesting details: https://www.wired.com/story/what-openai-really-wants/


> The board was a non-profit board serving the mission. The mission was foremost; the employees were not.

They need employees to advance their stated mission.

> One comment a board member made was that if the company was destroyed, it would still be consistent with serving the mission. Which is right.

I mean, that's a nice sound bite and everything, but the only scenario where blowing up the company seems to be consistent with their mission is the scenario where OpenAI itself achieves a breakthrough in AGI and the board thinks that system cannot be made safe. Otherwise, to be relevant in guiding research towards AGI, they need to stay a going concern, and that means not running off 90% of the employee base.


> Otherwise, to be relevant in guiding research towards AGI, they need to stay a going concern, and that means not running off 90% of the employee base.

That's why they presumably agreed to find a solution. But at the same time it shows that, in essence, entities with for-profit incentives find a way to get what they want. There certainly needs to be more thought and discussion about governance, and about how we, collectively as a species or within each company individually, govern AI.


I don't really think we need more thought and discussion on creative structures for "governance" of this technology. We already have governance; we call them governments, and we elect a bunch of representatives to run them. We don't rely on a few people on a self-appointed non-profit board.


> One comment a board member made was that if the company was destroyed, it would still be consistent with serving the mission. Which is right.

I know you're quoting the (now-gone) board member, but this is a ridiculous take. By this standard, Google should have dissolved in 2000 ("Congrats everyone, we didn't be evil!"). Doctors would go away too ("Primum non nocere -- you're all dismissed!").


Indeed, it made no sense. But that's why I never attach any value to mission statements or principles of large entities: they are there as window dressing and preemptive whitewash. They never ever survive their first real test.


Yep, this is spot on. The entire concept of a mission-driven non-profit with a for-profit subsidiary just wasn't workable. It was a nice idea, a nice try, but an utter failure.

The silver lining is that this should clear the path to proper regulation, as it's now clear that this self-regulation approach was given a go, and just didn't work.


If it were a for-profit company, would you write that "profit is foremost and 90% of employees can leave"?



