
The board seems truly incompetent here, and looking at the member list, that isn't very surprising. A competent board would have sought legal and professional advice before taking a drastic step like this. Instead, the board treated it like a boxing match and tried to land a knockout punch with blunt language before the markets closed. This might be the most incompetent board of any organisation this size.



The major investors whose money is on the line and who are funding the venture, Microsoft, Sequoia, and Khosla, were not given advance warning or any input into how this would impact their investment.

I would definitely say the board screwed up.

https://www.forbes.com/sites/alexkonrad/2023/11/17/openai-in...


The board of the non-profit (the one that fired Sam) has no fiduciary duty to those investors, I believe. Microsoft invested in the for-profit OpenAI, which is owned by the non-profit. I don't know about the others.

The board has no responsibility to Microsoft whatsoever regarding this. Sam Altman structured it this way himself. Not to say that the board didn't screw up.


While this may be technically true, the reality is that when you take $10 billion from a company, there are strings attached. Consultation on a decision of this magnitude is one of those strings. You can choose to push ahead anyway once that's done, but dropping the news on them one minute before you pull the trigger is unacceptable, and MSFT will go for the throat here. At Microsoft's level, you can't be seen as a company that can be treated like this after investing that much money in any org.


Once you take in $10 billion, it's pretty much the opposite: legality is the only thing that matters.


Did they take a wire transfer for $10bn in cash, now sitting in their bank account? Or did they get a promise of various funding over N years, subject to milestones and conditions, in a variety of forms including cash, Azure credits, loan lines, etc.?

I'd imagine the latter, and that it can be easily yanked away.


You mean the latter, but yeah. Financing like that is doled out based on a number of things; it would be wildly irresponsible to do otherwise for reasons exactly like this.


Fixed, thanks!


No, that's not it; relationships play gigantic roles in large deals.

Besides, even if you had an outstanding contract for $10bn, a judge would not pull a "well technically, you did say <X> even though that's absurd, so they get all the money and you get nothing."


Depends what you mean. Legally they might be in the clear, but I guarantee that when you fuck around with billions of other people's money, it gets more complicated than that.


There are lots of other people and companies with $10 billion, though. Why does it have to be Microsoft? Even after this circus, OpenAI could still probably raise a ton of money from new entities if they wanted to. Maybe that is the point of this.


Totally true. One can even argue they are forbidden from discussing this with MS; they would be mixing up the interests of the non-profit and its for-profit subsidiary. Legally, it's only a change of control at the majority shareholder of a company MS has invested in. MS doesn't have a say, and pressuring the board could be highly illegal.


That Microsoft agreed to such a deal is negligence of the highest order.


It might have been the only deal on the table. Perhaps they thought the risk was worth it; good processes don't always lead to good outcomes. Perhaps they felt that the rights they gained to the GPT models were worth it even if they don't get direct influence over OpenAI.

Between Bing, O365, etc., it's possible they could recoup all of the value of their investment and more. At the very least, it significantly minimizes the downside.


As I understand it, they got all the model details, and most of their investment was actually cloud credits on Azure. So technically they can cancel those going forward if they want to and deal with whatever legal ramifications exist. All of GPT-4 (and other models) for probably $1-2B may not actually be a bad deal for them, even if that's all they get.


They put out a statement saying they have what they need. I don't see how Microsoft loses here. Either they get Altman back at OpenAI, get rid of the ethics crowd, and make bank, or they fund his new startup without the move-slow crowd and make bank. No matter what, they win.


We have no idea what the terms of the deal are. It's probably "up to" $20 billion.


How can a non-profit own a for-profit?

Honest question.


I'd say easily, especially outside the US. Check out Germany, for example:

- Bertelsmann Foundation, owns or is the majority shareholder of Bertelsmann

- Robert Bosch Foundation, owns or is the majority shareholder of Bosch

- Alfried Krupp von Bohlen und Halbach Foundation, owns or is the majority shareholder of Krupp

- Else Kröner Fresenius Foundation, owns or is the majority shareholder of Fresenius

- Zeppelin Foundation (yes, those Zeppelins...), owns or is the majority shareholder of ZF Friedrichshafen

- Carl Zeiss Foundation, owns or is the majority shareholder of Carl Zeiss and Schott

- Diehl Foundation, owns or is the majority shareholder of Diehl Aerospace

And a bunch more. A lot of you will never have heard of them, but all of them are multi-billion-dollar behemoths with thousands of subsidiaries and employees, and significant research and investment arms. And they love the fact that barely anyone knows them outside Germany.


Easy, they own shares. For example, the nonprofit Mormon church owns $47 billion in equity in publicly traded companies including Amazon, Exxon, Tesla, and Nvidia[1].

Nothing stopping a non-profit from owning all the shares in a for-profit.

[1] https://finance.yahoo.com/news/top-10-holdings-mormon-church...


You can do everything by the rules and still do the wrong thing.


Wrong by what metric? What if they believe the only way to fulfill their duty to the charter is for OpenAI to die? Why would that be wrong? Is it worse than it living on as the antithesis of itself, just so the investors can have a little more honey?


They don't have any duty to investors as far as governing the non-profit goes, but as majority shareholder of the for-profit subsidiary, the non-profit would still have a fiduciary duty to the subsidiary's minority shareholders.


They have duties not to dilute minority shareholders or specifically target them, but the majority can absolutely make decisions about executives, even if those decisions are perceived as harmful.


I'm surprised that none of these investors secured a board seat for themselves before handing over tens of billions. The board is closer to a friendship circle than a group of experienced business folks.


> The board is closer to a friendship circle than a group of experienced business folks.

Isn't this true for most of S.V.?


FOMO


Non-profit board, therefore for-profit investors have no say.


It was complete amateur hour for the board.

But that aside, how did so many clueless folks, who understand neither the technology nor the legalese, nor have enough intelligence/acumen to foresee the immediate impact of their actions, happen to be on the board of one of the most important tech companies?


I think when it started, it was not the most important tech company, just some open research effort.


Not many, and even fewer if you consider folks that have a good grasp of themselves: their psychology, their emotions (and how those can mislead them), and their heart.

IME, most folks at Anthropic, OpenAI, or wherever who are freaking out about things never defined the problem well, and typically were engaging with highly theoretical models as opposed to the real capabilities of a well-defined, accomplished (or clearly accomplishable) system. It was too triggering for me to consider roles there in the past, given that these were typically the folks I knew working there.

Sam may have added a lot of groundedness, but idk ofc bc I wasn’t there.


Is this a way of saying that AI safety is unnecessary?


It's a way of saying that what has historically been considered "studying AI safety" in fact bears little relation to real-life AIs and what may or may not make them more or less "safe".


Yes, with the addition that I do feel we deserve something better than what I perceive we've gotten so far, and that safety is super important; but I also don't work at OpenAI and am not Ilya, so idk.


Pretty sure that Sutskever understands the technology, and it looks like he persuaded the others.


>> A competent board should have asked for legal and professional advice...

I'll bite. How do you know they didn't?


Typically it would be framed amicably, without so much axe-grinding, particularly in a public release. Even ChatGPT itself would have written a more balanced release and advised against such shenanigans. I enjoy that irony.


That's the thing. Lawyers can give them the letter of the law but might have no idea how popular Sam was inside and outside the company, or how badly he was needed. And that's what really matters here.


Why does it matter to a board that sticks to the principles of the charter of a non-profit? Why would they look at anything other than the guiding principles?


Because their charter says their goal is to get to AGI, or something like that.

If 99% of their employees quit and their investors pull their support because Sam was fired, they're not getting anywhere and have failed to deliver on their charter.


>house collapses in 15mph wind

Why didn’t they hire a competent builder?

You:

>how do you know they didn't? It could be pure happenstance! All the nails could… could have been defective! Or something! *waves hands*


Enron had independent auditors and a law firm approving what they did.


I wonder if any of this is related to it being envisioned as a non-profit board, while in the past ~year the for-profit part has outgrown what they were really ready to handle.


Maybe they asked ChatGPT for legal advice.


Maybe they did, and it didn't help them. Guardrails for ChatGPT prevent it from predicting outcomes or providing any personalized advice. I asked it, and it just said to consult with counsel and have a succession plan.

>Predicting specific outcomes in a situation like the potential firing of a high-profile executive such as Sam Altman from OpenAI is quite complex and involves numerous variables. As an AI, I can't predict future events, but I can outline some possible considerations and general advice:
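For the curious, here's a minimal sketch of how one might pose that question programmatically; this assumes the openai v1 Python client, and the model name and prompt are illustrative, not what I actually ran:

    # Minimal sketch: asking ChatGPT for "should we fire the CEO" advice.
    # Assumes openai>=1.0; model name and prompt are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Our board is considering firing our high-profile CEO. "
                       "What outcome should we expect?",
        }],
    )

    # In practice the reply hedges much like the excerpt above: it declines
    # to predict outcomes and suggests counsel and succession planning.
    print(resp.choices[0].message.content)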


Surely there's a wholly uncensored ChatGPT 5 at OpenAI running on some engineering-sample H200 cluster with a terabyte of video RAM or something.


Better yet, Sutskever’s version with AGI!


I see what you did there.


Even one episode of Succession and they would have known better than to attempt this.


They're the board of a non-profit, not a Fortune 500 company. Everyone should just chill.


A non-profit that controls one of the most valuable private tech companies, one whose importance rivals a lot of F500 companies.


It didn't start out that way now, did it?


> Instead the board thought it was a boxing match

Or maybe chess[1].

[1]: https://www.youtube.com/watch?v=0cv9n0QbLUM


They almost certainly consulted both lawyers and ChatGPT and still proceeded with the dismissal. So, in a way, this could be a test of the alignment of ChatGPT (and corporate lawyers).

One scenario where both parties are fallible humans and their hands are forced: increased interest forces them to close down Plus signups because compute can't scale. Sam goes to Brockman and they decide to use compute meant for GPT-5 to try to scale for new users, without informing the board. Breaking that rule may be perfectly fine with GPT-4, but what if Sam does this again in the future when they have AGI on their hands?


>OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

>non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

From Forbes [1]

Adam D’Angelo, the CEO of answers site Quora, joined OpenAI’s board in April 2018. At the time, he wrote: “I continue to think that work toward general AI (with safety in mind) is both important and underappreciated.” In an interview with Forbes in January, D’Angelo argued that one of OpenAI’s strengths was its capped-profit business structure and nonprofit control. “There’s no outcome where this organization is one of the big five technology companies,” D’Angelo said. “This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”

Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)

McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.

Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.

More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.

[1] https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are...


That board is going to face a world of shit from Microsoft, Khosla, and other investors.

This isn't a university department. You fuck around with $100B+ of other people's money, you're gonna be in for it.


Sergei Frolov seems to be thriving these days.


Perhaps the AGI convinced the board to make a wild move like this as its first chess move.


I’ve mused that an advanced AGI would probably become suicidal after dealing with humans for a while and realizing there’s no escape. Maybe this is an attempt.



