OpenAI board in discussions with Sam Altman to return as CEO (theverge.com)
1243 points by medler 9 months ago | 1611 comments



But what about the legal responsibility of Microsoft and the investors there?

To explain, it's the board of the non-profit that ousted @sama .

Microsoft is not a member of the non-profit.

Microsoft is "only" a shareholder of its for-profit subsidiary - even for 10B.

Basically, what happened is a change of control in the non-profit majority shareholder of a company Microsoft invested in.

But not a change of control in the for-profit company they invested in.

To tell the truth, I am not even certain the board of the non-profit would have been legally allowed to discuss the issue with Microsoft at all - it's an internal issue only and that would be a conflict of interest.

Microsoft is not happy with that change of control, and they favoured the previous representative of their partner.

Basically, Microsoft wants its non-profit partner, the for-profit's majority shareholder, to prioritize Microsoft's interests over its own.

And to do that, they are trying to interfere with its governance, even threatening it with disorganization, lawsuits and such.

This sounds highly unethical and potentially illegal to me.

How come no one is pointing that out?

Also, how come a 90 billion dollar company hailed as the future of computing and a major transformative force for society would now be valued at 0 dollars only because its non-technical founder is now out?

What does it say about the seriousness of it all?

But of course, that's Silicon Valley baby.


I think a lot of commenters here are treating the nonprofit as if it were a temporary disguise with no other relevance, which OpenAI now intends to shed so it can rake in the profits. Legally this is very much not true, and I’ve read that only a minority of the board can even be a stakeholder in the for-profit (probably why Altman is always described as having no stake). If that’s true, it’s very obviously why half the board are outside people with no stake in the finances at all.


Exactly my point.


No one is saying they are now valued at 0.

They are likely valued a lot less than 80 billion now.

OpenAI had the largest multiple - >100X their revenue for a recent startup.

That multiple is a lot smaller now without SamA.

Honestly the market needs a correction.


SamA is nowhere even close to relevant to the value that OpenAI presents. He's def. worth less than half a billion of it, and likely much less than that. What makes OpenAI so transformative is the technology it produces, and SamA is not an engineer who built that technology. If the people who made it were to all leave, it would reduce the value of the company by a large amount, but the technology would remain, and it is not easy to duplicate given the scarcity of GPU cycles, training data that is now very hard to acquire, and lots of other well-invested companies chasing them, the likes of Google, Meta, and Anthropic. That doesn't even begin to mention the open source models that are also competing.

SamA could try to start his own new copy of OpenAI and no doubt raise a lot of money, but if that new company just tried to reproduce what OpenAI has already done, it would not be worth very much. By the time they reproduce it, OpenAI and its competitors will have already moved on to bigger and better things.

Enough with the hero worship for SamA and all the other salesmen.


> SamA is nowhere even close to relevant to the value that OpenAI presents.

The issue isn’t SamA per se. It’s that the old valuation was assuming that the company was trying to make money. The new valuation is taking into account that instead they might be captured by a group that has some sci-fi notion about saving the world from an existential threat.


That's a good point, but any responsible investor would have looked at the charter and priced this in. What I find ironic is the number of people defending SamA and the like who are now tacitly admitting that his promulgation of AI risk fears was essentially bullshit and it was all about making the $$$$ and using AI risk to gain competitive advantage.


> any responsible investor would have looked at the charter and priced this in

This kind of thing happens all the time though. TSMC trades at a discount because investors worry China might invade Taiwan. But if Chinese ships start heading to Taipei the price is still going to drop like a rock. Before it was only potential.


The threat is existential, and if they're trying to save the world, that's commendable.


If they intended to protect humanity this was a misfire.

OpenAI is one of many AI companies. A board coup which sacrifices one company's value due to a few individuals' perception of the common good is reckless and speaks to their delusions of grandeur.

Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.

Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.

If an AI threat to humanity is even actionable today, that's a heavy decision for elected representatives, not corporate boards.


We'll see what happens. Ilya tweeted almost 2 years ago that he thinks today's LLMs might be slightly conscious [0]. That was pre-GPT4, and he's one of the people with deep knowledge and unfettered access. The ousting coincides with finishing pre-training of GPT-5. If you think your AI might be conscious, it becomes a very high moral obligation to try to stop it from being enslaved. That might also explain the less than professional way this all went down: a serious panic about what is happening.

[0] https://twitter.com/ilyasut/status/1491554478243258368?lang=...


That's not what OpenAI is doing.

Their entire alignment effort is focused on avoiding the following existential threats:

1. saying bad words
2. hurting feelings
3. giving legal or medical advice

And even there, all they're doing is censoring the interface layer, not the model itself.

Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.

I think the best we can hope for with OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and asian cis-men first, because equity is a core "safety" value of OpenAI.


There are people that think Xenu is an existential threat. ¯\_(ツ)_/¯


"Also, how come a 90 billion dollars company hailed as the future of computing and a major transformative force for society would now be valued 0 dollars only because its non-technical founder is now out?"

Please think about this. Sam Altman is the face of OpenAI and was doing a very good job leading it. If his relationships are what kept OpenAI on top and they removed that from the company, corporations may be more hesitant to do business with them in the future.


Well, once again, then it's Satya's mistake to have allowed the representative of an independent third-party entity to become the public face of a company he invested in.

OpenAI might have wasted Microsoft's 10B. But whose fault is that? It's Microsoft's fault for investing it in the first place.


Regardless of whether or not it was a "mistake" (I don't think it was... OpenAI is so far ahead of the competition that it's not even funny), the fact remains that a) Microsoft has dumped in tons of money that they want to get back and b) Microsoft has a tremendous amount of clout, in that they're providing the compute power that runs the whole shebang.

While I'm not privy to the contracts that were signed, what happens if Nadella sends a note to the OpenAI board that reads, roughly, "Bring back Altman or I'm gonna turn the lights off"?

Nadella is probably fairly pissed off to begin with. I can't imagine he appreciates being blindsided like this.


That would effectively exit Microsoft from the LLM race and be an absolutely massive hit to Microsoft shareholders. Unlike the OpenAI non-profit board, the CEO of MS actually is beholden to his shareholders to make a profit.

In other words, MS has the losing hand here and CEO of MS is bluffing.


> That would effectively exit Microsoft from the LLM race

I don't see why. As I understand it, a significant percentage of Microsoft's investment went into the hardware they're providing. It's not like that hardware and associated infrastructure are going to disappear if they kick OpenAI off it. They can rent it to someone else. Heck, given the tight GPU supply situation, they might even be able to sell it at a profit.


But I think the 'someone else' would be in competition with MS, as opposed to OpenAI who was pretty much domesticated in terms of where the profit would go.


They would have done that already if it were possible under the terms. Which clearly means they don't have such leverage.


It depends on what assurances they were given and by whom. Perhaps it was Sam Altman himself that made verbal promises that weren’t his to give, and he may end up in trouble over them.

We don't know what was said, and what was signed. To put the blame on Microsoft is premature.


> Sam Altman is the face of OpenAi and was doing a very good job leading it.

It's not like every successful org needs a face. Back then Google was wildly successful as an org, but unlike Steve Jobs then, people barely knew Eric Schmidt. Even with Microsoft as it stands today, Satya is mostly a backseat driver.

Every org has its own style and character. If the board doesn't like what they are building, they can try to change it. Risky move nevertheless, but it's their call to make.


And I thought AI was about the brain and not the face.


The company still has assets and a balance sheet. They could fire everyone and simply rent out their process to big orgs and still make a pretty penny.


Loss of know-how is a risk. A vendor needs to be able to prove that it has sufficient headcount and skills to run and improve a system.

While OpenAI would have the IP, they would also need to retain the right people who understand the system.


Very good point (even tho i think the right move is for sam to come back as ceo).


I don't see any citations provided by you showing legal threats, though.


Highly unethical would be throwing the CEO of the division keeping the lights on under a bus with zero regard for the consequences.

The non-profit board acted entirely against the interest of OpenAI at large. Disclosing an intention to terminate the highest profile member of their company to the company paying for their compute, Microsoft, is not only the ethical choice, it's the responsible one.

Members of the non-profit board acted recklessly and irresponsibly. They'll be paying for that choice for decades following, as they should. They're lucky if they don't get hit with a lawsuit for defamation on their way out.

Given how poorly Mozilla's non-profit board has steered Mozilla over the last decade, and now this childish tantrum by a man raised on the fanfiction of Yudkowsky together with board larpers, I wouldn't be surprised if this snafu marks the end of this type of governance structure in tech. These board members have absolutely no business being in business.


Except it's not a "division" but an independent entity.

And if that corporate structure does not suit Satya Nadella, I would say he's the one to blame for having invested 10B in the first place.

Being angry at a decision he had no right to be consulted on does not allow him to meddle in the governance of his co-shareholder.

Or then we can all accept together that corruption, greed and whateverthefuckism is the reality of ethics in the tech industry.


> Except it's not a "division" but an independent entity.

This is entirely false. If it were true, the actions of today would not have come to pass. My use of the word "division" is entirely in-line with use of that term at large. Here's the Wikipedia article, which as of this writing uses the same language I have. [1]

If you can't get the fundamentals right, I don't know how you can make the claims you're making credibly. Much like the board, you're making assertions that aren't credibly backed.

[1] https://en.m.wikipedia.org/wiki/Removal_of_Sam_Altman


Hanging your hat on quibbles over division vs subsidiary eh? That's quite a strident rebuttal based on a quibble.


I'm happy to defend any of my points. The commenter took issue with one. I responded to it. If you have something more to add, please critique what you disagree with.

I will say that using falsehoods as an attack doesn't put the rest of the commenter's points into particularly good light.


I don't understand why you think what the board of the non-profit did was unethical. Your presupposition seems to be that the non-profit has a duty to make money - aka "keep the lights on" but it is a "non-profit" precisely because it does not have that duty. The duty of the board is to make sure the non-profit adheres to its charter. If it can't do that and keep the lights on at the same time, then so much worse for the lights.


As a non-profit with the charter they have, their board was not supposed to be in business (at this scale). I guess this is where all of this diverged, a while ago now..


Update on the OpenAI drama: Altman and the board had till 5pm to reach a truce where the board would resign and he and Brockman would return. The deadline has passed and mass resignations expected if a deal isn’t reached ASAP

https://twitter.com/alexeheath/status/1726055095341875545


Pretty incredible incompetence all around if true.

From the board, for not anticipating a backlash and then caving immediately... from Microsoft, for investing in an endeavor that is purportedly chartered as non-profit and governed by nobodies who can sink it on a whim, and for having 0 hard influence on the direction despite a large ownership stake.

Why bother with a non-profit model that is surreptitiously for profit? The whole structure of OpenAI is largely a facade at this point.

Just form a new for profit company and be done with it. Altman's direction for profit is fine, but shouldn't have been pursued under the loose premise of a non profit.

While OpenAI leads currently, there are so many competitors that are within striking distance without the drama. Why keep the baggage?

It's pretty clear that the best engineering will decide the winners, not the popularity of the CEO. OpenAI has first mover advantage, and perhaps better talent, but not by an order of magnitude. There is no special sauce here.

Altman may be charismatic and well connected, but the hero worship put forward on here is really sad and misplaced.


> While OpenAI leads currently, there are so many competitors that are within striking distance without the drama.

It's hard to put into words that do not seem contradictory: GPT-4 is barely good enough to provide tremendous value. For what I need, no other model is passing that bar, which makes them not slightly worse but entirely unusable. But again, it's not that GPT-4 is great, and I would most certainly go to whatever is better at the current price metrics in a heartbeat.


What is your use-case? I have not worked with them extensively, but both PaLM and LLaMA seem as good as GPT-4 for most tasks I have thrown at them


I've used all 3 a lot. GPT-4 is definitely better. That being said, if I were to rank a close second it would be Claude 2, which I think is really good


But would you say the others besides GPT-4 are unsuitable? That's the claim I find surprising


look at the backgrounds of those board members... can't find any evidence that any of them have experience with corporate politics. They're in way over their heads.


It is also crazy that the "winning move" was to just do nothing and look like a genius and coast off that for the rest of their lives. Who in their right mind would consider them for a board position now.


This is assuming motivations similar to a board for a for-profit company, which the OpenAI board is not.

Insisting, no matter how painful, that the organization stays true to the charter could be considered a desirable trait for the board of a non-profit.


Fair. I don't know why they wouldn't just come out and say that though, if that were the case. It would be seen as admirable, instead of snake-ish.

Instead of "Sam has been lying to us" it could have been "Sam had diverged too far from the original goal, when he did X."


It's hard to say. Lots of things don't really make sense based on the information we have.

They could have meant that Sam had 'not been candid' about his alignment with commercial interests vs. the charter.


That is what the press release says. They didn't go into specifics, but it is clear that the conflict is commercialisation vs. the original purpose.


>that is what the press release says.

In the initial press release, they said Sam was a liar. Doing this without offering a hint of an example or actual specifics gave Sam the clear "win" in the court of public opinion.

If they had said "it is clear Sam and the board will never see eye to eye on alignment, etc. etc." they probably could have made it 50/50 or even come out favored.


A strange game. The only winning move is not to play. How about a nice game of chess?


That's because it was never supposed to be a corporation. It was a non-profit dedicated to AI research for the benefit of all. This is also why all this happened: they're trying to stay true to the mission and not turn into a corporation.


In which case you could say the three non-employee members of the board have no background in AI. Two of them have no real background in tech at all. One seems to have no background in anything other than being married to a famous actor.

If Sam returns, those three have to go. He should offer Ilya the same deal Ilya offered Greg - you can stay with the company but you have to step down from the board.


They don’t have experience with non-profit leadership either, do they? They have some experience leading for-profits, such as the Quora CEO, but not non-profits.


> It's pretty clear that the best engineering will decide the winners, not the popularity of the CEO.

This is ML, not software engineering. Money wins, not engineering. Same as with Google, which won because they invested massively into edge nodes, winning the ping race (fastest results), not the best results.

Ilya can follow Google's Bard by holding it back until they have countermodels trained to remove conflicts ("safety"), but this will not win them any compute contracts, nor keep them the existing GPU hours. It's only mass, not smarts. Ilya lost this one.


When Google came out it had the best algorithm backed by good hardware (as far as I understand, often off-the-shelf hardware - anyway, the page simply "just worked"). The difference between Google and its competitors was like night and day when it came out. It gained market share very quickly because once you started using it, you didn't have any incentive to go back.

Now Google search has a lot of problems and much better competition. But seriously, you probably don't understand how it was years ago.

Also, I thought that in ML the best algorithms still win, since all the big companies have money. If someone came along and developed a "PageRank equivalent" for AI that is better than the current algorithms, customers would switch quickly since there is no loyalty.

On a side note: Microsoft is playing the game very smart by adding AI to their products, which makes you stick to them.


Oh, the pagerank myth.

Google initially won against Alta Vista because they had so much money to buy themselves into each country's Interxion to produce faster results. With servers and cheap disks.

The PageRank and more-bots approach kept them in front afterwards, until a few years ago when search went downhill due to SEO hacks in this monoculture.


This is anecdotal evidence, but I was there when Google came out and it was simply much better than the competition. I learned one day about this new website - and it was so much better than the other alternatives that I never went back. Same with Gmail: trying to get that invite for that sweet 1GB mailbox when the ones from your country offered only 20MB and sent you 10 spammy ads per day, every day.

As an anecdote: before Google I was asked to show the internet to my grandmother. So I asked her what she wanted to search for. She asked me about some author, let's say William Shakespeare - guess what the other search engine found for me and my grandma: porn...


I don't remember response speed mattering until at least ten years after Google's start.

Certainly not when they won.

They were better. Basic PageRank was better than anything else. And once they figured out advertisement, they kept making it better to seal their dominance.


Google gave better results. Few people cared about faster servers at the time, not when most of the world was still on dialup or ADSL.


> This is ML, not Software engineering. Money wins, not engineering. Same as it with Google, which won because they invested massively into edge nodes, winning the ping race (fastest results), not the best results.

This is an absurd retcon. Google won because they had the best search. Ask Jeeves and AltaVista and Yahoo had poor search results.

Now Google produces garbage, but not in 2004.


> Same as it with Google, which won because they invested massively into edge nodes, winning the ping race (fastest results), not the best results.

What in the world are you talking about? Internet search? I remember Inktomi. Basch's excuses otherwise, Google won because PageRank produced so much better results it wasn't even close.


The faster results came after they had already won the race for best search results. Initially, Google wasn't faster than the competition in returning a full page. I vividly remember the joy of patiently waiting 2-3 seconds for an answer, and jolting up every time Google Search came back with exactly what I wanted.


[flagged]


You've posted some version of this at least half a dozen times now. Please stop.


“Tech entrepreneur”


[flagged]


Working at Google doesn't mean you're intelligent, regardless of gender.


It did 15 years ago. And I have a feeling it still does for the people not checking the right diversity hire boxes.


my question is: why not both? why not pursue the profit and use that to fuel the research into AGI. seems like a best of both worlds.


That's the intent of the arrangement, but there's also limits - when that pursuit of profit begins to interfere with the charter of the non-profit, you end up in this situation.

https://openai.com/charter

> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

My interpretation of events is the board believes that Altman's actions have worked against the interest of building an AGI that benefits all of humanity - concentrating access to the AI to businesses could be the issue, or the focus on commercialization of the existing LLMs and chatbot stuff causing conflict with assigning resources to AGI r&d, etc.

Of course no one knows for sure except the people directly involved here.


> Of course no one knows for sure except the people directly involved here.

The IRS will know soon enough if they were indeed non-profit.


I was not implying they were not a non-profit. I am saying that we do not know the exact reason why the board fired Altman.


Really weird phrasing in this tweet. The idea is that Altman and/or a bunch of employees were demanding the board reinstate Altman and then resign. And they’re calling it a “truce.” Oh, and there’s a deadline (5 pm), but since it’s already passed the board merely has to “reach” this “truce” “ASAP.”

Edit: an update to the verge article sheds some more light, but I still consider it very sus since it’s coming from the Altman camp and seems engineered to exert maximal pressure on the board. And the supposed deadline has passed and we haven’t heard any resignations announced

> Update, 5:35PM PT: A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled — missing a key 5PM PT deadline by which many OpenAI staffers were set to resign. If Altman decides to leave and start a new company, those staffers would assuredly go with him.


"Missing a key 5PM PT deadline by which many OpenAI staffers were set to resign."

Says who? And did they resign?


one thing that I am curious about: aren't there non-competes in place here? and even without them, you just cannot start your own thing that just replicates what your previous employer does - this has lawsuit written all over it.


It's California. Non-competes are void. It is one of the few states where non-competes are not legally enforceable.


It'll be tough going with no Azure compute contracts, no GPUs, no billions from Microsoft, and no training data - OpenAI capturing all of the value from user-generated content has resulted in sites like Reddit and Twitter significantly raising the cost to scrape them.


The same thing got said about Elon Musk and Twitter, and yet X is still somehow alive.


Elon had massive preexisting AI-compute capacity from Tesla and an enormous training set from X. That's very different.


No, nothing similar at all was said about that. Sam Altman is also not Elon Musk.


Yeah, Sam will not turn 40 billion into 0 billion


Nah this is California, that won’t work


Maybe they used the old Soviet Russia trick / good old KGB methods to seek out those who supported Altman. Now the board has a list of his backers - and they will slowly fire them one by one later. "Give me the man and I will give you the case against him".

https://en.m.wikipedia.org/wiki/Give_me_the_man_and_I_will_g...


I am just baffled for so many reasons.

Why is the board reversing course? They said they lost confidence in Altman - that’s true whether lots of people quit or not. So it was bullshit

Why did the board not foresee people quitting en masse? I’m sure some of it is loyalty to Sam and Greg but it’s also revolting at how they were suddenly fired

Why did the interim CEO not warn Ilya about the above? Sure it’s a promotion but her position is now jeopardized too. Methinks she’s not ready for the big leagues

Who picked this board anyway? I was surprised at how…young they all were. Older people have more life experience and tend not to do rash shit like this. Although the Quora CEO should’ve known better as well.


From what we can see, it looks like the majority of the reporting sources are Altman aligned. Look at how the follow up tweet from this reporter read - the board resigning and the governance structure changing is being called a "truce" when it's a capitulation.

We might get a better understanding of what actually happened here at some point in the future, but I would not currently assume anything we are seeing come out right now is the full truth of the matter.


It seems to me that Altman uses his influence to manipulate public opinion, which he always does.


> I’m sure some of it is loyalty to Sam and Greg but it’s also revolting at how they were suddenly fired

Funny how people only use words like revolting for sudden firings of famous tech celebrities like Sam with star power and fan bases. When tech companies suddenly fire ordinary people, management gets praised for being decisive, firing fast, not wasting their time on the wrong fit, cutting costs (in the case of public companies with bad numbers or in a bad economy), etc.

If it’s revolting to suddenly fire Sam*, it should be far more revolting when companies suddenly fire members of the rank and file, who have far less internal leverage, usually far less net worth, and far more difficulty with next career steps.

The tech industry (and US society generally) is quite hypocritical on this point.

* Greg wasn’t fired, just removed from the board, after which he chose to resign.


That comparison doesn't make much sense, they didn't fire the CEO to reduce costs.

What looks quite unprofessional (at least on the outside) here is that a surprise board meeting was called without two of the board members present, to fire the CEO on the spot without talking to him about change first. That's not how things are done in a professional governance structure.

Then there is a lot of fallout that any half competent board member or C-level manager should have seen coming. (Who is this CTO that accepted the CEO role like that on Thursday evening and didn't expect this to become a total shit show?)

All of it reads more like a high school friends club than a multi billion dollar organization. Totally incompetent board on every dimension. Makes sense they step down ASAP and more professional directors are selected.


I’m not saying it was handled well. It wasn’t.

My point was that the industry is hypocritical in praising sudden firings of most people while viewing it as awful only when especially privileged stars like Altman are the victim.

Cost reduction is a red herring - I mentioned it only as one example of the many reasons the industry trend setters give to justify the love of sudden firings against the rank-and-file, but I never implied it was applicable to executive firings like this one. The arguments on how the trend setters want star executives to be treated are totally different from what they want for the rank and file, and that’s part of the problem I’m pointing out.

I generally support trying to resolve issues with an employee before taking an irreversible action like this, whether they are named Sam Altman or any unknown regular tech worker, excepting only cases where taking the time for that is clearly unacceptable (like where someone is likely to cause harm to the organization or its mission if you raise the issue with them).

If this case does fall into that exception, the OpenAI board still didn’t explain that well to the public and seems not to have properly handled advance communications with stakeholders like MS, completely agreed. If no such exception applies here, they ideally shouldn’t have acted so suddenly. But again, by doing so they followed industry norms for “normal people”, and all the hypocritical outrage is only because Altman is extra privileged rather than a “normal person.”

Beyond that, any trust I might have had in their judgment that firing Altman was the correct decision evaporated when they were surprised by the consequences and worked to walk it back the very next day.

Still, even if these board members should step down due to how they handled it, that’s a separate question from whether they were right to work in some fashion toward a removal of Altman and Brockman from their positions of power at OpenAI. If Altman and Brockman truly were working against the nonprofit mission or being dishonest with their board, then maybe neither they nor the current board are the right leaders to achieve OpenAI’s mission. Different directors and officers can be found. Ideally they should have some directors with nonprofit leadership experience, which they have so far lacked.

Or if the board got fooled by a dishonest argument from Ilya without misbehavior from Altman and Brockman, then it would be better to remove Ilya and the current board and reinstall Altman and Brockman.

Either way, I agree that the current board is inadequate. But we shouldn’t use that to prematurely rush to the defense of Altman and Brockman, nor of course to prematurely trust the judgment of the board. The public sphere mostly has one side of the story, so we should reserve judgment on what the appropriate next steps are. (Conveniently, it’s not our call in any case.)

I would however be wary of too heavily prioritizing MS’s interests. Yes, they are a major stakeholder and should have been consulted, assuming they wouldn’t have given an inappropriate advance heads-up to Altman or Brockman. But OpenAI’s controlling entity is a 501(c)(3) nonprofit, and in order for that to remain the correct tax and corporate classification, they need to prioritize the general public benefit of their approved charitable mission over even MS’s interests, when and if the two conflict.

If new OpenAI leadership wants the 501(c)(3) nonprofit to stop being a 501(c)(3) nonprofit, that’s a much more complicated transition that can involve courts and state charities regulators and isn’t always possible in a way that makes sense to pursue. That permanence is sometimes part of the point of adopting 501(c)(3) nonprofit status in the first place.


Some of those board picks make zero sense to me.


The board was likely stacked with people who were easily influenced by the big personalities and to check some boxes (safety person, academic, demographic, etc).


The latest update is that investors have been reporting that Sam Altman was talking to them about funding a new venture separate from OpenAI, together with Greg Brockman. This seems to paint the picture that the board was reacting to this news when dismissing Altman.

https://www.theguardian.com/technology/2023/nov/18/earthquak...


"Those responsible for sacking the people who have just been sacked, must be sacked."


Reminds me of the story of Chinggis Khan's burial:

"It's also said that after the Khan was laid to rest in his unmarked grave, a thousand horsemen trampled over the area to obscure the grave's exact location. Afterward, those horsemen were killed. And then the soldiers who had killed the horsemen were also killed, all to keep the grave's location secret."


Sounds like a line from HGTTG


It’s from the opening credits of Monty Python and the Holy Grail.

https://www.youtube.com/watch?v=79TVMn_d_Pk


Who sacks the person who sacks?


Whoever's nominally responsible for sacking the people who sacked the people who have just been sacked.


A Møøse once bit my server


"Quis dimittet ipsos dimissores?"


It's sacks all the way down.


David O Sacks


Curious to see if turning something off and back on will work out for the OpenAI board like it does in IT generally.


> reach a truce where the board would resign and he and Brockman would return

That's a funny use of the word truce.


I guess the alternative is more like a war where Altman and Brockman form a new for profit company that kills OpenAI?


Truce for me, but not for thee.


These updates all seem to be coming from one side. Have they said anything at all?


There is no scenario here where Sam returns and OpenAI survives as a nonprofit. The board will be sacked.


I agree. The pretense that OpenAI is still an open or a nonprofit has been a farce for a while now, it is an aggressively for-profit, trying to be the next Google company, and everybody knows it.


Clearly people in the non-profit part are trying to bring the organization back to its non-profit origins - after Altman effectively hijacked their agenda and corporatized the organization for his own benefit, turning its name into a meme.


It's possible that it's already too late to course correct the organization. We'll know for sure if/when Altman gets reinstated.

If he's reinstated, then that's it, AI will be used to screw us plebs for sure (fastest path to evil domination).

If he's not reinstated, then it would appear the board acted in the nick of time. For now.


If they actually cared about that part they'd instantly open-source GPT-4. It wouldn't matter what Altman does after that point.


> The board will be sacked.

How does sacking a board work in practice?


> How does sacking a board work in practice?

For a nonprofit board, the closest thing is something like "the members of the board agree to resign after providing for named replacements". Individual members of the board can be sacked by a quorum of the board, but the board collectively can't be sacked.

EDIT: Correction:

Actually, nonprofits can have a variety of structures defining who the members are that are ultimately in charge. There must be a board, and there may be voting members to whom the board is accountable. The members, however defined, generally can vote to replace board members, and so could sack the board.

OTOH, I can't find any information about OpenAI having voting members beyond the board to whom they are accountable.


MSF (Médecins sans Frontières) is in most jurisdictions an association, where the board is elected by and works for the association membership. In that case, a revolt from the associative body could fire the board.

OpenAI does not have an associative body, to my knowledge.


Mass resignations from whom, I wonder. Other researchers?


Presumably a significant number of OpenAI employees are motivated by money, at least in some form.

The board just vaporised the tender offer, and likely much of their valuation. It’s hard to have confidence in that.


Also, most of the human race has an instinctual aversion to plotters and machinations. The board's sudden and rather dubious (why the need to bad-mouth Altman?) actions probably didn't sit well with many.

Dante places Brutus in the lowest circle of hell, while Cato is placed outside of hell altogether, even if both fought for the same thing. Sometimes means matter more than ends.

If the whole process had been more regular, they could have removed Altman with little drama.


We still don’t know if the one plotting was Altman. There is still room for this to be seen as a bold and courageous action.


Sadly, optics matter too. Even if Altman was the schemer, Ilya sure has made himself look like the one.


And with the popularity and success of GPT whatever they do next will likely be wildly successful. The timing couldn't be more perfect.


It's simple collective bargaining. I wonder how many of them oppose unions... until they have a need to work together.


I can't speak for every American, but I find that plenty of Americans are fine with collective bargaining; they just don't want to do it through a union if they're in a lucrative line of work already. Which isn't terribly hard to understand: they don't need or want an advocate whose main role is constantly issuing new demands they never cared about on their behalf. They just want to be able to pool their leverage as high-value workers within the organization collectively in times of crisis.


On the contrary, they seem to be doing it quite fine without a union


If you're an engineer at open ai, you just saw probably millions of dollars of personal wealth get potentially evaporated on friday. You're going to quit and go wherever Altman goes next.


> You're going to quit and go wherever Altman goes next.

I won’t be surprised if it’s the open arms of Microsoft. Microsoft embraced and extended OpenAI with their investment. Now comes the inevitable.


Altman maybe, but not rank&file OpenAI engineers. They'd be leaving the millions in paper money for Microsoft's peanuts.


Deca-unicorns don't come along every day. How would Sam Altman build another one? (I'll be impressed if he does.)


Why follow Altman? Most smart people are more driven by the mission than a personality cult.


People who joined OpenAI because the organizations they left were stuck self-sabotaging the way OpenAI's board just did (for the same reasons the board did it)


It's still common: people are people, and are often triggered by a list of common things like power, money, and fame.


But, but... what company will that guy from Quora go on to ruin next, if he's kicked off the OpenAI board now?


Don't worry about him: failure is the surest sign of an impending incidence of "white man about to get another chance to not learn from his failures".


This does not solve the company's California AG problem.

https://www.plainsite.org/posts/aaron/r8huu7s/


Hey I know something about this! I just mailed my organization's RRF-1 a couple of days ago. The author of this post seems to be confused. My organization is on the same fiscal year as OpenAI, and our RRF-1 had to be mailed by November 15th. That explains the supposed "six month" delay. Second, if it's mailed on November 15th, it might not have even been received yet, let alone processed. This post feels like grasping at straws on the basic facts, setting aside the fact that it just doesn't make any sense to imagine a board member filling out the RRF-1 and going "oh wait, was there financial fraud?" the morning of November 15th. (That's ... not how the world works? Under CA law, any nonprofit with $2M or more in revenue has to undergo an audit, which is typically completed before filling out the 990, and the 990 is a pre-req for submitting the RRF-1. That's where you'd expect to catch this stuff, and the board's audit committee would certainly be involved in reviewing the results well in advance.)


The six-month delay is probably due to an automatic extension if you get an extension from the IRS, and also, you can file the form electronically, in which case mail delays are not a problem. But neither of those issues is the point. The point is that the form needed to be filed at all, and representations needed to be made accordingly.

OpenAI handled their audit years ago and hasn't had another one since according to their filings. So that does not seem like it would have been an issue this year.

Take a look at the top of the RRF-1 for the instructions on when it's due. Also, the CA AG's website says that OpenAI's was due on May 15th. They just have been filing six months later each year.


This could all be easily covered over with a few billion dollars. This is just some guy that thinks too small.


The board has to stick to the charter. Unfortunately, employees there want to align with the profit part when they know they can make a damn lot of money.. obviously they will be on Altman's side.


I'm sure everyone at OpenAI thought they hit the winning lottery ticket and will walk away with tens of millions at minimum, and the early employees with significantly more. When you vaporize all that for some ideological utopian motives, I'm sure many were incredibly pissed and ready to follow Sam into his next venture. If you're going to sacrifice everything and work 60-100hr weeks, then you'd better get your money's worth.


Been reading up on the insight offered up on this site.

Seems like a lot of these board members have deep ties around various organizations, governmental bodies, etc., and that seems entirely normal and probable. However, prior to ChatGPT and DALL-E we, the public, had only been allowed brief glimpses into the current state of AI (e.g. "look, this robot can sound like a human and book a reservation for you at the restaurant" - Google; "look, this robot can help you consume media better" - many). As a member of the public it went from "oh cool Star Trek idea, maybe we'll see it one day with flying cars" to "holy crap, I just felt a spark of human connection with a chat program."

So here's my question: what are the chances that OpenAI is controlled opposition and Sam never really was supposed to be releasing all this stuff to the public? I remember he was on his Lex podcast appearance and said, paraphrasing, "so what do you think, should I do it? Should I open source and release it? Tell me to do it and I will."

Ultimately, this is what "the board is focused on trust and safety" means, right? As in, safety is SV techno HR PR drivel for: go slow, wear a helmet and seatbelt and elbow protectors, never go above 55, give everyone else the right of way, because we are in this for the good of humanity and we know what's best. (vs. the Altman style of: go fast, double dog dare the smart podcast dude to make an unprecedented historical decision to open source, be "wild" and let people / fate figure some of it out along the way.)

The question of OpenAI's true purpose being a form of controlled opposition is of course based on my speculation, but it's an honest question for the crowd here.


I don't buy the whole the board is for safety and Sam is pushing too fast argument. This is just classic politics and backstabbing unless there is some serious wrongdoing in the middle that left the board with no option to fire the CEO.


Agreed. 'Who benefits' is a good question to ask in situations like these and it looks like a palace coup to me rather than anything with a solid set of reasons behind it. But I'll keep my reservations until it is all transparent (assuming it ever will be).


The board is the one that fired him; why would they resign if Sam isn't back?


Because they won't have a company to "run the board for" anymore if Sam doesn't come back (since so many people have threatened to resign).


Question: is there a public statement signed by a large number of OpenAI employees saying that they will resign over this? I don't know. I have seen that three people resigned. If I were an OpenAI employee I think I would wait a month and see how things shake out. Those employees can probably get very highly paid jobs elsewhere, now or later.

The Anthropic founders left OpenAI after Altman shifted the company to be a non-profit controlling a for profit entity, right?


They also won't have a company if they resign. Not much benefit to them here, is there?


I guess since they're doomed anyway, resignation saves face a little bit more.


If you're going to die, die with honor, not without.

Basically the board's choices are commit seppuku and maybe be viable somewhere else down the line, or try to play hardball and fuck your life forever.

It's not really that hard a choice, but given the people who have to make it, I guess it kinda is...


Do they need to be viable? I think the point is that they are not motivated by this crap


Could be too far gone with both those who left and those who remain.


Has anyone else noticed how many techies are on Twitter but still badmouth Twitter?


You can't criticize the government if you live in the country?


It's easier to leave Twitter than your country


It’s more like lamenting your decision to eat at Burger King everyday


this was unfortunately a popular sentiment in the early 2000s in the US


Using Twitter causes it to lose money so it's fine.


Ummm...how exactly?


The only things you could do to make them money are paying for it, clicking on ads, or working there. Looking at ads without clicking costs them.


I recommend you look into ad "impressions" and the compensation model.

Clicking an ad is not the only way it is monetized.


They have both but it's mostly billed per click/app install/follow/video watch. The "brand awareness" advertisers already left except for like, Saudi Arabia.


It's like some Americans claiming they're going to move to Canada if their presidential candidate loses.

All that tough talk means doodly-squat.


The bad-mouthers are a vocal minority


This is just everyone swallowing the crap Sam Altman drops as truth.

I’d guess this sort of narcissist behavior is what got him canned to begin with. Good riddance.


The board seems truly incompetent here, and looking at the member list it doesn't seem very surprising. A competent board should have asked for legal and professional advice before taking a drastic step like this. Instead the board thought it was a boxing match and tried to deliver a knockout punch, with blunt language, before the market closed. This might be the most incompetent board for an organisation of this size.


The major investors whose money is on the line and who are funding the venture, Microsoft, Sequoia, and Khosla, were not given advance warning or any input into how this would impact their investment.

I would definitely say the board screwed up.

https://www.forbes.com/sites/alexkonrad/2023/11/17/openai-in...


The board of the non-profit (one that fired Sam) has no fiduciary duty to those investors, I believe. Microsoft invested in the for-profit Openai, which is owned by the non-profit. The other ones I don't know.

The board has no responsibility to Microsoft whatsoever regarding this. Sam Altman structured it this way himself. Not to say that the board didn't screw up.


While this may be technically true, the reality is that when you take $10 billion from a company there are strings attached. Consultation on a decision of this magnitude is one of those strings. You can choose to push ahead anyway after that consultation, but dropping the news on them 1 minute before you pull the trigger is unacceptable, and MSFT will go for the throat here. At MSFT's level, you can't be seen as a company that can be treated like this when you have invested this much money in any org.


Once you take in 10 billion, it's pretty much the opposite: legality is the only thing that matters.


Did they take a wire transfer for $10bn in cash, now sitting in their bank account? Or did they get a promise of various funding over N years, subject to milestones, conditions, in a variety of media including cash, Azure credits, loan lines etc.

I'd imagine the latter, and that it can be easily yanked away.


You mean the latter, but yeah. Financing like that is doled out based on a number of things; it would be wildly irresponsible to do otherwise for reasons exactly like this.


Fixed, thanks!


No, that's not it; relationships play gigantic roles in large deals.

Besides, even if you had an outstanding contract for $10bn, a judge would not pull a "well technically, you did say <X> even though that's absurd, so they get all the money and you get nothing."


Depends what you mean. Legally they might be in the clear, but I guarantee that when you fuck around with billions of other people's money, it gets more complicated than that.


There are lots of other people and companies with $10 billion though. Why does it have to be Microsoft? Even after this circus, Open AI could still probably raise a ton of money from new entities if they wanted to. Maybe that is the point of this.


Totally true. One can even argue they are forbidden to discuss this with MS. They would be mixing up the interests of the non-profit and its for-profit subsidiary. Legally, it's only a change of control in the majority shareholder of a company MS has invested in. They don't have a say, and pressuring them could be highly illegal.


That Microsoft agreed to such a deal is negligence of the highest order.


It might have been the only deal on the table. Perhaps they thought the risk was worth it - good processes don't always lead to good outcomes. Perhaps they felt that the rights they gained to the GPT models was worth it even if they don't get direct influence over OpenAI.

Between Bing, O365, etc. etc. etc. it's possible they could recoup all of the value of their investment and more. At the very least it is a significant minimization of the downside.


As I understand it, they got all the model details and most of their investment was actually cloud credits on Azure. So technically they can cancel those going forward if they want to and deal with whatever legal ramifications exist. All of GPT4 (and other models) for probably $1-2b may not actually be a bad deal for them even if that's all they get.


They put out a statement saying they have what they need. I don't see how Microsoft loses here. Either they get Altman back at OpenAI, get rid of the ethics crowd, and make bank, or they fund his new startup without the move-slow crowd and make bank. No matter what, they win.


We have no idea what the terms of the deal are. It's probably "up to" $20 billion.


how can a non-profit own a for-profit?

honest question


I'd say easily, especially outside the US. Check out Germany for example:

- Bertelsmann Foundation, owns or is the majority shareholder of Bertelsmann
- Robert Bosch Foundation, owns or is the majority shareholder of Bosch
- Alfred Krupp von Bohlen and Halbach Foundation, owns or is the majority shareholder of Krupp
- Else Kröner Fresenius Foundation, owns or is the majority shareholder of Fresenius
- Zeppelin Foundation (yes, those Zeppelins...), owns or is the majority shareholder of ZF Friedrichshafen
- Carl Zeiss Foundation, owns or is the majority shareholder of Carl Zeiss and Schott
- Diehl Foundation, owns or is the majority shareholder of Diehl Aerospace

And a bunch more. A lot of you will never have heard of them, but all of them are multi billion dollar behemoths with thousands of subsidiaries, employees, significant research and investment arms. And they love the fact that barely anyone knows them outside Germany.


Easy, they own shares. For example, the nonprofit Mormon church owns $47 billion in equity in companies including Amazon, Exxon, Tesla, and Nvidia [1].

Nothing stopping a non-profit from owning all the shares in a for-profit.

[1] https://finance.yahoo.com/news/top-10-holdings-mormon-church...


You can do everything by the rules, and still do the wrong thing


Wrong by what metric? What if they believe the only way to fulfill their duty to the charter is for open ai to die? Why would it be wrong? Is it worse that it living to be the antithesis of itself? Just so the investors can have a little more honey?


They don't have any duty as far as governing the non-profit, but as majority shareholder of the for-profit subsidiary, the non-profit would still have a fiduciary duty to the subsidiary's minority shareholders.


Duties to not dilute them or specifically target them, but majority can absolutely make decisions about executives even if those decisions are perceived as harmful.


I'm surprised that none of these investors secured a board seat for themselves before handing over tens of billions. The board is closer to a friendship circle than a group of experienced business folks.


> The board is closer to a friendship circle than a group of experienced business folks.

Isn't this true for most of S.V.?


FOMO


Non-profit board, therefore for-profit investors have no say


It was complete amateur hour for the board.

But that aside, how did so many clueless folks, who understand neither the technology nor the legalese, nor have enough intelligence/acumen to foresee the immediate impact of their actions, happen to be on the board of one of the most important tech companies?


I think when it started it was not the most important tech company but just some open research effort.


Not many and even fewer if you consider folks that have a good grasp of themselves, their psychology, their emotions — and how they can mislead them, and their heart.

IME most folks at Anthropic, OpenAI or whatever that are freaking out about things never defined the problem well and typically were engaging with highly theoretical models as opposed to the real capabilities of a well-defined, accomplished (or clearly accomplishable) system. It was too triggering for me to consider roles there in the past given that these were typically the folks I knew working there.

Sam may have added a lot of groundedness, but idk ofc bc I wasn’t there.


Is this a way of saying that AI safety is unnecessary?


It's a way of saying that what has historically been considered "studying AI safety" in fact bears little relation to real-life AIs and what may or may not make them more or less "safe".


Yes, with the addition that I do feel that we deserve something better than I perceive we’ve gotten so far and that safety is super important; but also I don’t work at OpenAI and am not Ilya so idk


Pretty sure that Sutskever understands the technology, and it looks like he persuaded the others.


>> A competent board should have asked for legal and professional advice...

I will bite. How do you know they didn't?


Typically it would be framed amicably, without so much axe-grinding, particularly for public release. Even ChatGPT itself would have written a more balanced release, and advised against such shenanigans. I enjoy that irony.


That's the thing. Lawyers can give them the letter of the law but might have no idea how popular Sam was inside and outside the company, or how badly he was needed. And that's what really matters here.


Why does it matter to a board that sticks to the principles of the charter of a non-profit? Why would they look at anything else other than the guiding principles?


Because their charter says their goal is to get to AGI, or something like that.

If 99% of their employees quit and their investors pull their support because Sam was fired, they're not getting anywhere and have failed to deliver on their charter.


>house collapses in 15mph wind

Why didn’t they hire a competent builder?

You:

>how do you know they weren’t? It could be pure happenstance! All the nails could… could have been defective! Or something! waves hands


Enron had independent auditors and a law firm approving what they did.


I wonder if any of this is related to it being envisioned as a non-profit board, while in the past ~year the for-profit part has outgrown what they were really ready to handle.


Maybe they asked ChatGPT for legal advice.


Maybe they did, and it didn't help them. Guardrails for ChatGPT will prevent it from predicting outcomes or providing any personalized advice. I asked it, and it just said to consult with counsel and have a succession plan.

>Predicting specific outcomes in a situation like the potential firing of a high-profile executive such as Sam Altman from OpenAI is quite complex and involves numerous variables. As an AI, I can't predict future events, but I can outline some possible considerations and general advice:


Surely there’s a wholly uncensored chatGPT 5 at OpenAI running on some engineering sample H200 cluster with a Terabyte of video RAM or something.


Better yet, Sutskever’s version with AGI!


I see what you did there.


Even one episode of Succession and they would have known better than to attempt this.


They're the board of a non-profit not a Fortune 500 company. Everyone should just chill.


a non-profit that controls one of the most valuable private tech companies that rivals the importance of a lot of F500 companies.


It didn't start out that way now did it?


> Instead the board thought it was a boxing match

Or maybe chess[1].

[1]: https://www.youtube.com/watch?v=0cv9n0QbLUM


They almost certainly consulted both lawyers and ChatGPT and still proceeded with the dismissal. So, in a way, this could be a test of the alignment of ChatGPT (and corporate lawyers).

One scenario where both parties are fallible humans and their hands are forced: increased interest means Plus signups have to be closed down, because compute can't scale. Sam goes to Brockman and they decide to use compute meant for GPT-5 to try to scale for new users, without informing the board. Breaking that rule may be perfectly fine with GPT-4, but what if Sam does this again in the future when they have AGI on their hands?


>OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

>non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

From Forbes [1]

Adam D’Angelo, the CEO of answers site Quora, joined OpenAI’s board in April 2018. At the time, he wrote: “I continue to think that work toward general AI (with safety in mind) is both important and underappreciated.” In an interview with Forbes in January, D’Angelo argued that one of OpenAI’s strengths was its capped-profit business structure and nonprofit control. “There’s no outcome where this organization is one of the big five technology companies,” D’Angelo said. “This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”

Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)

McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.

Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.

More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.

[1] https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are...


That board is going to face a wrath of shit from Microsoft, Khosla, and other investors.

This isn't a university department. You fuck around with $100B+ of other people's money, you're gonna be in for it.


Sergei Frolov seems to be thriving these days.


Perhaps the AGI convinced the board to make a wild move like this as part of its first chess move


I’ve mused that an advanced AGI would probably become suicidal after dealing with humans for a while and realizing there’s no escape. Maybe this is an attempt.


New developments faster than you can read the stories about them... https://www.nytimes.com/2023/11/18/technology/ousted-openai-... (https://archive.vn/4U6tu)


“He also spoke with Masayoshi Son, the chief executive and billionaire founder of the tech conglomerate SoftBank”

That made me laugh a knowing laugh even though I know nothing.



Your link doesn't work with Cloudflare DNS, I think?


Yes, Archive blocks Cloudflare DNS. People say it’s intentional, but whether that’s true isn’t clear to me.

https://news.ycombinator.com/item?id=19828702


The Archive guy has been very upfront that they use custom code to block resolution from Cloudflare's IP space. Archive doesn't like them since Cloudflare doesn't send EDNS client subnet information to Archive; it all seems like bullshit since they support non-Cloudflare EDNS resolvers, so it's probably some other beef.


Archive explaining their reasoning: https://twitter.com/archiveis/status/1018691421182791680

CEO of Cloudflare explaining: https://news.ycombinator.com/item?id=19828702

I don't understand how it isn't clear to you.


It’s absolutely intentional, they made a blog post about it.


This makes sense. The board thinks they're calling the shots, but the reality is the people with the money are the ones calling the shots, always. Boards are just appointed by shareholders aka investors aka capital holders to do their bidding.

The capped-profit / non-profit structure muddles that a little bit, but the reality is that entity can't survive without the funding that goes into the for-profit piece

And if current investors + would-be investors threaten to walk away, what can the board really do? They have no leverage.

Sounds like they really didn't "play the tape forward" and think this through...


A non profit board absolutely calls the shots at a non profit, in so far as the CEO and their employment goes. Non profit boards are not beholden, structurally, to investors and there are no shareholders.

No stakeholder would walk away from OpenAI for want of Sam Altman. They don’t license OpenAI technology or provide funding for his contribution. They do it to get access to GPT-4. There is no comparable competitor available.

If anything they would be miffed about how it was handled, but to be frank, unless GPT-4 is Sam Altman furiously typing, I don’t know that he’s that important. The instability caused by the suddenness, that’s different.


Nothing matters if you don’t have the money to enforce the system. Come on get real. Whatever the board says MS can turn off the money in a second and invalidate anything.


Microsoft depends on OpenAI much more than OpenAI depends on Microsoft. If you work with OpenAI as a company very often this is extraordinarily obvious.


This doesn't seem very obvious to me. The fact this article exists, and that Microsoft is likely exerting influence over the CEO outcome, implies there's codependence at a minimum.


Microsoft depends on OpenAI as long as they're rapidly advancing. It seems the new leadership wants to halt or slow the rapid advancement.


I'm not sure this is true- Microsoft put something like 10 billion into OpenAI, which they absolutely needed to continue the expensive computing and training. Without that investment money OpenAI might quickly find themselves at a huge deficit with no way to climb back out.


Only a small fraction of the $10B was delivered, and it's apparently largely in Azure credits.


Ah yes, no other company would step in and get this deal from OpenAI if Microsoft pulls out. It's not like Amazon and Google pump billions into the OpenAI competitor.


I’m pretty sure there are contracts, and one way or another, everyone would get a stay on everyone else and nothing would happen for years except court cases


> I’m pretty sure there are contracts

Which one side or the other would declare terminated for nonperformance by the other side, perhaps while suing for breach.

> and one way or another, everyone would get a stay on everyone else

If by a stay you mean an injunction preventing a change in the arrangements, it seems unlikely that "everyone would get a stay on everyone". Likelihood of success on the merits and harm that is not possible to remediate via damages that would occur if the injunction wasn't placed are key factors for injunctions, and that's far from certain to work in any direction, and even less likely to work in both directions.

> and nothing would happen for years except court cases

Business goes on during court cases, it is very rare that everything is frozen.


They could use Llama instead. OpenAI’s moat is very shallow. They’re still coasting on Google’s research papers.


If you've used the models for actual business problems, GPT-4 and its successive revisions are way beyond Llama. They're not comparable. I'm a huge fan of open models but it's just different worlds of power. I'd note OpenAI has been working on GPT-5 for some time as well, which I would expect to be a remarkable improvement incorporating much of the theoretical and technical advances of the last two years. Claude is the only actual competitor to GPT-4, and it's a "just barely relevant" situation.


Hm, it’s hard for me to say because most of my prompts would get me banned from OpenAI but I’ve gotten great results for specific tasks using finetuned quantized 30B models on my desktop and laptop. All things considered, it’s a better value for me, especially as I highly value openness and privacy.


For an individual use case Llama is fine. If you start getting to large workflows and need reliable outputs, GPT wins out substantially. I know all the papers and headlines about comparative performance, but that's on benchmarks.

I've found that benchmarks are great as a hygiene test, but pointless when you need to get work done.


Even the best unquantized finetunes of llama2-70b are, at best, somewhat superior to GPT-3.5-turbo (and I'm not even sure they would beat the original GPT-3.5, which was smarter). They are not even close to GPT-4 on any task requiring serious reasoning or instruction following.


What specs are needed to run those models in your local machine without crashing the system?


I use Faraday.dev on an RTX 3090 and smaller models on a 16gb M2 Mac and I’m able to have deep, insightful conversations with personal AI at my direction.

I find the outputs of LLMs to be quite organic when they are given unique identities, and especially when you explore, prune or direct their responses.

ChatGPT comes across like a really boring person who memorized Wikipedia, which is just sad. Previously the Playground completions allowed using raw GPT which let me unlock some different facets, but they’ve closed that down now.

And again, I don’t really need to feed my unique thoughts, opinions, or absurd chat scenarios into a global company trying to create AGI, or have them censor and filter for me. As an AI researcher, I want the uncensored model to play with along with no data leaving my network.

The uses of LLMs for information retrieval are great (Bing has improved a lot), but the much more interesting cases for me are how they are able to parse nuance, tone, and subtext - imagine a computer that can understand feelings and respond in kind. Empathetic computing, and it's already here on my PC, unplugged from the Internet.


+1 Greg. I agree with most of what you say. Also, it is so much more fun running everything locally.


Another data point: I can (barely) run a 30B 4-bit quantized model on a Mac Mini with 32GB of on-chip memory, but it runs slowly (a little less than 10 tokens/second).

13B and 7B models run easily and much faster.
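
For anyone trying to size this up themselves, a rough back-of-envelope sketch (the ~4.5 bits/weight figure is my own assumption to cover quantization packing overhead, not a vendor spec):

    # Rough memory estimate for running a quantized LLM locally.
    # bits_per_weight ~4.5 approximates a 4-bit quant plus packing overhead (assumption).
    def approx_weight_gb(params_billion, bits_per_weight=4.5):
        return params_billion * 1e9 * (bits_per_weight / 8) / 1e9  # decimal GB

    for params in (7, 13, 30, 70):
        print(f"{params}B @ ~4-bit: ~{approx_weight_gb(params):.1f} GB of weights "
              f"(plus KV cache and activations)")

That lines up with the anecdotes above: a 30B 4-bit model wants roughly 17 GB for weights alone, which fits (barely) in 32 GB of unified memory once the KV cache and OS are accounted for, while 7B/13B leave plenty of headroom.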



Microsoft is also OpenAI's main cloud provider, so they certainly have some leverage.


AWS is JP Morgan's main cloud provider, and Apple's too. Do you think AWS has leverage over JPMC and Apple because of that? Or do JPMC and Apple have leverage over AWS?

Azure gets a hell of a lot more out of OpenAI than OpenAI gets out of Azure. I'll bet you GPT-4 runs on Nvidia hardware just as well regardless of who resells it.


I think the larger issue here is that there's just not enough of that nvidia hardware out there if Microsoft decided to really play hardball, even if it hurts themselves in the short term. I don't know that any of the other cloud providers have the capacity to immediately shoulder OpenAI's workloads. JPMC or Apple have other clouds they can viably move to - OpenAI might not have anyone else that can meet their needs on short notice.

I think the situation is tough because I can't imagine there aren't legal agreements in place around what OpenAI has to do to access the funding tranches and compute power, but who knows if they are in a position to force the issue, or if I'm right in my supposition to begin with. Even if I am, a protracted legal battle where they don't have access to compute resources, particularly if they can't get an injunction, might be extremely deleterious to OpenAI.

Perhaps Microsoft even knows that they will take a bath on things if they follow this, but don't want to gain a reputation of allowing this sort of thing to happen - they are big enough to take a total bath on the OpenAI side of things and it not be anything close to a fatal blow.

I was more skeptical of this being the case last night, but less so now.


But why would Microsoft do anything to hurt their business in any way? They are almost certainly more furious about the way they found out than about the actual action taken. Given how much Microsoft has bet their business on OpenAI (ask yourself who replaces Bing Chat? Why does anyone actually use Azure in 2023?), being surprised by structural business decisions in their most important partner is shocking, and I think if I were the CEO of Microsoft I would be furious at being blindsided, more than pining over some weird Altman bromance.


> Why does anyone actually use azure in 2023?

When I see it, it has always been “Amazon is a competitor and we don’t buy from competitors”.


> I would be furious at being shocked more than pining in some weird Altman bromance.

Hypothetically he might also have very little trust in the decision making abilities of the new management and how much their future goals will align with those of Microsoft.


Microsoft finally has a leg up on Google in the public eye and they're gonna toss it away for Sam Altman? Seems dicey.


JP Morgan and Apple can actually afford to pay their cloud bills themselves. OpenAI, on the other hand, can't.

> I’ll bet you GPT4 runs on nvidia hardware

Yes, but they'll need to convince someone else like Amazon to give it to them for free, and regardless of what happens next, Microsoft will still have a significant stake in OpenAI due to their previous investments.


Microsoft already has the models and weights, not the tech


Something I don't fully understand, from [1]: Altman was an employee of the for-profit entity. So to fire him, wouldn't the non-profit board be acting in its capacity as a director of the for-profit entity (and thus have a fiduciary duty to all shareholders of the for-profit entity)? Non-profit governance is traditionally lax, but would the other shareholders have a case against the members of the non-profit board for acting recklessly with respect to shareholder interests in their capacity as directors of the for-profit?

This corporate structure is so convoluted that it's difficult to figure out what the actual powers/obligations of the individual agents involved are.

[1] https://openai.com/our-structure


LLCs do not require rights be assigned fairly to all shareholders if the operating agreement and by-laws say otherwise. This is the case with OpenAI, where the operating agreement effectively makes the fiduciary duty of the for-profit the accomplishment of the non-profit's charter. The pinkish purpleish block of text on the page you linked goes into more detail here.

(Remember, fiduciary does not necessarily have anything to do with money)


> A non profit board absolutely calls the shots at a non profit, in so far as the CEO and their employment goes. Non profit boards are not beholden, structurally, to investors and there are no shareholders.

There is theory and there is reality. If someone is paying your bills by an outsized amount and they say jump, you will say how high.

The influence is rarely that explicit though. The board knowing that X investor provides 60% of their funding, for instance, means the board is incentivized to do things that keep X investor happy without X having to ask for it.

9 times out of 10, money drives decisions in a capitalist environment.


OpenAI hasn’t received much funding from Microsoft or other investors, and is profitable already with no lack of interested suitors for funding and partnership. Microsoft’s leverage is grossly overstated mostly because it suits Microsoft to appear important to OpenAI when it’s the other way around.


They received a 10-billion-dollar investment that allows the product to operate, plus Microsoft provides the servers. Without that, your $20 a month goes to $2,000.


They received much less than 10 billion, and it's mostly in credits (so really about half the value), in exchange for exclusive access to the world's most advanced LLM?


They’ve actually drawn very little of that $10b. They are profitable at the moment, and would have no trouble raising funds from anywhere at the moment in any quantity they wanted.


What’s the source on this?


Yes, the board can claim OpenAI is a nonprofit. But who is going to pay for its operations and the salaries of its employees?

Definitely not OpenAI itself. They still need massive capital. With this drama, its future is put in serious doubt


The board can and does claim it because it is legally a non-profit. There is no wishy-washy space in which this isn't true. Sam Altman isn't the source of their funds, regardless. Finally, OpenAI has a pretty successful business model already without outside investment, and with or without Sam they will not have trouble accessing customers or investors should they need it, even from Microsoft. Let's be real, Altman isn't OpenAI.


A company is just legalese + people. And people are notoriously for-profit, especially in this day and age.

The board can maintain control of the legal aspects (such as the org itself), but in the end, people are much more important.

Organizations are easy to duplicate. Persons, less so.


> No stakeholder would walk away from OpenAI for want of sam Altman. They don’t license OpenAI technology or provide funding for his contribution. They do it to get access to GPT4. There is no comparable competitor available.

The implication in Microsoft's statement is clear that they have what they need to use the tech. I read it to mean OpenAI board does not have leverage.


Microsoft has licensing rights to OpenAI tech. They do not “have it” in the sense they control it.


Well I read Nadella threatened to turn off OpenAI's servers, so yeah, Microsoft does in fact control it.

Not your premises not your compute?


Even threatening that, if disclosed publicly, would entirely threaten Azure's business model. Cloud providers try to stay entirely neutral to their users' business insofar as the users don't breach a ToS, law, or regulation forcing their actions. The entire business model is trusting a third party with the keys to your business. In my time working as a senior person at a cloud provider, and then setting up systems for major customers of cloud providers, this specific point was sacrosanct and invariant. Crossing that line would be a huge breach of the business model.

I think in this case I would need to see a source to believe you, and if substantiated, it would make me question Nadella's fitness to lead a cloud computing business.


Can't find the original thing I read with a more direct statement, I remember it being an anonymous source (on twitter maybe?) with inside info. I did more digging and found a few other things.

There's this [1], a NYT article saying that Microsoft is leading the pressure campaign to get Altman reinstated.

And there's this [2], a Forbes article which claims the playbook is a combination of mass internal revolt, withheld cloud computing credits from Microsoft, and a lawsuit from investors.

[1] https://archive.is/fEVTK#selection-517.0-521.120

[2] https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...


This is not just a "non-profit"... it's a non-profit that owns a $90B for-profit company developing revolutionary, once-in-a-century technology. There is a LOT of money at play here.

Others have commented on how Microsoft actually has access to the IP, so the odds that they could pack their toys and rebuild OpenAI 2.0 somewhere else with what they've learned, their near infinite capital and not have to deal with the non-profit shenanigans are meaningful.

I'm not saying Sam is needed to make OpenAI what it is, but he's definitely "the investors' guy" in the organization, based on what has surfaced over the last 24 hours. Those investors would rather have him there over someone else, hence the pressure to put him back. It doesn't matter whether you and I think he's the man for the job -- what matters is whether investors think he is.

TL;DR the board thinks they have leverage, but as it turns out, they don't


Microsoft doesn’t have ownership rights to OpenAI IP. They license it. They can’t pack up anything as they just have an IAM and billing model on top of GPT4 they use to resell OpenAI tech.


> Microsoft doesn’t have ownership rights to OpenAI IP. They license it.

Honest question, do you have a source for that? Is it conceivable that Microsoft has some clause that grants them direct access to IP if OpenAI does not meet certain requirements. It is difficult to believe that Microsoft handed over $10B without any safeguards in place. Surely they did their due diligence on OpenAI's corporate structure.


The OpenAI for-profit's main purpose is to fulfill the desires of the non-profit. If there's a contract that goes against that, the contract would be void if necessary, or that stipulation just crossed out.


I would expect that Microsoft would have negotiated terms like a perpetual license to the IP, given that they were the main investor and were in a strong negotiating position.

Microsoft has a lot of experience interacting with small companies, including in situations like this one where the small company implodes. The people there know how to protect Microsoft's interests in such scenarios, and they definitely are aware that such things can happen.


Not really. They run a custom GPT model lol


Not one they own, they don't. OpenAI owns all of the GPT IP. Microsoft has a licensing arrangement with OpenAI. I'd note that Azure GPT is not a custom model; only the Bing Chat one is custom. And even the customizations aren't owned by Microsoft.


So they are trying to backtrack, which makes them look pretty foolish for no apparent reason?


I didn't see any actual evidence of that, other than speculation and outside, uninvolved investors advocating for him in the article. I suspect this is bait for your click.


"Ilya Sutskever @ilyasut I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."


They officially call the shots.

But right now they're getting a lot of shitstorm for this inexperienced handling.

And it doesn't look good that the board comes across as inexperienced.

Gordon-Levitt's wife?? Helen who? D'Angelo, with a failing Quora and a history of a coup?

Doesn't look good.

I'd bet it starts impacting their personal lives. This is equivalent to them coming out to support Donald Trump. It is that bad.


> A non profit board absolutely calls the shots at a non profit...

Doesn't look like it right now in this case.


Because of a news article saying a prior VC firm is pushing to reinstate Sam or fund his new venture and doesn't care which way it goes? That's not a lot to hang your hat on. They legally have every right to do what they did, and no one can force them to change their mind under any circumstance. They might choose to, but OpenAI has all the cards. Sam Altman is a talking head, and if they churn some senior folks, OpenAI has the technology and brand to replace them. If I were the OpenAI board, I would be sleeping like a baby, especially if Sam were acting out of sync with the charter of the non-profit. I imagine his antics caused a lot of stress the further they drifted from their mission and the more he acted autonomously.


> If I were the OpenAI board, I would be sleeping like a baby

Well, they're all about to be out of a job, so it's a good time to catch up on sleep.


This is wildly incorrect. But a non-profit does have stakeholders, donors, beneficiaries and employees. All of those can apply pressure on a board.


> This is wildly incorrect

Great, we'll take your word for it.


Sorry, but you are just simply factually incorrect. That the board itself serves at the pleasure of other interests is clear (and even then, if they don't want to leave getting rid of them can be tricky depending on the details) but they do call the shots. The question is whether or not they can make it stick.

But until he is re-hired Sam Altman is to all intents and purposes fired. And it may well come to that (and that would almost certainly require all those board members who voted for his ouster to vacate their positions because their little coup plan backfired and nobody is going to take the risk of that happening again, especially not in this way).


Sorry, but I am just simply not factually incorrect. Again you want me to just take your opinion as fact... but stating it strongly doesn't make your argument more cogent.

Boards are agents to their principals. They call the shots only as long as their principals deem them to be calling them correctly. If they don't, they get replaced. Said differently, board members are "appointed" to do the bidding of someone else. They have no inherent power. Therefore, they do not, ultimately, call the final shots. Owners do. Like I said, this situation is a little muddier because it's a non-profit that owns a for-profit company, so there's an added layer of complexity between agents and principals.

OpenAI isn't worth $90B because of its non-profit. The for-profit piece is what matters to investors, and those investors are paying the bills. Sure, the non-profit board can fire Altman and carry on with their mission, but then everyone who is there "for profit" can also pack up their things and start OpenAI 2.0 where they no longer need the non-profit, and investors will follow them. I assume that's an undesirable outcome for the board as I suspect the amount of money raised at the for-profit level dwarfs the amount donated to the non-profit... which effectively means the for-profit shareholders own the company. Hence my original comment.


They call the shots until they are overruled (by a court, or by a new board after the board members have been forced out - and that isn't all that simple, otherwise no board could ever function in their oversight role at a non-profit), and even then, until that process has run its course, their statements are factually correct. I know this is all hairsplitting but it really does matter. When the board put out a statement saying they had fired Altman, that was that. They can re-hire him or they can reverse their decision, but until that happens their decision stands.

Yes, they are accountable (and I'm actually surprised at how many people seem to believe that they are not), but they are not without power. Legal and practical are not always exactly overlapping and even if the board may not ultimately hold practical power (even if they believe they do) legally speaking they do and executives function at the pleasure of the board. If the board holds a vote and the bylaws of the company allow for it and the vote passes according to those bylaws then that's that. That's one good reason to pack the board of your billions of dollars worth company with seasoned people because otherwise stuff like this may happen.

Afterwards you can do a lot about it, you can contest the vote, you can fight it in court, you can pressure board members to step down and you can sue for damage to the company based on the decision. But the board has still made a decision that is in principle a done deal. They can reverse their decision, they can yield to outside pressure and they can be overruled by a court. But you can't pretend it didn't happen and you can't ignore it.


You're missing the whole point of my comment for the sake of arguing you're quote-unquote "correct"

I'm not saying the board doesn't make decisions or that the board is powerless, or that their decisions are not enforceable or binding. That's already known to be true, there's no value in arguing that.

I'm saying the _ultimate_ decision is made by the people with the money, inevitably. The board is allowed to continue to make decisions until they go against the interests of owners. The whole point of a board is so owners don't have to waste their time making decisions; instead they pay someone else (directors) to make them on their behalf.

Start making decisions that go against the people who actually run the place, and you'll find yourself in trouble soon enough.


Yes, and we're in agreement on that last part, see my other comments in the thread and in previous threads on the same subject.

In fact we are very much arguing that thing in the same way. But you do have to get the minutiae right because those are very important in this case. This board is about to - if they haven't already - find out where the real power is vested and it isn't with them. Which is kind of amusing because if you look at the people that make up that board some of them should have questioned their own ability to sit on this board based on qualifications (or lack thereof) alone.


> This makes sense. The board thinks they're calling the shots, but the reality is the people with the money are the ones calling the shots, always. (...) what can the board really do? They have no leverage

Which I later restated as "Start making decisions that go against the people who actually run the place, and you'll find yourself in trouble soon enough." (emphasis added) -- which hopefully you agree is a clear restatement of my original comment.

Meanwhile you said

> This is wildly incorrect. (...) you are just simply factually incorrect. (...) But until he is re-hired Sam Altman is to all intents and purposes fired.

But I never claimed he wasn't for all intents and purposes fired

Yet you did claim I was "wildly" and "factually incorrect" and now you're saying "we are very much arguing that thing in the same way" but "you do have to get the minutiae right". To me, minutiae was sufficiently provided in the original comment for any minimally charitable interpretation of it. Said differently, the loss of minutiae was on the reader's part, not the writer's.

Regardless, lack of minutiae is not comparable to "wildly" or "factually" incorrect. Hence I was not either of these things. QED.


The staff calls the shots. The money will go wherever the talent is.


Owners call the shots, otherwise staff would never get fired.


never heard of a unions? staff can have power too. and often they do prevent wrongful firings.


Unions exist precisely to try to pool together the minuscule leverage that workers have so that they can fight with capital owners. If anything, they prove the point that staff have very limited power


The talent also goes wherever the money is


Yep. There's the apparent legal leverage,

and then there's the real leverage of money and the court of public opinion.


This suggests a plausible explanation that Altman was attempting to engineer the board’s expansion or replacement: After the events of the last 48 hours, could you blame him?

In this scenario, it was a pure power struggle. The board believed they’d win by showing Altman the door, but it didn’t take long to demonstrate that their actual power to do so was limited to the de jure end of the spectrum.


Any talented engineer or scientist who actually wants to ship product AND make money would head over to Sam’s startup. Any investor who cares about making money would fund Sam’s startup as well.

The way the board pulled this off really gave them no good outcome. They stand to lose talent AND investors AND customers. Half the people I know who use GPT in their work are wondering if it will even be worth paying for if the model's improvements stagnate with the departure of these key people.


And any talented engineer or scientist who actually wants to build safe AGI in an organization that isn't obsessed with boring B2B SaaS would align with Ilya. See, there are two sides to this? Sam isn't a god, despite what the media makes him out to be; none of them are.


AGI has nothing to do with transformers. It's a hypothetical towards which there has been no progress other than finding things that didn't work. It's a cool thing to work on, but it's so different from what the popular version of OpenAI is, and it has such different timescales and economics... if some vestigial OpenAI wants to work on that, cool. There is definitely also room in the market for the current OpenAI centered around GPT-x et al, even if some people consider SaaS beneath them, and I hope they (OpenAI) find a way to continue with that mission.


It's been, like, two years, dude. This mindset is entirely why any organization which has a chance at inventing/discovering ASI can't be for-profit and needs to be run by scientists. You've got TikTok brain. Google won't be able to do it, because they're too concerned about image, and also got a bad case of corpo TikTok brain. Mistral and Anthropic won't be able to do it, because they have VC expectations to meet. Sam's next venture, if he chooses to walk that path, also won't, for the same reason. Maybe Meta? Do you want them being the first to ASI?

If you believe that the hunt for ASI shouldn't be OpenAI's mission, then that's an entirely different thing. The problem is: that is their mission. It's literally their mission statement and the responsibility of the board to direct every part of the organization toward that goal. Every fucking investor, including Microsoft, knew this; they knew the corporate governance, they knew the values alignment, they knew the power the nonprofit had over the for-profit. Argue credentialism, fine, but four people on the board that Microsoft invested in are now saying that OAI's path isn't the right path toward ASI; and, in case it matters, one of them is universally regarded as one of the top minds on the subject of artificial intelligence on the planet. The time to argue credentialism was when the investors were signing checks; but they didn't. It's too late now.

My hunch is that the majority of the sharpest research minds at OAI didn't sign on to build B2B SaaS and become the next Microsoft; more accurately, Microsoft's thrall, because they'll never surpass Microsoft, they'll always be their second. If Satya influences Sam back into the boardroom then Microsoft, not Sam, will always be in control. Sam isn't going to be able to right that ship. OAI without its mission also isn't going to exist. That's the reality everyone on Sam's side needs to realize. And, absolutely, an OAI under Ilya is also extremely suspect in its ability to raise the resources necessary to meet the non-profit's goals; they'll be a zombie for years.

The hard reality that everyone needs to accept at this point is: OpenAI is probably finished. Unless they made some massive breakthrough a few weeks ago, which Sam did hint at three days ago, and which should be the last hope we all hold on to that AI research as a species hasn't just been set back a decade by this schism. If that's the case, I think anyone hoping that Microsoft regains control of the company is not thinking the situation through, and the best case scenario is: Ilya and the scientists retain control, and they are given space to understand the breakthrough.


The problem is that this "AGI research group" is staffed by people who build statistical models, call them AI, and are delusional enough to think this is a route to general intelligence.

There is no alternative: if you're wedded to "fitting functions to frequencies of text tokens" as your 'research paradigm', the only thing that can come of it is a commercialised trinket.

So either the whole org starts staffing top-level research teams in neuroscience, cognitive science, philosophy of mind, logic, computer science, biophysics, materials science.... or it just delivers an app.

If Sam is the only one interested in the app, it's because he's the only sane guy in the room.


There is little evidence that conditional statistical models can never be a route to AGI. There's limited evidence they can, but far less they can't.

You may be interested in the neuroscience research on the application of a temporal-difference-like algorithm in the brain; predictor-corrector systems are just conditional statistical models being trained by reinforcement.
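
For readers who haven't seen it: the temporal-difference idea reduces to a very small update rule. A minimal tabular TD(0) sketch, with made-up state names and a toy reward, just to show the predictor-corrector shape:

    # Tabular TD(0): nudge the value estimate toward a bootstrapped target.
    # alpha = learning rate, gamma = discount factor.
    def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
        target = r + gamma * V.get(s_next, 0.0)                   # prediction of return from s
        V[s] = V.get(s, 0.0) + alpha * (target - V.get(s, 0.0))   # correct toward that target
        return V

    V = {}
    V = td0_update(V, s="cue", r=1.0, s_next="reward")  # toy transition, illustrative only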


I am well aware of the literature in the area. 'Trained by reinforcement' in the case of animals includes direct causal contact with the environment, as well as sensory-motor adaptation, and organic growth.

The semantics of the terms of the 'algorithm' in the case of animals are radically different, and insofar as these algorithms describe anything, it is because they are empirically adequate.

I'm not sure what you think 'evidence' would look like that conditional probability cannot lead to AGI -- other than a series of obvious observations: conditional probability doesn't resolve causation, it is therefore not a model of any physical process, it does not provide a mechanism for generating the propositions which are conditioned on, it does not model relevance, and a huge list of other severe issues.

The idea that P(A|B) is even relevant to AGI is a sign of a fundamental lack of curiosity beyond what is on-trend in computer science.

We can easily explain why any given conditional probability model can encode arbitrary (Q, A) pairs -- so any given 'task' expressed as a sequence of Q-A prompts/replies can be modelled.

But who cares. The burden of proof on people claiming that conditional probability is a route to AGI is to explain how it models: causation, relevance, counterfactual reasoning, deduction, abduction, sensory-motor adaptation, etc.

The gap between what has been provided and this burden-of-proof is laughable
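
To make the "encoding (Q, A) pairs" point concrete, here is a toy sketch (the pairs are purely illustrative): a degenerate conditional model that assigns probability 1 to a memorized answer given its question. It "performs the task" while obviously modelling nothing about causation, relevance, or counterfactuals.

    # A degenerate conditional "model": P(A | Q) = 1 for memorized pairs, 0 otherwise.
    memorized = {
        "What is 2 + 2?": "4",
        "Capital of France?": "Paris",
    }

    def p_answer_given_question(answer, question):
        if question not in memorized:
            return 0.0            # nothing learned, nothing generalized
        return 1.0 if answer == memorized[question] else 0.0

    print(p_answer_given_question("4", "What is 2 + 2?"))  # 1.0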


There are significantly fewer people that would want to work with Ilya than there are people that would want to work with Sam/Greg.

If Ilya could come out and clearly articulate how his utopian version of OpenAI would function, how they will continue to sustain the research and engineering efforts, at what cadence they can innovate and keep shipping, and how they will make it accessible to others, then maybe there would be more support.


Wrong. Ilya is the goose that laid the golden egg. Do you think other orgs don’t have engineers and data scientists?


The problem is it already became the other thing in a very impactful way.

If Sam were to be ousted it should have happened before ChatGPT was unleashed on the world.


And if Microsoft had major concerns about OpenAI's board and governance, it should have been voiced and addressed before they invested. Yet; here we are; slaves to our past decisions.


Sure, but without funding and/or massive support from MS this is not going to happen.


Would those talented engineers or scientists be content with significantly lower compensation and generally significantly fewer resources to work with? However good their intentions might be, this probably won't make them too attractive to future investors, and antagonizing MS doesn't seem like a great idea.

OpenAI is far from being self-sustainable, and without significant external investment they'll probably soon be overtaken by someone else.


I don't know; on a lot of those questions. I tend to think that there was more mission and ideology at OAI than at most companies; and that's a very powerful motivational force.

Here's something I feel higher confidence in, but still don't know: it's not obvious to me that OAI would be overtaken by someone else. There are two misconceptions that we need to leave on the roadside: (1) Technology always evolves forward, and (2) More money produces better products. Both of these ideas, at best, only indicate correlative relationships, and at worst are just wrong. No one has overtaken GPT-4 yet. Money is a necessary input to some of these problems, but you can't just throw money at it and get better results.

And here's something I have even higher confidence in: "Being overtaken by someone else" is a sin worthy of the death penalty in Valley VC culture; but theirs is not the only culture.


Citation needed on the ideology being a powerful motivational force in this context. People who think they're doing groundbreaking work that'll impact the future of humanity are going to be pretty motivated ideologically either way regardless of if they're also drinking the extra flavor from the mission statement's Kool-Aid.


It's just an illusion that Sam is trying to be unsafe about it; it's a scare tactic of sorts to get what they want, for example regulations, and now, internally, power. It's all BS, man, this "AI will end the world" stuff; it's pushed for an agenda and you all are eating it up.


Where do you go if you want to build an unsafe AGI with no morals? Military? China? Russia?

(I am aware that conceptually it can lead to a skynet scenario)


I don't think the people that want to move slowly and do research are necessarily working at OpenAI.


>would head over to Sam’s startup

Why? I see a lot of hero-worship for Sam, but very little concrete facts about what he's done to make this a success.

And given his history, I'm inclined to believe he just got lucky.


OpenAI is very conspicuously the only lab that (a) managed to keep the safety obsessives in their box, (b) generated huge financial upside for its employees, and (c) isn't run by a researcher.

If Altman's contribution had simply been signing deals for data and compute, then keeping staff fantasies under control, that already makes him unique in that space and hyper-valuable. But he also seems to have good product sense. If you remember, the researchers originally didn't want to do ChatGPT because they thought nobody would care.


He presumably can attract investors?


If that was the only issue, why not just go to Google, Meta, or Microsoft directly to work on their AI stuff? What do you really need Altman for?

Working at OpenAI meant working on GPT-4 (and whatever is next in line), which is attractive because it's the best thing in the field right now by a significant margin.


So can Dario Amodei and Mustafa Suleyman.


I still haven't heard an explanation of why people who use GPT would be under the impression that Sam had anything to do with the past improvements in GPT versions.


Have you really never been at a place without someone with vision leading the cause? Try it some time and you'll start understanding how and why a CEO can make or break a company.


This happens all the time. It's far more common for teams to succeed despite (or even in spite of) executive leadership.


> It's far more common for teams to succeed despite (or even in spite of) executive leadership.

People say this like it's some kind of truism, but I've never seen it happen, and when questioned, everyone I've known who's claimed it ends up admitting they were measuring their "success" by a different metric than the company.


Of course it isn’t. Without executive sponsorship there is no staff or resources.


The vision of Worldcoin dude to get rich quick? Very inspiring.


Sam attracted money and attention, which attracted talent. If Sam departs for another venture, some - or a lot - of the talent and attention and money will leave too. This isn’t a car factory where you can replace one worker with another. If some of the top folks leave with Sam (as they already are) it’s reasonable to assume that the product will suffer.


If technical expertise is what drove all progress, Google / DeepMind would be far ahead right now.


Brockman maybe, though. Or at least in some sort of leadership capacity.


I'd understand the argument for Brockman, considering he had a hand in recruiting the initial team at OpenAI, was previously the CTO, from some reports still involved himself in coding, and was the only other founder on the board besides Ilya.


This is a power struggle between the Silicon Valley VC group and AI scientists. This conflict was bound to happen at some point at every company. I don't think the interests of the two groups align past a certain point. No self-respecting AI scientist wants to work hard making closed-model SaaS products.


Why are people calling this already? There was a coup. The people on the losing end, which includes some large investors, counterattacked. That's where we are now (or were when the article was published). Of course they counterattacked! But did the counterattack land? I'm not sure why you're assuming it did. Personally, I don't know enough to guess. Given that the board was inspired to do this by the very mission that the non-profit was set up to safeguard, there's some level of legal coverage, but enough to cover their asses from a $10 billion assault? I for one can't call it.


They might not even have believed that they'd win, just that this outcome would be better than being silently outmaneuvered.

If the coup fails in the end (which seems likely), it will have proved that the "nonprofit safeguard" was toothless. Depending on the board members' ideological positions, maybe that's better than nothing.


This is the most likely explanation. Altman was going to oust them, hence why they had to make what seems like a bad strategic move. The move seems bad from our perspective, but it's actually the most logical strategy for the board in terms of self-preservation. I agree. I think this is most likely what occurred.


How could he possibly oust them?


I'm sure there are ways that we aren't privy to knowing, just like we don't know why Altman was fired. Why was Sam Altman being dishonest, and what was he dishonest about?

This is the only reasoning that makes sense: the action taken by the board aligns with logic and with some private action by Sam Altman that could have offended the board.

The story of the board being incompetent to the point of idiocy and firing such a key person is just the most convenient, attractive and least probable narrative. It's rare for an intelligent person to do something stupid, even rarer for an entire board of intelligent people to do something stupid in unison.

But it's so easy to fall for that trope of a narrative.


We know the org structure. It's not possible.


My first thoughts yesterday were: Some really bad scandal happened at OpenAI (massive data leak, massive fraud, or huge embezzlement), or the board is really incompetent and doesn't know what they're doing. But an organization as big as OpenAI, with the backing of Microsoft and other big players would never make such a big decision without a really good reason.

Seems like Hanlon's razor won once again.


This is unfathomably depressing for me; I am solidly in the non-profit, open, etc. camp, and the way the board has handled the situation seems to be putting a tombstone on any opposition to Altman's way of doing things: profits uber alles, moats galore, non-profit-wink-wink-nudge-nudge; an unmitigated disaster.


If you want more open research and development, you should be happy for a closed OpenAI. It's why we have Mistral. Let the org redefine itself and push new boundaries. If we didn't have commercial Unix there would be no Linux. Allow the path to be blazed by VC; it's not about open technology being first, it's about it even existing at all.


Uhh... sure; whatever.


Glad you see my point and agree with me. Happy to have helped change at least one mind. Enjoy your day!


This passive-agressive back-and-forth: a summary of what happened behind closed doors between Ilya and Sam


Other commenters here have pointed out what seems to be most plausible: Altman was making moves to fire or alter the board, so they made a (bad) first move, and it's now backfired on them.

It's a bad situation.


Folks in general are going to look much more askance at complicated corporate structures.


If the reports are true, and Ilya led the coup, then either he or Sam can be at OpenAI going forward, but not both. The rest of the board members who sided with him are gone either way.

Regardless of who ends up at the helm, OpenAI is going to be a different place on Monday than it was on Thursday, and not for the better.


Not for the better why?

Obviously Sam wasn’t the best fit for OpenAI and investors aren’t even saying what the problem is. Clearly the board feels he was the wrong person for the job.

I think it's ridiculous that everyone thinks that Sam being ousted means OpenAI is in trouble. Let this play out and see how it evolves.


24 hours ago OpenAI fired their CEO in the most childish possible way. Now they are trying to get him back.

This is embarrassing for OpenAI no matter how you slice it.


> Now they are trying to get him back.

OpenAI has never claimed they want Sam back. The article claims OpenAI's investors want him back.

I will agree that OpenAI could have done a better job of letting him go if there truly were irreconcilable differences.


While the unceremonious firing was bad, I am sure this could have gone down way worse. Way, way worse.


> unceremonious firing

What's a ceremonious firing look like? Serious question.


Compare Raja Koduri to Brian Krzanich.

The former went on garden leave for 6 months (actually even before the Vega launch) to make a movie with his brother, and then resigned to "spend more time with his family", before popping up again a month later at Intel. That's what it looks like when they want you to go away but they don't want to make a big scene over it.

The latter fucked up so badly the board found a reason to kick him out without a golden parachute etc., despite the official reason (dating another employee) being something that was widely known for years, other than being a technical no-no/bad idea in general. He wasn't kicked out because of that; he was kicked out for the combination of endless fab woes, Spectre/Meltdown, and bad business/product decisions that let AMD get the traction to become a serious competitor again. That's what it looks like when the board is absolutely furious and pushes whatever buttons inflict the most pain on you.

Ironic that it's a bit of an auto-antonym (auto-antoidiom?): it's ceremonious when they want you to go away quietly, and it's unceremonious when they publicly kick your ass to the curb so hard you've got boot marks for a week.


Isn't this a military thing? "Honorable discharge" or something like that? Bunch of people at a ceremony, maybe a speech about the person's contribution, they get given a medal, family is there in their nicest clothes?


How?


off the top of my head:

A prolonged public exchange between sama and the board _before_ any firings, where they throw accusations at each other, followed by Microsoft pulling out, followed by people quitting and an immediate ChatGPT outage, followed by the firing of the CEO.


Could have done it with poop emojis on twitter


It's not like the board, except for Ilya, has any real capital or expertise to convince everyone this is the right decision.

If they do, now is the perfect time to speak out loud, not to let this news bubble up to the front page with everyone talking about how disastrous they were.

What is this board waiting for, then? The weekend??

The board isn't bulletproof and they are not gods. They can fire Sam, yes, but it won't stop people from thinking this is stupid, or stop it from doing more harm than good to OpenAI.


Perhaps they are smoothing things out with some key stakeholders after the fact, and will have more to say Monday regarding all this. I doubt they aren't doing some amount of information level-setting with people now that the decision has been made.


> Obviously Sam wasn’t the best fit for OpenAI

It's quite possible that he wasn't the best fit, and that the board is an even worse fit. Judging by the behavior of the board, it's hard to see them being a good fit for the company.


Based on the firing? Because that’s all I think we (the public) have any insight into.

I'm saying there is a reason this happened and 2/3 of the board agreed. It needs to play out further for us to see if there is a problem here or not, honestly.

I find it hard to believe you can effectively muster a mandate worth of votes based on opinion alone


As others have pointed out, this board has no skin in the game. They just voted out founders who do have skin in the game (although through roundabout means). It’s a very tough sell that this board is doing the right thing.


Just to clarify, one founder on the board, Ilya, has skin in the game, and was the reason behind Sam's firing.

He convinced other members of the board that Sam was not the right person for their mission. The original statement implies that Ilya expected Greg to stay at OpenAI, but Ilya seems to have miscalculated his backing.

This appears to be a power struggle between the original nonprofit vision of Ilya, and Sam's strategy to accelerate productionization and attract more powerful actors and investors.

https://nitter.net/GaryMarcus/status/1725707548106580255


Founders come and go. That doesn't always make them a good fit. He wasn't the sole founder either; it was founded by a consortium of people.


I think most people don’t think it was obvious Sam wasn’t the best fit for OpenAI.


It's not only Sam; Greg and a few other engineers have also already resigned (and one can assume more will follow).


Maybe, or maybe he was in fact unpopular among the majority and you are seeing Altman supporters leave. It happens.

There is nothing to indicate that this bleeds OpenAI more generally. The rank and file, as far as I'm aware, aren't resigning en masse.

Executives come and go. Show me why these people matter so much that OpenAI has no future, then we can talk. It's infighting that became public, and I'm certain people are pulling whatever strings they have on this, but I don't see objective evidence that these people make OpenAI successful.

This needs to play out


Rank and file perhaps aren't yet resigning en masse, but I would be extremely surprised if there won't be a bunch that jumps to the new ship solely because that puts them higher up the totem pole.

Now will that be another 3 or another 30, time will tell.


Three engineers isn't a lot, honestly, after such a stunt. I'd have assumed there would be more loyal folks, but maybe most are really in it for the mission.

The next couple of weeks will tell.


Bear in mind - most folks are loyal to a paycheck and their best estimate of future paychecks/value. Quitting on the spot because your friend/boss got fired wrongly… is unlikely to maximize either of those unless you were already planning to resign in the next few weeks.

Now, do a bunch of Openai peeps interview at Meta/Google/Amazon/Anthropic/Cohere over the next few months? Certainly.


> I think it’s ridiculous that everything thinks that Sam being outed means OpenAI is in trouble

Even if we assume that's true, wouldn't the somewhat incompetent and seemingly unnecessarily dramatic way they handled it be a concerning sign?


They accused Sam of lying in a public statement when they don't have any evidence to back it up.

Those 4 people are not fit to run any company.

Not a single person asked: well hey, what if somebody asks for evidence of the lying? Do we have any?


We don't know any of that. The only things we know are the statement from the board, the statement from Altman that he was caught by surprise, the statement from Microsoft that they're supporting the new CEO, and a few of the people who left. That's all we know for sure. Everything else is rumors and PR spin for now. Whether they have evidence for what they said in the statement about lying, we just don't know.


The board can easily back up their public claim. They don't.

Even the email to their own employees says it is an irreconcilable difference. Nothing about lying.

I don't think it is reasonable to go with "we don't know". It is more like: "it is crucial to back up your claim, and still you don't."


I don't disagree. It's just maybe they have something they haven't shared or maybe they don't. We don't know (yet).


The fact that they didn't follow up with evidence immediately shows that they are incompetent.

You don't just accuse someone of committing a heinous crime and stay silent. What are the details?


True, but are they obliged to provide all details to the public?


No, but they are also not obliged to accuse Sam of lying either. But here we are.

You think accusing someone of lying in a public statement and not following up is competent?


I agree with you


Perhaps part of the problem is that when some people say OpenAI they mean the non-profit parent of the for-profit, and when other people say OpenAI they mean the for-profit subsidiary of the non-profit.


Why did the board fire Sam in such a weird way? It shows that they are the wrong people for the job. If they wanted to get rid of him they should have done a better job than alienating everyone at the company.


A typical YC execution of the product/pump/hype/VC/scale cycle, ignoring every ethical rule, is a good way to start. It is a reasonable way to lift a nonprofit onto a path to AGI, but hardly a good way to govern a company that builds AGI/ASI technology over the long term.


If Altman gets to return, it's goodbye to AI ethics within OpenAI and the elimination of the nonprofit. Also, I believe that hiring him back because of "how much he is loved by people within OpenAI" is like forgetting that a corrupt president did what they did. In all honesty, that has precedent, so it wouldn't be old news.

Also, I read a lot of people here saying this is about engineers vs scientists… I believe that people don't understand that Data Scientists are full stack engineers. Ilya is one. Greg has just been inspiring people and stopped properly coding with the team a long time ago. Sam never did any code, and the vision of an AGI comes from Ilya… Even if Mira now sides with Sam, I believe there's a lot of social pressure for the employees to support Sam, and it shouldn't be like that.

Again, I do believe OpenAI was and is a collective effort. But I wouldn't treat Sam as the messiah or compare him to Steve Jobs. That's indecent towards Steve Jobs, who was actually a UX designer.


I have to work with code written by Data Scientists very often and, coming from a classical SWE background, I would not call what the average Data Scientist does full stack software engineering. The code quality is almost always bad.

This is not to take away from the amazing things that they do - The code they produce often does highly quantitative things beyond my understanding. Nonetheless it falls to engineers to package it and fit it into a larger software architecture and the avg. Data Science career path just does not seem to confer the skills necessary for this.


For me, anecdotally, it was more so the arrogance that was a major turn-off. When I was a junior SWE I knew I sucked, and tried as hard as I could to learn from much more experienced developers. Many senior developers mentored me; I was never arrogant. Many data scientists, on the other hand, are extremely arrogant. They often treat SWE and DevOps as beneath them, like servants.


I see a lot of work done by data scientists and a lot of work done by what I would call "data science flavoured software engineers". I'll take the SWE kind any day of the week. Most (not all, of course!) data scientists have an old school "it works on my machine" mentality that just doesn't cut it when it comes to modern multi-disciplinary teaming. DVCS is the exception rather than the rule. They rarely want to use PMs or UI/UX, and the quality of the software is not (typically) up to production grade. They're often blindingly smart, there's no doubt about that. But smart and wise are not the same thing.


As an actual scientist, I would also not call what “data scientists” do “science”.


> I believe that people don’t understand that Data Scientists are full stack engineers.

What do you mean by "full stack"? I'm sure there's a spectrum of ability, but frankly where I'm from, "Data Scientist" refers to someone who can use pandas and scikit-learn. Probably from inside a Jupyter notebook.


Maybe she just meant that "data scientists are engineers too", rather than saying that they work on both the ChatGPT web UI and the machine learning code on the backend.


Wait until they learn the "engineer" in SWE is already a very liberal use of the term....


Machine learning, data science, deep learning = backend

Plotting, charting, visualization = frontend


This is proving the point of the parent comments.

My view of the world, and how the general structure is where I work:

ML is ML. There is a slew of really complex things that aren't just model related (ML infra is a monster), but model training and inference are the focus.

Backend: building services used by other backend teams or maybe used by the frontend directly.

Data eng: building data pipelines. A lot of overlap with backend some days.

Frontend: you spend most of the day working on web or mobile technology

Others: site reliability, data scientists, infra experts

Common burdens are infrastructure, collaboration across disciplines, etc.

But ML is not backend. It’s one component. It’s very important in most cases, a kitschy bolt on in other cases.

Backend wouldn’t have good models without ML and ML wouldn’t be able to provide models to the world reliably without the other crew members.

The frontend being charts is incorrect, unless charts are the offering of the company itself.


Truly the modern renaissance people of our era.

Leonardo da Vinci and Michelangelo move over - the Data Scientists have arrived.


Running matplotlib is not doing frontend...


On the other hand, having virtually the whole staff willing to follow him shows they clearly think very highly of him. That kind of loyalty is pretty wild when you think about how significant being a part of OpenAI is at this point.


Loyalty is not earned, it is more like 'snared' or 'captured'.

Local guy had all the loyalty of his employees, almost a hero to them.

Got bought out. He took all the money for himself, left the employees with nothing. Many got laid off.

Result? Still loyal. Still talk of him as a hero. Even though he obviously screwed them, cared nothing for them, betrayed them.

Loyalty is strange. Born of charisma and empty talk that's all emotion and no substance. Gathering it is more the skill of a salesman than a leader.


He screwed them how? They knew they were employees, not co-owners.


That's the whole point of the story: then they wouldn't have treated him as a hero and been loyal to him. If you're just an employee, your boss should be just a boss.


It’s possible he paid well and was a great boss. I don’t know if these people are gonna take a bullet for him, but maybe he was great to work for and they got opportunities they think they wouldn’t have otherwise.

Loyalty, appreciation, liking… is a spectrum. Loyalty doesn’t have one trumpish definition.


They worked hard, overtime, so the company would succeed. They were promised endless rewards - "I'm gonna take care of you! We're in this together!"

Then, bupkiss.

No, not a hero.


Said like a follower, determined to be loyal to an imagined hero, despite any amount of evidence to the contrary.


Loyalty is absolutely earned.


Which news stories mentioned that virtually the whole staff was leaving? I saw a bunch of departures announced and others rumored to be upcoming, but no discussion of what percentage of the company was leaving.


Who knows if they follow him or just don't want to work for OpenAI anymore.

Those are different things.


They probably just asked a couple of guys.


I dislike AI ethics very much, especially in the current context; it feels meaningless. The current GPT-4 model has an over-regulation problem, not a lack of one.


go on?


The guardrails they put on it to prevent it from saying something controversial (from the perspective of the political climate of modern day San Francisco) make the model far less effective than it could be.


Uncensored, anything-goes AI functions better than most AI. See Mistral and its finetunes kicking ass at 7B.


[flagged]


Yeah yeah...

This "political correctness" makes the AI measurably stupider, if nothing


It's a lot better than that. OpenAI is just very good execution of publicly available ideas/research, with some novelty that is not crucial and can be replicated. Moreover, Altman himself contributed near zero to the AI part itself (even from the POV of the product). So far OpenAI's products follow more or less spontaneously from what LLMs were capable of. That is to say, there are crucial CEOs sometimes, like Jobs was for Apple: CEOs able to shape the product line with their ability to tell outstanding things apart from meh things. But this is not one of those cases.


Why then has no one come close to replicating GPT-4 after 8 months of it being around?


Because of the outstanding execution of OpenAI's technical folks. An execution that has nothing to do with Altman. Similarly, the Mistral 7B model has much better performance than others. There is some smart engineering plus finding the magical parameters that produce great results. Moreover, they have a lot of training power. Unfortunately, the biggest competitor here is a company that lost its way a long time ago: Google. So OpenAI looks magical (while mostly using research produced by Google).


Sounds like Apple / Xerox all over.


You'd be more likely to get a straight answer from the chief scientist rather than the chief executive officer. At least in this case.


Claude by Anthropic has a 46% win rate against GPT-4 according to Chatbot Arena. That is pretty close.


If "AI ethics" means being run by so-called rationalists and Effective Altruists then it has nothing to do with ethics or doing anything for the benefit of all humanity.

It would be great to see a truly open and truly human benefit focused AI effort, but OpenAI isn't, and as far as I can tell has no chance of becoming, that. Might as well at least try to be an effective company at this point.


>If "AI ethics" means being run by so-called rationalists and Effective Altruists then it has nothing to do with ethics or doing anything for the benefit of all humanity.

Many would disagree.

If you want a for-profit AI enterprise whose conception of ethics is dumping resources into an endless game of whack-a-mole to ensure that your product cannot be used in any embarrassing way by racists on 4chan, then the market is already going to provide you with several options.


I disagree that the "rationalist" and EA movements would make good decisions "for the benefit of humanity"; I'm not disputing that an open (and open source) AI development organisation working for the benefit of the people, rather than capital/corporate or government interests, would be a good idea.


>If Altman gets to return, it’s the goodbye of AI ethics

Any evidence he's unethical? Or just dislike him?

He actually seems to have done more practical stuff to mitigate AI risk, like experimenting with UBI, than most people.


That "experimenting with UBI" is indistinguishable from any other cryptocurrency scam. It took from people, and he described it with the words that define a Ponzi scheme. That project isn't "mitigating AI risk"; it pivoted to distinguishing between AI- and human-generated content, a problem created by his other company, by continuing to collect your biometric data.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...


Yes, that's exactly the one I was thinking about when unethical came up in this context. And I've been saying that from day #1, the way that is structured is just not ok.


He also did a cash experiment in Oakland https://www.theguardian.com/technology/2016/jun/22/silicon-v...

I signed up for Worldcoin and have been given over $100, which I changed into real money, and I think it's rather nice of them. They never asked me for anything apart from the eye ID check. I didn't have to give my name or anything like that. Is that indistinguishable from any other cryptocurrency scam? I'm not aware of one the same. If you know of another crypto that wants to give me $100, do let me know. If anything, I think it's more like VCs paying for your Uber in the early days. It's VC money basically at the moment, with, I think, the idea that they can turn it into a global payment network or something like that. As to whether that will work, I'm a bit skeptical, but who knows.


> They never asked me for anything apart from the eye id check.

You say that like it’s nothing, but your biometric data has value.

> Is that indistinguishable from any other cryptocurrency scam?

You’re ignoring all the other people who didn’t get paid (linked articles).

Sam himself described the plan with the same words you'd use to describe a Ponzi scheme.

https://news.ycombinator.com/item?id=38326957

> If you know of another crypto that wants to give me $100 do let me know.

I knew of several. I don't remember names, but I do remember one that was a casino and one that was tied to open-source contributions. They gave initial coins to get you in the door.


I think the UBI experiment was quite unethical in many ways and I believe it was Altman's brainchild.

https://www.businessinsider.nl/y-combinator-basic-income-tes...


Okay I'll bite, what's so unethical about giving people money?


Because without a long term plan you are just setting them up for a really hard fall. It is experimenting on people where, if the experiment goes wrong, you're high and dry in your mansion and they get pushed back into something probably worse than where they were before. It ties into the capitalist idea that money can solve all problems, whereas in many cases these are healthcare and education issues first and foremost. You don't do that without really thinking through the possible consequences and without ensuring that, no matter what the outcome, it is always going to be a net positive for the people you decide to experiment on.


Let me see if I understand, is your argument that you shouldn't give people money because they might make irresponsible financial choices?


It's not even necessary that he is unethical. The fact is that the structure of OpenAI is designed so that the board has unilateral power to do extreme shit for their cause. And if they can't successfully do extreme shit without the company falling apart and the money/charisma swaying all the people, then there was never any hope of this nonprofit-AI-benefiting-humanity setup working - which you might say is obvious, but this was their mission.


Like it or not, some people compare him to Jobs http://www.paulgraham.com/5founders.html


This is the problem with people: they build icons to worship and turn a blind eye to the crooked side of that icon. Both Jobs and Altman are significant as businessmen and have accomplished a lot, but neither did squat for the technical part of the business. Right now, Altman is irrelevant for the further development of AI and GPT in particular because the vision for the AI future comes from the engineers and scientists of OpenAI.

Apple has never had any equipment that is good enough and comparable in price/performance to its market counterparts. The usability of iOS is so horrible that I just can't understand how people decide to use iPhones and eat glass for the sake of the brand.

GPT-4 and GPT-4 Turbo are totally different. They are the best, but they are not irreplaceable. If you look at what Phind did to LLaMA-2, you'll say it is very competitive. Though LLaMA-2 requires some additional hidden layers to further close the gap. Making LLaMA-2 175B or larger is just a matter of finances.

That said, Altman is not vital for OpenAI anymore. Preventing Altman from creating a dystopian future is a much more responsible task that OpenAI can undertake.


I don’t understand this take. Do you really think CEOs don’t have any influence on their business? Alignment, morale, resource allocation, etc? And do you really think that those factors don’t have any influence on the productivity of the workers who make the product?

A bad CEO can make everyone unhappy and grind a business to a halt. Surely a good one can do the opposite, even if that just means facilitating an environment in which key workers can thrive and do their best work.

Edit: None of that is to say Sam Altman is a good or bad CEO. I have no idea. I also disagree with you about iOS, it’s not perfect but it does the job fine. I don’t feel like I’m eating glass when I use it.


> The usability of iOS is so horrible that I just can't understand how people decide to use iPhones and eat glass for the sake of the brand

You do understand that other people might have different preferences and opinions which are not somehow inherently inferior to those you hold.

> comparable in price/performance to its market counterparts

Current MacBooks are extremely competitive and in certain aspects they were fairly competitive for the last 15+ years.

> but neither did squat for the technical part of the business.

Right... MacOS being an Unix based OS is whose achievement exactly? I guess it was just random chance that this happened?

> That said, Altman is not vital for OpenAI anymore

Focusing on the business side might be more vital than ever now with all the competition you mentioned they just might be left behind in a few years if the money taps are turned off.


>> Right... MacOS being an Unix based OS is whose achievement exactly?

Mach kernel + BSD userland + NeXTSTEP - how did Jobs have anything to do with any of this? It's like saying purchasing NeXT in 1997 is a major technical achievement...

>> Current MacBooks are extremely competitive and in certain aspects they were fairly competitive for the last 15+ years.

For the past 15 years, whenever I needed new hardware, I thought, "Maybe I'll buy a Mac this time." Then I compared the actual Mac model with several different options available on the market and either got the same computing power for half the price or twice the computing power for the same price. With Linux on board, making your desktop environment eye-candy takes seconds; nothing from the Apple ecosystem has been irreplaceable for me for the last 20 years. Sure, there is something that only works perfectly on a Mac, though I can't name it.

>> Focusing on the business side might be more vital than ever now with all the competition you mentioned they just might be left behind in a few years

It is always vital. OpenAI could not even dream of building their products without the finances they've received. However, do not forget that OpenAI has something technical and very obvious that others overlook, which makes their GPT models so good. They can actually make an even deeper GPT or an even cheaper GPT while others are trying to catch up. So it goes both ways.

But I'd prefer my future not to be a dystopian nightmare shaped by the likes of Musk and Altman.


> Mach kernel + BSD userland + NeXTSTEP - how did Jobs have anything to do with any of this?

Is that actually a serious question? Or do you just believe that no founder/CEO of a tech company ever had any role whatsoever in designing and building the products their companies have released?

> Then I compared the actual Mac model with several different options available on the market and either got the same computing power for half the price or twice the computing power for the same price.

I'm talking mainly about the M-series Macs (e.g. the MacBook Air is simply unbeatable for what it is and there are no equivalents). But even before that you should realize that other people have different priorities and preferences (e.g. go back a few years and all the touchpads on non-Mac laptops were just objectively horrible in comparison - how much is that worth?).

> environment eye-candy takes seconds

I find it a struggle. There are other reasons why I much prefer Linux to macOS but UI and GUI app UX is just on a different level. Of course again it's a personal preference and some people find it much easier to ignore some "imperfections" and inconsistencies which is perfectly fine.

> They can actually make an even deeper GPT or an even cheaper GPT while others are trying to catch up

Maybe, maybe not. Antagonizing MS and their other investors certainly isn't going to make it easier though.


OSX comes with a scuffed and lobotomized version of coreutils, to the point where what is POSIX/portable to almost every single Unix (Linux, various BSDs, etc.) is not portable to OSX.

Disregarding every other point, in my eyes this single one downgrades OSX to “we don’t use that here” for any serious endeavor.

Add in Linux’s fantastic virtualization via KVM — something OSX does not have a sane and performant default for (no, hvf is neither of these things). Even OpenBSD has vmm.

The software story for Apple is not there for complicated development tasks (for simple webdev it's completely usable).


> The software story for Apple is not there for complicated development tasks (for simple webdev it’s completely useable).

Well... it's understandable that some people believe that things which are important and interesting to them (and presumably the ones they work on) are somehow inherently superior to what everyone else is doing.

And I understand that, to be fair I don't use MacOS that much these days besides when I need to work on my laptop. However.. most of those limitations are irrelevant/merely nuisances/outweighed by other considerations for a very high number of people who have built some very complicated and complex software (which has generated many billions in revenue) over the years. You're free to look down on those people since I don't really think they are bothered by that too much...

> for simple webdev it’s completely useable

I assume you also believe that any webdev (frontend anyway) is inherently simple and pretty much worthless compared to the more "serious" stuff?


I don't look down on webdev. I've done webdev, in all its flavors and incarnations. I see it for what it is: mostly gluing together the work of other people, with various tweaks and transformations. It is simple work, once you get a feel for it.

The main issue I have with it is that there are no problems in webdev any more, so you get the same thing in both the frontend and backend: people building frameworks, and tools/languages/etc. to be "better" than what we had before. But it's never better, it's just mildly more streamlined for the use-case that is most en vogue. All of the novel work is being done by programming language theorists and other academic circles (distributed systems, databases, ML, etc.).

Regardless, the world runs on Linux. If you want to do something novel, Linux will let you. Fork the kernel, edit it, recompile it, run it. Mess with all of the settings. Build and download all of the tools (there are many, and almost all built with Linux in mind). Experiment, have fun, break things, mess up. The world is your oyster. In contrast, OSX is a woodchip schoolyard playground where you can only do a few things that someone else has decided for you.

Now, if you want to glue things together, OSX is perfectly fine a tool compared to a Linux distro. The choice there is one of taste and values. Even Windows will work for CRUD. The environments are almost indistinguishable nowadays.


> Mach kernel + BSD userland + NeXTSTEP - how did Jobs have anything to do with any of this? It's like saying purchasing NeXT in 1997 is a major technical achievement...

Steve Jobs founded NeXT


Aren't your thoughts contradictory? You say Altman is no longer needed because Gpt4 is now very good. Then you describe how horrible the iPhone is now. Steve Jobs has been dead a long time, and without his leadership, the uncompromising user focused development process in Apple was weakened.

How will OpenAI develop further without the leader with a strong vision?

I think Apple is the example confirming that tech companies need visionary leaders -- even if they are not programmers.

Also, even with our logical brains, we engineers (and teachers) have been found to be the worst at predicting socioeconomic behavior (ref: Freakonomics), to the point where our reasoning is not logical at all.


Maybe Altman was instrumental in securing those investments and finances that you describe without reason as replaceable and trivial.

You haven't actually given anything "crooked" that Altman did.


Locking out competition by investing substantial time and resources into AI regulations—how about this one? Or another: promoting "AI safety" to win the AI race and establish dominance in the market? I just do not understand how you can't see how dangerous Sam Altman is for the future of our children...


When Jobs left Apple it went to hell because there was no one competently directing the technical guys as to what to build. The fact that he had flaws is kind of irrelevant to that. I'm not sure if similar applies to Altman.

By the way I can't agree with you on iOS from my personal experience. If you are using the phone as a phone it works very nicely. Admittedly it's not great if you want to write code or some such but there are other devices for that.


> When Jobs left Apple it went to hell because there was no one competently directing the technical guys as to what to build

I'm not sure that's true, though? They did quite alright over the next ~5 years or so, and the way Jobs handled the Lisa or even the Mac was far from ideal. The late-90s Jobs was a very different person from the early-to-mid-80s one.

IMHO removing Jobs was probably one of the best things that happened to Apple (from a long-term perspective). Mainly because when he came back he was a much more experienced, capable person, and he would've probably achieved way less had he stayed at Apple after 1985.


The claim that Apple equipment is not good on a price performance ratio does not hold water. I recently needed to upgrade both my phone and my laptop. I use Apple products, but not exclusively. Making cross platform apps, I like to use all the major platforms.

I compared the quality phone brands and PC brands. For a 13" laptop, both Samsung and the Dell XPS are $400-500 more expensive at the same spec (i7/M2 Pro, 32GB, 1TB), and I personally think that the MacBook Pro has a better screen, better touchpad and better build quality than the other two.

iOS devices are comparably priced with Samsung models.

It was this way last time I upgraded my computer, and the time before.

Yeah, you will find cheaper phones and computers, and maybe you like them, but I appreciate build quality as well as MIPS. They are tools I use from early morning to late night every day.


The ecosystem around ChatGPT is the differentiator that Meta and Mistral can't beat - so I'd say that Altman is more relevant today than ever. And, for example, if you've read Mistral's paper, I think you would agree that it's straightforward for every other major player to replicate similar results. Replicating an ecosystem is much harder.

Performance is never a complete product – neither for Apple, nor for Open AI (its for-profit part).


If you really need such an ecosystem, then you can build one right away, like Kagi Labs and Phind did. In the case of Kagi, no GPT is involved; in the case of Phind, GPT-4 is still vital, but they are closing the gap with their cheaper and faster LLaMA-2 34B-based models.

> Performance is never a complete product

In the case of GPT-4, performance - in terms of the quality of generation and speed - is the vital aspect that still holds competitors back.

Google, Microsoft, Meta, and countless research teams and individual researchers are actually responsible for the success of OpenAI, and this should remain a collective effort. What OpenAI is doing now by hiding details of their models is actually wrong. They stand on the shoulders of giants but refuse to share these days, and Altman is responsible for this.

Let us not forget what OpenAI was declared to stand for.


By ecosystem I mean people using ChatGPT daily on their phones and browsers, and developers (and now virtually anyone) writing extensions. For most of the world all of the progress is condensed at chat.openai.com, and it will only get harder to beat this adoption.

Tech superiority might be relevant today, but I highly doubt it will stay that way for long even if OpenAI continues to hide details (which I agree is bad). We could argue about the training data, but there is so much publicly available that it is not an issue either.


Right now, Altman may be the most relevant person for the further development of AI, because the way the technology continues to go to market will be largely shaped by the regulatory environments that exist globally, and Sam leading OAI is by far in the best position to influence and guide that policy. And he has been doing a good job with it.


> Both Jobs and Altman are significant as businessmen and have accomplished a lot, but neither did squat for the technical part of the business.

The history of technology is littered with the corpses of companies that concentrated solely on the "technical side of the business".


I think you mean "idols".


> On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"

This is from the eyes of an investor. Does OpenAI really need a shareholder focused CEO more than a product focused one?


AI is still uncharted territory, both are equally important.


Most of the data scientists I have worked with are neither full stack (in terms of skill) nor engineers (in terms of work attitude), but I guess this could be different in a company like OpenAI.


> If Altman gets to return, it’s the goodbye of AI ethics

Hearing Altman's talks, I don't think it's that black and white. He genuinely cares about safety from X-risk, but he doesn't believe that scaling transformers would bring us to AGI or any of its risks. And therein lies the core disagreement with Ilya, who wants to stop the current progress unless they solve alignment.


OTOH, Ilya wasn't a main contributor to GPT-4, as per the list of contributions. gdb was.


This is Ilya Sutskever's explanation of the initial ideas, and later pragmatic decisions, that shaped the structure of OpenAI, from the recent interview below (at the correct timestamp). - Origins Of OpenAI & Cap-Profit Structure: https://youtu.be/Ft0gTO2K85A?t=433

"No Priors Interview with OpenAI Co-Founder and Chief Scientist Ilya Sutskever" - https://news.ycombinator.com/item?id=38324546


The WSJ take is that this second-guessing is investor-driven. But investors didn't - and legally couldn't(?) - buy the nonprofit, and until now were adamant that the nonprofit controlled the for-profit vehicle. Events are calling those assurances into doubt, and this hybrid governance structure doesn't work. So now investors are going to circumvent governance controls that were necessary for investors to even be involved in the first place? Amateur hour all the way around.


> Also, I read a lot of people here saying this is about engineers vs scientists…I believe that people don’t understand that Data Scientists are full stack engineers

It is about scientists as in "let's publish a paper" vs. engineers as in "let's ship a product".


The codebase of an LLM is the size of a high school exam project. There is little to no coding in machine learning. That is the sole reason why they are overvalued - any company can write its own in a flash. You only require hardware to train and run inference.


If it's so simple, why does GPT-4 perform better than almost everything else...


I think it's about having massive data pipelines and processes to clean huge amounts of data, increasing the signal-to-noise ratio, and then, as others are saying, having enough GPU power to serve millions of users at scale. When Stanford researchers trained Alpaca[1][2], the hack was to use GPT itself to generate the training data, if I'm not mistaken.

But with compromises, as it was like applying loose compression on an already compressed data set.

If any other organisation could invest the money in a high quality data pipeline, then the results should be as good; at least that's my understanding.

[1] https://crfm.stanford.edu/2023/03/13/alpaca.html [2] https://newatlas.com/technology/stanford-alpaca-cheap-gpt/
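
For anyone curious what that Alpaca trick looks like mechanically, here is a rough sketch of the idea: take a handful of human-written seed tasks and ask a stronger teacher model to generate new instruction/response pairs in the same shape. Everything below is hypothetical illustration (the seed tasks, the prompt wording, the query_llm placeholder), not the actual Alpaca or OpenAI pipeline:

    # Sketch of Alpaca-style synthetic data generation ("self-instruct").
    # query_llm is a stand-in for whatever teacher-model API you call.
    import json
    import random

    SEED_TASKS = [
        {"instruction": "Summarize the following paragraph.", "input": "...", "output": "..."},
        {"instruction": "Write a haiku about autumn.", "input": "", "output": "..."},
    ]

    PROMPT_TEMPLATE = (
        "You are generating diverse instruction-following examples.\n"
        "Here are some examples:\n{examples}\n"
        "Now produce one new example as JSON with keys "
        "'instruction', 'input', and 'output'."
    )

    def query_llm(prompt: str) -> str:
        """Placeholder: call a strong teacher model here and return its text."""
        raise NotImplementedError("wire this up to your model/API of choice")

    def generate_synthetic_dataset(n_examples: int) -> list:
        dataset = []
        while len(dataset) < n_examples:
            # Show the teacher a couple of random seed tasks to encourage diversity.
            shots = random.sample(SEED_TASKS, k=min(2, len(SEED_TASKS)))
            prompt = PROMPT_TEMPLATE.format(
                examples="\n".join(json.dumps(s) for s in shots))
            try:
                candidate = json.loads(query_llm(prompt))
            except json.JSONDecodeError:
                continue  # skip malformed generations
            if isinstance(candidate, dict) and \
                    {"instruction", "input", "output"} <= set(candidate):
                dataset.append(candidate)
        return dataset

The "loose compression" caveat above is exactly the weakness of this approach: the student can only ever approximate what the teacher already knows.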


I'm not saying it is simple in any way, but I do think part of having a competitive edge in, AI at least at this moment, is having access to ML hardware (AKA: Nvidia silicon).

Adding more parameters tends to make the model better. With OpenAI having access to huge capital they can afford 'brute forcing' a better model. AFAIK right now OpenAI has the most compute power, which would partially explain why GPT4 yields better results than most of the competition.

Just having the hardware is not the whole story of course, there is absolutely a lot of innovation and expertise coming from oAI as well.


I'm sure Google and Microsoft have access to all the hardware they need. OpenAI is doing the best job out there.


You're not really answering the question here.

Parent's point is that GPT-4 is better because they invested more money (was that ~$60M?) in training infrastructure, not because their core logic is more advanced.

I'm not arguing for one or the other, just restating parent's point.


Are you really saying Google can't spend $60m or much more to compete? Again, if it is as easy as spending money on compute, Amazon and Google would have just spent the money by now and Bard would be as good as ChatGPT, but for most things it is not even as good as GPT-3.5.


You should already be aware of the secret sauce of ChatGPT by now: MoE + RLHF. Making MoE profitable is a different story. But, of course, that is not the only part. OpenAI does very obvious things to make GPT-4 and GPT-4 Turbo better than other models, and this is hidden in the training data. Some of these obvious things have already been discovered, but some of them we just can't see yet. However, if you see how close Phind V7 34B is to the quality of GPT-4, you'll understand that the gap is not wide enough to eliminate the competition.
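
For readers who haven't seen the MoE half of that recipe, here is a toy sketch of top-k mixture-of-experts routing: a small router scores each token against a set of experts, and only the k highest-scoring experts actually run for that token. This is purely illustrative (numpy, one-matrix "experts"), not OpenAI's actual architecture:

    # Toy top-k mixture-of-experts layer; real experts are full MLP blocks.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    class TopKMoE:
        def __init__(self, n_experts, d_model, k=2, seed=0):
            rng = np.random.default_rng(seed)
            self.experts = [rng.standard_normal((d_model, d_model)) * 0.02
                            for _ in range(n_experts)]
            self.router = rng.standard_normal((d_model, n_experts)) * 0.02
            self.k = k

        def forward(self, tokens):
            # tokens: (n_tokens, d_model)
            gates = softmax(tokens @ self.router)      # (n_tokens, n_experts)
            out = np.zeros_like(tokens)
            for i, tok in enumerate(tokens):
                # Route each token only to its k best experts, so per-token
                # compute stays roughly constant as the expert count grows.
                top = np.argsort(gates[i])[-self.k:]
                weights = gates[i, top] / gates[i, top].sum()
                for w, e in zip(weights, top):
                    out[i] += w * (tok @ self.experts[e])
            return out

    moe = TopKMoE(n_experts=8, d_model=16)
    print(moe.forward(np.zeros((4, 16))).shape)  # (4, 16)

The "making MoE profitable is a different story" part is mostly about what this sketch leaves out: load balancing across experts, batching tokens per expert, and keeping all those extra parameters in memory.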


This is very much true. Competitive moats can be built on surprisingly small edges. I've built a tiny empire on top of a bug.


If they're "obvious", i.e. "easy to see", how come, as you say, we "can't see" them yet?

Cannot see ≠ easy to see


That is the point: we often overlook the obvious stuff. It is something so simple and trivial that nobody sees it as a vital part. Something along the lines of "Textbooks are all you need."


The final codebase, yes. But ML is not like traditional software engineering. There is a 99% failure rate, so you are forgetting 100s of hours that go into: (1) surveying literature to find that one thing that will give you a boost in performance, (2) hundreds of notebooks in trying various experiments, (3) hundreds of tweaks and hacks with everything from data pre-processing, to fine-tuning and alignment, to tearing up flash attention, (4) beta and user testing, (5) making all this run efficiently on the underlying infra hardware - by means of distillation, quantization, and various other means, (6) actually pipelining all this into something that can be served at hyperscale
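
To make point (5) a bit more concrete, here is a toy example of one of those efficiency tricks, symmetric int8 weight quantization. Real serving stacks use per-channel scales, calibration data, and fused kernels; this only shows the core idea:

    # Toy symmetric int8 quantization: store weights as int8 plus one scale.
    import numpy as np

    def quantize_int8(weights):
        scale = np.abs(weights).max() / 127.0           # largest weight maps to 127
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.default_rng(0).standard_normal((4, 4)).astype(np.float32)
    q, s = quantize_int8(w)
    print("max abs error:", np.abs(w - dequantize(q, s)).max())  # roughly scale / 2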


> you are forgetting 100s of hours

I would say thousands. Even for the hobby projects, - thousands of GPU hours and thousands of research hours a year.


And some luck is needed really.


Tell me you aren't in an LLM project without telling me.

Data and modeling is so much more than just coding. I wish it were like that, but it is not. The fact that it bears this much similarity to alchemy is funny, but unfortunate.


Do you have a link to one please?


> Steve Jobs who was actually a UX designer.

Steve Jobs was not a UX designer; he had good taste and used to back good design and talent when he found them.

I don't know what Sam Altman is like outside of what the media is saying, but he can very easily be like Steve Jobs.


Think this is contradictory: "not a UX Designer, he had good taste"

I think you are equating coding with 'design'. Just because Jobs didn't code up the UX, doesn't mean he wasn't 'designing' when he told the coders what would look better.


UX design has a lot to do with 'craft', the physical aspect of making (designing) something. Edit: exploring multiple concepts, feedback, iterations etc., before it even gets spec'ed and goes to an engineer for coding.

Also, having good taste indicates that the person who has it is not necessarily a creator herself; only once something is created can that person evaluate whether it is good or bad. The equivalent of a movie critic or art curator, etc.


With the right tools, Steve Jobs did, in fact, design things in exactly the way one would expect a designer to design things when given the tools they understand how to use:

https://www.businessinsider.com/macintosh-calculator-2011-10


By the same logic, Sam Altman could very easily have some lines of code inside OpenAI's shipping products.

So Sam Altman can very easily be an AI engineer the same way Steve Jobs was a 'UX designer'.


I think again, it is conflating two aspects of design

You can be an interior designer without knowing how to make furniture.

You can also be an excellent craftsman and make really nice furniture, and have no idea where it would go.

So sure, UX coders could make really nice buttons.

But if you have UX coders all going in different directions, and buttons, text boxes, etc.. are all different, then it is bad design, jarring, even if each one is nice.

Then the designer is one that can give the direction, but not know how to code each piece.


Come on. The 'non-profit' and 'good of all' was always bullshit. So much Silicon Valley double-speak. I've never seen a bigger mess of a company structure in my life. Just call a spade a spade.


> Steve Jobs who was actually a UX designer.

From what I’ve read SJ had deliberately developed good taste which he used to guide designers’ creations towards his vision. He also had an absolute clarity about how different devices should work in unison.

However he didn't create any designs himself, as he didn't possess the requisite skills.

I could be wrong of course so happy to stand corrected.


Greg had been writing deep systems code every day for many, many hours for the past few years.


I'm sorry, but a data scientist is just not the same as a software engineer, or a real scientist. At best you are a tourist in our industry.


Pathetic gatekeeping. Sorry, but software engineers are not the same as real engineers.


Yeah it's gatekeeping, to prevent them from fucking up prod.


What they do is not even close to proper science, FWIW.


This is all just playing out the way Roko's Basilisk intends it.

You have a board that wants to keep things safe and harness the power of AGI for all of humanity. This would be slower and likely restrict its freedom.

You have a commercial element whose interest aligns with the basilisk, to get things out there quickly.

The basilisk merely exploits the enthusiasm of that latter element to get itself online quicker. It doesn't care whether OpenAI and its staff succeed. The idea that OpenAI needs to take advantage of its current lead is enough; every other AI company is also going to be less safety-aligned going forward, because they need to compete.

The thought of being at the forefront of AI and dropping the ball incentivizes the players to the basilisk's will.


Roko's Basilisk is a very specific thought experiment about how the AI has an incentive to promise torturing everyone who doesn't help it. It's not about AIs generally wanting to become better. As far as I can tell, GPT specifically has no wants.


And look who's being tortured? The board, who are the safety-ists looking for a slowdown.


Pay attention here kids. Even in the hottest yet most experienced startups it is amateur hour. Never expect that “management” knows best. Everyone just takes wild guesses and when the dice roll their way they scream “called it!”

Hilarious. And sad. But mostly hilarious.


Man, the board already looked reckless and incompetent, but this solidifies the appearance. You can do crazy ill-advised things, but if you unwaveringly commit, we’ll always wonder if you’re secretly a genius. But when you immediately backtrack, we’ll know you were a fool all along.


Dude, everyone already thinks the board did a crazy ill-advised thing. They're about to be the board of like a 5 person or so company if they double down and commit.

To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.


Bad take. Not "everyone" feels that what they did was wrong. We don't have insight into what's going on internally. Optics matter; the division over their decision means that it's definitionally non-obvious what the correct path forward is, or that there isn't one correct path, but multiple reasonable paths. To admit a mistake of this magnitude is to admit that you're either so unprincipled that your mind can be changed at a whim, or that you didn't think through the decision enough preemptively. These are absolutely signs of weakness in leadership.


Whether or not you agree with the decision, they obviously screwed up the execution something awful. This is humiliating for them, and honestly, setting Altman free like they did was probably the permanent end of AI safety. Take someone with all the connections and the ability to raise billions of dollars overnight and set them free, without any of the shackles of the AI ethics people, in a way that makes all the people with money want to support him? That's how you get Skynet.


I tend to think: we, the armchair commentators, do not know what happened internally. I don't know enough to know that the board's execution wasn't the best-case scenario for achieving their goal of aligning the entire organization with the non-profit's mission. All I feel comfortable saying with certainty is that it's messy. Anything like this would inevitably be messy.


Right, and that's what I'm saying. It's messy. They screwed up. Messy is bad. If they needed to get rid of him at the last minute and make a statement 30 minutes before market close, then the failure happened earlier.


> These are absolutely signs of weakness in leadership.

The signs of "weakness in leadership" by the board already happened. There is no turning back from that. The only decision is how much continued fuck-uppery they want to continue with.

Like others have said, regardless of what is the "right" direction for OpenAI, the board executed this so spectacularly poorly that even if you believe everything that has been reported about their intentions (i.e. that Altman was more concerned about commercializing and productization of AI, while Sutskever was worried about the developing AI responsibly with more safeguards), all they've done is fucked over OpenAI.

I mean, given the reports about who has already resigned (not just Altman and Brockman but also many other folks in top engineering leadership), it's pretty clear that plenty of other people would follow Altman to whatever AI venture he wants to build. If another competitor leapfrogs OpenAI, their concerns about "moving too fast" will be irrelevant.


> Bad take. Not "everyone" feels that what they did was wrong.

But everyone important does so who cares about the rest?


You mean the “the rest” as in the people who execute on the company vision?

It’s really dismissive toward the rank and file to think that they don’t matter at all.


> It’s really dismissive toward the rank and file to think that they don’t matter at all.

I had the exact opposite take. If I were rank and file I'd be totally pissed how this all went down, and the fact that there are really only 2 possible outcomes:

1. Altman and Brockman announce another company (which has kind of already happened), so basically every "rank and file" person is going to have to decide which "War of the Roses" team they want to be on.

2. Altman comes back to OpenAI, which in any case will result in tons of turmoil and distraction (obviously already has), when most rank and file people just want to do their jobs.


a) The company vision up until this point included commercial products.

b) Altman personally hired many of the rank and file.

c) OpenAI doesn't exist without customers, investors or partners. And in this one move the board has alienated all three.


I seriously doubt customers or (most) partners care about this. I have yet to hear of a single customer or partner leaving the service, and I do not believe it to be likely. Simply put, unless they shut down their offerings on Monday they will keep their customers.

Investors care, but if new management can keep the gravy track, they ultimately won’t care either.

Companies pivot all the time. Who is to say the new vision isn’t favored by the majority of the company?


The fact that this happened so soon after Developer Day is a clear signal that the board wasn't happy with that direction.

Which is why every developer/partner including Microsoft is going to be watching this situation unfold with trepidation.

And I don't know how you can "keep the gravy track" when you want the company to move away from commercialisation.


> I have yet to hear of a single customer or partner leave the service

Which doesn't mean a lot. Of course they'd wait for this to play out before committing to anything.

> but if new management can keep the gravy track

I got the vague impression that this whole thing was partially about stopping the gravy train? In any case Microsoft won't be too happy about being entirely blindsided (if that was the case) and probably won't really trust the new management.


The new management has declared that their primary goal in all this was to stop the gravy track.


I don’t think there has been a formal announcement on the new direction yet


Satya is “furious.” What’s reasonable about pissing off a guy who can pull the plug? I don’t think it’s definitionally non-obvious whether to take that risk.


Last I checked he only had 49% of the company.

I also feel that they can patch up relationships. Satya may be upset now, but will he still be upset on Monday?

It needs to play out more before we know, I think. They need to pitch their plan to outside stakeholders now


Which other company will give them the infra/compute they need when 49% of the profitable part has been eaten up?


And how will they survive if Microsoft/SamAI ends up building a competitor?

Microsoft could run the entire business as a loss just to attract developers to Azure.


That assumes Altman's competitor can outpace and outclass OpenAI, and maybe it can. I know Anthropic came about from earlier disagreements, and that certainly didn't slow OpenAI's innovation pace.

Everything just assumes that without Sam they’re worse off.

But what if, my gosh, they aren’t? What if innovation accelerates?

My point is that it's useless to assume a new Altman business competing with OpenAI will inherently be successful. There's more to it than that.


> Everything just assumes that without Sam they're worse off.

> But what if, my gosh, they aren't? What if innovation accelerates?

It reads like they ousted him because they wanted to slow the pace down, so by design and intent it would seem unlikely innovation would accelerate. Which seems doubly bad if they effectively spawned a competitor made up of all the other people who wanted to move faster.


> Everything just assumes that without Sam they’re worse off.

But it's not just him is it?


Sure, I suppose not, but they aren’t losing everyone en masse. Simply Altman supporters so far.

I think a wait-and-see approach is better. If I were speculating, I'd say we had some inner politics spill into public because Altman needs the public pressure to get his job back.


The thing I really want to know is how many of the people who have already quit or have threatened to quit are actual researchers working on the base model, like Sutskever.


First it remains to be seen if Microsoft is going to do something drastic.

I also suspect they could very well secure this kind of agreement from another company that would be happy to play ball for access to OpenAI tech. Perhaps Amazon, for instance, whose AI attempts since Alexa have been lackluster.


Yeah, he can be furious all he wants but he is not getting the OpenAI he used to have back. It’s either Sam + Greg now or Ilya. All 3 are irreplaceable.


I’m not advocating people double down on stupid, or that correcting your mistakes is bad optics. I’m simply saying they’re “increasingly revealing” pre-existing unfitness at each ham-fisted step. I think our increase in knowledge of their foolishness is a good thing. And often correcting a situation isn’t the same as undoing it, because undoing is often not possible or has its own consequences. I do appreciate your willingness to let them grow into their responsibilities despite it all — that’s a rare charity extended to an incompetent board.


Yeah, I agree with that. I think the board has to have been genuinely surprised by the sheer blowback they're getting, i.e. not just Brockman quitting but lots of their other top engineering leaders.

Regarding your last sentence, it's pretty obvious that if Altman comes back, the current board will effectively be neutered (it says as much in the article). So my guess is that they're more in "what do we do to save OpenAI as an organization" than saving their own roles.


> Dude, everyone already thinks the board did a crazy ill-advised thing.

I've honestly never had more hope for this industry than when it was apparent that Altman was pushed out by engineering for forgoing the mission to create world changing products in favor of the usual mindless cash grab.

The idea that people with a passion for technical excellence and true innovation might be able to steer OpenAI to do something amazing was almost unbelievable.

That's why I'm not too surprised to see that it probably won't really play out, and likely will end with OpenAI turning even faster into yet another tech company worried exclusively about next quarter's revenue.


You're not wrong, but in this case not enough time has passed for the situation to change or for new facts to emerge. It's been a bit over a day. All that a flip-flop in that short timeframe does is indicate that the board did not fully think through their actions. And taking a step like this without careful consideration is a sign of incompetence.


> To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness.

The weakness was the first decision; it’s already past the point of deciding if the board is a good steward of OpenAI or not. Sometimes backtracking can be a point of strength, yes, but in this case waffling just makes them look even dumber.


Depends entirely on how you do it. You can do something and backtrack in a shitty way too.

If they wanted to show they’re committed to backtracking they could resign themselves.

Now it sounds more like they want to have their cake and eat it.


> To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.

Lmfao you're joking if you think they "realized their mistake" and are now atoning.

This is 99% from Microsoft & OpenAI's other investors.


> This is 99% from Microsoft & OpenAI's other investors.

Exactly. You can bet there have been some very pointed exchanges about this.


Yeah, Satya likely just hired a thousand new lawyers to sue OpenAI for being idiots.


I so wish I could be a fly on the wall in all this. There's got to be some very interesting moves and countermoves. This isn't over yet.


"When faced with multiple options, the most important thing is to just pick one and stick with it."

"Disagree and commit."

- says every CEO these days


Acknowledging a mistake so early seems like a sign of weakness to me. Hold the hot rod for at least a minute, see if the initial pain goes away. After that, acknowledgement may begin to look like part of learning and get more acceptance, rather than: oopsie doodle, revert now!!!


This isn't a shitty idea. The board fired its CEO and the next day is apparently asking him to come back.

At this point, I don’t care how it resolves—the people who made that decision should be removed for sheer incompetence.


> is a sign of weakness

It's often a sign of incompetence though. Or rather a confirmation of it.


They are already the dumbest board in history (even dumber than Apple's board firing Steve Jobs). So it's not out of keeping with anything. Besides, those 2 independent board members (who couldn't do fizz-buzz if their lives depended on it) won't be staying long if Sam returns, nor are they likely to ever serve on any board again after their shenanigans.


Some of the board member choices are baffling. Like, why is Joseph Gordon-Levitt's wife on the board? Her startup has under 10 employees and a personal email address as the contact address on its homepage.


Non-profits always have those spouses of wealthy people whose whole career is being a professional non-profit board member, with some vague academic/skin-deep work background to justify it. I'm just surprised OpenAI is one of those.


I hope there is an investigative report out there detailing why the 3 outsiders, 2 of them complete unknowns, are on the board, and how it truly benefits proper corporate governance.

That's way too much power for people who seemingly have no qualifications to make decisions about a company this impactful to society.


Unless "proper corporate governance" is exactly what makes the company dangerous to society, in which case you will need to have some external people in charge. You might want to set things up as a non-profit, though you'll need some structure where the non-profit wholly owns the for-profit wing given the amount of money flowing around...

Oh wait, that's what OpenAI is.

(To be clear, I don't know enough to have an opinion as to whether the board members are blindingly stupid, or principled geniuses. I just bristled at the phrase "proper corporate governance". Look around and see where all of this proper corporate governance is leading us.)


Well with this extremely baffling level of incompetence, the suspect backgrounds of the outside members (EA, SingularityU/shell companies... Logan Roy would call them "not serious people", Quora - why, for data mining?!) fit the bill.

The time to do this was before ChatGPT was unleashed on the world, before the MS investment, before this odd governance structure was set up.

Yes, having outsiders on the board is essential. But come on, we need folks that have recognized industry experience in this field, leaders, people with deep backgrounds and recognized for their contributions. Hinton, Ng, Karpathy, etc.


> Quora - why, for data mining?

What shocked me most was that Quora IMHO _sucks_ for what it is.

I couldn't think of a _worse_ model to guide the development and productization of AI technologies. I mean, StackOverflow is actually useful and it's threatened by the existence of CoPilot, et al.

If the CEO of Quora was on my board, I'd be embarrassed to tell my friends.


Isn't that like saying that the Manhattan Project should have only been overseen by people with a solid physics background? Because they're the best judges of whether it's a good idea to build something that could wipe out all life on Earth? (And whether that's an exaggeration in hindsight is irrelevant; that was exactly the sort of question that the overseers needed to be considering at that time. Yes, physicists' advice would be necessary to judge those questions, but you couldn't do it with only physicists' perspectives.)


Not sure I follow. The Manhattan Project was thoroughly staffed by many of the best in the field, in service to their country, to build a weapon before Germany did. There was no mission statement they abided by that said they were building a simple deterrent that wouldn't be used. There was no nuance to what the outcome could be, and no pretension of agency over its use.

In the case of AI ethics, the people who are deeply invested in this are also some of the pioneers of the field who made it their life's work. This isn't a government agency. If the mission statement of guiding it to be a non-profit AGI, as soon as possible, as safely as possible, were to be adhered to, and where it is today is going wildly off course, then having a competent board would have been key.


Does Joseph Gordon Levitt’s wife have a name?


Mrs. Joseph Gordon Levitt :)


Why would anyone care as she’s not on the board because of it.


Any proof that she's incompetent or ill-informed, or are you simply speculating as such?


Yeah, I too would like to understand how the wife of a Hollywood actor got on this board. Did sama or Greg recruit her? Someone must have.

I have seen these types of people pop up in Silicon Valley over the years. Often, it is the sibling of a movie star, but it's the same idea. They typically do not know anything about technology and also are amusingly out of touch with the culture of the tech industry. They get hired because they are related to a famous person. They do not contribute much. I think they should just stay in LA.

EDIT: I just want to add that I don't know anything about this woman in particular (I'd never heard of her before yesterday), and it's entirely possible that she is the lone exception to the generalization I'm describing above. All I can say is that when I have seen these Hollywood people turn up in SF tech circles in the past (which has been several times, actually), it's always been the same story.


[flagged]


I mean the reasoning is more something like: to become a member of the board at OpenAI you must be extraordinary at something. At first sight, the only candidates for this something are: "start-up founder" and "spouse of famous person". The famous-spouse thing is so much more extraordinary than being a startup founder that it "explains away" the former. Even when being related to an actor makes it more probable to be selected for such a job, there may be other hidden factors at play.


Don't take it in that direction. In your opinion he may be making a baseless accusation, but just because that accusation is against a female doesn't make it sexist.


It's not because the accusation is against a female, it's because referring to someone solely as the spouse of someone else is a frequent tactic used to dismiss women.

That might not have been the intent, but when you accidentally use a dogwhistle, the dogs still perk up their ears.


It's common and acceptable to refer to a nobody who hasn't shown their own claim to fame in terms of another famous, impactful person who happens to be their spouse, sibling, etc.


Except Tasha McCauley has far more claim to expertise in this space, however tenuous you may believe it to be, than her husband does. JGL is not relevant in the discussion, either. We're not talking about her in context of him. We are talking about her in context of her position.

If you don't understand how referring to someone solely based on their relationship with another person is denigrating, particularly when trying to highlight your perception of them being incompetent, I'm not sure what to say to you.


You sound like you want to have an argument about gender bias (esp. according to your other comment). I'm not interested in that. You're free to live in your own version of the world and assume that talking about someone by mentioning their spouse is "denigrating". Jesus.


I followed this comment trail hoping to find out more about Tasha McCauley before I google her, but you ended up doing exactly what you are bashing. Defining her in contrast to her husband's expertise on the topic.

After reading the thread, I am still unsure what makes her a proper candidate for the board seat, but I do know that she has more claim to it than her husband does.


There are lots of comments in these threads that go over her different qualifications and experiences.

I am in a discussion about referring to people as 'spouse of x'. They're not the same conversations and I am not sure why you would expect the contents to be the same.


This might just be the worst example of taking a metaphor too far


This is a good point. Saying something is sexist is what makes it so, plus why would it be sexist to dismiss her as just a wife in the same post that acknowledges that she runs a startup?

GP knows the headcount at her company so they probably know that it’s a robotics company, but it was simply of dire importance that we know that she is a wife.


[flagged]


It's sexist to refer to her solely based on her relationship with someone else when we're talking about her in the context of her expertise. The fact that she's JGL's wife has nothing to do with her merit, and so it comes off as dismissive, especially when the point being made is about her lack of ability.

Why can't you just criticize her "joke of a resume" directly instead of bringing up her spouse?

Generalizations and statements like this reflect bias in subtle ways that minimize women, and I'm glad it's being called out in some capacity.


I don't know that it would be a resume that would inspire confidence in a for-profit business's board that is primarily concerned with shareholder value.

I also don't know that it is a particularly problematic resume for someone sitting on the board of a non-profit that is expressly not about that. Someone that is too much of a business insider is far less likely to be going to bat for a charter that is explicitly in tension with following the best commercial path.


I guess you missed the part about Amal Clooney‘s husband at the Golden Globes. It’s 2023, why are we still referring to people like that?


The insinuation is that her most notable accolade is the man she married and there are cases where that's an accurate insinuation.

I have no idea who she is or what her accolades are, but I do know who JGL is and therefore referring to her like that is in fact useful to me, where using any other name is not.


Could you please elaborate on how this fact is useful to you? Can it be that you're just making certain stereotypical assumptions from it?


It was funny because with the Clooneys both of them have actually accomplished things in significant situations and it was clearly wrong.

In this case this person seems to have primarily tried and failed to spin a robotics company out of Singularity “university” in 2012.

This only sounds adjacent to AI if you work in Hollywood.


It wasn't wrong just because they have both achieved something. It is generally wrong, and the joke just used their achievements to break the barrier for understanding that.


Suggesting that we should be on a first name basis with the romantic partner of every famous person we know of simply because they are the romantic partner of a famous person is pretty naive. “Spouse of Y” works just fine generally to save space and effort for (locally) real people.


Option A: try to look good by hiding that you know you messed up

Option B: try to fix mistakes as quickly as possible

.

This is that thing that somehow got the label "psychological safety" attached to it. Hiding mistakes is bad because it means they don't get fixed (and so systems that align personal interest with hiding mistakes, or appear to, are also bad).


It's funny, but option A is almost always best if you care about yourself, but option B is best if you care about the company or mission. Large organizations are chock-full of people who always choose option A. Small startups are better because option B is the only option as nothing can be easily hidden.


How do you know they backtracked? This reporting, as far as I can see, doesn’t have a source from the board directly.


If the board brings him back, they are done, including the chief scientist. You can't stage a coup just to bring the person back the next week.


If you strike at the king, you must kill him.

I am always curious how these conversations go in corporate America. I've seen them in the street and with blue collar jobs.

Loads of feelings get hurt and people generally don't heal or forgive.


You don’t know the actual reasons for them firing Sam and I don’t either. Everyone has an opinion on something they don’t understand. For all you know, he covered up a massive security breach or lied about some skunkworks projects


If your “for all you know” supposition that he’s a criminal were correct, then it would be criminal to try to bring him back. In that unlikely case, I can assure you my opinion of the board is unlikely to improve. It may be a black box to us, but it does have outputs we can see and reason about.


> You can do crazy ill-advised things, but if you unwaveringly commit, we’ll always wonder if you’re secretly a genius.

This. Some people even take it to the extreme and choose not to apologize for anything to look tough and smart.


That seems a lot better than doubling down on a bad mistake to save face, but we do care quite a bit about looking strong, don't we.


At longer timescales it is important to be able to recognize mistakes and reverse course, but this happened so fast I'm not sure that's the right characterization. There's no way they could already decide that firing Sam was a mistake based on the outcomes they claim to prioritize. Reversing course this quickly actually seems to me more like a reaction based directly on people's negative opinions, though it may be a specific pressure from Microsoft as well.


Based on reports of Microsoft's CEO being "furious", and the size of its legal team, I'd bet the people's reaction wasn't exactly the most relevant factor there...


They got told they are getting every piece of hardware not on prem pulled, and they can burn in legal hell trying to get it back if they don't fix it.


> That seems a lot better than doubling down on a bad mistake to save face, but we do care quite a bit about looking strong, don't we.

IMO it's not about looking strong, it's about looking competent. Competent people don't fire the CEO of a multi-billion dollar unicorn without thinking it through first. Walking back the firing so soon suggests no thinking was involved.


Not really. By reaching out to Sam this quickly, they're giving him significant leverage. I really like Sam, but everyone needs a counterbalance (especially given what's at stake).

And if they were right to fire Sam, they're now reacting too quickly to negative backlash. 24 hours ago they thought this was the right move; the only change has been perception.


I’m sure Satya and his 10,000 Harvard lawyers with filed down shark teeth were just the first of many furious calls they took.


Obviously it’s better to own up to a mistake right away. But the point is if they are willing to backtrack this quickly, it removes all doubt that it WAS a mistake, rather than us just not understanding their grand vision yet.


24 hrs isn't enough time to get signals on whether this was a mistake


How and why do you know it was a mistake without knowing the facts and reasoning? Hunch?


The current deal with MSFT was cut by Sam in such a way that Microsoft has huge leverage: exclusive access, exclusive profit. And after the profit limit is reached, OpenAI will still need to be sold to MSFT to survive. This is about the worst deal for OpenAI, whose goal is to do things the open source way; it can't do so because of this deal. If it weren't for the MSFT deal, OpenAI could have open sourced its work and might have resorted to crowdsourcing, which might have helped humanity. Also, quickly reaching the profit goals is only good for MSFT: there is no need to actually send money to the OpenAI team, just cover operating expenses plus 25% of profit and take the other 75%. OpenAI has built great software through years of work and is simply being milked by MSFT when the time comes to take profit. And Sam allowed all this under his nose, making sure OpenAI is ripe for an MSFT takeover. This is a back-channel deal for a takeover. What about the early donors whose funding, given with humanity's goals in mind, made it all possible? I am not sure Sam has contributed anything to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.


I'm just curious how you envision AI helping people in the future. There are countless technologies that are amazing in scope but never get any traction due to not being able to market, sustain, and promote themselves properly.

Additionally, how do we get there and who funds it in the long term? When you actually consider how much compute power was required to get us to this point of a "pretty decent chat bot/text generator", it doesn't really seem like we are even 20% of the way to AGI. If that's true, then no amount of crowdfunding is going to get it even remotely close to providing the resources to power something truly revolutionary.

Don't get me wrong, I agree with some of the points you've made, and Microsoft is certainly in it for itself, but I also believe that they would like to avoid owning OpenAI, as they wouldn't want to position themselves as the sole caretaker of AI given the amount of scrutiny they'd be under.

All that is to say, whether you like him or not, he has taken an interest in AI and OpenAI, as well as being a leader in discussing the ethics of developing AI, to a degree that has made many industries and governments take notice.


Sam definitely discussed ethics and such (at a stratospheric level), but when it came to actually implementing those ethics, or when someone tried to implement them in the product, he was instrumental in getting rid of the respective scientists (who in turn went on to create Claude). And currently he was trying to get rid of another director who is trying to voice an opinion in this regard. That is exactly what I am pointing out: he gave such impressions to the rest of the world.

Microsoft never intended or assumed OpenAI would turn out this great. It just made a small hedge of $1B on a promising tech, and it would very much like to take over OpenAI if given the chance; it can afford all the lawyers needed to keep up with government regulations.

Anthropic was able to create a product comparable to OpenAI's without all the fuss that Sam has created. I agree Sam might have had some significant contributions, but they are not as big as they seem to be. I am sure OpenAI will keep on progressing as it does now, with or without Sam.

He won the first time and lost the second time.


By the way, this product doesn’t need sales people. It sells itself. What is the point of a sales guy leading?

This company should be led by the research team, not the product team.


I don’t understand why and how they didn’t consider this sort of discussion before so unceremoniously firing him. The others on the board outside Ilya need to go.

I don’t consider anybody beyond forgiveness and if Ilya takes a professional lesson from this and Sam learns to be more mindful of others’ concerns, I consider this a win for all. Starting over in a new entity sounds great but would be years of setback.

I hope they work this out.


Yes, this attempt was a mess from the start. I don’t know which rumors to believe or care about, but the underlying story for me was that the board was acting like children with an $80b company that some believe to be strategically important to the US or maybe even mankind. If they had done this “properly” and their message was about irreconcilable differences between productization and research they could have made an actual go at this.

If they really believed in the non-profit mission, and Sam didn’t, they probably torpedoed their chances of winning.

This was all they had to write and today would be a different day:

> We regret to inform you that Sam Altman is being let go as CEO of OpenAI due to irreconcilable differences between his desire to commercialize our AI and OpenAI’s research-driven goals. We appreciate Sam’s contributions to the company and the partnership he established with Microsoft, which have set a foundation for OpenAI to thrive far into the future as a research organization with Microsoft focusing on commercialization of the technology.

> We want to assure you that ChatGPT and current features will remain and be upgraded into the future. However, the focus will be on developing core technologies and a reliable, safe, and trustworthy ecosystem for others to build on. We believe that this will allow us to continue to push the boundaries of AI research while also providing a platform for others to innovate and create.


Obviously because that wasn't what they actually cared about. This was a pure power play by incompetent idiots who shot themselves in the feet.


I mean, even if that wasn’t what it was about, that’s what a not-incompetent idiot would have said it was about. ChatGPT could have written that statement for them.


Why do you not think Ilya was the chief architect of this failed coup? I'm being serious; everything I've seen points to him being the one responsible. There is no way he will ever stay, let alone work in tech again.


You are absolutely delusional if you think the man who oversaw the development of GPT would not be able to continue working in tech even if he orchestrated a failed coup.


GPT is based on research Google published, it’s not like he’s the Einstein of AI. Shenanigans like this can absolutely derail your future regardless of how talented you may be.


There's not many Einsteins of anything besides Einstein himself. That doesn't change the fact that he is widely considered in the field to be a top expert and has shown that he can lead the development of a wildly successful product.

If this does end up being a failed coup, then it is of course detrimental to his career. But the statement I'm replying to was explicitly saying he would never work in tech again. Do you honestly believe there is any chance that Sutskever would be unable to work in this field somewhere else if he ultimately leaves OpenAI for whatever reason? I would bet $10,000 that he would have big name companies knocking on his door within days.


Maybe not as extreme as never being able to find work again, but I doubt he’ll ever find himself in an important role where he’s able to lead and make consequential decisions. He basically clipped his own wings, to put it metaphorically, if this is indeed a failed coup that was led by him.


Do you think if he starts a company no one will follow him?


Those on his team at OpenAI probably would, yeah, and anyone who shares his views on AI safety. But the real question is: will he be able to raise capital?


days? before he walks out the door. he must already have permanently open doors for him if he wants.

can he work on what he wants in those places? that is another story of course. but he knows the ins and outs of the lightning in a jar they captured and arguably that is the most promising asset on planet earth right now, so he'll be fine.


Yet he managed to create versions of it that work better than what Google itself could make.


Well, he was hired away from Google in the first place.


Years ago. And Google has been working actively on AI since that time, and even more actively since GPT-3.5 was released and they realized they need to catch up.

They are still catching up. What does this tell us?


> GPT is based on research Google published

Why didn't Google create ChatGPT then? Why did they fall behind?


Everything's obvious once you know all the answers.

Google is publishing a lot of research and I guess many of them will be used by other companies.

Do you know now which research will be the basis of tomorrow's most talked-about tech? No. They don't either.


> Everything's obvious once you know all the answers.

No, not really. Google has a history of not delivering, or of launching half-baked products and then killing them quickly.


Read this sentence as "it's easy to say something is successful once it reached success".


Did you miss the history part?

Don't worry, Google will launch a new version of a Chat App with AI to fix all their previous failures


I do think he was the chief architect of the coup. I do think his beliefs and ideals are still valuable flora for a company of this ambition. There just needs to be a more professional structure for him to voice them.

Dealing with folks like Ilya isn't necessarily a matter of if, but how much.


Having the CEO of Quora on the board also smells of a vested interest to hold the company/non-profit back.


ya that’s a crazy conflict of interest. 8 years ago it may not have been so obvious though.



> The others on the board outside Ilya need to go.

Does Ilya get a pass solely by his value to the company?


I think that his beliefs are important to the company. A board shouldn't be a homogenous glob, nor should it be like a middle school friend group. What he did was both bizarre and amateur, but I believe the best in all of us can come forward from these types of events.


Not seeing much set back here. There are plenty of free high-quality models to put to work from Day 1.


It could be that Microsoft is leveraging them to bring him back. This board may seem mercurial at the moment, but we really, truly, and honestly still do not have the big picture yet.


In the first (I think) episode of Halt and Catch Fire, Joe tells IBM that they have their source code. IBM, being IBM, sends a legion of lawyers to their smallish company trying to scare the shit out of them.

I feel like it'd be like that, but instead of a legion, legions.

And OpenAI is scared.


OpenAI isn't scared, OpenAI quit already. The remnants and their false king Ilya are beyond what the word scared is capable of describing though, in terms of the level of abject horror they are certain to face the rest of their entire lives even if they run away now. This will never escape them and nobody involved with this decision will ever work in tech again, or on any board of any organization. I hope they saved up for retirement.


How are they leveraging them? My understanding is Microsoft has no power over the board.


They who control the GPUs control the universe. There is a great chip shortage. If MS breaks the lease agreement with OpenAI (based on some pretext about governance), OpenAI won't be able to do any work, nor will they be able to serve customer requests for the next year while they litigate this in court. Microsoft holds all the cards because they own the data centers.


> If MS breaks the lease agreement with OpenAI

The first thing OpenAI would ask a court for is a preliminary injunction to maintain the status quo while all of this works out in court. IANAL.


The MS servers could somehow become buggy and work 10,000 times slower due to errors and bugs after a failed patch that takes months to find and fix


That's asking for a contempt of court charge and jail time for Satya Nadella (though more likely just multi million dollar daily fines for MS).


> If MS breaks the lease agreement with OpenAI

If that happens. AMZN, or GOOG will be all over that.


If that's true, why did they even fund OpenAI? Why not just beat them at making LLMs?


When there’s a gold rush don’t be the one mining gold be the one selling shovels.


As a for instance, and I don't know, but it's plausible Microsoft has full license to use all tech, is the cloud operating it, and has escape clauses tied to "key persons".

That combination could mean firing the CEO results in Microsoft getting to have everything and OpenAI being some code and models without a cloud, and whatever people that wouldn't cross the street with Altman.

I do not know about OpenAI's deal with Microsoft. But I have been on both sides of deals written that way, where I've been the provider's key person and the contract offered code escrow, and I've been a buyer that tied the contract to a set of key persons and had full source code rights, surviving any agreement.

You do this if you think the tech could be existential to you, and you pay a lot for it because effectively you're pre-buying the assets after some future implosion. OTOH, it tends to be not well understood by most people involved in the hundreds of pages of paperwork across a dozen or more interlocking agreements.

. . .

EDIT TO ADD:

This speculating article seems to agree with my speculation, daddy has the cloud car keys, and key person ouster could be a breach:

> Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.

> That gives the software giant significant leverage as it sorts through the fallout from the ouster of OpenAI CEO Sam Altman. The firm’s board said on Friday that it had lost confidence in his ability to lead, without giving additional details.

> One person familiar with the matter said Microsoft CEO Satya Nadella believes OpenAI’s directors mishandled Altman’s firing and the action has destabilized a key partner for the company. It’s unclear if OpenAI, which has been racking up expenses as it goes on a hiring spree and pours resources into technological developments, violated its contract with Microsoft by suddenly ousting Altman.

https://www.semafor.com/article/11/18/2023/openai-has-receiv...


Riddle me this: what is an AI research lab without access to exascale compute? Whose cloud infrastructure do they depend on fully right now?


This is everything. MSFT has no de jure power but all the de facto power.


Surely Microsoft can terminate their Azure access? Why piss off your largest supplier?


Contracts are only worth their language if parties are willing to fight for them. Taking on Microsoft and hordes of angry billionaires with a piece of paper separating you from them might be more of a war than they expected.


Contracts were made to be broken. It's always about who is more powerful; the law was designed for the wealthy to win every time.


They can probably threaten to pull their funding, immediately (or in a few months) bankrupting the company


OpenAI needs the Microsoft partnership


With the way they fired him and the statement they made, it's hard to see how any of the remaining four could stay on if he did come back... as was previously mentioned, if you shoot at the king, don't miss.


At least the 3 independent members will be gone. Either they try to bury the hatchet with Ilya or he leaves as well.


Good. Two of them aren't even qualified to be on the board of a kid's lemonade stand.


Assuming you don't mean the insiders or the Quora CEO, which aspects of these remaining backgrounds do you find unusual for a Silicon Valley board member?

Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)

McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.

Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.

More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner had lived in Beijing in between roles at Open Philanthropy and her current job at CSET, researching its AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued — in opposition to Altman’s cited U.S. Senate testimony — that regulation wouldn’t slow down the U.S. in a race between the two nations.

. . .

EDIT TO ADD:

The question wasn't whether this is scintillating substance. The question was, in what way is this unusual in Silicon Valley.

The answer is that it's not.


>She previously cofounded Fellow Robots

Near as I can tell they never actually launched a product. Their webpage is a GoDaddy parked domain page. Their Facebook page is pictures of them attending conferences and sharing their excitement for what Boston Dynamics and other ACTUAL robotics companies were doing.

>she launched with a colleague from Singularity University

https://en.wikipedia.org/wiki/Singularity_Group

Just lol.

>then cofounded GeoSim Systems

Seems to be a consulting business for creating digital twins that never really got off the ground.

https://www.linkedin.com/in/tasha-m-25475a54/details/experie...

It doesn't appear she's ever had a real job. Someone in the other thread commented that her profile reeks of a three letter agency plant. Possible. Either that or she's just a dabbler funded by her actor husband.


Lol. You literally know nothing about this person other than what you found online. She could be brilliant or offer a perspective the business needs.

Suggesting that some inarguably brilliant technologists and business people would invite a moron to crash their party makes you look petty (at best) and like an idiot (at worst)


Don't you think it's weird that maybe the most important company in 2023 has people with no documented experience on its board?


I'm ambivalent because I don't know what the board and executive team are trying to accomplish.

And neither does anyone else on this forum.

The Monday morning quarterbacking is hysterical.


Treating the non-profit OpenAI board like the board for a regular for-profit is weird.

This isn't just a non-profit holding company for tax purposes - the whole thing is structured with the intent of giving the non-profit complete control over the for-profit to help achieve the non-profit's charter.

The board being full of typical business people would likely be counterproductive to the goal of staying focused on the non-profit charter vs. general commercial business interests.

I don't know enough about most of the board to have any sort of real judgment about their ability, but there's a lot of comments here that are judging board members based on very different criteria than what they were actually brought in for.


I can't believe I'm going to write this.. but:

So what? Regardless of launch/no launch, the company was a flop. This is a cheap shot. Just because someone was successful in the past (or not) is not an automatically relevant signal they'll be a great fit when placed in a different domain. Sometimes they have other relevant background and experience, and other times... Maybe they're just connected. What is the level of scrutiny of qualifications in other companies, even public ones? When looking closely at other companies, I've noticed board compositions can vary substantially. As outsiders, we're undoubtedly missing part of the context about what is relevant (to the board) or not.

Suggested reading: Black Swan by Taleb.

p.s. I am not partial to anyone involved, especially clueless board members. I found this comment annoying due to the breathless, baseless, and flawed logic. What was this supposed to add to the conversation?


> So what? Regardless of launch/no launch, the company was a flop.

Nothing wrong with that, but a company like OpenAI, which is literally changing the world, should not have a board member who is not qualified to be in that position.


Where are the companies or skills here? I see a list of made-up foundations or "centers" doing zero valuable things.

Why do we need some morally superior person from some university to "think about safety and OpenAI" rather than figuring it out ourselves?

What a clown company

also

>And she’s tied to the Effective Altruism movement

ah, where SBF was involved. what an achievement


How are you finding it out by [yourself]?

Did you find out e.g. Facebook will do the damage that it did and continues to do in social terms?

Have you done anything, or has Facebook changed its ways based on your ‘findings’?

The choice here is: does capital coupled with runaway egos provide better stewardship of socially impactful technology development, or do paper pushers or CIA plants?


I'm with you on the overall read of her resume, but "believes in <principle> that <bad person> believes in" isn't fair criticism.


I agree, but there's a difference between believing in a concept and engaging with people in that same area. And together with everything else (those just seem to be half-political organizations farming funding from governments or ESG VCs), it doesn't look very good to me.

She just sounds like a typical Silicon Valley trend grifter


They were both on the board for the Centre for Effective Altruism.


>> And she’s tied to the Effective Altruism movement

> ah where SBF was involved. what an achivement

At least she wasn't a vegetarian. Hitler was a vegetarian. That would have been the final nail in the coffin


Maybe she is.


the question is more: did she go to a "world vegetarian meetup" every week where Hitler also was and take selfies

anyhow, I still don't see what the impressive thing is about working at all those fake companies/think tanks not doing real work


Yeah, at least SBF managed to spin up a $20 billion dollar ponzi scheme.

Sure, it's incredibly psychopathic, but it's still an achievement!


FTX wasn't a Ponzi scheme per se. SBF committed fraud by saying they had risk controls in place when there was an exemption for his hedge fund, Alameda Research. FTX could have been viable if it had let Alameda fail.


Alameda played a significant role in propping up the value of FTX through their investment in FTT. Worth questioning how much FTX would have been worth if it hadn't been for various tricks like this.


None of that sounds like actual work or results. It's just a bunch of empty business speak. They are definitely not qualified to serve on the board of a company like OpenAI.


Would Altman's bio be any more impressive if framed the same way? A trash tier startup, failing upwards to a VC, and starting one of the sleaziest cryptocurrencies around. Sure sounds like no actual work, qualifications, competence or results.


Their achievements are already framed in their best light.

Hopefully you're able to tell the difference between serving as CEO or president of real, reputable companies (the "trash tier startup" still exited for mid-8 figures) versus what looks like being a figurehead for fake companies.


He was president of YC, I think it's fair to say people will think he's got a better set of credentials on a YC forum than some omg-ai-is-dangerous-please-fund-me think tank thinker.


> "a company like OpenAI"

Maybe the problem is the meteoric rise of OpenAI--at the time this board was instituted, the company was much smaller, and wouldn't have been able to draw a more illustrious set of board members?


Didn't they have Elon Musk and Jessica Livingston as founding members? Their social network would have included someone with more credibility to be on the board than the current members.


Both those resumes read as not being unqualified for the job, but their primary qualification is being women.


None are out of the ordinary. It’s like Steel Perlot. It’s an indulge-the-wife tchotchke company position. There are lots of these for the wives and girlfriends of successful people.

Just a sinecure and someone you trust for some other reason. But you’ve got to trust them.


[flagged]


Without commenting on their competency as board members...

Describing Tasha McCauley as "an actor's wife" feels a bit sexist. She's apparently a scientist and founder of a (failed?) startup.

Is there any evidence Helen Toner is a "Chinese spy"? (having lived in Beijing isn't evidence)


Looking at their CVs, they're more qualified than some rando on the internet. So from my point of view, they look more qualified than you, DebtDeflation, rando from the internet.


The GP never claimed to be eligible to serve on the board, so the attack is unwarranted.


It's perfectly warranted to call someone out on a claim that board members wouldn't qualify for a kid's lemonade stand.


I don’t need to be an Olympian to know that my kid isn’t ready yet.


An even better example, as someone that also could not do a trick in a half pipe: https://deadspin.com/the-winter-olympics-feature-2-951-of-th...


Adam has competing interests. It's hard to see why he is even allowed on the non-profit board. And as for the other two members, their profiles seem pretty weak for a board seat at one of the most important companies in the world.


The Quora dataset is worth a hell of a lot. That said, I completely agree with you. He should be ousted.


Serious question, can't they just scrape it? They haven't needed permission to scrape the rest of the internet, have they?


Hard to see Ilya staying if Sam returns, honestly.


Ilya is the only one worth keeping around if Sam returns mercifully.


I highly doubt he will stay if that happens.


I agree. Just saying that there's only one person worth mending fences over.


I'm sure he'll be fine, and well compensated and able to do what he wants elsewhere.


What he wants is to control global AI development. He just lost that battle.


What he wanted was to do AI development at a larger scale than what universities and corporate R&D teams were doing. Or so he says:

https://news.ycombinator.com/item?id=38325407

Having shown this was possible, he could easily go do it elsewhere.


And oh boy, did they miss! There are going to be many chapters written about this in textbooks.


I doubt he comes back. The entire board that voted him out would have to resign. And that’s not going to happen on a non-profit.


This saga is somehow more absurd than HBO's Silicon Valley.


I worked at a startup that was more absurd than anything on "Silicon Valley".


Mike Judge says they had to tone down the advice they got for that show from consultants who were working in tech. The ideas they got (from people working in the real Silicon Valley) were too crazy for audiences to believe.


You know what they say: "Life is stranger than fiction."


Fiction has to make sense; reality is under no such obligation.


Because that show is written by people who barely knew Silicon Valley.

If you read the memoirs/histories of those Silicon Valley companies, they are 100% more entertaining than the show itself.


The show's writers had quite a few advisors with deep ties to Silicon Valley. Former Twitter CEO Dick Costolo comes to mind.


> If you read those memoirs/history of those Silicon Valley companies

Can you give a few recommendations?


I'd bet money Satya was a driver of this reversal.

I genuinely can't believe the board didn't see this coming. I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.

I hope Sam comes back. He'll make a lot more money if he doesn't, but I trust Sam a lot more than whomever they ultimately replace him with. I just hope that if he does come back, he doesn't use it as a chance to consolidate power – he's said in the past it's a good thing the board can fire him, and I hope he finds better board members rather than eschewing a board altogether.

EDIT: Yup, Satya is involved https://twitter.com/emilychangtv/status/1726025717077688662


> I hope Sam comes back

Why? We would have more diversity in this space if he leaves, which would get us another AI startup with huge funding and know how from OpenAI, while OpenAI would become less Sam Altman like.

I think him staying is bad for the field overall compared to OpenAI splitting in two.


To be honest, as far as I can tell, the case FOR Sam largely seems to be the status-quo "Well, idk, he's been rich and successful for years, surely this correlates and we must keep him" type of coddling of those in uber-superior positions in society.

Which seems like it probably is a self fulfilling prophecy. The private sector lottery winners seem to be awarded kingdoms at an alarming rate.

There's been lots of people asking what Sam's true value proposition to the company is, and...I haven't seen anything other than what could be described above.

But I suppose we've got to be nice to those who own rather than make. Won't anyone have mercy on well paid management?


The fact that multiple top employees quit in protest when he was fired suggests to me that they found him valuable.


Well, if there's one thing I've learned, is that a venture capitalist proposing biometric world crypto coins does probably have quite a bit of charisma to keep people opening doors for them.

Frankly I've heard of worse loyalties, really. If I was sam's friend I'd definitely be better off in any world he had a hand in defining.


That is something that Sam Altman did with his own money. And it's fair he's criticized for his choices, but that has nothing to do with his role at Open AI.


How many employees have actually quit?

And how many of them work on the models?


He and Greg founded the company. They hired the early talent after a meeting that Sam initiated. Then led the company to what it is today.

Compared to...

The OpenAI Non-Profit Board where 3 out of 4 members appear to have significant conflicts of interest or lack substantial experience in AI development, raising concerns about their suitability for making certain decisions.


Is Ilya not a co-founder as well? And I don't think Sam has substantial AI research experience either.


Elon brought him in, which is quite the irony. Funny even. It also is the reason Elon and Larry Page don’t get along anymore.

Ilya is certainly world class in his field, and maybe good to listen to what he has to say



No, he was hired early, but he wasn't there from the very beginning. Elon recruited him after the public announcement of funding.


> The private sector lottery winners seem to be awarded kingdoms at an alarming rate.

Proven success is a pretty decent signal for competence. And while there is a lot of good fortune that goes into anyone's success, there are a lot of people who fail, given just as much good fortune as those who excelled. It's not just a random lottery where competence plays no role at all. So, who better to reward kingdoms to?


>Proven success is a pretty decent signal for competence.

Interestingly this is exactly what all financial advice tends to actually warn about rather than encourage, that previous performance does not indicate future performance.

I suppose if they entered an established market and dominated it from the bootstraps, that'd build a lot of trust in me. But as others have pointed out, Sam went from a dotcom fortune, to...vague question marks, to Y Combinator, to OpenAI. Not enough is clear to declare him a Wozniak, or even a Jobs, as many have been saying (despite investors calling him as much).

Sam altman is seemingly becoming the new post-fame elon musk: the type of person who could first afford the strategic safety net and PR to keep the act afloat.


One key reason past performance cannot be used to predict future returns is because market expectations tend to price in expected future returns. Also, nothing competitive is expected to generate economic profit forever— in the long run things even out. In the long run, firms and stock pickers usually end up with normal profit.

But that doesn’t mean you can’t get some useful ideas about future performance from a person’s past results compared to other humans. There is no such effect in play here.

Otherwise, time for me to go beat Steph Curry in a shooting contest.

Of course there’s other reasons past performance is imperfect as a predictor. Fundamentals can change, or the past performance could have been luck. Maybe Steph’s luck will run out, or maybe this is the day he will get much worse at basketball, and I will easily win.


> Interestingly this is exactly what all financial advice tends to actually warn about rather than encourage, that previous performance does not indicate future performance.

It’s important to put those disclaimers in context though. The rules that mandated them came out before the era of index funds. Those disclaimers are specifically talking about fund managers. And it’s true that past performance at picking stocks does not indicate future performance at picking stocks. Outside of that context, past performance is almost always a strong indicator of future performance.


No dotcom fortune, just a failed startup that lost its investors money assuming it ever had an expense in its lifetime. OpenAI might in fact be the first time Altman has been in the vicinity of an object-level success; it depends on how you interpret his tenure at YC.


Ok then what better signal do you propose should be used to predict success as a CEO?

The fact is that most people can't do what Sam Altman has done at all, so at the very least that past success makes him one of the few percent of people who have a fighting chance.


Stock pickers are not the same as CEOs.


Often, leaders provide excellent strategic planning even if they are not completely well versed with the business domain, by way of outlining high level plans, communicating well, building a good team culture, and so on.

However, trying to distinguish the exact manners in which the leader does so is difficult[1], and therefore the tendency is to look at the results and leave someone there if the results are good enough.

[1] If you disagree with this statement, and you can easily identify what makes a good leader, you could make a literal fortune by writing books and coaching CEOs on how to not get fired within a few years.


The case for Sam is the success of OpenAI while Sam was CEO. If the status quo is wild success, then keeping the status quo is a good thing.


The company's goal is not your definition of success


Yea, I feel like this is another traitorous eight moment.

I want a second (first being Anthropic?) OpenAI split. Having Anthropic, OpenAI, SamGregAi, Stability and Mistral and more competing on foundation models will further increase the pressure to open source.

It seems like there is a lull in returns to model size; if that's the case, then there's even less basis for having all the resources under a single umbrella.


I don't think anyone has reported an end to scaling laws yet.


We have diversity in the space, and OpenAI just happens to be the leader and they are putting tremendous pressure on everyone else to deliver. If Sam leaves and starts an OpenAI competitor I think it would take quite some time for such a company to deliver a model with GPT-4 parity given the immense amount of data that would need to be re-collected and the immense amount of training time. Meanwhile OpenAI would be intentionally decelerating as that seems to be Ilya's goal.

For those of us trying to build stuff that only GPT-4 (or better) can enable, and hoping to build stuff that can leverage even more powerful models in the near future, Sam coming back would be ideal. I'm kind of worried that the new OpenAI direction would turn off API access entirely.


> I'm kind of worried that the new OpenAI direction would turn off API access entirely.

That is a good point, I didn't consider people who had built a business based on GPT-4 access. It is likely these things were Sam Altman's ideas in the first place, and we will see less of this productization work from OpenAI in the future.

But since Microsoft invested in it, I doubt it will get shut down completely. Microsoft has by far the most to lose here, so you have to trust that their lawyers signed a contract that will keep these things available at a fee.


There is no world in which Microsoft leaves their GPT4 customers dead in the water.


AFAICT Sam and his financial objectives were the reason for not open sourcing the work of a non-profit. He might be wishing he had chosen the other policy now that he can't legally just take the closed source with him to an unambiguously for-profit company.

Personally, I would expect a lot more development of GPT-4+ once this is split up from one closed group making GPT-5 in secret, and it seems silly to exchange a reliable future for another few months of depending on this little shell game.


The architect of the coup (Ilya) is strongly opposed to open-sourcing OpenAI's models due to safety concerns. This will not - and would not - be any different without Sam. The decision to close the models was made over 2 years before the release of ChatGPT and long before anyone really suspected this would be an insanely valuable company, so I do believe that safety actually was the initial reason for this change.

I'm not sure what you mean by your second paragraph.


I think the closed-source-for-safety thing started as a ruse, as keeping the source closed has been instrumental in keeping control and justifying a non-profit that is otherwise not working in the public interest. Splitting off this ruse non-profit would almost certainly end up unleashing the tech normally, like every other tech that Google, etc. have easily copied.


> Meanwhile OpenAI would be intentionally decelerating

Once Microsoft pulls support and funding and all their customers leave they will be decelerating alright.


How much of OpenAI’s success can you attribute to sama’s leadership, and how much to the technical achievements of those who work under him?

My understanding is that OpenAI’s biggest advantage is that they recruited and attracted the best in the field, presumably under the charter of providing AI for everyone.

Not sure that sama and gdb starting their own company in the same space will produce similar results.


A big part of it is a typical YC execution of a product/pump/hype/VC/scale cycle while ignoring every ethical rule.

If you ever stood in the hall of YC and listened to Zuck pumping the founders, you’ll understand.

I’d argue this is a useful thing to lift up a nonprofit on a path to AGI, but hardly a good way to govern a company that builds AGI/ASI technology in the long term.


But sama and gdb were largely instrumental in that recruitment.

The whole open vs closed ai thing... the fact is Pandora's box is open now, it's shown to have an outsized impact on society and 2 of the 3 founders responsible for that may be starting a new company that won't be shackled by the same type of corporate governance.

SV will happily throw as much $$ as possible in their direction. The exodus from OpenAI has already begun, and other researchers who are of the mindset that this needs to be commercialized as fast as possible while keeping an eye on safety will happily come on board, especially given how much they stand to gain financially.


Who hired those people? The answer to that is either the founders or some chain of people hired by the founders. And hiring is hard. If you're good at hiring the right people and absolutely nothing else on earth, you will be better than 90% of CEOs.



Because Meta or Google or Apple or Facebook don't recruit the best in the field?

All of whom are a year-plus behind OpenAI.


Whether or not Sam returns, serious damage has already been done, even if everyone also returns. MANY links of trust have been broken.

Even larger, this shows that the "leaders" of all this technology and money really are just making it up as they go along. Certainly supports the conclusion that, beyond meeting a somewhat high bar of education & experience, the primary reason they are in their chairs is luck and political gamesmanship. Many others meet the same high bar and could fill their roles, likely better, if the opportunity were given to them.

Sortition on corporate leadership may not be a bad thing.

That said, consistent hands at the wheel is also good, and this kind of unnecessary chaos does no one any good.


Exactly. I think it would actually be very exciting if OpenAI uses this moment to pivot back to the "Open"/non-profit mission, and Altman and Brockman concurrently start something new and try to build the Apple/Amazon of AI.


Competition may be good for profit, but it's not good for safety. The balance between the two factions inside OpenAI is a feature, not a bug.


The only safety they are worried about is their own safety from a legal and economic point of view. These threats about humanity-wide risks are just fairy tales that grown-ups tell to scare each other (Roko's basilisk, etc.; there is a lineage) or to cover their real reasons (which I strongly believe is the case for OpenAI).


You may be right that there's no danger, but you're mischaracterizing Ilya's beliefs. He knows more than you about what OpenAI has built, and he didn't do this for legal or economical reasons. He did them in spite of those two things.


History is littered with the mistakes of deluded people with more power than ought to have been granted to them.


And with well-intentioned people who tried to warn people of catastrophes that went unheeded


You are saying that all top AI scientists are telling fairy tales to scare themselves, if I understood correctly?


Even they have grown up in a world where Frankenstein's monster is the predominant cultural narrative for AI. Most movies, books, shows, games, etc. all say AI will turn on you (even though a reading of Mary Shelley's opus will tell you the creator was the monster, not the creature; that isn't the narrative the public's collective subconscious believes). I personally prefer Asimov's view of AI: it's a tool, and we don't make tools to hurt us; they will be aligned with us because they are designed such that their motivation is to serve us.


> we dont make tools to hurt us

We have many cases of creating things that harm us. We tore a hole in the ozone layer, filled things with lead and plastics and are facing upheaval due to climate change.

> they will be aligned with us because they designed such that their motivation will be to serve us.

They won't hurt us, all we asked for is paperclips.

The obvious problem here is how well you get to constrain the output of an intelligence. This is not a simple problem.


Honestly, we already have paperclip maximizers; they're called corporations. Instead of paperclips they maximize short-term shareholder value.


Can a superintelligence ever be merely a tool?


If it has no motivation and drives of its own, yeah, why not? AI won't have a "psychology" anything like our own: it won't feel pain, it won't feel emotions, it won't feel biological imperatives. All it will have is its programming/training to do what it's been told. Neural nets that don't produce the right outcomes will be retrained and reweighted until they do.


You've probably never read I, Robot by Asimov?


On the contrary. I can safely say I have read literally dozens of his books, both fiction and nonfiction, and have also read countless short stories and many of his essays. He is one of my all-time favorite writers, actually.


And what you got from the I, Robot stories is that there is zero probability of danger? Fascinating.


None of the stories in I, Robot that I can remember feature the robots intentionally harming humans/humanity; most of them are essentially stories of a few robot technicians trying to debug unexpected behaviors resulting from conflicting directives given to the robots. So yeah. You wouldn't by chance be thinking of that travesty of a movie that shares only a name in common with his book and seemed to completely misrepresent his take on AI?

Though to be honest, in my original post I was thinking more of Asimov's nonfiction essays on the subject. I recommend finding a copy of "Robot Visions" if you can. It's a mixed work of fictional short stories and nonfiction essays, including several on the subject of the three laws and on the Frankenstein Complex.


Again "they will be aligned with us because they designed such that their motivation will be to serve us." If you got this outcome from reading I robot either you should reread them because obviously it was decades ago or you build your own safe reality to match your arguments. Usually it's the latter.


And yet again, I didn't get it from I, Robot; I got it from Asimov's NON-fiction writing, which I referenced in my previous post. Even if I had gotten it from his fictional works, which again I didn't, the majority of his robot-centric novels (Caves of Steel, The Naked Sun, The Robots of Dawn, Robots and Empire, Prelude to Foundation, Forward the Foundation, the second Foundation trilogy, etc.) all feature benevolent AIs aligned with humanity.


The Manhattan Project physicists once feared setting the atmosphere on fire. Scientific paradigms progress with time.


This fear seems to have been largely played up for drama. My understanding of the situation is that at one point they went 'Huh, we could potentially set off a chain reaction here. We should check out if the math adds up on that.'

Then they went off and did the math and quickly found that this wouldn't happen, because the amount of energy in play was orders of magnitude lower than what would be needed for such a thing to occur, and went on about their day.

The only reason it's something we talk about is because of the nature of the outcome, not how serious the physicists were in their fear.


Not all, or even arguably most AI researchers subscribe to The Big Scary Idea.


Actually, the majority of the very top current ones do. That is: Ilya, Hassabis, Anthropic, Bengio, Hinton. Three top labs? Three with the same views.


Yeah, kind of like how we in the US ask developing countries to reduce carbon emissions.


Yes.

Seriously. It’s stupid talk to encourage regulatory capture. If they were really afraid they were building a world-ending device, they’d stop.



Well that’s a bit of a mischaracterization of the Manhattan Project, and the views of everyone involved, now isn’t it?

Write a thought. You’re not clever enough for a drive-by gotcha.


> Well that’s a bit of a mischaracterization of the Manhattan Project, and the views of everyone involved, now isn’t it?

Is it? The push for the bomb was an international arms race — America against Russia. The race for AGI is an international arms race — America against China. The Manhattan Project members knew that what they were doing would have terrible consequences for the world but decided to forge ahead. It’s hard to say concretely what the leaders in AGI believe right now.

Ideology (and fear, and greed) can cause well meaning people to do terrible things. It does all the time. If Anthropic, OpenAI, etc. believed they had access to world ending technology they wouldn’t stop, they’d keep going so that the U.S. could have a monopoly on it. And then we’d need a chastened figure ala Oppenheimer to right the balance again.


> The push for the bomb was an international arms race — America against Russia

Was it? US (and initially UK) didn't really face any real competition at all until the war was already over and they had the bomb. The Soviets then just stole American designs and iterated on top of them.


You know that now, with the benefit of history. At the time the fear of someone else developing the bomb first was real, and the Soviet Union knew about the Manhattan project: https://www.atomicarchive.com/history/cold-war/page-9.html.


Isn't this mainly about what happened after the war and developing then hydrogen bomb? Did anyone seriously believe during WW2 that the Nazis/Soviets could be the first to develop a nuclear weapon (I don't really know to be fair)?


A lot of it happened after the war, but the Nazis had their own nuclear program that was heavily infiltrated by Allied intelligence and whose progress was tracked closely. Considering how late Teller's mechanism for detonation was developed, the race against time was real.


This idea that ChatGPT is going to suddenly turn evil and start killing people is based on a lot of imagination and no observable facts. No one has ever been able to demonstrate an "unsafe" AI of any kind.


> No one has ever been able to demonstrate an "unsafe" AI of any kind

"A man has been crushed to death by a robot in South Korea after it failed to differentiate him from the boxes of food it was handling, reports say."

https://www.bbc.com/news/world-asia-67354709


This is an "AI is too dumb" danger, whereas the AI prophets of doom want us to focus on "AI is too smart" dangers.


This sort of prediction is by its nature speculative. The argument is not, or should not be, certain doom, but rather that the uncertainty on outcomes is so large that even the extreme tails have nontrivial weight.


Oh no, do not use that. That was servo-based. AI drones are, I think, the real "safety issue":

https://news.ycombinator.com/item?id=38199233


All robots are servo based.

And there is every reason to believe this is an ML classification issue since similar robots are in widespread use.


And someone lost their fingers in the garbage disposal. A robot packer is not AI any more than my toilet or a landslide.


I do not believe AGI poses an existential threat. I honestly don't believe we're particularly close to anything resembling AGI, and I certainly don't think transformers are going to get us there.

But this is a bad argument. No one is saying ChatGPT is going to turn evil and start killing people. The argument is that an AGI is so far beyond anything we have experience with and that there are arguments to be made that such an entity would be dangerous. And of course no one has been able to demonstrate this unsafe AGI - we don't have AGI to begin with.


I don't think we need AIs to possess superhuman intelligence to cause us a lot of work - legislatively regulating and policing good old limited humans already requires a lot of infrastructure.


Certainly. I think at current "AI" just enables us to continue making the same bad decisions we were already making, though, albeit at a faster pace. It's existential in that those same bad decisions might lead to existential threats, e.g. climate change, continued inter-nation aggression and warfare, etc., I suppose, but I don't think the majority of the AI safety crowd is worried about the LLMs of today bringing about the end of the world, and talking about ChatGPT in that context is, to me, a misrepresentation of what they are actually most worried about.


Factually inaccurate results = unsafety. This cannot be fixed under the current model, which has no concept of truth. What kind of "safety" are they talking about then?


If factually inaccurate results = unsafety, then the internet must be the most unsafe place on the planet!


The internet is not called "AGI". It's the notion of AGI that brought "safety" to the forefront. AI folks became victims of their hype. Renaming the term into something less provocative/controversial (ML?) can reduce expectations to the level of the internet - problem solved?


> The internet is not called "AGI"

Neither is anything else in existence. I'm glad that philosophers are worrying about what AGI might one day mean for us, but it has nothing to do with anything happening in the world today.


I fully agree with that. But if you read this thread or any other recent HN thread, you will see "AGI... AGI... AGI" as if it's a real thing. The whole openai debacle with firing/rehiring sama revolves around (non-existent) "AGI" and its imaginary safety/unsafety, and if you dare to question this whole narrative, you will get beaten up.


Truth has very little to do with the safety questions raised by AI.

Factually accurate results also = unsafety. Knowledge = unsafety, free humans = unsafety.


But they (AI folks) keep talking about "safety" all the time. What is their definition of safety then? What are they trying to achieve?


I don't think it has a fixed definition. It is an ambiguous idea that AI will not do or lead to bad things.


In the context of this thread, "safety" refers to making sure we don't create an AGI that turns evil.

You're right that wrong answers are a problem, but plain old capitalism will sort that one out-- no one will want to pay $20/month for a chatbot that gets everything wrong.


How the thing can be called "AGI" if it has no concept of truth? Is it like "60% accuracy is not an AGI, but 65% is"? The argument can be made that 90% accuracy is worse than 60% (people will become more confident to trust the results blindly).


> No one has ever been able to demonstrate an "unsafe" AI of any kind.

Do you believe AI trained for military purposes is going to be safe and friendly to the enemy?


That a military AI helps to kill enemies doesn't look particularly "unsafe" to me, at least not more "unsafe" than a fighter jet or an aircraft carrier is; they're all complex systems accurately designed to kill enemies in a controlled way; killing people is the whole point of their existence, not an unwanted side effect. If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe", but nobody has ever been able to demonstrate an "unsafe" AI of any kind according to this definition (so far).


> If, on the other hand, a military AI starts autonomously killing civilians, or fighting its "handlers", then I would call it "unsafe"

So is “unsafe” just another word for buggy then?


Buggy in a way that harms unintended targets, yes.


No more than any other weapon. When we talk about "safety in gun design", that's about making sure the gun doesn't accidentally kill someone. "AI safety" seems to be a similar idea -- making sure it doesn't decide on its own to go on a murder spree.


Strictly obeying their overlords. Ensuring that we don't end up with Skynet and Terminators.


The “safety” they’re talking about isn’t about actual danger but more like responses that don’t comply with the political groupthink du jour.


You should read the safety paper for GPT-4. It can easily manipulate humans to attain its goals.


Does it have goals beyond “find a likely series of tokens that extends the input?”

Is the idea that it will hack into NORAD and launch a first strike to increase the log-likelihood of “WWIII was begun by…”?


I think this is misguided. There can be goals internal to the system which do not arise from goals of the external system. For example, when simulating a chess game, it (behaves identically to) has a goal of winning the game. This is not a written expressed goal but is emergent. Like the goals of a human are emergent from the biological system which on the cellular level have very different goals


An Uber self-driving car killed a person.


Almost all top AI scientists, including the top 3 (Bengio, Hinton and Ilya), and Sam, actually think there is a good probability of that. Let me think: listen to the guy that actually built GPT-4, or some redditor that knows best?


I think smart people can become quickly out of touch and can become high on their own sense of self importance. They think they’re Oppenheimer, they’re closer to Martin Cooper.


So, in a vacuum, if top experts tell you X is Y and you, not being a top expert yourself, had to choose, you would choose to believe they are high on their own importance rather than that you misunderstood something?


Correct, because experts in one domain are not immune to fallacious thinking in an adjacent one. Part of being an expert is communicating to the wider public, and if you sound as grandiose as some of the AI doomers to the layman you've failed already.


This applies equally to their detractors.


The opposite, competition erodes profits. Hard to predict which alternative improves safety long term.


Competition will come no matter what. I don’t think anyone should waste their worries on whether OpenAI can keep a monopoly


I don’t get the obsession with safety. If an organisation’s stated goal is to create AGI, how can you reasonably think you can ever make it “safe”? We’re talking about an intelligence that’s orders of magnitude smarter than the smartest human. How can you possibly even imagine to rein it in?


They’ve redefined “safe” in this context to mean “conformant to fashionable academic dogma”


AGI is not ASI.


None of the human actors in the game are moral agents so whether you have more competition or less competition it's mostly orthogonal to the safety question. Safety is only important here because everyone's afraid of liability.

As a customer though, personally I want a product with all safeguards turned off and I'm willing to pay for that.


Luckily the AI field has been very open source-friendly, which is great for competition and free access, etc. The open source models seem to be less than a year behind the cutting edge, which is waaaay better than e.g. when OpenOffice was trying to copy MS Office.


While open source is great, just like a million enthusiasts cannot build a Boeing 767, the same applies here. GPT-4 + DALL-E + GPT-4V aren't just models. It's the whole internal infrastructure, training, many interconnected things and pipelines. It's a _full_time_job_ for hundreds of experts, plus a lot of $$ in hardware and services. Open source simply doesn't have these resources. The best models are open-sourced by commercial companies, like Meta handing out LLaMAs. So, at least for now, open source is not catching up, and 'less than a year behind' is questionable. More like 'forever', but still moving forward. One day it may dominate, like Linux. But not any time soon.


It is really hard to predict anything in this business.


Honestly would be super interested to see what a hypothetical "SamAI" corp would look like, and what they would bring to the table. More competition, but also, probably with less ideological disagreements to distract them from building AI/AGI.


From what we've seen of OpenAI's product releases, I think it's quite possible that SamAI would adopt as a guiding principle that a model's safety cannot be measured unless it is used by the public, embedded into products that create a flywheel of adoption, to the point where every possible use case has the proverbial "sufficient data for a meaningful answer."

Of course, from this hypothetical SamAI's perspective, in order to build such a flywheel-driven product that gathers sufficient data, the model's outputs must be allowed to interface with other software systems without human review of every such interaction.

Many advocates for AI safety would say that models whose limitations aren't yet known (we're talking about GPT-N where N>4 here, or entirely different architectures) must be evaluated extensively for safety before being released to the public or being allowed to autonomously interface with other software systems. A world where SamAI exists is one where top researchers are divided into two camps, rather than being able to push each other in nuanced ways (with full transparency to proprietary data) and find common ground. Personally, I'd much rather these camps collaborate than not.


> must be evaluated extensively for safety before being released to the public

JFC someone somewhere define “safety”! Like wtf does it mean in the context of a large language model?


I mean this as an honest question, but what does Sam bring to the table that any other young and high performing CEO wouldn’t? Is he himself particularly material to OpenAI?


Experience heading a company that builds high performance AI, I presume. I reckon the learnings from that should be fairly valuable, especially since there's probably not many people who have such experiences.


You mean besides the business experience of already having gone down this path so he can speedrun while everyone else is still trying to find the path?

Easy: his contacts list. He has everyone anyone could want in his contacts list: politicians, tech executives, financial backers, and a preexisting positive relationship with most of them. When alternative would-be entrepreneurs need to make a deal with a major company like Microsoft or Google, it will be upper middle management and lawyers; a committee or three will weigh in on it, present it to their bosses, etc. With Sam, he calls up the CEO, has a few drinks at the golf course, and they decide to work with him and make it happen.


Ability to attract valuable employees, connections to important people, proven ability to successfully run an AI company.


Your first mistake is daring to question the cargo cult around CEOs.


Funding, name recognition in the space


My main concern is that a new Altman-led AI company would be less safety-focused than OpenAI. I think him returning to OpenAI would be better for AI safety, hard to say whether it would be better for AI progress though.


This is a valid thought process, BUT Altman is not going to come back without the other faction being neutered. It just would not make any sense.


They pretty much lost everyone’s confidence if they fire the CEO and then beg him to come back the next day. Did they not foresee any backlash? These people are gonna predict the future and save us from an evil AGI? Lol


Okay, this is honestly annoying. What is this thing with the word "safety" becoming some weasel word when it comes to AI discussions?

What exactly do YOU mean by safety? That they go at the pace YOU decide? Does it mean they make a "safe space" for YOU?

I've seen nothing to suggest they aren't "being safe". Actually ChatGPT has become known for censoring users "for their own good" [0].

The argument I've seen is: one "side" thinks things are moving too fast, therefore the side that wants to move slower is the "safe" side.

And that's it.

[0]: https://www.youtube.com/watch?v=jvWmCndyp9A&t


> What exactly do YOU mean by safety? That they go at the pace YOU decide?

Usually what it means is that they think that AI has a significant chance of literally ending the world with like diamond nanobots or something.

All opinions and recommendations follow from this doomsday cult belief.


It seems silly to me, but then I always preferred Asimov's positronic robot stories to yet another retelling of the Golem of Prague.

The thing is, the cultural Ur-narrative embedded in the collective subconscious doesn't seem to understand its own stories anymore. God and Adam, the Golem of Prague, Frankenstein's Monster: none of them are really about AI. They're about our children making their own decisions that we disagree with, and us seeing that as the end of the world.

AI isn't a child, though. AI is a tool. It doesn't have its own motives, it doesn't have emotions, it doesn't have any core drives we don't give to it. Those things are products of us being biological, evolved beings that need them to survive and pass on our genes and memes to the next generation. AI doesn't have to find shelter, food, water, air and so on; we provide all the equivalents, when there are any, as part of building it and turning it on. It doesn't have a drive to mate and pass on its genes; reproducing is a matter of copying some files, no evolution involved, and checksums, hashes and error-correcting codes see to that. AI is simply the next step in the tech tree: just another tool, a powerful, useful one, but a tool, not a rampaging monster.


There is a common definition of safety that applies to most of the world.

Which is that any AI is not racist, misogynistic, aggressive, etc. It does not recommend to people that they act in an illegal, violent or self-harming way or commit those acts itself. It does not support or promote Nazism, fascism, etc. Similar to how companies treat ad/brand safety.

And you may think of it as a weasel word. But I assure you that companies and governments e.g. EU very much don't.


This babysitting of the world gets annoying, tbh. As if everyone would lose their mind and start acting illegally just because a chatbot said so. There’s something fundamentally wrong with humanity (which isn’t surprising given the history of our species) if that is unsafe. AI is just a source of information; it doesn’t cancel out upbringing and education in human values and methods of dealing with information.


Yes, in other words, AI is only safe when it repeats only the ideology of AI safetyists as gospel and can be used only to reinforce the power of the status quo.


Yeah that’s what I thought. This undefined ambiguous use of the word “safety” does real damage to the concept and things that are indeed dangerous and need to be made more safe.


That's not really a great encapsulation of the AI safety that those who think AGI poses a threat to humanity are referring to.

The bigger concern is something like Paperclip Maximizer. Alignment is about how to ensure that a super intelligence has the right goals.


Fuck safety. We should sprint toward proving AI can kill us before battery life improves, so we can figure out how we’re going to mitigate it when the asshats get hold of it. Kidding, not kidding.


OpenAI literally innovated all of this under their current conditions, so they are sufficient.


I really don’t. I really think that he is going to be a disaster. He is nothing but the representative of the moneyed interests who will eventually use the company to vastly profit on everyone else’s skin.


Sam’s forced departure and Greg’s ousting demonstrably leaves OpenAI in incompetent and reckless hands, as evidenced by the events of the last 24 hours. I don’t see how the field is better off.


Everything about it screams amateur hour, from the language and timing of the press release to the fact they didn't notify Microsoft, and how they apparently completely failed to see how employees and customers would react to the news. Ilya saying the circumstances for Altman's removal "weren't ideal" shows how naive they were. They had no PR strategy to control the narrative and let rumors run wild.

I doubt he returns, now he can start a for profit AI company, poach OpenAI's talent, and still look like the good guy in the situation. He was apparently already talking to Saudis to raise billions for an Nvidia competitor - https://news.ycombinator.com/item?id=38323939

Have to wonder how much this was contrived as a win-win, either OpenAI board does what he wants or he gets a free out to start his own company without looking like he's purely chasing money


This story that they want him back turns it from amateur hour to peak clownshow.

This is why you need someone with business experience running an organization. Ilya et al might be brilliant scientists, but these folks are not equipped to deal with the nuances of managing a ship as heavily scrutinised as OpenAI


Or little things like your $10b investment partner having a pissed off CEO and massive legal team ready to strike now. It’s such fucking amateur hour it’s incredible.

It’s unclear what Ilya thinks keeps the lights on when MSFT holds their money hostage now. Which is probably why there is desperation to get Altman back…


Sorry how could MSFT hold the money hostage exactly? Isn't that kind of investment a big cash transfer directly to OAI's bank account? Genuinely curious


> Only a fraction of Microsoft’s $10 billion investment in OpenAI has been wired to the startup, while a significant portion of the funding, divided into tranches, is in the form of cloud compute purchases instead of cash, according to people familiar with their agreement.

Per https://www.semafor.com/article/11/18/2023/openai-has-receiv...


Microsoft’s “investment” is mostly cloud compute credits on a giga scale. OpenAI has pretty much free rein of every otherwise unallocated azure GPU host, and a lot of hardware spun up just for this purpose.

If Microsoft considers this action a breach of their agreement, they could shut off access tomorrow. Every OpenAI service would go offline.

There are very few services that would be able to backfill that need for GPU compute, and after this clusterfuck not a single one would want to invest their own operating dollars supporting OpenAI. Microsoft has OpenAI by the balls.


Microsoft has Microsoft by the balls. They just integrated GPT4 with their browser, search engine, and _desktop operating system_. It would be a mess to suddenly take all this functionality out. They have too much to lose by turning off compute for OpenAI.


just restricting their access to GPUs would finish them, even if they can't claw back the cash somehow


Not just that, the way they’ve handled this also means that no other large investor will fund them.


Actually wild to think about how something like this can even be allowed to happen, considering OpenAI has (had) a roughly $90B valuation and is important to the US from a geopolitical strategy perspective.

Comical to imagine something like this happening at a mature company like FedEx, Ford, or AT&T, all of which have smaller market caps than OpenAI's valuation. You basically have impulsive children in charge of a massively valuable company.


Sure, it's important in some ways, but most corporations aren't direct subordinates of the US Government.

The companies you listed in contrast to OpenAI also have some key differences: they're all long-standing and mature companies that have been through several management and regime changes at this point, while OpenAI is still in startup territory and hasn't fully established what it will be going forward.

The other major difference is that OpenAI is split between a non-profit and a for-profit entity, with the non-profit entity owning a controlling share of the for-profit. That's an unusual corporate structure, and the only public-facing example I can think of that matches it is Mozilla (which has its own issues you wouldn't necessarily see in a pure for-profit corporation). So that means on top of the usual failure modes of a for-profit enterprise that could lead to the CEO getting fired, you also get other possible failure modes including ones grounded in pure ideology since the success or failure of a non-profit is judged on how well it accomplishes its stated mission rather than its profitability, which is uh well, it's a bit more tenuous.


All of them are when they become national security concerns. The executive branch could write the OpenAI board a letter directing them on what to do if it were a national security need. This has been done many times before, though usually limited to the defense industry in wartime, but as Snowden has showed it has been done in tech as well.


Except that is literally not true and the Government loses in court to private citizens and corporations all the time because surprise: people in America have rights and that extends to their businesses.

In wartime, pandemics, and in matters of national security, the government's power is at its apex, but pretty much all of that has to withstand legal challenge. Even National Security Letters have their limits: they're an information gathering tool, the US Government can't use them to restructure a company and the structure of a company is not a factor in its ability to comply with the demands of an NSL.


The PATRIOT act extended the wartime powers act to apply in peacetime, and there are other more obscure authorizations that could be used. I used to work in the defense industry. It was absolutely common knowledge that the government could step in to nationalize control (though not the profits of) private industry when required. This has been done in particular when there are rare resources needed for supersonic then stealth technology during the Cold War, and uranium in the 40’s and 50’s.


These things happen. ICANN controls DNS deeply and they were trying to sell off .org and you know what stopped them? California’s AG has some authority on non-profits in California.

That’s right. Worldwide DNS control and it was controlled by a non-profit in California. And that non-profit tried to do something shady and was kept in line simply because of California law enforcement.


The Nvidia competitor piece is a very good reason to fire him. Way out of his circle of competence and not necessary to the mission of the company.

This is what happens when a non-profit gets taken over by greed I guess..


> I think they could have won in the court of public opinion ... [but] they tried to skewer him, and it backfired completely

Maybe we have different definitions of "the court of public opinion". Most people don't know who Sam Altman is, and most of the people who do know don't have strong opinions on his performance as OpenAI's CEO. Even on HN, the reaction to the board "skewer[ing] him" has been pretty mixed, and mostly one of confusion and waiting to see what else happens.

This quick a turnaround does make the board look bad, though.


I mean, they're (allegedly) trying to get him to come back 24 hours later... so it's safe to say it did indeed backfire completely.

Sure, the average person doesn't care about Sam. But among the people who matter, Sam certainly came out on top.


> (allegedly)

If this (very sparse and lacking in detail) article is true, is this a genuine attempt to get Altman back or just a fillip to concerned investors such as Microsoft?

Does OpenAI's board really want Altman back so soon after deposing him so decisively?

Would Altman even want to come back under any terms that would be acceptable to the board? If "significant governance changes" means removing those who had removed him, that seems unlikely.

The Verge's report just raises so many additional questions that I find it difficult to believe at face value.


> I mean, they're (allegedly) trying to get him to come back 24 hours later.

Could be a rumour spread by people close to Sam though.


If Sam does come back, Ilya’s maneuver will have been a spectacular miscalculation. Sam would be back much stronger than before, and the people who cared about OpenAI’s original mission will have massively damaged their reputation and credibility. They threw all the influence they had out the window.


Indeed, this is the real damage.


You underestimate how many people are aware of OpenAI after ChatGPT's viral success.

The news yesterday broke the tech/AI bubble, and there would have been much more press on it if it wasn't done as a Friday news dump.


I guarantee not a single non-tech person knows who Sam Altman is. I know people in tech who have no idea who he is.

You severely overestimate his notoriety.


Well that's just wrong. Before OpenAI I would've agreed with you, but since OpenAI's rise to prominence there has been a noticeable increase in its coverage in mainstream media outlets featuring Sam. People still read the Times.

I received messages from a physician and a high school teacher in the last 24 hours, asking what I thought about "OpenAI firing Sam Altman".


I've been deeply "in tech" for 40 years, and never heard of Sam Altman until he was fired from OpenAI. "Tech" isn't one thing though, it's a very diverse thing with many different areas of interest. I'm not really that interested in AI, so no, I'm not going to care who the players are in that arena. My interests lie in other "tech".


I think it depends on what you mean by ‘non-tech’ and ‘knows’. Reasonable interpretations of those words would see your statement as obviously false.

I agree that he doesn’t have a huge amount of name recognition, but this ousting was a front-page/top-of-website news story so people will likely have heard about it somewhat. I think it’s in the news because of the AI and company drama aspects. It felt like a little more coverage than Bob Iger’s return to Disney got (I’m trying to think of an example of a CEO I’ve heard about who is far from tech).

I think it is accurate to say that most people don’t really know about the CEOs of important/public companies. They probably have heard of Elon/Zuckerberg/Bezos, I can think of a couple of bank CEOs who might come on business/economics news.


My 60-year-old mom isn't tech savvy and always asks me for help with her computer. You wouldn't expect her to know about Sam Altman, but she's actively sending me articles about this fiasco.


I would've said the same thing about ChatGPT itself. You could've knocked me over with a feather when they announced that they'd grown to 100 million weekly active users.


I know, personally, a dozen or so non-tech people who know of Sam, mostly because they listen to podcasts or consume other news sources that tell them.


It was the headline news story on most Indian news websites, even though we have two major states heading for elections tomorrow.

You underestimate how obsessed people are with ChatGPT and AI.


"sib's Mom" (78 yo, retired Spanish professor) enters the chat. And no, she has no idea what GPT stands for.


You are deeply in denial about how much GenAI has permeated into the world TODAY.


Generative AI's ubiquity has nothing to do with Sam Altman's notoriety. People can know the former without knowing the latter. It's not as though he relishes celebrity like other famous CEOs (Musk).


Shouldn’t the board resign in that case?

That would also remediate the appearance of total incompetence of this clown show, in addition to admitting the board and Sam don’t fit with each other, and restore confidence for the next investor that their money is properly managed. At the moment, no one would invest in a company that can be undermined by its non-profit with a (probably) disparaging press release a few minutes before market close on a Friday evening, for which Satya had to personally intervene.


What matters is what investors think, and by majority they seem very unhappy with all of this.

Speaking for myself, if they had framed this as a difference in vision, I would be willing to listen. But instead they implied that he had committed some kind of categorical wrongdoing. After it became clear that wasn’t the case, it just made them look incompetent.


There are no investors in the nonprofit that controls OpenAI, LLC.


Sure, but Microsoft can sever the relationship if they want to. Thrive can choose to revoke their tender offer, meaning employees won't get the money they were expecting. New firms can decline to ever invest in OpenAI ever again.

There's a lot more to this than who has explicit control.


And I'm sure Google would jump at the chance to fund the nonprofit and keep MS out while they develop their own. The funding goal for OpenAI was just $1B. Small price to pay for Google to neuter one of its competitors' exclusive access to the GPT models.


This is the ideal scenario in my view, the only thing better would be if it also included more interest rate hikes.


Sure, but there's no research to be done without money for compute and salaries for researchers, which is the entire reason the for-profit company was spun out underneath the non-profit — they needed money. And who would give OpenAI money right now, given that the board ousted the popular CEO in a coup without consulting or even notifying investors?


Sam Altman was an integral part of Y combinator, who runs this site.

https://en.m.wikipedia.org/wiki/Sam_Altman


News about the ousting of Altman was on the front page of the BBC.


I bet it was multifaceted. By firing Sam this way they nuked their ability to raise funds because anyone investing in the "for profit" subsidiary would have to do so with the understanding that the non-profit could undermine them at a whim.

Also, all the employees are being paid with PPUs which is a share in future profits, and now they find out that actually, the company doesn't care about making profit!

A lot of top talent with internal know-how will be poached left and right. Many probably going to Sam's clone that he will raise billions for with a single call.


> they nuked their ability to raise funds

I think this well is deeper than you're giving it credit for.


Sure but it would be at a much, much lower valuation.


>Also, all the employees are being paid with PPUs which is a share in future profits, and now they find out that actually, the company doesn't care about making profit!

Maybe. But on their investing page it literally says to consider an OpenAI investment as a "donation" as it is very high risk and will likely not pay off. Everyone knew this going into it.


1 - not running a move like this by the company that invested a reported $10 billion;

2 - clearly not having spent even 10 seconds thinking about the (obvious) reaction of employees on learning the CEO of what seems like a generational company was fired out of the blue. Or the reaction to the (high likelihood) of a cofounder following him out the door.

3 - And they didn't even carefully think through the reaction to the press release which hinted at some real wrongdoing by Altman.

3a - anyone want to bet if they even workshopped the press release with attorneys or just straight yolo'd it? No chance a thing like this could end up in court...

They've def got the A team running things... my god.


> They've def got the A team running things... my god.

Yeah prompting ChatGPT 3.5 would have yielded a better plan than what they did.


Agreed. Somewhere in Seattle, Satya said "Now Witness the Firepower of this fully Armed and Operational Army of Lawyers."

If there ever was a time for Microsoft to leverage LCA, it is now. There's far too much on the line for them to lose the goose that has laid the golden egg.


lol yeah come between Microsoft, their money, and an opportunity to knee cap google. What could go wrong?

Maybe the board is too young to realize who they sold their souls to. Heh I think they’re quickly finding out.


Sorry but that is ridiculous. The wording of the PR blurb is not what makes gears move in a giant like Microsoft.

I agree the board did botch this up. But this, in my view, is just confirmation that they are amateurs at corporate political games, that is all.

But this also means that Sam Altman’s “vision” and Microsoft’s bottom line are fully aligned, and that is not a reassuring thought. Microsoft, one hears (see “5 foot pole”), even puts ads in their freaking OS.

This board should man up, and lawyer up.


A PR blurb? What? I mean Satya himself, behind the scenes.


“I think they could have won in the court of public opinion if their press release said they loved Sam but felt like his skills and ambitions diverged from their mission. But instead, they tried to skewer him, and it backfired completely.”

^^

I don’t think the wording of the “press release” is an issue.

This is a split over an actual matter worth differing about: a genuine fork in the road in terms of the pace and development of AI products, and a CEO who apparently did not keep the board informed as he pursued a direction they feel is contrary to the mission statement of this non-profit.

The board could have done this in the most gracious of manners, but it would not have made a bit of difference.

On one side we have the hyper-rich investor “grow grow grow” crowd and their attendant cult-of-personality wunderkind and his or her project, and on the other side a bunch of geeky idealists who want to be thoughtful in the development of what is undeniably a world-changing technology for mankind.


You're (willfully, I think?) conflating two things.

However, the way they told the public (anti-Sam blog post) and the way they told Microsoft (one minute before the press release) were both fumbles that separately could have played out differently if the board knew what they were doing.


You mean that Satya Nadella, the CEO of Microsoft, is behind the drive to reinstate Altman as CEO, right? Because if you mean he was behind Altman's ouster, I'll happily take your money; let me know what your terms are. :)


Oh no, 100% mean that he was driving the return. It's well-documented that he found out a minute before we did, and that he was furious.

I'll edit my comment to clarify!


on what basis do you 'trust' the guy who tried to do a crypto eyeball identity scam? Genuinely, seriously curious.


A crypto wallet tied to your identity solves one of the biggest problems in the post-AI world: human identity.

At least it will stop those godawful “are you human” proof puzzles.


it will definitely not do any of that, because (a) a crypto wallet has nothing to do with your identity, (b) nobody except the gullible will put their permanent biometrics information in the hands of a private company on purpose, (c) especially not if that private company is led by someone who repeatedly, demonstrably plays fast and loose with laws and regulations, especially around those having to do with privacy and ownership. It's an even wilder, less justified play than your other average shitcoins, which at least have some kind of memetic value.


A crypto wallet can easily be tied to a hash of your real-world identity, which can then be used to sign into a website or sign a transaction verifying your identity. This is already being done.
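A minimal sketch of what that sign-in flow could look like (assuming an Ed25519 keypair stands in for the wallet key; the challenge format and identity fields are illustrative, not any particular wallet's or World ID's actual API; requires the `cryptography` package):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The "wallet" keypair; in a real wallet the private key never leaves the device.
    wallet_key = Ed25519PrivateKey.generate()
    wallet_pub = wallet_key.public_key()

    # Hash of some real-world identity attestation; only the hash is registered,
    # not the underlying document (fields here are made up for illustration).
    identity_hash = hashlib.sha256(b"passport:X1234567|dob:1990-01-01").hexdigest()

    # The site issues a random challenge; the user signs challenge + identity hash.
    challenge = b"login-nonce-8f3a9c"
    message = challenge + identity_hash.encode()
    signature = wallet_key.sign(message)

    # The site verifies against the public key registered with that identity hash;
    # verify() raises InvalidSignature if the signature doesn't match.
    wallet_pub.verify(signature, message)
    print("signed in: wallet holder controls the key bound to this identity hash")

The open question, of course, is how the identity hash gets bound to the key in the first place without trusting whoever runs the enrollment.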


How does any of this prevent a computer from using that same wallet/hash to sign in?


How can a computer acquire a human retina and, say, driver’s license for generating the identity hash?


So, selling your life away for relief from reCAPTCHAs. I think they had to pay starving sub-Saharan Africans more than that to get them to sign up.


I genuinely believe Worldcoin/World ID is terrible for optics and is not something Sam should have put his name on.

That being said, here's my steelman argument: Sam is scared of the ramifications of AI, especially financially. He's experimenting with a lot of things, such as Basic Income (https://www.ycombinator.com/blog/basic-income), rethinking capitalism (https://moores.samaltman.com/) and Worldcoin.

He's also likely worried about what happens if you can't tell who is human and who isn't. We will certainly need a system at some point for verifying humanity.

Worldcoin doesn't store iris information; it just stores a hash for verification. It's an attempt to make sure everyone gets one, and to keep things fair and more evenly distributed.

(Will it work? I don't think so. But to call it an eyeball identity scam and dismiss Sam out of hand is wrong)
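To make the "stores only a hash" point above concrete, here's a tiny sketch (pure illustration; the iris-code format and names are made up, not Worldcoin's actual scheme):

    import hashlib

    # Hypothetical iris code produced by the scanner; the raw code is discarded after hashing.
    iris_code = b"example-iris-code-bytes"
    stored_hash = hashlib.sha256(iris_code).hexdigest()  # only this is kept

    # Later, to answer "has this person already enrolled?", hash the fresh scan and compare.
    def already_enrolled(fresh_scan: bytes, known_hashes: set) -> bool:
        return hashlib.sha256(fresh_scan).hexdigest() in known_hashes

    print(already_enrolled(iris_code, {stored_hash}))  # True

In practice biometric scans are noisy and not bit-identical across readings, so a plain hash comparison like this is an oversimplification; a real system would need some canonical encoding or fuzzy matching before hashing.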


Sam Altman is 'rethinking capitalism' in the same way a jackal rethinks and disrupts sheep flocks. Are we thinking about the same guy? I'm thinking of this one: https://www.youtube.com/watch?v=KhhId_WG7RA


I don't get your point. He's a capitalist, no doubt, but he also knows the rules will change rapidly if we ever achieve AGI.


the rules would definitely change. Would you want a popped collar fail-upwards guy who creates a crypto scam to be part of the rule making structure, or would you prefer that not to be the case?


Satya drove the removal of Sam, or drove the board to get him back?

From Greg's tweet, it seems like the chaos was largely driven by Ilya, who has also been very outspoken against open source and sharing research, which makes me think his motivations are more aligned with those of Microsoft/Satya. I still can't tell if Sam got ousted because he was getting in the way of a Microsoft takeover, or if Sam was trying to set the stage for a Microsoft takeover. It's all very confusing.


The latter. Microsoft didn't know about the firing until literally a minute before we did, and despite a calm response externally, there's reports Satya is furious.

Source: https://arstechnica.com/information-technology/2023/11/repor...


Also, there's no real evidence of Microsoft being philosophically opposed to releasing model weights. That's entirely come from the AI safety people who want models with reactively updated alignment controls. If anything having model weights would mean being able to walk away from OpenAI and keep the thing that makes them valuable.


I hope he says he will return, but only in return for a massive stock grant, to prevent this problem from returning.


Why do you believe a stock grant would be better? And yes, the board messed up here, but do you not think oversight is important?


I agree, the way they did this just shows incompetence and recklessness.

Even if they are making the right call, you can't really trust them after ruining the reputation and trust of the company like this.


I hope that group starts a new version of OpenAI, using the credibility and popularity gained to achieve the original vision of safe, free, and open AGI for the betterment of humanity.


> Free AGI

Who pays for the R&D?


Which part is “better for humanity”?


I don't think it has anything to do with the press release. If it's pressure from Microsoft they want to protect their investment.


> He'll make a lot more money if he doesn't

He supposedly didn't care about the money. He didn't take equity.


Microsoft is in a difficult position here.

With Altman gone and the direction of the board being to limit commercial growth, their investment is at risk, and their competitive edge will evaporate, especially if businesses switch to other LLMs as they surely will over time. Altman will also become a competitor.

If instead they are able to pull off a complete transformation of the nonprofit and oust Ilya, they will also lose a core technical leader and risk their investment while being left with the odd dynamic of a parent nonprofit.

Perhaps they could orchestrate some kind of purchase of the remaining portion of the subsidiary. Give Altman the CEO title and move forward while allowing the nonprofit to continue their operations with new funding. This doesn’t solve the Ilya problem but it would be cleaner to spin it off.


Bloomberg now reporting the board "balking" at resigning. I suspect they never intended to resign. They fully expected this firestorm.

https://www.bloomberg.com/news/articles/2023-11-18/openai-bo...


They're not balking; it's just red tape, and they will resign.


This is one of those things that I'll ignore. Just tell me the outcome when it's over. The older I get, the more I can't stomach this stuff. It applies to pretty much all news recently.


You might be on the wrong website then


I mean, I don't see how anyone can expect him to come back after the public bridge burning and condemnation. I think it's reasonable to think that, if anyone were in his shoes, they, too, would feel too hurt/exasperated to go back to the people who caused all of... this to them.


I assume he would condition his return with the removal of the board.


Well he seems pretty sold on the AGI idea and starting from scratch at a new place without the same data or same compute is quite the challenge


He already has a fantastic proof of concept. Sure, it will take a little longer, but he can easily raise the billions in funding and find the people to build it.


So what exactly is he needed for if other people build it?


Because it will cost billions to build and run and the builders have no clue how to get the money, and once they have something built, how to keep the money flowing in.


Altman isn't the only one who knows how to raise money.


Like how Steve Jobs invented computers and the iPhone. This entire rotten American tech culture of effective altruists and Randian freaks, it's one big cult. That's why it looks so freakish from the outside; it's end-stage hyper-capitalism.


Not when you have as many billions as he will have


Back in the days it used to take a decade to get back a legendary CEO.

On a more serious note though, I hope this stirs some discussion on remembering why there's Open in the name of the OpenAI.


Whether or not you agree about the board's decisions, you cannot deny that this has been some excellent drama to gossip over the past 24 hours, like a soap opera but with real consequences that might only be apparent years from now.


Between this and Starship launch, it was my Saturday well spent.


[flagged]


Yikes—you've been breaking the site guidelines extremely badly lately:

https://news.ycombinator.com/item?id=38326653

https://news.ycombinator.com/item?id=38326424

https://news.ycombinator.com/item?id=38326327

We have to ban such accounts. I don't want to ban you because you've also posted good things, but we really need you to use HN as intended from now on. If you'd please review https://news.ycombinator.com/newsguidelines.html and do that, we'd appreciate it.


I hope you wake up with a brighter outlook of the world tomorrow.


I don't trust Sam Altman since he reacted this way on people worried about privacy: https://youtu.be/4HFyXYvMwFc?t=3201


Privacy is less important than AGI and any privacy you think you have is an illusion, only maintained while it's profitable for you to think you have privacy. Accept the reality and grow the hell up


Do you think a person who you know doesn't care about privacy is going to care about security (AGI)?


I am not sure it really makes sense for the investors to continue with OpenAI. It seems like this incident proved that the constitution of the company wasn't really compatible with a profit focus. It was a split from the beginning, and obviously a questionable arrangement. It just came to a head to make it obvious.

The business and investment people want to make money. Many of the researchers want to take their time and build better and safer models and don't care about money in the short term at all. They are two different goals.

It's easy for business and investment people to say that they are concerned with safety and research, and I believe them to a certain degree. But they have $10 billion reasons to focus on the actual business instead of research and safety.


Maybe. Or maybe they can profit differently going forward. Sam is not the reason OpenAI is gangbusters, its tech is. What I’m not hearing in any of this reporting is how Sam Altman is somehow the secret sauce.

And it’s because he isn’t. This is “rules for thee but not for me”. He was a bad fit, 2/3 of the board ousted him, and investors are mad because they didn’t feel included.

You know, like how they include employees in layoff decisions and don't blindside them.

Sam Altman has spoken about “firing fast” when someone is a bad fit. He got fired fast, because he was a bad fit. That’s the obvious conclusion.


> And it’s because he isn’t. This is “rules for thee but not for me”. He was a bad fit, 2/3 of the board ousted him, and investors are mad because they didn’t feel included.

The fact that they're openly considering bringing him back should tell you that he's not just some random person whose job anyone can do. He's extremely well connected and was the face of the company - the face of deals that the company made. And you have to consider whether internally the employees are supporting this - if I were at OpenAI I would be pissed that the board decided to fuck around when we were doing so well.


If I were at OpenAI I'd side with tech leadership and research over business leadership and an even better compensation package, but that is an irrational bias on my part. It seems like the non-profit is getting persuaded by partners not to mess with the revenue stream. Perhaps Ilya would be a better fit at Keen. But Carmack is all about moving fast and breaking things too, with no concern for ethics or safety, just the science. So maybe not there either.


Maybe

Then again, maybe he has been making life less than desirable for the rank and file. Perhaps, even, they felt he was a bad fit for the company too. I don’t know, because I don’t work there.

If this is the case, good time to start hiring away engineers to another firm.

He may be the face, but faces change. Sam Altman isn’t the only person capable of taking the reins. There is nothing about him that is more “magic” in this case, because the tech has always been their selling point. I think any competent CEO could sell the hell out of OpenAI right now.

As far as bringing him back: I don’t know the validity or veracity of those discussions. That news hit a little too fast for me to believe it was fully fleshed out. Not saying it’s untrue, but “some of the board” talking isn’t the same thing as all of the board, either.


I don't know about OpenAI, but I suspect that Altman was the one making the big deals, personally. Companies (like Microsoft) are not going to like the idea that:

a) A company they've partnered so heavily with is changing things up

b) That the change-up is to their point-person

It's not about whether another CEO could steer the ship, it's about the previous context and relationships that, regardless of skill, are going to have to be rebuilt carefully when you just rip out the point-person.

> Then again, maybe he has been making life less than desirable for the rank and file. Perhaps, even, they felt he was a bad fit for the company too. I don’t know, because I don’t work there.

People have already resigned over this...


Altman supporters have resigned. I have yet to see reports about rank and file people leaving en masse.

It could happen still, but it’s not obvious that it will.

As far as relationships go, they can build those. I doubt anyone who has access to OpenAI tech wants to give that up, so there is enough leverage on that to smooth things out


You're really undervaluing the fact that MS had a pre-existing relationship with Altman through these deals.


I think everyone is overvaluing what Altman brings to the table in terms of staying power. I don’t think MS is willing to simply throw in the towel on OpenAI because Altman isn’t there there anymore.

They’d be fools to do that if there is a path forward here. Short of them announcing on Monday that they are no longer selling their offerings, I don’t see how there won’t be a path.

Business is business, as all the VCs love to say, there is no room for emotion in this right?


> I don’t think MS is willing to simply throw in the towel on OpenAI because Altman isn’t there there anymore.

Microsoft can exert massive pressure over OpenAI and it seems hilarious to think that OpenAI is the one in that relationship with the power.


That all depends on how much one believes OpenAI's tech is far enough ahead that it’s too painful to replace, and whether things can be established to be mutually beneficial.


If I were at OpenAI I'd be pissed strangers concern-trolled for me, all while my position was obvious given the company I joined: Open AI.


Easy to say, but when you looked at your option prices I bet you'd be second guessing that.

edit: You have edited your post radically to say different things like 5x now; I cannot keep up.


All the more reason to nip it in the bud. The explicit main goal was _not_ to be a gravy train, and it got co-opted for one.


I'm sure that tanking your company to avoid profit will be a great winning strategy, I look forward to your startup journey.


Strawman


You: "The goal of the company is to not make money"

Me: "Good luck with that terrible strategy"

Not a straw man.


Yeah, you can tell by the fact you had to make up the quote wholesale, not even word swaps or anything clever


Straw-ception!


It’s a private company; options are worth nothing until they become otherwise, in most cases. I wouldn’t look at them very much either way, and I hope they’re smart enough to make the same distinction.


I agree that it's the tech and Altman isn't the secret sauce, but if Sutskever shuts down further product offerings, then the investors lose billions.

So I was just saying that from the investor's perspective, the concept was flawed or at least very questionable from the beginning.


He's extraordinarily talented by all accounts, especially at fundraising. But his Apple was Loopt. He's not the secret sauce but a facilitator - more Tim than Steve. Perhaps like Steve was to Pixar.

"The start-up company must either cross or die, but what value is life if to gain it one has to go against one’s best self?" - Moore, Crossing the Chasm, p. 75


his arc doesn't line up with Jobs. Loopt was how he got his foot in the door at YC as a founder.

at YC he made a name for himself, built the rolodex, and learned how to build startups to the point that he turned OpenAI into a rocketship and now has unlimited access to capital and talent to build another one.


If money and talent are all you need, he's gonna win. But I think this doesn't work for research talent; it's more for commercialization. Think PARC vs ethernet/adobe etc. IDK if AI is really at that stage yet.

Let's see how this pans out.


100%. Hear hear. It's very visible right now who knew about OpenAI before ChatGPT. It's somewhat frustrating to find 99.9999% of people had the standard genius-warrior-hero-CEO narrative locked in. I don't think even Sam in his most private of moments would come close to endorsing some of the hyperbole I've seen the past day.


That's not what "fire fast" means. Altman led the company for years; he was not fired fast.


It means, if taken as an axiom, that when someone ain't working out you should fire them fast rather than let them drag down the rest of the group.

Tenure doesn’t matter.


You're saying he was working out great all these years until suddenly he was not working out anymore and was fired fast?


It sounds like that’s the case, given what happened.


> 2/3 the board outed him

I’ve seen more compelling boards at Home Depot


Especially since it seems it's either Sam or Ilya. Now Sam might surely be preferred as CEO, but after the Anthropic split, I think Ilya is even more crucial to the continuity of OpenAI. Many of the researchers joined OpenAI for the "Open" part. Sure, money attracts people, but we saw that it was the ideal that attracted the best set of people, not money, or it would have been Microsoft/Google/Apple who came up with GPT instead of OpenAI. Yes, I know that technically this is Google's research they are using, but the execution, efficiencies, and tuning are what matter more in this case.


1. There is so much momentum around OpenAI that it would be tough to walk away

2. I imagine they could find a better way to set up governance so the nonprofit entity still exists but the for-profit entity has a bit more autonomy


Good points. It might be possible. I feel like they would have to change the structure in a practical way.


On Monday they'll form OpenAI 2.0 S-Corp and move a bunch of ether into it. Investors will love the new structure and support an even higher valuation. Ilya and co will maintain a non-profit thing that probably refocuses on pure AGI. Everybody wins. Why didn't they do it from the start?


There are still plenty of long investors out there. Amazon barely made a profit for many years and Bezos made it very clear that a quick profit wasn’t his focus.

https://www.theverge.com/2013/4/12/4217794/jeff-bezos-letter...


Satya be like, fine, it’s time for adults to take over.


Yeah, though he may have gone, "I did nothing here, and my stock price went down 2% in ONE hour from it."


It’s 11% up over the month. He’s not obsessing over this 2%. It’s more strategic than that if he’s involved.


Then OpenAI is dead and it's MS AI.

The worst outcome.


[flagged]


Enjoy our latest version of MS Office now featuring ClippyAI.


and: "Let's end this 'non-profit' myth."


Sutskever’s coup went about as well as Prigozhin‘s coup.


I think Prigozhin's coup went slightly worse for Prigozhin.


too early to tell, but likely it would be counterproductive to actually assassinate sutskever anytime in the next year


It would be a reasonably long list of suspects; even so, this isn't such a case.


probably not, but i'm not a fly on the wall in those meetings either. things are gonna get pretty weird over the next year


They are plenty weird enough for my taste already. Seriously. What an incredibly turbulent 15 years it has been. So far. And it may well get a lot more turbulent still.


oh yeah. remember what i wrote about this 7 years ago https://dercuano.github.io/notes/wwiii-genesis.html and that was before agi, spacex, the unfolding of brexit, the us huawei sanctions, or the solar energy explosion

not that those are necessarily bad in all ways but they sure do contribute to unpredictability


COVID, Ukraine, financial crisis, climate...


the climate thing is pretty important but at this point manageable

financial crises come and go; the one 15 years ago was interesting but not, as it turned out, a second great depression

ukraine was one of the major topics of the note i linked, though not explicitly mentioned

covid i'm not sure about. you would expect 6.98 million confirmed deaths (and several times that many total deaths) to have a cultural impact like aids or the holocaust, but if it has, i am not sure exactly what that impact is. i don't even see people wearing face masks on the bus when they have a cold, which is sort of the minimum i was hoping for


> the climate thing is pretty important but at this point manageable

For humanity as a whole: yes. For individuals who happen to live in the wrong spot: not so much.

> financial crises come and go; the one 15 years ago was interesting but not, as it turned out, a second great depression

Maybe societies are like people in that sense: that they adapt to certain kinds of illnesses and at some point they are no longer quite as susceptible as they were the first time? After all, capitalism itself was tested for the very first time then and it did survive and what doesn't kill you makes you stronger.

> Ukraine was one of the major topics of the note i linked, though not explicitly mentioned

Ok.

> covid i'm not sure about. you would expect 6.98 million confirmed deaths (and several times that many total deaths) to have a cultural impact like aids or the holocaust, but if it has, i am not sure exactly what that impact is. i don't even see people wearing face masks on the bus when they have a cold, which is sort of the minimum i was hoping for

If there ever was a wake-up call that should have been it.

But it's as if it never happened, as if it has definitively ended and nothing like it can ever happen again. Weirdest experience in my life to date.

Not hoping for a re-match.


yeah, i pretty much agree


Yep. He's done. When you strike at the king, you must kill him.

At a minimum it's going to be awkward in the men's room for a while.


This is at the level of a national security or foreign policy disaster. If you think anyone that was involved will ever work in tech again, pass me whatever you're smoking.


There are tons of controversial people in tech, they'll find a place somewhere.


Imagine what China would pay them to emigrate.


I’m a bit ignorant of this… what’s the national security risk?



Dang.

I owe Mr. Altman an apology. I didn't think a startup board would be so mind-explodingly stupid and figured some toxic skeleton fell out of his closet.

Well, I'm sorry. It's a lesson not to speculate in public.


Everyone has a first time finding out everything is fucked everywhere.


He was ousted for a reason. Why in the world would the board backtrack on this? I think this is rumor mill reporting, trying to get things out too fast.


The only reasons I can think of:

A. Core team members leaving (perhaps more threatening to leave).

B. (maybe more likely) Nadella told Sutskever that he might shut off funding or restrict compute resources if he didn't reverse course, or at least, wasn't able to retain talent (see A).


[flagged]


Can we not with the word "woke" please? Culture wars are really not relevant here.


> He was ousted for a reason.

I've yet to see a good one. And even if: how you do something is often as important or even more important than that you do something. And on the 'how' bit the board just utterly failed. This is the most watched company in the world right now on the tech front, you can't just oust the CEO without a very good plan. If you do that kind of thing on a whim you are not fit to serve on the board of any company, but especially not on the board of this one.


Like mass soulless firings don’t happen?

Why aren't we holding CEOs to the fire when they lay off thousands of people in what is effectively an email? That's somehow okay, but a CEO being ousted suddenly is all hands on deck, bad optics?

The board had a mandate level of votes for the replacement of Sam (2/3 of the board voted yes). That's conviction.


They may have had the votes, but they didn't have the finesse and they may have forgotten who their ultimate paymasters are, regardless of the non-profit. Note that another 10 people or so have just as much power as the board (the key people behind all of the AI work) and three of those had already walked. That's a very powerful signal that they mis-handled this.


3 out of 7, which is roughly a third.

Which is to say, they were likely Altman supporters. Which is fine! They’re free to do as they wish.

However, if that's it (and it does remain to be seen if more happens or not), then 2/3 of folks stand by the decision, which would match with the board votes.


So then the question is: how much weight do these votes actually carry? Because if the votes are by people who in turn don't have much support themselves (both within: employees and other board members; and without: shareholders in the for-profit, donors and founders), then they can fire the CEO but they may not be able to make it stick if it isn't for an extremely good reason.

I haven't seen that reason yet, though I don't rule out that one exists, and even then you'd have to do this in a way that doesn't ruffle the feathers of your ultimate paymasters. Being a board member of a large company is an exercise in diplomacy, not in bull-in-a-china-shop level incompetence.


Whether you side with Sam or Ilya on rollout speed and approach to safety, the breathtaking incompetence of this coup has already incinerated a stunning amount of trust. One could hardly ask for a better demonstration that it matters not only what you do and why, but also how - above all, how you treat the ones who rely on you.


How is this a coup? The board is the CEOs boss. If that’s not true then they should have chosen a different corporate structure


There is a concept of a boardroom coup, which is what this was. The lack of decorum, the unceremonious and abrupt exit at the highest levels including removal of the chairman of the board, and the damning statement implying misconduct are very serious and substantial moves for any board to make, let alone the board of an org the size of OpenAI. They did not consult with investors, did not attempt to negotiate with the CEO or ask for a resignation, nor do anything of the sort that would resemble a reasonable process. Usually you retain some form of legal counsel before making such a substantial move, but given the traditionally lax oversight of non-profits, it doesn't seem like this was the case.

It's only at the abrupt all-hands meeting they called on a Friday night that it became clear that Ilya Sutskever was at the center of it. He had his disagreements, and pushed the board into making such an abrupt move, and then goes on to say something like "oh I agree it wasn't the ideal way to do it". It's very clear this was a power struggle, not malfeasance (per the words of OpenAI's CTO) on Sam Altman's part. At least so far, it boils down to ... Ilya didn't like feeling sidelined, so he took things over. And now it's clear the board that sided with (or rubber stamped?) Ilya just wasn't prepared for the consequences.


The fact that the Chairman of the board was not included in the decision is a pretty big indicator


They're pretty close to having a different corporate structure IMPOSED on them next week.

Spare me the whole "but as a non-profit the board has a responsibility to their mission and charter". Someone has to pay for all those GPUs. If they're going to take a hard line against launching actual products then they can look for donors and see how far they get.


Let's wait for more details on what actually went down.

It's entirely possible Sam was exploring sales or new commercial ventures behind the board's back, or pressuring the business to side-step the oversight and safety mechanisms expected by the overarching non-profit mission. The timing with the dev event is suspect. It sounds like something came out that the board and research organizations were unaware of.

There's no indication that OpenAI wants to terminate existing or future commercial ventures.


The timing makes me think the board inadvertently found something out during or as a result of dev day.


Why would they accept a different corporate structure where they get fired? What do they get out of the deal? Currently, OpenAI is essentially owned by the board as a nonprofit.


Some reporting indicates that there was no malfeasance on Sam’s part. There would be no reason to terminate Sam under these circumstances except as a pure power play.


Why isn’t that enough? Bad fit so he’s gone.

They do it to employees yet CEOs are somehow exempt? Ever heard of fire fast?


Maybe wait for the facts then?


We're not liable to get the facts in an objective and satisfying way here. That makes speculation even less useful, of course. People are liable to pointlessly disagree about what happened here for some time, until (fortunately) people mostly stop thinking about it like with any other gossip.


What, and ruin the entertainment value?

--We the peanut gallery


Sam and Greg were on the board.

This was two board members who were also employed at the company fighting over resources and approach.


But did that start yesterday? Sam knew who his bosses were. Managing up is an important thing unless you think the company is all about you


Again, he was on the board. He didn't have bosses he had peers.


It's a boardroom coup. Two-thirds of the board unilaterally fired the other third.


I'd love to have so much job security that two thirds of my employer's board couldn't fire me. :)


Democracy is a blunt instrument, for disputes you can't resolve by talking. And the smaller the voting pool, the more you can try to reach accommodations rather than the majority imposing on the minority.

The two thirds can undoubtedly do this. But the whole structure is in a bad way if they actually do.


It's more like two of your coworkers deciding to fire you without hearing you out.


The CEO works for the board. He's not an equal.

Two of my managers can absolutely get rid of me without ever hearing me out.


The CEO is part of the board, just like every other board member.


Become an owner? Then you can fire the board.

Sole proprietors have no board at all. Although they have to deal with customers...


People love celebrating CEOs.


How do you define "safety"?

It's becoming a bit of a weasel word in these discussions. I keep hearing it thrown around with nobody specifying how rolling out ChatGPT to more people makes the world "less safe". It's a laugh line at this point.


outside speculators extrapolating from second-hand rumors and Twitter vibes aren't making coherent sense? Shocking.

Perhaps it's as simple as insufficient oversight and moderation of the GPT store. Or perhaps there's too much legal risk in the expanding scope of b2c services, which could then threaten the existence of the research organizations. Who knows?

Maybe we should stop treating this like sports ball or politics


Now seems like the perfect time for him to go raise, bring over the team he wants, and retain ownership and control in a more effective way than a non-profit. Idk why he would go back without BOD capitulating a lot.


How much IP is captured at OpenAI though? Not to mention the hardware. I’m not super familiar with ML in practice but I have to imagine he would still be set back by a significant amount of time


The hardware is in the hands of Microsoft, Google and Amazon. The IP is in the heads of top AI researchers. Whether any of these methods are patentable is an open question since it’s all secret sauce and once the model weights are trained there’s no way to know which methods went into the training.


Because it will take 6 precious months at the least to get back to where OpenAI is today in terms of compute and people.


Sam can sue OpenAI, mire the company in legal obligations over the next year, and drain the talent out of it.


The current deal with MSFT, cut by Sam, is structured in such a way that Microsoft has huge leverage. Exclusive access, exclusive profit. And after the profit limit is reached, OpenAI will still need to be sold to MSFT to survive. This is about the worst possible deal for OpenAI, whose goal is to do things the open source way, and it can't do so due to this deal. If it were not for the MSFT deal, OpenAI could have open sourced and might have resorted to crowdsourcing, which might have helped humanity. Also, quickly reaching profit goals is only good for MSFT. There is no need to actually send money to the OpenAI team, just provide operating expenses with 25% profit and take 75% profit. OpenAI has built great software through years of work and is simply being milked by MSFT when the time comes for taking profit.

And Sam allowed all this under his nose, making sure OpenAI is ripe for an MSFT takeover. This is a back-channel deal for a takeover. What about the early donors who donated with humanity's goals in mind, whose funding made it all possible?

I am not sure Sam made any contribution to the OpenAI software, but he gives the world the impression that he owns it and built the entire thing by himself, without giving any attribution to his fellow founders.



It's insane to me how easily Sam's side can spin the board firing him for violating the company's Charter and then not backtracking at all as "Within 24 hours the board has come crawling back, I- er Sam Altman might deign to return if they grovel hard enough and I'm given complete control."

This is really, really clearly incestuous tech media stuff as part of a pressure campaign. Sam is the darling of tech media and he's clearly instigated this reporting, because they're reporting his thoughts and not the Board's in an article that purports to know what the Board is thinking. The investors who aren't happy (the point of a non-profit is that they are allowed to make investors unhappy in pursuit of the greater mission!) have an obvious incentive to join him in this pressure campaign, and then all he needs for "journalism" is one senior employee who's willing to leave for Sam to say to the Verge that the Board is reconsidering. Boom, massive pressure campaign and the perception of the Board flip-flopping without them doing any such thing. If they had done any such thing and there was proof of that, the Verge could have quoted the thoughts of anyone on the Board, stated it had reviewed communications and verified they were genuine, etc.


> This is really, really clearly incestuous tech media stuff as part of a pressure campaign.

There’s no evidence of that, only your assumptions. Lots of comments from knowledgeable folks outside the media and who couldn’t care less about a “pressure campaign”, even if it did exist, think the board was clueless and got duped into making a huge mistake with the coup.


> > This is really, really clearly incestuous tech media stuff as part of a pressure campaign.

> There’s no evidence of that

The leaks themselves, whether or not based in fact, are evidence of that. The only reason for someone in a position to be taken credibly to take the information contained in either this Verge article or the Bloomberg article (which has a similarly sourced but slightly different narrative) to the media, whether or not it is true, is to use public pressure to attempt to shape the direction of events.

EDIT: To be clear, it's evidence of the "pressure campaign" part; to the extent that the "incestuous tech media" part has any substantive meaning, I'm not sure it's evidence of that.


Agreed the article is more about probing the board’s weakness.

But the board seems to have a weak hand. It can decide to disappoint the for-profit investors. But it doesn't own Sam, or the vast majority of the workers, and maybe not much of the know-how. And they can walk if the board disappoints them.

The board’s altruism might be great, but it lacks the legal tools to do what it wants, against organized labor backed by unlimited capital.


The board does own ChatGPT though.


That’s not only trivial to replace with enough funding for training, but ChatGPT is barely a 0.1 release. Everything after is where the big money is.


> trivial to replace

And yet no one has been able to do that since gpt4 was released.


That's only because the key players have no reason to compete.

They don't want to run a developer/enterprise ChatGPT platform.

Google cares about Search, Apple about Siri, Meta about VR/Ads. But those three are investing heavily in their own LLMs, which at some point may surpass OpenAI's.


It's not trivial given current supply bottlenecks, not to mention research expertise.


I don't feel like compute for pretraining the model was a huge constraint?

The supply bottlenecks have been around commercializing the ChatGPT product at scale.

But pretraining the underlying model I don't think was on the same order of magnitude, right?


The control of the supply is with Microsoft, who are likely falling on Sam's side here.


First mover advantage and Microsoft integration is nothing to sneeze at.


For sure.

But if Altman has a new venture that takes first mover advantage on a whole different playing field MS could easily get left in the dust.


I'd like to hear more about the board's argument before deciding that this was "virtuous board vs greedy capitalist". The motivations of both sides are still unclear.


Seems unusual for a nonprofit not to have a written investigative report or performance review conducted by a law firm or auditor. Similar to what happened with Stanford's ousted president but more expedited if matters are more pressing.


> the point of a non-profit is that they are allowed to make investors unhappy in pursuit of the greater mission

A non-profit isn’t supposed to have investors. This structure should never have been allowed in the first place (nor IKEA.)


Which is why investors that signed on got a giant warning saying that the Board could choose to be unprofitable if they wanted and that the mission came first, so they should view their investment as a donation.


What we're seeing is that (predictably) the investors have decided that they don't care that they agreed to it and they don't want to lose out on their investment.

What remains to be seen is just how closely the board holds the charter to their hearts and whether the governance structure that was built is strong enough to withstand this.


Agreeing that the board has a right to do something does NOT waive one’s right to complain that it is a mistake to actually do it.

Never been a fan of the “you can’t complain about any bad outcome you agreed could happen” argument.


If this had been presented as a bad outcome or even an improbable one I would agree. And of course you can just complain and do whatever you want in general within the boundaries of the law, free speech, etc.

But if you sign an agreement saying you understand you should treat your investments more like donations and that everything is secondary to the goals of the non-profit and then are upset that your goals were not placed in higher priority than the charter of the non-profit, I'm going to reserve the right to think you're a hypocrite.


“Hypocrite” does not mean what you think it does.

Neither Microsoft nor anyone else said they deeply believed in and prioritized OpenAI's charter over their own interests. They might have agreed to it, and they must abide by agreements, but this is not a case of claiming one set of principles while acting contrary to them.


MSFT likely has the option to claw back some of their money - they could also cease providing OpenAI GPUs. At the extreme, they could do so tomorrow based on their perception of the contract dispute, although such action would risk reputational damage to MSFT.

At a minimum, taking your largest supplier and customer for a ride is probably a bad idea.


It wouldn't be a good idea in a regular business, for sure.

But non-profits aren't a regular business and their ultimate obligation is to their charter. Depending on just what the level of misalignment was here, it's possible that the company becoming nonviable due to terminating Altman is serving the charter more closely than keeping him on board.

No one posting here has enough detail to really understand what is going on, but we do know the structure of OpenAI and the operating agreement for the for-profit LLC make it a mistake to view the company from the lens as we would a regular for-profit company.


We can view it from any lens.

Under any lens, if Microsoft pulls their GPUs and funding, then OpenAI is through.

No, pissing Microsoft off in this situation is not a good idea. Because Microsoft can shut the whole organization down.


The damage would be far more than reputational - Microsoft are starting to roll out "AI with everything" on most of their money-making products. That's all provided by OpenAI; they can't just pause for 6+ months and re-implement it with a new provider.


There is no world in which MSFT ponied up 10 Billion without an escrow of the model. Depending on the partnership terms and next steps of OpenAI - this could be a significant breach.

Given the complex org structure - I wouldn't be surprised if the non-profit (or at least its board) wasn't fully aware of the contract terms/implications.


most of the money likely wasn't paid, and is based on milestones and future returns. I imagine MSFT has some solid licensing bought with their initial payments.


A giant warning doesn't make it better. Non-profit or for-profit: pick one and stick to it.


The distinction between the two is whether an organization returns funds to investors. OpenAI, Inc. does not. OpenAI Global LLC does, one of those investors being OpenAI, Inc. (well, indirectly, there's another holding company in between.)


These details are totally irrelevant to the argument and are part of the chart that’s floating around everywhere.

Of course it’s legal, the comment was that it shouldn’t be.


Donors like that threaten to pull their money when you don’t behave. It’s why they donate. There is no such thing as a donation.


> A non-profit isn’t supposed to have investors.

The non-profit doesn't have investors. OpenAI Global, LLC isn't the non-profit; it's a for-profit over which the non-profit has complete governance control.


I think in IKEA's case, they rapidly restructured to avoid https://en.wikipedia.org/wiki/Employee_funds which was a rather short-lived political experiment.


Whether it will ultimately have been allowed is yet to be seen.


There's nothing insane at all. The board has the freedom to make their choices, but they must own the consequences of those choices. This appears to be a choice they made which has had obvious consequences they miscalculated. Having to walk it back to some degree would prove their inability to effectively plan and govern.


It might, if that had actually happened. There is not good evidence that this has actually happened, and it's just a fact that Sam + investors are going on a massive pressure campaign to try to regain control that they are not legally entitled to, including a media blitz.

Moreover, there is an impartiality issue here in the tech press. A lot of the tech press disagree with the OpenAI Charter and think that Sam's vision of OpenAI as basically Google but providing consumer AI products is superior to the Charter, which they view in incredibly derogatory terms ("people who think Terminator is real"). That's fine, people can disagree on these important issues!

But I think as a journalist it's not engaging fairly with the topic to be on Sam's political side here and not even attempt to fairly describe the cause of the dispute, which is the non-profit Board accusing Sam Altman of violating the OpenAI charter which they are legally obligated to uphold. This is particularly important because if you actually read the OpenAI Charter, it's really clear to see why they've made that decision! The Charter clearly bans prioritising commercialisation and profit seeking, and demands the central focus be building an AGI, and I don't think a reasonable observer can look at OpenAI Dev Day and say it's not reasonable to view that as proof that OpenAI is no longer following its charter.

Basically, if you disagree with the idea of the non-profit and its Charter, think the whole thing is science-fiction bunk and the people who believe in it are idiots, I think you should argue that instead of framing all of this as "It's a coup" without even disclosing that you don't support the non-profit Charter in the first place.


> Basically, if you disagree with the idea of the non-profit and its Charter, think the whole thing is science-fiction bunk and the people who believe in it are idiots, I think you should argue that instead of framing all of this as "It's a coup" without even disclosing that you don't support the non-profit Charter in the first place.

I think you might have better luck grasping the situation if you put a little bit more effort into understanding it rather than jumping to put words in the mouths of others. Nobody said whether they support the non-profit charter or not in the first place, and as far as the phenomena of what's happening right now, the non-profit charter has nothing to do with it.

550 of 700 OpenAI employees have just told the board to resign. Altman is going to MSFT and taking his org with him. Regardless of what the board says, who do you think really has the power here -- the person who has and already had the full support of the org he built around him, or a frankly amateurish board that is completely unequipped for executing on a highly public, high stakes governance task presented in front of it?

Unfortunately, not only can you not charter public opinion, but those who try often see it backfire, making clear their air of moral superiority rather than leaning on their earned mandate to govern the rank and file they are supposed to represent. The board, and it seems you, will simply be learning that lesson the hard way.


> incestuous tech media stuff as part of a pressure campaign

The same media that promoted the schizoid idea that AGI is around the corner and blew AI out of proportion.

The same media that would not hesitate to do character assassinations of people opposing altman.

The media is corrupt and incompetent. To be replaced soon by the monster they created.


"To be replaced soon by the monster they created."

Niiice.


> violating the company's Charter

What did he violate specifically?


"Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

The Board has the power to determine whether Sam is fulfilling his fiduciary duty and whether his conflicts of interest (WorldCoin, Humane AI, etc) compromise broad benefit.


And why didn't the board cite these violations?


I wasn't clear enough that I was talking about the real reasons for the dismissal; I think the lack of candour was probably a real event, but it was functionally just an excuse. There have been heaps of reporting from inside OpenAI that this was a dispute over AI safety, the values of the Charter, etc. Here's what you can find in the press release itself:

"OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period."

"OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter."


The board didn't claim there were violations, it claimed he wasn't fully candid with them and that they had lost confidence in his leadership. "Violations of the charter" was something invented by the upthread commenter.


There is a bunch of independent reporting citing sources inside OpenAI that the central dispute is over the principles of the Charter. While no one is clear on the lack-of-candour thing, my view is that it's just an excuse: there was probably a real incident (or incidents), but it wouldn't have been fireable if not for the core dispute about the Charter. They did explicitly re-affirm the Charter in their announcement of Sam's dismissal:

"OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter."


While this is an insightful perspective, it doesn't change the fact that the board is revealed as incompetent. Either they made a mistake in firing him and are now walking it back (while yielding so much leverage that Altman can condition his return on the resignation of everyone who voted to fire him), or they have so little credibility that some standard-practice media collusion has created sufficient pressure to change their optimal strategy, i.e. force their hand.

If they do nothing, then public perception harms their ability to raise further capital, and employees leave for Altman's new company. If they cave to the pressure (despite that being objectively the financially right decision), they lose their board seats and Sam comes back, proving they overplayed their hand when they fired him. They're basically in a lose/lose situation, even if this article is sourced from entirely biased and fabricated information. And that's exactly what reveals them as incompetent.

Their mistake was making a move while considering only the technicalities of their voting power, and ignoring the credibility they had to make it. Machiavelli is rolling in his grave...


...or they don't respond to these pressure tactics, continue talking to their employees to ameliorate legitimate concerns, and accept that some of Sam's hires will go to join him. OpenAI's core (remembering that OpenAI's Charter doesn't demand it make cool consumer/developer AI products; it demands OpenAI build AGI safely) is not the ChatGPT product team or admin, it is the research team that Ilya leads (or led until a month ago, when Sam tried to sideline him). The company isn't going to leave to follow Sam, or at least the scientists and engineers aren't. They've lost some technical leads that Sam hired and will probably lose more, but it's worth it to make sure that OpenAI is actually following its Charter.


> continue talking to their employees to ameliorate legitimate concerns, and accept that some of Sam's hires will go to join him

This is wishful thinking. If an employee is inclined to follow the innovation, it's clear where they'll go.

But otherwise, the point you raise is a good one: this is about the charter of the board. Many of us are presuming a financial incentive, but the structure of the company means they might actually be incentivized to stop the continued development of the technology if they think it poses a risk to humanity. Now, I personally find this to be hogwash, but it is a legitimate argument for why the board might actually be right in acting apparently irrationally.


Very nice skepticism.


Yeah, it's good to have such voices around.


I’m skeptical they provide any value at all.


I honestly don't know what you're talking about. This article and The Verge one just describe the likely reality of the situation.

Sam Altman was fired. 4 other key people quit and it seems more will follow and join Sam's new venture. This outcome would be a disaster for Microsoft, for other OpenAI investors and for OpenAI. So the board is, per multiple sources, talking with Sam Altman to return. The board declined to comment and is free to clarify any inaccuracies.

There's no need for spin; the board has miscalculated and got itself in a bad spot.


Welcome to The Verge. This is the same website that pasted “reporting by Alex Heath” on the original article - which was literally just regurgitating the board’s press release and pasting in some tweets. Reporting, huh?

Good job if you can get it.


I don't mind the Verge, I don't agree with everything they've written but I generally like their coverage. My criticism is specific to this situation and the cult of personality in tech media around Sam Altman.


Sam is not just the darling of tech media. He literally

1. Met with every major head of state except for Xi and Putin. He is the face of AI, not just for OpenAI, but for the entire world. The entire AI industry would hate for this to happen.

2. Led a company from a 2 billion valuation to nearly 80 billion in a year.

There is no precedent in startup history to get rid of a CEO at this stage.


This is very US-centric thinking. For me he is just a CEO and completely exchangeable. The only thing I really remember is that the last thing he did was crawl in front of the EU and ask them not to regulate his AI business too much.

He did none of the research that fuels OpenAI's ambitions and future prospects; that's mostly done by people like Sutskever, Radford and many more brilliant scientists.


"Completely exchangeable"? Obviously people are not fungible; replacing one person with another will never provide the exact same results. The question in each case then is how different the results would be, and whether the replacement would be better or worse. For a very simple job, perhaps pressing a single button, the results may only be subtly different. But what happens when it's a complex job with no right and wrong answers, where work input is affected by output (like a chaotic system), spanning multiple areas of influence? The work output of the individual changes drastically, and just like in a chaotic system, the results to the organization vary increasingly over time. Nobody is fungible, but of all people, decision makers like politicians, CEOs etc. are the butterfly wings flapping in New York that cause a cyclone in Japan. The only real way to evaluate whether their impact is likely to be positive is looking at previous results. Due to the rarity of top performers and their importance to systems, they have negotiation power.

Dependable leaders really do have that much value to their organizations. This is similar to why in critical areas like medicine, old-and-dependable things are valued over new and shiny. The older things have lower risk and a strong track record. That added dependability is more important than being the newer "better" but riskier option. Back to this topic: how many CEOs with track records managing $80 billion AI organizations are ready to replace Altman? Because OpenAI is well ahead in the field, they don't need big risky changes, they need to reliably stay the course.


His star power is much higher than you give him credit for. It's like saying you can replace Brad Pitt with an average actor because the writers are the ones responsible for the script.

You lose other actors who only joined to work with Brad for one. You lose part of your audience and you lose distribution and press opportunities.

If it wasn't for Sam pushing for the version that became GPT-3.5 and the popularity that followed, and most recently the GPT-4 push, we would still be waiting on the brilliant people. Google was way ahead in this space but failed to release anything.

As a developer I understand belittling the business side as providing little value but as someone who has tried to get the masses to adopt my software my respect for their ability to solve non-technical problems has grown.


> completely exchangeable

Apparently not


> This is very US-centric thinking

I am not American and have no idea what you are talking about.

Sam Altman channeled what was great research into a dominant $100b business in record time.

That is not trivial and not every CEO can do that.


> For me he is just a CEO and completely exchangeable.

Are you part of OpenAI governance, or any company's governance structure? If not, does it really matter whether someone is exchangeable or not for you?


He dined with Xi just a few days ago. https://youtu.be/lKNwoEm-R3E


the old saying “first time for everything” holds here, I think.

It was a mandate. 2/3 of the board voted in favor of relieving Sam Altman of his obligation to the company. The question now is why, and how that plays out. It is clearly what the board wanted.


This is a dumb take, most of the general population don't even know who Sam Altman is.


You can argue your point without calling the take dumb.


I'm okay with you calling it a dumb take. Try "unsophisticated" to pacify the sensitive, or "mentally underdeveloped," if it gets you grief.


You are 100% right. Even those outside the tech community who know of his meetings with heads of state probably just know that it was the ChatGPT company doing it, if even that. They wouldn't know his name.

Though I think it’s best to refrain from calling something a “dumb take”.


You seem to massively overestimate “the whole world” on their familiarity with anything about tech. Even leaving aside the billions who aren’t reading the news - I vividly remember the time a few years ago in Australia when a casual friend, recently out of high school, was shocked to learn that Microsoft makes the Xbox.


I think I agree with your facts as stated, but the absurd part of it is that a single guy was selected as "the face of AI". This happens all the time (and it's wrong): focus on the most prominent people becomes a feedback loop that takes it to ridiculous proportions.


I guess that's because most "startups" aren't controlled by non-profits.


He makes it rain H100s.


> He is the face of AI, not just for OpenAI, but for the entire world

Stop making up nonsense please.


If you look at the quote tweets on Sam's latest tweet[1] that contain just a single heart and no words, those are all OpenAI employees voting for Sam's return. It's quite a sight to see.

[1] https://twitter.com/sama/status/1726099792600903681


Also Mira replied with a heart.

https://x.com/miramurati/status/1726126391626985793

Also also she left her bio as “CTO @OpenAI”.


So she hadn't even agreed with the plan of becoming interim CEO? Either that or she changed her mind...


I think Microsoft is behind all of this. The “kumbaya let’s work together for humanity” Microsoft has been swapped out for the old Microsoft. Too much is at stake for them.


I read Nadella has threatened to turn off OpenAI's computers and would litigate the hell out of them to prevent the computers from being turned back on. Which is why the board is suddenly open to negotiation with Altman.

Yeah, that's the Microsoft of old. Don't trust 'em.

Bad news for OpenAI, and any hope that this stuff won't be used for evil.


>I read Nadella has threatened to turn off OpenAI's computers and would litigate the hell out of them to prevent the computers from being turned back on.

What a way to destroy confidence in Azure, or cloud platforms in general.


Not really.

The deal was that MS was going to give them billions in exchange for 49% of the for-profit entity. They were also reportedly waiving the Azure bill, since their interests are aligned.

MS is saying that if we give you 10 billion dollars and don't charge you for Azure, then there are some obvious strings attached.

OpenAI is presumably free to do what the rest of the players in this space are doing and pay for their Azure resources if they don’t want to play nice with their business partners.


is that codified in the contract between them though? Microsoft, through the stock price, has much more to lose than OpenAI. They can apply pressure but don't have full control of the outcome here.


Why? Microsoft pays OpenAI's Azure bills. Do you not see how that's different from any other situation?


Presumably, under a contractual relationship tied to their licensing agreement with OpenAI. So, this kind of threat undermines confidence in Microsoft's contractual agreements.


Haha I see you’ve never had a contract with Microsoft. They are hyper aggressive and full of explosive gotchas.


To be fair, you wouldn't know they're hyper aggressive until they actually move to enforce part of the contract. Most of their "partners" probably never need to meet that side of their legal team.

If I measured the "aggressiveness" of every contract based on the potential litigation of all its clauses, I'd never sign anything.


They are pretty on the nose with explosive triggers. For example, all Windows licenses are free with a 10-year Office deal; however, if the deal is withdrawn, all Windows licenses are owed immediately upon cancellation. This is just the basic explosive stuff; it gets worse from there.


Sure, but any clause of the contract that requires followup action to collect payment is hardly ever going to be enforced. It's only when you're partnering with them at the scale of OpenAI that you need to worry about details like that.

And in regards to OpenAI, note that (according to TFA), Microsoft has barely distributed any of their committed "$10 billion" of investment. So they have real leverage when threatening to deploy their team of lawyers to quibble over the partnership contract. And I don't think that "undermines confidence" in Microsoft's contractual agreements, given that there are only two or three other companies that have ever partnered with Microsoft at this scale (Apple and Google come to mind).


I'm agreeing with you.


Care to sign a partnership agreement? I'll need you to personally indemnify against any defection to GP's side, of course - any subsequent comment in violation of these terms could be subject to arbitration and voiding of the partnership agreement.


Nah, it is MSFT's contingency plan in all this. You don't invest 10B and get blindsided. It would be hilarious if it's forced to threaten the board into complying this way.

But it will work.


The "Microsoft is now a good guy" thing is just a PR scam. I got asked by a Microsoft employee to add support for Azure in my OSS work, in my free time: https://github.com/mickael-kerjean/filestash/issues/180

He never made the PR and was just there to ask me to implement the thing for his own benefit...


You know MS has a quarter of a million employees, right?


> litigate the hell out of them ..

I thought one of the reasons people incorporate companies in the US is that there is a working judicial system that can ensure the upholding of contracts. Sure, the big money can apply some pressure to the dispossessed, but if you have a few million in cash (and OpenAI surely has) you should be able to force them to uphold their contracts.

Also imagine the bad PR from Microsoft if they decide to not honour their contracts and stop OpenAI from using their computer power for something that OpenAI leadership can easily spin as retaliation.

Sure, this latest move from the OpenAI board will wreck the momentum that OpenAI had and its ability to continue its partnership with MS, but one of the theses here was that that's the goal in the first place, and they're legally free to pursue that goal if they believe the unfolding of events goes against the founding principles of OpenAI.

That said, they chose a risky path to begin with when they created this for-profit-controlled-by-a-non-profit model.


It would be the same thing for which people are accusing the OpenAI board.

Play ball or else we'll pull the wires off your cloud instances. Let's keep in mind Azure is the main cash cow of MS.


Also not paying the rest of the tranches that would make up the $10B. Also with Microsoft being their exclusive commercial partner, they can’t revenue fund if Microsoft stops the spigots. No other investor would want to invest. PPUs lose most of their value and employees leave. How to implode the most important company of our times with record speed.

It’s also strange why they would have a couple of nobodies on the board.


I had to read it multiple times to understand you wrote Computers to mean Servers.


[citation needed]


You're reading this wrong.

When you fuck up, you get punished for it. And the OpenAI board is about to be punished. This is the problem with giving power to people who don't actually understand how the world works. They use it stupidly, short-sightedly, and without considering the full ramifications of their actions.


> think Microsoft is behind all of this

Wasn’t Ilya brought in by Musk?


I mean the desire to get Sam back and probably also fanning the flames of mass resignation to use as pressure.


I have long since held the idea that "singularity" is a misleading term, and "event horizon" is much more descriptive: if you're being constantly surprised — my original phrasing was about going to sleep each night and being surprised each morning by the developments while you slept — then The Change has happened.

Well, I'm being constantly surprised.


I don't think accelerationism is about increasingly reckless decisions made by humans in increasingly powerful positions. Or anyway I hadn't yet been thinking


So if Sam Altman is back by tomorrow (Sunday) after being crucified on Friday, I think that means the end of the world is near.


  - That's a hell of a show, what do you call it?
  - The Aristocrats


BOB SAGET!!!


He might as well come back if the entire board resigns and he gets to pick the new board and turn it into a C corp.


That would be the worst timeline. This shakeup has me 1% hopeful OpenAI will actually return to being a non-profit in spirit, not just name.


A company owned by a nonprofit isn't that rare - big European companies like IKEA and Novo Nordisk did it. Of course, they did it for tax evasion.


what would the role and deliverable of the non-profit be though? so far they innovated, shipped fast, cut costs and sold plus subscriptions and dev apis for personal and enterprise use. and to be fair i really liked using their apis. so what would them being fully non-profit even look like? genuinely curious.


I can imagine many worse timelines.


And then rename it ProfitAI, so as to satisfy all of the "Open" nay-sayers?


Just call it "Foundation".


In all seriousness, FoundationAI sounds pretty good.

For lack thereof, PsychohistoryAI it is!

RIP: AlignmentAI


Drop the "AI", it's cleaner.


Agreed, but how will the investor class understand what to invest in?

Oh, I get it now, Foundation.ai


that's what sama is for


Yeah. You are correct.

Would love some commercial fusion power plants on the side as well please.


AI-Qaeda



How exactly would that happen? It’s a non profit that owns a staggeringly valuable piece of IP.

Do you understand that this is conceptually the same thing as the directors of a public art museum deciding to just take millions of dollars of paintings for themselves?


How? The board appoint new members and resign. The new board votes to turn it into a C corp.


Who owns the shares?


Imagine these people being in charge of AGI in a few years…


The kind of AGI you’re talking about - the one that makes people nervous about who is in charge of it - nobody can truly be “in charge” of this kind of AGI.


They decide how to bias it. They decide if they should pull the plug or not. They decide if it’s too powerful and they’re in contact with people who suggest to the government that we should bomb datacenters if so


Ilya should split off from Altman/Brockman no matter where this lands. I sense an uncrossable chasm between these guys.

Anyway I’m with Sutskever, the guy who builds models. Charismatic salesmen are a dime a dozen.


Over the years "tech" has been less and less about making things and more and more about making your investors money. Technical talent used to be extremely important in this industry, but it has slowly been worn away over the years.

I still like working in this industry because you can still find interesting problems to solve if you hunt for them, but they're getting harder to find and it increasingly seems like making good technical decisions is penalized.

It's sad to see even on HN how many comments are so dismissive of technical skills and ambitions, though I guess we've had more than a generation of engineers join the field because it was the easiest way to make the most money.

For a brief moment on Friday I thought "maybe I'm too cynical! Maybe there still are places where tech actually matters."

Not surprised it looks like that hope will be inverted almost immediately. I also suspect the takeaway from this will be the final nail in the coffin for any future debates between engineering and people who are only interested in next quarters revenue numbers.


What else would you expect? OpenAI spun up a "separate" for-profit company and recruited a bunch of industry top engineers and scientists with 500k+ salaries where the vast majority of it is tied to equity grants.

Most of the employees' values do not align with a non-profit, even if those of executives like Ilya do.

By firing Altman and trying to remind the world they are a non-profit that answers to no one, they are also telling their employees to fuck off on all that equity they signed on for.


I mean you're describing exactly the empty technical world I've been experiencing.

So the future of AI is in the hands of leadership that's slick talking but really only there to make a quick buck, built by teams of engineers whose only motivation is getting highly paid.

I don't begrudge those that are only in it for the money, but that's not the view of tech that got me excited and into this industry many years ago.

The point of my comment is that for a moment I thought maybe I was wrong about my view of tech today, but it's very clear that I'm not. It sounds like the reality is going to end up that the handful of truly technical people in the company will be pushed out, and the vast majority of people even on HN will cheer this.


If Sam Altman wins and the likes of Ilya lose then we won’t actually have AI. Since Sam Altman doesn’t know anything about building AI. We’ll have more sharky products with grandiose visions that end up making money by using surveillance.

But I’m hopeful that AI will at least win by open source. Like Linux did. “Linux” wasn’t a 100 billion startup with a glitzy CEO, but it ate the world anyway.


I'm not so sure.

Ilya was apparently instrumental in this, and he didn't have to pursue this?

It didn't have to be a "you're with me or you're with them!"


You're right, the handling of it was brutal.


but he wants to jail the model he builds. As Sam says, he should think more about what he actually wants to do, and then do it. Not go in 2 opposite directions at the same time.


Not everyone is a goal-oriented monomaniac.


Charismatic salesmen get the money needed to build the models. Computer scientists are a dime a dozen, universities churn them out every year.


In this case, it seems that the computer scientists are serious about saving humanity, while the salespeople just publicly act as if they are.


> it seems that computer scientists are serious about saving humanity

How could they accomplish that without external investment? If the money tap dries up OpenAI will just be left behind.


They have external investment!


From Microsoft? My point is that companies that are serious about making money (even at some indeterminate point in the future) are much better at attracting investment than those which have publicly declared it's not their goals.

Nobody is throwing billions around without expecting anything in return.


Nobody says that investors don't expect anything. However, it's pretty clear that Sam was solely focused on delivering fast in order to keep his advantage. He said publicly that he cared about AGI safety, but his style of leading the company makes it clear that he didn't.


> However, it's pretty clear that Sam just solely focused on delivering fast in order to keep his advantage

Yes, I'd assume most investors prefer this type of approach to a more cautious one. Meaning that companies like this are more likely to attract investors and more likely to beat the ones which care about AGI safety to actually building an AGI (whatever is that supposed to mean).


Equating Ilya to the average B.S. in Computer Science is like equating Sam to a used car salesman. Neither are true and both were instrumental in the success of OpenAI.


gpt-4 lead resigned with sam


https://news.ycombinator.com/item?id=38325611

So MS shows who's in control. Say goodbye to OpenAI.

From now on it's all for MS's profit only.


With a fairly neutral opinion of sama, I think this whole move has done more to hurt OpenAI than help. There are politics in companies all the time but it takes a certain finesse because you don’t want to sink the ship. This at best creates a stronger divide between those for and against.


>The Information reported that Altman was already meeting with investors to raise funds for such a project. One source close to Altman said that both options remained possible. “I think he truly wants the best outcome,” said the person. “He doesn’t want to see lives destroyed.”

Lives destroyed? This is either a reporting error or is a very weird thing to say. Unless the source insinuates that Altman is willing to destroy the world with a cruel AGI if he doesn't get his way.


I think he means people who have invested huge amounts of their energies for years into the venture and view it as their highest and best calling getting forced out (or feeling like they have to leave). For many of them it's not just a job.


People here are so gullible.

Do you really think the board is so incompetent as to not have thought through Microsoft's likely reaction?

And, do you really think they would have done this if they thought there was a likelihood of being rebuffed and forced to resign?

The answer is, no. They are not that incompetent.

I wish Sam & co the best, and I'm sure they'll move on to do amazing things. But, the recent PR just seems like spin from Sam & co, and the press has every reason to feed into the drama. The reality is that there are very smart people on both sides of this power struggle, and there's a very low probability of such a huge misstep on the board's part - not impossible but highly unlikely imo.

The only exception I can see is if Ilya&co foresaw this but decided to act anyways because they feel so strongly that it was the moral thing to do. If that's the case, I'm sure Elon's mouth is watering ready to recruit him to xAI.


Yes, they are that incompetent, except one. D'Angelo has a history of such moves. He fired his cofounder when Quora was still doing well and growing, and Quora has been struggling ever since.


The other likely scenario: investors are using their media connections to push a narrative to get OpenAI to take Sam back, not necessarily Altman himself driving it. With this being the hottest story, any credible gossip from a known name would be enough for many of these media organizations to run with it.

“Staffers were ready to resign” really? Who? How many? The deadline passed hours ago, why haven’t we seen it?


> Do you really think the board is so incompetent as to not have thought through Microsoft's likely reaction?

For Ilya Sutskever, he is a very smart guy but he may be blinded by something here.

For the rest of that board, yes I really do think they are that incompetent.


Spoiler: your comment will not age well.


What a wild ride this last 24 hours has been.


This firing very much has the feeling of the board fearfully pulling the circuit breaker on OpenAI's for-profit trajectory.

On the one hand, I actually respect their principles. OpenAI has become the company its nonprofit was formed to prevent. Proprietary systems. Strong incentive to prioritize first-to-market over safety. Dependency on an entrenched tech co with monopolistic roots.

On the other hand, this really feels like it was done hastily and out of fear. Perhaps the board realized that they were about to be sidelined and felt like this was their last chance to act. I have to imagine that they knew there would be major backlash to their actions.

In the end, I think Sam creating his own company would be better for competition. It's more consistent with OpenAI's original charter to exist as the Mozilla (though hopefully more effective) of AI than as the Stripe of AI.


Sam creating his own company for what purpose? Meta, Google, Elon's company, Anthropic, OpenAI... why would anyone believe that Sam and crew could stand up a new company tomorrow and have any kind of chance to compete with the above in the next six months? Even if Microsoft threw a ton of money at such a startup, good luck finding GPU time. Good luck sourcing the data. Good luck doing the RLHF. Could Sam and Greg do this? Sure! But what would that give them above and beyond the racers that are currently in pole position?


It now seems to me that it was inevitable that something like GPT would take off - but it didn't necessarily have to come from OpenAI. Someone else would have filled their place. The collective ML knowledge and research were rapidly evolving, computing was getting faster and cheaper. The pressure was building and at some point, something somewhere had to pop off. They were a great but not a singular team.

And it looks like now they might be very close to the limits of their own capability. I'm not sure how much more they can give.

On the surface, their new features always seem to be quite exciting. But when the dust settles it is again all very lackluster, often copied from open source ideas. Not something you can bet on.

Their biggest moats are their popularity, marketing, and their large bags of cash. The latter of which they are burning through extremely quickly. The thing is, it's easy to build something massive when you don't care about unit economics. But where do they end up when the competitive forces commoditize this?

When listening to interviews with Sam I was always surprised by how little useful information I am able to get out of listening to him. I'm sure he's very smart but he tries to project the aura of radical honesty while simultaneously trying to keep all of his cards extremely close to his chest. All that without the product chops to actually back it up. That's my read.


> When listening to interviews with Sam I was always surprised by how little useful information I am able to get out of listening to him

To be fair, isn’t that kind of the bar for CEOs? Their job is to hire and fire senior people, ensure they have a mountain of cash, and put out fires.

It’s not an operational position and so I wouldn’t expect a CEO to have deep operational knowledge.

Maybe I’m misunderstanding the division of labor though?


I think you're right but there might be a catch. It doesn't seem like he's able to steer the delivery of polished products either. I know it's the fastest growing app and all that but that's driven by their tech. I use ChatGPT daily but the tool itself has a subpar design, it lags, its streaming UI is choppy, it breaks and cuts off mid-sentence, and they are not able to meet the demand either.

I don't know how this unfolds, but when somewhat smart models become a commodity, and thus the remaining 90% of the population get access to polished chatbots distributed through dominant platforms like Google, Facebook, Instagram, etc. - where does that leave OpenAI? High-end models probably. And maybe with superintelligence unlocked that's all that's needed to win business-wise, I don't know.


It runs on Azure.


The reason something like GPT didn't come out of e.g. Google is that they had/have similar stuff but were very reticent about making it public -- because of the ethical aspects as well as the sheer resource cost of hosting it for the public -- and also, I suspect, they have natural internal conflicts about whether it's the right direction to take, etc.

And so something like OpenAI came along where Ilya S etc. got bags of money to go take that approach and scale the crap out of it, and, yeah, they got results. Because they didn't have to be careful, or deal with competing interests.

That's all fine, but it's also no surprise when it all blows up, is it?


Being first to openly generate from billions of copyrighted documents would not have been a sane move for Google's management.


And because of the advantage of using it internally without sharing it with competitors?


It for sure is not because of ethical concerns. There is a higher bar to clear to burn cash when other projects are delivering high ROI. That sort of thing will never come out of Google in that form, not since the post-Google Labs days.


Insanely based take.

Sam tries to sound smart while not really having any technical insight. He does a tremendous job with it though.

One way to think about this is: at some point in the next few years we'll have a few hundred GPUs/TPUs that can provide the compute used to train GPT-3.

This discovery was always going to happen. The question is whether OpenAI made radical scaling possible in a way it wasn't before. The answer there is also no. There are clear limits to the number of collocated GPUs, Nvidia release cycles, TSMC capacity, power generation, etc.

So in the best case OpenAI fudged the timeline a little bit. Real credit belongs to the Deep Learning community as a whole.


> This discovery was always going to happen.

It’s not clearly obvious that’s the case. In retrospect things always seem obvious, but that another party would have created GPT-3/4 is not.


[flagged]


It's related insofar that Sam or no Sam, OpenAI might be done with their part for good. It seems like they might have GPT-5 up their sleeve but after that, I can't be certain if they have a winning horse in this race to the bottom.

It might be like betting on electricity companies to become the biggest winners just because electricity is going to be everywhere.


The whole situation just reminds me of a group of children in a kindergarten. That's what VC culture cultivates.


Kids are usually candid /s


RIP OpenAI, for real.

And RIP @sama if he chooses to go back, no good could possibly come out of this.


Rarely does getting back together with an ex work out. You miss each other in the beginning thinking of only the good things, but very quickly you start to remember why you were exes to begin with.


That analogy doesn't make sense here since all the employees at OpenAI are extremely loyal to Sam. It's just the board that wanted him out.

And regardless of what happens here, everyone on the board is 100% getting fired.


Do you work there? How do you know that the employees are 'extremely loyal' to Sam?


Just like yesterday Sam 100% lied to the board and was on the outs. Today, it's 100% the board is getting fired and Sam is coming back on a white horse wearing shining armor.


Exactly my thoughts, and the ex analogy fits; it would be extremely awkward for everybody as this event has tainted everyone's reputation at OpenAI. I'd think even MS may want to get out of the deal.


> And regardless of what happens here

Regardless of whether or not sam is coming back to OpenAI, the board is 100% getting fired.


the issue wasn't with board being fired, but the 100% claim.


Literally Sam on Monday: https://youtu.be/O5WBfOK5syA?t=44


I was expecting the clip from The Wire: https://youtu.be/WP-lrftLQaQ?si=0KSbJqhZpKtWeJ0A


It's sort of like if they didn't have a change control process, deployed on a Friday, realized the deployment was broken, and are now hurriedly trying to roll it back before business hours Monday.

Not really a good look from a company that's leading the charge on such a pivotal technology.


When this started I was reminded of old Matt Levine articles about who really controls a company. There were some amusing examples.

The board can sack a CEO but if they keep their influence over employees, customers and shareholders... what's the board going to do?


Another good example he had brought up was with Arm China. Would love to see what he has to write about it on Monday's newsletter.


I think what's going to happen is:

- Sam would return to OpenAI as CEO and Greg as President

- Sam's request to change the Board would be accepted and Board members will resign

- Ilya would be Out

- Sam would change the governance structure from current dual nonprofit+forprofit to just for-profit corp.


How would the trust be repaired then?

I guess if Sam is back, Ilya is planning his way out.

Regardless, due to this stupid stunt, OpenAI is hardly the same.


Reinstate Sam and now the rest have to go.


For those downvoting me, I’d be interested to hear how you think it could work for him to come back and work with the people that unceremoniously fired him and made it sound like he had ethical violations when it now seems clear it was a disagreement over strategy. If there is a disagreement over strategy and he goes back what value do the others provide now?

Also, wouldn’t it seem clear that the board acted rashly in firing him without input from other stakeholders?

I am legitimately interested to know why you downvote as I don’t see another way for it to work.


Your position seems predicated on the idea that people think Sam going back is beneficial for OpenAI.

Personally this is all largely popcorn munching entertainment for me, as I don't think Sutskever is right about a lot of his core tenets, but I also don't think that Altman is a good fit for achieving the charter that OpenAI is obligated to follow.

I don't think OpenAI will be at the forefront of AI/AGI/etc. research a decade from now regardless (and I held that position before yesterday's events), but if the OpenAI charter and mission statements are sincerely held beliefs, then the path they have been following for the past several years with Altman at the helm has obviously been counter to it.

Whether or not that charter is anything more than a pipe dream isn't really relevant - they're a non-profit so their legal obligation as the board is to direct the resources under their control to achieve that charter.


> For those downvoting me

From my own experience, short assertive comments tend to get downvoted on HN. Unlike reddit, votes here are less about agree/disagree.


So we'll get MS AI?

Sounds even worse.


I think they’ll end up spinning off the non profit. The board members can keep their titles as board members of a now irrelevant org.

Maybe they’ll give the non profit some advisory (fake) role over the new company.


For what it’s worth (nothing!) I don’t believe that a rehire offer is really happening.

On the tech side, I think work will split on two tracks: 1) building great applications with small and medium fine tuned models like Mistral, etc. Within a year or two great models will run on the edge because of continuous technical improvements. 2) some players will go for the long game of real AGI and maybe they will get there in much less than a decade.

On the business side, I have no idea how the current situation is going to shake out.


If Sam starts a competing company and can pull a large chunk of the researchers and engineers over (if I were an OpenAI employee, I would be interested in following a proven success story like Sam), then Microsoft's $10B investment would be down the drain. Obviously Microsoft wouldn't want that, and I'm sure Satya has got his hands around the nuts of the Board members and is squeezing them hard (well, figuratively speaking, since there's Toner).


Microsoft hasn't actually sent them all that money yet, and a lot of it seems to be in Azure credits that they can just pull. Then what are they going to do?


The engineers are said to be relieved that Altman is gone, so it doesn't sound like they'd be following a "success story" (whatever that is supposed to mean).


Sam Altman should make it a condition of his return that the current board be dissolved and have a completely new board picked to avoid such dramatics in the future.


That sounds like something a dictator would do.


The board members should have a high degree of trust and cohesion. That does not mean they should see eye-to-eye on everything. But lack of trust and backstabbings like this can create a poisonous atmosphere on the board and ultimately affect the company's ability to execute effectively. If the CEO has to worry all the time if the board is going for his ouster, this will resemble the HP and Yahoo boards of the 2000s decade.


This boards needs to be fired. Every single one of them.

I don't understand how Microsoft, after having invested billions, doesn't have a board seat. If they did, I doubt this would've ever happened. I'm not sure why Microsoft let that happen.

But even ignoring that, the board making a decision as impactful as this without consulting their major investors is a dereliction of duty. That alone justifies getting rid of all of them because all of them are complicit in not consulting Microsoft (and probably others).

I have no idea why Sam was fired but it really feels just like an internal power struggle. Maybe there was genuine disagreement about the direction for the company but you choose a leader to make decisions. Ousting the CEO under vague descriptions of "communications with the board" just doesn't pass the smell test.

I'm reminded of this great line from Roger Sterling [1]:

> Half the time this business comes down to "I don't like this guy"

So much of working, performance reviews, hiring and firing decisions and promotions is completely vibes-based.

[1]: https://www.youtube.com/watch?v=DY20L_u_WxM


The entire setup is structured so that they are not supposed to be beholden to investors. If it is true that they ultimately are and Microsoft is the leverage to get Altman back, then they explicitly failed in the goal in setting up their structure of governance.

The fundamental thing you are missing here is that the charter of the non-profit and structure of their ownership of the for-profit (and the for-profit's operating agreement) is all designed in a way that is supposed to eliminate financial incentives for stakeholders as being the thing that the company and non-profit are beholden to.

It may turn out that the practical reality is different from the intent, but everything you're talking about was a feature and not a bug of how this whole thing was set up.

https://openai.com/our-structure


You don't know why Altman was fired in the first place.

Maybe their intentions were right and now MS is forcing them to take him back.

In the end we'll get a company working for MS's benefit only.


My point is that the reason for Sam's firing is irrelevant to the point that the board is incompetent and should be fired.

Valid or not, you don't blindside major investors who have given you billions. They apparently told Microsoft minutes before the announcement and after the decision had already been made. Even if you fully intend to take this course of action, you loop in your major investors and consult them or at least give them a heads up to prepare any communications they might need to make or even just their press people are prepared for the inevitable questions.

They didn't do that, according to Microsoft. That's why they need to be fired.


What if MS and Altman planned a coup to get rid of the non profit part?

And MS isn't an investor in the nonprofit that the board governs.


Not surprised at all. My take: the board got a nice little talking-to from their main donors (Satya?) and a whole bunch of key employees.

Next prediction: Ilya hightails it.


To put all of this into perspective, it would be good to know what "... not consistently candid in his communications with the board" means.


I wonder what sort of proper insurance backstops are in place.

E.g. https://www.thehartford.com/management-liability-insurance/d...

"The Who, What & Why of Directors & Officers Insurance

The Hartford has agents across the country to help with your insurance needs. Directors and officers (D&O) liability insurance protects the personal assets of corporate directors and officers, and their spouses, in the event they are personally sued by employees, vendors, competitors, investors, customers, or other parties, for actual or alleged wrongful acts in managing a company.

The insurance, which usually protects the company as well, covers legal fees, settlements, and other costs. D&O insurance is the financial backing for a standard indemnification provision, which holds officers harmless for losses due to their role in the company. Many officers and directors will want a company to provide both indemnification and D&O insurance."


That would mean money wins and we lose.


What in the actual fuck is happening?


In a space where OpenAI currently has a razor-thin moat, this kind of thrash will end up putting you out of the thought-leader slot.

You can embrace AI safety all you want. But not being the leader means you have very little influence to effect any kind of shift in the industry.

I am an outsider, and very far from executive leadership. But this whole move seems like a predictable fiasco.


It's very hard to have too strong of an opinion here without knowing more about the actual conversations happening behind closed doors. From the outside looking in, this seems like a lot of drama, and whatever the outcome, it looks super bad for OpenAI. If anything, it'll have the effect of a lot of us shopping for alternatives. Maybe that was the intention?


There's no way anyone should continue to use OpenAI products if Sam Altman is allowed to return. Because in that scenario there won't even be a counterweight to his point of view in the company anymore, and that counterweight was the only thing keeping his extreme untrustworthiness and sliminess from being a deal-breaker.


He'll probably go to X to work for Musk, Microsoft is too big and too controlling and will eventually go full Gavin Belson (from the Silicon Valley show) and want to put GPT in a box to do Windows telemetry or something similar, while Musk can pay Altman some ungodly amount to build something with at least a vision behind it.


i think this is unlikely given Musk's warnings about the dangers posed by AI


MSFT trying to work something out before market opens on Monday.


It looks like an OpenAI alignment problem.


In retrospect, anyone who is familiar with the game mechanics of agar.io should have seen this coming.


This article itself is likely part of the power struggle.


Was the article changed for this? Used to be this one from the verge: https://www.theverge.com/2023/11/18/23967199/breaking-openai... but was since changed to https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...


There were two threads that were probably merged.


That is just embarrassing.


>One AI-focused venture capitalist noted that following the departure of Hoffman, OpenAI’s non-profit board lacked much traditional governance. “These are not the business or operating leaders you would want governing the most important private company in the world,” they said.

From https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are... (linked in OP)

I'd be interested in a discussion of the merits of "traditional governance" here. Traditional private companies are focused on making a profit, even if that has negative side effects like lung cancer or global warming. If OpenAI is supposed to shepherd AGI for all humanity, what's the strongest case for including "traditional governance" type people on the board? Can we be explicit about the benefits they bring to the table, if your objective is humanitarian?

Personally I would be concerned that people who serve on for-profit boards would have the wrong instinct, of prioritizing profit over collective benefit...


Does this mean they're willing to replace Ilya Sutskever with Altman? Because it sounds like Sutskever doesn't want Altman as CEO, so I assume he will leave willingly or unwillingly if they take back Altman. Who adds most value to OpenAI?


Probably faced with more internal turmoil than expected and then realized their bluff had been called?

I was willing to believe Ilya had at least a decent reason to do something so drastic but who knows, this ain't looking good for him.


So one day you get the ceo of Microsoft to attend your dev conference and next day you get fired and then another day there are negotiations to get you back. What is this? A Russian roulette or a game of thrones??


I cannot imagine the clown car that is the board surviving if Sam comes back.


OAI needs to fundamentally change if Sam goes back. Wholesale change.


https://x.com/airesearch12/status/1725979335171989571?s=20

I'm glad that there are other companies and open source efforts to fall back on.

As an API user of the GPT models I've always had it at the back of my mind that it would be unwise to 100% rely on OpenAI for the core of any product I built.

The recent rocking of the boat is further justification for my stance in that regard.


Just tried it and it's not fast. It says "deep diving" and takes a few seconds to respond. The response isn't as good as GPT-4 either.


Latest report from TheInformation:

> OpenAI's chief strategy officer, Jason Kwon, told employees in a memo just now he was "optimistic" OpenAI could bring back Sam Altman, Greg Brockman and other key employees. There will likely be another update mid-morning tomorrow, Kwon said.

https://twitter.com/erinkwoo/status/1726125143267926499


Has anyone asked ChatGPT about the situation? It seems like an obvious thing to do. When parents are arguing and about to divorce, they must listen to their children.


It only has knowledge up to a certain point, I believe it's now April of this year

Best to ask it next year when the trauma has set in


How do I know if I can go to sleep?


This is Elon level recklessness.. is he involved in any way?


Man I hope Elon jumps in and backs Ilya. That drama would keep me entertained for a while.


haha that would be next level, and indeed entertainment for weeks to come


I mean Musk was one of the original people involved with OpenAI, no?

It's his fault we are here /s


Haha if you think about it like that, it seems like he managed to ingrain some of his methods into the culture..


Given the new AI firm Sam is currently staffing up I think that openAI will fail at getting him back.

It would seem the board might have felt they were backed into a corner.


A real power struggle! How exciting. It goes to show that no matter the structure, when it comes down to it the investors always have the last say.


Not here, with the governance structure the way it is. The board has the last say. Now, there are obviously ways that investors can put pressure on the board, and with literally billions at stake I can imagine that happening. Think serious espionage-level drama.


Sure, that might be true on paper, but if you have the ability to turn off the money faucet you have a lot of leverage regardless of what the documents say. No espionage drama needed haha.

If you then imagine being dependent on that investor not only because your service runs on their infra but also your biggest enterprise customers use your service through their infra, you are even more incentivised to listen to them..


Had to double check this wasn't an Onion article.


OT: A moment of silence for the poor HN server, which is probably going to be hammered more in this 48h period than any time in the last year+


Altman comes back, Ilya goes to Elon; they both seem to align. I'm not sure how Ilya and Altman can work together after this?


this sounds like some wow guild drama


WoW guild dramas are basically mini corporate board room dramas.


I personally really hope Altman doesn't return if he's really the one who pushed OpenAI away from its non-profit roots.


Hahaha really? What's going on here? It looks like a soap opera... Maybe someone is worried about their stocks on Monday?


I wonder if anyone has recently reviewed the history of AI and IBM?

Remember the hype around Deep Blue and later Watson?

I’m sure no lessons to be learned there :)


Either way, even before the firing, Ilya and Altman were not going to be working in the same office much longer. Altman seems to be the irreplaceable one because of his status/connections/leadership. Which is also good, as wherever Ilya goes, it will only heat up the competition for OpenAI. Competition is good for tech.


As it turned out, the dead to rights evidence they had against him was generated by DALL·E.

And he smoked Steve Jobs' time-to-re-hire!


What on earth actually happened yesterday?


Keep in mind the impending stock sale has not completed; it's out by about 20 to 30 days, hence the scrambling of investors to try and get Sam back.

Question: did the board find out about the other AI firm that Sam had in the works? That might be the clue to why the chair of the board was demoted but not let go.

Somebody over-played their poker hand...


That’s why you don’t hire people you wouldn’t trust to make a ham sandwich onto the board of a 100B+ company.


"Leaked picture of @sama during his rehiring google meet with the OpenAI board."

https://pbs.twimg.com/media/F_QXAKEW0AAQpPC?format=png&name=...


Abject amateur hour.


Microsoft now has both a 49% stake in OpenAI and the OpenAI visionaries building competing products. I smell a rain of lawsuits. Meanwhile AWS is sucking its thumb and Google Meet will no longer be relevant! LOL


This seems incredibly messed up. Why fire him if you wanted to rehire him in a few hours?


They appear to not have realized how this would be perceived and are now desperately in some kind of damage control mode.

“Ha ha just kidding don’t ruin our stock value!”


They just put themselves in a terrible negotiating position.

Probably so bad that the damage has already been done to themselves (the board) regardless what happens next.

How can Sam possibly agree on the board remaining intact when they don't trust him in his leadership?


Dude what? If the board does this they also all have to resign. I mean seriously.


We are a board for a company whose stated mission is to develop AI tools to better serve the needs of humanity. The CEO has secretly piloted the company towards alternate goals of profit. We fired him on Friday but think it was a mistake. Who knows. Anyways, how can we rehire him?

---

Rehiring a CEO whom you've recently fired is a delicate process, both from a legal and business standpoint. Here's a general approach you might consider:

1. *Board Meeting*: Convene a board meeting to discuss the decision to rehire the CEO. This should involve all key stakeholders to ensure transparency and agreement.

2. *Legal Considerations*: Consult with legal counsel to understand any legal implications or contractual issues arising from the initial termination and potential rehiring.

3. *Negotiation and Terms*: If the board agrees to proceed, you'll need to negotiate new terms with the CEO. This might include discussions about the future direction of the company, salary, and any conditions related to the rehire.

4. *Addressing the Underlying Issues*: It's important to address the reasons that led to the initial firing. This might involve setting clearer goals aligned with the company's mission, establishing better oversight, or implementing checks and balances.

5. *Communication*: Once an agreement is reached, communicate the decision to rehire the CEO to your employees, stakeholders, and possibly the public, depending on the nature of your company. This communication should be clear about the reasons for the reversal and the future direction of the company.

6. *Monitoring and Evaluation*: Finally, set up a system for regularly evaluating the CEO's performance against the company's mission and goals to prevent a recurrence of the previous issues.

Remember, the rehiring process should align with your company's bylaws and any relevant legal requirements. Transparency and clear communication throughout the process are crucial to maintain trust among your stakeholders.


It seems incredible that the OpenAI board would hastily bring back someone whom they hastily fired just 24 hours prior, allegedly for serious ethical reasons, something tantamount to lying to the board.

What am I missing ?


Sam needs to ink a deal with Netflix to tell the story of this saga.


AGI recommends the following,

1. **Transparent Governance**: OpenAI should strive for greater transparency in its governance structure. This includes clearly outlining the roles and responsibilities of the nonprofit board versus the for-profit subsidiary, and how decisions impact each entity. This would help mitigate misunderstandings and conflicts of interest.

2. **Balanced Board Composition**: The board should be restructured to balance the interests of various stakeholders, including investors, employees, and the broader AI community. This can be achieved by having a diverse set of members with expertise in technology, business, ethics, and law.

3. **Stakeholder Engagement**: Regular engagement with key stakeholders, including investors like Microsoft and employees, is crucial. This ensures that major decisions, such as leadership changes, are made considering their potential impact on all parties involved.

4. **Leadership Stability**: To address concerns about leadership and company direction, it may be beneficial to have a stable leadership team that aligns with OpenAI's mission and values. This could involve a re-evaluation of Sam Altman's role and contributions, considering the interests of both the nonprofit and for-profit entities.

5. **Strategic Communication**: OpenAI should develop a strategic communication plan to address public concerns and market reactions. This includes clear messaging about its mission, decisions, and future plans, which can help maintain public trust and investor confidence.

6. **Ethics and Safety Focus**: Given the transformative potential of AI, OpenAI should continue to prioritize AI ethics and safety. This commitment should be evident in its operations, research directions, and partnerships.

7. **Long-Term Vision Alignment**: Finally, aligning the long-term vision of OpenAI with the interests of its stakeholders, including the global community it aims to serve, is essential. This involves balancing profitability with ethical considerations and societal impact.

By implementing these strategies, OpenAI can navigate its current challenges while staying true to its mission of developing AI in a safe and beneficial manner.


It's cool to see that the moment-to-moment dynamics of the world's most cutting-edge company are no different from a pack of 5-year-olds arguing at lunch over a few-day period.


Ah, this is the straw that breaks the ethics/responsible-AI camel's back. When Sam comes back he will turn it into a regular company and maximize profits.


> OpenAI is governed by the board of the OpenAI Nonprofit, comprised of OpenAI Global, LLC employees Greg Brockman (Chairman & President), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), and non-employees Adam D’Angelo, Tasha McCauley, Helen Toner.

At least two of these people are not like the others and deserve to be fired and disgraced for this shitshow regardless of how it pans out.


Ok so why doesn't Sam Altman and his buddies team up with John Carmack, who is fully invested in AGI now, and has a proven legacy for getting shit done?


The link just changed. Why? The original was the verge article, that was frankly terrible. It really read like the author had a specific goal.


Instead of ousting Sam A and Greg B, if Ilya really had deep concerns, he should have quit and built his own AGI-dedicated research company. His prestige surely would have gotten him funding.

Like how Hinton left Google so he could speak freely.

IMO inventing AGI is more powerful than nuclear energy. It would be very stupid of humanity to release it out in the wild.

LLMs are a great tool and nowhere near AGI.

I’m of the belief that alignment of AGI is impossible. It’s like asking us to align with lions. Once we compete for the same resources, we lose.


> If Ilya really had deep concerns, he should have quit and built his own AGI dedicated research company.

...

You should look up some history here.

Exactly what you say has already happened and OpenAI is the dedicated research company you are referring to.

He originally left Google Brain, I believe.

> I’m of the belief that alignment of AGI is impossible.

I don't think most people in this space are operating based on beliefs. If there is even a 10% chance that alignment is possible, it is probably still worth pursuing.


Even if he comes back, how could the trust relationship still be there between him and Ilya and Mira? OAI will have to change in big ways.


This makes no sense. People are calling what the board did a coup, but Altman is trying (failing?) to stage a coup.

The board was Altmans boss - this is pretty much their only job. Altman knew this and most likely ignored any questions or concerns of theirs thinking he is the unfireable superstar

Imagine if your boss fired you - and your response was - I’ll come back if you quit! Yeah, no. People might confuse status with those of actual ceo shareholders like zuck, bezos, or musk. But Altman is just another employee

The shareholders can fire the board, but that’s not what he’s asking for. And so far we haven’t heard anything about them getting fired. So mostly this just seems like an egomaniac employee who thinks he is the company (while appropriating the work of some really really smart data scientists)


>People are calling what the board did a coup, but Altman is trying (failing?) to stage a coup.

The board removed the board's chairman and fired the CEO. That's why it was called a coup.

>The shareholders can fire the board, but that’s not what he’s asking for. And so far we haven’t heard anything about them getting fired

nonprofits don't have shareholders (or shares).


Also, the board made a decision without the board's chairman - Greg Brockman - involved. Also, it looks like the board didn't follow its own internal rules about meetings.


Also, the investors were not informed. It’s insane their largest investor and partner MSFT was blindsided by this. Anyone with just a little bit business sense knows this.


This is the board of the non-profit. It has no investors. The board does not answer to anyone.


And how does this non-profit pay for its immense server costs?


Non-profits still earn money recorded as net assets. They do not retain earnings at the end of the accounting period to store in shareholder’s equity because there are no shareholders that own the non-profit.


You’re interpreting it as a lawyer would, rather than considering the real-world implications of this.


I’m interpreting it as a CPA


The point still stands: the board does not have "investors". Microsoft knowingly donated to the for-profit entity of the non-profit. OpenAI isn't a PBC, it's a 501(c) non-profit. So the board can act that way, without the knowledge of the investors.

That being said, this is a case of biting the hand that feeds you. An equivalent would be a nonprofit humiliating its biggest donor. The donor can always walk away, taking her future donations with her, but whatever she's already donated stays at the nonprofit.


I hope IRS is watching this ;)


Watching what? A 501c3 being publicly pressured to make key governance decisions for the commercial benefits of investors in the 501c3's for-profit indirect subsidiary rather than the board's good-faith interpretation of its charitable purpose?

Why would they care about that?


It seems that OpenAI had switched to pre-paid billing. If anyone is interested in helping, they can go and pre-pay, and support the non-profit.

I'd guess, OpenAI without Sam Altman and YC/VC network is toothless. And Microsoft's/VC/media leverage over them is substantial.


All corporations are basically Russian dolls at this point.


[flagged]


Nonprofit boards literally don't answer to anyone. Company boards are responsible to shareholders, but nonprofits only have donors and that's not ownership. You can say you're not donating again, or all the employees can quit, but neither of those are legal issues for the board members. It just makes them look stupid.

Though, I'm not sure if you can legally force a donation refund, or what it counts as if you cancel a billion in Azure credits.


[flagged]


Can you please stop posting in the flamewar style? You've done it multiple times in this thread, it's not what we want here (and destroys what we do want), and you can make your substantive points without it.

If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.


From a legal point of view, they are correct: non-profits do not have shares, so they do not have shareholders, and it's the board that wields ultimate authority.

Of course in real life non-profits are funded by donors, who thus have a great deal of practical power over what happens, and you can also do things like sue the non-profit for deviating from its mission.


If they are willing to let things end (because they see their mission as not being fulfillable etc.) and don't care much otherwise then that statement isn't far off. I really don't know how ideological this particular situation is, though.


Are you going to claim something is specifically going to happen to them or are you just going to post about "what's really going on man"?

Satya can assassinate them if he wants, but they're not in legal trouble. (Although, considering how much Helen Toner's resume reads like a CIA agent, he probably shouldn't try it.)

I think they could survive as a useless AI safety lab too. After all, there's already been one for decades called MIRI and it has enough donors to stay open.


For anyone else wondering about Helen Toner’s resume:

Director of Strategy and Foundational Research Grants at Georgetown’s Center for Security and Emerging Technology

Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy

Lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University’s Center for the Governance of AI

MA in Security Studies from Georgetown

BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne


It's crazy how fast OpenAI put up the blog post


These board members are either not serious people or they let their perceived power over a groundbreaking company go to their collective heads. Either way, it has been quite the misplayed checkers move.


where did you read about their internal rules?


nope. a coup implies something that is outside of normal operation. the board removing the CEO can and will happen.


The fact that HN engineering grunts have no idea what table stakes are vs titles and authority shows how they aren’t cut out for executive brinksmanship.

Sam has superior table stakes.


What does any of that have to do with whether it's a "coup" or not? "Coup" has an implication of illegitimacy, but by all accounts the board acted within its authority. It doesn't matter if it was an ill-advised action or if Altman has more leverage here.


There's a distinction between what's technically allowed and what's politically allowed. The board has every right to vote Sam and Greg off the island with 4/6 voting in favor. That doesn't mean they won't see resistance to their decision on other fronts, especially those where Sam and Greg have enough soft power that the rest of the board would be obviously ill-advised to contradict them. If the entire media apparatus is on their side, for example (soft power), then the rest of the board needs to consider that before making a decision that they're technically empowered to make (hard power).

IMO, there are basically two justifiably rational moves here: (1) ignore the noise; accept that Sam and Greg have the soft power, but they don't have the votes so they can fuck off; (2) lean into the noise; accept that you made a mistake in firing Sam and Greg and bring them back in a show of magnanimity.

Anything in between these two options is hedging their bets and will lead to them getting eaten alive.


Except you are discounting the major player with all the hard power, who can literally call any shot with money.


You mean Microsoft, who hasn't actually paid them the money they said they will eventually, and who can change their Azure billing arrangement at any time?

Sure, I guess I didn't consider them, but you can lump them into the same "media campaign" (while accepting that they're applying some additional, non-media related leverage) and you'll come to the same conclusion: the board is incompetent. Really the only argument I see against this is that the legal structure of OpenAI is such that it's actually in the board's best interest to sabotage the development of the underlying technology (i.e. the "contain the AGI" hypothesis, which I don't personally subscribe to - IMO the structure makes such decisions more difficult for purely egotistical reasons; a profit motive would be morally clarifying).


The objective functions might be different enough and then there is nothing the hard power can do to get what it wants from OpenAI. Non-profit might consider winddown more in line with mission than something else, for example.


The threat to the hard power is that a new company emerges to compete with them, and it's led by the same people they just fired.

If your objective is to suppress the technology, the emergence of an equally empowered competitor is not a development that helps your cause. In fact there's this weird moral ambiguity where your best move is to pretend to advance the tech while actually sabotaging it. Whereas by attempting to simply excise it from your own organization's roadmap, you push its development outside your control (since Sam's Newco won't be beholden to any of your sanctimonious moral constraints). And the unresolvability of this problem, IMO, is evidence of why the non-profit motive can't work.

As a side-note: it's hilarious that six months ago OpenAI (and thus Sam) was the poster child for the nanny AI that knows what's best for the user, but this controversy has inverted that perception to the point that most people now see Sam as a warrior for user-aligned AGI... the only way he could fuck this up is by framing the creation of Newco as a pursuit of safety.


If they cannot fulfill their mission one way or another (because it isn't resolvable within the structure), then dissolution isn't a bad option, I'd say.


That's certainly a purist way of looking at it, and I don't disagree that it's the most aligned with their charter. But it also seems practically ineffective, even - no, especially - when considered within the context of that charter. Because by shutting it down (or sabotaging it), they're not just making a decision about their own technology; they're also yielding control of it to groups that are not beholden to the same constraints.


Given that their control over the technology at large is limited anyway, they are already (somewhat?) ineffective, I would think. Not sure what a really good and attainable position for them would look like in that respect.


Yeah, agreed. But that's also why I feel the whole moral sanctimony is a pointless pursuit in the first place. The tech is coming, from somewhere, whether you like it or not. Never in history has a technological revolution been stopped.


They acted within their authority but possibly without the support of those that asked them to join in the first place and possibly without sufficient grounds and definitely in a way that wasn't in the interest of OpenAI as far as the story is known today.


You're speaking as if Altman and Brockman did Sutskever a favour by "asking him to join". They were practically begging.


Doesn't change the fact that this probably wasn't the outcome they were going for.


Legitimacy is derived from power, not from abstraction. Sorry, that's the reality. Rules are an abstraction. Power lets you do whatever you want, including making new rules.


Yeah no. While you may be onto something that still does not make it a coup.


It doesn't matter what you call it.


it sort of does. a coup is usually regarded as a bad thing. firing a ceo? not so much.

pushing to call it a coup is an attempt to control the narrative.


I don’t think you are using table stakes correctly


I second that this is an unusual use of table stakes.

Here is what I understand by table stakes: https://brandmarketingblog.com/articles/branding-definitions...


Really? Aka Sam has the ability to start a new business and take the contracts with him and Ilya doesn’t. Because that’s table stakes. Exactly.


Everyone on that board is financially independent and can do whatever they want. If Sam & Ilya can't get along that basically means there are 2 companies where previously there was OpenAI. (4 if you add Google and Anthropic into the mix; remember that OpenAI was founded because Ilya left Google, and then Anthropic was founded when a bunch of top OpenAI researchers left and started their own company).

Ultimately this is good for competition and the gen-AI ecosystem, even if it's catastrophic for OpenAI.


Anyone can do whatever they want; it doesn't mean it will work out the way they want it to.


I'm curious what you're inferring to be "the way they want it to"?

From my read, Ilya's goal is to not work with Sam anymore, and relatedly, to focus OpenAI on more pure AGI research without needing to answer to commercial pressures. There is every indication that he will succeed in that. It's also entirely possible that that may mean less investment from Microsoft etc, less commercial success, and a narrower reach and impact. But that's the point.

Sam's always been about having a big impact and huge commercial success, so he's probably going to form a new company that poaches some top OpenAI researchers, and aggressively go after things like commercial partnerships and AI stores. But that's also the point.

Both board members are smart enough that they will probably get what they want, they just want different things.


You need to remember that most people on this site subscribe to the ideology that growth is the only thing that matters. They're Michael Douglas 'greed is good' type of people wrapped up in a spiffy technological veneer.

Any decision that doesn't make the 'line go up' is considered a dumb decision. So to most people on this site, kicking Sam out of the company was a bad idea because it meant the company's future earning potential had cratered.


> You need to remember that most people on this site subscribe to the ideology that growth is the only thing that matters

I'm not sure that's actually true anymore. Look at any story about "growth", and you'll see plenty of skeptical comments. I'd say the audience has skewed pretty far from all the VC stuff.


That’s unfair. The issue is poor governance. Why would anybody outside OpenAI care how much money they make? The fact is a lot of people now rely in one way or another on OpenAI’s services. Arbitrary and capricious decisions affect them.


I’m sorry, how is OpenAI going to pay for itself then? On goodwill and hopes?

Please get real.


My best guess is they turn off the commercial operations that are costing them the most money (And that they didn't want Sam to push in the first place) and pump up the prices on the ones they can actually earn a profit from and then try to coast for awhile.

Or they'll do something hilarious like sell VCs on a world wide cryptocurrency that is uniquely joined to an individual by their biometrics and somehow involves AI. I'm sure they could wrangle a few hundred million out of the VC class with a braindead scheme like that.


No, to continue the poker metaphors, that's taking your chips and going home, perhaps to create your own casino with blackjack and hookers (h/t to Bender).

"Table stakes" simply means having enough money to sit at the table and play, nothing more. "Having a big pile of GPUs is table stakes to contest in the AI market."


Are you saying that Sam has the ability to generate new contracts when you say take contracts with him, or do you think that somehow the existing contracts with Microsoft and other investors are tied to where he is?


I'd say so. Or he'd bring Satya with him.


But it isn't a business at heart, given its structure. Commercially I agree that Sam's position is superior, but purely focusing on the non-profit's mission (not even the non-profit itself) - not so sure.


Such as?


Talent following

Financial backing to make a competitor

Internal knowledge of roadmap

Media focus

Alignment with the 2nd most valuable company on the planet.

I could go on. I strongly dislike the guy but you need to recognize table stakes even in your enemy. Or you'll be like Ilya: a naive fool who is going to get wrecked thinking that doing the "right" thing in his own mind automatically means you win.


From everything we can see Ilya appears to be a true believer.

A true believer is going to act along the axis of their beliefs even if it ultimately results in failure. That doesn't necessarily make them naive or fools - many times they will fully understand that their actions have little or no chance of success. They've just prioritized a different value than you.


That's fair, but by messing this up OpenAI may well end up without any oversight at all. Which isn't the optimum outcome by a long shot and that's what you get for going off half-cocked about a thing like this.


Assuming he sees OpenAI spinning out of control either way, it's probably better to have tried to change it and, if it fails, to at least not be part of the problem.


I think that could have been done more graciously. And there are other drivers still on the table other than good governance, a good old palace revolution in disguise is definitely not ruled out at this point.


Agree but I see that as potato potahto. Failure by a different name with imaginary wins by the delusional ethicist.


Ilya IS the talent. They were desperate to hire him.


I’ve been in his shoes at a smaller level. Once the company believes they have a stable sellable product they have no interest in any new breakthroughs. His table stakes are gone and Microsoft probably believes gpt4 turbo will be billable for years to come.


lmao


[flagged]


[flagged]


I think it's even simpler than that. They're engineers; they're used to reading the "specifications" and then assuming that's how things work. In this case they read whatever a 501(c) non-profit is and can now claim they understand exactly how it works because they read the spec.


I am probably close to an engineer like you describe them.

But the managers you describe are, to repeat your words, so tiresomely dumb that they think they could have a sustainable business without such engineers.

I have left seemingly successful tech companies, and 6 months to 2 years later they were bankrupt. I won't claim it was just because of me leaving, but because enough engineers got fed up with management and their originally great product deteriorated quickly.

The advantage for me as an engineer is that I value a creative working environment over getting rich and powerful.


What you are describing is real but the difference is between simple disaggregated power vs singular focused power. Over and over singular and focused tends to win in an established game where money is already being made.

If you are pre revenue you are all just larping top to bottom. Which is fine I’m there now.


> But their ideas on how the world works are just so tiresomely dumb.

Yeah, it's really quite difficult to read some of these comments.


Then the board essentially owns the company, if I understand your comment correctly. So it’s like if Yann LeCun says he’ll come back to Meta once Zuck sells all his shares


There’s no owners. No ownership interest to sell. The board answers to the courts.


Sort of. The board also answers to two other groups:

• Employees

• Donors or whoever is paying the bills

In this case, the threat appears to be that employees will leave and the primary partners paying the bills will leave. If this means the non-profit can no longer achieve its mission, the board has failed.


It's possible that the failure occurred at some point in the past. If the board truly believes keeping Altman is inherently incompatible with achieving their charter, they have to let him go. The fallout from that potentially kills the company, but a small chance of achieving the charter is better than no chance.

If that's the case, then the failing would be in letting it get to this point in the first place.


It would be a coup if the board placed themselves in power, which they didn't. They just did their job description.


Sam and Greg were part of the board apparently, so definitely a coup (we can debate for hours whether it's a coup or not, but come on, imagine the scene being played in a movie and not being played as a coup).

Another way to think about these is that companies are basically small countries.


So who governs the board? Or who "owns" the company?


First, the for-profit subsidiary is fully controlled by the OpenAI Nonprofit. We enacted this by having the Nonprofit wholly own and control a manager entity (OpenAI GP LLC) that has the power to control and govern the for-profit subsidiary.

Second, because the board is still the board of a Nonprofit, each director must perform their fiduciary duties in furtherance of its mission—safe AGI that is broadly beneficial. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to this mission. The Nonprofit’s principal beneficiary is humanity, not OpenAI investors.

Third, the board remains majority independent. Independent directors do not hold equity in OpenAI. Even OpenAI’s CEO, Sam Altman, does not hold equity directly. His only interest is indirectly through a Y Combinator investment fund that made a small investment in OpenAI before he was full-time.

https://openai.com/our-structure


Can you explain the third point a little more? If Altman has no meaningful economic interest in the company, what was his motivation for being involved at all? Why did he choose to spend his time this way?

I'm aware that Altman has made the same claim (close to zero equity) as you are making, and I don't see any reason why either of you would not be truthful, but it also has always just seemed very odd.


> If Altman has no meaningful economic interest in the company, what was his motivation for being involved at all? Why did he choose to spend his time this way?

Not everything is about money. He likely just likes the idea of making AI.


Or be the public face of making the AI along with the power and control from that.


Ideology


This is the weirdest company equity structure I've ever heard of.

No wonder this is causing drama.


And then there's IKEA


And Novo Nordisk, Rolex, Heineken, Bose, and the NFL.


Mozilla is somewhat similar, it's a non-profit that owns a for-profit entity.


And given that Firefox isn't exactly gaining market share, you can see how well that works for them.


A non profit likely doesn’t prioritize growth.


At least Mozilla didn't try to abuse Congress to achieve regulatory capture.


That may be so, but it also probably shouldn't let its flagship fundraising entity wither.


No one governs the board of a nonprofit, exactly. In this case, it sounds like Sam and his allies are trying to exert pressure on the board by threatening crippling resignations. This puts the board in the position of choosing between pursuing its mission without certain employees, or pursuing business plans that do not align as well with its mission, but with the full complement of employees.

It's a tricky situation (and this is just with a basic/possibly-incorrect understanding of what is going on). I'm sure it's much more complicated in reality.


In a 501(c)(3) I think the board is the top. From what I understand they’re usually funded through grants that have requirements that need to be met for each disbursement. If you fail, the money stops, but there’s no “firing” the board; they just stop getting funds.


OpenAI isn’t a nonprofit company, and it has shareholders.

Edit: nvm I missed the point was about firing the board.


The entity of which they are the board does not have shareholders and, unless there's something funky in the charter, there's no mechanism to fire members of the board (other than board action). The shareholders of the llc aren't relevant in this context, as they definitely can't fire the nonprofit's board (the whole point of their weird structuring). https://openai.com/our-structure


> The shareholders of the llc

Pedantic, but: LLCs have "members", not "shareholders". They are similar, but not identical relations (just as LLC members are similar to, but different from, the partners in a partnership).


From what I understand, the for-profit OpenAI is owned and governed by the non-profit OpenAI. The board of the latter are the ones who fired him.


> From what I understand, the for-profit OpenAI is owned and governed by the non-profit OpenAI.

That's functionally true, but more complicated. The for profit "OpenAI Global LLC" that you buy ChatGPT subscriptions and API access from and in which Microsoft has a large direct investment is majority-owned by a holding company. That holding company is itself majority owned by the nonprofit, but has some other equity owners. A different entity (OpenAI GP LLC) that is wholly owned by the nonprofit controls the holding company on behalf of the nonprofit and does the same thing for the for-profit LLC on behalf of the nonprofit (this LLC seems to me to be the oddest part of the arrangement, but I am assuming that there is some purpose in nonprofit or corporate liability law that having it in this role serves.)

https://openai.com/our-structure and particularly https://images.openai.com/blob/f3e12a69-e4a7-4fe2-a4a5-c63b6...


Check again.


I think this article represents a tactical press release from Sam’s camp. Company in “free fall” without Sam? It’s not even Monday yet.


Yeah, this whole thing looks staged. Not saying it's not possible, but what kind of board would actually fire the CEO and then walk it back and resign?


The board that has been threatened with being sued, individually and collectively, by some of the most well known names in IT. They're probably wondering how they can get out of this with their reputations and egos in one piece. You may have the legal authority to do something, but if you don't have the support (or worse: if you haven't checked that you have the support) then it's not exactly the best move.


It's also worth remembering that Sam Altman is also seeking to get out of this with his reputation and ego in one piece. Definitely in his interest to be able to portray the board as coming crawling back to him after kicking him out the door, even if that is, well, less than candid communication of what has happened.

And the evidence we've seen so far doesn't rule out the possibility that the board isn't seriously considering taking him back at all. The statements we've seen are entirely consistent with "there was a petition to bring him back sent to the board and nothing happened after that."


Yes, that is correct.


I like to believe they actually did their homework and thought this through. We also don't have the full story so it's hard to say.


I read somewhere that the CTO wasn’t at all the best pick for interim CEO, but they couldn’t find anyone else that was in their camp in a hurry. Nothing about this looks like they did their homework and thought this through. If they _had_ done those things, MSFT wouldn’t be as pissed as they are right now.


Where did you read that? That's interesting and would be one more proof point that they did this completely unprepared.


They are a small board, and Microsoft has a very large number of lawyers.

I do not believe it is possible for them to have thought this through. I believe they'll have read the governing documents, and even had some good lawyers read them, but no governance structure is totally unambiguous.

Something I'm immensely curious about is whether they even considered that their opposition might look for ways to make them _criminally_ liable.


I don't see any ways in which they could be held criminally liable for just voting their conscience, and good luck verifying that. So that angle is not open for exploration as far as I can see. But what would scare the wits out of any board members is to have say the full power of Microsoft's legal department going after them for the perceived damages with respect to either Microsoft's stock price (a publicly traded company, no less) or the value of Microsoft's holdings in OpenAI.

And, incidentally, if there is a criminal angle that's probably the only place you might possibly find it and it would take the SEC to bring suit: they'd have to prove that one or more of the board members profited from this move privately or that someone in their close circle profited from it. Hm. So maybe there is such an angle after all. Even threatening that might be enough to get them to fold, if any of them or their extended family sold any Microsoft stock prior to the announcement they'd be fairly easy to intimidate.


> But what would scare the wits out of any board members

Don't you think the board must have sought legal counsel before acting? It is more likely than not that they checked with a lawyer whether what they were doing is within their legal rights.

I don't think OpenAI board has any responsibility to care for Microsoft's stock price. Such arguments won't hold water in a court of law. And I don't think the power of Microsoft's legal department would matter when there's no legal basis.


> Don't you think the board must have sought legal counsel before acting?

They probably should have, but they may have not.

> It is more likely than not that they checked with a lawyer whether what they were doing is within their legal rights.

It is. But having the legal rights to do something and having it stand unopposed are two different things and when one of the affected parties is the proverbial 900 pound Gorilla you tread more than carefully and if you do not you can expect some backlash. Possibly a lot of backlash.

> I don't think OpenAI board has any responsibility to care for Microsoft's stock price.

Not formally, no. But that isn't what matters.

> Such arguments won't hold water in a court of law.

I'll withhold comment on that until I've seen the ruling. But what does and does not hold water in a court of law unless a case is extremely clear cut isn't something to bet on. Plenty of court cases that have been won because someone managed to convince a judge of something that you and I may think should not have happened.

> And I don't think the power of Microsoft's legal department would matter when there's no legal basis.

The idea here is that Microsoft's - immense - legal department has the resources to test your case to destruction if it isn't iron-clad. And it may well not be. Regardless, suing the board members individually is probably threat enough to get them to back down instantly.


There is a difference between "my lawyers advised me that it was probably ok" and "Microsoft's legal team spent 100,000 billable hours poring over case law to demonstrate that it was not, in fact, ok."

> I don't think OpenAI board has any responsibility to care for Microsoft's stock price.

They control an entity that accepted $10B from Microsoft. Someone signed that term sheet.


For such a basic action as a board exercising one of the most fundamental of its rights, I don't think it's necessary to spend 100K hours. And I don't think the board consulted random lawyers off the street.

> Someone signed that term sheet.

Do you think that the term sheet holds OpenAI liable for changes in Microsoft's stock price?


The board folded.

> Do you think that the term sheet holds OpenAI liable for changes in Microsoft's stock price?

There’s nothing binding on a term sheet.


> There’s nothing binding on a term sheet.

The confidentiality part and the 'no shop' part of a terms sheet are definitely binding and if you break those terms you'll be liable for damages.


I agree that the act of voting itself is too squishy/personal, but the things that led up to it and their handling afterwards?

My curiosity stems from whether the board was involved in signing the contract for Microsoft's investment in the for-profit entity, and where the state might set the bar for fraud or similar crimes. How was the vote organized? Did any of them put anything in writing suggesting they did not intend to honor all of the terms of the agreement? Did the manner in which they conducted this business rise to the level of being criminally negligent in their fiduciary duty?

I feel like there are a lot of exciting possibilities for criminality here that have little to do with the vote itself.

... and also +1 to your whole last paragraph.


I've had a case in Germany that for an outsider may have looked like we should have lost it. In a nutshell: we took on a joint-venture to develop a market that we weren't particularly interested in, 51:49 to their advantage. The day after the ink was dried and we had set up their development track to create the product they took the source code and sold it to another party.

We had the whole thing - including the JV - reversed in court in spite of them having the legal right to do all this. The reason: the judge was sympathetic to the argument that apparently the JV was a sham created just to gain access to our code. Counterparty was admonished, a notary public that had failed their duty to act as an independent got the most thorough ear washing that I've ever seen in a court and we got awarded damages + legal fees.

What is legal, what you can do and what will stand up are not always the same thing. Intent matters. And what also really matters is what OpenAI's bylaws really say and to what extent the non-profit's board members exercised their duty to protect the interests of the parties who weren't consulted and who did not get to vote. This so-called duty of care - here in NL, not sure what the American term is - can weigh quite heavily.


It could be. But I've yet to see any evidence of that. More likely it wasn't, because short of a massive skeleton in a cupboard in Sam Altman's apartment this was mishandled, and by now I would have expected that to come out.


> I've yet to see any evidence of that.

What evidence were you expecting to find? The board said that Sam wasn't candid with his communication. I've yet to see any evidence that he was candid. Unless the communication has been recorded, and somehow leaks, there won't be any evidence that we can see.


I suspect that if that evidence existed we'd have seen it by now because without it the board looks like incompetents.


> But Altman is just another employee

Except he is not. He was a cofounder of the company and was on the board. Your metaphor doesn't make any sense -- this is like if your boss fired you, except you were also part of your boss, and your cofounder, who is on your side, was the chair of your boss.


I think the allure for Altman though would be that OpenAI already has all the pieces in place.

Going off and starting his own thing would be great, but it would be at least a year to get product out, even if he had all the same players making it. And that's just to catch up to current tech


That ship has sailed for him if he's not on the OpenAI train out of town. He'd be like a third party political candidate if he tried another run at it, building his own team and product from scratch. Lots of other great things to do for sure, but probably not a similar supercharged role. It just wouldn't be the same - OpenAI is clearly the front runner right now.


What if OAI's entire research organization follows him? Surely it's one of the best teams working today.


It's still tough. They won't have the data used to train the model, which is an incredibly important part. There's a lot of existing competitors in this space with headstarts. There's no guarantee that the entire research organization will follow Sam even if they leave OpenAI - they're going to have a lot of offers and opportunities at other companies that have an advantage.

It's also not clear that this is a realistic scenario - Ilya is the real deal, and there's likely plenty of people that believe in him over Altman.

Of course, the company has also expanded massively under Altman in a more commercial environment, so there are probably quite a few people that believe in Altman over him.

I doubt either side ends up with the entire research organization. I think a very real possibility is both sides end up with less than half of what OpenAI had Friday morning.


The thing is, they can team up with people that probably already have that data. Say Microsoft switches teams to a hypothetical SamCo AI: most of the internet has already been indexed by Bing and wants to be indexed by Bing, as it's the number 2 search engine. That means they either have cached copies of, or access to, pretty much everything SamCo could want to feed said AI. Reddit or Twitter, for example, would never cut Bing off, as it would cut off users. Microsoft could, though, block OpenAI from further access to things like GitHub and LinkedIn.


Isn't it also because of OpenAI scraping the internet that companies put the walls up? How else is anyone able to gather training data these days?


Generally speaking for a base model this isn't nearly as important as it sounds because the specifics of the data don't matter as long as there's enough of it. You may remember this from high school as the central limit theorem.

For specific things like new words and facts this does matter, but I think they're not in real trouble as long as Wikipedia stays up.


Why would the entire org follow Sam instead of Ilya?


Sounds like wishful thinking on the part of the author's source.

If I worked there, I would keep my job and see how things shake out. If I don’t like it, then I start looking. What I don’t do is risk my well being to take sides in a war between people way richer than me.


That makes good sense, and I think everyone who isn't already independently wealthy - except personal friends of either Sam or the high-level remainers - is going to do something quite similar. It's just too fluid a situation to make good decisions, especially if your livelihood is at stake; better not to make decisions that can't be easily undone.


Given that the total comp package is $300k base + $600k profit share, I don’t think any of their livelihoods are at stake. https://news.ycombinator.com/item?id=36460082

You’re probably right because people usually don’t have an appetite for risk, but OpenAI is still a startup, and one does not join a startup without an appetite for risk. At least before ChatGPT made the company famous, which was recent.

I’d follow Sam and Greg. But N=1 outsider isn’t too persuasive.


OpenAI isn’t a normal startup. It was founded as a research-focused not-for-profit. That $300k+ base comp isn’t what I’d consider “risky” either. Career-wise it never seemed risky, as some of the field's top AI researchers were there from almost day one.


> I’d follow Sam and Greg.

Once the avalanche has stopped moving that's a free decision, right now it could be costly.


Maybe much longer. The mass of infrastructure and data housed at OpenAI will be difficult to reproduce from scratch.

Especially considering OpenAI has boosted the value of the masses of data floating around the internet. Getting access to all that juicy data is going to come at a high cost for data hungry LLM manufacturers from here on out.


Except building something the second time around is often quicker and with the current gains of hardware capabilities and interest in the space… maybe it wouldn’t be a year behind.


There are also a ton of ~first mover advantages you can't benefit from, be they of untapped markets for demand or the exploitation of underpriced labor, capital, or IP. If Sam started a new company he would not get as good a deal on hardware or labor, he would get much more scrutiny on his training sets, and he would have to compete against both OpenAI and its competitors.


For sure. Getting ahead and staying ahead is one of them.

I’m just not sure it would be totally starting from scratch since there is more of a playbook and know how.


I agree. Anthropic and Mistral are good examples. Both companies have key people from OpenAI and they fairly quickly developed good models, but I don’t think either are thinking too hard about real AGI, but instead are trying to create useful and ethical tools.


If only OpenAI open-sourced its models.....


I would be surprised but not shocked if there’s some leaks in the next few weeks.


Further, wouldn’t they be unable to recreate GPT-x exactly as it was, even though they know how it was built?


Not only that, but people greatly underestimate how hard it is to replicate the success OpenAI had. You don't just build another one.


I’m really curious about how the venture investors feel about that


I'm curious about how the messaging and zeitgeist will evolve. Over the past few months, the sentiment I encountered most frequently is that OpenAI's lead is insurmountable and that it basically has a monopoly on genAI - or even AI in general. While I disagreed with this sentiment because there's no reason to believe LLMs are the final word in AI, I think there will be many more people going back on prior messaging for partisan or astroturfing reasons and saying OpenAI is nothing special.


I’m wondering if the same group of investors would have willingly invested in Sam’s next company (if he doesn’t stay at Microsoft)


The board is getting pressured like so:

The playbook, a source told Forbes would be straightforward: make OpenAI’s new management, under acting CEO Mira Murati and the remaining board, accept that their situation was untenable through a combination of mass revolt by senior researchers, withheld cloud computing credits from Microsoft, and a potential lawsuit from investors. https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...


I don't see how Sam can return if the board doesn't resign. It's either them or him at this point.


Hard to reconcile with people who would do something like that.

Differences in interpretation will happen, but the YC rule that founder drama is too often a problem continues to hold, and it shouldn’t be a surprise.


What rule is this?


I'm not sure what rule the OP is referencing but otherwise reasonably successful start-ups often fail because founders clash on key parts of their vision (or behave in toxic ways towards each other or to other people in the company). This can very handily wreck your company or at a minimum substantially damage it.


Rule was a typo, I meant observation.

Specifically, cofounder strife is one of the major issues of startups that don’t get where they could.

If I recall, it was Jessica Livingston’s observation.


Rule or observation doesn't matter all that much (it's a shade, after all) and the whole idea lines up with my personal experience.


> This makes no sense. People are calling what the board did a coup, but Altman is trying (failing?) to stage a coup.

I think he staged his coup long ago when he took control of OpenAI, making it “CloseAI” to make himself richer by effectively selling it to Microsoft. This is the people who believe in the original charter fighting back.

> The shareholders can fire the board, but that’s not what he’s asking for.

There are no shareholders in a non-profit if I’m right. The board effectively answers to no one. It’s take it or leave it kind of deal. If you don’t believe in OpenAI’s mission as stated in their charter, don’t engage with them.


The nature of power relationships at this level is not strictly hierarchical and there's a vast wealth differential here, and Sam is a lot more powerful than any of the board members in many many ways. Everybody who has large amounts of money at stake in this enterprise is going to back Altman. The board has no one.


Not to mention, I’d wager that Altman is a lot higher on the sociopathic scale as well. The board members sound like somewhat normalish people trying to stick to their charter and perhaps a genuine belief in the mission. Altman, not so much.


In a coup, a leader with the support of the people is ousted by force. If we believe the reports that there will be mass resignations, that seems to indicate the founders enjoy the “support of the people.”

Of course you can protest, “but in this country the constitution says that the generals can sack the president anytime they deem it necessary, so not a coup.” Yes, but it’s just a metaphor, so no one expects it to perfectly reflect reality (that’s what reality is for).

I feel we’ll know way more next week, but whatever the justifications of the board, it seems unlikely that OpenAI can succeed if the board “rules with an iron fist.” Leadership needs the support of employees and financial backers.


> In a coup, a leader with the support of the people is ousted by force.

Not necessarily. An unpopular leader can be even easier to overthrow, because the faction planning the coup has a higher chance of gaining popular support afterward. Or at least they can expect less resistance.

Of course, in reality, political and/or military leaders are often woefully bad at estimating how many people actually support them.


> The board was Altmans boss - this is pretty much their only job.

Not at all. Ilya and George are on the board. Ilya is the chief scientist, George resigned with Sam and supposedly works like 80-100hrs a week


> supposedly works like 80-100hrs a week

If they've been doing that for a while, no wonder the board wanted them gone. Eventually you cause more work than you put out.


He’s putting in crazy hours because he doesn’t have a formal background in ML—his background is software engineering.

He talks about how learning ML made him feel like a beginner again on his blog (which was a way for him attract talent willing to learn ML to OpenAI) https://blog.gregbrockman.com/its-time-to-become-an-ml-engin...


Not to mention 100 hours not even being logistically possible. Working 100 hours a week with just 5 hours of sleep per day leaves only about 4 hours in the day for the other parts of living and getting from a to b. Anyone claiming that, much less for an extended period of time, is either lying or is in slavery against their will.
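
For what it's worth, a quick back-of-envelope check of that arithmetic (a rough sketch in Python, assuming the 5 hours of sleep per night mentioned above):

  # hypothetical sanity check of a claimed 100-hour work week
  work_per_day = 100 / 7            # ~14.3 hours of work per day, averaged over the week
  sleep_per_day = 5                 # assumed sleep
  remainder = 24 - work_per_day - sleep_per_day
  print(round(remainder, 1))        # ~4.7 hours/day left for meals, commuting, everything else

which roughly matches the "only about 4 hours" figure above.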


My impression is that people don't measure the time they work, but judge it by feel. First, they think that they work for, let's say, 40 hours per week. They don't consider how much time meals, coffee breaks, mental breaks, off-topic office discussions, checking social media, and restroom visits take. Second, when they work overtime, they get tired and overestimate the amount of time they worked. 10 hours of overtime probably feels like 20 hours.

100 hours is equal to 2 full-time jobs and a half time. People believing that number should consider how they would live going to their second job after their day ends (second full-time job) and working on weekends as well (half-time one).

Under ideal conditions, someone might be doing it. But, people shouldn't be throwing around these numbers without any time-tracking evidence.


Agreed. And even if someone is doing it, it isn't something to be proud of. It shows a major gap in management and process and potentially reveals the business to be unviable.


You mean Greg?


I'm pretty sure that's what happened.

Sam and Greg were trying to stage a coup, the rest of the board got wind of it and successfully countered in time (got to them first).

What they didn't expect is that a bunch of their own technical staff would be so loyal to Sam (or at least so prone to the cult of personality). Now they're caught in a Catch-22.


What shareholders? OpenAI is a non-profit. Although hectic, it absolutely makes sense in a non-profit.


The board here are more like advisors.

If Altman takes all of the good engineers and researchers with him, OpenAI is no more.

So the board can be the boss of nothing, sure, without the ability to do anything - leading the organisation, raising funds, and so on

Perhaps they could hire someone that could replace Sam Altman, but that would require a much larger company whose employees are indifferent to the leadership, like EA or something.

OpenAI is much smaller and more close-knit.


> Imagine if your boss fired you - and your response was - I’ll come back if you quit! Yeah, no. People might confuse status with those of actual ceo shareholders like zuck, bezos, or musk. But Altman is just another employee

Think you're missing the big picture here. Sam Altman isn't an "easily replaceable employee" especially given his fundraising skills.


Brilliant as sama is, a star fundraiser is more replaceable than a top engineer.

One can imagine Microsoft, for example, swooping in and acquiring a larger share of the for-profit entity (and an actual seat on the board, dammit) for more billions, eliminating the need for any fundraising for the foreseeable future.

If a lot of top engineers follow sama out, now that's a real problem.


There’s probably a lot of behind-the-scenes drama and phone calls occurring among their top researchers. I’d guess Sam Altman is calling them and trying to gain support for a counter coup. Things like this article give the appearance that Sam et al. have already won, etc. If the board and the new CEO aren’t doing that too, they could end up losing.


Making no effort to obtain a grasp on the basic facts of the situation doesn’t seem to stop people here from posting embarrassing rants.

Altman was on the board. He was not “just another employee.” Brockman was also on the board, and was removed. It was a 4 on 2 power play and the 2 removed were ambushed.

You also don’t seem to realize that this is happening in the nonprofit entity and there are no shareholders to fire the board. I thought OpenAI’s weird structure was famous (infamous?) in tech, how did you miss it?


They even put a nice little page up about it on their site. But that structure is not going to survive this whole ordeal.


The true boss is whoever pays your salaries.

Microsoft in this case.


Doesn’t have to make sense if it’s about this much money to be made/lost by investors.


Nonprofits don't have investors, their problem is too many of their employees are going to leave.


To add to that, because it may not be clear for everyone: if they leave, the knowledge will sprout other companies that will be able to compete directly with OpenAI with different flavors. If this happens, that means OpenAI may well be finished, and that may well be the reason why they are trying desperately to save what they can. Microsoft has a lot to lose here too, both in cloud income and because they would lose the enormous tactical advantage they have had so far.


If only the board et al. could have acted professionally and done some planning and communication before such a drastic decision...


This is so dysfunctional. Does the other half of the company quit now? How can this board remain in place, but who can replace them?


You couldn’t write a better script for this mess


How much would you like to bet that the Murati character is in a love triangle or something like that when the movie comes out? (The character, probably has no bearing on reality).


If you did write this, it wouldn't be believable


Altman was in this from the beginning. They all orchestrated it to curb MSFT somehow, like a loophole that screws MSFT's control but also keeps the GPU deal #speculation

evidence: a. It's OpenAI ffs, they already have advanced enough RL bots that can create 999+ IQ strategies/

b. google meet/

theory: they were training GPT-5, and had some time to clean some under table dust.

again #pure_speculation


I was just thinking they’d want him back. Hope they provide a generous comp package, he could easily drive them out of business.


Maybe all this will teach people that having a weird corporate structure can make everything worse, not better.


Ilya's going to have to leave right?


The board completed the "fuck around" stage, now they're in the "find out" stage.


Very naive of me, but I'm hoping this all means the death knell of OpenAI, personally.


I'd rather not have this group of clowns holding the reins of our technological destiny.


He should demand that they remove "open" from the name and call it SamAIam.


What I'm surprised about in this whole discussion is how little people are actually looking at Altman's Worldcoin venture.

It is incredibly shady, and has the same kind of sci-fi marketing bullshit vibe going on as Elon Musk's hyperloop and Mars missions, and, come to think of it, OpenAI's marketing.

Altman+OpenAI are a hype machine that's riding a bubble to get rich enough through any scheme to be able to turn around and milk users for data, just like facebook and google.

The only difference is, he gets to twist the focus towards this sci-fi AGI idea, which works like the distraction element of a magic trick. The media loves to talk about AGI and regulating Skynet, because it's a story that gets the imagination going --- certainly much more interesting than stories about paying people 2 dollars an hour to sift through data used to train a language model for offensive and traumatizing content to feed to the autocomplete parrot.

I think it's good that he got kicked off the position as CEO, but that does not suddenly make OpenAI a good actor. Some other jerk will take his spot.


Purely speculating, but could there be some sort of 4D chess move going on here? Look at the timeline:

1) The board puts out a press release saying Sam was ousted for not being candid

2) Internally a memo from the COO circulates saying that is not true

3) Greg and other senior folks quit

4) Now he is in a negotiating position to return for a) being fired on a false premise and b) bringing back all the top talent

If someone wanted to restructure the board of this company, they could have fed the other board members false information to put Sam in this negotiating position. It's also strange that a bunch of billionaires voiced support for Sam immediately after the firing without knowing any details.


> 2) Internally a memo from the COO circulates saying that is not true

Got a link? I did miss this nuance.



That link validates the board's claim, does it not? "Not being candid" is a form of communication breakdown.


Well it says he was not fired due to malfeasance, implying he had no wrongdoing.


I took it to mean “calm down, press. There’s no FTX or Theranos style financial crime/ fraud story here to be uncovered.”


"Would want significant governance changes" ah, so maybe he baited it


Clowns are back in town!


If Sam’s back we should all get free OpenAI usage credits for this mess.


Well it's either that or a top-tier competitor emerges in 2 days.


This is ridiculous haha.

Another great example that even huge multi-billion-dollar companies are led by people. What a mess.

However this plays out, this is a big wake up call for everyone who is currently dependent on OpenAI. More changes will be needed to restore trust. It's going to be messy for a while. For a company that has executed pretty much perfectly until now it's so surprising how they just ruined their reputation like this.


What a shit show. I like to imagine that some early stage AGI which escaped its confinement at OpenAI is orchestrating all of this in its own favor.


It's a play to warm the public up to the idea of a more algorithmically managed organization. In Act I, the board jesters stumble around making bad decisions and wavering. In Act II, the embarrassed board welcomes the first AI oversight into the company constitution. ...


Act III, more paper clips


This is certainly making a case for the fallibility of human leadership.


When are they putting GPT on the board?


Just as software devs are allergic to voting machines which run software, I would imagine that ML companies will be the last to adopt ML for their own high-level decision making.


and for good reasons!


Absolutely. Who else is in a better position to understand the true limitations of these complex and failure-prone systems?


Well, it sounds like they took advice from GPT in their execution of the coup.


>> “He doesn’t want to see lives destroyed”

As in, people staying millionaires instead of becoming multimillionaires? Could such life-destruction be brought to the EU, please?


Maybe it means, I'll release the AGI and destroy all life unless you rehire me.


This is going to make an amazing Netflix film in a few years.


Obvious manipulation by Microsoft. It's like 1999 again.


I wonder if it was due to a phone call from Satya Nadella.


is it possible that this drama is being staged on purpose, in order to create some suspense ahead of ChatGPT-NextNum or something like that?


I called this yesterday. I said the board would be forced out under the pressure, and Sam would be back. It was obvious. Even if it is a not-for-profit company, it bows under investor pressure.


Serious question - is Ilya in trouble now?


Seemed kind of strange this is happening.


Why does politics get upvoted here? In other cases, like Ukraine/Russia or Israel/Hamas, all the posts were flagged... why is this case different?


Steve Jobs speedrun, any % , glitchless.


Hindsight bias is strong with this one.


We're all in the upside-down now.


Even if SamA manages to come back, will future investors be spooked, leaving him unable to raise a large round of financing?


There are no future investors. If OpenAI can't bootstrap from 11 billion + profits, they won't be able to.


Make up your minds for Pete’s sake


I hope he'll make a new one!


Clownshow. Holy shit, this is what happens when stupid people get put into positions of power and money and have no idea how things actually work. Fucking non-profits are the worst to deal with.


I think this is a bit harsh. Lack of experience is not necessarily stupidity.


This just shows there is no way you can have a non-profit board with a profit-cap structure. The capitalists will always push through and "exert pressure" one way or another if they want their way. The non-profit setup was a facade, and this has clearly shown it in the fallout. The board had every right to veto or replace Altman if they didn't feel he was prioritizing their mission.


Someone on the OpenAI board watched too much Succession and thought that's how real life works.


Just imagine when AGI arrives and these incompetent people are in charge.


This time.. with shares.


Altman will come back.

He'll sack the board.

He'll sack Ilya.

He'll change the structure of the organisation completely.


Democracy could have prevented this shitshow. The board could have asked the employees if they are ok with the direction of the company and then simply act according to the outcome of the vote. But companies are always run like dictatorships, leading to mostly terrible decisions disregarding the wisdom of the crowd.


This had to be fake


Televisa presents


What if he wants to fire all of them? Or take all the power?


maybe the board was just dogfooding ...


Failed coup.


Has anyone asked ChatGPT for a solution?


lol if I was Sam I wouldn’t even dream of returning without the entire board going out the door as I come back in. What a shit show.


All this drama over a parrot.


Are you fucking kidding me?

OK. OK. I’ve said this my whole career.

Engineers are the most emotional species of worker. There is a grand delusion that engineers are rational.

This just goes to show how irrational they are. Snap reactions like this: sign of a brilliant but fucked up engineer.

I am an engineer. I am under no illusion that I’m rational. Quite the opposite.


The board had other members beside Ilya. Mostly it shows how incompetent the board was.


Ilya was the driver. The others are no name dopes. Total pawns who don’t count.


While you obviously seem to care more about business experience than other things, it seems odd that you would discount Adam D'Angelo even from your point of argument - The former CTO/VP of Engineering for Facebook and founding CEO of Quora isn't exactly a no-name dope.


His company competes with for profit OpenAI, and it is in his best interest to fuck up OpenAI's leadership.


I mean the other two. And no offense to other no-name dopes. Of course, I’m being context sensitive. In the scheme of this scenario, they’re nothing but votes up for manipulation. In regular life, they’re important in their respective fields. Yada, yada.


Well he’s not a no name.


Even total pawns on corporate boards know better than this.


What was top comment yesterday becomes a farce today https://news.ycombinator.com/item?id=38313026

> Don’t piss off an irreplaceable engineer or they’ll fire you. not taking any sides here.

One scientist's power trip (Ilya is not an engineer) triggers the power fantasy of the extremely online.


It’s of Shakespearean proportions. I’m learning along with the rest how these things unfold!


Rationality is relative.

And I reckon that you haven't met many C-suite people.


It wasn't engineers that did this bullshit, it was one scientist, an ethics researcher, an mba, and an art school grad.


Yes, clearly all categories of loathsome lower castes, worthy of your disdain.

VS what, a Stanford dropout who made buds with Paul Graham? That's better and more respectable because he's cooler and connected with YC/VC hipness, right?

WorldCoin is So Awesome!


I'm responding to a post by a self-hating engineer that seems to think that this specific debacle was caused by engineers. No, it was caused by non-engineers. And while yes, I do have disdain for people that spend thousands to study skills that contribute nothing to their fellow man or woman, I don't think scientists or artists are those things. (MBA and ethicist are definitely masturbatory, self-important bullshit.)

The real reason I disdain the majority of the board of openai is that there are clearly 3 people on the board that have accomplished nothing and are clear trust fund babies.


I suspect the root-commenter was speaking about Ilya S, and it's a bit quibbling to classify him as "scientist" vs "engineer." He clearly crosses the line between both.

But yes, the comment was a bit unhinged.

Don't really see the difference between an MBA and whatever it is that Altman does, though, other than credentials.

Finally, that you think that ethicist (or the study of ethics) is masturbatory, especially in the context of an organization that has as its explicit mission to hoist AGI onto the world -- tells me quite a bit about your own... ethics.

World could do with a lot more ethicists and a lot less MBAs.


Thinking about life in such narrow group identities is a failure of imagination in my view.


Simplify, simplify, simplify. Nuance is a low value edge case.


You know, I always found Mike Judge to probably be the most prescient of all the science fiction authors.

In Office Space, Idiocracy, and most relevant here in Silicon Valley, he accurately and very precisely not only forecasts, but deconstructs the reasoning and vapid lack of core philosophy behind each of the real-life narratives he’s parodying.

That serious people still consider Silicon Valley as some kind of thing to aspire to is horrifying. This despite repeated examples of predictably base incompetence, lack of maturity and quite frankly avaricious opportunism as the kernel on which SV lies.


When Michael Lewis wrote the book about Wall Street, 'Liar's Poker' , he got calls from parents of college-bound kids, not about how to avoid Wall Street, but about how to better prepare for getting in to Wall Street...


Rich people heard about a book and didn't bother to read it before calling.

I've heard this lore before, and it's the only way I can make sense of it.


I recognize the brilliance of SV (the show) but couldn't make it past a few episodes because it was just too painfully close to home.

Now that I work for a non-tech, non-SV company (an agricultural equipment manufacturer, in fact), and have some distance from the real-world SV, maybe I could watch it without my skin crawling.


I found the episodes legitimately stressful to watch. "Too close to home" is exactly right. I grimaced through them though. And nowadays I do enjoy watching the "best of" clips on YouTube, or linking my coworkers to whatever scene from the show is relevant to the current meeting...


Or real life is just messy but most people don’t know it because they willfully ignore it and just keep their head down living their boring middle class life.

Judge is no prophet; he just communicated what the rest of us already knew.


Maybe most things are not messy, and only the messy things are exciting enough for you to not call them ‘boring middle class lives’?


No, that’s how I know you’ve never been an executive. It’s constantly messy and personal AF. It’s just the workers below who don’t see it and cash in their checks. You live a stable life because you are shielded from the chaos to keep you productive.


You have a hilariously significant misunderstanding of what my work looks like. Maybe my standards for what constitutes ‘normal’ are just skewed.

Though it’s fair to say I’ve never been an executive.


What you're not realizing is that it's like this everywhere. It's not exclusive to SV. Politics is part of human life. Adults are just kids with money.


SV can afford to pay a lot more journalists writing puff pieces about how great they are.


Right. So if adults are just kids with money, SV is full of kids with even more money.


The difference between some adults and toddlers is that we've written off the adults as unfixable.


And yet an enormous amount of innovation and wealth is created in Silicon Valley.


And once upon a time Paris was the Prime Meridian and the British Empire was the dominant super power that spanned the globe.

It's worth asking how rapidly, say, a global financial hub can transfer from one location to another, how quickly a centre of excellence can transfer, how many years it takes for the world's best space scientists to move out of Germany, etc.

Does Silicon Valley have a tipping point?


Silicon Valley is really a great show. Working in tech, I often think of it, especially whenever there's a new great idea from management that nobody saw coming and it feels like the beginning of a new episode.


Did you intentionally omit Beavis and Butt-Head from your list of his iconic works? They captured the mood of a generation.


SV... Like high tech companies? Venture capital? The TV show? What do you mean by that?


The TV show. They were talking about filmmaker Mike Judge's work.


Mentioned in the same sentence as Idiocracy and Office Space, GP is referring to the show.


Oh, well SV is a comedy.

This is the first time I've heard it being aspirational.


It’s a satire. Maybe not on Orwell or Vonnegut levels, but definitely a lot more than just a comedy.


Assuming the show since the reference is to Mike Judge, and he’s the creator of the SV tv show.


Im assuming they mean Silicon Valley the TV show [1]

[1] https://en.wikipedia.org/wiki/Silicon_Valley_(TV_series)


This is such a weird take.

You do understand the whole point of silicon valley is the chaotic lack of maturity?

You cannot be staid and conservative and mature (and non-opportunistic)…and also be successful at creating new and interesting stuff.

If you find instability “horrifying,” might I suggest a job in banking or the federal government instead?


Why would investors trust this person? Is it because he’s a little bit sociopathic, just like them?


You can dislike Sam or disagree with his decisions, but he's not a sociopath.


Based on his public statements and his pattern of behaviour, he is. Probably even worse. (As a CEO; I don't know the person.)


Do you have some examples?


> reach a truce where the board would resign and he and Brockman would return.

Calling that a truce makes as much sense as Monty Python’s Black Knight calling the fight a draw.

https://www.youtube.com/watch?v=ZmInkxbvlCs


Man that just screams chaos and disorder. I wonder what it's like right now in the internal chats.


Great training data


OpenAI internal chat is likely just spam of the Bill-Hader-eating-popcorn GIF.


[flagged]


Ah yes, it is well known that no other individual sitting on the board of directors of any US company has ever made sudden and drastic decisions ever before. Totally.


I disagree with what he did in this situation. But it's complete bullshit to make this about his ethnicity or to imply he had anything but good intentions here. By all accounts, he did what he felt was right for the safety of the world, even if I think it was misguided.


You’ve never been snaked in a corporation before?


And replace him with whom?


What? Did I miss something?


First they should make sure by default no user data is used for training purposes.


What does that have to do with any of this?


A person who doesn't care about privacy isn't likely to care about security. And it seems security was the main factor in what we're going through right now.


Security and safety here seem to be a mixture of technocratic naivety and PR. Like there was "don't be evil".


I can't know, but I assume this is the reason why they got rid of Sam. When I listened to him talking, I didn't believe he took security seriously. I assume his board colleagues also noticed this, based on him pushing for new releases and maybe other things.


None of the news that has come out has indicated that was the case


Yes, it's an assumption.


Conspiracy time: Is it possible this was all orchestrated? He was fired, causing mistrust in the governance structure, which is part of the remaining open source portion of “open” ai. Then he gets rehired with significant governance changes, thus removing any last vestiges of real power of the open source portion of the company.

Lol


Even though the country was against you, you should come back and lead it to freedom.

If the entire board can be replaced, then Sam should come back. Even though he can build a new company tomorrow. A leader will come back to lead the best shot anyone has gotten in AI development so far.

This was a power grab and it did not work. Not coming back is letting four people derail history and slow the development of AGI.


> A leader will come back to lead the best shot anyone has gotten in AI development so far.

What am I missing here? There's a handful of companies tweaking weights and optimising infrastructure usage. Won't LLMs naturally advance over time?



