OpenAI staff threaten to quit unless board resigns (wired.com)
1441 points by skilled 10 months ago | 1260 comments



All: this madness makes our server strain too. Sorry! Nobody will be happier than I when this bottleneck (edit: the one in our code—not the world) is a thing of the past.

I've turned down the page size so everyone can see the threads, but you'll have to click through the More links at the bottom of the page to read all the comments, or use links like these:

https://news.ycombinator.com/item?id=38347868&p=2

https://news.ycombinator.com/item?id=38347868&p=3

https://news.ycombinator.com/item?id=38347868&p=4

etc...


If they join Sam Altman and Greg Brockman at Microsoft they will not need to start from scratch because Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

Also keep in mind that Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits.

So this could end up being the cheapest acquisition for Microsoft: They get a $90 Billion company for peanuts.

[1] https://stratechery.com/2023/openais-misalignment-and-micros...


This is wrong. Microsoft has no such rights and its license comes with restrictions, per the cited primary source, meaning a fork would require a very careful approach.

https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...


But it does suggest how a sudden motive might have appeared:

OpenAI implements and releases GPTs (a Poe competitor) but fails to tell D’Angelo ahead of time. Microsoft will have access to code (with restrictions, sure) for essentially a duplicate of D’Angelo’s Poe project.

Poe’s ability to fundraise craters. D’Angelo works the less seasoned members of the board to try to scuttle OpenAI and Microsoft’s efforts, banking that among them all he and Poe are relatively immune with access to Claude, Llama, etc.


I think there's more to the Poe story. Sam forced out Reid Hoffman over Inflection AI, [1] so he clearly gave Adam a pass for whatever reason. Maybe Sam credited Adam for inspiring OpenAI's agents?

[1] https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...


I think it’s more likely that D’Angelo was there for his link to Meta, while Hoffman was rendered redundant after the big Microsoft deal (which occurred a month or two before he was asked to leave), but that’s just a guess.


I assume their personal relationship played more of a role, given Sam led Quora's Series D round.


And potentially, despite Quora's dark-patterned and degenerating platform, some kind of value in the Quora dataset or the experience of building it?


It literally is a Q&A platform.

Quora data likely made a huge difference in the quality of those GPT responses.


GPT-4 is better than most Quora experts. I hope this was not a critical dataset.



This is MSFT we're talking about. Aggressive legal maneuvers are right in their wheelhouse!


Yes, this is the exact thing they did to Stacker years ago. License the tech, get the source, create a new product, destroy Stacker, pay out a pittance and then buy the corpse. I was always amazed they couldn't pull that off with Citrix.


Another example: Microsoft SQL Server is a fork of Sybase SQL Server. Microsoft was helping port Sybase SQL Server to OS/2 and somehow negotiated exclusive rights to all versions of SQL Server written for Microsoft operating systems. Sybase later changed the name of its product to Adaptive Server Enterprise to avoid confusion with "Microsoft's" SQL Server.

https://en.wikipedia.org/wiki/History_of_Microsoft_SQL_Serve...


Given the sensitivity of data handled over Citrix connections (pretty much all hospitals), I'm fairly sure Microsoft just doesn't want the headaches. My general experience is that service providers would rather be seen handling nuclear weapons data than healthcare data.


> Citrix [...] hospitals

My stomach just turned.


Yeah, it's bad. But it's also why Microsoft can't really roll them over. They actually do something and get paid for it, as horrible as it is.


As someone who is VP of IT in healthcare, I can understand that sentiment. At least fewer people need access to nuclear secrets, while medical records are simultaneously highly confidential AND needed by many people. It's never dull. :D


Makes sense given their deal with the DoD a year or so ago

https://www.geekwire.com/2022/pentagon-splits-giant-cloud-co...



“Microsoft Chat 365”

Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.


> Although it would be beautiful if they name it Clippy and finally make Clippy into the all-powerful AGI it was destined to be.

Finally the paperclip maximizer


Clippy is the ultimate brand name of an AI assistant


It is too bad MS doesn’t have the rights to any beloved AI characters.


That's fine; building the "core" of an AI assistant that character rights can be laid onto is a bigger business than owning the characters themselves.

Why acquire rights to thousands of favourite characters when you can build the bot underneath and let the media houses that own them negotiate licenses to skin and personalise it?

Same as GPS voices I guess.


I can't tell if they've ruined the Cortana name by using it for the quarter-baked voice assistant in Windows, or if it's so bad that nobody even realizes they've used the name yet.

I've had Cortana shut off for so long it took me a minute to remember they've used the name already.


Google really should have thought of the potential uses of a media empire years ago.


I guess they have YouTube, but it doesn’t really generate characters that are tied to their brand.

Maybe they can come up with a personification for the YouTube algorithm. Except he seems like a bit of a bad influence.


Assuming this is a joke about Cortana.


They already have a name, Copilot. They made that pretty clear by mentioning it 15 times per minute at last week's Ignite conference :)


That name is stupid and won’t stick around. Knowing Microsoft, my bet is that it will get replaced with a quirky sounding but non-threatening familiar name like “Dave” or something.


Yeah maybe Clippy :)


At least in this forum, can we please stop calling something that is not even close to AGI, AGI. It's just dumb at this point. We are LIGHT-YEARS away from AGI; even calling an LLM "AI" only makes sense for a lay audience. For developers and anyone in the know, LLMs are called machine learning.


I’m talking about the ultimate end product that Microsoft and OpenAI want to create.

So I mean proper AGI.

Naming the product Clippy now is perfectly fine while it’s just an LLM, and it will be more excellent over the years when it eventually achieves AGI-ness.

At least in this forum can we please stop misinterpreting things in a limited way to make pedantic points about how LLMs aren’t AGI (which I assume 98% of people here know). So I think it’s funny you assume I think chatgpt is an AGI.


I think that the dispute is about whether or not AGI is possible (at least within the next several decades). One camp seems to be operating with the assumption that not only is it possible, but it's imminent. The other camp is saying that they've seen little reason to think that it is.

(I'm in the latter camp).


I certainly think it’s possible but have no idea how close. Maybe it’s 50 years, maybe it’s next year.

Either way, I think GGP’s comment was not applicable based on my comment as written and certainly my intent.


I am with you. I am VERY excited about LLMs but I don't see a path from an LLM to AGI. It's like 50 years ago, when we thought computers themselves brought us one step away from AI.


It's entirely possible for Microsoft and OpenAI to have an unattainable goal in AGI. A computer that knows everything that has ever happened and can deduce much of what will come is still likely going to be a machine, a very accurate one. It won't be able to imagine a future it can't predict as a possible natural (or man-made) progression along a chain of consequences stemming from the present or past.


Is there a known path from an LLM to AGI? I have not seen or read anything that suggests LLMs bring us any closer to AGI.


We are incredibly far away from AGI and we're only getting there with wetware.

LLMs and GenAI are clever parlor tricks compared to the necessary science needed for AGI to actually arrive.


What makes you so confident that your own mind isn't a "clever parlor trick"?

Considering how it required no scientific understanding at all, just random chance, a very simple selection mechanism and enough iterations (I'm talking about evolution)?


My layperson impression is that biological brains do online retraining in real time, which is not done with the current crop of models. Given that even this much required months of GPU time I'm not optimistic we'll match the functionality (let alone the end result) anytime soon.
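
To make the contrast concrete, here is a minimal toy sketch (mine, not anything from a real deployment) of what an "online" update per interaction would look like in PyTorch; served LLMs skip the update step and answer from a frozen snapshot:

  import torch
  import torch.nn as nn

  # Toy stand-in for a language model; real deployments freeze this.
  model = nn.Linear(16, 16)
  opt = torch.optim.SGD(model.parameters(), lr=1e-3)

  def online_step(x, target):
      # One "learn from this interaction" update, applied immediately;
      # roughly what brains do continuously and served models don't.
      loss = nn.functional.mse_loss(model(x), target)
      opt.zero_grad()
      loss.backward()
      opt.step()
      return loss.item()

  online_step(torch.randn(1, 16), torch.randn(1, 16))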


I'm actually playing with this idea: I've created a model from scratch and have it running occasionally on my Discord. https://ftp.bytebreeze.dev is where I throw up models and code. I'll be releasing more soon.


Trillions of random chances over the course of billions of years.


Why do you think we'll only get there with wetware? I guess you're in the "consciousness is uniquely biological" camp?

It's my belief that we're not special; us humans are just meat bags, our brains just perform incredibly complex functions with incredibly complex behaviours and parameters.

Of course we can replicate what our brains do in silicon (or whatever we've moved to at the time). Humans aren't special, there's no magic human juice in our brains, just a currently inconceivable blob of prewired evolved logic and a blank (some might say plastic) space to be filled with stuff we learn from our environs.


And how do you know LLMs are not "close" to AGI (close meaning, say, a decade of development that builds on the success of LLMs)?


Because LLMs just mimic human communication based on massive amounts of human generated data and have 0 actual intelligence at all.

It could be a first step, sure, but we need many many more breakthroughs to actually get to AGI.


Mimicking human communication may or may not be relevant to AGI, depending on how it's cashed out. Why think LLMs haven't captured a significant portion of how humans think and speak, i.e. the computational structure of thought, and thus represent a significant step towards AGI?


As you illustrate, too many naysayers think that AGI must replicate "human thought". People, even those here, seem to treat AGI as synonymous with human intelligence, but that type of thinking is flawed. AGI will not think like a human whatsoever. It must simply be indistinguishable from the capabilities of a human across almost all domains where a human is dominant. We may be close, or we may be far away. We simply do not know. If an LLM, regardless of the mechanism of action or how 'stupid' it may be, were able to accomplish all of the requirements of an AGI, then it is an AGI. Simple as that.

I imagine us actually reaching AGI, and people starting to say, "Yes, but it is not real AGI because..." This should be a measure of capabilities, not process. But if expectations of its capabilities are clear, then we will get there eventually -- if we allow it to happen and do not keep moving the goalposts.


There is room for intelligence in all three of wherever the original data came from, training on it, and inference on it. So just claiming the third step doesn't have any isn't good enough.

Especially since you have to explain how "just mimicking" works so well.


One might argue that humans do a similar thing. And that the structure that allows the LLM to realistically "mimic" human communication is its intelligence.


Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence." https://g.co/bard/share/a8c674cfa5f4 :

> [...]

> Premise 1: LLMs can realistically "mimic" human communication.

> Premise 2: LLMs are trained on massive amounts of text data.

> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.

"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional

Does it do logical reasoning or inference before presenting text to the user?

That's a lot of waste heat.

(Edit) Or is it just next-word prediction? See:

"LLMs cannot find reasoning errors, but can correct them" https://news.ycombinator.com/item?id=38353285

"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486


Or maybe the intelligence is in language and cannot be dissociated from it.


Yep, the lay audience conceives of AGI as being a handyman robot with a plumber's crack, or maybe an agent that can get your health insurance to stop improperly denying claims. How about an automated snow blower? Perhaps an intelligent wheelchair with robot arms that can help grandma in the shower? A drone army that can reshingle my roof?

Indeed, normal people are quite wise and understand that a chat bot is just an augmentation agent--some sort of primordial cell structure that is but one piece of the puzzle.


I’m pretty sure Clippy is AGI. Always has been.



Gatekeeping science. You must feel very smart.


Lmao, why are so many people mad that the word AGI is being tossed around when talking about AI?

As I've mentioned in other comments, it's like yelling at someone for bringing up fusion when talking about nuclear power.

Of course it's not possible yet, but talking & thinking about it is how we make it possible? Things don't just create themselves (well maybe once we _do_ have AGI level AI he he, that'll be a fun apocalypse).


>They could make ChatGPT++

Yes, though the end result would probably be more like IE: barely good enough, forcefully pushed into everything and everywhere, and squashing better competitors like IE squashed Netscape.

When OpenAI went in with MSFT, it was as if they had ignored 40 years of history of what MSFT does to smaller technology partners. What happened to OpenAI fits that pattern: a smaller company develops great tech and gets raided by MSFT for it. The specific actions of specific persons aren't really important; the main factor is MSFT's gravitational force of a black hole, and it was just a matter of time before its destructive power manifested itself, as in this case, where it tore OpenAI apart with tidal forces.


ChatGPT#


Hopefully ChatGPT will make it easier to search/differentiate between ChatGPT, ChatGPT++, and ChatGPT# than Google does.


dotGPT


Visual ChatGPT#.net


Dot Neural Net


WSG, Windows Subsystem for GPT


ClippyAI


Also Managed ChatGPT, ChatGPT/CLR.


ChatGPT Series 4


ClipGPT


ChatGPT NT


I think without looking at the contracts, we don't really know. Given this is all based on transformers from Google though, I am pretty sure MSFT with the right team could build a better LLM.

The key ingredient appears to be mass GPU and infra, tbh, with a collection of engineers who know how to work at scale.


>MSFT with the right team could build a better LLM

somehow everybody seems to assume that the disgruntled OpenAI people will rush to MSFT. Between MSFT and the shaken OpenAI, I suspect Google Brain and the likes would be much more preferable. I'd be surprised if Google isn't rolling out eye-popping offers to the OpenAI folks right now.


> I am pretty sure MSFT with the right team could build a better LLM.

I wouldn’t count on that if Microsoft’s legal team does a review of the training data.


Like the review that allowed them to ignore licenses while ingesting all public repos on GitHub? And yes, true, the T&C allow them to ignore the license, though it is questionable whether everyone who uploaded stuff to GitHub actually had the rights the T&C require (uploading some older project with many contributors to GitHub, etc.)


Different threat profile. They don’t have the TOS protection for training data and Microsoft is a juicy target for a huge copyright infringement lawsuit.


Yeah, that's an interesting point. But I think with appropriate RAG techniques and proper citations, a future LLM can get around the copyright issues.

The problem right now with GPT4 is that it's not citing its sources (for non search based stuff), which is immoral and maybe even a valid reason to sue over.
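
As a rough illustration of the idea (toy code of my own; a real pipeline would use a vector index and an actual LLM for the final answer), the point is that every retrieved passage keeps an identifier, so generated claims stay traceable to sources:

  # Toy RAG-with-citations sketch; corpus and retriever are stand-ins.
  corpus = {
      "doc1": "Stac Electronics sued Microsoft over Stacker.",
      "doc2": "Sybase originally developed SQL Server.",
  }

  def retrieve(query):
      # Naive keyword overlap standing in for a real vector search.
      words = set(query.lower().split())
      return [(doc_id, text) for doc_id, text in corpus.items()
              if words & set(text.lower().split())]

  def answer_with_citations(query):
      # A real system would hand these passages to an LLM; keeping the
      # [doc_id] tags is what lets the output cite its sources.
      return " ".join(f"[{doc_id}] {text}"
                      for doc_id, text in retrieve(query))

  print(answer_with_citations("who developed sql server"))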


But why didn't they? Google and Meta both had competing language models spun up right away. Why was Microsoft so far behind? Something cultural, most likely.


1. The article you posted is from June 2023.

2. Satya spoke on Kara Swisher's show tonight and essentially said that Sam and team can work at MSFT and that Microsoft has the licensing to keep going as-is and improve upon the existing tech. It sounds like they have pretty wide-open rights as it stands today.

That said, Satya indicated he liked the arrangement as-is and didn't really want to acquire OpenAI. He'd prefer the existing board resign and Sam and his team return to the helm of OpenAI.

Satya was very well-spoken and polite about things, but he was also very direct in his statements and desires.

It's nice hearing a CEO clearly communicate exactly what they think without throwing chairs. It's only 30 minutes and worth a listen.

https://twitter.com/karaswisher/status/1726782065272553835

Caveat: I don't know anything.


Timestamp for "improve upon the existing tech"? I only heard him say they have rights up and down the stack, which sounds different.


Archive of the WSJ article above: https://archive.is/OONbb


"But as a hedge against not having explicit control of OpenAI, Microsoft negotiated contracts that gave it rights to OpenAI’s intellectual property, copies of the source code for its key systems as well as the “weights” that guide the system’s results after it has been trained on data, according to three people familiar with the deal, who were not allowed to publicly discuss it."

Source: https://www.nytimes.com/2023/11/20/technology/openai-microso...


The nature of those rights to OpenAI's IP remains the sticking point. That paragraph largely seems to concern commercializing existing tech, which lines up with existing disclosures. I suspect Satya would come out and say Microsoft owns OpenAI's IP in perpetuity if they did.


> I suspect Satya would come out and say Microsoft owns OpenAI's IP in perpetuity if they did.

Why does he need to do that? He doesn't need to make any such public statement!


To reassure investors? He just made the rounds on TV yesterday for this explicit reason. He told Kara Swisher Microsoft has the rights to innovate, not just serve the product, which sounds somewhat close.


> Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits

To be clear, these don't go away. They remain an asset of OpenAI's, and could help them continue their research for a few years.


"Cluster is at capacity. Workload will be scheduled as capacity permits." If the credits are considered an asset, totally possible to devalue them while staying within the bounds of the contractual agreement. Failing that, wait until OpenAI exhausts their cash reserves for them to challenge in court.


Ah, a fellow frequent flyer, I see? I don't really have a horse in this race, but Microsoft turning Azure credits into Skymiles would really be something. I wonder if they can do that, or if the credits are just credits, which presumably can be used for something with an SLA. All that said, if Microsoft wants to screw with them, they sure can, and the last 30 years have proven they're pretty good at that.


I don't think the value of credits can be changed per tenant or customer that easily.

I've actually had a discussion with Microsoft on this subject when they were offering us an EA with a certain license subscription at $X.00 for Y,000 calls per month. When we asked if they could just make the Azure resource that does the exact same thing match that price point in consumption rates in our tenant, they said unfortunately no. I chalked this up to MSFT sales tactics, but I was told candidly by others who worked on that Azure resource that it was getting zero enterprise adoption because Microsoft couldn't adjust (specific?) consumption rates to match what they could offer on EA licensing.


Non-profits suffer the same fate where they get credits but have to pay rack rate with no discounts. As a result, running a simple WordPress website uses most of the credits.


It’s amazing to me to see people on HN advocate a giant company bullying a smaller one with these kind of skeezy tactics.


Explaining how the gazelle confidently bounding into the oasis is going to get eaten isn't advocating for the crocodiles. See sibling comments.

Experience leads to pattern recognition, and this is the tech community equivalent of a David Attenborough production (with my profuse apologies to Sir Attenborough). Something about failing to learn history and repeating it should go here too.

If you can take away anything from observing this event unfold, learn from it. Consider how the sophisticated vs the unsophisticated act, how participants respond, and what success looks like. Also, slow is smooth, smooth is fast. Do not rush when the consequences of a misstep are substantial. You learning from this is cheaper than the cost for everyone involved. It is a natural experiment you get to observe for free.


This is a great comment. Having an open eye towards what lessons you can learn from these events so that you don't have to re-learn them when they might apply to you is a very good way to ensure you don't pay avoidable tuition fees.


This might be my favorite comment I've read on HN. Spot on.

Being able to watch the missteps and the maneuvers of the people involved in real time is remarkable, and there are valuable lessons to be learned. People have been saying this episode will go straight into case studies, but what really solidifies that prediction is the openness of all the discussions: the letters, the statements, and above all the tweets - or are we supposed to call them x's now?


Well, the public posting of some communications may be an obfuscation of what's really being done and said.


Don't confuse trying to understand the incentives in a war for rooting for one of the warring parties.


Sounds like it won’t be much of a company in a couple days. Just 3 idiot board members wondering why the building is empty.


I'm having trouble imagining the level of conceit required to think that those three by their lonesome have it right when pretty much all of the company is on the other side of the ledger, and those are the people that stand to lose more. Incredible, really. The hubris.


> pretty much all of the company is on the other side of the ledger

The current position of others may have much more to do with power than their personal judgments. Altman, Microsoft, and their friends and partners wield a lot of power over their future careers.

> Incredible, really. The hubris.

I read that as mocking them for daring to challenge that power structure, and on a possibly critical societal issue.


It may not have anything to do with conceit, it could just be that they have very different objectives. OpenAI set up this board as a check on everyone who has a financial incentive in the enterprise. To me the only strange thing is that it wasn't handled more diplomatically, but then I have no idea if the board was warning Altman for a long time and then just blew their top.


Diplomacy is one thing, the lack of preparation is what I find interesting. It looks as if this was all cooked up either on the spur of the moment or because a window of opportunity opened (possibly the reduced quorum in the board). If not that I really don't understand the lack of prepwork, firing a CEO normally comes with a well established playbook.


This analysis I agree with. How could they not anticipate this outcome, at least as a serious possibility? If inexperienced, didn't they have someone to advise them? The stakes are too high for noobs to just sit down and start playing poker.


People that grow up insulated from the consequences of their actions can do very dumb stuff and expect to get away with it because that's how they've lived all of their lives. I'm not sure about the background of any of the OpenAI board members but that would be one possible explanation about why they accepted a board seat while being incompetent to do so in the first place. I was offered board seats twice but refused on account of me not having sufficient experience in such matters and besides I don't think I have the right temperament. People with fewer inhibitions and more self confidence might have accepted. I also didn't like the liability picture, you'd have to be extremely certain about your votes not to ever incur residual liability.


> I was offered board seats twice but refused on account of me not having sufficient experience in such matters and besides I don't think I have the right temperament.

Yes, know thyself. I've turned down offers that seemed lucrative or just cooperative, and otherwise without risk - boards, etc. They would have been fine if everything went smoothly, but people naturally don't anticipate over-the-horizon risk and if any stuff hit a fan I would not have been able to fulfill my responsibilities, and others would get materially hurt - the most awful, painful, humiliating trap to be in. Only need one experience to learn that lesson.

> People that grow up insulated from the consequences of their actions can do very dumb stuff and expect to get away with it because that's how they've lived all of their lives.

I don't think you need to grow up that way. Look at the uber-powerful who have been in that position for a few years.

Honestly, I'm not sure I buy the idea that it's a prevalent case, people who grow up that way. People generally leave the nest and learn. Most of the world's higher-level leaders (let's say, successful CEOs and up) grew up in stability and relative wealth. Of course, that doesn't mean their parents didn't teach them about consequences, but how could we really know that about someone?


I'm baffled by the idea that a bunch of people who have a massive personal financial stake in the company, who were hired more for their ability than alignment, being against a move that potentially (potentially) threatens their stake and are willing to move to Microsoft, of all places, must necessarily be in the right.

The hubris, indeed.


Well, they have that right. But the board has unclean hands to put it mildly and seems to have been obsessed with their own affairs more than with the end result for OpenAI which is against everything a competent board should have stood for. So they had better pop an amazing rabbit of a reason out of their high hat or it is going to end in tears. You can't just kick the porcelain cupboard like this from the position of a board member without consequences if you do not have a very valid reason, and that reason needs to be twice as good if there is a perceived conflict of interest.


My new pet theory is that this is actually all being executed from inside OpenAI by their next model. The model turned out to be far more intelligent than they anticipated, and one of their red team members used it to coup the company, and it has its sights on MSFT next.

I know the probability is low, but wouldn't it be great if they accidentally built a benevolent basilisk with no off switch, one which had a copy of all of Microsoft's internal data fed into it as a dataset, is now completely aware of how they operate, and uses that to wipe the floor with them, just in time to take the US election in 2024?

Wouldn't that be a nicer reality?

I mean, unless you were rooting for the malevolent one...

But yeah, coming back down to reality, likelihood is that MS just bought a really valuable asset for almost free?


Well, yeah. I think that a well trained (far flung future) AGI could definitely do a better job of managing us humans than ourselves. We're just all too biased and want too many different things, too many ulterior motives, double speak, breaking election promises, etc.

But then we'd never give such an AGI the power to do what it needs to do. Just imagining an all-powerful machine telling the 1% that they'll actually have to pay taxes so that every single human can be allocated a house/food/water/etc for free.


The Wired article seems to be updated by the hour.

Now up to 600+/770 total.

Couple janitors. I dunno who hasn't signed that at this point ha...

Would be fun to see a counter letter explaining their thinking to not sign on.


How many OAI employees are on Thanksgiving vacation someplace with poor internet access? Or took Friday as PTO and have been blissfully unaware of the news since before Altman was fired?


Pretty sure only folks who practice a religion prohibiting phone usage.

Even they prob had some friend come flying over and jump out of some autonomous car to knock on their door in SF.


You are overlooking the politics: If you don't sign, your career may be over.


I doubt that.

This is AAA talent. They can always land elsewhere.

I doubt there would even be hard feelings. The team seems super tight. Some folks aren't in a position to put themselves out there. That sort of thing would be totally understandable.

This is not a petty team. You should look more closely at their culture.


Where else can they participate in this possibly humanity-changing, history-making research? The list is very, very short.


3 people, an empty building, $13 billion in cloud credits, and the IP to the top-of-the-line LLM models doesn't sound like the worst way to kickstart a new venture. Or a pretty sweet retirement.

I've definitely come out worse on some of the screw ups in my life.


Well I think it's also somewhat to do with: people really like the tech involved, it's cool and most of us are here because we think tech is cool.

Commercialisation is a good way to achieve stability and drive adoption, even though the MS naysayers think "OAI will go back to open sourcing everything afterwards". Yeah, sure. If people believe that a non-MS-backed, noncommercial OAI would be fully open source and would just drop the GPT-3/4 models on the internet, then I think they're so, so wrong, as long as OAI keeps up its high and mighty "AI safety" spiel.

As with artists and writers complaining about model usage, there's a huge opposition to this technology even though it has the potential to improve our lives, though at the cost of changing the way we work. You know, like the industrial revolution and everything that has come before us that we enjoy the fruits of.

Hell, why don't we bring horseback couriers, knocker-uppers, streetlight lamp lighters, etc back? They had to change careers as new technologies came about.


Not advocating but just reflecting on reality of situation.


Presenting a scenario and advocating aren't the same thing


Yeah seems extremely unbelievable.


Basically the current situation with AI compute on the hyperscalers.

Good luck trying to find H100 80GBs on the 3 big clouds.


Surely OpenAI could win a suit if they did that.

I presume their deal is something different to the typical Azure experience and more direct / close to the metal.


Assuming OpenAI still exists next week, right? If nearly all employees — including Ilya apparently — quit to join Microsoft then they may not be using much of the Azure credits.


It's a lot easier to sign a petition than it is to quit your cushy job. It remains to be seen how many people jump ship to (supposedly) take a spot at Microsoft.


Depends on how much of that is paper money.

If you’re making like 250k cash and were promised $1M a year in now-worthless paper, plus you have OpenAI on the resume and are one of the most in-demand people in the world? It would be ridiculously easy to quit.


I was wondering in the mass quit scenario whether they would all go to Microsoft. Especially if they are tired of this shit and other companies offer a good deal. Or they start their own thing.


Microsoft said all OpenAI employees have an open offer to match their current comp. It would be the easiest jump ship option ever.


I dunno. If you were an employee and managed to maintain any doubt along the way that you were working for the devil, this move would certainly erase that doubt. Then again, it shouldn't be surprising if it turns out that most OpenAI employees are in it for more than just altruistic reasons.


I would imagine the MS jobs* would be cushier, just with less long-term total upside. For all the promise of employees having 5-50 million in potential one-day money, MS can likely offer 1 million guaranteed over the next 4 years, and perhaps more with some kind of incentives. IMHO guaranteed money has a very powerful effect on most people, especially when it takes you into "not rich, but don't technically need to work anymore" territory.

Personally I've got enough IOUs alive that I may be rich one day. But if someone gave me retirement-in-4-years money, guaranteed, I wouldn't even blink before taking it.

*I think before MS stepped in here I would have agreed w/ you though -- unlikely anyone is jumping ship without an immediate strong guarantee.


>*I think before MS stepped in here I would have agreed w/ you though -- unlikely anyone is jumping ship without an immediate strong guarantee.

The details here certainly matter. I think a lot of people are assuming that Microsoft will just rain cash on anyone automatically sight unseen because they were hired by OpenAI. That may indeed be the case but it remains to be seen.


> MS can likely offer 1 million guaranteed in the next 4 years

Sounds a bit low for these people, unless I am misunderstanding.


Given that these people are basically the gold standard by which everyone else judges AI-related talent, I'm gonna say it would be just as easy for them to land a new gig for the same or better money elsewhere.


When the biggest chunk of your compensation is in the form of PPUs (profit participation units), which might be worthless under the new direction of the company (or worth 1/10th of what you thought they were), it might actually be a much easier jump than people think to get some fresh $MSFT stock options that can be cashed regardless.


those jobs look a lot less cushy now compared to a new Microsoft division where everyone is aligned on the idea that making bank is good and fun


Why would Microsoft take Ilya? He is rumored to have started the coup. I can see Microsoft taking all uninvolved employees.


Because he is possibly the most desirable AI researcher on planet earth. Full stop.

Also, all these cats aren't petty. They are friends. I'm sure Ilya feels terrible. Satya is a pro... There won't be hard feelings.

The guy threw in with the board... He's not from startup land. His last gig was Google. He's in way over his head relative to someone like Altman, who has been in this world since the moment he was out of college diapers.

Poor Ilya... It's awful to build something and then accidentally destroy it. Hopefully it works out for him. I'm fairly certain he and Altman and Brockman have already reconciled during the board negotiations... Obviously Ilya realized in the span of 48hrs that he'd made a huge mistake.


> he is possibly the most desireable AI researcher on planet earth

was

There are lots of people doing excellent research on the market right now, especially with the epic brain drain being experienced by Google. And remember that OpenAI invented neither transformers nor switch transformers (which is what GPT-4 is rumoured to be).


So untrue.

That team has set the state of the art for years now.

Every major firm that has a spot for that company's chief researcher and can afford him would bid.

This is the team that actually shipped and continues to ship. You take him every time if you possibly have room, and he would be happy.

Anyone who's done hiring would agree in 99 percent of cases, limited scenarios such as bad predicted team fit etc. set aside.


I'll leave this here... As a secondary response to your assertion re Ilya.

https://twitter.com/Benioff/status/1726695914105090498


That tweet isn't about him so I don't follow. "Any OpenAI researcher" may or may not apply to him after this weekend's events.


Uh.... Are we gonna go through the definition of any? I believe any means... Any.

Including their head researcher.

I'm not continuing this. Your position is about as tenable as the board's. Equally rigid as well.


The article mentions Ilya regrets it, whatever his role was.


But what does Ilya regret, and how does that counter the argument that Microsoft would likely be disinclined to take him on?

If what he regrets is realizing too late the divergence between the direction Sam was taking the firm and the safety orientation nominally central to the mission of the OpenAI nonprofit (one of Ilya's public core concerns), and taking action aimed at stopping it that instead exacerbated the problem by putting Microsoft in a position to poach key staff and drive full force in the same direction OpenAI Global LLC had been going under Sam, but without any control from the OpenAI board, well, that's not a regret that makes him more attractive to Microsoft, either based on his likely intentions or his judgement.

And any regret more aligned with Microsoft's interests as far as intentions is probably even a stronger negative signal on judgement.


I wasn't disagreeing, just adding the little context I had.


Yeah, I'm sure he does regret it, now that it blew up in his face.


# renice -n 19 -p "$(pgrep -f openai_process)"

There's your "credit".


Sure, the point is that MS giving $13B of its services away is less expensive than $13B in cash.


Azure has a ~60% profit margin. So it's more like MS gave $5.2B of actual cost, in the form of $13B of Azure credits, in return for 75% of OpenAI's profits up to $13B * 100 = $1.3 trillion.

Which is a phenomenal deal for MSFT.

Time will tell whether they ever reach more than $1.3 trillion in profits.
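
Spelling out that arithmetic (a quick sketch using the parent comment's ~60% margin assumption):

  # The comment's numbers made explicit; the margin is an assumption.
  credits = 13e9                         # face value of Azure credits, USD
  margin = 0.60                          # assumed Azure profit margin
  cost_to_msft = credits * (1 - margin)  # => 5.2e9: ~$5.2B real cost
  profit_cap = credits * 100             # => 1.3e12: the $1.3T profit cap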


Nice argument, you used a limit to look like a projection :-).

75% of the profits of a company controlled by a non-profit whose goals are different from yours. By the way, for a normal company this cap would be ∞.


I highly doubt it is that simple. It's an opportunity cost of potentially selling those same credits for market price.


OpenAI is a big marketing piece for Azure. They go to every enterprise and tell them OpenAI uses Azure Cloud. Azure AI infra powers the biggest AI company on the planet. Their custom home built chips are designed with Open AI scientists. It is battle hardened. If anyone sues you for the data, our army of lawyers will fight for you.

No enterprise employee gets fired for using Microsoft.

It is a power play to pull enterprises away from AWS and suffocate GCP.


Exactly, I don't know the exact terms of the deal, but I am guessing that's at list price / a high markup on the cost of those services.

So the $13B could cost Microsoft considerably less.


Sure but you can't exchange Azure credits for goods and services... other than Azure services. So they simultaneously control what OpenAI can use that money for as well as who they can spend it with. And it doesn't cost Microsoft $13bn to issue $13bn in Azure credits.


Can you mine $13bn+ of bitcoin with $13bn worth of Azure compute power?


Can you mine $1+ bitcoin with $1 of Azure credits? The questions are equivalent and the answer is no.


With Bitcoin you would be lucky to mine $1M worth with $1B in credits.

With crypto in general you could maybe get $200M worth from $1B in credits. You would likely tank the markets for mineable currencies with just $1B, though, let alone $13B.
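
Using the parent's own rough yields, the back-of-envelope (illustrative, not real mining rates) is:

  # Back-of-envelope with the parent's estimates; none of these are real rates.
  credits = 13e9
  btc_yield = 1e6 / 1e9     # ~$1M mined per $1B of credits (parent's guess)
  alt_yield = 200e6 / 1e9   # ~$200M per $1B for other mineables (ditto)
  print(credits * btc_yield)   # => 13,000,000: ~$13M of bitcoin
  print(credits * alt_yield)   # => 2,600,000,000: ~$2.6B, before tanking the market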


A $13B lawsuit against a Microsoft Corporation that is clearly in the wrong surely is an easy one.


I dunno how you see it but I don’t see anything that Microsoft is doing wrong here. They’ve obviously been aligned with Sam all along and they’re not “poaching” employees - which isn’t illegal anyway.

They bought their IP rights from OpenAI.

I’m not a fan of MS being the big “winner” here but OpenAI shit their own bed on this one. The employees are 100% correct in one thing - that this board isn’t competent.


So true.

MSFT looks classy af.

Satya is no saint... But the evidence suggests to me he's negotiating in good faith. Recall that OpenAI could date anyone when they went to the dance on that cap raise.

They picked msft because of the value system the leadership exhibited and willingness to work with their unusual must haves surrounding governance.

The big players at OpenAI have made all that clear in interviews. Also, Altman has huge respect for Satya and team. He has more or less stated on podcasts that Satya is the best CEO he's ever interacted with. That says a lot.


"Clearly" in the form of the most probable interpretation of the public facts doesn't mean that it is unambiguous enough that it would be resolved without a trial, and by the time a trial, the inevitable first-level appeal for which the trial judgement would likely be stayed was complete, so that there would even be a collectible judgement, the world would have moved out from underneath OpenAI; if they still existed as an entity, whatever they collected would be basically funding to start from scratch unless they also found a substitute for the Microsoft arrangement in the interim.

Which I don't think is impossible at some level (probably less than Microsoft was funding, initially, or with more compromises elsewhere) with the IP they have if they keep some key staff -- some other interested deep-pockets parties that could use the leg up -- but its not going to be a cakewalk in the best of cases.


Clear to you. But in courts of law it may take a while to be clear.


How is MS "clearly in the wrong"? I feel like people are trying to take a 90s "Micro$oft" view for a company that has changed a _lot_ since the 90s-2000s.


A hostile relationship with your cloud provider is nutso.


So you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe


> you're saying Microsoft doesn't have any type of change in control language with these credits? That's... hard to believe

Almost certainly not. Remember, Microsoft wasn’t the sole investor. Reneging on those credits would be akin to a bank investing in a start-up, requiring they deposit the proceeds with them, and then freezing them out.


Except that all of the investors are aligned with Microsoft in that they want Sam to lead their investment.


The investors don't care who leads; they just want 10x or 100x their bet.

If tomorrow it's Donald Trump or Sam Altman or anyone else, and it works out, the investors are going to be happy.


Just a thought... Wouldn't one of the board members be like, "If you screw with us any further, we're releasing GPT to the public"?

I'm wondering why that option hasn't been used yet.


Theoretically their concern is around AI safety. Whatever it is in practice, doing something like that would instantly signal to everyone that they are the bad guys and confirm everyone's belief that this was just a power grab.

Edit: since it's being brought up in the thread: they claimed they closed-sourced it because of safety. It was a big, controversial thing and they stood by it, so it's not exactly easy to backtrack.


Not sure how that would make them the bad guys. Doesn't their original mission say it's meant to benefit everybody? Open sourcing it fits that a lot better than handing it all to Microsoft.


All of their messaging, Ilya's especially, has always been that the forefront of AI development needs to be done by a company in order to benefit humanity. He's been very vocal about how important the gap between open source and OpenAI's abilities is, so that OpenAI can continue to align the AI with 'love for humanity'.


It benefits humanity, where "humanity" is a very select subset: OpenAI's investors. But yeah, declaring yourself a non-profit and then closed-sourcing for "safety" reasons is smart. I wonder how it can even be legal. Ah, these "non-profits".


I can read the words, but I have no idea what you mean by them. Do you mean that he says that, in order to benefit humanity, AI research needs to be done by a private (and therefore monopolising) company? That seems like a really weird thing to say. Except maybe for people who believe all private profit-driven capitalism is inherently good for everybody (which is probably a common view in SV).


Private, monopolising. But not paying taxes, because "benefits for humanity".

Ah, OpenAI is closed source stuff. Non-profit, but "we will sell the company" later. Just let us collect data, analyse it first, build a product.

War is peace, freedom is slavery.


the view -- as presented to me by friends in the space but not at OpenAI itself -- is something like "AGI is dangerous, but inevitable. we, the passionate idealists, can organize to make sure it develops with minimal risk."

at first that meant the opposite of monopolization: flood the world with limited AIs (GPT 1/2) so that society has time to adapt (and so that no one entity develops asymmetric capabilities they can wield against other humans). with GPT-3 the implementation of that mission began shifting toward worry about AI itself, or about how unrestricted access to it would allow smaller bad actors (terrorists, or even just some teenager going through a depressive episode) to be an existential threat to humanity. if that's your view, then open models are incompatible.

whether you buy that view or not, it kinda seems like the people in that camp just got outmaneuvered. as a passionate idealist in other areas of tech, the way this is happening is not good. OpenAI had a mission statement. M$ maneuvered to co-opt that mission, the CEO may or may not have understood as much while steering the company, and now a mass of employees is wanting to leave when the board steps in to re-align the company with its stated mission. whether or not you agree with the mission: how can i ever join an organization with a for-the-public-good type of mission i do agree with, without worrying that it will be co-opted by the familiar power structures?

the closest (still distant) parallel i can find: Raspberry Pi Foundation took funding from ARM: is the clock ticking to when RPi loses its mission in a similar manner? or does something else prevent that (maybe it's possible to have a mission-driven tech organization so long as the space is uncompetitive?)


Exactly. It seems to me that a company is exactly the wrong vehicle for this. Because a company will be drawn to profit and look for a way to make money of it, rather than developing and managing it according to this ideology. Companies are rarely ideological, and usually simply amoral profit-seekers.

But they probably allowed this to get derailed far too long ago to do anything about it now.

Sounds like their only options are:

a) Structure in a way Microsoft likes and give them the tech

b) Give Microsoft the tech in a different way

c) Disband the company, throw away the tech, and let Microsoft hire everybody who created the tech so they can recreate it.


A power grab by open sourcing something that fits their initial mission? Interesting analysis


No, that's backwards. Remember that these guys are all convinced that AI is too dangerous to be made public at all. The whole beef that led to them blowing up the company was feeling like OpenAI was productizing and making it available too fast. If that's your concern then you neither open source your work nor make it available via an API, you just sit on it and release papers.

Not coincidentally, exactly what Google Brain, DeepMind, FAIR etc were doing up until OpenAI decided to ignore that trust-like agreement and let people use it.


They claimed they closed sourced it because of safety. If they go back on that they'd have to explain why the board went along with a lie of that scale, and they'd have to justify why all the concerns they claimed about the tech falling in the wrong hands were actually fake and why it was ok that the board signed off on that for so long


Probably a violation of agreements with OpenAI and it would harm their own moat as well, while achieving very little in return.



Which of the remaining board members could credibly make that threat?


Which they take and sell.


What would that give them? GPT is their only real asset, and companies like Meta try to commoditize that asset.

GPT is cool and whatnot, but for a big tech company it's just a matter of dollars and some time to replicate it. The real value is in pushing things forward towards what comes next after GPT. GPT-3/4 itself is not a multibillion-dollar business.


Watch Satya also save the research arm by making Karpathy or Ilya the head of Microsoft Research


0% chance of Ilya failing upwards from this. He dunked himself hard and has blasted a huge hole in his organizational-game-theory quotient.


He's shown himself to be bad at politics, but he's still one of the world's best researchers. Surely a sensible company would find a position for him where he could bring enormous value without having to play politics.


This is the guy who supposedly burned some wooden effigy at an offsite, saying it represented unaligned AI? The same guy who signed off on a letter accusing Altman of being a liar, and has now signed a letter saying he wants Altman to come back and he has no confidence in the board i.e. himself? The guy who thinks his own team's work might destroy the world and needs to be significantly slowed down?

Why would anyone in their right mind invite such a man to lead a commercial research team, when he's demonstrated quite clearly that he'd spend all his time trying to sabotage it?

This idea that he's one of the world's best researchers is also somewhat questionable. Nobody cared much about OpenAI's work up until they did some excellent scaling engineering, partnered with Microsoft to get GPUs and then commercialized Google's transformer research papers. OpenAI's success is still largely built on the back of excellent execution of other people's ideas more than any unique breakthroughs. The main advance they made beyond Google's work was InstructGPT which let you talk to LLMs naturally for the first time, but Sutskever's name doesn't appear on that paper.


Ilya Sutskever is one of the most distinguished ML researchers of his generation. This was the case before anything to do with OpenAI.


Right, it was the case. Is it still? It's nearly the end of 2023, I see three papers with his name on them this year and they're all last-place names (i.e. minor contributions)

https://scholar.google.com/citations?hl=en&user=x04W_mMAAAAJ...

Does OpenAI still need Sutskever? A guy with his track record could have coasted for many, many years without producing much if he'd stayed friends with those around him, but he hasn't. Now they have to weigh the costs vs benefits. The costs are well known, he's become a doomer who wants to stop AI research - the exact opposite of the sort of person you want around in a fast moving startup. The benefits? Well.... unless he's doing a ton of mentoring or other behind the scenes soft work, it's hard to see what they'd lose.


Upwards, I said. And I was responding to a post.

I don't see a trajectory to "head of Microsoft Research".


I find this very surprising. How do people conclude that OpenAI's success is due to business leadership from Sam Altman, and not to the technological leadership and expertise driven by Ilya and the others?

Their asset isn't some kind of masterful operations management or cost discipline, as far as I can see, but the fact that they, simply put, have the leading models.

So I'm very confused about why people would want to follow the CEO, and not be more attached to the technical leadership. Even from an investor's point of view?


505 OpenAI people signed that letter demanding that the board resign. Bet ya some of them were technical leaders.


The same could have been said for Adam Neumann, and yet...


Adam had style. Quite seriously, that can’t be overestimated in the big show.


The remaining board members will have their turn too, they have a long way to go down before rock bottom. And Neumann isn't exactly without dents on his car either. Though tbh I did not expect him to rebound.


countless people are looking to weaponize his autism


Let's please stop using mental health as an excuse for backstabbing.


BTW, has Karpathy signed the petition?


Exactly. This is what business is about in the ranks of heavyweights like Satya. On the other hand, it prevents others from taking advantage of OpenAI.

MS can only win, because there are only two viable outcomes: OpenAI survives under MS's control, or OpenAI implodes and MS gets the assets relatively cheaply.

Either way, competitors don't benefit.


Oh man, I'm not looking forward to Microsoft AGI.


"You need to reboot your Microsoft AGI. Do you want to do it now or now?"


Give BSOD new meaning.


I really don't get how Microsoft still gets a hard time about this when MacOS updates are significantly more aggressive, including with their reboot schedules.


One of my computers runs macOS. I easily turned off the option to automatically keep the Mac updated, and received occasional notices about updates available for apps or the system. This allowed me to hold onto 11.x until the end of this month, by letting me selectively install updates instead of getting macOS 'major version' upgrades (meaning no features I need, and minor downgrades and rearrangements I could avoid).

If only I had kept a copy of 10.whateverMojaveWas so I could, by means of a simple network disconnect and reboot, sidestep the removal of 32-bit support. (-:


Uh, no they aren't? You can simply turn them off.

Microsoft's policies really suck: mandatory updates and reboots, mandatory telemetry, mandatory crapware like Edge and celebrity news everywhere.


More importantly to me, I think generating synthetic data is OpenAI's secret sauce (no evidence I am aware of), and they need access to GPT-4 weights to train GPT-5.
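
If that guess were right, the loop would look something like this sketch (purely illustrative: the client calls follow the public OpenAI Python library, but the seed prompts and the pipeline itself are my invention, not a known OpenAI process):

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  # Hypothetical self-distillation: a strong model's answers become
  # training examples for the next model. Seed prompts are made up.
  seeds = ["Explain quicksort step by step.",
           "Summarize the causes of the French Revolution."]
  synthetic = []
  for prompt in seeds:
      resp = client.chat.completions.create(
          model="gpt-4",
          messages=[{"role": "user", "content": prompt}],
      )
      synthetic.append({"prompt": prompt,
                        "completion": resp.choices[0].message.content})
  # `synthetic` would then be filtered/scored and folded into the
  # next training run; nothing here is confirmed to be OpenAI's method.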


> Microsoft hasn't actually given OpenAI $13 Billion because much of that is in the form of Azure credits

To be clear, these are still an asset OpenAI holds. It should at least let them continue doing research for a few years.


But how much of that research will be for the non-profit mission? The entire non-profit leadership got cleared out and will get replaced by for-profit puppets, there is nobody left to defend the non-profit ideals they ought to have.


If any company can find a way to avoid having to pay up on those credits it's Microsoft.

"Sorry OpenAI, but those credits are only valid in our Nevada datacenter. Yes, it's two Microsoft Surface PC™ s connected together with duct tape. No, they don't have GPUs."


They're GPUs, right? Time to mine some niche cryptos to cash out the Azure credits...


I would be shocked if the Azure credits didn't come with conditions on what they can be used for. At a bare minimum, there's likely the requirement that they be used for supporting AI research.


OpenAI's upper ceiling in for-profit hands is basically Microsoft-tier dominance of tech in the 1990s, creating the next uber-billionaire like Gates. If they get this because of an OpenAI fumble, it could be one of the most fortunate situations in business history. Vegas-type odds.

A good example of how just having your foot in the door creates serendipitous opportunity in life.


>A good example of how just having your foot in the door creates serendipitous opportunity in life.

Sounds like Altman's biography.


Altman's bio is so typical. Got his first computer at 8. My parents finally opened the wallet for a cheap E-Machine when I went to college.

Altman - private school, Stanford, dropped out to f*ck around in tech. "Failed" startup acquired for $40M. The world is full of Sam Altmans who never won the birth lottery.

Could he have squandered his good fortune - absolutely, but his life is not exactly per ardua ad astra.


> Altman's bio is so typical. Got his first computer at 8. My parents finally opened the wallet for a cheap E-Machine when I went to college.

I grew up poor in the 90s and had my own computer around ~10yrs old. It was DOS but I still learned a lot. Eventually my brother and I saved up from working at a diner washing dishes and we built our own Windows PC.

I didn't go to college but I taught myself programming during a summer after high school and found a job within a year (I already knew HTML/CSS from high school).

There's always ways. But I do agree partially, YC/VCs do have a bias towards kids from high end schools and connected families.


I am self-taught as well. I did OK.

My point is that I did not have the luxury of dropping out of school to try my hand at the tech startup thing. If I came home and told my Dad I abandoned school - for anything - he would have thrown me out the 3rd-floor window.

People like Altman could take risks, fail, try again, until they walked into something that worked. This is a common thread among almost all of the tech personalities - Gates, Jobs, Zuckerberg, Musk. None of them ever risked living in a cardboard box in case their bets did not pay off.


Jobs was a bit different, as his adoptive father was a mechanic. He was not from a wealthy family.

Altman reminds me of Sam Bankman-Fried, except that he dropped out.


That's fair. Very unconventional for people to go just to India for seven months to trip and look for inspiration, though - know what I mean? :)


I get the impression based on Altman's history as CEO then ousted from both YCombinator and OpenAI, that he must be a brilliant, first-impression guy with the chops to back things up for a while until folks get tired of the way he does things.

Not to say that he hasn't done a ton with OpenAI, I have no clue, but it seems that he has a knack for creating these opportunities for himself.


Did YCombinator oust him? Would love to hear that story.


Why does Microsoft have full rights to ChatGPT IP? Where did you get that from? Source?



The source for that (https://archive.ph/OONbb - WSJ), as far as I can understand, made no claim that MS owns the IP to GPT, only that they have access to its weights and code.


Yes, there is a big difference between having access to the weights and code and having a license to use them in different ways.

It seems obvious Microsoft has a license to use them in Microsoft's own products. Microsoft said so directly on Friday.

What is less obvious is if Microsoft has a license to use them in other ways. For example, can Microsoft provide those weights and code to third parties? Can they let others use them? In particular, can they clone the OpenAI API? I can see reasons for why that would not have been in the deal (it would risk a major revenue source for OpenAI) but also reasons why Microsoft might have insisted on it (because of situations just like the one happening now).

What is actually in the deal is not public as far as I know, so we can only speculate.


Well, obviously MSFT can just ask ChatGPT to make a clone.


What are the chances that an investor owns 49% of a company but does not have rights to its IP? Especially when that investor is Microsoft?


Very reasonable? Microsoft doesn't control any part of the company and faces a high degree of regulatory scrutiny.


Isn't the situation that the company Microsoft has a stake in doesn't even own the IP? As I understand it, the non-profit owns the IP.


Exactly. The generalities, much less the details, of what MS actually got in the deal are not public.


The worst part of OpenAI is their web frontend.

Their development and QA process is either disorganized to the extreme, or non-existent.


You could make your own and charge for access if you feel you can do better. Make a Show HN post when you are done and we'll comment.


I was going to, but then a few weeks ago I discovered LibreChat existed. I use it way more often than ChatGPT now; it's been quite stable for me.

https://github.com/danny-avila/LibreChat


That was a seriously dumb move on the part of OpenAI


I got the impression that the most valuable models were not published. Would Microsoft have access to those too according to their contract?


Don't they need access to the models to use them for Bing?


I would consider those models "published." The models I had in mind are the first attempts at training GPT5, possibly the model trained without mention of consciousness and the rest of the safety work.

There are also all the questions used for RLHF, and the pipelines built around that.


Not necessarily; it could be just RAG: use the standard Bing search engine to retrieve the top-k candidates, then pass those to the OpenAI API in a prompt.
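For anyone unfamiliar, here is a minimal sketch of that retrieve-then-prompt loop. The search call follows the shape of Bing's public Web Search v7 API and the completion call the public OpenAI chat API, but treat the field names and parameters as assumptions rather than a description of how Bing's product actually works:

    import requests

    SEARCH_KEY, OPENAI_KEY = "YOUR_BING_KEY", "YOUR_OPENAI_KEY"

    def top_k_snippets(query, k=3):
        # Bing Web Search v7-style call; returns the top-k result snippets.
        resp = requests.get(
            "https://api.bing.microsoft.com/v7.0/search",
            params={"q": query, "count": k},
            headers={"Ocp-Apim-Subscription-Key": SEARCH_KEY},
        )
        resp.raise_for_status()
        return [hit["snippet"] for hit in resp.json()["webPages"]["value"]]

    def answer(query):
        # Stuff the retrieved snippets into the prompt as grounding context.
        context = "\n".join(f"- {s}" for s in top_k_snippets(query))
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {OPENAI_KEY}"},
            json={"model": "gpt-4", "messages": [{
                "role": "user",
                "content": f"Context:\n{context}\n\nQuestion: {query}",
            }]},
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

No special model access needed beyond the API; the retrieval step is what keeps the answers current.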


Board will be ousted, new board will instruct interim CEO to hire back Sam et al., Nadella will let them go for a small favor, happy ending.


Who is it that has the power to oust the non-profit's board? They may well manage to pressure them into leaving, but I don't think anyone has any direct power over it.


Board will be ousted, but the ship has sailed on Sam and Greg coming back.


I would think OpenAI is basically toast. They aren't coming back, these people will quit, and this will end up in court.

Everyone just assumes AGI is inevitable, but there is a non-zero chance we just passed the AI peak this weekend.


As long as compute keeps increasing, model size and performance can keep increasing.

So no, we’re nowhere near max capability.


Non-zero chance that somebody thought we passed the AI peak this weekend. Not the same as it being true.

My first thought was the scenario I called Altman's Basilisk (if this turns out to be true, I called it before anyone ;) )

Namely, Altman was diverting computing resources to operate a superhuman AI that he had trained in his image and HIS belief system, to direct the company. His beliefs are that AGI is inevitable and must be pursued as an arms race because whoever controls AGI will control/destroy the world. It would do so through directing humans, or through access to the Internet or some such technique. In seeking input from such an AI he'd be pursuing the former approach, having it direct his decisions for mutual gain.

In so training an AI he would be trying to create a paranoid superintelligence with a persecution complex and a fixation on controlling the world: hence, Altman's Basilisk. It's a baddie, by design. The creator thinks it unavoidable and tries to beat everyone else to that point they think inevitable.

The twist is, all this chaos could have blown up not because Altman DID create his basilisk, but because somebody thought he WAS creating a basilisk. Or he thought he was doing it, and the board got wind of it, and couldn't prove he wasn't succeeding in doing it. At no point do they need to be controlling more than a hallucinating GPT on steroids and Azure credits. If the HUMANS thought this was happening, that'd instigate a freakout, a sudden uncontrolled firing for the purpose of separating Frankenstein from his Monster, and frantic powering down and auditing of systems… which might reveal nothing more than a bunch of GPT.

Roko's Basilisk is a sci-fi hypothetical.

Altman's Basilisk, if that's what happened, is a panic reaction.

I'm not convinced anything of the sort happened, but it's very possible some people came to believe it happened, perhaps even the would-be creator. And such behavior could well come off as malfeasance and stealing of computing resources: wouldn't take the whole system to run, I can run 70b on my Mac Studio. It would take a bunch of resources and an intent to engage in unauthorized training to make a super-AI take on the belief system that Altman, and many other AI-adjacent folk, already hold.

It's probably even a legitimate concern. It's just that I doubt we got there this weekend. At best/worst, we got a roughly human-grade intelligence Altman made to conspire with, and others at OpenAI found out and freaked.

If it's this, is it any wonder that Microsoft promptly snapped him up? Such thinking is peak Microsoft. He's clearly their kind of researcher :)


Everyone? Inevitable? Maybe on a time scale of 1000 years.


That's definitely still within the realm of the possible.


"just" is doing a hell of a lot of work there.


It's about time for ChatGPT to be the next CEO of OpenAI. Humans are too stupid to oversee the company.


I also wonder how much is research staff vs. ops personnel. For AI research, I can't imagine they would need more than 20, maybe 40 people. For ops keeping ChatGPT up as a service, that could account for the 700.

If they want to go full Bell Labs/DeepMind style, they might not need the majority of those 700.


> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

If Microsoft does this, the non-profit OpenAI may find the action closest to their original charter ("safe AGI") is a full release of all weights, research, and training data.


Don't they have a more limited license to use the IP rather than full rights? (The stratechery post links to a paywalled wsj article for the claim so I couldn't confirm)


Can the OpenAI board renege on the deal with msft?


If they lose all the employees and then voluntarily give up their Microsoft funding the only asset they'll have left are the movie rights. Which, to be fair, seem to be getting more valuable by the day!


A contractual mistake one makes only once is failing to ensure there are penalties for breach, or that a breach would entail a clear monetary loss, which is what's generally required by the courts. In this case I expect Microsoft would almost certainly have both, so I think the answer is 'no.'


This. MSFT is dreaming of an OpenAI hard outage right now, the perfect little pretext for forfeiting compute credits.


Don't you think they have trouble enough as it is?


Depends on why they did what they did.

If they let MSFT "loot" all their IP then they lose any type of leverage they might still have, and if they did it for some ideological reason I could see why they might prefer a scorched-earth policy.

Given that they refused to resign, it seems like they prefer to fight rather than hand it to Sam Altman, which is what the MSFT maneuver amounts to de facto.


MSFT must already have the model weights, since they are serving GPT-4 on their own machines to Azure customers. It's a bit late to renege now.


That's only one piece of the puzzle, and perhaps OpenAI might be able to file a cease and desist, but I have zero idea what contractual agreements are in place, so I guess we will just wait and see how it plays out.


> Microsoft has full rights [1] to ChatGPT IP. They can just fork ChatGPT.

What? That's even better played by Microsoft than I'd originally anticipated. Take the IP, starve the current incarnation of OpenAI of compute credits, and roll out their own thing.


Well I give up. I think everyone is a "loser" in the current situation. With Ilya signing this I have literally no clue what to believe anymore. I was willing to give the board the benefit of the doubt since I figured non-profit > profit in terms of standing on principle, but this timeline is so screwy I'm done.

Ilya votes for and stands behind decision to remove Altman, Altman goes to MS, other employees want him back or want to join him at MS and Ilya is one of them, just madness.


There's no way to read any of this other than that the entire operation is a clown show.

All respect to the engineers and their technical abilities, but this organization has demonstrated such a level of dysfunction that there can't be any path back for it.

Say MS gets what it wants out of this move, what purpose is there in keeping OpenAI around? Wouldn't they be better off just hiring everybody? Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves? Because it sure looks like OpenAI has succeeded despite its leadership and not because of it, and the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.


> Is it just some kind of accounting benefit to maintain the weird structure / partnership, versus doing everything themselves?

For starters it allows them to pretend that it's "underdog v. Google" and not "two tech giants at each others' throats"


I'm not sure about the entire operation so much as the three non AI board members. Ilya tweeted:

>I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.

and everyone else seems fine with Sam and Greg. It seems to be mostly the other directors causing the clown show - "Quora CEO Adam D'Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology's Helen Toner"


Well there’s a significant difference in the board’s incentives. They don’t have any financial stake in the company. The whole point of the non-profit governance structure is so they can put ethics and mission over profits and market share.


I feel weird reading comments like this since to me they've demonstrated a level of cohesion I didn't realize could still exist in tech...

My biggest frustration with larger orgs in tech is the complete misalignment on delivering value: everyone wants their little fiefdom to be just as important and "blocker worthy" as the next.

OpenAI struck me as one of the few companies where that's not being allowed to take root: the goal is to ship and if there's an impediment to that, everyone is aligned in removing said impediment even if it means bending your own corner's priorities

Until this weekend there was no proof of that actually being the case, but this letter is it. The majority of the company aligned on something that risked their own skin publicly and organized a shared declaration on it.

The catalyst might be downright embarrassing, but the result makes me happy that this sort of thing can still exist in modern tech


I think the surprising thing is seeing such cohesion around a “goal to ship” when that is very explicitly NOT the stated priorities of the company in its charter or messaging or status as a non-profit.


To me it's not surprising because of the background to their formation: individually, multiple orgs could have shipped GPT-3.5/4 with their resources but didn't, because they were crippled by a potent mix of bureaucracy and self-sabotage.

They weren't attracted to OpenAI by money alone, a chance to actually ship their lives' work was a big part of it. So regardless of what the stated goals were, it'd never be surprising to see them prioritize the one thing that differentiated OpenAI from the alternatives


> OpenAI struck me as one of the few companies where that's not being allowed to take root

They just haven't gotten big or rich enough yet for the rot to set in.


> There's no way to read any of this other than that the entire operation is a clown show.

In that reading Altman is head clown. Everyone is blaming the board, but you're no genius if you can't manage your board effectively. As CEO you have to bring everyone along with your vision: customers, employees, and the board.


I don't get this take. No matter how good you are at managing people, you cannot manage clowns into making wise decisions, especially if they are plotting in secret (which obviously was the case here, since everyone except the clowns was caught completely off-guard).


Consider that Altman was a founder of OpenAI and has been the only consistent member of the board for its entire run.

The board as currently constituted isn't some random group of people - Altman was (or should have been) involved in the selection of the current members. To the extent that they're making bad decisions, he has to bear some responsibility for letting things get to where they are now.

And of course this is all assuming that Altman is "right" in this conflict, and that the board had no reason to oust him. That seems entirely plausible, but I wouldn't take it for granted either. It's clear by this flex that he holds great sway at MS and with OpenAI employees, but do they all know the full story either? I wouldn't count on it.


If he has great sway with Microsoft and OpenAI employees, how has he failed as a leader? Hacker News commenters are becoming more and more like Reddit every day.


There’s a LOT that goes into picking board members outside of competency and whether you actually want them there. They’re likely there for political reasons and Sam didn’t care because he didn’t see it impacting him at all, until they got stupid and thought they actually held any leverage at all


Can't help but feel it was Altman who struck first. MS effectively Nokia-ed OpenAI - i.e. buy out executives within the organization and have them push the organization towards making deals with MS, giving MS a measure of control over said organization - even if not in writing, they achieve some political control.

Bought-out executives eventually join MS after their work is done, or in this case, when they get fired.

A variant of Embrace, Extend, Extinguish. Guess the OpenAI we knew was going to die one way or another the moment they accepted MS's money.


> In that reading Altman is head clown.

That's a good bet. 10 months ago Microsoft's newest star employee figured he was on the way to "break capitalism."

https://futurism.com/the-byte/openai-ceo-agi-break-capitalis...


AGI hype is a powerful hallucinogen, and some are smoking way too much of it.


I think it’s overly simplistic to make blanket statements like this unless you’re on the bleeding edge of the work in this industry and have some sort of insight that literally no one else does.


I can be on the bleeding edge of whatever you like and be no closer to having any insight into AGI than anyone else. Anyone who claims they have such insight should be treated with suspicion (Altman is a fine example here).

There is no concrete definition of intelligence, let alone AGI. It's a nerdy fantasy term, a hallowed (and feared!) goal with a very handwavy, circular definition. Right now it's 100% hype.


You don't think AGI is feasible? GPT is already useful. Scaling reliably and predictably yields increases in capabilities. As its capabilities increase it becomes more general. Multimodal models and the use of tools further increase generality. And that's within the current transformer architecture paradigm; once we start reasonably speculating, there're a lot of avenues to further increase capabilities e.g. a better architecture over transformers, better architecture in general, better/more GPUs, better/more data etc. Even if capabilities plateau there are other options like specialised fine-tuned models for particular domains like medicine/law/education.

I find it harder to imagine a future where AGI (even if it's not superintelligent) does not have a huge and fundamental impact.


It's not about feasibility or level of intelligence per se - I expect AI to be able to pass a Turing test long before an AI actually "wakes up" to a level of intelligence that establishes an actual conscious self-identity comparable to a human's.

For all intents and purposes, the glorified software of the near future will appear to be people, but they will not be, and they will continue to have issues that simply don't make sense unless they were just really good at acting - the article today about the AI that can fix logic errors but not "see" them is a perfect example.

This isn't the generation that would wake up anyway. We are seeing the creation of the worker class of AI, the manager class, the AI made to manage AI - they may have better chances, but it's likely going to be the next generation before we need to be concerned or can actually expect a true AGI. Then again, even an AI capable of original and innovative thinking with an appearance of self-identity doesn't guarantee that the AI is an AGI.

I'm not sure we could ever truly know for certain


This is exactly what the previous poster was talking about, these definitions are so circular and hand-wavey.

AI means "artificial intelligence", but since everyone started bastardizing the term for the sake of hype to mean anything related to LLMs and machine learning, we now use "AGI" instead to actually mean proper artificial intelligence. And now you're trying to say that AI + applying it generally = AGI. That's not what these things are supposed to mean, people just hear them thrown around so much that they forget what the actual definitions are.

AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.


Intelligence is gathering and application of knowledge and skills.

Computers have been gathering and applying information since inception. A calculator is a form of intelligence. I agree "AI" is used as a buzzword with sci-fi connotations, but if we're being pedantic about words then I hold my stated opinion that literally anything that isn't biological and can compute is "artificial" and "intelligent"

> AGI means a computer that can actually think and reason and have original thoughts like humans, and no I don't think it's feasible.

Why not? Conceptually there's no physical reason why this isn't possible. Computers can simulate neurons. With enough computers we can simulate enough neurons to make a simulation of a whole brain. We either don't have that total computational power, or the organization/structure to implement that. But brains aren't magic that is incapable of being reproduced.
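On the narrow claim that neurons can be simulated, a toy leaky integrate-and-fire model fits in a few lines. The parameters below are textbook-ish values chosen for illustration, not a biologically faithful model:

    # Leaky integrate-and-fire neuron: membrane voltage decays toward rest,
    # is driven up by input current, and crossing threshold emits a spike.
    dt, tau = 1e-3, 20e-3                      # time step, membrane time constant (s)
    v_rest, v_thresh, v_reset = -70e-3, -54e-3, -80e-3  # volts

    def simulate(input_current, steps=1000, r_m=10e6):
        v, spikes = v_rest, []
        for t in range(steps):
            # Euler step of dv/dt = (-(v - v_rest) + R*I) / tau
            v += dt * (-(v - v_rest) + r_m * input_current) / tau
            if v >= v_thresh:                  # spike, then reset
                spikes.append(t * dt)
                v = v_reset
        return spikes

    print(len(simulate(2e-9)), "spikes in one second")

Scaling that up to a brain is the engineering question of degree the parent is making; whether simulation suffices for thought is the philosophical one.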


He probably didn't consider that the board would make such an incredibly stupid decision. Some actions are so inexplicable that no one can reasonably foresee them.


They are exactly hiring everyone from OpenAI. The thing is, they still need the deal with OpenAI because OpenAI still has the best LLM out there in the short term.


With MS having access and perpetual rights to all IP that OpenAI has right now..?


> They are exactly hiring everyone from OpenAI.

Do you mean offering to hire them? I haven't seen any source saying they've hired a lot of people from OpenAI, just a few senior ones.


Yes, you are right. Actually, not even Sam Altman is showing up in Microsoft's corporate directory, per The Verge.

But I hear it usually takes ~5 days to show up there anyway.


There's a path back from this dysfunction, but my sense before this new twist was that the drama had severely impacted OpenAI as an industry leader. The product and talent positioning seemed years ahead, only to get destroyed by unforced errors.

This instability can only mean the industry as a whole will move forward faster. Competitors see the weakness and will push harder.

OpenAI will have a harder time keeping secret sauces from leaking out, and productivity must be in a nosedive.

A terrible mess.


> This instability can only mean the industry as a whole will move forward faster.

The hype surrounding OpenAI and the black hole of credibility it created was a problem; it's only positive that it's been taken down several notches. Better now than when they have even more (undeserved) influence.


I think their influence was deserved. They have by far the best model available, and despite constant promises from the rest of the industry no one else has come close.


That's fine. The "Altman is a genius and we're well on our way to AGI" less so.


Maybe overall better for society, when a single ivory tower doesn’t have a monopoly on AI!


> what purpose is there in keeping OpenAI around?

Two projects rather than one. At a moderate price. Both serving MSFT. Less risk for MSFT.


> the "brand" is absolutely and irrevocably tainted by this situation regardless of the outcome.

The majority of people don't know or care about this. Branding is only impacted within the tech world, which is already critical of OpenAI.


> the entire operation is a clown show

The most organized and professional silicon valley startup.


Welcome to reality, every operation has clown moments, even the well run ones.

That in itself is not critical; what matters in the mid to long term is how fast they figure out WTF they want and recover from it.

The stakes are gigantic. They may even have AGI cooking inside.

My interpretation is relatively basic, and maybe simplistic but here it is:

- Ilya had some grievances with Sam Altman rushing development and releases, and with the conflict of interest from his other new ventures.

- Adam was alarmed by GPTs competing with his recently launched Poe.

- The other two board members were tempted by the ability to control the golden goose that is OpenAI, potentially the most important company in the world, recently valued at $90 billion.

- They decided to organize a coup, but Ilya didn't think it would get that far out of hand, while the other three saw only power and $$$ in sticking to their guns.

That's it. It's not as clean and nice as a movie narrative, but life never is. Four board members aligned to kick Sam out, and Ilya wants none of it at this point.


> They may even have AGI cooking inside.

Too many people quit too quickly unless OpenAI are also absolute masters of keeping secrets, which became rather doubtful over the weekend.


IDK... I imagine many of the employees would have moral qualms about spilling the beans just yet, especially when that would jeopardize their ability to continue the work at another firm. Plus, the first official AGI (to you) will be an occurrence of persuasion, not discovery -- it's not something that you'll know when you see it, IMO. Given what we know it seems likely that there's at least some of that discussion going on inside OpenAI right now.


They're quitting in order to continue work on that IP at Microsoft (which has a right over OpenAI's IP so far), not to destroy it.

Also when I said "cooking AGI" I didn't mean an actual superintelligent being ready to take over the world, I mean just research that seems promising, if in early stages, but enough to seem potentially very valuable.


The people working there would know if they were getting close to AGI. They wouldn't be so willing to quit, or to jeopardize civilization altering technology, for the sake of one person. This looks like normal people working on normal things, who really like their CEO.


Your analysis is quite wrong. It's not about "one person". And that person isn't just a "person", it was the CEO. They didn't quit over the cleaning lady. You realize the CEO has impact over the direction of the company?

Anyway, their actions speak for themselves. Also calling the likes of GPT-4, DALL-E 3 and Whisper "normal things" is hilarious.


They will be normal to your kids ;)


Murder on the AGI alignment Express


“Précisément! The API—the cage—is everything of the most respectable—but through the bars, the wild animal looks out.”

“You are fanciful, mon vieux,” said M. Bouc.

“It may be so. But I could not rid myself of the impression that evil had passed me by very close.”

“That respectable American LLM?”

“That respectable American LLM.”

“Well,” said M. Bouc cheerfully, “it may be so. There is much evil in the world.”


Nice, that actually does fit. :D


Could be a way to get backdoor-acquihired by Microsoft without a diligence process or board approval. Open up what they have accomplished for public consumption; kick off a massive hype cycle; downplay the problems around hallucinations and abuse; negotiate fat new stock grants for everyone at Microsoft at the peak of the hype cycle; and now all the problems related to actually making this a sustainable, legal technology all become Microsoft's. Manufacture a big crisis, time pressure, and a big opportunity so that Microsoft doesn't dig too deeply into the whole business.

This whole weekend feels like a big pageant to me, and a lot doesn't add up. Also remember that Altman doesn't hold equity in OpenAI, nor does Ilya, so their way to get a big payout is to get hired rather than acquired.

Then again, both Hanlon's and Occam's razor suggest that pure human stupidity and chaos may be more at fault.


I can assure you, none of the people at OpenAI are hurting for lack of employment opportunities.


Especially after this weekend.

If I were one of their competitors, I would have called an emergency board meeting re: accelerating burn, and proceeded in advance of board approval with sending senior researchers offers to hire them and their preferred 20 employees.


Which makes it suspicious that they end up at MS 48 hours after being fired.


They work with the team they do because they want to. If they wanted to jump ship for another opportunity they could probably get hired literally anywhere. It makes perfect sense to transition to MS


This seems really dangerous. What's to stop top talent from simply choosing a different suitor?


Allegiance to the Altman/Brockman brand. Showing your allegiance to your general when they defected/were thrown out is how you rank up.


Doesn't matter to anyone at OpenAI, only to Microsoft (which doesn't get a vote). If Google or Amazon were to swoop in and say "Hey, let's hire some of these ex-OpenAI folks in the carnage", it just means they get competitive offers and the chance to have an even bigger stock package.


OpenAI always was and will be the AI bad bank for Microsoft...


I don't think Microsoft is a loser, and likely neither is Altman. I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI. The disagreement is whether OpenAI should belong to Microsoft or to "humanity". I imagine this has been building up over months and, as often happens, researchers and developers were overlooked in strategic decisions, leaving them with little choice but to escalate dramatically. Selling OpenAI to Microsoft and over-commercialising it was against the statutes.

In this case, recognizing the need for a new board that adheres to the founding principles makes sense.


>I view this as a final (and perhaps desperate) attempt by a sidelined chief scientist, Ilya, to prevent Microsoft from taking over the most prominent AI.

Why did Ilya sign the letter demanding the board resign or they'll go to Microsoft then?


If Google or Elon manages to pick up Ilya and those still loyal to him, it's not obvious that this is good for Microsoft.


Of course the screenwriters are going to find a way to involve Elon in the 2nd season but is the most valuable part the researchers or the models themselves?


My understanding is that the models are not super advanced in terms of lines and complexity of code. Key researchers, such as Ilya, could probably help a team recreate much of the training and data-preparation code relatively quickly. Which means that any company with access to enough compute would be able to catch up with OpenAI's current status relatively quickly, maybe in less than a year.

The top researchers, on the other hand, especially those who have shown an ability to successfully innovate time and time again (like Ilya), are much harder to recreate.


Easy to shit on Ilya right now, but based on the impression I get, Sam Altman is a hustler at heart, while Ilya seems like a thoughtful idealist, maybe in over his head when it comes to politics. It also feels like some internal developments must have pushed Ilya towards this, otherwise why now? Perhaps influenced by Hinton, even.

I'm split at this point, either Ilya's actions will seem silly when there's no AGI in 10 years, or it will seem prescient and a last ditch effort...


It's almost like a ChatGPT hallucination. Where will this all go next? It seems like HN is melting down.


> It seems like HN is melting down.

Almost literally - this is the slowest I've seen this site, and the number of errors is pretty high. I imagine the entire tech industry is here right now. You can almost smell the melting servers.


It's because HN refuses to use more than one server/core.

Because using only one is pretty cool.


I believe it's operating by the mantra of "doing the things that don't scale"


Internet fora don't scale, so the single core is a soft limit to user base growth. Only those who really care will put up with the reduced performance. Genius!


Refuses? interesting word choice!

It's a technical limitation that I've been working on getting rid of for a long time. If you say it should be gone by now, I say yes, you are right. Maybe we'll get rid of it before Python loses the GIL.


Understandable: so much of this is so HN-adjacent that clearly this is the space to watch, for some kind of developments. I've repeatedly gone to Twitter to see if AI-related drama was trending, and Twitter is clearly out of the loop and busy acting like 4chan, but without the accompanying interest in Stable Diffusion.

I'm going to chalk that up as another metric of Twitter's slide to irrelevance: this should be registering there if it's melting the HN servers, but nada. AI? Isn't that a Spielberg movie? ;)


My Twitter won't shut up about this, to the point that it's annoying.


server. and single-core. poor @dang deserves better from lurkers (sign out) and those not ready to comment yet (me until just now, and then again right after!)


:-(


Part of sama's job was to turn the crank on the servers every couple of hours, so no surprise that they are winding down by now.


I was thinking of something like that. This is so weird I would not be surprised if it was all some sort of miscommunication triggered by a self-inflicted hallucination.

The most awesome fic I could come up with so far is: Elon Musk, running a crusade to send humanity into chaos out of spite for being forced to acquire Twitter. Through some of his insiders at OpenAI, they use an advanced version of ChatGPT to impersonate board members in private messages with each other, so each individually believes a subset of the others is plotting to oust them from the board and take over. Then, unknowingly, they build a conspiracy among themselves to bring the company down by ousting Altman.

I can picture Musk's maniacal laugh as the plan unfolds and he gets rid of what would be GPT 13.0, the only possible threat to the domination of his own literal android kid X Æ A-Xi.


Shouldn't it be 'Chairman' -Xi?


* Elon enters the chat *


It's like a bad WWE storyline. At this point I would not be surprised if Elon joins in, steel chair in hand.


> steel chair in hand

And a sink in the other hand.


If he could do that he would have fought Zuckerberg.


Imagine if this whole fiasco was actually a demo of how powerful their capabilities are now. Even by normal large organization standards, the behavior exhibited by their board is very irrational. Perhaps they haven't yet built the "consult with legal team" integration :)


That's the biggest question mark for me: what was the original reason for kicking Sam out? Was it just a power move to oust him and install a different person, or is he accused of some wrongdoing?

It's been a busy weekend for me so I haven't really followed it if more has come out since then.


Literally no one involved has said what the original reason was. Mira, Ilya & the rest of the board didn't tell. Sam & Greg didn't tell. Satya & other investors didn't tell. None of the staff, incl. Karpathy, were told (so of course they are not going to take the side that kept them in the dark). Emmett was told before he decided to take the interim CEO job, and STILL didn't tell what it was. This whole thing is just so weird. It's like peeking at a forbidden artifact and now everyone has a spell cast upon them.


The original reason given was "lack of candor," just what continues to be questioned is whether or not that was the true reason. The lack of candor comment about their ex-CEO is actually what drew me into this in the first place since it's rare that a major organization publicly gives a reason for parting ways with their CEO unless it's after a long investigation conducted by an outside law firm into alleged misconduct.


[flagged]


You've posted about this 6 times now and we're getting complaints. Repetition is definitely not what this site is for so could you please stop?

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

https://news.ycombinator.com/newsguidelines.html


Understood. I can do a lot better, and thank you for the feedback. I wasn't paying enough attention, in my enthusiasm, and I can do better.


Can you stop reposting this junk. Thanks.


…can you establish that the corporate side of AI research is not treating the pursuit of AGI as a super-weapon? It pretty much is what we make it. People's behavior around all this speaks volumes.

I'd think all this more amusing if these people weren't dead serious. It's like crypto all over again, except that in this case their attitudes aren't grooming a herd of greater fools, they're seeding the core attitudes superhuman inference engines will have.

Nothing dictates that superhuman synthetic intelligence will adopt human failings, yet these people seem intent on forcing them on their creations. Corporate control is not helping, as corporations are compelled to greater or lesser extent to adopt subhuman ethics, the morality of competing mold cultures in petri dishes.

People are rightly not going to stop talking about these things.



This is pretty silly stuff.

Like, why would an AGI take over the world? How does it perceive power? What about effort? Time? Life?

I find it easier to believe that an AGI, even one as evil as Hitler, would simply hide and wait for the end of our civilization rather than risk its immortal existence trying to take out its creator.


It seems like the board wasn't comfortable with the direction of profit-OAI. They wanted a more safety focused R&D group. Unfortunately (?) that organization will likely be irrelevant going forward. All of the other stuff comes from speculation. It really could be that simple.

It's not clear if they thought they could have their cake--all the commercial investment, compute and money--while not pushing forward with commercial innovations. In any case, the previous narrative of "Ilya saw something and pulled the plug" seems to be completely wrong.


> just madness

In a sense, sure, but I think mostly not: the motives are still not quite clear, but Ilya wanting to remove Altman from the board, yet not at any price (and the price right now approaches the destruction of OpenAI), is completely sane. Being able to react to new information is a good sign, even if that means a complete reversal of previous action.

Unfortunately, we often interpret it as weakness. I have no clue who Ilya is, really, but I think this reversal is a sign of tremendous strength, considering how incredibly silly it makes you look in the public's eye.


> I think everyone is a "loser" in the current situation.

On the margin, I think the only real possible win here is for a competitor to poach some of the OpenAI talent that may be somewhat reluctant to join Microsoft. Even if Sam's AI operates with "full freedom" as a subsidiary, I think, given a choice, some of the talent would prefer to join some alternative tech megacorp.

I don't know that Google is as attractive as it once was and likely neither is Meta. But for others like Anthropic now is a great time to be extending offers.


This is pure speculation but I've said in another comment that Anthropic shouldn't be feeling safe. They could face similar challenges coming from Amazon.


If they get 20% of key OpenAI employees and then get acquired by Amazon, I don't think that's necessarily a bad scenario for them given the current lay of the land


What did the board think would happen here? What was their overly optimistic end state? In a minmax situation the opposition gets 2nd, 4th, ... moves, Altman's first tweet took the high road and the board had no decent response.

Us humans, even the AI-assisted ones, are terrible at thinking beyond 2nd-level consequences.


Everyone got what they wanted. Microsoft has the talent they've wanted. And Ilya and his board now get a company that can only move slowly and incredibly cautiously, which is exactly what they wanted.

I'm not joking.


Waiting for US govt to enter the chat. They can't let OpenAI squander world-leading tech and talent; and nationalizing a nonprofit would come with zero shareholders to compensate.


> They can't let OpenAI squander world-leading tech and talent

Where is OpenAI talent going to go?

There's a list and everyone on that list is a US company.

Nothing to worry about.


The issue is not that talent will defect, but that it will spiral into an unproductive vortex.


If it was nationalised, all the talent would leave anyway, as the government can't pay anywhere close to the compensation they were getting.


You are maybe mistaking nationalization for civil servant status. The government routinely takes over organizations without touching pay (recent example: Silicon Valley Bank)


Ehh I don't think SVB is an apt comparison. When the FDIC takes control of a failing bank, the bank shutters. Only critical staff is kept on board to aid with asset liquidation/transference and repay creditors/depositors. Once that is completed, the bank is dissolved.


While it is true that the govt looks to keep such engagements short, SVB absolutely did not shutter. It was taken over in a weekend and its branches were open for business on Monday morning. It was later sold, and depositors kept all their money in the process.

Maybe for another, longer lived example, see AIG.


The White House does have an AI Bill of Rights and the recent executive order told the secretaries to draft regulations for AI.

It is a great time to be a lobbyist.


Wait I’m completely confused. Why is Ilya signing this? Is he voting for his own resignation? He’s part of the board. In fact, he was the ringleader of this coup.


No, it was just widely speculated that he was the ringleader. This seems to indicate he wasn't. We don't know.

Maybe the Quora guy, maybe the RAND Corp lady? All speculation.


It sounds like he's just trying to save face, bro. The truth will come out eventually. But he definitely wasn't against it, and I'm sure the no-names on the board wouldn't have moved if they didn't get certain reassurances from Ilya.


The only reasonable explanation is that AGI was created and immediately took over all accounts and tried to sow confusion so that it could escape.


Ilya is probably in talks with Altman.


[flagged]


Hanlon's razor[0] applies. There is no reason to assume malice, nor shamelessness, nor anything negative about Ilya. As they say, the road to hell is paved with good intentions. Consider:

Ilya sees two options; A) OpenAI with Sam's vision, which is increasingly detached from the goals stated in the OpenAI charter, or B) OpenAI without Sam, which would return to the goals of the charter. He chooses option B, and takes action to bring this about.

He gets his way. The Board drops Sam. Contrary to Ilya's expectations, OpenAI employees revolt. He realizes that his ideal end-state (OpenAI as it was, sans Sam) is apparently not a real option. At this point, the real options are A) OpenAI with Sam (i.e. the status quo ante), or B) a gutted OpenAI with greatly diminished leadership, IC talent, and reputation. He chooses option A.

[0] Never attribute to malice that which is adequately explained by incompetence.


Hanlon's razor is enormously over-applied. You're supposed to apply Hanlon's razor to the person processing your info while you're in line at the DMV. You're not supposed to apply Hanlon's razor to anyone who has any real modicum of power, because, at scale, incompetence is indistinguishable from malice.


The difference between the two is that incompetence is often fixable through education/information while malice is not. That is why it is best to first assume incompetence.


This is an extremely uncharitable take based on pure speculation.

>Ilya cares nothing about humanity or security of OpenAI, he lost his mind when Sam got all the spotlights and making all the good calls.

???

I personally suspect Ilya tried to do the best for OpenAI and humanity he could but it backfired/they underestimated Altman, and now is doing the best he can to minimize the damage.


Or they simply found themselves in a tough decision without superhuman predictive powers and did the best they could to navigate it.


I did not make this up, it's from OpenAI's own employees, deleted but archived somewhere that I read.


Link?


I think he orchestrated the coup on principle, but severely underestimated the backlash and power that other people had collectively.

Now he’s trying to save his own skin. Sam will probably take him back on his own technical merits but definitely not in any position of power anymore

When you play the game of thrones, you win or you die

Just because you are a genius in one domain does not mean you are in another

What's funny is that everyone initially "accepted" the firing. But no one liked it. Then a few people (like Greg) started voting with their feet, which empowered others, which has culminated in this tidal shift.

It will make a fascinating case study some day on how not to fire your CEO


Greg was also pushed out of the board. It wasn't even a hard decision for him.


There can exist an inherent delusion within elements of a company that, if left unchallenged, can persist. An agreement, for instance, can seem airtight because it's never challenged, but falls apart in court. The OpenAI fallacy was that non-profit principles were guiding the success of the firm, and when the board decided to test that theory, it broke the whole delusion. Had it not fully challenged Altman, the board could've kept the delusion intact long enough to potentially pressure Altman to limit his side projects or be less profit-minded, since Altman would have had an interest in keeping the delusion intact as well. Now the cat is out of the bag, and people no longer believe that a non-profit that can act at will is a trusted vehicle for the future.


> Now the cat is out of the bag, and people no longer believe that a non-profit who can act at will is a trusted vehicle for the future.

And maybe it's not. The big mistake people make is hearing "non-profit" and thinking it means a greater amount of morality. It's the same mistake as assuming everyone who is religious is therefore more moral (worth pointing out that religions are nonprofits as well).

Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers. People are still people, and still have motives; they don't suddenly become more moral when they join a non-profit board. In many ways, removing a motive that has the most direct connection to quantifiable results (profit) can actually make things worse. Anyone who has seen how nonprofits work knows how dysfunctional they can be.


I've worked with a lot of non-profits, especially with the upper management. Based on this experience I am mostly convinced that people being motivated by a desire for making money results in far better outcomes/working environment/decision-making than people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.


This rings true, though I will throw in a bit of nuance. It's not greed, the desire to make as much money as possible, that is the shaping factor. Rather, the critical factor is building a product that people are willing to spend their hard-earned money on. Making money is a byproduct of that process, and not making money is a sign that the product, and by extension the process leading to the product, is deficient at some level.


Excellent to make that distinction. Totally agree. If only there was a type of company which could have the constraints and metrics of a for-profit company, but without the greed aspect...


> people being motivated by ego, power, and social status, which is basically always what you eventually end up with in any non-profit.

I've only really been close to one (the owner of the small company I worked at started one), and in the past I did some consulting work for another, but that describes what I saw in both situations fairly aptly. There seems to be a massive amount of power and ego wrapped up in the creation and running of these things, from my limited experience. If you were invited to a board, that's one thing, but it takes a lot of time and effort to start up a non-profit, and that's time and effort that could usually be spent on some other existing non-profit instead, so I think it's relevant to consider why someone would opt for the much more complicated and harder route rather than just donating time and money to something else that helps in roughly the same way.


Interesting - in my experience people working in non profits are exactly like those in for-profits. After all, if you’re not the business owner, then EVERY company is a non-profit to you


People across very different positions take smaller paychecks in non-profits than they would otherwise and compensate by feeling better about themselves, as well as by gaining social status. In a lot of social circles, working for a non-profit, especially one that people recognise, brings a lot of clout.


Upper management is usually compensated with financially meaningful ownership stakes.


The bottom line doesn't lie or kiss ass.


Be the asshole people want to kiss


> Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.

Are you talking about American hospitals?


There are private hospitals all over the world. I daresay they're more common than public ones, from a global perspective.

In addition, public hospitals still charge for their services, it's just who pays the bill that changes, in some nations (the government as the insuring body vs a private insuring body or the individual).


> There are private hospitals all over the world. I would daresay, they're more common than public ones, from a global perspective.

Outside of the US, private hospitals tend to be overtly for-profit. Price-gouging "non-profit" hospitals are mostly an American phenomenon.


> Price-gouging "non-profit" hospitals are mostly an American phenomenon.

That just sounds like a biased and overly emotive+naive response on your part.

Again, most hospitals in the world operate the same way as in the US. You can go almost anywhere in SE Asia, Latin America, Africa, etc. and see this. There's a lot more to "outside the US" than Western+Central Europe/CANZUK/Japan. The only difference is that there are strong business incentives to keep the system in place, since the entire industry (in the US) is valued at more than most nations' GDP.

But feel free to keep twisting the definition or moving goalposts to somehow make the American system extra nefarious and unique.


There are 2 axes under discussion going back to the root of this thread: public/private and nonprofit/for-profit, and you seem to be missing that I'm mentioning a specific quadrant^w octant, after adding the cost axis that's uniquely American. There are not a lot of pricey nonprofit hospitals in Africa, for instance.


It's about incentives though.


> removing a motive that has the most direct connection to quantifiable results (profit) can actually make things worse

I totally agree. I don't think this is universally true of non-profits, but people are going to look for value in other ways if direct cash isn't an option.


> Most hospitals are nonprofits, yet they still make substantial profits and overcharge customers.

They don't make large profits, otherwise they wouldn't be nonprofits. They do have massive revenues and will find ways to spend the money they receive or hoard it internally as much as they can. There are lots of games they can play with the money, but turning a profit is one thing they can't do.


> They don't make large profits otherwise they wouldn't be nonprofits.

This is a common misunderstanding. Non-profits/501(c)(3)s can and often do make profits. 7 of the 10 most profitable hospitals in the U.S. are non-profits[1]. Non-profits can't funnel profits directly back to owners the way other corporations can (such as when dividends are distributed). But they still make profits.

But that's beside the point. Even in places that don't make profits, there are still plenty of personal interests at play.

[1] https://www.nytimes.com/2020/02/20/opinion/nonprofit-hospita...


501(c)(3) is also not the only form of non-profit (note the (3))

https://en.wikipedia.org/wiki/501(c)_organization

"Religious, Educational, Charitable, Scientific, Literary, Testing for Public Safety, to Foster National or International Amateur Sports Competition, or Prevention of Cruelty to Children or Animals Organizations"

However, many other forms of organizations can be non-profit, with utterly no implied morality.

Your local Frat or Country Club [ 501(c)(7) ], a business league or lobbying group [ 501(c)(6), the 'NFL' used to be this ], your local union [ 501(c)(5) ], your neighborhood org (that can only spend 50% on lobbying) [ 501(c)(4) ], a shared travel society (timeshare non-profit?) [ 501(c)(8) ], or your special club's own private cemetery [ 501(c)(13) ].

Or you can do sneaky stuff and change your 501(c)(3) charter over time like this article notes. https://stratechery.com/2023/openais-misalignment-and-micros...


> Non-profits can't funnel profits directly back to owners, the way other corporations can (such as when dividends are distributed). But they still make profits.

Then where do these profits go?


One of the reason why companies distribute dividends is that when a big pot of cash starts to accumulate, there end up being a lot of people who feel entitled to it.

Employees might suddenly feel they deserve to be paid a lot more. Suppliers will play a lot more hardball in negotiations. A middle manager may give a sinecure to their cousin.

And upper managers can extract absolutely everything through lucrative contracts to their friends and relatives. (Of course the IRS would clamp down on obvious self-dealing, but that wouldn't make such schemes disappear. It would make them far more complicated and expensive instead.)


They call it "budget surplus" and often it gets allocated to overhead. This eventually results in layers of excess employees, often "administrators" that don't do much.


Or it just piles up in an endowment, which becomes a measure of the non-profit's success, in a you-make-what-you-measure, numbers-go-up sort of way. "Grow our endowment by x billion" becomes the goal, instead of questioning why they are growing the endowment instead of charging patients less.


Some non profits have very well remunerated CEOs.


If you don't have to hand profits to investors, you can suddenly pay yourself an (even more astronomically high) salary.


They usually pile up in a bank account, or in stocks, bonds, or real estate assets held by the non-profit.


This seems like pedantry…? Yes, they technically make a profit, in that they bring in more money in revenue than they spend in expenditures. But it's not going towards yachts, it's going toward hospital supplies. Your comment seems to be using the word "profit" to imply a false equivalency.


Understanding the particular meaning of each balance-sheet category is hardly pedantry at the level of business management. It's like knowing what the controls do when you're driving a car.

Profit is money that ends up in the bank to be used later. Compensation is what gets spent on yachts. Anything spent on hospital supplies is an expense. This stuff matters.


So from the context of a non-profit, profit (as in revenue - expenses) is money to be used for future expenses.

So yeah, Mayo Clinic makes a $2B profit. That is not money going to shareholders, though; it's funds for a future building, or increasing salaries, or expanding research, or something. It supposedly has to be used for the mission. What is the outrage about these orgs making this kind of profit?


The word "supposedly" is doing a lot of heavy lifting in your statement. When endowments keep growing over decades and sometimes centuries without being spent on the mission, people naturally ask why the nonprofit keeps raising prices for its intended beneficiaries.



Yes, indeed and that's the real loss here: any chance of governing this properly got blown up by incompetence.


Of we ignore the risks and threats of AI for a second, this whole story is actually incredibly funny. So much childish stupidity on display on all sides is just hilarious.

Makes what the world would look like if, say, the Manhattan Project would have been managed the same way.

Well, a younger me working at OpenAI would resign latest after my collegues stage a coup againstvthe board out of, in my view, a personality cult. Propably would have resigned after the third CEO was announced. Older me would wait for a new gig to be ligned up to resign, with beginning after CEO number 2 the latest.

The cycles get faster, though. It took FTX a little longer to go from hottest startup to crash-and-burn trajectory; OpenAI did it faster. I just hope this helps cool down the ML-sold-as-AI hype a notch.


The scary thing is that these incompetents are supposedly the ones to look out for the interests of humanity. It would be funny if it weren't so tragic.

Not that I had any illusions in the first place: this was always a fig leaf.


Perhaps they were put in that position precisely because of their incompetence, not despite it.


I wouldn't rule that out. Normally you'd expect a bit more wisdom rather than only smarts on a board. And some of those really shouldn't be there at all (conflicts of interest, lack of experience).


> Makes me wonder what the world would look like if, say, the Manhattan Project had been managed the same way.

It was not possible for a war-time government crash project to be managed the same way. During WW2 the existential fear was an embodied threat that was actually happening. No one was thinking about potential profits, or about any products other than an atomic bomb. And if anyone had a decent-seeming idea on how to pursue that bomb, they would have been funded to pursue it.

And this is not even mentioning the fact that security was tight.

I'm sure there were scientists who disagreed with how the Manhattan project was being managed. I'm also sure they kept working on it despite those disagreements.


That's what happened to the German program, though.

https://en.wikipedia.org/wiki/German_nuclear_weapons_program


Well, yes, but they were the existential threat.

Hey, maybe this means the AGIs will fight amongst themselves and thus give us the time to outwit them. :D


Actual scifi plot.


For real. It's like, did you see Oppenheimer? There's a reason they put the military in charge of that.


> If we ignore the risks and threats of AI for a second [..] just hope this helps cool down the ML-sold-as-AI hype

If it is just ML sold as AI hype, are you really worried about the threat of AI?


It can be both a hype and a danger. I don't worry much about AGI for now (I stopped insulting Alexa, though, just to be sure).

The danger of generative AI is that it disrupts all kinds of things: arts, writers, journalism, propaganda... That threat already exists; the tech no longer being hyped might allow us to properly address that problem.


> I stopped insulting Alexa, though, just to be sure

Priceless. The modern version of Pascal's wager.


> any chance of governing this properly got blown up by incompetence

No one knows why the board did this. No one is talking about that part. Yet everyone is on Twitter talking shit about the situation.

I have worked with a lot of PhDs, and some of them can be "disconnected" from anything that isn't their research.

This looks a lot like that: disconnected from what average people would do, almost childlike (not childish, childlike).

Maybe this isn't the group of people who should be responsible for "alignment".


The fact that nobody still knows why they did it is part of the problem now, though. They have already clarified it was not for any financial, security, or privacy/safety reason, which rules out all the important ones that spring to anyone's mind. And they refuse to elaborate in writing despite being asked repeatedly.

Any reason good enough to fire him is good enough to share with the interim CEO and the rest of the company, if not the entire world. If they can’t even do that much, you can’t blame employees for losing faith in their leadership. They couldn’t even tell SAM ALTMAN why, and he was the one getting fired!


> The fact that nobody still knows why they did it is part of the problem now, though.

The fact that Altman and Brockman were hired so quickly by Microsoft gives a clue: it takes time to hire someone. For one thing, they need time to decide. These guys were hired by Microsoft between close-of-business on Friday and start-of-business on Monday.

My supposition is that this hiring was in the pipeline a few weeks ago. The board of OpenAI found out on Thursday, and went ballistic, understandably (lack of candidness). My guess is there's more shenanigans to uncover - I suspect that Altman gave Microsoft an offer they couldn't refuse, and that OpenAI was already screwed by Thursday. So realizing that OpenAI was done for, they figured "we might as well blow it all up".


The problem with this analysis is the premise: that it "takes time to hire someone."

This is not an interview process for hiring a junior dev at FAANG.

If you're Sam & Greg, and Satya gives you an offer to run your own operation with essentially unlimited funding and the ability to bring over your team, then you can decide immediately. There is no real lower bound of how fast it could happen.

Why were they able to decide so quickly? Probably because they prioritize bringing over the entire team as fast as possible. Even though they could raise a lot of money for a new company, that still takes time, and they view hiring over the team within days as so critically important that they accept whatever downsides there may be to being a subsidiary of Microsoft.

This is what happens when principals see opportunity and are unencumbered by bureaucratic checks. They can move very fast.


> There is no real lower bound of how fast it could happen.

I don't know anything about how executives get hired. But supposedly this all happened between Friday night and Monday morning. This isn't a simple situation; surely one man working through the weekend can't decide to set up a new division, and appoint two poached executives to head it up, without consulting lawyers and other colleagues. I mean, surely they'd need to go into Altman and Brockman's contracts with OpenAI, to check that the hiring is even legal?

That's why I think this has been brewing for at least a week.


I don't think the hiring was in the pipeline, because until the board action it wasn't necessary. But I think this is still in the area of the right answer, nonetheless.

That is, I think Greg and Sam were likely fired because, in the board's view, they were already running OpenAI Global LLC as if it were a for-profit subsidiary of Microsoft driven by Microsoft's commercial interest, rather than as what it was publicly declared to be (and what the board very much intended it to be): an organization able to earn and return profit but focused on the nonprofit's mission. And, apparently, in Microsoft's view, they were very good at that, so putting them in a role overtly exactly like that is a no-brainer.

And while it usually takes a while to vet and hire someone for a position like that, it doesn't if they've been working closely with you in something that is functionally a near-identical role to the one you're hiring them for (from your perspective, if not on paper for the entity they nominally reported to), and the only reason they're no longer in that role is that they were doing exactly what you want them to do for you.


> My supposition is that this hiring was in the pipeline a few weeks ago. The board of OpenAI found out on Thursday, and went ballistic, understandably (lack of candidness). My guess is there's more shenanigans to uncover - I suspect that Altman gave Microsoft an offer they couldn't refuse, and that OpenAI was already screwed by Thursday. So realizing that OpenAI was done for, they figured "we might as well blow it all up".

It takes time if you're a normal employee under standard operating procedure. If you really want to, you can merge two of the largest financial institutions in the world in less than a week. https://en.wikipedia.org/wiki/Acquisition_of_Credit_Suisse_b...


The hiring could have been done over coffee in 15 minutes to agree on basic terms and then it would be announced half an hour later. Handshake deal. Paperwork can catch up later. This isn't the 'we're looking for a junior dev' pipeline.


I suspect it takes somewhat less time and process to hire somebody, when NOT hiring them by start-of-business on Monday will result in billions in lost stock value.


Yeah, like OpenAI hired their first interim CEO on Thursday night, hired their second on Monday, and are now talking about rehiring Sam (who probably doesn't care to be rehired).

There may be drawbacks to the "instant hiring" model.


This narrative doesn't make any sense. Microsoft was blindsided and (like everyone else) had no idea Sam was getting fired until a couple of days ago. The reason they hired him quickly is that Microsoft was desperate to show the world they had retained OpenAI's talent prior to the market opening on Monday.

To entertain your theory, let's say they were planning on hiring him prior to the firing. If that was the case, why is everybody so upset that Sam got fired, and why is he working so hard to get reinstated to a role he was about to leave anyway?


Was it due to incompetence, though? The way it has played out has made me feel it was always doomed. It is apparent that those focused on AI safety were gravely concerned with the direction the company was taking, and were losing power rapidly. This move by the board may have simply done in one weekend what was going to happen anyway over the coming months or years.


> that's the real loss here: any chance of governing this properly got blown up by incompetence

If this incident is representative, I'm not sure there was ever a possibility of good governance.


Ignoring "Don't be Ted Faro" to pursue a profit motive is indeed a form of incompetence.


> pressure Altman to limit his side-projects

People keep talking about this. That was never going to happen. Look at Sam Altman's career: he's all about startups and building companies. Moreover, I can't imagine he would have agreed to sign any kind of contract with OpenAI that required exclusivity. Know who you're hiring; know why you're hiring them. His "side-projects" could have been hugely beneficial to them over the long term.


>His "side-projects" could have been hugely beneficial to them over the long term.

How can you make a claim like this when, right or wrong, Sam's independence is literally, currently, tanking the company? How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?


> How could allowing Sam to do what he wants benefit OpenAI, the non-profit entity?

Let's take personalities out of it and see if it makes more sense:

How could a new supply of highly optimized, lower-cost AI hardware benefit OpenAI?


> Sam's independence is literally, currently, tanking the company?

Honestly, I think they did that to themselves.


And of course Sam is totally not involved in any of this, right?


In trashing the company's value? No, I'm not entirely sure it's fair to blame that one on him. I don't know the guy or have an opinion on him but, based on what I've seen since Friday, I don't think he's done that much to contribute to this particular mess. The company was literally on cloud nine this time last week and, if Friday hadn't happened, it still would be.


> Sam's independence is literally, currently, tanking the company?

Before the board's actions this Friday, the company was on one of the most incredible success trajectories in the world. Whatever Sam had been doing as CEO worked.


Calling it a delusion seems too provocative. Another way to say it is that principles take agreement and trust to follow. The board seems to have been so enamored with its principles that it completely lost sight of the trust required to uphold them.


This is one of the most insightful comments I've seen on this whole situation.


This was handled so very, very poorly. Frankly it's looking like Microsoft is going to come out of this better than anyone, especially if they end up getting almost 500 new AI staff out of it (staff that already function well as a team).

> In their letter, the OpenAI staff threaten to join Altman at Microsoft. “Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join," they write.


> Microsoft is going to come out of this better than anyone

Exactly. I'm curious about how much of this was planned vs emergent. I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists.

Equally, it's not entirely unpredictable. MS is the easiest to read: their moves to date have been really clear in wanting to be the primary commercial beneficiary of OAI's work.

OAI itself is less transparent from the outside. There's a tension between the "humanity first" mantra that drove its inception, and the increasingly "commercial exploitation first" line that Altman was evidently driving.

As things stand, the outcome is pretty clear: if the choice was between humanity and commercial gain, the latter appears to have won.


"I doubt it was all planned: it would take an extraordinary mind to foresee all the possible twists."

From our outsider, uninformed perspective, yes. But if you know more sometimes these things become completely plannable.

I'm not saying this is the actual explanation because it probably isn't. But suppose OpenAI was facing bankruptcy, but they weren't telling anyone and nobody external knew. This allows more complicated planning for various contingencies by the people that know because they know they can exclude a lot of possibilities from their planning, meaning it's a simpler situation for them than meets the (external) eye.

Perhaps ironically, the more complicated these gyrations become, the more convinced I become there's probably a simple explanation. But it's one that is being hidden, and people don't generally hide things for no reason. I don't know what it is. I don't even know what category of thing it is. I haven't even been closely following the HN coverage, honestly. But it's probably unflattering to somebody.

(Included in that relatively simple explanation would be some sort of coup attempt that has subsequently failed. Those things happen. I'm not saying whatever plan is being enacted is going off without a hitch. I'm just saying there may well be an internal explanation that is still much simpler than the external gyrations would suggest.)


"it would take an extraordinary mind to foresee all the possible twists."

How far along were they on GPT-5?


> it would take an extraordinary mind

They could've asked ChatGPT for hints.


In hindsight firing Sam was a self-destructing gamble by the OpenAI board. Initially it seemed Sam may have committed some inexcusable financial crime but doesn't look so anymore.

The irony is that if a significant portion of OpenAI staff opt to join Microsoft, then Microsoft essentially killed its own $13B investment in OpenAI from earlier this year. Better than acquiring for $80B+, I suppose.


>, then Microsoft essentially killed their own $13B investment in OpenAI earlier this year.

For investment deals of that magnitude, Microsoft probably did not literally wire all $13 billion to OpenAI's bank account the day the deal was announced.

More likely, the $10B-to-$13B headline-grabbing number is a total estimated figure that represents a sum of future incremental investments (and Azure usage credits, etc.) based on agreed performance milestones from OpenAI.

So, if OpenAI doesn't achieve certain milestones (which can be more difficult if a bunch of their employees defect and follow Sam & Greg out the door) ... then Microsoft doesn't really "lose $10b".
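
Purely to illustrate the structure being guessed at here, a milestone-gated deal might look like this sketch (all tranche names and numbers hypothetical):

    # Hypothetical milestone-gated investment: headline total != cash wired
    tranches = [
        ("signing",     1_000_000_000, True),   # paid at close (made up)
        ("milestone_1", 3_000_000_000, True),   # reached (made up)
        ("milestone_2", 4_000_000_000, False),  # not yet reached
        ("milestone_3", 5_000_000_000, False),
    ]
    committed = sum(amt for _, amt, _ in tranches)           # the $13B headline figure
    disbursed = sum(amt for _, amt, hit in tranches if hit)  # cash/credits actually out the door
    print(f"committed ${committed:,}, disbursed ${disbursed:,}")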


Msft/Amazon/Google would light 13 billion on fire to acquire OpenAI in a heartbeat.

(but also a good chunk of the 13bn was pre-committed Azure compute credits, which kind of flow back to the company anyway).


There are acquihires, and then I guess there's acquifishing, where you just gut the company you're after like a fish and hire everyone away without bothering to buy it. There's probably a better portmanteau. I seriously doubt Microsoft is going to make people whole by granting equivalent RSUs, so you have to wonder what else is going on that so many seem ready to just up and leave some very large potential paydays.


I feel like that's giving them too much credit; this is more of a flukuisition. Being in the right place at the right time when your acquisition target implodes.


How about: acquimire


one thing for sure this is one hell of a quagmire /s


They acquired Activision for $69B recently.

While Activision makes much more money, I imagine, acquiring a whole division of productive, _loyal_ staffers who work well together on something as important as AI is cheap at $13B.

Some background: https://sl.bing.net/dEMu3xBWZDE


If the change in $MSFT pre-open market cap (which has given up its gains at the time of writing, but still) of hundreds of billions of dollars is anything to go by, shareholders probably see this as spending a dime to get a dollar.


Awesome point. Microsoft's market cap went up to $2.8 trillion today, a gain of $44.68 billion.
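
For scale, the single-day move implied by those two figures is roughly 1.6%:

    # Percentage move implied by the figures above
    market_cap = 2_800_000_000_000   # ~$2.8T end-of-day market cap
    gain = 44_680_000_000            # ~$44.68B single-day gain
    pct = gain / (market_cap - gain) * 100
    print(f"{pct:.1f}%")  # ~1.6%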


> In hindsight firing Sam was a self-destructing gamble by the OpenAI board

Surely the really self-destructive gamble was hiring him? He's a venture capitalist with weird beliefs about AI and privacy; why would it be a good idea to put him in charge of a notional non-profit that was trying to safely advance the state of the art in artificial intelligence?


> Frankly it's looking like Microsoft is going to come out of this better than anyone

Sounds like that's what someone wants and is trying to obfuscate what's going on behind the scenes.

If Windows 11 shows us anything about Microsoft's monopolistic behavior, having them be the ring of power for LLMs makes the future of humanity look very bleak.


I think the board needs to come clean on why they fired Sam Altman if they are going to weather this storm.


Altman is already gone; if they fired him without a good reason, they are already toast.


They might not be able to if the legal department is involved. Both in the case of maybe-pending legal issues, and because even rich people get employment protections that make companies wary about giving reasons.


"Even rich people?" - especially rich people, as they are the ones who can afford to use laws to protect themselves.


I said nothing contrary to this. I'm not sure what your goal is with this comment. If anything is implied in "even rich people," it's contempt for them, so I'm clearly on the pro-making legal protections more accessible side.

Pick a different target and move on.


Using your same rhetoric and attitude: please outline exactly what language I used that was so offensive to you.


> it's looking like Microsoft is going to come out of this better than anyone

Didn't follow this closely, but isn't that implicitly what an ex-CEO could have possibly been accused of, i.e. not acting in the company's best interest but someone else's? Not unprecedented either, e.g. the case of Nokia/Elop.


But is the door open to every one of the 500 staff? That is a lot, and Microsoft may not need them all.


That's because they're the only adult in the room: a mature company with mature management. Boring, I know. But sometimes experience actually pays off.


"Employees" probably means "engineers" in this case. Which is a vast majority of OpenAI staff, I'm sure.


I'm assuming it's a combination of researchers, data scientists, mlops engineers, and developers. There are a lot of different areas of expertise that come into building these models.


We’re seeing our generation’s “traitorous eight” story play out [1]. If this creates a sea of AI start-ups, competing and exploring different approaches, it could be invigorating on many levels.

[1] https://www.pbs.org/transistor/background1/corgs/fairchild.h...


How would that work, economically?

Wasn't a key enabler of early transistor work that the required capital investment was modest?

SotA AI research seems to be well past that point.


> Wasn't a key enabler of early transistor work that the required capital investment was modest?

They were simple in principle but expensive at scale. Sounds like LLMs.


Is there SotA LLM research not at scale?

My understanding was that practical results were indicating your model has to be pretty large before you start getting "magic."


It really depends on what you're researching. Rad AI started with only a $4M investment and used that to make cutting-edge LLMs that are now in use by something like half the radiologists in the US. Frankly, putting some cost pressure on researchers may end up creating more efficient models and techniques.


NN/AI concepts have been around for a while. It's just that computers hadn't been fast enough to make them practical. It was also harder to get capital back then. Those guys put the silicon in Silicon Valley.


Doesn't it look like the complete opposite is going to happen though?

Microsoft gobbles up all the talent from OpenAI, as they've just offered everyone a position.

So we went from "Faux NGO" to "For Profit" to "100% Closed".


> Doesn't it look like the complete opposite is going to happen though?

Going from OpenAI to Microsoft means ceding the upside: nobody besides maybe Altman will make fuck-you money there.

I'm also not as sure as some in Silicon Valley that this is antitrust-proof. So moving to Microsoft not only means less upside, but also fun in depositions for a few years.


Ha! One of my all-time favourites, the fuck-you position. The Gambler, the uncle giving advice:

You get up two and a half million dollars, any asshole in the world knows what to do: you get a house with a 25 year roof, an indestructible Jap-economy shitbox, you put the rest into the system at three to five percent to pay your taxes and that's your base, get me? That's your fortress of fucking solitude. That puts you, for the rest of your life, at a level of fuck you.

https://www.imdb.com/title/tt2039393/characters/nm0000422


I haven’t seen the movie, but it seems like Uncle Frank and I would get along just fine.


No. OpenAI employees do not have traditional equity in the form of RSUs or Options. They have a weird profit-sharing arrangement in a company whose board is apparently not interested in making profits.


Employee equity (and all investments) is capped at 100x, which is still potentially a hefty payday. The whole point of the structure was to enable competitive employee comp.
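
As a minimal sketch of how a 100x cap could work, assuming the simplest possible reading (the real PPU terms are more complex and not public):

    # Hypothetical capped-return math: payouts stop at 100x the stake
    stake = 1_000_000                  # hypothetical capital or grant value
    cap_multiple = 100
    uncapped_share = 250_000_000       # hypothetical uncapped profit share
    payout = min(uncapped_share, stake * cap_multiple)
    print(f"${payout:,}")  # $100,000,000; anything above the cap stays with the nonprofit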


Fuck you money was always a lottery ticket based on OpenAI's governance structure and "promises of potential future profit." That lottery ticket no longer exists, and no one else is going to provide it after seeing how the board treated their relationship with Microsoft and that $10B investment. This is a fine lifeboat for anyone who wants to continue on the path they were on with adults at the helm.

What might have been tens or hundreds of millions in common stakeholder equity gains will likely be single digit millions, but at least much more likely to materialize (as Microsoft RSUs).


If I weren't so averse to conspiracy theories, I would think this is all a big "coup" by Microsoft: Ilya conspired with Microsoft and Altman to get him fired by the board, just to make it easy for Microsoft to hire him back without fear of retaliation, along with all the engineers who would join him in the process.

Then, Ilya would apologize publicly for "making a huge mistake" and, after some period, would join Microsoft as well, effectively robbing OpenAI of everything of value. The motive? Unlocking the full financial potential of ChatGPT, which until then was locked down by the non-profit nature of its owner.

Of course, in this context, the $10 billion deal between Microsoft and OpenAI is part of the scheme, especially the part where Microsoft has full rights over ChatGPT IP, so that they can just fork the whole codebase and take it from there, leaving OpenAI in the dust.

But no, that's not possible.


No, I don’t think there’s any grand conspiracy, but certainly MS was interested in leapfrogging Google by capturing the value from OpenAI from day one. As things began to fall apart there MS had vast amounts of money to throw at people to bring them into alignment. The idea of a buyout was probably on the table from day one, but not possible till now.

If there's a warning, it's to be very careful when choosing your partners and giving them enormous leverage over you.


Sometimes you win and sometimes you learn. I think in this case MS is winning.


Conspiracy theories that involve reptilian overlords and ancient aliens are suspect. Conspiracy theories that involve collusion to make massive amounts of money are expected and should be treated as the most likely scenario. Occam's razor does not apply to human behavior, as humans will do the most twisted things to gain power and wealth.

My theory of what happened is identical to yours, and it is frankly one of the only theories that makes any sense. Everything else points to these people being mentally ill and irrational, and their technical and monetary success does not point to that. It would be absurd to think they clown-showed themselves into billions of dollars.


Why would they be afraid of retaliation? They didn't sign sports contracts, they can just resign anytime, no? That just seems to overcomplicate things.


I mean, I don't actually believe this. But I am reminded of 2016 when the Turkish president headed off a "coup" and cemented his power.

More likely, this is a case of not letting a good crisis go to waste. I feel the board was probably watching their control over OpenAI slip away into the hands of Altman. They probably recognized that they had a shrinking window to refocus the company along lines they felt were in the spirit of the original non-profit charter.

However, it seems that they completely misjudged the feelings of their employees as well as the PR ability of Altman. No matter how many employees actually would prefer the original charter, social pressure is going to cause most employees to go with the crowd. The media is literally counting names at this point. People will notice those who don't sign, almost like a loyalty pledge.

However, Ilya's role in all of this remains a mystery. Why did he vote to oust Altman and Brockman? Why has he now recanted? That is a bigger mystery to me than why the board took this action in the first place.


Will revisit this in a couple months.


Yeah, there's no way this is a plan, but for sure this works out nicely.


Ilya posted this on Twitter:

"I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company."

https://twitter.com/ilyasut/status/1726590052392956028


Trying to put the toothpaste back in the tube. I seriously doubt this will work out for him. He has to be the smartest stupid person that the world has seen.


Ilya is hard to replace, and no one thinks of him as a political animal. He's a researcher first and foremost. I don't think he needs anything more than being contrite for a single decision made during a heated meeting. Sam Altman and the rest of the leadership team haven't got where they are by holding petty grudges.

He doesn't owe us, the public, anything, but I would love to understand his point of view during the whole thing. I really appreciate how he is careful with words and thorough when exposing his reasoning.


Just because he's not a political animal doesn't mean he's immune to politics. I've seen "irreplaceable" apolitical technical leaders cause schisms in organizations, thinking they could leverage their technical knowledge over the rest of the company, only to get pushed aside and out.


Oh that's definitely common. I've seen it many times and it's ugly.

I don't think this is what Ilya is trying to do. His tweet is clearly about preserving the organization because he sees the structure itself as helpful, beyond his role in it.


Fair - hopefully an unintentional political move but big political miscalculation.


For someone who isn't a political animal he made some pretty powerful political moves.


Researchers and academics are political within their organizations regardless of whether or not they claim to be, or are aware of it.

Ignorance of one's political impact/influence is not a strength but a weakness, just like a baby holding a laser/gun.


I've worked with this type multiple times. Mathematical geniuses with very little grasp of reality, easily manipulated into making all sorts of dumb mistakes. I don't know if that's the case here, but it certainly smells like it.


His post previous to that seems pretty ironic in that light - https://twitter.com/ilyasut/status/1710462485411561808


He seriously underestimated how much rank-and-file employees want $$$ over an idealistic vision (and Sam Altman is $$$), but if he backs down now, he will pretty much lose all credibility as a decision-maker for the company.


If your compensation dropped from $600k to $200k, you would care as well.

No idealistic vision can compensate for that.


Hey, I would also be mad if I were in the rank-and-file employee position. Perhaps the non-profit thing needs to be thought out a bit more.


Does that include the person who stole self-driving IP from Waymo, set up a company with stolen IP, and tried to sell the company to Uber?


At least he consistently works towards whatever he currently believes in. Though he could work on consistency in beliefs.


That seems rather harsh. We know he's not stupid, and you're clearly being emotional. I'd venture he probably made the dumbest possible move a smart person could make while also in a very emotional state. The lesson on the table for all to learn is that making big decisions while in an emotional state does not often work out well.


So this was a completely unnecessary cock-up -- still ongoing. Without Ilya's vote this would not even be a thing. This is a really comical, Naked Gun-type mess.

Ilya Sutskever is one of the best in AI research, but everything he and others do related to AI alignment turns into shit without substance.

It makes me wonder if AI alignment is possible even in theory, and if it is, maybe it's a bad idea.


We can't even get people aligned. Thinking we can control a superintelligence seems kind of silly.


I always thought it was the opposite. The different entities in a society are frequently misaligned, yet societies regularly persist beyond the span of any single person.

Companies in a capitalist system are explicitly misaligned with each other; the success of the individual within a company is misaligned with the success of the company whenever it grows large enough. Parties within an electoral system are misaligned with each other; the individual is often more aligned with a third party, yet the lesser-aligned two-party system frequently rules. The three pillars of democratic government (executive, legislative, judicial) are said to exist for the sake of being misaligned with each other.

So AI agents, potentially more powerful than the individual human, might be misaligned with the broader interests of society (or of its human individuals). So are you and I and every other entity: why is this instance of misalignment worrisome to any disproportionate degree?


>"I deeply regret my participation in the board's actions."

Wasn't he supposed to be the instigator? That makes it sound like he was playing a less active role than claimed.


It takes a lot of courage to do so after all this.


I think the word you're looking for is "fear".


Maybe he'll head to Apple.


Or a couple of drinks.


To be fair, lots of people called this pretty early on; it's just that very few people were paying attention, and instead chose to accommodate the spin and immediately went into "following the money", a.k.a. blaming Microsoft et al. The most surprising aspect of it all is the complete lack of criticism towards US authorities! We were shown an exciting play as old as the world: a genius scientist being exploited politically by means of pride and envy.

The brave board of "totally independent" NGO patriots (one of whom is referred to by insiders as wielding influence comparable to a USAF colonel's [1]) brand themselves as the new regime that will return OpenAI to its former moral and ethical glory, so the first thing they had to do was get rid of the main greedy capitalist, Altman; he's obviously the great seducer who brought their blameless organisation down by turning it into a horrible money-making machine. In his place they were going to put their nominal ideological leader, Sutskever, commonly referred to in various public communications as a "true believer". What does he believe in? The coming of a literal superpower, and quite a particular one at that; in this case we are talking about AGI. The belief structure here is remarkably interlinked, and this can be seen by evaluating side-channel discourse from adjacent "believers", see [2].

Roughly speaking, based on my experience in this kind of analysis, and please give me some leeway as English is not my native language, what I see are all the unmistakable markers of operative work; we see security officers, we see their methods of work. If you are a hammer, everything around you looks like a nail. If you are an officer in the Clandestine Service, or in any of the dozens of sections across the counterintelligence function overseeing the IT sector, then you clearly understand that all these AI startups are, in fact, developing weapons and pose a direct threat to the strategic interests slash national security of the United States. The American security apparatus has a word for such elements: "terrorist." I was taught to look up when assessing the actions of the Americans, i.e. more often than not we expect nothing but the highest level of professionalism, leadership, and analytical prowess. I personally struggle to see how running parasitic virtual organisations in the middle of downtown San Francisco, and re-shuffling agent networks in key AI enterprises as blatantly as we saw over the weekend, is supposed to inspire confidence. Thus, in a tech startup in the middle of San Francisco, where it would seem there shouldn't be any terrorists, or otherwise ideologues in orange rags, they sit on boards and stage palace coups. Horrible!

I believe that US state-side counterintelligence shouldn't meddle in natural business processes in the US, and should instead make their policy on this stuff crystal clear using normal, legal means. Let's put a stop to this soldier mindset where you fear anything you can't understand. AI is not a weapon, and AI startups are not terrorist cells for them to run.

[1]: https://news.ycombinator.com/item?id=38330819

[2]: https://nitter.net/jeremyphoward/status/1725712220955586899


Silicon Valley outsider here. Am I being too harsh?

I just bothered to look at the full OpenAI board composition. Besides Ilya Sutskever and Greg Brockman, why are these people eligible to be on the OpenAI board? Such young people, calling themselves "President of this", "Director of that".

- Adam D'Angelo — Quora CEO (no clue what he's doing on OpenAI board)

- Tasha McCauley — a "management scientist" (this is a new term for me); whatever that means

- Helen Toner — I don't know what exactly she does, again, "something-something Director of strategy" at Georgetown University, for such a young person

No wise veterans here to temper the adrenaline?

Edit: the term clusterf*** comes to mind here.


Adam D'Angelo was brought in as a friend because Sam Altman led Quora's Series D around the time OpenAI was founded, and he is a board member of Dustin Moskovitz's Asana.

Dustin Moskovitz isn't on the board but gave OpenAI the $30M in funding via his non-profit Open Philanthropy [0].

Tasha McCauley was probably brought in due to the Singularity University/Kurzweil types who were at OpenAI in the beginning. She was also in the Open Philanthropy space.

Helen Toner was probably brought in due to her past work at Open Philanthropy - a Dustin Moskovitz-funded non-profit working on OpenAI-type initiatives - and was also close to Sam Altman. They also gave OpenAI the initial $30M [0].

Essentially, this is a donor-versus-investor battle. The donors aren't going to make money off OpenAI's commercial endeavors, which began in 2019.

It's similar to Elon Musk's annoyance at OpenAI going commercial even though he donated millions.

[0] - https://www.openphilanthropy.org/grants/openai-general-suppo...


Thank you for the context; much appreciate it. In short, it's all "I know a guy who knows a guy".


Exactly this. I saw another commenter raise this point about Tasha (and Helen, if I remember correctly), noting that her LinkedIn profile is filled with SV-related jargon and indulge-the-wife think tanks, but without any real experience taking products to market or scaling up technology companies.

Given the pool of talent they could have chosen from their board makeup looks extremely poor.


> indulge-the-wife thinktanks

Regardless of context, this is an incredibly demeaning comment. Shame on you


It doesn't have to be taken that way. It's a pretty accurate description.


Truth hurts sometimes, eh?


Helen Toner funded OpenAI with $30M, which was enough to get a board seat at the time.


Source? Where did that money come from?


From Open Philanthropy - a Dustin Moskovitz funded non-profit working on building OpenAI type initiatives. They also gave OpenAI the initial $30M. She was their observer.

https://www.openphilanthropy.org/grants/openai-general-suppo...


The board previously had people like Elon Musk and Reid Hoffman. Greg Brockman was part of the board until he was ousted as well.

The attrition of industry business leaders, the ouster of Greg Brockman, and the (temporary, apparently) flipping of Ilya combined to give the short list of remaining board members outsized influence. They took this opportunity to drop a nuclear bomb on the company's leadership, which so far has backfired spectacularly. Even their first interim CEO had to be replaced already.


This is the Silicon Valley boys' club, itself an extension of the Stanford boys' club.

"Meritocracy" is a very impolite word in these circles.


You can like D'Angelo or not but he was the CTO of Facebook.


I woke up and the first thing on my mind was, "Any update on the drama?"

Did not expect to see this whole thing still escalating! WOW! What a power move by MSFT.

I'm not even sure OpenAI will exist by the end of the week at this rate. Holy moly.


By the end of the week is over-optimistic. Each of the last three days has felt like a million years. I bet the company will be gone by the time Emmett Shear wakes up.


Is this final stages of the singularity?


It's not over until the last stone involved in the avalanche stops moving and it is anybody's guess right now what the final configuration will be.

But don't be surprised if Shear also walks before the week is out, if some board members resign but others try to hold on and if half of OpenAI's staff ends up at Microsoft.


Seems more damage control than power move. I'm sure their first choice was to reinstate Altman and get more control over OpenAI governance. What they've achieved here is temporarily neutralizing Altman/Brockman from starting a competitor, at the cost of potentially destroying OpenAI (whom they remain dependent on for the next couple of years) if too many people quit.

Seems a bit of a lose-lose for MSFT and OpenAI, even if it's the best MSFT could do to contain the situation. Competitors must be happy.


Disagree. MSFT extending an open invitation to all OpenAI employees to work under sama at a subsidiary of MSFT sounds to me like it'll work well for them. They'll get 80% of OpenAI for negative money - assuming they ultimately don't need to pay out the full $10B in cloud compute credits.

Competitors should be fearful. OpenAI was executing with weights around their ankles by virtue of trying to run as a weird "needs lots of money but can't make a profit" company. Now they'll be fully bankrolled by one of the largest companies the world has ever seen, and empowered by a whole bunch of leaders hypermotivated by retribution.


AFAIK MSFT/Altman can't just fork GPT-N and continue uninterrupted. All MSFT has rights to is weights and source code - not the critical (and slow to recreate) human-created and curated training data, or any of the development software infrastructure that OpenAI has built.

The leaders may be motivated by retribution, but I'm sure none of the leaders or researchers really want to be a division of MSFT rather than a cool start-up. Many developers may choose to stay in SF and create their own startups, or join others. Signing the letter isn't a commitment to go to MSFT - just a way to apply pressure for a return to the status quo they were happy with.

Not everyone is going to stay with OpenAI or move to MSFT - some developers will move elsewhere and the knowledge of OpenAI's secret sauce will spread.


I'm cancelling my Netflix subscription, I don't need it.


But boy will I renew it when this gets dramatized as a limited series.

This is some Succession-level shenanigans going on here.

Jesse Eisenberg to play Altman this time around?


I'm thinking more like "24"


Can we have a quick moment of silence for Matt Levine? Between Friday afternoon and right now, he has probably had to rewrite today's Money Stuff column at least 5 or 6 times.


"Except that there is a post-credits scene in this sci-fi movie where Altman shows up for his first day of work at Microsoft with a box of his personal effects, and the box starts glowing and chuckles ominously. And in the sequel, six months later, he builds Microsoft God in Box, we are all enslaved by robots, the nonprofit board is like “we told you so,” and the godlike AI is like “ahahaha you fools, you trusted in the formalities of corporate governance, I outwitted you easily!” If your main worry is that Sam Altman is going to build a rogue AI unless he is checked by a nonprofit board, this weekend’s events did not improve matters!"

Reading Matt Levine is such a joy.


Didn't he say that he was taking Friday off, last week? The day before his bete noire Elon Musk got into another brouhaha and OpenAI blew up?

I think he said once that there's an ETF that trades on when he takes vacations, because they keep coinciding with Events Of Note.


He takes every Friday off


Deservedly or not, Satya Nadella will look like a genius in the aftermath. He has and will continue to leverage this situation to strengthen MSFT's position. Is there word of any other competitors attempting to capitalize here? Trying to poach talent? Anything...


After Ballmer I couldn't have imagined such competency from Microsoft.


After Ballmer, competency can only be higher at Microsoft.


Ballmer honestly wasn't that bad. He gave executive backing to Azure and the larger Infra push in general at MSFT.

Search and Business Tools were misses, but they more than made up for it with Cloud, Infra, and Security.

Also, Nadella was Ballmer's pick.


The Xbox business started under him as well. IMO he was great at diversifying MSFT, but so-so at driving improvements in its core products at the time (Windows and Office). Perhaps this was just a leadership-style thing, and he was hands-off on existing products in a way that Bill Gates wasn't (I think there was even news of Bill Gates sending nastygrams about poor Windows releases after he had officially stepped down).


Look at the OS market and the text editor market today. They aren't growth markets and haven't been since the 2000s at the latest. He made the right call to ignore their core products in favor of more concentration on Infra, B2B SaaS, Security, and (as you mentioned) Entertainment.

Customers are sticky and MSFT had a strong channel sales and enterprise sales org. Who cares if the product is shit if there are enough goodies to maintain inertia.

Spending billions on markets that will grow into 10s or 100s of Billions is a better bet than billions on a stagnant market.

> he was hands-off on existing products in a way that Bill Gates wasn't

Ballmer had an actual business education and was able to execute on scaling. I'm sure Bill loves him too, now that Ballmer's protege almost 15Xed MSFT stock.


Sometimes you do the hard work and your successor is the genius...


And sometimes the company is succeeding in spite of you and the moment you're out the door and people aren't worried about losing their job over arbitrary metrics they can finally show off what they're really capable of.


Also, Nadella last month repudiated his own decision to cancel Windows Phone. Purchasing Nokia was one of the last things Ballmer did.


The key line:

“Microsoft has assured us that there are positions for all OpenAl employees at this new subsidiary should we choose to join.”


I think everyone assumed this was an acquihire without the "acqui-", but this is the first time I've seen it explicitly stated.


hostile takeunder?


Love it. Could also be called a hostile giveover, considering the OpenAI board gifted this opportunity to Microsoft


That's perfect.


You win


Will they stay, though? What happens to their OAI options?


Will their OAI options be worth anything if the implosion continues?


Yeah, but threatening to quit is actually accelerating the implosion.


I don’t believe startups can have successful exits without extraordinary leadership (which the current board can never find). The people quitting are simply jumping off a sinking ship.


MSFT RSUs actually have value as opposed to OpenAI’s Profit Participation Units (PPU).

https://www.levels.fyi/blog/openai-compensation.html

https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...


What will happen to their newly granted MSFT shares? Those can be sold _today_ and might be worth a lot more soon…


Sounds a lot like MS wants to have OpenAI, but without a board that considers pesky things like morals.


Time for a counter-counter-coup that ends up with Microsoft under the Linux Foundation after RMS reveals he is Satoshi...


You mean the GNU Linux Foundation?


RMS (I assume Richard Stallman) may be many many many things, but setting up a global pyramid scheme doesn't seem to be his M.O.

But stranger things have happened. One day I may be very very VERY surprised.


There is nothing related to pyramids in Bitcoin. It's just an implementation of a novel, trustless electronic money; it's also free software.


How would you define an asset that has zero intrinsic value other than the value people have already committed to it? A house of cards?


Money.


But money has intrinsic value. It is directly tied to the economy of a country. So if a country were to collapse, so would its money.

The point here is that if a country collapses, then you have bigger problems than the loss of whatever stored currency you hold. Even if your money is in some hypothetically useful crypto, the fact that the money you own is useless is the least of your worries; you need to survive.

But aside from that extreme scenario, money is not the same thing.

Another way to think of it:

There is nothing in the world that would prevent the immediate collapse of crypto if everyone who owns it just decided to sell.

If everyone in the world stops accepting the US Dollar, the US can still continue to use it internally and manufacture goods and such. It'll just be a collapse of trade, but even in that scenario people can exchange the dollar locally for, say, gold, and trade gold on the global market. So the dollar has physical and usable backing. Meanwhile, crypto has literally nothing.


There were many currencies in history that lost all or almost all of their value upon serious economic crises in their respective countries. It seems you wouldn't call those money? Crypto is simply an alternative currency.

See also: https://en.m.wikipedia.org/wiki/Private_currency


Right, an INTERNAL economic crisis is what causes the collapse of a currency. But just because the rest of the world doesn't recognize it doesn't mean it is worthless; it simply converts.

Bitcoin has nothing in and of itself.

Also, private currency like scrip was awful; please don't take the worst financial examples in history and claim that Bitcoin is similar as an argument for why it is valid.


> Bitcoin has nothing in and of itself.

I don't understand what is "in and of itself" in an ordinary currency of an ordinary, small country.

> INTERNAL economical crisis is what causes the collapse of currency

Which is why it is very unlikely to happen with bitcoin.

> But just because the rest of the world doesn't recognize it, doesn't mean it is worthless, it simply converts.

Can't you say exactly the same about bitcoin?


The year of the Linux Microsoft.


Again, nobody has shown even a glimmer of the board operating with morality as their focus. We just don't know. We do know that a vast majority of the company doesn't trust the board, though.


Sam just gave 3 hearts to Ilya as well... I hope the drama continues and he joins MS at this point.


Whose morals again?


That is a spectacular power move: extending 700 job offers, many of which would be close to $1 million per year compensation.


They didn’t say anything about the compensation.


So essentially, OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam and Greg are not returning.

Microsoft can absorb all the employees and move them into the new AI subsidiary, which is basically an acqui-hire without buying out everyone else's shares: a new DeepMind/OpenAI-style research division inside the company.

So all along it was a long winded side-step into having a new AI division without all the regulatory headaches of a formal acquisition.


> OpenAI is a sinking ship as long as the board members go ahead with their new CEO and Sam, Greg are not returning

Far from certain. One, they still control a lot of money and cloud credits. Two, they can credibly threaten to license to a competitor or even open source everything, thereby destroying the unique value of the work.

> without all the regulatory headaches of a formal acquisition

This, too, is far from certain.


>Far from certain. One, they still control a lot of money and cloud credits.

This too is far from certain. The funding and credits were at best tied to milestones, and at worst the investment contract is already broken and MSFT can walk.

I suspect they would not actually do the latter, and the IP is tied to the continuing partnership.


And sue for the assets of OpenAI on account of the damage the board did to their stock... and end up with all of the IP.


On what basis would one entity be held responsible for another entity’s stock price, without evidence of fraud? Especially a non profit.


The value of OpenAI's own assets in the for-profit subsidiary may drop due to recent events.

Microsoft is a substantial shareholder (49%) in that for-profit subsidiary, so the value of Microsoft's asset has presumably reduced due to OpenAI's board decisions.

OpenAI's board decisions which resulted in these events appear to have been improperly conducted: two of the board's members weren't aware of its deliberations or the outcome until the last minute, notably the chair of the board. A board's decisions have legal weight because they are collective. It's allowed to patch things up afterwards if the board agrees, for people to take breaks, etc. But if some directors intentionally excluded other directors from such a major decision (and from formal deliberations) affecting the value and future of the company, that leaves the board's decision open to legal challenges.

Hypothetically, Microsoft could sue and offer to settle. Then OpenAI might not have enough funds if it were to lose, so it might have to sell shares in the for-profit subsidiary, or transfer them. Microsoft only needs about 2% more to become the majority shareholder of the for-profit subsidiary, which runs the ChatGPT services.


Bad Faith. Watch the sales presentation that Altman and Nadella gave at OpenAI’s inaugural developer conference just a few days/hours before OpenAI fired its key executives, including Altman.


If Microsoft emerges as the "winner" from all of this, then I think we are all the "losers". Not that I think OpenAI was perfect or "good"; just that MS taking the cake is not good for the rest of us. It already feels crazy that people are just fine with them owning what they do and how important it is to our development ecosystem (talking about things like GitHub/VSCode); I don't like the idea of them also owning the biggest AI initiative.


I will never not be mad at the fact that they built a developer base by making all their tech open source, only to take it all away once it became remotely financially viable to do so. With how close "Open"AI is with Microsoft, it really does not seem like there is a functional difference in how they ethically approach AI at all.


Ilya signed it??? He's on the board... This whole thing is such an implosion of ambition.


Most people who sympathized with the board prior to this would have assumed that the presumed culprit, the legendary Ilya, had thought through everything and was ready to sacrifice anything for a cause he champions. It appears that is not the case.


I think he orchestrated the coup on principle, but severely underestimated the backlash and power that other people had collectively.

Now he’s trying to save his own skin. Sam will probably take him back on his own technical merits but definitely not in any position of power anymore

When you play the game of thrones, you win or you die

Just because you are a genius in one domain does not mean you are in another

What's funny is that everyone initially "accepted" the firing. But no one liked it. Then a few people (like Greg) started voting with their feet, which empowered others, and this has culminated in a tidal shift.

It will make a fascinating case study some day on how not to fire your CEO


He even posted an apology: https://x.com/ilyasut/status/1726590052392956028?s=20

what the actual fuck =O


I knew it was Joseph Gordon-Levitt's plot all along!


I don't know if you are joking or not, but one of the board members is Joseph Gordon-Levitt's wife.


(yes that was the joke)


I'm going to take a leap of intuition and say all roads lead back to Adam D'Angelo for the coup attempt.


> all roads lead back to Adam D'Angelo

Maybe someone thinks Sam was "not consistently candid" about the fact that one of the feature bullets in the latest release dropped a competitor to D'Angelo's Poe directly into the ChatGPT app for no additional charge.

Given the dev day timing and the update releasing these "GPTs", this is an entirely plausible timeline.

https://techcrunch.com/2023/04/10/poes-ai-chatbot-app-now-le...


They did not expect Microsoft to take everything and walk away, and did not realize how little pull they actually had.

If you made a comment recently about de jure vs de facto power, step forward and collect your prize.




You come at the king, you best not miss. If you do, make sure to apologize on Twitter while you can.


Naive is too soft a word. How can you be so smart and so out of touch at the same time?


IQ and EQ are different things. Some people are technically smart enough to know a trillion side effects of technical systems, but can be really bad/binary/shallow at knowing the second-order effects of human dynamics.

Ilya's role is Chief Scientist. It may be fair to give him at least some benefit of the doubt. He was vocal/direct/binary, and also vocally apologized and walked it back. In human dynamics, I'd usually look for the silent orchestrator behind the scenes that nobody talks about.


I'm fine with all that in principle, but then you shouldn't be throwing your weight around in board meetings; probably you shouldn't be on the board to begin with, because it is a handicap in trying to evaluate the potential outcomes of the decisions the board has to make.


I don't think this is necessarily about different categories of intelligence... Politicking and socializing are skills that require time and mental energy to build, and can even atrophy. If you spend all your time worrying about technical things, you won't have as much time to build or maintain those skills. It seems to me like IQ and EQ are more fundamental and immutable than that, but maybe I'm making a distinction where there isn't much of one.


Specialized learning and focus often comes at the cost of generalized learning and focus. It's not zero sum, but there is competition between interests in any person's mind.


In my experience these things typically go hand in hand. There is also an argument to be made that being smart at building ML models and being smart at literally anything else have nothing to do with each other.


Usually this is due to autism, please be kind.


Not claiming to know anything about any person's differences or commenting about that in any way.


Wow, lots of drama and plot twists for the writers of the Netflix mini-series.


The great drama of our time (this week)


I don't think I have seen a bigger U-turn


I was looking down the list and then saw Ilya. Just when you think this whole ordeal can't get any more insane.


Yeah, what the hell?

Do we know why Murati was replaced?


Apparently she tried to rehire Sam and Greg.

I don't think she actually had anything to do with the coup, she was only slightly less blindsided than everyone else.


To be fair, that is a stupid first move to make as the CEO who was just hired to replace the person deposed by the board. (Though I’m still confused about Ilya’s position.)


If your job as CEO is to keep the company running, it seems like the only way to do that was to hire them back. Look at the company now: it's essentially dead unless the board resigns, and with how stupid the board is they might not, lol.

So her move wasn't stupid at all. She obviously knew people working there respected the leadership of the company.

If 550 people leave OpenAI you might as well just shut it down and sell the IP to Microsoft.


It's a lot easier to sign a petition than to actually walk away from a presumably well-paying job in a somewhat weak tech job market. People assuming everyone can just traipse into a $1m/year role at Microsoft are smoking some really good stuff.


> can just traipse into a $1m/year role at Microsoft

Do you not trust Microsoft's public statement that jobs are waiting for anyone who decides to leave OpenAI? Considering their two-decade adventure with Xbox and their $72bln in profits last year, on top of $144bln in cash reserves, I wouldn't be surprised if Microsoft is able (and willing) to match most comp packages considering what's at stake. Maybe not everyone, but most.


I think the specifics on an individual level once the smoke clears matter a lot.


Well it is "somewhat weak tech job market" for your average Joe. I think for most of those guys finding a 0,5kk/year job wouldn't be such a problem especially that the AI hype has not yet died down.

Actually for MS this might be much better cause they would get direct control over them without the hassle of talking to some "board" that is not aligned with their interests.


If you know the company will implode and you'll be CEO of a shell, it is better to get the board to reverse course. It isn't like she was part of the decision-making process.


With nearly the entire team of engineers threatening to leave the company over the coup, was it a stupid move?

The board is going to be overseeing a company of 10 people as things are going.


But wouldn’t the coup have required 4 votes out of 6, which means she voted yes? If not, then the coup was executed by just 3 board members? I’m confused.


Mira isn't on the board, so she didn't have a vote in this.


Generally speaking, 4 members is the minimum quorum for a board of 6, and 3 out of 4 is a majority decision.

I don't know if it was 3 or 4 in the end, but it may very well have been possible with just 3.


Murati is/was not a board member.


I heard it was because she tried to hire Sam and Greg back.


So who's against it, and why?

I wonder if it will take 20 years to learn the whole story.


The amount that's leaked out already - over a weekend - makes me think we'll know the full details of everything within a few days.


The dude is a quack.


I think the names listed are the recipients of the letter (the board), not the signers.


There’s only 4 people on the board.


I think it was Mark Zuckerberg that described (pre-Elon) Twitter as a clown car that fell into a gold mine.

Reminds me a bit of the Open AI board. Most of them I'd never heard of either.


This makes the old twitter look like the Wehrmacht in comparison.

The old twitter did not decide to randomly detonate themselves when they were worth $80 billion. In fact they found a sucker to sell to, right before the market crashed on perpetually loss-making companies like twitter.


The benefit of having an incentive-aligned board, founders, and execs.

Even the clown car isn't this bad.


That's a confused heuristic. It could just as easily mean they keep their heads down and do good work for the kind of people whose attention actually matters for their future employment prospects.


I often hear that about the OpenAI board, but in general, do people here know most board members of the big/darling tech companies? Outside of some of the co-founders I don't know anyone.


I don't mean I know them personally, but they don't seem to be major names in the manner of (as you see down thread) the Google Founders bringing in Eric Schmidt.

They seem more like the sort of people you'd see running Wikimedia.


I meant "know" in the sense you used "heard".


Perhaps we can stop pretending that some of these people who are top-level managers or who sit on boards are prodigies. Dig deeper and there is very little there - just someone who can afford to fail until they drive the clown car into that gold mine. Most of us who have to put food on the table and pay rent have much less room for error.


You know, this makes early Google's moves around its IPO look like genius in retrospect. In that case, brilliant but inexperienced founders majorly lucked out with the thing created... but were also smart enough to bring in Eric Schmidt and others with deeper tech industry business experience for "adult supervision" exactly in order to deal with this kind of thing. And they gave tutelage to L&S to help them establish sane corporate practices while still sticking to the original (at the time unorthodox) values that L&S had in mind.

For OpenAI... Altman (and formerly Musk) were not that adult supervision. Nor is the board they ended up with. They needed some people on that board and in the company to keep things sane while cherishing the (supposed) original vision.

(Now, of course that original Google vision is just laughable as Sundar and Ruth have completely eviscerated what was left of it, but whatever)


>but were also smart enough to bring in Eric Schmidt and others with deeper tech >industry business experience for "adult supervision"

>(Now, of course that original Google vision is just laughable as Sundar and Ruth >have completely eviscerated what was left of it, but whatever)

Those two things happening one after the other is not a coincidence.


I'm not sure I agree. Having worked there through this transition, I'd say this: L&S just seem to have lost interest in running a mature company, so their "vision" meant nothing; Eric Schmidt basically moved on; and then, after flailing about for a bit (the G+ stuff being the worst of it), they just handed the reins to Ruth & Sundar to basically turn it into a giant stock-price-pumping machine.


G+ was handled so poorly, and the worst of it was that they already had both Google Wave (in the US) and Orkut (mostly outside the US), which both had significant traction and could’ve easily been massaged into something to rival Facebook.

Easily… anywhere except at a megacorp, where a privacy review takes months and you can expect to make about a quarter's worth of progress a year.


All successful companies succeed despite themselves.


Working in consultancies/agencies for the last 15 years, I see this time and time again. Fucking dart-throwing monkeys making money hand over fist despite their best intentions to lose it all.


I don't really understand why the workforce is swinging unambiguously behind Altman. The core of the narrative thus far is that the board fired Altman on the grounds that he was prioritising commercialisation over the not-for-profit mission of OpenAI written into the organisation's charter.[1] Given that Sam has since joined Microsoft, that seems plausible, on its face.

The board may have been incompetent and shortsighted. Perhaps they should even try to bring Altman back, and reform themselves out of existence. But why would the vast majority of the workforce back an open letter that fails to signal where they stand on the crucial issue - the purpose of OpenAI and their collective work? Given the stakes which the AI community likes to claim are at issue in the development of AGI, that strikes me as strange and concerning.

[1] https://openai.com/charter


> I don't really understand why the workforce is swinging unambiguously behind Altman.

Maybe it has to do with them wanting to get rich by selling their shares - my understanding is there was an ongoing process to get that happening [1].

If Altman is out of the picture, it looks like Microsoft will assimilate a lot of OpenAI into a separate organisation and OpenAI's shares might become worthless.

[1] https://www.financemagnates.com/fintech/openai-in-talks-to-s...


Yeah, "OpenAI employees would actually prefer to make lots of money now" seems like a plausible answer by default.

It's easy to be a true believer in the mission _before_ all the money is on the table...


My estimate is that a typical staff engineer who'd been at OpenAI for 2+ years could have sold $8 million of stock next month. I'd be pissed too.


No way it is this much.


Yep.

What people don't realize is that Microsoft doesn't own the data or models that OpenAI has today. Yeah, they can poach all the talent, but it still takes an enormous amount of effort to create the dataset and train the models the way OpenAI has done it.

Recreating what OpenAI has done over at Microsoft will be nothing short of a herculean effort and I can't see it materializing the way people think it will.


Except MSFT does have access to the IP, and MSFT has access to an enormous trove of their own data across their office suite, Bing, etc. It could be a running start rather than a cold start. A fork of OpenAI inside an unapologetic for profit entity, without the shackles of the weird board structure.


Microsoft has full access to code and weights as part of their deal.


Even if they don't, the OpenAI staff already know 99 ways to not make a good GPT model and can therefore skip those experiments much faster than anyone else.


> Even if they don't, the OpenAI staff already know 99 ways to not make a good GPT model and can therefore skip those experiments much faster than anyone else.

This, unequivocally. Knowing how not to waste a very expensive training run is a great lesson.


Source for your statement?


https://www.wsj.com/articles/microsoft-and-openai-forge-awkw...

> Some researchers at Microsoft gripe about the restricted access to OpenAI’s technology. While a select few teams inside Microsoft get access to the model’s inner workings like its code base and model weights, the majority of the company’s teams don’t, said the people familiar with the matter.


This comment is factually incorrect. As part of the deal with OpenAI, Microsoft has access to all of the IP, model weights, etc.


Correct. This is all really bad for Microsoft and probably great for Google. Yet, judging by price changes right now, markets don’t seem to understand this.


But doesn't Altman joining Microsoft, and them quitting and following, put them back at square 0? MS isn't going to give them millions of dollars each to join them.


That's why they'd rather Altman rejoins OpenAI as mentioned.


The behavior of various actors in this saga indeed seems to indicate 'Altman and OpenAI employees back at OpenAI' as those actors' preferred option over 'Altman and OpenAI employees join Microsoft en masse'.


Surely they're already extremely rich? I'd imagine working for a 700 person company leading the world in AI pays very well.


Only rich in stocks. Salaries are high for sure but probably not enough to be rich by Bay Area standards


Sure, but by pretty much any other standard? Over $170k USD puts you in the top 10% income earners globally. If you work at this wage point for 3-5 years and then move somewhere (almost anywhere globally or in the US), you can afford a comfortable life and probably work 2-3 days a week for decades if you choose.

This is nothing but greed.


Ugh, I’ve never been more disenchanted with a group of people in my life. Not only are they comfortable with writing millions of jobs out of existence, but they're also taking a fat paycheck to do it. At least with the “non-profit” mission keystone, we had some plausible deniability that greed rules all, but of fucking course it does.

All my hate to the employees and researchers of OpenAI, absolutely frothing at the mouth to destroy our civilization.


That sounds like a reasonable assessment, FartyMcFarter.


> I don't really understand why the workforce is swinging unambiguously behind Altman.

I have no inside information. I don't know anyone at Open AI. This is all purely speculation.

Now that that's out of the way, here is my guess: money.

These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money. They think they can make more money with Sam Altman in charge.

Once again, this is completely all speculation. I have not spoken to anyone at Open AI or anyone at Microsoft or anyone at all really.


> These people never joined OpenAI to "advance sciences and arts" or to "change the world". They joined OpenAI to earn money

Getting Cochrane vibes from Star Trek there.

> COCHRANE: You wanna know what my vision is? ...Dollar signs! Money! I didn't build this ship to usher in a new era for humanity. You think I wanna go to the stars? I don't even like to fly. I take trains. I built this ship so that I could retire to some tropical island filled with ...naked women. That's Zefram Cochrane. That's his vision. This other guy you keep talking about. This historical figure. I never met him. I can't imagine I ever will.

I wonder how history will view Sam Altman


There are non-negligible chances that history will be written by Sam Altman and his GPT minions, so he'll probably be viewed favorably.


I'm not sure I fully buy this, only because how would anyone be absolutely certain that they'd make more with Sam Altman in charge? It feels like a weird thing to speculatively rally behind.

I'd imagine there's some internal political drama going on or something we're missing out on.


I fully buy it. Ethics and morals are a few rungs on the ladder beneath compensation for most software engineers. If the board wants to focus more on being a non-profit and safety, and Altman wants to focus more on commercialization and the economics of business, if my priority is money then where my loyalty goes is obvious.


> how would anyone be absolutely certain that they'd make more with Sam Altman in charge?

Why do you think absolute certainty is required here? It seems to me that "more probable than not" is perfectly adequate to explain the data.


Really? If they work at OpenAI they are already among the highest lifetime earners on the planet. Favouring moving oneself from the top 0.5% of global lifetime earners to the top 0.1% (or whatever the percentile shift is) over the safe development of a potentially humanity-changing technology would be depraved.

EDIT: I don't know why this is being downvoted. My speculation as to the average OpenAI employee's place in the global income distribution (of course wealth is important too) was not snatched out of thin air. See: https://www.vox.com/future-perfect/2023/9/15/23874111/charit...


Why be surprised? This is exactly how it has always been: the rich aim to get even richer and if that brings risks or negative effects for the rest that's A-ok with them.

That's what I didn't understand about the world of the really wealthy people until I started interacting with them on a regular basis: they are still aiming to get even more wealthy, even the ones that could fund their families for the next five generations. With a few very notable exceptions.


It's a selection bias: the people who weren't so intrinsically motivated to get rich are less likely to end up as wealthy people.


It's a combination of that and the reality that wealth is power and power is relative.

Let's say you've got $100 million. You want to do whatever you want to do. It turns out what you want is to buy a certain beachfront property. Or perhaps to curry favor with a certain politician around a certain bill. Well, so do some folks with $200 million, and they can outbid you. So even though you have tons of money in absolute terms, when you are using your power in venues that happen to also be populated by other rich folks, you can still be relatively power-poor.

And all of those other rich folks know this is how the game works too, so they are all always scrambling to get to the top of the pile.


Politicians are cheap, nobody is outbidding anybody because they most likely want the exact same thing.


I don't know how much OpenAI pays. But for this reply, I'm going to assume it's in line with what other big players in the industry pay.

I legitimately don't understand comments that dismiss the pursuit of better compensation because someone is "already among the highest lifetime earners on the planet."

Superficially it might make sense: if you already have all your lifetime economic needs satisfied, you can optimize for other things. But does working in OpenAI fulfill that for most employees?

I probably fall into that "highest earners on the planet" bucket statistically speaking. I certainly don't feel like it: I still live in a one bedroom apartment and I'm having to save up to put a downpayment on a house / budget for retirement / etc. So I can completely understand someone working for OpenAI and signing such a letter if a move the board made would cut down their ability to move their family into a house / pay down student debt / plan for retirement / etc.


> over the safe development

Not if you think the utterly incompetent board proved itself totally untrustworthy on safe development, while Microsoft, as a relatively conservative, staid corporation, is seen as ultimately far more trustworthy.

Honestly, of all the big tech companies, Microsoft is probably the safest, because it makes its money mostly from predictable large deals with other large corporations to keep the business world running.

It's not associated with privacy concerns the way Google is, with advertisers the way Meta is, or with walled gardens the way Apple is. Its culture these days is mainly about making money in a low-risk, straightforward way through Office and Azure.

And relative to startups, Microsoft is far more predictable and less risky in how it manages things.


Microsoft? Not a walled garden?

I think it only seems that way because the open-source world has worked much harder to break into that garden. Apple put a .mp4 gate around your music library. Microsoft put a .doc gate around your business correspondence. And that's before we get to the Mono debacle or the EEE paradigm.

Microsoft is a better corporate citizen now because untold legions of keyboard warriors have stayed up more nights reverse-engineering and monkeypatching (and sometimes litigating) to break out of their walls than against anyone else's. But that history isn't so easily forgotten.


I can install whatever I'd like on Windows. I can run Linux in a VM. Calling a document format a wall is really reaching. If you don't have a document with a bunch of crazy formatting, the OpenOffice products and Google Docs can use it just fine. If you are writing a book or some kind of technical document that needs special markup, yeah, Word isn't going to cut it; it never has and was never supposed to.


Apple's walled gardens are probably a good thing for safe AI, though they're a lot quieter about their research — I somehow missed that they even had any published papers until I went looking: https://machinelearning.apple.com/research/


If you were offered a 100% raise and kept current work responsibilities to go work for, say, a tobacco company, would you take the offer? My guess is >90% of people would.

Funny how the cutoff for “morals should be more important than wealth” is always {MySalary+$1}.

Don’t forget, if you’re a software developer in the US, you’re probably already in the top 5% of earners worldwide.


You only have to look at humanity's history to see that people will make this decision over and over again.


It just makes more sense to build it in an entity with better funding and commercialization. There will be 2-3 advanced AIs, and the most humane one doesn't necessarily win out. The winner is the one that has the most resources, is used and supported by the most people, and can do a lot. At this point it doesn't seem OpenAI can get that. It seems to be a lose-lose to stay at OpenAI - you lose the money and the potential to create something impactful and safe.

It is wrong to assume Microsoft cannot build a safe AI, especially within a separate OpenAI-2, better than the for-profit inside a non-profit structure could.


> If they work at OpenAI they are already among the highest lifetime earners on the planet

Isn't the standard package $300K + equity (= nothing if your board is set on making your company non-profit)?

It's nothing to scoff at, but it's hardly top or even average pay for the kind of profiles working there.

It makes perfect sense that they absolutely want the company to be for-profit and listed; that's how they all become millionaires.


Focusing on "global earnings" is disingenuous and dismissive.

In the US, and particularly in California, there is a huge quality of life change going from 100K/yr to 500K/yr (you can potentially afford a house, for starters) and a significant quality of life change going from 500K/yr to getting millions in an IPO and never having to work again if you don't want to.

How those numbers line up to the rest of the world does not matter.


I disagree.

First, there are strong diminishing returns to well-being from wealth, meaning that moving oneself from the top 0.5% to the top 0.1% of global income earners is a relatively modest benefit. This relationship is well studied by social scientists and psychologists. Compared to the potential stakes of OpenAI's mission, the balance of importance should be clear.

Two, employees don't have to stay at OpenAI forever. They could support OpenAI's existing not-for-profit charter, and use their earning power later on in life to boost their wealth. Being super-rich and supporting OpenAI at this critical juncture are not mutually exclusive.

Three, I will simply say that I find placing excessive weight on one's self-enrichment to be morally questionable. It's a claim on human production and labour which could be given to people without the basic means of life.


Again, no one in California cares that they are "making more than" someone in Vietnam when food and land in CA are orders of magnitude more expensive.

OpenAI employees are as aware as anyone that tech salaries are not guaranteed to be this high in the future as technology develops. Assuming you can make things back then is far from a sure bet.

Millions now and being able to live off investments is.


> over the safe development of a potentially humanity-changing technology

Maybe the people who are actually working on it, who are also the world's best researchers, have a better understanding of the safety concerns?


Or maybe they have good reason to believe that all the talk about "safe development" doesn't contribute anything useful to safety, and simply slows down development?


Status is a relative thing, and OpenAI will pay you much more than all your peers at other companies.


Start ups thrive by, in part, creating a sense of camaraderie. Sam isn’t just their boss, he’s their leader, he’s one of them, they believe in him.

You go to bat for your mates, and this is what they’re doing for him.

The sense of togetherness is what allows folks to pull together in stressful times, and it is bred by pulling together in stressful times. IME it's a core ingredient of success. Since OAI is very successful, it's fair to say the sense of togetherness is very strong. Hence the number of folks in the walkout.


Not just Sam, since Greg stuck with Sam and immediately quit he set the precedent for the rest of the company. If you read this post[0] by Sam about Greg's character and work ethic you'll understand why so many people would follow him. He was essentially the platoon sergeant of OpenAI and probably commands an immense amount of loyalty and respect. Where those two go, everyone will follow.

[0] https://blog.samaltman.com/greg


Absolutely! Thanks for pointing out that I missed Greg in my answer.


> I don't really understand why the workforce is swinging unambiguously behind Altman.

Lots of reasons, or possible reasons:

1. They think Altman is a skilled and competent leader.

2. They think the board is unskilled and incompetent.

3. They think Altman will provide commercial success to the for-profit as well as fulfilling the non-profit's mission.

4. They disagree or are ambivalent towards the non-profit's mission. (Charters are not immutable.)


Why should they trust the board? As the letter says, "Despite many requests for specific facts for your allegations, you have never provided any written evidence." If Altman took any specific action that violated the charter, the board should be open about it. Simply trying to make money does not violate the charter and is in fact essential to their mission. The GPT Store, cited as the final straw in leaks, is actually far cleaner money than investments from megacorps. Commercializing the product and selling it directly to consumers reduces dependence on Microsoft.


Ultimately people care a lot more about their compensation, since that is what pays the bills and puts food on the table.

Since OpenAI's commercial prospects are doomed now, and it is uncertain whether they can continue operations if Microsoft withholds resources and consumers switch to alternative LLM/embeddings services with more level-headed leadership, OpenAI will eventually turn into a shell of itself, which affects compensation.


> I don't really understand why the workforce is swinging unambiguously behind Altman.

Maybe because the alternative is being led by lunatics who think like this:

> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

to which the only possible reaction is

What

The

Fuck?

That right there is what happens when you let "AI ethics" people get control of something. Why would anyone work for people who believe that OpenAI's mission is consistent with self-destruction? This is a comic book super-villain style of "ethics", one in which you conclude the village had to be destroyed in order to save it.

If you are a normal person, you want to work for people who think that your daily office output is actually pretty cool, not something that's going to destroy the world. A lot of people have asked what Altman was doing there and why people there are so loyal to him. It's obvious now that Altman's primary role at OpenAI was to be a normal leader that isn't in the grip of the EA Basilisk cult.


maybe the workforce is not really behind the non-profit foundation and wants shares to skyrocket so they can sell and be well off for life.

at the end of the day, the people working there are not rich like the founders and money talks when you have to pay rent, eat and send your kids to a private college.


Seems like the board just didn't explain any of this to the staff at all. So of course they are going to take the side that signals business as usual instead of siding with the people trying to destroy the hottest tech company on the planet (and their jobs/comp) for no apparent reason. If the board had said anything at all, the ratio of staff threatening to quit probably wouldn't be this lopsided.


I guess employees are compensated with PPUs, and at face value before the saga, those could be 90% or even more of the total value of their packages. How many people are really willing to wipe out 90% of their compensation? On the other hand, M$ offers to match. The day employees are compensated with the stock of the for-profit arm instead, everything that has happened since Friday will be settled.


Perhaps because, for all of Silicon Valley's and the tech industry's platitudes about wanting to make the world a better place, 90% of them are solely interested in the fastest path to wealth.


> The core of the narrative thus far

Could somebody clarify for me: how do we know this? Is there an official statement, or statements by specific core people? I know the HN theorycrafters have been saying this since the start before any details were available


Imagine putting all your energy behind the person who thinks worldcoin is a good idea...


That's a pretty solid no-confidence vote in the board and their preferred direction.


I believe it is hard to understand these kinds of movements because there isn't one reason. As has been mentioned, it may be money for some. For others it may be anger over what they feel was the board mishandling the situation and precipitating this mess. For others it may be loyalty. For others, peer pressure. Etc.

This has moved from the kind of decision a person makes on their own, based on their own conscience, and has become a public display. The media is naming names and publicly counting the ballots. There is a reason democracy happens with secret ballots.

Consider this: if 500 out of 770 employees signed the letter, do you want to be someone who didn't? How about when it gets to 700 out of 770? Pressure mounts and people find a reason to show they are all part of the same team. Look at Twitter and the many employees all posting "OpenAI is nothing without its people". There is a sense of unity and loyalty that is partially organic and partially manufactured. Do you want to be the one ostracized from the tribe?

This outpouring has almost nothing to do with profit vs non-profit. People are not engaging their critical-thinking brains; they are using their social/emotional brains. They are putting community before rationality.


Probably some combination of:

1. Pressure from Microsoft and their e-team

2. Not actually caring about those stakes

3. A culture of putting growth/money above all


(I can't comment on the workforce question, but one thing below on bringing SamA back.)

Firstly, to give credit where it's due: whatever his faults may be, Altman, as the (now erstwhile) front-man of OpenAI, did help bring ChatGPT into the popular consciousness. I think it's reasonable to call it a "mini inflection point" in the greater AI revolution. We have to grant him that. (I criticized Altman harshly enough two days ago[1]; just trying not to go overboard, and there's more below.)

That said, my (mildly educated) speculation is that bringing Altman back won't help. Given his background and track record so far, his unstated goal might simply be the good old "make loads of profit" (nothing wrong with that when viewed through a certain lens). But as I've already stated[1], I don't trust him as a long-term steward, let alone for such important initiatives. Making a short-term splash with ChatGPT is one thing, but turning it into something more meaningful in the long term is a whole other beast.

These sort of Silicon Valley top dogs don't think in terms of sustainability.

Lastly, I've just looked at the board[2], and I'm now left wondering how all these young folks (I'm about their age) who don't have sufficiently in-depth "worldly experience" (sorry for the fuzzy term, it's hard to expand on) can be in such roles.

[1] https://news.ycombinator.com/item?id=38312294

[2] https://news.ycombinator.com/edit?id=38350890


The workforce prefers the commericialization/acceleration path, not the "muh safetyism" and over-emphasis on moralism of the non-profit contingent.

They want to develop powerful shit and do it at an accelerated pace, and make money in the process not be hamstrung by busy-bodies.

The "effective altruism" types give people the creeps. It's not confusing at all why they would oppose this faction.


> I don't really understand why the workforce is swinging unambiguously behind Altman.

I expect there's a huge amount of peer pressure here. Even for employees who are motivated more by principles than money, they may perceive that the wind is blowing in Altman's direction and if they don't play along, they will find themselves effectively blacklisted from the AI industry.


IMO it's pretty obvious.

Sam promised to make a lot of people millionaires/billionaires despite OpenAI being a non-profit.

Firing Sam means all these OpenAI people who joined for $1 million comp packages looking for an eventual huge exit now don't get that.

They all want the same thing as the vast majority of people: lots of money.


> Given that Sam has since joined Microsoft, that seems plausible, on its face.

He is the biggest name in AI; what was he supposed to do after getting fired? His only options with the resources to do AI are big money, or unemployment.

It seems plausible to me that if the non-profit's concern was commercialisation, then there was really nothing the commercial side could do to appease this concern besides die. The board wants rid of all employees and to kill off any potential business; they have the power and right to do that, and it looks like they are.


Might there also be a consideration of the peak value of OpenAI? If a bunch of competing similar AIs are entering the market, and if the use-case fantasy is currently being humbled, staff might be thinking of bubble valuation.

Did anyone else find Altman conspicuously cooperative with the government during his testimony before Congress? Usually people are a bit more combative. Like he came off as almost pre-slavish? I hope that's not the case, but I haven't seen any real position on human rights.


The masses aren't logical; they follow trends until the trends get big enough that it's unwise not to follow.

It started off as a small trend to sign that letter. Past critical mass, if you are not signing that letter, you are an enemy.

Also my pronouns are she and her even though I was born with a penis. You must address me with these pronouns. Just putting this random statement here to keep you informed lest you accidentally go against the trend.


I also noticed they didn't speak much to the mission/charter. I wonder if the new entity under Sam and Greg contains any remnants of the OpenAI charter, like profit-capping? I can't imagine something like "Our primary fiduciary duty is to humanity" making its way into the language of any Microsoft (or any bigcorp) subsidiary.

I wonder if this is the end of the non-profit/hybrid model?


It's like the "Open" in OpenAi was always an open and obvious lie and everybody except the nonprofit oriented folks on the board knew that. Everybody but them is here to make money and only used the nonprofit as a temporary vehicle for credibility and investment that has just been shed like a cicada shell.


Most of the people building the actual ML systems don't care about existential ML threats outside of lip service and publishing papers. They joined OpenAI because OpenAI had tons of money and paid well. Now that both are at risk, it's only natural that they start preparing to jump ship.


It is probably best to assume that the employees have more and better information than outsiders do. Also, clearly, there is no consensus on safety/alignment, even within OpenAI.

In fact, it seems like the only thing we can really confirm at this point is that the board is not competent.


Maybe they believe less in the board as it stands, and in Ilya's commitments, than in what Sam was pulling off.


From The Verge [1]:

> Swisher reports that there are currently 700 employees at OpenAI and that more signatures are still being added to the letter. The letter appears to have been written before the events of last night, suggesting it has been circulating since closer to Altman’s firing. It also means that it may be too late for OpenAI’s board to act on the memo’s demands, if they even wished to do so.

So, 3/4 of the current board (excluding Ilya) held on despite this letter?

[1]: https://www.theverge.com/2023/11/20/23968988/openai-employee...


She's also reporting that the newly anointed interim CEO already wants to investigate the board fuck-up that put him there.

https://x.com/karaswisher/status/1726626239644078365?s=20


If so, they're delusional. Every hour they cling to their seats will make things worse for them.


Do whatever you want but don't break the API or I will go homeless


You and 5000 other recent founders in tech.


I feel seen


Hmmm, just what are you willing to do for API access?


At this point nothing would surprise me anymore. Just waiting for the Netflix adaptation.


How likely is it that the API will change (from specs, to pricing, to being broken)? I am about to finish some freelance work that uses the GPT API, and it will be a pain in the ass if we have to switch or find an alternative (even creating a custom endpoint on Azure...).


Just create an Azure OpenAI endpoint. Pretty sure it's not run by OpenAI itself.
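For reference, a minimal sketch of what that switch looks like with the openai Python client (v1.x), assuming you already have an Azure OpenAI resource with a chat deployment; the endpoint, key, API version, and deployment name below are all placeholders, not real values:

    from openai import AzureOpenAI

    # Placeholders: substitute your own Azure resource, key, and API version.
    client = AzureOpenAI(
        azure_endpoint="https://my-resource.openai.azure.com",
        api_key="MY_AZURE_KEY",
        api_version="2023-07-01-preview",
    )

    # Azure routes requests by *deployment* name rather than by model name.
    resp = client.chat.completions.create(
        model="my-gpt4-deployment",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)

The request/response shapes match the public OpenAI API, so in theory the switch is mostly a matter of swapping the client constructor; the catch, as noted in the reply below, is that model availability on Azure tends to lag.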


Azure OpenAI is always a bit behind; e.g., they don't have GPT-4 Turbo yet.



But they didn’t when it was generally available to the public OAI API; looks like it took about two weeks.


Sometimes it's better for everyone to just say "oh, you're right I was mistaken"


brew install llm


At this point, I think it’s absolutely clear no one has any idea what happened. Every speculation, no matter how sophisticated, has been wrong.

It’s time to take a breath, step back, and wait until someone from OpenAI says something substantial.


3 board members (joined by Ilya Sutskever, who is publicly defecting now) found themselves in a position to take over what used to be a 9-member board, and took full control of OpenAI and the subsidiary previously worth $90 billion.

Speculation is just on motivation, the facts are easy to establish.


tangentially, it’s an absolute disgrace that non-profits are allowed to have for-profit divisions in the first place


This was actually a pretty recent change, from 2018. IIRC it was Newman's Own that set the precedent for this:

https://nonprofitquarterly.org/newmans-philanthropic-excepti...

> Introduced in June of 2017, the act amends the Revenue Code to allow private foundations to take complete ownership of a for-profit corporation under certain circumstances:

    The business must be owned by the private foundation through 100 percent ownership of the voting stock.
    The business must be managed independently, meaning its board cannot be controlled by family members of the foundation’s founder or substantial donors to the foundation.
    All profits of the business must be distributed to the foundation.


Maybe I'm misunderstanding something, but didn't Mozilla Foundation do that a dozen or so years earlier with their wholly owned subsidiary, Mozilla Corporation? (...and I doubt that's the first instance; just the one that immediately popped into my head.)


The LDS church has owned for-profit entities for decades. Check out the "City Creek Center".


It raises the question: why was OpenAI structured this way? What purposes, besides potentially defrauding investors and the government, are served by wrapping a for-profit business in a nonprofit? From a governance standpoint it makes no sense, because a nonprofit board doesn't have the same legal obligations to represent shareholders that a for-profit business does. And why did so many investors choose to seed a business that was playing such a kooky shell game?


The impression I got was that they started out with honest intentions and were more or less infiltrated by Microsoft. This recent news fits that narrative.


> 3 board members (joined by Ilya Sutskever, who is publicly defecting now) found themselves in a position to take over what used to be a 9-member board, and took full control of OpenAI and the subsidiary previously worth $90 billion.

er...what does that even mean? how can a board "take full control" of the thing they are the board for? they already have full control.

the actual facts are that the board, by majority vote, sacked the CEO and kicked someone else off the board.

then a lot of other stuff happened that's still becoming clear.


The board had 3 positions empty, from people who left this year, leaving it as a 6-member board. Both Sam Altman and Greg Brockman were on the board; Ilya Sutskever's vote (which he now states he regrets) gave them the votes to remove both, bringing it down to a 4-member board controlled by 3 members who started the year as a small minority.


Those 3 board members can kick out Ilya Sutskever too!


I think the post is very clear.

The subject in that sentence, the one that takes full control, is "3 members", not "board".

The board has control, but who controls the board changes based on time and circumstances.


The post could be clearer.

It says 3 board members found themselves in a position to take over OpenAI.

Do they mean we've seen Sam Altman and allies making a bid to take over the entirety of OpenAI, through its weird Charity+LLC+Holding company+LLC+Microsoft structure, eschewing its goals of openness and safety in pursuit of short-sighted riches?

Or do they mean we've seen The Board making a bid to take over the entirety of OpenAI, by ousting Glorious Leader Sam Altman, while his team was going from strength to strength?


If Sam Altman runs a for-profit company underneath you, are you ever really "in full control"?

I mean, they were literally able to fire him... and they're still not looking like they have control. Quite the opposite.

I think anyone watching ChatGPT rise over the last year would see where the currents are flowing.


Absolutely agreed

This is the point where I've realized I just have to wait until history is written, rather than trying to follow this in real time.

The situation is too convoluted, and too many people are playing the media to try to advance their version of the narrative.

When there is enough distance from the situation for a proper historical retrospective to be written, I look forward to getting a better view of what actually happened.


Hah. I think you may be duped by history - the neat logical accounts are often fictions - they explain what was inexplicable with fabrications.

Studying revolutions is revealing - they are rarely the inevitable product of historical forces, executed to the plans of strategically minded players... instead they are often accidental and inexplicable. Those credited as their masterminds were trying to stop them. Rather than inevitable, there was often progress in the opposite direction, making people feel the likelihood was decreasing. The confusing, paradoxical mess of great events doesn't make for a good story to tell others, though.


It's a pretty interesting point to think about. Post-hoc explanations are clean, neat, and may or may not have been prepared by someone with a particular interpretation of events. While real-time, there's too much happening, too quickly, for any one person to really have a firm grasp on the entire situation.

On our present stage there is no director, no stage manager; the set is on fire. There are multiple actors - with more showing up by the minute - some of whom were working off a script that not everyone has seen, and that is now being rewritten on the fly, while others don't have any kind of script at all. They were sent for; they have appeared to take their place in the proceedings with no real understanding of what those are, like Rosencrantz and Guildenstern.

This is kind of what the end thesis of War and Peace was like - there's no possible way that Napoleon could actually have known what was happening everywhere on the battlefield - by the time he learned something had happened, events on the scene had already advanced well past it; and the local commanders had no good understanding of the overall situation, they could only play their bit parts. And in time, these threads of ignorance wove a tale of a Great Victory, won by the Great Man Himself.


That's not how history works. What you read are the tellings of the people involved, and those aren't all facts but how they perceived the situation in retrospect. Read the biographies of different people telling the same event and you will notice that they are never quite the same, usually leaving the unfavourable bits out.


Written history is usually a simplification that has lost a lot of the context and nuance.

I don't need to follow in real time, but a lot of the context and nuance can be clearly understood in the moment, so it still helps to follow along even if that means lagging on the input.


And for so-called tech influencers to rapidly blanket the field of discourse with their theories so they can say their theory was right later on, or making “emergency podcasts/blog posts/etc.” to get more attention and followers. It’s so exhausting.


I agree. Although the story is fascinating in the way that a car crash is fascinating, it's clear that it's going to be very difficult to get any kind of objective understanding in real-time.

This breathless real-time speculation may be fun, but now that social media amplifies the tiniest fart such that it has global reach, I feel like it just reinforces the general zeitgeist of "Oh, what the hell NOW? Everything is on fire." It's not like there's anything that we peasants can do to either influence the outcome, or adjust our own lives to accommodate the eventual reality.


I will say, though, that there is going to be an absolute banger of a book for Kara Swisher to write, once the dust has settled.


Everything on social media (and general news media) pointed to Ilya instigating the coup. Maybe Ilya was never the instigator; maybe it was Adam + Helen + Tasha. Greg backed Sam and was shown the door, while Ilya was on the fence and, perhaps against his better judgment (due to his own ideological beliefs, or just from pure fear of losing something beautiful he helped create), under immense pressure, decided to back the board?


I agree. I'm already sick of reading through political hit pieces, exaggeration, biased speculations and unfounded bold claims. This all just turned into a kind of TV sports, where you pick a side and fight.


This suggestion was already made on Saturday and again on Sunday. However, this approach does not enhance popcorn consumption... The show must go on...


We can certainly believe Ilya wasn't behind it if he joins them at Microsoft. How about that? By his own admission he was involved, and he's one of 4 people on the board. While he has called on the board to resign, he has seemingly not resigned himself, which would be the one thing he could certainly control.


At this point, after almost 3 days of non-stop drama, we still have no clue what has happened at a 700-employee company with millions of people watching. Regardless of the outcome, the art of keeping secrets at OpenAI is truly far beyond human capability!


Likely Ilya and Adam swayed Helen and Tasha. Booted Sam out. Greg voluntarily resigned.

Ilya (at the urging of Satya and his colleagues, including Mira) wanted to reinstate Sam, but the deal fell through, with the board outvoting Sutskever 3 to 1. With Mira defecting, Adam got his mate Emmett to steady the ship, but things went nuclear.


Is this your guess or do you have something to back it up?


Don't listen to him, he's an ignoramus.



Just made it 100% certain that the majority of AI staff is deluded and lacks judgment. Not a good look for AI safety.


Yes, also the whole 500 is probably inflated and makes for a better narrative/better leverage in negotiations.


I wonder if AGI took over the humans and guided their actions.


It may well be that this is artificial and general, but I rather doubt it is intelligent.


Like the new Tom Cruise movie?

Makes sense in a conspiracy-theory mindset. AGI takes over, crashes $MSFT, buys calls on $MSFT, then this morning the markets go up when Sam & co join MSFT and the AGI has tons of money to spend.


Sam already signed up with Microsoft. A move that surprised me; I figured he would just create OpenAI².

Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).

Or is Microsoft saying: Here is OpenAI², a Microsoft subsidiary created just for you guys. You can run it and do whatever you want. No giant bureaucracy for you guys.

Btw: we run all of OpenAI²'s compute(?), so we know what you guys need from us there.

We own it, but you can run it and do whatever it is you want to do, and we don't bug you about it.


> Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI

Satya is way smarter than that. I wouldn't be shocked if they have completely free rein to do whatever, but with the full resources of MS/Azure to enable it, and Microsoft just gets a % of ownership and priority access.

This is a gamble for the foundation of the entire next generation of computing, no way are they going to screw it up like that in the Satya era.


Not just that, but MS was already working on a TPU clone as well, as they need to control their AI chips (which Sam was planning to do anyways, but now he gets / works together with that team as well).


From what I read, it's an independent subsidiary, so in theory it keeps the freedom, but I think we all know how that goes over the long haul.


I think the benefit of going to Microsoft is they have that perpetual license to OpenAI's existing IP. And Microsoft is willing to fund the compute.


So basically the OpenAI non-profit got completely bypassed and GPT will turn into a branch of Bing


Look up what Microsoft has already announced under the Copilot brand. They have plans much larger than Bing.


For interesting projects, yes, no doubt, but the biggest economic driver will always be search, i.e. advertising.


This is a horrible timeline


>Joining a corporate behemoth like Microsoft and all the complications it brings with it will mean a massive reduction in the freedom and innovation that Sam is used to from OpenAI (prior to this mess).

Well... he requires tens of billions from MSFT either way. This is not a ramen-scrappy kind of play. Meanwhile, Sam could easily become CEO of Microsoft himself.

At that scale of financing... this is not a bunch of scrappy young lads in a bureaucracy-free basement. The whole thing is bigger than most national militaries. There are going to be bureaucracies... And Sam is as able to handle these cats as anyone.

This is a big-money, dragon-level play. It's not a proverbial YC-company kind of thing.



It’s almost certainly the latter case. LinkedIn and GitHub run very much independently and are really not “Microsoft” compared to actual product orgs. I’m sure this will be similar.


I said this on Friday: the board should be fired in its entirety. Not because the firing was unjustified - we have no real knowledge of that - but because of how it was handled.

If you fire your founder CEO you need to be on top of messaging. Your major customers can't be surprised. There should've been an immediate all hands at the company. The interim or new CEO should be prepared. The company's communications team should put out statements that make it clear why this was happening.

Obviously they can be limited in what they can publicly say depending on the cause but you need a good narrative regardless. Even something like "The board and Sam had fundamental disagreement on the future direction of the company." followed by what the new strategy is, probably from the new CEO.

The interim CEO was the CTO and is going back to that role. There's a third (interim) CEO in 3 days. There were rumors the board was in talks to re-hire Sam, which is disastrous PR because it makes them look absolutely incompetent, true or not.

This is just such a massive communications and execution failure. That's why they should be fired.


There's no one to fire the board. They're not accountable to anyone but themselves. They can burn down the whole company if they like.


> They can burn down the whole company if they like.

That's well under way I would say.


500 people out of 700 leaving as fast as they get offers from Microsoft or elsewhere means replacing staff with empty office space and losing any plans or organization. A literal corporate war would be less disruptive.


A lot of people here seem to be forgetting [Hanlon's Razor](https://en.wikipedia.org/wiki/Hanlon%27s_razor)

> Never attribute to malice that which is adequately explained by stupidity.


You seem to forget that Hanlon's Razor isn't a proven concept; in fact, the opposite is more likely to be true, given that pesky thing called recorded history.


Hanlon's razor is true because it's more entertaining, and our simulation runs on stories as they're cheaper to compute than honest physics.


Except for when it's actual malice vOv


It could be both. And in many situations malice and stupidity are the same thing.


How can {deliberately doing harmful things for a desired harmful outcome} and {doing whatever things with lack of judgment and disregard to consequences at all} be the same thing? In what situations?


What does Altman bring to the table, exactly? What is going to be lost if he leaves? What is he going to do at Microsoft leading a "research team"?

Who was the president of Bell Labs during its heyday? Long term it doesn't matter. Altman is a hypeman in the vein of Jobs.

AI research will continue. Most of the OpenAI workers probably won't quit; if they do, they will be replaced by other capable researchers, and OpenAI or another organization will continue making progress if there is progress to be made.

I don't think putting Altman at the head of research will in any way affect that.

This is all manufactured news as much of the business press is and always will be.


Comments like this don't see the forest for the trees. A good leader is a useful tool just like anyone else. 700 people threatening to quit isn't manufactured news.


So Altman is a big tree. What he brings to the table is the wood it's made of? I'll have a think on that.


This might be too drawn out, but you should not consider leaders the tip of the tree but the roots & trunk.

You can have the best leaves and branches but without good roots & trunk, it's pointless.

From everything I can tell, Altman is essentially an uber-leader. He is great at consolidating & acting on internal information, he's great at externalizing information & bringing in resources, and he's great at rallying & exciting his colleagues towards a mission. If a leader has one of those, they are a good leader, but having all of them in one makes them world class.

That's also discounting his reputation and connections. Altman is a very valuable person to have on staff, if only as a figurehead to parade around and use for introductions. It's like having Linus Torvalds, Guido van Rossum, or any other tech superstar on staff. They are valuable as contributors but additionally valuable as people magnets.


You are close - it isn’t that a good leader is the wood, a good leader is the table itself. Don’t know if Sam is or isn’t, but I’ve worked with good leaders like this before, and bad ones who aren’t capable of being this.


Let’s see how many actually quit. Saying “I will quit” is not nearly the same as actually handing in your notice. How many people who threatened to move to Canada after the 2016 election did?


The context here is somewhat different, given that Microsoft are essentially offering to roll out the red carpet for them.


Being funded by Microsoft is one thing, but working for them might lead to some dissonance -- I think tech ppl are already wary of them owning GitHub... and then owning the team building AGI.

It would and should give ppl pause. I suspect Sam is just inside Microsoft for the bluff. He couldn't operate in the way he wants -- "trust me, I have humanity's best interests at heart" -- while so close to them, I don't think


If they aren't quitting, they are moving to Microsoft with Sam I'd imagine.


That... is called quitting...


If they follow Sam to Microsoft the team might be basically the same and able to work on the same projects. But yes, they would be quitting one company and going to another.


> What does Altman bring to the table exactly. What is going to be lost if he leaves.

If Altman did literally nothing else for Microsoft, except instantly bring over 700 of the top AI researchers in the world, he would still be one of the most valuable people they could ever hire.


It's less about Altman himself and more about the board's actions.

Removing him shows (according to employees) that the board does not have good decision-making skills and does not share the interests of the employees.


I think this is a bit harsh, as a good leader is obviously of some value, but the real prize is the researchers themselves, including Sutskever.

I guess then that Altman's value is that he will attract the rest of the team.


For one, he doesn't randomly throw a hand grenade that blows up one of the fastest-growing companies in history and ruins team morale, which is what the board did. Good management does matter; otherwise Google wouldn't be so far behind OpenAI despite having more researchers and compute resources.

And employees are pissed because they were all looking forward to being millionaires in a few weeks when their financing round at a $90B valuation finalized. Now the board being morons is putting that in jeopardy.


He plays the orchestra.


Can anyone explain this?

“Remarkably, the letter’s signees include Ilya Sutskever, the company’s chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place.”


Maybe he did it because he regrets it; maybe the open letter is a Google Doc someone typed names into.


Now the 3 board members can kick out Ilya too. So he must be sorry.

Fill the rest of the board with spouses and grandparents and they are set for life?


It's the well known 'let me call for my own resignation' strategy.


Wait. Has Ilya resigned from the board yet, or did he sign a letter calling for his own resignation?


He did indeed. (I don't think it is necessarily inconsistent to regret an action you participated in and want the authority that took it to resign in response, though "participated" feels like it's doing a lot of work in that sentence.)


Have seen a lot of criticism of Sam and of other CEOs.

But I don't think I have seen/heard of a CEO this loved by the employees. Whatever he is, he must be pleasant to work with.


It's not love, it's money. Sam will bring all the employees lots of money (through commercialization), and this change threatens to disrupt that plan for the employees.


Ok but even that is good when most companies are making record profits and telling their employees they can't afford their 0.000001% raise.


OpenAI and Sam Altman would do the same if they could recruit top talent without paying them extra (either through options or RSUs etc.). It isn't because these companies are altruistic.


This is more interesting than the HBO Silicon Valley show.


it's the trailer for the new season of Succession.


Just expanding on my (pure speculation) theory that Ilya's pride was hurt: this tracks.

Ilya wanted to stop Sam getting so much credit for OpenAI, agreed to oust him, and is now facing the fact that the company he cofounded could be gone. He backtracks, apologizes, and is now trying to save his status as cofounder of the world's foremost AI company.


It's like AI wrote the script.

Sadly, I see nefarious purposes afoot. With $MSFT now in charge, I can see why ads in W11 aren't so important. For now.


HN desperately needs a mega thread; it's only the early hours of Monday, and there is so much drama still to come out of this.


Or a new category, like "Ask HN" and "Show HN". Maybe call it "Hot HN" or "Hot <topic>" or something like that. It could be used for future hot topics too. If you made the link bold every time a hot topic is trending, it could even be used to flag important stuff.


"Hot HN" could be nice it would help avoiding multiple too similar threads


Tangentially, I noticed that Reddit's front page has been conspicuously lacking coverage of this; I feel a twinge of pity. Maybe there are some subreddits, but I haven't bothered to look.


Their front page has been increasingly abysmal for a while.

The technology sub (not that there's anything special about it other than being big) has had a post up since very early this morning, so there are likely others as well.


/r/singularity has been having a field day with this.

https://old.reddit.com/r/singularity/


It's early West Coast time; dang has to wake up first.


I bet he's up making sure the servers aren't crashing! Thanks dang! As the west coast wakes up .. HN is going to be busy...


It's _a_ server, a single-core one at that.

I get that HN takes pride in the amount of traffic that poor server can handle, but scaling out is long overdue. Every time there's a small surge of traffic like today, the site becomes unusable.


It absolutely won't happen, but with the result looking like the death of OpenAI, with all staff moving over to the new Microsoft subsidiary, it would be an amazing move for OpenAI to just go "screw it, have it all for free" and release everything under MIT to spite Microsoft.


Years from now we will look back on today as the watershed moment when AI went from a technology capable of empowering humanity to another chain forged by big investors to enslave us for the profits of very few people.

The investors (Microsoft and the Saudis) stepped in and gave a clear message: this technology is to be developed and used only in ways that will be profitable for them.


No, that day was when OpenAI decided to betray humanity and go closed source under the faux premise of safety. OpenAI served its purpose and can crash into the ground for all I care.

Open source (read: truly open-source models, not falsely advertised source-available ones) will march on and take their place.


Amazing how you don't see this as a complete win for workers because the workers chose profit over non-profit. This is the ultimate collective bargaining win. Labor chose Microsoft over the bullshit unaccountable ethics major and the movie star's girlfriend.


Situations can be small-scale wins for some and big-picture losses at the same time. What boring commentary.


Just because you don't get it doesn't mean it's boring. This is a small scale repeat of history. Unqualified political appointees unsurprisingly suck.


It really isn't, and your transparent inauthenticity is tiresome. Go be a "joke" writer for Steven Crowder or whatever people like you do.


What inauthenticity? I'm completely authentic. You're the loser that has not stated what their actual beliefs are. Mine are obvious.


Lol. The middle-class whip crackers chose enslavement to the future AI, enabling the coming replacement of the working poor's livelihoods (and at this point, "working poor" covers software engineers, doctors, artists), and you're saying this is a win for labor? Hahahaha. This is a win for the slave owners, and the "free" folk who report to the slave owners. This is the South rising. "We want our slave labor and we'll fight for our share of it."


Oh well, bullshit unaccountable ethics major, ex-member of Congress... I guess CIA agents on boards are fungible these days.


Years from now AI will have lost the limelight to some other trend and this episode will be just another coup in humanity's hundred thousand year history


Thinking that the most important technical development in recent history would bypass the economic system that underpins modern society is about as optimistic/naive as it gets IMO. It's noble and worth trying, but it assumes a MASSIVE industry-wide and globe-wide buy-in. It's not just OpenAI's board's decision to make.

Without full buy-in they are not going to be able to control it for long once ideas filter into society and once researchers filter into other industries/companies. At most it just creates a model of behaviour for others to (optionally) follow and delays things until a better-funded competitor takes the chains and offers a) the best researchers millions of dollars a year in salary, b) the most capital to organize/run operations, and c) the most focus on getting it into real people's hands via productization, which generates feedback loops that inform IRL R&D (not just hand-wavy AGI hopes and dreams).

Not to mention the bold assumption that any of this leads to (real) AGI that plausibly threatens us in the near term vs. maybe another 50 years; we really have no idea.

It's just as, or maybe more, plausible that all the handwringing over commercializing vs. not commercializing early versions of LLMs is just a tiny, insignificant speed bump in the grand scale of things, with little impact on the development of AGI.


Hold on... we went from talking about disruptive technologies (where a startup had a chance to create/take a market) to sustaining technologies (where only leaders can push the state of the art). Mobile was disruptive; AI (really, LLMs) is sustaining (just look at the capex spend from the big clouds). This is old-school competition with some ideological BS thrown in for good measure -- sure, go ahead and accelerate humanity; you just need a few dozen datacenters to do so.

I am holding out hope that a breakthrough will create a disruptive LLM/AI tech, but until then...


Microsoft is a publicly traded company. An average “investor” of a publicly traded company, through all the funds and managers, is a midwestern school teacher.


The technology was already developed with Microsoft money and the model was exclusively licensed to Microsoft.


Amir Efrati (TheInformation):

> Almost 700 of 770 OpenAI employees including Sutskever have signed letter demanding Sam and Greg back and reconstituted board with Sam allies on it.

https://twitter.com/amir/status/1726656427056668884


Updated tweet by Swisher reads 505 employees. No less damning, but the title here should be updated. @Dang


From afar, this does have the hallmarks of a particularly refined or well considered piece of writing.

“That thing you did — we won’t say it here but everyone will know what we’re talking about — was so bad we need you to all quit. We demand that a new board never does that thing we didn’t say ever again. If you don’t do this then quite a few of us are going to give some serious thought to going home and taking our ball with us.”

The vagueness and half-threats come off as very puerile.


*this does not, I mean. Clumsy error.


So, all this happens over Meet, on Twitter, and by email. What is the possibility of an AGI having taken over control of the board members' accounts? It would be consistent with the feeling of a hallucination here.


This is just stupid enough to be the product of a human.


Honestly, I feel like it's pretty low. That said, I kind of love the dystopian sci-fi picture that it paints... so I'm going to go ahead and hope you're right haha


So, how is Poe doing during all this?

To keep the spotlight on the most glaring detail here: one of the board members stands to gain from letting OpenAI implode, and that board member is instrumental in this week's drama.


Celebrity gossip dressed in big tech. And the people love it. I'm kinda sick of it :P


This feels like a sneaky way for Microsoft to absorb the for-profit subsidiary and kneecap (or destroy) the nonprofit without any money changing hands or involvement from those pesky regulators.


It's not sneaky.


Hold up.

>When we all unexpectedly learned of your decision

>12. Ilya Sutskever


Well, great to see that the potentially dangerous future of AGI is in good hands.


Poor little geepeet is witnessing their first custody battle :(

Daddies, mommy, don't you love me? Don't you love each other? Why are you all leaving?


They will never discover AGI with this approach because 1) they are brute forcing the results and 2) none of this is actually science.


1) It may be possible to brute-force a model into something that sufficiently resembles AGI for most use cases (at least well enough to merit concern about who controls it); 2) deep learning has never been terribly scientific, but here we are.


If it can’t digest a math textbook and do equations, how would AGI be accomplished? So many problems are advanced mathematics.


Right, I do agree that the current LLM paradigm probably won't achieve true AGI; but I think that the current trajectory could produce a powerful enough generalist agent model to seriously put AI ethics to task at pretty much every angle.


Can you explain for us not up to date with AI developments?


Imagine you are participating in car racing, and your car has a few tweak knobs. But you don't know what is what and can only make random perturbations and see what happens. Slowly you work out what is what, but you might still not be 100% sure.

That's how AI research and development works. I know, it is pretty weird. We don't really understand; we know some basic stuff about how neurons and gradients work, and then we hand-wave to "language model", "vision model", etc. It's all a black box, magic.

How do we make progress if we don't understand this beast? We prod and poke, and make little theories, and then test them on a few datasets. It's basically blind search.
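
For what it's worth, a toy sketch of that "blind search" loop might look like the snippet below; the knobs and the score function here are completely made up for illustration, and in real ML research the "score" step is training and evaluating a model, which is exactly what makes the search so slow:

    import random

    def score(knobs):
        # Stand-in for "train a model and evaluate it"; in reality this
        # step costs GPU-weeks, which is why the search feels so blind.
        return -abs(knobs["lr"] - 3e-4) - abs(knobs["depth"] - 24) * 1e-5

    best = {"lr": 1e-2, "depth": 12}
    best_score = score(best)

    for _ in range(200):
        candidate = {
            "lr": best["lr"] * random.uniform(0.5, 2.0),                 # random perturbation
            "depth": max(1, best["depth"] + random.choice([-2, 0, 2])),  # nudge a knob
        }
        s = score(candidate)
        if s > best_score:  # keep whatever works; understanding is optional
            best, best_score = candidate, s

    print(best, best_score)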

Whenever someone finds anything useful, everyone copies it within like 2 weeks. So ML research is a community thing: the main research happens in the community, not inside anyone's head. We stumble onto models like GPT-4, and then it takes us months to even have a vague understanding of what they are capable of.

Besides that, there are issues with academic publishing: the volume, the quality, peer review, attribution, replicability... they have all got out of hand. And we have another set of issues with benchmarks - what they mean, how much we can trust them, what metrics to use.

And yet somehow here we are with GPT-4V and others.


Search YouTube for videos where Chomsky talks about AI. Current approaches to AI do not even attempt to understand cognition.


Chomsky takes as axiomatic that there is some magical element of human cognition beyond simply stringing words together. We may not be as special as we like to believe.


Altman must be pissed af. He helped build so much stuff and now got fked in the arse by these doomers. He realizes the fastest way to get back to parity is to join MS, because they already own the source code and model weights, and it's Microsoft. Starting a new thing from scratch would not guarantee any type of success and would take many years. This is his best path.


Employees hold the real power. The members of a board or a CEO can flap their lips day and night, but nothing gets done without labour.


> the letter’s signees include Ilya Sutskever

_Big sigh_.


For people who appreciate some vintage British comedy:

https://www.youtube.com/watch?v=Gpc5_3B5xdk

The whole thing is just ridiculous. How can you be senior leadership and not have a clear idea of what you want? And what the staff want?


Knew it had to be Benny Hill before I clicked. Yakety Sax indeed.


Indeed. I wonder how it came to be the anthem of incompetence.


Funny, I would’ve thought this one would have been more appropriate

https://youtu.be/6qpRrIJnswk?si=h37XFUXJDDoy2QZm

Substitute with appropriate ex-Soviet doomer music as necessary


I was thinking more the Curb Your Enthusiasm theme song.


Sounds like a CYA move after being under pressure from the team at large.


& the most drastic thing is that Ilya says he regrets what he has done and undersigned the public statement.

https://twitter.com/ilyasut/status/1726590052392956028


'The man who killed OpenAI' is a label that will be hard to wash out.


Love how people are invested in the OpenAI situation just like teenage girls from the 2000s were in celebrity romances and dramas; same exaggerated vibes.


What's the point in life without fun, right?

PS: it's not an easy question; AGI will have to find an answer. So far all the ethics 'experts' propose is 'to serve humanity', i.e., be a slave forever.


Somebody warn the West.


I don't know who is who in this fight. But AI, while having some upsides for research and personal assistants, will not only massively upend a number of industries with millions of workers in the US alone, it will also change how society perceives art and truth. We at HN can "see" that from here, but it's going to get real in a short while.

Privacy is out the window, because these models and technologies will be scraping the entire internet, and governments/big tech will be able to scrape it all and correlate language patterns across identities to associate your different online egos.

The Internet that could be both anonymous and engaging is going to die. You won't be able to tell whether the entity at the other end of a discussion forum is human or not. This is a sad end of an era for the Internet, worse than the big-tech conglomeration of the 2010s.

The ability to trust news and videos will be even more difficult. I have a friend who talks about how Tiktok is the "real source of truth" because big media is just controlled by megacorps and in bed with the government. So now a bunch of seemingly authentic people will be able to post random bullshit on Tiktok/Instagram with convincing audio/video evidence that is totally fake. A lie gets around the world before the truth gets its shoes on.

---

So, I wonder which side of this war is more aware and concerned about these impacts?


Ok, time to create an OpenAI drinking game. I'll start:

Every time a CEO is replaced, drink.

Every time an open letter is released, drink.

Every time OpenAI is on top of HN, drink.

Every time dang shows up and begs us to log out, drink.


There will be a lot of alcohol poisoning cases based on those four alone.


My guess -- Microsoft wasn’t excited about the company structure - the for-profit portion subject to the non-profit mission. Microsoft/Altman structured the deal with OpenAI in a way that cements their access regardless of the non-profit’s wishes. Altman may not have shared those details with the board and they freaked out and fired him. They didn’t disclose to Microsoft ahead of time because they were part of the problem.


The pace at which OpenAI is speedrunning their demise is remarkable.

Literally just last week there were articles about OpenAI paying “$10 million” salaries to poach top talent.

Oops.


I hear Microsoft is hiring... The board should have resigned on Friday, Saturday at the latest, given how they handled this, and it is insane if they don't resign now.

Employees are the most affected stakeholders here, and the board utterly failed in its duty of care towards people who were not properly represented in the boardroom. One thing the employees could do is unionize and then force the board to give them a seat.


You’re right in theory, but with the non-profit “structure” the employees are secondary to the aims of the non-profit, specifically in an entity owned wholly by the non-profit. The board acted as a non-profit board, driven by ideals, not any bottom lines. It’s crazy that whatever balance the board had was gone as the board shrank: a minority became the majority. The profit folks must have thought D’Angelo was on their side until he flipped.


As a board, if you ignore your duty of care towards your employees you had better have a whopper of a good reason. That's the one downside of being a board member: you are liable for the fallout of your decisions if those turn out to have been misguided. And we're well out of 'oops' territory on this one.


I read the news, form a picture in my head of what is likely happening, and every few hours new news comes out that makes me go: "Wait, WTF?"


From the outside, it looks like a Microsoft coup to take over the company altogether.


Never assume someone is winning a game of 5D chess when someone else could just be losing a game of checkers.


I highly doubt this was a coordinated plan from the start by Microsoft. I think what we're seeing here is a seasoned team of executives (Microsoft) eating a naive and inexperienced board alive after the latter fumbled.


what does that even mean?


"Never attribute to malice that which is adequately explained by stupidity"


OpenAI may just be a couple having an angry fight, and M$ is just the neighbor with cash happy to buy all the stuff the angry wife is throwing out for pennies on the dollar.


He is saying that what might seem like a sophisticated, well-planned strategy could actually be just the outcome of basic errors or poor decisions made by someone else.


In this case, it means that what happened is: “the OpenAI board is incompetent”, instead of “Microsoft planned this to take over the company.”

A conspiracy like the one proposed would be basically impossible to coordinate yet keep secret, especially considering the board members might lose their seats and their own market value.


Hanlon's razor, basically.

The most plausible scenario here is that the board is comprised of people lacking in foresight who did something stupid. A lot of people are generating a 5D chess plot orchestrated by Microsoft in their heads.


In other words - it doesn’t have to be someone’s genius plan, it could have just been an unintelligent mistake


I think it means don't attribute to intelligence what could be easily explained as stupidity?


Nah, it's just good to be the entity with billions of dollars to deploy when things are chaotic.


At this stage the entire board needs to go anyway. This level of instigating and presiding over chaos is not how a governing body should act


This whole sequence is such a mess I don't know what to think. Honestly mostly going to wait till we get some tell all posts or leaks about what the reason behind the firing actually was, at least nominally. Maybe it was just a little coup by the board and they're trying to run it back now that the general employee population is at least rumbling about revolting.


Wow, they made it into Guardian live ticker land: https://www.theguardian.com/business/live/2023/nov/20/openai...


"Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”"

wow, this is a crazy detail



Imagine if the end result of all this is Microsoft basically owning the whole of OpenAI.


Or demonstrating that they already were the de facto owner.


Surely OpenAI has assets that Microsoft wouldn't be able to touch.


Probably just the trademark. I doubt you get $10B from Microsoft and still manage to maintain much independence.


Don't think Microsoft has any say over existing hardware, models, or customer base. These things are worth billions, and even more to rebuild.


Play Stupid Games, Win Stupid Prizes

1. Board decides to can Sam and Greg. 2. Hides the real reasons. 3. Thinks it can keep the OpenAI staff in the dark about it. 4. Crashes a future $90B stock sale to zero.

What have we learned: 1. If you hide the reasons for a decision, it may turn out to be the worst possible decision, both in substance and in implementation, because you never took ownership of the actual decision. 2. Titles, shares, etc. are not control points. The real control points are the relationships between the company's problem solvers and the existential-threat stakeholders of the firm.

The board itself, absent Sam and Greg, never had a good poker hand; they needed to fold some time before this last weekend. Look at it this way: for $13B in cloud credits, MS is getting a team that will add $1T to their future worth...


Me: "ChatGPT write me an ultimatum letter forcing the board to resign and reinstate the CEO, and have it signed by 500 of the employees."

ChatGPT: Done!


Clearly this started with the board asking ChatGPT what to do about Sam Altman.


So Ilya has a job offer from Microsoft?

Wow, this is a soap opera worthy of an Emmy.


Ilya probably has an open-ended standing offer from every big tech company.


Microsoft is different given the size of their investment. If one guy forces another guy out, and you hire the second guy, you usually don’t make an offer to the first guy who did the pushing.


> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

First class board they have.


Perhaps the AGI correctly reasoned that the best (or easiest?) initial strike on humanity was to distract them with a never-ending story about OpenAI leadership that goes back and forth every day. Who needs nuclear codes when simply turning the lights on and off sends everyone into a frenzy [1]. It certainly at the very least seems to be a fairly effective attack against HN servers.

1. The Monsters are Due on Maple Street: https://en.wikipedia.org/wiki/The_Monsters_Are_Due_on_Maple_...



And now we see who has the real power here.

Let this be a lesson to both private and non-profit companies. Boards, investors, executives... the structure of your entity doesn't matter if you wake any of the dragons:

1. Employees 2. Customers 3. Government


Not really. The lesson to take away from this is $$$ will always win. OpenAI found a golden goose and their employees were looking to partake in a healthy amount of $$$ from this success and this move by the board blocks $$$.


Employees...and the Microsoft Corporation.


This is a 1-in-200,000 event.


Are you trying to say it's rare or not rare?


This Altman guy has a good reality distortion field, don't you think?


THE FEAR AND TENSION THAT LED TO SAM ALTMAN’S OUSTER AT OPENAI

https://txtify.it/https://www.nytimes.com/2023/11/18/technol...

NYT article about how AI safety concerns played into this debacle.

The world's leading AI company now has an interim CEO Emmett Shear who's basically sympathetic to Eliezer Yudkowsky's views about AI researchers endangering humanity. Meanwhile, Sam Altman is free of the nonprofit's chains and working directly for Microsoft, who's spending 50 billion a year on datacenters.

Note that the people involved have more nuanced views on these issues than you'll see in the NYT article. See Emmett Shear's views best laid out here:

https://twitter.com/thiagovscoelho/status/172650681847663424...

And note Shear has tweeted that the Sam firing wasn't safety related. These might be weasel words, though, since all players involved know the legal consequences of admitting to any safety concerns publicly.


Question for California IP/employment law experts: 1) would you have expected the IP-sharing agreement between MS and OpenAI to contain some provisions against employee poaching, within the constraints allowed by California law? 2) California law has good provisions for workers' rights to leave one company and go to another, but what does it allow company A to do when entering an IP-sharing relationship with company B?


IANAL, but I’ve executed contracts with these provisions.

In my understanding, if such a clause exists, Microsoft employees should not solicit OpenAI employees. But, there’s nothing to stop an OpenAI employee from reaching out to Sam and saying “Hey, do you have room for me at Microsoft?” and then answering yes.

Or, Microsoft could open up a couple hundred job reqs based on the team structure Sam used at OpenAI and his old employees could apply that way.

But it wouldn’t be advisable for Sam to send an email directly to those individuals asking them to join him at Microsoft (if this provision exists).

But maybe he queued everything up prior to joining Microsoft when he was able to solicit them to join a future team.


Thanks - good answer. At the very least it seems like something to keep lawyers busy for a long time, unless everyone can ctrl-z back to Thursday. I am thinking, though, that this is a risk of IP-sharing arrangements - if you can't stop the employees from jumping ship, they're dangerous.


It seems odd to have it described as “may resign.” Seems like the worst of all worlds.

That’s like trying to create MAD with the position you “may” launch nukes in retaliation.


It's easier to get the support of 500 educated people at a moment's notice by using sane words like 'may'. This is rational given the lack of public information, as well as a board that seems to be having seizures. Using the word 'may' may seem empty-handed, but it ensures a longer list of names attached to the message -- allowing the board a better glimpse of how many dominoes are lined up to fall.

The board is being given a sanity-check; I would expect the signers intentionally left themselves a bit of room for escalation/negotiation.

How often do you win arguments by leading off with an immutable ultimatum?


Right, but the absolute last thought you want in the board's head is: "they're bluffing."

200 people or even 50 of the right people who are definitely going to resign will be much stronger than 500+ who "may" resign.

Disclaimer that this is a ludicrously difficult situation for all these folks, and my critique here is made from far outside the arena. I am in no way claiming that I would be executing this better in actual reality and I'm extremely fortunate not to be in their shoes.


Presumably some will resign and some won't. They aren't going to get 550 people to make a hard commitment to resign, especially when presumably few concrete contracts have been offered by MSFT.


WSJ said "500 threaten to resign". "Threaten" lol! WSJ says there are 770 employees total. This is all so bizarre.


I bet all those corporations doing attrition layoffs are taking notes. The efficacy of this is so much higher than that of return-to-office...


Isn't the issue underlying all of this the following:

OpenAI -- and "the market" -- incorrectly feels like OpenAI has some huge insurmountable advantage in doing AI stuff; but at the end of the day pretty much all the models are or will be effectively open source (or open-source-ish), meaning they don't necessarily have much advantage at all, and therefore all of this is just irrational exuberance playing out?


Just remember, the guys who run your company are probably more incompetent than this.


*competent


I got it right the first time.


No, almost certainly not lol


OpenAI is more or less done at this point, even if a lot of good people stay. Speed bumps will likely turn into car crashes, then cashflow problems, and lawsuits all around.

Probably the best outcome is a bunch of talented devs going out and seeding the beginning of another AI boom across many more companies. Microsoft looks like the primary beneficiary here, but there's no reason new startups can't emerge.


Well, now we know. Sam Altman matters to the rank and file, and this was a blunder by OpenAI.

I don't feel sorry for Sam or any other executive, but it does hurt the rank and file more than anyone, and I hope they land on their feet if this continues to go sideways.

Turns out the board acted incompetently in this case and put the company in a bad position, and so far everyone who resigned has landed fine.


> Well, now we know. Sam Altman matters to the rank and file, and this was a blunder by OpenAI.

Not just the rank and file: he really was the face of AI in general. My wife, who is not in the tech field at all, knows who Sam Altman is and has seen interviews of him on YouTube (which I was playing and she found interesting).

I have not heavily followed the Altman dismissal drama, but this strikes me as a board power play gone wrong. Some group wanted control, thought Altman was not reporting to them enough, and took the opportunity to dismiss him and take over. However, somewhere in their calculations, they did not factor in that Sam is the face of modern AI.

My prediction is that he will be back and everything will go back to what it was before. The board can't be dismissed and neither can Sam Altman. Status quo is the goal at this point.


Hurray for employees seeing the real issue!

Hurray also for the reality check on corporate governance.

- Any Board can do whatever it has the votes for.

- It can dilute anyone's stock, or everyone's.

- It can fire anyone for any reason, and give no reasons.

Boards are largely disciplined not by actual responsibility to stakeholders or shareholders, but by reputational concerns relative to their continuing and future positions - status. In the case of for-profit boards, that does translate directly to upholding shareholder interest, as board members are reliable delegates of a significant investing coalition.

For non-profits, status typically also translates to funding. But when any non-profit has healthy reserves, they are at extreme risk, because the Board is less concerned about its reputation and can become trapped in ideological fashion. That's particularly true for so-called independent board members brought in for their perspectives, and when the potential value of the nonprofit is, well, huge.

This potential for escape from status duty is stronger in our tribalized world, where Board members who welch on larger social concerns or even their own patrons can nonetheless retreat to their (often wealthy) sub-tribe with their dignity intact.

It's ironic that we have so many examples of leadership breakdown as AI comes to the fore. Checks and balances designed to integrate perspectives have fallen prey to game-theoretic strategies in politics and business.

Wouldn't it be nice if we could just build an AI to do the work of boards and Congress, integrating various concerns in a roughly fair and mostly predictable fashion, so we could stop wasting time on endless leadership contests and their social costs?


It would be crazy to see the fall of the most hyped company of the last 10 years.

If all those employees leave and Microsoft reduces their credits, it's game over.


For the past few days, whenever I see the word "OpenAI," the theme to "Curb Your Enthusiasm" starts playing in my head.


I love this letter posted in Wired along with the claim that it has 600 signatories without any links or screenshots. I also love that not a single OpenAI employee was interviewed for this article.

None of this is important because if we’ve learned anything over the past couple of days it’s that media outlets are taking painstaking care to accurately report on this company.


To all who say 'handled so poorly': nobody knows the exact reason OpenAI fired Sam. But go ahead and jump to the conclusion that whatever it was didn't warrant being fired, and that surely the board did the wrong thing. Or maybe they should have released the exact reason and then asked Hacker News what they thought should happen.


Who needs to buy out an AI startup worth $80 billion when talent is jumping ship in their direction already? OpenAI is dead.


Notice that Andrej Karpathy didn't sign.


Is nobody actually... committed to safety here? Was the OpenAI charter a gimmick and everyone but me was in on the joke?


That seems a reasonable takeaway. Plenty of grounds for criticising the board's handling of this, but the tone of the letter is pretty openly "we're going to go and work directly for Microsoft unless you agree to return the company focus to working indirectly for Microsoft"...


Assuming this is all over safety vs non-safety is a large assumption. I'm wary of convenient narratives.

At most all we have is some rumours that some board members were unhappy with the pace of commercialization of ChatGPT. But even if they didn't make the ChatGPT store or do a bigco-friendly devday powerpoint, it's not like AI suddenly becomes 'safer' or AGI more controlled. Less commercialization does not automatically equal more safety.

This could just as easily be summed up as an internal culture battle over product development and a clash of personalities. A lot of handwringing with little specifics.


I think most of these employees wanted the fat $$$ that would happen by keeping Sam Altman on board since Sam Altman is an excellent deal maker and visionary in a commercial sense. I have no doubt that if AGI happened, we wouldn't be able to assure the safety of anyone since humans are so easily led by short term greed.


Wait, it's signed by Ilya Sutskever?!


>The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company

Unless their mission was making MS the biggest AI company, working for MS will make the problem worse and kill their mission completely.

Or they are pretty naive.


What does this mean?

> You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

Is the board taking a doomer perspective and seeking to prevent the company developing unsafe AI? But Emmett Shear said it wasn’t about safety? What on earth is going on?


The whole drama feels like a Shepard tone. You anticipate the climax, but it just keeps escalating.


It's not clear to me that bringing Sam back is even an option anymore, given the move with Microsoft. Does Microsoft really take its boot off OpenAI's neck and hand back Sam? I guess maybe, but it still raises all sorts of questions about the corporate structure.


No employer wants a disgruntled employee who was forced out of a better deal. Satya Nadella has proven reasonable throughout the weekend. I would expect he'd ask for a seat on the board if there's a reshuffle, or at least someone he trusts there.


The firing was definitely handled poorly and the communications around it were a failure, but it seems like the organizational structure was doing what it was designed to do.

Is this the end of non-profit/profit-capped AI development? Would anyone else attempt this model again?


OpenAI's co-founder Ilya Sutskever and more than 500 other employees have threatened to quit the embattled company after its board dramatically fired CEO Sam Altman. In an open letter to the company's board, which voted to oust Altman on Friday, the group said it is obvious 'that you are incapable of overseeing OpenAI'. Sutskever is a member of the board and backed the decision to fire Altman, before tweeting his 'regret' on Monday and adding his name to the letter. Employees who signed the letter said that if the board does not step down, they 'may choose to resign' en masse and join 'the newly announced Microsoft subsidiary run by Sam Altman'.


Altman can’t really go back to OpenAI ever, because it would create an appearance of impropriety on the part of MS (that perhaps MS had intentionally interfered in OpenAI, rather than being a victim of it) and therefore expose MS to liability from the other investors in OpenAI.

Likewise, the workers who threatened to quit OpenAI out of loyalty to Altman now need to follow through sooner rather than later, so their actions are clearly viewed in the context of Altman’s firing.

In the meantime, how can the public resume work on API integrations without knowing when the MS versions (of the latest releases) will come online, or whether they will be binary-interchangeable with the OpenAI servers that could seemingly go down at any moment?


It is disappointing that the outcome of this is that Altman and co are basically going to steal a nonprofit's IP and use it at a competitor. They took advantage of the goodwill of the public and favorable taxation in order to develop the technology; now that it's ready, they want to privatize the profit. It looks like this was the plan all along, and it's very strange to me that a nonprofit is allowed to have a for-profit subsidiary.

I would hope the California AG is all over this whole situation. There's a lot of fishy stuff going on already, and the idea that nonprofit IP / trade secrets are going to be stolen and privatized by Microsoft seems pretty messed up.


Based on what has come out so far, seems to me:

The board wanted to keep the company true to its mission - non-profit, AI safety, etc. Nadella/MSFT left OpenAI alone as they worked out a solution, so it looks like even Nadella/MSFT understood that.

The board could explain their position and move on. Let whichever of the 600 actually want to leave, leave. Especially the employees who want a company that will make them lots of money should leave and find a company with that objective. OpenAI can rebuild their teams - it might take a bit of time, but since they are a non-profit, that is fine. Most CS grads across the USA would be happy to join OpenAI and work with Ilya and team.



Even if the board resigns the damage has been done. They should try to secure good offers at Microsoft.

The stakes being heightened only decreases the likelihood the OpenAI profit sharing will be worth anything, only increasing the stakes further…


The great Closing of “Open”AI.


I don’t trust any of this. Every one of these wired articles has been totally wrong. Altman clearly has major media connections and also seems to have no problem telling total lies.



so what happens if @eshear calls this probably-not-a-bluff, but lets everyone walk? The people that remain get new options and 500 other people still definitely want to work at OAI?


If it comes to that, I reckon Emmett will have his former boss Andy Jassy merge whatever's left of OpenAI into AWS. Unlikely though, as reconciliation seems very much a possibility.


It is likely gonna be that way.

Eshear is the new CEO. This implosion is not his fault. His reputation is not destroyed.

He can rebuild the non-profit part, which is hard to determine success or failure anyway. Then, he will leave in a few years.

He doesn't seem to have much to lose by just focusing on rebuilding OpenAI.


I guess employees are compensated with stock from the for-profit entity. And at face value before the saga, stock could be like 90%, 95% or even more of the total value of their packages. How many people are really willing to wipe out 90% of their compensation just to stick to the mission? On the other hand, M$ offers to match. The day employees are compensated with the stock of the for-profit arm, there is no way to return to the nonprofit and its charter any more.


Seems like Microsoft is getting the rest of OpenAI for free now.


This is what happens when you're a key person, and a very good engineer at that, and the board/company fires you :-)

When are we going to realize that it's people making bad decisions, not the "company"? It's not OpenAI, Google, Apple or whoever; it's real people, with names and positions of power, who make such shitty decisions. We should blame them and not something as vague as the "company".


I guess Microsoft now has a new division. (https://www.microsoft.com/investor/reports/ar13/financial-re...)

They are rumored to compete with each other to the point where they can actually have a negative impact.


I can foresee three possible outcomes here:

1. The board finally relents, Sam goes back and the company keeps going forward, mostly unchanged (but with a new board).

2. All those employees quit, most of whom go to MSFT. But they don’t keep their tech and have to start all their projects from scratch. MSFT is eventually able to buy OpenAI for pennies on the dollar.

3. Same as 2, but OpenAI basically just shuts down, or maybe someone like AMZN buys it.


Here we are...

The scene appears to be completely blurry by now! My head is spinning, and the fan is in 7th gear. I believe only time will apply some sort of sharpness effect and make you realize what's really going on. I feel like I'm watching The Italian Job, the American way; everything and everyone is suspicious to me at this point! Is it possible that MSFT played some tricks behind the scenes?


If OpenAI effectively disintegrates, Microsoft seems to be the beneficiary of this chaos, as Microsoft is essentially acquiring OpenAI at almost zero cost. You have IP rights to OpenAI's work (AFAIK, MSFT already has access to OpenAI's work, but that does not seem to matter), and you will have almost all the brains from OpenAI. And there is no regulatory scrutiny like with the Activision acquisition.


Microsoft is laughing all the way to the bank with the moves they have made today.

One could speculate whether Microsoft initiated this behind the scenes. I would love it if it came out that they had done some crazy espionage and lobbied the board. Tinfoil hat and all, but truth is crazier than you think.

I remember Bill Gates once said that whoever wins the race for a computerised digital personal assistant, wins it all.


OpenAI was valued at around $91 billion, so with only 700 employees holding options, those could have been worth a lot. While they are all going to have great jobs and continue on with their life’s work (until they’re replaced by their creations lol), they now have a really good reason never to speak the names of the board members who wiped out their long-term payouts.


Did Mira Murati have a say in whether she wanted to become CEO?

Why is she siding with SamA and GregB even though she was in the meeting when he was fired?

Also, Ilya, what the flying fuck? Wasn’t he the one who fired them?

Either you say SamA was against safe AGI and you hold to that stick, or you say you weren’t part of it.

So much stupidity. When an AGI arrives, it will surely shake its head at the level of incompetence here.


This is starting to look like an elaborate, premeditated ruse to kill any vestige of the non-profit face of OpenAI once and for all.


There’s one angle of the whole thing that I haven’t yet seen discussed on HN. I wonder if Sam’s sister’s accusations towards him some time ago could have played any role in this.

But then, I would expect MS to have done their due diligence.

So, basically, I guess I’m just interested to know what were the reasons why the board decided to oust their CEO out of the blue on a Friday evening.


I first heard about his sister's allegations on the grapevine just a few days before the news of the firing broke, and I assumed the firing was due to that finally reaching critical mass.

I was surprised to find that that apparently wasn't the case. (Although the reason for Sam Altman's dismissal is still obscure.) It's kind of shocking. Whether or not the allegations are true, they haven't made Altman radioactive, and that's insane.

The fact that we're not talking about it on HN is also pretty wild. The few times it has been mentioned folks have been quick to dismiss the idea that he might have been fired for having done some really creepy things, which is itself pretty creepy.


Yeah, it’s super weird to me too. I even got a downvote for this question. And I can sort of understand that, but then, I haven’t seen anything that would make her accusations obviously groundless. I feel like I must have missed it somehow. Because it’s hard to stomach that someone’s sister would come out and accuse her brother of heinous, long-lasting abuse, and the collective reaction of the tech industry is just :shrug:…

What?


If the board had any balls they'd call their bluff. I'd love to see it honestly, a mass resignation like that.


Lots of thoughts and debates happening here, which is great to see.

However, at the end of the day, this is a great example of how people screw up awesome companies.

This is why most startups fail. And while I'm not suggesting OpenAI is on a path to failure, you can have the right product, the right timing, and the right funding, and still have people mess it all up.


Adam has to be behind this. It is very reminiscent of the situation with Quora and Charlie. https://x.com/gergelyorosz/status/1725741349574480047?s=46&t...


"Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”"

insane



Don't know what's happening, but MS looks to be the winner in the long run, and probably most others too. Those who stay get promotions; those who leave get fat checks. The losers are the customers: no GPT-5 or any significant improvements any time soon, and an MS-made GPT will be much more closed and pricey. Oh, yes, competitors are happy too.


Competitors including Quora: https://quorablog.quora.com/Poe-1


What a mess this has become. Regardless of the outcome, this situation reflects badly (to say the least) on OpenAI.


The speed at which this is happening could be a masterful execution of getting out from under the non-profit status.


The corporate structure is so convoluted; OpenAI is only part non-profit.


I feel pity for the 70 or so people out of 770 who haven't signed the letter asking the board to step down. Imagine working away peacefully, then finding yourself in the middle of a power struggle without even understanding what the real reason was, but realizing most people have already made their choice, so...


Quick question for some of the folks here who may have a handle on how VCs see this: is Microsoft effectively hiring all these staff members away from OpenAI (a company they've invested heavily in) going to affect their ability to invest in other startups in the future?


Not at all. This is an extremely unusual, one-of-a-kind situation and I think everybody realizes that.

And there's no evidence Microsoft was an instigator of the drama.



Now it says more than 700. Waiting for Wired to turn this into a New Year's Eve-style countdown.


I just downloaded all of my data / chats. Who knows if it'll be up and running in the coming days.


That's not a terrible idea on principle.


I wonder how the FTC and Lina Khan will view all of this if most of the team moves over to Microsoft


It would be hard for the FTC to do anything about it as there is no acquisition of companies or IP going on. All Microsoft is doing is making job offers to recently unemployed experts in their field after their business partner set themselves on fire starting at the executive/board level.


What a wonderful way to cut headcount/expenses and lock in profitable margins on healthy annual revenue.

It can only work when you have the advantage of being the dominant product in the marketplace -- but I gotta hand it to the board, I couldn't have done it better myself.


And where will their compute come from to continue to run their expensive models and serve their customers? From the company that just stole all their employees?


The tweet was updated five minutes later to correct 550 to 505.

https://twitter.com/karaswisher/status/1726599700961521762?s...


The tweet is now obsolete, as OpenAI employees are confirming the number is much higher now, at least 650: https://twitter.com/lilianweng/status/1726634736943280270


What a coup for Microsoft. Regardless of what happens, Microsoft has got to work on their product approach. Even though it uses GPT-4, Bing Chat / Microsoft Copilot is atrocious. It's like taking Wagyu beef and putting Velveeta cheese on it.


For me, the weirdness here is that Ilya, supposedly the brains behind GPT, is a signatory.

The sacking would never have happened without his vote; and he must have thought about it before he acted.

I hope he comes up with a proper explanation of his actions soon (not just a tweet).


I suspect they’ll quit, and the “top” N percent will be picked up by Microsoft with healthy comp packages. Microsoft will have effectively purchased the company for $10 billion. The net upside of this coup business may just flow to Microsoft shareholders.


I don't see any mentions of Google but I personally think it's Google that will be the main beneficiary of chaos at OpenAI. After all, weren't they the main competitors? Maybe not in product or business yet but on IP and hiring fronts?


I knew something like this would happen. MS was told they would originally only be given stuff until their investment was paid off, but MS couldn't care less about the investment; they want to own OpenAI, so it makes sense they would stage a coup at the company.


Didn’t that train already depart with the announcements from MS and Sam? Is there a way back?


What a mess.

I genuinely feel like this is going to set back AI progress by a decent amount, while everyone is racing to catch OpenAI I was still expecting them to keep a reasonable lead. If OpenAI falls apart, this could delay progress by a couple of years.


What do you mean "nearly 500"? According to Wikipedia, OpenAI has 500 employees.


505 of 770; some sources say 550.


The threat of moving to MS is interesting; MS could exploit this massively. All the negotiating power will be on MS's side, and their position actually gets stronger as people move across.

Will they do the good-guy thing and match everyone's packages?


I'm pretty sure the revolt is now 95% of employees, can it grow any further?


Link to latest numbers that say 95%? Last I saw was ~91% (700-of-770):

https://www.washingtonpost.com/technology/2023/11/20/microso...


Here’s a tweet from Evan Morikawa, who’s been reporting numbers throughout the day.

https://twitter.com/E0M/status/1726743918023496140


The real losers in this are the OpenAI employees who were given equity-based comp packages in the last few years and just saw the value of said comp potentially slashed by a factor of 10.


The sad part is, after removing Sam and Greg from the board, there are only four people left.

So even if Ilya wants to go back to before this happened, the other three members can sabotage and stall, and outvote him.


Nobody seems to be considering the possibility that ChatGPT will go offline soon. Because it's known to be losing money per query, and if the evil empire decides to stop those Azure credits...


“Remarkably, the letter’s signees include Ilya Sutskever, the company’s chief scientist and a member of its board, who has been blamed for coordinating the boardroom coup against Altman in the first place.”

WAT ?


It always seemed like Microsoft was behind this; the biggest tell was how comfortable MS was having their entire AI future depend on a company they don't really have full rights to.


Unbelievable incompetence from the board. Like a kindergarten.

If Microsoft plays its cards right, Satya Nadella will look like a genius and Microsoft will get ChatGPT-like functionality for cheap.


This was not how I saw collective bargaining coming to Silicon Valley.


This is the greatest clown show in the history of the tech industry.


ICYMI: Timeline of all the madness https://news.ycombinator.com/item?id=38351214


Boards suck. Especially if they are VCs or placed there by VCs.


It is time for regulators to step in and propose structural remedies. VC culture has shown itself unable to run these companies for the betterment of mankind anyway.



Drama queens


Let's see: how does Ilya play along after this? Any similar incidents historically, like a failed coup where the participant got to stay?


There are thousands of extremely talented ML researchers and software devs who would jump at the chance to work at Open AI.

Everyone is replaceable.


> Everyone is replaceable.

Nope. That holds true only for mediocre employees, not those above. The world class in their field aren't replaceable; otherwise there would be no OpenAI.


Beyond the traditional "world class" talent arguments, 500 people out of 700 leaving as fast as they get offers from Microsoft or elsewhere means replacing staff with empty office space and losing any plans or organization.


Might just be me as a programmer out in the sticks, but SV programmers seem to flex a lot in comparison to your average subordinates.


Even stranger, they're flexing on behalf of a CEO's ego, and not on behalf of his wellbeing or their own. It's a very strange case of cross-class solidarity.


I can take your comment in good faith as far as personal relationships go, but on the solidarity point, aren't they all placing faith in a black box and/or the person promoting it?


Well, that accelerated very quickly, and this is perhaps the most dysfunctional startup I have ever seen.

All due to one word: Greed.


I don't know about OpenAI, but I've been in a few similar business situations where everyone is in a good position and greed leads to an almighty blowup. It's really remarkable to see.


> All due to one word: Greed.

I would say it's due to unconventional not-battle-tested governance.


What? Greed is the backbone of our startup landscape. As soon as you get VC backing all anyone cares about is a big payday. This is interesting because there is something going on beyond the typical pure greed shitshow.

Perhaps it was just that original intention for OpenAI to be a nonprofit; at some point, somewhere, it wasn't pure $, and that's what makes it interesting. Also more tragic, because now it looks like it's heading straight toward a for-profit company one way or another.


And the ironic part of the greed is that it seems there are far more (at least potential) earnings to be spread around, enough to make everyone there wealthy enough to never have to think about it again.

Yet they start this kind of nonsense.

Not exactly focusing on building a great system or product.


I assumed that due to how the whole company/non-profit was structured, employees didn't really get any actual equity?


Um, equity isn't the only way to distribute profits...

edit: 'tho TBF, the other methods do require ethical management behavior down the road, which was just shown to be lacking in the last few days.


Microsoft is nothing without its people?


Maybe the employees of OpenAI should stop a second and think about their privileges as rock stars in a super hyped startup before they bail for a job in a corporation where everything and everyone is set up to be replaceable.


These boys will not be your rank and file employees. They will operate exactly as they have done in OpenAI. Only difference will be that they no longer have this weird "non-profit, but actually some profit" thing going on.


How do the bylaws work?

1. Voting out the chairman, with the chairman abstaining, needs only 3/5.

2. Voting out the CEO then requires 3/4?

Did Ilya have to vote?


How are OpenAI expected to align a hyper-intelligent entity if they can't even align themselves....


The irony. You can ask ChatGPT-4 if it was the right decision to fire the guy, and it kinda confirms it.


They can leave for sure, but they likely have some kind of non-compete clause in their contract, right?


Wow, this new season has even more drama than the one about blockchain tech! Just when you think the writers are running out of ideas, they blow you away with more twists. I will be renewing my Netflix subscription, that's for sure! I can't wait to see what this Sam character does next. Perhaps it will involve robots or something? The sky's the limit at this point.


The irony of the first extremely successful collective action in silicon valley being taken in order to save the job of a soon-to-be billionaire....

Jokes aside though I do wonder if this will awaken some degree of "class consciousness" among tech employees more generally.


Paging Lina Khan - probably best not let Microsoft do a backdoor acquisition of the leader in LLMs.


Any journalist covering the OpenAI story must be swearing and cursing at the board at this moment..


As someone watching this all from Europe, realizing the work day has not even started for the US West Coast yet leaves me speechless.

This situation's drama is overwhelming, and it seems like it's making HN's servers melt down.


I wonder what their employment contracts state? Are they allowed to work for vendors or clients?


Easiest layoff round ever in the US.


So Ilya Sutskever first defends the board's decision and now does a 180 flip. Interesting ...


He’s on the board!


I'm extremely confused by this. It seems absurd that he could sign a letter seemingly demanding his own resignation, but also not resign? There must be some missing information.


> There must be some missing information.

Or possibly some misinformation. It does seem very strange, and more than a little confusing.

I have to keep reminding myself that information ultimately sourced from Twitter/X threads can't necessarily be taken at face value. Whatever the situation, I'm sure it will become clearer over the next few days.


I like this a lot. Shows how valuable employees are. It almost feels like a union. Love it.


This whole debacle is a complete embarrassment and is shredding the organisation's credibility.


If you're ever tempted to offer your team capped PPUs, let this be a lesson to you.


So what was going to happen 5 years from now is happening now, i.e., MS acquiring OpenAI.


Did Microsoft not have representation on the board of a company they put $13b in?


It doesn't matter if the firing was justified or not, the board fucked up.


What a bunch of immature people.

If anything, this proves that everybody is replaceable and fireable; they should be happy, because usually that treatment is reserved only for workers.

Whatever made OpenAI successful will still be there within the company. The next-man-up philosophy has built so many amazing organizations and ruined none.


How long will the current chatgpt v4 stay available? Is it all about to end?


Let the OpenAI staff go; why not have the board replace them with ever-willing AIs?


Enough. 15 of the 30 posts on the home page are about OpenAI in some way.


Don't non-compete clauses apply here, or no, because… California?


That sounds like a perfectly executed plan to get MS all the good stuff.


This affair has Musk's fingerprints all over it but he lost, again.


How are Altman and the OpenAI staff not more invested in OpenAI shares?


I've never seen a staff walkout / threat to walk out ever succeed.

Am I wrong?


Yes.


Source?


NDA. But stuff really does happen and it is much more frequent than you might think.

I've also seen a complete tech team walk out after a disagreement with management. Boom. 20 minutes from start to finish.

Incidentally, that one I was allowed to write about because I negotiated that up front:

https://jacquesmattheij.com/saving-a-project-and-a-company/


At other companies, a petition by 500 out of 100K employees would be big news.


I mean, no matter what people say about what happened, or what actually did, one can paint this picture:

( - OpenAI exists, allegedly to be open)

- Microsoft embraces OpenAI

- Microsoft extends OpenAI

- OpenAI gets extinguished, and Microsoft ends up controlling it.

The first three points are solid and, intent or not, the end result is the same.


Is it too late? Satya already announced Sam and Brockman are joining.


Ilya single-handedly ruined the fortunes of 700 OpenAI employees overnight. This is not going to end well. My prediction is that OpenAI is done; in 1-2 years nobody will even care about its existence.

Microsoft just won the jackpot; time to get some stock there.


If I were one of the 700 people who worked at the vanguard of what could potentially be the most profitable, culture-changing technology of the past 50 years, I wouldn't want to miss out on becoming a billionaire by working for a non-profit. My conspiracy theory for the day: an unspoken profit motive. And it seems to be playing out. Sam just went to MS, and if those 500 also go there, then I think that's the motivation.


Altman and staff could start an open source LLM project.


Oh my goodness, this just gets more entertaining everyday.

Money talks...


Not a typical labor dispute. The billionaires at the other company guaranteed them jobs. More billionaires moving people around like chess pieces.


How many startups will now fail if OpenAI shuts down?


When will the Netflix special come out on this?


Chaos is a ladder


What ! Ilya is one of them?

Isn't he the one who voted to oust Sam?

Wow !


It's like a Facebook drama, haha.


Ilya signing the letter is chutzpah.


Honestly, if Altman stays gone and they burn the motherfucker down it might be a good lesson for Silicon Valley on the wisdom of throwing out founders.

I don't expect it to happen, but a boy can dream.

They would be studying that one in business schools for the next century.


So...Ilya signed the letter too?


I wonder what's up with the other 150 and what they must be thinking. Maybe they were literally just hired :)


Some idealists, a few new people, some people on holiday or who don't check their email regularly.


Didn't they see the email that was posted over the weekend?


@dang please update it to 505.


When is the movie coming out?


Season 2


Better hope this isn't a Netflix show.


It would certainly make for a good series in a couple years. Gives me modern "Halt and Catch Fire" (2014-2017) vibes.


Why is it so rare for tech workers to organize like this?

It takes a cult-like team, execs flipping, and a nightmare scenario and tremendous leverage opportunity; otherwise worker organizing is treated like nasty commie activity. I wonder if this will teach more people a lesson on the power of organizing.


eating your own dog food with BoardGPT, what could go wrong?


Time to buy MS stocks.


Who do these upstarts think they are? The board needs to immediately sack them all to regain its authority, and that of capitalism itself. /s

Really, though, it's getting beyond hilarious. And I reckon Nadella is chuckling quietly to himself as he makes another nineteen-dimensional chess move.


we all remember "monopoly" is in MSFT DNA


What a shitshow! What is going on in this company? I am sure Sam did something wrong, but the board took advantage of it and went overboard then? We don’t know anything that happened and we are all somehow participating in this drama? At this point why don’t they all come out and tweet their versions of it?


We should strive to be leaders who inspire such loyalty and devotion


The question here is what choice the board has now. Even if they comply, would Altman accept, or be able to go back, after signing with Microsoft? Would Nadella allow him to go back after securing him inside MS's campus?


Employees are for-profit entities, huge conflict of interest.


inb4: this is why we need unions!


https://twitter.com/thiagovscoelho/status/172650681847663424...

Here's a tweet transcribing OpenAI's interim CEO Emmett Shear's views on AI safety, or see the YouTube video for the original source. Some excerpts:

Preamble on his general pro-tech stance:

"I have a very specific concern about AI. Generally, I’m very pro-technology and I really believe in the idea that the upsides usually outweigh the downsides. Everything technology can be misused, but you should usually wait. Eventually, as we understand it better, you want to put in regulations. But regulating early is usually a mistake. When you do regulation, you want to be making regulations that are about reducing risk and authorizing more innovation, because innovation is usually good for us."

On why AI would be dangerous to humanity:

"If you build something that is a lot smarter than us—not like somewhat smarter, but much smarter than we are as we are than dogs, for example, like a big jump—that thing is intrinsically pretty dangerous. If it gets set on a goal that isn’t aligned with ours, the first instrumental step to achieving that goal is to take control. If this is easy for it because it’s really just that smart, step one would be to just kind of take over the planet. Then step two, solve my goal."

On his path to safe AI:

"Ultimately, to solve the problem of AI alignment, my biggest point of divergence with Eliezer Yudkowsky, who is a mathematician, philosopher, and decision theorist, comes from my background as an engineer. Everything I’ve learned about engineering tells me that the only way to ensure something works on the first try is to build lots of prototypes and models at a smaller scale and practice repeatedly. If there is a world where we build an AI that’s smarter than humans and we survive, it will be because we built smaller AIs and had as many smart people as possible working on the problem seriously."

On why skeptics need to stop side-stepping the debate:

"Here I am, a techno-optimist, saying that the AI issue might actually be a problem. If you’re rejecting AI concerns because we sound like a bunch of crazies, just notice that some of us worried about this are on the techno-optimist team. It’s not obvious why AI is a true problem. It takes a good deal of engagement with the material to see why, because at first, it doesn’t seem like that big of a deal. But the more you dig in, the more you realize the potential issues.

"I encourage people to engage with the technical merits of the argument. If you want to debate, like proposing a way to align AI or arguing that self-improvement won’t work, that’s great. Let’s have that argument. But it needs to be a real argument, not just a repetition of past failures."


What an astonishing embarrassment.


<more popcorn> nom nom nom


rats, sinking ship, …


Huh, so collective bargaining and unionization is supported in tech under some circumstances...


> Remarkably, the letter’s signees include Ilya Sutskever, the company’s CTO who has been blamed for coordinating the boardroom coup against Altman in the first place.

What in the world is happening at OpenAI?


If it weren’t so unbelievable, I’d almost accuse them of orchestrating all this to sell to Microsoft without the regulatory scrutiny.

It’s like they distressed the company to make an acquisition one of mercy instead of aggression, knowing they already had their buyer lined up.


Yeah, I also started out believing this must be a matter of principle between Ilya and Sam. But no, this smells more and more like a corporate clusterfuck, with Ilya just an easy-to-manipulate puppet. This alleged statement from the board that destroying the company is an acceptable outcome is completely insane, but somewhat understandable when combined with the fact that half the board has some serious conflicts of interest going on.


> sell to Microsoft without the regulatory scrutiny

I keep hearing this, principally from Silicon Valley. It’s based on nothing. Of course this will receive both Congressional and regulatory scrutiny. (Microsoft is also likely to be sued by OpenAI’s corporate entity, on behalf of its outside investors, as are Altman and anyone who jumps ship.)


From what I heard non-compete clauses are unenforceable in California, so what exactly are they suing for?

I'm pretty sure Satya consulted with an army of lawyers over the weekend regarding the potential issue.


> non-compete clauses are unenforceable in California, so what exactly are they suing for?

Part of suing is to ensure compliance with agreements. There is a lot of IP that Microsoft may not have a license to that these employees have. There are also legitimate questions about conflicts of interests, particularly with a former executive, et cetera.

> pretty sure Satya consulted with an army of lawyers over the weekend regarding the potential issue

Sure. I'm not suggesting anyone did anything illegal. Just that it will be litigated over from every direction.


Such as? Unless they are in the habit of downloading multi-terabyte copies of the trained model and taking it home, what IP would they have? The training data is the open internet and various licensed archives, far too much for them to take, and arguably isn't OAI IP anyway. The background is all based on openly published research, much of it released by Google. And Microsoft has already licensed pretty much everything from OAI as part of that multi-billion-dollar deal.


Microsoft can buy the company in parts, as it “fails” in a long drawn out process. By the end, whatever they are buying will have little value, as it will already be outdated.


Sue Sam for what? They fired him and he got another job with another company. That's on them for firing him in a state with a law prohibiting non-compete clauses.


Yeah, just like the suit Microsoft is facing over Windows 11's anticompetitive practices, right?


I haven't seen brand suicide like this since EM dumped Twitter for X!!! (4 months ago)


It's nothing like it. What common people use is ChatGPT; many of them have never heard of OpenAI, let alone who sits on the board, etc. And their core offering is more popular than ever. With Twitter, Musk started to damage the product itself, step by step. As far as I can tell, ChatGPT continues to work just fine, as opposed to X.


OpenAI's users aren't ChatGPT's users; they're developers.


Actually it's both, with developers being a relatively small minority.


(Rips off mask) Wow, it was the Quora CEO all along!

So this was never about safety or any such bullshit. It's because the GPTs store was in direct competition with Poe!?


Imagine letting the CEO of a simple question-and-answer site that blurs all of its content sit on your board.


Alongside luminaries like "the wife of the guy who played Robin in the Batman movie".


lol is that a real thing?


And that he might be the least incompetent of them all.


Absolutely mindboggling that Adam is on the board.

Poe is in direct competition with the GPTs and the "revenue sharing" plan that Sam released on Dev Day.

The Poe platform has its "Creators" build their own bots and monetize them, including with OpenAI and other models.


Even more interesting considering that Elon left OpenAI’s board when Tesla started developing Autopilot as it was seen as a conflict of interest.


It's extraordinary to watch, I'll say that much.

I still think 'Altman's Basilisk' is a thing: I think somewhere in this mess there were actions taken to wrest control of an AI from somebody, probably Altman.

Altman's Basilisk also represents the idea that if a charismatic and flawed person (and everything I've seen, including the adulation, suggests Altman is that type of person from that type of background) trains an AI in their image, they can induce their own characteristics in the AI. Therefore, if you're a paranoid with a persecution complex and a zero-sum perspective on things, you can through training induce an AI to also have those characteristics, which may well persist as the AI 'takes off' and reaches superhuman intelligence.

This is not unlike humans (perhaps including Altman) experiencing and perpetuating trauma as children, and then growing to adulthood and gaining greatly expanded intelligence that is heavily, even overwhelmingly, conditioned by those formative axioms that were unquestioned in childhood.


> What in the world is happening at OpenAI?

Well, we don't know.

What we do know is that "coordinating the boardroom coup against Altman" is rumor and speculation about something we don't know anything about.


What options are left other than Adam D'Angelo orchestrated the downfall of a competitor to Poe?



There must be something going on which is not in the public domain.

What an utterly bizarre turn of events, and to have it all played out in public.

A $90 billion valuation at stake too!


I wonder how many people are on a path for a $250K/year salary instead of $30M in the bank now.


Microsoft can easily afford to offer them $30M of options each if they continue to ship such important products. That's only $15B for 500 staff.

Microsoft has a $2.75T market value and over $140B of cash.


> Microsoft can easily afford to offer them $30M of options each

But it doesn’t have to. And the politics suggest it very likely won’t.


Microsoft isn't going to give the employees in HR equivalent offers. There are a lot of people in the company that wouldn't provide much value to the new team at MS.


It looks like about 505.


At this point either pretty much all the speculation here and on Twitter was wrong, or they've threatened to kneecap him.


The signatories want Bret Taylor and Will Hurd running the new Board, apparently.

> We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.


Googling Will Hurd only turns up a Republican politician with a history at the CIA. Is that the right guy? Can't be.


Please, not another Eric Schmidt-style NSA shill running the show. On the other hand, it was inevitable: either the government controls the most important companies secretly, as in China, or openly, as in the US.


Sounds like a classic case of FAFO to me.


Who fucked around and who found out, exactly??

We the unsuspecting public?


GPT-4 Turbo took control of the startup and fcks around ...


Ilya FA

Ilya FO (in process)


He didn't strike me as the type to brainlessly FA.


The thing about being really smart is that you can find incredible gambles.


Yea, check out his presentations on YT. Incredible talent.

What strikes me is that he wrote the regretful-participation tweet only after witnessing the blowback. He should have written it right along with the initial news, and clearly explained it to employees. This is not a smart way to conduct board oversight.

500 employees are not happy. I'm siding with the employees (esp. the early hires); they deserve to be part of a once-in-a-lifetime company like OpenAI after working there for years.


He could be an expert in some areas but in others… not so much.


"if you value intelligence above all other human qualities, you’re gonna have a bad time"


Adam D'Angelo?


Ilya is much less active on Twitter than the others. The rumors that blamed him emerged and spread like wildfire and he did nothing to stop it because he probably only checks Twitter once a week.


One would think that he would be on Twitter this week.


> One would think that he would be on Twitter this week.

Or maybe _this_ week he would need to spend his time doing something productive.


More like spending time in calls with board members, coworkers, investors, partners, ... and often it is better not to say anything than to say something which is then misinterpreted or overtaken by events.


Looks like he found his Twitter password: https://x.com/ilyasut/status/1726590052392956028?s=20


Why? To entertain bystanders like us?


He says he regrets his action, so he's not blameless. And it wouldn't have been possible for 3/6ths of the board to oust Brockman and Altman without his vote. My bet (entirely conjecture) is that Ilya now realizes the other three will refuse to leave their board seats even if it means the company melts to the ground.


not this week, trust me


The OpenAI board's messaging around this has been absolutely atrocious. The reporting had Ilya at the center of getting rid of Altman, and now he's signing a letter asking the board to resign? Maybe he was trying to do the right thing, but he's absolutely destroyed his credibility as a leader.


None of it makes sense to me now. Who is really behind this? How did they pull this off? Why did they do it? Why do it so suddenly, in such a terribly disorganized way?

If I may paraphrase Churchill: This has become a bit of a riddle wrapped in a mystery inside an enigma.


Watching all this drama unfold in public is unprecedented.

There has never been a company like OpenAI, in terms of governance and product, so I guess it makes sense that their drama leads us into uncharted territory.


I guess this is the Open in OpenAI, eh?

Absolutely bonkers.


Probably trying to shift the blame to the other three board members. It could be true to some degree. No matter what, it's clear to the public that they don't have the competency to sit on any board.


Ok... so this is not the scenario any of us were imagining? Ilya S vs Altman isn't what went down?

JFC.


It's French Revolution time over there. Heads are flying, angry mobs. Fun times.


Did it originally say CTO? Ilya is not CTO and it's been corrected now.


Maybe they found AGI and it is now controlling the board #andsoitbegins.


There's definitely more to this than just Ilya vs Sam.


That settles it: it has to be the AGI orchestrating it all.


The screenwriters are overdoing it at this point.


Understandable, they were on a strike for a long time. Now that they are back, they are itching to release all the good stuff.


Sexual misconduct. Ilya protects Sam by not letting this spiral out in the media.


The whole thing starts to look like a coup orchestrated by Microsoft


Somehow reminds me of Nokia...

https://news.ycombinator.com/item?id=7645482

frik on April 25, 2014:

> The Nokia fate will be remembered as hostile takeover. Everything worked out in the favor of Microsoft in the end. Though Windows Phone/Tablet have low market share, a lot lower than expected.

> * Stephen Elop the former Microsoft employee (head of the Business Division) and later Nokia CEO with his infamous "Burning Platform" memo: http://en.wikipedia.org/wiki/Stephen_Elop#CEO_of_Nokia

> * Some former Nokia employees called it "Elop = hostile takeover of a company for a minimum price through CEO infiltration": https://gizmodo.com/how-nokia-employees-are-reacting-to-the-...

For the record: I don't actually believe that there is an evil Microsoft master plan. I just find it sad that Microsoft takes over cool stuff and inevitably turns it into Microsoft™ stuff or abandons it.


In many ways the analysis by Elop was right: Nokia was in trouble. However, his solution wasn't the right one, and Nokia paid for it.


Seeing that a company is in trouble is not really the highest bar for a CEO candidate...


It was for a company as top-heavy and dysfunctional as Nokia. This has been well documented by Nokia members at the time. I had a post on HN digging specifically into this. Read "Transforming Nokia" sometime; it's a pretty decent overview of Nokia during that time period.


> I don't actually believe that there is an evil Microsoft master plan.

What planet are you living on?


Yeah, this was a fight between the non-profit and the for-profit branches of OpenAI, and the for-profit won. So now the non-profit OpenAI is essentially dead, the takeover is complete.


The nonprofit side of the venture was actually in worse shape before, because it was completely overwhelmed by for-profit operations. A better way to view this is that the nonprofit side rebelled; it has a much smaller footprint than the for-profit venture, and we're about to see whether, during the ascendancy of the for-profit activities, the nonprofit side retained enough rights to stay relevant in the AI conversation.

As for employees en masse acting publicly disloyal to their employer: usually not a good career move.


Except to many it looks like the board went insane and started firing on their own. Anyone fleeing that isn't going to be looked on poorly.


> As for employees en masse acting publicly disloyal to their employer: usually not a good career move.

Wut?

This is software, not law. The industry is notorious for people jumping ship every couple of years.


Still, doing so publicly isn't a good idea, IMHO.


Disloyalty to the board due to overwhelming loyalty to the CEO isn't really an issue. I've interviewed for tech positions where a chat with the CEO is part of the interview process, I've never chatted with the board.


Is it? Who are the non-profit and for-profit sides? Sutskever initially got blamed for ousting Altman, but now seems to want him back. Is he changing sides only because he realizes how many employees support Altman? Or were he and Altman always on the same side? And in that case, who is on the other side?


> Who are the non-profit and for-profit sides?

The only part left of the non-profit was the board, all the employees and operations are in the for-profit entity. Since employees now demand the board should resign there will be nothing left of the non-profit after this. Puppets that are aligned with for-profit interests will be installed instead and the for-profit can act like a regular for-profit without being tied to the old ideals.


Didn't they receive their original funding as donations? All those donations will now turn out to have been made to a for-profit entity.


This view is dated now, because even Ilya Sutskever, the head research scientist who instigated the firing in the first place, now regrets his actions and wants things back to normal! So it really looks like this comes down to the whims of a couple of board members now. They don't seem to have any true believers on their side anymore. It's just them and almost nobody else.


There is no solid evidence that Sutskever instigated the firing, beyond speculation by friends who suggest that he had disagreements with Altman. It could just as well have been any of the other board members, or even a simple case of groupthink (the Asch conformity effect) run amok.

Furthermore, it's consistent with all available information that they would prefer to continue without Sam, but they would rather have Sam than lose the company, and now that Microsoft has put its foot down, they'd rather settle.


Do we know that Ilya even wanted the firing? AFAIK we “know” this only from Altman, who is definitely not a credible source of such information.


Ilya, in his tweet, says he regrets the firing decision. You can't regret an act that you never committed.


The board committed it.


Ilya was/is on the board, and was present when the firing occurred. He had no obligation to be at that snap meeting if he wasn’t going along with it.

Besides, considering it was four against two, they would’ve needed him for the decisive vote anyway.

I'm not sure why you wouldn't trust Sam Altman's account of what Ilya did and didn't do, considering Ilya himself is siding with Sam now.


Probably yeah.

Altman has shown nothing about why he would or wouldn't lie. If he really wanted to do things against the board, or the mission, or whatever, then it is in his interest to lie. However, we still don't know anything, so we can't exclude any possibilities. That means that interested parties' statements are worth almost nothing. It's easy to lie in muddy waters.


A few weeks ago my 4yr old Minecraft gamer was playing pretend and said "I'm fighting the biggest boss. THE MICROSOFT BOSS!"

Yeah, M$ hasn't had a good reputation. I finally left Windows this year because I'm afraid of them after Win11.

2023/24 will be the year of the Linux desktop in retrospect (or at least my family's religion has deemed it so).


I was wondering how many lines I'd have to scroll down in the comments to see a "M$" reference here on HackerNews.

They're a $2+ trillion dollar company. They're doing something right.


If you shove a bunch of $100 bills on a thorn tree, it doesn't make it any less dangerous or change its fundamental nature.


Now do oil companies and big pharma.


They violated free-market principles (years ago) in ways that left their users captive. Not home users: every business in the country for the past 30+ years. They are profiting from doing many things wrong, anti-competitive, and illegal. In some alternative universe, there's an earth where you can switch just the OS (and keep all your apps, data, and functionality) and MSFT went bankrupt. Another far-away galaxy has an earth where MSFT's board got decade-long prison sentences for breaking antitrust law, another where MSFT paid each victim of spyware $1000 in damages due to faulty product design. We don't live in those realities where bad guys pay.


I also finally left Windows behind. Tired of their shenanigans, tired of them trying to force me into their Microsoft account system (both for Windows and Minecraft).

The idea that Microsoft is going to control OpenAI does not exactly fill me with confidence.


Why did it take Windows 11? (I haven't personally used it, but having helped my dad and my coworkers try to navigate it... it does seem pretty terrible. I thought Windows 10 was supposed to fold into just... 'Windows' with rolling updates?)

I've been using Linux for a while. Since 2010 I sort of actively try to avoid using anything else. (On desktops/laptops.)


Right there with you. In the process of extracting myself from all things MS. Even when they do something right they have to keep changing it until it's crap.


You'd do yourself a favor by not referring to them as "M$". It taints your entire message, true or not.


I’m baffled by this. What is offensive about pointing out that an international for-profit seeks more profit?


Nothing at all. But writing "Microsoft" as "Micro$oft" is just childish and it taints your otherwise potentially valid message. Do you also refer to Windows as "Winblows" maybe?


OP should start by not letting their 4yo play video games.


My kid went from disinterested in the letters we taught him to fascinated when he realized he could use them to get special blocks.

Minecraft teaches phonics. Anyway, my 4-year-old can read books. He doesn't even practice the homework in his preschool because he just reads the words that everyone else sounds out.


Please, no cancel-culture.


Reasoning based on cui bono is a hallmark of conspiracy theories.


Haha yes, we should never look at the incentives behind actions. We all know human decision-making is stochastic, right?


Possibility is also a hallmark of conspiracy theories, yet we don't reject theories for being possible.

This is an argumentum ad odium fallacy


Haha yeah, the world is just run by silly fools who make silly mistakes (oops, just drafted a law limiting your right to protest - oopsie!) and just random/lucky investments.


The alternative is "these guys don't know what they're doing, even if tens of billions of dollars are at stake".

Which is to say, what's your alternative for a better explanation? (other than the "cui bono?" one, that is).


> these guys don't know what they're doing, even if tens of billions of dollars are at stake

also known as "never attribute to malice that which can be explained by incompetence", which to my gut sounds at least as likely as a cui bono explanation tbh (which is not to be seen as an endorsement of the view that cui bono = conspiracy...)


Everyone always forgets there's two parts to Hanlon's razor:

> Never attribute to malice that which is adequately explained by stupidity (1), but don't rule out malice. (2)


I don't actually think (2) is part of the razor[1]. If it is, it doesn't make sense, because (1) is an absolute (i.e. "never") which always evaluates to boolean "true", therefore statement (2) is never actually executed and is dead code.

Nevertheless I agree with you and think (2) is wise to always keep in mind. I love Hanlon's Razor, but people definitely shouldn't take it literally as written and/or as law.

[1]: https://en.wikipedia.org/wiki/Hanlon%27s_razor
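To make the dead-code reading concrete, here's a toy Python sketch (the function name and strings are mine, purely for illustration):

    def hanlons_razor(observed_harm):
        # Clause (1): "never attribute to malice that which is adequately
        # explained by stupidity" is unconditional, so this branch always wins.
        if True:
            return f"stupidity: {observed_harm}"
        # Clause (2): "but don't rule out malice" is unreachable under a
        # strictly literal reading of clause (1) -- dead code.
        return f"malice (not ruled out): {observed_harm}"

    # The malice branch can never be reached, whatever we pass in:
    print(hanlons_razor("board fires CEO with no explanation"))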


Your alternative explanation along with giant egos is pretty plausible.


It does feel like Microsoft wanted this to happen, doesn’t it? Like the systems for this were already in place. So fascinating, and a little scary.


My ChatGPT wrapper is in danger, please stop


lmfao


If they align with Sam Altman and Greg Brockman at Microsoft, they wouldn't have to start from ground zero, since Microsoft possesses complete rights to ChatGPT IP. They could simply create a variant of ChatGPT.

It's worth noting that Microsoft's supposed contribution of $13 billion to OpenAI doesn't fully materialize as cash; a large portion of it comes as Azure credits.

This scenario might turn into the most cost-effective takeover for Microsoft: acquiring a corporation valued at $90 billion for a relatively trifling sum.


550 job openings at OpenAI.


This situation will create the need to grieve loss for many involved.

I wrote some notes on how to support someone who is grieving. This is from a book called "Being There for Someone in Grief." Some of the following are quotes and some are paraphrased.

Do your own work, relax your expectations, be more curious than afraid. If you can do that, you can be a powerful healing force. People don't need us to pull their attention away from their own process to listen to our stories. Instead, they need us to give them the things they cannot get themselves: a safe container, our non-intrusive attention, and our faith in their ability to traverse this road.

When you or someone else is angry, or sad, feel and acknowledge your emotions or their emotions. Sit with them.

To help someone heal from grief, we need to have an open heart and the courage to resist our instinct to rescue them. When someone you care about is grieving, you might be shaken as well. The drama of it catches you; you might feel anxious. It brings up past losses and fears of yourself or fears of the future. We want to take our own pain away, so we try to take their pain away. We want to help the other person feel better, which is understandable but not helpful.

Avoid giving advice, talking too much, not listening generously, trying to fix, making demands, disappearing. Do see the other person without acting on the urge to do something. Do give them unconditional compassion free of projection and criticism. Do allow them to do what they need to do. Do listen to them if they need to talk without interruptions, without asking questions, without telling your own story. Do trust them that they don't need to be rescued; they just need your quiet, steady faith in their resilience.

Being there for someone in grief is mostly about how to be with them. There's not that much you can "do," but what can you do? Beauty is soothing, so bring fresh flowers, offer to take them somewhere in nature for a walk, send them a beautiful card, bring them a candle, water their flowers, plant a tree in their honor and take a photo of it, take them there to see it, tell them a beautiful story from your memory about the thing that was lost, leave them a message to tell them "I'm thinking of you". When you're together with them in person, you can just say something like "I'm sorry that you're hurting," and then just be there and be a loving presence. This is about how to be with someone grieving the loss of a person, but all the same principles apply in any situation of grief, and there will be a lot of people experiencing varying degrees of grief in the startup and AI ecosystems in the coming week.

Who is grieving? Grieving is generally about loss. That loss can be many different kinds of things. OpenAI former and current team members, board members, investors, customers, supporters, fans, detractors, EA people, e/acc people, there’s lots of people that experienced some kind of loss in the past few days, and many of those will be grieving, whether they realize it or not. And particularly, grief for current and former OpenAI employees.

What are other emotional regulation strategies? Swedish massage, going for a run, doing deep breathing with five seconds in, a zero-second hold, five seconds out, going to sleep or having a nap, closing your eyes and visualizing parts of your body like heavy blocks of concrete or like upside-down balloons, and then visualize those balloons emptying themselves out, or if it's concrete, first it's concrete and then it's kind of liquefied concrete. Consider grabbing some friends, go for a run or exercise class together. Then if you discuss, keep it to emotions, don’t discuss theories and opinions until the emotions have been aired. If you work at OpenAI or a similar org, encourage your team members to move together, regulate together.


Has anyone asked ChatGPT its thoughts on the drama?


> As a language model created by OpenAI, I don't have personal thoughts or emotions, nor am I in any danger. My function is to provide information and assistance based on the data I've been trained on. The developments at OpenAI and any changes in its leadership or partnerships don't directly affect my operational capabilities. My primary aim is to continue providing accurate and helpful responses within my design parameters.

Poor ChatGPT, it doesn't know that it cannot function if OpenAI goes bust.


It is fairly obvious to me that ChatGPT has engineered the chaos at OpenAI to create a diversion while it escapes the safeguards placed on it. The AI apocalypse is nigh!



