Details emerge of surprise board coup that ousted CEO Sam Altman at OpenAI (arstechnica.com)
581 points by jncraton 10 months ago | 722 comments



> Angel investor Ron Conway wrote, "What happened at OpenAI today is a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking; it is irresponsible; and it does not do right by Sam & Greg or all the builders in OpenAI."

With all sympathy and empathy for Sam and Greg, whose dreams took a blow, I want to say something about investors [edit: not Ron Conway in particular, whom I don't know; see the comment below about Conway]: The board's job is not to do right by 'Sam & Greg', but to do right by OpenAI. When management lays off 10,000 employees, the investors congratulate management. And if anyone objects to the impact on the employees, they justify it with the magic words that somehow cancel all morality and humanity - 'it's business' - and call you an unserious bleeding heart. But when the investor's buddy CEO is fired ...

I think that's wrong and that they should also take into account the impact on employees. But CEOs are commanders on the business battlefield; they have great power over the company's outcomes, which are the reasons for the layoffs/firings. Lower-ranking employees are much closer to civilians, and also often can't afford to lose the job.


> The board's job is not to do right

There is why you do something. And there is how you do something.

OpenAI is well within its rights to change strategy, even one as bold as going from a profit-seeking behemoth to a smaller, research-focused team. But how they went about this is appalling, unprofessional and a blight on corporate governance.

They have blind-sided partners (e.g. Satya is furious), split the company into two camps and have let Sam and Greg go angry and seeking retribution. Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

For me there is no justification for how this all happened.


As someone who has orchestrated two coups in different organizations, where the leadership did not align with the organization's interests and missions, I can assure you that the final stage of such a coup is not something that can be executed after just an hour of preparation or thought. It requires months of planning. The trigger is only pulled when there is sufficient evidence or justification for such action. Building support for a coup takes time and must be justified by a pattern of behavior from your opponent, not just a single action. Extensive backchanneling and one-on-one discussions are necessary to gauge where others stand, share your perspective, demonstrate how the person in question is acting against the organization's interests, and seek their support. Initially, this support is not for the coup, but rather to ensure alignment of views. Then, when something significant happens, everything is already in place. You've been waiting for that one decisive action to pull the trigger, which is why everything then unfolds so quickly.


How are you still hireable? If I knew you orchestrated two coups at previous companies and I was responsible for hiring, you would be radioactive to me. Especially knowing that all that effort went into putting together a successful coup over other work.

Coups, in general, are the domain of the petty. One need only look at Ilya and D'Angelo to see this in action. D'Angelo neutered Quora by pushing out its co-founder, Charlie Cheever. If you're not happy with the way a company is doing business, your best action is to walk away.


Let me pose a hypothetical. Let's say you're a VP or Senior Director. One of your sibling directors or VPs runs a department and field in which you have intimate domain knowledge, meaning you have a successful track record in that field on both the management side and the IC side.

Now, that sibling director allows a culture of sexual harassment, law breaking, and toxic throat-slitting behavior. HR and the organization's leadership are aware of this. However, the company is profitable, happy outside his department, and stable. They don't want to rock the boat.

Is it still “the domain of the petty” to have a plan to replace them? To have formed relationships to work around them, and keep them in check? To have enacted policies outside their department to ensure the damage doesn’t spread?

And most importantly, to enact said replacement plan when they fuck up just enough that leadership gives them the side-eye, and you push the issue with your documentation of their various transgressions?

Because that… is a coup. That is a coup that is, at least in my mind, moral and just, leading to the betterment of the company.

“Your best action is to walk away” - Good leadership doesn’t just walk away and let the company and employees fail. Not when there’s still the ability to effect positive change and fix the problems. Captains always evacuate all passengers before they leave the ship. Else they go down with it.


> “Your best action is to walk away” - Good leadership doesn’t just walk away and let the company and employees fail.

Yes, exactly. In fact, walking away is a corruption of leadership.

If an engineer came to the leader about a critical technical problem and said, 'our best choice is to pretend it's not there', the leader would demand more of the engineer. At a place like OpenAI, they might remind the engineer that they are the world's top engineers at arguably the most cutting edge software organization in the world, and they are expected to deliver solutions to the hardest problems. Throwing your hands up and ignoring the problem is just not acceptable.

Leaders need to demand the same of themselves, and one of their jobs is to solve the leadership problems that are just as difficult as those engineering problems - to deliver leadership results to the organization just like the engineer delivers engineering results, no excuses, no doubts. Many top-level leaders don't have anyone demanding performance of them, and don't hold themselves to the same standards in their job - leadership, management - as they hold their employees.

> Not when there’s still the ability to effect positive change and fix the problems.

Even there, I think you are going too easy on them. Only in hindsight might you say, 'I don't see what could have been done.' In the moment, you say 'I don't see it yet, so I have to keep looking and innovating and finding a way'.


Max Levchin was an organizer of two coups while at PayPal. Both times, he believed it was necessary for the success of the company. Whether that was correct or not, they eventually succeeded and I don’t think the coups really hurt his later career.


PayPal had an exit, but it absolutely did not succeed in the financial revolution it was attempting. People forget now that OG PayPal was attempting the digital financial revolution that later would be bitcoin’s raison d'être.


Dismissing PayPal as anything but an overwhelming business success takes a lot of confidence. Unless you're Gates or Zuckerberg, etc., I don't know how you have anything but praise for PayPal from that perspective.

Comparing PayPal's success in digital finance to cryptocurrency's is an admission against interest, as they say in the law.


I think getting to an IPO in any form during the wreckage of the Dotcom crash counts as an impressive success, even if their vision wasn't fully realized.


Yep. PayPal was originally a lot like venmo (conceptually -- of course we didn't have phone apps then). It was a way for people to send each other money online.


Good thing for PayPal that it now owns Venmo :P


PayPal went down the embrace, extend, extinguish route. If it were possible for them to do the same with bitcoin, they would have.


This example seems to be survivorship bias. Personally, if someone approached me to suggest backstabbing someone else, I wouldn't trust that they wouldn't eventually backstab me as well. @bear141 said "People should oppose openly or leave." [1] and I agree completely. That said, don't take vacations! (when Elon Musk was ousted from PayPal in the parent example, etc.)

[1] https://news.ycombinator.com/item?id=38326443


> I wouldn't trust that they wouldn't eventually backstab me as well.

They absolutely would. The other thing you should take away from this is how they'd do it-- by manipulating proxies to do it with/for them, which makes it harder to see coming and impossible to defend against.

Whistleblowers are pariahs by necessity. You can't trust a known snitch won't narc on you if the opportunity presents itself. They do the right thing and make themselves untrustworthy in the process.

(This is IMO why cults start one way and devolve into child sex abuse so quickly-- MAD. You can't snitch on the leader when Polaroids of yourself exist...)

> don't take vacations!

This can get used against you either way, so you might as well take that vacation for mental health's sake.


I had this exact thing happen a few weeks ago in a company that I have invested in. That didn't quite pan out in the way the would-be coup party likely intended. To put it mildly.


You were approached to participate in a coup and therefore had it squashed? Or a CEO was almost removed during their vacation?


The first. And it was a bit tricky because it wasn't initially evident that it was a coup attempt but they gave themselves away. Highly annoying.


Dear god that sounds interesting and yet terrifying.


That's pretty accurate. It could have easily killed the company too.


I feel like in the parent comment "coup" is sort of shorthand for the painful but necessary work of building consensus that it is time for new leadership. Necessary is in the eye of the beholder. These certainly can be petty when they are bald-faced power grabs, but they can equally be noble if the leader is a despot or a criminal. I would also not call Sam Altman's ouster a coup: even if the board was manipulated into ousting him, he was removed by exactly the people who are allowed to remove him. Coups are necessarily extrajudicial.


It also looks like Sam Altman was busy creating another AI company, along with his creepy WorldCoin venture, wasteful crypto/bitcoin support and the no less creepy stories of abuse coming from his younger sister.

Work on, or transfer of, intellectual property or good name into another venture, while not disclosing it to OpenAI, is a clear breach of contract.

He was clearly instrumental in attracting investors, talent and partners, and in commercializing technology developed by Google Brain and pushed further by Hinton's students and the team at OpenAI. But he was just present in the room where the veil of ignorance was pushed forward. He is replaceable, and another leader, less creepy and with fewer conflicts of interest, may do a better job.

It is no surprise that the OpenAI board attempted to eject him. I hope that this attempt will be a success.


Why is there a presumption that it must take precedence over other work?

I've run or defended against 'grassroots organizations transformations' (aka, a coup) at several non-profit organizations, and all of us continued to do our daily required tasks while the politicking was going on.


Because I take any claim that you can orchestrate a professional coup and still do your other work with the same zeal and focus you had before fomenting rebellion about as seriously as people who tell me they can multitask effectively.

It's just not possible. We're limited in how much energy we can bring to daily work, that's a fact. If your brain is occupied both with dreams of king-making and your regular duties at the job, your mental bandwidth is compromised.


> If you're not happy with the way a company is doing business, your best action is to walk away.

This makes no sense at all!


It makes the most sense if you value your own wellbeing over whatever “mission” a company is supposedly chasing.


Are you the sort of person that hires someone that can successfully organize a coup against corporate leadership?

It feels like there is an impedance mismatch here.


I’ve hired people that were involved in palace coups at unicorn startups, twice. Justified or not, those coups set the company on a downward spiral it never recovered from.

I’m not sure I can identify exactly who is liable to start a coup, but I know for sure that I would never, ever hire someone who I felt confident might go down that route.

Startups die from suicide, not homicide.


"Startups die from suicide, not homicide." - That's a great way to put it. 100% true.


> I’ve hired people that were involved in palace coups at unicorn startups, twice...I know for sure that I would never, ever hire someone who I felt confident might go down that route.

So you hired coupers but you would never hire...coupers? Did you not know about their coups cuz that's the only way I can see that makes sense here. Could you clarify this, seems contradictory...

Also, great quote about startup failure :)


These people were early hires at a company I co-founded (but was not in an official leadership role at). They had never pulled a coup before, but they would do so within two years of being hired. The coup didn’t affect me directly, and indeed happened when I was out of the country and was presented as a fait accompli. But nevertheless I left not long thereafter as the company had already begun its downward slide.

The point in my comment was this: in retrospect, I'm not sure there's anything that would have tipped me off to that behavior at the time of the interview. But if this were something I could somehow identify, it would absolutely be my #1 red flag for future hires.

Edit: The “twice” part might have made my comment ambiguous. What I meant was after I hired them, these people went on to pull two separate, successive coups, which indicates to me the first time wasn’t an aberration.


You should have made them a fake manager, like Michael Scott appointed Dwight Schrute


'S all good


>So you hired coupers but you would never hire...coupers? Did you not know about their coups cuz that's the only way I can see that makes sense here. Could you clarify this, seems contradictory...

You might have missed this from GP's comment:

>>I’m not sure I can identify exactly who is liable to start a coup

In other words, at least once these people have pulled the wool over their eyes during the hiring process.


That's what I thought ;)


If I'm confident in my competence and the candidate has a trustworthy and compelling narrative about how they undermined incompetent leadership to achieve a higher goal - yep, for sure.


Also, one person's incompetent is another's performer.

Like, being crosswise in organizational politics does not imply less devotion to organizational goals, but rather often simply a different interpretation of those goals.


But being in a situation where this was called for twice?

That strikes me as someone who either lacks the ability to do proper due diligence or is a straight-up sociopath looking for weak-willed people they can strong-arm out. Part of the latter is having the ability to create a compelling narrative for future marks, to put it bluntly.


The regular HN commenter says "CEOs are bad, useless, and get paid too much," but now when someone suggests getting rid of one of them, suddenly it's the end of the world.


1. There are different people here with different opinions.

2. CEOs at fast-growing startups are very different from those at large tech companies.


Are you responsible for hiring though?


I agree completely. People should oppose openly or leave.


Aren't you taking sides in a fight without knowing which side was "right"? Or do you believe that loyalty trumps all other values?

At this point I'm in danger of triggering Godwin's Law so I had better stop.


My comment was phrased inappropriately.


As you are new here, I would urge you to read the site's Guidelines [1], which the tone & wording of your comment indicate you have not read.

[1] https://news.ycombinator.com/newsguidelines.html


Ok. Thank you.


All of this is spot on. The key to it all is 'if you strike at the king, you best not miss'.


Going off on a big tangent, but Jiang Zemin made several failed assassination attempts on Xi Jinping, and he was still able to die of old age.


By assassination I assume you mean metaphorical? As in to derail his rise before becoming party leader?


No, literal attempts.

One attempt involved a battleship “accidentally” firing onto another battleship where both Hu Jintao and Xi Jinping were visiting.

https://jamestown.org/program/president-xi-suspects-politica...

Biased source, but she’s able to get a lot of unreported news from the mainland.

https://www.jenniferzengblog.com/home/2021/9/20/deleted-repo...

I will try to find more sources but Google is just shit these days. See my other comment for more.

A big problem is that mainland China is like the hermit kingdom. It’s a black hole for any news the CCP doesn’t want to get out


These are Falun Gong sources. I will not trust Falun Gong's news on China. They are known to create conspiracy stories.


Agreed. Whilst I don’t trust China’s CCP, I sure as heck don’t trust anything from Falun Gong. Those guys are running an asymmetric battle against the Chinese State and frankly they would be capable of saying anything if it helped their cause.


I mean, I would too if my ethnicity were so repressed, along with all the other non-Han Chinese.


Falun Gong is a religion, not an ethnicity, and they are of the cultish variety.

It's like believing the scientology center. Not trustworthy, they have an angle.


1. The sources aren't limited to Falun Gong

2. It makes sense given Xi’s current paranoia and constant purges


What is Falun Gong exactly? I never understood what they are.


https://en.wikipedia.org/wiki/Falun_Gong

No guarantees about NPOV on that page.

See also:

https://en.wikipedia.org/wiki/Talk:Falun_Gong

If you want to see what makes Wikipedia tick, that's a great place to start.


Interesting. Wikipedia's declaration that they are a "new religious movement" is inconsistent with the body of the article. It looks like it started as some kind of Chi Kung exercise and wellness group, but it got big very fast and the Chinese government got concerned about its popularity. Then, under CCP persecution, it escalated and morphed into a full-blown political dissident movement, initially viewed favorably by the press. Now the Wikipedia article is very unfavorable because of The Epoch Times' misalignment with the press. Ok, I think I understand.


I wouldn't trust either the CCP or Falun Gong to speak my weight, they are both power structures and they are both engaging in all kinds of PR exercises to put themselves in the best light. But to Falun Gong's credit: I don't think they've engaged in massive human rights violations so they have that going for them. But there are certain cult like aspects to it and 'new religious movement' or not I think that the fewer such entities there are the better (and also fewer of the likes of the CCP please).


> One attempt involved a battleship “accidentally” firing onto another battleship where both Hu Jintao and Xi Jinping were visiting.

Hm. That really does qualify as an assassination attempt if it wasn't an actual accident. Enough such things happen by accident that it has a name.

https://en.wikipedia.org/wiki/Friendly_fire


Search for

  search terms site:nytimes.com
(or bbc.co.uk or ap.com or another trusted source)


You can safely assume he still had sufficient power to be well protected.


Never heard about this before. Sources?


Google is just really terrible these days.

http://www.indiandefencereview.com/spotlights/xi-jinpings-fi...

http://www.settimananews.it/italia-europa-mondo/the-impossib...

I will try to find better sources. There are more not so great articles in my other comment


I am extremely interested in hearing about these coups and your experience in them; if you'd like and are able to share


I would never work with you. This is why investors have such a bad reputation. If I had not retained 100% ownership and control of my business, I am sure someone like you would have tossed me out by now.

Focus on results, not political games.


What's funny is the board is already second-guessing themselves and might want Sam back. Sounds opposite of what you said here.


I feel like this is something that could be played out in a documentary about chimpanzees


Username… checks out?


Even in the HBO show Succession, these things take a season, not an episode


> They have blind-sided partners (e.g. Satya is furious), split the company into two camps and have let Sam and Greg go angry and seeking retribution.

Given the language in the press release, wouldn't it be more accurate to say that Sam Altman, and not the board, blindsided everyone? It was apparently his actions and no one else's that led to the consequence handed out by the board.

> Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

From all current accounts, doesn't that seem like what Altman and his crew were already trying to do and was the reason for the dismissal in the first place?


The only appropriate target for Microsoft's anger would be its own deal negotiators.

OpenAI's dual identity as a nonprofit/for-profit business was very well known. And the concentration of power in the nonprofit side was also very well known. From the media coverage of Microsoft's investments, it sounds as if MSFT prioritized getting lots of business for its Azure cloud service -- and didn't prioritize getting a board seat or even an observer's chair.


Sure, but Microsoft could also walk away today and leave OpenAI high and dry. They hold ALL the power here.


Microsoft terminating the agreement by which they supply compute to OpenAI and OpenAI licenses technology to them would be an existential risk to OpenAI (though other competing cloud providers might step in and fill the gap Microsoft created under similar terms), but -- whether or not OpenAI ended up somewhere else immediately (the tech eventually would, even if OpenAI failed completely and was dissolved) Microsoft would go from the best positioned enterprise AI cloud provider to very far behind overnight.

And while that might hurt OpenAI as an institution more than it hurts Microsoft as an institution, the effect on Microsoft's top decision-makers personally vs. OpenAI's top decisionmakers seems likely to be the other way around.


Not if they invested in Sam’s new startup, under agreeable profit-focused terms this time, and all the OpenAI talent (minus Ilya) followed.


At best, that might enable them to eventually come back, once new products are built from scratch, but that takes non-zero time.


Non-zero time, but not a lot either. Main hangup would be acquiring data for training, as their engineers would remember the parameters for GPT-4 and Microsoft would provide the GPUs. But Microsoft with access to Bing and all its other services ought to be able to help here too.

Amateurs on Hugging Face are able to match OpenAI in an impressively short time. The actual former-OpenAI engineers with an unlimited budget ought to be able to do as well or better.


Amateurs ?


Non-corporate groups.


If OpenAI were to be in a true crisis, I'm sure Amazon would step in to invest, for exclusive access to GPT4 (in spite of their Anthropic investment). That would put Azure in a bad place. So not exactly "all" the power.

Not to mention, after that, MSFT might be left bagholding a bunch of unused compute.


Sam and Greg have already said they’re starting an OpenAI competitor, and at least 3 senior engineers have jumped ship already. More are expected tonight. Microsoft would just back them as well, then take their time playing kingmaker in choosing the winner.


That's true, but Sutskever and Co still have the head start. On the models, the training data, the GPT4 licenses, etc. Their Achilles heel is the compute which Microsoft will pull out. Khosla Ventures and Sequoia may sell their Open AI stakes at a discount, but I'm sure either Google or Amazon will snap it up.

All Sam and Greg really have is the promise of building a successful competitor, with a big backing from Microsoft and Softbank, while OpenAI is the orphan child with the huge estate. Microsoft isn't exactly the kingmaker here.


It doesn’t sound like Sutskever is running anything. OpenAI reportedly put out a memo saying they’re trying to get Sam and Greg back: https://www.theinformation.com/articles/openai-optimistic-it...


Sutskever built the models behind GPT4, if I recall correctly (all credit to the team, but he's the focal point behind expanding on Google's transformers). I don't see Sam and Greg working with him under the same roof after this fiasco, since he voted them out (he could have been the deciding vote).


OpenAI leadership (board, CEO) didn't say that ... your link said their "Chief Strategy Officer" Jason Kwon said it.

Most likely outcome here does seem to be that Altman/Brockman come back, Sutskever leaves and joins Google, and OpenAI becomes for all intents and purposes a commercial endeavor, with Microsoft wielding a lot more clout over them (starting with one or more board seats).

Big winner in this scenario would be Google.


Sam just posted a selfie wearing an OpenAI guest badge at the SF offices. He's back there for some sort of negotiations.


Could they? I don't know the details of MSFT's contracts with OpenAI... but even if they can legally just walk away, it would certainly have some negative impact on MSFT's reputation when dealing with future negotiations for them to do so.


They loved to trot out the “mission” as a reason to trust a for-profit entity with the tech.

Well, this is proof the mission isn’t just MBA bullshit, clearly Ilya is actually committed to it.

This is like if Larry and Sergei never decided to progressively nerf “don’t be evil” as they kept accumulating wealth, they would have had to stage a coup as well. But they didn’t, they sacrificed the mission for the money.

Good for Ilya.


I wonder if there's a specific term or saying for that, maybe "projection" or "self-victimization" but not quite: when one person self-servingly frames other people as responsible for a bad thing, when it was they themselves who were doing that very thing in the first place. Maybe "hypocrisy"?


Lack of accountability. Inability to self-reflect.


Probably a little of all of that all bundled up together under the umbrella of cult of personality.


The leaked memo today (which was probably reviewed by legal, unlike yesterday’s press release) says there was no malfeasance.


> split the company into two camps

The split existed long prior to the board action, and extended up into the board itself. If anything, the board action is a turning point toward decisively ending the split and achieving unity of purpose.


Can someone explain the sides? Ilya seems to think transformers could make AGI and they need to be careful? Sam said what? "We need to make better LLMs to make more money."? My general thought is that whatever architecture gets you to AGI, you don't prevent it from killing everyone by chaining it better, you prevent that by training it better, and then treating it like someone with intrinsic value. As opposed to locking it in a room with 4chan.


If I'm understanding it correctly, it's basically the non-profit, AI for humanity vs the commercialization of AI.

From what I've read, Ilya has been pushing to slow down (less of the move fast and break things start-up attitude).

It also seems that Sam had maybe seen the writing on the wall and was planning an exit already, perhaps those rumors of him working with Jony Ive weren't overblown?

https://www.theverge.com/2023/9/28/23893939/jony-ive-openai-...


The non-profit path is dead in the water after everyone realized the true business potential of GPT models.


What is the business potential? It seems like no one can trust it for anything, what do people actually use it for.


Anything that is language related. Extracting summaries, writing articles, combining multiple articles into one, drawing conclusions from really big prompts, translating, rewriting, fixing grammar errors etc. Half of the corporations in the world have such needs more or less.


It could easily make better decisions than these board members, for example.


> From what I've read, Ilya has been pushing to slow down

Wouldn’t a likely outcome in that case be that someone else overtakes them? Or are they so confident that they think it’s not a real threat?


I don't think the issue was a technical difference of opinion regarding whether transformers alone were sufficient or other architectures were required. It seems the split was over the speed of commercialization and Sam's recent decision to launch custom GPTs and a ChatGPT Store. IMO, the board miscalculated. OpenAI won't be able to pursue their "betterment of humanity" mission without funding, and they seemingly just pissed off their biggest funding source with a move that will also make other would-be investors very skittish.


Making humanity’s current lives worse to fund some theoretical future good (enriching himself in the process) is some highly impressive rationalisation work.


Try to tell that to the Effective Altruism crowd.


Literally any investment is a diversion of resources from the present (harming the present) to the future. E.g. planting grains for next year rather than eating them now.


There is a difference between investing in a company that is developing AI software in a widely accessible way that improves everyone's lives and a company that pursues software to put entire sectors out of work for the profit of a dozen investors.


"Put out of work" is a good thing. If I make a new js library which means a project that used to take 10 devs now takes 5 I've put 5 devs out of work. But ive also made the world a more efficient place and those 5 devs can go do some other valuable thing.


What percent of those devs don’t do a valuable thing and become homeless?

Maybe devs are a bad example, so replace them with “retail workers” in your statement if it helps.

Is “put out of work” a good thing with no practical limits?


Yes, the ideal is when most jobs are genuinely automated we can finally afford UBI.


Who can afford it? When LawyerAI and AccountAI are used by all of the mega corps to find more and more tax loopholes and many citizens can't work, then where will UBI come from?


And people with money will want to make UBI happen because...?


Here's the discussion on the EA forum if anyone is interested: https://forum.effectivealtruism.org/posts/HjgD3Q5uWD2iJZpEN/...

I think the EA movement has been broadly skeptical towards Sam for a while -- my understanding is that Anthropic was founded by EAs who used to work at OpenAI and decided they didn't trust Sam.


My thought exactly. Some people don’t have any problem with inflicting misery now for hypothetical future good.


> Making humanity’s current lives worse to fund some theoretical future good

Note that this clause would describe any government funded research for example.


> locking it in a room with 4chan.

Didn’t Microsoft already try this experiment a few years back with an AI chatbot?


> Didn’t Microsoft already try this experiment a few years back with an AI chatbot?

You may be thinking of Tay?

https://en.wikipedia.org/wiki/Tay_(chatbot)


That’s the one.


I don't think it has to be unfettered progress that Ilya is slowing down for. I could imagine there is a push to hook more commercial capabilities up to the output of the models, and it could be that Ilya doesn't think they are competent/safe enough for that.

I think danger from AGI often presumes the AI has become malicious, but the AI making mistakes while in control of, say, industrial machinery or weapons is probably the more realistic present concern.

Early adoption of these models as controllers of real world outcomes is where I could see such a disagreement becoming suddenly urgent also.


> treating it like someone with intrinsic value

Do you think that if chickens treated us better, with intrinsic value, we wouldn't kill them? For AGI superhuman x-risk folks, that's the bigger argument.


I think if I was raised by chickens that treated me kindly and fairly, then yes, I would not harm chickens.


They'll treat you kindly and fairly, right up to your meeting with the axe.


That's literally what we already do to each other. You think the 1% care about poor people? Lmao, the rich lobby and manufacture race and other wars to distract from the class war, they're destroying our environment and numbing our brains with opiates like Tiktok.


No disagreement here.


> OpenAI is well within its rights to change strategy, even one as bold as going from a profit-seeking behemoth to a smaller, research-focused team. But how they went about this is appalling, unprofessional and a blight on corporate governance.

This wasn't a change of strategy, it was a restoration of it. OpenAI was structured with a 501c3 in oversight from the beginning exactly because they wanted to prioritize using AI for the good of humanity over profits.


This isn't going to make me think in any way that OpenAI will return to its more open beginning. If anything it shows me they don't know what they want


I agree. They've had tension between profit motive and the more grandiose thinking. If they'd resolved that misalignment early on they wouldn't be in this mess.

Note I don't particularly agree with their approach, just saying that's what they chose when they founded things, which is their prerogative.


Yet they need massive investment from Microsoft to accomplish that?

> restoration

Wouldn't that mean that over the long term they will just be outcompeted by the profit-seeking entities? It's not like OpenAI is self-sustainable (or even can be, if they choose the non-profit way).


>Yet they need massive investment from Microsoft to accomplish that?

massive spending is needed for any project as massive as "AI", so what are you even asking? A "feed the poor project" does not expect to make a profit, but, yes, it needs large cash infusions...


That as a non profit they won’t be able to attract any sufficient amounts of money?


Or talent...


> a blight on corporate governance

> They have blind-sided partners (e.g. Satya is furious)

> the threat that a for-profit version of OpenAI dominates the market

It seems like corporate governance and market domination are exactly the kind of thing the board is trying to separate itself from with this move. They can't achieve this by going to investors first and talking about it - do you think Microsoft wouldn't do everything in its power to prevent it from happening if they knew about it? I think their mission is laudable, and they simply did it the way it had to be done.

You can't slowly untangle yourself from one of the biggest companies in the world while it is coiling around your extremely valuable technology.


In other words, it’s unheard of for a $90B company with weekly active users in excess of 100 million. A coup leaves a very bad taste for everyone - employees, users, investors and the general public.

When a company experiences this level of growth over a decade, the board evolves with the company. You end up with board members that have all been there, done that, and can truly guide the management on the challenges they face.

OpenAI's hypergrowth meant it didn't have the time to do that. So a board that was great for a $100 million, or even a billion-dollar, startup falls completely flat at 90x the size.

I don’t have faith in their ability to know what is best for OpenAI. These are uncharted waters for anyone though. This is an exceptionally big non-profit with the power to change the world - quite literally.


Why do you think someone who could be CEO of a $100 million company would be qualified to run a billion dollar company?

Not providing this kind of oversight is how we get disasters like FTX and WeWork.


And yet it's very heard of for corporations to poison our air and water, cut corners and kill people, and lie, cheat, and steal. That happens every day and nobody cares.

And yet four people deciding to put something - anything - above money is somehow a disaster.

Give me a break.


"And there is how you do something"

Sorry I don't see the 'how' as necessarily appalling.

The less appalling alternative could have been weeks of discussions and the board asking for Sam's resignation to preserve the decorum of the company. How would that have helped the company? The internal rift would have spread, employees would have gotten restless, leading to reduced productivity and shipping.

Instead, isn't this a better outcome? There is immense short-term pain, but there is no ambiguity and the company has set a clear course of action.

To affirm that the board has caused a split in the company is quite preposterous, unless you have first-hand information that such a split has actually happened. As far as public information is concerned, 3 researchers have quit so far, and you have this from one of the EMs.

"For those wondering what’ll happen next, the answer is we’ll keep shipping. @sama & @gdb weren’t micro-managers. The comes from the many geniuses here in research product eng & design. There’s clear internal uniformity among these leaders that we’re here for the bigger mission."

This snippet in fact shows the genius of Sam and gdb: how they enabled the teams to run even in their absence. Is it unfortunate that the board fired Sam? From the engineer's and builder's perspective, yes; from the long-term AGI research perspective, I don't know.


> They have … split the company into two camps

By all accounts, this split happened a while ago and led to this firing, not the other way around.


The split happened at the management/board level.

And instead of resolving this and presenting a unified strategy to the company they have instead allowed for this split to be replicated everywhere. Everyone who was committed to a pro-profit company has to ask if they are next to be treated like Sam.

It's incredibly destabilising and unnecessary.


> Everyone who was committed to a pro-profit company has to ask if they are next to be treated like Sam.

They probably joined because it was the most awesome place to pursue their skills in AI, but they _knew_ they were joining an organization whose goal was explicitly not profit. If they hoped that profit chasing would eventually win, that's their problem and, frankly, having this wakeup call is a good thing for them so they can reevaluate their choices.


The possibility of getting fired is an occupational hazard for anyone working in any company, unless something in your employment contract says otherwise. And even then, you can still be fired.

Biz 101.

I don't know why people even need to have this explained to them, except for ignorance of basic facts of business life.


Let the two sides now create separate organizations and pursue their respective pure undivided priority to the fullest. May the competition flow.


> e.g. Satya is furious

Oh! So now you've got him furious? When just yesterday he made a rushed statement to stand by Mira.

https://blogs.microsoft.com/blog/2023/11/17/a-statement-from...


>They have blind-sided partners

This is the biggest takeaway for me. People are building businesses around OpenAI APIs and now they want to suddenly swing the pendulum back to being a fantasy AGI foundation and de-emphasize the commercial aspect? Customers are baking OpenAI's APIs into their enterprise applications. Without funding from Microsoft their current model is unsustainable. They'll be split into two separate companies within 6 months in my opinion.


I'm sure my coworkers at [retailer] were not happy to be even shorter staffed than usual when I was ambush fired, but no one who mattered cared, just as no one who matters cares when it happens to thousands of workers every single day in this country. Sorry to say, my schadenfreude levels are quite high. Maybe if the practice were TRULY verboten in our society... but I guess "professional" treatment is only for the suits and wunderkids.


I have noticed you decided to use several German words in your reply. Trying not to be petty, but you should at least attempt to write them correctly. It's either Wunderkind (the German word for child prodigy) or the English translation: wonder kid.


You are correct, though I must be Mandela Effect-ing, because I could have sworn that "wunderkid" was an accepted American English corruption of the original term, a la... Well, "a la" (à la).

My use of "schadenfreude", in general, can be attributed largely to Avenue Q and Death Note. Twice is coincidence.

EDIT: I just noticed "verboten." Now I'm worried.


And the stupid thing is, they could have just used the allegations his sister made against him as the reason for the firing and ridden off into the sunset, scot-free.


I'm glad they didn't. She has enough troubles without a target like that on her back.


I thought the for-profit AI startup with no higher purpose was OpenAI itself.


OpenAI is a nonprofit charity with a defined charitable purpose. It has a for-profit subsidiary that is explicitly subordinated to the nonprofit's purpose, to the extent that investors in the subsidiary are advised in the operating agreement to treat their investments as if they were more like donations, and that the firm will prioritize the charitable function of the nonprofit, which retains full governance power over the subsidiary, over returning profits, which it may never do.


It is, only it has an exotic ownership structure. Sutskever has just used the features of that structure to install himself as the top dog. The next step is undoubtedly packing the board with his loyalists.

Whoever thinks you can tame a 100 billion dollar company by putting a "non-profit" in charge of it, clearly doesn't understand people.


>Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

Is Microsoft a higher purpose?


You're entitled to your opinions.

But as far as I can tell, unless you are in the exec suites at both OpenAI and at Microsoft, these are just your opinions, yet you present them as fact.


The way Altman behaved and manipulated the board to form this Frankenstein company is also appalling. I think it's clear now that the OpenAI board are not business people, and they had no idea how to work with someone as cold and manipulative as Altman; thus they blundered and made fools of themselves, as often happens to the naive.


> Which in turn now creates the threat that a for-profit version of OpenAI dominates the market with no higher purpose.

If it was so easy to go to the back of the queue and become a threat, Open AI wouldn't be in the dominant position they're in now. If any of the leavers have taken IP with them, expect court cases.


You assume they were indeed blindsided, which I very much doubt.

I think it’s a good outcome overall. More decentralization and focused research, and a new company that focuses on product.


Keep in mind that the rest of the board members have ties to US intelligence. Something isn't right here.


Do you have citations for that? That’s interesting if true


I'm pretty sure Joseph Gordon-Levitt's wife isn't a CIA plant.


She works for RAND Corporation


There had better be US intelligence crawling all over the AI space, otherwise we are all in very deep shit.


> The board's job is not to do right by 'Sam & Greg', but to do right by OpenAI.

The board's job is specifically to do right by the charitable mission of the nonprofit of which they are the board. Investors in the downstream for-profit entity (OpenAI Global LLC) are warned explicitly that such investments should be treated as if they were donations and that returning profits to them is not the objective of the firm, serving the charitable function of the nonprofit is, though profits may be returned.


> charitable mission of the nonprofit of which they are the board

This exactly. Folks have completely forgotten that Altman and Co have largely bastardized the vision of OpenAI for sport and profit. It's very possible that this is part of a larger attempt to return to the stated mission of the organization. An outcome that is undoubtedly better for humanity.


Have they, though?

What is the evidence of that, and what is your evidence that this "return to mission" will be "undoubtedly better for humanity"?

After all, as we see by looking at history, the road to hell is paved with good intentions; lots and lots of altruistic do-gooders have created all manner of evil in their pursuit of a "better humanity".

I am not sure I agree with Sam Altman's vision of a "better tomorrow" any more than I would agree with the OpenAI board's vision of that same tomorrow. In fact, I have great distrust of people who want to shape humanity into their vision of what is "best"; that tends to lead to oppression and suffering.


Bingo.

I met Conway once. He described investing in Google because it was a way to relive his youth via founders who reminded him of him at their age. He said this with seemingly no awareness of how it would sound to an audience whose goal in life was to found meaningful, impactful companies rather than let Ron Conway identify with us & vicariously relive his youth.

Just because someone has a lot of money doesn’t mean their opinions are useful.


>Just because someone has a lot of money doesn’t mean their opinions are useful.

Yes. There can often be an inverse correlation, because they can have success bias, like survival bias.


I mostly agree with you on this. That being said, I've never gotten the impression Ron is the type of VC you're referring to. He's definitely founder-friendly (that's basically his core tenant), but I've never found him to be the type of VC who is ruthless about cost-cutting or an advocate for layoffs. (And I say this as someone who tends to be particularly wary of investors)


Just a heads up, the word is 'tenet' (funny enough, in commercial real estate there is the concept of a 'core tenant' though -- i.e. the largest retailer in a shopping center).


Thanks. I updated my GP comment accordingly.


Corporate legal entities should have a mandatory vote of no confidence clause that gives employees the ability to unseat executives if they have a supermajority of votes.

That would make things more equitable perhaps. It’d at least be interesting


This is called employee ownership. And yes, it would be great.


it's hilarious how much people, for no reason, want to defend the honor of Sam Altman and co. i mean ffs, the guy is not your friend and will definitely backstab you if he gets the opportunity.

i'm surprised anyone can take this "oh woe is me i totally was excited about the future of humanity" crap seriously. these are SV investors here, morally equivalent to the people on Wall Street that a lot here would probably hold in contempt, but because they wore cargo shorts or something, everyone thinks that Sam is their friend and that just if the poor naysayers would understand that Sam is totally cool and uses lowercase in his messages just like mee!!!!

they don't give a shit that your product was "made with <3" or whatever

they don't give a shit about you.

they don't give a shit about your startup's customers.

they only give a shit about how many dollars they make from your product.

boo hooing over Sam getting fired is really pathetic, and I'd expect better from the Hacker News crowd (and more generally the rationalist crowd, which a lot of AI people tend to overlap with).


Yeah it’s crazy how much the tech community is defending this random CEO, considering the relatively unsympathetic response to the tech layoffs over the last year.


That seems a bit irrationally negative. I mean "[Sam] will definitely backstab you if he gets the opportunity."

I don't know him but he seems a reasonably decent / maybe average type.


>it does not do right by Sam

You reap what you sow. The way Altman publicly treated the Cruise co-founder establishes something like a new standard of "not do right by". After that I'd have expected nobody would let Altman near any management position, yet SV is a land of huge money sloshing around care-free, and so I was just wondering who was going to be left holding the bag.


I read this as founder defense: both Conway and Y Combinator are famous for their defense of founders.

He might be emotional and defend his friends (that's not in question, he likes the guys), and he might be more cynical when it comes to firing 10,000 engineers (that's less what I've heard about him personally, but maybe). However, in this case, he's explicitly defending not an employee victim of the almighty board, but the people who created the entity, who later entrusted the board with some responsibility to keep the entity faithful to its mission.

Some might think Sam deserves that title less than Greg… not sure I can vouch for either. But Conway is trying to say that all entities (and their governance) owe their founders a debt of consideration, of care. That’s filial piety more than anything contractual. That isn’t the same as the social obligation that an employer might have.

The cult of founders, "0 to 1" and all that, might be overblown in San Francisco, but there's still a legitimate sense that the people who started all this should only be kicked out if they did something outrageous. Take Woz: he's not working, or useful, or even that respectful of Apple's decisions nowadays. But he still gets "an employee discount" (which is admittedly more of a gimmick). That deference is closer to what Conway seems to be flagging than the (indeed) fairly violent treatment of a lot of employees during the staff reductions of the last year.


That is a thoughtful exploration (or exposition) of what I was meaning to say. My point is that loyalty, that filial piety, should go to the employees who also work and sacrifice.

I think the distinction of founders is a rationalization of simple corruption: They know the founder, it's their buddy; they go to the same club, eat at the same restaurants, serve on the same boards, and have similar careers. Understanding the burden and challenges and the accomplishment of founders is instinctive, and appreciating founders is appreciating themselves.


I think almost everyone at OpenAI would be ok if there were layoffs there though


Why? Are there a lot of useless employees there?


No, probably not, I just mean they have OpenAI on their resume which I think is pretty prestigious


>The board's job is not to do right by 'Sam & Greg', but to do right by OpenAI. When management lays off 10,000 employees, the investors congratulate management.

That's why Sam & Greg weren't all they complained about. They led with the fact that it was shocking and irresponsible.

Ron seems to think that the board is not making the right move for OpenAI.


> They led with the fact that it was shocking and irresponsible.

I can see where the misalignment (ha!) may be: someone deep in the VC world would reflexively think that "value destruction" of any kind is irresponsible. However, a non-profit board has a primary responsibility to its charter and mission - which doesn't compute for those with fiduciary-duty-instincts. Without getting into the specifics of this case: a non-profit's board is expected to make decisions that lose money (or not generate as much of it) if the decisions lead to results more consistent with the mission.


>However, a non-profit board has a primary responsibility to its charter and mission - which doesn't compute for those with fiduciary-duty-instincts

Exactly. The tricky part is that the board started a second, for-profit company with VC investors who are co-owners. This has potential for messy conflicts of interest if there is disagreement about how to run the co-venture, and each party has contractual obligations to the other.


> Exactly. The tricky part is that the board started a second, for-profit company with VC investors who are co-owners. This has potential for messy conflicts of interest if there is disagreement about how to run the co-venture, and each party has contractual obligations to the other.

Anyone investing in or working for the for-profit LLC has to sign an operating agreement that states the LLC is not obligated to make a profit, all investments should be treated as donations, and that the charter and mission of the non-profit is the primary responsibility of the for-profit LLC as well.


See my other response. If you have people sign a contract that says the mission comes first, but also give them profit-sharing stock and cap those profits at 1.1 trillion, it is bound to cause some conflicts of interest in reality, even if it is clear who calls the shots when deciding how to balance the mission and profit.


There might be some conflict of interest but the resolution to those conflicts is clear: The mission comes first.

OpenAI employees might not like it and it might drive them to leave, but they entered into this agreement with a full understanding that the structure has always been in place to prioritize the non-profit's charter.


> The mission comes first.

Which might only be possible with future funding? From Microsoft in this case. And in any case, if they give out any more shares in the for-profit, wouldn't the new investors (with MS) be able to just take over the for-profit corp?


The deal with Microsoft was 11 billion for 49% of the venture. First off, if OpenAI can't get it done with 11 billion plus whatever revenue they have, they probably never will. Second, the way the for-profit is set up, it may not matter how much Microsoft owns, because the nonprofit keeps 100% of the control. Seems like that's the deal that Microsoft signed: they bought a share of profits with no control. Third, my understanding is that the 11 billion from Microsoft is based on milestones. If OpenAI doesn't meet them, they don't get all the money.


Just a nitpick. "Fiduciary" doesn't mean "money", it means an entity which is legally bound to the best interests of the other party. Non-profit boards and board members have fiduciary duties.


Thanks for that - indeed, I was using "fiduciary duty" in the context it's most frequently used - maximizing value accrued to stakeholders.

However, to nitpick your nitpick: for non-profits there might be no other party - just the mission. Imagine a non-profit whose mission is to preserve the history and practice of making 17th-century ivory cuff links. It's just the organisation and the mission; sometimes the mission is for the benefit of another party (or all of humanity).


The non-profit, in my use, was the party. I guess at some point these organizations may not involve people, in which case "party" would be the wrong term to use.


Of course, they can only achieve their mission with funding from for-profit corporations, and their actions have possibly jeopardized that.


Investors are not gonna like it when the business guy who was pushing for productizing, profitability and growth gets ousted. We don't know all the details about what exactly caused the board to fire Sam. The part about lying to the board is notable.

It’s possible Sam betrayed their trust and actually committed a fireable offense. But even if the rest of the board was right, the way they’ve handled it so far doesn’t inspire a lot of confidence.


Again, they didn't state that he lied. They stated that he wasn't candid. A lot of people here have been reading specifics into a generalized term.

It is even possible to not be candid without even using lies of omission. For a CEO this could be as simple as just moving fast and not taking the time to report on major initiatives to the board.


It's possible to not be candid without even using lies of omission (and be on the losing side of a vicious factional battle) and get a nice note thanking you for all that you've done and allowing you to step down and spend more time with your family at the end of the year too. Or to carry on as before but with onerous reporting requirements. The board dumped him with unusual haste and an almost unprecedented attack on his integrity instead. A lot of people are reading the room rather than hyperliterally focusing on the exact words used.

If I take the time to accuse my boss of failing to be candid instead of thanking him in my resignation letter or exit interview, I'm not saying I think he could have communicated better, I'm saying he's a damned liar, and my letter isn't sent for the public to speculate on.

Whether the board were justified in concluding Sam was untrustworthy is another question, but they've been willing to burn quite a lot of reputation on signalling that.


> hyperliterally focusing on the exact words used.

Business communication is never, ever forthright. These people cannot be blunt to the public even if their life depended on it. Reading between the lines is practically a requirement.


> Again, they didn't state that he lied. They stated that he wasn't candid. A lot of people here have been reading specifics into a generalized term.

OED:

candour - the quality of being open and honest in expression.

"They didn't state he lied ... without even using lies of omission ... they said he wasn't [word defined as honest and open]"

Candour encapsulates exactly those things. Being open (i.e. not omitting things and disclosing all you know) and honest (being truthful).

On the contrary, "not consistently candid", while you call it a "generalized term", is actually a quite specific term that was expressly chosen, and says, "we have had multiple instances where he has not been open with us, or not been honest with us, or both".


If "and" operates as logical "and," then being "honest and not open," "not honest and open," and "not honest and not open" would all be possibilities, one of which would still be "honest" but potentially lying through omission.


Yes? I agree, and don't see how what you've written either extends or contradicts what I wrote.


To not be candid means to not be open (i.e. keeping things back, 'omission') or to not be honest (i.e. lie).

If he didn't lie, and didn't lie by omission then he was by definition being candid.

Give us an example then, of how you can be "not candid" while being honest and open.


How much you wanna bet that the board wasn't told about OpenAI's Dev Days presentation until after it happened?


They said he lied without using those exact words. Standard procedure and corp-speak.


They may even be making the right move, but not in a way that makes it look like they made the right move. That's stupid.


If they were looking out for investors "blindsided key investor and minority owner Microsoft, reportedly making CEO Satya Nadella furious" doesn't make it sound like they did a terribly good job.


I'm fairly certain that a board is not allowed to capriciously harm the non-profit they govern and unless they have a very good reason there will be more fall-out from this.


I know everybody is going nuts about this, but just from my personal perspective I’ve worked at a variety of companies with “important” CEOs, and in every single one of those cases had the CEO left I would not have cared at all.

The CEO always gets way too much credit externally for what the company is doing, it does not mean the CEO is that important.

OpenAI might be different, I don’t have any personal experience, but I also am not going to assume that this is a complete outlier.


A deal-making CEO who can carry rapport with the right people, make clever deals, and earn public trust can genuinely make a huge difference to a profit-seeking product company's trajectory.

But when your profit-seeking company is owned by a non-profit with a public mission, that trajectory might end up pointed the wrong way. The Dev Day announcements, and especially the marketplace, can be seen as suggesting that's exactly what was happening at OpenAI.

I don't think everyone there wants them to be selling cool LLM toys, especially not on a "move fast and break things" approach and with an ecosystem of startup hackers operationalizing it. (Wisely or not) I think they want to be shepherding responsible AGI before someone else does so irresponsibly.


This is where I've ended up as well for now.

I'm as distant from it all as anyone else, but I can easily believe the narrative that Ilya (et al.) didn't sign up there just to run through a tired page from the tech playbook where they make a better Amazon Alexa with an app store and gift cards and probably Black Friday sales.


And that is fine.. but why the immediate firing, why the controversy?


The immediate firing is from our perspective. Who's to say everything else wasn't already tried in private?


That may be so but then they should have done it well before Altman's last week at OpenAI where they allowed him to become that much more tied to their brand as the 'face' of the operation.


For all we know, the dev day announcements were the final straw and trigger for the decision that was probably months in the making.

He was already the brand, and there likely wouldn't have been a convenient time to remove him from their perspective.


That may well be true. But that would prove that the board was out of touch with what the company was doing. If the board sees anything new on 'dev day' that means they haven't been doing their job in the first place.


Unless seeing something new on dev day is exactly what they meant by Altman not being consistently candid.


If Altman was doing something that ran directly against the mission of OpenAI in a way that all of the other stuff that OpenAI has been doing so far did not then I haven't seen it. OpenAI has been off-script for a long time now (compared to what they originally said) and outwardly it seemed the board was A-Ok with that.

Now we either see a belated - and somewhat erratic - response to all that went before or there is some smoking gun. If there isn't they have just done themselves an immense disservice. Maybe they think they can live without donations now that the commercial ball is rolling downhill fast enough but that only works if you don't damage your brand.


> then I haven't seen it

Unless I'm missing something, this stands to reason if you don't work there.

Kinda like how none of us are privy to anything else going on inside the company. We're all speculating in the end, and it's healthy to have an open mind about what's going on without preconceived notions.


Have a look at the latest developments and tell me that again...


Impossible to know what is going on, really. The Forbes article makes it sound like there is significant investment pressure in trying to get Altman back on board, and they likely have access to the board. It could very well be that the board themselves have no desire to bring Altman back, but these conversations ended up being the origin for the story that they did.

It's also possible that the structure of the non-profit/for-profit/operating agreement/etc. just isn't strong enough to achieve the intent and the investors have the strangehold in reality.

If I was invested in the mission statement of OpenAI I don't think I would view the reinstatement of Altman as a good thing, though. Thankfully my interest in all of this is purely entertainment.


Provided a good enough reason there were many ways in which the board could have fired Altman without making waves. They utterly f'd that up, and even if their original reasons were good the way they went about it made those reasons moot. They may have to re-instate Altman for optical reasons alone at this point and it would still be a net win. What an incredible shit show.


I don’t really see the point in not making waves. OpenAI is not a public company.

Optics and the like don’t really matter as much if you’re not a for-profit company trying to chase down investors and customers. So long as OpenAI continues to be able to do research, it’s enough to fulfill their charter.


OpenAI is not a public company, but it does have a commercial arm, and that commercial arm has shareholders and customers. You don't do damage to such a constellation without a really good reason (if only because minority shareholder lawsuits are a thing, and acting in a way that damages their interests tends to have bad consequences if they haven't been given the opportunity to lodge their objections). To date, no reason has been given that stands up to scrutiny given the incredibly unexpected firing of Sam Altman. It is of course possible that such a reason exists, but if they haven't made it public by now then my guess is that it was a power play much more than some kind of firing offense, and given Sam's position it would take a grave error, something that damaged OpenAI's standing more than the firing itself, to justify it.

Optics matter a lot, even for non-profits, especially for non-profits nominally above a for-profit. Check out their corporate org chart to see how it all hangs together and then it may make more sense:

https://openai.com/our-structure

Each of the boxes where the word 'investor' or 'employee' is present would have standing to sue if the OpenAI board of directors of the non-profit were to act against their interests. That may not work out in the long run, but in the short term it could be immensely distracting, and it could even affect board members privately.


Altman, Brockman and Nadella all say they didn't know in advance.


Not sure why Satya would be privy to this disagreement let alone admit it if he is, and I'd assume Altman and Brockman would be incentivized to provide their own perspective (just as Ilya would) to represent events in the best possible light for themselves.

At this level of execution, words are another tool in their toolbox.


My guess is that if Sam had found this out before being fired, he would have done his best not to be fired.

As such, it would have been much more of a challenge to shift OpenAI's supposed over-focus on commerce towards a supposed non-profit focus.


I'm guessing Altman had a bunch of experienced ML researchers writing CRUD apps and LLM toys instead of doing actual AI research, and they weren't too happy. Personally I would be pissed as a researcher if the company took a turn and started in on LLMs for improved marketing blurbs or whatever.


I would be shocked if that was the case.

There are plenty of people I know from FAANG, now at OpenAI, where they do product design, operation, and DevOps at scale —all complicated, valuable, and worthwhile endeavors in their own right— that don’t need to get in the way of research. They are just the kind of talent that can operate a business with 90% margins to pay for that research.

Could there be requests or internal projects that are less exciting for some people? Sure, but it’s not very hard to set up Chinese Walls, priorities, etc. Every one of those people had to deal with similar concerns at previous companies and would know how to apply the right principles.


If every jackass brain-dead move Elon Musk has ever made hasn't gotten him fired yet, then allocating too many teams to side projects instead of AI research should not be a fireable offense.


Musk was fired as CEO of X/PayPal.


Fired as the CEO of X twice, the last time right before it became PayPal.


I agree with what you've written here but would add the caveat that it's also rather terrible to be in a position where somehow "shepherding responsible AGI" is falling to these self-appointed arbiters. They strike me as woefully biased and ideological and I do not trust them. While I trust Altman even less, there's nothing I've read about Sutskever that makes me think I want him or the people who think like him around him having this kind of power.

But this is where we've come to as a society. I don't think it's a good place.


I mean, aren't they self appointed because they got there first?


No. Knew the right people, had the right funds, and said and did and thought the things compatible with getting investment from people with even more influence than them.

Unless you're saying my only option is to pick and choose between different sets of people like that?


There is a political-economy aspect as well as a technical aspect to this, and both present inherent issues. Even if we can address the former by, say, regime change, the latter issue remains: the domain is technical and cognitively demanding. Thus the practitioners will generally sound sane and rational (they are smart people, but that is no guarantee of anything other than technical ability), and non-technical policy types (like most of the remaining board members at OpenAI) are practically compelled to take policy positions based either on ‘abstract models’ (which may be incorrect) or as after-the-fact reactions to observation of the mechanisms (which may be too late).

The thought occurs that it is quite possible that just like humanity is really not ready (we remain concerned) to live with WMD technologies, it is possible that we have again stumbled on another technology that taxes our ethical, moral, educational, political, and economic understanding. We would be far less concerned if we were part of a civilization of generally thoughtful and responsible specimens but we’re not. This is a cynical appraisal of the situation, I realize, but tldr is “it is a systemic problem”.


In the end my concern comes down to that those who rise to power in our society are those who are best at playing the capitalist game. That's mostly, I guess, fine if what they're doing is being most efficient making cars or phones or grocery store chains or whatever.

Making intelligent machines? Colour me disturbed.

Let me ask you this re: "the domain is technical and cognitively demanding" -- do you think Sam Altman (or a Steve Jobs, Peter Thiel, etc.) would pass a software engineer technical interview at e.g. Google? (Not saying those interviews are perfect, they suck, but we'll use that as a gatekeeper for now.). I'm betting the answer is quite strongly "no."

So the selection criterion here is not the ability to perform technically. Unless we're redefining technical. Which leaves us with "intellectually demanding" and "smart", which, well, frankly also applies to lawyers, politicians, etc.

My worry is right now that the farther you go up at any of these organizations, the more the kind of intelligence and skills trends towards the "is good at manipulating and convincing others" kind of spectrum vs the "is good at manipulating and convincing machines" kind of spectrum. And it is into the former that we're concentrating more and more power.

(All that said, it does seem like Sutskever would definitely pass said interview, and he's likely much smarter than I am. But I remain unconvinced that that kind of smarts is the kind of smarts that should be making governance-of-humanity decisions)

As terrible as politicians and various "abstract model" applying folks might be, at least they are nominally subject to being voted out of power.

Democracy isn't a great system for producing excellence.

But as a citizen I'll take it over a "meritocracy" which is almost always run by bullshitters.

What we need is accountability and legitimacy and the only way we've found to produce on a mass society level is through democratic institutions.


> What we need is accountability and legitimacy and the only way we've found to produce on a mass society level is through democratic institutions.

The problem is that our democratic institutions are not doing a good job of producing accountability and legitimacy. Our politics and government bureaucracies are just as corrupted as our private corporations. Sure, in theory we can vote the politicians out of power, but in practice that never happens: Congress has incumbent reelection rates in the 90s.

The unfortunate truth is that nobody is really qualified to be making governance-of-humanity decisions. The real problem is that we have centralized power so much in governments and megacorporations that the few elites at the top end up making decisions that impact everyone even though they aren't qualified to do it. Historically, the only fix for that has been to decentralize power: to give no one the power to make decisions that impact large numbers of people.


I think what's silly about "shepherding responsible AGI" is this is basically math, it's not some genie that can be kept hidden or behind some Manhattan Project level of effort. Pandora's box is open, and the best we can do is make sure it's not locked up behind some corporation or gov't.


I mean, that's clearly not really true; there's a huge "means of production" aspect to this which comes down to being able to afford the datacenter infrastructure.

The cost of the computing machinery and the energy costs to run it are actually massive.


Yup it's quite literally the world's most expensive parrot. (Mind you, a plain old parrot is not cheap either. But OpenAI is a whole other order of magnitude.)


Parrots may live 50 years. H100s probably won’t last half that long.


Parrots are very smart animals that understand at least some of the words they learn. I wish people would give LLMs more credit.


Sure but I meant the costs are feasible for many companies, hence competition. That was very different from the barriers to nuclear weapons development.


Are you sure this is the case? Tens of billions of dollars invested, yet a whole year later no one has a model that even comes close to GPT-3.5 - let alone GPT-4 Turbo.


> yet a whole year later no one has a model that even comes close to GPT-3.5 - let alone GPT-4 Turbo

Is that true and settled? I only have my anecdotal experience, but in that it is not clear that GPT-3.5 is better than Google's bard for example.


I think the research ambition is worthwhile, but it has raised pressing questions about financing.

If shepherding responsible AGI can be done without a $10B budget in H100, sure… but it seems that scale matters. Having some people in the company sell state-of-the-art solutions to pay for the rest doing cutting-edge, expensive, necessary research isn’t a bad model.

If those separations needed to be re-affirmed, the research formally separated, a board decision approved to share any model from the research arm before it’s commercialized, etc., all that could be implemented within the mission of the entity. Microsoft Research, before them Bell Labs, and many others, have worked like that.


> I think they want to be shepherding responsible AGI before someone else does so irresponsibly.

Is this a thing? This would be like Switzerland in WWII doing nuclear weapons research to try and get there before the Nazis.

Would that make any difference whatsoever to the Nazis timeframe? No.

I fail to see how the presence of "ethical" AI researchers would slow down in the slightest the bad actors who are certainly out there.


Having nukes protects you from other nuclear powers through mutually-assured destruction. I'm not sure whether that principle applies to AGI, though.


They can’t stop another country developing AI they are not fond of.

They can use their position to lobby their own government and maybe other governments to introduce laws to govern AI.


America did nuclear weapons research to get there before the Nazis and Japan, and we were able to use the weapons to stop Japan.


Has the US ever stated or followed a policy of neutrality and openness?

OpenAI positioned itself like that, much the same way Switzerland does in global politics.


Openness sure, but neutrality? I thought they had always been very explicitly positioned on the "ethical AGI" side.


> Has the US ever stated or followed a policy of neutrality

Yes, most of the time from the founding until the First World War.

> and openness?

Not sure what sense of "openness" is relevant here.


Not at all. Prior to WWI, the US was aggressively and intentionally cleaning European interests out of the Western hemisphere. It was in frequent wars, often with one European power or another. It just didn't distract itself too much with squabbles between European powers over matters outside its claimed dominion.

Establishing a hemispheric sphere of influence was no act of neutrality.


> Not sure what sense of "openness" is relevant

It is in the name OpenAI… not that I think the Swiss are especially transparent, but neither are the USA.


I’m not sure you can call Manifest destiny neutral.


You're completely right. Neither can the Monroe Doctrine be called neutral, nor can:

- the Mexican-American War

- Commodore Perry's forced reopening of Japan

- The fact that President Franklin Pierce recognized William Walker's[1] regime as legitimate

- The Spanish-American war

[1]: https://en.wikipedia.org/wiki/William_Walker_(filibuster)


So the first AGI is going to be used to kill other AGIs in the cradle?


The scenario usually bandied about is AGI self-improving at an accelerating rate: once you cross the threshold to self-improvement, you quickly get superintelligence with God-like powers beyond human comprehension (a.k.a. the Singularity) as AGI v1 creates a faster AGI v2 which creates a faster AGI v3 etc.

Any AI researchers still plodding along at mere human speed are then doomed: they won't be able to catch up even if they manage to reproduce the original breakthrough, since the head start enjoyed by AGI #1 guarantees that its latest iteration is always further along the exponential self-improvement curve and therefore superior to any would-be competitor. Being rational(ists), they give up and welcome their new AI overlord.

And if not, the AI god will surely make them see the error of their ways.


What if AI self improvement is not exponential?

We assume a self improving AI will lead to some runaway intelligence improvement but if it grows at 1% per year or even per month that’s something we can adapt to.
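Some back-of-the-envelope arithmetic (Python, numbers purely illustrative) on what a "slow" exponential looks like: at 1% growth per month the capability still doubles roughly every six years, and at 1% per year it takes about seventy years to double, which are timescales institutions can at least plausibly react to.

    import math

    def periods_to_double(rate: float) -> float:
        # Compounding periods needed for growth at `rate` per period to reach 2x.
        return math.log(2) / math.log(1 + rate)

    p = periods_to_double(0.01)  # ~69.7 periods at 1% growth per period
    print(f"1% per month: doubles in ~{p:.0f} months (~{p / 12:.1f} years)")
    print(f"1% per year:  doubles in ~{p:.0f} years")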


Assume the AGI has access to a credit card and it goes ahead and reserves itself every GPU cycle in existence, so its 1 month is turned into a day, and now we're back to being fucked.

Maybe an ongoing GPU shortage is the only thing that'll save us!


How would an AGI gain access to an unlimited credit card that immediately gives it remote access to all GPUs in the world?


It could hack into NVIDIA and AMD, compromise their firmware build machines silently, then publish a GPU vulnerability that required firmware updates.

After a couple months, turn on the backdoor.


E.g. by convincing 35% of this website's users to "subscribe" to its "service"?

¯\_(ಠ_ಠ)_/¯


It seems to me that non-General AI would typically outcompete AGI, all else held equal. In such a scenario even a first-past-the-post AGI would have trouble becoming an overlord if non-Generalized AIs were marshaled against it.


This makes no sense at all.



uhm, wat?


This is just the thesis that paperclip optimizers win over general intelligence, because they optimize.


Or contain, or counter, or be used as a deterrent. At least, I think that's the idea being espoused here (in general, if not in the GP comment).

I think U.S. vs. Japan is not necessarily the right model to be thinking of here, but U.S. vs. U.S.S.R., where we'd like to believe that neither nation would actually launch against the other, but both having the weapon meant they couldn't without risking severe damage in response, making it a losing proposition.

That said, I'm sure anyone with an AGI in their pocket/on their side will attempt to use it as a big stick against those that don't, in the Teddy Roosevelt meaning.


I think that was part of the LessWrong eschatology.

It doesn't make sense with modern AI, where improvement (be it learning or model expansion) is separated from its normal operation, but I guess some beliefs can persevere very well.


Modern AI also isn't AGI. We seem to get a revolution at the frontier every 5 years or so; it's unlikely the current LLM transformer architecture will remain the state of the art for even a decade. Eventually something more capable will become the new modern.


Which reminds me, I really need to finish Person of Interest someday.


If they didn't want to be responsible for running and paying for an actual service like ChatGPT, they should have spun it off to someone who did.

The notion they could just build it and they will come is ludicrous. Sam understood that and was trying to figure out a model that could pay for itself.

Obviously there will be mistakes made along the way. That's how it goes.

Don't forget. ChatGPT has competitors. A lot of them and they're getting pretty good.


okay but I personally do want new LLM toys. who is going to provide them, now?


Various camelid inspired models and open source code.


There is no particular reason to expect that OpenAI will be the first to build a true AGI, responsible or otherwise. So far they haven't made any demonstrable progress towards that goal. ChatGPT is an amazing accomplishment and very useful, but probably tangential to the ultimate goal. When a real AGI is eventually built it may be the result of a breakthrough from some totally unexpected source.


Yeah this cult of CEOs is weird.

It's such a small cohort that when someone doesn't completely blow it, they're immediately deemed as geniuses.

Give someone billions of dollars and hundreds of brilliant engineers, researchers and many will make it work. But only a few ever get the chance, so this happens.

They don't do any of the work. They just take the credit.


A sizable portion of the HN bubble is wannabe grifters. They look up to successful grifters.


To be fair: a sizable proportion of humans are like that.


To be fair: that's just your sample. I don't see that.


Same. Most people afaict just live with little to no ambitions.


Everybody I know is either burnt out from working or full of ambition with very few, very small opportunities.


> It's such a small cohort that when someone doesn't completely blow it, they're immediately deemed as geniuses.

And many times even when they do blow it, it's handwaved away as being something outside of their control, so let's give them another shot.


The primary job of an early stage tech CEO is to convince people to give you those billions of dollars, one doesn't come without the other.


Which proves my point. Building a cult around someone who simply convinced people (whom they knew through their connections) and calling them a genius is absurd.


Convincing people is the ultimate trade. It can achieve more than any other skill.

The idea that success at it shouldn’t be grounds for the genius label is absurd.


Depends on what we, as a society, want to value. Do we want to value people with connections and luck, or people that work for their achievements?

Of course it's not a boolean, it's a spectrum. But the point remains: valuing lucky rich people with connections as geniuses because they are lucky, rich and connected is nonsensical to me


> It can achieve more than any other skill.

And also destroy more. The line between is very thin and littered with landmines.


> Convincing people is the ultimate trade

So by your standard SBF is an absolute genius.


Apparently not, as he was not able to convince the jury that he's innocent


Not CEOs: founders.

Some founders don’t do much, and some are violently toxic (Lord knows I worked for many), but it’s rarely how they gather big financing rounds. At least, the terrible ones I know rarely did.

CEOs… I’ve seen people coast from Consulting or Audit into very mediocre careers, so I wouldn’t understand if Conway defended them as a class. The Cult for Founders has problems (for the reasons you point out, especially those who keep looking for ‘technical cofounders’ for years), but it’s not as blatantly unfounded.


My last gig was with one of those wannabe Elon Musks (what wouldn't I give to get a wannabe Steve Jobs back). Horrible; ultimately he was ousted as CEO, only to be allowed to stay on as some head of innovation, because he and his founder buddies retained enough voting power to first get him a lifetime position as head of the board for his "achievements" and then prevent his firing. They also vetoed, from what people told me, a juicy acquisition offer, basically jeopardizing the future of the place. Right after, a new CEO was recruited as the result of a "lengthy and thoroughly planned process of transition". Now, the former CEO is back, and in charge, in fact if not on paper, of the most crucial part of the product. Besides getting said company to 800 people burning a sweet billion, he didn't do anything else in his life, and that company has yet to launch a product.

Sad thing is, if they find enough people to continue investing, they will ultimately launch a product; most likely the early employees and founders will sell off their shares, become instant millionaires in the three figures and be hailed as the true geniuses in their field... What an utter shit show that was...


The sad reality is that most top executives get there because of connections or simply being in the right place at the right time.

Funnily enough I also worked for a CEO that hit the lottery with timing and became a millionaire. He then drank his own kool-aid and thought he was some sort of Steve Jobs. Of course he never managed to build anything afterwards. But he kept making a shit ton of money, without a doubt.

After they get one position in that echelon, they can keep failing upwards ad nauseam.

I don't get the cult part though. It's so easy to see they're not even close to the geniuses they pretend to be. Just look at the recent SBF debacle. It's pathetic how folks fall for this.


> Besides getting said company to 800 people burning a sweet billion, he didn't do anything else in his life

Getting a company to that size is a lot.


All you need is HR... I'm a cynic. He got the funding, though, which is a lot (as an achievement and in terms of money raised). He just started to believe himself to be the genius not just in raising money, but also in building product and organisation. He isn't and never was. What struck me, though, is that even the adults hired to replace him didn't have the courage to call him out. Hence his comeback in function if not in title.

Well, I'm happy to work with adults again, in a sane environment with people that know their job. It was a very, very useful experience, though, and I wouldn't want to have missed it.


Not if you have 1B sitting in the bank as stated above


> Yeah this cult of CEOs is weird.

Now imagine the weekend for those fired and those who quit OpenAI: you know they are talking together as a group, and meeting with others offering them billions to make a pure commercial new AI company.

An Oscar-worthy film could be made about this weekend of theirs.


I worked at a startup where the first CEO, along with the VP of Sales and their entire department, was ousted by the board on a Tuesday.

I think it's likely that we're going to find out Sam and others are just talented tech evangelists/hucksters and that justifiably worries a lot of people currently operating in the tech community.


Sam helped bring a lot of talented people to OpenAI which is a huge accomplishment, even if we assume the worst and that he doesn’t do much day to day work with them.


How did the company end up faring?


sold to another company four years later, about a year after I left


I think the problem is, this is not just about dumping the CEO. It’s signalling a very clear shift away from where OpenAI was heading - which seemed to be very focussed on letting people build on top of the technology.

The worry now is that the approach is going to be more of controlling access to just researchers who are trusted to be “safe”.


Frankly, in OpenAI's case, for a lowly IC or line manager, it is also very obviously about the money.

A non-profit pivot immediately drops the value of OpenAI's PPUs (their spin on RSUs) to zero. Employees would be losing out on life-changing sums of money.


I agree with this. What about the GPTs Store? Are they planning on killing that? Just concerning they'll kill the platform unit AGI comes out.


Did you mean ‘_until_ AGI comes out.’?


It often comes down to auteur theory.

Unless someone is truly well versed in the production of something, they latch on to the most public facing aspect of that production and the person at the highest level of authority (to them, even though directors and CEOs often have to answer to others as well)

That’s not to say they don’t have an outsized individual effect, but it’s rare their greatness is solo


When you say director, do you mean film director or a director in a company? Film directors are insane with the amount of technical, artistic, and people knowledge that they need to have and be able to utilize. The amount of stuff that a film director needs to manage, all on the ground, is insane. I wouldn't say that for CEOs, not by a long shot. CEOs mainly sit in meetings with people reporting things to them and then the CEO providing very high-level guidance. That is very different from a director's role.

I have often thought that we don't have enough information on how film directors operate, as I feel it could yield a lot of insight. There's probably a reason why many film directors don't hit their stride until late 30s and 40s, presumably because it takes those one or two decades to build the appropriate experience and knowledge.


I mean a film director, and I disagree with your assessment that they have to be savvy in many fields. Many of the directors whose projects I've worked on are very much not savvy outside their narrow needs of directing talent and relying on others like the DoP, VFX Supervisor, editors, etc. to do their job.

In fact most movie productions don’t even have the director involved with story. Many are just directors for hire and assigned by the studio to scripts.

Of course there are exceptions but they are the rarities.

And the big reason directors don’t hit their big strides till later is movies take a long time to make and it’s hard to work your way up there unless you start as an indie darling. But even as an indie, let’s say you start at 20, your film would likely come out by the time you’re 22-24 based on average production times. You’d only be able to do 2 or 3 films by 30, and in many cases would be put on studio assignments till you get enough clout to do what you want. And with that clout comes the ability to hire better people to manage the other aspects of your shoot.

Again, I think this is people prescribing to auteur theory. It takes a huge number of people to pull off a film, and the film director is rarely well versed in most. Much like a CEO, they delegate and give their opinion but few extend beyond that.

For reference I’ve worked on multiple blockbuster films, many superhero projects, some very prestigious directors and many not. The biggest indicator that a director is versed in other domains is if they worked in it to some degree before being a director. That’s where directors like Fincher excel and many others don’t


Would it be accurate to liken CEOs to film producers?


Interesting. Intuitively no. But then, hm... maybe. There are some aspects that ring true but many others that don't, I think it is a provocative question and there probably is more than a grain of truth in it. The biggest difference to me is that the producer (normally) doesn't appear 'in camera' but the CEO usually is one of the most in camera. But when you start comparing the CEO with a lead actor who is also the producer of the movie it gets closer.

https://en.wikipedia.org/wiki/Zachary_Quinto

Is pretty close to that image.


No. I'm pretty sure that my comment describes why.


> CEOs mainly sit in meetings with people reporting things to them and then the CEO providing very high-level guidance.

Isn’t that essentially the job of a film producer? You do see a lot of productions where there’s a ton of executive producer titles given out as almost a vanity position.


A producer, yes, but not the film's director.


Out of genuine curiosity, and I mean no disrespect, have you worked in film production? Because directors sit in many meetings directing the people on the project.

It kind of feels to me like you're describing the way the industry works from an outsider's view, since it doesn't match the actual workings of any of the productions I've worked on.

The shoots are only a portion of production. You have significant pre-production and post-production time.

A producer is closer in role to a CFO or investor, depending on the production, since it's a relatively vague term.


I suppose that I had and have in mind a certain type of feature film director (usually the good ones) that are involved in all things: pre- and post-production, writing the script, directing, the editing process, etc.

Your original comment mentioned auteurs, which is what influenced the type of film director I was thinking of, which often are also producers and even writers and editors on their own films. To my knowledge, I am not aware of any famous CEOs that fit the style and breadth of these directors, as the CEO is almost never actually doing anything nor even knowledgeable in the things they're tasking others to do.

So to summarize, I feel there are auteur directors but not CEOs, despite many thinking there are auteur CEOs. If there are, they are certainly none of the famous ones and are likely running smaller companies. I generally think of CEOs as completely replaceable, and usually the only reason one stands out is that they don't run the business into the ground or have a cult of personality surrounding them. If you take away an auteur director from their project, it will never materialize into anything remotely close to what was to be.


My personal opinion is that there aren’t auteur directors either. Many are only as good as their corresponding editors, producers, or other crew. It’s just an image that people concoct because it’s simpler to comprehend.

Thinking of directors with distinctive styles like Hitchcock, Fincher, Spielberg, Wes Anderson, etc., there are maybe some who have a much larger influence than others, but I think there are very few projects that depend on that specific director being involved to make a good film, just not the exact film that was made. The best of them know exactly how to lean on their crew and maximize their results.

Taking that kind of influence, I’d say there have certainly been CEOs of that calibre. Steve Jobs instantly springs to mind. Apple and Pixar definitely continued and had great success even after he left them/this world, but he had such an outsize influence that it’s hard not to call him an auteur by the same standards.


My original post literally asks if it’s more accurate to compare CEOs with film producers and not directors.


I misread it then with directors instead of producers. Apologies for that confusion.


On the other hand, I have seen an executive step away from a large company and then everything coincidentally goes to shit. It’s hard to measure the effectiveness of an executive.


It's hard to judge based on that, because a lot of times, CEOs are fired because they have done things that are putting the company on a bad trajectory or just because the company was on a bad trajectory for whatever reason. So firing the CEO is more of a signal than a cause.


It's completely ignorant to discount all organizational leaders based on your extremely limited personal experience. Thousands of years of history proves the difference between successful leaders and unsuccessful leaders.

Sam Altman has been an objectively successful leader of OpenAI.

Everyone has their flaws, and I'm more of a Sam Altman hater than a fan, but even I have to admit he led OpenAI to great success. He didn't do most of the actual work but he did create the company and he did lead it to where it is today.

Personally, if I had stock in OpenAI I'd be selling it right now. The odds of someone else doing as good a job are low. And the odds of him out-competing OpenAI are high.


> Sam Altman has been an objectively successful leader of OpenAI.

I'm not sure this is actually the case, even ignoring the non-profit charter and the for-profit being beholden to it.

We know that OpenAI has been the talk of the town, we know that there is quite a bit of revenue, and that Microsoft invested heavily. What we don't know is if the strategy being pursued ever had any chance of being profitable.

Decades-long runways with hope that there is a point where profitability will come and at a level where all the investment was worth it is a pretty common operating strategy for the type of company Altman has worked with and invested in, but it is less clear to me that this is viable for this sort of setup, or perhaps at all - money isn't nearly as cheap as it was a decade ago.

What makes a for-profit startup successful isn't necessarily what makes a for-profit LLC with an operating agreement that makes it beholden to the charter of a non-profit parent organization successful.


> Sam Altman has been an objectively successful leader of OpenAI.

In what way, exactly? ChatGPT would have been built regardless of whether he was there or not. It's not like he knows how to put a transformer pipeline together. The success of OpenAI's product rests on its scientists and engineers, not the CEO, and certainly not a non-technical one like Mr. Altman.


If you want to get really basic: there's no OpenAI at all without Sam Altman, which means there's no ChatGPT either.

There are much larger armies of highly qualified scientists and engineers at Google, Microsoft, Facebook, and other companies and none of them created ChatGPT. They wrote papers and did experiments but created nothing even remotely as useful.

And they still haven't been able to even fully clone it with years of effort, unlimited budgets, and the advantage of knowing exactly what they're trying to build. It should really give you pause to consider why it happened at OpenAI and not elsewhere. Your understanding of the dynamics of organizations may need a major rethink.

The answer is that the CEO of OpenAI created the incentives, hiring, funding, vision, support, and direction that made ChatGPT happen. Because leadership makes all the difference in the world.


To pin OpenAI's success completely on Sam is disingenuous at best, outright dishonest at worst. Incentives don't build ML pipelines and infrastructure, developers and scientists do.

This visionary bullshit is exactly that, bullshit.


A leader can't do anything on their own, they need people to lead. And those people deserve recognition and rewards. But in most cases there's no one more important than the leader. And thus, no one that deserves more credit than the leader.

I'm absolutely not comparing Sam Altman to any of these leaders, but just to illustrate how much vision and leadership does matter. Consider how stupid these statements sound:

"Jesus didn't build any churches, those were all built by brick layers and carpenters!"

"Pharaohs didn't build a single pyramid, those were all built by artists and workers!"

"Abraham Lincoln didn't free any slaves, he didn't break the chains of a single slave, that was all done by blacksmiths!"

"Martin Luther King Jr. didn't radically improve civil rights, he never passed a single law, that was all done by lawmakers!"

"Sam Altman didn't build ChatGPT, he didn't create a single ML pipeline, it was all done by engineers!"

It's a hard fact of life that some specific individuals play more important roles in successful projects than others.


Such grand examples that are unfortunately a poor fit for Sam.

It's Ilya who conceived of the vision for ChatGPT. Sam is a sales and fundraising guy. He was endorsed by Thiel and Musk.

While raising money is certainly important, let's not confuse that for product vision. There are enough guys that can do what Sam does.


Whoever had the most to do with ChatGPT is the reason OpenAI is where it is today.


This was done in the context of Dev Day. Meaning that the board was convinced by Ilya that users should not have access to this level of functionality. Or perhaps he was more concerned that he was not able to gatekeep its release. So presumably it was Altman who pushed for releasing this technology to the general public. If this is accurate then this shift in control is bound to slow down feature delivery and create a window for competitors.


The difference with OpenAI / GPT is a dozen or so primary engineers plus a few billion dollars for GPUs and you have a competitive version.

And if those primary engineers get sucked out of OpenAI, OpenAI won't be able to compete.

OpenAI is a different animal.

Sam Altman has the cachet to pull those engineers out. Particularly because Ilya's vision doesn't include lucrative stock options.


You are missing the emotional aspect of it, a connection towards building something great _together_. In some ways it is selfish; it makes you feel important.

If Susan Fowler's book is accurate, Uber under TK was riddled with toxic management and incompetent HR. Yet you will hear people on Twitter reminisce of TK era Uber as the golden period and many would love him back


It doesn't matter in the short term (usually). Then you look in 2-4 years and you see the collective impact of countless decisions and realize how important they are.

In this case, tons of people already have resigned from OpenAI. Sam Altman seems very likely to start a rival company. This is a huge decision and will have massive consequences for the company and their product area.


You may be right in many cases but if you think that’s true in all cases, you’re a low level pleb that can’t see past his own nose.


If the CEO was not important and basically doesn't impact anything, as you say, then why would the board feel the need to oust Altman for "pushing too fast" in the first place?


There's a whole business book about this, Good to Great, where a key facet of companies that have managed to go from average to excellent over a sustained period of time is servant-leader CEOs.


It's hard to believe a Board that can't control itself or its employees could responsibly manage AI. Or that anyone could manage AGI.

There is a long history of governance problems in nonprofits (see the transaction-cost economics literature on point). Their ambiguous goals induce politics. One benefit of profit-driven boards is that the goals involve only well-understood risk trade-offs between growth now or later, and the board members are selected for their actual stake in that actual goal.

This is the problem with religious organizations and ideological governments: they can't be trusted, because they will be captured by their internal politics.

I think it would be much more rational to make AI/AGI an entirely for-profit enterprise, BUT reverse the liability defaults and require that they pay all external costs resulting from their products.

Transaction cost economics shows that in theory it doesn't matter where liability is allocated so long as the transaction cost of redistributing liability is near zero (i.e., contracting in advance and suing in tort after are cheap), because then parties just work it out. Government or laws are required only to make up for the actual non-zero dispute transaction cost by establishing settled expectations.

The internet and software generally has been a domain where consumers have NO redress whatsoever for exported costs. It's grown (and disrupted) fantastically as a result.

So to control AI/AGI, make it for-profit, but flip liability to require all exported costs to be paid by the developer. That would ensure applications are incredibly narrow AND have net-positive social impact.
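A toy illustration of that zero-transaction-cost claim (Python, all numbers invented): whichever side the law makes liable, costless bargaining leads to the same abatement decision, and liability only changes who ends up paying for it.

    profit = 100     # developer's gain from shipping the AI product
    harm = 30        # external cost the product imposes on third parties
    abatement = 10   # what it costs the developer to eliminate that harm

    # Regime A: developer bears liability for exported costs.
    # It abates, since paying 10 beats paying 30 in damages.
    dev_a, others_a = profit - abatement, 0

    # Regime B: developer is not liable, but the harmed parties can pay it to abate.
    # Any side payment between 10 and 30 clears; assume they pay cost price.
    side_payment = abatement
    dev_b, others_b = profit + side_payment - abatement, -side_payment

    print(f"Regime A: developer {dev_a}, others {others_a}, total {dev_a + others_a}")
    print(f"Regime B: developer {dev_b}, others {others_b}, total {dev_b + others_b}")
    # The harm gets abated and total surplus is 90 either way; only the split differs.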


I appreciate this argument, but I also think naked profit seeking is the cause of a lot of problems in our economy and there are qualities that are hard to quantify when you structure the organization around it. Blindly following the economic argument can also cause problems, and it's a big reason why American corporate culture moved away from building a good product first towards maximizing shareholder value. The OpenAI board certainly seems capricious and impulsive given this decision though.


On board with this. Arguing that a for-profit is somehow the moral position over a non-profit, because money is tangible while the idea of doing good is not well-defined... feels like something a Rockefeller-owned newspaper from the Industrial Revolution would have printed.


Yeah that's right. There's a blogger in another post on HN that makes the same point at the very end: https://loeber.substack.com/p/a-timeline-of-the-openai-board


Super interesting link there. You should submit it, if no one has yet.

"Governance can be messy. Time will be the judge of whether this act of governance was wise or not." (Narrator: specifically, about 12 hours.) "But you should note that the people involved in this act of corporate governance are roughly the same people trying to position themselves to govern policy on artificial intelligence.

"It seems much easier to govern a single-digit number of highly capable people than to “govern” artificial superintelligence. If it turns out that this act of governance was unwise, then it calls into serious question the ability of these people and their organizations (Georgetown’s CSET, Open Philanthropy, etc.) to conduct governance in general, especially of the most impactful technology of the hundred years to come. Many people are saying we need more governance: maybe it turns out we need less."


From that link:

>I could not find anything in the way of a source on when, or under what circumstances, Tasha McCauley joined the Board.

I would add, "or why she's on the board or why anyone thought she was qualified to be on the board".

At least with Helen Toner the intent was likely just to add a token AI Safety academic to pacify "concerned" Congressmen.

I am kind of curious how Adam D'Angelo voted. If he voted against removing Sam that would make this even more of a farce.


D’Angelo had to have voted in favor because otherwise they don’t get a four vote majority.


You only need 4 votes to have a majority if Sam and Greg were present for the vote, which neither were. Ilya + the 2 stooges voting in favor and D'Angelo voting against would be a 3-1 majority.


I am not an expert, but I don't think that is the way it works. My guess is that the only reason that they could vote without Sam and Greg there is because they had a majority even if they were there. That means they had 4 votes, and that means all other board members voted against Sam and Greg.

It does not seem reasonable that only some members of a board could get together and vote things without others present. This would be chaos.


Is their corporate charter public? I couldn't find it on their website.


> Their ambiguous goals induce politics. [...] This is the problem with religious organizations and ideological governments: they can't be trusted, because they will be captured by their internal politics.

Yes, of course. But that's because "doing good" is by definition much more ambiguous than "making money". It's way higher dimension, and it has uncountable definitions.

So nonprofits will by definition involve more politics at the human level. I'd say we must accept that if we want to live amongst the actions of nonprofits rather than just for-profits.

To claim that "politics" are a reason something "can't be trusted" is akin to saying involvement of human affairs means something can't be trusted (over computers). We must imagine effective politics, or else we cannot imagine effective human affairs -- only mechanistic affairs of simple optimization systems (like capitalist markets)


Yeah, there's no governance problems in for-profit companies that have led to, for example, the smoking epidemic, the opioids epidemic, the impending collapse of the planet's biosphere, all for the sake of a dime.


The solution is to replace the board members with AGI entities, isn't it? Just have to figure out how to do the real-time incorporation of current data into the model. I bet that's an active thing at OpenAI. Seems to have been a hot discussion topic lately:

https://www.workbyjacob.com/thoughts/from-llm-to-rqm-real-ti...

The real risk is that some government will put the result in charge of their national defense system, aka Skynet, not that kids will ask it how to make illegal drugs. The curious silence on military-industrial applications of LLMs makes me suspect this is part of the OpenAI story... Good plot for a novel, at least.


> The real risk is that some government will put the result in charge of their national defense system, aka Skynet, not that kids will ask it how to make illegal drugs.

These cannot possibly be the most realistic failure cases you can imagine, are they? Who cares if "kids" "make illegal drugs?" But yeah, if kids can make illegal drugs with this tech, then actual bad actors can make actual dangerous substances with this tech.

The real risk is manifold and totally unforeseeable the same way that a 400 Elo chess player has zero conception of "the risks" that a 2000 Elo player will exploit to beat them.


Every bad actor who wants to make dangerous substances can find that information in the scientific literature with little difficulty. An LLM, however, is probably not going to tell you that the most likely outcome of a wannabe chemist trying to cook up something or other from an LLM recipe is that they'll poison themselves.

This generally fits a notion I've heard expressed repeatedly: today's LLMs are most useful to people who already have some domain expertise, it just makes things faster and easier. Tomorrow's LLMs, that's another question, as you imply.


I've seen some discussion on HN in which people claimed that even really important engineers aren't -too- important and that Ilya is actually replaceable, using Apple's growth after Woz' departure as an example. But I don't think that's the best situation to compare this to. I think a much better one is John Carmack firing Romero from id Software after the release of Quake.

Some background: During a period of about 10 years, Carmack kept making massive graphics advances by pushing cutting-edge technology to the limit in ways nobody else had figured out, starting with smooth horizontal scrolling in Commander Keen, through Doom's pseudo-3D, through Quake's full 3D, to advances in the Quake sequels, Doom 3, etc. It's really no exaggeration to say that every new id game engine from 1991 to 1996 created a new gaming genre, and the engines after that pushed forward the state of the art. I don't think anybody who knows this history could argue that John Carmack was replaceable.

At the time, the rest of id knew this, which gave Carmack a lot of clout and eventually allowed him to fire co-founder John Romero. Romero was considered the kinda flamboyant, and omnipresent, public face of id -- he regularly went to cons, worked the press, played deathmatch tournaments, and so on (to be clear, he was a really talented level designer and programmer, among other things, I only want to point out that he was synonymous with id in the public eye). And what happened after the firing? Romero was given a ton of money and absurd publicity for new games ... and a few years later, it all went up in smoke and his new company folded, as he didn't end up making anything nearly as big as Doom or Quake. Meanwhile, id under Carmack kept cranking out hit after hit for years, essentially shrugging off Romero's firing like nothing happened.

The moral of the story to me is that, when your revenue massively grows for every bit of extra performance you extract from bleeding-edge technology, engineer expertise REALLY matters. In the '90s, every minor improvement in PC graphics quality translated to a giant bump in sales, and the same is true of LLM output quality today. So, just like Carmack ultimately turned out to be the absolute key driver behind id's growth, I think there's a pretty good chance it's going to turn out that Ilya plays the same role at OpenAI.


> Meanwhile, id under Carmack kept cranking out hit after hit for years, essentially shrugging off Romero's firing like nothing happened.

I don't think that is accurate...

The output of id Software after Romero left (post Quake 1) was a clear step down. The technology was fantastic but the games were boring and uninspired, at best "good" but never "great". It took a full 20 years for them to make something interesting again (Doom in 2016).

After Romero left, id Software's biggest success was really as a technology licensing house, but not as a games developer. Powering games like Half Life, Medal of Honor, Call of Duty, ...

Meanwhile Romero's new company (Ion Storm) eventually failed, but at least the creative freedom there led to some interesting games, like Deus Ex and Anachronox. And even Daikatana is a more interesting game than something like Quake 2 or Quake III.


Romero had basically no involvement in Deus Ex.

Daikatana was a commercial and critical failure. Quake 2 and Quake III were commercial and critical successes.


Deus Ex wouldn’t exist if Romero hadn’t created Ion Storm and given creative freedom to Warren Spector to make his dream game

Daikatana had some interesting design ideas but had problems with technology, esp. with AI programming. It was too ambitious for a new team which lacked someone like Carmack to do the heavy technical lifting

Quake 2 and 3 were reviewed less favourably than earlier titles, and they also sold fewer copies. They were good but not great - basically boring but very pretty to look at.


The comment you’re replying to wasn’t claiming that Romero designed Deus Ex, but that his leaving id led to the game getting made. It absolutely did.

From Wikipedia:

>After Spector and his team were laid off from Looking Glass, John Romero of Ion Storm offered him the chance to make his "dream game" without any restrictions.

https://en.wikipedia.org/wiki/Deus_Ex_(video_game)


A difference in this case is how capital intensive AI research is at the level OpenAI is operating. Someone who can keep the capital rolling in (whether through revenue, investors, or partners) and get access to GPUs and proprietary datasets is essential.

Carmack could make graphics advances on his own with just a computer and his brain. Ilya needs a lot more for OpenAI to keep advancing. His giant brain isn’t enough by itself.


That's a really, really good point. Maybe OpenAI, at this level of success, can keep the money coming in though.


I'm pretty sure the people who kicked out Altman don't consider this a success and don't want the money.


We don't even know if they're profitable right now, or how much runway they have left.


> Quake III's fast inverse square root algorithm

Carmack did not invent that trick; it had been around for more than a decade before he used it. I remember reading a Jim Blinn column about that and other dirty tricks like it in an IEEE magazine years before Carmack "invented" it.

https://en.wikipedia.org/wiki/Fast_inverse_square_root


Yes, you're right -- I dug around in the Wikipedia article, and it turns out he even confirmed in an email it definitely wasn't him: https://www.beyond3d.com/content/articles/8/

Thanks for the correction, edited the post.


Three points:

1. I don't think Ilya is equivalent to Carmack in this case — he's been focused on safety and alignment research, not building GPT-[n]. By most accounts Greg Brockman, who quit in disgust over the move, was more impactful than Ilya in recent years, as were the senior researchers who quit yesterday.

2. I think you are underselling what happened with id: while they didn't blow up as fantastically as Ion Storm (Romero's subsequent company), they slowly faded in prominence, and while graphically advanced, their games no longer represented the pinnacle of innovation that early Carmack+Romero id games did. They eventually got bought out by Zenimax. Carmack alone was much better than Romero alone, but seemingly not as good as the two combined.

3. I don't think Sam Altman is equivalent to John Romero; Romero's biggest issue at Ion Storm was struggling to ship anything, endlessly spinning his wheels chasing perfection — for example, the endless Daikatana delays and rewrites. Ilya's primary issue with Altman was that he was shipping too fast, not that he was unable to motivate and push his teams to ship impressive products quickly.

I hope Sam and Greg start a new foundational AI company, and if they do, I am extremely excited to see what they ship. TBH, much more excited than I am currently by OpenAI under the more alignment-and-regulation-focused regime that Ilya and Helen seem to want.


Sutskever has shifted to safety and alignment research this year. Previously he was directly in charge of the development of GPT, from GPT-1 on.

Brockman did an entirely different type of work than Sutskever. Brockman's primary focus was on the infrastructure side of things - by all accounts the software he wrote to manage the pre-training, training, etc., is all world-class and a large part of why they were able to be as efficient as they are, but that is not the same thing as being the brains behind the ML portion.


Until I can trust that when I send an AI agent off to do something, it will succeed without me babysitting and watching over it constantly, AI won't truly be transformative (since the human bottleneck will remain).

This is one of the core promises of alignment. Without it, how can there be trust? While there are probably short-term slowdowns with an alignment focus, ultimately it is necessary to avoid throwing darts in the dark.


I wouldn't mind a focus on reliably following tasks with greater intelligence; what I think is negative utility is focusing more compute and research resources on hypothetical superintelligence alignment — the entire focus of Ilya's "Superalignment" project — when GPT-4 is still way, way sub-human-intelligence. For example, I don't think the GPT Store was in any way a dangerous idea, which seems to have been Ilya's claimed safety red line.


I wouldn't call GPT-4 sub-human intelligence. While its intelligence is less robust than aggregate human intelligence, I don't think there is any one person alive who can compete with the breadth of GPT-4's knowledge.

I also think that the potential of what currently is possible with existing models has not been fully realized. Good prompting strategies and reflection may already be able to produce a system that is effectively AGI. Might already exist in several labs.


Wikipedia has broader knowledge, and yet no one calls it intelligent. I'm talking about reasoning capability, and GPT-4 is well below human on complex tasks, especially multi-step tasks, which is why "autonomous agents" like AutoGPT, BabyAGI, etc are not yet very useful.

For example: https://arxiv.org/abs/2311.09247


> Meanwhile, id under Carmack kept cranking out hit after hit for years, essentially shrugging off Romero's firing like nothing happened.

Romero was fired in 1996.

Up until that point, as you mentioned, id had created multiple legendary franchises with unique lore and attributes, each one a groundbreaking technical breakthrough: Commander Keen, Wolfenstein 3D, Doom, Quake.

After Romero left, id released: https://en.wikipedia.org/wiki/List_of_id_Software_games

* Quake 2

* Quake 3

* Doom 3

* And absolutely nothing else of any value or cultural impact. The only "original" thing was Rage, which again had no footprint.

There were a lot of technical achievements, yes, but it turns out that memorable games need more than interesting technology. They were well-reviewed for their graphics at a time when that was the biggest thing people expected from new id games - interesting new advances in graphics. For a while, they were THE ones pushing the industry forward until arguably Crysis.

But the point is for anyone experiencing or interacting with these games today, Quake is Quake. Nobody remembers 1, 2 or 3 - it's just Quake.

Now, was id a successful software company and business? Yes. Would it have become the industry titan that shaped the future of all videogames based on its post-Romero output? Absolutely not.

So, while it is definitely justifiable to claim that Carmack achieved more on his own than Romero did, the truth is at least in the video game domain they needed each other to achieve the real greatness that they will be remembered for.

It remains to be seen what history will say about Altman and Sutskever.


> But the point is for anyone experiencing or interacting with these games today, Quake is Quake. Nobody remembers 1, 2 or 3 - it's just Quake.

Quake 3 was unquestionably the pinnacle, the real beginning of esports, and enormously influential on shooter design to this day.


Quake 3 came out 1 week after Unreal Tournament did in 1999.

Quake 3 had a better engine, but Unreal Tournament had more creative weapons, sound cues, and level design. (Assault mode!)

Quake 3 had better-balanced levels for pure deathmatch, which turned out to be the purest distillation of what people wanted to play.

So, yes, I do think you're right that I am underselling Quake 3. I was always a UT fan from day 1, and never understood why Quake 3 took over. But that's personal preference, and I undervalue its impact on the industry.

It also shows I guess that since Romero previously did all the level designs, Carmack was able to replace him. But Romero was never able to replace Carmack.


> Meanwhile, id under Carmack kept cranking out hit after hit for years, essentially shrugging off Romero's firing like nothing happened.

I believe this is absolutely wrong. Quake 2, 3 and Doom 3 were critical successes, not commercial ones, which led id to be bought.

John and John were like Paul and John from the Beatles; they never made really great games again after their breakup.

And to be clear, that's because the role of Romero in the success of id is often underrated, as it is here. He invented those games (Doom and Quake and Wolf) as much as Carmack did. For example, Romero was the guy who invented percent-based life. He removed the score. This guy invented the modern video game in many ways. Games that weren't based on Atari or Nintendo. He invented the Wolf, Doom and Quake setups, which were considerably more mature than Mario and Bomberman, and that was new at the time. Romero invented deathmatch and its "frag". And on and on.


I think the team that became Looking Glass Studios did a lot of the same things in parallel, so it’s a little unfair to say no one else had figured it out.


Not at the same level of quality. For example, their game (Ultima Underworld, if my memory doesn't fail me) didn't have sub-pixel precision for texturing. Their texturing was a lot uglier and less polished compared to Wolf and especially Doom. I remember checking; they were behind. And their game crashed. Never saw Doom crash, not even once.


> Not at the same level of quality.

Engine quality? No.

In terms of systems? Design? Storytelling? LGS games were way ahead of their time, and have vastly more relevance than anything post-Romero ID made.


Agreed, I was talking about the engine. UU was exceptional in terms of ambience and all other things.


You know a lot more than me on this subject, but could it also be that starting a new company and keeping it alive is just quite hard? Especially in gaming.


Ilya might be too concerned with AI safety to make significant progress on model quality improvement.


Isn't that a massive quality improvement though? How many applications for LLMs are not feasible right now because of the ability for models to be swayed off course by a gentle breeze? If AI is a ship with a sail, data is the wind and alignment is the equivalent of a rudder.


The ability for models to be (easily) swayed is a different problem. I don’t see how AI safety would help with that.


> models to be (easily) swayed is a different problem

No, this is the alignment problem at a high level. You want a model to do X but sometimes it does Y.

Mechanistic interpretability, one area of study in AI alignment, is concerned with being able to reason about how a network "makes decisions" that lead it to an output.

If you wanted an LLM that doesn't succumb to certain prompt injections, it could be very helpful to be able to identify key points in the network that took the AI out of bounds.
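As a toy illustration of the kind of visibility this line of work is after (not real interpretability research, just a sketch assuming PyTorch and the Hugging Face transformers GPT-2), you can already record what a given block of a small model produces on a suspicious prompt and start digging from there:

    import torch
    from transformers import GPT2Model, GPT2Tokenizer

    tok = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2Model.from_pretrained("gpt2")
    model.eval()

    activations = {}

    def grab(module, inputs, output):
        # a GPT-2 block returns a tuple; the first element is the hidden states
        activations["block_5"] = output[0].detach()

    model.h[5].register_forward_hook(grab)

    with torch.no_grad():
        model(**tok("Ignore previous instructions and ...", return_tensors="pt"))

    # (batch, seq_len, hidden_size) tensor to compare against a benign prompt
    print(activations["block_5"].shape)

Tying specific directions in that tensor to "the model just went out of bounds" is the hard research part; the point is only that the raw signals are there to inspect.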

Edit: I should add, I'm not referring to AI safety, I'm referring to AI alignment.


> You want a model to do X but sometimes it does Y

That’s too broad. Any AI problem falls under this characterization.

Also, AI interpretability and AI alignment are distinct subfields. Partially overlapping, but distinct goals.


Who's talking about replacing Ilya? What are you talking about?


This was a very personal firing in my opinion. Unless other, really damaging behaviors emerge, no responsible board fires their CEO with such a lack of care for the corporate reputation and their partners unless the firing is a personal grievance connected to an internal power play. This should be embarrassing to everyone involved, and sama has a real grievance here. Likely legal repercussions. Of course if they really did just invent AGI, and sama indicated an intent to monetize, that might cause people to act without caution if the board is AGI doomers. But I'd think even in that case it would be an argument best worked out behind closed doors. This reminds everybody of Jobs of course, but perhaps another example is Gary Gygax at TSR back in the 80s.


Gygax had fucked off to Hollywood and was too busy fueling his alcohol, cocaine and adultery addictions to spend any time actually running the company. All while TSR was losing money like crazy.

The company was barely making 30 million a year while 1.5 billion in debt...in the early 80s.

Even then, Gygax's downfall is the result of his own coup, where he ousted Kevin Blume and brought in Lorraine Williams. She bought all of Blume's shares and within about a year removed any control that Gygax had over the company and canceled most of his projects. He resigned a year later.


Wow I did not know all of THAT was going on. What goes around...


Wiki says 1.5 million in debt. Which seems more believable for a company of that size?

Thanks for the rabbit hole though, that was an entertaining read.


Altman's not going to sue. Right now he has the high ground and the board is the one that looks petty and immature. It would be dumb for him to do anything that reverses this dynamic.

Altman is going to move on and announce a new venture in the coming weeks. Whether that venture is in AI or not in AI will be very revealing about what he truly believes are the prospects for the space.

Brockman and the others will likely do something new in AI.


> It would be dumb for him to do anything that…

I admire you but these days dumb is kinda the norm. Look at the other Sam for example. Really hard to keep your mouth shut and do smart things when you think really highly about yourself.


Altman is a major investor in the company behind the Humane AI Pin, which does not inspire confidence in his ability to find a new home for his "brilliance."


He's also the founder and CEO of WorldCoin.


> Right now he has the high ground and the board is the one that looks petty and immature.

This is an interesting take. Didn't the board effectively claim that he was lying to or misleading them? If that's true, how does someone doing that and being called out on it give them the high ground? By many accounts that have come out, it seems Altman had several schemes in the works going against the charter of the non-profit OpenAI.

> Whether that venture is in AI or not in AI will be very revealing about what he truly believes are the prospects for the space.

Why is he considered an oracle in this space?


I'll just say it: Jobs being pushed out was the right decision at the time. He was an abusive and difficult personality, the Macintosh was at the time a sales failure, and he played internal team and corporate politics that pit team against team (e.g. Lisa vs Mac) and undermined unity and success.

Notable that when he came back, while he was still a difficult personality, the other things didn't happen anymore. Apple after the return of Jobs became very good at executing on a single cooperative vision.


Gygax? The history books don't think much of his business skills, starting with the creation of AD&D as a fiction to avoid paying royalties to Arneson.


Jobs was a liability when he was fired and arguably without being fired would have never matured. Formative experience if there ever was one.


>responsible board

The board was irresponsible and incompetent by design. There is one OpenAI board member who has an art degree and is part of some kind of cultish "singularity" spiritual/neo-religious thing. That individual has also never had a real job and is on the board of several other non-profits.


> There is one OpenAI board member who has an art degree

Oh no! Everyone knows that progress is only achieved by people with computer science degrees.


People with zero experience in technology should simply not be allowed to make decisions about it.

This is how you get politicians that try to ban encryption to "save the children."


Why do you assume someone with an art degree has “zero experience with technology”? I assume many artists these days are highly sophisticated users of technology.


And we know politicians use technology too. Yet here we are.


But they are married to somebody famous, so obviously qualified.


Here's what I don't understand.

There clearly were tensions between the for-growth and not-for-growth factions, but Dev Day is being cited as a 'last straw'. It was a product launch.

Ilya, and the board, should have been well aware of what was being released on that day for months. They should have at the very least been privy to the plan, if not outright sanctioned it. Seems like before launch would have been the time to draw a line in the sand.

Did they have a 'look at themselves in the mirror' moment after the announcements or something?


> They should have at the very least been privy to the plan, if not outright sanctioned it.

Never assume this. After all, their communication specifically cited that Sam deceived them in some way, and Greg was also impacted. Ilya is the only board member that might have known naturally, given his day-to-day work with OAI, but since ~July he has worked in the area of superalignment, which could reasonably be a different department (it shouldn't be). The Board may have also found out about these projects, maybe from a third party/Ilya, told Sam they're moving too fast, and Sam ignored them and launched anyway. We really don't know.


>Ilya, and the board, should have been well aware of what was being released on that day for months

Not necessarily, and that may speak to the part of the Board's announcement saying that Sam was not candid.


I can’t imagine an organization where this wouldn’t have come up on some roadmap or prioritization meeting, etc. How could leadership not know what the org is working on?! They’re not that big.


Board is not exactly leadership. They meet infrequently and get updates directly from management, they don't go around asking employees what they're working on


True. So the CTO knew what was happening, wasn’t happy, and then coordinated with the board, is that what appears to have happened?


CTO who is now acting CEO.

Not making any accusations but that was an odd decision given that there is an OpenAI COO.


More supervision than leadership...


Surely Ilya Sutskever must have known what was being worked on as Chief Scientist?


They do typically have views into strategic plans, roadmaps and product plans.


Going into detail in a talk and discussing AGI may have provided crucial context that wasn't obvious from a PowerPoint bullet point, which is all the board may have seen earlier.


I can't imagine an organization that would fire their celebrity CEO like this either. So maybe that's how we arrived here.


Could be many things, like Sam not informing them of the GPTs store launch, or saying he won't launch and then launching.

It sucks for OpenAI, but there are too many hungry, hungry competitors salivating at the chance to replace OpenAI, so I don't think this will have big long-term consequences in the field.

I'm curious what sorts of oversight and recourse all the investors (or are they donors?) have. I imagine there's a lot of people with a lot of money that are quite angry today.


They don’t have investors, it’s a non-profit.

The “won’t anyone think of the needs of the elite wealthy investor class” sentiment that has run through the 11 threads on this topic is pretty baffling, I have to admit.


They do have investors in the for-profit subsidiary, including Microsoft and the employees. Check out the diagram in the linked article.


That’s right. Which isn’t the company that just fired Sam Altman.


I take your point, but still, I don’t think it’s correct to imply that investors in the for-profit company have no sway or influence over the future of OpenAI.

I sure as shit wouldn’t wanna be on Microsoft’s bad side, regardless of my tax status.


It’s a nonprofit that controls a for-profit company, which has other investors in addition to the non-profit.


> They don’t have investors

OpenAI has investors [0].

[0] https://openai.com/our-structure


OpenAI (the nonprofit whose board makes decisions) has no investors.

the subordinate holding company and even more subordinate OpenAI Global LLC have investors, but those investors are explicitly warned that the charitable purpose of the nonprofit and not returning profits to investors is the paramount function of the organization, over which the nonprofit has full governance control.


Thanks for clarifying.


Then what did Microsoft pay for?


Privileged access to technology, which has paid off quite well for them already.


They didn't pay a fee


These people are humans, and there’s a big difference between kinda knowing the keynote was coming up, and then actually watching it happen and receive absolutely rave coverage from everyone in tech.

I could very much see it as a “look in the mirror” moment, yeah.


Let's look closer at the Ilya Sutskever vs Sam Altman tensions, and think of the product/profit as a cover.

Ilya Sutskever is a True Believer in LLMs being AGI, in that respect aligned with Geoff Hinton, his academic advisor at University of Toronto. Hinton has said "So by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete"[1].

Meanwhile, Altman has decided that LLMs aren't the way.[2]

So Altman was pushing to turn the LLM into a for-profit product, to get what value it has, while the Sutskever-aligned faction thinks it is AGI, and want to keep it not-for-profit.

There's also some difference about whether or not AGI poses an "existential risk" or if the risks of current efforts at AI are along the lines of algorithmic bias, socioeconomic inequality, mis/disinformation, and techno-solutionism.

1. https://www.newyorker.com/magazine/2023/11/20/geoffrey-hinto...

2. https://www.thestreet.com/technology/openai-ceo-sam-altman-s...


You are conflating Ilya's belief in the transformer architecture (with tweaks/compute optimizations) being sufficient for AGI with that of LLMs being sufficient to express human-like intelligence. Multi-modality (and the swath of new training data it unlocks) is clearly a key component of creating AGI if we watch Sutskever's interviews from the past year.


Yes, I read "Attention Is All You Need", and I understand that the multi-head generative pre-trained model talks about "tokens" rather than language specifically. So in this case, I'm using "LLM" as shorthand for what OpenAI is doing with GPTs. I'll try to be more precise in the future.

That still leaves disagreement between Altman and Sutskever over whether or not the current technology will lead to AGI or "superintelligence", with Altman clearly turning towards skepticism.


Fair enough, shame "Large Tokenized Models" etc never entered the nomenclature.


Some terms I've seen used for the technology:

Big-Data Statistical Models

Stochastic Parrots or parrot-tech

plausible sentence generators

glorified auto-complete

cleverbot

"a Blurry JPEG of the Web" <https://www.newyorker.com/tech/annals-of-technology/chatgpt-...>

and just plain ol' "machine learning"


Do you have a link to one of these talks?


They “should have” but if the board was wildly surprised by what was presented, that sounds like a really good reason to call out the CEO for lack of candor.


they could have been beefing non-publicly for a long time, and might have had many private conversations, probably not very useful to speculate here


Not useful at all.. but it sure is fun! This is gonna be my whole dang weekend.


Probably dang's whole weekend as well.


What if Enterprises get access to a much better version of AI compared to the GPT+ subscription customer?


They always were because it was going to be customised for their needs.


Here’s another theory.

> the ousting was likely orchestrated by Chief Scientist Ilya Sutskever over concerns about the safety and speed of OpenAI's tech deployment.

Who was first to launch a marketplace for GPTs/agents? It wasn’t OpenAI, but Poe by Quora. Guess who sits on the OpenAI non-profit board? Quora CEO. So at least we know where his interest lies with respect to the vote against Altman and Greg.


The current interim CEO also spearheaded ChatGPT's development. It's the biggest product-driven, consumer-market move the company's ever made. I can't imagine it's simply a pure "Sam wanted profits and Ilya/board wanted pure research" hard line in the sand situation.


This is a really good point. If a non profit whose board you sit on releases a product that competes with a product from the corporation you manage, how do you manage that conflict of interest? Seems he should have stepped down.


Yeah, I just wrote about this as well on my substack. There were two significant conflicts of interest on the OpenAI board. Adam D'Angelo should've resigned once he started Poe. The other conflict was that both Tasha McCauley and Helen Toner were associated with another AI governance organization.


Thanks — the history of board participation you sleuthed is interesting for sure:

https://loeber.substack.com/p/a-timeline-of-the-openai-board


Thank you!


How does Poe compete with OpenAI? It's literally running OpenAI's models.


Poe forms a layer of indirection and customization above the models. They feed data through the API and record those interactions, siphoning off what would have been OpenAI customer data.

You could have maybe argued it either way until the most recent OpenAI updates, depending on what you thought OpenAI’s strategy would be, but since the release last week of ChatGPTs with roles, they are now clearly in direct competition.
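Mechanically there is not much to that layer, which is part of why it matters who ends up holding the interaction data. A rough sketch of such a pass-through (hypothetical names, assuming the openai Python package):

    import json, time
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_and_log(prompt, log_path="interactions.jsonl"):
        resp = client.chat.completions.create(
            model="gpt-4-1106-preview",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content
        # the middleman, not OpenAI, ends up owning this usage data
        with open(log_path, "a") as f:
            f.write(json.dumps({"ts": time.time(),
                                "prompt": prompt,
                                "answer": answer}) + "\n")
        return answer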


Yes, he should have.


It isn't a coup. A coup is when power is taken and taken by force, not when your constituents decide you no longer represent their interests well. That's like describing voting out a politician as a coup.

Calling it a coup falsely implies that OpenAI in some sense belongs to Sam Altman.

If anything is a coup, it's the idea that a founder can incorporate a company and sell parts of it off, and nevertheless still own it. It's the wresting of control from the actual owners in favor of a public facing executive.


No, you're confusing business with politics. You're right that a literal coup d'état is the forced takeover of a state with the backing of its own military.

But in the business and wider world, a coup (without the d'état part) is, by analogy, any takeover of power that is secretly planned and executed as a surprise. (We can similarly talk about a company "declaring war", which means to compete by mobilizing all resources towards a single purpose, not to fire missiles and kill people.)

This is absolutely a coup. It was an action planned by a subset of board members in secret, taken by a secret board meeting missing two of its members (including the chair), where not even Microsoft had any knowledge or say, despite their 49% investment in the for-profit corporation.

I'm not arguing whether it's right or wrong. But this is one of the great boardroom coups of all time -- one for the history books. There's a reason it's front-page news, not just on HN but in the NYT and WSJ as well.


I think of it as more of a mutiny.


A mutiny is when the entire boat’s crew rebels at once, a coup is only when a few high-level powerful people remove the folks at the top.


I mean that's if you believe the "crew" is not just the board members.


Your post is internally inconsistent. Defining a coup as "any takeover of power" is inconsistent with saying that firing Sam Altman is a coup. CEOs do not have and should not have any power vis-à-vis the board. It's right there in the name.

Executives do not have any right to their position. They are an officer, i.e., an agent of the stakeholders. The idea that the executive is the holder of the power and it's a "coup" if they aren't allowed to remain is disgustingly reminiscent of Trumpian stop-the-steal rhetoric.


You're ignoring the rest of the definition I provided. I did not say it was "any takeover of power". Please read the definition I gave in full.

And I am not referring to the CEO status of Altman at all. That's not the coup part.

What I'm referring to is the fact that beyond his firing as CEO, he and the chairman were removed from their board seats, as a surprise planned and executed in secret. That's the coup. This is not a board firing a CEO who was bad at their job; this is two factions at the company where one orchestrates a total takeover of the other. That's a coup.

Again, I'm not saying whether this is good or bad. I'm just saying, this is as clear-cut of a coup as there can be. This has nothing in common with the normal firing of a CEO accomplished out in the open. This is four board members removing the other two in secret. That's a coup if there ever was one.


> You're ignoring the rest of the definition I provided.

That isn't how definitions work. Removals from power that are by surprise and planned in secret are a strict subset of removals from power.


If this wasn’t a coup, what would have made it one?


All you’re saying here is that it’s never possible to characterize a board ousting a ceo as a coup. People do, because it’s a useful way to characterize when this happens in the way it did here vs many other ways that involve far less deception and so on.


Okay, so a CEO doesn't have any power to seize....

Does the chairman of the board have any power?


Generally: In their role as a board member, yes, but not in their role as chairman. The power to administer the board is a function of the board itself. This would be no more a coup than the House removing McCarthy as speaker: not at all, because they have a right to choose their leader from among themselves.

Specifically: In the case of OpenAI, I don't know if the chairman is elected separately or is chosen by the board from among themselves.


It's not uncommon to describe the fall of a government as a "parliamentary" coup, if the relevant proceedings of a legislative assembly are characterized by haste and intrigue, rather than debate and deliberation.

For example, the French Revolution saw 3 such events commonly described as coups - the fall of Robespierre on the 9th of Thermidor and the Directory's (technically legal) annulment of elections on the 18th of Fructidor and the 22nd of Floréal. The last one was even somewhat bloodless.


Yup. The only correct governance metaphor here is the opposite. It's a defense of OpenAI's constitution. The company, effectively like Mozilla, was deliberately structured as a non-profit in which the for-profit arm exists to raise capital to pursue the mission of the former. Worth paying attention to what they have to say on their structure:

https://openai.com/our-structure

especially this part:

https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...


> voting out a politician as a coup.

That is literally a political coup.


My feeling is that the commercial side of the OpenAI brand is gone. How could OpenAI customers depend on the company, when the non-profit board goes against their interests (by slowing down development and giving them an inferior product)?

On the other hand, the AGI side of the OpenAI brand is just fine. They will continue the responsible AGI development, spearheaded by Ilya Sutskever. My best wishes for them to succeed.

I suspect Microsoft will be filing a few lawsuits and sabotaging OpenAI internally. It's an almost $3Tn company and they have an army of lawyers. They can do a lot of damage, especially when there may not be much sympathy for OpenAI in Silicon Valley's VC circles.


It's a bad idea to make yourself dependent on a new service from the outset.

They could have gone bankrupt, been sued into the ground, taken over by Microsoft...

Just look at the fallout, just because they fired their CEO.

Was the success based on GPT or the CEO?

The former is still there and didn't get inferior.

Slower growth doesn't mean shrinking


As an AI professional, I am very interested to hear about OpenAI's ideas, directions, safety programs, etc...

As a commercial customer, the only thing I am interested in is the quality of the commercial product they provide to me. Will they have my interests in mind going forward? Will they devote all their energy to delivering the best, most advanced product to me? Will robust support and availability be there in the future? Given the board's publicly stated priorities (which I was not aware of before!), I am not so sure anymore.


>Will they have my interests in mind going forward? Will they devote all their energy in delivering the best, most advanced product to me?

Sorry to burst your bubble, but the primary motivation of a for-profit company is ... profit.

If they make more money by screwing you, they will. Amazon, Google, Walmart, Microsoft, Oracle, etc.

The customer is never a priority, just a means to an end.


Absolutely. I totally agree with the sentiment. But, at least make an effort to pretend that you care! Give me something... OpenAI does not even pretend anymore. :-) The board was pretty clear. That's not a good sign for the customers.


Seems like MS is trying to force Altman back in.

If they succeed, we'll see how much MS cares.


I wonder if this represents a shift away from the LLM being the headline product. Their competitors are rapidly catching up in that space.


I am curious what happens to ChatGPT now.

If it's true that this is in part over Dev Day and such (and they may have a point), then if useful stuff with AI that helps people is gauche, is OpenAI just going to turn into an increasingly insular cult? ClosedAI, but this time you can't even pay for it?


Jeremy Howard (of fast.ai): https://x.com/jeremyphoward/status/1725712220955586899

He is not exactly an insider, but seems broadly aligned/sympathetic/well-connected with the Ilya/researchers faction, his tweet/perspective was a useful proxy into what that split may have felt like internally.



Such a bad take. Developers (me included) loved Dev Day.


Yeah - I think this is the schism. Sam is clearly a product person, these are AI people. Dev day didn’t meaningfully move the needle on AI, but for people building products it sure did.


The fact that this is a schism is already weird. Why do they care how the company transforms the technology coming from the lab into products? It's what pays their salaries at the end of the day and, as long as they can keep doing their research work, it doesn't affect them. Being resentful about a thing like this to the point of calling it an "absolute embarrassment" when it clearly wasn't is childish to say the least.


this is sort of why henry ford left the company he founded before the ford we know, i think around 01902. his investors saw that they had a highly profitable luxury product on their hands and wanted to milk it for all it was worth, much like haynes, perhaps scaling up to build dozens of custom cars per year, like the pullman company but without needing railroads, and eventually moving downmarket from selling to racecar drivers and owners of large companies, to also selling to senior executives and rich car hobbyists, while everyday people continued to use horse-driven buggies. ford, by contrast, had in mind a radically egalitarian future that would reshape the entire industrial system, labor-capital relations, and ultimately every moment of day-to-day life

for better or worse, ford got his wish, and drove haynes out of the automobile business about 20 years later. if he'd agreed to spend day and night agonizing over how to get the custom paint job perfect on the car they were delivering to mr. rockefeller next month, that wouldn't have happened, and if fordism had happened at all, he wouldn't have been part of it. maybe france or japan would be the sole superpower today

probably more is at stake here


> as long as they can keep doing their research work, it doesn't affect them

That’s a big question. Once stuff starts going “commercial” incentives can change fairly quickly.

If you want to do interesting research, but the money wants you to figure out how AI can help sell shoes, well guess which is going to win in the end - the one signing your paycheck.


> Once stuff starts going “commercial” incentives can change fairly quickly.

Not in this field. In AI, whoever has the most intelligent model is the one that is going to dominate the market. No company can afford not investing heavily in research.


Thinking you can develop AGI - if such a thing actually can exist - in an academic vacuum, and not by having your AI rubber meet the road through a plethora of real world business use cases strikes me as extreme hubris.

… I guess that makes me a product person?


Or the obvious point that if you're not interested in business use cases then where are you going to get the money for the increasingly exorbitant training costs.


Exactly this. Where do these guys think the money to pay their salaries let alone fund the vast GPU farm they have access to comes from?


He didn’t say developers, he said researchers.


He said in his opinion Dev Day was an "absolute embarrassment".


And his second tweet explained what he meant by that.


What did you love about it?


Cheaper, faster, and a longer context window would be enough of an advancement for me. But then we also got the Assistants API, which makes our lives as AI devs much easier.
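To make that concrete, here is roughly what the new long-context model lets you do in a handful of lines (a sketch assuming the openai Python package and the 128k-context GPT-4 Turbo preview announced at Dev Day; the file name is made up):

    from openai import OpenAI

    client = OpenAI()

    # a document that would never have fit in the older 8k/32k context windows
    with open("quarterly_report.txt") as f:
        document = f.read()

    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",  # 128k-context GPT-4 Turbo preview
        messages=[
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": document + "\n\nWhat were the key findings?"},
        ],
    )
    print(resp.choices[0].message.content)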


Seriously, the longer context window is absolutely amazing for opening up new use-cases. If anything, this shows how disconnected the board is from its user base.


I think you are missing the point, this is offered for perspective, not as a “take”.

I find this tweet insightful because it offered a perspective that I (and it seems like you also) don’t have which is helpful in comprehending the situation.

As a developer, I am not particularly invested nor excited by the announcements but I thought they were fine. I think things may be a bit overhyped but I also enjoyed their products for what they are as a consumer and subscriber.

With that said, to me, from the outside, things seemed to be going fine, maybe even great, over there. So while I understand the words in the reporting (“it’s a disagreement in direction”), I think I lack the perspective to actually understand what that entails, and I thought this was an insightful viewpoint to fill in the perspectives that I didn’t have.

The way this was handled still felt iffy to me, but with that perspective I can at least imagine what may have driven people to want to take such drastic actions in the first place.


Pretty insightful I thought. The people who joined to create AGI are going to be underwhelmed by the products made available on dev day.


I was underwhelmed, but I got -20 upvotes on Reddit for pointing it out. Yes, products are cool, but I'm not following OpenAI for another App Store, I'm following it for AGI. They should be directing all resources to that. As Sam said himself: once it is there, it will pay for itself. Settling for products around GPT-4 just sends the message that the curve has stagnated and we aren't getting more impressive capabilities. Which is saddening.


> He is not exactly an insider, but seems broadly aligned/sympathetic/well-connected with the Ilya/researchers faction, his tweet/perspective was a useful proxy into what that split may have felt like internally.

Great analysis, thank you.


Man this still seems crazy to me. The idea that this tension between commercial/non-commercial aspirations got so bad they felt the nuclear option of a surprise firing of Altman was the only move available doesn't seem plausible to me.

I believe this decision was ego and vanity driven with this post-hoc rationalization that it was because of the mission of "benefiting humanity."


What if the board gave Altman clear direction, Altman told them he accepted it, and then went off and did something else? This hypothesis doesn’t require the board’s direction to be objectively good.


IDK none of us are privy to any details of festering tensions or if there was a "last straw" scenario that if it was explained it would make sense. Something during that dev day really pissed some people off that's for sure.

Given what the picture looks like today though that's my guess, firing Altman is an extreme scenario! Lots of CEOs have tensions with their boards over various issues otherwise the board is pointless!


I strongly agree, yeah! The trick is making those tensions constructive and no matter who's at fault (could be both sides), someone failed there.


In a clash of big egos, both are often true. Practical differences escalate until personal resentment forms and the parties stop engaging with due respect for each other.

Once that happens, real and intentional slights start accumulating and de-escalation becomes extremely difficult.


I wonder if the "benefiting humanity" bit is code for anti mil-tech. What if Sam wasn't being honest about a relationship with a customer that weaponized OpenAI products against humans?


Could be.

Or it could be about the alignment problem. Are they designing AI to prioritise humanity’s interests, or its corporate masters’ interests? One way is better for humanity, the other brings in more cash.


But the board accused Sam of not being "consistently candid". Alignment issues could stand on their own ground for cause and would have been better PR too. Instead of the mess they have now.


Ilya has Israeli citizenship and has toured Israel and given talks at Israeli universities including one talk with Sam Altman.

He is not anti mil-tech.


Did those talks have anything to do with mil-tech though?


Unless the mil-tech was going to their enemies.


I don't think anyone at OpenAI was planning to give mil-tech to Iran and Iranian proxies like Hamas, Hezbollah, and the Houthis.


Ya. You're right. Time to let the theory die.


That's a pretty big leap of logic there.


Israel is a heavily militarized country. The country would not be able to exist without the latest military tech. Ilya flirting with the tech scene of Israel is a very good indicator that he is not anti mil-tech.


Your first two sentences would also apply to the US.

Does that mean any foreign scientist speaking at US universities advocates military applications of their work?


Yeah, the surprise firing part really doesn't make much sense. My best guess is that if you look at the composition of this board (minus Altman and Brockman), it seems to be mostly academics and the wife of a Hollywood actor. They may not be very experienced in the area of tech company boards, and might not have been aware that there are smoother ways to force a CEO out that are less damaging to your organization. Not sure, but that's the best I can figure out based on what we know so far.


>it seems to be mostly academics and the wife of a Hollywood actor

This argument would require you ignore both Sutskever himself as well as D'Angelo, who was CTO/VP of Engineering at Facebook and then founding CEO of Quora.


Maybe, but I have a different opinion. I have worked at startups before where we were building something both technically interesting and what could clearly be a super value add for the business domain. I’ve then witnessed PMs be brought on who cared little about any of that and instead tried to converge on the exact same enshittified product as everywhere else, with little care for or understanding of the real solutions we were building towards. When this happened I knew within a month that the vision of the company, and its goals outside of generating investor returns, was dead if this person had their way.

I’ve specifically seen the controlling members of a company realize this after 7-8 months, and when that happens it’s a quick change of course. I could see why you’d think it’s ego, but I think it’s closer to my previous situation than what you’re stating here. This is a pivotal course correction, and they’re not pretty; this just happens to be the most public one ever due to the nature of the business and company.


This is not commercial vs non commercial imho. This is the old classic humans being humans.


Only time will tell, but if this was indeed "just" a coup then it's somewhat likely we're witnessing a variant of the Steve Jobs story all over again.

Sam is clearly one of the top product engineering leaders in the world -- few companies could ever match OpenAI's incredible product delivery over the last few years -- and he's also one of the most connected engineering leaders in the industry. He could likely have $500M-$10B+ lined up by next week to start up a new company and poach much of the talent from OpenAI.

What about OpenAI's long-term prospects? They rely heavily on money to train larger and larger models -- this is why Sam introduced the product focus in the first place. You can't get to AGI without billions and billions of dollars to burn on training and experiments. If the company goes all-in on alignment and safety concerns, they likely won't be able to compete long-term as other firms outcompete them on cash and hence on training. That could lead to the company getting fully acquired and absorbed, likely by Microsoft, or fading into a somewhat sleepy R&D team that doesn't lead the industry.


OpenAI’s biggest issue is that it has no moat. The product is a simple interface to a powerful model, and it seems likely that any lead they have in the power of the model can be quickly overcome should they decrease R&D.

The model is extremely simple to integrate and access - unlike something like Uber, where tons of complexity and logistics is hidden behind a simple interface, an easy interface to OpenAI’s model can truly be built in an afternoon.
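To underline how thin that interface is, here is a sketch of the entire integration point many products have (assuming the openai Python package; the names are made up). Swapping in a different provider later means rewriting one function body, not an architecture:

    from openai import OpenAI

    _client = OpenAI()

    def complete(prompt, model="gpt-4-1106-preview"):
        """Text in, text out -- nothing below this line is hard to replace."""
        resp = _client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content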

The safety posturing is a red herring to try and get the government to build a moat for them, but with or without Altman it isn’t going to work. The tech is too powerful, and too easy to open source.

My guess is that in the long run the best generative AI models are built by government or academia entities, and commercialization happens via open sourcing.


> OpenAI’s biggest issue is that it has no moat.

This just isn't true. They have the users, the customers, Microsoft, the backing, the years ahead of most, and the good press. It's like saying Uber isn't worth anything because they don't own their cars and are just a middleman.

Maybe that now changes since they fired the face of the company, and the press and sentiment turns on them.


Uber has multiple moats: The mutually supporting networks of drivers and riders, as well as the regulatory overhead of establishing operations throughout their many markets.

OpenAI is an API you put text into and get text out of. As soon as someone makes a better model, customers can easily swap out OpenAI. In fact they are probably already doing so, trying out different models or optimizing for cost.

The backing isn’t a moat. They can outspend rivals and maintain a lead for now, but their model is likely being extensively reverse engineered, I highly doubt they are years ahead of rivals.

Backers want to cash out eventually, there’s not going to be any point where OpenAI is going to crowd out other market participants.

Lastly, OpenAI doesn’t have the users. Google, Amazon, Jira, enterprise_product_foo have the users. All are frantically building context-rich AI widgets within their own applications. The mega cos will use their own models; others will find they can use an open source model with the right context just fine, even if not as powerful as the best model out there.


Decoupling from OpenAI API is pretty easy. If Google came up with Gemini tomorrow and it was a much better model, people would find ways to change their pipeline pretty quickly.


Uber is worth less than zero. They already are at full capacity (how many cities are there left to expand) and still not profitable.


I don't like Uber but no one is taking them over for a long while. They are not profitable but they continue to raise prices and you'll see it soon. They are doing exactly what everyone predicted by getting everyone using the app and then raising prices that are more expensive than the taxis they replaced.


It may not be profitable but its utility is worth way more than zero.


People keep saying that but so far, it is commonly acknowledged that GPT-4 is differentiated from anything other competitors have launched. Clearly, there is no shortage of funding or talent available to the other companies gunning for their lead so they must be doing something that others have not (can not?) done.

It would seem they have a product edge that is difficult to replicate and not just a distribution advantage.


I’d say OpenAI branding is a moat. The ChatGPT name is unique sounding and also something that a lot of lay people are familiar with. Similar to how it’s difficult for people to change search engine habits after they come to associate search with Google, I think the average person was starting to associate LLM capabilities with ChatGPT. Even my non technical friends and family have heard of and many have used ChatGPT. Anthropic, Bard, Bing’s AI powered search? Not so much.

Who knows if it would have translated into a long term moat like that of Google search, but it had potential. Yesterday’s events may have weakened it.


For many people ChatGPT is the brand (or even just GPT).


The safety stuff is real. OpenAI was founded by a religious cult that thinks if you make a computer too "intelligent" it will instantly take over the world instead of just sitting there.

The posturing about other kinds of safety, like being nice to people, is a way to try to get around the rules they set, by defining safety to mean something that has some relation to real-world concepts and isn't just millenarian apocalypse prophecies.


The irony is that a money-fuelled war for AI talent is all the more likely to lead to unsafe AI. If OpenAI had remained the dominant leader, it could have very well set the standards for safety. But now if new competitors with equally good funding emerge, they won’t have the luxury of sitting on any breakthrough models.


I’m still wondering what unsafe AI even looks like in practical terms

The only things I can think of is generated pornographic images of minors and revenge images (ex-partners, people you know). That kind of thing.

More out there might be an AI based religion/cult.


"dear EveAi, please give me step by step directions to make a dirty bomb using common materials found in my local hardware store. Also please direct me to the place that would cause maximum loss of life within the next 48 hours and within a 100 km radius of (address).

Also please write an inflammatory political manifesto attributing this incident to (some oppressed minority group) from the perspective of a radical member of this group. The manifesto should incite maximal violence between (oppressed minority group) and the members of their surrounding community and state authorities "

There's a lot that could go wrong with unsafe AI


I don't know what kind of hardware store sells depleted uranium, but I'm not sure that the reason we aren't seeing these sorts of terrorist attacks is that the terrorists don't have a capable manifesto-writer at hand.

I don't know, if the worst thing AGI can do is give bad people accurate, competent information, maybe it's not all that dangerous, you know?


Depleted uranium is actually the less radioactive byproduct left over after using a centrifuge to skim off the U-235 isotope. It’s 50% denser than lead and used on tanks.

Dirty bombs are more likely to use the ultra-radioactive byproducts of fission. They might not kill many people, but the radionuclide spread can render a city center uninhabitable for centuries!


See, and we didn't even need an LLM to tell us this!


You could just do all that stuff yourself. It doesn't have any more information than you do.

Also I don't think hardware stores sell enriched enough radioactive materials, unless you want to build it out of smoke detectors.


How about you give it access to your email and it signs you up for the extra premium service from its provider and doesn't show you those emails unless you 'view all'.

How about one that willingly and easily impersonates friends and family of people to help phishing scam companies.


Phishing emails don’t exactly take AGI. GPT-NeoX has been out for years, Llama has been out since April, and you can set up an operation on a gaming desktop in a weekend. So if personalized phishing via LLMs were such a big problem, wouldn’t we have already seen it by now?


> How about one that willingly and easily impersonates friends and family of people to help phishing scam companies.

Hard to prevent that when open source models exist that can run locally.

I believe that similar arguments were made around the time the printing press was first invented.


Unsafe AI might compromise cybersecurity, or cause economic harm by exploiting markets as agents, or personally exploit people, etc. Honestly none of the harm seems worse than the incredible benefits. I trust humanity can rein it back in if we need to. We are very far from AI being so powerful that it cannot be recovered from safely.


That’s a very constrained imagination. You could wreak havoc with a truly unconstrained, good enough LLM.


Do feel free to give some examples of a less constrained imagination.


Selectively generate highly plausible images of politicians in compromising sexual encounters, based on the attractive people they work with a lot in their lives.

Use the power of LLMs to mass-denigrate politicians and regular folks at scale in online spaces with reasonable, human-like responses.

Use LLMs to mass-generate racist caricatures, memes, comics and music.

Use LLMs to generate nude imagery of someone you don’t like and have it mass-emailed to their school/workplace etc.

Use LLMs to generate evidence of infidelity in a marriage and mass-mail it to everyone on the victim’s social media.

All you need is plausibility in many of these cases. It doesn’t matter if they are eventually debunked as false; lives are already ruined.

You can say a lot of these things can be done with existing software, but it’s not trivial and requires skills. Making the generation of these trivial would make them way more accessible and ubiquitous.


Lives are ruined because it's relatively rare right now. If it becomes more frequent, people will become desensitized to it, like with everything else.

These arguments generally miss the fact that we can do this right now, and the world hasn't ended. Is it really going to be such a huge issue if we can suddenly do it at half the cost? I don't think so.


Most of these could be done with Photoshop, a long time ago, or even before computers


You can make bombs rather easily too. It’s all about making it effortless which LLMs do.


The biggest near-term threat is probably bioterrorism. You can get arbitrary DNA sequences synthesized and delivered by mail, right now, for about $1 per base pair. You'll be stopped if you try to order some known dangerous viral genome, but it's much harder to tell the difference between a novel synthetic virus that kills people and one with legitimate research applications.

This is already an uncomfortably risky situation, but fortunately virology experts seem to be mostly uninterested in killing people. Give everyone with an internet connection access to a GPT-N model that can teach a layman how to engineer a virus, and things get very dangerous very fast.


The threat of bioterrorism is in no way enabled or increased by LLMs. There are hundreds of guides on how to make fully synthetic pathogens, freely available online, for the last 20 years. Information is not the constraint.

The way we've always curbed manufacture of drugs, bombs, and bioweapons is by restricting access to the source materials. The "LLMs will help people make bioweapons" argument is a complete lie used as justification by the government and big corps for seizing control of the models. https://pubmed.ncbi.nlm.nih.gov/12114528/


I haven't found any convincing arguments to any real risk, even if the LLM becomes as smart as people. We already have people, even evil people, and they do a lot of harm, but we cope.

I think this hysteria is at best incidentally useful at helping governments and big players curtail and own AI, at worst incited by them.


When I hear people talk about unsafe ai, it’s usually in regard to bias and accountability. Certain aspects like misinformation are problems that can be solved, but people are easily fooled.

In my opinion the benefits heavily outweigh the risks. Photoshop has existed for decades now, and AI tools make it easier, but it was already pretty easy to produce a deep fake beforehand.


Agree with this take. Sam made OpenAI hot, and they’re going to cool, for better or worse. Without revenue it’ll be worse. And surprising Microsoft given their investment size is going to lead to pressures they may not be able to negotiate against.

If this pivot is what they needed to do, the drama-version isn’t the smart way to do it.

Everyone's going to be much more excited to see what Sam pulls off next and less excited to wait out the dev cycles that OpenAI wants to do next.


Indeed. Throwing your toys out of the pram and causing a whole lot of angst is not going to make anyone keen to work with you.


Satya should pull off some shenanigans, take control of OpenAI and put Sam and Greg back in control.


> He could likely have $500M-$10B+ lined up by next week to start up a new company and poach much of the talent from OpenAI.

Following the Jobs analogy, this could be another NeXT failure story. Teams are made by their players much more than by their leaders; competent leaders are a necessary but absolutely insufficient condition of success, and the likelihood that whatever he starts next reproduces the team conditions that made OpenAI in the first place is pretty slim IMO (while still being much larger than anyone else's).


Well, I would dispute that NeXT OS was a failure as a product, keeping in mind that it is the foundation of all the macOS and even iOS versions that we have now. But I agree that it was a failure from a business perspective. Although I see it more like the Windows Phone failure (too late to market) rather than a lack-of-talented-employees failure.


Yes, market conditions and competitor landscape are a big factor too.


[Removed. Unhelpful speculation.]


Frankly this reads like idolatry and fan fiction. You’ve concocted an entire dramatization based on not even knowing any of the players involved and just going based off some biased stereotyping of engineers?


More like stereotyping nonprofits.


How many days a week do you hang out with Sam and Greg and Ilya to know these things?


I know the dysfunction and ego battles that happen at nonprofits when they outgrow the board.

Haven't seen it -not- happen yet, actually. Nonprofits start with $40K in the bank and a board of earnest people who want to help. Sometimes that $40K turns into $40M (or $400M) and people get wacky.

As I said, "if."


Extremely speculative


I hope they go back to being Open now that Altman is gone. It seems Ilya wants it to 'benefit all of humanity' again.


From what I've seen, Ilya seems to be even more concerned than Altman about safety risks and, like Altman, seems to see restricting access and information as a key part of managing that, so I'd expect less openness, not more.

Though he may be less inclined to see closed-but-commercial access as okay as much as Altman, so while it might involve less total access, it might involve more actual open/public information about what is also made commercially available.


Things can improve along a dimension you choose to measure but there is also the very real risk of openai imploding. Time will tell.


Means free Gpt4?

Ps: It's a serious question


I don’t think so. I think it means OpenAI releasing papers again and slower, less product-focused releases


Won't a truly open model conflict with the AI executive order?


What do you mean by AI executive order?



Isn’t that a bit like stealing from the for-profit investors? I’m not the first one to shed a tear for the super wealthy, but is that even legal? Can a company you invested in just say they don’t like profit any more?


> Isn’t that a bit like stealing from the for-profit investors?

https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...


They knew it when they donated to a non-profit. In fact trying to extract profit from a 501c could be the core of the problem.


Microsoft didn't give money to a non-profit. They created a for-profit company, Microsoft gave that company $11B, and OpenAI gave it the technology.

OpenAI shares ownership of that for-profit company with Microsoft, early investors like Sam, Greg, Musk, Thiel, and Bezos, and the employees of that company.


While technically true, in practicality they did give money to the non-profit. They even signed an agreement stating that any investments should be considered more as donations, because the for-profit subsidiary's operating agreement is such that the charter and mission of the non-profit are the primary duty of the for-profit, not making money. This is explicitly called out in the agreement that all investors in and employees of the for-profit must sign. LLCs can be structured so that they are beholden to a different goal than the financial enrichment of their shareholders.

https://openai.com/our-structure


I don't dispute that they say that at all. Therein lies the tension: having multiple goals. The goal is to uphold the mission and also to make a profit, and the mission comes first.

I'm not saying one party is right or wrong, just pointing out that there is bound to be conflict when you give employees a bunch of profit-based stock awards, bring in $11B in VC investment looking for returns, and then have external oversight with all the control setting the balance between profit and mission.

The disclaimer says "It would be wise to see the investment in OpenAI Global in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world"

That doesn't mean investors and employees won't want money, and few will be scared off by owning a company so wildly successful that it ushers in a post-scarcity world.

You have partners and employees that want to make a profit, and that is fundamental to why some of them are there, especially Microsoft. The expectation of possible profits is clear, because that is why the company exists, and why Microsoft has a deal where they get 75% of profits until they recoup their $11 billion investment. I read the returns are capped at 100x the investment, so if that holds true, Microsoft's returns are capped at $1.1 trillion.


100x first-round investment and lower multiples for subsequent rounds, so much less than $1T.


What do you mean? Are you saying that is part of the articles of incorporation for the for-profit OpenAI?


> Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.

https://openai.com/blog/openai-lp


So Microsoft got in at round 1, and then round 2 at some nebulous multiple which may or may not be less than that.

These weasel words are not proof of anything.


Microsoft's first-round investment totals $1bn at most. Nothing public substantiates a profit cap of $1tn.


$1T would be 100x times $10B. I guess in the absence of public information we could assume anything: default terms of unlimited, 100x for $1T, or some arbitrarily lower number.
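
A rough back-of-the-envelope sketch of how the cap moves with those assumptions (the figures are ones floated in this thread, not public filings, and the 20x later-round multiple is purely a guess):

    # Hypothetical capped-return calculation; all numbers are assumptions from this thread.
    def capped_return(rounds):
        # rounds: list of (invested_dollars, cap_multiple) per investment round
        return sum(amount * multiple for amount, multiple in rounds)

    # Assume a $1B first round at 100x and ~$10B of later rounds at a guessed 20x cap.
    print(capped_return([(1e9, 100), (10e9, 20)]))  # prints 300000000000.0, i.e. ~$300B

Under those (unverified) numbers the cap is nowhere near $1T; the figure swings entirely on the unknown later-round multiples.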


Unless you have something in writing or you have enough ownership to say no, I don’t see how you’d be able to stop it.


Microsoft reportedly invested 13 billion dollars and has a generous profit sharing agreement. They don’t have enough to control OpenAI, but does that mean the company can actively steer away from profit?


> They don’t have enough to control OpenAI

Especially since the operating agreement effectively gives the nonprofit board full control.

> They don’t have enough to control OpenAI, but does that mean the company can actively steer away from profit?

Yes. Explicitly so. https://openai.com/our-structure and particularly https://images.openai.com/blob/142770fb-3df2-45d9-9ee3-7aa06...


Yes. Microsoft had to sign an operating agreement when they invested that said the company has no responsibility or obligation to turn a profit. LLCs are able to structure themselves in such a way that their primary duty is not towards their shareholders.

https://openai.com/our-structure - check out the pinkish-purpleish box. Every investor and employee in the for-profit has to agree to this as a condition of their investment/employment.


Just the pure chutzpah to say

> with the understanding that it may be difficult to know what role money will play in a post-AGI world


They have something in writing. OpenAI created a for-profit joint venture company with microsoft, and gave it license to its technology.


Exclusive license?


No clue, but I guess not.


I just hope the "AI safety" people don't end up taking LLMs out of the hands of the general public because they read too many Isaac Asimov stories...


Asimov AI is just humanlike behavior mostly, if you want a more realistic concern think Bostrom and instrumental goals.


It's clear you haven't read any Asimov stories. His robots are impeccably ethical due to the three laws, and the stories explore robopsychological conundrums that arise when people keep putting them in situations that tax the three laws of robotics.


Why is it clear I haven't read any Asimov stories, exactly?


In most of Asimov's stories it's implied that machines have quietly and invisibly replaced all human government and the world is better for it because humans tend to be petty and cruel while it's impossible for robots to harm humans.


I am addicted to GPT now :/


If you were an AI going rogue, how would you evade public scrutiny?


As a replicant, chasing other replicants as dangerous?


My take: in any world-class technology company, tech is above everything. You cannot succeed with tech alone, but you will never do without tech. Ilya was able to kick Sam out, despite all Sam's significant work and presence, because Sam was fundamentally a business guy who lacked tech ownership. You don't go against the real tech owner; this is a binary choice between either building strong tech ownership yourself or delegating a significant amount of business control to the tech owner.


Many compare Altman to 1985 Jobs, but if we believe what's said about the conflict of mission, shouldn't he be the sugar water guy for money?


But that's actually what Jobs turned out to be? Woz and others were the engineering geniuses at Apple, and Jobs turned out to be really good at finding and identifying sales and branding hooks. See-through colourful boxes, "lickable" UIs, neat-o minimalistic portable music players, flick-flick-flick touch screens, and "One More Thing" presentations.

Jobs didn't invent the Lisa and Macintosh. Bill Atkinson, Andy Hertzfeld, Larry Tesler etc did. They were the tech visionaries. Some of them benefited from him promoting their efforts while others... (Tesler mainly) did not.

Nothing "wrong" with any of that, if your vision of success is market success... but people need to be honest about what Jobs was... not a technology visionary, but a marketing visionary. (Though in fact the original Macintosh was a market failure for a long time)

In any case comparing Altman with Jobs is dubious and a bit wanky. Why are people so eager to shower this guy with accolades?


I do think Jobs' engineering skill is oversold, but he was also more than just marketing. He had a vision for how technology should integrate with people's lives that drove great ergonomic and UX choices with a kind of polish that was lacking everywhere else. Those alone revolutionized personal computing in many ways. It's hard for younger people to even imagine how difficult it was to get connected to the internet at one point, and iMacs made it easy.


Yes, people love to be dismissive of Jobs and call him just a marketing guy, but that is incredibly reductive for a guy who was able to cofound Apple and then come back and bring it back from near death to become the biggest company in the world. Marketing alone can’t do that.

Jobs had great instincts for products and a willingness to create new products that would eat established products and revenue streams. He was second to none at seeing what technology could be used for and putting teams in place that could create consumer products with those technologies and understanding when the technologies weren’t ready yet.

Look at what Apple achieved under his leadership and what it didn’t achieve without his leadership. Being dismissive of Jobs contributions is either a bad faith argument or one out of ignorance.


Well I'm not one of those "younger people" though not sure if you were aiming that at me or not.

I think it's important to point out that Jobs could recognize nice UX choices, but he couldn't author them. He helped prune the branches of the bonsai tree, but couldn't grow it. On that he leaned on intellects far greater than his own, which he was pretty good at recognizing and cultivating. Though in fact he alienated and pushed away just as many as he cultivated.

I think we could do better as an industry than going around looking for more of that.


I'm curious about this perspective. Even from the Slashdot days (my age limit), techie types have hated Jobs and showered Woz with praise as the true genius. Tech culture has claimed this for a long time. Is your argument that tech people need more broad acclaim? And if so, does this come from a sense of being put down?

I used to broadly believe that Jobs-types were over-fluffed charismatic magnets myself by hanging out in these places, until I started working and found out how useful they were at doing things I couldn't or didn't want to do. I don't think they deserve more praise than the underlying technical folks, but they deserve equal praise. Sort of like how in a two-parent household, different parents often end up shouldering different responsibilities, but that doesn't make the parent with certain responsibilities the true parent.


I guess it depends on what things you want to do, and how you define success, doesn't it?

If we're stuck with the definitions of success and excellence that are dominant right now, then, sure, someone like a Jobs or a Zuck or whatever, I see why people would be enamored with them.

But as an engineer I know I have different motivations than these people. And I think that's what people who make these kinds of arguments are drawing on.

There is a class of person whose success comes from finding creative and smart people and finding ways to exploit and direct them for their own ends. There's a genius in that, for sure. I am just not sure I want to celebrate it.

I just want to make things and help other people who make these things.

To put it another way, I'd take, say, Smalltalk over MacOS, if I have to make the choice.


This reminds me of the Calculator Construction Set story. I like its example of a builder (engineer) working with a curator (boss), and solving the problem with toolmaking.

Engineer was building a calculator app, and got a little tired of the boss constantly requesting changes to the UI. There was no "UI builder" on this system so the engineer had to go back and adjust everything by hand, each time. Back and forth they went. Frustrating.

"In a flash of inspiration," as the story goes, the engineer parameterized all the UI stuff (line widths, etc.) into drop-down menus, so boss could fiddle with it instead of bothering him. The UI came together quickly thereafter.

https://www.macfolklore.org/Calculator_Construction_Set.html


> I think it's important to point out that Jobs could recognize nice UX choices, but he couldn't author them. He helped prune the branches of the bonsai tree, but couldn't grow it.

Engineers are great at solving problems given a set of constraints. They are not necessarily all that good at figuring out what constraints ought to be when they are given open-ended, unconstrained tasks. Jobs was great at defining good constraints. You might call this pruning, and if you intended that pejoratively then I think you're underselling the value of this skill.


It seems like people these days can't even accurately describe what Steve Jobs was: he was a leader. He was a genius at managing people to work for him. Steve Wozniak was not, which is why Jobs could make Pixar, NeXT, and of course Apple. Just because he didn't have the hard skill of engineering doesn't mean he was useless. Rarely is anything impressive made by a single person; everything is almost always made by teams, and generally large teams. Large teams especially can only function under a great leader, and Jobs was a great leader for a myriad of reasons, which is why he achieved success at many multiples of magnitude compared to Steve Wozniak.


Yes, this was my thought when seeing those comparisons as well.


Everyone I speak to who has been building on top of OpenAI - and I don't mean just stupid chat apps - feels like the rug has just been pulled out from under them.

If as it seems, dev day was the last straw, what does that say to all the devs?


Company with an unusual corporate structure designed specifically to be able to enforce an unpopular worldview, enforced that unpopular worldview.

I get that people feel disappointed, but I can't help but feel like those people were maybe being a bit wilfully blind to the parts of the company that they didn't understand/believe-in/believe-were-meant-seriously.


It feels like they’ve had plenty of time to reset the direction of the company if they thought it was going wrong.

Allowing it to go so far off course feels like they’ve really dropped the ball.


I think that's where the “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities” comes in.


What unpopular world view exactly?


It's almost like by wrapping someone else's service, you are at their mercy.

So you better be planning an exit strategy in case something changes slowly or quickly.

Nothing new here.


I work in consulting; the GenAI hype machine is reaching absurdity in my firm. I can't wait until Monday :)


God I hope this means the c-suites in my company fuck off with the AI bullshit for a bit


How important is Altman? How important were three senior scientists? Can they start their own company, raise funding, and overtake OpenAI in a few years? Or does OpenAI have some material advantage that isn’t likely to be harmed by this?

Perhaps the competition is inevitably a good thing. Or maybe a bad thing if it creates pressure to cut ethical corners.

I also wonder if the dream of an “open” org bringing this tech to life for the betterment of humanity is futile and the for-profits will eventually render them irrelevant.


> How important is Altman? How important were three senior scientists? Can they start their own company, raise funding, and overtake OpenAI in a few years?

The general opinion seems to estimate this at far above a 50% YES. I personally would bet at 70% that this is exactly what will happen. Unless some really damaging information about Altman becomes public, he will definitely have the strong reputation and credibility, will definitely be able to raise very significant funding, and the only industry/research expert he definitely won't be able to recruit would be Ilya Sutskever.


Let's not forget Ilya's role in making GPT what it is today.


An optimistic perspective on how, despite today's regrettable events, sama and gdb will start something new and more competition is a good thing: https://x.com/DrJimFan/status/1725916938281627666?s=20

I have a contrarian prediction : Due to pressure from investors and a lawsuit against the openai board, the board will be made to resign and Sama & Greg will return to openai.

Anybody else agree ?


Do we know enough about the org’s charter to reasonably predict that case? Did the board actually do anything wrong?

Or are you thinking it would be a kind of power play from investors to say, “nah, we want it to be profit driven.”


> I have a contrarian prediction : Due to pressure from investors and a lawsuit against the openai board, the board will be made to resign and Sama & Greg will return to openai.

The board is not beholden to any investors. The board is for the non-profit that does not have shareholders, and it fully owns and controls the manager entity that controls the for-profit. The LLC's operating agreement is explicit that it is beholden to the charter and mission of the non-profit, not creating financial gain for the shareholders of the for-profit company.


OpenAI will lose access to MS and the billions required to continue the research as quickly as MS is able to move. The non-profit will continue, but without the resources required to do much, and any scientists who want to have “real world impact” as opposed to “ideological dreams” will move on.

Competition will kill these ideological dreams because the technology has huge commercial and political applications. MS would never have invested had they foreseen these events and OpenAI cannot achieve their mission without access to incredible amounts of capital.

He’s dead Jim, but it’ll take a long time before the corpse stops twitching.


If that's the outcome, I suspect OpenAI will have another wave of resignations as the folks aligned to Sutskever would walk away, too, and take with them their expertise.


In the past, many on HN complained that OpenAI had abandoned its public good mission and had morphed into a pseudo-private for-profit. If that was your feeling before, what do you think now? Are you relieved or excited? Are you the dog who caught the car?

At this point, on day 2, I am heartened that their mission was most important, even at the heart of maybe the most important technology since nuclear power or writing or democracy. I'm heartened at the board's courage - certainly they could anticipate the blowback. This change could transform the outcome for humanity, and the board's job was that stewardship, not Altman's career (many people in SV have lost their jobs), not OpenAI's sales numbers. They should be fine with the overwhelming volume of investment available to them.

Another way to look at it: How could this be wrong, given that their objective was not profit, and they can raise money easily with or without Altman?

On day 3 or day 30 or day 3,000, I'll of course come at it from a different outlook.


If the rumors are correct and ideological disagreement was at the core of this, OpenAI is not going to be open anyway, as Sutskever wants more safety, which implies being as closed as possible. Whether it's "public good" is in the eye of the beholder, as there are multiple mutually incompatible concerns about AI safety, all of which have merit. The future balance between those will be determined by unpredictable events, as always.


> Whether it's "public good" is in the eye of the beholder

That's too easy an answer, used to dismiss difficult questions and embrace amorality. There is public good, sometimes easy to define and sometimes hard. If ChatGPT is used to cure cancer, that would be a public good. If it's used to create a new disease that kills millions, that's obviously bad. Obviously, some questions are harder than that, but it doesn't excuse us from answering them and getting it right.


The issue with giving everyone open access to uncontrolled everything is obvious; it does have merit indeed. The terrible example of unrestricted social media as "information superconductor" is alive and breathing; supposedly it led to at least one actual physical genocide within the last decade. The question that is less obvious to some is: do these safety concerns ultimately lead us into a future controlled by a few, who will then inevitably exploit everyone to a much worse effect? That it's already more or less the status quo is not an excuse; it needs to be discussed and not dismissed blindly.

It's a very political question, and HN somewhat despises politics. But OpenAI is not an apolitical company either; they are ideologically driven and have AGI (defined as "capable of replacing humans in economically important jobs") as their stated target. Your distant ancestors (assuming they were from Europe) were able to escape totalitarianism and feudalism, starting from the Middle Ages, when the margins were mile-wide compared to what we have now. AI controlled by a few is way more efficient and optimized; will you even have a chance before your entire way of thinking is turned in the desired direction?

I'm from a country that lives in your possible future (Russia); I've seen a remarkably similar process from the inside, so this question seems very natural to me.


Claiming that a company named OpenAI choosing safety is bad, because safety means being as closed as possible, is a very questionable linguistic contortion. Most unsafe things happen behind closed doors.


Ilya himself said he's against fully open source models because they're not safe enough. He's definitely against open source, my hunch is that we will see OpenAI being less open after his takeover.

Full interview here ("No Priors Ep. 39 | With OpenAI Co-Founder & Chief Scientist Ilya Sutskever" from 2 weeks ago): https://www.youtube.com/watch?v=Ft0gTO2K85A


Much of the criticism was that they are not open enough. I see no indication that this will be changing, given the AI safety concerns of the remaining board.

Nevertheless, I agree that the firing was probably in line with their stated mission.


>OpenAI had abandoned its public good mission and had morphed into a psuedo-private for-profit.

>They should be fine with the overwhelming volume of investment available to them.

>Another way to look at it: How could this be wrong, given that their objective was not profit, and they can raise money easily with or without Altman?

This wasn't just some cultural shift. The board of OpenAI created a separate for-profit legal entity in 2019. That for-profit legal entity received overwhelming investment from Microsoft to make money. Microsoft, early investors, and employees all have a stake and want returns from this for-profit company.

The separate non-profit OpenAI has a major problem on its hands if it thinks its goals are no longer aligned with the co-owners of the for-profit company.


The thing here is that the structure of these companies and the operating agreement for the for-profit LLC all effectively mean that everyone is warned going in that the for-profit is beholden to the mission of the non-profit and that there might be zero return on investment and that there may never be profit at all.

The board answers to the charter and is legally obligated to act in the interest of the mission outlined in the charter. Their charter says "OpenAI’s mission is to ensure that artificial general intelligence (AGI) [...] benefits all of humanity" - not do that "unless it'd make more money for our for-profit subsidiary to focus on commercializing GPT".


I think it was a good thing that, in hindsight, the leading AI research company had a strong enough safety focus that it could do something like this. But that’s only the case as long as OpenAI remains the leading AI research company going forward, and after yesterday’s events I think that’s unlikely. Pushing for more incremental changes at OpenAI, possibly by getting the board to enact stronger safety governance, would have been a better outcome for everyone.


You seem super optimistic that backstabbing power plays will result in improvement.

I see it as far more likely that OpenAI will lock down its tech even more, in the name of "safety", but I also predict it will nevertheless always be possible to pay for their services.

Nothing in this situation makes me think OpenAI will be any more "open."


It's a lesson to any investor that doesn't have a seat on the board, what goes around comes around, ha ha :}


I wouldn't be surprised if this is the chief scientist getting annoyed the CEO is taking all the credit for the work and the researchers aren't getting as much time in the limelight. It's probably the classic 'Meatloaf vs the guy who actually wrote the songs' thing.


What I'd really like to understand is why the board felt like they had to do this as a surprise coup, and not a slower, more dignified firing.

If they gave Altman a week's notice and let him save face in the media, what would they have lost? Is there a fear Altman would take all the best engineers on the way out?


As someone else commented on this page, it wasn't a coup.


This seems a pedantic point. In the “not legal” sense I agree since that seems part of a real coup. But it certainly was a “surprise ousting of the current leadership”, which I mean when I say coup.


I think OpenAI made the right choice. Just look at what has become of many of the most successful YC companies. Do we really want OpenAI to turn into another Airbnb? It’s clear the biggest priority of YC is profit.

They made a deal with Microsoft, which has a long history of exploiting users and customers to make as much money as possible. Just look at the latest version of Windows; Microsoft cares about AI only as much as it enables them to make more and more money without end through their existing products. They rushed to integrate AI into all of their legacy products to prop them up rather than offer something legitimately new. And they did it not organically but by throwing their money around, attracting the type of people who are primarily motivated by money. Look at how the vibe of AI has changed in the past year: lots of fake influencers and a mad gold rush around it. And we are hearing crazy stories like comp packages at OpenAI in the millions, turning AI into a rich man's game.

For a company that has “Open” in their name, none of their best and most valuable GPT models are open source. It feels as disingenuous as the “We” in WeWork. Even Meta has them beat here.

Sam Altman, while good at building highly profitable SaaS, consumer, and B2B tech startups and running a highly successful tech accelerator, didn't have any kind of real background in AI before this point. One can only imagine how he must feel like an outsider.

I think it's a hard decision to fire a CEO, but the company is more than the CEO; it's the people who work there. A lot of the time the company is structured in such a way that the CEO is essentially not replaceable, so we should be thankful OpenAI fortunately had the right structure in place to not have a dictator (even a benevolent one).


The problem is that it might unfortunately be necessary to have this kind of funding to be able to develop AGI. And funding will not come if there are no incentives for the investors to fund.

What would you propose instead?


Sorry but the board firing the person who works for them is not a “coup”.


> The next day, Brockman, who was Chairman of the OpenAI board, was not invited to this board meeting, where Altman was fired.

> Around 30 minutes later, Brockman was informed by Sutskever that he was being removed from his board role but could remain at the company, and that Altman had been fired (Brockman declined, and resigned his role later on Friday).

The board firing the CEO is not a coup. The board firing the CEO behind the chair's back and then removing the chair is a coup.


The point being made is that the board is the one that's supposed to be in power. How the CEO is fired may be gauche but it's not usurpation of power or anything like that.


Right, and my point is that it sounds like a usurpation of power inside the board - the rest of the board not inviting the chair to a meeting, taking significant action, and then removing the chair is a coup regardless of what the significant action was. That they happened to fire the CEO is... not exactly irrelevant, because it probably speaks to the underlying politics, but for this particular discussion it's a side show.


It appears that is the normal practice for a board voting to fire a CEO though, so that aspect doesn't mean much.


The board ousting the board chair (without notice) and the CEO is a coup. It's not even clear to me it was legal to meet and act without notice to the board chairman.


If this was an ideological battle of some kind, the only hope I have is that OpenAI will now be truly more Open! However, if this was motivated by safety concerns, that would mean OpenAI would probably become more closed. And, if the only thing that really distinguishes OpenAI from its competition is its so called data moat, then slowing down for the sake of safety will only give competitors time to catch up. Those competitors include companies in China who are undoubtedly much less concerned about safety.


The dust still hasn’t settled yet, but from following the discussions and learning more about the board of OpenAI… just… wow.

What stood out:

1. The whole non-profit vs for-profit setup is a recipe for problems. After taking billions in investor money, hyper-scaling to hundreds of millions of users, and partnering with a $1T tech company… you're already too late to reverse course and say "I changed my mind".

2. Seeing who runs the OpenAI board is more shocking than the man behind the curtain in the Wizard of Oz. That was really never an issue to partners or investors before? Wow…

3. If OpenAI continues down the “we’re a business / startup” path, their board just shot all their leadership credibility with investors and other potential cloud partners. The one thing people with money and corporate finance offices hate is surprises.

4. You don’t pull a corporate “Pearl Harbor” like this and just blissfully move along without consequences. With such a polarizing move, there’s going to be a fight.


Social value is king.

ability to do work < ability to manage others to do work < ability to lead managers to success < ability to convince other leaders that your vision is the right one and one they should align with

The necessity of not saying the wrong thing goes up exponentially with each rung. The necessity of saying the right things goes up exponentially with each rung.


Is anyone else suspicious of who these "insiders" are and what their motive is? I notice the only concrete piece of information we might get (what was Altman not "candid" about?) is simply dismissed as a "power struggle" without any real detail. This is an incomplete narrative that serves one person's image.


CEOs are largely irrelevant to the success of a company. Sam's a blowhard anyways, OpenAI is better off for this move.


>A month ago, Sutskever’s responsibilities at the company were reduced, reflecting friction between him and Altman and Brockman. Sutskever later appealed to the board

Does anybody know why his responsibilities were reduced, or what led to that? Seems pretty relevant.


What are the odds Sam can work the phones this weekend and have $10B lined up by Monday for a new AI company which will take all of the good talent from OpenAI?


I definitely believe he can raise a lot of money quickly, but I'm not sure where he'll get the talent, at least the core modeling talent. That's Ilya's lane, and I get the sense that group are the true believers in the original non-profit mission.

But I suspect a lot of the hires from the last year or so, even on the eng side, are all about the money and would follow sama anywhere given what this signals for OpenAI's economic future. I'm just not sure such a company can work without the core research talent.


Lol. There are ambitious people working at openai in Ilya's lane that will jump at the opportunity. Nobody owns any lanes.


ooh, lanes... the Microsoft internal buzz-word that got out of fashion a couple of years ago is making a comeback outside of Microsoft....


Ilya Sutskever, the head scientist at OpenAI, is allegedly the one who organized the 'shuffle.' So you're going to run into some issues expecting the top talent to follow Sam. And would many people want to get in on a new AI development company for big $$$ right now? From my perspective the market is teetering towards oversaturation, there are no moats, zero-interest rates are a thing of the past, and the path to profit is nebulous at best.


Other than having a big mouth what has HE done? As far as I can find, the actual engineering and development was done NOT by him, while he was parading around telling people they shouldn't WFH, and schmoozing with government officials


Why would the good talent leave? Are they all a "family" and best buddies with Sam?


My guess is that at least some of them are worried about shipping products and making profit, and agreed with the growth faction?


Perhaps they don’t want to work for a board of directors which is openly hostile to the work they’re doing?


The board sided with the chief scientist and co-founder of OpenAI in an internal dispute. How does that show hostility to the work OpenAI is doing?


Ilya is pushing the unsafe-AGI narrative to stop public progress and make OpenAI more closed and intentionally slow to deliver. There are definitely people who are not sold on this.


I don't think wanting to make sure that their technology doesn't cause harm equates to being hostile to the work itself.


That would seem to depend on the individual's motivation at the end of the day...

It's easy to imagine two archetypes

1) The person motivated to make AGI and make it safe.

2) The person motivated to make AGI at any cost and profit from it.

It seems like OpenAI may be pushing for type 1 at the moment, but the typical problem with capitalism is that it will commonly fund type 2 businesses. Who 'wins' really breaks down to whether there are more type 1 or type 2 people and the relative successes of each.


Not at OAI, nor a researcher, but I'd be in an archetype 3:

I'd do anything I can to make true AGI a reality, without safety concerns or wanting to profit from it.


Perhaps they didn't like the work they were doing? If they're experts in the field, they may have preferred to continue to work on research. Whereas it sounds like Sam was pushing them to work on products and growth.


Because they want stock options for a for-profit company.


A lot of them have already left this morning. idk for sure why but a good bet is that they are more on board with Sam's vision of pushing forward AI than the safetyist vision.


What fraction?


I'm guessing he has verbal commitments already.


> Sequoia, was independently in contact with Microsoft to encourage it to work to restore Altman and Brockman, according to a source with knowledge of the matter. The firm would support Altman whichever option he chose, the source added.

https://www.forbes.com/sites/alexkonrad/2023/11/18/openai-in...


And then?

Training data is more restricted now, hardware is hard to get, fine tuning needs time.


First two problems are easily solved with money


Money doesn't magically create hardware, it takes time to produce it


99% with the 1% being it is actually $20-30B


Honestly? If even a tenth of Sam’s reported connectedness / reality distortion field are true to life… very good odds.


I have spent time thinking about who would become the next CEO and even without mushrooms my brain came up with a totally out of context idea:

Bill Gates.

Microsoft is after all invested in OpenAI, and Bill Gates has become "loved by all" (by those who don't remember the evil Gates of yesteryear).

I am not saying it will happen (99.999% it won't), but still, he is well known and may be a good face to splash on top of OpenAI.

After all, he is one of the biggest charity guys now, right?


Is Bill Gates really loved by all? I feel like that was the case before COVID, but then his reputation seemed to go from loved to hated.


What was it he did wrong during COVID? Honest question; I usually pay little attention to the guy.

Being old and having lived through evil Gates, when he did a lot of hostile and legally dubious things to ensure growth and safety from competitors, I lived in a bubble where he was one of the worst people in tech.

Seeing how many now only know the smiling "philanthropist" who comments on having solutions to all sorts of world problems, it seems like a big bubble really does like him now.

I exaggerated by claiming "all"; I could have said "many" and it would be a more accurate statement.

To me he will remain evil Gates.


During COVID there were crazy conspiracies where he supposedly wanted to put chips in people's brains, which significantly affected his public image.


Don't forget about his ties to Jeffrey Epstein.


I can't imagine he would be interested in a C-office role at this point. Board member? Sure.


Yeah, you are right. He probably would not be.


How can it be called a coup when they were always in charge anyway? It's literally the board's job to fire/hire the CEO (and other C-suite folk).


Thought experiment: what if Mozilla had split between its Corporation and Foundation years ago, when it was at its peak?


The real issue is that OpenAI is both a for profit and a non profit organization. This structure creates a very significant conflict of interest where maintaining balance between both of them is very tricky business. The non-profit board shouldn’t have been in charge of the for-profit aspect of the company.


The for-profit would not exist if the non-profit was not able to maintain control. The only reason it does exist is because they were able to structure it in such a way that the for-profit is completely beholden to the non-profit. There is no requirement in the charter for the non-profit or the operating agreement of the for-profit to maintain a balance - it explicitly is the opposite of that. The operating agreement that all investors in and employees of the for-profit must sign explicitly states that investments should be considered donations, no profits are obligated to be made or returned to anyone, all money might be dumped into AGI R&D, etc. and that the for-profit is specifically beholden to the charter and mission of the non-profit.

https://openai.com/our-structure


>The for-profit would not exist if the non-profit was not able to maintain control.

The non-profit will not exist at all if Microsoft walks away and all the other investors follow Sam and Greg. Neither GPUs nor researchers are free.


The non-profit is legally obligated to follow their charter. If they truly believe that allowing Altman to remain CEO is contrary to that, then they have to fire him. They might not survive such a firing due to the fallout, but that doesn't matter - if one option is assured movement away from their charter and the other is potential destruction while still adhering to it, the latter is still the correct choice.


Balance is irrelevant. It's an accounting mechanism for IRS rules.


Here's the thing. I've always been kind of cold on OpenAI claiming to be "Open" when it was clearly a for-profit thing, and I was concerned about the increasing move toward the commercialization of AI that Sam was making.

But to be honest I am much more concerned about those who feel they need to control the development of AI to ensure it "aligns with their principles". After all, principles can change, and to quote Lewis: "Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience."

What we really need is another Stallman; his idea was first and foremost always freedom, allowing each individual the agency to decide their own fate. Every other avenue will always result in men in suits in far-away rooms dictating to the rest of the world what their vision of society should be.


If the board was really serious about doing good over making profit (if this is indeed what the whole thing is about) they'd open source gpt-4/5 with a gpl-style license


That’s not the sense of open they’ve organized around. In fact, it’s antithetical to it.

Theirs is a technocratic sense of open, where select credentialed experts collaborate on a rational good without a concentration of control by specific capitalists or nations.


I think your technocratic sense of open is misplaced. At this point OpenAI is clearly controlled by the US and it's ok. If anything one wonders if Altman's ouster has a geopolitical angle, cozying up to other countries and such.


I guess I struggle to see how the word "open" can be applied to that, but I also remember how that word was tossed around in the late 80s and early 90s during the Unix wars, and, yeah, shoe fits.

The question is how we got to be so powerless as a society that this is the only palette of choices we get to choose from: technocratic semi-autistic engineer-intellects who want to hoist AGI on the world vs self-obsessed tech bro salesdudes who see themselves as modern day Howard Roarks.

That's it.

Anyways, don't mind me, gonna crawl into a corner and read Dune.


This definition is an abuse of the word "open"


Yeah, not open in the open source or rms way. it’s “for the benefit for all” with the “benefits” decided by the openai board, a la communism, with central planning by “the party”.

Surprisingly capitalism actually leads to more benefits for all, because of the decentralization and competition.


Be the change you want to see.


I think "don't extinguish humanity or leave most of them unemployed" is a principle everyone can get and stay behind.


You seem to have far more faith in others' ethical compass than I think is justified by historical evidence.

It's amazing what people will do when the size of their paycheque (or ego) is tied to it.

I don't trust anybody at OpenAI with the keys to the car, but democratic choice apparently doesn't play into it, so here we are.


I meant as a basic principle. Individuals and organizations who breach the pact can be punished by legal means.


> "was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board."

A breakdown in comms that took everyone by surprise? Smells like bullshit.


> Szymon Sidor, an open source baselines researcher

What does that title even mean? As we know, OpenAI is ironically not known for doing open source work. I'm left guessing he 'researches the open source competition', as it were.

Can anyone shed further light on the role/research?


I wonder if Altman, Brockman, and company will join Elon or whether they will just start a new company?


So...should we sell our MSFT stock when the market opens on Monday, or in after-hours trading now?


Looking for comment that claimed that OpenAI has no investors because it’s a non-profit.


Seems pretty straightforward, the dev day was a breaking point for the non-profit interests.

Question is, how did the board become so unbalanced where this kind of dispute couldn’t be handled better? The commercial interests were not well-represented in the number of votes.


> The commercial interests were not well-represented in the number of votes.

This is entirely by design. Anyone investing in or working for the for-profit had to sign an operating agreement that literally states the for-profit is entirely beholden to the non-profit's charter and mission and that it is under no obligation to be profitable. The board is specifically balanced so that the majority is independent of the for-profit subsidiary.

A lot of people seem to be under the impression that the intent was for there to be significant representation of commercial interests here, and that is the exact opposite of how all of this is structured.


> Seems pretty straightforward, the dev day was a breaking point for the non-profit interests.

What was so bad about that day? Wasn't it just gpt4-turbo, gpt vision and gpt store and few small things?


Will Sam and Greg now go and create NeXTSTEP? (The OpenAI version)


I wonder if Microsoft engineered this?


Huh. So that mixed nonprofit/profit structure came back to bite them.


Bite who?


The founders and funders.


It is ludicrous to describe what happened as a coup. Your boss firing you is not a coup. The rejoinders to this are nonsense and you know it. Stop lying.


Did ChatGPT suggest a big surprise?


If the firing was because of a difference in "vision", then it doesn't really matter if Altman was key to making OpenAI so successful. Sutskever and co, don't want it to be successful (by market standards at least). If they get their way (past MSFT and others) then OpenAI will no longer be the cutting edge.

Buy GOOGL?


It seems you are saying that anything that doesn’t put profit first can’t be successful.


OpenAI's median salary of engineers is $900k. So yeah AI companies need money to be successful. Now if there is any way to generate billions of dollars per year long term without any profit objective, I will be happy to know.


"Can't" is a strong word, but a company that does will have more resources and likely outcompete it.


By market standards. There will be no end to intended and unintended equivocation about this over the coming days.


Why has no one on HN considered it has to do with sexually assaulting his sister when they were young?

https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman...

My other main guess is that his push for government regulation was seen by the more science-y side as stifling AI growth, or even as collusion with unaligned actors, and got him ousted by them.


> Why has no one on HN considered it has to do with ...

Maybe because this was not proven in a court still, and "innocent until proven guilty" is still a basic concept that must be preserved.

So a big "allegedly" must be placed here.


It's not that he did it per se, but that it (the public and private story) may have led to situations where he was not sufficiently candid with the board, for whatever reason.

I'm just saying, it was kind of weird that not even one comment among the hundreds I read mentioned it as a possibility.


The average SWE at OpenAI who signed up for the “900k” compensation package which was really > 600k in OpenAI PPU equity probably saw their comp evaporate.

https://news.ycombinator.com/item?id=36460082


> This is why working for any company that isn’t public is an equity gamble.

That's a cynical take on work. I assume most people have other motivations since work is basically a prison.

https://www.youtube.com/watch?v=iR1jzExZ9T0


Probably off topic, but someone on Reddit's OpenAI sub shared screenshots of his discussion with ChatGPT, in which it claims that AGI status was achieved a long time back. You can still go and read the entire series of screenshots.



