Sam Altman Will Not Return to OpenAI as CEO (theinformation.com)
64 points by omarfarooq on Nov 20, 2023 | 61 comments



The reporting on this in the last two days has been bizarre and of such shoddy quality, particularly from The Verge.

So many articles with no real sources saying the board was desperate for him to come back, or reneged on their decision, or that there would be an exodus of employees!

I’m glad the board wasn’t browbeaten by Sam and his cohort of VC friends.

I am also incredibly doubtful this will have any meaningful impact in terms of engineers/researchers choosing to leave. Most of the core group of researchers and engineers joined OpenAI the non-profit, not OpenAI the LLC. That is to say, I assume many have strong feelings about AI safety, which, judging by the employee testimonies in the recent Atlantic article, Sam acted negligently towards.


> the board was desperate for him to come back

Who reported that?

> reneged on their decision

Who reported that?

> there would be an exodus of employees

The chairman of the board and three senior scientists resigned before the weekend was over. You'd have to be naive to think it's not a strong possibility.


https://www.theverge.com/2023/11/18/23967199/breaking-openai...


“The OpenAI board is in discussions with Sam Altman to return to the company as its CEO, according to multiple people familiar with the matter. One of them said Altman, who was suddenly fired by the board on Friday with no notice, is “ambivalent” about coming back and would want significant governance changes.

Update November 18th, 5:35PM PT: A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled”

Do you want me to hold your hand to google anything else or is that enough?

Former chairman*; he was removed from the board immediately after Sam’s ouster. OpenAI has 700 employees; 4 resignations do not make an exodus.


> desperate


Just a little bit of feedback. I'm seeing you comment a lot and quite rapidly in these Altman threads and sometimes in a quite reactionary way. (Other times usefully.)

This isn't reddit; we're not here to 'one-up' a parent commenter with a one-word response. By the guidelines:

> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.


I’m sure the one you’re replying to knows the guidelines. No need for the reddit remark.


Not OP, but the choice of phrasing in a number of reports implied that the board was uncertain of their decision. Normal sensationalization of the same limited information that was publicly available at the time.


I’m glad people in this thread are supporting the OpenAI board in this decision. There seems to be too much celebrity worship around Altman from all the tech/crypto bros.

Also, any new competition started by Altman would be good for the ecosystem in general.


> There seems to be too much celebrity worship around Altman from all the tech/crypto bros

I haven't seen anyone worship Altman.

I've seen many say the board made an incredibly dumb decision. Like Apple-firing-Jobs level dumb decision.

---

Whether you think it's net good for OpenAI to be brought down a rung or two... you can still think it was bad for OpenAI.


OpenAI is a non-profit focused on developing safe AGI and Steve Jobs was pivotal in the success of Apple - not sure if Jobs' firing is a good comparison for Sam's firing.


> not sure if Jobs' firing is a good comparison

Because they are different products?

Or because Altman wasn't pivotal in the success of OpenAI?


I've seen some likening Sam to Caesar, being betrayed and such. Certainly there are communities that match what the parent comment describes. HN has generally been more balanced than I would have thought, though. The fanboys I've personally encountered have been more in the lands of YouTube and Reddit.


Firing and homicide are different, but certainly he was betrayed, yes?


I don't really know if he was betrayed, because the relationships and communication between Sam and the rest of the board that voted him out aren't exactly public. To use that language feels like you are already taking sides despite lacking information.

Anyhow, I was just using the Caesar example because it was a memorable one that stuck out to me.


It's certainly not certain.


No source has provided the exact reasons why Altman was removed by his own board. Please explain that first before supporting any follow-up actions. What exactly happened? This is beyond ridiculous.


'OpenAI’s "primary fiduciary duty is to humanity," not to investors or even employees.'

I'm so proud of the board for sticking to their principles over profit. This is a huge victory for all of us, and a chance for everyone to come together and approach AGI safely and for the benefit of all.


I'm not a believer in "AI safety."

But if you do believe, I don't see how this is a "victory."

OpenAI is now existentially at risk. Even if it does survive it will very likely lose its dominance to a non-"AI safety" commercial endeavor.

---

Classic "biting the hand that feeds you" https://youtu.be/Lg2dqFCU67Q?si=TWX8slGW_8hLdgQu&t=35


By that do you mean that you believe AI is safe by default? Or that AI safety is not possible?


I mean that "AI Safety" is (1) nebulous and (2) impossible.

Encryption can be used for good things and bad things. And yet no one is talking about "encryption safety."


> And yet no one is talking about "encryption safety."

Okay, you are getting me off topic, but this is totally not the case. Haven't governments been especially interested in dealing with encryption lately? The UK or EU (can't remember which) most recently raised the pitchforks.

Anyway, the more important thing for me to point out is that AI is anticipated to possess exponential properties that technology like encryption does not have. Encryption is predictable while AI is not.

Edit: fixed typo


> Haven't governments been especially interested in dealing with encryption lately?

Fair enough :)

People have been talking about it, and it's a similarly bad idea.

> Encryption is predictable while AI is not.

Banking info vs child porn is pretty divergent.


They are both just data being made hard to read, though. You could pretty much sum up the "risks" of encryption as: you want to read some data that someone else didn't want you to read. Which means you might fail to detect something, or fail to prove something as easily as if you could read that data. While we could enumerate specific examples of scenarios that fit the generalization, I think we are talking about something pretty narrow. It is also not really a new problem. Historically communication has been impossible to snoop on at scale, and encryption exists to maintain a semblance of past privacy in the present.
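
To make that concrete, a toy sketch (using Python's cryptography library purely as an illustration of my own, nothing from the articles): encryption neither knows nor cares what the bytes mean, and its behavior is completely predictable.

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()
    f = Fernet(key)

    # Content-agnostic: the algorithm does the same thing whether the
    # plaintext is banking records or anything else.
    token = f.encrypt(b"any bytes at all")

    # Predictable: with the key you always get back exactly the original
    # bytes; without the key, the token is just opaque data.
    assert f.decrypt(token) == b"any bytes at all"

Nothing in that operation depends on what the data means, which is why I think the "risks" generalize so narrowly.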

Lol not sure where I want to take this and it's getting pretty late...


The board's motivation may have been good, they may be "in the right" here (it depends on your view on AI risk, and also on a lot of details of what was going on that we still don't know). But unambiguously they executed abysmally. There is little to be proud of here. OpenAI may have needed to be reined in, and they have probably succeeded in accomplishing that, but they have badly damaged the company in the process, and catastrophically damaged their own credibility, thus reducing their ability to influence events in the future and plausibly damaging the credibility of the entire idea of responsible development. (To be clear, I believe responsible development is important and I am sad to see it discredited.)

Edit to add: if you believe the rapid commercialization of frontier models at OpenAI was a cancer that needed to be destroyed, the likely outcome of the board's clumsy handling is that it probably now metastasizes.


You're assuming it'll continue on that path, and do well.

Personally I think this is where they'll end up stagnating, although I'm happy to be wrong. It's going to make recruiting top talent far harder, and ultimately this hiccup will cause them to lose their edge. Losing that edge lessens the desire to go there, which will drive down competitiveness, which will further pull them back from the fray. I believe it'll make the US less competitive, which makes me sad.


Hard to say what the impact on access to talent will be. I know people working in AI who wouldn't consider working at OpenAI because of perceived recklessness. At least one of them, whom I've been speaking with this weekend, said that this made them more open to the idea.


> approach AGI safely

The only risk AGI poses to the average American is unemployment. If immigration/outsourcing is fine, I don't see anything being done about this.


Sam Altman, Ilya Sutskever, Yoshua Bengio, Geoff Hinton, Demis Hassabis (DeepMind CEO), Dario Amodei (Anthropic CEO), and Bill Gates disagree with you.

https://twitter.com/robbensinger/status/1726039794197872939


They only signed that to pull the ladder up now that they have a mass-adopted model, to lower competition. Nothing about it is concern for humanity; it's about lowering competition to strengthen their moat.

Now that Sam is out, and his new AI startup (if he makes one) will be a new entrant in the space, he probably should have waited a couple of weeks before recommending all those AI regulations.

As an aside, I would support a system like the patent system: if new AI models are going to be heavily regulated, then the uncensored/pre-gated weights of already-launched models should be released to the public, in much the same way that a patent protects your idea but requires you to disclose the design in the application.


No, Sam Altman and others wrote about this a long time ago. Eight years ago:

"Why You Should Fear Machine Intelligence"

https://blog.samaltman.com/machine-intelligence-part-1

It's unfortunate how cranky people on HN like you turn when it comes to this grave topic. Multiple people, for years, many without incentive, will tell you what they think and you just say "no, they're lying," especially when YC has placed a huge amount of value on earnestness forever. Not to mention they've addressed the regulation conspiracy and how the regulation restricts them and the big companies, not GPT-4-level models from anyone smaller.


May I interest you in a bridge, sir?


> So, here's what happened at OpenAI tonight. Mira [CTO and interim CEO] planned to hire Sam and Greg back. She turned Team Sam over past couple of days. Idea was to force board to fire everyone, which they figured the board would not do. Board went into total silence. Found their own CEO Emmett Shear

https://twitter.com/emilychangtv/status/1726468006786859101


“My understanding is that Sam is in shock”

https://x.com/emilychangtv/status/1726468006786859101?s=46


XD


This is good. If Sama had managed to overthrow the board, I'd be worried about the future of AI safety.


Why? Now they'll just start their own company that is 100% for-profit, and OpenAI may eventually fade into nothingness as money chases the new venture (possibly even Microsoft's) and employees flee. There's a rumor that Microsoft is already interested in funding Sam's new venture.


They won't be able to catch up to and surpass OpenAI, at least not for a few years. I'm in the camp that we ought to solve alignment before we get to AGI, and also in the camp that this is unlikely. Therefore, the pace of AI progress slowing, and a point won by the safety side, is a good thing based on my assumptions.


I get the sense that there was not a lot of time to do this before Sam became too difficult to remove and appropriated the brand and assets.

From the reading on this I did this evening (which was much more than I probably should have), I saw it suggested that this might have been a limited window in which the board was small enough to pull this off. It was nine members not long ago before being reduced to six (four of whom collaborated). Allegedly Sam was looking to grow the board back up.


It's good because Sam was pushing for regulatory capture to harm startups and open-source efforts.


Great news. Guess throwing shade and getting a mob to publicly support you on Twitter doesn't do anything.


Thank god


Emmett Shear:

"Motte: e/acc is just techno-optimism, everyone who is against e/acc must be against building a better future and hate technology

Bailey: e/acc is about building a techno-god, we oppose any attempt to safeguard humanity by regulating AI in any form around and around and around"

https://twitter.com/eshear/status/1683208767054438400


Can we use English rather than Twitterisms, please? That's "effective accelerationism", a term I, along with most people, had not heard of.

https://beff.substack.com/p/notes-on-eacc-principles-and-ten...


Expanding the term still isn't a huge help without an accompanying search, which worked fine with the short version too.

https://www.businessinsider.com/silicon-valley-tech-leaders-...


I can't tell. Is Emmett for or against e/acc?


He's pointing out a motte/bailey, meaning he's against the motte. If to him e/acc is the motte, then he's likely against it.


I love that he called this out. In my experience, e/acc people love using this inane reasoning


Counter-opinion to the mainstream here:

- Backstabbing a CEO and founder like that has very bad optics and karma.

- Given the CEO took part in all hiring, many employees will be more attached to him. Oh, and btw, he also knows the pay and structure of everyone.

- The board knows they have made themselves unhireable for any position of managerial control/oversight. Rather than realize the loss now, they would rather kick the can forward.

- The headcount fallout will only be quantifiable 3-6 months after Sam sets up a new company; any current assessment is premature.

- For some months the company will follow the momentum set by the previous CEO.


I’ve heard Emmett speak before, at Startup School nearly a decade ago. He is very technically minded and a good leader. They could do a lot worse; at the very least I think he will keep an open mind.


> threatening to set off a broader wave of departures to OpenAI’s rivals, including Google, and to a new venture Altman has been plotting in the wake of his firing.

This is really the question: how many employees will stay, go to existing competitors, or go to AltmanAI.

Without brain drain from OpenAI, Altman's new venture will not be nearly as interesting. (Though given that three senior scientists resigned already... it seems probable.)

---

Does OpenAI not have any non-compete agreements? Is that why this is possible?


Non-competes are not enforceable in California.


OpenAI never found a true identity under Sam. Sam pursued hypergrowth (treating it like a YC startup), while many in the company, including Ilya, wanted it to be a research company emphasizing AI safety.

Whether you're Team Sutskever or Team Altman, you can't deny it's been interesting to see extremely talented people fundamentally disagree about what to do with godlike technology.


Unless there is one ultra-genius who holds all the knowledge to create AGI staying at OpenAI, there will be no “safe AI” built by OpenAI that saves the world. They are likely to lose all the key talent that's left over the next month, as people cluster around Sam and his newco or go out and start their own.


What are the exact reasons why he was fired? Come on, guys, this is ridiculous…


It’s all rumors and innuendos at this point.


I’m sure it’s a stressful moment in time for him, but he certainly won’t look back and say it was boring.


Emmett Shear is the new interim CEO



Does this mean the Safetyists won?


The battle, yes. The war, no.


Bloody Monday



