The reporting on this in the last two days has been bizarre and of such shoddy quality, particularly from The Verge.
So many articles with no real sources saying the board was desperate for him to come back, or reneged on their decision, or that there would be an exodus of employees!
I’m glad the board wasn’t browbeaten by Sam and his cohort of VC friends.
I am also incredibly doubtful this will have any meaningful impact in terms of engineers/researchers choosing to leave. Most of the core group of researchers and engineers joined OpenAI the non-profit, not OpenAI the LLC; that is to say, I assume many have strong feelings about AI safety, which, judging by the employee testimonies in the recent Atlantic article, Sam acted negligently towards.
The chairman of the board and three senior scientists resigned before the weekend was over. You'd have to be naive to think it's not a strong possibility.
“The OpenAI board is in discussions with Sam Altman to return to the company as its CEO, according to multiple people familiar with the matter. One of them said Altman, who was suddenly fired by the board on Friday with no notice, is “ambivalent” about coming back and would want significant governance changes.
Update November 18th, 5:35PM PT: A source close to Altman says the board had agreed in principle to resign and to allow Altman and Brockman to return, but has since waffled”
Do you want me to hold your hand to google anything else or is that enough?
Former chairman*, he was removed from the board immediately after Sam’s ouster. OpenAI has 700 employees; 4 resignations do not make an exodus.
Just a little bit of feedback. I'm seeing you comment a lot and quite rapidly in these Altman threads and sometimes in a quite reactionary way. (Other times usefully.)
This isn't reddit; we're not here to 'one-up' a parent commenter with a one-word response. By the guidelines:
> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
Not OP, but the choice of phrasing in a number of reports implied that the board was uncertain of their decision. Normal sensationalization of the same limited information that was publicly available at the time.
I’m glad people in this thread are supporting the OpenAI board in this decision. There seems to be too much celebrity worship around Altman from all the tech/crypto bros.
Also, any new competition started by Altman would be good for the ecosystem in general.
OpenAI is a non-profit focused on developing safe AGI and Steve Jobs was pivotal in the success of Apple - not sure if Jobs' firing is a good comparison for Sam's firing.
I've seen some likening Sam to Caesar, being betrayed and such. Certainly there are communities that match what the parent comment describes. HN has generally been more balanced than I would have thought, though. The fanboys I've personally encountered have been more in the land of YT and Reddit.
I don't really know if he was betrayed, because the relationships and communication between Sam and the rest of the board that voted him out aren't exactly public. To use that language feels like you are already taking a side despite lacking information.
Anyhow, I was just using the Caesar example because it was a memorable one that stuck out to me.
No source has provided the exact reasons why Altman was removed by his own board. Please explain this first before going on to support any follow-up actions. What exactly happened … this is beyond ridiculous
'OpenAI’s "primary fiduciary duty is to humanity," not to investors or even employees.'
I'm so proud of the board for sticking to their principles over profit. This is a huge victory for all of us, and a chance for everyone to come together and approach AGI safely and for the benefit of all.
> And yet no one is talking about "encryption safety"
Okay, you are getting me off topic, but this is totally not the case. Haven't governments been especially interested in dealing with encryption lately? The UK or EU (can't remember which) most recently raising the pitchforks.
Anyway, the more important thing for me to point out is that AI is anticipated to possess exponential properties that technology like encryption does not have. Encryption is predictable while AI is not.
They are both just data being made hard to read, though. You could pretty much sum up the "risks" of encryption as: you want to read some data that someone else didn't want you to read. Which means you might fail to detect something, or fail to prove something as easily as if you could read that data. While we could enumerate specific examples of scenarios that fit the generalization, I think we are talking about something pretty narrow. It is also not really a new problem. Historically communication has been impossible to snoop on at scale, and encryption exists to maintain a semblance of past privacy in the present.
Lol not sure where I want to take this and it's getting pretty late...
The board's motivation may have been good, they may be "in the right" here (it depends on your view on AI risk, and also on a lot of details of what was going on that we still don't know). But unambiguously they executed abysmally. There is little to be proud of here. OpenAI may have needed to be reined in, and they have probably succeeded in accomplishing that, but they have badly damaged the company in the process, and catastrophically damaged their own credibility, thus reducing their ability to influence events in the future and plausibly damaging the credibility of the entire idea of responsible development. (To be clear, I believe responsible development is important and I am sad to see it discredited.)
Edit to add: if you believe the rapid commercialization of frontier models at OpenAI was a cancer that needed to be destroyed, the likely outcome of the board's clumsy handling is that it probably now metastasizes.
You're assuming it'll continue on that path, and do so well.
Personally I think this is where they'll end up stagnating, although I'm happy to be wrong. It's going to make recruiting top talent far harder, and ultimately this hiccup will cause them to lose their edge. Losing that edge lessens the desire to go there, which will drive down competitiveness, which will further pull them back from the fray. I believe it'll make the US less competitive, which makes me sad.
Hard to say what the impact on access to talent will be. I know people working in AI who wouldn't consider working at OpenAI out of perceived recklessness. At least one I've spoken with this weekend said this made them more open to the idea.
They only signed that to pull the ladder up after they had a mass-adopted model, to lower competition. Nothing about that is concern for humanity; it's about lowering competition to strengthen their moat.
Now that Sam is out, and his new AI startup (if he makes one) will be a new entrant in the AI space, he probably should have waited a couple of weeks before recommending all those AI regulations.
As an aside, I would support a system like the patent system: if new AI models are going to be heavily regulated, then the uncensored/pre-gated weights of already-launched models should be released to the public, in much the same way that a patented idea is protected but you have to disclose the design in the application.
It's unfortunate how cranky people on HN like you turn when it comes to this grave topic. Multiple people, for years, many without incentive, will tell you what they think, and you will just say "no, they're lying," especially when YC has placed a huge amount of value on earnestness forever. Not to mention they've addressed the regulation conspiracy and how the regulation restricts them and the big cos, not GPT-4-level models by anyone smaller.
> So, here's what happened at OpenAI tonight. Mira [CTO and interim CEO] planned to hire Sam and Greg back. She turned Team Sam over past couple of days. Idea was to force board to fire everyone, which they figured the board would not do. Board went into total silence. Found their own CEO Emmett Shear
Why? Now they'll just start their own company that is 100% for-profit, and OpenAI may eventually fade into nothingness as money (possibly even Microsoft's) chases the new venture and employees flee. There's a rumor that Microsoft is already interested in funding Sam's new venture.
They won't be able to catch up to and surpass OpenAI, at least not for a few years. I'm in the camp that we ought to solve alignment before we get to AGI, and also in the camp that this is unlikely. Therefore, based on my assumptions, the pace of AI progress slowing and a point won by the safety side is a good thing.
I get the sense that there was not a lot of time to do this before Sam became too difficult to remove and appropriated the brand and assets.
From the reading on this I did this evening (much more than I probably should have), I saw it suggested that this might have been a limited window of time in which the board was small enough to pull this off. It was 9 members not long ago before being reduced to 6 (of whom 4 collaborated). Allegedly Sam was looking to grow the board back up.
Counter-opinion to mainstream here:
- backstabbing a CEO and founder like that has very bad optics and karma
- given the CEO took part in all hiring, many employees will be more attached to him; btw, he also knows everyone's pay and the whole org structure
- the board knows they've made themselves unhireable for any position of managerial control/oversight; rather than realize the loss now, they'd rather kick the can forward
- headcount fallout will only be quantifiable 3-6 months after Sam sets up a new company; the current assessment is premature
- for some months the company will follow the momentum set by the previous CEO
I heard Emmett speak at Startup School nearly a decade ago. He is very technically minded and a good leader. They could do a lot worse; at the very least I think he will keep an open mind.
> threatening to set off a broader wave of departures to OpenAI’s rivals, including Google, and to a new venture Altman has been plotting in the wake of his firing.
This is really the question: how many employees will stay, go to existing competitors, or go to AltmanAI.
Without brain drain from OpenAI, Altman's new venture will not be nearly as interesting. (Though given that three senior scientists resigned already...it seems probable.)
---
Does OpenAI not have any non-compete agreements? Is that why this is possible?
OpenAI never found a true identity under Sam. Sam pursued hypergrowth (treating it like a YC startup), while many in the company, including Ilya, wanted it to be a research company emphasizing AI safety.
Whether you're Team Sutskever or Team Altman, you can't deny it's been interesting to see extremely talented people fundamentally disagree about what to do with godlike technology.
Unless there is one ultra-genius who holds all the knowledge to create AGI and stays at OpenAI, there will be no “safe AI” built by OpenAI that saves the world. They are likely to lose all the key talent left over the next month, as people cluster around Sam and his NewCo or go out and start their own.