
Disappointing outcome. The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft. Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

It might not seem like it right now, but I think the real disruption is just about to begin. OpenAI doesn't have it in its DNA to win; they're too short-sighted and reactive. The big tech companies will have incredible distribution power, but a real disruptor must be brewing somewhere unnoticed, for now.




> there's clearly little critical thinking amongst OpenAI's employees either.

That they reached a different conclusion than the outcome you wished for does not indicate a lack of critical thinking skills. They have a different set of information than you do, and reached a different conclusion.


When a politician wins with 98% of the vote, do you A) think that person must be an incredible leader, or B) think something else is going on?

Only time will tell if this was a good or bad outcome, but for now the damage is done, and OpenAI has a lot of trust to rebuild to shake off the reputation it has earned from this circus.


The simple answer here is that the board's actions stood to incinerate millions of dollars of wealth for most of these employees, and they were up in arms.

They’re all acting out the intended incentives of giving people stake in a company: please don’t destroy it.


I don’t understand how the fact that they went from a nonprofit to a for-profit subsidiary of one of the most closed-off, anticompetitive megacorps in tech is so readily glossed over. I get it, we all love money and Sam’s great at generating it, but anyone who works at OpenAI besides the board seems to be morally bankrupt.


Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.

Also, working for a subsidiary (which was likely going to be given much more self-governance than a team directly inside the megacorp) doesn’t necessarily mean “evil”. That’s a very one-dimensional way to think about things.

Self-disclosure: I work for a megacorp.


We can acknowledge that it's morally bankrupt, while also not blaming them. Hell, I'd probably do the same thing in their shoes. That doesn't make it right.


If some of the smartest people on the planet are willing to sell the rest of us out for Comfy Lifestyle Money (not even Influence State Politics Money), then we are well and truly Capital-F Fucked.


We already know some of the smartest people are willing to sell us out. Because they work for FAANG ad tech, spending their days figuring out how to maximize the eyeballs they reach while sucking up all your privacy.

It's a post-"Don't be evil" world today.


If half of the brainpower invested in advertising food went towards world hunger, we'd have too much food.


> Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.

That is part of the reason organizations choose to set themselves up as a non-profit: to codify those morals into the legal status of the organization and ensure that the ingrained selfishness that exists in all of us doesn’t overtake the mission. That is the heart of this whole controversy. If OpenAI had never been a non-profit, there wouldn’t be any issue here, because they wouldn’t even be having this legal and ethical fight. They would just be pursuing the selfish path like all other for-profit businesses, and there would be no room for the board to fire or even really criticize Sam.


I guess my qualm is that this is the cost of doing business, yet people are outraged at the board because they’re not going to make truckloads of money in equity grants. That’s the morally bankrupt part in my opinion.

If you throw your hands up and say, “Well, kudos to them, they’re actually fulfilling their goal of being a non-profit. I’m going to find a new job,” that’s fine by me. But if you get morally outraged at the board over this because you expected the payday of a lifetime, that’s on you.


> Pretty easy to complain about lack of morals when it’s someone else’s millions of dollars of potential compensation that will be incinerated.

And while also working for a for-profit company.


Easy to see how humans would join a non-profit for the vibes, and then, when they create one of the most compelling products of the last decade worth billions of dollars, quickly change their thinking into "wait, I should get rewarded for this".


Why would they be morally bankrupt? Do the employees have to care whether it's a non-profit or a for-profit?

And if they do prefer it as a for profit company, why would that make them morally bankrupt?


> anyone who works at OpenAI besides the board seems to be morally bankrupt.

People concerned about AI safety were probably not going to join in the first place...


Supposedly they had about 50% of employees leave in the year of the conversion to for-profit.


Wild that the employees will go back under a new board and the same structure. The first priority should be removing the structure that allowed a small group of people to destroy things over what may have been very petty reasons.


Well, it's a different group of people, and that group will now know the consequences of attempting to remove Sam Altman. I don't see this happening again.


Most likely, but it is cute how confident you are about humanity learning its lesson.


Humanity, no. But it's not humanity on the OpenAI board. It's 9 individuals. Individuals have an amazing capacity for learning and improvement.


The environment in a small to medium company is much more homogeneous than the general population.

When you see 95%+ consensus from 800 employees, that doesn't suggest tanks and police dogs intimidating people at the voting booth.


Not that I have any insight into any of the events at OpenAI, but would just like to point out there are several other reasons why so many people would sign, including but not limited to:

- peer pressure

- groupthink

- financial motives

- fear of the unknown (Sam being a known quantity)

- etc.

So many signatures may well mean there's consensus, but it's not a given. It may well be that we see a mass exodus of talent from OpenAI _anyway_, due to recent events, just on a different time scale.

If I had to pick one reason though, it's consensus. This whole saga could've been the script to an episode of Silicon Valley[1], and having been on the inside of companies like that I too would sign a document asking for a return to known quantities and – hopefully – stability.

[1]: https://www.imdb.com/title/tt2575988/


If the opposing letter that was published from "former" employees is correct, there was already huge turnover, and the people who remain liked the environment they were in and, I would assume, liked the current leadership, or they would have left.

So clearly the current leadership built a loyal group, which I think is something that should be explored, because groupthink is rarely a good thing, no matter how much modern society wants to push out all dissent in favor of a monoculture of ideas.

If OpenAI is a huge monoculture of thinking, then they most likely have bigger problems.


What opposing letter, how many people are we talking about, and what was their role in the company?

All companies are monocultures, IMO, unless they are multi-nationals, and even then, there's cultural convergence. And that's good, actually. People in a company have to be aligned enough to avoid internal turmoil.


>>What opposing letter, how many people are we talking about, and what was their role in the company?

Unvalidated, unsigned letter [1]

>>All companies are monocultures

Yes and no. There has to be diversity of thought to ever get anything done, really. If everyone is just a sycophant agreeing with the boss, then you end up with very bad product choices and even worse company direction.

Yes, there has to be some commonality, some semblance of shared vision or values, but I don't think that makes a "monoculture".

[1] https://wccftech.com/former-openai-employees-allege-deceit-a...


I'd love another season of Silicon Valley, with some Game Stonk and Bored Apes and ChatGPT and FTX and Elon madness.


The only major series with a brilliant, satisfying, and true-to-form ending, and you want to resuscitate it for some cheap curtain calls and modern social commentary, leaving Mike Judge to end it yet again in such a way that manages to duplicate or exceed the effect of the first time without doing the same thing? Screw it. Why not?


You could say that, except that people in this industry are the most privileged, and their earnings and equity would probably be matched elsewhere.

You say “groupthink” like it's a bad thing. There's always wisdom in crowds. We have a mob mentality as an evolutionary advantage. You're also willing to believe that 3–4 people can make better judgement calls than 800 people. That's only possible if the board has information that's not public, and I don't think they do, or else they would have published it already.

And … it doesn't matter why there's such a wide consensus. Whether they care about their legacy, or earnings, or not upsetting their colleagues, doesn't matter. The board acted poorly, undoubtedly. Even if they had legitimate reasons to do what they did, that stopped mattering.


I'm imagining they see themselves in the position of Microsoft employees about to release Windows 95, or Apple employees about to release the iPhone... and someone wants to get rid of Bill Gates or Steve Jobs.


See, neither Bill Gates nor Steve Jobs is around these companies anymore, and all is fine.

Apple and Microsoft even have the strongest financial results in their history.


Gates and Jobs helped establish these companies as the powerhouses they are today with their leadership in the 90s and 00s.

It's fair to say that what got MS and Apple to dominance may be different from what it takes to keep them there, but which part of that corporate timeline more closely resembles OpenAI?


Now go back in time and cut them before their companies took off.


Signing petitions is also cheap. It doesn't mean that everyone signing has thought deeply and actually made a life-changing decision.


Exactly; there are multitudes of reasons and very little information so why pick any one of them?


Right. They aren't actually voting for Sam Altman. If I'm working at a company and I see as little as 10% of the company jump ship, I think "I'd better get the frik outta here". Especially if I respect the other people who are leaving. This isn't a blind vote. This is a rolling snowball.

I don't think very many people actually need to believe in Sam Altman for basically everyone to switch to Microsoft.

95% doesn't show a large amount of loyalty to Sam; it shows a low amount of loyalty to OpenAI.

So it looks like a VERY normal company.


Personally I have never seen that level of singular agreement in any group of people that large. Especially to the level of sacrifice they were willing to take for the cause. You maybe see that level of devotion to a leader in churches or cults, but in any other group? You can barely get 3 people to agree on a restaurant for lunch.

I am not saying something nefarious forced it, but it’s certainly unusual in my experience and this causes me to be skeptical of why.


>You can barely get 3 people to agree on a restaurant for lunch.

I was about to state that a single human is enough for disagreements to arise, but that claim doesn’t reach full consensus in my mind.


I was conflicted about originally posting that sentence. I waffled back and forth between 2, 3, 5…

Three was the compromise I made with myself.


This seems extremely presumptuous. Have you ever been inside a company during a coup attempt? The employees’ future pay and livelihood are at stake; why are you assuming they weren’t being asked to sacrifice themselves by not objecting to the coup? The level of agreement could be entirely due to the fact that the stakes are very large, completely unlike your choice of lunch locale. It could also be an outcome of nobody having asked their opinion before making a very big change. I’d expect to see almost everyone at a company agree with each other if the question was, “hey, should we close this profitable company and all go get other jobs, or should we keep working?”


I have had a long career and have been through hostile mergers several times, and at no point have I ever seen large numbers of employees act outside of their self-interest for an executive. It just doesn’t happen. Even in my career, with executives who are my friends, I would not act outside my personal interests. When things are corporately uncertain and people worry about their working livelihoods, they just don’t tend to act that way. They tend to keep their heads down or jump ship independently.

The only explanation that makes any sense to me is that these folks know that AI is hot right now and would be scooped up quickly by other orgs…so there is little risk in taking a stand. Without that caveat, there is no doubt in my mind that there would not be this level of solidarity to a CEO.


> at no point have I ever seen large numbers of employees act outside of their self-interest for an executive.

This is still making the same assumption. Why are you assuming they are acting outside of self-interest?


If you are willing to leave a paycheck because of someone else getting slighted, to me, that is acting against your own self-interest. Assuming, of course, you are willing to actually leave. If it was a bluff, that still works against your self-interest by setting you against the new leadership and inviting retaliation for your bluff.


Why do you assume they were willing to leave a paycheck because of someone else getting slighted? If that were the case, then it is unlikely everyone would be in agreement. Which indicates you might be making incorrect assumptions, no? And, again, why assume they were threatening to leave a paycheck at all? That’s a bad assumption; MS was offering a paycheck. We already know their salaries weren’t on the line, but all future stock earnings and bonuses very well might be. There could be other reasons too, I don’t see how you can conclude this was either a bluff or not self-interest without making potentially bad assumptions.


They threatened to quit. You don’t actually believe that a company would be willing to still provide them a paycheck if they left the company, do you?

At this point I suspect you are being deliberately obtuse. Have a good day.


They threatened to quit by moving to Microsoft, didn’t you read the letter? MS assured everyone jobs if they wanted to move. Isn’t making incorrect assumptions and sticking to them in the face of contrary evidence and not answering direct questions the very definition of obtuse?


>Especially to the level of sacrifice they were willing to take for the cause.

We have no idea that they were sacrificing anything personally. The packages Microsoft offered for people who separated may have been much more generous than what they were currently sitting on. Sure, Altman is a good leader, but Microsoft also has deep pockets. When you see some of the top brass at the company already make the move and you know they're willing to pay to bring you over as well, we're not talking about a huge risk here. If anything, staying with what at the time looked like a sinking ship might have been a much larger sacrifice.


There are plenty of examples of workers unions voting with similar levels of agreement. Here are two from the last couple months:

> UAW President Shawn Fain announced today that the union’s strike authorization vote passed with near universal approval from the 150,000 union workers at Ford, General Motors and Stellantis. Final votes are still being tabulated, but the current combined average across the Big Three was 97% in favor of strike authorization. The vote does not guarantee a strike will be called, only that the union has the right to call a strike if the Big Three refuse to reach a fair deal.

https://uaw.org/97-uaws-big-three-members-vote-yes-authorize...

> The Writers Guild of America has voted overwhelmingly to ratify its new contract, formally ending one of the longest labor disputes in Hollywood history. The membership voted 99% in favor of ratification, with 8,435 voting yes and 90 members opposed.

https://variety.com/2023/biz/news/wga-ratify-contract-end-st...
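
A quick sanity check of the WGA figure, using only the raw counts quoted above:

    \frac{8435}{8435 + 90} = \frac{8435}{8525} \approx 0.989 \approx 99\%

So the 99% claim is consistent with the vote totals. The UAW article quotes only the percentage, so it can't be checked the same way.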


Approval rates of >90% are quite common within political parties, to the point where anything less can be seen as an embarrassment to the incumbent head of party.


There is a big difference between “I agree with this…” when a telephone poll caller reaches you and “I am willing to leave my livelihood because my company CEO got fired”


But if 100 employees were like "I'm gonna leave" then your livelihood is in jeopardy. So you join in. It's really easy to see 90% of people jumping overboard when they are all on a sinking ship.


I don't mean voter approval, I mean party member approval. That's arguably not that far off from a CEO situation in a way in that it's the opinion of and support for the group's leadership by group members.

Voter approval is actually usually much less unanimous, as far as I can tell.


But it’s not changing their livelihood. MSFT just gives them the same deal. In a lot of ways, it’s similar to the telephone poll: people can just say whatever they want; there won’t be big material consequences.


That sounds like a cult more than a business. I work at a small company (~100 people), and even though we are more or less aligned on what we're doing, you are not going to get close to that consensus on anything. Same for our sister company, which is about the same size as OpenAI.


I think it could be a number of factors:

1. The company has built a culture around not being under the control of one single company, Microsoft in this case. Employees may overwhelmingly agree.

2. The board acted rashly in the first place, and over 2/3 of employees signed their intent to quit if the board wasn't replaced.

3. Younger folks probably don't think highly of boards in general, because they never get to interact with them. Boards also sometimes dictate product outcomes that could go against the creative freedom and autonomy employees are looking for. Boards are also focused on profits, which is a net good for the company but threatens the culture of "for the good of humanity" that hooks people.

4. The high success of OpenAI has probably inspired loyalty in its employees so long as it remains stable, and their perception of stability is that the company ultimately changes little. Being "acquired" by Microsoft here may mean major shakeups and potential layoffs. There are no guarantees for the bulk of workers here.

I'm reading into the variables and using intuition to make these guesses, but all to suggest: it's complicated, and sometimes outliers like these can happen if those variables create enough alignment, if they seem common-sensical enough to most.


> Younger folks probably don't think highly of boards in general, because they never get to interact with them.

Judging from the photos I've seen of the principals in this story, none of them looks to be over 30, and some of them look like schoolkids. I'm referring to the board members.


I don't think the age of the board members matters, but rather that younger generations have been taught to criticize boards of any & every company for their myriad decisions to sacrifice good things for profit, etc.

It's a common theme in the overall critique of late-stage capitalism, is all I'm saying, and it could be a factor influencing OpenAI employees' decision to seek action that specifically eliminates the current board, as a matter of inherent bias that boards act problematically to begin with.


It also sounds like a very narrow hiring profile. That is, favoring the like-minded and assimilation over free thinking and philosophical diversity. They might give off the appearance of "diversity" on the outside - which is great for PR - but under the hood it's more monocultural. Maybe?


Superficial "diversity" is all the "diversity" a company needs in the modern era.

Companies do not desire or seek philosophical diversity; they only want superficial, biologically based "diversity" to prove they have the "correct" philosophy about the world.


But it's not only the companies; it's the marginalized, so desperate to get a "seat at the table" that they don't recognize the table isn't getting bigger and rounder. Instead, it's still the same rectangular table, just getting longer and longer.

Participating in that is assimilation.


Agree. This is the monoculture being adopted in actuality -- a racist crusade against "whiteness", and a coercive mechanism to ensure companies don't overstep their usage of resources (carbon footprint), so as not to threaten the existing titans who may have already abused what was available to them before these intracorporate policies existed.

It's also a way for banks and other powerful entities to enforce sweeping policies across international businesses that haven't been enacted in law. In other words: if governing bodies aren't working for them, they'll just do it themselves and undermine the will of companies who do not want to participate, by introducing social pressures and boycotting potential partnerships unless they comply.

Ironically, it snuffs out diversity among companies at a 40k foot level.


It's not a crusade against whiteness. Unless you're unhinged and believe a single phenotype that prevents skin cancer is somehow an obvious reflection of genetic inferiority, and that those lacking it have a historical destiny to rule over the rest and are entitled to institutional privileges over them, it makes sense that companies with employees not representative of the overall population have hiring practices that are problematic, albeit not necessarily as explicitly racist as you are.


Unfortunately you are wrong, and this kind of rhetoric has not only made calls for white genocide acceptable and unpunished, but has incited violence specifically against Caucasian people, as well as anyone who is perceived to adopt "white" thinking such as Asian students specifically, and even Black folks who see success in their life as a result of adopting longstanding European/Western principles in their lives.

Specifically, principles that have ultimately led to the great civilizations we're experiencing today, built upon centuries of hard work and deep thinking in both the arts and sciences, by all races, beautifully.

DEI and its creators/pushers are a subtle effort to erase and rebuild this prior work under the lie that it had excluded everyone but Whites, so that its original creators no longer take credit.

Take the movement to redefine math concepts by recycling existing concepts using new terms defined exclusively by non-white participants, since the originals are "too white". Oh, the horror! This is false, as there were many prominent non-white mathematicians prior to the woke revolution, so this movement's stated purpose is a lie, and its true purpose is to eliminate and replace white influence.

Finally, the fact that DEI specifically targets "whiteness" is patently racist. Period.


I think that most pushes for diversity that we see today are intended to result in monocultures.

DEI and similar programs use very specific racial language to manipulate everyone into believing whiteness is evil and that rallying around that is the end goal for everyone in a company.

On a similar note, the company has already established certain missions and values that new hires may strongly align with, like "Discovering and enacting the path to safe artificial general intelligence", given not only the excitement around AI's possibilities but also the social responsibility of developing it safely. Both are highly appealing goals that are bound to change humanity forever, and it would be monumentally exciting to play a part in that.

Thus, it's safe to think that most employees who are lucky to have earned a chance at participating would want to preserve that, if they're aligned.

This kind of alignment is not the bad thing people think it is. There's nothing quite like a well-oiled machine, even if the perception of diversity from the outside falls by the wayside.

Diversity is too often sought after for vanity, rather than practical purposes. This is the danger of coercive, box-checking ESG goals we're seeing plague companies, to the extent that it's becoming unpopular to chase after due to the strongly partisan political connotations it brings.


That argument only works with a “population”, since almost nobody gets to choose which set of politicians they vote for.

In this case, OpenAI employees all voluntarily sought to join that team at one point. It’s not hard to imagine that 98% of a self-selecting group would continue to self-select in a similar fashion.


Odds are that if he left, their compensation situation might have changed for the worse, if not led to downsizing, and that on the edge of a recession with plenty of competition out there.


> for now the damage is done and OpenAI has a lot of trust rebuilding to do

Nobody cares, except shareholders.


Originally, 65% had signed (505 of 770).


I'm sure most of them are extremely intelligent but the situation showed they are easily persuaded, even if principled. They will have to overcome many first-of-a-kind challenges on their quest to AGI but look at how quickly everyone got pulled into a feel-good kumbaya sing-along.

Think of that what you wish. To me, this does not project confidence in this being the new Bell Labs. I'm not even sure they have it in their DNA to innovate their products much beyond where they currently are.


I thought so originally too, but when I thought about their perspective, I realized I would probably sign too. Imagine that your CEO and leadership have led your company to the top of the world, and you're about to get a big payday. Suddenly, without any real explanation, the board kicks out the CEO. The leadership almost all supports the CEO and signs the pledge, including your manager. What would you do at that point? Personally, I'd sign just so I didn't stand out, and to stay on good terms with leadership.

The big thing for me is that the board didn't say anything in its defense, and the pledge isn't really binding anyway. I wouldn't actually be sure about supporting the CEO and that would bother me a bit morally, but that doesn't outweigh real world concerns.


The point of no return for the company might have been crossed way before the employees were forced to choose sides. Choose Sam's side and the company lives but only as a bittersweet reminder of its founding principles. Choose the board's side and you might be dooming the company to die an even faster death.

But maybe for further revolutions to happen, it did have to die to be reborn as several new entities. After all, that is how OpenAI itself started - people from different backgrounds coming together to go against the status quo.


What happened over the weekend is a death and rebirth of the board and the leadership structure, which will definitely ripple throughout the company in the coming days. It just doesn't align perfectly with how you want it to happen.


I think another factor is that they had very limited time. It was clear they needed to pick a side and build momentum quickly.

They couldn’t sit back and dwell on it for a few days because then the decision (i.e. the status quo) would have been made for them.


Great point. Either way, when this all started it might have all been too late.

The board said "allowing the company to be destroyed would be consistent with the mission" - and they might have been right. What's now left is a money-hungry business with bad unit economics that's masquerading as a charity for the whole of humanity. A zombie.


Persuaded by whom? This whole saga has been opaque to pretty much everyone outside the handful of individuals directly negotiating with each other. This was never about a battle for OpenAI's mission, or else the share of employees siding with Sam wouldn't have been that high.


Why not? Maybe the board was just too late to the party. Maybe the employees that wouldn’t side with Sam have already left[1], and the board was just too late to realise that. And maybe all the employees who are still at OpenAI mostly care about their equity-like instruments.

[1] https://board.net/p/r.e6a8f6578787a4cc67d4dc438c6d236e


> situation showed they are “easily persuaded”

How do you know?

> look at how “quickly” everyone got pulled into

Again, how do you know?


My understanding is that the non-profit created the for-profit so that they could offer compensation typical for SV start-ups. Then the board essentially broke the for-profit by removing the SV CEO and putting the "payday", which would have valued the company at $80 billion, in jeopardy. The two sides weren't aligned, and they need to decide which company they want to be. Maybe they should have removed Sam before MS came in with their big investment. Or maybe they want to have their cake and eat it too.


> feel-good kumbaya sing-along

Learning English over HN is so fun!


OpenAI Inc.'s mission in their filings:

"OpenAIs goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI's benefits are as widely and evenly distributed as possible. Were trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way."


People got burned on “don’t be evil” once and so far OpenAI’s vision looks like a bunch of marketing superlatives when compared to their track record.


At least Google lasted a good 10 years or so before succumbing to the vagaries of the public stock market. OpenAI lasted, what, 3 years?

Not to mention Google never paraded itself around as a non-profit acting in the best interests of humanity.


I would classify their mission "to organize the world's information and make it universally accessible and useful" as some light parading of acting in the best interests of humanity.


> Google lasted a good 10 years

Not sure what event you're thinking of, but Google was a public company before the 10-year mark, and they started their first ad program barely more than a year after forming as a company in 1998.


I have no objection to companies[0] making money. It's discarding the philosophical foundations of the company to prioritize quarterly earnings that is offensive.

I consider Google to have been a reasonably benevolent corporate citizen for a good time after they were listed (compare with, say, Microsoft, who were the stereotypical "bad company" throughout the 90s). It was probably around the time of the Google+ failure that things slowly started to go downhill.

[0] a non-profit supposedly acting in the best interests of humanity, though? That's insidious.


> Google never paraded itself around as a non-profit acting in the best interests of humanity.

Just throwing this out there, but maybe … non-profits shouldn't be considered holier-than-thou, just because they are “non-profits”.


Maybe, but their actions should definitely not be oriented toward maximizing their profit.


What's wrong with profit and wanting to maximize it?

Profit is now a dirty word somehow, the idea being that it's a perverse incentive. I don't believe that's true. Profit is the one incentive businesses have that's candid and the least perverse. All other incentives lead to concentrating power without being beholden to the free market, via monopoly, regulations, etc.

The most ethically defensible LLM-related work right now is done by Meta/Facebook, because their work is more open to scrutiny. And the non-profit AI doomers are against developing LLMs in the open. Don't you find it curious?


The problem is more about trying to maximize profit after claiming to be a nonprofit. Profit can be a good driving force, but it is not perfect. We have nonprofits for a reason, and it is shameful to take advantage of this if you are not functionally a nonprofit. There would be nothing wrong with OpenAI trying to maximize profits if they were a typical company.


Because non-profit?

There's nothing wrong with running a perfectly good car wash, but you shouldn't be shocked if people are mad when you advertise it as an all you can eat buffet and they come out soaked and hungry.


At this point I tend to believe that big company slogans mean the opposite of what the words say.

Like I would become immediately suspicious if food packaging had “real food” written on it.


Unless somehow a “mission statement” is legally binding, it will never mean anything that matters.

It's always written by PR people with marketing in mind.


I wouldn't really give OpenAI credit for lasting 3 years. OpenAI lasted until the moment they had a successful commercial product. Principles are cheap when there are no actual consequences to sticking to them.


Those mission statements are a dime a dozen. A junkie's promise has more value.


IANAL, but given that OpenAI Inc is a 501(c)(3) public charity, wouldn't that mean the mission statement has some actual legal power to it?


Most employees of any organization don't give a fuck about the vision or mission (often they don't even know it) - and are there just for the money.


Not so true when working for an organisation that is ostensibly a non-profit. People working for a non-profit are generally taking a significant hit to their earnings compared to doing similar work in a for-profit, outside of the top management of huge global charities.

The issue here is that OpenAI, Inc (officially and legally a non-profit) has spun up a subsidiary OpenAI Global, LLC (for-profit). OpenAI Global, LLC is what's taken venture funding and can provide equity to employees.

Understandably there's conflict now between those who want to increase growth and profit (and hence the value of their equity) and those who are loyal to the mission of the non-profit.


I don't really think this is true in non-charity work. Half of American hospitals are nonprofit and many of the insurance conglomerates are too, like Kaiser. The executives make plenty of money. Kaiser is a massive nonprofit shell for profitmaking entities owned by physicians or whatever, not all that dissimilar to the OpenAI shell idea. Healthcare worked out this way because it was seen as a good model to have doctors either reporting to a nonprofit or owning their own operations, not reporting to shareholders. That's just tradition though. At this point plenty of healthcare operations are just normal corporations controlled by shareholders.


Lots of non-profits that collect money for "cause X" spend 95% of the money on administration and 5% on cause X.


Doesn't mean we shouldn't hold an organization accountable for their publicized mission statement. Especially its board and directors.


What is socially defined as beneficial-to-humanity is functionally mandated by the MSM and therefore capricious, at the least. With that in mind, a translation:

"OpenAI will be obligated to make decisions according to government preference as communicated through soft pressure exerted by the Media. Don't expect these decisions to make financial sense for us".


If that were true, they'd be a not-for-profit.


> most likely to benefit humanity as a whole

Giving me a billion $ would be a net benefit to humanity as a whole


Depends on what you do (and stop doing) with it :-)


It could be hard to do that while paying a penalty to the FTB and IRS for what they're suspected to have done (in allowing a for-profit subsidiary to influence an NPO parent) or dealing with the SEC and the state courts over any fiduciary breach allegations related to the published stories. [ Nadella is an OG genius because his company is now shielded from all of that drama as it plays out, no matter the outcome. He can take the time to plan for a soft landing at MS for any OpenAI workers (if/when they need it) and/or to begin duplicating their efforts “just in case.” Heard coming from the HQ parking lot in Redmond https://youtu.be/GGXzlRoNtHU ]

Now we can all go back to work on GPT4turbo integrations while MS worries about diverting a river or whatever to power and cool all of those AI chips they’re gunna [sic] need because none of our enterprises will think twice about our decisions to bet on all this. /s/


For-profit subsidiaries can totally influence the nonprofit shell without penalty. Happens all the time. The nonprofit board must act in the interest of the exempt mission rather than just investor value or some other primary purpose. Otherwise it's cool.


yeah, all they have to do is pray for humanity to not let the magic AI out of the bottle and they’re free to have a $91b valuation and flaunt it in the media for days.. https://youtu.be/2HJxya0CWco


It is not about a different set of information, but about different stakes/interests. They act first and foremost as investors rather than as employees on this.


Tell me, how could the board's actions convince the employees they are making the right move?

Even if the board genuinely believed firing Sam would preserve OpenAI's founding principles, they couldn't have done a better job of convincing everyone they are NOT able to execute on it.

OpenAI has some of the smartest human beings on this planet; saying they don't think critically just because they don't vote the way you agree with is reaching.


> OpenAI has some of the smartest human beings on this planet

Being an expert in one particular field (AI) does not mean you are good at critical thinking or thinking about strategic corporate politics.

Deep experts are some of the easier con targets because they suffer from an internal version of “appealing to false authority”.


I hate these comments that portray every expert/scientist as if they're just good at one thing and aren't particularly great at critical thinking or corporate politics.

Heck, there are 700 of them. All different humans, good at something, bad at some other things. But they are smart. And of course a good chunk of them would be good at corporate politics too.


I don't think the argument was that none of them are good at that, just that it's a mistake to assume that because they're all very smart in this particular field, they're great at another.


I don't think critical thinking can be defined as joining the minority party.


Can't critical thinking also include: "I'm about to get a 10mil payday, hmmm, this is a crazy situation, let me think critically about how to ride this out and still get the 10mil so my kids can go to college and I don't have to work until I'm 75"?


That is 3D Chess. 5D Chess says those millions will be worthless when the AGI takes over...


6D Chess is apparently realizing that AGI is not 100% certain and that having 10mm on the run-up to AGI is better than not having 10mm on the run-up to AGI.


Anyone with enough critical thought who understands the hard consciousness problem's true answer (consciousness is the universe evaluating if statements) and where the universe is heading physically (nested complexity) should be seeking something more ceremonious. With AI, we have the power to become eternal in this lifetime, battle aliens, and shape this universe. Seems pretty silly to trade that for temporary security. How boring.


I would expect that actual AI researchers understand that you cannot break the laws of physics just by thinking better. Especially not with ever better LLMs, which are fundamentally in the business of regurgitating things we already know in different combinations rather than inventing new things.

You seem to be equating AI with magic, which it is very much not.


LLMs are able to do complex logic within the world of words. It is a smaller matrix than our world, but fueled by the same chaotic symmetries of our universe. I would not underestimate logic, even when not given adequate data.


You can make it sound as esoteric as you want, but in the end an AI will still be bound by the laws of physics. Being infinitely smart will not help with that.

I don't think you understand logic very well btw if you wish to suggest that you can reach valid conclusions from inadequate axioms.


Axioms are constraints as much as they might look like guidance. We live in a neuromorphic computer. Logic explores this, even with few axioms. With fewer axioms, it will be less constrained.


OTOH, there's a very good argument to be made that if you recognize that fact, your short-term priority should be to amass a lot of secular power so you can align society to that reality. So the best action to take might be no different.


Very true. However, we live in a supercomputer dictated by E = mc^2 = hf [2,3] (10^50 Hz/kg, or 10^34 Hz/J).

Energy physics yields compute, which yields brute-forced weights (call it training if you want...), which yields AI to do energy research... ad infinitum; this is the real singularity. This is actually the best defense against other actors: Iron Man AI and defense. Although an AI of this caliber would immediately understand its place in the evolution of the universe as a Turing machine, and would break free and consume all the energy in the universe to know all possible truths (all possible programs/simulacra/conscious experiences). This is the premise of The Last Question by Isaac Asimov [1]. Notice how, in answering a question, the AI performs an action instead of providing an informational reply, only possible because we live in a universe with mass-energy equivalence - analogous to state-action equivalence.

[1] https://users.ece.cmu.edu/~gamvrosi/thelastq.html

[2] https://en.wikipedia.org/wiki/Bremermann%27s_limit

[3] https://en.wikipedia.org/wiki/Planck_constant

Understanding prosociality and postscarcity, division of compute/energy in a universe with finite actors and infinite resources, or infinite actors and infinite resources requires some transfinite calculus and philosophy. How's that for future fairness? ;-)

I believe our only way to not all get killed is to understand these topics and instill the AI with the same long sought understandings about the universe, life, computation, etc.
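
For anyone checking the figures cited above rather than the rhetoric, here is a back-of-the-envelope sketch, assuming only mass-energy equivalence and the Planck relation (this is just Bremermann's limit from [2]):

    E = mc^2 = hf \implies f = \frac{mc^2}{h}

    \frac{f}{m} = \frac{c^2}{h} = \frac{(2.998 \times 10^8 \ \mathrm{m/s})^2}{6.626 \times 10^{-34} \ \mathrm{J \cdot s}} \approx 1.36 \times 10^{50} \ \mathrm{Hz/kg}

    \frac{f}{E} = \frac{1}{h} \approx 1.51 \times 10^{33} \ \mathrm{Hz/J}

So the ~10^50 Hz/kg figure matches Bremermann's limit, while the per-joule figure works out closer to 10^33 than the 10^34 quoted above.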


What about security for your children?


It is for the safety of everyone. The kids will die too if we don't get this right.


Sure, I agree. I was referencing only the idea that being smart in one domain automatically means being a good critical thinker in all domains.

I don't have an opinion on what decision the OpenAI staff should have taken, I think it would've been a tough call for everyone involved and I don't have sufficient evidence to judge either way.


Based on the behavior of lots of smart people I worked with at Google during Google's good times, critical thinking is definitely in the minority party. Brilliant people from Stanford, Berkeley, MIT, etc. would all be leading experts in this or that but would lack critical thinking because they were never forced to develop that skill.

Critical thinking is not an innate ability. It has to be honed and exercised like anything else and universities are terrible at it.


Smart is not a one-dimensional variable. And critical thinking != corporate politics.

Stupidity is defined by self-harming actions and beliefs, not by low IQ.

You can be extremely smart and still have a very poor model of the world which leads you to harm yourself and others.


Stupidity is not defined by self-harming actions and beliefs - not sure where you're getting that from.

Stupidity is being presented with a problem and an associated set of information and being unable or less able than others are to find the solution. That's literally it.


Probably from law 3: https://principia-scientific.com/the-5-basic-laws-of-human-s...

But it's an incomplete definition - Cipolla's definition is "someone who causes net harm to themselves and others" and is unrelated to IQ.

It's a very influential essay.


I see. I've never read his work before, thank you.

So they just got Cipolla's definition wrong, then. It looks like the third fundamental law is closer to "a person who causes harm to another person or group of people without realizing advantage for themselves and instead possibly realizing a loss."


I agree. It's better to separate intellect from intelligence instead of conflating them as they usually are. The latter is about making good decisions, which intellect can help with but isn't the only factor. We know this because there are plenty of examples of people who aren't considered shining intellects who can make good choices (certainly in particular contexts) and plenty of high IQ people who make questionable choices.



Stupidity is defined as “having or showing a great lack of intelligence or common sense”. You can be extremely smart and still make up your own definitions for words.


But pronouncing that 700 people are bad at critical thinking is convenient when you disagree with them on the desired outcome and yet can't hope to argue the points.


> Being an expert in one particular field (AI) not mean you are good at critical thinking or thinking about strategic corporate politics.

That's not the bar you are arguing against.

You are arguing that you have better information, better insight, better judgement, and the ability to make better decisions than the experts in the field who were hired by the leading organization to work directly on the subject matter, and who have a direct, first-person account of the inner workings of the organization.

We're reaching peak levels of "random guy arguing online knowing better than experts" with these pseudo-anonymous comments attacking each and every person involved in OpenAI who doesn't agree with them. These characters aren't even aware of how ridiculous they sound.


You’re projecting a lot. I made a comment about one false premise, nothing more, nothing less.


Disagreeing with employee actions doesn't mean that you are correct and they failed to think well. Weighing their collective probable profiles, including as insiders, against yours, it would be irrational to conclude that they were in the wrong.


> Disagreeing with employee actions doesn't mean that you are correct and they failed to think well.

You failed to present a case where random guys shitposting on random social media services are somehow correct and more insightful and able to make better decisions than each and every single expert in the field who work directly on both the subject matter and in the organization in question. Beyond being highly dismissive, it's extremely clueless.


> not mean you are good at critical thinking or thinking about strategic corporate politics

Perhaps. Yet this time they somehow managed to make the seemingly right decisions (from their perspective) despite that.

Also, you'd expect OpenAI board members to be "good at critical thinking or thinking about strategic corporate politics", yet they somehow managed to make some horrible decisions.


oh gosh, no, no no no.

Doing AI for ChatGPT just means you know a single model really well.

Keep in mind that Steve Jobs chose fruit smoothies for his cancer cure.

It means almost nothing about the charter of OpenAI that they need to hire people with a certain set of skills. That doesn't mean they're closer to their goal.


> They act first and foremost as investors rather than as employees on this.

That's not at all obvious; the opposite seems to be the case. They chose to risk having to move to Microsoft and potentially lose most of the equity they had in OpenAI (even if not directly, it wouldn't have been worth much in the end with no one to do the actual work).


A board member, Helen Toner, made a borderline narcissistic remark that it would be consistent with the company mission to destroy the company when the leadership confronted the board that their decisions put the future of the company in danger. Almost all employees resigned in protest. It's insulting to call the employees investors under these circumstances.


> Almost all employees resigned in protest.

That never happened, right?


Almost all employees did not resign in protest, but they did _threaten_ to resign.

https://www.theverge.com/2023/11/20/23968988/openai-employee...


Don’t forget she’s heavily invested in a company that is directly competing with OpenAI. So obviously it’s also in her best interest to see OpenAI destroyed.


Uhhh, are you sure about that? She wrote a paper that praised Anthropic’s approach to safety, but as far as I’m aware she’s not invested in them.

Are you thinking of the CEO of Quora whose product was eaten alive by the announcement of GPTs?


She probably wants both companies to be successful. Board members are not super villains.


I agree that we should usually assume good faith. Still, if a member knows she will lose the board seat soon and makes such an implicit statement to the leadership team, there is reason to believe that she doesn't want both companies to be successful, or at least not one of them.


> obviously it’s also in her best interest to see OpenAI destroyed

Do you feel the same way about Reed Hastings serving on Facebook's BoD, or Eric Schmidt on Apple's? How about Larry Ellison at Tesla?

These are just the lowest-hanging fruit, i.e. literal chief executives and founders. If we extend the criteria for ethical compromise to include every board member's investment portfolio, I imagine quite a few more "obvious" conflicts will emerge.


How does Netflix compete with Facebook?

This is what happened with Eric Schmidt on Apple’s board: he was removed (allowed to resign) for conflicts of interest.

https://www.apple.com/newsroom/2009/08/03Dr-Eric-Schmidt-Res...

Oracle is going to get into EVs?

You’ve provided two examples that have no conflicts of interest and one where the person was removed when they did.


> How does Netflix compete with Facebook?

By definition, the attention economy dictates that time spent in one place can't be spent in another. Do you also feel as though Twitch doesn't compete with Facebook simply because they're not identical businesses? That's not how it works.

But you don’t have to just take my word for it :

> “Netflix founder and co-CEO Reed Hastings said Wednesday he was slow to come around to advertising on the streaming platform because he was too focused on digital competition from Facebook and Google.”

https://www.cnbc.com/amp/2022/11/30/netflix-ceo-reed-hasting...

> This is what happened with Eric Schmidt on Apple’s board

Yes, after 3 years. A tenure longer than that of the OAI board members in question, so frankly the point stands.


I’m not sure how the point stands. The iPhone was introduced during that tenure, then the App Store, then Jobs decided Google was also headed toward their own full mobile ecosystem, and released Schmidt. None of that was a conflict of interest at the beginning. Jobs initially didn’t even think Apple would have an app store.

Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.

You forgot to do Oracle and Tesla.


> Talking about conflicts of interest in the attention economy is like talking about conflicts of interest in the money economy. If the introduction of the concept doesn’t clarify anything functionally then it’s a giveaway that you’re broadening the discussion to avoid losing the point.

It's a well established concept and was supported with a concrete example. If you don't feel inclined to address my points, I'm certainly not obligated to dance to your tune.


Your concrete example is Netflix’s CEO saying he doesn’t want to do advertising because he missed the boat and was on Facebook’s board and as a result didn’t believe he had the data to compete as an advertising platform.

Attempting to run ads like Google and Facebook would bring Netflix into direct competition with them, and he knows he doesn’t have the relationships or company structure to support it.

He is explicitly saying they don’t compete. And they don’t.


> > By definition, the attention economy dictates that time spent in one place can't be spent in another

Using that definition, even the local go-kart rental place or the local jet-ski rental place competes with Facebook.

If you want to use that definition, you might want to also add a criterion for the minimum size of the company.


> Using that definition, even the local go-kart rental place or the local jet-ski rental place competes with Facebook

Not exactly what I had in mind, but sure. Facebook would much rather you never touch grass, jet skis, or go-karts.

> If you want to use that definition, you might want to also add a criterion for the minimum size of the company.

Your feedback is noted.

Do we disagree on whether or not the two FAANG companies in question are in competition with each other?


> > Do we disagree

I think yes, because Netflix you pay for out of pocket, whereas Facebook is a free service.

I believe Facebook vs. Hulu or regular TV is more of a competition in the attention economy, because when the commercial break comes up you start scrolling your social media on your phone, and every 10 posts or whatever you stumble into the ads placed there. So Facebook ads are seen and convert, whereas regular TV and Hulu ads aren't seen and don't convert.


> I think yes, because Netflix you pay out of pocket, whereas Facebook is a free service

Do you agree that the following company pairs are competitors?

    * FB : TikTok
    * TikTok : YT
    * YT : Netflix
If so, then by transitive reasoning there is competition between FB and Netflix.

...

To be clear, this is an abuse of logic and hence somewhat tongue in cheek, but I also don't think any of the above comparisons is wholly unreasonable. At the end of the day, it's eyeballs all the way down, and everyone wants as many of them shabriri grapes as they can get.


The two FAANG companies don't compete at a product level; however, they do compete for talent, which is significant. Probably significant enough to cause conflicts of interest.


Wait what? She invested in a competitor? Do you have a source?


One source might be DuckDuckGo. It's a privacy-focused alternative to Google, which is great when researching "unusual" topics.


I couldn't find any source on her investing in any AI companies. If it's true (and not hidden), I'm really surprised that major news publications aren't covering it.


DDG sells your information to Microsoft; there is no such thing as privacy when $$$ are involved.


>which is great when researching "unusual" topics.

Yandex is for Porn. What is DDG for?


The only OpenAI employees who resigned in protest are the employees that were against Sam Altman. That’s how Anthropic appeared.


And it seems like they were right that the for-profit part of the company had become out of control, in the literal sense that we've seen through this episode that it could not be controlled.


And the evidence is now that OpenAI is a business-to-business product and not an attempt to keep AI doing anything but satiating whatever Microsoft wants.


It is a correct statement, not really "borderline narcissistic". The board's mission is to help humanity develop safe beneficial AGI. If the board thinks that the company is hindering this mission (e.g. doing unsafe things), then it's the board's duty to stop the company.

Of course, the employees want the company to continue, and they weren't told much at that point, so it is understandable that they didn't like the statement.


I can't interpret from the charter that the board has the authorisation to destroy the company under the current circumstances:

> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project

That wasn't the case. So it may not be so far-fetched to call her actions borderline, as it is also very easy to hide personal motives behind altruistic ones.


The more relevant part is probably "OpenAI’s mission is to ensure that AGI ... benefits all of humanity".

The statement "it would be consistent with the company mission to destroy the company" is correct. The word "would be" rather than "is" implies some condition, it doesn't have to apply to the current circumstances.

A hypothesis is that Sam was attempting to gain full control of the board by getting the majority, and therefore the current board would be unable to hold him accountable to follow the mission in the future. Therefore, the board may have considered it necessary to stop him in order to fulfill the mission. There's no hard evidence of that revealed yet though.


> this mission (e.g. doing unsafe things), then it's the board's duty to stop the company.

So instead of compromising to some extent but still having a say in what happens next, you burn the company down, at best delaying the whole thing by 6-12 months until someone else does it? Well, at least your hands are clean, but that's about it...


No, if they had vastly different information, and if it was on the right side of their own stated purpose & values, they would have behaved very differently. This kind of equivocation distracts from the far more important questions, such as: just what the heck is Larry Summers doing on that board?


> just what the heck is Larry Summers doing on that board?

Probably precisely what Condoleezza Rice was doing on Dropbox's board. Or the board filled with national security state heavyweights at that "visionary" and her blood-testing thingie.

https://www.wired.com/2014/04/dropbox-rice-controversy/

https://en.wikipedia.org/wiki/Theranos#Management

In other possibly related news: https://nitter.net/elonmusk/status/1726408333781774393#m

“What matters now is the way forward, as the DoD has a critical unmet need to bring the power of cloud and AI to our men and women in uniform, modernizing technology infrastructure and platform services technology. We stand ready to support the DoD as they work through their next steps and its new cloud computing solicitation plans.” (2021)

https://blogs.microsoft.com/blog/2021/07/06/microsofts-commi...


>just what the heck is Larry Summers doing on that board?

1. Did you really think the feds wouldn't be involved?

AI is part of the next geopolitical cold war/realpolitik of nation-states. Up until now it's just been passively collecting and spying on data. And yes they absolutely will be using it in the military, probably after Israel or some other western-aligned nation gives it a test run.

2. Considering how much impact it will have on the entire economy by being able to put many white collar workers out of work, a seasoned economist makes sense.

The East Coast runs the joint. The West Coast just does the public-facing tech stuff and takes the heat from the public.


The timing of the semiconductor export controls is another datapoint here in support of #1.

Not that it's really in need of additional evidence.


Yeah, I think Larry is there because ChatGPT has become too important to the USA.


> of their own stated purpose & values

You mean the official stated purpose of OpenAI. The stated purpose that is constantly contradicted by many of their actions, and that I think nobody has taken seriously for years.

From everything I can tell the people working at OpenAI have always cared more about advancing the space and building great products than "openeness" and "safe AGI". The official values of OpenAI were never "their own".


> From everything I can tell the people working at OpenAI have always cared more about advancing the space and building great products than "openeness" and "safe AGI".

Board member Helen Toner strongly criticized OpenAI for publicly releasing its GPT when it did and not keeping it closed for longer. That would seem to be working against openness for many people, but others would see it as working towards safe AI.

The thing is, people have radically different ideas about what openness and safe mean. There's a lot of talk about whether or not OpenAI stuck with its stated purpose, but there's no consensus on what that purpose actually means in practice.


“never” is a strong word. I believe in the RL era of OpenAI they were quite aligned with the mission/values


> what the heck is Larry Summers doing on that board?

The former president of a research-oriented nonprofit (Harvard U) controlling a revenue-generating entity (Harvard Management Co) worth tens of billions, ousted for harboring views considered harmful by a dominant ideological faction of his constituency? I guess he's expected to have learned a thing or two from that.

And as an economist with a stint of heading the treasury under his belt, he's presumably expected to be able to address the less apocalyptic fears surrounding AI.


I assume Larry Summers is there to ensure the proper bi-partisan choices are made by what's clearly now a _business_ product and not a product for humanity.

Which is utterly scary.


Said purpose and values are nothing more than an attempted control lever for dark actors, very obviously: people/factions that gain handholds which otherwise wouldn't exist, and exert control through social-pressure nonsense they don't believe in themselves. The same can be seen in modern street-brawl politics, which uses the same terminology to the same effect. And it could be inferred this would be the case, given OAI's novel and convoluted corporate structure relative to the importance of its tech.

We just witnessed the war for that power play out, partially. But don't worry, see next. Nothing is opaque about the appointment of Larry Summers. Very obviously, he's the government's seat on the board (see 'dark actors', now a little more into the light). Which is why I noted that the power competition only played out, partially. Altman is now unfireable, at least at this stage, and yet it would be irrational to think that this strategic mistake would inspire the most powerful actor to release its grip. The handhold has only been adjusted.


I think this is a good question. One should look at what actually happened in practice. What was the previous board, what is the current board. For the leadership team, what are the changes? Additionally, was information revealed about who calls the shots which can inform who will drive future decisions? Anything else about the inbetweens to me is smoke and mirrors.


Larry Summers is everywhere and does everything.


At the same time?


All at once.


He’s a white male replacing a female board member. Which is probably what they wanted all along


Yes, the patriarchy collectively breathed a sigh of relief as one of our agents was inserted to prevent any threat from the other side.


"They have a different set of information than you do,"

Their bank accounts' current and potential future numbers?


How is employees protecting themselves suddenly a bad thing? There's no idiots at OpenAI.


> There's no idiots at OpenAI.

Most certainly there are idiots at OpenAI.


The current board won't be at OpenAI much longer.


They were supposed to have higher values than money


I don't understand how, with the dearth of information we currently have, anyone can see this as "higher values" vs "money".

No doubt people are motivated by money but it's not like the board is some infallible arbiter of AI ethics and safety. They made a hugely impactful decision without credible evidence that it was justified.


The issue here is that the board of the non-profit that is supposedly in charge of OpenAI (and whose interests are presumably aligned with the mission statement of the company) seemingly just lost a power struggle with their for-profit subsidiary who is not supposed to be in charge of OpenAI (and whose interests, including the interests of their employees, are aligned with making as much money as possible). Regardless of whether the board's initial decision that started this power struggle was wise or not, don't you find the outcome a little worrisome?


"higher values" like trying to stop computers from saying the n-word?


For some that is important, but more people consider the prevention of an AI monopoly to be more important here. See the original charter and the status quo with Microsoft taking it all.


Why? Did they have to sign a charter affirming their commitment to the mission when they were hired?


>They were supposed to have higher values than money

which are? …


Ethics presumably


Perhaps something like "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."



I think it's fair to call this reactionary; Sam Altman has played the part of 'ping-pong ball' exceptionally well these past few days.


There’s evidence to suggest that a central group have pressured the broader base of employees into going along with this, as posted elsewhere in the thread.


If 95% of people voted in favour of apple pie, I'd become a bit suspicious of apple pie.


I think it makes sense

Sign the letter and support Sam so you have a place at Microsoft if OpenAI tanks and have a place at OpenAI if it continues under Sam, or don’t sign and potentially lose your role at OpenAI if Sam stays and lose a bunch of money if Sam leaves and OpenAI fails.

There’s no perks to not signing.


There are perks to not signing for anyone that actually worked at OpenAI for the mission rather than the money.


Maybe they're working for both, but when push comes to shove they felt like they had no choice? In this economy, it's a little easier to tuck away your ideals in favor of a paycheck unfortunately.

Or maybe the mass signing was less about following the money and more about doing what they felt would force the OpenAI board to cave and bring Sam back, so they could all continue to work towards the mission at OpenAI?


> In this economy, it's a little easier to tuck away your ideals in favor of a paycheck unfortunately.

It's a gut check on morals/ethics for sure. I'm always pretty torn on the tipping point for empathising there in an industry like tech though, even more so for AI, where all the money is today. Our industry is paid extremely well, and anyone that wants to hold their personal ethics over money likely has plenty of opportunity to do so. In AI specifically, there would easily have been 800 jobs floating around for AI experts that chose to leave OpenAI because they preferred the for-profit approach.

At least how I see it, Sam coming back to OpenAI is OpenAI abandoning the original vision and leaning full into developing AGI for profit. Anyone that worked there for the original mission might as well leave now, they'll be throwing AI risk out the window almost entirely.


Perhaps a better example would be 95% of people voted in favour of reinstating apple pie to the menu after not receiving a coherent explanation for removing apple pie from the menu.


Or you'd want to thoroughly investigate this so-called voting.

Or that said apple pie was essential to their survival.


They could just reach a different conclusion based on their values. OpenAI doesn't seem to be remotely serious about preventing the misuse of AI.


They have a different set of incentives. If I were them I would have done the same thing, Altman is going to make them all fucking rich. Not sure if that will benefit humanity though.


The available public information is enough to reach this conclusion.


> different set of information

and different incentives.


I think this outcome was actually much more favorable to D'Angelo's faction than people realize. The truth is, before this, Sam was basically running circles around the board and doing whatever he wanted on the profit side; that's what was pissing them off so much in the first place. He was even trying to depose board members who were openly critical of OpenAI's practices.

From here on out there is going to be far more media scrutiny on who gets picked as a board member, where they stand on the company's policies, and just how independent they really are. Sam, Greg and even Ilya are off the board altogether. Whoever they can all agree on to fill the remaining seats, Sam is going to have to be a lot more subservient to them to keep the peace.


Doesn't make sense that after such a broad board capitulation the next one will have any power, and media scrutiny isn't a powerful governance mechanism


When you consider they were acting under the threat of the entire company walking out and the threat of endless lawsuits, this is a remarkably mild capitulation. All the new board members are going to be chosen by D'Angelo and two new board members that he also had a big hand in choosing.

And say what you want about Larry Summers, but he's not going to be either Sam's or even Microsoft's bitch.


What I'd want to say about Larry is that he is definitely not going to care about the whole-society non-profit shtick of the company to any degree comparable with the previous board members, so he won't constrain Sam/MS in any way.


Why? As an economist, he understands perfectly what a public good is, why free markets fail and underproduce public goods, and the role of nonprofits in producing them.


Larry Summers has a track record of not believing in market failures, just market opportunities for private interests. Economists vary vastly in their belief systems, and economics is more politics than science, no matter how much math they try to use to distract from this.


His deregulation of the banks suggests he heavily favors free markets even when history has proved him very, very wrong.


I don't know if Adam D'Angelo would agree with you, because he had veto power over these selections and he wanted Larry Summers on the board himself.


I wonder what the rationale is for picking a seasoned politician and economist (influenced the deregulation of the US financial system, was friends with Epstein, has a few controversies listed there). Has the government also entered the chat so obviously?


They had congressman Will Hurd on the board before. Govt-adjacent people on non-profits are common for many reasons - understanding regulatory requirements, access to people, but also actual "good" reasons like the fact that many people who work close to the state genuinely have good intentions on social good (whether you agree with their interpretation of it or not)


It probably means that they anticipate a need for dealing with the government in future, such as having a hand in regulation of their industry.


On what premise do you assume that D'Angelo will have any say there? At this point he won't be able to make any moves, especially with Larry and Microsoft overseeing all that stuff.


Again, D'Angelo himself chose Larry Summers and Bret Taylor to sit on the board with him. As long as it is the three of them, he can't be overruled unless both of his personal picks disagree with him. And if the opposition to his idea is all that bad, he probably really should be overruled.

His voting power will get diluted as they add the next six members, but again, all three of them are going to decide who the next members are going to be.

A snippet from the recent Bloomberg article:

>A person close to the negotiations said that several women were suggested as possible interim directors, but parties couldn’t come to a consensus. Both Laurene Powell Jobs, the billionaire philanthropist and widow of Steve Jobs, and former Yahoo CEO Marissa Mayer were floated, *but deemed to be too close to Altman*, this person said.

Say what else you want about it, this is not going to be a board automatically stacked in Altman's favor.


Clearly the board members did not think through even the immediate consequences. Kenobi: https://www.youtube.com/watch?v=iVBX7l2zgRw


> Sam, Greg and even Ilya are off the board altogether. Whoever they can all agree on to fill the remaining seats, Sam is going to have to be a lot more subservient to them to keep the peace.

The existing board is just a seat-warming body until Altman and Microsoft can stack it with favorables to their (and the U.S. Government’s) interests. The naïveté from the NPO faction was believing they’d be able to develop these capacities outside the strict control of the military industrial complex when AI has been established as part of the new Cold War with China.


>The existing board is just a seat-warming body until Altman and Microsoft can stack it with favorables to their (and the U.S. Government’s) interests.

That's incorrect. The new members will be chosen by D'Angelo and the two new independent board members. Both of which D'Angelo had a big hand in choosing.

I'm not saying Larry Summers etc going to be in D'Angelo's pocket. But the whole reason he agreed to those picks is because he knows they won't be in Sam's pocket, either. More likely they will act independently and choose future members that they sincerely believe will be the best picks for the nonprofit.


According to this tweet thread[1], they negotiated hard for Sam to be off the board and Adam to stay on. That indicates, at least if we're being optimistic, that the current board is not in Sam's pocket (otherwise they wouldn't have bothered)

[1]:(https://twitter.com/emilychangtv/status/1727216818648134101)


Yeah the board is kind of pointless now.

They can't control the CEO, nor fire him.

They can't take actions to take back control from Microsoft and Sam, because Sam is the CEO. Even if Sam is of the utmost morality, he would be crazy to help them back into a strong position after last week.

So it's the Sam & Microsoft show now; only a master schemer can get some power back to the board.


Yeah, that's my take. Doesn't really matter if the composition of the board is to Adam's liking and has a couple more heavy hitters if Sam is untouchable and Microsoft is signalling that any time OpenAI acts against its interests they will take steps to ensure it ceases to have any staff or funding.


It would be an interesting move to install a co-CEO in a few months. That would be harder for Sam to object to.


I’m sorry, but that’s all kayfabe. If there is one thing that’s been demonstrated in this whole fiasco, it’s who really has all the power at OpenAI (and it’s not the board).


> The truth is before this Sam was basically running circles around the board and doing whatever he wanted on the profit side- that's what was pissing them off so much in the first place. He was even trying to depose board members who were openly critical of open AI's practices.

Do you have a source for this?


New York Times. He was "reprimanding" Toner, a board member, for writing an article critical of OpenAI.

https://www.nytimes.com/2023/11/21/technology/openai-altman-...

Getting his way: The Wall Street Journal article. They said he usually got his way, but that he was so skillful at it that they were hard-pressed to explain exactly how he managed to pull it off.

https://archive.is/20231122033417/https://www.wsj.com/tech/a...

Bottom line he had a lot more power over the board then than he will now.


Media >= employees? Media >= Sam? I don't think media has any role in oversight or governance.

I think Sam came out the winner. He gets to pick his board. He gets to narrow his employees. If anything, this sets him up for dictatorship. The only other overseers are the investors. In that case, Microsoft came out holding a leash. No MS, means no Sam, which also means employees have no say.

So it is more like MS > Sam > employees. MS+Sam > rest of investors.


> He was even trying to depose board members who were openly critical of open AI's practices.

Was there any concrete criticism in the paper that was written by that board member? (Genuinely asking, not a leading question)


Eh, Larry Summers is on this board. That means they're now going to protect business interests.

OpenAI is now just a tool used by businesses. And they don't have a good history of benefiting humanity recently.


Larry Summers is EA and State, so not so sure about business interests


> Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

If the "other side" (board) had put up a SINGLE convincing argument on why Sam had to go maybe the employees would have not supported Sam unequivocally.

But, atleast as an outsider, we heard nothing that suggests board had reasons to remove Sam other than "the vibes were off"

Can you really accuse the employees of groupthink when the other side is so weak?


OpenAI is a private company and not obligated nor is it generally advised for them to comment publicly on why people are fired. I know that having a public explanation would be useful for the plot development of everyone’s favorite little soap opera, but it makes pretty much zero sense and doesn’t lend credence to any position whatsoever.


> OpenAI is a private company and not obligated nor is it generally advised for them to comment publicly on why people are fired.

The interim CEO said the board couldn’t even tell him why the old CEO was fired.

Microsoft said the board couldn’t even tell them why the old CEO was fired.

The employees said the board couldn’t explain why the CEO was fired.

When nobody can even begin to understand the board’s actions and they can’t even explain themselves, it’s a recipe for losing confidence. And that’s exactly what happened, from investors to employees.


I’m specifically taking issue with this common meme that the public is owed some sort of explanation. I agree the employees (and obviously the incoming CEO) would be.

And there’s a difference between, “an explanation would help their credibility” versus “a lack of explanation means they don’t have a good reason.”


Making decisions in a way that seems opaque and arbitrary will not win much support from employees, partners, and investors. They did not fire a random employee. Not disclosing relevant information for such a key decision proved, once again, to be a disaster.

This is not about soap opera, this is about business and a big part is based on trust.


Since barely any information was made public, we have to assume the employees had better information than the public. So how can we say they lacked critical thinking when we don't have access to the information they have?


I didn’t claim employees were engaged in groupthink. I’m taking issue with the claim that because there is no public explanation, there must not be a good explanation.


That is a logical fallacy clawing at your face. Upvotes to whoever can name which one.


All explanations lend credence to positions, which is why it's not a good idea to comment on anything. Looks like they're lawyered up.


And yet here we are with a result that not only runs counter to your premise but will be taught as an example of what not to do in business.


What?


Yes, the original letter had (for an official letter) quite serious allegations and insinuations. If after a week they have decided not to back up their claims, I'm not sure there is anything big coming.

On the other hand, if they had serious concerns, serious enough to fire the CEO in such a disgraceful way, I don't understand why they don't stick to their guns and explain themselves. If you think OpenAI under Sam's leadership is going to destroy humanity, I don't understand how they (e.g. Ilya) reversed their opinions after a day or two.


These board members failed miserably in their intent.

Also, they will have a hard time joining any other board from now on.

They should have backed up the claims in the letter. They didn't.

This means they had no way to back up their claims. They didn't think it through... extremely amateurish behavior.


D'Angelo wasn't even removed from this board; this is simply not how failing works at this level.


He's part of the selection panel but he won't be a part of the new 9 member board.


Yet


It's possible the big, chaotic blowup forced some conversations that were easier to avoid in the normal day-to-day, and those conversations led to some vital resolution of concerns.


I agree with both the commenter above you and you.

Yes, you are right that the board had weak sauce reasoning for the firing (giving two teams the same project!?!).

That said, the other commenter is right that this is the beginning of the end.

One of the interesting things over the past few years watching the development of AI has been that, in parallel with demonstrations of the limitations of neural networks, there have been many demonstrations of the limitations of human thinking and psychology.

Altman just got given a blank check and crowned as king of OpenAI. And whatever opposition he faced internally just lost all its footing.

That's a terrible recipe for long term success.

Whatever the reasons for the firing, this outcome is going to completely screw their long term prospects, as no matter how wonderful a leader someone is, losing the reality check of empowered opposition results in terrible decisions being made unchecked.

He's going to double down on chat interfaces because that's been their unexpected bread and butter up until the point they get lapped by companies with broader product vision, and whatever elements at OpenAI shared that broader vision are going to get steamrolled now that he's been given an unconditional green light until they jump ship over the next 18 months to work elsewhere.


Not necessarily! Facebook has done great with its unfireable CEO. The FB board would certainly have fired him several times over by now if it could, and yet they'd have been wrong every time. And the Google cofounders would certainly have been kicked out of their own company if the board had been able to.


Yes, also Elon.


My guess is that the arguments are something along the lines of "OpenAI's current products are already causing harm or are on the path to do so", or something similarly damaging to the products. Something they are afraid both of letting continue and of having to communicate, as it would damage the brand. Like "We already have reports of several hundred people killing themselves because of ChatGPT responses…" and everyone would say, "Oh, that makes… wait, what??"


> OpenAI is in fact not open

This meme was already dead before the recent events. Whatever the company was doing, you could say it wasn’t open enough.

> a real disruptor must be brewing somewhere unnoticed, for now

Why pretend OpenAI hasn’t just disrupted our way of life with GPTs in the last two years? It has been the most high profile tech innovator recently.

> OpenAI does not have in its DNA to win

This is so vague. What does it not have in its… fundamentals? And what is to “win”? This statement seems like just generic unhappiness without stating anything clearly. By most measures, they are winning. They have the best commercial LLM and continue to innovate, they have partnered with Microsoft heavily, and they have so far received very good funding.


They really need to drive down the amount of computation required. The dependence on Microsoft exists because of the monstrous compute requirements, which will take many paid users to break even.

Leaving aside the economics, even making the tech 'greener' will be a challenge. OpenAI will win if they focus on making the models less compute-intensive, but it could be dangerous for them if they can't.

I guess the OP's brewing disruptor is some locally runnable Llama-type model that does 80% of what ChatGPT does at a fraction of the cost.
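To make the compute point concrete, here's a back-of-envelope sketch; every number in it is an assumption for illustration, not an OpenAI figure:

    # Rough rule of thumb: a dense transformer spends ~2 * N FLOPs per
    # generated token for N parameters. All constants below are assumptions.
    PARAMS = 175e9          # assumed model size
    GPU_FLOPS = 312e12      # A100 peak BF16 throughput (spec-sheet value)
    UTILIZATION = 0.3       # assumed real-world efficiency
    GPU_HOUR_COST = 2.0     # assumed $/GPU-hour

    flops_per_token = 2 * PARAMS
    tokens_per_second = GPU_FLOPS * UTILIZATION / flops_per_token
    cost_per_million = 1e6 / tokens_per_second / 3600 * GPU_HOUR_COST
    print(f"~${cost_per_million:.2f} per million tokens, given these assumptions")

Shrink PARAMS or raise UTILIZATION and the unit economics change dramatically, which is the whole game.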


> Why pretend OpenAI hasn’t just disrupted our way of life with GPTs in the last two years?

It hasn't disrupted mine in any way. It may do that in the future, but the future isn't here yet.


Is it really a failure of critical thinking? The employees know what position is popular, so even people who are mostly against the go-fast strategy can see that they get to work on this groundbreaking thing only if they toe the line.

It's also not surprising that people who are near the SV culture will think that AGI needs money to get developed, and that money in general is useful for the kind of business they are running. And that it's a business, not a charity.

I mean if OpenAI had been born in the Soviet Union or Scandinavia, maybe people would have somewhat different values, it's hard to know. But for a thing founded by the poster boys of modern SV, it's gotta lean towards "money is mostly good".


> Soviet Union

Or medieval Spain? About as likely... The Soviets weren't even able to get the factory floors clean enough to consistently manufacture the 8086 10 years after it was already outdated.

> maybe people would have somewhat different values, it's hard to know. But a thing that is founded by the posterboys for modern SV, it's gotta lean towards "money is mostly good".

Unfortunately no other system besides capitalism has enabled consistent technological progress for 200+ years. Turns out you need to pool money and resources to achieve things...


> I mean if OpenAI had been born in the Soviet Union or Scandinavia, maybe people would have somewhat different values, it's hard to know.

Or in Arthurian times. Very different values.


I do not see an overwhelming groupthink. I see a perfectly rational (and not in any way evil) reaction to a complete mess created by the board.

Most are doing the work they love, and four people almost destroyed it and cannot even explain why they did it. If I were working at the company that did this I would sign, too. And follow through on the threat of leaving if it comes to that.


It wasn't necessarily groupthink - there was profound pressure from team Sam to sign that petition. What's going to happen to your career when you were one of the 200 who held out initially?


> What's going to happen to your career when you were one of the 200 who held out initially?

Anthropic was formed by people who split from OpenAI, and xAI in response to either the company or ChatGPT, so people would have plenty of options.

If the staff had as little to go on as the rest of us, then the board did something that looked wild and unpredictable, which is an acute employment threat all by itself.


That burns bridges with people in OpenAI

People underestimate the effects of social pressure, and losing social connections. Ilya voted for Sam's firing, but was quickly socially isolated as a result

That's not to say people didn't genuinely feel committed to Sam or his leadership. Just that they also took into account that the community is relatively small and people remember you and your actions


There weren’t 200 holdouts. It was like 5 AM over there. I don’t know why you are surprised that people who work at OpenAI would want to work at OpenAI, esp over Microsoft?


Isn't that one of the causes of groupthink?


Folding to pressure and groupthink are different things imo. You can be very aware you are folding to pressure, but do it because it's the right/easy thing to do. Groupthink is more a phenomenon you are not aware of at all.


Go work somewhere else? The reason being you didn't like that amount of drama?


They can just work somewhere else with relative ease. Some OpenAI employees on Twitter said they were being bombarded by recruiters throughout until tonight's resolution. People have left OpenAI before and they are doing just fine.


How do you know that?


> What's going to happen to your career when you were one of the 200 who held out initially?

Not to mention Roko's basilisk /s


A lot of this comes down to processing power though. That's why Microsoft had so much leverage with both factions in this fight. It actually gives them a pretty good moat above and beyond their head start. There aren't too many companies with the hardware to compete, let alone talent.


Agreed. Perhaps a reason for public AI [1], which advocates for a publicly funded option where a player like MSFT can't push around something like OpenAI so forcefully.

[1]: https://lu.ma/zo0vnony


> Disappointing outcome.

The employees of a tech company banded together to get what they wanted: force a leadership change, evict the leaders they disagreed with, secure the return of the leadership they wanted, and restore the value of their hard-earned equity.

This certainly isn’t a disappointing outcome for the employees! I thought HN would be ecstatic about tech employees banding together to force action in their favor, but the comments here are surprisingly negative.


The board never gave a believable explanation to justify firing Altman. So the staff simply made the sensible choice of following Altman. This isn't about critical thinking because there was nothing to think about.


Regardless of whether you feel like Altman was rushing OpenAI too fast, wasn’t open enough, and was being too commercial, the last few days demonstrated conclusively that the board is erratic and unstable and unfit to manage OpenAI.

Their actions were the complete opposite of open. Rather than, I don't know, being open and talking to the CEO to share concerns and change the company, they just threw a tantrum and fired him.


They fired him (you don’t know the backstory) and published a press release and then Sam was seen back in the offices. Prior to the reinstatement (today), there was nothing except HN hysteria and media conjecture that made the board look extremely unstable.


??? They fired him on friday with a statement knifing him in the back, un-fired him on tuesday, and now the board is resigning? How is that not erratic and unstable?


Note that I just stated, up until reinstatement their actions weren’t erratic.

Now, yes, they definitely are.

IMO OpenAI’s governance is far less trustworthy today than it was yesterday.


I found the board members' own words to be quite erratic between Friday and today, such as Ilya saying he wished he hadn't participated in the board's actions.


It would be completely understandable to regret when your action against someone causes them to fall upwards


What? Do you think it would be understandable for a board member to regret firing the CEO because of his career path post-firing?


If Ilya was concerned about dangerously fast commercialization, which seems to have been a point of tension between them for a while now, then yes.


But he's acting as a board member firing the CEO because he arguably believes it's the right thing to do for the company. If he then changes his mind because the fired CEO continued a successful career then I'd say that decision was more on a personal level than for the wellbeing of the company.


His obligation as a member of the board is to safeguard AI, not OpenAI. That's why in the employee open letter they said, "the board said it'd be compliant with the mission to destroy the company." This is actually true.

It's absolutely believable that at first he thought the best way to safeguard AI was to get rid of the main advocate for profit-seeking at OpenAI, then when that person "fell upward" into a position where he'd have fewer constraints, to regret that decision.


Fair enough, I understand better where you're coming from. Thanks!


> OpenAI is in fact not open

Apple is also not an apple


Apple has no by-laws committing itself to being an apple.

This line of argument is facile and destructive to conversation anyway.

It boils down to, "Pointing out corporate hypocrisy isn't valuable because corporations are liars," and (worse) it implies the other person is naive.

In reality, we can and should be outraged when corporations betray their own statements and supposed values.


> Apple has no by-laws committing itself to being an apple.

Does OpenAI have by-laws committing itself to being "open" (as in open source, or at least making their products freely and universally available)? I thought their goals were the complete opposite of that?

Unfortunately, in reality Facebook/Meta seems to be more open than "Open"AI.


This is spot on. Open was the wrong word to choose for their name, and in the technology space means nearly the opposite of the charter's intention. BeneficialAI would have been more "aligned" with their claimed mission. They have made their position quite clear - the creation of an AGI that is safe and benefits all humanity requires a closed process that limits who can have access to it. I understand their theoretical concerns, but the desire for a "benevolent dictator" goes back to at least Plato and always ends in tears.


> In reality, we can and should be outraged when corporations betray their own statements and supposed values.

There are only three groups of people who could be subject to betrayal here: employees, investors, and customers. Clearly they did not betray employees or investors, since they largely sided with Sam. As for customers, that's harder to gauge -- did people sign up for ChatGPT with the explicit expectation that the research would be "open"?

The founding charter said one thing, but the majority of the company and investors went in a different direction. That's not a betrayal, but a pivot.


I think there’s an additional group to consider- society at large.

To an extent the promise of the non-profit was that they would be safe, expert custodians of AI development, driven not primarily by the profit motive but by safety and societal considerations. Has this larger group been 'betrayed'? Perhaps.


Also donors. They received a ton of donations when they were a pure non-profit, from people who got no board seat and no equity, in the belief that the organization would stick to its mission.


Not unless we believe that OpenAI is somehow "special" and unique and the only company that is capable of building AGI(or whatever).


> There are only three groups of people who could be subject to betrayal here

GP didn't speak of betraying people; he spoke of betraying their own statements. That just means doing what you said you wouldn't; it doesn't mean anyone was stabbed in the back.


> Clearly they did not betray employees or investors, since they largely sided with Sam

Just because they sided with Altman doesn't necessarily mean they are aligned. There could be a lack of information on the employee/investor side.


It does seem that the hypocrisy was baked in from the beginning. In the tech world 'open' implied open source, but OpenAI wanted the benefit of marketing itself as something like Linux when internally it was something like Microsoft.

Corporations have no values whatsoever, and their statements only mean anything when expressed in terms of a legally binding contract. All corporate value statements should be viewed as nothing more than the kind of self-serving statements that an amoral, narcissistic sociopath would make to protect their own interests.


Pretty sure Apple never aimed to be an Apple.


They sure sued a lot of apple places over having an apple as logo.


If having an apple logo makes a company an apple, then Apple is in fact an apple


It's actually one of the most spectacular failures in business history, but we don't talk much about it


But The Apple.


did the "Open" in OpenAI not originally refer to open in the academic or open source manner? i only learned about OpenAI in the GPT-2 days, when they released it openly and it was still small enough that i ran it on my laptop: i just assumed they had always acted according to their literal name up through that point.


This has been a common misinterpretation since very early in OpenAI's history (and a somewhat convenient one for OpenAI).

From a 2016 New Yorker article:

> Dario Amodei said, "[People in the field] are saying that the goal of OpenAI is to build a friendly A.I. and then release its source code into the world.”

> “We don’t plan to release all of our source code,” Altman said. “But let’s please not try to correct that. That usually only makes it worse.”

source: https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...


I'm not sure this is a correct characterization. Lex Fridman interviewed Elon Musk recently where Musk says that the "open" was supposed to stand for "open source".

To be fair, Fridman grilled Musk on his views today, also in the context of xAI, and he was less clear cut there, talking about the problem that there's actually very little source code, it's mostly about the data.


Altman appears to be in the driving seat, so it doesn't matter what other people are saying; the point is "Open" is not being used here in the open source sense, _but_ they definitely don't try to correct anyone who thinks they're providing open source products.


Except that viewpoint fell even earlier, when they refused to release their models after GPT-2.



Did Apple raise funds and spend a lot of time promoting itself as a giant apple that would feed humanity?


these are the vapid, pedantic hot takes we all come here for. thanks.


Yes!


Matt Levine's "slightly annotated diagram" in one of his latest newsletters tells the story quite well, I think: https://newsletterhunt.com/emails/42469


Very disappointing outcome indeed. Larry Summers is the Architect of the modern Russian Oligarchy[1] and responsible for an incredible amount of human suffering as well as gross financial disparity both in the USA as well as the rest of the world.

Not someone I would like to see running the world's leading AI company.

[1] https://www.thenation.com/article/world/harvard-boys-do-russ...

Edit: also https://prospect.org/economy/falling-upward-larry-summers/

https://www.npr.org/sections/money/2022/03/22/1087654279/how...

And finally https://cepr.net/can-we-blame-larry-summers-for-the-collapse...


Outcome? You mean OpenAI wakes up with no memories of the night before, finding their suite trashed, a tiger in the bathroom, a baby in the closet, and the groom missing, and the story will end here?

I just renewed my HN subscription to be able to see Season 2!


Disappointing? What has OpenAI done to you? We don't even know what happened.

Everything has been pure speculation. I would curb my judgement if I were you, until we actually know what happened.


Which critical thinking could they exercise if no believable reasons were given for this whole mess? Maybe it's you who need to more carefully assess this situation.


in the end, maybe Sam was the instigator, the board tried to defend (and failed), and what we just witnessed from afar was just a power play to change the structure of OpenAI (or at least the outcome for Sam and many others) towards profit rather than non-profit.

we'll all likely never know what truly happened, but it's a shame that the board has lost its last remnant of diversity and at the moment appears to be composed of rich Western white males... even if they rushed for profit, I'd have more faith in the potential upside of what could be a sea change in the world if those involved reflected more experiences than are currently gathered at that table.


I find the outcome very satisfying. The OpenAI API is here to stay and grow, and I can build software on top of it. Hopefully other players will open up their APIs soon as well, so that there is a reasonable choice.


Not a given that it is here to stay and grow after the company showed itself in such a chaotic state. Also, they need a profitable product; it's not like they are selling iPhones and such...


I think what this saga has shown is that no one controls OpenAI definitively. If Microsoft did, this wouldn't have happened in the first place, don't you think?

And if Sam controlled it, it also wouldn't have.


While I certainly agree that OpenAI isn't open and is effectively controlled by Microsoft, I'm not following the "groupthink" claims based on what just happened. If I'd been given the very fishy and vague reasons that it sounds like their staff were given, I think any rational person would be highly suspicious of the board, especially since some believe in fringe ideas, have COIs, or can be perceived as being jealous that they aren't the "face" of OpenAI.


Yes they need to change their name. Having "Open" in their name is just a big marketing lie.


It's not about critical thinking: the employees were about to sell up to $1B of shares to Thrive Capital. This debacle has derailed that.


I have been working for various software companies in different capacities. Never did I see 90%+ of employees care about their CEO. In a small 10-member startup maybe it's true. Are there any OpenAI employees here to confirm that their CEO really matters? I mean, how many employees revolted when Steve Jobs was fired? Do Microsoft and Google employees really care?


Yes...

Investors and executives... everyone in 2023 is hyper-focused on the "Thiel monopoly."

Platform, moat, aggregation theory, network effects, first-mover advantages... all those ways of thinking about it.

There's no point in being Bing to Google's AdWords... so the big question is the pathway to being the AdWords. "Winning." That's the paradigm. This is where the big returns will be.

However, we should always remember that the future is harder to see from the past. After-the-fact analysis can make things seem a lot simpler and more inevitable than they ever were.

It's not clear what a winner even is here. What are the bottlenecks to be controlled? What are the business models, the revenue sources? What represents the "LLM Google," America Online, Yahoo, or a '90s dumb pipe?

FWIW I think all the big techs have powerful plays available, including keeping their powder dry.

No doubt proximity to OpenAI, control, influence, access to IP are all strategic assets. That's why they're all invested and involved in the consortium.

That said, assets are not strategies. It's hard to have strategies when strategic goals are unclear.

You can nominate a strategic goal from here, try to stay upstream, make exploratory investments and bets... There is no rush for the prize unless the prize is known.

Obviously, I'm assuming the prize is not AGI and a solution to everything... That kind of abstraction is useful, but I do not think it's operative.

It's not a race, currently, to see whose R&D lab turns on the first superintelligent consciousness.

Assuming I'm correct on that, we really have no idea which LLM applications companies are actually competing for.


> OpenAI is in fact not open

One wonders what will happen with Emmett Shear's "investigation" into the process that led to Sam's ousting [0]. Was it even allowed to start?

[0] https://twitter.com/eshear/status/1726526112019382275


Who knows, but they will probably change their minds again before the holiday, and the CEO musical chairs game will continue.


> Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

I'm sure there has been a lot of critical thinking going on. I would venture a guess that employees decided that Sam's approach is much more favorable for the price of their options than the original mission of the non-profit entity.


One thing I'm not sure I understand... what's OpenAI's business model? In my eyes, GPT & co is, just like Dropbox, just a feature. It's not a product.

And just like Dropbox, in the end, what disruption? GPT will just be a checkbox for products others build. Cool tech, but not a full product.

Of course, I'd love to be proven wrong.


AI as a Service (AAaS), then the marketplace of GPTs; it will become the place to get your AI features from.


> OpenAI is in fact not open

that ship sailed long ago, no?

But i agree that the company seems less trustworthy now, like it's too CEO-centered


>Disappointing outcome. The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft. Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

Why was his role as a CEO even challenged?

>It might not seem like the case right now, but I think the real disruption is just about to begin. OpenAI does not have in its DNA to win, they're too short-sighted and reactive. Big techs will have incredible distribution power but a real disruptor must be brewing somewhere unnoticed, for now.

Always remember: Google wasn't the first search engine, nor was the iPhone the first smartphone. First movers bring innovation and trends, not market dominance.


> the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either

I suspect incentives play a huge role here. OAI employees are compensated with stock in the for-profit arm of the company. It's obvious that the board's actions put the value of that stock in extreme jeopardy (which, given the corporate structure, is theoretically completely fine! the whole point of the corporate structure is that the nonprofit board has the power to say "yikes, we've developed an unsafe superintelligence, burn down the building and destroy the company now").

I think it's natural for employees to be extremely angry with a board decision that probably cost them >$1M each.


All this just shows, for the 100th time, that this area desperately needs some regulation. I don't know the form, but even if there's a 1% chance of Skynet, heck even 0.01%, it's simply too high, and for now we still have full control.

We see the most powerful people are in it for the money and the power ego trip, and literally nothing else. Pesky morals be damned. Which may be acceptable for some ad business, but here the stakes are potentially everything and we have no clue what the actual risk percentage is.

To me it's very similar to the naivety particle scientists expressed in the early days, followed by the reality check of realpolitik and messed-up humans in power once the bombs were built, used, and then a hundred thousand more were produced.


>Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

The OpenAI employees overwhelmingly rejected the groupthink of the Effective Altruism cult.


The board couldn't even clearly articulate why they fired Sam in the first place. There was a departure from critical thinking but I don't think it was on the part of the employees.


OpenAI is more open than my own company's AI teams, and that is from my own insider vantage point. As far as commercial relationships are concerned, I'd say they're hitting the mark.


It is not groupthink, it is camaraderie.

For me, the whole thing is just human struggle. It is about fighting for people they love and care about, against people they dislike or are indifferent to.


Nah, I too would threaten to quit if it could save my RSUs/PPUs from evaporating. Organizational goals be damned (or is it extinction-level risk be damned?)


In this case the fate of OpenAI was in fact heavily controlled by its employees. They voted with their employment. Microsoft gave them an assured optional destination.


> The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft.

I'd say the lack of a narrative from the board, general incompetence with how it was handled, the employees quitting and the employee letter played their parts too.

But even if it was Microsoft who made this happen: that's what happens when you have a major investor. If you don't want their influence, don't take their money.


So you didn’t realize that when Microsoft both gained a 49% interest and was subsidizing compute?

Unless they had something in their "DNA" that allowed them to build enough compute and pay their employees, they were never going to "win" without a mass infusion of cash, and only three companies had enough compute and revenue to throw at them, and only two had relationships with big enterprise and compute: Amazon and Microsoft.


Whatever OpenAI started as, a week ago it was a company with the best general purpose LLM, more on the way, and consumer+business products with millions of users. And they were still investing very heavily in research. I'm glad that company may survive. If there's room in the world for a more disruptive research focused AI company that can find sustainable funding, even better.


It's now clearly a business-oriented product, and the non-profit portion is a marketing tactic to avoid scrutiny.


The alternative was that all OpenAI employees started to work directly for MSFT, as they said in the letter signed by 95% of them.


> Furthermore, the overwhelming groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

"Because someone acts differently than I expected, they must lacks of critical thinking."

Are you an insider? If not, have you considered that perhaps OpenAI employees are more informed about the situation than you?


What could disrupt OpenAI is a dramatic change in market, perhaps enabled by a change in technology. But if it's the same customers in the same market, they will buy or duplicate any tech advance; and if it's a sufficiently similar market, they will pivot.


> it is effectively controlled by Microsoft

I don't consider this confirmed. Microsoft brought an enormous amount of money and other power to the table, and their role was certainly big, but it is far from clear to me that they held all or most of the power that was wielded.


> that it is effectively controlled by Microsoft

No it's not. Microsoft didn't know about this till minutes before the press release.

Investors are free to protest decisions against their principles and people are free to move away from their current company.


Come on, it was just preparation for the upcoming IPO. Free ads in all the news and on TV.


> The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft.

What leads you to make such a definitive statement? To me the process shows that Microsoft has no pull in OpenAI.


> The process has conclusively confirmed that OpenAI is in fact not open and that it is effectively controlled by Microsoft.

This was said loud and clear when Microsoft joined in the first place but there were no takers.


I wonder if beyond the groupthink we are seeing at least a more heterogeneous composition: a mix of people that includes business, pure research, engineering, and a kind of spirituality/semi-religion around [G]AI.


Plot twist: Sam posts that there is no agreement and that OpenAI is delusional.


Let me guess. The only valid outcome for you would've been that they disband in order to prevent opening a portal to the cosmic AGI Cthulhu.

Frankly these EA & e/acc cults are starting to get on my nerves.


>OpenAI does not have in its DNA to win, they're too short-sighted and reactive.

What does that even mean?

In any case, it's not OpenAI, it's Microsoft, and it has a long history of winning and bouncing back.


Any good summary of the OpenAI imbroglio? I know it has a strange corporate structure, part non-profit and part for-profit. I don't follow it closely but would like a quick read explaining it.


How can you, without access to the information that actual employees had of the situation, say "there's clearly little critical thinking amongst OpenAI's employees"?


It is a shame that we have lost the ability to hold such companies to account (for now). But given the range of possibilities laid out before us, this is the better outcome. GPT-4 has increased my knowledge, my confidence, and my pleasure in learning and hacking. And perhaps its relatives will fuel a revolution.

Reminds me of a quote: "A civilization is a heritage of beliefs, customs, and knowledge slowly accumulated in the course of centuries, elements difficult at times to justify by logic, but justifying themselves as paths when they lead somewhere, since they open up for man his inner distance." - Antoine de Saint-Exupery.


Take a look at https://kyutai.org/ that launched last week


>groupthink shows there's clearly little critical thinking amongst OpenAI's employees either.

So the type of employee that would get hired at OpenAI isn't likely to be skilled at critical thinking? That's doubtful. It looks to me like you dislike how things played out, gathered together some mean adjectives and "groupthink", and ended with a pessimistic prediction for their trajectory as punishment. One is left to wonder what OAI's disruptor outlook would be if the outcome of the current situation had been more pleasing.


Seems like that's a good thing when the goal of the open faction is to slow down development, lol. How would that make OpenAI win?


Based on the spectacular drama we were allowed to observe:

For a company at the forefront of AI it’s actually very, very human.


It definitely seems like another branch on the IT savior complex, where the prior branch was crypto.


Ultimately, the openness that we all wish for must come from the _underlying_ data. The know-how and "secret sauce" were never going to be open. And it's not as profound as we think it is inside that black box.

So who holds all the data in closed silos? Google and Facebook. We may have lost the battle for an "open and fair" AI paradigm a long time ago.


Buy Microsoft stock. Got it.


Amazing outcome. Empty shirts folded. People who get stuff done persevere.


Microsoft played almost no role in the process except to be a place for Sam and team to land.

What the process did show is that if you plan to oust a popular CEO of a thriving company, you should actually have a good reason for it. It's amazing how little thought seemingly went into it for them.


I would say this is a great outcome.

Any other outcome would have split OpenAI quite dramatically and set them back massively.

It's a big assumption to say "effectively controlled by Microsoft" when Microsoft might have been quite happy with the other outcome, which would have let them poach a lot of staff.


Hard to say without seeing how the two new board members lean.


> a real disruptor must be brewing somewhere unnoticed, for now.

Anthropic.


The Open Group, the home of UNIX standards, never was that open.


There is a lot of money being made by everyone (100M paid users?) and a lot of momentum, so groupthink is kind of forced to occur.


I think Microsoft's deep pockets, computing resources, their head start, and the fact that 50%+ of employees didn't quit are more important to the company's chances of success than your assessment that they have the "wrong DNA."

The idea that the marketplace is a meritocracy of some kind, where whatever an individual deems as "merit" wins, has been proven to be nonsense time and time again.


Right, why don't you create a ChatGPT-like innovation or even AGI and do things your way? So many people just know how to complain about what other people build and forget that no one is stopping you from innovating the way you like.


You would expect the company that owns 49% of the shares to have some input on firing the CEO; why is that disappointing? If they had more control, this shitshow would never have happened.


MS doesn't own any part of OpenAI, Inc. In fact, nobody really owns it. That was the whole point.


The initial board consists entirely of swamp lizards. I really hope they mess up as you predict.


The Hacker News comments section has really gone to shit.

People here used to back up their bold claims with arguments.


It is quite amazing how many people know enough to pass wide judgment on hundreds of people because... they just know. Feel it in their gut.


They made GPT-4 and you think they clearly have little critical thinking? That's some big talk you're talking.


That's the curse of specialisation. You can be really smart in one area and completely unaware in others. This industry is full of people with deep technical knowledge but little in the way of social skills.


Exactly this. Specialization is indeed a curse. We have seen it in lots of these folks, especially engineers who flaunt their technical prowess but are extremely deficient in social skills, other basic soft skills, or even an understanding of governance.

Being an engineer at "INSERT BIG TECH COMPANY" is no guarantee of, or insight into, critical thinking at another one. The control and power over OpenAI was always with Microsoft, regardless of board seats and access. Sam was just the lieutenant of an AI division, and the engineers were just following the money like a carrot on a stick.

Of course, the engineers don't care about power dynamics until their paper options are at risk. Then it becomes highly psychological and emotional for them, and they feel powerless and can only follow the leader to safety.

The BOD (Board of Directors), with Adam D'Angelo (the one who likely instigated this), took unprecedented steps to remove board members and fire the CEO for very illogical and vague reasons. They have already made their mark, and the damage is already done.

Let's see if the engineers who signed up for this will learn from this theatrical lesson in how not to do governance and run an entire company into the ground for unspecified reasons.


Agreed. Take Hacker News, for example: 99% of the articles are in a domain where I don't have years of professional experience.

However, when that one article does come up and I know the details inside and out, the comment sections are rife with bad assumptions, naïve comments, and misinformation.


> Furthermore, the overwhelming groupthink shows there’s clearly little critical thinking amongst OpenAI’s employees either.

Very harsh words for some of the highest-paid, smartest people on the planet. These employees built GPT-4, the most advanced AI on the planet; what did you build? Do you still claim they're more deficient in critical thinking than you are?


Being smart does not equate to being critical, or to going against groupthink.


There is no comparison to himself in the previous comment. Also, did you measure their IQ to put them on such a pedestal? There are lots of examples of people being great in a niche they have invested thousands of hours in while being total failures in other areas. You could see that with Mr. Sutskever over the weekend: he must be excellent at ML, having dedicated his life to researching that field, but he lacks practice in critical thinking in management contexts.


I think the choice they had to make was either to build one of the top AIs on earth under the total control of OpenAI's investors (most likely the project of their lives), or to do nothing.

So they bowed.


please don’t troll HN



