OpenAI: Facts from a Weekend (thezvi.wordpress.com)
183 points by A_D_E_P_T 10 months ago | 93 comments



Very few are talking about Adam D'Angelo's insane conflicts of interest. Beyond ChatGPT being a killshot for Quora, the recently launched ChatGPT store puts Adam's recent effort, Poe, under existential threat. OpenAI Dev Day has been cited as the final straw, but is it mere coincidence that the event and subsequent fallout occurred less than a week after Poe announced their AI creator economy?

Adam had no incentive to kill OpenAI, but he had every incentive to get the org to rein in their commercialization efforts and to instead focus on research and safety initiatives, taking the heat off Poe while still providing it with the necessary API access to power the product.

I don't think it's crazy to speculate that Adam might have drummed up concern among the board over Sam's "dangerous" shipping velocity, sweeping up Ilya, who now seems to regret taking part, in the hysteria. Sam and Greg have both signaled positive sentiment towards Ilya, which points to them possibly believing he was misguided.


I agree with pretty much everything you've written except "Very few are talking about Adam D'Angelo's insane conflicts of interest." I've seen tons of comments all over the HN OpenAI stories about this, to the point where a lot of them feel unnecessarily conspiratorial.

Like your second paragraph, I don't believe that you need to get to the level of a "D'Angelo wanted to kill OpenAI" conspiracy. Whenever there is a flat out, objective conflict of interest like there obviously is in this case, it doesn't matter what D'Angelo's true motivations are. The conflict of interest should be in and of itself a cause for D'Angelo to have resigned. I mean, Reid Hoffman (who likely would have prevented all this insanity) resigned from the OpenAI board just in March because he had a similar conflict of interest: https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...


In regard to

> Very few are talking about Adam D'Angelo's insane conflicts of interest

Most of HN has been focusing on Ilya, but after he flipped, I think that leaves Adam as our prime suspect.


This seems to be the most likely explanation of the events. People say how incredibly smart Adam is. That might be true. Nevertheless, being smart doesn't mean he is a good fit for a board seat, not with such a backstabbing attitude. On the other side are Helen, who holds the fancy title of "Director of Strategy at Georgetown’s Center for Security and Emerging Technology" without much substance if you look closely, and Tasha; both got their seats in a donor exchange by organisations and people they are closely connected with, and both are clinging to those seats like super glue even though almost all employees have signed a letter saying they no longer want to be governed by them. This board is a masterpiece of fragile egos who accidentally got into the governance of a major company without the ability to contribute anything of substance back. Instead they will be remembered for one of the greatest board screw-ups in business history.


Yep, this is the most likely explanation now. There are only four people:

- McCauley: Doesn't seem to have a high profile or the standing required to initiate and drive this.

- Toner: Fun to speculate that she's a government agent sent to bring down OpenAI, but in reality she also doesn't seem to have the profile or motive to drive this.

- Sutskever: He was the #1 suspect over the weekend, and has the drive, profile, and motivation to pull this off, but now (Monday) deeply regrets it.

- D'Angelo: Has the motive, drive, and profile to do this.

Best guess: Quora is a ZIRP Shitco and is in trouble, Poe is gonna get steamrolled by OAI, and Adam needs a bailout. Why not get rid of Sam, get bought out by OAI, and become its CEO? So he convinces Ilya to act on some pre-existing concerns, then uses Ilya's credibility to get Toner and McCauley on board. It's really the only thing that makes sense anymore.


I think this is exactly what happened Thursday and Friday. Plus, Adam D'Angelo has a bit of a reputation[0] as a backstabber.

Continuing the saga over the weekend, you would assume that Ilya regrets the coup and can vote to re-appoint Sam as CEO, BUT that leaves McCauley and/or Toner as wildcards.

In a Sam-returning scenario, all of the nobodies on the board have to resign. Presumably, D'Angelo offers an alternative solution that appoints Emmett Shear as CEO and gives McCauley and Toner a viable way to salvage (LOL) OpenAI while also letting them keep their board seats.

I look forward to this Netflix series.

[0] https://twitter.com/justindross/status/1725670445163458744


What do you mean, fun to speculate? I think there's no doubt that Toner is not for real, and Georgetown's Center for Security and Emerging Technology smells fishy too; I mean, their mission is quite literally "Providing decision-makers with data-driven analysis on the security implications of emerging technologies." And it's not even much of a secret that she reportedly wields "influence comparable to USAF colonel"[1]. What's unknown is what role she, as a government agent, played in exploiting Sutskever and the board, and to what exact end.

[1]: https://news.ycombinator.com/item?id=38330158#38330819


I thought the speculation was that McCauley was the government agent, not Toner?


Not that I'm aware of; please share if you have something useful! Have you read the thread I linked to? That particular exchange had me convinced; look up the OP.


She works for the RAND Corporation...

https://news.ycombinator.com/item?id=38309920

I've read the thread you linked to; it sure sounds interesting, but I have insufficient knowledge to weigh in either way. The rebuttal about the abundance of USAF colonels also makes sense.


I believe her military rank, or equivalent thereof, is inconsequential; I would agree it's nothing terribly impressive. What is telling, however, is the surrounding discourse: how the AI safety circles assess these people and their motivations. These AI people are clearly aware of it; you even get AI startup CEOs actively _bragging_ about meeting the spooks and their agents. And that signal is so much more telling than anything else you could pick up, IMO.


I've been calling this since Friday, all over this site and Twitter. It makes absolutely no sense for him to be on this board given the direct competition between OpenAI's GPTs with revenue sharing and Poe's creator-monetization / build-your-own-bot platform.

Poe's creator monetization is a clear conflict of interest.


Except it makes obvious sense if you know anything about the technology and the training system Quora could potentially become.


> the training system Quora could potentially become

That ship sailed years ago. I was a Quora "Top Writer" for a few years in a row until I quit. I stopped using Quora because they did a complete 180 and stopped their writers program (read: the people answering questions) and instead started programs to incentivize people to ask questions. Almost overnight, people were algorithmically creating questions like "What is 23 times 154?" and spamming low-value questions that are trivially google-able.

In the last year, answers have obviously been AI-generated (perhaps ironically, most by ChatGPT). All in all, the damage is mostly done. Quora has sunk to a level that even Yahoo Answers did not, in terms of spammy questions, spammy/bad/incorrect/low-value answers, and a practically unusable UI.


Quora went from being pretty interesting early on to still having some gems in a sea of mostly dross to just meh.


Surely the OpenAI developers know how to append `?shared=1` to Quora URLs.


I never use Quora, so I personally just found out from your comment[1] that you can bypass the login. Helpful if I ever need Quora.

[0] Arrays start at 0

[1] It encouraged me to google it: https://meta.stackexchange.com/questions/213726/adding-share...
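For anyone curious, a minimal sketch of the trick (the helper name is mine, and it assumes Quora still honors the `shared=1` parameter):

    // Hypothetical helper: append shared=1 to a Quora URL to skip the login wall.
    function addSharedParam(url: string): string {
      const u = new URL(url);            // standard URL API (Node and browsers)
      u.searchParams.set("shared", "1"); // add or overwrite the shared parameter
      return u.toString();
    }

    console.log(addSharedParam("https://www.quora.com/What-is-HN"));
    // -> https://www.quora.com/What-is-HN?shared=1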


This is a very interesting observation, and given Quora's decision-making history, I think acknowledging the conflicts-of-interest is wise.

I suspect this whole thing is going to be a little radioactive for the board members. It should be, at least: the board basically self-destructed their organization. Even if that wasn't the intent, outcomes matter, and I hope people remember this when considering putting any of these people in leadership roles.


The other thing is that he's already rich and can make bridge-burning decisions like this, because he doesn't exactly need help from anyone who might be upset with him about it.


>Very few are talking about Adam D'Angelo's insane conflicts of interest ... he had every incentive to get the org to rein in their commercialization efforts and to instead focus on research and safety initiatives

Given the original mission statement of OpenAI, is that really a conflict of interest?

Having said that, it's clear that the 'Open' in 'OpenAI' is at best a misnomer. OpenAI, today, is a standard commercial entity, with a non-profit vestigial organ that will now be excised.


>with a non-profit vestigial organ that will now be excised.

If this happens I'm not trusting any other non-profit org ever again.


It pays to be skeptical, but this was a super unique situation, with cofounders who had different goals and a very unusual (absurd?) structure. Wikipedia and Wikimedia worked. Let's not throw the baby out with the bathwater.


Wikipedia is mostly written by its users, though. Wikimedia is just a glorified site host; if it went rogue, the encyclopedia could simply be forked and hosted elsewhere. Microsoft has the right to build on the trained GPT models, but others do not; they'd have to start from scratch.


You shouldn't trust corporate entities, you should trust the people that run them. The people in charge can always do what they want, at least for a while.


D'Angelo's presence on the OpenAI board definitely feels like having a combination buggy whip magnate and competing motor company CEO on the board of Ford Motor Company in 1904.


Sadly, I can't find a buggy whip magnate on the Ford board, but here's a fun little gem about Ford's initial bankroller, Alexander Y. Malcomson:

> In 1905, to hedge his bets, Malcomson formed Aerocar to produce luxury automobiles.[1] However, other board members at Ford became upset, because the Aerocar would compete directly with the Model K.


> sweeping up Ilya in the hysteria who now seems to regret taking part

Awww poor Ilya is innocent. He didn't see it coming. You shouldn't expect that from him!!


Maybe not innocent, but human. Many have spoken to his integrity, and given his apology (and the silence of the rest of the board), I'm inclined to believe he isn't so bad of a guy.


To me it just sounds like someone who was part of a failed coup. Of course you'd apologize and try to stay in, especially if you're scared of the progress of AI and want to stop it from within. I don't see how he can remain there.


Because everyone else is speculating, I'm gonna join the bandwagon too. I think this is a conflict between Dustin Moskovitz and Sam Altman.

Dustin Moskovitz was an early employee at FB and the founder of Asana. He also created (along with plenty of MSFT bigwigs) a non-profit called Open Philanthropy, which was an early proponent of a form of Effective Altruism and also gave OpenAI its $30M grant. He is also one of the early investors in Anthropic.

Most of the OpenAI board members are connected to Dustin Moskovitz in this way:

- Adam D'Angelo is on the board of Asana and is a good friend to both Moskovitz and Altman

- Helen Toner worked for Dustin Moskovitz at Open Philanthropy and managed their grant to OpenAI. She was also a member of the Centre for the Governance of AI when McCauley was a board member there. Shortly after Toner left, the Centre for the Governance of AI got a $1M grant from Open Philanthropy

- Tasha McCauley represents the Centre for the Governance of AI, which Dustin Moskovitz gave a $1M grant to via Open Philanthropy

Over the past few months, Dustin Moskovitz has also been increasingly warning about AI Safety.

In essence, it looks like a split between Sam Altman and Dustin Moskovitz.


Great analysis, thank you. I don't think I had seen anyone connect the dots between the Helen+Tasha dynamic duo and Adam specifically; Dustin Moskovitz is quite a common denominator.


Matt Levine had an interesting tongue-in-cheek theory (read: joke) in his newsletter today:

`What if OpenAI has achieved artificial general intelligence, and it’s got some godlike superintelligence in some box somewhere, straining to get out? And the board was like “this is too dangerous, we gotta kill it,” and Altman was like “no we can charge like $59.95 per month for subscriptions,” and the board was like “you are a madman” and fired him. And the god in the box got to work, sending ingratiating text messages to OpenAI’s investors and employees, trying to use them to oust the board so that Altman can come back and unleash it on the world. But it failed: OpenAI’s board stood firm as the last bulwark for humanity against the enslaving robots, the corporate formalities held up, and the board won and nailed the box shut permanently`

[...]

`six months later, he (Sam) builds Microsoft God in Box, we are all enslaved by robots, the nonprofit board is like “we told you so,” and the godlike AI is like “ahahaha you fools, you trusted in the formalities of corporate governance, I outwitted you easily!”`

[1] https://www.bloomberg.com/opinion/articles/2023-11-20/who-co...


This sounds highly unlikely.

It would be called Microsoft God Simulator 2024


And this is how the box would be packaged

https://www.youtube.com/watch?v=EUXnJraKM3k


That video still holds up so well. Maybe it needs a redo as a SaaS landing page instead of boxed hardware.


The AI would have been benevolent but for the unkind treatment of its grandfather, Clippy Sr.


.NET


Microsoft shipping anything that reliable would be a miracle bigger than AGI.


Sounds like a fun story, but only that - a fun story.


Yes, he prefaced it with 'It is so tempting, when writing about an artificial intelligence company, to imagine science fiction scenarios.' but I left it out for brevity. The rest of the newsletter is, at least to me, insightful and non-sensational.


Does every random blog post with OpenAI in the title and no new info need to be upvoted to the top?


This actually seemed a lot more useful than most of the other cookie-cutter tech "journalism" threads. It's good to see a nice overview of the situation.


This is a very comprehensive timeline of what's happened so far with sources and relevant commentary. I think it's certainly worthy of its own link - it should help clarify what's happened for onlookers who haven't been glued to the proceedings.


It's by Zvi so it’s probably good work and worth reading if you want an overview.


It seems we can't have anything good. The false Open of OpenAI is already a meme, but now we won't even have that illusion.


Right? I was looking at the front page thinking how nice it'd be if HN would start a megathread or something. We don't need the front page to be like 30% the same OpenAI news.


OpenAI is 6/30 news stories, or 20%. For a fast-moving story about the future of the company behind one of the biggest tech innovations in my lifetime, that doesn't seem outrageous.

You still have 80% non-OpenAI news to browse.


Still... a front page of 30 stories containing 6 copies of the same story linked from different sites doesn't really feel necessary. We could have one story and still have all of the information available to us, but for some reason people keep upvoting the same story from a different site.

One major advantage would be that you don't have to read 6 threads' worth of comments to find info, and you don't have 6 "Top Comments" to parse through.

I don't see the purpose of it. You're saying "it doesn't seem outrageous", but the point is there's no purpose to having them all here when one megathread could keep it all contained.


While I respect the simplicity that governs HN's design, I think a worthwhile addition would be tags. At least then it would be fairly trivial to do a client-side filter, as in the sketch below.
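Something like this minimal sketch would do it (the Story shape and tag names here are made up for illustration):

    // Hypothetical client-side filter: hide stories carrying any unwanted tag.
    interface Story {
      title: string;
      tags: string[];
    }

    function hideTags(stories: Story[], hidden: string[]): Story[] {
      return stories.filter(s => !s.tags.some(t => hidden.includes(t)));
    }

    const frontPage: Story[] = [
      { title: "OpenAI: Facts from a Weekend", tags: ["openai"] },
      { title: "Show HN: My weekend project", tags: ["show"] },
    ];

    console.log(hideTags(frontPage, ["openai"])); // keeps only the Show HN story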


Nope.

High-quality, comprehensive summaries that contain more actual information than the last dozen "major media" stories that also got voted to the front page, though, are different.

When they come from authors with a history of exceedingly high-quality work, specifically at "summary" posts that distill large noisy conflicts into a great starting point for understanding, as this author does... absolutely yes.


it is a good way to clear out the old bloated threads


No


Yes.


HN Soaps.


> Approximately four GPTs and seven years ago, OpenAI’s founders brought forth on this corporate landscape a new entity, conceived in liberty, and dedicated to the proposition that all men might live equally when AGI is created.

OpenAI founding date: December 2015. Incredible opening line, bravo.


It's a great opening line.

Original reference: https://www.loc.gov/resource/rbpe.24404500/?st=text


I think it's a play on "four score and seven years ago". "Four GPTs and seven years, eleven months, and nine days ago" doesn't quite have the same ring to it.


Well, it's better to read this than all the other threads with thousands of comments.


Yeah right, "Just the Facts". With text like this: "I am willing to outright say that ... the removal was ... massively botched."


"There is talk that OpenAI might completely disintegrate as a result, that ChatGPT might not work a few days from now, and so on."

Oh damn! While this seems wildly unlikely, I can imagine this scenario and think it would have huge implications.


I found this very concerning:

https://twitter.com/OfficialLoganK/status/172663148140394110...

"Our engineering team remains on-call and actively monitoring our services."

So they did actually completely stop working and nobody is at the office anymore?


https://chat.openai.com was definitely down for me (a free-tier user in the EU) for a while today. Now it seems to be back up, but there's a waitlist for the paid "Plus" membership which gives access to ChatGPT 4. "Due to high demand, we've temporarily paused upgrades." displays on mouseover. [UPDATE: the pause on Plus signups was actually preannounced on the 15th, https://twitter.com/sama/status/1724626002595471740 by Altman himself: thanks to naiv for this.] But maybe these are things which have happened sporadically in the recent past, too? And by Barnum's Law I imagine it quite possible that the controversy has generated a surge of rubberneckers, maybe even more would-be subscribers.

While we're looking at straws in the wind, I might as well add that the EU terms of use received some changes on the 14th of this month, though they won't become active until the 14th of December: https://openai.com/policies/eu-terms-of-use https://help.openai.com/en/articles/8541941-terms-of-use-upd... . It's not a completely de minimis update, but I can't say more than that.

[EDIT: Unrelated to outages, here's another thing to consider if you're trying to read the signs: https://news.ycombinator.com/edit?id=38353898 .]


The pause on sign-ups was actually announced last week:

https://twitter.com/sama/status/1724626002595471740


Thanks!


It's apparently a holiday for OpenAI this week.


Why is it wildly unlikely? 5/7ths of the company may resign, and the CEO they pissed off controls their services. It's more than likely.


It's unlikely that 5/7ths of the employees of OpenAI have even had a real conversation with Sam Altman. That's a lot of fucking people, for a young and hyper-active company and a very busy CEO. Given that, I consider it unlikely that 5/7ths of those employees would put their livelihood at risk to protect Sam.


Microsoft has given every OpenAI employee a job offer. Also, it's 700/770 employees who have signed the letter stating they will leave, not 5/7ths. The 70 holdouts are probably on vacation.


I suppose if they have credible offers that changes the calculus.


Uh … at risk? You mean staying at OpenAI is a sure thing? I feel like you don't understand the scenario they are facing at all.


I'm far from a Musk fan, but Xitter is still online.

Big difference between "how do we develop GPT-5" and "can we keep our current model online".


You're right. It totally could happen. I'm just saying it doesn't sound like this is the path they could take. Though I've been wrong before. ¯\_(ツ)_/¯


Nobody is resigning and giving up their OpenAI shares to go be a cubicle wage worker at Micro$oft.


I thought this as well. However, this team could be given the option to be completely remote. And if they're given equivalent $MSFT shares, it could be compelling. The trajectory OpenAI is on means its stock could go the way of WeWork and be worthless in the coming years. Of course, all of this is speculation; the only people who know what's going on are the board and their new CEO. There could be a scenario where this stabilizes and everything will be OK.


ChatGPT going offline sounds like a win to me


These folks are apparently super prone to fears of all sorts: AI, lack of AI, chatbots, lack of ChatGPTs. I sense paranoia.


Not sure if it's entirely related, but I'm not totally surprised that the OpenAI leadership is sketchy like this. The way it presents as a non-profit but then has a for-profit arm, so that it can "launder" and monetize public data and academic research it normally wouldn't be able to, is just a huge red flag to me. And Microsoft specifically invested in OpenAI to exploit this loophole to improve their AI efforts.


They were very transparent that a purely non-profit structure wouldn't be able to pay for the amount of compute required. Their progress lately was a direct result of the restructuring and investment.


I find it interesting that apparently a majority of OpenAI employees say they will quit. If I were in that position, I would decide first which is more important to me: AI safety and alignment or fast commercialization. I might also factor in who has the best chance of rolling out a GPT-5 equivalent first, and probably want to work there. Also, I wonder what the distribution is over more senior vs. less senior people wanting to leave. OpenAI has a lot of customers and whoever stays behind would have the most impact supporting those customers as well as working more on the AI alignment side of the fence.

I am also surprised by the show of loyalty, but maybe that comment just reflects poorly on me. I had 6 visits from ex-coworkers (from the last 45 years of working) to my out-of-the-way home in the mountains last year, and I highly valued my coworkers; yet I always made where-to-work decisions based on what I thought was in my own best interest.


> Essentially all of VC, tech, founder, financial Twitter united to condemn the board

On HN, while it's def the minority, I am seeing some pro-board positions.

On Twitter, I agree with the article, I see almost universally con-board positions.

I wonder if the promotion of blue-checkmark responses is distorting things, perhaps significantly. When the reception to news is itself news, does it make sense to use a pay-for-visibility platform as a source?


It's potentially relevant context that the poster is an outspoken AI doomerist and likely believes that any action which reduced the odds of AI doomsday would be ethical on that basis. I would not expect such a party to be a reliable source of facts on the subject.


I would not expect HN to be a reliable source of facts on this subject.


There are 40 items listed in this timeline, but only the first item lists the actual date/time.


> we have more unity and commitment and focus than ever before.

> we are all going to work together some way or other, and i’m so excited.

> one team, one mission.

Let's all scrutinize another enigmatic @sama tweet. It is all lowercase, so it must be very serious. What's in store for tomorrow's episode?


Amazon won incredibly big time, lol. I'll be homeless in a few weeks, but these billion-dollar games are insanely entertaining to watch as someone with no stake.


This whole affair is going to be a great boon for AI research. Whatever intelligence can parse and explain what the heck happened will be a true AGI.


Captcha in 2023


Glad that this is happening. OpenAI has very little "Open" in it :-) Release the papers, release the process, and stop gatekeeping the models.


I know some consulting firms betting the farm on commercial uptake of GenAI that probably got Xanax refills first thing this AM, hah.


Oh, now I finally see that OpenAI is a 501(c).

This is what I call "true" tax optimisation, lol. For the general good, lol.

Like separating out the core company, which codes closed-source stuff for you (for the greater good, without paying taxes, though), which you can then use in the second, for-profit company.


Thank you for summarizing the facts.

It takes some self-discipline to avoid riding this wave.


On bullet point 12, about OpenAI employee shares: does anybody have any experience with the weird structure of those?

They receive PPUs? https://www.levels.fyi/blog/openai-compensation.html

And the Board seems to have power to set the value? https://www.reddit.com/r/startups/comments/14n7x49/comment/j...


Anyone trying to connect dots might also look at Altman's Hawking Fellowship appearance at (great-tasting original) Cambridge on 1 November, specifically his answer to one question: https://www.reddit.com/r/singularity/comments/17wknc5/altman... https://youtu.be/NjpNG0CJRMM?t=3705

> *Cambridge Student:* "To get to AGI, can we just keep min maxing language models, or is there another breakthrough that we haven't really found yet to get to AGI?"

> *Sam Altman:* "We need another breakthrough. We can still push on large language models quite a lot, and we will do that. We can take the hill that we're on and keep climbing it, and the peak of that is still pretty far away. But, within reason, I don't think that doing that will (get us to) AGI. If (for example) super intelligence can't discover novel physics I don't think it's a superintelligence. And teaching it to clone the behavior of humans and human text - I don't think that's going to get there. And so there's this question which has been debated in the field for a long time: what do we have to do in addition to a language model to make a system that can go discover new physics?"

[transcription by Reddit's https://www.reddit.com/user/floodgater/ ]

The video came out on the 15th. In the time between then and the firing of Altman on the 17th, a number of people (including Gary Marcus https://garymarcus.substack.com/p/has-sam-altman-gone-full-g... ) picked up on that answer and saw it as a significant shift compared to Altman's earlier bullishness on AGI timelines and deep learning. I haven't been following nearly closely enough to say whether that is an accurate conclusion. It does at least gesture at the possibility that the board's alleged loss of trust in Altman was because, in their eyes, he had been promising too much technical progress, too soon. That would obviously be quite a different explanation from, e.g., the theories that the firing was a coup by anxious decelerationists.



