There’s a recurring pattern here of OpenAI getting caught red-handed doing bad things and then being all like “Oh, it was just a misunderstanding, nothing more, we’ll get on to fixing that ASAP… nothing to see here…”
There are too many of these for them all to be honest oversights.
It's the correct counter-strategy against people who believe you shouldn't attribute to malice what could be attributed to stupidity (and who don't update that prior based on their history with a particular actor).
And it works in part because things often are accidents - enough to give plausible deniability and room to interpret events favorably if you want to. I've seen this from the inside. Here are two HN threads about times my previous company was exposing (or was planning to expose) data users didn't want us to: [1] [2]
Without reading our responses in the comments, can you tell which one was deliberate and which one wasn't? It's not easy to tell with the information you have available from the outside. The comments and eventual resolutions might tell you, but the initial apparent act won't. (For the record, [1] was deliberate and [2] was not.)
Well, in this case, you have the CEO saying he basically didn’t know about it until about a month ago, and then Vox brings receipts: docs signed by Altman and friends showing that he and others signed off on the policy originally (or at least as of the date of the doc, which for one of them is about a year ago). And we have several layers of evidence accumulating from several different directions, all indicating that Altman is (and this is a considered choice of words) a malicious shitbag. That seems to qualify as a pretty solid exception to the general rule you cite about not attributing to malice, etc.
Yeah, but keep in mind he's been in the public eye now for 10-15 years (he started his first company in 2005, joined YC in '11, and became its president in '14). If you're sufficiently high profile AND do it for long enough AND get brazen enough about it, it starts to stick, but the bar for that is really high (and by nature it's only cleared after you've achieved massive success).
> you shouldn't attribute to malice what could be attributed to stupidity
It's worth noting that Hanlon’s razor was not originally intended to be interpreted as a philosophical aphorism in the same way as Occam’s:
> The term ‘Hanlon’s Razor’ and its accompanying phrase originally came from an individual named Robert J. Hanlon from Scranton, Pennsylvania as a submission for a book of jokes and aphorisms, published in 1980 by Arthur Bloch.
Maybe I'm misunderstanding, but this seems straightforward: the first link goes to an email announcing a change, which seems pretty deliberate; nobody writes an announcement that they're introducing a bug. The second change doesn't seem to have been announced, which leaves open the possibility that it was accidental.
Although I suppose someone could claim the email was sent by mistake, and some deliberate changes aren't announced.
It doesn't matter, because they hold all the cards. That's the nature of power: you can get away with things you normally couldn't. If you really want OpenAI to behave, support their competitors and/or open-source initiatives.
But their product isn’t really differentiated anymore and has really low switching costs: Opus is better at almost everything than the 4-series (training on MMLU isn’t a capability increase), Mistral is competitive and vastly more operator-aligned, and both are cheaper and not scandal-plagued.
Mistral even has Azure distribution.
FAIR is flat open-sourcing competitive models and has a more persuasive high-level representation learning agenda.
This is what “Better to ask forgiveness than permission” looks like when people start catching on.
It’s one of those startup catchphrases that brings people a lot of success when they’re small and nobody is paying attention, but it catches up with them once the company is big and under the microscope.