Leaked OpenAI documents reveal aggressive tactics toward former employees (vox.com)
1791 points by apengwin 4 months ago | 534 comments



If this really was a mistake, the easiest way to deal with it would be to release people from the non-disparagement agreements that departing employees signed only under the duress of losing their vested equity.

It's really easy to make people whole for this, so whether that happens or not is the difference between the apologies being real and them just backpedaling because employees got upset.

Edit: Looks like they're doing the right thing here:

> Altman’s initial statement was criticized for doing too little to make things right for former employees, but in an emailed statement, OpenAI told me that “we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations” — which goes much further toward fixing their mistake.


This reads like more than standard restrictions. I hate those like everyone does; in my opinion they are intended to chill complaints, with just enough ambiguity to scare average people without legal expertise (like me, like most devs). It's the same way non-competes seemingly used to be used primarily to discourage looking at other jobs, separate from whether they were enforceable. Note the recent FTC decision to end non-competes.

About 5 months ago I had a chance to join a company that had what looked like an extreme non-compete to me: for two years after leaving, you couldn't work for any company that had been a customer of theirs.

I pointed out that I wouldn't have been able to join their company if my previous job had had that non-compete clause; it seemed excessive. Eventually I was in a meeting with a lawyer at the company who told me it's probably not enforceable, don't worry about it, and the FTC is about to end non-competes anyway. I said great, strike it from the contract and I'll sign it right now. He said he couldn't do that: no one-off contracts. So I said I wasn't working there.


I have worked for multiple startups (Malwarebytes, Vicarious, Rad AI, Explosion AI, Aptible, Kenna Security). Not once have I seen an exit agreement that stated they would steal back my vested equity if I didn't sign. This is definitely not "standard restrictions".


Anytime someone tried to get me to sign a terrible contract, they always said “This is just standard stuff.”


Same. And in the same breath they also added “this is never used anyway, it’s just the template”. But “no it can’t be removed from the contract”


I always respond with "if it's never enforced, then you'll be fine with me taking it out"

Then I strike the offending passage out on both copies of the contract, sign and hand it back to them.

Your move.

¯\_(ツ)_/¯


Do you really do this, and is striking out a line of a contract binding?


Yes, I really do this. Have done since I started working.

At one of my first jobs as a student employee they offered me a salary X. In the contract there was some lower number Y. When I pointed this out, they said "X includes the bonus. It's not in the contract but we've never not paid it". OK, if this is really guaranteed, you can make that the salary and put it in writing. They did, my salary was X and that year was the first time they didn't pay the optional bonus. Didn't affect me, because I had my salary X.

IANAL and I don't know how binding this is. I'd think it's crucial for it to be in both copies of the contract, otherwise you could have just crossed it out after the fact, which would of course not be legally binding at all and probably fraud (?)

In practice, it doesn't really come up, because the legal department will produce a modified contract or start negotiating the point. The key is that the ball is now in their court. You've done your part, are ready and rearin' to go, and they are the ones holding things up and being difficult, for something that according to them isn't important.

UPDATE:

I think it's important to note that I am also perfectly fine with a verbal agreement.

A working relationship depends on mutual trust, so a contract is there for putting in a drawer and never looking at it again...and conversely if you are looking at it again after signing, both the trust and the working relationship are most likely over.

But it has to be consistent: if you insist on a binding written agreement, then I will make sure what is written is acceptable to me. You don't get to pick and choose.


For one job I also crossed some stuff out; the founder was cool with it because he'd mostly gotten it from a template. Having actual paper is great for that. At a later job, they insisted on DocuSign, which basically meant I got an immutable image in a browser to 'electronically sign' with no modifications. It had a section that amounted to a non-compete agreement that I didn't like, but their lawyers never really answered me on whether such a thing could be enforced given that the company was headquartered in California even though I'd be working out of Washington. I took that as a sign that they probably wouldn't go Amazon on me, at least.


I’m the kind of person for whom it would be hard to say it directly. You’re awesome.


Verbal agreement has lots of risks.

First you’re offering up a lot of trust to people you might have just started working with.

Or, they could be very trustworthy and just remember things differently. And of course people come and go in companies all the time they just might not be there.

At least if you do a verbal agreement follow it up with an email confirming the details.


Why not? A labor contract is a two-way street. If the company doesn't like the new version, it can decline to sign and not hire you.


Exactly. And just like I have to be fine with not getting the job if my conditions are not acceptable to them, they have to be fine with not getting me if their conditions are not acceptable to me.

Given the considerable effort that has gone into the process by the time you are negotiating a contract, letting it fail over something that "is not important" and "is never enforced" would be very stupid of them.

So if they are unwilling to budge, that either means they were lying all along and the thing that's "never enforced" and is "not important" actually is very important to them and definitely will be enforced, or that they are a company that will enforce arbitrary and pointless rules on employees as long as they think they can.

Neither of which is a great advertisement for the company as an employer.


> So if they are unwilling to budge, that either means they were lying all along and the thing that's "never enforced" and is "not important" actually is very important to them and definitely will be enforced, or that they are a company that will enforce arbitrary and pointless rules on employees as long as they think they can.

Most of the time it's basically just FUD, to coerce people into following the rule-that-is-never-enforced.


It's always "not enforced" or "just a template" right up until they decide they need to pressure you into something and then they will have no problem referencing those items.

Do not sign a contract unless you are willing to entirely submit to everything in it that is legally binding.

Also be careful with extremely vague contracts. My employment contract was basically "You will do whatever we need you to do" and surprise surprise, unpaid overtime is expected.


I've seen legal departments redlining drafts of a contract repeatedly until an agreement had been reached. The final contract still contained the red lines.


(EU perspective) It is binding; you just add both parties' initials/signature in the margin next to each line that was changed.


Usually it’s binding, because it’s presumed both parties signed after the changes.

However it can be disputed, and a company could argue about the timing or details.

That’s why you’re often asked to initial changes, makes it clear that both parties have agreed to the modifications.


IANAL but I've seen strikes throughout contracts, and then an initial+date from both parties. Weird how in 2024 an initial that's so easily forgeable can be legally binding


A verbal contract, which has no record at all, can also be legally binding.


I guess like all laws it depends on jurisdiction, and more importantly, if you can convince the judge/magistrate that the contract did or did not happen


Yeah, but the law still sets the default baseline for the judge / jury / whatever. In the Nordic countries (at least Sweden), as techno-modern and bureaucratic as they may be, that still includes verbal agreements.

(The handshake is probably not a legal requirement, though I suppose it could be taken into consideration as evidence: "You even shook hands on it, so you must have realised that what you had just discussed were actually the terms you were agreeing to.")


I would guess that the initial is not the important thing, but that the strike is present on both copies of the contract.


Don't forget to initial the crossed-out section and draw a passive aggressive happy face!


By doing that, they downplay the significance of a contract's terms and persuade you to sign it without fully understanding the implications. If this were the beginning of my career, meaning my first serious company, I would have signed the contract.


“This is just standard stuff” belongs in a category of phrases like “this is perfectly legal”.


...you can trust me with it"


I’ve heard of some pretty aggressive non-competes in finance, but AFAIU (I never worked in Connecticut myself) it’s both the carrot and the stick: you get paid, as well as a stiff contract, if you leave with proprietary alpha between the ears.

In tech I’ve never even heard a rumor of something like this.


It’s got a term: “garden leave”. And yeah, it was prevalent in finance. I say “was” because some states are changing laws regarding non-competes, and this is calling the practice into question.


No, you're confusing stuff.

First of all, taking any code with you is theft, and you go to jail, like this poor Goldman Sachs programmer [1]. This will happen even if the code has no alpha.

However, no one can prevent you from taking knowledge (i.e. your memories), so reimplementing alpha elsewhere is fine. Of course, the best alpha is that which cannot simply be replicated, e.g. it depends on proprietary datasets, proprietary hardware (e.g. fast links between exchanges), access to cheap capital, etc.

What hedge funds used to do is give you lengthy non-competes: 6 months for junior staff, 1-2 years for traders, 3+ years in the case of Renaissance Technologies.

In the US, that's now illegal and unenforceable. So what hedge funds do now is lengthy garden(ing) leaves. This means you still work for the company, you still earn a salary, and in some (many? all?) cases also the bonus. But you don't go to the office, you can't access any code, you don't see any trades. The company "moves on" (develops/refines its alpha, including alpha you created) and you don't.

These lengthy garden leaves replaced non-competes, so they're now 1y+. AFAIK they are enforceable, just as non-competes while being employed always have been.

[1] https://nypost.com/2018/10/23/ex-goldman-programmer-sentence...


The federal government banned non-competes last month:

https://www.ftc.gov/news-events/news/press-releases/2024/04/...


I think this still leaves garden leave on the table. The thing that can no longer happen is an employer ending its relationship with an employee and preventing them from continuing their career afterward. Garden leave was in fact one of the least bad outcomes of a non-compete, as I understand it.


I don't recall where I saw it, but I believe the FTC clarified and said that garden-leave type arrangements aren't covered under their ban.


Comp clawbacks are quite common in finance, at least contractually. It's rare for it to go ahead, but it happens. It isn't some especially weird thing.


Comp clawbacks in exit agreements, that weren't part of the employment agreement?

I've seen equity clawbacks in employment agreements. Specifically, some of the contracts I've signed have said that if I'm fired for cause (and they were a bit more specific, like financial fraud or something) then I'd lose my vested equity. That isn't uncommon, but it's not typically used to silence people, and it's part of the agreement employees review and approve before joining. It's not a surprise they learn about as they try to leave.


It must have been part of the original employment document package, that the equity was cancellable. In the details of the equity grant, or similar, somewhere.


According to the Vox article, it's much more complicated legally. It's not part of each employee's contract that allows this, it's part of the articles of incorporation of the for-profit part of OpenAI.


Must it?

Not clear what you mean.

Do you mean it is generic to do that in contracts? (Been a while since I was offered equity.)

Or do you mean that even OpenAI would not try it without having set it up in the original contract? Because I hate to be the guy with the square brackets ;-)


If it wasn't in the original contracts for the equity, they wouldn't be able to claw back. Fairly obviously, the mechanism can't be in the exit agreement because you didn't sign that yet.

Normally a company has to give you new "consideration" (which is the legal term for something of value) for you to want to sign an exit agreement - otherwise you can just not bother to sign. Usually this is extra compensation. In this case they are saying that they won't exercise some clause in an existing agreement that allows them to claw back.


Per the Vox article, it's not directly in the contract you sign for the equity, it's basically part of the definition of the equity itself (the articles of incorporation of the for-profit company) that OpenAI remains in full control of the equity in this way.


It must.

Joke aside - I'm saying "it must" the same way someone might say "surely".


Wise. Stops people saying "and don't call me Shirley!"


Don’t call me Shirley.


What is the structure of those compensations, and the mechanism for the clawbacks? Equity is taxed when it becomes the full, unrestricted property of the employee, so depending on the structure these threatened clawbacks could have either (1) been very illegal [essentially theft], or (2) could have had drastic and very bad tax consequences for all employees, current and former.

I'm not surprised that they're rapidly backpedaling.


> taxed when it becomes the full, unrestricted property of the employee

I guess these agreements mean that the property isn't full unrestricted property of the employee... and therefore income tax isn't payable when they vest.

The tax isn't avoided - it would just be paid when you sell the shares instead - which for most people would be a worse deal because you'll probably sell them at a higher price than the vest price.


> which for most people would be a worse deal

It's a worse deal in retrospect for a successful company. But there and then, it's not very attractive to pay an up-front tax on something that you can sell at an unknown price in the relatively far future.
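The timing difference being discussed can be made concrete with a toy calculation. This is a hedged sketch: the 37% and 20% rates, the prices, and both tax treatments are illustrative assumptions, not actual brackets or tax advice.

```python
# Toy comparison of two tax treatments for vested equity.
# All rates and prices below are made-up assumptions for illustration.

INCOME_TAX = 0.37    # assumed marginal ordinary income tax rate
CAPGAINS_TAX = 0.20  # assumed long-term capital gains rate

def tax_at_vest(vest_price: float, sale_price: float, shares: int) -> float:
    """Income tax on value at vest, capital gains on later appreciation."""
    income_tax = vest_price * shares * INCOME_TAX
    gains_tax = max(sale_price - vest_price, 0) * shares * CAPGAINS_TAX
    return income_tax + gains_tax

def tax_at_sale(vest_price: float, sale_price: float, shares: int) -> float:
    """Entire proceeds taxed as ordinary income when finally sold."""
    return sale_price * shares * INCOME_TAX

# 1000 shares vesting at $10, sold later at $50:
print(round(tax_at_vest(10, 50, 1000)))  # income tax on $10k + gains tax on $40k
print(round(tax_at_sale(10, 50, 1000)))  # income tax on the full $50k
```

With these assumed numbers, paying tax at vest comes out cheaper overall when the stock appreciates, which is the "worse deal in retrospect" point: deferring only looks bad after the price has gone up.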


Not sure how they deal with the tax. Ping John Stumpf (former Wells CEO) and ask, he probably has time on his hands and scar tissue and can explain it.


> Comp clawbacks are quite common in finance

Common? Absolutely not. It might be common for a tiny fraction of investment bank staff who are considered (1) material risk takers, (2) revenue generators, or (3) senior management.


Can you find any specific examples? I've only seen that apply to severance agreements where you're being paid some additional sum for that non-disparagement clause.

Never seen anything that says money or equity you've already earned could be clawed back.


I negotiated a starting bonus with my employer and signed a contract that I would need to pay it back if I quit within a year.


Wells Fargo clawed back from the CEO (and a couple others if I remember) over the fake account scandals.


Right, but would that have been achieved with a clause open-ended enough to allow this additional paperwork on exit?

Or would that have been an "if you break the law" thing?

Seems unlikely that OpenAI are legally in the clear here with nice clear precedent. Why? Because they are backflipping to deny it's something they'd ever do.


I think they are backpedaling rapidly to avoid major discontent among their workers. By the definition of their stock as laid out in their articles of incorporation, they have the right to reduce any former employee's stock to 0, or to prevent them from ever selling it, which is basically the same thing. This makes their stock offers to employees much less valuable than they appear at face value, so their current and future employees may very well start demanding actual dollars instead.


> Comp clawbacks are quite common in finance, at least contractually

Never negotiated on exit.


I don't think it was negotiated on exit. It was threatened on exit. The ability to do it was almost certainly already in place.


> The ability to do it was almost certainly already in place

Why? OpenAI is a shitshow. Their legal structure is a mess. Yanking vested equity on the basis of a post-purchase agreement signed under duress sounds closer to securities fraud than anything thought out.


I'm not saying it was thought out, I'm saying it was in place. My understanding is that the shareholders agreement had something which enabled the canceling of the shares (not sure if it was all shares, shares granted to employees, or what). I have not seen the document, so you may be right, but that's my understanding.


> the shareholders agreement had something which enabled the canceling of the shares

OpenAI doesn't have shares per se, since they're not a corporation but some newfangled chimeric entity. Given the man who signed the documents allegedly didn't read them, I'm not sure why one would believe everything else is buttoned up.


If it's not negotiated on exit, why are they requesting additional documents to be signed when leaving? Clearly nothing like this was agreed to at the start of employment.


Is OpenAI a finance company? I guess that would explain a lot.


Would it though? Presumably a finance company's claw back clause is there to protect it from you taking its trade secrets with you to its competitors, not from you tweeting "looks trashy lol" in response to a product launch of theirs, or you mentioning to a friend that your old boss was kind of a dick.


They pay like one.


Finance has bigger cash and deferred cash (bonus) in their packages. OpenAI still puts a lot of the pay in restricted equity.


IANAL but isn’t it illegal to execute something in the event of a document not being signed?


I expect not... provided it's a thing you could do anyway (and it isn't extortion or something).


You could claim you gave someone a contract and they didn’t sign it, so now they owe you a million bucks.


I think you missed my proviso.

If you can do X in the first place, I don't think there's any general rule that you can't condition X on someone not signing a contract.


I’ve seen that for a well-known large tech company, and I wasn’t even employed in the US, making those seem stranger. Friends and former colleagues pushed back against that (very publicly and for obvious reasons in one case) and didn’t get to keep their vested options: they had to exercise what they had before leaving.

There was one thing that I cared about (anti-competitive behavior, things could technically be illegal, but what counts is policy so it really depends on what the local authority wants to enforce), so I asked a lawyer, and they said: No way this agreement prevents you from answering that kind of questioning.


A 90-day exercise window is standard (and there are tax implications in play as well).

OpenAI is different: they don’t grant options, but “Units” that are more like RSUs.


Don’t those come with bad tax implications then? The point of options is to give ownership without immediate financial burden for the employee.


It depends on the details.

Often RSUs in non-public companies come with a double trigger: you need both the vest and a liquidity event to happen for actual delivery, so there are no tax implications until a liquidity event (afaik, but don’t take tax advice from randos on the internet).


You pay normal tax on them when you sell after holding for 1 year, but an increased tax if you sell within that year.


That’s not completely accurate.

In the US, equity given as compensation for work could be taxed as wages, or, under certain circumstances, as capital gains.

The one year is for some capital gains to get considered long term gains, which may be taxed at a lower marginal rate than regular wages.

In other words, if you are granted equity as compensation, go talk at length to a tax professional to get an understanding of the taxation of it.
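The one-year distinction described above can be sketched as a toy rule. The rates here are placeholder assumptions, not real IRS brackets, and (as the parent says) equity comp is usually taxed as wages at delivery first; this only illustrates the short-term vs long-term gains split.

```python
# Toy illustration of the short-term vs long-term capital gains split.
# Both rates are assumed placeholders, not actual tax brackets.

SHORT_TERM_RATE = 0.37  # assumed: short-term gains taxed like ordinary wages
LONG_TERM_RATE = 0.20   # assumed: lower rate for long-term gains

def capital_gains_tax(gain: float, held_days: int) -> float:
    """Gains on shares held one year or less use the short-term rate."""
    rate = LONG_TERM_RATE if held_days > 365 else SHORT_TERM_RATE
    return gain * rate

# Same $10,000 gain, different holding periods:
print(round(capital_gains_tax(10_000, held_days=200)))  # short-term rate applies
print(round(capital_gains_tax(10_000, held_days=400)))  # long-term rate applies
```

The point is just that the holding period changes the rate applied to the gain, not whether the gain is taxed at all.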


The closest thing I've heard of is having to sign anti-disparagement clauses as part of severance when laid off; still pretty shitty, but taking back already vested equity would be on another level.


My understanding is that it's an explicit condition of the equity grant, not something first revealed at exit (which would probably be illegal), but granted under the expectation that no one is carefully studying the terms of the exit agreement that would be required, back when they are accepting compensation terms that include equity.


I work in ad tech and have had to sign this when laid off.


How is malwarebytes a startup? They were a thing when I was a baby!


GP:

> I have worked for multiple startups (Malwarebytes...

Note the "have worked" and the rather long list of places they've worked. If that list is in chronological order (sure didn't look alphabetical), Malwarebytes doesn't have to be a startup now for it to have been one when GP worked there.


Well that’s depressing. I was 27 years old when Malwarebytes was released.

Fuck, I’m old.


People on this site have been working in this industry longer than you. Some longer than you have been alive, it sounds like.


You take my statement far too literally. I thought it came out in the late 90's. Turns out, it was 2006. I was in middle school at the time.


I worked at pre-IPO Uber with TK as CEO, and they were bro-af and had nothing like this.


> He said I can't do that, no one off contracts.

There was still potential to engage there:

> "That's alright, as you said it's not enforceable anyway, just remove it from everyone's contract. It'll just be the new version of the contract for everyone."

Doubt it would have made any difference though, as the lawyer was super likely bullshitting.


This is one of those magical times where having your own counsel is worth the upfront cost.


You did the right thing here.

> I was in meetings with a lawyer at the company who told me it's probably not enforceable, don't worry about it

Life rule: if the party you're negotiating a contract with says anything like "don't worry about that, it's not enforceable" or "it's just boilerplate, we never enforce that" but refuses to strike it from the contract, then run, don't walk, away from the table. Whoever you're dealing with is not operating in good faith.


You did well: there is never a rule against one-off contracts. I can assure you the CEO has a one-off contract, and that lawyer has a one-off contract, at the very least :D


Those are great points; I didn't think of them at the time. Since they were pushing me hard to sign a contract that, for once, literally blocked most of the companies in the CS specialty area I've mostly worked in, it gave me enough courage to say no. Barely ;-)


Yeah, totally legit. Don't worry about it, it's not enforceable anyways. What, remove it from the contract? God no! Oh, I mean sorry, no one off contracts.


I'd be surprised if anyone fell for that. "Oh, thanks, opposing counsel, I totally trust you to represent my interests over your employer's!"


Well, you can be surprised. It's surprisingly common, in my experience, to believe people who pretend they are on your side. One interesting and typical case that is documented through countless online videos is police interrogations, where the interrogator is usually an expert in making it seem he (or she) is on your side, despite how obvious it should be that they're not. "Can I get you a meal?", friendly tone, various manipulations and before you know it you've said things that can and will be used against you whether you are guilty or not.

And you don't get the meal, either.


> We can also mention the case of psychiatrists running the "Presence francaise" groups who, appointed to examine the prisoner, started off boasting they were great friends with the defense lawyer and claiming both of them (the lawyer and the psychiatrist) would get the prisoner out. All the prisoners examined by this method were guillotined. These psychiatrists boasted in front of us of this neat method of overcoming "resistance."

- The Wretched of the Earth, Frantz Fanon


I read some of the pages before and after that footnote. Highly disturbing, to say the least.


Attorneys are like people in any other profession. The average attorney is just like the average person, except he passed a difficult test.

Exceptions require sign off and thinking. The optimal answer is go with the flow. In an employment situation, these sorts of terms require regulatory intervention or litigation to make them go away, so it’s a good bet that most employees will take no action.


> The average attorney is just like the average person, except he passed a difficult test.

My best friend is a lawyer, so heck knows how difficult that test can be -- he passed it. ;-)


If it's non-enforceable, but you signed it, wouldn't that make the contract void?

I suppose there's probably a bunch of legalese to prevent that though...


"Probably not enforceable" != "not enforceable". Are you worth suing, or does everyone sign? Will your state's laws and jurisprudence back you up?

If you are ever going to sign an employee agreement that binds you, consult with an employment attorney first. I did this with a past noncompete and it was the best few hundred I ever spent: my attorney talked with me for an hour about the particulars of my noncompete, pointed out areas to negotiate, and sent back redlines to make the contract more equitable.


The single best professional decision I ever made was to get a business degree. The degree itself wasn’t worth a damn, but the network was invaluable. I have very close friends who are the exact kind of attorney who you would expect to have an undergraduate business degree. They’re greedy, combative people who absolutely relish these sorts of opportunities. And as a bonus, they are MY greedy, combative people who relish these sorts of opportunities.

They’re great partners when confronted with this kind of contract. And fundamentally, if my adversary/future employer retains counsel, I should too. Why be at a disadvantage when it’s so easy to pay money and be on even footing?

There are some areas my ethics don’t mesh with, but at the end of the day this is my work and I do it for pay. And when I look at results, lawyers are the best investment I have ever made.


The legalese to handle that is one of the standard boilerplate clauses: https://en.m.wikipedia.org/wiki/Severability


At most it would just make that part of the contract void. Almost all contracts with stuff like this would have a “severability” clause which states like if one part of the contract is invalid, the rest is still valid.

But even without that, judges have huge amounts of leeway to “create” an ex post facto contract and say “here’s the version of that contract you would have agreed to; this is now the contract you signed”. A sort of “fixed” version of the contract.


> At most it would just make that part of the contract void. Almost all contracts with stuff like this would have a “severability” clause which states like if one part of the contract is invalid, the rest is still valid.

Severability clauses themselves are not necessarily valid; whether provisions can be severed and how without voiding the contract is itself a legal question that depends on the specific terms and circumstances.


There is usually a severability clause that basically says if a clause is illegal it voids that clause not the whole contract… this is pretty standard practice.


I think I've seen that in every contract I've ever signed.


No one-off contracts. lol. What nonsense. Then why did they bother to have a meeting? You handled that one like a boss.


> Then why did they bother to have a meeting?

Because lawyers are in the business of managing risk, and knowing what OC was unhappy about was very much relevant to knowing if he presented a risk.


Yup, companies say that all the time.

Another way they do it is to say "it is company policy, sorry, we can't help it", thereby trying to avoid individual responsibility for the iniquity they are about to perpetrate on you.


"If it's company policy, then how can't you help it, when you're the company?"


Not standard where I come from.

And standard doesn't mean shit... Every regime in the history of mankind had standards!


Non-competes like this are often not enforceable, but it depends on the jurisdiction.


That lawyer was probably lying, bro, since he could not put his money where his mouth was.


> This reads like more than standard restrictions.

It reads like omertà.

I wonder if I'll still get downvoted for saying this. A lot can change in 24 hours.

Edit: haha :-P


> If this really was a mistake

The article makes it clear that it wasn't a mistake at all. It's a lie. They were playing hardball, and when it became public they switched to PR crisis management to try and save their "image", or what's left of it.

They're not the good guys. I'd say they're more of a caricature of bad guys, since they get caught every time. Something between a classic Bond villain and Wile E. Coyote.


Not a mistake...

"...But there's a problem with those apologies from company leadership. Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about..."


Mistake? The clawback provisions were the executives' idea!


"We are sorry...that we got caught."

"...and that our PR firm wasn't good enough to squash the story."

They will follow the standard corporate 'disaster recovery' - say something to make it look like they're addressing it, then do nothing and just wait for it to fall out of the news cycle.


Honestly I'm willing to give the benefit of the doubt on that, depending on their actions, because I'm sure they sign so many documents they just rely on their legal teams to ensure they're good.


There's absolutely no way that the officers of the company would be unaware of this.

First of all, it beggars belief that this whole thing could be the work of HR people or lawyers or something, operating under their own initiative. The only way I could believe that is if they deliberately set up a firewall to let people be bad cops while giving the C-suite plausible deniability. Which is no excuse.

But...you don't think they'd have heard about it from at least one departing employee, attempting to appeal the onerous terms of their separation to the highest authority in the company?


I'm not.


Then why are they paid such obscene amounts of money?


Hold up... Do you really think that a C-suite including career venture-capitalists who happen to be leading+owning stock in a private startup which has hit an estimated billion+ valuation are too naive/distracted to be involved in how that stock is used to retain employees?

In other words, I'm pretty sure the Ed Dillingers are already in charge, not Walter Gibbs garage-tinkerers. [0]

[0] https://www.youtube.com/watch?v=atmQjQjoZCQ


Naïve enough to deserve the downvotes. (None from me; too late.)


>Edit: Looks like they're doing the right thing here

That's like P.Diddy saying I'm sorry.

That's damage control for being caught doing something bad ... again.


Extreme pinky swear.


"Trust me bro, if it weren't up to me you wouldn't even have to sign that contract. I mean it is up to me, but, like, I won't enforce the thing I made you sign. What? No I won't terminate the contract why don't you trust me bro? I thought we were a family?"


Yeah, agree, but they don't have to cancel the disparagement clause. They could just eat the PR hit. Allowing former employees to talk freely seems risky to me (if I were them). I think we can give them back 5 points for this move but still leave them at -995 overall.


That really is not enough. Now that they have been publicly embarrassed and the clause is common knowledge, they really have to undo the mistake. If they didn't, they would look like a horrible employer, employees would start valuing their stock at $0, dropping their effective compensation by a ton, and then people would leave. Given the situation, undoing the agreement is an act of basic self-preservation at this point.

The documents show this really was not a mistake and "I didn't know what the legal documents I signed meant, which specifically had a weird clause that standard agreements don't" isn't much of a defence either. The whole thing is just one more point in favor of how duplicitous the whole org is, there are many more.


For whatever it's worth (not much), Sam Altman did say they would do that

> if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this.

https://x.com/sama/status/1791936857594581428


What Sam is saying is very different than what I'm saying. I'm saying he should be proactive and just do it, he's saying that if people explicitly reach out to him then he'll do it specifically for them.


Importantly, if he just said publicly that he wouldn't enforce the non-disparagement agreements, that could be legally binding[1]. But if he only says he'll release people who ask, he's, legally speaking, free to just not do it.

[1] The keywords are promissory estoppel. I'm not a lawyer but this looks at least like a borderline case worth worrying about.


Turkeys don't vote for an early Christmas.


Bingo.


Sure and anyone who has worked in a toxic workplace knows exactly what it means to require a direct path to leadership to resolve an issue instead of just resolving it.


I also notice he conditions it on "any former employee." What about current employees who may be affected by the same legalese?

Either way, I can imagine a subtext of "step forward and get a target on your back."


Current employees rarely sign exit agreements, since by exiting they stop being employees.


True, they can't renegotiate agreements that don't yet exist.

However the fact that the corporate leadership could even make those threats to not-yet-departed employees indicates that something is already broken or missing in the legal relationship with current ones.

A simple example might be for the company to clearly state in their handbook--for all current employees--that vested shares cannot be clawed back.


Given what OpenAI's been in the press for the past few weeks, I can't help but feel this is a trap; even if it isn't, Sam is certainly making it look like it is...


don't go public

don't contact OpenAI legal, which leaves an unsavory paper trail

contact me directly, so we can talk privately on the phone and I can give you a little $$$ to shut you up


This looks like proper accountability and righting your wrongs to me. Much respect to Sam. I hope this isn’t just a performance for the public.


Hope is not a process. Look at what he does not what he says. Actually you should go deaf whenever you see him opening his mouth.


Some people are just so deep into fanboiism they refuse to see the writing on the wall even when it's in ten-foot-high letters of fire on a pitch-black background. Of fucking course it's just a performance for the public.


> ”we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations”

Looks like they’re doing that.


Well, they say they are. But the nondisparagement agreement repeatedly forbids revealing the agreement itself, so if it wasn't cancelled those subject to it would be forbidden to point out that the public claim they were going to release people from it was a lie (or done only for people from whom OpenAI was not particularly concerned about potential disparagement.)


“I have not been released from any exit agreement” is not disparagement.


“disparagement" is whatever is defined in the agreement, which reportedly (from one of the people who declined to sign it) includes discussing the existence of the agreement.


Dear heavens, being a corporate employee is paranoia and depression-inducing. It's literally like walking into a legal minefield.


This is not normal for being a corporate employee. This was certainly going to come out eventually and cause big problems, but to the extent Sam thinks AGI is around the corner he might not be playing the long game.


OpenCanAIry


If your concern is validating that they've done so, the article author can at least vet that with her anonymous sources.


Note that statement says nothing about whether they will be allowed to participate in liquidity events


"We're hereby making a legally binding commitment that those clauses are void, whether anyone reaches out to us or we manage to reach out to them or not."

Unless and until that's what they say, looks like they’re not doing that.


The right thing would be to not try to put that clause in there to begin with, not release employees from it when they get caught.


Meant to write the same thing. Agree but at least it is known now


> Looks like they're doing the right thing here:

Well, no:

> We're removing nondisparagement clauses from our standard departure paperwork, and we're releasing former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual. We'll communicate this message to former employees.

So the former employees who were successfully blackmailed stay blackmailed.


> Looks like they're doing the right thing here

Even if that's true (and I'm not saying it is, or it isn't, I don't think anyone on the outside knows enough to say for sure), is it because they genuinely agree they did something egregiously wrong and they will really change their behavior in the future? Or is it just because they got caught this time so they have to fix this particular mistake, but they'll keep on using similar tactics whenever they think they can get away with it?

The impact of such uncertainty on our confidence in their stewardship of AI is left as an exercise for the reader.


> Edit: Looks like they're doing the right thing here:

> Altman’s initial statement was criticized for doing too little to make things right for former employees, but in an emailed statement, OpenAI told me that “we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations” — which goes much further toward fixing their mistake.

1. Get cash infusion from microsoft

2. Do microsoft playbook of 'oh I didn't mean to be shady we will correct' when caught.

3. In the meantime there are uncaught cases as well as the general hand waving away of repeated bad behavior.

4. What sama did would get him banned from some -fetish- circles, if that says something about how his version of 'EA' deals with consent concerns.


Plenty of legitimate things to criticize EA for, no need to smear them by association with someone who's never claimed to be an EA and hasn't obviously behaved like one either.


To be clear I'm saying he's an effective accelerationist not an effective altruist.


I think E/Acc may be the preferred abbreviation.


It shouldn't take a Vox article to ensure employees basic security over their compensation. The fact that this provision existed at all is exceptionally anti-employee.


LLM cares not whether something is said or whether the action is described.

No doubt, OpenAI is as vacuous as their product is effective. GIGO.


Form follows function, art imitates life, dog owners grow to look like their dogs...


...and software products reflect the corporate structure of the companies that created them.

Think I'll be staying away from "AI" for a while longer.


You surely don't actually believe Altman when he says they're doing this? Like Elon Musk, Altman is a known liar and should not be trusted. It's truly unbelievable to me that people take statements like this at face value after having been lied to again and again and again. I think I'm starting to understand how crypto scams work.


Assuming that’s the only clause that they can use to cause people trouble. The article indicates it’s not.


The amount [and scale] of the practices, chaos, and controversies caused by OpenAI since ChatGPT was released is "on par" with the powerful products it has built since... in a negative way!

These are the hottest controversial events so far, in chronological order:

  OpenAI's deviation from its original mission (https://news.ycombinator.com/item?id=34979981).
  The Altman's Saga (https://news.ycombinator.com/item?id=38309611).
  The return of Altman (within a week) (https://news.ycombinator.com/item?id=38375239).
  Musk vs. OpenAI (https://news.ycombinator.com/item?id=39559966). 
  The departure of high-profile employees (Karpathy: https://news.ycombinator.com/item?id=39365935 ,Sutskever: https://news.ycombinator.com/item?id=40361128).
  "Why can’t former OpenAI employees talk?" (https://news.ycombinator.com/item?id=40393121).


Why is AI so dramatic? I just watched Mean Girls and this is worse.


The best-case business pitch is the total replacement of all white-collar jobs. It's even more of a "take over the world" pitch than regular tech companies make. Now, quite a lot of that is unrealistic and will never be delivered, but which bit?

AI raises all sorts of extremely non-tech questions about power, which causes all the drama.

Edit: also, they've selected for people who won't ask ethical questions. Thus running into the classic villain problem of building an organization out of opportunistic traitors.


Thank you for making me laugh. Seriously, I think working for openai already selected for people who are ok with playing in the grey area. They know they ignore copyright and a few other rules. It's not surprising to me that they would also not be very nice to each other internally.


Money. The hype is really strong, the hype might even be justified, insane amounts of money flow in. There is a land grab going on. Blood is in the water, all the sharks are circling.


After all that money, nobody can even think of saying that it was wasted. To keep the investment value high and justifiable, they all agree and go on with the hype. Until the end.


Money is the root of all evil...


It's probably the perceived value & power it has.

They think they are about to change the entire world. And a very large part of the world agrees. (I personally think it's a great tool, but exaggerated.)

But that created a very big power play where people don't act normally anymore and the most power-hungry people come out to play.


I feel like the drama is, at least in part, borne from the fact that its impact is greatly exaggerated. Nothing's really changed.


I would like to answer that, but OpenAI could probably spend $100,000 per detractor to crush them and still laugh all the way to the bank.


They are just marketing for Microsoft's AI now.

It's all just drama to draw attention and investor money, that's it.

When the inventors leave there is nothing left to do but sell more.


people are starting to realize they don't have any significant technical advantages over other AI companies. so all they have left is hype or trying to build a boring enterprise business where they sell a bunch of AI services to other large companies, and their lack of experience in that area is showing.


I work for a tech startup doing communications and marketing. I'm sick of engineers sucking Sam's dick and believing every demo they watch, even after seeing what Altman is capable of performing on stage. I'm also sick of the scene trying to sell "AI" (whatever that means) next to everything. We even go out of our way to promise stupid impossible stuff when we talk about multimodal or generative... It doesn't matter, it needs to sell.

Just to give a sickening example, I was approached by the CEO to fix a very bad deepfake video that some "AI" engineer made with available tools. They asked me to use After Effects and editing to make the lips sync...

On top of that, this industry is driving billions in investment into something that is probably a death sentence for a lot of workers, cultures, and societies, and that is not fixing or helping with our current world problems in ANY other way.


I'd say you can add the time Altman threatened to pull OpenAI out of the EU if its regulation wasn't to his liking https://www.reuters.com/technology/openai-may-leave-eu-if-re...


"If you're not letting me play, I shall not play!"


Parent comment's links, but (hopefully) clickable.

OpenAI's deviation from its original mission - https://news.ycombinator.com/item?id=34979981

The Altman's Saga – https://news.ycombinator.com/item?id=38309611

The return of Altman (within a week) - https://news.ycombinator.com/item?id=38375239

Musk vs. OpenAI - https://news.ycombinator.com/item?id=39559966

The departure of high-profile employees -

Karpathy: https://news.ycombinator.com/item?id=39365935

Sutskever: https://news.ycombinator.com/item?id=40361128

"Why can’t former OpenAI employees talk?" - https://news.ycombinator.com/item?id=40393121


And of course Scarlett, which is beating Musk in votes and comments (https://news.ycombinator.com/item?id=40421225)


I think it's to the point that a book deal or expose might be more profitable than the lost equity.


Don't indent lists like that; it makes HN present them as code, so the URLs in them don't become clickable links.


The whole media is _clearly_ threatened by AI. Both subjectively, just thinking about it, and objectively, seeing things like Google already rolling out AI summaries of internet content (saving consumers the need to scroll through a PowerPoint's worth of auto-play ads to read a single paragraph's worth of information).

With the breakneck progress of AI over the last year, there has been a clear trend in the media from "Wow, this is amazing (and a little scary)" to "AI is an illegal dumpster fire that needs to be killed; you should stop using it and companies should stop making it."


The same AI summaries that told people to put glue on pizza? And got mass coverage for providing all sorts of incorrect info?


Great, if these documents are credible, this is exactly what I was implying[1] yesterday. Here, listen to Altman say how he is "genuinely embarrassed":

"this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have."

The first thing the above conjures up is the other disgraced Sam (Bankman-Fried) saying "this is on me" when FTX went bust. I bet euros-to-croissants I'm not the only one to notice this.

Some amount of corporate ruthlessness is part of the game, whether we like it or not. But these SV robber-barons really crank it up to something else.

[1] https://news.ycombinator.com/item?id=40425735


Patrick Collison interviewed Sam Altman in May 2023 [1]

In the intro, Patrick goes off-script to make a joke about how last year he'd interviewed SBF, which was "clearly the wrong Sam".

I'm eagerly waiting for 2025, when he interviews some new Sam and is able to recycle the joke. :)

[1]: https://www.youtube.com/watch?v=1egAKCKPKCk


"this is on me" --> "look at what a great leader I am, taking responsibility for other people's mistakes"

"i've been genuinely embarrassed" --> "yep, totally not my fault actually"

"I should have known" --> "other people fucked this up, and they didn't even inform me"


Kind of like a humblebrag but for accountability.


Do you really believe he was genuinely embarrassed? All his public statements lately have been just PR BS. Nothing genuine there.


No, I don't. That's why I put it in "scare quotes". You wouldn't get that impression had you read the comment I linked above :) — https://news.ycombinator.com/item?id=40425735

I was trying to be a bit restrained in my criticism; otherwise, it gets too repetitive.


smells like game of thrones.


Looking forward to a document leak about OpenAI using YouTube data for training their models. When asked if they use it, Murali (CTO) said she doesn't know, which makes you 99% sure they are using it.


I would say 100%, simply because there is no other reasonable source of video data


I use multiple websites that have hundreds of thousands of free stock videos that are much easier to label than YouTube videos.


The number of videos is less relevant than the total duration of high-quality videos (quality can be approximated on YouTube with metrics such as view and subscriber counts). Also, while YouTube videos are not labelled directly, you can extract signal from the title, the captions, and perhaps even the comments. Lastly, many sources online use YouTube to host videos and embed them on their pages, which probably provides more text data that can be used as labels.


To be fair I don’t think Google deserves exclusive rights to contents created by others, just because they own a monopolistic video platform. However I do think it should be the content owner’s right to decide if anyone, including Google, gets to use their content for AI.


Any other company can start a video platform. In fact a few have and failed.

Nobody has to use youtube either.

If you want change in the video platform space, either be willing to pay a subscription or watch ads.

Consumers don't want to do either, and hence no one wants to enter the space.


*Murati


I am surprised to see a pro-copyright take on HN :)


I find it hard to believe that Sam didn't know about something that draconian in something as sensitive as NDAs that affect equity.

He’s not exactly new to this whole startup thing and getting equity right is not a small part of that


He was obviously lying, and he probably also knew people would not believe it. I just don't know why he still chose to do it.


Founders/CEOs don't lose track of equity or contracts around it. Every 1/10 of a percent is tracked and debated with investors.


He's not candid.


So what happened to Daniel Kokotajlo, the ex-OAI employee who made a comment saying that his equity was clawed back? Was it a miscommunication and he was referring to unvested equity, or is Sama just lying?

In the original context, it sounded very much like he was referring to clawed-back equity. I’m trying to find the link.



> ..or agreeing not to criticize the company, with no end date

Oh! Free speech is up for trade! We used to hear such statements from political regimes, but this is the first time I've read one in the tech world. Will we live to witness more variations of this behavior on a larger scale?!

> High-pressure tactics at OpenAI

> That meant the former employees had a week to decide whether to accept OpenAI’s muzzle or risk forfeiting what could be millions of dollars

> When ex-employees asked for more time to seek legal aid and review the documents, they faced significant pushback from OpenAI.

> “We want to make sure you understand that if you don't sign, it could impact your equity. That's true for everyone, and we're just doing things by the book,”

Although they've been able to build the most capable AI models that could replace a lot of human jobs, they struggle to humanely manage the people behind these models!!



Going to be hard to keep claiming you didn’t know something, if your signature is on it. I don’t really think a CEO gets to say he didn’t read what he was signing.


There is another very famous leader on trial for exactly that right now.


Does someone know why the employees wanted him back so badly? Must be very few employees actually upset with him and his way of doing things.


If Sam didn't get hired back after the firing, there was a good chance OpenAI would implode and that would be bad news for employee equity. Plus, the board didn't give out any information that could've convinced anyone to side with them. The drama about exit documents and superalignment research appears to have been contained in relatively small circles and did not circulate company-wide until they became public.


I recall that only some wanted him back, and the split was product/research—the “let’s get rich!” types wanted him back, the “let’s do AI!” types adamantly didn’t.


They want to get rich. They believe it will lead them to it.


In my third world country, when they do something unethical they say "everything is in accordance with the law", here it's "this is on me", both are very cynical. From the time they went private, it was apparent that this company is unethical to say the least. Given what it is building, this can be very dangerous, but I think they are more proficient in creating hype, than actually coming up with something meaningful.


It's funny how finding out about corporate misdoing has almost a common ritual attached to it. First shock and dismay is expressed to the findings, then the company leadership has to say it was a mistake (rather than an obvious strategy they literally signed off on), we then bring up the contradiction. Does this display of ignorance from every side really need to take place? Why bother asking for an explanation, they obviously did the thing they obviously did and will obviously do as much as possible to keep doing as much of things like that they can get away with.


Do OpenAI employees actually get equity in the company (e.g. options or RSUs)? I was under the impression that the company awards "profit units" of some kind, and that many employees aren't sure how they work.



> Although primarily known for ChatGPT, OpenAI has had a long history since its founding in 2015.

No, "since 2015" is by definition not "a long history".

For a long history, try the principality of San Marino, or (if you want a company) Stora Kopparbergs Bergslags AB. Or one of the Japanese temple-builder family companies. 2015 "a long history" -- wasn't that when I last took a dump?


> many employees aren't sure how they work.

Why aren't they simply asking their product?


I'm not following this very closely, but agreements that block employees from selling (private) vested equity are a market term, not something uniquely aggressive OpenAI does. The Vox article calls this "just as important" as the clawback terms, but, obviously, no.


> agreements that block employees from selling (private) vested equity are a market term

They threatened to block the employee who pushed back on the non-disparagement from participating in tender offers, while allowing other employees to sell their equity (which is what the tender offers are for). This is not a "market term".


Sure. Selectively preventing sales isn't. But it's not uncommon to have blanket prohibitions. You're right, though.


Yeah, my impression is that a lot of non-public startups have "secondary market transactions allowed with board approval" clauses, but many of them just default-deny those requests and never have coordinated tender offers pre-IPO.


I wonder if this HN post will get torpedoed as fast as the one from yesterday[0].

0. https://news.ycombinator.com/item?id=40435440


I didn't see that comment but I did post https://news.ycombinator.com/item?id=40437018 elsewhere in that thread, which addresses the same concerns. If anyone reads that and still has a concern, I'd be happy to take a crack at answering further.

The short version is that users flagged that one plus it set off the flamewar detector, and we didn't turn the penalties off because the post didn't contain significant new information (SNI), which is the test we apply (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). Current post does contain SNI so it's still high on HN's front page.

Why do we do it this way? Not to protect any organization (including YC itself, and certainly including OpenAI or any other BigCo), but simply to avoid repetition. Repetition is the opposite of intellectual curiosity (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...), which is what we're hoping to optimize for (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...).

I hesitate to say "it's as simple as that" because HN is a complicated beast and there are always other factors, but...it's kind of as simple as that.


And today: a post about Johansson's voice was on the front page with quite a high score, and then disappeared. This is not the place to discuss OpenAI.


They're definitely doing this on comments too. In the past, I've had comments critical of Altman that were at the top get dropped below the downvoted ones.


It's standard moderation on HN to downweight subthreads where the root comment is snarky, unsubstantive, or predictable. Most especially when it is unsubstantive + indignant. This is the most important thing we've figured out about improving thread quality in the last 10 years.

But it doesn't vary based on specific persons (not Sam or anyone else). Substantive criticism is fine, but predictable one-liners and that sort of thing are not what we want here—especially since they evoke even worse from others.

The idea of HN is to have an internet forum—to the extent possible—where discussion remains intellectually interesting. The kind of comments we're talking about tend to choke all of that out, so downweighting them is very much in HN's critical path.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


Funny though how all these oh-so-objective algorithms -- your rule of thumb for weighting threads, "the flamewar detector" on posts, "SNI", etc., etc. -- seemingly always just so happen to have the outcome they have.

Sorry, you may be (are probably) being perfectly honest and sincere, but... it's still too many coincidences not to give rise to doubts. If about nothing else, then about the weights in your algorithms (a post with a negative headline that lasted under two hours on the front page didn't look all that much more flame-war-y to me than one with a positive headline that lasted over twelve) or your definition of "significant" ("Breaking news, OAI says they didn't do it!" Yeah right, how is that significant; which crook doesn't say that it wasn't actually his paw in the cookie jar?).

Or maybe it's bigger; maybe it's time for the, uhm, "tone" of the entire site to change? I mean, society at large seems to have come to the insight that wannabe rentier techbros just aren't likely to be good guys. And maybe your intended audience -- "startup hackers", something like that, right? -- are beginning to resemble mainstream society in this respect?

Maybe we "Hackers" are coming to the realisation that on the current trajectory of the tech industry in the late-stage capitalism era, "two guys with their laptops in a garage" are not very likely to become even (paltry!) multi-millionaires, because all the other "two guys with their laptops in a garage" ten-fifteen-twenty years ago (well, the ones of them that made it, anyway) installed such insurmountable moats around their respective fiefdoms ("pulled up the ladder behind them", as we'd say if they were twenty years older) that making it big as an actual "Hacker" is becoming nigh-impossible?

I mean, to try and illustrate by example: The Zuck zucks even in the mind of most HN regulars, right? But if you trawl through early posts (pre-2017? -14? -10?), betcha he's on average much more revered there than he is now. A bit like Musk still seems to be, and up until a year or whatever ago, that other Sam (Frazzled Blinkman?) was, and... The rate and mechanism of change here seems to be "Oops, yet another exception, a wannabe rentier techbro who turned out to be a slimebag. But as a group, wannabe rentier techbros are of course still great!" Maybe it's time to go through the algorithms and prejudices and turn down all the explicit and implicit "Wannabe Zuck[1] = Must be great!" dials?

Because as it is, these biases -- "just perceived" as you seem to be saying, "implicit and built into the very foundations of HN" as I'd tentatively say; does it even matter which? -- seem to be making HN not a forum for the current-day "two guys with their laptops in a garage", but for fanboyism of the current and next group of Bezoses, Zuckerbergs, and Musks[1]. Sorry, I haven't checked out the mission statement recently (even though you so graciously linked to it), but is that really what HN is supposed to be?

___

[1]: Well, I'm old enough that I almost wrote "Gates and Ellison" there... Add them in if you want.


> seemingly always just so happen to have the outcome they have.

The key word there is "seemingly". You notice what you're biased to notice and generalize based on that. People with different views notice different things and make different generalizations.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


Wait til you hear what happened to Michael O’Church.


Not sure what you are trying to get across.

This is the final comment [1] that got Michael’s account banned.

You can see dang’s reply [2] directly underneath his which says:

> We've banned this account.

1: https://news.ycombinator.com/item?id=10017538

2: https://news.ycombinator.com/item?id=10019003



That post was, as far as I can tell, basically an opinion piece repeating/summarizing stories that had been on the HN frontpage dozens of times. This post is investigative journalism with significant new information.

It should not be surprising that the outcomes are different.


I really wish there were some simple calculations that could be shown for how posts are ranked. E.g., post A has x upvotes, y comments, and is z minutes old, and is therefore rank 2; post B has these values, while C is here. Hence this post went down the front page quickly.

It's not that I don't trust the mods explicitly, it's just that showing such numbers (if they exist) would be helpful for transparency.
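For illustration, the oft-cited (unofficial) approximation of HN ranking is a gravity-style decay: points divided by a power of age. A minimal sketch, assuming the commonly quoted gravity exponent of 1.8; the real system also applies moderation penalties and flag weights that aren't public, so treat this as a guess at the baseline only:

```python
def rank_score(points: int, age_hours: float, penalty: float = 1.0) -> float:
    """Unofficial HN-style ranking: newer, higher-voted posts score higher.

    The (points - 1) numerator ignores the submitter's own vote; the
    exponent 1.8 is the commonly quoted gravity value, not confirmed.
    The penalty multiplier stands in for the opaque moderation factors.
    """
    return penalty * (points - 1) / (age_hours + 2) ** 1.8

# A fresher post can outrank an older one despite far fewer upvotes:
post_a = rank_score(points=500, age_hours=12.0)  # older, popular
post_b = rank_score(points=150, age_hours=1.0)   # newer, fewer votes
```

Even a transparent baseline like this wouldn't reveal the mod-applied penalties being discussed, but it would at least make the "why did this drop so fast" arithmetic checkable.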


I really don't care about the "algorithm" here. I think this place is distinguished nicely by the fact that I almost never know how much karma a post or user has. If it was in fact a total dictatorship of a few, posing as some democratic reddit thing, who cares? I'm OK as it is, and these things don't last forever anyway.

All you can really do on the internet is ride the waves of synchronicity where the community and moderation are in harmony, and jump ship when they aren't! Any other conceit that some algorithm or innovation or particular transparency will be a cure-all for <whatever it is we want> feels like it never pans out; the boring truth is that we are all soft, squishy people.

Show me a message board that is ultimately more harmonious, more diverse, and as big as this one!


>I think this place is distinguished nicely by the fact that I almost never know how much karma a post or user has.

Did you know that people who have been involved with Y Combinator who make an account on HN can see everyone else who has been a part of Y Combinator? Their usernames are highlighted a different color.

It's a literal secret club that they rarely acknowledge.


Sure ok! What is at stake with this?

In my experience, founders need all the help they can get making friends, I'm glad they have a little club!


This post and discussion from 2013 might interest you: https://news.ycombinator.com/item?id=6799854


IMHO HN data should be transparent.

The innovation in detecting patterns would be incredible, and in time I think it would be best to evolve toward user-chosen ranking algorithms that people personally subscribe to.


It's a common suggestion but I don't think it would work and have posted quite a few times about why: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....


Thanks for linking. I suppose my base argument would be: why not let each individual decide for themselves whether HN is a "low trust" or "high trust" source to them? Over enough time, with enough data, macro patterns may also emerge that aren't easily observed by any individual, no matter how present they are; as I'm sure you've said countless times, you're unable to see every link or thread posted to moderate, since there's just far too much activity.

I would also suggest such conversation would need to be corralled into some sort of secondary HN forum branch, discussion on observations, insights, etc. In general it could be useful for people to also learn about observing patterns for their own sites they own or manage.

I do understand it can facilitate a bit of a "weapons" race, in that if there are bad actors seeking to have many human looking bot accounts (or a single person orchestrating many accounts), then they now too would see how their fingerprints look compared to others as well.

Ultimately, though, I think Elon Musk is right that an actual dollar cost is required to dissuade spam and to keep organizations and ideologues from shaping narratives and controlling what's allowed to be seen and discussed.

Perhaps HN could implement a $5/month tier (or even higher tiers)? For most people on HN in the tech field, even $50/month for an arguably better-curated, more heavily moderated forum isn't much. A filter showing only posts and/or comments from paying users - or, better yet, counting only votes from users at the various tiers - would be affordable for an individual but costly for someone somehow running 1,000 accounts. Unfortunately, $50,000/month isn't much for organizations or nations with an agenda, if that's all it takes to keep certain truths suppressed as quickly and thoroughly as possible.


People are always interested in and fascinated by the algorithm whenever it comes up. Dang makes the (correct) assertion that people will much more easily game it if they know the intricacies. PG always churlishly jumps in to say there’s nothing interesting about it and any discussion of it is boring.

A pretty asinine response. I work in Hollywood, and each studio lot has public tours giving anyone who wants one a glimpse behind the curtain. On my shows, we’ve even allowed those people to get off the studio golf cart to peek inside at our active set, and answered their questions about what they see, which sometimes means explaining Hollywood trickery.

I’m sure there’s tons of young programmers that would love to see and understand how such a long-lasting great community like this one persists.


There's a public tour of HN stuff pretty much every day in the moderator comments. The story ranking and moderation gets covered frequently.


I dunno. This is standard practice for things like SEO algos to try to slow down spammers, or risk algos to slow down scammers.

HN drives a boatload of traffic, so getting on the front page has economic value. That means there are 100% people out there who will abuse a published ranking system to spam us.


wait long enough and the other product will be able to expose the secrets.

future gpt prompt : "Take 200000 random comments and threads from hacker news, look at how they rank over time and make assumptions about how the moderation staff may be affecting what you consume. Precisely consider the threads or comments which have risque topics regarding politics or society or projects that are closely related to Hacker News moderation staff or Y Combinator affiliates."


> Dang makes the (correct) assertion that people will much more easily game it if they know the intricacies.

Which is interesting, because it's sacrilege to insinuate that it's being gamed at all.


It's not sacrilege, it's just that people rarely have any basis for saying this beyond just it kind of feels that way based on one or maybe two datapoints, and feeliness really doesn't count. We take real abuse seriously and I've personally put hundreds (feels like thousands) of hours into that problem over many years - but there has to be some sort of data to go on.


The main component of the HN ranking algorithm is sentiment divided by YC holdings in the company in question. We've all seen it.


Nothing quite like a contract’s consideration consisting solely of a pre-existing obligation. I wonder what they were thinking with that?


> I wonder what they were thinking with that?

"Fuck you, poors."


Everyone is out for Sam Altman, and there are reasons to scrutinize him. But on this issue, it is common for a company's Legal and HR teams to make decisions on language in docs like these (exit docs) entirely on their own. So it is plausible that Sam Altman had no idea this aggressive language existed. One reason to think the same is true here is that I recall Sam spoke up for employee-friendly equity plans when he was running YC.


Plus there's a million ways you can get screwed out of your equity


I'm surprised that an executive or lawyer didn't realise the reputational damage adding these clauses would eventually cause the leadership team.

Were they really stupid enough to think that the amount of money being offered would bend some of the most principled people in the world?

Whoever allowed those clauses to be added and let them remain has done more damage to the public face of OpenAI than any aggravated ex-employee ever could.


Does anyone remember the name of that coder who made a Kickstarter for his game but was unable to finish it because it was a bit too big (but still epic), and then due to his talent got hired at OpenAI? I always wanted to follow him on Twitter but I forgot his name :\ If anyone knows, that'd be great.

Edit - sry why is this the top comment


> this is on me and one of the few times i've been genuinely embarrassed running openai

This statement seems to suggest that feeling embarrassed by one's actions is a normal part of running a company. In reality, the expectation is that a CEO should strive to lead with integrity and foresight to avoid situations that lead to embarrassment.


>This statement seems to suggest that feeling embarrassed by one's actions is a normal part of running a company.

It suggests humans make mistakes and sometimes own up to them, which is a good thing.

> CEO should strive to lead with integrity and foresight to avoid situations that lead to embarrassment.

There is no human who does this, or are you saying we should turn the CEO role over to AI? :)


Streisand Effect at work


I think it’s time to cancel that Chat GPT subscription and move to something else. I am tired of the arrogance of these companies and particularly their narcissistic leaders who constantly want to make themselves the centre of the piece. It’s absolutely ridiculous to run a company as if you’re the lead in a contemporary drama.


Anthropic was founded by ex-OpenAI employees who were concerned with the way it was being run, and their language models are comparable: better for some things, worse for others. I also canceled my ChatGPT subscription, though I will say I'll miss the GPT-4o multi-modal features.


I was thinking of giving Gemini a try, one thing I’m pretty certain of is that Demis Hassabis is consistently candid.


Whenever you read about something like this, I don't understand why the head of HR at a company like this (just google (head of people|hr|"human resources" openai linkedin) and see the first result) doesn't end up on a public blacklist of bad actors who are knowingly aggressive toward employees!


Because this isn’t something instituted by the head of HR alone.


But it's 100% approved by them.


Who is bullish or bearish on OpenAI?

Now that LLM alternatives are getting better and better, and well-funded competitors abound, they don't yet seem to have developed a new, more advanced technology. What's their long-term moat?


PG is Altman's godfather more or less. I am disappoint of these OpenAI news as of late.

5. Sam Altman

I was told I shouldn't mention founders of YC-funded companies in this list. But Sam Altman can't be stopped by such flimsy rules. If he wants to be on this list, he's going to be.

Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm advising startups. On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"

What I learned from meeting Sama is that the doctrine of the elect applies to startups. It applies way less than most people think: startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.

https://paulgraham.com/5founders.html *edited link due to first post getting deleted


This relationship feels like Michael and Ryan from The Office.

One is a well meaning but very naive older person who desperately wants to be liked by the cool kids, the other is a pretentious young conman who soars to the top by selling his “vision”. Michael is a huge simp for Ryan and thinks of himself as Ryan’s mentor, but is ultimately backstabbed by him just like everyone else.


Really? I never thought of pg as naive.


Imagine if these people, obviously narrow-minded and greedy, gain access to AGI. It really would be a threat to mankind.


I don’t believe in the AGI claims, or in X-Risk. But I do think it’s apparent that AI will only become more powerful and ubiquitous. Very concerning that someone like Sam, with a history of dishonesty and narcissism that is only becoming more obvious over time, may stand to control a large chunk of this technology.

He can’t be trusted, and as a result OpenAI cannot be trusted.


we're deeply sorry we got caught, we need to do better. i take full responsibility for this mistake, i should have ensured all incriminating documents were destroyed.

ps "responsibility" means "zero consequences"


It's okay everyone. Silicon Valley will save us. Pay no mind to the "mistakes" they've made over the last 60 years.


I feel there is a smear campaign going on to tarnish OpenAI


Protip: you can’t negotiate terms after you agree to them.


You absolutely can, you just better have the leverage necessary


You’d be surprised


From OpenAI's "fuller statement":

> “We're incredibly sorry that we're only changing this language now; it doesn't reflect our values or the company we want to be.”

Yeah, right. Words don't necessarily reflect one's true values, but actions do.

And to the extent that they really are "incredibly sorry", it's not because of what they did, but that they got caught doing it.


The company that fails at even a simple good-faith gesture in its employee agreements claims it is the only one that can handle AGI, while lobbying the government to create regulation that locks out open source.


A company that was honest wouldn't lobby the government to lock out others.


AI-native companies seem to bring a new kind of working culture. It could be quite different from the established tech industry's environment.


yikes... turns out that lily is actually a venus fly trap...


This really is OpenAI's Downing Street Christmas Party week isn't it.


It's for the good of humanity... that part of humanity that may not want bad PR.


What surprises me about these stories surrounding openAI is how they apologize while lying and downplaying any blame. Do they expect anybody to believe they didn’t know about clawback clauses?


Do they care? The mob will shout for a week or two and then turn their attention somewhere else. spez (the reddit chief) said something like that about their users, and he was absolutely right. A few days ago I was re-reading some of those threads about reddit API changes from ten months back where so many users claimed it was their last message and they were leaving for good. Almost none of them did. I checked two dozen profiles and all but one of them had fresh comments posted within that same day.


I went from very active on multiple subreddits to barely posting once every few months. Instead of answering programming questions or helping people get in shape I'm on other sites doing other things.

Changes like that are hard to measure.


> I went from very active on multiple subreddits to barely posting once every few months. Instead of answering programming questions or helping people get in shape I'm on other sites doing other things. Changes like that are hard to measure.

Changes in sentiment can be hard to measure, but changes in posting behavior seems incredibly easy to measure.


It’s the rule of ten (I made that up): 1 in 10 users upvote, 1 in 10 of those comment, and 1 in 10 of those post.

The people barking are actually the least worrisome, they’re highly engaged. The meat of your users say nothing and are only visible in-house.

That said, they also don’t give a shit about most of this. They want their content and they want it now. I am very confident spez knows exactly what he’s talking about.
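
Under that (admittedly invented) rule, the engagement funnel for a hypothetical site with a million visitors would look like this; all numbers are made up for illustration:

```python
# Hypothetical numbers only, illustrating the invented "rule of ten":
# each level of engagement is roughly a tenth of the one above it.
def engagement_funnel(visitors: int) -> dict:
    upvoters = visitors // 10      # ~10% ever vote
    commenters = upvoters // 10    # ~1% ever comment
    posters = commenters // 10     # ~0.1% ever post
    return {"visitors": visitors, "upvoters": upvoters,
            "commenters": commenters, "posters": posters}

funnel = engagement_funnel(1_000_000)
# {'visitors': 1000000, 'upvoters': 100000, 'commenters': 10000, 'posters': 1000}
```

The point being: the loud thousand are a rounding error next to the silent million that only show up in internal analytics.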


Some salty downvotes going on in here!


Imagine how many people I actually pissed off to get those downvotes!


How do you measure without the API?


I actively quit producing content and deleted my account.

Maybe it’s confirmation bias, but I do feel like the quality of discourse has taken a nose dive.


The discourse is about the same, trouble is the only mods left are the truly batshit ones.


If that's true, wouldn't that imply that the mods aren't very effective?


You get what you pay for. ;)


I stopped browsing Reddit. I imagine the people who posted comments to Reddit saying they’re going to leave Reddit aren’t a representative sample.


Same. Redditor for 15 years and the API thing was the last straw.

I didn’t post about not engaging with or using the platform anymore. Nor did I delete my account, since it still holds some value to me. But I slinked away into the darkness and now HN is my social media tool.


Delete early, delete often. Never keep an old Reddit account around. I torch mine and build anew every year just out of principle.


I didn’t have a schedule, but probably had 5 or 6 accounts over the years… purging, deleting, and a few weeks later rejoining. The last time I deleted everything was before the API changes, and it was the last straw. I haven’t attempted to create a new account and don’t browse at all. I used to spend hours per day there. Now the only time I end up there is if a search engine directs me there for an answer to a specific question I have.


For me it's been every six months. I've even given some creds for burned accounts to the void for the heck of it.

That said, I think you could easily correlate my hn activity with my reddit usage (inverse proportionality). Loving it tbh, higher quality content overall and better than slashdot ever was


Same! Except I've basically stopped using reddit. It used to be that if I got a "happy cake day" then I knew nuking the account was overdue.


I honestly have given up in this battle.

I'm curious, what do you think deleting accounts and starting new is going to do?

They'll just link it all together another way.


they can.

you can’t.


15y account here too - also quit. Tried lemmy for a while and didn't like it. At least it helped me kick the reddit habit. Don't even go there anymore

https://old.reddit.com/u/speff


Same here on everything you mentioned


I'm guessing the ones who actually left Reddit did what I did - they disengaged from the site and then deleted all their content and accounts. It's pointless to complain without any actual power.

The relevant stakeholders here are the potential future employees, who are seeing in public exactly how OpenAI treats its employees.


When the changes went through I nuked all my comments and then my account. I don't know if many others did the same, but if so it would mean that you wouldn't see our "I'm leaving" comments anymore, i.e. that we wouldn't be included in your samples.


Yeah, reading old threads is weird. The majority of everything is intact, but there's enough deleted or mangled comments that it is an effective minor inconvenience.


Sam Altman has stated over and over again, publicly: "I don't care what other people think." And I'm not paraphrasing.


Once you learn that online outrage doesn't actually impact your life that much, its easy to ignore. Gone are the days of public apologies and now we just sweep criticism under the rug and carry on.


I think Trump taught us that very few people will stop you physically if you just ignore what they have to say.


My activity on Reddit has gone way down since they stopped supporting .compact view on mobile. I definitely miss it and want to go back but it’s incredibly hard to engage with the content on mobile browsers now.


I actually find myself using Reddit much less. It’s not that I’m protesting; it just feels like the community changed into something more like the Facebook crowd. It doesn’t feel cutting edge anymore; it’s much more tame and stale. The fresh stuff isn’t on Reddit anymore.


They probably care more about the effect on potential hires, who are going to think twice given that part of their pay may be canceled over some disagreement.


I haven't stopped using it immediately, but it definitely added to the growing list of problems. I don't use that site anymore, except when a search result directs me there. Even then it's a second choice of mine, because I need to disable my VPN to access it, and I won't login.


> I checked two dozen profiles and all but one of them had fresh comments posted within that same day.

I also remember when the internet was talking about the twenty four Reddit accounts that threatened to quit the site. It’s enlightening to see that the protest the size of Jethro Tull didn’t impact the site


This is them fucking over their employees though, not the public, and in a very concrete manner. Threats to rob them of millions - maybe tens of millions - are going to hurt more than losing access to a third-party Reddit client.

And the employees also have way more leverage than Reddit users; at this point they should still be OpenAI's greatest asset. Even once this is fixed (which they obviously will do, given they got caught), it's still going to cause a major loss of trust in the entire leadership.


Employees are replaceable. Outside of a very specific few, they have very little leverage. If an employee loses trust and leaves or “quiet quits”, they will simply be replaced with one of the hundreds of people clamoring to work for them. This is why unionization is so great.

Just as Reddit users stay on Reddit because there is nowhere else to go, the reality is that everyone worships leadership because they keep their paychecks flowing.


Yes, that "very little leverage" is why engineers & researchers near the bottom of OpenAI's career ladder are getting paid 900k/year (2/3rds funny money, admittedly, though in practice many people _have_ cashed out at very large multiples).


Your salary is not leverage..


Employees are replaceable, sure, but that doesn't mean that you can't squander your good will with competent employees and end up only being able to hire sub-par employees.


Yes. OpenAI will only attract sub-par employees. And even if it’s not OpenAI, you simply raise your offered salary and suddenly the subpar employees vanish.


> A few days ago I was re-reading some of those threads about reddit API changes from ten months back where so many users claimed it was their last message and they were leaving for good. Almost none of them did. I checked two dozen profiles and all but one of them had fresh comments posted within that same day.

Lots of people have pointed out problems with your determination, but here's another one: can you really tell none of those people are posting to subvert reddit? I'm not going to go into details for privacy reasons, but I've "quit" websites in protest while continuing to post subversive content afterwards. Even after I "quit," I'm sure my activity looked good in the site's internal metrics, even though it was 100% focused on discouraging other users.


The risk is not users boycotting them. The risk is OpenAI having trouble recruiting and retaining top talent, which will cause them to eventually fall behind the competition, leading users to naturally leave.


Honestly, from a moderation perspective, the dropoff has been stark: the quality of work behind the scenes has fallen off a cliff on most larger subreddits, and the quality of the content those subreddits facilitate has dropped in turn.

It's definitely had a very real impact - but since it's not one that's likely to hit the bottom line in the short term, it doesn't matter in any way beyond the user experience.


It’s not easy to get out of an abusive relationship


Personally I only use Lemmy now. I never made a goodbye/fuck spez post, I just stopped using Reddit.

I think your sample frame is off, they did themselves unforced damage in the long run.


It is hard to compete for high-end AI research and AI engineering talent. This definitely matters and they definitely should care. Their equity situation was already a bit of a barrier by being so unusual, now it's going to be a harder sell.

I know extremely desirable researchers who refuse to work for Elon because of how he has historically treated employees. Repeated issues like this will slowly add OpenAI to that list for more people.


Meanwhile the stock Google pays you can be cashed out same day. Really dumb move for OpenAI.


I think it might just be a consequence of an approach to business that, in aggregate, has been very effective.


The mob that Vox represents these days is minuscule.


It’s remarkable to see the hoi polloi stand by CEOs and big corporations rather than defending the few parts of the media that stand up for regular workers.


This is how anything political (big or small P) works.

Aspirations keep people voting against their interests.

I personally worry that the way fans of OpenAI and Stability AI are lining up to criticise artists for demanding to be compensated, or accusing them of “gatekeeping” could be folded into a wider populism, the way 4chan shitposting became a political position. When populism turns on artists it’s usually a bad sign.


It's not about taking sides; it's about not caring. Everyone is tired of getting worked up over super-rich CEOs being "aggressive" toward their very rich employees, and of your "if you're not with us, you're against us" attitude.


How do you think those same CEOs would treat their not-so-rich employees?


First they came for the Sheldon Coopers, and I did not speak out


Every legal clause that affects company ownership is accepted by the CEO and the board. It's not something VP or general counsel can put there. Lo and behold, signatures from Altman and Kwon are there.

>Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about.

>OpenAI contains multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it.

>Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI.


The public statements would suggest either that Sam Altman is lying or he signs anything that is put in front of him without reading it. I'm inclined to believe that whatever is said is PR (aka BS). In a court of law it is the written and signed contracts that are upheld.


[flagged]


Imagine if someone not representing ownership directly could just put clauses that alter the ownership and alter shareholder agreements.


Employment letters, with equity grants, are practically never signed by anyone on the board, and for companies of any size, practically never signed by the CEO. There are a bunch of people who can bind a company legally, and it's one of the things they figure as part of their governing docs, board process etc.


Parent and those quotes from the article aren't talking about employment letters. They're talking about incorporation/equity documents.


Equity grants are usually signed by the board, although a lot of companies treat this as a meaningless formality and present it as though the document your manager signs is the real one. If you take a look at equity grants you've gotten in offer letters in the past, I bet they have "effective on board approval" in there somewhere.


That's not what I said.

The text in the letter must be approved.


> The text in the letter must be approved

Employment agreements don't even need to be approved by the CEO, let alone the Board. Delegating this responsibility is fairly common.

That said, Sam chose to sign. The buck stops with him, and he has--once again--been caught in a clear and public lie.


I'm not talking about grant-making case by case, but about the delegation-authorization resolution itself. Delaware General Corporation Law has sections describing how it can be done.


> delegation authorization resolution itself. Delaware General Corporation Law has sections describing how it can be done

How what can be done?

Nothing in Delaware law says the CEO--let alone the Board--has to sign off on every form of employment agreement. That's a classic business judgement.


fwiw, re-reading your initial comment, I think you may have meant to say one thing and have inadvertently said two things. The comment sounds like you are saying any clause in any contract which has any effect on equity. You might have intended to say any clause in the contracts you then copy/pasted, which deal with process around granting equity.

?


Again, wrong.

It's just not how the world works. It's legally not required, and in practice not done. If the board and CEO had to sit around approving every contract change in every country, my goodness they'd have no time left for anything else.


I wrote: Every legal clause that affects company ownership is accepted by the CEO and the board.

I did not write: Every legal clause is accepted by the CEO and the board.


I read it. It's just not right. Companies have all kinds of processes in place for who can negotiate what and up to what limit and who has to agree if it goes above this or changes that or whatever. There are entire software packages around contract management to route contracts to the right place based on the process the company has in place.


You're right that in general Sam Altman isn't countersigning every OpenAI employee contract.

However, at some point a lawyer sat down and worked with people to write the form-contract, and someone said "you sure? you want them to sign an NDA on exit? with a clause that lets you claw back equity? (implicit: because that's not normal)"


The form-contract is changing frequently at a company going through a lot of corporate changes and with a lot of freakishly talented employees who probably negotiate hard on contracts.

To be clear, he may well have known. But it isn't a given and in the grand scheme of things on a CEO brain, it would have been way down the list of capturing mind share.


Agree 100%, something tells me A) he really didn't know B) it's still scuzzy.

I'm taking a mental note to remember why mom always said to read e v e r y word before you sign.

I should have learned this at a younger age, somehow ended up 50/50 in an LLC I always assumed was going to be 70/30. Cost a lotttt of time and energy, essentially let them hold the company hostage for $60K later, after some really strange accounting problems were discovered. (my heart says they didn't take money, but they were distracted and/or incompetent)


They’re only apologetic because they got caught in a PR shitstorm. They would not be otherwise. Being a sh*tbag company that claws back equity is a huge red flag and can drive away the critical people who make up the company. They started an arms race, but against companies with much deeper pockets. Meta will be more than happy to gobble up any and every OpenAI employee who no longer wants to work there.


It's been stultifying the older I get to see how easy it is for people to lie to themselves and others, everywhere.

You have to be really attuned to "is this actually rational or sound right, or am I adding in an implicit 'but we're good people, so,'"


Right. The big change is bad faith argument developing into unapologetic bad faith developing into weaponised bad faith.

It accelerated rapidly with some trends like the Tea Party, Gamergate, Brexit, Andrew Wakefield, covid antivax, and the Ukraine situation, and is in evidence on both sides of the trans rights debate, in doxxing, in almost every single argument on X that goes past ten tweets, etc.

It's something many on the left have generally identified as worse from the right wing or alt.right.

But this is just because it's easier to categorise it when it's pointing at you. It's actually the primary toxicity of all argument in the 21st century.

And the reason is that weaponised bad faith is addictive fun for the operator.

Basically everyone gets to be Lee Atwater or Roger Stone for a bit, and everyone loves it.


> It's something many on the left have generally identified as worse from the right wing or alt.right.

It depends a bit on what you mean by left and right, but take something like Marxism: it was always 100% a propaganda effort created by people who owned newspapers, and the pervasiveness of propaganda has been a through line ever since, e.g. in the Soviet Union, agitprop, etc. A big part of Marxist theory is that there is no objective reality, that social experience completely determines everything, and that sort of ideology naturally lends itself to the belief that blankets of bad-faith arguments for "good causes" are a positive good.

This sort of thinking was unpopular on the left for many years, but it's become more hip no doubt thanks to countries like Russia and China trying to re-popularize communism in the West.


Propaganda at a national level, it's always been that, and I take your point for sure.

I think perhaps I didn't really make it totally clear that what I'm mostly talking about is a bit closer to the personal level -- the way people fight their corners, the way twitter level debate works, the way local politicians behave. The individual, ghastly shamelessness of it, more than the organised wall of lies.

Everyone getting to play Roger Stone.

Not so much broadcast bad faith as narrowcast.

I get the impression Stalinism was more like this -- you know, you have your petty level of power and you _lie_ to your superiors to maintain it, but you use weaponised bad faith to those you have power over.

It's a kind of emotional cruelty, to lie to people in ways they know are lies, that make them do things they know are wrong, and to make it obvious you don't care. And we see this everywhere now.


Well, I was referring to Marx and Engels. That's sort of how the whole movement got started. The post-Hegelians who turned away from logic-based philosophical debate to a sort of anti-logical emotional debate where facts mattered less than the arc of history. That got nationalized and industrialized with Lenin and Stalin etc, but that trend precedes them and was more personal. It was hashed out in coffee houses and drinking clubs.

You see the same pattern with social media accounts who claim to be on the Maxist-influenced left. Their tactics are very frequently emotionally abusive or manipulative. It's basically indistinguishable in style from how people on the fringe right behave.

Personally I don't think it's a right vs left thing. It's more about authoritarianism and the desire to crush the people you feel are violating the rules, especially if it seems like they're getting away with violating the rules. There are just some differences about what people think the rules are.


> Personally I don't think it's a right vs left thing. It's more about authoritarianism and the desire to crush the people you feel are violating the rules, especially if it seems like they're getting away with violating the rules. There are just some differences about what people think the rules are.

Oh I agree. I wasn't making it a right-vs-left thing, but rather neutering the idea that people perceive it to be.

I would not place myself on the political right at all -- even in the UK -- but I see this idea that bad-faith is an alt.right thing and I'm inclined to push back, because it's an oversimplification.


> Do they expect anybody to believe they didn’t know about clawback clauses?

Why wouldn’t they? I’m sure you can think of a couple of politicians and CEOs who in recent years have clearly demonstrated that no matter what they do or say, they will have a strong core of rabid fans eating their every word and defending them.


Not trying to play the devil's advocate here, but I am thinking how this would play out if I ever opened a spinoff...

Let's say I find a profitable niche while working for a project and we decide to open a separate spin-off startup to handle that idea. I'd expect legality to be handled for me, inherited from the parent company.

Now let's also say the company turns out to be disproportionately successful. I'd say I would have a lot on my plate to worry about, the least of which being the legal arrangements the company inherited.

In this scenario it is probable that hostile clauses in contracts would be dug up. I surely would be legally responsible for them, but how much would I be to blame for them, truly?

And if the company handles the incident well, how important should that blame putting be?


> I'd expect legality to be handled for me, inherited from the parent company.

That sounds like a really bad idea for many many reasons. Lawyers are cheap compared to losing control, or even your stake, to legal shenanigans.



It has become a hallmark of Western civilization to first of all Cover Your Ass, and where it gets exposed, to Pretend It's Covered, but when its photos get published, to Sincerely Apologize, and when pressed even more, to come out afresh with a Bold Pro-Coverage Vision and Commitment.

But maybe there's a further step that someone like OpenAI seems uniquely capable of evolving.


Yep, it's more like he reviewed and/or requested those clauses to be there than anything else.


I mean it's not like anything is going to happen to them anyway.

People will continue to defend and worship Altman until their last drop of blood on HN and elsewhere, consumers will continue using GPT, businesses will keep hyping it up and rivers of cash will flow per status quo to his pockets like no tomorrow.

If one thoroughly wants to make a change, one should support alternative open-source models to remove our dependency on Altman and co; I fear the day such powerful technology is tightly controlled by OpenAI. We have already given so much of our computing freedom away to a handful of companies; let's make sure AI doesn't follow.

Honestly, I wonder whether we would ever have had access to Linux if it were invented today.


>People will continue to defend and worship Altman until their last drop of blood on HN and elsewhere

The percentage of HN users defending Altman has dropped massively since the board scandal ~6 months ago.

>consumers will continue using GPT, businesses will keep hyping it up

Customers will use the best model. If OpenAI loses investors and talent, their models may not be in the lead.

IMO the best approach is to build your app so it's agnostic to the choice of model, and take corporate ethics into consideration when choosing a model, in addition to performance.
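To make that concrete, a model-agnostic setup can be as thin as routing every completion through one interface. This is only a minimal sketch: the provider names and the fake backends below are placeholders standing in for real SDK calls, not any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Completion:
    text: str
    provider: str

# Placeholder backends; in a real app each would wrap a vendor SDK call.
def _fake_openai(prompt: str) -> str:
    return f"[openai] {prompt}"

def _fake_anthropic(prompt: str) -> str:
    return f"[anthropic] {prompt}"

# The app only ever talks to this table, so swapping models is a config change.
PROVIDERS: Dict[str, Callable[[str], str]] = {
    "openai": _fake_openai,
    "anthropic": _fake_anthropic,
}

def complete(prompt: str, provider: str = "openai") -> Completion:
    """Route a prompt to whichever backend is currently configured."""
    return Completion(text=PROVIDERS[provider](prompt), provider=provider)

print(complete("hello", provider="anthropic").text)  # -> [anthropic] hello
```

The point is that the switching cost then lives in one dispatch table instead of being smeared across the codebase.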


Yes, I've definitely seen people believe that in various discussions. Combine "Altman said they'd totally never done this" with "the ex-employee who first wrote about this didn't write with absolute 100% clarity that this applied to vested equity", and there's enough cover to continue believing what one wants to believe. And if the news cycle dies down before the lie is exposed, then that's a win.

Obviously that should not be possible any more with these leaked documents, given they prove both the existence of the scheme and Altman and other senior leadership knowing about it. Maybe they thought that since they'd already gagged the ex-employees, nobody would dare leak the evidence?


People are so gullible. Sam Altman deserves zero benefit of doubt. His words should be ignored, his words do not prove anything whatsoever.


I tried to delete my ChatGPT account but the confirmation button remained locked. Anyone else have the same issue?


OpenAI's terrible, horrible, no good, very bad month only continues to worsen.

It's pretty established now that they had some exceptionally anti-employee provisions in their exit policies to protect their fragile reputation. Sam Altman is bluntly a liar, and his credibility is gone.

Their stance as a pro-artist platform is a joke after the ScarJo fiasco, which clearly illustrates that creative consent was an afterthought. Litigation is assumed, and ScarJo is directly advocating for legislation to prevent this sort of fiasco in the future. Sam Altman's involvement is again evident from his trite "her" tweet.

And then they fired their "superalignment" safety team for good measure, as if to shred any lingering belief that this company is somehow more ethical than any other big tech company in its pursuit of AI.

Frankly, at this point, the board should fire Sam Altman again, this time for good. This is not the company that can, or should, usher humanity into the artificial intelligence era.


From Sam Altman:

> this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.

Bullshit. Presumably Sam Altman has 20 IQ points on me. He obviously knows better. I was a CEO for 25 years and no contract was issued without my knowing every element in it. In fact, I had them all written by lawyers in plain English, resorting to all caps and legal boilerplate only when it was deemed necessary.

For every house, business, or other major asset I sold if there were 1 or more legal documents associated with the transaction I read them all, every time. When I go to the doctor and they have a privacy or HIPAA form, I read those too. Everything the kids' schools sent to me for signing--read those as well.

He lies. And if he doesn't... then he is being libeled right and left by his sister.

https://twitter.com/anniealtman108


> Presumably Sam Altman has 20 IQ points on me.

I've read your posts for years on HN, don't undersell yourself.

Many CEOs don't know what is in their company's contracts, nor do they think about it. While it is laudable that you paid such close attention, the fact is I've met many leaders who have no clue what is in their company's employment paperwork.


While I agree that there's probably a varying degree of attention paid...

I think this clause is so non-standard for tech that it almost certainly got flagged, or was explicitly discussed, before being added. Claiming that he didn't know it was there strains credulity badly.


I just talked to a neighbor; he said his startup has the exact same clause in their employment contracts!

Huh I should read mine.


> He lies. And if he doesn't... then he is being libeled right and left by his sister.

>

> https://twitter.com/anniealtman108

You know, it’s always heartbreaking to me to see family issues spill out in public, especially on the internet. If the things Sam’s sister says about him are all true, then he’s, at the very minimum, an awful brother. But honestly, a lot of it comes across as the words of a bitter or jealous sibling… really sad, though.


I think someone mentioned possible mental health conditions that she might have. But in either case it is pure speculation and we're random people on the internet, not legal investigators, for better or worse.


Maybe he was too busy being kicked out of the company... /s


I've learned to interpret anything Sam Altman says as-if an Aes Sedai said it. That is: every word is true, but leads the listener to making false assumptions.

Even if in this specific instance he means well, it's still quite entertaining to interpret his statements this way:

"we have never clawed back anyone's vested equity"

=> But we can and will, if we decide to.

"nor will we do that if people do not sign a separation agreement"

=> But we made everyone sign the separation agreement.

"vested equity is vested equity, full stop."

=> Our employees don't have vested equity, they have something else we tricked them into.

"there was a provision about potential equity cancellation in our previous exit docs;"

=> And also in our current docs.

"although we never clawed anything back"

=> Not yet, anyway.

"the team was already in the process of fixing the standard exit paperwork over the past month or so."

=> By "fixing", I don't mean removing the non-disparagement clause, I mean make it ironclad while making the language less controversial and harder to argue with.

"if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too."

=> We'll fix the employee, not the problem.

"very sorry about this."

=> Very sorry we got caught.


> We're removing nondisparagement clauses from our standard departure paperwork

How would you interpret this part?

> and we're releasing former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual.

This is interesting - was it mutual for most people?


> We're removing nondisparagement clauses from our standard departure paperwork

"We're replacing them with even more draconian terms that are not technically nondisparagement clauses"

> and we're releasing former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual.

"We offered some employees $1 in exchange for signing up to the nondisparagement clause, which technically makes it a binding contract because there was an exchange of value."


Mr. Altman seems like quite a pedantic and evil person to work with—an absolute psychopath.


So disappointing of OpenAI. I hope they'll make things right with all their former employees.


I thought freedom of speech was a foundational thing in the US.

But I guess anyone could be silenced with enough economic incentive?


Freedom of speech applies to public institutions. For private entities/individuals, any contract you willingly sign will reign supreme over whatever this "freedom of speech" thing is, unless there's written law that explicitly forbids the forms of retaliation described within said contracts.


Are there more than 2 former openai employees ?


> “The team did catch this ~month ago. The fact that it went this long before the catch is on me.”

I love this bullshit sentence formulation that claims both to have known about this already--as in, don't worry, we're ALREADY on the case--and to be simultaneously embarrassed that they "just" caught it--a.k.a. "wow, we JUST heard about this, how outRAGEOUS".


Unfortunately it is unlikely to result in Altman's dismissal, but imagine being fired from the same company twice in less than 12 months.


There’s a recurring pattern here of OpenAI getting caught red handed doing bad things and then being all like “Oh it was just a misunderstanding, nothing more, we’ll get on to fixing that ASAP… nothing to see here…”

It’s becoming too much to just be honest oversights.


It's the correct counter-strategy to people who believe that you shouldn't attribute to malice what could be attributed to stupidity (and who don't update that prior for their history with a particular actor).

And it works in part because things often are accidents - enough to give plausible deniability and room to interpret things favorably if you want to. I've seen this from the inside. Here are two HN threads about times my previous company was exposing (or was planning to expose) data users didn't want us to: [1] [2]

Without reading our responses in the comments, can you tell which one was deliberate and which one wasn't? It's not easy to tell with the information you have available from the outside. The comments and eventual resolutions might tell you, but the initial apparent act won't. (For the record, [1] was deliberate and [2] was not.)

[1] https://news.ycombinator.com/item?id=23279837

[2] https://news.ycombinator.com/item?id=31769601


Well, in this case, you have the CEO saying basically that he didn’t know about it until about a month ago, and then Vox brings receipts: docs signed by Altman and friends showing that he and others signed off on the policy originally (or at least as of the date of the doc, which is about a year ago for one of them). And we have several layers of evidence, from several different directions, accumulating and indicating that Altman is (and this is a considered choice of words) a malicious shitbag. That seems to qualify as a pretty solid exception to the general rule you cite of not attributing to malice, etc.


Yeah, but keep in mind he's been in the public eye now for 10-15 years (he started his first company in 2005, joined YC in '11, and became president in '14). If you're sufficiently high profile AND do it for long enough AND get brazen enough about it, it starts to stick, but the bar for that is really high (and by nature that only occurs after you've achieved massive success).


> you shouldn't attribute to malice what could be attributed to stupidity

It's worth noting that Hanlon’s razor was not originally intended to be interpreted as a philosophical aphorism in the same way as Occam’s:

> The term ‘Hanlon’s Razor’ and its accompanying phrase originally came from an individual named Robert J. Hanlon of Scranton, Pennsylvania, as a submission for a book of jokes and aphorisms published in 1980 by Arthur Bloch.

https://thedecisionlab.com/reference-guide/philosophy/hanlon...

Hopefully we can collectively begin to put this notion to rest.


Maybe I'm misunderstanding, but this seems straightforward: the first link goes to an email that went out announcing a change, which seems pretty deliberate; nobody writes an announcement that they're introducing a bug. The second change doesn't seem to have been announced, which leaves open the possibility that it's accidental.

Although I suppose someone could claim the email was sent by mistake, and some deliberate changes aren't announced.


The people in [2] got an email too. (It just turned out to be an automated one that hadn't been intended.)


It doesn't matter because they hold all of the cards. It's the nature of power: you can get away with things that you normally couldn't. If you really want OpenAI to behave, you'll support their competitors and/or open source initiatives.


But their product isn’t really differentiated anymore and has really low switching costs: Opus is better than the 4-series at almost everything (training on MMLU isn’t a capability increase), Mistral is competitive and vastly more operator-aligned, and both are cheaper and non-scandal-plagued.

Mistral even has Azure distribution.

FAIR is flat open-sourcing competitive models and has a more persuasive high-level representation learning agenda.

What cards? Brand recognition?


already cancelled my OpenAI account and installed llama3 on my local machine, and have a paid Copilot membership


This is what “Better to ask forgiveness than for permission” looks like when people start catching on.

It’s one of the startup catchphrases that brings people a lot of success when they’re small and people aren’t paying attention, but starts catching up when the company is big and under the microscope.


Seems this Altman fella isn't being consistently candid with us.


> There’s a recurring pattern here of OpenAI getting caught red handed doing bad things and then being all like “Oh it was just a misunderstanding

This is a very standard psychopathic behavior.

They (psychopaths) typically milk the willingness of their victims to accept the apology and move on to the very last drop.

Altman is a high-iq manipulative psychopath, there is a trail of breadcrumb evidence 10 miles long at this point.

Google "what does paul graham think of Sam Altman" if you want additional evidence.


Yeah, not radiating "consistent candidness" is he?


Equity in an unlisted, non-public company is an IOU scribbled onto a piece of loo paper.


I'd pay to get me one of them particular IOUs.


Why has the advent of semi-intelligent agents suddenly turned Silicon Valley into a place that hates its own workers? Why has a place that once believed in the mutual benefit between an intelligent worker and a company turned to brutalizing, or even hating, the very creators of this technology?

Where is all this hatred coming from?


Seems to pre-date semi-intelligent AI agents: https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...


Yeah, this has always been a market-dynamics issue that the industry tried to manipulate... but it didn't use to have a flavor of hatred to it.



The notion here is that the ownership class hates that it cannot own the creators of work. And now a new class of intelligence exists that can be fully owned.

This is dark.


It always hated its workers, it just didn't think it had other options for a long time.


Did it hate the market dynamics? I.e., Peter Thiel's thesis that competition is for losers because it drives up prices... therefore the shortage of talent was creating competition, which should be hated?


It's really maddening just how right the board was.


The majority of employees didn’t care about a lying CEO or alignment research: they wanted the stock payoffs, and Sam was the person offering them. At the end of the day, that’s what happened with the coup.

Now Sam is seen as fucking with said stock, so maybe that isn’t panning out. Amazing surprise.


It's funny to me to read now about employees of OpenAI being coerced or tricked or whatever. Didn't they threaten to resign en masse a few months ago, in total unquestioned support of Sam Altman? They pretty much walked into it, in my opinion.

That's not saying anything OpenAI or Altman do is excusable, no way. I just feel like there's almost no good guys in this story.


While true, it doesn't mean they were offering a better alternative.


I wish more people didn't expect an alternative before getting rid of a bad situation. Sometimes subtraction, rather than replacement, is still the right answer.


Well, appropriately enough, it was an AI movie that taught us a valuable lesson, namely that sometimes the only correct move is not to play.

It's such an insidious idea that we ought to accept someone abandoning promises they explicitly made, once those rules get in the way of doing exactly what they were supposed to prevent. That's not anyone else's problem; that was the point! The people who can't do that are supposed to align AI? They can't even align themselves.


I'll bet Emmett Shear would've been a fine CEO.


Doesn't really matter at this point because they gave literally zero info to the public or most of the company employees when they fired Altman. Almost no one sided with them because they never attempted to explain anything.


Scam Altman strikes again. And if you don’t believe he knew about this, then you are the fool.


I'll make a prediction here: OpenAI will in the coming years turn out just as ruthless and socially damaging as Facebook did.


I know shitting on FB is de rigueur. But honestly Facebook at its peak was really very useful in many ways that OpenAI hasn't ever been.


>OpenAI will in the coming years turn out just as ruthless and socially damaging as Facebook did.

They wish. Napster is a more apt analogy.


Less of a prediction and more of a descriptor of its current state


You ain't seen nothin' yet...


In the coming years? It pretty much already is


Don't forget about Reddit and Twitter. Although they like to call themselves social networks, they are really corporate psyop networks for hire.


Well put. I agree.


Why anyone trusts Altman or OpenAI with something as societally consequential as AI is beyond me.


I've reached the point where I wouldn't trust Altman with anything more consequential than a lemonade stand.


The thing is, he didn't ask you. He put himself into a position where anyone not giving him money would feel they're missing out. It's very unfortunate that he managed to pull it off.


Well that's the problem, isn't it? The incentives align with trusting Altman to make a return on your investment, nothing more.

Investors don't really care about consequences that don't hit the bottom line prior to an exit. Consumers are largely driven by hype. Throw a shiny object out there and induce FOMO, and you'll get customers.

What we don't have are incentives for companies to give a damn. While that can easily lead to a call for even more government powers and regulation, in my opinion we won't get anywhere until we have an educated populace. If the average person either (a) understood the potential risks of actual AI or (b) knew that they didn't understand the risks, we wouldn't have nearly as much money being pumped into the industry.


I can only think of one other tech CEO that managed to become as universally loathed as Altman in so quick a time, and that's Mark Zuckerberg. However even Zuckerberg somehow manages to seem more trustworthy than Altman.


It's neither consequential nor AI (real AI is something we won't see, our children won't see, their children won't see, etc.), so it seems fine to trust Altman with chatbots.


Sam is in a tough position.

OpenAI is worth $100B. At that level, a founder would normally be worth at least $20B.

But Sam isn't getting any of that net worth, yet he gets all the bad rep that comes with running a $100B company.


[ croc tears ]


but sam said he was sorry in all lowercase so it must be okay


Semi off-topic, but the trend of AI influencers adopting the all-lowercase communication style in professional conversation has been very annoying. It makes them appear completely unserious.

I recently received a job recruitment email for an AI role in all-lowercase and was baffled as to how to interpret it.


I don’t think it’s only AI influencers that do this. I’ve noticed people use lowercase as some sort of power move. Like they’re so busy and important they don’t care about conventions.


Ironically, if you just keep the default keyboard settings on your phone, it will capitalize words for you. So these people changed the default settings to create this impression?


I disabled it for being annoying and frequently wrong. The idea that someone might interpret my laziness as a flex (or vice versa) is hilarious to me.


It's annoying that it insists on "proper" capitalization for github and so on ("GitHub"), I do revolt at this trademark injection into my autocompleter.


Haha, we're here arguing whether LLMs constitute AGI, and we can't even phone autocomplete right


I turn that stuff off because I don't like my keyboard automatically doing things for me.


They might be sending messages on a laptop which doesn’t have that turned on by default.


Lowercase typing is broadly informal and (historically) faster (autocorrect especially on mobile devices was terrible for longer than it has been any good).

Anything else people read into it is very often just projection.


It's certainly a modern status symbol that can be seen as a power move.


So common in academia! The student writes a detailed, error-checked, well-edited long email only to receive an answer like

'yes plaese

Sent from my iPhone'

Definitely an 'I'm very busy, look at me' power move


Actually, they probably are that busy and aren’t trying to impress students; they’re trying to grade a bunch of exams while reviewing a paper while writing up research while writing a grant, while also having a personal life.


And if they treat them all with this degree of attention, then they're probably failing at all of them.


The grant writing and (supervising the work of the people performing and writing up) the research funded by the grants that will also be the basis on which they will secure future grants get more attention, obviously.


Many, perhaps most, successful academics are horrible mentors and parents/spouses. That isn’t pertinent to the fact that all faculty are, in fact, quite busy.


nah i dontt hnk so

Sent from my iPhone


I think you are correct. It's a status symbol at this point. It's the digital version of "inconspicuous consumption" [1].

[1] https://www.theatlantic.com/magazine/archive/2008/07/inconsp...


No, this is a status symbol because it's a signal that these people are above norms of conventional society.

What's worse is that there's a ready line of journalists writing about how capital letters promote inequality, or shit like that, providing covering fire for them.


I think it is because people are often typing onto a sheet of glass on a packed train..


The software on that sheet of glass actually automatically capitalizes the first letter of every sentence. You have to go out of your way to produce the sort of text you see in sama's tweets. That's what makes it so egregious.


Something similar I've noticed--there's a certain level people reach within a company where they're now too busy to type out "thank you" or even "thanks." Now "thx" is all they have time for.


> Something similar I've noticed--there's a certain level people reach within a company where they're now too busy to type out "thank you" or even "thanks." Now "thx" is all they have time for.

"thx" is way to verbose for anyone but a plebs, the real power brokers use "ty." Or they don't thank anyone at all, because they know just bothering to read the message they got is thanks enough.


It also appears softer and borderline condescending. I always thought it was feminine because I've only seen women do it before very recently.


It’s pretty common in chatrooms, forums, etc. Been commonplace for at least two decades.


Eh, I mean, in the post-smart-keyboard era it will do it for you, and you either have to disable it OR purposely backspace and re-write it.


For anything on a phone, agreed. I have mine disabled so I can switch easily depending on where or who I’m talking with.


…or capitalization is not really that important for conveying meaning.


it's because the stupidphone's stupid keyboard capitalizes for you in the wrong places, and you have to take extra effort to fix it when they get it wrong, or you turn it off, and then get lazy about fixing it when you do need the capitalization.


Does it? When I start a message, the keyboard automatically capitalizes. It also does it when a sentence starts after a period. I rarely ever have to fix capitalization.

(I typed this from my phone)


I get incorrect capitalization maybe one in every 250 times I use the keyboard, I'd imagine.


Writing in all lowercase is an aesthetic akin to an executive wearing jeans and a T-shirt. It is supposed to impart an air of self-confidence, that you don’t need to signal your seriousness in order to be taken seriously.

However such signalling is harder to pull off than it seems, and most who try do it poorly because they don’t realise that the casual aesthetic isn't just a lack of care. Steve Jobs famously eschewed the suit for jeans and mock turtleneck. But those weren’t really casual clothes, those mock turtlenecks were bespoke, tailored garments made by a revered Japanese fashion designer. That is a world apart from throwing on whatever brand of T-shirt happens to feel comfortable to the wearer.


They have no clue which signals work and which don't, they just throw shit at the wall and see what sticks. Another "Sam" signaled his superiority by playing games during meetings with investors, and it seemed to work for him, until it didn't.

Also, how much is there to customize in a turtleneck? Seems like the same signal as a very expensive suit, "I have a lot of money", nothing more.


Unless you are very fit and have a perfect body shape, a very well tailored shirt/turtleneck can look significantly more flattering than an off-the-rack item. It'll sit well when you're in a neutral pose and stretch or pull appropriately when you gesticulate.

You correctly interpreted the point I was making — Steve Jobs treated his casual look as seriously as others treat an expensive tailored suit. And the result means he's still signalling importance and success, without also signalling conformity and "old world" corporate vibes.


I think it is more like wearing two polo shirts…


Following convention and using standard capitalization rules makes things easier on the reader.

Going to all-lowercase is harder on the reader, and thus is disrespectful of the reader. I will die on this hill.


THIS IS WHY I PREFER TO COMMUNICATE ONLY IN CAPITAL LETTERS. IT REMOVES ANY AMBIGUITY AS TO WHETHER OR NOT I’M ANGRY SINCE THE READER CAN ASSUME I’M ALWAYS SCREAMING IN THEIR FACE.

I HOPE YOU ARE HAVING A NICE DAY.


Gives off "if I sound pleased about this, it's because my programmers made this my default tone of voice! I'm actually quite depressed! :D" [1] vibes

1 - https://www.youtube.com/watch?v=oGnwMre07vQ


Caps Lock is Cruise Control for Cool, right? ;o)


I miss Usenet taglines


I’ve worked with people who write in all lowercase, but I’ve never worked with someone who writes in ALL CAPS.

How long could someone write in ALL CAPS before they get fired?


YOUR JOB ADVERTISEMENT SPECIFIED VERY MANY YEARS OF EXPERIENCE STOP

HOW DO I WORK THIS DIFFERENCE ENGINE STOP


I guffawed (briefly), with a diminishing chortle thereafter. Well played and cheerio.


One of the nurses at the high school I teach at only emails SHOUTY-STYLE. She’s been there 26 years.


This. Reading is hard enough, especially on a screen. Flouting readability conventions shows such contempt for one's readers, which I suppose is the point here.


youre not worth moving my pinky to the shift key, and you think that missed apostrophe was a mistake?

im busy running a billion dollar company i dont have time for this


who's using a hardware keyboard these days?


Give it a year, it will become very uncool as people realize only people not using AI to make their writing better would send syntactically or grammatically incorrect wording.


bet


If I received a legitimate job recruitment email for an AI role in all-lowercase I would put it in the spam folder lol. sam altman typing in all lowercase letters shouldn't influence people to do the same in semi-professional environments & situations. I think Altman is trying to appear casual and friendly, and to attract the zoomer market, by typing in all lowercase. It's just my speculation and perhaps I'm over-reading into it.


If I see a job application with spelling or grammar mistakes in it, then it's a huge red flag; it tells me that this person does not care about accuracy or they don't check their work very well. These are very important attributes to have in engineering.

If you see it in a job advert, I'd assume the same for the people who are doing the hiring.


Throw it at an LLM with the simple command "fix", highlight every character that has a delta and send it back to them.

Add a grade in red at the top if you're feeling extra cheeky


i read an article about bauhaus typography in high school and mostly dropped capitalization since then (~2005)

whether i use them or not is basically a function of how much i think there will be consequences for not using them. if i do use them without coercion, it's for Emphasis, or acronyms (like AI), or maybe sPoNgEbOb CaSe

i'm not sure where AI CEOs, or younger generations picked it up. but the "only use capitals when coerced" part seems similar


> I recently received an job recruitment email

Yet you use "an" for a vowel that's miles away, so I don't like the way you type either.


did you create a HN account just to point out a typo


Conversion works in mysterious ways!


lowercase has been used by propeller heads and hackers since.. idk.. the 80s? some of us just liked it better that way.


Postmodernism is consuming everything


i've been mostly using all lowercase for decades.. am i the asshole?


Same. It’s something you learned if you did a lot of chat (irc, icq, battle.net etc.) before smartphones. It makes sense young people wouldn’t know it’d been a default, faster way to type.


right? as i said in my other comment, this has been a thing since at least the 80s when i got behind a keyboard.

i don't normally do it anymore, but for this post i've gone sans-caps. kickin it old school. (yaimadork)


not at all, its a stylistic preference for some but its also easier and faster. i've been typing lowercase on PC [faster] and mobile [preference] for as long as i can remember; only using capitalization and punctuation where it feels necessary and for emphasis ..and to the best of my knowledge no one i chat with thinks it's strange, and no one on any forum has said anything about it either. the hate in this thread is just directed at Sam and since he does this, also at this

since i've always typed like this i've joked with my mother that if i ever send her a message with proper capitalization and punctuation, its a secret signal that i've been kidnapped!


Can you please explain to me how it is faster? You click shift and the character at the same time.


why?


You're so right, it's utterly distracting to read a statement written in all lower case.

It looks like it was written in a sloppy way and nobody actually proofread it.

I think Sergey Brin used to do the same thing (or maybe it was Larry Page). I remember reading that in some google court case emails and thinking, the show Silicon Valley wasn't even remotely exaggerating.


He's on the board of AI safety too. Now I feel protected.


It looks especially odd when the text contains other "formalities" like semicolons, or writing the phrase "full stop".

Aside: "full stop" is the Commonwealth English way of saying "period" so it seems like an affectation to see an American using it.


it’s the same as tech's lax dress code, super high salaries, benefits, overall relaxed “we can get away with it because we’re both high status and enlightened” vibe that operates as a signal of their assumed superiority.


They could have been cool. They could have been 2001 Google. They could have been the number one place any new PhD wanted to work.

But no. The MBAs saw dollar signs, and everything went out the window. They fumbled the early mover advantage, and will be hollowed out by the competition and commodified by the PaaS giants. What a shame.


Instead of taking 15 years to drop the "don't be evil" act like Google, the new AI companies did it in two! e/acc, baby!


what does e/acc mean?



Sheldon Cooper can afford a nicer apartment.


It’s not just the MBAs that saw dollar signs. A lot of the engineers and researchers did, too.


The sad thing is that this isn't even MBAs. The Bay Area has gotten so infested with hustlers and win-at-all-costs, winner-takes-all YC-mantra tech bros that it seems even a high-minded company like OpenAI isn't immune.


Who are the MBAs that did this? Altman is not an MBA.


As much as I love blaming things on MBAs, the culture of OpenAI looks at least as much like what happens when you make a room full of real-life Big Bang Theory nerds very rich by rewarding their fantasies of a world free of balanced human interactions.

We have to look at the reality that the worst excesses of the new Silicon Valley culture aren’t stemming from the adults sent to run the ship anymore, and they aren’t stemming from the nerds those adults co-opt anymore either.

The worst excesses of the new Silicon Valley culture are coming from nerds who are empowered and rewarded for their superpower of being unable to empathise.

And I say that as someone who is back to being almost a hermit. We got here by paying people like us and not insisting we try to stop saying what we think without pausing first to think about how it will be received by people not like us.

It’s not a them-vs-us thing now. It’s us-vs-us.


What is up with the allegations of Annie Altman?

Something doesn't smell right


Well, it is wrong to disparage someone who is innocent until proven otherwise. Even if you disagree with them and think they are a snake-oil salesman.

That does not mean you should not hear someone out. As far as I am aware, Annie said Sam and their brother molested her as a kid. He claims otherwise, and deflects with “she is a drug addict” (heavily paraphrasing here). Lots of talk of how her trust was broken, and how it is impossible to get justice against someone so rich and powerful, etc., where sama’s camp claims it is a money grab and there is zero proof. A sticky wicket.

Now, whether all these “new” revelations (honestly never thought Sam was honest) help support her claims is up to you. Just wanted to add some context for those unaware. Not accusing anyone.


Not gonna lie, I think it's shady as fuck that a new account registers to post this one comment...


Those are shocking allegations. The real question is why Sam, as a wealthy man, isn't able to support his family.


It's better to check whether it's true


Another day, another article where Vox is hating on OpenAI.


You’re just shooting the messenger.


HN hates cryptocurrencies, but 'equity' to me is even worse than the worst shitcoins. It's an IOU that the company controls (and one people think is a tangible part of their 'compensation'). Just imagine if a company thinks you're about to jump ship and you have equity close to vesting: the company has an almost perverse incentive to fire you to nullify that equity. This gets much worse when you know the typical vesting schedules startups like to use. I know some of you working at the top companies might get your equity vested every month, but in my experience it's much more common to see yearly vesting schedules at startups, where you have to stay for many years to get anything.

So think about that. They offer you an average-to-low base salary but sweeten the deal with some 'equity', saying that it gives you a stake in the company. Neglecting to mention, of course, how many different ways equity can be invalidated; how a year in tech is basically a lifetime; and how the whole thing is kind of structured to prevent autonomy as an employee. Often founders will use these kinds of offers to gauge 'interest', because surely the people willing to take an offer backed more by magic-bean equity money (over real money) are the ones most dedicated to the company's mission. So not being grateful for such an amazing offer would be taken as an affront by most founders (who would prefer to pay in hopes and dreams if they could).

Now... with a shitcoin... even though the price may tank to zero, you'll at least end up with a goofy item you own at the end of the day. Equity... not so much.
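To make the vesting-schedule point concrete, here's a rough sketch (made-up numbers, not any particular company's terms) of how much of a grant is actually vested if you leave mid-grant under monthly vs yearly vesting:

```python
def vested_fraction(months_worked, total_months=48, cliff_months=12, step_months=1):
    """Fraction of a grant vested after `months_worked` months, given a
    cliff and a vesting step (1 = monthly, 12 = yearly). Illustrative only."""
    if months_worked < cliff_months:
        return 0.0  # nothing vests before the cliff
    # Only full completed steps count toward vesting.
    vested_steps = (months_worked // step_months) * step_months
    return min(vested_steps / total_months, 1.0)

# Leaving (or being pushed out) at month 23 of a hypothetical 4-year grant:
monthly = vested_fraction(23, step_months=1)   # vests every month after the cliff
yearly  = vested_fraction(23, step_months=12)  # vests only on anniversaries

print(f"monthly vesting: {monthly:.0%}")  # 48%
print(f"yearly vesting:  {yearly:.0%}")   # 25%
```

With yearly vesting, walking out (or getting fired) one month before an anniversary leaves almost a full year of accrual on the table, which is exactly the leverage being described above.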


I bet similar claw-back clauses are waaay more common at private companies than many on this thread would imagine. I've always been under the impression that 'vested equity' doesn't mean ~anything until you actually see liquidity; the company can generally fuck you before that point if it chooses to. Hope I'm being overly cynical with this take.


I have never seen it in quite a few equity agreements I've looked at. What is common is a very short post-termination exercise window that in practice acts as a clawback unless you are financially able and willing to pay the cost/taxes of exercising within (often) 90 days.

And a bunch of not-well-informed employees didn't understand the consequences of this clause when they originally signed.
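Back-of-the-envelope arithmetic (hypothetical numbers; real strike prices and tax treatment vary a lot) shows why a 90-day exercise window can act as a de facto clawback:

```python
# Hypothetical option grant at a private company.
options = 10_000
strike = 2.50            # price per share you must pay to exercise
fmv = 25.00              # company's current fair market value per share

exercise_cost = options * strike          # cash to the company
paper_gain = options * (fmv - strike)     # spread on illiquid shares
# For NSOs in the US, the spread is typically taxed as ordinary income at
# exercise even though you can't sell. Assume a 35% marginal rate:
tax_due = paper_gain * 0.35

print(f"cash needed within 90 days: ${exercise_cost + tax_due:,.0f}")
# -> cash needed within 90 days: $103,750
```

If you can't front six figures in cash for stock you may never be able to sell, the options simply expire: no clause needed, same outcome as a clawback.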


It sure means a lot more after liquidity, but big successful companies like SpaceX do have markets for selling pre-IPO options or shares.


They don't need clawback. They can just dilute the f out of you. Which is what happens most of the time anyways


Can't we all just go back to being positive and amazed with OpenAI and it's technology? Why does everyone have to be so negative about tech?


I also welcome our new AI overlords.

It's not really the tech that is negative it is the humans manipulating it for profit and power, and behaving obnoxiously. The tech is very useful.


It just seems petty, refusing to sign a simple document agreeing not to trash your former employer (with whom you intend to continue to benefit from a shared interest in said company). It wasn't as if Altman was threatening to take back equity. Little more than a "just be nice, OK", and yet somehow that is asking too much?


Damn, is this Sammy's Sock or do you just have elementary level reading comprehension?


The latter, apparently. TBH I answered after hearing about the story elsewhere and it just struck me as fairly benign for an employer to ask as much from a (soon-to-be) former employee. I am not a lawyer anyway so I honestly have no idea what the proper legal interpretation might be. I was just commenting on what seems to be the unfortunate state of human affairs these days. It just feels like people are so much more prone to go on the offensive over what amounts to a simple request for civility. Of course down here we still honor the humble handshake. Maybe that is the difference?


I see why you'd assume. I deal with annoying young people who bitch about working 4 hours a day, annoying I know. There's still no excuse for being almost the same type of annoying. Get good. Idgaf if you're not a lawyer, I just have greater than a 5th grade reading level. You clearly displayed an annoying level of person. The aware but dumb. Read.

If you're autistic, have an extra chromosome, or will admit they are genuinely dumb. I'll apologize. But otherwise, nah.


Well at least I wasn't nearly as annoying as the person I am responding to right now. Moving on....



