Sam Altman, Lately (oftheclock.com)
160 points by iamthirsty 7 months ago | 118 comments



> He owns no stake in the ChatGPT developer, saying he doesn’t want the seductions of wealth to corrupt the safe development of artificial intelligence, and makes a yearly salary of just $65,000.

> “A growing number of Altman’s startups do business with OpenAI itself, either as customers or major business partners. The arrangement puts Altman on both sides of deals, creating a mounting list of potential conflicts in which he could personally benefit from OpenAI’s work.”

If nothing else, the idea that a person positioned to lead such a company would be willing to do it for just $65k for altruistic reasons has backfired. Clearly the person has many incentives to go find profits elsewhere.

edit: added second quote to clarify


> If nothing else, the idea that a person positioned to lead such a company would be willing to do it for just $65k for altruistic reasons has backfired.

In my first draft, “backfired” was the exact word I used too, but I thought it would come off as too targeted at Sam, so I changed it.


I’m struggling to understand what backfired


He's saying the attempt to virtue signal, via the low salary, is invalidated by these deals. The virtues would be mostly altruism I guess, that the AGI mission was important enough to humanity for him to sacrifice opportunities to build personal wealth and so on. I think that's overblown personally, but certainly Altman would often repeat the line about having low salary and no equity.


And it's a lot easier to fake virtue-signaling with a low salary when in the last three years you've bought:

a $27M home in San Francisco,

a $16M "weekend ranch" in Napa,

and a $43M estate in Hawaii.


Altruism?


Altman isn't an Effective Altruist.


> Altman isn't an Effective Altruist

Or any type of small-a altruist, really. Which is in direct conflict with the image he wishes/wished to portray by "only" officially getting a bay-area-intern-level salary with no equity.


Did they edit their comment? Or did you assume altruism meant Effective Altruism?


> Did they edit their comment?

Yes.


The author here, just want to clarify:

The post is not a “hit-piece” on Sam, nor am I calling him a bad person or anything of the like.

It’s a discussion of the direct contradiction between publicly stating someone is without financial biases and the hard facts of clear financial entanglements.

I’m happy to hear any criticism on how I could’ve written it better to explain my position! :)


The tech community is inherently cynical, not because we dwell in basements and fear sunlight, but because we've been around long enough to see the corruptive effects of the big business side of tech.

The article is well balanced and in my opinion doesn't exhibit a sense of bias. It's clear you're not alone in noticing the dots on the graph are forming a line.

Some may say that merely shining a light on the topic, rather than something more virtuous, is a form of bias. I tend to disagree with those assessments due to the aforementioned reasons.

We're all adults, we can hold a complex view of Sam Altman. It's not about putting him in a good or bad box, but being aware that he does have financial ties to all of this (avoidable or otherwise) and that is the lens we may need to evaluate some of his decisions by.

His actions will ultimately tell his story.

That said, HN can be a little too suspicious and it does frequently trump up malice in the benign.


Honestly it's mind-boggling how uncritical most of the folks here are about Sam Altman and Paul Graham, somehow giving billionaires who have been caught in multiple lies the benefit of the doubt, multiple times.

It's unbelievable that apparently no one has noticed a $10M stake in the hottest company around, one that could well have >10x'd in the meantime. Watching Paul try to handwave it away and pretend he doesn't understand how investments work is incredible.


Thanks for the open invitation!

My reaction to this, and all other discussions on the topic, is that it rings hollow.

Here’s why & I’m very curious for your response:

Conflict of interest is a fact of life in elite, cutting edge work.

“No conflict, no interest” is as old as SV and VC for that reason. It’s a small pool of people who create great companies and invest.

How someone structures themselves around that conflict is the signal. Disclosing potential conflicts of interest is the ethical bar and that has been done.

Whatever the real reason for the original OpenAI board's ouster of Sam, the current board has found no wrongdoing on Sam's part and, crucially, made no request for him to cease investment activity.

My hypothesis: that’s because Sam’s investment is a powerful signal and one the AI startup community greatly benefits from.

I do think Sam’s virtue signaling before Congress has painted Sam into a corner here, but we keep shining the flashlight on it and coming out with the same conclusion: Sam is studious about governance and an astute risk taker. Even the latest ScarJo incident shows that. I don’t see how OpenAI could do better for CEO.

It is possible for Sam to take both positions.

1. Not taking financial gains from OpenAI so that his decisions are guided by building safe AGI

2. Investing in companies that are betting on trends he has the most conviction and knowledge about

Perhaps a simple test is- is the moral hazard here worse than owning stock in OpenAI?

If we don't expect luminaries to be anything other than people, the OpenAI team's convergence to bring Sam back is the clearest signal anyone could have on who they believe the best leader to bet on is.

It’s human nature to find fault and we all have our faults. My concern with making this the hill to die on in critiquing Sam & OpenAI is we feed a narrative that has little explanatory power and perpetuates unrealistic expectations on how life in a position of power works.


> It’s human nature to find fault and we all have our faults. My concern with making this the hill to die on in critiquing Sam & OpenAI is we feed a narrative that has little explanatory power and perpetuates unrealistic expectations on how life in a position of power works.

Isn't the criticism, then, about the acceptance that "life in a position of power works" this way and is essentially morally corrupt? Couldn't it be better? Greed is among the chief reasons such conflicts of interest arise; shouldn't we criticise that aspect and expect better from the "elites" in positions of power? Since greed will eventually corrupt, we should probably expect better from those taking positions of power. I personally wouldn't like to accept that this is the best system we can live under, and for it to change we need to criticise it so that the optics change over time.

It was once accepted to own slaves; eventually we all landed on the same page: it is morally corrupt, it's unethical. More modern examples, like insider trading and stock fraud, were once accepted until we stopped accepting them as "the way it is".

I don't see a reason to not strive making other morally corrupt aspects of our systems (such as conflicts of interest) also unethical and frowned upon. People are people and shared ethics is how we level ourselves against each other.


Thanks for your reply! Couple thoughts:

> How someone structures themselves around that conflict is the signal. Disclosing potential conflicts of interest is the ethical bar and that has been done.

This is kind of the problem though: it hasn't been done. Graham, my example in the post, didn't know YC invested in OpenAI. The previous board didn't know about Altman's ownership of the Startup Fund. Most of his growing investments were only published in the WSJ — yesterday.

> Whatever the real reason for the original OpenAI’s board to ouster Sam, the current board has found no wrongdoing on Sam’s part and crucially made no request for him to cease investment activity.

The original reason was a lack of trust, and the original board was pushed out so he could return. Of course the current board found no wrongdoing: it was built from its foundation to be much more receptive and docile to Sam's activities and statements.

> It is possible for Sam to take both positions.

> 1. Not taking financial gains from OpenAI so that his decisions are guided by building safe AGI

> 2. Investing in companies that are betting on trends he has the most conviction and knowledge about

The whole point here is that the second contradicts the first. If you're still directly making money off OpenAI's decisions & deals, the hand-waving about "being guided by building safe AGI" and not money ceases to be credible, because there are clearly billions at stake for Altman.

> we feed a narrative that has little explanatory power and perpetuates unrealistic expectations on how life in a position of power works.

The thing here is, Sam could realistically be an altruistic monk, not driven by any financial incentive when it comes to AI. There are a ton of different options, if he wants to live up to the standard he set for himself.

For example: putting his AI-related investments into a blind trust, not investing in related companies at all, putting a majority of his wealth into a Bill & Melinda Gates-style charity — the list goes on.

If he had not tried to tell everyone he's doing this for the good of humanity and not money, he could take a multi-million-dollar salary plus a huge personal stake in OpenAI and no one would bat an eye. It's the fact that he himself is positioning his image to be one thing, when that is not the case.


Ok, please take this as constructive criticism of your writing, and not as a counterargument to your position.

In your post, the comment:

> Graham's attempt to dispel rumors of Altman's supposed firing didn't work as well as hoped, with many of the responses reiterating that making him choose between OpenAI & YCombinator still constituted a form of firing.

Portrays this as a matter of opinion, when if any single opinion should be considered, it should be Paul Graham's, since he was the person performing the act.

The likely cause of Paul Graham's statement was Helen Toner's comments a few days earlier:

>if Sam did stay in power as he ultimately did you know that would make their lives miserable and I guess the last thing I would say about this is that this actually isn't a new problem for Sam and if you look at some of the reporting that that has come out since November it's come out that he was actually fired from his previous job at Y combinator which was hushed up at the time

Which clearly carries the implication that this was an inarguable firing for misdeeds. In this context, it does not matter if being asked to make a choice is technically firing. The main point is that this is a rebuttal of the narrative put forth by Helen Toner.

Paul Graham makes this clear

>we would have been happy if he stayed

So this is how you start out. It places the article within a false narrative, but it is not even the topic of the post. It would have been better to state more concisely that the thread containing Paul Graham's statement revealed an additional piece of information.

Next you say

>While responding to replies, it seems that Graham was informed - and later confirmed - that Altman had, in a pretty clear conflict of interest

You are combining a conclusion and evidence here. The evidence shows that Sam Altman has a degree of investment in OpenAI through Y Combinator. You need to make the argument that this was a clear conflict of interest separately; additionally, you need to state whether the conflict of interest refers to when he invested the money as president of Y Combinator, or to his later decision making at OpenAI as a stakeholder.

>obviously unbeknownst to YCombinator leadership,

Is clearly false because at the time Altman was YCombinator leadership himself.

>a tidy sum of $10 million

Everything is relative; it is worth putting into context how much $10 million represents to the people concerned. In your provided reference, Paul Graham describes the sum as

>This was not a very big investment for those funds.

I cannot critique the accuracy of your representations of the Wall Street Journal, as it is paywalled.

The gist of the critique is that there are deals between OpenAI and enterprises in which Sam Altman has an investment. It is not uncommon for individuals to be in decision-making roles in one company while it does dealings with another in which they have a financial interest.

It is not enough to establish that this connection exists for it to be a story. There needs to be some suggestion and evidence that decisions were improperly influenced by the person with the conflict of interest. The typical way to avoid even the appearance of improper influence is for the person to declare their conflict of interest and leave the decision to non-conflicted parties. Declarations of conflicts of interest are commonplace, because conflicts of interest are not bad in themselves, only when they influence events.

If you are alleging that Sam Altman improperly influenced the negotiation with Reddit for data, or any similar conflicted negotiation, then you really have a story. You also need evidence to say that it happened. Even evidence that Sam Altman refused to step aside from such negotiations would be significant news. While he may not have influenced negotiations to his own advantage, being in a position where it appears that it might have happened would generally be considered a lapse in judgement.

Barring that evidence, you have a story about someone's investments that have increased in value, which is what people expect their investments to do.


Thanks for your reply! Couple things:

> Which clearly carries the implication that this was an inarguable firing for misdeeds. In this context, it does not matter if being asked to make a choice is technically firing. The main point is that this is a rebuttal of the narrative put forth by Helen Toner.

I actually agree with Paul Graham's post — totally reasonable thing to ask someone to do, and I didn't agree with the replies to his post. I tried to point out what happened — he posted the image and people didn't believe him — rather than how I felt about that specific topic. Arguably could've done better, however.

> The Gist of the critique is that there are deals between OpenAI and enterprises that Sam Altman has an investment in. It is not uncommon for individuals to be in decision making roles in one company while it does dealings with another in which they have a financial interest.

> It is not enough to establish that this connection exists for it to be a story. There needs to be some suggestion and evidence that there were decisions that were improperly influenced by the person with the conflict of interest.

> If you are alleging that Sam Altman improperly influenced the negotiation with Reddit for data

I highly recommend you use archive.today, or whatever means to read the WSJ articles (even a paper copy, if it's in there), because the assertions about impropriety or influence of deals originally stems from those two articles. I didn't just claim that with no evidence, I was highlighting statements and assertions already made.


> Portrays this as a matter of opinion, when if any single opinion should be considered it should be Paul Graham's since he was the person performing the act.

Opinion is one thing. But PG's story of "we asked him to choose and he chose OAI" doesn't gel with the *actual evidence*: Altman proposed making himself Chairman of YC, went ahead and actually announced it at YC, and then the announcement was hastily deleted and he "chose" OAI.


The cat is out of the bag now, and the Toyota Corolla trick has already been invented, but I think he could have used it, or at least a less Miami-esque version of it.

Like the other Sam, his bank account has not been inflating by accident, and his insistence that he is not that interested in money should be taken, at the least, with a reasonably sized pinch of salt.


Having a couple of billion in the bank tends to make people (publicly) disinterested in money.


Paul Graham came out publicly to defend Sam, and we instantly have this blog post about, wait, just a sec, let's dissect actually why Sam is still evil.

Can we believe that Sam could actually be a good person? Today, Kara Swisher in her podcast on Pivot said, "Every time I tell people I actually like Sam, they become wildly offended".


You’re applying pretty black and white moral values to a post that, at least to me, didn’t read that way at all. One can like Sam Altman as a person while wishing he was more transparent in some of his business dealings.


The post may not be at the extremes, but if the author has been following the issue as closely as it seems, they must be aware that there are people boldly proclaiming Sam Altman to be a sociopath on a daily basis.

The issue has become polarized, for reasons I don't rightly understand, but nevertheless this is where we have ended up.

To write on the topic in this environment it would be advisable to be clear on what they are saying, what they have an issue with, and what the appropriate remedy would be.

To just throw out some insinuations in an "I'm just asking questions" manner doesn't in itself condemn a person. It isn't happening in isolation, though. No snowflake believes itself to be responsible for the avalanche.


> To write on the topic in this environment it would be advisable to be clear on what they are saying, what they have an issue with, and what the appropriate remedy would be.

They literally did all of that, though?


It's insane how far people are willing to project their feelings onto Altman. Look at this quote in this thread.

"But it's worth noting that much of Sam Altman's presentation is just a mask (one he puts a ton of effort into and is good at maintaining), even if he's still less evil than the Sacklers or a mob boss."

How do people even come up with this narrative?


> Can we believe that Sam could actually be a good person?

Depends on what good means to you. This is a person we have evidence of repeatedly using these kinds of underhanded techniques. Maybe he's not physically hurting anyone, but this is a person I would avoid.


> these kind of underhanded techniques

What in the article is underhanded? Worst case, he has undisclosed conflicts of interest.

> this is a person I would avoid

Does Altman have a Trump-like wake of ruined careers and lost riches among former allies? Everyone he's been close to seems to have done well from it.


> Worst case, he has undisclosed conflicts of interest.

Given the size of these deals, that's kind of a big deal. It isn't an "oops, it slipped my mind" small little conflict of interest kind of thing, imo.


> Given the size of these deals, that's kind of a big deal

For investors, sure. (And the investors are more than fine with Altman, warts and all.) For the public, eh.


Are you saying the public won't or shouldn't care? Altman wants public trust to say what regulations should and should not be made. Dishonesty is relevant.


How many of them were made to sign egregious and secret non-disparagement agreements?


Honestly, I think his shady handling of the ScarJo thing is what is shifting the tide.

He very clearly isn't being honest there, and it's so obvious that many people are starting to question everything he says.


No, if anything that’s a pretty fake controversy too.

Ricardo Montalban had a great quote about the life stages of an actor, enumerating them as follows:

1. Who is Ricardo Montalban?

2. Get me Ricardo Montalban.

3. Get me a Ricardo Montalban type.

4. Get me a young Ricardo Montalban.

5. Who is Ricardo Montalban?

As far as I can tell, Johansson’s complaint is that when OpenAI reached out to her for voice acting and she turned them down, that they instead got a Scarlett Johansson type, and that OpenAI should be categorically prohibited from hiring any voice actor who sounds like her at all. Which is not how acting has ever worked, but for some reason the topic of artificial intelligence gets a lot of people worked up to the point of artificial stupidity.


Midler v. Ford is more relevant legally than Ricardo Montalban's wit. In short, deliberate mimicry is not allowed.

OpenAI's public claims about how they produced the Sky voice followed Johansson's public statement. They could be true or false. We don't know what claims or evidence they gave Johansson's counsel.


> As far as I can tell, Johansson’s complaint is that when OpenAI reached out to her for voice acting and she turned them down, that they instead got a Scarlett Johansson type

Agreed, and according to Midler v. Ford that is not permitted:

"We hold only that when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort in California."


I’ve never seen that standard applied to acting. Particularly voice acting. Midler v. Ford was a dispute between a singer and a company that used a sound-alike singer, singing a cover of a Bette Midler song in a commercial, to falsely imply a sponsorship that didn’t exist. Totally different case.



There’s a difference between impersonating someone with the explicit intention of falsely giving the impression they are involved in a project which they are not, and simply hiring an actor (or a voice actor) who can give a performance that’s similar or reminiscent of another.

> You've never seen it, huh? Google is super easy to use.

Please stop acting like an asshole.


I'm sorry, friend, from my perspective it seems quite obvious you're not arguing in good faith, and I have no interest in chasing your goalposts.

You take care now.


> from my perspective it seems quite obvious you're not arguing in good faith

I’m arguing in good faith, in the sense that I’m expressing my own genuine rationale for my own opinions. Your inability to cope with that and remain civil is the only show of bad faith in this entire discussion.


You started this conversation calling my position a "pretty fake controversy".

You take care now. I will not be responding to you further.


You really struggle with the fact that other people disagree with you, don't you?


LOL. Take care.


It’s quite a few things. I didn’t find his claims of ignorance around the non-competes, for example, particularly compelling.

But all of that is quite separate from these conflicts, which are entirely a matter for Altman and his investors, investors who have no reason to complain about him.


Agreed.

But when you see him being duplicitous in some situations, it's hard not to suspect it bleeds into other situations.


> What in the article is underhanded? Worst case, he has undisclosed conflicts of interest.

undisclosed = underhanded

> Does Altman have a Trump-like wake of ruined careers and lost riches among former allies? Everyone he's been close to seems to have done well from it.

I'm not talking about Trump, and I don't think Trump should be a reference for what is or isn't acceptable.


People are angry about the "Open"AI debacle and he's been publicly in favor of being very paternalistic and controlling (likely for his material benefit as much as safety). It's fair that he's taken some flak for those things, he's trying to control the direction of society at large and people want a say. I don't think he's evil, but I can see why people would perceive him as paternalistic or even a bit patronizing.


> Paul Graham came out publicly to defend Sam, and we instantly have this blog post about, wait, just a sec, let's dissect actually why Sam is still evil

I'm not seeing good and evil in this post. It's calling Sam out for not being transparent. Given he's elevated OpenAI, in public testimony, to an extinction-level threat to humanity, that lack of transparency is of public concern.

Not being transparent doesn't make him evil, doesn't mean he is unlikeable and doesn't per se mean he's dishonest. (Though OpenAI and he do have a likeability problem, at least in politics, albeit one I think they can fix.)


That’s exactly what I was going for — the issue of transparency here. It wasn’t dissecting why he’s “bad”, it’s that the public statements don’t match up with financial realities.

Maybe next time I could press more about the transparency factor, but I thought it was concise enough.


> public statements don’t match up with financial realities

Has he ever plead poverty?


Obviously not, but as I state in the article, he has pleaded:

> “He owns no stake in the ChatGPT developer, saying he doesn’t want the seductions of wealth to corrupt the safe development of artificial intelligence, and makes a yearly salary of just $65,000.”

According to OpenAI themselves.

So he takes a “low” salary and no ownership so that, according to him and the company, the pursuit of financial gain does not influence his decisions — yet that's a complete omission of the whole truth.

I’ll stop short of calling it a flat-out lie, but a mischaracterization of reality for sure.


> I’ll stop short of calling it a flat-out lie, but a mischaracterization of reality for sure.

In my opinion a mischaracterisation of reality is just a lie with layers of indirection to weasel oneself out of it. It's definitely a lie.


> albeit one I think they can fix

Money, a little image coaching, and have Aaron Sorkin write a movie.

(We've seen a similar situation before.)


I'd love for Altman to explain why they decided to start Worldcoin in Africa, and by offering bigger and bigger signup incentives for their whole retinal scanning thing, to the point where at times it could be two month's wages for some people...


Keep in mind, when Swisher says she likes Sam, what she means is, to quote her Twitter: "Sam Altman is no different than most of the talented ones, which is to say, aggressive, sometimes imperious and yes, self-serving."


People don't divide cleanly into "good" and "evil" buckets, and CEOs in general tend to be ruthless deal -makers. But it's worth noting that much of Sam Altman's presentation is just a mask (one he puts a ton of effort into and is good at maintaining), even if he's still less evil than the Sacklers or a mob boss.


Given the accounts of his sister, and now Helen Toner, no. We can be sure that he is evil.


> Can we believe that Sam could actually be a good person?

Ok but based on what ?


> let's dissect actually why Sam is still evil.

I never said Sam is evil, nor anything close.


So... not entirely joking: at what point do we declare that "OpenAI" the non-profit was basically a scam? It's clear it isn't now what it originally presented itself to be, and it's increasingly looking like it was never meant to be. The public-face of an open non-profit was, what, just a recruiting tool to be able to hire folks who otherwise wouldn't want to leave academia?


I think it was a classic bait and switch for the academics working there. You were able to publish everything, which is what academics want, until they came up with the safety nonsense and closed everything up.


There's a Musk court case about that coming along.


I’m astounded he amassed $500+ million even before this. Loopt can’t have earned him more than single-digit millions; are YC partners paid this well?


Not just being partner, but what sounds like some extraordinary generosity by Paul Graham, newly reported in the WSJ article cited by OP:

> Hydrazine bought out a portion of startup shares owned by Graham, a transaction that gave Altman stakes in some of the hottest companies backed by Y Combinator. The sale hasn’t been previously reported.

https://archive.is/x2hx4#selection-2672.0-2672.1

Depending on what companies and how much, that could have been an extraordinary windfall.


If you read the WSJ article, his second investment was $15,000 for 2% of Stripe


Nice. A tiny fraction of the opportunity cost of working as a founding engineer, people come to you to be vetted, and you can shotgun-approach it. Most of us are in the wrong business.


As opposed to partners in other VCs? Successful VCs make good money, thanks to performance fees; say, 20% of the profit.

edit: I don't know the specifics of Altman's case.


> thanks to performance fees

Based on public reporting, Altman's wealth came from his family office piggybacking alongside YC versus his share of carry in YC's funds.


What does this mean for dummies? Altman family was already rich?


> What does this mean for dummies?

He sold Loopt for $40mm and then invested his share of that into start-ups like Stripe. (That investment alone should have taken him past $100mm.)


Oh wow, wish I had just 1% of his wealth


Unpopular but true: if you don't have conflicts of interest, you aren't really in the game. I don't know any serious business person who doesn't have conflicts. Some manage them as honestly and fairly as possible, others not so much.

Having conflicts doesn't make someone bad.


It’s not even specifically about the conflicts of interest — although that isn’t great — it’s about the obfuscation of that information, and the direct contradiction with public statements and ‘mission goals’.


What is a concrete example?


>> Having conflicts doesn't make someone bad.

What if someone is willfully opaque about their conflicts?


Read the entire comment, and think.


Yeah, if you don't have conflicts of interest you are a loser. There are so few companies; what can one do?


> While responding to replies, it seems that Graham was informed — and later confirmed — that Altman had, in a pretty clear conflict of interest and obviously unbeknownst to YCombinator leadership, invested a tidy sum of $10 million through YCombinator into OpenAI’s for-profit arm.

The emphasized phrase doesn’t make sense. At the time, Altman was YCombinator leadership. Graham had retired to England by then; it’s hardly unusual or surprising that he wasn’t keeping track of every single YC investment.


> every single YC investment

A startup from the current crop with a $500K investment for 7%? Sure.

Pretty much the hottest topic ON THE PLANET, north of $80B valuation? Yeah, I'm a bit more skeptical that "it's not unusual he wouldn't be paying attention to that investment".


I'm a little bit surprised by these articles. I thought it was well known (or at least an open secret) that sama is one of the most prolific tech investors. Yes, sure, there are conflicts of interest, but this is par for the course. If you make many bets and startups pivot to greener pastures, then inevitably you end up funding startups that compete head-on. When people take sama's money they know he has a stake in 400 other startups. But sama's network is also one of his great assets.


He either didn't disclose key commercial dealings to the OpenAI board, or purposefully concealed them. That's a big deal.


Anyone not living under a rock knew who he was. If someone has been investing for 10 years before taking a job, one can reasonably assume they are going to have a bunch of investments. It doesn't need to be 'disclosed'. What's next, someone will be surprised Warren Buffett owns some stocks outside Berkshire?


There is a huge difference between 'I have private investments' and 'I own a controlling interest in a fund I have been telling you is independent'


The openai fund? I don't think you know what "own" or "controlling interest" mean.


That's a bit rude, especially since you're wrong.

Prior to Ian Hathaway's takeover a few months ago Sam Altman owned over 75% of the fund, according to their Form ADV filings with the SEC.

I get why you're confused, though: given their use of fake filing names, the ownership and structure of the OpenAI fund has been so opaque from the start that it's actually hard to know who has what.


Find the filings, and read them. And understand what a limited partnership is, what the general partner does, and why saying "he owned 75% of the fund" is nonsensical. He didn't "own the fund", the new guy doesn't "own the fund". He was acting as GP and now another guy is acting as GP, in both cases through another entity. The structure of OpenAI isn't any more opaque than any other reasonably sized business, and there are reasons for additional entities; it's not all about obfuscation.


I feel like you're being deliberately pedantic in a general sense and ignoring the terms of art here.

Direct owner is a term that basically means someone who has ownership, beneficial ownership, voting rights, or rights to sell. Sam Altman was that. He was also an indirect owner through OpenAI Startup Fund GP I, which is the other entity you are talking about. You don't need either to be GP, though it can make things simpler. As far as the bundle of rights in ownership is concerned, yes, he did own the fund.

Anyway, the obfuscation bit is from the filings with the state, not so much the SEC: for a period of time there, OpenAI Startup Fund GP I was managed by Vespers, Inc. and had a bunch of registrants with fake names and fake directors and board members on the filings, all with the same address, an affordable housing unit in Santa Ana.


Practically everything you are saying here is nonsense as it relates to a partnership. The partnership agreement for the actual fund, i.e. not the entities created for purposes of someone acting as the GP, will lay out the economics, who can do what, how/if the general partner can be replaced, and what they can and cannot do.

The statement that he owned 75% of the fund is deliberately misleading. That he was in control of an entity that acted as the GP of a partnership means "he was acting as GP". But, that doesn't sound as nefarious, and it doesn't sound so sensational, and that's why it's not being said that way.

The only way to know if something shady is going on is to read the partnership agreement of the actual fund. That's where the rubber meets the road.


There are some issues you're ignoring, legally speaking, regarding liability and ownership when you have a conflict of interest, but that's ok.


Lay them out as they relate to the situation.


If you are a CEO, especially of a quasi non-profit, and you start a for-profit investment vehicle as part of this organisation, you should tell the board that.


What makes you think they didn't know it existed?


Paul Graham comes off as such a gormless chump.

Trying to hand-wave away a $10M stake in OpenAI taken when they were valued at <$1B, and then trying to weasel out by saying profit is capped. Why yes it is, Paul, at 100x initially, with the cap rising by 20% a year. And then trying to claim that the PPUs aren't liquid, despite regular liquidity events being the entire reason we know OpenAI's current valuation.

Didn’t think I’d see the founder of YC try the “I’m a moron who doesn’t understand how investments work” defense.
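To put the comment's figures in perspective (a 100x return cap compounding at 20% per year, both numbers as stated above, not verified against OpenAI's actual terms), a quick sketch of how fast such a cap grows:

```python
# Sketch of the cap growth described in the comment above:
# a 100x cap that rises 20% per year. Figures are the commenter's
# claims, not verified against OpenAI's actual agreements.
def cap_multiple(initial_cap: float, annual_raise: float, years: int) -> float:
    """Return the cap multiple after `years` of compounding."""
    return initial_cap * (1 + annual_raise) ** years

for y in (0, 5, 10):
    print(y, round(cap_multiple(100, 0.20, y), 1))
# 0 100.0
# 5 248.8
# 10 619.2
```

On those assumptions the "cap" exceeds 600x within a decade, which is the commenter's point about it being a weak limit.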


> OpenAI taken when they were valued at <1Bn

Incorrect round


> Altman’s personal investments range in everything from nuclear fission reactor start-up Helion

Helion is a fusion startup, not fission. That's what makes it particularly interesting. Getting a simple but obvious detail like that wrong is odd.


Honestly was a typo, I was actually aware of the type of reactor. People make simple mistakes sometimes, especially while typing fast. Fixed in post.


Ah thx for the update, probably should have given you the benefit of the doubt on that one, sorry!


I find it strange that people keep attacking him on this.

He’s allowed to make money. Musk, Bezos, etc. all did and nobody whined.

There is plenty to attack him on legitimately…yet people seem obsessed with this angle


They didn’t claim to be running a non-profit/capped-profit entity, being altruistic and saving the world from AGI; they didn’t co-opt the word “Open”, and they never claimed to work on a humble $65,000 salary with no equity upside, doing it all out of the goodness of their hearts.

No one cares about another billionaire founder if they were honest about it. The valley created dozens, if not more, each wave.

The current generation of founders have drunk their own Kool-Aid, it seems, and are no longer satisfied with becoming billionaires; they also want to be adored and revered as the savior, the next Jesus, MLK, Gandhi, or Mandela.

It belittles the people who sacrifice their careers, money and livelihoods to make a difference on actual social missions.


I don't see any of the comments here, is that a bug?


A dead comment contains the rest of the 8 comments.


HN was behaving strangely for me a little while ago


[flagged]


> whining starts instantly when someone makes a bit dough

More like scrutiny.


> The whining starts instantly when someone makes a bit dough

Can you elucidate what your comment is trying to say in the context of the article? I don't understand the relevance.


People, most of whom will never be rich, get anxious when their idea of the American Dream is challenged. Because what’s gonna happen when it’s their turn to make it big?


> get anxious when their idea of the American Dream is challenged

How does anything in this article challenge the American Dream?


No, it's the complaint that when people get rich, they’re scrutinized more.

People who won’t ever get rich, but hope to one day, defend the rich in this manner because of this dream.


> the complaint that when people get rich, they’re scrutinized more

When people get powerful they're scrutinized.

Plenty of billionaires fly under the radar. If you go to the Congress, however, and tell the country you're inventing something that could blow up humanity, yeah, you're going to get attention.


Left wingers complain when journalists don't hold powerful people to account. Right wingers complain when they do.


The complaints are mostly due to a lack of consistency in the selection of the targets of scrutiny. The point of scrutiny should be to uncover the truth, not political point-scoring.


It really seems like criticizing Sam is the new hot thing to do, with tons of people jumping on the bandwagon. Whether it's hiring a voice actor who sounds like ScarJo, having non-disparagement clauses in separation agreements (something basically all big companies and institutions tend to do), being associated with a crypto project (Worldcoin), "lying" to OpenAI board members, etc. No one is perfect, and when you are put under a microscope, just about anyone can look bad in the wrong light.

Ultimately, I ask myself, is my life better because Sam was born and did what he did? And the answer is 1,000 times "yes!" because the introduction of ChatGPT changed so much and enabled so much creation and learning for me personally. And I have a strong suspicion that if the Helen Toners of the world had their way, it never would have been released at all. And without all that money and prestige floating around OpenAI, I doubt they would have been able to create such a dream team that allowed the thing to happen in the first place. And I think all of that comes down at least in large part to Sam's vision and scrappiness and willingness to just do stuff and not get stuck in institutional morass.


> It really seems like criticizing Sam is the new hot thing to do, with tons of people jumping on the bandwagon. Whether it's hiring a voice actor who sounds like ScarJo, having non-disparagement clauses in separation agreements (something basically all big companies and institutions tend to do), being associated with a crypto project (Worldcoin), "lying" to OpenAI board members, etc. No one is perfect, and when you are put under a microscope, just about anyone can look bad in the wrong light.

True, but it's hard to start something as big as OpenAI and not warrant a little scrutiny. At least, I think there is plenty of public interest here, in particular because of the chosen mission statement for the company.

> Ultimately, I ask myself, is my life better because Sam was born and did what he did? And the answer is 1,000 times "yes!" because the introduction of ChatGPT changed so much and enabled so much creation and learning for me personally.

Which is a very reasonable position, but does the fact that your life is better negate concerns that applications of ChatGPT may actually make other people's lives worse? And that the lack of transparency around conflicts of interest raises reasonable concerns about both judgement and the ability of the organization to deliver on its mission?


That's just it-- I really don't think ChatGPT and Sam have harmed anyone besides possibly a very few people who disagreed with Sam and tried to resist him and got outmaneuvered by him. But I think many tens of millions of people have greatly benefitted from him. And to ignore that in the calculus of "is Sam worthy of reproach?" seems silly.

And I also don't feel like I am somehow owed a huge amount of transparency around the exact details of how Sam may or may not benefit financially from his association with OpenAI, or the legal agreements they had with departing staff. Even if he does benefit, is that really so horrible? They have a for-profit division now, so they are paying taxes. And the fortunes made from OpenAI stock will be taxed for sure. And the people who left are rich and got to work on a world-changing product.

Where is all the harm? It's really hard to point at any real harm from my standpoint. But the benefits and gains are palpable, and they are obvious to anyone without an agenda to push or axe to grind.


Altman aims to be trusted to say what regulations should and should not be made. It should not surprise you people consider evidence of dishonesty and suspicious coincidences relevant to trust.

People have lost jobs and likely careers to AI models trained on their works. You could assert in the long run all individuals will be better off. You could assert the benefits to others made the harms virtuous. You could assert they deserved it. I don't know how you could deny they were harmed. You could assert it was inevitable. But this would negate credit if it would negate blame. This is a distraction from the question of trust however.


Unfortunately, I think this was totally inevitable, particularly now that there are powerful open models that can be fine-tuned, like Llama 3. At this point, you can't stop it any more than you can stop piracy of books and movies. And I'm not even so sure that "access to their copyrighted works" was the primary reason for anyone being disrupted by AI.


> Whether it's hiring a voice actor who sounds like ScarJo

If we believe OAI. Before anyone mentions the WaPo article, that was a bundle of documents handed to them by OAI. WaPo hasn't spoken to the voice actress. What they have is that this voice actress, who has to remain anonymous because of, quote, "fears for her safety", told her agent (who also has to remain anonymous, for reasons) in a statement that "she wasn't told to sound like Scarlett". I'm not sure it's been shown that SJ wasn't a training source, either by herself, with this actress, or otherwise.

> being associated with a crypto project (Worldcoin)

A crypto project that didn't launch in the US but in Africa, where they offered more and more and more money to people for retinal scans to sign up (to the point where it was often two month's wages for people) doesn't sound exploitative?


Tech loves drama just like anyone else, haha. But this recent pattern of drama that follows Altman is of his own making, although perhaps he is under more scrutiny than most, reflecting OpenAI's importance in our industry.



