[flagged] Sam Altman said startups with $10M were 'hopeless' competing with OpenAI (tomshardware.com)
142 points by LorenDB 9 days ago | 154 comments





> "Look, the way this works is we're going to tell you it's totally hopeless to compete with us on training foundation models. You shouldn't try, and it's your job to try anyway, and I believe both of those things."

He’s only saying that he’s not incentivized to make a small and scrappy team compete with OpenAI. I don’t think you should read this as “Sam Altman says small companies will never produce valuable AI models”.


Yes, this is a really weird story, because what he actually said seems like the most banal possible thing for a tech executive to say about the prospect of competition.

He was speaking specifically about Indian startups, and he did say they should try anyway, which means he clearly thought there was some non-zero (but not very non-zero) chance one could succeed.

It sort of is the most banal possible thing, and I'm sure he never would have guessed that comment would be repeated or become controversial (it was long ago). It's like if Toyota said you couldn't make a dependable car for $5,000, but hey, go try and prove me wrong. It's nothing.


> the VC asks whether a trio of super-smart engineers from India "with say, not $100M, but $10M – could build something truly substantial?"

I don't know where you got "incentivized to make a small and scrappy team" - the question was simple and clear, and Sam's response was pretty clear as well. He was/is wrong, and is now finding out.

“Sam Altman says small companies will never produce valuable AI models”.

sure sounds like

"Look, the way this works is we're going to tell you it's totally hopeless to compete with us on training foundation models"


Why would Sam Altman be incentivised to compete with OpenAI? I don't understand what you're trying to say.

But in the end your conclusion is the exact opposite of what he said, and I don't see what justifies that interpretation.


The DeepSeek v3 model had a net training cost of >$5m for the final training run, and the paper lists over 100 authors[1], meaning highly-paid engineers. This is also one of a sequence of models (v1, v2, math, coder) trained in order to build the institutional knowledge necessary to get to the frontier, and this still ends up far above the $10m mark. It's hardly a "trio of super-smart engineers".

[1] https://arxiv.org/abs/2412.19437v1


"Final" run... it took around half a billion - to a billion to get there.

https://arstechnica.com/ai/2025/01/why-the-markets-are-freak...


Incidentally, Altman's comments were in response to a question about a hypothetical startup with $10m. So you've made the argument even more cogent.

It's popular to dunk on Sam, but I don't think he's wrong here. There are now hundreds of companies that have attempted to train foundation models, and almost every single one of them has failed to build a viable business around it (even Mistral looks to be in rough shape).

DeepSeek has done something remarkable, but they have the resources of a multi-billion-dollar quant fund at their disposal. Telling startups they have a chance is sending them to near-certain death. There are way more promising niches to fill.


My personal opinion on society is that many, many businesses have massive inefficiencies and could be wiped off the map if people understood those weaknesses. But there is a culture of "that CEO is so smart, no chance you could compete". In reality, they are just hiring random people with fancy degrees. I bet most OpenAI "AI engineers" have no clue how low-level GPU CUDA programming even works. They are just tweaking pytorch configs, blowing billions on training.

In the past, tech got away with the above because capital meant if you hired enough people, you ended up with something valuable. But AI levels the playing field, reducing the value of capital and increasing the value of the individual contributor.


> But there is a culture of "that CEO is so smart, no chance you could compete". Reality is, they are just hiring random people with fancy degrees.

Being charismatic, a good talker, and not having too-strong moral principles will also work most of the time, unfortunately.


We can hope that there will be a paradigm shift.

My opinion is that organizing capable people to accomplish goals is incredibly difficult, and that includes keeping a business running. Inefficiencies are unavoidable, even among engineers instead of "engineers."

Right, but traditionally the difficulty in organizing people is solved with money: just keep hiring until the product gets done. That's what we saw here; OpenAI wanted $500 billion! In reality the money wasn't necessary; what they really needed was innovation. AI will obsolete people who solve problems with brute-force money, which is the modus operandi in VC-backed startups.

The fact that the human brain can still do better on certain types of problems than SOTA LLMs while using less energy than a nice LED lightbulb continues to bolster my belief that ultimately it all comes down to the right algorithm.

That’s not to say data isn’t necessary, but rather that the algorithm is currently the critical bottleneck of further AI progress, which a $10M startup absolutely has a chance of outcompeting big tech companies on. Heck, I wouldn’t even put it past an individual to discover an algorithm that blows existing approaches out of the water. We just have to hope that whoever achieves this first has a strong sense of morality...


Specifically he said

> Look, the way this works is: we're gonna tell you 'it's totally hopeless to compete with us on building foundational models' and it's your job to try anyways

He's clearly being glib


He also said "I believe both those things".

Stratechery said that Altman's greatest crime is to seek regulatory capture. I think that's spot on. Altman portrays himself as a visionary leader, a messiah of the AI age. Yet when the company was still small and progress in AI had just gotten started, his strategic move was to suffocate innovation in the name of AI safety. For that, I question his vision, motive, and leadership.

But DeepSeek took some huge and hugely expensive models that others had paid $Ms to train (well, those who didn't already own the hardware mostly used investor cloud credits, but still) and distilled them, rather than training from scratch?

They trained their own V3 and then trained R1-Zero from V3 purely with RL; it didn't completely work, so they then took some CoT examples and trained R1 with RL + some SFT. You're thinking of the finetunes based on R1 outputs - those are not actually distills, and they're all far worse than the original R1. And yes, they're finetuned from other models.

Garbage in, garbage out. More garbage out. Mmm... that garbage is looking pretty good. Can we have some more garbage?

Finetunes of smaller models on R1 output are pretty bad, but the main R1 model is very, very far from being "garbage".

He's right lol. DeepSeek has likely spent hundreds of millions getting to the point where they can train a ~$6M model, if that is even the truth.

In the end, he is a very good dealmaker. He already got his payout. From now on, it's OpenAI's investors' problem, not his.

If I had what OpenAI has, I could imagine how to make it profitable tomorrow. And because I could do that, they HAVE to make it free without an account, to prevent anyone new from meaningfully entering the $0 to $20/mth segment. If you look at their business strategy, it's top notch: anchor pricing on the $200 tier, a $20 sweet spot that probably costs them on average $5/mth to serve the $20/mth customers. Take your $50M-a-year marketing budget and use it to buy servers, run a highly optimized "good enough" model that is basically just Wikipedia in a chatbot, and you don't need to spend a dime on marketing if you don't want to; it's an amazing top of funnel to the rest of your product line. They know what they are doing, I suspect; they just need time to pull a platform together, and they have a long head start on that. Basically, all of this reaction is, to my mind, effectively presuming that "OpenAI" is a current-gen chatbot. One would presume it's going to be a lot wider and deeper than that, and given what DeepSeek is pointing to, one with incredible unit economics over time.

Every single thing you said here falls apart at 'it costs on average $5/mth to serve the $20/mth customers'.

Why? My wife has a ChatGPT $20/mth account and rarely uses it now that the novelty has worn off (I asked; she said twice this month). The only reason she keeps paying for it is to retain the chat history. That's a tiny, very rarely used docker container on a server somewhere. Maybe I'm thinking about this wrong, but I would just run this the same way we ran DigitalOcean: all the $5/mth accounts with no activity sit on a box configured for that, and once one starts to do stuff it moves to a box with fewer VMs on it, etc. This is how subscription tiers in cloud work. Sure, it cost us $100MM to get there, but now it just prints cash. If OpenAI decided to stop training models and focused on unit economics, I don't see how they couldn't get to profitability quite quickly, even with all the competition.

Chat history is a text file. She is paying $20/mo to host a text file. If you pasted that text file back into Whatever... then you would also retain the chat history. How economical.

I just stopped paying for Plus and my history seems to be retained on the free tier.

You're totally right on the last point: if they stopped training models and fully focused on inference efficiency, then they could totally get to profitability (but presumably would be obsolete in the medium-term unless all progress stopped). But right now, inference is very expensive, and your wife's case is the exception not the rule.

Would you mind elaborating on why you think their assumption doesn't hold? I have a 6-year-old GPU, 400EUR back then (consumer price, mind you), that can happily serve 10 "normal" users. Sure, we have to add other hardware too, like chassis, power supply, CPU, RAM, SSD, etc., so let's 4x that. Now we're looking at a 1600 investment plus power cost. My setup comes down to 0.04 EUR per hour of operation, so about ~30 bucks a month (mind you, electricity is very likely cheaper wherever you are). We also need maintenance and operations, eyeballed at large scale at ~2EUR/user/month.

On a 3 year amortisation period we're looking at

investment: 1600

monthly cost to operate: x * 3 + x * 2 + 1600/36

monthly revenue: x * 20

For x users = 10

monthly cost: 50 + 44.4

monthly revenue: 200

net month result: +105.6

net total result: +3801.6

I know, I know, that's a napkin calculation, but if anything hardware costs will be lower for big providers, which would give them even more profit. It just looks like a feasible business model to me, as long as you can sell those subscriptions.
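If it helps, here's the same napkin math as a tiny Python sketch (every figure is the parent comment's assumption, not a measured number). Under these assumptions the break-even is just under 3 users per box:

    # Napkin unit economics from the comment above (EUR; all values assumed).
    HARDWARE = 1600          # one-time: ~400 GPU, times ~4 for chassis/PSU/CPU/RAM/SSD
    MONTHS = 36              # 3-year amortization period
    POWER_PER_USER = 3       # assumed per-user share of the ~30/mo power bill
    OPS_PER_USER = 2         # assumed maintenance/operations cost per user
    PRICE = 20               # subscription revenue per user per month

    def net_per_month(users):
        cost = users * (POWER_PER_USER + OPS_PER_USER) + HARDWARE / MONTHS
        return users * PRICE - cost

    print(net_per_month(10))           # 105.55... (the 105.6 above rounds 1600/36 to 44.4)
    print(net_per_month(10) * MONTHS)  # 3800.0 (vs. 3801.6 above, same rounding)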


Wikipedia in a chatbot? That was a poor idea, a poorer thing to do, and a plain stupid thing to even begin to brag about. Why would you want your chatbot trained by wikiNA*Is? Your "chatbot" now is of the factual opinion that every single accusation against everyone is true, despite compelling and factual evidence to the contrary, and that the German Social Democratic Party were liberators.

If you think that strategy is top notch, training your chatbot with bottom-tier data, then you are heading towards a cliff at an accelerated rate:

https://www.wired.com/story/one-womans-mission-to-rewrite-na...


I mean, they haven't prevented that at all though? There's plenty of very good competition in the $0 space, they're far from the only one.

It prevents a $10MM venture-funded startup from getting a foothold. There will be plenty of chatbots that come and go; the point is that OpenAI seems to have a much bigger platform to unfold, with the LLM just being the center of it. Basically, competing with ChatGPT and competing with OpenAI are not the same thing.

DeepSeek, Claude, Gemini, and Grok have free tiers (along with whatever models you spin up locally of course). Are there any other free offerings I'm missing?

I hope this is obvious, but DeepSeek has more than $10M. You can't spend all of your cash on a single training run.

Of course he's going to say that. He's incentivized to make everyone (especially investors) believe scaling is the only way so he keeps getting billions in investor money. If no one believes OpenAI can be caught, they win.

Scaling compute can only be taken so far. True advancement in AI comes from optimizing compression of human knowledge. Altman missed the first principle by a mile.

Meanwhile, Marvin Minsky promised a PhD from an AI by the end of the decade; he would have done it again, but he passed away. He made this claim in 1950, 1960, 1970, 1980, and 1990; he forgot about it in 2000, but made it again in 2010.

Did you miss this principle?


Ironically it's largely Minsky's fault that it hasn't happened. Too many people listened to him.

It probably will happen by the end of this decade, or not too long afterward. Within the next few generations, a CoT mechanism like R1's (and presumably o1's) will be capable of originating, directing, and documenting original scientific work, doing so with no more guidance than a typical PhD candidate would receive from their advisor.


This is absurd for a number of reasons. Probably the biggest being that $10m need not be the only investment. Sam of... Y-Combinator is suggesting that startups do not get more funding after $10m? That $10m is insufficient to demonstrate capability that would then motivate more rounds of funding?

Honestly, I want to see more $10m startups. Stop this mega-millions-level funding into AI/ML and give out more small sums. Let people explore new ideas and scale. Give $10m to people doing non-LLM based stuff. Hell, give $10m to the Mamba team. Give it to Ziming Liu (KAN). To Bruno Gavranović. To people doing Bayesian stuff. To a ton of others. Let these people attempt to scale. If you really want to disrupt OpenAI, the best way is to find something better than LLMs. You aren't going to beat the entrenched players at their game, but we know for certain that there are more efficient means of generating powerful learning systems. There's a whole community of "GPU Poor" people, which is just another name for "can't get their ideas through review because reviewers are too lazy to evaluate methods and just respond with 'not novel, needs more experiments.'" There's a metric ton of "unpublished" (still on arxiv) works that are more compute efficient, and these people aren't just hungry, they're hungry and want to fight because they've been treated unfairly (I speak from experience).


I agree. It would be pretty hopeless if OpenAI had to compete with a model trained on less than $10M - can't they just buy DeepSeek and rebrand it?

Not sure if joking, but even Llama 3 training was estimated at $60M compute cost. And that doesn't mean that the total cost to get there was only $60M.

Looking at the statements of the founder, DeepSeek is what OpenAI originally promised to be. They are not for sale.

Just don’t ask it what happened on the 3rd of June, 1989!

Open models are not censored.

DeepSeek will not acknowledge the events on that day.


It's the CCP's decision anyway, not the founder's.

Not much different than in the U.S. - what would happen if Meta or OpenAI attempted to be sold to China?

It's pretty different

That's not going to prevent the next DeepSeek from emerging.

I am beginning to think that the fate of the world might not rest on giving this man infinity dollars

i sure hope it doesn't. i can't see how anyone gets fooled by him. if anything he inspires me to try and get a cool $10M and bounce on investors

I'm in the chip design industry. After reading Altman's comments in this article and thread last year I realized that he is not as smart as he thinks he is.

https://www.tomshardware.com/tech-industry/tsmc-execs-allege...

https://news.ycombinator.com/item?id=41668824

Ultimately, another company finding a better, cheaper way to do AI is good for everyone. We were reading about how it would be hard for new companies to catch up, but now the existing big companies are the ones that have to catch up. We were seeing a lot of stories about fab capacity and new power plant construction just for AI. I've been waiting for the AI bubble to pop just like the dot-com bubble popped when investors finally started demanding companies show profitability. AI is certainly useful, but at some point companies have to start showing how to make a profit rather than just hype.


Best to assume that CEOs don't speak candidly when addressing the public. Wasn't he fired from OpenAI due to a lack of open/honest communication? In any case, he's incentivised to tailor his words and views to reduce competition, reduce investment in competitors, and generally improve his company's position. I would be sceptical of anything publicly stated that directly impacts the public's perception of OpenAI or him personally.

Glad to see that other folks are coming around on Sam Altman being a professional liar.

> I realized that he is not as smart as he thinks he is

This applies to the vast majority of startup CEOs in the tech industry.


I'm not sure _anyone_ is as smart as most of them seem to think they are, if we're measuring by how much their success should be attributed to their intelligence compared to other factors.

Our overlords are selecting for risk tolerance in leadership more so than smarts.

I'm not sure these CEOs are assuming any personal risk in most cases. Altman's most recent venture before OpenAI took off was overseeing a failed cryptocurrency. Adam Neumann walked away from the steaming wreckage of WeWork still worth a billion or three. These guys are teflon...

The point of a corporation is to make you teflon.

I wouldn't even call it "risk tolerance". Assuming that they weren't complete idiots with the money - which, admittedly, is quite an assumption at times - the backing of a SV VC should theoretically set up a person for a comfortable living for quite a while, regardless of the success of the venture itself.

Psychopathy - the willingness to see humans as little more than what they can give you in your quest for self-fulfillment - is a far better measure for this kind of work.


The Gell-Mann Amnesia effect[1] isn't just about newspapers getting subjects wrong, it can apply to CEOs, too. When you are an expert in some topic and you hear a CEO talk about it and get everything wrong, you should also consider that the CEO is likely wrong about other topics, too.

1: https://www.epsilontheory.com/gell-mann-amnesia/


He's just trying to discourage competition. Good luck with that.

It's important to understand that most CEOs (the good ones and the bad ones) are dilettantes. This doesn't mean you should ignore them, but it's worth running their "takes" past actual subject matter experts who can provide a more nuanced perspective.

There are several weird aspects to this thread. I would like to clarify that I don't know Sam Altman, nor do I care about him, and I do not hold an opinion regarding whether he is a good or bad individual.

First, the title is inaccurate and misleading. It does not pertain to the hopelessness of competing with OpenAI; rather, it addresses the challenges of competing in the training of foundation models, which is not the same.

Moreover, what he stated is likely true. Training a model with a budget of $10 million will likely yield limited results. I appreciate his answer.


I'm surprised no one has pointed out this seems like a classic example of the innovator's dilemma - in the days of HDDs it took years for new technology to come around and upset existing companies. In this information age, everything happens quicker - so a new company did something the entrenched leader said couldn't be done/didn't explore because their path was "the right path forward" - it created a blind spot. It isn't an exact usage, but I think the concept applies.

A long while back I'd read an article about Facebook's incentive to commoditize technology.

It was posted here on HN (years ago) - I can't remember the specifics, but it touched on the subject of Facebook's ulterior motives in open sourcing a lot of their tech/models. At the time the conversation wasn't even about AI

Posting here to see if anyone remembers this article - been trying to figure out what it was for a long time


DeepSeek blowing up like this really showed how tired everyone was of the AI founders acting like they're saving the world.

If they're trying to act like they're "saving the world" they're doing a terrible job. It's honestly gross how much money is going into an industry that at best doesn't touch any real problems for humanity, and at worst amplifies the worst among them.

And I don't really think they sound like they're trying to save the world. They sound like they're trying to get rich.


> an industry that at best doesn't touch any real problems for humanity

I posted this yesterday, but "AI is failing the Indoor Plumbing Test": https://news.ycombinator.com/item?id=42840785

Real innovations are often boring, but they transform human lives. So far, AI has not cleared that bar. I keep hearing that AI may go rogue and exterminate humanity, but for now I'm not even sure what it will enable me to do that I couldn't do before.


One smell test for me is what the LinkedIn and Twitter "technologists" (0) are excitedly and hurriedly talking about. If they're onto it, the tech is probably about to hit the saturation point of the s-curve as the hype lag catches up. This is the sweet spot where early adopters can find ~~bagholders~~ late series investors.

Right now, this part of the internet is obsessed with genAI. Same folks that couldn't stop talking about crypto and web3 a few years ago.

And I could be wrong! But everything about this industry smells wrong. NVidia boosting to obscene highs turning out to be because developers couldn't be bothered to write optimized code. Everyone and their mother talking about AI but I still can't see real world impacts, besides customer service chatbots getting worse. Meanwhile the world burns and people die of curable disease, and we spend money making sand go brr instead.

(0) I really don't have a good term for this that isn't something like "posers." They're the folks you meet at conferences, always giving talks and writing blogs, mostly talking and very little doing. They're the people that are obsessed with technology but can never get below the surface level. This sounds dismissive, and it is, because this kind of person has wasted a lot of my brainpower over the last decade before learning to weed them out (aka - they're not "well aligned" as customers or coworkers).


I don't think you even have to go as far as indoor plumbing. Just ask them if they'd rather have a lifetime free ChatGPT subscription at the cost of never being able to use a washing machine.

This is excellent, thanks

Altman is not an AI founder. He's a business owner and investor. He doesn't even have an undergrad degree! The actual AI founders are the ones building the tech, and OpenAI has chased many of them away.

Chased them away is one way to look at it...another way to look at it is tens/hundreds of millions of dollars luring them away.

> He doesn't even have an undergrad degree!

I don't either. Am I disallowed from calling myself a founder?


> He doesn't even have an undergrad degree!

There are a lot of smart people in the tech industry without undergrad degrees.

The problem is, in the current generation of tech workers, there's two kinds of people without degrees:

1) The grinder who has a knack for whatever part of the field that they work in and made their name through hard work and building a portfolio of work through practical experience

and

2) The (usually) guy who went to a college prep school, got into Stanford, and encountered a SV VC with exponentially more money than sense, who then told the 21-year-old that they were not a college student, but, in fact, Jesus Christ, and promised more money than the average person could comprehend to "pursue their dreams".

Altman falls into the latter category. Actually, a lot of the founder set does. I say (usually) guy because Elizabeth Holmes also falls into this category.


Undergrad degree is neither here nor there.

Bill Gates is a tech founder.


The facts are that AI has been driven by PhD level research. There are vanishingly few cases of people who have not studied the area in depth making serious contributions. The term "AI founder" is usually used to reference the people who have made the technical and mathematical advances, not the person who hired those individuals after the fact to commercialize it.

Altman is a tech founder and a business founder, but he is not an AI founder.


AI is all about show business, but it still requires hard work and a hell of a lot of Oompa Loompas, so Sam Altman should hire Deep Roy, the hardest working man in show business.

https://en.wikipedia.org/wiki/Deep_Roy#:~:text=He%20played%2...

>In referencing his workload during production, director Tim Burton called Roy the "hardest-working man in show biz".

Becoming Oompa-Loompa | Charlie and the Chocolate Factory:

https://www.youtube.com/watch?v=2J7Dg-mUJHE


> the AI founders acting like they’re saving the world

They’ve literally been pitching themselves as capable of destroying it.


Yeah, there are literally no useful applications outside first drafts of code, but don't worry, just a few hundred billion more and it'll cure cancer and solve physics!!

this is hilarious.

hubris is a terrible thing. there is always a bigger fish.


Bigger fish have a lot more than $10 million, though...

They have less motivation to improve in areas that are hindering those limited by $10 million.

scarcity is a great motivator for innovation. abundance breeds complacency.

I meant big in terms of innovation, not budget.


'sama is just maximising his upside in a completely rational way here. He knows he's the big fish, so he attempts to dissuade the little fish before they can even grow. I don't see a problem with him doing that; almost all VCs / CEOs would do the same.

Executive whose dominance is dependent on a moat (cost of training large models), points out the importance of that moat...

Of _course_ he wants to discourage competitors, but that is PR, not technical commentary.


Overconfidence is its own category of system vulnerability.

He's still right, even with DeepSeek; they spent way more than $10m.

He's such an arrogant... character! If the future of AI is in his hands, the future is not bright, but pretty dark!

It really depends on how you spend your money and the shoulders of which giants you can and will stand on for free.

Creepy dude to be honest.

Commenters - watch the video - it's very short.

Makes sense if you think about it - if OpenAI spent, what, $10B on its AI, then it makes sense to say anything less than $1B won't cut it.

You can barely buy a good car for that.

AI is the new bitcoin. It's a buzzword attractive to people who are looking for attractive investments but cannot be bothered to really map the practical utility back to practical application.

Let's be real. We have had autocomplete on the cheap for more than a decade before LLMs. How much greater utility do LLMs provide above autocomplete? Is that difference in value enough to justify the market value of something like OpenAI? I would say no. That is the definition of a bubble.

I suspect some of the misconception is that people envision LLMs will do things like replace developers. That is so misleading. Yes, AI can replace some developers who cannot do much on their own, but that draws entirely incorrect conclusions. Correlation does not imply causation. Just because your developers suck doesn't mean AI is magic. It just means your business couldn't select the right people, or train them, to write software. It also doesn't mean AI will provide something new or better than the people it replaces, which was likely just as true for a more intelligent autocomplete system.


Well, he was wrong about that.

it's funny how this is great news but somehow the headlines are still negative

So much for Open-AI.

First they ignore you, then they laugh at you, then all of a sudden when your rival starts cozying up to the incoming Trump administration — boom.

I guess we delved too deep. Or something.


Sam Altman says a lot of things.

Ok, but please don't post unsubstantive comments to Hacker News. It's tedious and evokes worse from others.

He's been good and bad for this field.

Good:

- Attracted a lot of attention, investment, and engineering to this field. It's unlikely we'd be where we are today without OpenAI shocking the world with its demos.

- Built some salient products like ChatGPT

Bad:

- Tried to get Congress to create a regulatory moat for AI (perhaps when he realized there was no moat for himself)

- Tried to dry up funding for other startups (again to try to create a moat)

- Over-hypes and over-promises what his team has done and what can be delivered (a boon to his fundraising, but making funding frothy and disconnected from fundamentals)

- Engaged in very public-facing drama around his board, his company's nonprofit status, etc.

- Telling Congress he has no equity in OpenAI and is doing everything with no reward, yet driving around in a $5M Koenigsegg Regera supercar and getting a generous comp package put together under the new corporate governance structure.

- Creepy stuff like eyeball-scanning Worldcoin is pitched as essential in a post-AGI world, but transacting on non-revocable biomarkers seems like some 90's televangelist's sermon about the "Mark of the Beast"


> Tried to get Congress to create a regulatory moat for AI

He was fundraising in Congress. The whole "stop me before I shoot grandma" shtick was stupid and aimed at Silicon Valley. The moment states and countries started regulating AI, he was caught flat-footed.


I'm wondering how much of my repulsion to AI is Sam Altman himself. Recently I've been trying to be more open-minded with using LLMs in my workflows and have found some minor performance gains here and there. However the hype around AI is frankly embarrassing.

A big part of my annoyance is the term "AI" itself, which you can use to mean anything, everything, and nothing. It's something that's used to oversell/hype it to grab money... which is fine. But if engineers like us can calm the crazy rhetoric down to "LLMs" or "text completion" or "image-gen API", that's already a leg up in thinking clearly. Like, the question "is AI going to put people out of jobs" -> "is this new crazy good text completion model going to put people out of jobs" already gets us out of the weeds somewhat.

> A big part of my annoyance is the term "AI" itself, which you can use to mean anything, everything, and nothing.

Pretty sure that's the point now.

You're selling a "solution" not a "product". So you don't want to market AI as some nice robot that can move car doors around. You want to market AI as a vibe that fixes enterprise problems. So now all the companies that want your AI both have to pay for AI and also your time to actually build a product for their problem.


Yeah, its definition has been diluted to the point of uselessness now.

I think this extends to a lot of normal people’s instinctive hatred of AI: they dislike it in part because they perceive it as being pushed on them by some pretty slimy people

Yeah. I gave Claude a spin today; it helped me to make a tech spec I'd written a little more concise and professional. But doing so required carefully reading and tweaking the output to maintain accuracy. I don't know if it sped anything up, but it reduced the cognitive effort a bit.

wait until it crosses a few thresholds we're fast approaching: ai that performs at intern level vs phd vs domain-leading expert level; ai that has the full context of what you're doing immediately accessible; ai that can agentically and autonomously navigate the computer environment without stumbling blocks.

i think the difference between intern-level ai and domain-leading expert ai is a few algorithmic adjustments away, using a type of reasoning reinforcement framework (like GRPO) that deals with competing signals in a better way: instead of averaging them, it reasons about which signal should contextually take precedence. it's the difference between, let's say, taking a vote among the common populace on how to build a nuclear power plant, versus finding an expert and figuring out where that expert's decisions should take precedence and where other experts' decisions should override.
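(For context, since GRPO gets name-dropped here: as described in the DeepSeekMath paper, GRPO's core move is to sample a group of responses per prompt and normalize each response's reward against the group's mean and standard deviation, rather than learning a separate value model. A minimal Python sketch of just that advantage step, with made-up rewards:)

    import statistics

    def grpo_advantages(rewards):
        # Group-relative advantage: each sampled response's reward is
        # normalized within its own group, so no learned critic is needed.
        mean = statistics.mean(rewards)
        std = statistics.pstdev(rewards) or 1.0  # guard against zero-variance groups
        return [(r - mean) / std for r in rewards]

    # e.g. rule-based rewards for a group of 4 responses to one prompt
    print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # [1.0, -1.0, -1.0, 1.0]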

square that away and the embarrassing feeling on the promise of ai should wash away.


> ai that performs at intern level vs phd vs domain leading expert level.

These are different things:

- Regurgitating advanced text that has been shifted into a shape matching your query

- Understanding intimately the $100M screw to turn


How do you know that these are different things? They could be, I genuinely don’t know, but I’m not sure where people are getting these kind of confident assertions about what modern architectures could never do. Would you have predicted in 2020 that photorealistic text to image generation was within the scope of current theory?

> Would you have predicted in 2020 that photorealistic text to image generation was within the scope of current theory?

Yes, and I've been working in this area with excitement since about that time.

The physics of optics are well understood. We've been writing ray tracers for forever and coming up with clever hacks like Blinn–Phong, PBR, etc. for ages. SIGGRAPH has always felt like tangible magic. We have had the map in our hands and now we're coming up with new ways to traverse a familiar landscape.

Reasoning is an undiscovered country. There are lots of exciting claims being made, but nothing concrete.

I expect lots of advancements in signal processing, spatial computing, and beyond because those things are obvious and intuitive.


The mathematical definition of a language model is the probability distribution over tokens that follows the previous context. It is literally predicting the most probable response, which, while it may often match the correct response, is not a one-to-one mapping.
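Spelling that standard definition out: an autoregressive LM factorizes sequence probability into next-token conditionals, and greedy decoding just takes the mode of each conditional, which need not coincide with the "correct" continuation:

    % probability of a sequence under an autoregressive language model
    p_\theta(x_1, \dots, x_T) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t})
    % greedy decoding picks the most probable next token at each step
    \hat{x}_t = \arg\max_x \, p_\theta(x \mid x_{<t})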

welcome to earth where 98% of forum/political interactions are confident assertions from nowhere used to dismiss people. enjoy your stay ;).

obviously there's also a multimodality gap to be overcome before intimately understanding the $100M screw to turn, but i suspect most reasoning that matters has already been translated into and is embedded in words. i wouldn't underestimate the amount of useful knowledge that comes from embedding advanced texts into an LLM. the challenge is contextually hierarchalizing it (a matter of reasoning) and decoding it back into reality (words are dimensionally squished encodings of reality).

Would be happy to go through my life hearing no more from him.

Also not being reminded that technology indistinguishable from magic will happen in the next 5 years, or whatever carrot-on-a-stick money grab.

The forest is cold, but peaceful.

I say a lot of things, Monica. I say a lot of things.

https://youtu.be/XM7_eqtljUg?si=d27SGVUiZ7oZJCA3


At least he doesn't do a lot of "gestures".

The exact post I was about to make, glad to see it at the top, lol.

I keep reading people on the internet mistrusting him with a lot of confidence, but I haven't heard of any tangible evidence that he's lying about anything.

Can you name a couple of examples of the things he said that we know are lies? Or is it all just people making uninformed assumptions or being snarky?


The main thing is OpenAI itself. Altman long pitched it as an open non-profit and raised hundreds of millions on that. Turns out it's not open, and it has now been positioned as a for-profit entity owned by a non-profit that is trying to convert itself into a for-profit.

On top of that he repeatedly used the fact that he had no equity to push back on criticism and to appear more altruistic and trustworthy. Turns out, that was just part of the con.

Sam Altman is currently being sued by his sister for raping and sexually abusing her over 9 years. He is denying this; I guess that counts as lies.

https://www.cnn.com/2025/01/08/business/sam-altman-denies-si...


Changing his company from a non-profit to a for-profit once he saw the $$$ seems pretty untrustworthy to me.

OpenAI is not changing to a for-profit. It[1] always was a for-profit entity, owned by a non-profit.

The big change is that the non-profit no longer owns the entirety (or even a majority) of the for-profit entity; it is now a minority owner.

[1] OpenAI as we know it today. OpenAI was once just the non-profit entity, but back then was just an AI think tank. In 2019, it formed the for-profit corporation as a subsidiary to raise money and build the tech that we now know about (and make money from any products built on that tech).


Secretly funding the FrontierMath benchmark, with contributors unaware of the COI, and having access to the questions and answers with only a "verbal agreement" not to train on them.

https://techcrunch.com/2025/01/19/ai-benchmarking-organizati...


I don't know if this is technically a lie, but he said that he'll purchase electricity made from fusion power within the next couple of years.

I don't know if he believes that himself. But I can tell you with extreme high confidence that this is not going to happen, and it's not even close to anything remotely realistic.


The literal bait-and-switch of "Oh hey, OpenAI is nonprofit, I promise guys!" and then turning around and switching it to for-profit.

That alone is enough to walk away from him.


One doesn't have to be a liar to be untrustworthy

Why was he removed from leadership positions at Loopt, YC, and OpenAI?

https://www.disconnect.blog/p/kara-swishers-story-about-sam-...


He says a lot of things that are untrue... but this... it's true. It's why DeepThink makes no difference.



He seems to have mastered the art of being unethical in all the right ways SV investors and incubators like. Many embrace his style of business, others hold their nose, not enough reject.



I think a lot of people find him charming. I knew it when I saw PaulG listing him alongside Steve Jobs in "Five Founders".

https://www.paulgraham.com/5founders.html


Charming? His vocal fry is unbearable to listen to. Worse than nails on a chalkboard.

https://www.youtube.com/watch?v=MTJZpO3bTpg

TSMC management thought he was an absolute clown when he asked for $7 Trillion. They call him "Podcasting Bro".

https://finance.yahoo.com/news/tsmc-rejects-podcasting-bro-s...

He turned OpenAI into ClosedAI. He is a lying greedy sociopath.

It's possible (or at least I hope) he will fade into irrelevancy within the next few years, replaced with a less delusional and less insufferable CEO.


I never met the guy. I was just trying to explain the adulation... He was doing very well even before ChatGPT.

> He turned OpenAI into ClosedAI. He is a lying greedy sociopath.

And Steve Jobs shut down OpenDoc, doesn't mean he wasn't charming:

https://www.youtube.com/watch?v=oeqPrUmVz-o


Steve Jobs did not sound like a caricature of a gay man or a valley girl (no vocal fry, very charismatic speaker) and was never laughed out of the room in negotiations (one of the strongest negotiators the world has ever seen).

Steve Jobs was charismatic. Sam Altman is the opposite of charismatic.


My point was just that anything he did with open source is orthogonal to charisma, even if you also think he isn't charismatic. I haven't gotten any charismatic vibes off him, though he does seem to have an intense stare at interviewers that doesn't really come off as charismatic, but maybe kind of Elizabeth Holmes-y, and she obviously was very convincing to people and effective at manipulation. To me Holmes didn't seem charismatic either, but there was definitely something that convinced other people.



"Steve Jobs did not sound like a caricature of a gay man"

Yes, Steve Jobs did not sound like a caricature of that voice.

https://youtu.be/H5Tv4V9uxDo?t=46

That's how it is in the real world. I think you are the homophobic one here.

Some gay men sound like that, but not all.

The current CEO of Apple is gay but doesn't have that voice. He speaks clearly. Easy to listen to.

https://www.youtube.com/watch?v=E7bpYaxgC5o


Some people around here think Elon Musk is good at public speaking. I wouldn't pay too much attention to the opinions of nerds on topics like charisma.

The tech industry, simply put, is not sober. Investors (and customers) are easily influenced by personalities and marketing stories, more than they are technical specifications.

I'm dealing with this right now as a staff engineer. We are being asked to "bake in AI" into our product, without any idea what it means or what the end goal is.

To be fair, that's valid not just in tech but everywhere. Unless you're one of the rare examples of a polymath, there is no way you can know enough to judge any given product (class) on its merits - hence all the SEO fraud around "best 10 of <class>".

Maybe he comes across better in person? https://www.paulgraham.com/fundraising.html

> Sam Altman has it. You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king. If you're Sam Altman, you don't have to be profitable to convey to investors that you'll succeed with or without them. (He wasn't, and he did.) Not everyone has Sam's deal-making ability. I myself don't. But if you don't, you can let the numbers speak for you.


To me, this isn't an endorsement.

He's also genuine according to this, or maybe they wanted to emphasize "seems like":

> I just saw Sam Altman speak at YCNYC and I was impressed. I have never actually met him or heard him speak before Monday, but one of his stories really stuck out and went something like this:

> "We were trying to get a big client for weeks, and they said no and went with a competitor. The competitor already had a terms sheet from the company were we trying to sign up. It was real serious.

> We were devastated, but we decided to fly down and sit in their lobby until they would meet with us. So they finally let us talk to them after most of the day.

> We then had a few more meetings, and the company wanted to come visit our offices so they could make sure we were a 'real' company. At that time, we were only 5 guys. So we hired a bunch of our college friends to 'work' for us for the day so we could look larger than we actually were. It worked, and we got the contract."

> I think the reason why PG respects Sam so much is he is charismatic, resourceful, and just overall seems like a genuine person.

https://news.ycombinator.com/item?id=3048944


This is a bit absurd.

Why would he tell someone how to compete with him?

His answer was, I'd say, perhaps meant to be facetious, considering the rather rude question posed.



