Hacker News

I want to believe DeepSeek R1 is legit… but the more details emerge, the more it feels like something isn’t right.

The claim that R1 was trained for under $6M on 2,048 H800 GPUs always seemed suspicious. Efficient training techniques can cut costs, sure—but when OpenAI, Google, and Meta are all burning hundreds of millions to reach similar benchmarks, it’s hard to accept that DeepSeek did it for pennies on the dollar. Then Alexandr Wang casually drops that they actually have 50,000 H100 GPUs… what happened to that “low-cost” narrative? If this is true, it's not efficiency—it’s just access to massive hidden compute.

The stolen OpenAI data theory is another red flag. OpenAI researchers have been hit by multiple security breaches in the last few years, and now we have a former OpenAI engineer found dead under very weird circumstances. Coincidence? Maybe. But corporate espionage in AI isn’t some sci-fi plot—it’s very real, and China has been caught running large-scale operations before (Google exfiltration cases, the ASML trade secret theft, etc.).

And then there’s the CCP-backed propaganda angle. This part is almost too predictable—China hypes up a “homegrown” breakthrough, gets state media to push it as “proof” they’ve surpassed the West, then quietly blocks foreign scrutiny. Lei pointed out that DeepSeek won’t even let U.S. phone numbers register. Why? If R1 is truly open-source and transparent, why limit access? We’ve seen this before with ByteDance, Alibaba, etc.—government-approved success stories that follow a controlled narrative.

But despite all that skepticism… R1 is real, and the performance numbers do exist. Whether they’re running stolen training data or smuggled GPUs, they’ve built something that competes with OpenAI’s o1. That’s still impressive. The question is how much of this is a real technological leap vs. how much is state-backed positioning and/or cutting corners.

So what happens next?

If DeepSeek is serious, they need outside audits: actual transparency, full datasets, external verification. Not just another "trust us" moment.

The U.S. needs better export control enforcement. We're seeing massive loopholes if China can stockpile 50K H100s despite all the restrictions.

AI labs (OpenAI, Anthropic, etc.) need better security. If OpenAI's data really did leak, this won't be the last time.

I don't think R1 itself is a scam, but the surrounding story feels curated, opaque, and suspiciously convenient. Maybe DeepSeek has built something remarkable, but until they open the books, I can't take their claims at face value.




There's a lot of thrashing about today on this subject. People have lost money, and that's what happens. Spreading uncertainty may help them recover a few dollars, and theories are free.



Absent further research, no audit could claim anything reliable. Auditors will write a report for whoever pays them.

It certainly is a question of trust, but there is no auditor here.




