
This is pure speculation on my part, but I think at some point a company's valuation became tied to how big its compute is, so everybody jumped on the bandwagon.





I don't think you need to speculate too hard. On CNBC they are not tracking revenue, profits, or technical breakthroughs, but how much the big companies are spending (on GPUs). That's the metric!

I probably don't have to repeat it, but this is a perfect example of Goodhart's Law: when a metric is used as a target, it loses its effectiveness as a metric.

If you were a reporter who didn't necessarily understand how to value a particular algorithm or training operation, but you wanted a simple number to compare how much work OpenAI vs. Google vs. Facebook are putting into their models, yeah, it makes sense. How many petaflops their datacenters are churning through in aggregate is probably correlated with the thing you're trying to understand. And it's probably easier to look at their financials and correlate how much they've spent on GPUs with how many petaflops of compute they need.
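Something like this back-of-envelope, say (every figure here is a made-up ballpark assumption on my part, not anyone's actual numbers):

    # Rough spend -> compute conversion, using assumed ballpark figures.
    capex_usd = 10e9           # hypothetical annual GPU spend
    price_per_gpu = 30_000.0   # assumed average cost of an H100-class accelerator
    pflops_per_gpu = 1.0       # assumed dense FP16 throughput per accelerator, in PFLOP/s

    gpus = capex_usd / price_per_gpu
    print(f"~{gpus:,.0f} GPUs, ~{gpus * pflops_per_gpu:,.0f} PFLOP/s of headline compute")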

But when your investors are giving you more money based on how well they perceive you're doing, and their perception is not an oracle but is instead directly based on how much money you're spending... the GPUs don't actually need to do anything other than make number go up.


This feels like one of those stats they show from 1929 and everyone is like “and they didn’t know they were in a bubble?”

> but how much the big companies are spending (on gpus). That's the metric!

Burn rate based valuations!

The 2000s are back in full force!


"But tulip sales keep increasing!"

They absolutely are tracking revenues/profits on CNBC, what are you talking about?

Matt Levine tangentially talked about this on his podcast this past Friday (or was it the one before?). His point was that valuing these companies by their compute size makes some sense, since those chips are very valuable. At a minimum, the chips are an asset that can act as collateral.

I hear this a lot, but what the hell. They're still computer chips. They depreciate. The short supply won't last forever. Hell, GPUs burn out. It seems like using ice sculptures as collateral, and then spring comes.

If so, wouldn't it be the first time in history that more processing power went unused?

In my experience, CPU/GPU power is used up as much as possible. Increased efficiency just leads to more demand.


I think you're missing the point: the H100 isn't going to remain useful for long. Would you consider Tesla or Pascal graphics cards collateral? That's what those H100s will look like in just a few years.

Not sure I do tbh.

Any asset depreciates over time. But they usually get replaced.

My 286 was replaced by a faster 386, and that by an even faster 486.

I’m sure you see a naming pattern there.


> Any asset depreciates over time.

That's why "those chips are very valuable" is not necessarily a good way to value companies - it's only a good one if they can extract the value from the chips before the chips become worthless.

> But they usually get replaced.

They usually produce enough income to cover depreciation, so you actually have the cash to replace them.


My 1070 was replaced by… nothing; I moved it from a Haswell box to an Alder Lake box.

Given that inference time will soon be extremely valuable with agents and <thinking> models, H100s may yet be worth something in a couple of years.


And that's why such assets represent only a marginal part of valuation. (And if you look at the accounting, this depreciation is usually done over three years for IT hardware, so most of these chips have already lost half of their accounting value on the balance sheet.)
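A quick straight-line sketch, with an assumed purchase price and an assumed age, just to illustrate the accounting point:

    # Straight-line depreciation over the usual 3-year IT hardware schedule (assumed figures).
    purchase_price = 30_000      # hypothetical cost of one accelerator
    useful_life_months = 36
    age_months = 18              # assumed age, i.e. halfway through the schedule

    book_value = purchase_price * max(0.0, 1 - age_months / useful_life_months)
    print(f"Book value after {age_months} months: ${book_value:,.0f}")   # -> $15,000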

> My 286 was replaced by a faster 386 and that by an even faster 468.

How much was your 286 chip worth when you bought your 486?


Yeah, exactly! I've got some 286, 386, and 486 CPUs that I want to claim as collateral!

That is the wrong take. Depreciated and burned-out chips get replaced, and total compute capacity typically increases over time. Efficiency gains are also calculated and projected over time. Seasons are inevitable and cyclical. Spring might be here, but winter is coming.

Year-over-year gains in computing continue to slow. I think we keep forgetting that when talking about these things as assets. The thing controlling their value is the supply, which is tightly controlled, like diamonds.

They have a fairly limited lifetime even if progress stands still.

Last I checked AWS 1-year reserve pricing for an 8x H100 box more than pays for the capital cost of the whole box, power, and NVIDIA enterprise license, with thousands left over for profit. On demand pricing is even worse. For cloud providers these things pay for themselves quickly and print cash afterwards. Even the bargain basement $2/GPU/hour pays it off in under two years.
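Roughly like this (the box cost is an assumption on my part, not a quoted price, and it ignores power, labor, and idle time):

    # Payback on an 8-GPU box rented out at the "bargain basement" rate, with assumed inputs.
    box_cost = 250_000            # assumed all-in cost of an 8x H100 box
    gpus = 8
    rate_per_gpu_hour = 2.0       # the $2/GPU/hour figure above
    hours_per_year = 24 * 365

    annual_revenue = gpus * rate_per_gpu_hour * hours_per_year   # ~$140k/year
    print(f"Payback: {box_cost / annual_revenue:.1f} years")     # ~1.8 years at these numbers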

Labor! You need it to turn the bill of sale into a data center and keep it running. The bargain basement would be even cheaper otherwise...

Honestly, I don't fully understand the reason for this shortage.

Isn't it because we insist on only using the latest nodes from a single company for manufacture?

I don't understand why we can't use older process nodes to boost overall GPU making capacity.

Can't we have tiers of GPU availability?

Why is Nvidia not diversifying aggressively to Samsung and Intel, no matter the process node?

Can someone explain?

I've heard packaging is also a concern, but can't you get Intel to figure that out with a large enough commitment?


> Isn't it because we insist on only using the latest nodes from a single company for manufacture?

TSMC was way ahead of anyone else in introducing 5nm. There's a long lead time to port a chip to a new process from a different manufacturer.

> I don't understand why we can't use older process nodes to boost overall GPU making capacity.

> Can't we have tiers of GPU availability?

Nvidia does do this. You can get older GPUs, but more performance is better for performance-sensitive applications like training or running LLMs.

Higher performance needs better manufacturing processes.


> Year over year gains in computing continue to slow.

This isn't true in the AI chip space (yet). And so much of this isn't just about compute, but about memory.


From a per-mm² performance standpoint, things absolutely have slowed considerably. Gains are primarily being eked out via process advantages (which have slowed down) and larger chips (which have an ever-shrinking ceiling depending on the tech used).

Chiplets have slowed the slowdown in AI, but you can see in the gaming space how much things have slowed to get an idea of what is coming for enterprise.


> It was a good way to value these companies according to their compute size since those chips are very valuable.

Are they actually, though? Presently, yes, but are they actually driving ROI? Or are they just an asset nobody is meaningfully utilizing, one that helps juice the stock?


I asked this elsewhere, but, I don't fully understand the reason for the critical GPU shortage.

Isn't it because NVIDIA insists on only using the latest nodes from a single company (TSMC) for manufacture?

I don't understand why we can't use older process nodes to boost overall GPU making capacity.

Can't we have tiers of GPU availability: some on cutting-edge nodes, others built on older Intel and Samsung nodes?

Why is Nvidia not diversifying aggressively to Samsung and Intel, no matter the process node?

Can someone explain?

I've heard packaging is also a concern, but can't you get Intel to figure that out with a large enough commitment?

(Also, I know NVIDIA has some capacity on Samsung. But why not go all out, even using Global Foundries?)


That's a great way to value a company that is going bankrupt.

But I'm not going to value an operating construction company based on how many shovels or excavators they own. I'm going to want to see them putting those assets to productive use.


If you are a cloud provider renting them out, sure.

Otherwise, you'd better keep them humming while you try to find a business model, because they certainly aren't getting any newer as chips.


So, "No one was ever fired for ... buying more server infrastructure."

Walmart has massive, idle datacenters full of running machines doing nothing.


