This thread by Guido Appenzeller from a16z's AI team is a strong rebuttal of the Sequoia article.[1] I personally found Sequoia's argument a bit strange, since it not only misunderstands the unit economics of the industry at a basic level, but further misunderstands the ways that AI will be "just another component" of software stacks — not unlike databases, high-level programming languages, network infrastructure, etc.
Seriously, the author just throws out “for every $1 spent on a GPU, roughly $1 needs to be spent on energy costs to run the GPU in a data center” without bothering to share even a semblance of assumptions. Then we get a casual “let’s assume they need to earn a 50% margin.” What? Why?! Also, at these scales and timeframes, the time value of money becomes significant.
This is not a margin question. It's a question of the extreme energy inefficiency of current AI: even the most energy-efficient GPUs burn through their purchase price in energy costs in under a year (if run 100% of the time).
FPGAs and ASICs are 3-4x better on energy efficiency, but their cost per installed FLOP is also 3-4x higher.
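To make the disputed arithmetic concrete, here is a rough back-of-envelope sketch. Every input (GPU price, board power, PUE, electricity rate, depreciation window, and the 3.5x efficiency/cost multipliers) is an illustrative assumption, not a figure from Sequoia or from the thread; which way the ratio comes out depends entirely on what you plug in.

```python
# Back-of-envelope check of the "energy cost vs. hardware cost" claims above.
# All inputs below are illustrative placeholders; adjust for your own accelerator,
# data center, and region.

def lifetime_energy_cost(power_watts, pue, price_per_kwh, years, utilization=1.0):
    """Energy cost of running one accelerator for `years` at the given utilization."""
    hours = years * 365 * 24 * utilization
    kwh = power_watts / 1000 * pue * hours
    return kwh * price_per_kwh

# Illustrative inputs (assumptions, not thread figures):
gpu_price = 30_000        # $ per high-end data-center GPU
gpu_power = 700           # W board power
pue = 1.3                 # data-center overhead (cooling, networking, ...)
electricity = 0.10        # $ per kWh
lifetime_years = 4        # depreciation window

energy = lifetime_energy_cost(gpu_power, pue, electricity, lifetime_years)
print(f"GPU: ${gpu_price:,} hardware vs ${energy:,.0f} energy over {lifetime_years} years "
      f"(energy/hardware ratio {energy / gpu_price:.2f})")

# Same arithmetic for a hypothetical accelerator that is ~3.5x more energy-efficient
# but ~3.5x more expensive per installed FLOP (the FPGA/ASIC trade-off above):
asic_price = gpu_price * 3.5
asic_energy = lifetime_energy_cost(gpu_power / 3.5, pue, electricity, lifetime_years)
print(f"ASIC-ish: ${asic_price:,.0f} hardware vs ${asic_energy:,.0f} energy "
      f"(total ${asic_price + asic_energy:,.0f} vs GPU total ${gpu_price + energy:,.0f})")
```

Whether the trade-off in the previous paragraph pays off hinges on how large lifetime energy spend is relative to the hardware outlay, which is exactly the number the Sequoia piece asserts without showing its assumptions.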
And money is expensive now, so we can't assume a $200B investment will be repaid at low rates over, say, 20 years. A much more realistic scenario is interest around 20%, which means a much higher margin is needed to pay off the debt within 4-5 years at most.
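For a sense of what that financing difference means per year, here is a small annuity calculation. The 20%-over-5-years case is the scenario described above; the 5%-over-20-years baseline is an illustrative assumption added for comparison, not a figure from the thread.

```python
# Level annual payment needed to fully amortize the same principal
# under two financing scenarios.

def annual_payment(principal, rate, years):
    """Annual payment that fully amortizes `principal` at interest `rate` over `years`."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 200e9  # $200B, the figure mentioned above

for rate, years in [(0.05, 20), (0.20, 5)]:
    pay = annual_payment(principal, rate, years)
    print(f"{rate:.0%} over {years} years: ${pay / 1e9:.1f}B per year, "
          f"${pay * years / 1e9:.0f}B repaid in total")
```

The total repaid ends up in the same ballpark in both cases, but the short, high-rate scenario demands roughly four times the annual payment, which is where the "much higher margin" requirement comes from.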
If only there were a non-X-thread version of this that was readable (threads are now hidden from people who aren't logged in).
Of course, the author is free to publish their views in any way they see fit. But published this way it gets a "meh" from me at most (and even that only because of the intro in this post), since it's functionally hidden, as good as if it didn't exist.
[1]: https://x.com/appenz/status/1704915400096649696