Nvidia is about to pass Apple in market cap (reuters.com)
128 points by retskrad 4 months ago | 148 comments



AI demand doesn't look like it's going anywhere in the next year, and their revenue is at least in the ballpark (26b vs 90b for apple). That said, price is usually dictated by the delta in growth. I think it's unrealistic to expect another 262% YoY growth over the next 12 months for Nvidia, especially with a lot of their upper-limit being determined by TSMC's capacity. In addition, there are significant pushes in motion to get processing on-device, both for security reasons and price reasons. I can't see these earnings being sustainable for nvidia over the long run.


Even if current AI progress caps out at GPT-4 level capability, we are only scratching the surface of how much application integration there is going to be. Until AMD or someone catches up, inference could easily 10x demand from where it probably currently is.


I think you're right that demand for inference is likely to grow by orders of magnitude in the medium term, but I don't know that Nvidia's lead over their competitors is likely to remain as strong as it is today. That is, they'll probably hold their dominant position, especially for training, but I'd expect their lead on inference to shrink by a lot. Companies pretty much understand how to make inference engines. Expect AMD, Intel, Qualcomm, Apple and others to come out with products geared at fast and efficient inference. Not to mention Groq and other dedicated chips.


Microsoft just announced that they are getting better price/performance ratios with AMD for inference. It's happening already. Most of the GPU thirst is for training at the moment, but eventually most of the workload will be inference. And that depends on a big if: whether users are willing to pay for LLM-generated content as much as we're hyped about it right now.


I used to think this. I still mostly think this (and have frankly lost a lot of money in opportunity cost because I thought it), but I've been astonished at how long AMD is taking to pivot, such that there are startup competitors like Groq that might honestly beat them out.


And let's not forget, this is a comparison between NVDA and AAPL.

It's not just a matter of NVDA's upside.

AAPL may have peaked, and, in light of the EU breaking up its walled garden, may be on the way down.


Well, then let's hope the EU sets its sights on NVDA and CUDA, another walled garden.


CUDA is a technology investment that gives Nvidia a moat, but there is nothing exclusionary about it. Nvidia invested in tooling years ago ... it will just take time for others to catch up.


NVDA banned CUDA translation layers, and that's pretty exclusionary because it denies interoperability. There was already a discussion about it here: https://news.ycombinator.com/item?id=39592689


Well, they tried to legally prohibit it, but as people in that thread pointed out, it's legally irrelevant in most developed countries ... so another hardware maker can fund a (legally) independent team in say France (most favorable reverse engineering laws) to break it / copy it / do whatever they like ... and then open source the output for the rest of the world.

It's even dubiously legal in the US ... but there it would be a war of legal fund attrition and Nvidia has deep pockets.

Still - break it in France, give it away ... US laws go poof.

It's really just the ongoing development of the software ecosystem that gives them their moat ... but it's a good technical moat because they steer future dev and others play catch up.


Actually good points. I just hope other players catch up.


> I can't see these earnings being sustainable for nvidia over the long run.

We've barely scratched video and volumetric video training, even if text stuff hits a data wall I think demand will stay high for quite some time due to that.

The move to on-device inference actually takes more training, you need to train it longer to make it still perform well at edge parameter counts (with lots more training, Llama 3 8B is competitive with 70B). Nvidia is already more specialized towards training and has lots of competitors in inference.

Training tends to have steps that don't just involve matmul; Jim Keller said Tesla Dojo ended up having problems from being too rigid, whereas GPUs have been able to adapt to new training regimes better. Google's TPUs don't seem to have fallen into that trap as much; maybe they just iterated much faster?


*llama 2 70B


My read is that investing into AI is the trendy thing to do in 2024. No CEO will have to justify pouring money into AI; they would probably be seen as negligent if they don’t. It’s just the zero-risk strategy at this point in time. That’s not to say that there is no value to be realized, it’s more a question of how much value and at what price. If in the next 12-24 months we find out that AGI, robotics and the end of labor is not as imminent as some have hoped and that perhaps not every car dealership needs an AI chat assistant, and the first companies announce reductions in their AI spending, the dominos might begin to fall. Or not, let’s see.


And yet, once the money dries up after chasing another pointless hype, the "visionary" CEO will still keep their job - while starting yet another round of layoffs.


If anything, the post you are responding to explains why CEOs have to think this way, they have no real choice. No one really knows where AI is going, yet staying on the sidelines is not an option.


The job of a CEO is to make sure the company is viable and liquid long term, and to communicate their thinking.

"Society made me do it" is not a strategy.


In my opinion it's still to be seen where LLMs are going in general. They are cool, sure, but I don't see their value measured in trillions in 2024. Then again, my prediction and investment aptitudes are disastrous.


Replacing search is enough to warrant trillion dollar valuation for LLMs, question is just if they can get that good.

Note that you need to replace search on the lucrative queries, stuff like wanting to buy a product or wanting to know about nearby open stores. Those are the queries Google focuses on the most, and they will be the hardest to replace it on.


> replacing search

But since Google is adding LLMs to search, an LM competitor still might not win.

> lucrative queries

Also the hardest for an LM.

I predict this won’t end well for anyone. Google will destroy a bunch of their own value preemptively self-disrupting and burning a ton of cash only for everyone to discover that it’s not lucrative to use an AI to tell people the distance to the moon. Viagra and insurance queries will stay valuable and be least useful for a LM to disrupt. Google will maintain their market share, at the expense of more cost per query, so they’ll purge more of the loss-leaders across their consumer offering. We’ll all discover that an LLM isn’t great for search and better integrated into Gmail and similar.


> Replacing search is enough to warrant trillion dollar valuation for LLMs, question is just if they can get that good.

I would add that it needs to be more than just good, it also needs to be cheaper for mass adoption.

Spoiler: classic search is at least cheaper (old post, but relevant: https://www.semianalysis.com/p/the-inference-cost-of-search-...). The thing is, it cannot get that much cheaper without disrupting hyperscalers' earnings expectations.


They're basically already that good in my opinion, except for a few subsets of realtime + hyperlocalized information. I feel confident that is addressable.


Profits are even closer: 15b vs 24b.

Nvidia has higher margins, so it only needs that much growth to become the world's most profitable company.


Is that “ballpark”? It’s a 3x multiple.

I do agree that growth will cap out. A lot of the massive growth is due to the previous fiscal year being effectively a crash. It’s impressive that they’ve recovered but the same quarter next year will not have the benefit of a bad quarter to compare against.


Unless AI demand increases? Which seems likely, or at least possible.


Or decrease. It looks like a $200B industry is already sucking up funds, but no end-user app with a strong revenue stream is on the market yet.


And then consider what AI content spam might do to ad revenue, which many AI-adjacent companies depend on.


My non-finance spider senses tell me this will be the most epic market crash of all time, does anyone think otherwise?


The retail bulls online are now refusing to listen to that narrative. But it seems like a large correction of NVidia’s price is in the post. Nvidia added $480 billion of market cap in the last week alone. It’s getting pretty absurd. Plus Moore’s law will catch up to Jensen at some point and we’ll be running LLMs on Raspberry Pis. It’s Cisco/Crypto/Tesla all over again. The price is just going up and staying up for the time being but be ready when it’s time to short it back down. It’ll be epic.

P.s. - something weird happened yesterday in that the whole market was down except for nvidia - it’s like Nvidia is almost turning into a black hole, absorbing all the value and money around it, because why invest in anything else? Its market cap is now also bigger than the entire German stock market. This cannot be a good thing. Particularly because I’m quite tech-heavy and AI still isn’t making a huge impact in my life. Software yeah, but the hugely resource intensive LLMs and agents still seem like a curiosity, and I’m not sure where the profits are yet, which is like the dotcom boom all over again. I’ve no doubt that AIs will replace huge numbers of workers, but Jensen seems to envision a future of endless giant data centres running powerful AIs to do all our driving and being our assistants, but I think by the time we get there we’ll probably be looking at around the same number of data centres we have today, and the self-driving will always have to be done in-car for reasons of latency.


Meta spent ~1/3 of its capex last year on GPUs. Are they going to spend more than 11B+ in future years? The big few companies that make up a huge portion of NVDA's revenue could just hold spend steady and NVDA will contract [1].

NVDA is also in a weird spot where their primary customers are all working to build their own thing to replace/supplement NVDA. A lot like Apple did with ARM, eventually pushing Intel completely out. As algos improve and Moores law marches forward, companies will need fewer of the 'best' GPUs and get by with their in-house versions.

[1]https://www.fool.com/investing/2024/03/14/the-scariest-nvidi...


As long as CEOs keep trying to convince the public/investors that probabilistic word generators are a type of intelligence, the biggest and strongest GPUs will continue to be needed to show how much closer you are to AGI

If they ever give up that lie, we could see a contraction


I’ll give it 18 months. Maybe 12.


> A lot like Apple did with ARM, eventually pushing Intel completely out.

Apple has the unique advantage of controlling the entire stack. Anyone who wants to compete with NV on anything AI has the disadvantage of NV's full stack - OS drivers, acceleration drivers, userland libraries, compute kernels - being extremely well developed, whereas everyone else's is a bunch of crap.


We are still early in the slow process of AI/ML percolating from the FAANGs all the way down to your neighborhood boutique software shop. Even if the hyperscalers stop buying Tesla, the demand lower in the chain will just grow.


Yeah but what else did Meta spend a ridiculous amount of capex on recently, can you remember? The tens of billions it spent on the metaverse were dead money and its AI ambitions could amount to the same given Meta really doesn’t have much to do with its cash other than keep Facebook and Insta on 99.999999999 uptime.


Sure, and Meta might keep blowing 11B+/year on new GPUs. But, NVDA needs growth. They need Meta and MS and AWS and Google to spend 20%+ more/year on new GPUs and/or find new customers who can/will drop Billions/year on new GPUs. We're talking staggeringly large $.

And that's not even mentioning that anyone with that sort of money is already developing their own GPUs.


Good take, I would just add that if it wasn't Nvidia it would be something else that would be blackholing in this market. Investors are obsessed with (or required to?) back short-term moonshots. It's not a great look for the health of the financial system.


It’s a bubble, but people have been saying it’s a bubble since before we hit the levels it will probably collapse to when the bubble pops. It might not even pop down below, say, where they were at the start of the year, even with another 6 months to run upwards in the meantime.

It’s pretty easy to just be a perma-bear, especially when it’s a company like nvidia with a dedicated hater club. it’s not a coincidence that the company with the next-biggest hater club is… apple.

https://paulgraham.com/fh.html

The same people who were all in on $AMD-to-the-moon have been saying nvidia will pop for over 18 months at this point. People have been saying that MI200 will crash Nvidia’s party, no wait MI250X (it’s dual chip but presents like one, except it turns out that was a marketing lie in the datasheets!), no MI300X, wait MI400…

They’ve been saying that google and Amazon and AMD are gonna crash the party for years and years now, and we still aren’t even close to literally anyone even having a viable alternative to CUDA despite those years of internet hot-air. Apple is literally the closest and they still only have traction on inference, Nvidia’s the only game in town for training still, which is where all the money is.

Simple question, if you’re so sure: how many more years until anyone can use ROCm and not have to think about it? That’s something that only two companies offer, and both of them are in the title of this thread.

Again, reminder that the foremost contender for the third place title, is shipping AI-branded (“Ryzen AI Ultra 300”) laptop CPUs that require vendor enablement to use the neural cores… you have to not only build all the software, train it hard so that edge models are viable (oops, more money for nvidia) but then also you have to convince asus and MSI and clevo and sager to support it in their drivers on an ongoing basis. That’s the third place contender, and they don’t even have the software to run on it in the first place yet.

As much as people can’t help but trip over themselves predicting a pop any day now (for the last 18 months!) literally the third place contender is only just getting rolling, they are probably 10-15 years behind right now in terms of ecosystem. Might be able to do it in 5 if they hurry, but they also have to convince everyone else not to do their own stuff too - if everyone else runs around like a chicken with their head cut off, nvidia still wins.

Again, the only real ecosystem threats right now are apple and sony, and sony is starting from zero too (but they have a legitimate ecosystem unlike say AMD or intel). Intel/SyCL is a technical threat but it’s powerless without a social /ecosystem consensus behind it.

If it’s not a competitor taking over, that leaves a general collapse in AI, and frankly that’s just wishcasting at this point. There’s too much obvious value being delivered for it all to collapse to zero like some people are hoping. Again, not expecting - hoping.

The hater club never realizes they’re haters. That self awareness is the first thing to go. Same as with the apple haters who spent the last 4 years finding any and every reason to try and dunk on apple silicon lol.


Well personally, as an investor I’m more incredulous at the price. I hold both nvidia and apple and I’ve made good money from both. It’s worth looking at all the reasons for the price going up and what happens when the price goes down. So I’m definitely not an nvidia hater because I think they’re doing great things and clearly a brilliantly managed company, it’s just that if the market cap gets too big then there’s ultimately only one direction it can then go, and it’s all predicated on an AI future that I’m not certain is going to happen like it might in the best-case scenario given how we’re constructing LLMs and how their corpuses of data are mined out already. My other main concern is towards how fickle human beings are - limitations with AIs are likely to cause a huge backlash once they become annoying or we feel like we’re being fobbed off by corporates forcing us to interact with an AI agent that can only stick to a strict corporate line. It’ll be boring, and people don’t like being bored. I’m not convinced that the current approaches to AI are going to deliver the goods that the graphs of progress are promising. And what’s more is that it already seems like everyone who wants to use ChatGPT or Claude is able to use it, and in three years the hardware cost of running all that will have dropped by 4x, not run away as some nvidia-boosting exponential.


It makes me think a little bit of automobile companies where Tesla's market cap is somehow bigger than Mercedes, Stellantis (Fiat, Dodge, Citroën, Alfa Romeo, Jeep, Opel, Peugeot, Ram, Maserati, Chrysler etc.), Porsche-Volkswagen (Audi, Bugatti, Lamborghini, Saab, Bentley, Porsche, VW, Škoda, Seat, Ducati, and a bunch of truck companies), BMW (Mini, Rolls-Royce), BYD, Honda and Ford ...combined.


At its peak price it was worth more than the whole US auto industry while making a touch under 500k cars (and the dashes were falling off), compared to the 6.5 million from Ford, GM, Dodge, US Toyota . . .

Made no sense.

And, as they've shown, making a car is easy, manufacturing a car at scale is hard, and their competition knows how to do the hard part. At this point, US automakers have shown they can build electric cars, so Tesla is still overvalued.


Can US auto makers (or even non-Tesla automakers generally) manufacture EVs at sufficient scale? Tesla seems to be far ahead of their competition:

https://caredge.com/guides/electric-vehicle-market-share-and...

BYD and Xiaomi in China seem to be catching up to Tesla far quicker than the US/European auto makers are.


> Making a car is easy, manufacturing a car at scale

It looks like Tesla manufactures cars just fine; there is no shortage of Teslas on the market. Demand for EVs didn't pick up.


Yeah, Tesla is long overdue a massive crash.


You might be right. But in the meantime, there is a whole group of short sellers who are taking it in the shorts:

According to data from S3 Partners, investors betting on a decline in Nvidia's share price suffered roughly $2.9 billion in paper losses on Thursday when the stock ended the day 16% higher following the chip-maker's huge earnings beat the evening before.

https://finance.yahoo.com/news/nvidias-huge-post-earnings-st...


Articles like these are cheap clickbait. Most short interest in Nvidia is hedges. Nvidia shareholders are sitting on huge untaxed gains. People want to sell but can’t because January is still 6 months away. So they hedge with puts. And market makers then have to short the stock in order to stay market neutral.

Everybody and their dog is long nvidia.


I don't have a great understanding of how all of these options for betting against something work, but in practice betting against anything seems too risky even if you are 100% sure it will fail, because you also need to know exactly when it will fail, and the longer out you think it will be, the less you stand to make and the more risk you take. Even the most overpriced or poorly run companies are never going to fail on a predictable timeline.

It's pretty obvious TSLA is massively overvalued... but will it crash this month? next year? 20 years from now? who knows.


With a put option hedge, they are not necessarily betting that it will fail and often are not betting that it will fail. Unlike just borrowing shares to short, someone with a put option, but who is long in the stock, will have a fixed loss on the option if the stock doesn't fall. They buy a put option to protect the possible downside of their long position, with a fixed loss in the case of the stock not falling, while if the stock does fall, they'll be able to recoup some of their losses from the fall in value of their long position.
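
A rough numeric sketch of that asymmetry, with made-up prices and premiums (not anyone's actual position or a real option quote):

    #include <algorithm>
    #include <cstdio>
    #include <initializer_list>

    // Protective put: long 100 shares plus one put struck at the purchase price.
    // If the stock doesn't fall, the loss is capped at the premium paid;
    // if it falls, the put's payoff recoups most of the long position's loss.
    int main() {
        const double shares  = 100.0;
        const double entry   = 1000.0;  // hypothetical purchase price
        const double strike  = 1000.0;  // at-the-money put
        const double premium = 60.0;    // hypothetical cost of the put, per share

        for (double spot : {700.0, 900.0, 1000.0, 1100.0, 1300.0}) {
            double stock_pnl = shares * (spot - entry);
            double put_pnl   = shares * (std::max(strike - spot, 0.0) - premium);
            std::printf("spot %6.0f  stock %8.0f  put %8.0f  hedged %8.0f\n",
                        spot, stock_pnl, put_pnl, stock_pnl + put_pnl);
        }
        return 0;
    }

The downside is capped at the premium while the upside is only reduced by it, which is why a holder who doesn't want to sell (for tax reasons, say) reaches for this structure.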


If it’s cheaper to hedge (by shorting) and waiting for the long term cap gains tax instead of selling (and paying short term cap gains) that’s what people will do. It has nothing to do with “risky bets” or anything like that. It’s simple tax optimization.


You don’t know much but you know enough. Options is all about timing. And nobody knows the time to short for sure. The market is not rational.


You mean they're selling puts at the share price they're okay losing the stock at?

What's that have to do with taxes (and wanting to hold 12 months+)?


Equity holder buys puts as a hedge from a market maker. Market maker sells puts and shorts shares to stay market (delta) neutral.
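
For illustration, a toy version of that delta-neutral bookkeeping (the delta and contract count are made-up numbers; real desks rebalance continuously as delta moves):

    #include <cstdio>

    // Illustrative delta hedge for the market maker who sold the puts.
    int main() {
        const double put_delta = -0.40;   // hypothetical delta of one put, per share
        const int    contracts = 10;      // each standard contract covers 100 shares
        // Selling the puts leaves the market maker with +0.40 delta per share covered,
        // so they short that many shares to bring net delta back to roughly zero.
        const double shares_to_short = -put_delta * 100.0 * contracts;
        std::printf("short about %.0f shares to stay delta neutral\n", shares_to_short);
        return 0;
    }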

The equity holder could also just sell their shares instead of hedging with puts, but that's a taxable event. Hedging with puts is not.

Think of it this way. You have nvidia shares, think the stock has run up too quickly, want to sell, but also don't want to because of taxes. So you approach your neighbor and get him to short nvidia for you, and you agree to pay your neighbor a fixed amount for his trouble and your neighbor passes the money made/lost by shorting back to you. Now your effective share count is 0 and you don't have to pay taxes on unrealized gains.

The point is that people wrongly assume that short interest in nvidia must mean people are betting against nvidia, when it's much more likely most short interest belongs to people who are bullish.


If they are selling puts they are actually signalling the price they are OK with buying at. What you described is selling a call. Or, to be annoying, selling a right to buy.


Only until it bursts and sells off.


China-Taiwan conflict could be a trigger. If the chips stop flowing, a lot of tech will go down. And they could drag the rest of the market with them, even if those are less affected.

The other possibility is something finally snapping even without that sort of trigger. But that could take a rather long time still.


Very unlikely. Big crashes occur when businesses sell hot air or cook their books. NVidia is overpriced but they are also the sole producer of the world’s most advanced AI chips. I can’t see any scenario where demand for AI chips levels off in the coming years. The worst case scenario is NVidia’s profit margins hurting because competitors hit the market sooner than expected. But even when that happens NVidia will still be worth a trillion. Hardly an epic crash.


I think it is quite possible that NVidia's customers[1] are the ones selling hot air. If those customers go away then it turns out that NVidia made legitimately huge profits today, but it was still based on hot air, and their future profits are much smaller just of necessity. It is possible that someone will figure out a good monetization for LLMs, but I'm not aware of any existing yet, which is why I think this is a possible outcome. They at least need to figure out monetization soon enough, and with enough free cash flow thrown off, that the business case closes with fancy, super expensive accelerators rather than just regular CPUs, which will catch up via Moore's Law eventually.

NVidia is getting rich following the old "in a gold rush, be the ones selling picks and shovels" aphorism, and when the gold rush ends, you still have your money, but the discounted future profits might well be much smaller than you expected.

1: I don't think it would necessarily hurt NVidia, but I do think it might hurt their suppliers. I would be very curious as to how much of the current fab boom across TSMC, Intel, United Micro, Samsung etc. is based on assumptions about AI demand continuing to grow. Those companies would be the ones I would expect to suffer if the demand doesn't materialize because those fabs are going to be ridiculously expensive if they don't keep their order books full.


Semiconductors are a cyclical industry. Undersupply followed by oversupply. Boom and bust. Feast and famine. But demand for semiconductors has gone up, albeit cyclically, for 6 decades. Not a trend I would bet against.

Every car will get AI chips, every laptop, every server, every phone, and many home appliances. And I expect people will eagerly upgrade their electronics to get the faster AI chips in the decade to come.


Will it be an AI accelerator chip or just a regular CPU, which after X number of years of Moore's Law development will be as powerful as an H100 is today? Can you charge that much extra for it in X years, if you are NVidia?

At a certain point the AI chip market disappears and it's absorbed into general purpose computing.


For comparison, in the first dotcom boom, Sun and Cisco did extremely well as infrastructure providers, and subsequently suffered disproportionately.


Demand for AI chips is predicated on those chips producing value for their customers. But are there any AI products that are generating significant revenue yet? If those fail to materialize there will be a gigantic crash.


Cost savings in marketing, advertising are already substantial. Large businesses are running pilots with on-premises LLMs with promising results.

Aaron Levie of Box.com has been outspoken about it:

https://x.com/levie/status/1793479934645842141


My take on this: AI is real, and AI products are adding significant value, but this will not show up as revenue gains for companies. Companies have no choice but to invest in AI to remain competitive in terms of costs. Consumers will realize a lot of this value as reduced prices.


Kinda seems obvious that application people are going to be able to generate revenue from this, but maybe I'm an idiot


The challenge isn't just to generate revenue, but to generate a TON of revenue that justifies the massive investments these companies are making. It's not obvious to me that any of these products are going to be as successful as the market needs them to be.


The massive capital investments are mostly coming from companies that are going to be selling these as services, so not necessarily the product builders themselves. I think there will be a lot of products finding revenue coming.


NVDA is ~6% of the S&P depending on the day, so when it corrects we'll likely see a down day, but I'm not sure it has the ability to really spread contagion. Given that such a large percentage of NVDA's revenue comes from a few big providers, the first question would be: are they doing poorly, or have they simply capped out on GPU need for a myriad of reasons?


As long as there's something after the AI hype dies off. Maybe we'll circle back to a gaming boom or another crypto or cars.


Local private generation of pornography could be a huge driver, doubly so with VR headsets.

There's also nothing stopping them from bringing some consumer software to the market.


Robots


Yeah I have been thinking we might see a humanoid robotics hype bubble. They’re flashy tech that can look pretty good in a demo, the hardware is to a state where we can convince people it’s a real product, and the software is just far enough along to trick loads of investors in to funding it.

And I don’t know, maybe in ten years time we will have enough AI advances that humanoid robots are useful. But AI is notoriously hard to predict, and if you believe Yann LeCun then all the autoregressive approaches the big LLM companies are using won’t get us to AI that thinks more like an embodied creature, with hierarchical reasoning, variable compute capabilities for solving hard problems, few shot learning, etc.


The latest demos (eg Unitree) have convinced me that it won't take 10 years for useful humanoid robots to start appearing. AGI is the wrong target. If I have to do the dishes a hundred times to generate training data in order to teach the robot to do my dishes for me, that's still better than me doing the dishes for the rest of my life.


The Unitree is what made me think we are headed for a bubble. The machines themselves will become commodity hardware. Tesla bot will push the hype side. I just don’t believe current AI systems have what it takes to operate in the real world. It doesn’t have to be AGI at all, but it needs a bunch of capabilities not present in existing models, some of which cannot be created by training on large corpuses from the internet like text and images. [1]

Also people can’t afford to spend $20k on a robot just to do their dishes. My good friend tried to make a robot that could pick up dog poop from your back yard, and he went to HAX accelerator and everything, and through extensive research he found that the addressable market for dog poop robots is just too small to fund a company. Now he makes farming robots. (I do too as it happens, on a separate project.)

The point is a humanoid robot worth the price of that robot needs to be very useful, and I don’t believe current approaches can get us there.

Maybe some uses will be found, just as current AI systems have their uses, but I say it will be a hype bubble because companies will fund raise on massive promises they will never achieve, and investment will move on to something else after 5 years or so.

Could still be robotics, but I actually think modular purpose built machines make way more sense than general purpose humanoids. What if instead of a humanoid to load the dishwasher you had a dishwasher that cleaned dishes one at a time. You load up to six table settings in to the bin and a little robotic mechanism grabs them one at a time, runs it through a little car wash setup, and stacks them on the other side. That’s what I want, not a humanoid!

[1] I’m really inspired by Yann LeCun’s recent podcast where he talks about the fundamental limitations of current popular (autoregressive) models. https://youtu.be/5t1vTLU7s40


The Nth large house appliance that doesn't need to be a humanoid is the clothes folding machine that'll go next to the washer dryer. The difficulty of the task makes me doubtful if we'll see that machine, but one can dream!

The problem with your proposed non-humanoid dish washer robot arm is that I want the dishes to go into the cabinets, not just into the dish rack, and while the robot doesn't strictly need to be humanoid, it's better that it's mobile, and while uni-wheels like a segway is certainly an option, having cracked bipedal robots, that just seems like a better design choice.


Well, there are problems with both approaches, with intelligence, cost, and sensing fidelity being major unsolved problems with humanoids. We still don’t know how to make good fingers for example with sensitive skin.

Considering how much the industry has struggled with automation in cars, I just don’t actually think humanoids will work in the home any time soon. A fantasy humanoid that works well would be more useful than a robotic dishwashing appliance, but one of those is something I believe can actually be built in the next decade.


While Full Self Driving and Waymo grab all the attention, Blue Cruise and Mercedes' system are quietly doing what's asked, giving drivers a way to sit in traffic on the freeway and pay less attention. So I agree that it's been a struggle, but it seems the computers are managing to win that struggle. At the point where we're disagreeing on 3 years vs 10 or 20, and on whether there are enough rich people to afford a $20k house robot, I don't feel a need to convince you of my opinion; I'm simply excited for the future. I don't know what comes next but I'm optimistic about it.


About time for the Internet of Things to make a comeback


Internet of AI Things!


The AI bubble bursting would be localized to the tech industry I think and wouldn't cause a crash but it would suck for us. Commercial real estate and city budgets are the only thing that genuinely scares me at the moment but I've been waiting a long time for that shoe to drop.


I don’t think it will be a crash per se, I just think that at some point every major cloud provider will build their own silicon and this will drive demand for NVidia’s cards down by a lot. It will take some years for it to happen, but it will happen eventually.


Not implausible, but I'd say we're 10 years out from the FAANGs replacing their silicon AND CUDA.


It doesn't have to be a full replacement. Meta for example is already running both. Interesting how they state [1]:

"This announcement is one step in our ambitious infrastructure roadmap. By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100 GPUs as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s."

350k NVDA GPUs, but the compute power of 600k. See here for how quickly their silicon is advancing [2].

No one is saying NVDA will go away. But the stock is priced for near-perfect growth projections. NVDA's second-biggest customer, Meta, cutting back even just a bit will hit NVDA's bottom line. That's the stock risk.

[1] https://engineering.fb.com/2024/03/12/data-center-engineerin...

[2] https://ai.meta.com/blog/next-generation-meta-training-infer...


Gemini might not be as cool as GPT, but it already runs on TPUs. So the "G" in FAANG is not as dependent on Nvidia as the others.


Google is already there, others are starting to look at it. And one of them might sell to the others if it becomes profitable enough.


Yeah but google has shown complete ineptitude at selling anything that’s not their cash cow and the change in culture doesn’t seem close by.


AI is here to stay. And local AI will be toyed with for some time longer. This is leading to the purchase of massive computing power. The possibility of it being something one day is driving the sales and the speculation.

However, (my prediction is) AI will be able to be run on fewer resources than today. And if this optimization comes (too soon), the speculation will crash. If it comes late, the buying and selling will have already happened. And NVIDIA will reap the profits in the meantime.

NVIDIA should liquidate stock. Now is a good time.


Nvidia makes a product they can't build and get out the door fast enough.

I think Apple still makes a yearly phone, although I can't tell the difference between the 8, 12, 14 or 16 (?)

My money is on the datacentre boys.


Even if Nvidia were to crash 75% (which it probably won't) - there have been 2 worse crashes in the last 25 years.

It wouldn't even come close to comparing to the Great Depression.


Triggered by the CCP's takeover of Taiwan.

Buffett sold his stake in TSMC because of geopolitical uncertainty and the way I see it TSMC and NVIDIA are attached at the hip.


I think the US government understands this very well and is more than willing to put its money where its mouth is.


I don't think a crash is coming. Nvidia actually has revenue growth. PE ratio is a very ordinary mid-60s after the Q2 earnings call.

Even if the AI hype cycle dies, computing power has and continues to be the oil of the digital age.

Growth may slow, but I wouldn't go shorting this company.


> computing power has and continues to be the oil of the digital age.

What would they use this much compute for though except chasing AI/AGI? If the current ever larger transformer race doesn't lead anywhere NVIDIA will crater hard since there isn't anything else worth that much compute currently.


Even without AGI, there are a lot of use-cases. A small reminder about the scale of the World (global GDP is on the order of $100 trillion): If something brings a 5% boost to the productivity of the World, it has a value of 5 trillion per year.

If AGI were to happen within 5 years, it would mean that Nvidia is heavily underpriced, as AGI would have a value in the hundreds of trillions.

If we knew for sure that AGI was happening in 5 years, NVIDIA is probably at least 100x underpriced.


> If something brings a 5% boost to the productivity of the World, it has a value of 5 trillion per year.

What happens if something brings a 5% reduction in productivity? It's already getting harder to find reliable information thanks to AI spam.


The companies whose productivity this affects would lose value, but it may not directly affect the value of the leader of the technology if in some other area it still increases productivity. E.g. Company A sells a product which gives B a 5% boost, and C as well, but C does something harmful with it that pushes the rest of the competition to -5%.

It's like how arms manufacturers don't lose value during wartime, which is a destructive process for the World as a whole. If a certain weapon allows you to destroy an evenly matched opponent, then productivity as a whole would decrease by 50%, but the weapon would still have massive value, probably at least 50% of total output.

Take a simplified World with 2 countries, each producing $50 million of value a year, that go to war. Either of them would be willing to pay anything they have for the weapon, since the alternative is to be wiped out, even though after beating the enemy the total produced would temporarily be $50 million per year instead of $100 million.

The value of a weapon would likely be whatever any of them can dish out, so perhaps over $50 million if they have saved up enough.


I have had the inverse experience. It's been much easier to find information using AI compared to a traditional Google search.


Crypto mining? Faster graphics processing for media? More powerful or lower power consumption phones/tablets/laptops? Medical imaging? Military?

There was a tech world before AI!


How can you look at numbers like this and think otherwise? Market didn’t exist a few years ago and now it’s going to go up infinitely?

https://i.imgur.com/20DZLRg.jpeg


If you are an NVDA investor, I suggest you set trailing % stop sells.

That way you can keep most of the gains in case of a crash.
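
A minimal sketch of how a trailing percentage stop behaves, with placeholder prices and a hypothetical 15% trail (not a recommendation for any particular level):

    #include <cstdio>
    #include <vector>

    // Trailing stop: track the highest price seen since entry and exit once
    // the price drops a fixed percentage below that peak, locking in most of
    // the run-up while capping the drawdown.
    int main() {
        const double trail = 0.15;  // hypothetical 15% trailing stop
        const std::vector<double> closes = {100.0, 120.0, 135.0, 150.0, 142.0, 126.0, 110.0};  // made-up prices

        double peak = closes.front();
        for (double price : closes) {
            if (price > peak) peak = price;
            if (price <= peak * (1.0 - trail)) {
                std::printf("stop triggered at %.2f (peak was %.2f)\n", price, peak);
                return 0;
            }
        }
        std::printf("still holding; peak so far %.2f\n", peak);
        return 0;
    }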

Nvidia needs one bad quarter of growth and they see the same fate as Tesla.

Growth drives the market frenzy.


There is a gold rush to develop AI that can do any job anyone does. I don't think the biggest maker of the hardware all this runs on is going to do too badly during this period.


What if they don’t manage to build that AI?


Then these corporations will pay for idiosyncratic/specialized machine learning software, and a lot of it will be useful to clients.


In a gold rush, sell picks and shovels.

Even if no one build a human level AI, it’s hard to imagine people would stop trying.


True, true. But in a gold rush once everyone has a pick and a shovel the gold tends to get mined out extremely rapidly and 95% of the people go home with the same nada that they started with.


They said the same about Tesla when it became worth more than Ford and GM combined.


Sure, but Tesla is down 50% from all time high.

I don't see a durable moat for Nvidia.

They have the best stuff now (and likely will for a few years), but it's designed for relatively predictable workloads with known best solutions.

Everyone sees the money Nvidia makes and wants a piece of it (e.g. Jim Keller and hundreds of others). Make something with lower TCO and proper integration with PyTorch and co, and B2B will buy.


Nothing is going to displace CUDA in the short term. ROCm is a piece of garbage. Intel is miles behind.


Can't someone else just implement CUDA for their chips? I don't see that being worth trillions, it is hard but isn't that hard to replicate. Billions makes sense, not trillions.


CUDA is proprietary; NVIDIA recently changed its license terms to forbid implementing CUDA translation layers for non-NVIDIA devices: https://www.techpowerup.com/319984/nvidia-cracks-down-on-cud...


I thought such contracts don't hold up in court; you are allowed to implement interfaces for interoperability, right?


Having a different API from CUDA is not the problem. Having a working and performant product is the problem.

AMD made a near clone of the CUDA API with HIP (the implementation is garbage btw). The API mostly changes the prefix from "cuda" to "hip", and there is even a semi-automatic source code modifier tool to make the switch.
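
To make the prefix point concrete, here is a tiny host-side sketch. The `gpu*` macros are just local aliases for this example (not part of either SDK); the point is that the HIP calls mirror the CUDA ones name-for-name, which is what the hipify tooling automates:

    // Same host code against either runtime; only the vendor prefix changes.
    // Build with nvcc (CUDA) or hipcc -DUSE_HIP (HIP).
    #ifdef USE_HIP
      #include <hip/hip_runtime.h>
      #define gpuMalloc hipMalloc
      #define gpuMemcpy hipMemcpy
      #define gpuFree   hipFree
      #define gpuMemcpyHostToDevice hipMemcpyHostToDevice
    #else
      #include <cuda_runtime.h>
      #define gpuMalloc cudaMalloc
      #define gpuMemcpy cudaMemcpy
      #define gpuFree   cudaFree
      #define gpuMemcpyHostToDevice cudaMemcpyHostToDevice
    #endif

    #include <vector>

    int main() {
        std::vector<float> host(1024, 1.0f);
        float* dev = nullptr;
        gpuMalloc((void**)&dev, host.size() * sizeof(float));   // cudaMalloc / hipMalloc
        gpuMemcpy(dev, host.data(), host.size() * sizeof(float),
                  gpuMemcpyHostToDevice);                       // cudaMemcpy / hipMemcpy
        gpuFree(dev);                                           // cudaFree / hipFree
        return 0;
    }

The hard part isn't the renaming (hipify handles that); as said above, it's making the resulting implementation work well and perform.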

It's like any other software migration (e.g. from on-prem to cloud):

If ("money to switch library" < "savings per quarter" * quarters) { "Do the switch" } else { "Status quo" }


"just"


Compared to a trillion dollars it is "just"; it is just a software implementation moat. It's a compiler; it isn't on the scale of reimplementing Windows or anything like that.


Nvidia's got umpteen patents and will keep coming up with more. How is that not a durable moat?


Tesla is overvalued, but it's one stock. The AI stuff is pretty widespread and getting shoved into use cases where it may not make much economic sense. It could be like the .com bubble - where there's real market opportunity there, but some of the early ideas are too optimistic and can't be run profitably.

We'll see I guess.


I'm still waiting for the market correction on this one, Tesla is still way overvalued


Apple and NVIDIA have broadly similar revenue and income at this point, but Apple is shrinking while nvda is still growing exponentially.


I'm not sure that NVidia's moat is all that large.

You have to hand it to them, they have executed superbly, but the underlying technology is well understood. You have the hyper-scalers investing in their own silicon, and Intel/AMD are ramping up as well.


CUDA and its associated toolkits are their moat. Whether or not one or more of the remaining manufacturers can deliver a compelling substitute for those workloads remains to be seen, but OpenCL is far in the rear view at this point and ROCm hasn't made a difference yet.


The hyperscalers can bypass CUDA if it is profitable. Most AI practitioners use Torch rather than CUDA directly, so it's effectively "under the hood". If some director at Meta figures they could reduce Meta's capex by $X billion per year by switching to in-house/AMD hardware, they'd make it happen and pocket a decent bonus for themselves and the teams involved.


Google have been trying that thesis out for years and yet TPUs aren't flying off the shelves in the way H100s are.

The basic problem they seem to have faced is that the hardware was over-specialized. The needs of models changed quite fast. CUDA was flexible enough to roll with it, TPUs weren't. Google went through several TPU generations in only a few years and yet don't seem to have managed to build a serious edge over NVIDIA despite being less flexible.

They also lost out because the whole TPU ecosystem is different to PyTorch which is what won out. That's a risk if you do your own hardware. It ends up with a different software stack around it and maybe people pick hw based on sw and not the other way around.

So it's not that easy.


> Google have been trying that thesis out for years and yet TPUs aren't flying off the shelves in the way H100s are

Google does not sell TPUs to 3rd parties at all[0]. Or do you mean cloud customers prefer H100s to TPUs - if so, I'd appreciate more context, because I know Google uses TPUs internally, and gets some revenue for TPUs - I know a bunch of people who pay for Google Colab for TPU access to accelerate non-LLM training workloads.

> They also lost out because the whole TPU ecosystem is different to PyTorch which is what won out. That's a risk if you do your own hardware.

This is barely related to hardware and mostly about TensorFlow losing the mindshare battle to Torch. Torch works fine with TPUs, as anyone who's used a Colab notebook might tell you.

0. Except their Coral SBC/accelerator which is modest and targeted at inferencing.


Yeah I meant customers renting them in Google Cloud.


They don't. $382 vs $61 Billion in the last year. $135 vs $34.5 on the income part.


Yep, accidentally looked at quarterly numbers for Apple.


Medium-term short opportunity, but time your entry after the next few quarters.


Yeah. Rode the hype. Made a killing. Fucking off now and finding the next bubble…

Existential risks such as hyped gains and regulation are starting to gain a foothold.


It is kind of interesting to see that even with this crazy run up in price over the past week, the PE ratio is around ~65 (for comparison Apple's PE is at ~30 and AMD is ~240). Considering the "expected growth" in AI the PE ratio doesn't seem too bad. Of course, PE ratio is just one parameter and doesn't necessarily describe the whole picture.

I also wonder whether the announced stock split is contributing to the short-term price increase (since people will expect more money to flow in once the stock is a more "accessible" ~$100).


Interesting indeed. Historically, a PE ratio of 65 is even quite low for NVDA: https://ycharts.com/companies/NVDA/pe_ratio


It's maybe reasonable to consider this an absurd bubble surrounding AI, but it is a demand-driven phenomenon from the point of view of Nvidia.

Right now, you can listen to CEOs of Nvidia's biggest clients saying things like "the current bottleneck is the bureaucracy around building nuclear powerplants to provide energy to our datacenters". The Saudi investment fund is shopping around for AI ventures to throw hundreds of billions at. Altman is suggesting he'd be able to utilise a multi-trillion dollar raise. These are big indicators that the demand for Nvidia's products will remain strong for some time to come.

The amount of money that is aimed at AI, which ultimately a large portion of will land in Nvidia's bank account, is staggering.


Demand driven mainly by hyperscalers. And hyperscalers (MSFT as an example) are aggressively forcing customers (using tactics such as latency/performance compromises, one example being completions in 30 secs vs 1.5 min for the same prompt) to move from Pay2Go schemes to buying Provisioned Throughput Units with minimum commitments on the order of 16-32k USD monthly, with big penalties if you don't forecast your own demand with enough accuracy. Those half-assed offerings may be "interesting" to enterprise customers, but the intermediate/final consumer demand is not going to be at the scale where everyone and their grandma are going to use AI (as NVDA investors want to believe); the value is just not there.

That should say enough about the confidence in the capacity of such giant investments in GPUs to bring revenue in the short-term future. They're not selling surplus capacity (from their own products such as Copilot); they're hoping to sell snake oil directly to customers.


In the short run, the market is a voting machine but in the long run, it is a weighing machine. - Warren Buffett



Every Tom, Dick and Harry's DCF (using FCF) analysis on the market says they're overvalued, so if you wanted in, you'd better do your homework on what you think is an acceptable MC value.


Nvidia essentially has no competitors so far; the competitors are in fact years behind. The moat is deep and wide, and it will probably be the No. 1 market cap company for a few years down the road, if it keeps executing well that is. All of Apple's products can easily be replaced.

Nvidia could also buy some cloud companies to become an AI-cloud vendor, instead of just selling equipment and chips; that would make it even bigger.


> All of Apple's products can easily be replaced.

Interesting. I perceive it exactly the opposite way. Apple products are hard to replace due to ecosystem advantage while everybody (Apple included!) is already working on building hardware to avoid using Nvidia's H100s.


> while everybody (Apple included!) is already working on building hardware to avoid using Nvidia's H100s.

The question is if they will succeed. Only Google so far has a competitive product (TPUs).


Apple is reportedly building M2 Ultra and M4-powered AI servers. We will see what this ends up looking like, but I'd say that the M2 Ultra is a very competitive product. Even if it's not really the same kind of architecture, it might very well be a better approach, or a good enough alternative.


Am I wrong, or isn't CUDA a bigger reason than the hardware itself?


Wasn't the same true for Intel vs AMD?

Nvidia's moat is CUDA, not the hardware; how much of the AI hype needs CUDA?

And if Nvidia can't satisfy the demand at a reasonable price, companies could search for alternative ways with AMD hardware.


When 40% of NVDAs revenue comes from 4 large customers who already develop their own silicon, that's the risk. People will be watching Meta's and MS's capex, and if it's at all adjusted down, that's a bad sign for NVDA. If the CEO or CFO were to mention they were able to adjust their GPU mix to more of their in-house version, NVDAs growth rates and multiple will come under question.

Apple is much harder to displace.


They already have cloud services for high-paying enterprise customers (DGX Cloud)


"Apple's all products can easily be replaced."

- No wireless. Less space than a Nomad. Lame.


I had the same thought. Apple has a lot of "stickiness" for their customers - whether you think this is due to superior technology or just marketing is debatable. Meanwhile, raw computing power is fundamentally fungible and there's a lot of incentive to undercut NVDA's offerings. The only question is how long it will take to do so.


> superior technology or just marketing is debatable

“Superior customer experience” is missing from this. I don’t care if the processor in the iPhone is the fastest or not, or if it has 8GB of RAM vs some Samsung, and I care even less about marketing.


Apple is only sticky to those who buy into the ecosystem. Those who use a single Apple device (or 2) without paying for any Apple service are not trapped in the walled garden


> Apple is only sticky to those who buy into the ecosystem

That's half of the total market in many countries.


Is it too late to buy?



