
I've had a long-standing urge to start mixing music, so I scratched that itch over the weekend and now I'm waist-deep in a new obsession.


It’s a ton of fun. What software or hardware did you choose?


Tell me more about why you believe their stock is hilariously overvalued.


Their market cap is $2.2T.

In the past year, they had revenue of $60B and net income of $30B. Absolutely amazing numbers, I agree. The year before, they had revenue of $30B and net income of $4.5B - and that was a rather good year. What happens next of course depends on how you judge the situation - was it peak hype demand? Will it stabilize now? Grow at current extraordinary rates?

Scenario 1 - margins get back to normal due to hype dying down, competition improving, etc. - in this case the company is worth at best ~$200B, or 1/10 of what it is now.

Scenario 2 - they maintain current revenue and the exceptional margins - the company would be worth ~$1T, or 1/2 of what it is now.

Scenario 3 - their current growth rate (based on the past 12 months) continues for ~5 years. In this case the company is worth ~$2T.
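
(Rough arithmetic behind those numbers, as a sketch: implied value ≈ steady-state net income × an earnings multiple. The specific inputs below are my own illustrative assumptions, not reported figures:)

    # Back-of-envelope: implied value = steady-state net income * earnings multiple.
    # All inputs are illustrative assumptions, not guidance or reported figures.
    def implied_value_trillions(net_income_billions: float, multiple: float) -> float:
        return net_income_billions * multiple / 1000

    print(implied_value_trillions(10, 20))  # scenario 1: margins normalize   -> ~0.2T
    print(implied_value_trillions(30, 33))  # scenario 2: current level holds -> ~1.0T
    print(implied_value_trillions(60, 33))  # scenario 3: ~5 more years growth -> ~2.0T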

But they are in a business where most of the money comes from a handful of customers, all of which are working on similar chips - and given the sums in play now, the incentives are *very* strong.

My opinion is that the company is already priced for perfection - basically, the current price reflects the perfect scenario. I struggle to see any upside, unless we get AGI in the next 5 years and it decides it can only run on Nvidia chips.

All of this is akin to Tesla in recent years. They grew from a small startup to a mid-sized car maker - the % growth rate was huge, of course, an amazing achievement in itself. But people projected that the % growth rate would continue, and the stock was priced accordingly. Reality is catching up with Tesla, even if some projections are still absolutely crazy.


It does no good to design similar or even superior chips if you can't get them fabbed. How much of the world's fab capacity has Nvidia already reserved?


They are priced as if they are the only ones who are capable of creating chips that can crunch LLM algos. But AMD, Google, Intel, and even Apple are also capable.

Apple is in talks with Google to bring Gemini to the iPhone, and it will obviously also be on Android phones. So almost every phone on earth is poised to be using Gemini in the near future, and Gemini runs entirely on Google's own custom hardware (which is at parity with or better than nVidia's offerings anyway).


This seems as good a place as any to be Corrected by the Internet, so... correct me if I'm wrong.

Making a graphics chip that is as good as Nvidia's: Very difficult. Huge moat, huge effort, lots of barriers, lots of APIs, decades of experience to overcome.

Making something that can run a NN: Much, much easier. I'd guess start-up-level feasible. The math is much simpler. There's a lot of it, but my biggest concern would be less about pulling it off and more about whether my custom hardware is still the correct custom hardware by the time it is released. You'd think you could even eke out a bit of a performance advantage by not having all the other graphics stuff around. LLMs in their current state are characterized by vast swathes of input data and unbelievably repetitive number crunching, not complicated silicon architectures and decades-refined algorithms. (I mean, the algorithms are decades-refined, but they're still simple as programs go.)
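
(To make "simple as programs go" concrete, here's a toy sketch of the core attention computation in plain NumPy - purely illustrative; real implementations add batching, KV caching, and fused kernels, but the inner loop really is just matrix multiplies and a softmax:)

    import numpy as np

    def toy_attention(x, Wq, Wk, Wv):
        """One toy self-attention step: three matmuls, a softmax, one more matmul."""
        q, k, v = x @ Wq, x @ Wk, x @ Wv                # project tokens into Q/K/V
        scores = (q @ k.T) / np.sqrt(k.shape[-1])       # pairwise similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ v                              # weighted mix of values

    d = 64
    rng = np.random.default_rng(0)
    x = rng.standard_normal((10, d))                    # 10 tokens, 64-dim embeddings
    out = toy_attention(x, *(rng.standard_normal((d, d)) for _ in range(3)))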

I understand nVidia's graphics moat. I do not understand the moat implied by their stock valuation - that, as you say, they are the only people who will ever be able to build AI hardware. That doesn't seem remotely true.

So... correct me Internet. Explain why nVidia has persistent advantages in the specific field of neural nets that can not be overcome. I'm seriously listening, because I'm curious; this is a deliberate Cunningham's Law invocation, not me speaking from authority.


I agree with you, but let me play devil's advocate.

After 10 years of pretending to care about compute, AMD has filled the industry with burned-once experts who, when weighing nvidia against competitors, instinctively include "likely boondoggle" against every competitor's quote, because they've seen it happen, possibly several times. Combine this with nvidia's deep experience and huge rich-get-richer R&D budget keeping them always one or two architecture and software steps ahead, like it did in graphics, and their rich-get-richer TSMC budget buying them a step ahead in hardware, and you have a scenario where it continues to make sense to pay the green tax for the next generation or three. Red/blue/other rebels get zinged and join team "just pay the green tax." NV continues to dominate. Competitors go green with envy, as was foretold.


> burned-once experts

More like burned 2x/3x/4x by the "this time it's different" people.

Looking at you, Intel.


It's true that nobody has beaten nVidia yet, and that is a valid data point I don't deny.

But (as a reply to some other repliers as well), AMD was also chasing them on the entire graphics stack as well as compute. That is trying to cross the moat. Even reimplementing CUDA as a whole is trying to cross a moat, albeit a smaller one.

But just implementing a chip that does AI, as it stands today, full stop, seems like it would be a lot easier. There are a lot of people doing it, and I can't imagine they're all going to fail. I would consider by far the more likely scenario to be that the AI research community finds something other than neural nets to run, and thus the latest hotness becomes something other than a neural net and the chips become much less relevant or irrelevant.

And with the valuation of nVidia basically being based not on their graphics, or CUDA, but specifically on this one feeding frenzy of LLM-based AI, it seems to me there are a lot of people with the motivation to produce a chip that can do this.


> So... correct me Internet. Explain why nVidia has persistent advantages in the specific field of neural nets that can not be overcome. I'm seriously listening, because I'm curious; this is a deliberate Cunningham's Law invocation, not me speaking from authority.

To become a person who writes driver infrastructure for this sort of thing, you need to be a smart person who commits, probably, several of their most productive years to becoming an expert in a particular niche skillset. This only makes sense if you get a job somewhere that has a proven commitment to taking driver work seriously and rewarding it over multiple years.

NVidia is the only company in history that has ever written non-awful drivers, and therefore it's not so implausible to believe that it might be the only company that can ever hire people who write non-awful drivers, and will continue to be the only company that can write non-awful drivers.


CUDA is/was their biggest advantage, to be honest, not the HW. They saw the demand for super high-end GPUs driven by the Bitcoin mining craze thanks to CUDA, and it transitioned gracefully to AI/ML workloads. Google was much further ahead in seeing the need and developing TPUs, for example.

I don't think they have a crazy advantage HW-wise. A couple of start-ups are able to achieve this. If the SW infrastructure end is standardized, we will have a more level playing field.


CUDA is a big reason for their moat. And that's not something you can build in a couple of years, no matter how much money you throw at it.

Without CUDA you have a chip that runs on-premise without anyone having a clue how good it is - which is supposedly what Google does. Your only offering is cloud services. As big as that is, corporations will want to build their own datacenters.


Sure, CUDA has a lot of highly optimized utilities baked in (cuDNN and the like) and, maybe more importantly, implementors have a lot of experience with it, but afaict everyone is working on their own HAL/compiler and not using CUDA directly to implement the actual models. It's part of the HAL/framework. You can probably port any of these frameworks to a new hardware platform with a few man-years' worth of work imo, if you can spare the manpower.

I think nobody had the time to port any of these architectures away from CUDA because:

* the leaders want to maintain their lead and everyone needs to catch up asap, so no time to waste,
* progress was _super_ fast, so doubly no time to waste,
* there was/is plenty of money that buys some perceived value in maintaining the lead or catching up.

But imo:

1. progress has slowed a bit, maybe there's time to explore alternatives,
2. nvidia GPUs are pretty hard to come by, switching vendors may actually be a competitive advantage (if performance/price pans out and you can actually buy the hardware now as opposed to later).

In terms of ML "compilers"/frameworks, afaik there's:

* Google JAX / TensorFlow XLA / MLIR
* OpenAI Triton
* Meta Glow
* Apple's PyTorch+Metal fork
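
(To illustrate the "framework as HAL" point, a minimal PyTorch sketch - the model code is identical regardless of which vendor backend the framework build provides; ROCm builds of PyTorch expose the AMD backend under the "cuda" device name, and "mps" is Apple's Metal backend:)

    import torch

    # Pick whatever accelerator backend this PyTorch build provides;
    # the model code below doesn't care which vendor supplies it.
    if torch.cuda.is_available():            # NVIDIA CUDA, or AMD ROCm builds
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():  # Apple Metal
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    model = torch.nn.Sequential(
        torch.nn.Linear(512, 2048),
        torch.nn.GELU(),
        torch.nn.Linear(2048, 512),
    ).to(device)

    x = torch.randn(8, 512, device=device)
    y = model(x)  # same code, whichever backend was picked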


> CUDA is a big reason for their moat.

Zen 1 showed that absolute performance is not the be-all, end-all metric (Zen lost on single-core performance vs Intel). A lot of people care about the bang-for-buck metric. If AMD can squeak out good-enough drivers for cards with good-enough performance at a TCO[1] significantly lower than NVidia's, they break Nvidia's current positive feedback cycle.

1. Initial cost and cooling - I imagine for AI data center usage, opex exceeds capex.
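
(As a toy illustration of the bang-for-buck point - all numbers below are made up; plug in real card prices and lifetime power/cooling costs:)

    # Buyers roughly optimize throughput per TCO dollar, not absolute performance.
    # The figures below are made-up placeholders, purely for illustration.
    def perf_per_tco_dollar(relative_perf: float, capex: float, opex: float) -> float:
        return relative_perf / (capex + opex)

    incumbent  = perf_per_tco_dollar(relative_perf=1.00, capex=30_000, opex=40_000)
    challenger = perf_per_tco_dollar(relative_perf=0.80, capex=12_000, opex=40_000)

    print(challenger / incumbent)  # > 1 means the slower card still wins on TCO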


Anecdata... one of the folks sitting in front of me at a session at GTC claimed to be an AMD employee who also claimed to have previously worked on CUDA. He seemed skeptical that AMD would pull this off. This is the sort of fun stuff you hear at a conference and aren't sure how much of it is just technical bragging/one-upmanship.


It doesn't. If NVIDIA doesn't work with SK Hynix to integrate PIM GDDR into their products, they are going to die, because processing-in-memory is already a thing and it is faster and more scalable than GPU-based inference.


AMD is even more hilariously overvalued, currently at a 360 P/E.


Good luck with that. Gemini Advanced is simply unusable right now... It's so bad it's hard to believe nobody has picked up on that yet.


Go to Gemini Advanced and try a common programming task in parallel with Claude and ChatGPT4. Within 2 prompts, Claude and ChatGPT4 will give you nice working code you can use as a basis, while Gemini Advanced will ignore your prompts, provide partial code, and quickly tell you it can do more, until you tell it exactly what you want. It will go from looking usable to being stuck in "I can do A or I can do B, you tell me what you prefer" hell in less than 2 or 3 prompts... Unusable. And I say that as a paying customer who will soon cancel the service.


You're not wrong, but it wouldn't be surprising if Google irons things out with a few more updates. The point is that it would be foolish to write off Gemini right now, and Gemini is totally independent of Nvidia's dominance.


A 72 P/E ratio while they have a mere monopoly on one of the most valuable resources in the world.

Competition WILL come. Maybe it's Groq, maybe AMD, maybe Cerebras. Maybe there's a stealth startup out there. Point is, they're going to be challenged soon.


You and what fab?

It's almost impossible to manufacture at scale with good yields, and leading-edge fab capacity is almost all bought out.


No moat.

Yes, CUDA, but CUDA is maaaaaybe a few tens of billions of USD deep and a few (more) years wide. When the rest of the industry saw compute as a vanity market, that was sufficient. Now it's a matter of time before margins go to, uhhh, less than 90%.

Does that make shorting a good idea? I wouldn't count on it. The market can always remain irrational longer than you can remain solvent.


I used to think that CUDA was something that would get commoditised real fast. How hard could building it be?

However, given that the nearest competitor AMD has basically given up on building a CUDA alternative, despite the fact that this could grow the company by literal trillions of dollars, I suspect the CUDA moat is much bigger than I give it credit for.


And MS and everyone else have plenty of interest in helping AMD commodify CUDA compatibility.


It's so weird that it's taking them so long, because as far as anyone can tell, AMD is mostly competent enough to make GPUs within some percentage points of Nvidia, and the "breadth of complexity" in what these things do at the end of the day is... rather underwhelming. The software stack may appear to be changing all the time, but it is also distinctly JavaScript-frontend-esque. Is there an insider who knows what the holdup is? Is AMD just averse to making a ton of money?

At this point AMD investors should be rebelling: it's pissing money out there but they are not getting wet, and management might have doubled the stock price, but that's little consolation if "order of magnitude" is what could have been.


> At this point AMD investors should be rebelling

Looking at the chart for $AMD over the past 5 years gives plenty of reasons to be happy, and no reason to rebel. A rational AMD investor should not be jonesing over Nvidia catching lightning in a bottle via crypto + AI. The Transformers paper was published a few months before AMD released Zen 1 chips - they did not have a lot of money for GPU R&D then.

The timing of the LLM-craze was very fortuitous for Nvidia.


AMD pays very little to its SW engineers (principal engineer in the SF Bay Area for ~$200k), so they can't attract top-end people in SW to implement what they need. Semi companies are used to paying HW engineers peanuts, and that doesn't work in SW.


It's kinda great for those of us wanting GPUs though. Nvidia might eventually decide it's not worth their time to bother with.


They also bought InfiniBand, which has played a big role in being the best at clustering, though Google's reconfigurable TPU topology stuff seems really cool too.

Tesla went after them with Dojo and has still ended up splurging on big H100 clusters.


Because their stock value is highly coupled with the crypto mining and AI crazes.

With the move from PoW to PoS for most crypto networks, in combination with the bust of '22, NVDA slid down in value.

Then OpenAI debuts ChatGPT in late 2022, and now the price is suddenly bumping up as the hype and rush for GPUs from companies of all types buys up their stock. Demand is far outpacing supply. Nvda can't keep up.

Thus, the share price is brittle. The GPU market is dominated by Nvidia. That can change, but so far OpenAI loves using Nvidia for some reason.


If you are a true believer that AI is not a craze, then the stock can only go up from here. If you think there is a chance that everyone gets bored of AI and moves on to some other fad that is not in Nvidia’s wheelhouse, then it’s probably down from here. I’m staying out of this bet: don’t have the stomach for it.


> If you think there is a chance that everyone gets bored of AI and moves on to some other fad that is not in Nvidia’s wheelhouse, then it’s probably down from here.

You may wish to look at history to see how things can work out: Cisco had a P/E ratio of 148 in 1999:

* https://www.dividendgrowthinvestor.com/2022/09/cisco-systems...

The share price tanked, but that does not mean that people got bored of the Internet and the need for routers and switches. QCOM had a P/E of 166: did people decide that mobile communications was a fad?

The connection between technological revolutions and financial bubbles dates back to (at least) Canal Mania:

* https://en.wikipedia.org/wiki/Canal_Mania

* https://en.wikipedia.org/wiki/Technological_Revolutions_and_...

It is possible for both AI to be a big thing and for NVDA to drop.



Neither of these were technologically based speculations.

> https://en.wikipedia.org/wiki/Tulip_mania

While widely used as an example, most of the well-known stories about this were actually made up, and it wasn't as bad as it is often made out to be.

Quinn and Turner, when they wrote about bubbles:

* https://www.goodreads.com/book/show/48989633-boom-and-bust

* https://old.reddit.com/r/AskHistorians/comments/i2wfsm/i_am_...

purposefully excluded it because their research found it wasn't actually a thing. (Though for the general public it can be an illustrative parable.)


There's another case for pessimism as well: cost. It's possible that many AI applications aren't worth the money required for the extra compute. AI-enhanced search comes to mind here: how is Microsoft going to monetize users of Copilot in Bing to justify the extra cost? Right now a lot of this stuff is heavily subsidized by VCs or the MSFTs of the world, but when it comes time to make a profit we'll see what actually sticks around.


Better question: why does a simple search for “What color is a Labrador retriever?” require any compute time when the answer can be cached? This is a simple example, but 90% of my searches don’t require an LLM to process a simple question.
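
(A toy sketch of what a cache in front of the model could look like; cached_answer below is a hypothetical stand-in for the expensive model call, not any real API:)

    from functools import lru_cache

    @lru_cache(maxsize=1_000_000)
    def cached_answer(normalized_query: str) -> str:
        """Hypothetical stand-in for the expensive model call, memoized per query."""
        print(f"(spending GPU time on: {normalized_query!r})")
        return "yellow, black, or chocolate"

    def answer(query: str) -> str:
        # Normalize so trivially different phrasings share a cache entry;
        # only genuinely novel queries pay for LLM compute.
        return cached_answer(query.strip().lower().rstrip("?").strip())

    answer("What color is a labrador retriever?")  # pays for compute once
    answer("what color is a Labrador Retriever ")  # served from the cache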


One time I came across a git repo that let me download a gigabyte of prime numbers and I thought to myself, is that more or less efficient than me running a program locally to generate a gigabyte of prime numbers?

The compute for a direct answer like that is fractions of a penny; it might be better to create answers on the fly than to store an index of every question anyone has asked (well, that's essentially what the weights are, after all).


It’s an interesting question. I assume they’re using accelerators, and the alternative is a disk or memory hit. It still seems expensive to me.

https://www.linkedin.com/pulse/rising-cost-llm-based-search-...


This seems true as far as incentives go. But how much of that cost will come down due to efficiencies driven by companies like NVIDIA? They seem well poised to benefit from a lot of the increased (non-hype) use of AI. It seems like we spent a decade or more of stalled CPU performance gains chasing better energy efficiency in the data center; the same story could play out here.


AI is obviously the future, though current iterations will probably die off at some point. The dot-com bubble ended with the internet being more pervasive than may even have been imagined at the time, but regardless, even the likes of Amazon's stock went bust before it recovered. Not a perfect comparison, given Nvidia has really good revenue growth, but the point still stands.


Because he missed the train. My guess.


Part of the problem is Airbnb, though. We have 'investors' owning multiple properties and operating Airbnbs against local laws, which does in fact decrease supply.


That's still easily solvable by getting the government out of the way so the market can build sufficient housing to meet demand.


I promise you this isn't true.


Worse or not, a meaningful majority of users will never even understand the alternate workflow you just described.


Then they will be outproduced. I, a graphic design noob, managed to create a bunch of social media posts for an upcoming project that included a completely custom 3D character in precisely the poses I wanted, all within an hour. Without gen AI, I would likely have spent hours digging through stock photos, modifying lighting and colors, and still not achieved what I wanted.

Not using the latest and greatest tools in your profession isn't really a flex.


This is a really poor article, and Business Insider is almost blinding to try to read, but anyhow... for the last few years we've been too accustomed to throwing money at problems, as opposed to thinking about the deeper-rooted systemic causes and fixing those. This is now rapidly changing, right across the board, and it's fantastic.

Spotify at this stage of growth has a complex problem: they can either raise prices to squeeze more juice (something they are doing), or figure out new ways to attract more paying people while also curbing churn.

I'm not surprised that they are downsizing; cutting operating costs to maintain a sustainable business under the pressures they are facing is the first place I'd start too. Perhaps I'd take on some more accountability than Ek - yes, that's for certain.


The problem is the music labels have Spotify by the balls. They don't give them good contracts, and Spotify is stuck with the pricing because the other music streaming services are almost commodities at this point.


This is why you don't also download the music from stories when you download stories - there's no such agreement with Spotify.


I would just like to call out that there are 11 men mentioned in this article and not a single woman - really shameful work by the NYT.


Who are the women you think should have been mentioned?


Tell me more about how SK is the orchestration engine for Microsoft Copilots. Which ones specifically?


Good theory. Sam will come lead AI at MSFT.


Unlikely to happen for contractual reasons.

