
Apple and NVIDIA have broadly similar revenue and income at this point, but Apple is shrinking while NVIDIA is still growing exponentially.



I'm not sure that NVIDIA's moat is all that large.

You have to hand it to them: they have executed superbly. But the underlying technology is well understood; you have the hyperscalers investing in their own silicon, and Intel/AMD are ramping up as well.


CUDA and its associated toolkits are their moat. Whether one or more of the remaining manufacturers can deliver a compelling substitute for those workloads remains to be seen, but OpenCL is far in the rear view at this point and ROCm hasn't made a difference yet.
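One concrete sign of how sticky that interface is: PyTorch's ROCm builds deliberately reuse the torch.cuda namespace, so "AMD support" in practice means impersonating NVIDIA's API surface rather than replacing it. A minimal sketch (nothing here beyond stock PyTorch):

    import torch

    # On both the CUDA and ROCm builds of PyTorch, the accelerator is
    # addressed through the torch.cuda namespace; the ROCm (HIP) build
    # masquerades as CUDA so existing code runs unchanged.
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # an NVIDIA or AMD part
        print(torch.version.cuda)             # CUDA toolkit version, or None on ROCm builds
        print(torch.version.hip)              # HIP version on ROCm builds, else None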


The hyperscalers can bypass CUDA if it's profitable. Most AI practitioners use Torch rather than CUDA directly, so it's effectively "under the hood". If some director at Meta figures they could reduce Meta's capex by $X billion per year by switching to in-house/AMD hardware, they'd make it happen and pocket a decent bonus for themselves and the teams involved.
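To make the "under the hood" point concrete: typical practitioner code only ever names a device string, so retargeting it to different silicon is, in principle, a one-line change, provided the vendor ships a solid PyTorch backend. A minimal sketch:

    import torch

    # Ordinary training code names a device once; every op below dispatches
    # to whatever backend-specific kernels sit behind that device.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = torch.nn.Linear(512, 10).to(device)
    x = torch.randn(32, 512, device=device)
    loss = model(x).sum()
    loss.backward()  # no CUDA API appears anywhere in user code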


Google have been trying that thesis out for years and yet TPUs aren't flying off the shelves in the way H100s are.

The basic problem they seem to have faced is that the hardware was over-specialized. The needs of models changed quite fast; CUDA was flexible enough to roll with it, TPUs weren't. Google went through several TPU generations in only a few years and yet don't seem to have built a serious edge over NVIDIA, even though that edge is exactly what sacrificing flexibility is supposed to buy you.

They also lost out because the whole TPU ecosystem is different from PyTorch, which is what won out. That's a risk if you do your own hardware: it ends up with a different software stack around it, and people may end up picking hardware based on software rather than the other way around.

So it's not that easy.


> Google have been trying that thesis out for years and yet TPUs aren't flying off the shelves in the way H100s are

Google does not sell TPUs to third parties at all[0]. Or do you mean that cloud customers prefer H100s to TPUs? If so, I'd appreciate more context, because I know Google uses TPUs internally and gets some revenue from them; I know a bunch of people who pay for Google Colab for TPU access to accelerate non-LLM training workloads.

> They also lost out because the whole TPU ecosystem is different from PyTorch, which is what won out. That's a risk if you do your own hardware.

This is barely related to hardware and mostly about TensorFlow losing the mindshare battle to Torch. Torch works fine with TPUs, as anyone who's used a Colab notebook might tell you.

0. Except their Coral SBCs/accelerators, which are modest and targeted at inference.
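For what it's worth, Torch-on-TPU goes through the torch_xla package (preinstalled on Colab TPU runtimes), and the resulting notebook code reads almost like CUDA-targeted Torch. A minimal sketch, assuming a TPU runtime:

    import torch
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()  # the TPU, playing the role "cuda" usually does

    model = torch.nn.Linear(512, 10).to(device)
    x = torch.randn(32, 512, device=device)
    loss = model(x).sum()
    loss.backward()
    xm.mark_step()  # flush the lazily traced XLA graph so it actually runs on the TPU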


Yeah, I meant customers renting them in Google Cloud.


They don't. Apple did $382 billion in revenue over the last year vs. NVIDIA's $61 billion, and $135 billion vs. $34.5 billion in income.


Yep, accidentally looked at quarterly numbers for Apple.



