
I'm pretty sure TPUs are not based on ARM.



I'm not claiming they are based on ARM - and that in itself is a moot point.

I'm pointing out that Google Cloud primarily hosts and bills your ML code on TPUs. Today, TPU v4 pods run at exascale.

So the equivalent "scared" behavior already exists on Google for both Intel and Nvidia.


GCP Cloud TPUs are composed of multiple ASICs underneath. Saying "1 TPU is faster than 1 GPU" is meaningless, as a Cloud TPU is a logical construct rather than a single chip.

Instead, $/ML-training-job, or time to complete training given a budget X, is likely a better measure.
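The $/training-job comparison above can be sketched in a few lines. All prices, throughputs, and the `cost_per_job` helper here are hypothetical examples, not real GCP pricing:

```python
# Hypothetical sketch: compare accelerators by total cost to finish one
# training job, rather than by per-chip speed. Numbers are made up.

def cost_per_job(hourly_price_usd: float, hours_to_converge: float) -> float:
    """Total dollars to complete one training run at a given hourly rate."""
    return hourly_price_usd * hours_to_converge

# Illustrative figures only: a pricier accelerator that converges faster
# can still win on cost per completed job.
tpu_cost = cost_per_job(hourly_price_usd=8.00, hours_to_converge=10.0)
gpu_cost = cost_per_job(hourly_price_usd=3.00, hours_to_converge=30.0)

cheaper = "TPU" if tpu_cost < gpu_cost else "GPU"
print(f"TPU job: ${tpu_cost:.2f}, GPU job: ${gpu_cost:.2f} -> {cheaper} wins on $/job")
```

The point is that the unit of comparison is a completed job, so a slower-but-cheaper or faster-but-pricier offering can each come out ahead depending on convergence time.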



