We already have a 4-trillion-transistor "GPU" in the Cerebras WSE-3 (wafer-scale engine), used in Cerebras' data centers.

https://www.youtube.com/watch?v=f4Dly8I8lMY




How many regular-sized GPU dies (say, a 4080 or an Nvidia L4) can be cut out of a full-sized wafer? I suppose that's what the OP means by reaching the integration density of a 1-trillion-transistor GPU.


I think the whole premise of TFA is flawed, as there are already chips with way more than a trillion transistors (as the GP points out).

Arguing about the size limit at which something still counts as a GPU is a bit like bikeshedding.

As to why a supercomputer wouldn't be considered for this: because it's not a single chip.


Here's some discussion of H100 yields per wafer. I'd assume a 4080 die is smaller? Regardless, the calculator is linked there.

https://news.ycombinator.com/item?id=38588876


At close to the reticle limit of ~810 mm², you could fit around 65 dies.
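
For a rough sanity check, here's the standard first-order die-per-wafer approximation, as a sketch in Python (it ignores defect yield, scribe lanes, and edge exclusion; the 300 mm wafer diameter is the usual assumption):

    import math

    def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
        # Gross wafer area divided by die area, minus a correction term
        # for the partial dies lost along the wafer's circular edge.
        gross = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
        edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
        return int(gross - edge_loss)

    # ~810 mm² reticle-limit die on a standard 300 mm wafer
    print(dies_per_wafer(300, 810))  # -> 63, consistent with "around 65"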


IIRC it's somewhere around 20 if you go as large as possible.


Yeah, that doesn't really count. It's the equivalent of maybe 20 GPUs and costs 200x as much.


Well, yeah, that's what a few trillion transistors look like with today's tech. No doubt it will get smaller/cheaper in the future (although EUV is expensive, so maybe prices won't keep dropping as much as in the past). The point is that even with today's tech, a (4-)trillion-transistor chip can already be built.


It's meant to compete with Nvidia's DGX systems, which have 8 GPUs per node.


Yes exactly. It's not comparable to one GPU.
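
For scale, the raw transistor arithmetic (a sketch assuming the widely reported ~80 billion transistors for an H100-class GPU; the 4 trillion figure is from the top comment):

    wse3_transistors = 4e12      # Cerebras WSE-3, per the top comment
    h100_transistors = 80e9      # widely reported figure for an H100-class GPU
    print(wse3_transistors / h100_transistors)  # -> 50.0

So by transistor count it's roughly 50 H100s, while the "~20 GPUs" figure above refers to delivered performance; part of the gap is that a large share of the wafer goes to on-chip SRAM and interconnect rather than compute.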



