
Consumer-grade GPUs like Nvidia's 3090 and 4090 max out at 24 GB VRAM, and those cost $1000-2000 each. You can get more VRAM, but only on enterprise GPUs, which easily run five figures, starting at $30K a pop.

Per this calculator, for training, only gpt2-large and gpt2-medium would work with those two top-of-the-line GPUs.

For inference it's certainly a bit better: only Llama-2-70b-hf and Llama-2-13b-hf don't fit in that much VRAM; all the other models do.
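To make the back-of-the-envelope math concrete, here's a rough Python sketch of the usual rule of thumb (the parameter counts and bytes-per-parameter figures are my own approximations, not numbers from the calculator): fp16 inference needs roughly 2 bytes per parameter, while full mixed-precision Adam training needs on the order of 16 bytes per parameter before you even count activations.

    # Rough VRAM rule of thumb (illustrative only; real usage also depends on
    # activations, sequence length, batch size, and framework overhead).
    def vram_gb(n_params: float, training: bool = False) -> float:
        # ~2 bytes/param for fp16 inference; ~16 bytes/param for mixed-precision
        # Adam training (fp16 weights + grads, fp32 master weights + two moments).
        bytes_per_param = 16 if training else 2
        return n_params * bytes_per_param / 1024**3

    # Approximate parameter counts for models mentioned in the thread.
    for name, n in [("gpt2-medium", 355e6), ("gpt2-large", 774e6),
                    ("Llama-2-13b-hf", 13e9), ("Llama-2-70b-hf", 70e9)]:
        print(f"{name:15s} inference ~{vram_gb(n):6.1f} GB, "
              f"training ~{vram_gb(n, training=True):7.1f} GB")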




Nvidia’s workstation cards are available with more RAM than the consumer cards, at a lower price than the datacenter cards. RTX 6000 Ada has 48 GB VRAM and retails for $6800, and RTX 5000 Ada has 32 GB VRAM and retails for $4000[1].

Very large models have to be distributed across multiple GPUs though, even if you’re using datacenter chips like H100s.

[1] https://store.nvidia.com/en-us/nvidia-rtx/store/
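On the multi-GPU point: the tooling handles most of the sharding for you these days. A minimal sketch with Hugging Face Transformers plus Accelerate (the model id is just an example from the thread; this assumes the weights are available and the combined VRAM is sufficient):

    # Minimal sketch: let Accelerate split the layers across all visible GPUs
    # (spilling to CPU RAM if they still don't fit). Requires `transformers`
    # and `accelerate`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-13b-hf"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # place layers on GPU 0, GPU 1, ..., then CPU
        torch_dtype="auto",  # keep the checkpoint's native precision (fp16)
    )

    inputs = tok("Consumer GPUs for LLM inference are", return_tensors="pt")
    out = model.generate(**inputs.to(model.device), max_new_tokens=50)
    print(tok.decode(out[0], skip_special_tokens=True))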


Other than power consumption, is there any reason to prefer a single workstation card over multiple consumer cards then?

A single $6800 RTX 6000 Ada with 48 GB of VRAM vs. six 7900 XTXs with a combined 144 GB of VRAM honestly makes this seem like a no-brainer to me.


You can only fit 1-2 graphics cards in a “normal” ATX case (each card takes 2-3 “slots”). If you want 4 cards on one machine, you need a bigger/more expensive motherboard, case, PSU, etc. I haven’t personally seen anyone put 6 cards in a workstation.


In a water-cooled config the cards only take 1 slot. I've got two 3090s and am buying another two shortly. Preemptively upgraded the power to 220 V, found a 2 kW PSU, and installed a dedicated mini split. I'm also undervolting the cards to keep power and heat down, because even 2000 W is not enough to run four cards plus a server-grade CPU without tripping the breaker. When you start accumulating GPUs you run into all kinds of thermal and power problems for the room, too.
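Power capping can also be scripted. A rough sketch with the NVML Python bindings (nvidia-ml-py): strictly this is a power limit rather than a true undervolt, but it has a similar effect on heat. The 250 W target is just an example value, and actually setting the limit requires root.

    # Rough sketch: cap every GPU's board power via NVML (pip install nvidia-ml-py).
    # Setting the limit needs root; the 250 W target is only an example value.
    import pynvml

    TARGET_WATTS = 250

    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        h = pynvml.nvmlDeviceGetHandleByIndex(i)
        lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(h)  # in mW
        target = min(max(TARGET_WATTS * 1000, lo), hi)  # clamp to allowed range
        pynvml.nvmlDeviceSetPowerManagementLimit(h, target)
        print(f"GPU {i}: power limit set to {target / 1000:.0f} W "
              f"(allowed {lo / 1000:.0f}-{hi / 1000:.0f} W)")
    pynvml.nvmlShutdown()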


This is impressive.

I was fortunate enough to scoop up a bunch of Gigabyte RTX 3090 Turbos. Cheap used eight-slot Supermicro (or whatever), a cabling kit, four 3090s, boot.

Those were the days!


Sincere question: Is installing and running a mini split actually cheaper than racking them in a colo, or paying for time on one of the GPU cloud providers?

Regardless, I can understand the hobby value of running that kind of rig at home.


I personally haven’t done the calculation. I have rented colo space before, and they're usually quite stingy on power. The other thing is, there’s a certain element to having GPUs around 24/7/365 to play with that I feel is fundamentally different from running on a cloud provider: you’re not stressing out about every hour they're running. I think in the long run (2yr+) it will be cheaper, and then I can swap in the latest and greatest GPU without any additional infrastructure cost.


You have to pass activations between GPUs for large models that don't fit in a single card's VRAM, which often ends up slower. Also, the tooling around AMD GPUs is still poor in comparison.


Used 3090s are going for ~$600 these days (at least in Europe) thanks to the crypto mining crash. Building a workstation with two of them is fairly easy and gets you 48 GB of VRAM; with four it's a bit more tricky, but still doable and affordable IMO.


Recently bought a 24 GB 3090 for $700 in the US. Used but never tweaked; it has run stable for 6 months despite heavy workloads.

Nvidia's play seems obvious. Game graphics don’t move that fast these days. A used market flush with 3090s and below is fine by them while they focus on extracting top dollar from fast-moving AI researchers/VCs.


You can easily use pipeline parallelism though, especially if you have 8-16 lanes of PCIe 4.0 with direct P2P access between the cards.

IIRC you want micro-batching, though, to overlap the pipeline stages (rough sketch below).
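A bare-bones illustration of the micro-batching idea in plain PyTorch: the two stages, layer sizes, and device assignments are made up for the example, and real pipeline schedules (GPipe, 1F1B) are considerably more involved.

    # Toy pipeline parallelism: split a model across two GPUs and feed it
    # micro-batches. Because CUDA kernel launches are asynchronous, GPU 0 can
    # start on the next micro-batch while GPU 1 is still busy with the previous
    # one; real frameworks use proper schedules (GPipe/1F1B) for training.
    import torch
    import torch.nn as nn

    stage0 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to("cuda:0")
    stage1 = nn.Sequential(nn.Linear(4096, 1024), nn.ReLU()).to("cuda:1")

    def pipelined_forward(batch: torch.Tensor, n_micro: int = 4) -> torch.Tensor:
        outputs = []
        for micro in batch.chunk(n_micro):           # split into micro-batches
            h = stage0(micro.to("cuda:0"))           # stage 0 on GPU 0
            h = h.to("cuda:1", non_blocking=True)    # ship activations over PCIe/P2P
            outputs.append(stage1(h))                # stage 1 on GPU 1
        return torch.cat(outputs)

    x = torch.randn(32, 1024)
    print(pipelined_forward(x).shape)  # torch.Size([32, 1024])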


I haven't a clue how they compare, but a Mac Studio with an M2 Ultra can be configured with 192 GB of unified RAM for $5700 (PS: not a Mac fan, just curious).



