
And continuously training models is very hard, particularly in an RL environment. Even then, over the long term, cloud services are possibly a more cost-effective solution than hosting your own small cluster (a few tightly packed racks).
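To make that concrete, here's a rough back-of-envelope comparison. Every number here is a made-up assumption for illustration (not a quote from any provider), but it shows why utilization is the deciding variable:

    # Back-of-envelope: renting cloud GPUs vs. owning a small cluster.
    # All prices, sizes, and lifetimes below are illustrative assumptions.
    cloud_rate = 2.00          # assumed $/GPU-hour for on-demand rental
    gpus = 16                  # roughly a few tightly packed racks' worth
    utilization = 0.5          # fraction of hours the GPUs are actually busy

    cluster_capex = 400_000    # assumed purchase price of the cluster ($)
    power_cooling = 60_000     # assumed yearly power/cooling/hosting ($)
    lifetime_years = 3         # assumed useful life before hardware ages out

    hours = 24 * 365 * lifetime_years
    cloud_cost = cloud_rate * gpus * hours * utilization  # pay only for use
    own_cost = cluster_capex + power_cooling * lifetime_years

    print(f"cloud: ${cloud_cost:,.0f} over {lifetime_years} years")
    print(f"owned: ${own_cost:,.0f} over {lifetime_years} years")

With these assumed numbers, cloud comes out around $420k vs. $580k for owned hardware at 50% utilization; push utilization toward 100% and owning wins instead.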



Still, I don't mind the excuse to get a 128-core dual EPYC with a terabyte of RAM and wide PCIe flash storage.

But I would rather not have to deal with proprietary GPU drivers.



