Hacker News

I wonder if they have a clear hardware separation between each of the API, ChatGPT, their lower-scale experiments, and their large-scale (e.g. GPT-5) training hardware. Or is everything just one big pool of hardware that gets dynamically allocated to jobs depending on demand?

Hardware demand is so high that idle GPUs are a massive waste, but you also want separation between dev, test, and prod environments, so it's not obvious what to do.
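One common way to get both high utilization and separation is priority-based preemptive scheduling over a shared pool: low-priority experiments soak up idle GPUs and get evicted when production or large-scale training needs capacity. This is a minimal hypothetical sketch of that idea (not a description of OpenAI's actual infrastructure; all names and the priority scheme are invented for illustration):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Job:
    name: str
    priority: int  # higher = more important (e.g. prod serving > experiments)
    gpus: int

class GpuPool:
    """One shared pool: low-priority jobs fill idle GPUs but are
    preempted when higher-priority work arrives."""

    def __init__(self, total_gpus: int):
        self.total = total_gpus
        self.running: List[Job] = []

    def free(self) -> int:
        return self.total - sum(j.gpus for j in self.running)

    def submit(self, job: Job) -> List[Job]:
        """Schedule `job`, preempting lower-priority jobs if needed.
        Returns the list of preempted jobs (to be requeued)."""
        preempted = []
        # Evict the lowest-priority victims first until the new job fits.
        victims = sorted((j for j in self.running if j.priority < job.priority),
                         key=lambda j: j.priority)
        while self.free() < job.gpus and victims:
            victim = victims.pop(0)
            self.running.remove(victim)
            preempted.append(victim)
        if self.free() >= job.gpus:
            self.running.append(job)
        return preempted

pool = GpuPool(8)
pool.submit(Job("experiment", priority=0, gpus=8))   # fills the idle pool
evicted = pool.submit(Job("prod-serving", priority=2, gpus=4))
# The experiment is preempted to make room; 4 GPUs remain free.
```

In practice this is roughly what Kubernetes priority classes with preemption, or Slurm preemptible partitions, give you: logical dev/test/prod separation enforced by the scheduler rather than by physically partitioned clusters.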



