
Having worked with quants before, the reality is that however big your compute farm, they will want more. I think this is what's going on at these large AI companies: they are simply utilising every resource they have.

Of course they could do with more GPUs. If you gave them 1,000x their current number, they'd think up ways of utilising all of them, and have the same demand for more. This is how it should be.




There must be some point at which the cost of the endeavor becomes greater than the profits to be made.


Absolutely! But that's for management to work through; the engineers just want more :)


From what I've seen, utilisation is still pretty poor: most companies use their GPUs inefficiently and could get away with fewer. Instead of looking at how to optimise their workflow, they just slap more GPUs on.


I've noticed the same. Very low utilization overall, but everything gets used at peak every few weeks. For many companies, paying for extra GPUs to unblock the velocity of innovation is worth it: the benefit to top-line revenue far exceeds the GPU cost.


Are you talking poor utilization in a "we have a bunch of GPUs sitting idle" sense, or poor utilization from a performance standpoint (can't keep the GPUs fed with data, kind of thing)?


Kinda both, honestly, but more the fact that the code isn't great at optimising GPU use at run time: for instance, no batching, or not realising the CPU is actually the bottleneck. See the sketch below.
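
To make the batching point concrete, here's a minimal sketch in PyTorch. Everything in it is illustrative: `model`, `requests`, and the sizes are made up, and it assumes a CUDA device is available. The point is just that looping over requests one at a time leaves the GPU waiting on the CPU and per-call overhead, while stacking pending requests into a single forward pass keeps it fed.

    import torch

    # Stand-ins: a toy model and a queue of pending requests (illustrative only).
    model = torch.nn.Linear(512, 512).cuda().eval()
    requests = [torch.randn(512) for _ in range(256)]

    with torch.no_grad():
        # Unbatched: one tiny kernel launch per request; the GPU sits
        # mostly idle while the CPU dispatches each call.
        slow = [model(x.cuda().unsqueeze(0)) for x in requests]

        # Batched: stack the pending requests and run a single forward pass.
        batch = torch.stack(requests).cuda()  # shape (256, 512)
        fast = model(batch)                   # one launch, far higher utilisation

The CPU-bottleneck half is usually the data pipeline rather than the model itself; in PyTorch that's typically a matter of DataLoader settings like num_workers and pin_memory, not more GPUs.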



