
"Like many organizations we talk to, you probably have large amounts of data that you want to use to train machine learning models."

I understand Google's bias here, but doesn't it usually make more sense to bring the programs/models to where the data already is?




GCP has proprietary machine learning accelerator hardware that you can't buy.


The problem isn't the programs; it's the hardware. GCP has specialized, scalable hardware for building models.


It is extremely expensive to set up and maintain infrastructure to process big data, especially once you get into the petabytes.



