> migrations are extraordinarily expensive in Tech

Is that really the case with deep learning? You write a new model architecture in a single file, and you use a new acceleration card by changing the device name from 'cuda' to 'mygpu' in your preferred DL framework (such as PyTorch). Obtaining the dataset doesn't require NVIDIA. You train on NVIDIA to get the model parameters, then run inference on whatever platform you want. Once an NVIDIA competitor builds a training framework that works out of the box, how would migrations be expensive?
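To make the "change the device name" claim concrete, here is a minimal PyTorch sketch of device-agnostic code. The `"mygpu"` backend name is hypothetical, standing in for a competitor's device string; the sketch falls back to CPU so it runs anywhere:

```python
import torch

def pick_device() -> torch.device:
    # "mygpu" is a hypothetical vendor backend; in real code this
    # would be e.g. a PrivateUse1 backend registered by the vendor.
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()

# The same model definition runs unchanged on any backend;
# only the device string differs.
model = torch.nn.Linear(8, 2).to(device)
x = torch.randn(4, 8, device=device)
y = model(x)
print(y.shape)  # torch.Size([4, 2])
```

This is the migration story the comment describes: the model file and training loop are backend-neutral, and the device string is the only switch, assuming the new backend implements the operators the model uses.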

"Builds a training framework that works out of the box."

This is the hard part. Nvidia has built thousands of optimizations into CUDA and cuDNN. They contribute to all of the major frameworks and do substantial research internally.

It's very difficult to replicate an ecosystem built by hundreds to thousands of individual contributors over 10+ years. In theory you could use Google or AMD offerings for DL, but for unmysterious reasons almost no one does.
