Hacker News

> The ASIC processors Google use are for inference, not training. Having efficient inference is great, but training is what uses up most of the compute power.

I think that was the GP's point: the training is done on GPUs, the inference is done on ASICs, and neither step involves Intel. That's what alarms them. Dedicated ML instructions might help here, but that seems iffy to me. ML and neural networks are still a fast-evolving field, and instructions that help today could be useless next year. ASICs can be thrown out and replaced, but any instructions Intel adds to its ISA will be there forever.
