
No, their approach changes the fundamental access pattern into something that is anathema to GPU and TPU architectures.

In ELI5 or layman's terms: current GPU/TPU accelerators are specialized in doing very regular and predictable calculations very fast. In deep learning, many of those predictable calculations are not actually needed, like multiplying by zero. This approach exploits that and performs only the minimal necessary calculations, but that makes the work very irregular and unpredictable. Regular CPUs are better suited to that kind of irregular computation, because most other general-purpose software behaves the same way.
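To make that concrete, here's a rough sketch in plain NumPy (hypothetical helper names, not the actual method under discussion): a dense matrix-vector product does every multiply in a fixed, predictable pattern, while a sparse one skips the zero weights but has to chase data-dependent indices.

  # Sketch contrasting dense vs. sparse mat-vec access patterns.
  import numpy as np

  def dense_matvec(W, x):
      # Every multiply happens, even against zero weights. The access
      # pattern is fixed and known in advance: what GPUs/TPUs are built for.
      return W @ x

  def sparse_matvec(rows, cols, vals, x, n_rows):
      # Only nonzero weights are touched, so far fewer multiplies, but
      # which entry of x gets read depends on the data in `cols`, so the
      # memory accesses are irregular: much friendlier to a CPU.
      y = np.zeros(n_rows)
      for r, c, v in zip(rows, cols, vals):
          y[r] += v * x[c]
      return y

  # A ~90%-sparse weight matrix: sparse_matvec does ~10x fewer multiplies,
  # but each one involves an indexed, unpredictable load.
  W = np.random.rand(64, 64) * (np.random.rand(64, 64) > 0.9)
  rows, cols = np.nonzero(W)
  vals = W[rows, cols]
  x = np.random.rand(64)
  assert np.allclose(dense_matvec(W, x), sparse_matvec(rows, cols, vals, x, 64))

The dense version wastes arithmetic on zeros but every memory access is predictable, which is what GPU/TPU pipelines are optimized around; the sparse version does less arithmetic, but each load of x[c] depends on the data, and CPUs tolerate that kind of irregularity much better.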




In layman's terms, that sounds like the network could use normalization.


Simplify "normalization" if you want layman's terms.



