Hacker News

If I'm following correctly, does this mean that with this change, combined with quantization, we could see models that are 5% of the size (on disk and in memory) but almost identical in output?



The values selected were arbitrary. The size reduction will be 32 bits / 8 bits, so it will be 4 times smaller.
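To illustrate the 4x figure, here's a minimal sketch of symmetric per-tensor int8 quantization (a hypothetical example, not necessarily the scheme from the linked change): each float32 weight is mapped to an int8, cutting storage from 32 bits to 8 bits per value.

```python
import numpy as np

# Pretend these are a layer's float32 weights.
weights = np.random.randn(1024).astype(np.float32)

# One scale per tensor: map the largest magnitude to 127.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Approximate reconstruction at inference time.
dequant = q.astype(np.float32) * scale

print(weights.nbytes // q.nbytes)   # -> 4 (32 bits vs 8 bits per value)
print(np.abs(weights - dequant).max() < scale)  # error bounded by the step size
```

The output is only "almost identical" because each weight is rounded to the nearest quantization step; the per-weight error is bounded by the step size, which is why accuracy usually degrades only slightly.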



