
How can `x -> -inf` occur in the first place when nearly everything is within [-2, 2], the operation is just a dot product, and there's normalization before that too?



The "nearly" in your comment is exactly what obscures the issue as presented.

Enough weights fall outside that "nearly" that we need more bits per weight to cover those edge cases. If we could delete the "nearly", we would need fewer bits (and smaller models).
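To see why a few outliers cost extra bits, here is a minimal sketch assuming naive symmetric uniform quantization (not any particular library's scheme); the values and the helper name are illustrative, not from the article:

```python
# A single outlier weight forces a wider quantization range and therefore
# coarser steps for all of the "nearly [-2, 2]" weights.
import numpy as np

def quantize_symmetric(w, bits=4):
    # The quantization step is set by the largest magnitude in the tensor.
    scale = np.abs(w).max() / (2**(bits - 1) - 1)
    return np.round(w / scale) * scale

bulk = np.array([-1.9, -0.7, 0.03, 0.8, 1.5])
with_outlier = np.append(bulk, 40.0)   # one weight far outside [-2, 2]

print(quantize_symmetric(bulk))          # step ~0.27: bulk values stay distinguishable
print(quantize_symmetric(with_outlier))  # step ~5.7: the entire bulk rounds to 0
```

With the outlier present, either the bulk collapses to zero at 4 bits or you spend more bits per weight to keep it resolvable, which is the trade-off described above.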


So the concern is not that x -> -inf because of the values themselves, but that it happens due to numerical issues arising from lower precision?


The idea is that if your range of values is small enough, you need fewer bits to distinguish between meaningfully different values. The problem is that exp(x) << exp(y) for a sufficiently wide range [x, y], so when you normalize in the softmax and then quantize, you don't get the fidelity you need and too much information is lost between layers. The proposed solution is to modify the softmax step slightly so that x and y end up close enough to zero that exp(x) and exp(y) are comparable, making more compact quantizations useful instead of useless.
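To make the fidelity-loss point concrete, here is a minimal sketch assuming a standard softmax plus a naive uniform 8-bit quantizer (the proposed modified softmax from the article is not reproduced here, and the logit values are made up for illustration):

```python
# A wide logit range makes the small softmax outputs quantize to exactly 0,
# while a narrow range keeps them distinguishable.
import numpy as np

def softmax(x):
    z = x - x.max()            # usual max-subtraction for numerical stability
    e = np.exp(z)
    return e / e.sum()

def quantize_uniform(p, bits=8):
    # Snap probabilities in [0, 1] onto 2**bits - 1 evenly spaced levels.
    levels = 2**bits - 1
    return np.round(p * levels) / levels

wide   = np.array([12.0, 0.5, 0.3, 0.1])   # one logit dominates: huge exp gap
narrow = np.array([ 1.2, 0.5, 0.3, 0.1])   # logits close together

for name, logits in [("wide", wide), ("narrow", narrow)]:
    print(name, quantize_uniform(softmax(logits)))
    # "wide" prints [1. 0. 0. 0.]: everything but the winner is lost after
    # quantization, which is the information loss between layers described above.
```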



