Hacker News

3.999... vs 4 is a bad example, because 4 is exactly representable: if your calculation gives you a result of 3.999... [garbage digits omitted], then there was roundoff error along the way and your computed result is strictly not equal to 4.
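For instance (a minimal double-precision sketch in Python; any chain of rounded operations would do), a value that "should" be 4 but was computed through ten rounded additions lands strictly below 4, and the trailing digits are real information about that roundoff:

```python
# Each of the ten additions of the double nearest 0.1 is rounded,
# so the total drifts away from the exact answer.
x = 4 * sum([0.1] * 10)
print(x == 4)   # False: the computed result is strictly below 4
print(x)        # prints a value just under 4.0
```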

A better example would be something like 1/5, which for almost every purpose is better printed as 0.2 instead of 0.20000000298023223876953125. The latter is absurdly long and gives the misleading impression of precision far in excess of the ~7 decimal digits that single-precision floats are capable of representing.
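You can reproduce that exact string with Python's standard library by round-tripping 0.2 through single precision and printing the stored bits exactly:

```python
import struct
from decimal import Decimal

# Round-trip 0.2 through an IEEE 754 single-precision float,
# then print the exact decimal value of the bits that were stored.
f32 = struct.unpack('f', struct.pack('f', 0.2))[0]
print(Decimal(f32))  # 0.20000000298023223876953125
```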

The difference between 0.2 and that long string has a slightly different cause than the difference between 3.999... and 4. The latter is usually due to information loss during calculations, and can often be reduced (sometimes avoided entirely) by careful ordering of operations and the right rounding modes. But 0.2 can never be exactly represented, even as the input to a chain of calculations. The loss of accuracy is an unavoidable first step of trying to put 0.2 into a binary FPU register.
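The same is true in double precision: the rounding happens at parse time, before any arithmetic. A quick check in Python shows the exact value the literal 0.2 becomes:

```python
from decimal import Decimal
from fractions import Fraction

# The decimal literal 0.2 is rounded the moment it becomes a double:
print(Decimal(0.2))   # 0.200000000000000011102230246251565404236316680908203125
print(Fraction(0.2))  # 3602879701896397/18014398509481984, the nearest dyadic rational
```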

Students should learn about the pitfalls of floating point arithmetic. But preferably in a way that doesn't leave them with the impression that it is a non-deterministic process that always leaves you with trailing garbage that needs to be ignored.
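A small demonstration of that determinism: the "garbage" digits are exactly reproducible, run after run, not random noise:

```python
# IEEE 754 arithmetic is deterministic: the same operation on the same
# inputs yields a bit-identical result every time.
results = {0.1 + 0.2 for _ in range(1000)}
print(len(results))      # 1: a single distinct value across 1000 evaluations
print(repr(0.1 + 0.2))   # 0.30000000000000004, the same "garbage" every run
```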




I sort of like the idea of avoiding exactly representable floats (you typed 4 but maybe you get 4-ε) to remind programmers that errors creep into most expressions and compound, and that tight error bounds require numerical error analysis.
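A small illustration of compounding, and of how better-organized arithmetic tightens the bound: naive left-to-right summation rounds at every step, while math.fsum tracks the error terms and rounds only once at the end:

```python
import math

# Naive summation accumulates a rounding error at each of 1000 additions;
# math.fsum computes the exact sum of the doubles and rounds once.
naive = sum([0.1] * 1000)
exact = math.fsum([0.1] * 1000)
print(naive == 100.0)  # False
print(exact == 100.0)  # True
```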


I wish we had float hardware that tracked epsilon for you. Each float would carry two values, with the epsilon updated on each operation.

Then a comparison would take epsilon into account instead of requiring an exact bit match.
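That idea can be sketched in software (a hypothetical EpsFloat class; the half-ulp-per-operation error model is a simplification, not a rigorous bound):

```python
import sys

EPS = sys.float_info.epsilon  # 2**-52 for IEEE 754 doubles

class EpsFloat:
    """A float paired with a running error bound, updated on every
    operation -- a software sketch of the hardware idea above."""

    def __init__(self, value, eps=None):
        self.value = float(value)
        # A decimal literal may already be off by up to half an ulp.
        self.eps = abs(self.value) * EPS / 2 if eps is None else eps

    def __add__(self, other):
        v = self.value + other.value
        # Propagate both operands' bounds, plus this addition's own rounding.
        return EpsFloat(v, self.eps + other.eps + abs(v) * EPS / 2)

    def __mul__(self, other):
        v = self.value * other.value
        e = (abs(self.value) * other.eps + abs(other.value) * self.eps
             + abs(v) * EPS / 2)
        return EpsFloat(v, e)

    def __eq__(self, other):
        # Comparison takes the accumulated bound into account,
        # instead of demanding a bit-for-bit match.
        return abs(self.value - other.value) <= self.eps + other.eps
```

With this, EpsFloat(0.1) + EpsFloat(0.2) compares equal to EpsFloat(0.3), even though 0.1 + 0.2 != 0.3 for raw doubles.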



