Whenever I use floating point for anything serious, I always feel tempted to use modules like decimal or bigfloat from the start, just to eliminate potential problems that can (and eventually will) crop up.
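For example (a minimal sketch using the stdlib decimal module; bigfloat wraps MPFR and has a similar feel, but with binary rather than decimal arithmetic):

    from decimal import Decimal, getcontext

    # Binary doubles can't represent 0.1 or 0.2 exactly, so the error is visible:
    print(0.1 + 0.2)                                    # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)                             # False

    # decimal works in base 10 with a configurable precision:
    getcontext().prec = 50
    print(Decimal("0.1") + Decimal("0.2"))              # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True

The usual argument against doing that from the start is speed: it's software arithmetic rather than hardware floats.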



I worked at a small company that had a mathematician on staff. When he wanted to retire, he hired a replacement.

The new hire made some comments that I thought sounded irresponsible, such as: almost nobody does numerical analysis; instead they just use double precision, because even with catastrophic cancellation there are usually enough bits left over to give a useful answer. But apparently “use a higher precision” really is a common answer in scientific programming, and it’s much easier to get right than numerical analysis (https://www.davidhbailey.com/dhbtalks/dhb-expm-sun.pdf, first and last slides).
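A toy example of that trade-off (just a sketch, assuming mpmath is installed as the higher-precision stand-in; Bailey's own work uses double-double/quad-double arithmetic):

    import math
    from mpmath import mp, mpf, cos

    x = 1e-8
    # (1 - cos(x)) / x**2 should be about 0.5, but in double precision
    # cos(1e-8) rounds to exactly 1.0 and the cancellation destroys every digit:
    print((1 - math.cos(x)) / x**2)        # 0.0

    # "Use a higher precision": same naive formula, 50 significant digits:
    mp.dps = 50
    xhp = mpf("1e-8")
    print((1 - cos(xhp)) / xhp**2)         # ~0.5, plenty of digits survive the cancellation

    # The numerical-analysis answer: rewrite to avoid the subtraction entirely:
    print(2 * math.sin(x / 2)**2 / x**2)   # 0.5, back in plain double precision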

If you can afford it, that is probably the best approach. Cook’s article specifically mentions that the issue crops up with single-precision floating point (which people might want for SIMD or GPU reasons, or on mobile to save power). In other words, Cook is talking about cases where you can’t afford extended precision.
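A quick illustration of how much sooner single precision hits that wall (numpy used here just to get a float32 type; this isn't Cook's example):

    import numpy as np

    # 1 - cos(x) is about 5e-9 for x = 1e-4.
    x = 1e-4

    # In double precision the cancellation still leaves about half the digits correct:
    print(np.float64(1.0) - np.cos(np.float64(x)))     # ~5.0e-09

    # In single precision cos(1e-4) rounds to exactly 1.0, so nothing survives:
    print(np.float32(1.0) - np.cos(np.float32(x)))     # 0.0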



