
Most of the time it doesn’t matter. Using double (or float) by default is premature optimization. When you start dealing with millions or billions of values, sure, reach for the annoying, counterintuitive, footgun-laden numeric types. But that’s not most code.
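For concreteness, a minimal Java sketch (assuming the Java/BigDecimal setting the rest of the thread is using) of the kind of footgun being referred to:

    import java.math.BigDecimal;

    public class FloatFootgun {
        public static void main(String[] args) {
            // double: binary floating point cannot represent 0.1 or 0.2 exactly
            System.out.println(0.1 + 0.2);        // 0.30000000000000004
            System.out.println(0.1 + 0.2 == 0.3); // false

            // BigDecimal built from strings keeps the decimal values exact
            BigDecimal a = new BigDecimal("0.1");
            BigDecimal b = new BigDecimal("0.2");
            System.out.println(a.add(b));                                        // 0.3
            System.out.println(a.add(b).compareTo(new BigDecimal("0.3")) == 0);  // true
        }
    }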



Well, with (tens of) thousands of doubles vs. BigDecimals (think marketing data), any operations over them show a massive difference (rough sketch below).

The “premature optimization” label is beyond uncalled for. Learning to use floating-point types is something most developers should do. No need for the weasel words, either (“annoying”, “counterintuitive”).
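For reference, a rough sketch of the cost gap (a naive timing loop with made-up values, not a proper JMH benchmark; treat the numbers as illustrative only):

    import java.math.BigDecimal;

    public class SumCost {
        public static void main(String[] args) {
            int n = 50_000; // illustrative size, in the "tens of thousands" range

            double[] doubles = new double[n];
            BigDecimal[] decimals = new BigDecimal[n];
            for (int i = 0; i < n; i++) {
                doubles[i] = i * 0.01;
                decimals[i] = BigDecimal.valueOf(i * 0.01);
            }

            // Sum the doubles: a tight loop over primitives
            long t0 = System.nanoTime();
            double dSum = 0;
            for (double d : doubles) dSum += d;
            long t1 = System.nanoTime();

            // Sum the BigDecimals: one object allocation per add
            BigDecimal bSum = BigDecimal.ZERO;
            for (BigDecimal b : decimals) bSum = bSum.add(b);
            long t2 = System.nanoTime();

            System.out.printf("double sum:     %.2f in %d us%n", dSum, (t1 - t0) / 1_000);
            System.out.printf("BigDecimal sum: %s in %d us%n", bSum, (t2 - t1) / 1_000);
        }
    }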


I think annoying and counterintuitive are exactly the right words for numeric types that don’t obey the associative property over addition.
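Concretely, a minimal Java example (values picked only to make the cancellation obvious):

    public class NotAssociative {
        public static void main(String[] args) {
            double a = 1e16, b = -1e16, c = 1.0;

            // (a + b) cancels exactly to 0, so the 1.0 survives
            System.out.println((a + b) + c); // 1.0

            // (b + c) rounds back to -1e16 (1.0 is below the precision at that
            // magnitude), so the 1.0 is lost entirely
            System.out.println(a + (b + c)); // 0.0
        }
    }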


The sad state of software today shows that “premature optimizations” often aren’t.


Would you prefer a world where software is orders of magnitude more expensive and so limited to a few applications like aircraft and central banking?

Because that’s the alternative to your “sad state.”



