
That's the wrong way to think about floating point. A double gives you roughly 15 significant digits. If the true answer is 500000000.600000 but your algorithm says 500000000.601239, then 25% of the precision you had is already kaput. It's also a question of scalability: increase the author's example array size to 1 gigabyte and now 40% of your double is garbage; increase it to web scale, and you're toast. What's the solution? Divide and conquer. Not only will that let you farm the summation out across machines, it also cuts the error growth from linear in the number of elements to logarithmic.
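To make the divide-and-conquer idea concrete, here is a minimal sketch of pairwise summation in Python (the function name and the base-case cutoff of 8 are my own choices, not anything from the comment above): split the array in half, sum each half recursively, and add the two partial sums, so each element passes through only about log2(n) additions instead of up to n.

    def pairwise_sum(xs, lo=0, hi=None):
        """Sum xs[lo:hi] by recursive halving, so rounding error grows
        roughly O(log n) instead of the O(n) of a naive running total."""
        if hi is None:
            hi = len(xs)
        n = hi - lo
        if n <= 8:                      # small base case: plain loop
            total = 0.0
            for i in range(lo, hi):
                total += xs[i]
            return total
        mid = lo + n // 2
        return pairwise_sum(xs, lo, mid) + pairwise_sum(xs, mid, hi)

The same split is what lets you hand each half to a different machine and combine the partial sums at the end.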


