
In the Go example, can someone explain the difference between the first and the last case?



There's a link right below. It seems like:

1. Constants have arbitrary precision.
2. When you assign them to a variable, they lose precision (example 2).
3. You can format them at arbitrary precision in a string (example 3).

In that last example, they are getting 54 significant digits in base 10.
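Roughly this, if I'm reading the linked example right (a sketch; the exact code on that page may differ slightly):

    package main

    import "fmt"

    func main() {
        // Case 1: .1 and .2 are untyped constants, so the sum is computed
        // exactly as 0.3, converted to float64 at the call, and printed
        // with the default shortest round-trip format.
        fmt.Println(.1 + .2) // 0.3

        // Case 2: each constant is rounded to float64 first, then the
        // already-rounded values are summed, so both errors accumulate.
        var a float64 = .1
        var b float64 = .2
        fmt.Println(a + b) // 0.30000000000000004

        // Case 3: same float64 as case 1, but %.54f prints its full
        // decimal expansion instead of the shortest form.
        fmt.Printf("%.54f\n", .1+.2)
        // 0.299999999999999988897769753748434595763683319091796875
    }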


Thanks. What I didn’t realize is that although the sum is done precisely, the resulting 0.3 is represented only approximately once converted to float64. In the first case the formatting hides that; in the last it doesn’t.
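You can see both behaviors from the same float64 bits with strconv (a sketch):

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        const c = .1 + .2 // exact 0.3, computed at compile time
        f := float64(c)   // the one rounding step happens here

        // Shortest decimal that round-trips to the same float64: "0.3".
        fmt.Println(strconv.FormatFloat(f, 'f', -1, 64))

        // Full expansion of the same float64 shows the approximation.
        fmt.Println(strconv.FormatFloat(f, 'f', 54, 64))
    }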


I think the last example still goes through float64. The untyped constant .1 + .2 is summed at arbitrary precision (exactly 0.3) and only then rounded to the nearest float64 when passed to Printf, whereas the middle example rounds .1 and .2 to float64 first and sums the already-rounded values. That single rounding versus two is why the outputs differ; %.54f then prints the full decimal expansion of the float64 instead of the shortest round-trip form.
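One way to check that it isn't bypassing float64: compare against math/big, which really does keep more precision (a sketch):

    package main

    import (
        "fmt"
        "math/big"
    )

    func main() {
        // What Printf sees: the constant sum .1 + .2 is converted to
        // float64 at the call site, so %.54f shows float64(0.3).
        fmt.Printf("%.54f\n", .1+.2)
        // 0.299999999999999988897769753748434595763683319091796875

        // An actual high-precision 0.3 prints as 0.300...0 instead.
        x, _, _ := big.ParseFloat("0.3", 10, 200, big.ToNearestEven)
        fmt.Println(x.Text('f', 54))
        // 0.300000000000000000000000000000000000000000000000000000
    }

So the 54-digit string in the example is the exact decimal value of the float64 nearest 0.3, not an arbitrary-precision 0.3.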



