My Android calculator app gives me 9.00000... and lets me scroll through the decimal expansion; it's zeros to at least 1000 places, so presumably the calculator's answer is exactly 9.
I wonder if our calculator apps are doing some symbolic computation? Or do they just have >>64-bit precision (64 bits being, I presume, the default for Excel/Python)?
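(For what it's worth, the "64-bit default" presumption is easy to confirm for ordinary Python floats, and Excel uses the same IEEE 754 double format:)

    # Python floats are IEEE 754 binary64: a 53-bit significand,
    # good for roughly 15-17 significant decimal digits.
    import sys
    print(sys.float_info.mant_dig)  # 53 bits of significand
    print(sys.float_info.dig)       # 15 decimal digits guaranteed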
The Android calculator is indeed doing unbounded-precision calculations. It was developed by Hans-J. Boehm, also known for the Boehm-Demers-Weiser conservative garbage collector.
The Android calculator really has no reason to be so excellent, but I greatly appreciate it.
Everything should do unbounded precision. The results are much more human-friendly. And if you're not building a calculator app for humans, who are you building it for?
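For a feel of what more precision buys you on the trig round trip described upthread, here's a rough sketch using the third-party mpmath library at a fixed 1000 decimal digits. This is not the constructive-real arithmetic the Android calculator actually uses, just a quick stand-in for the idea:

    # Evaluate sin/cos/tan and their inverses, converting degrees <-> radians
    # at each step, with 1000 significant decimal digits instead of ~16.
    from mpmath import mp, mpf, sin, cos, tan, asin, acos, atan, pi

    mp.dps = 1000  # decimal digits of working precision

    def radians(d):  # degrees -> radians
        return d * pi / 180

    def degrees(r):  # radians -> degrees
        return r * 180 / pi

    x = mpf(9)
    r = degrees(asin(degrees(acos(degrees(atan(tan(radians(cos(radians(sin(radians(x))))))))))))
    print(abs(r - 9))  # error is astronomically small (well below 1e-900), vs ~1.7e-10 with 64-bit floats

A truly unbounded-precision calculator goes further: roughly speaking, it keeps refining until every digit it displays is guaranteed, which is why you can scroll through a thousand exact zeros.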
Using this method (and other related "test" functions) I discovered that the default iOS calculator operates with significantly greater precision than the popular third-party "PCalc", which was a little disappointing.
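For anyone curious, the "test" functions in question are just identities that should come out exactly zero (or exactly reproduce the input) in real arithmetic; how far off they are hints at the precision of the machine that evaluated them. A rough Python illustration of the flavor:

    # Identities that are exact mathematically; the size of the residue
    # hints at the precision of the arithmetic that evaluated them.
    import math

    probes = {
        "sqrt(2)^2 - 2":        math.sqrt(2) ** 2 - 2,
        "sin(pi)":              math.sin(math.pi),
        "(1e16 + 1) - 1e16":    (1e16 + 1) - 1e16,   # should be 1; 0 means the +1 was absorbed
        "asin(sin(0.1)) - 0.1": math.asin(math.sin(0.1)) - 0.1,
    }
    for name, value in probes.items():
        print(f"{name:22} {value!r}")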
It implements the HP42, but with a math library with 34 decimal digits of precision. You can also buy a calculator, the DM-42, that runs on it, and it'd be the best calculator in the world if the keyboard weren't so… bleh
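For a feel of what 34 decimal digits means (that's the precision of IEEE 754 decimal128), here's a quick sketch with Python's decimal module — not the library that calculator uses, just the same digit budget:

    # 34 significant decimal digits, the decimal128 precision mentioned above.
    from decimal import Decimal, getcontext

    getcontext().prec = 34
    third = Decimal(1) / Decimal(3)
    print(third)      # 0.3333333333333333333333333333333333
    print(third * 3)  # 0.9999999999999999999999999999999999 -- off, but only in the 34th digit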
Also heads up: the automod seems to have banned you. I had to vouch this comment. You might be commenting too fast for a new user or be commenting from an IP it doesn't like.
This is a really cool idea. I think you could get a long way by finding calculations whose results depend on the chip being used.
Even more interesting would be a generalized version that sought to discover properties of an unknown calculator. For example, working out the internal precision by looking at rounding errors, or timing calculations on numbers of different sizes to make inferences about the size of available memory. This would be cool because you can then apply it to more general sorts of things, like human brains.
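FWIW, the "infer the precision from rounding errors" part is fairly tractable — it's basically the classic machine-epsilon probe, which you can run against any black box that will add and compare numbers for you. A rough Python sketch, here just probing the host's own floats:

    # Halve a perturbation until adding it to 1.0 no longer changes the result.
    # The number of halvings reveals the significand width of whatever
    # arithmetic evaluated the expressions.
    def probe_precision():
        eps = 1.0
        halvings = 0
        while 1.0 + eps / 2 != 1.0:
            eps /= 2
            halvings += 1
        return eps, halvings

    eps, bits = probe_precision()
    print(eps, bits)  # ~2.22e-16 and 52 for IEEE 754 double precision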
> This would be cool because you can then apply it to more general sorts of things, like human brains.
I would expect that to fall apart because the underlying mechanics are different; we know what errors from converting between base 2 and base 10 look like, but I think it's pretty far from obvious that the same principles extend to whatever everyone's favorite biological neural network uses to execute mathematics. You could do the same kinds of analysis, and it would probably tell you something interesting; I just wouldn't trust inferences made on the basis of lessons learned from digital computers.
But then Excel gives a comparatively horrible result, 8.99999999983268000 (maybe all the casting to and from degrees and radians makes it worse?)
And Python on the same machine, similar: degrees(asin(degrees(acos(degrees(atan(tan(radians(cos(radians(sin(radians(9)))))))))))) = 8.99999999983257
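(If anyone wants to reproduce that double-precision number, this is all it takes — exact trailing digits may vary slightly by platform/libm:)

    # With ordinary 64-bit floats the round trip lands ~1.7e-10 short of 9;
    # acos() applied to a value very close to 1, plus the repeated degree
    # conversions, amplify the last-bit rounding errors.
    from math import sin, cos, tan, asin, acos, atan, radians, degrees

    x = degrees(asin(degrees(acos(degrees(atan(tan(radians(cos(radians(sin(radians(9))))))))))))
    print(x)      # ~8.99999999983...
    print(9 - x)  # ~1.7e-10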