I absolutely agree with these requests! I am so desperate for an uncertainty-aware/unit-aware calculator that I tried implementing one myself, even though it is quite beyond my skill set. I gave up and used Mathematica (more on that below). Two comments on the requests:
- Most existing calculators that offer propagation of uncertainty use a simplified formula -- the one with the square root and the partial derivatives. That's OK for lab courses. But in real life, the distributions of the input parameters are non-Gaussian, the formulas depend on each other in subtle ways that are easy to miss, and you have to deal with systematic and statistical errors (accuracy vs. precision). Most researchers I know, myself included, sort of fudge this with pen-and-paper and Excel, and end up underestimating their errors... In my view, the best solution is a Monte-Carlo-type calculation, where you draw the input parameters from their respective distributions, do the calculation, and repeat 10000 times. The mean and the standard deviation give you the final result.
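To make the Monte-Carlo approach concrete, here is a minimal sketch in Python. The formula (a resistance R = V / I) and the input distributions are made up for illustration; the point is that one input is deliberately non-Gaussian, which the partial-derivative formula cannot capture:

```python
# Monte-Carlo error propagation: draw each input from its own
# distribution, evaluate the formula, repeat many times, and read
# off the mean and standard deviation of the results.
# Hypothetical example: R = V / I with a Gaussian voltage and a
# uniformly distributed (non-Gaussian) current.
import random
import statistics

N = 10_000
samples = []
for _ in range(N):
    V = random.gauss(5.0, 0.1)      # V = 5.00 +/- 0.10 V (Gaussian)
    I = random.uniform(0.95, 1.05)  # I in [0.95, 1.05] A (uniform)
    samples.append(V / I)           # the actual calculation

mean = statistics.mean(samples)
std = statistics.stdev(samples)
print(f"R = {mean:.3f} +/- {std:.3f} Ohm")
```

The same pattern works for arbitrarily nested formulas: correlations between intermediate quantities are handled automatically, because each of the N passes uses one consistent set of input draws.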
- Arbitrary precision is doable, but often unnecessary. To me, it was always more important to know when the result is numerical garbage (in which case I rearrange the equations to make them more computer-friendly) than to choose some precision up front and hope it would be enough. For me, interval arithmetic [1] is the best solution. Each number is represented by a pair of bounding floating-point numbers, and each calculation produces a new interval that is guaranteed to contain the true result. The intervals sometimes blow up unnecessarily, but usually stay much smaller than the desired accuracy or precision. These days, whenever I get an unexpected result, I repeat the calculation with interval arithmetic in Mathematica or Matlab to exclude numerical errors. Interval arithmetic is not very costly in terms of run-time, and there are great libraries out there.
The unit-aware, uncertainty-aware calculator that I ended up using for my PhD thesis was Mathematica. Mathematica deals with units somewhat well and iterates over lists natively. You can write standard formulas and feed them lists for the Monte-Carlo calculation. However, unit calculations are extremely slow. My workaround was to split the calculation into one unit calculation and 10000 unit-less calculations. However, I ran into so many Mathematica bugs and quirks that hosed the calculation ("transparent" Mathematica updates that changed the result, file corruption (!), swallowed minus signs (!!)) that I cannot possibly recommend Mathematica for anything but toy projects.
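The split-the-units workaround generalizes beyond Mathematica. A rough Python sketch of the pattern, with units represented as plain dicts of base-unit exponents (a stand-in for a real unit library) and the example quantities invented for illustration:

```python
# Workaround pattern: run the unit bookkeeping ONCE, symbolically,
# and run the 10,000 Monte-Carlo samples as pure float arithmetic.
import random
import statistics

def mul_units(a, b):
    """Combine two unit dicts, e.g. {'m': 1} * {'s': -1} -> m/s."""
    out = dict(a)
    for base, exp in b.items():
        out[base] = out.get(base, 0) + exp
        if out[base] == 0:
            del out[base]
    return out

# One unit calculation: velocity = distance / time -> m * s^-1
dist_unit = {"m": 1}
time_unit = {"s": 1}
vel_unit = mul_units(dist_unit, {k: -v for k, v in time_unit.items()})

# 10,000 unit-less calculations (hypothetical distance/time inputs)
samples = [random.gauss(100.0, 1.0) / random.gauss(9.6, 0.2)
           for _ in range(10_000)]
print(statistics.mean(samples), "+/-", statistics.stdev(samples), vel_unit)
```

Because the unit check runs only once, the per-sample inner loop stays as fast as plain floating-point code, which was the whole point of the workaround.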
Great work! I will keep a close eye on this awesome project!
That only, at best, gives you the maximum and minimum values, with no knowledge of the actual distribution.
EDIT: I missed that the specific request was for interval arithmetic as a representation of numbers. I thought it was a proposal for error propagation (which requires actual distributions, not simple min/max intervals).
Also, Frink does not support accurate interval arithmetic for all operations, and reverts to numerically accurate bounds instead (https://frinklang.org/#IntervalArithmeticStatus). I don't know if Mathematica is any better for those.
[1]: http://subs.emis.de/LNI/Seminar/Seminar07/148.pdf