All the imperial units in this PDF hurt my head. I'm an American, so I'm used to them, but seeing them in the context of precision aerospace hardware is so jarring. As a student it seemed like math re: chemistry and physics was so much easier (read: less error-prone) with metric units than with imperial. Mars Climate Orbiter indeed.
The biggest lesson learned from the Mars Climate Orbiter wasn't exactly that everything should be metric. It was that unit conversion is dangerous and needs to be handled very carefully. Just saying everything will be metric doesn't avoid unit conversion: you might have one thing measuring fuel burn rate in grams/second and another in kg/second, and that conversion can still lead to issues if not handled correctly. In the context of the Space Shuttle, everything was designed in imperial units starting in the 70s, so converting it all to metric would have been very risky.
- There is only one unit for each dimension. In your example, "grams/second" would not be a valid unit; SI only has kilograms/second (yes, it's annoying that the unit of mass has a name beginning "kilo" :( )
- The conversion factor between a derived unit and the base units it's built from is exactly 1 (by definition). In your example, the rate R is related to mass M and time T via M = TR (i.e. kilograms = seconds × kilograms/second). There are no conversion factors, since the unit of rate is derived from the units of time and mass (unlike, say, measuring energy as calories OR foot-pounds OR coulomb-volts OR ounce-miles OR slug-acres-per-squared-hour OR ...)
When designing new systems it is best to clearly define the expected units for everything at the start. SI units are great, but sometimes they are not great for the specific use case. When measuring high-power systems like power plants or EVs, kW/MW/GW are more appropriate units than plain watts. In embedded systems you have limited bandwidth; having to use large variables just to store values in base SI units is a waste compared to using an appropriately scaled unit and saving bandwidth.
Overall, the conversion itself isn't the root of the issue. Clearly defining the data types and units of everything is the key. That makes sure any conversion is done where necessary, and lets people design systems so they can avoid conversion where possible.
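As a sketch of the bandwidth point (the field layout, names and scale factor here are invented for illustration): a power reading that covers grid-scale values in 16 bits of kilowatts, instead of 32 bits of watts.

```python
import struct

# Hypothetical telemetry field: power in kilowatts as an unsigned 16-bit
# integer (0..65535 kW, i.e. up to ~65 MW) instead of watts in 32 bits.
def encode_power_kw(watts: float) -> bytes:
    kw = round(watts / 1000)          # scale to the agreed unit
    return struct.pack(">H", kw)      # 2 bytes on the wire, not 4

def decode_power_kw(payload: bytes) -> float:
    (kw,) = struct.unpack(">H", payload)
    return kw * 1000.0                # back to base SI at the boundary

wire = encode_power_kw(1_500_000.0)   # 1.5 MW
assert len(wire) == 2
assert decode_power_kw(wire) == 1_500_000.0
```

The trade-off is exactly the one described above: you save half the bus bandwidth, but every producer and consumer has to agree on the kW scaling, which is where the documentation burden comes in.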
> SI units are great but sometimes they are not great for the specific use case. When measuring high power systems like power plants or EVs kW/MW/GW are a more appropriate unit versus Watts.
A non-SI example would be the kilowatt-hour (kWh), since 'hour' is not an SI unit. The SI equivalent of 1 kWh is 3.6 megajoules.
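The arithmetic behind that equivalence, spelled out:

```python
# 1 kWh = 1000 W sustained for 3600 s, expressed in joules
joules_per_kwh = 1000 * 3600
assert joules_per_kwh == 3_600_000   # i.e. 3.6 MJ
```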
> In embedded systems you have limited bandwidth; having to use large variables just to store values in base SI units is a waste compared to using an appropriately scaled unit and saving bandwidth.
- Floats are designed to be scaled, by adjusting their exponents. In the happy case, our algorithms don't care; so why not stick with standard units? In the unhappy case (imprecision, numeric instability) we may need a mixture of entirely bespoke representations, even within a single algorithm. Ideally we'd still use standard units at the "boundaries".
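One way to see that scaling claim (Python here, but the behaviour is IEEE 754 generally): the *relative* precision of a value stored in watts versus kilowatts is essentially the same, because only the exponent really shifts.

```python
import math

# math.ulp(x) is the gap to the next representable float above x.
# The relative spacing around 1500.0 (watts) and around 1.5 (kilowatts)
# differs only by the ratio 1024/1000, since floats scale by powers of two.
rel_w  = math.ulp(1500.0) / 1500.0
rel_kw = math.ulp(1.5) / 1.5
assert 0.5 < rel_w / rel_kw < 2.0   # same relative precision, within 2x
```

So with floats, choosing kW over W buys essentially nothing; the bandwidth argument really only applies to fixed-width integer encodings.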
INT_MAX is usually 2147483647, which means a "power" figure in watts can handle anything from a laptop CPU up to a Chernobyl-sized reactor (INT_MAX watts is just over 2 GW). FLT_MAX and FLT_MIN are about 3.4e+38 and 1.2e-38, so vastly more range in both directions (though only ~7 significant digits at 32-bit precision).
Have you actually had that kind of unit confusion in metric, or are you inferring from your experience with the imperial system? It kind of just seems to reinforce the suggestion that the imperial system having a bunch of redundant or weird units IS the problem.
(In your defense: try [METRIC_UNIT]/h versus /s. E.g. km/h, often used for display in slow vehicles, and m/s, used for calculating the motion of fast objects, are kind of confusing.)
INT_MAX assumes every variable is a 32-bit integer, which is not always the case. In my experience with flight control systems, we focused on using the smallest type necessary for a given variable; many were only 8 or 16 bits. Processing overhead wasn't really the driving factor for smaller variables; it was typically interface bandwidth, which can be very limited while maintaining required safety margins on aircraft.
We had a pretty massive ICD that defined every message going between subsystems down to the bit level, which is what was necessary to avoid unit confusion when dealing with systems created by subcontractors and such. Your velocity example is a pretty common thing you would have to reference the ICD for: is this velocity signal in km/h or m/s, or perhaps mph, fps, or Mach number?
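A toy version of what a bit-level ICD entry buys you (the message layout, field names and scale factors are all invented here):

```python
import struct

# Hypothetical ICD entry for one bus message:
#   bytes 0-1  airspeed, unsigned 16-bit, LSB = 0.1 m/s
#   bytes 2-3  fuel flow, unsigned 16-bit, LSB = 1 g/s
#   byte  4    Mach number, unsigned 8-bit, LSB = 0.01 (dimensionless)
MSG_FMT = ">HHB"

def decode_msg(payload: bytes) -> dict:
    airspeed_raw, fuel_raw, mach_raw = struct.unpack(MSG_FMT, payload)
    # The unit and scale of each field come from the ICD, not the bits
    return {
        "airspeed_m_per_s": airspeed_raw * 0.1,
        "fuel_flow_g_per_s": float(fuel_raw),
        "mach": mach_raw * 0.01,
    }

msg = struct.pack(MSG_FMT, 2501, 340, 85)
decoded = decode_msg(msg)
assert abs(decoded["airspeed_m_per_s"] - 250.1) < 1e-9
assert abs(decoded["mach"] - 0.85) < 1e-9
```

Nothing in the payload itself says "this is m/s with a 0.1 LSB"; every producer and consumer has to implement the same ICD row, which is why those documents get reviewed so heavily.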
Sure, you could say all velocities need to be in m/s, but if all your control laws for higher speeds, fine-tuned through tens of years of work, use Mach number in their calculations, is it safer to update all the control laws or just do a conversion?
> Sure, you could say all velocities need to be in m/s, but if all your control laws for higher speeds, fine-tuned through tens of years of work, use Mach number in their calculations, is it safer to update all the control laws or just do a conversion?
Mach number is a dimensionless ratio; I don't understand what that could have to do with velocity units? (Any units used in its calculation have cancelled out by that point.)
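Quick demonstration of the units cancelling (the speed-of-sound figure is just an illustrative sea-level ISA value):

```python
# Mach number is v/a; compute it from metric and from imperial inputs
# and the units cancel either way.
a_m_per_s = 340.29            # speed of sound, sea-level ISA (approx.)
v_m_per_s = 680.58            # some supersonic airspeed
ft_per_m = 3.28084

mach_metric = v_m_per_s / a_m_per_s
mach_imperial = (v_m_per_s * ft_per_m) / (a_m_per_s * ft_per_m)

assert abs(mach_metric - mach_imperial) < 1e-12  # same ratio either way
assert abs(mach_metric - 2.0) < 1e-9
```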
> You might have one thing measuring fuel burn rate in grams/second and another in kg/second.
One of my jobs is reporting on packaging weights[1] in the UK and it's not uncommon to have this problem when I request data.
For example, we request in kg, but one manufacturer sent back the report table of numbers in grams. That meant the thin plastic sleeve a tube of paper coffee cups comes in was reported at 30 kg rather than 30 g.
These components are all taxed / levied at various stages in the supply chain so being out by a factor of 1,000 can make a significant difference - we sanity check all our data, but a surprising number of people don't. It's not life threatening, but expensive!
[1] It's for the UK packaging waste regulations and packaging taxes. For a case of paper coffee cups we report on the weight and materials of every component such as the cardboard outer box, labels, parcel tape, inner sleeve bags, paper in the cups and the plastic lining in the cups.
If I have data going out on a data bus in grams/second, but someone in a different subsystem reads it and thinks it is kilograms/second because that's the default unit they use, then you have an issue. The actual conversion of the bits isn't the problem; it's that different systems might use different units internally, and making sure those conversions are done correctly. It's much more an engineering design/human interaction issue than a computation issue.
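A minimal sketch of one way to catch that at the boundary: tag values with their unit and force an explicit conversion (the class, names and conversion table are invented for the example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    unit: str   # e.g. "g/s" or "kg/s"

    def to(self, unit: str) -> "Quantity":
        # Hypothetical conversion table; a real system would generate
        # this from the ICD rather than hard-code it.
        factors = {("g/s", "kg/s"): 1e-3, ("kg/s", "g/s"): 1e3}
        if unit == self.unit:
            return self
        return Quantity(self.value * factors[(self.unit, unit)], unit)

burn = Quantity(30.0, "g/s")
assert abs(burn.to("kg/s").value - 0.03) < 1e-12
```

The point isn't the arithmetic; it's that a consumer expecting "kg/s" either gets an explicit, reviewed conversion or a loud failure, instead of silently misreading raw numbers off the bus.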
Pretty sure GP was making a sarcastic comment referencing how floating point is notorious for subtle rounding issues when handling values that have an exact representation in base 10 but a repeating pattern in binary.
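The classic demonstration of that base-10-vs-binary mismatch:

```python
# 0.1 and 0.2 are exact in decimal but repeating fractions in binary,
# so their float sum is not exactly 0.3:
assert 0.1 + 0.2 != 0.3
assert abs((0.1 + 0.2) - 0.3) < 1e-15   # off by ~5.5e-17
```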
Yes, I understood that. However, repeated conversions can introduce rounding errors, up to and including catastrophic cancellation. This is even more esoteric than "grams here, kilograms there" and thus easier to fall prey to. Depending on how precise your measurements are, you could easily drop significant digits or add random noise, especially if you do large-scale changes.
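A concrete example of the kind of silent precision loss you can hit even without any unit mismatch (IEEE 754 doubles):

```python
# Above 2**53, doubles can't represent every integer; adding 1 to 1e16
# is silently absorbed, so subtracting 1e16 back gives 0, not 1.
big = 1e16
assert (big + 1.0) - big == 0.0   # the +1 vanished
assert 1.0 + (big - big) == 1.0   # reordering preserves it
```

This is why evaluation order and intermediate scaling matter: mathematically identical expressions can differ by entire significant digits in float.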
Yeah, we tended to use fixed point for many of our variables and documented the precision of each one. Then an analysis was done verifying that we maintained precision through the calculations, from sensor input to output.
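A toy version of that kind of documented fixed-point representation (Q8 here, i.e. 8 fractional bits; a real system would also specify range, saturation behaviour and worst-case error per signal):

```python
# Q8 fixed point: stored value = real value * 2**8, as an integer
FRAC_BITS = 8
SCALE = 1 << FRAC_BITS        # 256 counts per unit -> ~0.004 resolution

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(raw: int) -> float:
    return raw / SCALE

raw = to_fixed(3.14159)
assert abs(from_fixed(raw) - 3.14159) <= 0.5 / SCALE  # documented max error

# Multiplication needs a renormalising shift to stay in Q8:
prod = (to_fixed(1.5) * to_fixed(2.0)) >> FRAC_BITS
assert from_fixed(prod) == 3.0
```

The precision analysis mentioned above amounts to tracking that documented error bound through every such operation, from sensor to actuator.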
The problem is that almost nothing today accelerates fixed point, while everything supports floating-point acceleration, which will speed up your ability to calculate the wrong number by many orders of magnitude.
Yeah, that makes sense. In aerospace they'll fall on the side of accuracy even at the expense of speed. But they also tend to use FPGAs that can be designed to handle fixed-point calculations quickly, though that brings a lot of cost and specialty hardware.
Another place this causes issues is financial modeling, particularly for complex derivatives, where the model might be extremely complex and small errors can accumulate into meaningful errors in risk calculations. Fixed point is sometimes used in finance, but performance is also a real concern. There's a lot you can do to reduce floating-point error if your numerical libraries are carefully constructed. But it has often made me wonder why there aren't processor lines with high-performance fixed point, since the math is extremely easy: even just shifting the mantissa, working in integer space, then shifting it back.
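For the money case specifically, Python's stdlib `decimal` illustrates the accumulation problem (illustrative only; a real risk system would have its own numeric stack):

```python
from decimal import Decimal

# Binary floats drift when repeatedly adding decimal money amounts;
# base-10 decimal arithmetic stays exact for the same workload.
float_total = sum([0.10] * 100)
decimal_total = sum([Decimal("0.10")] * 100, Decimal("0"))

assert float_total != 10.0                 # drifted after 100 additions
assert decimal_total == Decimal("10.00")   # exact
```

The cost is exactly the performance concern above: software decimal arithmetic is far slower than hardware floats, which is the trade-off driving the wish for fixed-point hardware.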
I mean, sure, converting grams to kilograms is easier (for a human) than converting ounces to pounds, but how often do we have to do that in our heads? Computers, on the other hand, do not care (and they "prefer" the binary system anyway).
Converting grams to kilograms not so much, but mm to m, mL to L, even g to mL and L to kg (of water) — all the time, and of course in our heads — it's so easy you barely need to think about it.
I don't even consider those to be "conversions" at all; in the same way that "two dozen metres" and "twenty four metres" are both just some number of metres (not a conversion from a separate "dozen-metre" unit).
Technically, the SI standard does consider millimetres, centimetres, kilometres, etc. to be separate ("derived") units from the base unit of "metre". That matters when we have multiple interacting multiples, e.g. "one cubic centimetre" is not the same as "one centi cubic metre"; but of course, that's avoided if we stick to base units like the cubic metre. (see http://www.chriswarbo.net/projects/units/improving_our_units... )
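The cubic-centimetre example, as arithmetic: the prefix applies *before* cubing, so the two readings differ by a factor of 10,000.

```python
# "one cubic centimetre": prefix first, then cube -> (10^-2 m)^3 = 10^-6 m^3
cubic_centimetre = (1e-2) ** 3
# "one centi (cubic metre)": cube first, then prefix -> 10^-2 m^3
centi_cubic_metre = 1e-2 * 1.0 ** 3

assert abs(cubic_centimetre - 1e-6) < 1e-18
assert centi_cubic_metre == 1e-2
assert abs(centi_cubic_metre / cubic_centimetre - 10_000) < 1e-6
```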