“in order to settle any lingering unease about using such tools in physics” spoken like a true mathematician :D
I enjoyed the post a lot (at least the parts that didn’t pass right over my head). But I never met a physicist with lingering unease about dimensional analysis. We get that beaten into us until it’s as natural as breathing.
> But I never met a physicist with lingering unease about dimensional analysis.
There is very little that physicists have lingering unease about when it comes to (ab)using maths to get their way with the physical world.
Examples abound, and have actually led to many a new avenue in math.
Distributions (as in the Dirac distribution) are a very good example of this (IIRC convolution didn't have an identity and they needed one, so they just made up "functions" that were allowed to take infinite values as long as their integrals stayed finite, or something along those lines).
Math is just a tool, as long as the objects behave in the ways you need you can do whatever you want with math.
If your math objects break other properties of math that you don't use, then it doesn't matter: you don't use those, so your math is still correct.
A programmer analogy would be that you implement a special kind of list for your project, but don't implement the whole list interface because you don't need the rest. That makes the mathematicians fume - "it isn't a list!!!" - and they invent a new interface name for your implementation, like "distribution" instead of "function".
Speaking as a former physicist, I can only agree with you.
However, regarding the Dirac delta function and other generalized functions: even though they were introduced by physicists (I think Heaviside was an early proponent) in a hand-wavy fashion, they were later put on a rigorous mathematical footing in the theory of distributions. Distributions are nowadays used by mathematicians without any hesitation.
This is also not the first instance of something like this, and it won't be the last. Physicists come up with ad hoc methods that work but that they can't justify with rigor; some time later mathematicians formalize them and they become part of the mathematical tool set.
> they were later put on a rigorous mathematical footing in the theory of distributions
Yeah, some French dude called Laurent Schwartz, who got the Fields Medal for it.
IIRC he built the set of distributions as the dual vector space of the tiniest vector space he could think of: extremely well-behaved functions (C-infinity functions with compact support - weird mathematical beasts that go to zero at the edges of their support, with all derivatives also going to zero there).
I never really managed to grok the intuition behind Schwartz's formalization, whereas the hand-wavy physicist way is pretty straightforward to understand.
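For what it's worth, the formal definition itself is short, even if the intuition takes longer: a distribution is just a continuous linear functional on that tiny space of test functions, and the delta is "evaluate at zero". In LaTeX:

\langle \delta, \varphi \rangle = \varphi(0), \qquad \varphi \in C_c^\infty(\mathbb{R})

The physicist's \int \delta(x)\varphi(x)\,dx is then just suggestive notation for that pairing.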
Yes, there is an often overlooked unit 1 at the heart of dimensional analysis.
It counts discrete things that combine as integers, but are not convertible (commensurable) between different instances of the unit, like apples and oranges.
A hen lays 3 eggs a week (egg/T), and a car factory makes 4,000 cars a week (car/T). Dimensional analysis says they are both 1/T, but we know they are really not commensurable.
P.S. Things that do not combine as integers: water drops (1+1=1); rabbits (1+1=2^t for some time constant t).
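(For the programming crowd: this "same dimension, not commensurable" situation is exactly what nominal types capture. A minimal Go sketch, with made-up type names:)

```go
package main

import "fmt"

// Eggs and Cars are both plain counts (per week they'd both be 1/T),
// but as distinct named types the compiler keeps them apart.
type Eggs int
type Cars int

func main() {
	var laid Eggs = 3
	var built Cars = 4000
	fmt.Println(laid+laid, built) // fine: Eggs + Eggs
	// fmt.Println(laid + built)  // compile error: mismatched types Eggs and Cars
}
```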
That is a valid point, but I think it is outside the scope of dimensional analysis. Distinguishing scalars from vectors (or different-rank tensors in general) is usually obvious, because they have a different number of components, and nobody even tries to add them together. If you do try, you will get a result that depends on your coordinate system, a big no-no in physics.
In some cases, like vectors and axial vectors, it is less obvious because (in 3 spatial dimensions), they actually have the same number of components. But still, if you add them together you get something that does not transform in a sensible way under reflections.
That's because, technically, N·m is just the magnitude of either the energy (a scalar) or the torque (a bivector); the product you should actually use is the scalar (dot) product for energy, and the wedge product for torque. You could use the geometric product too and just write both as (N v1)(m v2) for vectors v1 and v2; you would even get as a result (N v1)(m v2) = Nm (v1·v2) + Nm (v1∧v2) = [energy part] + [torque part], which is kind of neat - a unit that depends on the angles involved but encompasses both cases.
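Written out cleanly (a standard geometric-algebra identity, with v1, v2 unit vectors as above):

(N\,v_1)(m\,v_2) = Nm\,(v_1 \cdot v_2) + Nm\,(v_1 \wedge v_2)

where the grade-0 (scalar) part plays the role of energy and the grade-2 (bivector) part the role of torque.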
Dimensional analysis doesn't need to be perfect, it just needs to rule out some options. You can convert all units to mass and still get useful results, but the more diverse you can keep the units the better.
I suppose you could even go through the relevant laws of physics and only identify the units for quantities that need to be added together (or to make the quantities inside more complex functions unit-less).
In the case of torque and energy you basically run into the problem that torque times angle is an amount of work and angle is traditionally unit-less. You could however give angle a unit by introducing some 'unnecessary' conversion factor like 'degrees per radian', which is only used for trigonometry. In that case you could reformulate torque as 'energy per angle'.
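Spelling that out: if angle carries a unit, work about an axis is torque integrated over angle, so

W = \int \tau \, d\theta \quad \Longrightarrow \quad [\tau] = \mathrm{J}/\mathrm{rad}

and declaring the radian dimensionless collapses J/rad back to J = N·m, which is exactly the usual convention.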
I don't think this distinction is often made in practice, even though it might be useful. As far as I can tell, combining Tao's approach here with a Clifford algebra approach (where energy is a scalar and torque a bivector) would be entirely possible and solve this problem.
The Gibbs formulation of 3D vector analysis/calculus was certainly useful, but forgets most of the spatial structure. Using Clifford Algebra (Geometric Algebra) fixes everything.
Almost all of the value in GA comes from the use of multivectors and the wedge product. It's very hard to find a justification for the "geometric product", and there's no real geometric interpretation of it. Therefore treating it as fundamental is a difficult-to-justify choice. "Exterior Algebra" on its own has everything you want without the objectionable parts.
I suppose you can just take the tensor product of the spatial structure with the units? So energy would be R tensored with V^{N.m} (which of course is isomorphic to just V^{N.m}), and torque would be the space of axial vectors tensored with V^{N.m}.
Seeing the paragraph below I almost thought the author would move on to discuss the particular engineering horror that is logarithms of non-dimensionless values, but it is probably all for the best that this practice remains hidden.
"A related limitation, which is not always made explicit in physics texts, is that transcendental mathematical functions such as {\sin} or {\exp} should only be applied to arguments that are dimensionless"
Angles are literally dimensionless, a radian is defined as a ratio of the lengths of arc and radius.
But we all know they have 'units', or at least different formats, because there are also decimal degrees and integer DMS (degrees, minutes, seconds), where the seconds may also be decimal.
There are also unit formats for other ratios, e.g. decimal or %, road gradients like '1 in 4', and betting odds like '4 to 1'.
Dimensionless angles also mean that angular velocity is just 1/T. The fact of the rotation is lost (see my other comment about tracking spatial structure).
I think there is a reconciliation. If you read the other article popular today (https://news.ycombinator.com/item?id=37493955), it focuses on scale invariance as the driving principle and gives an example of units for Fourier coefficients.
I think one subtle aspect here, not really covered in that other article, is a sort of "duality" between "scale invariance" and "carrying scale tags" along with expressions. This also happens in differential geometry, or even just analytic geometry / vectors, where you can often either use an abstract geometric-object notation or carry along index subscripts. So you can think of units as carrying along scales you care to track "for whatever reason".
The most common reason for physical units is varying conventions/devices to measure something. Angle measurers care about their own devices and so their units. By carrying along the scale you make the number a surrogate for a hypothetical experimental result. Unit conversion can be seen as how to translate from one (kind of) apparatus to another.
So, while args to trig functions are dimensionless, you do not have to be as strict about angles. You could retain them and make people carry along a multiplicative factor inside the parameters which is likely what people with instruments measuring angles in non-radians do.
Similarly, you could also (probably coherently though I have not thought deeply about it) expand SI to 8 base units adding "axial-meters". The number of base units / scales to carry along is arbitrary. It just depends upon how much expressional error checking is desired as @contravariant observes. (system conversion has its own structure as per my other comment elsethread, unelaborated by both articles.)
Because the "axialness" factor is more "kind than amount" (and binary at that), besides contravariant's (probably better) angle-dimension idea, it might be more like how `i=sqrt(-1)` creates a 2nd dimension for real numbers and you carry along that factor to make the complex plane. I have no idea how popular this kind of "complex length unit" would be in terms of error catching value for its carrying-along cost, though.
It's not as bad as you make it sound: if you write something like `e^t` where t has units of, say, time, then the right way to understand it is that it's actually `e^(t·(1/T))`, where (1/T) is a conversion factor that is normally hidden. `ln e^t = t` then has a hidden `ln (1/T) = -ln T` factor which you can't do much with on its own, but you can carry it forward in equations, and eventually, if you exponentiate again, it will go back where it belongs. All of the units work out this way.
This can be useful because it allows for changing units after the fact. That `ln T` can become `ln (T·S/S) = ln (T/S) + ln S` if you want, and `ln (T/S)` can have a non-zero value that actually matters.
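Concretely, taking T = seconds and S = minutes, a change of units just shifts the carried factor by a constant:

\ln\frac{t}{\mathrm{s}} = \ln\frac{t}{\mathrm{min}} + \ln\frac{\mathrm{min}}{\mathrm{s}} = \ln\frac{t}{\mathrm{min}} + \ln 60

which is the non-zero `ln T/S`-style term that actually matters.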
"For instance, concatenation of line segments or planar regions serves as a substitute for addition; the operation of forming a rectangle out of two line segments would serve as a substitute for multiplication; the concept of similarity can be used as a substitute for ratios or division; and so forth."
When I was a wee lad, my parents somehow recognised that while I really took pleasure in understanding abstract math concepts, I struggled to grasp them when only using written symbols (bless their hearts, as my parents - my mom a security guard at a local power plant, my dad a guy who ran "warehouses" and did a stint in prison for "warehousing" the wrong "thing" - were the furthest thing from academics). They got me Cuisenaire rod (1) instruction from a local lady who must've done it as a side gig.
We'd sit at her kitchen table and play around with shapes, and I would gain true intuition for all types of basic math concepts. I believe it laid a solid foundation for my future (my current work as an engineer and researcher).
I still struggle with abstract math explained using symbols. I very rarely make it through any math Wikipedia page explanations. It's like teaching an abstract concept with an abstract tool - too many layers of abstraction. I always envied the people who can.
I'm not sure of the current state of pedagogy, but when I was studying mathematics at the primary and secondary school levels the formalists were ascendant and geometry was an afterthought at best. It wasn't until AP Calculus that I had a teacher who stressed that many problems could be solved geometrically. He encouraged us to check for agreement between the geometric solution and the formal solution.
Related work for those who are into this: Álvaro P. Raposo, "The Algebraic Structure of Quantity Calculus II: Dimensional Analysis and Differential and Integral Calculus", Measurement Science Review, 2019, doi:10.2478/msr-2019-0012. Full text available at the DOI and on ResearchGate.
> This geometric picture works well for units such as length and volume that have a spatial geometric interpretation, but it is less clear how to apply it for more general units. For instance, it does not seem geometrically natural (or, for that matter, conceptually helpful) to envision the equation E = mc^2 as the assertion that the energy E is the volume of a rectangular box whose height is the mass m and whose length and width is given by the speed of light c.
I wish I could tell a friend that I've had the exact same thought, but I'm sure no one would believe me. And I have no friends anyway.
Tangential, but it bothers me that in Go I can't multiply a Duration by an int/float, but instead have to convert to Duration and multiply two Durations together! Does this upset anyone else?
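Concretely, the annoyance looks like this (a minimal sketch):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println(3 * time.Second) // OK only because 3 is an untyped constant

	var n int = 3
	// fmt.Println(n * time.Second)             // compile error: mismatched types int and time.Duration
	fmt.Println(time.Duration(n) * time.Second) // the conversion Go forces on you

	d := 2 * time.Second
	fmt.Println(d * d) // compiles fine, even though the "units" are now seconds squared
}
```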
Yes. The kind of type modeling this is doing is not very physics-unit-minded at all (where any multiplicative operation potentially creates a new type). Rather, it is trying to prevent non-additive (add/subtract/compare) operations bugs viewed as common/high risk.
I mean, your example, in the context of these dim.analysis discussions, would really have an output type of DurationSquared, not Duration, which is what usual prog.lang modeling does for arithmetic ops.
Go also is not alone. E.g., the Nim std/times has a `*` defined for (int, Duration) but not (float, Duration).
Personally, I suspect prog.langs with native 64-bit ints are better off just using raw nanoseconds for this kind of thing (even time since the epoch gets you into Star Trek centuries.. something like 10 human generations from now).
> which is what usual prog.lang modeling does for arithmetic ops.
I mean, in Python you can't multiply two timedeltas and in Rust you can't multiply two time::Durations. So I'd say that it's not programming language modeling in general that's the problem but rather that Go is putting itself in a category of languages that lose the original semantics of the thing the type is modeling. I'm not sure what other languages are with Go there.
As the author of that other post, I'm happy to see Terry Tao's post on the front page now too. I learned a lot from it when I was researching for my own. The two articles end up going in somewhat different directions---Terry doesn't talk about dimensions with complex exponents, for example---but his is definitely more complete.
My favorite bit is his explanation of how dimensional analysis explains the relationships between powers used in Sobolev inequalities. Once the idea of dimensions is divorced from the usual physical MLT usage, you can apply these techniques all over math.
On a side note, it would be nice if the community of software engineers would give Tao some in-browser LaTeX rendering worthy of his mathematical content.
Money is an interesting case. The basic units are clear, with most modern currencies having a 3-letter code, and a decimal subdivision of the main unit into 100 or 1,000 subunits (or 10^8 satoshis :) Calculations should usually use integer arithmetic in the subunit, with Bankers Rounding.
Historically, there were non-decimal currencies, like British pounds-shilling-pence, with 1:20:12 ratio.
However, each value also needs a timestamp to enable conversions between currencies, or comparisons for the same currency at different times (inflation).
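To make the integer-subunit point concrete, a minimal Go sketch (Cents and FromFloat are made-up names, and real code would parse decimal strings rather than go through floats):

```go
package main

import (
	"fmt"
	"math"
)

// Cents holds an amount in the integer subunit. A currency code and a
// timestamp would travel alongside it for conversions and comparisons.
type Cents int64

// FromFloat converts a major-unit amount to subunits using banker's
// rounding (round half to even) via math.RoundToEven.
func FromFloat(amount float64) Cents {
	return Cents(math.RoundToEven(amount * 100))
}

func main() {
	// 2.125 and 2.375 are exactly representable in binary, so the
	// half-to-even tie-breaking is visible:
	fmt.Println(FromFloat(2.125)) // 212.5 -> 212 (rounds down to even)
	fmt.Println(FromFloat(2.375)) // 237.5 -> 238 (rounds up to even)
}
```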
Money is not like a unit in a system of units that has fixed relationships. It's not only time-varying but also contextual in other ways, e.g. by locale, by financial status, or any number of other factors. This is a good analysis of how to represent money so that you can perform arithmetic and check for equality while accounting for contextual conversions:
Then there are 2 currencies with an exchange rate and transaction fees to convert between them. The black market currency represents the true value whereas the local currency is based on price controls.
On the contrary, because every currency amount can be expressed as an integer multiple of the lowest denomination, the currency has an intrinsic integer structure.
This has maybe been said before and given how abstract both this and the recently discussed (https://news.ycombinator.com/item?id=37493955) article are, it feels weird to say they are insufficiently abstract, BUT.. if you check out
you will see that probably 4-D (say Mass, Length, Duration, and Temperature or MLDT) is a more complete example. Or is it :shifts eyes:? Maybe it's 3, 4, or 7 like SI. Linear algebra by hand gets pretty tough at 4x4 .. 7x7 which operationally (with lack of software tools) probably drives the choice IRL more than anything.
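(For concreteness, the exponent-vector view of such an MLDT system is tiny to write down; a Go sketch with made-up names:)

```go
package main

import "fmt"

// Dim is a vector of integer exponents over the base kinds
// Mass, Length, Duration, Temperature (MLDT).
type Dim [4]int

// Mul multiplies two quantities' dimensions by adding exponent vectors
// (Dim is an array, so the receiver is a copy we can edit and return).
func (a Dim) Mul(b Dim) Dim {
	for i := range a {
		a[i] += b[i]
	}
	return a
}

// Pow raises a dimension to an integer power by scaling the vector.
func (a Dim) Pow(n int) Dim {
	for i := range a {
		a[i] *= n
	}
	return a
}

func main() {
	M, L, D := Dim{1, 0, 0, 0}, Dim{0, 1, 0, 0}, Dim{0, 0, 1, 0}
	velocity := L.Mul(D.Pow(-1))
	energy := M.Mul(velocity.Pow(2))
	fmt.Println(energy) // [1 2 -2 0], i.e. M L^2 D^-2
}
```

Checking an equation is comparing these vectors; converting between unit systems is linear algebra on the exponents, which is where the 4x4 .. 7x7 remark bites.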
In light of that, both articles are simultaneously mostly general in number of "base physical kinds" but also neglect treating the structure of projecting and inverse projecting (based on a partially known type - often known by variable naming conventions like E=energy) done with the common c=1 & dimensionless trick { projecting into the (Mass,Temp)-(DuraLen) 3-space } or c=hbar=1 "natural" or "particle physics" units. (In PLT-ese this might be thought of as partial type erasure, I suppose.)
Well, Terry's article at least mentions the erasure trade-off of dropping the `c`, but misleads by not mentioning that naming conventions can tell you how to put it back, allowing you to (almost) have your cake & eat it, too. For energy, add a c^2 factor, for momentum a 'c' factor, etc. I think it's a blurring of boundaries of a referred to object and the referring name / syntax of referral which can all kind of cut many ways, but which is very common in both physics and mathematics.
For the programming crowd, knowing what symbols are what sort of relates to IF you are willing to have "implicit" typing in FORTRAN where variables starting with 'i' .. 'n' are INTEGER and starting with the complement are REAL. That might sound like a big "IF", but in math & physics there are often all sorts of typographical conventions (like boldface vectors/capitalized matrices, Roman number classes like R,C,I, Greek & Latin sub/super-scripts, etc.). In a way, the move is really more copying part of the type from the value to the name/syntax or more splitting than erasure. From a certain point of view (with primes, subscript, syntax variants thought of as same-typed array members, not "truly new"), this isn't even that incoherent. Advisable? Harder social questions. https://en.wikipedia.org/wiki/Hungarian_notation notably does not list "suppressing all powers of 'incidental' constants" under Advantages. ;-)
Why does this matter? At least 3 reasons.
1) Some physicists actually do this, and not only for fundamental constants, which get the most "press coverage". I had a professor once who (to save on chalkboard RSI / keep derivations less noisy) liked to use units such that the harmonic oscillator spring constant was ==1 and dimensionless. In my experience, https://en.wikipedia.org/wiki/Natural_units come up at least as much as the mathematically (almost) equivalent Buckingham Pi theorem. All this usually after a flurry of symbol manipulation that might include very little renaming / very few truly new symbols/things.
2) 0..7 (or whatever) essentially is sort of a measure of how much type checking is happening. Mathematicians love to quantify things { as well as expound upon "almost equivalent" :-) }. I mean, they're vector spaces and transfinite cardinality says (x,y,z) is the "same infinity" as (x,y), but in more practical "finite expressional" senses "meant to be checked", 0 is like Python/Planck and 7 is like Haskell (or some other PL metaphor of your own you prefer). It's definitely a measure of how complex the linear algebra is to do unit system conversion, in regular old computational complexity senses. :) In fact, it might be covered in the "spending symmetry" cross links Terry gives, but if so that was maybe too implicit for me. :) I wouldn't have posted this if I thought it non-neglected. Maybe he needs a "symmetry buy back" article or something about name-referent type-splitting. ;-)
3) Finally, because you are doing type splitting / inverse projection with naming conventions (or other metadata) to pick up slack where your type system is weaker, it touches more on the kind of soundness-of-what-physicists-do unease that people here think Terry might have been worried about. { Only Terry knows for sure! :-) }