Specifically: Invertibility of all elements with respect to multiplication leads to the notion of a division algebra, and these have been studied for a long time. (https://en.wikipedia.org/wiki/Division_algebra)
When studying math at German universities, algebras are something you'll encounter in your 2nd year at the latest (they might already show up in 1st year analysis, albeit with a different focus). Implicitly, division algebras show up a lot when students learn about field extensions, Galois theory and the algebraic closure of a field (usually 3rd semester). A more general treatment of division algebras is not a common subject, though.
It's very intentionally written from the perspective of someone who knows linear algebra but maybe not much, if any, abstract algebra, because that's what the target audience is. Not university math students or graduates, but high school graduates, CS/physics/engineering students and graduates, game/graphics programmers and so on.
Yeah. It's written to teach people who do not have (what some might consider) the standard prerequisite knowledge. I have no doubt the author understands algebra over a field. The author recommends Geometric Algebra as a pedagogical approach at the end.
The object the author is actually interested in is known as a geometric algebra. One often sees it discussed as an alternative formalism for computer graphics or physics, as it works well for expressing things like rotations.
I think it is probably not so helpful to merely think of it like a division algebra, and it is better to stay focused on the geometry. Curiously I find it easier to relate “actual” linear algebra to geometry than the thing people often call “linear algebra” that involves writing columns or rows or grids of numbers and manipulating them.
No, this is wrong. Geometric algebras aren't division algebras in general: they usually have zero divisors. Objects that live in a single grade are invertible, but composite objects don't always have multiplicative inverses.
As a concrete example, consider the elements 1 + x and 1 - x, where x is a unit vector (so xx = 1). Their product is 1 - x + x - xx = 1 - 1 = 0. So certainly 1 + x doesn't have an inverse, either.
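If it helps to see this concretely, here's a minimal sketch in Python (a toy implementation of my own, not any particular GA library) of the geometric product for a Euclidean GA, with basis blades stored as bitmasks. It confirms that (1 + x)(1 - x) = 0 when x is a unit basis vector:

    # Multivectors as {blade_bitmask: coefficient}; bit i set means basis
    # vector e_{i+1} is present. Euclidean signature: every e_i squares to +1.
    def reorder_sign(a, b):
        # Sign from swapping the basis vectors of b past those of a
        # into canonical (ascending) order.
        a >>= 1
        swaps = 0
        while a:
            swaps += bin(a & b).count("1")
            a >>= 1
        return -1 if swaps & 1 else 1

    def gp(m, n):
        # Geometric product of two multivectors.
        out = {}
        for b1, c1 in m.items():
            for b2, c2 in n.items():
                blade = b1 ^ b2  # shared basis vectors square to +1, drop out
                out[blade] = out.get(blade, 0) + reorder_sign(b1, b2) * c1 * c2
        return {b: c for b, c in out.items() if c != 0}

    print(gp({0b00: 1, 0b01: 1},    # 1 + x
             {0b00: 1, 0b01: -1}))  # 1 - x  ->  {} (the zero multivector)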
Yes, I meant that equation to be interpreted in the GA used in the article. But essentially all geometric algebras also have zero divisors, for similar reasons.
I think the author is interested in clifford algebras, or more specifically geometric algebras, rather than division algebras.
Division algebras tend to be quite boring (if they are finite then they are just a finite field; if they are finite dimensional over an algebraically closed field then they are just the field itself. I guess the quaternions are an interesting example in the non-algebraically closed case, but I think if you are over something other than R you're really just talking about a field extension).
Clifford algebras are a sort of generalization of the exterior algebra one would have encountered in differential geometry and other settings.
In fact, a Clifford algebra could be considered a "quantization" of the exterior algebra, as in "quantum groups", which is an entirely different part of maths, but that's not what this article is about.
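To make the "quantization" remark concrete, here is the difference in relations (a sketch; Q is the quadratic form defining the Clifford algebra):

    Exterior algebra:  e_i e_j = -e_j e_i  for all i, j   (so e_i^2 = 0)
    Clifford algebra:  e_i e_j = -e_j e_i  for i != j,  and  e_i^2 = Q(e_i)

    or, basis-free:  uv + vu = 2 <u, v>

Setting Q = 0 recovers the exterior algebra, which is why Clifford algebras are viewed as deformations ("quantizations") of it.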
I think using the language of geometric algebras / Clifford algebras in physics, as this article does, versus the more traditional language is just a matter of taste.
Important correction: A division algebra is an algebra in which every non-zero element has a multiplicative inverse. The dual numbers, for instance, are a Geometric Algebra that is not a division algebra, because some non-zero dual numbers don't have inverses. In fact, almost all Geometric Algebras fail to be division algebras.*
So your point about division algebras is not particularly relevant to the article.
* - Frobenius's theorem classifies all the finite-dimensional associative division algebras over the real numbers. They are: the real numbers, the complex numbers, and the quaternions. There are no others.
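To spell out the dual-number example (a sketch, writing epsilon for the generator with epsilon^2 = 0):

    (a + b epsilon)(c + d epsilon) = ac + (ad + bc) epsilon

so (a + b epsilon)^{-1} = a^{-1} - a^{-2} b epsilon exists precisely when a != 0, and epsilon itself is a non-zero element with no inverse (it's even a zero divisor: epsilon * epsilon = 0).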
> It seems that this is written out of a perspective of some missing knowledge.
Well, the author talks about what "we learned in school", not university, so that checks out but only because you two have different audiences in mind.
Indeed. I learnt about vector dot and cross products, basic linear algebra (including diagonalisation, simple Markov chains and similar), partial differentiation, grad, div and curl, the volume of a parallelepiped and "all that Jazz" in high school, as a 17-18 year old. I learnt about the divergence theorem, Stokes's theorem, multivariate integration, integrating factors and higher order ODEs and simple PDEs, Fourier transforms and other integral transforms and similar in the first year of university (studying Physics).
This is not uncommon in the UK – though not universal either; it depends on exactly which A-level modules you did. My understanding is that it's quite rare to cover exactly this in high school in the United States – but there, I think limits are taught much more heavily. I think having a clear, short statement of assumed knowledge somewhere probably helps avoid these issues.
(I thought the article was excellent, and beautifully illustrated!)
I'll second this - my experience is pretty much the same (though I did some basic Fourier transforms at school), studying Physics at college. I did 3 Maths A levels along with Physics and Chemistry (and "General Studies" :) however, so I too may be atypical.
It is unexceptional (indeed, expected) to get through an American undergraduate science or engineering degree without ever taking an abstract algebra course (much less the 2+ apparently expected of German pure math students).
But in any event, the top post here by Garlef is barking up the wrong tree. Division algebras, field extensions, and Galois theory (per se) are not the tools to use for studying arbitrary-dimensional geometry. What you want is Clifford algebra (which Clifford himself, and later Hestenes, called "geometric algebra") and then geometric calculus, which can be used on arbitrary manifolds, in non-metrical contexts, etc.
Basic geometric algebra should be taught to advanced high school students and all undergraduates studying any technical subject.
Math students looking for a math-style introduction to geometric algebra should try Chisolm (2012) https://arxiv.org/abs/1205.5935
Damn. They don't teach this stuff here, at least not in a computer science curriculum. What degree did you study? Is this generally taught in all German engineering courses?
German and French engineering schools are pretty rigorous about math theory, for better or worse. Mostly because a lot of the theory was born in these two countries.
Can confirm, before I was anywhere close to serious computer science, I basically did an undergraduate degree in mathematics / physics.
And it goes through some very advanced subjects in both. It basically prepares you to be an engineer in whatever field you choose, be it a structural engineer, or a computer scientist.
I think in Germany, too, you have to study pure math to see this. Where I study you can even avoid it by never taking Algebra 1, which is not obligatory (though imo it should be standard; that's where you learn why polynomials of degree >= 5 have no general closed-form solution in radicals).
> Not much maths in engineering/CS in germany either.
That's simply not true. It depends entirely on the particular institution and its roots.
There are two origins of CS in German universities: electrical engineering and maths. At universities where CS originated as a subfield of maths, undergrad CS education is very similar to a maths undergrad to the point that most of the tests/mid-terms are basically identical between CS and maths.
If on the other hand CS came from the electrical engineering department, the focus is significantly less on maths and the lectures are very different indeed.
So you'd have to look at the history of each university and where the CS department originated to find out.
Karlsruhe Institute of Technology is an example. Analysis 1+2 and Linear Algebra 1+2 are compulsory for every CS student (the exams are practically identical to the mathematicians', just missing a single topic; you can always switch to the math lectures and get a few more ECTS). This is a great background for more advanced, theoretical lectures (CS+Math), but it's lacking electrical engineering lectures and especially electrical engineering practice.
TU Ilmenau has Analysis and Linear Algebra, but those are only attended by mathematicians. There is a different class called "Math for engineers" which mechanical engineering students, computer science students, etc. attend; it has different tests and a different focus but still teaches solving linear ODEs and the like.
>In this post we will re-invent a form of math that is far superior to the one you learned in school. The ideas herein are nothing short of revolutionary.
and concludes:
> I firmly believe that in 100 years, Geometric Algebra will be the dominant way of introducing students to mathematical physics. In the same way that Newton's notation for Calculus is no longer the dominant one, or that Maxwell's actual equations for Electromagnetism have been replaced by Heaviside's, textbooks will change because a better system has come along.
These claims are wrong.
There are three standard notation methods in physics: vectors, tensors, and differential forms. Geometric algebra is, as the article points out, a more powerful version of the usual vector notation. But it is deficient in various ways when compared to tensor notation (for calculations) and differential forms (e.g. if you want to work basis-free). [I'm oversimplifying a bit, but a full discussion is too long for a comment here.]
Anyway, geometric algebra is not some esoteric secret. People know about it and have decided not to teach it, because the stuff that's already taught is better.
[I picked up this specific phrasing from another user here, knzhou, which I think is a particularly good way of explaining it.]
I wish I had been introduced to Geometric Algebra or the calculus of forms or whatever it is called during my physics studies. We learned all the conventional things you need for classical mechanics and electromagnetism, like div and curl and BAC-CAB. But there were a couple of things that we were not taught well, which caused problems later. One thing is that at first, a vector was just an N-tuple. But in physics, something is only a vector or a tensor if it behaves in a very specific way under transformations. (The infamous "a tensor is an object that transforms like a tensor"...)
The other thing is that I hit a wall when trying to read theory papers, because nobody ever explained what a wedge operator or a two-form and so on is. I was able to mechanically follow calculations, but never got a really good intuition.
We learned math together with the mathematicians in a very axiomatic way. Without a geometric intuition, I had a hard time understanding why you need dual vectors like one-forms or covariant vectors. They seemed like just a convoluted way to write the scalar product.
> Geometric algebra is, as the article points out, a more powerful version of the usual vector notation
> the stuff that's already taught is better
These two statements seem contradictory.
> But it is deficient in various ways when compared to tensor notation (for calculations) and differential forms (e.g. if you want to work basis-free)
The author made no claims about tensor notation or differential forms; perhaps those might displace all vector-like notation (GA or otherwise) in 100 years, in which case the author's claim can be weakened to "in 100 years, GA will be the dominant form of vector notation".
As for "the stuff that's already taught is better", notice that such stuff includes:
- Complex numbers
- Pseudovectors
- Cross-products
- Matrix algebra
- Dirac notation
You seem to agree that GA is "more powerful" than the Gibbs-style vector algebra normally taught. The article is arguing that GA is also a simpler and more consistent approach, which I agree with (complex numbers are certainly simpler on their own, but are a little redundant if we're using GA for the rest).
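On that redundancy: in the GA of the plane, the unit bivector already plays the role of i, e.g.

    (e1 e2)^2 = e1 e2 e1 e2 = -(e1 e1)(e2 e2) = -1

so elements of the form a + b e1 e2 (scalar plus bivector) add and multiply exactly like complex numbers a + bi.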
From my own experience in formal education (UK high school and undergraduate physics), I never encountered tensors or differential forms. I've since learned a little about tensors (for general relativity), but that's been due to my own curiosity; I've learned a little about GA for the same reason. I never used quaternions (hence why I left them out of the above list), although I'm aware of them and that they're used e.g. in computer graphics. I used vectors, pseudovectors, and matrices a lot, and I'm certain those topics would have been easier to learn and comprehend if they'd used GA instead.
Differential forms are a particular kind of tensor and tensors can be defined in terms of multilinear maps. As spekcular says, the standard curriculum covers differential forms, tensors and vectors. This entails becoming familiar with multivectors, the wedge product and multilinear algebra, making geometric algebra a relatively small delta to pick up.
On the other hand, the standard course will also prepare you for mathematical topics like Lie derivatives, differential geometry and de Rham cohomology.
Other than physics, the standard approach equips you with the mathematical machinery underlying many topics in machine learning and statistics, like Hamiltonian Monte Carlo, automatic differentiation, information geometry and geometric deep learning.
The central advantage of geometric algebra over the standard approach isn't that it's better or more general; it's that the pedagogical material for it is generally leagues better than that for the standard course.
> As spekcular says, the standard curriculum covers differential forms, tensors and vectors... making geometric algebra a relatively small delta to pick up.
Could you be a bit more specific about which "standard curriculum"/"standard approach" you're talking about?
For example, in my formal education (high school; masters with physics major, comp. sci. minor; 4 years of a comp. sci. PhD (abandoned)), I did not encounter differential forms, tensors, multivectors, the wedge product or multilinear algebra (or quaternions, Lie derivatives, differential geometry, (co)homology, etc.).
Maybe you're talking about a "standard approach" for a pure mathematics curriculum, or perhaps physics/math grad school?
All I can say is that high school and undergraduate physics (in the UK, circa the late noughties) (a) does not standardise on those topics, (b) is filled with tricky operations which are easy to mix up or perform the wrong way around (e.g. cross products, matrix multiplication, pseudovectors), and (c) many of those annoyances would simplify away under GA.
It's a cliche that physicists (certainly when teaching) cherry-pick the parts of mathematics they find useful. All of those concepts would certainly be useful in a physics course, but would perhaps be too much to fit in; yet there's certainly enough scope to cherry-pick GA (since we can drop Gibbs-style vector algebra[0] to make room). Perhaps something else, like differential forms, might be even better; I honestly don't know (maybe I'll do some reading about it).
[0] By "Gibbs-style" I mean the 'cross product and dot product ought to be enough for anyone' approach that permeated my undergraduate learning.
By standard approach I mean the typical material covered for someone studying vector calculus properly. This will be stuff like differential forms and the basics of tensors, manifolds and multilinear maps at the undergrad level. Differential geometry and cohomology are examples of courses which build on them.
I agree with you that pseudovectors, cross products and vector calculus are a terribly ad hoc way to teach this stuff, but a course covering linear algebra with differential forms elegantly unifies, corrects and generalizes them. "Standard" is also in contrast to the geometric algebra/calculus alternative path.
If you can’t invert vectors, you aren’t studying vector calculus properly. ;-)
Differential forms are a half-baked formalism.
Unfortunately I don’t know of any great undergraduate level geometric calculus textbooks. Ideally there would be something like Hubbard & Hubbard’s book (http://matrixeditions.com/5thUnifiedApproach.html) written using GA as a formalism.
From my view, it goes both ways: geometric algebra/calculus is a more transparent version of the standard approach and the translation back to it is also a relatively small delta to pick up.
Either way of going about what is in essence the same material entails becoming familiar with multivectors, the wedge product, and multilinear algebra, whether you do it through geometric algebra or the standard approach.
That makes sense, but my argument is that since further material (some examples of which I listed) assumes and builds upon the standard approach, you'll likely be better off taking that path.
> I never encountered tensors or differential forms.
Your UK undergraduate physics must have been a bit different to mine. About a third of my physics course was taught by the maths dept, and tensors/algebras were very much a part of that.
I recall, after freshers' week, the dean getting everyone together. He said two things:
- Hopefully you all had a great freshers' week, now it's down to business, and
- Make sure you have fun at college.
He also had a projection on the overhead saying "If you can't blind them with science, baffle them with bullshit". I'm reasonably certain the second statement above was the latter, because...
He then casually mentioned a "maths refresher" 2 week course that all freshers had to take before "the real stuff" started. That "maths refresher" was the entire Further Maths 'A' level syllabus. In two weeks. Those of us who had done Further Maths at school were fine. Those that hadn't were shell-shocked.
> That "maths refresher" was the entire Further Maths 'A' level syllabus. In two weeks. Those of us who had done Further Maths at school were fine. Those that hadn't were shell-shocked.
Heh, that reminds me of my first physics course in an undergraduate computing degree (in Romania). The curriculum was so well designed overall that this Physics course needed linear algebra concepts that would be taught halfway through the semester in Algebra, integration over surfaces and the like that would be taught at a similar point in the Calculus course, and some Statistics I don't remember that would only be taught in the second semester.
The prof's solution? He taught a 3-hour course covering all of the above, and considered that good enough for all future courses. This particular Physics course later went on to cover analytical mechanics (generalized coordinates, Lagrangians, Hamiltonians), electricity, general relativity, statistical thermodynamics, and quantum mechanics, all in a single semester.
Needless to say, 99% were happy they passed and couldn't tell you a single thing about any of these subjects a few minutes after the final exam.
> That "maths refresher" was the entire Further Maths 'A' level syllabus. In two weeks. Those of us who had done Further Maths at school were fine. Those that hadn't were shell-shocked.
Heh, I recall managing to coast for a short time thanks to having done AS Further Maths.
The Further Maths syllabus was quite modular, and the modules our teachers picked had some discrete maths (sorting algorithms, Dijkstra's algorithm, the bridges of Königsberg, etc., which was useful for comp. sci.), and some which complemented the regular maths course (complex numbers and more calculus, which was certainly useful for physics).
Our Further Maths was a lot less modular. Preparation for it started in the 2nd year (so 12/13 years old), when we were streamed for maths - if you were in set-1, you studied to take 'O' level (showing my age here) in 4th year (so a year earlier than most) on an accelerated schedule.
That meant you could take AO (a halfway house between O and A) when everyone else was taking their normal O levels. The thing is that the extra stuff in AO was all Pure Maths, and formed a fair amount of the easier "P1" maths syllabus for the normal A level maths exam, which had P1 and Me1 (Maths with mechanics 1, basically statics).
Because you'd done that work already prior to the A level years, you could take "A level maths" after only 1 year (which looked really good on UCCA applications :), so you've now done an exam consisting of the two 'P1' and 'Me1' papers in the first year of your A levels.
Which meant that in the second year of your 'A' levels, you could do 'Pure Maths' (P1, P2) and 'Further Maths' (Me1, Me2) for a total of 3 maths A levels.
On top of that, you had your other two subjects (mine were Physics and Chemistry), and because it was the JMB board, everyone got to do "General Studies".
Getting all of them gave you 6 A levels, even though some of the work was duplicated in the maths arena (over different years of course :)
S levels were a bonus on top - there was no fudging for those, though, you just took what you thought would be useful to study. They gave me maths and physics because I'd said I was going to do physics at college... :)
> it is deficient in various ways when compared to [...] differential forms (e.g. if you want to work basis-free)
There is nothing basis-dependent in Geometric Algebra. This presentation started from a basis, but then again so do many presentations of differential forms, leading to 2-forms like dx \wedge dy and so on.
The actual difference is that Geometric Algebra requires a choice of inner product (actually, you can get away with any bilinear form), while differential forms do not. However, some of the important operations on differential forms in physics do require an inner product (e.g. the Hodge star operator and the codifferential), so you end up back on equal footing with GA again.
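For a concrete instance of that metric dependence: in Euclidean R^3 the Hodge star acts as

    *(dx) = dy \wedge dz,   *(dx \wedge dy) = dz

and changing the signature (say, to Lorentzian) changes these signs, so * (and hence the codifferential, which is built from it) depends on the metric, while d itself does not.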
I worded that in an unclear way. My main beef is that GA is just way clunkier than differential forms, which are clearly the "right" approach if you want to approach the subject from a theoretical perspective. I see no advantage over the usual treatment, and many disadvantages.
I'm a skeptic too, but I might not be the intended user for the GA formalism. Please explain your reasons.
My skepticism of the supposedly superior pedagogy of Geometric Algebra is the following:
- 3D vector algebra with the cross product operation and the dot product operation is fairly easy and intuitive. Its replacement by GA might not be so easy. So maybe GA should be introduced after the vector formalism.
- An arbitrary element of a Geometric Algebra might not have a geometric meaning. For instance, some elements of a GA are vectors, while some are scalars, but there are also these exotic mixed quantities which are scalars plus vectors. This is pretty hard for me to understand intuitively.
- An arbitrary matrix has a geometric meaning. It's essentially just a linear transformation. By contrast, I don't feel that an arbitrary element of a geometric algebra has a geometric meaning.
- Consider those elements of a Geometric Algebra which represent rotations -- they are called rotors. Observe that if "z" is a rotor then the element "-z" is also a rotor which stands for the same rotation as "z". So there is more to a rotor than whatever rotation it describes. This seems very unintuitive and advanced. (I know that this behaviour has applications for the study of spin-1/2 particles in quantum physics).
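(The two-to-one behaviour in the last point is at least easy to state: a rotor acts by a sandwich product, which is quadratic in the rotor, so the overall sign drops out:

    (-z) v (-z)~ = z v z~    where z~ denotes the reverse of z.)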
I also have trouble understanding where the rule for multiplying two elements of a geometric algebra comes from. It's an operation, introduced from seemingly nowhere, which happens to have some applications in some areas. But I'm not comfortable with a multiplication rule being introduced out of nowhere without being derived out of something. The claim that it has a consistent geometric meaning from which it can be derived is never justified. My criticisms are therefore largely pedagogical.
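(For reference, the rule can be derived from one axiom beyond associativity and distributivity, namely that a vector squares to its squared length, v^2 = |v|^2, though introductions rarely lead with this:

    (u + v)^2 = u^2 + uv + vu + v^2   =>   uv + vu = 2 u.v   (polarization)

Defining u \wedge v = (uv - vu)/2 then splits the product into symmetric and antisymmetric parts: uv = u.v + u \wedge v.)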
I think your reasons are good and essentially what I would give. I also think differential forms are a much theoretically cleaner way to express the same concepts.
Further, teaching differential forms prepares my students to engage with the (vast majority of the) existing math and physics literature. Teaching geometric algebra doesn't.
The practical reasons boil down to: I have to teach the standard stuff because otherwise they can't read the literature. Having done that, what's the marginal benefit of teaching GA? Not a lot.
On the other hand, maybe GA can help with differential forms. Differential forms involve exterior algebra, and I feel like some aspects of exterior algebra are elaborated upon in an insightful way by GA. For instance, the grade-2 elements of an exterior algebra can be understood as angular velocities in many circumstances. In GA, this is captured by the exponential map that sends grade-2 elements to rotors. I don't know if this can be helpful for teaching purposes.
> Geometric algebra is, as the article points out, a more powerful version of the usual vector notation. But it is deficient in various ways when compared to tensor notation (for calculations) and differential forms (e.g. if you want to work basis-free). [I'm oversimplifying a bit, but a full discussion is too long for a comment here.]
I have no opinion about the claims, but I loved the article, as I quickly saw the gains from this algebra for my very basic needs.
After checking out your 2 suggested alternatives, I'm not so convinced they are easier to understand.
>Geometric algebra is, as the article points out, a more powerful version of the usual vector notation
That's not just a gross oversimplification, it's also flat out wrong if what you meant was that it only has vectors. It has more general objects called multivectors, obtained through much the same process by which you get one-forms, two-forms, etc. from the wedge product.
In fact, both GA and differential forms build from the exterior algebra, and you can go from the former to the latter through geometric calculus (one key difference e.g. would be the method of reciprocal bases to compute inner products with non-orthonormal bases, rather than explicitly working out a basis and then its dual). So I'm confused about your remark regarding its alleged deficiency vs. differential forms if you pretty much reconstruct it within the GA/GC system (especially regarding working basis-free).
As regards tensor notation for calculations, if you mean all that index gymnastics, GC still provides that familiar way of computing things when you want it.
What I like about geometric algebra/geometric calculus is precisely the way in which it's nothing new: it puts everything people use into one system by clarifying the connections between these seemingly disparate systems. Even Lie groups/Lie algebras can be constructed rather efficiently in the algebra.
Another appealing feature of GA is its ability to make pretty transparent an old theorem of Cartan and Dieudonné which says you can view geometric transformations like rotations, and even translations (in projective geometry), as compositions of reflections.
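The reflection story is short enough to state (a standard GA identity, not specific to the article): reflecting a vector v in the hyperplane orthogonal to a unit vector n is

    v -> -n v n

and composing two such reflections gives a rotation by the rotor R = n2 n1:

    v -> -n2 (-n1 v n1) n2 = (n2 n1) v (n1 n2) = R v R~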
There are other appealing features like this in terms of classifying and relating different geometries, harkening back to the Erlangen program. But my point is that even in terms of concrete calculations, it's not quite right to say GA is just a "more powerful version of the usual vector notation": it includes more general objects than vectors, and it still supports many of the ways of calculating you're used to from tensor index computations (almost a kind of "backwards compatibility"), with the added bonus of making the transition from vector calculus to tensors seamless, alongside making its relations to other systems more transparent.
To put it in concrete terms, where does GA really fit into the story of undergraduate physics (or mathematics)?
Suppose I want to teach first-semester mechanics. I can get through this fine with the usual vector notation. Vectors and dot products are intuitive when taught well (the latter just being projections), and while cross products are a little hairy, they don't play a major role in the course. There's no time for GA, and it would confuse more than illuminate in any case.
Next, I want to teach E&M. Here, I'd probably lead with the usual vector calculus notation (because even if it's ugly, it's standard and students should know it), and then follow with an explanation in terms of differential forms. [I assume this is a more theoretical, or honors, class; I might stick with vector calculus if it's more computational.] So now students know differential forms, they can do everything in a coordinate-free way and on manifolds, and they can access a significant amount of standard physics and mathematics literature.
Having proceeded in this way, what does introducing GA do except suck up a lot of class time? To me, it seems clunky and without any distinctive advantages.
Another question to think about: if this notation system is so good, why don't working mathematicians or physicists actually use it? For example, people thought Feynman diagrams were strange at first, but they proved their value and consequently caught on.
Again, my argument is that this is not some revolutionary esoteric knowledge, it's well-understood stuff that people don't teach for good reasons.
> Suppose I want to teach first-semester mechanics
If you need to teach undergraduate mechanics, I highly recommend you at least read some of Hestenes' New Foundations for Classical Mechanics: http://geocalc.clas.asu.edu/html/NFCM.html
> without any distinctive advantages
The most basic distinctive advantage is that you can invert vectors (which is incredibly useful!!) without needing to pretend that vectors are matrices, complex numbers, or some other kind of object.
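Concretely (this is standard GA, assuming a positive-definite metric): the geometric product of a nonzero vector v with itself is the scalar v^2 = v.v = |v|^2, so

    v^{-1} = v / |v|^2,   since   v (v / |v|^2) = v^2 / |v|^2 = 1.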
GA takes most of the advantages of complex numbers vs. R² for representing plane geometry, but extends them to arbitrary dimension, and extends them further (when using complex numbers for plane geometry you end up representing vector–vector products via the obscure z̄w product involving complex conjugation, and it is easy to get confused about the difference between a vector vs. a scalar+bivector).
But there are a wide variety of other powerful (and geometrically interpretable) algebraic identities which can be applied to vectors, blades, and multivectors, ranging from awkward to impossible to express using the language of differential forms, Gibbs-style vectors, etc. Physicists often end up resorting to tedious coordinate-by-coordinate calculations for stuff that would end up being an easy vector expression in GA. Learning these identities and how to apply them takes years and a lot of practice solving problems using GA.
My own experience for the first few years of knowing that GA existed but not being too fluent with it was that I would work some problem (mostly 2–3 dimensional geometry problems) out in coordinates, spending like 2 pages of scratch paper for the opaque intermediate calculations, with high chance for mistakes, then eventually find that most of the ugly bits along the way canceled and yielded a nice result. Then I would think a bit more about the problem, skim through a list of GA identities, and find I could have shortened that 2 pages of work to 3 lines, each of which had an obvious geometric interpretation.
Can you give an example of a problem that might appear in an undergraduate physics or math course, whose solution is lengthy and tedious by "usual methods" but dramatically simplified by the use of GA?
I have seen examples proposed before and been distinctly unimpressed. Any serious simplifications in solutions are usually due to some notation-agnostic insight.
As a relatively recent personal example I spent a few months (in bits and pieces) working out a bunch of metrical spherical geometry for myself without reference to past work, with points represented as displacement vectors to stereographically projected points at https://observablehq.com/@jrus/planisphere with the eventual goal of implementing computational geometry / cartography code using that as a canonical representation, which I think is superior to representations used currently in practical software.
The same spherical relationships are comparable (some things slightly easier, some slightly trickier) to represent as displacement vectors on an embedded sphere. But there again the relationships are clearer to express in GA terms.
Most of the material there is stated without proof (maybe eventually full proofs should be included), but several of the identities there I worked out very tediously with pages of scratch work in coordinates, then realized afterward the same results could be arrived at with only a few lines of GA.
Only a bit of the material is truly novel (after doing the work myself, I hunted around for sources and found some of the same formulas worked out previously using classical spherical trigonometry 200+ years ago), and e.g. some very similar material where the stereographically projected points are represented as complex numbers can be found at http://fer3.com/arc/img/110279.applications%20of%20complex%2...
In theory most of the rest could be also worked out using complex numbers or matrices, but (a) some ideas end up awkward and unidiomatic there so you would never think to do it, so that many identities that are slightly obscure in GA are almost unheard of written in other formalisms, (b) the algebraic manipulation is at least 2–3x more cumbersome.
That is a strange and unconvincing article. In outline, it goes:
1) Look at this slick solution using geometric algebra.
2) Look at how ugly the trigonometric solution is. GA is so great!
3) And, by the way, one can mechanically translate the GA solution into the usual vector notation.
Point 3 is even a bit understated: GA concepts are really only used for a few lines under "Solving for the Earth's radius." Once you hit the equation for epsilon^2, it's just standard algebra and trig.
Anyway, the relevant comparison is not between trig and the GA solution; it's between GA and the usual vector language. It's really the same method in different notation. You take the cross product and separate into parallel and perpendicular components, and then you reach the epsilon^2 equation, and it's the same from there.
Also, I think the author is far too hard on the trigonometric solution. The vector solution is somewhat clever, and for any given problem that gets placed in front of me, it's not obvious that a slick solution exists. On the other hand, the philosophy of trigonometry is that given a completely determined problem about triangles, you can just trig-bash mechanically to get an answer (and here you can even see before starting that small-angle approximations will make life easier, so trig is even more attractive). It's really not that bad here. Especially, the comment about it being tricky because one must find a "non-trivial relationship between the four angles" is puzzling. Anyone who's spent time with geometry problems like this knows that the first step is to angle-chase and write in all the values and relations, from which this falls out immediately (and again, totally mechanically). Then you just turn the algebra crank and win.
[I don't have time to think about the spherical geometry stuff. Sorry!]
>It's really the same method in different notation. You take the cross product and separate into parallel and perpendicular components, and then you reach the epsilon^2 equation, and it's the same from there.
Bivectors and the cross product aren't just different notation for the same thing, if that's what you meant. They're distinct (but very much related) mathematical structures. For one thing, the wedge product is associative while the cross product breaks associativity.
As for GA sharing a lot with the more common vector algebra/calc methods: personally, I'm happy that GA has an attitude of "if it ain't broke, don't fix it". It also means there's really not a lot of time lost in the transition, due to the compatibility. Hell, it's even backwards compatible in the sense that you can still easily retrieve the axial vectors the cross product gave you if you so wish (which instantly cleared up what the exterior algebra folks were doing with their Hodge star business when I decided I wanted to explore that perspective later on).
> ... by clarifying the connections between these seemingly disparate systems.
This is the big reason I like it. I remember learning bits and pieces of linear algebra whose rules seemed so entirely random (vectors vs pseudo-vectors, curl, quaternions, some spin calculations I can't even remember anymore, etc), that turned out to have more unified geometric interpretations once I learned GA. It definitely slowed me down as a student who hated 'just memorize it' pedagogy.
> People know about it and have decided not to teach it
Generally teachers don’t (can’t) individually decide this. Decisions about what to teach have incredible historical inertia, and are largely decided based on what the teacher learned when they themself went to school decades ago, what everyone else is teaching, what materials are easily available, what notations are used in past literature, etc. Substantial transitions in the teaching of existing material take generations.
In 2020 our basic math/science curriculum and pedagogy in high schools and universities has all been pretty well statically fixed for 50+ years (many parts are unchanged in 200+ years), except in computer science where some of the basic ideas are newer than that, and in graduate-level courses that get closer to the cutting edge.
* * *
The place where geometric algebra has seen most rapid adoption is in computer programming, where code actually has to work, and a more effective formalism makes correct code easier to write and reason about, saving a ton of time and effort even for basic examples.
Even in physics, where a better formalism leads to improved physical intuition and deeper conceptual understanding, a transition is an uphill struggle, because symbolic fluency with geometric algebra takes years of practice.
(Some) pure mathematicians on their high horses scoff at anything that doesn’t advance their own obscure abstract research, which is unconcerned with conceptual obstacles faced by undergraduate students, scientists, or engineers. They can hand-wave a better formalism away with “this is isomorphic to X and Y other structures, so there’s no value in it”.
> Substantial transitions in the teaching of existing material take generations.
> In 2020 our basic math/science curriculum and pedagogy in high schools and universities has all been pretty well statically fixed for 50+ years (many parts are unchanged in 200+ years), except in computer science where some of the basic ideas are newer than that, and in graduate-level courses that get closer to the cutting edge.
This is wildly incorrect. Even in the past ~20 years we've seen a sea change in our understanding of science pedagogy. Look up the work of Carl Wieman on active learning, or https://www.pnas.org/content/111/23/8410. Inclusive classroom practices are another thing that's come into fashion in the last ~10 years. The curriculum has also evolved; the most obvious thing to point to is the new emphasis on connections to data science in math/stats courses.
If you're someone who doesn't stay up-to-date on pedagogy, then yes, it takes your retirement to bring about a change. But a lot of people, especially those teaching at small liberal arts colleges, have continually evolving teaching practices. There are entire conferences where people get together to talk about college teaching.
> Even in physics, where a better formalism leads to improved physical intuition and deeper conceptual understanding, a transition is an uphill struggle, because symbolic fluency with geometric algebra takes years of practice.
Do you really believe this? To anyone who recognizes that it's the standard stuff in (a clunkier) disguise, it shouldn't take years.
Appealing to these two frictions does not offer a convincing theory of why GA has not been adopted despite being around for, what, 50+ years? The fact that it's worse than existing notation does.
Have you ever spent a few months trying to solve a wide variety of problems using GA as a formalism, or tried teaching it to e.g. undergraduates? If not, you are speculating beyond your experience.
I've taught math to plenty of undergraduates – enough to know what does and doesn't play well – and I made an honest attempt to find problems where GA might have some advantage. It is the fact that I sunk several hours of my life into this with no reward that explains why I'm a little salty in this thread, and motivated to warn other people away.
I'd suggest that your comment that math and science pedagogy have been static for the last 50+ years reveals that you are the one speculating beyond their experience.
There has been continuous research into alternative pedagogy, but the typical undergraduate intro math/science course looks pretty much unchanged in both pedagogy and curriculum. My undergraduate math and physics courses circa 2005 were only slightly different than similar courses from 1960 (the main differences were things like an online discussion board in some courses, some courses with power point slides instead of a chalkboard, videotaped lectures in some courses, use of computers to type up papers instead of typewriters/handwriting), and the typical course is still not that much different 15 years later.
One of my hobbies is skimming old math textbooks; Lacroix's textbook from about 1800 is not essentially different in structure or content from a typical 2020 intro calculus textbook for undergraduates or high school students, or almost any book in between. Way less radical or era-appropriate than something like http://www.math.smith.edu/~callahan/intromine.html
If you hunt you can find teachers trying new ideas (and you could also find teachers trying non-mainstream pedagogy 20, 40, or 60 years ago), but it takes generations for ideas to turn over.
> the typical course is still not that much different 15 years later.
For context, I checked your profile to see where you did your undergraduate degree. I am familiar with the way calculus is currently taught at that university, and it looks quite similar to the "radical [...] era-appropriate" textbook that you linked (at least based on a quick read of a few chapters). Those courses are also taught in a quasi-active-learning style (though nothing as extreme as a flipped classroom, etc.). Your observations may have been accurate 15 years ago, but that's thankfully no longer the case. There's also pressure from the department/admin to make these changes in upper-level courses. See e.g. https://people.math.harvard.edu/~community/inclusive-classro... or materials from https://bokcenter.harvard.edu/active-learning.
I’m glad to hear that. I never interacted with the intro calculus course there. My impression is that most intro calculus courses around the US today still use some book like Stewart, Larson, or Thomas, and still teach in traditional lecture style.
In poking around I am also glad to see they switched from Griffiths’s to Townsend’s book for intro QM. Much more conceptually clear with less focus on mindless computation. (Disclaimer: I went to high school with Townsend’s daughter.)
I wonder if anything similar can be done for the undergrad electrodynamics course, which was more or less an experiment of “how many gnarly multiple integrals can you grind before burning out?”
> I wonder if anything similar can be done for the undergrad electrodynamics course, which was more or less an experiment of “how many gnarly multiple integrals can you grind before burning out?”
The classical field theory course was one of my favorites at the master's level. Classical EM is beautiful in the sense that by sprinkling some math magic you can basically calculate everything from a few basic laws. Everything sort of fits together in a coherent, tight package. TBH, the class did have a fearsome reputation for being math-heavy, and many of my classmates struggled (which was weird, because I was never super-strong in math compared to many of them).
If deficiency to tensors and differential forms is a reason not to be taught, then why do we learn about vector notation?
Oh right, because it's a natural language for talking about geometry and mechanics, by far the most common and important type of reasoning that the average student will need to do. And geometric algebra is demonstrably superior for that domain. So your comment is pointless.
This may be true in some places, but my undergraduate physics education spent a lot of time on standard Gibbs-style vector calculus. Taylor Classical Mechanics and Griffiths Electrodynamics especially depend on them. Maybe there is a case to be made that first years should start with differential forms, but until that happens I think geometric algebra could be a big improvement.
Ah, another geometric algebra evangelist? I can't figure out if GA actually adds anything substantial, or if it merely lets us write some equations in a more succinct fashion. But it certainly looks cool.
As to vectors, obviously they have inverses, additive inverses. Since vectors don't have multiplication, there is no multiplicative inverse, but if you define new operations on them, well then that operation can have an inverse but that is not really "the inverse of a vector" anymore.
Heh, this made me chuckle, and this is why I read HN comments. I can't help but picture someone ringing your doorbell early on a Sunday and saying: "Hi! Have you considered inverting vectors?" slams door "Who was it?" "It was just another of those damned geometric algebraists"
If you are working in ordinary Euclidean space with orthonormal bases, it doesn't do much for you. When you start doing calculus on embedded surfaces (the beginning of differential manifolds) it begins to be more helpful. Since Lie groups are special types of differentiable manifolds, you can learn quite a bit by studying the geometry, and this quickly leads into gauge theory and modern physics. The Geometry of Physics: An Introduction, by Theodore Frankel, does a good job illustrating many aspects of geometry, even offering some geometric insight into classical physics.
It is an absolute necessity for general relativity.
That book seems very interesting, but is it really about Geometric Algebra? I'm not talking about geometry in general, or differential geometry, which I'm a bit familiar with. And not Algebraic Geometry either, for that matter, which is also a fascinating subject.
I mean specifically Geometric Algebra.
It sort of seems like a notation, but it has an almost cult-like following, and perhaps it's more than a notation: is it a theory, a branch of mathematics?
There is Geometric Algebra for Physicists by Doran and Lasenby (2003). It recasts mechanics, E&M up to gauge theories and GR into geometric algebra. I stalled out at mechanics but I've now taken it off the Tsundoku pile and may give it another chance.
It really is useful even at the advanced level. Following it far enough leads to the Atiyah-Singer Index Theorem and Hodge theory. The advantage over the exterior algebra is that you have that and an interior product, which leads to many formulas in differential geometry becoming very natural (like Cartan's magic formula).
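(For reference, the formula mentioned: the Lie derivative along a vector field X, acting on forms, is

    L_X = d i_X + i_X d

where i_X is the interior product, i.e. contraction with X.)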
> As to vectors, obviously they have inverses, additive inverses. Since vectors don't have multiplication, there is no multiplicative inverse
A vector is pretty much by definition also a matrix, and there is a standard way to multiply matrices. You can define several inverses of a vector that way, though you can't define a unique inverse.
The standard inner product is of course also an exceptionally typical way to multiply vectors, but the concept of an inverse there doesn't make much sense.
No, a vector is defined as an object that has certain properties, like addition and scalar multiplication. It's a very general, and abstract concept.
There are vector spaces of functions, with infinite dimension, but there are also vector spaces with a finite number of elements.
So only some vectors can even be written as 1xN matrices, if that is what you're referring to. But even if you write a vector that way, it doesn't mean it IS a matrix or that it automatically "has" multiplication.
In mathematics, an object only has an operation if it's part of the definition, and as such, vectors don't "have" multiplication.
What you say is mostly correct, but only for a certain meaning of the word "vector", which has been used with 2 distinct meanings since its introduction in the first half of the 19th century.
The set of elements defined by certain properties of their addition and of their multiplication with the elements belonging to a set of scalars is named "vector space" by some and "linear space" by others.
According to the etymology of the word vector, "linear space" would be more appropriate. You have used "vector" with the meaning "element of a linear space", and what you have said is correct, except that for any "vector" as an element of a linear space, considered as a column vector, there exists a corresponding row vector, even in the infinite-dimensional case.
"Vector" means translation of the space, and this is what "vector" meant when the word was introduced by Hamilton. While the set of translations is a linear space a.k.a. a vector space in the generalized sense, the set of translations, i.e. vectors in the strict sense, has additional properties due to the multiplication operations that must be defined for "vectors" in their strict sense (which are needed e.g. to determine the angles between translations and the distances).
"Vectors" as elements of linear spaces are a very general notion, which appears in many domains, and for all linear spaces, including for those infinite-dimensional, you can define matrices, i.e. linear functions, and matrix multiplication, i.e. composition of linear functions, and also the correspondence between a 1xN vector and a Nx1 vector, more correctly between a vector and an associated linear form. The latter also exists for the infinite-dimensional case, even if it is less likely to use names like row vectors and column vectors (though the names bra vectors and ket vectors are still in use for the infinite-dimensional case).
For the infinite-dimensional case the vectors and the matrices become functions of 1 or of 2 parameters and the sums from the formulas of matrix multiplication become integrals.
While for most computer applications, "vectors" refer just to elements of linear spaces, most "vectors" used in models of physical systems are vectors in the original sense of the word, where not only the vector addition and the product with scalars matter, but the products of vectors also have an essential role and their meaning can be best understood in the context of the complete geometric algebra theory.
> A vector is pretty much by definition also a matrix, and there is a standard way to multiply matrices.
A standard way to multiply an MxN with an NxK matrix, but none for a 1xN with a 1xN or an Nx1 with an Nx1 matrix - the two possible ways to describe a vector.
You have to transpose exactly one of the two vectors.
And then you have two possible results, 1xN multiplied with Nx1 yields a scalar (that's actually the 'usual' dot-product/scalar product/whatever you call it) and Nx1 multiplied with 1xN, where the result is a NxN matrix.
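A quick numpy illustration of those two products, just for concreteness:

    import numpy as np

    v = np.array([1.0, 2.0, 3.0])
    w = np.array([4.0, 5.0, 6.0])

    inner = v @ w           # 1xN times Nx1: a scalar, the usual dot product
    outer = np.outer(v, w)  # Nx1 times 1xN: a rank-one NxN matrix

    print(inner)            # 32.0
    print(outer.shape)      # (3, 3)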
Doesn't matter. We already didn't have a unique inverse, but it's perfectly possible to find a left pseudoinverse and a right pseudoinverse, bearing in mind that they're not unique.
Though thinking about it more, it seems like the outer-product-inverse of a vector (a) must be unique if it exists; and (b) is highly unlikely to exist.
> 1xN multiplied with Nx1 yields a scalar (that's actually the 'usual' dot-product/scalar product/whatever you call it)
I'm aware of this, but there are two ways we might conceive of an "inverse":
- Since a vector is a matrix, the inverse of a vector might be defined by matrix multiplication, where A is the inverse of B if AB is "the" identity matrix. This is only strictly defined for square matrices, but the pseudoinverse concept extends it to nonsquare matrices.
- Or, we could go for a more basic sense of "multiplicative inverse", where the concept is that if AB = C, then B = A⁻¹C. This is what I was thinking of when saying that the concept of an inverse doesn't make sense when multiplication is the inner product - if I give you a vector v, and its inner product with some other vector u, there is no way of recovering what u was.
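To see why u isn't recoverable, note that given v and the scalar s = v.u, every vector of the form

    u' = s v / |v|^2 + w,   with v.w = 0

satisfies v.u' = s. For N > 1 there are infinitely many such u', so no inverse operation can exist.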
> The standard inner product is of course also an exceptionally typical way to multiply vectors, but the concept of an inverse there doesn't make much sense.
Not all vector spaces are equipped with an inner product. The point is that you can start with some simple axioms and build these more complicated things (inner product spaces, algebras over a field, geometric algebras, etc.).
> Not all vector spaces are equipped with an inner product.
Any vector space over a field (usually part of the definition of a vector space) is equipped with the standard inner product, because multiplication and addition are part of the definition of a field.
That’s not how it works. Unless you explicitly give the vector space an inner product, it doesn’t have one.
What you probably mean is that “you can always define an inner product” but that’s a very different statement.
That's not true either though: the different dimensions in a vector space do not have to belong to the same field, so you can't assume you can add them together.
> Unless you explicitly give the vector space an inner product, it doesn’t have one.
Well, no, not at all.
The inner product is still there. It's still an inner product. The space in which your vectors exist is still an inner product space. You may not care about the inner product, but it doesn't cease to exist when you stop looking at it.
I really don't understand why you would say this, it's obviously false.
Setting aside the subtler point of what it means to "have" something in mathematics:
clearly only some vector spaces even have the potential to introduce an inner product. Consider F for some random finite field. You can make a vector space from it, but what would the inner product be? Or R x F for that matter, you could never give that an inner product.
That's why the concepts of "vector space" and "inner product space" are separate concepts. Some vector spaces aren't, and could never be, inner product spaces.
It's just some of the more advanced theory you'd get from studying modules repackaged a bit.
Basically the extra stuff that is usually skipped in first year linear algebra courses is the symmetric and antisymmetric (often called exterior) products. These form algebras, of course. The exterior product, or wedge product, has a natural interpretation in terms of signed areas (or volumes), and from this you get the determinant as a volume form.
These are the natural generalizations of dot products (inner products) and wedge products (exterior products).
You can take a vector and associate it with a 1-form (in the antisymmetric, or exterior, algebra), and then multiply two vectors to get a 2-form using the standard wedge product, etc. In dimension 3, the space of 2-forms is dual to the space of 1-forms, and so you can "multiply" two vectors to get a third vector. That is all that's going on here.
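A small numpy sketch of that n = 3 duality (the cross product as the dual of the wedge of two vectors):

    import numpy as np

    u = np.array([1.0, 2.0, 3.0])
    v = np.array([4.0, 5.0, 6.0])

    # Components of the 2-form u \wedge v: w_ij = u_i v_j - u_j v_i
    wedge = np.outer(u, v) - np.outer(v, u)

    # In 3D, duality sends this 2-form to a vector:
    # (u x v)_k = w_ij for (i, j, k) a cyclic permutation of (0, 1, 2)
    dual = np.array([wedge[1, 2], wedge[2, 0], wedge[0, 1]])

    print(dual)            # [-3.  6. -3.]
    print(np.cross(u, v))  # [-3.  6. -3.]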
Actually a good multi-variable calculus class will cover most of this stuff, as you need some motivation for the Jacobian volume forms used to calculate areas and volumes under a change of variables, and dot/wedge products are useful for generalizations of the Gauss divergence theorem and the generalized fundamental theorem (Stokes's theorem), which says that the integral of a form over the (n-1)-dimensional boundary of a region equals the integral of its exterior derivative over the region.
Moreover any class on Riemannian geometry will give you all the linear algebra you need as well.
One thing I would caution students about is that by using somewhat non-standard jargon they may not understand how to generalize this stuff to n dimensions, nor will the connections between, say, determinants and wedge forms be clear, or dot products and angles be fully understood, if only the n=3 case is emphasized. Only in n=3 can you multiply two vectors to get a vector. But fun fact: in dimension 3k you can multiply two k-forms to get a third k-form (as the space of 2k-forms is dual to the space of k-forms in n=3k). If you think there is this new thing called "geometric algebra" other than the usual tensor products, it may not be obvious how things generalize to n != 3.
> But no, there is nothing new here beyond marketing.
Yes, that's my sense too. Of course cross products, wedge products etc make sense and that's just standard mathematics, but the part that I haven't really seen the point of is to form the algebra where all these forms live side by side.
It doesn't seem like a useful "fusing", in the way that say the complex plane is.
Of course it's very cool that sub-algebras in 2 or 3-space in GA are isomorphic to the complex plane or even quaternions, but it still feels a bit made up.
For a concrete example, one youtuber showed how Maxwell's equations simplify to a single equation if you introduce an operator that is a combination of div and curl, and also a new kind of physical entity that combines the electric and magnetic fields.
This is of course cool, but what I want to know is if this new operator makes some physical sense, and if the new multi dimensional field has any physical meaning. If they don't, it just seems like a parlour trick.
Not saying they actually don't, but I haven't seen any deeper explanations of it.
Not the same concrete example, but one where I do find the Geometric Algebra version substantially more insightful, is the treatment of rigid body mechanics in the geometric algebra of the Euclidean group (R_{n,0,1}).
It has the dual quaternions as its even subalgebra (in 3D), and unifies all linear and angular aspects. It leads to remarkable new insights, such as removing the need for force-couples (pure angular acceleration is caused by pushing along a line at infinity), while pure linear acceleration is caused by forces along lines through the center of mass.
These geometric ideas are independent of dimension - forces, both angular and linear, are always lines. The treatment of inertia becomes a duality map, and things like Steiner's theorem are not needed at all.
On top of this, the separation of the metric that sets GA apart means that this formulation of rigid body dynamics works not only in flat Euclidean space, but unmodified in spherical and hyperbolic geometries (by a simple change of metric of the projective dimension).
Well, I think the point is that in rigid body dynamics, the configuration and phase spaces naturally form a manifold, and then the equations of motion are in terms of differential forms on the cotangent bundle of these manifolds. This is commonly expressed in the language of exterior algebras, Hodge duals, etc. That's what is driving all of this, and is usually covered in a good class on mathematical physics. Again, there is nothing new here except marketing, but marketing plays an important and useful role.
I remember for a long time, people coming from the math end of things would look down a bit on physicists laboriously working everything out in complex tensor notation when there are these elegant canonical descriptions arising from differential geometry that look very simple and beautiful and are completely coordinate-invariant.
But then when you want to actually calculate something, you end up doing all the painful tensor contractions anyway, so the physicists would likewise often look down on the mathematicians for writing these simple one-liners that described all of mechanics but not really understanding how to calculate stuff.
So if repackaging some of the basic facts of differential geometry as "Geometric Algebra" gets physicists to be excited about it, then that's a good thing. Just like repackaging some of the laborious tensor calculus computations into differential geometry has gotten a lot of mathematicians excited about physics. It really is much more pleasant to work in a coordinate-free manner using differential structures associated to the natural manifold suggested by the problem, rather than being stuck in euclidean space and needing to deal with lots of fictional forces and complex change of basis formulas.
> For a concrete example, one youtuber showed how Maxwell's equations simplify to a single equation if you introduce an operator that is a combination of div and curl, and also a new kind of physical entity that combines the electric and magnetic fields.
Back when I was in university, we covered this in our differential geometry class. And yes, you'd use more abstract concepts like curvature, Hodge dual, and exterior product.
Maxwell's equations in any dimension can be reduced to:
dF = 0 and d*F = 0
That's two equations, not one, but you can introduce a new D = (d, d*) and then get DF=0 if you want.
The advantage here is that d and F keep all the old physical meanings. F is curvature, which is the electromagnetic field E+B, and d is the derivative (the exterior derivative, but that is the derivative needed in this calculus).
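For readers who haven't seen it spelled out, the standard packaging in 3+1 dimensions is (textbook material, up to sign conventions, not something specific to this thread):

```latex
F = E_x\,dx\wedge dt + E_y\,dy\wedge dt + E_z\,dz\wedge dt
  + B_x\,dy\wedge dz + B_y\,dz\wedge dx + B_z\,dx\wedge dy
```

Then dF = 0 unpacks into ∇·B = 0 and Faraday's law, while d*F = 0 gives the remaining (source-free) pair.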
That looks pretty much equivalent, and they even have a version where F = E + B, just like in the youtube video. But my question is if this F, or the tensor version for that matter, has any physical meaning?
I think the point is that when learning EM, it's a lot about developing intuition by visualizing in your head how the E and B fields behave in different situations, and how charges interact with them etc. You know, a lot of holding your right hand in the "physicist handshake" position and twisting it.
Sure, the same information is contained in the F tensor, but it doesn't have a similar straightforward geometric intuition.
It’s pretty easy to think about a normal three dimensional vector field representing the magnetic field, the trouble for me is when you combine the two into one tensor field.
Looking at the link reminds me that this entity is used outside of GA, but it still feels a bit weird. More weird than say complex currents, but maybe it actually isn’t.
The n=3 concern applies equally well to every formulation, as it's a peculiarity of human-scale physics that's commonly used as an application of linear algebra and analysis (visualizing linear transformations and rotations, point-line-plane geometry in 3D space, curl, Maxwell's equations).
For me, the substantial thing geometric algebra gave me so far was a newfound appreciation of the seemingly disparate systems: tensors, differential forms, matrix algebra, and also a newfound appreciation of stuff like determinants, conjugate elements in group theory, Lie groups and Lie algebras, etc., because it helps clarify the relationships between them, and as another user here said, you can get propelled up into some pretty advanced stuff later on (said user mentions the Atiyah-Singer Index Theorem and Hodge theory, but caveat: I've only recently started taking a crack at the latter. I will say that, OTOH, it's pretty nice to be able to look at something like the wiki on Clifford Analysis and realize it's familiar territory from geometric calculus).
Its initial discovery did not come too late, but both Clifford and Maxwell died too young, in 1879, and after their early deaths there was no one left who could carry the application of this theory to physics through to completion.
In their absence, the geometric algebra theory was ignored, and both the theory of vectors and the theory of the electromagnetic field were simplified to forms which are good enough for restricted contexts, but which are nonetheless inconsistent and fail in more general cases (the so-called Maxwell differential equations are valid only in much more restricted conditions than Maxwell's original integral equations).
As a child I also learned the theory of vectors in the incorrect way, e.g. including the so-called "vectorial product", so I lost time later before understanding that it is not a vector, and then I lost more time before understanding that the so-called pseudovectors a.k.a. axial vectors and the so-called pseudoscalars are not independent entities that come from nowhere; their existence is just a natural consequence of the properties of vectors.
It would have been much more efficient if the complete theory of geometric algebra had been taught from the beginning.
Yes, this. Though Zero to Geo is one of the links at the bottom of the article.
It is really a shame the article does not clarify that, btw, what we've just derived is a re-derivation of a thing that has already been expressed, named (by Clifford), and well-characterized: https://en.wikipedia.org/wiki/Geometric_algebra
Such a bummer to see very slick but very ahistorical articles.
Hi. I wrote a whole section on the history of GA and what happened and why it isn't already the norm, but I chose to remove it because the article is already far too long, and I don't think that my intended audience (engineers, compsci people, university undergrads) would care about the history. Apologies that wasn't what you would have preferred.
I think the article is great, and I think the interactive illustrations are sweet. Thanks for taking the time to write it.
As an educator, though, when I see presentations of existing ideas that present them as if they were new, I die a little. You're standing on the shoulders of giants whether or not you think so, and whether or not you say so. It's best to figure out who the giants are, and how you're standing on them. When you are up front about the connections to past scholars, you are giving the credit to those scholars that they deserve, and you are strengthening the storyline, and you are setting a good example for the people that look up to you.
You can add a few sentences at the bottom saying, if you've gotten this far, congrats, you understand some basics of GA, and then link to other resources about the history and current applications of it.
> The similarities are so striking that we might think of them as "pseudovectors". But I won't write them this way because I think that obscures their true nature. Written this way it looks like a bivector only encapsulates three degrees of freedom!
> Instead, I will use: ... Because it forces us to remember what those coefficients are attached to. Knowing that a bivector contains five degrees of freedom, can you figure out what the other two describe?
I'm confused here and don't understand why they keep saying a bivector has five degrees of freedom. If you can uniquely identify one with three scalar coefficients, doesn't it only have three degrees of freedom?
Yes, only three. As defined, two bivectors are equal if their areas are equal and if their oriented planes are equal. Therefore two more degrees of freedom are absorbed by taking rotations of the two vectors in the plane. Along with the rescaling the author noted, we're down to three from six.
That makes complete sense to me. But then later on they say "The output is a Geometric with a scalar component s and a bivector component ⇒c, which has 1 + 5 = 6 degrees of freedom so this system is not lossy! It should permit an unambiguous inversion operation!" If a bivector only has 3 degrees of freedom then the total is 4, which seems like it would be lossy?
I was also wondering this. But note that x∧y is always perpendicular to x, so it really only has two degrees of freedom, while you need three to recover y (knowing x). Add in the dot product part to make up for it.
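Here's a small numpy sketch of that recovery (my own illustration, not the article's code). Encode the bivector part of the product xy by its dual vector c = x×y, and y comes back exactly:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5])
y = np.array([-0.3, 0.7, 2.0])

s = np.dot(x, y)    # scalar part of the geometric product xy
c = np.cross(x, y)  # bivector part, encoded as its dual vector

# Identity: (x×y)×x = |x|² y − (x·y) x, so y = (s·x + c×x) / |x|².
y_recovered = (s * x + np.cross(c, x)) / np.dot(x, x)
print(np.allclose(y_recovered, y))  # True -- the pair (s, c) is not lossy
```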
For those who are interested, this sort of algebra would be known as the [Grassmann algebra or the exterior algebra](https://en.wikipedia.org/wiki/Exterior_algebra). It becomes much more interesting if you use non-orthonormal bases (or non-euclidean geometry), since then you need to introduce a dual basis and distinguish between contravariant vectors and covariant vectors. When you add derivatives to the mix you end up in differential geometry.
Yes, this is exterior algebra. It's also interesting to figure out how this works in ambient dimensions other than three. The author has a table of grades: 0 for scalars, 1 for vectors, 2 for "bivectors", 3 for "trivectors", and counts the number of basis elements for each of these grades as 1 3 3 1. These basis counts are the dimensions of the (vector spaces of) scalars, vectors, "bivectors", "trivectors". If you go to two ambient dimensions you get 1 2 1, and if you go to four ambient dimensions you get 1 4 6 4 1. It's Pascal's triangle.
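A one-liner makes the pattern plain (my own snippet): the dimension of the grade-k piece in n ambient dimensions is the binomial coefficient C(n, k):

```python
from math import comb

# Dimensions of the grade-0..grade-n pieces of the algebra on R^n:
# one row of Pascal's triangle per ambient dimension.
for n in (2, 3, 4):
    print(n, [comb(n, k) for k in range(n + 1)])
# 2 [1, 2, 1]
# 3 [1, 3, 3, 1]
# 4 [1, 4, 6, 4, 1]
```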
Grassmann algebra is a very important part of it, in fact you can reconstruct it in geometric algebra. More generally though, this algebra would be known as Clifford algebra.
English/American style of explanation fascinates me.
First, they show some algebra formulas and mention dot product and cross product. But then they start introducing a definition of a vector! With images!
Why, oh why do you need to waste yours and the reader's time introducing basic definitions, if any reader of the article definitely knows them? If they didn't, they wouldn't be able to read the first paragraph at all.
PS: Russian style of explanation is more like: "Here's the essence of my idea, maybe with some leading pre-definitions, but definitely without basics. If you are here, you probably are as curious as I am and have already heard of all the basics."
In total, there's more material, because it's easier to write and read it, as the author didn't need to explain 101s to PhDs.
(I have only briefly skimmed the article, but ...)
Material such as this serves to remind the reader of what they already know, and contextualize it in a way that is relevant to this article. The article begins by telling the reader, "We all know what scalars and vectors are---here they are---but have you wondered what if ...", and taking the reader beyond. The introduction which you seem to find objectionable is only a small part of a much longer article.
In addition, different readers have mildly different notation styles. These introductory blobs inform the reader of the language in the article, and are essentially a friendly statement of definitions.
A third purpose is rhetorical: readers sometimes get stuck while reading text, and these parts of the article work as anchor points where they can loop back and "synchronize" with the writer.
I definitely agree, but there's no way this is "English/American style". It's because people have grand plans of making their article/book accessible to everyone, and they start off explaining e.g. what a vector is, but pretty soon realise they don't want to write an entire vector algebra textbook, so they seamlessly give up and jump straight into Stokes' theorem or whatever.
I read a Synopsys simulator manual that explained what double clicking was.
Probably. But why would so many people want to make their article/book accessible to everyone? Let's accept the fact that some topics, like vector algebra, are just not that interesting to everyone.
Vector algebra is at the heart of a fairly large industry - games. I don't think there can be enough of accessible and understandable content from that point of view.
There's an urban legend at Swedish universities that American text book authors get paid by the word, and that's why their books are so incredibly verbose.
When I studied physics at the university, our undergraduate textbooks were relatively thin volumes (e.g. Alonso&Finn I-III), whereas the engineering students had these massive textbooks (Young&Freedman etc.). When looking into these massive tomes, yes, they spend a lot of words, but also they apparently don't expect the reader to be able to apply calculus. So instead of showing, say, Coulomb's law, and assuming the reader is capable of integrating to calculate the interaction between a point charge and a line, they have a section describing the interaction between two point charges. Then an entirely separate section describing the interaction between a point charge and a line, with the formula given without actually explaining that, hey, this formula, you know, results if we take the fundamental law and do this and that. Incredibly infuriating.
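For reference, the derivation those books skip is a single integral (standard textbook result, stated here from memory): the field of an infinite line charge with linear density λ at distance r follows from Coulomb's law by

```latex
E(r) = \frac{\lambda}{4\pi\varepsilon_0}\int_{-\infty}^{\infty}\frac{r\,dx}{(r^2+x^2)^{3/2}}
     = \frac{\lambda}{2\pi\varepsilon_0\, r}
```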
ha, thank you for your honest feedback. My intended audience is not PhDs or math majors, it is high school physics teachers, practicing engineers, programmers, precocious high school students, etc. Many of these people benefit from some definitions.
I include 3 sentences defining a scalar so that I could introduce the concept of grade.
I include a few sentences defining a vector because just read the comments here and you'll see there are many definitions of vector and I want to specifically call out the one I care about in this post. I am also using a nonstandard, color-based notation throughout the article so it is helpful to take a concept that people already know just to demonstrate my notation. This also lets me introduce the 3D interactive illustrations.
Did you read the rest of the article or were these two definitions so objectionable that you quit?
I disagree -- whenever you write anything, you always assume something about your audience. In other words, there's no such thing as a "general audience". E.g. in OP's article, people are already interested in math, otherwise they wouldn't click through to the article, let alone get through the first paragraph.
... and that he revolutionised the understanding of turbulence in [1] which is four pages long.
[1] The local structure of turbulence in incompressible viscous fluids at very large Reynolds numbers, Dokl. Akad. Nauk. SSSR 30, 299-303. Reprinted in Proc. R. Soc. London A434, 9-13 (1991).
People might be interested in a similar post I wrote about dividing by a vector (https://oscarcunningham.com/4/dividing-by-a-vector/) although I came to a different answer as I was considering arbitrary vector spaces rather than just 3D space.
As a programmer it seems to me that the number one problem of math notation is that it's weakly typed. There's abuse and reuse of notation everywhere, which makes learning it needlessly difficult. I want a strongly typed fork of math notation. 90% of existing math notation would just be laughed at if it had to go through code review.
Ironically, this post is an abuse of the concept of "weak typing". There's nothing "weakly typed" about, say, the plus sign being used to add both numbers and sets, or a dot being used for both multiplication of numbers and the dot product of vectors. It just means those symbols dispatch based on the types of their arguments, which is perfectly consistent with strong typing (cf. the Julia language).
The situation that leads to weak typing in computer algorithms -- when you get data from a file or another process and don't know in advance what type it's going to be -- is basically non-existent in blackboard mathematics. Rigorous mathematical papers always tell you what set a variable belongs to when it is introduced, as well as the domain and co-domain of any functions that are defined. This is the blackboard equivalent of strong typing.
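A tiny Python illustration of the distinction (my own, not from the parent comment): overloaded symbols dispatch on argument types, which is the opposite of weak typing:

```python
# The same symbol meaning different operations for different types is
# overloading/dispatch, not weak typing:
print(1 + 2)            # 3         -- integer addition
print("ab" + "cd")      # "abcd"    -- string concatenation
print({1, 2} | {2, 3})  # {1, 2, 3} -- set union, another "addition"

# Weak typing would be silently coercing across types; Python, like a
# rigorous math paper, refuses:
try:
    1 + "2"
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'int' and 'str'
```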
That's 100% fair, but while I could have made my point using more rigorous language, I think it's still valid. I'm not talking about multiple dispatch based on type, which is fine; I'm talking about actual abuse like using fractions to mean derivatives, omitting non-obvious parameters, confusing a function with the value of the function at a point, etc. I could go on. Physics notation is an even worse offender btw.
Take for example the law of total expectation, usually written as E( E(X|Y) ) = E(X). It's totally non-obvious (so much that it's harmful IMO) that the outer E averages over Y. Hiding the summation parameter of E does nothing but hurt math learners here.
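For comparison, here is the same law with the hidden variables made explicit (one common convention, not the only one):

```latex
\mathbb{E}_Y\big[\, \mathbb{E}_{X \mid Y}[\,X \mid Y\,] \,\big] = \mathbb{E}_X[X]
```

The subscripts show that the outer expectation averages over Y a quantity that is itself a function of Y.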
That's interesting. I'm not quite sure what I would call those issues, but I agree they can be tricky.
Many of those abuses are for a very good cause. Leibniz notation is very powerful, for example, but it's hard to master and physicists really go nuts with it.
For E[X|Y], all you have to remember is that conditional expectation yields a function f(Y) of the thing you conditioned on, and f(Y) is itself a random variable that you can take the expectation of. This property is canonical and baked into the formal definitions. It's not an abuse.
However, I do fault some machine learning types a bit for abusing probability and statistical notation. For example, the Elements of Statistical Learning book extensively overloads E[], P(), and other symbols and operators in ways that it doesn't even bother to define. They randomly throw subscripts and decorations onto all sorts of symbols and don't even bother to tell you if those decorations mean they're marginalizing, conditioning, or something else. The book has no glossary of symbols and operators and no preliminary chapter setting out notation, which is unusual for such an enormous book full of hundreds of equations. It would be impossible because the book is a hodge-podge of symbols that change from paragraph to paragraph.
That is statistics, not pure maths. But yeah, statisticians abuse a lot of notations like that, just like physicists. I haven't seen pure mathematicians make such unclear notation.
(Statistics is as much maths as theoretical physics is: both are technically mathematics, but in practice the fields are handled in a very different manner since they are applied and intended to solve a specific set of real-world problems, and hence not pure)
Probability theory is not math? (But sure, the same formula can be written in a more explicit way using subscripts and distinguishing random variables from their values.)
"Abuse of notation" is usually not weak typing but rather polymorphism. One of the reasons typed languages don't catch on in physics/mathematics is that most of them can't express the level of polymorphism that even basic routine mathematics has. E.g. take a look at https://yosefk.com/blog/can-your-static-type-system-handle-l...
Yes, one usually needs to "intuit" the meaning when there are mixes of subscripts/superscripts/parameters, assumed definitions, mixes of variable naming conventions, the order of function arguments randomly adjusted to taste, invented symbols and syntaxes.
Since no machine is ever going to read it, it's all up to individual taste and prejudice.
How would a math linter work? Don't see it, but this is one of the things that makes maths a different world from programming.
That's a very common criticism, but I don't think mathematics would work if you insisted on being 100% explicit all the time.
Clear and short notation that is just unambiguous enough is a very important factor; without it, books wouldn't just be much longer, I'm not sure we'd even be able to understand them.
This was what Bourbaki's Elements and Whitehead/Russell's Principia Mathematica were about. These books are admired and influential but very few people actually read them. As you might expect, they're too long. They're for giving a different perspective to people who have already achieved the highest levels of sophistication in math.
Doesn't seem self-consistent. He defines the ab multiplication as dot product plus "extrusion"/bivector (which seems simpler to call convex combinations of 0, a, b, a+b). Then he says aa is a scalar, presumably because the "extrusion" is 0, but you can't have this identity be 0. Just because it's degenerate does not mean it's 0. And the "extrusion", while not a plane, is a line in his definition.
I am curious, how could one apply these insights to inference over sets of vectors generated by creating embeddings over things like photos etc? I understand well the ideas of +/- for things like word2vec, but what would multiplication and inverse mean in this context?
I love the way this was presented (others have pointed out flaws with the article already).
I'd appreciate a post from Matt Ferraro on how this is built. Bonus points for including nice syntax-highlighted code "widget" for a cross between maths/programming.
The writing is cute and the animations are nice, but none of it makes any sense. I stopped reading at
> It is important to remember that bivectors have a certain redundancy built into them in the sense that
> s a⃗ ∧ b⃗ = a⃗ ∧ s b⃗. We can write them using 6 numbers or 3 numbers, but they actually convey 5 degrees of freedom.
Three (real) numbers have three degrees of freedom, by definition. (And nothing about complex numbers was mentioned.) Is this a parody I don’t get? I feel like I have wasted ten minutes on nonsense.
You're right about that being wrong, and the author makes the same mistake consistently, but otherwise it looks correct. Some steps have details elided where it maybe should have been noted that things were being skipped, but with correct results. I think it's wonderfully written and a great exposition.
author here. I was mistaken about the 5 degrees of freedom bit. Bivectors have three. I'll fix the text tonight. I'm sorry you wasted ten minutes on my nonsense.
This is going to confuse readers. A point on a number line is a 1D vector; in other words it is a unit vector pointing along that number line, multiplied by something which scales its length. It’s the latter dimensionless and directionless quantity that’s the scalar.
Nitpicking: a more correct view is that the set of points on a straight line is an affine space, so the points are neither scalars nor vectors, but elements of an affine space.
The set of translations of the straight line is a vector space a.k.a. a linear space.
So the vectors are the equivalence classes of differences between 2 points on the straight line (i.e. the differences between 2 pairs of points, where the distances are the same, are equivalent, and they determine the same vector).
While the vectors are equivalence classes of differences between 2 points, the scalars are equivalence classes of quotients of 2 (collinear) vectors, i.e. a scalar is the ratio between the signed magnitudes of 2 collinear vectors.
If you choose a point on the straight line as the origin, you can choose as a representative of each equivalence class corresponding to a vector the one determined by the origin point together with another point. This gives a bijective mapping between vectors and those second points.
If now you also choose a vector as the unit vector, which will correspond to a second point besides the origin, then you can choose as a representative for each equivalence class corresponding to a scalar the ratio between a vector and the unit vector, which will correspond to a third point, besides the origin and the point corresponding to the unit vector. So you obtain a bijective mapping between scalars and those third points.
Because on a straight line there are bijective mappings between points, vectors and scalars (after choosing 1 origin point and a 2nd point as the extremity of a unit vector), they can be used interchangeably in most contexts, but it would be good to remember that all 3 are in fact different mathematical entities.
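A compact summary of the three entities (my own restatement of the parent comment, loosely):

```latex
\text{points: } P, Q \in A \qquad
\text{vectors: } \vec{v} = Q - P \in V \qquad
\text{scalars: } \lambda = \vec{w}/\vec{v} \ \ (\vec{w} \parallel \vec{v})
```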
It occurs to me that some of these axioms depend on how many dimensions the space has- in 4 dimensions, a vector would have 4 components, a bivector would have 6, a trivector would have 4, and a quadvector would have 1. And so on, in accordance with Pascal’s triangle.
In what way would that be similar? The OP performs a theoretical derivation of geometric algebra; you wrote a well-documented Python class with the most basic operations on 2D vectors.
I'm taking a first semester physics course right now, and we're learning about Torque and Angular Momentum. I just finished a calculus course last semester.
Can someone tell me how I would use τ=r∧F on a physics problem for Torque?
You wouldn't really. It's the same concept as `τ = r × F`. The only difference is that it is useful to think of the 'type' of the output as being a bivector instead of a vector -- there's no sense in which it points 'out of the plane'; rather, it is a single element of the vector space of planes, with the same magnitude as r × F.
The distinction gets a little more useful when you start dealing with covariance under coordinate transformations. There it becomes more meaningful, because the _vector_ given by r × F doesn't transform the way the torque it represents should.
For an obvious example of why this is true: suppose r = x and F = y. Then r × F = z. If you change coordinates by mapping z -> 2z, then you would be doubling the torque that you computed, which is wrong; the torque is unchanged. The bivector x∧y is correctly unchanged by z -> 2z.
Currently in physics courses (usually not until more advanced mechanics or relativity) the resolution to this is to wave one's hands and declare that, no, torque is a 'pseudovector'. But it is really much easier to think about if you type it as a bivector in the first place.
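The parent's example is easy to check numerically (my own snippet): transforming the cross product as a vector disagrees with the cross product of the transformed vectors, while the x∧y bivector component is untouched:

```python
import numpy as np

r = np.array([1.0, 0.0, 0.0])  # r = x
F = np.array([0.0, 1.0, 0.0])  # F = y
M = np.diag([1.0, 1.0, 2.0])   # the coordinate change z -> 2z

print(M @ np.cross(r, F))      # [0. 0. 2.] -- torque "doubled": wrong
print(np.cross(M @ r, M @ F))  # [0. 0. 1.] -- unchanged, as it should be

# The bivector component (r∧F)_{xy} = r_x F_y - r_y F_x involves only x and y
# components, so it is manifestly unchanged by the map z -> 2z.
print(r[0] * F[1] - r[1] * F[0])  # 1.0
```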
I would say the title should be "reciprocal" of a vector. Right? The inverse of a function (f^-1) unfortunately shares its notation with the reciprocal (x^-1), i.e. the multiplicative inverse. Or am I wrong?
"A vector is a thing with both magnitude and direction" isn't really a good definition. Cars have both -- an SUV is larger than a sedan, establishing a magnitude, and they obviously point in a direction -- but I don't think anyone would mistake them for pointy arrow vectors. If you use the more rigorous definition that a vector is an element of a space that obeys the vector space axioms it becomes easier to invert a vector "semantically" (a thing that doesn't obey the axioms) but quite a bit less useful. Cats don't obey the axioms, nor do punctuation marks.
Great article. I had a similar kind of revelation when I learned about generalized linear models after failing to understand all the various statistical tests.
The inverse of any positive or negative vector with an amplitude greater than zero points directly into your soul, relative to its original amplitude and how many regrets you have.
I stopped reading at this paragraph near the top :
“In this post we will re-invent a form of math that is far superior to the one you learned in school. The ideas herein are nothing short of revolutionary.”