Look, ma, no matrices (enkimute.github.io)
528 points by grgrdvrt 9 months ago | 94 comments



One of my favorite math/graphics YouTube creators, Freya Holmér, did an excellent intro to Geometric Algebra not that long ago [1]. If you have any interest in 3D graphics (especially but not limited to splines/Bezier curves) then be sure to check out all of their videos.

I personally have always struggled with linear algebra and I tend to find these Clifford Algebra approaches much more intuitive.

[1] https://www.youtube.com/watch?v=htYh-Tq7ZBI&ab_channel=Freya...


What a wonderful talk, thanks. It reminded me of https://enkimute.github.io/ganja.js/ which is actually a library by enkimute, the OP! (It's quite a remarkable library too, by being a single file, no-build script that supports N dimensional algebras along with render support.)


This is who I thought this was going to be. I enjoyed this along with the Splines & Beziers ones. Such great presentation, never feels rushed but gets to the point.


the splines yt video (https://www.youtube.com/watch?v=jvPPXbo87ds) is (IMHO) one of the best educational videos on a programming topic, full stop.


I knew the name looked familiar. Splines was a fantastic video.


I'd like to point out that the YT comments have some good (weird for YT, I know!) clarifications and questions about bits where she did go a bit fast or skip over things, e.g. the non-commutativity of products (in general) and such.

Great video.


Thank you for sharing, I really enjoyed her talk.


I never thought maths was that cool.


Geometric Algebra was a complete mystery to me until I finally realized: it is just polynomial multiplication, but with some quantities for which the order of multiplication matters, and which have a weird multiplication table: i*i = 1, i*j = -j*i. That's it. Most intros present geometric product of two vectors:

(x1*i + y1*j) * (x2*i + y2*j)

as some deep mysterious thing, but its just the same FOIL polynomial multiplication you learned in freshman algebra:

(x1*i + y1*j)(x2*i + y2*j) = x1*x2*i*i + x1*y2*i*j + y1*x2*j*i + y1*y2*j*j

= (x1*x2 + y1*y2) + (x1*y2 - y1*x2)*i*j

The quantity in the first parenthesis, above, is our old familiar dot product. The quantity in the second parenthesis is our old friend the cross product, but expressed in a new dimension whose basis is i*j, and which--unlike the cross product--generalizes to any number of dimensions. In GA it's called the "wedge product".

Once you "get" that, you find that things like deriving rotation formulas become easy, because you can apply all the skills you developed in algebra to solving geometric problems.
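For the concrete-minded, here is that FOIL expansion in a few lines of Python (a sketch of my own, assuming an orthonormal basis with i*i = j*j = 1 and i*j = -j*i):

  def geometric_product_2d(v1, v2):
      """Multiply (x1*i + y1*j)(x2*i + y2*j); return (scalar, bivector)."""
      x1, y1 = v1
      x2, y2 = v2
      scalar = x1 * x2 + y1 * y2    # the i*i and j*j terms collapse to 1
      bivector = x1 * y2 - y1 * x2  # j*i = -i*j folds into the i*j term
      return scalar, bivector

  print(geometric_product_2d((1, 0), (0, 1)))  # (0, 1): orthogonal, unit area
  print(geometric_product_2d((1, 2), (1, 2)))  # (5, 0): v*v = |v|^2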


In another comment, they pointed out https://youtu.be/htYh-Tq7ZBI?si=lOmsCL2DoqUCQgh1&t=1540 in which Freya did an excellent job reducing the number of axioms to 1: If you define multiplying a vector by itself to be equal to the length of the vector, squared, then everything else falls out of simple polynomial multiplication. It's quite lovely.


"contraction axiom" for those looking to google


Or "Clifford algebra" for the generic mathematical structure.


Your comment is interesting to me. A few days ago someone asked why math classes don't teach the how and why and rather tend to just present the formulas for the operations and tell people to compute. Here you are focusing on what the operations do rather than on why they work. It's an interesting contrast and one that teachers of mathematics have to balance. The two questions,

How does it work?

Why does it work?

can't always both be answered well in a given course.


> A few days ago someone asked why math classes don't teach the how and why and rather tend to just present the formulas for the operations and tell people to compute.

Lecturers typically do know the how and why quite well, but teaching these points takes a lot of time (you also have to explain a lot about the practical application before you are able to explain why this mathematical structure is helpful for the problem).

Since lecturers are typically very short of time in lectures, they teach the mathematics in a concise way and trust the students to be adults, capable of going to the library and reading textbooks about the how and why by themselves if they are interested. At least in Germany, there is the clear mentality that if you are not capable of doing this, a university is the wrong place for you; you would be better off getting vocational training instead.


// focusing on what the operations do rather than why //

YMMV, of course, but in general, I always found it easier to understand the why once I understood the what and the how, rather than the other way around.


Yes. Usually, though, when someone complains about the teaching of mathematics they say we focus too much on how to do the operations and not enough on why they work the way they do. I agree it is much easier to understand why after knowing how.


I think there's a big difference between "presenting a formula" and "presenting rules for doing calculations". People are very good at extrapolating complicated results from a few simple rules: that's why taking derivatives and doing (elementary) integrals is fairly easy and also fairly easy to remember a long time after you've taken a calculus course. On the other hand, a bunch of literal miscellaneous formulas is very hard to hold on to --- for instance that's how introductory physics is taught, a bunch of disjoint relationships that you have to make sense of in your mind to make any use of.

In fact all anyone really wants for vector calculus is a bunch of "tools" they can use that will generally give the right answer if applied mindlessly. I think that's why GA is relatively popular, because it says how to do basic geometric operations (rotations, reflections, etc) without any thought.


This is because "how does it work" and "why does it work" are Fourier transforms of each other!


> How does it work?

> Why does it work?

Where can I use it?



The second term is not a cross product. It's the exterior product (or bivector). Cross product only works in 3D. Exterior product can work in any higher dimension.

A cross product of two 3D vectors is another vector, perpendicular to the plane containing the two vectors. The exterior product is a 2-vector (thus bivector) that sweeps the parallelogram between the two vectors; it's on the plane containing the two vectors. In 3D, the cross product vector is perpendicular to bivector plane.
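A quick numpy sketch of that correspondence (my own illustration): in 3D the cross product's components are exactly the wedge product's bivector coefficients, just relabeled.

  import numpy as np

  a = np.array([1.0, 2.0, 3.0])
  b = np.array([4.0, 5.0, 6.0])

  cross = np.cross(a, b)           # a vector perpendicular to the a-b plane

  # Wedge coefficients on the bivector basis (j^k, k^i, i^j):
  wedge = np.array([a[1] * b[2] - a[2] * b[1],    # j^k part
                    a[2] * b[0] - a[0] * b[2],    # k^i part
                    a[0] * b[1] - a[1] * b[0]])   # i^j part

  print(np.allclose(cross, wedge))  # True: same numbers, different geometry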


No. In n dimensions a cross product of n-1 n-vectors is the determinant with the top row being the basis elements (e_1, e_2, ..., e_n) and the next n-1 rows being the n-1 vectors. The result is an n-vector orthogonal to the n-1 vectors.

In 2 dimensions it is

  | i  j |
  | x  y |

In 3 dimensions it is

  | i   j   k  |
  | x1  y1  z1 |
  | x2  y2  z2 |

And so on.
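A numpy sketch of that construction (my own; expand along the symbolic top row by deleting column k and taking signed minors):

  import numpy as np

  def generalized_cross(vectors):
      """(n-1) x n array in; the n-vector orthogonal to every row out."""
      vs = np.asarray(vectors, dtype=float)
      n = vs.shape[1]
      # Cofactor expansion along the symbolic top row of basis elements:
      return np.array([(-1) ** k * np.linalg.det(np.delete(vs, k, axis=1))
                       for k in range(n)])

  print(generalized_cross([[1, 0, 0], [0, 1, 0]]))  # [0. 0. 1.], like np.cross
  print(generalized_cross([[3, 4]]))                # [ 4. -3.]: 2D perpendicular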


That's not a particularly fruitful generalization. The traditional definition begins:

> The cross product on a Euclidean space V is a bilinear map from V × V to V.

This is required for some of the cross product's useful properties such as:

  |a x b| = |a| |b| sin(theta)
This only works in 0, 1, 3, and 7 dimensions.


The exterior product is the "correct" version of the cross product. I'm OK with the good guys seizing the name.

The cross product works in dimension 2^k - 1 for k in 0, 1, 2, 3, corresponding to the reals, complex numbers, quaternions, and octonions. It is trivial in 0D and 1D, defined uniquely in 3D, and unique up to some arbitrary negations in 7D.

https://en.m.wikipedia.org/wiki/Seven-dimensional_cross_prod...


One of the things that it takes the longest to learn in mathematics is that most things are defined in the dumbest way possible.

In particular, if (over a vector space V) you want to define a bilinear product m:V x V -> V, this is exactly the same thing as deciding on m just over pairs of basis vectors. If I were to call that "the universal property of the tensor product", you'd probably say "uh huh".


It's annoyingly one of those things that once you understand, you can't see how you didn't understand it before. e.g. the tensor algebra (aka free algebra) over a vector space is "just" the dumbest possible multiplication: if i and j are basis vectors, then i*j = i⊗j. No further simplification possible. j*i*i*j = j⊗i⊗i⊗j, etc. with associativity and distributivity and linearity: (5i+3j)*k = 5i⊗k + 3j⊗k, etc.

Then, if you have a dot product, a Clifford algebra is "just" the tensor algebra with a single simple extra reduction rule: For a vector v, v*v = v⋅v. So now e.g. `kjiijl = kj(i⋅i)jl = kjjl = k(j⋅j)l = kl = k⊗l`, which can't be reduced further.

The real magic is that it turns out you can prove[0] that ij=-ji for two basis vectors i,j, so e.g. (ij)^2 = ijij = -ijji = -ii = -1. So `ij` is a square root of -1, as is `ik` and `jk` (but they're different square roots of -1), and you get things like complex numbers or quaternions for free, and suddenly a ton of geometry appears (with `ij` corresponding to a chunk of plane in the same way a basis vector `i` is a chunk of line/arrow. `ijk` becomes a chunk of volume. etc.).

But it's all "just" the dumbest possible multiplication plus v*v = v⋅v.

[0] Proof: (i+j)^2 = i^2+ij+ji+j^2 = 1+ij+ji+1 = 2+ij+ji. But also (i+j)^2 = (i+j)⋅(i+j) = i⋅(i+j) + j⋅(i+j) = 2. So 2 = 2+ij+ji, so ij=-ji.
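The reduction above is mechanical enough to code up. A toy Python sketch (mine, assuming an orthonormal basis whose letters sort alphabetically):

  def reduce_word(word):
      """Reduce a product of basis letters, e.g. 'kjiijl' -> (1, 'kl')."""
      sign, letters = 1, list(word)
      changed = True
      while changed:
          changed = False
          for n in range(len(letters) - 1):
              if letters[n] == letters[n + 1]:        # v*v = v.v = 1
                  del letters[n:n + 2]
                  changed = True
                  break
              if letters[n] > letters[n + 1]:         # swapping costs a sign
                  letters[n], letters[n + 1] = letters[n + 1], letters[n]
                  sign = -sign
                  changed = True
                  break
      return sign, ''.join(letters)

  print(reduce_word('kjiijl'))  # (1, 'kl'): matches the example above
  print(reduce_word('ijij'))    # (-1, ''): (ij)^2 = -1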


Pedantic nitpick: it's not the dumbest possible multiplication; it's the dumbest possible bilinear multiplication. There are "freer" free products on vectors than the tensor product; the freest possible one is their Cartesian product as sets, which just makes a tuple out of them: ij = (i, j). If you regarded the resulting set as a vector space, bilinearity would fail: (i, 0) + (0, j) ≠ (i, j). The tensor product is the freest product that is well-behaved as a vector space, in the sense that it respects the underlying scalar operations of its arguments.


my point was merely that in most cases where "universal object" is used, it could be replaced with "dumbest object such that".


In your proof, doesn't i⋅(i+j) + j⋅(i+j) = (1+ij) + (ji+1) = 2 + ij + ji, not 2?

What am I missing? I think ij = -ji is an independent axiom.


It's dot products there, which are also distributive. So i⋅i + i⋅j + j⋅i + j⋅j = 1 + 0 + 0 + 1 = 2.

We got dot products from the fact that v^2 = v⋅v for any vector v (so in particular, i+j). Then dot products are linear so you can expand that out. So basically the proof compares using FOIL on geometric products to FOIL on dot products.

Note that `ij` is not a vector; it's a tensor (or "multivector" and in particular a "blade" in the GA lingo). The dot product reduction only applies to vectors from the original space. But i, j, and i+j are vectors.

For simplicity and practicality this is all over real vector spaces. You can make similar definitions with other fields, but you have to be careful when working mod 2.

For simplicity I'm also using an orthonormal basis. You also need to be a bit more careful if working with e.g. special relativity where your timelike basis vector t has t⋅t=-1 (though you can see things still work).


The proof is supposed to be showing that ij = -ji. Not that both ij = ji = 0.

In this case, i and j are units of two different dimensions. So yes, ij is a bivector (a unit quantity built from two dimension units), so ij does not equal zero.

I think ij = -ji requires another axiom. Or another constraint on how the space operates which results in the same thing.


I didn't show ij = ji = 0. I'm assuming i⋅j = j⋅i = 0 (an orthogonal basis), but i⋅j != ij and j⋅i != ji.

(i+j)^2 = (i+j)⋅(i+j) because v^2=v⋅v for any vector v. When you expand RHS, you get i⋅i + i⋅j + j⋅i + j⋅j = 1 + 0 + 0 + 1 = 2. When you expand the LHS (i+j)^2 as Clifford multiplication, you get 2 + ij + ji. Since RHS and LHS are equal, ij+ji=0.
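If you want to watch the arithmetic check out numerically, the Pauli matrices give a concrete matrix representation of two orthonormal basis vectors (a sketch of my own, not part of the parent's argument):

  import numpy as np

  i = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x: squares to I
  j = np.array([[0, -1j], [1j, 0]])               # sigma_y: squares to I

  I2 = np.eye(2)
  print(np.allclose(i @ i, I2))                   # True: v*v = v.v = 1
  print(np.allclose(i @ j, -(j @ i)))             # True: ij = -ji
  print(np.allclose((i + j) @ (i + j), 2 * I2))   # True: (i+j)^2 = 2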


Ah, thanks! I will have to go through those steps more carefully.

> Or another constraint on how the space operates which results in the same result.

I see that would be the orthonormality of i and j.


Thanks for sharing that intuition


It's fun that there have been many approaches to interpolating rotations (geometric algebra, quaternions, even full-matrix interpolation [1]). But, after hand-optimizing the code, the final code ends up mostly the same for all approaches. The difference is in your understanding of the rules and capabilities.

From what little I know, GA seems like the most consistent and capable approach. It's unfamiliar. It's a bit much to take in getting started. But, people who clear that hurdle love it.

Alternatively, everybody uses quaternions while complaining they don't understand them and need a whole book to visualize them. (Visualizing Quaternions by Andrew J. Hanson, Steve Cunningham)

[1] https://www.gamedev.net/tutorials/programming/math-and-physi...


I'm not a mathematician, and don't have a ton of use for geometry in my work, but was learning GA for fun, and have similarly tried to learn quaternions in the past. GA is fun, quaternions are not fun. I think I understand GA, but I knew I did not understand quaternions after working through lectures and problems. Now that I know some GA, I kind of feel like I know quaternions, finally.


> I think I understand GA, but I knew I did not understand quaternions after working through lectures and problems.

Most physicists stopped using them at the end of the 19th century for the same reason...

> More than a third part of a century ago, in the library of an ancient town, a youth might have been seen tasting the sweets of knowledge to see how he liked them. He was of somewhat unprepossessing appearance, carrying on his brow the heavy scowl that the "mostly-fools" consider to mark a scoundrel. In his father's house were not many books, so it was like a journey into strange lands to go book-tasting. Some books were poison; theology and metaphysics in particular they were shut up with a bang. But scientific works were better; there was some sense in seeking the laws of God by observation and experiment, and by reasoning founded thereon. Some very big books bearing stupendous names, such as Newton, Laplace, and so on, attracted his attention. On examination, he concluded that he could understand them if he tried, though the limited capacity of his head made their study undesirable.

> But what was Quaternions? An extraordinary name! Three books; two very big volumes called Elements, and a smaller fat one called Lectures. What could quaternions be? He took those books home and tried to find out. He succeeded after some trouble, but found some of the properties of vectors professedly proved were wholly incomprehensible. How could the square of a vector be negative? And Hamilton was so positive about it. After the deepest research, the youth gave it up, and returned the books. He then died, and was never seen again. He had begun the study of Quaternions too soon.

- Oliver Heaviside, Electromagnetic Theory


Naive Lie Theory is a great book, and the first chapter teaches quaternions.

https://www.goodreads.com/en/book/show/4419538


Also, Physics from Symmetry spends a third of the book on Lie theory, passing through quaternions.

https://www.amazon.com/Physics-Symmetry-Undergraduate-Lectur...


For people interested in this topic, here's a good set of slides going over Grassmann/Clifford/Geometric algebra concepts: http://www.terathon.com/gdc12_lengyel.pdf

And another good site: https://mattferraro.dev/posts/geometric-algebra


Don't forget Sudgy's fantastic 'A swift introduction to projective geometric algebra': https://www.youtube.com/watch?v=0i3ocLhbxJ4

and of course the go-to reference https://bivector.net

or join 1000+ profs, researchers and enthusiasts on the bivector discord here https://discord.gg/vGY6pPk


The author of that talk, Eric Lengyel, also wrote the book "Foundations of Game Engine Development, Volume 1: Mathematics". Its 4th chapter focuses on the same topics.


To be honest I've never really liked how GA results in all kinds of mixed elements if you're not careful what you multiply with what. Requiring up to 2^n terms for what was an n-dimensional space seems a bit hard to deal with.

It seems like it should be better able to deal with geometry (i.e. inner products), but I've never really found a good argument why you wouldn't just use the wedge product and the Hodge star (or musical isomorphisms).

Even something 'magic' like turning a bivector "u^v" into a rotation in that plane "e^(u^v)t" is essentially just using the musical isomorphism to turn the 2-form u^v into a linear automorphism, allowing you to make sense of "e^(u^v)t" as a matrix exponential.
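That correspondence is easy to play with numerically. A sketch of my own using scipy's matrix exponential: in 3D, the linear map associated with the bivector u^v is the skew-symmetric matrix v*u^T - u*v^T, and exponentiating it rotates in the u-v plane.

  import numpy as np
  from scipy.linalg import expm

  u = np.array([1.0, 0.0, 0.0])
  v = np.array([0.0, 1.0, 0.0])

  B = np.outer(v, u) - np.outer(u, v)   # skew-symmetric map of the bivector u^v
  R = expm((np.pi / 2) * B)             # rotate 90 degrees in the u-v plane

  print(np.round(R @ u, 6))                           # [0. 1. 0.]: u lands on v
  print(np.round(R @ np.array([0.0, 0.0, 1.0]), 6))   # z is left untouched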

Another example that often gets mentioned is the ability to turn Maxwell's equations into a single equation, but since the use of differential forms already makes it possible to summarize them into two equations which hold for very different reasons, I never understood the utility of combining them into one equation.


// Requiring up to 2^n terms for what was an n-dimensional space//

Sometimes, the economy is illusory, e.g. normal vectors transform differently than position vectors do. Sure, you can, if you want, use the same data structure to represent both of them, but you'll still have to have some way of keeping track what kind of vector it is holding, as well as sprinkle special cases throughout your code to handle each one differently.

GA takes the bull by the horns by using one basis (i, j, k) for ordinary vectors and another basis (j*k, k*i, i*j) for the other kind.
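The classic trap being alluded to, as a small numpy sketch (my own illustration): apply a non-uniform scale to a tangent and a normal with the same matrix and perpendicularity breaks; normals need the inverse-transpose.

  import numpy as np

  M = np.diag([2.0, 1.0, 1.0])            # non-uniform scale: stretch x by 2

  tangent = np.array([1.0, 1.0, 0.0])     # lies in the surface
  normal = np.array([1.0, -1.0, 0.0])     # perpendicular to the tangent

  print((M @ tangent) @ (M @ normal))     # 3.0: the naive transform broke it
  N = np.linalg.inv(M).T                  # normals use the inverse-transpose
  print((M @ tangent) @ (N @ normal))     # 0.0: perpendicularity preserved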

// never understood the utility of combining them into one equation //

This is a good example of how having a higher-dimensional space actually gives you better economy of storage than a lower dimensional space does: one equation is better than two, or four :-)

And electric fields are different from magnetic fields in quite the same way as vectors are different from bivectors. You can either "special case" them by using a different equation for Electric and Magnetic fields, or you can treat them uniformly with one.


> And electric fields are different from magnetic fields in quite the same way as vectors are different from bivectors. You can either "special case" them by using a different equation for Electric and Magnetic fields, or you can treat them uniformly with one.

What irks me is that the magnetic part of the Maxwell equations is 0 for geometrical reasons, whereas the electrical part is 0 for physical reasons (roughly speaking, the curvature of the potential is proportional to the current). Putting them in one equation makes it seem as if you could have something other than 0 on the magnetic side, which is impossible without fundamentally changing the topology of spacetime.

Treating them uniformly is a mistake in my opinion.


You think this way because you are not pairing the Maxwell equations in the right way. The divergence equations are not paired, and the same goes for the curl equations. Pairing the equations the wrong way can actually lead to errors in the relativistic case.

Two equations are intrinsic properties of the electromagnetic field because it is derived from a potential, i.e. the null condition for the divergence of the magnetic field and Faraday's law of induction.

The other two equations are what you call "physical", i.e. they show the relationship between the electromagnetic field and its sources, i.e. electric current and electric charge.

Alternatively, if you use potentials to describe the electromagnetic field, which is better in my opinion, you just have the relations between potentials and their sources (together with the conservation law for the electric charge). With potentials, you can get rid of the "geometrical" relations (though the choice of potentials is not unique).


A somewhat simpler way of keeping track in the case of normals is to use row vectors for, well, covectors, which is what normals are anyways.

What GA brings is the ability to express linear combinations of scalars, vectors, bi-vectors ... Whether this is actually useful/desirable in practice is another story though.


The #1 thing that GA brings is the ability to divide by vectors, which makes working many things out on paper dramatically simpler.


Yeah, but the original commenter's objection was that it seems weird to, e.g., use a 6-dimensional space to represent 3-dimensional quantities.

Doing it by using vectors and covectors still requires you to keep track of 6 degrees of freedom, i.e, 6 dimensions. Eventually everybody has to pay the piper :-)


Yes, you need to keep track of which is which (most likely using the type system) but you don't risk adding vectors to covectors without converting explicitly. Each of vectors/covectors is 3 dimensions, but there is no 6-dimensional space in which vectors/covectors are allowed to mix.

IIUC this is unlike GA/exterior algebra where scalars/vectors/bi-vectors/... can be added together, just like one can add a scalar to a pure imaginary quaternion in the quaternion algebra.
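A minimal Python sketch of that type-system idea (the names are my own, hypothetical):

  from dataclasses import dataclass

  @dataclass
  class Vec:
      x: float
      y: float
      z: float
      def __add__(self, other):
          if not isinstance(other, Vec):
              raise TypeError("cannot add a covector to a vector")
          return Vec(self.x + other.x, self.y + other.y, self.z + other.z)

  @dataclass
  class Covec:
      x: float
      y: float
      z: float
      def __call__(self, v: Vec) -> float:   # a covector eats a vector
          return self.x * v.x + self.y * v.y + self.z * v.z

  n = Covec(0.0, 1.0, 0.0)   # e.g. a surface normal
  p = Vec(3.0, 4.0, 0.0)
  print(n(p))                # 4.0: the pairing is fine
  # p + n                    # TypeError: the category error is caught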


> Yes, you need to keep track of which is which (most likely using the type system) but you don't risk adding vectors to covectors without converting explicitly.

Do many graphics libraries actually do this? In my experience adding points and normals directly is actually quite common.


The mixed elements are the important ones!

A quaternion with w=1, x,y,z=0 is just the identity.

A quaternion with w=0, x=1, or perhaps w=0, x=y=0.7, those would only ever be rotations by 180 degrees.

If you want arbitrary rotations, you need some combination of the two: "a little bit of 180 around this line, and a little bit of 0deg rotation/identity". That's what it means to have scalar and bivector.

If you are "being careful" with wedge and inner products to avoid mixtures, you are doing it wrong. The geometric product is the boss, and makes excellent mixtures!
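A sketch of what that mixture buys you, using the quaternion-style product (my own illustration): the rotor cos(t/2) + sin(t/2)*B, scalar plus bivector, rotates by t.

  import numpy as np

  def quat_mul(q, r):                       # Hamilton product, (w, x, y, z)
      w1, x1, y1, z1 = q
      w2, x2, y2, z2 = r
      return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                       w1*x2 + x1*w2 + y1*z2 - z1*y2,
                       w1*y2 + y1*w2 + z1*x2 - x1*z2,
                       w1*z2 + z1*w2 + x1*y2 - y1*x2])

  t = np.pi / 2                                            # 90 degrees about z
  rotor = np.array([np.cos(t/2), 0.0, 0.0, np.sin(t/2)])   # scalar + bivector
  conj = rotor * np.array([1, -1, -1, -1])

  v = np.array([0.0, 1.0, 0.0, 0.0])                       # the x axis
  print(np.round(quat_mul(quat_mul(rotor, v), conj), 6))   # [0. 0. 1. 0.]: y axis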


These "mixed" elements are natural types and special cases of different objects and operations that are usually confused (e.g. "vectors" of three numbers to represent points and actual 3D vectors) or contorted (e.g. quaternions to represent rotations) or idiosyncratic (e.g. 3D cross product) in more traditional approaches.


> To be honest I've never really liked how GA results in all kinds of mixed elements if you're not careful what you multiply with what. Requiring up to 2^n terms for what was an n-dimensional space seems a bit hard to deal with.

Agreed, but the confusion is already there, and the traditional approach deals with it by just sweeping it under the rug. If you're dealing with normals, for example, you have at least 2 different n-dimensional spaces to keep track of that transform quite differently.

Having points, planes, lines, normals, translations, and rotations all represented as a single multivector type with consistent rules seems quite liberating once you grasp it. (I'm admittedly still working on it.)


The interpolation of animations at the bottom is really neat, but I can't help but wish the models were a little less _active_ on the rest of the page. Math is plenty hard without a small elephant cheerleader.


Au contraire my friend, if it were not for the elephantine encouragement I would not have made it to the end of the page! <3


The come-hither looks were just a little bit distracting, though.


if the author is reading this: please define the acronym PGA when first using it!


Projective geometric algebra for anyone wondering. A null basis vector is added to the basis vectors of the space you're working in. This allows the algebra to represent geometric objects that do not pass through the origin.


This is called an "affine transformation" in linear algebra language. (Linear algebra is stretches and rotations; affine adds translations.)

https://people.computing.clemson.edu/~dhouse/courses/401/not...

In 2 dimensions:

Rotation = multiplying by an imaginary unit.

Stretches = multiplying by a real number

Translation = adding a complex number.

In higher dimensions, the analogy to complex numbers breaks down.
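The 2D picture, sketched with Python's built-in complex numbers (my own illustration; note the rotation factor is a unit complex number, as the reply below clarifies):

  import cmath

  p = complex(1.0, 0.0)              # a point in the plane

  rot = cmath.exp(1j * cmath.pi / 2) # rotate 90 degrees: multiply by e^(i*theta)
  scale = 2.0                        # stretch: multiply by a real number
  shift = complex(3.0, 4.0)          # translate: add a complex number

  print(scale * rot * p + shift)     # ~(3+6j): rotate, stretch, then shift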


It is not affine transforms per se but rather the expansion into homogeneous coordinates that enables translation by treating it as if it's a shear that leaves the reciprocal dimension untouched.
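That shear is easy to see in a small numpy sketch (my own illustration):

  import numpy as np

  T = np.array([[1.0, 0.0, 3.0],   # last column holds the translation:
                [0.0, 1.0, 4.0],   # a shear along the extra dimension
                [0.0, 0.0, 1.0]])

  p = np.array([1.0, 2.0, 1.0])    # the 2D point (1, 2) with w = 1
  print(T @ p)                     # [4. 6. 1.]: translated by (3, 4)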

> Rotation = multiplying by an imaginary unit.

This is also not quite right.

Rotation is multiplying by a complex number with a magnitude of 1 (or perhaps you meant to say "raising a number to the power of i"?)


Sorry I meant "complex unit".

"complex number with a magnitude of 1" is the definition of a "complex unit".


I like "roots of unity" (not necessarily rational) or "unit phasor" or "non-integer powers of -1"


But... aren't the traditional 4x4 transformation matrices already using projective space, essentially?


I've been working through a bunch of Geometric Algebra on the web and YouTube lectures in recent weeks. Though I guessed Projective Geometric Algebra, I still wasn't certain as it's the first time I can recall seeing the acronym!


Yes! The use of FPGA for "Fast PGA" was particularly confusing.


Apologies. Had that joke sitting around for waaaay too long. Not that great in retrospect :D


I got a giggle out of it!


done. mea maxima culpa.


Are these algorithms efficient even given GPUs? I have the vague impression that GPUs are well-tuned for matrix work. Are those advantages lost when using Geometric Algebra formulations, so that you don't actually come out ahead?

This is uninformed speculation, go ahead and correct me!


It's an extremely common misconception that because GPUs have matrix-matrix and matrix-vector products in the standard, GPU companies must be accelerating them.

In fact, because it is already SIMD across the shader cores, you can't necessarily do this. Some GPUs do, some don't.


When you are programming, you have to figure out:

1. What quantity you want to calculate, and

2. What the most efficient way to calculate it is.

PGA (once you spend the--alas--not insubstantial overhead to understand it!) is a really good way of doing #1. It's virtually always a good idea to first try out the simplest and easiest-to-code implementation anyway.

And what you get from using PGA to do #1 will certainly be good enough for you to prototype out the rest of your program enough to be able to benchmark it and find out where the real bottlenecks are. Happily, in most cases it will also either be the fastest way to calculate it, or close enough to not be the bottleneck.

And if it is a bottleneck, it gives you a deep understanding of the problem you are trying to solve--which, IMHO, is a good thing to have before you just start trying to shave off cycles in hopes of getting it fast enough.


This is exactly what the article is about. TLDR: they can be roughly equivalent.


This is hair splitting at the end of progress:

The fact that 3D skeletal animation is still using 4x4 matrices in the GPU means the math developed for this around Half-Life 1 (on CPU?) is still the bleeding edge. 1998 -> 2024 = 26 years!

In 1000 years 3D animation will still be the same. End of story.


This article goes over my head, but the title reminded me of my experiments writing simple 3D renderers. After several failed attempts to learn linear algebra, I had the shower thought that a 3D rotation is just three 2D ones, and that I already know how to do those. Within an hour or so I had a wireframe 3D renderer, perspective and all!

I encourage everyone to try it.
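In the same spirit, a minimal sketch of that idea (my own code, not the commenter's renderer):

  import math

  def rot2d(a, b, angle):
      c, s = math.cos(angle), math.sin(angle)
      return a * c - b * s, a * s + b * c

  def rotate3d(p, ax, ay, az):
      x, y, z = p
      y, z = rot2d(y, z, ax)   # 2D rotation in the y-z plane
      x, z = rot2d(x, z, ay)   # 2D rotation in the x-z plane
      x, y = rot2d(x, y, az)   # 2D rotation in the x-y plane
      return x, y, z

  print(rotate3d((1.0, 0.0, 0.0), 0.0, 0.0, math.pi / 2))  # ~(0, 1, 0)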


What a great article! Not an area of special interest of mine, but the piece was a joy to read.


Thank you, appreciate that!


GA seems great! But ...

> and modern formats like Khronos' glTF use quaternions for all their rotation needs. Fantastic for animations, and generally considered worth the cost of the unavoidable conversions to and from matrices.

Quaternions are bad for animation. Animate a clock going from 9am on Monday to 6pm on Friday. With Euler angles this might be expressed as going from 0 degrees to 1620 degrees. With quaternions, nope. This can't be expressed in glTF. It can be in Unreal and Unity, both of which default to using Euler angles for animation. In glTF you're required to bake it into smaller turns, all less than 180 degrees.


For specifying animations, you should work in the quaternion Lie algebra, not in the group as you suggest. There you can represent 1620 degrees without any problem. Furthermore, in the quaternion Lie algebra (pure imaginary quaternions), and only in that space, you can take an arbitrary rotation key, multiply all 3 of its values by 10, and get 10 times that rotation without a change in axis.

If you rotate around just one axis, the Lie algebra feels just like Euler angles... in fact it's exactly the same thing, but if you rotate around more than one, it keeps working intuitively and usably. Euler angles absolutely do not.
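A sketch of working in that space, assuming scipy's rotation-vector API (rotation vectors are exactly the Lie algebra elements being described):

  import numpy as np
  from scipy.spatial.transform import Rotation

  # 1620 degrees about z, stored in the algebra (a rotation vector):
  rv = np.array([0.0, 0.0, np.deg2rad(1620.0)])

  tenth = Rotation.from_rotvec(rv / 10)          # scale the key by 1/10
  print(np.rad2deg(tenth.magnitude()))           # ~162.0: same axis, 1/10 angle

  # Round-tripping through the group collapses the winding, though:
  q = Rotation.from_rotvec(rv)
  print(np.rad2deg(np.linalg.norm(q.as_rotvec())))  # ~180.0, not 1620.0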


Also, the use case for quaternions depends on how many times you will be applying the same rotation. If it's a few or dozens of times, then maybe it's not the most efficient. If it's millions or billions, then you are going to want to use quaternions.

This is mainly due to the cost of converting to and from the rotation vector.


i.e. keep it in vector form if you're combining them a lot, convert to polar form when you want to work with angles


In exponentiated form, a GA rotor can be specified to spin as many times as you like, rotating continuously in the plane by the angle you specify. Think of rotations in the complex plane.


Most animations have more than 2 keyframes per week


There are lots of examples of a spinning clock showing time passing quickly. Lots of other things spin as well.

There's a reason Unity and Unreal (and Maya, and Blender, and 3DS Max) all use Euler angles as their default animation representation.


Unity uses quats in its object transforms?


But not in its animation data, at least not by default.

You key Euler angles, it lerps Euler angles for animation values, and it then generates a quaternion from the current value of the Euler angles. Same for Unreal, AFAIK.


Where in the docs does it say that? Euler angle lerps, in the general case (eg none of the starts or ends are 0), look like complete shit.

You need the log-quaternion representation: exp(t * log(q1 * q0^-1)) * q0
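That formula is what slerp computes; a quick sketch using scipy (my own illustration):

  from scipy.spatial.transform import Rotation, Slerp

  keys = Rotation.from_euler('z', [0, 90], degrees=True)
  slerp = Slerp([0.0, 1.0], keys)

  mid = slerp([0.5])                          # halfway between the two keys
  print(mid.as_euler('zyx', degrees=True))    # [[45. 0. 0.]]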


> When interpolating between two rotations, the interpolation can either be performed on the Quaternion values or on the Euler Angles values.

from: https://docs.unity3d.com/2023.3/Documentation/Manual/animedi...

The default is Euler values


From that page: "When Euler Angles interpolation is used, Unity internally bakes the curves into the Quaternion representation used internally. This is similar to what happens when importing animation into Unity from external programs"


Someone ought to do a full Lie representation theory explanation of graphics operations.


Was that first paragraph even English? Man, that's thick.


This gives me PTSD


What! Why?


Working on point transformations. Not that bad, but still a bit of a pain.



