These kinds of explanations are so meh to me. Linear algebra is useful once you begin to look for vector spaces you didn't know you had.
Thinking of matrices as spreadsheets is barely abstraction. Seeing the derivative operator represented as a matrix acting on the polynomial vector space can open your eyes.
Taking the determinant of that matrix shows that d/dx isn't invertible.
Thinking about the fixed point of the transformation yields exp, an eigenfunction of the operator.
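To make the first two points concrete, here's a minimal numpy sketch on polynomials of degree at most 3 in the basis {1, x, x^2, x^3} (the exp part needs the full infinite-dimensional space, so it doesn't fit in a finite matrix):

```python
import numpy as np

# Derivative operator on polynomials of degree <= 3, written in the
# basis {1, x, x^2, x^3}.  A polynomial c0 + c1*x + c2*x^2 + c3*x^3
# is represented by the coefficient vector [c0, c1, c2, c3].
n = 4
D = np.zeros((n, n))
for k in range(1, n):
    D[k - 1, k] = k          # d/dx sends x^k to k*x^(k-1)

p = np.array([5.0, 3.0, 0.0, 2.0])   # 5 + 3x + 2x^3
print(D @ p)                          # [3. 0. 6. 0.]  ->  3 + 6x^2
print(np.linalg.det(D))               # 0.0: d/dx is not invertible here
```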
Right, and that's a perspective you pick up in a second course in linear algebra, typically. The key insight really is that the core concept is that of a vector space, rather than vectors per se. The only thing we really ask of vectors is that you can take linear combinations of them with coefficients from your favorite field. Other than that, vectors themselves aren't that interesting: it's more about functions to and from vector spaces, whether it's a linear function V -> V or a morphism V -> W between two different vector spaces.
This is actually a common theme of mathematics, that the individual objects are in some sense less interesting than maps between them. And, of course, the idea that any time you have a bunch of individual mathematical objects of the same type, mathematicians are going to group them together and call it a "space" of some kind.
In fact, my previous paragraph is pretty much the basis for category theory. One almost never looks at individual members of a category other than a few selected special objects like initial and terminal objects. A lot of algebra works in a similar way. If I could impart one important insight from all the mathematics I've read, done, and seen, it would be this idea of relations being more important than the things themselves.
Plain old algebra is already abstract math, and it's also the everyday math that overlaps most with common programming work.
I didn’t even know until today that there was a concept called linear algebra; it was taught to me as introductory geometry alongside other geometry concepts. So that’s neat to learn!
Yes, I think this spreadsheet view is so detrimental and confusing for newcomers. I'm not even sure the analogy makes sense. The key part of linear algebra, imo, is the concept of linear transformations:
T(a+b)=T(a)+T(b)
Matrices just happen to be one way of expressing those transformations.
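As a quick illustration (just a sketch with an arbitrary matrix), any matrix gives you a map with that property, while something like elementwise squaring does not:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))       # an arbitrary matrix, i.e. a linear map
T = lambda v: A @ v

a, b = rng.standard_normal(3), rng.standard_normal(3)
print(np.allclose(T(a + b), T(a) + T(b)))        # True: additivity
print(np.allclose(T(2.5 * a), 2.5 * T(a)))       # True: homogeneity

S = lambda v: v**2                     # elementwise squaring is not linear
print(np.allclose(S(a + b), S(a) + S(b)))        # False
```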
And for extra magic, since every vector space has a basis, every linear transform between vector spaces with a finite basis can be represented by a finite matrix once you pick a basis for each space (https://en.m.wikipedia.org/wiki/Transformation_matrix). While this might feel obvious if you haven’t explored structure-preserving transforms between other types of algebraic objects (e.g. groups, rings), it is in fact very special. Learning this made me a lot more interested in linear algebra. It unifies the algebraic viewpoint that emphasizes things like the superposition property (T(x+y) = T(x) + T(y) and T(ax) = aT(x)) with the computational viewpoint that emphasizes calculations using matrices.
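Concretely, the recipe for building that matrix is simple (a small sketch, not tied to the linked article): apply the transform to each basis vector and use the results as the columns.

```python
import numpy as np

# Recover the matrix of a linear map by applying it to basis vectors:
# column j of the matrix is T(e_j).  Here T: R^3 -> R^2 is a made-up example.
def T(v):
    x, y, z = v
    return np.array([x + y, 3.0 * z])

basis = np.eye(3)
M = np.column_stack([T(e) for e in basis])
print(M)                          # [[1. 1. 0.]
                                  #  [0. 0. 3.]]

v = np.array([2.0, -1.0, 4.0])
print(np.allclose(M @ v, T(v)))   # True: the matrix reproduces T
```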
Since all linear transforms between vector spaces with a finite basis can be represented as finite matrices, the computational tools make it tractable to calculate properties of vector spaces that aren’t even decidable for e.g. groups. For a simple but remarkable example: all finite-dimensional vector spaces of the same dimension (over the same field) are isomorphic, but in general, it’s undecidable to compute if two finitely-presented groups are isomorphic.
A semi-decidable problem is still pretty bad news from a computational perspective, but I agree that it's not the best example of what I was trying to illustrate. I was aiming for something dramatic and (somewhat) approachable, but ended up emphasizing properties of vector spaces as free abelian groups, rather than as vector spaces per se (which undermines my emphasis of the specialness of vector spaces in comparison to other algebraic structures). That said, to the best of my knowledge, the algorithms for computing whether two finitely-generated* abelian groups are isomorphic take advantage of the close relationship between finitely-generated abelian groups and vector spaces to compute the Smith normal form of matrices associated with the groups and then compare the normal forms. This takes roughly O(nm · (sublinear factors)) for n x m matrices[0]. So to revise my example, vector spaces with a finite basis (and any finitely-generated free abelian group) can be compared for isomorphism in constant time, and finitely-generated non-free abelian groups take time roughly quadratic in the number of generators, so there is a huge win there still.
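For what it's worth, you can see the Smith-normal-form comparison in action with SymPy (a hedged sketch: smith_normal_form lives in sympy.matrices.normalforms in recent versions, and the relation matrices here are made up):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Two different relation matrices presenting finitely-generated abelian
# groups: each group is Z^2 modulo the subgroup generated by the rows.
A = Matrix([[2, 0], [0, 4]])
B = Matrix([[2, 4], [0, 4]])

print(smith_normal_form(A, domain=ZZ))   # diag(2, 4)
print(smith_normal_form(B, domain=ZZ))   # diag(2, 4) -> same group, Z/2 x Z/4
```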
Do you have a favorite example that highlights the unique computational properties of vector spaces?
*I don't know how this changes in the finitely-presented case, but I assume the extra constraint can be used to improve the performance of the algorithms. It's a lot easier to find asymptotic analysis of the finitely-generated case, though, and I don't see a way around dealing with the fact that it's still not free.
If I may add, I found "useful magic" like discrete Fourier transforms, local linear approximations, and homogeneous differential equations to be exciting examples for motivating students into the abstract theory of linear transformations.
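The DFT one is especially easy to demo: it's literally multiplication by a (dense, complex) matrix. A quick sketch:

```python
import numpy as np

# The discrete Fourier transform is a linear map: multiplying by the
# (unnormalized) DFT matrix gives the same result as np.fft.fft.
n = 8
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n)   # DFT matrix

x = np.random.default_rng(1).standard_normal(n)
print(np.allclose(F @ x, np.fft.fft(x)))       # True
```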
Ditto. More broadly, I am bored by efforts to rehash the same introductory material from whatever your given technical topic is (math, programming, machine learning). There are already really good books out there on these things, written by masters, that do a much better job than blogs like this (provided you read them properly).
To people out there writing educational blogs: do more research and find good, well written, timeless resources to point people to for the basics. Spend your energy writing something new that we haven't all already read.
You might find people doing this and not notice it. Sometimes the educational progress is formulating a thing you already know for a subset of people who will receive it more effectively in that format. Might be boring to you, might be brain exploding revelatory for someone else. Even a better articulation of something which might have helped you learn can be in that category! Keep in mind you’re judging education of material you already know.
I used to teach this. One of the key ideas is to get rid of 3D geometry and pose, from the beginning, large-scale problems (simple models of traffic using Kirchhoff’s laws, image convolution, statics…). Otherwise, why define the determinant? Just compute it. Or eigenvalues? Or kernels?
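Image convolution works well for exactly this reason: it's a perfectly ordinary linear operator, but on a megapixel image its matrix would be roughly 10^6 x 10^6, so cofactor-expansion determinants and hand-computed eigenvalues are obviously hopeless. A toy 1D sketch:

```python
import numpy as np

# A simple 1D blur: average each sample with its neighbours (zero-padded).
# This is a linear operator; on an n-pixel signal its matrix is n x n,
# which is why, at image scale, you reason about kernels and eigenvalues
# structurally instead of writing the matrix down.
def blur(x):
    padded = np.pad(x, 1)
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

rng = np.random.default_rng(2)
a, b = rng.standard_normal(100), rng.standard_normal(100)
print(np.allclose(blur(a + b), blur(a) + blur(b)))   # True: it's linear
print(np.allclose(blur(3.0 * a), 3.0 * blur(a)))     # True
```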