
These kinds of explanations are so meh to me. Linear algebra is useful once you begin to look for vector spaces you didn't know you had.

Thinking of matrices as spreadsheets is barely an abstraction. Seeing the derivative operator represented as a matrix, acting on the polynomial vector space, can open your eyes.

Taking the determinant of that matrix shows that d/dx isn't invertible.

Thinking about the fixed points of the transformation yields exp, an eigenfunction of the operator with eigenvalue 1.
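A minimal sketch of that in Python/NumPy (my own illustration, assuming the monomial basis {1, x, x^2, x^3} for polynomials of degree at most 3):

    import numpy as np

    # d/dx in the monomial basis: a0 + a1*x + a2*x^2 + a3*x^3 is the
    # coefficient vector [a0, a1, a2, a3], and differentiation sends it
    # to [a1, 2*a2, 3*a3, 0].
    D = np.array([
        [0, 1, 0, 0],
        [0, 0, 2, 0],
        [0, 0, 0, 3],
        [0, 0, 0, 0],
    ], dtype=float)

    p = np.array([5, 4, 3, 2])   # 5 + 4x + 3x^2 + 2x^3
    print(D @ p)                 # [4. 6. 6. 0.]  ->  4 + 6x + 6x^2
    print(np.linalg.det(D))      # 0.0: not invertible

The determinant is zero because the constants sit in the kernel: differentiation forgets the constant term, which is exactly why antiderivatives come with a "+ C".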




The concepts you describe can't happen without a Step 1 in even approaching linear algebra. These types of explanations help many people take that first step.


I honestly don't see anything about this website that is really building intuition.

The "derivative operator" notion that the GP is describing was hugely important for me in intuiting what linalg could do.


> The "derivative operator" notion that the GP is describing was hugely important for me in intuiting what linalg could do.

Do you have a link where someone could read more about this?


Here is a nice short video on how that works: youtube.com/watch?v=2iK3Hw2o_uo


I phrased it like it wasn't for me (but it was!). Love it!


Right, and that's a perspective you pick up on in a second course in linear algebra, typically. The key insight really is that the core concept is that of a vector space, rather than vectors per se. The only thing we really ask of vectors is that it be possible to apply linear functions with coefficients from your favorite field to them. Other than that, vectors themselves aren't that interesting: it's more about functions to and from vector spaces, whether it's a linear function V -> V or a morphism V -> W between two different vector spaces.

This is actually a common theme of mathematics, that the individual objects are in some sense less interesting than maps between them. And, of course, the idea that any time you have a bunch of individual mathematical objects of the same type, mathematicians are going to group them together and call it a "space" of some kind.

In fact, my previous paragraph is pretty much the basis for category theory. One almost never looks at individual members of a category, other than a few selected special objects like initial and terminal objects. A lot of algebra works in a similar way. If I could impart one important insight from all the mathematics I've read, done, and seen, it would be this idea of relations being more important than the things themselves.


I agree that learning a subject in mathematics requires going through it several times at increasing levels of sophistication.

That way, one gradually develops a level of mathematical maturity that allows an appreciation of abstraction.

I read about half of this. It’s nearly incomprehensible to me - I’d have to dig to find out what he’s trying to accomplish.

He’s going out of his way to introduce novelty and it appears he will try anything except address the subject directly.


You need mathematical abstraction to understand this, and once you have it, you already know most of this.


I don't know about you, but linear algebra was the first abstract mathematics I was ever exposed to.


Just plain algebra is abstract math, and it's the most common everyday math, the kind that overlaps most with ordinary programming work.

I didn’t even know until today there was a concept called linear algebra; it was taught to me as introductory geometry alongside other geometry concepts. So that’s neat to learn!


Right, my exact sentiment.


Yes, I think this spreadsheet view is so detrimental and confusing for newcomers. I'm not even sure the analogy makes sense. The key part of linear algebra, imo, is the concept of linear transformations:

T(a + b) = T(a) + T(b)

Matrices just happen to be one way of expressing those transformations.
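A quick numerical check of that property, as a sketch (any matrix works; the specific values here are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))   # T(v) = A @ v is linear for any A
    a = rng.standard_normal(3)
    b = rng.standard_normal(3)

    print(np.allclose(A @ (a + b), A @ a + A @ b))    # True: T(a+b) = T(a) + T(b)
    print(np.allclose(A @ (2.5 * a), 2.5 * (A @ a)))  # True: T(c*a) = c*T(a)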


And for extra magic, since every vector space has a basis, every linear transform between vector spaces with a finite basis can be represented by a finite matrix (https://en.m.wikipedia.org/wiki/Transformation_matrix). While this might feel obvious if you haven’t explored structure-preserving transforms between other types of algebraic objects (e.g. groups, rings), it is in fact very special. Learning this made me a lot more interested in linear algebra. It unifies the algebraic viewpoint that emphasizes things like the superposition property (T(x+y) = T(x) + T(y) and T(ax) = aT(x)) with the computational viewpoint that emphasizes calculations using matrices.
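To make that concrete, here's a small sketch (the 90-degree rotation is just a made-up example map): the matrix of a linear map is recovered by recording what it does to each basis vector, one column per basis vector.

    import numpy as np

    def T(v):                    # a linear map on R^2: rotate by 90 degrees
        x, y = v
        return np.array([-y, x])

    basis = np.eye(2)            # standard basis e1, e2 (rows)
    M = np.column_stack([T(e) for e in basis])
    print(M)                     # [[ 0. -1.]
                                 #  [ 1.  0.]]

    v = np.array([3.0, 4.0])
    print(np.allclose(M @ v, T(v)))   # True: the matrix reproduces T everywhere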

Since all linear transforms between vector spaces with a finite basis can be represented by finite matrices, the computational tools make it tractable to calculate properties of vector spaces that aren't even decidable for e.g. groups. For a simple but remarkable example: all finite-dimensional vector spaces of the same dimension (over the same field) are isomorphic, but in general, it's undecidable to compute if two finitely-presented groups are isomorphic.


> it’s undecidable to compute if two finitely-presented groups are isomorphic.

But (iirc) it is semidecidable, like the halting problem, and isomorphism is decidable for finitely presented abelian groups.


A semi-decidable problem is still pretty bad news from a computational perspective, but I agree that it's not the best example of what I was trying to illustrate. I was aiming for something dramatic and (somewhat) approachable, but ended up emphasizing properties of vector spaces as free abelian groups, rather than as vector spaces per se (which undermines my emphasis of the specialness of vector spaces in comparison to other algebraic structures).

That said, to the best of my knowledge, the algorithms for computing whether two finitely-generated* abelian groups are isomorphic take advantage of the close relationship between finitely-generated abelian groups and vector spaces: they compute the Smith normal form of matrices associated with the groups and then compare the normal forms. This takes roughly O(nm · (sublinear factors)) for n x m matrices [0]. So to revise my example: vector spaces with a finite basis (and any finitely-generated free abelian group) can be compared for isomorphism in constant time, and finitely-generated non-free abelian groups take time roughly quadratic in the number of generators, so there is a huge win there still.
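As a hedged sketch of that comparison (the two example groups are mine, and I'm using SymPy's smith_normal_form):

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    # Relation matrices for two finitely generated abelian groups,
    # Z/2 x Z/12 and Z/4 x Z/6. They are isomorphic iff their Smith
    # normal forms have the same invariant factors (up to units).
    A = Matrix([[2, 0], [0, 12]])
    B = Matrix([[4, 0], [0, 6]])

    print(smith_normal_form(A, domain=ZZ))   # diag(2, 12)
    print(smith_normal_form(B, domain=ZZ))   # diag(2, 12) -> isomorphic

Both normal forms come out as diag(2, 12), i.e. both groups are Z/2 x Z/12.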

Do you have a favorite example that highlights the unique computational properties of vector spaces?

*I don't know how this changes in the finitely-presented case, but I assume the extra constraint can be used to improve the performance of the algorithms. It's a lot easier to find asymptotic analysis of the finitely-generated case though and I don't see a way around dealing with the fact that it's still not free.

[0] - I'm basing this on Chapter 8 of https://cs.uwaterloo.ca/~astorjoh/diss2up.pdf, but this is a deep field in which I am not an expert, so if you are, I'd love to hear more.


I read it as a more intuitive primer on the subject, and I think it does that fairly well.


Spot on.

If I may add, I found "useful magic" like discrete Fourier transforms, local linear approximations, and homogeneous differential equations to be exciting examples for motivating students into the abstract theory of linear transformations.
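The DFT example is especially nice because the transform literally is a matrix; a sketch (comparing against NumPy's fft, with arbitrary input values):

    import numpy as np

    # The N-point DFT is the N x N matrix F with F[j, k] = exp(-2*pi*i*j*k/N).
    N = 4
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N)

    x = np.random.default_rng(1).standard_normal(N)
    print(np.allclose(F @ x, np.fft.fft(x)))   # True: the FFT evaluates F @ x, just faster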


Ditto. More broadly, I am bored by efforts to rehash the same introductory material from whatever your given technical topic is (math, programming, machine learning). There are already really good books out there on these things, written by masters, that do a much better job than blogs like this (provided you read them properly).

To people out there writing educational blogs: do more research and find good, well written, timeless resources to point people to for the basics. Spend your energy writing something new that we haven't all already read.


You might find people doing this and not notice it. Sometimes the educational progress is formulating a thing you already know for a subset of people who will receive it more effectively in that format. Might be boring to you, might be brain exploding revelatory for someone else. Even a better articulation of something which might have helped you learn can be in that category! Keep in mind you’re judging education of material you already know.


Yep.

I used to teach this. One of the key ideas is to get rid of 3D geometry and state, from the beginning, large-scale problems (simple models of traffic using Kirchhoff's laws, image convolution, statics…). Otherwise, why define the determinant? Just compute it. Or eigenvalues? Or kernels?
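A tiny sketch of what I mean (the three-edge network is hypothetical): flow conservation alone already produces a singular system, so kernels show up before anyone has to define them abstractly.

    import numpy as np

    # Kirchhoff-style flow balance on a small loop: each row says
    # "flow in = flow out" at some node. The rows are dependent, so the
    # matrix is singular and flows are only fixed up to a circulation.
    A = np.array([
        [1.0, -1.0,  0.0],
        [0.0,  1.0, -1.0],
        [1.0,  0.0, -1.0],   # = row 1 + row 2
    ])

    print(np.linalg.matrix_rank(A))   # 2, not 3
    ker = np.linalg.svd(A)[2][-1]     # kernel vector, ~ [1, 1, 1] / sqrt(3)
    print(np.allclose(A @ ker, 0))    # True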


> Seeing the derivative operator represented as a matrix, acting on the polynomial vector space, can open your eyes.

Are you aware of any resources that help to elucidate ideas like that?


I've heard of some of these words. :-D



