I recently finished going through MIT OCW's linear algebra class from Gilbert Strang. Without the struggle of doing the assignments, reading the text, and watching the lectures, I don't think I would ever have learned the content. While content like this and that from 3blue1brown is commendable and useful, it simply would not have lodged the ideas into my head.
Now that the ideas of things like vector spaces, norms, orthogonality, rank, basis, etc are nearly second nature, the concepts are useful as I study other branches of math which would feel impenetrable otherwise.
YMMV, and if you can learn from condensed materials, go for it, but I might be too dumb for it to work lol. I think the real benefit accrues to the author, who had to work out how to teach these concepts to others.
My experience from a decade of doing professional maths is that there are no shortcuts. You only learn maths by doing hundreds (thousands) of exercises, both mundane and more exciting.
Also, the concepts "mature" in the brain. I remember sleepless nights in the first year of undergrad spent on understanding the details of the proof of the Jordan decomposition and a few years later (when studying algebraic groups) it all felt trivial.
There's no shortcut to understanding maths, just a lot of time spent in solitude trying to make sense of all these abstract concepts (and they DO make sense).
I agree, and have to admit that my own knowledge of LA is sadly way too superficial. The article did give me a big lightbulb moment on something I didn't understand before: some of my friends work in quantitative sociology and economics, and use Stata for their programming needs. They basically use matrices as their exclusive data structure, and the reason for that eluded me until after reading this article.
> While content like this and that from 3blue1brown is commendable and useful, it simply would not have lodged the ideas into my head.
I'm not sure I understand your point. Are you just saying that this blog post isn't an adequate substitute for taking a course in linear algebra? (Of course it isn't. But who said it was?)
Unfortunately, for a lot of people, including undergraduates, the dopamine hit they get from watching a video or passively reading a textbook makes them believe that these are adequate substitutes for doing thousands of exercises.
At my university, undergraduates have admitted that they have done fewer than 50 questions throughout the entirety of my math course. Their grades obviously reflect that, but they will do the same next semester.
Can I press you to explain what you think dopamine is/does?
I'm curious because I'm a neuroscientist who occasionally works on dopaminergic stuff and "passively reading a textbook" is so far from the canonical examples we use for dopamine activity, but the idea of dopamine/dopamine 'hits' has taken on a life of its own that seems quite different from the neurotransmitter.
I am a physicist, so I dare not make any technical neuroscience claims. I meant "dopamine" in the casual sense of people feeling pleasure from passively reading a book because they think it is useful work.
There is a lot of excitement around machine learning and AI and there is not a proportional excitement for the math that underlies the theory.
In a lot of content that teaches machine learning/AI, the linear algebra substrate of it is given short shrift.
The consumers of that content infer that the backing mathematics is easy or unimportant, while the creators of the content are not actually implying that, but just want to move on to what the audience came for.
I'm not an expert in machine learning so I can't say whether you can get by without a strong understanding of linear algebra, but my intuitive answer is no, you can't. Besides that, I enjoy the math for its own sake, so I'm happy to trudge through the textbooks.
And no, I don't think the blog post makes any claims that it is a substitute for taking a course in linear algebra.
3blue1brown himself has said in his videos that they are not a substitute for books and working through the exercises; he recommends reading from books, and his videos are meant as inspiration and a supplement.
I disagree. Maybe you'll remember it not as well if you don't do exercises, but there is no reason you can't learn from a blogpost (which is trivial to prove since you can just copy-paste the contents of a book in the form of one or more blogposts).
What your parent comment meant was that it is not possible to learn well from typical blog posts like this that try to condense the subject into a 3000 word article. Of course, if you copy-paste the content of a book into a blog post, then parent comment's point no longer applies.
While this explanation is certainly much clearer than what I remember of high school maths, I still have a pretty tough time following the formula examples.
When I see A(x) = ax, I'm not entirely sure how to read it.
Is A meant to be a function that accepts x? If so, why is the equivalent expression a * x? Is it supposed to be implied that function A also has some hidden value "a" that is going to be multiplied by the supplied value? Is this notation specific to multiplication, to this expression, or what?
Positing that something is 'intuitive' when it depends so much on additional contextual knowledge seems ever so slightly disingenuous at best, and slightly harmful at worst; it can make the reader feel as though they must be dumb for not understanding this 'intuitive' material.
I do acknowledge that this is linear algebra, and if one doesn't have a really solid grasp of notation of regular algebra it is likely to go over their heads, but the practical explanations (such as the slope rise/run example) are quite clear and relatively simple to follow; it follows that a simple explanation of the notation might be helpful too.
The particular example might be confusing because the same letter is used twice with different capitalization. There is no direct relation between them.
> Is A meant to be a function that accepts x?
Yes.
> If so, why is the equivalent expression a * x?
Because that's how A(x) is defined.
> Is it supposed to be implied that function A also has some hidden value "a" that is going to be multiplied by the supplied value?
Yes, it's an unspecified constant (a, b, c... are used to denote constants by convention), so you can't really calculate A(x) for supplied values of x until the constant 'a' is specified.
> Is this notation specific to multiplication, to this expression, or what?
No. Functions might be defined using any expression. For example
A(x) = b^x
is a valid function as well (again, we have an unspecified constant). Just don't expect to encounter it in an introductory course in linear algebra (since it would deal mostly with linear functions).
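If it helps to see the notation as code, here is a hypothetical Python sketch (my own illustration, not from the article): the constant 'a' gets supplied first, and only then can A(x) be evaluated.

    def make_A(a):
        # Build the function A(x) = a * x for a fixed constant a.
        def A(x):
            return a * x
        return A

    A = make_A(3)   # the constant is now specified, so A(x) = 3 * x
    print(A(2))     # 6
    print(A(5))     # 15

Until make_A is called with a concrete value, A(x) cannot be evaluated -- which is exactly the role the unspecified constant plays above.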
> Linear algebra is really about linear transformations of vector spaces, which is not captured in this blog post.
I... disagree. Some of linear algebra is about that. And it's probably a good way to view it that way when learning.
But some of my current work (coding theory) involves linear algebra over finite fields. We use results from linear algebra, and interpret our problem using matrices, but really at no point are we viewing what we're doing as transforming a vector space, we're just solving equations with unknowns.
I think this is spot on. Depending on what you're doing, a matrix can be:
- A linear transformation
- A basis set of column vectors
- A set of equations (rows) to be solved
- (your example: parity equations for coding theory)
- The covariance of elements in a vector space
- The Hessian of a function for numerical optimization
- The adjacency representation of a graph
- Just a 2D image (compression algorithms)
... (I'm sure there are plenty of others)
For some of these, the matrix is really just a high-dimensional number. You rarely (if ever) think of the covariance in a Kalman filter as a linear transform, but you still need to take its eigenvectors if you want to draw ellipses.
The first three can reasonably be thought of as defining linear transformations. For linear systems of equations A x = b, x is an unknown vector in the input space that is mapped by A to b.
Both covariance matrices and Hessians are more naturally thought of as tensors, not matrices (and therefore not linear transformations). That is, they take in two vectors as input and produce a single real number as output.
As for graph adjacency matrix, this can actually be thought of as a linear transformation on the vector space where the basis vectors correspond to nodes in the graph. Linear combinations of these basis vectors correspond to probability distributions over the graph (if properly normalized).
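A minimal sketch of that interpretation (my own toy example, assuming numpy and a column-normalized adjacency matrix):

    import numpy as np

    # Directed graph on 3 nodes; A[i, j] = 1 if there is an edge j -> i.
    A = np.array([[0, 0, 1],
                  [1, 0, 0],
                  [1, 1, 0]], dtype=float)

    # Normalize columns so each sums to 1 (a column-stochastic matrix).
    P = A / A.sum(axis=0)

    v = np.array([1.0, 0.0, 0.0])   # all probability mass on node 0
    for _ in range(3):
        v = P @ v                   # one random-walk step = one linear map
    print(v)                        # distribution over nodes after 3 steps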
2D images... Yes, these cannot really be interpreted as linear transformations. But I'd say these aren't really matrices in the mathematical sense.
If you squint hard enough, you can see all of them as linear transformations (even the 2D images :-).
I politely disagree about covariance and Hessians. I can squint and say that the Hessian provides a change in gradient when multiplied by a delta vector. Similarly for covariance... Or you could look at it as one half of the dot product for a Bhattacharyya distance, which is just a product of three matrices (row vector, square matrix, col vector). No need for tensors yet.
That is unless you decide to squint hard enough to see everything as tensors! :-)
Great points. I wrote my comment in response to the article claiming to be an intuitive guide to linear algebra, not an intuitive guide to matrices. According to wikipedia:
> Linear algebra is the branch of mathematics concerning linear equations, linear functions, and their representations in vector spaces through matrices. [0]
The Venn diagram of linear algebra and matrices definitely has a lot of non-overlap, of which your list covers some. This article should be renamed to be about matrices rather than linear algebra, because that's what it actually covers.
A covariance matrix naturally transforms from the measured space to a space where things are approximately unit Gaussian distributed. This is identical to the z-score transform in the 1D case.
This can be useful in, say, exotic options trading - a natural unit of measurement is how many ‘vols’ an underlier has moved, e.g. a 10-vol move is very large.
Not really the covariance matrix, though, but its Cholesky decomposition (which exists, as a covariance matrix is symmetric positive (semi)definite, as otherwise you could construct a linear combination with negative variance).
Useful stuff.
And vice versa, btw - take iid RV with unit variance, hit them with the Cholesky decomposition, and you have the desired covariance. Used all over Monte Carlo and finance and so on.
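A minimal numpy sketch of that recipe (toy 2x2 covariance of my own choosing):

    import numpy as np

    rng = np.random.default_rng(0)

    Sigma = np.array([[4.0, 1.2],     # target covariance: symmetric
                      [1.2, 1.0]])    # positive definite

    L = np.linalg.cholesky(Sigma)     # Sigma = L @ L.T

    z = rng.standard_normal((2, 100_000))   # iid unit-variance RVs
    x = L @ z                                # hit them with the Cholesky factor

    print(np.cov(x))                  # close to Sigma

(Going the other way -- whitening -- means solving L y = x instead.)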
Well, it depends on what "the space" is. Every set of vectors (in a common ambient space) spans some space—often called the column space of a matrix, if they are the column vectors of the matrix.
Sure, every set of vectors will span the space they span. But the requirement that a basis span the space refers to the space it's in, not the space it spans (otherwise every linearly independent set of vectors would be a basis, spanning the space it spans). I could go on :-)
> > Linear algebra is really about linear transformations of vector spaces, which is not captured in this blog post.
> I... disagree.
This is literally the definition of the term "linear algebra".
> really at no point are we viewing what we're doing as transforming a vector space, we're just solving equations with unknowns.
You may not see what you're doing as transforming vector spaces with linear operators, but that is what you're doing. It's worth pointing out that the definition of vector spaces allows any field, including finite ones, though it's true that the intuition won't be exactly the same.
Another way to say this: if you're working on a problem without thinking about the connection to linear transformations, then it's not correct to say it's a linear algebra problem without obvious connection to linear transformations; instead, it's not a linear algebra problem at all, by definition.
Linear algebra is a shared field across multiple disciplines. So I'm sure that there are many valid and useful interpretations as to what "linear algebra" is essentially about.
However, in mathematics proper, it is absolutely the case that linear algebra is about linear transformations. Indeed, this is the only interpretation that remains meaningful when trying to generalize (e.g. to functional analysis / multilinear algebra).
20+ years ago I took a grad course in coding theory, e.g.,
W. Wesley Peterson and
E. J. Weldon, Jr.,
Error-Correcting Codes,
Second Edition,
The MIT Press.
-- gee, people are still studying/learning that?
The prof knew the material really well, but to up my game in the finite field theory from other courses, I used
Oscar Zariski and
Pierre Samuel,
Commutative Algebra,
Volume I,
Van Nostrand,
Princeton.
which did have a lot more than I needed!
My 50,000 foot overview of linear algebra is that the subject still rests on the apparently very old problem of the numerical solution of systems of simultaneous (same unknowns) linear equations, e.g., via Gauss elimination (it's really easy, intuitive, powerful, and clever, surprisingly stable numerically, and is fast and easy to program; someone might want to type in, say, just an English language description!). Since the subject of linear equations significantly pre-dates matrix theory,
the start of matrix theory was maybe just easier notation for working with systems of linear equations. In principle, everything done with matrix theory could have been done with just systems of linear equations, although often at the price of a notational mess. In particular, as I outline below, there are now lots of generalizations of systems of linear equations that use different notation and not much matrix theory.
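For instance, a rough sketch of that English language description typed into Python (a toy version with partial pivoting; numpy assumed, no claims of production-grade numerics):

    import numpy as np

    def gauss_solve(A, b):
        # Solve A x = b by Gauss elimination with partial pivoting.
        A = A.astype(float)
        b = b.astype(float)
        n = len(b)
        for k in range(n - 1):                     # forward elimination
            p = k + np.argmax(np.abs(A[k:, k]))    # pivot row, for stability
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):             # back substitution
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([3.0, 5.0])
    print(gauss_solve(A, b))   # [0.8 1.4]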
What's amazing are the generalizations, all the way to linear systems (e.g., their ringing) in mechanical engineering, radio astronomy, molecular spectroscopy, frequencies in radio broadcasting, stochastic processes,
music, mixing animal feed, linear programming, oil refinery operation optimization, min-cost network flows, non-linear optimization, Fourier theory, Banach space, oil prospecting, phased array sonar, radar, and radio astronomy, seismology, quantum mechanics, yes, error correcting codes, linear ordinary and partial differential equations, ..., and then
Nelson Dunford and
Jacob T. Schwartz,
Linear Operators
Part I:
General Theory,
ISBN 0-470-22605-6,
Interscience,
New York.
Linear algebra IS about linear transformations and vector spaces.
The thing is that the field over which the space is defined can be quite arbitrary (finite, infinite, not algebraically closed etc.) which has immense consequences on the behavior of such objects.
When one drops the assumption of a finite number of dimensions, the story becomes wild (and is known as functional analysis, a beautiful and extremely useful branch of mathematics).
3B1B does great work explaining these concepts, but I can't help but ask "why not both?". Turns out, linear algebra is great for working with matrices, vector spaces, approximating non-linear systems, and more... Let's embrace multiple ways of teaching it and gaining intuitions rather than keeping score, eh?
Failed to comprehend which subject, linear algebra? I would argue no, and other people who are more on the pure mathematics side would agree [0][1].
snicker7 said it very succinctly:
> However, in mathematics proper, it is absolutely the case that linear algebra is about linear transformations. Indeed, this is the only interpretation that remains meaningful when trying to generalize (e.g. to functional analysis / multilinear algebra).
If your point is that I failed to comprehend matrices, then I don't think you have enough data to make that claim, since I don't really talk about matrices. I kind of address that in my other comment [2].
I don't follow your point around "a wide array of problems and models are the same thing". That's a very vague general statement that I certainly comprehend (not sure how you inferred otherwise). Specifically, I don't see how that point relates at all to the claim I made about linear algebra.
I... I still really struggle with this. I'm a smart person, I've got a bachelor's in engineering, I've been a professional software developer for around 14 years now, and I've built a house. But there is something about degree-level maths and beyond that I find deeply unintuitive in a way that software development isn't.
Through comments here I found 3blue1brown's (clearly much loved) videos. By the third video I was shouting, "why for the love of god would we be doing this"? Based on this reaction I suspect that the content neither has intrinsic appeal to me, nor does it have obvious use in my work, projects, or life.
Pre-degree maths though, I love. My A-level maths really changed how I saw the world, and I make use of it reasonably often (well, often enough to not forget it).
I think I'm writing this here because most other commenters seem to really grasp this subject, or feel that they grasp it better having seen these videos. I'm honestly happy for you. However, if anyone is reading this who doesn't feel like that, then know you're not alone :-)
> But there is something about degree-level maths and beyond that I find deeply unintuitive in a way that software development isn't.
Grok.
Over the years, I come to the conclusion that one of the stumbling blocks is the definition/concept of "application". Just like the definition of "theory" is different for a layman ("My theory is..." == "My guess is...") than from a scientist's definition ("My theory is" == "My logical framework which incorporates all of the available data is..."), so the definition of "application" is different between mathematicians and engineers.
I've noticed that math books with titles like "$HIGHER_ORDER_MATH with Applications" means "$HIGHER_ORDER_MATH with Exercises". What I'm looking for is something like "$HIGHER_ORDER_MATH with Real-World Uses".
I've known LA for decades but, like you, where would I use it in my life? The turning point for me was Andrew Ng's Deep Neural Network course.
I knew that a DNN is a program of matrix operations, but how do you get 5,000 images into a matrix? One way is to resize all the images to the same n x n size, take the first pixel of each picture and break it into its RGB components. You now have the first three rows of your input matrix. Repeat for all other pixels and voila! You have a 3n^2 x 5,000 matrix (one column per image) that you can do linear algebra on! _That's_ an application; having me add two matrices together is an exercise.
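A rough sketch of that stacking step (placeholder random images, numpy assumed, small n to keep it cheap):

    import numpy as np

    # Pretend these are 5,000 RGB images, each resized to n x n.
    n = 8
    images = [np.random.randint(0, 256, (n, n, 3), dtype=np.uint8)
              for _ in range(5_000)]

    # Flatten each image into a column of 3*n*n pixel components,
    # then stack the columns side by side.
    X = np.stack([im.reshape(-1) for im in images], axis=1)
    print(X.shape)   # (192, 5000): one column per image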
Since that insight, I've used LA in my job in the hospitality sector with impressive success, because now I know how to apply it. Math books and 3B1B show you the math. We engineers (or at least this one) need real world uses.
This is typical for many people. You love pre-college math because you have intuitive understanding, while college-level maths offer a new level of abstraction that you may not feel familiar with from the get-go.
It's perfectly okay not to learn linear algebra, by the way, especially when you don't find any incentive to do so. Otherwise, you'll find linear algebra to be one of the most intuitive tools to model so many problems.
If you do want to learn linear algebra or any other higher math, I'd strongly recommend you focus on understanding concepts intuitively first, to the point that you find many exercises in a textbook straightforward. Watching 3blue1brown is a good start, but do move forward with deeper treatment. The book I find very useful is David Lay's Linear Algebra and Its Applications: https://www.amazon.com/Linear-Algebra-Its-Applications-5th/d.... Lay sets up a really intuitive geometric framework to explain the intuition of linear transformation with sufficient rigor.
No. I took only a few courses on probability and mathematical stats. The books were A First Course in Probability and some textbook on mathematical stats. The courses were probably not advanced enough, as I found them reasonably straightforward. I struggled a bit with what exactly a random variable is, but once that was internalized, everything else followed. I heard that advanced courses like random processes were really hard, but I'm not at that level.
Part of the challenge with linear algebra is that a lot of the basics are somewhat dry, since they serve mostly as a way to organize computation -- e.g., a system of linear equations can be expressed as Ax = b, where A is a matrix, b is a vector, and x is a vector of variables.
Much of what makes linear algebra interesting and powerful comes from more advanced topics, especially eigenvalues. This power comes when we are not looking at a single matrix in isolation, but when we repeatedly apply a matrix. For instance, consider the equation x_t = A^t x_0. It turns out we can rewrite this equation by diagonalizing A -- i.e., A = P^{-1} Sigma P, where Sigma is diagonal; most, but not all, matrices can be written in this form. We call the diagonal entries s_i of Sigma the "eigenvalues" of A.
Then, the equation simplifies to x_t = P^{-1} Sigma^t P x_0, or equivalently (P x_t) = Sigma^t (P x_0). This equation is dramatically simpler, since Sigma is diagonal, so if Sigma = diag(s_1, ..., s_n), then Sigma^t = diag(s_1^t, ..., s_n^t). In other words, this transformation "disentangles" the different components of A into ones that act independently. Here, the transformation x -> P x is what is called a "change of basis".
These repeated matrix applications are common in physics, where they represent how a dynamical system evolves over time. The main difference is that in physics, the system evolves continuously, but similar transformations can be applied to solve these problems.
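A quick numpy sketch of that disentangling (my own toy matrix; note numpy's eig returns the convention A = V Sigma V^{-1}, so its V plays the role of P^{-1} above):

    import numpy as np

    A = np.array([[0.9, 0.2],
                  [0.1, 0.8]])
    x0 = np.array([1.0, 0.0])
    t = 50

    s, V = np.linalg.eig(A)   # eigenvalues s, eigenvectors as columns of V

    # x_t = V diag(s^t) V^{-1} x0 -- powers of a diagonal matrix are cheap
    x_t = V @ np.diag(s ** t) @ np.linalg.inv(V) @ x0

    print(x_t)
    print(np.linalg.matrix_power(A, t) @ x0)   # same answer, the slow way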
This post really illustrates why HN needs MathJax support, and/or browsers need MathML support. Math notation is hard enough to read when you're reading the real version... when you're forced to read someone's ad-hoc "notation that I'm forced to use because I can't use real notation" it just becomes that much worse.
Luckily, Chromium is getting MathML support (finally) unless something weird happens, which (presumably) means that Chrome and Edge will inherit that support as well. Firefox already has MathML, so hopefully we'll soon be in a position where three of the most widely used browsers support MathML.
Still, it would be interesting to ask the Powers that Be if they'd be willing to implement MathJax here on HN....
The problem with Linear Algebra specifically, is that it can be viewed from many different perspectives.
The article here focuses on an "operational" perspective, how the numbers get added or multiplied together to turn into other numbers. However, Linear Algebra is also useful in geometry, and other situations.
This "intuitive guide" to linear algebra sets you up very nicely for figuring out how to add and multiply matricies together. But it doesn't give you any intuition about a rotation (aka quaternions) in 3d space, for example. A lot of math books make the mistake of trying to teach all the perspectives at the same time, instead of focusing on just one viewpoint until the student gains mastery.
A Quaternion is "just" a 4x4 matrix that represents rotation in 3-dimension space. Because you only move 3-ways rotationally (yaw, pitch, and roll), you're "underconstrained" with regards to the 4x4 matrix. Etc. etc. A lot of geometry intuition needs to be built here to really understand Quaternion... and none of that geometry is explored in the blogpost.
Which is fine. Focus is good. But when people approach linear algebra, it's important to know that it's "so useful" that there are too many ways of looking at it... too many different, yet equivalent, understandings of the subject.
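For a concrete taste of that geometry (my own sketch, using the standard conversion from a unit quaternion (w, x, y, z) to the 3x3 rotation block that sits inside the 4x4 homogeneous matrix):

    import numpy as np

    def quat_to_rot(w, x, y, z):
        # Rotation matrix of a *unit* quaternion (w, x, y, z).
        return np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])

    # 90-degree rotation about the z-axis: q = (cos 45, 0, 0, sin 45)
    c = np.cos(np.pi / 4)
    R = quat_to_rot(c, 0.0, 0.0, c)
    print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))   # -> [0. 1. 0.]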
I used to think this; my problem was that I wasn't doing the exercises, instead just reading articles and watching videos and trying to get some kind of theoretical understanding. Everything makes a lot more sense once you've slogged through a bunch of repeated exercises.
100% Agreed. I've written 3D engines in shipping games and yet I can't do math for shit. I've tried watching 3blue1brown and they have pretty pictures, but they don't help me at all. I feel like they're mostly appreciated by people who already understand and can't remember what it's like to not understand.
I was recently watching videos and trying to read papers on geometric algebra and getting totally and utterly lost on actually applying it.
Higher math involves a very different way of thinking from the typical, useful things people do for a living. Exactness is important. The abstractions can run very deep. It's easy to get lost in the pure side of things without really understanding how to apply it.
I think this is possibly the crux of it for me. I've certainly got well developed abstract thinking for software development, but software has always had a clear application for me, so the abstract thinking developed as a matter of course.
I've rarely found any higher math instruction which takes the form, "so you have this specific problem X, here is how we can solve it with technique Y"[1]. But I suspect that is because it is higher math (which presumably means 'higher order' math).
Without this, and without an inherent enjoyment of the pureness of the math, it seems somewhat esoteric for me personally. I'm not complaining, nor do I really think it should be any other way. I'm just reflecting on it really.
This also makes me think of my foray into monads: "The thing about monads is once you finally understand them you immediately lose the ability to explain what they are to others." Not saying that's the case here, just feels related.
[1] At least where I found problem X to be satisfactory. I didn't find my lecturer's problem of, "you're stood on a mountain described by this PDE, on what vector must one walk in order to stay at the same altitude" to be very applicable. I was a pretty wilful student though.
Software development never gets that abstract. Yes there are abstractions, but compared to college math, they're extremely simple. The most complex abstractions I've come across in software engineering don't hold a candle to some of the abstractions you would see in a typical undergraduate math degree. Heck, my university used Baby Rudin [0] for its introductory analysis class, which was often taken by freshmen or sophomores.
Personally I really struggle with the syntax and notation of upper-level math courses, I need a big cheat sheet of all the terms. It's like programming where we use i, j, k for loop variables instead of something more descriptive. My brain has to do one extra layer of translation between what's written and the concept being taught and then I lose focus, but intuitively I've always been fine with math concepts. When I took linear algebra in college (and did terrible) I particularly struggled with all the syntax that was introduced. Would love some tips if anyone else has a similar problem.
That is to a large extent a matter of practice. The notation can become second nature and then really helpful - you can write down regression (ordinary least squares) in two lines, basically (including problem and solution), and it all makes sense and translates fairly directly into an algorithm.
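For example, those two OLS lines translate into numpy almost verbatim (a toy sketch of my own; np.linalg.solve is used rather than forming an explicit inverse):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 3))      # 100 observations, 3 regressors
    y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(100)

    # Problem: min ||X b - y||^2.  Solution: b = (X'X)^{-1} X'y.
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    print(beta)   # close to [2, -1, 0.5]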
I'm currently working on a linear-algebra-heavy linear programming model for an optimization thesis, and the general trick, as with writing code, is to improve the model iteration by iteration. Starting with a complex mathematical model is always a bad idea, so start small and iteratively improve it. This means no use of advanced concepts unless needed. Also, nobody uses every bit of math in their everyday work. As with programming, you just need to understand the general concepts, and the rest can be figured out step by step.
I think the point where it gets confusing is where they stop showing you how to do calculations. With programming, you are always calculating something, even if you abstracted away from it.
Geometric algebra seems more practical than most subjects but even its introductions suffer from this.
That's because the way that math is taught and presented makes it extremely hard to grasp and relate to, and this makes it hard for others to find the topics within it interesting. Mathematics, in its very essence, is incredibly beautiful. The universe - yes, everything in it and everything that composes you, is in fact a form of mathematics! What does this mean?
Well, it means that understanding it means that you understand yourself! Well, not just that, but it might eventually lead to you understanding the architecture of the universe! After that? How about God? Or is God maybe a part of the architecture?
OK, so maybe I won’t try to go into the details there since opinions on it differ. Essentially, through this language though, you can master almost anything that you want! Yes, it might seem crazy, but the things that happen around you and the things which every poet and playwright and prisoner and savior ever composed can be explained through mathematics! Through it, you can also open up entirely new and utterly interesting worlds!
Let’s say that you dropped your pen this morning. To someone with no background in physics, or mathematics, this means nothing. On the other hand, to someone like me, it means quite a bit. The slight delay and movement altered the gravitational and electromagnetic field around you and echoed on into eternity. Also, it most likely changed your day's structure and composition, and shifted your life into a new line (see quantum mechanics and chaos and linear dynamics) and had a profoundly large impact on everything else around you. You may have avoided a car crash, or you might have met a person who you wouldn’t have normally encountered, all because of that small change. And this small change echoed on and affected everything else in turn. And this is all very mathematical, and extremely beautiful, but most people don’t know anything about it, but it does mean something to people like me: we’re all incredibly inter-connected, and our lives are ruled by chaos. Everything that you do, and everything that you say, and all the things you see are inter-related and oh so close, but we tend not to see it, and it all has to do with not understanding the fundamental mathematics!
The above is only touching on one small aspect of it though, as it only deals with physics and chaos. There are entire branches of math which make the world incredibly interesting which have nothing to do with the fields I just mentioned! Hey, did you know that standing next to someone who looks slightly like your wife will cause you to behave in a similar manner to how you behave when she’s actually around you? Yes, neurons that fire together wire together, and it’s very mathematical underneath but yet so simple! How about the fact that E = mc squared isn’t really true? Yes, the formula has an extra factor (one over the square root of 1 minus v squared over c squared) which deals with relativistic effects and makes it possible for massless particles to have energy. Did you know that you can summarize most of modern classical physics in just a few equations? (Yup, you can find most of them here: https://www.feynmanlectures.caltech.edu/II_18.html ). How about balance? Did you know that if you were standing at arm’s length from someone and each of you had one percent more electrons than protons, the repelling force would be so incredible that it would be enough to lift a “weight” equal to that of the entire planet! Yup - math is full of fun surprises!
Now, modern math is sort of like a constant tease which shows you the shell of this beautiful program and this excitement, and you know the beauty is there, but it’s not easy to understand and grasp! For one, most of mathematics is filled with jargon and language that is incredibly information dense, and it looks like it’s been written by a schizophrenic C programmer who’s paranoid about losing his job, so the information tends to be lumped into these incredibly dense formulas which hide the beauty and truth, but the beauty and truth will always be there! You just need to have a bit of persistence and dedication. It also doesn’t help that most teachers tend to not make things nice. They puke out the same old standardized stuff regurgitated and taught to them, and so round and round it goes.
Hopefully though, we’ll get better at teaching it and conveying its structure as we learn better ways of not making things cumbersome and uninteresting to other people! Wow, I need to stop writing - sorry for the large wall of text, but I hope you get what I mean!
Also, at the end of February there is a geometric algebra event in Belgium. https://bivector.net/game2020.html All the big names in the field will be there.
This seems to be "what are matrices and how do you work with them" and not linear algebra.
I mean that can be useful sometimes but seems more like something you would teach in a numerics course instead.
Actually, I think this way of explaining and motivating things (linear map==matrix) will get really, really confusing once you try to understand changes of bases or eigenvalue decomposition. A linear map is something that takes vectors and spits out vectors while preserving the vector structure (i.e. addition and scalar multiplication on the input give you addition and scalar multiplication of the output).
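That structure preservation is easy to check numerically (a toy numpy sketch of my own):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 3))       # any matrix gives a linear map
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    a, b = 2.5, -1.3

    lhs = A @ (a * x + b * y)             # map of the combination...
    rhs = a * (A @ x) + b * (A @ y)       # ...equals the combination of maps
    print(np.allclose(lhs, rhs))          # True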
I really like "matrices are the coordinate form of linear transforms." In the LA class I took the professor made a pretty big deal out of that, first by defining "lexicographic matrix basis" so he could write out matrices as vectors and then talking about mapping between the three different ideas.
There's still stuff he said that I'm unpacking today... that was a dense class.
This is a great book. I also recommend Halmos "Finite Dimensional Vector Spaces". The typical way linear algebra is introduced does not present a matrix as a linear transformation first and foremost.
I'm reading Strang's linear algebra book and he teaches it in terms of combining columns and rows which I think is a lot clearer than explaining it in terms of linear equations.
Goes to show how different people have different tastes. I find this type of exposition very confusing and very unenlightening. Give me a Landau-style "minimalistic"/"focused" explanation any day. Not to mention, it tries so hard to simplify things to a simple analogy (the spreadsheet thing) that it ends up being plain misleading. In other words: "Make things as simple as they can be, but no simpler."
I love explainers like this, but it frankly makes me a little angry that the vast majority of the math teachers I had in highschool and college taught in the awful way described in the setup to the piece.
Why is that? Has anyone studied it, or is there even a solid anecdotal explanation? The best one I can imagine is many of these professors simply don't care much for teaching and are more focused on their research, which is still infuriating but at least an explanation.
I ended up with an undergrad in applied math, though I'm a software engineer now. I like math, but I feel like I never got to be all that great at it. I suspect I would've enjoyed it more and achieved more with explanations like these.
How much of this explainer, however, seems better to us precisely because of the more comprehensive knowledge and understanding we already have?
For example, the author uses the word function liberally in the explanation. However, when I studied functions in math at school, they seemed super complicated to me. It was only after I started programming, learning the programming-language meaning of function, and then being reintroduced to mathematical functions through functional programming that I truly grasped mathematical functions and all the stuff I was taught in school.
I’m not arguing that school-level math does not need to be improved. I just think we should be cautious: an explanation that seems intuitive to us now, after a complete introduction to all the math concepts as well as programming (esp. on HN), may not be as intuitive to students who have only been exposed to a very small subset of mathematical concepts when they first encounter them.
For sure there is an effect on Reddit and other places where someone will post a question such as, "I'm having trouble with my Calc class, what is a good book?" and people seriously answer Calculus on Manifolds.
Now, CoM is a classic, a real great book, but it is useful only to people who have reached a certain level of mathematical maturity. That, presumably, is not the questioner.
A version of this is that I also see people who write, "I didn't understand this topic when I took the course but now years later, I see it is all actually very easy."
(I call this Second Book Syndrome because I don't know of a common name for it. I understand this to be what Zen people mean by the "Gateless Gate," that after struggling with something at great length a person can come to see that there is no real difficulty. But I've never heard anyone else apply that name to this phenomenon and I'm not Zen trained so I'm not sure that is right either.)
Once you understand monads, you lose the ability to explain monads. Hence the number of monads tutorials grows at an exponential rate as every new understander tries to explain them and fails.
In addition to its being good and useful, it’s also cursed and the curse of the monad is that once you get the epiphany, once you understand - "oh that's what it is" - you lose the ability to explain it to anybody else.
Here is a link that helps me think about it: https://www.alanwatts.org/3-3-10-gateless-gate/ (I teach Math and a common complaint, against all math teachers, is "They obviously know the material but they cannot teach it." When I first heard AW talk about this it was a revelation for me because it so closely linked with my experience. You say to people "A vector space is a place for linear combinations to happen" and they don't get it, of course they don't get it, I wouldn't have gotten it, but that's what a vector space is. So you have to do lots of examples, and work around the edges, and somehow sit with it for a while. Anyway, I find that for me this link conveys the point of that quote about monads.)
Actually, a rather famous example of your point is the Feynman lectures. They are often hailed as a great, "easy" and intuitive explanation of physics. However, supposedly at the time, the undergrad students Feynman was teaching did not think they were all the rage. It was actually the grad students who retook the material and really enjoyed them. To support your point, I think sometimes it is really necessary to have been exposed to the material and to have grasped some part of it; then you are open to these types of explanations, which really lift you over the hurdle of the things you didn't quite get before.
As someone who learned to program before taking any interesting math I was always so confused by the amount of early math classes spent explaining function notation.
It wasn't until I tried teaching people that I realized how odd the notation can seem.
you should be happy someone explained it to you- I feel that notation was overlooked tremendously in my math education- I actually just brought this up in a thread yesterday. It was like "oh, we have this dx now... k."
I agree with your post. I had the same problem where math was harder for me to comprehend. It wasn't until I started programming that I started to understand the concept you cite (functions).
I taught Astronomy to the people in the years below me (a school tradition because it was optional), and it was absolutely exhausting trying to plan good lessons that didn't involve getting them to memorise stuff by rote. I ""derived"" Kepler's laws for a bunch of 14-15 year olds who didn't even know logarithms yet and it was pretty brutal intellectually.
Also, I think teaching mathematics "intuitively" requires a bit of cooperation from the student - not in the sense of intelligence, just that for every guy (or girl) who watches 3Blue1Brown (and looks deeper into the pure mathematics) there's another who's just along for the ride. I think the frequencies of those personalities are a product of how they were taught, but it's very difficult to convince people in "teaching time" as opposed to naturally (I was in the lowest maths group for years until I picked up a calculus book on a whim, but no teacher could've convinced old me that mathematics can be beautiful)
Quick Edit: To quote Tim Minchin, "Be a teacher". I might come across as moaning in the above but it's really interesting to explain things (in my experience at least - I love writing documentation!), and a good bullshit-test on yourself
I had an eye-opening experience and some closure about my own education after reading a certain book, a collection of short biographical accounts of famous Germans and their horrific experiences in school during the last few centuries. School has always been bad and actually used to be borderline insane, and yet these great characters, politicians, artists and scientists still developed fine.
Rant: If society gave a shit, teaching would be the highest-paid job, with the highest standards both in knowledge and in teaching skill. I mean literally 500k/year for the most noble calling on earth. The kind of progress humanity would make with a generation taught by the best, and teaching itself revolutionized, would be crazy.
Here are two reasons I think are important; there are more, but these stand out. (1) Teaching is hard, and the amount of training you get in teaching higher-level maths (rather than just learning higher-level maths itself) is very limited. (2) Intuition in maths will lead you on a merry path to very wrong ideas. There are functions that are continuous everywhere without a derivative anywhere, Russell's paradox, classically Zeno's paradox, etc, etc. A big part of higher-level maths training is in being precise with what you mean so that you can learn to do proofs and not trick yourself. I think a synthesis of these approaches is the sweet spot. Be precise, but show applications to motivate the material.
This is a nice resource - I wrote one myself as well which is mostly based on the series by 3Blue1Brown, as well as other resources which I found useful and which used a visual approach to introducing linear algebra.
I remember taking Advanced Algebra Honors in 10th grade. It was basically Algebra II with a few (seemingly teacher-selected based on the experience of students who had a different teacher) advanced topics thrown in. One of them was matrices, and I was completely stumped by them. I now encounter them all the time, and wish I'd been able to wrap my head around them when I was younger.
I agree that it is better to understand math, and computer science, intuitively first. Learning the basics instead of learning how to think in them forces memorization and frankly belongs to a time gone by.
If only I could've been taught this way when I was younger, then I'd actually be any good at any advanced math.
The only real test is to talk with an expert who can tell you whether you have understood. There are many things in linear algebra that you can use in practice even when you didn't really understand them.
This is the reason why self-studying certain topics is very hard, you still need (good) teachers to give you constant feedback.
I just wanted to give some context to how I found this page, and why I thought it would be good to post.
I may be putting myself on the spot here: I never took a linear algebra course in undergrad. It was a heavily encouraged option, of course, but I felt I understood the basic rules enough to not really need formal study. I opted to study other areas, partially motivated by a fear I wouldn't do well and would hurt my GPA (my god was I vain, I feel I could do so much more studying full-time with my current world-view).
As time has gone on, and ML and quantum computing have simply blown up since I graduated in 2014, I quickly realized the magnitude of my mistake. I have frantically self-studied for years to try to make up the gaps in my mathematical understanding, and linear algebra has come up time and time again. I can do the processes, but they never clicked, I had no intuition.
I want to help others in my position cut to the chase, and study the highest yield, most intuition giving resources.
I actually developed the mental model shared in this guide on my own, and was positively delighted to find this while thinking over a comment I was drafting on here. This page lays things out so clearly. The component steps are intuitive, and I can commit them to memory and recall what they mean without needing to dig up my notes!
===
I find this page gives an excellent foundation, and goes great with these resources:
* A web site which clearly shows how to do matrix multiplication in a way that's easy to recall, it makes the procedure like riding a bike:
Page 3 of that paper lays out matrix multiplication (e.g.: applying a "transformation matrix" in the spatial parlance of 3blue1brown's videos) as a traversal of a directed graph. A very useful understanding, and it shows how generalizable the tools of linear algebra really are, in my opinion (a small sketch of this view follows the list).
* The essence of linear algebra, by 3blue1brown (fantastic for a geometric/"transformation of space" view of linear alg):
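As mentioned above, here is a small sketch of the directed-graph view of matrix multiplication (my own toy example, not taken from the paper): entry (i, j) of A^k counts the walks of length k from node i to node j, because matrix multiplication literally sums over the intermediate nodes.

    import numpy as np

    # Adjacency matrix: A[i, j] = 1 if there is an edge i -> j.
    A = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]])

    A2 = A @ A
    print(A2[0, 2])   # walks of length 2 from node 0 to node 2 (here: 1, via node 1)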
Speaking of linear algebra, I learnt more from a few hours reading the appendix of "The Design of Rijndael: AES - The Advanced Encryption Standard" than I did from 6 months of theoretical university teaching full of useless technical terms and solutions in search of problems...
This sounds like it was meant to be pejorative, but it's what (applied) linear algebra, and applied mathematics more generally, is. Anyone can learn about a certain mathematical topic upon realising it's the one relevant to the problem they're facing—and learn it way more quickly, due to motivation and focus, than they would in a general-purpose course on the topic; the art is in recognising what mathematics is relevant, and you can't do that if you've never heard of it before. Having a library of (conceptual, not cookbook) solution hooks on which to hang your problems is how you get to be good at using mathematics.
Of course it was meant to be pejorative, and that pretty much summarizes my experience through university. In this particular case, the teacher didn't give a rat's arse about his class and was merely there for the safe job and pension, sprinkled with some "research".
Anyhow, I disagree with that top-down approach, which seems to be very... European. I much prefer to follow a more logical path where the problem precedes the introduction of the solution.
> Anyhow, I disagree with that top-down approach, which seems to be very... European. I much prefer to follow a more logical path where the problem precedes the introduction of the solution.
I agree that it's easier to learn that way, but it's not much use when solving real-world problems; you can learn the techniques relevant to the solution once you know what they are, but, if you haven't met them before, you won't recognise that they're the right ones.