20+ years ago I took a grad course in coding theory, e.g.,
W. Wesley Peterson and
E. J. Weldon, Jr.,
Error-Correcting Codes,
Second Edition,
The MIT Press.
-- gee, people are still studying/learning that?
The prof knew the material really well, but to up my game in the finite field theory from other courses, I used
Oscar Zariski and
Pierre Samuel,
Commutative Algebra,
Volume I,
Van Nostrand,
Princeton.
which did have a lot more than I needed!
My 50,000 foot overview of linear algebra is that the subject still rests on the apparently very old problem of the numerical solution of systems of simultaneous (same unknowns) linear equations, e.g., via Gauss elimination (it's really easy, intuitive, powerful, and clever, surprisingly stable numerically, and is fast and easy to program; someone might want to type in, say, just an English language description!). Since the subject of linear equations significantly pre-dates matrix theory, the start of matrix theory was maybe just easier notation for working with systems of linear equations. In principle, everything done with matrix theory could have been done with just systems of linear equations, although often at the price of a notational mess. In particular, as I outline below, there are now lots of generalizations of systems of linear equations that use different notation and not much matrix theory.
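Since that parenthetical practically invites an implementation, here is a minimal sketch in Python, plain lists and no libraries, of Gauss elimination with partial pivoting; the function name gauss_solve and its interface are just my choices for illustration, not anything canonical:

    def gauss_solve(A, b):
        """Solve A x = b; A is an n x n list of rows, b a length-n list."""
        n = len(A)
        A = [row[:] for row in A]  # work on copies; caller's data untouched
        b = b[:]
        for k in range(n):
            # Partial pivoting: bring up the row with the largest |entry|
            # in column k -- this step is behind the numerical stability.
            p = max(range(k, n), key=lambda i: abs(A[i][k]))
            if A[p][k] == 0:
                raise ValueError("matrix is singular")
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
            # Eliminate unknown k from every row below row k.
            for i in range(k + 1, n):
                m = A[i][k] / A[k][k]
                for j in range(k, n):
                    A[i][j] -= m * A[k][j]
                b[i] -= m * b[k]
        # Back substitution on the now upper-triangular system.
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            s = sum(A[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / A[i][i]
        return x

    # 2x + y = 5 and x + 3y = 10 have solution x = 1, y = 3:
    print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))

That's all it is: pick a good pivot row, subtract multiples of it to zero out one unknown at a time, then substitute back up.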
What's amazing are the generalizations, all the way to linear systems (e.g., their ringing) in mechanical engineering, radio astronomy, molecular spectroscopy, frequencies in radio broadcasting, stochastic processes, music, mixing animal feed, linear programming, oil refinery operation optimization, min-cost network flows, non-linear optimization, Fourier theory, Banach spaces, oil prospecting, phased array sonar, radar, seismology, quantum mechanics, yes, error correcting codes, linear ordinary and partial differential equations, ..., and then
Nelson Dunford and
Jacob T. Schwartz,
Linear Operators
Part I:
General Theory,
ISBN 0-470-22605-6,
Interscience,
New York.