
Here's what he is doing. He wants to start with the set of real numbers, intuitively the points on the line, usually denoted by R, maybe typed in some special font.

Then he wants to define, say, addition of real numbers. So, given two real numbers, x and y, that might be equal, he wants to define x + y.

So, here he wants to regard addition, that is, +, as an operation. Then, as is usual for defining operations, he wants an operation to be just a special case of a function. So, he wants to call + a function. So, + will be a function of two variables, say, x and y. With usual function notation we will have

+(x,y) = x + y

The set of all (x,y) is the domain of the function, and the set of all x + y is the range.

So, that defines the function +, except that commonly in pure math we want to be explicit about the domain and range of the function.

For function +, the domain is just the set of all pairs (x,y) with x and y in R. That set is also the set theory Cartesian product of set R with itself, written R x R. So, the domain of + is R x R. The range is just R. Then, to be explicit about the domain and range of function +, we can write

+: R x R --> R

which says that + is a function with domain R x R and range R.
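To make the "operation as a function" idea concrete, here is a minimal sketch in Python -- my illustration, not from his book -- with floats standing in for the reals:

  # A binary operation on the reals is just a function taking a
  # pair (x, y) in R x R to an element of R.
  def plus(x: float, y: float) -> float:
      # The function +: R x R --> R in ordinary function notation.
      return x + y

  # +(x, y) = x + y
  assert plus(2.0, 3.0) == 2.0 + 3.0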

We learned how to add in, what, kindergarten? So, why make this so complicated?

Well, he wants to regard the real numbers as just one example of lots of different algebraic systems, e.g., groups, fields, vector spaces, and much more, with lots of operations and, possibly, more that could be defined. E.g., later in his book he will want to add vectors and matrices, take an inner product of two vectors, and multiply two matrices.
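A rough sketch of how that plays out -- again my own illustration, and the names BinaryOp, Matrix, and matmul are made up for the example. The type of any binary operation on a set S is "function from S x S to S"; the reals under + and square matrices under multiplication are both instances:

  from typing import Callable, TypeVar

  S = TypeVar("S")
  BinaryOp = Callable[[S, S], S]       # the shape of any operation S x S --> S

  plus: BinaryOp[float] = lambda x, y: x + y        # +: R x R --> R

  Matrix = tuple[tuple[float, ...], ...]            # toy n x n matrix type
  def matmul(a: Matrix, b: Matrix) -> Matrix:       # *: M x M --> M
      # The (i, j) entry is the dot product of row i of a and column j of b.
      return tuple(
          tuple(sum(a[i][k] * b[k][j] for k in range(len(b)))
                for j in range(len(b[0])))
          for i in range(len(a))
      )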

So, back to addition on the real numbers, he wants to regard that as just a special case of an operation on an algebraic system.

IMHO there's not much benefit in making adding two real numbers look so complicated.

Whatever he did in that chapter to define addition on the reals, soon he is discussing matrix multiplication with no definition at all -- assuming the reader already understands it, although it is defined and discussed only many pages later in his book.

So, in his notation

+: R x R --> R

and in matrix multiplication, he is using material before he has defined it -- indeed, before he has motivated, explained, exemplified, or indicated the value of it. In good math writing, and in good technical writing more generally, that practice is, in non-technical language, a bummer.

But from the table of contents, it appears that the book has quite a long list of possibly interesting narrow topics. And maybe for the routine material, his proofs and presentation are good -- maybe. I thought enough of the book to keep a copy of the PDF. It's there; if someday I want a discussion of some narrow topic, maybe I'll try his book!

In mathematical writing, the word processing used to be much more work than the mathematics! Now, with TeX and LaTeX -- and I'm assuming the book used one of the two -- the floodgates are open!




And you've summed up almost perfectly everything I hate about how math is taught, how math is discussed, and how ideas about math are communicated, when it really should be one of the most beautiful, insightful, and rewarding subjects of study in the known universe.


Is there a book that explains things in the manner you have? I enjoy math but have trouble reading it.


Most of the best math is not trivial and, thus, usually takes some effort to understand.

There are some good authors of math for, say, calculus, linear algebra, differential equations, advanced calculus, advanced calculus mostly for applications, real analysis, optimization, probability, some topics in stochastic processes, introductory statistics, various more advanced topics in statistics.

Mostly the books are short on motivation and applications, so it is too easy to spend time on material likely not worth the time -- unless you can be sure both to live forever and to remember forever.

For calculus I liked Johnson and Kiokemeister. I taught from Protter and Morrey, and it was easier than J&K. Lots of people liked Thomas.

For linear algebra, I liked E. Nering and, then, P. Halmos, 'Finite-Dimensional Vector Spaces', which really is baby Hilbert space theory. Take Nering seriously -- he was a student of E. Artin at Princeton. His treatment of linear algebra is balanced and polished. One of his editions has some group representation theory in the back (good) and some linear programming (really bad).

A lot of people like the MIT Strang book.

For advanced calculus to help when studying physics, especially electricity and magnetism and engineering, I very much liked

Tom M. Apostol, 'Mathematical Analysis: A Modern Approach to Advanced Calculus', Addison-Wesley, Reading, Massachusetts, 1957.

He has more recent versions, but for physics and engineering I like the 1957 version and don't like the later versions at all.

For ordinary differential equations, I liked

Earl A. Coddington, 'An Introduction to Ordinary Differential Equations', Prentice-Hall, Englewood Cliffs, NJ, 1961.

He makes variation of parameters look really nice -- then you can understand the remark in the old movie 'The Day the Earth Stood Still'. Ordinary differential equations is a huge, old field, and there is some question about how much of it deserves study now. Do notice that for systems of ordinary differential equations you get to apply some linear algebra in cute ways.
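For reference, here is the standard variation-of-parameters formula in LaTeX -- the usual textbook statement, not necessarily Coddington's exact notation. For y'' + p(t) y' + q(t) y = g(t) with independent homogeneous solutions y_1, y_2 and Wronskian W = y_1 y_2' - y_1' y_2, a particular solution is

  \[
    y_p(t) = -\,y_1(t) \int \frac{y_2(t)\, g(t)}{W(t)}\, dt
             + y_2(t) \int \frac{y_1(t)\, g(t)}{W(t)}\, dt .
  \]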

For advanced calculus for applications, there is the old MIT Hildebrand -- he knows what he is talking about, is easy enough to read, and is a good place to go if you need one of his topics.

In recent decades, the pure math departments have wanted to teach advanced calculus as the theorems and proofs behind freshman calculus. So there is Rudin, 'Principles of Mathematical Analysis', third edition (not the first two; maybe a later edition if there is one). Here's what is going on: He wants to develop the Riemann integral, which is the one in freshman calculus. For that he wants to integrate over a closed interval on the real line, that is, some [a,b], which for real numbers a <= b is the set of all real numbers x such that a <= x <= b. Rudin will argue that [a,b] is a compact subset of the reals. So, the first chapters are big on compact sets. Then he talks about functions that are continuous and then ones that are uniformly continuous -- on a compact set, a continuous function is uniformly continuous. With uniform continuity, the existence of the Riemann integral follows right away.

Later he does some infinite sequences and series and then uses these for careful treatments of some important results: the exponential function, the number e, the sine and cosine, etc.

Later still he does integration of functions of several variables on manifolds, for the Stokes theorem and the related divergence theorem -- a fully careful treatment of the theorems used in E&M. He does the Cartan exterior algebra. What is going on is that he wants to integrate a function g: M --> R where M is a manifold, that is, the range of some function f from some box, triangle, etc. into the space containing M. For this you need the change of variables formula for integration in several variables, and that involves the determinant of a square matrix. This integration is a multidimensional version of the line integral, where the direction of integration is important -- the exterior algebra is the multidimensional version of that. You can see it again in some treatments of general relativity in physics.
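Here is a quick numeric illustration of that story -- mine, not Rudin's: for a continuous f on a compact interval [a,b], the tagged Riemann sums converge as the mesh shrinks, no matter which tag points are used, and that is exactly what uniform continuity buys.

  import math

  def riemann_sum(f, a, b, n, tag=0.5):
      # Tagged Riemann sum over n equal subintervals; tag in [0, 1]
      # picks the sample point within each subinterval.
      h = (b - a) / n
      return sum(f(a + (i + tag) * h) * h for i in range(n))

  f = math.sin
  exact = 1.0 - math.cos(1.0)      # the integral of sin over [0, 1]
  for n in (10, 100, 1000):
      left  = riemann_sum(f, 0.0, 1.0, n, tag=0.0)
      right = riemann_sum(f, 0.0, 1.0, n, tag=1.0)
      print(n, abs(left - exact), abs(right - exact))   # both gaps -> 0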

I like Rudin's third edition: Once you know what the heck he is driving at and how he is getting there, say, as above, his high precision is welcome.

For statistics, I suggest using some popular elementary book as a start. Then learn probability really well and from then on study particular topics in statistics as needed. The current directions in machine learning promise to make lots of particular topics important.

For a first book on statistics, consider

George W. Snedecor and William G. Cochran, 'Statistical Methods, Sixth Edition', ISBN 0-8138-1560-6, The Iowa State University Press, Ames, Iowa, 1971.

My wife did really well with that. So, get a good start on statistics and, then, get to learn some analysis of variance (experimental design), an underrated topic.

For a second book on statistics, consider

Alexander M. Mood, Franklin A. Graybill, and Duane C. Boes, 'Introduction to the Theory of Statistics, Third Edition', McGraw-Hill, New York, 1974.

Here, go quickly and get only the high points and don't expect the math to be very good -- in places it's pretty bad.

For regression analysis and linear multivariate statistics more generally, there are several books: Maurice M. Tatsuoka; Donald F. Morrison; William W. Cooley and Paul R. Lohnes; N. R. Draper and H. Smith. So, in particular, get enough to understand that regression is a perpendicular projection and, thus, get the Pythagorean theorem again.
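A short numpy sketch of that projection view -- my illustration, not from any of those books: least squares projects y onto the column space of the design matrix X, the residual is perpendicular to the fit, and the squared lengths obey the Pythagorean theorem.

  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.standard_normal((50, 3))    # design matrix
  y = rng.standard_normal(50)         # response

  beta, *_ = np.linalg.lstsq(X, y, rcond=None)
  y_hat = X @ beta                    # projection of y onto col(X)
  r = y - y_hat                       # residual, perpendicular to col(X)

  print(np.allclose(X.T @ r, 0.0))                      # orthogonality
  print(np.allclose(y @ y, y_hat @ y_hat + r @ r))      # Pythagoras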

For more on such statistics aimed at machine learning, get the Breiman et al. CART book, 'Classification and Regression Trees' -- maybe much of the start of ML.

With that much statistics, you will have seen a lot of applied probability and may be ready for the real stuff. For that, you need measure theory, e.g., the first half of Rudin, 'Real and Complex Analysis', or Royden, 'Real Analysis'. Then read Breiman, 'Probability'. After that you might read some of Chung, Loeve, Neveu, and maybe more. Then return to applications, including statistics, with a really solid foundation in probability, random variables, the classic limit results, and much more. Then you can read and/or write lots of advanced topics in statistics.

For optimization, a similar review is possible, but it's getting late.


Thank you very much for the list. If you care to give any further reviews, I would greatly appreciate it. Your list as is will take me years anyway.


Here you go: ℝ



