I've only read small amounts of this, just to get a taster. My impression is that it is readable, careful, and (so far) accurate.
It is readable enough that there is a danger that a reader might read it from cover to cover and believe that they now understand it all. But like programming, mathematics is not a spectator sport - you need to engage it in hand-to-hand contact to get the most out of it.
Reading a book about programming without trying to write programs leaves you with only a superficial understanding, and potentially no extra ability. However, reading a book about programming and writing programs is immensely valuable. And so it is with proofs, and mathematics in general.
So don't just idly read this book and believe you now understand proofs - engage with it. Try to find errors in it, do the exercises, go back and re-read - you will get more the second time through. And based on the quick dips I've had, it will be worth it.
I am planning on using "How to Prove It" by Daniel J. Velleman to prepare for Tom Apostol's Calculus Vol 1. How does "Book of Proof" compare to "How to Prove It"?
I read this book cover to cover and did all of the exercises earlier this year. I didn't take any upper math in college that involved proofs, and this book has really helped me get over my fear about it.
"Normal" math is usually not done using any formal logic at all. You can try, but you'll enter a world of pain regardless of the formal system you choose. Logic and foundational theories are designed to remain out of the way and relatively hidden. It's good to know that they're there, but you don't actually use them in the day-to-day (unless you decide that logic is your field of expertise, or you're interested in the sisyphean task of formalizing proofs).
"Book of Proof" is about basic objects in higher mathematics and writing proofs (as a human). There are only about 30 pages covering the basics of mathematical logic. So, I imagine Common Logic would not be very related to this book, since CL is much more advanced than the book's aims.
If you are referring to translating the book's theorems for an automated theorem prover, the book is not nearly pedantic enough to effectively do this from the ground up.
Great to see Virginia Commonwealth University (VCU) highlighted for something other than NCAA basketball. There are some real gems at this university that oftentimes get overshadowed by the numerous other universities also residing in Virginia.
Yes, but only true in the sense that anything computable can be computed by a Turing machine. Sets are universal building blocks, but most of the time you want a higher-level abstraction.
It isn't so simple, unfortunately. The idea behind set theory was to describe some universal building blocks of mathematics. The problem is that - unlike in the case of computable functions - such a universal building block can't exist.
There's a lot of mathematics which can't be formalized in classical first-order logic with the ZFC axioms. For most things you can get around this by adding enough Grothendieck universes, but this is still incomplete and always will be.
According to whom? I don't think that's a very good analogy. I think it's better to say type theory is just another language in a 'family of assembly languages'. Its superiority is very debatable, and it is not commonly accepted as a foundation.
I think it's a nice analogy. It's a bit tongue in cheek, but it does give you the right idea when you compare it to programming languages.
Set theory is a reductionistic system. It's supposed to give a foundation which is as "small" as possible.
In set theory you start with almost nothing and you have to encode all useful concepts into sets before you can start proving/programming.
Compare this to Turing machines, where you start with a very primitive model of computation. The first step in most treatments of Turing machines is similar to set theory: you show that some useful concepts (associative memory, high-level control flow, etc.) can be encoded into Turing machines. You then use this encoding to construct arbitrary programs.
Type theory starts out with a functional programming language and you derive new concepts from free constructions. In type theory you shouldn't be encoding things at all. You should be describing concepts (~adding an API) and possibly extending your model (~the compiler).
As analogies go, I'd say that this one is pretty accurate.
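To make the contrast concrete, here is a small sketch in Lean 4 (my own illustrative example, not from the book or the thread). In set theory the naturals are *encoded*, e.g. von Neumann's 0 = ∅, 1 = {∅}, 2 = {∅, {∅}}; in type theory they are *described* directly as an inductive type, and operations are defined by recursion on that structure with no encoding step:

```lean
-- Type theory: introduce the naturals directly as an inductive type.
-- (MyNat is a hypothetical name used here to avoid clashing with Lean's Nat.)
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

-- Addition is defined by structural recursion on the second argument;
-- there is no prior encoding of "number" into some lower-level object.
def MyNat.add : MyNat → MyNat → MyNat
  | n, .zero   => n
  | n, .succ m => .succ (MyNat.add n m)
```

The "adding an API" view in the comment above corresponds to declaring the type and its operations; the set-theoretic route would instead first build ∅, {∅}, {∅, {∅}}, … and then prove that this encoding behaves like the naturals.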
This is a great resource! I would have loved to have had it in undergrad. I ended up reading a semi-expensive used copy of 100% Mathematical Proof by Rowan Garnier and John Taylor for my Abstract Algebra class.
Excellent book. The chapter on cardinality gives a well-written, clear treatment of this important concept (e.g. # of elements in countably vs. uncountably infinite sets).
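For anyone who hasn't met the countable/uncountable distinction: the standard illustration (Cantor's diagonal argument, which the book also covers) is that any attempted list f(1), f(2), … of the reals in (0, 1) must miss some number. A sketch in LaTeX:

```latex
% Cantor's diagonal argument: no surjection f : \mathbb{N} \to (0,1).
% Given any listing f(1), f(2), \dots, build a real x whose n-th digit
% differs from the n-th digit of f(n); then x \neq f(n) for every n.
\[
  x = 0.d_1 d_2 d_3 \dots
  \quad\text{where}\quad
  d_n =
  \begin{cases}
    5 & \text{if the $n$-th digit of } f(n) \text{ is not } 5,\\
    4 & \text{otherwise.}
  \end{cases}
\]
```

So $\mathbb{N}$ and $(0,1)$ have different cardinalities even though both are infinite, which is exactly the countable-vs-uncountable distinction the chapter develops.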
Good explanations! As I recall this stuff is usually introduced as part of classes along the boundary between basic and higher math, so having a mini-book about it seems a bit unconventional -- but it's a really good idea. Nice straightforward presentation. Very readable.