This fall I will be teaching the required "Discrete Math for CS" course to about fifty students at the University of South Carolina. Previously I used Epp's book [1], which in my opinion is an outstanding book but regrettably costs $280.44. Many of our students are working minimum-wage jobs to make ends meet, and I don't want to make them pay so much if I can at all help it.
Lucky I saw this!!
I do have one reservation though -- many of our students come in with a weaker mathematical background than MIT students; for example, we spent several weeks doing proofs by induction (and no other kinds of proofs), and this text doesn't seem to offer a couple of weeks' worth of examples.
I think I'll probably go with this and supplement as needed. Really it looks quite wonderful. (And hell, the book seems to be open source, which would mean that I could potentially write supplementary material directly into the book and make my version available publicly as well.)
This thread seems like a particularly good place to solicit advice: experiences with this book or others, what you wished you'd learned in your own undergraduate course on this subject, etc. I've taught this course once before -- I feel I did quite well but I still have room to improve. Thanks!
You're the type of professor everyone wishes they had. It's phenomenal that you're looking out for your students like this -- all the more so if you end up writing supplementary material for the book.
Is it an American thing to force your students to buy textbooks? In Britain (at least at the universities that I know about) there tend to be two or three recommended texts, of which the library will have several copies, and they're meant for reference when you get stuck rather than for learning the entire course.
Lots of instructors assign from books without reproducing the problems for students without books. Hence, it's typically required to have a book for a class. US University libraries will typically have copies of all textbooks used in any university class. Whether or not they have enough copies at any given time is another issue.
Also whether or not they have the updated edition with all questions...
I ran into that problem with my Operating Systems course this semester, I was just going to scan the problems I needed, but then found out they were different since the book was 4 editions behind.
It has been my experience that text books are available for checkout but only for two to three hours at a time. That way you can study there at the library and do your homework then return the book.
In Denmark, at least at the uni where I went in the 1990s, courses would use certain books which we were expected to purchase. Also, the university offered access to photocopiers at reasonable rates. Using photocopied books was not officially encouraged, for obvious reasons, but I don't think anyone ever got into trouble on that account. Personally I bought my books.
Yes, in the USA you generally have to buy the book. In fact, because our teachers tend to get paid so little here, sometimes they will write a book and then require the students to buy that book for the class. While this kind of sucks, I also understand them doing it because they get paid so poorly. The one thing I really hated, though, was the times we were required to buy a book and then barely used it. Which was often.
I got through 5 years of Engineering without buying a book. Between a bunch of copies of newer versions, some of older ones, some in English (this was in a Portuguese-speaking country) and some that couldn't be checked out from the library, there was always a copy available.
That was one of the high points of the college I went to, and I sure as hell did not take it for granted.
I had a fair number of classes I simply did not attend because the material was basically taught from the book, verbatim. Class was only needed if you didn't understand the book. I showed up to turn in the homework, get assigned the next set, and for exams.
At UC Irvine, almost all CS classes have either: free online textbooks, just suggested textbooks to help read along with the class, or a PDF that is widely distributed between students themselves. Most students end up never buying a $200 CS textbook
That page includes an email contact for the course (6042-gradesmaster@mit.edu). If you're planning to release an edit, I'd reach out as soon as possible to give them time to get back to you on whether they'd be interested in working with you and on the best way to do so. Posting it randomly somewhere on the net would likely do less good, in my opinion, but it would be a huge help if done with them.
The textbook prices are ridiculous. I buy a lot of textbooks for personal library/study, and I have found for 'classic' texts that have many editions, going back 2-3 revisions can be super cheap. I have purchased many used from Amazon in great condition for $5-10 that cost $100-200+ new in the latest edition. This might not work for a class, but it's been indispensable for my personal use.
I'm not very mathematical, and I was much less mathematical when I worked through that book (calc 1, not calc 2 yet, etc.).
I found it highly accessible and fun to work through. The concepts are all well explained, and I don't recall anything that would be out of reach for someone with some probability and foundations in classical logic. If they are at discrete math, this book is well within their reach.
It doesn't teach hard math per se, but how to think in a mathematical way, showing how to break apparently diabolical problems into manageable pieces, and how to reconsider the perspective of the problem, if that makes sense. The book is among the most mind-expanding books I've ever read.
The book is terrific, and it's wonderful that it is available under a CC license. Most students found it accessible. We covered about Chapters 1-8 (and some things not in the book). This is only the first part of the book, so a much slower pace than the MIT course. Some of this is to have more time for additional background and to go into things in depth, but it's also that our courses are scheduled with about 2/3 the amount of time per class that an MIT course gets.
I appreciate your practical comments. This in particular caught my eye:
>"Previously I used Epp's book [1] which in my opinion is an outstanding book but regrettably is $280.44."
This is extortionate. What is the justification for a book being priced at nearly $300? This is one book of how many a student will need that semester? Are schools getting a piece of this as well? Is this why such a racket is permitted?
Sorry, I know it's tangential to your original comment, but I would be curious to hear anyone's feedback, especially from someone in Europe or elsewhere. I wonder if this pricing is specific to the U.S.
> This is extortionate. What is the justification for a book being priced at nearly $300? This is one book of how many a student will need that semester? Are schools getting a piece of this as well? Is this why such a racket is permitted?
That is a bit high for textbooks, most textbooks are usually "only" in the realm of $80-160. That said, math textbooks are usually far more expensive than others.
The excuse given for high prices is that there is a strong used book market for college textbooks, as most students sell off their textbooks at the end of the semester, so from the point of view of the publisher, they see only one sale in 5-20. Of course, publishers discovered that the "cure" for this is to release new editions of textbooks with minor updates (most importantly, reorder the questions in the question banks!) every few years to kill off the used textbook market and haven't lowered their exorbitant prices to compensate.
As an undergrad, I did have one semester where my total book costs were about $1000. Most of the semesters, they were more like $500.
Oh, and the prices usually go to lining the publishers' pockets. Not the authors', not the schools'.
I spent a little time working at Pearson on a textbook project, and yeah, they're pretty profitable, but the up-front costs are enormous. The production budget for the biology book I worked on was $10 million. And that was for a book that would be widely used at the high-school level.
For a math book like this, I'd guess they spent more like a million getting it done. I wouldn't be surprised if it was $50-100K just for the typesetting.
There is something more than just high production costs going on, though, for math books. Consider Apostol's "Calculus". Volume I new is $150-270 on Amazon. Volume II new is $160-270.
These were first published around 1961. A second edition of Volume I was published in 1967, and a second edition of Volume II in 1969. There have been no further editions. The schools that use Apostol today are still using those late '60s second editions.
They were about $20 per volume when I bought them in 1977, which is equivalent to about $80 per volume in today's money.
A book that cost $20 in 1977 and was made to the same standards today--same paper and binding quality, etc.--would probably have to cost at least $100 retail today. Unfortunately, costs for paper and printing have outpaced inflation generally. Most trade books now are printed very cheaply--the paper and binding are crap.
I estimate the "fair" price of a textbook to be around $50-80; math textbooks (particularly calculus) I consider naturally more expensive, since they tend to cover more material. Your estimate of decades-old textbooks costing about $80 in today's money seems to vindicate that viewpoint.
>"The excuse given for high prices is that there is a strong used book market for college textbooks, as most students sell off their textbooks at the end of the semester, so from the point of view of the publisher, they see only one sale in 5-20."
Right, and couldn't they make up for this by issuing e-textbooks at lower prices that would drive much higher sales volume?
Also, I believe the bookstores themselves are in on this farce, as a large percentage of campus bookstores are not run by the schools but by private companies such as Barnes and Noble and Follet [1]. So they are making out in this scheme as well.
Perhaps those of us who seek to make money from entrepreneurial activities should instead wonder at the business skills of people who are able to extract $300 from their customers for a book?
> A central tenet of capitalism is a competitive market though
It's true that a lot of the defense of capitalism that was ginned up (long) after the system was described and named by its critics invokes all kinds of hokum about markets that are both free and competitive. Not only does that have nothing to do with anything in the history of capitalism, it isn't even all that internally consistent, given the tendency toward monopoly in unregulated markets where producers experience economies of scale (and in some other common cases).
But this is a (deeply flawed) rationalization of capitalism, not its foundation.
> The market will bear it and the market has no choice to bear it are two different things.
With textbooks, there is a choice: go without (either taking the class anyway or not.)
That your perceived cost of going without is higher than the cost of paying for the book is, arguably, a sign that the book is priced correctly for value. You seem to think that the book "ought" to be priced by cost to produce, as it would be in a perfectly competitive market, but that's very often not a feature of real markets. The pricing is based on conditions in the market, which is very much a real -- if very much not an ideal -- market.
Do you actually believe the things you are writing? No, going without a text book is not actually a viable choice if you want to pass a class is it? Are you just trolling?
>"You seem to think that the book "ought" to be priced by cost to produce as they would be in a perfectly competitive market"
Nowhere did I articulate that sentiment at all. There is plenty of room to make a tidy profit without resorting to punitive pricing, even in a captive market.
I took a course that used that textbook last semester, and I got the international edition for around $30; it was the exact same book with fewer problems.
It really is a shame that this is a rational solution.
University text books should be material to get back to later in life, when confronted with a related problem. Part of university education should be a complete book shelf of well-understood books.
This just exemplifies the new nature of university education: an ephemeral experience where the most durable result is debt/cynicism.
I'd advise people just to get an iPad Pro and get electronic versions of these books. Makes for an eternal and ever growing bookshelf, and has been much cheaper to maintain than my previous physical library.
That's the received wisdom, yeah. The party line etc etc. I am not so sure though.
One has to factor in the possibility and ease of losing the device, or the rights to content backed up in the cloud. Way too many people have the ability to intervene between me and the content. I appreciate the convenience of not having to move a shit ton of bleached and pulped wood, but then I have to worry about DRM, some raid in New Zealand, government-ordered deactivation of the device, or reading something the powers that be may not be too pleased about.
I would like to read what the fuck I want with less power on others to influence that.
I find it much easier to recall details of material I've read in a physical textbook rather than an e-book. The unique cover, weight, texture, etc. of a physical volume provides more context cues that flag the experience of reading as noteworthy, and thus as something that tends to stick out relative to other memories.
I'd agree with you if a book were $50 and not $250 each. I really cannot justify $250 x 4 (4 classes per quarter/semester) every semester just for the sake of collecting books that I will, to be honest, never look at again. If I have a certain problem I need to revisit, I can find everything online.
There are a few exceptions, of course, which is why I said that buying books that one wants to keep for the future (and also reference it in the future) is fine.
But should I really buy books for general education that I have to take and has nothing to do with my major?
I held onto my books to have as references. Of course, this was back when the ubiquitous Internet and "instant access to everything" was some years away.
Why should I be reduced, after leaving an education, to solely what I can remember in my head and what my current circumstances keep refreshed in my mind and memory? Education was an enormous investment not just of money but also of time and effort. It makes sense to me to keep the resources I used at hand; it helps keep my access to that education at hand.
Seen that way, while book prices are ridiculous, that expense as compared to the entire expense not just in money but in time and effort, is relatively small. It's an investment I've made against the future, where the best recollection and use of my education may be paramount.
And... those prices are simply unfair for a system that purports to provide its students tools for life -- not just formulae, but ways of thinking and being. To then increase the burden on those students of taking with them the references they used and might wish to retain, as part of retaining that education... it seems that their actions in this matter oppose their rhetoric.
Then again, even when prices weren't so high, I saw a lot of my fellow students simply discarding their books -- at the end of semester, not just after graduation. Many people do "move on" and live "in the present."
Except, those people then seem to be the ones who can't recall what they learned, and who don't do as thorough and good a job in their work.
Living in the present, without the full benefit of history. And reacting based upon a more or less vague impression, rather than detailed knowledge.
What, really, do our institutions want us to retain? How do they want us to live our lives. As demonstrated through their actions -- e.g. these book prices -- rather than their rhetoric?
Anyway...
As for electronic copies. I find I read a lot better from printed pages than from screen. And, as others have pointed out, the "legitimate" electronic versions all too often come with electronic leases -- leashes -- with access deniable purposefully or inadvertently at any point in the future.
My Discrete Math book is still as accessible to me, today -- with any particular personal notes of interest and focus I may have added -- as it was when I bought it. I had to pay for it, and I've had to schlep it around. But I've also had to do that with my education, taking up premium space in my gray matter. I paid a lot more for the education; I hope and believe keeping the book handy helps me retain that larger value.
P.S. And I still find that the good books provide me more useful and valuable access to their information than an ever more cluttered and noisy Internet -- for all that there are many gems -- resources and contributors -- on it. That is, if you can even find them these days.
I saw some information the other day suggesting that some books come with access codes for online material (tests/exams, etc.) that is required for the course.
The other perspective is that if tuition is many thousands of dollars a year, then the cost may be not that dramatic.
Yeah, when you need an access code you are out of luck. Thankfully many professors don't do that.
In a science major, if you take 4 classes every semester and each book costs $250, that's $1,000 right there every semester. That money could go toward rent and other higher-priority things.
Honest question, why would you spend weeks doing proof by induction?
Looking it up, I think I did that when I was 16, but I've never used it while programming. I can't see a single benefit to knowing it for programming. Been programming professionally for 12 years now. What am I missing?
> Looking it up, I think I did that when I was 16, but I've never used it while programming.
Any reasoning about loops or recursion requires an implicit understanding of proof by induction. Making it formally explicit seems like a perfectly fine idea.
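To make that concrete, here's a tiny sketch of my own (not from any course) showing how the correctness argument for a recursive function is literally a proof by induction:

```python
def sum_first(n):
    """Return 0 + 1 + ... + n for n >= 0.

    The correctness argument is a proof by induction:
      base case:      sum_first(0) == 0, which equals 0*(0+1)//2
      inductive step: if sum_first(n-1) == (n-1)*n // 2, then
                      sum_first(n) == sum_first(n-1) + n == n*(n+1) // 2
    """
    if n == 0:
        return 0                    # base case
    return sum_first(n - 1) + n     # inductive step

assert sum_first(10) == 10 * 11 // 2  # 55
```

The structure of the function and the structure of the proof are the same thing: a base case plus a step that assumes the smaller case works.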
In some sorts of code, you can get a lot of mileage from "loop invariants" and "loop variants". Understanding these is more or less the same as understanding proof by induction.
Whether you are missing anything depends on whether you ever write the sort of code that benefits from this. (Even if you do, you still might not be missing anything. You might be comfortable with loop invariants but not have connected them with proof by induction. Or you might be good at reasoning about these things in other ways that don't involve invariants.)
(You can stop reading here if you're already very familiar with loop variants and invariants.)
Here's the sort of thing I mean by loop variants/invariants. Suppose you're writing a binary search. (Don't write a binary search. Use someone else's that's already had the bugs taken out.) This is basically pretty simple but surprisingly easy to get subtly wrong, which means it's the sort of code it may be useful to write and document in such a way that you have an informal proof of its correctness. Like this:
function find(a, x):
    # find x in sorted array a; return index i such that a[i]==x
    # or -1 if no such index exists. Takes time at most
    # proportional to log(#elements).
    lo, hi = a.index_bounds()
    while lo < hi:
        # INVARIANT: min index <= lo < hi <= max index+1
        # INVARIANT: if x is in the array then lo <= its index < hi
        # VARIANT: D := hi-lo reduces to at most D/2 next iteration
        mid = lo + floor((hi-lo)/2)  # may equal lo, can't equal hi
        if a[mid] == x: return mid   # done!
        if a[mid] < x:
            lo = mid+1
        else:
            hi = mid
        # INVARIANT: if x is in the array then lo <= its index < hi
    # now lo >= hi which means there's nowhere for x to live
    return -1
The above is pseudocode in no particular language. Is it correct? Those variants and invariants (1) divide that question in two and (2) provide a strategy for answering the second half of the question. First sub-question: if the (in)variants always hold, does that imply that the code is correct? Second sub-question: do they always hold? (Strategy for second sub-question: proof by induction on array size and/or number of iterations.)
The first invariant implies that since mid is always between lo (inclusive) and hi (exclusive) it's safe to access a[mid]. The second, given that it holds at both start and end of the loop body, implies that if we exit the loop then indeed x isn't in the array. The only ways to leave the function are via the return in the middle, which only happens when we have explicitly found x, and via the return at the end, which only happens when x is known to be absent; therefore, if we return anything, we return the right thing. And the variant says that the quantity D, which starts by equalling the number of elements in the array, is reduced by a factor of at least 2 at each step; as it's always a (strictly) positive integer, this implies that the number of steps is at most log_2(n), so the function does return.
(Note that that last bit is more or less a proof by induction on array size that the function always returns.)
So if the invariants hold then the function does the right thing in the right amount of time.
Proving that the invariants hold is the induction-like bit. The first invariant holds at the start (since lo,hi start out being exactly min index and max index + 1). Each time around the loop we replace either lo or hi with something in between the two, so we still have min <= lo <= hi <= max+1; and we will exit the loop unless in fact lo<hi. So the first invariant always holds.
The second invariant holds at the start (since lo,hi are bounds on the entire array). Because the array is sorted, our adjustments to lo/hi make this invariant true again for the next time around the loop. So the second invariant holds at the end of the loop -- and of course therefore at the start of the next iteration.
(Note that those were both proofs by induction, though I wasn't super-explicit about the fact.)
The variant works because if we write D = hi-lo and E = floor(D/2) then the new value of D is, depending on which branch of that if we take, either E or D-E-1; note that D-E-1<=E; so new-D <= E <= D/2 as claimed.
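If you don't feel like doing that little calculation by hand, it's easy to check mechanically for every small D (my own sanity check, not part of the original argument; the function name is mine):

```python
from math import floor

def next_interval_sizes(D):
    """The two possible next values of D = hi - lo for the search above:
    E = floor(D/2)  if we take the hi = mid branch,
    D - E - 1       if we take the lo = mid + 1 branch."""
    E = floor(D / 2)
    return E, D - E - 1

# Both successors are at most D/2 -- exactly the VARIANT claim.
assert all(max(next_interval_sizes(D)) <= D / 2 for D in range(1, 10_000))
```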
It's probably pretty clear what sort of code this is useful for: highly "algorithmic" code that's basically doing something fairly simple but that's tricky to think about and easy to get wrong. If you're writing a language's standard library, or implementing some sort of iterative mathematical algorithm, then you're quite likely to find it useful. The less like that your code is, the less often this technique will be useful to you. (Fairly extreme example: if you are writing a CRUD webapp then it is unlikely that anything you do will benefit from this.)
(Slightly off topic, but maybe still interesting.) I used to write binary searches like yours, with two tests per iteration:
while lo < hi:
    mid = (lo + hi) // 2
    if a[mid] == x:
        return mid
    elif a[mid] < x:
        lo = mid + 1
    else:
        hi = mid
return -1
But then I discovered that there's an alternative approach which has only one test per iteration:
while lo < hi:
    mid = (lo + hi) // 2
    if a[mid] < x:
        lo = mid + 1
    else:
        hi = mid
if lo < len(a) and a[lo] == x:
    return lo
else:
    return -1
If comparisons are expensive compared to other operations then this is twice as fast as the variant with two comparisons per iteration. (If comparisons are cheap, as they are if you are searching an array of numbers in a compiled language, then it doesn't make much difference.)
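In case anyone wants to play with it, here's the single-comparison version as actual runnable Python (the function name is mine). One subtlety worth noting: unlike the early-exit version, this one always lands on the leftmost matching index.

```python
def bsearch_one_cmp(a, x):
    """Binary search with one element comparison per iteration.
    Returns the leftmost index i with a[i] == x, or -1 if x is absent."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < x:        # the single comparison per iteration
            lo = mid + 1
        else:
            hi = mid
    # lo is now the leftmost position where x could be
    if lo < len(a) and a[lo] == x:
        return lo
    return -1

primes = [2, 3, 5, 7, 11, 13]
assert bsearch_one_cmp(primes, 7) == 3
assert bsearch_one_cmp(primes, 4) == -1
assert bsearch_one_cmp([5, 5, 5], 5) == 0  # leftmost occurrence
```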
I'm afraid you're not very good at explaining yourself, I've no idea what point you're trying to make.
Sorry to burst your bubble, but any moderately complex LOB CRUD app usually has a large number of far more complex algos than a binary search. Your off-hand comment shows a total ignorance of what's involved in writing a LOB CRUD web app.
In my experience, algos are the really easy part of programming. The hard part is managing complexity.
> In my experience, algos are the really easy part of programming.
His point is that when you design an algorithm you need to be able to reason about why it works.
Based on the rest of your comments in this thread, it sounds to me like you have something against the mathematical background of CS? Writing code is not the same thing as doing computer science. Plenty of people are perfectly happy doing the former, but CS as an academic discipline is rooted in discrete math so any school that teaches it should also teach the background.
Ah, the old "but it's not a programming degree, it's about theory". I thought we were past that these days.
Notice how it took you just one sentence to summarise the few hundred words he wrote.
We've a fairly big problem in this industry that "qualified" CS graduates don't actually signal anything about their suitability for the profession. A significant chunk can't even fizzbuzz.
I'd suggest teaching mathematical models instead of showing loops and recursion in practical code is a large chunk of the problem.
> it took you one sentence to summarise the few hundred words he wrote.
Oh, come on. What sornaensis's one sentence summarizes is ... the one sentence at the start of my comment. It's true that his is shorter; it also gives less information.
(For the avoidance of doubt, that isn't a problem. Since you declared yourself unable to understand what I wrote, it's fair enough to try to simplify it.)
The rest of what I wrote was (1) an answer to your subsidiary question "What am I missing?" plus (2) an explanation of what I was talking about, in case it was stuff you weren't familiar with.
(Of course if I'd known you'd respond as unpleasantly as you did, I wouldn't have bothered trying to be helpful. But at the time I thought you might well be asking a sincere question rather than just wanting to vent about how out of touch those hoity-toity theoreticians are.)
I don't know how you came to the conclusion that the quality of CS graduates is poor, or that the cause of this perceived lack of quality is somehow courses focusing on theory. In my experience very little in a run-of-the-mill CS undergrad program is dedicated to theory, so I might come to the opposite conclusion: too many people are focused on learning how to write code with some JavaScript framework and not on how to do computer science. So people miss important patterns and concepts because no one explained to them why they are important. Patterns you have spent years coming to understand intuitively, perhaps.
(It looks like you've been downvoted a lot. For what it's worth, it wasn't me.)
I didn't claim that CRUD apps don't have complexity in them. I claimed that CRUD apps don't typically have the sort of thing in them for which this kind of small-scale semi-formal reasoning is useful. I am happy to stand by that claim; do you actually disagree with it?
I get the impression that you think I was being sniffy about CRUD apps. I wasn't. In particular, (1) I don't think CRUD apps are low-value. There are plenty of CRUD apps that add much more value to the world than almost any piece of code of the sort that invariants are super-helpful for. And (2) I don't think that CRUD apps are easy. Well, doubtless some are, but as you say they commonly have huge amounts of complication in them, of a sort that isn't amenable to the kind of formalized reasoning I was talking about.
To answer the implied question in your first sentence: I wasn't trying to make a point, I was giving an answer to a question you asked, namely why mathematical induction might be important for programmers.
Briefer and more explicit version of that answer: For people writing certain kinds of code, understanding mathematical induction makes it easier to reason about that code via techniques such as loop invariants, which makes it easier to make that code bullet-proof. There are many other kinds of code for which nothing closely related to mathematical induction is of any value at all.
(And, to reiterate lest I set the bee in your bonnet buzzing again: Whether a kind of code benefits from this sort of technique has basically nothing to do with how important it is, or how difficult it is to write overall, or what anyone should think of the people writing it.)
If it's so useful for "truly" understanding it, why do I have to show so many CS-educated juniors how to use recursion? Why do I have to point out that they should use it instead of doing crazy nested loops or other stupid solutions to a problem simply solved using recursion?
Given that I obviously don't "truly" understand it, having never done a CS degree.
I'd posit that most CS students don't truly understand recursion, they just vaguely know the theoretical basis behind a practical skill they have no experience in.
eh, I posit that you already understand induction and just don't know that you do.
"So many CS educated juniors" being uncomfortable with recursion is not an argument against understanding mathematical induction. Who says they understood induction?
I don't know which languages you and your CS educated juniors ("juniors") use.
While "juniors" = employees subordinate to you & "so many "juniors"" = high turnover & you = positive senior employee & college educated graduates = smart:
"posit that most CS students don't truly understand recursion" = False
'''Smart, being distinct from intelligent, indicates a propensity to make decisions that maximize benefit for the person described as smart. A smart CS student would work for a company that valued intellectual capital instead of a company with high turnover.'''
While "juniors" = employees subordinate to you & "so many "juniors"" = hyperbole & you = positive senior employee:
"posit that most CS students don't truly understand recursion" = False
'''It is dubious that more than 171.5k CS graduates worked subordinate to you from 2004-2010, or that an unbiased sample of CS graduates from 2004-2010 worked subordinate to you (see previous While "loop") [1].'''
The proof that the previous comment's posit is not provable follows by induction.
Note_0: While loops can use "&" or "and" operators in some languages without nested loops (unless the programmer chooses to implement a break line) to achieve much of the effect of recursive functions. While statements, as implemented in the example above, offer nearly as much access to the input stack as recursive functions; about equal risk of an infinite loop; and don't risk stack overflow.
Note_1: I have a bachelors in Math. So, I was likely taught a more substantial "theoretical basis," and less "practical skill" than the curriculum most CS graduates were taught.
But I have to ask, if you have a bachelors in Maths, why do you not know about sample sizes? Or about the flaws in assuming perfect knowledge of actors in a system?
I'm currently taking a discrete mathematics class that uses Applied Discrete Structures by Alan Doerr and Kenneth Levasseur (http://faculty.uml.edu/klevasseur/ads2/). It's freely available, though I recommend steering away from the PDF on their site, as it's got quite a few errors that are corrected in the source on Github.
This document is over 900 pages. How long are you supposed to take to read and understand all of this? And is this really all necessary?
On the surface this looks like an insurmountable task with questionable benefits. Don't get me wrong, but in the past 6 years of casual and professional programming, I've needed only a basic understanding of math, the most difficult thing being collision detection in games, and that includes doing RSA cryptography by hand. This book starts off with proofs, something I've never had in school and which always seemed to me to belong in scientific math papers, not in the practical life of someone doing computer science.
I don't mean to criticize the document or math in general, I would genuinely like to understand what 900 pages of this is going to bring me, especially when it starts with something I've never needed or seen outside of theoretical math discussions.
In professional programming, most of the time, system design is the most crucial part and uses less of this mathematics. These 900 pages are not insurmountable: a discrete mathematics course for first-year computer science students would cover a good 70% of it, and such a course takes only 3 months at 1/5 of a student's workload.
The benefit of it is hardly questionable. Its use is apparent when you take a Design and Analysis of Algorithms course like https://www.youtube.com/watch?v=JPyuH4qXLZ0. A good example of using knowledge from the PDF is analysing the expected runtime of a hash table, which turns out to be Θ(1) in the average case. Good analysis of algorithms inspires better design in general.
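To make that Θ(1) claim concrete, here's a small simulation (my own sketch, not taken from the book or the linked course): if n keys are spread uniformly over m = n/α buckets of a chained hash table, a successful lookup touches about 1 + α/2 chain entries on average, regardless of n.

```python
import random

def average_probes(n_keys, load_factor=0.75, trials=20):
    """Estimate expected probes for a successful lookup in a chained hash table.

    With n keys in m = n / load_factor buckets under uniform hashing, the
    average chain length is the load factor, so a lookup inspects a constant
    number of entries on average -- the Theta(1) average case.
    """
    m = int(n_keys / load_factor)
    total = 0.0
    for _ in range(trials):
        buckets = [0] * m
        for _ in range(n_keys):
            buckets[random.randrange(m)] += 1
        # A random key sits, on average, halfway down its chain, so the total
        # probe count over all keys is sum of c*(c+1)/2 over chain lengths c.
        total += sum(c * (c + 1) / 2 for c in buckets) / n_keys
    return total / trials

# The estimate hovers near 1 + load_factor/2 at every table size.
for n in (1_000, 10_000, 100_000):
    print(n, round(average_probes(n), 2))
```

The key observation is that the printed estimate doesn't grow with n, which is exactly what "average case Θ(1)" means here.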
Data structures and software engineering courses would probably be sufficient for many software engineering jobs out there. Databases, networks, OS, and security are good to have knowledge of. However, if an engineer is building cutting-edge stuff, mathematics will be his/her best friend.
One good property of mathematics is that it provides guarantees in the form of equalities, inequalities, or equivalences. Such guarantees can help you ensure that your system holds up quantitatively. It is thus your job to reduce your computer science/engineering problem to a mathematics problem.
I find this remark by Terence Tao particularly good: "If you don’t have mathematical background, the classes you take will help you train to analyze existing systems and build things that haven’t been built before. If you want to design something really new, at some point you’ll have to model what you’re doing, which might be different from previous models, and you have to do some mathematics somewhere." This remark is made in this interview: https://docs.google.com/document/d/1rinL25rC8LnMTzZcGjg1axT-...
Yes, mathematics has been, is, and will be at the cutting edge of software engineering.
For historical examples: consider Turing, Knuth, and Page & Brin.
For current examples: consider High Performance Computing (HPC), Data Science, and Data Engineering.
For future examples: wait and see (or study math in the present).
Also, consider etymology: computers are machines that perform computations; software is code that directs computations and their results on computers; software engineers create code that directs computations on computers; computational scientists define the limits of computing on computers that have given properties, and create novel computational solutions to problems; mathematicians discover the theory that computational scientists (and, more generally, all scientists) utilize.
Note_0: There is a field at the cutting edge of computation, computational science, that requires the most mathematical knowledge of any discipline whose name includes the "comput-" stem.
Note_1: Math goes beyond providing guarantees of equality, inequality, and equivalence: it is the quantitative basis of computer science (and all science); it provides Boolean logic; it guarantees boundaries/possibilities; and it provides scope.
It seems like about two semesters' worth of material in here, so 3 hours of classroom time + 7 hours of homework/self-study time per week, for 36 weeks, would give 360 hours. 900 pages divided by 360 hours is 2.5 pages per hour, a rather reasonable rate for study (this is not as dense as an academic paper).
This covers intro to proofs, mathematical logic, number theory, graph theory, combinatorics, and probability. In a typical undergrad math curriculum, these would be five or six different courses. So if you have a two-semester course or self-study covering this 900-page textbook, it's probably more efficient than learning the material in the math department by a factor of 2.5:1 or 3:1.
Yes, stuff's been cut. This is more of a survey of these fields with the emphasis on the most important points, and informing you of the most basic parts. So if you encounter one of the more specialized problems that might have been covered in a more specialized course, you'll have hints on where to look, what to Google for, and what sort of reasoning to use.
There's a lot you can do in programming without knowing a ton of math, just like these days you can drive a car quite well without having a degree in mechanical engineering. But just like with a car, the more you know about the deep underpinnings of things, the more you can do. A person with deep knowledge of how their car works may rebuild the engine and overbore the cylinders for more displacement, put in a camshaft with more lift, and replace the factory exhaust manifold with headers with better flow.
A computer programmer who understands the mathematical underpinnings of his work might replace the standard library implementation of HashMap with one that's better suited to his use case, or do his own derivations to figure out how to implement back-propagation in a neural network, etc.
No, it's not a perfect analogy, but the point stands I think - having the deep knowledge of the theory and foundations in any field enables you to push the boundaries beyond any "black box" that is provided to you.
It depends on what you mean by "read and understand".
My approach with something like this is to breeze through it a first time, just to understand the main point of each section and how it all fits together. If you spend 30s/page on average (obviously there will be a lot of variation), that's 7.5 hrs -- two mornings, afternoons, or evenings.
And for most people, that will provide 90% of the value. And you know what to look up, where, when you need it in the future. And you also know whatever part you might be personally interested in more, and you can go back and read those sections in more detail.
This approach works well with all types of non-fiction, and it can actually be really helpful to first "skim" through every non-fiction book you read, before reading in detail -- knowing in advance where the author is going can often help remove a lot of confusion.
I have found the mechanics of mathematical proof invaluable when programming, because they show you how to be very clear about what you do know and what you don't know. Basically "Is this thing just a random example, or something that is true in all cases?"
When I work on complex algorithms or concurrent systems I really feel the benefit of proof, because there's no way I could just test if the code is correct.
In the book, explaining what a proof is, how proofs are typically constructed and how to write a good one takes them ten pages, all written in very down-to-earth language. Then there are eight pages of problems you could do if you wanted to test your understanding.
I bet if you read those ten pages you'd feel there was nothing special there, just obvious ways to reason about things.
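For a taste of that style, here is the skeleton of a typical induction proof (my own example, not quoted from the book), which is about as "special" as those ten pages get:

```latex
\textbf{Claim.} For all $n \ge 1$, \; $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$.

\textbf{Base case.} For $n = 1$: $\sum_{i=1}^{1} i = 1 = \frac{1 \cdot 2}{2}$.

\textbf{Inductive step.} Assume the claim holds for some $n \ge 1$. Then
\[
  \sum_{i=1}^{n+1} i \;=\; \frac{n(n+1)}{2} + (n+1) \;=\; \frac{(n+1)(n+2)}{2},
\]
which is the claim for $n+1$. By induction, the claim holds for all
$n \ge 1$. \hfill $\square$
```

The whole trick is naming exactly what you assume and exactly what you conclude, which is the same discipline the parent comment describes applying to code.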
It's hardly insurmountable, but to answer your question: maybe people will read it because they're just curious and want to learn. Also, you don't have to learn all of it, or any of it; maybe you could just use it as a reference book.
It depends on what you want to achieve. Strong fundamentals in maths give you the tools to solve big problems. The more ambitious the problem you want to help solve, say a traffic-control system for a high-speed train network, the more maths you will need.
It'd be nice if they were able to record Leighton's 6.042 lectures. Meyer teaches the class 3 out of every 4 semesters in an inverted classroom setting (you read and learn topics on your own before lecture, and lecture time is spent in smaller groups with TAs helping you solve problems), while Leighton teaches the class lecture style (which is better for OCW imo).
This book is a really accessible primer on the basics. Much more readable than a lot of other textbooks, which kind of go all-out in terms of rigor rather than present things intuitively. This was one of the textbooks we used in Princeton's introductory CS theory class (COS 340: Reasoning About Computation). The probability section is especially good.
I almost banned you before realizing you were probably talking to your instructor. It's good to build up a substantive comment history before posting like this!
Well, shiiiiit. I came to Hacker News to avoid doing my discrete math homework. While this is number 1, guess I'll plug Fleck's textbook from UofI http://mfleck.cs.illinois.edu/building-blocks/
I printed off the 2015 version (yes, the whole thing) and it has served as a very valuable reference. I wish there was a way to purchase a nice printed edition to support the authors. If they read this - thanks! Also, is a 2017 version of the course coming to OCW? I imagine that's why they'd release the updated PDF.
Freshman at MIT here taking this class -- the lectures are actually taught in a flipped classroom format, so I wouldn't imagine they would release the course considering there are no lectures to follow. I could see them releasing problem sets, however.
I'm still brushing up on Calc, but may have to learn discrete math on my own, since Harvard Extension School offers discrete and algorithms during the same semester, and I don't want to wait 2 years before I can take algorithms.
To that end, I bought a copy of the 1995 "C Edition" of Foundations of Computer Science by Aho and Ullman. It combines data structures and discrete math into one massive text.
Does anyone have an assessment of that book? Is the combined data structures and discrete math a good approach? Does the book hold up well against more "modern" ones?
No, this is for a general introduction to the mathematics of computer science. This looks like a basic (and good!) text any MIT freshman should be able to master. Perhaps it's for what 6.001 has morphed into?
If you understand this stuff, you really need linear algebra for today's "deep learning", which perhaps is 18.03 (I can no longer remember).
I meant Course 6, which can be 6-1 (EE) or 6-2 (EECS). Given that there are many 6-2 majors out there who say they've majored in computer science, it might come as a surprise to others who don't understand the MIT curriculum that a "computer science" major at MIT does not need to take this class.
Not really. You're better off looking at introductory calculus and statistics. This book places more emphasis on discrete math.
One good way to go about it is to audit an online course [like Andrew Ng's] and figure out what gaps you need to fill in your knowledge to understand the material.
I am familiar with linear algebra and have at most an average understanding of graph theory. I know there is a lot more math to cover for ML, but I find it overwhelming to start.
All you need for introductory machine learning is multivariable calculus (for some simple optimization stuff), linear algebra, and probability. If you don't know probability, here's a good course: https://ocw.mit.edu/courses/electrical-engineering-and-compu....
In its current state, even cutting-edge machine learning is pretty accessible if you have a good understanding of linear algebra and calculus. If you want to have a deeper understanding of machine learning, then vector calculus, tensors, graph theory, etc. can only help you.
You're a godsend, thank you from another person. I've been slowly working through, on my own, the textbooks my friends have given me over the years after finishing their classes, but haven't really known which direction to go in, being nontraditional.
Is there a recommended order to these? I skipped two years of math in High School and ended up BSing my way through Calculus without learning any of it, so I'm trying to figure out what I may need to fill the gaps
Yeah, we had pre-tests for algebra & trig, but I read the textbook before the class started and got 100% on the pre-tests, so they skipped me ahead. In retrospect I regret it.
I had used the second edition to refresh my probability before taking a class in AI and found the material succinct and very useful.
It's only today that I found out that one of the authors is the CEO of the company I work for.
I work for Akamai Technologies and the author I am talking about is Frank Thomson Leighton.
I don't think you should be starting with a goal to finish the book cover to cover. It's an unrealistic and ineffective goal. Although, if you think that works for you, go ahead.
From a person with only pre-high school math knowledge, what books/pdfs/moocs would you guys recommend in order to prepare for a future computer science degree? I've read that I would need linear algebra, discrete mathematics and calculus. Is that right? Is there any resource that targets compsci subjects?
From someone with an engineering background (mechanical engineering undergraduate 2014), do I need to study this if I want to transition into computer science/software developer/engineer post?
Yeah. I had what was termed a "tolerated fail" on what I think was the equivalent on my (UK) course - "Foundations of Computer Science". Tolerated as in because I scored high in all other modules I didn't have to resit, fortunately, as I'm not convinced I'd have passed.
I'm not convinced either that it's negatively affected my career. I do occasionally wonder if it was just not taught in a way I could get, whether I was just bored to tears because the theory was so separate from the application, or if it's actually just not that important a prerequisite.
Yeah, I had the same. I have been developing at various levels for 15 years since university, and I can honestly say I have never once come across anything that has needed it! Then again, seeing as I am primarily web-based, it's not likely I would, versus someone coding realtime systems.
This is so stupid; the "primer on the basics" should be "Algorithmization for Computer Science", not a 900+ page math textbook (that doesn't even explore graph theory, calculus, and differential calculus).
So if I understand correctly, this book is both too long and doesn't cover enough? (I'm confused about the book to which you're referring. If it's "Mathematics for Computer Science", then it does cover graph theory from p445.)
It can be, depending on what you're doing. Look at back-propagation in neural networks for example, where the whole thing is based on the chain rule from Calculus.
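As a toy illustration of that point (my own sketch, not code from any library): for a single sigmoid neuron with squared-error loss, back-propagation is literally three applications of the chain rule multiplied together.

```python
import math

def forward_backward(w, b, x, t):
    """One neuron: y = sigmoid(w*x + b), loss L = (y - t)^2.

    The gradient dL/dw factors by the chain rule:
        dL/dw = (dL/dy) * (dy/dz) * (dz/dw)
    """
    z = w * x + b
    y = 1.0 / (1.0 + math.exp(-z))      # sigmoid
    loss = (y - t) ** 2
    dL_dy = 2.0 * (y - t)               # derivative of squared error
    dy_dz = y * (1.0 - y)               # derivative of sigmoid
    dL_dw = dL_dy * dy_dz * x           # chain rule, times dz/dw = x
    dL_db = dL_dy * dy_dz               # chain rule, times dz/db = 1
    return loss, dL_dw, dL_db

# A few steps of gradient descent shrink the loss.
w, b = 0.5, 0.0
for _ in range(200):
    loss, gw, gb = forward_backward(w, b, x=1.0, t=0.9)
    w -= 0.5 * gw
    b -= 0.5 * gb
```

A real network just repeats this factoring layer by layer, which is why the whole algorithm hinges on that one rule from calculus.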
Yeah absolutely. Having a broad background in math can help in surprising places but there's much more math that has applications to CS than can fit in one class.
Calculus is a prerequisite for the course in which this book is used. I'm pretty sure Differential Equations is also a required course for CS majors at MIT.
[1] https://www.amazon.com/Discrete-Mathematics-Applications-Sus...