That class basically builds a microcontroller. It's a Harvard architecture with a split instruction ROM and data RAM, a simple ALU, and a 16-bit CPU. You'd need one hell of a breadboard to physically build it, though.
I'd definitely recommend TECS for folks looking to learn more about the many layers of abstraction sitting under their interpreter/VM. Also worth taking a look at the Little OS Book or the OSDev wiki for some more insight into a less idealized/more crufty architecture.
I wound up spending the most time playing around with chapter 9 (the Tetris part of Nand to Tetris) and made a very basic ray-casting game a la Wolfenstein 3D: https://youtu.be/c-J7lwKWDN8
OK, this is an oddball one, in that a book on Evo Devo is pretty far from CS, but reading it completely changed how I look at software architecture, protocols, formats, programming languages, and a whole range of other CS-related topics.
Superficially, it is what it says on the tin: an examination of the mechanisms by which life manages to evolve at all, the ones that keep most mutations from being immediately lethal and instead give them a decent chance of producing (potentially) useful variations.
But under the hood, I've found it extremely interesting to see how and why so many aspects of the living world are resilient and even antifragile, and there are a lot of useful ideas and concepts to mine beyond relatively superficial biomimicry like genetic algorithms, neural networks, or even "design for failure" approaches for software and services.
Read it, and you won't think about "Worse is Better", "Bus Numbers", or Chaos-Monkeys the same way ever again.
This book influenced me tremendously. It explains how a random process (e.g. genetic mutation) can build highly non-random structures (e.g. the human eye). Recently I was using American Fuzzy Lop (http://lcamtuf.coredump.cx/afl/) to find bugs in my software, and was able to appreciate the surprising effectiveness of the "random" strategy it employs to discover structures in the program. For example, AFL could construct a valid JPEG from randomized input: https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-th....
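AFL itself does compile-time instrumentation and much smarter mutation scheduling, but purely as a hedged illustration of the coverage-guided idea, a toy loop might look like the sketch below (the `target` function and its "branches" are made up for this example):

```python
import random

def target(data: bytes) -> set:
    # Hypothetical program under test: report which "branches" the input reached.
    hits = set()
    if len(data) >= 1 and data[0] == 0xFF:            # first byte of a JPEG SOI marker
        hits.add("first_magic_byte")
        if len(data) >= 2 and data[1] == 0xD8:        # full SOI marker
            hits.add("soi_marker")
            if len(data) >= 3 and data[2] == 0xFF:
                hits.add("segment_start")
    return hits

seen = set()
corpus = [b"hello"]                                    # dumb seed input

for _ in range(200_000):
    child = bytearray(random.choice(corpus))
    for _ in range(random.randint(1, 4)):              # a few random byte mutations
        if child and random.random() < 0.5:
            child[random.randrange(len(child))] = random.randrange(256)
        else:
            child.insert(random.randrange(len(child) + 1), random.randrange(256))
    cov = target(bytes(child))
    if not cov <= seen:                                # keep inputs that reach new coverage
        seen |= cov
        corpus.append(bytes(child))

print(sorted(seen), len(corpus))
```

The key trick mirrored here is that each intermediate branch reached gets saved to the corpus, so the search climbs toward deeply nested structure instead of having to guess it all at once.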
Hey thanks for the link, that jpeg example is really quite cool.
One of the things that I gained from the book was a bit better "feel" for getting the right granularity/modularity in order to facilitate evolvability (as in the chunks I might want to swap out for another implementation, while leaving others in place), as a separate concern from maintainability.
BTW, I've noticed that the "order blindly emerging from chaos" theme keeps popping up in fiction by writers like Bruce Sterling, Cory Doctorow, and Charles Stross (all favorites of mine).
"The Plausibility of Life ends with a brief critique of intelligent design, suggesting that the concept of facilitated variation will provide a solid argument to rebut creationists. I applaud the authors' intention, as it seems to me that more scientists ought to face the realities of public misunderstanding of science. But their presentation is too brief, and a bit too simplistic. The "controversy" about evolution has nothing to do with the soundness of scientific explanations of the history of life: It's not a scientific controversy, but a social, cultural and political one. Creationism is the result of centuries of anti-intellectualism in the United States, coupled with sometimes cynical exploitation of the issue for political gain. In addition, many scientists have no interest in getting out of the ivory tower to talk to the very same public that pays their salaries and funds their precious research grants. The recent defeat of intelligent design at the trial in Dover, Pennsylvania—at the hand of a conservative judge appointed by George W. Bush—will do much more to promote sanity in public education than any theory about facilitated variation, as scientifically sound as the latter may be." http://www.americanscientist.org/bookshelf/pub/have-we-solve...
Thanks, that's an interesting review, though to me that passage you quoted is the least interesting bit about it. ;-)
For the viewers following along at home, this passage summarizes why I found the book interesting in the first place:
"Organisms are not analogous to human-engineered machines [...] rather, they are characterized by developmental systems that are capable of accommodating quite a bit of disruption—be that from changes in the external environment (phenotypic plasticity) or from mutations in their genetic makeup (genetic homeostasis). This ability to accommodate is in turn made possible by the modular structure of the genetic-developmental system itself, which allows organisms to evolve new phenotypes by rearranging existing components. "
To which my emotional response at the time was, "Hey, human-engineered machines are becoming less like human-engineered machines in many of the same ways!"
And this bit was the part I found most interesting about this review:
" Inherited epigenetic variants can interact with their genetic counterparts to multiply by orders of magnitude the phenotypic variation available to natural selection, thereby expanding the mechanistic bases of evolutionary theoretical explanations and greatly increasing their plausibility as an account of life's diversity."
Huh. The analogous mechanisms for software seem like they may be the ones that don't get propagated by code per se (but certainly affect it), like the surrounding community. I'll have to think about that some more (and probably eventually read additional Evo Devo books).
Each section contains a story of some situation he was in where he faced a problem which he solved by applying one of various algo techniques (DP, divide and conquer, etc.). After reading CLRS for a class, it was nice to see how some of the most common textbook algorithms have been applied by a notable computer scientist.
This book is highly overrated. Algorithms by Sedgewick (http://algs4.cs.princeton.edu/home/) is a much better book for learning and understanding algorithms. Skiena's book is a simple collection of (sometimes very exotic) algorithms. It won't teach you anything; it will only tell you what exists.
I'm glad to hear someone else say that because the book seems to get high marks and recommendations but I didn't care for it at all. I thought it was just me. I just don't think he has a very engaging writing style.
The second half of the book is "the catalog" of algorithms, and I guess maybe that's what people like, but I had so few "a-ha" moments reading the book that I got to page 130 before I threw in the towel.
No, I believe there are more algorithms listed in Skiena's book. But from Skiena's book you will barely understand them, and it won't teach you how to really implement them. Sedgewick's book is much better in that respect. One can compile and run the given Java examples and actually play around with them. Skiena's book is a good reference for postdocs in algorithm design. Sedgewick's book, on the other hand, is for people who actually want to implement these algorithms.
This is a great book. I love the approach; it really helps with one of the main problems in using algorithms (or even design patterns): problem identification. There are a bunch of problems that are way easier to solve if you recognize, for example, that the solution lies in dynamic programming, but if you don't, they become very hard.
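For instance (my toy example, not one from the book): once you spot that "fewest coins to make an amount" has overlapping subproblems, a memoized recursion is a few lines; without that recognition it looks like an exponential search.

```python
from functools import lru_cache

def min_coins(amount, coins=(1, 5, 10, 25)):
    # Recognizing the DP structure: the answer for `amount` depends only on
    # answers for smaller amounts, so memoize the recursion.
    @lru_cache(maxsize=None)
    def best(rem):
        if rem == 0:
            return 0
        candidates = [best(rem - c) for c in coins if c <= rem]
        return 1 + min(candidates) if candidates else float("inf")
    return best(amount)

print(min_coins(63))   # 6 coins: 25 + 25 + 10 + 1 + 1 + 1
```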
It's kinda hilarious in that regard. CLRS is the gold standard recommended introductory text for algorithms, but it's so dense and over-stuffed that it's overkill, so it's better as a reference. Skiena has a lot of stuff you don't need, but is comparatively more succinct and light to read.
I love this book for how approachable, readable, and even enjoyable it is. Skiena does a great job of motivating each of the algorithmic techniques. I recommend this one widely.
It is a fantastic book. It doesn't take you into typical algorithms (at least that I recall), but rather it explains as intuitively as possible how a computer is built up from flip-flops and binary logic to assembly, intermediate language, and on to full-on compilation of a usable language.
Basically, beginner programmers can acquire a broad understanding of the foundation the programs they're building are built on by reading this book. It reads more like a non-fiction exposé than a programming language tutorial book, which is to say, given its subject, it's an easy read you can do on the couch. Depending on your skill and knowledge level, there may be a few sections you have to re-read several times until you understand them, but you won't feel as though you need to go over to your computer chair and try something to fully grasp it.
If you can do basic arithmetic, you can get through this book. That seems to be the hidden premise: that computers are easy and should be easy to understand. This book is a testament to that. Though I'm sure some will find this doesn't go deep enough. But the point is: learning so generally will create many entry points for you to follow up on in your journey into programming and computer science. It will clear up many things and essentially make the path seem less scary and out of reach. This book achieves that really well. High-level programmers will come away feeling far less insecure about their lack of knowledge of the underpinnings of whatever it is they are developing. I know I did. I can't say enough about this book. It's the real deal. I'm sure those with a computer science degree might have more to say (that is, they likely think it's a cursory overview), but I think for everyone else it's a computer science degree in a book you can read in one or two weeks. At least half the degree. For the second half, I recommend Algorithms in a Nutshell. And done! Go back to programming your high-level JavaScript React app and get on with your life.
On a side note: it's my opinion that theory first is the wrong way. Application first, theory as needed, is the right approach. Otherwise it's like learning music theory before you know you even like to play music. You might not even like being a programmer or be a natural at it. And if you spend 4 years studying theory first, you will have spent a lot of time to discover what you could have discovered in like a month. In addition, it can suck the joy and fun out of the exploration of programming and computer science. It's natural and fun to learn as you dive into real problems. Everything you can learn is on the internet. It's very rewarding and often faster to learn things when you are learning them to attain a specific goal. The theory you do learn seems to make much more sense in the face of some goal you are trying to apply it to. In short, over your computing career you can learn the same stuff far faster and far more enjoyably if you do so paired with actual problems.
But that said, sometimes you do gotta step back and allocate time for fundamentals, even if you have no specific problem they are related to. However, you will know when it's time to brush up on algorithms, or finally learn how the computer works below your day-to-day level of abstraction. Just know that a larger and larger percentage of us programmers went the applied route, rather than the computer-science-theory-first + formal-education route. It's probably the majority of programmers at this point in time. In short, you are not alone learning this as you go. Learn to enjoy that early on and save yourself from the pain of insecurity of not knowing everything. This is an exploration and investigation, and perhaps you will make some discoveries nobody else has been able to make, far before you have mastered and understood everything there is to know about the computer. Perhaps that's its biggest selling point: you don't have to know everything before you can contribute to the computer world! So enjoy your pursuits in programming, knowing that in your unique exploration, at any time you may come up with something highly novel and valuable.
I think that everyone absorbs new information better when it fixes an immediate issue or clarifies an immediate doubt.
But this is not a contradiction. Theory can be presented in such a way that you want and need to know the next piece of information, the way mystery novels work.
It can be easier for the writer to create this need with examples instead of narrative, no doubt about it.
But let's not fall into the opposite extreme of having only examples and no theory, which is so common with blogs now. I feel empty when I read such material.
Yes! I read it about 5 years after finishing my degree (which already covered many of the topics in depth), and it was very enjoyable. It gives a very good, succinct (if simplified) overview of computer architecture.
I second this. Although I am not finished with it, I can say it is a very interesting read. Thanks for the link to the author's site with the lecture notes; I didn't know that existed. The lecture notes are easygoing and have a good sense of humor, such as the following:
"It's a shame that, after proving his Completeness Theorem, Gödel never really did anything else of note. [Pause for comic effect] "
This would be my recommendation as well. It is closer to a textbook than, say, a popular science book, but it is still a very fun, engaging, and at times pretty funny read.
Similarly to books like GEB, it really exposed me to a whole range of fascinating ideas and topics that I am now interested in.
One of the most important books ever written on software engineering practice. Author Frederick Brooks won the Turing Award for this book and for his work on IBM's System/360 (the main example project used in the book).
Everything I needed to know about taking on software projects. "Prepare to throw one away".
The insights from reading this book years ago certainly helped me navigate project management as a rescue guy. By the time I was hired, the failing project was already part way through the experience that Brooks' book is based on, and he was writing the OS!!!
He writes about the origin of the SVC (supervisor call) in OS/360. They needed a way to keep track of SVCs as they were popping up from all over, so they made a list; you added yours to it, and then other groups could check the list before they wrote another version of the same function.
If you read that and like it, try The Design of Design by Fred Brooks, in which he talks at a high level about how the design process works; it has good teachings about the flaws of the waterfall model.
Note, you have to be willing to put the time in, especially if your linear algebra is rusty or (like me) you have only a passing familiarity with complex numbers.
With that in mind, it's almost entirely self-contained and you can immediately start to make connections with classical computing if you're familiar with automata.
I've been interested in learning about quantum computing for a few years now and this book finally got me going.
As an aside it's a really great excuse to try out one of the many computer algebra systems out there. I gave Mathematica a trial for fun since I'd already used SageMath in the past.
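For anyone curious what that playing around can look like outside a CAS, here is a minimal numpy sketch (mine, not from the book) of a Hadamard gate and a Bell state:

```python
import numpy as np

# One qubit as a 2-component complex vector; |0> = [1, 0].
ket0 = np.array([1, 0], dtype=complex)

H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

state = H @ ket0                       # equal superposition of |0> and |1>
print(np.abs(state) ** 2)              # Born rule: [0.5, 0.5]

# Two-qubit Bell state: Hadamard on the first qubit, then CNOT.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(H @ ket0, ket0)
print(np.abs(bell) ** 2)               # [0.5, 0, 0, 0.5]: perfectly correlated outcomes
```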
Leonard Susskind's "Quantum Mechanics: The Theoretical Minimum" as well as Nielsen & Chuang's "Quantum Computation and Quantum Information" are also fantastic resources. I've got both sitting here on my desk :)
I used it to supplement my prescribed compiler construction textbook and it's incredibly useful! Also surprisingly easy to read compared to some texts with a very math-heavy approach.
This book was my greatest buy ever! I once went to Fry's Electronics, to their book section - my expectation was to find only popular books there, when lo and behold, I see this book on compilers. I turn it over to look at the price tag, thinking I'll have to shell out something like $80 for the book, and to my shock and surprise the tag says,... wait for it... $0.01
That's right, one cent! At first I think this is some kind of prank and somebody peeled off a price sticker from some discounted stale candy or something and stuck it on this book. I'm of half a mind to see if I can slip this book by the store checker and have them just scan the one-cent price tag and not do a double take. But my conscience gets the better of me and I walk up to a store clerk, point out the one-cent price tag, and ask them to do a real price check so I can find out the fair price. The clerk looks up the book in his computer and says, nope, that is the correct price in their system! So I bought "Engineering a Compiler, 2nd Ed. by Cooper & Torczon" for $0.01! My greatest book buy ever!
Try Principles of Program Analysis by Nielson & Nielson. Uber hardcore and beautiful approaches to static analysis and optimization, on a very general mathematical framework.
Fun fact: these two are married and the cutest couple. They walk to Rice's Coffeehouse almost daily together talking about whatever it is compiler professors talk about.
This is cheating a bit, because the book is a history of computer science rather than computer science, but I think anyone interested in programming should read it. We often think about the history of computing (and what it teaches us) in a retrospective way (we see all the amazing, totally revolutionary things that happened), but it turns out that if you look deeper, there is often a lot more continuity behind key ideas (and we just think it was a revolution, because we only know a little about the actual history). This book goes into detail for a number of major developments in (early) programming and it makes you think about possible alternatives. It was one of the books that inspired me to write this essay: http://tomasp.net/blog/2016/thinking-unthinkable/
If you work in bioinformatics, it is weird how little history the average bioinformatician knows (the field only really dates back to the late 1960s). People may know things like PAM matrices, but not Margaret Dayhoff, who created them (not to mention the standard amino acid codes used today).
Deep Learning (Adaptive Computation and Machine Learning series) by Ian Goodfellow, Yoshua Bengio, Aaron Courville
Came out in November 2016. Split into 3 parts:
Part I: Applied Math and Machine Learning Basics (Linear Algebra, Probability and Information Theory, Numerical computation)
Part II: Deep Networks: Modern Practices (Deep Feedforward Networks, Regularization, CNNs, RNNs, Practical Methodology & Applications)
Part III: Deep Learning Research (Linear Factor Models, Autoencoders, Representation Learning, Structured Probabilistic Models, Monte Carlo Methods, Inference, Partition Function, Deep Generative Models)
I finished it up a few months ago and I cannot recommend it enough.
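As a taste of what Part II covers, here is a toy feedforward network trained on XOR with plain numpy; this is my sketch of the idea, not code from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)     # hidden layer parameters
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)     # output layer parameters
lr = 0.1

for step in range(10_000):
    h = np.tanh(X @ W1 + b1)                       # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))           # sigmoid output
    # Backpropagate the cross-entropy loss.
    dp = p - y
    dW2 = h.T @ dp;  db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)                # tanh derivative
    dW1 = X.T @ dh;  db1 = dh.sum(0)
    W2 -= lr * dW2;  b2 -= lr * db2
    W1 -= lr * dW1;  b1 -= lr * db1

print(np.round(p, 2).ravel())                      # should approach [0, 1, 1, 0]
```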
The real soul of SICP is in its exercises, however. It asks you to build on your own prior work in inventive ways, challenging you to solve the exercises correctly, but also to do so in a maintainable way.
I wrote up some supplementary material to make the experience smoother for a practicing programmer: High-level requirements of the chapter subprojects, Pitfalls and paradigms embedded in some of the footnotes, Answers to (nearly) all exercises, a testing framework for Chapter 4 interpreter (including the JIT compiler), a GUI for running Chapter 5 virtual machine here: https://github.com/zv/SICP-guile
The one thing I really wish I had was a comprehensive test suite for all the exercises. They are (obviously) hard to solve, and there's no way to see if you were right other than to check the solution. It's a huge flaw in most CS books: no feedback short of hand-feeding you the answers.
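For what it's worth, per-exercise tests don't have to be elaborate. A Python analogue (the book itself uses Scheme) of a check for exercise 1.3, the "sum of squares of the two larger numbers" procedure, might be as small as:

```python
def sum_of_squares_of_two_largest(a, b, c):
    # Python analogue of a solution to SICP exercise 1.3.
    x, y = sorted([a, b, c])[1:]          # the two largest of the three
    return x * x + y * y

def test_sum_of_squares_of_two_largest():
    assert sum_of_squares_of_two_largest(1, 2, 3) == 13
    assert sum_of_squares_of_two_largest(3, 3, 1) == 18
    assert sum_of_squares_of_two_largest(-1, 0, 2) == 4
```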
I disagree with this pretty strongly, actually. The best skill you can get from SICP-level exercises is looking at the code and being completely confident that you understand it and that it is correct. The best way to use them is to write the code without running it, and once you are sure, review it with someone else to find out if you were right.
This breaks down a bit around chapters four and five where you are plugging in parts of a larger code base, and the accidental complexity starts to dominate.
Self-study students can find someone online to study with.
I like the amount of projects SICP inspires. That's one of the best things about it. I wouldn't want to discourage writing a test suite, but I wouldn't suggest studying it by using one.
SICP questions are designed to have elegant solutions, which may be missed; this is one reason why a study group is better than a test suite.
This is one of the more fascinating books that I've read recently. The commentary makes the paper itself very accessible. I rather enjoyed the direct reproduction of the paper itself (with typos!) in the book, and the near line by line commentary at points. It's not the way that I would want to be taught about Turing machines, but it's amazing to see them articulated for the first time.
Like others in this thread, this is more (only, in fact) about programming but I found The Go Programming Language to be an absolutely perfect introduction to Go with fantastic examples and succinct prose. It's beautifully typeset as well, which doesn't hurt.
I've been reading _The Practice of Programming_ (Pike, Kernighan) lately; it's mainly geared towards Java and C/C++ programmers, but even as a DevOps engineer I'm finding a lot of it useful.
Indeed! This book is likely to be 100x more interesting than your local college's undergrad networks course.
The dozen or so principles Varghese lays out for writing fast networking code are reason enough to pick it up. The writing is very approachable, too. Reminds me more of the early network operator books than a CS text.
Wow, this is a unique book! Just skimming, I see Bloom filters, tries, routing protocols, sequential logic, and DDoS all in the same book. This looks great. Thanks for sharing.
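For readers who haven't met one before, a Bloom filter is small enough to sketch. This toy Python version is mine, not the book's (which presents the idea in a hardware/router context):

```python
import hashlib

class BloomFilter:
    # Toy Bloom filter: k hash functions derived from SHA-256 with different salts.
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item: str):
        # No false negatives; false positives only if all k bits happen to be set.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("10.0.0.1")
print("10.0.0.1" in bf, "10.0.0.2" in bf)   # True False (almost certainly)
```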
I've been looking at this, but I don't really know Haskell (just F#). In your opinion, is it worth trying before knowing Haskell? I'm mostly interested because I wish to learn F* as well, which also has dependent types.
The Art of Multiprocessor Programming by Shavit & Herlihy. Recommend topping it off with the lecture delivered at Microsoft Research (https://youtu.be/nrUszqrlvi8)
This is an excellent book! It was my prescribed textbook for an undergrad concurrent systems course. It's very "proofy" but explains a large number of important concepts quite well.
The book is divided into two sections: Principles and Practice. The former introduces basic terminology, concepts and common mistakes. The latter is where it gets interesting and introduces real-world problems like cache coherence traffic.
The language used in the book is Java but the concepts can be applied to practically any language that supports concurrency. Highly recommended!
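As a flavor of the "common mistakes" material (my own minimal Python illustration, since the book's code is in Java): the classic lost-update race and its lock-based fix.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe(n):
    global counter
    for _ in range(n):
        tmp = counter            # read
        counter = tmp + 1        # write -- another thread may have updated in between

def safe(n):
    global counter
    for _ in range(n):
        with lock:               # mutual exclusion makes the read-modify-write atomic
            counter += 1

for fn in (unsafe, safe):
    counter = 0
    threads = [threading.Thread(target=fn, args=(100_000,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(fn.__name__, counter)  # unsafe usually loses updates; safe is exactly 400000
```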
I'll take the counter-point on this one. I picked it up after it was mentioned in YC's Summer Reading List; however, I found it to be a chore to finish.
The reasons and explanations given touch on a technical, but not too technical, approach to algorithms, getting stuck in a place that probably just leaves both audiences a bit unhappy. For instance, there is a chapter that mentions that the optimal stopping point is ~37%, but there is never any mention of how the 37% number is found. Of course, I could look it up, but then I could just as well look up the optimal stopping problem.
Aside from that, the examples come across as contrived and inapplicable. Sure, merge sorting your socks sounds great, but I still will never do it!
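For anyone who, like the parent, wanted to see where the 37% comes from: it's 1/e, from the secretary problem, and a quick Monte Carlo check (my sketch, not from the book) reproduces it:

```python
import random

# Optimal-stopping rule: skip the first ~37% (1/e) of candidates, then take the
# first one better than everything seen so far.
def trial(n=100, skip_fraction=0.37):
    ranks = list(range(n))               # 0 is the best candidate
    random.shuffle(ranks)
    cutoff = int(n * skip_fraction)
    best_seen = min(ranks[:cutoff], default=n)
    for r in ranks[cutoff:]:
        if r < best_seen:
            return r == 0                # did we pick the overall best?
    return ranks[-1] == 0                # forced to take the last candidate

wins = sum(trial() for _ in range(20_000))
print(wins / 20_000)                     # about 0.37, i.e. 1/e
```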
Just a heads-up: for someone well versed in computer science, the first two thirds of this book can be pretty dull. The majority of the time is spent explaining first-year CS topics in layman's terms. After that it picked up a bit, for example by showing how randomized algorithms can apply to types of decision making in real life.
I still think it is better suited for those with little to no CS knowledge.
I just finished that not long ago. It's an interesting read because it places an emphasis on the value of simple and statistically effective algorithms that can be executed by humans with relatively low cognitive overhead.
Many of the algorithms and ideas are familiar, but the novelty here is how one can map these CS lessons to improve performance on the human OS and human network.
The story they've wrapped around the shift to Agile for this company was extremely entertaining and kept me coming back to read more! Plus it has some very good thought exercises to take with you to your own job :)
I didn't love this book; it was OK. It's heavily based on The Goal, and I thought that book was a bit better. I think if you have a certain dysfunction in your team AND the mandate to fix it, then The Phoenix Project is a must-read. For everyone else it's kind of a waste.
Seconded. After being a FreeBSD developer for a dozen years, some people might expect me to know everything this book covers, but in fact I pulled it out just last month to help me understand some details of how the VFS layer and the NFS client worked.
Has anyone read Probability and Computing: Randomized Algorithms and Probabilistic Analysis by Michael Mitzenmacher, Eli Upfal[0]? If so, how difficult are the exercises? Aside from doing problems I like to read solutions and analyze them to learn better style, new techniques/ideas. Are there any books like this one, but with solutions? Thanks.
The source code and commentary of xv6, a simple Unix-like teaching operating system. By far the best book on operating systems I have encountered so far.
I only had a chance to read part of the book (around the first 25% at most), but I found a lot of the ideas to be applicable to all forms of work. The ideas were well-explained and didn't require a software background. I intend to pick up a copy when I have the cashflow for it.
The Cathedral and the Bazaar by Eric S. Raymond. It's inspiring for embracing Open Source collaboration and also very useful for managing projects. In Guy Kawasaki's words: "The most important book about technology today, with implications that go far beyond programming."
It is free to read at the above link. I had read many of the chapters some years ago. Pretty interesting stuff.
It consists of multiple chapters, each written by different well-known people associated with prominent projects from the open source movement, at a time when it was relatively new (1999).
The paragraphs that I remember the most from it are from this chapter:
Future of Cygnus Solutions
An Entrepreneur's Account
Michael Tiemann
(Cygnus Solutions was a company which initially ported and improved GCC and its toolchain to multiple Unix platforms, and made a business out of it, before open source was a gleam in many people's eyes. They did well and were acquired by Red Hat some years later.)
Here are the paragraphs:
[
Again, a quote from the GNU Manifesto:
There is nothing wrong with wanting pay for work, or seeking to maximize one's income, as long as one does not use means that are destructive. But the means customary in the field of software today are based on destruction.
Extracting money from users of a program by restricting their use of it is destructive because the restrictions reduce the amount and the ways that the program can be used. This reduces the amount of wealth that humanity derives from the program. When there is a deliberate choice to restrict, the harmful consequences are deliberate destruction.
The reason a good citizen does not use such destructive means to become wealthier is that, if everyone did so, we would all become poorer from the mutual destructiveness.
Heavy stuff, but the GNU Manifesto is ultimately a rational document. It dissects the nature of software, the nature of programming, the great tradition of academic learning, and concludes that regardless of the monetary consequences, there are ethical and moral imperatives to freely share information that was freely shared with you. I reached a different conclusion, one which Stallman and I have often argued, which was that the freedom to use, distribute, and modify software will prevail against any model that attempts to limit that freedom. It will prevail not for ethical reasons, but for competitive, market-driven reasons.
At first I tried to make my argument the way that Stallman made his: on the merits. I would explain how freedom to share would lead to greater innovation at lower cost, greater economies of scale through more open standards, etc., and people would universally respond "It's a great idea, but it will never work, because nobody is going to pay money for free software." After two years of polishing my rhetoric, refining my arguments, and delivering my messages to people who paid for me to fly all over the world, I never got farther than "It's a great idea, but . . .," when I had my second insight: if everybody thinks it's a great idea, it probably is, and if nobody thinks it will work, I'll have no competition!
-F = -ma
Isaac Newton
You'll never see a physics textbook introduce Newton's law in this way, but mathematically speaking, it is just as valid as "F = ma". The point of this observation is that if you are careful about what assumptions you turn upside down, you can maintain the validity of your equations, though your result may look surprising. I believed that the model of providing commercial support for open-source software was something that looked impossible because people were so excited about the minus signs that they forgot to count and cancel them.
An invasion of armies can be resisted, but not an idea whose time has come.
Victor Hugo
There was one final (and deeply hypothetical) question I had to answer before I was ready to drop out of the Ph.D. program at Stanford and start a company. Suppose that instead of being nearly broke, I had enough money to buy out any proprietary technology for the purposes of creating a business around that technology. I thought about Sun's technology. I thought about Digital's technology. I thought about other technology that I knew about. How long did I think I could make that business successful before somebody else who built their business around GNU would wipe me out? Would I even be able to recover my initial investment? When I realized how unattractive the position to compete with open-source software was, I knew it was an idea whose time had come.
The difference between theory and practice tends to be very small in theory, but in practice it is very large indeed.
Anonymous
In this section, I will detail the theory behind the Open Source business model, and ways in which we attempted to make this theory practical.
Structured Parallel Programming: Patterns for Efficient Computation by Michael McCool, James Reinders, and Arch D. Robison
It introduces some important parallel patterns, explains what makes them tick, and shows how to make them efficient. The book makes a strong case for using parallel frameworks, namely TBB and Cilk+, to create general and portable solutions.
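The book's examples are C++ with TBB and Cilk+; purely to illustrate the simplest of those patterns (map, followed by a reduce), a hedged Python sketch of the framework-managed style might be:

```python
from multiprocessing import Pool

def f(x):
    return x * x          # an independent, side-effect-free task: the "map" pattern

if __name__ == "__main__":
    with Pool() as pool:                      # the framework manages the workers
        results = pool.map(f, range(1000))    # data-parallel map over the input
    print(sum(results))                       # a serial reduce over the mapped results
```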
I think it's well intentioned and the author seems like someone who has put a lot of work into something they think is important, but it contains some terrible technical misunderstandings, and that's a real shame as I cringe thinking about a developer without a CS degree who's read this book to make themselves more confident in interviews repeating some of the stuff in it.
It's very rare for me to read a technical book cover to cover. I recently read IPv6 FUNDAMENTALS by Graziani (Cisco), but that was more an act of desperation. The best CS-related book I've ever read was probably MEASURING AND MANAGING PERFORMANCE IN ORGANIZATIONS by Robert Austin. It completely changed the way I see the world, both professional and personally. Austin uses agency theory, an application of game theory, to show how incentives in an information economy drive dysfunction into organizations because not all metrics of behavior can be adequately or economically measured. This is extremely applicable to, for example, incentive programs in high tech organizations. At the time he wrote it, Austin was working on his Ph.D. in (I dimly recall) operations research at Carnegie Mellon while working as an executive in IT for Ford Motor Company Europe. Now he's on the tenured faculty at Harvard Business School.
C# in Depth by Jon Skeet. This could easily be one of the best computer-related books I've ever read. The amazing thing is that it shows with code examples how the language progressed from one version to the next. I've never read anything similar in computer literature. It gave me a deep understanding of the language and all its caveats.
John Day's Patterns in Network Architecture is a highly opinionated book (by someone who was around for the development of the Internet and OSI standards), and one that provided a radically new perspective on networking and network protocols that I have found useful over the past year.
Haven't finished it yet, but it's a joy to read and explains fundamental concepts of things like parsers, interpreters, and the lambda calculus using minimal Ruby syntax.
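The lambda calculus chapters translate nicely to any language with closures. As a taste (the book does this in Ruby; this Python sketch is just to give the flavor), here are Church numerals built from nothing but lambdas:

```python
# Church numerals: a number n is "apply f n times".
ZERO = lambda f: lambda x: x
ONE  = lambda f: lambda x: f(x)
TWO  = lambda f: lambda x: f(f(x))
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Convert a Church numeral to a Python int by counting applications of f.
    return n(lambda k: k + 1)(0)

print(to_int(ADD(TWO)(SUCC(ONE))))  # 2 + 2 = 4
```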
> This book will help you navigate the diverse and fast-changing landscape of technologies for storing and processing data. We compare a broad variety of tools and approaches, so that you can see the strengths and weaknesses of each, and decide what’s best for your application.
Designing Data-Intensive Applications by Martin Kleppmann. He does a great job of distilling storage systems to concepts and discussing conceptual trade-offs instead of focusing on particular storage products. Plenty of footnotes to relevant research papers too.
This book proposes how to write C++-ish code in a mathematical way that makes all your code terse. In this talk, Sean Parent, at that time working on Adobe Photoshop, estimated that the PS codebase could be reduced from 3,000,000 LOC to 30,000 LOC (=100x!!) if they followed ideas from the book https://www.youtube.com/watch?v=4moyKUHApq4&t=39m30s
Another point of his is that the explosion of written code we are seeing isn't sustainable and that so much of this code consists of algorithms or data structures with overlapping functionality. As the codebases grow, and these functionalities diverge even further, pulling the reins in on the chaos becomes gradually impossible.
Bjarne Stroustrup (aka the C++ OG) gave this book five stars on Amazon (in what is his one and only Amazon product review lol).
This style might become dominant because it's only really possible in modern successors of C++ such as Swift or Rust, not so much in C++ itself.
This book changed my perception of creativity, aesthetics, and mathematics and their relationships. Fundamentally, the book provides all the diverse tools to give you confidence that your graphics are mathematically sound and visually pleasing. After reading this, Tufte just doesn't cut it anymore. It's such a weird book because it talks about topics as disparate as Bayes' rule, OOP, color theory, SQL, chaotic models of time (lolwut), style-sheet language design, and a bjillion other topics, but somehow all of these are always very relevant. It's as if Bret Victor were a book: a tour de force of polymathic insanity.
The book is in full color, and it has some of the nicest-looking and most instructive graphics I've ever seen, even for things that I already understand, such as the Central Limit Theorem. It makes sense that the best graphics would be in a book written by the guy who wrote a book on how to do visualizations mathematically.
The book is also interesting if you are doing any sort of UI work, because user interfaces are definitely just a subset of graphical visualizations.
This book almost never gets mentioned but it's a superb intro to machine learning if you dig types, scalable back-ends or JVM.
It's the only ML book that I've seen that contains the word monad, so if you sometimes get a hankering for some monading (esp. in the context of ML pipelines), look no further.
Discusses the setup of actual large-scale ML pipelines using modern concurrency primitives such as actors, via the Akka framework.
# Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for Building Intelligent Systems
Not done with the book, but despite its age it's hands down the best intro to category theory if you care about it only for CS purposes, as it tries to show how to apply the concepts. Very concise (~70 pages).
# Markov Logic: An Interface Layer for Artificial Intelligence
Even though this is an Erlang book (I don't really know Erlang), 1/3 of the book is devoted to designing scalable and robust distributed systems in a general setting, which I found worth it on its own.
Practical Foundations for Programming Languages by Bob Harper is really good, plus there's a free draft of the second version on the author's site http://www.cs.cmu.edu/~rwh/pfpl.html
I always go to the book author's page first, not only to get the errata but also to discover things such as free lectures, as in the case of Skiena's Algorithm Design Manual.
As an ex-Amazon Affiliate myself, I disagree because the incentive to post those links is not aligned with the reader's expectations.
Do you enjoy viewing commercials and product placements without the proper disclaimer? Because this is exactly what this is. I surely don't appreciate hidden advertising, not because of the quality of the advertised products, but because I cannot trust such recommendations, as a salesman can say anything in order to sell his shit.
Notice how this is the biggest list of recommendations in this thread. Do you think that's because the author is very knowledgeable or is it because he has an incentive to post links?
> As an ex-Amazon Affiliate myself, I disagree because the incentive to post those links is not aligned with the reader's expectations.
Please don't project your behavior onto others. I take book recommendations seriously. I actually really enjoy it, people have told me IRL that my recommendations helped them a lot.
> Notice how this is the biggest list of recommendations in this thread.
They are all books that I've read in the last ~4 months (not all in their entirety). Just FYI, I'm not sure how much money you think I'm making off this, but for me it's mostly about the stats; I'm curious what people are interested in.
> Do you think that's because the author is very knowledgeable
I'm more than willing to discuss my knowledgeability.
> or is it because he has an incentive to post links?
It's the biggest list because due to circumstances I have the luxury of being able to read a ton. I own all the books on the list, I've read all of them and I stand by all of them and some of these are really hidden gems that more people need to know about. I've written some of the reviews before. Just FYI I've posted extensive non-affiliate amazon links before and I started doing affiliate only very recently.
Furthermore, HN repeatedly upvotes blog posts that contain affiliate links. Why is that any different?
I read Applied Microsoft .NET Framework Programming by Jeffrey Richter a few years ago and felt that that's how technical books should be written. I felt happy every time I got to read it.
What If? - by Randall Munroe. It's not a computer science book, but it has lots of great questions and answers to what-if questions. He answers with pretty pictures and funny explanations.
I learnt a lot about what I don't know: being able to do rough math off the top of my head, thinking critically and out of the box.
Also the book smells good. Most of the time I read certain books because they smell good.
I haven't re-read it in its entirety, but I always enjoy picking up Sipser's "Introduction to the Theory of Computation" and reading through one or another of the chapters and playing with things in my head or on paper.
Haha, I'm reading it now - finally, after many years - and I can absolutely relate to the feeling you describe. I decided that I won't be hard on myself for not understanding it all on the first read. Still quite enjoying it so far (1/3rd read). Additionally, I also decided that I don't have to finish it unless I feel like it (especially considering it's 800 pages).
On the plus side, it is full of pretty pictures and has a massive number of pages, which can be used to press leaves or flowers, provide heat in the winter, and serve as a door stop in the summer to encourage airflow. There really isn't a bad quality about this book. Oh, and it's super cheap on the used market.
Wolfram is known to be super arrogant. Granted, he has accomplished a lot as a businessman and is probably almost twice as intelligent as me, but there are probably more useful books out there.
I read the whole book 15 years ago. It was a waste of time and money.
The only new thing I found was a Turing Machine with (iirc) 1 less tape - the whole book could have been reduced to a 4 page paper, as far as new work is concerned.
More about programming than computer science, but it does talk a lot about and show some custom algorithms (it's more about programming-in-the-small, though it does touch on the big picture too). Also, it is old and out of print (last I checked), but I thought it was really good when I bought and read it, so I'm mentioning it:
Writing Efficient Programs
Also has a great bunch of "war stories" about performance tuning in real life, including one in which people, IIRC, improve the performance of quicksort on a supercomputer by a million times or some such, by working on tuning as well as architecture and algorithms at several levels of the stack, from the hardware on up.
When I started coding for a living in the early 1990s (in an environment where efficiency was paramount), Writing Efficient Programs was gold. I still have it, and I'm glad to see that it's still valued.
> (in an environment where efficiency was paramount), Writing Efficient Programs was gold. I still have it, and I'm glad to see that it's still valued.
Interesting to know, and agreed.
The book even has a list of rules of thumb for the various kinds of optimizations it describes, with guidelines on when each one is appropriate to use (as you would know, having read it). Great writing throughout, too. Loop unrolling (for both fixed- and variable-length loops) was one of the many cool ones.
Not a book, but it's a 70 page paper that changed (substantially) the way I thought about computing systems: Non-Abelian Anyons and Topological Quantum Computation.
The gist is a model of computation based on knotting the worldlines of a certain kind of particle (well, particle/anti-particle pairs) and measuring properties of the knots/links. It's also the theory behind Microsoft's effort to build a quantum computer.
Highly recommend at least reading the non-technical sections (i.e., everything but section 3 and appendix A).
All rights reserved. Printed in the United States of America. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise.
I see no indication that the site is affiliated with Pearson or Bob Martin or that they have permission to redistribute the PDF. It's much more likely they're hosting it without permission, and therefore illegally.
Basically, it has you work hands-on through the basics of every concept active in a modern computer, save for networks and the web.
You use software tools provided with the book to design memory, ALUs, interpreters, VMs, compilers, operating systems, and applications.
Available as a free online class at http://www.nand2tetris.org
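To give a flavor of the very first projects (a hedged Python sketch, since the course actually uses its own HDL and hardware simulator): everything starts from NAND and composes upward.

```python
# Build basic gates and an adder out of nothing but NAND.
def nand(a, b): return 0 if (a and b) else 1

def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))
def xor(a, b):   return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    return xor(a, b), and_(a, b)          # (sum, carry)

def full_adder(a, b, c):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, c)
    return s2, or_(c1, c2)                # chain 16 of these to get the 16-bit adder

print(full_adder(1, 1, 1))                # (1, 1) = binary 11
```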