Hacker News

“The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.” - Donald Knuth, The Art of Computer Programming



Sadly, this no longer really applies. Programmers today do absolutely grotesque things that would have been unimaginable to the Knuth of that era. One look at any modern pipeline that converts to and from JSON a dozen times, sometimes even between modules inside the same process, would make Knuth take it all back.

Programmers now have done the impossible: written code so consistently bad that there are no hotspots, because the whole codebase is non-performant trash.


I was going to get a t-shirt printed with “Knuth Was Wrong”

We’re going to be in a world of hurt when the newest process node doesn’t save us.


Knuth was right! Profiling would indicate whether your JSON serialization/deserialization is actually a problem. Maybe it is. Maybe it isn't.
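In that spirit, here's a minimal sketch of what "profile first" looks like in practice: a hypothetical pipeline that round-trips the same payload through JSON many times, run under cProfile so you can see whether json actually dominates. The module structure and payload are made up for illustration.

```python
import cProfile
import io
import json
import pstats

payload = {"user": "knuth", "scores": list(range(1000))}

def pipeline(n=200):
    """Simulate n modules each serializing and re-parsing the same payload."""
    data = payload
    for _ in range(n):
        data = json.loads(json.dumps(data))
    return data

profiler = cProfile.Profile()
profiler.enable()
result = pipeline()
profiler.disable()

# Sort by cumulative time and eyeball the top entries; if json.dumps and
# json.loads aren't near the top, the round-trips aren't your problem.
stats = pstats.Stats(profiler, stream=io.StringIO())
stats.sort_stats("cumulative")

assert result == payload  # the round-trips at least preserve the data
```

Either the profiler confirms the serialization is the hotspot, in which case fixing it is no longer premature, or it doesn't, and you've saved yourself a rewrite.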


There are key parts left out of that quote that change the tone quite a bit. Here is the full one:

"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." - Donald Knuth


Knuth was writing in an era when scientific programs tended to be dominated by inner loops. There was an input part, a compute part that did some number-crunching, and an output part. Only performance in the compute part mattered.

Many programs today have no inner loop. Compilers were the first important programs that didn't. Most interactive programs have an outer loop processing events, rather than an inner compute loop.

Note that "AI" programs are much more like the scientific problems of Knuth's era. All the compute is in tight loops wrangling matrices.
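The Knuth-era shape is easy to picture: essentially all the time goes into one inner loop. A toy pure-Python matrix multiply makes the point; the innermost `j` loop is where a profiler would show nearly all the samples landing.

```python
def matmul(a, b):
    """Naive dense matrix multiply over lists of lists."""
    n, m, p = len(a), len(b), len(b[0])
    out = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]
            row = b[k]
            for j in range(p):  # the hot inner loop: O(n*m*p) iterations
                out[i][j] += aik * row[j]
    return out

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
assert matmul(a, b) == [[19.0, 22.0], [43.0, 50.0]]
```

An event-driven program has no such single loop to point a profiler at, which is what makes the 3% harder to find there.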


Well, the last time the thread thing bit me it actually wasn't premature optimization: I had written the code in a way I thought was pretty enough, we were hitting bottlenecks, and my genius brain thought, "OK, I'll make this use 20 threads and it'll go faster." It did not.


Pretty sure I’ve made exactly the same mistake. I feel like everyone who ever writes concurrent code learns that lesson at some point. It’s absolutely astonishing how much mileage one can get out of thread-per-core-fed-by-work-queues architectures.
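For the curious, a minimal sketch of that shape, a small fixed pool of workers fed by a shared queue, rather than spawning 20 threads at the bottleneck. This is a generic illustration (the `run_jobs` helper and job list are invented), using only the standard library.

```python
import os
import queue
import threading

def run_jobs(jobs, worker_count=None):
    """Run callables on a fixed pool of worker threads fed by one queue."""
    worker_count = worker_count or os.cpu_count() or 4
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            job = q.get()
            if job is None:  # poison pill: time to shut down
                return
            r = job()
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for job in jobs:
        q.put(job)
    for _ in threads:
        q.put(None)  # one poison pill per worker
    for t in threads:
        t.join()
    return results

out = run_jobs([lambda i=i: i * i for i in range(10)])
assert sorted(out) == [i * i for i in range(10)]
```

The pool size tracks the core count instead of the job count, so adding more work changes queue depth, not thread count.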


Oh, with modern architectures this sentence is so wrong. If you don't think about feeding the hardware the right way from the get-go, you will miss peak performance by a couple of orders of magnitude, period. Look at how games are developed: they don't just use OOP with virtual interfaces and neat indirections all over the place and then think, "oh, it's OK, we'll optimize after the fact."
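The layout contrast being described is roughly array-of-structures versus structure-of-arrays. A hedged Python sketch (the cache effect is muted in Python, but the shape of the two layouts is the same as in a game engine):

```python
from array import array

# Array-of-structures: one object per entity, hot fields scattered
# behind per-object pointers.
class Particle:
    def __init__(self, x, vx):
        self.x, self.vx = x, vx

aos = [Particle(float(i), 1.0) for i in range(4)]
for p in aos:
    p.x += p.vx

# Structure-of-arrays: each field is one contiguous buffer, so the
# update loop streams through memory linearly.
xs = array("d", (float(i) for i in range(4)))
vxs = array("d", [1.0] * 4)
for i in range(len(xs)):
    xs[i] += vxs[i]

assert list(xs) == [p.x for p in aos]  # same result, different layout
```

Choosing the second layout up front is exactly the kind of decision that can't be retrofitted by profiling a hotspot later, which is the commenter's point.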



