
A few random comments:

• Obviously, this is typeset with TeX.

• Though Knuth originally created TeX for books rather than single-page articles, it's the tool he's most familiar with, so it's unsurprising that he'd use it just to type something out. (I remember reading somewhere that Joel Spolsky, who was a PM on Excel, used Excel for everything.)

• To create the PDF, where most modern TeX users might just use pdftex, he seems to have first created a DVI file with tex (see the PDF's title “huang.dvi”), then converted it to PostScript with dvips (version 5.98, from 2009), then (perhaps on another computer?) used “Acrobat Distiller 19.0 (Macintosh)” to go from PS to PDF.
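
For the curious, here's that pipeline as shell commands, just a sketch: I'm assuming the source file was named huang.tex, and substituting the free ps2pdf (from Ghostscript) for Acrobat Distiller in the last step:

    tex huang.tex                 # plain TeX: huang.tex -> huang.dvi
    dvips huang.dvi -o huang.ps   # DVI -> PostScript
    ps2pdf huang.ps huang.pdf     # PS -> PDF (Knuth used Distiller here)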

• If you find it different from the “typical” paper typeset with LaTeX, remember that Knuth doesn't use LaTeX; this is typeset in plain TeX. :-) Unlike LaTeX which aims to be a “document preparation system” with “logical”/“structured” (“semantic”) markup rather than visual formatting, for Knuth TeX is just a tool; typically he works with pencil and paper and uses a computer/TeX only for the final typesetting, where all he needs is to control the formatting.
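
To make the contrast concrete, here's a toy sketch of my own (not from Knuth's actual file). In LaTeX you write “logical” markup and let the document class decide the formatting:

    \documentclass{article}
    \begin{document}
    \section{Introduction}
    Hello.
    \end{document}

In plain TeX you specify the formatting yourself:

    \centerline{\bf 1. Introduction}
    \smallskip
    \noindent Hello.
    \bye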

• Despite being typeset with TeX which is supposed to produce beautiful results, the document may appear very poor on your computer screen (at least it did when I first viewed it on a Linux desktop; on a Mac laptop with Retina display it looks much better though somewhat “light”). But if you zoom in quite a bit, or print it, it looks great. The reason is that Knuth uses bitmap (raster) fonts, not vector fonts like the rest of the world. Once bitten by “advances” in font technology (his original motivation to create TeX & METAFONT), he now prefers to use bitmap fonts and completely specify the appearance (when printed/viewed on a sufficiently high-resolution device anyway), rather than use vector fonts where the precise rasterization is up to the PDF viewer.
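
You can check this yourself with pdffonts (from poppler-utils): METAFONT-generated bitmap fonts are embedded in the PDF as “Type 3” fonts, whereas vector fonts show up as Type 1, TrueType, etc.

    pdffonts huang.pdf   # bitmap (PK) fonts should be listed as "Type 3"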

• An extension of the same point: everything in his workflow is optimized for print, not onscreen rendering. For instance, the PDF's title metadata is left as “huang.dvi” (no one sees it in print anyway), the text is not copyable, etc. (All these problems are fixable with TeX too these days.)
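
For instance (a sketch of one fix, not Knuth's workflow): compiling the same plain TeX source with pdftex instead of tex uses vector Type 1 fonts by default in a modern TeX distribution, which makes the text copyable, and the title metadata can be set right in the source (the values below are just illustrative):

    \pdfinfo{/Title (A proof of the sensitivity conjecture) /Author (D. E. Knuth)}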

• Note what Knuth has done here: he's taken a published paper, understood it well, thought hard about it, and come up with (what he feels is) the “best” way to present this result. This has been his primary activity all his life, with The Art of Computer Programming, etc. Every page of TAOCP is full of results from the research literature that Knuth has often understood better than even the original authors, and presented in a great and uniform style — those who say TAOCP is hard to read or boring(!) just have to compare against the original papers to understand Knuth's achievement. He's basically “digested” the entire literature, passed it through his personal interestingness filter, and presented it in an engaging style, with enthusiasm to explain and share.

> when Knuth won the Kyoto Prize after TAOCP Volume 3, there was a faculty reception at Stanford. McCarthy congratulated Knuth and said, "You must have read 500 papers before writing it." Knuth answered, "Actually, it was 5,000." Ever since, I look at TAOCP and consider that each page is the witty and insightful synthesis of ten scholarly papers, with added Knuth insights and inventions.

(https://blog.computationalcomplexity.org/2011/10/john-mccart...)

• I remember a lunchtime conversation with some colleagues at work a few years ago, where the topic of the Turing Award came up. Someone mentioned that Knuth won the Turing Award for writing (3 volumes of) TAOCP, and the other person did not find it plausible, saying something like “The Turing Award is not given for writing textbooks; it's given for doing important research...” — but in fact Knuth did receive the award for writing TAOCP. Writing and summarizing other people's work is his way of doing research, advancing the field by unifying many disparate ideas and extending them. When he invented the Knuth-Morris-Pratt algorithm, in his mind he was “merely” applying Cook's theorem on automata to a special case; when he invented LR parsing, he was “merely” summarizing various approaches he had collected for writing his book on compilers; etc. Even his recent volumes/fascicles of TAOCP are breaking new ground (e.g., simply by trying to write about Dancing Links as well as he can, he has ended up extending it to min-cost exact covers, etc.).

Sorry for long comment, got carried away :-)




Looks like no one complained about the long comment, so some further trivia I omitted mentioning:

• The problem that one cannot copy text from a PDF created via dvips and using METAFONT-generated bitmap fonts has recently been fixed — the original author of dvips, Tomas Rokicki ([1], [2]) has “come out of retirement” (as far as this program is concerned anyway) to fix this and is giving a talk about it next week at the TeX Users Group conference ([3], [4]):

> Admittedly this is a rare path these days; most people are using pdfTeX or using Type 1 fonts with dvips, but at least one prominent user continues to use bitmap fonts.

So in the future (when/if Knuth upgrades!) his PDFs too will be searchable. :-)

• In some sense, even Knuth's work on TeX and METAFONT can be seen as an extension of his drive to understand and explain (in his own unique way) others' work: at one point, suddenly having to worry about the appearance of his books, he took the time to learn intensively about typesetting and font design, then experiment and encode whatever he had learned into programs of production quality (given constraints of the time). This is in keeping with his philosophy: “Science is what we understand well enough to explain to a computer. Art is everything else we do.” and (paraphrasing from a few mentions like [5] and [6]) “The best way to understand something is to teach it to a computer”.

• Finally returning (somewhat) to the topic, and looking at the two-thirds-page proof that Knuth posted [7], one may ask: is it really any “better”, or “simpler”, than Huang's original proof [8]? After all, Huang's proof is already very short: just about a page and a half, for a major open problem of 30 years; see the recent Quanta article ([9], HN discussion [10]). And by using Cauchy's Interlace Theorem, graph terminology, and eigenvalues, it puts the theorem in context and in a setting that researchers in the field would find “natural”, whereas Knuth's proof cuts through all that and keeps only the unavoidable bare essentials. This is a valid objection; my response would be: different readers are different, and there are surely some readers to whom a proof that does not even involve eigenvalues is genuinely more accessible. A personal story: in grad school I “learned” the simplex algorithm for linear programming. Actually I never quite learned it, and couldn't answer basic questions about it. Then more recently I discovered Knuth's “literate program” implementing and explaining the simplex algorithm [11], and that one I understood much better.

> The famous simplex procedure is subtle yet not difficult to fathom, even when we are careful to avoid infinite loops. But I always tend to forget the details a short time after seeing them explained in a book. Therefore I will try here to present the algorithm in my own favorite way—which tends to be algebraic and combinatoric rather than geometric—in hopes that the ideas will then be forever memorable, at least in my own mind.

I can relate: although the simplex algorithm has an elegant geometric interpretation of what happens when it pivots, and this is the way one “ought” to think about it, somehow I am more comfortable with symbol-pushing, having an underdeveloped intuition for geometry and a better intuition for computational processes (algorithms). Reading Knuth's exposition, which may seem pointless to someone more comfortable with the geometric presentation, “clicked” for me in a way nothing had before.

This is one reason I am so fascinated by the work of Don Knuth: though I cannot hope to compare myself in ability (even his exploits as a college kid are legendary [12]), productivity, or taste, I can relate to some of his aesthetic preferences, such as for certain areas/styles of mathematics/programming over others. Being able to “relate” to someone this way gives you hope that by adopting some of the habits that worked for them (e.g., Knuth mentions somewhere that he tries to start every day by doing whichever thing he's been dreading the most), you may be able to move a few steps in somewhat the same direction. If nothing else, it puts me in mind of what Bhavabhuti said many centuries ago [13] about finding someone with the same spirit, so to speak.

[1]: https://tomas.rokicki.com
[2]: https://www.maa.org/sites/default/files/pdf/upload_library/2...
[3]: http://tug.org/tug2019/preprints/rokicki-pdfbitmap.pdf
[4]: https://github.com/rokicki/type3search/blob/a70b5f3/README.m...
[5]: https://www.maa.org/sites/default/files/pdf/upload_library/2...
[6]: https://youtu.be/eDs4mRPJonU?t=1514 (25:14 to 26:46)
[7]: https://www.cs.stanford.edu/~knuth/papers/huang.pdf
[8]: http://www.mathcs.emory.edu/~hhuan30/papers/sensitivity_1.pd...
[9]: https://www.quantamagazine.org/mathematician-solves-computer...
[10]: https://news.ycombinator.com/item?id=20531987
[11]: https://github.com/shreevatsa/knuth-literate-programs/blob/9...
[12]: http://ed-thelen.org/comp-hist/B5000-AlgolRWaychoff.html#7
[13]: https://shreevatsa.wordpress.com/2015/06/16/bhavabhuti-on-fi...


> The problem that one cannot copy text from a PDF created via dvips and using METAFONT-generated bitmap fonts has recently been fixed — the original author of dvips, Tomas Rokicki ([1], [2]) has “come out of retirement” (as far as this program is concerned anyway) to fix this and is giving a talk about it next week at the TeX Users Group conference ([3], [4])

Hope that will be filmed and put online, sounds like an intriguing talk to watch!


Unfortunately the talks at TUG were not recorded this year, but you can read the preprint (http://tug.org/tug2019/preprints/rokicki-pdfbitmap.pdf) which will probably be published in the next issue of TUGboat.


Wonderful insight into an interesting man. I’ve never understood why he’s spent so much time writing The Art of Computer Programming, but framing it as a deep inclination to explain and summarize the work of others in a beautiful way is a wonderful perspective I’d never heard before.

Very cool. Thank you.


I think Knuth is not doing this as a hobby. It's a source of income, and that is sufficient to explain why he has spent so much time on it.


In 1993, Knuth requested of Stanford University (who granted his request) that he be allowed to retire early (and take up the title of “Professor Emeritus of The Art of Computer Programming”), so that he could complete TAOCP, which he regards as his life's work: https://cs.stanford.edu/~knuth/retd.html

Needless to say, the royalties from a technical book (especially one that is never going to be assigned as a textbook to large classes of undergraduate students) are measly in comparison to the Stanford professor's salary that he gave up.

Also, if he were doing it as a source of income, he'd probably actually publish the long-awaited volumes so that they can sell and make money, rather than spend decades of full-time labour polishing each one (Vol 3 first came out in 1973; Vol 4A in 2011; Vol 4B is only about 1/3rd done), continually missing estimates, until they were at his desired level of quality.


Insisting on reducing every motivation to whether or not it makes money is such a narrow-minded display of being unable to imagine other ways of viewing the world that it's almost insulting.


Thank you for knuthing Knuth for us.


I feel like he also deserves recognition for his awesome lectures (which are generally pretty low-tech but amazingly well thought out).

E.g., all of his “Christmas tree lectures” are available on YouTube from Stanford: https://www.youtube.com/watch?v=_cR9zDlvP88&list=PLoROMvodv4...


Get carried away all you want! Favorited and bookmarked; comments like yours are the top reason for visiting HN.


Nothing to be sorry about, it was very insightful, thank you!

An extra question, because you seem knowledgeable about the topic: do you know if/where TeX is still used by itself (without LaTeX)?


Within the minority of people in the world that use TeX/LaTeX/etc., there's a tiny minority of people who use TeX without LaTeX. Most of them use ConTeXt (https://wiki.contextgarden.net/Web_resources), some use extensions to plain TeX like eplain (https://tug.org/eplain/) or opmac (http://petr.olsak.net/opmac-e.html). And some use “only” plain TeX (that's Knuth's default/minimal format on top of TeX), and try to code everything themselves. (With LuaTeX it's even possible to use “TeX without TeX”: http://wiki.luatex.org/index.php?title=TeX_without_TeX)
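
For a taste of what “coding everything yourself” looks like, here's a complete plain TeX document (a toy example of mine) that defines its own sectioning macro instead of loading any package; run “tex file.tex” to get a DVI, or “pdftex file.tex” for a PDF:

    \def\section#1{\bigskip\leftline{\bf #1}\smallskip}
    \section{Hello}
    This is plain \TeX, with no packages loaded.
    \bye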

My impression is that plain TeX is generally used by people who don't like all the LaTeX macro complexity and like to keep things simple (not easy), who like to understand better what's going on, build their own tools, etc. In return they give up the abundance of LaTeX packages and standard formatting options, its decades of accumulated code and experience, etc. If you want concrete numbers for a sense of things: on the https://tex.stackexchange.com website, there are currently 179,492 questions (https://tex.stackexchange.com/search?q=is%3Aquestion) and of those only 590 are tagged plain-tex (https://tex.stackexchange.com/search?q=is%3Aquestion+%5Bplai...). Of course these numbers shouldn't be taken too seriously, as there are various selection biases in which kinds of people use plain TeX and how many of them would ask questions on a site like that.


Interesting, it changes my understanding of TeX a bit, as it's not “only” a building block (despite being mainly used as such). Time to follow those links and do some exploring!

Thanks again for the nice answer, people sharing like you did make HN very enjoyable


Nobody uses TeX truly by itself; this document is written in “plain TeX”, which is not bare TeX but a macro package on top of it, similar in role to LaTeX though more presentational. I rarely see things written in plain TeX nowadays, or even in ConTeXt.


Circa 2009 I worked on an application that rendered maths tests for teachers using plain TeX. I'm sure there are a few other examples in the wild (my CV is in eplain TeX).


I was wondering why his books look bad on a computer screen. Thank you very much. I was reading Concrete Mathematics and kept being annoyed by the quality.


I don't think that's relevant or true of "his books" in general, as (for instance) the digital versions of TAOCP look great (and use vector fonts). Obviously the publishers use their own workflow; in that case it's not as if Knuth prepared a PDF for you to download (as he did here). Maybe the publishers just botched the electronic version of Concrete Mathematics (as the reviews suggest), or you bought a copy that's defective somehow.


I did a Knuth research binge just last week, carried along by a wave of awe and admiration, so it was good to read your comment. I knew he stuck with TeX but would have missed that .dvi filetype you mentioned.


> those who say TAOCP is hard to read or boring

I always regard TAOCP as the finest technical writing. It's a pleasure to read, and certainly the most accessible English treatment of these complicated topics.


This is incredibly interesting trivia -- thank you for the context.


Thank you for writing this out. Very interesting read! I very much agree with you that writing textbooks, digesting and refactoring other authors' work, is often as important as the original work itself. Indeed, what is the value of a good idea if you cannot explain it to anyone?


Thanks. Comments like this that go into the technical details, the history of the details and the people behind the technology are one of the main reasons I frequent hacker news.


Thanks for the trivia, although your comment on displays is a bit misleading:

> I first viewed it on a Linux desktop; on a Mac laptop with Retina display it looks much better though somewhat “light”

There's nothing special about Apple displays that makes them "retina"; it's just a (marketing) term they use for a certain pixel density at a typical viewing distance. The point I'm trying to make is that you could just as easily have had a "retina"-class display, if not a better one, on your Linux machine.


Yes, you're right that Linux vs. Mac is irrelevant (and I should have omitted mentioning the OSes); my point was just that on a typical desktop monitor (a 28-inch monitor at 2560x1440, so about 105 dpi) and at a typical zoom level, the effects of rasterization are very much apparent (and still would be on a so-called “4k” monitor), while on a typical reasonably good laptop display (221 dpi) it's only slightly better. Basically, computer screens still have much lower resolution than a typical office printer (600 dpi), let alone the high-resolution typesetters (5000+ dpi! https://news.ycombinator.com/item?id=20009875) that TeX/METAFONT were designed for.
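
(For anyone checking that arithmetic: dpi is diagonal pixels over diagonal inches, i.e. sqrt(2560^2 + 1440^2) / 28 ≈ 2937 / 28 ≈ 105.)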

One sometimes hears that screens are now high-resolution enough that the usual tricks done for font rendering (http://rastertragedy.com/) — anti-aliasing, sub-pixel rendering, etc — are no longer needed, but in fact we still have a long way to go (if we'll ever get there) and these PDFs that use bitmap fonts serve as a good test case.



