1. I can't speak for Hillel Wayne who used the word "framed", but I didn't understand his newsletter post as Bentley having "framed" Knuth -- I understood his post as pointing out that in the popular imagination/folklore, the story had mutated over the course of years from the original setting (a program that Knuth was asked to write in WEB specifically as that was the point, and a review of that program by McIlroy evangelizing the then little-known Unix philosophy) to a "framing" where two people were competing to solve the same problem with the same available resources, and one of them did it in a "worse" way. (Also left this comment on the blog post above.)
2. Here's a comment on the previous thread from someone who says they read the column when it was first published, and whose reaction, they say, was one of cringing -- so at least at that time it probably wasn't perceived that way: https://news.ycombinator.com/item?id=22418721
3. Much of the space taken by the literate program is for explaining a very interesting data structure that we could call a hash-packed trie (AFAICT, devised on that occasion for that problem -- a small twist on the packed tries used in TeX for hyphenation, and described in the thesis of Knuth's student Frank Liang). One cannot obtain this data structure by combining other programs, only by combining other ideas. (I mentioned this in the previous thread as well: https://news.ycombinator.com/item?id=22413391)
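(Not the hash-packed trie itself, but a minimal Python sketch of the underlying idea -- a trie keyed character by character, with a count stored at each word's end -- might look like this. The names here are my own, purely for illustration.)

```python
# A plain dict-based trie for word counting -- NOT Knuth's hash-packed trie,
# just a sketch of the basic structure it is a twist on.

class TrieNode:
    __slots__ = ("children", "count")
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.count = 0       # occurrences of the word ending at this node

def add_word(root: TrieNode, word: str) -> None:
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.count += 1

def top_k(root: TrieNode, k: int):
    # Walk the trie collecting (count, word) pairs, most frequent first;
    # ties are broken alphabetically.
    results = []
    def walk(node, prefix):
        if node.count:
            results.append((node.count, prefix))
        for ch, child in sorted(node.children.items()):
            walk(child, prefix + ch)
    walk(root, "")
    results.sort(key=lambda p: (-p[0], p[1]))
    return results[:k]

root = TrieNode()
for w in "the quick the lazy the dog".split():
    add_word(root, w)
print(top_k(root, 2))  # [(3, 'the'), (1, 'dog')]
```

Knuth's version packs the nodes into arrays and uses hashing to find children, which is what makes it memory-frugal; the dict-of-dicts above trades that away for brevity.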
4. So as far as evaluating literate programming goes, the real question (and the answer is not obvious to me!) is: if you're going to write a program that uses a custom data structure (like this), how should you organize that program? Should you write it as Knuth does, or as a conventional program (like I tried to do with my translation: https://codegolf.stackexchange.com/a/197870)? And as for estimating the value of a new data structure in the first place: as of now (at that question), solutions based on a trie are about 200 times faster than the shell pipeline, on a large testcase. (The hash-packed trie, which Knuth calls "slow" in his program, is not so bad either, and it does economize on memory a bit.)
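(For reference, the shell pipeline in question is McIlroy's famous six-stage one; here it is fed a sample sentence so the script is self-contained:)

```shell
# McIlroy's word-frequency pipeline: turn non-letters into newlines,
# lowercase, sort, count duplicate lines, sort by count, keep the top k.
k=10
printf 'The quick fox jumps over the lazy dog and The end\n' |
tr -cs A-Za-z '\n' |
tr A-Z a-z |
sort |
uniq -c |
sort -rn |
head -n "$k"     # McIlroy's original final stage was `sed ${1}q`
```

Six stages, each a stock tool -- which was exactly McIlroy's point about combining programs rather than ideas.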
I have my own answer for #4 (which, to me, is the only interesting question about this affair). I've actually done a fair amount of literate programming on my own, although I only have a couple of examples that one can look at these days. Here is a small library providing a fluent matcher system for Jasmine and React: https://github.com/ygt-mikekchar/react-maybe-matchers/blob/m...
You will see that I've included yet another monad tutorial :-) I don't link to this as a way of saying that I think this is a good example of LP. It's not really. I was experimenting quite a lot. However, I can tell you one thing about it: it is practically impossible to refactor.
As a result, I decided that LP is not particularly good for working on living programs. Or, at least, it is not conducive to my style of programming, which encourages refactoring. Nothing I write is "frozen". It is all in flux, and so the value of documentation is transient. Additionally, it is rare that a programmer wishes to read code from top to bottom. If they ever do, it's usually the first time they read the code. After that, they want fast access to the parts they intend to modify, and sorting the code out from the text becomes difficult. If you make a change, you also have to review all of the surrounding text to make sure that you haven't clobbered something that is referenced elsewhere. It works well for something short, but it's not great for large projects.
I still do LP style things. Here is an unfinished blog post on ideas about OO: https://github.com/ygt-mikekchar/oojs/blob/master/oojs.org However, to contrast with this, I would invite you to look at https://gitlab.com/mikekchar/testy where I put some of those ideas into action (especially see the design.md and coding_standard.md documents to show what constraints I chose in this experiment). Crucially, after this code had run its course, I'd changed a lot of my ideas and never went back to my blog post. For me, the actual code is far more instructive than the blog post ever was. Of course, I'm the author, so I understand what I was trying to say and I only need a quick peek at the code to remind me what I was thinking.
For me, that's the dilemma of LP: once you know what you want to know, the text is in the way. New people will benefit from the Gentle Introduction (sorry, couldn't resist the TeX reference...), but 99% of the time nobody will benefit from it. Is the other 1% of the time worth it? It may be, actually, but boy is it hard to convince yourself of that!
Thank you -- the voice of experience counts for a lot, and I'm glad to hear from one of the rare people who have actually tried LP seriously (I'm not one of them!). I'd like to dig deeper into your thoughts on a couple of your interesting points:
• Living programs: You mention that you find LP hard to refactor, because things written tend to feel "frozen". But writers do often mention ripping out several chapters of their books, or carrying out extensive rewrites in response to editors' feedback, etc. (Though some don't: look for the second mention of "Len Deighton" in this wonderful profile of the editor Robert Gottlieb: https://web.archive.org/web/20161227170954/http://www.thepar...) Conversely, for those of us without much writing experience, I wonder whether literate programming may train us to become better writers, in the sense that programming (which inevitably tends to require rewriting) may make us more comfortable with doing major rewrites of our work. (Or at the very least, it may train us to chunk our code with an eye to which parts are likely to change together later -- parts which may otherwise sit far apart in the code.)
• Linear reading versus fast random access to code: I think it's very much true that after (or even during!) the first reading, one wants fast access to relevant sections of code, and not to read it from top to bottom. But books are also designed for random access. (The first piece of advice here: https://www.cs.cmu.edu/~mblum/research/pdf/grad.html) Many of the contrivances of Knuth-style LP (the cross-references, the indexes, the table of contents, the list of section names at the end, for that matter even section numbers and page numbers) seem designed to facilitate this. (See the TeX program at http://texdoc.net/pkg/tex especially the ToC on the first page and the two indexes at the end; the printed book also has a mini-index on each two-page spread, which is missing here.) In fact, I'd imagine that even if all that you used LP for was to organize the code in a way that better facilitates random access (e.g. just add section names to your code blocks, or move error-handling code to a separate section to be tangled-in later) it alone may prove worth it.
• Documentation versus code: In one of your examples, you seem to be writing exposition / documenting the (user-level) purpose of the code at the same time as programming. Do you find this to be the case often? My experience with LP is mainly with attempting to read the TeX program, which on the first page says "[…] rarely attempt to explain the TeX language itself, since the reader is supposed to be familiar with The TeXbook" (the TeX manual). And for the most part, whatever text is in the program is about the code itself -- things that still matter once you already know the program. (This is in fact my struggle with it: it's not written like a novel; all the text is oriented towards details of the program code itself.) As that's a large example, pick a small one like this: https://github.com/shreevatsa/knuth-literate-programs/blob/m... -- there is an intro page about the problem and cache size etc., but most of the rest of the text seems comparable to what one might write as comments even when not doing “literate programming” as such. So the main difference LP contributes seems to be in code organization (what one might otherwise do with functions). In fact, probably most of us modern programmers wouldn't consider it the best way to organize this program, but it's interesting to consider what the author's intent may have been in organizing the code that way.
> Here is a small library providing a fluent matcher system
It doesn't seem like Literate CoffeeScript lets you reorder the code blocks which, as I understand things, is the fundamental part of Literate Programming - code follows documentation, not the other way around.
(Although, to be fair, 99% of things I've seen labelled as LP aren't either. There's only WEB, CWEB, and noweb I can think of that'd count just now.)
Yes you are absolutely correct and it's definitely a big problem from an LP perspective. Babel gives me a bit more leeway, though. But from the perspective of "is this worth it", not having it reordered actually makes it easier to work with, in my opinion. I think if you had tools that allowed you to work with the generated code and be able to jump back and forth to the sources it might be OK.
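(For readers who haven't seen tangling in action, here is a minimal noweb-style illustration of the reordering being discussed -- chunk names are hypothetical; in noweb, `<<name>>=` begins a chunk definition, `@` ends it, and `<<name>>` inside code references another chunk:)

```
The interesting case -- what to do at end of input -- can be
explained first, even though it tangles into the middle of the loop:

<<handle end of input>>=
if (c == EOF) break;
@

The root chunk then pulls it in wherever the code needs it:

<<*>>=
while (1) {
    int c = getchar();
    <<handle end of input>>
    putchar(c);
}
@
```

Literate CoffeeScript, by contrast, only interleaves prose with code in source order; there is no tangling step to reorder chunks.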