A Short Introduction to the Art of Programming (1971) [pdf] (utexas.edu)
296 points by Rescis on May 9, 2018 | 41 comments



Quite an insightful thought for 1971:

"With the advent of what is called "large scale integration" (being a term from the computer field, its acronym LSI is better known!) it seems to become technically feasible to build machines more like "clouds of arithmetic units" with information processing activities going on simultaneously all over the place, for shorter periods of time even independently of each other. Programming for such machines will pose completely different trade-off problems: one will be willing to invest in potentially useful computing activity before its actual usefulness has been established, all for the sake of speeding up the whole computation. But although I know such machines may be coming, I shall not touch these problems for the following reasons. First, as far as general purpose applications are concerned, I have my doubts about the effectiveness with which such forms of parallelism can ever be exploited. Second —and that is the most important consideration— parallel programming is an order of magnitude more difficult than sequential programming."

Also, the first mention of "cloud", although with a different meaning.


That's right; it reads more like a description of how speculative execution works in modern processors (a topic quite familiar lately thanks to Meltdown, Spectre & co.).


Except the author was assuming manual scheduling, whereas today speculative execution is mostly done transparently in hardware (barring failed general-purpose VLIW attempts).


There's no assumption of manual scheduling in that text.


It's really eye-opening to read something like this, knowing it was written in 1971:

> It is not unusual —although a mistake— to consider the programmer's task to be the production of programs. (One finds terms such as "software manufacturing", proposals to measure programmer productivity by the number of lines of code produced per month etc., although I have never seen the suggestion to measure composer productivity by the number of notes, monthly scribbled on his score!).

47 years later, a lot of people still measure programming productivity by LOCs. Hell, even major freelance marketplaces try to track your every billed second by taking screenshots of your desktop or webcam "just to make sure you're working".


> Hell, even major freelance marketplaces try to track your every billed second by taking screenshots of your desktop or webcam "just to make sure you're working".

Well that's some dystopian nightmare stuff I had no idea was happening.

EDIT: refine quote and add italics


A big part of the problem, IMO, is that the software development community has gravitated toward rate compensation instead of product compensation. A composer is commissioned to deliver a product and is compensated for that product, not for the time he/she worked to create it.

This was really eye-opening for me when I did contract/consulting work, even as a 1099. I can understand the enterprise, butts-in-seats model of developers being on retainer, but for contracting houses I don't get it.

If developers don't want to be bound by these metrics, they need to get better at pushing the client toward contracts for deliverables with clearly defined acceptance criteria. Also, it seems to me that the best way to get proficient at creating and updating these contracts is by having to work twice as long to deliver a work product as was initially estimated.


I'm not very optimistic that we'll ever be able to reach that utopian state, though. After all, the only person in the fixed-bid equation with any incentive to nail down requirements up front is the developer - it's in the best interest of the buyer to be as vague as possible so that they can defer actually making any decisions until they've had a chance to see the finished product, when the consequences of the decisions are most obvious. Then they can turn around and say "the requirements clearly state that..." - in other words, you're going to be working for free. Once you get up into even five-figure bids (never mind something that might take a year to deliver), the people writing the checks are happy to put more effort into deliberately obscuring the requirements-gathering process than into actually gathering requirements, just to keep their options open, and they have a legal department that you don't.


I hear what you're saying, but there's a whole host of other fields that seem to pull it off. Other than the fact that software is not bound by physical constraints, I really don't see the major differences.

I think we're just fighting the "always done this way" mindset.


A lot of developer contract work doesn't fit into a fixed bid model.

What if someone hires you as an ongoing general problem solver?

I've had many contracts like this, lasting 4-5 months, where they just threw problems at me and expected me to think of a solution and then implement it.

This could be anything from little scripts to make development better, ops / deployment improvements, better docs, being a 2nd set of eyes for pair programming, etc..

Something like that isn't really possible to plan out, so paying by the hour makes sense.

On the other hand, I do fixed bid pricing on a lot of things when it makes sense. I sometimes even do a hybrid approach where it starts as a fixed bid and ongoing maintenance becomes hourly.


Something I'm forced up against frequently at a company that provides fixed bids for projects and refuses to bill anything hourly: How do you determine a fixed bid for fixing or modifying an existing product you're unfamiliar with?

If a client comes in and says "one out of every 50 times you load a page in this web app, it renders a blank page, how much is it going to cost to fix?" what do you say?

Every hour you spend investigating or familiarizing yourself with the problem is basically just spec work with all the risk that entails. In many cases when you're "fixing" something, the project could be entirely spec work as you spend 8-16 hours determining that the problem is simply a misconfigured setting that will take 5 minutes to flip.

Clients very rarely (in my experience, never) specify problems to a degree sufficient to create a concrete scope of work, and they're not going to pay you hourly to do so.

If I take my car to a mechanic and say "it makes a funny rattle sound, how much will that cost to fix?" he's not going to give me a fixed bid.


The more unfamiliar it is, the more inclined I would be to use hourly billing. In your blank page example I would bill by the hour.


> track your every billed second by taking screenshots of your desktop

Wow, that would definitely trigger my neurosis into a full-blown nervous breakdown. Does reading e-mail count as "working"? Does configuring environment variables and adding printers count as "working"? Does reading documentation count as "working"? And don't you dare stop by and interrupt me with a "quick question"...


I am a freelancer but fortunately I don't use such platforms so I can't answer your questions specifically.

But there's a lot of talk about it online. Google "upwork time tracking" and similar queries and you'll find heaps of people complaining. Upwork is just one marketplace with apps like that.


> Hell, even major freelance marketplaces try to track your every billed second by taking screenshots of your desktop or webcam "just to make sure you're working".

Think of that as a very clear way to indicate that you shouldn't work for them if you have a choice. Pity those who have no other options.


For those who prefer a plain text version, start reading from [1]

[1] http://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EW...


I'm quite surprised that the original PDF wasn't handwritten, as Dijkstra really seemed keen on that (and back in the olden days, I remember some kind of PostScript module that faked this).


I read a lot of EWD texts last year; he transitioned from typed to handwritten later in his life, about halfway through the series of notes that he published.

From his Wikipedia page [1]:

> Dijkstra never wrote his articles using a computer. He preferred to rely on his typewriter and later on his Montblanc pen. Dijkstra's favorite writing instrument was the Montblanc Meisterstück fountain pen.

[1]: https://en.wikipedia.org/wiki/Edsger_W._Dijkstra


I think you’re thinking of the “Dijkstra” TrueType font (available from zillions of places; I remember there being different versions, so it's probably worthwhile to download a few and pick the largest, or better, to inspect them for their glyphs).


I definitely remember it being in PostScript (back in the days when that was much more prevalent in Linux and I actually used jwz' scripts to print CD cases).

It might be this http://ftp.math.utah.edu/u/ma/hohn/linux/misc/quickscript/di...

If so, that appears to be a regular vector font, too, no random deviations or letter variants to make it appear more handwritten.


>The market is already so heavily overloaded with introductory texts on computers and computer programming that one must have rather specific reasons to justify the investment of one's time and energy in the writing of yet another "Short Introduction to the Art of Programming".

In 1971? I seriously doubt that...

>[people] who identify the programmer's competence with a thorough knowledge of the idiosyncrasies of one or more of the baroque tools into which modern programming languages and systems have degenerated

This, however, is as valid as ever...


In 1971 there were already 15 years of FORTRAN books, and 10 years of ALGOL and COBOL books.


All 100 of them?

Today we have more books for a single language, like Python or Ruby, than all those FORTRAN, ALGOL, and COBOL books combined.


Landin’s seminal paper “The Next 700 Programming Languages” was published in 1965, you know. If anything, there was probably more variety in the 70s computing scene (books, languages, and so on) than there is today.


People in the field in 1971 may have read more books or papers titled "Introduction to Programming" than people do today, when they read "Language X for Dummies" instead.


He should have written 'justify the investment of my time and energy' to invalidate your view from the future.


How does that invalidate his view? Struggling to see how that'd change anything.


"Justify his time" is his personal viewpoint, and he can say with some authority how many books are too many to justify his writing a new one. "Justify one's time" means that no author of programming books would find their time justified, considering the number of books already written.

"Justify my time" would be arrogant and, IMO, in keeping with the arrogance he often displayed, while "justify one's time" is a judgment that depends on market forces and the opinions of others willing to write programming books.


Not directly related, but here’s a short documentary on his life and career (Dutch spoken, English subtitles): https://m.youtube.com/watch?v=RCCigccBzIU


Still a good read (from 1971, since optimized as an Adobe Acrobat .pdf). The Towers of Hanoi simulation section brings back memories of an old Wang 2200 BASIC machine (64K, as I remember) buried in storage at my old prep school, which we managed to bring back to life (more or less).

We decided to write a Towers of Hanoi game using its available box graphics, which could be animated but which quickly taught the maxim: draw forward, then erase behind (otherwise the imagery appears jumpy). The results were similar to the 'Towers of Hanoi' app now on Roku 4.

But how to add an animated 'solve' function? Raw move strings (for 2 discs: top small 1 -> 2, top 1 -> 3, top small 2 -> 3) ate up too much memory beyond 7 discs (2^n - 1 = 127 moves for n = 7), so systems analysis suggested: give the smallest disc a life of its own. If the total number of discs to be moved is even, script the smallest to follow the path 1 -> 2, 2 -> 3, 3 -> 1, 1 -> 2, etc. for the first and then every other move. If the total number is odd, script it on the path 1 -> 3, 3 -> 2, 2 -> 1, 1 -> 3, etc. for the first and then every other move (until you're done). Each move in between is the only other valid one: move the top disc of one of the other two columns onto the remaining column (one with no discs, or one whose top disc is larger). A sketch of the scheme in code follows below.

I seem to recall Martin Gardner had a similar solution in an old Scientific American 'Mathematical Games' article.
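
Here's a minimal Python sketch of that parity scheme (a reconstruction of my description above, not the original Wang BASIC; the function name and peg representation are just illustrative):

    def hanoi_iterative(n):
        # Pegs are numbered 1-3; discs 1 (smallest) to n start on peg 1.
        pegs = {1: list(range(n, 0, -1)), 2: [], 3: []}
        # The smallest disc cycles 1 -> 2 -> 3 for even n, 1 -> 3 -> 2 for odd n.
        cycle = [1, 2, 3] if n % 2 == 0 else [1, 3, 2]
        pos = 0  # index into `cycle` of the peg holding the smallest disc
        for move in range(1, 2 ** n):
            if move % 2 == 1:  # first and every other move: shift the smallest disc
                src = cycle[pos]
                pos = (pos + 1) % 3
                dst = cycle[pos]
            else:  # in between: the only valid move not using the smallest disc
                a, b = [p for p in (1, 2, 3) if p != cycle[pos]]
                if not pegs[a] or (pegs[b] and pegs[b][-1] < pegs[a][-1]):
                    src, dst = b, a
                else:
                    src, dst = a, b
            pegs[dst].append(pegs[src].pop())
            print(f"move {move}: peg {src} -> peg {dst}")

    hanoi_iterative(3)  # seven moves, ending with all discs on peg 3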


“The sole reason one likes to write is insufficient...”

Interesting that he did not seem to want to attract people who just enjoy programming.


That's in the context of writing the text, not of writing programs. So I'm not quite sure what your point is.


This is a smart dude; he invented one of the most popular algorithms for finding a shortest path in a graph.
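
For reference, a minimal Python sketch of that algorithm using a binary heap (the function name and graph representation here are just illustrative assumptions, not from the paper):

    import heapq

    def dijkstra(graph, source):
        # `graph` maps each node to a list of (neighbor, weight) pairs,
        # with non-negative weights.
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry; a shorter path was already found
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
    print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}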


...to say the very least! From [1],

> His fundamental contributions cover diverse areas of computing science, including compiler construction, operating systems, distributed systems, sequential and concurrent programming, programming paradigm and methodology, programming language research, program design, program development, program verification, software engineering principles, graph algorithms, and philosophical foundations of computer programming and computer science. Many of his papers are the source of new research areas. Several concepts and problems that are now standard in computer science were first identified by Dijkstra or bear names coined by him.

[1] https://en.wikipedia.org/wiki/Edsger_W._Dijkstra



Semaphores are great for creating simple read-write locks.
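
For example, the classic "first readers-writers" pattern needs only two of them: one semaphore guards the reader count, the other is held either by a writer or by the first reader in. A minimal Python sketch (the class and method names are mine, purely illustrative):

    import threading

    class ReadWriteLock:
        def __init__(self):
            self._readers = 0
            self._counter_lock = threading.Semaphore(1)  # guards self._readers
            self._write_lock = threading.Semaphore(1)    # held by a writer, or by the first reader

        def acquire_read(self):
            with self._counter_lock:
                self._readers += 1
                if self._readers == 1:   # first reader blocks writers
                    self._write_lock.acquire()

        def release_read(self):
            with self._counter_lock:
                self._readers -= 1
                if self._readers == 0:   # last reader lets writers in
                    self._write_lock.release()

        def acquire_write(self):
            self._write_lock.acquire()

        def release_write(self):
            self._write_lock.release()

Note this variant starves writers if readers keep arriving; that's the price of "simple".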


IMO that's the least spectacular of his contributions. It's not like that algorithm is particularly mind-blowing. In fact, I'd wager that any sufficiently experienced engineer with some algorithm work under their belt would arrive at the exact same thing. Actually, I bet there are engineers all over the world who invent it every year, independently, simply because they need it and they didn't go to college or know that there's a thing out there called "Commonly Known Algorithms That Have Names and Wikipedia Pages", so they just worked it out.

In fact, I'd wager someone already came up with it somewhere before Dijkstra did, put it in the code (or, well, on the punch card), and quietly went home, happy about a good day's work.

Dijkstra just was the first one to publish it :-)

His true contributions were, IMO:

    * Being the first tech blogger[0] (handwritten!)
    * Teaching people to use while/for instead of goto
    * Pioneering mathematically verified algorithm derivation
He's also the reason Dutch universities have no independent Computer Science departments. They're always paired with Mathematics, even at universities of technology, which makes no sense and which has effectively halved research funding for both disciplines for decades. He was a better programmer than politician.

[0] http://www.cs.utexas.edu/users/EWD/indexChron.html - I'm calling it a blog because both the median length and the frequency of publishing are a lot more like blogging than anything else. I'm not aware of other people doing this as much in that day and age (but I'm probably wrong)


> Being the first tech blogger (handwritten!)

It's not a web log (for which “blog” is short) if it's not on the web.


I've certainly heard the youngsters describe something as "a blog, but on paper". Soon enough, the word 'blog' will transcend its etymology.


Even smarter: "Simplicity is prerequisite for reliability." I wish more people, or "the industry", would take this advice. On the other hand, what would be left to sell if everything were simple and reliable?


The name should be Edsger.



