Dijkstra: On the cruelty of really teaching computer science (1988) [pdf] (utexas.edu)
111 points by jackhammer2022 on Dec 3, 2013 | 61 comments



It's interesting to see how attitudes to this EWD have changed over the years. Here are some of the previous submissions with comments:

https://news.ycombinator.com/item?id=43978 : A few comments

https://news.ycombinator.com/item?id=1666445 : Many comments

https://news.ycombinator.com/item?id=3553983 : Many comments

In the earlier threads there are two main voices: "Where can I learn more about this," and "It's useful to see the different perspective." Both say that there are many nuggets of wisdom to be mined, even when they don't agree. There are also a few who dissent and say it's irrelevant, impractical, and inapplicable.

Even in 3553983 where the top comment says it's largely irrelevant there is a useful dialogue discussing the ideas, and how they can be made to work, or at least how we can learn from them in today's context.

In contrast, in this thread so far we've mostly had comments about the hand-writing, and complete dismissal of the ideas. So far there's very little attempt to say: "OK, times have changed, but can we learn something from this?"

So I ask: can we? Are there nuggets to be mined? Or are you convinced that you really do know better and have nothing to learn from it?

I'm still learning.

========

For reference, other submissions, mostly without comments:

https://news.ycombinator.com/item?id=6701607

https://news.ycombinator.com/item?id=2122826

https://news.ycombinator.com/item?id=2090256

https://news.ycombinator.com/item?id=1989473

https://news.ycombinator.com/item?id=383210


The audience is self-selected. The ones who don't know any better see only a condescending tone instead of the material. They don't get the difference between "lines spent" and "lines produced", and they don't know that computers were used to "compute", so the formalism was absolutely necessary to trust the result.

What we write today is WoW and the like, stuff we don't need formalism to prove doesn't crash - because, you know, who cares about that?

Dijkstra comes from a time when you could reuse a piece of software and build upon it because you could understand it. Today nobody does that because the software is "disposable". Who cares about proving a piece of future garbage?


Agreed. In a way, it's like the "Blub paradox", only applied to formal methods instead of programming languages. Those who aren't familiar with them don't see the point and think it's merely "mathy" nonsense.


I think you are misreading what Stiff was saying.

People learn differently, and formalism is one approach to teaching. It has its purposes, but complex systems require more than one way of representing them. Formalism shouldn't be the only way to represent and teach the concepts of computation and CS.

You are right in saying that formalism is required to trust the result; however, I would argue that it isn't the only way. Looking at something from a strictly mathematical angle would ignore some of the more black-art components of CS and software development. You should view a system from various angles to help convince yourself of its reliability.

I think these two talks might help explain things.

http://worrydream.com/MediaForThinkingTheUnthinkable/ http://www.confreaks.com/videos/282-lsrc2010-real-software-e...


His proposal from this EWD was actually realized when people trying to reduce mathematics to mathematical logic and purely formal manipulation introduced the "New Math" movement, and it was a disaster. Even mathematicians don't typically do mathematics via purely formal methods, and software engineering is much more an ... engineering discipline. In fact, his is a philosophically very mediocre voice in a debate about formalism (the formalist stream in the philosophy of mathematics) that has been ongoing for centuries, in which people like Leibniz, Turing, Poincare, and Hilbert participated, and from that perspective there is nothing to admire about this essay, except maybe a peculiar elegance of writing style.


This EWD is mind blowing in its beauty. It's 30 pages, but 30 pages that you won't regret reading.

I love computer science, and teaching it is some of the most fun I've ever had. With a lot of practice, you can get across concepts such as computability and Turing machines to the non-mathematically initiated within the duration of a party, and it's greatly satisfying. (Yes, I am a lot of fun to be around.)

The rest of the EWD notes are highly recommended. (also it's weird to see an academic write in script). Side question: do any other HNers keep notes in written form? I still do it, using a fountain pen, and am often poked fun at by colleagues :(


Re: your side question, sure![0] I would love to use a fountain pen primarily but the nib that I want is out of my budget for now. I have a couple Muji fountain pens though which are great for the price, just far too thick for me to use regularly.

[0]: http://academic.ianclarksmith.com/Coursera/DesignArtifacts/c... (Uniball .28 and mechanical pencil on blank 4x6 card)

By now I probably have thousands of these scattered throughout books (I use them as bookmarks while taking notes) or in boxes. They're not particularly well sorted except for being vaguely chronological. I have a decent stash of notebooks as well but haven't used them since it was a requirement for school except for a single pocket notebook that gets filled roughly once a year.


I keep extensive paper notes, mostly in sequential "lab book" style, pages dated and numbered, with a different colour pen for each day of the week. I don't fuss too much over structure beyond that, and anything for wider distribution I write up on a computer.

I've seen people struggle with tablets and laptops for note taking, but I find they are some way off the free-flowing UX of pen and paper.


I use fountain pens and paper to take notes in meetings all the time. I am trying to do more "thinking on paper", and I wouldn't give up the fountain pen for anything.

Keep at it!


I use OneNote with a stylus and tablet, both for notes for myself and collaboration. It's like a portable, persistent whiteboard.

I used to mostly take notes with pens on sticky notes, but after a particularly tricky production debugging session I managed to cover most of my cubicle with about 60 hastily scribbled yellow sticky notes. I then proceeded to go on a trip for a week without cleaning up.

After returning I had to dispel a rumor that I had finally "snapped".


I sometimes like to do early stage design documents with felt-tip pens or wax crayons, because it prevents you from writing fiddly details and encourages you to sketch broad structures.

I think this is more of a designer concept but you can apply it to engineering as well.



Who is Dijkstra to pronounce on the best way of teaching mathematics to primary school children? It would be equivalent to an education researcher deciding they know the best way to design a calculus.

The manner in which Dijkstra writes about this is very off-putting - it sounds arrogant and one-sided, dismissive. In fact he comes across as ignorant, which of course, he is, in this area.

Worryingly, I see a lot of similar writing in CS blogs. Often authors present as fact their opinions on a topic that they are not knowledgeable about.

When I read something like that, I look for the balance in their argument. If they say "well, what are the good sides to teaching by analogy? When does it work best? What is the evidence that teaching basic arithmetic is done well?" then I begin to take them seriously.

[edit: typo]


Where in this essay does he "espouse about the best way of teaching mathematics to primary school children"? It seems to be about CS education, not primary school math.


Page 5.


As someone who focused on proof-based mathematics in college and is largely a self-taught programmer, I think this proposal has merit. Being forced to confront abstraction and formalism without any crutches as a freshman in college changed and expanded the way I thought. Learning to write down a correct proof without an external way of verifying it is an experience everyone should have.

However I think this needs to be coupled with an introduction to the concrete at a younger age. In mathematics we teach arithmetic to young children then step up the ladder of abstraction to high school algebra, simple proofs and linear algebra before eventually exposing them to abstract algebra (ie group theory etc). Teaching abstract algebra first would make theoretical sense but the mind tends to need to know at least one example of something before it accepts the abstraction.

I imagine a playful introduction to programming for young children coupled with a strong course in formalism for young adults could produce some great programmers. Though, as with the proof-based math series I took, I'm sure there would also be loads of freshman dropouts/transfers to more applied courses.

I also agree with others that insight rarely comes from pure formalism. The point of learning these things is to expand the way your mind works allowing you to think about abstract objects and allowing you to verify insight when it does come.

For me Charlie Parker sums up the necessity of a formal education even if it is not explicitly used in practice:

"Master your instrument, master the music & then forget all that & just play."


For me Charlie Parker sums up the necessity of a formal education even if it is not explicitly used in practice: "Master your instrument, master the music & then forget all that & just play."

How very true across so many aspects of life!

Science doesn't happen through formalism. People learn the body of material that's been proven, then they get new intuitions, and then afterwards they try to formally prove their insight. This isn't entirely inconsistent with EWD either. A lot of his point is that programmers need to know how to think symbolically, and if you truly prove the correctness of things, you can build the base of knowledge on top of it. If not, you're frequently wondering why things don't work.


A wonderful read from a badass scientist. Here is a good nugget of wisdom:

Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot".


  My point today is that, if we wish to count lines of code,
  we should not regard them as "lines produced" but as "lines
  spent"


I think people are attracted more to the form of the EWDs and to their melancholic tone than to the content itself, which is confirmed by the huge popularity of the EWDs as compared to "A Discipline of Programming", the technical exposition of what Dijkstra really is proposing in the EWDs.

Like almost all of Dijkstra's writing, this piece is a plea for teaching formal methods. It's nicely written, I admit, but it is also unbelievably smug and disconnected from reality. Even in physics or mathematics, which are obviously much more mathematically tractable than computer science, people do not do significant new things by using only formal methods. In fact, the technical achievement Dijkstra is best known for, Dijkstra's algorithm, was not invented using formal methods; like most discoveries it occurred in a flash of insight after a long period of thinking, as he said in an interview. No matter how much one loves rigour, the dream of formalizing all reasoning has turned out to be a pipe-dream, as evidenced for example by the failure of the Hilbert program, and people everywhere proceed about their business as usual.

Mathematics is not, after all, just the manipulation of strings of symbols, and neither is computer science, even if that is exactly all computers do, because we humans are complex beings: largely driven by emotions, not consciously aware of all the reasoning processes going on, and often reasoning in "fuzzy" ways (frequently with good effect), as opposed to being machines executing chains of logical inferences. In fact, people are born with very different internal worlds. Some people are really good at symbol manipulation, algebra, and language, and their daily thinking consists of "talking to themselves", while others naturally think by imagining pictures and having various loose sensory impressions; among the latter was, for example, the great mathematician Poincare. Was Poincare in need of someone to "really teach him mathematics"? Unfortunately, I think Dijkstra assumed everyone is identical to him.

Before preaching this too much, look at the practical implementations of his ideas on this, for example have a look at him lecturing on a technical topic:

http://www.youtube.com/watch?v=qNCAFcAbSTg

or read "A discipline of programming". As nice as the EWD prose is, I honestly can't stand the man talking about anything technical for more than 5 minutes. The more concrete the EWDs get the worse they are.

For some perspective on how science really gets done, as opposed to people's romantic images of how it is done, I recommend "The Sleepwalkers" by Arthur Koestler, about the Copernican revolution, and "The Psychology of Invention in the Mathematical Field" by Hadamard.


I get the impression that we read two completely different articles, because nowhere in it do I see EWD saying that mathematicians or scientists should think by means of formal methods... That doesn't even make sense, since formal methods are a way not to think.

What he states is that computers are a "formal method applying machine" (a conclusion you seem to share), and that to use one competently, one must know formal methods. And, honestly, I can't imagine how one can disagree with that. (And that was the fate of EWD: he saw all the patently obvious problems of computing science that somehow everybody was ignoring and communicated them, only to be called a crazy radical at first, and told "duh, of course it's right, tell me something not obvious next time" later.)


Dijkstra advocates creating programs by doing formal transformations of specifications written in mathematical logic, and computers just do boolean algebra on strings of bits, so it's not that Dijkstra advocates formal methods because they are somehow essential to how computation is done in the computer - they are not, especially in the form proposed by Dijkstra.

He is proposing formal methods as a way of thinking about developing computer programs; this is clear from reading "A Discipline of Programming". You cannot say "formal methods are a way not to think", because the theoretical existence of a sequence of transformations giving you the final program does not tell you how to find those transformations, so some thinking on the part of the programmer is still needed. Of course you can come up with a program by a way of thinking completely uninformed by the formal specification, and then formalize it, but that is not what Dijkstra is speaking of in this article. What I am saying in the comment is that I do not believe many people benefit that much from a formal approach to program development, just as not that many new theorems in mathematics, outside of mathematical logic, were discovered using the tools of mathematical logic. Even the proofs themselves in mathematics aren't done in a completely formal way.


> What I am saying in the comment is that I do not believe many people benefit that much from a formal approach to program development

A lot of systems would benefit from formal (or more formal) methods. I worked in aviation software; formal methods would have saved us so much trouble, especially since the code size is often relatively small (10k-200k LOC), and because we were dealing with other systems the I/O spec was reasonably complete. Sure, there'd be issues, but feature creep was not one of them (except on one project). Feature creep makes formalisms difficult, but only when you're looking at whole-program formalisms. So safety-critical systems certainly benefit.

What else might? How about medical systems (see Therac-25 for a faulty UI leading to deaths). Network code, especially fundamentals like routers, switches, and core services like DNS that so much depends on. Cryptographic and other security systems. Compilers, obviously; like network code, they're too ubiquitous to be left to error-prone ad hoc approaches. Anything handling money. Anything handling privacy (ensuring HIPAA and the various European privacy rights). Software handling production automation or managing warehouses/inventory: failures there mean lost production time and wasted inventory, so there are lots of money and productivity gains to be had by having software that works.

If you accept formal methods on the small scale (essentially the promise of functional programming a la Haskell and the ML family), you can be confident in composing large systems out of smaller, formally verified systems.


> just like not that many new theorems in mathematics, outside of mathematical logic, were discovered using the tools of mathematical logic.

That is plain FUD right there.


You are confused about the kind of "formal methods" Dijkstra was proposing. The examples he gives all use "informal proofs" - that is to say, proofs that are not mechanically verified.

If you create a program in such a way that the properties you claim it has cannot be proved, even informally, then you have no reason whatsoever to believe it works as advertised.

He did also propose a research program into more formalised transformations and so on to help the programmer - which is a separate thing. But we all know about the difficulty of this and its failure to materialise in any significant way (yet) for the average programmer.

The reason that that is, or was, a tantalising prospect is that when humans construct correct programs using informal proofs, they are using the same very simple abstractions over and over again. Therefore a system to automatically transform specifications into programs need not have all the troublesome properties of completeness and suchlike and may, in fact, be quite tractable. It's debatable whether anyone still thinks that that's likely.


The example proofs he gives in the article have nothing to do with the way he taught programming; they are merely illustrations of some of his minor points. Later in the article he says:

On the other hand, we teach a simple, clean, imperative programming language, with a skip and a multiple assignment as basic statements, with a block structure for local variables, the semicolon as operator for statement composition, a nice alternative construct, a nice repetition and, if so desired, a procedure call. To this we add a minimum of data types, say booleans, integers, characters and strings. The essential thing is that, for whatever we introduce, the corresponding semantics is defined by the proof rules that go with it.

This sounds exactly like a description of:

http://en.wikipedia.org/wiki/Predicate_transformer_semantics

And of the approach he uses in "A Discipline of Programming", so I think you are the one who is confused.
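
For anyone unfamiliar with the link: predicate transformer semantics boils down to a small calculus of "weakest preconditions". Here is a minimal sketch of the standard rules for assignment and composition, plus a worked example of my own (an illustration, not a quote from the book):

  \begin{aligned}
  wp(x := E,\; R)         &= R[x := E]                         && \text{(assignment)} \\
  wp(S_1 ; S_2,\; R)      &= wp(S_1,\; wp(S_2,\; R))           && \text{(composition)} \\
  wp(x := x + 1,\; x > 0) &= (x + 1 > 0) \;\equiv\; (x \ge 0)  && \text{(worked example)}
  \end{aligned}

Reading wp(S, R) as "the weakest condition under which executing S is guaranteed to establish R", the example says that x := x + 1 establishes x > 0 exactly when x >= 0 holds beforehand.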


As I read it, his comments about teaching were along the lines of:

1. Thinking of a computer as a machine is not helpful. He reiterates this point about "operational reasoning" in other EWDs when describing inductive thought in understanding programs rather than thinking step-wise.

2. Unsuitability of engineering metaphors / maintenance (such as topping up the oil in an engine) and so on to describe tasks within computer programming.

3. Calling bugs "errors"

4. Avoiding anthropomorphising programs

etc.

The informal proof is given to argue for the "use of down to earth mathematics" as a way of thinking about programming that avoids a "tremendous waste of mental effort", which seems to be among the core points of this EWD.

The points about Leibniz's dream seem to be referring to the broadening and deepening of the formal core of computer science, which was and still is a highly lively and profitable endeavour. And the argument is that making computer science students familiar with formal mathematics and logic can only help that endeavour.

The part you quoted just sounds to me like a pedagogical computer language. The main thrust seems to be ensuring students have a grounding in logic and formal math, not to have them necessarily feed all their programs into formal solvers and expect to do so all their lives - although predicate transformer semantics was one research effort he led into exactly that.

He does touch on the presentiment of things like predicate transformer semantics in the EWD, but very obliquely, as highly speculative statements on future research. I haven't read "A Discipline of Programming" (I'd love to, but it's expensive), but it looks like a scientific publication outlining a research program rather than a student's handbook. Scientific research always proceeds through what might look like blind alleys in retrospect but which are actually important contributions.

Also, from the lectures of his that I've seen, it seems the content is usually more along the lines of the EWDs than formal treatises on predicate transformers.

Sorry for the length of the reply, but Dijkstra's thinking on these issues, IMO, whatever you make of it, is not easily dismissed.

EDIT: In fact, to me, dismissing Dijkstra's ideas on formal comp-sci because of predicate transformer logic is like dismissing Newton for thinking that action at a distance was an "absurdity no man could believe in." That is to say, one cannot dismiss Newton's body of work as merely a preamble to his "main work" of finding a way to disprove action at a distance.


> the dream of formalizing all reasoning has turned out to be a pipe-dream, as evidenced for example by the failure of the Hilbert program, and people everywhere proceed about their business as usual.

Ugh, that is unbelievably smug and disconnected from reality.

I could post tomes on why you are wrong. I will be kind and request you to respond to just this. :)

http://www.nytimes.com/library/cyber/week/1210math.html

Automated reasoners have been quite useful. You just have to stop being so smug and start looking.

Your complaining is like the physicists ("natural philosophers"?) at Newton's time complaining that natural philosophy (physics) does not need all that weird math and normal language is enough for physics.


Do you understand the difference between "formalizing all reasoning" and just formalizing something?


>Do you understand the difference between "formalizing all reasoning" and just formalizing something?

Thanks for the condescending tone. But, you need to start somewhere. Be realistic.

Are you sitting and complaining that since physicists don't have a theory of everything already they should stop their work? Are you sitting and complaining that since we have not yet gone to Mars, NASA should be shut down? Are you also complaining that since we don't understand the genome fully, we should stop computational biology?

We start with approximations and we need to start small. Like in any science. Claiming reasoning is unformalizable is appealing to superstition and folk science.

And I was responding to the point that formalization has zero real world impact. That is patently false and intentional middle-brow FUD. This FUD can cause real harm to real people (not just people who study formal methods.)

You still have not responded to my example of formal methods being used in the real world with real impact.


I didn't claim anywhere formal methods are worthless or have zero real world impact. I said that the dream of formalizing all reasoning turned out to be a pipe-dream, and it did, as in:

http://en.wikipedia.org/wiki/Hilbert%27s_program#G.C3.B6del....

Is that superstition and folk science? I also don't see mathematicians using mathematical logic to prove theorems in "ordinary" mathematics, hence the comment about "business as usual". I know formal methods are an active research field and successfully used in specialized applications in CS, but the article we are commenting upon claims that studying formal methods is the best way to learn programming, which is similar to saying learning mathematical logic is the best way to learn mathematics - nonsense, as evidenced by the attempts to do this.


> I also don't see mathematicians using mathematical logic to prove theorems in "ordinary" mathematics, hence the comment about "business as usual".

You not knowing about it does not mean it doesn't exist.

http://www.cs.miami.edu/~tptp/OverviewOfATP.html

People are even making money using formal reasoning. So please, take time to learn more before disparaging entire fields.

Btw, I know about the incompleteness theorems. I teach a course centered around that.

Let me ask you a related question: do you stop programming because many of the tasks associated with programming are Turing-uncomputable? (checking if a program halts, checking if two programs compute the same function etc.)


Furthermore, judging by the discussion, many people here do not understand at all what Dijkstra is saying, using informal everyday interpretations for technical terms. When he says things about being "formal", it isn't just a synonym for being "systematic" or thinking harder; he means formal as in:

http://en.wikipedia.org/wiki/Formal_methods

http://en.wikipedia.org/wiki/Formalism_%28mathematics%29


I had a student of EWD as my Programming Languages instructor. Someone asked my prof, "How do you debug?" His response: "I don't. I prove that the code is correct before I type anything."
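
For a flavour of what that looks like in the small, here is a toy sketch of my own (not anything from his course): state the invariant first, then write the loop so the invariant is obviously maintained and the postcondition follows on exit.

  # Toy example: the proof obligations written down as comments/asserts.
  # Goal: sum_below(n) == 0 + 1 + ... + (n - 1), for n >= 0.
  def sum_below(n):
      total, i = 0, 0
      # Invariant: total == 0 + 1 + ... + (i - 1)  and  0 <= i <= n
      while i < n:
          total += i   # adding i re-establishes the invariant for i + 1
          i += 1
      # On exit i == n, so the invariant gives the postcondition:
      assert total == n * (n - 1) // 2
      return total

The point isn't the assert; it's that the invariant was chosen before the loop was written.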


But, however much you might bristle at his tone, he has something significant to say, and he's right when he says it.

You can't take this paper (or any other) as an end-all/be-all statement isolated in a vacuum. Maybe his smug tone suggests that he might presume as much, but it's a mistake to take it as a one-size-fits-all remark.

The thing he gets right is this:

Teaching students "Computer Science" as a new discipline by showing them how to reverse engineer existing examples is a mistake.

It's a mistake in the same sense that you don't teach math by giving students a number and asking them what the problem was. That's an absurd approach to math.

96.

Okay, now give me the equation.

That's the idea he's driving at. And in many ways he's right.

Does that mean that reverse engineering is a completely invalid approach to learning? No. For some disciplines it is essential, and indeed, for certain aspects of electrical engineering, which ultimately melds with programming, one needs those sorts of strategies ready at hand.

But when it comes down to writing programs, he's cautioning that you should learn the task as a top-down approach, not by starting at the bottom, and then climbing up through someone else's ready-made tangle of spaghetti.

Does reverse engineering work for medicine and biology, or even chemistry? Well, what the hell kind of a choice did we have? Humanity had to bootstrap out of hunter/gatherer mode one way or another, but it doesn't mean we have to approach computer science the same way, and in fact, there's no rational reason to do so.

Read deeply enough into this message, and you'll unravel the mystery of why open source development is beneficial and generous, and why proprietary closed source products really do represent a malign animosity toward the end user, by only providing the bald number as the result and never teaching the man to fish. But hey, there's that old refrain: we all gotta eat, right? So how do we make money when we're giving everything away and not rationing a portion for ourselves?

Dijkstra comes from the era when computer scientists were a priesthood, more so than tradesmen. Computer science was esoteric, and so there was no fear of "teaching a slave to read" because there really weren't enough books to go around. Computers were high holy temples of time sharing, cloistered atop the ivory tower. But in his message here you'll gain insight into the fundamental truth that when there are billions of cycles per second acting on trillions of bytes at a time, there's no hope of brute-force reverse engineering the resulting output of the system as a post mortem. Sometimes that's what we're stuck with, but it's not the ideal, and it's an impractical approach, not unlike picking up the pieces of an air disaster on a mountain top. Code needs to be well organized and properly conceived from the outset; this is the desired approach, and a fundamental principle new students should have beaten into them, mercilessly.


I can't agree with this either. Practical engineering is not pure mathematics, and once systems get complex enough, even if they are man-designed and not given from nature, there is often no other way than to use "reverse engineering" or in other words use empirical methods to understand some behaviour or fix a problem.

In fact, one of the most important skills for prospective computer programmers to learn is rigorous design of experiments to confirm or deny a given thesis about some aspect of a given system. When you are dealing with a virtual machine on top of an operating system, communicating over a network through TCP/IP with another virtual machine, there are millions of lines of source code involved, not to mention potential hardware issues - how can anyone be expected to understand all of it? Even if some of this complexity is accidental, as soon as we get rid of it we will just build more sophisticated systems, so people will still have to know how to deal with systems that do not fit in one's head all at once and where you can't always just reason your way out.


Whoa, you jumped to some conclusions at the end there that just don't add up. How is formalizing something counterproductive to building sophisticated systems?

The systems you mentioned were actually built in a non-formalized manner (further emphasizing the OP's point, if anything). Of course it is incredibly difficult to build even more sophisticated systems on top of these systems for that very reason. Many of them are needlessly complicated leaky abstractions - but persist due to industry inertia.

The methods you describe are exactly the problem causing this.


You are misunderstanding and/or misrepresenting my point, which is that once enough people over a long enough period build systems in a top-down, systematic manner, however well that works, it becomes impossible for a single person to understand a specific issue about a stack of such systems, especially once the real world kicks in with hardware issues, bugs, and so forth. In that case you have to resort to methods that do not require keeping all relevant aspects in your head and that do not rely on the truth of idealized assumptions. As in going from mathematics to physics, in real-world software engineering it is no longer sufficient for conclusions to follow logically from the premises, since the premises now also have to be true under real-world conditions. Having non-leaky abstractions only shifts the moment at which the shift from pure thought to empiricism has to happen.

It seems to me that you actually use the notion of being "formal" in an informal (sic) way. Remember we are talking about Dijkstra-style correctness proofs and developing programs via transformations of program invariants that are mechanical in nature.


I am not convinced that in order to use the methods described one would have to keep the entire "world" in their head. In fact, I think these methods would actually help manage that problem rather than exacerbate it.

In the end a software program is and always will be an approximation of the real world.


Complexity is not hopeless.

First, http://www.vpri.org/html/work/ifnct.htm and http://www.vpri.org/pdf/tr2011004_steps11.pdf (personal computing, which currently fits in about 200 million lines, or 10 thousand books, could actually fit in about 20,000 lines, or a single book).

Second, as we make more complex systems, we could just prove that they work as intended (provided the underlying hardware works as intended, which as you pointed out is not always the case). Okay, it's expensive to prove your software. But it's not impossible: just use proven libraries, and prove that your software works if the dependencies and the hardware work. No need to prove the whole stack.

Sure, it's expensive in the short term. But think of the total absence of effort you will need to invest over debugging.

---

Of course, right now, we're deep in the poo. Hail to Worse Sells Better!


"it is also unbelievably smug and disconnected from reality." - it's funny how he addressed your objection as well


I am more than eager to see all the beautiful programs you inferred using predicate transformers and hear about your experiences doing software engineering this way in teams of size more than 1.


I don't have such programs. But I have plenty of the contrary: either zero documentation or something that's just a notch above useless. Contempt for new programmers who happen to be ignorant about their precious code - and how dare they ask questions? Isn't it obvious from the hundreds of lines of code that have no beginning and no end?

So, you got me here. I don't have such code, but the more experience I get, the more I see that people don't care.


Can't recommend "The Sleepwalkers" enough.


His page numbers are zero indexed. Cute. I would have loved to take his proposal for an introduction to computer science course.


You might also be interested in CMU's introductory curriculum in the same vein which is somewhat less extreme and has a particular focus on parallelism as the standard case of computing. http://existentialtype.wordpress.com/2012/08/17/intro-curric...



However, after the list he states:

  "And with the sixth example I have reached imperceptibly but unavoidably,
  the most hairy part of this talk: Educational consequences." 
which is incorrect, as item 6 is actually the 7th example. Zero-based indexing requires that the counter be updated after appending and that the counter be referenced for the cardinal value of an item (i.e. item 6 is the 7th in the list).
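
A quick sketch of the counting point (hypothetical labels, just to illustrate):

  # With zero-based labels, the item labelled 6 is the 7th item appended.
  examples = []
  for label in range(7):      # labels 0, 1, ..., 6
      examples.append(label)
  assert examples[6] == 6 and len(examples) == 7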


I know Dijkstra was a genius and all, but his papers are just so unpleasant. First, I hate his verbose style, and second, I hate his attitude.

"The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better."

Jesus, could he be any more smug?


> Jesus, could he be any more smug?

Funny that I think that statement is extremely humble.


Indeed, the very idea that your fly-by-the-seat-of-your-pants intuition, memory, and senses cannot be trusted is the foundational insight that brought us science.


Depends on how you read it. You could mimic structures from society or nature. The isolated human mind is quite a useless thing.


How can a human write that much good prose without scribbling things out?


Presumably this is a final draft.


See page #7


Was this post inspired by this post (http://www.reddit.com/r/compsci/comments/1rxfze/if_you_liked...) on r/compsci by any chance? I didn't read much of the discussion there but the other links are interesting and things which I hadn't seen before.


I think it's the opposite, i.e. the reddit post was inspired by the paper, which I read yesterday... but I couldn't have found the link on HN, because it only appeared today, so it must mean that I originally found it on reddit. Now I'm confused...


Interestingly, I found the handwriting to be more readable than most computer fonts; that is, very visually consumable.


That was the first thing that jumped out at me. Amazing handwriting!


His handwriting looks beautiful!

I loved it and emailed Prof. Dijkstra during the brief time that he finally had email (I got a graduate degree from his department and attended some of his classes). I asked him for permission to one day create a font from his handwriting, not really expecting a response.

However, he did eventually reply, granting me the permission to do so.

I commissioned the font and it turned out nicely. It was distributed briefly and may still be out there (I see one or two copies floating around).

My original copy may be in old backups. One day I will find it and update it!

Edit: found the email reply! He retired in 11/1999 and passed away in 8/2002

  Austin, Tuesday 11 April 2000

  Dear Dr. Puchol,

        If a competent designer can use my handwriting as a source of
  inspiration, that is fine with me. Thank you for asking my permission,
  which is granted with pleasure.

        With my greetings and best wishes,
                        yours ever,          Edsger W.Dijkstra


I am glad he was able to articulate why some types of change are accepted and other changes rejected with hostility -- that some new things can be interpreted through analogies with previous experience.

It is the difference between "change" and change, "thinking" and thinking.



