
I think people are attracted more to the form of the EWDs and to their melancholic tone than to the content itself, which is confirmed by the huge popularity of the EWDs compared to "A Discipline of Programming", the technical exposition of what Dijkstra is really proposing in the EWDs.

Like almost all of Dijkstra's writing, this piece is a plea for teaching formal methods. It's nicely written, I admit, but it is also unbelievably smug and disconnected from reality. Even in physics or mathematics, which are obviously much more mathematically tractable than computer science, people do not do significant new things by using only formal methods. In fact, the technical achievement Dijkstra is best known for, Dijkstra's algorithm, was not invented using formal methods; like most discoveries, it occurred in a flash of insight after a long period of thinking, as he said in an interview. No matter how much one loves rigour, the dream of formalizing all reasoning has turned out to be a pipe-dream, as evidenced for example by the failure of the Hilbert program, and people everywhere proceed about their business as usual.

Mathematics is not, after all, just the manipulation of strings of symbols, and neither is computer science, even if that is exactly what computers do, because we humans are complex beings: largely driven by emotion, not consciously aware of all the reasoning processes going on inside us, and often reasoning in "fuzzy" ways (frequently to good effect), rather than machines executing chains of logical inferences. In fact, people are born with very different internal worlds. Some people are really good at symbol manipulation, algebra, and language; their daily thinking consists of "talking to themselves". Others naturally think by imagining pictures and having various loose sensory impressions; the great mathematician Poincaré, for example, was among the latter. Was Poincaré in need of someone to "really teach him mathematics"? Unfortunately, I think Dijkstra assumed everyone is identical to him.

Before preaching this too much, look at the practical implementations of his ideas; for example, have a look at him lecturing on a technical topic:

http://www.youtube.com/watch?v=qNCAFcAbSTg

or read "A Discipline of Programming". As nice as the EWD prose is, I honestly can't stand the man talking about anything technical for more than 5 minutes. The more concrete the EWDs get, the worse they are.

For some perspective on how science really gets done, as opposed to people's romantic images of how it is done, I recommend "The Sleepwalkers" by Arthur Koestler, about the Copernican revolution, and "The Psychology of Invention in the Mathematical Field" by Hadamard.




I got the impression that we read two completely different articles, because nowhere in it do I see EWD saying that mathematicians or scientists should think by means of formal methods... That doesn't even make sense, since formal methods are a way not to think.

What he states is that a computer is a "formal method applying machine" (a conclusion you seem to share), and that to use it competently, one must know formal methods. And, honestly, I can't imagine how one can disagree with that. (And that was the fate of EWD: he saw all the patently obvious problems of computing science that somehow everybody was ignoring, communicated them, and was called a crazy radical at first, then told "duh, of course it's right, tell me something not obvious next time" later.)


Dijkstra advocates creating programs by doing formal transformations of specifications written in mathematical logic, while computers just do boolean algebra on strings of bits, so it's not that Dijkstra advocates formal methods because they are somehow essential to how computation is done in the computer - they are not, especially not in the form Dijkstra proposed.

He is proposing formal methods as a way of thinking about developing computer programs; this is clear from reading "A Discipline of Programming". You cannot say "formal methods are a way not to think", because the theoretical existence of a sequence of transformations giving you the final program does not tell you how to find those transformations, so some thinking on the part of the programmer is still needed. Of course you can come up with a program by a way of thinking completely uninformed by the formal specification and then formalize it, but this is not what Dijkstra is speaking of in this article. What I am saying in the comment is that I do not believe many people benefit that much from a formal approach to program development, just like not that many new theorems in mathematics, outside of mathematical logic, were discovered using the tools of mathematical logic. Even the proofs themselves in mathematics aren't done in a completely formal way.


> What I am saying in the comment is that I do not believe many people benefit that much from a formal approach to program development

A lot of systems would benefit from formal (or more formal) methods. I worked in aviation software; formal methods would have saved us so much trouble, especially since the code size is often relatively small (10k-200k LOC) and, because we were dealing with other systems, the IO spec was reasonably complete. Sure, there'd be issues, but feature creep was not one of them (except on one project). Feature creep makes formalisms difficult, but only when you're looking at whole-program formalisms. So safety-critical systems certainly benefit.

What else might? How about medical systems (see Therac-25 for a faulty UI leading to deaths)? Network code, especially fundamentals like routers, switches and core services like DNS that so much depends on. Cryptographic and other security systems. Compilers, obviously; like network code, they're too ubiquitous to be left to error-prone ad hoc approaches. Anything handling money. Anything handling privacy (ensuring HIPAA compliance and whatever European privacy rights require). Software handling production automation or managing warehouses/inventory: failures there mean lost production time and wasted inventory, and there are lots of money and productivity gains to be had by having software that works.

If you accept formal methods on the small scale (essentially the promise of functional programming a la Haskell and the ML family), you can be confident in composing large systems out of smaller, formally verified systems.


> just like not that many new theorems in mathematics, outside of mathematical logic, were discovered using the tools of mathematical logic.

That is plain FUD right there.


You are confused about the kind of "formal methods" Dijkstra was proposing. The examples he gives all use "informal proofs", that is to say, proofs that are not mechanically verified.

If you create a program in such a way that the properties you claim it has cannot be proved, even informally, then you have no reason whatsoever to believe it works as advertised.
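
To make that concrete, here is a rough sketch (mine, not from the EWD or this thread) of the kind of informal proof meant here: a loop annotated with an invariant that justifies the claim about what the function computes. The function and names are invented for illustration.

    # A minimal sketch of an informal proof attached to a program:
    # a loop invariant that justifies the claim that the function
    # returns 0 + 1 + ... + n.
    def sum_upto(n: int) -> int:
        assert n >= 0                         # precondition
        total, i = 0, 0
        # Invariant: total == 0 + 1 + ... + i  and  0 <= i <= n
        while i < n:
            i += 1
            total += i
            assert total == i * (i + 1) // 2  # the invariant, made checkable
        # On exit the invariant holds and i == n, so total == 0 + 1 + ... + n
        return total

    print(sum_upto(10))  # 55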

He did also propose a research program into more formalised transformations and so on to help the programmer - which is a separate thing. But we all know about the difficulty of this and its failure to materialise in any significant way (yet) for the average programmer.

The reason that this is, or was, a tantalising prospect is that when humans construct correct programs using informal proofs, they use the same very simple abstractions over and over again. Therefore a system to automatically transform specifications into programs, or whatever, need not have troublesome properties like completeness and may, in fact, be quite tractable. It's debatable whether anyone still thinks that's likely.


The example proofs he gives in the article have nothing to do with the way he taught programming; they are merely illustrations of some of his minor points. Later in the article he says:

On the other hand, we teach a simple, clean, imperative programming language, with a skip and a multiple assignment as basic statements, with a block structure for local variables, the semicolon as operator for statement composition, a nice alternative construct, a nice repetition and, if so desired, a procedure call. To this we add a minimum of data types, say booleans, integers, characters and strings. The essential thing is that, for whatever we introduce, the corresponding semantics is defined by the proof rules that go with it.

This sounds exactly like a description of:

http://en.wikipedia.org/wiki/Predicate_transformer_semantics

And of the approach he uses in "A Discipline of Programming", so I think you are the one who is confused.
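
For readers unfamiliar with the term: predicate transformer semantics defines each statement by the weakest precondition it needs in order to establish a given postcondition; for assignment, wp("x := e", R) is simply R with x replaced by e. A rough sketch of just that rule, using sympy for the substitution (the helper name is mine, purely illustrative):

    # Sketch of the assignment rule of predicate transformer semantics:
    #   wp("x := e", R)  =  R with every free x replaced by e.
    import sympy as sp

    x = sp.Symbol('x', integer=True)

    def wp_assign(var, expr, post):
        """Weakest precondition of the assignment var := expr w.r.t. post."""
        return post.subs(var, expr)

    # wp("x := x + 1", x > 0)  is  x + 1 > 0, i.e. x >= 0 over the integers.
    print(wp_assign(x, x + 1, x > 0))   # x + 1 > 0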


As I read it, his comments about teaching were along the lines of:

1. Thinking of a computer as a machine is not helpful. He reiterates this point about "operational reasoning" in other EWDs when describing inductive thought for understanding programs rather than step-wise thinking.

2. The unsuitability of engineering metaphors such as maintenance (topping up the oil in an engine) and so on for describing tasks within computer programming.

3. Calling bugs "errors"

4. Avoiding anthropomorphising programs

etc.

The informal proof is given to argue for the "use of down to earth mathematics" as a way of thinking about programming that avoids a "tremendous waste of mental effort", which seems to be among the core points of this EWD.

The points about Leibniz's dream seem to refer to the broadening and deepening of the formal core of computer science, which was and still is a highly lively and profitable endeavour. And the argument is that making computer science students familiar with formal mathematics and logic can only help that endeavour.

The part you quoted just sounds to me like a pedagogical programming language. The main thrust seems to be ensuring students have a grounding in logic and formal math, not to have them necessarily feed all their programs into formal solvers and expect to do so all their lives. Although predicate transformer semantics was one research effort he led into exactly that.

He does touch on the presentiment of things like predicate transformer semantics in the EWD, but very obliquely, as highly speculative statements on future research. I haven't read "A Discipline of Programming" (I'd love to, but it's expensive), but it looks like a scientific publication outlining a research program rather than a student's handbook. Scientific research always proceeds through what might look like blind alleys in retrospect but which are actually important contributions.

Also, from the lectures of his that I've seen, the content is usually more along the lines of the EWDs than formal treatises on predicate transformers.

Sorry for the length of the reply, but Dijkstra's thinking on these issues, whatever you think of it, is IMO not easily dismissed.

EDIT: In fact, to me, dismissing Dijkstra's ideas on formal computer science because of predicate transformer logic is like dismissing Newton for thinking that action at a distance was an "absurdity no man could believe in." That is to say, one cannot dismiss Newton's body of work as merely a preamble to his "main work" of finding a way to disprove action at a distance.


> the dream of formalizing all reasoning has turned out to be a pipe-dream, as evidenced for example by the failure of the Hilbert program, and people everywhere proceed about their business as usual.

Ugh, that is unbelievably smug and disconnected from reality.

I could post tomes on why you are wrong. I will be kind and request you to respond to just this. :)

http://www.nytimes.com/library/cyber/week/1210math.html

Automated reasoners have been quite useful. You just have to stop being so smug and start looking.

Your complaint is like physicists ("natural philosophers"?) in Newton's time complaining that natural philosophy (physics) does not need all that weird math, and that ordinary language is enough for physics.


Do you understand the difference between "formalizing all reasoning" and just formalizing something?


>Do you understand the difference between "formalizing all reasoning" and just formalizing something?

Thanks for the condescending tone. But, you need to start somewhere. Be realistic.

Are you sitting and complaining that since physicists don't have a theory of everything already they should stop their work? Are you sitting and complaining that since we have not yet gone to Mars, NASA should be shut down? Are you also complaining that since we don't understand the genome fully, we should stop computational biology?

We start with approximations, and we need to start small, like in any science. Claiming reasoning is unformalizable is an appeal to superstition and folk science.

And I was responding to the point that formalization has zero real world impact. That is patently false and intentional middle-brow FUD. This FUD can cause real harm to real people (not just people who study formal methods.)

You still have not responded to my example of formal methods being used in the real world with real impact.


I didn't claim anywhere formal methods are worthless or have zero real world impact. I said that the dream of formalizing all reasoning turned out to be a pipe-dream, and it did, as in:

http://en.wikipedia.org/wiki/Hilbert%27s_program#G.C3.B6del....

Is that superstition and folk science? I also don't see mathematicians using mathematical logic to prove theorems in "ordinary" mathematics, hence the comment about "business as usual". I know formal methods are an active research field and are successfully used in specialized applications in CS, but the article we are commenting on claims that studying formal methods is the best way to learn programming, which is similar to saying that learning mathematical logic is the best way to learn mathematics, which is nonsense, as evidenced by the attempts to do it.


> I also don't see mathematicians using mathematical logic to prove theorems in "ordinary" mathematics, hence the comment about "business as usual".

That you don't know of it does not mean it doesn't exist.

http://www.cs.miami.edu/~tptp/OverviewOfATP.html

People are even making money using formal reasoning. So please, take the time to learn more before disparaging entire fields.

Btw, I know about the incompleteness theorems. I teach a course centered around that.

Let me ask you a related question: do you stop programming because many of the tasks associated with programming are Turing-uncomputable (checking whether a program halts, checking whether two programs compute the same function, etc.)?


Furthermore, judging by the discussion, many people here do not understand at all what Dijkstra is saying, using informal everyday interpretations for technical terms. When he says things about being "formal", it isn't just a synonym for being "systematic" or thinking harder; he means formal as in:

http://en.wikipedia.org/wiki/Formal_methods

http://en.wikipedia.org/wiki/Formalism_%28mathematics%29


I had a student of EWD as my Programming Languages instructor. Someone asked my prof, "How do you debug?" His response: "I don't. I prove that the code is correct before I type anything."


But, however much you might bristle at his tone, he has something significant to say, and he's right when he says it.

You can't take this paper (or any other) as an end-all/be-all statement isolated in a vacuum. Maybe his smug tone suggests that he might presume as much, but it's a mistake to take it as a one-size-fits-all remark.

The thing he gets right is this:

Teaching students "Computer Science" as a new discipline by showing them how to reverse engineer existing examples is a mistake.

It's a mistake in the same sense that you don't teach math by giving students a number and asking them what the problem was. That's an absurd approach to math.

96.

Okay, now give me the equation.

That's the idea he's driving at. And in many ways he's right.

Does that mean that reverse engineering is a completely invalid approach to learning? No. For some disciplines it is essential, and indeed, for certain aspects of electrical engineering, which ultimately melds with programming, one needs those sorts of strategies ready at hand.

But when it comes down to writing programs, he's cautioning that you should learn the task top-down, not by starting at the bottom and climbing up through someone else's ready-made tangle of spaghetti.

Does reverse engineering work for medicine and biology, or even chemistry? Well, what the hell kind of a choice did we have? Humanity had to bootstrap out of hunter/gatherer mode one way or another, but it doesn't mean we have to approach computer science the same way, and in fact, there's no rational reason to do so.

Read deeply enough into this message and you'll unravel the mystery of why open source development is beneficial and generous, and why proprietary closed-source products really do represent a malign animosity toward the end user, by only providing the bald number as the result and never teaching the man to fish. But hey, there's that old refrain: we all gotta eat, right? So how do we make money when we're giving everything away and not rationing a portion for ourselves?

Dijkstra comes from the era when computer scientists were a priesthood, more so than tradesmen. Computer science was esoteric, and so there was no fear of "teaching a slave to read", because there really weren't enough books to go around. Computers were high holy temples of time sharing, cloistered atop the ivory tower. But in his message here you'll gain insight into a fundamental truth: when there are billions of cycles per second acting on trillions of bytes at a time, there's no hope of brute-force reverse engineering the resulting output of the system as a post mortem. Sometimes that's what we're stuck with, but it's not the ideal, and it's an impractical approach, not unlike picking up the pieces of an air disaster on a mountain top. Code needs to be well organized and properly conceived from the outset; this is the desired approach, and a fundamental principle new students should have beaten into them, mercilessly.


I can't agree with this either. Practical engineering is not pure mathematics, and once systems get complex enough, even if they are man-made rather than given by nature, there is often no other way than to "reverse engineer", or in other words to use empirical methods, to understand some behaviour or fix a problem.

In fact, one of the most important skills for prospective computer programmers to learn is the rigorous design of experiments to confirm or refute a given thesis about some aspect of a system. When you are dealing with a virtual machine on top of an operating system, communicating over a network through TCP/IP with another virtual machine, there are millions of lines of source code involved, not to mention potential hardware issues; how can anyone be expected to understand all of it? Even if some of this complexity is accidental, as soon as we get rid of it we will just build more sophisticated systems, so people will still have to know how to deal with systems that do not fit in one's head all at once and where you can't always just reason your way out.


Whoa, you jumped to some conclusions at the end there that just don't add up. How is formalizing something counterproductive to building sophisticated systems?

The systems you mentioned were actually built in a non-formalized manner (further emphasizing the OP's point, if anything). Of course it is incredibly difficult to build even more sophisticated systems on top of these systems for that very reason. Many of them are needlessly complicated, leaky abstractions - but they persist due to industry inertia.

The methods you describe are exactly the problem causing this.


You are misunderstanding and/or misrepresenting my point, which is that once enough people, over a long enough period, build systems in a top-down, systematic manner, however well it works, it becomes impossible for a single person to understand a specific issue in a stack of such systems, especially once the real world kicks in with hardware issues, bugs, and so forth. In that case you have to resort to methods that do not require keeping all relevant aspects in your head and that do not rely on the truth of idealized assumptions. As in going from mathematics to physics, in real-world software engineering it is no longer sufficient for conclusions to follow logically from the premises, since the premises now also have to be true under real-world conditions. Having non-leaky abstractions only shifts the moment at which the shift from pure thought to empiricism has to happen.

It seems to me that you actually use the notion of being "formal" in an informal (sic) way. Remember, we are talking about Dijkstra-style correctness proofs and about developing programs via transformations of program invariants that are mechanical in nature.


I am not convinced that in order to use the methods described one would have to keep the entire "world" in their head. In fact, I think these methods would actually help manage that problem rather than exacerbate it.

In the end a software program is and always will be an approximation of the real world.


Complexity is not hopeless.

First, http://www.vpri.org/html/work/ifnct.htm and http://www.vpri.org/pdf/tr2011004_steps11.pdf (personal computing, which currently fits in about 200 million lines, or 10 thousand books, could actually fit in about 20,000 lines, or a single book).

Second, as we make more complex systems, we could just prove that they work as intended (provided the underlying hardware works as intended, which, as you pointed out, is not always the case). Okay, it's expensive to prove your software. But it's not impossible: just use proven libraries, and prove that your software works if the dependencies and the hardware work. No need to prove the whole stack.
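
As a toy illustration of what "prove your part, assuming the dependencies meet their spec" can look like in everyday code (the function, names and contracts here are invented for the sketch):

    # Toy sketch of compositional reasoning: median() is argued correct
    # under the assumption that the library sort meets its contract
    # (returns the same multiset, in non-decreasing order). We do not
    # re-prove the sort; we only rely on its stated guarantee.
    from typing import List

    def median(xs: List[float]) -> float:
        assert len(xs) > 0                      # our precondition
        ys = sorted(xs)                         # assumed contract: sorted permutation of xs
        n = len(ys)
        # Given that assumption, the middle element (or the mean of the two
        # middle elements) is the median; no reasoning about sort internals needed.
        if n % 2 == 1:
            return ys[n // 2]
        return (ys[n // 2 - 1] + ys[n // 2]) / 2

    print(median([3.0, 1.0, 2.0]))  # 2.0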

Sure, it's expensive in the short term. But think of all the effort you will no longer need to invest in debugging.

---

Of course, right now, we're deep in the poo. Hail to Worse Sells Better!


"it is also unbelievably smug and disconnected from reality." - it's funny how he addressed your objection as well


I am more than eager to see all the beautiful programs you derived using predicate transformers and to hear about your experience doing software engineering this way in teams larger than one.


I don't have such programs. But I have plenty of the contrary: either zero documentation or something that's just a notch above useless. Contempt for new programmers who happen to be ignorant of their precious code - how dare they ask questions? Isn't it obvious from the hundreds of lines of code that have no beginning and no end?

So, you got me there. I don't have such code, but the more experience I gain, the more I see that people don't care.


Can't recommend "The Sleepwalkers" enough.



