On the cruelty of really teaching computing science (1988) [pdf] (gla.ac.uk)
89 points by teawhoyou on Dec 17, 2020 | 120 comments



Ultimately it seems strangely exclusionary to draw lines in the sand like this. Not everyone who learns how to program needs to (a) be God's gift to the science of computing or (b) have the same level of understanding and fluency in what they're doing.

It's like saying the only way to teach any amount of a language is to require six months of total immersion. There is value in someone taking one year of a language, learning a few phrases, and moving on. Ultimately, people have practical and immediate concerns, like getting paid and putting food on the table. We aren't all programming for the same reasons, and sometimes getting the job done is all people have the time or even the inclination to do. And we shouldn't judge them for that.

I'm not suggesting that he is entirely wrong, but reading things that sound like proselytizing just makes me sad. There is so much room to educate and to come to mutual understandings. Some people might think they've found a better route or a deeper understanding, but treating the rest like idiots is the best way to make sure they never get your message.


I like the analogy that English writers don't have to be at Shakespeare's level; even being able to write a shopping list is very useful. Hence it's a good idea to teach everyone reading and writing at school.

I think the programming equivalent of a shopping list would be something like a simple 1, 2, 3 cron script, or if-this-then-that event handlers. That only needs a very basic understanding of computing (and of how to find information effectively online), but it opens up lots of really useful automation, e.g. backing up files, sending an email (e.g. a notification/reminder to ourselves), etc. Even without some automated API to use, a sequence of 'print' and 'sleep' commands can be useful to help ourselves/others perform some common task, e.g. a recipe which tells us when to turn down the oven, when to put on the pasta, etc. Hence I think it's a good idea to have simple 'learn to code' classes in schools, while more advanced classes would be optional for those wanting to specialise down that path.
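
To make that concrete, here is a minimal sketch (mine, not the commenter's) of that "sequence of print and sleep commands" idea: a straight-line pasta-and-oven timer. The timings are made up for illustration.

  /* A recipe timer as a plain sequence of prints and sleeps. */
  #include <stdio.h>
  #include <unistd.h>   /* sleep() on POSIX systems */

  int main(void) {
      printf("Turn the oven on and put the pasta water on to boil.\n");
      sleep(10 * 60);                /* wait 10 minutes */
      printf("Add the pasta to the boiling water.\n");
      sleep(9 * 60);                 /* wait 9 minutes */
      printf("Turn the oven down and drain the pasta.\n");
      return 0;
  }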


I would go even further and say that the most important aspect of teaching programming in schools is simply exposure. Most people don't stumble upon programming in their day-to-day lives, especially now in the age of iPads and other locked-down devices, so it's important that the subset of the population that would be good programmers is given the opportunity to at least try it out early in life.


I'd argue that they do, and that will become more and more true. Things as simple as an Excel file, or writing simple email filtering rules may expose you to _some_ programming.

As a side note - I believe this is going to be more and more true, especially among knowledge workers, which is why I'm somewhat bullish on UiPath's low-code app platform [+]. As programmers we understand well the benefits (and pitfalls) of automating our menial tasks; if you can lower the bar so that most people (like assistant managers, receptionists, call center operators, etc.) can do that, you suddenly start exposing much wider audiences to programming. It's a tough problem, but I believe RPA might just be the right foundation to enable that sort of thing.

[+] Disclaimer: I work for them so maybe that's natural :)


My experience with the exposure to excel is that it isn’t used in a way that exposes all users to programming experiences of any sort.

Even now I'm staring down the barrel of a project that looks closer and closer to needing NLP simply to parse cell values and map them to positions in a layout and to character styles in a given InDesign template.

The people responsible for inputting the original data do it in what are basically paper-napkin thoughts, which have to be interpreted accurately for fear of running afoul of advertising regulations (should a value end up incorrect anywhere in the process).

So I’m only speaking anecdotally, but applications as seriously powerful and user-friendly as excel do as much to obscure as they do to expose people to programming/computer science. I mean, excel does work so well for so many—even unintended—applications that the users expect unlimited magic in all things computer and fail to understand why new limitations may be imposed when they want their workflow simplified and it can’t “just work” no matter what.

(Forgive any latent frustration making its way into that comment, the project I’m referring to has felt nothing short of Sisyphean while a new application in the space of it would be quite simple to develop)


You’re in a bubble. Most people don’t use Excel, much less write formulas, and they certainly don’t write email filters.


I don't think people will get more exposure to programming as we go on; currently we are moving away from personal computers toward dedicated app machines like smartphones, where you don't see files and such.


The best cure for optimistic thinking like this would be simply to try teaching some of that to a group that is _not_ self-selected. It becomes clear very quickly how non-trivial "trivial" concepts are.


Not only that. Most people don't even grasp what the concept of programming is.

I stumbled upon a nice way to show people what programming is the other day. However, it requires the person who asked the question to be engaged. If they only see programming as a trade of wizardry and a way to make money they may tune you out.

But my latest analogy for people who ask 'what is programming?' is to put a rock on a table and say: 'teach this rock to go to the store; write down every single step, including getting off the table and opening the door.' They may get a bit angry, and I will tell them a computer is no smarter than a rock and it will do exactly what you tell it. You need to be very explicit in what you tell it. If something goes wrong, like you end up in the bathroom instead, something is wrong in the instructions and you have to fix it; the rock can't. Now I am sure this analogy will fail on someone at some point, but it has worked on the few people I've tested it on. My job is to pretend to be very stupid and then ask 'what sort of instructions do I need?'

That is just the simple style of programming. Add in callbacks, async, events, injection models, and SQL, and people nope out.


It reminds me of how Feynman described a computer as a super-fast filing system: https://youtu.be/EKWGGDXe5MA

Programming, though, is less about precise, detailed instructions and more about gluing together mostly existing components in a manner understood by your team.


I had an elementary school teacher who gave us an exercise similar to the rock thing. She said something like "imagine I'm an alien, and tell me how to make a peanut butter and jelly sandwich". I got so caught up in sharing the special technique I had at the time (extra glob of PB in the middle after covering both sides) that I got a bit deflated when she stopped me to ask what I meant by what seemed like a basic step. (It was something like "okay, I have the bread", and I hadn't asked her to pick up the knife before applying the peanut butter.)

Oddly I don't remember the point of the exercise at all, or if we did anything building on it. We certainly didn't do any programming.


As my dad says, computers are extremely-fast idiots.


I think teaching anything to a not self-selected group is excruciatingly hard. It's the same with foreign languages, math, biology etc.

An easy way out that schools use is to teach things that can be memorized like laundry lists. Name the 5 components of the <thing>. Name <famous person>'s 3 contributions and write a sentence to each. Specify the formula for <physicist>'s law.

As you say, if you try to teach a non-self-selected general population computing, you'll see how nontrivial even the mental model of files and folders etc. can be. But these are people who successfully do complex jobs in their lives. Sometimes even some sort of STEM-related or technical/engineering job, just not computer related, like a car mechanic. I don't think it's some sort of brain-compute-power/intelligence issue. A car mechanic uses similar brain pathways to "debug" and repair a car, keeping logical dependencies in mind etc.

Perhaps it's about the transition from physical to the entirely abstract/symbolic world.

It may be about a mental defense against the low-status of nerds, as in "I'm not like them (thank God, haha, I have a life), so I can't do this...".

Again, forcing knowledge into someone's brain is extremely difficult. They need to cooperate by their own will, otherwise you just get memorized lists that are forgotten after the test.

People learn languages by immersion when they want to interact with people around them, but when people are forced to live in a country they don't like, they can go decades without properly learning the local language, despite going shopping etc. and living a normal life.

I have relatives who run to me with all sorts of IT tech support issues, pretending they just cannot solve them. But when it's actually about something they really want done, like watching a movie or an episode of their favorite TV show, they are suddenly able to figure out all the details of torrenting.

Not that this is some novel insight, but motivation is key. If you have an actual goal in mind that you really want to achieve (like watching the next episode of a show), you will push through the discomfort and uncertainty of learning how to get there. If you want to learn a language because you love a culture, or you need it to talk to clients at work to get a promotion etc., you will learn much better.

Some charismatic teachers can create motivation where there was none. This sometimes ends up as "learning to please the teacher", but sometimes a single teacher's influence sets someone on a whole career course.

Learning basic coding (perhaps not necessarily becoming professional devs) should be possible for a large part of the general population if they have direct use for it today ("I cannot do <desired activity> unless I figure this out") outside of made up tests and teacher-pleasing.


"If someone wants to learn something you cannot stop them and if someone does not want to learn something you cannot force them"


> I like the analogy that English writers don't have to be at Shakespeare's level; even being able to write a shopping list is very useful. Hence it's a good idea to teach everyone reading and writing at school.

Language "skills" are strongly social. Literary acclaim falls squarely within celebritydom. Your ability cannot be too far off from the mean or else few will understand you. Though there may be some unmeasurable internal utility as a thinking tool.

We learn to read and write to do it as well as our peers do. This covers 99% of the utility.

There may be a social argument for programming, software is eating the world and having some shared understanding may be useful. But clearly technical ability is much less socially bound. The point at which you stop getting extra utility is very far from the mean.


You want those lines, though: glue software engineers tend to get really bitter when they get quizzed on computer science stuff, so it is in their best interest to draw the line themselves rather than argue that they should have the same title.


This highlights something interesting for me: I've literally never heard anyone "arguing" that they should have the title of "software engineer" or "programmer" or "software developer". I don't know the history of it, and it could easily just be something that I haven't personally witnessed. Ultimately I fall into the camp of someone who would do fine getting quizzed on the computer science stuff, but at the same time it is ultimately about adding value (at least, when we're talking about someone paying you to work for them).

There aren't two camps: "glue software engineers" and "computer scientists". There is a spectrum of people with varying abilities in a wide array of sub-fields.


>I've literally never heard anyone "arguing" that they should have the title of "software engineer" or "programmer" or "software developer"

this kind of discussion only happens on the internet, because in reality nobody gives a *, especially since the only place that seems to distinguish those titles is the USA

coders/developers/programmers/se/programming ninjas/shamans/system necromants

all artificial titles


I used to work for a small company that developed a cellular radio network design application and also had a radio network engineering consultancy arm. We renamed the product and changed our job titles to include Engineer and Engineering because our sales people said it enabled them to charge more and sell up our services more easily, and I believe them. I was on some sales meetings to provide technical background so I could see they knew what they were doing.

It may seem silly, but words mean things and you need to clearly and effectively articulate why your product is valuable, or what value you bring as an employee. Effective communication is really important.


The entire sector is somewhat fraudulent in its nomenclature, because it’s still a pretty young field. We’re basically going through what mechanics and physics went through about 250 years ago. At that time there were no legally-defined engineers, anyone who could do reading/writing/math could develop entire factories.

At some point it will have to clean up its act. I expect it will take another 50 years or so, as employment demand stops growing and eventually falls, at which point it will make sense for incumbents to raise formal barriers around certain titles.


One of my girls is thinking of doing Computer Science, so we were checking out university courses, and there are BEng and MEng computing courses with professional accreditation here in the UK.


I do understand it from a marketing standpoint, but when we talk between people from this industry, e.g. here, it's pretty weird.


I use "nerd" as my job title.


There is far more to writing correct code than knowing the contents of undergraduate computer science inside and out; there is an entire world of software engineering where undergraduate computer science theory is damn near meaningless.

I mean, I worked at a medium-sized medical device company (10,000+ people), considered the gold standard of its industry in many respects, where you would be hard pressed to find a single person who wrote a lot of code and had anything but a Physics, EE, or CpE degree; I honestly can't remember a single person who had a CS degree, despite being on an algorithms team.

At the scientific device company I worked at, I can't even remember the algorithms team having anyone but PhDs in Physics.


I agree with the first paragraph.

The second and third paragraphs are alarming. Anti-elitism is never a good thing in actual technical fields. Of course, the lead may be from outside CS, but to say that there is not a single CS person or an expert in embedded systems and OS, who was involved in the design and testing of a pacemaker is not very confidence-building.


> Anti-elitism is never a good thing in actual technical fields

I think it's not anti-elitism but just elitism.

Outside the CS bubble, being a PhD physicist has more prestige and status than being a PhD computer scientist. A physicist is universally recognized as a scientist, nobody would ever question that. A "computer scientist" is not universally regarded as such. I'm not arguing whether it should be, but it's clearly not universally like that in people's perceptions.


> Outside the CS bubble, being a PhD physicist has more prestige and status than being a PhD computer scientist.

I disagree with this take. Just as there is more “applied” physics and “theoretical” physics there is similarly “theoretical” computer science and more “applied” computer science. The former camp are similar to the theoretical physicist as many of them are mathematicians with a computer science bent. Even within the “applied” CS camp you have experts in number theory and algebra that apply their skills to cryptography, whereas in the “theoretical” camp you may find theorist using mathematics like homotopy type theory, category theory and the like in their research. Number theory, graph theory, combinatorics, abstract algebra, category theory and the like can get really hardcore in the pure math world. Similarly, theoretical physicist also have to know a lot of mathematics.

I think your picture of what a computer scientist is and does is a bit removed from reality, and that PhDs in CS are highly respected (I didn't even mention the folks working in AI or quantum computing or the other important areas of the field).


I'm talking about the perception by general people like business managers and HR etc.

I think it's silly that there's this purity contest or practicality contest going on. I studied CS but at a more engineering focused university, so the curriculum was closer to electrical engineering than to math and physics. Many engineering profs had a weird distaste for physicists/natural scientists and vice versa. They were both "elitist" just from a different view of the world.

I like Feynman's take on this kind of bickering among fields: https://youtu.be/f61KMw5zVhg?t=137


CS is a bit weird in that, partly for historical reasons, it straddles science and engineering more than most university majors. So at some schools, it's closely tied to the math department while at others it's associated with electrical engineering. Which unsurprisingly affects the curriculum in various ways.

(And, of course, there's also the matter of how applied the curriculum is--i.e. software development vs. underlying theory.)


As it is a subset of math, CS belongs in the Math Department for better reasons than historical ones. It is too easy to mistake CS for what it is not, due to its name including "computer." As was suggested to me a long time ago, it is easier to understand what CS is if it is instead thought of as "Reckoning Science." The "computer" in Computer Science is "one who computes or reckons." What also helps conceptually is the famous analogy that a computer is to a computer scientist what a telescope is to an astronomer. Astronomy is not the science of telescopes, and likewise Computer Science is not the science of computers, nor even, as the word is most commonly understood (as "using a computer"), "computing."

Programming is a subset of CS, and Software Engineering is elevating that part of CS to match the monumental interest it attracts, to the extent of placing it fully into the Engineering Department.

Some universities have CS degrees that are merely programming degrees compared to the better CS programs, which are usually made up of an equal amount of strictly advanced mathematics courses and CS courses that may incidentally involve programming, where learning to program is merely a requisite of the course. CS relates to Physics as a well-exploited tool: a lot of physics wouldn't be possible if not for computer science.

Whatever the job title may be, programming is an occupation, and salaries can range from median to lucrative. With the most lucrative salaries, programming is often a small part of a larger solution, but a fully necessary feature of the service of solving some big problem or related group of smaller problems.

When you think of a computer scientist, you should not think of a coder; coding is merely a tool they may exploit to achieve some goal, and that goal may or may not depend on what is actually the telescope, the underlying physical computer. The really exciting rock-n'-roll computer science seems to be anything whiz-bang: graphics and games, digital audio processing, AI, etc., and whatever language skills support those, so it is easy to confuse the goal with being able to parse and generate some amount of code, rather than with what all that coding work is helping to accomplish. IMO, weather modeling (modeling anything, really), satellite tracking, and informatics are massively more interesting applications of CS. Help Desk, Desktop Support, Networking, Programming, database construction and maintenance, and the rest of IT are all narrow and rather droll practical applications of CS; entirely practical, and they can pay well, and a lot of the work is reasonably important (hospitals, air traffic control, really everything relies on computers). Web design is usually kept within a company's graphics or marketing department, yet many designers have escalated their job title to "developer."

Classifications can blur because CS is so damn useful, even without computers, but CS is not just the one thing it is most popularly applied to. It is all the things.

I think of a legitimately lettered and experienced computer scientist as the ultimate problem-solver, who only adds a computer as needed.


I appreciate the perspective and agree with a lot of what you wrote. That said, software is not some abstract thing that exists (or at least exists usefully) outside of the context of computer hardware--which is clearly in the domain of electrical engineering.

That can be less true today given the additional layers of abstraction we continue to pile on in general. But I'd still argue that computer science is closely tied to the hardware that computer science concepts as implemented in software runs on. And therefore, it can make sense to lump it in with the engineering and, specifically, electrical engineering.

(Certainly my opinion is probably flavored by the fact that, where I got a different engineering degree, electrical engineers and CS majors typically get the same Bachelor of Science in Computer Science and Engineering degree.)


I actually studied "Computer Engineering" (rather, literally "Engineering Informatics") at a technical university (whose other prominent programs included electrical, civil, mechanical, and chemical engineering), but we did learn all the algorithms and data structures, complexity theorems and proofs, and lots of math rigorously; at the same time the curriculum grew out of electrical engineering, so we were also close to the metal.

We took logic design in first year: flip-flops, half-adders, built projects with such stuff, learned about analog-digital converters, the Intel 8085 architecture. Physics, to understand electricity (Maxwell et al.) and circuits. We learned assembly, C, system programming, resource management, paging algorithms, scheduling, filesystems (following Tanenbaum); we learned some Verilog and VHDL, but also graph theory (with proofs, plus applications to VLSI routing), group theory, but also computer graphics and associated data structures, like octrees. We learned control theory, signal processing theory, and audio and image processing. But also network protocols, TCP/IP, ARP, exponential backoff, Ethernet frames, etc. Databases, normalization, etc. Compression algorithms, cryptography, the Reed-Solomon code and its use in CDs, similar codes in RAID. Public-key crypto theory with proofs, but also its use in practice in SSL. Backups, differential and incremental, practical stuff like calculating with mean time between failures, etc. Understanding hard disks, like the platters, sectors, etc., to understand delays and better seeking algorithms. But also GPU architecture, programming GPUs through shaders, breaking down math problems in a way that maps well to CUDA, etc. (before the deep learning craze, but when GPGPU was a hot new term). Java, C++, UML. Machine learning, evolutionary algorithms, agent and voting systems.

We don't have a good vocabulary. Are the above things all CS? Or some are engineering? What does informatics even mean? Certainly a lot of the above is intimately tied to computers as physical objects with timings, latencies, voltages, not just to abstract Turing machines. And I wasn't raised to be ashamed of that or to find that dirty. Nor to be bitter about having learned about red-black trees or the proof of the five color theorem or Cauchy's integral theorem or the simplex method for linear programming or linear algebra or the conjugate gradient method etc. It's possible to have the right blend of the abstract and the concrete.

And how do you learn "problem solving" if not through actually working with things like the ones I listed? Why this distancing from concrete metal-and-wires computers as opposed to the pure mathematical formalisms? Is it because it's seen as too close to blue-collar, wrenches-and-sweat-and-grease work? That the image of a scholar sitting in an armchair is nobler? You don't have to become a help desk technician or the person plugging in the Ethernet cables at data centers just because you've studied the concrete technical aspects.

It sounds as if medical practice were low-status and only biological research had prestige. Or as if practicing law were too blue-collar and only things like abstract constitutional-law theory were of high worth.


When a rather large company employs either only people with a CS degree or only people without one, a good working hypothesis is that bias is going on in hiring.


I don't have a computer science degree either. However, I do know computer science fundamentals better than most people with computer science degrees. A degree is neither evidence of skill nor evidence of the absence of skill.


In my experience, embedded or control-system software is more about the thing being controlled than anything else, so that would seem natural?


> glue software engineers

Where do you put the line between them and “purist” SWEs?

Even things like

  #include <stdio.h>
already make you a gluer.


It isn't possible to draw a hard line, no. But people tend to identify themselves as one or the other; just ask them whether the technical or the social is the most important aspect of their work. Gluing things together requires you to communicate more, like adding a library, depending on an API, or reusing code from another team. Building things yourself is just code: fewer questions, so less social work but more technical work.


From a computer science perspective, this is an interesting piece. But from a practical perspective, I doubt that the Dijkstra of 1988 had a good understanding of the sociotechnical reality of the present-day software industry. I comment (quite obviously) not to disparage him and his outstanding achievements, but rather to highlight that we cannot expect Dijkstra to have predicted the future, in particular in a realm that somewhat exceeds the domain of his expertise. For example, I think by now we have a good understanding of the relevance and importance of software maintenance. Sure, software is not "subject to wear and tear", but the world around software evolves, and a program that did a job well 10 years ago might not do the same job well today (think about security). We all understand this difference; i.e., I don't think his nit-picky attacks on software engineering metaphors are particularly useful from today's perspective.


He was an advocate of formal methods like Hoare Logic, which in principle could be used to write highly secure programs.
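
For readers unfamiliar with Hoare logic, here is a toy illustration (mine, not from the talk) of the style: the comment states the triple {P} code {Q}, and the asserts merely check the pre- and postcondition at runtime rather than proving them, which is the part a formal method would do on paper.

  #include <assert.h>

  /* { n >= 0 }   sum := 0 + 1 + ... + n   { sum == n*(n+1)/2 } */
  int triangular(int n) {
      assert(n >= 0);                     /* precondition P */
      int sum = 0;
      /* loop invariant: at the top of each iteration, sum == i*(i-1)/2 */
      for (int i = 1; i <= n; i++) {
          sum += i;
      }
      assert(sum == n * (n + 1) / 2);     /* postcondition Q */
      return sum;
  }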

It's possible though that not all aspects of security can be addressed by formal methods. There's the issue of side-channel attacks (like SPECTRE), which are not easy to model using formal methods.


> It's possible though that not all aspects of security can be addressed by formal methods.

Well, one can always devise some well-defined model in which you can prove stuff. So nothing is ever unreachable.

> There's the issue of side-channel attacks (like SPECTRE), which are not easy to model using formal methods.

SPECTRE is not an error in programs, it's a breach of contract of the CPU (what shouldn't be observable actually is). I'm pretty sure formal methods would be of great help to design clean interactions between a speculative engine and the cache hierarchy. See eg https://plv.csail.mit.edu/kami/ from umbrella project deepspec.

One very common side-channel at the program level (and probably one of the most important) is the timing side-channel (and all the derived ones: power draw, noise level, etc.). This one is "easily" solved (in the sense that it's not an open research question): have constant-time function types. It's not hard to discriminate between what's constant time and what's not: don't branch on input and execute only constant-time primitives.
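
As a concrete illustration of that "don't branch on input" rule, here is a hedged sketch of a constant-time comparison of two equal-length byte buffers: the loop always scans the full length and accumulates differences with OR, so the running time does not depend on where the buffers first differ (modulo what the compiler does with it).

  #include <stddef.h>
  #include <stdint.h>

  /* Returns 1 if the buffers are equal, 0 otherwise, without early exit. */
  int ct_equal(const uint8_t *a, const uint8_t *b, size_t len) {
      uint8_t diff = 0;
      for (size_t i = 0; i < len; i++) {
          diff |= a[i] ^ b[i];   /* no data-dependent branch inside the loop */
      }
      return diff == 0;
  }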


I think this is the difference between cs and software engineering. Two people can look at a problem (build secure programs). The cs person sees static analysis, model checking, and other mathematically principled approaches as the solution. The engineer sees a suite of human approaches (code review, testing, fuzzing, audits, etc). I think the pure cs vision is narrow and limits our ability to choose the best method to solve a problem for the minimal price. Formal methods should be like unit tests, a component of the software engineering process. But it should also be acceptable to acknowledge its role and limits, just like how we don’t rely exclusively on unit testing to deploy correct programs.


"We could, for instance, begin with cleaning up our language by no longer calling a bug a bug but by calling it an error. It is much more honest because it squarely puts the blame where it belongs."

Hewlett-Packard, in their glory days, had an internal policy to apply the term "defect" to both hardware and software. If it has defects, it may not be shipped.


Maybe it's just because I've been dealing with them on a daily basis for decades, but when I hear "bug" I don't infer any different connotation than "defect". They both sound like an error in a program created by the human(s) who made that program.

Also, I don't understand the reasoning for why we shouldn't anthropomorphize programs. I read the digression about two and a half times, but I can't connect the chessboard/domino tiling problem to a reason we shouldn't consider software's behavior.


I can't make much sense of Dijkstra's linguistic points either. But with the little I can understand, my objection boils down to the following statement:

> A programming language, with its formal syntax and with the proof rules that define its semantics, is a formal system for which program execution provides only a model. It is well-known that formal systems should be dealt with in their own right, and not in terms of a specific model. And, again, the corollary is that we should reason about programs without even mentioning their possible "behaviours".

Perhaps this is another aspect of the "SWE vs. CS" debate, but I've come to the conclusion that in SWE it makes more sense to reason about entities and how they behave (deliberate use of the word) rather than in terms of every single line of code. Said another way, anthropomorphizing programs is another means of abstraction, and SWEs and CSs alike should be comfortable going up and down the abstraction ladder as needed. Even in academia, outside your first two programming classes or so, and outside algorithms, it is rare to spend time in the realm of "formal syntax and proof rules". (Or maybe this is simply a difference between the needs of CS education today and in Dijkstra's time.)

But why should we encourage reasoning in this manner? I have many reasons, but the bluntest (yet one that holds no less water) is that I think Dijkstra's statement assumes you have access to the readable source code. That's simply not always true. So it's more useful to reason about computing units as entities with "behaviors".


In the real world, (outside of compiler development) we rarely discover arbitrary code without context. Usually, we are writing that code to fulfill some business purpose. Without being able to describe the behavior of the system, we'd never be able to design any practical software that exists outside of pure math.


The origin of the word bug is that a long time ago, an actual bug (as in the animal) caused an error. Since then errors have been called bugs, because everybody loved the story.

I have no idea whether the story is true. But in Dijkstra's time, he probably associated "bug" with the above story: the error was not a human mistake, humans were innocent, it was just bad luck that rarely happens. We are now 40 years later and the associations are completely different.


In my CS university software engineering course we defined several terms with different meanings: error, fault, defect, failure, etc. Some are about the programmer doing something wrong, some are about what the code itself contains as a result, others about the actual event when the problem gets triggered, etc. I forget which was which. Something like: the error is what the programmer makes, the failure is what happens when it gets triggered in use, and the defect is the thing in the code (colloquially: the bug), if I remember correctly. No idea what "fault" was.


I’m not sure if there are standard meanings for these words, but ISO26262 defines Fault, Error, and Failure in specific ways that are useful to think about but I always struggle to keep straight. Well mainly I swap fault and failure if I’m not careful.

But in this case, none of these are design or implementation mistakes. We just call those bugs, though maybe there is an official term like defect for them.


He dislikes anthropomorphism because there is no real-world equivalent of logic, and he believes that we should attack the logic problem of software head-on. Creating a metaphor is great for abstracting what logic is actually doing, but it discourages focusing on the actual logic.

Basically, the metaphor is an opiate. It feels good to think about, because it turns an intractable problem into one that is intuitive. But it doesn't actually solve the problem; only considering the logic does.


We don't need yet another step on the euphemism treadmill. Say "defect" long enough and eventually it represents the same idea as "bug". I have an instant negative reaction to the idea that "if it has defects, it may not be shipped". Any moderately complex piece of software is guaranteed to have bugs. The action space of, e.g., a modern OS is way too big to test every possible code path. I'd rather have an OS with a few bugs than no OS at all. Move fast and break things (unless you're writing code for a space shuttle).


Do you realize that you're squarely in what Dijkstra criticizes? Everything you say is countered in the talk.

> [..] way too big to test every possible code path

He disregards tests. Most are futile. Maybe automated testing could make some sense (fuzzing, quickcheck, etc), but still, most are stupidly weak and offer no guarantee whatsoever (even quantitatively).

> I'd rather have an OS with a few bugs than no OS at all.

He specifically argues for error instead of bug so that we can't say it's "almost correct" but only that it is "wrong". "a few bugs" is just "wrong".


> I have an instant negative reaction to the idea that "if it has defects, it may not be shipped ". Any moderately complex piece of software is guaranteed to have bugs.

Yes, but if there is a known defect in your software, do you release or not?

To me, there is a difference between knowingly releasing faulty software or only after release discovering faults in the software. Unless there is no reasonable quality check in the software development and release procedures, because then you are not doing much better than knowingly releasing faulty software.


There are known defects in civil engineering projects all the time. We design with tolerances to defects such that the entire system stands up. Real software is the same. Very very very few programs have ever been shipped without any defects.


> We don't need yet another step on the euphemism treadmill. Say "defect" long enough and eventually it represents the same idea as "bug".

The original point was that "bug" is a euphemism for "error". In 99.99% of cases, errors are not caused by insects.

Dijkstra basically wanted to take a step back on that treadmill and use the proper word.


I do this in my own projects. Specifically, I have them defined as:

"Defects" are a deficiency in the project. They come about through a failure of process. We have to fix them, and fix the failure of process that resulted in them.

"Bugs" are anomalous behavior that get in from outside. A defect in a dependency or external service can cause a bug in my project. We have to fix them, which may require fixing the dependency, working around the bug, or ejecting the dependency completely (I like to call this "deprecated due to infestation").

"Glitches" are anomalous behavior outside of the specification of the system. Example: a user whose cable splitter outside is dangling from a single strand of wire, preventing them from having a reliable enough network connection to use our telecomms system. We might not be able to do anything about glitches other than improve user training.

And finally, "User Error" is a myth.

I find being able to evaluate and define issues by these criteria really helpful in figuring out the best course of action to take with them. For example, when I realized I was spending the majority of my time working around Bugs caused by Defects in Unity3D, I knew it was time to eject Unity3D. It's kind of like Unity3D had bedbugs, and those bedbugs gave us unsightly sores. It was better to burn the bed completely than to be constantly applying bandaids to the sores.


Same thing but in his own handwriting:

https://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1036.PDF

I know it’s silly, but it just made him seem more human to me.


Off-topic:

Damn, how do people get such good handwriting? It looks typed at a first glance. Wow.


Dijkstra is infamous for his focus on handwriting. A student once told a story that during a 1-on-1 oral exam, Dijkstra spent half the time having the student practice their handwriting.


Practice :)


That, and a deliberate choice to improve readability: hence writing in separate script letters instead of cursive, even though cursive is more natural and faster to write by hand.


There is probably a significant genetic component to it. Some people have better eye-hand coordination and better hand stability.


Or, it could be that he comes from an age where you would write, rather than type, everything.


Probably helps, but for comparison, here's my dad's handwriting (born in the 1940s): https://i.imgur.com/svu8orT.png


German handwriting evolved several times in the 1900s. That most likely affected legibility, as different generations were trained to form each letter differently.

As an aside, reading recipes from that era is nearly impossible for me as a foreigner.

Folks can learn more about the evolution by picking up the trail here: https://en.m.wikipedia.org/wiki/Kurrent


It's also mostly impossible for Germans (that's why there's a whole subreddit of people that are capable of parsing Sütterlin)!

You might be able to recognize some letters in the excerpt (u/n/w especially) from Sütterlin, but lots of others (especially the upper case ones) are more modern.


I was born in the 80s. I spent two decades writing for hours everyday. So did my peers. Some had good writing, some had bad, despite many of the bad trying to write better.


I took a few drafting classes in highschool. My handwriting improved tremendously. They spent several weeks on just how to hold the pencil correctly and how to have the right touch on the paper for even lines. They also spent a lot of time on how to be deliberate about what was written. Because back then getting an error into a blueprint was extremely time consuming and a pain to repair just in the paperwork world never mind the physical. Never did that job. But it sure cleaned up my handwriting.

Just for fun a few months ago I thought back on how my signature has changed over the years. I drew them all down on a piece of paper. My wife thought I was nuts for having no less than 7 different signatures. My current one sort of resembles letters and scribbles.


I never had good handwriting; it was always my worst grade in elementary school. But I used to at least be able to write legibly given that I did have years of Palmer script. These days I really can't write in any consistent or truly readable way any longer.


What I find really impressive, almost galling, is the lack of corrections. Thirty pages and (I counted) only one scratch-out!


I practiced and practiced as a kid


I agree. This should be the main url.

I find it ironic that a hand written talk (by Dijkstra no less!) begins:

> The second part of this talk pursues some of the scientific and educational consequences of the assumption that computers represent a radical novelty.

Doubly so given that the OP pdf was clearly transcribed and then converted to a pdf, when the original is so much clearer to read.


I have very mixed feelings about EWD. On the one hand, his work on concurrency and formal methods has had a profound impact on computer science, and we owe him a huge debt.

On the other, I was present at an IFIP talk he gave in Toronto in 1976 where he announced that personal computing would go nowhere, because the average person cannot program.


https://news.ycombinator.com/item?id=11796926

> "I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras." - Alan Kay

With the followup post from Alan Kay himself.


For the average person, the computer is still not the bicycle of the mind. The computer is more akin to public transport of the mind, in the sense that it works OK as long as you want to go to a common destination, but after that you just have to walk.


Even today, computers are useful to average people only because there is software available that solves a wide range of problems, combined with the fact that there is really only a handful of problems that people want computers to solve for them most of the time. Categories like communications, document and media creation/editing, data storage, and calculation.

But still, let's say there is a problem someone wants to solve, and it really doesn't take much programming to solve it. How do regular people who don't have access to a programmer use a computer to handle a novel problem?

To give an example -- there is a table game that is popular at Cracker Barrel restaurants that consists of a triangular board with 15 holes and 14 golf tees. The object is to jump over pegs and remove them, and keep going until no more moves are available. The goal is to get down to 1 peg remaining. Now how would you use a computer to solve that without writing a program? (This was back in my college days; I recall writing a brute-force program that tried all combinations and spat out the winning ones -- it turned out there were something like 15,000 winning strategies. A sketch of that kind of search appears below.)

Edit -- a quick search shows that this has become a popular programming challenge, so there do exist programs people can download, but there are other similar variations that someone may want to solve.

Edit2 -- apparently the generalized version of this concept is called Peg Solitaire https://en.wikipedia.org/wiki/Peg_solitaire
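
For the curious, here is a hedged reconstruction (not the original poster's program) of that kind of brute-force search for the 15-hole triangular board. Holes are numbered 0..14 top to bottom, left to right; each entry of jumps[] is a line of three holes, jumpable in either direction; the search counts every move sequence that ends with a single peg.

  #include <stdio.h>

  static const int jumps[][3] = {
      {0,1,3},{0,2,5},{1,3,6},{1,4,8},{2,4,7},{2,5,9},
      {3,4,5},{3,6,10},{3,7,12},{4,7,11},{4,8,13},{5,8,12},
      {5,9,14},{6,7,8},{7,8,9},{10,11,12},{11,12,13},{12,13,14},
  };
  enum { NJUMPS = sizeof jumps / sizeof jumps[0] };

  static long solutions = 0;

  static void search(int board[15], int pegs) {
      if (pegs == 1) { solutions++; return; }
      for (int i = 0; i < NJUMPS; i++) {
          int a = jumps[i][0], b = jumps[i][1], c = jumps[i][2];
          if (board[a] && board[b] && !board[c]) {    /* jump a over b into c */
              board[a] = board[b] = 0; board[c] = 1;
              search(board, pegs - 1);
              board[a] = board[b] = 1; board[c] = 0;  /* undo the move */
          }
          if (board[c] && board[b] && !board[a]) {    /* jump c over b into a */
              board[c] = board[b] = 0; board[a] = 1;
              search(board, pegs - 1);
              board[c] = board[b] = 1; board[a] = 0;  /* undo the move */
          }
      }
  }

  int main(void) {
      int board[15];
      for (int i = 0; i < 15; i++) board[i] = 1;
      board[0] = 0;                     /* start with the top hole empty */
      search(board, 14);
      printf("winning sequences: %ld\n", solutions);
      return 0;
  }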


>because the average person cannot program.

EWD was right in the sense of his definition of a "Program", i.e. as a mathematical object. What he did not quite foresee is that it takes only a few intelligent folks to invent/design/implement the entire infrastructure so that the general "mass of programmers" can use it to a certain profit without much mathematical background. This was how the "Industrial Age" played out: we use a lot of machines every day without having an inkling of how they work. I think Dijkstra failed to realize that this was as true of "Software" as of material objects. In his defense, the "Software Industry" was still being invented at that time, and hence nobody could predict how things would turn out in the future.

But his fundamental thesis that a "Program" should be treated as a mathematical object and proved using rigorous logic is still true and by that definition a lot of us fail the test. This is quite independent of the fact that the society at large has accepted a watered down definition of a "Program" with all its shortcomings and failures.


> But his fundamental thesis that a "Program" should be treated as a mathematical object and proved using rigorous logic is still true

How is that even remotely true? How are you going to formally model a browser, or a word processor, or your mail client, or a 3D game, let alone prove it correct against that specification?

If we actually followed that requirement, computers would be almost useless today, except for a few tiny niches where millions of dollars in development costs for even relatively small programs would be justified. Keep in mind that your development tools would be meager to non-existent as well, since they'd also have to satisfy that overly strict requirement.


Quite. It's the silliest of silly ideas - not just practically, but conceptually, because real problems don't have hard algorithmic edges. Which is why they're so hard.

Don't forget there's a difference between a formal (user) specification and a functional specification.

A functional specification says that when you select "Edit -> Cut" the user can expect certain things to happen. A formal specification defines how the symbolic entities involved in implementing the operation should operate.

This isn't too hard for "Edit -> Cut", but it's not tractable at all for "Translate this poetry into another language without mistakes or ambiguities."

So in fact he's just as guilty of resorting to metaphor as anyone else in computing. Only in this case the metaphor is the algorithmic perfection and consistency of a mathematical proof.

This is fine in the classroom and in certain applications where formal methods can help, but not so much in the average developer room.

It also highlights that ultimately computers aren't about manipulating symbols, but about manipulating conceptual metaphors represented by symbol sets.

But I expect he'd have dismissed that idea as too dangerously novel.


>It also highlights that ultimately computers aren't about manipulating symbols, but about manipulating conceptual metaphors represented by symbol sets.

Not quite. Computers only deal with Symbolic Logic via Formal Systems. The mapping of those to a Domain of Discourse is the job of the Programmer.


Here is Dijkstra himself answering the charge via EWD288 - "Concern for Correctness as a Guiding Principle for Program Composition".

Finally, a word or two about a wide-spread superstition, viz. that correctness proofs can only be given if you know exactly what your program has to do, that in real life it is often not completely known what the program has to do and that, therefore, in real life correctness proofs are impractical. The fallacy in this argument is to be found in the confusion between "exact" and "complete": although the program requirements may still be "incomplete", a certain number of broad characteristics will be "exactly" known. The abstract program can see to it that these broad specifications are exactly met, while more detailed aspects of the problem specification are catered for in the lower levels. In the step-wise approach it is suggested that even in the case of a well-defined task, certain aspects of the given problem statement are ignored at the beginning. This means that the programmer does not regard the given task as an isolated thing to be done, but is invited to view the task as a member of a whole family; he is invited to make the suitable generalizations of the given problem statement. By successively adding more detail in the lower levels he eventually pins his program down to a solution for the given problem.

You can also find some relevant discussion (including my posts) at https://news.ycombinator.com/item?id=24942671

The point is that by using a top-down and step-wise refinement methodology one can show "correctness" at whatever level of granularity is acceptable.


What would a coarse-granular specification for Excel look like? How would you prove correctness against such a spec? Examples, please. In these discussions, I only ever see vague statements and handwaving by formal verification proponents — which is kind of ironic, when you think about it...


I believe you are being facetious here. This is a forum for stimulating discussions and exchange of ideas. The fact that you only "see vague statements and handwaving" is more a reflection of one's knowledge bank (or lack thereof) than any shortcoming of the subject or its proponents. By definition this is a complicated and difficult subject and when a computer science pioneer and great like Edsger Dijkstra says something, you pay attention and think about it :-)

PS: "Excel" can be thought of as a program generator which specifies a metalanguage (i.e. a "grammar") for a set of DSLs (i.e. Spreadsheet formulas). A little Googling brought up "ViTSL/Gencel"(https://web.engr.oregonstate.edu/~erwig/Gencel/) and "Classheets"(http://www.jot.fm/issues/issue_2007_10/paper19/index.html) showing approaches to specifying "correct" Spreadsheets.


The fact that average people can write useful programs is largely due to the maturity of software engineering practices, which were just beginning to develop in EWD's time.

Software engineering != developing algorithms. And unfortunately programming can mean both things.


He's not wrong in the man-computer symbiosis sense.


Future predictions are rarely correct. You can hardly fault someone for making an incorrect prediction.


How is he wrong? The average person cannot program. The average person can use but not create software, that’s why computers are popular.


He was wrong in that personal computers took the world by storm, to the point that almost everyone in modern civilisation carries one in their pocket. He was wrong in thinking that programming was essential to personal computing; it is not.


But “computing” to him meant programming. So most people are not computing in the sense that he’s talking about.


I somehow do not understand what the article is really aiming at, but here is what I found interesting: "the subculture of the compulsive programmer, whose ethics prescribe that one silly idea and a month of frantic coding should suffice to make him a life-long millionaire". I am surprised that this was a thing in 1988. Can somebody enlighten me as to what this is referring to? (I mean, nowadays it is obvious.) Was there already a software startup culture in the 80s?


It certainly matches early Atari, and there were lots of microcomputer game developers in the 80s who managed it. They were smaller than startups - often just a single programmer and someone helping them duplicate cassette tapes.


You may have heard of Bill Gates? Steve Jobs? Larry Ellison? C'mon. :)


Yes, but this was more marathon than sprint, wasn't it?


There was a mini gold rush to write software for the new personal computers like the C64, including games and business programs.




I didn’t find that useful. He’s not arguing that metaphors and anthropomorphism are not effective for teaching in general. He’s arguing that they hide the underlying mathematical and logical objects that programs represent, which lulls programmers into a false sense of accomplishment.

I agree. We are awfully good at creating software that does not function properly. Should that be rewarded?


This is similar to saying that instruments "hide the underlying physical objects". That's certainly true in a sense, but also not a helpful point -- we can't see stars or proteins or quarks without instruments. And we can't learn without metaphors.


That's a very strange analogy. A microscope allows you to directly see a small object. A metaphor hints at an object for you to visualize yourself, and the reference is imprecise. That's exactly Dijkstra's point.

And we can (and do) learn without metaphors. We need base knowledge for the metaphor to refer to. They are useful for learning, but not essential.


Man, he was in a bad mood that day.

"The effort of using machines to mimic the human mind has always struck me as rather silly: I’d rather use them to mimic something better."


> Do the universities provide for society the intellectual leadership it needs or only the training it asks for?

How about both?

There is a necessary and sufficient course of study for someone to be successful - gainfully employed as a programmer using technologies of the day.

There is a different course of study that is necessary and sufficient to produce an academic.

Many other areas of study are similar. It is one thing to train someone to be a lawyer, quite another to train someone to be a law professor.


Programming is magic based on science. Without actuators/mechanical engines it won't go anywhere; it's just illusion, proven or not. It may be seen as a kind of light drug development. The focus is on transferring numbers. Money is another kind of illusion, a kind that fakes demand. The focus there is on increasing or decreasing numbers.


Can someone summarize? I couldn't follow this, but given the author, it sounds like something I should pay attention to as a budding educator.


Computers and programs are really complex; programming is really hard. Programming (computer science) will eventually be able to replace human reasoning (and be better at it), but the complexity of doing that requires deep mathematical knowledge and formal methods (Dijkstra was a big fan of formal program proving). Universities aren't teaching computer science, because businesses don't care about that; they just want coders.

I took a few computer science courses at UT back when Dijkstra was there (though not from Dijkstra himself; from Dr. Nell Dale). Everything in the algorithms class came with formal proofs. Loop invariants were core concepts. The book was not yet published; we used a spiral-bound photocopy of the draft.
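
As a small example of the loop-invariant style described above (mine, not from the course), here is a binary search with the invariant stated as a comment; it is the invariant, not testing, that justifies the final return.

  /* lower_bound: first index in the sorted array a[0..n-1] whose value is
     >= key, or n if there is none.
     Invariant on entry to every iteration:
       every index <  lo holds a value <  key,
       every index >= hi holds a value >= key.                              */
  int lower_bound(const int *a, int n, int key) {
      int lo = 0, hi = n;
      while (lo < hi) {
          int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
          if (a[mid] < key)
              lo = mid + 1;               /* a[0..mid] are all < key    */
          else
              hi = mid;                   /* a[mid..n-1] are all >= key */
      }
      /* lo == hi: by the invariant, this is the first index with value >= key */
      return lo;
  }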


Banking on computers to automate human reasoning anytime in the near future is laughable at best, utter naivety at worst.


We are incrementally replacing human reasoning with computation in the present day. For instance, most static type checkers are weak but fast theorem provers, and type inference replaces some of the human reasoning involved.

Granted, static type checking is a very minor corner case, but manifold small incremental changes add up. It's untrue that human reasoning has never been replaced with automation, and it's untrue that human reasoning isn't currently in the process of being further replaced.

I agree that total replacement of human reasoning is unlikely any time soon. However, I'd argue that total replacement of human reasoning implies removal of human desires from the input. (What are programs, if not incredibly formal expressions of what humans desire computers to do? How can we divorce human desires from human reasoning about what is good/desirable?) Science fiction provides numerous examples of how a complete decoupling of computers from human desires can go terribly wrong.


Banking on computers to automate all of human reasoning? Sure. Preparing for computers to automate some disproportionately impactful subset of human reasoning, on the other hand, is very reasonable.


"banking" is the right word as this assertion is used to induce VCs to disgorge huge globs of their customers' money.


Turing, as well as, nine days ago: Where Did Combinators Come From? Hunting the Story of Moses Schönfinkel [0]

Reasoning is computation. It's one suppressed thought away.

[0] https://news.ycombinator.com/item?id=25335175


I'm pretty confident you're wrong.


Both my undergrad and grad education did not train me to program. I did learn computer science though. Even then... mathematical and algorithmic proofs are in a league of their own. CS has always been applied math as much as physics is.


CS is definitely applied mathematics. Whether or not all of the maths that Dijkstra thought were essential to programming are much use in the day-to-day business of programming is debatable. His curmudgeonly view of our field, from the linked paper:

'As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory'


[I've incorrectly put some of the statements from Part 2 in the Part 1 summary, but I've already sunk enough time into summarizing, and the flow feels a bit better this way.]

Part 1: definitions and motivation.

"Radical novelty" describes something new that is so different from everything that came before it that analogical thinking is misleading and new phrases built by analogy with old phrases are inadequate at best. Thinkers in the middle ages were greatly held back by over-use of analogies. There are many cases where modern society has dealt poorly with radical novelty: relativity, quantum mechanics, atomic weapons and birth control pills. The way our civilization has learned to deal with great complexity is to specialize professions, where each profession abstracts away some amount of information, typically expressed in the range of physical scale of their work. The architect deals with a certain amount of complexity: not having to deal with the large scale of the town planner or the small scale of the metallurgist working for the I-beam manufacturer (my interpretation of "solid state physicist" in this context). Computing is radically novel in the scale of complexity handled by a single profession. The terms "software maintenance" (as if time or use alone, rather than shifting requirements, degraded software), "programmers workbench", etc. are evidence that analogies are misleading and software is radically novel.

Part 2: consequences [this summary is more abbreviated than part 1]

History has shown the natural human reaction to radical novelty is to pretend analogies still hold and to pretend that rapid progress isn't possible. We can't keep treating software like it's just some kind of mechanical device and programmers as assembly line workers stamping out widgets. We can't treat software production as just a new type of manufacturing. Manufacturing-style quality control (testing) is a misleading analogy for software quality control, and formal methods are needed for software. Software engineering isn't about designing new widgets, but about fixing the impedance mismatch between humans as assigners of tasks and computers as executors of tasks. There are a variety of vested interests (mathematicians, businesses, the military, those teaching software development as widget building, etc.) working against advancement of Computer Science as a field. We need to start by fixing our terminology to stop encouraging misleading analogies. ("Software bug" encourages us to think of programmer errors as things that passively happen, like insects crawling into relays, etc.) The job of a software developer is to show that their designs and implementations are correct, not to correctly execute some set of operations that create a software widget.


If you can't follow, why care?


Such a fatalistic view of human understanding helps no one.

"couldn't" is past tense. The GP clearly expects/hopes there's a quick remedy here, and your reasoning about "can't" in the present tense (and implied continuing future tense) doesn't hold.


Present with extra steps doesn't entice me.

How about you come up with something else other than "-ism", then we'll talk (maybe).


No need to be rude.


If I wanted to learn formal verification methods like he describes, can anyone recommend where to start?


Technology advances much faster than language and culture can.


Should say 1988 in title.


Added. Thanks!



