You could make the same arguments in every industry that exists; somehow software is the only one that gets away with it. Even our counterparts in hardware are held to a higher standard.
Dijkstra and Hoare are the only ones who tried to advance a higher standard, and yet beyond naming an award after one and singing the praises of the other, the field has continued to accept lesser standards.
Desktop computers run tens of millions of lines of active software, and for most users they exhibit a noticeable bug only every week or so. By any measure, that level of reliability far exceeds any other industry's. Chipmakers achieve 0.999999 reliability, which is impressive, but if software had only that level of per-operation reliability it would crash every millisecond.
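A quick back-of-envelope shows where "crash every millisecond" comes from; the ~1e9 instructions per second is just an assumed round figure for a modern CPU:

    # Back-of-envelope for "crash every millisecond", with assumed round numbers:
    # ~1e9 instructions executed per second, and 0.999999 reliability applied
    # per instruction (i.e. a 1e-6 failure probability on each one).
    instructions_per_second = 1e9      # assumed CPU throughput, order of magnitude
    failure_prob = 1e-6                # 1 - 0.999999
    failures_per_second = instructions_per_second * failure_prob   # 1000 per second
    print(1 / failures_per_second)     # 0.001 s, i.e. roughly one crash per millisecond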
Setting aside all other factors (complexity, rate of progress, resources available, etc.), I would argue that the culture of bug acceptance has led to more bugs in software than would have existed had the field been less tolerant.
I'm trying to puzzle out the logic behind chipmakers' products being less reliable than the software that runs on them. Does the weakest link in the chain not break first?
They're different units. When chipmakers claim six-sigma (about 0.999999 reliability) they mean that when you order a million chips from them, on average 1 will be bad.
It's not apples-to-apples, but if you look at software in terms of lines of code executed before a noticeable bug, the number is in the trillions.
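To make the unit mismatch concrete, here's a rough calculation from the figures above; the 1e8 lines-executed-per-second rate is an assumed stand-in, not a measurement:

    # Rough figure for "lines of code executed per noticeable bug", using the
    # one-bug-a-week claim above plus an assumed throughput of 1e8 lines/second.
    lines_per_second = 1e8                       # assumed, deliberately conservative
    seconds_per_week = 7 * 24 * 3600             # 604800
    print(lines_per_second * seconds_per_week)   # ~6e13 lines per noticeable bug
    # The chip figure, by contrast, is about one defective unit per 1e6 chips
    # shipped - different units entirely, which is the point.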
> You could make the same arguments in every industry that exists
You could, but the argument would be false there. Building a car, constructing a house, sewing a shirt, writing a book, and so on are not significantly more complex than they were decades ago. Software is orders of magnitude more complex than it was decades ago.
You would be incorrect - having worked in the industry for a short while, I can attest that modern automobiles are at least an order of magnitude more complex than something from, say, the 60s.
They still operate on the same mechanical principles, but we exploit these in vastly more complex and efficient ways. If you want to make this argument, then you also have to accept that computers are "just" current running across silicon.
No, the reason we get away with this is that replacement is easy. You get a car right the first time because a large recall can cost hundreds of millions and be logistically impossible to execute. For us it's a simple "Hey! We has update! Yes/No?" dialog box.
"modern automobiles are at least an order of magnitude more complex than something from, say, the 60s". I could believe a car today is even two or three orders of magnitude more complex than a car of the sixties (100 times or a 1000 times more complex).
But compare that to Moore's law: a computer doubles in complexity roughly every eighteen months, which compounds to something like ten or more orders of magnitude of increase since the sixties.
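For scale, the compounding works out roughly as follows; the ~55-year span is an assumed figure for "the sixties to now":

    import math

    # Compounding Moore's law: complexity doubling roughly every eighteen months.
    # The ~55-year span (mid-1960s to today) is an assumption for illustration.
    doublings = 55 / 1.5                # ~36.7 doublings
    growth = 2 ** doublings             # ~1.2e11
    print(math.log10(growth))           # ~11, i.e. about eleven orders of magnitude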
Also, modern cars do have lots of bugs in the sense of suboptimal behaviors. They just cannot fail utterly without people being pissed.
We're not talking about silicon here - we're talking about software. There's no doubt that the complexity of silicon has increased by leaps and bounds in the decades since the 60s - but what about the code we write to drive it?
The complexity of code - not the compiled binary, but the text you punch into the machine and what it semantically represents - has not really grown all that much over the years. We've introduced several new paradigms since the COBOL mainframes: object orientation and functional programming, to name a couple. It'd be hard to argue, though, that code complexity follows Moore's Law. Not even close.
> They just cannot fail utterly without people being pissed.
This is an important point: people who expect flawless behaviour from software because other fields of engineering demonstrate it are, IMHO, misguided. Aircraft engineers work incredibly slowly because the consequences of fucking up are perhaps thousands of deaths and billions in liabilities. Car engineers are the same on a lesser scale. There's no need to expect flawless, 100% perfect function when you don't need flawless, 100% perfect function.
We could spend 20 years developing the perfect toaster that will never, ever burn your toast. Or we can spend 2 months on something that will get it right 97% of the time, and just move on with our lives.
In addition, it's not as if those other activities actually exhibit many fewer mistakes than software does. Cars get recalled all the time for small things that slipped through the design and testing phases. Your average book has a number of typos on first printing. The difference is that cars are usually designed with enough redundancy to tolerate minor flaws, and book typos are inconsequential enough that people are willing to accept them without any coaxing.
Software is different not just because it is more complex, but because computers are much more finicky than roads and people. A logic flaw in software can cost millions; an equivalent flaw in a book will likely never be noticed.