You could make the same arguments in every industry that exists, yet somehow software is the only one that gets away with it. Even our counterparts in hardware are held to a higher standard.
Dijkstra and Hoare are the only ones who tried to advance a higher standard, and yet beyond naming an award for one and singing the praises of the other, the field has continued to accept lesser standards.
Desktop computers run tens of millions of lines of active software, and for most users they exhibit a bug only every week or so. By any measure, that level of reliability far exceeds that of any other industry. Chipmakers achieve 0.999999 reliability, which is impressive, but if software had only that level of reliability per operation, it would crash every millisecond.
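To make that "every millisecond" figure concrete, here's a rough sketch. The numbers are my own assumptions, purely illustrative: a CPU retiring on the order of a billion instructions per second.

    # Back-of-envelope: 0.999999 reliability applied per operation executed.
    instructions_per_second = 1e9        # assumed: ~a billion instructions/s
    failure_probability = 1 - 0.999999   # chance a given operation misbehaves
    failures_per_second = instructions_per_second * failure_probability
    print(failures_per_second)           # ~1000/s, i.e. roughly one every millisecond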
Setting aside all other factors (complexity, progress made, resources available, etc.), I would argue that the culture of bug acceptance has led to more bugs in software than would have existed had the culture been less tolerant.
I'm trying to puzzle out the logic behind chipmakers' products being less reliable than the software that runs on them. Does the weakest link in the chain not break first?
They're different units. When chipmakers claim six-sigma (about 0.999999 reliability) they mean that when you order a million chips from them, on average 1 will be bad.
It's not apples-to-apples, but if you look at the number of lines of code executed before a noticeable bug, it's in the trillions.
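Rough numbers behind the "trillions" figure, assuming (again, my own ballpark, not anything measured) one noticeable bug per week of active use:

    # Instructions executed per noticeable bug, under the one-bug-a-week assumption.
    instructions_per_second = 1e9            # assumed: ~a billion instructions/s
    seconds_per_week = 7 * 24 * 3600
    instructions_per_bug = instructions_per_second * seconds_per_week
    print(f"{instructions_per_bug:.1e}")     # ~6.0e+14: hundreds of trillions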
> You could make the same arguments in every industry that exists
You could, but the argument would be false there. Building a car, constructing a house, sewing a shirt, writing a book, and so on are not significantly more complex than they were decades ago. Software is orders of magnitude more complex than it was decades ago.
You would be incorrect - having worked in the industry for a short while, I can attest that modern automobiles are at least an order of magnitude more complex than something from, say, the 60s.
They still operate on the same mechanical principles, but we exploit these in vastly more complex and efficient ways. If you want to make this argument, then you also have to accept that computers are "just" current running across silicon.
No, the reason we get away with this is that replacement is easy. You get a car right the first time because a large recall can cost hundreds of millions and be logistically impossible to execute. For us it's a simple "Hey! We has update! Yes/No?" dialog box.
"modern automobiles are at least an order of magnitude more complex than something from, say, the 60s". I could believe a car today is even two or three orders of magnitude more complex than a car of the sixties (100 times or a 1000 times more complex).
But compare that to Moore's law, under which a computer roughly doubles in complexity every eighteen months to two years - compounding to seven or more orders of magnitude of increase since the sixties.
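The compounding works out roughly like this (a sketch with assumed round numbers; the exact doubling period doesn't change the conclusion):

    import math

    # Rough compounding of Moore's law over an assumed 50-year span.
    years = 2015 - 1965
    for doubling_period_years in (1.5, 2.0):
        factor = 2 ** (years / doubling_period_years)
        print(f"doubling every {doubling_period_years} years -> "
              f"x{factor:.1e} ({math.log10(factor):.1f} orders of magnitude)")
    # Roughly 1e10 (10 orders) at eighteen months, 3e7 (7.5 orders) at two
    # years - either way it dwarfs the 100x to 1000x suggested for cars above.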
Also, modern cars do have lots of bugs in the sense of suboptimal behaviors. They just cannot fail utterly without people being pissed.
We're not talking about silicon here - we're talking about software. There's no doubt that the complexity of silicon has increased by leaps and bounds in the decades since the 60s - but what about the code we write to drive it?
The complexity of code - not the compiled binary, but the text you punch into the machine and what it semantically represents - has not really gotten that much more complicated over the years. We've introduced several new paradigms since the COBOL mainframes: object orientation and functional programming, to name a couple. It'd be hard to argue, though, that the growth follows Moore's Law. Not even close.
> They just cannot fail utterly without people being pissed.
This is an important point: people who expect flawless behaviour from software because other fields of engineering demonstrate it are, IMHO, misguided. Aircraft engineers work incredibly slowly because the consequences of fucking up are perhaps thousands of deaths and billions in liabilities. Car engineers are the same on a lesser scale. There's no need to expect flawless, 100% perfect function when you don't actually need it.
We could spend 20 years developing the perfect toaster that will never, ever burn your toast. Or we can spend 2 months on something that will get it right 97% of the time, and just move on with our lives.
In addition, it's not as if those other activities actually exhibit many fewer mistakes than software does. Cars get recalled all the time for small things that slipped through the design and testing phases. Your average book has a number of typos in its first printing. The difference is that cars are usually designed with enough redundant features that they can deal with minor flaws, and book typos are inconsequential enough that people are willing to accept them without any coaxing.
Software is different not just because it is more complex, but because computers are much more finicky than roads and people. A logic flaw in software can cost millions; an equivalent flaw in a book will likely never be noticed.
Depends on the job. I've seen some very unfortunate PC deployments (mostly in the 90s) that axed perfectly good 3270-based systems in favor of clunky, crash-prone, button-and-wizard GUIs so bad that employees had to go back to paper-based workflows to get their jobs done.
Terminal-based systems often have a very steep learning curve, but I've seen many cases where they were better designed and met business needs better than COTS replacements.
Granted, those terminal-based systems were probably phenomenally expensive when they were put in (mostly in the 70s or early 80s, from what I saw), and I suppose it's possible that after 15-20 years we have a biased sample that only represents the best of the breed, but they got the job done.
If by "green screen" you mean 3270 terminal, then ... yes, but it would be pretty hard. The networking hardware for a 3270 is pretty expensive. (I was going to say "getting pretty expensive," but in reality SNA has always been expensive; it's just that other networking hardware has gotten dirt cheap by comparison.) And, of course, you need some sort of actual backend computer to attach the terminal to.
But sure, if you had a terminal, a spare IBM system (the iSeries is probably the lowest end) set up to support SNA, and then a networking controller to attach the 3270 to, you could probably get Lynx working.
Alternately, if you just want the effect, you could just pick up an old green/black monochrome display and hook it up (via a physical adapter) to any VGA card. Buggy implementations excepted, all VGA cards should be backwards-compatible with MDA.
Add a nice IBM Model M keyboard and you'd be all set. I suspect someone has probably done this as a case mod before (if not ... I might); stick a modern mobo in an old IBM PC case with the original monitor and keyboard.
As long as the problems are matched to the capabilities of the language, I don't actually care what the language and capabilities are.
I judge how good things are by the human element. That hasn't changed. The software industry is as inclined as always to engage in death marches, shortchange quality, seek imaginary silver bullets, and devalue experience. In short, it isn't fundamentally different at all.