No, you can't. You can show that it has worked a finite number of times in testing. With testing alone you can't show that it won't produce arbitrary crap the next time you run it.
Here I think lies the line between theory and practice. As a computer scientist, Dijkstra clearly represents the former approach.
Your description above is basically how computer programs have been built since the beginning, for the simple reason that otherwise they wouldn't have been written at all. Programs like that are nothing that even the humblest of computer scientists would consider complete, but in practice, knowing the boundaries of that incompleteness and living with them is more than useful.
Surely a program can fail in spectacular ways, but by testing a set of known behaviours with a set of reasonable inputs we can be pretty sure that, if you use it as intended, with inputs that are roughly what they're supposed to be, the program will most likely produce the expected output again and again. This is good enough for all practical purposes of business and utility.
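To make that concrete, here's a minimal sketch of that kind of example-based testing in Python; order_total and its test cases are made up, and pytest-style asserts are assumed:

    # Hypothetical order_total function plus a handful of example-based
    # tests over "reasonable" inputs. Passing them builds confidence for
    # the intended use; it is not a guarantee over all possible inputs.

    def order_total(prices: list[float], discount: float) -> float:
        """Sum the prices and apply a fractional discount (0.0 to 1.0)."""
        subtotal = sum(prices)
        return round(subtotal * (1.0 - discount), 2)

    def test_typical_order() -> None:
        assert order_total([9.99, 4.50], 0.0) == 14.49

    def test_ten_percent_discount() -> None:
        assert order_total([100.0], 0.10) == 90.0

    def test_empty_cart() -> None:
        assert order_total([], 0.0) == 0.0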
It's also quite similar to how things work in mechanical engineering. For example, an engine is designed to run between 1,000 and 5,500 revolutions per minute, with oil of grades SAE 40 to 50, and with a suitable mixture of air and petrol. If you push the engine outside these specifications, you increase the likelihood of it failing in spectacular ways. The complexity of the failure patterns can be overwhelming: a small, seemingly unrelated thing turns out to be vital in some underestimated sense, and when it fails it causes all kinds of other failures that eventually destroy the engine completely. This occasionally happens in real life, too. For a computer programmer, doesn't this sound familiar?
We do (try to) write critical software in a different way: avionics, spaceship computers, medical systems. The cost per line is overwhelming, but the programs still aren't bug-free. A lot of that cost effectively goes into proving the non-existence of bugs; fixing the bugs that are found is cheap in comparison.
Proofs of correctness can be formulated for simple systems, but it gets increasingly hard for complex ones. Worse yet, for most programs we use daily we're completely unable to write specifications tight enough to make it possible to write fully correct, bug-free programs at all. Specifying how the program should work in the first place takes a large share of the effort that goes into special systems such as avionics. That's because specifying is itself a kind of programming, and even if we managed to express the specification in some non-ambiguous and logically complete format, I think the process of building that specification would suffer from similar disconnects, vague definitions, and other human problems.
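One way to see the "specifying is kind of programming" point: a property-based test is essentially an executable partial specification. A minimal sketch using the Hypothesis library and a stand-in my_sort function:

    # Two properties my_sort must satisfy for all inputs, checked on many
    # generated cases. Writing the spec is itself programming, and it is
    # still partial: nothing here pins down stability, for instance.
    from hypothesis import given, strategies as st

    def my_sort(xs: list[int]) -> list[int]:
        return sorted(xs)  # stand-in implementation for the sketch

    @given(st.lists(st.integers()))
    def test_output_is_ordered(xs: list[int]) -> None:
        ys = my_sort(xs)
        assert all(a <= b for a, b in zip(ys, ys[1:]))

    @given(st.lists(st.integers()))
    def test_output_is_a_permutation(xs: list[int]) -> None:
        assert sorted(my_sort(xs)) == sorted(xs)

Even this tighter, machine-checked spec leaves plenty unsaid, and deciding what it should say runs into exactly those vague definitions and disconnects.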
Goals sometimes produce the most value when they're walked towards but not necessarily ever reached.
With proofs of correctness, I think it's important to recognize that systems will need to be built in a way that is conducive to proving statements about them. This is a practical daily endeavor for anyone who uses static type systems to catch errors (since types are theorems if you squint), and a major driving idea behind functional programming research. Compositionality and purity can make it drastically easier to prove interesting theorems about programs.
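A small everyday illustration in Python, with made-up names and mypy assumed as the checker: an Optional return type is a tiny theorem ("the result may be absent") that every caller is forced to respect, and because the functions are pure, properties about them are easy to state and check.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class User:
        name: str
        email: str

    def find_user(users: list[User], name: str) -> Optional[User]:
        """Return the first user with the given name, or None."""
        for u in users:
            if u.name == name:
                return u
        return None

    def email_of(users: list[User], name: str) -> str:
        user = find_user(users, name)
        # mypy rejects a bare `return user.email` here: user might be None,
        # so the missing-user case has to be handled explicitly.
        if user is None:
            return "unknown"
        return user.email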
The phrase "testing alone" was key. Practical good-enough correctness comes from reasoning about the code and inputs. Testing is a tool that helps with that. Testing alone is not good enough.
I teach freshmen how to code and test - we use unit tests in class - and have seen first hand that novices over-rely on tests as an excuse to not concentrate hard enough to fully understand what the code is doing.
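A typical case, with made-up names: both of these tests pass, but a minute spent actually reading the code shows it divides by zero on an empty list, an input no test covers.

    # Example-based tests that pass while an edge case slips through.
    def average(grades: list[float]) -> float:
        return sum(grades) / len(grades)

    def test_single_grade() -> None:
        assert average([4.0]) == 4.0

    def test_a_few_grades() -> None:
        assert average([2.0, 3.0, 4.0]) == 3.0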
Without factoring in the cost of errors and the cost of avoiding the errors, this debate is a pointless exercise in transcribing (i) dogma, (ii) favorite hypothetical situations and (iii) favorite anecdotal evidence. I doubt any original thought will come out of it today.
I feel like this is just the scientist/engineer debate, where the former is trying to discover objective truth, and the other just wants to build stuff.
It's an elemental disagreement, where the scientist is saying "You don't know everything!" and the engineer is replying "So what?"
But yeah, probably we're not going to learn anything with it.
I doubt it's an engineer/scientist thing. Dijkstra was much more of an engineer-programmer than many realize. He was an engineer first and the rest later.
I think it's about the correctness of claims. There are engineers who are cognizant of the deficiencies of testing, and those who believe that testing is truly sufficient and complete.