> A given piece of software can be bullet-proof today and a catastrophic hole can appear tomorrow.
Only because you didn't notice the catastrophic hole today, and that's true of everything. When a building collapses, the construction company/architects can't really get out of it by going "well, it was perfectly fine yesterday, it's hardly our fault!", so I don't see why we'd accept that attitude from software engineers.
The state space of a complex piece of software is vastly larger than that of a building, and the desirable states within that space are not continuous, while a building's desirable state space is pretty smooth. It's like the difference between walking along the knife edge of a fractal vs. standing on top of a mostly smooth hill.
Because of that, software simply can't be treated the same as other engineering disciplines, at least not yet.
> The state space of a complex piece of software is vastly larger than that of a building
Is it? Unless we can accurately simulate reality at the atomic level and brute-force building designs against every possible scenario to make sure they don't fail catastrophically, it still all comes down to prior knowledge/experience, reasonable assumptions, approximations and measurements, just like in software. If anything, software is easier: with enough effort (formal methods, etc.) you can prove the correctness of your software, but you can't really do so with a building.
A large chunk of software vulnerabilities is due to outright incompetence, honest mistakes or cost-cutting. The "building" equivalent of an insecure, black-box, backdoorable EC like the one in this article would be using rebar or concrete of unknown specs and origin and then wondering why the construction collapsed.
Thankfully, the liabilities associated with civil engineering mean we mostly don't use unknown materials of shady origin, and there are multiple layers of review to catch any oversights. The same can be applied to software, and while you're never going to get to 100% in either domain, if software were as reliable as buildings (as in, you can count the number of major collapses/bugs in the past few years on one hand) it would be a major improvement.
Disagree. First of all, the level of detail an atomic-level simulation could provide is simply useless in the grand scheme of things. Classical physics is more than enough for the vast majority of use cases, and those can be simulated by computers; e.g. you can programmatically compute the load bearing on any part of a structure. More importantly, engineers can usually fall back on safety margins that "linearly scale" the confidence in a given case: if the calculations say a part has to be this strong, you can trivially make it 3x wider.
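To make the safety-margin point concrete, here's a toy sketch (all numbers made up, not from any real design code): size a steel member for a design load, then triple the margin. The key property is that the result degrades smoothly with the inputs.

```python
# Toy illustration of a structural safety factor (made-up numbers).
# The point: over-provisioning scales smoothly -- "make it 3x stronger"
# is a meaningful, continuous knob.

expected_load_kn = 500     # hypothetical design load in kilonewtons
steel_yield_mpa = 250      # nominal yield strength (N/mm^2) of mild steel
safety_factor = 3          # the "just make it 3x wider" margin

# required cross-section so that stress = force / area stays under yield
required_area_mm2 = (expected_load_kn * 1000) / steel_yield_mpa * safety_factor
print(f"required cross-section: {required_area_mm2:.0f} mm^2")  # 6000 mm^2
```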
A while loop in a computer program, by contrast, will eat away at any redundancy you introduce, and the complexity of computability itself leaves even mathematics behind: there is no universal way to prove a non-trivial property of an arbitrary program (Rice's theorem).
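As a concrete illustration of that last claim (a sketch, not from the thread): the three-line loop below terminates for every positive input if and only if the Collatz conjecture is true, which is still an open problem, so no general tool today can certify even this seemingly trivial property.

```python
# A tiny while loop whose termination for all n > 0 is an open problem
# (the Collatz conjecture). No amount of "3x wider" helps here.

def collatz_steps(n: int) -> int:
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- but is it finite for *every* n > 0?
```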
This isn't really fair. Most buildings, public works, utilities and industrial processes are extremely vulnerable to the most basic of attacks; they just don't get attacked as often in physical space because such attacks are logistically more difficult to execute.
Your claim is fundamentally "It's logistically easier to attack software, so such attacks will happen more often".
This is absolutely true, but it's also proportionally easier to defend software. It's insanely easy to test whether your software is vulnerable to SQL injection; it's not particularly easy to test whether your building can be destroyed with explosives.
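As a sketch of how cheap that kind of test is (a made-up lookup function against a throwaway SQLite database, nothing from the article):

```python
import sqlite3

# Hypothetical vulnerable lookup: builds the query by string concatenation.
def find_user_unsafe(conn, name):
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2'), ('bob', 'swordfish')")

# The classic probe: if this returns every row, the query is injectable.
rows = find_user_unsafe(conn, "' OR '1'='1")
print("vulnerable" if len(rows) == 2 else "probe found nothing")
```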
Combine that with the fact that just about every piece of consumer software on the planet has a laundry list of bugs that don't require malicious intent to reproduce, and I find it very hard to accept that reasoning as an excuse for software developers to abdicate responsibility.
Empirical/dynamic testing for vulnerabilities is not rigorous or complete in any way, and is only viable for a relatively small subset of issues.
I'm only suggesting that holding programmers liable for security vulnerabilities has no real precedent in other engineering disciplines. That's not to say there isn't tons of shitty software being shipped with reckless disregard for quality, and some reckoning there might be useful.
We don't expect buildings to hold up to physical attacks. Most would crumble immediately. Just like most hardware doesn't catastrophically fail through normal use.
Computers are one of the only things we expect flawless defense against malice.
Today, when computers are connected to the whole world, they have to be built safe. The analogy would be if everybody in the whole world could access a building's front door. In such a scenario, the front door would, of course, be built very heavy.