> The problem with holding manufacturers liable is that I'm not sure it's the economically optimal solution
Either you hold someone liable or the effect will just be hiding the risk.
How about just requiring that the use of critical software systems be insured against malware/failure in general? Seems like we want that anyway, and if we can't find anyone to insure a piece of software, it probably shouldn't be used in a critical system in the first place.
Importantly, it's only the users of software in critical systems that need to be insured. Neither software vendors nor regular users need insurance. The insurance company alone should handle the job of shielding a critical system from the mistakes of software vendors. We need to allow software vendors to make mistakes, or nothing would ever get made (source: I'm a programmer).
And the insurer would be wise to spend some of the premium on bug bounties for the software they're insuring (to minimize the cost of failure). In the end, all white hats would end up being employed by an insurance company, helping assess software security.
The difference between your approach and mine is that you propose to solve it by making rules (regulations), whereas I propose adding a separate party that can absorb risk (insurance), thus shielding a creative industry -- software development -- from having to adhere to a list of rules, which will surely only grow in size.
Ah. Insurance doesn't act as a separate party to absorb risk in the way you're talking about.
They act as a party to amortize known risk, in exchange for a monetary premium set based on that known risk.
Without the government stepping in and limiting catastrophic liability to some degree (ideally in exchange for signaling the market to produce a social good), the premiums charged would be so large as to just suck money out of tech. There's no creativity shield if you're paying an onerous share of your profits in exchange.
Which is why I said any solution has to be two-part: (1) require risk liability on a better-defined subset of risk, and (2) provide a liability shield on the remaining less-defined risk iff a company demonstrates an ability to handle it (e.g., prompt patching). This creates a modelable insurance risk market, therefore reasonable premiums, and still does something about nation-state-level attacks.
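To make the premium argument concrete, here's a toy actuarial sketch (all numbers and the 1.2 load factor are made up for illustration): a premium is roughly expected annual loss times a load factor, so a single rare-but-catastrophic tail scenario (a nation-state attack) can dominate everything else in the book. Capping that tail liability is what brings the premium back down to something a software business could actually pay.

```python
# Toy premium model: premium = expected annual loss * load factor.
# Scenarios are (annual probability, loss amount) pairs; all figures
# are hypothetical, chosen only to show how tail risk dominates.

def expected_premium(loss_scenarios, load_factor=1.2):
    """Sum probability-weighted losses, then apply the insurer's load."""
    expected_loss = sum(p * loss for p, loss in loss_scenarios)
    return expected_loss * load_factor

# Ordinary, modelable risk: frequent but bounded losses.
ordinary = [(0.10, 500_000), (0.01, 5_000_000)]

# Same book plus a rare, near-unbounded nation-state-level scenario.
with_tail = ordinary + [(0.001, 2_000_000_000)]

print(expected_premium(ordinary))   # bounded risk -> modest premium
print(expected_premium(with_tail))  # the tail scenario dominates
```

Under these made-up numbers, the single tail scenario multiplies the premium roughly twentyfold, which is the "sucking money out of tech" effect; a government liability shield on that tail (part 2 above) is what leaves only the modelable part for insurers to price.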