So because a tool can't prevent all possible faults or problems, that tool isn't worthwhile?
I see it like this: we have a number of different tools to minimise problems (various types of testing, theorem provers, type systems, etc.). None of them catches every problem, and many of the tools overlap in the kinds of problems they do catch, but if you use several of these techniques together, you minimise the surface area of issues that can slip through.
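To make that overlap concrete, here's a minimal, hypothetical TypeScript sketch (the function and test names are mine, purely for illustration): the type checker rejects a call with the wrong argument type at compile time, but only a test would catch a logic error in the arithmetic, so the two tools cover different defects without either being sufficient on its own.

```typescript
// Hypothetical example: each tool catches a different class of defect.

// The type system rejects calls like averageLatency("not a list") at
// compile time, but it cannot tell us the arithmetic is correct.
function averageLatency(samplesMs: number[]): number {
  if (samplesMs.length === 0) {
    throw new Error("no samples");
  }
  // A logic bug (e.g. dividing by samplesMs.length - 1) would still
  // type-check; only a test asserting on a known input would catch it.
  const total = samplesMs.reduce((sum, x) => sum + x, 0);
  return total / samplesMs.length;
}

// A minimal test covering the gap the type checker leaves open.
function testAverageLatency(): void {
  const result = averageLatency([10, 20, 30]);
  if (result !== 20) {
    throw new Error(`expected 20, got ${result}`);
  }
}

testAverageLatency();
```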
Very few things, in computers or in "real life", are completely foolproof, but that doesn't mean they are not worthwhile. That's a call that must be made on a case-by-case basis, weighing the risk of faults, the damage done if a fault does occur, the cost of preventing those faults, and the budget and time available.
One needs to consider the safety of the system as a whole in order to build a safe system.
Leaving aside social problems, for instance, is a bit like claiming one can build perfect software, with the small caveat that it can't be used by humans.
Leaving aside hardware is even more suspect, because software and hardware are inseparable in a system.