This is true but mostly meaningless in practice. In my experience, the people who repeat this every time formal verification comes up are rarely the people who actually work in formal verification. The reality is that it's far, far harder to mess up a formal spec that you actually have to prove a concrete instance of than it is to mess up the code. With a buggy spec, in the course of trying to prove it you'll hit cases that seem nonsensical, or be unable to prove basic lemmas you'd expect to hold. Formal spec bugs happen, but they're quite rare. Usually, even when there are edge cases that aren't covered, they will be known to the authors of the spec due to the dynamics I mentioned earlier (cases encountered during the proof that are clearly being solved in ways that follow only the letter of the spec, not its "spirit"). Of course, "informal" specs, or paper proofs, have subtle bugs all the time, but that's not what we're talking about here.
Spec bugs happen all the time. If you think people can account for every edge case in massively complex projects, you are wrong. Many behaviors can't be predicted to be nonsensical ahead of time; you only discover them once you actually hit them.
Formal verification is not a silver bullet. Despite all of the extra effort, bugs will still happen. It's better to invest in making things safe by default, or in aborting when you get into a bad state.
Can you point me to some of the concrete bugs in formal specs with associated proofs that you are thinking of? Specifically in the part that was formally verified, not in a larger project that incorporates both verified and unverified components. Because my experience is that when asked to actually provide concrete examples of how formally verified code has bugs in practice, people find it quite difficult to do so. Formal verification is about as close to a silver bullet as it gets for eliminating bugs, even if it's not 100% (nothing is). It's certainly far better at eliminating bugs than vague philosophical positions like "making things safe by default" (what does that mean?) or "abort when you're in a bad state" (how do you know you're in a "bad" state? Isn't it in general at least as difficult as figuring out what the correct formal spec is? How easy is it to detect? Is the performance cost acceptable?).
In fact, what usually happens is the opposite of a formal proof being undermined by a bad spec: when an informal spec is formally verified, inconsistencies are often found in the original spec during the course of the proof process, and bugs are subsequently found in older implementations of that spec. Fixes are then incorporated into both the original spec and the existing implementations. Formal verification is a spec-hardening process.
>Specifically in the part that was formally verified
No, I am saying that it is easy to not verify something because you don't know the requirements up front.
>"making things safe by default" (what does that mean?)
It means that complexity is abstracted away such that it is hard to do the wrong thing.
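For example, here's a minimal Rust sketch (a hypothetical type, not from any real codebase) of making the invalid state unrepresentable, so callers can't do the wrong thing in the first place:

```rust
/// Hypothetical example: a percentage that is valid by construction.
#[derive(Debug, Clone, Copy)]
pub struct Percent(u8);

impl Percent {
    /// The only way to build a Percent; bad input is rejected at the
    /// boundary instead of being checked at every use site.
    pub fn new(value: u8) -> Option<Percent> {
        if value <= 100 { Some(Percent(value)) } else { None }
    }

    pub fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    assert!(Percent::new(42).is_some());
    assert!(Percent::new(150).is_none()); // an invalid Percent can't exist
}
```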
>"abort when you're in a bad state" (how do you know you're in a "bad" state?
There are invariants which you can assume are always true. If they aren't true for whatever reason, you can abort and later track down what caused you to enter that state. It could be something as simple as a logic bug, or as obscure as hardware malfunctioning and causing a bit flip (at scale you need to deal with hardware misbehaving).
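As a rough Rust sketch of what I mean (the invariant here is hypothetical and purely illustrative):

```rust
/// Hypothetical example: a ledger whose balance must always equal
/// credits minus debits. If it ever doesn't, crash loudly rather
/// than limp along in a corrupted state.
fn check_balance_invariant(debits: u64, credits: u64, balance: i64) {
    let expected = credits as i64 - debits as i64;
    if balance != expected {
        // Could be a logic bug, or (at scale) a hardware bit flip;
        // either way, stop now and diagnose from the dump and logs.
        eprintln!("invariant violated: balance {balance} != expected {expected}");
        std::process::abort();
    }
}

fn main() {
    check_balance_invariant(30, 100, 70); // invariant holds, continues
    // check_balance_invariant(30, 100, 99); // would abort the process
}
```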
>inconsistencies are often found in the original spec during the course of the proof process

Bugs are found in the process of testing too.
> No, I am saying that it is easy to not verify something because you don't know the requirements up front.
Granted, but "this spec doesn't attempt to cover [X]" is very different from "this spec claims to cover [X] but actually doesn't, in ways the authors are unaware of." The former is quite common in formal specs with machine-verified correctness proofs, and I've never heard anyone call that a "spec bug". The latter is not common in them at all. Perhaps you didn't intend this yourself, but "there could be a bug in the spec" is usually meant to imply that the latter case happens often enough to make formal verification comparably effective to other defense mechanisms, when empirically formal verification is far, far more effective than any other bug prevention mechanism, to the point that people struggle to find even individual examples of these kinds of spec bugs.
> There are invariants which you can assume are always true. If they aren't true for whatever reason, you can abort and later track down what caused you to enter that state. It could be something as simple as a logic bug, or as obscure as hardware malfunctioning and causing a bit flip (at scale you need to deal with hardware misbehaving).
Yes, and stating these invariants precisely is generally exactly what you need to do as part of writing the formal spec; getting them right is equally hard in both cases, so it seems strange to say that it's easier to check the invariants at runtime than to get the spec right. In fact, many formal specs have invariants that are exactly of the form "assert that if I ran this line of code, it would not throw an exception."

Moreover, you can use much more "expensive" invariants when you're writing a spec, since you don't actually have to check them at runtime, which in practice lets you catch far more, and far subtler, bad states than you can with software checks. For example, your proof state can include a full history of every action taken by the application in any thread, and you can prove that invariants on that history are maintained during every instruction, even when instructions proceed concurrently and might witness partial or stale views of the latest values in memory. Storing and checking all of that on every step of an actual machine would be wholly impractical, even in a high-level language and in non-performance-critical software.
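To make the comparison concrete, here's a toy sketch in Lean 4 (a hypothetical counter, not drawn from any real spec, assuming a recent toolchain): the invariant is stated once as a proposition and proved to hold across every step, so the runtime assert becomes a theorem that costs nothing at runtime.

```lean
-- Hypothetical toy spec: a counter that must never exceed its bound.
-- The invariant is a proposition, not a runtime check.
def inv (count bound : Nat) : Prop := count ≤ bound

-- A step increments the counter only when there is room.
def step (count bound : Nat) : Nat :=
  if count < bound then count + 1 else count

-- Proved once, for all possible states, instead of asserted on
-- every step of an actual execution.
theorem step_preserves_inv (count bound : Nat) (h : inv count bound) :
    inv (step count bound) bound := by
  unfold inv at h ⊢
  unfold step
  split <;> omega
```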
Obviously, things are defined according to some base model of the world, so if you are concerned about hardware bit flips (which in rare contexts you are), your spec should take that into account. But the vast majority of bugs in modern applications are not bugs of this form, and most defensive runtime checks are equally susceptible to hardware issues or other unexpected environment changes.
> Bugs are found in the process of testing too.
Yes, and I never said testing was bad. Formal verification and testing are complementary techniques, and tests are important in a formal verification context because, if you've gotten your spec right, you should be able to prove that the tests will pass. But being complementary doesn't mean the two are equally effective at reducing bugs.