As a practitioner but also a small-time CS researcher, I don't expect verification that small Rust libraries use 'unsafe' correctly to "waste my assurance budget".
On the cost side: we don't know yet how hard it will be to formally verify small Rust libraries that wrap unsafe code in safe abstractions. That's partly because we don't even know exactly what the specification is that unsafe Rust code has to satisfy. But once we've settled on that and built good proof assistant tools, I think the work will usually be pretty easy, because most of these libraries are fundamentally quite simple and small, even including all their dependencies.
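To make the pattern concrete, here's a minimal sketch of the kind of library I have in mind: one 'unsafe' call wrapped in a safe API, where the whole safety argument is a single invariant (the function and its name are invented for illustration):

```rust
/// Hypothetical one-function "library": a safe wrapper around one
/// unchecked operation. The entire proof obligation is that the
/// unsafe block's precondition (index in bounds) always holds.
pub fn get_clamped<T>(slice: &[T], i: usize) -> Option<&T> {
    if i < slice.len() {
        // SAFETY: `i < slice.len()` was checked on the line above,
        // so the unchecked access is in bounds for every caller.
        Some(unsafe { slice.get_unchecked(i) })
    } else {
        None
    }
}
```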
You correctly identified compositionality as a key issue. Fortunately, verifying correct use of 'unsafe' is inherently compositional: knowing nothing but the code of the library and its dependencies, we must prove that the necessary safety properties hold in all possible contexts.
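To illustrate why "all possible contexts" is the right bar, consider this hypothetical library (names invented) that may pass every test its author wrote yet is still unsound, because one perfectly safe caller can violate its unstated invariant:

```rust
pub struct Bytes {
    data: Vec<u8>,
}

impl Bytes {
    pub fn new(data: Vec<u8>) -> Self {
        Bytes { data }
    }

    pub fn first(&self) -> &u8 {
        // UNSOUND: this assumes `data` is non-empty, but `new` accepts
        // any Vec. The entirely safe call `Bytes::new(vec![]).first()`
        // is an out-of-bounds read. A compositional proof, quantifying
        // over all callers, is exactly what catches this.
        unsafe { self.data.get_unchecked(0) }
    }
}
```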
On the benefit side: proving that libraries do not undermine Rust's safety guarantees is important, and the importance grows with the popularity of the library. And of course those guarantees aren't just "memory safety" but data-race-freedom and the general "mutable-xor-shared" property --- the importance of which is the subject of the OP here!
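For readers who haven't seen "mutable-xor-shared" spelled out: at any point a value may have either one exclusive (`&mut`) reference or any number of shared (`&`) references, never both. A minimal sketch of the borrow checker enforcing it:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    let shared = &v[0];   // shared borrow: any number of these allowed
    // let excl = &mut v; // rejected if uncommented: `v` cannot be
    //                    // borrowed mutably while `shared` is alive
    println!("{shared}");

    let excl = &mut v;    // fine here: the shared borrow has ended
    excl.push(4);
}
```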
> You correctly identified compositionality as a key issue. Fortunately, verifying correct use of 'unsafe' is inherently compositional
Yes, it is an inductive property. Sadly, not a very powerful one.
> On the benefit side: proving that libraries do not undermine Rust's safety guarantees is important, and the importance grows with the popularity of the library.
It is only important if you take it as an axiom that all programs must be proven to be memory-safe. That may be an axiom accepted by some Rust fans, but it is obviously not an axiom of software. What software needs are methods that reduce dangerous bugs (where "dangerous" means different things for different programs) in the most cost-effective way. It may well be the case that, say, soundly eliminating 95% of bugs caused by memory unsafety and using the rest of the budget on unsound methods is more effective than spending it on eliminating the last 5% of those particular bugs. Software's needs do not currently justify complete sound memory safety at all costs, so the question of cost is still the most relevant one. There is no justification to focus on one cause of bugs just because it can be soundly eliminated.
> And of course those guarantees aren't just "memory safety" but data-race-freedom and the general "mutable-xor-shared" property --- the importance of which is the subject of the OP here!
Neither of these is worthwhile enough to move the needle. This is precisely what I mean by the myopic focus on what can be done as opposed to what needs to be done. If the gap is 10 orders of magnitude, narrowing it by one order doesn't have a big impact, unless doing so were completely free, which it isn't.
Your focus on "the most cost-effective way" is fine in principle, but in practice we don't know ahead of time what bugs exist and what their impact is. Experience has shown that the prevalence and impact of memory safety and data race bugs are very difficult to predict, and that the impact has very high variance. There are so many examples of memory safety bugs and races that lie dormant for years and suddenly become very important (e.g. because someone malicious discovered them, or because the software started being used in a different context).
Note that these safety bugs are really different from most other kinds of bugs in that their potential impact is unbounded: they can cause the program to behave in arbitrary ways. Logic errors, even stuff like non-UB integer overflow, have impact that is still constrained by the expected semantics of the programming language.
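A small sketch of that distinction (the out-of-bounds line is commented out precisely because it has no defined behavior to demonstrate):

```rust
fn main() {
    // A logic error with bounded impact: the overflow is wrong, but
    // the language still defines the outcome (two's-complement wrap
    // here; plain `+` would panic in debug builds).
    let x: u8 = 250u8.wrapping_add(10);
    assert_eq!(x, 4);

    // Memory unsafety has no such bound. The line below is undefined
    // behavior: the compiler may assume it never runs, after which
    // the program can do anything at all.
    let v = vec![1, 2, 3];
    // let oob = unsafe { *v.as_ptr().add(10) }; // UB: out-of-bounds read
    let _ = v;
}
```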
Given that uncertainty, it makes sense to take precautions. If you have cheap tools to eliminate those bugs, do it just in case. That might even be cheaper than trying to figure out ahead of time how important those bugs are going to be over the lifetime of the software (which is often an impossible task anyway).
Ah, you seem to think I proposed using formal verification to eliminate 100% of memory safety bugs no matter what the cost. In fact I didn't say that, and I don't recommend that. There will certainly be cases, typically in very large libraries that use 'unsafe', where the invariants you have to prove are very complicated and fragile and difficult to verify, and therefore the cost outweighs the benefit.
What I'm looking forward to is formalisms and tools to pick the "low-hanging fruit" --- as I said, verifying the correctness of small libraries that use 'unsafe'. In small libraries, the invariants and proofs are generally smaller, and proof automation is correspondingly more effective.
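As one flavor of what that could look like (I'm not claiming this is the endgame, just an existing illustration): a bounded model checker like Kani can already discharge this kind of obligation for a tiny wrapper with a few-line harness, run via `cargo kani`:

```rust
// The hypothetical wrapper from upthread, repeated for self-containment.
pub fn get_clamped<T>(slice: &[T], i: usize) -> Option<&T> {
    if i < slice.len() {
        // SAFETY: bounds were just checked.
        Some(unsafe { slice.get_unchecked(i) })
    } else {
        None
    }
}

#[cfg(kani)]
#[kani::proof]
fn get_clamped_never_faults() {
    let data = [0u8; 8];
    let i: usize = kani::any(); // symbolic index: stands for every usize
    // Kani explores all values of `i`, so this single harness covers
    // every calling context for this input shape: the call must never
    // fault, and it returns Some exactly when the index is in bounds.
    let r = get_clamped(&data, i);
    assert_eq!(r.is_some(), i < data.len());
}
```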