> Things like “this writes to the database”, “this makes an unbounded query”, “this requires a random number generator”, rather than a nebulous “oh this has I/O” is very powerful and useful for maintaining a system
1. How do you know that?
2. Why?
3. Why is tracking the effects in the type system useful? Even if and where this is important, it could be easily achieved using something like Java's 20+-year-old security manager, and with finer granularity (i.e. you can allow code to read or write certain files, connect to certain domains, etc.) [1]. Just because you can do something in the type system doesn't mean you should, as it may not be the best way.
Because it is a way to force you to think about how you structure your code. You necessarily pull out the pure parts and handle all the impure parts explicitly. This also makes your code very easy to test.
Why is it done in the type system? Well, because the goal is to have guarantees at compile-time, and this happens to be something that can be expressed in a sufficiently powerful type system.
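To make that concrete, here is a minimal Haskell sketch of the pure/impure split (the names `applyDiscount` and `fetchPrice` are hypothetical, and `fetchPrice` merely stands in for a real database or network call):

```haskell
-- The pure pricing logic carries no IO in its type, so the compiler
-- guarantees it cannot touch the database, the network, or the clock.
applyDiscount :: Double -> Double -> Double
applyDiscount rate price = price * (1 - rate)

-- The impure part is explicit: its type says it performs IO.
-- (A stand-in here; a real version would query a database.)
fetchPrice :: IO Double
fetchPrice = pure 100.0

main :: IO ()
main = do
  price <- fetchPrice
  print (applyDiscount 0.25 price)  -- the pure core is trivially testable
```

Testing `applyDiscount` needs no mocks or fixtures, precisely because its type rules out any hidden effects.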
> Well, because the goal is to have guarantees at compile-time
No, that's no one's goal. The goal is to increase correctness as much as possible prior to shipping. BTW, reducing soundness guarantees is all the rage in FM research these days [1] precisely because soundness has a high cost, and it is rarely needed for non-simple, technical properties.
This comment is deeply misleading. O'Hearn's incorrectness logic that you cite in your [1] is perfectly sound. It just changes the meaning of Hoare triples from over-approximation to under-approximation. In brief:
- Hoare logic soundly over-approximates
- Incorrectness logic soundly under-approximates
Sound under-approximation is extremely useful when you want to prove that a certain problem must arise in code, rather than may arise. The problem with logic-based program verification has often been that your prover could not prove a program correct, but, due to the over-approximation of traditional Hoare logic, the best you could learn from this failure was (simplifying a great deal) that something may go wrong. That is not particularly useful in practice, since this is often a false alarm.
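The must-arise/may-arise distinction can be illustrated with a small Haskell sketch (`firstEven` is a hypothetical example function): an over-approximate analysis could only warn that the `head` here *may* fail, whereas a concrete witness input proves the failure *must* occur on at least one real execution.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Partial function: `head` crashes when the input contains no even element.
firstEven :: [Int] -> Int
firstEven = head . filter even

main :: IO ()
main = do
  -- The input [1,3,5] is an under-approximate witness: running it proves
  -- the bug is real, not a false alarm from over-approximation.
  r <- try (evaluate (firstEven [1, 3, 5])) :: IO (Either SomeException Int)
  putStrLn (case r of
    Left _  -> "bug confirmed by concrete witness"
    Right _ -> "no error on this input")
```

This is also why the thread below compares incorrectness logic to testing: both establish facts about particular executions.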
I am well aware. In software verification, a "sound" verification method usually means no missed bugs. Obviously, the logic is sound in the sense that it cannot be used to prove false statements; in that same sense, tests are also sound, and yet in software verification, tests are considered an unsound verification technique (which, these days, is increasingly seen as a good thing).
Tests are not sound in the sense of conventional program logic (over-approximation). O'Hearn's paper [1] even proves a soundness theorem. So when you say [1] is not sound, this is confusing for the reader.
> Tests are not sound in the sense of conventional program logic (overapproximation).
Of course they are, in the logic sense of "sound", and in the same sense that incorrectness logic is sound. A test is a proof of a proposition about a single program behavior (provided that the execution is deterministic). But neither testing nor incorrectness logic are sound in the sense that term is used in software verification, namely as proofs of some explicit or implied universal proposition. In software verification, a sound technique is one that guarantees conformance to a specification over all behaviors (or conversely, the absence of spec violations in any behavior).
This strikes me as a pretty bizarre statement. The more checks I can put into the compiler, the fewer tests I have to write. Tests are also code and come with development and maintenance costs. Since a software project's costs tend to correlate with its line count, the less you have to write to get the job done, the better.
Type checks done by the compiler and tests that you manually have to write are not the only two available techniques. Again, the goal is not to have the compiler do anything, but to produce software that's as correct as needed for the lowest cost.
As for 3: Haskell already does this today with the IO monad.
I can see how having something more specific for effects than the most generic IO type is the next step on an evolutionary path for languages with an advanced type system. The concern about whether or not it is worthwhile then extends to the whole type system.
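One common way to get "more specific than IO" today is an mtl-style type class per effect. A sketch, where `MonadDb`, `lookupUser`, and `greet` are hypothetical names (not a standard library API):

```haskell
import Data.Functor.Identity (Identity, runIdentity)

-- One class per effect: this names "reads from the database" specifically,
-- rather than the catch-all IO.
class Monad m => MonadDb m where
  lookupUser :: Int -> m String

-- The type says: greet may read the database, and nothing else.
greet :: MonadDb m => Int -> m String
greet uid = do
  name <- lookupUser uid
  pure ("hello, " ++ name)

-- Production instance: backed by IO (the query is a stand-in here).
instance MonadDb IO where
  lookupUser _ = pure "alice"

-- Test instance: pure, no IO at all, which is what makes testing cheap.
instance MonadDb Identity where
  lookupUser _ = pure "bob"

main :: IO ()
main = do
  putStrLn (runIdentity (greet 1))  -- pure test path
  greet 1 >>= putStrLn              -- real IO path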
[1]: https://docs.oracle.com/en/java/javase/13/security/permissio...