The article doesn't specify what exactly is permitted by an unsafe vs. a safe function. The Rust reference indicates the additional operations permitted by unsafe functions are:
1. Dereferencing a raw pointer.
2. Casting a raw pointer to a safe pointer type.
3. Calling an unsafe function.
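For reference, here's a minimal sketch of those operations in today's syntax (the function name add_one is just for illustration, and the exact list in the reference has changed over time):

    unsafe fn add_one(p: *const u32) -> u32 {
        // An unsafe fn: callers must promise `p` is valid and properly aligned.
        *p + 1
    }

    fn main() {
        let x: u32 = 7;
        let p: *const u32 = &x;        // creating a raw pointer is safe
        let a = unsafe { *p };         // 1. dereferencing a raw pointer
        let r: &u32 = unsafe { &*p };  // 2. reborrowing it as a safe reference
        let b = unsafe { add_one(p) }; // 3. calling an unsafe function
        println!("{} {} {}", a, r, b);
    }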
Indeed, there's been some discussion regarding forcing users to specify what sort of unsafety they desire when using unsafe blocks. There's no syntax whatsoever yet, but consider something like the following:
    unsafe(no-bounds-checks) {
        ...
    }
or:
    unsafe(raw-pointers) {
        ...
    }
Right now the problem is that once you drop into an unsafe block, everything that is unsafe becomes allowed. Obviously this is bad for correctness and bad for readability.
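A small sketch of what I mean (the function name read_pair is hypothetical): the block is opened to allow one raw-pointer dereference, but the unchecked slice access compiles just as silently inside it.

    // The unsafe block was written for the pointer dereference, but it also
    // licenses the unrelated get_unchecked call with no extra ceremony.
    fn read_pair(data: &[u8], ptr: *const u8, idx: usize) -> (u8, u8) {
        unsafe {
            let a = *ptr;                     // the operation we meant to allow
            let b = *data.get_unchecked(idx); // slips in under the same block
            (a, b)
        }
    }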
It seems to me one of the prime difficulties with trying to differentiate types of unsafe behavior is that often one will imply another. I recall that in C#, there is only one type of unsafe block and the only unsafe operation it allows is raw pointer manipulation, but since you could theoretically point into the runtime itself, the set of implied consequences of unsafe blocks is effectively unbounded. If Rust manages to drop its runtime it's not quite so easily susceptible, but even so, returning a pointer to a vtable outside of the unsafe block implies return-oriented programming, which in turn implies unbounded capabilities.
I guess part of the issue is, what is the goal of marking code as unsafe? The use case C# was trying to serve was that you could be consuming a third-party assembly without access to source, and you want to know what the possible consequences of running it are. If Rust's main goal is being able to audit your own code for safety (either by hand or by tools), rather than trusting external binaries, then it makes more sense to take a feature-oriented approach rather than a consequence-oriented one.
Rust's pointer taxonomy might also help with providing degrees of consequences. You could, for example, require that raw pointers only be cast to pointers of non-sendable kind, which would let you circumvent type safety while still controlling the scope of possible data races. Restricting lifetimes might be useful too, but that's beyond the limits of my experience.
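I don't know what the syntax for that would look like, but here's a rough analogue of the idea in terms of today's types, using a hypothetical ThreadLocalRef wrapper: the raw pointer gets laundered into a reference, yet the wrapper stays non-sendable, so whatever type-safety holes it opens are confined to one thread.

    // Hypothetical sketch: *const T is !Send, so this wrapper can never cross a
    // thread boundary unless someone explicitly writes `unsafe impl Send` for it.
    use std::marker::PhantomData;

    struct ThreadLocalRef<'a, T> {
        ptr: *const T,
        _lifetime: PhantomData<&'a T>,
    }

    impl<'a, T> ThreadLocalRef<'a, T> {
        // Unsafe: the caller must guarantee `ptr` is valid for the lifetime 'a.
        unsafe fn from_raw(ptr: *const T) -> Self {
            ThreadLocalRef { ptr, _lifetime: PhantomData }
        }

        fn get(&self) -> &'a T {
            unsafe { &*self.ptr } // sound only because construction was unsafe
        }
    }

    fn main() {
        let x = 42u32;
        let r = unsafe { ThreadLocalRef::from_raw(&x as *const u32) };
        println!("{}", r.get());
        // Does not compile, because ThreadLocalRef is !Send:
        // std::thread::spawn(move || println!("{}", r.get()));
    }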
> I guess part of the issue is, what is the goal of marking code as unsafe?
Basically, it's a way to be able to find the unsafe code by grepping for it. It's a mechanism for security auditors to be able to search for the unsafe code and audit it.
It also serves as some social pressure to avoid unsafe code.
Actually, the OP was talking about Safe Haskell, which is not the same thing as unsafePerformIO; the two are related but different. Safe Haskell doesn't really map to Rust's unsafe functions, but unsafePerformIO does.
Strongly typed languages for systems programming usually use the unsafe concept for the usual dirty tricks of low-level programming that call the language's safety into question.
You can find this in Ada, Modula-{2,3}, D, Oberon and many others.
If they're really into reducing bugs, they should embed a complete proof system into the language, rather than try to guarantee something as simple as correct memory references - which isn't much of a problem to ensure for anyone serious about the correctness of their code.
Add to that: if a sufficiently powerful proof system were available, and people actually used it, the language could do away with the unsafe overrides. It could also be of great help to the compiler in optimizing the code, like removing array bounds checks where the programmer has supplied a proof that the index is in range.
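Today's Rust has no way to supply such a proof, but a single up-front assertion plays a loosely similar role; in a sketch like the following (the sum_first_n name is just for illustration), the optimizer can often drop the per-element bounds checks once the assert has established the range:

    fn sum_first_n(v: &[u32], n: usize) -> u32 {
        assert!(n <= v.len()); // the "proof obligation", checked once at runtime
        let mut total = 0;
        for i in 0..n {
            total += v[i];     // the optimizer can usually elide this bounds check
        }
        total
    }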
This isn't a value judgment. I just find it an interesting distinction.