Hacker News

Indeed, there's been some discussion regarding forcing users to specify what sort of unsafety they desire when using unsafe blocks. There's no syntax whatsoever yet, but consider something like the following:

  unsafe(no-bounds-checks) {
      ...
  }
or:

  unsafe(raw-pointers) {
      ...
  }
Right now the problem is that once you drop into an unsafe block, everything that is unsafe becomes allowed. Obviously this is bad for correctness and bad for readability.



It seems to me one of the prime difficulties with trying to differentiate types of unsafe behavior is that often one will imply another. I recall that in C#, there is only one type of Unsafe block, and the only unsafe operation it allows is raw pointer manipulation; but since you could theoretically point into the runtime itself, the set of implied consequences of Unsafe blocks is effectively unbounded. If Rust manages to drop its runtime it's not quite so easily susceptible, but even so, returning a pointer to a vtable outside of the unsafe block implies return-oriented programming, which implies unbounded capabilities.

I guess part of the issue is: what is the goal of marking code as unsafe? The use case C# was trying to serve was consuming a third-party assembly without access to source, where you want to know the possible consequences of running it. If Rust's main goal is being able to audit your own code for safety (either by hand or by tools), rather than trusting external binaries, then it makes more sense to take a feature-oriented approach rather than a consequence-oriented one.

Rust's pointer taxonomy might also help with providing degrees of consequences. You could, for example, restrict raw pointers so that they can only be cast to pointers of non-sendable kind, which would let you circumvent type safety while still controlling the scope of possible data races. Restricting lifetimes might be useful too, but that's too far beyond the limits of my experience.


> I guess part of the issue is, what is the goal of marking code as unsafe?

Basically, it's a way to find the unsafe code by grepping for it: a mechanism that lets security auditors locate all the unsafe code and audit it.

It also serves as some social pressure to avoid unsafe code.


This is similar to the {$R+} and {$R-} range-checking directives offered by the Borland Pascal dialect and other systems programming languages in the Pascal family.



