
The article doesn't specify what, exactly, is permitted by an unsafe function vs. a safe one. The Rust reference indicates that the additional operations permitted by unsafe functions are (a quick sketch follows the list):

    1. Dereferencing a raw pointer.
    2. Casting a raw pointer to a safe pointer type.
    3. Calling an unsafe function.
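For the curious, a minimal sketch of what those three operations look like in current Rust syntax (illustrative only; danger is a made-up unsafe function):

  unsafe fn danger() {}

  fn main() {
      let x: u32 = 10;
      let p: *const u32 = &x; // making a raw pointer is safe
      unsafe {
          let y = *p;         // 1. dereferencing a raw pointer
          let r: &u32 = &*p;  // 2. casting a raw pointer to a safe reference
          danger();           // 3. calling an unsafe function
          println!("{} {}", y, r);
      }
  }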
I was expecting something more along the lines of Safe Haskell (http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/safe-...), which is more about trust than memory safety and, as such, explicitly restricts IO.

This isn't a value judgment. I just find it an interesting distinction.




Indeed, there's been some discussion regarding forcing users to specify what sort of unsafety they desire when using unsafe blocks. There's no syntax whatsoever yet, but consider something like the following:

  unsafe(no-bounds-checks) {
      ...
  }
or:

  unsafe(raw-pointers) {
      ...
  }
Right now the problem is that once you drop into an unsafe block, everything that is unsafe becomes allowed. Obviously this is bad for correctness and bad for readability.
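To make the concern concrete, here's a sketch in current syntax (launch_missiles is a made-up unsafe function):

  unsafe fn launch_missiles() {}

  fn get(v: &[u32], i: usize) -> u32 {
      unsafe {
          // intent: only skip the bounds check...
          let x = *v.as_ptr().add(i);
          // ...but the same block silently permits any unrelated unsafe call
          launch_missiles();
          x
      }
  }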


It seems to me that one of the prime difficulties with trying to differentiate types of unsafe behavior is that one often implies another. I recall that in C#, there is only one kind of unsafe block, and the only unsafe operation it allows is raw pointer manipulation; but since you could theoretically point into the runtime itself, the set of implied consequences of unsafe blocks is effectively unbounded. If Rust manages to drop its runtime it won't be quite so easily susceptible, but even so, returning a pointer to a vtable to code outside the unsafe block implies return-oriented programming, which in turn implies unbounded capabilities.

I guess part of the issue is, what is the goal of marking code as unsafe? The use-case that C# was trying to serve was that you could be consuming a third-party assembly without access to source, and you want to know what are the possible consequences of running it. If Rust's main goal is being able to audit your own code for safety (either by hand or by tools), rather than trusting external binaries, then it makes more sense to take a feature-oriented approach rather than a consequence-oriented one.

Rust's pointer taxonomy might also help with providing degrees of consequences. You could, for example, stipulate that raw pointers can only be cast to pointers of non-sendable kind, which would let you circumvent type safety while still controlling the scope of possible data races. Restricting lifetimes might be useful too, but that's too far beyond the limits of my experience.


> I guess part of the issue is, what is the goal of marking code as unsafe?

Basically, it's a way to find the unsafe code by grepping for it. It's a mechanism for security auditors to be able to search for the unsafe code and audit it.

It also serves as some social pressure to avoid unsafe code.


This is similar to the {$R+}/{$R-} style compiler directives (range checking on/off) offered by the Borland Pascal dialect and other systems programming languages in the Pascal family.


A better Haskell analog is unsafePerformIO and its relatives, which match up pretty closely with unsafe in Rust.
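One way to see the correspondence: both let you hide something unsafe behind a safe-looking interface. A rough sketch on the Rust side (first_byte is a hypothetical helper, not anything Rust ships):

  // Safe signature, unsafe body; callers can't tell the difference,
  // much as unsafePerformIO presents an IO action as a pure value.
  fn first_byte(v: &[u8]) -> Option<u8> {
      if v.is_empty() {
          None
      } else {
          Some(unsafe { *v.get_unchecked(0) })
      }
  }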


Can someone explain why this (of all the flamebaity/controversial comments I make) is getting downvoted?


Okay, since you asked: Your comment doesn't make sense.

The parent comment: "rust unsafe and haskell unsafePerformIO are different!"

Your comment: "You forgot about unsafePerformIO! It's the same as rust unsafe!"


Actually, the OP was talking about Safe Haskell, which is not the same as unsafePerformIO; the two are related but distinct. Safe Haskell does not really map to Rust's unsafe functions, whereas unsafePerformIO does.


Saying something different doesn't imply disagreement. He's just adding information.


It's phrased as if he's talking about a different topic, but actually he's talking about the same topic.


Usually, strongly typed languages for systems programming use the unsafe concept for the usual dirty tricks of low-level programming that subvert the language's safety guarantees.

You can find this in Ada, Modula-{2,3}, D, Oberon and many others.


In D, "safe" usually means "no undefined behavior". For example, you can use the @safe annotation on functions.

http://dlang.org/function.html#function-safety

http://dlang.org/safed.html


If they're really serious about reducing bugs, they should embed a complete proof system in the language, rather than merely guarantee something as simple as correct memory references, which isn't much of a problem to ensure for anyone serious about the correctness of their code.

Add to that: if a sufficiently powerful proof system were available, and people actually used it, the language could do away with the unsafe overrides. It could also be of great help to the compiler in optimizing the code, for example by removing array bounds checks where the programmer has supplied a proof that the index is in range.
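Lacking such a proof system, the closest you get in Rust today is a manually justified unchecked access, where the "proof" lives only in a comment. A sketch of the pattern:

  fn sum(v: &[u64]) -> u64 {
      let mut total = 0;
      for i in 0..v.len() {
          // "proof": the loop bound guarantees i < v.len(),
          // so the unchecked access cannot go out of range
          total += unsafe { *v.get_unchecked(i) };
      }
      total
  }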


I believe this is the programming language you are looking for: http://www.ats-lang.org/



