
Really? I guess if your typical programming languages are C and C++.

Otherwise, Rust just has semantics that allow more control over memory, as is often needed in lower-level programs, while preventing mutable aliasing. The majority of languages in existence are memory safe--some even more so than Rust. They're just not as flexible.




It's much better than Java, Kotlin and C#.

The borrow checker detects the majority (~95%) of concurrency problems. We don't have that many single-core CPUs lying around anymore.

It's got a story for high-performance, high-concurrency programs that is significantly better than anything else I've seen so far.


I'll give you that.

The reason I questioned it is that, in my experience with those languages, the 95% problem is not the actual data consistency; rather, it's the locking and synchronization hell that results from needing to make your program thread-safe to ensure data consistency. Rust says: don't get yourself into a situation where you need to do that in the first place, it's not safe. Just clone the data, leak it read-only, or Cow it.
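As a tiny sketch of the "Cow it" option (the function and its name are made up, purely for illustration):

    use std::borrow::Cow;

    // Borrow when the input is already fine; clone only when a change is needed.
    fn ensure_trailing_slash(path: &str) -> Cow<'_, str> {
        if path.ends_with('/') {
            Cow::Borrowed(path)
        } else {
            Cow::Owned(format!("{path}/"))
        }
    }

    fn main() {
        println!("{}", ensure_trailing_slash("/tmp"));  // clones into "/tmp/"
        println!("{}", ensure_trailing_slash("/tmp/")); // borrows, no allocation
    }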

What Rust does is great: it sets you up so you're never sharing references across threads unless you try really, really hard. And that's the source of the need for manual synchronization the majority of the time. However, when you do need locks, Rust doesn't do anything to help. In other words, if you copied a Java program to Rust with object instance pointers all over the place, I bet it would feel just as bad in Rust.

So I tend to think of that more as "thread safety" than "memory safety". But we might just be arguing semantics at this point. I agree Rust is far more of a pleasure to work in than Java and C#.


It has been a very long time since I’ve used Java. Rust will tell you where you need the locks, at compile time. Does Java? Serious question.


Not since I’ve used it either. I may be missing something since I’ve only used async Rust: in what way does Rust say “you need a lock here”? If it does that then I stand corrected, and I may just have to drop async Rust altogether and check out crossbeam + rayon that everyone raves about.


Rust has two traits, Send and Sync. Send means "this can be transferred to another thread," and Sync means "this can be accessed via a reference in another thread."

Here's some (contrived!) example code (for one thing I'm using thread::scope because I don't want to deal with joining the threads):

    use std::thread;
    use std::rc::Rc;
    
    fn main() {
        let v = Rc::new(vec![1, 2, 3]);
        
        thread::scope(|s| {
            s.spawn(|| {
                do_work(v.clone());
            });
            
            s.spawn(|| {
                do_work(v.clone());
            });
        });
    }
    
    fn do_work(v: Rc<Vec<i32>>) {
        unimplemented!()
    }
This gives:

    error[E0277]: `Rc<Vec<i32>>` cannot be shared between threads safely
      --> src/main.rs:8:17
       |
    8  |           s.spawn(|| {
       |  ___________-----_^
       | |           |
       | |           required by a bound introduced by this call
    9  | |             do_work(v.clone());
    10 | |         });
       | |_________^ `Rc<Vec<i32>>` cannot be shared between threads safely
       |
Rc is not thread-safe. We try to send it into some threads. It doesn't work. Switching to Arc, which does use atomic reference counts and therefore is thread-safe, does. The same principle would apply with Mutex if we were trying to modify the vector: Rust will yell at us.
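Here's a rough sketch of the working version, assuming we also want the threads to modify the vector: Rc swapped for Arc, and the Vec wrapped in a Mutex.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Arc's reference count is atomic, so Arc<Mutex<Vec<i32>>> is Send + Sync
        // and the scoped threads below compile cleanly.
        let v = Arc::new(Mutex::new(vec![1, 2, 3]));

        thread::scope(|s| {
            s.spawn(|| {
                do_work(v.clone());
            });

            s.spawn(|| {
                do_work(v.clone());
            });
        });

        println!("{:?}", *v.lock().unwrap());
    }

    fn do_work(v: Arc<Mutex<Vec<i32>>>) {
        // The Mutex is what lets both threads mutate the shared Vec safely.
        v.lock().unwrap().push(4);
    }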

One really, really nice thing about this is that it'll check no matter how far "down" into the details the thread unsafety is. There's a story Niko told in a presentation of his about how he was doing some refactoring and added a type that wasn't thread-safe, like, four or five layers down from where the threading happened. rustc caught it immediately, and therefore it was obvious. It would have been a heisenbug in other languages.

Async Rust also uses Send/Sync; for example, tokio::spawn requires a Send bound, just like spawning a thread does. I do know there are some tricky deadlock cases there, if I recall? But deadlocking isn't what I'm talking about; no aspect of Rust statically prevents those.
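Here's a similarly contrived sketch of that, assuming tokio: holding an Rc across an .await makes the spawned future non-Send, so tokio::spawn rejects it at compile time, just like the scoped spawn above did.

    use std::rc::Rc;

    #[tokio::main]
    async fn main() {
        // The spawned future keeps an Rc alive across an .await point, so the
        // future itself is not Send and this fails to compile.
        let handle = tokio::spawn(async {
            let v = Rc::new(vec![1, 2, 3]);
            tokio::task::yield_now().await;
            println!("{}", v.len());
        });
        handle.await.unwrap();
    }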


I understand Send and Sync. I see what you’re saying. Though note, even if you pass around Arcs, the inner value still has to be wrapped in a Mutex or RwLock if you want to mutate it. But I do see how Rust makes this more structured. Honestly, with async it’s usually enough to just make sure your types are Send and Sync and clone them, so that’s really the extent of what I normally have to deal with.

Re deadlocking: with async runtimes, since you have a fixed thread pool, if you use the normal locks from the stdlib you can deadlock, or more accurately stall your program, because all available executor threads end up blocked waiting on a lock. If the executor is starved, the task that would unlock the stalled threads never gets scheduled. It’s a problem unique to the task-executor paradigm (a thread-per-task version of the program, or a version that used yielding locks, would be logically correct and never deadlock). Not sure if that’s exactly what you’re talking about, but it’s a part of the language/experience I think could use some work. It would be nice if the structure that exists around data races could also exist around blocking vs. yielding calls from async tasks.
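To make the blocking vs. yielding distinction concrete, here’s a minimal sketch assuming tokio and its async Mutex (the code is just illustrative):

    use std::sync::Arc;
    use tokio::sync::Mutex; // lock().await yields to the executor instead of blocking the thread

    #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
    async fn main() {
        let shared = Arc::new(Mutex::new(0u64));

        let mut handles = Vec::new();
        for _ in 0..8 {
            let shared = Arc::clone(&shared);
            handles.push(tokio::spawn(async move {
                // Waiting here parks the task, not the worker thread, so the two
                // workers stay free to poll whichever task currently holds the
                // lock. A blocking std::sync::Mutex held across an .await could
                // instead leave every worker blocked in lock() while the holder
                // sits unscheduled, which is the stall described above.
                let mut guard = shared.lock().await;
                *guard += 1;
            }));
        }

        for h in handles {
            h.await.unwrap();
        }
        println!("{}", *shared.lock().await);
    }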



