When you have built-in support for threads in a language, it definitely makes sense that it would be easier to use than operating system mechanisms. For a lot of the non-embedded code I end up writing, though, there's an inherent benefit to using processes over threads, and it usually comes down to having separate memory spaces. You can safely use code that was never written to be thread-safe, saving the time you'd otherwise spend refactoring gnarly old code. It also makes it a lot easier to mix and match different languages. For Python in particular, it avoids having to battle with the global interpreter lock.
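
To make that concrete, here's a rough sketch of the process-based approach in Rust. The worker.py script and the chunk argument are hypothetical placeholders; the point is just that each worker runs in its own address space, never has to be thread-safe, and can be written in whatever language makes sense.

    use std::process::Command;

    fn main() {
        // Spawn workers as separate OS processes. Each one gets its own
        // address space, so the worker code never has to be thread-safe,
        // and it can be written in another language entirely (here a
        // hypothetical Python script, which also sidesteps the GIL since
        // each process runs its own interpreter).
        let children: Vec<_> = (0..4)
            .map(|chunk| {
                Command::new("python3")
                    .arg("worker.py")          // hypothetical worker script
                    .arg(chunk.to_string())    // which slice of work to do
                    .spawn()
                    .expect("failed to start worker")
            })
            .collect();

        // Results would come back over pipes, files, or sockets rather
        // than shared memory; here we just wait for the workers to finish.
        for mut child in children {
            let status = child.wait().expect("worker wasn't running");
            println!("worker exited with {status}");
        }
    }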

I think what's nice about Rust is that, because it makes it difficult to write thread-unsafe code, it's naturally easier to add threading at some point in the future without too much pain. As a result, more applications can benefit from having access to multiple CPU cores. I don't think that's quite the same thing as pure performance per watt, though. That really comes down to how the code was written and how well the compiler can optimize it. Rust may have some advantages there over C, since it constrains what you can do so much that the compiler has a smaller state space to optimize over. Someone who knows what they're doing in C, though, could likely write very efficient code that uses parallelism effectively, and may gain an edge over Rust simply by cleverly leveraging the relative lack of training wheels. For high-performance compute, Rust vs. C may be a wash. For consumer-facing applications, though, the more programs that can use multiple cores to run faster (even if less efficiently), the better.
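
To illustrate the "difficult to write thread-unsafe code" part: sharing mutable state across threads in Rust forces you through something like Arc<Mutex<...>> (or channels), because handing a plain mutable reference to several threads is a compile error rather than a latent data race. This is just a minimal sketch with made-up work, not a claim about any particular application:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared mutable state has to be wrapped in thread-safe types;
        // passing a bare &mut u64 into several threads would be rejected
        // at compile time instead of racing at runtime.
        let total = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..4)
            .map(|i| {
                let total = Arc::clone(&total);
                thread::spawn(move || {
                    // Stand-in for real per-thread work.
                    let partial = (i as u64 + 1) * 1_000;
                    *total.lock().unwrap() += partial;
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }

        println!("total = {}", *total.lock().unwrap());
    }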



