
> 1) There isn't transparent integration with IO in the runtime as in Go or Haskell. Rust probably won't ever do this because although such a model scales well in general, it does create overhead and a runtime.

By "transparent integration with the runtime" you mean M:N threading. M:N threading is just delegating work to userspace that the kernel is already doing. There can be valid reasons for doing it, but M:N threading isn't us not doing the work that we could have done. In fact, we had M:N threading for a long time and went to great pains to remove it.

In addition to the downsides you mentioned, M:N threading interacts poorly with C libraries, and stack allocation becomes a major problem without a precise GC that can relocate stacks.
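
As a minimal sketch of the 1:1 side of that point (standard library only; the stack size and the use of strlen are purely illustrative): each Rust thread gets one ordinary contiguous stack of a known size, so handing pointers to C code just works, with no segmented or movable stacks to worry about.

    use std::ffi::CString;
    use std::os::raw::c_char;
    use std::thread;

    extern "C" {
        // libc's strlen, linked by default on common platforms.
        fn strlen(s: *const c_char) -> usize;
    }

    fn main() {
        let handle = thread::Builder::new()
            .stack_size(4 * 1024 * 1024) // one plain, fixed, contiguous stack
            .spawn(|| {
                let msg = CString::new("hello").unwrap();
                // Calling into C is unremarkable here: the stack won't move
                // or grow in segments under the C code's feet.
                unsafe { strlen(msg.as_ptr()) }
            })
            .unwrap();
        println!("len = {}", handle.join().unwrap());
    }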

M:N will never be as fast as an optimized async/await implementation can be, anyway. There is no way to reach nginx levels of performance with stackful coroutines.
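
For contrast, a hedged sketch of the async/await model (the tokio crate and the numbers are my own assumptions, not something from the comment above): each task compiles to a small state machine polled by the executor, so tens of thousands of tasks can share a handful of OS threads without per-task stacks.

    use tokio::task;

    #[tokio::main]
    async fn main() {
        let mut handles = Vec::new();
        for i in 0..10_000u64 {
            handles.push(task::spawn(async move {
                // Stand-in for non-blocking work; no dedicated OS stack per task.
                i * 2
            }));
        }
        let mut sum = 0u64;
        for h in handles {
            sum += h.await.unwrap();
        }
        println!("sum = {}", sum);
    }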

> OS threads leads to lowest common denominator APIs (there is no way to kill a thread in Rust)

This has nothing to do with why you can't kill threads in Rust. We could expose pthread_kill()/pthread_cancel() on Unix and TerminateThread() on Windows if we wanted to. The reason we don't is that there's no good reason to: if any locks are held anywhere, terminating a thread that way is an unsafe operation.
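
For completeness, here is a sketch of the cooperative pattern people use instead of killing a thread, written against the standard library; the names and timings are illustrative. The worker checks a shared flag and returns normally, so destructors run and any locks it holds are released.

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::thread;
    use std::time::Duration;

    fn main() {
        let stop = Arc::new(AtomicBool::new(false));
        let stop_for_worker = Arc::clone(&stop);

        let worker = thread::spawn(move || {
            while !stop_for_worker.load(Ordering::Relaxed) {
                // Do one bounded unit of work, then re-check the flag.
                thread::sleep(Duration::from_millis(10));
            }
            // Returning normally: guards drop, locks unlock, cleanup runs.
        });

        thread::sleep(Duration::from_millis(100));
        stop.store(true, Ordering::Relaxed);
        worker.join().unwrap();
    }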

> some difficulty in reasoning about performance implications.

I would actually expect the opposite to be true: 1:1 threading is easier to reason about performance-wise, because there are fewer magic runtime features like moving or segmented stacks involved. Could you elaborate?



