
I am a big fan of Rust, but I don't think async using the "await" keyword or futures/promises is the way to go.

I wrote a large web app in Scala using Futures only, and it turned into a monster because of it. Even if you don't use callbacks, you still need to manually unwrap them, and they pollute your method signatures.

More recently I wrote a pub/sub server in C# using the "await" keyword. While much better, it still pollutes your method signatures and makes the debugger and stack traces useless. You also still have to be careful not to do anything that could block or spinlock, and not to call APIs that are not async, e.g. CreateFile.

Compare this to Go, where projects like groupcache (1) have super clean code that replaced a thousand-line C++ system at Google.

Why do you think Go's fibers/goroutines are worse?

(1) https://github.com/golang/groupcache




The answer to this question is quite involved. As you may already know, Rust at one time had a fiber model, but it was removed in favor of Futures.

One big advantage of futures is that they are decoupled from their execution environment.

Different types of executors can be used depending on the type of future. For example, you may have an IOThreadPoolExecutor to handle accepting and responding to connections, which should spend very little time blocking, and a CPUThreadPoolExecutor for offloading heavy processing. In Go, you have no say in how goroutines are scheduled; you sacrifice control to reduce complexity.

Another big one in network programming is the polling mechanism. In Go, you have no control over it, whereas a Future is decoupled from the event loop, so you can write your own event loop on top of any experimental kernel APIs you'd like.

Rust made the right move because in Rust, just as in C++, we want maximum control. Futures are more primitive, and it's possible to build coroutine models on top of them, but not vice versa.


I agree with you. I guess the best solution would be something in between "await" and goroutines.

My understanding is that the main issue with the "await" model is that you only abstract the task model.

Running tasks don't have a real ID and stack. In Go and Erlang you can list all running fibers and see how much memory and CPU they use...

I believe you could easily write your own event loop in Go if they exposed the internal API used to pause and resume goroutines. You can already pause a goroutine by having it wait on a mutex, and resume it from your custom event loop by releasing that mutex.


In Go you can't control the polling mechanism if you are using the built-in types.

You can drop down to raw fds and do whatever you like... of course, this negates a lot of the niceties that Go affords you.


> Rust at one time had a fiber model, but it was removed in favor of Futures.

No, fibers were removed with nothing to replace them. The futures as we know them today did not exist in the far, far past when fibers and the whole runtime thing got dropped.


Futures and fibers are not mutually exclusive, as shown in the latest (2018) Oracle Code One keynote, where they demoed Project Loom (basically, you can wrap a fiber in a CompletableFuture, in the same fashion that you can wrap a thread/execution context and expose it as a CompletableFuture). That said, the JVM team seems to have come to a similar conclusion: fibers are the least invasive approach in terms of syntax, as compared to async.


Goroutines are just threads, with a particularly idiosyncratic implementation. You can use threads and blocking I/O in Rust too.

You wouldn't want a Go-like M:N solution in Rust. It would be slower for no reason.


> You wouldn't want a Go-like M:N solution in Rust. It would be slower for no reason.

Depends on how many threads you have and what OS you’re running on.


Goroutines are very different from threads in practice.

You don't have to pay the memory usage and context-switch costs you would incur with large numbers of threads.

It's also faster to do I/O processing in batches from a single thread instead of having one OS thread per request.


Context switches on I/O are the same for 1:1 and M:N because you're eating the cost of a kernel to user context switch either way.

Memory usage per thread is a property of the GC, not M:N threading. You can have very small stacks in a 1:1 implementation too.


There are fewer context switches because you can wait for readiness on many file descriptors with a single system call (e.g. epoll_wait) and then service them in a batch, instead of issuing one blocking read() per connection.

This significantly reduces the number of system calls. I believe a context-switch round trip (from user space to the kernel and back) is much more expensive than simply switching between goroutines of the same process.


> and make the debugger and stacktrace useless

Chrome manages to produce async stack traces for its implementation of the similar JavaScript feature. I wonder if this would be possible for C# and Rust.



