I have seen this exact argument in Python and elsewhere as well. Any time someone writes a rant against async and suggests threads as an alternative, you can tell they didn't really understand what async is for and were using it for something it isn't suited to, so you know that somewhere in the rant they will say the performance improvement they expected never materialized.
Async (in any language) is not a panacea. Async is for allowing multiple things to make progress simultaneously that would otherwise be blocked on I/O. If you thread them instead, your threads will be independently blocked on I/O and you will have additional locking overhead. If you have an embarrassingly parallel task and you aren't blocked on I/O, of course async will be slower than pure parallelism, because that's not what it's for. It's almost literally so you can have one async thread consuming exactly 1 CPU doing all the I/O, and it will all make good progress.
Sometimes you are "computing a lot" over very general dataflow graphs and want to have tasks with work stealing. Async frameworks will give you that for free, seamlessly scaling to any number of underlying threads.
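To make the "one thread driving many tasks" idea concrete, here's a toy sketch in plain std Rust. The `Steps` future and the busy-polling round-robin loop are made up purely for illustration; a real executor sleeps between polls and relies on wakers rather than spinning:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy future that must be polled `left` more times before it is ready,
// standing in for an I/O operation that is not ready yet.
struct Steps { left: u32, id: u32 }

impl Future for Steps {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        if self.left == 0 {
            Poll::Ready(self.id)
        } else {
            self.left -= 1;
            Poll::Pending
        }
    }
}

// A do-nothing waker: acceptable here only because we busy-poll in a loop.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// One thread drives both tasks round-robin; each makes progress whenever
// it is polled, and neither blocks the other.
fn demo() -> Vec<u32> {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut tasks: Vec<Pin<Box<dyn Future<Output = u32>>>> = vec![
        Box::pin(Steps { left: 3, id: 1 }),
        Box::pin(Steps { left: 1, id: 2 }),
    ];
    let mut done = Vec::new();
    while !tasks.is_empty() {
        tasks.retain_mut(|t| match t.as_mut().poll(&mut cx) {
            Poll::Ready(id) => { done.push(id); false }
            Poll::Pending => true,
        });
    }
    done
}

fn main() {
    println!("{:?}", demo()); // the shorter task finishes first: [2, 1]
}
```

The shorter task completes before the longer one even though a single thread drives both, which is the whole point: concurrency without parallelism.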
> The purpose of flagging is to indicate that a story does not belong on HN. Frivolous flagging—e.g. flagging a story that's clearly on-topic by the site guidelines just because one personally dislikes it—eventually gets an account's flagging privileges taken away. But there's a new 'hide' link for people to click if they'd just like not to see a story.
This story seems to very much belong on HN. Just because the statement is opinionated and some users don't like it, it doesn't mean that we can't debate about its merits.
I also think that Async Rust is a major disadvantage and overall a mistake.
It feels like Rust is trying to be "The Language" suitable for both low-level system programming and high level application development.
But you can't do both. Rust will never be as ergonomic and simple to work in as Java, Go, OCaml, Scala, Erlang/Elixir and other high-level languages. Yet this async split brings a perilous language schism somewhat akin to D's GC/non-GC dialects, where people have to write and maintain two versions of libraries. And I doubt that parametric async will solve the problem fully.
I disagree about ergonomics. Switching to Rust allowed us to focus on the real application/business logic rather than spending time worrying about GC lag, performance, exceptions, null references, memory leaks, etc. Plus the toolchain is much nicer than in most other languages.
I’ve recently (past year) been diving deep into async Rust and the modern Rust ecosystem after a several-year hiatus (last active 2013-2016, pre-async). Maybe this advice applies to a small category of application developers, but this take overall feels reactionary (versus constructive) and immature (it cites OS primitives that don’t approach the same design space). There are pains with async Rust, but the community should lean into trying to solve them. I personally don’t feel the pains as severely as described…
Deferred computation is a primitive, and threads do not solve it.
I feel that, from a language theory level, it should be possible to implement functions that can be called in both sync and async contexts, removing the need for function coloring.
Any fundamentally blocking operation could be forced by the compiler to have two implementations: sync (normal) and async, which defers to some abstract userspace scheduler that's part of the language itself.
Sync functions that block (e.g. perform a system call) cannot be called from async functions.
(Actually, they can, but you're going to stop the whole scheduler, or at least one of its worker threads, which is something you really don't want to do...)
I believe Java virtual threads/project Loom fits what you are describing. No separate async APIs, everything is coded using a thread based model. The user decides between using platform/OS threads (thus delegating scheduling to the kernel), or using virtual threads and letting the JVM take over scheduling.
But I was wondering if the same thing could be brought to Rust, while still keeping the runtime away from the language. I probably forgot to mention Rust in the grandparent comment.
I think the better way to think about async Rust is to use it when it's beneficial to developer productivity and to avoid it when not. There are quite a few situations where it makes the code easier compared to alternatives you could come up with.
I don't think for a second that async Rust should be picked for performance reasons.
You get a feeling for what is a good use of async and bad use of async relatively easily these days as the ecosystem is maturing.
It is increasingly hard to avoid async Rust if you do any form of I/O. Most of the useful I/O-based crates assume you are doing async, with only a minor fraction of them giving you a non-async API. And of the ones that do, they are bundling an executor to power that API because they don't want to implement it twice.
I think part of what is feeding this sort of backlash is the way it creates two different Rust ecosystems, with the non-async one being decidedly a second-class citizen.
It looks like there's good movement on the proposal to bring pollster[1] or similar into the Rust standard library.
I think that's awesome. They've been afraid to "bless" an executor for good reasons, but pollster has 0 chance of "winning" even if blessed since it lacks so many features. However it's a solution to the problem you expressed: I/O crates can be async and used with pollster in sync contexts.
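For reference, a pollster-style `block_on` really is tiny; here's a sketch using only the standard library (the `Signal` type and its internals are my own illustration, not pollster's actual implementation):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// A parked-thread signal: the waker flips `notified` and pokes the condvar.
struct Signal {
    notified: Mutex<bool>,
    cond: Condvar,
}

impl Wake for Signal {
    fn wake(self: Arc<Self>) {
        *self.notified.lock().unwrap() = true;
        self.cond.notify_one();
    }
}

/// Minimal pollster-style executor: poll the future on the current thread,
/// sleeping between polls until the waker fires.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let signal = Arc::new(Signal { notified: Mutex::new(false), cond: Condvar::new() });
    let waker = Waker::from(signal.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => {
                // Sleep until some wake() call flips the flag.
                let mut notified = signal.notified.lock().unwrap();
                while !*notified {
                    notified = signal.cond.wait(notified).unwrap();
                }
                *notified = false;
            }
        }
    }
}

fn main() {
    // Any future works here, including ones returned by async I/O crates.
    let answer = block_on(async { 21 * 2 });
    println!("{answer}"); // prints 42
}
```

That's the appeal: something this small is enough to call async library code from a fully synchronous program.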
I haven't been following; would you have a link to this proposal? A quick search for pollster in rust-lang/rust and rust-lang/rfcs doesn't bring up any interesting result.
Good question. I might have been projecting my hopes to make it sound more formal, but the only thing I could find following my cookie crumbs was the mention in https://without.boats/blog/why-async-rust/
What arguments are there for async if not performance? Threads/fibers/gofuncs/actors/... are easier to reason about. Async is super helpful to avoid overhead of thousands of threads, but makes just about everything else harder.
I disagree. I've got several programs with an async-select-based main loop and others with threads, and the former are easier to reason about in my opinion. Threads hide effects.
However, Tokio tries to be the best of both threads and async and sometimes ends up being the worst of both when Sync/Send/etc creep into function signatures.
Async makes awaiting things much easier than the other primitives in the language. For instance, if you need to wait for either activity on a socket or some other event to happen, async select makes that super easy.
You can also keep async relatively local to a function that does these things and is itself blocking otherwise.
Yes, async is easier, but the scheduling granularity is a real downside. The CPU is a resource too, and it needs to be managed more carefully than cooperative scheduling allows. There's a reason people stopped using cooperative multitasking like in Windows 3.1 ...
Async in Rust is not really that. While it's true that a single poll cannot be preempted, things like task::spawn schedule tasks on a multi-threaded executor. So in a lot of cases they behave just like threads, except you await them from somewhere else.
So choose a technique and language that least limits your design freedom.
Choosing performance as your #1 priority is often a bad idea as it gets you into a strait-jacket from the start, making everything else much more difficult and slows down development to a crawl. Unless you're developing an OS kernel, perhaps. Computers are fast enough these days; let them do part of the work for you! And you can always write a faster version of your software when there is demand for it.
> Choosing performance as your #1 priority is often a bad idea
You can write inefficient code and optimize it later.
> it gets you into a strait-jacket from the start, making everything else much more difficult and slows down development to a crawl. Unless you're developing an OS kernel perhaps
The argument seems to break down: surely you don't want to be in a strait-jacket if you're developing an OS kernel. Somehow Rust is equated with always being in a strait-jacket.
The cost of writing highly concurrent programs is pretty much the same in every language except ones that have concurrency at the core (Erlang). I don't see much difference between starting with Java or Rust in terms of avoiding complexity caused by having to build things that a concurrent runtime could give to you for free.
> The argument seems to break down: Surely you don't want to be in a strait-jacket if you're developing an OS kernel.
If you're developing an OS, there is no escaping the strait-jacket. Your design freedom is severely limited by the fact that your constraints include all the applications that will run on your OS.
Async solves different problems. You can, for instance, have just a single-core CPU and still have a nice API if you have async-await. It might not be as cool at a higher level as Go's approach of channels and goroutines, but it shines in embedded; read this:
"Rust's async/await allows for unprecedently easy and efficient multitasking in embedded systems. Tasks get transformed at compile time into state machines that get run cooperatively. It requires no dynamic memory allocation, and runs on a single stack, so no per-task stack size tuning is required. It obsoletes the need for a traditional RTOS with kernel context switching, and is faster and smaller than one!"
I'm just toying with Raspberry Pi Pico and it's pretty nice.
Go and Rust have different use cases, the async-await is nice at a low level.
I don't disagree with any of this, though it might be worth mentioning that async can be useful on platforms that don't support threads, e.g. embedded or WASM.
I doubt it would have been added to the language if it was just for those use cases though.
> But even in 1999 Dan says this about cooperative M:N models:
>> At one point, M:N was thought to be higher performance, but it's so complex that it's hard to get right, and most people are moving away from it.
It is higher performance. If you have M jobs and you can get N workers to work on them at the same time, you win!
It is also complex. So if you want the feature, let the smart people working on runtime figure it out, so that each team of application developers in every company doesn't invent their own way of doing it. If not in the runtime, then let library developers invent it, so there's at least some sharing of work. (Honestly I probably prefer the library situation, because things can improve over time, rather than stagnate.)
> Many operating systems have tried M:N scheduling models and all of them use 1:1 model today.
Nope! At the application level, M is jobs and N is threads. But at the OS level, M is threads and N is cores. Would I be exaggerating to say that doing M:N scheduling is the OS's primary purpose?
> but how come M:N model is used in Golang and Erlang - 2 languages known for their superior concurrency features?
These examples are "the rule", as opposed to "the exceptions that prove the rule".
> The Coloring Problem
I'm sick of the What Color Is Your Function argument. The red/blue split exists, and not just for asynchrony. Your language can either acknowledge the split or ignore it:
* A blocking function can call a non-blocking function, but not vice-versa.
* An auth'd function can call a non-auth'd function, but not vice-versa.
* An impure function can call a pure function, but not vice-versa.
* An allocating function can call a non-allocating function, but not vice-versa.
* A subclass can call into a superclass, but not vice-versa.
* A non-deterministic function can call a deterministic function, but not vice-versa.
* An exception-throwing function can call a non-exception-throwing function, but not vice-versa.
Even the dependency inversion principle works this way: it's a plea for concretions to call abstractions, and not the other way around!
Trying to remove the red/blue split will not work, and you'll only be pretending it doesn't exist.
The "solution" (if you can call it that) is simply for library writers to expose more blue code and less red code, where possible. If your language acknowledges that red and blue are different, then application developers have an easier time selecting blue library imports and rejecting red ones. Which is somewhat aligned with the article's title. But application developers can do whatever - red/blue, go nuts.
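In Rust the blocking/non-blocking asymmetry from the first bullet is visible right in the syntax. A sketch (the `red`/`blue` names are just the metaphor from the article, and the no-op waker is a shortcut that only works because this future never suspends):

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

fn blue() -> u32 { 41 }              // sync: callable from anywhere

async fn red() -> u32 { blue() + 1 } // async may call sync freely...

// ...but the reverse needs an executor; this does not compile:
// fn caller() -> u32 { red().await } // error: `await` only allowed in async

// A do-nothing waker, sufficient for a future that is immediately ready.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Sync code can only run `red` by driving it itself, which is exactly
// what an executor's block_on does for you.
fn run_red() -> u32 {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(red());
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => unreachable!("red never suspends"),
    }
}

fn main() {
    println!("{}", run_red()); // prints 42
}
```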
That's not to say that Go is bad in this regard! It is just (always) doing the heavy lifting of abstracting over the different colors of functions for you. This may have some performance or compatibility (especially wrt FFI) costs.
Rust chose not to do this. Which approach is "right" is subjective and will likely be argued elsewhere in this thread.
I don't think anyone is suggesting that Go's concurrency model is perfect. However, the OP said "trying to remove the red/blue split will not work". This is a pretty strong claim, and Go seems like a reasonable counterexample to it.
Similarly, if someone said "trying to marry async to a language with lifetime analysis and no GC will not work", it would be reasonable to point to Rust as a counterexample, even though Rust async has various problems.
Sure. My point is that 'pretending that the distinction doesn't exist' (aka 'abstracting away from it', in less loaded language) does in fact work. Go's concurrency model is perfectly usable and successfully reaps many of the advantages of M:N scheduling.
Let's look at a less loaded example:
"Trying to remove the distinction between stack and heap allocation will not work, and you'll only be pretending that it doesn't exist."
It's true that on some level there's going to be a distinction between stack and heap allocation. But it totally does work to abstract away from this distinction ('pretend that it doesn't exist'). Go, for example, will usually allocate non-escaping values on the stack, but unless you are tweaking your code for performance, you'll never have to worry about this.
> There are two key drawbacks to this otherwise interesting and useful decision. First, Go can't have exceptions. Second, Go does not have the ability to synchronize tasks in real (wall clock) time. Both of these drawbacks stem from Go's emphasis on coroutines.
1) Go can't have exceptions? What exactly are panics, if not a peculiar implementation of exceptions? They print stack trace of the panicking goroutine, just like exceptions print stack traces of the thread they are thrown in. What exactly is the difference?
2) For real-time workloads, you can pin goroutine to an OS thread and use a spinlock. How does this make it different than in any other language?
> Since goroutine stacks are thus made disparate -- goroutines do not "share" common "ancestor" stack frames like Scheme's continuations do -- they can unwind their own stacks. However, this also means that when a goroutine is spawned, it has no memory of its parent, nor the parent for the child. This has already been noticed by other thinkers as a bad thing.
Goroutines are made to resemble lightweight threads. Maybe the author considers threads bad, but that's just a subjective opinion. But-- at the end of the blog, there's a sentence:
> OS threads provide some very nice constructs for programmers, and are hardened, battle-tested tools.
Goroutines provide almost exactly the same semantics as OS threads, so I don't really get what they're trying to say.
> Consider something of a converse scenario: Goroutine a spawns a goroutine b, without using an anonymous function this time. No closure, just a simple function spawn. Coroutine a opens a database connection. Goroutine b panics, crashing the program. The database connection is then left open as a zombie TCP connection.
On any sane OS, when the program crashes, the kernel closes the TCP connection - there is no such thing as a "zombie" TCP connection.
With all due respect to whoever the author is, I think this blogpost is full of crap.
UnixODBC in Go?? Zombie TCP connections from a crashing program. The author is clueless on these subjects. Not worth arguing over a misinformed blog post.
I argue that this code is all-red when sendMsg is allowed to spawn an extra (green)thread to do its work (at the async keyword.) The order of the prints in main is unknown. If you remove the async, the code becomes all-blue and the order of the prints becomes known.
Sure, but it’s a different effect. Go’s asynchronous “red” is either “accepts a cancellation context” or “accepts an output channel” (since goroutines can’t return values normally).
This article feels overblown. Is async Rust perfect? No, far from it. It feels like an MVP that Rust's developers neglected for a while, hopefully picking up some steam these days with the partial implementation of async functions in traits. But there are still problems with it.
Async Rust is rather nice to use when you're writing a web server. Structuring your code in an async manner is honestly very useful. Writing a composite Future or a Future state machine by hand is super tedious. Async makes most of that pain go away.
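For anyone who hasn't written one by hand, here's roughly what that tedium looks like: a toy two-step state machine next to the `async fn` that the compiler expands into something equivalent (all names here are illustrative, and the no-op waker works only because neither future ever suspends):

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// By hand: every step of the computation becomes an explicit enum state.
enum TwoSteps { First, Second(u32), Done }

impl Future for TwoSteps {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        loop {
            match *self {
                TwoSteps::First => *self = TwoSteps::Second(21),
                TwoSteps::Second(n) => {
                    *self = TwoSteps::Done;
                    return Poll::Ready(n * 2);
                }
                TwoSteps::Done => panic!("polled after completion"),
            }
        }
    }
}

// With async: the compiler generates an equivalent state machine for us.
async fn two_steps() -> u32 {
    let n = 21;
    n * 2
}

// A do-nothing waker, sufficient for futures that never suspend.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Poll a future once, expecting it to complete immediately.
fn poll_now<F: Future>(fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => panic!("not ready on first poll"),
    }
}

fn main() {
    println!("{}", poll_now(TwoSteps::First)); // prints 42
    println!("{}", poll_now(two_steps()));     // prints 42
}
```

And this is the trivial case; add borrows held across suspension points and the hand-written version gets much worse.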
> Async Rust is objectively bad language feature that actively harms otherwise a good language.
This is an objectively false statement :) and is so inflammatory that I don't see much of a reason to read past it. Especially since I, and many other people, have been using async Rust in production quite happily for years.
The OP doesn't seem to know what the Go and Erlang/BEAM runtimes are or what they do. One of their primary tasks is managing _async tasks_. 'Just use epoll'... Please, make it a livestream; I'll buy popcorn and we'll all watch you reinvent Rust, Go and Erlang.
> Leaky abstraction problem which leads to "async contamination".
There's not much of a better way to do it. I'm not sure what your exact gripe is here, other than a dogmatic one.
> Violation of the zero-cost abstractions principle.
It's not a principle, it's just a benefit of Rust's design that you get often but not always. `Clone` is not zero cost, should we throw that out too?
> Major degradation in developer's productivity.
Yawn, speak for yourself. I implemented incredibly extensive firmware with Embassy (async embedded framework) in months instead of years for a custom PCB I made. Async was literally the last thing on the list that caused problems - in fact it sped up my productivity and reduced power usage of the board overall.
> Most advertised benefits are imaginary, too expensive (unless you are FAANG) or can be achieved without async
No, they cannot. You are so confidently incorrect to an impressive extent.
Stopped reading after that section. This person has some bone to pick and left level-headedness at the door in doing so.
For example, to_owned on an owned type is a no-op typically (it's a blanket implementation).
Clone on a unit struct or a unit enum variant is also a no-op in most cases (unless explicitly implemented not to be, which is very much frowned upon).
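A quick sketch of that distinction (the `Marker` type is hypothetical):

```rust
// Clone on a unit struct is trivial: there is nothing to allocate or copy.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Marker;

fn main() {
    let a = Marker;
    let b = a.clone(); // effectively free
    assert_eq!(a, b);

    // Clone on a Vec, by contrast, really does allocate and copy.
    let v = vec![1, 2, 3];
    let w = v.clone(); // heap allocation plus element copy
    assert_eq!(v, w);

    println!("ok"); // prints ok
}
```

Same trait, wildly different costs depending on the type, which is exactly why "Clone isn't zero cost" isn't an argument for throwing the trait out.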
I understand, that wasn't really the point I was making. I don't know of a single systems language that has zero cost async abstractions. The author is making an impossible, nonsense ask.