When async is visible in the type system, it means you end up writing things like container iteration twice: once for plain, non-async values, and a second time for async operations.
Well, maybe this isn't the best example, because you already have iterators for the plain iteration part. But say you want to make an iterator for a container that needs async to fetch its data: you run into the same problem, because all the existing facilities built on the old "non-async" iterators become useless.
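The duplication can be sketched in TypeScript. This is a minimal illustration, not code from the thread: `sumSync` works over any ordinary iterable, but a container whose items need async I/O exposes an `AsyncIterable` instead, so the same logic has to be written a second time (`remoteValues` is a hypothetical stand-in for such a container).

```typescript
// Sync version: works with arrays, sets, generators, etc.
function sumSync(xs: Iterable<number>): number {
  let total = 0;
  for (const x of xs) total += x;
  return total;
}

// The async twin: same logic, different types. sumSync can't be reused,
// and neither can anything else built on the sync iterator protocol.
async function sumAsync(xs: AsyncIterable<number>): Promise<number> {
  let total = 0;
  for await (const x of xs) total += x;
  return total;
}

// Hypothetical container whose data needs async to produce
// (imagine network or disk reads behind each yield).
async function* remoteValues(): AsyncGenerator<number> {
  yield 1;
  yield 2;
}
```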
Alternatively, you let async leak into _everything_, even the parts that don't actually use async themselves.
There's nothing wrong with runtime-driven schedulers handling the threads for you, but async isn't the only way to do it. It is, however, one of the easy ways, and one that can be implemented as a library.
In particular, single-threaded async (JavaScript) can be painful because you need to be careful not to do anything for "too long" without splitting the work into several parts. In a sense, with async we're back in the era of co-operative scheduling. Even in a multi-threaded environment it can come down to luck, or faith, that not all of the allocated threads get stuck on long-running jobs while short, user-facing interactive async tasks starve. (A problem solved by over-allocating threads, of course.)
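The "splitting the work" chore looks roughly like this sketch (the names and the chunk size of 1000 are arbitrary assumptions, not from the thread): a long loop has to explicitly yield back to the event loop so that short interactive tasks get a turn, exactly the co-operative-scheduling discipline described above.

```typescript
// Cooperative chunking on a single-threaded event loop: the long
// computation is sliced so other queued tasks can run between slices.
async function sumLarge(values: number[]): Promise<number> {
  let total = 0;
  for (let i = 0; i < values.length; i++) {
    total += values[i];
    // Every 1000 items (arbitrary), yield to the event loop so that
    // short user-facing tasks aren't starved by this loop.
    if (i % 1000 === 999) {
      await new Promise<void>((resolve) => setTimeout(resolve, 0));
    }
  }
  return total;
}
```

Forget the `await` and the loop hogs the whole thread until it finishes; nothing in the type system reminds you.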
Sure, it's possible to parametrize your algorithms.
However, do you need to explicitly parametrize them, or does the language do it for you? I'm not aware of any language that removes the need to consider them and just automatically "works" with async types, whereas "normal" threading—with errors handled via exceptions—more or less works out of the box the way you expect.
> However, do you need to explicitly parametrize them, or does the language do it for you? I'm not aware of any language that removes the need to consider them and just automatically "works" with async types, whereas "normal" threading
That's correct. What I said is that you can write your algorithm once and apply it to both sync and async data structures (given certain constraints).
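One concrete instance of "write it once, with constraints", sketched in TypeScript (my example, not the commenter's): JavaScript's `for await` also accepts plain sync iterables, so a single generic function covers both. The constraint is the catch: the function itself, and therefore every caller, must become async.

```typescript
// One algorithm for both worlds: `for await` falls back to the sync
// iteration protocol when given an ordinary Iterable.
async function sum(xs: Iterable<number> | AsyncIterable<number>): Promise<number> {
  let total = 0;
  for await (const x of xs) total += x;
  return total;
}

// Hypothetical async data source (stand-in for network reads).
async function* fetchNumbers(): AsyncGenerator<number> {
  yield 1;
  yield 2;
  yield 3;
}

async function main() {
  console.log(await sum([1, 2, 3]));       // sync input
  console.log(await sum(fetchNumbers()));  // async input
}
main();
```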
But that doesn't mean async and sync feel exactly the same. Making them feel identical has been tried over and over, and it's simply impossible in any meaningful way.