Maybe Everything Is a Coroutine (nels.onl)
160 points by rsaarelm on Feb 14, 2024 | 81 comments



Not everything, but if by “coroutine” we mean a delimited continuation, then we get: exceptions, async/await, generators, and even the IO monad.

http://logic.cs.tsukuba.ac.jp/~sat/pdf/tfp2020.pdf

In short: algebraic effects.

Here’s a whole thesis on the cool things that can be done with this one simple trick: https://publikationen.uni-tuebingen.de/xmlui/bitstream/handl...
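
Not the paper's code, but here's a rough Python sketch of the "one-shot effects as coroutines" idea from that paper (the ask/print effect names and the run helper are made up):

    # A generator "performs" an effect by yielding a request; the handler
    # interprets it and resumes the suspended continuation exactly once.
    def program():
        name = yield ("ask", "What is your name?")
        yield ("print", "Hello, " + name + "!")
        return 0

    def run(gen):
        try:
            request = next(gen)
            while True:
                op, arg = request
                if op == "ask":
                    request = gen.send("world")   # resume with the effect's result
                elif op == "print":
                    print(arg)
                    request = gen.send(None)
        except StopIteration as stop:
            return stop.value

    run(program())   # prints "Hello, world!"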


> Here’s a whole thesis on the cool things that can be done with this one simple trick

"Professors HATE HIM"


I've been trying to learn more about algebraic effects and I want to use them for practical things. It has been hard, though: I understand the theoretical reasons for using them at some level, but the actual day-to-day experience of using them is something else. I think the most popular effect library for a mainstream language is effect.ts; it's a nice library and it also uses coroutines, but using it in a practical way isn't actually productive for me. I think a lot of the functional programming community gets stuck in the f . g trap: as if function composition were the one thing you'd want to do all day (even though you can), when from a purely practical perspective it's often very little work to just write f' and g' as well.


If you want to look at a very mature and used in industry effect system take a look at Cats Effect 3 in Scala.


If anyone is interested, we are leveraging delimited continuations as the foundational paradigm for structured concurrency and async flow control in the browser.

I gave a talk about it recently at Michigan Typescript: https://www.youtube.com/watch?si=Mok0J8Wp0Z-ahFrN&v=uRbqLGj_...

https://github.com/thefrontside/effection

https://github.com/neurosnap/starfx

With delimited continuations, we are able to express any async flow control; it is an incredibly powerful paradigm.


I get the first three but what is the connection to the IO monad?


I guess every monad can be expressed as the continuation monad; and that includes the IO monad?


If you check out the linked paper you'll see the title is "One-shot Algebraic Effects as Coroutines" - the keyword being "one-shot".

In general, every monad can be expressed by "interpreting" the free monad. This relaxes the "one-shot" restriction and can be implemented using delimited control. One-shot means faster performance, though, and is still enough for many things; that is the part that can be implemented using coroutines.

Once you understand what this means, you'll have a very good idea of the expressive power you get from coroutines alone (with nothing more), which is what makes it so interesting.
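
And a concrete way to see the "one-shot" restriction with plain Python generators (rough sketch): the suspended continuation can be resumed once, but there's no way to copy it and resume it a second time from the same point.

    def g():
        x = yield "choose"
        return x * 2

    gen = g()
    next(gen)                 # suspended at the yield
    try:
        gen.send(1)           # first (and only possible) resumption
    except StopIteration as s:
        print(s.value)        # 2
    try:
        gen.send(2)           # the same suspension point cannot be resumed again
    except StopIteration:
        print("continuation already used")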


I'm not quite sure if the free monad (or the 'freer' monad) can simulate the continuation monad? See https://stackoverflow.com/questions/25827271/how-can-the-con...


It's been a while since I was steeped enough in this to answer comprehensively, but per memory, up to performance concerns, they're equivalent in inductive settings.

Haskell ends up being a bad place to talk about this (or a great place, depending on your goals) because due to laziness you get to write a lot of structures which look inductive but end up being able to express coinductive structures.

From memory and intuition, if you're working in a strict language (or better yet, something like Agda where the distinction becomes very sharp) then you end up finding that continuations make for good coinductive "free" structures and the free monad (and its ilk) make for good inductive free structures.

The distinction between inductive and coinductive types is fairly subtle and hard to see in most languages where those distinctions are blurred, but broadly you can think of inductive structures as ones that are, in principle, finite and coinductive structures as being those which may be, in principle, infinite.

For example, a linked list is inductive. If you're looking at one cons cell of it you can't prove that; you may have to chase pointers for longer than your patience allows, but at least in principle there is an end. A stream is coinductive, because it instead suggests a generative process.
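
In Python terms (just an intuition pump, not the type-theoretic story), a list is finite in principle, while a generator describes a generative process that may never end:

    from itertools import islice

    finite = [1, 2, 3]            # inductive-flavoured: there is an end, in principle

    def naturals():               # coinductive-flavoured: a generative process
        n = 0
        while True:
            yield n
            n += 1

    print(sum(finite))                       # fine: the whole thing exists
    print(list(islice(naturals(), 5)))       # we can only ever observe a prefix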


In a sense, inductive types are "naturally strict" whereas co-inductive types are "naturally lazy". Though there are some types, such as products/tuples and arrays, that come in both strict and lazy varieties. Ultimately, this would allow one to equally account for both strict and lazy evaluation in a very natural way - quite unlike languages like ML or Haskell, where only one form is natural and idiomatic whereas the other has to be added as an afterthought.


After reading that, I think freer can in fact encode the continuation monad.


https://www.reddit.com/r/haskell/comments/7yll62/comment/duh... also suggests that.

A quick scan of https://okmij.org/ftp/Haskell/extensible/more.pdf doesn't yield much one way or another.


I was playing with Haskell years ago and remembered a cheap trick. Just make a typeclass with all the IO ops you want and also make it a monad. You now have your own custom IO monad with only the bits you want to “give access to”. It is easier to understand than free monads (which I never truly grokked but could happily copy/paste adapt and get em to work, sans understanding!)


I used to argue that lazy evaluation and coroutines were kind of two sides of the same coin: that, in a way, partially evaluated Haskell functions were equivalent to concurrent execution.

It's just the way we force the results to appear that differs.

IO Monad, if you squint, is extremely similar.

Continuations are a different way to "store a procedure and state" (like a partially evaluated function in a lazy sense).

It's not totally obvious, but it's a lot of fun to think about how these things are all related.


I'd prefer to separate the concepts of "exception" and "unwinding".

In languages such as C++ and Java, raising an exception defaults to an unwinding only if not handled within the same function that raised it.

Then there are languages with resumable exceptions, and languages wherein unwinding is considered normal control flow as "shortcut returns".


Also, "exceptions" and "exception handlers".

Exceptions are merely the record of the programmer's mistake. Essentially an error, except an error that could have been caught at compile time given a sufficiently advanced compiler.

Exception handlers provide a control flow for dealing with exceptions, which may include unwinding.

Exception handlers, while primarily intended for use by exceptions, are not necessarily restricted to exceptions. Often programmers use them to move other things around, most notably errors.


Most places where exceptions are thrown do not want to handle the case of resumption. Being usefully-resumable requires careful design at the site where the exception occurs, it isn’t something you can just throw in (no pun intended) as a general feature of all exceptions.

From an interface-contract point of view, exceptions model the case that an operation cannot complete normally. Any mechanism that enables an operation to complete normally after all upon encountering an error condition, should better be modeled with a separate language feature, for example with callbacks specific to the concrete error condition.


Can you give an example of languages with resumable exceptions (just want to check it out)?


Common Lisp, Smalltalk, Ruby, Elixir, ...

A handler for a resumable exception is passed down the call stack, rather than an object being passed up the call stack to the handler. In some languages the handler can conditionally decide whether to resume back into or restart the block that raised the exception, or to cause an unwinding of the stack. In other languages, the action is fixed per exception type. (`try ... rescue`)
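
Python has none of this built in, but the shape of the mechanism can be sketched by explicitly passing a handler down the call stack (the names here are made up):

    # The handler runs at the point of the error, so it can supply a
    # replacement value (resuming the computation) or re-raise to unwind.
    def parse_number(text, on_bad_number):
        try:
            return int(text)
        except ValueError as err:
            return on_bad_number(err)    # resume with whatever the handler returns

    def use_zero(err):
        return 0

    print(parse_number("42", use_zero))    # 42
    print(parse_number("oops", use_zero))  # 0, resumed instead of unwinding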


Racket


It is an infectious disease if you're not careful, i.e. everything now becomes an algebraic effect.

Instead, define effects that are actually relevant to your particular app: DB reads/writes, I/O to third-party systems.

Model those as data-driven effects. Keep the rest pure.


Everything can be a coroutine (goroutine) in Go, very simple. Just add `go` before the function call. No need to do `async` and `await` or try to make sure it's turtles all the way down by never calling a synchronous function from within an `async` function. In Go, everything within the function proceeds synchronously unless you use `go`.

It's literally as simple as it sounds:

https://gobyexample.com/goroutines

Want to wait for a bunch of parallel coroutines to finish (sync) before proceeding? Sure, that's easy too:

https://gobyexample.com/waitgroups

This simple primitive is one of the three major Go features that made me not really pursue Rust further, simply because Rust's concurrency libraries (concurrency doesn't seem to be an intrinsic part of the language like it is in Go) seem to be more modeled after Python's or Node's. Go's `go` is even more readable than Erlang's or Scala's equivalent, so in my opinion it's one of the three great features of the language

(OT, but to me the other two major features are 1: channels being an integral part of the language, and 2: forced immediate error handling -- perhaps you disagree that this second one is a feature and prefer tracebacks, but in my experience large Go projects always seem to have cleaner code than, say, Python, probably because errors are handled very close to their origination. Lastly, there are a minimum of footguns in Go.)


I've been in a Go job recently, and I have to say, I hate the entire language - a lot - except goroutines. I normally complain a lot (in my head to myself, not to anybody else) about literally everything not being written in Java, but for this service Go was actually a really solid choice at the time. The green-thread approach plus writing "blocking" code is far superior to async/await or event-based code, and the product would have been a mess if it had used a less straightforward programming model.

But now that Java has the same feature in the form of virtual threads, I'm once again back to complaining!


Ok. But, surely there are places you could go and use Java.


Language choice is low on my list of reasons I choose a company, but there's no law that says I can't still complain to myself ;)


goroutines do not give the programmer control over suspension points, though, so they differ from coroutines as most people are apt to think of them. There was consideration for adding coroutine[1] support for the rangefunc experiment, but it didn't make the cut for 1.22, opting for a simpler model.

[1] https://research.swtch.com/coro


That's what the other great feature of Go, channels, is for.

https://softchris.github.io/golang-book/05-misc/04-goroutine...

Channels let you block (control suspension points) and wait for progress from any other thread.


That's not really a coroutine. It is a set of language features that can be used to solve similar problems in a different way, and while conflating those two things is a popular programmer pastime, they aren't the same thing.

Coroutines are one of those terms that used to have a really precise academic definition and has gotten a lot looser over the years. Python generators don't conform to the academic definition, but casually calling them coroutines is pretty popular now.

However, no matter how you slice it, goroutines aren't coroutines, and that's not the derivation of the name. They're threads. Within those threads is bog-standard structured programming. All functions have one entry point. Being able to push out values through channels is no different from a normal function that writes several things to a file; it doesn't have the distinctive characteristics of a coroutine.


Python generators do conform to the academic definition of coroutines. Specifically, they are stackless, asymmetric, coroutines. Which is somewhat nerfed and second rate, in my opinion, but that's Python for you.

I agree with you, as anyone should, that goroutines aren't coroutines at all. Preëmptive scheduling is disqualifying. I'm confident that the cute name was intended to convey "these aren't coroutines at all" but that was somewhat lost in the translation.
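
For the curious, a rough sketch of what "stackless, asymmetric" means in Python terms: control always transfers back to the caller (asymmetric), and only the generator's own frame can suspend (stackless).

    def accumulator():
        total = 0
        while True:
            n = yield total       # suspend; resumes when the caller calls send()
            total += n

    def helper():
        # An ordinary function called from the generator cannot yield on its
        # behalf; that is the "stackless" limitation.
        return 10

    acc = accumulator()
    next(acc)                  # prime it: runs to the first yield
    print(acc.send(1))         # 1
    print(acc.send(helper()))  # 11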


> used to have really precise academic definition

Out of curiosity, what is the precise academic definition?

goroutines + channels were specifically designed to solve the concurrency/communication/locking/flow/performance problems that traditional coroutines introduced. Coroutines are much heavier and introduce a lot of overhead.

Goroutines are not necessarily threads; they're more akin to green threads. In Go, they're much cheaper and lower latency, and they still automatically move between processors (subject to your control).

But, aside from lower performance and greater difficulty in locking and multi-threaded communication via shared objects, coroutines are also much harder to code with and reason about.

The primitives in Go are very simple, and that is one of the best features of the language. Running a for loop across a set of channels is very easy and built into the language, making it child's play to write a heavily concurrent network server. Channels are specifically designed to be a high-speed data bus between goroutines, rather than ever using more expensive and less safe shared memory. (Channels are preferred over shared memory in Go.)

This is a fantastic video explaining how and why Rob Pike and team developed the idea of goroutines and channels, building on their earlier work with CSP-style languages: https://go.dev/blog/waza-talk


You are thinking about threads, not coroutines. Coroutines are not “heavy” and have nothing to do with concurrency or locking.


> Channels are specifically designed to be a high-speed data bus between goroutines, rather than ever use more expensive and less safe shared memory.

What do you mean? How do you think channels are implemented? Go is open source; you can see for yourself that channels still use mutexes internally.

https://github.com/golang/go/blob/b6ca586181f3f1531c01d51d63...

If you needed one mutex lock to receive a reference to a data structure, a channel does not make that cheaper. The absolute best case is that the same OS thread served both sides of the channel transaction, but you can't plan your overall performance around that, because it becomes less likely the more loaded your program's goroutines are. In general you still have to think of both the send and receive side of the channel as short-lived exclusive locks of the mutex, with all of the contention and cache effects that implies.

Shared memory is not more expensive. Memory is memory, it's either cached on your core or not. In fact, Go still has to issue fence instructions to ensure that the memory it observes after a channel read is sequenced after any writes to that memory, so it's at best the same cost you'd have with other forms of inter-thread communication in any language.

Anyway, even that is missing the point. Go still shares memory if you used a reference type, and most types in Go end up being reference types, because it's the only way to have a variable-sized data structure (and while we're at it, string is the only variable-sized data structure that's also immutable).

The bigger problem is that Go doesn't enforce thread safety. Channels only make communication safe if you send types that don't contain any mutable references... but Go doesn't give you any way to define your own immutable types. That basically limits you to just string. Instead people send slices, maps, pointers to structs, interfaces, etc. and those are all mutable and Go does nothing to enforce that you didn't mutate them.

Even if all of that somehow wasn't true, many parallelism patterns simply don't map well to channels, so you still end up with mutexes in many parts of real world projects. Even if you don't see the mutexes, they're in your libraries. For example, Go's http.Transport contains a connection pool, but it uses mutexes instead of channels because even the Go team knows that mutexes still make sense for many real-world patterns.

https://github.com/golang/go/blob/b6ca586181f3f1531c01d51d63...

This whole "channels make Go safe" myth has to stop. It's confused a generation of Go programmers about the actual safety (and apparently performance) tradeoffs of channels. They do not make Go safer (mutable references are still mutable after being sent on a channel), they do not make it faster (the memory still has to be fenced), and heck while we're at it, they do not even make it simpler ("idiomatic" use of channels introduces many ways that goroutines can deadlock, and deadlock-free use of channels is much more complicated and less idiomatic).

The most useful thing about channels is that you can select{} on multiple of them so they partly compensate for Go's limitations around selecting on futures in general. They're a poor substitute when you actually needed to select on something like IO, where io.Reader/Writer still don't interact with select, channels, or even cancellation directly.


> Coroutines are one of those terms that used to have a really precise academic definition and has gotten a lot looser over the years. Python generators don't conform to the academic definition, but casually calling them coroutines is pretty popular now.

Could you elaborate on how Python's implementation doesn't meet the precise definition?


'goroutines' are just threads that happen to be scheduled in userspace. They are not coroutines - they don't work the same way and don't have the same features.

Pretty much nothing in the blog post is possible in Go without 'writing it yourself', which is what you are doing with channels. The same is possible in most other languages.


> No need to do `async` and `await` or try to make sure it's turtles all the way down by never calling a synchronous function from within a `async` function.

Yes, good grief this is annoying.


The irony of a language with first-class one-shot continuations not having exceptions is not lost on me.


Which language is that?

Context would suggest you are referring to Go, but Go has exceptions. It even has exception handlers. As do all of the other languages mentioned.


Blasphemy!

Everything is tables and SQL!

(hint: tables and queries implement content-addressable memory [1] and, consequently, dataflow architecture [2], which is also Turing complete)

[1] https://en.wikipedia.org/wiki/Content-addressable_memory

[2] https://en.wikipedia.org/wiki/Dataflow_architecture


I designed a syntax for this: everything is a state machine progression, a bit like the sequence types in the article.

   state1a state1b state1c | state2a state2b state2c | state3a state3b state3c
This means: wait for state1a, state1b, and state1c in any order, then move on to the next sequence of things to wait for.

In a multithreaded server or multimachine distributed system, there are global states you want to wait for and then trigger behaviour. The communication can be inferred and optimised and scheduled.

It's a BNF-style syntax, inspired by parsing technology for sequences of tokens, except here the tokens represent events.

If you use printf debugging a lot, you know that the progression of what you see is what happened, and that helps you understand what went wrong. So why not write or generate the log of the sequence of actions you want directly, and not worry about the details?

But wait! There's more. You can define movements between things.

So take an async/await thread pool; this syntax defines one:

  next_free_thread(thread:2);
  task(task:A) thread(thread:1) assignment(task:A, thread:1) = running_on(task:A, thread:1) | paused(task:A, thread:1);

  running_on(task:A, thread:1)
  thread(thread:1)
  assignment(task:A, thread:1)
  thread_free(thread:next_free_thread) = fork(task:A, task:B)
                                | send_task_to_thread(task:B, thread:next_free_thread)
                                |   running_on(task:B, thread:2)
                                    paused(task:A, thread:1)
                                    running_on(task:A, thread:1)
                                    assignment(task:B, thread:2)
                               | { yield(task:B, returnvalue) | paused(task:B, thread:2) }
                                 { await(task:A, task:B, returnvalue) | paused(task:A, thread:1) }
                               | send_returnvalue(task:B, task:A, returnvalue); 
  
Why not just write what you want to happen and then the computer works out how to schedule it and parallelize it?

I think iteration/looping and state persistence and closures are all related.

I have a parser for this syntax and a multithreaded barrier runtime which I'm working on; I use liburing. I want to get to the 500 million requests per second and ~50 nanosecond latency of the LMAX Disruptor.

The notation could be used for business programming and low level server programming I think.


This looks like it could be modelled by a Petri net. Your states are typed tokens, and tasks (i.e. transitions) are triggered by the presence of tokens and produce tokens as output.

IMHO Petri nets are the most widely applicable method for modelling concurrent processes that I've seen yet.
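
A minimal sketch of that firing rule in Python (places as names, the marking as a multiset; the place names here are made up):

    from collections import Counter

    def fire(marking, transition):
        """If every input place holds enough tokens, consume them and
        produce the output tokens; otherwise the transition is not enabled."""
        inputs, outputs = transition
        if all(marking[p] >= n for p, n in inputs.items()):
            new = Counter(marking)
            new.subtract(inputs)
            new.update(outputs)
            return new
        return None

    t = ({"state1a": 1, "state1b": 1, "state1c": 1}, {"state2": 1})
    m = Counter({"state1a": 1, "state1b": 1, "state1c": 1})
    print(fire(m, t))   # all three input tokens present, so state2 gets a token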


I've read casually around Petri nets and graphical diagrams, and I would love to learn more.

Would you like to talk more about this subject?


I think joearms laid out the rough spec for this in UBF(c):

https://ubf.github.io/ubf/ubf-user-guide.en.html

but part (c) was only ever a sketch of a service/event grammar.

I implemented REGEV (regular expression for events) in my previous job. It was a REGEX for event types, but in that case, applied to log/trace emissions from test runs. So it allowed you to specify a regex of what events to expect for success, and had various listening and pattern-matching streams for runtime verification. It did not generate full-blown state machines for every protocol grammar. The wildcard matching helped ensure the tests were not fragile to minor changes to the code, or extra trace/log emissions.

So I think it's the opposite (or complement) of what you were describing: not a specification of what should happen, but a grammar pattern to match, downstream, what actually happened.

P.S. It's closed source, but the company is bankrupt, so maybe the IP will surface some day.


Would love to talk more about this with you; do you have an email address I can reach you at?

Did your regex engine support commutative events? (Ones that can happen in either order?)


Might be useful in OS kernel programming, when threads are not available. Like in IRQs, etc.

My state machines in those contexts are such beasts sometimes, when you have to account for different combinations of DMA progress/completion, etc. Sometimes the hardware limitations you need to handle in software really make you bang your head against the wall...


Thread-per-core architectures might bring this back to the fore. Especially with things like GPUDirect, StorageDirect and all the DMA engines being slowly integrated into everything.

If you have some task graph that is static or predictable (think closed-loop control) and you need low latency, this might be your best option.


Would love to talk to you more about this; do you have an email address I can reach you at?


> wait for state1a, state1b state1c in any order

Conway worked out some results for non-serialised (commutative) events in Regular Algebra and Finite Machines about half a century ago.


This seems reminiscent of Session Types to me:

https://en.wikipedia.org/wiki/Session_type

I think one difference is that session types capture which state a process/coroutine is in after receiving a sequence of messages, whereas this system does not: it captures how one can respond to each state in isolation (e.g. I don't think you can statically capture that once a door is closed and locked, you can only receive the locked state from it and respond by unlocking it).


Hey you gave me an excuse to link to one of my favorite PWL talks: "A Rehabilitation of Message-passing Concurrency" by Frank Pfenning - https://www.youtube.com/watch?v=LRn_nPfti-Y


For my game projects in Unity I have been using coroutines quite a bit instead of the normal types of state machines.

It's a lot easier to read and write, BUT it comes with the downside that it's not easy to save or load, which makes it only usable in certain game types and/or for certain actions.


> it comes with the downside that it's not easy to save or load

Can you expand a bit on that?


I'd venture that since part of the state is stored in the coroutine stack, and you cannot extract it automatically, it's hard to have a save feature that doesn't miss anything. E.g. loading a saved game won't have the exact same outcome as the original play.


That's my take too. You'd have to preserve the entire execution state at that moment in time, including the coroutine scheduler. That's probably easier with some virtual execution model (VM) instead of native threads, stacks, etc.

The whole mess also has to be deterministic, or, as you say, it won't have the same outcome across save/load.


Exactly. Usually, with the most basic enum/int state machine, you can just save the step and all related data, then load them back later.

With a coroutine the data can be made global so it can be saved, but there's usually no flag to indicate what step we're on. On a load the data can be used, but it's really hard to get back to the exact step in the coroutine stack without building in a lot of conditional logic, which then defeats the point of using a coroutine in the first place.
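
Python generators make the same problem very visible, if that helps as an analogy (rough sketch):

    import pickle

    def cutscene():
        yield "walk to door"
        yield "open door"
        yield "enter room"

    co = cutscene()
    next(co)                     # suspended partway through the sequence
    try:
        pickle.dumps(co)         # the suspended frame can't be serialized...
    except TypeError as e:
        print(e)                 # ...so the current step is lost on save

    # ...whereas the enum/int state machine version is trivially saveable:
    state = {"step": 1}
    saved = pickle.dumps(state)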


This might have promise. I'd really like to see an example with a heterogeneous "select" construct, which is where a lot of these proposals fall down.

Go lets users do this only over channels, so the rest of the ecosystem has to bend everything into the shape of a channel, including timers, cancellation tokens, etc.

Rust generalizes this to any kind of future, though the ergonomics get rough in many cases, sometimes requiring pinning and sometimes risking dirty cancellation with no way to account for the resulting state.

Under this proposal:

How would you allow multiple coroutines to progress in parallel while selecting on the next update from each of them? When you're selecting, you don't yet have anything to send any of them, and when you get something back from one, you might have to send something to one or more of them, but they're still working and are not yet in a state where they can receive.

How would you cancel coroutines while still allowing their implementation (not the caller's) to decide how to get back to a known state? I can understand if the intention is that you always have to send them a signal to indicate this, but as above they have to be in a state where they can receive it, and it will still be extremely likely that at least one caller forgets to cancel at least one coroutine. Especially since the purity of this approach means it must work recursively as well.
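
(For context, here is the kind of heterogeneous select I mean, sketched with Python's asyncio; note how cancelling the losers is explicit and easy to forget:)

    import asyncio

    async def main():
        timer = asyncio.create_task(asyncio.sleep(1.0, result="timeout"))
        work  = asyncio.create_task(asyncio.sleep(0.1, result="data"))
        done, pending = await asyncio.wait(
            {timer, work}, return_when=asyncio.FIRST_COMPLETED)
        for task in pending:
            task.cancel()                          # forgetting this leaks the other branch
        print([task.result() for task in done])    # ['data']

    asyncio.run(main())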


The programming language Beta took this way further many years ago:

https://beta.cs.au.dk

Its syntax didn't help its spread, unfortunately.


Nice link, thank you. Simula cut down is interesting. Initial impression is that the most useful reference is likely to be https://beta.cs.au.dk/Books/betabook.pdf. There might be source available somewhere but the most obvious links on the website seem to be down.


Every time I see a post like this and read the comments I'm more convinced that there's a division between programming language enthusiasts and developers and only the former group cares about any of this.


The division between theorists and empiricists is as old as civilization itself.

The world truly belongs to those that can be both.

I agree with your take. In this case, we have one group that tries to explain the world using algebra while loudly endorsing that approach. Meanwhile, the other group just wants to write error-free state machines.


This makes me think of dataflow languages where functions are suspended until their (logic variable) arguments are sufficiently bound, then they fire, with this being a somewhat more imperative take on it.


In turn, it makes me think about relational / logic programming languages.


    door() ->
      | :open(:close)
      | :closed(:open | :lock)
      | :locked(:unlock)
What feels wrong to me in this example is that the type system doesn’t “know” that :close-ing an :open door will yield :closed. Expressed as a state diagram, it would only have a single node, whose type would be a sum type with three options, and the diagram doesn’t tell you which option you end up with for any given state transition, while at the same time you have to know the current option to know which transition you may take from the current state.

So there is an asymmetry regarding the associations to the three sub-states between the two ends of a state transition. Or in other words, the state transition arrows begin in three different states but end in a single state that is the opaque combination of the three source states.


I desperately think GPU programming (or specifically CUDA) needs some language-level support like coroutines/async/await to organize the data flow and execution among different dispatched device-side function calls, and beyond that some synchronization primitives between different blocks/warps, etc.



Worth noting that a GPU is essentially a hardware scheduler for large numbers of small threads that yields whenever one needs to wait for memory. They don't have a great way of changing the working set of threads.


Wait, how do you run two coroutines in parallel? Like, how do I write zip() or something like it?

    a() -> Int() | Void
    b() -> Int() | Void

    for(a()) {
        | na -> for(b()) {
            | nb ->
                print(na + nb)
                // now what? From here, I want to continue to a(), not b()
I don't believe there is much use for coroutines without support for their interleaved execution.


Generators are a thing in multiple languages that are essentially this (JavaScript and C#, to name a couple). They're nice for creating lazy iterators ergonomically.
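
For instance, in Python the interleaving the parent is asking about falls out of zip over two generators (rough sketch):

    def a():
        yield from (1, 2, 3)

    def b():
        yield from (10, 20, 30)

    # Each iteration pulls one value from a(), then one from b(),
    # so the two coroutines' executions are interleaved.
    for na, nb in zip(a(), b()):
        print(na + nb)       # 11, 22, 33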


Such a language would certainly have some way to take a single value from a coroutine. Probably just an omission by the author.


Yes, I know it's an omission by the author, but since interleaving is IMHO the sine qua non of coroutines, it's a glaring omission. Without it, the whole proposal looks like a large heap of machinery and complicated syntax that still only supports sequential execution, which traditional languages already support well enough, thanks.


Wait, like, there isn't any data being passed to those functions so they have like no data dependencies and can be like run in parallel and buffered.


Interleaving is so synchronous.

You have to liberate yourself from tick-tock synchrony.

There are 'a' and 'b' processes, they generate messages. You cannot predict how those will arrive. If you want them strictly alternating, you can tag each of the messages with the source, and block until you get the next other one you expect. Or you can just receive them all and make sense of them on their own terms.
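
(A rough sketch of the tagging idea in Python, with made-up names; the sums come out right no matter how the arrivals interleave:)

    import queue, threading

    inbox = queue.Queue()

    def producer(tag, items):
        for x in items:
            inbox.put((tag, x))          # tag each message with its source

    threading.Thread(target=producer, args=("a", [1, 2, 3])).start()
    threading.Thread(target=producer, args=("b", [10, 20, 30])).start()

    pending = {"a": [], "b": []}
    for _ in range(6):
        tag, x = inbox.get()
        pending[tag].append(x)
        while pending["a"] and pending["b"]:     # pair them up as they arrive
            print(pending["a"].pop(0) + pending["b"].pop(0))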


> You have to liberate yourself from tick-tock synchrony.

Thank you for telling me which kinds of program I am not allowed to write any more in the bright new future of programming.

As for the rest of your comment, I am aware of CSP and its derivatives, thanks. The question was about the language proposed in TFA that seems to be lacking the choice/select primitive.


Someone on the internet thinks they have discovered El Dorado, only to discover it's called Erlang, and there are many people already inhabiting that delightful prairie, including many new Elixirian immigrants.

Virding's First Rule of Programming:

  Any sufficiently complicated concurrent program in another language 
  contains an ad hoc informally-specified 
  bug-ridden slow implementation of half of Erlang.


Isn't this just iteratees? The examples look very similar to conduit code, unless I'm missing something.


Iteratees are a subset. They don't care about any kind of effects except for processing the data (and indicating the current state of how the processing is going).

They are deliberately simple and meant for data processing. They're not meant for, or suitable for, things like IO, whereas the continuation monad is.


Isn't the language described very similar to (future) OCaml with effects (https://github.com/ocaml-multicore/effects-examples) added?


Very OT:

First time that I've learned of the .onl TLD. I thought it was a typo, but it has actually existed for 10 years already: https://icannwiki.org/.onl


"every function is a coroutine" doesn't make much sense to me. Notation in this area is a mess though and the author doesn't define their terms - maybe they're deliberately using coroutine to mean something unusual.

A function is some executable code. A closure is that plus some mutable state. A coroutine is usually a closure because something needs to track where it was suspended. Maybe this post is from the context of one of the languages that does the function colouring trick, where the answer is usually that said colouring is motivated by performance concerns.

There is a missing abstraction in the pthreads model - one can yield the current thread, but cannot yield to some specific other thread. I vaguely remember a patch set from google to add that to linux but am having trouble finding it now.


...or maybe we should go back to cooperative multitasking with stack switching and just add some familiar language syntax sugar on top ;)

IMHO the invisible switch-case code transform only makes sense when stack switching isn't available (such as in Javascript or WASM).


This is why we keep having to learn new languages. Stop



