Hacker News

The quick answer is that coroutines are often used to implement cooperative multitasking because it is a very natural fit, but it's a more general idea than that specific implementation strategy.



interesting, i would have said the relationship is the other way around: cooperative multitasking implies that you have separate stacks that you're switching between, and coroutines are a more general idea which includes cooperative multitasking (as in lua) and things that aren't cooperative multitasking (as in rust and python) because the program's execution state isn't divided into distinct tasks

i could just be wrong tho


Yeah thinking about it more I didn’t intend to imply a subset relationship. Coroutines are not only used to implement cooperative multitasking, for sure.


well, i mean, lua's 'coroutines' are full tasks with their own stacks, unlike, say, python's 'coroutines'. so arguably it isn't that one can be used to implement the other; it's that they're two names for the same thing

lua's coroutines aren't automatically scheduled (there isn't a built-in run queue) but explicitly resumed, which is a difference from the usual cooperative-multitasking systems; arguably on that basis you could claim that they aren't quite 'cooperative multitasking' on their own

the last time i implemented a simple round-robin scheduler for cooperative multitasking was in july, as an exercise, and it was in arm assembly language rather than lua. it was 32 machine instructions and 64 lines of code (http://canonical.org/~kragen/sw/dev3/monokokko.S), plus 14 lines of example code to run in the threads. when i went to go look at that just now i was hoping to come up with some kind of crisp statement about the relative importance or complexity of the stack-switching functionality and the run-queue maintenance facility, but in fact there isn't a clear separation between them, and that version of the code creates all the tasks at assembly time instead of runtime. a more flexible version with start, spawn, yield, and exit calls, which respects the eabi so you can write your task code in c (http://canonical.org/~kragen/sw/dev3/einkornix.h et seq.), is 53 lines of assembly and 34 machine instructions, but similarly has no real separation of the two concerns
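the run-queue half does translate into higher-level terms, though. here's a toy round-robin scheduler in python (all names mine), with generators standing in for the stackful tasks, so the stack-switching half is the part python does for you at each `yield`:

```python
from collections import deque

def scheduler(tasks):
    # the run queue: tasks waiting to be resumed, in round-robin order
    run_queue = deque(tasks)
    while run_queue:
        task = run_queue.popleft()
        try:
            next(task)            # resume the task until its next yield
        except StopIteration:
            continue              # the task returned: this is 'exit'
        run_queue.append(task)    # it yielded: back of the queue

def worker(name, n, log):
    for i in range(n):
        log.append((name, i))
        yield                     # cooperative yield back to the scheduler

log = []
scheduler([worker("a", 2, log), worker("b", 2, log)])
assert log == [("a", 0), ("b", 0), ("a", 1), ("b", 1)]
```

here the run-queue maintenance and the 'stack switching' do separate cleanly, but only because python's generators hide the latter entirely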


> arguably on that basis you could claim that they aren't quite 'cooperative multitasking' on their own

Right, I think this is where I am coming from. Generators, for example, can be implemented via coroutines, but I would not call a generator "cooperative multitasking."
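To make that concrete, here is a plain Python generator (my example, not from the thread): a resumable function the caller pulls values from, with no separate task, stack, or scheduler anywhere in sight:

```python
def fib():
    a, b = 0, 1
    while True:
        yield a         # suspend here; the *caller* decides when to resume
        a, b = b, a + b

g = fib()
assert [next(g) for _ in range(5)] == [0, 1, 1, 2, 3]
```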

That's very cool! Yeah, I have never done this myself, but in my understanding implementations in assembly can be very small.

> when i went to go look at that just now i was hoping to come up with some kind of crisp statement about the relative importance or complexity of the stack-switching functionality and the run-queue maintenance facility, but in fact there isn't a clear separation between them

That's fair, but I don't think that's the final say here, as you were building a system for cooperative multitasking explicitly, with no reason to try and separate the concerns. When a system is very simple, there's much less reason for separation.

Actually, this makes me realize why I probably have this bias for thinking of them separately: async/await in Rust. The syntax purely creates a generator, it is totally inert. You have to bring along your own executor (which contains a scheduler among other things). Separating the two cleanly was an explicit design goal.
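Python's async/await has the same shape and makes a rough analogue (my example): calling an async function builds an inert coroutine object, and nothing in its body runs until some executor, here asyncio's event loop, drives it:

```python
import asyncio

ran = []

async def job():
    ran.append("body ran")   # only runs once an executor polls the coroutine
    return 42

coro = job()                 # builds the coroutine object; runs none of its body
assert ran == []             # still inert
result = asyncio.run(coro)   # asyncio's event loop is the executor you bring along
assert ran == ["body ran"] and result == 42
```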


while python-style generators aren't cooperative multitasking (by the usual definition in which cooperative multitasking maintains a separate stack for each task), they can be implemented using cooperative multitasking, which is (arguably!) what happens if you use lua coroutines to implement generators
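in python you can fake the lua approach with os threads standing in for stackful coroutines (a toy sketch, all names mine; the threads are preemptive underneath, but the strict hand-off makes the scheduling cooperative in effect):

```python
import queue
import threading

class ThreadGenerator:
    """a generator implemented with a separate stack (a thread) plus
    explicit resume/yield hand-offs, the way lua coroutines would do it"""
    _DONE = object()

    def __init__(self, fn):
        self._items = queue.Queue(maxsize=1)   # yielded values cross here
        self._resume = threading.Semaphore(0)  # 'resume' signals from the consumer

        def body():
            self._resume.acquire()             # wait for the first resume
            fn(self._yield)
            self._items.put(self._DONE)

        threading.Thread(target=body, daemon=True).start()

    def _yield(self, value):
        self._items.put(value)                 # hand the value to the consumer
        self._resume.acquire()                 # block until resumed again

    def __iter__(self):
        return self

    def __next__(self):
        self._resume.release()                 # resume the 'coroutine'
        item = self._items.get()
        if item is self._DONE:
            raise StopIteration
        return item

def counter(yield_):
    for i in range(3):
        yield_(i)

assert list(ThreadGenerator(counter)) == [0, 1, 2]
```

the body of counter runs on its own stack and only advances when the consumer explicitly resumes it, which is the sense in which a generator is being implemented by multitasking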

it certainly isn't the final say! it's just an analysis of how my own code turned out, not any kind of universal lesson

the implementation in monokokko, which reserves the r10 register to always point to the currently running task, is five instructions

            .thumb_func
    yield:  push {r4-r9, r11, lr}   @ save all callee-saved regs except r10
            str sp, [r10], #4       @ save stack pointer in current task
            ldr r10, [r10]          @ load pointer to next task
            ldr sp, [r10]           @ switch to next task's stack
            pop {r4-r9, r11, pc}    @ return into yielded context there

interestingly, what you say of rust's generators is also sort of true of monokokko

> The syntax purely creates a generator, it is totally inert. You have to bring along your own executor (which contains a scheduler among other things).

the above five instructions, or arguably just ldr r10, [r10], is the executor. the in-memory task object consists of the saved stack pointer, the link to the following task, and then whatever variables you have in thread-local storage. but from a different point of view you could say that the in-memory task object consists of the saved stack pointer, a pointer to executor-specific status information (which for this executor is the following task, or conceptually the linked list of all tasks), and then other thread-local variables. i think the difference if you were to implement this same executor with rust generators is just that you probably wouldn't make the linked list of all tasks an intrusive list?


I'm gonna have to mull over "implement generators using cooperative multitasking" a bit :)

> i think the difference if you were to implement this same executor with rust generators is just that you probably wouldn't make the linked list of all tasks an intrusive list?

You still could, and IIRC tokio uses an intrusive linked list to keep track of tasks. There are no specific requirements for how you keep track of tasks, nor even a standardized API for executors, which is why you'll hear people talk about wanting "to be generic over runtimes" and similar.
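In miniature, "intrusive" just means the link field lives inside the task itself, like monokokko's next-task pointer. A toy sketch (my names, not tokio's actual structure):

```python
class Task:
    def __init__(self, name):
        self.name = name
        self.next = self   # a lone task forms a one-element ring

def spawn(current, new):
    """splice a new task into the ring right after `current`"""
    new.next = current.next
    current.next = new

a = Task("a")
spawn(a, Task("b"))
spawn(a, Task("c"))

# walking the ring from a visits a, c, b, then wraps back around to a
names, t = [], a
for _ in range(4):
    names.append(t.name)
    t = t.next
assert names == ["a", "c", "b", "a"]
```

The run queue needs no container of its own; dropping the external list is what makes the list intrusive.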



