
Thanks, that's an interesting point of view.

Unfortunately, with event loops and async programming, including async-await models, cancellation is just as fiddly and still needs to be explicitly handled by client event handlers/awaiters.

For example, think of JavaScript and its promises or their async-await equivalent.

There is no standard, generic way to cancel those operations in progress, because it's a tricky problem.
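For instance, even with the AbortController/AbortSignal convention that has since become common, cancellation still has to be threaded through and checked explicitly at every step. A minimal sketch (the function name and URL are just illustrative):

    // Sketch: even with AbortController/AbortSignal, cancellation only works
    // where the signal is explicitly passed in and explicitly checked.
    async function fetchAndSummarize(url: string, signal: AbortSignal): Promise<string> {
      const res = await fetch(url, { signal });  // fetch cooperates with the signal
      const text = await res.text();
      if (signal.aborted) {                      // later steps must check by hand;
        throw new Error("cancelled");            // nothing cancels them for us
      }
      return text.slice(0, 200);
    }

    const ctrl = new AbortController();
    fetchAndSummarize("https://example.com/", ctrl.signal).catch(err => console.error(err));
    ctrl.abort(); // only the steps that were handed the signal react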




> cancellation is just as fiddly and needing to be explicitly handled by client event handlers/awaiters

That's not true. In event loops, to do cancellation you simply remove the event handlers for the associated client from whatever event notification mechanism you are using and delete (free) the client's data structures, including futures, promises or whatever you are using. Since references to all of them are necessary for the event loop to even call the event handlers, no awareness of any of it on the event handlers' side is required.
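A minimal sketch of that model, with made-up names (EventLoopRegistry, clientId, etc.): cancelling a client is nothing more than unregistering its handlers and letting its state become garbage.

    // Sketch of the "just remove the handlers" model; names are hypothetical,
    // not a real library API.
    type Handler = (event: unknown) => void;

    class EventLoopRegistry {
      // clientId -> eventKey -> handler
      private handlers = new Map<string, Map<string, Handler>>();

      register(clientId: string, eventKey: string, handler: Handler): void {
        if (!this.handlers.has(clientId)) this.handlers.set(clientId, new Map());
        this.handlers.get(clientId)!.set(eventKey, handler);
      }

      dispatch(clientId: string, eventKey: string, event: unknown): void {
        this.handlers.get(clientId)?.get(eventKey)?.(event);
      }

      // "Cancellation": drop every handler and let the client's state be GC'd.
      cancelClient(clientId: string): void {
        this.handlers.delete(clientId);
      }
    }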


That's not true; it only applies to a subset of simpler event scenarios.

For example, in an event loop system you may have some code that operates on two shared resources by obtaining a lock on the first, doing some work, obtaining a lock on the second, then intending to do more work and finally releasing both locks. All asynchronously non-blocking, using events (or awaits).

While waiting for the second lock, the client will have registered an event handler to be called when the second lock is acquired.

("Lock" here doesn't have to mean a mutex. It can also mean other kinds of exclusive state or temporary ownership over a resource.)

If the client is then cancelled, it is essential to run a client-specific code path which cleans up whatever was performed after the first lock was obtained, otherwise the system will remain in an inconsistent state.

Simply removing all the client's event handlers (assuming you kept track of them all) and freeing unreferenced memory will result in an inconsistent state that breaks other clients.
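A sketch of that situation, assuming a hypothetical promise-queue AsyncMutex and made-up resource-mutation functions: if the task is dropped while it is awaiting the second lock, the finally blocks never run, the first lock is never released, and the half-finished mutation is never rolled back.

    // Hypothetical async mutex and resources; names are illustrative only.
    class AsyncMutex {
      private tail: Promise<void> = Promise.resolve();
      lock(): Promise<() => void> {
        let release!: () => void;
        const nextTail = new Promise<void>(resolve => { release = resolve; });
        const acquired = this.tail.then(() => release);
        this.tail = nextTail;
        return acquired;
      }
    }

    const lockA = new AsyncMutex();
    const lockB = new AsyncMutex();
    function partiallyModifyResourceA(): void { /* mutate shared state */ }
    function finishUpdatingBothResources(): void { /* complete the update */ }

    async function updateBothResources(): Promise<void> {
      const releaseA = await lockA.lock();
      try {
        partiallyModifyResourceA();            // shared state is now mid-update
        const releaseB = await lockB.lock();   // <-- what if the task is dropped here?
        try {
          finishUpdatingBothResources();
        } finally {
          releaseB();
        }
      } finally {
        releaseA();  // only runs if this code path is resumed; silently
      }              // deleting the pending continuation skips it entirely
    }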

This is the same basic problem as with cancelling threads. And just like with event/await systems, some thread systems do let you cancel threads, and it is safe in simple cases, but an unsafe pattern in more general cases like the above example. Which is why thread systems tend to discourage it.


Nope, event loops and asynchronous programming in general don't have a concept of taking a lock, because the code in any event handler already has exclusive access to everything. I.e. everything is effectively sequentially consistent.

There are some broken ideas out there that mix different concurrency models, in particular async programming with shared-memory multithreading, not realizing they are limiting themselves to the lowest common denominator, but I was never talking about any of them.


We are clearly working with very different kinds of event loops and asynchronous programming then.

I think you use "in general" to mean "in a specific subset" here...

It is not true that every step in async programming is sequentially consistent; that only holds for a particular subset of async programming styles.

The concept of taking an async mutex is not that unusual. Consider taking a lock on a file in a filesystem, in order to modify other files consistently as seen by other processes.

In your model where everything is fully consistent between events, assuming you don't freeze the event loop waiting for filesystem operations, you've ruled out this sort of consistent file updating entirely! That's quite an extreme limitation.
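For instance, here is a rough sketch using Node's fs.promises and the common lockfile convention (the file names are made up): the "lock" is held across awaits, so other event handlers run in the middle of the critical section, and a cancellation that skips the finally leaves the lock file behind for everyone else.

    import { open, writeFile, unlink } from "node:fs/promises";

    // Hypothetical lockfile-based exclusion; paths and file layout are made up.
    async function updateFilesConsistently(): Promise<void> {
      const lock = await open("data.lock", "wx"); // fails if the lock already exists
      try {
        // Other events are still being dispatched between these awaits,
        // so this is a real critical section, not "free" exclusivity.
        await writeFile("data.part1", "new contents 1");
        await writeFile("data.part2", "new contents 2");
      } finally {
        await lock.close();
        await unlink("data.lock"); // cancellation that skips this leaves the lock stuck
      }
    }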

In actual generality, where things like async I/O take place, you must deal with consistency cleanup when destroying event-driven tasks.

For an example that I would think fits in what you consider a reasonable model:

You open a connection to a database (requiring an event because it has a time delay), submit your transaction's reads and writes (more events, because of the time to read or to stream large writes), then commit and close (a third event). If you kill the task between steps 2 and 3 by simply deleting the pending callback, what happens?

What should happen when you kill this task is that the transaction is aborted.

But in garbage-collected environments, immediate RAII is not available, so the transaction lingers, holding resources until it's collected. A lingering connection with an open transaction is a common problem with database connections.

In a less data-laden version, you simply opened, read, and closed a file. This time, it's a file handle that lingers until collected.
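To make the database version concrete, a minimal sketch with a stubbed, made-up client API (connect/query/end are illustrative names, not a specific driver): if the task is killed between the writes and the commit by deleting its pending callback, neither the commit nor any rollback ever runs.

    // Hypothetical, stubbed database client; names are illustrative only.
    interface DbConnection {
      query(sql: string): Promise<void>;
      end(): Promise<void>;
    }

    // Stub standing in for a real driver's connect().
    async function connect(_url: string): Promise<DbConnection> {
      return {
        query: async (sql) => { console.log("query:", sql); },
        end: async () => { console.log("connection closed"); },
      };
    }

    async function updateRecords(): Promise<void> {
      const conn = await connect("db://example");  // event 1: connection established
      await conn.query("BEGIN");
      await conn.query("UPDATE accounts SET balance = balance - 10"); // event 2: writes
      // If the task is killed right here by deleting its pending callback,
      // nothing below ever runs:
      await conn.query("COMMIT");                  // event 3: never reached
      await conn.end();                            // neither is this; the connection
    }                                              // and open transaction just linger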

You can call the more general style "broken" if you like, but it doesn't make problems like this go away.

These problems are typically solved by having a cancellation-cleanup handler run when the task is killed, either inline in the task (its callback is called with an error meaning it has been cancelled), or registered separately.

They can also be solved by keeping track of all resources to clean up, including database and file handles, and anything else. That is just another kind of cleanup handler, but it's a nice model to work with; Erlang does this, as do unix processes. C++ does it via RAII.
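A rough sketch of the first, inline style, with made-up Database/Tx interfaces and a hypothetical cancellable() helper: cancellation arrives inside the task as a rejection, so the task's own catch can roll the transaction back before anything is torn down.

    // Sketch of the "inline error" style; cancellable(), Database and Tx are
    // made-up names, not a real library's API.
    interface Tx {
      write(sql: string): Promise<void>;
      commit(): Promise<void>;
      rollback(): Promise<void>;
    }
    interface Database {
      begin(): Promise<Tx>;
    }

    function cancellable<T>(work: Promise<T>, signal: AbortSignal): Promise<T> {
      return new Promise<T>((resolve, reject) => {
        const onAbort = () => reject(new Error("cancelled"));
        if (signal.aborted) return onAbort();
        signal.addEventListener("abort", onAbort, { once: true });
        work.then(resolve, reject).finally(() =>
          signal.removeEventListener("abort", onAbort));
      });
    }

    async function runTransaction(db: Database, signal: AbortSignal): Promise<void> {
      const tx = await cancellable(db.begin(), signal);
      try {
        await cancellable(tx.write("UPDATE ..."), signal);
        await cancellable(tx.commit(), signal);
      } catch (err) {
        await tx.rollback(); // the cleanup path runs because cancellation
        throw err;           // arrived as an error, not as silent deletion
      }
    }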

In any case, all of them have to do something to handle the cancellation, in addition to just deleting the task's event handlers.



