Designing APIs for Asynchrony (izs.me)
24 points by iambot on Aug 24, 2013 | 6 comments



I have code that calls back synchronously if a value is in memory and asynchronously if it has to make a network call to get it. It's simple, and I'm not persuaded by this post (or the one it recommends) to make it more complicated. In this particular use case it seems enough to know that the callback may occur asynchronously, so you can't assume that anything will still be in scope.
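
For concreteness, here's roughly the shape of what I mean (the names and the fake network call are just for illustration, not my actual code):

    var cache = {};

    // stand-in for a real network call, simulated with setTimeout
    function fetchFromNetwork(key, cb) {
      setTimeout(function () { cb(null, 'value for ' + key); }, 10);
    }

    function getValue(key, cb) {
      if (key in cache) {
        cb(null, cache[key]);      // synchronous path: cache hit
        return;
      }
      fetchFromNetwork(key, function (err, value) {   // asynchronous path
        if (!err) cache[key] = value;
        cb(err, value);
      });
    }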


This is not really about scope. In the particular case of JS, scope is not an issue at all; the engine handles closures and scoping for you.

The issue is that once you start doing serious work with asynchrony -- filtering streams, handling disconnects, backing out of parse errors, and dealing with parallelism all at the same time -- then being able to reason clearly about the control flow is paramount to getting anything done.

If the code below the callback registration sometimes executes before the callback and sometimes after, depending on unpredictable runtime state, it's game over. Even if you understand the code and it works now, good luck when a dependency update introduces a new corner case, your app's control flow changes radically, and you're stuck midnight-debugging the mess.
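
Concretely, a sketch of the hazard (assume getValue is any API that sometimes calls back synchronously, like the grandparent's):

    var done = false;
    getValue('someKey', function (err, value) {
      // on the synchronous path `done` is still false here;
      // on the asynchronous path it is already true
      console.log('callback sees done =', done);
    });
    done = true;   // runs before or after the callback, depending on cache state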

I guess this is one of the things you learn in the trenches, along with avoiding global variables and not optimizing prematurely.


You're right that "scope" isn't the right word to use there, but I still can't think of a better one.

Trying to accommodate the complexity you describe ahead of time could well be its own mistake. One thing I've learned in the trenches is to wait for most things to prove to be a problem before adding complexity to address them. I believe you that there are cases like you and the OP describe, where "all sync or all async" is a good invariant to have. But I don't believe it's a general rule, the alternative to which is doing it wrong (or even the devil, as in the OP).

Another thing one learns in the trenches is that almost all general rules about software are bogus and end up distorting one's thinking. It's better to choose invariants on a system-by-system basis.


Isaacs is writing from a node context, where apps are basically collections of third-party modules glued together. In that kind of world, things are expected to work a certain way and not deviate, or your code ends up crusted over with glue and exception handling.

The issue with "wait until it's a problem" in node is that dependency updates can and will ruin your assumptions if you haven't been diligent about doing things the standard nodeish way. For better or worse, the node community pushes this pretty hard.

So the way I see it, this isn't about blind application of general rules. It's about having a standard API in your framework. And I firmly believe that the reason the npm ecosystem works so well is that people follow these standards.

Though I do agree izs appears to be overgeneralizing his conclusions.


I disagree that what you're doing is less complicated. Doing it the way the OP describes is less complicated: the number of things you have to think about and account for is greater your way than the OP's way. It's just one more noodle in the ever-growing pile.


I think it's mostly about the failure case. Calling the callback immediately can mean that certain algorithms recurse deeper and deeper, invoking synchronous callbacks until they run out of stack space. Not great.
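
A sketch of the sort of algorithm that trips over this (getValue being any API that may call back synchronously):

    // process a list by recursing in the callback
    function processAll(items, i, cb) {
      if (i >= items.length) return cb();
      getValue(items[i], function (err, value) {
        // if getValue called back synchronously, this call sits on top of
        // the previous stack frame; a long list eventually blows the stack
        processAll(items, i + 1, cb);
      });
    }

    // deferring the callback (e.g. wrapping the synchronous path in
    // process.nextTick or setImmediate) lets the stack unwind between items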

In your use case it's possible you'll never hit that failure scenario.




