Async/await support in Firefox (nightly.mozilla.org)
454 points by dallamaneni on Nov 1, 2016 | 115 comments



Is there an article or book where the major "ways" of doing concurrency in different languages are explained and compared? E.g. threads, processes, mutexes, semaphores, etc. in Java et al.; callbacks, promises, async/await in JS; goroutines and channels in Go; tasks, async/await, PLINQ in C#; asyncio in Python; etc. It would be nice to have a general overview of these, what their pros and cons are, and why.



I believe this one does not address async/await in any of its implementations directly, does it? But it looks pretty interesting for covering other concurrency models.


Actually it doesn't cover promises/tasks/Rx or any other monadic stuff at all. The Clojure part is also probably not the most practical since it lacks core.async. However it's still a very good read and highly recommended. One reason is that it covers most of the concepts that make up a solid concurrent program (immutability, sharing data by communicating, ...).


I found this a good resource: http://berb.github.io/diploma-thesis/


I read this a couple of days back; very well written.


Slightly off topic but you may also want to check out Leslie Lamport's classic Teaching Concurrency.

http://research.microsoft.com/en-us/um/people/lamport/pubs/t...


It's much broader, but if you are into that kind of thing, this book has a survey of programming language concepts and how different languages implement different features.

https://www.amazon.com/Programming-Language-Pragmatics-Fourt...

It's pretty easy to read for most programmers who are familiar with a few families of programming languages.


There's probably more than one way of doing it in every language (e.g. in C you can use pthreads, OpenMP, C11 threads, etc.), and I doubt any book/article would be complete and current.


Ugh. At least when everyone was doing threads you only had to learn one model. Then everyone had to build a better mousetrap.


I feel as though 1:1 threading is underrated nowadays. Most of the cited benefits of async or M:N—especially small stacks—are actually properties that can apply equally well to plain old 1:1 threading. Linux thread spawning is really fast nowadays.


Yes, I agree -- threads without shared state, or with thread-safe shared data structures, are a perfectly good programming paradigm (and a pretty well-proven one too).

One of my pet ideas is just to implement a Go-like CSP with plain pthreads. Channels could be pipes of pointers. Channel select is just select(). No weird M:N runtime needed. I don't want an operating system in my programming language.

This is just stolen from Programming in Lua chapter 30 -- threads and states ( https://www.lua.org/pil/ , not in the 1st online edition unfortunately)

Lua has coroutines within the interpreter, but to utilize all the cores he suggests layering threads and multiple Lua interpreters on top.


> One of my pet ideas is just to implement a Go-like CSP with plain pthreads. Channels could be pipes of pointers. Channel select is just select(). No weird M:N runtime needed.

Please do it! I feel like we've forgotten the lessons of NPTL, when everyone tried M:N and collectively came to a consensus that M:N wasn't worth it in practice.


OK :) I actually have a reasonable project to do it on... I'm implementing a new shell, which is making decent progress, and I've been documenting some interesting things here:

http://www.oilshell.org/blog/

It's very compatible with bash so I think it has a chance of being adopted. And I would like to add structured data pipelines, which PowerShell has. I was thinking of implementing it with the threads-and-pipes-of-pointers scheme (and probably the condition variable).

I guess the shell concurrency model is more like a subset of CSP, but the more general CSP model seems useful and fairly easily implementable. I feel like it should be like 200 lines of code, so I should try it sooner rather than later. Just start porting some simple Go programs to it.

( My last post about parsing expressions got buried on HN but I think it is fairly interesting to a specialized audience: http://www.oilshell.org/blog/2016/11/01.html )


I've just spent 1h at work (silly me) reading your blog, it is illuminating! I love your writing style, and I can't wait to see your shell in action! Just the full parse before execution is well worth it, especially when deploying scripts on servers.

If you open source the project, I'd love to try giving you a hand. Anyway, good luck with the project!


Great thanks! Yes it will be open source.

I'm rounding the corner on parsing hundreds of thousands of lines of bash scripts now... the prototype is in Python as mentioned in the first post, and the executor isn't complete, but if you want the parse-before-execution, that is working well.

ShellCheck does exist though. IIRC I had a mixed experience with it -- it did actually find one bug, but on the other hand it spewed hundreds of warnings about double-quoting vars, which is technically true, but not the best use of time for most scripts I write. I'd rather just get rid of the stupid quoting rules, which is one of the top priorities.

(As far as writing, I find that "omit needless words" from Strunk & White goes a long way. Words like "very" and "a little" somehow spray themselves all over my writing; they are rarely useful and I kill them on editing passes :) )


That's probably a good idea. However it will most likely not work out with pipes, as I don't know how you could implement synchronous (unbuffered) channels with them.

IMHO unbuffered channels are one of the most powerful constructs in Go, since they guarantee that "resources" are always on one side of the channel and taken care of, and never stored in a channel (or promise, ...) where they could get abandoned/lost. It also allows you to make some other assumptions like "the in-memory server has taken my request through a channel and is now working on it and will answer through a channel soon" or "the server has shut down so I can't write to the channel", but never "the write to the channel succeeded but nobody cares about it".


> However it will most likely not work out with pipes as I don't know how you could implement the synchronous (unbuffered) channels with them.

Easy. Just implement them with a mutex and a condvar. If you want, layer some lock free algorithm on top for the fast path.


Yeah actually the Lua implementation I mentioned uses a condition variable (not sure about the mutex).

The unbuffered channel would be a good reason to use that scheme. I was thinking of using pipes so Go's select reduces to the select() system call. But I think you can just do both -- it's cheap. Write to a pipe and notify a condition variable. I'll experiment with it.


However from the learning side it doesn't really matter whether you have to learn and apply synchronization in a 1:1 threaded or M:N threaded model. It's about the same.

The more important difference is what kind of synchronization primitives are provided to you by the platform. Go's (synchronous) channels in combination with select provide a quite powerful tool compared to using only pthreads. Even the old WinAPI with WaitForMultipleObjects and that stuff will result in other ways of solving typical concurrency problems.


Thread spawning is fast but footprints are still pretty large. You'll likely max out around 10k threads with defaults, and maybe tweak it to 100k with small stack sizes.


Last time this came up, it was found that you can get per thread overhead down to 10KB with musl and Linux, and even smaller with upcoming Linux kernel work. That allows for quite a few more than 100,000 threads for typical server RAM configurations.


FWIW, there was an async vs. threads debate at Google that was "resolved" with one very important application running tens of thousands of threads per machine on hundreds of thousands of cores. Some people thought async would be a wiser architecture, but it was made to work (albeit with significant and literally full-stack engineering effort).

The problem of threads vs async in C seems pretty well studied, but a more interesting question is what we're talking about here: concurrency models in higher level languages: async/await in JavaScript vs. threads in JavaScript. Or let's say Python, because it actually has threads.

I feel like that tradeoff has been less well studied. Interpreters probably use a lot more stack space than native programs, but I wonder if anyone has quantified it.

And on the other hand, the downside of async is less pronounced than in C -- the whole point is to avoid "stack ripping" and explicit state machines.

And I have to echo the recent post here about the complexity of the async/await mechanisms in Python, although honestly I'm not that well-versed in the model.

https://news.ycombinator.com/item?id=12829759

(Interesting that the top comment there is kind of echoing our issue with M:N threading -- the inner platform effect.)


Green threads usually take something on the order of 1kB per thread. That's an order of magnitude more threads per machine, but this is not even the greatest benefit.

Green threads shine because they take ~1kB without tweaking. That means you just package your software, send it elsewhere, and you get those millions of threads per server, instead of losing hours on customer support and having it revert to 10k threads at random because of bad sysadmins.

Anyway, "thread" is not really a concurrency-oriented concept. It mixes in so much parallelism that it's expected to have some downsides compared to purely concurrent concepts.


It's still an order of magnitude more than many green threads implementations.

musl is interesting though. I've never seen it pitched as a solution for normal machines. I've always seen it in the context of small embedded systems.


Yeah, but that is a very specific OS stack, which cannot be generalized to language runtimes running on top of general purpose OSes.

Now when targeting bare metal deployments like unikernels, it is a different story in how to approach it.


Threads in JS?


Web workers are threaded.


They are more akin to "processes", because they don't share state with each other or the main event loop.


Web workers are almost entirely useless due to the fact that they can't interact with anything except a message channel. Serializing and deserializing everything so web workers can operate on it introduces more overhead than the parallel speedup is worth in the vast majority of cases. Not to mention that DOM updates, WebSockets, and anything else can't be done by them and are stuck on the main thread.

They are really not comparable to threads.


I suppose if you could only crunch numbers in a Web Worker and then make a clone to get it back to the DOM thread where it would be used, then yeah, Web Workers would be bleh. But, it's not as bad as all that!

For example, transferables let you move bulk data between threads without cloning. At least that removes most of the communication overhead. https://developer.mozilla.org/en-US/docs/Web/API/Transferabl...

On Firefox you can transfer your WebGL canvas to a Web Worker; how is that for useful? https://hacks.mozilla.org/2016/01/webgl-off-the-main-thread/

You can use Web Sockets in Web Workers in very recent Firefox (48+?) and Chrome versions. https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers... And you've always been able to do AJAXy things on Web Workers as well.

I'm not saying the APIs are clean or lovely, but the multithreaded functionality is there for the taking.


> You can use Web Sockets in Web Workers in very recent Firefox (48+?)

38+. It's been shipping for a year and a half.


Off by 10 errors are my specialty!!

Good to know it's stable for awhile, thanks for the update. Go Mozilla, go!!


That's not true at all, and I expect them to become increasingly important over time. Now that we have the DOM being abstracted away into the virtual DOM in React, Angular 2, Vue, etc., it should be possible to run your framework code almost entirely in a WebWorker while sending back some type of delta/patch to update the actual DOM.

I'm not sure what you mean that web sockets aren't supported since the MDN shows it as being fully available:

https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers...

Where this falls down is in handling events where some actions inherently need to be synchronous (preventDefault, stopPropagation), however I think Angular 2 was starting to consider options on how to overcome that. I can't speak for progress in other frameworks.


You can do websocket in web workers. At least per spec, and in anything resembling a recent Firefox; it was shipped in Firefox 38 about a year and a half ago. I can't speak for other browsers.

You can also do XMLHttpRequest and fetch in web workers. And IndexedDB, so you can store stuff persistently from a worker and then read it back from a worker later.

I agree that if you want to hand out data on the client to a worker pool after getting all the data from somewhere in the main page script, then serialization/deserialization can start to bite.


They are mostly used in order to free the UI "thread" and avoid UI freezes. The event loop is fast enough for most concurrency needs in JS client applications as long as you divide the tasks into smaller chunks.


Great news.

FYI Chrome will ship with async/await support[0] in the next stable version (55, so it's already on Chrome Canary).

[0]https://bugs.chromium.org/p/v8/issues/detail?id=4483


A giant shout-out to Igalia (Caitlin Potter!), who were instrumental in getting this implemented and shipped!

https://blogs.igalia.com/compilers/2016/05/23/awaiting-the-f...

This joins generators[1][2][3] (Andy Wingo) and arrow functions[4][5] (Adrian Perez de Castro, Andy Wingo) as examples of the community (read: not employees of Mozilla, Google, Apple, Microsoft) directly working on the engines to ship these features much sooner than they otherwise would be. (Surely they'd land eventually, but acceleration and cross-engine coordination greatly improves time-to-dev-market.)

[1]: https://wingolog.org/archives/2013/05/08/generators-in-v8

[2]: https://wingolog.org/archives/2013/10/07/es6-generators-and-...

[3]: https://wingolog.org/archives/2014/11/14/generators-in-firef...

[4]: https://groups.google.com/forum/#!topic/v8-users/5FNvOv-kQY4

[5]: https://wingolog.org/archives/2015/06/18/arrow-functions-com...


Note that async functions always return native promises. So if you like your fast or sugary promise libraries, you can't make async functions return those promises. You are stuck with native promises, which, as of now, are pretty slow in almost all engines compared to the heavily optimized promise library Bluebird.

I guess we should just hope that engines will optimize their own promise implementations. This might not matter much in front-end apps, but absolutely does for Node apps.

Other than that, I believe it is time JS gets: 1. An actually useful try/catch (with pattern matching on the caught errors) 2. Standard stack traces across all engines

Those two will really help async/await.


https://kpdecker.github.io/six-speed/

According to this, native Promises are 2x faster than Bluebird in Chrome and 7-8x faster in Safari, and the same speed as Bluebird in Firefox and Edge. I remember a year or so ago they were slow, but that appears to no longer be true.


Part of the work Igalia did in 2016 was to improve compatibility and performance of native Promise. It's a constantly moving target. If you single out v8, they're constantly improving the JIT, evolving the architecture, while individual features (e.g. generators, Promise) continually get optimized on top of those underlying engine changes. Lots of people talk about performance in absolute terms, but what is really needed is a proper performance test suite for individual features and a kangax style[1] browser/engine version perf chart (edit: like the link you posted, duh!) to track it over time similar to the more general AWFY[2]. I guess the main issue is that the test suite needs to be more accepted by the browser teams as a de-facto standard?

[1]: http://kangax.github.io/compat-table/es6/

[2]: https://arewefastyet.com/


>I guess we should just hope that engines will optimize their own promise implementations.

Firefox/Spidermonkey have just ported their Promises completely to C++ [1] which will help performance and enable future optimisations.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1313049


The funny part is that they _used_ to be completely in C++ (as part of Gecko, not SpiderMonkey). Then they were rewritten to be part of SpiderMonkey, and various bits were made self-hosted in the process. This turned out to be slower than doing it all in C++, for various reasons, some of which might be Gecko-specific (e.g. support for security wrappers).

But in general, optimizing promises as specified in ES6 is _really_ hard. There's all sorts of work that the spec says should be done which is typically unnecessary (for example, did you know that every time you resolve a Promise with another Promise, that causes creation of a _third_ Promise that no one cares about) but can be observed from script in various ways (e.g. by messing with @@species) so are very hard to optimize out. Promises are also very gc-intensive as specified....

This is one of those cases (iterators being the other poster child) where the usual "yeah, people will just write complicated enough engines with complicated enough JITs to make this all perform OK enough" attitude of the ECMA committee is a bit annoying.


the cool kids are already moving to observables anyway :)


I am a JS programmer and I have mixed feelings about this. I am not entirely sold on the stuff that comes after async/await. In particular, I am curious to know what folks here think about cancelable promises [1] and the try...else block that might be introduced. Or about async iterators [2], where you can `yield await` stuff.

[1] https://github.com/tc39/proposal-cancelable-promises

[2] https://github.com/tc39/proposal-async-iteration


Is this about the cancelation tokens they also want to force on observables?


Well, I was asking in general about all those new features that are in consideration: cancelation tokens, try...else, even the await.cancelToken meta-property [1]. It feels like a lot of baked-in syntactic support for a very specific pattern =/

[1] https://github.com/tc39/proposal-cancelable-promises/blob/ma...


I'm really looking forward to not having to debug my code compiled to state machines.

Is there a good way to get Babel/Webpack to emit multiple versions of your code compiled with different features enabled, then load the right bundle?


I believe that's a goal of https://github.com/babel/babel-preset-env. There are also a few manually crafted presets on npm which target a particular environment and EcmaScript version, e.g. https://github.com/blockai/babel-preset-eslatest-node6 (I'm the author).


There's a website, polyfill.io, that does browser sniffing and sends a minimal polyfill. The source is available under the MIT license[1], it shouldn't be a tremendous amount of work to set up a bunch of different builds for different features levels, with a server in front responsible for sending the correct bundle.

[1]: https://github.com/Financial-Times/polyfill-service/blob/mas...


What you just described is what I always describe as the ideal web deployment system. Most of my dev is in C++, but I couldn't believe the inefficiency found in sniffing everything at runtime versus using what already exists to serve up different pre-compiled targets specifically optimized for all the major browser versions active in the field. It seems that products like this that span the dev workflow (compilation) / operational (sniffing server) gap are hard to get off the ground since many devs can't touch the ops side.


Biggest advance in JavaScript since ... I don't even know. XHR? Reasonable GC performance?


I've intentionally been keeping myself ignorant about async/await in JS until it's common enough I can consider using it natively without any great concern about cross browser compatibility. Why is everyone so excited for it? From the example it looks like a thin syntactic sugar layer over normal promises, what's so great about saving one level of indentation?


Well, I guess there are three main reasons I'm so excited about async/await:

1. It allows you to reason linearly about code with IO mixed in, without blocking the entire event loop. This is incredibly useful. Computer programmers are very good at reasoning linearly. It is just much, much more comfortable [1] to think about code that uses async/await than the same code with promises.

2. I think the way that promises and async/await interop is very beautifully designed. The way it ends up working in practice is that you can write a function that "blocks"[2], but if you later decide you want to call it in parallel with something else, that's fine because async functions are just a thin wrapper around Promises. So you can just take that existing function and call `Promise.all()` on it. Similarly if you start with a function that returns a promise, and you decide you want to call it in a blocking way -- you just do `await thingThatReturnsPromise()`. This is an almost perfect example of primitives composing to be more than the sum of their parts.

3. (and this is worth the price of admission alone) error handling works properly and automatically. If you `await` an async function and it throws, you'll get a plain ol' JavaScript error thrown which you catch in the normal way. If I never debug another hung JavaScript app that dropped a rejected promise on the floor it will be too soon.

[1] please don't tell me that I just don't "get" asynchronous programming. I was writing js event handlers when you were in diapers (something something lawn something).

[2] `await` doesn't really block the event loop, it just appears to.
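The interop in point 2 can be sketched roughly like this (`fetchItem` is a made-up stand-in for real IO):

```javascript
// Sketch of point 2: an async function is a thin wrapper around a Promise,
// so the same function composes sequentially (await) or in parallel
// (Promise.all). `fetchItem` is a hypothetical stand-in for a network call.
async function fetchItem(id) {
  return 'item-' + id;               // imagine an awaited network call here
}

async function sequential() {
  const a = await fetchItem(1);      // "blocking" style
  const b = await fetchItem(2);
  return [a, b];
}

function parallel() {
  // the very same functions, now run concurrently
  return Promise.all([fetchItem(1), fetchItem(2)]);
}
```

Either caller gets the same results back; only the scheduling differs.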


One more thing that's particularly nice about `async/await` is just using the `async` part. For existing functions that should always return promises, when that function is `async` you can be sure of it. Even if it throws. Even if it returns a value that sometimes isn't a promise.

Makes writing correct code easier.


I'm not up to speed on async/await -- do you happen to know if an async function throws on synchronous code (before the first await keyword) -- does that result in an asynchronously rejected promise or a synchronous exception?


It results in a rejected promise. (The other behaviour would be a bit inconsistent/annoying.)
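A quick sketch of that behaviour:

```javascript
// Sketch: a throw before the first await surfaces as a rejected promise,
// never as a synchronous exception at the call site.
async function boom() {
  throw new Error('early');          // thrown before any await
}

let threwSynchronously = false;
let p;
try {
  p = boom();                        // does not throw here
} catch (e) {
  threwSynchronously = true;
}
// ...the error only shows up on the returned promise.
p.catch(err => console.log(threwSynchronously, err.message));
```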


This added contract between the caller and callee is huge. No more searching for that one branch of a non-trivial function that failed to wrap its immediately available return value in a Promise.


> [1] please don't tell me that I just don't "get" asynchronous programming. I was writing js event handlers when you were in diapers (something something lawn something).

This sounds pretentious and detracts from an otherwise good comment, even if you didn’t mean it to.


I read it as more of a whimsical and jokey side comment. I don't think it's that egregious.


Fair enough.


The main problem with normal promises is you can no longer use native control structures, because the code before and after the operation has to go into separate functions. Loops no longer look like loops, the bodies of conditionals are less obvious, exception catch points are hidden, etc.

With async/await the language takes care of lowering those control structures that straddle async operations, similarly to a compiler lowering them into conditional branch instructions.
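As a sketch (with a made-up `delay` helper), a loop that straddles an async operation keeps its shape:

```javascript
// Sketch: a plain for-loop containing an async operation. With raw
// promises this would need manual recursion or a reduce() chain.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function sumSlowly(values) {
  let total = 0;
  for (const v of values) {     // the loop still looks like a loop
    await delay(1);             // async operation inside the loop body
    total += v;
  }
  return total;
}
```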


I've used both. In my opinion, promises are fine for simple scenarios, but once you start doing anything complex, the synchronous "feel" of async/await makes your code much easier to read and reason about.


The ability to use asynchronous functions without pulling everything that depends on them into a closure, which comes with a lot of baggage in how you have to structure things.


Honest question, does async/await not create some variety of an implicit closure?


Depends how you look at it. The straightforward answer would be "no", because await lets you assign the result of a promise to a variable without calling `.then(...)` with a callback and then accessing the variable inside the callback (where the callback would be the closure). In that way of thinking, async/await directly lets us write code with fewer closures.

Note that that explanation was purposefully ignoring any implementation details about whether or not closures are actually being created/allocated and was purely focusing on whether the programmer had to type out/read closures as extra syntax.
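Side by side, the syntactic difference looks like this (`getValue` is hypothetical):

```javascript
// Sketch: the .then version needs an explicit callback (a closure) to see
// the value; the await version reads as plain assignment.
function getValue() {
  return Promise.resolve(7);    // hypothetical async source
}

// With .then, the continuation lives in a closure:
const viaThen = getValue().then(v => v * 2);

// With await, it reads as ordinary sequential code:
async function viaAwait() {
  const v = await getValue();
  return v * 2;
}
```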

Actually, functional programmers might point out that even without async/await, assignment in imperative languages is still just syntactic sugar for using closures. That is, in functional languages, it's almost never idiomatic to directly "assign" to a variable (that is, overwriting the variable's previous value); instead values are "bound" to variables in let statements (or whatever the language's idiomatic equivalent is) -- and let statements are effectively syntactic sugar for function calls/closures.

If you're familiar with Haskell: you can think of imperative code as Haskell's do notation under the Identity monad (aka no special mode of computation, just the results of each function flowing into the next, with "assignment" converted into functions). Marking a function as async (and thus allowing you to use await inside it) is just switching that function to be under the Promise monad. (But be careful! JavaScript promises aren't strictly monadic in that they auto-resolve if nested... so you can't have "Promise Promise a".)

(... Sorry, this ended up being a bit of a mess of a braindump because you specified "implicit" closures :) )


It's explicit, since the `async` keyword must only precede a function declaration.

i.e.:

    async function () { ... }
    
    // or 
    
    async () => { ... }


I believe they are discussing what happens at the call site of an asynchronous function


It is safe to use it on the server side. A Node.js version with the V8 async/await additions included will be released very soon.


NB It's already in Node v7 behind a --harmony flag.


async/await is well designed in that it "converts" into a Promise (and you can await Promises). Even though async code is still "contagious" if you need the result of the computation, sync and async code can now be mixed – the sync code can pass the promises around. This makes it a lot more convenient to use i.e. code is more concise.
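A sketch of that mixing (with a hypothetical `fetchUser`):

```javascript
// Sketch: synchronous code can create and pass promises around without
// awaiting them; only the code that needs the results has to be async.
async function fetchUser(id) {
  return { id: id, name: 'user-' + id };   // stand-in for real IO
}

function startBatch(ids) {                 // plain synchronous function
  return ids.map(id => fetchUser(id));     // just forwards the promises
}

async function userNames(ids) {
  const users = await Promise.all(startBatch(ids));
  return users.map(u => u.name);
}
```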


Just as a point of interest (maybe), I've been using Redux-Saga on a project and it essentially affords the same benefit; it abstracts over any of the JS async primitives (callbacks, promises, generators, etc). It is awesome!


If you have a linear chain of async operations (do this, then this, then this, ...) it doesn't change a lot compared to raw promises.

However if you have control flow in it (execute this async function 100 times, add the results, depending on the result call another async function, ...) things will get a lot uglier with raw promises.
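For instance, the "call it 100 times, add the results, then branch" case reads almost like synchronous code (using a made-up `step` function):

```javascript
// Sketch: repeated async calls plus a conditional on the accumulated
// result, which would be awkward to express as a raw promise chain.
async function step(i) {
  return i % 2;                       // stand-in for an async computation
}

async function run() {
  let sum = 0;
  for (let i = 0; i < 100; i++) {
    sum += await step(i);             // 50 odd numbers below 100
  }
  return sum > 40 ? 'many' : 'few';   // branch on the awaited results
}
```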


Agreed. Before this and promises it was impossible to reason about async scenarios without getting seriously confused.

Allowing programmers to write code as if it was synchronous removes a lot of confusion, boilerplate, and bugs.


> impossible to reason about async scenarios

Nothing was impossible. Maybe if you had 3+ levels of nested Ajax calls then it needed a bit of head scratching, but jQuery's Deferred and other promise libraries are pretty neat for solving these issues.

However, this could indeed be hidden from the programmer.


Removes boilerplate? Yes. Confusion? Sometimes. Bugs? Not really. async/await doesn't inherently change the concurrency model, it is just a different notation for the same thing. Therefore you can still have all the same bugs as with classic promises.


Using generators can make your code look as if it were synchronous.


I disagree.

Look, I get its use case. It simplifies doing asynchronous and synchronous programming together. It takes an asynchronous operation and makes it look synchronous, which worries me a bit regarding new developers and code readability. There are APIs, including parts of Node, HTML5, etc., that async and await aren't going to work with. So now you're going to have normal synchronous code, asynchronous code that looks synchronous, and other asynchronous code that can't be made to look synchronous.

I'm all for finding ways to avoid callback soup and to simplify asynchronous patterns. I think there are good ways to structure traditional callbacks that are sane, and I think promises are a great step. I'm skeptical about async and await.


I have mixed feelings but I think this is a net win for JavaScript.

XMLHttpRequest was enhanced with the Fetch API which uses promises. The libraries and methods will eventually catch up in a similar way.

If we stopped here there'd be little reason to ask everyone to promisify their libraries. The promise callback chain was only slightly easier to read than the callback pyramid of doom. You might say that async/await lowers the explicit complexity vastly and increases the implicit complexity marginally.


The point of promises was to have a standard contract for callbacks, including error handling, and that contract can be passed around, listened on later, etc. It was a drastic improvement over callbacks. Async/await is just minor sugar over that to make your eyes happier. You still need to reason about the code the same way. You still need to remember they're promises (so you can, let's say, await Promise.all(...) to avoid waiting on every single step individually). It doesn't really "add" anything. Promises did.

Too bad promises had a lot of glaring flaws that are taking forever to fix (or are unfixable). We're just getting to the point where unhandled errors are dealt with properly. Composing promises with non-promise constructs still sucks. There are still only 2 paths (success and error). The flatMapLatest scenario still sucks.

Still, promises were the improvement. Async/await is just sugar.


No. Those two have real world advantages that allow you to do things you previously could not.

This is just pointless syntax.


JIT compilers.


I have one major misgiving about async/await - the regression of needing to do a try-catch if one isn't in a test situation. This is terrible syntax to have to write to handle this.

That said, async functions also introduce a fundamental shift in how one parses JS. One can now block execution to return a value, but no longer know whether the innards of a function are blocking, which could mean that your function's execution is blocked by an async execution. Instead of recognizing async code by looking at a promise or any other similar construct for handling async flow, like observables, this will be the source of many bugs in developers' code as we shift in mindset, should this become popular. It would definitely improve readability of code written in continuation-passing style, though.

Minus the awful try-catch, I'm not necessarily against async/await becoming the norm, but its broad effects on how async code will be written should be recognized, especially considering that it doesn't solve a fundamental issue the way something like observables does.


> One can now block execution to return a value, but no longer know whether the innards of a function are blocking, which could mean that your function's execution is blocked by an async execution.

Huh? I think you're misunderstanding how these functions work. You can't call an async function from a normal function and have it block the normal function. You specifically have to await the Promise returned by any async function.


> You specifically have to await the Promise returned by any async function

I just wanted to add (may help others): And you can only do that from within an async context (currently just the body of an async function).


Background: I started using async/await in my code a few weeks ago.

> I have one major misgiving about async-await - the regression in needing to do a try-catch if one isn't in a test situation. This is a terrible syntax to have to write to handle this.

If you don't like try/catch, you can still do:

    await getPromise().catch(err => {/* handle error */ })
Personally, I don't find the try/catch syntax that awful and it'll get better with do expressions[0], e.g.:

    const username = do {
      try { await getUsername() }
      catch (err) { 'unknown-username' }
    }
One common misconception is that async/await forces you to put everything inside a try/catch block which isn't the case. Errors get bubbled as with regular promises and you only need a try/catch block wherever you previously had a `.catch` (or you can keep the `.catch` as mentioned above if you prefer that syntax).
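To make that concrete, here is a small sketch (with hypothetical `fetchUser`/`fetchPosts` helpers): a rejection anywhere in the chain of awaits bubbles up to the single try/catch, exactly where a promise chain would have its one trailing `.catch`:

```javascript
// Hypothetical helpers for illustration only.
async function fetchUser(id) {
  if (id < 0) throw new Error('bad id'); // becomes a rejected promise
  return { id, name: 'user' + id };
}

async function fetchPosts(user) {
  return ['post by ' + user.name];
}

async function loadProfile(id) {
  try {
    const user = await fetchUser(id);     // either await may reject...
    const posts = await fetchPosts(user);
    return { user, posts };
  } catch (err) {
    // ...and both rejections land here, like a single trailing .catch
    return { error: err.message };
  }
}

loadProfile(-1).then(result => console.log(result.error)); // "bad id"
```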

> One can now block execution to return a value, but no longer know if the innards of a function is blocking, which could mean that your function execution is blocked by an async execution.

I'm not sure what you mean by that but you still have to explicitly use the "async" keyword for any function that uses "await". And function calls that are not prefixed with "await" will not block just like before. In fact, in an async/await codebase, you can more easily tell which functions are async because they'll have "async" in front of them (unlike errback/promise code which would require you to read the body of the function).

> Minus the awful try-catch, I'm not necessarily against async-await being the norm, but its broad effects on how async code will be written should be recognized, especially considering that it doesn't solve a fundamental issue like something like observables do.

Yes, it's basically just syntax sugar for promises but it does remove a lot of boilerplate code and is easier for JavaScript engines to optimise.

[0] http://wiki.ecmascript.org/doku.php?id=strawman:do_expressio...
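For what it's worth, a minimal before/after sketch of that boilerplate reduction, using hypothetical `getUser`/`getOrders` helpers; both versions do the same thing:

```javascript
// Hypothetical helpers for illustration only.
function getUser(id) { return Promise.resolve({ id, name: 'u' + id }); }
function getOrders(user) { return Promise.resolve([user.name + '-order']); }

// Promise-chain version: nesting keeps `user` in scope for the result.
function loadWithThen(id) {
  return getUser(id).then(user =>
    getOrders(user).then(orders => ({ user, orders })));
}

// async/await version: same semantics, flat control flow.
async function loadWithAwait(id) {
  const user = await getUser(id);
  const orders = await getOrders(user);
  return { user, orders };
}
```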


I view the use of try-catch as actually strictly an improvement... Before you had either 2-argument promise.then(...) or promise.catch(...), which were effectively the same as a try-catch (and in most promise libraries .then and .catch DID use try-catch under the hood), except they were now duplicated syntax for doing the same thing (handling exceptional cases).

More likely you're arguing people shouldn't use 'exceptional situation' error handling as often as they do... but that's a separate argument. I think going from 2-3 syntaxes for the same thing to 1 syntax is strictly an improvement, especially since huge amounts of promise code silently swallow errors when people don't properly chain promises or handle the .catch case.


> and in most promise libraries .then and .catch DID use try-catch under the hood

To catch thrown errors, not to implement the behaviour of transforming rejected promises.


You could always do something like:

   const maybe = await fetch(url).catch(err => err);
But, I can understand that being a bit icky for some... but in a few cases it could make sense and is pretty concise:

   const hits = await cache.incr(key).catch(err => 0);


You don't have to; you can await on settle() and then deal with it in whichever way suits you.


Am I alone in the boat that feels that the way that async/await is implemented in JS is a little wonky? I kind of prefer that you can do `await asyncFunc(...)` without having to denote the function itself as `async`, but I don't like that you can just call the `asyncFunc` without `await` and it would still potentially do something. In Python, when you `await` something, it actually runs the code, whereas running the function by itself just returns a Future that hasn't been started.

I feel like it's less difficult to shoot yourself in the foot with Python's implementation, because you know that if you're calling an asynchronous function, you have to `await` it or it won't work and won't have side effects. In JS, you could call the async function without awaiting it and it could have side effects which could produce subtle bugs that aren't easy to find.
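A minimal sketch of that footgun (with a hypothetical `save` helper): in JS, an async function's body runs synchronously up to its first await, so side effects happen even when the caller forgets to await, and any rejection goes unobserved:

```javascript
const sideEffects = [];

// Hypothetical helper for illustration only.
async function save(record) {
  sideEffects.push(record.id);       // runs even if the caller never awaits
  if (!record.id) throw new Error('missing id');
}

save({ id: 1 });          // fire-and-forget: the side effect still happens...
console.log(sideEffects); // [ 1 ] -- already pushed, synchronously
// ...but a failure here would become an unhandled promise rejection:
// save({});              // nothing ever sees this error

(async () => {
  await save({ id: 2 });  // explicit await: errors would surface here
})();
```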


Still not entirely sure what the big win with async/await is. Is it just a readability thing? Is there one "killer" case that demonstrates its superiority over callbacks?


There's a lot of benefit in being able to reuse the imperative language structures that people know and "love". I think it is more a writability thing.

I've been working to teach Promise thinking to another developer. Promises (and callbacks) are "easy" from a functional programming background, but for someone without much of a functional programming background like my colleague, it's certainly confusing. There is a rigor needed in writing Promises (and callbacks) in knowing what is in each closure and making sure that the return values of individual callbacks are in the "right shape" for the next callback in the chain.

Loops with Promises involve confusing things like higher level combinators like Promise.all and Promise.race, and that requires more new sorts of reasoning about return types than a "traditional" `for (let thing of list) { let result = await doThe(thing); /* do something with result */ }` loop.

Not that the need for things like Promise.all and Promise.race goes away with async/await, but it moves from being a must-learn on the first pass of writing an algorithm to being a performance optimization for advanced scenarios that can be done more easily by someone more senior and/or in a later pass of code writing (such as a code or performance review).

It makes the learning curve to "doing asynchronous code right" a lot less sharp, smoothing out some of the complexity, and that can be a huge win for any team with a mixture of developers of different skill levels and skill sets.
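A small sketch of that progression, with a hypothetical async `doThe`: the sequential loop is the easy first pass, and `Promise.all` is the later optimization that starts all the work concurrently:

```javascript
// Hypothetical async worker for illustration only.
async function doThe(thing) {
  return thing * 2;
}

// First pass: simple, sequential, easy to reason about.
async function sequential(list) {
  const results = [];
  for (const thing of list) {
    results.push(await doThe(thing)); // waits for each item in turn
  }
  return results;
}

// Later optimization: start everything, then await all results at once.
async function concurrent(list) {
  return Promise.all(list.map(doThe));
}

sequential([1, 2, 3]).then(r => console.log(r)); // [ 2, 4, 6 ]
concurrent([1, 2, 3]).then(r => console.log(r)); // [ 2, 4, 6 ]
```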


I agree to some degree but at the end of the day it is a programming style.

I personally prefer the functional approach with bound methods to avoid callback hell, improve readability and ensure there is minimal amount of memory leaking.

That being said, async/await makes callback programming easier so it is another tool which may come handy in the situations where you really need it.


Yes, you can match async/await with generators, thus using them for monad computations in JavaScript.

Think of async as return and await as bind.


"you can match async/await with generators thus using them for monad computations in JavaScript"

I hear the words you are saying, I just can't visualise the concept. Could you ELI5, or post an example?


Yes, check Bodil Stokke's presentation, The Miracle of Generators, at GOTO 2016.

https://www.youtube.com/watch?v=6mCkLZ0cwAI


OK, so basically: async/await, when used with generators and promises, allows monads.
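Roughly, yes. A sketch of the idea (not how any particular engine implements it): a tiny co-style driver runs a generator of promises, where `yield` plays the role of `await` (bind) and the generator's `return` re-wraps the value in a promise:

```javascript
// Drive a generator that yields promises, resolving each yielded
// promise and feeding the value back in -- roughly what async/await
// desugars to.
function run(genFn) {
  return new Promise((resolve, reject) => {
    const gen = genFn();
    function step(advance) {
      let next;
      try { next = advance(); } catch (err) { return reject(err); }
      if (next.done) return resolve(next.value); // `return` -> resolved promise
      Promise.resolve(next.value).then(
        value => step(() => gen.next(value)),    // resume with the value
        err => step(() => gen.throw(err))        // rethrow at the yield site
      );
    }
    step(() => gen.next());
  });
}

// Usage: reads like an async function, with yield instead of await.
run(function* () {
  const a = yield Promise.resolve(1);
  const b = yield Promise.resolve(2);
  return a + b;
}).then(sum => console.log(sum)); // 3
```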


It's basically a fancy (but more limited) syntax for promises, with the same advantages.


Looks like awful readability to me. No idea what they're trying to solve here.


Why would we want to mix async and sync behavior in one cognitive space? That seems counter evolutionary.


Finer control? Synchronous behavior is easier to understand and organize logically. You have the option to write it like a book and execute things "concurrently" (considering it's really single threaded in JS).


Single threaded concurrency is not the same as synchronous/sequential execution. You and the TC39 folks are telling us that mixing sync and async behavior in the same cognitive space is good? Amazing. Async handling should be in its own cognitive space.


Dear Javascript coders:

If you wind up writing the word "await" in your codebase more than once, you're doing it wrong. You will quickly see how futures/promises wind up taking over all of your code. This is a very good thing. You shouldn't need to block, ever.

Sincerely,

Scala coders


Am I missing something? Await doesn't block. It's just syntactic sugar for Promise.then(() => {}) without having to go down into callback hell.

It makes code more readable and easier to write, but doesn't fundamentally change how it is working under the hood, as far as I am aware.


  var txt = await read();
  // This code is blocked until read() completes.
  console.log(txt);


It is in essence:

    read((err, txt) => {
        console.log(txt)
    })
Or

     read().then((txt) => console.log(txt))
Why is it bad?


Await doesn't synchronously block in Javascript.

http://docs.scala-lang.org/overviews/core/futures.html#block...


And how do you suggest doing that without "blocking"? :) Sure, you could use a stream but that isn't always possible or practical.


That's not what blocking means.


Well, it depends...

An async function is blocked on an await statement; after all, that's why it has such a name.

It does not block the whole VM, but the task (async function) is effectively blocked from the point of view of an observer inside it.


Yes, but when you're talking about async/await, "blocking" means "blocking the thread". And this is very clearly what mehwoot meant.
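A small sketch of that distinction: `await` suspends only the async function it appears in, while the thread (event loop) keeps running other code:

```javascript
const order = [];

async function task() {
  order.push('before await');
  await new Promise(resolve => setTimeout(resolve, 0));
  order.push('after await');          // resumes later, via the event loop
}

task();
order.push('after calling task()');   // runs while task() is suspended

setTimeout(() => console.log(order), 10);
// [ 'before await', 'after calling task()', 'after await' ]
```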


Care to elaborate a bit? Just trying to understand why async/await is bad. In my mind it makes for much cleaner code that the callback hell.

Also, how is Scala doing it?


Scala is doing it exactly as in JS. We have async/await as a macro library transforming the code into an FSM. The original author posting this probably didn't know that, under the hood, await doesn't really block in Scala or in JS.


Dear Scala coders,

Why have you decided to rewrite everything?

Sincerely, Everyone else. (Obviously not JavaScript coders, sadly...)

P.s. I can only hope you have a somewhat sane release story now.



