That is, why be explicit about async when implicit-style code is much easier to read? The answer in this case is simply backwards compatibility: JS already had a concurrency model before async/await came around.
You can also make an explicit-is-better-than-implicit argument; it's mostly personal preference/ideology though. That said, Python has been in a fun land of "rewrite the world" because the async model it chose (async/await) was not backwards compatible with any existing libraries. So now in Python, which library you use for e.g. HTTP varies with which concurrency model the rest of your app uses (requests for synchronous code, aiohttp for asyncio, say).
NightMKoder is right on; that article ("What color is your function") sums up exactly how I felt trying to learn JavaScript after knowing Go. See also this interview with Node.js creator Ryan Dahl, where he himself admits that Go solves the async programming model much better than JavaScript. I can relate completely to the reasons he gives: https://www.mappingthejourney.com/single-post/2017/08/31/epi...
"You can also make a explicit is better than implicit argument;"
While I'm generally an explicit sort of guy in these contexts, there isn't much that being explicit buys you here, except the ability to write bugs. There's really only one right answer, and the compiler is perfectly capable of handling it. In the exceedingly rare cases where you need to override it, you can; and given how rare those cases are, we're easily in "use another language" or "fork & hack the runtime" territory.
The same reason is given in the bluebird library's documentation:
> Promise.reduce will start calling the reducer as soon as possible, this is why you might want to use it over Promise.all (which awaits for the entire array before you can call Array#reduce on it).
Whether this is ever necessary is another matter :)
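For illustration, a minimal sketch of the difference, using bluebird's Promise.reduce as documented (fetchJson and the URLs are made up for the example):

    const Promise = require('bluebird');

    // fetchJson is a hypothetical helper that returns a promise of parsed JSON.
    const requests = [
      fetchJson('https://api.example.com/a'),
      fetchJson('https://api.example.com/b'),
      fetchJson('https://api.example.com/c'),
    ];

    // Promise.all waits for the whole array before Array#reduce can run:
    Promise.all(requests)
      .then(results => results.reduce((sum, r) => sum + r.count, 0));

    // Promise.reduce starts folding as soon as the first item is ready:
    Promise.reduce(requests, (sum, r) => sum + r.count, 0);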
I suppose this might be useful when you are querying one or more APIs with multiple requests and some will certainly return seconds before others.
This way you could have the same reducer handle the results and begin updating the UI as the results come in.
An example real-world app might be a price comparison tool or social media aggregator.
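Something like this, say - a hypothetical price-comparison sketch where priceRequests, renderQuote and highlightBest are all made up:

    // Fold quotes into a running "cheapest", painting each result as it
    // arrives. Note bluebird's Promise.reduce still consumes the items in
    // array order, so a fast later item waits on earlier ones.
    Promise.reduce(priceRequests, (cheapest, quote) => {
      renderQuote(quote); // update the UI with this result immediately
      return quote.price < cheapest.price ? quote : cheapest;
    }, { price: Infinity }).then(best => highlightBest(best));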
> I suppose this might be useful when you are querying one or more APIs with multiple requests and some will certainly return seconds before others.
But it's still a serialized operation, so the parallelism is limited. What's really needed is a "parallel reduce" using something like C's select() function, one that folds in an arbitrary order, taking whichever promises are ready at each step.
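A minimal sketch of that idea in plain JavaScript, using Promise.race as the stand-in for select(). The name raceReduce is illustrative, and rejections are left unhandled for brevity:

    async function raceReduce(promises, reducer, initial) {
      // Tag each promise with its index so it can be removed once settled.
      const pending = new Map(
        promises.map((p, i) => [i, p.then(value => ({ i, value }))])
      );
      let acc = initial;
      while (pending.size > 0) {
        // Wait for whichever promise settles first...
        const { i, value } = await Promise.race(pending.values());
        pending.delete(i);
        // ...and fold it in immediately, in completion order.
        acc = reducer(acc, value);
      }
      return acc;
    }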
Something I see often, and that is a huge code smell to me, is not using the most restrictive form of iteration.
If you see collection.map(...) you know that each iteration is simply a pure function from original element to transformed element, which is an immense help when reading the code.
If you can express what you are doing using only map / filter / takeWhile / join etc., use those! If not, try to get by with just reduce / foreach. Failing that, use for. Only use while if nothing else works!
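A small illustration of the principle, with hypothetical data: the restrictive version tells the reader at a glance that nothing else is going on.

    const orders = [
      { paid: true,  total: 10 },
      { paid: false, total: 99 },
      { paid: true,  total: 5  },
    ];

    // Restrictive: obviously just a filter followed by a pure transformation.
    const paidTotals = orders.filter(o => o.paid).map(o => o.total);

    // General: the reader has to scan the body to rule out early exits,
    // mutation of `orders`, index arithmetic, and other surprises.
    const paidTotals2 = [];
    for (let i = 0; i < orders.length; i++) {
      if (orders[i].paid) paidTotals2.push(orders[i].total);
    }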
> If you see collection.map(...) you know that each iteration is simply a pure function from original element to transformed element, which is an immense help when reading the code.
You'd think so, but I've had colleagues who managed to fuck that up and use map or a list comprehension solely for side effects.
And I, too, thought it was abusing the map() call for side effects (roughly parser.parse(...).map(updateState)). However, it is still shorter than writing it out as follows:
    import scala.util.{Try, Success, Failure}

    def update(): Try[Unit] = {
      parser.parse(...) match {
        case Success(result) =>
          updateState(result) // side effect
          Success(())
        case Failure(e) =>
          Failure(e) // re-wrap so the result is typed Try[Unit]
      }
    }
So I didn't have a strong opinion either way, since semantically both do the same thing (and in the case of Scala, the map version is potentially more performant, since it relies on the JVM's virtual dispatch as opposed to calling unapply() and matching, not to mention potentially generating less garbage).
Yeah, I mean my point is "trust but verify". I love restrictive iteration constructs, but just because they're being used does not mean they're being used "properly" unless the language ensures it.
What is the "most restrictive" form? The answer to this question is highly context dependent (for example, programming language / architectural framework) and expensive to give. Remember that everything has a cost. And especially overzealous formalism.
Personally I find simple C index-based for loops consistent and refreshing, and it's typically not a huge deal. If it is, the procedure might be doing too many things at once. (But yes, I use simple "for x in y" style loops in Python or C++ when they make up the lion's share of the loops.)
Many different types of loops in a single file, over a single data structure, always remind me of odd syntax highlighting (for example in vim): so many different colors that it's only a distraction. I don't care to make so many distinctions; I try to focus on the distinctions that we have to make to get a program done.
Map-style iteration carries more semantic information than a C-style index-based loop. An index-based loop might be doing anything with the data being iterated over; when you see collection.map, you get an idea of what the code is doing at first glance.
It generates code for algebraic data types, offering a typesafe interface similar to the `case` statements in languages that natively support pattern matching.
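For context, a hand-rolled JavaScript sketch of the general idea - a tagged union plus an exhaustive match helper, roughly approximating what such a generator might emit. All names here are hypothetical, not the library's actual API:

    const Shape = {
      Circle: radius => ({ tag: 'Circle', radius }),
      Square: side   => ({ tag: 'Square', side }),
    };

    function matchShape(shape, cases) {
      // Dispatch on the tag; throwing on a missing case approximates the
      // exhaustiveness checking a typed generator gives at compile time.
      const handler = cases[shape.tag];
      if (!handler) throw new Error(`unhandled case: ${shape.tag}`);
      return handler(shape);
    }

    const area = matchShape(Shape.Circle(2), {
      Circle: ({ radius }) => Math.PI * radius * radius,
      Square: ({ side }) => side * side,
    });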
Please do! Genuinely curious what you don't like about it.