Hacker News

JS is a mess.

I think this blog post highlights the frustratingly viral nature of async functions:

http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...




JS is a mess, but async messes aren't JS's fault. Async is hard, but everything also just has to be async -- there's really no way around it. If you write sync code and later need it to be async, you're in for a full rewrite. If you write async code and later need a sync version, you just add a wrapper that waits for completion.
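A toy JS sketch of that one-way street (names entirely made up): once the leaf function goes async, every caller is forced to follow.

```javascript
// Hypothetical example of how async spreads upward through callers.
function delay(ms, value) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

async function loadUser(id) {           // the "leaf" that went async
  return delay(10, { id, name: 'demo' });
}

async function greet(id) {              // every caller must follow suit
  const user = await loadUser(id);
  return `hello ${user.name}`;
}

greet(1).then(msg => console.log(msg)); // prints "hello demo"
```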

In a world of fast CPUs, slow memory, distributed computing (whether client/server or peer-to-peer), remote function as a service, ..., asynchrony is a hard requirement.

The GUI event model has been with us since... even before the days of X11. Asynchrony in the UI is, and long has been, a hard requirement. Users hate synchronous UIs, and rightly so.

Asynchrony is also difficult. But it's not like sync + threads is a panacea -- it isn't, because threads are way too heavyweight. Better to deal with the pain than to deny the need.


>If you write sync code and later you need it to be async, then you're in for a full rewrite

If you just want it not to block the page, then you can chain the sync code into a bunch of async callbacks. Actually parallelizing tasks that weren't parallel before can be harder, but in my experience it's still not a full rewrite. It is much harder for larger applications in other contexts, like Firefox's migration to Quantum, though, yes.
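That chaining can be mechanical. A minimal sketch (made-up names): slice the long synchronous loop into chunks and yield to the event loop between them, so the page stays responsive without restructuring the logic.

```javascript
// Run a long sync loop in chunks so the event loop can breathe.
function processInChunks(items, handle, chunkSize = 100) {
  return new Promise(resolve => {
    let i = 0;
    function step() {
      const end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) handle(items[i]); // do a bounded amount of sync work
      if (i < items.length) setTimeout(step, 0); // yield, then continue
      else resolve();
    }
    step();
  });
}

const doubled = [];
processInChunks([1, 2, 3, 4, 5], x => doubled.push(x * 2), 2)
  .then(() => console.log(doubled)); // prints [ 2, 4, 6, 8, 10 ]
```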


Rubbish! You could make the same argument for parallelising, abstraction, logging, and just about every other feature. But then you’d never build anything, because there are infinitely many possible features you could include.

Build only what you need today, and prepare for what your employer says you will need tomorrow. Anything else is like saying everybody should buy a jeep in case they move to the countryside.

Async brings its own costs. Unless your employer benefits, you are costing them money by giving them features they don’t need.


You have to think at least a bit ahead. Better to write thread-safe async code from day 1 than to rewrite later.


YAGNI - https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it

I’ve worked with plenty of valuable apps not using async. If the requirements change such that async becomes the best way to solve the problem, that’s something the business would decide at that point.


Sometimes you can think ahead and see the need coming -- or not coming -- and act accordingly.

Experience is helpful here.

When I have to write C, I try to at least write thread-safe code close to something I could expose as a library. I've had too many cases where I or someone else wrote code we NEVER thought would be needed as a library, so took all sorts of liberties, and then suddenly there was a need for a library doing exactly what that code was doing. Oops. I've run into fewer instances of sync code that turned out to really need to be async, where adding threads wasn't good enough, but I've still run into some.

This isn't about functionality -- this is about scalability. When you know you need to scale, you have to think ahead about it.


A lot of projects are little more than crud with a few complicated business rules. Use a framework with a good caching layer, and a lot of the time the employer is very happy with the outcome. It’s cheap to build, maintain, and host.

But then along comes a developer who says “we must build this app with async so we can scale”. Suddenly the junior devs build an absolute mess because they don’t know how to work effectively with async. So a few seniors get involved. Then the project runs out of cash.

Sure, some projects benefit from async and some fundamentally require it. But not all of them, I doubt even the vast majority of them.

You only need to scale when you’re successful, and most applications are not successful.


"JS is a mess, but async messes aren't JS's fault"

Still though, JS is a mess.


Oh, no doubt!


> JS is a mess.

The linked blog post doesn't seem to agree with you. The author goes on to talk about Promises and async/await (both features of modern JavaScript), which he describes as "nice":

> Async-await is nice, which is why we’re adding it to Dart

He does finish by talking about Go's concurrency model, which he describes as being more "beautiful", but I still wouldn't mistake "it's nice" for "it's a mess".

Also, the line about Java doing it better seems to be a joke.


That's a good article. And yes, async is totally viral. Once you start using it, everything becomes async whether you like it or not. I definitely need to study Go and goroutines; they seem to have a better approach.


There is no good approach to avoiding asynchrony. There's been nothing new in this department for decades. You have these choices:

     - async, continuation-passing-style (CPS) code
     - async with delineated continuations that make snippets look synchronous when they actually aren't
     - cooperative multi-tasking (co-routines with async I/O under the covers)
     - threading (preemptible multitasking)
Each of these choices has its problems, and all of them can be characterized by where they sit on the spectrum of explicit vs. implicit state keeping. For example, with threads and co-routines you keep a lot of implicit state in the call stack, whereas with async CPS you make all of the state fully explicit. Async CPS code is the least problematic once you wrap your mind around it, but it does have "mind warp" as a hard requirement.
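To make the two ends of that spectrum concrete, here's the same toy operation in explicit CPS and in the delineated-continuation (async/await) style; the names are made up.

```javascript
// Explicit CPS: all state is threaded through the callback by hand.
function addLaterCPS(a, b, callback) {
  setTimeout(() => callback(a + b), 0);
}

// Delineated continuation: looks synchronous, but still yields control
// back to the event loop at the await point.
async function addLaterAwait(a, b) {
  await new Promise(resolve => setTimeout(resolve, 0));
  return a + b;
}

addLaterCPS(2, 3, sum => console.log(sum));        // prints 5
addLaterAwait(2, 3).then(sum => console.log(sum)); // prints 5
```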

There are times, of course, when your code is very serial, and then you really do benefit from co-routines/threads. For example, PuTTY's implementation of the SSH version exchange, key exchange, and user authentication protocols is one HUGE C function that actually runs as a co-routine cooperating with the UI side, with async I/O under the covers. The PuTTY approach is wonderful for that particular case: those parts of the SSH protocols are extremely serial, and the resulting serial-looking code is very easy on the eyes (so much so that it makes up for the enormous size of the function that implements it!). So it pays to be able to reach for co-routines sometimes, but not always, and I'd say not even most of the time.
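In JS you can approximate that style with a generator-driven coroutine. This is only a sketch with invented protocol steps (nothing from PuTTY's actual code) to show how serial-looking the handler stays:

```javascript
// Serial-looking protocol logic, suspended at each yield until the
// driver feeds in the next incoming event.
function* handshake() {
  const banner = yield 'want-version'; // wait for the peer's version banner
  const key = yield 'want-key';        // wait for the key exchange result
  return `ready: ${banner} / ${key}`;
}

// Tiny driver: resumes the coroutine once per incoming event.
function drive(gen, events) {
  let step = gen.next();
  for (const ev of events) {
    step = gen.next(ev);
    if (step.done) return step.value;
  }
  return undefined;
}

console.log(drive(handshake(), ['SSH-2.0-peer', 'aes256']));
// prints "ready: SSH-2.0-peer / aes256"
```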

Programmers today simply have to be prepared to deal with everything-is-async codebases, and they should be prepared to create codebases where everything is async from day zero.


I believe "programmers should be prepared to deal with async codebases" is purely an engineering issue, and async is a question of providing the necessary syntactic tools for them to achieve that.

There's nothing special about "async" per se. What about people writing OS code in the 90s? They hadn't even heard of async, and "sync" was invented for the sake of easier control over the time shared between processes (remember, all of our OS implementations of processes are CPS-style: a process just gives up control of its stack if it runs for more than X ms).

Another thing I've been considering a lot recently is asynchronicity between machines: 1) receive a request, 2) send a database request, 3) return an error if the database request timed out, ...) etc. Having a language that spans several systems, enveloping time and execution-redundancy constraints, and ensuring the data passed is not garbage, is something that would be novel.


"syntactic tools" -> implicit state.

There's only so much the compiler can do for you. At some point you have to bear some of the cognitive load of state compression (which is really all async is about).

Multi-processing was great, but it doesn't scale to today's world, and neither does multi-threading. It doesn't matter in the least that those two technologies allow the programmer to write serial-looking code that does not execute serially. Those technologies cannot compress program state embodied in the call stacks and heap, therefore they do not scale (because they consume too much memory, have worse cache footprints and higher resident set size, involve heavy-duty context switching, and so on).

In terms of actual novelty in this space in computer science, I don't believe there's been anything new since the 80s. Everything that seems to be new is an old idea rediscovered, or a new take on an old idea, so either way not new at all.

CPS -> least implicit state.

Process -> most implicit state.

In between those two there are a few options, but nothing is a panacea, and ultimately CPS is an option you have to be prepared to use.

Less implicit state -> explicit state, which you can compress well because you understand its semantics.

Less state -> less load for the same work -> more work can be done.

In some contexts (e.g., the UI) this is a very big deal, which is why GUIs are async.


JS is a mess because React uses a concept similar to double buffering to prevent bad rendering? That's all this is: they're batching updates to the DOM before reconciliation to reduce redraws.
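And the batching idea itself is tiny. A minimal sketch in that double-buffering spirit (not React's actual implementation; it assumes `queueMicrotask` is available, i.e. browsers or Node 11+): queue the writes, flush them once per microtask.

```javascript
// Queue updates and apply the whole batch at once per microtask tick.
const pending = [];
let flushScheduled = false;

function queueUpdate(fn) {
  pending.push(fn);
  if (!flushScheduled) {
    flushScheduled = true;
    queueMicrotask(flush); // one flush per burst of updates
  }
}

function flush() {
  flushScheduled = false;
  for (const fn of pending.splice(0)) fn(); // apply the batch in one pass
}

let writes = 0;
queueUpdate(() => writes++);
queueUpdate(() => writes++);
console.log(writes);                       // prints 0 (nothing applied yet)
queueMicrotask(() => console.log(writes)); // prints 2 (batch flushed)
```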



