I believe the claim that "programmers should be prepared to deal with async codebases" is purely an engineering issue, and async is a question of providing the necessary syntactic tools for them to achieve that.
There's nothing special about "async" per se. What about the people writing OS code in the 90s? They had never even heard of "async"; the very notion of "sync" was invented for the sake of easier control over how processor time is shared (remember that all of our implementations of processes in OSs are CPS-style: a process simply gives up control of its stack if it runs for more than X ms).
Another thing I've been thinking about a lot recently is asynchronicity between machines:
1) receive a request
2) send a database request
3) return an error if the database request times out
...) etc.
Having a language that spans several systems, encompassing time and execution-redundancy constraints, and ensuring that the data passed around is not garbage, is something that would be novel.
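A minimal sketch of that receive/query/timeout flow, using Python's asyncio for concreteness (query_db here is a hypothetical stand-in for a real database driver, and the response dicts are invented for the example):

    import asyncio

    async def query_db(query):
        # Hypothetical stand-in for an actual async database driver call.
        await asyncio.sleep(0.1)
        return [("row", 1)]

    async def handle_request(request):
        # 1) request received; 2) forward it to the database;
        # 3) bound the wait so a slow or dead database surfaces as an error.
        try:
            rows = await asyncio.wait_for(query_db(request["query"]), timeout=2.0)
            return {"status": 200, "body": rows}
        except asyncio.TimeoutError:
            return {"status": 504, "body": "database request timed out"}

    print(asyncio.run(handle_request({"query": "SELECT 1"})))

A real cross-machine language would have to express the timeout and the redundancy constraints themselves, not just wrap a library call like this.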
There's only so much the compiler can do for you. At some point you have to bear some of the cognitive load of state compression (which is really all async is about).
Multi-processing was great, but it doesn't scale to today's world, and neither does multi-threading. It doesn't matter in the least that those two technologies allow the programmer to write serial-looking code that does not execute serially. Those technologies cannot compress program state embodied in the call stacks and heap, therefore they do not scale (because they consume too much memory, have worse cache footprints and higher resident set size, involve heavy-duty context switching, and so on).
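To make "state compression" concrete: a blocking handler keeps its whole call stack alive while it waits, whereas the same logic as a coroutine is reduced to a small object storing only the variables that cross a suspension point. A rough, hand-rolled Python illustration (the generator stands in for what an async runtime turns your code into; the numbers printed are indicative, not a benchmark):

    import sys
    import threading

    def blocking_handler(conn):
        # While this waits inside read(), everything below it -- locals,
        # library frames, the reserved thread stack -- stays resident.
        data = conn.read()
        conn.write(data.upper())

    def coroutine_handler(conn):
        # Only the locals that cross a yield point ('conn', 'data') need
        # to be kept while the handler is suspended: that is the
        # "compressed" state the scheduler holds on to.
        data = yield ("read", conn)
        yield ("write", conn, data.upper())

    # An OS thread reserves a full stack up front (commonly megabytes of
    # virtual memory); 0 below just means "platform default".
    print("thread stack size setting:", threading.stack_size())

    # A suspended coroutine is a small heap object by comparison.
    gen = coroutine_handler(None)   # created but never driven here
    print("generator object size:", sys.getsizeof(gen), "bytes")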
In terms of actual novelty in this space in computer science, I don't believe there's been anything new since the 80s. Everything that seems to be new is an old idea rediscovered, or a new take on an old idea, so either way not new at all.
CPS -> least implicit state.
Process -> most implicit state.
In between those two there are a few options, but nothing is a panacea, and ultimately CPS is an option you have to be prepared to use.
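A toy contrast between those two endpoints (Python again; the point is only where the state lives): in the process/direct style the intermediate values sit implicitly in the call stack, while in CPS everything that has to survive a step is handed explicitly to the next continuation.

    # "Process" style: the intermediate values a and b are implicit state,
    # living in this function's stack frame.
    def total_direct(x, y):
        a = x * x
        b = y * y
        return a + b

    # CPS style: nothing is implicit; whatever must survive a step is
    # passed explicitly to the continuation k.
    def square_k(n, k):
        k(n * n)

    def total_cps(x, y, k):
        square_k(x, lambda a:
            square_k(y, lambda b:
                k(a + b)))

    print(total_direct(3, 4))   # 25
    total_cps(3, 4, print)      # 25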
Less implicit state -> explicit state, which you can compress well because you understand its semantics.
Less state -> less load for the same work -> more work can be done.
In some contexts (e.g., the UI) this is a very big deal, which is why GUIs are async.