I agree with the premise, but disagree with the conclusion.
For a little background: my first computer was a Mac Plus (around 1986), and I remember doing file-copy tests on my first hard drive (an 80 MB unit) at over 1 MB/sec. If I remember correctly, SCSI could do 5 MB/sec transfers clear back in the mid-80s. So until we got SSDs, hard drive speed stayed within the same order of magnitude for roughly 30 years (as most of you remember):
http://chrislawson.net/writing/macdaniel/2k1120cl.shtml
So the time to drag our predictable, deterministic, synchronous, blocking business logic into the maze of asynchronous promise spaghetti was a generation ago, when hard drive speeds were two orders of magnitude slower than they are today.
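To make that contrast concrete, here's a tiny Node sketch (the function names are mine, nothing official): the synchronous version reads straight down and throws normally, while the async version pushes a Promise onto every caller up the stack.

    import { readFileSync } from "node:fs";
    import { readFile } from "node:fs/promises";

    // Synchronous: straight-line control flow, errors via try/catch,
    // result in hand when the call returns.
    function loadConfigSync(path: string): string {
      return readFileSync(path, "utf8");
    }

    // Asynchronous: the same one-liner now returns a Promise, so every
    // caller up the stack has to turn async too.
    async function loadConfigAsync(path: string): Promise<string> {
      return readFile(path, "utf8");
    }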
In other words, fix the bad APIs. Please don't make us shift paradigms.
Now, if we want to talk about some kind of compiled or graph-oriented way of processing large numbers of files efficiently, with some kind of async processing internally, then that's fine. Note that this solution will mirror whatever we come up with for network processing as well. That was the whole point of UNIX in the first place: to treat file access and network access as the same stream-oriented protocol. I suspect that's also the motive behind dragging file access into the same problematic async domain that web development is having to deal with now.
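As a rough sketch of that uniformity with today's Node stream API (the file name and port are placeholders I made up), the same pipeline code moves bytes whether the endpoint is a disk file or a TCP socket, and the async plumbing stays inside the pipeline instead of leaking into the business logic:

    import { createReadStream } from "node:fs";
    import { createServer } from "node:net";
    import { pipeline } from "node:stream/promises";

    // A file readable and a TCP socket are both streams, so one
    // pipeline serves either source or sink.
    const server = createServer((socket) => {
      pipeline(createReadStream("large-file.bin"), socket)
        .catch((err) => socket.destroy(err)); // drop the connection on error
    });
    server.listen(9000);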
But really, we should get the web back to the proven UNIX/Actor-model way of doing things: synchronous blocking I/O.
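Here's a minimal sketch of that shape using Node's worker_threads as the actor boundary (assuming a CommonJS build so __filename resolves; the path and message format are mine): each actor does plain synchronous blocking reads, and the concurrency comes from message passing between actors rather than from promises inside the logic.

    import { Worker, isMainThread, parentPort } from "node:worker_threads";
    import { readFileSync } from "node:fs";

    if (isMainThread) {
      // The "actor": an isolated thread we only talk to via messages.
      const worker = new Worker(__filename);
      worker.on("message", (text: string) => {
        console.log(`got ${text.length} chars back`);
        void worker.terminate();
      });
      worker.postMessage("/etc/hosts"); // example path
    } else {
      // Inside the actor, blocking reads keep the logic deterministic;
      // only this worker's thread ever blocks.
      parentPort!.on("message", (path: string) => {
        parentPort!.postMessage(readFileSync(path, "utf8"));
      });
    }

Only the worker thread blocks on the read; the main thread stays free, which is exactly the property the event-loop crowd wants, without coloring the business logic async.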