Events are just a pattern running on top of nonblocking I/O. I preferred nonblocking I/O when I first started network programming, but now I prefer blocking I/O running on separate threads.
The reason is that languages like Go (and really any language that uses pipes or channels instead of a shared memory space) move the programmer from a tangled mess of logic and callback hell to individual processes that each resemble the main loop we're all used to. Cumbersome tasks like fetching a file over HTTP end up looking like opening a local file: the thread just blocks until the data arrives, and the programmer generally doesn't have to worry about how. That was the original vision behind sockets (everything is a file).
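To make that concrete, here's a minimal Go sketch of the blocking style (the URL and filename are just placeholders). The goroutine parks inside http.Get until the bytes arrive, and the remote read ends up shaped almost exactly like the local one:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // fetch blocks until the whole body has arrived, much like reading
    // a local file; the runtime multiplexes other goroutines meanwhile.
    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url) // goroutine parks here
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    func main() {
        remote, err := fetch("https://example.com/data.txt") // placeholder URL
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        local, err := os.ReadFile("data.txt") // placeholder local path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println(len(remote), len(local))
    }

No callbacks, no state machine: the control flow reads top to bottom, the way a main loop does.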
I have high hopes that generators and coroutines will merge the two paradigms. The danger with events is that their logic is a state machine, and a state machine tends toward unmanageability after a few dozen states. The programmer discovers this late in a project, when edge cases and failures can't easily be isolated or fixed; we see it every day on websites whose JavaScript chokes and the page has to be reloaded. The danger with threads is concurrency issues (atomicity, mutexes, semaphores, and so on). The programmer discovers those early on and has trouble writing even the simplest stable program, especially in a lower-level language like C, or even Java. The learning curve for threaded blocking I/O is steep enough that, I think, it never went mainstream.
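A minimal Go sketch of the stumble threads cause early on: two goroutines incrementing a shared counter. Without the mutex this is a data race and the result is unpredictable, so even the "simplest stable program" already demands locking discipline:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu      sync.Mutex
            counter int
            wg      sync.WaitGroup
        )
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    mu.Lock() // drop this and `go run -race` flags a data race
                    counter++
                    mu.Unlock()
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter) // 2000, but only because of the mutex
    }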
Anyway, if we only use channels/pipes and no shared memory, and don't even allow threads access to the filesystem, then coroutines and preemptive threads become basically equivalent. I think the endgame is a methodology in approachable languages like JavaScript/Go that works like Erlang, with the ability to spawn threads on different processors or even different machines, running over something like ZeroMQ or WebRTC. If we can throw in a proof-of-work system like Bitcoin's, and utilize the graphics card with CUDA/OpenCL but without the ugly syntax, we'll really have something, because we'll finally have distributed computing and processor speed won't really matter anymore. I think something like NumPy/Octave/MATLAB could run this way and be several orders of magnitude faster than anything today, at little additional cost since so many machines sit idle anyway.
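Here's a rough Go sketch of that channels-only discipline (the worker and values are hypothetical). The goroutine owns no shared state and touches no filesystem, so nothing about it cares whether it's scheduled as a coroutine, a preemptive thread, or, in principle, a process on another machine with the channels replaced by sockets:

    package main

    import "fmt"

    // square communicates only over channels: no shared state, no
    // filesystem, so its behavior doesn't depend on how it's scheduled.
    func square(in <-chan int, out chan<- int) {
        for n := range in {
            out <- n * n
        }
        close(out)
    }

    func main() {
        in := make(chan int)
        out := make(chan int)
        go square(in, out)
        go func() {
            for i := 1; i <= 5; i++ {
                in <- i
            }
            close(in)
        }()
        for n := range out {
            fmt.Println(n)
        }
    }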