>I understand that IO is expensive, but so is performing string interpolation on a string in memory 50 times (for whatever reason).

I think you are conflating waiting and working. Both cost time, but they are not the same: waiting can be avoided (nonblocking); working cannot.

Iterating over an array in most languages probably takes 0.000001 seconds. Reading from a bad connection could take 30+ seconds, or whatever your server's timeout is.

Since the 30 seconds is spent _waiting_, it can be avoided within a single thread of execution: the thread can do other things while the IO is pending. On the other hand, if you are spending 30 seconds _working_ to compute prime numbers, that cannot be avoided within a single thread of execution.

Blocking vs nonblocking refers to dealing with waits. Does each IO operation do its own waiting, or does each IO operation return immediately so that we can pool the waits into a single poll() (or similar) call?
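
To make pooling the waits concrete, here is a minimal sketch in Lua using LuaSocket's socket.select (the hosts, the requests, and the 30-second timeout are placeholders I made up): a single select call waits on all of the sockets at once, instead of each read blocking on its own.

    -- Sketch: multiplex waits over several sockets with one select() call
    local socket = require("socket")

    local conns = {}
    for _, host in ipairs({ "example.com", "example.org" }) do
        local c = socket.tcp()
        if c:connect(host, 80) then            -- blocking connect, for brevity
            c:settimeout(0)                    -- reads are now nonblocking
            c:send("GET / HTTP/1.0\r\nHost: " .. host .. "\r\n\r\n")
            conns[#conns + 1] = c
        end
    end

    while #conns > 0 do
        -- The only wait: up to 30s for *any* socket to become readable
        local readable, _, err = socket.select(conns, nil, 30)
        if err == "timeout" then break end
        for _, c in ipairs(readable) do
            local line, rerr = c:receive("*l")
            if line then
                print("got line", line)
            elseif rerr == "closed" then
                c:close()
                for i, other in ipairs(conns) do
                    if other == c then
                        table.remove(conns, i)
                        break
                    end
                end
            end
        end
    end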

Parallelization refers to dealing with work. In the rare instance one needs to generate huge Fibonacci sequences, one can do so in a separate fork/process/thread/CPU core/machine and report results back to the event loop via pipe/socket/shared memory. [1] Multiplexing waits in a single thread makes sense; multiplexing work in a single thread, such as array iteration, does not: on the contrary, it sounds like a good way to increase the overhead of iteration and generate more work rather than reduce it.

It's confusing because some tools such as threads can be used to address both the problem of parallel work and the problem of parallel waits.

[1] edit: quick example in Lua:

    -- In the fork: do the heavy work, write the result to the blocking
    -- (write) end of a pipe, and exit. `formfork` is a helper that forks
    -- a child, hands it the write end, and returns the read end.
    local r = formfork(function(w)
        local work = fibonacci(10^10)
        w:write(work .. "\n")
        w:close()
    end)

    -- In the event loop: read from the nonblocking (read) end of the pipe.
    -- `readline` registers a callback instead of blocking on the read.
    readline(r, 1024, function(r, line)
        print('got work', line)
        close(r)
    end)
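
The point being: the expensive fibonacci call runs in the child, and the event loop only ever waits on the pipe's read end the same way it waits on any other nonblocking fd, so the heavy work never stalls the loop.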


