Hacker News

Well, in any case, the problem can be resolved by adding a kernel-level API that allows a process to wait (block) until results are requested from the other end of the pipe.



Why?

The opposite is already true and has the same effect.

Each stage of the pipeline runs when it has data to process, so ultimately the main blocking event is I/O (normally the first stage in a pipeline). Every other process automatically blocks until its stdin is populated by the output of the previous stage. Once its task is complete, it re-checks stdin, and if nothing is present it blocks again.

So the execution of each task is controlled by the process whose data that task needs in order to run.
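There is also backpressure in the other direction: a writer can only get a bounded amount ahead of its reader before the kernel blocks it, since a pipe's buffer is finite (about 64 KiB by default on Linux). A minimal Python sketch, using a non-blocking write end so the "would block" point shows up as an exception instead of a hang:

```python
import os

# Create a pipe and make the write end non-blocking, so that
# instead of sleeping when the buffer fills, os.write raises
# BlockingIOError at the exact point a blocking writer would stall.
r, w = os.pipe()
os.set_blocking(w, False)

written = 0
try:
    while True:
        written += os.write(w, b"x" * 4096)
except BlockingIOError:
    pass  # buffer full: a blocking writer would now be suspended by the kernel

print(f"writer got {written} bytes ahead of the reader before blocking")
os.close(r)
os.close(w)
```

So without any extra IPC, a producer is already throttled: it can fill at most one pipe buffer's worth of output before it is forced to wait for the consumer.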

In your system, why would you want to block the previous step? That would just interfere with the step before it, and you'd have to propagate that signal further up the chain. It seems needlessly complicated, since you have to add extra IPC.


Why: consider generating a stream of random numbers, where each number requires a lot of CPU-intensive work. Obviously you don't want to put unnecessary load on the CPU, so it is better not to fill any buffer ahead of time (before the random numbers are actually requested).
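Within a single process, a generator is the usual sketch of this demand-driven pattern: the expensive work happens only when the consumer asks for the next value, never ahead of time. A minimal Python illustration (the counter is just there to prove laziness; the function name is made up):

```python
import random

calls = 0  # counts how many numbers were actually computed

def expensive_random_numbers():
    """Yield random numbers one at a time; nothing is computed
    until the consumer demands the next value."""
    global calls
    while True:
        calls += 1
        # stand-in for the CPU-intensive work described above
        yield random.random()

gen = expensive_random_numbers()
first_three = [next(gen) for _ in range(3)]
# exactly 3 values were computed; no buffer was filled ahead of demand
```

The proposed kernel API would give a pipeline stage the same property across process boundaries: block the producer until the consumer actually pulls.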


There are arguments for supply-driven processing AND for demand-driven. It all comes down to latency versus throughput trade-offs.



