Hacker News

> Performance only goes downhill from here if the function does more work,

Should be the opposite. Overhead as a proportion of total time goes down as the amount of useful work grows.
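A quick sketch of the proportion argument, with made-up numbers: the fixed per-call overhead stays constant while the useful work grows, so the overhead's share of total time shrinks.

```python
# Illustrative only: fixed call overhead vs. growing useful work.
overhead = 0.001  # seconds of fixed overhead per call (assumed)
for work in [0.001, 0.01, 0.1, 1.0]:
    total = overhead + work
    # As work grows, overhead / total falls toward zero.
    print(f"work={work:>6}s  overhead share={overhead / total:.1%}")
```

With these numbers the overhead share drops from 50% at equal work to about 0.1% when the work dominates.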




Proportionally, sure, but actual wall clock time can only increase.


Okay, but the proportion of overhead is what matters here.

If you have a CPU or memory size bottleneck and a parallelizable workload, it makes plenty of sense to split the work across multiple machines and coordinate them over the network. If your job was going to take 20 minutes for 1 machine to do and you can fan it out to 100 machines and accomplish the same in 12 seconds per machine plus an extra fraction of a second in communication overhead, that’s a huge win in total latency. The overhead doesn’t matter.
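The back-of-the-envelope arithmetic above can be checked directly (the half-second coordination overhead is an assumed figure, consistent with the "extra fraction of a second" in the comment):

```python
# 20 minutes of parallelizable work, fanned out across 100 machines.
single_machine = 20 * 60                 # 1200 seconds on one machine
machines = 100
per_machine = single_machine / machines  # 12 seconds of work each
overhead = 0.5                           # assumed coordination overhead, seconds
total = per_machine + overhead           # 12.5 seconds wall clock
speedup = single_machine / total         # ~96x despite the overhead
print(f"{total}s wall clock, {speedup:.0f}x speedup")
```

Even with the added network overhead, latency drops by nearly two orders of magnitude, which is why the overhead doesn't matter in this regime.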

If you have a trivial workload that can be handled quickly on one machine, but you unnecessarily add extra network hops and drop your potential throughput by 100x, then it’s a huge loss.

A ping server that makes a bunch of international RPC calls before replying is the worst case scenario for overhead.



