
What you suggest is what was followed by a large-scale distributed system I worked on. Your suggestion is also in line with Google's SRE book section on Handling Overload: https://landing.google.com/sre/sre-book/chapters/handling-ov...

> Mandate that batch client jobs use a separate set of batch proxy backend tasks that do nothing but forward requests to the underlying backends and hand their responses back to the clients in a controlled way. Therefore, instead of "batch client → backend," you have "batch client → batch proxy → backend." In this case, when the very large job starts, only the batch proxy job suffers, shielding the actual backends (and higher-priority clients). Effectively, the batch proxy acts like a fuse. Another advantage of using the proxy is that it typically reduces the number of connections against the backend, which can improve the load balancing against the backend.
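
As a rough sketch of the "fuse" behavior described above (my own illustration, not code from the book): a small Go reverse proxy that caps in-flight requests to the backend and sheds excess batch traffic at the proxy tier. The backend URL, ports, and the cap of 100 are hypothetical placeholders.

    // Minimal batch-proxy sketch: forwards requests to one backend, but
    // never allows more than a fixed number of them in flight at once.
    package main

    import (
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        backendURL, err := url.Parse("http://backend.internal:8080") // hypothetical backend
        if err != nil {
            panic(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backendURL)

        // Semaphore: at most 100 requests reach the backend concurrently.
        // Excess batch traffic is rejected here, so the proxy is the fuse.
        sem := make(chan struct{}, 100)

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            select {
            case sem <- struct{}{}:
                defer func() { <-sem }()
                proxy.ServeHTTP(w, r)
            default:
                // Shed load at the proxy instead of passing the overload on.
                http.Error(w, "batch proxy overloaded", http.StatusTooManyRequests)
            }
        })

        http.ListenAndServe(":9090", handler)
    }

Rejecting excess requests at the proxy, rather than queuing them, is what keeps the real backends and their higher-priority clients unaffected when a very large batch job starts.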

The chapters on Load Balancing and Addressing Cascading Failures are related too.
