
Very similar to background queues, but the load balancer holds the request and dispatches it to a waiting worker, or holds onto it until a worker comes to serve it. While the worker handles the request, the load balancer proxies between the client and that worker.

The request serving is the same; only the way the work is dispatched is inverted.
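For illustration, here's a minimal in-process sketch of that inversion in Go (the channel-as-balancer, worker count, and Request type are all made up, not anyone's actual implementation): requests queue at the "balancer" and each worker takes one only when it's free.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type Request struct{ ID int }

    // worker pulls a request from the queue only when it is free to handle one.
    func worker(id int, queue <-chan Request, wg *sync.WaitGroup) {
        defer wg.Done()
        for req := range queue {
            fmt.Printf("worker %d handling request %d\n", id, req.ID)
            time.Sleep(50 * time.Millisecond) // simulate doing the work
        }
    }

    func main() {
        // Unbuffered channel: the "balancer" holds each request until some
        // worker is ready, rather than pushing it onto a busy one.
        queue := make(chan Request)

        var wg sync.WaitGroup
        for w := 1; w <= 3; w++ {
            wg.Add(1)
            go worker(w, queue, &wg)
        }

        for i := 1; i <= 10; i++ {
            queue <- Request{ID: i} // blocks until a worker comes to take it
        }
        close(queue)
        wg.Wait()
    }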




Seems like this wouldn't work as well in situations where you care a lot about latency or resource utilization. If the workers are polling the LB every n ms, then that's an average of n / 2 ms added to _every_ request. Plus additional CPU cycles and network traffic on both ends due to the polling mechanism.
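The n / 2 figure holds if requests arrive uniformly at random within a polling window; a throwaway Go simulation (interval length and arrival model are assumptions) shows the same number:

    package main

    import (
        "fmt"
        "math/rand"
    )

    func main() {
        const pollIntervalMs = 10.0
        const samples = 1_000_000

        var totalWait float64
        for i := 0; i < samples; i++ {
            arrival := rand.Float64() * pollIntervalMs // uniform arrival within one window
            totalWait += pollIntervalMs - arrival      // wait until the next poll tick
        }
        fmt.Printf("average added latency: %.2f ms (expected ~%.2f ms)\n",
            totalWait/samples, pollIntervalMs/2)
    }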


High-volume, low-latency message queueing is a problem people have already had and solved; for their use cases, the solutions cost little enough in latency that the advantages were well worth it.

Also, there's no reason to poll the LB/queue/etc.: the worker just tells it "I'm ready", and it sends something to handle once it has something.

Pull/push here is about who decides when a server is ready to receive another request.
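A minimal sketch of that "I'm ready" loop in Go, where the fetch simply blocks (long-polls) until the balancer has work. The /fetch and /respond endpoints, the balancer address, and the payload shape are all hypothetical, not any particular product's API:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    const balancer = "http://localhost:8080" // hypothetical pull-mode balancer

    func main() {
        // Client timeout longer than the balancer's long-poll window.
        client := &http.Client{Timeout: 30 * time.Second}
        for {
            // "I'm ready": this call blocks until the balancer has a request.
            resp, err := client.Get(balancer + "/fetch")
            if err != nil {
                time.Sleep(time.Second) // back off if the balancer is unreachable
                continue
            }
            job, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusNoContent {
                continue // long-poll window expired with no work; ask again
            }

            result := handle(job)

            // Hand the result back so the balancer can proxy it to the client.
            out, err := client.Post(balancer+"/respond", "application/octet-stream",
                bytes.NewReader(result))
            if err != nil {
                fmt.Println("failed to deliver result:", err)
                continue
            }
            out.Body.Close()
        }
    }

    func handle(job []byte) []byte {
        return append([]byte("handled: "), job...)
    }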

So ... I'm not an expert on actually implementing it, but I've seen systems in practice that *were* in such situations, and it worked out extremely well.


Couldn't the workers long-poll instead of polling on an interval?


If you care about that, having requests queue on a worker that’s already busy (while another worker is idle) is currently the most widely used alternative.



