
You can pipeline requests on http/1.1. But most servers handle one request at a time, and don't read the next request until the current request's response is finished. (And mainstream browsers don't typically issue pipelined requests on http/1.1, IIRC)
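For concreteness, here's a minimal sketch of what pipelining looks like at the socket level (Python; the host, paths, and port are placeholders). Both requests are written before any response is read --- whether the server actually overlaps work on them is exactly the caveat above.

    import socket

    HOST = "example.com"  # placeholder host

    # Two requests queued back-to-back; the second asks the server to close
    # afterwards so the read loop below has a natural end.
    requests = (
        b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
        b"GET /about HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    )

    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(requests)  # second request sent before the first response arrives
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)

    # Both responses come back on the same byte stream, in order; a real client
    # splits them by parsing Content-Length / chunked encoding per response.
    print(b"".join(chunks).decode(errors="replace"))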

If you have a connection per request, and you need 1000 requests to be 'simultaneous', you've got to get a 1000-packet burst to arrive closely packed, and that's a lot harder than this method (or a similar method suggested in the comments: send unfragmented TCP packets out of order, so that when the first packet of the sequence is received, the rest of the packets are already there).




Ok, pipelining as in using the fact that the socket is bidirectional, so you queue up the next request before the previous response has arrived?

Sounds a bit dodgy, since any response could potentially contain a Connection: Close. Maybe ok for some scenarios with idempotent methods.


It's less that the socket is bidirectional and more that most requests have an unambiguous end. A pipeline-naive server with Connection: keep-alive is going to read the current request until the end, send a response, and then read from there. You don't have to wait for the response to send the next request, and you'll get better throughput if you don't.
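A minimal sketch of that pipeline-naive loop (Python; a toy server that assumes bodyless requests such as GET): read one request to its end, send one response, then read from where you left off. Pipelined requests simply wait in the buffer until the loop comes back around.

    import socket

    def handle_connection(conn: socket.socket) -> None:
        buf = b""
        while True:
            # Read until at least one complete request (headers only) is buffered.
            while b"\r\n\r\n" not in buf:
                data = conn.recv(4096)
                if not data:
                    return  # client closed the connection
                buf += data
            request, _, buf = buf.partition(b"\r\n\r\n")
            # Respond to exactly this one request; anything pipelined behind it
            # is still sitting in `buf` or in the socket's receive buffer.
            body = b"hello\n"
            conn.sendall(
                b"HTTP/1.1 200 OK\r\n"
                b"Content-Length: %d\r\n"
                b"Connection: keep-alive\r\n\r\n" % len(body) + body
            )

    def serve(port: int = 8080) -> None:
        with socket.socket() as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", port))
            srv.listen()
            while True:
                conn, _ = srv.accept()
                with conn:
                    handle_connection(conn)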

Some servers do some really wacky stuff if you pipeline requests though. The RFC is clear: the server should respond to each request one at a time, in order. However, some servers choose not to --- I've seen out-of-order responses, interleaved responses, as well as server errors in response to pipelined requests. That's one of the reasons browsers don't tend to do it.

You also rightfully bring up the question of what to do if the connection is closed and your request has no response. IMHO, if you got Connection: Close in a response, that's an unambiguous case --- the server told you when serving response N that it won't send any more responses, and I think it's safe to resend any N+1 requests, as the server knows you won't get the response and so it shouldn't process those requests. It's less clear when the connection is closed without explicit signalling --- the server may be processing the requests and you don't know. http/2 provides for an explicit close that tells you the last request it saw, which addresses this; on http/1.1, when the server closes unexpectedly, it's not clear. That often happens when the connection is idle.
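A sketch of that resend decision (Python; requests and responses are modeled as plain dicts and everything is a placeholder except the idempotent-method set, which comes from the HTTP spec):

    # requests: the full pipeline we sent, in order; each is {"method": ..., ...}
    # responses: parsed response headers we actually received, in arrival order
    IDEMPOTENT = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

    def requests_to_resend(requests, responses):
        unanswered = requests[len(responses):]
        if not unanswered:
            return []
        last = responses[-1] if responses else {}
        if last.get("Connection", "").lower() == "close":
            # The server said response N is its last: it shouldn't have
            # processed anything after request N, so resending is safe.
            return unanswered
        # Connection dropped with no explicit close: the server may have
        # processed the tail already, so only retry idempotent methods.
        return [r for r in unanswered if r["method"] in IDEMPOTENT]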

An HTTP/1.1 server may send hints about how many requests until it closes the connection (which would be explicit), as well as the idle timeout (in seconds). But it's still not fun when you send a request and you receive a TCP close, and you have to guess if the server closed before it got the request (you should resend) or after (your request crashed the server, and you probably shouldn't resend).
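Those hints are commonly carried in a Keep-Alive response header, e.g. Keep-Alive: timeout=5, max=100 (idle timeout in seconds, remaining requests on this connection). A small parsing sketch:

    def parse_keep_alive(value: str) -> dict:
        """Parse 'timeout=5, max=100' into {'timeout': 5, 'max': 100}."""
        hints = {}
        for part in value.split(","):
            if "=" in part:
                name, _, val = part.partition("=")
                hints[name.strip().lower()] = int(val.strip())
        return hints

    print(parse_keep_alive("timeout=5, max=100"))  # {'timeout': 5, 'max': 100}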


https://en.wikipedia.org/wiki/HTTP_pipelining
https://www-archive.mozilla.org/projects/netlib/http/pipelin...
https://kb.mozillazine.org/Network.http.pipelining

This has existed for years, and honestly it worked pretty well for me on most servers before HTTP/2 came around, so long as you didn't abuse it. You could set up multiple connections too. I usually had mine set to "4".

Some servers didn't support it, though most did --- which is why, when the first HTTP/2 tech demos came out, I really couldn't see the enormous speedups people were trying to demo.



