Hacker News

So Google says that pipelining is bad, but they never compared against it in any of their published results. One of their results for mobile didn't even include TCP connection setup for SPDY but did for HTTP (presumably their SPDY implementation kept the connection to their test server open). Google says bad proxies and routers break pipelining, but if SSL lets SPDY bypass those broken middleboxes, then the same fix would work for pipelining as well.
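For reference, HTTP/1.1 pipelining is nothing exotic: the client just writes several requests on one keep-alive connection before reading any response. A minimal sketch of building such a pipelined request buffer (hostnames and paths here are made up for illustration):

```python
def pipelined_requests(host, paths):
    # HTTP/1.1 pipelining: concatenate multiple GET requests on one
    # connection, then read the responses back in the same order.
    return b"".join(
        "GET {} HTTP/1.1\r\nHost: {}\r\nConnection: keep-alive\r\n\r\n"
        .format(path, host).encode("ascii")
        for path in paths
    )

# Two requests go out back-to-back, no waiting for the first response.
buf = pipelined_requests("example.com", ["/index.css", "/index.js"])
```

The catch the comment alludes to is that responses must come back strictly in order, so one slow response stalls everything behind it (head-of-line blocking), and some intermediaries mishandle the back-to-back requests.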

Meanwhile Microsoft published data showing pipelining to be basically the same speed as SPDY, and one of Google's own pages (Maps, IIRC) loaded much slower with SPDY than with plain HTTP because of a priority inversion, so priority codes had to be embedded in the site content.

So there's this much more complicated single connection, with priorities and "TCP-over-TCP" and dubious performance benefits. Why? I wonder how long Google will allow each request to come from a different connection. That seems to be where they are headed with this.




Yeah. I'm still not sold either.

To add to your examples, Google claims resource sharding is bad, complex, and "hacky", and criticizes it for possibly breaking client-side caching and invalidating HTTP proxy support.

Then they move on to show how "wonderful" server-push is with the example of a request for index.html also pushing index.js and index.css, two files almost guaranteed to be cached more than 99% of the time. As if that doesn't break caching.
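(The workaround servers ended up needing makes the point: to avoid pushing assets the client already has, the server must track cache state itself. A minimal sketch, where the cookie listing cached assets is a hypothetical convention, not anything in the HTTP/2 spec:)

```python
def should_push(request_headers, asset):
    # Naive cache-aware push: on first visit, set a cookie listing the
    # assets that were pushed; on later visits, skip anything the
    # cookie says the client should already have cached.
    cached_assets = request_headers.get("cookie", "")
    return asset not in cached_assets

# First visit: no cookie, so push everything.
should_push({}, "index.css")                          # True
# Return visit: the asset is listed, so pushing would be a pure waste.
should_push({"cookie": "index.css index.js"}, "index.css")  # False
```

Which is to say: server push reintroduces, in-protocol, exactly the cache-blindness Google complains about in sharding.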

So you have a possibly hacky thing you may do which may break something, which is bad. And then you have a new protocol feature causing the exact same problem. And that is good. Hey Google: Make up your mind, already!

Yeah. Not sold at all. HTTP 2.0 is a terrible protocol, and not because it doesn't do enough, but because it does too much. It attempts to solve problems that don't belong in the application layer.

HTTP 1.x was a nice, simple, stateless, text-based protocol. This thing is a terrible, stateful, impossible-to-debug Rube Goldberg machine that has nothing in common with the simple, nice protocol whose name it attempts to piggyback on.


Google seems to dislike sharding because it's an end run around CWND and congestion control. On an uncongested network sharding works great; once the network starts to become congested, it works against you. The priority system and single persistent TCP connection are there to make it suck the least for everyone.
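The "end run" is easy to quantify: every new TCP connection gets its own initial congestion window, so sharding across N hosts multiplies the data a sender may blast into the network in the first round trip. A rough back-of-the-envelope, assuming the RFC 6928 initial window of 10 segments and a typical 1460-byte MSS:

```python
INIT_CWND = 10   # initial congestion window in segments (RFC 6928)
MSS = 1460       # typical TCP max segment size in bytes

def first_rtt_bytes(connections, init_cwnd=INIT_CWND, mss=MSS):
    # Each connection starts with its own congestion window, so N
    # sharded connections put N times as much data in flight before
    # any congestion feedback arrives.
    return connections * init_cwnd * mss

first_rtt_bytes(1)   # 14600 bytes on a single connection
first_rtt_bytes(6)   # 87600 bytes across six sharded hostnames
```

Fine on an idle link, but on a congested one those six connections also back off independently, which is exactly the unfairness a single prioritized connection is meant to avoid.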



