So Google says that pipelining is bad, but they never compared against it in any of their published results. One of their mobile results didn't even include TCP connection setup for SPDY but did for HTTP (presumably their SPDY implementation kept the connection to the test server open). Google says broken proxies and routers defeat pipelining, but if SSL lets SPDY bypass those middleboxes, then the same trick would work for pipelining as well.
Meanwhile, Microsoft published data showing pipelining to be basically the same speed as SPDY, and one of Google's own pages (Maps, IIRC) loaded much slower over SPDY than plain HTTP because of a priority inversion, so priority codes had to be embedded in the site content.
So there's this much more complicated single connection, with priorities and "TCP-over-TCP" and dubious performance benefits. Why? I wonder how long Google will allow each request to come from a different connection. That seems to be where they are headed with this.
To add to your examples, Google claims resource sharding is bad, complex, and "hacky", criticizing it for possibly breaking client-side caching and invalidating HTTP proxy support.
Then they move on to show how "wonderful" server-push is with the example of a request for index.html also pushing index.js and index.css, two files almost guaranteed to be cached more than 99% of the time. As if that doesn't break caching.
So you have a possibly hacky thing you may do which may break something, which is bad. And then you have a new protocol feature causing the exact same problem. And that is good. Hey Google: Make up your mind, already!
Yeah. Not sold at all. HTTP 2.0 is a terrible protocol, and not because it does too little, but because it does too much. It attempts to solve problems that don't belong in the application layer.
HTTP 1.x was a nice, simple, stateless text-mode protocol. This thing is a terrible, stateful, impossible-to-debug Rube Goldberg machine, and it has nothing in common with the simple, nice protocol whose name it attempts to piggyback on.
Google seems to dislike sharding because it's an end run around CWND and congestion control. In an uncongested network, sharding works great. Once the network starts to become congested, sharding works against you. The priority system and single persistent TCP connection are there to make it suck the least for everyone.
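To put rough numbers on the "end run around CWND" point, here's a back-of-the-envelope sketch (my own illustrative constants, not figures from the talk): with a typical initial congestion window of ~10 segments per connection, sharding across N hosts multiplies the data a server can push in the first round trip.

```javascript
// Rough illustration of why sharding bypasses congestion control:
// each new TCP connection gets its own initial congestion window,
// so N sharded hosts means roughly N times the first-RTT burst.
const INITCWND_SEGMENTS = 10; // common Linux default (initcwnd 10)
const MSS_BYTES = 1460;       // typical Ethernet MSS

function firstRttBytes(shards) {
  // Bytes a server can send in the first round trip, ignoring
  // handshakes and slow-start growth -- a deliberate simplification.
  return shards * INITCWND_SEGMENTS * MSS_BYTES;
}

console.log(firstRttBytes(1)); // single connection: 14600 bytes
console.log(firstRttBytes(6)); // sharded across 6 hosts: 87600 bytes
```

Great when the pipe is idle, but under congestion those six uncoordinated windows are exactly the burst the network can't absorb.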
Shame about the huge banner at the top of the page that stays there as you scroll. Website designers who purposefully waste my screen space, please get a different job!
Well, it's not a banner. It's a video, and the main reason for the page to exist :)
But yeah, agree with you. Particularly in this format, where there's so much value in the transcript below it, it'd be better to not have anything sticky. Like YouTube/Vimeo, where sometimes I go down to continue reading, and leave the video playing on top (off screen).
Fascinating and educational information from Ilya, as always, especially about improved mobile latency in North America.
Regarding the format of the webpage: The video completely blocks my laptop browser window in Firefox v. 31.0, making it almost impossible to read the transcript. Sticky HTML elements on text-heavy pages drive me crazy. A while ago I made a simple bookmarklet that makes me a much happier consumer of such pages. One version of it walks the DOM and unaffixes fixed DOM elements. The other changes them to display: none. You can roll your own pretty easily, or try out the ones I made at StaticDing.org. They both make it easy for me to read the transcript of Ilya's presentation. Without them, I probably would have given up.
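For anyone who wants to roll their own, the "unaffix" version boils down to something like this. This is a sketch of the idea, not the actual StaticDing.org source:

```javascript
// Sketch of an "unaffix" bookmarklet: walk every element and demote
// position:fixed (and position:sticky) to position:static.
function isAffixed(position) {
  return position === 'fixed' || position === 'sticky';
}

function unaffixAll(doc) {
  let changed = 0;
  for (const el of doc.querySelectorAll('*')) {
    const pos = doc.defaultView.getComputedStyle(el).position;
    if (isAffixed(pos)) {
      // For the display:none variant, set el.style.display = 'none'
      // here instead of changing the position.
      el.style.setProperty('position', 'static', 'important');
      changed++;
    }
  }
  return changed; // number of elements demoted
}

// Only run automatically when a real page is present:
if (typeof document !== 'undefined') {
  unaffixAll(document);
}
```

Wrap the body in `javascript:(function(){...})();` to use it as a bookmarklet.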
I'd rather see a less intrusive fixed tab on the left or right that lets you open a modal or slide in the video div. I do understand the video is the focal point, and the aspect ratio matters if you want to show something non-squished (I've been there), but as others on laptops and netbooks have mentioned, the video div takes up nearly 45% of the vertical space, and it's distracting as you get your bearings and figure out what's on the page. Perhaps the video should sit just above the fold and only become fixed once you scroll to it, until the next informational section, similar to a sticky table header? I live for solving these types of problems, and it's a shame I don't run into them more in my work; at no time am I saying I have all of the answers.
Edit: I like what you are doing on mobile since you are hamstrung without many options, maybe on iPad portrait and wider you could go for a 2 column approach with the video on the left in a "sidebar" and the text flowing on the right? Again, just an idea. At any rate what you have is fine and gets the point across and I'm just being picky!
Hey there. Tim from Heavybit. I wanna make sure you're having the best experience possible. Can you get me some more info on the banner? It might be the video of Ilya's talk.
Sometimes it's nice to skim a transcript when a video can't be watched or when deciding whether to invest the time into watching the video. Some way of hiding the banner would be useful here.
Yes, it's the video of the talk that follows you as you scroll down the page. If you are not watching the video it is a terrible format as it blocks a large amount of the reading area.
yes, it's the video. Some of us, though, would prefer to just read the transcript (thanks for providing a high quality transcript, awesome) instead of watching the video.
Some way to collapse the video 'banner' into a smaller area, so we can use the full screen, or nearly, when reading the transcript, would be welcome. Perhaps if you click on the transcript to open the video at that point, it could expand again.