
But dumping everything through zlib turned out to be a security hole: http://en.wikipedia.org/wiki/CRIME.

I wrote some terrible, terrible code for Chromium to patch zlib in order to segment different sources of data and compress them separately while still being wire compatible with zlib. I'll be very happy when I can remove it.
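For illustration, here's a minimal sketch of the general idea using Python's zlib bindings (the actual Chromium patch is C and more involved): a full flush between data from different trust domains resets the compressor's history, so later bytes can't be back-referenced against earlier ones, while the output stays a single ordinary zlib stream.

    import zlib

    # Hypothetical data from two trust domains.
    attacker_controlled = b"GET /?q=sessionid=guess HTTP/1.1\r\n"
    sensitive = b"Cookie: sessionid=Zq8realsecret\r\n"

    c = zlib.compressobj()
    out = c.compress(attacker_controlled)
    # Z_FULL_FLUSH resets the compression history, so the sensitive bytes
    # below can't be matched against the attacker-controlled bytes above.
    out += c.flush(zlib.Z_FULL_FLUSH)
    out += c.compress(sensitive)
    out += c.flush(zlib.Z_FINISH)

    # Still one valid zlib stream on the wire.
    assert zlib.decompress(out) == attacker_controlled + sensitive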




Just like it turned out that minimized HTTP was faster than SPDY: http://research.microsoft.com/apps/pubs/?id=170059

I'm not sure what the motivation is behind HTTP2/SPDY. There has been consistent misinformation from Google about its performance (for instance, including initial connection time for HTTP but omitting it for SPDY, using an outdated HTTP stack, and not using HTTP pipelining), so I doubt the motivation is performance.

It looks like just not-invented-here bloat, but I wouldn't be surprised to find out that the SPDY connection to google-analytics.com is kept open and reused across tabs.


> It looks like just not-invented-here bloat

To me, the real advantages are:

- native multiplexing on a single TCP socket. No more connection pools, no more domain sharding...

- the server is now able to send data at arbitrary moments.

Here's a concrete example: how would you do a webchat? You need bidirectional communication, potentially initiated by either party. Easy solution: use WebSockets.

With SPDY/HTTP2, you can use POSTs for sending messages and Server-Sent Events for receiving them. You don't need to pool connections, and you don't need a something-over-HTTP layer like WebSockets; you just use standard HTTP semantics.
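For example, a minimal sketch with Flask (my own choice of framework, purely for illustration): sending is an ordinary POST, receiving is a long-lived Server-Sent Events response, and over SPDY/HTTP2 both ride the same multiplexed connection.

    import queue
    from flask import Flask, Response, request

    app = Flask(__name__)
    subscribers = []  # one queue per connected listener

    @app.route("/messages", methods=["POST"])
    def post_message():
        # Sending a chat message is a plain POST.
        msg = request.get_data(as_text=True)
        for q in subscribers:
            q.put(msg)
        return "", 204

    @app.route("/messages", methods=["GET"])
    def stream_messages():
        # Receiving is a Server-Sent Events stream: a standard HTTP response,
        # just kept open, with "data:" framed lines.
        q = queue.Queue()
        subscribers.append(q)

        def events():
            while True:
                yield f"data: {q.get()}\n\n"

        return Response(events(), mimetype="text/event-stream")

The browser side is just fetch()/XHR for the POST and EventSource for the stream; no extra framing protocol involved.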


I actually agree with you for the most part. With HTTP2/SPDY, Google wants to foster an internet of highly stateful applications, the web as a remote GUI to their servers, rather than the mostly stateless documents it has been. I don't see this as an improvement.


Apart from the fact that it's a couple of years old…

There are also a number of potential shortcomings and details missing from the study. For example, you only have to look at httparchive.org to see that the dummy page isn't representative of real-world pages, and they used similarly small pages for the other sites.

We also don't have any information on the TLS setup, e.g. whether it was well optimized.

The study provides some interesting data points but would benefit from further research.


> to see that the dummy page isn't representative of real-world pages, and they used similarly small pages for the other sites.

If you believe the SPDY marketing material that one of the reasons behind it is to reduce the delay from TCP window expansion, then small sites should be where it shines: a small site is more influenced by the time to 'ramp up' connections. Yet it appears to be slower on small sites.

Also, bbc.com and greenweddingshoes.com (now 10 MiB) are small pages? Hardly.

It's pretty clear to me, given the slew of basic errors and oversights in Google's numbers, that performance wasn't the motivation behind this protocol. I'm scratching my head trying to understand what they were thinking.


I was going through that article after you posted it... is BREACH still an exploit in the wild? Turning off compression altogether seems painful :/


Well, the exploit doesn't just go away.

Any context compressor could introduce the same hole if attacker-provided and sensitive data share contexts.
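A toy demonstration of the oracle (made-up response format, just to show the signal): when attacker-reflected input matches the secret, the shared compression context makes the output shorter.

    import zlib

    SECRET = "sessionid=7f3a9c"  # hypothetical value the attacker is after

    def compressed_length(reflected: str) -> int:
        # Simulates a response body that echoes attacker input next to a secret.
        body = f"search={reflected}&{SECRET}".encode()
        return len(zlib.compress(body))

    # A correct guess back-references the secret and compresses better;
    # real attacks refine a prefix byte by byte, with padding tricks and
    # many samples to smooth out noise.
    print(compressed_length("sessionid=7f3a9c"))  # shorter
    print(compressed_length("sessionid=XXXXXX"))  # longer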

Specific countermeasures include salting/masking your anti-CSRF tokens (make sure they aren't constant, but differ on every page load!).
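For example, one common masking scheme is to XOR the constant token with a fresh per-response mask; the helper names here are made up for illustration.

    import secrets

    def mask_token(token: bytes) -> bytes:
        # Emit mask || (mask XOR token): the bytes differ on every page load,
        # so compressed length no longer reveals partial matches against guesses.
        mask = secrets.token_bytes(len(token))
        return mask + bytes(m ^ t for m, t in zip(mask, token))

    def unmask_token(masked: bytes) -> bytes:
        half = len(masked) // 2
        mask, xored = masked[:half], masked[half:]
        return bytes(m ^ x for m, x in zip(mask, xored))

    token = secrets.token_bytes(16)  # the real, constant anti-CSRF secret
    assert unmask_token(mask_token(token)) == token
    assert mask_token(token) != mask_token(token)  # different on each render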



