If You Can Read This, You're SNIing (mnot.net)
129 points by mmastrac on May 11, 2014 | 57 comments



One of the more interesting facts from this was that Python 2 will be supporting SNI out of the box; the PEP below covers the related discussion of what will now be backported into the 2.7 release:

http://legacy.python.org/dev/peps/pep-0466/

Since there's no other commentary yet, I'd just like to point out how much of a pain SNI turned out to be for us in the release of the product I was just working on. Sure, the major browsers all support it, but you get bitten by things like automated testing tools (e.g. Python 2-based ones using old versions of the requests library), third-party load-testing services (one we've contracted with doesn't support SNI, but it's "on their roadmap"), Charles Proxy (we use this a lot for remote debugging on devices, but its SNI support is hit-and-miss), and all the little things like old versions of cURL used in monitoring.

In the end we moved back to virtual-IP-based SSL to support the tooling. I was a little sad when this happened.


I say hurrah! There is an awful little dance you can do to make SNI work in 2.7 with requests, but it is way too complicated for most deployments:

https://github.com/kennethreitz/requests/blob/master/request...

I've actually done this once. Took all day.
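For anyone curious, the moving parts look roughly like this on Python 2 (a sketch; it assumes pyOpenSSL, ndg-httpsclient and pyasn1 are already installed, which is the hard part, and newer Requests does the injection itself on import):

  # Swap urllib3's TLS layer for pyOpenSSL so the ClientHello carries SNI.
  from requests.packages.urllib3.contrib import pyopenssl
  pyopenssl.inject_into_urllib3()

  import requests
  r = requests.get("https://www.mnot.net/blog/2014/05/09/if_you_can_read_this_youre_sniing")
  print(r.status_code)  # 200 once SNI works; a certificate error without it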


Yeah, we're not delighted about how complicated this is, but it's really the best we could do. =( To be clear though, Requests only requires that the three packages be installed; it'll do everything else automatically.

Actually, is there interest in this being an installable add-on? Meaning you'd run: pip install requests[SNI] to get it?


I'm just glad it exists at all. Saved my bacon wrt backend toolchain when I set up SSL through App Engine.

I think my problems were with getting pyOpenSSL compiled for Windows and/or my way-too-old Ubuntu. Or maybe it was pycrypto as a deeper dependency. Something like that. I'm sure an add-on would be nice, though I'm not sure it would solve what I ran into. But with Python 2.7.7 coming up pretty soon, it may not be worth the effort.


I spent a whole day just trying to figure out what was wrong with my code on the site. Worked fine in dev, but when pushed to AppEngine, it just flat out broke my API calls. FWIW, if you want to get a virtual IP on AppEngine (which fixes the 'problem'), it's like $40-something a month extra.


That's what we ended up doing. It's expensive per month, but you can roll up multiple sites under a wildcard cert (I think -- I'd have to ask the devops guy who is thankfully dealing with this mess).


Yes. The criterion for avoiding SNI is one IP per cert, not necessarily one IP per domain. If you can consolidate subdomains under one cert, you only need one IP. The issue SNI deals with is cert switching on a single IP:port, not anything inherently related to multiple domains on an IP.
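You can watch that switching from the client side, too. In Python 3.4+'s ssl module, server_hostname is exactly what goes into the SNI extension, so varying it against a single IP:port is what selects the cert (a minimal sketch; example.com is a placeholder host):

  import socket, ssl

  def peer_cert_subject(host):
      ctx = ssl.create_default_context()
      sock = socket.create_connection((host, 443))
      # server_hostname fills the SNI field of the ClientHello
      tls = ctx.wrap_socket(sock, server_hostname=host)
      subject = tls.getpeercert()["subject"]
      tls.close()
      return subject

  print(peer_cert_subject("example.com"))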


It looks like browsers basically all support SNI now, but unfortunately it also looks like, if you require SNI, you'll be giving up traffic from a fair number of RSS readers and search-engine spiders, which makes this a non-starter for me. I'm surprised the author is OK with giving up those 263 NewsBlur subscribers. I hope these RSS readers and spiders get fixed soon.

I don't think there's any point in trying to send back an error document to clients that don't support SNI, since the client will get a certificate error before it sees any error you send. On the other hand, the clients that don't support SNI also look like just the type of clients that don't properly check certificates. If that's true, you're better off winging it without SSLStrictSNIVHostCheck and hoping that these old clients will ignore the certificate error and then be routed to the correct vhost anyway via the Host: header.
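For reference, the "winging it" setup is just the Apache mod_ssl directive left at its permissive setting:

  # Non-SNI clients get the default vhost's cert instead of a hard 403;
  # if they ignore the resulting cert error, Host:-based routing still works.
  SSLStrictSNIVHostCheck off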


Bug report opened on NewsBlur: https://github.com/samuelclay/NewsBlur/issues/534

Because they're using Requests, this should be a fairly trivial fix for them.


All people using browsers on a modern OS support it. I've still got about 9% of traffic on Windows XP.


Using Chrome or Firefox on Windows XP also works. It's only Internet Explorer that's screwed up.


Good point! That looks like it cuts it in half -- 4 or 5%.


What about a modern browser on XP?


As long as it doesn't use the built-in Windows HTTPS stack (wininet/schannel), you're fine: modern Chrome and FF will work just fine in this regard.


  > == Unknown
  > Not sure about these. shrug
  > - Gregarius/0.6.0 (+http://devlog.gregarius.net/docs/ua)
This is from Gregarius, my long-time favorite RSS/Atom aggregator, written in PHP: http://sourceforge.net/projects/gregarius/ (a bit dead nowadays... I'll have to migrate some day)


Author of Gregarius here, have a hug. But yes: that project is dead.


After the notorious shutdown of Google Reader I hoped to just resurrect my old Gregarius deployment, and was forced to face the project's death. So, what are you using nowadays instead? (Or are you using any RSS/Atom reader at all?)


Not the author, but if you want to host your own you should consider goread (https://github.com/mjibson/goread) or ttrss (http://tt-rss.org/redmine/projects/tt-rss/wiki). Goread.io is the hosted version of goread if you don't mind paying the $3 per month. Otherwise you have feedly, newsblur, theoldreader, ..., but goread is really the closest to Google Reader (clean/fast UI, keyboard shortcuts are the reason I pay).


After much consideration (I used Lilina, and then Gregarius with the Lilina theme once Lilina itself could no longer handle the number of inbound feeds), I'm inclined to choose tt-rss... but I have not migrated yet.


Have a hug back. I really liked this reader: worked well; logical interface; looked nice.

There appears to be a fork on GitHub: https://github.com/jphpsf/gregarius


Oh, I want to thank you for Gregarius. I still use it to this day and haven't found any replacement I like so far (tiny tiny rss just did not cut it last time I checked, a year ago).

May I ask why you decided to stop development?


I didn't really decide to halt development; the project simply lost momentum over time, and I didn't consider it dead until a couple of years later. During that timeframe I also stopped writing PHP and picked up saner web-development habits.


Can't blame you. I have to admit I thought several times about modifying a few things that bother me (e.g. to have a mobile-friendly version), but just the thought that it's PHP pushed me away...


SNI is the default for Google App Engine; as the market share of SNI sites increases, the few RSS readers that don't currently support it will be forced to update or be ditched in favour of ones that do.

We shouldn't let old code hold back the web.


Agreed. And if you do this, you can probably also drop SSL connections in favour of TLS only. I've done this on my work and personal sites and everything works fine for modern browsers.

For example, the config for your Nginx server block can look like this, with no need to support SSL at all:

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

I realise this excludes some traffic and I'm okay with it.
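For completeness, the surrounding server block might look something like this (a sketch; the domain and certificate paths are placeholders):

  server {
      listen 443 ssl;
      server_name example.com;

      ssl_certificate     /etc/nginx/ssl/example.com.crt;  # placeholder paths
      ssl_certificate_key /etc/nginx/ssl/example.com.key;

      # TLS only; dropping SSLv3 also means SNI is always available
      ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  }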


I have taken exactly the same stance, which has the added benefit that you can also drop the vulnerable RC4 ciphers.

https://wiki.mozilla.org/Security/Server_Side_TLS#RC4_weakne...


Yep, if you are requiring SNI, SSLv3 might as well be disabled, as SNI will never work with it.


One.com uses SNI in order to sell SSL-enabled websites for somewhere around $2.5/month.

Edit: We have around 1-point-something million websites, though I don't know how many of those have SSL enabled.

(Disclosure: I happen to work there.)


After re-reading it a few days later, I realize it reads a lot like an advertisement.

Some technical stuff wrt. SNI:

We waited a long time for SNI support to be widespread enough for us to offer it to customers. Even though we have a lot of users in the 'mom-and-pop' demographic, who typically tend to have older equipment, I'm not aware of widespread complaints about it.

It also turned out that most webservers can't handle being loaded with a few thousand SSL certificates...

Finally, we were somewhat nervous about the rumored high CPU cost of SSL/TLS, but it didn't turn out to be an issue, AFAIK.


Why is it impossible to buy web hosting from you without buying or transferring a domain?


I believe it is possible, if you beg support enough.

I do know we strongly discourage it though, as it makes some parts of shared hosting hard for us, e.g. quickly moving your website to another IP in case of a DDoS attack.


You can have SNI on Windows XP if you use a browser instead of Internet Explorer.


FYI it's not just IE and some oddballs you have to worry about.

Last time I experimented, I found that Android's Downloads manager (yes, even on KitKat), and probably some other popular Java clients, have trouble with SNI. Worth noting if anyone else is thinking of switching.


Am I the only one deeply disappointed by SNI? Having the domain go out in plaintext is a major step down in privacy from the way it was before.

(Yes, there's an additional leak in the form of a DNS lookup that has to happen, but as a client that's usually easy to address if you care.)


How is it a step down? Without SNI, the public IP you connected to also uniquely identified the domain/cert you were visiting.


You are absolutely correct, but in practice I believe it often works out differently. I imagine that under pre-SNI conditions, many hotels/free hotspots/university/work firewalls don't go to the trouble to actually connect to the IP to see if it matches their list of bad actors. And that guess has often been borne out for me where, for example, http://youtube.com is blocked but https://youtube.com is permitted. With SNI, they can easily passively sniff the domain you want and block and/or log it, no active measures like reverse lookups or probing connections required.

Edit: Actually now that I think about it, could they just sniff the certificate offered by the server and get the same info? If so that's unfortunate, but plenty of firewalls don't seem to be doing that, as I noted above with my youtube example.


If it's equally possible to sniff out the domain name, what makes you predict that more operators would go through the trouble under SNI than did before?

The sites in question will likely be blocked soon enough, but it won't be because SNI is less secure; it will be because more and more traffic will default to TLS (a development helped along by SNI, certainly), and the operators will notice and the gateway providers will add the required sniffing, covering both "old school" certs and SNI.


Yeah, I think you're right. Then we'll have a proposal for SNI2 and a multitude of people wishing it had been done right the first time.


Wouldn't it be possible to engineer the protocol in such a way that the domain that you want to go to is not publicly identifiable?

Something like this, perhaps?

1. Server sends a salt

2. Client sends hash of salt + target domain

3. Server computes hash of salt + target domain for all possible domains that it serves and compares to find out which one the client is requesting
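As a toy Python sketch of those three steps (the domain list is made up):

  import hashlib, os

  served = ["example.com", "blog.example.net", "shop.example.org"]  # made up

  # 1. Server sends a salt
  salt = os.urandom(16)

  # 2. Client sends hash of salt + target domain
  target = "blog.example.net"
  client_hash = hashlib.sha256(salt + target.encode()).digest()

  # 3. Server hashes salt + each domain it serves and compares
  match = next((d for d in served
                if hashlib.sha256(salt + d.encode()).digest() == client_hash),
               None)
  print(match)  # blog.example.net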


Using DNS, you can usually find all the domains a certain IP hosts[1], so the attacker could just do the same as the server to find the target domain.

That said, there's an argument going on at the IETF list about encrypting SNI and the rest of the TLS handshake: http://www.ietf.org/mail-archive/web/tls/current/msg11823.ht...

[1] http://reverseip.domaintools.com/


You could do that, but who is going to want an extra round trip or two while making an HTTPS connection? Internet latency is often bad enough already.


.NET running on Windows 7+ should support TLS SNI via System.Net.WebRequest; the test I just ran on my machine worked fine (Windows 7, .NET 4.5):

  using System;
  using System.IO;
  using System.Net;

  string url = "https://www.mnot.net/blog/2014/05/09/if_you_can_read_this_youre_sniing";
  // HttpWebRequest sends the host name in the TLS ClientHello (SNI)
  var request = (HttpWebRequest)WebRequest.Create(url);
  using (var response = (HttpWebResponse)request.GetResponse())
  using (var reader = new StreamReader(response.GetResponseStream()))
  {
      Console.WriteLine(reader.ReadToEnd());
  }
As far as I know System.Net.Security.SslStream does NOT support TLS+SNI though.


I really, really hope SNI takes off soon. Amazon just added SNI capability to their load balancers and it allows us to avoid spending $600/month in fees. Apparently, when you host a normal TLS connection on an ELB they have to provision an IP address for every endpoint in their network, which costs a lot of money. Using SNI allows them to use a single ELB endpoint for multiple domains, which saves us a ton of money. :)


Looks like the web crawlers for Bing and Yahoo are not sending SNI, according to this reporting. Can anybody corroborate?


rawdog/2.19 from the "Unknown" category is an RSS aggregator [1] that should hopefully get fixed by the Python update mentioned above...

[1] http://offog.org/code/rawdog/


I find it ironic that when I run the page through REDbot, I get a 403 "TLS SNI Required."


That is not ironic, that is the point of the page: identify broken clients.


Perhaps you missed that redbot.org is now one of the sites using SNI? That means that the tool can no longer connect to itself: https://redbot.org/?uri=https%3A%2F%2Fredbot.org


I was wondering how long it would take someone to notice that...


Why would you host a blog on a https url at all?


Why not? Why shouldn't everything be encrypted? Yes, you're probably only leaking the specific page you visit, but the principle stands.


I like the idea of information that is public (eg. a blog, as opposed to your activity in your favorite webapp) being easily cached.

You might appreciate your ISP's web proxy cache if you live in New Zealand, for example.

It would be nice if everything magically worked in this case, but I don't think we're really there yet. And I'll admit there's a downside - SSL also buys you data integrity (eg. was the blog post modified in transit?) and you don't get that here.

Still, I'm not convinced about SSL-by-default for general public information websites (with an obvious exception for sensitive health or political sites, etc).

I also don't buy the "protect your readers' privacy" angle. To me, that's "provide your readers an illusion of privacy", since they still have to trust $random_website to not leak the logs they are able to gather. Readers who want identity privacy should use Tor or similar, and not be misled by https-everywhere.


It'd be nice if we had a signing mechanism; it could reuse most of the mechanics of SSL encryption (same certs, same algorithms) and it'd provide tamper-proof communication without hindering caching. And it could even be done statically, without requiring more server resources per request (e.g. as a step during static site generation).
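As a rough illustration of the static variant: sign each page once at build time and serve the detached signature next to it. This sketch uses the Python cryptography package; the key and file names are hypothetical, and how clients would fetch and verify the signature is the unsolved part:

  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import padding

  # Hypothetical build step: sign the generated page with the site's key
  with open("site.key", "rb") as f:
      key = serialization.load_pem_private_key(f.read(), password=None)

  page = open("public/index.html", "rb").read()
  signature = key.sign(page, padding.PKCS1v15(), hashes.SHA256())

  # Served alongside the page; caches can store both verbatim
  open("public/index.html.sig", "wb").write(signature)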


Because the blog admin area should be on https anyway and it's an opportunity to play with https.


To protect your readers' privacy.


It only protects the specific page they're reading. The domain goes in cleartext (that's the whole point of SNI). Kind of a weak protection.


It's really a wash, since if you don't have SNI, the IP address uniquely identifies the site the user is going to, and the IP address is on every packet but is never encrypted.



