HTTPS Watch (httpswatch.com)
136 points by kingkilr on Jan 18, 2015 | 56 comments



The current ratings seem too simplistic and strict. I think a better rating system would be:

1. None. Not listening on https.

2. Bad. Invalid cert or broken cipher suites.

3. Ok. Valid cert and good cipher suites, but no redirection to https.

4. Good. Http redirects to https.

5. Great. Redirects to https and sets HSTS header.

6. Amazing. In browsers' HSTS preload lists.

It may make sense to change the criteria as sites improve, but that list seems sane today. I'd also recommend using letter grades (A+, A, B, C, D, F), but that might cause confusion with SSL Labs[1].

1. https://www.ssllabs.com/ssltest/
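
For concreteness, here is a rough sketch of how these tiers could be checked automatically, assuming the Python `requests` library; the hostname is just an example, and the "Amazing" tier would additionally need an offline copy of the browser preload lists:

    import requests

    def grade(host):
        try:
            # requests verifies the certificate chain and hostname by default
            https = requests.get("https://" + host, timeout=10)
        except requests.exceptions.SSLError:
            return "Bad"    # invalid cert (handshake failures surface here too)
        except requests.exceptions.RequestException:
            return "None"   # nothing listening on 443

        try:
            http = requests.get("http://" + host, timeout=10)
            redirected = http.url.startswith("https://")
        except requests.exceptions.RequestException:
            redirected = False

        if not redirected:
            return "Ok"     # valid cert, but plain HTTP is not redirected
        if "Strict-Transport-Security" not in https.headers:
            return "Good"   # redirects, but no HSTS header
        return "Great"      # "Amazing" needs a check against the preload list

    print(grade("example.com"))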


There is much more to HTTPS than just ciphers and HSTS; I would personally use the following rating:

1. None - No HTTPS support, an invalid certificate, broken or vulnerable ciphers or protocols (POODLE, SSLv2, etc.), or cookies not set as 'Secure'.

2. Poor - Valid certificate, but weak or anonymous cipher suites or non-standard ciphers; site serves mixed content; or certificate issues such as SHA-1/MD5 signatures, low-rated CAs, lack of revocation checking, etc.

3. Good - Fully validated cert and chain, including revocation checks; only secure cipher suites with forward secrecy; HTTP-to-HTTPS redirection or HSTS; all cookies set as 'Secure'; no mixed content.
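
As a rough illustration of the certificate and cipher parts of this rating, here is a standard-library-only sketch that prints what a client actually negotiates with a host; the hostname is an example, and checks for revocation, mixed content, and cookie flags would need additional requests:

    import socket
    import ssl

    host = "example.com"  # illustrative

    ctx = ssl.create_default_context()  # validates the chain and hostname
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("protocol:", tls.version())   # e.g. 'TLSv1.2'
            print("cipher:  ", tls.cipher())    # (name, protocol, secret bits)
            cert = tls.getpeercert()
            print("subject: ", cert.get("subject"))
            print("expires: ", cert.get("notAfter"))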


I think the main problem is mixed content. Wikipedia has excellent HTTPS support, but would be listed as mediocre because it isn't the default (I've heard this is due to it needing to be accessible in countries like China). The New York Times (and many other sites) is unusable over HTTPS but gets the same classification.


So, depending on the application and use case, it may be acceptable not to accept connections on port 80 at all. I still think setting the HSTS header is important, but the redirect isn't always.


Redirecting to HTTPS is heavily encouraged by the HSTS spec.[1] Clients are just as easily man-in-the-middled if you refuse connections on port 80: the middleman can connect to your HTTPS site itself and use it to send valid-looking (though maliciously modified) HTTP responses to the client.

There's only one way to ensure clients aren't MitM'ed on first connect: Go to https://hstspreload.appspot.com/ and submit your site to Chrome's HSTS preload list. Firefox slurps Chrome's list from time to time. The lists are shipped with browser updates, so the whole process takes months.

1. http://tools.ietf.org/html/rfc6797#section-7.2
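
As a sketch of the preload route (assuming the `requests` library; the 18-week minimum max-age is an assumption based on the submission form's historical requirements, which hstspreload.appspot.com defines authoritatively), checking whether a site's HSTS header even qualifies for submission looks roughly like this:

    import requests

    def looks_preloadable(host, min_age=10886400):  # 18 weeks, an assumed threshold
        resp = requests.get("https://" + host, timeout=10)
        hsts = resp.headers.get("Strict-Transport-Security", "")
        directives = [d.strip().lower() for d in hsts.split(";") if d.strip()]
        max_age = 0
        for d in directives:
            if d.startswith("max-age="):
                max_age = int(d.split("=", 1)[1].strip('"'))
        return (max_age >= min_age
                and "includesubdomains" in directives
                and "preload" in directives)

    print(looks_preloadable("example.com"))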


I was looking forward to a smartwatch that somehow made use of https. Now I feel like an idiot.


I would love to see a list of financial institutions included. I checked www.bankofamerica.com and secure.bankofamerica.com on SSL Labs and found both to have identical (B-grade) security.


I think this is a really good idea. I mean, today to most people the measure of whether or not a site is “secure” is just whether or not the lock icon displays when they’re browsing.

An actual “public shaming” of sites with bad security is probably all that’s effective at this point.


You can install the Calomel addon in Firefox and see how secure those certificates are. https://addons.mozilla.org/nl/firefox/addon/calomel-ssl-vali...


Is there a search engine which returns only results which themselves use https?


I'm curious why this lists so few of the Alexa top 10, such as Google, Yahoo!, Facebook, Twitter, and others. The first two are mega-sites where most likely only the root domain would count, but social sites constitute a lot of communication. (Even better would be to say whether app connections are secure, such as knowing whether Snapchat connections are over TLS or not, though that's probably out of scope.)



Chur bro. Pretty quick turnaround there

Note that westpac's sensitive area is on a separate subdomain sec.westpac.co.nz and does enforce http->https redirect - www.westpac.co.nz is just static marketing stuff.

Also for TSB, you have 'The HTTPS site redirects to HTTP.' but this doesn't seem to really be the case in a web browser.


Yeah, it looks like the script doesn't handle the bare-domain to www. redirects. I've fixed them all up (hopefully) and a few scores have changed.

It's only checking the main sites, not the separate internet banking login pages atm.


Oh, there was a bug in the script; I've merged gutworth's latest code that fixes it (and the weird green tick when the TCP connection just gets reset).

Page has been updated :)


I always find it slightly weird, when reading Snowden-related articles and looking at the NSA PDFs on Der Spiegel, that they don't use HTTPS (and even actively, permanently redirect to HTTP).


I would also like to recommend my friend, who runs a similar product (I have no affiliation):

http://sslswitch.com/


How are you not affiliated with your friend?


Normally when people say "I have no affiliation," it is inferred that they mean they are not part of, invested in, or a member of that company.

Kind of like how I have a friend who works for Coca-Cola, but I'm not affiliated with them.


>"If a verified TLS connection cannot be established or no page can be loaded over TLS, the site is given the Bad rating."

So, bad = none.


Where is the line item for "prevents downgrade of HTTPS connections to vulnerable protocols"?


For someone who doesn't get it, why do you need https on websites that just show you some text?


Mostly mobile, from a risk perspective. If you connect to other people's WiFi networks, or even have your own configured to auto-connect, it's possible to use devices like the WiFi Pineapple (and others) to man-in-the-middle (MitM) your connection. If you aren't using a VPN (and let's face it, nobody really does), then a bad guy doing a MitM could inject JavaScript of their choice into the HTTP responses of major sites, using them as a gateway to execute exploit code within the context of your browser. Many modern exploits don't rely on writing to disk and instead remain resident in memory, so AV doesn't really save you. If everyone on the web supported only HTTPS, the risk of an evil MitM compromising your device while on WiFi goes down fairly significantly, so long as you don't do something stupid like installing a rogue root CA or clicking through browser crypto warnings.

It also helps mitigate things like rogue advertisement injection by hotels, ISPs, etc.


Because no one has any business knowing what content you're accessing. That should be a secret between yourself and the website you're on.


It guarantees that the text you are seeing really comes from that website, that it hasn’t been tampered with by an actor between you and the server.

This means that Internet service providers and others between you and the website cannot insert ads in web pages. It also means that if you’re reading a newspaper online (for example), you can be sure that the articles you read really are from that newspaper, that all the articles are there and that they were not modified.

It also prevents URL- or content-based censorship, since a man-in-the-middle cannot know what URLs you visit or what content you received from a website.

It does not prevent domain name or IP-based censorship, since the man-in-the-middle can know these, and it is also sometimes possible for a man-in-the-middle to know what page you are visiting by examining the length of the URL and content and comparing them with public information available on the website.


> It guarantees that the text you are seeing really comes from that website, that it hasn’t been tampered with by an actor between you and the server.

That's what subresource integrity is for. (http://www.w3.org/TR/SRI/) Links with subresource integrity include the hash of the content to be delivered. Subresource integrity allows caching by content delivery networks and ISPs without allowing them to change the content.
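
For illustration, the integrity value is just a base64-encoded hash of the exact bytes of the resource, placed in the tag by the page author; a minimal sketch in Python (the filename and URL are made up):

    import base64
    import hashlib

    with open("jquery.min.js", "rb") as f:
        digest = hashlib.sha384(f.read()).digest()

    integrity = "sha384-" + base64.b64encode(digest).decode()
    # The resulting attribute goes on the tag that loads the resource, e.g.:
    print('<script src="https://cdn.example.com/jquery.min.js"')
    print('        integrity="%s" crossorigin="anonymous"></script>' % integrity)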

The problem with using HTTPS for general static content is that it breaks caching and CDNs. Because it breaks CDNs, many CDNs (especially Cloudflare) break HTTPS by terminating the HTTPS connection at the CDN. They may or may not encrypt from the CDN to the actual host. This makes big CDNs a central point of attack for snooping purposes.

While this is an unpopular opinion, I consider HTTPS Everywhere a form of security theater. We need really strong security on logins, financial data, and such. We do not need to encrypt newspapers or public cat videos.


That is not what subresource integrity is for.

Transport Layer Security guarantees, among other things, that the content really comes from the server it should come from¹. This means that the content was not manipulated by a man-in-the-middle.

However, it does not guarantee that the content was not manipulated by an attacker with access to the server. If a web application (say Gmail) includes a JavaScript library (say jQuery) served by a content delivery network (say code.jquery.com), it can use subresource integrity to have the browser verify that the library was not manipulated by an attacker.

This prevents the threat model where the content delivery network becomes compromised and an attacker replaces the library by malicious code that sends the private data of users to the attacker.

Subresource integrity can also prevent other attacks, but it complements end-to-end encryption. It does not replace it.

¹ This assumes that the certificate is valid, of course. There are problems with the current certificate authority model, but there are also solutions to these problems.


Yes, we do. HTTPS has the benefit of some confidentiality, meaning only the hostname is visible to an eavesdropper (e.g. your friendly government).


It's trivial to rewrite plaintext on the fly and the number of people in a position to do so is staggering. Like passing a note in class, it's basically about trust. You want to ensure the message hasn't been tampered with by the time it reaches you. HTTPS is one way, even if the recent past has backed it into a tiny corner (RIP SSLv2, SSLv3, TLSv1.0 and a long list of cipher suites).


For the reasons given by others, plus many news websites have login functionality for commenting, or for subscriptions etc.

That login info, or at least the cookies it produces, should not be sent unencrypted.


Is it protocol at this point to always redirect from HTTP to HTTPS? Is there an RFC for that?


It's part of the HSTS spec that a server receiving a request over HTTP should redirect to HTTPS.

I assume the logic is that, since it's best practice for any site with HTTPS to also use HSTS, HTTPS sites should not be available over HTTP except as a redirect to the secure version of the page.
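
A minimal sketch of that behaviour using only the Python standard library: the port-80 listener does nothing but redirect. Per RFC 6797 the HSTS header is ignored when received over plain HTTP, so it only belongs on the HTTPS responses (not shown here); binding port 80 also normally needs elevated privileges.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "example.com").split(":")[0]
            self.send_response(301)
            self.send_header("Location", "https://" + host + self.path)
            self.end_headers()

        do_HEAD = do_GET

    HTTPServer(("", 80), RedirectToHTTPS).serve_forever()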


Not yet, as there was a bit of an argument in the HTTP/2 WG about that.

I may suggest a SHOULD NOT again, however.


Forcing an HTTP to HTTPS redirect is really bad behaviour.


It's an incredibly common pattern. What's bad about it?


It makes your site inaccessible to people who don't have TLS stacks. I can write and carefully audit an HTTP/1.0 client in a weekend; auditing even the bare minimum of code needed to speak TLS would take months or years.

"HTTPS Everywhere!" means "Heartbleed-like bugs everywhere".


So I should compromise my users' security for the incredibly obscure use case of people who want to browse with their own hand-made tools?

There are several readily available TLS stacks for embedded systems (CyaSSL, PolarSSL, etc.) and plenty for other platforms (Open/Libre/BoringSSL, NSS, etc.), so 'people without a working TLS stack' is not a real-world use case you need to take into account.


The argument is that someone may not be able to audit a TLS stack, not that someone may not be able to use a TLS stack.

Requiring TLS forces people to include a TLS stack, which enlarges the trusted computing base a lot. Security depends on many things, but the size of the trusted computing base is an important factor.


> "HTTPS Everywhere!" means "Heartbleed-like bugs everywhere".

That's a silly argument. Here is how I would rephrase it: don't use a computer, because computer software always has bugs. Millions of servers run on all variants of Linux distros, and every month we find a dozen security vulnerabilities. Even standards have bugs and unresolved items.


> Millions of servers run on all variants of Linux distros, and every month we find a dozen security vulnerabilities.

Most of which only apply if you have local untrusted users or are running particular software.

You can give up if you like, but I still believe it's possible to set up secure systems.


Well, your site is inaccessible to people who don't have TCP/IP stacks too.

Try writing one of those from scratch and auditing it. (No, seriously, actually do try, break out Rust, do some proofs; more secure code would be awesome.)


FreeBSD's IP stack is about 1/4 the size of OpenSSL. (And a hell of a lot easier to read, too.)

EDIT: If you exclude SCTP, the remaining IP/UDP/TCP/ICMP parts are 1/8th of the size of OpenSSL.


> It makes your site inaccessible to people who don't have TLS stacks. I can write and carefully audit an HTTP/1.0 client in a weekend; auditing even the bare minimum the code needed to speak TLS would take months or years.

What about the HTML 5 parsing, the CSS parsing and layout, and the JavaScript virtual machine? I doubt you could write and carefully audit all that in a weekend.

Most sites are already inaccessible to people who don't have modern web browsers, which all include a TLS stack.


In addition to your point, being able to use, for example, Wireshark to debug HTTP communications is often handy. HTTPS makes that challenging.

But specifically about your point, when one audits an HTTP/1.0 client, should one also audit the TCP/IP stack in the kernel? I don't think most researchers/engineers would, and would (for sake of practicality) instead trust the underlying systems. Eventually, TLS will be thrown into that "underlying systems" bin. That time is not likely at hand; there are still too many shortcomings of TLS and its surrounding infrastructure. As evidenced at least by the OP. But when the time does come, writing an HTTPS client in a weekend will be just as practical, since you'll trust the software libraries/kernel to handle TLS. Just as you trust your libraries/kernel to handle TCP/IP today.


I'm curious, do you think that we should use secure communication only when needed and use plain http (or whatever) otherwise? Am I misunderstanding your point?


Compatibility maybe?


Why?


Healthcare.gov being an example to the rest... go figure.


Shouldn't we be checking the pages that should actually be secure? E.g., Ubuntu is listed as Bad; why not check their login page, https://login.launchpad.net/, or launchpad.net? Perhaps once https://letsencrypt.org/ becomes available it will be worth the extra effort to encrypt everything. In the interim it's most likely a waste of funds, especially for projects that operate on donations.

Edit: I was surprised to see the WSJ listed as Bad. Checking their login form, something that should be encrypted, the post goes to... https://id.wsj.com, a secure page. I won't go through the entire list, but I expect most of the sites in this list have a similar configuration.
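
A rough standard-library sketch of that manual check, fetching a page and flagging any form whose action isn't an absolute HTTPS URL (the URL is just an example, and a relative action simply inherits the page's scheme):

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class FormActions(HTMLParser):
        def __init__(self):
            super().__init__()
            self.actions = []

        def handle_starttag(self, tag, attrs):
            if tag == "form":
                self.actions.append(dict(attrs).get("action", ""))

    page = urlopen("http://www.wsj.com/", timeout=10).read().decode("utf-8", "replace")
    parser = FormActions()
    parser.feed(page)
    for action in parser.actions:
        print("OK  " if action.startswith("https://") else "WARN", action)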


Just encrypting the login page or a form action does not work.

Think about, say, browsing on some public WiFi network (airport, cafe, etc.), but it turns out it's actually a rogue access point. Or there's a MITM at the ISP or somewhere. If you hit an unsecured page, I can rewrite the links to be insecure, so now instead of going to https://login.launchpad.net, you actually go to an HTTP page that I proxy to the real page, so you probably don't notice the difference and I can steal your details.

Same with the form - I can rewrite the form action to regular HTTP and seamlessly send it back to the HTTPS once I have stolen your details.

If you have anything that requires security, the entire domain needs to be HTTPS.

Then, of course, there is also the risk of session hijacking.


You're right for pages with links to login pages. Fortunately, in this case Ubuntu does not appear to link to login.launchpad.net anywhere on their main website as far as I can tell. I'm sure it's linked somewhere, but I was only able to find it via a search. Odd.

In any case, I understand the point of all this: get the big sites using it so that the little ones might adopt it as well. The issue is that adding SSL costs money without adding value to sites without logins. Perhaps letsencrypt.org will make it worthwhile to encrypt those static sites as well; it'd be nice to see hosting providers include this as a default.


Using HTTPS on your login and account management servers but not on other pages of your site that need access to the login (like WSJ's article pages) can leave you vulnerable to session hijacking attacks (like those demonstrated by Firesheep[1] a few years ago against Facebook and Twitter). If you want to be secure, resources that require authentication should always be accessed over SSL, and your session cookies for logged-in sessions should be set as secure (so the browser won't send them over plain HTTP).

[1] http://en.wikipedia.org/wiki/Firesheep
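
As a small illustration of the cookie advice (the cookie name and value are made up), the standard library can show what a Set-Cookie header with the Secure and HttpOnly flags looks like; in a real framework you would set the equivalent flags on its session cookie:

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["session_id"] = "abc123"
    cookie["session_id"]["secure"] = True     # only ever sent over HTTPS
    cookie["session_id"]["httponly"] = True   # not readable from page JavaScript
    cookie["session_id"]["path"] = "/"

    print(cookie.output())  # emits a Set-Cookie header including Secure and HttpOnly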


Session hijacking only works on web pages with sessions. Most of these URLs are static informational pages without sessions, from the several I looked at.


The WSJ specifically requires login to view articles (because they require an active subscription to read anything on their site). Most of the other newspaper sites (at least; not going to go and pick through the whole list) are the same, with different thresholds for when login is required.


I'm not sure what happens on the WSJ site after login; if they are doing it semi-correctly, then it should be a secure page after login. If that's not the case, then yeah, they should improve this. Without picking through them individually, there are a number of pages outside the newspaper category that are strictly informational, without even login forms.


No, everything should be HTTPS, and these sites fail that. HTTP SHOULD NOT be carried on the bare wire.

Don't be surprised to see port 80 formally deprecated soon.



