There's something about their favicon being the default green lock ( https://https.cio.gov/assets/favicon.ico ) that unsettles me. It feels like a social engineering trick.
That's an interesting point. I'll be straight, it's lifted right from https://istlsfastyet.com. And cio.gov is in the HSTS preload list [1] so (once the list makes it into stable channels) the chances of the domain being downgraded to plaintext are pretty low. But I had not thought of that angle. Hmm.
I wasn't calling it a social engineering trick, more that it just felt like one. The average person wouldn't second-guess the icon. But those of us who believe in HTTPSAllTheThings question anything out of the ordinary, and that little padlock shouldn't appear in the tab.
As I said, it just felt weird, sort of the same feeling you get when you go to Apple or YouTube and there's a warning on the lock icon. You just want to hit the back button almost instantly fearing something dodgy is happening.
From what I was reading, the removal of the favicon in Safari was more just a UI redesign decision to remove "clutter". Personally I don't find a 16x16 icon too intrusive, but hey, whatever floats their boat. I was hoping that it was as you said, and was to prevent maliciously designed favicons from tricking users on plaintext sites (where the protocol had been stripped by the UI) into thinking they were on a secure site.
I don't use Safari, so I don't know how they render their address bar.
Hey gov, how about promoting a transparent decentralized system for certificate signing that doesn't require paying a vig every year to a corporation that can easily be leaned on by not-so-well-meaning authorities?
> how about promoting a transparent decentralized system for certificate signing
Honest question: with a decentralized system for certificate signing, what would be the trust root?
The current system has the browser makers as the root of trust; this trust is delegated to a set of certificate authorities through the list of root certificates which comes with the browser; these certificate authorities then delegate to their intermediates, which finally certify a server as trusted for a fully-qualified domain name.
Without a root of trust, anyone could say "I'm example.gov, this is my certificate", and present "proof" of that. A trust root is necessary to prevent this.
So far, the only working proposals I've seen for decentralized trust (which don't do away with the human-readability side of Zooko's triangle) are based on distributed proof-of-work systems like Bitcoin's, where the trust root is the distributed "chain". Has anyone ever tried to apply a system like that to certificate signing for TLS?
Achieving real adoption of crypto has historically been made a lot harder than it should be, because too many problems are being solved at the same time.
Separate the problems! It is much easier to find realistic solutions when the requirements are narrower. The remaining needs can be solved later on. Once some usable infrastructure has been established, it might be possible to leverage that infrastructure to add back in some of the missing features.
For HTTPS, a good start would be PHK's suggestion of simply auto-generating self-signed certs in Apache by default, as a replacement for plaintext. Authenticating those certs can happen later.
After keys are everywhere, a potential solution might be to allow both PKI authorities and some sort of web-of-trust (or other methods? blockchain? something new?), exposing the source of trust to the user in a way they can manage.
There is no one-size-fits-all solution to the trust problem, so let the user decide because they know what their requirements are. If I'm browsing to some bank, a well-known PKI root might be a good trust source. If I'm chatting on some local forum, a web-of-trust auth might be better (it's a local forum, so fingerprints can be exchanged manually, friend-to-friend).
There are middle grounds that would still be better than a plaintext internet. Cert pinning, even self-signed certs, would be better.
1. First time to https://example.com - I get a prompt and a UI element telling me it's self-signed. I accept the risk. (This may not be who you think it is! - but it probably is.)
2. Second through nth time to https://example.com - the UI element tells me it's self-signed, but the same cert as before. Whether it's the NSA or the site, it's the same party at the other end.
3. Next visit - does the cert change to a PKI-trusted one? Then great! I get a UI element (no prompt) showing the site is trusted. Does the site get a new self-signed cert? Back to step 1.
I believe this, like the parent notion of default cert generation by Apache installs, is better than no SSL. It's not as good as fully verifiable auth.
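That trust-on-first-use flow can be sketched in a few lines. This is a toy sketch, not any real browser API; the hostnames, the store, and the return values are all made up for illustration:

```python
import hashlib

class TofuStore:
    """Trust-on-first-use: remember the first certificate seen per host
    and warn if it later changes (the SSH known_hosts model)."""

    def __init__(self):
        self.pins = {}  # hostname -> sha256 fingerprint of the cert

    def check(self, hostname: str, cert_der: bytes) -> str:
        fp = hashlib.sha256(cert_der).hexdigest()
        known = self.pins.get(hostname)
        if known is None:
            self.pins[hostname] = fp   # first visit: prompt, then pin
            return "first-use"
        if known == fp:
            return "match"             # same key as last time
        return "CHANGED"               # possible MITM: warn loudly

store = TofuStore()
assert store.check("example.com", b"cert-v1") == "first-use"
assert store.check("example.com", b"cert-v1") == "match"
assert store.check("example.com", b"cert-v2") == "CHANGED"
```

The hard part isn't the bookkeeping, it's the UI: what the browser should actually do on "CHANGED" is exactly the open question in this thread.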
---
And son of a gun, I know this isn't an original idea, but I can't believe it took another post to remind me this is exactly how SSH works. Sure, you can get the server key to your DO host and transfer it to your client, but how many people do that? They accept the fingerprint they see at first, assume it's good, and probably raise an eyebrow if it changes. Don't like it? OK, let's go back to telnet.
If state-level actors wanted to MITM all SSH connections to make you accept the wrong key, they could. But it has a high chance of being caught because, generally, you'll pay attention when the key changes. Plus you can reasonably verify the key out of band. (And one could imagine an extension adding a bit of CA-style key verification, like "was this key generated by my account on my VPS provider?", allowing you to manually verify one key and then be set for the rest.)
Whereas if you do this on HTTP: (a) you're constantly making a lot more "first time" connections, and (b) you've got no real way to know when a key changed legitimately. Users would quickly grow accustomed to key-change warnings and ignore them. Or you'd see banners on sites like "ignore the key warning; I reinstalled my blog and the key changed". Attackers could even inject such a banner.
If you want to manually verify every cert, you can already do that today: just go and add certs to your browser!
Unauthed HTTPS-by-default just adds complexity and a false sense of security and isn't worth pushing out on the public.
I think the problem is that HTTPS carries the connotation of security; renaming this proposal would make it obvious to users what the expectations are.
Let's change the name to something like httpe: "e" for encrypted, rather than "s" for secure.
Let's change the behavior of httpe to automatically accept the first key it sees for a domain.
Browsers would have the option of uploading their lists of domains and keys to their makers (Mozilla, Google, etc.), which could then collate them and push updates back to the browser.
This data could be used to spot where MITM attacks are taking place.
This is the first time I've heard of HTTPe and I love it! However, I'd like to sit on it for a bit. Let's Encrypt should come out later this year. I want to give it a chance.
Encryption without authentication defeats the majority of threat vectors that allow people to look at your private communication (authentication here means knowing that key K belongs to some organization, not authenticated encryption). If, by default, everything was encrypted, the internet would be a much better place. Once encryption is in place, active monitoring becomes easy to detect: even if only a small number of people actually verify that the remote party is who they say they are, the attack will still be caught.
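What unauthenticated encryption buys you can be sketched with a toy finite-field Diffie-Hellman exchange. Deliberately tiny parameters, purely illustrative (real DH uses 2048-bit-plus groups), and note that nothing here establishes *who* you share the secret with:

```python
import secrets

p, g = 23, 5                       # toy group; real DH uses huge safe primes

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)                   # sent over the wire in the clear
B = pow(g, b, p)                   # sent over the wire in the clear

# Both sides derive the same secret; a passive wiretap seeing only
# p, g, A, B cannot (for real-sized parameters) recover it.
assert pow(B, a, p) == pow(A, b, p)
```

A passive splitter is defeated; an active attacker is a different story, as the replies below this comment argue.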
Is this meant to be a general statement or just a response to some comment?
One thing I do not like about the popular "strong" encryption solutions is that they are tied to relatively "weak" authentication solutions. Instead of two programs that each do one thing, we are instructed to use one program that does two things.
I would prefer that encryption and authentication were viewed as distinct programs. If desired, they can be used together. Sometimes we may not wish to rely on the hope of an "encrypted channel"; instead we might just want to send an encrypted blob over an untrusted channel (i.e. the internet).
Obviously it makes sense to send your encrypted blob to the correct destination, but that does not mean you _must_ use encryption to verify the destination is the correct one; it is an option, but not the only one.
For example, it is possible to do the authentication part via some old-fashioned method that does not require the internet.
Sure, but at least it means simple optical splitters can no longer read the contents of your web traffic. That's arguably not worth a whole lot, and may trick people into feeling safer when they shouldn't. But if nothing else changed (no browser indications, etc), wouldn't using DH on all HTTP be strictly better than nothing? Theoretically. In practice, I suppose entities just replace their splitters with a retransmitter and we're back to zero.
I have a secret. I want to share this secret with only my best friend. I do this by telling the secret to everybody I meet, whether they look like my friend or not. Eventually I meet my friend, and tell them too.
This is confidentiality without authenticity. It is an incoherent idea.
It's incoherent at a high level, and it stays incoherent as you delve deeper into the theory. For instance, systems that lack authentication tend to lose confidentiality to error oracles.
Without authentication your connection is susceptible to undetectable man-in-the-middle attacks that DHE does nothing to prevent. That they're separable is superficially true, but not interesting, as encryption alone doesn't stop people from reading your traffic, which is the whole point.
This argument comes up so regularly, one might speculate that some people are trying to keep the internet in plaintext[1].
Once again: this kind of encryption is a replacement for plaintext, which is the only thing it should be compared to. Of course you can MITM it, but that's not something that is easily done in bulk.
Simple encryption raises the cost of an attack from "trivial wiretaps, DPI optional" to the time, money, and effort required to do a targeted MitM attack. Additionally, while it is generally impossible to detect wiretaps, MitM can leak information that betrays the presence of an attack.
Remember, this isn't intended to stop all types of attacks. It is simply a very easy to implement feature that lets you replace plaintext with something resistant (not proof) to eavesdropping in general, and proof against some types of bulk surveillance.
Note: I haven't said anything about presenting this type of non-authenticated communication to the user as "secure".
It doesn't raise the bar high enough to make the people who are currently snarfing internet traffic wholesale bat an eye.
It doesn't matter how secure the phone line is when you have no idea who you're actually talking to. Especially when there are people with money, means and access to make sure that you're always talking to them.
To follow up on what apendleton has said - I have been involved in implementing standard protocols (e.g. IKEv2) that involve DHE to set up an encrypted connection. The first step in all of these is ALWAYS to verify that the Diffie-Hellman value you got from the other side is actually from who you want to talk to, otherwise it is trivial to run a MITM.
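How trivial that MITM is can be shown with a toy finite-field Diffie-Hellman (tiny parameters, illustrative only; no real protocol's message formats are modeled here):

```python
import secrets

p, g = 23, 5                        # toy group, for demonstration only

a = secrets.randbelow(p - 2) + 1    # Alice
b = secrets.randbelow(p - 2) + 1    # Bob
m = secrets.randbelow(p - 2) + 1    # Mallory, sitting in the middle

A, B, M = pow(g, a, p), pow(g, b, p), pow(g, m, p)

# Mallory intercepts A and B in transit and substitutes her own value M.
key_alice   = pow(M, a, p)   # Alice believes this is a secret with Bob
key_bob     = pow(M, b, p)   # Bob believes this is a secret with Alice
key_m_alice = pow(A, m, p)   # Mallory's key shared with Alice
key_m_bob   = pow(B, m, p)   # Mallory's key shared with Bob

# Mallory now holds both keys and can decrypt, read, and re-encrypt
# every message, invisibly to either endpoint.
assert key_alice == key_m_alice and key_bob == key_m_bob
```

Without some out-of-band check that A really came from the intended peer, neither side can tell this happened, which is why those protocols verify the DH value first.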
Right. Because active MITM attacks -- at least if limited to cases where the attacker doesn't know how / if the connection is authenticated -- carry zero risk of alerting those being attacked.
An untargeted attacker cannot know that there is no authentication. Dragnetting connections where they don't recognize the authentication scheme therefore risks detection.
That risk to the attacker is not present when observing plaintext connections.
I'm not safe from muggers because I have eyes in the back of my head to see them trying to sneak up on me. I'm (largely) safe because someone else is likely to see them (or catch them on camera) and get them caught.
One of the ways to detect MitM is authentication. It is a particularly good method, which I recommend whenever possible, but it is not the only method.
Suspicious changes in the environment might be another, as would detecting data that leak past the middleman. Key pinning would be an example of a change in the environment, unexpected changes in important network topology or routing could be another. An example of a leaking middleman might be detection of the real (non-poisoned) "duplicate" packet in a DNS-poisoning packet race.
These methods are nowhere near as good as proper authentication, of course. Reliability of detection is probably very low. The point is that it is better than the case of sending plaintext that anybody can trivially wiretap with zero chance of detection[1].
As always, it is important to define your threat model. If you are defending against any kind of targeted attack, then yes, authentication is a firm requirement. If your threat model is only concerned with avoiding the trivial surveillance that can be done in bulk, anything that forces the opponent to use a more complicated ("expensive") MitM attack is a success.
[1] Modulo any still-very-hypothetical quantum communication methods. We can reevaluate our options if those technologies ever work well enough for common use.
Without a root of trust, anyone could say "I'm example.gov, this is my certificate", and present "proof" of that.
Most SSL certs in the wild are legitimate and trust has already been established. So if you hit FooCo's corporate website and get one certificate, and some other guy hits the same website and they get another, it's likely something fishy is happening. Replace this model of two people with a few million, and you have a pretty decent verification system happening.
Really, what we have now isn't okay. We're training users to click past SSL warnings which, 99.999% of the time, are due to misconfiguration or BS reasons (e.g. expiration).
> So if you hit FooCo's corporate website and get one certificate, and some other guy hits the same website and they get another, it's likely something fishy is happening. Replace this model of two people with a few million, and you have a pretty decent verification system happening.
In this model, the trust root is the verification system which compares your "hit" with other people's "hits". If an attacker can pretend to be the verification system and tell you "everything's fine", the system won't work. Also, it's centralized: the verification system itself becomes the central component.
What's to stop an attacker from poisoning such a system: pretend to be a thousand different people all saying "Yep, the cert with signature 0xBADBADBAD was what I saw"? How does someone rotate certs without breaking all their existing clients?
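The poisoning problem is easy to demonstrate. A toy sketch, with a naive majority vote standing in for the real verification system and made-up fingerprint strings:

```python
from collections import Counter

def consensus_cert(observations):
    """Naive 'what did the crowd see?' check: trust whichever cert
    fingerprint the majority of observers reported."""
    return Counter(observations).most_common(1)[0][0]

# A hundred honest observers all saw the legitimate cert:
honest = ["fp-real"] * 100
assert consensus_cert(honest) == "fp-real"

# A Sybil attacker injecting a thousand fake reports flips the vote:
poisoned = honest + ["fp-evil"] * 1000
assert consensus_cert(poisoned) == "fp-evil"
```

Real perspective-based proposals (Convergence, mentioned below, is one) try to harden this with fixed, semi-trusted notaries rather than an open vote, which trades away some of the decentralization.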
Convergence and DNSChain are two interesting proposals to replace the CA system.
IMO, it's more important to emphasize the idea of secure origins, and HTTPS hits that note. TLS could be swapped out, the CA system could be changed, but what matters is the expectation that connections across the web are expected to be secure by default.
Agreed. It's a shame that I can't just publish a public key as part of a DNS entry for a domain, such that, as long as the DNS chain is secure (DNSSEC), that key can be trusted.
Using something like Namecoin https://en.wikipedia.org/wiki/Namecoin and storing the cert hashes in the blockchain would allow for decentralized verification.
While I certainly don't love the CA system, this isn't where I see the biggest flaws in SSL.
Regardless of who is "trusted", the true critical points are the private key and the cryptographic primitives. Fixing the CA model addresses neither.
Symantec's CA never gets to see my private key when I purchase a certificate from them. They don't get to determine the encryption methods I use. All they do is verify my identity (and do a damn good job of it, in my opinion). Server authentication is an important part of SSL, but it's only one part.
The other part, encryption, is able to be undermined in many other ways - ways where we have much more direct proof of gov't surveillance and tampering.
You can perfect server authentication all you want, but if there are holes in the encryption, that's all for naught.
The encryption is really quite good; it's actually very rare to have a problem with the crypto. Compromise of the trust network is much more common and is really the problem with today's crypto systems. Read a bit about Superfish for a good newsworthy example of abuse of trust.
When reading the https gov doc -- it's very important to remember that the government runs its own CA.
Superfish was certainly a huge abuse of the trust network. However, if we look at other recent SSL vulnerabilities - Heartbleed, POODLE, FREAK - most of these deal with flaws in the encryption (some directly, though most via side-channel or other clever attacks).
We also know that the NSA is saving encrypted messages for mass decryption in the future. Techniques like perfect forward secrecy (PFS) can help eliminate this issue. I think the fact that nearly 100% of servers still allowed SSL 3 up until the POODLE attack a few months ago highlights how poor most SSL configurations are. Unlike the trust network, which has infrequent but serious breaches, the encryption side seems to be poorly implemented almost universally.
However, unlike Superfish, which we know affected thousands, a lot of these other SSL vulnerabilities are usually just PoCs...
Also, it aggravates me that the cost of implementing SSL on a single site is too high because of the CA signing cartel. In my experience this is the main reason many sites eschew the matter altogether. We're creating a digital security ghetto this way, and it is completely unnecessary.
This is one area where government has the capability to lead the way at a very low cost, which is all I was trying to say, perhaps with a bit too much snark.
In my experience the biggest hurdles to implementing SSL are: advertising networks, dependencies (such as internal systems or APIs) that are incompatible, policy of providers (such as Akamai implementing fees far above the actual cost of SSL), or performance issues.
Cost does not seem to be a major issue for most organizations (or even independent websites). Sure, Symantec and other high-profile CAs charge an arm and a leg, but that's all marketing. There are affordable and trusted certs out there for <$10 if you just need a single-domain cert, and <$100 for wildcard and multi-domain certs.
The best alternative I've heard so far is actually a patch: We keep the CAs, but they publish signed, append-only lists containing every cert they've issued.
No cert is trusted if it isn't in such a list, and the lists are mirrored (and cross-signed) by a bunch of trustworthy authorities, hashes written into the blockchain, and so on.
Then any CA who issues a cert for www.google.com will also have to publish irrevocable proof that they done fucked up; and when their CA status is revoked it's possible to grandfather in certs they've already issued to avoid breaking lots of sites.
Of course, it won't solve the problem of CAs unjustifiably charging $$$ for certain types of cert - or the problem of the dubious authentication done for affordable domain-validated certs.
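The append-only property is the heart of that patch. A minimal sketch, with a simple hash chain standing in for the real signed Merkle tree and cross-signing machinery:

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class CertLog:
    """Hash-chained, append-only issuance log: every cert a CA issues
    leaves a permanent, tamper-evident record (a toy sketch of the
    Certificate Transparency idea)."""

    def __init__(self):
        self.entries = []  # list of (cert, chained head hash)

    def append(self, cert: str) -> str:
        prev = self.entries[-1][1] if self.entries else ""
        head = h((prev + cert).encode())  # head commits to all history
        self.entries.append((cert, head))
        return head                       # published as the signed log head

    def contains(self, cert: str) -> bool:
        return any(c == cert for c, _ in self.entries)

log = CertLog()
log.append("cert: www.example.com")
log.append("cert: www.google.com")  # a mis-issuance is now on the record
assert log.contains("cert: www.google.com")
```

Since each published head commits to the entire history, a CA that tries to quietly drop or rewrite an entry changes every subsequent head, and mirrors comparing heads would notice.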
> Then any CA who issues a cert for www.google.com will also have to publish irrevocable proof that they done fucked up; and when their CA status is revoked it's possible to grandfather in certs they've already issued to avoid breaking lots of sites.
I wish. Even in cases today where we have conclusive proof of CAs willfully issuing fraudulent certificates, they get a pass. TrustWave still has a valid CA cert even after having been caught with their hand in the cookie jar.
Oddly, DigiNotar got the death sentence. The only lesson I can see in this is that incompetence is inexcusable, but willfully subverting the CA system is A-OK.
The Fed employs 2.7M people. I don't really understand the people who imagine it's one great big well-oiled machine with everybody in agreement, on the same page of the same playbook.
Yes, the park rangers at Yellowstone and meteorologists at the National Weather Service were all consulted on those decisions and deserve equal blame as the people up top and the torturers themselves.
Whether or not they were consulted, they surely must know that their paymasters torture. If my boss had a torture chamber I wouldn't be showing up to work every day, but I suppose you might.
If my company released a report stating that management tortured people and wasn't going to do anything about it, then yeah, I would quit. How is this controversial?
If people don't draw the line at torture and murder where else is it drawn, genocide?
So … take the people at the State Department who funded Tor to help dissidents. You've talked with them and confirmed that they're in lock-step agreement with a very different part of the government which they have no influence over, right?
The HTTPS-only folks mean well, and I support it as a stopgap solution, but it is useful only in that it can probably be implemented more quickly than IPSec-everywhere (or, if IPSec proves to be unsuitable, then some successor standard with the same goal of encrypting all traffic).
The latter, however, should be preferred as a permanent solution. The Web is by no means the only part of the Internet that needs to be secured.
IPSEC and HTTPS work at different levels. With IPSEC, your computer can be sure it's talking to the computer at 198.51.100.1 and not to any other computer. With HTTPS, your browser can be sure it's talking to www.example.gov and not to any other web server. Both work equally well against passive eavesdroppers, but they authenticate different things and so will work differently against active attackers.
Oh, no, it's useful for what it's designed for: to protect communication between two computers. If I have IPSEC protecting the connection between my desktop and my internal DNS server, and between my desktop and my database server, I know that connection to my database server is protected by IPSEC.
It doesn't protect the mapping between a computer name and a IP address, but that's not its job.
I think it's more like IPsec hasn't happened because it's a huge hairball of complexity which requires kernel-level configuration on every client and full end-to-end support for two new IP protocols and a UDP key management service.
In contrast, TLS requires using a new client library and works just about everywhere. All of the work people have been doing to switch to strong crypto everywhere and deploy things like perfect forward secrecy? Imagine how quickly that'd have happened if it required everyone to install a kernel update.
Until IPsec becomes easier to use (something as simple as checking that a socket is actually secure used to be shamefully under-documented) the best way to think of it is as a potential replacement for proprietary VPN protocols. Anything which cares about security will still need TLS over that so most people will simply use only TLS.
It's worth noting that HTTP(S) has broadened outside of the web, in the sense of web browsers. Most native mobile apps, and lots of APIs used by desktop apps, etc., all use HTTP to get their job done. Definitely doesn't cover everything, but I think it's fair to say that HTTP is basically the protocol of our lives right now.
Outstanding! This is wonderful news. I've heard people ask whether a given site or protocol really needs to be secure but I hold the opposite opinion: everything should be encrypted unless there's a specific and compelling reason otherwise. I'm thrilled that major organizations are coming to the same conclusion.
Great. When that is done, then do email, which is frankly a far more intractable problem in general (and a field where there is almost zero innovation or improvement, thanks to Microsoft (Exchange/Outlook/Outlook.com), Apple (Mail), and Google (Gmail)).
Of the communication that's nominally protected by TLS, a lot of it can probably be trivially broken by an active MITM attack that either (1) prevents a connection from being upgraded to TLS, in which case it typically remains plaintext (see STARTTLS), or (2) establishes the connection under a self-signed certificate, which will suffice since many email systems do not perform certificate path validation.
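The stripping attack in (1) is almost embarrassingly simple. A sketch with a toy EHLO response (made-up hostnames, not a real SMTP session):

```python
def strip_starttls(ehlo_response: str) -> str:
    """What an active MITM does to opportunistic TLS: delete the
    STARTTLS capability from the server's EHLO response so the
    client never tries to upgrade the connection."""
    lines = [line for line in ehlo_response.splitlines()
             if "STARTTLS" not in line]
    return "\n".join(lines)

server_says = "250-mail.example.com\n250-STARTTLS\n250 SIZE 35882577"
client_sees = strip_starttls(server_says)
assert "STARTTLS" not in client_sees
# Seeing no STARTTLS offer, the client falls back to plaintext SMTP.
```

Because opportunistic TLS treats "no TLS offered" as a normal condition rather than an error, the downgrade is silent unless the client enforces a policy out of band.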
Mm, you mean RFC 822. Well, that's the downside of internet standards: X.400 had a lot of nice security features, but the cheap and cheerful SMTP email standards won that war a long time ago.
I used to do X.400 and X.500 for a Large Telco back in the day.
It's a trade-off between ease of use and cost - and a tech example of Gresham's Law.
One thing that amazes me is that I get a huge volume of spam emails claiming to be from financial institutions. I use Gmail for two reasons: (1) deliverability to mailing lists I need to be on, and (2) other mail programs don't filter that junk out.
In 2015 it should be impossible to send a fake email from chase.com.
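Mechanisms for exactly this do exist: SPF lets a domain publish, in a DNS TXT record, which servers may send mail for it, and DMARC tells receivers what to do on failure. A minimal sketch of reading an SPF policy (hypothetical record, simplified far beyond what RFC 7208 actually requires):

```python
def spf_policy(record: str) -> str:
    """Return the policy of the 'all' mechanism in an SPF TXT record.
    Highly simplified: real SPF evaluation involves DNS lookups,
    includes, macros, and per-mechanism qualifiers."""
    if not record.startswith("v=spf1"):
        return "none"  # no SPF record published at all
    for term in record.split()[1:]:
        if term in ("-all", "~all", "?all", "+all", "all"):
            return {"-all": "reject", "~all": "softfail",
                    "?all": "neutral"}.get(term, "allow-anyone")
    return "neutral"

# A strict policy (hypothetical record, not chase.com's actual one):
assert spf_policy("v=spf1 include:_spf.example.com -all") == "reject"
assert spf_policy("v=spf1 ~all") == "softfail"
```

The catch is the last line of the sketch: many domains publish "~all" or nothing at all, and receivers are reluctant to hard-reject, so fakes keep getting through.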
But, unfortunately, even mail from a properly configured mail server on a properly protected domain will still end up in Gmail users' spam boxes by default. Domain and server reputation systems are a bear to work with.
What strikes me is that their config is inaccessible to old clients (no SSLv3, no SHA-1 cert). Honest question: does the government think it's OK to break website access for ancient clients (IE6 on XP SP2)? Or will they be forced to enable legacy ciphers when the first citizens complain?
I'm genuinely interested, because TLS modernity is a hard problem to solve. If it were easy, google.com wouldn't still be using RC4-MD5 and a SHA-1 cert...
With the number of attacks against SSLv3 and/or RC4, I think it's a good idea. Especially given that it's a government website. With a target that large, it's a safe bet that, at any given moment, someone is trying something against that site. And really, who's going to access cio.gov from an XP box? Heh.
Just speaking for myself here, but what makes sense in terms of legacy client support for e.g. taxes or health care may be different than what you need for a set of technical resources.
I want to select who my browser should trust. Kinda like how I choose which NTP server to get my time sync from. Make it a popup the first time a browser starts.
An HTTPS-only standard is not going to mitigate MITM attacks, which are invariably the biggest issue regardless of the perpetrator, although this is clearly a step in the right direction.
Proper HTTPS is secure against MITM attacks. HTTPS can be MITMed only if the certs or local systems have been breached, but then the MITM is just a byproduct of a different failure. Absolutely any secure communication protocol has to be honored by both parties to be secure.
But that's impossible. If I'm logging your keystrokes, game over. Unless you add some firmware to do DSA encryption in your brain and become very fast at typing ciphertext.