Perfect Forward Secrecy can block the NSA, but almost no one uses it (computerworld.com)
347 points by LoganCale on June 24, 2013 | 135 comments



Chrome should change the lock icon to something weaker for sessions that don't use ephemeral keys.

This may be the most important article on HN related to the NSA leaks. Fact is, most of us haven't paid enough attention to the details of https.

EDIT: "something weaker" - didn't mention color. People dont need to understand "forward secrecy", the browser just needs to raise the bar for what's considered secure. The goal is to change server-side behavior, not consumer behavior.


People barely pay attention to the lock icon. Why would color-coding the lock make a difference?


I don't think the goal would be getting everyone to pay attention, because if it is, you're absolutely correct.

But if you look at it as a way to spread awareness among web admins, who would be confronted with the question "how come we don't have the bestest type of icon?", it's not so bad.

A related example: I have recently been wondering how widespread DNSSEC is.

I installed the DNSSEC validator chrome plugin, and now (almost) every website on the planet shows me a sad "no DNSSEC" icon. I can't help but feel that part of the low deployment of DNSSEC is because almost nobody, including web admins, gets any feedback about DNSSEC. (I am aware that there are other costs and risks to deployment.)


DNSSEC provides minimal value and adds significant overhead. We're better off the less it's deployed.


If you're like me, and wishing tptacek would elaborate, it turns out he has in the past. For example, this entertaining thread from three years ago:

https://news.ycombinator.com/item?id=1234567

Also, despite my earlier google queries about DNSSEC, everything I was reading was pretty dry. The magic phrase to google for is "DNSSEC sucks"--that gets you the interesting stuff.

I haven't digested it yet, but in those results, this looks pretty interesting from D. J. Bernstein: http://cr.yp.to/talks/2010.12.28/slides.pdf



A video of the talk those slides are from is online: http://vimeo.com/18417770

Worth watching even if you don't care much about DNSSEC; Bernstein is a good speaker.


I don't understand why your criticism of DNSSEC focuses on TLS vs. DNSSEC.

DNS is a scalable global key-value store, and DNSSEC allows owners of namespaces to sign their own key pairs and delegate to sub-namespaces. If you can make the case that _that_ is not valuable, or how that is accomplished by other means, I am curious. But TLS vs. DNSSEC doesn't cut it.

Yes, DNSSEC/DANE can make TLS work better. It provides a straightforward way to pin an entire zone tree while allowing site owner modifications. Your example of Ghadaffi and Bit.ly is silly, because the status quo is that governments already have access to CA keys able to issue for any server. Restricting to a single zone can only improve that.

I don't know how you can claim NSEC3 is a grotesque hack without noting that it is equally grotesque to pretend that your DNS records are private. If I wanted to collect the BoA zone, I would set up recursive nameservers on the coffee shops' wifi within a quarter mile of their headquarters and grep the logs. Incidentally, this would work even with DNSCurve deployed.


Anyone who would advocate for DANE in 2013 is looking at a situation where users assume that governments have compromised the PKI that drives the most important encryption on the Internet, and saying to themselves, "let's bake that problem into the network architecturally; let's make it so that the NSA doesn't even need to compromise a CA, because they'll own the global root of all CAs".

Regarding NSEC3: most people reading this thread don't know what it is, so I'll explain it really quickly and let them decide, because it is so obviously a stupid hack that I don't think I need to argue against it too much:

Just a couple years ago --- more than a decade after work on DNSSEC was started --- somebody realized that Bank of America would not in fact be OK with a DNS design where every single one of their hostnames, for both public and internal systems, was public. But that's a problem, because DNSSEC wants to authenticate negative responses; if there's no JABBERWOCKY.BANKOFAMERICA.COM, DNSSEC wants that cryptographically proven. But their design to do that breaks if there are nonpublic names, because DNSSEC chains the names together as a way to authenticate denial.

So here's what they came up with: domain names are hashed (crappily) as if for a 1997 Unix password file, and the authenticated denial messages refer to hash ranges instead of literal hostnames. Meaning you can only discover all of Bank of America's hostnames if you are as technologically sophisticated as a circa-1997 password cracker.
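A rough sketch of that hashing scheme in Python, as I read RFC 5155 (real NSEC3 records encode the digest in base32hex rather than hex, but the idea is the same):

  import hashlib

  def nsec3_hash(name, salt, iterations):
      # RFC 5155: hash the wire-format name with a zone-chosen salt,
      # then re-hash `iterations` more times. An iterated, salted
      # SHA-1 -- i.e., roughly a 1997-era password hash.
      wire = b""
      for label in name.rstrip(".").lower().split("."):
          wire += bytes([len(label)]) + label.encode("ascii")
      wire += b"\x00"  # terminating root label
      digest = hashlib.sha1(wire + salt).digest()
      for _ in range(iterations):
          digest = hashlib.sha1(digest + salt).digest()
      return digest.hex()

  # Anyone can brute-force candidate names offline against the published hashes:
  print(nsec3_hash("jabberwocky.bankofamerica.com", bytes.fromhex("aabbccdd"), 10))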


You didn't actually respond to what I said.

I don't know why you're talking about the NSA. And if you are, anyway, do you really think they can't ask Verisign for an arbitrary cert already? And won't the same tools that modern network programs use to protect against these attacks (certificate pinning, convergence, etc.) work equally well when applied to DNSSEC KSKs?

NSEC3 is good enough. I gave you a trivial way of acquiring jabberwocky.bankofamerica.com, even with DNSCurve deployed. If BOA wants network accessible services on a network accessible namespace to be private, they should make a zone cut at internal.bankofamerica.com and restrict access to the delegated NS (which can be the same machine). The easiest way to do this is to run a VPN, which they already do.

I am asserting that having a global key-value store, where namespace owners can sign their own entries and make delegations, is a valuable system to have in place. That is what DNSSEC is. Unless you can argue against that, you are simply beating on a straw man.


Why spend the money adopting DNSSEC if it's at best a marginal setback to Internet security?

The "trivial way of acquiring jabberwocky.bankofamerica.com" relies on somehow being in the same coffee shop as an employee who accesses the site using public DNS. Whereas DNSSEC just goes right ahead and publishes the information.

As for "making zone cuts" --- they haven't. Very few networks have. DNSSEC advocates just like to pretend that everyone has either architected their DNS zones they way they would, or that they'll all relabel all their hosts to fit that way.

I don't know why I should care about a "global key value store where namespace owners can sign their own entries and make delegations". We can have lots of those. Why use a crappy one bolted onto DNS?


It's not a setback at all. You can still use the existing CA system. In fact, you can just not set the secure bit and ignore its existence.

> As for "making zone cuts" --- they haven't. Very few networks have. DNSSEC advocates just like to pretend that everyone has either architected their DNS zones the way they would, or that they'll all relabel all their hosts to fit that way.

Very few networks have ridiculous PHB requirements for public servers defined on public namespaces that are somehow slightly more difficult to find than normal (and once the cat / jabberwocky is out of the bag and published to a mailing list somewhere, that obscurity gives no advantage whatsoever).

Those that do have reasonable options for satisfying said PHBs, first with NSEC3 and then with zone cuts and private networks (which actually does solve the problem, instead of just pretending to solve it).

> I don't know why I should care about a "global key value store where namespace owners can sign their own entries and make delegations". We can have lots of those. Why use a crappy one bolted onto DNS?

What alternatives? To my knowledge, there is no credible alternative system to DNS. Why put up with a DNS system that is not end to end verified when you don't have to?


I think you are painting an exaggerated picture of DNSSEC deployment. Off the top of my head: every gov/mil domain is signed, plus debian, archlinux, fedora, paypal, freebsd, icann, ietf, isc...


I don't know--it really seems pretty rare in the sites I use day-to-day. Going over a list of popular websites (plus a few of interest to hackernews users), I don't think any of hackernews, google, reddit, github, wikipedia, facebook, yahoo, amazon, twitter, tumblr, bing, or ebay are using DNSSEC. My bank is not, nor are any of the other banks I thought of off the top of my head.

Of the sites you mention, paypal is the only one that I use on any sort of a recurring basis. But it's a little weird to a DNSSEC newbie like me, so maybe you can explain. The verisign tool shows that paypal.com is using DNSSEC, but it doesn't appear that www.paypal.com itself is secured. Is the chrome plugin giving me misleading information? Is this how things are supposed to be?

And a lot of US government sites have DNSSEC waivers. The first two examples I tried:

cia.gov appears to not use DNSSEC: http://dnssec-debugger.verisignlabs.com/www.cia.gov

nsa.gov appears to not use DNSSEC: http://dnssec-debugger.verisignlabs.com/www.nsa.gov

(Thank goodness the verisign tool itself IS using DNSSEC.)

A long list of US .gov sites with and without working DNSSEC: http://fedv6-deployment.antd.nist.gov/cgi-bin/generate-gov

(Also: nist.gov is using DNSSEC)


I am certainly not a DNSSEC expert so take my explanation with a grain of salt.

Short answer: www.paypal.com is a CNAME for an Akamai box. That CNAME record is secure:

  $ unbound-host -v -t cname www.paypal.com
  www.paypal.com has CNAME record www.paypal.com.akadns.net. (secure)

Everything falls apart when your resolver finishes the rest of the required lookups to get an IP address from akadns/akamaiedge.

As I have been experimenting/researching dnssec I have often found it is useful to use verisign's tool AND sandia's dnsviz[1] tool. For the moment forget what I said about paypal's cname and compare sandia's[2] and verisign's[3] results for www.paypal.com and see if you can spot the issue.

You are correct that it is a relief that verisign uses DNSSEC, but if I may be so bold, I think you may be wrong about why it is a relief. (I am trying to be helpful; I apologize if that sounds dickish, it is not my intent.) With DNSSEC (just like DNS) everything flows from the root. Verisign manages the .com TLD, so if they did not sign .com you could not verify anyhost.com. The same thing can be said for DISA, GSA and PIR for the .mil, .gov and .org zones respectively.

As far as NIST goes, they are the second least surprising DNSSEC adopter in the federal government. Because of NIST's standards function within the government, they are normally at the forefront of things like this. Moreover, DNSSEC is heavily reliant on accurate time, and NIST is the home of the government's truechimer. However, if you go down this rabbit hole you start to have some serious chicken-and-egg problems.

[1] http://dnsviz.net/ (NB: sandia's is so slow that it's painful)

[2] http://dnsviz.net/d/www.paypal.com/dnssec/

[3] http://dnssec-debugger.verisignlabs.com/www.paypal.com


Since you mentioned banks, I just need to chime in:

I recently (May, June) surveyed 100 European banks (in Germany, Switzerland, France, Italy, Austria), roughly selected as the 20 specimens with the greatest total assets per country.

None of them have deployed DNSSEC. Most use SSL, many use EV certificates, a few go so far as to include HSTS. DNSSEC? Zero.


Do you have the dataset anywhere?


The data will be part of my bachelor's thesis, to be published in July. Well, pseudo-published, as such theses tend to end up.

Ping me at ycombinator at y dot ly for your choice of either raw (collected using Qualys' SSL Server Test[1] and plain old DiG) or aggregate data.

[1] http://ssllabs.com/ssltest/analyze.html


So in terms of traffic, basically 0% of the internet.


In terms of traffic, no doubt. But that seems like the most ridiculous metric to use. More importantly, these are off the top of my head, and my interests are not your average internet user's interests (e.g. debian). Comcast is using DNSSEC. Google's DNS (8.8.8.8) will do DNSSEC if requested in the query.


Part of the problem is that the lock icon is just grey. The lock icon should be pulsing red, solid yellow or green. This would work the same way Google's anti-black-hat-SEO advisories work. Force people into security by calling attention to bad practices and reducing the prominence in search of any site that violates this.

Clicking on the pulsing red icon would pop up a window that tells people exactly what they should be careful about.

Clicking on the certificate should present a much clearer dialog for the layman.


> pulsing red

Yeah that's great when I'm trying to focus on an article I'm reading. This is why security people aren't allowed to touch UI.

All joking aside, I agree that this issue should be surfaced, and I think people would notice the security icon if it was in constant flux.


Because they can learn.


That's what the people selling EV certs said. But EV certs have been a failure too, and they had a much louder UI.

And at least EV certs were backed by a concept that people could understand. It's unreasonable to expect people outside of tech to understand "forward secrecy".


It is at least possible that EV certs have seen limited uptake because people understand too much about them. After all, they are the same CA "sure you can trust us" scam that we've always had, except with more rent-seeking. Some users and/or site operators might have been influenced e.g. by EV certs' unsavory association with Comodo.


It is extremely unlikely that mass-market adoption of EV certificates has anything to do with the inside baseball of CA politics.


So strike my last sentence. EV certs are still more onerous for site operators in terms of price and process than "normal" certs. They don't offer a credible increase in security or decrease in liability for anyone. Those facts suffice to explain the paucity of their mass-market adoption.


Back then people didn't know the extent and depth of NSA's surveillance, so they weren't motivated to learn. The stakes are higher now.


In retrospect, the timing of Google introducing PFS is interesting. It's possible they introduced it because they knew about or suspected the surveillance but had no proof or were unable to talk about it. Of course, maybe they're just security conscious, but it's interesting that Adam Langley so explicitly referred to the recording-for-later-decryption scenario.


The USA has one of the least intrusive, least overbroad surveillance regimes in the world. Google introduced PFS because governments in places like India and Russia openly desire to intercept all communications.


The average customer does not, but security professionals do.

I know that in many government scenarios (think local governments), for example, policy guidance drives purchasing and other behavior. So if compliance with some security standard requires a "green lock", that lock matters.

That will drive behavior -- banks, insurance companies, and other businesses will be disqualified for all sorts of RFPs.


Make it blink or jiggle a bit. Problem of winning attention solved.


Maybe, but in the long run the positive change would be by changing defaults everywhere, and then eventually throwing warnings for non-ephemeral SSL in the browser.


How do you explain that to people though?

It's simple enough to say "if you don't see the green padlock, don't enter your credit card number" but then what if the person sees the weaker symbol?

You could say "go ahead", in which case they will go ahead all of the time, or you could say "only go ahead under these additional circumstances"; what would those circumstances be?


This is a great idea, but I wonder how far you can go with it. Could you also change the lock icon when larger, more secure keys are used? Could you perhaps have the lock icon change color along a continuum from red to green (or better, black to white, or something accessible to colorblind people) depending on how the site is set up?


People are not going to pay attention to a color-coded internet-weather-report sort of icon.

But maybe if you switched the iconography/expectation: use a 'person' icon, instead of the lock, for vanilla SSL. It would indicate you have verified the server's identity -- which is not nothing -- but you can reserve the lock icon for forward-secure servers.


The problem of getting users who don't care to care is probably unsolvable without destroying usability. I think providing information nicely to people who care about it is the next best thing to shoot for.


I don't think it's about users not caring at all. It's more about the limit to which they can add yet another nuanced technology/security concern to their life and work and then keep up with it as times change. [1]

We've got a question of granularity [2] and rather than make a professional value judgment in the average user's best interests [3], the proposal is trying to represent the entire scale and then expect the user to educate themselves about what the scale means, compare that to their environment needs, etc.

[1] How long ago was it that we were simply railing against insufficient key sizes or improper certificate validation? Anyone who 'learned' up to that day's wisdom would have to continually update their knowledge along with the rest of us to be able to make informed decisions on which colors were 'safe' for which uses.

[2] 'how secure' on a scale from 'unsecure' to 'SSL' to 'perfect forward secrecy'

[3] drawing a line on the scale, to the left of which is not secure and to the right of which is secure. And, when 'less-secure' options present valuable information that isn't necessarily 'security' (e.g. identity validation), using a clear method of distinguishing that.


Probably it should be a rolling standard, based on current best-practices.


I was thinking it would be up to the browser vendor to decide how to rate things, but if there were a more formal system that would be great I think.


Oh, I'd just meant "standard" in the "something against which you measure things" sense, and had been thinking per-browser, but I agree that some industry-wide coordination would be fantastic, if done right. Probably an industry-wide lower bound, with individual browsers allowed to require more security for a given marking, would be best.


I'm assuming you're being sarcastic, but don't you think it's right to punish sites that don't use the best encryption strategies?

I'd support three stages: 1) no encryption, 2) some/partial encryption, 3) "perfect" encryption.

For the partial encryption, the browser would provide an explanation of exactly how it could be compromised. For the case where the encrypted version does not have perfect forward secrecy, it could explain that if the server is compromised, this session can also be decrypted.

I don't think browsers should cover the banks' asses here.

Edit (since I can't reply deeper): I thought you were being sarcastic since I consider a gradient to be a really horrible idea. I'd want something where the browser tells me something concrete. I don't want to wonder whether the new browser version changed the gradients slightly or a site improved its encryption strategy. Clear symbols and language are a must.


I'm not being sarcastic, I think this is a good idea. I probably came off wrong because half-way through writing my comment I actually changed my mind, I guess my tone leaked through though.


I vote for having a rope tied in a knot as the icon: still sort of secure, just less so.


Or just throw away the lock entirely, given that Google is in bed with the NSA.


Google is the only major player even employing perfect forward secrecy; hardly a case of a company trying to get extra-cozy with the NSA.


Perfect forward secrecy would prevent some attacks against Google, as does certificate pinning in Chrome. But if the NSA somehow got Google's secret key, they could still MITM an SSL connection. It just means that they actually need Google's secret key, instead of using a CA under their control. (And they need this key before they can MITM any connection.)

And the entire secure-connection stuff is broken if the NSA just obtains a FISA warrant for your GMail account. (Or compromises the Google servers directly.)


> they actually need Google's secret key, instead of using a CA under their control

Note that Google runs their own CA (signed by Equifax's Root CA) and, thus, issues their own certificates.

The way things are going, I see no reason why the NSA could not, with a FISA warrant, simply order Google to:

1. provide them with a copy of Google's CA's private key; or

2. issue the NSA a certificate valid for *.{every-google-domain}.com.

Maybe they can.

Maybe they have.


>But if the NSA somehow got Google's secret key, they could still MITM an SSL connection.

Security isn't my field at all, but I'd gotten the impression from HN that PFS is meant to protect against just this scenario. Am I mistaken?


What PFS will protect against in this scenario is decrypting SSL sessions whose ciphertexts were captured before the attacker had access to the private key. It doesn't protect against (any) man-in-the-middle attacks.


Before or after they have access to the private key, so long as an active MITM is not performed. In essence, PFS makes it such that no matter what information you have about the server's configuration, passive sniffing of data is not enough to compromise a connection's confidentiality.


How about a big red nose next to the padlock?


I really hate the name "Perfect Forward Secrecy". There's no guarantee in ECDHE that the connections cannot be decrypted at some future time. All that's being implied is that the key changes per connection.

Sure, it's 'better' than RSA with a long-lived private key, but there could be advances that would breach ECDH and make none of this 'perfect'. Such as: http://www.wired.com/politics/security/commentary/securityma... If the NIST-supplied curves used by the TLS standard had a backdoor, none of this would be perfect.


What you hate is the word "perfect". "Forward secrecy", though, has a technical meaning: a compromise in the long-lived key material for the cryptosystem doesn't enable an adversary to go back and decrypt earlier saved sessions.

The link you've provided doesn't implicate Diffie Hellman; it's in an obscure, virtually unused NIST standard that uses elliptic curves to generate random numbers.

There are other things not to like about the NIST curves, but the methodology used to generate them is documented (I suppose you could generate your own using the same method) and is based on the literature on ECC from the late '90s.


Yes, it's the word 'perfect' that really bothers me. I think it gives the general public the wrong impression.


Perhaps they should call it "pretty good"?


I just call it "forward secrecy". I think "complete" would be a better word than "perfect", but I think that neither are necessary.


Cryptographers seem to have a thing for misleading names. A big one is "provable security", which really means "is at least as hard as another, older and more famous problem that we are all guessing is hard to solve efficiently." In any case, "provable security" alone is often too difficult to prove, so more assumptions are added to make the proof go through. Saying you've obtained provable security under some assumptions doesn't sound very good though, so the assumptions are called "models" instead.


Maybe I'm reading it wrong, but I have the impression that the problem here is the notion of forward secrecy.

Forward secrecy only guarantees that, given the master keys, Eve still can't derive the session key. It says nothing about the scheme used to create the session key (which may not be safe). It only states that even if Eve gets the private key, that won't give her any information on the session key. It's the "won't leak any information" part that makes it "perfect".

"Perfect" is used in the same way in "perfect information-theoretic security" (http://en.wikipedia.org/wiki/Information-theoretic_security), where the information-theoretic security is perfect if the ciphertext doesn't leak any information about the plaintext.

The use of the word "perfect" seems consistent to me. The problem is actually understanding what we are talking about and what part of it is perfect.


So, the actual scheme is authenticated Diffie-Hellman key agreement. Its security is based on the discrete logarithm problem---it does not provide perfect information-theoretical security.

"Forward secrecy" is an accurate term. "Perfect" is redundant and potentially misleading.


I agree with the article. Perfect forward secrecy (given master private key, still can't figure out derived session keys) is a wonderful property.

However, it's a bit over-reaching to say it can "block the NSA". It won't stop them from backdoor-ing your hardware/software (keyloggers, compromised random numbers, etc). It won't stop them from storing the encrypted communication until (for example) quantum computers make it possible to decrypt it (assuming RSA).


In the forward secrecy modes of TLS, RSA isn't used to encrypt anything.


Since we're assuming quantum computers, isn't Diffie-Hellman vulnerable as well? A bit of wikipediaing suggests DH is equivalent to a "discrete logarithm problem", which it also states has an "efficient" (polynomial time) quantum algorithm.


Sure, it won't "block the NSA" from discovering anything about you. But it raises the stakes.

It is much easier for the NSA to backdoor/crack/make deals with a few telecoms, collect all of the encrypted data, and then decrypt it at their leisure (via stealing RSA keys, getting them via legal channels, cracking RSA, taking advantage of compromised random numbers, or the like), than it is for them to backdoor your hardware (much larger surface of possible discovery), install keyloggers (these need physical access and are much more expensive, so they only make sense in a targeted fashion), etc.

The whole point of this brouhaha is that the NSA appears to be hoovering up massive amounts of data, which it keeps and which even fairly junior employees at government contractors have access to and could abuse. I don't think that anyone (or most people, anyhow) objects to specific, targeted wiretaps of targets that you have appropriate warrants for. It's the incredibly broad-based collection of all records, without targeting, for later data mining, coupled with the first-amendment-destroying demands of secrecy about discussing these practices, that has people up in arms.

The NSA has admitted that they retain encrypted data for longer than unencrypted, as they cannot as easily discard it as irrelevant. If you use a protocol with perfect forward secrecy, it substantially undermines the value of that collection.

Edit to add:

> It won't stop them from storing the encrypted communication until (for example) quantum computers make it possible to decrypt it (assuming RSA).

Actually, it will stop them from using quantum computers to crack RSA and thus be able to read past communications. That's the whole point of perfect forward secrecy; even if you can crack the RSA layer (or obtain the private RSA keys), which is used for bootstrapping the authenticated encrypted channel, perfect forward secrecy generates the symmetric key in a way in which it is never actually communicated over that channel.

Perfect forward secrecy protocols, such as Diffie-Hellman key exchange, work by both sides coming up with two values, one of which is secret and one of which is passed over the channel, such that both of them can compute the same secret value by combining the private and public values. Even if the "public" values are intercepted, for instance by cracking RSA, you cannot determine the secret values, because you do not have the secret portions of the keys that each side generated.

The reason to use RSA in addition to key exchange is that Diffie-Hellman key exchange is not authenticated; someone could MITM you, doing both sides of the Diffie-Hellman protocol, and intercept your communication that way. RSA allows for signing of certificates which can be used to bootstrap the authentication process.

So, if RSA gets cracked in the future, then the NSA could at that point MITM all of your future connections. But they could not decrypt your prior conversations, which had been encrypted using a private key generated via a key-exchange protocol that provides perfect forward secrecy.
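To make the shape of that concrete, here's a toy sketch of unauthenticated Diffie-Hellman in Python (the prime is laughably small and chosen only for illustration; real deployments use large vetted groups or elliptic curves):

  import secrets

  p, g = 0xFFFFFFFFFFFFFFC5, 5      # toy group; do NOT use sizes like this

  a = secrets.randbelow(p - 2) + 2  # my secret value, never sent
  b = secrets.randbelow(p - 2) + 2  # your secret value, never sent
  A = pow(g, a, p)                  # public value, goes over the wire
  B = pow(g, b, p)                  # public value, goes over the wire

  # Each side combines its own secret with the other's public value:
  assert pow(B, a, p) == pow(A, b, p)

  # An eavesdropper sees only p, g, A and B; recovering the shared secret
  # from those is the discrete log problem. Throw a and b away after the
  # session, and even a stolen RSA key can't bring the session back.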

edit 2:

Hmm. After further investigation, it looks like quantum computers would be able to break the common "perfect forward secrecy" algorithms as well. So yes, in the case of RSA being broken due to quantum computing taking off, it's likely that DH and other discrete-log-based perfect forward secrecy algorithms will be broken as well.

Sigh. Why can't we have any NP-complete problems that work cryptographically? While even that wouldn't guarantee security against quantum algorithms (after all, it may still be the case that P == NP), there's a fairly strong belief that P != NP and that NP-complete problems lie outside the class that quantum computers can solve in polynomial time, so they would give us a better security margin against quantum attacks.


Cryptography needs problems that are hard on average, not in the worst case. This doesn't fit well with P/NP/NP-complete, which are all about the worst-case difficulty of a problem.

For example, see the Merkle-Hellman knapsack cryptosystem [1], which is based on the subset sum problem --- an NP-complete problem! Nevertheless, it is broken. As you can see, using an NP-complete problem as the computational basis for a cryptosystem is not a silver bullet.

[1] https://en.wikipedia.org/wiki/Merkle-Hellman


Apparently it is very hard to prove results in average-case complexity theory. We don't even know whether computational problems that are hard on average (assuming P != NP) exist, let alone how to actually design a cryptosystem out of one.


There actually are a few "average-case complete" problems. The field was started with this 1986 paper: http://epubs.siam.org/doi/abs/10.1137/0215020

It's true that we don't seem to know how to build a cryptosystem out of them. Part of it is that a cryptosystem needs something stronger than average-case complete, a property more like every instance being hard (possibly excluding trivially easy instances that can be detected and filtered out).


Average-case complexity is not a topic I know a lot about, and I can't easily get the paper itself, but what the abstract seems to say is that for some particular NP-complete problem, if an efficient average-case algorithm exists for that problem, then efficient average-case algorithms exist for all NP-complete problems. There is no mention of showing that such an algorithm does not exist.

My reference is Impagliazzo's 1995 survey paper "A Personal View of Average-Case Complexity" where he lays out five possible worlds based on open problems in complexity theory. He calls the world where P!=NP but all NP problems are easy on average "Heuristica." This is still an open problem as far as I know.

On your second point I can say a bit more. Cryptosystems in general do not need all instances to be hard. With RSA for example, there are all kinds of weird attacks, like the modulus n can easily be factored if phi(n) or phi(phi(n)) are products of small primes. There is no need to filter out these cases because the probability of them occurring is so small. There are cryptosystems where an arbitrary instance of the problem is as hard as the average case. This was a notable selling point for lattice cryptosystems when they were first invented.

In the RSA case, the probability of these corner-case attacks decreases with the size of the instance. So for current parameters the probability is near zero. However, even if you only had a scheme where the probability didn't diminish with the instance size, and there were attacks that you can't check for, you could still securely send messages. Say the probability that your key generation algorithm gives you an instance that is hard is 50% (or any constant). You can securely send messages with arbitrarily small probability of having your message decrypted by using multiple keys: break up your message with a secret sharing scheme (Shamir's, for example) and encrypt each message share with a different key. The attacker, able to decrypt only a constant number of message shares, will not be able to decrypt your message.
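The simplest version of that trick is n-of-n XOR sharing rather than Shamir's scheme; a sketch, to show why holding only some of the shares is useless:

  import os

  def split(message, n):
      # n-of-n XOR secret sharing: n-1 random pads, plus one share that
      # XORs back to the message. Any fewer than n shares are uniformly
      # random, so they reveal nothing.
      shares = [os.urandom(len(message)) for _ in range(n - 1)]
      last = bytes(message)
      for share in shares:
          last = bytes(x ^ y for x, y in zip(last, share))
      return shares + [last]

  def join(shares):
      out = bytes(len(shares[0]))  # all-zero buffer
      for share in shares:
          out = bytes(x ^ y for x, y in zip(out, share))
      return out

  msg = b"attack at dawn"
  assert join(split(msg, 5)) == msg  # then encrypt each share under a different key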


>backdoor-ing your hardware/software

Yeah, gotta build your own computer and operating system from scratch!

>(assuming RSA)

A great reason to look at elliptic curve crypto.


I'll guess you've seen this[1], but if you want perfect security, you really do have to build from scratch. Of course, I doubt that any single human is really capable of building a truly-secure system.

I guess we have to live with a situation where some faith is required if we want to use computers.

[1] http://cm.bell-labs.com/who/ken/trust.html


..Or stop the company you're talking to from shipping off information to a 3rd party after it's been received.


Yes, in the same way that an excellent front-door lock doesn't stop someone from smashing your front window.

For the purposes of encrypting traffic to prevent eavesdropping, one of the recipients sending the unencrypted data to a third-party is not an attack vector of consideration. In your scenario the only viable measure is to send no data at all (aka "don't play").


Two notes:

(1) While Google uses forward-security on their HTTPS connections, I've not yet seen evidence either way as to whether the SMTP-TLS connections (relaying email to other domains) use forward-security. (The report at checktls.com mentions only the cipher "RC4-SHA", not the key-exchange mechanism.)

(2) If one side of the connection chooses to retain its session keys, or chooses session keys in a poor/predictable manner, or leaks information about session keys via a side-channel (either by mistake or intent), then the forward-security could be destroyed, and in a very subtle/undetectable way.

(This could be a cheap and sly way to grant visibility to a third-party: adopt forward-security outwardly, but ensure your session keys only look random to people who don't know the bug/secret-seed-shared-with-the-third-party.)


Quick log check. For the last month at least, all SMTP-TLS that my servers received from Google was using ECDHE-RSA-RC4-SHA. Same for Yahoo mail.

Hotmail, by contrast, never uses it, nor is STARTTLS offered by their MXes.


Thanks for asking this -- apparently there is no really easy tool for this (I was thinking openssl s_client could do this automagically -- apparently not) -- however, see:

http://superuser.com/questions/109213/is-there-a-tool-that-c...

And now I realized I hadn't tuned the SSL ciphers on one of my servers to not accept (among other things) NULL ciphers... at least if that bash script works -- I'll have to double-check.
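For SMTP specifically, a quick sketch with Python's smtplib will at least tell you whether a given MX will negotiate a forward-secret suite (the hostname and cipher string are just examples):

  import smtplib
  import ssl

  ctx = ssl.create_default_context()
  ctx.set_ciphers("ECDHE:DHE")    # offer only forward-secret key exchanges
  ctx.check_hostname = False      # MX certs frequently don't match the MX name
  ctx.verify_mode = ssl.CERT_NONE

  with smtplib.SMTP("aspmx.l.google.com", 25, timeout=10) as smtp:
      smtp.starttls(context=ctx)  # raises ssl.SSLError if no PFS suite is shared
      print(smtp.sock.cipher())   # e.g. ('ECDHE-RSA-RC4-SHA', 'TLSv1', 128)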


You should check out https://ssllabs.com. They accept a URL and run SSL checks to determine whether your server is configured in the most secure manner possible.

It is also interesting to run the check against online banking sites to see first-hand how seriously they take security in practice. I was surprised to see that some banks score pretty poorly in this regard.


I don't find an option at ssllabs.com to check SMTP (the topic of this thread). Am I overlooking it?


No, you're not. It would be nice if they had one, though.


I'm not sure I'm brave enough to force forward secrecy in my Exim config just yet -- I can't find an option to log handshakes (unless it is logged as part of the message logs -- which can be kept) -- but I suppose refused messages would show up in the reject log.

Still, I'm not sure if I'm even ready to force SSL at all... for incoming SMTP. Sounds like a good way to break your email infrastructure (and reduce spam ;-).

Essentially, mail transport is pretty much unencrypted -- I see SSL/TLS having potential to help fight spam by forcing some form of authentication (via DNSSEC, CAs, etc.) -- but not really a useful tool for securing email from snooping. For that I would advocate S/MIME and/or GnuPG (GNU Privacy Guard).


Every so often I get a mail server from Google that uses DHE outgoing, but most of the time it is RC4-SHA at 128 bits/128 bits.

There are plenty of other mail servers sending me mail though that are sending it over DHE-RSA-AES256-SHA.

I may go tighten up the ciphers I accept. Maybe if only given the option for PFS ciphers Google's mail servers will use them.


How much SSL traffic is offloaded to dedicated hardware on the server side nowadays? How many different companies are there in that space? How many of them have secret agreements with the NSA (or their respective state intelligence agencies) to retain session keys and dump them out when they receive the right kind of query?

The actual owner of the SSL offload hardware would have a hell of a time even detecting that their keys were being handed out that way - picking out a once-a-day SSL connection that the offload hardware talks to directly, versus one that it passes through in the clear to the systems behind it, is going to be exceptionally difficult.


I think the amount of SSL hardware in the field is closer to zero. Nobody with any clue uses that stuff because it's power hungry, the APIs are awful, the implementations can't be verified, and any such ASIC will look slow in a year, whereas cloud server lifetime is five years. AES-NI killed those guys a long time ago. The only users of them are embedded platforms without the host CPU power to do crypto.


It still gets used sometimes in hardware load balancers (I guess that is "embedded platforms"). Some of those turn out to actually be more like software, though.


Really? I thought a lot (most?) of the big boys used load balancers that include SSL termination along with DoS protection and the load balancing itself. BTW, anyone know how AWS load balancing works in this regard?


As far as I can tell, ELBs are just commodity EC2 nodes, which are actually pretty shitty because they're EBS backed; you can build a better ELB yourself using an S3 backed EC2 node.

Some of the OpenStack people were using hardware load balancers (F5; I think maybe HP OpenCloud?) as an option.


Why does it matter what kind of storage a load balancer has?


EBS is a very unreliable service compared to the rest of the AWS stack; probably the single most unreliable service. (vs. EC2 which overall is fairly reliable, even if individual nodes aren't, and Route53, which has been quite reliable.)

Load balancers are often used as part of an HA system, so building your HA component using the least reliable subsystem is a bad idea. Particularly because there's no need to -- the S3 backed instances work fine for this.


More than you think. ADCs/load balancers like F5 have hardware security modules that offload SSL.

Folks who host PKI often offload key generation. This might be as complex as some sort of appliance, or as simple as a smartcard.


FWIW, news.ycombinator.com provides perfect forward secrecy with ECDHE_RSA. Thanks, Nick!


Of course, there is no way for us to know whether or not yc archives a copy of the session keys... (Not that I think yc does that, but we're still talking about a shared secret -- you need to trust both parties if you want to trust forward secrecy...)


Since this is a forum site (versus a site that might get sensitive data), I hadn't bothered to check to see if it was always on HTTPS, until this post. Glad to see that it's always on.


I don't see how this really helps, though. The NSA can still force a CA to generate certs for any domain they wish to intercept and MITM everyone they care about. They are likely to have the resources for that.


But that requires the greater cost/intrusiveness of an MITM, and could potentially be noticed as an unexpected change in the certificate. See for example the SSL Observatory project:

https://www.eff.org/observatory


In the case where providers/servers are under NSA jurisdiction (or control, in the case of them hacking servers) -- they could also keep a copy of session keys.

But that would force them to intercept at many more points than simply various edge routers (which is what they may or may not be doing now, having (or not having) equipment at ISPs/TelCos).

Essentially, if you use gmail or outlook.com -- you'll just have to trust that no one has forcibly (or covertly) installed backdoored crypto libraries. I do think it is very likely that security agencies (both foreign and domestic) have agents/assets working at large companies like Google and Microsoft -- I don't think it is very likely that they have been able to covertly subvert their infrastructure. But it certainly is possible.


They don't need to do that much. If they control a CA and a few ISPs (especially networks to the outside of the US, since apparently us in the rest of the world are fair game), they can MITM anyone reliably. The only defence is checking fingerprints, but few will bother.


Note that despite this headline, the article does correctly state that Google now uses Perfect Forward Secrecy.


Who cares? The NSA gets its data directly from Google. You need to protect all hops between you and the one you communicate with, not only the hops between you and Google.


I'm assuming you read the article, in which it offers an explanation of how the NSA gets Google data. Are you disputing that very likely explanation in favor of the theory that a PowerPoint slide somehow communicates more technically accurate and viable information?

Obviously, Google could be lying in their most recent denial that disputes such a claim. They are, after all, required to lie if they are directly and materially involved in the program.

On the other hand, were I the business owner of a corporation that held millions of people's information and the NSA approached me for access, I don't think it is at all inconceivable that I would attempt to negotiate not having the NSA on my network or in my machines, so I could retain the ability to truthfully tell my users that the company did not allow direct access to company servers.

In the article, Mr. Horowitz suggests that, given other leaks and information we've had in prior years regarding NSA tapping of the internet, the slurping of company data is happening at the on/off ramps to the networks, where data is coming in/out. This is far more likely than Google shipping off deltas in cron batches. Thus, Google et al. can appear in the PRISM PowerPoint, which just says their data was added to the system. That can mean anything, you know. The PowerPoint presentation is hardly a source of empirical data presenting the technical architecture of the program.

This article is attempting to help educate and achieve protection at all hops, not just the hops between you and Google. Perfect forward secrecy serves just that purpose--by using temporary keys, it removes the ability to decrypt your captured traffic en masse with a single master key.

So, while Google can only do so much in resisting NSLs, perhaps there is a little bit worth appreciating here. Google can't stop the NSA from slurping up traffic. But they can prevent that data from being massively decrypted via a single key.


> Google can't stop the NSA from slurping up traffic. But they can prevent that data from being massively decrypted via a single key.

I don't want to sound snarky, but, so what? If you want privacy you still have to encrypt data end-to-end, not end-to-[google|microsoft|yahoo|facebook]-to-end.

And, anyway, [google|microsoft|yahoo|facebook] should not know about your stuff, just like the NSA or anyone else who is not the intended recipient.


It's not up to Google to encrypt your e-mail end-to-end, though. That you have to do yourself. This technology exists and is called PGP.

Obviously Gmail's HTML interface can't work with these e-mails, until you get some way to sandbox part of the DOM on the client. Search won't work either.


No, they're not required to lie. AT&T and Verizon never did when asked about NSA surveillance: http://news.cnet.com/8301-13578_3-57589012-38/nsa-surveillan...


It would have been a fantastic effect, and a win for computerworld.com, if their site supported what this article talks about.


It's odd that there seems to be no definitive instructions for nginx or apache for enabling PFS. Given that it's clearly not obvious how to set this up, how many here are inadvertently running non-PFS without knowing it?
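(For the "without knowing it" part, here's a rough sketch to test a server from the outside with Python's ssl module; it just asks whether a handshake succeeds when only ephemeral suites are offered:)

  import socket
  import ssl

  def has_pfs(host):
      ctx = ssl.create_default_context()
      ctx.set_ciphers("ECDHE:DHE")  # OpenSSL cipher-list syntax, PFS suites only
      try:
          with socket.create_connection((host, 443), timeout=5) as sock:
              with ctx.wrap_socket(sock, server_hostname=host) as tls:
                  return tls.cipher()[0]   # negotiated suite, e.g. 'ECDHE-RSA-...'
      except (ssl.SSLError, OSError):
          return None                      # no forward-secret suite accepted

  print(has_pfs("www.google.com"))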


Funny, I think in part it's because few people care. Maybe I didn't have the right keywords, but I just wrote an article about this for nginx/stud, and it got 2 upvotes on HN.

Here's the HN link: https://news.ycombinator.com/item?id=5935988


I've had a question about PFS for a while now: How 'expensive' is it to implement?

First up, let me state for the record that I don't have personal experience with running SSL on a server. But back in the day, generating a public/private key pair for PGP was a semi-expensive operation. Definitely not the kind of thing I could imagine running on a per session basis.

Since PFS requires per-session public/private key pairs, how "costly" is it?

(I understand that the goog uses ECDHE, so I guess I'm asking for a general feel for how quickly and efficient this compares to RSA key generation.)


The article mentions 23% reduction in speed for ECDHE-RSA vs RSA. Another example states between 15% and 27% reduction.
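If you want a feel for the raw keypair cost on your own hardware, a rough micro-benchmark sketch (assuming the Python cryptography package; the ephemeral P-256 keypair is the per-session cost being asked about, with a fresh RSA keypair for contrast):

  import timeit

  from cryptography.hazmat.primitives.asymmetric import ec, rsa

  # Per-session ephemeral keypair, as in ECDHE (cheap):
  t_ec = timeit.timeit(lambda: ec.generate_private_key(ec.SECP256R1()), number=200) / 200
  # A fresh RSA keypair, the PGP-style cost the question remembers (expensive):
  t_rsa = timeit.timeit(lambda: rsa.generate_private_key(65537, 2048), number=5) / 5

  print("P-256 ephemeral keypair: %.3f ms" % (t_ec * 1000))
  print("2048-bit RSA keypair:    %.1f ms" % (t_rsa * 1000))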


Thanks! I knew I should have read the article more carefully.

(The same section of the article also references http://vincent.bernat.im/en/blog/2011-ssl-perfect-forward-se... which answers some of my follow on questions.)


Gmail uses Perfect Forward Secrecy, so what? If the NSA really does have direct access to Google's servers, then PFS will not provide you any extra protection from them. Sure, PFS will make it harder for others to snoop. But the current context for the general population is PRISM and the NSA.

I'm not saying that PFS (ugh, what a name, perfect, really?) isn't valuable, I'm just pointing out that no one should think this is going to make it any harder for the NSA to read/access your gmail account.


The NSA is getting all of the attention, but this applies to all of the world's intelligence services.

Even if you absolutely believe and support what the NSA is doing, you probably have good reason to want to keep any number of governments, foreign competitors, and individuals from knowing what you are doing or (at the least) your financial status.


> you probably have good reason to want to keep any number of governments, foreign competitors, and individuals from knowing what you are doing or (at the least) your financial status.

...and you should therefore ignore PFS in https, and use something that encrypts your data so that only the recipient can read it, and not someone in the middle like google, microsoft or facebook.


Sure, but if Google stores all session keys, as is likely (key phrase: meta-data), then the session key store is a rich and likely target for all the world's intelligence services, and one that is not invulnerable. After all, Google is a big organisation with a large turnover of staff.


I would say that storing session keys is vanishingly unlikely and eliminates the benefit of PFS. Google probably spends millions of dollars a year in PFS additional handshake costs. Google then storing the session keys would be a little like the Dutch building an expensive dike and then knocking a big hole in it.

If you want to store data like "user X searched for Porsche 911 GTS 0-60 statistics, so let's show him some 991 Cabriolet ads the next time we can," you don't need to store the session keys.


The line has always been, "You can't stop a determined hacker with any single solution."

The NSA may be a group of determined hackers.

One technology isn't going to suddenly cure the whole problem. It takes a solid security policy set at and followed by the top levels of a company, and constant vigilance to respond to threats. You're not going to stop anyone from getting in, but you can stop them from getting anything of value, and you can kick them out quickly and quietly.


This article seems to start out with wrong assumptions and therefore ends up presenting dangerously wrong conclusions:

"Suppose, for example, the NSA was recording all HTTPS encrypted traffic to/from joeswebsite.com in January. Then, in February, they learned the private key for joeswebsite.com. Almost always, that lets them decrypt everything from January, February, March and beyond."

We should instead suppose that the NSA has had the private keys the whole time (since it started recording encrypted communication), so every piece of communication that has been recorded can be decrypted even with Perfect Forward Secrecy. I.e., let's assume they got Google's private key immediately when it was last changed; then all communication with Google's servers is decryptable (correct me if I'm wrong).

The NSA doesn't have to brute-force private keys, they just use legal/strong-arm techniques to obtain them, so it doesn't take a lot of computing power and time.


No. The point of Perfect Forward Secrecy is that, even if you know one side's private key and have a log of all communications, you cannot determine what _session-key_ was agreed upon, and thus can't decrypt the rest of the session.

PFS means that even with private keys, you can't read sessions passively. Of course, it can't help with a MITM or taps on unencrypted data.


I guess this is exactly why this is over my head. Maybe you can help me even though it's a little unfair to pick on you :) If one can decrypt the conversation that negotiates the session key and thereby obtain the session key, couldn't one also read the conversation? This must be the whole trick to the ephemeral part...


This is not PFS but a simplified example of how you may not be able to decrypt a conversation with one side's private key:

Server X and User Y are communicating. Some guy Z has X's private key. Z is also passively listening to the communication.

- Y sends its public key to X. X sends its public key.

- Y generates a random number (A), encrypts with X's public key sends to X.

- Both X and Z decrypt the number. Now X, Y and Z all know the number A.

- X generates a random number (B), encrypts with Y's public key, sends to Y.

- Since Z does not have Y's private key, it can't decrypt it. At this point, X and Y know A and B; Z only knows A.

- X and Y use a predetermined algorithm using A and B as inputs to generate a new key. Further communications are encrypted/decrypted with this key.

- Z can't decrypt the communication.
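A runnable sketch of the exchange above, if it helps (using the Python cryptography package; the key sizes are arbitrary, and XOR stands in for the "predetermined algorithm"):

  import os

  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding, rsa

  oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None)

  x_key = rsa.generate_private_key(65537, 2048)   # Z holds a copy of this
  y_key = rsa.generate_private_key(65537, 2048)   # only Y has this

  a = os.urandom(16)                              # Y's random number
  wire1 = x_key.public_key().encrypt(a, oaep)     # Y -> X; Z CAN decrypt this
  b = os.urandom(16)                              # X's random number
  wire2 = y_key.public_key().encrypt(b, oaep)     # X -> Y; Z CANNOT decrypt this

  session_key = bytes(i ^ j for i, j in zip(a, b))  # known to X and Y only
  # Z, holding x_key plus both wire messages, recovers a but never b.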


And to link it back to the outer conversation... Since google is using PFS for gmail, for the NSA to read those conversations, google would either need to (a) give them the email unencrypted directly (cheating) or (b) give them the specific key negotiated for each conversation.

It strikes me that once PFS is in place, google would, in theory, be able to keep everything private except those conversations that a court forced them to give up the keys for.

As long as the rule of law were upheld (i.e., warrants/judicial involvement), it seems to me that this model could work and be generalizable for all web traffic. Maybe it's the way forward...

Happy to have my naiveté corrected :)


Great explanation of the concept! That's really clear.


Excellent explanation! I'm in!


Yes, you are wrong :D If the NSA is not actively man-in-the-middling you, perfect forward secrecy still works.

ECDHE has been in OpenSSL since version 1.0.0, so it's out there, just not used.

Btw, is there any addon for Firefox that shows the “encrypted communication” details like Chrome does?


> Yes, you are wrong :D If the NSA is not actively man-in-the-middling you, perfect forward secrecy still works.

OK, I guess I was reading too much into the (not very enlightening) definition of PFS on Wikipedia and too little of the actual implementation based on Diffie-Hellman, which has the desired properties.

The question that arises is: how feasible is a MITM attack on this phase of session initiation? Can it be kept undetected?


With channel id, it would need to share the state even as laptops move across networks.


They don't have to take an active role though if they just sniff all the traffic, right?


Yes, they do.


Whoops. This wouldn't be the first time I got public/private key schemes muddled in my head.


They won't have access to the temporary keypair that the server generates, and which lives only on the server.


PGP completely lacks perfect forward secrecy. Suppose an eavesdropper is recording your email, even though they can't understand it. If your private key is ever compromised (say by torture or subpoena), then all your previous messages can be immediately decrypted.

> You could say that Bob losing control of his private key was the problem. But with today’s easily-compromised personal computers, this is an all-too-likely occurence. We would really prefer to be able to handle such failures gracefully, and not simply give away the farm.

http://www.cypherpunks.ca/otr/otr-wpes.pdf


It appears this very site is using Perfect Forward Secrecy. http://cloud.lucasjans.com/image/081m1V3c3O1z


Does it need to be DHE or will DH do just as well? What exactly is the difference here?

It still uses a key per connection at the least, and it's still not in the NIST/FIPS 140-2 standards because you can't decode it after the fact, so it breaks auditing (all AFAICT, and I have looked at this stuff a lot).

That said, we're best using AES_GCM, 2048-bit RSA and ECDHE as a matter of course, because at the very least it requires a valid, active MITM to break and can't just be logged and scanned later.


If Google is offering perfect forward secrecy for e-mails, why don't they have the same for Hangouts (OTR with perfect forward secrecy)?


How important is the difference between ECDHE_RSA and DHE_RSA security-wise?

edit: I asked because Opera showed a site's free StartCom certificate as "TLS v1.0 256 bit AES (1024 bit DHE_RSA/SHA)". I checked with Chromium and there it is shown as ECDHE_RSA though. This is confusing...


They both provide PFS, but ECDHE_RSA is _much_ faster.

Do you have TLS 1.1/1.2 enabled in Opera?


What about more use of one-time or ephemeral email addresses? The info we have so far seems to suggest NSA likes to track email addresses. Compare tracking an email address that was only used to send or receive one or a few messages with one that has a long history of use.


Suppose the data captured is not a dump of a database but rather an https log. Doesn't it imply that the NSA has to essentially rebuild the server functionality to make sense of the data?



