Heartbleed should bleed X.509 to death (lorddoig.svbtle.com)
401 points by lorddoig on April 9, 2014 | 147 comments



What you want is Moxie Marlinspike and Trevor Perrin's TACK.

It is already the case today that for Chrome and Firefox users, a compromised CA can't easily hijack connections to Google Mail. Not only that, but any attempt to hijack Google Mail connections in the large will run aground on Chrome and Firefox users, who will not only not accept the rogue certificates, but will also alert Google, which will put a gun to the head of the CA.

The feature that enables this is called certificate pinning. It works well for small numbers of high-profile sites, but requires manual intervention on the part of browser vendors.

TACK pushes certificate pinning out to site operators. It works like HSTS: the first connection to a website is trusted, and that connection loads up state that the browser holds. Subsequent connections check for consistency with the first connection. Dynamic pins, or "tacks", make dragnet surveillance of all sites asymptotically as risky as spoofing Google Mail. The attacker is nearly certain to accidentally catch someone with a tack loaded, and at that point the game is up: the attempt to present an otherwise-valid certificate that violates a tack is a smoking gun, to which Google and Mozilla can respond with their own firepower.
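
Roughly, the bookkeeping is trust-on-first-use, in the spirit of this Python sketch (simplified: the real TACK draft adds signatures, activation periods, and expiry, and all names here are illustrative):

    pins = {}  # hostname -> hash of the TACK signing key seen on first contact

    def check_pin(hostname, tack_key_hash):
        pinned = pins.get(hostname)
        if pinned is None:
            pins[hostname] = tack_key_hash  # first connection: trust and remember
            return "accepted: new pin stored"
        if pinned == tack_key_hash:
            return "accepted: consistent with first connection"
        # An otherwise-valid certificate that contradicts a stored pin is the
        # smoking gun: reject and report it to the browser vendor.
        return "rejected: pin contradicted, report upstream"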

The nice thing about TACK is that it works alongside the CA hierarchy, and even derives some value from it. A tiny fraction of the Internet could adopt TACK and still make life much harder for attackers. The effort required from site operators is small, and the whole system is invisible to end-users.

Fixing the CA hierarchy is a lot less sexy than ground-up rewrites of the whole Internet security model. But the ground-up rewrite is never going to happen, and the incremental fixes are not only doable, but doable by the kinds of generalist developers who are champing at the bit to stick it to the NSA. The biggest security problem on the Internet isn't protocols; it's browser UX.


Not only that, but any attempt to hijack Google Mail connections in the large will run aground on Chrome and Firefox users, who will not only not accept the rogue certificates, but will also alert Google, which will put a gun to the head of the CA.

This is one of those "Things which somebody would probably bring up at an anti-trust meeting if anybody at an anti-trust meeting had the foggiest clue of what was going on", incidentally. (The hypothetical threat is "You give our web properties a better SLA than anyone else in the world gets, or we will use the coincidental fact that a large portion of the world's web traffic runs code under our control to end you.")

It's funny, people (including me) always thought that Google's big swinging Wand of Annihilation was google.com, but now they have at least four of them.


It's not just Google properties; the Strict Transport Security section of the Chromium dev docs [1] lists multiple properties they do this for (for example, Twitter and PayPal), and it appears you can specify your own as well through the command line (and probably elsewhere).

[1]: http://dev.chromium.org/sts


I believe patio meant Google/Mozilla going to the CA and saying 'You duplicated our cert, you better explain or we will stop trusting you in our browsers'. Which would end the CA, of course. As they deserve to.


I took it as an implication that Google properties were getting special treatment by Chrome. I'm not sure how Chrome blacklisting a CA could be construed as anti-trust, even if it essentially killed the CA, because there's plenty of healthy competition in the browser space. They could just switch to Firefox, and not even lose the extra protections they were getting since Firefox pins google property certs as well.


And here I thought the "gun to the head" was more of a "smoking gun" -- as in, hey, that MITM attack you thought you could get away with? Yeah, we noticed.


Why would that be an anti-trust concern? What Ptacek is referring to is a CA compromise being detected. At that moment the CA would, quite correctly, have a gun to its head, as its entire purpose for existing -- its whole business model -- would have evaporated.


"The biggest security problem on the Internet isn't protocols; it's browser UX."

I think it is security UX in general. Anything around certificate issuance/life cycle (SMIME or PGP signed/encrypted mail), PGP key exchanges, etc. That problem has not been solved.


TACK looks cool. I read through the RFC and have some questions ....

At first I thought an active MITM could drop TACK negotiation from ClientHello and wait 30 days until pins expire, but as I read it, I think that should result in a "contradicted" pin.

You could do browser profiling though, and only MITM clients which don't send a TackExtension in the ClientHello, or which behaviourally look like IE, say. I wonder if it would have been better not to indicate that the client supports TACK? (Maybe there are constraints that aren't obvious to me.)

The other thing I'm not sure about is overlapping TACK handling. I don't see what's to prevent an MITM from adding an additional new TACK of their own in the ServerHello, gradually superseding the "valid" TACK. That would take like 60-90 days though.
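
For what it's worth, my reading of the draft is that a pin only stays active for as long as it has been consistently observed, capped at 30 days; that's where these long timelines come from. A sketch of that rule, which may be wrong:

    import datetime

    CAP = datetime.timedelta(days=30)

    def pin_active_until(first_seen, last_seen):
        # The activation period grows with how long the pin has been
        # consistently observed, capped at 30 days, so a MITM needs a long
        # uninterrupted run before its own tack supersedes the real one.
        return last_seen + min(last_seen - first_seen, CAP)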

This looks like a massive improvement, although I wonder if it actually protects clients which do not support the extension?


You could do that, but you'd be outed immediately by anyone who checked two TLS connections back-to-back for consistency.
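
Such a check needs nothing exotic; a rough Python sketch (a real checker would account for sites that legitimately rotate certificates across frontends):

    import hashlib
    import socket
    import ssl

    def leaf_fingerprint(host, port=443):
        # SHA-256 over the DER-encoded leaf certificate the server presents.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return hashlib.sha256(tls.getpeercert(binary_form=True)).hexdigest()

    # Two back-to-back connections should present the same certificate; a MITM
    # that intercepts only some clients risks showing two different ones.
    a = leaf_fingerprint("mail.google.com")
    b = leaf_fingerprint("mail.google.com")
    print("consistent" if a == b else "inconsistent: possible MITM")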


I think I see, so one client advertising TACK and one not?

Right, that would work, assuming there are people going around doing it for every web site (the TACK police!). I can easily see that happening, it is just the kind of thing moxie would do, say.

It still seems to me that it would be simpler if the client just didn't advertise the extension. I am probably missing something though.


>This looks like a massive improvement, although I wonder if it actually protects clients which do not support the extension?

Only indirectly, through network effects. It's much harder for a valid attack certificate to stay undetected in the wild when a subset of the people you might attack are running TACK. You're right that it might not help clients if the MITM software intelligently avoids attacking TACK-capable clients.


What exactly needs to happen for TACK to go widespread at this point?

Maybe we can get some good out of this week's focus on security.


I'd say to just get the word out. Around this time last year I was working at a security company and personally pushing for establishing automatic TACK protection for our clients, passively sniffing certs for SMB networks and learning which ones were good. But no one else knew what I was talking about, I couldn't convince them the project wasn't dead, and at the time I couldn't answer the question "how do we deal with a legitimate change in browser cert?"

If you can grok TACK better than I do at the moment, start writing patches for web servers.


Best next step would be to get some non-browser user of TLS to rely on TACK.


I may not be understanding it clearly but won't a non-browser app be better off simply pinning whatever certificates/public-keys etc. directly in the binary of the app?


The key idea of TACK is that it puts controls of pins in the hands of the site operators. The goal is to protect trustworthy site operators from rogue or compromised CAs (e.g. Diginotar, Comodo). The site operators have a better idea than anyone of what keys and certificates are correct for their site.

Pinning directly in the app puts trust in the developers of the app instead, which is indirect and prone to lag. It is also generally fragile (you have to issue app updates for cert revocations) and can be hard to scale. How many secure sites does your app need to connect to? Is it flexible, unknown? How are you, the app developer, going to validate those certs beyond relying on the CA PKI? (And then you're back to square one.)
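
To illustrate, in-binary pinning is essentially a baked-in constant compared against whatever the server presents; a minimal sketch (placeholder hash, and a real app would pin the public key rather than the whole cert so it survives reissuance):

    import hashlib
    import socket
    import ssl

    # Baked into the binary: rotating the server certificate means shipping
    # an app update to every user. The value below is a placeholder.
    PINNED_CERT_SHA256 = "00" * 32

    def connect_pinned(host, port=443):
        ctx = ssl.create_default_context()
        sock = socket.create_connection((host, port))
        tls = ctx.wrap_socket(sock, server_hostname=host)
        der = tls.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != PINNED_CERT_SHA256:
            tls.close()
            raise ssl.SSLError("certificate does not match the baked-in pin")
        return tls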


What are these "site operators" that you speak of? If Facebook or Google or Twitter is publishing a native app, they can embed any credentials/certs/keys etc. they want into their app. It's not a great leap to think of a cert/key renewal mechanism via DNSSEC or some other proprietary mechanism. Essentially, it's the same as Google pinning their certs in the Chrome browser. Except, since you control both end points (your app and your own servers), you don't have to participate in the broken (trust-based) PKI system at all.


Isn't that a bit like Opportunistic Encryption (which I happen to consider a neat idea)? Or do I misunderstand that?

I'm asking, because I dimly remember you not being a fan of OE -- have I confused you with someone else there?


I'm not sure how TACK is similar to OE. OE is encryption without meaningful authentication. It's an open invitation to MITM. I'm definitely not a fan.


I think the comparison is that you don't meaningfully authenticate the first connection.

This is wrong, though: although new users will, in fact, be temporarily MITMed, all returning users will see a big scary warning page that will cause them to (perhaps automatically) report the problem to their browser vendor. Google/Mozilla will promptly drop the offending URL into the malware-sites list, and the new users will thereby be rescued as well.


That's true, and also, those first connections are authenticated (they're just not authenticated perfectly). There's a difference between weak authentication and no authentication, as aggravating as it is to point out.


What do you mean by perfect authentication?

We're talking about referring to Telehash's approach as "ubiquitous encryption" with "self-consistent authentication" but nobody seems to agree on what authenticating an identity means in the first place.

Pinning encryption to addressing solves those problems at a lower layer, and leaves phishing attacks to be solved separately. I don't know that anything I'd call "perfect authentication" can be achieved within the X.509 framework.


Could you elaborate what you mean by "pinning encryption to addressing"? Do you mean DNSSEC? Doesn't that suffer from the same third party trust problem as any other PKI based system?


Public key cryptography doesn't require X.509's third party trust model, but as the author of the article points out, PGP key exchange is not quite as simple as just trusting the CAs your browser vendor decided you should trust.

Telehash is taking an approach that completely sidesteps the problem of human-memorable names, though. It uses the public key fingerprint as the "network address" of a node in a DHT. The Telehash address is globally routable, like an IP address, but there is no MITM possible, because only a node with the private key generating the address (fingerprint) can communicate at all using that address.

There is still the problem that humans don't want to type in an IP address, let alone remember something as unwieldy as 9ba9c175c3c26af9df5c8163ea91d4ae4eca59ba95d66deb287c89ea0c596979. But deciding whether to trust that key is distinct from verifying that data is signed by the key with that fingerprint.
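
The general idea, sketched in Python with the third-party cryptography package (Telehash's actual key types and encoding may differ):

    import hashlib

    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Derive an "address" from a public key. Only the holder of the matching
    # private key can speak for this address, so there is nothing for a MITM
    # to substitute.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    spki = key.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    address = hashlib.sha256(spki).hexdigest()  # unwieldy, as noted above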


For a simpler model, if you don't want to wade into consistent hashing and DHTs, there's also IPv6 Cryptographically-Generated Addresses[1].

[1] http://en.wikipedia.org/wiki/Cryptographically_Generated_Add...


Yeah, that's what I meant (and couldn't properly articulate).

So the difference is the reporting mechanism, that allows a powerful organisation to make meaningful threats to protect others?

Because in [Garfinkel 2003] it's similar: returning users get a big, fat warning, but they can't meaningfully do anything about it on their own, except not trust the other side.

(http://simson.net/clips/academic/2003.DGO.GarfinkelCrypto.pd...)


TACK is really cool, but is it still being pushed to the vendors? (Or even, are the vendors interested?) I've seen patches floating around on GitHub, but alas, not much activity lately.


The vendor space, considered seriously, consists of Google, Mozilla, Apple, and Microsoft.

Of those four, Google is pushing hardest for CA-compromise countermeasures.

Google's current preferred solution is Certificate Transparency. I don't love CT, for reasons very similar to those that the typical HN reader would come up with after reading it, but it's still a step forward.

We sponsored development of some TACK code last summer, but none of the browser vendors are itching to integrate TACK. Google has nice things to say about it, but "backburnered" would be a fair summary of where it stands right now.


Thank you for the information on the current situation.

Well, not to be an opportunist, but let's hope that this kind of incident will move the adoption of TACK along a little bit faster.


Meh, he's only saying that because TPTACEK loves his PET TACK.


If TACK can rewrite the trust model away from the CA and into the state of subsequent connections, we won't actually need the CA. Self-signed certs are equally fine for establishing trust after the first connection.

If however we do use CAs as the trust model in order to trust the first connection, then TACK is nothing but a policy system that attempts to keep CAs honest. Users would still need to put their trust in third parties that they never meet, and whose priorities and objectives are unknown.


I see TACK as a decisive mitigation for the biggest danger posed by CAs today (that they enable global passive adversaries to conduct dragnet surveillance), and as a step towards a workable web-of-CAs system.

I do not believe in pure peer-to-peer schemes as workable solutions, but I do believe that we could in 5 years have a system where I can trust ACLU first, and then Verisign as a fallback.


"the first connection to a website is trusted"

Yeh, I can't see that going wrong any time soon.


Nobody is claiming that it provides perfect security, but it massively reduces the attack window for successful certificate spoofing, while turning any failure to hit that window into a big red blinking alarm sign.


That threw me off too. But I think they may mean the first visit ever. Meaning it would be something they could control and validate before releasing to the public?


This has been discussed multiple times now [1], but TACK is not a solution because:

1. It amounts to a preservation of today's pay-for-security system (the not-so "nice thing" you mentioned), which is not necessary. Thanks to distributed databases like Namecoin, it is no longer necessary to pay for SSL certificates (or fax credentials, or any of that).

2. It doesn't offer a strong mathematical proof of authenticity the way a blockchain-based solution does. [1][2]

[1] https://news.ycombinator.com/item?id=7325551

[2] https://github.com/okTurtles/dnschain


TACK has the advantage of working within the current infrastructure (with some patches): that's a point that should not be ignored if you want wide acceptance.


I wonder whether there is any meaningful difference between the effort it would take to implement TACK securely in a way that doesn't involve paying what amounts to protection money, and the effort required to use a blockchain-based solution.

It wouldn't surprise me in the slightest if the blockchain solution were actually simpler to implement and deploy. Fetching public key fingerprints involves a single HTTP request that returns some JSON. That's about it.
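
Something like this, against a DNSChain-style server you run and trust yourself (the endpoint path and field names here are guesses on my part, not the documented API):

    import json
    import urllib.request

    # Hypothetical lookup against your own DNSChain-style resolver.
    url = "http://127.0.0.1:8080/d/example.bit"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)

    fingerprint = record.get("fingerprint")  # assumed field name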


Don't we still have to then trust whoever is hosting the server returning the json? I guess you could verify with multiple servers, but that doesn't really guarantee anything and increases the requests.

The other option is to run a full Namecoin client with an up-to-date chain, correct?


> Don't we still have to then trust whoever is hosting the server returning the json?

It's assumed that you find yourself (or a close friend) trustworthy.

DNSChain is designed to be run by individuals, with no powerful deciding authority (like browser vendors) deciding who you should trust (as with CAs today).

Today, you trust the least trustworthy of hundreds of organizations that you've never heard of.

With this proposal, anyone is free to trust whoever they want, and they can change that instantly without any browser updates or anything along those lines. It's about as trustworthy as you can get.


There are also other reasons (besides authentication) that a blockchain-based architecture is desirable for the Internet:

- real-ownership of domain names (free of political pressures that result in domain-seizures)

- a powerful, global identity system (not run by any government or mega-corporation)

Some concepts & partial implementation: http://okturtles.com/#open-source


Blockchain protocols: the Ron Pauls of distributed systems design.

Replacement of DNS with a blockchain protocol is never going to happen. It's hard enough to talk DNS operators out of baking the CA system into DNS, despite the utter inapplicability of DNS to that problem. DNS has a fierce, powerful status quo advantage.

If you believe strongly that blockchains are going to be the future of global networking, a better plan would be to build a system that ignored the DNS and used a blockchain protocol instead. For instance: the DNS doesn't play any role in matching Google search terms to SERPs, nor does it control how AIM matches names to IM accounts, nor does it control how IRC matches nicks to receivers.

Forklifting out giant chunks of the Internet is a bad plan. Deprecate the Internet and build a new layer on top of it. Eventually, TCP/IP will find itself in the same role as Ethernet; it's inevitable.


> "Deprecate the Internet and build a new layer on top of it."

That's sorta what's taking place (not the entire Internet, but a part of it that's not serving us well). It's interesting that nearly that exact language was used when DNSChain (back then "DNSNMC") was introduced:

[therightkey] DNSNMC deprecates Certificate Authorities and fixes HTTPS security

http://www.ietf.org/mail-archive/web/therightkey/current/msg...

> "is never going to happen."

How many times has humanity heard that refrain repeated?

> "For instance: the DNS doesn't play any role in matching Google search terms to SERPs, nor does it control how AIM matches names to IM accounts, nor does it control how IRC matches nicks to receivers."

You seem to not understand that DNSChain is not just a DNS server. It also is a RESTful HTTP API and interface to the blockchain. This means using HTTP, not DNS. DNS is just icing on the cake (and not "throwing the baby out with the bath water").

BTW, some of those things are already starting to happen. For instance, there's a PoC Pidgin fork that works with Namecoin, and also a working Bitmessage + Namecoin client out there:

https://www.mail-archive.com/enigmail-users@enigmail.net/msg...


Sigh, nice try but it doesn't work. It does remind me of the adage that goes, "For every complex problem there is an answer that is simple and wrong."

The web of trust model doesn't scale; that was made abundantly clear by PGP when it first came out. Even Phil Zimmermann, the guy who practically invented it, agreed it didn't scale and something else was needed. X.509 came about not because some person foisted it on the universe, but because a bunch of people who were writing security systems at the time (myself included) got together with other cryptographers, engineers, and administrators under a group hosted by "Public Key Partners" (the folks collecting together the patent pool associated with public keys) and tried to come up with ways this might work.

It has had some fabulous successes: certificate authority compromised? Pull their root cert and blam, none of their keys are trusted any more. It has had some failures. Call the baby ugly if you must, but at least propose something that hasn't already been tried and shown not to solve the problem.

[Edit: I really need to keep people's names in different buckets in my head]


PKI is the worst form of authentication, except for all the others.

I like to design secure protocols for fun, and they all inevitably converge into a PKI when you start adding the non-trivial required features. It's incredibly frustrating.


I wrote an article called, "Why you think the PKI sucks but can't do any better" that goes over every single proposed alternative to it and explains why none of them work:

https://medium.com/p/d04ea6a2c771

It's also got links to various security usability studies that are required reading for anyone who cares about this space.

tl;dr - the article is dumb, heartbleed doesn't even have anything to do with X.509.


For me, the major problem with the infrastructure of the PGP "web of trust" is that even it is prone to spamming (not of the hash values but of the names that appear), and users have to learn to avoid it. See the recent example:

http://cryptome.org/2014/04/radack-greenwald.htm

It doesn't appear at the moment that pushing this technology to web browsers would increase the security of most users.


It doesn't really solve the OP's points though, the largest of which is that certificate authorities are a closed-off oligopoly. But we have literally no way to trust them, even beyond the CA price gouging, because any state they are located in will just seize their keys and read your traffic.


Trust me zanny, I'm not picking on you, I'd like to examine your three claims in a bit more detail:

1) "certificate authorities are a closed off oligopoly" - This is absolutely not true. Pretty much anyone can start their very own Certificate Authority. The code isn't that complicated (the specs are all available), the math is no longer patented so you don't have to pay tributes to PKP. What you do have to do though is convince three people you're trustable, Mozilla, Microsoft, and Google. If they add you to their trusted root certificate list then you've covered a whole ton of the market. I know of at least one "private" certificate authority which shares its ROOT CA with individuals who want to trust that the sites which have a cert from them is "legit." (for some definition of legitimate).

2) "any state they are located in will just seize their keys ..." - this conflates two things, one is trust and one is seizure. If you live in the US, and have a PGP key that is trusted by the target of an investigation, and law enforcement can convince a judge that using that is the only way for them to get the proof they need, you may find yourself on the receiving end of a subpoena which demands you hand over access to your key. You can refuse of course, and the court can put you in jail for contempt. This issue is completely separate from the Certificate Tree or Web of Trust choice. The purpose of the certificate is to establish trust not privacy. The purpose of the TLS protocol (aka SSL) is to establish privacy. Its necessary (but not sufficient) to be able to trust the other end.

3) "... and read your traffic." - Which is a violation of privacy relates to how you established privacy as opposed to the mechanisms in that protocol. And I suspect that you think that is a splitting semantic hairs but bear with me for a moment. The heartbeat bug is in OpenSSL, not the certificate infrastructure. There are lots of things that used different protocols, and X.509 certificates, that are just as secure today as they were before this bug was disclosed. The key here is that they used a different protocol.

The OP's rant might more properly be leveled at what is effectively a monoculture around OpenSSL, and I would agree that is a bad thing. But for that not to be the case you would have to have people write their own TLS libraries, and while that would assure that vulnerabilities were contained, it won't happen: nobody these days can afford to hire a programmer to write TLS libraries when the only "hire" they can justify is someone coding up some JavaScript and CSS.

I can completely relate to the OP's angst over the challenges of keeping things secure in today's world; it's something of a life we "chose" relative to using open source.


"Pretty much anyone can start their very own Certificate Authority."

This may be technically true, but the process of becoming a CA was described a year or two ago on the randombit cryptography list and it was estimated that it is a (roughly) 1+ million dollar undertaking, just to get up and running and accepted in the browsers.


Yes, that's because browsers don't want to put the security of hundreds of millions of users in the hands of any Joe Random that asks for it. Running a CA is expensive because they are held to very high standards. You need to have your root keys inside an HSM, you need to have multiple people on your board who can access those keys, you need to set policies for certificate issuance that meet the CA/B requirements, you need to run OCSP servers, and you need to be audited by a third party to verify you're actually following all those rules. All those things take money, so then you need billing, and charging people money implies you need support. In future you may need to take part in the CT audit logging system as well.

Take away any of these things and you'd be left with something that is significantly worse.

That's why it costs money to be a CA _that browsers trust_. Of course if you want to be a CA that doesn't care about browsers, that's like three lines of code at the command line.
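
Or, if you'd rather not use the openssl CLI, roughly the same thing with Python's cryptography package (a sketch of a self-signed root, not a production setup):

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # A self-signed root: issuer == subject, CA basic constraint set.
    key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Joe Random CA")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256())
    )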

This does not mean that the CA system is broken. There's a huge middle ground between "anyone can do it for free" and "totalitarian oligopoly". $1M to start a business is not that high compared to many other businesses.


> The purpose of the certificate is to establish trust not privacy.

And it fails in that case. If a government forces a root CA to give it a copy of the root key, nobody can trust any certificate signed by that CA ever again.


And if it was discovered, that CA would be revoked at huge cost and a ton of people would be very unhappy.

Does the US Govt care? Maybe ... there are no references to such a root key seizure in any of his docs so far. Or maybe not. Just lots of talk about stealing private keys directly from the original holders. But who can really know?

Suffice it to say there are attackers in this world other than the NSA.


The problem though is that you lose any external trust.

Now external trust is not very helpful for some things. It won't protect you from state-sponsored MITM attacks. Also non-EV certs may not inspire a whole lot of trust.

But you have the fundamental question which X.509 tries to answer: "Before I give you my credit card number, how do I know you are who you say you are?" You need some form of external trust there to answer that question.

External trust is not foolproof. See Thompson's important paper on the limitations of it. However, it is very good at addressing certain classes of threats (and very bad at others).

In the end we need both models and there is no real way around that.


Not going to happen. The WoT is a usability nightmare for the 99.9% of nontechnical users that don't care about things like 'p2p' & 'decentralized'.

Do you really think Granny is going to be happy with the tablet she bought that can't connect to her online banking account out of the box? Have fun explaining to her that she needs to exchange keys with enough trusted intermediaries to have a valid trust path to her bank. I'm sure there are plenty of key signing parties happening at the 'ol retirement home.

Or maybe you can explain to Granny why her money was stolen when a scammer managed to compromise one of her trusted keys and then created a compromised subgraph in the WoT leading to a fake certificate to her bank?

The WoT is a usability nightmare. Sure, the PKI isn't too great, but it's what we have, and it is currently more practical than any other solution out there. Security needs to be usable to be useful.

EDIT: for a good rebuttal to the OP, read this blog post by Mike Hearn which covers the issues I raised and more: https://medium.com/bitcoin-security-functionality/b64cf5912a...


In general I hate arguments that include the grannies and grandpas of our world.

First of all because it assumes that all of them are somehow stupid - or, for the less adaptable ones that have problems with newer technology, it assumes that the current status quo works. Do you think that granny from your example wouldn't click "Ignore" on a browser warning?

Second of all, if we really get down to an argument about elders, society and making the world a better place, the priority shouldn't be to keep the status quo because the elders wouldn't cope with change - because in that equation, today's children are more important, don't you think?

> Have fun explaining to her that she needs to exchange keys with enough trusted intermediaries to have a valid trust path to her bank

That's false - she only needs to exchange keys with the bank directly.


I hate arguments that include the grannies

The point isn't the slur on elderly users (though that often applies), but to think of the least-technical, large-base user likely to be trying to make use of your product.

In my experience, I've encountered technically challenged users of all stripes: the illiterate, PhDs, strangers on the Internet, immediate family and friends, children, the elderly, mentally or psychologically challenged, executives (but I repeat myself), entrepreneurs, the harried, etc. And, put quite bluntly, there's a hell of a lot of them.

Within the tech world we tend to be fairly insulated from the larger scope of this problem, and yet in my experience it's still ubiquitous.

The point of the example isn't to take affront, but to realize that for widely-deployed systems, base-level usability is crucially important.


It's not that they are stupid; many people have better things to do than spend a single second they don't have to staring at a monitor.

Different interests. Different focuses. No one will want to listen to you explain that it's for the best, or the current issues with CAs (also what a CA is). They just want to check their damn Gmail.


A "web of trust" is essentially an extension of the CA "tree of trust". Why can't we have both?

Apple can act as iPad users' first WoT node. If a user logs into Facebook they immediately add every Facebook friend to their web. etc, etc.

Just because WoTs are currently usability nightmares doesn't mean they have to be forever.


And you can easily have a web of distrust, in that if one of your more immediate trustees stops trusting a distant branch, you can at least prompt that something is wrong. That way you can avoid the whole "compromise one node and screw everyone over" problem as long as someone realizes the compromise before you trust it.


Even with just that single example the scalability issue becomes apparent. I would never want to trust all of my Facebook friends with my computer security. And even less so my friends' friends, etc.

And I certainly could be wrong in my understanding, but I believe all it takes is a single malicious (or pressured) actor to ruin that chain.

That seems scary to me.


Agreed. Somehow I feel we need to end up with reputation-based nodes of trust that act as proxies for real-world trust. It must not be like PGP is now, but it does need to be distributed and based on local trust.


Check out SPKI / SDSI. It has a hybrid model similar to what you described, but unfortunately it never really gained adoption.


Isn't this what Keybase is kind of accomplishing?


So Granny huffs and puffs and calls me, trusts my key through the sexy new UI I talked about and is done with it.

I specifically said that PGP may not be the solution, but what we have now is just ridiculous if you really think about it. We have no choice but to trust 4 companies on precisely nothing but their word. Even if you mistrust their word - and I do - there is no alternative choice.

Security always boils down to trust in the end, and the status quo outsources it. It is the definition of stupid.


What happens when someone far away from you in the WoT is compromised by, say, a botnet? Now you get compromised because a source you verified through your WoT loaded a malware-infested piece of software onto your mobile device. So it's not necessarily any more secure.

The status quo outsources trust because that's what you do in an economy. We trust the government to secure the value of our money. We trust banks with storing that money, and we trust that the government again will make sure that they do.

If you want to see what happens when you DON'T outsource trust, look at how terrorist networks operate. They only deal with trusted associates who know each other personally, they only communicate through trusted couriers, and they live in fucking caves. It's not exactly conducive to a modern economy.

You have to outsource some level of trust. Otherwise you waste so much productivity on maintaining and verifying your trust network that you can't actually do anything worthwhile with it. I think the real question is "to whom?" and "for what purposes?" If you need something to be really secure, then you should probably do an in-person key exchange. For the majority of things people do you only need "mostly secure" because there are other protection measures in place in case the communication is fraudulent.


What do terrorist networks and how they live have to do with internet security or even the outsourcing of trust in general?

On a technical level there's no meaningful connection.

Just talking philosophically, they "live in caves" because the US & other governments have armies trying to kill them. It has nothing to do with trust networks. If anything, that style of trust networking has made them more secure, as it's difficult to penetrate. The point the OP was making.

Finally, personal trust networks have worked remarkably well. Look at guanxi in China, social societies like the Freemasons (not in a "control the world" way, just better business contacts, etc.). These are all based on networks of trust.

I have no idea if this is the best way forward for the web but a comparison to terrorist networks is meaningless.


The point of the comparison is that networks for which trust is actively maintained are necessarily small, due to the expense of maintenance. Indeed, both of your counterexamples have this property.

The OP believes that to be economically viable, trust networks must be large. Hence, outsourced trust.

But I agree with you: once your personal network grows beyond a certain size, the property connecting you directly to any particular node is no longer exclusively "trust", but will increasingly be "convenience". Usually followed shortly thereafter by "abused by".


The reason for the comparison is that terrorists require absolute security of their communications and can't make sacrifices for convenience. As such, they have a difficult time coordinating any large-scale attacks and this is a huge strategic advantage for their enemies. Replace large-scale attacks with "buying things online" and you start to see the limitations of the web of trust as the exclusive means of securing communication. I only brought up the comparison because it was the best example I could think of where the ONLY trust is personal trust, and even then it still gets exploited through social engineering (spies & informants). Even if you take it to the extreme like that, it's not fool-proof (or even incredibly effective). The entire point was that the failings are not technical; they're structural to the concept of trust.

Personal trust works well, and nobody's implying that you can't or shouldn't use more peer-to-peer solutions where you feel you need more security -- but it's not going to form the backbone of the global economy. At the end of the day, you need some form of centralized trusted authority with which individuals can contract to provide trust-management services, otherwise you spend all your time verifying trust and not actually doing anything.


Calls who? Why should Granny trust you? What you described is no different, Granny is still outsourcing her trust to some 3rd party.

Or are we working under the assumption that every Granny has a grandson who is just as technically competent as you are? The fact of the matter is, PGP has just enough friction that even if implemented correctly, it will still lead to the vast majority of non-technical users simply signing up to some SaaS to handle it for them, and with that you end up back at square one, where a handful of SaaS providers are the gatekeepers to everyone's identity.


You have a choice. Remove all CAs from your browser, and manually verify all the server signatures yourself.


I tried this a few years ago in Firefox, but NSS kept throwing all kinds of bizarre errors, as it demanded that various intermediate CAs be trusted.


And the proposed alternative is to instead trust just one entity, the PGP code? On top of that, it all falls flat if/when any of my trusted friends gets hacked, meaning I start trusting evil.example.com because their hacked key now tells me to?


Everyone always uses this takedown, and it's stupid.

Why would you trust your friends' keys to validate, say, your bank? You wouldn't. You'd trust your government and various regulatory bodies to do that.

You'd trust friends' keys to validate your friends' websites or the like.

Different trust paths for different things. This is really the problem with the UX of all crypto at the moment, though - it's way too absolutist about "trust", rather than considering use cases.


FYI, you should take a look at DNSChain:

https://github.com/okTurtles/dnschain


WoT won't work, yes.

> Sure, the PKI isn't too great, but it's what we have, and it is currently more practical than any other solution out there. Security needs to be usable to be useful.

But I disagree with you here, there are better solutions that are just as easy to use:

- Some options: namecoin. If you own the domain you can easily sign stuff with the same key you use to own the domain

- Put stuff in DNS's TXT record once DNSSEC is rolled out. (Or create a new record)

Or take a look at Tor hidden services, for example. You enter an onion domain, and you're there. Guaranteed. No messing around checking for a green lock or messing around with a WoT.

Note Namecoin might not yet be usable by everyday users. And most of them probably don't want the blockchain locally. But it's easy to imagine that your ISP still provides you some sort of DNS service.


In the past, key exchange was hard. Now it could be as simple as walking to your local Bank branch and scanning a QR code on a plaque with your smartphone (as mentioned in the article).

Online-only or remote businesses like social networks and airlines would face a tougher problem.


Well, in the QR code and smartphone scenario, you also have to trust that the code in the smartphone all the way down to the hardware is trustworthy. Otherwise, you'll get attacked through the firmware or bugs in the OS or through custom sleeper electronics injected at the fab.


That's the case with practically any system.


What if there were a hybrid solution where we all collectively issue certificates in some sort of p2p model? I know there are huge technical hurdles to that, and you have to be careful of nefarious parties trying to inject bad certs, but at least we would get rid of the monopoly.


Or the bank (which she trusts) sends her a magic code (which adds the bank's public key as the only key to be used for their domain) for the tablet, along with the 2-step authentication device.


tl;dr: Don't trust big scary corporations like Symantec to verify sites, trust your friendly local geek's network.

I think if you weren't exhausted by the sheer length of the post by the time you reach that proposal tucked at the very end, you might think to ask some critical questions. Like, what are the vulnerabilities and exploits of a peer-to-peer system? Would this not be open season on socially engineering average folks to trust the wrong peer? How vulnerable to attack are local geeks and university computer science departments? How are compromises noticed and handled by the average folks who trust a small local authority? How will the verification work be paid for, or will it be completely volunteer based, and how efficient will that be?

Moreover, what the author fundamentally misunderstands is the importance of usability in security. Web security isn't perfect but that's because more perfect security would make ecommerce annoyingly difficult. Then people start taking shortcuts or just ignore security completely, which is a worse outcome. It's not enough to point fingers at users and yell that they're doing it wrong; security architects have to take responsibility for security outcomes. A peer-to-peer system would be significantly more inconvenient for average folks to use correctly, if only because of figuring out who to trust in the first place.


Well, to be fair, the author raised the UI/UX question, which could be a great way to overcome the bullshit "green padlock == safe" idea - which doesn't hold now, post-Heartbleed, and never did.

A different UI might reveal the trust path more directly, so that if I navigate to my bank that path might be forced into view.

I, for one, would love it if my browser displayed the trusted path used to connect to my bank before loading any part of the page. The same goes for self-signed certs. Would I avoid HN if their cert was self-signed? Nope.


The problem I see with PGP is that you'll end up with thousands if not millions of keys you need to keep on hand to decrypt everything. Not to mention the web of trust will be massive, and navigating it will likely start taking very large CPU power if it's strictly peer to peer.

To avoid this, most people will start just trusting larger companies - Google, Facebook, Apple, Mozilla - and only checking their keys, since they will trust that company's key. And these companies will handle signing new websites. Small websites won't care if you personally trust them; they'll only care if one of the "big companies" trusts them.

In the end we wind up exactly where we started: large companies implicitly trusted by everyone. Sure, you may sign your key off to a few dev friends so you can access their test sites, which will make self-signing easier. The cost will be mitigated, but in reality nothing will change. Likely within 3-4 browser generations we'll even see non-company-trusted PGP keys get scrapped in all but the more free (as in beer) browsers.


> Large companies are implicitly trusted by everyone.

No. What little trust there is comes from people trusting them to look out for their own interests. But even then, people generally acknowledge that the customer's interest might not always win here.

See e.g. Linux's refusal to use Intel's hardware RNG.


I'm not sure this is the likely outcome. There are a lot of people out there savvy enough to not jump on the big company bandwagon - and they would be very vocal about why it's a bad idea too. I don't see your vision becoming a reality for as many as you expect, especially if PGP is brought into our lifestyles a bit more (e.g. email, chat) and the general population begin to understand it - it wouldn't be long before they understood enough to value building their own trust relationships.


The problem is it's not a personal-savvy problem, it's a mathematics problem. PGP will pick the shortest route. So if you trust google.com, and google.com trusts the site, bing-bang-boom you're done.

The shortest route will always favor the person with the most keys and the most trust, who invariably will figure out that he/she can make money getting more keys and more trust. And, lucky for us, there are both a finite number of persons and a finite number of keys that will be signed by each key. We end up with a pyramid scheme.

Where the more trust and keys you have, the easier it'll be to get more trust and keys.
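
To make that dynamic concrete, here's a toy shortest-path search over a made-up trust graph (names hypothetical; real PGP trust computation is more involved):

    from collections import deque

    # Whoever signs the most keys sits on the most shortest paths, which is
    # the centralization worry described above.
    trust = {
        "you": ["google.com", "alice"],
        "alice": ["bob"],
        "bob": ["smallsite.example"],
        "google.com": ["smallsite.example"],
    }

    def trust_path(start, target):
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == target:
                return path
            for nxt in trust.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])

    print(trust_path("you", "smallsite.example"))
    # -> ['you', 'google.com', 'smallsite.example']: the big hub wins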

:.:.:

The problem is capitalism. In all honesty, we'll likely see the PGP network end up in the hands of banks. You want secure access to your online account? Sign each other's keys. Now the bank has a 5-million-person-strong trusted key. They'll sell that trust, naturally. I trust most tech companies enough not to instantly monetize the PGP web, but some would.

Likely some tech company attempts to monetize it, and they get yelled at. They stop. Another does, nothing changes, so people accept it as the new norm. The argument made is that it allows for faster page loads and easier access. Nobody says a word after a year.


The difference though is that if we used web of trust / PGP, you still have the choice to divert around exploiters.

With PKI, you can't choose the root CAs. Today, Verisign abuses the shit out of their market dominance to price gouge certs, and I have no reason to ever trust that company with anything, they don't give me a reason to, and they almost certainly have their root keys in the pockets of groups like the NSA.

So if I don't want to run my WoT through Google, I could choose not to. For the average user they shouldn't care, but I would at least have the choice. Right now there is none.


I completely agree with the capitalism problem, but the most likely hands to end up holding the web of trust would be the browser makers, not the banks.


The difference between the CA system and the WoT system you're describing is that you can revoke your trust in, say, Google yet you'll still be able to validate certificates.

With the CA system as it is now, once a CA is trusted, it's effectively trusted FOREVER.


A bug in a PGP implementation could have leaked your PGP private key. A bug in an SSH implementation could have leaked your SSH private key. CAs may be a flawed concept, but I don't think they have anything to do with Heartbleed.


More importantly, they're certainly not the only threat presented by HeartBleed. While this guy drags his feet patching (because his certs still won't be really secure), his servers are likely leaking session cookies, usernames, and passwords by the bucketload.


Heartbleed knocked us down, the CA system is going to make it very difficult for us to stand up straight again - that's the point.


How much effort would it be to rebuild a web of trust after all the keys were simultaneously assumed compromised?


Well, that's one of the most powerful arguments in favor of CAs I've ever heard.


That's precisely why you only use subkeys in daily life and only use the root key for keysigning (and ideally store it safely and offline).


Good point, but will it be less than all the effort and money about to be expended in the coming months? Who knows. For the record I did say that PGP may not be the solution.

The other great thing is that PGP is not just for sites but for people, so even if all the private keys handled by nginx/apache/whatever were compromised Heartbleed-style, the core person-to-person trust relationships would be unaffected; the core of the web of trust would be intact, and only the endpoints would need to be re-verified.


That's a great point worth repeating. Your personal trust relationships probably don't also change when your bank gets hacked and needs to replace their keys, and a network of a certain size will restore access to your bank relatively quickly due to friend of a friend connections.

It also reduces the burden on your bank for maintaining the security of their keys (to some extent). It's still very important, but the consequences are no longer quite so catastrophic.


The Queen/Princess/DNA analogy was more confusing than the actual system of certificate signing.

The author also underestimates the consequences of performing a MitM attack with a root certificate. MitM attacks can be detected and a copy of the signed cert is proof. If the NSA were abusing a root cert, there is a chance it could be noticed.

So what if it was? Well, that certificate would be removed from browsers and operating systems. The CA would be placed under suspicion. In a worst-case scenario, the CA could be completely ostracised, perhaps even to the point of bankruptcy. An abuse of a root certificate could potentially do hundreds of millions of dollars worth of damage.

That's not even covering the diplomatic fallout. If the CA points the finger at the NSA, the President would have to explain why the target was so important that it merited destroying part of the root trust system of the Internet.

There are far less messy ways of dealing with a high-value target. I'd be more concerned about other zero-day vulnerabilities the NSA might have found.


If one of the Big 4 were compromised (which we should all agree is most probably the case for all of them), even then, "too big to fail" rules the day.

It's vanishingly unlikely that Google, Microsoft, and Apple would remove a Big 4 CA root cert and break the trusted path of 25% of the secured market.


It wouldn't just be the browsers removing the CA. There would be a strong incentive for websites to switch as well, particularly foreign ones, so you'd find a mass exodus anyway, even without browser support.

Browsers don't have to turn a root CA off all at once, either. They could start by turning off Extended Validation for the compromised CA, or they could release a statement saying that if they don't get guarantees this won't happen again, they'll remove the CA in a year's time. They could allow connections, but change the SSL icon to indicate the certificate has been compromised. Browsers have a lot of options to put pressure on root CAs, even without removing the cert.


Again though, this argument rests on people choosing to behave a certain way: CAs will choose not to go rogue because browser vendors will choose to be outraged by it.

If one were to attempt to formally specify X.509 in terms of math or logic, we'd get to this part and have no choice but to write "the security of this portion holds because we say so". How many times must we be betrayed before this isn't good enough?


Security is always going to involve trust. Even putting aside root certificates, you'd still have to trust your browser and your operating system.

That's not to say there aren't better mechanisms for verifying trust, but you'll never eliminate it entirely. There's always going to be some assumption, such as "the central authorities are trustworthy" in the case of SSL, or "the majority of nodes are trustworthy" in the case of Tor, or "the CPU majority is trustworthy" in the case of Bitcoin.


He who controls a Queen can make functionally equivalent copies of every Princess and Princess-baby in the Queen’s lineage. They have the skeleton keys to your ‘secure’ kingdom and could at any time decide to become a fraud factory and dish out copies of your keys to whomever they fancy.

In a sense, it's worse than that, because a "queen" can actually sign (correctly or not) any "princess-baby" in any "lineage".


YES YES YES, 1000 TIMES YES!!!!

Unfortunately not too many people know this, and it's a really important issue.

BTW like a lot of other people here, I didn't like the "Queen" analogy. IMO it didn't make the explanation any simpler.


A couple of problems:

The average internet user has no idea who's trustworthy and who isn't. If they have to personally grant trust in order to get at some content they're looking for, they'll simply do it. This is the same behavior that causes people to execute boobs.exe attached to a random email that landed in their inbox.

In order for this to work, the average internet user must cede the trust decision-making process to some other entity who claims to be more qualified to do it, like say the company who makes their browser. There are four browser makers that account for probably 90+% of usage. Now you're right back to where you started with the current oligopoly system, except that with the new system there's a much larger attack surface for nefarious agents to use when trying to insert themselves into the trust chain because anyone at all could let them in.

Cynically, that's the problem with internet security protocols in general - they have to work not only for smart, self-interested people but also for stupid people who are actively self-harming. That's a really tough bar to meet.


I would much rather trust a handful of multinational corporations than a group of "local geeks" to tell me which keys I should trust.

Why?

1) It is probably easier for casual attackers to trick a local geek to trust a phony key. Determined attackers and state-level actors can probably compromise CAs as well, but most day-to-day threats are of the casual type.

2) When a local geek accidentally trusts a phony key, and other people realize it and point it out to them, all that happens is "Oops, I'm sorry." When Comodo is caught issuing phony certificates, there will be a Silicon Valley-wide uproar, browser vendors will very quickly invalidate the offending intermediate key, and the incident will hurt Comodo's bottom line for many years afterward. In other words, Comodo is more accountable than any private individual, not because it's any more ethical, nor because it is any more competent, but simply because it is a highly visible target of public scrutiny whose very survival depends on its public image as a trustworthy CA.

3) Most people (including but not limited to grandmas) who are just beginning to use the Internet have no way to know which keys to trust. We in the programmer community are an exception, not the rule. So what's actually going to happen is that browsers will trust, by default, a bunch of highly reputable individuals or groups (perhaps the browser vendors themselves) and advise the user to trust whomever these people trust. That's not really different from the current situation with CAs. We just replace Verisign and Comodo with @cperciva and @tptacek.


I strongly disagree with your point 2). The reality is, if Comodo is caught issuing phony certificates, there's some media shitstorm that never actually changes anything, stocks go up and down a bit, and a few days later nobody remembers or cares about it, and the company continues doing its business as usual (don't believe me? then why does GoDaddy still exist?). On the other hand, we have had social mechanisms for dealing with mistrust in place since forever. If you are caught being untrustworthy once about something, you'll probably never be trusted again on that issue. People know how to deal with those situations effectively between themselves. It's also easier to boycott an untrustworthy peer than a multinational corporation. You have many friends to choose from, but there is usually no other company to go to for a comparable service.


> we have social mechanisms for dealing with mistrust in place since forever ... People know how to deal with those situations effectively between themselves.

As some of the other commenters have mentioned, the problem seems to be that these social mechanisms don't scale.

Please take my point 2) in combination with point 3). As I said, techies are the exception, not the rule. It's not just Grandma who will have trouble with a web of trust, it's pretty much everyone except us. How do they even know which peers to distrust? Will there be a news feed about compromised peers? Will everyone have to subscribe to one? What if someone wants to explore a part of the web that none of their peers, or their peers' peers, have ever heard about?

The single most important advantage of a centralized model of trust is that a list of trustworthy vs. untrustworthy parties can be quickly and widely distributed in an automated fashion. Comodo issues phony certs? 12 hours later, every copy of Firefox receives an updated list of revoked keys. I know it doesn't currently work like that, but it's entirely possible. Whereas with a web of trust, millions of people will be left trusting compromised peers for many months afterward because they didn't get the news.
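
Mechanically, that distribution can be as dumb as clients polling a signed list (Chrome's CRLSets work in roughly this spirit; the URL and format below are made up):

    import json
    import urllib.request

    # Hypothetical revocation feed the browser polls periodically.
    FEED = "https://example.org/revoked-keys.json"

    def load_revoked():
        with urllib.request.urlopen(FEED) as resp:
            return set(json.load(resp))

    def is_trusted(key_fingerprint, revoked):
        return key_fingerprint not in revoked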


> It is probably easier for casual attackers to trick a local geek to trust a phony key. Determined attackers and state-level actors can probably compromise CAs as well, but most day-to-day threats are of the casual type.

Not true, see here: http://privacy-pc.com/articles/ssl-and-the-future-of-authent...

The problem here is that the free market model doesn't work once you're a big player. Instead of Comodo being bashed by MS/GOOG/Moz, it's still there, all shiny and bright, serving SSL certs.

So the current model is flawed and can be exploited by technically unskilled users, but worst than that, it doesn't seem to care about it's failures.


> And fundamentally you have to trust that they who hold the Queens aren’t dishing out copies of your certificates.

In general, I'm a fan of analogy, but I'm having trouble following this whole queen/princess/baby thing. Putting that aside, I think you're claiming that CAs can present your certs to random clients?

This might be an indictment against the DNS system, which directs the clients to an IP address of its choosing, but if the client makes it to your server, your server chooses which cert to present to the client.

> What we have done here is fitted our doors with some mega heavy duty locks, and given the master keys to a loyal little dog.

Again with the strained analogy. Who's the dog? What does the mega lock represent?

I think this betrays a fundamental misunderstanding of what the CA is doing. The client asks your service to validate itself; your service does so by saying that Verisign/Thawte/etc. has previously signed the cert that your service sent to the client. The client does not have to automatically trust Verisign or Thawte or whomever you say signed it, and furthermore, if it decides that it does trust that party, the NSA is not able to use that to its advantage in any way as a result of Heartbleed.

> As of today, that green padlock no longer means what it once did. And the reason for that is because of the business conditions of gatekeepers.

No, it doesn't mean what it did yesterday because of a bug in one implementation, OpenSSL. The protocol is still just as valid. The business conditions of the gatekeepers, while distasteful to you, don't invalidate the mechanisms by which that little green padlock gained its fame.


...if the client makes it to your server, your server chooses which cert to present to the client.

Ummm, that's kind of a big "if". The whole point of authentication is to resist an adversary who controls the network. We already know we can't rely on DNS (or any of the other 37 moving parts involved).


Not the OP but I think I might be able to help.

>> I think you're claiming that CAs can present your certs to random clients? This might be an indictment against the DNS system, which directs the clients to an IP address of its choosing, but if the client makes it to your server, your server chooses which cert to present to the client.

Here I am fairly confident that he is talking about a situation in which a CA signs a key for your domain and gives it to someone else (NSA/GCHQ), and they perform a MITM attack on a user like this:

Client -> Fake key for yourdomain.com provided by MITM proxy server -> decrypt data then encrypt with real key for yourdomain.com -> Your Server

CAs have been compromised before [0] (and I'd be willing to bet there are quite a few more incidents that have been swept under the rug), so there has been discussion of what happens when someone can sign a certificate for any domain. I believe this is what the OP is referencing.
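
To make the detection side concrete: the rogue cert chains up to a trusted CA, so ordinary validation passes; what catches it is pinning the real key's fingerprint. A rough Python sketch (the pinned value is hypothetical):

  import hashlib, socket, ssl

  PINNED_SHA256 = "ab12..."  # hypothetical known-good fingerprint

  ctx = ssl.create_default_context()
  with socket.create_connection(("yourdomain.com", 443)) as sock:
      with ctx.wrap_socket(sock, server_hostname="yourdomain.com") as tls:
          der = tls.getpeercert(binary_form=True)

  # The MITM cert validates against the CA store, but fails the pin check.
  if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
      print("valid chain but wrong key -- likely MITM")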

>> Again with the strained analogy. Who's the dog? What does the mega lock represent?

I agree with you, this one is harder to understand. As I see it, the mega lock = the CAs' private keys, and the dog = the CAs. When he talks about the dog being tempted by a steak, he is referencing the rumors that the NSA/GCHQ have back-room agreements (the steak) with CAs, or have simply hacked the CAs and taken what they needed (for which I'd say something like "the dog was asleep").

>> No, it doesn't mean what it did yesterday because of a bug in one implementation, OpenSSL. The protocol is still just as valid. The business conditions of the gatekeepers, while distasteful to you, don't invalidate the mechanisms by which that little green padlock gained its fame.

This is less cut and dried than you suggest. The green padlock has always meant jack-shit when it comes to state actors (if you subscribe to the theory that they have either bought off one or more CAs or hacked them, which I do); what it did protect you from was your run-of-the-mill online criminal. It made it impossible for them to sniff your login credentials a la Firesheep [1]. (Yes, the padlock itself didn't do that, the PKI did, but it gave people a simple way to check whether the connection was secure and the website was who it said it was.) What the Heartbleed bug did was allow ANYONE to potentially steal your private key right off your server, opening the door not only to the NSA/GCHQ but to anyone with an internet connection (and the knowledge to exploit it).

The OP is suggesting that CAs should revoke certificates to force people to fix their servers, but they never would, due to the backlash. CAs have the ability to revoke certificates that are compromised, and we have to assume every certificate has been. I don't know what the right course should be, but one that springs to mind is giving everyone a deadline at which point all certificates will be revoked, and refusing to re-issue a certificate to a domain that is still vulnerable to Heartbleed. YES, this is extreme, and no, it's neither simple nor easy, but I think there are very good reasons why it should be done. The thing is, at least IMO, that CAs really don't give a shit; like the OP suggests, they care about one thing and one thing only: their investors. If they really did care about making the web a safer and more secure place, then why aren't they sponsoring OpenSSL or working on their own open-source SSL library?

[0] http://crypto.stackexchange.com/a/11765

[1] http://codebutler.com/firesheep/


With a properly done CSR (certificate signing request), a CA never has access to your private key, and therefore cannot "give it to someone else".
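
For anyone unfamiliar with the flow, a rough sketch using the third-party cryptography package: the private key is generated locally, and only the CSR, which carries just the public key and identity, is ever sent to the CA.

  from cryptography import x509
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa
  from cryptography.x509.oid import NameOID

  # Generated locally; this never leaves your machine.
  key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  # The CSR is signed with the private key to prove possession, but
  # contains only the public key and identity; this is all the CA sees.
  csr = (x509.CertificateSigningRequestBuilder()
         .subject_name(x509.Name([
             x509.NameAttribute(NameOID.COMMON_NAME, u"example.com")]))
         .sign(key, hashes.SHA256()))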


Correct, they cannot give it away, but they can sign a new key for your domain, which the attacker can then use.


Previous HN discussion on Monkeysphere, a Debian project which implements something like what the author envisions: https://news.ycombinator.com/item?id=6617132

And the description from the Monkeysphere site on why they are a better alternative for HTTPS: http://web.monkeysphere.info/why/#index1h3


Indeed, this is pretty much exactly what the OP is talking about. The problem is that it's hard to bootstrap, since correct verification procedures are not widely known.

TACK, which tptacek mentioned, is an orthogonal strategy for solving the same problem, but it assumes that some MITM will be detected. An ideal solution would involve a combination of both TACK and Monkeysphere.


There's also Convergence, which currently works for the case where the client is being MITMed, but not the server. Add support for notaries to cache TACK responses and you are pretty secure.


I used to work on PKI and this right here would have the old guard of system security architects up in arms:

  > 90% of that guff can be automated and hidden underneath a good UI, but can we
  > dispense with the need for key exchange parties? Absolutely we can.
So who builds this "good UI that everyone trusts"? Without details of how this works, there is no way this system can grow. There is no way to have efficient key exchange except through an arduous process of everyone building this mesh of trust manually. PKI creates this "good UI everyone trusts" with a bad UI that everyone trusts, which has turned into the 4 companies mentioned in the article. It sounds good, but it's an iron triangle.


I’m not a cryptographer; nor am I a hard core C guru; nor have I invented some brilliant library that gives me street cred to talk about this stuff. I’m a nobody.

But somehow I am qualified to inform the world as to why PGP is superior to X.509.

I'm not debating that point, and informed debate would be welcome. And I have to say that I find it refreshing for a blogger to so inform me in the first paragraph as to just how quickly I should skim through or close their rant.

I really did appreciate that. Though somehow I find myself investing more time in the writing of this comment than in the consumption of the article. Fortunately, like floss, 'twill soon be forgotten.


Can't this be solved with some kind of distributed, authenticated, pre-existing protocol? Something like...

DNS?

With the DNSSEC extensions it should be possible to publish enough information to authenticate a given site against a certificate. If your DNS has been compromised, you've got bigger problems than your SSL cert.


DNSSEC is still broken by hierarchical trust: trust is implied by some "authority" rather than chosen by the user.

I think the solution needs to be something like Moxie's Convergence, which allows for users to decide who they trust, and revoke such trust at any time.

https://www.youtube.com/watch?v=8N4sb-SEpcg

It's a shame that Convergence is basically dead, although there's still some activity in Perspectives on which it was based. (http://perspectives-project.org/)


If your DNS is compromised, SSL and X.509 are currently what's protecting your users from a fraudulent site.


> If your DNS is compromised, SSL and X.509 are currently what's protecting your users from a fraudulent site.

Uhm, no? Because you can literally just buy a new valid cert for it? As long as we're talking Domain Validated.

I doubt your average user will notice that there isn't a green bar anymore or that the certificate lacks ownership information.


If DNSSEC allows your domain's certificate to be signed by a more authoritative zone (e.g. .com), then it would be a lot harder to introduce fraudulent DNS records.



> it should be possible to publish enough information to authenticate a given site against a certificate

You can; it's called DANE and is a future standard [0]. We're just waiting for DNSSEC to spread, because without DNSSEC everything is insecure.

[0] https://tools.ietf.org/html/rfc6698
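
Mechanically, a DANE check is just a DNS lookup: the domain publishes a TLSA record describing its certificate, and the client compares it against what the server presents in the handshake. A rough sketch using the third-party dnspython package (example.com is a placeholder):

  import dns.resolver  # third-party "dnspython" package

  # TLSA records live at _port._proto.<domain>.
  answers = dns.resolver.resolve("_443._tcp.example.com", "TLSA")
  for rdata in answers:
      # The usage/selector/matching-type fields say how to compare this
      # record against the served certificate; a real client must also
      # validate the DNSSEC chain on the answer.
      print(rdata.usage, rdata.selector, rdata.mtype, rdata.cert.hex())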


Rather than all the engineers and tech-minded people here naysay the idea into oblivion, I think it's worthwhile that we encourage designers to take an earnest stab at this problem.

The complaints here are basically "WoT is not usable", but that's what the author himself said. He therefore also indicated this is as much a design problem as anything else. That's a useful insight we shouldn't dismiss, at least not until some thoughtful, imaginative designers have actually taken a crack at it.


Heartbleed and X.509 are basically unrelated, aren't they?

The OpenSSL bug that enables Heartbleed has nothing at all to do with the (many) flaws in the public trust system.

The fundamental problem here (as I see it) is that you're trying to set up trust between parties that have no existing relationship. This requires third parties and externalised trust whether you use a CA or a P2P net.

Either way, it has nothing much to do with Heartbleed, which would have leaked the keys to the kingdom under either model.


Not going to happen, because the main OpenPGP implementation (gpg and gpg2) currently has a non-permissive license, such that it cannot be used "everywhere".

Until there is an implementation of OpenPGP under a permissive license, getting the world plus dog to switch to PGP is a non-starter.


From the perspective of a layperson with limited tech knowledge I really like the way you explain things!


The article makes a generalization around certificate request and issuance that is not correct in most cases --

"And fundamentally you have to trust that they who hold the Queens aren’t dishing out copies of your certificates."

The entity holding the Queens can give out a copy of your certificate, sure, but in most cases, they do not hold the crown jewels -- your private key -- which is the part of the Heartbleed bug that is really bad.

There have been cases of CAs either mis-issuing, or being compromised and issuing, new certs which duplicate a site's identity, but that is different from releasing the private key of a particular certificate.


He who controls a Queen can make functionally equivalent copies of every Princess and Princess-baby in the Queen’s lineage. They have the skeleton keys to your ‘secure’ kingdom and could at any time decide to become a fraud factory and dish out copies of your keys to whomever they fancy.

This seems like utter nonsense to me. Certification authorities should never get to look at my private key, and I don't care about them giving out my public key (it's public, after all). The best they can do, if they're evil, is create a new pair with information that impersonates me.


Surely if Zuck got half the world signed up for a network that does nothing but suck our eyeballs in return for money out of advertisers' pockets, we could get a few million, even say, 10-20 million people using PGP. Remember that Tor was once considered a niche tool as well.



The problem isn't any one cryptography scheme; the problem is trust. How do we build a trust framework that facilitates commerce on a wide scale while remaining truly secure? I don't think we can; so we give up a little bit of security for a whole lot of economic benefit.

Without centralized, trusted gateways, it's not even clear that your communications are secure. They need to be centralized to make them easy to monitor and audit. With a distributed trust model, the compromise of one node can be catastrophic; all you're really doing is handing control of the trust network over to botnets.

This is a really hard problem. I can't think of a better solution that would serve the same niche as our current one.


The next 10 years will be all about the decentralization of every infrastructure and institution. Only in a trustless system can we have any chance at trust. So no CAs, no authorities.


The missing piece of this for me is: How do we fix X.509 for mobile apps, considering 80%+ of mobile usage is in apps, not browsers?


This already exists: http://web.monkeysphere.info/


Anybody know which, if any, of the SSL cert vendors don't use OpenSSL?


OMG, you've exposed all those intruder oligopolists!


WoT and CA systems are both problematic since they can be altered on the fly and thus 'hijacked'.

I wonder if we wouldn't be better off with something similar to what SSH does: trust on first use, then verify that the signature doesn't change on every subsequent connection attempt. This way one would be immune to hijacks.

It wouldn't solve first-time verification, but how likely is a first-time spoof? And for really sensitive communications you could use pre-shared keys. I could, for instance, get a hardware token from my bank containing their public key.
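
A minimal sketch of that trust-on-first-use idea, using only Python's standard library (the cache file path is made up): record the certificate's fingerprint on first contact, and refuse to proceed if it ever changes.

  import hashlib, json, os, socket, ssl

  CACHE = os.path.expanduser("~/.tofu_pins.json")  # hypothetical pin store

  def check(host, port=443):
      ctx = ssl.create_default_context()
      ctx.check_hostname = False
      ctx.verify_mode = ssl.CERT_NONE  # TOFU replaces CA validation here
      with socket.create_connection((host, port)) as sock:
          with ctx.wrap_socket(sock, server_hostname=host) as tls:
              fp = hashlib.sha256(tls.getpeercert(binary_form=True)).hexdigest()
      pins = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
      if host not in pins:
          pins[host] = fp  # first contact: trust and record
          json.dump(pins, open(CACHE, "w"))
      elif pins[host] != fp:
          raise Exception("key changed since first contact -- possible hijack")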


PGP would be a problem for high-load servers too.

"Why not use public-key encryption for everything?

At face value, it seems that the existence of public-key encryption algorithms obsoletes all our previous secret-key encryption algorithms. We could just use public key encryption for everything, avoiding all the added complexity of having to do key agreement for our symmetric algorithms. By far the most important reason for this is performance. Compared to our speedy stream ciphers (native or otherwise), public-key encryption mechanisms are extremely slow. A single 2048-bit RSA encryption takes 0.29 megacycles, decryption takes a whopping 11.12 megacycles. To put this into comparison, symmetric key algorithms work in order of magnitude 10 or so cycles per byte in either direction. In order to encrypt or decrypt 2048 bytes, that means approximately 20 kilocycles."

https://www.crypto101.io/

EDIT: I suck at copy-pasta
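
Back-of-envelope from the figures quoted above (taking them at face value): one RSA private-key operation costs roughly as many cycles as symmetrically encrypting a megabyte.

  # Approximate figures from the crypto101.io quote above:
  rsa_decrypt = 11.12e6    # cycles for one 2048-bit RSA private-key op
  symmetric   = 2048 * 10  # ~10 cycles/byte over the same 2048 bytes
  print(rsa_decrypt / symmetric)  # ~543x -- hundreds of times slower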


I think the author is proposing to replace CAs with a PGP-like web of trust but keep the rest of SSL/TLS the same, so public-key crypto would only be used to set up a session key.
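
That's the standard hybrid construction (real TLS increasingly prefers Diffie-Hellman for the key agreement, but the shape is the same). A rough sketch with the third-party cryptography package: the slow public-key step happens once, just to move a fast symmetric session key.

  import os
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding, rsa
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  # One slow public-key operation to move a fresh session key...
  session_key = AESGCM.generate_key(bit_length=128)
  wrapped = server_key.public_key().encrypt(
      session_key,
      padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                   algorithm=hashes.SHA256(), label=None))

  # ...then fast symmetric crypto for all of the actual traffic.
  nonce = os.urandom(12)
  ciphertext = AESGCM(session_key).encrypt(nonce, b"bulk application data", None)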


That's fair. I re-read the article and see your point. I would still agree with other comments here that a WoT would be difficult to implement in a user-friendly way that wouldn't also be exploited.


How is it hard for a browser vendor to implicitly trust itself, and build its WoT from there? Get Chrome, trust Google. Get Firefox, trust Mozilla. It means you have to trust your browser, but you kind of already have to do that: you are putting all your personal info through its text fields and such.



