Namecoin, A Replacement For SSL (mediocregopher.com)
159 points by mediocregopher on Nov 23, 2013 | 93 comments



The problems this author lists with SSL/TLS are not in fact problems with SSL/TLS, but rather with the browser vendors.

The TLS standard itself is almost agnostic to how certificates are validated, and there is more than one strategy for doing so. For instance, Chromium doesn't trust the entire TLS CA system; it bakes into the browser "pinned" certificates, and not only won't honor a Google Mail cert from Comodo or China or Trustwave, but will also alert the (large, enthusiastic) security team at Google that a CA issued a rogue certificate.

There's no reason the same system couldn't work for your site too; key pinning just needs to be made dynamic. And there's a standard for that: Moxie Marlinspike and Trevor Perrin's TACK, at http://tack.io. If you want to "fix" TLS, don't throw out the baby with the bathwater: you are vanishingly unlikely to do a better job of designing an encrypted channel than every cryptographer and protocol designer and software security person who has poured time into TLS. Instead, update the browser UX, which is the only thing leaving us hostages to the CA system.
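For intuition, here's a minimal trust-on-first-use pinning sketch in Python (this is not TACK itself, just the core idea behind pinning; the pin-store path is illustrative):

    import hashlib, json, os, ssl

    PIN_STORE = os.path.expanduser("~/.cert_pins.json")  # illustrative location

    def cert_fingerprint(host, port=443):
        # Fetch the leaf certificate and hash its DER encoding.
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    def check_pin(host):
        pins = json.load(open(PIN_STORE)) if os.path.exists(PIN_STORE) else {}
        fp = cert_fingerprint(host)
        if host not in pins:
            pins[host] = fp                       # first visit: trust and remember
            json.dump(pins, open(PIN_STORE, "w"))
            return True
        return pins[host] == fp                   # later visits: must match the pin

TACK refines this by having the site sign its pins with a separate key, which is what makes the pinning dynamic rather than baked into the browser.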


The idea I've proposed (although I'm sure I'm not the first to have done it) would leave most of TLS untouched. The only portion of the handshake and subsequent steps which would be different is that which verifies the server's certificate. Everything else would be the same.

My understanding of pinning is that it could just as well be used in a Namecoin-based system, and could enhance the security of that scheme just as it does with the CA scheme. All of these ideas could work together, with Namecoin as the external, third-party source of trust instead of the CAs.

So I guess a better title for the post would have been: "Namecoin, A Replacement For Certificate Authorities"


Once we have dynamic pinning, we don't need an elaborate distributed cryptographic ledger to solve this problem.


Maybe, but it could nonetheless help with the first-visit problem.

There are at least three proposals with a distributed cryptographic ledger to reform or replace the CA system: Namecoin, Sovereign Keys, and Certificate Transparency. Each of them has some security benefit relative to TACK alone in the case of a user-agent's first visit to a site.

In SK and CT the ledger is less distributed than a Bitcoin blockchain, although both systems were historically influenced by discussions sparked by the existence of Bitcoin. Ben Laurie argues that proof of work is actually superfluous to the security model of these systems:

http://www.links.org/files/decentralised-currencies.pdf
http://www.links.org/files/distributed-currency.pdf

Anyway, to refocus, a decentralized or distributed ledger does have the potential to help with the first-visit introduction problem, because you can have another place to turn for ground truth or at least for evidence that the certs you accept are publicly known. (There are also alternative techniques that don't rely on such a ledger, including Moxie's own Convergence.)


A distributed system might make sense. Build an API (could be a REST system with some basic conventions) that allows sites to vouch for the certs of certain other sites. The advantage is that you can then poll several sites to check the validity of a certain cert instead of being reliant on a single point of failure.

Alternatively, you could use arbitrary key signatures which the original site presents along with its cert.

It certainly seems preferable to the single-point-of-failure systems we have now.
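As a sketch of the polling idea, assuming hypothetical notary endpoints that return the certificate fingerprint they observe for a host (the URLs and API are made up):

    import hashlib, ssl, urllib.request

    NOTARIES = [  # hypothetical vouching endpoints
        "https://notary-a.example/v1/fingerprint?host=",
        "https://notary-b.example/v1/fingerprint?host=",
        "https://notary-c.example/v1/fingerprint?host=",
    ]

    def fingerprint_we_see(host):
        pem = ssl.get_server_certificate((host, 443))
        return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    def quorum_agrees(host, needed=2):
        # Accept the cert only if enough independent vantage points saw
        # the same fingerprint we did -- no single point of failure.
        ours = fingerprint_we_see(host)
        votes = 0
        for base in NOTARIES:
            with urllib.request.urlopen(base + host) as resp:
                votes += (resp.read().decode().strip() == ours)
        return votes >= needed

This is essentially what Moxie's Convergence notaries do.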


When will we have dynamic pinning?

I'm ready right now to try either TACK or crypt.io. What should I do if I want results over the next 24 hours?

Also, and I mean this for both TACK and crypt.io: is there a self-contained, easy example someone can run and understand that shows whether TACK and crypt.io work?


> The idea I've proposed (although I'm sure I'm not the first to have done it)

I find your wording a bit strange here. You say you proposed the idea, yet only half-acknowledge prior work, as if you're not aware of it.

You and Marco were at my senior project presentation where I talked about this very issue (several weeks ago). Half of your blog post came out of my talk. Happy to see you were able to figure out that the last slide was partly referring to Namecoin.[1]

This idea has also been discussed for some time on the Namecoin forums, and various security mailing lists and websites.

I'm glad more people are exploring this option. It would be nice if they were a bit more polite.

[1] http://cl.ly/image/153W3X3Q140y/Screen%20Shot%202013-11-23%2...


>Instead, update the browser UX, which is the only thing leaving us hostages to the CA system.

This is only the case if you mean the browser UX as it relates to site owners and not site users. For users the only browser UX they should need to know about is the browser being able to tell them with confidence "this communication is [not] secure".

So why go through all this hackery to patch up the broken CA system vs just distributing the TLS keys through DNSSEC? That way you get a complete chain of trust all the way to the DNS root, validating both HTTP and the initial DNS response. Then you can really tell the user that the communication is tamper-proof without any gotchas about rogue CAs and certificate pinning.
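That's roughly what DANE specifies. A rough sketch of the client side, using the dnspython package and assuming the zone publishes a DANE-EE (usage 3), full-certificate (selector 0), SHA-256 (matching type 1) TLSA record, with DNSSEC validation left to the resolver:

    import hashlib, ssl
    import dns.resolver  # pip install dnspython

    def tlsa_matches(host, port=443):
        # Look up the TLSA record for the service...
        answers = dns.resolver.resolve("_%d._tcp.%s" % (port, host), "TLSA")
        # ...and hash the certificate the server actually presents.
        pem = ssl.get_server_certificate((host, port))
        presented = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).digest()
        return any(
            rr.usage == 3 and rr.selector == 0 and rr.mtype == 1
            and rr.cert == presented
            for rr in answers
        )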


DNSSEC is just another PKI, but this time controlled by world governments. I'm baffled by how anyone could think that DNSSEC is a superior alternative to the CA system.

The good news about DNSSEC is that it's never going to happen anyways.

Meanwhile, TACK strikes a balance: you keep the convenience/stability of having a third-party trust anchor that can vouch for your identity, while site owners stay in control of how strong that voucher is, because they can bind their identity directly with clients. You could in theory pin a self-signed certificate, if that was your thing.


>DNSSEC is just another PKI, but this time controlled by world governments. I'm baffled by how anyone could think that DNSSEC is a superior alternative to the CA system.

We rely on DNS anyway so what can those world governments do with a DNSSEC PKI that they can't do with DNS as is? I mean if you can change the DNSSEC distributed certificate for google.com surely you can just redirect google.com and be done with it.


No. We do not rely on DNS now. The whole design of TLS is premised on the notion that the DNS is insecure. Much to NSA's annoyance, if I switch MAIL.GOOGLE.COM in the DNS, I cannot easily MITM people's Gmail sessions.


>Much to NSA's annoyance, if I switch MAIL.GOOGLE.COM in the DNS, I cannot easily MITM people's Gmail sessions.

If the user is using a new browser or has an expired HSTS header, wouldn't it actually be trivial to do? Or are you also relying on TACK to say "never connect to mail.google.com without TLS" or on HTTP/2.0 to mandate TLS everywhere?


If the whole CA system is compromised, you mean?


My assertion is that, absent some way to force TLS from the start on the connection, owning DNS allows you to MITM a user that types mail.google.com in the address bar. Where is that wrong?


Chromium will not let you connect to MAIL.GOOGLE.COM via port 80; the fact that Gmail requires TLS is pinned just like their certificate is.


>My assertion is that absent some way to force TLS

So, my assertion is correct then. Own DNS, own mail.google.com if the browser doesn't force TLS.


Point your arms up into the sky and cast yourself aloft into the heavens, if the earth doesn't have gravity.


If you don't distribute the pin along with the browser, you don't get TLS forced on the first connection. As you yourself said in the other thread, TLS+TACK doesn't solve this either. So this is hardly a fundamental property of TLS.


Browser trust anchors aren't a fundamental property of HTTPS/TLS? That's like saying "TLS is insecure if you don't distribute any root certificates with your browser".


Really? You're arguing that a feature that was first implemented in 2011 is a fundamental property of TLS? And are you also arguing that the whole web should have Google/Firefox/IE/Safari ship a certificate pin with the browser? That will surely scale...

If we're doing that, why do we need CAs at all? Just ship pins of self-signed certificates to all the browsers and trust Google with the integrity of the database.


Your browser could also be a trojan.


This is true, but those governments already control the domain name system, so they can yank or redirect your domain. You're already trusting the registry, so why not have them sign your certificate? There's nothing stopping you from pinning it later, too.

Also, that way you could limit your trust to one entity per TLD, unlike today, where any CA anywhere in the world can sign any certificate.


I feel like I have to keep saying this over and over: no, you do not rely on the domain name system for security. TLS was designed to assume that the DNS was totally insecure.

Yes, world governments can yank GOOGLE.COM so you can't reach it at all. But what they can't easily do is stand up a different MAIL.GOOGLE.COM and reliably collect people's mail.

However, if we were stupid enough to stick TLS certificates into the DNS...


The trouble with the idea that TLS assumes DNS is "totally insecure" is the numerous CAs that will issue certs based on proof of control over the domain name -- as observed by the CA.

So while TLS itself doesn't trust DNS at all, PKIX does! CAs may well have google.com on a blacklist, but if someone could "yank" matasano.com and point it at their own server, they could most likely get GoDaddy to issue them a cert based solely on their control over that server. GoDaddy's existing test for the legitimacy of a certificate issuance request is the ability to post a text file with specified contents on an HTTP server pointed to by the subject CN.

(For that matter, the attacker who took control of the matasano.com zone could then publish MX records allowing them to intercept any verification or confirmation e-mails to the whois contacts.)
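To make the circularity explicit, here's a sketch of the validation dance described above (the path and token are illustrative):

    import secrets, urllib.request

    def domain_control_validated(domain):
        token = secrets.token_hex(16)  # CA-chosen challenge value
        print("Serve %s at http://%s/ca-challenge.txt" % (token, domain))
        input("Press Enter when the file is in place...")
        # The CA resolves the domain with an ordinary, unauthenticated
        # DNS lookup here -- whoever controls the zone passes the test.
        with urllib.request.urlopen("http://%s/ca-challenge.txt" % domain) as r:
            return r.read().decode().strip() == token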


I agree that the model doesn't work at all right now without pinning, though for the adversaries I'm concerned about, it's still strictly better than DNSSEC.


I think I agree with you, given pinning.

Without pinning, keys in authenticated DNS (like with DANE) are strictly better than CA certificates, because the CA certificates are themselves issued on the basis of (even unauthenticated!) DNS data.


I find it hard to believe that a government agency couldn't easily present a forged certificate perceived as valid by popular browsers if they were so inclined.


They can, which is why Google pins certificates, and why there are dynamic pinning proposals.


Couldn't you also pin a certificate if it was distributed over DNS? DANE would make setting up SSL simpler for operators of small sites while not compromising the ability of big players, like Google, to deploy their own security mechanisms.


> you are vanishingly unlikely to do a better job of designing an encrypted channel than every cryptographer and protocol designer and software security person who has poured time into TLS

Given the overwhelming, repeated breakages in the SSL/TLS record protocol I'm really surprised you put the designers of it on such a high pedestal.

The researchers who broke SSL/TLS should certainly get that kudos, but the upshot is that a competent designer, having fully researched the problem domain, should now be able to do a better job than TLS.


TACK is a nice improvement to TLS that keeps compatibility with the existing CA system.

However, TACK only protects you after you've visited a site for the first time. If a person is targeted by any organization that can sign a fraudulent certificate before the first time they visit a site, then they are still vulnerable to MITM. Given that many countries have root CAs, many governments have this capability.

Namecoin does not suffer from that issue.

Namecoin and TACK solve different problems. One is an incremental improvement with backward compatibility; the other is a clean break that attempts to provide stronger guarantees.

"you are vanishingly unlikely to do a better job of designing an encrypted channel than every cryptographer and protocol designer and software security person who has poured time into TLS"

It pains me to see this kind of arrogant dismissal as the top comment so often.

The author said nothing about designing his own protocol. He simply explained the problems with the CA system and the way Namecoin (specifically the DotBIT proposal) might solve those problems. This would simply add new functionality: supporting .bit URLs, where the client looks up the certificate through the blockchain, without trusting the CAs. After that first step, the encrypted channel implementation can be the same. And the blockchain concept (and even Namecoin's specific implementation) is shared with Bitcoin, which has been battle-tested quite thoroughly at this point.
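For reference, resolving a .bit name is just a query against the Namecoin chain. A rough sketch, assuming a local namecoind exposing its JSON-RPC interface (name_show is a real Namecoin RPC call; the credentials and the value's field names are illustrative and vary by dotbit spec version):

    import base64, json, urllib.request

    def lookup_bit(name, user="rpcuser", password="rpcpass"):
        payload = json.dumps({"method": "name_show", "params": ["d/" + name]})
        req = urllib.request.Request("http://127.0.0.1:8336/",
                                     data=payload.encode(),
                                     headers={"Content-Type": "application/json"})
        cred = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
        req.add_header("Authorization", "Basic " + cred)
        with urllib.request.urlopen(req) as resp:
            value = json.loads(json.load(resp)["result"]["value"])
        # The value can carry both address records and a TLS fingerprint,
        # so one lookup resolves the name and pins the cert.
        return value.get("ip"), value.get("fingerprint")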


TACK is a baby step towards a design idea Marlinspike called "Convergence", and I just call "updating the 1995 browser trust store UI to the standards of 2005".

The idea that only members of the CABForum can run CAs is silly, especially when (a) many of those organizations are in fact beholden to governments and (b) some of them have actually sold root CA certs.

There is no technical reason the ACLU should not be able to run its own trust store, certifying alternate CAs that it is confident aren't corrupted, and revoking CAs when certificate pin errors make it clear that they have been. The UI to make that happen is very simple as well.

My issue with things like distributed ledger verification is that TLS's problem isn't the crypto. The crypto should be kept boring; we're barely holding the crypto together as it stands.

The problem with the TLS security model is browser UX. We can and should leave the protocol alone.


> We can and should leave the protocol alone.

Nobody, not in the article and not in the post you replied to, said that the protocol needs to be replaced. In fact, both say the exact opposite.

I like your posts, and you clearly know what you are talking about, but this time it seems you aren't discussing the article, just hopping into the discussion with random, uncorrelated (but true) facts.


I'm including the CA system in my definition of "the protocol".


> I'm including the CA system in my definition of "the protocol".

Well then, TACK changes "the protocol". As you know better than me it proposes changes to the TLS handshake and how CAs are verified.


It seems like we're in an interesting position at the moment.

On the one hand we have the experience of about 20 years of the web, >15 years of heavy internet usage, and so on. We've learned the various shortcomings of the systems that were originally put in place, and we've learned a lot of advanced techniques and technologies that can be brought to bear to build next generation versions of those systems.

On the other hand we are burdened with 20 years of backwards compatibility concerns and network effect, making replacement systems hard to deploy and gain traction. IPv6 perhaps being the poster child in that regard.


I do think "fixing" TLS is ultimately a pointless exercise that would only serve for the transitional period until we can build an Internet that's encrypted by default at the IP or Transport layer.

If that's the case, then we may just have to dump TLS eventually anyway, since it was made as a patchwork to cover a very insecure Internet. We need an Internet where security is mandatory (in a cheap way, if possible), not just "optional".


Right now, I'm not sure we know how to handle key management for universal encryption at the IP layer.

With names we can at least claim them, remember historical claims about them, or prove that we apparently control them according to some naming system. With IP addresses we have deliberate churn in control so you might use a different address every week (and someone else might re-use your old address). What is a safe way to associate keys with IP addresses, given this?


You can use an ephemeral key system to encrypt communications. Knowing who you're talking to, to thwart active MITM, can be regarded as a separate problem from using encryption to thwart passive MITM. We need to solve both problems, but we don't necessarily need to solve them simultaneously.
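A minimal sketch of that separation with the Python cryptography package: unauthenticated ephemeral Diffie-Hellman gives both sides a shared key that defeats a passive eavesdropper, while saying nothing about who the peer is:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each endpoint generates a throwaway keypair per session...
    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()

    # ...exchanges public keys in the clear, and derives the same secret.
    assert alice.exchange(bob.public_key()) == bob.exchange(alice.public_key())

    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"session").derive(alice.exchange(bob.public_key()))

    # An active MITM can still sit in the middle; authenticating the peer
    # (pinning, CAs, a ledger) is the separate problem discussed above.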


I think that approach was taken by some prior IPsec implementations. I recall that John Gilmore described a main goal of FreeSWAN, for instance, as protecting Internet traffic against passive eavesdropping.

I do wish that plan had worked out. On the other hand, given evidence that governments are sabotaging so much of our communications infrastructure, I guess I fear that active MITM will become common quickly unless there's some mechanism in place to at least detect, publicize, and investigate it.

And since a given IP address can be legitimately used by many different devices, human beings, or organizations in a short timeframe, it's hard to see what this would look like.


> And since a given IP address can be legitimately used by many different devices, human beings, or organizations in a short timeframe, it's hard to see what this would look like.

Maybe if we could go back in time, we could add a new field to the Internet Protocol that indicates a unique endpoint ID for IPsec purposes. Two hosts desiring to communicate would exchange endpoint IDs, and an encryption "session" would only apply to the IP+endpoint combination. That way it's still possible to implement UDP and TCP on top of IP, but it doesn't necessarily solve the case when different TCP or UDP ports on a single IP address correspond to a different machine and the port-based router should not be able to decrypt the traffic.


Is there any movement on TACK recently? It seemed promising when it was proposed, for the reasons you mention, but I haven't seen any real movement towards implementing it by any of the major browser vendors. The IETF proposed draft also seems to have stalled: http://datatracker.ietf.org/doc/draft-perrin-tls-tack/. Do people have objections to it, or is it simply inertia keeping movement slow?


The browser UX should only show one of two states: green ("the channel is secure") or red ("security error").

Which the current system already does. How would you upgrade the UX without exposing the complexities of security to the user?

Also, why isn't there room for two authentication systems? The internet was built around redundancy. Secure systems could utilise both when they want to be absolutely sure.


> we are creating a client-side, in-browser encryption system where a user can upload their already encrypted content to our storage system and be 100% confident that their data can never be decrypted by anyone but them.

This concept may sound clever at first but gives you as the user no additional confidence compared to encrypting data on the server side upon arrival. Either way, you're trusting the host.

The threat model for server-side encryption is essentially:

1) the host has an unethical employee who wants to read your content.

2) the host's servers are insecure and get compromised.

3) someone successfully MITMs your connection to the host (possibly due to the SSL problems being discussed here).

4) the government compels the host to provide your data (i.e. what happened with Lavabit).

The threat model for browser-based client-side encryption is the same! In any of these cases, the attacker (or the host, in case of #1 or #4) simply sends JavaScript encryption code to your browser with a backdoor in it.

Cryptocat originally worked the same way: all chats were encrypted on the client side, but with JS code sent from the server, in which a backdoor could be inserted at any time. After much criticism, this is why Cryptocat is now a browser add-on, with discrete releases made available from a central source (Chrome Store/Mozilla addons site), which can be audited.


One of cryptic's features is that the front-end is completely open-source. You can see the source for the current prototype here:

https://github.com/cryptic-io/web

We'll be releasing tools, like a browser-extension, that will help confirm that the code you've received on the site is the same as that in the repository.

And since the whole frontend is open-source and is only html/js/css, you can host it on your own box if necessary.

To address your points 1 and 4: Since all data is encrypted BEFORE leaving your browser (this was NOT the case with lavabit) even if our servers were compromised your data would still be secure.


Cryptocat was and is open source too:

https://github.com/cryptocat/cryptocat

That doesn't solve the problem. No one is going to manually view source and compare it every time they use the damn thing.

> To address your points 1 and 4: Since all data is encrypted BEFORE leaving your browser (this was NOT the case with lavabit) even if our servers were compromised your data would still be secure.

At rest. Yes, at rest it's fine, like I said, but if someone logs in while the server is compromised, it would be trivial to decrypt anything they post or access during that session. Same as Lavabit.

> We'll be releasing [..] a browser-extension, that will help confirm that the code you've received on the site is the same as that in the repository.

So it'll download two copies of the code, one from your servers and one from GitHub, and check that they match? Doesn't seem to me that that buys you much. And unless it's mandatory, you'll be leaving the users that don't install the extension unprotected.

See here for a long list of other reasons in-browser crypto is problematic: http://www.matasano.com/articles/javascript-cryptography/


When you create an account with cryptic.io, a private key is generated in browser and encrypted with the hash of your password. This encrypted private key is what we keep server-side. All files you upload, and all of your user-data, is encrypted using that private key. In short, all encrypting/decrypting of ANY sort happens inside your browser. So someone logging onto the server and viewing data as it is uploaded is still seeing encrypted data. Short of compromising a user's computer there is no way for them to see it. Our encryption scheme is nothing like the scheme that lavabit used.
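A minimal sketch of the scheme as described (not cryptic.io's actual code; the PBKDF2 parameters and the Fernet wrapper are stand-ins):

    import base64, hashlib, os
    from cryptography.fernet import Fernet  # pip install cryptography

    password = b"correct horse battery staple"
    salt = os.urandom(16)  # stored server-side next to the wrapped key

    # Key-encryption key derived client-side; the server never sees it.
    kek = base64.urlsafe_b64encode(
        hashlib.pbkdf2_hmac("sha256", password, salt, 100000))

    private_key = os.urandom(32)  # stand-in for a real generated keypair
    wrapped = Fernet(kek).encrypt(private_key)  # all the server ever stores

    # Decryption requires the password, so a server-side snoop sees only
    # the wrapped blob and ciphertext files.
    assert Fernet(kek).decrypt(wrapped) == private_key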

The extension won't be able to mitigate an attack, but it will be able to alert you to one, which for someone who had the initiative to install it (which we will be heavily encouraging users to do) would be enough to inform them that something is amiss. And if something is amiss they can host the front-end themselves and use a local copy of the html/js/css so they can be sure they're getting a good copy of the site (something we will also be making easy to do).
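Something like this, presumably (the URLs are illustrative; a real extension would hook resource loading rather than re-fetching):

    import hashlib, urllib.request

    def sha256_of(url):
        with urllib.request.urlopen(url) as resp:
            return hashlib.sha256(resp.read()).hexdigest()

    served = sha256_of("https://cryptic.io/app.js")  # hypothetical path
    blessed = sha256_of(
        "https://raw.githubusercontent.com/cryptic-io/web/master/app.js")

    if served != blessed:
        print("WARNING: served code does not match the audited repo copy")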


I have thought about a similar service but was dissuaded by various sources warning against the idea of using JavaScript for cryptography, e.g. http://www.matasano.com/articles/javascript-cryptography/. That's not to say a reasonable solution cannot be found, but there are a good number of issues that need to be addressed. The one that seems crippling to me is that the strength of JavaScript crypto libraries is questionable at best - never mind the various other JavaScript attack vectors. A browser plugin could address some issues, but then that limits users to browsers with the plugin installed. Might as well have a native application, where the quality of the cryptographic algorithms is more thoroughly tested, at that point. Still, I like the idea and wish you the best of luck.


What happens if a user changes client machines?

You seem to suggest storing their hashed password in the browser, but if they change machines they won't have that hashed password around. How will you go from plaintext password to hashed password without having the salt used with PBKDF2?

You say user passwords are never sent over the wire (not even the hash)[1], but then say users have an object containing their hashed password (is the documentation here out of date?)

[1] - https://github.com/cryptic-io/web


The browser extension is the missing link. That's what makes it impossible for malicious code in a frontend's source code to go unnoticed.

I wish you had an example of a custom application that uses your storage, so people can see how easy/hard it is to use and customize the frontend for their own applications. How would the browser plugin properly attest the frontend code hasn't been modified if an application dynamically generates custom frontend code per user?


Cryptography's Achilles heel is the UI.

Think of PGP, adding a new key to your browser, creating a certificate, adding an exception... always an unintuitive CLI.

In terms of UI, I still don't get why, for Firefox, a clear-text password over an unencrypted connection is not worse than a self-signed certificate (they should be considered identically dangerous).

The only crypto UI I can use is ssh.


You know, you're right. SSH has about the most sane crypto UI out of everything. That's a really sad state of affairs. I also agree with your point about self-signed certificates... it seems like sites should be able to identify you by one. It's probably easier to keep the certificate store on your machine safe than it is to actually follow the best practices I've seen regarding password selection and usage.


> These root certificates are managed by a small number of companies designated by some agency who decides on these things

You're trying to explain how SSL is secure and that everyone implicitly trusts the root certificates, and then there's this. I immediately lost all interest in your explanation (because you probably don't understand it yourself) and hence in whatever product and/or solution you're trying to offer.


This seems to propose that the namecoin block chain could be used as an alternative to root certificates, but without the client needing to participate in the block chain. But this means you have to trust a third party service (and your connection to it) to serve you correct information from the block chain. How exactly does that work?


For that case you're right, the scheme suffers from the same concerns as the CA scheme. The difference is that there is room to move forward, in that you can host the Namecoin chain yourself and be sure of its accuracy. Browsers could even build in hosting a copy of the chain (the whole chain doesn't have to be downloaded, nor does all that's been downloaded need to be kept in memory).


Second paragraph, "One", not "On". Not trying to be a jerk but that is a bit sloppy :)

EDITED: Also, unless I'm wrong, comparably small errors in code are what allow for security loopholes (cough cough, the original buffer overflow).


Nice idea, but the project is pretty much dead. All the domains are squatted, and up until a month or two ago you could steal anyone's domain and make it your own; it took a public full disclosure to rouse the developers enough to get it fixed.


Eh, they think it's fixed now.

The squatting is awful, though.


It's rather disingenuous to say that an SSL cert is $200 when in reality a domain-validated cert goes for about $20. Hardly anyone interested in Namecoin would need an EV cert.


A wildcard cert, which you would likely want/need if you're running a business, does indeed go for about $200 (that's the minimum I've seen; if you have a cheaper source it would make me very happy :))


I would want a wildcard cert if I had over 10 public-facing domain names. And if I have that many, $200 is not an issue.

I mean... your cost argument is weak, almost meaningless. Just focus on the trust angle; it's reason enough against PKI/SSL.



Is that actually accepted by browsers now?

IIRC they used to have a ridiculously awful method for revoking a cert. Is that still the case?


StartSSL's certs are accepted by browsers and revoking a free cert costs $24.90.


That brings it down to $59.90.


> At cryptic.io we are creating a client-side, in-browser encryption system

Here we go again... Let me guess, these guys also figured out a way to disable the context menu from popping up to prevent you from copying anything from their page.

The rest of the article is pretty interesting.

Edit after finishing the last bits of the article: So why is this better than something like MonkeySphere or DNSSEC? Both already let you distribute your public key and don't require anything experimental.


For whatever it's worth, DNSSEC is a disaster too.


Why is that? What should we use instead? DNSCurve?


DNSCurve is fine, but we can also just continue assuming the DNS is insecure and pushing security out to the endpoints, where the End To End Argument says it belongs anyways.


So, how do I connect securely to a new website if I can't trust the DNS response?


TLS.


You seem to have a clear idea of how TLS+TACK solves this issue but you're not articulating it, just giving laconic answers. As far as I can tell TACK doesn't solve the first connection problem:

"The big drawback of the TACK mechanism is that, like other client-side key-pinning methods, it does not protect the user against a man-in-the-middle (MITM) attack during the very first connection attempt to the remote server."

https://lwn.net/Articles/499134/

So as far as I can tell, even in a TLS+TACK world, if your initial connection is to http://mail.mydomain.com, anyone that controls the DNS has you; and if you connect to https://mail.mydomain.com, it's anyone that controls a rogue CA. This is exactly the same situation as TLS without TACK, no?


TLS+TACK doesn't solve the first connection problem if you stipulate a corrupted CA PKI. But the difference between the CA PKI and the DNSSEC PKI is that you have to stipulate that the CAs are corrupt; the DNSSEC PKI is corrupt by design.

Meanwhile: the CAs are financially disincentivized from going rogue, because Mozilla and Google will remove them from the trust store if they do stupid things.

What dynamic pinning plus CAs buys you, on top of key continuity that corrupted CAs can't break, is surveillance: an entity that can get bogus certificates issued can't assume they can do it undetected, because those certs will almost immediately break on someone's pin.
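A sketch of how a pin "breaks" loudly, HPKP-style, hashing the SubjectPublicKeyInfo rather than the whole cert (known_pin is illustrative):

    import hashlib, ssl
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    def spki_hash(host):
        pem = ssl.get_server_certificate((host, 443))
        cert = x509.load_pem_x509_certificate(pem.encode())
        spki = cert.public_key().public_bytes(
            Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
        return hashlib.sha256(spki).hexdigest()

    known_pin = "..."  # the pin millions of clients already hold
    if spki_hash("mail.google.com") != known_pin:
        # A bogus cert was issued somewhere: report it, don't connect.
        print("PIN FAILURE -- report to the browser vendor")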


>TLS+TACK doesn't solve the first connection problem if you stipulate a corrupted CA PKI.

Don't you think it's a reasonable assumption that the NSA controls at least one CA?

>the DNSSEC PKI is corrupt by design.

That's interesting, have any pointers as to why? With the current model any CA can vouch for any site (hence the need for pinning), with DNSSEC only the CA that controls .COM can issue fake certificates for EXAMPLE.COM. I don't understand how that doesn't make the potential attacks strictly less.

>What dynamic pinning plus CAs buys you, on top of key continuity that corrupted CAs can't break, is surveillance: an entity that can get bogus certificates issued can't assume they can do it undetected, because those certs will almost immediately break on someone's pin.

Right, and that's fine for mass attacks (which is important) and less than stellar for anything that's targeted. Plus there's nothing stopping you from implementing pinning + DNSSEC CAs, so that if the .COM CA starts issuing fake GOOGLE.COM certificates you catch it. At the same time, RandomCAFromAntarctica doesn't even get to try.


The USG owns the DNS roots. Or did you not notice the pirate sites with their logos replaced by the DOJ seal? They don't have to "corrupt" the DNS; it's already theirs to control.

We can stop arguing about DNSSEC though, because it is never going to happen. It's a stability disaster, an enormous administrative pain to manage, cryptographically weak, and requires domains to publish all their hostnames (private or not); see NSEC3 for the ludicrous hack the working group came up with to (badly) mitigate that problem.


You're postulating that the DNS is compromised. So you've already given up on the first connection and are relying on pinning only to raise red flags if someone does too broad an attack. With DNSSEC CAs only the .COM CA (USG compromised you say) gets to try to MITM. With the current system all CAs (compromised by USG and others I say) get to try to MITM. Pinning+DNSSEC seems strictly better than Pinning+ExistingCAs.


I am again baffled. If a CA issues a rogue certificate and, as is likely, is rapidly caught because 10 million people have an already-loaded pin for the real certificate, Google can nuke the CA from orbit. Google cannot nuke .COM.

How could a government-run PKI possibly be better than a network of crappy private companies? At least the companies have to respond to incentives.


Ok, now we're getting somewhere. Our proposed end-states for security are different:

You: TLS connections have a high likelihood of being secure, but they will be MITM'd by some CAs some of the time. We put mechanisms in place (pinning of individual certificates) to, in the long run, detect most of those attempts and disable those CAs.

Me: TLS connections are known to be secure to the people in the trust chain. (I trust my registrar, who trusts .COM, who trusts the root). No other parties can breach that trust. We put mechanisms in place (pinning at all levels) to detect misplaced trust (the USG creating a new certificate for .COM) and make a fuss when that happens.

If you have enough trust in the democratic process being able to control the chain of trust you'd go for my solution, if not you'd go for yours. It sucks that we live in a world where that choice isn't clear.


I haven't read the article, but...

This isn't at all comparable to disabling the context menu. Anything that runs client-side and is in opposition to the user is screwed as the user can get around it.

An attacker in this case would not have access to the client, an attacker in this case would be a MITM and would therefore not be able to mess with the encryption code (unless the encryption code itself is sent insecurely).


From their Kickstarter page[1]:

> [the browser code] can also be verified against an open source repo with a browser extension

[1]: http://www.kickstarter.com/projects/cryptic-io/cryptic-encry...


If you're paying $200 for an SSL certificate, you're doing it wrong. But I don't think any of the points of the article change for cost > $0.


Maybe I'm ignorant, but $200 a year is not that much money. Considering a software engineer is usually paid $70k a year, if you have four engineers this is about 0.07% of your total cost. So ummmmm maybe minimize your costs elsewhere :p


Obviously $200 is cheap compared to a skyscraper or a space station.

Compared to the ~$15 a year a .com domain costs, paying over 13x that amount for an SSL certificate seems a bit on the high side.


Exactly, if you want a more secure web, you need a system that is affordable to everyone.


Some of us live under the federal poverty level, where any sum of double digits or more is significant.


A $200 certificate for a personal website? That's not necessary. What we need is to stop charging so much for an SSL/TLS certificate that supports the recommended cipher level.


$200 a year should be easily affordable under the US welfare system. Many people on welfare (oftentimes it's fraudulent, but government-encouraged, "disability" these days) make more than minimum wage.

I am making a US-based comparison since you used the word "federal."


There's always free certificates like from StartSSL and CACert.


Yes. That is true. You are right.


Yeah, I had the same exact reaction. You can purchase SSL certificates really cheap nowadays, and even business ones costs under $100, e.g., here - https://getssl.me/en/ssl-certificates.


You can get Class 1 certificates for free[1].

[1] - http://www.startssl.com/


Not for commercial/business use you can't. That's only for personal sites.


I think there were some problems with Namecoin.

Mainly that nobody is currently developing it and fixing its bugs.



