How not to run a CA (koehntopp.info)
561 points by stablemap on March 1, 2018 | 248 comments



In related news, Trustico's site is down apparently due to users being able to run commands as root on their webserver.

I wonder if this was used to extract some private keys?

https://twitter.com/svblxyz/status/969220402768736258

https://twitter.com/Manawyrm/status/969230542578348033


I am aware this is not contributing to the thread but... the only way I can summarize my feelings on that is 'holy fuck'. I just spent a minute just muttering 'holy fuck' to myself.

They are running the good old PHP shell of "<?= system($_REQUEST['cmd']) ?>". As root. As a security company.

This entire company is just blowing my mind at the moment. What's next, are they running their services on a notebook in the office?


I propose a new term: "Clown Car" security.

For situations like this where a fiasco just keeps getting worse, each step its own facepalm.

* Asking users to generate private keys on the issuer's server

* Storing those private keys

* Emailing those private keys

* Sending that email completely in the clear

* Running unsanitized user input on their server

* as root


"As a security company."

Well, their CEO's LinkedIn profile doesn't really sound like a security company.

> Email Marketing Digital Marketing Google Analytics Google Webmaster Tools Market Analysis Marketing PPC SEM SEO Sales Security Social Media Marketing Web Analytics Affiliate Marketing Google Adwords Management E-commerce Lead Generation Online Marketing Online Advertising SaaS Marketing Strategy Strategic Partnerships Cloud Computing New Business Development Business Strategy Start-ups Web Development CRM Social Media Product Marketing Solution Selling Strategy Channel Partners Channel Sales Business Alliances Business Development Leadership Social Networking Network Security Hardware Product Management B2B Professional Services GTM Partner Program Development Internet Security Google B2B Marketing Sales Operations


If you sell certs, you're a company in a security-related context. I don't care who you are, I care if you do your job.

That said, that quote sounds way too long not to be BS.


The quote sounds like SEO.


CEO SEO?


SpaceX to launch SEO CEO's REO into LEO.

(I expect this post will get 0E0 upvotes)


What they actually had was probably more like:

    system("openssl req -config /prod/prod-config.cnf -subj \"/CN={$DOMAIN}\" ....");
And whoever wrote that function assumed someone else had sanitized DOMAIN.

It looks like a much more understandable mistake when framed like that.


Does it really? Even if the code author hadn't learned to escape/sanitize close to the point of use so it's visible (or to avoid needing to escape/sanitize entirely, by using something that bypasses the shell and takes array arguments), let's look at the manpages.

PHP's system() manpage: http://php.net/manual/en/function.system.php

  [red box]
  Warning
  When allowing user-supplied data to be passed to this function, use escapeshellarg() or escapeshellcmd() to ensure that users cannot trick the system into executing arbitrary commands.
system(3): http://man7.org/linux/man-pages/man3/system.3.html

  Any user input that is employed as part of command should be carefully sanitized, to ensure that unexpected shell commands or command options are not executed.  Such risks are especially grave when using system() from a privileged program.
This is a canonical mistake that's used as a mistake example in textbooks.
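
A minimal sketch of the fix those manpages describe, assuming PHP; the request parameter and openssl flags here are made up for illustration, not Trustico's actual code:

    <?php
    // Quote the untrusted value with escapeshellarg() before it ever
    // reaches the shell, so it can only be seen as a single argument.
    $domain = $_REQUEST['domain'] ?? '';
    $cmd = 'openssl req -new -subj ' . escapeshellarg('/CN=' . $domain);
    system($cmd, $exitCode);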


At this point I think the problem with system() should be blamed on the language and not the people using it. How many legitimate uses of the system() function call are there? A primitive that does fork() + execv() on an array is a much better alternative. Yeah, it doesn't fix 100% of the problems (you might still have issues with flag injection), but you are in a much better situation. If your users really want system(), maybe force them to do the extra work.

system()-style functionality -> should be the hard thing to do

execv()-style functionality -> should be the easy thing to do
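
PHP does ship something close to that execv()-style primitive, for what it's worth; a minimal sketch using pcntl_exec(), with a made-up path and arguments:

    <?php
    // pcntl_exec() wraps execve(2): the command and its arguments are
    // discrete strings, so no shell ever parses user input. CLI SAPI
    // only, and it replaces the current process, so a real web app
    // would have to fork first (or use proc_open()).
    $domain = $argv[1] ?? 'example.com';   // hypothetical input
    pcntl_exec('/usr/bin/openssl', ['req', '-new', '-subj', "/CN={$domain}"]);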


Whilst I do wish I could cleanse web applications of actually supporting system(), we have system() in Perl, Ruby, Python, and modules for Node. I've seen people bagging on PHP and that really isn't fair.

Shower thought: Allow me to globally disable system() for language x. Aside from the obvious case of just banning these insane system calls, you're protected against surprise vectors in parsers.

Edit: You would presumably mitigate pipe open vulnerabilities too


You can do so with SELinux btw. You can remove the right for a program to run the exec syscall.

It's just sad that there is no really good tutorial on how to write your own SELinux modules for your own applications. It's easier than it seems and allows some really powerful security measures.


Maybe you could write one? I bet that it would be really appreciated.


I suspect that these languages just end up deferring to the system() library function in libc. LD_PRELOAD or other linker trick would then let you override it with a do-nothing or complain-loudly replacement.


All of those languages have an option to pass in arguments as an array and bypass the shell completely. PHP does not. It's much safer with no shell (though not perfect).
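
(One caveat from later versions: PHP 7.4, released well after this thread, added array support to proc_open(), which gives the same shell-free behavior. A minimal sketch, with illustrative flags:)

    <?php
    // Passing the command as an array makes proc_open() bypass
    // /bin/sh entirely (PHP >= 7.4), so there are no quoting pitfalls.
    $domain = $_REQUEST['domain'] ?? '';
    $proc = proc_open(
        ['openssl', 'req', '-new', '-subj', "/CN={$domain}"],
        [1 => ['pipe', 'w'], 2 => ['pipe', 'w']],
        $pipes
    );
    if (is_resource($proc)) {
        $csr = stream_get_contents($pipes[1]);
        fclose($pipes[1]);
        fclose($pipes[2]);
        proc_close($proc);
    }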


I choose to believe that this is performance art. The world is a better place that way.


> mailed 23000 private keys

I had a similar thought. My thought was: "HOLY SHIT!"


On the plus side, a notebook doesn't have a connection to the internet...


In some languages (German for one) the common word for laptop is translated to notebook in English.


The term notebook computer is used in English as well, however its usage has declined significantly over time compared to the term laptop:

https://trends.google.com/trends/explore?date=all&q=laptop%2...


It is declining, but I highly suspect the MacBook will remain the MacBook, and not the MacTop.


Should count as probable cause for revoking the remaining 27k certificates then, no? It's not unreasonable to think someone has been exploiting this for years, siphoning any private key passing through?


The remaining 27k certificates are probably the ones which were generated on the customer's premises and never left them. The only thing which could be siphoned in these cases is the CSR, which is harmless (it contains essentially the same info as the certificate, which is public).


If the server was rooted, couldn't an attacker silently intercept and swap out a csr with their own during domain validation, and thus get certificates approved based on fraudulent csr where the corresponding private keys are controlled by the attacker? Maybe even throw in a few "oops, please retry validation" to hide the fact that a different public key was signed in the certificate and the real customer would be none the wiser (except for perhaps looking at CT logs)?


No. The CSR is tied to the key and the resulting certificate as well. Verifying that the CSR belongs to the key would be the CA's job, not the server's. If the server was rooted, an attacker could probably just use it to create an arbitrary new certificate - but revoking the legitimate cert won't fix that.


1. User validates domain and submits CSR
2. Rootkit swaps out the CSR with one of its own, whose private key it controls, and submits upstream to the CA
3. CA signs a cert because the domain was validated
4. Rootkit receives a valid cert for its own private key
5. Rootkit presents a bogus error to the end user
6. End user tries again and the rootkit lets it through

End-result: there are now two certs and private keys for the domain, one of which is compromised?

Just a thought experiment. I'm not familiar with the domain validation flow this reseller site was using.


Like I said: If you compromise the Trustico servers, which belong to a reseller, you could trick the CA into believing that a CSR is valid for a domain. This would get you a certificate with a key under your control. But revoking the certificate with the key under the owner's control won't fix this. The illegitimate certificate would still be under the attacker's control and remain valid.


Unless you revoke all the certs issued through that channel/reseller - the legitimate ones would be collateral damage, but you'd also get rid of any illegitimate ones?


If you compromised the reseller, would you have the reseller keep a list of illegitimate certificates that you issued via that channel?


Surely the actual CA has a database of certificates it has issued and by which reseller?


1. Rootkit issues cert and sends to hacker

I find this plan to be much much easier.


Presumably, even if you had root on the reseller website, you would need to wait for the domain owner to actually perform a domain validation before the actual CA's system would deliver any certificates?


Considering the security on that website, probably not.

Even if you did, I would argue that if you have root access to the validation form, getting a certificate signed is not going to be exponentially harder either.

Waiting for a potential target to sign a cert using that specific reseller is just borderline useless.

An attacker of a CA will be interested in either their CA private key (or intermediary) or the ability to get arbitrary certs signed.

Random targets on the internet are useless since it's unlikely they can MitM them.


Certificate transparency logs would out that pretty quickly.


Wouldn't that attack mean that the cert the domain owner got wouldn't work on their webserver, since they didn't have the correct private key for it?


Yes, but you could fool the user with an error message so the user tries again and let it through unmodified? (Hide the fact that a cert was actually issued already)


Someone might have rm -rf / --no-preserve-root'd them! :)


Probably good, in case anyone was looking to dump stuff on that server.


Hmmm, you sound guilty!

;-)


Not me guv'ner! Honest like!


They have a tool that allows you to create a private key + CSR

https://www.trustico.com/ssltools/create/csr-pem/create-a-ne...

Apparently they decided to keep a copy of the private key.

Edit: Looks like they are having problems atm. A copy can be found at

https://web.archive.org/web/20180217071027/https://www.trust...


This probably happened because they allowed users to execute root commands on their server. Either they quickly shut down the site or someone else did it by shutting down some servers.

> https://twitter.com/svblxyz/status/969220402768736258


Oh my. There must be some sort of hall of fame for security vulnerabilities. And this belongs in it.

Perhaps they’re passing this command to a secured container? I shouldn’t make excuses for them, but passing root commands to the shell seems too far out there.


> root commands to the shell seems too far out there

You haven't been in software too long, have ya? /s

Glibness aside (and I meant the above as a joke, not a personal attack), this is distressingly common to the point of being near-universal in some areas of our industry.


> Perhaps they’re passing this command to a secured container?

That would indicate they were concerned about shell injection while writing the code. But if that were true, why would they skip the much simpler step of sanitizing/escaping the input?


> Perhaps they’re passing this command to a secured container?

This would still raise my eyebrows, since root inside a container is still something you should avoid unless absolutely necessary (especially if they aren't using user namespaces). Just because containers add some newer security features to regular processes doesn't mean you should forget the security features (POSIX DAC) that were there in the first place.


> There must be some sort of hall of fame for security vulnerabilities.

It's called the Pwnie Awards.

https://pwnies.com/


This seems more to warrant a business Darwin Award.


Up until a few years ago there was a <keygen> tag which embedded an X.509 CSR generator into a regular HTML form. The private key was generated by the browser(1) and was completely inaccessible to the site or any JS running on it. That's the proper way to generate certificates in browsers, but it has since been removed and was never supported in IE.

StartSSL used it, for example, but also allowed you to hand them a CSR of your own making. Although they of course ignored almost everything in the CSR apart from the public key (which is probably a good idea).

(1) IIRC you could even have a smartcard generate the key, at least in theory.
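
For reference, a minimal sketch of what that markup looked like (attribute values made up; the element no longer works in current browsers):

    <!-- The browser generated the keypair locally and submitted only a
         signed public key (SPKAC); the private key never left the machine. -->
    <form method="post" action="/enroll">
      <keygen name="pubkey" keytype="RSA" challenge="random-challenge-string">
      <input type="submit" value="Generate key and request certificate">
    </form>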


You say "they of course ignored almost everything in the CSR apart from the public key" but it turns out they went a bit further than that, they also ignored the signature on the CSR.

So you could put some other public key in there, add a bogus signature that wouldn't verify and they'd issue certificates for a key you never even controlled.

The bad security implications for that scenario are a bit subtle, and situational, but CAs are supposed to be checking that the CSR is properly signed; that's only a long way down the list because StartCom/WoSign had so many other serious problems.


Actually I've never seen a solid argument for why that's a problem. A similar issue exists in PGP, but again it's unclear what the actual problem is. Some (non-x509) PKIs can't do this anyway.


Two people have asked essentially the same thing here; I'll answer this one because it was at the top. I am going to deliberately use a far-fetched scenario rather than invoke specific technologies, because there is no benefit to learning anything more specific here than "Nope, a competent CA must never allow this to happen".

Alice has public key A, and Bob has public key B, and everybody trusts Charlie the Certificate Authority to issue certificates. Alice has one binding (Alice,A) and Bob has one binding (Bob,B).

Alice controls a missile; she will only accept Major Tom's commands. Bob runs firework displays; he accepts the display organizer's commands.

I want to fire Alice's missile. I impersonate Bob, and I trick Charlie into issuing a certificate (Bob,A) because she doesn't verify that I know the Private Key for A. Then I offer (as Bob) to run a really great firework show for Tom, and I give him the (Bob,A) certificate so he can command firework launching.

When Tom sends me a launch message encrypted to A, I simply deliver it to Alice, who launches the missile as I desired.

Alice was never compromised, neither was Tom. Charlie's only mistake was not verifying that I controlled the Private Key for the cert she issued me. Bob was compromised, but he was just running a firework business, he didn't know this was a matter of national security.


Ah, ok. In your scenario, the assumption appears to be that by issuing a (Bob, A) certificate, Charlie asserts that key A belongs to Bob.

For some reason, I never thought of DV certificates that way. I always took (Bob, A) to mean "I checked with the real Bob and he says it's fine to use key A in his name".

The former is, of course, the more useful guarantee.


I'm aware of this[1], and with what you said out front in mind - it doesn't apply to either HTTPS or PGP. (I'm pretty sure you can build systems affected by this with X509, though).

[1] I don't want to take away from your good post, it's a good and well explained scenario to illustrate the issue. Personally I found Dominic Tarr's paper on AKEs-as-capabilities quite illuminating when I read it, and the analysis applies to your scenario as well. (The scenario is also a neat demonstration of a bunch of other issues, too)


Hmm. I think I agree with you that it isn't practical in HTTPS. I can see clearly why you can't do anything like this in TLS 1.3 because it has this nice simple ordering - the client and server do DH key agreement - we now both have encryption but no certainty of who we're talking to - then the server sends a certificate and uses the key from that certificate to sign their DH transcript, the client optionally also sends a certificate, they too use the key from that certificate to sign the transcript. Since I don't know Alice's Private Key I cannot sign transcripts while presenting the (Bob,A) certificate, and if I let Alice do the key exchange so that she'll sign, I'm cut out of the loop entirely. That's easy to follow.

But in TLS 1.2 and earlier it's murkier to me because there are cases with and without DH, and what gets signed, by who, and when varies. I think you're right, but I started doodling the possible cases out and I filled two A4 pages before I gave up. Certainly even if you're right as to how the protocol is designed it will not astonish me if somebody implements it wrong and doesn't check a signature somewhere given the many cases.


Sure, they should have checked but I don't see the implication. If you take steps to prove ownership of a domain and then request a certificate for it with a key you do not control, that's on you.


It can't be that hard to whip up a cross-platform Qt app that streamlines the keygen and CSR process in a user-friendly fashion. Maybe throw in some dodgy key escrow service for the daring. Why does everything have to be done over the web?


It already exists, e.g. https://www.digicert.com/util/

The problem is, people who don't understand the issue will prefer a solution that doesn't require installing some software.


IE didn't support the tag, but you could do the same using ActiveX components; I once saw a VPN enrollment website doing that.


Worse yet, the tool seems to run JS on the client side and the page includes JS code from 5 different sources, including ad companies.

Any of them can capture the generated key.


This shows that it takes a lot more than a fancy web site to convince you that a CA is professional.

It’s kind of crazy how little we know about these important institutions. Credit reporting agencies are another example of complete incompetence in a presumed-sensible organization.

Especially once money changes hands, there ought to be a lot more terms in the contract to specify good behavior. You need backup when you discover something is run by 6-year-olds.


This was a "reseller", and they don't have to be audited, and the browser CA inclusion policy probably doesn't mandate anything from CAs regarding their resellers. Well, it should.


The thing that isn’t clear to me is how Trustico even had the private keys to begin with. It’s been a while since I’ve purchased a SSL certificate, but I remember generating the private key locally and providing a certificate signing request, which isn’t the private key. What am I misunderstanding here?


You remember correctly the way things should happen.

But, presumably, Trustico generated the public and private keys for the customers, signed the certificates, and handed the whole mess to the customers. I imagine some customers would even pay a bit more to not have to bother learning to generate a keypair and signing request themselves.

The thing I don't understand is how the CEO thought things would likely work out to his advantage. He must have realized that the person holding all of the cards didn't want to cooperate, and decided to try and bully that person into acting against Trustico's customers. Making such a colossal misjudgement makes me curious what else this CEO has done at previous companies.


It sounds like Trustico got these certificates from Symantec. The CEO of Trustico was arguing that they should be revoked as they weren't secure and emailed the private keys as proof. Which, while a dumb thing to do, did prove his point I guess.

EDIT: From Trustico's account

> We believe the orders placed via our Symantec account were at risk and were poorly managed. We have been questioning Symantec without response as to concerning items for about a year. Symantec simply ignored our concerns and appeared to bury them under the next issue that arose... We were also a victim whereby Symantec mis-issued SSL Certificates owned by us, subsequently we were asked to keep the matter quiet, under a confidentially notice.

https://groups.google.com/d/msg/mozilla.dev.security.policy/...


Why would they not try to transition their customers to new certs _before_ getting all the old certs revoked though?

Seems like suddenly revoking 23k of their customers' certs with only 24 hours' notice is just shooting themselves in the foot.


We're using the Symantec/Digicert API for getting certs at work. I'm not directly involved in that, but I think when you want to issue a new cert you have to revoke the old one first. The API will just return an error if there is an existing cert for the same domain name.

I recall vividly that when we moved from manually-issued certs (using their website) to automatic issuing (using their API), we had to revoke all certs before starting the automatic job for the first time. I don't know how it works wrt moving from the old Symantec root to the new Digicert root; I'm not directly involved in that.

Oh, and BTW, of course we're investigating moving to Let's Encrypt instead, if only because we can use an existing ACME client and don't have to continue maintaining our own certificate automation.


That doesn't make sense. Old cert expires 2018-03-01 23:59:59, yet I'm not even allowed to apply for a new one valid from 2018-03-02 00:00:00?


You can get a new cert when the existing one has less than one month remaining lifetime. The problem is that Chrome now distrusts Symantec certs way before their expiration date.


Ah, now it makes sense.


That makes no sense; how would you extend a cert at the end of its life? If you have to revoke one to get a new one, that means you will have to take an outage, since you will need to get the new cert and install it after revoking the old one.


I use digicert and I can get as many certificates as I want with my wildcard cert. In fact I run different certificates in different contexts.


We were using Symantec. Then when they cocked it up we switched to Amazon ACM, which integrates with our infrastructure and doesn't have dodgy people running it. And it's 'free', but not really, because it's a hidden cost in our AWS pricing.


Because of utter incompetence.


"Trustico allows customers to generate a Certificate Signing Request and Private Key during the ordering process," the statement read. "These Private Keys are stored in cold storage, for the purpose of revocation."

Maybe they decided preparing a CSR is too hard for their clients :/


Lots of companies used to allow this, GlobalSign being one of the biggest that jumps to mind (iirc they stopped it in the last couple of months).

The idea that the company would store the private keys, however, is even more troubling.


> Maybe they decided preparing CSR is too hard for their clients :/

It seems that a lot of businesses and people feel that making things "easier and more convenient" takes priority over best security practices. For example, we can't support client side TLS cert authentication (in addition to a username and password) because customers won't be able to generate the CSR or know how to import the certificate into their client. Instead, let's use SMS or email based two factor authentication.


And they're probably not wrong. In other words, I suspect that if there were two identical competing services, the convenient one would win. If one only supported client certs, and one only supported 2FA, I have a (unsupported) feeling that the client cert company would not survive long.


Yep. The problem is that most customers cannot judge the level of security offered by a company. So Company "A" says they are "secure," but has a cumbersome process to follow to get a certificate. Company "B" says they are "secure" and has a convenient process. Guess which one gets the business (all else being equal).

In reality Company "B" may well be much less secure than "A", but the customer has no way of knowing that or making a judgement on which company is more secure.


On the other hand, a customer who doesn't realize that their private key should never leave their premises has no business asking for a certificate. It's like not selling guns to children.


While that may be true, that doesn't mean the rest of us shouldn't have the option of using certificate based authentication.


We have the option of using certificates, but not the option of giving away private keys.


Could a company offer both options? That is, a customer could choose whether to use a client certificate or SMS message/email based 2FA.


I don't understand why they'd need the Private Keys for revocation. Isn't one of the reasons to revoke because you've lost the private key?


You don't need the private keys for revocation, and you're right that compromise of the private key is one of the best reasons to revoke a cert.

I can't figure out why the Trustico CEO emailed the private keys to DigiCert. It doesn't make sense.


The Trustico CEO intentionally compromised the private keys to force DigiCert to revoke the certificates. DigiCert wouldn't do this otherwise on his request; it's the actual owner of the certificate who needs to request revocation.


I meant I don't get why he would want to do that. Why insist that all of his customers' keys be revoked? Why intentionally compromise the keys to make that happen?

After reading more about it, though, I think it's less that it doesn't make sense and more that the person making these decisions is incompetent.


In fairness, all of those private keys were already compromised from the start. I have no clue why the CEO decided to take such harmful action to his own company, but one way or another all of those certificates needed to be reissued.


It looks like Trustico had a feature on their site to generate all of the required public/private keys on their website. If you used that form to generate the keys then they'd also store them on their servers.

From my reading of the available data, it would explain why not all of Trustico's clients needed their certificates revoked. Some of them will have generated the keys locally, not using Trustico's online tool.


As mentioned elsewhere in the thread, Trustico has a tool for generating keypairs. On their server. Which is dumb for exactly this reason.


Oh, it's worse than that: https://twitter.com/svblxyz/status/969220402768736258

You can run arbitrary shell commands as root from their webserver.


Cue the intro to Bohemian Rhapsody. This is a huge WTF. SQL injection? Too basic. We do raw shell injection now. To a CA.

Welcome to the future of computers where security comes secondary to extra profits and marketing.


FWIW, Trustico isn't a CA. They're a certificate reseller; all certificate validation is handled by a different company. If Trustico hadn't been generating their customers' private keys for them (or if their customers had refused to let them do things that way) they wouldn't have been able to screw things up this badly.


Got it, that kind of makes sense. The concept of a certificate reseller doesn't make a whole lot of sense to me, but thank you for making the distinction.


Mostly they exist because of price discrimination.

Rich Uncle Bob hears he needs an "SSL Certificate" he's heard of "Thawte" brand SSL, he goes to the brand website and clicks "Buy $69.99 per year".

His savvy friend Tight Mike needs one too; he shops around and finds a reseller called "Discount SSL" that offers a Thawte certificate for $18.99.

What's the difference? Nothing except that Tight Mike was looking for a cheaper price, and if "Discount SSL" didn't sell it to him for $18.99 he might have eventually kept looking enough to find that somewhere else has a GoDaddy cert for $12.99 or whatever. Bob didn't care, he just paid whatever they asked, so wring the maximum possible out of him.

In theory there's also some more traditional "sales and service" type role, where they educate customers, help manage local experience e.g. maybe the Reseller is in Egypt and your English isn't so good - and that sort of thing. But a LOT of the business is straight price discrimination, trying to ensure as much of the customer's money goes to you as possible without them switching to a competitor with lower prices.


I guess someone started playing around, their server now responds with a 503.


For some reason their server is "temporarily busy" right now.


Ironically his blog isn't available over HTTPS. It would be about time browsers marked HTTP sites' address bars as "Not secure" in orange.

It's either secure or it isn't. Fun fact: Europe's ePrivacy law is coming next year, which requires all communication to be secure.


> Europe's ePrivacy law is coming next year which enforces all communication to be secure.

Link to Directive please?

(The reason I ask is that news reporting on EU law in English is extremely unreliable, and it's best to go to primary sources)


> Link to Directive please?

I-scoop[0] explains it well. Do some digging in the documents[1] and you'll get the idea where it's heading to.

"Respect for the privacy of one’s communications is an essential dimension of this right, applying both to natural and legal persons. Confidentiality of electronic communications ensures that information exchanged between parties and the external elements of such communication, including when the information has been sent, from where, to whom, is not to be revealed to anyone other than to the parties involved in a communication. The principle of confidentiality should apply to current and future means of communication, including calls, internet access, instant messaging applications, e-mail, internet phone calls and personal messaging provided through social media." [1](page 6, article 7)

> (The reason I ask is that news reporting on EU law in English is extremely unreliable, and it's best to go to primary sources)

Completely agreed, it's a complete jungle of information and the source documents are hard to digest.

[0] https://www.i-scoop.eu/gdpr/eu-eprivacy-regulation/#The_cons... [1] http://data.consilium.europa.eu/doc/document/ST-15333-2017-I...


Even local reporting on new laws is not great. Somewhat recently there was Feinstein's gun law; I'm not a fan of it, but there was so much misinformation flying around making it seem much worse than it was, which in turn detracted from legitimate criticism.


Which they cannot enforce on the web except by blocking (aka censoring the web).


Right, because the only way we can control speeding is by barricading roads.


They could fine you.


I live in the US, operate in the US, and if I ever go to the EU, I am quite confident that the system won't move quickly enough to notice I'm in the EU and collect fines during a tourist stay. So no, they couldn't fine me. If they did fine me, there would be zero repercussions for simply ignoring the fine.

I am not saying I plan to break this law: I'm a big supporter of encrypting everything and I was in compliance with this law before this law existed. I think that this law is a big positive step for privacy in the EU.

I am saying that the claims that this EU law will have massive international effects are overblown. There are five other continents with major businesses and only the businesses which operate in the EU have any reason to care about EU laws which are enforceable only in the EU.

EDIT: I accidentally a continent.


Only if you are (or will be) incorporated in the EU, which generally you should be in order to accept payments from and do business in the EU.


Europe's GDPR data law contains no such stipulation.


I'm referring to the communication (ePrivacy) not data / consent (GDPR).


That requirement applies to over-the-top communications providers, not to communication in general.


Security depends on your threat model. HTTP is generally secure for publishing and has the added advantage of being cacheable by proxies. This blog is secure.


> HTTP is generally secure

This statement is wholly incorrect. HTTP is not generally secure.

> for publishing

When I publish something I do not intend for third parties to interfere with the delivery of what I publish.

> and has the added advantage of being cacheable by proxies

If you trust your proxy, you can still have cached data at your proxy. If you don't trust your proxy, then why are you proxying through it?


HTTP plus a trusted hash would provide an integrity measure of the content of a page, and enable caching.

It would not prevent anyone from examining that content in flight, or altering it. It would allow any such alteration to be identified.

It is possible to offer various levels of assurance on unencrypted communications.

Mind: I'm describing a possible world, not the one most of us happen to live in. Unless, say, you're retrieving Debian package repositories over FTP or HTTP transport, and rely on the package signature rather than HTTPS for integrity.

See for example: https://unix.stackexchange.com/q/90227
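
A minimal sketch of that model, assuming PHP, a detached signature, and a public key already obtained over some trusted channel (URLs and filenames are made up):

    <?php
    // Content fetched over plain HTTP; integrity comes from the
    // signature check rather than the transport. Any in-flight
    // alteration makes verification fail, though anyone can still
    // read (just not undetectably modify) the content.
    $data = file_get_contents('http://example.com/post.html');
    $sig  = file_get_contents('http://example.com/post.html.sig');
    $pub  = openssl_pkey_get_public(file_get_contents('author-pubkey.pem'));

    $ok = openssl_verify($data, $sig, $pub, OPENSSL_ALGO_SHA256);
    echo $ok === 1 ? "intact\n" : "tampered or invalid\n";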


You cannot have a "trusted hash" if you cannot trust the delivery mechanism (unencrypted and unauthenticated TCP).

The content of the delivered payload (your blog and your "trusted hash") can be altered by anyone in transit.

When you take unencrypted and unauthenticated TCP and upgrade to encrypted and authenticated TLS, only then can you begin to have trust.


You can't have encryption without authentication, because a man in the middle can snoop your connection.

You can have authentication without encryption. This is what PGP signed messages are.

I'll admit it's weird to call a cryptographic signature a "trusted hash", which makes it seem like the author of the post you're responding to doesn't know what they're talking about. But it's totally possible to have trust without encryption or TLS. TLS isn't even the best protocol for signing out there, as the whole thing depends on CAs being trustworthy (which they aren't).

All that said, if you want trust on the internet, start with HTTPS. Sure, you don't need the encryption for delivering non-secret content, but it's the easiest way to set up trust and the only one non-technical people are likely to verify in any way (because their browser does it for them). It doesn't provide very strong guarantees, but it's better than nothing. If you want more go with PGP.

And the fact that you can use PGP over HTTP doesn't in any way mean that HTTP is secure in general.


> You can have authentication without encryption. This is what PGP signed messages are.

Yes, but you must install the authorization certificate using a secure method.

Honestly I think far too many CAs are "trusted" by default, especially for executing things (such as javascript) on my computer.


> you must install the authorization certificate using a secure method.

No.

You must be able to trust the authorisation certificate.

Again, PGP/GPG and PKI: keyservers are not authenticated, anyone can post a key. Anyone can sign a key. And if keys are transmitted via plaintext methods (such as an ASCII-armoured key exported and posted to an HTML website), then that can be distributed and installed in an insecure method.

The security for PKI comes from the trust and integrity of those signing keys. Whilst anyone can sign a key, the catch, for the attacker, is that you choose the keys you trust, and the extent to which you trust them.

This isn't a magic bullet, and has numerous issues and challenges (trust roots, scale, trust revocation, general comprehensibility to the lay public, etc., etc., etc.)

But, given the following components, an insecure distribution method is absolutely possible and has in fact been the mainstay of PGP/GPG networks over the past quarter century:

1. A robust and cryptographically valid cipher system and implementation for generating, signing, and validating keys and signatures.

2. Key signers trusted to you who have signed keys.

3. Reasonable assurance of the validity of a given key relative to the claimed identity by those you trust as signers.

I think that's pretty much it. How you distribute the information generated within this system doesn't matter, because the cryptography, implementation, trust, and keysigning practices are where the integrity of the system is manifested. That is, the system does not rely on transport-layer security outside those domains.

If you look up PGP keysigning protocols, you find that these are generally based on in-person procedures, which is to say, the transport layer for that element is highly assured. There are other alternatives, including TOFU or numerous informal-but-generally-sufficient mechanisms.

What PGP/GPG lack that the SSL/TLS systems have (generally) is the notion of universally trusted authorities. If you introduce that particular element, you end up with numerous cans of worms. And in fact what we've started seeing are effectively secondary (or greater) checks on CA reliability, in the form largely of major browser vendors, or operating systems, who maintain their own lists of trusted and untrusted CAs. This is a step, by TLS, in the direction of PGP's distributed WoT. PGP, on the other hand, has moved somewhat toward centralised trust in the form of auto-signing systems (PGP Inc., now part of Symantec, ran one such keyserver). Signatures by such keyservers are not a strong assurance of trust, but do establish a documentary record of key existence and history which may prove useful.

Full and true trust are phenomenally complex and/or difficult. Ultimately, impossible as an absolute, but useful even in imperfect form.


One clarification: You choose the signing keys you trust.

You can also partially trust keys, in which case a given key requires multiple signatures (from partially-trusted signers) to itself be considered trusted.

Note the distinction between trusting a key and its signers.

Among core problems with PGP/GPG is the lack of a notion of a negative trust signature. That is "I am signing this key to indicate that I know it is not what it claims to be and/or is otherwise not trustworthy". That would be generally useful (and, of course, also generally exploitable in various ways, a common story).


Except when a party also has the certificate information and private key and does a man-in-the-middle attack without you knowing, because they can re-sign the data with the exact same key :)


False.

That is what PGP / GPG's Web of Trust offers.


This just went past me on Twitter: https://twitter.com/lynyrd_cohyn/status/968977681210585090

(Malicious ad injection + HTTP = mobile billing)


It can be MITM'd and used as a vector for malware.


There is a lot more to the threat model than "will my content be stolen or compromised?"


TLS = encryption
TLS = authentication
TLS = tamper proof


Largely, yes, modulo CA integrity and APT attacks.


While the article title is accurate (you should not run your CA in this manner), Trustico is not a CA but is instead a reseller of CA services.


This is such a clusterfuck I don't even know where to begin. I am very happy that "pay money" for DV SSL certs is going the way of the dinosaur, with Let's Encrypt.

The only SSL cert you should ever pay money for is $90/year for an EV SSL cert for an ecommerce/product purchasing website where people are entering credit card details. The big friendly green bar GUI element, for non-technical users, is worth it.


> I am very happy that "pay money" for DV SSL certs is going the way of the dinosaur, with Let's Encrypt.

> The only SSL cert you should ever pay money for...

You can still "pay" Let's Encrypt; my understanding is that as a non-profit they rely primarily on sponsorship and donations. If you are using them in production for a product making money, you could at least consider throwing them a donation. If no one contributes we don't get to have nice things like Let's Encrypt!


I would pay for an acme endpoint that is not rate limited.


You don't have to pay; you just have to ask. :-)

https://letsencrypt.org/docs/rate-limits/#overrides


And it just "takes a few weeks"!



> proving that Trustico has knowledge of all their customers private keys, keeping copies of them, which proves that they never knew how to run a Certficate Authority business in the first place

Well, they also weren't actually running a CA business either.


Also, 23,000 people disclosed their private key to a third party.

a) in a sense maybe they thought they were disclosing their private key to their CA, so in a sense it didn't really matter, because their CA could issue certificates for their domain anyway (... ignoring certificate transparency/other external verification)

[... we know this is not true and it's mostly people don't know/don't care/it doesn't matter what they are doing in the scheme of things]


This is going to be the funniest thing I read all day. Thanks for writing it up!!


Mirror, because the site wasn't loading for me earlier: https://www.eternum.io/ipfs/QmSjZic3JCaU3MHro8MyvRi1RpQweuYC...


I'm wondering if anyone could explain what the "right" thing for Trustico to do would have been?


They should not have offered a web form for generating private keys, and instead should have educated their customers on how to generate their private keys and CSRs on their own servers.

If they had done that they wouldn't have had access to any private keys in the first place, so all their subsequent mistakes in mishandling those keys would have been impossible to make.
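
(A sketch of that customer-side flow, using PHP's openssl extension purely for illustration; most people would use the openssl CLI, and the field values here are made up.)

    <?php
    // Generate the keypair and CSR locally; only the CSR goes to the
    // CA or reseller. The private key never leaves this machine.
    $key = openssl_pkey_new([
        'private_key_bits' => 2048,
        'private_key_type' => OPENSSL_KEYTYPE_RSA,
    ]);
    $csr = openssl_csr_new(['commonName' => 'www.example.com'], $key);

    openssl_pkey_export($key, $keyPem);   // keep this to yourself
    openssl_csr_export($csr, $csrPem);    // this is all the CA needs

    file_put_contents('server.key', $keyPem);
    file_put_contents('server.csr', $csrPem);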

As for the whole "arbitrary Remote Code Execution as root" on their web server, that's got a pretty obvious solution: sanitize your data inputs, and don't run your web application server as root.


> So the CEO of Trustico, Zane Lucas, mailed the private keys of 23000 Trustico customers to Digicert

How is it that these guys have managed to stay in business for this long?


Intel is breathing a sigh of relief that finally there's a distraction from Spectre and Meltdown.


letsencrypt is great and I use it. But I don't really get it. All I needed to do was prove that I could place a generated file on the server that I wanted the certificate for. This seems to me to be a very low bar. What am I missing?


There's a difference in certificate type. Let's Encrypt (which only issues basic certificates) just verifies that you're the rightful owner of a domain, not whether the domain is what it says it is.

If you'd like to have more verification for your certificate you need an extended validation certificate (which often costs money). These certificates also include your (company) name, and the issuer verifies whether it's correct or not.

Basic certificate issuers don't judge over domain names or content, they just verify domain ownership.


To clarify, other certificate types do not judge content either. The difference is that they verify your organisational details (to a varying degree, depending on the validation level) and include them in the certificate. The CA is not going to check whether the business is fraudulent or anything like that.


To clarify for people who've never bought one, an EV SSL cert actually involves work by humans at the CA. They do things like obtain copies of your state/provincial corporate registration, business license, LLC registration, local business license, etc. Then they do basic matching that your physical business address of record matches what your state's Secretary of State has on file for your corporation. It's about a half-hour process on the part of the CA of verifying the existence of a real business entity.


That is basically the bar for a "DV" (domain validated) SSL cert. Let's say you are the owner of the domain rupertsdildoemporium.com and want to get an SSL cert. Since you own the domain you control the entries for what nameservers it uses. Since you control the nameservers, you can point the A record for the domain anywhere of your choosing. Or put an arbitrary TXT record in the DNS zonefile. That is the full extent of what you need to prove to get a domain validated SSL cert.
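
(A sketch of what such a DNS-based check amounts to, with a made-up record name and token; for comparison, ACME's dns-01 challenge uses _acme-challenge.<domain> with a computed digest:)

    <?php
    // DV-style check: look for an agreed challenge token in the
    // domain's TXT records. Names and token are illustrative.
    $expected = 'some-random-challenge-token';
    $records  = dns_get_record('_acme-challenge.example.com', DNS_TXT) ?: [];

    $proven = false;
    foreach ($records as $rec) {
        if (($rec['txt'] ?? '') === $expected) {
            $proven = true;
        }
    }
    echo $proven ? "domain control proven\n" : "validation failed\n";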

It is complete and total bullshit that DV SSL certs still cost money (thanks LetsEncrypt), anywhere from $9/year to $80/year.

The companies that rely on selling DV validated SSL certs for their business model are polishing the brass doorknobs on the Titanic. It's all going down. Just a question of when.


The bar for basic certs has been, for some time, an indication of control over the DNS domain in question.

In the past, the bar used to be much higher.

Whether a lower bar is a good idea or not I will leave to other more informed folk.


When was it higher, and what made it higher? The only extra thing that you used to have to do is pay money....


They're not verifying your identity, just that you control the hardware running the domain that you want to generate a certificate for.


Correction: that you can control the IP space advertised to Let's Encrypt. A BGP exploit would result in getting valid certs for someone else's domain/host. It's almost trivial to exploit BGP, which is why PKI is so important... so it should actually be incredibly difficult to get a cert.


If someone is hijacking BGP to MITM the world, how are the old methods any better?


If you could hijack BGP you could just as easily inject routes to intercept DNS requests as well.


This is why ISPs use tools like RPKI validation of received routes, and things like the ARIN, RADB and RIPE route-servers.


Almost nobody uses RPKI: https://rpki-monitor.antd.nist.gov/


They're not verifying your identity since it's a domain-validated cert. The point is just to ensure that communications between a user's browser and www.somerandomdomain.com are secure, regardless of who is behind the domain in question.

For identity validation, you'd need to buy an EV or OV cert. But for most organizations, particularly if your domain IS your identity, a domain-validated cert is absolutely fine.


> TL;DR: Forget your EV or other certs. Just run “Let’s Encrypt”.

The author has a fundamental misunderstanding of the situation [1]. Trustico's awful decisions regarding

a) storing customers' private keys and

b) improperly handling key material

have no bearing whatsoever on EV certs, which verify the legal entities that run websites. This is like saying Trustico is bad, therefore HTTPS is bad.

[1] Assuming this is what the author said - the site is in plain HTTP so integrity isn't guaranteed.


There is at least some merit to the argument that "Trustico Bad" => "CAs bad" => "HTTPS Bad".

More than one CA has been shown to be extremely lacking in trustworthiness and that trust is important. I'm OK with the centralised model but there needs to be a bit more visibility of the CA process.


I'd settle for an end to the credentialism that ensures only the rich and powerful can enter the CA business. The actual technical chops and physical/operational requirements to become a CA are modest by the standards of the average HN reader, but the financial cost for the audit required to wind up in the browser trust stores is prohibitively high.

...That, and given the massive failures we've seen coming out of the CA world recently, I question whether those audits are actually worth anything.


Aren't the audits what allow us to find out about the failures, and revoke their ability to be a CA?


How many of the most recent failures have come to light as a result of a failed audit, and how many were due to a post-audit, outrage-generating violation of basic best practice and common sense?


The whole point of auditing is to detect problems before they become big, embarrassing, messy failures that put users at risk.

If you hang out on mozilla.dev.security.policy for a while, you'll see plenty of examples of audits exposing weaknesses or sloppiness on the part of CAs, and receiving the resulting pushback from browser vendors. Here's the most recent example I've found: https://groups.google.com/forum/?fromgroups=#!topic/mozilla....


So is a fair summary, "In a weak attempt to force Digicert to revoke 23,000 certs, Trustico proved they had the private keys for those certs. In the process, Trustico also proved they are completely untrustable as a CA."?


Yes, with the addition that Trustico also doesn't know how to transmit key material securely. Basically DigiCert's summary nails it.


"Bad Actors" in the tech field tend to flock together. Comodo has been at the center of several really ugly stories, this one being the latest. The CEO of Comodo attempted to sue Lets Encrypt before they launched, in order to kill the project because of the threat it represented to their business model. after 24 hours of backlash from the internet public he backed down and said it was all a misunderstanding. Of course.

Cloudflare uses Comodo for their SSL. They could use some other cert authority, but they chose to use Comodo. This is on topic, in a general sense of CAs and trust. It bears repeating: Bad actors tend to flock together.


Your argument for guilt by association is not compelling. Cloudflare is their customer. I am Cloudflare’s customer. Does that make me a bad actor too?


Continued usage of Cloudflare baffles me. They spewed tons of private data, across customer boundaries, on random web responses, which made its way into search engine cache around the world. They conducted an act of Internet censorship:

https://fightthefuture.org/article/the-new-era-of-corporate-...

..and the CTO who made that decision later backed down and said he wouldn't make that same decision again. O_o

I really feel like the biggest reason they're still in business is that very few alternatives offer anything really competitive.


Keeping nazis off of servers one owns/rents themselves is not censorship.


How is that not censorship? Their opinions are wrong and disgusting but kicking them as a customer for their beliefs is very obviously censorship.


Censorship is something sovereign entities do. When anybody else chooses what they will or won't say we call that Free Speech. Nazis don't have a right to make other people say what they want.

The fact that American TV networks weren't allowed to say "shithole" in reporting what the US President said is an example of censorship. The FCC, a government agency, requires that they not use certain words. When Fox decided to get rid of O'Reilly, that's not censorship; that's just a basic ability to read which way the wind is blowing.


You're speaking of a strict definition of censorship which is when an authoritative body is the one censoring. And you are not wrong.

Private censorship is still censorship, however. It is also a form of speech itself.


A business transaction requires the consent of two parties. “I choose not to do business with you” is not censorship; the nazis are free to go and stand up their own servers.


^CTO^CEO


The question is: why does Cloudflare even use them?

There are like ten thousand other options...


I believe that at the time Cloudflare deployed their Universal SSL (which was over four years ago, is absolutely massive in scale, and is free), Comodo was the partner that could actually deliver the necessary integration and infrastructure to handle that sort of load.

Also, they were amongst the only ones to offer ECDSA certificates IIRC.

Cloudflare detailed a little bit about this in this and other blog posts:

https://blog.cloudflare.com/universal-ssl-how-it-scales/


We use several CAs to issue—Comodo, DigiCert, GlobalSign—and will be adding Let's Encrypt once they support i) SHA-2/ECDSA signatures and ii) wildcards.

Having multiple issuers is important for us as each CA, at some point in time, has operational issues. Additionally, as you've seen with Symantec, browsers take action to distrust certain issuers/roots.

When either of these scenarios happens, our customers don't care if it's the third-party that's down—they expect fast and reliable issuance from us (Cloudflare).


If Cloudflare wants wildcards it's so they can have multiple customers, each using completely different origin servers, on a single certificate, which makes those certificates worthless, and I hope Let's Encrypt blacklists Cloudflare from using them when wildcards are launched.


They are not worthless. They still protect the data from the user to the Cloudflare servers, and only Cloudflare keeps the private keys for them. Not sure how this would be made any safer by each domain having a separate certificate.

The main issue is that Cloudflare needs to have plain-text traffic to examine, which cannot be done with End-to-End encryption from the client to the origin server, in which case you would definitely need a separate certificate for each domain.


What you're describing is a multi-domain certificate, not a wildcard. Let's Encrypt already supports multi-domain certificates.


Huh? We want wildcards so that if you sign up example.com we can give you a certificate for example.com and *.example.com. This has nothing to do with SANs.


However, that is apparently a deal breaker, which makes me suspicious because many organizations have no trouble implementing Let's Encrypt SSL despite the lack of wildcard support. That + Past Behavior = suspicion.


Which part is a deal breaker? I'm not sure what you are getting at.


I am quoting an earlier comment in this very thread: "We use several CAs to issue—Comodo, DigiCert, GlobalSign—and will be adding Let's Encrypt once they support i) SHA-2/ECDSA signatures and ii) wildcards."

Did your colleague mis-speak?


No


How is that misusing a wildcard certificate? Should proxy server websites be HTTP only since the content is loaded from a different origin server?


Why does a proxy server need to decrypt traffic to serve it? Why can't it simply _act as a proxy_?

What Cloudflare does is dangerous to the integrity of safe browsing, for a variety of reasons. And their aggressive marketing to convince a lot of tech-ignorant people with low traffic websites that they need a CDN is harmful.

Is it too much to ask that their service at least operate in a way that doesn't poison the internet they're getting rich from?


I'm not suggesting guilt by association. What I'm suggesting is that the crooked CEO of $Comodo and the crooked CEO of $Cloudflare probably made some back room deals where they laughed about how they'd be taking advantage of all the peasants who've made them both wealthy.

I'm simply pointing out that when you have incompetent and crooked people in tech, they often run in packs.

You might argue that altruism has no place in business but I disagree. When the CEO of Comodo launched his attack on Let's Encrypt, Cloudflare had opportunity to cut all ties with them. There's plenty of other CAs they could work with. They chose to ignore that scandal, and that matters.


SSL is fundamentally broken. Web-of-trust is the only real way to do security.


It's not. It's actually worse.

You're assuming that random people on the internet are going to collectively be more secure than CAs, which is obviously not the case.

Imagine you get an email signed by the IRS, whose key is trusted by people A1, A2 and A3, who are trusted by people B1, B2, and B3 ... who are trusted by Z1, Z2, and Z3, who are fully trusted by you.

Should be reliable, right?

But while Z1, Z2 and Z3 may keep all OPSEC rules, you have no guarantee that they verified that Y1, Y2 and Y3 did. And even if they did, you have no guarantee that ...

And even if you did, can you guarantee that no one there got any malware, which signed off on a bunch of fake certs?


All you've told me is you don't understand web-of-trust.


Could you help us out by explaining what this comment is misunderstanding?


You wouldn't trust a chain that long with something that sensitive. WOT gives you the ability to have different tiers of trust. If you verified the identity yourself then you can trust it completely. If you accept people verifying identities on your behalf then you can choose exactly who you trust (i.e. not everyone). It not only makes the delegation explicit, it gives you much more fine-grained control over how much you can trust a given identity.

I'm not saying WOT solves all of our problems. It only makes it slightly better than verifying every key yourself. But it's better than the CA model because you can make it work for you.


Yeah, web of trust gives me the option to manually decide, every time, if the chain(s) of trust looks "good enough" for me to trust the other party. There are two problems with this:

1. This is a ton of work and a lot of guesswork even for educated individuals. I end up looking at either explicit chains of trust (I trust Bob and he trusts Alice and she says that this is definitely my bank's website) or some random value an algorithm spits out that tries to convey how "trusted" an entity is based on how many paths there are to it and how short they are. In either case, it's a manual decision that will often feel arbitrary.

2. Laypersons are completely fucked. No way my grandmother can reasonably decide who to trust this way.

If web of trust ever becomes widespread somehow, I guarantee you a month later Google and Apple and Microsoft will become the de facto CAs because everyone will just look to them in the web of trust and see if one of them vouches for their banking website.


> I guarantee you a month later Google and Apple and Microsoft will become the de facto CAs

Except that none of those companies bother verifying one's identity if you're not actually paying for their services.

> and see if one of them vouches for their banking website.

In that scenario, could I not just verify the bank's public key when I'm physically in one of their branch locations while opening an account? They could also verify my public key at the same time. That would allow for a direct line of trust. The same could apply to any company one deals with.


> Except that none of those companies bother verifying one's identity if you're not actually paying for their services.

I'm sure in this scenario, those companies will be delighted to step fully into the role of CA including accepting money for identity vouching.

> In that scenario, could I not just verify the bank's public key when I'm physically in one of their branch locations while opening an account? They could also verify my public key at the same time. That would allow for a direct line of trust. The same could apply to any company one deals with.

No. The same couldn't apply to any company. My primary bank is online only. And how many times have you actually walked into an Amazon office? Or Paypal? Are people in Ohio supposed to fly to San Jose to get Paypal's public key when they create an account? Or are we going to wait for the post office to deliver a physical copy of Paypal's public key (and we'll just trust that the whole transaction couldn't be compromised)? Physical key exchange is simply not practical in most cases.


It probably could apply to some that I locally deal with. But you're correct in saying that it's not practical for companies that don't have local branches.


>In that scenario, could I not just verify the bank's public key when I'm physically in one of their branch locations while opening an account? They could also verify my public key at the same time. That would allow for a direct line of trust. The same could apply to any company one deals with.

You could? Would your grandmother? Would you fly to Mountain View to get Google's? And then turn around and fly to Washington DC to get the IRS's? And then turn around and fly to who knows where to get HN's? And then expect them all to keep customer-service staff ready to help you install their public keys?

And what about personal blogs that don't want to be MITM?


> Laypersons are completely fucked. No way my grandmother can reasonably decide who to trust this way.

Can your grandmother trust you? Or maybe you're not prepared to keep her secure so you just tell her "trust Google, grandma".


No, my grandmother cannot trust me to verify every site she might want to visit. How am I supposed to know if random blog is legit? Or for that matter random bank?

The fact that I have to answer this tells me that you haven't thought through the implications of web of trust very far.


From my understanding, the example posted is an unlikely degenerate case where the whole "web of trust" between you and another peer consists of three separate parallel paths.


SSL is not fundamentally broken. The server authentication process is. Besides, what does web-of-trust add? The client usually does not authenticate itself to the server. How would a client share "hey, I trust this server" with others?


Real-world SSL is broken above a risk threshold. There are far too many trusted CAs that have repeatedly demonstrated incompetence and all of them are beholden to governments. (There are far too many, period, or in the alternative far too few.) If you have high assurance requirements, SSL with commercial CAs can't be trusted.

We have natural experiments showing that WoT, at least as implemented, doesn't scale. I hate that humans don't seem to be able to make it work, but that's reality for you.

Yes, I am a security pessimist.


I think the real sentiment here is: good PKI is an unsolved (and perhaps unsolvable) problem.


I agree. The CA model is broken (why should I trust a Russian to certify cia.gov?). The Web of Trust model is less broken in some ways, more in others.

I think a real solution would be something like a multi-root, score-based system (e.g. if the U.S. government, Underwriters Laboratories & ICANN all state that I'm talking to www.google.com/172.217.13.238, then I honestly probably am) — but I'm worried that it'd be way too complex for normal people.
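A toy version of that k-of-n idea in Python; all root names and fingerprints below are invented:

    # Accept a hostname -> key binding only when a quorum of independent
    # roots vouch for the same key. Everything here is hypothetical.
    from collections import Counter

    def consensus_key(observations, quorum):
        """observations maps root name -> the key fingerprint it vouches for."""
        if not observations:
            return None
        key, votes = Counter(observations.values()).most_common(1)[0]
        return key if votes >= quorum else None

    observed = {
        "us-government":     "sha256:aa11",
        "underwriters-labs": "sha256:aa11",
        "icann":             "sha256:aa11",
        "shady-root":        "sha256:ff99",   # a compromised or lying root
    }
    print(consensus_key(observed, quorum=3))  # "sha256:aa11" despite one bad root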


I would settle for restricting the scope of CAs. As you say, cia.gov should only be signable by US CAs (at the very least), but whatever.gov.ru should probably NOT be signable by any American CAs.

CAA records help with this, where available, but still leave some things to be desired.
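CAA records are ordinary DNS records, so checking what a domain publishes is easy; a quick sketch, assuming the dnspython package is installed:

    # Query a domain's CAA records (RFC 6844): pip install dnspython
    import dns.resolver

    def caa_records(domain):
        try:
            return [r.to_text() for r in dns.resolver.resolve(domain, "CAA")]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []

    print(caa_records("google.com"))  # e.g. ['0 issue "pki.goog"']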


True. But at least web-of-trust gives you the ability to do something you are comfortable with.


If by "something you are comfortable with" you mean "reject web of trust". Trust is not transitive, which is the problem with web of trust. I trust a set of people. I do not necessarily trust the people those people trust.


If you reject the web of trust then you're saying you trust nobody (except yourself). Do you really not know anybody you trust to verify identities on your behalf? (That's a trick question because you trust the CAs).


I specifically said that I do trust a set of people. Do you imagine that my small set of trusted people will personally vouch for the authenticity of millions of certs so that I can use the web? Of course not. You imagine instead that I'll trust the next set of people and so on, and that all this transitive trust will let me trust my bank's website. But I don't trust all those random people, which leads to a clear rejection of web of trust.


But you trust the CAs right now. If we used a web of trust system, you could still trust those CAs and nobody else. That's your model that you say isn't broken. You can be happy with that. I'll be happy with trusting people that I trust.


You propose a world where CAs still exist and all the infrastructure to support them exists, and realistically everyone continues to rely on them. But in parallel, everyone will build this web of trust system so that you can prefer it when it's convenient. That seems real plausible.


There are levels of trust.

While I may trust someone to be a real person after I've met them in real life and eaten their spaghetti, and trust them to give me back the $10 they borrowed, I wouldn't trust them to verify the identity of the IRS for me.

The current solution is that we have some third parties which follow strict rules defined by themselves and browser vendors. Everyone involved has a very good incentive to remain trustworthy, otherwise they'd be out of business.

Sadly this doesn't prevent bottomfeeders like Trustico, StartSSL and others from leeching off the system, while some are genuinely interested in securing everyone's communication (see LE).

The Web of Trust only works if your trust in someone is binary and understandable to a computer, otherwise the browser might tell you "This website is 35.218% Trustworthy".

HTTPS trust must be binary. Either the cert is trusted or it isn't.

It's even more fun: GPG is considering abandoning the WoT. They're switching to TOFU instead; the first time you see a key, it's trusted, similar to SSH.


>They're switching to TOFU instead; the first time you see a key, it's trusted, similar to SSH.

Which is horrible unless your first connection to a server is trustworthy or you have an out-of-band way to verify keys (which is why it works in SSH, and can't work on the web).


Exactly. SSH connections are usually towards servers you have set up yourself, so verifying the keys is easy. Even then there's little concern, since most of it happens on the local network anyway.

For PGP/GPG I feel like TOFU is what most people are doing anyway; this change will only formalize the 0.90-quantile behaviour.


> For PGP/GPG I feel like TOFU is what most people are doing anyway; this change will only formalize the 0.90-quantile behaviour.

Yes, probably. Key signing parties used to be a thing. I was able to find one that happened in London in 2014 (no idea how many keys were signed, though). Unfortunately people are not aware of the concept of trust, and especially of the fact that they delegate trust to these third-party CAs. It's just not a proper way to do security.

The TOFU model is the best thing if you actually verify the keys out-of-band. Web of trust is the best way I'm aware of to enable in-band verification. I don't claim that it will actually work in practice. But that's fine. All it means is that you can't actually do in-band verification in practice. I think most people here are beginning with the assumption that in-band verification is possible and then criticising WOT for its practical shortcomings, which is ridiculous.


Trust is not a binary all or nothing state. I might trust a good friend to keep a secret about my finances generally, but I wouldn't trust that friend with my ATM card and pin.


The only thing I trust less than a CA is an army of Russian troll bots.


Yes, SSL has been broken for some time. That's why TLS was introduced nearly 20 years ago.


You presumably got downvoted for being pedantic here, but I think your pedantry is reasonable. If someone's going to say "X is fundamentally broken", they should know what X is actually called. Referring to TLS as SSL reeks of amateur hour and shallow knowledge[1], and is a mistake on par with referring to Javascript as Java.

[1] This is the sort of lazy mistake I would make, because I'm not a security expert.


Yes, technically it's TLS, not SSL.

TLS, however, is an "evolution" of SSL and many, many people still use this nomenclature. It doesn't "reek of amateur hour and shallow knowledge", it's just a holdover from the past.

We all know and understand what others are referring to when they say "SSL". It's like when I tell the girlfriend I'm going to go on a "bike ride". She understands that I mean I'm going for a ride on my Harley, a motorcycle, and not an actual bicycle.

Or last weekend, when a friend asked me, "Hey, could you move your car so <other person> can get out?". While I could have stood there arguing with him or correcting him (since my car was actually at home, in my driveway), I instead went out and moved my truck so that the other guest could leave.

This excessive, unwarranted pedantry is annoying as hell. You may be "technically correct" but you make everyone around you dislike you.


Your examples are not really comparable, mostly because of context. If someone at a party mentions SSL casually and you feel the need to explain that SSL is dead and they are really talking about TLS, you're just annoying. If someone presents themselves as an expert and says that SSL is "fundamentally broken", it's pretty reasonable to question their expertise. Especially when they actually aren't even talking about TLS but about CAs and PKI.

To make an absolute statement about the fundamental soundness of PKI and then use completely the wrong term? Come on. Can you imagine a real expert doing this and not at least recognize their own mistake? Don't talk about "fundamental flaws" when you don't know the difference between SSL and TLS and PKI. This is very much like someone trying to criticize the design of TCP/IP and not knowing the difference between an IP address and a MAC address.


> It's like when I tell the girlfriend I'm going to go on a "bike ride". She understands that I mean I'm going for a ride on my Harley, a motorcycle, and not an actual bicycle.

Try saying this in the Netherlands (or, presumably, Denmark). Everyone would be surprised when you bring out your motorcycle instead of your bike (i.e., bicycle).


While not a security expert, I frequently refer to TLS as SSL, mostly when I talk about encrypted connections in general rather than specific protocol versions.

It's not quite as bad as calling JS Java or vice versa; it's more like saying Java when the specifics call for Java 1.2 or Java 1.8 and so forth.


Or better yet, it's like calling it Java 8 and then someone says you don't know what you're talking about because it's technically Java 1.8.


The first line of the blog post calls it SSL. It's irrelevant to what I said because I'm talking about the public key infrastructure.


The blog author is also not a security expert.

And I understand that your real criticism was with PKI. That makes it even worse that you called it SSL. Again, I feel like you're showing your lack of expertise in this area. I'm not an expert at all in this area and yet even I can recognize that you're playing armchair security expert.


SSL/TLS are just protocols, they aren't infrastructure. There's a lot of moving parts in the infrastructure that are not these protocols. The protocols work pretty well.


> TL;DR: Forget your EV or other certs. Just run “Let’s Encrypt”. It gets you a cert, it’s fresh, and it does not make any difference whatsoever. At least not any you or anyone else can check for, or cares for.

Let's Encrypt shut down their new test interface because of a security flaw they found. If this had been in a production service, it would have been about as bad a security flaw as is possible in a PKI system.

Let's not get all high and mighty assuming Let's Encrypt won't get compromised; they probably will. We should be planning for how to deal with that.


That doesn't make sense. You cannot criticize someone for finding an issue in the testing environment before it hits production. That's literally what the testing environment is for.


I was not criticizing them (can you quote the part of my comment that was critical?). I was illustrating that they are not perfect, and just using them without considering that they too could be compromised is a bad idea.


At least with LE we're not paying to have our data compromised :)


I think we should use ssh instead of SSL and also ssh instead of username/password pairs.

If somebody is doing a distributed chat/social system, I would use ssh if I were them.

By ssh I don't mean executing commands, but rather its encryption/authentication framework.


ssh is, by default, Trust On First Use, which is a significantly different model than a Trusted Third Party (CAs).

There are some well known trade-offs, namely that having everyone manually verify fingerprints on initial connect and again on any server change is a large burden.

I don't particularly want to have to go into my bank's local office and verify in person the fingerprint is correct each time they need to rotate a secret.

If this isn't what you meant, that we should use TOFU vs Trusted third party, please do expand.
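For anyone who hasn't seen TOFU outside of SSH's known_hosts, here's a minimal pinning sketch in Python; the pins.json store is hypothetical:

    # Remember a server's certificate fingerprint the first time we see it,
    # and complain if it ever changes.
    import hashlib, json, pathlib, ssl

    PIN_FILE = pathlib.Path("pins.json")   # hypothetical local pin store

    def fingerprint(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    def check(host):
        pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
        fp = fingerprint(host)
        if host not in pins:
            pins[host] = fp                # trust on first use
            PIN_FILE.write_text(json.dumps(pins))
            return "pinned"
        return "ok" if pins[host] == fp else "MISMATCH: rotation or MITM?"

    print(check("example.com"))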


They could just put the relevant information on all their communications (your credit card, statements, and so on), also accounting for future rotations.

For banks, it could work quite well. Most other people and organizations aren't that lucky, though.


I think it works much better for contacts. You have to verify a contact once, and then you are assured that it's still the same person.


> You have to verify contact once

Only if users never replace their keys, which puts them at significant risk in the event of a key compromise.


Replace your key by signing a message announcing your new key?


If your key is compromised, anyone can send that message and sign it with your key... :)
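To spell that out: once the old key leaks, a forged rotation notice verifies exactly like a legitimate one. A sketch using the `cryptography` package, with all keys generated on the spot purely for illustration:

    # Both "rotation notices" are signed with the leaked old key, so a peer
    # has no way to tell the real replacement from the attacker's.
    # Requires: pip install cryptography
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def raw(pub):
        return pub.public_bytes(serialization.Encoding.Raw,
                                serialization.PublicFormat.Raw)

    old_key  = ed25519.Ed25519PrivateKey.generate()  # imagine this leaked
    new_key  = ed25519.Ed25519PrivateKey.generate()  # owner's replacement
    evil_key = ed25519.Ed25519PrivateKey.generate()  # attacker's replacement

    for label, replacement in (("legit", new_key), ("forged", evil_key)):
        notice = b"my new key is: " + raw(replacement.public_key())
        sig = old_key.sign(notice)
        old_key.public_key().verify(sig, notice)  # raises if invalid; both pass
        print(label, "notice verified")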


Browsers need to remove all CAs except Let's Encrypt.

CAs have proven again and again to be ridiculously insecure, and the problem is that there is no penalty for their mistakes.

So just remove them all, after a warning period: Let's Encrypt is enough.

Or, if they want to stay in business and be trusted by browsers, require them to put at least $100k in cash in escrow for each certificate they sign. The escrow is forfeit if there's evidence that the certificate has been (or may have been) compromised through their fault, with at least half of the money going to whoever provides the evidence first.


> Browsers need to remove all CAs except Let's Encrypt.

No. I love Let's Encrypt, but we can't put all our eggs in a single basket like that.

Now, if we could somehow foster multiple non-profit organizations like Let's Encrypt, but run under the aegis of different boards and sponsors, I'd be 100% for this idea.

It's very odd that companies for whom the CA business is quite literally a money-printing operation can't be bothered to do the relatively minimal maintenance and care required for a trustworthy CA. A bizarre and unexpected failure of the market.


This market failure is neither bizarre nor unexpected. The CAs are like ratings agencies in the financial crisis: they are in the business of selling to one party (the website) a credential that they offer to a third party (the browser). Their incentives are aligned to make them sell certificates as cheaply as possible, and they are of course willing to trade off as much security as possible for convenience/cost, as long as they don't go over the lines defined by internet governing bodies and browser vendors. And when they're pushing those boundaries, every once in a while they're going to make a mistake.


> A bizarre and unexpected failure of the market.

The consumers are, partially, buying a product they can't see: the security operations of the vendor.

Given the consumer has imperfect information, it is exceptionally profitable for a supplier to just not invest in security. The downside being the risk of compromise.

A market set up this way entirely predicts low-cost suppliers. With no way for consumers to introspect security routines, two vendors will appear to simply differ in price. The outcome is rather obvious.


Security requires that ALL CAs be secure, since any compromised CA can compromise all websites (barring fragile schemes like pinning or certificate transparency checks), so the fewer CAs there are, the more secure the system is.

It's like a building that needs secure doors: it's better to invest in a single, massive, bulletproof, guarded door rather than inviting anyone who meets some standards to add a door to the building, since what matters is the weakest door.
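Back-of-the-envelope, with a made-up per-CA compromise probability:

    # If each trusted CA independently has probability p of a serious
    # compromise per year, the chance that at least one fails grows fast.
    p = 0.01
    for n in (1, 10, 150):   # roughly 150 roots ship in major trust stores
        print(f"{n:4d} CAs -> {1 - (1 - p) ** n:.1%} chance of a compromise")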


The proper solution is to actually make sure the CAs are secure; it doesn't matter whether there's one or many. Also, more CAs mean that any problems are localized to their certificates instead of the entire internet.


Certificate transparency seems like it'll make a compromised CA much less of an issue.

If it can be reliably proven that a CA is compromised, then it can be fixed.
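CT is already queryable today; for example, crt.sh fronts the public logs and can list everything issued for a name. A sketch assuming the `requests` package is installed:

    # List certificates that CT logs have recorded for a domain, via crt.sh.
    import requests

    def logged_certs(domain):
        resp = requests.get("https://crt.sh/",
                            params={"q": domain, "output": "json"}, timeout=30)
        resp.raise_for_status()
        return [(c["not_before"], c["issuer_name"]) for c in resp.json()]

    for issued, issuer in logged_certs("example.com")[:5]:
        print(issued, issuer)  # an issuer you don't recognize is worth a look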


I'm not sure that's as good an idea as you think. Even LE designed ACME with the intent that it be replicated elsewhere. The big problem with having only one CA is that if they get compromised or go down, that's the entire internet. And for good reason: it's not like you can launch a CA in a day, so we will always need a few horses in this race. Culling the herd isn't a bad idea, but without some diversity any issue could be catastrophic.


>The big problem with having only one CA is that if they get compromised or go down, that's the entire internet.

Compromising any CA affects the entire internet.


Yeah, Let's Encrypt doesn't support OV or EV certs...


EV has been proven to be a lot less useful than thought: https://arstechnica.com/information-technology/2017/12/nope-...


The simple solution is to always require a certificate by Let's Encrypt, and allow the website to optionally present a second OV/EV certificate in addition to the Let's Encrypt DV certificate.

Although it's unlikely that's of any use, since unsophisticated users aren't going to differentiate, and sophisticated ones can use other means to verify identity.



