Why is nobody using SSL Client Certificates? (pilif.github.com)
255 points by e1ven on March 2, 2012 | 105 comments



I'm the author of this now quite dated article, and by now I don't agree with its contents any more - at least not fully:

1) This is not truly two-factor authentication, as the certificate is stored right there on the machine I'm using for authentication. If this ever got widespread use, we'd have issues with malware extracting the key from the browser. Not even password-protecting the key (giving you something comparable to SSH authentication) would help much, as malware presumably has access to your memory anyway.

2) With the problems inherent to the SSL protocol in conjunction with renegotiation (http://www.kb.cert.org/vuls/id/120541), this becomes a bit annoying because you can't safely have resources for which you check a client certificate on the same server as resources where you don't. Remember: The article predates the vulnerability by a year.

3) More and more people are beginning to have multiple devices they use. There's not even a "main" machine any more. Not having access to the client-side certificate is annoying, though with the various browser syncing solutions out there, this might become a non-issue over time.

Nonetheless, my personal complaint about the overly complicated UI still stands - heck, this is even annoying for me, and I would (aside from point 2 above) have good use for this at times - but clicking through that jungle of dialog boxes remains annoying to this day.


The UI is still ridiculous, yeah. But that seems to be par for the course for anything security-related, especially around client-side public keys. Setting up signatures for your email is ridiculous, as is receiving them - the epic warnings on every client I've tried mean I know I can't send signed emails to my family, because they'll freak out and delete the message or shut down the computer immediately. Until that changes, you can be guaranteed it will never gain widespread adoption.


I agree with your reasoning in the article and in point 3 here.

I used to work at a tech firm where we used SSL client certificates to authenticate against our internal CA. Sysadmins got it wrong, devs got it wrong, QA got it wrong, and nobody else stood a chance.

The worst thing about it by far was managing keys between machines. For example, it made it impossible to work from home on an impromptu basis without someone available to help you out.


Would this be able to handle multiple accounts for the same website? Or would you use a separate browser for each account?


Chrome and Safari prompt you (at least on OS X, not sure about others) which certificate you want to use if there are multiple matches.


The bad thing is that it remembers which certificate you want to use, so logging out and logging back in doesn't ask you which certificate you want to use :-(

I've had this issue with the fact that I have two CACert accounts (due to not remembering I had already created one before) and I just recently moved all domains to the newest account...


The browser could ask you which cert to present, but as the saying goes, now you have N problems.


Smart cards (like the DoD Common Access Card/CAC) solve #1 and #3 (no dice on 2 yet).


Renegotiation is fixed in all major browsers and servers with the implementation of RFC 5746. http://tools.ietf.org/html/rfc5746

Unless you're a year or so out of date on your patches, renegotiation works better than ever.
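A quick way to check a given server, assuming a reasonably recent openssl CLI ("Secure Renegotiation IS supported" in the handshake output means RFC 5746 is in effect):

    openssl s_client -connect example.com:443 < /dev/null 2>/dev/null | grep -i renegotiation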


Thanks, I missed that somehow. Although, I did say DoD clients, so odds are they're out of date still.

Neat to get the update from the guy whose keynote about the problem I watched in the first place, though!


I have a bit of a history with SSL client certs; I'm the reason the Windows Azure management APIs all use SSL client certs for auth/security. Looking back, I think it's a bad idea.

- Most developers don't understand them.

- Library support for them is wonky: we've run into issues with curl, the PKI infrastructure in Windows, and weird bugs up and down the stack (a minimal curl example below).

- There is a lot of complexity around cert management: how do developers move certs between machines, share them with their team, etc.?
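When it does work, the client side can be simple. A minimal sketch with curl (the file names and URL are hypothetical):

    # Present a client certificate and its private key with the request
    curl --cert client.pem --key client.key https://api.example.com/resource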

We had a lot of good reasons for using it, and a lot of scenarios were made much easier by it. But if I had to go back and have a do-over, I would probably use OAuth 2.0 (which didn't exist then :) ).


Weren't InfoCards like client certs with better UX? What happened there?


Nobody adopted them (my opinion, not sure what the official take is). Don't think anybody understood them either.


That was one of the features I was most excited about in Windows Vista, and I was really sad that MS did not push them in any way, shape, or form, or even make an effort to use them on their own web properties.

I think they could have caught on at least as well as BrowserID will.


I think the bigger issue in general is that client certificates only authenticate the machine, not the user. That is, unless you password-protect the certificate — in which case you've only succeeded in adding a step to the process of getting into your site, a process that most commercial sites would like to streamline as much as possible because it's essentially a giant ramp in the middle of your funnel.

Basically, even if the cost is marginal, so are the benefits for most sites — and although I haven't implemented this myself, I do not believe the cost would be marginal once you take into account all the new problems it creates.


Yup, plus you have the issue of users getting a new machine and being totally lost.

Or owning 2 machines.

I suppose you could help that a bit, i.e. have a cert used for authentication; if it is not present, ask for the user's info and offer to create a cert for next time.


Isn't this the same problem as with SSH keys, though? I already take care to bring my SSH keys with me wherever I go and have an agent to handle them. In fact, if I could get the browser to use them for authentication, I'd be thrilled.


You shouldn't be taking your SSH keys anywhere. Generate a new one on each machine you use... unless you are using a smart card.


How absurd.

This is two-factor authentication... validating something you have (i.e. the computer) and something you know (i.e. a password).


Except that in most "real" two-factor systems, you can't duplicate the second factor easily. The problem here is if the private key is in main memory, malware can copy it to another computer that will be indistinguishable (for the purposes of authentication) from the first. With something like a smartcard that is made much more difficult. Not impossible, but beyond the reach of most folks.


You have a point. I would like to see crypto chips on motherboards or in the CPU that can generate a key, export a public cert, and then be asked to sign things. Then this vulnerability disappears.


There are TPMs exactly for this. Not very popular, though.


These sound awesome. Wikipedia says many laptops have them. Is this true? If so, how can I detect one on Linux?


There are TPM kernel modules and a tpm-tools package (in Debian, for example). Also, the device sometimes must be turned on in the BIOS setup.
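A rough way to check for one on Linux (a sketch; driver and package names vary by distribution and chipset):

    # Look for a TPM device node and related kernel messages
    ls /dev/tpm*
    dmesg | grep -i tpm

    # Load a driver if none is present yet (tpm_tis covers many chips)
    sudo modprobe tpm_tis

    # Query the chip with tpm-tools (Debian: apt-get install tpm-tools)
    tpm_version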


And all of the recent Mac laptops have them. You need a special driver -- look here for a good writeup: http://osxbook.com/book/bonus/chapter10/tpm/


And that is why they made stuff like browserid:

- UI is slick

- you don't need to remember to carry your certificate around, because while you complain about the UI, do you think the average Joe who's scared of the preferences button will actually copy his cert and carry it around? Please, get realistic for a moment: that is the real major usability issue.

- it uses certs to authenticate to sites as well

Basically, BrowserID provides a third party to store and manage certs for you, and you can request them from anywhere without knowing anything about computing or security.


BrowserID looks neat, but doesn't it rely on using external JS to hit browserid.org?

I'm generally not in favor of adding additional 3rd party dependencies to my websites.


Kind of. We're suggesting that people link to the javascript shim on Mozilla's servers, while we refine the API, but we hope to support self-hosting by autumn. The entire system is designed to transparently drop away as identity providers and user agents implement native support for the BrowserID protocol, but for now we're supplying a shim implementation for the browsers and are acting as a trusted third party for identity providers.

For how this plays out, imagine the following combinations of native support for the BrowserID protocol:

1) BROWSER AND IDENTITY PROVIDER: Mozilla's services are left completely out of the loop.

2) BROWSER ONLY: Mozilla's services step in as a trusted third party to handle validating control over an identity and signing assertions for that user.

3) IDENTITY PROVIDER ONLY: Mozilla's shim and services step in to provide a surrogate implementation of the BrowserID protocol.

4) NEITHER: Mozilla's services step in as a trusted third party to handle validating control over an identity and signing assertions for that user, and to provide a shimmed implementation of the BrowserID protocol.

Again, the protocol was designed to be distributed from the very start, so nothing is locking it down to using Mozilla's shim and services, but they're there for you if you want to use them.

Disclaimer: I do not currently work for Mozilla, but I will be joining the Identity team later this month.


If it depends on JavaScript, it's still a problem. Mozilla has the browser; why not implement it without JavaScript? There are already other authentication methods that don't require JavaScript.


Yeah, I hear you. Still, for Persona / BrowserID to be successful, it has to support users regardless of browser. Choosing JavaScript means that we can build a solution that works for the whole web.


You should still implement a non-JS solution in Mozilla immediately to demonstrate how it's going to work once accepted.

JS-based auth doesn't work for the whole web. Even for other browsers, you have to make a solution where the server can get the e-mail address without JavaScript on the client.

And you should definitely explain the protocol as simply as possible, so everybody who implements it understands what's going on and how your services can be replaced by something else. That is, if you're serious about having a real cross-browser solution.


The long-term goal is to make the API supported by the browser itself. Consider the JS more like a "proof of concept", I think.


I have now built 5 mobile banking apps for iOS/Android for 4 different clients. We have proposed this as a security measure for every app we've built so far. It is a perfect solution, since the banks can then identify the user and ensure the traffic is really coming from them. It also helps that since we're building custom apps, we don't have to rely on a simple cookie+SSL based solution.

Guess how many companies bite? Zero. None of them want to do it, and these are major nationwide banks. So much of their knowledge comes from handling browser-based sessions that they can't imagine a different way.


They're not insane, though. Client certs, like SSH keys, aren't the whole story. Users don't use the same browser all the time. How do you move a cert securely? How do you "move" a cert at all, given that peer-to-peer transfer between devices basically hasn't worked for common users since the end of the floppy disk (seriously: try asking your aunt to copy a big PDF file from her computer to her iPad without broadband)?

Someone needs to do this in the browser. You'd need to put an (escrowed!) passphrase on the cert, back it up to the cloud securely, and associate it with the user's account in such a way that they can get to it. And then enough browsers need to support it to make it worthwhile. That's a tall order.


Technically you should not be moving certs. Each machine should have its own cert, which the bank can choose to invalidate or not. It simply hooks into the first-time process of identifying a user/device.

For example, many banks in the US use security questions to identify a user/device on first run. In Europe they use randomized keycards. Once this has been completed, the device stores a nonce to identify itself across multiple sessions. If you ever kill your nonce, you have to repeat first-time authentication.

Now, granted, this works, and many people might feel it is good enough, but I agree with the author that client certs would work better for identifying users than the current solutions do. (A sketch of the device-side flow is below.)
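A minimal sketch of the device side with the openssl CLI (file names hypothetical): the private key is generated on the device and never leaves it, and the bank's CA signs the request only after first-run verification.

    # Generate a fresh key pair for this device
    openssl genrsa -out device.key 2048

    # Create a signing request identifying this user/device combination
    openssl req -new -key device.key -subj "/CN=joe-ipad" -out device.csr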


You never move a cert, just like you never move an SSH key. You simply generate a new one and add it to their auth. Any time the user is using a new browser, they have to go through extra authentication steps and get issued a new cert for that browser (or not, on a public machine).


"any time the user is using a new browser they have to go through extra authentication steps"

And one wonders why no one wants to do this stuff.

This is poor analysis. You're arguing from an incomplete position. The choice isn't between "some security" (mobile certs) and "best security" (forced reauthentication). It's between "bad security" (multiple accounts with shared passwords) and "better security" (single cloud account with escrowed access to the cert).


The two major public banks in Brazil require clients to provide additional information if they are on a "new" PC. One of them asks for personal information (ID numbers, etc.); the other sends an SMS to your cell phone and you enter that code (a very good two-factor auth).

They don't use client certs, though, and I have no knowledge on the private banks.


What are the usual ways to authenticate the user when generating a new cert?

A user moving to a new browser has to identify himself as the real user, thus requiring some sort of authentication. It's a chicken-and-egg thing.


Automated call? Automated text?


An emailed code is usually the preferred method; however, depending on your needs, a text message or account details may work.


So in this case an SSL cert is just like a cookie. Why not just use two factor (SMS or an app) when there is no cookie?


Banks in the U.S. have been largely content with username and password security, perhaps because their sites don't let you do all that much -- at least so far. Now, with more p2p functionality coming (esp. cross-bank; see clearXchange), that might need to change in a hurry. Some banks (like Wells Fargo) do phone number authentication before executing some actions (like p2p payments). https://www.wellsfargo.com/privacy_security/online/advanceda...

At Clover (www.clover.com) we've built a payment app for iOS and Android which uses client certs to great effect. Once your iOS/Android device is bound to your Clover account using the client cert, you just need a short PIN to protect against unauthorized physical access to the device.

Because we're a native app, we're able to hide all the nastiness of installing the client cert. When the app is freshly installed, we first verify control over a phone number (by sending a text or calling it with a verification code). If that checks out, we issue a new client cert to that device and associate the device with the account bound to the phone number. An account is locked to a (small) set of devices (e.g. iPad + iPhone).


Probably because people want to be able to change browsers/computers without the bullshit of transferring around an SSL cert and doing so in a secure fashion?


Now that Firefox, Chrome and Safari all sync through various cloud accounts, is that really still an issue?


And how do you log in to that cloud account? Hen and chicken...

Not a problem as long as you only have a fixed number of machines to use, but once you start using other machines (family/friends/school/work/library) you need a secure way to access the certs.


egg and chicken of course :P


Given that a good 50% or so use IE, yes?


I believe Windows 8 has this built in, but doesn't have a critical mass yet.


If it were possible to:

1) Synchronise the certificates automatically between every computer in the world;

2) Change user; and

3) Create multiple identities,

I think this would indeed be used on a huge scale already. Traditional accounts do all of this: you can log in to your Gmail account from any computer in the world. You can sign out of your Gmail account and sign in with a different user. If you are feeling paranoid or scared about having no backup Gmail address, you can create a secondary account.

Now you are probably able to use multiple certificates and maybe switch between them, but none of this has been made user-friendly. So that is the crux: browser builders (Microsoft, Mozilla, Apple, and - a bit later on - Google) just didn't pick this up.

Still though, the first problem would remain. But don't I remember entering a password for the self-signed SSL certificate I created for my Apache webserver? Couldn't you do something like that and generate a certificate based on a password? Like signing up for a website, only at the browser level and for every website at the same time?

Maybe there is actually future in this after all. This sounds good to me at least!


Several OpenID providers do use them to identify users. For example, see the link on https://www.myopenid.com/signin_password

Also, I've heard mention that Monkeysphere is working on something to generate client SSL certs that are tied to your GPG key, which would allow logging into a site with one, based on the web of trust.


Usability and cost are the biggest concerns. It might even end up being less secure. You have to carry your key pair with you and import it into the browser anywhere you want to use it.

You have to make sure you delete the key (securely, ideally) before leaving the machine. What is the guarantee that the machine you are trying to use is not compromised or holding on to your keys?

Even then the website is not sure it's really you. They also need to implement password-based authentication. Someone stealing your keys is more likely (given that you want to log in from any machine) than someone stealing your password. Of course they can guess your password, but I won't get into that.

One way to do it would be to build a dongle that holds the keys and provides the authentication service on behalf of the user. But then we would need multiple identities, which means we need multiple certificates, and the cost would multiply.

Anonymity is completely gone. Even if the CA grants you an "anonymous" certificate it can always be traced back to you based on the cert parameters.

So no.


It seems that the only type of client cert we're likely to see in wide use on the web in the near future is origin-bound certificates (Google): http://www.browserauth.net/origin-bound-certificates

The proposal is to automatically generate a self-signed cert for each origin. This gets rid of many of the UI problems: it eliminates the need to choose which cert to send to each site (which also means the user can't be tracked across sites via OBCs, and thus the user doesn't need to grant permission before sending his cert).

On the other hand, it doesn't solve the problem of re-authenticating on different machines. It's not proposed as a primary authentication mechanism, but instead as a means of strengthening ordinary HTTP session cookies (http://www.browserauth.net/channel-bound-cookies).


Given all the things the Google Chrome User-Agent does keep in sync across different User-Agent instances, I'd imagine keeping a host of origin-bound certs in sync wouldn't pose too great a challenge. I'd love to see BrowserAuth take off!


I've implemented two projects using SSL client certificates in the last few years, and one of them, a B2B kind of thing, has been live for a couple of years with several hundred customers using it day-to-day. Sometimes it works great, and other times it is an intractable pain, the browser UI is a pain, and the end users don't really understand it at all. These are not high-tech people. They don't put passwords on their keyrings, they forget the certs are there, they buy new computers and give the old ones away, and then their new PC doesn't work and they panic! For a small number of customers (<1%) it never really works on at least one of their PCs, and I have no idea why.

I think it is a bit of a chicken and egg problem: no-one uses them because no-one understands them, and no-one understands them because no-one uses them. The certificate management UI is also awful, which doesn't help.


I did this one time at home to share some files quickly and somewhat securely, and sent my friend a cert to install. It worked fine. At my Fortune 500, of course, the wifi uses certs, but they don't do any certs for authentication (er, unless you count drive encryption and NT security?) and no HTTPS... of course SSL for users, but no "authorized clients" style of client-based keys.

I think it is probably the perceived overhead of operating your own CA, even though modern web-app-like interfaces exist and make doing such a thing a breeze. It could be that less technical people like architects (who don't have to implement it themselves) just assume that it'll be too much of a conceptual thing to try and sell. It works though! And it seems like it would in theory be more secure, given the feature of revocation, but I guess that's the same as turning off someone's ID in the directory, perhaps.


Ancient article. But as a side note, startssl.com uses them very effectively.


They do, indeed, and they provide a great service.

However, if you have two accounts there, logging out and logging in again causes IE9 to just re-use the same certificate instead of prompting you again.


I found the headline funny: my job for the past 3 months or so has been heavily involved with SSL client certs. Of course, it's on a private network, not the Internet, so that makes it easier. (By which I mean "easier to force the clients to use them", not actually easier to use.)

I absolutely agree the browser interface for certs is terrible. The post is from 4 years ago, and the interface still looks just the same, and is still just as much of a pain in the rear.

I would take this over "What color was your first car?" any day, though.


It is just like signing certificates for servers of any type but the subject is you.

I recommend using P12, as most browsers will "just get it". With Nginx, for example, you can build in SSL support and then configure a directive to request a client cert, at which point the web browser will prompt for a relevant certificate to pass along. Moreover, Nginx can be configured to extract information from the client certificate and expose it as variables.

http://wiki.nginx.org/HttpSslModule#ssl_client_certificate http://wiki.nginx.org/HttpSslModule#Built-in_variables

So one can have ssl.awesome.io and extract info to only allow 'Joe231' to see ssl.awesome.io/Joe231. Even better is matching by serial or what have you. A sketch of such a setup is below.
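A minimal sketch of that setup (paths and names are hypothetical; the directives and variables are from the Nginx SSL module linked above):

    server {
        listen 443 ssl;
        ssl_certificate        /etc/nginx/server.crt;
        ssl_certificate_key    /etc/nginx/server.key;

        # CA whose client certificates we accept
        ssl_client_certificate /etc/nginx/ca.crt;
        ssl_verify_client      on;

        location / {
            # Pass the verified subject DN on to the application
            proxy_set_header X-Client-DN $ssl_client_s_dn;
            proxy_pass       http://backend;
        }
    }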

Now my 2 cents on the problem: it confuses people, and revocation and issuance are a hassle. I'm guessing here, but the DoD had certificates built into our ID cards, and those were extracted with a reader when needed. Not sure, just a guess. Personally I think the cryptic nature of command-line OpenSSL is what keeps the mob from pushing new tech onto everyone. Think about it. Some comments present here are disappointing for Hacker News; you should be playing with cryptic technologies and making them work.

Want a startup idea? Plop certs into AXE body spray cans and done.

oh looky what Wikipedia cited: http://www.cs.auckland.ac.nz/~pgut001/pubs/pfx.html


We used SSL client certificates for the 2008 Olympics ticketing systems. What a nightmare.

It's hard to know how much to blame on Windows and how much has changed since then, but they were a never-ending source of pain. We had to provision and manage 1,000+ workstations, dispersed not only through Beijing, but all of China. We couldn't find a way to install the certs as part of the imaging process, so we attempted to automate as much as we could. Only on Windows (XP, I believe) you couldn't automate the entire installation. So we had to print up instructions and try to make them understandable to bank tellers. (Olympic points of sale were located at various Bank of China branches.)

Additionally, each certificate was stored with a specific Windows account. So either accounts had to be shared, or we had to provision each machine for the dozens of tellers who might use it. (As well as making sure the process was easy when someone new started, and that the certs were revoked when they left. Again, ugh.)

BTW, did you know that Java on Windows XP w/ the Chinese language pack has a different default classpath than Chinese Windows XP? One of the other joys we discovered.

Actually, maybe the moral of the story here isn't that client certs suck, but rather that Windows does.


I don't necessarily agree with the perceived benefits highlighted in the article.

(i) No need to enter username and password: Yes, we won't have to enter those, but the real problem is keeping track of multiple usernames/passwords, not entering them. Even if we implement certificate-based authentication, we will probably need multiples of them. A single identity is unlikely to happen because of (a) privacy - you won't want to use the same identity (username or certificate) on Facebook and a .xxx site - and (b) security - sites that care about security, like banks, would want stronger control over identity (e.g. by expiring passwords or certificates periodically), whereas other consumer-facing sites would not. Given these reasons, a single identity is not going to happen anytime soon. If keeping track of multiple usernames/passwords is a problem, think how hard it would be to keep track of multiple certificates.

(ii) Two-factor authentication: Well, just storing certificates in the browser/computer doesn't make it two-factor authentication. The second factor should be something outside your computer, such as a hardware token or your mobile phone. If you store your certificate on your laptop, the same malware that steals your password would steal it too.

(iii) Delegation and revoking: OAuth already solves this problem. You don't give your Facebook username/password to a third-party site for it to access your profile. And you can revoke access anytime.

Certificate-based authentication is useful if you store the certs on some third device like a smart card or USB key. There are a bunch of companies doing that, but they mostly sell to enterprises. We don't want to carry multiple cards, right?


So just like how I authenticate with GitHub and SSH? That could be very useful.


Hacker News you have let yourself down.

Half the comments on here are about moving certificates! YOU NEVER MOVE PRIVATE CRYPTOGRAPHIC CERTIFICATES!


Can you educate me? Are you saying that using something like 'openssl x509' to export a certificate as PKCS12 to import into my browser is a bad thing?


I'm not sure I fully understand. However, if the private key ever leaves your computer, it's a Bad Thing.


Hmm, what I described above is how a client certificate signed by a certificate authority is created.

A random site that describes the procedure is below. What is the right way to do this?

    Exporting the private key of the certificate

    Enter the password of the private key, and then export the client key onto the generated client certificate using the following command:

    openssl pkcs12 -export -out client.pfx -inkey client.key -in client2.pem

http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/index...


Not to be too pedantic here -- but what do you mean by "PRIVATE CRYPTOGRAPHIC CERTIFICATES"? Certificates are intended to be public. Private keys are intended to be private. They are not equivalent although they are certainly related.


Public and private keys are just very large numbers; they are stored in various formats. Certificate may have been the wrong word, but the gist is, you should never copy your private key anywhere.


I disagree about the copying. You should protect your private key wherever you take it, but copying it is certainly a reasonable thing to do under some circumstances. With an appropriate passphrase, storing your private key in a PKCS#12 file is just as secure as storing the private key in a software-based OS or application keystore. In fact it may be identical, depending on the OS or application. You are subject to the same attacks (password jacking, in-memory key copying) in both cases. I keep some of my lower-value private keys on a USB stick for exactly this scenario.

The only way (IMHO) to get around those attacks is to never decrypt the key on a machine with untrusted software running and accessible memory. The only device that comes close to this is a smartcard or TPM type scenario, which uses a separate CPU and protected memory to do the RSA operations.


I don't have a problem with copying, I have a problem with copying between machines.


They work as a pair. If you access from a different machine you need the private key on that machine. You can't just copy the public key over to that machine and expect it to work.


It depends on what you are doing. You can do public key operations without the private key (for example signature verification). You can do private key operations without the public key (for example signing). In practice the certificate is always available near the private key, but the reverse is usually not true -- there are many cases where you will have the certificate but not the private key.


Probably because no one understands them. Using myself as an example, any time I've had to do any work with certs, I've had trouble figuring out how to get them to work at all. I'm no security expert, but neither are most devs.


I've used client certs several times, but not necessarily to authenticate a user. We have a few mobile apps with a public facing HTTPS API endpoint. Currently, we have it set up to use a client cert that we ship with the app itself to 'secure' the connection between the app and the server. In reality, it's no more secure than embedding a username/password in the app itself and using basic auth.

It was slightly tricky to get the iOS/Android programmatic HTTP layers to properly format and present the cert to an auth challenge, but since we figured that out, it's been seamless.


SSL client certificates can be easily and completely stolen by worms and botnets. So they can't be used by serious applications like online banking.

For everything else, there are much simpler solutions.


That problem is not specific to SSL client certs. It potentially affects every authentication mechanism (on the same computer). Typing in your passwords? Keylogger. Using a password manager? Stolen passwords. The password database is encrypted and you have to type in a password to decrypt it? Keylogger.

If your system is rooted / infected with malware, you've lost. The only solution is to format the drive and start over.

The alternative is to use a 2 factor authentication mechanism that uses a separate device, like your phone receiving a text message. That's a pain for the average user, and certainly not "simpler".


People are very familiar with text messages. It's very easy to explain.


Worms and botnets can steal passwords just as easily, and session cookies almost as easily. What do you consider appropriate for online banking?


The certificate is public anyway; you'd need the private key as well in order to authenticate.


It creates a support headache. I have rolled out client certs on a number of projects, the first over 10 years ago. I now know the perfect project that suits client certificates: authenticating work computers in high-risk transactions where you have a small base of users (500-5000).

In one scenario, even with 500 users, we would get 2-3 support calls per day regarding the certificates. And having Verisign set us up as an intermediary with the ability to sign client certs was very expensive.


You could use openssl or openca for free....
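For reference, a bare-bones private CA with the openssl CLI looks something like this (a sketch; file names hypothetical):

    # Create the CA key and a self-signed root valid for ten years
    openssl genrsa -out ca.key 4096
    openssl req -new -x509 -key ca.key -subj "/CN=Example Internal CA" -days 3650 -out ca.crt

    # Sign an incoming client certificate request with that CA
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt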


Customers were banks, so they wanted the brand name.


Apple uses client certificates for their APNS API (push notifications). Both to verify that you can actually send push, and to identify which app and environment (dev or production) you're sending push for. I have also used client certificates for a number of computer-to-computer systems.

So client certificates are being used, even though people aren't using them in their browsers.


The process for installing client certs is wonky. We set up a PKI to authenticate computers and allow us to use SCCM inside or outside the firewall -- it works great.

I tried to get other folks to use it, but people equate certs with complexity and money, and their eyes roll back into their heads if you mention PKI.


The last time I tried to get Apache to authenticate with client certs it was an absolute nightmare.


I didn't find it to be an absolute nightmare, but it wasn't trivial either. Here's the configuration that works for me: http://boston.conman.org/2009/03/02/siteconf.txt
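For the record, the core of an Apache setup comes down to a handful of mod_ssl directives like these (a sketch with hypothetical paths; getting the CA chain and verify depth right is usually the hard part):

    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key

    # CA that issued the acceptable client certificates
    SSLCACertificateFile  /etc/apache2/ssl/ca.crt

    <Location /private>
        # Require a valid client certificate for this area only
        SSLVerifyClient require
        SSLVerifyDepth  2
    </Location>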

I also found that the Firefox Convergence add-on (which itself is a neat concept) breaks client certificates (which I keep forgetting, so when I hit the private areas of my secure server, I keep getting denied until I remember to disable Convergence).


My experience as well. I tried to implement client-side certificates for an internal system (changing computers/browsers/ugly UI/etc. not an issue), but couldn't quite get it to work.

And in that setting it was not worth the time compared to Basic Auth.


What is even more baffling to me is that many bank cards now incorporate smart card chips, yet they are never used for online activities. Every desktop browser has decent support for card readers; hand all your customers a $5 card reader and save millions a year on fraud.


"never" is a strong word. I know at least one (fairly large in it's country) bank that does credit card based auth. And if there is one, there probably are others too.


Every bank in the UK uses card readers.


I think that we can pretty much all agree that SSL is the new WEP; broken beyond repair. If you don't believe me, try using OpenSSL in C.

Very relevant: https://en.wikipedia.org/wiki/X.509#Security


An alternative to browser certificates:

http://dswi.net/


This is indeed very simple. But I'm afraid this will evoke what I call the "crypto gag reflex" which in this case would be, "client-side javascript + crypto === BAD."

How do you respond to this criticism? Is it valid in this case?


It's a very valid criticism, and it's the main reason that DSSID is not quite ready for prime time. I'm participating in a W3C working group trying to develop new browser standards that would allow secure client-side crypto with a better UI than client certificates. It's a very hard problem.

But even in its present state I would argue that DSSID is at least as secure, and probably a great deal more secure, than any other extant authentication scheme out there. If you trust SSL, trust the DSSID server to be secure against write attacks (i.e. attacks that allow a hacker to change the content being served), and pay attention to the URL in your browser bar before entering your pass phrase, then DSSID is secure. (There's actually one other vulnerability, and that is malicious plug-ins, but no software-based security scheme can possibly be secure against that.)

By way of contrast, with passwords, you have to trust EVERY SINGLE SITE that you log in to. You have to trust the certificate chain for every single site that you log in to. (With DSSID you only have to trust the certificate chain for your DSSID server.) You have to trust that every single site that you log in to has not had malicious Javascript injected. You have to pay attention to the URL of every single site you log in to, not just one.

Most problematically, many HTTPS sites do not have valid certificates, which trains users to ignore warnings about invalid certificates, which essentially undermines ALL of the security of SSL. So having to only ever trust one certificate (which will always be valid) is a huge win.

If DSSID were broadly adopted, it would become more secure. Once the code is stable enough, I am going to open-source the DSSID server code so that anyone can run their own DSSID server if they don't trust mine. The JavaScript that is served from the DSSID server is static (except for the nonce), so it can be independently audited and verified. The DSSID protocol doesn't have to be implemented in JavaScript. You could write a plugin that implements it, or build the capability into a browser. It's a lot easier to plug the holes in DSSID than in anything else out there.


It's bizarre that we're reduced to forms-based authentication. Surely even basic authentication could be improved to include the standard signup/forgotten-password flows. Odd that this crucial UX is so overlooked by browser devs.


Well, I frequently use client certs. My company develops and maintains a web-based application platform. The platform is only accessible using client certs and is used daily by a couple of thousand people.


We used client certs in my last project, a private cloud document storage API. Big mistake! I probably spent a quarter of my time helping other teams organise their certs.


IIS 5 used to have issues with client certificates. I remember random PHP process crashes when we tried using them.


Can anyone recommend a good smart card device that works with openssl and firefox?


The U.S. Government is doing it. It's called PKI.


because PKI is hard


LOL, I was going to write this. I worked on Government PKI for 4 years. Most enlightening security course ever.


It's the classic chicken and egg problem. Few sites use them because they correctly believe that the browser configuration is too painful for most people. The browser configuration remains painful because users aren't pressuring browser makers, and they're not pressuring browser makers because they don't use any sites that make such configuration necessary. It won't change until (a) enough sites require client certs despite the painful configuration or (b) browser makers take some initiative to make configuring them less painful. Neither seems very likely.


(b) is sort of happening, but with a new protocol (BrowserID).



