Get HTTPS for free (gethttpsforfree.com)
731 points by somecoder on Jan 30, 2016 | 159 comments



This is my project, happy to answer questions or receive feedback. The goal was to let people experiment with getting a Let's Encrypt cert before they had to install anything on their server. The static/unhosted property is to strengthen trust that nothing shady is going on here.


Just used it successfully (with some changes at the bottom -- converting the certificate so I could import it to Windows) to get a cert for my IIS web server.

Worked like a champ!


Once again I find myself looking at code you wrote and appreciating it. If we ever meet in real life I owe you some beers.


Thanks! Make sure you use https everywhere :) Also, my co-founder would shoot me if I didn't mention our startup is hiring.


Nice promo there! I will never be paying for certificates again.


Does this support wildcard subdomains? If not, would you be willing to add such support?


This uses Let's Encrypt, which doesn't support wildcard certificates yet:

> Will Let’s Encrypt issue wildcard certificates?

> We currently have no plans to do so, but it is a possibility in the future. Hopefully wildcards aren’t necessary for the vast majority of our potential subscribers because it should be easy to get and manage certificates for all subdomains.

From https://community.letsencrypt.org/t/frequently-asked-questio...


Unfortunately, that doesn't work with dynamic subdomains (i.e., domains assigned and edited by users). Hopefully they'll change their minds in the future - until then, I'll be paying for a commercial certificate.


You could always script the Let's Encrypt API and generate a new certificate on each subdomain creation.
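
For instance, a rough sketch of per-subdomain issuance with an ACME client like acme-tiny (the domain, key paths, and challenge directory here are all placeholders):

    # issue a cert for a freshly created subdomain (names are made up)
    SUBDOMAIN="customer1.example.com"
    openssl genrsa 4096 > "$SUBDOMAIN.key"
    openssl req -new -sha256 -key "$SUBDOMAIN.key" -subj "/CN=$SUBDOMAIN" > "$SUBDOMAIN.csr"
    # the client completes the HTTP challenge and prints the signed cert
    python acme_tiny.py --account-key ./account.key --csr "$SUBDOMAIN.csr" \
        --acme-dir /var/www/challenges/ > "$SUBDOMAIN.crt"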


That's correct; however, there are rather aggressive rate limits in place right now that would make this hard for your typical SaaS-on-a-subdomain deployment if you have more than ~5 new signups per week. Plus, if SAN support is a concern, wildcards are preferable too.


The rate limits[1] I see documented are 500 registrations per 3 hours. That's a lot more than ~5 new signups per week. More like ~16800 new signups per week, no?

[1] https://community.letsencrypt.org/t/rate-limits-for-lets-enc...


Certificates/Domain is the one that would affect this use-case the most. It's set to 5 certificates per domain per week. More specifically, it's certificates per TLD+1, so one certificate for customer1.example.com and one for customer2.example.com would put your rate limit for example.com at 2, thus limiting you to 5 signups per week unless you spread your SaaS over multiple TLD+1's.


Wildcards are important and LE should support them, but it will perhaps take some more work on the validation rules. Dynamic subdomains are powerful stuff, and even a real-time automated cert request is a poor substitute for just having the wildcard. If you're doing subdomain-per-customer, the wildcard cert is definitely preferred, particularly if you're properly multi-tenant all the way down the stack.


Ah, I didn't catch that this limit was applied to the TLD+1.

Weird, why allow a generous 500 registrations per 3 hours, while limiting certs per domain like this? Anyone have a link to anywhere that letsencrypt explains what they are trying to do here?


Registrations don't cause a lot of load. They're essentially just one row in a table.

Certificates have to be signed by a Hardware Security Module with limited capacity. OCSP messages have to be signed every couple of days for the lifetime of a cert by the same HSM. This is significantly harder (and more expensive) to scale.


How do they define a TLD? What's, for example, .co.uk to them?


They use the Public Suffix List[1].

[1]: https://publicsuffix.org/


Hmm, are you sure they do? Including the "PRIVATE" section? Any docs from them saying this, and clarifying whether this includes the PRIVATE section?

Because if so, that would seem to make the certs-per-domain limits not so much of a problem. If you own example.com, and have customers using sub-domains at a.example.com, b.example.com, etc -- that would seem to make example.com suitable for inclusion on the "PRIVATE" section of the list.

No?

"owners of privately-registered domains who themselves issue subdomains to mutually-untrusting parties may wish to be added to the PRIVATE section of the list... Requests for changes to the PRIVATE section must come from the domain owner."

https://publicsuffix.org/submit/

And indeed there are a few dozen random .com, .net, etc domains in the PRIVATE section. For instance `github.io` is listed there.

If that's the way for SaaS providers to get free certs from letsencrypt for their customers at customername.provider.com, I'd expect to see the listings in the PRIVATE section skyrocket.


Yes, private suffixes are included. It has already caused a spike in new PSL submissions[1].

You're right about this being rather easy to bypass, but the main goal is probably not to mitigate abuse but rather to prevent buggy automation scripts stuck in some kind of infinite loop from DDoSing them.

[1]: https://community.letsencrypt.org/t/dyndns-no-ip-managed-dns...


A TLD. They define a domain as anything the average user can purchase.


Oh that's a bummer, I was just looking into using this for a free service I made a while ago. Hopefully they'll bake in support at some point


Note that those are the current limits; they have stated that they plan to raise them greatly in the future.


5 certs per domain name per week. I'm currently rate limited; I should be able to get my www covered in 6 days.


I almost went down this route, then realized I could avoid all this R&D and just pay $40 for a wildcard cert.


$40? I paid over $90 for mine. Can I ask where you got it from?


https://www.ssl2buy.com/alphassl-wildcard.php

Here's where I got mine, works great.


Nice! Bookmarked for next renewal. Thanks!


StartSSL do wildcards for $30/yr (minimum 2 years).


And here you can pay with bitcoin: https://hostigation.com/?page=SSL


As mfkp said, that's where I got mine too.

Important though: for compatibility with Firefox and some other browsers, you'll need to copy the intermediate cert to the end of the cert file. It works fine with 2 certs in the file, just put the intermediate at the end.
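
Concretely, that's just concatenating the two PEM files (filenames are placeholders):

    # nginx/apache will serve both certs from the combined file
    cat example.com.crt intermediate.crt > example.com.chained.crt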


Having only half a dozen subdomains, with maybe another half a dozen being added per year (well below the limits), are there any advantages to using a wildcard cert vs. individual certs for the subdomains? In other words, is there any way to justify the extra $30/year for a wildcard cert?


If you're thinking you're going to use LE, there are rate limits which make individual certs for subdomains unreasonable.


Can you limit the width of the main content container? Lines that long aren't nice to read. I'd probably bump the font size and switch to a nice sans serif if I were you.

I almost pasted the page into something else to make it easier to read - before realising how short it was! It's still a touch off-putting as it is.


Thank you, I was looking through it trying to find your email to thank you.

The process of getting SSL certs is not very simple, and the way you laid it out for us/me really helped.

My small site that I used as an example, zeljko.rocks, is now HTTPS secured.


Thanks, diafygi! I used this a few weeks ago for a new domain and it was incredibly easy to get set up. Definitely appreciate you providing both Nginx and Apache instructions as well. Great tool here!


So if you use the Keychain Assistant on OS X to generate a CSR, that should be enough? Or am I missing something here? Thanks for the service!


If they generate x509-compliant CSRs, then this should work. If not, please file an issue with the CSR, and I'll take a look to see why it's not parsing correctly.


Yup, though not sure I am doing it right.

Printed the public key and that worked fine. Went to step 2 and pasted in my CSR, and that also worked fine.

Basically Start Keychain -> "Request a Certificate from a Certificate Authority" -> Save to disk.

Though weirdly enough, my public key generated from keygen didn't quite work; I actually had to use

"openssl rsa -in myPrivateKey -pubout" for it to accept it. Why is that?


Slightly off topic:

I know everyone here is all about naked websites but I couldn't help but add these three lines of CSS to the body:

  max-width: 630px;
  margin: 0 auto;
  padding: 0 15px;
Makes the whole thing much more pleasant to read! (And even looks good on mobile)

Here's a screenshot: http://imgur.com/UFHJp8a



Grey text is one of the silliest trends I have seen.

If you've changed the background from white to light-grey, that's enough. Stop changing text from black to dark-grey (especially the non-bold text)!


10x this. See http://contrastrebellion.com/

When this is additionally compounded with a non-standard, super-thin font, the result is text that is almost invisible, even when zoomed, at least on non-retina non-Macs.

I also noticed some (very few, but not that rare) websites use a font that looks completely rubbish on my Windows machines (certain serifs not being displayed, making it impossible to decipher letters). I thought it was impossible that every Windows user had this; they would have learnt. It turned out my ClearType settings hugely affected the rendering of the mentioned font (ClearType is the antialiasing method on Windows; when you enable it, it goes through a manual calibration process, hence every Windows machine may have a radically different config).

Don't go wild with colors and fonts for the main content of the page. The more standard ones you use, the better the chance it will render well for every user. Not everyone has the same device with the same config as yours.


When you're aware that high contrast can exacerbate and trigger scotopic sensitivity reactions in those with dyslexia, that page takes on a different tone.


Okay, so should I have to provide a screen reader download with my website as well? If you have a condition that between 5-17% of the world has[1], then you should be responsible for setting up your own environment (possibly with the help of your OS) the way it works for you.

[1] https://en.wikipedia.org/wiki/Dyslexia#Epidemiology


You want a huge portion of the world, most of whom don't know how, to override your stylesheet because you can't be bothered to take five seconds to ease up on contrast (or even consider it)? Please tell me you're joking.

This is why the ADA happened.


>You want a huge portion of the world, most of whom don't know how, to override your stylesheet because you can't be bothered to take five seconds to ease up on contrast (or even consider it)?

No, but given the way that the web and browsers are designed, along with the way that my site is designed, the option is there for people who want to take it. Again, should I have to provide a link to a spoken version of my website in case somebody who is blind doesn't have a screen reader installed?

ADA is about making things accessible. So long as somebody can, with reasonable accommodation, access my website (for example, my design being simple and trivially overridable with user styles) then I'm accessible. If someone with dyslexia cannot handle the contrast levels of my site because a small minority of dyslexic users have their dyslexia triggered by that amount of contrast, then that is precisely what user style sheets are for, and I don't think it's unreasonable to ask somebody who is a minority of a minority to use the tools that are provided implicitly for them to configure things in a way that is easier for them.


More likely 5%-17% of your readers will just piss off somewhere more friendly.


As is their right, certainly.


I didn't know about SSS. If you have it, I feel very sorry for you. Do you use some special apps for minimizing the effects?

It's true that modern devices, particularly mobiles, emit too much light. I always set device/monitor brightness to very low. Additionally, I use f.lux/Twilight to shift colors slightly towards red. When the page is black on white, I can adjust brightness enough that it doesn't bother me too much.

But when the page is gray-on-gray in a thin font, the only way to read the text is to enable a custom stylesheet that changes the font to black Georgia. Easy on desktop, not so easy on mobile.


Hard black on hard white is 21:1 contrast, and most accessibility guidelines aim for something on the order of 3:1, 4:1, or 7:1, depending on usage. Black text is tiring on the eyes with an LCD, and high-contrast text can trigger word motion and blurring for those who suffer from dyslexia. That's right: your text contrast in your 'ideal' black/white world can make dyslexic readers struggle to read what you've written. It's not 'trendy'; it's out of respect for those with disabilities.

Please make an effort in your own work to respect those in the world with dyslexia, colorblindness (never use red/green for status without another visual indicator, an overwhelmingly common sin that I'd wager 80% of those reading have done at some point), full blindness, and many other disabilities that can affect those who use your software. If your organization doesn't have accessibility guidelines, it should write some; there are a lot to start from online. You can (and should) also run your software through accessibility tools to help.

Hard black, #000000, is a modern thing and itself the trend. Typography was never in history hard black until web pages started just throwing 'black' on 'white' down without understanding color and color physiology. Hard black, all zeroes, is even outside the generally usable color range in television; almost all broadcast equipment stays in the 16-255 range and relegates 0-15 to superblack purposes. Notice you can tell when a TV is on? That's why, and there are numerous technical reasons for even that.


I wouldn't call it a trend for trend's sake; it's much softer on the eyes, making it easier to read. But I would have preferred a 1.25 line-height :)


#222 on #eee is fine but some sites do #777 on #ddd or go farther. This radically lowers the contrast and makes text really unpleasant to read with some fonts.


This is not a trend. For instance, finding pure black in fine art or elsewhere is next to impossible, and if you do, it's a modern trend (not the opposite).


But this is text. Black ink.


Look at a printed book or magazine. The ink is dark. But it's not black.


The contrast ratio (how bright the whites are relative to the blacks) is the important thing, and the contrast ratio of printed text in sunlight is a lot greater than your average LCD's.


Neither is 0,0,0 on your typical screen.

Black is the goal in the case of printed text, and should be the goal on a screen too.


I always wondered about this one, you hear it a lot: "A little less contrast. Black on white? How often do you see that kind of contrast in real life? Tone it down a bit, asshole. I would've even made this site's background a nice #EEEEEE if I wasn't so focused on keeping declarations to a lean 7 fucking lines."

But my monitor is not perfect, nobody's is, so why make stuff less readable? I don't understand it. It annoys me to no end when I have to copy-paste stuff into a text editor just to get a white background, because someone decided a background should never be white and people seem to agree. If I want less contrast I'll tone down the backlight of my screen.


There are lots of people who are annoyed to no end that the background of most websites is white and their browser does not offer a simple way to change that. Plugins like Stylish help both groups. Just set the background color to what you want in a custom CSS style, cope with the broken graphics, and enjoy the web.


So, when your background is too bright... why not reduce monitor brightness? The letters will stay as black as they are (hey, it's 2016), and your background gets less bright. This has other advantages too. Why must a website creator choose how much (maximum) light my screen emits around the letters?


I do often reduce it, but this makes everything else on my desktop too dark to read. So I have keyboard shortcuts that run xrandr --brightness <intensity>, which I use frequently, but that is not the ideal solution for everyone. A better solution to this problem would be for websites to detect a brightness+contrast preference set in the browser, the way they detect language or browser window size, and change their CSS accordingly. It would be a great accessibility and UX improvement for the web. We should make a standard for that and push it into the browser projects. Anybody experienced with such a task?
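
For reference, the shortcut boils down to something like this (the output name is whatever xrandr reports for your monitor):

    # list connected outputs, then dim one; 1.0 is normal, lower is darker
    xrandr | grep " connected"
    xrandr --output eDP-1 --brightness 0.7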


Just because I want a less bright background for reading text doesn't mean I want my monitor to be incapable of displaying a bright white - it's not only used for reading text.


While I agree with most of this website's recommendations, I thought this line ironically looked too long.

echo "200 fucking characters long. Keep it to a nice 60-80 and users might actually read more " | wc -c

I was right.

    89


This makes me think, is it possible to set that as the default stylesheet for pages that don't have CSS?


You could try resizing your browser window. Not everything has to be maximized.


Resizing windows is one of the most annoying interactions to perform on a computer.

You have to zero in on that edge with your mouse. There's only a window of ~10 pixels, so you have to try to be precise, and it seems to be a rather expensive graphical operation, because I always see ugly, jolting artifacts, and I have a GTX 970.


I'm usually using Linux, so I forget how painful this is on other systems. On Linux (and similar), you can just Alt+Middle-drag (or Alt+Right-drag, depending on Gnome vs. KDE) anywhere in the window. The closest corner will move.

Edit: I also highly recommend using mice instead of touchpads for all computer interactions. I cringe when I watch a coworker using an Apple magic touchpad, because they have to move so slowly.


After using Linux for 10 years and having recently switched to OS X, I seriously miss this feature. Does anyone know how to get it on OS X?

Also, Alt+Left-drag to move the window is super nice (not having to aim for the title bar).


I may well be an exception, but I actually find it easier to read with wider columns.

I don't like narrow columns being forced on me, and if line length bothers you, can't you just set your browser to the width you like?


> just set your browser to the width you like?

Thank you for saying it. This "fixed-width-in-pixels" thing that has taken hold completely baffles me.


And then a site adds a sidebar and it all breaks.


The variation I keep seeing today is sites that lose any left margin at all when the browser is less than 1000px wide (or 930 or whatever magic number the designer had in mind that week).

It's pretty special seeing text butted up hard against the left margin. I'm glad the proliferation of web standards has advanced to the point that this is acceptable!


You aren't alone - I find huge gutters on the left/right to be very distracting. If I have my window set to be very wide, I WANT TO USE THAT WIDTH.

However, I realize that I'm in the minority - I don't find narrow columns easier to read, I don't consider analog watches "more attractive" than digital, while I love physical books I don't feel I lose any experience in going to a nice eInk reader, and I find most icons to be worse than a short text label. For these sins I am prepared to simply suffer a tiny bit until society catches up to me :)



Someone already turned this into a pull request: https://github.com/diafygi/gethttpsforfree/pull/66


I don't have a github account.


It's easy to sign up! Maybe something small like this will get you started :)

https://github.com/join



You can get one the same way you got one here.


Is that right?


Bookmarklet:

javascript:(function(){var%20nbr%20=%20prompt("pixels","400");var%20a%20=%20document.getElementsByTagName('body')[0];a.style.paddingLeft=nbr+"px";a.style.paddingRight=nbr+"px";})()


or go with oliv__'s suggestion:

javascript:(function(){var a = document.getElementsByTagName('body')[0];a.style.maxWidth=630+"px";a.style.margin="auto";a.style.padding="0 15px";})()


That's cool, thanks!


What is the padding for? Doesn't make a difference for me.


Mobile.


@kelukelugames

Sure. The padding adds 15px of space on both sides of the body so that the text doesn't "stick" to the edges when the size of the screen is less than the defined max-width (i.e., on mobile). Doesn't make a difference on desktop though.


Thank you! Now I don't have to use Bootstrap on every site I make. :)


Can you please explain? I want to improve at CSS, but I have an underdeveloped eye for design.


This stuff belongs in the domain of graphic design. The rule of thumb is to provide padding between text and the edges, a little more to the sides than to the top, e.g. [padding: 1em 2em;], depending on the font used, because of the inherent vertical padding in writing. If the text hugs the container, it seems cramped, and we get distracted by the lines of the bounding container.

I recommend giving "The Elements of Graphic Design" by Alex W. White a read, where he talks about issues of space and typography and why certain conventions exist. CSS is, after all, a way to express graphic design relations in code, which isn't very useful if one is unaware of the principles driving clear and useful design.


If you don't add padding, the text will touch the very edge of the screen when in a mobile browser.

[edit] see oliv__'s response, it's more detailed than mine :)


You know, not running your browser maximized on your widescreen monitor would solve that problem, for every site you visit, with no CSS required.


But... why? I maximize all the windows I use. Less window management hassle.


> Less window management hassle.

Actually, no, you still have to perform the "find the window" function and the "bring the window to the top" function, for both maximized as well as non-maximized windows. And once any given window is sized, the 'management hassle' for 'find and raise' is identical between both modes (maximized and non-maximized).

The screenshot you posted is of a Mac, so because of that I am going to assume you are using a Mac (because otherwise you would have posted a screen shot of some other OS). This explains why you believe that window management is a hassle.

Mac's window management support is on par with MS Windows window management, which is to say, they both suck for handling more than a handful of windows at once, and they both suck for doing much more than "find something - bring it to the top" (and even that function is awful on both). Which is really sad, given that Apple is the company that brought the whole multiple-overlapping-windows GUI model into popular view (Apple Lisa).

Additionally, because you are using a Mac, I am going to assume you are likely using a Mac laptop (simply due to the sheer number of Mac laptops sold vs. Mac desktops, there is a significantly higher probability you are on a Mac laptop). This leads directly to a second reason why you believe window management is a hassle. On almost any touchpad/trackpad, with the current UI hooks used by Mac (and Windows as well), window management is a hassle and sucks terribly.

This is because all the UI hooks are designed around a mouse with a separate button (or buttons) that can be held down independently of creating mouse motion. On any touchpad/trackpad that lacks separate mouse buttons that can be held down with your second hand, performing any kind of "hold mouse button down while moving" operation is made significantly more difficult and a major hassle. This is due to the need to reposition your fingers on the pad to perform large movements, and the act of repositioning loses the "hold button down" mode, so you have to repeat the "hold button down" action just to reposition your fingers.

But this is not a flaw with multiple-window UIs; this is a flaw with Apple deploying a touchpad without separate buttons and failing to update the UI interface to better mesh with the physical constraints of a single touchpad without external buttons.


I didn't post the original screenshot; you're probably confusing me with the poster of the parent comment.

I am using Windows and I'm writing this from a desktop, from a Linux VM. I don't find window management to be much better when using a mouse. I maximize all my non-instant messenger windows.

Maybe one day I'll use a different window management paradigm, but I doubt it. I already have a special keyboard layout and every time I use a standard keyboard layout I feel/look like a newbie since I need to adapt to the new circumstances. Switching to a different window management paradigm would mean that I would have an even harder time adapting to using another PC.


> This website is static, so it can be saved and loaded locally. Just right-click and "Save Page As.."!

This strikes me as particularly neat. I wish more SPAs were able to work like this.


It would be somewhat hard. You couldn't do things like CSRF tokens, or use any kind of data that comes from the server and is rendered on the page. Also, the assets couldn't be named with hashes to be cached indefinitely.

But the concept is interesting.


> Also the assets couldn't be named with hashes to be cached indefinitely.

I don't see why not.


For those who are interested, I posted an article[1] a little while ago on how to automate the renewal process for Let's Encrypt using Daniel's acme-tiny[2] script. It's a lot nicer to let cron handle it than doing it manually ;)

[1] http://robmclarty.com/blog/how-to-secure-your-web-app-using-...

[2] https://github.com/diafygi/acme-tiny
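
The gist of such a setup, as a rough sketch (paths, filenames, and the reload command are placeholders; see the acme-tiny README for the canonical version):

    #!/bin/sh
    # renew_cert.sh -- re-sign a standing CSR and reload the web server
    python /opt/acme-tiny/acme_tiny.py --account-key /etc/ssl/account.key \
        --csr /etc/ssl/example.com.csr --acme-dir /var/www/challenges/ \
        > /tmp/signed.crt || exit 1
    # append the Let's Encrypt intermediate here if your server expects a chained file
    mv /tmp/signed.crt /etc/ssl/example.com.crt
    service nginx reload

    # crontab entry: monthly is well inside the 90-day window
    0 4 1 * * /usr/local/bin/renew_cert.sh 2>> /var/log/acme_tiny.log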


This is definitely a step in the right direction. It's bugged me that vendors are leveraging a commercial and proprietary system to secure sites. If we are going to move forward with this as the baseline of security for public-facing sites, then it's good to see a free and transparent solution pop up to help lower costs for students and the developing world.


Very nice, I quite like it!

I recently hacked together a completely web-based, client-side CSR generator for PKCS#10; you can take a look at it at https://johannes.truschnigg.info/csr/. With something like that fused into your project, users wouldn't even have to execute `openssl` to generate their key material and CSR; they'd just need a modern browser with support for the W3C Web Cryptography API.


We also use WebCrypto on https://certsimple.com

If people prefer, we also dynamically generate a full OpenSSL or PowerShell command, so they can make keypairs on their own server with a single paste - no terminal Q&A.

We're awesome, but there's nothing stopping you from using the tools wherever you want. :^)


What is the HTTPS/security solution for devices on a home/office LAN? They aren't externally accessible and don't have a globally unique name, but they do have access to valuable content (think of your router, baby camera, lighting controller, NAS, media device).

Having to teach users that you always see the padlock when accessing your valuable information over the Internet, but not when accessing your even more valuable information on the LAN, doesn't seem good.


Office setups: Deploy an internal root CA (possibly with appropriate name constraints, to limit the damage to internal domains if your CA key is compromised). Active Directory makes it relatively easy to do this.
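
As a rough sketch of the name-constraint part with openssl (the CN and permitted domain are placeholders; in practice, keep the real CA key encrypted and offline):

    # ca.cnf -- minimal config for a name-constrained internal root
    [ req ]
    distinguished_name = dn
    x509_extensions    = ca_ext
    prompt             = no
    [ dn ]
    CN = Example Corp Internal Root CA
    [ ca_ext ]
    basicConstraints = critical, CA:TRUE
    keyUsage         = critical, keyCertSign, cRLSign
    nameConstraints  = critical, permitted;DNS:.corp.example.com

    # then generate the CA key and self-signed root cert:
    openssl req -new -x509 -days 3650 -newkey rsa:4096 -nodes \
        -keyout ca.key -out ca.crt -config ca.cnf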

Tooling will hopefully get better now that browsers are pretty much set on going HTTPS-only.

Some consumer devices could probably implement something similar to what Plex did to deploy TLS [1].

I do agree that the industry isn't where it should be yet, but hopefully everyone's feeling the pressure now. :)

[1]: https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...


Your office setup only works for the larger ones, and then only for Windows machines that are actively managed. It doesn't, for example, address a BYOD iPhone.

The Plex solution looks good. I wonder if Let's Encrypt could provide a similar solution that works for everyone, rather than products having to reimplement what Plex already did.


The Let's Encrypt certificates seem to expire after 90 days. I wrote up some example code in Go so you can automate the process of issuing these certs here: http://goroutines.com/ssl

It does not require CSRs, but uses your DNS provider to complete the challenge. You do not need to run anything on your production servers.
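
For context, the DNS challenge just means publishing a TXT record that the CA looks up; the record name is fixed by the protocol, but the value below is made up:

    # your ACME client gives you a digest to publish:
    #   _acme-challenge.example.com. 300 IN TXT "gfj9Xq...Rg85nM"
    # check propagation before asking the CA to validate:
    dig +short TXT _acme-challenge.example.com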


Yes, the intent is to have short expiry times to encourage people to completely automate the process.


Unfortunately, that makes it more complicated too, which means fewer people will use it (a classic example of a trivial inconvenience: http://lesswrong.com/lw/f1/beware_trivial_inconveniences/).


Honestly, renewing certs is a huge pain and should be automated even in the 1-year case. I usually have forgotten after a year how I obtained and installed the cert last time.

The Let's Encrypt flow with DNS is way less complicated than obtaining a cert through many commercial services, so it probably works out about even.


I had been using free StartCom SSL certs, but their UI and overall experience was not as great as this simple website. I just generated mine in about 10 minutes. The last I remember was that StartSSL required something to be stored in my local browser, but I reinstalled my browser, so I lost some certificate, etc. It was free, but painful. I know I should automate every 3 months, but even if I miss it, I know I can use this website and manually generate a cert in 10 minutes.

Thanks to OP, diafygi, and Let's Encrypt!


Startcom insist that you have your own personal certificate first, before they will issue the website certificate. They could have allowed just a username and password, but I guess certificate authorities believe there is no such thing as too many certificates, and don't realise just how inconvenient they are for most people.

(It was their personal certificate they issued to you that had to be in the browser.)


> Startcom insist that you have your own personal certificate first ... [t]hey could have allowed just a username and password...

I assume that this was for TLS client authentication. Username+password sucks as a method of authentication. The only thing it has going for it is that you can -theoretically- remember your password and key it in on a machine you've never used before. [0]

Client certs are effectively unguessable and typically stored in the most secure place that the OS can provide. What's more, there's -IIRC- absolutely no reason for the remote side to remember anything about the certificate that they issued you after they generate it, so there's no risk of a server-side DB breach revealing any significant information about a client's credentials.

Frankly, I wish more sites would eschew username/password authentication for username/cert (or at least offer the option of username/cert). Then the UI for certificate operations would certainly get easier to use. :)

[0] Though, in today's environment, it seems... highly unlikely that any non-mutant will be able to remember all of the passwords for all of the sites that they use.


Given a clean slate you could possibly make some sort of case for client certificates. But right now username + password is the standard, and people can continue to use whatever solution they deem fit for that.

Client certs might in theory be stored in "secure places", but that is not the case in practice. Have anyone get a client cert and then make a backup or copy it to another device. I'll guarantee it is trivially accessible - e.g. sent in clear email to themselves to get it from one machine to another.

It doesn't matter how secure something is if a standard user would find it highly inconvenient, and it doesn't recognise the modern world of the same user having multiple devices in multiple locations.


> Given a clean slate you could possibly make some sort of case for client certificates. ... It doesn't matter how secure something is, if a standard user would find it highly inconvenient...

You obviously missed the point of my comment. I'll highlight the part that most clearly captures the essence of my point:

"Frankly, I wish more sites would eschew username/password authentication for username/cert (or at least offer the option of username/cert). Then the UI for certificate operations would certainly get easier to use."

> ...and it doesn't recognise the modern world of the same user having multiple devices in multiple locations.

Easy fix: issue one cert per device. Does the UI for this suck right now? Yes. Does it have to suck? Fuck no.

> Client certs might in theory be stored in "secure places", but that is not the case in practise.

It absolutely is the case in practice.

Imagine a hypothetical magical computer that stores all username/password pairs in a magical HSM that never lets the username/password outside of the HSM, ever. The credentials in this system are stored in the most secure place that the computer can provide. The fact that the user of that system may have also written those credentials down on a sticky note under his keyboard or saved in a plaintext message in his webmail mailbox does not change that fact.

Does that make sense?


I got the point about client certs being more convenient at the point of authentication. My point is that they are considerably less convenient at all other times. How do you get a cert from a Windows machine running IE onto an iPhone running Safari? Or vice versa. What about when one of them is off? How do you get a cert onto a device you will use in the future that is in another location under someone else's control?

I don't understand how you issue one cert per device. There would be a bootstrap problem, presumably solved by entering a username and password. Or some mythical software/system that is able to securely distribute and update your certs across your systems.

Chrome stored my client cert from StartCom in a file in my home directory. Any app running as my user ID has access to that file. Since this is a personal workstation, pretty much all the software (some system daemons excluded) runs as me. I don't consider this a secure place in practice, hence the comment. I do agree they could be done differently, but they aren't. Heck, I've yet to encounter any browser or similar that uses the TPM in my laptop for security purposes.

Client certs seem very similar to SSH keys in usage and needs (secret blobs that need to be backed up and distributed). I know several above-average developers who manage theirs well. But then I come across many who don't, because it requires too much mental effort. It gets reflected in the use of passwords for git operations at GitHub. And even among the people who get them right, only a trivial number bother with PGP for email, because that is even more painful.

For client certs to have any hope of wide usage, the non authentication issues need to be solved, and there is nothing even close.


> Chrome stored my client cert from Startcom in a file in my home directory.

I gather that you don't have a smart card attached to your system? This [0] indicates that Chrome uses Mozilla's NSS to store certificates. The documentation for NSS indicates that it's happy to work with certs stored on (or to store certs on) a smart card. :) Granted, the UI "sucks" for this, but that's part of my point.

> I got the point about client certs being more convenient at the point of authentication.

That's not what I was saying. Client certs are unguessable, the authenticating partner has no need to store any significant information about the cert, and certs typically rest in the most secure storage the OS can provide. [1] Certs are leaps and bounds safer than passwords.

Right now, certs are typically harder to use than passwords.

> For client certs to have any hope of wide usage, the non authentication issues need to be solved...

I'm not trying to be confrontational, but didn't you see that this was exactly what I was saying in my previous two comments?

No need to write four 'graphs that largely restate what I already said. :)

Thing is, if no one uses a thing, the UI for that thing will never get better. You frequently have to offer the thing as an option so that people start using it before UI design teams will deign to make the UI for that thing better.

> A trivial number of the people who get them right, also don't bother with PGP for email because that is even more painful.

This is kind of a tangent, but: I guess those people don't use email clients that have adequate support for PGP. Enigmail makes key management, selection, and use trivial.

[0] https://chromium.googlesource.com/chromium/src/+/master/docs...

[1] User mishandling notwithstanding.


> I'm not trying to be confrontational ...

I am having trouble understanding what you are saying. The wording and tense seem to switch between what is really happening today and what could happen today given sufficient effort and implementation. I believe that, other than in small niches (e.g. StartCom, some smartcard use in corporate and DoD settings), client certs are essentially unused by the vast majority of users. And I believe that even if everywhere accepted them, the logistical issues around ensuring a user has their certs in the relevant places are far too hard to overcome, no matter how big the magic wand :-)

> I gather that you don't have a smart card attached to your system?

Nope. I could get the smart card reader when I buy Thinkpads, but don't. And I use Linux, Windows, Mac, Android and iOS. Having a smartcard reader on one machine wouldn't really help.

> Certs are leaps and bounds safer than passwords.

Only if the user is perfect. All it takes is them emailing it to themselves, putting it in Dropbox, being careless with backups, or any number of other scenarios, and it has the same "safety" as a password. (That is assuming your magic wand doesn't appear and somehow put smart card readers in every device overnight. :)

> Enigmail makes key management, selection, and use trivial.

Enigmail is indeed exactly what I use. Your statement above is true, but doesn't address the big picture. Every time I install a new system, I have to do copious amounts of googling to figure out exactly where my certs are installed, and then copy them over to the new machine, and get them imported. I'm never sure if I have done it right, haven't left myself open to compromise etc. A somewhat distant friend works in security, and once asked several people to email him their public keys. Virtually every response was a private key.

We've had decades of certs for PGP; everyone using it wants to be secure, and everyone wants it to be better and "user friendly". The effort has been a dismal failure.


Thanks for making such a great little tool.

Are you still intending to add a renew page at some point?


Maybe, swamped at my startup (we're hiring!). Pull requests welcome!


Are you getting Smartmeter data direct from the utilities, or do you sniff it direct from consumers via wireless?


Where can I see the open positions???



This is awesome, I just replaced my self-signed SSL cert with it. Great, thanks!!

So the cert will expire in 90 days - how do I deal with that? Come back to this site every 3 months and regenerate a new SSL cert? Why is it not valid for at least a year?


One of their goals is to not have people use this interface but instead automate the workflow completely (using a client and your own renewal script[0]; full docs[1]). There are reasons for this, laid out here[2].

[0]: https://letsencrypt.org/howitworks/

[1]: https://letsencrypt.readthedocs.org/en/latest/intro.html

[2]: https://community.letsencrypt.org/t/pros-and-cons-of-90-day-...


This is a manual version of what's meant to be an automated process. I believe the idea is that certificate revocation is a big mess right now, with Chrome not using the main revocation registry, and other browsers not being great at checking / enforcing revocations.

So the EFF decided that since automated certificate regeneration makes how often a cert expires irrelevant, they should use short-lived certificates so compromised certs can only be used maliciously for a short window, regardless of how well any given browser honors revocation registries.

I can't speak to whether this manual version of Let's Encrypt has flexibility in choosing certificate lifespans.


If you find out, I would like to know as well.


Let's Encrypt is designed to encourage setting up an automated renewal system. If you follow the links, you won't have to renew within 90 days - or ever again.


What is the intermediate certificate hardcoded in the source?


The Let's Encrypt cross-signed intermediate.


I plead ignorance here. I'm sort of out of touch with recent developments, typically just buying a cert when I need it. So I have a question - where will Let's Encrypt certificates not work? I see Mozilla and Chrome as sponsors, so I'm guessing it's added as an authority in at least those browsers?

This would be great, apart from the apparent insurance regular certificates bring, which I still don't know how to claim.


> So I have a question -- where will Let's Encrypt certificates not work?

They're very well supported. They won't work on Windows XP or Android versions lower than 2.3.6. https://community.letsencrypt.org/t/which-browsers-and-opera...

> This would be great, apart from apparent insurance regular certificates bring, which I still don't know how to claim.

To my knowledge, that insurance has never been paid out, from any SSL vendor. It's a marketing gimmick.


Since it's free, could this be included in the installation or configuration scripts of major packages that provide web services? As long as I have the DNS set up, it would be great if I could run "dpkg-reconfigure exim4-config" and have working STARTTLS with real certificates.


Free is great. My only issue with Let's Encrypt is that the certificates are only valid for 3 months. It's a hassle to keep updating the certs...

I just switched to AWS Cert Manager last month from StartSSL, which is free if you're an AWS customer.


The 3-month expiry time was a deliberate choice to force users to automate the process. Ideally you would have a central store with a Let's Encrypt client, and all your actual web servers would periodically fetch their certs from there.


That's great, except the web server (Apache and Nginx aside) needs to be restarted to load new certs, which isn't ideal for production. Many cloud hosting providers don't have an automated way to update certs, which makes it more tedious.


Both Apache and Nginx support graceful reloads, which will reload the certificates without any downtime.
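
For example (assuming standard packaging; service names vary by distro):

    # pick up new certs without dropping connections
    sudo nginx -s reload        # or: sudo service nginx reload
    sudo apachectl graceful     # or: sudo service apache2 reload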


Let's Encrypt looks so cool with its very few steps. But then you install it and get all sorts of errors not mentioned on the page. I spent a good 5 hours debugging yesterday.

When it finally works, I see that the certificate expires in 2 months.


If you'd be willing to describe the problems you encountered in issues on https://github.com/letsencrypt/letsencrypt/issues (if they're clearly problems with the client software) or in a forum post at https://community.letsencrypt.org/ (if you're not sure), you can help other people have a better experience in the future.

Not everyone is having five hours' worth of problems -- many people are getting it to work right immediately -- but there are clearly also people who are running into difficulties which it would be great to figure out how to address.



Or you could use Amazon Web Services for their certificate manager? They offer wildcard certificates for free?


You double-posted, but FYI you can't export certs from AWS Cert Manager.


Or we could use Amazon Web Services for their wildcard certificates?


This is fantastic! Thank you very much for creating this!


These are the things we need for the web.


Sadly, Let's Encrypt still doesn't work if your ISP blocks ports 80 and 443.


The DNS authentication doesn't require any ports?



According to this it's online.

https://twitter.com/letsencrypt/status/689919523164721152

But it is weird that there isn't a more "formal" announcement, and I haven't tried it...


Server support has landed, but there's no support in the official client yet (some third-party clients do support it). They're probably waiting for that to land before making a bigger announcement.


Can't you use a different port? I believe you can use something like 4443 for the challenge.

https://community.letsencrypt.org/t/apache-listening-on-a-di...


No, that example just uses port 4443 internally. You have to use port 80 or 443.


I don't get it. If your ISP blocks ports 80 and 443 incoming, how would you possibly deploy a website? What do you need an SSL cert for if you can't deploy a website anyway?


> What do you need an SSL cert for if you can't deploy a website anyway?

SSL certs are used to secure TLS connections, not just website connections. So you can host a mail server, a Mumble server, an XMPP server, or really anything that speaks TLS.

Let's Encrypt was explicitly created for websites, but that doesn't mean its certs must be used for websites only.


It's entirely possible to deploy websites on ports other than 80 and 443.


I run a music server[1] on my home server so I can listen to music anywhere. I serve it on a port other than 80 and 443 since my ISP blocks those ports.

If I access the web interface over HTTP, the admin password is sent in clear text, which means someone on the network I connect to could get my password and delete my music, or god forbid, secretly mistag some of it.

[1]: http://groovebasin.com/


Since it's for personal use, you can fire up something like an EC2 server, point the domain there for a few minutes, and provision an LE certificate. Copy down the generated key/cert files and you're in good shape for three months.


I understand this would work, but obviously it is not free, and it's certainly not elegant or optimal.


It's free - AWS free tier - and if it weren't it'd cost you less than $0.10/year for the couple hours of AWS nano instances you'd need.


It's not free. "The AWS Free Tier includes services with a free tier available for 12 months following your AWS sign-up date" https://aws.amazon.com/free/


Way to skate past the second part. A t2.nano instance costs $0.0065/hour. You need it for a few minutes every three months.



