Hacker News new | past | comments | ask | show | jobs | submit login
Tips to accelerate SSL (unhandledexpression.com)
158 points by geal on Jan 25, 2013 | hide | past | favorite | 37 comments



Removing export ciphersuites is fine, but it's not the case that TLS randomly picks a ciphersuite from the list and therefore removing some slow ones will speed things up.

Servers either use the client's most preferable ciphersuite or their own. Most servers use their own preference (SSLHonorCipherOrder in mod_ssl).

So removing 3DES doesn't do anything because none of the connections will be using it anyway as everyone prefers RC4 or AES. The only cases where it would be used is if the client supports almost nothing else. In that case, you're just breaking the client.

Far more important is that the text is recommending DHE-RSA-AES256-SHA as the most preferable ciphersuite. The benefit of AES-256 over AES-128 is almost nil in this case, because AES is not your weakest link. And the cost of DHE is substantial.

Disabling DHE removes forward secrecy, but results in substantially faster handshake times. Ideally the text would discuss this tradeoff and how ECDHE gets you the best of both worlds to some extent.

Given that most sites are going to nullify forward secrecy with non-ephemeral SessionTicket keys in any case, RSA-AES128-SHA or RSA-RC4-SHA are reasonable choices for the casual site. Certainly better than just using HTTP.
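For Apache's mod_ssl, that "casual site" setup might look like this (a sketch; the cipher names are the OpenSSL spellings of RSA-AES128-SHA and RSA-RC4-SHA):

```apache
# Cheap RSA key exchange only: fast handshakes, no forward secrecy.
SSLCipherSuite AES128-SHA:RC4-SHA
# Pick from the server's list, not the client's.
SSLHonorCipherOrder On
```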

The points about Keep-Alive and caching are fine, but not specific to HTTPS.

Minimising domains and serving complete chains is correct, but just telling people not to miss intermediates is unhelpful: put your site into ssllabs.com and it'll tell you if you need to fix your certificate chain.


Yes, I am sorry if that seemed misleading to you. I didn't want to get into too much detail in a small post like that. I will write more on the subject later, with more data to support my points.

SSL Labs is a great tool that I use regularly, it is very helpful :)


My recommendations would be:

1) Prioritize ECDHE-RSA-RC4-SHA. This gives you forward secrecy, is reasonably fast, and avoids the security problems with how block ciphers are integrated into TLS (see http://www.imperialviolet.org/2011/11/22/forwardsecret.html).

2) Enable HSTS headers. Not only does this protect you from SSL stripping attacks, it will often speed things up by eliminating a roundtrip (user types yoursite.com into their browser, makes an HTTP request, gets a 301 for https://yoursite.com, does a TLS handshake).

3) This article mentions Keep-Alive, but TLS Session Tickets are going to be more effective in reducing handshake overhead. Enable them, and setup a job to rotate your session ticket keys once a day.
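In nginx terms, the three points might look roughly like this (a sketch; the cipher list and HSTS max-age are illustrative, session tickets are on by default when OpenSSL supports them, and ticket-key rotation itself has to happen outside nginx, e.g. via a daily cron job that swaps the key and reloads):

```nginx
# 1) Prefer ECDHE-RSA-RC4-SHA, with plain RC4/AES as fallbacks.
ssl_ciphers ECDHE-RSA-RC4-SHA:RC4-SHA:AES128-SHA;
ssl_prefer_server_ciphers on;

# 2) HSTS: browsers skip the http:// round-trip for the next year.
add_header Strict-Transport-Security "max-age=31536000";
```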


Note for those reading Moxie's comment: if you don't enable HSTS, do have resources that reasonably require TLS (like a login page), and get audited by a third party (say, because an enterprise customer requires it), the auditor will ding you.

HSTS: Not really optional in 2013.


This is our SSL optimisation list. It has some tradeoffs, like using small keys (1024-bit), which may not be appropriate in a lot of situations.

1. Don't use ephemeral Diffie-Hellman for key negotiation. This means that if an attacker compromises your private key in the future, he can decrypt all your past transactions he has recorded. But it does give a nice speed boost. Understand the security implications, and if you are happy with the tradeoff, do it. More information here: http://matt.io/technobabble/hivemind_devops_alert:_nginx_doe...

2. Use RC4_128 + SHA1, and force clients to use it via server cipher order. (Google does this.) http://journal.paul.querna.org/articles/2010/07/10/overclock...

3. Use 1024-bit keys. (Google does this.) 2048-bit is much, much more expensive.

4. Use SSL session caching, and make sure you have some kind of sticky session support or some way of sharing sessions between your web nodes. HAProxy can do it by IP or by SSL session ID.

5. When you are dealing with a lot of traffic, understand whether you can handle keep-alives or not; if you can't, turn them off. Keep-alives improve SSL performance by quite a big deal when clients make multiple requests, but if you don't have the resources to handle them, they will kill your servers.
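Point 4's HAProxy stickiness on the SSL session ID can be sketched like this in TCP mode (server names and addresses are placeholders; the payload offset is the usual trick for pulling the session ID out of the ClientHello):

```haproxy
backend web_ssl
    mode tcp
    # stick on the TLS session ID carried in the ClientHello
    stick-table type binary len 32 size 30k expire 30m
    acl clienthello req_ssl_hello_type 1
    tcp-request inspect-delay 5s
    tcp-request content accept if clienthello
    stick on payload_lv(43,1) if clienthello
    server web1 10.0.0.1:443 check
    server web2 10.0.0.2:443 check
```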


Worth noting that the attack that compromises your private key is going to cause much bigger problems for most sites than decrypting TLS sessions.

If you're handling state secrets or privacy for dissidents, EDH makes sense. I would guess that very few YC companies (as a relevant sample) are well served by it.


The overhead of newer ephemeral elliptic curve Diffie-Hellman is as low as 15% compared to RSA [1].

[1] http://vincent.bernat.im/en/blog/2011-ssl-perfect-forward-se...


The article didn't mention TLS-session caching. Sometimes keep-alive alone is not enough. Enabling session caching helped us to reduce nginx server CPU load by about 90%.

Add this in nginx.conf "http" configuration:

  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout  10m;
This makes nginx store TLS-sessions in a cache for 10 minutes and tell the clients about it. Although some clients can still use a shorter duration.


Yes, that will come in a future article :)


"If the browser doesn’t know the intermediate CA, it must look for it and download it."

Is that accurate? Where is this behavior specified? RFC 2246 states "If the server is authenticated, its certificate message must provide a valid certificate chain leading to an acceptable certificate authority." No mention of a client doing behind-the-scenes magic to fill in the missing intermediate certs.

It's been my understanding that cert validation will simply fail if there are missing intermediate certs, and my experience is that this is the case. However, if there's something I'm missing that would allow a browser to synthesize the cert chain, I'd be interested in reading about it.


The Authority Info Access extension ( http://tools.ietf.org/html/rfc3280#section-4.2.2.1 ) can contain caIssuers field that point to URIs from which the issuer certificate may be downloaded.

In practice, there's not a "single" chain for a server. Different clients have different trust anchors, support different signing algorithms, and encounter the same certificates at different times. This has all conspired such that "Every Modern Browser" will, as necessary, examine the AIA extensions presented in the certificates and attempt to construct a valid chain, even if the server supplies an 'invalid' one.

A decent description of the complexity that modern PKI libraries (eg: browsers & OSes) implement can be found at http://social.technet.microsoft.com/wiki/contents/articles/4...
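You can see whether a certificate carries that caIssuers pointer with OpenSSL (mysite.crt is a placeholder for your server certificate):

```shell
# Print the Authority Information Access extension, if present;
# a "CA Issuers - URI:..." line is what AIA-chasing clients follow.
openssl x509 -in mysite.crt -noout -text | grep -A 2 "Authority Information Access"
```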


You are, to the best of my knowledge as well, correct.


The first tip for setting the cipher suite in Apache httpd is incomplete without:

    SSLHonorCipherOrder On
Without it, the client's preference is used, which may be a slower cipher in your list. You'll want to revise your SSLCipherSuite directive and there's nothing wrong with specifying individual ciphers instead of aliases when using such a short list.

You'll also want to monitor the effects of the change. Before you do anything, make sure you're logging SSL information with something like this:

      CustomLog /var/log/httpd/ssl_request_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
Collect some data for a few days, then update your cipher suite. In most cases, you'll probably want to see RC4-SHA (for now), since it's fast, widely supported and immune to BEAST attacks.
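A quick way to see that distribution from the log format above (a sketch; note that %t expands to a bracketed timestamp containing one space, so the protocol and cipher land in fields 4 and 5):

```shell
# Count negotiated protocol/cipher pairs, most common first.
awk '{print $4, $5}' /var/log/httpd/ssl_request_log | sort | uniq -c | sort -rn
```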


You're right, I forgot the honor cipher directive.

According to http://www.carbonwind.net/blog/post/A-quick-look-over-some-b... RC4+RSA is supported nearly everywhere, but it was not tested on mobile browsers.

As part of my research on SSL tuning, I'll benchmark all browsers for cipher suite and TLS version support.


Kind of surprised there is not a single word about AES-NI in there.

Both Intel and AMD now support it in modern server CPUs.

You want to be using AES-128-CBC to take advantage of it.

http://google.com/search?q=cache:http://zombe.es/post/407872...


The bulk cipher isn't the bottleneck; it's the number theoretic crypto that kills performance. RC4 is already very fast. And prioritizing RC4 also mitigates some real-world security problems with legacy client software.


Author mentions this point briefly without saying "AES-NI" though.

<quote> On the contrary, AES can be very fast in software implementations, and even more if your CPU provides specific instructions for AES. </quote>


> Activate caching for static assets

A quick and dirty test in Firebug on one of my SSL sites seems to indicate that Firefox is serving SSL content from the cache already. Is this a Firefox thing or am I overlooking an Apache configuration I might have set?


You're right, it looks like this was changed in Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=531801


If you care about compatibility with applications that use Schannel for TLS on Windows XP or Windows Server 2003, be aware of which ciphers it supports/doesn't support:

http://msdn.microsoft.com/en-us/library/windows/desktop/aa38...

Note that there are no ciphers that use AES for encryption in that list.

tl;dr: Make sure TLS_RSA_WITH_RC4_128_SHA ("RC4-SHA" in OpenSSL) is in your server's cipher list if you care about Windows XP and have removed the 3DES ciphers because they're "too slow."


Of course not, as XP dates back to 2001. And this includes IE on XP, BTW.


I wouldn't recommend "KeepAlive On" in Apache using the standard mpm_worker model unless you really know what you're doing. It does conserve some TCP handshaking round-trips, but it could cause resource starvation (in the form of denied connections, or connections timed out that are waiting in the TCP listen queue) when MaxClients is reached. Children are precious and limited, and many could be occupied waiting for subsequent requests that may never arrive - from the time the connection is received to the time it is disconnected, the child process can't do anything else.

In general, keep-alives are best used with event-driven webservers like nginx, which doesn't have this issue. It's a common practice to put an nginx reverse proxy or a good load balancer like an F5 BIG-IP (with keep-alives on) in front of Apache (with keep-alives off) for this reason.

If you use this model, be sure to set the proxy_buffers in nginx high enough to consume the largest possible response from the Apache backend, because Apache children will also block waiting for clients to receive complete responses.
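A minimal sketch of that nginx-in-front-of-Apache setup (upstream name and buffer sizes are illustrative; size the buffers to your largest expected backend response):

```nginx
upstream apache_backend {
    server 127.0.0.1:8080;
}

server {
    location / {
        proxy_pass http://apache_backend;
        # enough buffer space to absorb a full backend response,
        # so Apache children aren't held waiting on slow clients
        proxy_buffer_size 64k;
        proxy_buffers 32 64k;
    }
}
```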


Excellent advice. It's an even bigger problem for those who are still using mpm_prefork with mod_php on a RAM-limited system such as a Linode. In that case, you can only have 10-20 children. Although most Linux distros install mpm_worker by default these days, an alarmingly large number of people are still using this grossly outdated setup.


Why not stay on the safe side and enable private caching (Cache-Control: private)? Any advantages of public?


Most of the CSS, JS and pictures don't contain private data, and should be cached.


But Cache-Control: private allows them to be cached on client machines.


I thought cache control was only directed towards proxies.


I'd love to see this article with the nginx equivalent configuration directives.


Nginx uses the ssl_ciphers directive to select ciphers.[0]

    ssl_ciphers ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:RC4+RSA:+HIGH:+MEDIUM;
KeepAlive is enabled by default, but you can configure it for different durations or numbers of requests.[1]

I believe you can chain your certificates by concatenating the chain to your certificate file.[2]

    cat chain.crt >> mysite.com.crt
[0] http://wiki.nginx.org/HttpSslModule#ssl_ciphers

[1] http://wiki.nginx.org/HttpCoreModule#keepalive_disable

[2] http://wiki.nginx.org/HttpSslModule#Synopsis


Just wait a bit, I'm updating the post for that :)

EDIT: it is done, I added the nginx configuration to the article


I second this. I'm not seeing Apache used much in any of the places I'm working in these days.


I had to change the ciphers in nginx a while ago, and was absolutely amazed at how much faster it became.

Good tips.


Why aren't Apache and/or OpenSSL shipping with "optimized" defaults already?


Because not everyone wants to optimize for the same thing, and optimizing for security is probably a better default than optimizing for speed.


Nicely done. I was worried when I saw the headline that he wasn't going to take security into account...


I say the post is nicely done, and note that I'm happy he took security into account, and people downvote. WTF???


awesome



