Securing a Rails App against Firesheep with HTTPS (documentcloud.org)
61 points by jashkenas on Feb 4, 2011 | 22 comments



I would recommend not doing HTTPS in Rails itself. I use nginx to switch between HTTP and HTTPS as needed.

In your nginx config, you do this for the HTTPS setup:

    # needed for HTTPS
    proxy_set_header X-FORWARDED-PROTO https;

Then, if you want, you can check in Rails that certain controllers received HTTPS requests.
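
A minimal sketch of that check (Rails 3 idioms; the controller and filter names here are made up). Because nginx forwards X-FORWARDED-PROTO, Rails' request.ssl? reports the original protocol even though the app itself only sees plain HTTP:

    # Force HTTPS for a sensitive controller behind an SSL-terminating
    # nginx proxy. request.ssl? consults the X-Forwarded-Proto header.
    class AccountsController < ApplicationController
      before_filter :require_https

      private

      def require_https
        unless request.ssl?
          redirect_to "https://#{request.host}#{request.fullpath}"
        end
      end
    end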

Using nginx as the proxy means your Rails app only ever deals with plain HTTP.

The only issue with all HTTP -> HTTPS transitions is making sure that the things you are storing in the session are also placed in your forms when an HTTP page submits to an HTTPS page. If not, and you are storing sessions in cookies, you will lose that state across the transition.

The part about relative URLs is right. Using CDNs etc. makes things harder if they don't support your SSL cert.

At CarWoo!, once you log in, we do everything behind HTTPS. For our user creation form you can be on an HTTP page, but it submits to HTTPS.

We created partials representing our sign-up forms (we have many kinds of landing pages) that automatically pull the important values out of the session and put them into the form as needed. These values are not security risks, but they are important for the correct functionality of the app. A sketch of the idea is below.
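
Something along these lines (a hypothetical helper; the key names and the "carryover" parameter are invented for illustration):

    # Copy non-sensitive session values into hidden fields so state
    # survives the HTTP -> HTTPS form submission.
    module SignupFormHelper
      CARRYOVER_KEYS = [:landing_page, :referral_code]  # assumed keys

      def session_carryover_fields
        CARRYOVER_KEYS.map { |key|
          hidden_field_tag("carryover[#{key}]", session[key]) if session[key]
        }.compact.join.html_safe
      end
    end

On the HTTPS side, the receiving controller can then merge params[:carryover] back into the fresh session.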


HTTPS targets for forms on HTTP pages are a security anti-pattern. The Tunisian government keylogged people's Facebook credentials because of it.


This is exactly correct. Security teams at our clients routinely "flunk" applications because they fail to set the "Secure" flag on cookies; this flaw is even worse than that one.
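
For reference, setting the Secure flag on the Rails session cookie is a one-liner (a sketch against the Rails 3 cookie store; "MyApp" is a placeholder):

    # config/initializers/session_store.rb
    # :secure => true tells the browser to send this cookie only over
    # HTTPS, so a Firesheep-style sniffer never sees it.
    MyApp::Application.config.session_store :cookie_store,
      :key    => '_myapp_session',
      :secure => true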


If, like me, you were completely unaware that this happened, here's the story: http://www.theatlantic.com/technology/archive/2011/01/the-in...

A JavaScript keylogger was inserted into pages served over plain HTTP. Here's the source: http://www.hackerzvoice.net/node/105

With a little more tact and obfuscation, I bet an attacker could get away with doing that on a large scale for quite some time.


What if you used something like the EnforceSSL Rack middleware with a Rails application? I feel like that would be a little more useful.
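
The core of such middleware is small. A rough sketch of the pattern (not the actual EnforceSSL source; the class name is invented):

    # Redirect any plain-HTTP request to its HTTPS equivalent.
    class ForceSSL
      def initialize(app)
        @app = app
      end

      def call(env)
        request = Rack::Request.new(env)
        if request.ssl?
          @app.call(env)
        else
          [301,
           { "Location"     => request.url.sub(/\Ahttp:/, "https:"),
             "Content-Type" => "text/html" },
           []]
        end
      end
    end

Mounted with config.middleware.use ForceSSL, it catches every request before Rails even sees it.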


Sorry -- that's not mentioned in the post, but that's precisely how it works. Nginx handles HTTPS, and Rails only ever sees HTTP, with the "X-FORWARDED-PROTO" header.


Sorry, I didn't get that from the article. I use nginx to redirect back to HTTPS if someone changes an https link to http, but that means keeping the nginx config up to date.


From the article: "Nginx handles the SSL itself, in a straightforward fashion, as long as you’ve compiled it with the SSL module turned on."


The initial redirect from HTTP to HTTPS is a weak spot. Most users will just type "example.com" into the address bar, and an active attacker can strip HTTPS from there. There'll be no padlock icon, but how many of your users are really going to notice?

See http://dev.chromium.org/sts to fix this.


It's important to note that this isn't some sort of hypothetical attack either. In fact, it's wicked simple to do.

http://www.thoughtcrime.org/software/sslstrip/


I dig the STS header, but isn't fixing the problem for only a small-ish percentage of browsers ... not really fixing the problem?


No, it's not. But without clients knowing that a site should be accessed only over SSL, there is no fix. Chrome isn't the only browser to support this: AFAIK, NoScript for Firefox also adds support, and once this becomes widespread, more browsers might follow.

Fixing the problem for some is certainly better than not fixing it and waiting for the perfect solution that might never appear.

Especially if the fix is this easy to implement.


Firefox 4 betas have been shipping with STS support since June. See http://hg.mozilla.org/mozilla-central/rev/5dc3c2d2dd4f


I'm implementing a similar solution. My only concern is that the SSL handshake takes anywhere between 600 and 1000ms, far too long as far as I'm concerned. Does anyone have a suggestion to improve this?

My setup:

1) Linode $20/mo REE box (will bump up in production)
2) Nginx
3) RoR 3.0.3
4) SSL through GeoTrust

It is a "chained" cert, but I don't believe this is the bottleneck.

Thanks in advance...


Your chained cert might actually be the bottleneck if the total data exceeds 4K and the user has to do a second round trip to ACK the cert.

http://journal.paul.querna.org/articles/2010/07/10/overclock...

Basically, unless you are certain you need 4096-bit security, use a 2048-bit key (1024 is not secure anymore) and include only the minimum number of intermediate certs you can get away with. OCSP stapling doesn't seem worth it if it causes you to overflow the initial TCP window.
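
If you want to check where you stand, here's a quick sketch that totals up how many bytes of certificate data your server sends during the handshake (the hostname is a placeholder):

    require 'socket'
    require 'openssl'

    # Connect, complete the TLS handshake, and sum the DER-encoded size
    # of every cert in the chain the server presented.
    tcp = TCPSocket.new("example.com", 443)
    ssl = OpenSSL::SSL::SSLSocket.new(tcp)
    ssl.connect

    chain_bytes = ssl.peer_cert_chain.map { |c| c.to_der.bytesize }.inject(0, :+)
    puts "certificate chain: #{chain_bytes} bytes (initial TCP window is ~4K)"
    ssl.close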


This is a really great piece of advice. I'll check this out.


OCSP is probably the bulk of that time. On Chrome Linux you can start a fresh instance, load the HTTPS page then open a tab with about:histograms and search for OCSP. The time taken should be in there. (Otherwise, use a packet sniffer and don't forget to account for the DNS lookup too.)

OCSP stapling is the best answer, although one can only staple a single response and many chains these days require two responses.

So a better answer is to get a CA which issues certificates with 24-48 hour validity and doesn't use an out of band revocation system. If you can find such a thing, please tell me where.


If anyone sees any gaping holes in this scheme, or has a more elegant solution to the HTTP-for-anonymous/HTTPS-for-logged-in-users pattern, I'd love to hear the critique.


As a commenter mentioned above, the weak point here is the fact that you only enforce HTTPS after the user has logged in. Since your login page is served insecurely, an active attacker could modify it to steal passwords. A well known tool to do this is SSLStrip: http://www.thoughtcrime.org/software/sslstrip/

The Tunisian government recently took advantage of Facebook's insecure login page to steal passwords for _everyone in the country_: http://blog.jgc.org/2011/01/code-injected-to-steal-passwords...

Protocol-relative URLs may be useful while migrating to HTTPS, but should not be needed long-term. All content should only be served securely.

Once a site is fully functional over HTTPS, adding the HSTS header is an important last step to further mitigate active attacks. http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
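
For a Rails app behind the nginx proxy described above, the header is easy to add (a sketch; the max-age value is just a common choice):

    # Emit Strict-Transport-Security on every HTTPS response, telling
    # browsers that support it to refuse plain HTTP for this host.
    class ApplicationController < ActionController::Base
      before_filter :set_hsts_header

      private

      def set_hsts_header
        if request.ssl?
          response.headers["Strict-Transport-Security"] =
            "max-age=31536000; includeSubDomains"  # one year
        end
      end
    end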


Thanks for the example. Looks like we'll give up attempting to serve any HTTP pages in the future, and do as you recommend -- I don't see any way of getting around the login-form-phishing hack you describe.


Well, here's one way I think would work, but it's not ideal and still not 100% foolproof:

A) Keep a running counter between the server and client.

B) If, at any point, there are two active sessions both associated with the same user login, delete both sessions (thus logging out both the legitimate user and the attacker).

Session hijacking is still possible with this method, but only for the duration of one request (as long as you, the legitimate user, remember to log out at the end).

The main disadvantage is that if the request or response gets messed up, the session will be lost. Even if someone does something as simple as click a link twice before the response comes back, the session will be lost. So, yea, kind of a major usability drag...

Although... you could probably eliminate most lost session situations by using the counter method and allowing some leeway in it (say 2 or 3 requests off) to take into account double link click scenarios. That might be a good compromise...
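
For what it's worth, a rough sketch of that counter idea (the model methods here, request_counter and invalidate_all_sessions!, are invented for illustration):

    # Each request carries a counter in the session; the server keeps its
    # own copy per user. A hijacked cookie replayed by an attacker makes
    # the two counters diverge, and a gap beyond the leeway kills every
    # session for that user.
    class SessionCounter
      LEEWAY = 3  # tolerate double-clicks and lost responses

      def self.check!(user, session)
        server_count = user.request_counter          # stored server-side
        client_count = session[:request_counter].to_i

        if (server_count - client_count).abs > LEEWAY
          user.invalidate_all_sessions!              # log out both parties
          raise SecurityError, "possible session hijack"
        end

        user.update_attribute(:request_counter, server_count + 1)
        session[:request_counter] = server_count + 1
      end
    end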


Why don't you just run the whole site under SSL, and add the STS header as an easy fix for people messing with the HTTP-to-SSL redirect? It might be a minuscule amount of extra load for your servers, but it has not degraded my Facebook experience noticeably (I have SSL enabled).



