Stateless CSRF Protection (appsandsecurity.blogspot.de)
42 points by phxql on June 27, 2013 | 23 comments



"Stateless" CSRF protection as described here is strictly inferior to other forms of protection. The reasons are somewhat laid out in this blog post's comments:

1. JavaScript in any subdomain allows a cookie to be set on unrelated subdomains. That means an attacker can set a token and override the CSRF protection entirely.

2. The "replay protection" means that you must continue to maintain state on the server (ostensibly to prevent duplicate requests)

The author has actually gone on to propose a "triple submit" system for CSRF protection (http://www.slideshare.net/johnwilander/stateless-anticsrf) which is still vulnerable to compromise if a related subdomain can be used to attack by setting many cookies.

For a more thorough discussion of CSRF mitigations, check out https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(...


Thanks for this info. That last link includes a section on double-submit cookies with a little further discussion: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(...


So there is no good way of implementing stateless CSRF protection?


I believe that depends on the attack vector. Any sort of extra protection will probably not do any harm; simple obstacles like this can stop script-kiddies who rely on pre-made software.

I had a small project a while back in which I tried to prevent the use of a widely used cracking application. What I made was basically a CSRF protection built with websockets, which fetched the CSRF data at the onsubmit event. In the end, I did succeed in preventing the use of the application, which could have saved bandwidth for large file-hosting and porn sites when bundled with a CAPTCHA (although it worked without one against the application). For example, when the Spotify API was released it was exploited at a rate of a few thousand login requests per minute, while some sites still get hammered by tens of concurrent bots making more formal login attempts from who knows how many different locations. Anyhow, since I used client-side JS, the events could be fired before the actual POST, which rendered it useless against more customized attacks. Ultimately the original purpose of the project was a success; I kept some crackers on their toes and was essentially banned from the community.

My point is that even small security updates may be worth it, since you never know who you are up against.


Not in general, no. You can drop replay protection as a requirement and that gets you to actual statelessness. If you then have your website on a single domain and never put anything else onto other subdomains, theoretically now the only risk is that your single application is vulnerable to XSS. But you shouldn't build your security based on assumptions like that if you can help it.


In general, if you have an XSS vulnerability in your app, you can't have proper protection against CSRF, either. Every single line of JS that executes on your page should be your own, otherwise it'll be impossible to distinguish CSRF requests from genuine requests. So there's no point worrying about JS on your own page having access to the CSRF token. Likewise, if your pages are unencrypted, you don't have control over anything that appears or executes on your page.

Provided that there are no XSS vulnerabilities (e.g. you use a framework that automatically escapes everything you put in a template) and you use SSL on all your pages, an easy way to do "stateless" CSRF protection is to insert a token into each form that contains encrypted information about the form and the client. For example, create an array containing a unique identifier of the form, the client's IP address, the client's user agent string, and any other piece of information that is available with every request (so that you don't have to remember them). Serialize the array and encrypt it with AES using a key that you keep on the server. When you receive a POST, decrypt the token and check if the information it contains matches the client's details. If it matches, you know that it was you who generated that token. Of course this is vulnerable to replay attacks, but if your app is XSS-proof and your pages are delivered over SSL, it should be very difficult for an attacker to obtain the token in the first place.

But realistically, if you're already using cookies, there's no point insisting on a stateless solution to anything. If your site has a login functionality at all, you're probably already restricting dangerous actions to logged-in users only. In that case, spare a few bytes in your session storage. Generate a random token, put it in your form, and also store it in the session. Remove it from the session after use. Problem solved, and no replay attacks either. Why the obsession with stateless stuff? Maintaining state is easier than ever before. Gone are the days when sessions were incompatible with load balancing. Nowadays you can just throw all your sessions in a Redis node and access them in a fraction of a millisecond from a thousand different servers.
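The session-backed scheme described above is only a few lines in practice. Here is a minimal sketch in Python, using a plain dict as a stand-in for Redis or whatever session store you already have (all names are illustrative):

```python
import hmac
import secrets

# Hypothetical in-process store standing in for Redis/Memcached session storage.
sessions = {}  # session_id -> set of outstanding CSRF tokens

def issue_csrf_token(session_id):
    """Generate a random token, remember it in the session, return it for the form."""
    token = secrets.token_urlsafe(32)
    sessions.setdefault(session_id, set()).add(token)
    return token

def check_csrf_token(session_id, submitted):
    """Accept a token at most once: removing it on use prevents replays."""
    for token in list(sessions.get(session_id, set())):
        if hmac.compare_digest(token, submitted):
            sessions[session_id].discard(token)
            return True
    return False
```

`hmac.compare_digest` is used instead of `==` so the comparison runs in constant time.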

Edit: There are also in-between solutions if you really want to avoid using sessions. For example, you could keep a database of all the tokens you ever generated, from which tokens are deleted when used. This can help you prevent replays while minimizing (but not eliminating) state. If you don't want the database to grow indefinitely, you can just say "all tokens expire after X hours" and periodically purge old entries from your database.
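That in-between approach can be sketched the same way; the `TOKEN_TTL` value and the dict-as-database are assumptions for illustration, not a real schema:

```python
import secrets
import time

TOKEN_TTL = 3600.0  # "all tokens expire after X hours" -- one hour here
token_db = {}       # token -> issue timestamp; stands in for a database table

def issue_token():
    token = secrets.token_urlsafe(32)
    token_db[token] = time.time()
    return token

def consume_token(token):
    # Deleting on use prevents replays; the age check enforces expiry.
    issued = token_db.pop(token, None)
    return issued is not None and (time.time() - issued) < TOKEN_TTL

def purge_expired():
    """Periodically drop expired tokens so the table doesn't grow forever."""
    cutoff = time.time() - TOKEN_TTL
    for token in [t for t, ts in token_db.items() if ts < cutoff]:
        del token_db[token]
```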


> Serialize the array and encrypt it with AES using a key that you keep in the server. When you receive a POST, decrypt the token and check if the information it contains matches the client's details. If it matches, you know that it was you who generated that token.

No. You want to use a MAC for this, not encryption. Encryption does not guarantee authenticity, MACs do. It is very possible to predictably alter the contents of encrypted text; many of the early Matasano cryptopals challenges revolve around doing precisely this.
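A sketch of the MAC-based variant the parent is describing, using Python's stdlib `hmac`; the key name and token layout here are made up for illustration:

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"server-side secret key"  # hypothetical; kept on the server only

def make_token(form_id, client_ip, user_agent):
    """Serialize the per-request details and append an HMAC-SHA256 tag."""
    payload = json.dumps([form_id, client_ip, user_agent]).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + tag).decode()

def verify_token(token, form_id, client_ip, user_agent):
    """Check the tag first, then check the details match the current request."""
    raw = base64.urlsafe_b64decode(token.encode())
    payload, tag = raw[:-32], raw[-32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False  # forged or tampered token
    return json.loads(payload) == [form_id, client_ip, user_agent]
```

Unlike raw AES, an attacker cannot flip bits in this token without the tag check failing.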


Encryption does not preclude the use of MAC, and MAC does not replace the use of encryption. My preferred solution is to add a MAC to my data and encrypt them both.


Better, but still slightly wrong. You really want to encrypt, then MAC the ciphertext. Then, before decrypting, check that MAC and don't act at all if it is invalid [1].

1. http://crypto.stackexchange.com/questions/202/should-we-mac-...
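A sketch of that ordering, checking the MAC before touching the ciphertext, using stdlib primitives only. The SHA-256 counter-mode keystream is a toy stand-in for a real cipher such as AES-CTR, and the key names are invented:

```python
import hashlib
import hmac
import os

ENC_KEY = b"0" * 32  # hypothetical encryption key
MAC_KEY = b"1" * 32  # a *separate* key for the MAC

def _keystream(key, nonce, length):
    # Toy stream cipher: SHA-256 in counter mode. Use AES-CTR in real code.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_then_mac(plaintext):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(ENC_KEY, nonce, len(plaintext))))
    # The MAC covers nonce + ciphertext, computed *after* encryption.
    tag = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def mac_then_decrypt(blob):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, nonce + ct, hashlib.sha256).digest()
    # Verify first; refuse to decrypt at all if the tag is invalid.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("invalid MAC")
    return bytes(a ^ b for a, b in zip(ct, _keystream(ENC_KEY, nonce, len(ct))))
```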


>Nowadays you can just throw all your sessions in a Redis node and access them in a fraction of a millisecond from a thousand different servers.

You seriously believe that? Adding tons of extra complexity, latency, and potential failure modes to gain absolutely nothing is not a trivial thing.


If you have enough application servers for session sharing to be a problem, you probably already have enough complexity to worry about. Adding sessions to the mix won't change much. If you don't want to add another daemon to your stack, just reuse whatever you were using before, whether it's Memcached, MongoDB, or plain old MySQL.

Gain nothing? Why? Easily preventing CSRF replay attacks counts as something, doesn't it?


You don't need very many application servers for this to become an issue. For instance, what if you host your site in multiple data centers? Each data center might only have a few servers, but any additional data layer that has to be kept in sync between them adds a non-trivial amount of complexity.


>If you have enough application servers for session sharing to be a problem, you probably already have enough complexity to worry about

You definitely have lots of complexity. Which is why adding more is bad. There is no point at which things are so complex that making them more complex is cool.

> If you don't want to add another daemon to your stack, just reuse whatever you were using before, whether it's Memcached, MongoDB, or plain old MySQL.

Sure, just make things slower and more likely to fail for no reason.

>Gain nothing? Why?

Because you can just store the session in an encrypted cookie. There is no need for it to be on any server at all.


What would be the flaws with this 'stateless' approach:

1) for each new session, generate a secure random token as a property of the session

2) serialize session properties to a byte array and encrypt the array, using, say, AES.

3) set the encrypted session state as the value of the (HttpOnly) session cookie

4) when rendering secure pages, decrypt and include the clear CSRF token in the X-XSRF-TOKEN HTTP header (only top-level HTML pages, no other requests)

5) on the client, include the CSRF parameter in your XHR requests and form posts.

6) on the server, verify the CSRF parameter against the value in the encrypted session state from the session cookie

The only shared server state in this case would be the secret key used for AES; this could be part of the production environment configuration and updated with each deployment.


I believe that's how Rails works, except using an HMAC on the cookie instead of AES (since AES itself doesn't prevent tampering).
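A sketch of that HMAC-signed-cookie idea; the names and the `--` separator are made up here, loosely echoing Rails' cookie format:

```python
import base64
import hashlib
import hmac
import json

COOKIE_KEY = b"per-deployment secret key"  # hypothetical, part of the environment config

def seal_session(state):
    """Serialize session state and append an HMAC-SHA256 tag."""
    payload = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
    tag = hmac.new(COOKIE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "--" + tag

def open_session(cookie):
    """Return the session state, or None if the cookie was tampered with."""
    payload, sep, tag = cookie.rpartition("--")
    if not sep:
        return None  # malformed cookie, no separator
    expected = hmac.new(COOKIE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(payload.encode()))
```

On each POST, the server would call `open_session` on the cookie and compare the stored CSRF token against the one submitted with the request.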


I don't understand how this works, if the client is generating the data, why can't the CSRF attacker do the same?


Yeah, this article doesn't make sense to me.

As I understand double-submit CSRF protection, it works like this:

The CSRF token is included in an HTTPOnly cookie, and it's also included in the page content itself. That can be in an HTML form that gets posted, or in an in-page script tag that sets the token on a global JavaScript object, or whatever.

Then the client sends the second, accessible copy of the token in a request parameter or HTTP header, and the browser also (automatically) sends it as one of the cookies. The server just has to check that the token in the param or header matches the token in the cookie. The server doesn't have to remember the token value, so there's no required server-side state here.

I am far from a security expert, though.
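As described, the server-side half of double-submit reduces to a single comparison. A sketch, with the cookie and header names assumed:

```python
import hmac

def check_double_submit(cookies, headers):
    """Stateless check: the cookie copy must match the copy the page sent back."""
    cookie_token = cookies.get("csrf_token", "")
    header_token = headers.get("X-XSRF-TOKEN", "")
    # Reject empty tokens outright, then compare in constant time.
    return bool(cookie_token) and hmac.compare_digest(cookie_token, header_token)
```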


I believe that it's not an HTTPOnly Cookie (which is a problem because of XSS), and that it gets set in JS just before the request.

The idea is that due to the Same Origin Policy, other sites will not be able to set the Cookie for your domain.


But I can throw in an iframe or something to get your browser to make a request against that domain (and get a cookie)


You can't just read cookies for arbitrary domains in an iframe.


The CSRF attacker cannot set a cookie for the domain of the target site; it can only set the token on the request itself, which will most likely not match the cookie token.


For XHR, don't forget the option to do stateless CSRF protection by requiring a custom HTTP header: https://code.google.com/p/browsersec/wiki/Part2#Same-origin_...
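That check is essentially one line on the server; a sketch, assuming the conventional `X-Requested-With` header:

```python
def is_same_origin_xhr(headers):
    # A plain cross-site form post or <img> request can't carry a custom header,
    # and attaching one via cross-origin XHR triggers a CORS preflight the
    # attacker's origin won't pass, so requiring it blocks simple CSRF.
    return headers.get("X-Requested-With") == "XMLHttpRequest"
```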


Store CSRF tokens in the session, store the session in a cookie, problem solved.



