CSRF Is A Vulnerability In All Browsers (homakov.blogspot.com)
183 points by homakov on March 30, 2012 | 238 comments



Just in case it might be a problem for anyone: The article uses the CSRF vulnerability to log you out of all Google services (and says so in a PS at the bottom).

Don't open the article if you don't want to have to log in to Google again afterwards (might be a problem if you're using two-factor auth and you don't have your phone handy for instance).


There's a notice now at the top of the post

> To stir up your interest - check any google service e.g. gmail, you are logged out.

Great hook btw. Even more impressively, I have all js on his blog blocked through NoScript and it still worked.


I don't run any JS. It's just an image :D


Have some more nightmare fuel: http://ha.ckers.org/xss.html

You can inject the things above into somebody else's data, or hide them in your own page from the beginning, I suppose.

As a site developer, you can mitigate some mischief a bit by having any destructive update be a two step process: first get a form (or an "are you sure" page, if no real input is required), and add a nonce to the form, which is submitted back with the "request for destruction and subversion". Of course, the attacker can still request the form, harvest the nonce, and send it back with the attacking request, but now his attack has to be 2 steps instead of just 1. Also, if the nonce has a variable name, he has to know to grab everything off of the setup form, and not just resubmit a hard-coded name. Obviously, this won't stop everybody, but it does force them to try a little harder.
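
To make that concrete, here's a minimal sketch of the flow in Python (framework-agnostic; the session dict and handler names are hypothetical, not any particular framework). Because the nonce is popped on first check, a harvested value can't be replayed:

    import secrets

    def render_delete_form(session):
        # Step 1: serve the "are you sure?" form with a fresh one-time nonce.
        nonce = secrets.token_urlsafe(32)
        session["delete_nonce"] = nonce
        return ('<form method="POST" action="/delete">'
                '<input type="hidden" name="nonce" value="%s">'
                '<button>Yes, really delete</button></form>' % nonce)

    def handle_delete(session, form):
        # Step 2: only act if the submitted nonce matches the one we issued.
        expected = session.pop("delete_nonce", None)
        if expected is None or not secrets.compare_digest(expected, form.get("nonce", "")):
            raise PermissionError("possible CSRF: bad or missing nonce")
        # ... perform the destructive update here ...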


If the target site treats GET and POST identically then that could still be a problem.

Just have

<img src="http://targetsite.php/form?submit=1&data=gjoprgrger />


The internet must be a very dark place for you.


I have this catch-all ABE (Application Boundaries Enforcer) rule (in NoScript config, advanced tab, abe subtab, USER rules). The blog didn't log me out of google.

  Site ALL
   Accept INCLUSION(XHR, SUBDOC) from SELF++
   Anon INCLUSION
http://noscript.net/abe/


Well no, it is illegally accessing a computer system, which I do believe is a felony.


But he didn't access Google's systems, your browser did. You requested the page, and your browser dutifully loaded the cross-site request image.


The law is not a computer program; intent is more important than the mechanism.

Imagine a real-world example. Alice wants to murder you. You have a kid. Alice calls you, exactly imitating the voice of your kid, telling you she's trapped under a girder at the abandoned bridge across town. You frantically race across town to free your trapped kid. You get to the bridge and notice it's on the verge of collapsing, and that there's a "no trespassing" sign posted. You ignore that and try to save your kid. You trip over something and that upsets the unstable structure of the bridge. It collapses, crushing you while you fall 3000 feet to your ultimate demise.

Guess what, you were just murdered.

Even though Alice did not physically drive a knife into your heart, she still killed you. She intended to kill you (that's the first part), and then set into motion a chain of events that resulted in you dying. That's murder.

Going back to our computer example... even though the author of this blog post didn't break into your house and log you out of Google, the result is the same. He intended for you to be logged out of Google without your permission or Google's permission, and you were logged out of Google. Therefore, your computer account was maliciously accessed.

The reality of the situation is that he's in Russia and I doubt Russia gives a damn about this.


Almost all exploits work by making your computer request or receive some resource which causes your computer to take some action you didn't intend.

By your argument if I shoot you with your gun, I am not guilty of murder.


He provided instructions to my browser without my consent, the exact same way he would have if he had done it from his computer. How far are we willing to take the semantics here?

I guess a lot of people disagree with this? To clarify, I don't think this guy's guilty of a felony-- to my knowledge computer fraud requires at least some malicious intent or damage. But you seriously think if somebody used CSRF to drain your bank balance that wouldn't count as hacking because it was your browser? That's absurd.


He provided instructions to your browser to include an image.

If that is considered 'without your consent', then so is every site that is embedding external plugins, images, and videos.

How are you supposed to 'give your consent'? Must you be given a list of all the content on the website before the browser will be allowed to display it?

When you requested the page, you gave your consent to load whatever was on that page. If you don't want that, then you should use wget instead of a browser.


I don't understand this argument. If I buy a box of cereal, and open it to find it contains a live rattlesnake, have I consented to be bitten by the snake because I knowingly purchased the box?

Of course I don't know everything that's in cereal -- I trust the manufacturers to provide me with the product I paid for, whatever that involves. But I know for sure that that doesn't involve snakes, and had I had reason to believe that there might be a snake in there, I wouldn't have bought it. And the rational response to this is definitely not "If you didn't want snakes, you should have x-rayed the cereal before you bought it."

Arguing that my ignorance of something which I wouldn't have wanted had I been aware of it constitutes consent is severely shaky.

(Again, I'm not saying that logging me out of Google is like getting bit by a snake, and I do think it was a decently harmless demonstration of the issue with CSRF for anyone who was unaware. I'm just speaking hypothetically here.)


Well, my point is that almost all (harmless) sites do the exact same thing that this blog does, which is request an external resource such as an image. The browser can't differentiate between malicious ones or not, it simply loads them all, which is exactly what it's designed to do.

Imagine you are buying a 'random flavour' box of cereal, inside of which you may possibly find a snake flavoured cereal which does happen to contain a snake. Similarly, you don't know what you'll get on the Internet until you've received it, and you can't be sure that what you get will be safe.

Of course, you want the manufacturer to make sure there is no snake in your snake flavoured cereal. In this analogy, google is the manufacturer/snake owner. It is up to google to make sure their logout page can't be embedded in an image like in this blog.


Edit: I think I sounded a little sharp before... what I'm trying to say is this: Your explanation makes perfect sense to explain why my browser makes the request. I asked for the page, it assumes I want everything that's in the page. It's dumb. That's fine.

But imagine trying to explain to a judge that the fact that I asked for the page means that it's okay that it did something that I didn't want to happen. She's not going to believe you, and she'll be right not to. That's all I'm saying.


Looks like Google detects the type of logout and doesn't ask for two-factor authentication in this case.


It does; maybe you checked the "Remember this computer for 30 days" box last time you were logging in?


hm yep. should I hide that thing? hm.. Sorry guys in advance.


You could make it into a button: "Log me out of Google" or something like that. That way, you can still demonstrate it's easily done, but only people who are interested will see it.


It'd be nicer if you had a link to a separate page that does it so that people can read the post first.


Leave it. I think it has exactly the effect you wanted.


I can't open google reader anymore, as it pulls in the article and logs me out. Kind of annoying, but funny at the same time. Nicely done.


FWIW, worked in Firefox. Didn't work in Chrome or Opera.


Worked in Chrome for me on Vista (work comp).


worked in Chrome for me


Maybe make it a button, but just disconnecting is not harmful. Great work by the way, and oh so badly needed.

FWIW, worked on Chrome / Mac OS X Snow Leopard.


I don't think a "button" quite conveys the malicious feeling you want to inspire... but maybe a "Read more" link?


Works on Android Dolphin


It didn't log me out. Must be down to Chrome Adblock, or Facebook Disconnect.


Blocking 3rd party cookies prevents it.


Ahh, that must be it :)


I have adblock plus and FB disconnect, still logged me out


Didn't log me out either. Adblock Plus, FB Disconnect, Ghostery :P


I'm still logged into Gmail on Chrome, so is this a browser-specific issue?


CSRF isn't a browser vulnerability. It's a serverside application vulnerability.

To say otherwise is to say that there is some trivial policy, just an HTTP header away, that would allow IE, Firefox, and Webkit to coherently express cross-domain request policy for every conceivable application --- or to say that no FORM element on any website should be able to POST off-site (which, for the non-developers on HN, is an extremely common pattern).

There is a list (I am not particularly fond of it) managed by OWASP of the Top Ten vulnerabilities in application security. CSRF has been on it since at least 2007. For at least five years, the appsec community has been trying to educate application developers about CSRF.

Applications already have fine-grained controls for preventing CSRF. Homakov calls these controls "an ugly workaround". I can't argue about ugliness or elegance, but forgery tokens are fundamentally no less elegant than cryptographically secure cookies, which form the basis for virtually all application security on the entire Internet. The difference between browser-based CSRF protections (which don't exist) and token-based protections is the End to End Argument In System Design (also worth a Google). E2E suggests that when there are many options for implementing something, the best long-term solution is the one that pushes logic as far out to the edges as possible. Baking CSRF protection into the HTTP protocol is the opposite: it creates a "smart middleman" that will in the long term hamper security.

This blog post seems to suppose that most readers aren't even familiar with CSRF. From the comments on this thread, he may be right! But he's naive if he thinks Google wasn't aware of the logout CSRF, since it's been discussed ad nauseam on the Internet since at least 2008 (as the top of the first search result for [Google logout CSRF] would tell you). Presumably, the reason this hasn't been addressed is that Google is willing to accept the extremely low impact of users having to re-enter their passwords to get to Google.

Incidentally, I am, like Egor, a fan of Rails. But to suggest that Rails is the most advanced framework with respect to CSRF is to betray a lack of attention to every other popular framework in the field. ASP.NET has protected against CSRF for as long as there's been a MAC'd VIEWSTATE. Struts has a token. The Zend PHP framework provides a form authentication system; check out Stefan Esser's secure PHP development deck on their site. Django, of course, provides CSRF protection as a middleware module.
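
For instance, enabling it in Django is roughly this (a sketch; the middleware class name is the documented one, the surrounding settings are illustrative):

    # settings.py -- Django's CSRF middleware, enabled alongside sessions
    MIDDLEWARE_CLASSES = (
        'django.contrib.sessions.middleware.SessionMiddleware',
        'django.middleware.csrf.CsrfViewMiddleware',  # rejects POSTs lacking a valid token
    )

POST forms in templates then carry the token via the {% csrf_token %} tag.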


It's clear that developers need a simple way to specify that a piece of API should not be accessible from third-party sites.

I propose a new set of HTTP verbs, "SECPOST", "SECGET", etc that comes with the implication that it is never intended to be called by third-party sites or even navigated to from third-party sites. It is a resource that can only be called from the same origin. Application developers (and framework authors) could make sure to implement their destructive/sensitive APIs behind those verbs, and browser vendors could make sure to prevent any and all CSRF on that verb (including links and redirects).


Two things.

First, every mainstream web framework already comes with a simple-to-use way to block forged requests. Even if we adopted new HTTP verbs to give them a common name in the protocol, by the time developers are making decisions they're not working with the ALL-CAPS-NAMES-OF-HTTP-VERBS anyways.

Second, there isn't anything inherently "cross-site" about CSRF, so denying off-site POSTs isn't a complete solution to the problem either. Every site that accepts any form of user-generated content must deal with intra-site request forgery as well.

So no, I don't think that's a great idea.

The things that are insecure here are serverside web applications. Changes to the HTTP protocol or to browsers are a red herring. There's no way around it: web developers have to figure out how to write secure code.


Personally, I would be happy to eliminate any possibility of inter-site forgeries. It's unlikely that my bank will be putting user-generated content in front of me any time soon (and if they do I presume that they'll sanitize it well enough to not be a problem).

It troubles me deeply to have CSRF declared a purely server-side application problem. The browser is quite literally my agent in all of my interaction with the web. It is an extension of me, and when it does things that pretend that they are me, that feels very wrong. That is why I propose new HTTP verbs: my browser should know (and verify) that when it sends out a SEC* request, that my eyeballs are on that data and my finger physically clicked that button, and it can do this if those requests are, essentially, tagged as particularly sensitive.

To place the onus solely on the server-side is for me to abrogate my responsibility to fully control my browser-as-agent. Frankly, even if the server successfully rejects forged attacks, it is not acceptable that my browser, acting as my trusted agent, attempted that attack in the first place.


There are 3 major browser codebases. There are hundreds of thousands of web applications, each with different security needs. I think it's lunacy to suggest that the browsers should take on this problem.

At any rate: there isn't going to be SECGET and SECPOST, so the academic argument over whether end-to-end is better than Apple, Mozilla, Google and Microsoft deciding amongst themselves how security is going to work for every web application is moot.


You are missing a critical point: users do not expect software that is under their control to do things that they did not tell it to do, using their credentials, acting as them. Even if all the server-side software in the world were to be secured against such attempts, there would still remain an underlying problem: loss of control of the browser.

While the vast majority of resource requests (both primary and secondary) are beneficial, some are not. The browser currently does not have enough information to make this distinction. New HTTP verbs would give the browser enough information to refuse to directly load damaging resources.


Again: request forgery isn't an intrinsically cross-domain problem. The HTTP protocol change you provided is already worse than the serverside solutions it proposes to supplant.

Serverside request forgery tokens don't rely on browser behavior to function. They provide a much simpler and more direct security model: to POST/PUT/DELETE to an endpoint, you must at least be able to read the contents of the token. This meshes with same-origin security.


The loss of user agent control is a serious problem independent of whether or not a malicious request is accepted. The fact that the user agent crafted and sent the malicious request at all is a problem worth solving. But for some reason you seem to believe that it doesn't matter that the UA is acting maliciously on users' behalf, that this is an inevitable consequence of the way the internet works, or that it's such a difficult problem to fix that you'd rather ignore it and focus on the server-side. Or perhaps some combination of those.

Personally, I don't believe any of those things. Server authors should certainly take point on battling CSRF. But there is an important client-side piece to the puzzle that cannot be ignored. If users cannot even prevent their own browsers from attempting malicious actions on their behalf, then there is something critically wrong with browsers.


I'm actually quite curious about your viewpoint, and why it seems so difficult to shift - set in your ways, so to speak. So let me see if I can't let Charles Stross give it a try:

"You're right in principle, but you're right for the same reason and to the same limited extent as if you said "people have a responsibility to be aware of the locks on their front door and windows and to use them". Which is that you omit the other side of the social contract: we all have an obligation not to exploit our neighbors' negligence if they leave their door unlocked by burgling them."[1]

[1] http://www.antipope.org/charlie/blog-static/2012/03/not-an-a...


I have no idea what you are trying to say here. This is an engineering discussion, not a dorm room debate.

What you've tried to argue here is that we should add new HTTP verbs to express "this endpoint shouldn't allow cross-domain requests". Or, more generally, that we should add HTTP features to allow browsers to prevent CSRF attacks.

But CSRF isn't a browser security problem. It isn't even necessarily a cross-domain problem! (CSRF is in that respect misnamed.) The specific changes you've suggested would drastically change the HTTP protocol but couldn't even theoretically solve the request forgery problem, not just because of intra-site CSRF but because your suggested locked-down HTTP requests would also break otherwise viable apps --- meaning that many apps couldn't use these features even if they wanted to, and would have to rely on something else for CSRF protection.

The fact is that having browsers police request validity just doesn't make sense. Even if they could do that, they still obviously have to rely on serverside signals to determine whether a request is or isn't valid. If the serverside knows a request isn't valid, it already has the means to block it! Why on earth would the server punt this to the browser?

Your suggestions give the impression that you're not familiar with how CSRF protection works in apps today. It is almost a one-liner in many modern frameworks.


The thing that troubles me is not that you don't like the HTTP verb solution, but that you don't seem to accept the fact that there is a client-side problem to solve in the first place.

Your argument is equivalent to saying that websites should protect themselves from DDoS attacks - and that users should simply accept that their machines will be hacked and will become part of a botnet (or several botnets) at some point in time. In other words, DDoS is a server-side problem, not a client problem. Whereas I (and I think that most people) believe that it is our responsibility to use our computing resources responsibly, and work hard to avoid being included in a botnet.

You seem like a smart person, and I'm sure you have something to contribute to the client-side of this issue, but that won't happen until you are convinced that there is a client-side problem.

In any event, somewhat selfishly I suppose, I've found this discussion quite useful in clarifying my own views on the matter. So, thank you for violently disagreeing with me. :)


Request forgery is nothing like DDoS. I found the rest of your comment too vague to respond to. I can't rebut anything you've said without repeating myself.

You keep saying CSRF is a "client-side problem", but you haven't explained why you think that, other than that it's a problem that is occurring in a client-server system so somehow the client must be involved. That's flimsy logic.


> Request forgery is nothing like DDoS.

Forgery is like DDoS in that they both use the unwitting (and unwilling) compute resources of an intermediate victim to mount the attack. The unit of distribution of the DDoS case is a binary rootkit (for example) and the unit of distribution for a forgery attack is a web page.

The impact of successful DDoS and CSRF attacks is very different, of course, but the mechanism used to carry them out is very similar. In particular, they both differ from an ordinary hacker-to-target penetration, DoS, forgery, etc. attack.


You didn't answer my question.


You didn't ask a question (was there a question mark in your post that I missed?). You did, however, make an assertion which I corrected.

In an honest, respectful discussion that would occasion a response along the lines of either: "Ah, I didn't think about it like that. Let me see about adjusting the line of my reasoning," or, "No, your correction is invalid because..."


I think you would enjoy the book _The Tangled Web_ by Michal Zalewski, of Google's browser & web security team.


Is the following accurate:

If a form is served from domain A (via GET) in to an iframe on a page that was served from domain B, then the JS on the page from domain B is prevented from reading or writing data on the page from domain A (unless an x-domain policy is in place) though it may be able to post it.


Yes, it won't be able to read it. But that's not what they are after. What they do want is to execute some user action on the server side. So this action would have taken place just by executing the GET/POST request.


Thank you. I just wanted to make sure I understood the basis of why CSRF tokens work.


I see some points, but > CSRF isn't a browser vulnerability. It's a serverside application vulnerability. - you didn't prove this one. CSRF is a browser vulnerability. And I don't care about the other stuff you said further - you're probably right that most popular frameworks have the protection out of the box - I know it, no surprise here :). But I did a pretty wide audit - only Rails' protection looks really elegant. Hm.. probably I'm too much of a Rails fan, true.

And, please > Baking CSRF protection into the HTTP protocol is the opposite: it creates a "smart middleman" that will in the long term hamper security. Surely, I don't mean "Stop securing your apps from CSRF, it's not your problem". I just want to make browsers think about the issue the way millions of developers have to. Because it is their issue, they are in charge. But we are fixing it on the backend (and we will have to for the next 10 years, definitely).


CSRF is NOT a browser vulnerability. The browser is doing exactly what it's supposed to do: load content. The browser can not (and should not) attempt to identify the "evil" HTTP requests from the "good" ones. The browser's job is to make requests.

Now, you could argue the browser's job should be to implement security features as well. It does, after all, implement the same-origin policy. But, if you think about it, there is no good way for the browser to fix the CSRF issue. You can ask the user, which is what's suggested, but that never really works. They'll do one of two things: click "okay" every single time, or stop using your browser.

I would guess well over half of all websites do one of the following: (1) load an external JS file, (2) load an external image, (3) load an external CSS file, (4) use an iframe which points to a different origin, (5) use a JS redirect, (6) use a meta redirect, or (7) open a new window.

The proposed "solution" to CSRF stops ALL of these use cases. The user would have to manually approve each and every one of them. Given that well under 1% of alerts would be true attacks, the user would almost definitely "okay" on the attacks as well: they would have been trained by thousands of other alerts that this is an acceptable thing to do.

There was a paper by Barth and Jackson on CSRF defenses where they propose an Origin header, but that's the extent to which security is implemented in the browser. It is fundamentally up to the web application to verify that the user did in fact initiate the request. No amount of code in the web browser can get around this fact.
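
For the curious, a server-side Origin check in the spirit of that paper might look like this (a sketch with Flask-style names; the allowed origin is hypothetical, and since older browsers omit the header entirely, this can only complement tokens, not replace them):

    from flask import Flask, abort, request

    app = Flask(__name__)
    ALLOWED_ORIGIN = "https://example.com"  # hypothetical

    @app.before_request
    def reject_cross_origin_writes():
        # Only state-changing methods need the check.
        if request.method in ("POST", "PUT", "DELETE"):
            origin = request.headers.get("Origin")
            if origin is not None and origin != ALLOWED_ORIGIN:
                abort(403)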


>I would guess well over half of all websites do one of the following: (1) load an external JS file, (2) load an external image, (3) load an external CSS file, (4) use an iframe which points to a different origin, (5) use a JS redirect, (6) use a meta redirect, or (7) open a new window. The proposed "solution" to CSRF stops ALL of these use cases.

You're definitely kidding me. Please point out where in my post I said to deny ALL requests. I was talking ONLY about POST requests. I probably forgot to add it :) So, I'm talking only about form submissions, and GET is OK, sure.


Either you do it for everything, or you do it for only POST and you end up missing half of the vulnerabilities. Correct me if I'm wrong, but your CSRF attack used a GET request, did it not? [1]

Web applications make state-changing operations on GET requests. You might not like it, but they do.

[1] <img src="https://mail.google.com/mail/u/0/?logout" style="display: none;" />


>Web applications make state-changing operations on GET requests. You might not like it, but they do.

but when a developer makes a mistake with GET it is 100% his problem - that's beyond question. He should be punished :D


Nonsensical. CSRF isn't God's punishment for REST-lessness.


You're both just choosing different places to draw the line between developer responsibility and browser responsibility.


That is like saying "you're both just suggesting two totally different designs for the HTTP security model".

His model is wrong. Again: I assume he wants to know that, so, bluntness.


Perfectly solid web apps routinely ask browsers to POST to other domains.


So rather than deny ALL requests, I think it would work if browsers merely stopped passing cookies on cross-site POST form submissions, no?

Then if 3rd party sites wanted to still use form submissions, they could use an auth token in the form (though I'm unsure why they would do this instead of using JSONP).


Firefox already blocks off-domain POST requests, unless the 3rd party domain responds to an OPTIONS preflight request.

So, I'm talking only about form submissions, and GET is OK, sure.

Google's logout CSRF works because the logout link is a GET request. So, no, there is no quick fix.


No it does not.

---

https://developer.mozilla.org/en/http_access_control#Simple_...

A simple cross-site request is one that:

- Only uses GET or POST. If POST is used to send data to the server, the Content-Type of the data sent to the server with the HTTP POST request is one of application/x-www-form-urlencoded, multipart/form-data, or text/plain.

- Does not set custom headers with the HTTP Request (such as X-Modified, etc.)

---

This is actually a big deal, since it means you can send a cross-domain multipart POST with no preflight. That allows for an effective CSRF attack against file upload systems.

And of course, cross-domain POST requests via <form> tags have always worked and will continue to work.


Am I missing something here?

Let's say you're logged into Gmail and Gmail had no CSRF protection anywhere.

You're logged in while visiting my site. In my site, I include a little bit of JavaScript to make a POST request to Gmail telling it to forward copies of all your incoming email to my email address.

This will not work even without CSRF protection. It would only work if Google sends back the header Access-Control-Allow-Origin: mysite or Access-Control-Allow-Origin: * as noted in the section you linked to.

Of course, I could also try to trick you into filling out a form whose action actually points at Gmail and include all the hidden input tags to set you up for forwarding emails to me, but you would know something fishy is going on because it would redirect you to Gmail.


"This will not work even without CSRF protection."

It actually will work.

What you're describing is what's known as a "simple" request in XMLHttpRequest terms. That means there is no pre-flight necessary. Your browser will simply make the POST as requested and receive the response. It won't make the response available to you since the Access-Control-Allow-Origin header isn't set, but you're a malicious attacker in this example and you don't care what the response is: you just care that you were able to make the request. ;-)

You could even do this by creating an HTML form that POSTs to the right URL and using JavaScript to submit it automatically when the page loads. Same exact thing: no CORS checks.
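
Purely as an illustration, a toy attacker page of that shape, served from Python's standard library (the target URL and field names are made up):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAGE = b"""<html><body>
    <form id="f" method="POST" action="https://victim.example/settings/forward">
      <input type="hidden" name="forward_to" value="attacker@example.com">
    </form>
    <script>document.getElementById('f').submit();</script>
    </body></html>"""

    class AttackerPage(BaseHTTPRequestHandler):
        def do_GET(self):
            # Any visitor's browser will POST to the target with its cookies attached.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(PAGE)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), AttackerPage).serve_forever()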

If a pre-flight were necessary you would be right. The browser would send an OPTIONS request to the server, the server would respond without the appropriate headers, and the POST request would never be sent.

Let me know if any of this needs further explanation!


Oh, I see now. I had assumed that because I couldn't get the response, that the request itself was blocked.

Thanks!


I would like to know one thing.

Who the hell thought it was a good idea to allow cross-domain XMLHttpRequests? Given that the standard says POST is for modification, no other website should ever make those requests.


The CORS standard for 'simple' POSTs is no different than what you can already submit via a form from a technological perspective. In that way, it actually makes a lot of sense.

And the whole point of CORS is that some websites do want to make those requests. ;-)


"I did pretty wide audit - only rails' protection looks really elegant."

This is handwaving. You were wrong about this. I assume you want to know that, so I'm saying it bluntly.

"I just want to make browsers think about the issue as millions of developers have to. Because it is their issue, they are in charge."

No, the web browsers are not in charge. The secrets and sensitive actions are occurring on the servers, not in the browsers. The servers are what matter. The browser isn't protecting your email. The server is. The browser isn't protecting your bank account. The server is. The browser isn't controlling who is or isn't your Facebook friend. The server is.


> The difference between browser-based CSRF protections (which don't exist)

What about the X-Frame-Options and Origin headers? They are browser-enforced mechanisms driven by server-side hints, right?

(not for the classic POST case though...)


Neal addressed XFO downthread:

http://news.ycombinator.com/item?id=3778700

Read his comment. It's great.


it took me a long time to understand the point behind CSR (cross-site requests) and CSRF fully enough to find them EXTREMELY malicious.

I think this is a very important line. The sense I get around most of my colleagues is that CSRF exploits are only something "bad programmers" get wrong. Of course, they're all rockstars who've never been exploited (yet/AFAIK), so it's not like they need to spend a weekend or five paging through droll security papers. A little modesty would do us all well.

90% of developers just don't care and don't spend time on that.

Indeed. It takes time to learn, time to code, and unless you're working at a big shop, there's little pressure (or even acknowledgement of the need) to get this stuff right.

Keep up the good work OP.


CSRF is like a kafka-esque joke.

Here's my take away from every CSRF article:

A malicious site will load your site in an iframe, fill in your form and post it. Fixing it requires a token in your form, but I can see you don't understand how an extra hidden field in your form will make a difference, so you're clearly not going to handle it correctly. You're screwed. Go home.

As far as I can tell, CSRF has been possible since JavaScript & frames. How have the browser vendors not fixed such a huge insecure-by-design flaw?


The difficulty is how to prevent this from happening.

Pages making GET requests across domains are so common and necessary that several technology standards would have to come together to propose a real fix. Every image or script loaded from a CDN. Anyone hosting their own static assets on separate domains. Anyone using a plugin from Google, Facebook, Twitter, or Disqus uses this ability.

The tech companies can't even easily create a system to whitelist sites allowed to embed them, because that would severely limit third party's ability to use their services freely and would introduce a huge performance bottleneck.

I haven't seen any particularly compelling solution to this. Things guarded only by a GET request can be loaded by script, link, embed, object, img and iframe tags, and all of those have legitimate reasons for loading resources cross-domain without requesting permission for each one from the user.


I have no problem with cross-site GET requests because I know GETs should behave as 'read-only' anyway for lots of reasons.

What I don't get is how arbitrary cross-site POSTs with malicious values are allowed. As far as I can tell, anyone can post this form:

<form action="http://bank.com/send_money><input name="to_account" value="SCAMMER-1234"></form>

Worse, one article will tell you to only allow Referer == "bank.com", and then another will tell you that even that is no longer enough?!

Why can't we change the browser or the web server layer to prevent this by default?!


Browsers don't prevent it because there are legitimate uses for cross-domain posts. Good frameworks do prevent it with CSRF tokens.


I don't want the legitimate uses prevented. The default behavior should be to prevent, and the legitimate uses should explicitly opt-in. That way, you only have to do security analysis for those explicit points.


This to me is a server-side issue - but that doesn't necessarily mean it's on the app developer. The behavior you're talking about can be set on most servers directly, by adding the "X-Frame-Options" header to every response by default. Then exceptions would have to be made explicitly, by either the server admin or application developer. If anyone should change the default behavior (which I am not convinced is the case) it should be the server developers, not the browsers.


X-Frame-Options only prevents the page from being displayed in a frame. It doesn't prevent a page on another domain from submitting a POST request.


CSRF is solved very simply by using tokens in each form. If the attacking site can't load the other page, it can't pull the token out, and without the token the POST gets discarded. If you've abstracted your form generation, this should be super simple to add.
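
A sketch of what that abstraction might look like (per-session HMAC tokens in Python; the secret and names are hypothetical):

    import hashlib
    import hmac

    SECRET_KEY = b"server-side secret"  # hypothetical; load from config in practice

    def csrf_token(session_id):
        # Derive a per-session token; nothing extra to store server-side.
        return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

    def open_form(action, session_id):
        # Every generated form gets the hidden token field for free.
        return ('<form method="POST" action="%s">'
                '<input type="hidden" name="csrf_token" value="%s">'
                % (action, csrf_token(session_id)))

    def token_valid(session_id, submitted):
        return hmac.compare_digest(csrf_token(session_id), submitted or "")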


Thank you for pointing me to X-Frame-Options!

So, in the context of this discussion, why don't the browsers make X-Frame-Options: DENY the default behavior?


But there's existing code that would break.


Can you give some examples of legitimate POST requests that need to work cross-domain?


It's not that the referer header is not "enough". "Enough" implies that it falls somewhere on the scale of trustworthiness.

It's user input. Don't trust user input.


Why shouldn't you trust user-provided data to secure the same user's data? The potential attack is someone forging their own referer header in order to attack themself.


The referer header can easily be forged. The whole point of a CSRF attack is to turn a user's credentials against him.


How do you forge the referer header as a third-party site?


Ha, I'm wrong. I thought you could set the referer with setRequestHeader on an XHR. Mea maxima culpa.


A malicious site will load your site in an iframe, fill in your form and post it.

A browser cannot do this. The OP probably saw some exploit code in an ad which was served in an iframe, but the same-domain security model will not allow you to interact with another window or iframe that is of a different domain.


Am I correct in interpreting that the proposed fix would be the same as the functionality provided by RequestPolicy (which he mentions in the post)? I've used it for quite a while now, and although it works well for me as a power-user (who is concerned about security), I can't imagine the confusion and pain a user will feel despite the message suggested.

Blocking resources loaded over separate domains breaks a lot of sites today. Few popular sites keep everything under the same domain (CDNs, comment systems, captchas and Facebook/Google/Twitter resources, for example). http://www.memebase.com is probably the worst "offender" I've come across. Hacker News isn't one of them, which I'm happy to see.

Although if this were implemented I could see a lot of sites moving quickly to remedy it, reducing the alerts. It'd still be a pretty hard transition period, though.

Want to see how much would break today (and if the fix would work for the average user)? Try: https://www.requestpolicy.com


"Am I correct in interpreting that the fix would the be the same as the functionality provided by RequestPolicy (which he mentions in the post)? I've used that for it for quite a while now, and although it works well for me as a power-user (who is concerned about security), I can't imagine the confusion and pain a user will feel despite the message suggested."

That was my interpretation as well and I reached the same conclusion. Having the average user make application-level security decisions is a very bad idea.

RequestPolicy is a wonderful extension and I think its use should be encouraged. But the average user does not understand enough about an application and how it interacts with third-party websites to make informed decisions about whether a particular interaction is good or not. False positives (where the user flags a good interaction) will lead to loss of functionality while false negatives (where the user fails to flag a bad interaction) will lead to security vulnerabilities that website owners can't prevent.


In fact, if you're using Amazon's CloudFront CDN, and you're using HTTPS, you have NO way to keep everything under the same TLD; CloudFront can only serve its own SSL cert, not yours.


As my other comment highlighted, disabling 3rd party cookies will prevent most CSRF. As an added bonus it will also increase your privacy by preventing some (but not all) cross domain tracking.


I'm not following (but I'm a little buzzed). What do third-party cookies have to do with CSRF? CSRF is a flaw in the victim application.


Er, Third Party Cookies have absolutely no influence over CSRF.

And I am kind of at a loss as to how the cross-domain part of what you are saying relates to cross-domain tracking.

Kindly enlighten.


No need to go that far. The X-Frame-Options: SAMEORIGIN header, supported by all major browsers, can prevent the majority of these attacks (unwanted GET requests in the background).

https://developer.mozilla.org/en/The_X-FRAME-OPTIONS_respons...

Other than that, it should be hammered into developers' heads that GET should not have side effects.


"The X-Frame-Options: SAMEORIGIN header, supported by all major browsers, can prevent the majority of these attacks"

No it can not. I've seen this assertion popping up a few times on HN this past week: it's conflating two similar but very different attacks in a dangerous way.

X-Frame-Options prevents a type of attack known as clickjacking. It is similar to a CSRF, but it involves creating an iframe to a page with a form and convincing the targeted user to submit it. It provides this protection at the browser level: if an HTTP response contains the X-Frame-Options header and the requesting page violates that directive, the response is not rendered.

It does not prevent a CSRF attack. It's impossible for it to do so: once the malicious request has made it to the server and the server has sent back a response, the attack is already complete. There's nothing the browser can do to prevent it at that point. If you use nothing but X-Frame-Options to try and prevent CSRF, you'll have a site completely vulnerable to CSRF.
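
For what it's worth, sending the header is nearly a one-liner anyway, e.g. in a Flask-style app (a sketch; it buys you clickjacking protection, not CSRF protection):

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_frame_options(response):
        # Refuse to be rendered inside frames on other origins.
        response.headers["X-Frame-Options"] = "SAMEORIGIN"
        return response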


Is a GET request in an iframe now considered a CSRF vulnerability? As far as I know, he hasn't actually done any cross site scripting. If I submit this as a link on Hacker News and get a bunch of people to click it, have I forged a cross domain request as well?

https://mail.google.com/mail/u/0/?logout


Cross site scripting (XSS) is not the same thing as CSRF. If you were to do that, it wouldn't be a CSRF, because the action originated with the user.

Normally CSRFs are automatic, either in the form of an image (<img src="https://...?logout" />) or an iframe src attribute. So, if you included the above image tag on your page, then it would be a CSRF, sometimes also called a Confused Deputy Attack.


Cross site request forgery is separate from cross site scripting. He is not claiming to have done any cross site scripting. Secure sites generally require a token to perform any state-changing effects; it's just odd that Google doesn't require it for logging out.

Facebook uses http://facebook.com/logout.php to log you out, but clicking that link won't do it.


I interpret CSRF (cross-site request forgery) to be different from XSS (cross-site scripting). This particular vulnerability is indeed a CSRF, but not an XSS which is what it sounds like you're confusing it for.


The other replies to your post are focusing on the fact that you mistakenly used "cross site scripting" in your post, but you raise a valid point: is it really a problem to cause a GET request to that URL? It would be a lot more convincing if he used a POST to a URL that seemed to be doing the normal sanity checks, like if he caused Gmail to send a mail. Right now his example is unconvincing because it's possible that the Google guys just allow logouts via GET because it's relatively harmless to log someone out.


It has nothing to do with future posts, I repeat...


If Google would prefer that any random website can't log its users out of its services, then yes, it's a CSRF vulnerability.


Maybe this is a good time to ask:

I found an XSS vulnerability in a website that can be used to cause noticeable problems (enough that fixing it should be a priority), so I contacted the developers behind the site and informed them what caused it, how to fix it, and gave an example of it in practice and why it's bad: they've done nothing in over a month. What do I do?

I guess the answer is "forget it", but I feel like if I don't do anything someone malicious will discover the issue and cause harm to users of the website...


> but I feel like if I don't do anything someone malicious will discover the issue and cause harm to users of the website.

They certainly will. Usually responsible disclosure is defined as some form of contacting the party involved, working out some window of time that you both agree on during which they can fix the bug (~30 days say), then disclosing details of the vulnerability. This is like a very polite and necessary threat.

If you care I would contact them again and let them know you plan to make the vulnerability public, and ask how much time they need to fix it.


Is it a persistent XSS vuln or does it depend on malicious input being passed via the URL or POST?

It's persistent if it can be saved in a comment or on a profile, etc, and is much more dangerous if so. Non-persistent XSS realistically isn't too big a deal, most sites are vulnerable and it's usually only a problem if you're a big website and therefore vulnerable to phishing attacks.


I can link someone to a page and it can associate them with something they can then never disassociate themselves with. For example I could create an account, post illegal content (child pornography etc.) on the site then get people to click a link and forcibly associate their account with that content, which they are then tied to until a site administrator realises and fixes it. (edit: without them ever knowing)

Imagine if I could make you the author of this comment, it's like that.


do some friendly hacking to annoy the admins (but only them), and watch them fix the issue in real-time


That is TERRIBLE advice. I don't know exactly what you mean by 'friendly hacking' but ANY exploitation of a website vulnerability without that site's permission would be a crime pretty much anywhere; even if it isn't malicious. It would be far from the first time that an administrator or owner didn't understand that the person was trying to help or just didn't really want to deal with it and it then just ended up being an issue of the vulnerability discoverer vs. law enforcement. Never a fun situation even if you win.


Not much different from the recent Github hack, really.


Same guy :)


For those who didn't see the recent kerfuffle: This guy recently found and demonstrated a major Rails exploit on github. He seems to know a thing or two about security exploits.


Clarification: he didn't recently find the exploit. He's been making noises about it for a very long time and being ignored, so he took the (dubious, to some) step of using the exploit publicly and loudly, to draw attention to the problem.


Another clarification: He wasn't making noises for a long time with _GitHub_ and being ignored. His support responses were replied to nearly immediately (except where the timezone differences came into play). We take security reports very seriously.

We'd have preferred a more responsible disclosure, and I hope he (and others) are more careful about this in the future. Most reporters we get act very responsibly, and we are always gracious (and even contract work from them in some cases). In his case, we saw activity that he didn't report to us, and suspended his account while we did a deeper investigation.

The Rails community and we still think that his proposed solution is not a good idea, but it did provoke exploration of some other ideas.

https://github.com/rails/rails/issues/5228

http://techno-weenie.net/2012/3/19/ending-the-mass-assignmen...

http://weblog.rubyonrails.org/2012/3/21/strong-parameters/


I'm sorry, but why should he? If I find a major hole in SHA1 key handling, should I contact GitHub since you are users of it? Of course not.


"SHA1 key handling"?

Anyways, you misread him. All he's saying is that the delay Egor Homakov experienced was with the Rails dev team, not Github. Github's response to Homakov's finding was very fast.


I just pulled something out of my ass to fill in the gap. Was probably thinking of RSA.


There's a difference between complaining about a class of vulnerability and exploiting a particular instance of a vulnerability that you seem to be failing to grasp.

If you find a major hole in a part of Git, you are by no means obligated to tell GitHub. You are, however, legally obligated not to compromise their site using that hole.

Or, a better example: you can talk about XSS mitigation strategies all you want. You can't go around looking for XSS vulnerabilities on random websites and then exploiting them.


technoweenie pointed out that he wasn't being ignored by GitHub, I was saying that's irrelevant. GH is just one of thousands of Rails apps that were/are vulnerable.

> You can't go around looking for XSS vulnerabilities on random websites and then exploiting them.

exploit: to use a situation so that you get benefit from it, even if it is wrong or unfair to do this; to utilize, especially for profit; etc


This is not the definition of "exploit" that the law works from.


Let's be fair to homakov:

He used the exploit publicly and loudly (full disclosure to almost all affected parties) by doing a relatively harmless change to _rails_ _master_ on github.

If his actions should be called an attack, then it was highly targeted - at the people who could fix it - to get their attention.


His relatively harmless change demonstrated a Major Security Vulnerability in the site that hosts thousands of companies' secure code.


And got it fixed.

Security vulnerabilities aren't the sort of things that go away just because you don't know they're there.


From experience: had he simply told Github about it, they would have fixed it quickly, and there would have been no window of public exposure. That's the point being made here.


Fair enough, that's basically what was said in the bug he filed with Ruby on Rails.

I won't try to impute motives but I think he did it this way because he felt like he was being treated poorly.


There are two different issues here. 1) A bad security vulnerability at GitHub. 2) Poor design in Rails that makes it easy to produce security vulnerabilities.

Egor found 2, and got ignored by the Rails team. His frustration led him to publicly demonstrate 1, which caused a whole lot of people a whole lot of trouble.

The people that are irritated at him are irritated at him because of 1, not 2.


What kind of serious company puts important or 'secure' code on github?


No, he was making noise about a class of exploits and then exploited a particular instance of that exploit, which is completely different.

It's like the difference between the class of buffer overflow exploits and the buffer overflow exploit in a particular piece of software.

There's a significant difference between the two.


And we all saw what happened: Github got in gear real quick and rolled out a fix FAST.

Point, dood.


I'd not say so. Everything new is well-forgotten old.


My app's web site is built with Django. I use the built-in CSRF tools. (I should emphasize that my site is strictly HTTPS.)

In theory, no normal user will ever fail CSRF checks. In practice, tons of people have complained that they see Django's (very confusing) CSRF error page when they try to sign up for my service.

This was surprising to me; I thought we were _way_ past this point. Digging into it, I've learned that tons of people use extensions that muck about with cookies in ways that break Django's CSRF feature. I don't really know a way around it.

How common is this, in your experience?


Yeah, this is something I run into often, as I don't accept cookies from sites by default and don't send the Referer header (both are required for Django's CSRF middleware if over HTTPS). This is a good read if you are interested in the rationale behind these decisions -> https://code.djangoproject.com/wiki/CsrfProtection

As far as a solution for your users, I'd just let them know that you require cookies to log in (obviously) and, if you are posting over HTTPS, that they need to send the Referer header - which can be set to just the domain rather than the entire URL if they prefer. I use https://addons.mozilla.org/en-US/firefox/addon/refcontrol/ set to forge for Django sites.


Yeah, there are plenty of reasons to do what you're doing that seem fair to me. But at the same time, through no fault of yours, your requests are indistinguishable from potentially malicious ones. The whole thing is a mess, effectively a band-aid on top of deeper issues with HTTP's statelessness.

Also: that's a good link. Thanks.


If I'm blocking cookies/referer by default then the onus is upon me to enable them for sites that require them for stuff like this. I wouldn't worry about users who have this issue. Maybe customize django's CSRF failure page to say they need to enable both to use your service and call it a day.


I agree in principle. And I have built a custom CSRF page to help my potential customers out.

In practice, lots of my potential users don't even understand that their AdBlock/whatever extensions are mucking about with Cookies in ways that break things. It's a tough sell to tell someone who is thinking about trying your service: "sorry, I don't work with your browser the way it is" when so much of the rest of the world is either HTTP, not HTTPS, or simply has decided to punt on CSRF or be much more selective about it. It looks to them like _I'm_ the one that's broken.

Argh. It's no-win.


Humm.. I'm thinking you could write a middleware that checks for Referer over HTTPS and, if it's not set, goes ahead and sets it to https://yourdomain.com. That would allow you to continue to use the CSRF middleware for the nonce check (just make sure yours is before theirs).


Except an attacker can strip a referer header: if you fail open like that, you leave yourself open to attack.

See http://blog.kotowicz.net/2011/10/stripping-referrer-for-fun-... for examples


In order to exploit this an attacker would need to be MITM on the network or on a subdomain by setting a wildcard cookie. The site would still keep the nonce check. I don't see any way around this without poking a tiny hole in the CSRF protection. Guess you gotta weigh the cost/benefit.


In terms of the error page, you can modify the CSRF error page by setting CSRF_FAILURE_VIEW:

https://docs.djangoproject.com/en/dev/ref/settings/#std:sett...
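
A minimal sketch of using it (the setting and view signature are Django's documented ones; the module name and wording are hypothetical):

    # settings.py
    CSRF_FAILURE_VIEW = 'myapp.views.csrf_failure'  # 'myapp' is hypothetical

    # myapp/views.py
    from django.http import HttpResponseForbidden

    def csrf_failure(request, reason=""):
        return HttpResponseForbidden(
            "Signing up requires cookies (and, over HTTPS, the Referer "
            "header). Please check your browser extensions and try again."
        )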

Update: I noticed that later in the thread you mention that you already provide a custom error page. I'll leave this for others who might not be familiar with custom CSRF error pages.


I am not familiar with Django's CSRF tools, but you could write your own that didn't depend on cookies. Init a JS var with a random token in the HTML somewhere, then require that the browser includes it with any state-changing actions.


AFAICT django requires the token as a form parameter, which would be what you suggest doing with javascript.

The issue is the "require that the browser includes it", as the information on the token must be available to the server too, and django apparently puts that in a cookie (rails does too, if the session is stored in the cookie).

So I believe you have suggested a fix for the bit that works already, but not for grandparent's actual problem :)


I've been examining Django sites too. Confirmed.


This attack vector requires:

1) previous authentication to a service.

2) service which supports destructive actions as guessable URLs.

3) "third-party cookie" support in the user agent. [1]

4) a visit to a page with a malicious resource construct (an image, script, iframe, external style sheet, or object). Note that this resource could be generated by JavaScript, although this is not necessary.

Sadly, the first three criteria are widely met. If we are to systematically remove this threat, then we have to look at removing each in turn:

1) Previous authentication to a service can be mitigated by simply logging out when you are done, but this is inconvenient and requires manual user intervention. However, there is an interesting possibility to limit "important" services to a separate process - a browser, an "incognito" window, etc.

2) Services should be built with an unguessable component that is provided just prior to calling by a known-good API, probably with additional referrer verification.

3) It is my belief that disabling third-party cookies is the right solution here: users rarely, if ever, get value from third-party cookies. Denying them would allow API authors to write simpler APIs that do not have a secret component, and would allow users to maintain the same behavior and login to all their services from the same browser.

4) While it seems that little can be done on this front apart from releasing some chemical agent into the atmosphere that made people trustworthy and good, actually it may be possible for browser makers to do some simple analysis of resource URLs to detect possible hanky-panky.

[1] https://en.wikipedia.org/wiki/HTTP_cookie#Privacy_and_third-...


Ok so I have two more ideas to mitigate this vector:

1) Rename "Third-party cookies" to "Evil Cookies" and lobby all browser vendors to disable them in all circumstances. They are enabled by default(!) in the major browsers presumably to placate advertising networks.

2) Introduce a new HTTP verb, SECURE, which browsers will not send to a third-party website under any circumstance, including navigation events. That would make requirement number four impossible to satisfy (even for links and redirects).


Third party cookie support isn't necessary. You could just use a link instead of an image.


Yes, that's the active form of the attack. To me, the passive form is far more pernicious (you are taking destructive action passively). At least with the active form you know that you've done something unintentional.

But this does imply that the final onus is on the programmers of services to design services that do not have guessable, destructive one-step inputs.


You could use a redirect, too.


Here's Google's reply to this particular "vulnerability": http://www.google.com/about/company/rewardprogram.html#logou...


I don't understand why Google say that this is an issue that can't be solved - why can't they use a CSRF token on their logout feature, either by switching to using a POST form or by appending a CSRF token to the query string?


That won't work without JavaScript, and then you need another URL to fall back to that'll respond to GET requests for non-JavaScript browsers. And then you could just XSRF the non-JavaScript URL.

[Disclaimer: I work at Google, but not on any area related to this]


I don't understand why this would need JavaScript - regular CSRF protection for POST requests works fine without JavaScript, so why can't that be applied to the logout button?


If you think about the damage that can be done by a logout CSRF, it is not going to do much harm. Really a low-severity issue.


I get that - but the Google security FAQ suggests that fixing the issue is essentially impossible, whereas I'm pretty sure fixing it is the same as fixing any other CSRF hole.


The point that those blog posts make is that it's possible for a malicious attacker to log people out of a third-party site in multiple ways (specifically by messing with the user's cookies). Protecting one of those ways provides little to no benefit if the others are still unprotected.

One reason I've also heard cited is that you always want the logout links in your application to work: you want users to be able to terminate their sessions quickly and easily. If you have a CSRF token tied to your user's session and that user happens to click on an old logout link (maybe they had an old tab open or something), the user won't be logged out of the application.


I'm having a little trouble parsing this post. Is he saying he's discovered a variant of CSRF that cannot be stopped by using the Synchronizer Token Pattern? Or has he found something that a lot of sites' protection patterns don't cover?


You seem to be familiar with the subject. I just read through CSRF and Token stuff on [1] and there's one thing I don't seem to understand.

What would prevent an attacker from opening an original site's page in an iframe and then having a script fill in and submit the form on it? In other words, say I am logged in to my bank's site. I then open a malicious page that has an iframe pointing at http://bank/operations/move-funds which contains a fund transfer form. Wouldn't this page include a correct CSRF token, making the form readily submittable by a malicious script?

[1] https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%...


This is prevented by only allowing frames to interact with each other if they're on the same domain. See (for instance) http://msdn.microsoft.com/en-us/library/ms533028(v=vs.85).as...


Usually with banks, they will require that users enter their password for important requests. Additionally, servers can use the X-Frame-Options header to prevent their website from being displayed in an iframe.
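
Sending that header is essentially a one-liner on the server. A plain Node sketch (the handler body is illustrative):

    var http = require('http');

    http.createServer(function (req, res) {
      // Tell the browser not to render this response inside a frame
      // on a different origin.
      res.setHeader('X-Frame-Options', 'SAMEORIGIN');
      res.end('hello');
    }).listen(8080);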


are the contents of an iframe opened from another domain available to you?


he discovered that you can log out from google via a GET request (surprise!)


Specifically,

    <img src="https://mail.google.com/mail/u/0/?logout" style="display: none;">
When your browser loads the page, it requests that "image", which logs you out.

I don't see a way browsers could effectively enable CSRF protections. How is it supposed to know you don't want to request that page as an image? What about sites linking to images on other domains? CDNs would be blocked, because how is Chrome supposed to know you actually wanted to load the image from fbcdn.net or s3.amazonaws.com?


You can prevent the iframe CSRF with X-Frame-Options: SAMEORIGIN, I suppose? Maybe browsers could implement X-Image-Options: SAMEORIGIN as well - a kind of hotlinking-prevention header.


That prevents the result from being displayed, it doesn't prevent the request from being made. The distinction is subtle but hugely important. In other words, the browser makes the request, gets the response, and doesn't render it. The server doesn't know that the browser didn't render it: it treats it like any other request.


Ah, you are correct - I forgot about that. Thanks for pointing this out.


lol it's just a funny trick 4 lulz. It has nothing to do with the future stuff


nope. The Token Pattern is an ugly workaround for a browser vulnerability - that's the point.


You know more about this stuff than me, but I always assumed it was the cost of using the HTTP protocol due to its stateless nature. Even if browsers "fix" this issue, you're still placing a measure of trust in the client by not implementing server-side protections.


He is incorrect on this point. Authentication of requests is not an "ugly workaround" to a browser security issue.

The browser isn't just doing what it's "supposed to be doing" (always a flimsy argument in favor of the status quo, I agree!) but also all it can do, since only the server has the information needed to judge how sensitive a request is.

It's true that servers & browsers work together to create a semblance of a security model for the web. But the bulk of the job belongs to the server; there are hundreds of thousands of different applications each with different needs. And the servers have a means of enforcing controls flexibly: by authenticating requests.

The browser isn't protecting your email. The server is. The browser isn't protecting your bank account. The server is. The browser isn't protecting your HN karma. The server is. The browser isn't protecting your code repository. The server is. No simple HTTP standard will cover all these cases, and so it's silly to suggest that HTTP is where this security control should be expressed.


[ repost from below ]

I just read up on CSRF and its mitigation with Synchronizer Tokens on [1] and there's one thing I don't seem to understand. What prevents an attacker from opening an original site's page in an iframe and then having a script fill in and submit the form on it? In other words, say I am logged in to my bank's site. I then open a malicious page that has an iframe pointing at http://bank/move-funds that contains a fund transfer form. Wouldn't this page include a correct CSRF token, making the form readily submittable by a malicious script?

Can anyone comment? It damn sure looks like a big gaping hole that is virtually impossible to plug.

[1] https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%...


Because a script (I assume you're referring to JavaScript) can't fill in a form on or read the contents of a third-party website. That's a violation of the same-origin policy.

CSRF tokens are a well-understood solution to this issue. In order to submit a valid request, you must include what is essentially a secret token that is on the page (although the secret token can just be your session ID). For an attacker to get that token, they would need to be able to do at least one of the following:

A. Guess it, by having you make multiple requests. (so you make the token long enough that it's infeasible to guess)

B. Be able to read it by intercepting the HTTP response or reading it in some way, in which case you have much larger security issues.

C. Be able to read the token in the HTTP request that the browser makes. Again, if an attacker can do this, your session is already compromised.


Right, the same-origin policy, thanks. Just found it after a minute of jsfiddling.

Now, let's say my script is not loading the bank's page into an iframe, but rather fetches it with an ajax call. Wouldn't that page (again) include a valid CSRF token? Or is this mitigated by checking the referrer on the bank's side?


You can make but CAN NOT view the result of a cross-domain request via XMLHttpRequest unless the site specifically opts in to it. Same-origin policy again.


Doesn't work, because of cross-domain security policies. Javascript running in http://malicious-site wouldn't be able to read the CSRF protection token in the fund transfer form on http://bank. So the submission wouldn't have the correct token value and the bank would reject the attempt.
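
You can see this with a quick sketch run from a page on http://malicious-site (the bank URL is the hypothetical one from above):

    // The browser will happily *send* this request (with cookies, if
    // third-party cookies are allowed)...
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://bank/operations/move-funds', true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) {
        // ...but in a CORS-capable browser, without an opt-in from the
        // bank the response is unreadable: status is 0 and responseText
        // is empty. (Older browsers refuse the cross-domain open outright.)
        console.log(xhr.status, xhr.responseText);
      }
    };
    xhr.send();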


He logs you out of Google with a simple

    <img style="display: none;" src="https://mail.google.com/mail/u/0/?logout">


CSRF is a bit of a pain to work around but how much of a problem is it in the wild?

Most sites where this could do real damage (and have real gains for the attacker), banks etc are going to be well protected.

You could use it to comment-spam a blog, but that's going to be a crapshoot: you'd have to guess which blog people are logged into, etc., so you would need very targeted attacks.

Sure, signing out of google is annoying, but if you have lastpass or similar, signing back in is pretty frictionless.


>Most sites where this could do real damage (and have real gains for the attacker), banks etc are going to be well protected.

You think so. In "the wild" even serious systems are vulnerable #OpApril1


Why are you spacing out the "release" of your information? I assume you've found some CSRF vulnerabilities?


Gmail has had more serious CSRF vulnerabilities in the past - you could use it to download the entire address book of anyone who visited your site.


RequestPolicy + NoScript are the big reason I have not switched to chromium.

In order for requestpolicy to block this it needs to be in a fairly locked down state too...


A bit of a note regarding REST:

RESTful services are as vulnerable to CSRF as anything else. See [1] for more information (and I'm really sad that there's no second post, as mentioned). However, since RESTful services imply no state on the server (i.e. no token), the question is: how do you prevent CSRF attacks?

One really simple method is to deny all requests (on the server) with the application/x-www-form-urlencoded content type, and deny all multipart/form-data requests that include non-file parameters, since those are the content types that can be sent from an HTML form. Your own application can use XMLHttpRequest, which can set a different content type and isn't subject to this form-based CSRF.
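
A rough sketch of that first check (plain Node; illustrative only, and it leaves the multipart inspection as a stub):

    var http = require('http');

    http.createServer(function (req, res) {
      var type = req.headers['content-type'] || '';
      // HTML forms submit form-encoded bodies, so refusing them blocks
      // form-based CSRF; your own XHR clients can send JSON instead.
      if (type.indexOf('application/x-www-form-urlencoded') === 0) {
        res.statusCode = 403;
        return res.end('form-encoded requests not accepted');
      }
      // multipart/form-data would additionally need its parts checked
      // for non-file parameters (omitted here).
      res.end('ok');
    }).listen(8080);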

EDIT: Also, sort-of-related: I recommend you set the X-Frame-Options header too, in order to prevent clickjacking. Info at [2].

[1]: http://blogs.msdn.com/b/bryansul/archive/2008/08/15/rest-and... [2]: https://developer.mozilla.org/en/The_X-FRAME-OPTIONS_respons...


I had envisioned what I think is a more solid defense against CSRF ... I just haven't had time to build a proof.

Earlier commenters have noted that each request back to the server should include an unguessable token that cannot be derived by mining other pages on the site with cross-site AJAX requests.

My hypothetical solution is to embed that token in the prefix to the hostname after logging into the given site. The token would then be sent in the Host: header for all dynamic requests.

Step 1: You log in to www.somesite.kom.

Step 2: You are then forwarded to dynXXXXXXXX.somesite.kom where XXXXXXXX represents a unique, dynamically-generated token tied to your session.

The attacker must now know XXXXXXXX to properly form up a GET or POST request to attack your account.

The site itself could then use relative URLs for dynamic content, or could use the appropriate templating system to ensure that any dynamic URLs (either in HTML markup or script text) contain the generated hostname.
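
A sketch of the two steps, reusing the kind of Express-style setup from the token sketch earlier in the thread (somesite.kom is the placeholder above; note the session cookie would need to be scoped to .somesite.kom to survive the hostname change):

    var crypto = require('crypto');

    // Step 2: after a successful login, bounce the user to the
    // tokenized hostname.
    app.post('/login', function (req, res) {
      // ... verify credentials ...
      req.session.hostToken = crypto.randomBytes(4).toString('hex'); // "XXXXXXXX"
      res.redirect('https://dyn' + req.session.hostToken + '.somesite.kom/');
    });

    // Every dynamic request must then arrive on the expected hostname.
    app.use(function (req, res, next) {
      if (!req.session.hostToken) return next(); // not logged in yet
      var expected = 'dyn' + req.session.hostToken + '.somesite.kom';
      if (req.headers.host !== expected) {
        res.statusCode = 403;
        return res.end();
      }
      next();
    });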


There is a HUGE vested commercial interest in the CONTINUATION of the insecure status quo, in which all the control is on the server side (hence, with the companies doing tracking and advertising using third party requests, rather than with the end user).

Furthermore, the players funding browser development all share strongly in that vested interest. (Even for Firefox, follow the money - and if Firefox did try to lock down without industry agreement, it would lose, which Mozilla knows).

So you will not see any change. This also explains the degree of heat directed at the suggestion that client behavior could be less insecure by default, with regard to third party requests.

This is not new. Much of HTTP as originally conceived actually dictated a great deal more user control over what happened. Those standards had to be compromised from the word go in order to reach the present state.


Adding an extra token for protection against CSRF attacks will only work if it is changed on each request. Some of the biggest sites out there do not do this. I know of one site in particular (I won't name it, but it's HUGE) that generates a unique token every time a user logs in. The token doesn't change until the user logs out; even if the user closes the browser and doesn't go back to the site for a week, the token will be the same.

So it does its job, until somebody like me pokes around and finds a hole that will parse out that token, and generate a form that can make any request on behalf of that user in an iframe without that user knowing a thing. Evil, yes, but I found this months ago and it still works... and I haven't used it in any way, besides a proof of concept.


You shouldn't be able to get the token from another domain, regardless of how long it lasts. How are you able to?


I'm getting it on the same domain, but the request can be sent from any domain, as long as the user is logged in.


Did you consider reporting it? Many such "huge" sites have bug bounty/white hat programs.


I'm getting it on the same domain, but the request can be sent from any domain, as long as the user is logged in. And yeah, but they aren't offering anything that would be worth the time.


I think I know what site you're talking about. If I'm right, they do have a security bug bounty reporting program and you should take advantage of it: it will take maybe two minutes of your time and can net you a bit of cash! :-)

(sorry for being oblique, but I have no way to contact you privately and ask you more directly!)


I haven't made up my mind yet about what to do with it, but you know, there are some ways of being evil without so much evil. ;]


If Google required a POST to log out (as it should be, since logging someone out is changing the session state and therefore not a "safe" GET-able request[1]), we could fall back to CORS as protection which removes the need for a CSRF token. Since the only way (I believe) to get a POST to fire cross-domain, without explicit user interaction through, say, a regular HTML form, is through JavaScript, the browser would refuse to make the request unless the CORS headers explicitly allowed it.

Still, using <form> buttons for logging out, consistently across the entire web, would take some effort. CSRF tokens are probably less intrusive.

[1]: http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1...


"Since the only way (I believe) to get a POST to fire cross-domain, without explicit user interaction through, say, a regular HTML form, is through JavaScript, the browser would refuse to make the request unless the CORS headers explicitly allowed it."

I'm not quite sure what you're trying to say here. But you can make cross-domain POST requests in two ways, both involving JavaScript:

1. Create an HTML form, use JavaScript to submit it.

2. Use XMLHttpRequest to make a cross-domain POST.


Yes: "the only way [...] to get a POST to fire cross-domain [...] is through JavaScript".

The request would go against the same-origin policy, at which point CORS comes into play.

Edit: Ah, but creating a form in the DOM and submitting it via JavaScript... that one I hadn't thought of.


To fire automatically, yes: getting people to click on a button of their own free will is easy though.


CORS isn't available in all browsers. You still need additional protection for browsers that don't support it. Plus it has nothing to do with using JS to submit a form.


It's certainly true that not every browser supports CORS. But thinking about it, is logging out via a cross-domain request even that necessary in the vast majority of cases? Same-origin violation would block the POST.

> Plus it has nothing to do with using JS to submit a form.

I wasn't saying you submit a form with JavaScript, but rather that the non-JavaScript way you make a POST request is through a user-submitted form, and is therefore intended by the user as opposed to some invisible, unseen operation.


"April fools day" will not be a good day for him to publish "secret stuff".


why? seems perfect.


Many people will assume it's a joke, and not take you seriously - of course, at their own risk.


Bravo! If you thought you were immune because you only visit "reputable" sites, it'll make you think twice. I tried putting it in an incognito tab in Chrome and Google apps in a normal tab. That didn't log me out, but if I put both google apps and this site in incognito tabs, or both in normal tabs, then it logged me out. Pretty important to log out of sites when you aren't using them. But more important to fix my sites!


I am hesitant to post this, because from experience, smart people frequently misread or misunderstand it, but:

The easiest first-step solution is just to check the HTTP Referer field, and check it matches your domain.

Yes, this is easily faked by someone crafting their own HTTP requests. This is /not/ easily faked by someone causing your browser to make requests, though. And it provides very good coverage against the attack here.
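
A minimal sketch of the check (a plain JavaScript helper for a Node-style server; www.yoursite.com is a placeholder):

    // Reject state-changing requests whose Referer is absent or points
    // at another origin. (Yes, the header name is misspelled in HTTP.)
    function refererOk(req) {
      var ref = req.headers.referer || '';
      return ref.indexOf('https://www.yoursite.com/') === 0;
    }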


If you do that, you then have to make sure your app doesn't have any redirector anywhere or anything that writes Location. Almost all apps do. This is because you can get the site to insert the right referrer.

For example, if the URL of the attack request is (both of these are close to real-life examples that work(ed)):

http://www.site.com/mail/filter?create=*%2Cattacker%40hush.c...

usually the site will have a redirector on the login action, which takes the user back to the page that they were on after login, so you just use the attack URL as the redir URL

http://www.site.com/login?return_to=%2Fmail%2Ffilter%3Fcreat...

Amazing how many login scripts still do the redirect even if the user is already logged in, or even if there is no real login.

Even if you do a javascript in-place login, there is usually a mobile version that has this pattern. I rarely come across a site that doesn't have a way to bounce between URLs and fake the referrer.

I guess the real conclusion is that these types of attacks are complicated and better understood in full than reduced to a single short fix - because the next response is always "but if you do that, then...", and so forth, like a matryoshka doll.


Standard caveat with checking the Referer header: there is software out there which strips out the header in the name of privacy. If you use the Referer as a source of validation, you have to be prepared to deal with users of Norton Internet Security and other such products who will be unable to use your site. And you can't "fail open" by accepting any request without a referer, since there are plenty of techniques an attacker can use to remove the referer as well.


I wonder what will happen to websites which use cross-domain post requests to log in securely, e.g. http://example.com submitting the login form to https://secure.example.com


If they are using tokens then this should be viable as long as the state is shared across servers.


I 100% agree - CSRF protection is a huge hack: basically, telling programmers to add CSRF tokens to all their forms and requests shows that all these cookies and security measures in browsers are worth shit. It seems like the entire web security model needs to be redesigned from scratch.


I agree that web security needs to be rethunk from the ground up, but I don't think it's fair to blame browsers for things that are fundamentally HTTP protocol and server app problems.


This is a problem with app developers, not browser makers. Applications should follow the HTTP spec and not do any persistent changes via GET.

The "I can log you out of Facebook/etc" trick is one of those techniques that script-kiddies love using on forums.


read again, it's not about GET, and the google trick is just a trick.


Google allows a logout from a GET request, on Chrome of all browsers? Is there a way to run browser tabs completely sandboxed regarding cookies/auth? (private browsing mode and running different browsers is a bit too clunky)


Chrome allows you to set up multiple user profiles; each one is isolated from the others. Like Incognito mode, it's per-window, not per-tab. I have one for GMail, one for Facebook, and one for everything else.


Okay, this is a good idea, but how would it handle legitimate requests to other domains?

Issues:

- some users would click allow anyway, so it doesn't completely solve the problem

- what about apps built using CORS, etc.


Is this what's happening with "hacked" twitter accounts that don't ever seem to have had anyone access them or change the password and seem to suddenly start spewing bizopp spam for a day or two?


Preventing this in the browser is actually pretty trivial for most sites. Simply block 3rd party cookies. If the site uses cookies to track sessions the request won't have your session cookie and won't work.

It's not up to browsers to prevent this. Just like how you can't rely on client side data validation you must always take proper precautions on the server. Browsers taking additional precautions to prevent this would be nice but it's not the whole solution and never will be.

Edit: If you're going to downvote this, please leave a reply stating why. I don't understand the opposing point of view, unless you use 3rd-party cookies to track people across domains.


Don't allow actions through GET, always use POST.


That doesn't solve it because the attacker can just create a <form> and auto submit it with js (or make a translucent submit button that is the size of the entire page if you have JS disabled).


For example:

    <body onload="javascript:document.evil.submit()">
      <form name="evil" method="POST" action="https://mail.google.com/mail/u/0/?logout">
      </form>
    </body>
The massive button is left as an exercise to the reader ;-)


Aren't you protected if you just use csrf tokens on sensitive POSTs? I thought it was good practice to always do that anyways.


Are REST APIs susceptible in the same way? Or as long as we don't store our auth token in a client-side cookie, we're ok?


I'm having a bit of trouble parsing the post. Did he just discover CSRF and is trying to raise awareness? Or did he discover a new variant of CSRF that makes previous counter-measures ineffective?

For completeness, Rails guide covers these security holes http://guides.rubyonrails.org/security.html#cross-site-reque...


"Jeff Antwood" Entertaining read.


The web is already too useless because people can't think about security in a decent way.

Still baffles me that I can't allow a script on one domain to automate something on another domain for me.


WOW You've discovered absolutely new type of vulnerability! Awesome work! lol lmao


no food for troll here


You realize you could be monetizing these security vulnerabilities, right?


how? If I report it, nobody pays even a 'thank you'.


Homakov, by doing what you are doing.

A lot of people are watching you from your blog posts and some of these watchers would pay you good money to do a security audit.

I don't know the breadth of your expertise but I would reach out to some well respected security consulting firm and use your blog to demonstrate your interest/passion for web security. This might be a great way to broaden your expertise.

If you contacted 10 security firms, I'm sure at least one would hire you and cover visa issues if you plan on leaving the country.


Thank you. I hope so


That is true for some people, but NOT for all. Recognized security experts, or anyone with a reputation in the field CAN get themselves heard, and information which they report will NOT be ignored. (Whether you will get paid for it is another matter. It depends.)

You used to be a "nobody" -- just some unknown developer whose English communication skills are a bit weak and who was likely to get ignored. That is no longer true. You are now "famous" in security circles, and if you approach people in a professional manner then I am confident that you will be heard.


> are a bit weak

don't be too polite ) I reported holes that definitely should be reported.


Fuck reporting it, unless you're contractually obligated because they've retained you (or, if it's an open source project you like, and want to support). If vendors won't even listen to you, clearly they don't value your time or their product, or their customers.

You can sell security vulnerabilities to a variety of parties. If you want introductions, email me.

Some people view this as "wrong" in some ethical way, but meh. Money is good -- it can be exchanged for valuable goods and services. There have been a lot of arguments for "responsible disclosure", "anti-sec", "full disclosure", etc. over the years.

I'd draw the line at blackhatting yourself with the vulnerability, but just selling the info is legal. Generally, security companies are buyers, and their clients tend to be governments, generally western (USA).


I've been black in the past, now I'm completely white hat.


>"Money is good -- it can be exchanged for valuable goods and services."

Money is not "good". Money is "necessary" in our society because people are greedy bullies.


People that do stuff like that need to find another industry. IT is not for them. We're nice people, generally. Maybe try finance or real estate or lobbying or health care or defence work.


I agree the world would be better if it were just nice people being nice, but IT security has become defense, and is an increasing part of defense. Governments are buying.

I don't see a huge moral difference between smart hacker with $0 (publishing 0-day for the lulz) and smart hacker with $250k (selling vuln to a defense contractor).



