Demystifying CORS (frontendian.co)
266 points by andRyanMiller on June 4, 2018 | 49 comments



This post never actually explains why any of this was needed. The closest it comes is this bit:

"If [...] the same-origin policy for XMLHttpRequests relaxed, said services could now receive a deluge of DELETE, PUT, etc… requests from any origin"

But why on earth is this a problem? And in any case those services always could receive requests from any origin, browsers aren't the only HTTP clients in the world. In the old days you could simply proxy through a server if you needed to get around the same-origin policy, and I'm sure lots of people did.

I'd love it if someone could explain why the whole CORS rigmarole was considered necessary.


CORS was designed to mitigate e.g. the following attack:

* You are logged in at service X, which uses a cookie to store your authentication token.

* Service X offers an endpoint to change your password, as well as an endpoint to retrieve your user account.

* You visit malicious website Y, which uses JS to send requests to the "change password" and "view profile" endpoints of service X (which without CORS would be accepted as the browser sends your authentication cookie with them), getting your profile information and changing your password, thereby taking over your account.

With CORS, for requests from third-party domains the browser first sends an "OPTIONS" pre-flight request to service X, which then responds with a set of CORS headers from which the browser can determine whether the origin you're on (Y) is allowed to send requests to the service. If not, the actual request (e.g. GET, POST, PUT, DELETE) is never sent. CORS therefore protects the user from malicious websites while still allowing requests from specific third-party domains (as there are legitimate use cases for sending API requests to a third-party website).
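A minimal sketch of what the server side of that preflight exchange might look like. The allowlist, function name, and dict-of-headers shape are all illustrative assumptions, not any particular framework's API:

```python
# Hypothetical server-side handler for an OPTIONS preflight request.
# ALLOWED_ORIGINS is an assumed allowlist; header names are the real
# CORS response headers.

ALLOWED_ORIGINS = {"https://app.example.com"}
ALLOWED_METHODS = {"GET", "POST", "PUT", "DELETE"}

def handle_preflight(origin, requested_method):
    """Answer a preflight: return CORS headers if the origin/method
    pair is allowed, otherwise an empty dict (without the headers,
    the browser never sends the actual request)."""
    if origin in ALLOWED_ORIGINS and requested_method in ALLOWED_METHODS:
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": ", ".join(sorted(ALLOWED_METHODS)),
        }
    return {}
```

The key point is that the decision happens before the real request: if the preflight answer doesn't allow the origin, the browser blocks the GET/POST/PUT/DELETE from ever leaving.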

Note that CORS does not protect you from user-triggered requests (e.g. actively submitting an HTML form), for which you need CSRF tokens if you use cookies for authentication (which are sent automatically with each request).


Aha - I think this answers something that has always confused me.

If I create a micro-service and want to protect it, CORS doesn't help me very much. I still need some sort of authentication mechanism (perhaps provided in a cookie) to say, "yes - this request is permitted."

CORS helps protect that authentication mechanism within a browser.

Is that about right?


Yup! CORS is meant to protect a service's users, not the service itself. Services should always authenticate/distrust user input/etc; no client-side technology makes that unnecessary.


If you need CSRF tokens for forms, etc, then what does CORS give you above CSRF tokens?


CSRF tokens are usually only used for state-altering requests (POST, PUT, etc.), though one could use them for GET requests as well. The reason people don't use CSRF tokens for GET is that there's usually no risk involved in calling a GET endpoint from your browser, as it's not supposed to change the state of a resource in any way. And since an HTML form submission or included link will take the user directly to your API, there's no way for the attacker to extract the information afterwards (which of course is possible if the attacker can make the request asynchronously via JavaScript).

You can also use the "Referer" and "Origin" headers to defend your API against form submissions from third-party websites without using any CSRF tokens (as browsers will include the URL/domain of the site from which the form was submitted), though there have been numerous cases where browsers or e-mail clients didn't set these headers correctly, so if you really want to be sure, the only way is to use a CSRF token. You can put that token e.g. in a JS-readable cookie, which will not be accessible on third-party websites but which you can read via JS on your domain and then include in the POST/PUT/... request. If you want to run your code from different domains as well, you will have to provide an endpoint from which users can get the CSRF token (as cookies from your API domain will not be readable there). You will then need to restrict that endpoint using CORS, as otherwise an attacker would be able to get a valid token and e.g. inject it into an HTML form which he/she could then submit.

So CORS and CSRF do different things, but for API-based apps that should be able to run on multiple (non-sibling) domains you need both mechanisms to ensure security (and CSRF needs CORS in that case to function).
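The JS-readable-cookie approach described above is the classic "double-submit" pattern. A hypothetical sketch of the server-side check (function name and parameter names are assumptions for illustration):

```python
import hmac

def csrf_check(cookie_token, header_token):
    """Double-submit check: the token from the JS-readable cookie must
    match the copy the page's own JS echoed back in a request header.
    A third-party page cannot read the cookie, so it cannot supply a
    matching header value."""
    if not cookie_token or not header_token:
        return False
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(cookie_token, header_token)
```

The server doesn't even need to store the token; it only needs to verify that the two copies agree, which only same-origin JS could have arranged.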


But what if one website wanted user data from another website. Couldn't b.com make a GET request to a.com (assuming the user is on b.com now, and was on a.com previously, with a cookie), using the cookie for a.com in the browser and get the user's info from a.com, then send that to b.com? All without the user or a.com knowing?


You meant "POST, PUT, etc"


Yes, GET doesn't make any sense there.


CSRF tokens are a popular mitigation against CSRF, where CSRF is a "vulnerability" that arises in part when a user is tricked into asking its user-agent to make a request the user didn't actually intend.

This is particularly likely to happen in web browsers because they support hyperlinks and image tags to arbitrary destinations (which turn into GET requests), and support form auto-submission to arbitrary targets (which turn into POST requests). The request will often inherit the user's legitimate authentication posture on the target site, because all cookies and headers are also sent, riding on the user's existing session. Despite all this unpleasantness, a web browser adheres to the same-origin policy, which prevents the response from a CSRF attack from being made known to the originating site. Given the predominance of interactive user-agents that support the same-origin policy, a user isn't very likely to be tricked into a CSRF with a user-agent that doesn't adhere to it.

CORS isn't a security feature per se, it's a mechanism to selectively relax the same-origin policy to obtain a response from a different source, if that source consents.


I've always found it interesting how there's a difference between actions the user initiated and those the user did not. You'd think that would be very challenging to enforce, given how flexible browser scripting is nowadays, but it seems to Just Work.


There isn't, though. It really is just the same-origin policy at work (or, if you mean more abstractly, the website flagging them differently; you can still construct a curl call that will be indistinguishable, barring something like a varying CSRF token that requires the page to be loaded, but even that can be faked by loading the page, grabbing the token, and using it in the resulting curl call).

But preventing PUT/POST/DELETE from other origins prevents other origins from making requests on the user's behalf. It also prevents the user from making those requests. It either has to be a GET (which is itself a security hole, but one necessary to the basic utility of the web and which -should- be okay provided people do indeed keep GETs free of side effects; it has been leveraged into JSON-P, a terrible unsafe hack, though), or it has to use CORS to preflight it.

Once you have CORS in place, it's still indeterminate as to whether the action was user initiated or not. It's just that a request originating from origin (X) has been allowed.


That's not entirely correct, as you can still create a form on page Y and have it submit a POST request to site X even if CORS is enabled, as CORS does only govern asynchronously triggered requests. So you still need CSRF to fully protect against malicious form submissions from third-party domains.


Well, yes. POSTs of the right MIME type also work, not just GETs, due to, again, the way the web worked prior to AJAX and APIs and all that lovely goodness requiring some rethinking the web's security model (back when we had 'submit' forms and that was it).

So, technically accurate WRT where the same origin policy applies, but not really relevant to the parent's base statement that you can differentiate between what is and is not user triggered (since you can send a payload in code with the same MIME type and etc so that it looks identical to what a form would send).


Well, there are a number of reasons, but let me give you one of the more important ones:

If a browser were allowed to make cross-origin requests without restrictions, any site could take advantage of a user's active session on any other site to perform unapproved actions on that user's behalf. Without CORS, for instance, if you came to my site with an active Facebook session, I could get information about your account (by making cross-origin requests to Facebook) that I wouldn't otherwise have access to. Or, if I were feeling a bit more nefarious, I could change information, possibly your password, and gain control of your account.

The possibilities for bad actors to do these types of things are also part of the reason CORS doesn't expose most response headers by default, and you have to be very explicit about which headers to expose.


Yeah, this aspect of CORS is almost never properly explained.

First off, it's not about the authentication cookie. It would be simple enough for browsers to just omit cookies in cross-domain requests.

Here's an example of a real issue:

1. You have a network-attached storage device on your LAN.

2. Because the device is not accessible from the public internet, you have decided not to enable authentication. Anyone on the local network can issue requests.

3. You visit a malicious website.

4. That website's JS makes network requests to your device.

Basically, when you run JS in your browser, it's using your computer's network context. That context may have privileges that not everyone has (e.g. authentication based on source IP). You don't want random JS code to have those privileges.


Browsers, by consensus, adhere to a security approach called 'same-origin policy', which limits the situations that data from one context can be read by another context. CORS is a targeted way of relaxing some of those restrictions, so that you can obtain data from another context if the source endpoint consents.

Of course there are other HTTP clients. Anyone can write one, and they're probably not going to adhere to the same-origin policy. CORS, therefore, isn't really a security feature in the same vein as, say, CSRF-prevention tokens; rather, it's a language to communicate to a 'same-origin policy'-compliant user-agent to allow some cross-origin data sharing, lessening the need to pipe everything through script tags (JSONP) or one's own proxy.


CORS allows you to whitelist what domains you accept certain requests from. This is a good thing.

One thing I never understood really is why a webpage is able to load scripts from a different domain. That will I suppose remain a mystery to me forever. Imagine how many fewer ads and junk we might see.


This is a cool feature, but the actual whitelist has to be held internally: in responding to an OPTIONS request, you can respond with * or a concrete domain name; you can't return something like "www.example.com, www.foo.com".

If you want to whitelist multiple domains, you have to resolve this server-side and check the requesting domain against your list of accepted domains.

This took me a little while to figure out.
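The usual workaround is to echo back whichever origin matched. A rough sketch (the allowlist and function shape are assumptions, not a specific framework's API):

```python
# Access-Control-Allow-Origin accepts only "*" or a single origin, so a
# multi-domain whitelist has to live server-side; match the incoming
# Origin header and echo the matching value back.

ALLOWED = {"https://www.example.com", "https://www.foo.com"}  # assumed list

def cors_headers(request_origin):
    """Return CORS headers echoing the matched origin, or nothing."""
    if request_origin in ALLOWED:
        # Vary: Origin so caches don't serve one origin's answer to another
        return {"Access-Control-Allow-Origin": request_origin, "Vary": "Origin"}
    return {}
```

The `Vary: Origin` header matters once the response differs per origin, otherwise an intermediary cache could hand origin A's allow header to origin B.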


Right - it is a good safety feature. Also worth noting that responding with a wildcard will not allow you to set cookies in the browser when using `withCredentials` in the client and `access-control-allow-credentials` on the server. You've got to return a specific origin (one that matches your whitelist).


Because those requests will contain authorization? Like cookies? Am I missing something?


It's to protect against malicious sites, not malicious clients.

CORS prevents your browser from being an unwitting attack vector by an evil website.


My impression is that people wanted to make use of APIs from third parties, which would have been prevented by the same-origin policy. CORS allowed them to relax that policy for the APIs and services they wanted to use without totally removing the cross-site protection the same-origin policy provides. Having to proxy all requests through a server would have defeated the purpose of the new breed of paid APIs, whose end goal seems to be to let you build a complex web site without having to wade into the details of maintaining the backend servers you would traditionally have needed, and all the complexity that entails at scale.


Along with the other responses here, also check out the very brief OWASP documentation on CORS at this link. [1] Searching that site for CORS would also be helpful.

[1]: https://www.owasp.org/index.php/HTML5_Security_Cheat_Sheet#C...


Alex Hopmann talks about XMLHttpRequest starting as an initiative to port Exchange/Outlook email to a browser: http://www.alexhopmann.com/xmlhttp.htm

XMLHTTP actually began its life out of the Exchange 2000 team. I had joined Microsoft in November 1996 and moved to Redmond in the spring of 1997 working initially on some Internet Standards stuff as related to the future of Outlook […] I don’t recall exactly when we started working on Outlook Web Access in Exchange 2000. I think it was about a year after I joined the team, probably sometime in late 1998 […] we were already a milestone or two into the Exchange 2000 (or “Platinum”) project and had been carefully ignoring the issue of OWA [Outlook Web Access] mostly because the old version was such a hack. […] The basic premise of Outlook Web Access was that you could walk up to any computer that had the browser on it and just get to your email […] The [XMLHttpRequest] beta shipped and the OWA team was able to start running forward using the beta IE5 […]


Looks like his site is a bit clobbered, there's an archive.is copy here:

http://archive.is/7i5l

You'll likely need to do an "inspect element" over the annoying AdChoices bar on the left and delete its containing div.

Back in 2000-2002 I was lucky to work with a team that leveraged XMLHttpRequest (it was an internal corporate thing for a customer so we could get away with only supporting IE) to build single page apps before much of this stuff had a name.


I've never understood why so many people seem to find this confusing. CORS is really simple: if you want third-party sites to be able to access the contents of a given page, you need to send `Access-Control-Allow-Origin: sitename.com` in your response. If you don't send that header, the browser will default to the more secure behavior and just deny access. (Yes, it can get more complicated when you need to allow credentials or access to other HTTP methods but the concept itself is dead-simple.)

Maybe people are just confused about the same-origin policy in general? (Which is not the same thing as CORS, but related.) That's more understandable, but still really easy to explain; if you're signed into a site or have access to a non-public (corporate LAN) site, you _don't_ want every third-party site you visit to have unrestricted access to all your personal data on that site. Therefore browsers, by default, don't allow third-party access to a given origin; simple as that.


People get confused because it's two requests: first the OPTIONS one, then the normal one. Their web frameworks are largely built around single-request handling, but because CORS isn't used by many people it's often a tacked-on afterthought. For example, Rails briefly made it impossible to manually handle OPTIONS requests (something that I fixed).

Then there is the core weirdness of subdomains and security headers in general. Most developers don't really care about security. They'll do what they're told and like to do a good job overall, but deep down they don't enjoy spending time thinking about how to pwn the app they're building. They just want someone to use what they've built and love it.


There aren't two requests; at least not normally.

Preflight requests are only needed if you want to send unusual headers in your request or use HTTP methods other than GET or POST. See https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simpl...
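The browser's "simple request" rule can be sketched roughly as follows (a simplified model of the spec's conditions, not an exhaustive one; it ignores e.g. the safelisted-header value restrictions):

```python
# Rough model of when a cross-origin request triggers an OPTIONS
# preflight: non-simple method, any custom header, or a POST whose
# Content-Type isn't one of the three form-friendly types.

SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method, content_type=None, custom_headers=()):
    """Return True if the browser would send an OPTIONS preflight first."""
    if method not in SIMPLE_METHODS:
        return True
    if custom_headers:          # e.g. Authorization, X-Requested-With
        return True
    if content_type is not None and content_type not in SIMPLE_CONTENT_TYPES:
        return True
    return False
```

This is why a plain form-style POST sails through while a JSON POST or anything with an Authorization header gets preflighted.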

But you make a good point about developers not caring about security. If I look at it from that perspective it totally makes sense. If you don't have any reason to care, CORS headers may just seem like an unnecessary annoyance that you don't want to bother learning. "Not allowed access? Why? I don't care about your darned security headers, I just want to make an API request."


You're right.

I forgot about the simple requests angle because 100% of the requests I make are non-simple. I need custom headers and JSON Content-Types. Yet again why this area is so annoying.


From the page you linked to, it depends on the MIME type of the POST

> "[...] for HTTP methods other than GET, or for POST usage with certain MIME types"


CORS as a concept is dead simple. CORS as an implementation in decades old Java services is anything but. I can spend hours trying to get Charles to allow a proxy from localhost development to a backend service.


Interesting article, although unfortunately it mentions but doesn't currently cover one of the more misunderstood parts of CORS, which is the Access-Control-Allow-Credentials part.

The fact that Access-Control-Allow-Origin: * doesn't work with Access-Control-Allow-Credentials, for example, is something I've seen sites get wrong quite a lot.

there's a good post which covers it https://mortoray.com/2014/04/09/allowing-unlimited-access-wi...


Where CORS got complicated for me was adding custom request and/or response headers to a cross-origin GET request. Different browsers handle this differently. For example, Safari would make a pre-flight request if certain request headers were sent, while Chrome would just send it. And Chrome would not make a pre-flight request and wouldn't populate the "Origin" request header, but would then drop "unknown" response headers (and doesn't give you any way to force a pre-flight request or Origin header). I eventually figured out you can just always respond with `Access-Control-Expose-Headers`, even if it doesn't look like a CORS request, and then Chrome will expose the headers to the JS.


That's really odd. There are shared tests for all this stuff, and browsers should really be doing it identically.

Did you happen to report bugs on this to browsers? If not, do you happen to have a link to a page that shows the behavior difference?


Nope, never got around to debug in depth. Will do that when I have some time again.


Thank you! Even just a page that shows the behavior differences, without much debugging, would be really helpful; this sort of thing is something browsers very much aim to have working identically.


Nice article, though it doesn't cover CORS with NTLM/Kerberos auth (ie. .NET on windows).

To my mind this is the most baffling aspect here, as preflight requests shouldn't carry an auth token (according to the spec); however, IIS authentication is all-or-nothing and will thus reject the preflight request unless it is specifically handled.

Additionally, there are different ways to handle preflighting for owin and aspnetcore vs .NET framework (IIS hosting), and these methods also change depending on the technology - MVC, WebAPI, OData.

[edit] Hit return early:

MVC (pure) requests can be handled in the web.config to set custom headers. WebAPI uses annotations on the controllers. See https://stackoverflow.com/questions/29970793/enabling-cors-t...

OData follows a different path, though, so the above won't work. You would need to modify Application_BeginRequest(). See https://stackoverflow.com/questions/31459416/how-to-enable-c...

The above is all for .NET framework (IIS host). For OWIN you need to modify HttpListener as per https://stackoverflow.com/questions/42104716/owin-preflight-...

For aspnetcore the pipeline again changes: https://weblog.west-wind.com/posts/2016/Sep/26/ASPNET-Core-a...

Lastly, all of the above should be using xhr.withCredentials = true; on the client (javascript) side.

Note that this is not present for breeze-odata4 (I'm patching when I have time) but is present on Jaydata (though it's a shame Jaydata doesn't play nicely with webpack, only browserify).


My biggest frustration with CORS is that none of it is technically needed.

We could have skipped the whole thing if subresource integrity (SRI) had been implemented much earlier, possibly in HTTP 1.0, which makes its absence one of the great blunders of the web:

https://en.wikipedia.org/wiki/Subresource_Integrity

On top of that, SRI would have allowed us to use standardized versions of libraries like jQuery and Angular from their home URLs, which would have reduced the initial load size of single page applications by megabytes via caching (because those libraries would have already been loaded long ago from other sites).

Which naturally would have led to content-addressable data, Merkle trees, etc which would have given us performance more in line with BitTorrent (at least for commonly used files/scripts).

On top of that, I'm not completely convinced that CORS avoids IP address spoofing, DNS poisoning etc. It probably needs to piggyback on HTTPS to be sure, which is a common way to avoid that can of worms so I can't criticize it too badly for that.

I should add that SRI has one critical flaw in that it's still vulnerable to user-posted content (like comments) trying to embed a <script> tag containing a fake hash in the body. I haven't been deep enough in SRI to know if it's possible to specify hashes off page, maybe someone knows? Otherwise we may still need something like CORS, or better yet, something that blocks further includes altogether rather than conflating the issue with focusing on where they came from.


We really need Authorization added to simple headers. Needing to preflight every single SPA request either causes a huge performance hit or else forces super hacky workarounds that introduce their own security issues. Not good for the web.


I feel like that part at the end is very misleading. GET and POST with simple headers are still subject to CORS. The distinction between a form submit and an ajax request is that you are reading the response from JavaScript. The same-origin policy only kicks in when you attempt to read a response from JavaScript. It's why the Fetch API has a 'no-cors' mode where you can make cross-origin requests willy-nilly so long as you don't attempt to read the response values. The idea is to prevent "secret" background requests being sent by malicious sites. Form submits are handled by the browser and will incur a page refresh.


So if I manually pull up an about:blank page and start screwing around in the devTools console, what are my options for pulling in junk from, say, Wikipedia?

Is it only jsonp?


You could disable cors for your browser.


If you set up CORS, you'll see an OPTIONS preflight ahead of every (non-simple) request. You can set the Access-Control-Max-Age header on the preflight response so the browser caches the result, preventing a preflight on each request and a potential denial of service or slow frontend. It's shocking how few developers know this.
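A sketch of a preflight response that caches its own result (function name and header values are illustrative assumptions; Access-Control-Max-Age is the real header):

```python
def preflight_response_headers(origin):
    """Preflight answer that also tells the browser how long (in
    seconds) it may reuse this result without re-sending OPTIONS."""
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE",
        "Access-Control-Max-Age": "86400",  # cache the preflight for 24h
    }
```

Note that browsers cap the effective max-age (the caps differ per browser), so very large values don't buy more than the cap allows.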


What I find perplexing is that for this and every other web technology, there doesn't seem to be a central comprehensive resource that explains it all in great detail.

It's like we have to pick things up as we go along, from tidbits and articles here and there, none of it complete or comprehensive, and often there'll be little mistakes or misleading bits.


I think MDN is doing a great job of becoming the go-to resource for everything: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS


It's the best surely, and that article is decent, but there are so many areas wherein information is lacking or incomplete. One-line examples for entire modules or lack of explanation for specific parameters etc..

It's amazing that Google and Apple rely so much on browser use but they can't publish comprehensive information on them.


I agree, excellent explanation of all things web. One recent example that stands out, their Django tutorial puts the official Django tutorial to shame imo


Absolutely. And many of their docs can be imported to Dash.app. :-)

