On the Insecurity of Whitelists and the Future of Content Security Policy (research.google.com)
71 points by mmastrac on Sept 1, 2016 | hide | past | favorite | 13 comments



I just read the paper. Good data, great analysis; there will be lessons learned from this. Some quick thoughts:

- Header fatigue. For many people, CSP is 'yet another damn header' [1][2][3] we have to add to our websites. Although it's supposed to require careful thought and be tailored to the needs of the particular resource being loaded, the typical programming model of the web separates payloads from headers, so proper use of CSP needs a hook in your HTTP server that computes the correct header from a set of rules you define ahead of time plus the URL (and maybe the payload) of the resource (rough sketch at the end of this comment). This integration typically isn't nicely present in HTTP servers or web frameworks, so people hardcode headers and move on. (Maybe try a meta http-equiv instead??)

- The nonce approach proposed for CSP is essentially CSRF tokens all over again. I get why it works, but it's ridiculous that the server has to quasi-cryptographically pass tokens around inside the HTML that a third party wouldn't have... the weakness of the protocol itself is showing; there has to be a better way to communicate cross-domain resource trust.

- Completely unrealistic thought experiment: block ALL cross-domain requests in a browser. Only allow <a> tags. Everything else has to be hosted on the same domain and proxied over. The incentives re-align; all the responsibility and blame will fall squarely on website operators to get their sites right.

[1] https://www.owasp.org/index.php/OWASP_Secure_Headers_Project...

[2] https://securityheaders.io/

[3] https://observatory.mozilla.org/
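
To make the first two points concrete, here's a rough sketch of the kind of per-response hook I mean, using only Node's built-in http and crypto modules (the policy and port are illustrative, not a recommendation): generate a fresh nonce per response, compute the header right next to the payload, and echo the nonce into the markup.

    // Rough sketch only -- policy and port are illustrative.
    import { createServer } from "http";
    import { randomBytes } from "crypto";

    createServer((_req, res) => {
      // Fresh nonce per response, never reused.
      const nonce = randomBytes(16).toString("base64");

      // The header is computed next to the payload that depends on it,
      // instead of being hardcoded once in server config.
      res.setHeader(
        "Content-Security-Policy",
        `script-src 'nonce-${nonce}'; object-src 'none'; base-uri 'none'`
      );
      res.setHeader("Content-Type", "text/html");

      // The same nonce has to be echoed into the markup -- the
      // CSRF-token-style dance the second point complains about.
      res.end(`<!doctype html><script nonce="${nonce}">console.log("allowed")</script>`);
    }).listen(8080);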


> - Completely unrealistic thought experiment: block ALL cross-domain requests in a browser. Only allow <a> tags. Everything else has to be hosted on the same domain and proxied over. The incentives re-align; all the responsibility and blame will fall squarely on website operators to get their sites right.

You can use µMatrix[0] to browse like that and whitelist things as needed.

It's surprising how many sites break under same-origin-only rules. And by break I mean they don't even display any text despite the text being present in the markup.

[0] https://github.com/gorhill/uMatrix


In case someone is concerned about how disruptive it will be to their browsing, uMatrix couldn't have a more efficient UI. It still requires page reloads and is only for advanced users, but there is also a pleasure in driving a machine so well-designed.

(Also, I believe it's now called uMatrix; the greek letter is gone.)


Your second point (passing tokens to third parties) is addressed by the new 'strict-dynamic' [1] keyword in CSP3. It allows nonced/hashed scripts to dynamically create child elements without explicitly passing the nonce. E.g. you nonce/hash your google-maps widget loading script, and 'strict-dynamic' will allow the widget to dynamically load additional child scripts (e.g. js module loading) via document.createElement/appendChild. Therefore you only need to nonce the scripts you trust in your markup and don't need to refactor libraries to grab and pass on the nonce.
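
A minimal sketch of what that looks like (the nonce value and the child-script URL here are just stand-ins):

    // Response header (sketch):
    //   Content-Security-Policy: script-src 'nonce-R4nd0m' 'strict-dynamic'; object-src 'none'
    //
    // Markup: only the top-level loader carries the nonce:
    //   <script nonce="R4nd0m" src="/loader.js"></script>

    // loader.js -- because it was trusted via the nonce, 'strict-dynamic'
    // lets it create child scripts without propagating the nonce to them.
    const child = document.createElement("script");
    child.src = "https://maps.googleapis.com/maps/api/js"; // stand-in for the widget's own modules
    document.head.appendChild(child); // allowed under 'strict-dynamic', no nonce needed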

Some 'strict-dynamic' deployment examples:

https://photos.google.com/

https://www.google.com/about/careers/jobs

https://www.google.com/maps/timeline

https://bugs.chromium.org/hosting/

[1] https://w3c.github.io/webappsec-csp/#strict-dynamic-usage

[2] http://www.slideshare.net/LukasWeichselbaum/making-csp-great...


Most of the CSP directives can be included as a meta tag, not necessarily an HTTP header. Also, CSP is not dedicated to XSS protection. Some directives, like frame-ancestors and the referrer policy, are much easier to manage.

But it is hard to put a good CSP policy on top of an existing website that wasn't designed with CSP in mind in the first place. That's where a nonce can bridge the gap; I really see it as a temporary workaround while migrating a site to CSP.


Option 3 has been on my mind, but the third parties need to support it. I'd think it'd be in their best interest to do so, if not require it. Adobe Analytics for sure lets you create a CNAME and will even provide a free cert.


I get Option 3 by using uBlock Origin in advanced mode. In order to make any cross-domain requests, even for images, I have to whitelist that particular domain1->domain2 combination.

Not for everyone, but I can live with it.


I'd love to, but I'm guessing things like Facebook/Google/GitHub/Twitter authentication won't work? Or will they?


Option #3.

Really? I can't use a CDN for my images?


Hence 'thought experiment'. Possible resolutions:

[a.] If cross-origin anything is forbidden by default, opt in to cross-domain images with CSP 'img-src' (rough sketch below).

[b.] No You Can't Use a CDN™, or rather, you can [1][2], but some portion of the domain name of the URL has to match, or you can only whitelist your own subdomains, or something along those lines. This would have to trigger a larger conversation about the nature of subdomains, DNS, possibly TLS DV certs, and how you prove that your own subdomains are not outside of your control -- both to the satisfaction of DV certs and to the satisfaction of CSP.

[1] https://azure.microsoft.com/en-us/documentation/articles/cdn... [2] https://www.maxcdn.com/one/tutorial/creating-cname-dns-recor...
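
For [a.], the opt-in would read much like today's img-src directive. A minimal sketch, with cdn.example.com as a stand-in host:

    import { createServer } from "http";

    // Minimal sketch: everything same-origin by default, images opted in
    // for a single stand-in CDN host.
    createServer((_req, res) => {
      res.setHeader(
        "Content-Security-Policy",
        "default-src 'self'; img-src 'self' https://cdn.example.com"
      );
      res.setHeader("Content-Type", "text/html");
      res.end(`<img src="https://cdn.example.com/kitten.jpg">`);
    }).listen(8080);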

Part of the problem is that, right now, the 'default' CSP is neither opt-in nor opt-out. (EDIT: this is technically incorrect; what I meant to say is that the effect of CSP defaults and their interplay with things like the same-origin policy isn't lockdown-by-default.) It's whatever legacy behavior the browsers of the world have collectively implemented, subject to each of their whims and changelogs. This means your CSP rules merely override that default, and when you don't specify a policy you aren't universally protected from much of anything.


The CDN could be on a subdomain.


Now you just have to ask what privileged relationship makes sense between kittens.tumblr.com and praise-cthulhu.tumblr.com that doesn't make sense for the rest of the web.


Another submission was marked as "dupe" of this one. Since it has comments in it, I will link to it from here: https://news.ycombinator.com/item?id=12472671



