I'm Joining Report URI (troyhunt.com)
161 points by pgl on Nov 1, 2017 | 64 comments



I don't get it. What does Report URI do? Their landing page doesn't say. And Troy Hunt's blog post tells me this:

    * Set up CSP headers
    * Somehow, magically, Report URI will tell you things
Wait what? How does that work? I feel like I'm missing a step. Unfortunately https://report-uri.com/ doesn't really explain either, or have integration docs of any kind.

I'm not trying to be snarky, I'm considering becoming a customer.


To get this reporting to work, you need an HTTP header in place. Historically (fairly short-term history) this was "report-uri"; hence the name of their product... however it is now being superseded by "report-to". More information on these headers can be found here: https://developer.mozilla.org/en-US/docs/Glossary/Reporting_... (at time of writing the `report-to` documentation is pending)...

Steps:

  - User's browser loads your page
  - User's browser detects something which would be a violation of your CSP policies
  - User's browser blocks that content...
  - ...and also checks for a `report-uri` or `report-to` directive in the CSP header (or a `Report-To` response header).
  - If a reporting endpoint is declared, the user's browser makes an HTTP POST to the stated URL, 
    passing information about the problem.
  - If the URL stated there was the Report-URI.com service's URL (i.e. the service which 
    Troy Hunt is writing about) then their company receives this data, can use the information 
    in it to determine that the report relates to your website (i.e. from the info in the 
    `document-uri` field), and can store this information in the metrics they provide for your site.
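
For illustration, a policy header that turns reporting on might look like this (the endpoint URL here is a made-up placeholder, not Report URI's actual ingestion address):

  Content-Security-Policy: default-src 'self'; report-uri https://reports.example.com/csp

Any violation of `default-src 'self'` would then be POSTed to that endpoint as a JSON document like the one shown further down.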


ps. You could implement your own functionality to collect these reports. The Report-URI product (as opposed to the header) is just a pre-written service to save you rolling your own. You just need to build something to receive the HTTP POST requests and do something with the data. Or you look on GitHub and find someone's done that for you: https://github.com/seek-oss/csp-server (and more: https://github.com/search?utf8=%E2%9C%93&q=csp+report-to).
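
As a rough illustration of how little is needed, here's a minimal sketch of such a receiver using Python's standard library (port, path handling, and "logging" are arbitrary choices for the example, not anything Report URI actually does):

  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer

  class CSPReportHandler(BaseHTTPRequestHandler):
      def do_POST(self):
          # Browsers POST reports with Content-Type: application/csp-report
          length = int(self.headers.get('Content-Length', 0))
          report = json.loads(self.rfile.read(length))
          # "Do something with the data" -- here we just print the inner report
          print(json.dumps(report.get('csp-report', report), indent=2))
          self.send_response(204)  # browsers don't need a response body
          self.end_headers()

  HTTPServer(('', 8080), CSPReportHandler).serve_forever()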

Specifics and examples of what `report-to` looks like here: https://wicg.github.io/reporting/#examples

There's also a nice blog about implementing the old `report-uri` header here: https://gdstechnology.blog.gov.uk/2015/02/12/experimenting-w...


Here's a nice example of what the report packet looks like:

  {
    "@timestamp": "2016-07-07T12:01:03.044Z",
    "csp-report": {
      "document-uri": "http://example.com/signup.html",
      "referrer": "",
      "blocked-uri": "http://example.com/css/style.css",
      "violated-directive": "style-src cdn.example.com",
      "original-policy": "default-src \u0027none\u0027; style-src cdn.example.com; report-uri /_/csp-reports"
    }
  }
Ref: https://github.com/seek-oss/csp-server/blob/master/example/c...

I don't know what's in place (if anything) to prevent sending fake reports... i.e. presumably a hacker could read these headers from a site, then send HTTP POST messages reporting all kinds of errors from various IPs (e.g. if they have access to compromised machines), flooding your useful metrics with false data, thus hiding any useful information in there...

An improvement over the current protocol would be to have your site generate an ID for every legitimate request so that any reports could be tied back to it... but even then an attacker would just need to call your site once per false report to get an ID, so this only offers a small amount of additional protection (and in doing so increases the probability of the site being targeted by a flooding / DoS attack).


I haven't opened the Report URI webpage, but I would assume their added value compared to DIY lies exactly in spam filtering and in solving (or mitigating) all the security issues that report-uri (the header) introduces.

Call me cynical, but I believe reporting causes more harm than good, by exposing new attack surface.

Why not just deploy a crawler that detects CSP errors and reports them in a static report for the site owner?


CSP violations (such as XSS vulnerabilities) can manifest themselves on non-public pages, where they'd be difficult to detect with a crawler.

In addition, the Web Reporting API covers more than just CSP. It can also report various types of certificate errors, such as those triggered by violations of the Expect-CT and Expect-Staple headers (such violations might occur if your users are under attack by a MITM).


I imagine you realize that a user under MITM could have the report POST request tampered with? It's false security.

On the other hand, you are right that the crawler wouldn't catch everything.


I'm not certain what the implementation status is in various browsers, but the relevant RFCs (e.g. for HPKP) typically recommend that user agents retry the submission of reports. The report URIs themselves may also use HPKP to prevent them from being intercepted (as opposed to just DoSing the submission). There are certainly scenarios where an attacker can only temporarily MitM the victim and the reporting mechanism would still be of use eventually. The reporting itself is not the enforcing mechanism, so timely submission is not the most important thing in the event of an attack.

That said, it's true that the biggest practical use people get out of report-uri is to test the roll-out of these headers and to detect issues they might cause.


Good point on Report URI having better abilities to detect false reports. It's not mentioned (at least, not on the front page; I've not delved), but their larger dataset will definitely make it much simpler to blacklist IPs suspected of sending faked reports / spot patterns to remove false data.

I believe the benefit of having users' browsers report this over a crawler is for scenarios where pages on another site attempt to display your content (e.g. in an iframe behind an overlay for a click-jacking attack). You'd never know to monitor that URL / wouldn't know from any of your metrics that the site was hosting your content; but the users' browsers would report it.

In terms of "why report it if they know to block it anyway", I believe the idea is to improve security for others; i.e. we're no longer relying on users having the latest browser to be protected; so long as one user has a browser good enough to spot the issue, we can be made aware that there's a risk out there.


To the extent that this is an issue, the server could presumably sign the document uri plus some nonce and include that signature and the nonce in the report-to uri.

A service like Report URI could trivially validate reports if the nonce approach were understood.
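
For anyone wanting to picture it, a minimal sketch of the signed-nonce idea in Python (the shared secret, endpoint URL, and helper names are all hypothetical):

  import hashlib, hmac, secrets

  SECRET = b'shared-with-the-report-collector'  # hypothetical shared key

  def make_report_uri(document_uri):
      # A fresh nonce per legitimate page load, signed together with the page URL
      nonce = secrets.token_urlsafe(16)
      sig = hmac.new(SECRET, f'{document_uri}:{nonce}'.encode(), hashlib.sha256).hexdigest()
      return f'https://reports.example.com/csp?nonce={nonce}&sig={sig}'

  def is_valid(document_uri, nonce, sig):
      # The collector recomputes the signature; constant-time compare avoids timing leaks
      expected = hmac.new(SECRET, f'{document_uri}:{nonce}'.encode(), hashlib.sha256).hexdigest()
      return hmac.compare_digest(expected, sig)

As noted upthread, this only proves the reporter saw a real page load, not that the report itself is honest.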


Sure, but you should also make sure it's on a different TLD, in case you've somehow misconfigured your HPKP cert pin.


HPKP is dead anyway; Chrome is dropping support for it in an upcoming version.


I know and I completely disagree with that decision.

HPKP has a place. Checking CA logs (via the Expect-CT header) is bullshit because when the stakes are high enough, some CA can get hacked or some low-level sysadmin can somehow verify domain control and register a cert, and some actor can MITM connections without anyone noticing for months.

With HPKP the fucking page doesn't load. Period. You need to either root the server or use a PDA to stop HTTPS in the first place, but then the browser bar isn't green. If the costs of getting MITMed are high it is better to risk lockout than it is to risk silent data loss.

At the very, very least I should be able to pin which CAs I trust so the threat vector doesn't include every CA in the world.

The Expect-CT header is good enough for most people, but HPKP should be supported if needed.


> At the very, very least I should be able to pin which CAs I trust so the threat vector doesn't include every CA in the world.

You can do this with CAA records today, to restrict which CAs can issue certificates for your domain.
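
For example, a DNS zone entry like this (CAA syntax per RFC 6844; the "issue" value is whichever CA you actually use) tells every other CA not to issue for the domain:

  example.com.  IN  CAA  0 issue "letsencrypt.org"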


This always seemed backwards to me. If the worry is that certs are being misissued, why on earth are we assuming that such CAs are following the rulebook?

No, the browser should check the CAA record and refuse to trust certs issued by the wrong CA.


This would only make sense under the assumption that the DNS response can be trusted. For the vast majority of domains and resolvers, that would not be the case. (I'm skipping the discussion of whether you'd want to trust DNSSEC at all.)

The idea you're describing has been standardized as DANE, which has failed to gain any adoption. It would not make sense for CAA to try to do the same thing. Instead, it set out to provide a defense-in-depth mechanism for certain CA vulnerabilities that fall short of a full compromise.


I hate this too, and would love to be able to use HPKP in the future.

But I’ve long accepted that Google will simply drop functionality without reason even when people still depend on it, just because they can, and that they’re deaf to all complaints.


You can also implement Dropbox with a local FTP server, curlftpfs, and SVN.


You can set CSP headers that browsers will read as a content security policy (to prevent, or notice, unusual content includes or sources). There can be two consequences: violations can be blocked, or they can be allowed to load but reported in the console. If a URL is set in the CSP header, the browser can also report the violations to another URL for inspection and tracking.

For the details, see https://www.w3.org/TR/CSP3/#report-violation

Like error tracking, the tricky bit isn’t always the reporting, it can also be learning to ignore errors caused by extensions, or grouping and alerting on issues in report-only mode such that they can be fixed in a timely fashion. That said, even knowing what to use as a CSP header can be daunting, so there’s plenty of room to make this powerful security feature easier to use.
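
For instance, report-only mode uses a separate header, so nothing is blocked but violations still get reported (placeholder endpoint again):

  Content-Security-Policy-Report-Only: default-src 'self'; report-uri https://reports.example.com/csp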


>I'm not trying to be snarky, I'm considering becoming a customer.

Why would you consider being a customer if you've no idea what they do?


Because the idea of getting notified every time someone XSS'es my app appeals to me.


For how Report URI works in your browser, see the original RFC [1] from 2015. The OWASP recommendation for HTTP security headers [2] gives some useful extra information on how the HTTP security headers hang together.

1: https://tools.ietf.org/html/rfc7469#section-2.1.4

2: https://www.owasp.org/index.php/OWASP_Secure_Headers_Project...


It's a website for monitoring CSP reports. It is described in step 6 here - https://www.troyhunt.com/the-6-step-happy-path-to-https/


If you don't know who Troy Hunt is, then one thing to do right now is to sign up for his great service: https://haveibeenpwned.com/

In essence, every time(+) some massive breach occurs and your e-mail (and probably password) is among the leaked data, after reaching "darknet" (meaning black market), you will get an e-mail from haveibeenpwned urging you to take some action (password change, account closure/change etc).

(+) Troy has amassed over the years a huge DB of past break-ins (so you can also check your email's presence in past events) and with current respect/reputation he gets access to new ones pretty fast, informing you instantly in most cases.

Highly recommended.


> after reaching "darknet" (meaning black market)

> with current respect/reputation he gets access to new ones pretty fast, informing you instantly in most cases

So he has the respect of (street cred with?) the dark net/black market folks and that's how he gets them instantly? Or I suspect you meant to say something slightly different?


These lists get 'traded' amongst security professionals a lot. Quite a few of them will send Troy a copy to load in to HIBP.


I don't know how he obtains the data, but in [1] he makes a point of never having paid for it.

> I've also never paid for data nor traded any of the breaches I've obtained.

[1] https://www.troyhunt.com/thoughts-on-the-leakedsource-take-d...


You can sign up your entire domain if you control the postmaster@ address.


If you have a Google-hosted domain you cannot have this as an alias... but you can create a Group with that alias and add yourself to it.


A few other email addresses, including those on the WHOIS record, can be used. You can also verify using a meta tag, file upload, or DNS record.


This is a great service! And according to it, my account has been pwned (what does this mean?)


Report URI appears to depend on CSP reports being submitted. There was recently a discussion about uBlock Origin blocking CSP reports on privacy grounds. [0]

So this will only work for sites that use CSP and users who don't have uBlock Origin installed, unless uBlock Origin changes their default policy on CSP reports.

ninja edit: I see uBlock Origin has changed their defaults to permit CSP report submission [1]

[0] https://www.theregister.co.uk/2017/10/17/ublock_origin_csp_r...

[1] https://github.com/gorhill/uBlock/issues/3140


Troy very publicly carrying water for Scott Helme/Report URI looks quite a bit different in hindsight, as the number of Report URI marketers doubles to 2.

· https://twitter.com/troyhunt/status/920590590223331329

· https://twitter.com/troyhunt/status/920222303849390081

· https://twitter.com/troyhunt/status/920173854835720193

The uBlock Origin dev compromised with an opt-out for CSP reports, even though they can be used for things like IP address leaks to 3rd party servers (similar to the as-seen-in-court Tor-busting captcha mess). Personally I would prefer only same-origin reports.

--

My biggest problem in all of this is that Scott was not careful to always document his not-quite-obvious conflict of interest, and even worse: Troy's association is only now being revealed weeks later. I do appreciate the work being done to improve the effectiveness of CSP reports in default configurations of open source tools, but can't help wondering about the motivation. Disclaimers go a long way toward keeping unsuspecting maintainers informed of both sides of an issue.

· https://github.com/issues?utf8=%E2%9C%93&q=is%3Aissue+author...

· https://twitter.com/Danbo/status/920325189954654208

Scott was also blamed here on HN last week for his part in promoting the death of HPKP (public key pinning), apparently due to ease of misconfiguration.

· https://news.ycombinator.com/item?id=15573076

Looking back now I'm left wondering how HPKP affected Report URI and Scott's other project, Security Headers. I will disclose my bias as I watch highly effective marketing of convenience (top of HN!) triumph over my personal flavor of slightly paranoid privacy yet again.


I don't understand the argument that it's bad because it leaks the IP address to a third party. So would embedding any other resource hosted on a different domain - from JavaScript CDNs to images. No one is suggesting these should be blocked, right?

I have no problem with uBlock blocking requests to the CSP report-uri if the domain in question is on the user's filter list (I'd probably consider it a bug if it didn't do that), but that's not what is happening here (before this was fixed, that is).

The Tor-captcha-thing was about leaking the origin IP of the Hidden Service, so I don't see how it's similar.


It is 100% clear to me that differences of opinion will result in drawing the line on an ad blocker's default configuration in different places.

Would you mind sharing how long your browser was sending CSP reports to 3rd parties before you knew about it? I personally was unaware [edit]this could happen to me in Chrome with uBlock Origin installed[/edit] until this issue came up, and a free pass through ad blockers to a 3rd party seems like an advertising/tracking company's dream come true.

> JavaScript CDNs to images. No one is suggesting these should be blocked, right?

Many of these [edit](specifically: tracking pixels and just straight up ad images)[/edit] are blocked by ad blockers in the default configuration.

> The Tor-captcha-thing was about leaking the origin IP of the Hidden Service, so I don't see how it's similar.

Thank you for the correction. I apparently imagined this; I was thinking of cases where a partial VPN or Tor is insecurely used only to access specific domains, or DNS is not also routed through it.

[edit] updated with a bit more detail


> Would you mind sharing how long your browser was sending CSP reports to 3rd parties before you knew about it?

I'm not sure what you're asking - when did I become aware of this possibility? I guess when I learned about CSP?

> I personally was unaware this was possible until this issue came up, and a free pass through ad blockers to a 3rd party seems like an advertising/tracking company's dream come true.

Adblockers block requests to things like Google Analytics by matching the domain (and other patterns). Why is this not an acceptable solution for CSP? If tracking tools start using CSP for this, they'll be added to the filter lists, just like any other tracking script. We're not asking uBlock to block all third-party requests on the off-chance it might be an ad or tracking service, so why would we do that for CSP?

> Many of these are blocked by ad blockers in the default configuration.

Do you have a source for this? This would break a large number of sites. uBlock seems to have an experimental feature that mirrors certain popular CDN URLs locally, but it doesn't seem to be enabled by default and the Wiki page hasn't been updated in over two years. AIUI, even with this feature enabled, requests to non-popular (non-local) assets would still go through.


> Why is this not an acceptable solution for CSP? If tracking tools start using CSP for this, they'll be added to the filter lists, just like any other tracking script. We're not asking uBlock to block all third-party requests on the off-chance it might be an ad or tracking service

This is a good question! When a new technology arrives that allows tracking, I prefer uBlock default to blocking it. Advertising/tracking companies pursue the bleeding edge in their cat & mouse game; this way the barn door is closed before the horse leaves (two too many animal analogies?). I do understand that others will pursue a different choice, and encourage disclosure by those doing so because money is on the line.

The discussion regarding CSP has resulted in a change in uBlock's default behavior which I will take into consideration the next time I re-evaluate which ad blocker to use.

PS. In case it's not clear, I originally commented: >Personally I would prefer only same-origin reports.

PPS. This is a much juicier target assuming it works when JavaScript is disabled.


Just a quick clarification: uBlock Origin only blocks CSP reports if it has injected "neutered" scripts into a page - typically if the site uses some third party scripts like GA. Because these neutered scripts might create extra CSP reports, which would then leak data about the client.

If the site only uses first party scripts, CSP reports are very unlikely to be affected by any version of uBlock Origin.

Also, uBlock doesn't block any CSPs. Only some CSP reports.


Luckily, the next version of uBlock will let us block CSP reports altogether.

If CSP reports become a reliable way to gather data, they _will_ be used for tracking. This is why we can't have nice things. And why I have to disable all nice things we do have, in my browser.


Ok... So... I just signed up, and it walked me through some steps. Verify Email: Done. Customize (change reporting URL): Done... Configure (change some domain filters)... Done... 2FA: Done... but how do I actually set this up?! Like, what do I need to put on my site to enable this?! WTF?! Is this done at a page level (HTML inserted into the page)? At a server level (Nginx, Apache)? Or somewhere else?! Where do I set this up?! More digging required, but it would have been handy to have a guide for what I need to install...


https://report-uri.com/account/setup/

But yeah, it seems like they could really use some instructions on how to set this up for people unfamiliar with how the Web Reporting API works. The documentation you're looking for is here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Co...


> Deprecated

> This feature has been removed from the Web standards. Though some browsers may still support it, it is in the process of being dropped.

Ok..


Keep reading:

> Though the report-to directive is intended to replace the deprecated report-uri directive, report-to isn’t supported in most browsers yet.

So it's deprecated, but until it's replaced by report-to (which works in a very similar way) report-uri is still the proper way to make this feature work.

I would have linked to report-to instead, but MDN doesn't have good documentation on that yet.
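
In practice you can send both during the transition; browsers ignore the mechanism they don't understand. Roughly (placeholder endpoint; Report-To syntax per the WICG draft linked upthread):

  Report-To: {"group":"default","max_age":10886400,"endpoints":[{"url":"https://reports.example.com/csp"}]}
  Content-Security-Policy: default-src 'self'; report-uri https://reports.example.com/csp; report-to default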


> ...embeds an XSS attack in the TXT record of a DNS entry which then renders into the output of a WHOIS service

I'm not even mad, that's amazing.


There was a particular domain that, when queried at most whois services, would play the Harlem Shake in your browser. It worked nearly everywhere that did such lookups.


It was linked in the next sentence of TFA: https://www.youtube.com/watch?v=1anIUDvdCjY

The domain was forfun.net


Remember kids: raw input, escape output.


I used to do this because I found that some web performance scanners would usually forget to escape the content of the HTTP headers. After being sent this domain [1], some of them would immediately execute the script. Feel free to send a HEAD request to this website so you can understand better; it's basically the same thing as with the TXT DNS records.

[1] https://cixtor.com/
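
If you want to see the headers yourself without a scanner in the way, a plain HEAD request works (curl's -I flag sends HEAD):

  curl -I https://cixtor.com/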



Isn't it pretty easy to exhaust anyone's quota, since the write key is visible in the HTML source code?

This is not specific to this service, but I have always wondered why this design (public write key plus a quota system) works, when it seemingly shouldn't.


I'm assuming this is the standard technique, like the Google Analytics tracking id.


There's nothing to gain from exhausting someone else's quota. People will go to great lengths to exploit anything for profit, but just messing with other people without anything to gain personally -- that's not a good motivation.


DDoS attacks are a well-known attack in the same category. I wouldn't be so quick to dismiss this as a valid concern.


For CSP, also check out this great Firefox add-on by Mozilla: https://addons.mozilla.org/en-US/firefox/addon/laboratory-by... . It generates a CSP header and allows you to test it.


Is it really a failure if the X-Frame-Options header is missing on a blog? If it is a failure, then why not just make iframes impossible at the browser level?

Without context, this service misrepresents the real security of any given site.


To be fair, the challenge of nearly all security assessment tools, including Security Headers, is they don't understand the context of what they are assessing. X-Frame-Options on a blog isn't a big deal. X-Frame-Options on a SaaS app can be. That's why blindly scanning something and then saying "look at all these vulnerabilities" is a pretty poor way to assess the security of a service.


Is the HPKP he mentions as a security measure the same public key pinning that Chrome just announced a deprecation date for (because it is too risky to do), or are these different concepts?


It's the same thing. There are some differences of opinion on the matter.


Actually Scott Helme (mentioned in the article) also came to the conclusion that HPKP does more harm than good: https://scotthelme.co.uk/the-death-knell-for-hpkp/


Sentry can also handle CSP reports effortlessly, and it's all open source. I had quite a good experience with their easy-to-use UI.


I really hope that this somehow will improve the adoption of CSP.

Getting CSP right is quite hard and I have seen many big sites just using it in report-only mode (forever).


what about, who cares?


Could you please stop commenting like this?

https://news.ycombinator.com/newsguidelines.html


What a great "I'm joining" post! Congrats Troy and Report URI



