The Security Impact of HTTPS Interception [pdf] (jhalderm.com)
160 points by DeltaWhy on Feb 7, 2017 | 65 comments



I was surprised by this note (on page 2):

> Contrary to widespread belief, public key pinning [19] — an HTTPS feature that allows websites to restrict connections to a specific key — does not prevent this interception. Chrome, Firefox, and Safari only enforce pinned keys when a certificate chain terminates in an authority shipped with the browser or operating system. The extra validation is skipped when the chain terminates in a locally installed root (i.e., a CA certificate installed by an administrator).

Seems like a strange default to me. I feel like the user should be notified of this, for instance if they're using a work computer to access their bank account or something like that.


Trying to fight a local attacker with root (which is necessary to add a certificate to the trust stores on most platforms) isn't worth the effort. It's easy for the admin to bypass and would cause even more warning fatigue.

That's not to say I disagree with the sentiment that this is something employers (and other organizations providing access to devices) should be obliged to disclose, but that is perhaps more of a legal and educational issue.


> Trying to fight a local attacker with root (which is necessary to add a certificate to the trust stores on most platforms) isn't worth the effort.

Hah. That's precisely the argument I have made when arguing that there should be an opt-out for addon signature verification (needing admin permissions to toggle it if they insist) because you already utterly lost the security game if someone had admin on the machine.

But no, they argue that they must defend against malware with admin permissions injecting addons into the browser. Because that's a fight worth fighting and the perception of the browser's security is somehow more important than user freedom.


I agree. But the reason they felt forced to do this is because even "reputable" software companies were auto-injecting unnecessary extensions as a side-effect of installing their popular software. Companies like Adobe and Microsoft, and "industry leading" "computer security" companies.

My first instinct is to say "it's important to not install crap software, you need to reasonably trust the software you install". But I immediately recognize that it's unintuitive that Adobe and Microsoft and Symantec and McAfee are not on the "trusted" list. (Office and .Net have silently installed problematic Firefox extensions in the past.)

I don't really have a conclusion here, just, it sucks.


The problem in the field was application installers quietly "side loading" browser plugins.


Either those are malicious, in which case you lost the game and cannot defend against that because they have the security high ground, or they legitimately act on the user's behalf.

The word "quietly" suggests they are malicious, which means they should be reported to AV vendors (including Microsoft) instead of being used as a boogeyman when arguing against user freedoms.


Yes, especially since true malware creators have been able to inject code into browsers and intercept and modify pages for ages. They don't need an add-on, they'll just inject a shared library or something similar.


Don't assume there's a local attacker with root.

Employees are often required to install local certs (or applications/scripts that do that) - that doesn't mean the host is entirely compromised.


> Employees are often required to install local certs (or applications/scripts that do that) - that doesn't mean the host is entirely compromised.

If they are forced to install those certs, then the computers they use belong to their employers, and those computers are obeying their proper owners. I fail to see the problem.

Don't use your work computer for things you don't want your work to be able to detect, intercept & modify.


  Don't use your work computer for things you don't want your work to be able to detect, intercept & modify.
With that logic, don't get mail sent to the office, because your employer has every right to open and reseal the envelope.

You know who also has that right? Prisons.


> with that logic, don't get mail sent to the office because your employer has every right to open and reseal the envelope.

I don't have things sent to the office unless they comply with my employer's policies (e.g. I'd never have a weapon mailed here), and unless I'm happy with my employer having information about my packages.

I'm curious what line of reasoning would justify me doing such a thing and expecting privacy.

> You know who also has that right? Prisons.

You know who also has that right? You, on any network and hardware you own. My employer owns my laptop and the network it's connected to: of course it has every right to inspect its own property.


> *My employer owns my laptop and the network it's connected to: of course it has every right to inspect its own property.*

Careful there: there's a difference between capability and right. Many jurisdictions forbid employers from putting surveillance cameras in the bathroom, for instance. There is a level of privacy employees can legally expect.

E-mail is similar: if your employer intercepts your e-mail just because you read it at work, it is likely a breach of correspondence secrecy (in France it would be). This likely applies even if you're at fault for using the company's resources on personal matters.

While "don't trust your employer's network" is elementary operational security everyone should be taught at school, 90% of the population don't know what computers can do, let alone how they work. Because of that, their expectations are social, not technical.


Common mistake on HN: the law is not logical.

The law says that if employers own the computers they provide to employees, then employers have broad latitude to monitor how those computers are used.

A different law says that only the recipient of a US Postal letter may open it. If you receive a personal letter at your office via US Postal Service, your employer cannot legally open it.

I'm not up to date on what the law says about FedEx or UPS.


That's employment for you. Employees are not treated as autonomous agents. More like untrustworthy children.

For my last gig, I worked at some company on behalf of another. This workplace was quite explicit about intercepting and monitoring everything. I pondered for a second whether I should use this place's computers to log in to my employer's webmail. I gave up and did it, because I wasn't going to read or write work emails outside of office hours.

Personal stuff, however, I didn't dare.


> That's employment for you.

This might be true in the US, but it's illegal in various other countries!


That's… murky. I live in France, where your correspondence is supposed to stay secret. It obviously applies to personal snail mail, and arguably personal e-mail. It should apply to every personal IP packet (they're structurally closer to snail mail than an e-mail is), but I don't know if it does.

Work stuff… When you use the work computer to do stuff on behalf of your employer, it's not really private, and could arguably be monitored. (There are work regulations that limit how you can use that data however. Benchmarking for instance is forbidden in France.)

The problem is, since proxy servers cannot automatically distinguish work-related activities from personal errands, they tend to cast a wide net. The implied distrust kinda disgusts me, but I reckon this puts big companies in a delicate situation: one does not simply trust thousands of people, that's too many single points of failure.


Yes, and that's well within their rights and within most companies' policies. Why would you send personal mail to your workplace?


That's fine, but that doesn't justify doing it silently - if you're going to claim the right to detect, intercept and modify you should have no problems with being completely transparent about doing so.


> It's easy for the admin to bypass

If it involves patching and recompiling the browser, it wouldn't be that trivial for your average sysadmin. Besides, I don't see why the admin would be hostile to the users being aware that they're being monitored. As you point out, companies generally disclose that anyway.


> Besides I don't see why the admin would be hostile to the users being aware that they're being monitored.

Agreed. We could argue all day about companies who think they need to intercept traffic, but why would anyone who believed they had a legitimate reason to intercept want to do so silently, without any notification?

A persistent infobar near the address bar, for instance, would work nicely. And anyone working in a hostile environment with such monitoring imposed on them (a bank, for instance) would then have a much clearer warning that they shouldn't use their work device for anything they want to keep private.


No, sysadmins don't patch browsers. Endpoint security products do. Patching browsers to implement TLS interception is table stakes for security products. Local pin enforcement would in fact result in millions more surreptitious browser patches.


Maybe on windows. Do you know of any Android security products which do this? Why does Chrome for Android not implement this?


The warning fatigue problem is still there - and we're talking about a warning for 4-10% of all connections according to the study. Plus, as you pointed out, this would only help against the average sysadmin; if we assume an advanced sysadmin convinced to not disclose their snooping or an actual attacker, replacing the browser binary would not be a huge obstacle. In that sense, it might even add a false sense of security. It's a bit like the state of certificate revocation - it works most of the time, just not when you actually need it.


This figure of 4-10% of connections is meaningless here: either you're intercepted or you're not. The warning would only matter for websites that bother to implement certificate pinning.

I don't really know how widespread key pinning is, but if it's reserved for the more sensitive websites (banking, e-commerce, etc.) it might make sense to at least issue a warning.


> This figure of 4-10% of connections is meaningless here, either you're intercepted or you're not. The warning would only matter for websites that bother to implement certificate pinning.

Most Google properties use key pinning in some form (though AFAIK through static pins rather than HTTP headers). I would suspect that most users in that group would see such a warning at least daily.

> I don't really know how widespread key pinning is [...]

"Visitors may be presented with a warning if they're behind a middlebox and you deploy HPKP" would probably be a good way to slow down HPKP deployment even further.


Well, sysadmins have a way simpler solution: just tell their users to use a different browser. This would have to be coordinated between browsers to have any effect at all.


Adding certs to root stores does not require recompiling the browser.


It's path dependence.

Had public key pinning existed before companies started thinking of intercepting their internal TLS traffic, that wouldn't be an issue. But since TLS interception was already unfortunately common when public key pinning was developed, not having that exception would mean that nobody could ever use it, since it would mean instantly breaking whichever site enabled it.

And in a certain way, users are already being warned, as long as their bank uses an EV certificate: the EV green bar won't appear, since EV certificates are only allowed from a hardcoded list of CAs (unfortunately, according to https://www.grc.com/ssl/ev.htm that's not the case for Internet Explorer).


> not having that exception would mean that nobody could ever use it, since it would mean instantly breaking whichever site enabled it.

That's the point...


The exception we're talking about exists primarily for computers owned by companies the likes of the Fortune 500, many of which have a regulated requirement to intercept TLS. If Chrome enforced pins against local policy, they'd simply use a less secure browser.


Personally I'd be happy to DoS my sites by breaking them when policy MitM is used, as an act of solidarity. I even wrote a half-joking spec for it: https://hlandau.github.io/draft-landau-websec-key-pinning-re...

Seems like it should be feasible to develop modules for HTTP frontends to detect policy MitM based on the techniques described in this article and enable conditional denial of service.


There's another way to do that: require client certificates. The MITM proxy cannot present the client certificate to the server, since it doesn't have the corresponding private key.

Unfortunately, the user interface for client certificates is a complete pain, so they are rarely used. But they're the only true way for a server to make sure it's talking directly to a client, in the same way server certificates can allow a client to make sure it's talking directly to a server.
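To illustrate, here's a minimal sketch of the server side in Python (file names and port are placeholders); the TLS handshake simply fails unless whoever is connecting holds the private key for a certificate issued by the CA you chose, which an interception proxy does not:

  import socket, ssl

  # Sketch only: server.crt/server.key and clients-ca.crt are placeholder paths.
  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  ctx.load_cert_chain("server.crt", "server.key")
  ctx.load_verify_locations("clients-ca.crt")  # CA that issued the client certs
  ctx.verify_mode = ssl.CERT_REQUIRED          # no valid client cert, no connection

  with socket.create_server(("", 8443)) as srv:
      with ctx.wrap_socket(srv, server_side=True) as tls:
          conn, addr = tls.accept()   # handshake happens here; a MITM proxy
          print(conn.getpeercert())   # without the client's key can't finish it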


Yeah, I've considered client certificates for this very reason, but as you mention, the UI is truly awful.

I'd like to see TLS Channel ID become available in browsers for this very reason, but it still doesn't seem to have much buy-in.

Someone once proposed that browsers add a JS API for getting information about the certificate with which the page was loaded. The discussion is on a mailing list somewhere, you can find it if you try. It was shot down on the grounds that a page may contain resources from many origins and made over many different TLS connections, or come from cache, etc. So the idea of "what connection" a page was loaded over isn't necessarily very clear cut. If this API were added it would enable JS-based detection, which obviously wouldn't be foolproof against a determined attacker but would enable some monitoring of this issue to be done.

(Another hare-brained scheme I conceived of to detect this sort of thing is to have a special domain with a HPKP header set that doesn't match the certificates served, and a report-uri. If the browser makes a report, it means it's a normal browser acting correctly. If it doesn't, it means the page was served under a custom CA and HPKP has been disabled. Of course this is incredibly erratic as a means: it doesn't work if a browser doesn't support HPKP, you have to wait for the report to be made, or decide how long you wait before you conclude it's not coming, etc.)
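Concretely, the canary header could look something like this (pin values and report endpoint are made up, and you'd presumably use the report-only variant from RFC 7469 so nothing actually breaks; shown wrapped for readability):

  Public-Key-Pins-Report-Only:
      pin-sha256="<base64 hash of a key you deliberately do not serve>";
      pin-sha256="<a second bogus pin>";
      report-uri="https://canary.example.com/hpkp-report"

A browser that honours HPKP should notice the mismatch and POST a report; one sitting behind (or modified by) an interception product that disables pinning likely won't.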


>The MITM proxy cannot present the client certificate to the server, since it doesn't have the corresponding private key.

The MITM proxy is operated by the same department that has root on all the endpoints it's intercepting. If necessary, the "endpoint protection" product will grab the private key, or just scrape the details of the browser session from the browser's memory rather than at network level.


Grabbing the client certificate private key is not always possible; it can be on a smart card (and even when stored in a file, it could be password-protected). Also, the operators of the MITM proxy do not necessarily have root on the endpoints (they can require the users to add the CA certificate themselves), and even if they do, scraping the session keys from the often-updated browser is not trivial.


Fun fact: the Google engineers responsible for this decision were repeatedly accused of being NSA plants as a result: https://twitter.com/sleevi_/status/668911789841608706

Chris Palmer also wrote a really great blog post about this: https://noncombatant.org/2015/11/24/what-is-hpkp-for/


That blog post insinuates that 'strict' HPKP wouldn't work, yet Google Chrome actually enforces strict HPKP... but only for some Google domains. It's rather a double standard. I wrote about this: https://www.devever.net/~hl/policymitm


In apps like Netflix and Snapchat key pinning works because they control the server side and client side.

This is one reason why Squid has a bump/splice feature. Look for the SNI in Wireshark, then configure Squid to let Netflix packets go through untouched. I am not aware of any other way to get Netflix and Snapchat to work on a MITM network. My experience is that you have to create an exception and not intercept. Not 100% sure. YMMV.
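For reference, a peek/splice config along these lines (Squid 3.5+ syntax; the domain list is abbreviated and from memory) is roughly what that looks like:

  # Peek at the ClientHello to learn the SNI, splice (pass through) pinned
  # apps untouched, and bump (intercept) everything else.
  acl step1 at_step SslBump1
  acl nobump ssl::server_name .netflix.com .nflxvideo.net .snapchat.com

  ssl_bump peek step1
  ssl_bump splice nobump
  ssl_bump bump all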


It is part of the RFC: if a certificate is signed by a root certificate that is trusted in your private store (meaning it was added later on), HPKP is ignored. Unfortunately, this is required in the enterprise world where corporate MiTM is often done (Palo Alto Networks SSL proxy, Websense/Forcepoint, Zscaler, Blue Coat, etc.) for content inspection.


>Unfortunately, this is required in the enterprise world where corporate MiTM is often done

This still should not be the default; rather, corporations should have an easy about:config switch they can flip. The default should protect private users.


How is that a meaningfully different experience? Anything able to install a CA can flip the config value.



Android does what you want, and I hate it. Every time it starts up it notifies me that a third party could be intercepting my traffic (because I've installed one of my own root certs). Of course, I'm not a third party, but there's no way for me to make my phone's OS know that I trust myself.

Meanwhile, of course, if someone did install a third-party root cert on my phone somehow, I'd never know because I always ignore & dismiss the wolf-crying warning.

The fundamental reason I disagree with you is that a computer should do exactly what its administrator wants it to do. If I install a root cert, it should trust that cert exactly as much (if not more than) every other root cert in the world.


Android doesn't do what I want. It just warns you that a user added CA exists, which isn't specific enough to change user behavior but is frequent enough to cause fatigue.


I believe that is mostly to support the corporate use case of being able to snoop on all employees' traffic. Now that Chrome features 'enterprise' deployment, maybe the default could be changed, and only IT departments that specifically want this setting could have it on.


Absolutely not. The owners of IT systems need to be able to assert control over what enters and leaves the system.

Allowing uninspected outbound traffic makes it trivial for an attacker to exfiltrate data, an employee to accidentally or purposely release data, etc.


The arguments around this-- and the people who make them-- have always disappointed me. The expectation that we've drilled into users' heads is that when they see the lock icon their traffic is private. That is not the case, and we could warn them about it, and we don't. In my eyes that's a failure.

The arguments around warning fatigue are specious. The exact same mechanism that currently sets the number of warnings you get due to a pinning failure stemming from a user added certificate to 0 could easily make it 1 instead, or be tied to a "don't show me these warnings again" checkbox. Experimentation and data could determine whether and to what degree this was effective, as is routinely done with related warnings changes that have far less potential upside. But when you bring up the possibility of solving the question with real data the argument morphs into pure philosophy.

The philosophical points are twofold: first, a claim already raised here that fighting local admins is pointless because they'll always win and you don't want to get into an arms race. I attribute this to the fact that browsers developed on poorly sandboxed desktop platforms where admins are de facto root and no intelligent statement of any kind can be made about limitations of their behavior. On those platforms this isn't a crazy approach (although its shoulder-shrugging fatalism is distasteful to me even there). Fortunately, those aren't the only platforms we have today: on systems like Android the expectation is that corporate admins act through narrow, carefully controlled channels and will have no powers beyond those. There, the platform wins arguments with admins pretty much all the time. The arms race was over before it began. Without the risk of escalation from admins, the only question is whether the user is properly aware of the consequences of having had an extra CA added to their trust store, and again I refer to the point I made above: this can be settled with data. Rather than bend over backwards to give admins the benefit of the doubt, let's gather actual data on the degree to which users are comfortable with this behavior. And if they aren't, well, then the admin is an adversary and we have a duty to protect the user.

When you make this argument however the discussion becomes /really/ philosophical: people will start saying that limiting admin powers is anti-user-freedom, despite the fact that the user of the device clearly has a greater ability to make decisions for themselves about their security than in the free-for-all common to platforms of yore. Why that matters in this discussion is beyond me: even if you subscribe to this belief the horse is out of the barn and no amount of smugly screwing users will fix that. And some will assert that admins are users too, and that we need to serve those markets well. But the fact that people will give you money does not mean you should take it: if the data gathered above indicates that users do not want their traffic intercepted then that, in my mind, should be final-- if the amount of money on the other side convinces members of the security community to hurt users then in my view we should just give up the pretense that we're the good guys.


> The expectation that we've drilled into users' heads is that when they see the lock icon their traffic is private.

Except it isn't. Even simple things like Cloudflare's SSL termination allow the traffic to go unencrypted over the internet and be intercepted by third parties.

http://www.theregister.co.uk/2016/07/14/cloudflare_investiga...


From the conclusion:

> We deployed these heuristics on three diverse networks:

> (1) Mozilla Firefox update servers,

> (2) a set of popular e-commerce sites, and

> (3) the Cloudflare content distribution network.

> In each case, we find more than an order of magnitude more interception than previously estimated, ranging from 4–11%.

> As a class, interception products drastically reduce connection security. Most concerningly, 62% of traffic that traverses a network middlebox has reduced security and 58% of middlebox connections have severe vulnerabilities. We investigated popular antivirus and corporate proxies, finding that nearly all reduce connection security and that many introduce vulnerabilities (e.g., fail to validate certificates).


I'll jump on my current soap box, which is that we need a standard to allow MITM blocking, without interception, and a nicer user experience.

This won't solve all use cases, but selfishly, it will solve mine at DNSFilter: if a browser could recognize our SSL cert, or a special field in our cert, and present the user with a block message and a static link to learn more, it would eliminate the need to have our customers install a CA of ours and MITM traffic. We have not yet done so, and I'd prefer not to, but it seems to be the industry-standard way of avoiding users being confused by errors when we block/MITM an SSL site.


I've written a proxy that uses SNI to filter outgoing connections based on the domain name, without decrypting the traffic. It's not as user-friendly as you'd like, but it's a good solution for our use case.
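The core of it is just pulling the SNI hostname out of the ClientHello before deciding whether to relay the bytes. A stripped-down sketch of that part in Python (no bounds checking, and it assumes the whole hello arrives in one TLS record):

  import struct

  def extract_sni(hello: bytes):
      """Return the SNI hostname from a raw TLS ClientHello record, or None."""
      if len(hello) < 5 or hello[0] != 0x16:                 # 0x16 = handshake record
          return None
      pos = 5 + 4                                            # record + handshake headers
      pos += 2 + 32                                          # client version + random
      pos += 1 + hello[pos]                                  # session id
      pos += 2 + struct.unpack(">H", hello[pos:pos + 2])[0]  # cipher suites
      pos += 1 + hello[pos]                                  # compression methods
      if pos + 2 > len(hello):
          return None                                        # no extensions
      end = pos + 2 + struct.unpack(">H", hello[pos:pos + 2])[0]
      pos += 2
      while pos + 4 <= end:
          ext_type, ext_len = struct.unpack(">HH", hello[pos:pos + 4])
          pos += 4
          if ext_type == 0:                                  # server_name extension
              name_len = struct.unpack(">H", hello[pos + 3:pos + 5])[0]
              return hello[pos + 5:pos + 5 + name_len].decode("ascii", "replace")
          pos += ext_len
      return None

The proxy just peeks at the first segment of each outbound :443 connection, runs it through that, checks the name against the filter list, and then either relays the connection byte-for-byte or closes it.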

I might open-source it if there's interest but it's relatively basic.


I'd be interested in seeing what you've got -- I have on my list to look into doing so with HAProxy per the link here: http://serverfault.com/questions/628147/nginx-proxy-based-on...


It's sad that interception gets such a bad reputation because of broken security products. Yes, a proxy is a weak link, but if implemented properly and in a trustworthy way, it's better than endpoint security, which is often worse. And no, the proxy does not belong on the endpoint itself! If organizations that intercept via proxy are not able to scan traffic, they will just drop the traffic. Instead of MITM mitigations that don't allow interception at all, we need better user experience (with choice) and safe(r) products.

I have a little side project where I try to implement a proxy for myself. I want to remove ads and be able to scan and cache downloads. I trust adblock plugins and endpoint security products far less than a MITM proxy I wrote myself.


Why does the proxy not belong on the endpoint?


For various reasons: you probably have more endpoints on your network than proxy servers, and thus a bigger attack surface. Keeping the endpoints up to date is harder (e.g. laptops that are not permanently attached to the network to receive updates). If your endpoints are workstations, human interaction (e.g. installing malicious software, opening malicious attachments) and the overall complexity of the system (e.g. GUI, multiple users) make them weaker than a central, dedicated, isolated, stripped-down and locked-down set of proxy servers. And finally, process isolation is really, really hard (if your countermeasure _only_ runs on the target endpoint, you have already weakened your position).


Is there a website you can visit that will tell you if your TLS handshake doesn't match your browser's user agent?

Maybe this will have to wait until after the team from this paper releases their fingerprints: https://github.com/zakird/tlsfingerprints


https://www.ssllabs.com/ssltest/viewMyClient.html tells you what your client handshake looks like, and https://www.ssllabs.com/ssltest/clients.html shows the usual handshake for several popular browsers.


Another blow to the utility of AntiVirus.

I hate the AV industry in infosec. It does not work well and in most cases reduces security. Unbelievable that it's still required by a lot of compliance regimes.


What is the meaning of the "AS" acronym that the paper uses (seemingly representing network providers)? I didn't see it explained anywhere and it's not ringing any bells with me…


AS refers to the Autonomous System. https://en.wikipedia.org/wiki/Autonomous_system_(Internet)

It's an IP routing concept. AS Numbers are used to refer to different networks (run by different ISPs and providers) on the internet.



It stands for autonomous system, part of BGP


I’m surprised by the contrast between the best products in each category and the average standard. The few good ones show what is possible, yet the vast majority seem to be sub-standard or outright dangerous. I wouldn’t have been surprised by the odd outlier where someone dropped the ball, but I expected much higher general standards.


> The few good ones show what is possible

If you've read that out of the paper you read a different one. Quote:

"Our grading scale focuses on the security of the TLS handshake and does not account for the additional HTTPS validation checks present in many browsers, such as HSTS, HPKP, OneCRL/CRLSets, certificate transparency validation, and OCSP must-staple. None of the products we tested supported these features."

Read: Some products got the absolute basics right. None of the solutions did anything that can reasonably be called "good".

> I expected much higher general standards.

I didn't. I don't expect anything from security appliance vendors.


Yep, HTTPS is a joke. Always secure your own transport if it's sensitive.



