
>To do TLS inspection at that level you need to MitM all HTTPS traffic going everywhere, as you need to read all HTTPS traffic to any possible host, as any of them may be a DoH resolver or relay. Q.E.D.

Yep. And corporate/enterprise environments should (not nearly enough do) do exactly that with devices owned and managed by the enterprise.
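
For illustration only, here's a minimal Python sketch of a DoH lookup against Cloudflare's public JSON endpoint (not something from the article, just my own example); on the wire it's just another HTTPS request, which is why selectively blocking DoH means inspecting all HTTPS traffic:

    import requests  # third-party HTTP client

    # A DoH lookup is just an HTTPS request. This uses Cloudflare's public
    # JSON endpoint; any HTTPS host could offer the same kind of service.
    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    resp.raise_for_status()
    print(resp.json().get("Answer"))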

Any personal devices, or those owned by contractors, clients and other external actors, should not be allowed access to internal corporate networks. Nor is this a particularly new or controversial idea. Most large (or even medium-sized) organizations already have separate "guest" networks for external devices, which aren't secured or monitored to the same degree.

However, internal networks are (and should be) a very different story.




> Yep. And corporate/enterprise environments should (not nearly enough do) do exactly that with devices owned and managed by the enterprise.

Sure! But the fact that they should already be doing just that (and many have been) does not change the fact that the technique inserts a third party into a supposedly two-party encrypted exchange. It's allowed, but it is still MitM.
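
To make the "it is still MitM" point concrete, a small Python sketch (my own illustration, nothing vendor-specific): it prints the issuer of the certificate the client actually receives, and behind a TLS-inspecting proxy that issuer will be the enterprise CA rather than the public CA that signed the real server certificate:

    import socket
    import ssl

    # Show who signed the certificate presented to this client.
    # On a managed device behind an inspecting proxy, expect the
    # enterprise CA here instead of a public CA.
    def issuer_of(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return dict(rdn[0] for rdn in cert["issuer"])

    print(issuer_of("example.com"))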

> Any personal devices, or those owned by contractors, clients and other external actors, should not be allowed access to internal corporate networks. Nor is this a particularly new or controversial idea.

Whilst I agree, the recent trend of pushing for more BYOD, where the device is owned by the contractor, employee or an external actor but is still allowed access and controlled by the enterprise, does tend to blur the lines quite a bit, especially as most tooling has lacked decent isolation between "enterprise" and "private" on the same device. MDM tooling tends to want to administer the whole device and apply the stricter "enterprise" policies, with a pinky promise that private life is going to be respected.


>Whilst I agree, the recent trend of pushing for more BYOD, where the device is owned by the contractor, employee or an external actor but is still allowed access and controlled by the enterprise, does tend to blur the lines quite a bit, especially as most tooling has lacked decent isolation between "enterprise" and "private" on the same device. MDM tooling tends to want to administer the whole device and apply the stricter "enterprise" policies, with a pinky promise that private life is going to be respected.

Which is why employees should either be given employer-owned devices or use device subsidies (as many companies provide) to pay for a device that's only used for work purposes.

That companies attempt to hijack (and I mean that in both the metaphorical and literal senses) personal devices for corporate purposes is, aside from the obvious issues, also terrible security policy.

That businesses do this is exploitative, unethical and insecure. I suspect such businesses don't really care about the first two, but should care about the third.

As an infosec/infrastructure guy, I'd raise hell over such a policy -- because leaving aside the scumbaggery (I think I just coined a new word. Good for me!), having personal devices connected to internal corporate resources (even with corporate MDM configurations) is literally begging to be compromised, for (hopefully) obvious reasons.


>Sure! But the fact that they should already be doing just that (and many have been) does not change the fact that the technique inserts a third party into a supposedly two-party encrypted exchange. It's allowed, but it is still MitM.

Replying again, as I should have addressed this as well.

I disagree. When corporate resources are being used, the organization is not only well within its rights to monitor (or at least log) all communications; given the potential for malware, data exfiltration and (to a much lesser extent) employee misconduct, it would be remiss not to do so.

Which is why it's extra important not to allow, or (as I addressed in my other comment) require, those working onsite to use personal resources on internal networks.


You are missing the point here. There is no argument that the corporation should be able to monitor communications going into or out of its systems (though some limits on how and for what purpose that monitoring can be done do exist, especially in the EU - it's not unlimited), but that is not what calling the technique what it is is about.

Use of MitM by the corporation as part of Data Loss Prevention interferes with any hardening you or your vendors might be doing against a MitM attack attempted by anyone else - it breaks if, for instance, the application vendor your enterprise has decided to use (let us call them "Example plc") has pinned their own CA certificate within the application as the only one that is supposed to sign certificates on the Example domains - say, for "content.example.com" - following the example that e.g. Google set. Or, worse yet for this case, pinned a specific certificate instead of a specific trust anchor. I've seen both in the wild, so it is not an idle discussion.
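
For the sake of illustration, a minimal Python sketch of the second, stricter kind of pinning described above - pinning one specific leaf certificate by its SHA-256 fingerprint (the hostname reuses the "content.example.com" example and the fingerprint value is a placeholder, not anything real):

    import hashlib
    import socket
    import ssl

    # Placeholder: SHA-256 of the DER-encoded certificate the vendor expects.
    PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def fetch_cert_or_die(host="content.example.com", port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        # Refuse to talk to anything presenting a different certificate,
        # including a DLP proxy re-signing traffic with the enterprise CA.
        if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
            raise ssl.SSLError("certificate does not match pinned fingerprint")
        return der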

Not only do you need to override that pinning with your own CA in the application for the content to be inspected; to retain the same level of hardening you'd also need to implement the same checks the application did in your DLP system, so that it verifies the upstream server is legitimate - and that costs money and time and remains fragile over time, so many enterprises simply do not bother, falling back to the well-known list of public CAs instead (that includes my $CurrentCorpo, much to my annoyance). It weakens the whole system, which is already fragile enough thanks to actors like Symantec, WoSign and StartCom - and possibly others.
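
In miniature, "implement the same checks in your DLP system" means the proxy's upstream side verifies the real server against only the vendor's pinned CA instead of the public bundle - a sketch, where "example-plc-ca.pem" is a hypothetical path to that vendor's CA certificate:

    import ssl

    # Upstream-facing TLS context for the inspecting proxy: verify the real
    # server's chain against only the vendor's pinned CA, not the public roots.
    def upstream_context(vendor_ca="example-plc-ca.pem"):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # hostname + chain checks on by default
        ctx.load_verify_locations(cafile=vendor_ca)    # no public roots are loaded
        return ctx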



