
> network visibility

This is an odd euphemism. A network that uses plaintext isn't "visible" - I'd use a word like "readable" or "inspectable".

For encrypted networks, MITMing the encryption breaks the security. That's what it's for. TLS 1.3 is supposed to prevent that; circumventing it (as NIST proposes) increases the attack surface. NIST's proposals seem to amount to generating and distributing ephemeral keys over the internal network, but I thought best practice was to keep keys and cryptographic operations inside an HSM.
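
A rough sketch of what "distributing the ephemeral keys" tends to look like in practice, assuming the simplest mechanism I know of: the TLS stack exports per-session secrets in the NSS key-log format, and whoever holds that file plus a packet capture can decrypt the sessions after the fact. The file name and URL are placeholders, and this is Python's stdlib key-log hook, not anything NIST actually specifies.

  # Sketch: exporting TLS session secrets for out-of-band inspection.
  # Python's ssl module can append each handshake's ephemeral secrets to a
  # key-log file (same format as SSLKEYLOGFILE); a monitor with this file
  # and a packet capture can decrypt the corresponding TLS 1.3 sessions.
  import ssl
  import urllib.request

  ctx = ssl.create_default_context()
  # Anyone who can read this file (or the share it lands on) can decrypt
  # the captured traffic -- which is the enlarged attack surface above.
  ctx.keylog_filename = "keylog.txt"  # placeholder path

  with urllib.request.urlopen("https://example.com/", context=ctx) as resp:
      print(resp.status)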

Isn't the proper solution to remove MITMing from the compliance rules, stop trying to detect C2 and malware at the router, and instead secure the target servers?




It's their own network. The argument you're making here is that the (very good) design goals of TLS 1.3 preempt people's custodial interest in their own networks. That's a weird argument.


Why do they do TLS1.3 inside their own network, then?

If they operate in high-castle mode, they might as well leave their network traffic unencrypted.

Otherwise, as the parent said, they should secure the endpoints rather than rely on COTS to inspect traffic in the hope of detecting malicious patterns.

P.S. This reminded me of an Intrusion Detection System that silently dropped anything that looked like a request to Spring Boot's /actuator* endpoints unless it contained a cookie. Any cookie would do; the rule was just a match on the string "Cookie: " in the headers. It took many hours and a dozen people across the organisation to figure this out. Anything but productive work.

P.P.S. It's Fortiguard and they proudly advertise this feature here https://www.fortiguard.com/encyclopedia/ips/49620
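
A minimal sketch of how one could confirm that behaviour, assuming nothing more than two identical requests that differ by a single arbitrary Cookie header. Host, port and cookie value are made up for illustration.

  # Probe a Spring Boot actuator path with and without an arbitrary cookie.
  # The "dropped" case shows up as a timeout/reset, not an HTTP error.
  import http.client

  def probe(path: str, with_cookie: bool) -> None:
      headers = {"Cookie": "anything=at-all"} if with_cookie else {}
      conn = http.client.HTTPConnection("internal-service.example", 8080, timeout=5)
      try:
          conn.request("GET", path, headers=headers)
          print(with_cookie, conn.getresponse().status)
      except OSError as exc:
          print(with_cookie, "no response:", exc)
      finally:
          conn.close()

  probe("/actuator/health", with_cookie=False)  # silently dropped by the IDS
  probe("/actuator/health", with_cookie=True)   # passes through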


They can absolutely just not use TLS 1.3. I don't know how that makes anything better for anyone else though.


No, that's not what I was saying. These network owners are working around TLS1.3 for compliance reasons: they're required to monitor traffic at the boundary.

So I'm saying they should not be required to monitor at the boundary (and discard the benefits of TLS1.3); it's dumb to require diminished security. They should be required to monitor; but it's their network, they get to decide how to do it. I guess that means you need compliance rules written by serious people, rather than box-tickers.

That would make verifying compliance harder; you can't just check that they have blackbox X at the boundary. I can see that the existing setup is cheap-and-cheerful.


The way you secure a network is to not allow insecure devices to connect. If you MITM the network, all you do is increase how much an insecure device can mess with other devices.


At least for home users, this is not feasible. We're quickly moving toward a world where the owners of devices have no insight into what they're doing. ECH means your ISP can't monitor you, but even if you're going through Cloudflare so the IP doesn't reveal who you're connecting to, the state can just make Cloudflare tell them, so it doesn't protect against state monitoring. And ECH + DoH and cert pinning give malicious devices (i.e. every modern consumer device) the tools to exfiltrate data without the owner being able to monitor or block specific requests.

The reality is that many, if not most, devices are malicious now. You're protecting against one threat while enabling another.


Hmmm, I’m noting that the current industry trend seems to focus on the opposite strategy: assuming compromised devices, assuming breach. « Zero trust », as they like to call it.

It’s not mutually exclusive with your approach, but it’s definitely the new industry gold standard, rather than trusting vetted devices. Seems they gave up on the vetting.


I agree that a zero trust architecture makes sense. Every device should sanity-check requests made to it by every other device, but IMO that works best when you have a secure and encrypted network as a primitive. The network's job should be to deliver messages securely between endpoints.
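
One way to picture "every device checks every other device", assuming mutual TLS against an internal CA: the listening endpoint refuses any peer that can't present a certificate signed by that CA. A minimal sketch; the certificate file names and port are placeholders.

  # Endpoint that only accepts peers presenting a cert from the internal CA.
  import socket
  import ssl

  ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
  ctx.load_cert_chain(certfile="device-b.crt", keyfile="device-b.key")
  ctx.load_verify_locations(cafile="internal-ca.crt")
  ctx.verify_mode = ssl.CERT_REQUIRED   # reject peers without a valid client cert

  with socket.create_server(("0.0.0.0", 8443)) as listener:
      with ctx.wrap_socket(listener, server_side=True) as tls_listener:
          conn, addr = tls_listener.accept()   # handshake fails for untrusted peers
          print("authenticated peer:", conn.getpeercert()["subject"])
          conn.close()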


Whether it's okay to inspect traffic depends only on whether you own at least one of the endpoints. It has nothing to do with whether it's going over your network.


We are talking about networks where every authorized endpoint is most certainly owned by the organization doing the telemetry. I don't like it either, but I don't see how anyone's going to make a moral issue out of it --- in fact, this is exactly the kind of thing that tends to infuriate nerds like us when it cuts the other direction, like with sealed remote attestation protocols.


Yes, I believe they own all the endpoints, so I'm fine with them doing the telemetry at all. But if the method they had in mind for doing the telemetry doesn't actually require that ownership, then I'm opposed to that method in particular.


You get that Intel can make the same argument about sealed remote attestation protocols embedded in their chipset, right? You don't tolerate that argument when it's your network hosting sealed protocols, but you do when it's other people's. That's a strange position.


That argument isn't valid for Intel, because when they sell me their sealed chipset, it stops being theirs and becomes mine.


Do you not see how that's exactly what the banks are saying about their own computing infrastructure?


> secure the target servers?

That's much harder than buying off-the-shelf "security" solutions from the likes of Bluecoat.


"For encrypted networks, [the owner of the network] MITMing the encryption breaks the security."

What security, specifically. Security from who/what.

Let's say a network owned by C comprises computer A and computer B; A is connected to B, and B is connected to the internet.

Computer A runs "apps" controlled by D and not trusted by C. B runs only programs trusted by C.

Both A and B, i.e., the programs running on them, are capable of encrypting traffic.

Let's say the approach C takes on C's network is to let B handle encryption. Not A.

The apps running on Computer A want to encrypt traffic but, in C's opinion, that "security" is for the benefit of D not C.

Computer B encrypts all traffic bound for the internet and decrypts all traffic received from the internet. C does not need D's apps to perform encryption.

It is C's network. Is there a reason C should not control encryption on C's own network.

Is there a reason D should be able to run its "apps" on C's network and encrypt traffic that D cannot inspect.

Would D allow C to run programs on D's network that encrypt traffic so that D cannot inspect it. (Reciprocity.)

One could imagine the encryption by D's apps running on Computer A is security against C, the owner of the network.

Any other "security" provided by D's apps encrypting traffic on A is already provided by B.

(Given the existence of B, encryption by A is unnecessary and redundant.)
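
A minimal sketch of the "B handles the encryption" arrangement, assuming B is a simple gateway that accepts plaintext from A on the internal network and wraps only the outbound leg in TLS. The addresses, port, and single-connection handling are made up for illustration; a real gateway would add policy, logging, and so on.

  # B: accept plaintext from A, relay it to the internet over TLS.
  import socket
  import ssl
  import threading

  UPSTREAM_HOST = "example.com"      # placeholder internet destination
  LISTEN_ADDR = ("0.0.0.0", 8080)    # B's internal-facing listener (placeholder)

  def pump(src, dst):
      # Copy bytes one way until the source side closes.
      while chunk := src.recv(4096):
          dst.sendall(chunk)

  ctx = ssl.create_default_context()  # B, not A, holds the TLS machinery

  with socket.create_server(LISTEN_ADDR) as server:
      plain, _ = server.accept()                        # plaintext hop from A
      raw = socket.create_connection((UPSTREAM_HOST, 443))
      with ctx.wrap_socket(raw, server_hostname=UPSTREAM_HOST) as tls:
          threading.Thread(target=pump, args=(tls, plain), daemon=True).start()
          pump(plain, tls)                              # A -> internet, encrypted by B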


Does Facebook "break the security"

https://news.ycombinator.com/item?id=39860486

Why does it need this "network visibility"



