From under my tinfoil hat, I have to wonder why we would listen to any recommendations from the NSA? Why would we not believe they are going to make recommendations of methods they know how to exploit?
Taking off my tinfoil hat, I understand that one of the purposes of the NSA is to keep US information safe. Following their recommendations should make your data safer.
However, Snowden showed us that the NSA doesn't always follow the rules it is supposed to operate within. Does that mean they should always be suspect? How do we decide when their recommendations are for the good of all?
This is one of the arguments to separate USCYBERCOM and NSA.
IMO, the US should really strictly separate the offensive/SIGINT and defensive aspects of NSA into separate agencies, and move the defensive part out of the intelligence community. Make it part of NIST or something. It would restore at least a little credibility to documents like this.
The tricky bit is that offense and defense actually require the same knowledge. A 0-day is immediately applicable for offense and must be immediately patched for defense.
>A 0-day is immediately applicable for offense and must be immediately patched for defense.
This outlines why there's a problem with a single agency tasked with offense and defense. If you had an agency tasked only with the defense of US infrastructure, it would have a clear mandate to patch zero days. Any "offensive" agency would have to deal with it doing this (just as such offensive agencies deal with zero days being patched by others).
It seems entirely reasonable to prioritize protecting US infrastructure over the possibility of a spy agency stealing another nation's secrets.
Legal mechanisms for one way doors seem possible. The offensive agency can communicate info about 0 days to the defensive one, but not the other way around.
Except bureaucracies are going to do what they do. The offensive agency will hold back information, and when an attack occurs on the defensive agency, calls for bringing the defensive agency under the umbrella of the offensive agency will be made.
Brats fighting over resources while Mom just wants to sleep. It's not their fault, it's the nature of political organizations.
Sure, you could fix it with centralized oversight and stringent information sharing rules. But eventually you're either strangling them with rules or building one conceptual agency.
Not necessarily. In fact, depending on how they're structured, they can be quite complementary. Red team strategies thrive off of this sort of competitive environment.
Yes, I stipulated that in my post. However, we have seen how the NSA wants data from US corporations. If they make a "recommendation" for corps to use, then how do we know it is solely in the corporation's best interests, rather than serving the NSA's own interest in more easily accessing the corp's data while providing a false sense of security?
We do know that the NSA pushed for a particular random number generator (Dual_EC_DRBG) to become the default in RSA's BSAFE encryption library because they already knew how to break it. Once something like that becomes known in the wild, credibility will from then on be suspect at best as to true motive.
I feel it wouldn't be completely unjustified to trust NSA recommendations, simply because America isn't the only country in the world with intelligence agencies. The NSA is commonly assumed to be offensive, but part of their job is to aid in defence as well. Ensuring (malicious) foreign actors have trouble gaining information is a national interest just as much as the NSA themselves gaining access.
Only if it doesn't make their own work any harder. The NSA prefers weak security for everyone if securing US assets risks making others more secure too:
>NIST failed to exercise independent judgment but instead deferred extensively to NSA. After DUAL_EC was proposed, two major red flags emerged. Either one should have caused NIST to remove DUAL_EC from the standard, but in both cases NIST deferred to NSA requests to keep DUAL_EC.
Prosecution of government employees and officeholders. You don't gain trust without a reason to trust you. There is no magic reset button. After COINTELPRO and Iran/Contra and Snowden, the USA Intel establishment has a solid record of attacking their own citizens and behaving as though laws don't apply to them and there is no oversight. Valerie Plame?
And then there's the last five years, starting with the Clinton email server. The Intel community can't prevent that, and then the FBI goes political at the end of election season?
There is no trust of intelligence agencies because they have proven they deserve none.
Your successes are secret, your failures are known. That's your industry's catch phrase.
Here's a list of things I can think of:
1. If you're the CIA/NSA, stop spying on American citizens.
2. See #1
3. Stop NSLs. If you want data, get a valid warrant visible to the public. Allow companies to inform their users/customers that their entire platform is vulnerable.
4. Subject yourself to non-govt oversight.
5. Stop hoarding 0-days, and actively work with vendors to fix vulns.
Don't know how to fix it, but until you're no longer in the news for screwing up, this is where we are. Your internal documents show that all of that data slurping has not led to significant positive results. Why spend the money on it then? Why erode the trust that you want? You don't want to tip off the adversary, but your own citizens' rights are much more valuable than what little information you are getting.
6. Whenever coming up with a new spying/data collection scheme, ask yourself "in the light of public outcry in the past, how would this new thing be perceived?" Be honest about it, and not hand-wavy "the public will be okay as long as we catch bad guys".
In addition to what others have already said: actually preventing major breaches like the recent SolarWinds hack. I.e., besides the tinfoil hat stuff, just getting the basics right.
Personally I'd like to see the entire US govt IT infrastructure rebuilt around DARPA's HACMS project [1]. Get rid of Windows, base it all on the seL4 microkernel or similar, rebuild the apps (everything in userspace), etc. That would significantly reduce the attack surface. Huge project obviously, but one can wish.
You probably don't need HACMS. Microsoft Research has produced quite a number of tools and compilers for producing provably secure low-level code and drivers.
When are we getting that in Windows? It needs to start with a formally verified OS (and firmware), then build a provably-correct protocol/driver/app/etc stack on top.
MS was working on Singularity [1] a while back, but its webpage speaks of it in past tense, so it doesn't appear to be an ongoing project.
Microsoft, and more importantly Microsoft Research, have many, many projects tackling this. Off the top of my head: the P programming language, F*, Dafny, and a bunch of others (I think the last two are part of something called Project Everest). Microsoft Research is one of the biggest players in this field. A lot of their efforts span multiple departments so you will have to search around.
Yup, I'm aware they do a lot of work on this, and hired up half the Haskell and PLT researchers in the world, and have a bunch of projects on it scattered across their web properties.
And maybe some of it is trickling down into parts of Windows or .NET, like F# and others, but I wish they'd develop an entire ecosystem around it and make stronger push to commercialize it.
With all the aforementioned resources MS has at their disposal, by now they should have their own version of seL4, comprehensive tooling for secure-by-design and correct-by-construction application development, and optionally secure sandboxed containers for backward compatibility with legacy apps. Though ideally MS funds a Manhattan Project for rewriting the most popular apps using their SxD and CxC tooling.
Singularity was on the right track, but they let it languish and die for some reason, when instead they could be rebuilding an entire secure/correct ecosystem with Singularity as the focal point, and commercializing it with enterprise support, and selling it to governments and large enterprises with significant security concerns.
It is hard to iterate and make changes when your codebase relies on human/machine assisted proofs. Great for airplanes and cars when regulatory approval means iteration is slow. Not so great for consumer software.
Good point, though I’d suggest most consumer software, at least office apps and similar needed by the government, are mature enough they don’t need rapid iteration. Most new features being added to them these days are optional at best.
For odd family reasons I have a lot of acquaintances in SOCOM and intelligence orgs, and I've been around these sorts of people long enough to know that the vast majority of them are well meaning and sincerely eager to do the right thing. But see, that's the problem: they're all employees, and they all report up the chain to an unelected appointee, far removed from any voting-based accountability.
I don't think the trust will ever come back because I don't believe the government has the right incentives in place to restore trust. It only gets worse from here. And certainly nothing I've learned about them in the last 10 years would convince me otherwise.
Not just freeing. Give the guy some kind of lofty award and have the next US president give it to him together with a speech explaining how great the things are that he did.
Actually, I don't think Snowden did that much. He exposed crimes, sure. But they still happen in darkness now. So there's this drifting time element to his story. It's in the past. And the systems that are in place for accountability didn't result in any accountability. Rather things just went dark.
Maybe have a ponder about, if Snowden didn't happen, what would be different about today?
One upside from Snowden: the intel community got better controls around administrator access on their networks.
The point of Snowden's actions was to reveal what our government is doing in an attempt to force them to change it. Immediately branding him as a traitor and foreign agent both allows the government to continue the illicit activity and discourages others from revealing damaging material.
By pardoning him and then rewarding him for his actions, it at least signals that government organizations shouldn't engage in similar activities and will lead to other people stepping forward if they do.
>it at least signals that government organizations shouldn't engage in similar activities.
This is the worst way for the executive to send signals to the organizations it controls.
It also assumes that the alphabet agencies were building these systems without government knowledge. But, everyone knew. They got the money for the project from the government. Sending a signal would be better achieved via legislation. Snowden should have leaked to members of congress.
I strongly doubt Snowden will ever get a pardon. It would be rewarding people declassifying large government programs based on their feelings about those programs.
So why would Clapper lie about it to Wyden? Additionally, what good would leaking it to Congress do then?
As a whole, I agree that the pardon alone would be a poor signal. Ideally it would be accompanied by additional changes. You argued that Snowden didn't accomplish much though and part of the reason is the hostile reaction from the government.
>It would be rewarding people declassifying large government programs based on their feelings about those programs
If a program is classified, it shouldn't be in a moral gray area. The public should be aware if we're committing unethical acts.
James Clapper is still a free man despite lying to Congress and the American public, so I don't see why anyone should trust a word anyone says from any US intelligence agency.
Well, Clapper did come back for another visit with a statement that could best be summarized as "Oops, my bad". So, to Congress, that's apparently enough. Congress basically has battered spouse syndrome*. They desperately want to believe things will change, but it never does. Instead, time after time, intelligence communities run roughshod over whatever civilian oversight they subject themselves to.
*I absolutely am not trying to belittle abuse by using that phrase.
It is the nature of trust that it is easily lost and difficult to regain. Basically trust is based on experience over the long term. If there are good experiences for a long time trust grows. Any breach of trust resets the trust clock to zero (or even to a negative value).
Host all their secrets and vulnerabilities on their site and shut down everything else. Prosecute everyone involved. Follow the Nuremberg principles set by the US itself (i.e., "I followed orders" is not an excuse).
NSA recommendations for the securing of government infrastructure should be listened to by corps with the normal grain of salt of understanding when and where it is appropriate to apply such measures.
NSA recommendations for the securing of enterprises and corporations should be ignored or avoided. If it was actually secure, they would also recommend it for government use.
I can see the logic in your theory. Further testing would be required for validation though. I would almost expect the public recommendations for corporations to be different than a company like a defense contractor.
At the end of the day, it's their own damn fault that their recommendations are viewed so skeptically.
That's the frustrating thing. The NSA actually used to be good, and generally useful towards securing our digital borders. SELinux, for example, was a contribution to the Linux kernel that is generally highly regarded and well vetted, and it was freely given to us plebs who don't have nuclear arsenals to secure.
But now, everything is different. They've been telling us to use flawed crypto algorithms simply because they know how to break them and they can have access to whatever they want. With our current-day NSA, our least risky choice is to consider the NSA an adversary, just like North Korea, Russia, or China. Nothing they say should be taken at face value.
Pretty shitty that we have to treat our own government security agencies like that.
No they clearly do not, not if it makes it any harder for themselves. They include weaknesses and backdoors in products and encryption used by US companies too. That is the direct opposite of "defending the home front".
Under what threat model would you be threatened by a domestic spy agency having a backdoor? Ignoring the red herring that "backdoors don't stay hidden".
My instinct would be to say verify what they are saying, but it seems the people we would verify it with are the same people giving the advice in the first place.
>NSA recommends that an enterprise network’s DNS traffic, encrypted or not, be sent only to the designated enterprise DNS resolver.
It's either a slow day at the NSA, or federal agencies have become so intellectually bankrupted by the cloud that they consider proclamations of the fundamentals of DNS and networking to be some sort of sage wisdom.
They are responding to the very recent emergence of applications (like Firefox) that (optionally) use their own encrypted DNS, thus bypassing the enterprise's ability to apply security policy based on DNS. (Visibility on DNS is also useful to help detect some malware.) I'll allow it.
They aren't recommending you don't use DoH. Just that you don't allow individual apps to bypass your enterprise resolver. In fact I use the same strategy at home (with DoT) to enforce ad and tracker blocking. It's just common sense really.
From the document:
>[...] NSA recommends that the enterprise DNS resolver supports encrypted DNS, such as DoH, and that only that resolver be used in order to have the best DNS protections and visibility.
I read that to mean: do not allow DoH to tunnel all your DNS traffic out to Cloudflare regardless of the promise of encryption. Send it only to the designated enterprise DNS resolver, i.e. the one under control of the enterprise.
All other DNS resolvers should be disabled and blocked, i.e. all those public DNS resolvers.
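As a rough sketch of what "disabled and blocked" could look like at the network edge (all addresses here are made up; 10.0.0.53 stands in for the enterprise resolver), classic DNS and DNS-over-TLS can be pinned down with an nftables ruleset like:

```
# /etc/nftables.conf fragment -- hypothetical addresses, adjust to your network
table inet dns_policy {
  chain forward {
    type filter hook forward priority 0; policy accept;

    # Allow plain DNS only to the enterprise resolver (10.0.0.53 is illustrative)
    ip daddr 10.0.0.53 udp dport 53 accept
    ip daddr 10.0.0.53 tcp dport 53 accept

    # Block DNS (53) and DNS-over-TLS (853) to everything else
    udp dport 53 reject
    tcp dport { 53, 853 } reject
  }
}
```

DoH on port 443 is the hard part, since it blends in with ordinary HTTPS; that is where the document's suggestions about blocking known resolver IPs and inspecting TLS come in.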
It gets so tiresome. I used to think the people who called out tyranny everywhere were just nuts, but it never ceases to amaze that everything nowadays keeps going "centralize and control".
No you're misunderstanding what the recommendation is. If a company runs their own DoH or even regular DNS or AD resolvers, then the company's client computers (the laptops their employees take home) should not be querying any old resolver hard-coded in their web browser (Firefox, CloudFlare) for internal company domain addresses. That's literally all this is saying. It's good corporate IT policy anyway and it's only being reiterated with DoH.
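For Firefox specifically, this is exactly what the enterprise policy mechanism is for. A minimal policies.json sketch that disables the browser's built-in DoH and locks the setting so users can't flip it back:

```json
{
  "policies": {
    "DNSOverHTTPS": {
      "Enabled": false,
      "Locked": true
    }
  }
}
```

Mozilla also honors a "canary domain": if the network's resolver answers NXDOMAIN for use-application-dns.net, Firefox's default DoH rollout stays off for clients on that network.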
Oh, to be able to buy that anymore. I don't see anyone implementing it that way though. I see it being implemented in a way whereby all DNS activity must go through the corp resolver first, essentially giving full tracking history of 99% of users.
Yes, it already happens with corporate proxies, and yes, I'm sour about that too...
To be clear, I'm not blaming any NetOps here. It's just... why is it so tempting? The things you can do with that type of data - it almost seems like you have to be superhuman to keep people away from it. Maybe I'm just too damn enchanted by just delivering packets to enable people... But when you get pushes from agencies suggesting "Hey, add a by-definition-awesome logging and tapping point" it really ruins my day.
And yes, I run a network too. No. I don't give a darn what my users do with it as long as the servers are up and fine, and the global riffraff stay out. I don't know. Just overly grumpy I guess.
To be honest, I have no idea what point you're trying to make.
Here it is:
* Companies have internal (intranet) network services
* Companies operate their own DNS (DoH) resolvers
* They also have global (internet) employees
* The devices those employees use have hard-coded DNS (DoH) resolvers (Google, CloudFlare)
* Don't let them use the hard-coded DNS (DoH) resolvers
* Make sure their machine uses the company DNS (DoH) resolver.
I know people think that DNS-over-HTTPS makes everything private and secure, but it doesn't. Google and CloudFlare still see every single DNS query from everyone.
>And yes, I run a network too. No. I don't give a darn what my users do with it as long as the servers are up and fine, and the global riffraff stay out.
You don't care if your users get hacked? Would you mind telling me what company you work for?
> Sending your DNS queries to a resolver that you control is hardly MITM.
That's if your users are well-behaved and follow the rules. To stop the users from being badly behaved, NSA recommends blocking connectivity to well-known IP addresses of the public DoH resolvers (e.g. Cloudflare) and TLS inspection to stop connections that try to go to less well known ones, including via ODoH, which means your TLS inspection device must understand the protocol.
To do TLS inspection at that level you need to MitM all HTTPS traffic going everywhere, as you need to read all HTTPS traffic to any possible host, as any of them may be a DoH resolver or relay. Q.E.D.
>To do TLS inspection at that level you need to MitM all HTTPS traffic going everywhere, as you need to read all HTTPS traffic to any possible host, as any of them may be a DoH resolver or relay. Q.E.D.
Yep. And corporate/enterprise environments should (not nearly enough do) do exactly that with devices owned and managed by the enterprise.
Any personal devices or those owned by contractors, clients and other external actors should not be allowed access to internal corporate networks. This is neither a particularly new or controversial idea either. Most large (or even medium-sized) organizations have separate "guest" networks for external resources which aren't secured or monitored.
However, internal networks are (and should be) a very different story.
> Yep. And corporate/enterprise environments should (not nearly enough do) do exactly that with devices owned and managed by the enterprise.
Sure! But the fact that they should (and many have been) already be doing just that does not change the fact that the technique imposes a third party listening into supposedly two party encrypted exchange. It's allowed, but it is still MitM.
> Any personal devices or those owned by contractors, clients and other external actors should not be allowed access to internal corporate networks. This is neither a particularly new or controversial idea either.
Whilst I agree, the recent trend to push for more BYOD, where the device is owned by the contractor, employee or an external actor but still allowed access and controlled by the enterprise, does tend to blur the lines quite a bit, especially as most tooling has been lacking decent isolation between "enterprise" and "private" on the same device. MDM tooling tends to want to administer the whole device and apply the stricter "enterprise" policies, with a pinky promise that private life is going to be respected.
>Whilst I agree, the recent trend to push for more BYOD, where the device is owned by the contractor, employee or an external actor but still allowed access and controlled by the enterprise, does tend to blur the lines quite a bit, especially as most tooling has been lacking decent isolation between "enterprise" and "private" on the same device. MDM tooling tends to want to administer the whole device and apply the stricter "enterprise" policies, with a pinky promise that private life is going to be respected.
Which is why employees should either be given employer-owned devices or use device subsidies (as many companies provide) to pay for a device that's only used for work purposes.
That companies attempt to hijack (and I mean that in both metaphorical and literal senses) personal devices for corporate purposes is, aside from the obvious issues, also terrible security policy.
That businesses do this is exploitative, unethical and insecure. I suspect such businesses don't really care about the first two, but should care about the third.
As an infosec/infrastructure guy, I'd raise hell over such a policy -- because leaving aside the scumbaggery (I think I just coined a new word. Good for me!), having personal devices connected to internal corporate resources (even with corporate MDM configurations) is literally begging to be compromised, for (hopefully) obvious reasons.
>Sure! But the fact that they should (and many have been) already be doing just that does not change the fact that the technique imposes a third party listening into supposedly two party encrypted exchange. It's allowed, but it is still MitM.
Replying again, as I should have addressed this as well.
I disagree. When using corporate resources, the organization is not only well within their rights to monitor (or at least log) all communications, given the potential for malware, data exfiltration and (to a much lesser extent) employee misconduct, an organization would be remiss for not doing so.
Which is why it's extra important not to allow or (as I addressed in my other comment), require those working onsite to use personal resources on internal networks.
You are missing the point here. There is no argument that the corporation should be able to monitor communications going into or out of their systems (though some limits on how and for what purpose that monitoring can be done do exist, especially in the EU - it's not unlimited), but that is not what calling the technique what it is is about.
Use of MitM by the corporation as part of Data Loss Prevention interferes with any hardening you or your vendors might be doing against a MitM attack attempted by anyone else. It breaks if, for instance, the application vendor your enterprise has decided to use (let us call them "Example plc") has pinned their own CA certificate within the application as the only one that is supposed to sign certificates on the Example domains - say, for "content.example.com" - following the example that e.g. Google set. Or, worse yet for this example, a specific certificate to be used instead of the specific trust anchor. I've seen both in the wild, so it is not an idle discussion.
Not only do you need to override that pinning with your own CA in the application for the content to be inspected; to retain the same level of hardening you'd also need to implement the same checks the application did in your DLP system, so that it verifies the upstream is legit. That costs money and time and remains fragile over time, so many enterprises simply do not bother, falling back to the well-known list of public CAs instead (that includes my $CurrentCorpo, much to my annoyance). It weakens the whole system, which is already fragile enough thanks to actors like Symantec, WoSign and StartCom - and possibly others.
> You seem to be implying that a company should not know what kind of web requests are coming from its computers.
Please point out where I made that claim.
All I am saying is that the way "knowing what kind of web requests" - and DNS request in this case - is achieved is by becoming a third party in supposedly two party encrypted communication. The company certainly has the authority to do so (check your local laws, though, as there are some exceptions) - but it is MitM in function and practice, if not in name. "TLS inspection" and "data loss prevention" are simply common euphemisms for the technique.
It's also not new, MitM proxies and for that matter endpoint introspection (e.g. keyloggers at the user machine) have been in use for decades in the enterprise, and have been making their way into BYOD private machines as well via various MDM tooling.
Using your company DNS server as the grandparent has mentioned is not MitM.
Inspecting all traffic by all devices in your company to try to enforce the use of said DNS server requires MitM, though.
You keep saying MITM, which is an attack type, as if any kind of traffic inspection is bad. There is no third party in this sort of proxy: the company is communicating with the internet. The fact that a company proxy is inspecting traffic from a company computer does not make the company proxy a third party because both resources belong to the company and should be used for legitimate purposes. Is a reverse proxy doing a MITM attack on a web server if it offloads encryption and authentication for it? No, because both resources are owned by the same party.
TLS inspection and DLP are not euphemisms, they're valid names for a security practice. They're not even the same thing--you couldn't replace both mentions with "MITM" and expect another to know what you're talking about.
In light of SolarWinds hack, I think it's fair to say that DoH is a real threat. Putting that volume of machines under the control of a single "trusted" network provider is very bad idea.
>It gets so tiresome. I used to think the people who called out tyranny everywhere were just nuts, but it never ceases to amaze that everything nowadays keeps going "centralize and control".
The recommendations are for enterprise networks, although they're also reasonable (although not really accessible to the non-technical) for individuals who care about their privacy as well.
An enterprise network isn't (or shouldn't be) some sort of individual free-for-all. In fact, good security practice recommends (although this isn't universally implemented) that all perimeter network traffic, regardless of type, be proxied (or MitM'd, as you put it) to protect from both intrusions and exfiltration of data.
Are you claiming that Enterprise networks should allow external resolvers to be used on internal resources willy-nilly?
In fact, good security practice demands that devices that aren't authenticated (e.g., with 802.1x) shouldn't be granted access to internal resources at all. On the flip side, internal devices shouldn't rely on external infrastructure resources either.
This isn't censorship or some sort of fascistic control mechanism. Rather, it's an appropriate organization response to extant and potential threats to their IT infrastructure and data.
Specifically, they are saying that in a home/personal environment it makes sense to use DoH with a public resolver like Cloudflare, but in an enterprise you will not be able to maintain tight control over internal DNS use as browsers roll out DoH by default, unless you block those public resolvers and enforce policy to use the enterprise resolver. Even in an enterprise, it only matters if you care about those tight controls.
It is going to be super hard to block DoH since it is indistinguishable from normal HTTPS traffic. If you MITM the HTTPS connection, the browser can detect that and refuse to use the connection. Many companies are in a situation where MITMing HTTPS does not work very well. I think Google enforces HSTS[1]; not sure how that interacts with DoH. I also think we need browsers that have no means of DNS resolution other than the good old operating-system-wide /etc/resolv.conf (or similar). I am not going to fight with Google over whether I have the right to run my own DNS server or not. They are taking the open internet inch by inch. This is the last drop.
At an enterprise level the browser configuration is controlled by the IT department. Your MITM CA certificate is going to be forced into the trusted list everywhere.
HPKP is dead for all intents and purposes as far as browsers go. What pinning?
The CA certificate store that the browser is using is something any enterprise that is interested in control is already extending by adding their own CA cert - and it has been that way for a very long time.
This approach does break some applications that pin a specific certificate instead of relying on the "any valid CA" model (e.g. Signal desktop), but that is seen as a feature, not a bug, when it comes to enterprise.
Well known public DoH resolvers are going to have well known IPs and will be easy to block.
eSNI and the encrypted TLS handshake proposal that was floated recently rely on fetching keys via DNS, so that's not applicable for DoH, and the handshake for a DoH client will probably be easy to distinguish from an HTTPS client.
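Matching flows against those well-known addresses is cheap. A minimal Python sketch (the resolver list and the flow tuples are illustrative and necessarily incomplete; a real deployment would pull the list from a curated feed):

```python
import ipaddress

# Illustrative, incomplete list of well-known public DoH resolver addresses.
KNOWN_DOH_NETS = [ipaddress.ip_network(n) for n in (
    "1.1.1.1/32", "1.0.0.1/32",   # Cloudflare
    "8.8.8.8/32", "8.8.4.4/32",   # Google
    "9.9.9.9/32",                 # Quad9
)]

def is_known_doh_resolver(dst_ip: str) -> bool:
    """Return True if a destination IP matches a known public DoH resolver."""
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in KNOWN_DOH_NETS)

# Flag HTTPS flows headed straight to known resolvers (made-up flow records).
flows = [("10.0.0.5", "1.1.1.1", 443), ("10.0.0.5", "93.184.216.34", 443)]
flagged = [f for f in flows if f[2] == 443 and is_known_doh_resolver(f[1])]
print(flagged)  # [('10.0.0.5', '1.1.1.1', 443)]
```

The weakness is exactly the one raised above: a resolver hiding behind an IP shared with ordinary web content never makes the list.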
What if well-known DoH resolvers are going to be in the same IP range as regular web traffic? If you could distinguish between DoH and HTTPS easily with iptables or pf, sure, but I thought it was not that easy.
That really depends on how much Cloudflare and Google are willing to risk.
If enough people block the published DoH resolver IPs, they'll see reduced availability on any hostname that is hosted on the same IPs; but if Cloudflare and Google put important content on the same IPs, it makes it harder to block.
Unless I am reading this wrong, they are not saying don't send your requests to Cloudflare, Apple, etc. I am not entirely privy to all this, but aren't those enterprise-grade DNS resolvers?
On my network? Absolutely, ability to inspect packets is absolutely essential. On a public network? Different story.
I’ve personally been engaged in incident response and in many scenarios DNS is a control mechanism for malware, or uses it for various purposes. It’s often a key piece of evidence for reconstruction of an incident.
Raw IPs can be used as well, but that doesn’t negate my point.
>Raw IPs can be used as well, but that doesn’t negate my point.
And in fact if you have enterprise-wide visibility on DNS requests, you have the opportunity to detect the use of an IP that was not returned in a request. Making it immediately suspect.
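With that resolver-wide visibility, the check is essentially a set difference between DNS answers and observed connections. A toy Python sketch with made-up log data (real inputs would come from your resolver logs and netflow/firewall exports):

```python
# Hypothetical resolver log: (queried name, answered IP) pairs.
dns_answers = [
    ("example.com", "93.184.216.34"),
    ("internal.corp", "10.0.4.7"),
]

# Hypothetical outbound connection log: destination IPs.
connections = ["93.184.216.34", "10.0.4.7", "185.199.108.1"]

# Flag connections to IPs that never appeared in any DNS answer.
seen_ips = {ip for _name, ip in dns_answers}
suspect = [ip for ip in connections if ip not in seen_ips]
print(suspect)  # ['185.199.108.1'] -- contacted without a prior DNS lookup
```

In practice you would window this by time (an answer from last week shouldn't whitelist a connection today) and allow for CDN IPs, but the core idea is just this correlation.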
I wonder why they haven't done this a long time ago. They don't even need a full DoH endpoint; just an auto-updating hosts file downloaded from a non-blockable domain (one used for other TV features) would do it.
There's no reason any of these products need to use on-prem DNS of any sort, except maybe for the DNS lookup to the central server that they require to operate at all. I know a lot of people base DoH concerns on the idea that it allows their set-top box to evade their local DNS policy, but that's not a coherent argument; these boxes can tunnel all their traffic out, if they want to (you can block that, but it's all-or-none, which is the thing the DNS boffins claim they can work around).
I already use an Nvidia Shield instead of my smart TV, even though both are Android TV. It is such a better experience. If any device started taking over its DNS in a way I couldn't override, and I had reason to care, I would stop using it. PiHole is already a meh solution.
My two primary apps on my Shield are SmartTubeTV and Kodi. I won't pay for YouTube when they force-bundle it with other services I don't want. The alternative, ads, has gotten to ridiculous levels, and then there are the ads baked into the videos on top of that. SponsorBlock is another game changer. Sadly it isn't in an Android TV app yet.
On my phone it is Vanced all the way for YouTube, and it does have SponsorBlock.
I don't understand why people can't see the dangers of moving everything to DoH. For example, if you have a 3000-user network and 2900 of them are using a local resolver, you have almost no chance of finding the 100 nodes doing DoH without MITMing everything over 443.
Someone will probably respond with something like: "Just block the IP address ranges of public DoH resolvers" and that would work for the resolvers we know about.
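And indeed that kind of blocklist check is trivial to express, which is exactly its weakness: it only covers resolvers you already know about. A minimal sketch (1.1.1.1/1.0.0.1 and 8.8.8.8/8.8.4.4 are Cloudflare's and Google's published public resolvers; the function name and list are otherwise illustrative):

```python
import ipaddress

# Published addresses of well-known public DoH resolvers.
# Any resolver NOT on this list sails straight through.
DOH_BLOCKLIST = [
    ipaddress.ip_network("1.1.1.1/32"),
    ipaddress.ip_network("1.0.0.1/32"),
    ipaddress.ip_network("8.8.8.8/32"),
    ipaddress.ip_network("8.8.4.4/32"),
]

def is_known_doh_resolver(dst):
    """Return True if dst matches a known public DoH resolver address."""
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in DOH_BLOCKLIST)

print(is_known_doh_resolver("8.8.8.8"))        # True
print(is_known_doh_resolver("93.184.216.34"))  # False
```

An unknown or self-hosted DoH endpoint looks like any other HTTPS destination, which is the commenter's point.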
> I don't understand why people can't see the dangers of moving everything to DoH
Because "more security" is hard to argue against. The huge corporations who ultimately want to take control of the population have realised that, and are using that excuse to get in bit by bit.
>Because "more security" is hard to argue against. The huge corporations who ultimately want to take control of the population have realised that, and are using that excuse to get in bit by bit.
But in reality, DoH doesn't really provide "more security."
All it does is obfuscate DNS queries. If you're concerned about ISP tracking, DoH doesn't really help with that at all, since the ISP can still see where you're going just by looking at packet headers (destination IPs, and usually the SNI in the TLS handshake) anyway.
And the Googles and Facebooks of the world love DoH because it bypasses PiHole style ad/tracking blockers.
The appropriate solution is to use PiHole (or PiHole style blocklists) in concert with a local recursive resolver (or an external resolver that supports DNS-Crypt), not to obfuscate your DNS requests, allowing all the ads/tracking/spying connections to proliferate.
It's not a perfect solution, but it's a much better solution than needing to implement one or more ad/tracker blocking solutions on every single device on your network.
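As one concrete shape of that setup, here is a minimal sketch of an Unbound config doing full recursion locally with PiHole-style blocklist entries. The interface, network range, and blocked domains are placeholders; a real blocklist would be generated from published lists.

```
server:
    # answer only for the local network
    interface: 192.168.1.2
    access-control: 192.168.1.0/24 allow
    # no forward-zone stanza: recurse from the roots ourselves,
    # instead of handing queries to 1.1.1.1 or 8.8.8.8

# PiHole-style blocklist entries, one per ad/tracking domain
local-zone: "doubleclick.net." always_nxdomain
local-zone: "ads.example.com." always_nxdomain
```

Every device that honors network DHCP DNS then gets the blocking for free, without per-device configuration.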
First they take the DNS queries. Then they start routing the rest of the traffic through their servers, while advertising how it's all "for your privacy and security", of course.
To be clear, I'm not against the principles behind DoH, and think traffic going from the local network into the Internet benefits from encryption; I'm against how it's being implemented at the application-level and its subversive nature.
That's fair enough, but in the short term, Cloudflare is more trustworthy (and tolerant of free speech!) than my ISP and government. Is there an initiative in which I have to trust none of these parties?
You can reroute DoH to your own resolver. If you have a trusted wildcard certificate on the device you want to reroute DoH for this will work 100% of the time. If you don't have a trusted wildcard cert on the device in question it usually will either not care or will fall back to unencrypted DNS.
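On a gateway you control, that rerouting is typically done with NAT rules along these lines. This is a hedged config sketch, not a drop-in ruleset: 192.168.1.2 is a hypothetical local resolver, and the transparent-HTTPS part only works for clients that trust your wildcard certificate, as the comment above says.

```shell
# Redirect traffic aimed at a known public DoH resolver (here 1.1.1.1)
# to a local DoH endpoint at 192.168.1.2. Clients without your trusted
# cert will usually fall back to plain DNS, which the second rule grabs.
iptables -t nat -A PREROUTING -p tcp -d 1.1.1.1 --dport 443 \
         -j DNAT --to-destination 192.168.1.2:443

# Catch plain-DNS fallback (and everything else on port 53) as well
iptables -t nat -A PREROUTING -p udp --dport 53 \
         -j DNAT --to-destination 192.168.1.2:53
```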
If you’ve got 3000 nodes on your network without inventory, logging and configuration control on each of those devices you’ve already lost. You don’t have a secure network, at best you’ve got a guest network at a cafe.
An estimated 18,000 companies were affected by the SolarWinds incident. Many of those companies had excellent inventory, logging and configuration control. You simply cannot detect DNS over HTTPS in the network without performing MITM.
Really, you can’t detect that if you’re on the machine?... And if you’re not on the machine, or it’s acting differently from what the logs show, isolate it until you can investigate.
Actually, no you could not detect that even from the machine performing the DoH. You could probably detect it if you attached a debugger and set a breakpoint on the resolve functions being used. May I ask what you do for a living?
Why even comment on things that you don't fully understand?
I'm a security engineer. It's pretty much my thing. I'm talking about logging it on the machine, one that's not owned... and yes, you too can do it. I'm doing it right now, even on my Raspberry Pi.
The detection part I think you're misunderstanding: you need to compare what the machine is logging with what it's actually doing, by looking at network traffic, etc. Looking for parallax, differences between the two.
I am not misunderstanding anything. Let's terminate this conversation, I can see that it will not get anywhere.
It's amusing that you actually believe that you can 'check the logs' to detect all DoH being performed on the machine. Would you be willing to disclose your employer? "I can check the logs" sounds like something a naive systems administrator would say.
I'm glad that 'security' is your thing. The best thing about the internet is that you never know who you are talking to... Even when you meet people that wrote the parts of the operating system you're currently using.
I never said you could log all DoH. You’re not following what I’ve said. If you’re relying on DNS for your security posture in any way right now, you’re in a really bad place. Having those non-malicious DNS requests in the clear is a safety blanket at best. Check the default DoH resolver and the system's DoH logs. Then look at network traffic and then for gaps. Programs that use their own resolver and just mix it with their own TLS traffic can be observed, even without knowing the DNS record; the IP is enough.
Also feel free to Google me, creepy as it is. I’ve no idea why my specific employer would help this discussion in any way.
PS: the victims of SolarWinds had DNS and it didn’t help them. Expecting the attacker to use a known IOC or contact an obvious C&C domain is where the industry is at. My opinion is DoH will actually force blue teams to build systems that are effective. My chosen model is parallax: known behavior, known states that can be checked.
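The parallax idea, comparing what the host claims against what the network saw, reduces to a set difference in each direction. A toy sketch with hypothetical data (real inputs would be host DNS/DoH logs and flow/PCAP destinations):

```python
# Toy "parallax" check: gaps between the host's own logs and the
# network's view of that host are investigation leads in both directions.

def parallax(host_logged_dsts, network_seen_dsts):
    logged, seen = set(host_logged_dsts), set(network_seen_dsts)
    return {
        "seen_but_not_logged": seen - logged,   # hidden resolver / C2?
        "logged_but_not_seen": logged - seen,   # tampered or stale logs?
    }

host_logs = ["93.184.216.34", "203.0.113.7"]
wire      = ["93.184.216.34", "198.51.100.99"]
print(parallax(host_logs, wire))
```

Neither gap is proof of compromise on its own, but either one is a concrete, checkable anomaly rather than a hunt for a known IOC.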
They are just saying that within enterprises it is in the enterprise's best interest to control all aspects of DNS so all traffic can be monitored, which you seriously need to do if you aren't already.
DNS needs to be monitored holistically; it is a great place to catch IOCs.
For non-malicious apps, generally speaking DoT is something you need to specifically enable, whereas certain major applications are working towards using DoH by default (or they already do [0]). DoH is also mixed with regular HTTPS traffic, so it is much harder to detect and act upon - so businesses need to spend more effort to counter it. As a bonus, most of the mitigations in use here also apply to DoT, so you are also getting it in the bargain...
There has certainly been evidence of censorship among the thousands of third-party open resolvers. Are there any examples of known "malicious" third-party DoH or DoT resolvers? Has anyone been studying this?
Could someone create a replacement for DNS entirely please?
DNS does WAY more than what the typical user needs it for, and services that implement it are, as a result, much more complex than the 99% use case requires.
The 99% use case: resolve x.y.z to some IP address.
What I think should happen:
1. At each level, a public/private keypair is used to authenticate valid records for the name. Eg: .com has public/private keypair(s) representing who can sign x.com records. The .com owner only needs to publish the public keys; reliable sources (ISPs etc.) can then share them.
2. The x.com records themselves would be a signed mapping from x.com to IP address(es) plus a public key for the next level down.
3. The x.com owners could then publish their x.y.com records freely, and they could be mirrored by everyone.
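The steps above can be sketched as a tiny verification loop. One loud caveat: a real version of this scheme needs asymmetric signatures (e.g. ed25519, where the zone publishes only the public key); HMAC stands in below purely to keep the sketch stdlib-only, and every name and record in it is hypothetical.

```python
import hashlib
import hmac

# STAND-IN: symmetric HMAC instead of real asymmetric signatures.
# In the proposed scheme, sign() uses the .com operator's PRIVATE key
# and verify() uses the widely shared PUBLIC key.

def sign(key, record):
    return hmac.new(key, record.encode(), hashlib.sha256).hexdigest()

def verify(key, record, sig):
    return hmac.compare_digest(sign(key, record), sig)

com_key = b"key-published-by-.com-operator"   # step 1: .com's keypair
record = "x.com -> 93.184.216.34"             # step 2: the signed mapping
sig = sign(com_key, record)

# Step 3: any untrusted mirror can hand you (record, sig); you check it
# against the .com key you already hold, so the mirror needs no trust.
print(verify(com_key, record, sig))                   # True
print(verify(com_key, "x.com -> 203.0.113.66", sig))  # False
```

This is exactly the "trust the key, not the server" property claimed below: a tampered record fails verification no matter who served it.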
Unlike the current methodology, there would be far less need to trust where you get the records from. The public/private keypairs should change WAY less frequently.
Admittedly, in such a widely distributed system you wouldn't have nice TTLs, but that is for the better. DNS records should not be changing that frequently.
Such a new system also should be done in a fully distributed way and NOT controlled by a bunch of money grubbing bastards who make way too much money from records.
It should NOT cost $20/yr to own a record pointing x.y to a number. It's absurd and really needs to stop.
No. My point isn't to use security on top of existing DNS records. My point is to make a brand new distributed system entirely that is free for all instead of run by a bunch of greedy internet thugs.
Firefox silently pulled all production ESNI code as of v83 without a word of warning to anyone. As in, the Firefox development team simply killed encrypted SNI and told nobody that may have been using ESNI in despot regimes, in exchange for future ECH support which is not implemented anywhere yet.