This issue only exists because, without Chrome's work here, virtually every ISP / WiFi provider etc. was hijacking DNS queries to pump trash ads down your throat.
THANK you, Google, for stopping this total abuse of internet standards (which is also very annoying in cases where there's no UI).
The irony of Verisign complaining about this is incredible, actually - they were the No. 1 abuser of this in the past! They go on about interception being the exception these days. That's actually because Chrome put a stake right through the heart of Verisign's efforts to intercept.
Separate question - if Google did what Mozilla did and wrapped DNS in HTTPS and sent it to 8.8.8.8 (allowing users to pick other resolvers), would that be better?
It feels like folks complain: "you are sending too much traffic." If Google said "we can handle it directly" (they obviously can - the bandwidth YouTube alone uses has got to be orders of magnitude larger), people would complain: "no no, send us the traffic."
Google already runs 8.8.8.8 and encourages adoption of it, I think - maybe it can handle the billions of requests per day? Or maybe they can help Verisign scale their infra?
The real "complaint" is that Google broke the ISPs ability to highjack your DNS. With that real goal in mind, they will still "complain" if Google wrap DNS on https and send to 8.8.8.8
The ISPs now will have a series of PR articles about how Google wants to own the internet, track all your web DNS, monopolize the access, and evil blah blah.
We should read these articles with the mindset that they are PR pieces, not a solution request.
Verisign, the folks complaining, created a wildcard DNS entry for ALL .com and .net domains called Site Finder! How can they handle the bandwidth for that (serving full web pages with lots of "partner" offers etc.) but not 3 DNS packets?
The gTLD / top-level domain servers are different from the root servers. Also, the root servers are a shared, distributed responsibility; Verisign doesn't run most of them: https://en.wikipedia.org/wiki/Root_name_server
"The ISPs now will have a series of PR articles about how Google wants to own the internet, track all your web DNS, monopolize the access, and evil blah blah."
They'd be right, but that's neither here nor there in this contrived strawman.
And yes, a lot of people would have a serious problem if Google baked in their own DNS. That would break a lot of stuff. And it would make the notion of checking invalid DNS entries rather unnecessary, wouldn't it?
So ~40 boxes 10 years ago would have been able to handle 60 billion queries per day.
I'd expect a single modern server of reasonable size to easily do 100K qps, and probably more like 200-300K qps, if not exceeding a million qps (though a single box may hit the packet/s bottleneck).
So Verisign is complaining about a single-digit to low-double-digit number of server boxes' worth of traffic.
I have a hard time imagining this being more than a rounding error for them.
EDIT: Updated to directly reflect 60B queries per day figure.
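For anyone who wants to check the arithmetic, here is the back-of-envelope version in Python; the 60B/day figure is the one quoted above, and the per-box qps numbers are this thread's rough estimates, not measurements:

    # Back-of-envelope: how many boxes does 60 billion queries/day need?
    # Per-box qps figures are the rough guesses from the comment above.
    QUERIES_PER_DAY = 60_000_000_000
    total_qps = QUERIES_PER_DAY / 86_400          # ~694,000 qps across all roots
    for per_box_qps in (100_000, 300_000, 1_000_000):
        print(f"at {per_box_qps:>9,} qps/box: ~{total_qps / per_box_qps:.1f} boxes")
    # Prints roughly 6.9, 2.3 and 0.7 -- single digits either way.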
Thanks for confirming my intuition. I haven't paid attention to bind performance for well over a decade. So now I am even more flabbergasted that Verisign are even talking about the load.
Google running the DNS resolver wouldn’t fix this problem; due to the nature of the requests that Chrome is making, the lookups would have to hit the root server because they wouldn’t be cached anywhere else.
I'm suggesting we let Google run some root servers.
Not to rain on the sob story, but Netflix, Amazon and Google probably pump out orders of magnitude more bytes than these requests.
It's axiomatic. If my session makes three 512-byte requests when I start browsing, and then I watch 10 YouTube videos at 2K and a Netflix movie at 4K, how in the WORLD is the bandwidth from the three 512-byte requests crushing infrastructure?
And if it is, let Google or Amazon or someone with a clue run the infra. Seriously, it makes no sense that THIS is the bandwidth killer out there. Apple updates - sure, those are monsters and a ton of people get them. DNS packets are pretty small by contrast.
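For a rough sense of scale, here is that comparison as a quick calculation; the 15 Mbit/s 4K bitrate and two-hour runtime are ballpark assumptions, not measurements:

    # Three ~512-byte probe exchanges vs. one 4K movie (assumed 15 Mbit/s, 2 hours).
    probe_bytes = 3 * 512                       # ~1.5 KB of DNS probes
    movie_bytes = 15_000_000 / 8 * 2 * 3600     # ~13.5 GB of video
    print(f"the movie is ~{movie_bytes / probe_bytes:,.0f}x more bytes")   # ~8,800,000x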
Bandwidth isn't the issue, it's the dynamic nature of the content and the global consistency needed between DNS servers. Videos are static content; they don't get updated so you don't need any schemes to check for updates.
It's certainly possible to design DNS to scale, it's just that they may be overwhelmed by QPS right now because they made different trade-offs.
We are talking 15k qps. A single AMD machine should be able to handle much higher rates - millions. Yes, there is overhead, but my point was: if Verisign or whoever is in charge can't run these root servers to handle 15k qps for the entire global internet, then let Google do it.
These are ridiculously low numbers, and DNS is EASIER, not harder, than video. Google is doing full live streaming, with multi-resolution / multi-format delivery and chat, across a ton of platforms.
And no - the root zone does not change at a high rate. It's a ~2 MB file you can download yourself. .com stays .com for a LONG time, and the TTLs are going to be hours and days.
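If you want to check that yourself, the root zone is published for download; a minimal sketch, assuming the commonly cited InterNIC URL is still current:

    # Fetch the published root zone and see how big it actually is.
    # https://www.internic.net/domain/root.zone is the commonly cited location.
    import urllib.request

    with urllib.request.urlopen("https://www.internic.net/domain/root.zone") as resp:
        zone = resp.read()

    lines = zone.count(b"\n")
    print(f"root zone: {len(zone) / 1_000_000:.1f} MB, {lines:,} lines")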
I think the root servers can handle it (at least so far). It's just that these probably unnecessary DNS requests, testing for a somewhat rare situation, are causing just about as much traffic to the root servers as the rest of the internet combined.
Note that their share at the roots will actually be disproportionately higher than their share of DNS traffic at recursive resolvers, because these domain names are randomly generated and so won't be cached very well.
I think the complaint is not "we can't handle this" so much as "we don't want to pay for this."
Now, if you are saying that Google should run root servers so they can absorb some of the cost of doing this, that might be fair. Although I suspect it would be cheaper for them to find a different way to accomplish this in the browser.
Agreed. I'm not a Google stan, but I'm inclined to be on their side here.
It would be good to switch to a less expensive solution, and I hope the Chromium team does so, but at the end of the day we don't want ISPs to hijack DNS queries.
I'm not convinced that I should care more about the extra load on root DNS servers than I should care about the hijacking. And frankly, I don't trust the people who are complaining about this.
> The root server system is, out of necessity, designed to handle very large amounts of traffic. As we have shown here, under normal operating conditions, half of the traffic originates with a single library function, on a single browser platform, whose sole purpose is to detect DNS interception. Such interception is certainly the exception rather than the norm. In almost any other scenario, this traffic would be indistinguishable from a distributed denial of service (DDoS) attack.
> Could Chromium achieve its goal while only sending one or two queries instead of three? Are other approaches feasible? For example, Firefox’s captive portal test uses delegated namespace probe queries, directing them away from the root servers towards the browser’s infrastructure.
Trading one abuse for another is barely an improvement.
Regulation is what will force ISPs to behave correctly and stop hijacking DNS queries.
For me, as a European, it is worse to be spied on by Google and have my data handed to United States intelligence than to depend on my ISP, which is bound by European law and will know where I connect regardless.
Google is abusing its position, and most users are not technically savvy enough to understand the implications.
The only additional data going to Google would be intranet requests incorrectly sent to Google as searches. I don't think Google did this to collect that data, but some legal standards would be nice either way.
The context of the discussion is whether Google should be sending those DNS queries. The premises involved technical details about how heavy those DNS queries are. The DNS queries don't make things worse for end users privacy-wise. Nobody is arguing that we should give Google more "abuse". There is actually a net reduction in "abuse" in this case: the abuse by the ISPs is removed, with a solution that doesn't change what data you expose to Google.
They could still improve on it to make the impact less severe (for example, [1]). Mind you, they do this globally even though the practice they're testing for is illegal in many countries, so in a lot of places it's a complete waste.
But isn't Google's business collecting user data and then providing assistance to other companies that may want to stuff trash ads down your throat?
Google's search engine is not an "internet standard". Being the default on Chrome, that is in effect what this DNS querying behaviour is protecting. Protecting Google "search via the address bar" is also the raison d'etre for Google Public DNS. Back in 2008/2009, this type of "search" was being hijacked by OpenDNS.
The solution to not having DNS "hijacked" is to have more user control of DNS. Letting users select a third party DNS provider is better than no choice at all, but that is not really control. The third party still has control. Unfortunately, full user control -- the best solution to stop hijacking and other DNS tricks -- is not what Google and others are promoting.
By design, I have no entries for certain Google domains in the zone files I use on the LAN. I have seen how much involuntary DNS traffic there is from Chrome and other Google software 24/7 (including queries for these random strings). It is significant, even for just one user. Why don't they provide an option to turn this off?
Google collecting user data for ads is a different issue from ISPs intercepting DNS requests, and then hijacking the response. You are conflating two different issues.
And can't you select your DNS provider in Chrome by going to Settings > Security > Advanced > DNS Stuff?
In a previous YCNews discussion of this topic, I worked out, based on public root DNS statistics, that this traffic amounts to at most a few gigabits per second spread across hundreds of DNS servers, each of which has at least a gigabit connection.
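For what it's worth, that figure is easy to sanity-check; the 60B queries/day number comes from upthread, while the ~500 bytes per query-plus-response and the instance count are my own rough assumptions:

    # Sanity check on "a few gigabits spread across hundreds of servers".
    queries_per_day = 60_000_000_000   # figure quoted upthread
    bytes_per_exchange = 500           # assumed size of query + response
    instances = 500                    # "hundreds of DNS servers"

    total_bps = queries_per_day * bytes_per_exchange * 8 / 86_400
    print(f"total: ~{total_bps / 1e9:.1f} Gbit/s")                      # ~2.8 Gbit/s
    print(f"per instance: ~{total_bps / instances / 1e6:.1f} Mbit/s")   # ~5.6 Mbit/s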
Notice that all of the breathless articles going on about how evil Google is are using percentages, not absolute numbers?
This is because journalists don't do journalism any more, they just put their name on corporate PR that is intended as a weapon in a war for control.
You know, that actually makes sense to me. You'd want a lot of margin. If they were running the root DNS servers and were hitting like 80% of their bandwidth on a normal day, I'd be worried.
We're not talking about 80%. We're talking about 8%. The size of the traffic they are talking about is low enough that I, personally, would consider paying for it. It is a couple thousand dollars a month to run this "abused" infrastructure. If they were doing it as a thankless task for free, it would make sense to be upset about it; but they're not. They're doing it for a cut of every single .com purchase on the whole web.
Ironic that this is Verisign complaining, considering the very idea of capturing nonexistent domains and replacing them with spam was a Verisign concept from 2003 called Site Finder. All of .com and .net were wildcarded at the TLD level.
The ISPs just took it as a great idea. Now we have to build countermeasures for it.
So half of the traffic is Chrome requests, but how much capacity is all of the traffic actually using?
Are the root servers 90% idle or are they overloaded? That changes everything. Also servers are fast, bandwidth is cheap, and DNS is lightweight. How is this really a major resource issue?
> Are other approaches feasible? For example, Firefox’s captive portal test uses delegated namespace probe queries, directing them away from the root servers towards the browser’s infrastructure.
In your agenda to expose my agenda in linking a Firefox blog post, you have missed that this isn't even about captive portal detection. Captive portals are easily detected by trying to load a known response page; captive portals don't fake that because they want to be detected so that the browser redirects you to their portal.
This is about malicious DNS resolvers that don't return NXDOMAIN for non-existent domains but instead send you to an ISP advertisement page. This messes with the omnibox. All other domains resolve just fine through them. These resolvers are inclined to evade detection; e.g., if browsers checked a static list of domains, they would just return NXDOMAIN for only those.
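The check itself is conceptually simple. Here is a rough sketch of the idea (not Chromium's actual code), using the three-random-probe shape described in this thread:

    # Rough sketch of the NXDOMAIN-rewriting probe; not Chromium's actual code.
    import random
    import socket
    import string

    def random_label(rng):
        # A single random label of 7-15 lowercase letters, which shouldn't exist as a TLD.
        return "".join(rng.choices(string.ascii_lowercase, k=rng.randint(7, 15)))

    def resolver_rewrites_nxdomain(probes=3):
        rng = random.SystemRandom()
        for _ in range(probes):
            try:
                socket.getaddrinfo(random_label(rng), 80)
            except socket.gaierror:
                continue   # lookup failed, which is what an honest resolver gives you
            return True    # a made-up label "resolved": something is rewriting NXDOMAIN
        return False

    # Caveat: the OS may append search-domain suffixes to bare labels, so a real
    # implementation talks to the configured resolver more directly than this.
    print("NXDOMAIN rewriting suspected:", resolver_rewrites_nxdomain())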
> captive portals don't fake that because they want to be detected so that the browser redirects you to their portal
This is not always true; several systems try to get you out of the iOS/macOS captive portal detection because it boxes the user into a restrictive frame, and either always impersonate captive.apple.com or begin to impersonate it when they want the user to believe they're "online".
Ok sorry, please forgive my misunderstanding, it sounds like you’re saying Cloudflare is a ‘malicious DNS resolver’, but I guess I’m wrong there as well? I opted into DoH when it became available in Nightly (though not in the US), am I being a mug here?
I don’t see that in the parent comment, though, to which my misguided reply has been heavily downvoted? I guess I need to learn more about this. Edit: or just stay out of it.
I can't even remember the last time I got an advertisement page. Must've been about 10 years ago, though for the last 5 I've been using third-party DNS. I don't know of any Swedish ISP that does this. I'm using the ISP's DNS on my phone.
I assume at some point the calculus flipped for ISPs: rather than annoying customers with low-quality ads and making them cut you out, operate a real DNS resolver so you can still sell their real-time browsing history to advertisers. Part of that is the Google and Cloudflare marketing campaigns for their public resolvers, so people had a ready alternative.
Yes, it is, and that's in fact one popular way to set up a private root content DNS server for a resolving proxy DNS server to consult. I do it on my machines.
I don’t understand why it has to randomly generate new domains each time. Surely all that matters is that it’s testing for domains that are statistically unlikely to exist, right? Why not seed the RNG with something like the current date, such that the domains are still random, but all instances of Chrome generate the same domains every day? That way the NXDOMAIN results should be able to be cached by upstream resolvers, thus significantly reducing the load on the root servers.
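Something along these lines is presumably what's being proposed; the 7-15 letter shape mirrors the probes discussed in this thread, and everything else is hypothetical:

    # Hypothetical date-seeded variant: every client generates the same probe
    # names on a given day, so the NXDOMAIN answers can be cached upstream.
    import datetime
    import random
    import string

    def daily_probe_names(count=3):
        rng = random.Random(datetime.date.today().isoformat())   # same seed for everyone today
        return ["".join(rng.choices(string.ascii_lowercase, k=rng.randint(7, 15)))
                for _ in range(count)]

    print(daily_probe_names())   # identical output on every machine on a given day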
If they were in any way predictable, people would register them, or the broken DNS servers would handle those domains properly while still giving broken results for others.
That shouldn't be a problem: I don't think it's possible to make qwajuixk resolve to anything on the public internet. I bet they could hardcode a list of domains, change it up every couple releases, and that'd stop 80% of the rogue sysadmins who are doing this.
I've mostly seen DNS hijacking from large ISPs. Probably hardly any sysadmin anywhere wants to implement DNS hijacking, but they have corporate overlords telling them to do it. The first big ISP would get it working pretty quickly no matter what Google did; then other big companies would see it and tell their sysadmins to figure it out, since the other company is already doing it.
I bet there's a decent amount of cash in DNS hijacking for big players, so you should probably think of it like the hypothetical cryptography attacker. You should assume they will know everything about your method but still can't beat it. If you can tell them what you're doing and they can beat it, they will. They wouldn't have done it in the first place without motivation.
Send one request to such a random domain over DoH or DoT to some DNS server you control for the purpose (Google can easily set such a thing up). Ensure the response is NXDOMAIN. If it's not, generate a new random domain and retry.
Send a second request for the same domain via the system DNS. If that answer isn't NXDOMAIN, the system resolver is hijacking unknown DNS requests.
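A rough sketch of that two-step check; Google's public DoH JSON endpoint at dns.google is real (a Status of 3 means NXDOMAIN), but the rest of the flow is just an illustration of the idea:

    # Ask a DoH server you trust, then ask the system resolver, and compare.
    import json
    import socket
    import urllib.request

    def doh_nxdomain(name):
        url = f"https://dns.google/resolve?name={name}&type=A"
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp).get("Status") == 3   # 3 = NXDOMAIN

    def system_nxdomain(name):
        try:
            socket.getaddrinfo(name, 80)
            return False
        except socket.gaierror:
            return True

    probe = "xkqzvwplamq"   # placeholder; a real probe would be freshly randomized
    if doh_nxdomain(probe) and not system_nxdomain(probe):
        print("system resolver is rewriting NXDOMAIN answers")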
I don't think so. If there is any kind of predictability, I think at least 80% of the spam would come back. And these aren't rogue sysadmins. Huge companies are doing this.
In that case, the ISP runs chromium, figures out the domains, and they can stop hijacking those domains. Then the feature gets the wrong result for the rest of the day.
(Hell, it's open source, ISP could just integrate the algorithm into their DNS hijacking logic).
There’s not really any benefit to the ISP to doing this.
If there was, ISPs would already detect 3 quick requests from a client that fits this pattern and disable the domain hijacking for the second and third request (or even for all 3 if they’re confident enough that the first request is part of this pattern).
If there's no valid cert for the domain, you can't 301/302 redirect over an HTTPS connection. SNI just lets the middleman peek at the host name; it doesn't allow you to redirect. DNS interception allows the attacker to change the IP, but HTTPS still needs a valid cert for the host.
Incidentally, this affects how my ISP implements its domain blocks. Non-HTTPS sites are 307 redirected to https://assets.virginmedia.com/site-blocked.html, whereas connections to HTTPS sites are just halted (IIRC, it sends a TCP reset, but it's a while since I looked into it).
You don't need DoH to avoid that. Just set up and use your own resolving proxy DNS servers and don't use the Virgin Media ones at 194.168.4.100 and 194.168.8.100 either directly or indirectly. (This means not forwarding to those servers, and not letting your DHCP client, in any of your machines, configure your system to use them when it gets a lease.)
Here is the difference between using Virgin Media's resolving proxy DNS servers and using your own:
That last query didn't even escape the machine in my case. I run a private root content DNS server on every machine if possible. The query here got answered by a tinydns instance listening on 127.53.0.1:
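For anyone who wants to reproduce that comparison, here is a rough equivalent using dnspython (pip install dnspython); the resolver addresses come from the comments above, and the probe name is made up:

    # Compare how the ISP resolver and a local private-root resolver answer
    # a nonexistent single-label query.
    import dns.resolver   # pip install dnspython

    def ask(nameserver, name):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [nameserver]
        try:
            return ", ".join(rr.to_text() for rr in r.resolve(name, "A"))
        except dns.resolver.NXDOMAIN:
            return "NXDOMAIN"

    probe = "qwajuixkzvb."   # made-up label; trailing dot avoids search-list expansion
    print("Virgin Media:", ask("194.168.4.100", probe))
    print("local root:  ", ask("127.53.0.1", probe))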
I've just realised my mistake, I needed to turn "use-peer-dns" off.
I did have my router set up to cache DNS requests to Google, and I updated the DHCP config so everything would use my router for resolution, but Virgin's DNS servers were still being pulled through as a "dynamic server" according to the RouterOS panel.
I thought Virgin was fiddling with my DNS requests even though I was sending them to a different provider.
The issue with HSTS is that it is opt-in. It should be opt-out, with a list of legacy sites that ships with the browsers, similar to how HSTS preloading works.
Requiring HTTPS would absolutely protect against hijacking of top-level domains that aren't registered, as no SSL/TLS issuer that's trusted by browsers will issue certificates covering those domains.
A first step towards that eventual outcome would be to default to https:// for anything typed by a user that doesn't start explicitly with http://, so that they are protected from NXDOMAIN hijacking in that regard.
I hope that the various browsers implement at least that first step.
DNSSEC isn't coming anytime soon. AWS and Azure don't even support it yet. More likely this will be resolved by DoH than by DNSSEC. Isn't Chrome supposed to adopt it this fall? Firefox already has it, with Cloudflare as the default resolver.
DNSSEC is coming though. Cloudflare supports it and half the internet runs on Cloudflare.
Amazon and Azure are still lagging behind, but those aren't going to ignore DNSSEC forever.
DoH solves nothing in this context unless we centralise DNS resolution to Google or Cloudflare. Windows' implementation sends DoH requests to the same server as your regular DNS server by default, so the protocol doesn't give you any advantage there. Chrome does the same thing; if it detects that your name server does DoH, it'll switch to that, but it won't switch DNS servers.
Perhaps this does solve the problem for the few people with custom DNS servers getting their queries intercepted, but the vast majority of people (everyone not using Firefox inside the US) won't notice any difference.
DNSSEC will solve this problem by making the chain of trust break for domains that haven't been signed by their TLD, but only if we start enforcing it (like we already enforce HTTPS for certain features). DNS over HTTPS will just encrypt the bogus NXDOMAIN answer when it comes from the same malicious server; it only protects against interception when you're using a different DNS server. The two protocols serve different purposes.
Cloudflare sells DNSSEC services. They've supported DNSSEC for years, and adoption of DNSSEC in the US --- outside of people who get it by default with Cloudflare --- hasn't budged.
DoH breaks ISP DNS interception immediately, and for all zones, not just the rare DNSSEC-signed ones, by moving DNS resolution off-net to a more trusted provider.
The irony is, breaking NXDOMAIN interception is in fact a core use case of DNSSEC, and the protocol simply won't work for it, because the ocean needs to boil before every vector of the attack is closed. DoH went from the whiteboard to deployment in a tiny fraction of the time, and actually closes this hole decisively.
I am not against DoH, but it only helps with the privacy of the request/response wrt snoopers. It doesn't help with the authenticity of the response, which is the problem being addressed by this mechanism.
(That the firefox solution works is due to the root servers themselves being deemed trustworthy, and the ISP ones being tentatively not, but the DNSSEC solution is strictly better as now nothing needs to be trusted.)