Inside Firefox’s DNS-over-HTTPS engine (haxx.se)
238 points by sohkamyung on June 3, 2018 | 130 comments



> This initial approach, at least, does not cache the intermediate CNAMEs nor does it care about the CNAME TTL values.

That's a total violation of the standard and will break A LOT of things. Example: my.domain.com -> CNAME ec2-1-2-3-4.aws.com (30s TTL) -> A 1.2.3.4 (30-day TTL).

So Firefox will now cache my.domain.com to 1.2.3.4 for 30 days? If you update the record for my.domain.com today, the change is applied within 30s; with this flawed heuristic it won't expire until after 30 days.


I don't like the idea of this, and on top of that the implementation is bad. If we're going to do DNS over HTTPS, then there should be a standalone application, and the system should be reconfigured to use it, so all running applications on the system use it.

I mean, do we really want all of our desktop applications to have their own built in custom ways of mapping domain names to IP addresses?

[edit] E.g. on Linux, it could install an application with a DNS interface listening on localhost port 53, which would then convert each request into a DNS-over-HTTPS request, and resolv.conf would be updated to use that resolver.
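
A minimal sketch of that arrangement, using Cloudflare's cloudflared (mentioned further down this thread) as the local forwarder; the flags come from its proxy-dns mode, and the upstream URL is Cloudflare's, so swap in whatever resolver you prefer:

    # run a local DoH forwarder on loopback port 53 (as a service,
    # or backgrounded here for illustration)
    sudo cloudflared proxy-dns --address 127.0.0.1 --port 53 \
        --upstream https://1.1.1.1/dns-query &
    # then point the system resolver at it
    echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf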


Totally agree re: implementation details.

And, like you said, this is a bad idea. Is there something wrong with efforts like DNSCrypt + DNSSEC? That's supposed to provide authentication and encryption for DNS without sending everything over HTTP.

Did Mozilla just totally ignore the work that's already been done in this area?


DNSSEC doesn't do any of what DoH does. It doesn't provide query privacy. It doesn't even typically protect the last mile between end-user resolvers and servers; it's a server-to-server protocol. I'm deeply skeptical (to put it nicely) about DNSSEC, but you don't even have to share that perspective to see why DoH is useful --- it covers a set of problems DNSSEC simply doesn't address.

DNSCrypt is a niche protocol that basically does do what DoH does. Very few people use it. You can use it instead of DoH, if you like; just disable DoH in Firefox and set up DNSCrypt with your system resolver.


Just a guess but maybe they wanted to build this in a way that it would actually get used.


There's no reason that Firefox couldn't check for the existence of a local DOH resolver and, if it doesn't see one, pop up a one-time message offering to install one for you.

Every operating system has a system-wide way of resolving names to IP addresses, and every application uses it. This new architecture of building custom name resolution into individual applications might be easier for them to build, but it's crap.


That doesn't address my concerns, actually.

Mozilla is doing two things:

1. Bundling DNS with the browser (i.e. ignoring system DNS)

2. Using DNS over HTTPS

Mozilla could still do (1) and then use DNSCrypt + DNSSEC internally. Then, it would actually be used, but they'd be relying on existing technology that actually fits the use-case, rather than DNS-over-HTTPS.

For the record, I don't think you should ignore the system's DNS, either.


There are lots of Open Source projects that will do what you are asking. Here is the top search hit on using bind to do that - https://github.com/wrouesnel/dns-over-https-proxy

However, I disagree that it is a bad idea and that the implementation is bad. Regardless of how software _should_ behave, Firefox has to operate in the world where its users actually run software. DNS is a source of security vulnerabilities and headaches.

Demanding a higher-level abstraction is not an option for many, but using Firefox often is. This is especially important on mobile, where a lot of people lack the access or knowledge to set up a system-wide proxy after rooting their phones, but can very easily install Firefox mobile.

What about web browser usage on library or campus computers? Often they will have several browsers installed as well.

The point is that making security more available and easier to use where it matters most is a good idea.


I just spent some time searching, and I actually don't see much in the way of clients. Most search results seem to be talking about how whizz-bang DNS-over-HTTPS is, or talking about Firefox's implementation.

If you know of a DNS over HTTPS client for Windows, please link it!



I'm using Cloudflare's cloudflared [0] on all of my machines; it's working well and does what you are looking for. A nice bonus is being able to collect metrics from each of the agents in Prometheus.

[0] https://github.com/cloudflare/cloudflared


dnscrypt-proxy is probably the most popular DNS-over-HTTPS client. https://github.com/jedisct1/dnscrypt-proxy


I am in Indonesia where Reddit, Vimeo, The Pirate Bay and other sites are blocked. I just enabled TRR in Firefox 60 (They mention best support is in 62) and now I have full unblocked access to all those sites. Awesome.


Using an alternative DNS resolver like 8.8.8.8 (Google) or 1.1.1.1 (Cloudflare) could solve that already, and not only in Firefox.


That will not work. Some ISPs transparently proxy DNS (forwarding all UDP port 53 traffic to their own DNS servers). You need DNSCrypt or a VPN.


It depends how the block is implemented. If it's done by intercepting DNS requests, it doesn't matter what server you _try_ to reach. There's also the Virgin (UK) approach of allowing the correct DNS response, but tampering with HTTP requests (though switching to DNS-over-HTTPS wouldn't help in that case).


Or Quad9, but this could be intercepted.


apt-get install unbound


I was already using 1.1.1.1, did not unblock. Would be very curious if the ISPs were actually proxying my DNS requests. Any tips on how I might test that on Ubuntu?


They generally inspect DNS packets and block hostnames from resolving correctly. The destination doesn't matter.
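
One rough way to check from a shell (a sketch; 192.0.2.1 is a reserved TEST-NET-1 address, so no legitimate resolver lives there):

    # if a non-resolver address answers, something on the path is
    # intercepting port 53
    dig @192.0.2.1 example.com +short +time=3
    # and compare a third-party resolver's answer against the default one
    dig @1.1.1.1 reddit.com +short
    dig reddit.com +short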


Be aware that using it in 60 you may run into frequent crashes from https://bugzilla.mozilla.org/show_bug.cgi?id=1441131


Thanks for the heads-up, I'll keep that in mind if I start seeing issues. So far so good.


OK, that's news to me. Why does Indonesia block it?


Porn, hate speech, and piracy-related stuff are blocked by the government.


Well, shame on you. All the armchair experts at hacker news say that the implementation is really bad and you should feel really bad for using it. So please stop using it and make them happy.


Reddit, Vimeo? What is their justification for blocking those?


"Caveats

TRR doesn't read or care about /etc/hosts

There's no way to exclude or white list specific domains"

For me, the primary advantage of HOSTS/DNS is the ability to control answers to application queries for addresses and block ads.

This seems to remove all control a user might have through controlling such lookups. Yikes.

I think DOH is useful, but in a different way. For example, it is useful for retrieving bulk DNS data using RFC 2616 pipelining, alleviating dependence on piecemeal DNS lookups, thus increasing speed and privacy. Data can be stored locally and refreshed periodically, if necessary (I have been doing this without problems for 15 years). It's also useful for retrieving data from a variety of caches, allowing answers to be compared.


> TRR doesn't read or care about /etc/hosts. There's no way to exclude or whitelist specific domains.

Sigh. This is aggressively breaking normal DNS behavior (and will be an absurd hassle for a very large number of organizations, both in terms of perfectly normal split-horizon setups and orgs with regulatory obligations to intercept HTTPS traffic).

Applications should not contain their own encapsulated resolvers, let alone resolvers that default to sending all of my DNS traffic to for-profit companies that have previously experienced massive data leaks (and fun CF fact, they invited the then-CTO of Cambridge Analytica to talk at their Internet Summit event in SF last year).


They're trying to improve the security of a fundamental protocol - if we waited for committees every time we wanted something new, we wouldn't have HTTP2, HTML5 or a dozen other technologies.

I agree they shouldn't take away the "god-mode" /etc/hosts, which is only ever populated very intentionally by sysadmins and power users. If anything, that should be a flag just like the various modes of using TRR.

And finally - it's an open protocol in development, and anyone can set it up who wants to. If you don't want to use Google or Cloudflare, you don't have to. And FWIW millions of people are already using 8.8.8.8 and 1.1.1.1 and Cisco's OpenDNS as their primary resolver. That GOOG and CF are at the forefront of another increment of Internet standards should not be surprising.


What good is it? TLS SNI already exposes the domain name. Even if that is addressed in the far future, you still can't hide the IP addresses you're connecting to. The only real engineering problem with DNS is authentication, which is addressed (however imperfectly) by DNSSEC and operating systems shipping local resolvers.

Moreover, whatever promises CloudFlare and others make, they're still centralization points and therefore ripe targets for infiltration and exfiltration.

The moment browsers actually solve the privacy problem will be the moment 9/10ths of the internet goes poof and disappears along with their monetization strategies. As long as Google is viable then these are just tricks arguably doing more harm (increasing reliance on centralized vendors, increasing complexity of the software stack) than good.


> will be an absurd hassle for a very large number of organizations

They can disable it; any organization that modifies /etc/hosts can also change Firefox's preferences file.
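
For instance, a sketch of forcing TRR off fleet-wide with a one-line user.js drop (the profile glob is illustrative; adjust for your layout):

    # mode 5 = "explicitly off", per the article
    for profile in /home/*/.mozilla/firefox/*.default*; do
      echo 'user_pref("network.trr.mode", 5);' >> "$profile/user.js"
    done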


This isn't an acceptable compromise.

In general it's not acceptable to break functionality and then demand people invent workarounds. But it's insane to demand that every organization in the world write new portable system integration software that has to take into account 100 varying things just to disable something nobody has asked for. And it's even more insane when the software in question is the underpinning of all internet access that has existed in the same form for 40 years.


I wanted this feature.


No, you wanted encryption for DNS transport, and I completely agree with that desire. This is a terrible half-measure that bypasses, by default, many existing security precautions people have intentionally taken, and moreover exposes private data to a company with a history of handling such data poorly.


I specifically want DNS over HTTPS - except for specialized use cases over known networks where every device in between is tested as behaving properly, protocols that aren't TLS (over TCP) are a hassle to me both as an end user and as a developer, because someone is going to break them. And once you have TLS, there's little benefit in using something other than HTTP inside, and a lot of benefit in using something where everyone already has standard command-line tools and libraries and debugging tools for it.

And I trust Cloudflare, and more specifically Mozilla legal's ability to negotiate with Cloudflare, more than I trust approximately any ISP.


You trust CloudFlare today. Tomorrow CloudFlare could become a Comcast subsidiary.

The willingness of people to put all their stock in a corporation with a promise of benign benevolence boggles my mind. I'm not particularly cynical, but I recoil at the notion that you'll now need an entire HTTP+TLS stack just to do DNS "properly", and that 99% of people will be using a single DNS provider.

EDIT: It's not even benign benevolence. I see that DOH includes server push. So now it's obvious how CloudFlare will monetize this--they're positioned to be able to push DNS updates directly to clients, reducing latencies when domain names change, particularly automated DNS topologies for load balancing, failover, etc. Reducing latencies, that is, if you're a paid CloudFlare customer.


I specifically do not want each application having its own resolver. That's something the operating system has to provide, configured by the administrator, for everything. Some systems go even further and don't allow outgoing traffic on port 53 for processes other than the system resolver. Masquerading it as port 443 traffic opens a new problem.

You don't have to trust anyone, you can run a recursive resolver too. Even some home routers do that already.


Sure, that's fine. I'm just responding to "... something nobody has asked for," and below it, "No, [you didn't want the thing you say you want], you wanted [other thing]."

Not everyone wants the thing I want. But that's different from nobody wanting it.

(However, a system-wide DNS resolver using DNS-over-HTTPS is definitely a thing I want! I've been considering writing an NSS module in Rust for it, as a way to play with writing a loadable module with tokio, to see if that even works.)


Sure, a system-wide DNS resolver using DNS-over-HTTPS (and autodiscovery via DHCP/RA!) would be a good thing; each app going rogue and piercing through your policies/DNS/firewall is not.


In most Linux setups the resolver lives in libc, which means each application does have its own resolver (though commonly dnsmasq or systemd-resolved is used as a local resolver/proxy for DNS). Libc also implements all the stuff around /etc/resolv.conf and /etc/hosts.
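
You can watch that path in action; the lookup order comes from /etc/nsswitch.conf (typically `hosts: files dns`, where "files" means /etc/hosts and "dns" means the resolvers in /etc/resolv.conf):

    # resolve the way libc-linked applications do
    getent hosts example.com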


While the resolver is in glibc, it is in the form of NSS modules. This is one of the primary reasons why you cannot compile statically with glibc.


You might have noticed that Firefox runs in a lot of messed up environments, where, for example, bad installers downloaded by the user have done many bad things to the OS, including installing bogus hosts files which block known anti-virus and anti-malware websites. Firefox had a huge crackdown on malicious toolbars and extensions, and that was a good thing for most people. Do you have a clever suggestion for how Firefox might take your concept of trusting the OS and some administrator, which doesn't exist for most home users, and make it secure against malicious installers?


If malware can modify /etc/hosts, then it shouldn't have any problems modifying the Firefox executable or, even easier, its configuration.


This is true, but most malware is neither high-effort nor high-quality. It's very easy to edit /etc/hosts. It's a lot harder to inject yourself into the binary in a way that works. It's harder still to inject yourself into the binary in a way that works if the binary is actively trying to not be injected into, or using some security-by-obscurity scheme for signing the config, or whatever. Yes, it's all running at the same privilege level (or lower) and the malware technically has the access, but it's not cost-effective to have battle-hardened automatic reverse-engineering tech built into your malware.

Chrome in particular has a DLL blacklist that seems to work very well in practice: in theory, a DLL blacklist can't possibly work, but in practice, DLLs don't try hard enough to hide their existence from the blacklisting code. https://www.chromium.org/Home/third-party-developers#TOC-Goo...


Yes, but you don't need any of that in this case; Firefox's config files are easily editable to toggle an option like using DoH.


The fun with malware will only start once it begins finding its command-and-control centers via DNS-over-HTTPS to a shadow DNS served by the botnet, and you won't be able to filter out its traffic.


Also, as others have pointed out, you can run your own TRR locally as a daemon and have it consult /etc/hosts.


Yes, and configure each app on each computer separately, instead of via DHCP or RA; and if you miss some app, it will masquerade its resolving among the rest of the HTTPS traffic. Great. That's progress /s.


Why did you want this complicated hack when you could just use a SOCKS5 proxy to tunnel both your DNS and HTTP requests over a plain ssh connection on port 443? Or a TLS VPN on port 443 to properly tunnel all traffic (though admittedly that takes marginally more effort than ssh)?
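
For the ssh variant, a sketch (assuming the remote sshd listens on 443):

    # dynamic SOCKS5 proxy on localhost:1080, tunneled over ssh
    ssh -N -D 1080 -p 443 user@example.com
    # then point Firefox at 127.0.0.1:1080 and enable
    # "Proxy DNS when using SOCKS v5" so lookups go through the tunnel too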


One, SSH on port 443 is not the same thing as HTTPS on port 443 (I've seen systems that block the former and not the latter).

Two, running a proxy to move all my browsing elsewhere seems like the hack. Changing the internet's norms so normal DNS is HTTPS-based, and everything just works everywhere for everyone, seems like a stable long-term solution.


Everything already works everywhere for everyone with regular DNS. The only thing that is broken is firewalls. Specifically, nobody wants to force network administrators to fix them, so instead they're trying to work around them. Not only is this unnecessary, it's stupid, and just a cheap workaround for something which should have been solved by the industry 18 years ago, but is now left to be wrangled by 3 companies with leverage over users.

So I had a DNS server that used UDP. Now I need a DNS server and an HTTPS server that uses TCP. Now I'm encapsulating DNS in HTTPS, so the packets are bigger and there's extra latency, increasing traffic and slowing things down. (These kinds of "fixes" must be common, because I seem to notice technology constantly getting slower and crappier rather than the opposite.)
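
For concreteness, here is what one such encapsulated lookup looks like; this is the same JSON GET form as the cloudflare-dns.com queries demonstrated later in this thread:

    # a single A-record lookup, wrapped in a full HTTPS request/response
    curl -s 'https://cloudflare-dns.com/dns-query?ct=application/dns-json&name=example.com&type=A'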

Then there are security concerns, because you're running twice the software with greater complexity, yet it's subject to the same attack methods as everything else, meaning one exploit could impact DNS just as much as every other HTTPS service. Among these are attacks on the PKI system, which is fragile to say the least: an expired or revoked cert kills your DNS, and any CA can generate a cert (which apparently isn't hard to do using a BGP exploit) which can be used to circumvent the encryption. Since you need DNS to do anything on the net, it will be trivial to simply kill all DNS-over-HTTPS queries going over a network and force people to use a downgraded, insecure version of DNS. And you've got to handle encryption designed for streams rather than datagrams, which is unnecessary for data that only really needs authenticity and integrity assurances (which are, again, already provided by DNSSEC).

Then there's reliability. You're tunneling DNS over a connection to one server. We already know this is slower, and we already know DNS is one of the factors that slows down page connections. So you had better hope that the one HTTPS endpoint you have a persistent pipelined connection to doesn't go down, or wait and wait for what you hope is an independent 3rd party's service that is also in your list of DNS-over-HTTPS resolvers to pick up the slack.

As a DNS admin, I had software that communicated with DNS servers. But now I have to rewrite everything to use DNS-over-HTTPS, because if a query fails in DNS-over-HTTPS, you have to troubleshoot the actual connection and not what you think is in DNS, or what you think DNS should be sending back to the client. So now I have to reinvent every DNS tool. And of course, for the enterprise DNS solutions that can't just magically sprout a new protocol extension, I'll need some sort of middleware software to map the DNS-over-HTTPS queries to the real DNS server.

All of this because someone wanted "privacy" of their DNS queries, and was too lazy to allow UDP packets greater than 512 bytes over their network.


Without dignifying the argument that DNS doesn't benefit from confidentiality and needs only integrity: DNSSEC doesn't even reliably provide that. The vast, overwhelming majority of zones aren't signed, and because DNSSEC is a failing protocol, probably never will be. On the rare occasions when a popular zone is signed, the user experience is awful: unlike with TLS, where an expired certificate generates an obnoxious popup, with DNSSEC the zone falls entirely off the Internet --- which happened to HBO Now the week they launched it, across all of Comcast.

DoH provides confidentiality and query integrity now, without requiring a boil-the-ocean step where every DNS administrator on the Internet signs their records.


> The only thing that is broken is firewalls. Specifically, nobody wants to force network administrators to fix them, so instead they're trying to work around them. Not only is this unnecessary, it's stupid, and just a cheap workaround for something which should have been solved by the industry 18 years ago, but is now left to be wrangled by 3 companies with leverage over users.

Solving this is known as "the robustness principle," aka "internet engineering." Expecting every device in between every two points on the internet to a) work reliably and b) not make questionable decisions about packets that it can parse has never worked. Successful internet protocols, e.g. TCP and HTTPS, are those which put the least reliance on the network to behave reasonably. Unsuccessful ones, e.g. IPv6, are the ones that put the most reliance on the network to behave reasonably.

It's true that the successful ones are nowhere as pretty as the unsuccessful ones. But for most people, success is more important.


Firefox's DOH client ignores /etc/hosts, but it shouldn't be too hard to host your own DOH server [1][2] that you could then configure how you see fit. I can see this pattern becoming widespread someday, and with DOH, people can re-use their experience in setting up webservers.

[1] https://github.com/st3fan/tinydoh [2] https://github.com/m13253/dns-over-https


You think regular end users having to set up and maintain server software in order to force a name for an IP is going to become a widespread pattern? That's horrifying. I don't want to live in that world.


Regular users today don't use that feature, so you're choosing a rather odd hill to die on.

To put this another way: it's a significant benefit for my random non-techie friends to be able to use this new feature plus HTTPSEverywhere. And even as a techie, I don't use a hosts file to block anything. So I'm not bothered by how Firefox chose to implement this, like most people.


There's plenty of ad blocking, spyware blocking, ransomware blocking, etc. software that absolutely uses the hosts file to blackhole requests. Much of it is free and intended for home computer use.


Regular users use a browser extension like uBlock or Adblock to block ads. Not improving DNS security for the sake of a few users' obscure ad-blocking mechanism would be pretty silly.


Regular users also use Norton, McAfee, and other vendor Internet security suites. Some of those will also use the hosts file.


OK, but software like that can easily include an internal DOH proxy (or just turn off DOH in Firefox).


And what market share do they have? None of my non-techie relatives or friends who've asked me to look at their machines run that kind of software.


Only the use case with the largest market share matters?

That kind of thinking really irks me.


No, I don't think like that.

It does bother me when Firefox introduces a feature which covers up a huge hole in TLS, and they get a large number of complaints on HN.


Of course it gets a large number of complaints when it creates more problems than it solves, and that is papered over with "but mainstream users do not need that."

Mainstream users do not need most software ever made.


Did you see someone in this discussion suggesting "but mainstream users do not need that"? I just checked again and I don't see anyone making that suggestion.


It is paraphrased. The argument is that it improves things for the mythical naive mainstream user, and where it breaks things, well, those users are a minority anyway.


I didn't see anyone making that argument, either. Are you trying to summarize what I said? If so, that's not what I said. I'd love to have a discussion about these issues, but it's not going to happen if everyone's talking past each other, plus random downvotes.


It's easier than ever to host a server transparently for end-user software. You don't hear users complaining about node.js running in most Electron apps. An HTTP server for DNS can easily be compiled to a binary and run like any other system daemon.


I think there’s great value in DOH caching servers running on home routers; all the benefits of DOH but “regular DNS” between clients and your home router.


Just run a local dnscrypt-proxy (it supports DoH) on your machine/router and everything will be fine.
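
A sketch of the relevant settings for the router case, using key names from dnscrypt-proxy 2's example TOML config (the LAN address is illustrative):

    # listen on the LAN side, resolve upstream over DoH only
    cat >> /etc/dnscrypt-proxy/dnscrypt-proxy.toml <<'EOF'
    listen_addresses = ['192.168.1.1:53']
    doh_servers = true
    server_names = ['cloudflare']
    EOF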


That [1] implements a very old draft so I doubt it is compatible with Firefox.


Also problematic:

"0 - Off (default). use standard native resolving"

...

"5 - Explicitly off. Also off, but selected off by choice and not default."

It seems that the plan for "0 - default" is to switch users to other modes without their knowledge, and that to keep the behavior off, the user must explicitly change the option to "5."
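
For reference, the article describes the full set of network.trr.mode values roughly as follows:

    0 - Off (default): use standard native resolving
    1 - Race: run native and TRR in parallel, first answer wins
    2 - TRR first: use TRR, fall back to native resolving on failure
    3 - TRR only: never use the native resolver
    4 - Shadow: resolve natively, but also run TRR for measurements
    5 - Explicitly off: off by choice rather than by default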


No, it is not problematic; it's good engineering. Imagine that in the future DNS over HTTPS is supported by the OS and there is an OS-wide setting for it. Then it would make sense to change the default setting in Firefox to use the OS-wide setting.


"I better speculate on the reason here because surely Daniel is part of a conspiracy meant destroy the browsing experience of millions"

or...

It could be prepared for when the user gets asked what they want, so that Firefox can remember an explicit "no" as compared to no selection ever having been made.

/ Daniel (author of the blog post)


Daniel is not responsible for the decisions made by other Mozilla managers, who have already used their powers to deliver an unsolicited ad to millions, via means presented as having other purposes.

Daniel’s own decisions aren’t in question here. If he works for Mozilla, he is not more powerful than the whole company.

Having “off” and “off when selected by user” but no other variants still points to an intention that the default state will not remain off. Which is not in itself problematic. What is problematic is naming a state that will obviously be changed “off.”


It’s worth noting that Daniel is the GP here. You’re talking about him in the third person.


It’s worth noting that I know that, but he started this style of referring to him(self) in the third person in this thread! So it wasn’t accidental or due to unawareness in any of its occurrences.


No, he didn't. What he's done there is use a rhetorical style in which he uses your voice, that's what those quote marks are for. He's loosely paraphrasing your comment in order to ridicule it.

That's arguably rude, but then your follow-up is exactly the type of conspiratorial bullshit he's implicitly accusing you of, so it was just foresight after all I guess.


Your selection of words says much more about you than about anything else.


I'm strongly against this. Bypassing the system's DNS is a no-go.

If this passes, it's going to be a nightmare for system administrators. Basically, each and every split horizon will be broken.

But promoting DNS-over-HTTPS in the browser and providing an easy-to-install, separate tool for Windows/OSX that resolves through DNS-over-HTTPS is something I could get behind.

Like how on my network I use dnscrypt-proxy, so everything is already using DNS-over-HTTPS.


This further drives centralization of the Internet and the idea that the Internet == Web == Browser.

It's disappointing to see this coming from someone so steeped in Internet contributions and history and believing it to be a good idea.

Bypassing system DNS is not just an enterprise no-go, it probably will end up reducing privacy.

The idea that ISPs sniff DNS is largely a red herring, and furthermore it's already easily addressed by DNSCrypt or DNS over (D)TLS.


My ISP in Australia already censors various domain names due to copyright lawsuits. ISPs tampering with DNS is not a red herring, but a real issue.

Furthermore, ISPs in some countries like India force all port 53 traffic to their own censored servers. DNS over TLS won’t solve that.


People also don't realize that their ISP literally sells their DNS traffic. There is a strong market for it, it's not theoretical.


Not really. There's not anything close to what I'd call a strong market for it.


> It also makes it easy to use a name server of your choice for a particular application instead of the one configured globally (often by someone else) for your entire system.

I can see app developers wanting this, but as a user I really hope this doesn't happen. It's bad enough that many applications today manage their own certificate stores; making the next part of internet infrastructure app-specific seems to me a path to more fragmentation and less understanding of, and oversight over, my own system.


If anything we want _more_ trust stores rather than fewer, although I'd certainly take "uses the latest Mozilla NSS trust store" over "I pasted in this list I found on the Internet fifteen years ago and have never updated it" in most applications.

One reason to desire separate trust stores is that your model of trust is almost certainly not "I wish my application trusted exactly the same CAs as [say] the Firefox web browser, and I will incorporate all the same special rules and exceptions as that browser."

Example: Back in 2016 the US government expressed interest in operating its own public CA. This probably won't happen under Trump, of course, but in Firefox this would have been no problem if they met its other criteria; it's easily able to accept a new CA and apply constraints to it, so that in Firefox a US Federal Web PKI cert for whitehouse.gov works, but one for gov.uk or google.com does not. But does your application have that logic? Or would it just blindly trust the new CA because Mozilla added it to their trust store?

The other reason is that your application (especially if you aren't on the ball enough to run your own trust store) is probably not able to keep up with the security treadmill, and falling off may be painful.

Example: If your system depended upon SHA-1 certificates to function, but you used the Web PKI CAs from somewhere like Mozilla's NSS store, magically in 2016 no more new certificates were available. No problem for the browser vendors, they had voted for exactly this outcome. Too bad for your application.


The intent is good: to give users of applications more power in their relation to their employer and state, similar to what the GNU project and the GNU Hurd kernel promote. Of course, there is a danger that developers will misuse this and hardwire their preferred DNS into their applications. If that becomes a problem, users will have to either reject the application or apply some MITM remedy.


I'm not very happy we're now going to send all DNS traffic to 6 centralized DNS-over-HTTPS servers[1]. We can't trust our ISP, but we can trust Google and Cloudflare?

I also noticed that when I configure my Android's proxy settings to point at a Privoxy container that routes through a VPN, I still get DNS-hijacked to my provider's "thepiratebay.org has been blocked for you" page -- this only happens in Chrome mobile, not Firefox mobile. I was used to DNS resolving through the proxy server.

[1] https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly-av...


Mozilla actually has a contract with CloudFlare to protect user data. It's stricter than CF's normal privacy policy which applies to other users of the DNS-over-HTTPS service. Only 3 types of aggregate information will be kept for more than 24 hours. https://developers.cloudflare.com/1.1.1.1/commitment-to-priv...


That depends on your use case. If you live in a country where you could be prosecuted for making a DNS request to a politically sensitive website, yeah, you're probably better off trusting Google with your DNS history.


The good news is that blocking traffic to those six servers is trivial. You can stick the rule right after the one blocking traffic to port 853.
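
A sketch with iptables (the wiki page linked above has the full server list; the addresses here are just the obvious candidates):

    # block DNS-over-TLS outright
    iptables -A OUTPUT -p tcp --dport 853 -j REJECT
    # then block HTTPS to the known public DoH resolvers
    for ip in 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4; do
        iptables -A OUTPUT -d "$ip" -p tcp --dport 443 -j REJECT
    done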


Is there a particular reason a DNS resolver should be implemented in a web browser? Wouldn't it be better if it was a system-wide configuration?


It's been 30 years and DNS is still a major security and confidentiality flaw in all widely used OSes. I welcome my browser doing something about it. If in the future OSes and ISPs provide better alternatives, this feature can always be turned off.


I do want to use this on all my devices; I'm not criticising the DoH concept per se, only that the browser seems to me the wrong place for the DNS resolver. Thus I wonder why it's the browser, not the OS, that gets the new features.


One reason could be that a large number of web users have no real power to set up their operating system to use their preferred DNS server (e.g. non-technical employees in many big corps). Present-day OSes are designed to favor the administrator and to restrict the user.


> (e.g. non-technical employees in many big corps)

IMO, employees should be using whatever their organization has configured for them -- not whatever they wish.

A "regular user" changing the DNS servers on his work PC (joined to an Active Directory domain) to 8.8.8.8, for example, WILL run into problems at some point. They say they have workarounds for this but I think it's safe to say that bugs will likely be found. In the meantime, things will be broken for those users.

Moving this setting into the web browser (where it likely CAN be changed by a user) is not the proper solution.


It is off by default.


For now. From what I understand, in the future it won’t be.


> I welcome my browser doing something about it.

Can we please go easy on the newspeak? Centralizing resolving to a handful of actors will not improve privacy for most end users.


Resolving is already centralized: your ISP has 100% control over what you resolve, and you can't do anything about it. This does the opposite, by securely implementing resolution in the user agent, in a tamper-proof way, under the control of the user.


Every user is free to run their own resolver or use any of their ISPs or third parties, which is pretty close to the definition of something decentralized.


Except, as others have pointed out, there are documented cases of ISPs hijacking DNS traffic, even for people who have configured their client to use resolvers other than their ISP, which is possible because of DNS's lack of authentication or encryption.

Besides, I don't see how adding an option for DoH to Firefox is centralizing anything, you're free to set the DoH URL to whatever you like, and you're free to run your own DoH resolver, just like you're free to run your own vanilla DNS resolver.
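
For example, a sketch of pointing Firefox at a resolver of your own choosing; the prefs are the ones discussed in the article, while the endpoint URL and profile path are placeholders:

    # TRR first with native fallback (mode 2), against your own endpoint
    cd ~/.mozilla/firefox/YOUR_PROFILE
    echo 'user_pref("network.trr.mode", 2);' >> user.js
    echo 'user_pref("network.trr.uri", "https://doh.example.net/dns-query");' >> user.js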


> Besides, I don't see how adding an option for DoH to Firefox is centralizing anything ...

AIUI, this is currently disabled by default but will be enabled by default in the future.

When that switch is flipped, that's when the "centralizing" begins.

If this were to be disabled by default and forever remain that way, I would be perfectly fine with it.


> It also makes it easy to use a name server of your choice for a particular application instead of the one configured globally (often by someone else) for your entire system.

While I'm sure it's true that Firefox's implementation does this (presumably it provides a setting in the UI whereas it didn't before), there was nothing preventing this with regular DNS. It isn't something that DNS-over-HTTPS makes possible.

Specifying your own DNS server instead of using the system default is a feature that the nslookup command has offered for like 25 or 30 years, so I'm sure a web browser could offer it too.
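
For instance, both of these have worked for decades with plain DNS:

    # query a specific server instead of the system default
    nslookup example.com 8.8.8.8
    dig @8.8.8.8 example.com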


Nice job Daniel (again...)! While TRR sounds very appealing only for certain threat vectors, the handling of captive portals is still a 'nasty' thing. While certainly not in the realm of DNS over HTTPS, the logic/UX on the browser side, as well as the interaction with the underlying OS, definitely needs improvement.


Thanks! Captive portals are indeed truly complicated beasts to handle and they offer challenging obstacles for browsers (and others). We keep working on trying to improve how Firefox detects and works with them.


Using Firefox 60.0.1 on Fedora 28, I just went into "about:config" and set "network.trr.mode" to "5" [0] since 1) I intentionally have local DNS servers that I wish to use and 2) I do not wish to send any more of my data to Cloudflare (as much as possible).

I thought this might be a good way to ensure that this behavior does not get turned on without my knowledge (whenever the time comes that Mozilla enables it by default).

Unfortunately, browsing broke immediately. I could not access any sites that I tried to access ("Hmm. We’re having trouble finding that site.") until I changed this setting back to the default ("0").

[0]: "Explicitly off. Also off, but selected off by choice and not default."


They have options to tweak so many other aspects of TRR/DOH; why can’t they add an option to support /etc/hosts?


It's probably on a todo list somewhere.

The difficulty there, I believe, is that /etc/hosts is commonly parsed by your libc, so Firefox would have to reimplement /etc/hosts parsing on top of its libc if libc does not expose /etc/hosts entries in some way (glibc and musl both don't, to my knowledge).


So how does this work with my internal Active Directory running DNS? I assume Firefox is clever enough to at least note the DHCP domain. I still don't like the idea that this could mess with my private name resolution.

Also, will the end goal be to have a new DHCP option so I can point my clients at my internal DOH server?

One of the nice things about this is that I can make lookups authenticated. I hope they extend the update protocol into the HTTP context so we have a simple API for updating zones rather than nsupdate.


This states it bypasses my local resolver.

Does it still account for /etc/hosts blocks? I use a few thousand entries to blackhole malware and ad networks, and I’d rather not lose this.


In the caveats section it says that it is ignored.


Encrypted DNS is great, but please also do something about SNI. I am sure other users don't want their ISP to peek at what sites they are visiting either.


... that's why the article you're commenting on mentions SNI and that we hope to address that too (but separately) going forward!


Also, I thought that IPv6 doesn't need SNI, because with it you can allocate a separate IPv6 address for each service.


That doesn't help privacy, as you could just connect to those addresses and see what certificate they present.


Here is a brief demonstration of how "DOH DNS servers" can be useful. Never mind the idea of applications having their own DNS caches.

1. fetch page of html, e.g., hn front page

   curl https://news.ycombinator.com > 1.htm
2. extract urls from 1.htm

   yyt < 1.htm > 1.txt
(example scanner "yyt" provided below as t.l)

3. convert urls to hostnames

   g=1.txt k 1
(example script provided below as "1.k")

4. retrieve json dns data from doh dns server, efficiently, over a single connection

   see https://news.ycombinator.com/item?id=17228745
5. convert json dns data to csv

   see https://news.ycombinator.com/item?id=17228473
6. import csv into database e.g. sqlite3, kdb+, export to /etc/hosts, export to zonefile for localhost auth dns server, etc.

now, when the user is reading the hn front page, no dns lookups are needed. the user already has the dns data. there is no network usage for dns requests, increasing hn front page browsing speed for the user. there are no piecemeal dns requests sent, increasing user privacy.

7. track ip address changes over time, compare answers from different caches, etc. retrieve type 2 (NS) instead of type 1 (A) records, then compare to NS records provided in public zonefiles from icann, public internet scans, etc.

cat t.l

    #define p printf("%s\n",yytext);
   %%
   \200|\201|\204|\223|\224|\230|\231|\234|\235
   http:\/\/[^ \n\r<>"#'|]* p;
   https:\/\/[^ \n\r<>"#'|]* p;
   ftp:\/\/[^ \n\r<>"#'|]* p;
   .|\n
   %%
   int main(){ yylex();}
   int yywrap(){}

    /* compile with something like: 
    flex -Crfa -8 -i t.l 
    cc  -pipe lex.yy.c -static -o yyt 
    */

cat 1.k

   /k3 (novice level)
   /usage: g=f k 1 where f is list of urls
   h0:_getenv "g";h1:0:h0; h1:{:[(#h1[x] _ss "://")>0;h1[x];_exit 1]}'!#h1;h1:{*((h1[x] _ss "://[^/]")+3) _ h1[x]}'!#h1;h2:{h1[x] _ss "[^a-z^A-Z^0-9^.^-]"};h3:{*h2[x]};h1:{h3[x]#h1[x]}'!#h1;h1:?:/h1;h0 0:h1;
   \\


correction to 1.k: delete empty line 1 from list of hostnames

   / novice alert. this is probably 3x larger than necessary
   h0:_getenv "g";h1:0:h0;h1:{:[(#h1[x] _ss "://")>0;h1[x];_exit 1]}'!#h1;h1:{*((h1[x] _ss "://[^/]")+3) _ h1[x]}'!#h1;h2:{h1[x] _ss "[^a-z^A-Z^0-9^.^-]"};h3:{*h2[x]};h1:{h3[x]#h1[x]}'!#h1;h1:?:/h1;if[0=#h1[0];h1:h1 _di 0];h0 0:h1;
\\


   # pick whichever fetcher your platform has; each saves the page as 1.htm
   # (note: wget needs capital -O for the output file; lowercase -o is its log)
   ftp -4o 1.htm $1;
   fetch -4o 1.htm $1;
   wget -4O 1.htm $1;
   curl -4o 1.htm $1;
   
   exec k 1d \
   |exec tcs cloudflare-dns.com cloudflare-dns.com >1.json;
   yyf < 1.json >1.csv;
   #t=list exec k j;
   #t=hosts exec k j;
   t=db exec k j >1.csv;
   #t=zone exec k j;

   \\

   k).Q.fs[{`t insert +:`ts`ip`hn!("ZSS";",")0:x}]`:1.csv
   k)t

   \\

   / 1d.k
   / urls, hosts
   a:0:"1.htm";
   a:,/$a;
   a:_ssr[a;"\42";""];
   a:_ssr[a;"http://";"https://"];
   a:_ssr[a;"src=//";"https://"];
   b:a _ss "https://";
   a:b _ a;
   b:{a[x] _ss "[^a-z^A-Z^0-9^.^-]"}'!#a / fail: https://example.com-
   c:{1#(3_ b[x])}'!#b;
   b:{c[x]#a[x]}'!#c;
   b:?:/b;
   b:{8_ b[x] }'!#b;
   / http
   a:"GET /dns-query?ct=application/dns-json&name=";
   c:"&type=1 HTTP/1.1\r\nHost: cloudflare-dns.com\r\nConnection: ";
   d:"keep-alive";
   e:"close";
   f:"\r\n\r\n"; 
   g:(#b)-1;
   h:{a,b[x],c,d,f}'!g;
   i:a,b[g],c,e,f;

   `0:,/$h,i;

   \\

   #include <unistd.h>

   int main(int argc, char **argv){
   char *b[17];
   b[0]="/usr/bin/openssl";
   b[1]="s_client";
   b[2]="-tls1_2";
   b[3]="-no_ssl2";
   b[4]="-no_ssl3";
   b[5]="-ign_eof";
   b[6]="-no_ticket";
   b[7]="-tlsextdebug";
   b[8]="-servername";
   b[9]=argv[2];
   b[10]="-verify";
   b[11]="9";
   /* -host, -port removed from manual */
   /* but still found in s_client.c */
   b[12]="-host"; 
   b[13]=argv[1];
   b[14]="-port";
   b[15]="443";
   b[16]=(void *)0;
   execve("/usr/bin/openssl",b,(void *)0);

   }



   /j.k

   j:("SSSSSS";",")0:"1.csv";
   k:_getenv "t";
   if[k _sm "list";`0:{,/$j[4;x]}'!#j[5]];
   if[k _sm "hosts";`0:{,/$j[5;x]," ",j[4;x]}'!#j[5]];
   if[k _sm "db";`0:{,/$j[2;x],".",j[1;x],".",j[0;x],"T",j[3;x],",",j[5;x],",",j[4;x]}'!#j[2]];
   if[k _sm "zone";`0:{,/$".",j[4;x],"\n&",j[4;x],".:127.0.0.1:5\n=",j[4;x],".:",j[5;x],":5"}'!#j[2]];
   \\

   \\

   /* f.l */

    #define echo ECHO
    #define jmp BEGIN
    #define p printf
    #define nl p("\n")
    #define s p(",")
   %s xa xb xc xd xx xy xz
   xa "\"Question\":[{\"name\": \""
   ya ", \"type\": 1}],"
   x1 "\"type\": 1" 
   x0 "\"type\": "[^1]","
   yb "\"data\": \""
   xw "Date: "
   xx \"\},\{\"
   xy \"\}\]\}
   xz "Sun, "|"Mon, "|"Tues, "|"Wed, "|"Thu, "|"Fri, "|"Sat, "
   %%
   {xw} jmp xz;
   <xz>"GMT" jmp xy;
   <xz>{xz}
   <xz>" Jan " p(",01,");
   <xz>" Feb " p(",02,");
   <xz>" Mar " p(",03,");
   <xz>" Apr " p(",04,");
   <xz>" May " p(",05,");
   <xz>" Jun " p(",06,");
   <xz>" Jul " p(",07,");
   <xz>" Aug " p(",08,");
   <xz>" Sep " p(",09,");
   <xz>" Oct " p(",10,");
   <xz>" Nov " p(",11,");
   <xz>" Dec " p(",12,");
   <xz>\40 s;
   <xz>. echo;
   <xy>{xa} jmp xa; 
   <xa>{ya} s;jmp xb;
   <xa>\.\"
   <xa>. echo;
   <xb>{x0} jmp xx;
   <xx>{xy} nl;jmp 0;
   <xx>{xx} jmp xb;
   <xb>{x1} jmp xc; 
   <xb>. 
   <xc>{yb} jmp xd;
   <xc>.  
   <xd>\"\} nl;jmp 0;
   <xd>. echo;
   \n
   .
   %%
   int main(){ yylex();}
   int yywrap(){}


fix: accommodate nxdomain, servfail, etc.

    /* f.l */
    #define echo ECHO
    #define jmp BEGIN
    #define p printf
    #define nl p("\n")
    #define s p(",")
   %s xz xy xa xb xx xc xd  
   xw "Date: "
   xz "Sun, "|"Mon, "|"Tues, "|"Wed, "|"Thu, "|"Fri, "|"Sat, "
   xa "\"Question\":[{\"name\": \""
   ya ", \"type\": 1}],"
   za ", \"type\": 1}]}"
   x0 "\"type\": "[^1]","
   xy \"\}\]\}
   xx \"\},\{\"
   x1 "\"type\": 1" 
   yb "\"data\": \""
   %%
   {xw} jmp xz;
   <xz>"GMT" jmp xy;
   <xz>{xz}
   <xz>" Jan " p(",01,");
   <xz>" Feb " p(",02,");
   <xz>" Mar " p(",03,");
   <xz>" Apr " p(",04,");
   <xz>" May " p(",05,");
   <xz>" Jun " p(",06,");
   <xz>" Jul " p(",07,");
   <xz>" Aug " p(",08,");
   <xz>" Sep " p(",09,");
   <xz>" Oct " p(",10,");
   <xz>" Nov " p(",11,");
   <xz>" Dec " p(",12,");
   <xz>\40 s;
   <xz>. echo;
   <xy>{xa} jmp xa; 
   <xa>{ya} s;jmp xb;
   <xa>{za} s;nl;jmp 0;
   <xa>\.\"
   <xa>. echo;
   <xb>{x0} jmp xx;
   <xx>{xy} nl;jmp 0;
   <xx>{xx} jmp xb;
   <xb>{x1} jmp xc; 
   <xb>. 
   <xc>{yb} jmp xd;
   <xc>.  
   <xd>\"\} nl;jmp 0;
   <xd>. echo;
   \n
   .
   %%
   int main(){ yylex();}
   int yywrap(){}


What kind of headers get transmitted as part of the DNS query? With DNS, the nice thing is that it is not a chatty protocol: no authentication, no cookies, no user agent. HTTPS is exactly the opposite. It would be nice to know that it is not a new backdoor into tracking people.

[edit]: plus, isn't stateless TLS session resumption effectively a cookie?


TLS tickets are indeed a kind of cookie.

Even without tickets, having TCP sessions means that server operators can link multiple queries to a single device, even if multiple devices share the same external IP.

This gives server operators more data than plain DNS.

The DNSCrypt protocol can use a unique key for every query in order to prevent this. Since it doesn't use sessions, and all queries are independent, there is no latency overhead.

For DoH, dnscrypt-proxy has an option to disable TLS session resumption. But it introduces some overhead every time a reconnection is necessary.
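
For reference, a sketch of that knob as it appears in dnscrypt-proxy 2's TOML config (key name per its example configuration file):

    # unlinkable sessions, at the cost of a full handshake per reconnect
    echo 'tls_disable_session_tickets = true' >> dnscrypt-proxy.toml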


Am I right in seeing this as another huge opportunity for CDNs like Cloudflare and Fastly?

Although I am not too comfortable with everything moving to HTTP. HTTP/2 was already complex enough; it seems we want to move everything into HTTP, and everything away from TCP to UDP. What happened to QUIC, anyway?


The IETF has a QUIC Working Group. If you have relevant skills you should definitely join it.

https://datatracker.ietf.org/wg/quic/about/


There’s a working group for it that’s bashing out ideas, but I haven’t heard much from it in a while. I recall hearing they’re trying to take the parts they want and merge them into HTTP2.


> [..] DOH increases privacy, security and sometimes even performance [..]

Does anyone know how TLS over TCP can be faster than UDP?


I believe that big DNS responses will be faster because, instead of establishing a TCP connection, there is already a warm one ready to go.

But yes, for the common case the performance will be the same.


UDP doesn't require any connection at all


Yes, however DNS isn't purely UDP. It can fall back to TCP in a handful of situations.
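
For example, a truncated UDP answer sets the TC bit, and a well-behaved client retries over TCP; with dig you can poke at both paths (the second domain is a placeholder for anything with a large answer):

    # force the TCP path directly
    dig +tcp example.com
    # cap the advertised UDP buffer so a big answer gets truncated
    dig +bufsize=512 big-txt-record.example TXT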


At least there could be a user-configurable blacklist for known split-horizon domains.



