Hacker News
Comcast Is Lobbying Against DNS Encryption (vice.com)
502 points by president on Oct 23, 2019 | 157 comments



ISPs are dying to find some sort of "value add" to get more revenue, but it's just not there. ISPs need to realize that they are strictly a utility now. All the value that they could possibly add can be obtained from any other company now. They used to provide TV service, but nobody wants linear TV when they can just send their money directly to the company that produces the TV content and have it delivered over IP. They used to provide phone service, but nobody wants phone service; they have a cell phone and can get a fake landline from any number of companies for almost no money. They used to provide a DNS resolver, but they used the DNS resolver to inject fake addresses, so the power has been moved away from them. They used to provide Usenet, but their "content partners" didn't like that, so now you get Usenet from a third-party provider.

IP is just too versatile. All customers want from their ISP is the dumbest possible pipe, because they can get all other services over that pipe and pick the best deals. If you aren't interested in infrastructure, then you shouldn't be an ISP.

(That said, I don't see why companies like Comcast aren't pursuing things like becoming Cloudflare or AWS. "Click here to host your docker container 1ms away from 87% of the US population," seems like a service that could make a lot of money.)


ISPs providing DNS and caching was a key part of the Pai FCC's reclassification of ISPs as an "information service" instead of a "communication service", as part of their repeal of net neutrality.

The FCC has a lot of leeway in deciding whether something is an "information service" or a "communication service", but they do have to be able to justify the decision. Pai went with ISPs providing DNS and caching.

I wonder: if most people end up using DNS from third parties, and caching from third-party CDNs, will it make it easier for a future FCC to reclassify ISPs as communication services and put back net neutrality?

PS: Pai's approach partially backfired. He also tried to use the FCC's power to preempt state regulation to prohibit states from imposing net neutrality. But reclassification of ISPs as information services meant that the FCC no longer had the power to regulate net neutrality (indeed, that was the point of Pai reclassifying them), and the FCC's power to preempt state regulation only applies to things that the FCC could regulate.


"ISPs providing DNS and caching was a key part of the Pai FCC's reclassification of ISPs as an "information service"..."

I thought the same thing when I read about that decision. Many try to label DNS as "infrastructure", with the implication that some third party and not the user should have control over it. As an ordinary user, I do not use third party DNS. I will never believe that from a technical standpoint all ordinary users must use third party DNS.


Even worse: unless you are tunneling your DNS traffic past your ISP somehow, even people who think they are using third party DNS likely aren't.


A third party is supplying the answers but not necessarily the third party the user thinks she is using.

Sometimes we can bypass this by using non-standard ports for DNS.
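
For example, with dig you can point a query at a resolver on an alternate port (a sketch; 192.0.2.1 and port 5353 below are placeholders for a resolver you actually control):

  # query a resolver on a non-standard port instead of 53
  dig @192.0.2.1 -p 5353 example.com A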


I had an ISP that intercepted my requests, and would return bogus (ads) results if the resolution for that particular address failed. Moving off of port 53 was enough to bypass it, but it was easier to just tunnel my DNS requests. The sad thing now is google thinks I need extra captchas because of my weird DNS geo-location. I can't win.


Is my telephone an "information service" because it has 411?


Or is a phone line an "information service" if the company that provides it delivers a phone book to you periodically?


> ISPs need to realize that they are strictly a utility now. They used to provide TV service, but nobody wants linear TV and can just send their money directly to the company that produces the TV content and have it delivered over IP.

Right, I think Comcast realized that. That's why they _are_ the company that creates the TV content. They own Universal Studios, NBC, MSNBC, CNBC, USA Network, Syfy, Dreamworks, etc.


So it's even stupider that they are scrambling in the ISP business. Relax, bean counters, the rest of your mega-holy-shit conglomerate is doing just fine.


> ISPs are dying to find some sort of "value add" to get more revenue, but it's just not there.

Yes it is. It's Google Fi. Comcast could do Google Fi better than Google does, because they could make every Comcast modem an access point for it.

Then for a few bucks on top of your cable internet subscription you give people unlimited mobile data on unlimited devices whenever they're connected to a Comcast access point, which is 97% of the time because they're everywhere, and metered data using some deal with Sprint/T-Mobile/whomever the other 3% of the time.

How much money is on the table when you allow families to cancel their wireless plans on four, five, six different mobile devices yet still have service everywhere?


This is exactly what they're doing now with Xfinity mobile.

https://www.xfinity.com/mobile/network-coverage


My ISP has done something similar for years. They give you a router which doubles as a "wifi hotspot", where anyone with a valid ISP login can use that network at no cost beyond their regular internet bill. It just becomes a part of what you get with their service.

This doesn't quite work out as well as Google Fi because they don't have the other half of the equation which is to jump to a mobile provider when there's no valid wifi connection. My area has 2 choices for broadband and even with a majority of customers in this area there's still large gaps to the point where if I walk around the neighborhood there's tons upon tons of dead zones.


> This doesn't quite work out as well as Google Fi because they don't have the other half of the equation which is to jump to a mobile provider when there's no valid wifi connection.

That's the key though. Even if you have 97% coverage without it, if that other 3% could be when your car breaks down on the side of the road, people aren't going to be willing to switch off their existing wireless service which does work there.

But add existing wireless carriers as metered data which costs nothing unless you use it, and you satisfy that blocking factor and get the customer.


> much money is on the table when you allow families to cancel their wireless plans

and how much money do they stand to lose on those plans that got canceled? Making a service more efficient means they get less money, and without the threat of competition, a business won't do this.


> and how much money do they stand to lose on those plans that got canceled?

That's not a problem for Comcast, it's a problem for the incumbent wireless carriers.


How would Comcast do Google fi better outside the USA?


Xfinity Mobile is a Verizon Wireless MVNO and already offers international roaming (at high but not Verizon-levels-of-high) rates. If Comcast chooses to go after the roaming market in a way that Google Fi has, it could.


Unless something drastic happens ISPs are going to win. They're increasingly charging per bit for wire data (where there is no inherent scarcity), lobbying to prevent competitors, and have taken public funds and failed to reinvest in their infrastructure.


Lobbying and changing laws to apply reality distortion fields to all their customers and the competition is cheaper than investing in R&D or infrastructure to provide a better service, innovate, and compete.

If it’s true for ISPs, it’s true for other big players and is a symptom of a failing system.


What do you mean there is no inherent scarcity? There is only a certain amount of capacity in any infrastructure. When you pay for a bit on a network with metered bandwidth, you are paying for the fact that you used up that infrastructure for a short amount of time. It’s exactly the same concept as paying more to deliver a larger package.

On a network with scarce capacity, flat rate billing (vs metering or caps) means that light users are subsidizing heavy users. A real world illustration of this fact is that ISPs actually prefer flat rate because it is easier to market. On networks with lots of capacity they sell “unlimited”, while on tighter networks (like the cellular networks), they almost always use metered billing.

I’m not sure why some people (mostly only on Reddit or HN) are confused about this, but one theory I have is that they don’t understand the oversale of bandwidth. If you want to get a dedicated 1gbps per second of internet bandwidth, that will cost over $2000/month in most areas. So what all ISPs do is to sell that same gigabit to many subscribers. You can get hundreds of subscribers on 1gbps.

But they advertise “up to 1gbps speeds” instead of “1gbps with a 100x oversale ratio” because that would confuse people. So maybe people think that means they are actually supposed to be getting a dedicated 1gbps. If that was the case, then it wouldn’t make sense to bill for data. But most people don’t want to pay $2000/month for their internet.
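
As a rough back-of-envelope illustration of the oversale math (numbers purely illustrative, taken from the figures above):

  dedicated 1gbps port:             ~$2000 / month
  oversold ~100:1 to subscribers:   $2000 / 100  =  ~$20 / month per subscriber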

The real issue that you are complaining about is that ISPs in many areas are monopolies and feel no need to upgrade infrastructure.


To start, realize that stating a dedicated 1gbps link costs $2000 may be true, but it does not justify the price, which is what I'm speaking to.

The observations I had were: 1) the price of dedicated links went up over time disproportionately to their capacity; 2) network operators began moving to a per-bit charging model, forcing users to bid against each other; and 3) at least here, non-DC dedicated links can be had much more cheaply than a link in a DC.

There is scarcity but for practical purposes the scarcity is so far off that I'm not sure it makes sense to consider it. The true scarcity is related to the availability of land and the size of the data links, neither of which are yet very limiting. Regulation related to right of way or (more commonly) city involvement in high-level planning could be blamed for creating market failures if these costs really are that high.

Otherwise, in regards to #1, we would expect a downward trend of dedicated network link price while capacity goes up, following storage density and transistor density (or more appropriately computational throughput). But we don't see this, even though throughput does increase, and integration costs decrease.

For #2, going to a bidding model could imply scarcity, but as there are unjustified monopolies often there is no increase in capacity so it seems prudent to assume this is done solely to increase price.

And with #3, well, certain usages are priced higher than others, also implying some kind of bidding. This might reflect space scarcity but seeing as the price tends to not go down even if the customer provides their own network equipment and pays for its space it just seems like a reluctance of the DC operator to perform upgrades.

The fairest model I can imagine has users paying the amortized cost of the network equipment and its installation and maintenance.


Yeah, there are always going to be pricing shenanigans because people are not willing to pay the actual costs.

The actual costs of an ISP are the $1MM one time cost to dig up the streets in your neighborhood, and an ongoing cost of electricity for their datacenter, Juniper licenses, peering, customer support, etc. The problems the ISPs have is that no customer is going to pay that one time cost, so they have to figure out some way to make you pay it without that $1MM bill the first month.

I have worked for two ISPs and the internal debate is always around figuring out how to do this. At the last place I worked, our install costs were relatively low; the city already built tunnels under the streets for fiber, so it is financially viable for someone to just pay the ISP to run the fiber, and then pay the incremental costs. It is certainly not free (everyone wants their cut, including the building that you're already paying rent to), but is the kind of money that a medium-sized business or a condo co-op could find if they were so inclined. Individuals have no chance of finding the money, though, so something has to give.

(Where I'm going with this, is that paying per bit does seem reasonable to me. It's what AWS does; if you want to lower your bill, send fewer bits. But it's still a bit of a hack, not reflective of the actual costs. The first bit you send over your line costs a million dollars, but the rest are practically free. You're just paying to keep some routers powered on, and for them to be upgraded when other customers want to send more bits.)

Being able to sell your browsing history is one way to cover the infrastructure costs, but it's unfortunately not a workable idea because as we've seen, companies like Google and Cloudflare can just obscure that data from the ISP.

I hate to be all pro-government, but can you imagine how inefficient it would be if UPS, FedEx, and the USPS all had to build nationwide road networks in order to be able to deliver packages to your house? Instead, the government built the roads, and it made the economy stronger. Being able to drive to work or have someone send you a package from across the country is quite valuable. I don't see any reason why the last-mile Internet shouldn't be built that way; the city can get traffic to a peering point, and from there, you can pick what ISP you want to connect you to the actual Internet. But unfortunately, we half-assed that, instead paying private companies to build the last mile, and now they want to milk their nearly-free infrastructure. The idea of Internet to the home is a mature one, and the reality of a mature business is that it becomes a commodity, and that is going to drive profits down. Comcast, Spectrum, etc. should all be finding new ways to make money, and the easy one, "spy on our customers", is not viable. Sorry, investors. Too bad for you.


Isn't that $1MM upfront cost one of the major reasons why bonds exist? You get your $1MM bond now, and you repay with the future revenues you get from your customers. You make the calculation and that debt payment, plus ongoing costs, plus profit is what you charge your customers. Eventually you 'pay off the mortgage' and then revenue really goes up, but with the rate of technology upgrades, you will always be 'paying your mortgage'.


You still eat the risk, though. Maybe 5G is a thing and all your customers cancel next year. Now you have a 30 year payment on something that generates no revenue. That's where the fear is.


Isn't that business in a nutshell? You wouldn't have much of a business if everyone went to 5G, bond or not. Competition and doing risky things are part of business and partly why bankruptcy law exists.

If your business implodes due to 5G, the bond is written off as part of bankruptcy and the lender eats the loss while taking over the business assets.

Also seeing how low range 5G is, it basically looks like cell phone providers become wifi access point providers, which smells really close to what a wired ISP does anyway, which sounds like another ISP competitor.


My thought was that the AT&Ts of the world could easily build their fiber networks for 5G backhaul. My former employer's thought was that the AT&Ts of the world would pay them to do that, though. I don't know who's right.

The biggest problem I've seen at ISPs is the general unreliability of all the ISP-grade networking equipment. The routers are bad, the OLTs are bad, the VoIP phones are bad, the CPE is bad, the WiFi routers are bad, and the WiFi clients are bad. If I were an ISP, I'd be working on fixing all of that; everything that interferes with your customer wanting to send a packet and the host on the Internet receiving that... that is the area to optimize. But I don't think the funding exists. (It did when I was at Google Fiber, I worked on CPE. But Google lost interest, and at the smaller ISPs, it's all about buying as much crap as you can get off the shelf and letting your customers integrate it.)


> The biggest problem I've seen at ISPs is the general unreliability of all the ISP-grade networking equipment.

I wouldn't generalize so much. It only seems that way, and only in countries where ISP markets are monopolized, because nobody cares about doing it well. Where there is enough competition things are completely different: ISPs are willing to test different GPON vendors, OLT/ONU, wifi routers, etc., figure out which features and configurations are reliable, which aren't, fix things that work poorly and so on, because quality becomes the reason people switch ISPs. Competitive industry also means lots of people with experience and expertise and lots of shared knowledge.


It doesn't need to be monopolized, oligopolized works too. And it's not impossible for a monopoly to care and focus on these things, it's just highly unlikely.


Risk = asking for a premium.

Whoever is going to hold that risk won't do it for free, but will ask for money, because they are taking the risk.


Yes, which is the bond interest rate and the amount they charge customers.


That only works without risk in a monopoly.


For about $80 million my city of 2 million people got fiber on every main street, not digging open holes but drilling 6 inch tubes with lots of dark fiber for rent too. We have Gigabit Internet for $10/month in 80% of the city. Europe, no ISP lobby.


>I don't see any reason why the last-mile Internet shouldn't be built that way; the city can get traffic to a peering point, and from there, you can pick what ISP you want to connect you to the actual Internet.

Yet not everyone can own a taxi. That makes sense economically, and it also makes sense to avoid too much traffic in the street.

The equivalent on the internet would be to block certain usage, either for economic reasons (well, we certainly can't have a million content producers, they need to live) or for traffic reasons (Netflix is using too much traffic... maybe we should limit the number of slots that can be used for media streaming).

I'm all in for a centralized fiber network, but its goals should be clear and it has to be transparent about what it does to achieve them. It also needs to be as far outside of government control as possible, because Snowden has proven that government entities can't be trusted with such important data. Having it at the municipal level with some regular audits could be good enough.


Well, there definitely is bandwidth scarcity. Also, unmetered bandwidth is just one major reason botnets are so trivial. Nobody knows or even has incentive to know that their smart toaster is involved in DDoS attacks so now we're all centralizing behind Cloudflare to mitigate this reality.

Now, I'd like to see a fair price per bit, but I'm not convinced the world is better without it.


ISPs are increasingly charging for total throughput on top of point-in-time bandwidth, though. For example, to get actually-unlimited throughput with Comcast you have to pay an additional $50/month: https://www.xfinity.com/support/articles/exp-unlimited-limit


Most broadband access providers don't know how to build, operate or scale out compute infrastructure and the systems around it. They've tried, they've failed, they've blown a lot of money in the process. It's not their core competency and most have given up. They've been taken in by vendors (or acquisitions) in the last 10 years selling them solutions "just like the cloud companies" but can't manage to make it work in practice and entice the select set of customers who could take advantage of being in close proximity to users and less network congestion. The conversations I've heard go on with these companies display how they're a cargo cult and they fetishize buying the same hardware that cloud providers use.


> ISPs are dying to find some sort of "value add" to get more revenue, but it's just not there.

ISPs are some of the most profitable businesses on the planet. If you look at the financial fundamentals of AT&T, Verizon, Comcast, etc., their gross profit margins are 60%.


Some are. Verizon bought the EdgeCast CDN, which has a Cloudflare-style WAF. They now operate under the Verizon Digital Media Services moniker, but it's been a bit of a challenge for them to integrate.


And that's backfired for Verizon a little bit. Think about it this way: Verizon is now a CDN, so they need as much peering as they can get. But Verizon (AS701), the big eyeball network, has no plans to peer openly. It complicates peering negotiations because EdgeCast (a separate ASN) can't give up the access to Verizon that everyone wants. Something similar happened when Tata (old Teleglobe) bought the BitGravity CDN and constrained their peering relationships.


ISPs do in fact sometimes colocate with CDNs to provide caches very close to residences.


Yeah, I believe netflix puts boxes in a huge number of IXPs for just this reason. That said, it's not currently practical without netflix's current scale. The "as-a-service" part provided by an ISP is an actual value-add and could indeed make money.


Encrypted DNS is great. My only problem (as a linux user) is that I want all DNS lookups on my machine to be performed by querying the servers listed in "/etc/resolv.conf". DoH as implemented by Firefox and Chrome breaks that.


DoH as implemented in Chrome still queries the servers configured in /etc/resolv.conf. It just uses the DoH protocol rather than plain DNS if it recognizes the server as supporting DoH (according to its built-in whitelist). Firefox is the one forcing all DNS resolution through Cloudflare's DoH servers by default.

Personally I'm mostly OK with the Chrome approach for public domains but I still worry about applications bypassing the host resolution plugins configured in /etc/nsswitch.conf. In my case that means: files (/etc/hosts), mymachines (automatic local VM name resolution), mdns (*.local), and myhostname. If an app only looks at /etc/resolv.conf and doesn't use the system resolver then it won't be able to see any of these local names. In the end, domain resolution is a system function and not something applications should be implementing on their own.
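
For reference, the hosts line being described looks roughly like this (a sketch; module names and order vary by distro):

  # /etc/nsswitch.conf
  hosts: files mymachines mdns myhostname dns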


> In the end, domain resolution is a system function

Then just put 127.0.0.1 in /etc/resolv.conf. nsswitch, particularly for hostname resolution, is fundamentally broken as it doesn't work well with asynchronous software architecture; nor does it work well in languages that don't depend on libc.

Systemd already supports being a local resolver, but see OpenBSD's unwind (https://man.openbsd.org/unwind) for an attempt to seamlessly handle DNSSEC, DoT (and eventually DoH), local Wi-Fi portals, and other issues.
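
As a sketch of that kind of setup with systemd-resolved (1.1.1.1/cloudflare-dns.com is just one possible upstream; any DoT-capable resolver works):

  # /etc/systemd/resolved.conf
  [Resolve]
  DNS=1.1.1.1#cloudflare-dns.com
  DNSOverTLS=yes
  # /etc/resolv.conf then points at the local stub, e.g. nameserver 127.0.0.53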


Unbound supports DoT as well.


> My only problem (as a linux user) is that I want all DNS lookups on my machine to be performed by querying the servers listed in "/etc/resolv.conf".

I fear that they're going to end up seeing the inner-platform effect as a way to increase security: Browser makers decide they can't trust Standard OS Component Z, so they implement it themselves inside the browser, and lock it down so their imagined Non-Technical User can't be tricked into changing it to their own detriment. Now you have behavior inside your browser you can't configure because configurability in the wrong hands is a security hole... you're welcome.

https://en.wikipedia.org/wiki/Inner-platform_effect


I don’t necessarily see a problem with this. If OS vendors want to be more than just another layer for running a browser then they need to catch up fast to the work that browsers are doing.

It is insane that connecting to a network is entering into a trust relationship with the local network operator.

It's silly that most apps run with the full privileges of the user that ran them.

These were fine decisions when they were made years and years ago but browsers have second-mover advantage and aren’t burdened nearly as much by backwards compatibility.


There is a solution for that -- install a local resolver that makes queries using DoT/DoH/DNSCurve/DNS-over-WireGuard/whatever but answers them using ordinary UDP DNS, then make that your DNS server in /etc/resolv.conf.


The default configuration for most linux distros is to set 127.0.0.1 as the resolver in /etc/resolv.conf, and then something like systemd-resolved takes care of doing the "right thing".


Why exactly can't you set up something like dnscrypt-proxy and just turn off DoH within the browser? That is exactly what you are looking for, no?

I am currently using both, and as someone at work, I am glad DoH is built into Firefox.
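
A minimal dnscrypt-proxy config along those lines looks roughly like this (a sketch; option names per its example config, 'cloudflare' being one entry from its public resolver list):

  # /etc/dnscrypt-proxy/dnscrypt-proxy.toml
  listen_addresses = ['127.0.0.1:53']
  server_names = ['cloudflare']
  # then point /etc/resolv.conf at 127.0.0.1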


Turning it off in the browser is a PITA, even for just a home user. Family of 4 and if I wanted to turn it off I'd need to do it on 9 computers, 3 tablets, 4 phones and 2 set top boxes. Not saying we're exactly average... but the shit adds up fast. [edit: +1 computer, forgot one]


AFAIK as of now DoH isn't turned on by default in any browser.


You should look into DoT instead


How exactly does DoT make any of this any better? Conceptually, your local system resolver can use DoH just as easily as DoT. The primary difference between the two protocols, from a pragmatic perspective, is that DoT can be blocked by ISPs and network providers, and DoH is harder to block.


I've been doing DNS over TLS with pfSense for over a year now.

The problem as I understand it is that Firefox and Chrome will soon default to DoH. Now I have to remember to go in and change default app settings. Ok not a huge deal with just two apps but yet something else I shouldn't have to do.


Just set your network resolver to return NXDOMAIN for the canary domain use-application-dns.net. (https://support.mozilla.org/en-US/kb/canary-domain-use-appli...)

That will signal to firefox (at least) to disable DoH and use the system resolver.
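
With Unbound, for example, that's a one-liner (dnsmasq and others have equivalents):

  # unbound.conf: answer NXDOMAIN for Firefox's DoH canary domain
  local-zone: "use-application-dns.net." always_nxdomain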


How can you lobby against a browser feature?

It's not like Google or Mozilla were trying to write a new law.

Are there laws that browsers MUST adhere to? Because I'm pretty sure I can create a web browser that behaves in any way that I see fit.


There were literally just House hearings about the impact of DoH on law enforcement investigations. You lobby against it by trying to get a law passed to ban it.


Are you sure you're not thinking of hearings about some other aspect of encryption? I've been working on the DoH issue, including writing to Congress about it, and I don't recall a hearing having happened yet.



I’m sure. I’ll dig up a link; you can watch it.


Please!



If web browsers want to allow playing content from DRM'd services like Netflix, they get tangled up in a bundle of regulations like the DMCA and have to use specific services like this: https://en.wikipedia.org/wiki/Widevine

At a lower level, formats like H264 are basically part of the web platform too, so you end up implementing them and that exposes you to software patent laws.

So, in practice: Yes, there are laws browsers must adhere to. If you're willing to strip a bunch of features out of your browser and you live in a country with a less awful legal climate, maybe not too many laws.

For a while the US banned export of encryption if you aren't familiar with that - it impacted stuff like web browsers and resulted in people using 56-bit (!) encryption keys for things.


In practice you can host your browser redistributable on a non-US server that has no obligation to US laws. And if a US user chooses to download from that location, there is nothing they can do about it.


And at the same time, they deployed a DoH server: https://github.com/DNSCrypt/dnscrypt-resolvers/blob/master/v...


No mention of whether it is logging or not. I think I can safely assume they are logging everything.


Which is just gravy, because the whole DoH crowd is against ISP monitoring and selling it.

Goes to show that the only thing DoH/DoT/et al. did was to make things more complicated and harder to work with.


Did everyone forget that it was Comcast that was injecting JavaScript into unencrypted HTTP responses? I wish I had time to dig out that HN thread. It only makes sense given Comcast's lack of ethos that they would be against encrypted DNS.


I worked on that code. Sorry.

Originally it was a Dutch ISP that wanted the feature. They wanted a pop-up that would ask the customer if they would like to buy more gigabytes when the customer began running low on data.

There was a large back and forth. The general thought process at the time was, "If technology can be used a certain way, it will be used that way." So, we went back and forth over hypothetical situations of how an ISP or business could use javascript injection. Like, "Could an ISP use this to steal personal data?" Ultimately, browsers were beginning to default to tls at the time, so it seemed like it would be a short lived feature.


From the summary[0]:

> If activated, this feature would by default route all DNS traffic from Chrome and Android users to Google Public DNS, thus centralizing a majority of worldwide DNS data with Google

I can't decide if Comcast is just ignorant since Chrome's plans are to NOT do this, or if they are outright lying about what Chrome's plans are. As I mentioned the other day, this is the problem with not separating the DoH protocol discussion from the DoH browser-provided default resolver. Good job Mozilla, now we can't have a debate about the merits of DoH the protocol because y'all have muddied it with default resolver choice.

[0] https://assets.documentcloud.org/documents/6509454/ISP-DoH-L... (PDF)


That instantly makes me feel dns encryption is something worth exploring in depth...


I have a full tutorial on how to set up DoH for yourself:

https://www.aaflalo.me/2018/10/tutorial-setup-dns-over-https...


Why would you do this instead of, say, the easier and more general DNS over WireGuard?


You wouldn't. But Firefox and Chrome users might be happy to have this done for them.


erm, how do you mean? DoH and DoWG have essentially the same security properties, no? With authoritative servers only responding in the clear, you have to trust some egress provider. If I'm understanding it correctly, the only use case for DoH is for end users that don't have a remote box to trust with their egress.

Which certainly is a worthy segment. It just seems like any DIY network setup would be orthogonal to that. And so there's no point addressing DoH on your local network unless you're trying to mitigate DoH's effects on eg ad blocking.


US ISPs are currently, actively, aggressively manipulating DNS. People shouldn't trust their ISP DNS. If they run normal DNS to a third-party resolver, their ISPs still see their queries. If they use DoH, they can't. If they use WireGuard, their ISP sees even less of their traffic, but WireGuard is harder to set up than DoH, which your browser will do for you.


Sure, but the post I was responding to was detailing how to set up your own DoH server, which is not so trivial.

I suppose you could set up an Internet-facing DoH server, and then point their routers (dhcp servers) at your new DoH server, rather than heavy-configuring their premise routers to use wireguard. (Of course then you're installing your server as a point of failure, which is probably not what you want to do!)


That's actually the setup I have today.

However, it's easier to set up DoH/DoT than to set up a WireGuard VPN, especially when setting it up for mobile devices.


It makes me want to implement it immediately.


You already can for your whole network: https://scotthelme.co.uk/securing-dns-across-all-of-my-devic...

Or if you use firefox you can enable it for each browser.


I do DoH with pi hole and cloudflare (followed arch wiki for all of that) but I think it’s silly. You don’t know what I’m resolving but you still know what ips I’m visiting and can just look up their host names. What does it really do?


You can have multiple hostnames per IP (e.g., if you are using a site that uses Cloudflare).

That fact makes it very difficult to map an IP address back to a hostname for anything behind a CDN. That is the reason Comcast is fighting it.


Makes sense. Thanks. Then I’m glad I’m using it! I fell into the arch wiki black hole and an hour later had pi hole, DoH and OpenVPN all configured so all my devices including my iPhone go through my home internet and the pi hole. Pretty neat. No ads while mobile. I did have to do tcp on 443 since udp and t mobile did not play nice together. I was too lazy to debug that though.


A fair chunk of internet content is hosted in the cloud. So while comcast knows exactly what IPs you are talking to they can't tell much else.

This prevents them from tracking your habits, selling that data, and creating or marketing competition. It also makes it harder to cripple the network for some specific competitor like YouTube or Netflix.

How else is Comcast going to hold its users hostage so Netflix has to pay extra to get the bits the customers paid to get?


The vast majority of relevant privacy-sensitive information is not IP addresses. The ISP can't read the full content of every page just from IP addresses.


I thought DoH would defeat the pihole. Is this incorrect?


Check out the pi hole arch wiki. Tells you how to use DoH with cloudflare. Takes 2 minutes.


I set up doh using dnscrypt-proxy the other day and it was easy. And as a bonus, I enabled logging, so now I can tell what domains are being looked up by my computer.
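
For anyone wanting the same, the logging part is just a couple of lines in the dnscrypt-proxy config (a sketch; the path is whatever you choose):

  [query_log]
    file = '/var/log/dnscrypt-proxy/query.log'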


This isn’t exactly true... Comcast just launched their own DoH endpoint. I also used to work very closely with the DNS team at Comcast. At the time, they did not sell or even log/look at DNS data. It was sampled in aggregate to break down CDN traffic in Netflow data.


Comcast's masters likely have the same arrangement that AT&T does. Everything passes through a closet the rank and file know nothing about.


That's a totally different and unrelated assertion. If the Gov't wants that traffic, they can just go to a Level3 fiber regeneration site in the middle of nowhere and tap the fiber traffic of hundreds of companies.


Didn't Comcast have a long history of sandvine boxes in passive mode until they finally realized they were too expensive and ditched them?


Doesn't the ASN tell you that?


The ASN will only tell you the traffic is from Akamai, not that it was for apple.akamai.com or steampowered.akamai.com, etc.


If Comcast doesn't want it then it must be a good thing for consumers.


Pitiful people Comcast, shame on you. This information never should have been sold.


Just imagine if Comcast loses the ability to hold content providers hostage. With DOH it's much harder to tell that most of your customers are streaming from Netflix, so it's much harder to artificially degrade their network connections so you can ask Netflix to pay extra.


Beefy cache nodes of any company, not just Netflix, are trivially identifiable.


But if you have a list of 1000 beefy cache nodes who do you send the ransom letters to?

How about if each beefy cache node has 1000 IPs instead of 1?

What if each client was sticky, so if Comcast buys a Netflix account they only end up on one beefy cache node?

Hell if pushed hard enough maybe netflix would enable p2p (encrypted with DRM of course) for content delivery. I'd happily hook up a 1TB usb to my roku if it improved the playback experience.

Of course DoT is not perfect, but it does seem like it would help, so much so that Comcast is lobbying against it.


I may be the last person to join the party, but I do not understand how Dns over HTTPS hides me from my ISP.

From my understanding, to fetch a URL, I make a request to the DNS server. If I do it unencrypted, the ISP sees the request in plain text. They know where I am going.

But when I do it over DoH, I send an encrypted DNS request to a service, of my choosing, that gives me the ip address of my destination. (is this correct?)

Now, in order to reach that destination IP address, don't I have to use the ISP? Aren't they the ones routing my request to the destination? Even if it is under HTTPS, the destination has to be known, right? I'm sure I am missing a piece of the puzzle, but where exactly I don't know.


With DNS over HTTPS, the ISP would have the ability to see the IP address you are connecting to, yes, but critically they would not have the ability to see the domain name that you visited unless your browser also sends the name in cleartext SNI. Which it probably does, so from a privacy standpoint, not much changes.

What does change is the ISP's inability to tamper with the DNS response. Many, many ISPs will refuse to actually send a DNS not found for certain record types, instead serving up their own custom search pages with advertisements and other garbage. It also prevents certain classes of MitM that involve intercepting plain-text DNS and re-routing that request to a different server by responding with an attacker-controlled value.

So, from a privacy standpoint, DNS over HTTPS by itself isn't buying you all that much (since SNI leaks the same information during the SSL handshake to your target) but in terms of making your access to the DNS infrastructure much harder to tamper with, it does a whole bunch.

EDIT: ooohhh, ESNI is a thing? This seems interesting to keep an eye on: https://blog.cloudflare.com/esni/
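
You can watch the SNI leak yourself with a packet capture, e.g. something like this (a sketch, assuming tshark is installed; field name per Wireshark's TLS dissector):

  # print server names sent in cleartext during TLS handshakes
  tshark -i eth0 -Y 'tls.handshake.extensions_server_name' \
      -T fields -e tls.handshake.extensions_server_name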


eSNI is still in the draft stages, which is why Chrome has opted not to implement the draft until the standard is finalized or in a state it deems satisfactory[0]. Currently FF and CF implement the draft.

There are also many hurdles Google has to consider when rolling out things like this that will break Enterprise deployments. Currently, DoH is completely inaccessible if the browser is "managed" (has Policies) at all, even if the disable DoH policy isn't set. I imagine the same will happen with eSNI.

0: https://crbug.com/908132#c7
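
For reference, the relevant policy is DnsOverHttpsMode; on a managed Linux machine it would be set with something like the following JSON dropped into /etc/opt/chrome/policies/managed/ (a sketch; values per Chrome's enterprise policy documentation):

  { "DnsOverHttpsMode": "off" }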


You can learn a lot just from the destination IP address:

https://irtf.org/anrw/2019/slides-anrw19-final44.pdf

You need a VPN or Tor to obscure that from your ISP, but your VPN server ISP or your Tor exit node and ISP will always be able to see your destination IP address.


So, 95.7% of alexa top million sites are uniquely identifiable just by looking at a set of IP addresses following initial connection.

This is why real privacy is hard and DoH is a joke from technical perspective and evil from power perspective.


If the IP the domain points at is some big hosting provider like Akamai/Cloudflare, how will they know which site you are actually accessing? TLS still sends the domain name in plaintext, but it's only a matter of time till that is fixed.


It is much easier for the ISP to just inspect the smaller amount of DNS traffic on port 53 than to inspect all traffic, including port 443, and scan for HTTPS and SNI. Also, the destination IP does not reveal much for CDNs.


Just so I understand, why does an ISP care so much about providing the DNS resolution? Anyway once you have a destination IP address, that'll be visible to your ISP from your packets, and a simple reverse DNS can tell them what these IPs are for. Or am I missing something? That's also why I'm not so bullish on the benefits from DNS encryption, though it's a step in the right direction.


And this is a problem why?

DoH is garbage and Mozilla's implementation is just a cashgrab with Cloudflare. Adopt DoT+DNSSEC instead.


A million times yes. DoH is powergrab crap and needs to die.


The fundamental law of pipe companies: Every pipe company hates being a pipe company.


I still can't wrap my head around DNS encryption; can someone explain it or link me to an article/video?

I seem to remember reading that for this to work, you would have to trust someone like google, but then wouldn't you essentially have to proxy all of your data through google or some dns encryption source.. i.e., almost making it a VPN?

Surely I am missing something- I just don't see how you can hide traffic from your ISP without a proxy in the middle..


Normal DNS uses UDP on port 53.

It's very easy to intercept, monitor and redirect, which is what we think Comcast is doing.

DNS-over-TLS encapsulates the DNS traffic in a TCP connection with TLS. In other words, all requests are encrypted and it's impossible (without installing a root cert and MITMing the connection) to know what site you're consulting.

DNS-over-HTTPS recreates the protocol as either JSON or dns-message (a binary format akin to the original DNS protocol) and uses an HTTPS request (with HTTP/2 minimum and TLS 1.2/1.3).

They are different ways to achieve the same idea: encrypt your DNS data (the sites you visit); the latter also makes Comcast think you're just consulting a normal website securely.

You're not redirecting all your traffic to a trusted source, just the DNS traffic. Moreover, you can easily set up your own DNS encryption service or choose a provider you trust.
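
As a concrete example, a DoH query is literally just an HTTPS request; you can make one by hand (a sketch using Cloudflare's JSON endpoint as documented by them; any DoH provider works):

  curl -s -H 'accept: application/dns-json' \
      'https://cloudflare-dns.com/dns-query?name=example.com&type=A'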


But Comcast would still know what IP address you are communicating with.. isn't that going to tell them 99% of the time where you are going still?

I mean right now, I do a DNS lookup for news.ycombinator.com and I see 209.216.230.240 .. I'm refreshing the site and that is the address I'm communicating with. Granted you can have multiple DNS assigned to the same IP.. but that's not going to do a whole lot, right?

What's to stop comcast from just using their ip->dns lookup table to do basically what they are doing now? Yeah it gives them a slightly less clear picture.. but almost the same?


One thing they tend to do is when one tries to go to a site that does not exist, they instead serve up a 'search' page with their ad links and other unrelated junk. Perhaps they don't want to lose the ad revenue from hijacking their user searches?


Say you are a large ISP and you want to maximize profit. If you can track who your users are connecting to, you can then start degrading their network connection and asking the target to pay for a better connection.

Comcast was doing this, people noticed much better latency/bandwidth if they used a VPN so that Comcast couldn't tell they were communicating with netflix.

So now with encrypted DNS a user looks up netflix.com:

  bill@kona:~$ dig +short netflix.com | head -1
  52.37.69.124

But they just see encrypted packets. They can of course do a reverse lookup on the IP:

  bill@kona:~$ dig +short -x 52.37.69.124
  ec2-52-37-69-124.us-west-2.compute.amazonaws.com.

Is that a webcam watching an eagle's nest? One of a zillion video streaming services? Is it a user's cat-monitoring webcam proxied through a random webcam provider? Someone hosting their Plex server on Amazon?

Without being able to see the DNS records it becomes much harder to track, market to, and muck with a user's traffic.

Comcast could of course make everyone's network connection worse (not just netflix), but then people would complain that they are paying for a high speed internet connection (not just a connection to comcast services) and not getting it.

Almost like net neutrality.


More and more websites rely on CDNs, which means more and more websites share the same IP. This is how Cloudflare works: they proxy the full traffic through their servers to your server.

If you can't see the host/DNS request, you won't be able to know which website is being visited.


This is absolutely not how it works. CDNs, and Cloudflare in particular, have lots and lots of IP addresses and don't share the same one with all the websites; instead they shard websites across IP addresses, so each website sticks to a specific IP address. The reasoning behind this is usually all the legal risks, blocking risks, etc. For example, if some government wants to censor a website, they are going to look up its IP address and block it; if the website jumps across many IP addresses, they may block all the subnets those addresses belong to, so they can cover all possibilities, which is going to censor lots of other websites on those subnets and make the CDN pretty useless as a CDN.

Anyway, such approaches in combination with all the IP addresses of subresources each website links to can identify 95% of top 1 million websites, more than 95% if response sizes are taken into account. No amount of silly encryption toys like DoH, eSNI and TLS 1.3 can protect against it. You need some serious privacy technology to address the problem, like decentralized peer-to-peer overlay networks.


DNS isn't encrypted. Anyone passing your DNS queries between you and your DNS provider can inspect them.

DoH or DoT would allow your upstream DNS provider to see your queries, but anyone passing them around wouldn't see the content of the queries.

Comcast won't be able to read those queries if they're encrypted via DoH, and part of their business model involves spying on their customers' queries and selling the data.

DoH has nothing to do with proxying.


Thank you, however I still don't understand how it works from what you said. What is an "upstream DNS provider" in this context?

I'm just not seeing how Comcast can't still know where you're going, since obviously you are routing your traffic through them and they see the IP address; how is the IP not giving them enough information to figure out where you are going? Is encrypted DNS only going to work for a handful of sites running in some special way, or is this supposed to work for everything?


> What is an "upstream DNS provider" in this context?

This would be your DoH provider. You can have layers of caching and DNS servers on your machine and network, but DNS queries ultimately have to go to a foreign DNS provider.

> I'm just not seeing how Comcast can't still know where you going

If your connections aren't encrypted, or you only access servers with static IP addresses, then Comcast will be able to know where you're going.

Modern web applications have many layers of indirection, and you can't always correlate accessed IP addresses with the service the user is using.

As an example, consider a site that uses HTTPS and is proxied with Cloudflare. To Comcast, you're sending encrypted bytes to an IP address that doesn't identify the site, it's just Cloudflare. With unencrypted DNS, Comcast can just look at your DNS queries and determine the sites you're visiting. With encrypted DNS, Comcast sees that you connected to your DNS provider and Cloudflare, which isn't exactly valuable.


> With encrypted DNS, Comcast sees that you connected to your DNS provider and Cloudflare, which isn't exactly valuable.

Comcast also sees SNI in plain text, sees all the other connections to other IPs for 3rd party resources on that domain, also with SNI, and sizes of all the responses of course. And just the IP addresses and response sizes give enough information to figure out what domain is visited, never mind seeing it in plain text in SNI.


In combination with Encrypted SNI in TLS 1.3 (https://blog.cloudflare.com/encrypted-sni/), they won't be able to see that.


DoH + eSNI + TLS 1.3 won't prevent seeing IP sets. See this thread https://news.ycombinator.com/item?id=21340671 and my other comments here.

There are only detrimental effects from DoH on privacy, because extra party sees lots of stuff about you and your ISP still sees everything.


Good point. Hopefully encrypted SNI will gain traction, as well.


Has DNS yet removed the possibility of an ISP front-running replies for outside DNS? For example, you want to see HN, but Comcast would rather you see HN through their ad-injecting proxy - their systems can see your DNS query and reply with the proxy.

Seems like encryption and signing would help here as well.


They don't need to front run. They can simply impersonate and not pass along the queries.


Do you mean Dan kaminsky's issue from 2008? (https://en.wikipedia.org/wiki/Dan_Kaminsky#Flaw_in_DNS)

If so, this was fixed... in 2008.


Fixed is much too strong a word. Mitigated is more descriptive. From your link:

> This fix is widely seen as a stopgap measure, as it only makes the attack up to 65,536 times harder. An attacker willing to send billions of packets can still corrupt names.


Fair.


Nah, they'd be able to use a much more surgical approach because they wouldn't need to guess the txid. They can just spool off packets that match what they're looking for and respond to them themselves instead of sending them along to 8.8.8.8 or whoever.


You don't need a proxy for the same reason that your ISP can't read your https data. The data transmitted is encrypted at either end so people passing that data can't read it. Current DNS requests are passed in plain text so all hops along the way can read that information.

However, you are correct that if using some sort of encrypted DNS you'd still have to trust the provider you use, which can read those requests, and you have to choose a provider that isn't your ISP (which tends to be the default in most places). The two primary providers at the moment appear to be Cloudflare or Google. To that extent, pick your poison.


How hard would it be for them to instead reverse lookup the outbound socket IP addresses to determine the servers to which you are connecting?

Assuming they have some people performing the same DNS lookups from similar locations I bet they could construct a good enough mapping. It’d probably be even more accurate if there’s DNS caching client side as it’d count actual connections and not just lookups.


Your method wouldn't work too well, as multiple websites will be multiplexed behind a single IP address via a method known as "name-based virtual hosting". It exists even with TLS, as SNI (Server Name Indication) was added to serve this purpose. However, in the future, TLS will most likely mandate that SNI be encrypted and not visible to a passive attacker (it is currently in IETF draft status, as someone pointed out below).


It's still just a draft last I checked. https://tools.ietf.org/html/draft-ietf-tls-esni-04

Whereas TLS 1.3 is an RFC. https://tools.ietf.org/html/rfc8446


One IP address could have multiple hostnames pointing to it. DNS permits this.

I could register my own domain name, right now, and point it to what `dig facebook.com`'s IP address is.


Pretty much the default position for everyday shared hosting environments, unless a static IP is explicitly requested by a customer.


My thought is that with SNI (Server Name Indication), the domain is transmitted in plaintext as is. Encrypted SNI is a long way off unless everyone starts letting a 3rd party like Cloudflare manage everything.

To cut Comcast out of the loop requires the OS/Browsers to switch away from Comcast's DNS, and a huge amount of the web terminating at generic Cloudflare IPs with Encrypted SNI.


Even with esni the payload size is clear, so it isn't too difficult to identify the site being accessed from a small list.

And ESNI's limited privacy depends on people centralizing their services on a small number of cloud providers that get to see everything.


It's possible to extract much much more information [1] than DNS queries for an ISP with off the shelf products, they don't really need your DNS queries for that (yes, DoH people are lying like crazy that it improves privacy). What ISPs do care about is keeping control over DNS, being able to use it for address translation, routing around problematic servers, blocking and intercepting domains, etc.

[1] There is lots of stuff leaking all over the place in IP packets passively, not just IP addresses: website-identifying TLS metadata in case a site shares an IP address with other websites, user machine and OS identifying metadata, etc. Also, when you visit a website, you make requests not just to a single IP address but also to a bunch of other IP addresses specific to the website, to load resources that the page includes; those send back responses of specific sizes, and all of this is nicely clustered and mappable to a specific machine you were using. Even if you hide TLS metadata, it only takes a single user visiting a website in the open to identify all the rest who did too, and that user can be a headless browser going through the Alexa top million websites. Active probing is yet another thing that can reveal a lot: the ISP can connect to the same port from your IP address, send various things to figure out the protocol, and so on; that's how all kinds of obfuscation can be detected.


Does anyone know how DoH works in browsers when connected to a new WiFi network with a captive portal?



Is there anything that this company does that doesn't suck?!


At first I was excited about DoH, until I realized that you are simply switching data collection from the ISP to whomever you choose as your trusted resolver in DoH (most likely Google or Cloudflare). So ideally we need another machine that does DoH requests and then sends you the results, but at this point you might as well set up a full VPN.

DoH might be a good alternative in places like China, because Cloudflare knowing about your browsing history is a lot less dangerous than the Chinese government knowing about it. Unless DoH providers sell that data to China. Which they probably will.


You can still use DoH with your ISP - it's about finally encrypting DNS requests in transit

The reason why ISPs are afraid is because they know that given the choice most people wouldn't opt for ISP hosted DNS since they have a history of being abusive


Most people won't care. The coming battles are about defaults.

Mozilla's motives may be pure, but with a key press they're poised to funnel the DNS requests of hundreds of millions of users to a single entity. That's power anyway you look at it.


How this will enhance the common man's life is something they need to explain to the public. Don't they need to let everyone know?


That probably just means they take advantage of DNS data of their users and that DNS encryption is a very good idea.


Which is exactly why government should have nothing to do with internet standards.


Doesn't DNS encryption effectively break ad blocking software?


DNS encryption is going to be one of the big boondoggles of the 21st century internet.

I am currently going through a project at work to certify all the applications which need a custom root CA cert added to them, for their traffic to be inspected. This is part of GDPR certification. This is not some nefarious project to spy on employees, or even a DLP initiative. To comply with GDPR, we have to know where PII comes and goes, and over 80% of that data goes over HTTPS. That means that for any company to be able to comply with GDPR, they have to inspect all HTTPS traffic.

Now, could TLS clients provide a mechanism to export all decrypted traffic, separate from the validation path? Sure. But that's not a part of the TLS spec, so to my knowledge, TLS libraries just don't have that option, and all the applications I know of certainly don't expose it. So even though to provide for the above requirements, all we need is a read-only tap of the decrypted content, it wasn't required (or possibly even considered), so now we need read-write access to all content, which is in no way what the requirements were intended for.
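
The closest existing thing to that read-only tap is the de-facto TLS key log interface: NSS/BoringSSL-based clients such as Firefox and Chrome will dump session secrets to a file if asked, and a passive capture can then be decrypted offline; most other TLS libraries don't expose anything like it, which is exactly the problem. A sketch of how that looks where it is supported:

  # export TLS session secrets from the browser, decrypt a capture later
  SSLKEYLOGFILE=$HOME/tls-keys.log firefox &
  # then point Wireshark's TLS key-log preference at that file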

That's bad, but what's worse is this process is introducing dozens of bugs. In many cases, the proprietary content filters are actually failing to validate correctly, and proxying payloads from hosts with invalid certs to unsuspecting clients. Not only do we have to MITM everything, but we're making security worse, and breaking apps. Take all of these considerations, and now instead of it being HTTP content, it's DNS.

From an internet privacy perspective, there are only so many hostnames and IPs on the internet. Regardless of how you encrypt it, if you can observe hundreds of millions of individual users' traffic, it will be trivial to discover what DNS records a connection is requesting by statistical analysis. I agree with making the data integrity immutable, but that doesn't mean we have to force it all to also be private, not to mention breaking the distributed, decentralized design that made DNS resilient to begin with.


> To comply with GDPR, we have to know where PII comes and goes, and over 80% of that data goes over HTTPS. That means that for any company to be able to comply with GDPR, they have to inspect all HTTPS traffic.

That doesn't seem like a reasonable interpretation to me. Is there a legal reference you can cite that backs this up?


> To comply with GDPR, we have to know where PII comes and goes, and over 80% of that data goes over HTTPS. That means that for any company to be able to comply with GDPR, they have to inspect all HTTPS traffic.

Fine. A company can inspect their own traffic.

This doesn't mean that _other_ companies should have the same ability, including upstream providers.

> Not only do we have to MITM everything, but we're making security worse, and breaking apps.

Sounds like what most of the "security compliance" certifications actually do.


It's comforting to know 20% of the PII wasn't being encrypted in the first place.


...you do know that HTTPS isn't the only kind of encrypted network connection, right


Well, it seems like we're approaching that situation, when people are talking about running DNS over it.

...so what else is being used?



