Emergency Directive 20-03 – Remote code execution vulnerability in Windows DNS (dhs.gov)
102 points by PatrolX on July 16, 2020 | 46 comments



This isn't snark - I am serious.

Who would be running a DNS server on a Windows system?

Why would they be doing such a thing? What is the thinking here? I understand running Windows-specific infrastructure like an AD server or a PDC or whatever, but... a DNS server?


You do understand that the corporate world runs on Microsoft Windows and Active Directory, right?

DNS is an absolutely critical component of Active Directory (there's even an old joke that, when dealing with Active Directory issues, "it's always DNS").

In an AD environment, clients are almost always configured to use Windows DNS servers as their DNS servers, with the Windows DNS servers then performing forwarding (of any unanswerable queries) on their behalf. This way is, by far, a helluva lot easier.

You could have non-Microsoft DNS servers that slave the zones from the Windows DNS servers and point the clients at those.

You can even go a step further and avoid using Microsoft DNS entirely. Despite what some people (even some here, apparently) seem to believe, you absolutely CAN run Active Directory without using Microsoft DNS at all (although there are several advantages to using Microsoft DNS)! BIND, for instance, can be used instead of Microsoft DNS. It completely supports all the features that are needed for Active Directory -- in fact, it provides a number of additional features that are often "nice to have" as well.

Because it is a PITA to set up and support, though, these types of deployments are fairly uncommon. In most cases, it's much "easier" to just use Microsoft DNS (which can be installed and set up for you automatically when first deploying Active Directory) -- especially when the folks managing Active Directory don't really have a great understanding of DNS itself. For example, I imagine the average Windows administrator would be completely dumbfounded if you asked them to hand-edit a BIND zone file! Instead, with Microsoft DNS, they can just point and click their way around.

This, of course, leaves out all discussion of DHCP which (in a corporate Windows environment) is pretty much always required as well. Running all of this on Windows means you don't have to deal with integrating all of the various pieces yourself.
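
If you're curious what the non-Microsoft route looks like in practice, here's a rough sketch (Python with dnspython; the server address and zone name are made up) of pulling a zone from a Windows DNS server via AXFR, the way a BIND secondary would, and dumping it into the kind of plain zone file an admin could hand-edit:

    # Rough sketch, not production code: pull a zone from a Windows DNS
    # server via AXFR (as a non-Microsoft secondary would) and dump it
    # as a plain RFC 1035 zone file you could hand-edit.
    # The server address and zone name below are placeholders.
    import dns.query
    import dns.zone

    MASTER = "10.0.0.10"           # hypothetical Windows DNS server
    ZONE = "corp.example.com"      # hypothetical AD-related zone

    # Requires the Windows DNS server to allow zone transfers to this host.
    zone = dns.zone.from_xfr(dns.query.xfr(MASTER, ZONE))

    with open(ZONE + ".zone", "w") as f:
        zone.to_file(f, relativize=True)

    print("transferred", len(zone.nodes), "names from", ZONE)

That plain-text zone file is exactly the kind of thing the average point-and-click Windows admin never has to look at, which is a big part of why Microsoft DNS wins by default.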


A "win" w/ Active Directory-based DNS servers (and storing the DNS records within Active Directory) is that you get replication between the DNS servers by way of AD replication "for free". There's a certain component of "snake eating its own tail" to it, insofar as you have to bootstrap new Domain Controller computers by having them use another DNS server while they pull their initial replica. Active Directory-integrated DNS gets you per-record ACLs to authenticate dynamic updates, too.


Yeah, there are certainly numerous benefits of AD-integrated DNS, such as built-in zone replication via AD replication, secure dynamic updates, updates can be made on any DNS server (i.e., "multi-master"), DHCP integration, and probably several others I'm forgetting at the moment.

At least some of these are available in other DNS servers as well but I'd certainly agree that AD-integrated DNS is much easier to deploy, manage and maintain.
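
For anyone who hasn't seen one, a dynamic update is just a special DNS message. Here's a minimal sketch with dnspython of the kind of update an AD client sends to register its own record; real AD "secure dynamic updates" authenticate with GSS-TSIG (Kerberos), so the plain TSIG key here is only a stand-in, and every name, key and address is invented:

    # Minimal sketch of an RFC 2136 dynamic update (what AD clients do to
    # register their own A records). Real "secure dynamic update" uses
    # GSS-TSIG/Kerberos; a plain TSIG key is used here as a stand-in.
    # Every name, key and address below is made up.
    import dns.query
    import dns.tsigkeyring
    import dns.update

    keyring = dns.tsigkeyring.from_text({"update-key": "aaaaaaaaaaaaaaaaaaaaaa=="})

    update = dns.update.Update("corp.example.com",
                               keyring=keyring, keyname="update-key")
    update.replace("workstation42", 1200, "A", "10.0.5.42")

    # With AD-integrated zones this could be sent to any DC; the change
    # then reaches the other DNS servers via AD replication.
    response = dns.query.tcp(update, "10.0.0.10", timeout=5)
    print(response.rcode())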


> updates can be made on any DNS server (i.e., "multi-master")

This cannot be overstated. I'm currently in the process of trying to rehabilitate an existing DNS infrastructure based on BIND, and it is a complete disaster. All the "high availability" stuff is focused around replicating zone databases from a single master to many slaves, which will indeed continue answering queries even if the master is down. It appears there is basically zero concern for ensuring that updates will continue to be accepted and distributed to slaves if the master fails. The documentation doesn't address it, and none of the many conversations I found through Google had good answers. I have no idea how large DNS installations are built on this software.


BIND can have multiple master servers. I don't understand why it wouldn't continue answering queries when the master is down either; you only need the master to answer IXFR/AXFR requests. Maybe your SOA has some really short timers in it? I'm not great with BIND, but I've been doing a lot with it lately (using it to feed large DNS servers), so you're welcome to email me if you want help.
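
One cheap thing to check while you're debugging: query the SOA record on the master and each secondary and compare the serials and timers. A quick sketch with dnspython (the zone name and server addresses are placeholders):

    # Compare SOA serials/timers across the master and secondaries.
    # Diverging serials mean replication is behind; "expire" is how long
    # a secondary keeps answering after it loses contact with the master.
    import dns.resolver

    ZONE = "corp.example.com"
    SERVERS = {
        "master":     "10.0.0.10",
        "secondary1": "10.0.0.11",
        "secondary2": "10.0.0.12",
    }

    for label, addr in SERVERS.items():
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [addr]
        soa = r.resolve(ZONE, "SOA")[0]
        print(f"{label}: serial={soa.serial} refresh={soa.refresh}s "
              f"retry={soa.retry}s expire={soa.expire}s")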


BIND has a plugin interface and can load zones from an HA/replicated data store like LDAP.

I don't know how commonly this is used in practice. It is used by FreeIPA.


As someone who inherited a half-broken, half-arsed hybrid BIND/Windows environment - now that I've decommissioned the BIND side, our DNS is a lot simpler to operate and understand, and we have far fewer 'glitches'.

This is not a knock on BIND - it is good software, but in the average Wintel environment you're probably just adding unnecessary complexity for marginal gains.


Most AD infrastructures do, for SRV records among other things. It's used for domain discovery, service discovery, optionally registering computer hostnames in DNS as part of IPAM/the Microsoft DHCP server, and probably other things. I'm not an Exchange admin, but I'm sure Exchange uses it too.


SRV records are part of the DNS standards and supported by pretty much every DNS server in existence.

While properly functioning Active Directory and Exchange deployments both require SRV RRs, they certainly aren't Exchange- or Windows-specific.
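
For anyone curious, these are the sorts of lookups an AD client performs for domain and service discovery, and any resolver can do them. A quick sketch with dnspython (the domain is a placeholder):

    # The SRV lookups an AD client makes to find domain controllers, KDCs
    # and global catalog servers. Standard DNS, nothing Windows-specific.
    # "corp.example.com" is a placeholder domain.
    import dns.resolver

    domain = "corp.example.com"
    for svc in (
        "_ldap._tcp.dc._msdcs." + domain,   # domain controllers
        "_kerberos._tcp." + domain,         # Kerberos KDCs
        "_gc._tcp." + domain,               # global catalog servers
    ):
        try:
            for rr in dns.resolver.resolve(svc, "SRV"):
                print(svc, "->", rr.target, "port", rr.port, "prio", rr.priority)
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            print(svc, "not found")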


It’s just easy to do, that’s it. It was a little difficult to run another DNS server with Windows 2000.... but that was a very long time ago.

Even if you use MS DNS, it’s smart to run DNS on dedicated devices.


My knowledge of Windows is a few years out of date, but the way I recall it, basically every DC is a DNS server by default (or at least the first DC that gets created), because you need a name service if you want a useful intranet.

So every company that has AD probably has a Windows DNS server - which is pretty much every bigger company.

Hope someone can confirm or dispute my info with 2020 hands-on experience.


Yep. Most of the small companies I've seen just use the Windows DNS servers because they need them for AD. You can set up separate DNS servers and delegate AD's domain to the domain controllers, but then you're maintaining additional DNS servers.

And besides, most of the small companies I've seen vastly prefer a Windows GUI to (e.g.) configuring BIND on the command line.


AD has a very significant DNS component. All domain controllers are also DNS servers.


Perhaps more correctly, all domain controllers can optionally be DNS servers as well.

Even when first setting up a new forest and new domain for the first time, installing a Microsoft DNS server is not required.


All primary domain controllers are also DNS servers.


There are no "primary domain controllers" in Active Directory. It's multimaster. There is one "primary domain controller emulator" in each domain that-- erm-- emulates an NT 4.0 primary domain controller-- but that's it.


Plus the other FSMO roles... and some things only work with the PDCe... and some DCs can be RODCs...


And the extras are as well.


Yeah, it's a highly integrated piece of Microsoft Active Directory, which is a core piece of any "Enterprise" Windows deployment. It's not just something that stays on internal networks, either - organizations that have trusts between their Active Directory systems have DNS connections between them too.

But for public networks, I completely agree that it's insanity to run DNS on Windows.


Why?

It works quite well! I recently worked with a large government department that had approximately 100 public, Internet-facing DNS zones hosted on a pair of Windows servers.

I seriously don't get this automatic dismissal of Windows Server as "not a real server operating system". An enormous fraction of large enterprises run on it, very often for Internet-facing infrastructure. Microsoft.com. Azure.net. Stackoverflow.com. You know... those teeny-tiny toy sites nobody ever visits.

Just because you're not familiar with it, and can't imagine how it could be managed properly, doesn't mean other people don't know how to do it.

PS: The list of BIND's CVE entries is here, in case you think that being UNIX-based makes it magically immune to security vulnerabilities: https://www.cvedetails.com/vulnerability-list.php?vendor_id=...


It's not about capability, it's about licensing. Exposing your Microsoft DNS Server over the Internet would require a Windows CAL for every citizen of the world, or an additional External Connector license to the tune of $5000 per server per year. That's a lot of money for the "convenience" of not having to set up a BIND slave zone.


That's just not true: you don't need CALs for anonymous access, and most enterprises have different licensing, such as Datacenter Edition, where this just doesn't apply.

Just like how you can run an Internet-facing website on IIS, you can run a DNS server on Windows.

Anyway, $5000 is pocket change for most orgs, and it's a lot less than what it would cost to run an additional operating system on top of Windows. The time it would take just to choose a distro before even starting to set up several Linux or BSD servers would cost more than $5K in most places I've been. Then you have to worry about keeping them patched, backed up, in-sync with each other, etc...

If you don't already run a largely Linux/UNIX shop, spinning up BIND is much, much more expensive than a couple of additional Windows VMs on top of the pile of hundreds they likely already have.


If it's not open-source, then how can you be confident that it doesn't suck? Hobbyists don't tinker with it, people probably don't publish benchmarks of it. Sure, maybe it works fine, but it might also be awful. So why take the risk when there's an open-source option?


Are you... joking?

First of all, I have decades of experience with Windows, Linux, and BSD. I know they all suck, just in different ways.

Don't automatically assume that the only operating system with which you are familiar is magically better. You've just become accustomed to its flaws and warts. It is the devil you know, and other, unfamiliar systems are scary and strange. To you. Not to other people.

By the way, the hobbyist tinkering is the main cause of the things I've found to suck in Linux. It's just unprofessional, through and through. It hasn't got the "boring bits" filled out, the bits nobody could be bothered to tinker with because they weren't being paid to. It still(!) struggles with spaces in file name paths. In 2020 it still struggles to consistently handle esoteric new concepts like the backspace key.

A case in point is the wailing and gnashing of teeth in the Linux community about systemd. From an external, objective perspective, this is absurdly childish. Linux is missing 80% of the basic functionality that MacOS and Windows have had since forever. Since the NT4 days in the 1990s. In reaction to the systemd team dragging Linux into this century, people lost their minds. They got so mad they were practically frothing at the mouth.

I've seen similarly amateurish, unprofessional, dismissive attitudes everywhere I've turned whenever I hit a problem with Linux. Most recently I dug into the history of why its DNS client is so incredibly bad, and it was just a shitshow of stupid, stupid arguments on various forums that have gone on for decades. Idiots arguing very loudly overruling the people giving good, but humble advice.


> Don't automatically assume that the only operating system with which you are familiar is magically better. You've just become accustomed to its flaws and warts. It is the devil you know, and other, unfamiliar systems are scary and strange. To you. Not to other people.

I don't know where you've got this idea from. I'm familiar with Windows and aware of what it does right.

> By the way, the hobbyist tinkering is the main cause of the things I've found to suck in Linux. It's just unprofessional, through and through. It hasn't got the "boring bits" filled out, the bits nobody could be bothered to tinker with because they weren't being paid to. It still(!) struggles with spaces in file name paths. In 2020 it still struggles to consistently handle esoteric new concepts like the backspace key.

Windows has its share of laughably bad issues in those areas. Its terminal may handle backspaces but it struggles with copy/paste. The inconsistently-applied filename length limit is just ridiculous, as is the file locking behaviour. On the whole it's just as "unprofessional" as Linux or anything else.

> A case in point is the wailing and gnashing of teeth in the Linux community about systemd. From an external, objective perspective, this is absurdly childish. Linux is missing 80% of the basic functionality that MacOS and Windows have had since forever. Since the NT4 days in the 1990s. In reaction to the systemd team dragging Linux into this century, people lost their minds. They got so mad they were practically frothing at the mouth.

And they were right. Systemd screwed everything up, and I'd rather use a system where the users are "childish" enough to call out when it changes for the worse than when they keep quiet out of "professionalism".


"80% of the basic functionality"

Is this a desktop functionality oriented post or something? Because there are things I find daily that can't be done with Windows Server, but come out of the box on most Linux distributions. Mostly storage, monitoring, or networking related-- aka some of the most important things.


You mean like Storage Spaces, ReFS, Access Control Lists including rich Claims support, TPM-integrated volume encryption, cluster volume encryption, encrypting file system with user-friendly smart card integration?

Or do you mean things like the Windows Management Instrumentation that is object oriented, works locally and remotely, and has over ten thousand counters by default? Or the event log that has had circular binary logs and log forwarding available with no extra software for two decades?

Or do you mean how Windows Server has RDMA support so that even simple things like file transfers over SMB3 can run at wire speed on 40 Gbps Ethernet and replace fibre channel at a fraction of the cost? Or is it the native IPv6 support that actually works (unlike, say, MacOS)? Are you talking about the Windows DNS client, which unlike Linux can properly handle an unresponsive primary DNS server? Or the policy-driven IPsec support? The Always On VPN capability maybe?

Which of these important things were you referring to?


None of those actually.

Mind you 4/5 of what's listed here is also functionally available on -nix, or straight up pales in comparison.

Specifically looking at Storage Spaces and ReFS: can't protect boot media with it (can't boot from USB either really, WTG deprecated), can't do error correction outside of basic configurations, also awesome to look at data loss and performance issues up into 2019 on "stable" releases with hardware on the HCL.

The WMI is a Windows-ism that has similar analogs with tooling on -nix. Don't get me wrong though, along with PowerShell the WMI/CIM stuff is extremely awesome.

Don't really care what MacOS does, not sure why it was mentioned, that's not what I'd consider in the "server" space.

Off the top of my head based on recent solutions I've had to implement: I'm talking stuff like per-process/user network stack segmentation (NetNS), proper native container primitives with an ecosystem to match (cgroups, runC, container, etc.), out of the box full syscall level auditing and logging (kernel auditing framework), fine grained and configurable application mandatory access control (AppArmor, SELinux), low level insight into storage properties should you ever need it (relevant FS tools), fully integrated storage and volume management options (ZFS; on FreeBSD at least), live kernel updates, and in general not making me want to gouge my eyes out any time I have to touch networking configuration.

I believe we probably occupy different spaces though, as I can see some things you mentioned that aren't entirely relevant to my use cases, as I'm sure the things I mentioned may not be relevant to yours.


Call me crazy, but as a hobbyist I tinkered with (pirated) Windows Server, Exchange, and Active Directory installs. I had OWA up and running from my home IP, it synced contacts and email with my phone via some exchange integration, years before gmail. It was actually a lot of fun around the Server 2003 era.

And while complex and byzantine, Windows Server was rock solid if you treated it right.


Saying this w/ extensive experience w/ Windows Server, Linux, and a couple of very old Unix flavors: Anything can be complex and byzantine if the developers put their minds to it.


Indeed. I have the same complaint about Linux now.


It's not "linux" it's "GNU/systemd"


Honestly I'd prefer systemd over the mess that is init.d


That isn't, and never was, the choice, however.


"Ninety per cent of Fortune 500 companies use Azure AD, the sign-in engine for Office 365." - https://azure.microsoft.com/en-gb/services/active-directory/...

But you can't know whether it works unless hobbyists tinker with it?? (Even though hobbyists do tinker with Windows)


Yep. I've seen lots of terrible software used by Fortune 500 companies - the person making the purchasing decision usually isn't the person actually using it, so the software quality has no impact on whether it gets bought.


I've seen lots of terrible open source software - nobody is purchasing it, and the people who would normally make purchasing decisions aren't involved at all, so the software quality has no impact on whether it gets bought.


Sure. But you can find what people say about a given piece of open-source software; generally people will have tinkered with it and kicked the tires a bit and written up their impressions, maybe ran some benchmarks. And maybe you miss out on the occasional gem that way because there's a really good library that just hasn't been touched by anyone for whatever reason, but at least you can avoid picking a lemon. Whereas there are lots of lemons in the proprietary software market, and no real way to distinguish them.


> there are lots of lemons in the proprietary software market, and no real way to distinguish them.

You can find what people say about a given piece of proprietary software; generally people will have used it and written up their impressions, maybe ran some benchmarks.

Have you honestly never seen a review of a closed source program? Never seen anyone write about their likes or dislikes? Never seen any shareware/try before you buy/free tier software? Never seen anyone use some, decide it was bad, and change to another vendor? Never seen someone try the free version of SQL server or comment about their experiences with IIS or review a new release of Windows? Never seen a computer game review? Or a magazine dedicated to game reviews? Or reviews of Apple app store software?


> Have you honestly never seen a review of a closed source program? Never seen anyone write about their likes or dislikes? Never seen any shareware/try before you buy/free tier software? Never seen anyone use some, decide it was bad, and change to another vendor? Never seen someone try the free version of SQL server or comment about their experiences with IIS or review a new release of Windows?

For enterprise software? Pretty much no. You don't tend to get shareware-style free trials, the more common model is a fully functional (and sometimes even open source) free version and then a paid version with additional features. Companies do switch vendors but it's hard to impossible to connect that with the actual user experience. You see case studies but usually published by the vendor themselves; a neutral reviewer doing comparisons is virtually unheard of. And it's pretty common for the licenses to forbid benchmarking or reviews (Oracle famously does).

When you're talking about software that only makes sense on the scale of a multi-machine network, who's going to go to the trouble of reviewing it? My experience is that hobbyists sometimes will, but only for the open-source stuff. Back when I worked for a Fortune 500 client we did do full-scale evaluations of proprietary systems, but they were always internal-only. If you're big enough to have your own evaluation department then sure, knock yourself out (this company was also big enough to do things like use their own internal programming language). But for a medium-sized company it's just too risky IME.


What others have said, but also keep in mind that many corporate environments have been running Windows DNS for nearly 20 years. DNS migrations off Windows take years. So many subtle changes, delegations, forwarding, etc. Windows DNS isn't the best, but it's reliable and gets the job done.

Plus, quick and easy dynamic DNS entries from Windows servers or workstations, using AD computer account auth.


Like others said, every corporate Windows network uses Windows domain controllers as its primary DNS resolvers. I wanted to add that, as computers are added to AD, you get a PTR record automatically in Windows. If you have even a medium-sized network where new computers get added all the time, it would be a pain to auto-create their PTR records in BIND. Also, for forwarders, I've seen DCs do full-on recursive resolution themselves, but when you need a forwarder you would ideally use a commercial solution that probably uses BIND internally, rather than (typically) rolling your own Linux BIND server.


> ... as computers are added to AD, you get a PTR record automatically in windows.

This is, of course, configurable and can be turned on or off.

> ... it would be a pain to autocreate their PTR records in bind.

Yes, it would. Fortunately, ISC DHCP and BIND, for example, can also handle this automatically.
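
In case it helps anyone picture it, the PTR side is just another dynamic update, aimed at the reverse zone. A rough dnspython sketch (the names, addresses and server are placeholders; a real ISC DHCP + BIND setup does this itself and signs it with a shared TSIG key):

    # What DHCP-driven DDNS does for the reverse zone: upsert a PTR record
    # for a freshly leased address. All names/addresses are placeholders;
    # a real deployment would authenticate this with a TSIG key.
    import dns.query
    import dns.reversename
    import dns.update

    ip = "10.0.5.42"
    fqdn = "workstation42.corp.example.com."

    # Full PTR owner name, e.g. 42.5.0.10.in-addr.arpa.
    print(dns.reversename.from_address(ip))

    # The record lives in the 5.0.10.in-addr.arpa zone; "42" is the
    # remaining label relative to that zone.
    update = dns.update.Update("5.0.10.in-addr.arpa")
    update.replace("42", 1200, "PTR", fqdn)
    response = dns.query.tcp(update, "10.0.0.10", timeout=5)
    print(response.rcode())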


Cybersecurity and Infrastructure Security Agency

Report (PDF):

https://cyber.dhs.gov/assets/report/ed-20-03.pdf


Even if your Windows DNS server is not exposed to the internet, it means that anyone who gets a device onto your WAN can get Windows domain admin.



