A Docker footgun led to a vandal deleting NewsBlur's MongoDB database (newsblur.com)
554 points by ecliptik on June 29, 2021 | 264 comments



It's not all that great blaming the victim. They clearly made the right moves with at least some of their configuration decisions and leaned on the underlying platform not being bonkers (but alas, it was: https://github.com/moby/moby/issues/4737 and https://github.com/moby/moby/issues/22054).

Should they have hardened in all the other ways for defense in depth, e.g. requiring authentication from localhost? Sure. Should the attacker have not done the attack? Without a doubt. But should docker have not completely undone a security control?

Yes, and the fact that docker seems to have persisted with the current state is the topic of discussion.

Previously: https://news.ycombinator.com/item?id=27613217


This same one got me after years of using Docker; I only discovered it after using the combination of Ubuntu Server (and its ufw) on a DMZed device.

I was running what I thought was an internal FTP instance for almost a week. Luckily it was about as hardened as regular ftp can be, but I noticed the problem when my service wasn't able to log in as the (very low) connection limit was filled by someone attempting passwords.

I've been using https://github.com/shinebayar-g/ufw-docker-automated to make docker compliant with ufw, and defining firewall rules as labels for the containers. It needs some work still, namely the service should be hosted in its own container for easier updating, but it works reasonably well.


Yep. This exact interaction between UFW and Docker had me stumped for a while too.

Totally unexpected outcome.

Luckily I discovered it in testing, so didn't make it to production. But annoying that those issues still remain for so long.


that first issue links to a really great flow-chart of how iptables works: https://cesarti.files.wordpress.com/2012/02/iptables.gif


I haven’t used docker in production. I guess it is used for k8s all around?


> It's not all that great blaming the victim

* Run a SaaS that costs money

* Do not perform the migration in prod replica setup with test data and monitor for, observe and fix any oddities

* Cause a 100% data breach by not following a sound change management procedure

Just because this individual is being blamed and not a Ltd. or Corp. doesn't mean that this is how a quality SaaS should be run. I know that it's-just-a-side-gig SaaS operations wing it more often than not, but this is exactly what happens when you haven't got good ops. Rather than a random error, this is a consequence of a systematic error.


> They clearly made the right moves

The machine with DB had a public interface. No matter firewalls, this is just bad. DB machine should be in private subnets, preferably with no inet access at all, even via NAT.

Proof that it had a public interface. TFA: "Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world"


>The machine with DB had a public interface. No matter firewalls, this is just bad. DB machine should be in private subnets, preferably with no inet access at all, even via NAT.

This is the type of thinking that led to the attack in the first place. No, you should not pretend that your database is secure because it is hidden deep behind dozens of layers of firewalls.

You should assume that your database is always accessible from the internet and make sure that internet access to your database is not relevant.

With mongodb you are supposed to use TLS encryption and authentication even for communication within the same host because a low privilege hack or mistake could forward a database port or make it public.

In this scenario you not only need to bypass all the network security theater, you also need the password and a valid client certificate signed by a private CA.
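Roughly, in mongod flag terms, that looks like this (just a sketch; paths, user and the CA are placeholders, and in practice you'd put it in mongod.conf):

  # server: require auth and TLS, even for loopback clients
  mongod --auth \
         --bind_ip 127.0.0.1 \
         --tlsMode requireTLS \
         --tlsCertificateKeyFile /etc/ssl/mongod.pem \
         --tlsCAFile /etc/ssl/internal-ca.pem

  # client: needs the password *and* a cert signed by that private CA
  mongo --tls \
        --tlsCertificateKeyFile /etc/ssl/client.pem \
        --tlsCAFile /etc/ssl/internal-ca.pem \
        -u app --authenticationDatabase admin -p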


What he said about the DB being in a subnet wasn't incorrect, nor what you just said about setting a password and using certs. It's defense in layers and not "network security theater". You don't do just one of them.


Even Google Cloud SQL is just a managed instance with a public IP (that you don't seem to be able to make private). Firewalls always help, and that's what Cloud SQL does: whitelist your IP when using the CLI.


What I do in AWS is have a VPC with 3 subnet types: public, private and restricted. Public ones have a default route to inet, private to NAT, restricted - none. DBs go in restricted, load balancers in public and web/app servers in private subnets. Plus whitelist firewalling using Security Groups.
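For the whitelisting part, the useful trick is that a Security Group rule can reference another Security Group instead of a CIDR, so the DB tier only ever accepts its port from the app tier. A sketch (group IDs and port are placeholders):

  # allow the DB port into the DB-tier SG only from the app-tier SG
  aws ec2 authorize-security-group-ingress \
      --group-id sg-0dbtier0000000001 \
      --protocol tcp --port 27017 \
      --source-group sg-0apptier000000001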


when i evaluated using a NAT setup using AWS NAT gateways it appeared to be quite expensive. we settled for strict VPC rules using only SecurityGroups in order to restrict communication. the instances did have public ip but nothing but intended services could connect to it anyways... i.e an instance running a HTTP API could only be connected to if initiated from the load balancer and itself could only connect to relevant services such as a RDS cluster...


Normally machines behind NAT use it only for pulling stuff on boot time. Main traffic goes via LBs. depends on your case, of course.


later on they also had the need for it to be honest, albeit only for a very specific external service that required us to have a fixed public IP address to connect from. The only solution deemed scaleable was using such an AWS provided NAT Gateway..

so yeah, it surely depends on the case given...


> Published ports

> This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.

Source : https://docs.docker.com/config/containers/container-networki...

Requiring authentication from localhost does not seem relevant to me, given that the creds would be stored somewhere, either in memory or in a file anyway, but exposing a port is not "binding on localhost".

However testing your firewall after publishing a docker port seems common sense.

Indeed, that's how I found out Docker was using the DOCKER-USER iptables chain that you can customize:

https://docs.docker.com/network/iptables/

And that's how I made a simple firewall that works:

https://yourlabs.io/oss/yourlabs.docker/-/blob/master/tasks/...

Another thing, instead of exposing ports like that, the easiest is to use Docker-Compose, so that your containers of a stack have their own private shared network, then you won't have to publish ports to make your services communicate. Otherwise, just create a private network yourself and containerize your stuff in it.
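As a rough sketch (image and service names made up): only the web service publishes a port, and even then only on loopback; mongo is reachable from the other services as "mongo" on the stack's network and from nowhere else:

  # docker-compose.yml (sketch)
  version: "3.8"
  services:
    web:
      image: myapp:latest          # placeholder image
      ports:
        - "127.0.0.1:8000:8000"    # published, but only on loopback
      depends_on:
        - mongo
    mongo:
      image: mongo:4.4
      # no "ports:" section: only reachable from services in this stack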

So for me that's two newbie mistakes which led to falling for this non-targeted, script-kiddie attack which has been going on since 2017.

But yeah, go ahead publish your ports instead of using docker networks how you should, "believe" in your firewall while you're at it, and then blame "docker footgun".


> Another thing, instead of exposing ports like that, the easiest is to use Docker-Compose, so that your containers of a stack have their own private shared network, then you won't have to publish ports to make your services communicate.

That only works with local communication unless you use docker swarm. So byebye high availability.

If you're going to make suggestions, at least think about them from a production point of view.


Most productions are fine with 99.9% uptime, and a single server is fine for that, so HA is not necessary for most production out there.

For example, I'm running a governmental, nationwide service handling 300k req/day, with 1k admins working daily on the site, on a single server without anyone complaining, and without dropping below 99.9% uptime even though I reboot to upgrade the kernel twice a month; I did have to fine-tune the system but that was actually pretty easy.


Quick calc: 99.9% uptime over 365 days means 8h45m of downtime a year. That's not a lot, especially if hardware or power fails.

In my personal experience, given a vendor SLA of 4:00 hours, Dell manages to send you replacement HW on time, but HP has up to now failed to deliver in window every time. YMMV of course. RAID is the only thing keeping us online in these cases.

There is also the normal server maintenance. Rebooting a kernel twice a month, if the reboot takes 2 minutes, already swallows 48 minutes.

Now to be honest 99.9% business hours only is mostly doable if you serve only 1 continent and manage to update/reboot outside business hours. I have a non-critical debian public web server running that updates and reboots every day at 3AM. Mostly because nginx didn't consistently apply lets encrypt certs. So far no one has noticed or at least complained.


If the server burns I'll just deploy another one from scripts and restore the data, I won't need to wait 4 hours I can do it right away, I will surely be able to do that in less than 8 hours!!! However, we don't own the servers, we just rent them from OVH, so we can spawn one in less than an hour and provision it in another, so maybe the fourth time my server actually completely burns in a year I will feel some kind of fear to do 99.8 instead of 99.9 :'''( <--- these are many tears.

Funny to see my above comment get a final score of -4, people don't like us saying "well just get a cheap linux box and we'll get you 99.9% uptime" ahah and this is "hacker news"!

Not to mention the comment that was like "you can't do production without HA", I have been developing, deploying, maintaining a bunch of productions for the last 20 years and very very few of them needed any kind of HA.

Anyway, with that experience I can get you >99.9% uptime with a 3 buck/month linux box, or HA for anything above 99.9, cause that's what experience is about kids ;)


That’s ~10 qps assuming working hours; indeed you don’t need more than one server for that.


How many user facing projects actually got so far they needed more than one server? 1% of them?


I'd say even less - a single db server is almost enough for let's encrypt: https://letsencrypt.org/2021/01/21/next-gen-database-servers...


> Indeed, that's how I found out Docker was using the DOCKER-USER iptables chain that you can customize

Only on IPv4...


> Yes, and the fact that docker seems to have persisted with the current state is the topic of discussion.

It's clearly written in the docs:

    To expose a container’s internal port, an operator can start the container with the -P or -p flag. The exposed port is accessible on the host and the ports are available to any client that can reach the host.
(from https://docs.docker.com/engine/reference/run/#expose-incomin...)

It should be common knowledge by now to either create a virtual interface / network on the host to publish ports on for usage by services that aren't running in Docker or (if you are in an environment where all services are in Docker) use --link between containers.
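For example (a sketch with made-up names; user-defined networks are the modern replacement for the legacy --link):

  # publish only on an address that is not the public interface
  docker run -d -p 127.0.0.1:27017:27017 mongo:4.4

  # or don't publish at all and let containers talk over their own network
  docker network create backend
  docker run -d --network backend --name mongo mongo:4.4
  docker run -d --network backend --name app myapp:latest   # reaches it at mongo:27017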


I don't think that's clear at all. If I set "bind_ip = *" in some application then it's also "available to any client that can reach the host", but the firewall is in front of that. I certainly wouldn't expect an application to frob with my firewall.

And as I understand it, this is very much an unintentional side-effect of ufw and Docker interacting – it's not Docker's intention at all to override any iptables rules, just an unfortunate side-effect.

"It should be common knowledge by now" is very hand-wavy. I never used Docker much, and I could have been bitten by this. Where do I get this "common knowledge" from? Not everyone is a full-time sysadmin; some people just want to run a small service, read the basic documentation on various things etc., and set things up. Not everyone is super-invested in Docker (or any other tool for that matter) to have all the "common knowledge" that all the experts have. This is why defaults matter.


Same here. It's even a little worse because of the vagaries of Docker networking. I have a server, it has a firewall that has all ports blocked. If I install Docker to run some containers and use the --network=host option, it all works fine. Containers listen on ports on the host, my existing firewall controls it, great. If I use what would seem like the more secure port-forward option, Docker adds a new rule to the top of my firewall ruleset opening the forwarded ports to all traffic, overriding my existing firewall for that port entirely.

I get that Docker is often run by devs on their laptops, but this is bad behavior. The simplest solution I've seen so-far is Docker should only automatically set these firewall rules on devices that only have RFC1918 IPs. That would still keep things simple for devs on laptops, but would prevent firewall override on servers.


You forgot IPv6, unless Docker still doesn't support that, ofc.


> "It should be common knowledge by now" is very hand-wavy.

It is written explicitly in the manual that any port you choose to publish is reachable by all machines that can reach the host.

Just how more explicit does this warning have to be?!

And anyways: a simple "netstat -lnp" would have shown the docker-proxy process and world-wide reachability.


What would you estimate as the total word count of the Docker documentation? The page you linked to alone is about 10k words, and there are a lot of pages. At a guess, we're looking at something the length of notoriously long novels like War and Peace. What percentage of the documentation had you read and internalized before you first put Docker into production?

It's very easy to come along after something blows up, find something a person could have done differently, and blame that person, treating them as stupid or negligent. It's easy, satisfying, and often status-enhancing. It's also going to make the world less safe, because it prevents us from solving the actual problems.

Those serious about reducing failures should read Dekker's "Field Guide to Understanding 'Human Error'": https://www.amazon.com/Field-Guide-Understanding-Human-Error...

It comes out of the world of airplane accident investigations. But so much of it is applicable to software. The first chapter alone, the one contrasting old and new views, can be enough for a lot of people. It's available for free via "look inside this book" or the Kindle sample.


> What percentage of the documentation had you read and internalized before you first put Docker into production?

Admittedly, not as much (and since I've got a couple of years experience, back then the docs were bananaware), which is why I ran into the same issue - but one thing I always do is set up staging environments and check if at least the basic expectations (=can't reach an internal service from the Internet, but can reach public services) are working. Aka: a portscan.

A basic check would have prevented so many carelessly-configured services. Generally, it doesn't hurt anyone to run a nmap scan as part of the regular monitoring - if the line count doesn't match the previous one, raise an alarm.
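Something like this in cron, run from a box outside your own network, already does the job (a rough sketch; host, paths and the mail alias are placeholders):

  #!/bin/sh
  # compare today's set of open ports against the last run
  HOST=db1.example.org
  STATE=/var/lib/portscan/$HOST.last

  nmap -Pn -p- --open -oG - "$HOST" \
    | grep -oE '[0-9]+/open/[a-z]+' | sort > /tmp/portscan.new

  if ! diff -u "$STATE" /tmp/portscan.new; then
    echo "open ports changed on $HOST" | mail -s "portscan alert" ops@example.org
  fi
  mv /tmp/portscan.new "$STATE"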


Oh, is that also in the docs?

My point is that this is the same behavior: looking for a reason why the person is wrong. That is a behavior that makes things worse because it takes the focus away from systemic improvements that make us safer. Please don't do it.


> That is a behavior that makes things worse because it takes the focus away from systemic improvements that make us safer.

Systemic improvement would be to educate people to build fail-resilient systems. There are a number of reasonable precautions that could have prevented all of these "I exposed my unsecured MongoDB docker to the Internet" instances:

1) Dedicated hardware firewalls (or in the cloud, firewall rules): If all your server is accessible on is 80/443 anyway from the Internet, it doesn't matter if you punch a hole in the server's firewall.

2) Regular outside port scanning as part of your monitoring. This helps you catch misconfigurations of your firewall.

3) When planning a new infrastructure, think about which system needs access to which other systems. Your MongoDB doesn't need to have an Internet connection.

4) Have someone experienced look over your staging system, do a full scale security audit or attempt basic measures of pentesting on your own.

5) Don't copy shit from stackoverflow or years old random blog post without understanding every single thing it does.

6) Don't use MongoDB which has been so fucking often in the news for being an insecure pile of dog poo that it physically hurts to read that name!

Organizations that fail to do at least the first four points as part of standard operating procedures deserve to be hacked, period. There is no excuse for slacking off on any of these in 2021.


While common knowledge, the resolution is not always clear. Is the current expectation that everyone using docker is schooled in network engineering?


> It's not all that great blaming the victim.

Everyone wants to be treated as a (software) engineer here, but the engineer's perspective in this situation would be the opposite: The victims are the customers, and the perpetrator, acting in negligence, was Newsblur. Risk management is a core part of the engineer's job. https://www.sebokwiki.org/wiki/Risk_Management


This is completely wrong. By this logic me killing a parent turns the parent into a perpetrator because the parent didn't learn proper self defense.

No, the parent is a victim and the children are dependents of the victim. Thus hurting the parent also hurts the children. It's the same thing with Newsblur. Hurting Newsblur hurts their customers.


> Everyone wants to be treated as a (software) engineer here, but the engineer's perspective in this situation would be the opposite: The victims are the customers, and the perpetrator, acting in negligence, was Newsblur.

There can be multiple victims, multiple causes/threat actors, and overlap between two categories.

But what's the end user a victim of in this case? Other than a brief lack of availability?


Is there any other profession where the professional, doing something wrong, would be called the victim? Would an engineer, having made an error when doing structural engineering calculations, be called a victim if the software used was too complex? Or a physician who overlooked adverse effects written in fine print? Or the electrician making errors when fixing things in a house with old wiring?

Of course the behavior of Docker is really problematic in this case, and ditto for MongoDB not having auth active per default. But both applications were consciously chosen by Newsblur.


> Is there any other profession where the professional, doing something wrong, would be called the victim? Would an engineer, having made an error when doing structural engineering calculations, be called a victim if the software used was too complex? Or a physician who overlooked adverse effects written in fine print? Or the electrician making errors when fixing things in a house with old wiring?

I don't understand the relevance of this to my original question.

> Of course the behavior of Docker is really problematic in this case, and ditto for MongoDB not having auth active per default. But both applications were consciously chosen by Newsblur.

Or this, for that matter

---

So that you have it, my original ask was: But what's the end user a victim of in this case? Other than a brief lack of availability?


Your actual original point was:

> It's not all that great blaming the victim.

Which is what I was replying to.

Of course this incident was inconsequential for Newsblur's customers. That in this case the impact for the customers was approx. zero doesn't change the relationship of Mongo+Docker <> Newsblur <> customers (i.e. who has responsibility and who feels the consequences, which defines who can be a victim), just the outcome for the customers.


> Of course this incident was inconsequential for Newsblur's customers. That in this case the impact for the customers was approx. zero doesn't change the relationship of Mongo+Docker <> Newsblur <> customers (i.e. who has responsibility and who feels the consequences, which defines who can be a victim), just the outcome for the customers.

See my previous note:

> There can be multiple victims, multiple causes/threat actors, and overlap between two categories.

I think we've both concluded now that newsblur was largely a victim as a consumer of docker.

Cheers, glad this helped.


If there is a third-party exploiting that error to cause damage, sure (not the case in your examples, which is why they miss the point). If you don't get security for your business right and it gets burned down by an arsonist, nobody is going to not consider the business owner a victim too.

Software culture is IMHO problematic there in multiple dimensions.


Network services really should have auth by default even if they only bind to localhost. There’s so many ways that a localhost service can become accessible to attackers. Any process with network access, no matter what user it’s running as, can access a localhost service - UNIX sockets on the other hand can be restricted by the usual user/group permissions. A localhost service can be exposed by e.g. an SSRF bug from a web server on the same machine. Or, accessed by a browser on the same machine browsing untrusted sites (an attack model for desktop services which open local ports), and so on. (In case you believe that the HTTP protocol restriction in the latter two cases protects you - protocol smuggling over HTTP is frighteningly effective!)

Of course, as seen here, a network service binding to more than localhost is strictly more exposed - and auth should be required in those cases (even when the service should ostensibly be LAN-only).

Secure-by-default is the best kind of security.


It’s an unpopular opinion but it’s 100% a good idea. If at all possible, bind local services to UNIX sockets instead of localhost ports. At least that way you’ll get some measure of access control effectively for free.

If you’re writing software that deals with network connections (as a client or server), write the code for UNIX sockets first. It’s usually trivial to bolt network connection code on top, and being able to deal with sockets directly can make automated testing significantly easier.
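Even when the upstream only speaks TCP you can get most of that benefit by parking a permission-checked socket in front of it, e.g. with socat (a sketch; path, user and group are made up, and chowning the socket needs root):

  # expose a loopback-only TCP service as a unix socket that only
  # members of the "app" group can open
  socat UNIX-LISTEN:/run/myservice.sock,fork,mode=660,user=myservice,group=app \
        TCP:127.0.0.1:8080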


A similar but more flexible solution I like is to create a wireguard network between all nodes and developer workstations, which creates a separate interface called wg0. Bind every private service to this and publish what you want through a reverse proxy.

It has the added benefit of not needing vendor-specific TLS configuration, and firewall rules are easy to configure since everything goes through the single VPN port.
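A sketch of what that looks like on one node (keys, addresses and the peer are placeholders); bring it up with "wg-quick up wg0" and then bind your private services to the wg0 address only:

  # /etc/wireguard/wg0.conf (sketch)
  [Interface]
  Address = 10.10.0.2/24
  PrivateKey = <this-node-private-key>
  ListenPort = 51820

  [Peer]
  # a developer workstation
  PublicKey = <peer-public-key>
  AllowedIPs = 10.10.0.11/32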


Secure by default is best for those people who do not know every little config detail. Secure by default helps those that you try to make life easier for, e.g. you open a port because you assume your users don't know how to... having security by default is the best option for those people. And by "those people" I mean everyone, including myself.


Making something secure "less secure" (according to needs) is usually easy. Making something insecure more secure can be complicated, easy to get wrong, easy to forget, or just something people don't know about.


I don't see how NewsBlur is getting a pass on this and Docker is taking all of the blame. Would they still get sympathy if they had "password" as their DB password and were hacked that way?

I would blame MongoDB for its default-insecure configuration. There's no excuse. It's been like that for at least a decade (when I last used it) and it was a bad choice even then. At a _minimum_ when they upgraded the engine to integrate WiredTiger they should have pushed that through as part of the breaking change.


100% agree.

'easy to start developing against' falls flat on its face once you deploy to production and realize you have to bolt on authn and authz to your code and data connections instead of properly designing it in from the get go. (it might work if you never deploy your products though. god knows how many products don't see the light of day.)


Crap like this is why I don't run docker. I'm glad RedHat took a principled stance on it and dropped it. Podman doesn't punch holes you didn't ask for in your firewall. What a ludicrous anti-feature.


That might be true, but regardless of docker (or other, similar solutions), shouldn't MongoDB have had auth protection?


MySQL/MariaDB have a completely open root account too... although default firewall rules should prevent public access too, unless Docker likes to punch that hole open too.

Yes, root account password and access permissions should be changed upon a fresh install, but the real issue here is Docker's "helpfulness" by opening ports without explicit permission. That's absurd, and has no reasonable excuse.


https://dev.mysql.com/doc/refman/8.0/en/default-privileges.h...

> Installation of MySQL creates only a 'root'@'localhost' superuser account that has all privileges and can do anything.

It's only a local account. Sure, it would be great to have a password on it, but it's not "a completely open root account".


> Sure, it would be great to have a password on it

It has a password, which the user has to change, before they can do anything else. From that same page you linked:

""" For data directory initialization performed manually using mysqld --initialize, mysqld generates an initial random password, marks it expired, and writes it to the server error log. """


That documentation is outdated. Debian does not prompt you for a password; it creates a root account, but you can't connect to the database using "mysql -uroot", you can only use it if you are root on the server (I forgot how they do the trick)


> MySQL/MariaDB have a completely open root account too...

That's flat out wrong. You can't start MySQL/MariaDB docker images without either explicitly specifying a root password, have it generate a random one on the first start of the container, or explicitly allowing an empty password.

Regarding native installs of MySQL/MariaDB, the situation is a bit more murky, but at least the Ubuntu/Debian packages will ask you for a password during setup (and the bind-ip is 127.0.0.1 by default which means you'll have to mess around with conf.d files if you want to setup an externally-reachable server).


I don't believe this is true. `mysql_secure_installation` script exists to create a password on the root account, and disable root@'%'. If what you said about the Docker image is true, this article wouldn't exist either.

Regardless, a supposed "enterprise" tool shouldn't start changing your config in surprising ways, like punching firewall holes without explicitly being asked to do so. That's just crazy...


> I don't believe this is true. `mysql_secure_installation` script exists to create a password on the root account, and disable root@'%'.

The packaging does the necessary setup for you, and modern MySQL has secure defaults anyway.


Modern Debian/Ubuntu MySQL packaging doesn't even do that. It defaults to Unix socket authentication, and so you can only log in if you are actually root on the system (eg. if you use "sudo mysql"). It does also only bind to 127.0.0.1 by default as you say.


You can totally start a new mongodb container and connect other containers to it without ever using auth. I'm doing it right now with docker-compose for local development.


I and the post I replied to were talking about MySQL. MongoDB is an insecure pile of cow poo that has been so often implicated in hacks and data thefts that I don't get why a) anyone is STILL using it and b) the containers STILL don't have a security-first default.


I think this still misses the point.

It's Docker's responsibility to not open firewall ports unless explicitly asked to do so.

Opening random ports without A) Telling the developer first, B) Requesting permission to do so, C) Explaining why Docker wants to do so and D) Providing the ability to "opt-out" of this insecure-by-default configuration - is very bad.

It doesn't matter how secure or insecure the software inside the container is... it shouldn't be allowed to communicate with the public unless the developer specifically requests it.


> It's Docker's responsibility to not open firewall ports unless explicitly asked to do so.

You are asking Docker to open firewall ports with the -p option, it explicitly even says in the manual that published ports will be reachable from everywhere that can access the IP you're publishing on (either default 0.0.0.0 or an IP of your choosing).


The OP didn't pass this option to Docker, and it seems it's pretty well established to be a significant footgun[1].

I don't have a horse in this race, as I don't use Docker for a plethora of reasons dating back to its inception. There are plenty of far superior containers to use. Just adding this footgun to the list of reasons...

[1] https://github.com/moby/moby/issues/4737


That's how both Mongo and MySQL, back in the day, became so popular.

They had super lax security, making them easy to use for newbie devs, who are frequently scared/easily distracted by security settings.

I'm quite convinced the lack of security was by design. Growth hacking and all that. Get everyone onboard and once you have big business going on, you can focus on the minutiae of security, scaling, not losing data.

Node.js, PHP, Docker (heh!) most other popular techs did the same thing: super lax and frequently wrong defaults, just to get adoption.

As a startup it seems you either choose right/security or wrong/money :-)


> right/security or wrong/money :-)

right/no users or wrong/yes users

Not much point being right if no one uses the product :)


does docker get money, though? anyone with just a bit of common sense has their own docker hub mirror and I don't know how else they earn money...


Docker figured out the adoption, but not the monetization...

The development strategy did work, though.


I've run a few webhosting companies back in the heyday of LAMP/LEMP.

This is a feature and a dangerous one.

Our provisioning had to carefully manage the firewall during installation, because MySQL would be open to the world *while provisioning*. Often this is mere seconds or less, but on a reasonably popular webhosting service, this is enough time for automated tools to take over the server. If, during that provisioning, some other tool started punching holes in the firewall you carefully manage from provisioning, I would be enraged. I dislike Docker for a lot of reasons[0], and this kind of behaviour is one of them.

So, yes: what MySQL or MongoDB do is dangerous and poor security practice. But what Docker does is worse: it breaks the countermeasures you take and rely on. If all your layers of protection are removed by the poor security practices of a tool that says "yea, but you need more than just this one layer", you still have no protection.

[0] yet I do use it a lot on development-machines to run the services needed on localhost and not have to manage all those with apt/snap/brew or whatever they come with.


> MySQL/MariaDB have a completely open root account too...

My point was: why didn't the author of the article enable the use of authorisation? To rely on only a firewall is just stupid.


Totally agreed with this. MongoDB’s default security posture is ridiculous, and amazingly it used to be even worse.


It’s exactly the same for redis, which means if used for jobs, you can insert arbitrary code.


All my async job code lives in a codebase and redis merely contains the payload and some ids.

What system, framework or stack are you using that stores the code of the job in the database? I'm curious, because I can imagine it solves some issues, e.g. deploying new code while allowing old code to finish running and scheduled jobs.


Depending on the implementation of your job framework you can either:

- run jobs that aren’t supposed to run (removing an account for example, if that’s scheduled after a 2 week grace period), maybe an export or import job. Can be anything of course

- if your job runner allows scripts, or arbitrary class methods, you can do whatever you want

- you can remove jobs if you feel like it

- if you can escape redis because of whatever exploit, you now have access to the internal network

In general I use sidekiq, but resque and inspired implementations generally work on simply calling perform(), so any class with such a method can be called, depending on the typesystem.

The biggest two issues are the ability to perform any defined task, and the larger attack surface of an exposed redis server


I think what they mean is Redis' Lua scripting stuff with EVAL and such.

Also, if you have stuff like this in Redis, which will get picked up by a job runner:

  {"runjob": "foojob", "params": ["one", "two"]}
Depending on how your job runner interprets this, it can be a big problem if people can write to this. At the very least it'll be a DoS in most cases, at worst (SQL) database, shell access, or leaking of passwords.


From 3.6 the default binding is localhost. That would be ok, except that if you map the port, you listen on all interfaces. This will happen with Redis as well and is a problem with both docker and its Dockerfile.


Even on localhost, you should still use auth. One shouldn't naively assume someone can't break into your server.


Yes. But by default things should not be open to anything but your machine.

So let’s start blaming every piece of software that listens to localhost, but is exposed by docker.

Not a fan of mongo, but the defaults are way better than before.

Docker needs to fix their defaults


If someone breaks into your server, database Auth won't protect you. And you have to reinstall everything including your database just to be sure he didn't leave holes in those.

And the hacker can still delete your database or steal your data. He will simply download the data files and then delete them from the server: database dropped without a database password. And no, databases aren't kept encrypted on the server.


Not necessarily -- there can be a number of reasons one can access localhost over the loopback interface that do _not_ imply root access: SSRF, misconfigured tunnels, or just a plain unprivileged account where the attacker couldn't perform privilege escalation (either because of the attacker's incompetence or the system being up-to-date and/or hardened)


Ok. Then add a password on those systems when you design like that. That's not the default, as most people feel otherwise.


I had the 'opposite' thought: if the whole system were in a VPC, Docker wouldn't have had the authority to expose a port to the outside world.

Looks like the author is ahead of both us. Later in the post he mentions both a planned transition to a VPC, and plans to beef up the security of MongoDB itself.


This isn't a footgun. A footgun is when something happens that should be expected, but isn't, because of negligence or ignorance of the thing that should be expected. The example that everyone seems to love is pointer arithmetic. If you make a basic error in your math, invalid memory access may occur and then likely more bad stuff.

Docker altering firewall rules without explicit instructions to do so is either a feature, if you're looking to defend Docker, or a security defect, if you're feeling rational. There's an argument to be made about the root cause of the issue, the Docker container manifest, or the application itself. But to pretend like this is what Docker should be doing, that users should know better, just seems wrong.


And now Google is picking up the definition of footgun(1) from Hacker news!

(1) https://i.imgur.com/pHlLFJA.png


I wonder if someone who's not known to Google to like HN would get the same result?


It's the 4th result for me, in a new browser and a relatively anonymous IP due to ISP NAT.


I get a quote from Wiktionary, not HN.


Your screenshot shows an earlier date (25-Jun-2018) next to the definition.


Yes it links to this comment: https://news.ycombinator.com/item?id=17393292#:~:text=Genera....

The funny thing is there are definitions for it in Wiktionary and Urban Dictionary yet it is picking the HN one.


The other funny thing is that that definition is also from a thread involving MongoDB.


Well, the footgun on this thread is on Docker...

There exists an entire class of software that is best avoided. Looks like their usage is highly correlated. As a parallel, I was just saying in a work meeting that the first step towards a good IT environment is not adding some crazy non-causal component like our F5 middleware, while waiting for a command to end on our Oracle database. That other class is highly correlated too.


I feel there are two issues to fix:

1. Docker doing this iptables change out of the box, and

2. MongoDB not having a password set out of the box

The life of a developer (and solo dev) means you often have limited time in which you need to navigate a project and try to do your best to understand, deploy and use it -- this is just one of many tasks on your TODO today to get you closer to operating your product.

I really wish these kinds of things were more secure in a few ways:

1. Defensive defaults (passwords, not opening holes in firewalls), and

2. Not making the security hard to use

If it's painful to work with the security feature and "get it working", someone with limited time may just undo the defensive defaults to get things working again (doh).

But I get it, sometimes a security piece on by default is so cryptically painful to understand that you get so frustrated with it you just turn it off.


One of the main issues with security is that there's no visible difference between a well secured system and one that's open to any script kiddie. And you might install something or make a mistake in a config file, and suddenly your system is completely insecure, and again there will be nothing obvious about it.

I think what's missing is an easy to use tool, installed by default - you run it and see a clear overview of your system security - what ports are opened, how is SSH accessible, how secure are the running services, etc. with plain ticks/crosses to show what's good or not. The kind of tool that a developer can install on their server and then check that at least the basics are right.


Understanding security is no longer optional if you try to run a business on the internet. People like to talk about how they want to focus on the product and building features, but if you don’t have security you don’t have a functional product. No, security is not easy, but it’s not an optional skill set anymore. Learn it or fail.


The enterprise space is rife with such tools, but unfortunately they tend to tell you everything, which makes it hard to identify what actually matters.


The biggest issue with docker is the false sense of security it seems to give a lot of engineers into thinking they know infra when they really don't. I stay away from this because I don't understand it fundamentally, and now this proves it's better to not think these new technologies are your friend unless you actually know what you're doing (which apparently most don't).


> the false sense of security it seems to give a lot of engineers into thinking they know infra when they really don't.

I don’t say this lightly: this is a big problem in our industry right now. DevOps means (to some) that developers now handle operations. The reality is that it’s difficult to juggle an operations mindset with a feature driven one.

Even if you focus on infra problems full time there is so much ground left uncovered, it would be impossible for anyone to juggle all of dev and all of infra at once, and this is compounded by the fact that ops is reductionist and dev is additive, which is an incredibly difficult problem for a person to reconcile and give full attention to one side. That’s why devops was supposed to be a division of labour; not just a dude/dudette who can configure nginx and write code.

Reminds me of this: https://www.weforum.org/agenda/2021/04/brains-prefer-adding-...


I'd argue an equally big problem in this setting is that engineers have been taught to think in a perimeter security mindset.


No: good security is layered aka "defense-in-depth".

docker breaches the localhost VS external port security model (and others)


Staying away from it is still not the best strategy - at least learn and play with it to understand its strengths and weaknesses.

> it seems to give a lot of engineers into thinking they know infra when they really don't

Maybe, but technology changes over time - I don’t see many new projects choosing VMware over docker/OCI for new infrastructure deployment since you usually don’t need a full VM for applications that just need isolation and easy static deployments.


> Staying away from it is still not the best strategy

There's a whole generation of sysadmins that use docker so that they can stay away from foundational knowledge. We interview experienced devops who do not know/understand how to build basic packages from source (e.g. they don't understand the ./configure, make, make install chain) and who only have basic knowledge of the underlying operating system.


To be honest, I know how to type `./configure`, `make` and `make install`. And I know more about Gnu Make than I would admit in a job interview (for fear of someone expecting me to work with Make).

But so far, life has been too short for me to waste my time on Gnu Autoconf. And I don't feel guilty about this, or like someone who doesn't know fundamentals.

Autoconf is by all accounts a horrible system. Gnu Make ain't much better.

So I can't fault people for trying to avoid this mess.


> So I can't fault people for trying to avoid this mess.

Ex-Amazon here. Being deeply familiar with OS fundamentals and internals, "low level" tools, and so on is crucial.

A lot of candidates show to interviews with CVs filled with names of popular frameworks and fancy devops tools and often don't understand what really happens behind the curtain.

People are becoming less familiar with the basics and tend to reinvent the wheel. In many teams this does not get you hired.


I'm not saying you are wrong but this sounds like it is in part a hiring problem.

> Being deeply familiar with OS fundamentals and internals, "low level" tools, and so on is crucial.

...needs to be front and center on the job description then. Far too often these kinds of things are hidden on the initial job descriptions and it just wastes everyone's time. I get that HR/recruiters want to throw a big net but vague job descriptions are often part of the problem.


I cannot generalize for the whole company. When writing jobspecs in my team we never made lists of the "popular frameworks of the month".


I wholeheartedly agree that I spent too much time of my life with GNU autoconf and make. But docker is on a completely different level. After spending years and years of working with and around docker and speaking with others that do the same, I came to the conclusion that I will recommend docker only for one use-case in the future: making easy examples to start up a system in one line.


All software fundamentally helps us deal with abstractions like this.


Build systems in general can be ok, or even elegant and fun.

Autoconf is just really terrible.


Autoconf is one of those programs that solved real problems when it was created, in a way that was actually fairly reasonable at the time too. There's a reason it became so popular. But it's now 30 years later, and what was reasonable then is a liability now.

Sendmail is another great example of this phenomenon.


Oh, yes, Autoconf was somewhat defensible when it was created.

Just like PHP and COBOL solved real problems back in their heyday.

They are just not something anyone should be using nowadays.

(Btw, Facebook's Hack is a surprisingly pleasant language. Probably the best one they could have made, coming from PHP.)


> They are just not something anyone should be using nowadays.

I guess it depends, take Vim for example: been around since the early 90s, the developers are familiar with autoconf, and it seems to work well for them. Why fix something if it ain't broke?

Switching build systems is not necessarily an easy task. Also, as a user I always found cmake quite hard to deal with. Actually, as a user, I actually prefer autotools in spite of its shortcomings. I think there's still quite a bit to win here.

Either way, I personally wouldn't really call autoconf "terrible".


It's like the saying "the shortest path between two points is the one you know" - the simplest build system is the one that exposes the least complexity to you, which is usually the one that is already deployed and working to your satisfaction.


Unless you want to make a change or fix a bug, that is.


Well, cmake ain't exactly my favourite piece of software either.

And, yes, switching an existing system is a different proposition from starting from scratch.

Though see what the Neovim people have been doing to remove some of the accumulated cruft from Vim, if you are interested.


I suppose my premise is that engineers often delude themselves into thinking that they know it well enough to implement docker when they don't. I'd much rather the regular developers focus on the coding and leave those decisions to actual devops folks who can focus on getting this stuff right. But the simplicity of a Dockerfile lulls many into thinking that if they run something with it, it's production ready.


Doesn't that run counter to what devops actually should be? I know that in practice, this split is what happens at companies, but we should work to close that rift, not widen it.


Whatever devops is supposed to be, it shouldn't involve relying on a tool whose security characteristics the team does not fully understand. If requiring such knowledge results in widening a pre-existing rift, so be it.


That's true, and something that often gets lost in the acceleration of "devops culture". But, in my opinion, the consequence then should be to try to not use Docker at all, and look for alternatives (like Podman) instead of just taking control from the dev team.


We did exactly that last year.


I can't help but be reminded that zero trust architecture for security has been a thing for at least a decade, and that the 2004 Jericho Forum concluded that perimeter security was illusory, more akin to a picket fence than a wall.


Everyone embracing immutable server configuration would help too. We give so many tools like docker complete trust to touch configuration and do the right thing. Sometimes it bites us though. In a perfect world you'd see docker try to change iptables and it fail, then investigate what's up and understand that a specific change has to be allowed and all the implications of that change.


Perimeter security would have been just fine here. The breach occurred because the host was exposed directly to the internet, rather than e.g. sharing a private network with a load balancer.


Clearly it was not fine, but in fact leaky, and depending only on perimeter security was (and is) flawed. See https://collaboration.opengroup.org/jericho/commandments_v1....


There was no perimeter security here. The attacker did not first enter a private network and then pivot to MongoDB; he dialed MongoDB right from the internet. Had Mongo been un-authenticated on a private network it still might have been owned, but the bar would have been a lot higher.

Side note: everything that’s ever existed is “flawed,” it’s a weird word to use in the context of something you want to discredit, because then your alternative had better be “flawless” and it obviously isn’t.


There were several pieces here that conspired to produce the unfortunate end result, the blackmailing attacker exploiting the broken perimeter security was just the last piece.


There was perimeter security in this case. The user diligently configured a white-list only UFW firewall. That is their perimeter.

Docker diligently sidestepped that firewall, and in so doing exposed that this was a case of perimeter security. Because by bypassing that single external filter, the entire service was now vulnerable.


I guess this is hair-splitting semantics, but I think when most people say "perimeter security" in the context of a web production environment, they mean that things like DBs, message queues, and backend services share a private network with the servers that actually terminate user TCP connections.

Obviously with only perimeter security, those servers are soft targets to an attacker who compromises a frontend host. I am all for hardening the interior.

"Don't put stuff on the internet that doesn't need to be, even if you think it's secure, because it's probably complicated enough for you to be mistaken about that." This is a perimeter security philosophy, and also what OP needed. If anything the host-level firewall mishap seems closer to an application-level authz bypass than to a pivot across a "trustworthy" network.


There was no perimeter security here

...yet MongoDB's (default) configuration assumes there is. That's the big problem with perimeter security: applications offloading their security responsibility to other, possibly imaginary, parts of the system.


Believing that a zero trust architecture is a replacement for perimeter security is just as illusory and dangerous. Defense in depth ensures that a vulnerability or temporary configuration mistake in one facet of security does not lead to total compromise.


> If a rogue database user starts deleting stories, it would get noticed a whole lot faster than a database being dropped all at once.

This feels like an odd statement. Surely a database being dropped all at once is about the loudest possible thing that could happen to a database-reliant application?


I was puzzled as well, but I think "faster" here should maybe be read as "noticed in time to do something about it"


Yes, but e.g. it doesn't show up in bandwidth monitoring.


it does if the data matters...


Holy shit, I've been running open ports for a year with docker, completely ignoring my firewall! This has to be fixed ASAP.


One of the best things one can do with Docker, for security but at the expense of convenience, is to configure it to disable the docker-proxy and IPTables manipulation and manage IPTables yourself.

It'll be slow at first, and you'll need new rules each time you bring a container up but you'll have far more control.

We switched to this approach when we had an IP dual-stack issue with the proxy and not only do we know it's secure because we wrote the rules, we get better performance without the userland proxy and more control over routing.

It's important to understand that because your containers sit on a bridged network, INPUT rules do nothing to stop incoming connections, it's all about the FORWARD table.
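For reference, the daemon.json side of that is just (restart dockerd afterwards):

  {
    "iptables": false,
    "userland-proxy": false
  }

With that in place Docker no longer touches iptables at all, so NAT and forwarding for the bridge are yours to write too; a minimal sketch, assuming the default docker0 bridge / 172.17.0.0/16 subnet and an external interface called eth0:

  iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
  iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
  iptables -A FORWARD -i eth0 -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
  # plus explicit DNAT rules for anything you actually want reachable from outside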


note that if the containers are bridged the iptables FORWARD table won't affect packet forwarding between containers either. you would need to use ebtables for layer 2 firewalling i am almost certain.

Edit: apparently there is a sysctl setting that makes the bridge netfilter code call the iptables filter routines (bridge-nf-call-iptables) so i am kinda wrong...


This (Docker opening a hole in my firewall) is why I moved my dev server from Linode to Digital Ocean. DO provides a “cloud firewall” that provides something akin to AWS security groups and therefore can’t be messed by Docker. Linode doesn’t have anything like that (last time I checked at least).


This is perhaps the best argument I’ve seen for a separate firewall device, even if it’s in the cloud (and just software) - something on your box running as root may bypass your rules just to help you.


Alternatively, running all your services as VMs also helps.

Having root in a VM doesn't typically give you any rights on the hypervisor (at least not on eg Xen).


Well, if they get root on your mongo vm they can still drop all your tables (or ransomware you) right? So would it make a difference in this particular case? Outside the VM tooling probably not being so insane as to bypass the firewall?


Well, in this case docker was trying to be helpful.

On a hypervisor, it's much harder for VMs to influence each other.

Linux containers (and docker amongst them) started out as convenient and reasonably performant, and added security later. One patch at a time.

Historically, hypervisors typically started secure and added performance and convenience over time.

(Very simplified. But I used to work for XenSource back in the day.)


They recently added one. In fact I had to move many of my VM's to new hypervisors because the ones that didn't support the cloud firewall were deprecated. I don't even use their cloud firewall.


I recently started hosting something on Linode, thanks for calling this feature out. It looks like they started launching Cloud Firewalls back in November. Full rollout was maybe April?

https://www.linode.com/blog/linode/cloud-firewall-beta-open/


Make sure to flip on the feature though, it's not on by default last I saw spinning up some droplets a few months ago.


Along the same lines, I remember coming across a lot of bad information when learning about Docker & iptables. For example, this article about using fail2ban & docker will leave your system unprotected: https://chlee.co/how-to-secure-and-protect-nginx-on-linux-wi...

You have to tell fail2ban to use the DOCKER-USER chain when adding rules, otherwise none of the rules it adds do anything for docker.
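The gist of the fix, as a sketch (assuming the stock iptables ban actions, which take their target chain from the jail-level "chain" setting):

  # /etc/fail2ban/jail.d/docker.local (sketch)
  [DEFAULT]
  # put bans into the chain Docker actually consults for published ports
  chain = DOCKER-USER
  # ban by source address only, so DNAT'd port numbers don't matter
  banaction = iptables-allports

Then restart fail2ban and check that its f2b-* jumps landed in DOCKER-USER rather than INPUT.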


Docker has worked like this for a long time. It's a really bad default, and they damn well ought to stop operating this way by default, but it's also like Docker Devops 101. When you install Docker on a Linux system, you should configure the DOCKER-USER chain to drop everything originating on your public network interface. You should also stop running services bound to localhost and instead run them on a private network. You can proxy any traffic that really ought to be allowed through from the outside, or if you can't do that for some reason, then make a single exception to the iptables and host binding rules for that one container.


Would you know of some blog post or guide in the vein of "essential sane(r) defaults to apply after installing Docker" kind of tutorial? Which included, for example, the chain config you mentioned


This feels like something you might forget if it's just been running for a while.


When I moved my docker setup to a new Fedora server a few months ago I was surprised I had to add firewall rules to allow traffic to the ports exposed in the containers. I’m not sure whether this is for any port or specifically for things running under the docker user, but RHEL should be safe by default.


I have talked about this before. This is completely non standard behavior, but the way the docker team simply washes their hands is incredible.

https://github.com/chaifeng/ufw-docker/issues/31

https://news.ycombinator.com/item?id=22299693


A big thing is that this also happens on your dev machine with "docker run".

(Unless you know to use -p 127.0.0.1:1234:1234 instead of just -p 1234:1234 like all the examples on the web tell you.)


Alternatively, don't map any ports and instead use docker inspect to find the container's IP address and use that to talk to it. That way you don't have to map different ports for every container
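e.g. something like:

  # print the container's address on whatever network(s) it is attached to
  docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' mycontainer

That address is only reachable from the host itself (or from containers on the same bridge), which in this context is rather the point.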


This should maybe be the standard example.


Kinda surprised so few people have any idea how docker networking works...

The OP didn't (I think) say but... were they running their containers with the --expose or --publish options? Without those, there shouldn't have been anything listening. And with them, well, the clue's in the option name...

That said, I've never liked the way docker implements its networking. Not only does it monkey with iptables (not too bad once you expect it) but, worse, it silently enables IP forwarding on all interfaces! Any system where it is installed is now a router; this should make anyone who uses a VPN to connect to work rather nervous...

I really wish docker used a different solution for exposing container ports by default, such as having the docker daemon proxy traffic. Yes, it wouldn't be as fast. But this wouldn't matter for the vast majority of users, who would benefit from having behaviour that matches other listening ports, doesn't bypass the firewall, etc. Have the iptables implementation available as an option that is there when you really need the performance--and know what you're doing!


I see everyone discussing how much Docker is at fault, how much Mongo DB is at fault, how much NewsBlur should have had better settings, and I do agree to some extent.

However, the much worse problem seems to be the fact that NewsBlur didn't test their network connectivity with something as basic as a port scan. This wasn't some sophisticated attack based on complex code injection jumping through legitimate ports or anything: their machine had the Mongo port open when they thought it should be closed. This is the kind of thing that absolutely shouldn't make it past basic testing.


Thank goodness that someone noticed the real problem: You can't see network traffic.

You don't write a program and deploy it without testing, so don't do the same with your firewalls. You need to actually test your firewall policy and not blindly assume it does what you think it should.


As the after-action report concludes, and as is painfully obvious as you read this, it should have been in a VPC.


But even before talking about the fix: the setup should have been tested, ideally before it was deployed (if possible).


yes, the author made a junior dev mistake and he is trying to blame [tool name here].

Making mistakes like this sucks but passing the blame makes it even worse.


Making mistakes is not the problem - everyone makes mistakes. But you need testing to catch your mistakes - this is a process failure.


While I'm glad no data was leaked and that Samuel is taking the layered defense approach seriously as it was wildly debated on the previous HN thread, I'm still a bit miffed that we only got confirmation that no leak actually occurred now.

As a Newsblur customer, would have really preferred the following course of action: lock down infrastructure, confirmation that no leak occurred, communication of that fact to customers and THEN service recovery. While this was all happening, the only indication that most likely no data was leaked was a single HN commenter that had a similar experience: https://news.ycombinator.com/item?id=27615708


The real issue here is that the "backend" database engine was exposed on the public internet. It should have been a layer behind that, with suitable protection: as mentioned, a VPC with security group rules or an equivalent firewall.

Regardless of the docker foot gun this could have been a human fudge as well.

Having done exactly this myself before with SQL server and Slammer I didn’t blame the default configuration, I blamed myself.


The issue is that docker circumvents a platform security control without warning.

Docker on Ubuntu has a critical security vulnerability, with a track record of being exploited. That, clearly, is something that should be fixed ASAP.

If you have just a single layer, sure, blame yourself for poor security practices. Services should listen to unix domain sockets or internal networks only. They should have authentication in place. Machines should not have routable IP addresses. Machines should have externally applied firewalls. And so on. That doesn't mean the security layer that did fail shouldn't be fixed.


Sorry if this sounds like blaming the victim (I'm probably doing it, yeah), but if you know all that, and even if you don't, having a DB server meant to be accessed only internally sitting on a public IP is basically wrong under all circumstances. I mean, the Docker "footgun" argument can be valid if you are running a single host with multiple services, some of them public and some of them private, and dockerizing the private ones gets them exposed to the Internet even though you had a firewall rule to manage that. Fine, let's blame Docker. But in this case, I'm sorry, the Docker behavior just exposed a broken design.

I really hope they learn the right lesson from all this, which is not to just blame Docker, but to carefully think about whether you really need to use the Internet as a means of internal communication between your services.


Correct. My point is that security is no good, full stop, if there is only one layer.

If your box is on the public internet, even temporary rules and default states aren't protected against until your automation is complete. The same happens if you screw up your automation.


I'm a bit curious as to why they had the MongoDB docker container running with the 27017 port mapped in the first place, maybe for testing?

You don't need to map ports for inter-container communications, and maybe that's where some of the confusion comes in for people using docker.

I have noticed a few friends who have been playing with docker for home server stuff thinking that they need to map ports for everything, just to get containers in a stack talking to each other.

At the same time, Docker really shouldn't be messing with the host's incoming firewall rules.

And MongoDB really should force auth by default.


> ufw firewall ... kept on a strict allowlist with only my internal servers

> I switched the MongoDB cluster over to the new servers.

It sounds like they had multiple Mongo nodes on separate machines, communicating over the internet. Presumably the application(s) are also on separate machines and would need to communicate with Mongo.

I think `docker network` has options to create a network across Docker daemon hosts? But that has some tradeoffs, I'm sure.
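
If I remember correctly, the multi-host option is an overlay network, which needs swarm mode enabled even if you don't otherwise use Swarm services. Something like this rough sketch (network and container names are made up):

  docker swarm init                 # on the first host
  docker swarm join ...             # on the other hosts, using the token printed above
  docker network create --driver overlay --attachable backend
  docker run -d --name mongo --network backend mongo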


I learned my lesson about not trusting a machine's own ability to secure itself back in 1998 or '99, when an IIS machine installed on a Thursday was completely taken over by viruses on Friday.

Block the ports using private networks or use the IP security provided by the cloud provider (which does not use your machine's security to accomplish this).

Defense in depth and all that.

But sure, docker has a footgun, everything does, otherwise linux would not ship with sudo. (someone's going to reply with a linux that does not come with sudo).


I was bitten by this as well; luckily I was setting up ufw after Docker (and was really surprised that dockerized services showed up in a port scan).

To be fair, it did not look to me like Docker intentionally punched holes in the firewall: it deliberately places its rules in a separate chain, which, due to the way iptables evaluation works, ends up bypassing the firewall rules.

However, once it became apparent to the Docker developers, fixing it should have been made a priority, as it is a counterintuitive default.


Another issue that is not discussed: when I type

“sudo ufw status verbose”

I expect to see open ports in my system.

It's an issue that ufw doesn't show some open ports (i.e. it doesn't reflect all the relevant iptables rules).


ufw assumes you use only ufw to configure the firewall. If something else messes with iptables, all bets are off.



The only question I can ask is, why Docker? Newsblur presumably existed before Docker.


It did, but by an interesting coincidence, it only really got traction when Google Reader shut down, which happened to be the same month as the initial release of Docker.


I think docker/podman with firewalld behaves the same way. I was astonished when I found that out. I think you can, however, bind to localhost exclusively by including the IP in the argument:

docker run -p 127.0.0.1:5555:5555 ....


I am running a docker-compose setup with various services, only ever exposing ports so that my reverse proxy on the same machine (which is not in a container) can reach them. The config explicitly specifies "127.0.0.1:external:internal". To my dismay, I can actually reach those external ports from the internet, even though I have some simple iptables rules to drop anything but 22, 80 and 443.

I never really liked the whole container idea, but this is ridiculous.


Sounds like your config is doing the right thing. It would be interesting to see what 'iptables -S -t nat' and 'iptables -S -t filter' say, as well as 'docker port CONTAINER_ID'.


Are you sure? I also have "127.0.0.1:external:internal" on some services and I cannot reach them with nc, but I can access those that are just configured as "external:internal"


Just want to confirm I understand the issue here.

1. `docker run -p 0.0.0.0:6666:6666 ...` - docker will update the firewall to allow traffic to port 6666

2. `docker run -p 127.0.0.1:6666:6666 ...` - docker won't update the firewall.

3. `docker run -p 6666` - container port 6666 is published on 0.0.0.0 at a random host port

I see 3 being a little surprising, but 1 and 2 do what I would expect.
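
One mitigation, if I remember the daemon options correctly, is changing the default publish address daemon-wide, so cases 1 and 3 bind to loopback unless you explicitly ask for something else:

  # /etc/docker/daemon.json (equivalent to dockerd --ip=127.0.0.1)
  {
    "ip": "127.0.0.1"
  }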


> 1. `docker run -p 0.0.0.0:6666:6666 ...` - docker will update the firewall to allow traffic to port 6666

I don't think Docker should touch the firewall whatsoever. Apache doesn't, nginx doesn't, PostgreSQL doesn't, etc. Why is Docker different? The sysadmin should decide that!


Maybe it is more convenient for developers, so they don't have to touch the firewall? Docker is not only used by sysadmins.


Why not add a --open-firewall-ports option and let the user decide? If you really want to make it easy, maybe print a warning when that option isn't given and the port is not open.


Haha yes of course it's easier for developers, but they write the software that I have to deploy and manage on my server!


I think the one case you missed is

`docker run -p 6666:6666 ...` which defaults to `-p 0.0.0.0:6666:6666`

I can't find it in the article, but I suspect this is what actually happened. It's a bad default and unfortunately most examples build on it.

Similarly, it's easy to deploy MongoDB without a password. But it's also easy to set one; the docs even mention it: https://hub.docker.com/_/mongo
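
Something along these lines, for example (the credentials are obviously placeholders, and you'd still want to keep the bind on loopback or a private network):

  docker run -d --name mongo \
    -p 127.0.0.1:27017:27017 \
    -e MONGO_INITDB_ROOT_USERNAME=admin \
    -e MONGO_INITDB_ROOT_PASSWORD=change-me \
    mongo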

And everyone can run a portscan with nmap against their servers. Unfortunately the MongoDB default ports 27017-27019 are not part of the top 1000 ports which nmap scans by default.
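
For example (hostname is a placeholder; a quick targeted scan would have shown the exposed port):

  # 27017-27019 are outside nmap's default top-1000 set, so name them explicitly
  nmap -Pn -p 27017-27019 your-server.example.com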


Docker should at least print a warning message that it is editing iptables rules.

I remember experimenting with docker running on a VM, and being confused for a long time why my ufw rules didn't seem to be working, until I found out this was a known issue.


I mean, that's a bad docker feature, but so many other things went wrong to get there in the first place. No db password?! I'm guessing that if it was talking over localhost the db was on the same system as the web server? Agh.


I have been using the ufw-docker script[1] to patch the ufw and Docker issue on my servers. Otherwise the Docker containers are publicly accessible even if you have ufw enabled and rules applied.

These days it's better to use the hosting provider's firewall on top of ufw. Leading providers like AWS, Hetzner, etc. all let you add firewall rules via their UI.

[1] https://github.com/chaifeng/ufw-docker


That's why you need more than one layer of security over your critical assets. Network-based access control always feels easy to misconfigure to me. Basic auth would delay an attacker. Encryption of data at rest would render stolen data unusable to the attacker.

General advice - perform threat modeling of your services to uncover weak links.


It's amazing to see everyone pointing out the poor infrastructure security being downvoted. I guess engineering doesn't matter anymore, just blame your tools.


Thank you for continuing to support censored users by allowing Tor. Cloudflare, for example, has wreaked havoc on these vulnerable people.


> When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world

Can Docker really be blamed here?

It sounds like when they ran MongoDB they explicitly published Mongo's port to the internet by either adding the ports property to docker-compose.yml or the -p flag with a Docker command to open Mongo's port to the outside world.

Not to pour salt on an open wound but why are they blaming Docker for this? Surely an ops person should have thought to check if any internal services are exposed to the outside world? This goes with or without using Docker. This would be like explicitly opening a port with iptables by copy / pasting something off the internet and then blaming iptables because it opened the port.

If anyone is curious, years ago I've blogged about the difference between expose and publish for Docker at https://nickjanetakis.com/blog/docker-tip-59-difference-betw....
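
A rough illustration of the difference, as I understand it (mongo is just an example image):

  # --expose is effectively metadata: nothing is opened on the host, and containers on
  # the same user-defined network can reach each other's ports regardless
  docker run -d --expose 27017 mongo
  # -p/--publish maps the port on the host; without an address it binds to 0.0.0.0
  docker run -d -p 27017:27017 mongo
  # restrict the mapping to loopback explicitly
  docker run -d -p 127.0.0.1:27017:27017 mongo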


Or there just should've been a password.

This whole incident would've been prevented and hopefully eventually rectified if they'd put auth on the localhost interface as well.

Let your tools save passwords, but it's insanity to have anything just blindly accept requests - you're one mis-directed script away from people wiping out your prod DB, let alone dedicated hackers.


Yeah, passwords are good.

I remember back in 2016 I had a similar issue with Redis. I ran it with -p 6379:6379 with no password and someone got into my Redis container back when "crackit"[0] was a thing.

But I never blamed Docker or Redis. I blamed myself because I was careless for not fully understanding the implications of what -p did at the time.

[0]: http://antirez.com/news/96


Were you using Docker raw? No orchestration? If so, what's the advantage of dockerizing your data stores?


Having all of the context from operating system version to all the dependent software packages has terrific advantages in deployment and provisioning. One single command for the predictable and repeatable launch of the DB and all the services is super useful even without Kubernetes or other orchestration.



This is why security in depth is important. A simple DB user could have prevented this, and many other consequences from misconfiguration. This time it's docker, next time it's a broken private cloud setting or a compromised node elsewhere on your network.


Come on: a DB without authentication or user permissions, on a public network...


Does MongoDB still ship in a no-auth-required-by-default mode? I could've sworn that was the case back in the early NoSQL days. But if they're still around, surely they must have fixed that by now?


Based on my comments and the reply from a mongo employee in the previous thread about this: yes, they changed the defaults in 2017: https://news.ycombinator.com/item?id=27613761

That said, it also seems like the docker images are a different beast, so be careful what containers you ship and with what configuration…


> In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server!

Does anybody know if this threat is at all credible?


The way it's phrased doesn't make much sense but they could contact whoever is responsible for GDPR enforcement in a particular country (if in the EU) and make a complaint against you. But it's unlikely to come to anything (especially if all the data was deleted) and you won't be arrested.


You could end up paying a fine if the regulatory authority investigates and finds that personal data was exposed (or deleted) due to lack of "appropriate technical and organisational measures to ensure a level of security appropriate to the risk"[1].

Relatedly, you can be fined for not telling the regulatory authority about breaches yourself: "In the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority competent in accordance with Article 55, unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons."[2]

British Airways[3] for example was fined for inadequate security measures. I also remember a case where some company (cannot quite remember which) itself told the regulator about some security snafu as they should, a snafu that most likely was not exploited. They were still issued a fine but a reduced one because they were very cooperative and most likely no actual damage happened.

There is an enforcement tracker website[4] where you can filter for e.g. "security" in the type column.

But nobody will go to prison for being the victim of a hack, that's just fearmongering by that attacker.

[1] https://gdpr-info.eu/art-32-gdpr/

[2] https://gdpr-info.eu/art-33-gdpr/

[3] https://ico.org.uk/about-the-ico/news-and-events/news-and-bl...

[4] https://www.enforcementtracker.com/


It would be interesting to get some stats on how common it is to put Docker on a public-facing server. I've never met anybody in real life who trusts Docker that much.


A security device (firewall) should not be running all types of services and it certainly shouldn't be allowing firewall rules to be inserted automagically.


Can anybody recommend some useful resources on currently popular firewalls? I've never touched that topic personally, but I'm interested.


So basically he messed up something basic and is passing the blame to the tool he used? I would feel much more sympathetic if he acknowledged his mistakes instead of flexing muscles and throwing Docker/MongoDB under the bus.

The clear mistake here is allowing incoming traffic from 0.0.0.0/0 (i.e. from anywhere).

Disclaimer: I am not into Docker or Mongodb


Is there any easy way to guard against this without putting everything in a VPC?


What about not running production in docker? One less layer to worry about


I don't know how docker is to blame here; to me it's a bit unfair.

It's more about how ufw is configured. Docker maintains its own NAT rules, and making ufw account for that requires a bit of setup. That said, relying on a well-configured firewall from the hosting provider is both easier and safer.


Docker's default goes against best practices. Just look at all those open issues on GitHub. Clearly people get caught by surprise and may suffer a security compromise as a result.

Docker is 100% to blame here and they need to change the default! Just put it in the release notes and get it over with.


Props to the developer for being this open and letting others learn and discuss the implications of this situation! I rather enjoyed the original discussion as well, even though the situation itself was unfortunate: https://news.ycombinator.com/item?id=27613217

I agree with the other posters, that Docker can cause problems with firewalls, as pointed out both in the article, in the GitHub issue, and in the other thread as well: https://github.com/moby/moby/issues/4737

Furthermore, it seems to me that not only should Docker respect the firewall rules (even though that would confuse another group of people about why their services aren't accessible externally, and would necessitate manual firewall rule management short of some explicit opt-in, such as a hypothetical docker run ... -p 80:3000 --expose-in-firewall 80 ...), but MongoDB should also ship with secure defaults, as should any other piece of software!

That said, at least to me it appears that Docker Compose, Docker Swarm and other technologies have attempted to introduce a networking abstraction to allow running services (more) securely and privately, by not exposing their ports on the host directly.

For example, see the following example of a Compose file, which would only expose a web server to the world:

  version: '3.4'
  services:
    # you would probably want to run MongoDB not exposed to the outside world at all
    # notice that there are no ports exposed here, given that we're using Docker networking
    mongodb_container:
      image: mongo:bionic
      restart: "unless-stopped"
      environment:               
        - MONGO_INITDB_ROOT_USERNAME=lazy_example_of_a_single_file_config
        - MONGO_INITDB_ROOT_PASSWORD=but_you_should_be_able_to_store_this_in_secrets
        # see docs for MONGO_INITDB_ROOT_PASSWORD_FILE at https://hub.docker.com/_/mongo
      deploy:
        replicas: 1
        placement:
          constraints:
            - node.hostname == your-mongo-server.com # just an example, probably use labels instead
        resources:
          limits:
            cpus: "4"
            memory: "8G"
    # your web application wouldn't need to be exposed to the outside world either
    # it could just receive the requests from a web server that ensures SSL/TLS
    web_application:
      image: hello-world # which you would replace with your actual app
      restart: "unless-stopped"
      environment:               
        - APP_CONFIG=variables_go_here_or_in_a_config_map
        - MONGO_HOST=mongodb_container
        - MONGO_PORT=27017
        - MONGO_USERNAME=either_the_username_above_or_a_non_root_one
        - MONGO_PASSWORD=though_the_non_root_needs_separate_creation
        # maybe with a helper container, like some people use dbmate for RDBMS
      deploy:
        replicas: 4
        placement:
          constraints:
            - node.hostname == your-application-server.com # just an example, probably use labels instead
        resources:
          limits:
            cpus: "2"
            memory: "2G"
    # finally, the actual web server is the ingress point, which you can self host
    # in the case of Kubernetes and other fancy tech you'd be able to use external solutions, probably
    web_server:
      image: caddy:alpine # or another web server, that would actually be exposed
      restart: "unless-stopped"
      ports:
        - 80:80
        - 443:443
        # though sometimes you need the long format for the above so client IPs resolve correctly
      volumes:
        - some_config_directory:/etc/caddy/Caddyfile
        - some_other_config_directory:/data
        - yet_another_config_directory:/config
      deploy:
        replicas: 4
        placement:
          constraints:
            - node.hostname == your-ingress-server.com # just an example, probably use labels instead
        resources:
          limits:
            cpus: "2"
            memory: "2G"
That's not to say that everyone should use Swarm or other orchestrators, but personally I find it a good approach: it allows defining some of the network topology in the same file as the application deployment, thus making the risk of human error slightly smaller.

Docker even has a page on networking, that provides information about the additional functionality that's available as well: https://docs.docker.com/network/

Of course, that's not to say that defense in depth and external solutions wouldn't perhaps be a better option.


Off-topic, but I had never seen the Tor exit node's default text before. Pretty hilarious that it states Tor provides privacy to the people who need it most: "average computer users." Yep, that's definitely who is using Tor...


There's not much distance between Docker's current behaviour and it calling an AWS/GCP/whatever API to route traffic to itself. I predict some piece of software, if not Docker, will be caught doing that in the near future.


"I put my database on the internet without a password. People used it. HACKERS!!!!!"


> write to me in the mail with your DB IP: FTHISGUY@recoverme.one

why the redaction? name and shame.


Wait... people run their databases on public IPs?


Yep. It happens. Welcome to earth.


No, they run databases that listen to localhost, and then use docker to forward that private binding to the public ip


No, they run database that listens to 0.0.0.0/0 and/or ::0/0 on a private isolated network interface (eth0 inside Docker namespace), then let Docker create a NAT to forward packets from a public network interface on the host to this database.

The latter is a mistake. It rarely makes sense to publish container ports on the host, with the exception of public-facing stuff (like a web server). The whole back office should be on a separate network.

And this is how Docker works by default, since forever, unless one very explicitly requests that it exposes ("publishes") the ports.

Just don't expose stuff. If the host needs to talk to a database or something, it can always talk to it via that `docker0` (or however it's called, I don't really remember the details) interface. (I believe it doesn't always add a route, because on one of my machines I have to manually add a route so Traefik on the host is able to talk to the containers on an isolated virtual network.)


That depends on whether you use Docker networks. By default, -p maps a port, AFAIK to 0.0.0.0:port. So this is more an issue with Docker's default settings than anything else.

Yes, they should have configured, checked, etc. But defaults should be secure.


Alternative headline: how my unauthenticated database ended up on the public web.


Exactly, like a bank that doesn't lock its vault because the front door is locked.


Or, in this case, not locked


Seems like poor network architecture to me. The fact that you can misconfigure a firewall on a VM or server which in turn allows public traffic is pretty dangerous.


To be fair, he was in the process of fixing the poor network architecture when this happened.


The mistake here is a MongoDB that didn't require authentication, not that docker's clunky iptables setup exposed it to the internet.

Relying solely on a host-based firewall for access control is, for reasons which must now be obvious, admin incompetence.

They are responsible for securing the containers. They didn't.

Your services should be using authentication even if they are only bound to localhost.


The docker issue and original HN thread have countless instances of folks who have hit and been hacked by the exact same issue. There is clearly a major problem with the docker documentation, usage, etc. that is causing people to continually be taken by surprise with its iptables behavior.

The blame game doesn't help solve the real problem that people are unknowingly putting services directly on the public internet when they think they are secured by a firewall. With how many millions of people use docker every day IMHO the severity of this issue is far too great to ignore. A simple mitigation to fail and warn a user when it appears they might be hitting this issue (i.e. before docker run changes iptables config, give a quick look to see if it looks like it's already configured for the most common ufw or other setup) would stop this from happening for 99% of cases.


I consider myself pretty docker savvy. I had no idea about this footgun, and my brain is full of all kinds of docker minutiae.

Everyone knows Docker runs as root, and therefore to be careful using it, just as you would with sudo. This seems like a major failure in the docs.

I kinda get why it's the default, but it needs to be made way better known that by default -p punches a hole through ufw.


If anything here has a bug it's the mongo container image that launches without requiring authentication, not docker.

You're confusing the proximate cause for the root cause.


> The mistake here is a MongoDB that didn't require authentication, not that docker's clunky iptables setup exposed it to the internet.

If you're backing up diligently, have a problem, go to restore backup, and discover that the backups are unusable due to an issue caused by the backup software's default configuration, you could say "the mistake here is you relying on a single backup system, pardner, not the backup software's bad default configuration," and in the sense that it's good advice to have multiple backup strategies, sure, you're technically correct. But that doesn't excuse the backup software from having a dangerously bad default configuration. That would be relatively trivial to fix. That they've apparently known about for years.


> The mistake here is a MongoDB that didn't require authentication, not that docker's clunky iptables setup exposed it to the internet.

It can be both.


It's both indeed... you could start blaming Redis too.

It's a Docker problem; it should bind to localhost by default.


Yeah. Why blame the software (with multiple user-filed bugs) when you can blame the user (again)?


Agree that authentication should be table stakes, though I would argue that the actual mistake is that the MongoDB application/docker container was on a host/VM with a network interface on the public internet.


I don't know why this is being downvoted. Multiple overlapping layers of security would have given newsblur a backup in case of accidental "footguns".

Unauthenticated mongodb instances are a pretty common problem - it's why a "script kiddie" was so successful.


> I don't know why this is being downvoted. Multiple overlapping layers of security would have given newsblur a backup in case of accidental "footguns".

Security specialist here. Startups are built by generalists. Good decisions were made here which made other defense-in-depth considerations not as critical when reconciled with go-to-market needs.

If every generalist focused on every security risk in their product, they'd expose themselves to the business risk of not moving quickly enough.

It's pretty clear NewsBlur did what they could and relied on the expertise of others to not fail them in exceptionally basic, entirely avoidable ways. They were betrayed by that reliance.


So what you are saying is they should have had a password since that would not slow them down?


defense in depth is about as basic as it gets, even for generalists


> I don't know why this is being downvoted.

The core message ("using auth on MongoDB would have prevented this, it's always a good idea to add password auth just in case") is perfectly reasonable; we can all learn from this, and it's perfectly fine to point out such things.

But the way it was phrased was absolutely not okay. People make mistakes all the time and they are not "incompetent". This is the classic "I am very smart, I never make mistakes, if you made a mistake then you're an idiot. You probably eat poop. I am smart btw"-attitude that's just ... ugh...

People rely on firewalls to prevent mistakes from becoming disastrous. Defeating that silently is super surprising. People don't know everything about every piece of tech they use either; very few people do: it's just too much information.

And it's not like auth alone is perfect. Remember when a bug in MySQL allowed people to bypass the auth? Good thing I put a firewall in front of my Doc... oh, no, wait...

So this is why I downvoted it.


Open MongoDB servers getting hacked is literally a meme at this point; there is no excuse for someone to configure a _production_ instance with zero authentication.


Having unauthenticated private services on directly-internet-connected hosts, regardless of the state of the host based firewall, is a mistake that a competent sysadmin does not make. (Then again, so is running MongoDB.)

It's not an insult to call someone incompetent.


> It's not an insult to call someone incompetent.

Yes it is, especially when based on a single data point.


A simple runtime check by docker to detect the most common configuration issue (ufw enabled for a service port that's about to be run by a container) is just another layer of protection to add to the stack IMHO. How can it hurt?
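
Even something as crude as this kind of pre-flight check (purely a sketch of the proposal, not an existing Docker feature) would catch the common case:

  # warn before publishing a port if ufw is active, since the published port will bypass its rules
  if command -v ufw >/dev/null 2>&1 && ufw status 2>/dev/null | grep -q '^Status: active'; then
    echo "warning: ufw is active, but published container ports will bypass its rules" >&2
  fi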


So instead of just going through http://ftp.rpm.org/max-rpm/ and learning how to make RPMs, the author continues employing all this complex technology, because why keep it simple when it can be ultra-complex, hack-prone and Dockerized with a gratis footgun to boot? Makes perfect sense!


What surprises me most is the machine running Docker was 100% connected directly to a public network interface. That’s the real root of the problem here.


Ambiguous addresses are generally a pretty big headache and have a lot of security downsides; network-level firewalls are better and would have worked here too. It's literally the old NAT vs. firewall debate.


Exactly, and there's no mention of how the fix was implemented or what remediations were done to prevent it from happening again in the future...


There's a section at the end detailing three fixes, including using a private network.


I think the real lesson here should be to use managed services whenever possible. There is just no excuse to manage your own infra on the VPS level for a typical web app in 2021. There are a myriad of PaaS providers where legacy VPS setups can be migrated to without too much pain.


I feel sorry for the victim, but it sounds like this whole setup was more or less careless. If your Docker configuration is the only thing stopping your database from being exposed, you should reconsider your approach to network and database security. At the very least there should be firewall protection at the host level. Beyond that, it's standard procedure to put insecure endpoints in a private VPC that's not accessible from the internet (with a bastion for administration). Relying solely on Docker for something this important is bad even if you configure it correctly: it's still a single layer of software protecting you, and it may have vulnerabilities of its own.


> At the very least there should be firewall protection at host level

If you read the Footgun at https://github.com/moby/moby/issues/4737, this is exactly what happens: someone sets up a conservative firewall, then Docker drills holes in it and opens itself up to the world, regardless of your firewall.


Okay, good point, and that's exactly why you isolate your networks. You don't want to be one configuration option away from being wide open.


True. In this case, having a separate (hardware) firewall, a VPN setup, or some other network-level protection guards against this. That is poorly documented by Docker too.

Yet, what Docker does is still wrong: it should not disable Ubuntu's default security. Ever. Even if that security is inadequate. An analogy would be to say "While watering your plants last night, I left all your windows open. Now your stuff is stolen. I did this because you should have installed window grills: without window grills your security sucks anyway."


Isn't this to a much greater extent UFW's fault? If it claims to manage the firewall for your system, I would expect it to manage all of iptables, not just one particular chain that it thinks is important. At the very least, I would expect it to signal when there are other chains configured in iptables that it can't/won't manage for you.


>In NewsBlur’s case, because NewsBlur is a home of free speech, allowing users in countries with censored news outlets to bypass restrictions and get access to the world at large, the continuing risk of supporting anonymous Internet traffic is worth the cost.

This, the backups, the write-up, all make it really hard for me to want to victim-blame the dev for not catching a very silly Docker default.

That having been said - public ip? - no fw? - no password on the mongod instance?

Idk, couldn't be me, not even in dev, just take the 20 seconds to plumb the pw to both sides. Modding me down won't change these facts and won't keep you from being compromised if you take the same lazy steps


The problem with the Docker default is that they did have a firewall configured, but Docker reconfigured it.

That said, some external firewall would prevent this. My VPS provider allows me to configure which ports I want to expose to the outside world which would mitigate this kind of issue.


I'm a bit confused here about how this was a Docker issue. Initially I thought there was some UPnP interaction with a router/gateway of some sort (https://en.wikipedia.org/wiki/Universal_Plug_and_Play)

If I run a docker container on my macbook and expose a port over 0.0.0.0 (docker run -d -p 8080:8080 nginx) and it's available on my LAN network, how does it get exposed over the internet? Unless there's a rule in my router/gateway which does the port forwarding, it's not possible.

I wish the author went into a bit more detail on how his setup was vulnerable rather than just bringing up an old GitHub issue.


It's not a problem on Docker Desktop for Mac or Windows. It's only an issue on Linux, and only for systems that use iptables as a firewall (typically Ubuntu/Debian & ufw). Even more specifically, it really only affects servers in a hosting environment with no other firewall or protection in front: DigitalOcean's default droplet config is a prime example, whereas on AWS you get a cloud-level firewall provided by Amazon's networking by default.

On your system docker is running in a little virtual machine that has its own linux kernel, virtual peripherals, etc. This gives you an extra layer of security (by pure happenstance) such that the VM has to allow traffic in too.

And this is kind of why this issue is so nasty. You as a mac user might never know that this is a problem or potential footgun. You could happily develop entire production services on your mac, then move them to a shared linux host like Digital Ocean and... uh oh, now your services are open and you had no idea it was even possible.


But this isn't a linux issue - this is by design.

If you're deploying code without knowing how networking works, you're always going to have problems like this.

Perhaps my example was a little too simplified, since AWS is my day-to-day cloud host, but I find it hard to believe that opening up ports on your VM, exposing it to the world, and then placing the blame on Docker is fair.


It's fairly normal for a laptop to be on a network behind a router/gateway.

It's also fairly normal for a server to have public IPs associated with it directly, and for those IPs to be available to the public internet directly.

The latter is what probably happened here.

Newsblur appears to run on digital ocean, where I believe they still default to associating a public IPv4 address to each droplet, which again is pretty normal in the server world.


And more and more things are getting publicly routable ipv6 addresses even when behind an ipv4 NAT device.

So if you’re not careful you may be exposed on ipv6 even if incidentally protected on ipv4 by the NAT.


'Really normal in the server world' is something I heavily disagree with.

So there was no firewall then? https://www.digitalocean.com/blog/cloud-firewalls-secure-dro...


Any old hosted server is going to be public to the internet by default. There's no indication that this is hosted locally or in some private network (like an AWS VPC).


Buck stops at NewsBlur on this one. Let's call a spade a spade; unqualified. Newb. It happens. It happened. Port scan next time? Metasploit maybe? "Script Kiddie".. It's in the name.



