Hacker News

The point of Docker is basically that the container-image developer is specifying the sandbox, instead of the sysadmin specifying the sandbox.

None of the things mentioned solve the problem of the sysadmin having to "design" the solution from the top down. Docker does. A Docker image is an appliance. You don't architect it; you just configure it. You don't have to care which OS it's running inside. Docker images running on a Windows host don't even care whether it's Windows Server Core or Ubuntu inside them. It's a black box with defined configuration points.
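Concretely, "configure, don't architect" means the sysadmin's whole interface to the appliance is a handful of documented knobs: environment variables, published ports, and volumes. A sketch using the official postgres image (its documented POSTGRES_PASSWORD variable is real; the secret value and volume name here are made up):

```
# Configure a black-box appliance image without knowing what's inside it.
# The only contract is its documented configuration points:
#   env vars, published ports, and mounted volumes.
docker run -d \
  -e POSTGRES_PASSWORD=secret \
  -p 5432:5432 \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16
```

Whether that image runs Debian, Alpine, or something else internally never enters into it.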

The only real comparisons to Docker are

1. Amazon's AMIs (though nobody hosts a public AMI registry except for Amazon, so it's not really a good comparison);

2. Canonical's "snap" format (https://snapcraft.io)

Both of these achieve the same things that Docker achieves: developer-distributed virtual appliances configured by the sysadmin but "managed" automatically by the runtime.

And both are just as complicated as Docker. The complexity is necessary.




You do have to care which OS it's running inside, if only to know when to patch it for $vulnerability_of_the_day. It sure is convenient to consider it a black box that you 'just' need to configure, but that's just pushing responsibility to the developer(s). In my short experience, the latter seldom care about security. When a security breach occurs, who is going to take the fall? The sysadmins that supposedly run operations, or the developers that failed to provide an updated appliance?


Let's be clear: this is a general problem with distributed infrastructure, not Docker specifically. Any org that scales beyond 100+ engineers/services/artifacts is just not going to hire a proportional number of infra people to toss application configuration over the wall to.

In the past, we have done:

* Let applications write librarian-chef cookbooks, have a chef server aggregate them

* Let applications write ansible playbooks, aggregate them in a central repo using galaxy

All of them carry the same pitfalls. If it's not the OS, then you have to decide how to patch the version of OpenJDK that the developer hardcoded. If it's not OpenJDK, then it's Maven or npm.
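The hardcoding pitfall looks the same in every tool. A hypothetical Ansible task illustrating it (the package version here is invented): the developer pins a specific JDK build, and patching it now requires a change in every application repo rather than one fleet-wide update.

```yaml
# Hypothetical playbook task: the version pin lives in the app's repo,
# so the sysadmin can't patch it centrally.
- name: install JDK
  apt:
    name: openjdk-8-jdk=8u181-b13-*
    state: present
```

Swap `apt` for a Chef cookbook or a Dockerfile `FROM` line and the ownership problem is identical.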

We have seen both sides of the arguments:

* Sysadmins cry "security"

* Developers cry "freedom"

The root cause of both arguments is fear and control. The end goal when these words (security vs freedom) get thrown in is not to find solutions, but rather to make the discussion end. Sysadmins will gladly sweep maven/nexus problems under the rug as long as they are the ones doing automation. Developers will gladly disregard all infrastructure engineering principles as long as they have full access to do whatever they want with their application.

Automation is the solution to both. Call it SecOps or whatever bullshit term. But in the end, automated security practices are necessary one way or another.


Conceptually it seems quite possible to automatically send alerts to the devs responsible when something in the container is out of date. It means that the containers have to be specified using supported methods, but I think a lot can be covered quite easily.
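A minimal sketch of that kind of check, assuming an org-maintained allowlist of currently-patched base images (the list and image tags below are invented): extract the `FROM` line of a Dockerfile and alert when the base isn't on the list.

```shell
# Sketch: flag a Dockerfile whose base image is not on an approved,
# currently-patched list. Allowlist contents are assumptions.
cat > /tmp/Dockerfile <<'EOF'
FROM openjdk:8u181-jdk
COPY app.jar /app.jar
EOF

approved="eclipse-temurin:21-jre ubuntu:24.04"
base=$(awk '$1 == "FROM" {print $2; exit}' /tmp/Dockerfile)

case " $approved " in
  *" $base "*) echo "OK: $base" ;;
  *)           echo "ALERT: base image $base needs patching" ;;
esac
```

Real scanners like Trivy or Clair go further and check image layers against CVE databases, but the point stands: because the container is specified declaratively, automation has something to inspect.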


> You do have to care which OS it's running inside if only to know when to patch it for $vulnerability_of_the_day.

I think grandparent was using "you" to refer to the docker container developer, not the host OS maintainer.





