Show HN: Miniboss, versatile local container management with Python (github.com/afroisalreadyinu)
158 points by afroisalreadyin on March 2, 2022 | 40 comments



I've definitely experienced the problems of using Docker Compose for local dev. Oh, that file changed? Control-C to kill the whole cluster. Wait until the DB flushes. Start it again.

And if you just want to restart one service? Well you have to run 4 commands[0] in order to properly do it...

And what the author was saying about needing to "template" the YAML files is also true. In the past, when shipping an Open Source library for others to use, we had to wrap Docker Compose in a CLI that generates the YAML file and then invokes Docker Compose for you. (See it here[1][2][3])

It sucks and this library definitely solves a real problem!

0: https://stackoverflow.com/questions/31466428/how-to-restart-...

1: The code: https://github.com/lunasec-io/lunasec/blob/master/js/sdks/pa...

2: The package it lives in: https://github.com/lunasec-io/lunasec/tree/master/js/sdks/pa...

3: The docs for the wrapped CLI: https://www.lunasec.io/docs/pages/overview/demo-app/overview


These are exactly the kinds of issues that drove me to work on miniboss. My top grievance has been the "waiting for a service to become available" issue, which is generally solved by wrapping the dependent code in a `wait-for-it.sh` bash script. A rather hacky solution for a rather simple problem.
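For anyone who hasn't seen it, the usual workaround looks something like this (the service name, port, and command here are just placeholders): wait-for-it.sh blocks until the host:port accepts TCP connections, then execs the real command.

    # typically used as the container's command or entrypoint
    ./wait-for-it.sh appdb:5432 --timeout=30 -- python app.py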


Hahaha yes! I had to add all of that to our code as well. I also had to write a "wait-for-file.sh" script as a mechanism to pass config values. (We use LocalStack to host an AWS env locally, and the S3 bucket name is random.)

In the Docker Compose v2 file format they had a more sophisticated "depends_on" model with health-check conditions, but they removed it in v3. The docs say something about Docker Swarm, so I'm guessing they removed it to make the v3 syntax work for clusters too.

The real tragedy here is that Kubernetes won, but we're still stuck with Docker Compose trying to be the same thing.

I hope your tool can help change this annoying status quo!


Not to be a Docker apologist, because I agree they’ve blundered: the v3 file format is actually not the current version of Docker Compose. Docker Compose now uses “the Compose Specification”, which does support service dependencies with conditions, and it’s trivial to create reliable healthchecks.
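A minimal sketch of what that looks like under the Compose Specification, reusing image names from elsewhere in this thread (the pg_isready check is just illustrative): `depends_on` can gate on the dependency's healthcheck.

    services:
      app:
        image: afroisalreadyin/python-todo:0.0.1
        depends_on:
          appdb:
            condition: service_healthy
      appdb:
        image: postgres:10.6
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U dbuser -d appdb"]
          interval: 5s
          timeout: 3s
          retries: 10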

I totally agree on the tragedy, Docker had so much potential. Very impactful, certainly, but far from what it could have been. I still love it though.


> And if you just want to restart one service? Well you have to run 4 commands[0] in order to properly do it...

Care to elaborate? The SO answers seem to indicate it can be done in a single command, and my personal experience confirms that.

Are you perhaps also including pulling new images, rebuilding local Dockerfiles, etc. as part of the “restart service” process?


Would .env files not have worked? Your script generates a .env to go with a static docker-compose.yml.

Restarting one service isn't 4 commands.


Not 4, but for me at least it's often 3. docker-compose has a bad habit of re-using the existing container: `docker-compose restart <service>` won't stop the existing service container and create a new one; it restarts the current container. Often the reason I'm restarting the service is that I need a clean, new container to pick up some env change, or for some debugging or something, so habitually I restart containers with `docker-compose stop <service> && docker-compose rm -f <service> && docker-compose up -d <service>`.

I mostly like docker-compose just fine, but this is an annoyance & when I first started using docker-compose some years back I definitely lost more time than I would have liked trying to figure out why some change I'd just made didn't seem to have applied. I feel like destroying/creating containers is pretty cheap (that's one of the great things about containers!), so I'm not sure why docker-compose behaves this way. I presume it's for historical reasons and they don't feel comfortable changing it because of backwards compatibility.


I think `docker-compose down <service> && docker-compose up -d --force-recreate <service>` should do it.


Thanks for pointing out `--force-recreate`: I'm pretty sure that didn't exist when I started using `docker-compose`. That reduces my common usage from 3 commands to 2 :).

As a point of minor interest I also avoid `down` vs `stop` because `down` can remove volumes. Reading the `--help` now (I'm not about to actually try it for reasons that will be clear shortly), it appears that's not the default behavior, but I'm pretty sure it was at one point. I `docker-compose down`-ed once and wiped out my local databases and that was an unpleasant morning, and ever since I've avoided it.

That's a roundabout way of explaining that a certain amount of my programming habits are certainly born of having stepped on numerous rakes in the past, but some of those rakes don't really exist anymore, and I don't always notice when they go away. So I appreciate you prompting me to re-examine this particular habit as being out of date.


Unfortunately `docker-compose down` takes no argument. You have to do `docker-compose rm -sf <service>`
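So without touching `down`, fully recreating one service comes down to something like this (the service name is a placeholder):

    docker-compose rm -sf appdb   # -s stops the container first, -f skips the confirmation prompt
    docker-compose up -d appdb    # recreate it from the current config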


It's potentially possible. I'd have to look at it again to verify.

What we were struggling with were "conditional services" like running Integration Tests. We only wanted to add that container to the cluster when we wanted to run the tests.

Same with the "demo" mode we added. That spins up an Nginx container with an example front-end app.

It's theoretically possible to define all of the containers in one big Docker Compose config and then turn them on/off with env variables per environment, but it's just hellish to debug. In fact, I suspect that's what the engineer who wrote those started with before getting frustrated with the spaghetti.

Sometimes there is just no winning with the computers lol

Edit: Answering the "it isn't 4 commands" bit -- if you just restart a container, it won't pick up changes to the Docker Compose config. You have to remove it and add it back to the cluster.


The conditional services can be done with 'profiles' nowadays.

https://docs.docker.com/compose/profiles/
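A rough sketch (the service, image, and profile names are just placeholders): a service tagged with a profile is ignored by a plain `up` and only started when the profile is enabled.

    services:
      integration-tests:
        image: myorg/integration-tests   # hypothetical image
        profiles: ["test"]

    # enable it explicitly:
    #   docker compose --profile test up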

Anyway, wrapping everything in a dev command makes a lot of sense for DX; if generating docker-compose.yml files or importing miniboss is the most straightforward/reliable/maintainable way to do that, all good.

Cheers


If you're dockerizing a dev environment check out batect, it's kind of like the combo of docker-compose + make (i.e. simple script running) that is really the tool we all just want: https://batect.dev/ It can easily define one-off container tasks like integration test runs with just a couple lines of config.


I will check this out. Thank you! I'll also make sure that the "Docker Awesome" list has this info too. I bet there are some cool projects there (and the OP's GitHub project should be added too, come to think of it).


This looks great, I am so sick of YAML.

While it's probably fine for most of the use cases you're predicting, if someone who isn't quite familiar with Python's gotchas decides to subclass, or modify one of the items at runtime, it will be a tricky thing to troubleshoot. Even though that is a rite of passage for Python developers, it's probably best to not have examples that could set them up for weird bugs.

edit: I was talking about mutable types at the top level of a class, and somehow left that out.


Good point. I was thinking of adding `docker-compose` compatibility, so that you can use a `docker-compose.yml` file to define services, and then add event hooks etc. with decorators, or something similar. You would then also have access to the attributes of the services.
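Purely as a hypothetical sketch of what such an API might look like (none of these functions, decorators, or parameters exist in miniboss today):

    import miniboss

    # hypothetical: read service definitions from an existing compose file
    miniboss.load_compose("docker-compose.yml")

    # hypothetical: attach a lifecycle hook to one of those services
    @miniboss.on_start("appdb")
    def seed_database(service):
        print("database started with env:", service.env)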


I meant to say that having mutable types defined at the top level of a class was a problem, I must have backspaced over the most important part while I was figuring out how to say it.

Classes in and of themselves are great, but having lists and dicts defined at the top level can cause problems.


Aha, I get it. I never thought about the attributes of a service definition getting modified, but you're right, it might happen. I'll try to figure out whether using immutable data structures for attributes would work out.


The problem isn't so much if a specific service definition gets modified, it's more if they're trying to modify one, and accidentally modify many.

Say they have a service with a ton of dependencies and they want to store the list in a separate file:

    class Application(miniboss.Service):
        name = "python-todo"
        image = "afroisalreadyin/python-todo:0.0.1"
        env = {"DB_URI": "postgresql://dbuser:dbpwd@appdb:5432/appdb"}
        ports = {8080: 8080}
        stop_signal = "SIGINT"
    
        def __init__(self):
            with open('dependency_list') as f:
                for dependency in f.readlines():
                    # self.dependencies falls through to the class-level list
                    # on miniboss.Service, shared with every other service
                    self.dependencies.append(dependency.strip())
Once that class is initialized, any service that previously had zero dependencies will now have everything in the file, because the list in miniboss.Service is shared by all instances of all subclasses that didn't override it.
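The same pitfall in plain Python, with no miniboss involved (a minimal sketch with made-up class names):

    class Service:
        dependencies = []                  # one list object, owned by the class

    class Database(Service):
        pass

    class WebApp(Service):
        pass

    WebApp().dependencies.append("appdb")
    print(Database().dependencies)         # ['appdb'] -- Database was never touched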

One easy way to limit this is to define your miniboss.Service defaults in `__init__` instead of at the top level. Then if users use mutable types at the top level, it only becomes an issue if they make subclasses, or if the process makes more than one instance of a particular class.

I still think immutable types are the best bet, but if they need mutable ones to throw some Python magic in there somewhere along the build process, then tell them to define them within an `__init__`.
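A sketch of that approach, assuming miniboss.Service.__init__ can be called with no arguments and that miniboss reads these attributes off the instance (both assumptions on my part):

    class Application(miniboss.Service):
        name = "python-todo"                         # immutable values are fine at class level
        image = "afroisalreadyin/python-todo:0.0.1"
        stop_signal = "SIGINT"

        def __init__(self):
            super().__init__()                       # assumed to take no arguments
            # each instance gets its own dict/list, so mutation can't leak
            # into other services
            self.env = {"DB_URI": "postgresql://dbuser:dbpwd@appdb:5432/appdb"}
            self.ports = {8080: 8080}
            self.dependencies = []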

Or maybe just keep doing what you're doing, and add a warning in the docs not to do screwy stuff like that.

Again, this really does look great; I'm excited to see where it goes.


Without looking at miniboss code - is there a reason why services are defined as classes as opposed to instances? E.g.

    # maybe even have the group as an instance
    group = miniboss.Group(name="foobar")

    group.database = miniboss.Service(
        name="appdb",
        image="postgres:10.6",
        env={
            "POSTGRES_PASSWORD": "dbpwd",
            "POSTGRES_USER": "dbuser",
            "POSTGRES_DB": "appdb"
        },
        ports={5432: 5433}
    )

Edit: added the group instance


As I understand it, the services are instances and the classes are the service definitions. So if you wanted 5 identical web servers you would only need to make one service definition class, and then miniboss would use instances of it to make the containers. I could be wrong though.


I'll second the use of immutable data structures. I had to do some really nasty debugging last week because someone accidentally mutated a configuration object, causing persistence issues much deeper in the stack. It would've been impossible with frozendicts.
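If you'd rather not pull in a third-party frozendict package, the standard library's types.MappingProxyType gives a read-only view of a dict (a small sketch):

    from types import MappingProxyType

    env = MappingProxyType({"POSTGRES_USER": "dbuser", "POSTGRES_DB": "appdb"})
    env["POSTGRES_USER"] = "other"
    # TypeError: 'mappingproxy' object does not support item assignment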


Why not fork docker-compose v1? It's Python! https://github.com/docker/compose/tree/1.29.2/compose

You could keep the YAML format and extend it, and still write compose files as pure Python.


I actually dug around in the source code of docker-compose to figure out how to properly work with networks. However, the fundamental ideas of the two projects are different (YAML -> containers vs. Python code -> containers), so starting off from docker-compose just wasn't right. It would have simply complicated things.


Have you thought about taking an approach similar to the AWS CDK with CloudFormation? Aka the whole "infrastructure as code" movement?

I'm not sure how that would work with the whole "lifecycle hooks" you've mentioned a few times, but maybe it would be easier than trying to compete with Docker Compose yourself by simply wrapping it.

I posted in another comment here too but we wrote a bunch of code to deal with programmatic generation of Docker Compose files, and it was really sweet to use! I've honestly thought about making that code[0] a stand-alone library because of how valuable it was.

0: https://github.com/lunasec-io/lunasec/blob/master/js/sdks/pa...


It's easier (and usually more rewarding) to start something from scratch than to fork.


I think you just described the origin of all new software :)


Can this be adapted to the Kubernetes ecosystem?


I have no idea, as I'm a bit lost when it comes to Kubernetes tooling. When I wrote a Kubernetes tutorial (https://okigiveup.net/tutorials/a-tutorial-introduction-to-k...) nearly four years ago, you had kubectl and YAML files, and that was it. These days there is a whole hierarchy of tools, which probably cover the same ground as miniboss and much more. So yes, it could be adapted, but I'm pretty sure it wouldn't be necessary.


This looks potentially very interesting indeed! Thanks for sharing.

How tied is it to the Docker daemon?

Instead of Docker, could one use podman, which doesn't need root?


That's a good question. I think only this class would need to be reimplemented for podman: https://github.com/afroisalreadyinu/miniboss/blob/main/minib... . The question is of course whether there are any incompatibilities in terms of networking etc.


The biggest difference/issue is that podman doesn't provide an events daemon of its own.

Its docker-compose compatibility is tied up in essentially re-implementing/emulating that, which partially defeats the purpose.

My main need for the daemon component is https://github.com/nginx-proxy/acme-companion which listens to docker events to know when a new service is available to proxy.


Is there any way to use this with kompose convert so it can be deployed to production in a Kubernetes cluster?


I honestly have no idea what kompose convert does. In principle, one could print out any format from miniboss, but it depends on how much you want to trust a tool written for local deployments with Kubernetes stuff.


It takes your docker-compose.yaml and creates a bunch of kubernetes yamls that can just be deployed without touching them.

I've managed to never touch a Kubernetes yaml thanks to it.
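Roughly, assuming kompose and kubectl are installed (the exact generated file names depend on your services):

    kompose convert -f docker-compose.yml   # writes Kubernetes manifests into the current directory
    kubectl apply -f .                      # deploy the generated manifests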


Would it be correct to assume that something similar can be done to replace helm.sh with Kubernetes?


Check out kapitan: with its Kadet input type, you can generate YAML with a semi-opinionated Python-based DSL: https://kapitan.dev/compile/#kadet

I am the author, and have used it previously at a FAANG and now more recently at my startup to manage large, complex Kubernetes- and Terraform-managed infrastructure.


Check out Pulumi, it's a similar idea of python (or other languages) to generate container orchestration config for Kubernetes or other systems: https://github.com/pulumi/pulumi


so it's like terraform or pulumi but scoped down to docker and using python?


I would say it's more similar to Pulumi, as the main aim is to avoid markup (or a simplified programming language, like HCL in the case of Terraform). The other two points are correct though: only for Docker, and in Python.



