Rails on Docker (fly.io)
329 points by mikecarlton on Jan 26, 2023 | 220 comments



> Everything after that removes the manifest files and any temporary files downloaded during this command. It's necessary to remove all these files in this command to keep the size of the Docker image to a minimum. Smaller Dockerfiles mean faster deployments.

It isn't explicitly explained, but the reason why it must be in this command and not separated out is because each command in a dockerfile creates a new "layer". Removing the files in another command will work, but it does nothing to decrease the overall image size: as far as whatever filesystem driver you're using is concerned, deleting files from earlier layers is just masking them, whereas deleting them before creating the layer prevents them from ever actually being stored.
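
A minimal illustration of the difference (a sketch, not from the article):

  # Bad: the package lists are stored in the first layer; the later rm only masks them.
  RUN apt-get update && apt-get install -y build-essential
  RUN rm -rf /var/lib/apt/lists/*

  # Good: cleaning up within the same RUN means the lists never land in any layer.
  RUN apt-get update && apt-get install -y build-essential && \
      rm -rf /var/lib/apt/lists/*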


If all you want to do is run a series of commands while only creating a single layer, heredocs are probably the simplest / most readable approach:

https://www.docker.com/blog/introduction-to-heredocs-in-dock...


Nice syntax, but I like the caching that comes with creating each layer.

If you want to reduce layer/image size, then I think "multi-stage builds" [1] are a good option.

[1] https://docs.docker.com/build/building/multi-stage/


I mean, you can do both. Or, technically with that link you mentioned, all three.

You can use heredocs to combo together commands that make sense in a layer, ensure your layers are ordered such that the more-frequently changing ones are further on in your Dockerfile when possible (this will also speed up your builds, ensuring as many caches as possible are more likely to be valid), and use multi-stage builds on top of that to really pare it down to the bare necessities.


> Nice syntax

Is it though? From the post:

  RUN <<EOF
  apt-get update
  apt-get upgrade -y
  apt-get install -y ...
  EOF

It may be due to my ninja-level abilities to dodge learning more advanced shell mastery for decades, but to me it looks haphazard and error-prone. Are the line breaks semantic, or is it all a multiline string? Is EOF a special end-of-file token, or a variable, and if so what’s its type? Where is it documented? Is the first EOF sent to stdin, and if so why is that needed? What is the second EOF doing? I can usually pick up a new imperative language quickly, but I still feel like an idiot when looking at shell.


The

  <<XYZ
  ...
  XYZ
syntax for multi-line strings is worth learning since it is used in shell, ruby, php, and others. See https://en.m.wikipedia.org/wiki/Here_document . You get to pick the "EOF" delimiter.


I know those questions are rhetorical, but to answer them anyway:

> > Nice syntax

> Is it though?

Before the heredoc syntax was added, the usual approach was to use a backslash at the end of each line, creating a line continuation. This has several issues: The backslash swallows the newline, so one must also insert a semicolon* to mark the end of each command. Forgetting the semicolon leads to weird errors. Also, while Docker supports line continuations interspersed with comments, sh doesn't, so if such a command contains comments it can't be copied into sh.

The new heredoc syntax doesn't have any of these issues. I think it is infinitely better :)

(There is also JSON-style syntax, but it requires all backslashes to be doubled, and is less popular.)

*In practice "&&" is normally used rather than ";" in order to stop the build if any command fails (otherwise sh only propagates the exit status of the last command). This actually leads to a small footgun with the heredoc syntax: it allows the programmer to use just a newline, which is equivalent to a semicolon and means the exit status will be ignored for all but the last command. The programmer must remember to insert "&&" after each command, or use `set -e` at the start of the RUN command, or use `SHELL ["/bin/sh", "-e", "-c"]` at the top of the Dockerfile. But this footgun is due to sh's error handling quirks, not the heredoc syntax itself.
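
A minimal sketch of the `set -e` variant mentioned above (package names are just placeholders):

  RUN <<EOF
  set -e
  apt-get update
  apt-get install -y build-essential
  EOF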

> Are the line breaks semantic, or is it all a multiline string?

The line breaks are preserved ("what you see is what you get").

> Is EOF a special end-of-file token

You can choose which token to use (EOF is a common convention, but any token can be used). The text right after the "<<" indicates which token you've chosen, and the heredoc is terminated by the first line that contains just that token.

This allows you to easily create a heredoc containing other heredocs. Can you think of any other quoting syntax that allows that? (Lisp's quote form comes to mind.)

> Where is it documented?

The introduction blog post has already been linked. The reference documentation (https://docs.docker.com/engine/reference/builder/, https://github.com/moby/buildkit/blob/master/frontend/docker...) explains the syntax using examples. It doesn't have a formal specification; unfortunately this is a wider problem with the Dockerfile syntax (see https://supercontainers.github.io/containers-wg/ideas/docker...). Instead, the reference links to the sh syntax specification (https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V...), on which the Dockerfile heredoc syntax is based.


Thanks, this is the helpful reply I didn't deserve!

> This actually leads to a small footgun with the heredoc syntax: it allows the programmer to use just a newline, which is equivalent to a semicolon and means the exit status will be ignored for all but the last command.

This sounds like a medium-large caliber footgun to me, and while I don’t expect Docker to fix sh, it could perhaps either set sane defaults or decouple commands from creating layers? Or why not simply support decent lists of commands if this is such a common use case?

> This allows you to easily create a heredoc containing other heredocs.

Hmm, what’s the use-case for that? The only effect for the programmer would be to change the escape sequence, no?


> This sounds like a medium-large caliber footgun to me, and while I don’t expect Docker to fix sh, it could perhaps either set sane defaults or decouple commands from creating layers? Or why not simply support decent lists of commands if this is such a common use case?

Ha ha, I guess footgun sizes are all relative. The quirky error handling of sh is "well-known" (usually one of the first pieces of advice given to improve safety is to insert `set -e` at the top of every shell script, which mostly fixes this issue). So I don't think of Dockerfile heredocs themselves as a large footgun, but rather as a small footgun that arises out of the small interaction between heredocs and the large-but-well-known error handling footgun.

I don't know why Docker doesn't use `set -e` by default. I suppose one reason is for consistency -- if you have shell commands spread across both a Dockerfile and standalone scripts, it could be very confusing if they behaved differently because the Dockerfile uses different defaults.

I also don't know why the commands are coupled to the layers. Maybe because in the simple cases, that is the best mapping; and in the very complex cases, the commands would be moved to a standalone script; so there are fewer cases where a complex command needs to be inlined into the Dockerfile in a way that produces a single layer.

It would be really nice if the Dockerfile gave more control over layers. For example, currently if you use `COPY` to import files into the image and then use `RUN` to modify them (e.g. to change the ownership / permissions / timestamps), it needlessly increases the image size; the only way to avoid this is to perform those changes during the COPY, for example using `COPY --chown`; but COPY has very limited options (namely: chown, and also chmod, which is relatively recent).
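
For example (hypothetical paths and user):

  # Two layers: the chown re-stores every file it touches.
  COPY app/ /app/
  RUN chown -R app:app /app

  # One layer: ownership is set while copying.
  COPY --chown=app:app app/ /app/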

Regarding native support for lists of commands, I don't really see much value since sh already supports lists (you "just" need to correctly choose between "&&" and ";"/newline).

> > This allows you to easily create a heredoc containing other heredocs.

> Hmm, what’s the use-case for that? The only effect for the programmer would be to change the escape sequence, no?

It can be useful to embed entire files within a script (e.g. when writing a script that pre-populates a directory with some small files). With most quoting schemes, you'd have to escape special characters that appear in those files. But with heredocs, you just have to pick a unique token and then you can include the files verbatim.

(Picking a token that doesn't appear as a line within the files can be a little tricky, but in many cases it's not a problem; for example if the files to be included are trustworthy, it should be enough to use a token that includes the script's name. On the other hand if the data is untrusted, you'd have to generate an unguessable nonce using a CSPRNG. But at that point it's easier to base64-encode the data first, in which case the token can be any string which never appears in the output of base64, for example ".".)
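
A tiny sketch of that embed-a-file use case (hypothetical path and contents); quoting the delimiter also disables variable expansion inside the heredoc:

  #!/bin/sh
  # Write a small config file verbatim from within the script.
  cat > /etc/myapp.conf <<'CONF_EOF'
  listen = 8080
  log_level = info
  CONF_EOF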


I like here-docs, but frequently I think just making small shell scripts to be invoked by RUN is better, e.g. putting apt invocations in something like buildscripts/install_deps and simply RUN that from the Dockerfile.
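
A sketch of that layout, assuming buildscripts/install_deps is an executable shell script (with `set -e`) containing the apt-get invocations:

  COPY buildscripts/ /buildscripts/
  RUN /buildscripts/install_deps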


Why didn't I ever think of this.

This is great and really makes things way simpler. Thanks!


Unfortunately this syntax is not generally supported yet - it's only supported with the buildkit backend and only landed in the 1.3 "labs" release. It was moved to stable in early 2022 (see https://github.com/moby/buildkit/issues/2574), so that's better, but I think it may still require a syntax directive to enable.

Many other dockerfile build tools still don't support it, e.g. buildah (see https://github.com/containers/buildah/issues/3474)

Useful now if you have control over the environment your images are being built in, but I'm excited for the future where it's commonplace!


And AFAIK buildkit is still a real pain to troubleshoot, as you can't use intermediate stages :/

You can put stuff in a script file and just run that script too.



excellent, thanks!

this has been a serious pain in my side for a while, both for my own debugging and for telling people I try to help "you're gonna have to start over and do X or this will take hours longer".


The older “Docker without Docker” blogpost linked from there goes into that, it’s one of the best deep dives into containers (and honestly one of the best pieces of technical writing full stop) I’ve come across. https://fly.io/blog/docker-without-docker/


An alternative to removing files or going through contortions to stuff things into a single layer is to use a builder image and copy the generated artefacts into a clean image:

    FROM foo AS builder

    .. build steps

    FROM foo

    COPY --from=builder generated-file target

(I hope I got that right; on a phone and been a while since I did this from scratch, but you get the overall point)


Unfortunately this messes with caching and causes the builder step to always rebuild if you’re using the default inline cache, until registries start supporting cache manifests.


How so? I just tested a build, and it used the cache for every layer including the builder layers.


You did this on the same machine, right? In a CI setting with no shared cache you need to rely on an OCI cache. The last build image is cached with the inline cache, but prior images are not.


You can build the first stage separately as a first step using `--target` and store the cache that way. No problem.
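
A rough sketch of that pattern with the classic inline cache (placeholder image names; with BuildKit the inline cache also needs BUILDKIT_INLINE_CACHE=1):

  # Build and push the builder stage first so its layers land in the registry cache
  docker build --target builder \
    --cache-from registry.example.com/myapp:builder \
    --build-arg BUILDKIT_INLINE_CACHE=1 \
    -t registry.example.com/myapp:builder .
  docker push registry.example.com/myapp:builder

  # Then build the final image, reusing the cached builder stage
  docker build --cache-from registry.example.com/myapp:builder \
    -t registry.example.com/myapp:latest .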


How would you do this in a generic, reusable way company-wide for any Dockerfile? Given that you don't know the targets beforehand, the names, or even the number of stages.

It is of course possible to do for a single project with a bit of effort: build each stage with a remote OCI cache source, push the cache there afterwards. But... that sucks.

What you want is the `max` cache type in buildkit[1]. Except... not much supports that yet. The native S3 cache would also be good once it stabilizes.

1. https://github.com/moby/buildkit#export-cache
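
For reference, a `max` cache export with buildx looks roughly like this (placeholder registry ref):

  docker buildx build \
    --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
    --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
    -t registry.example.com/myapp:latest --push .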


Standardize the stages and names. We use `dev` and `latest`.

It worked wonders for us; on a cache hit the build time is reduced from 10 to 1.5 minutes.


Ah, sorry I misunderstood you. Yes, I don't tend to care about whether or not the steps are cached in my CI setups as most of the Docker containers I work on build fast enough that it doesn't really matter to me, but that will of course matter for some.


I never got around to implementing it but I wonder how this plays with cross-runner caches in e.g. Gitlab, where the cache goes to S3; there's a cost to pulling the cache, so it'll never be as fast as same-machine, but should be way faster for most builds, right?


The cache is small, but if you have a `docker buildx build --cache-from --push` type command it will always pull the image at the end and try to push it again (although it'll get layer-already-exists responses). For ~250mb images on GitLab I find this do-nothing job takes about 2.5 mins in total (vs a 10 min build if the entire cache were to be invalidated by a new base image version). I'd very much like it if I could say "if the entire build was cached don't bother pulling it at the end"; maybe buildkit is the tool for that job.


I mostly love Docker and container-based CI but wow what a great reminder that even common-seeming workflows still have plenty of sharp edges!


If you're running the CI inside AWS, and assuming the code isn't doing anything stupid, it will be fast enough for nobody to notice.



Thanks for sharing, very useful blog post (not just the linked section). Reference to https://github.com/wagoodman/dive will help a lot today.


This isn’t really useful for frameworks like rails, since there’s nothing to “compile” there. Most rails docker images will just include the runtime and a few C dependencies, which you need to run the app.


Pulling down gems is a bit of a compilation, which could benefit, unless you're already installing gems into a volume you include in the Docker container via docker compose etc. Additionally, what it does compile can be fairly slow (like nokogiri).


There are however temporary files being downloaded for the apt installation, and while in this case it's simple enough to remove them in one step that's by no means always the case. Depending on which gems you decide to rely on you may e.g. also end up with a full toolchain to build extensions and the like, so knowing the mechanism is worthwhile.


How would you go about copying something you installed from apt in a build container?

Say `apt install build-essential libvips` from the OP, it's not obvious to me what files libvips is adding. I suppose there's probably an incantation for that? What about something that installs a binary? Seems like a pain to chase down everything that's arbitrarily touched by an apt install, am I missing some tooling that would tame that pain?


It's a pain, hence for apt, as long as it's the same packages, just cleaning is probably fine. But e.g. build-essential is there to handle building extensions pulled in by gems, and that isn't necessary in the actual container if you bring over the files built and/or installed by rubygems, so the set of packages can be quite different.


Run "dpkg -L libvips" to find the files belonging to that package. This doesn't cover what's changed in post install hooks, but for most docker-relevant things, it's good enough.


The new best practice is to use the RUN --mount cache options. It makes the removal of intermediate files unnecessary and speeds up builds too. Surprised to see so few mentions of it.
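
A hedged sketch of what that can look like for apt with the BuildKit Dockerfile syntax (note: the official Debian/Ubuntu images auto-clean the apt cache after installs, so you may also need to disable /etc/apt/apt.conf.d/docker-clean for the cache mount to pay off):

  # syntax=docker/dockerfile:1
  RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
      --mount=type=cache,target=/var/lib/apt,sharing=locked \
      apt-get update && apt-get install -y build-essential libvips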


As someone whose week was ruined by an overwhelming proliferation of `--mount type=cache` options throughout a complex build system, I'm not so sure.

Externally managed caches don't have a lifecycle controlled or invalidated by changes in Dockerfiles, or by docker cache-purging commands like "system prune".

That means you have to keep track of those caches yourself, which can be a pain in complex, multi-contributor environments with many layers and many builds using the same caches (intentionally or by mistake).


Could you have the first step of the dockerfile hash itself and blow away the cache(s) if the hash changes?


Something roughly equivalent is the medium-term solve, yes. Complicated by the fact that this pattern was used by rather a lot of different folks' code.


hard enough to stay synced with all the latest docker changes, let alone at an organizational level.

Example: in the last few years docker compose files have gone from version 2 to version 3 (missing tons of great v2 features in the name of simplification) to the newest, unnamed unnumbered version which is a merging of versions 2 and 3.


At the moment Rails is focused on simplicity/readability. I've got a gem that I'm proposing (and DHH is evaluating) that adds caching as an option: https://github.com/rubys/dockerfile-rails#overview


Yeah this is a very widely misunderstood or unknown thing about docker files. After the nth time explaining it to somebody, I finally threw it into a short video with demo to explain how it worked: https://youtu.be/RP-z4dqRTZA


Self hoisting here, I put this together to make it easier to generate single (extra) layer docker images without needing a docker daemon, capabilities, chroot, etc: https://github.com/andrewbaxter/dinker

Caveat: it doesn't work on Fly.io. They seem to be having some issue with OCI manifests: https://github.com/containers/skopeo/issues/1881 . They're also having issues with new docker versions pushing from CI: https://community.fly.io/t/deploying-to-fly-via-github-actio... ... the timing of this post seems weird.

FWIW the article says

> create a Docker image, also known as an OCI image

I don't think this is quite right. From my investigation, Docker and OCI images are basically content addressed trees, starting with a root manifest that points to other files and their hashes (root -> images -> layers -> layer configs + files). The OCI manifests and configs are separate to Docker manifests and configs and basically Docker will support both side by side.


The newest buildkit shipped a regression. This broke the Docker go client and containerd's pull functionality. It has been a mess: https://community.fly.io/t/deploying-to-fly-via-github-actio...


Self-hosting and self-hoisting here. Whenever docker gives me grief, I use podman.


I wanted to explain layers really bad, Sam Ruby even put comments in there about it, but I stayed away from it for the sake of more clearly explaining how Linux was configured for Rails apps.

It is really weird looking at Dockerfiles for the first time seeing all of the `&& \` bash commands chained together.


Note that the explicit "apt-get clean" is probably redundant:

https://docs.docker.com/develop/develop-images/dockerfile_be...

> Official Debian and Ubuntu images automatically run apt-get clean, so explicit invocation is not required.


'apt-get clean' doesn't clear out /var/lib/apt/lists. It removes cached downloaded debs from /var/cache/apt, but you'll still have hundreds of MiB of package lists on your system after running it.


Yes, but apt-get clean is still redundant [ed: because the upstream Debian/Ubuntu images automatically run apt-get clean via how dpkg/apt is configured - and your image should inherit this behavior]. Personally I'm not a fan of deleting random files like man pages and documentation - so instead of:

  RUN apt-get update -qq && \
    apt-get install -y build-essential libvips && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man
I'd do:

  RUN apt-get update -qq && \
    apt-get install -y build-essential libvips && \
    rm -rf /var/lib/apt/lists/*


Not a fan of violently modifying the system outside of the package manager either.

rm -rf isn't configuration management, it's system entropy increasing leaving users scrambling to reinstall the world.

If people don't want docs, then distro vendors should package them separately and make them recommended packages.


Indeed. If you want a small image, there's "slim" and "alpine" variants.


As I mentioned above, I recommend avoiding Ubuntu because it violates the principle of dev - prod parity. For any significant scale, Ubuntu will be left in the dust. Don't rely on system packages, build your own stream of minimal vendored (/opt/company) dependencies and keep them current because defaults are always old and don't consistently apply necessary patches for bugfixes and functionality.

https://quay.io/repository/centos/centos


Please explain

"I recommend avoiding Ubuntu because it violates the principle of dev - prod parity."

What exactly is the problem? Could you please provide sources for this issue / some explanation / any helpful content? Thanks!


Yeap. Various docker images are built with pkg mgmt file install exclusions and rm -rf.

It's good to have a slim script but also to have an unslim when you need manpages or locales.


Is there some reason not to use multi-stage builds for this? Blogspam: https://sequoia.makes.software/reducing-docker-image-size-pa...


If you want to check your images for some common leftover files in all the layers, I made an app for that: https://github.com/viraptor/cruftspy


Every command in a dockerfile creates a layer, and most Dockerfile builders cache each if that line hasn't changed. Dynamic callouts to run updates or check things on the web won't rerun.


Such a helpful comment. Just in this one note, I learned a few things about Docker that I had no idea about:

1) a layer is essentially a docker image in itself and

2) a layer is as static as the image itself

3) Docker images ship with all layers

Thanks jchw!


If you only care about the final image size, then `docker build --squash` squashes the layers for you as well.


Definitely, although it's worth noting that while the image size will be smaller, it will get rid of the benefits of sharing base layers. Having fewer redundant layers lets you save most of the space without losing any of the benefits of sharing layers. I think that is the main reason why this is not usually done.


--squash is still experimental, I believe multi-stage images are the new best practice here.


In case anyone doesn't know what that means, it's basically this kind of Dockerfile:

  FROM the_source_image as builder
  RUN build.sh

  FROM the_source_image
  COPY --from=builder /app/artifacts /app/
  CMD ....
I'm not sure if you can really call it the new best practice though, it's been the default for ... a very long time at this point.


Typically I wind up using a different source image for the builder that ideally has (most of) the toolchain bits needed, but the same runtime base as the final image. (For Go, go:alpine and alpine work well. I'm aware alpine/musl is not technically supported by Go, but I have yet to hit issues in prod with it, so I guess I'll keep taking that gamble.)
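
Roughly, that split looks like this (a sketch using the official golang:alpine and alpine images):

  FROM golang:alpine AS builder
  WORKDIR /src
  COPY . .
  RUN go build -o /out/app .

  FROM alpine
  COPY --from=builder /out/app /usr/local/bin/app
  CMD ["app"]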


I take advantage of multi-stage builds, however I still think that the layer system could have some nice improvements done to it.

For example, say I have my own Ubuntu image that is based on one of the official ones, but adds a bit of common configuration or tools and so on, on which I then build my own Java image using the package manager (not unlike what Bitnami do with their minideb, on which they then base their PostgreSQL and most other container images).

So I might have something like the following in the Ubuntu image Dockerfile:

  RUN apt-get update && apt-get install -y \
    curl wget \
    net-tools inetutils-ping dnsutils \
    supervisor \
    && apt-get clean && rm -rf /var/lib/apt/lists /var/cache/apt/*
But then, if I want to install additional software, I need to fetch the package list anew downstream:

  FROM my-own-repo/ubuntu
  
  RUN apt-get update && apt-get install -y \
    openjdk-17-jdk-headless \
    && apt-get clean && rm -rf /var/lib/apt/lists /var/cache/apt/*
As opposed to being able to just leave the cache files in the previous layers/images, then remove them in a later layer and just do something like:

  docker build -t my_optimized_java_image -f java.Dockerfile --purge-deleted-files .
  
  or maybe
  
  docker build -t my_regular_java_image -f java.Dockerfile .
  purge-deleted-files -t my_regular_java_image -o my_optimized_java_image
Which would then work backwards from the last layer and create copies of all of the layers where files have been removed/masked (in the later layers) to use instead of the originals. Thus if I'd have 10 different images that need to use apt to install stuff while building them, I could leave the cache in my own Ubuntu image and then just remove it for whatever I want to consider the "final" images that I'll ship, which would then alter the contents of the included layers to purge deleted files.

There's little reason why these optimized layers couldn't be shared across all 10 of those "final" images either: "Hey, there's these optimized Ubuntu image layers without the package caches, so we'll use it for our .NET, Java, Node and other images" as opposed to --squash which would put everything in a single large layer, thus removing the benefits from the shared layers of the base Ubuntu image and so on.

Who knows, maybe someone will write a tool like that some day.


You will be happy to hear that this already exists. Read up on docker buildkit and the --mount option.


I used to think I hated Docker, but I think what I actually hate is using Docker locally (building/rebuilding/cache-busting images, spinning containers up and down, the extra memory usage on macOS, etc etc). I don't need all that for development, I just want to run my dang code

But I've been really enjoying it as a way of just telling a PaaS "hey here's the compiler/runtime my code needs to run", and then mostly not having to worry about it from there. It means these services don't have to have a single list of blessed languages, while pretty much keeping the same PaaS user experience, which is great


> the extra memory usage on macOS

It's worth noting that the Docker experience is very different across platforms. If you just run Docker on Linux, it's basically no different than just running any other binary on the machine. On macOS and Windows, you have the overhead of a VM and its RAM to contend with at minimum, but in many cases you also have to deal with sending files over the wire or worse, mounting filesystems across the two OSes, dealing with all of the incongruities of their filesystem and VFS layers and the limitations of taking syscalls and making them go over serialized I/O.

Honestly, Docker, Inc. has put entirely too much work into making it decent. It's probably about as good as it can be without improvements in the operating systems that it runs on.

I think this is unfortunate because a lot of the downsides of "Docker" locally are actually just the downsides of running a VM. (BTW, in case it's not apparent, this is the same with WSL2: WSL2 is a pretty good implementation of the Linux-in-a-VM thing, but it's still just that. Managing memory usage is, in particular, a sore spot for WSL2.)

(Obviously, it's not exactly like running binaries directly, due to the many different namespacing and security APIs docker uses to isolate the container from the host system, but it's not meaningfully different. You can also turn these things off at will, too.)


I don't do backend work professionally so my opinion probably isn't worth much, but the way Docker is so tightly tied to Linux makes me hesitant to use it for personal projects. Linux is great and all but I really don't like the idea of so explicitly marrying my backend to any particular platform unless I really have to. I think in the long run we'd be better served figuring out ways to make platform irrelevant than shipping around Linux VMs that only really work well on Linux hosts.


IMO, shipping OCI images doesn't tether your backend to Docker any more than shipping IDE configurations in your Git repository tethers you to an IDE. You could tightly couple some feature to Docker, but the truth is that most of Docker's interface bits are actually pretty standard and therefore things you could find anywhere. The only real reason why Docker can't be "done" the way it is on Linux elsewhere is actually because of the unusual stable syscall interface that Linux provides; it allows the userlands run by container runtimes to run directly against the kernel without caring too much about the userland being incompatible with the kernel. This doesn't hold for macOS, other BSDs, or Windows (though Windows does neatly abstract syscalls into system libraries, so it's not really that hard to deal with this problem on Windows, clearly.)

Therefore, if you use Docker 'idiomatically', configuring with environment variables, communicating over the network, and possibly using volumes for the filesystem, it doesn't make your actual backend code any less portable. If you want to doubly ensure this, don't actually tether your build/CI directly to Docker: You can always use a standard-ish shell script or another build system for the actual build process instead.


I don't think Linux is going away as the main server OS anytime soon, if ever. So that just leaves local dev

To that end- I prefer to just stick with modern languages whose first-party tooling makes them not really have to care what OS they're building/running on. That way you can work with the code and run it directly pretty much anywhere, and then if you reserve Dockerfiles for deployment (like in the OP), it'll always end up on a Linux box anyway so I wouldn't worry too much about it being Linux-specific


Yeah I'm not too worried about what's on the deployment end, but rather the dev environment. I don't want to spend any time wrestling Docker to get it to function smoothly on non-Linux operating systems.

Agree that it's a strong argument for using newer languages with good tooling and dependency management.


the dev workflow usage of docker is less about developing your own app locally and more about being able to mindlessly spin up dependencies - a seeded/sample copy of your prod database for testing, a Consul instance, the latest version of another team's app, etc.

You can just bind the dependencies that are running in Docker to local ports and run your app locally against them without having to docker build the app you're actually working on.


I've run Docker devenvs on Linux, Windows (via WSL2 so also Linux but only kinda) and Mac.

The closest I've come in years to having to really wrestle with it was the URL hacking needed to find the latest version willing to run on my 10-year-old MBP that I expect to run badly on anything newer than 10.13 - the installer was there on the website, just not linked I guess because they don't want the support requests. Once I actually found and installed it, it's been fine, except that it still prompts me to update and (like every Docker Desktop install) bugs me excessively for NPS scores.


It's, more or less, practically impossible to be OS agnostic for a backend with any sort of complexity. You can choose layers that try to abstract the OS layer away but sooner or later you're going to run into part of the abstraction that leaks. That plus the specialty nature of Windows/Mac hosting means your backend is gonna run on Linux.

It made sense at one point to use Macs but these days pretty much everything is electron or web based or has a Linux native binary. IMHO backend developers should use x64 linux. That's what your code is running on and using something different locally is just inviting problems.


> That's what your code is running on and using something different locally is just inviting problems.

That’s quite the assumption. Graviton is very popular. I haven’t touched x64 stuff in a very long time. Perhaps such generalization is a bad idea.


The problem of course being that x86 linux on laptops is still and might always be terrible. Using an ARM Mac to develop your backend services is not ideal but probably still a better user experience than the 0.01% where a modern language does something vastly different on your local machine than in production (which is btw also very often ARM these days, at least on AWS).

I've used Ubuntu, WSL2 and currently a M1 mac and if I need to be mobile AT ALL with the machine I chose a Mac any day. For a desktop computer Ubuntu works great though


It's not as if you're locked to Linux though. Most if not all of my applications would run just fine on Windows if I wanted to. It's just that when I run them myself I use a container because I'm already choosing to use a Linux environment. That doesn't mean the application couldn't be shipped different but rather it is just an implementation detail


Honestly, to an outsider Docker sure sounds like a world of pain.


To a developer it probably is, as a user, it’s much easier to install self hosted server apps with minimal effort. Especially because the Docker file usually already has the sane defaults set while the binary requires more manual config.


It's not too bad as a developer, either, especially when building something that needs to integrate with dependencies that aren't just libraries.

It may be less than ideally efficient in processor time to have everything I work on that uses Postgres talk to its own Postgres instance running in its own container, but it'd be a lot more inefficient in my time to install and administer a pet Postgres instance on each of my development machines - especially since whatever I'm building will ultimately run in Docker or k8s anyway, so it's not as if handcrafting all my devenvs 2003-style is going to save me any effort in the end, anyway.

I'll close by saying here what I always say in these kinds of discussions: I've known lots of devs, myself included, who have felt and expressed some trepidation over learning how to work comfortably with containers. The next I meet who expresses regret over having done so will be the first.


But I can save a different outsider from a lot of pain. For example our frontend dev won't have to worry about setting up the backend with all its dependencies; instead docker-compose starts those eight containers (app, redis, db etc) and he's good to go work on the frontend.

If you freelance and work on different projects, sure rvm is a great thing, but docker will contain it even better and you won't litter your work machine with stuff like mine is after a few years.


If all you need is a statically linked binary running in a Screen session somewhere, then without question, you're going to find Docker to be esoteric and pointless.

Maybe you've dealt with Python deployments and been bitten by edge cases where either PyPI packages or the interpreter itself just didn't quite match the dev environment, or even other parts of the production environment. But still, it "mostly" works.

Maybe you've dealt with provisioning servers using something like Ansible or SaltStack, so that your setup is reproducible, and run into issues where you need to delete and recreate servers, or your configuration stops working correctly even though you didn't change anything.

The thing that all of those cases have in common is that the Docker ecosystem offers pretty comprehensive solutions for each of them. Like, for running containers, you have PaaS offerings like Cloud Run and Fly.io, you have managed services like GKE, EKS, and so forth, you have platforms like Kubernetes, or you can use Podman and make a Systemd unit to run your container on a stock-ish Linux distro anywhere you want.

Packaging your app is basically like writing a CI script that builds and installs your app. So you can basically take whatever it is you do to do that and plop it in a Dockerfile. Doesn't matter if it's Perl or Ruby or Python or Go or C++ or Erlang, it's all basically the same.

Once you have an OCI image of your app, you can run it like any other application in an OCI image. One line of code. You can deploy it to any of the above PaaS platforms, or your own Kubernetes cluster, or any Linux system with Podman and Systemd. Images themselves are immutable, containers are isolated from eachother, and resources (like exposed ports, CPU or RAM, filesystem mounts, etc.) are granted explicitly.

Because the part that matters for you is in the OCI image, the world around it can be standard-issue. I can run containers within Synology DSM for example, to use Jellyfin on my NAS for movies and TV shows, or I can run PostgreSQL on my Raspberry Pi, or a Ghost blog on a Digital Ocean VPS, in much the same motion. All of those things are one command each.

If all you needed was the static binary and some init service to keep it running on a single machine, then yeah. Docker is unnecessary effort. But in most cases, the problem is that your applications aren't simple, and your environments aren't homogenous. OCI images are extremely powerful for this case. This is exactly why people want to use it for development as well: sure, the experience IS variable across operating systems, but what doesn't change is that you can count on an OCI image running the same basically anywhere you run it. And when you want to run the same program across 100 machines, or potentially more, the last thing you want to deal with is unknown unknowns.


Yeah I'm aware, which is why I qualified that part of the statement. But every company I've ever worked at has issued MacBooks (which I'm not complaining about- I definitely prefer them overall), so it is a pervasive downside

Though also, the added complexity to my workflow is an order of magnitude more important to me than the memory usage/background overhead (especially now that we've got the M1 macs, which don't automatically spin up their fans to handle that background VM)


The solution I’ve settled on lately is to just run my entire dev environment inside a Linux VM anyway. This solves a handful of unrelated issues I’ve run into, but it also makes Docker a little more usable.


As someone who has not yet adopted to Docker, even though that seems to be what is done on contemporary best-in-class PaaS....

I don't totally understand the maintenance story. On heroku with buildpacks, I don't need to worry about OS-level security patches, patches to anything that was included in the base stack provided by the PaaS, they are responsible for. Which I consider part of the value proposition.

When I switch to using "Docker" containers to specify my environment to the PaaS... if there is a security patch to the OS that I loaded in one of my layers... it's up to me to notice and update my container? Or what? How is this actually handled in practice?

(And I agree with someone else in this thread that this fly.io series of essays on docker is _great_, it's helping me understand docker better whether or not I use fly.io!)


Yeah the fly.io stuff is great, beautiful artwork too, top class job.

Re: security updates. It's not handled. There are companies that will scan your infrastructure to figure out what's in your containers and find out of date OS base images.

One thing I'm currently experimenting with is a product for people who would like an experience slightly closer to traditional Linux. The gist is, e.g.

    include "#!gradle -q printConveyorConfig"   // Import Java server
    app {
      deploy.to = "vm-frontend-{lhr,nyc}{1-10}.somecloud.com"
      linux { 
         services.server {
            include "/stdlib/linux/service.conf"
         }
         debian.control.Depends = "postgres (>= 14)"
      }
    }
Then you run "conveyor push" from any OS and it downloads a Linux JVM, minimizes it for your app, bundles it with your JARs, produces a DEB from that, sftps it to the server, installs it using apt, that package integrates the server with systemd for startup/shutdown and dynamic users, it healthchecks the server to ensure it starts up and it can do rolling upgrades/downgrades. And of course the same for Go or NodeJS or whatever other server frameworks you like. So the idea is that if you use a portable runtime you can develop locally on macOS or Windows and not have to deal with Linux VMs, but deployment is transparent.

SystemD can also run containers and "portable services", as well as using cgroups for isolation and sandboxing, so there's no need to use debs specifically. It just means that you can depend on stuff that will get upgraded as part of whole OS system upgrades.

We use this to maintain our own servers and it's quite nice. One of the things that always bugged me about Linux is that deploying servers to it always looks like either "sftp a tarball and then wire things up yourself using vim", or Docker which involves registries and daemons and isolated OS images, but there's nothing in between.

Not sure whether it's worth releasing though. Docker seems so dominant.


> It's not handled.

So... what do actual real people do in practice? I am very confused what people are actually doing here.

LOTS of people seem to have moved to this kind of docker-based deploy for PaaS. They can't all just be... ignoring security patches?

I am very confused that nobody else seems to think the "handle patches story" is a big barrier to moving from heroku-style to docker-style... that everyone else just moves to docker-style? What are they actually doing to deal with patches?

I admit I don't really understand your solution -- I am not a sysadmin! This is why we deploy to heroku and pay them to take care of it! It is confusing to me that none of the other PaaS competitors to heroku -- including fly.io -- seem to think this is something their customers might want...


I mean, every Dockerfile has a FROM some_base_image line at the beginning, and it refers to an image and an optional tag (which defaults to the `latest` tag if omitted). As far as I know, whenever you build your docker image, it will check for a new version of the base image and download that if needed. So e.g. if you have FROM ubuntu:20.04 at the top of your Dockerfile, and a new version of ubuntu:20.04 is uploaded to dockerhub, the base image will be updated the next time you run docker build, which shouldn't be too long if you're making deployments regularly.


"I admit I don't really understand your solution -- I am not a sysadmin! This is why we deploy to heroku and pay them to take care of it!"

Right. I didn't explain it very well then.

Basically it's a heroku-ish solution but as a tool rather than a service, and where you build locally instead of pushing source code to some remote cloud. You say "here's my build system, go push to these plain Linux VMs". Now, someone needs at least a bit of sysadmin knowledge - you need to know how to obtain Linux VMs, set up access to them, and make sure they're (self-)applying security updates. From time to time you'll need to roll to new OS releases and do restarts for kernel fixes. But that stuff isn't all that hard to learn.

Still - I'd be curious to know where your threshold is for touching Linux. With Heroku you never did, right? Dynos could have run Windows for all you knew? If you had a tool that e.g. you gave your cloud credentials to, and it then spun up N Linux VMs, logged in, configured automatic updates and some basic monitoring then let you push servers straight from your git repo, would you be interested in that? How much did you rely on Heroku support going in and helping you fix app-specific problems live in production?


> LOTS of people seem to have moved to this kind of docker-based deploy for PaaS. They can't all just be... ignoring security patches?

They can and they are, IME. I've seen images that literally run some version of Alpine Linux from 3 years ago.


It seems not that hard to just manually bump your version periodically, right? I get that it's not as nice as having it taken care of for you, but it's not like you're having to manage a whole system by hand


Assume I've never used Docker before, can you tell me what "manually bump your version periodically" means with regard to this question? Can anyone give me concrete instructions for what this looks like in a specific context? Like, say the first Dockerfile suggested in OP?

The Dockerfile has in it:

> ARG RUBY_VERSION=3.2.0

> FROM ruby:$RUBY_VERSION

Which it says "gets us a Linux distribution running Ruby 3.2" (Ubuntu?). The version of that "linux distribution" isn't mentioned in the Dockerfile. How do I "bump" it if there is a security patch affecting it?

Then it has:

> RUN apt-get update -qq && apt-get install -y build-essential libvips && \ ...

Then I just run `fly launch` to launch that Dockerfile on fly.io. How do I "bump" the versions of those apt-get dependencies should they need updating?


If you're basing on a lang-specific container like ruby, then it's the version of that container in the FROM line. Notice how ruby images come in various versions of OS (https://hub.docker.com/_/ruby). You can specify that as part of the FROM string. However, they also let you drop the OS part, and only specify ruby version. This will usually default to an image with the latest OS provided by that docker repo. Nothing to bump in this case, just occasionally rebuild your own image from scratch, instead of from cache, to make sure it downloads the latest base image.
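
For example (a sketch; tag names as they appear on that Docker Hub page):

  FROM ruby:3.2.0-bullseye  # pins both the Ruby version and the Debian release
  # vs.
  FROM ruby:3.2.0           # pins only Ruby; the OS follows whatever the repo currently builds on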


This right here. Most of the time you just need to rebuild your image. If the project is being actively developed and built, nothing to worry about. (Unless you pin to a very specific OS version of course).

If it's not, you just need to trigger a build every so often. Maybe this could be a feature PaaS offers in the future.


Docker caches the results of each command, so to "bump" the versions you have to trigger a rebuild of the whole image and tell it to toss the cache. So it'll re-run apt-get update and use whatever the latest stuff is as of the moment of rebuild.
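
Concretely, that uncached rebuild is roughly (placeholder tag):

  # --pull re-fetches the base image, --no-cache re-runs every step
  docker build --pull --no-cache -t myapp:latest .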

There are a few problems with this, as noted elsewhere in the thread:

1. You have to do an uncached rebuild and repush all of your images, using some ad-hoc company specific process. There's nothing that can do this for you at the end points or service levels, because Docker images are meant to be immutable after build and don't come with the scripts or inputs used to build them.

2. The default is to use caching, so devs may not notice that they didn't refresh their base OS for a while.

3. You don't get notified when updates are available or applied. There's nothing like the unattended-upgrades package that comes with Debian normally, which will apply upgrades and then tell you what happened.

4. Because of (3) the latency is very high. There is no story (other than third party scanners) for getting notified about urgent upgrades. If there's another zero day in OpenSSL then with a standard Linux install you'll get patched as soon as a new package is released and your machines update, so pretty quick (a day or so). With Docker images, it'll get patched on an app-by-app basis if and when people get around to doing an uncached rebuild and repush of the image.

5. Kernel upgrades are a whole can of eels in container-world. People like to think of the container as being a self-contained OS but it isn't. There is a largely unstated and untested assumption that any Linux distro user space can run on any kernel version or configuration, regardless of whether the OS originally shipped in that configuration, and everything will just automatically do something sensible. Mostly this assumption is OK because servers are very simple, but it's not actually guaranteed by anything. A lot of people misunderstand the "stable Linux syscall interface" guarantees and what that means.

It's for reasons like this that I prefer the slightly older way of running real binaries that are exposed to the OS and which use OS specific packages. I configured unattended-upgrades and use LTS versions of the OS, so that security patches just stream in without me doing anything. There are a few downsides to this too:

1. You have to either restart your servers from time to time to force security patches to actually get reloaded into memory, or use the needrestart Debian package - however that only works if you're using package metadata properly.

2. You do need to understand at least a bit of Linux sysadmin. Enough to know how to ssh in as root, use apt-get and so on.

3. The tooling story is poor, hence my musings above about demand for something better. Without Docker, today you're going to be manually copying files to the server, having to learn systemd and how to start/stop/enable services, how to restart them on upgrades etc. That's why I've written something that does it all for you.


Most places are building software regularly. Once a week update the base layer of the container to the latest OS. Then update your libs and app. Run tests, deploy.


Same boat here. I'm doubtful that we could take on the additional maintenance work for less than the Heroku premium we pay.

I wouldn't be surprised if I'm wrong and newer tools bridge the gap for a lower price and/or time investment, but I also wouldn't be surprised if I'm right and many places using Docker could save time/money offloading the maintenance to something more like a managed PaaS.


Agreed. I actually have not too much problem with heroku pricing (I could complain about some areas, but it's working for us) -- I'm just worried that heroku seems to be slowly disintegrating through lack of investment, so am worried that there seem to be no other realistic options for that level of service! There don't seem to be other reliable options for managed PaaS that takes care of maintenance in the same way.


I've seen render.com thrown around a bit as an alternative. Haven't tried it out myself though.

There's also netlify, vercel and similar sites but I think they're mainly geared toward all-javascript apps.


For what it's worth, the vast majority of vulns in a web app are in its code or dependencies rather than in the base OS. I haven't actually seen any real-world cases of getting hacked because your docker base OS image was out of date. The only exception I would give would be for like language runtime version, which can occasionally be an attack vector. Switching runtime version usually requires manual testing regardless, so I wouldn't really consider it a docker-only problem.

If you're really concerned, just have a CI job that rebuilds and tests with newer base image versions.


The answer to that is "something that is not Docker handles it". Wherever you're hosting your images there'll be something platform-specific that handles it. Azure, AWS, and GCP all have tools that'll scan and raise a notification. If you want that to trigger an automatic rebuild, the tools are there to do it. Going from there to "I don't need to think about any CVE ever" is a bit more of a policy and opinion question.


I hadn't even thought I would be "hosting my images" on something like Azure, AWS, or GCP to deploy to fly.io. Their examples don't mention that. You just have a Dockerfile in your repo, and you deploy to fly.io.

But it sounds like for patch/update management purposes, now I need to add something like that in? Another platform/host to maintain/manage, at possibly additional price, and then we add dealing with the specific mechanisms for scanning/updating too...

Bah. It remains mystifying to me that the current PaaS docker-based "best practices" involve _quite a bit_ more management than heroku. I pay for a PaaS hoping to not do this management! It seems odd to me that the market does not any longer seem to be about providing this service.


You can run something like trivy locally, which will do that particular job. Fly.io might add that later.

It is odd, but the general direction of the market (at least at the top) is becoming less opinionated, which means you need to bring your own. Not sure I like that myself.
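
For example, the local scan mentioned above is a one-liner (placeholder image name):

  trivy image myregistry/myapp:latest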


The result of a Heroku buildpack is also a Docker image. The switching of base layers is a feature of the Docker image design, though not normally used by typical Docker users.


I think the idea is you have a CI/CD pipeline that will periodically rebuild and redeploy with the latest base images along with your code updates and such.


I work a lot on modernizing codebases. Many times it will be codebases without any proper testing, different versions of frameworks and programming languages, etc.

Currently, I'm working in 3 projects at once + my own. Even though it's all PHP, the projects are PHP 8.1 / Symfony 6.2, PHP 8.2 / Symfony 6.2, PHP 8.1 / Laravel 8 and the newest project I joined is PHP 7.4 / Symfony 4. I have to adhere to different standards and different setups; I don't want to switch the language and the yarn or npm versions or the Postgres or MySQL versions and remember each one. Some might use RabbitMQ, another Elasticsearch.

Docker helps me tremendously here.

Additionally, I add a Makefile for each project as well and include at least "make up" (to start the whole app), "make down", "make enter" (to enter the main container I'll be working with), "make test" (usually phpunit) and perhaps "make prepare" (for phpunit, phpstan, php-cs-fixer, database validate) as a final check before adding a new commit.
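
A minimal sketch of such a Makefile, assuming a docker-compose setup with a `php` service (recipe lines must be indented with tabs):

  up:
  	docker-compose up -d
  down:
  	docker-compose down
  enter:
  	docker-compose exec php bash
  test:
  	docker-compose exec php vendor/bin/phpunit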

Locally, I had additional aliases on my shell. "t" for "make test", "up" for "make up".

So now I just have to go to a project folder, write "up" and am usually good to go!


Sure, but that feels like a band-aid on an existing problem. And sometimes a band-aid is what you need, especially with legacy systems, so I'm not judging that. But I think for a new project, these days, it's a mistake to plan on needing Docker from the outset.

Modern languages with modern tooling can automatically take care of dependency management, running your code cross-platform, etc [0]. And several of them even have an upgrade model where you should never need an older version of the compiler/runtime; you just keep updating to the latest and everything will work. I find that when circumstances allow me to use one of these languages/toolsets, most of the reasons for using Docker just disappear.

[0] Examples include Rust/cargo, Go, Deno, Node (to an extent)


How do you mean?

If a new project has no tests, runs on PHP 7.2 and MySQL, and I need to upgrade it to PHP 8.2, I first need to write tests, and I can't use PHP 8.2 features until I've upgraded.

Composer is the dependency manager, but I still need PHP to run the app later on. And a PHP 7.2 project might behave differently when running on PHP 7.2 or PHP 8.2. And sometimes PHP is just one part of the equation. There might be an Angular frontend and a nginx / php-fpm backend, maybe Redis, Postgres, logging, etc. They need to be wired together as well. And I'm into backend web development and do a bit of frontend development, but whoa, I get confused with all the node.js, npm, yarn versioning with corepack and nvm and whatnot. Here even a "yarn install" behaves differently depending on which yarn version I have. I'd rather have docker take care of that one for me.

I feel like "docker (compose)" and "make" are widely available and language-agnostic enough and great for my use cases (small to medium sized web apps), especially since I develop on Linux.

Something language-specific like pyenv might work as well, but might be too lightweight for wiring other tools. I used to work a lot with Vagrant, but that seems to more on the "heavy" side.

Edit: I just saw your examples, unfortunately I've only dabbled a bit with Go and haven't worked with Rust yet, so I can't comment on that, but I would be interested to know how they work re: local dev setup.


> and I can't use PHP 8.2 features until I've upgraded

Yeah- so in Rust, the compiler/tooling never introduces breaking changes, as (I think) a rule. For any collection of Rust projects written at different times, you can always upgrade to the very latest version of the compiler and it will compile all of them.

The way they handle (the very rare) breaking changes to the language itself is really clever: instead of a compiler version, you target a Rust "edition", where a new edition is established every three years. And then any version of the Rust compiler can compile all past Rust editions.

Node.js isn't quite as strict with this, though it very rarely gets breaking changes these days (partly because JavaScript itself virtually never gets breaking changes, because you never want to break the web). Golang similarly has a major goal of not introducing breaking changes (again, with some wiggle-room for extreme scenarios).

> I get confused with all the node.js, npm, yarn versioning with corepack and nvm and whatnot. Here even a "yarn install" behaves differently depending on which yarn version I have.

Hmm. I may be biased, but I feel like the Node ecosystem (including yarn) is pretty good about this stuff. Yarn had some major changes to how it works underneath between its major versions, but that stuff is mostly supposed to be transient/implementation-details. I believe it still keys off of the same package.json/yarn.lock files (which are the only things you check in), and it still exposes an equivalent interface to the code that imports dependencies from it.

nvm isn't ideal, though I find I don't usually have to use it because like I said, Node.js rarely gets breaking changes. Mostly I can just keep my system version up to date and be fine, regardless of project

Configuring the Node ecosystem's build tools gets really hairy, but once they're configured I find them to mostly be plug and play in a new checkout or on a new machine (or a deployment); install the latest Node, npm install, npm run build, done. Deno takes it further and mostly eliminates even those steps (and I really hope Deno overtakes Node for this and other reasons).

> maybe Redis, Postgres, logging, etc. They need to be wired together as well

I think this - grabbing stock pieces off the shelf - is the main place where Docker feels okay to use in a local environment. No building dev images, minimal state/configuration. Just "give me X". I'd still prefer to just run those servers directly if I can (or even better, point the code I'm working on at a live testing/staging environment), but I can see scenarios where that wouldn't be feasible


I'm not gonna lie, I didn't read all that, but the node example alone proves you either didn't read the guy you replied to or haven't been coding long enough to grok the problem.

What if you want to start a new project using the latest postgres version because postgres has a new feature that will be handy, but you already maintain another project that uses a postgres feature or relies on behaviour that was removed/changed in the latest version? You're going to set up a whole new VM on the internet to be a staging environment and instead of setting up a testing and deployment pipeline you're going to just FTP / remote-ssh into it and change live code?

you define an app's entire chain of dependencies, including external services, in a compose file / set of kube manifests / terraform config for ECS. Then in the container definition itself you lock down things like C library and distro versions: maybe you use a specially patched imagemagick on one project or a pdf generator on another, and fontconfig defaults were updated between distro releases in a way that changed how anti-aliasing works, and now your fonts are all fugly in generated exports... stick all those definitions in a Dockerfile and deploy onto any Linux distro / kernel and it'll look identical to how it does locally
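
As a rough sketch (image tags and service names are made up), each project's compose file pins exactly what that project expects, independently of its siblings:

    services:
      app:
        build: .            # Dockerfile pins the distro, fontconfig, patched imagemagick, ...
      db:
        image: postgres:15  # while the other project keeps running postgres:11, untouched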

nevermind this, check out this thread to destroy your illusion that simply having node installed locally will make your next project super future proof: https://github.com/webpack/webpack/issues/14532 - and note that some of the packages referencing this old issue in newly opened bug reports are very popular!

if you respond please do not open with "yeah but rust", I can still compile Fortran code too


Your comment makes clear you didn't read the guy you replied to, and I'm not sure it would've been in good faith even if you had, so I'm not going to spend time writing a full response.


I do work with Go and it doesn’t preclude the utility of containers in development for out-of-process services.


I'm sorry but a complex system goes WAY beyond just the language being used. Not using docker (or something very similar) would only result in a massive waste of time right out of the gate.

I have to assume you work by yourself or in an extremely small company to be able to handle project complexity without docker.


Docker can be a pain sometimes (especially docker for mac!), but in moderately complex apps, it's so nice to be able to just fire up 6 services (even some smaller apps have redis, memcached, pg/mysql, etc) and have a running dev environment.

If someone updates nodejs, just pull in the latest dockerfile. Someone updates to a new postgres version? Same thing. It's so much better than managing native dependencies.


heck, we use a pdf-to-latex container just to save us (some on linux, some on mac and windows) from thinking about which libraries each person needs to install on their machine or on the servers.


> I don't need all that for development, I just want to run the dang code

Have you ever worked somewhere where in order to run the code locally on your machine, it's a blend of "install RabbitMQ on your machine, connect to MS-SQL in dev, use a local Redis, connect to Cassandra in dev, run 1 service that this service routes through/calls to locally, but then that service will call to 2 services in dev", etc?


I certainly have. And there is often a Dockerfile or a docker-compose file. And the docs will say "just run docker-compose up, and you should be good to go."

And 50% of the time it actually works. The other 50% of the time, it doesn't, and I spend a week talking to other engineers and ops trying to figure out why.

Not a jab at Docker, btw.


The thing I've been trying - instead of the docs saying "just run docker-compose up" (and a few other commands), put all of those commands in a bash script. That single bootstrap command should: docker compose down, docker compose build, docker login, [install gems/npm modules], rake db:reset (delete + create), docker compose up, rake generate_development_data.
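
Something in the spirit of this sketch (exact commands and ordering depend on what runs natively vs. in containers; the registry is a placeholder):

    #!/usr/bin/env bash
    set -euo pipefail

    docker compose down
    docker compose build
    docker login registry.example.com      # placeholder registry
    bundle install && yarn install         # or npm install, etc.
    docker compose up -d                   # backing services need to be up before the db tasks
    bundle exec rake db:reset
    bundle exec rake generate_development_data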

This way each developer can/should tear down and rebuild their entire development environment on a semi-frequent basis. That way you know it works way more than 50% of the time. (Stretch goal: put this in CI.... :)

The other side benefit: if you reset/restore your development database frequently, you are incentivized/forced to add any necessary dev/test data into the "rake generate_development_data" task, which benefits all team members. (I have thought about a "generate_development_data.local.rb" approach where each dev could extend the generation with data they shouldn't commit upstream, but haven't done that by any means....)


And to borrow from the classic regex saying: "Now you have two problems"


What are some common error cases you see this way?


Docker compose for those services while the language itself runs natively has been the best solution to this problem for me in the past. Docker compose for redis, postgres, elastic, etc.

IMO Docker for local dev is most beneficial for python where local installs are so all over the place.


This is exactly what I recently set up for our small team: use Docker Compose to start Postgres and Redis, and run Rails and Sidekiq natively. Everyone is pretty happy with this setup, we no longer have to manage Postgres and Redis via Homebrew and it means we're using the same versions locally and in production.

If anyone is curious about the details, I simply reused the existing `bin/dev` script set up by Rails by adding this to `Procfile.dev`:

    docker: docker compose -f docker-compose.dev.yml up
The only issue is that foreman (the gem used by `bin/dev` to start multiple processes) doesn't have a way to mark one process as depending on another, so this relies on Docker starting the Postgres and Redis containers fast enough that they're up and running before Rails and Sidekiq need them. In practice it means that we need to run `docker compose` manually the first time (and I suppose every time we update to new versions) so that Docker downloads the images and caches them locally.


I do the same thing and it works well.

For your issue could you handle bringing up your docker-compose 'manually' in bin/dev? Maybe conditionally by checking if the image exists locally with `docker images`. Then tear it down and run foreman after it completes?
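
Something like this hypothetical bin/dev tweak could work (it assumes a reasonably recent Docker Compose for --wait, and that the docker: entry is removed from Procfile.dev so it isn't started twice):

    #!/usr/bin/env bash
    # Start backing services first so Rails/Sidekiq don't race them;
    # the first run will also pull any missing images here.
    docker compose -f docker-compose.dev.yml up -d --wait
    exec foreman start -f Procfile.dev "$@"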


Yea, typically I just do the step manually to start dependencies and then start the rails processes.


Yeah, and python has been most of my exposure to local Docker, so that may be coloring my experience here. Running a couple off-the-shelf applications in the background with Docker isn't too bad, but having it at the center of your dev workflow where you're constantly rebuilding a container around your own code is just awful in my experience


I don't use Docker locally except for building/testing production containers. I also found them not helpful for development.

That said, I recently discovered that VS Code has a feature called "Dev Containers"[0] that ostensibly makes it easy to develop inside the same Docker container you'll be deploying. I haven't had a chance to check it out, but it seems very cool.

[0] https://code.visualstudio.com/docs/devcontainers/containers


For the backend rails app my docker-compose mounts the work directory like so, which means I don't have to develop inside that container except for when I need to use the rails console.

    web:
      image: rubylang/ruby:3.0.1-focal
      volumes:
        - .:/myapp


The best description I heard about Docker was from a fella I worked with a while back who said, "Docker is terrible, but it's less worse than Puppet, etc." after another co-worker challenged our decision to move forward with Docker in our production infrastructure.


What kind of company does a person work at where docker isn't essential for development? I am really curious about this, because without it complex projects would slow to an abysmal crawl trying to even onboard people and then have them do any meaningful development.


On macOS, I've found that the middle of the road of running my own code natively with all the infrastructure components and their config (db, redis, etc) in docker-compose gives me the best combination of performance and ease of setup.


If anyone is looking for a complete guide I put together this last month: https://nickjanetakis.com/blog/a-guide-for-running-rails-in-...

It includes running Rails and also Sidekiq, Postgres, Redis, Action Cable and ties in esbuild and Tailwind too. It's all set up to use Hotwire as well. It's managed by Docker Compose. The post also includes a ~1 hour ad-free YouTube video. The example app is open source at https://github.com/nickjj/docker-rails-example and it's optimized for both development and production. No strings attached. The example app has been maintained and deployed a bunch over the years.


Am I the only person who struggles to deploy Rails apps?

It's a super productive framework to develop in, but deploying an actual Rails app, after nearly 20 years of the framework's existence, still seems way more difficult than it should be.

Maybe it's just me.


You are not!

That is in fact, I think, why fly.io is investing in trying to make it easier to deploy Rails, on their platform.

But also contributing to the effort to make Rails itself come with a Dockerfile solution, which the Rails team is accepting into Rails core because, I'd assume/hope, they realize it can be a challenge to deploy Rails and are hoping that the Dockerfile generation solution will help.

Heroku remains, IMO, the absolute easiest way to deploy Rails, and doesn't really have a lot of competition -- although some are trying. Unfortunate, because heroku is also a) pricey and b) its owners seem to be into letting it kind of slowly disintegrate.

I'm really encouraged that fly.io seems to be investing in trying to match that ease of deployment for Rails specifically, on fly.io.


Rails is a huge framework. I remember using capistrano to deploy it on many servers, and that was much harder than with containers today.

The issue is how much RoR does and how tightly coupled it is with its build toolchain - gems often require ways of building C code, whatever you use to build assets has its own set of requirements, and there is often an ffmpeg or imagemagick dependency. In my opinion, a lot of the issues come from Ruby itself being Ruby.

I agree that it's silly for such a productive framework to be such a PITA to deploy. To be fair, I'd pick RoR deployment over node.js deployment any day. I'm still not sure how to package TS projects correctly.


I still have battlescars from capistrano. What a pain. So much easier with docker.


I remember trying to solve silly problems:

- production servers should not have a C compiler installed (why???)

- compiling assets on every single VM that runs the service is twice as silly

I'd still take that over something like AWS Beanstalk.


I feel your pain and risking a shameless plug, we built a company to solve this problem as it was the only thing we really didn't like about Rails.

Check out Cloud 66!


I've been using Cloud66 since 2015 for my rails deployments, makes my life much easier!


+1 for Cloud66


Just use Heroku and move on with your life... it's not THAT expensive :) And if it is for you, you're probably at a point with the product where you can afford it.


I've been deploying to Heroku and there it's been incredibly easy.


Not just you.

Whenever you've covered your bases, Rails grows in complexity. Probably necessary complexity to keep up with the modern world.

Solved asset pipeline pain? Here's webpacker. No, let's swap that out for jsbundling-rails.

Finally tamed the timing of releasing that db migration without downtime? Here's sidekiq, requiring you to synchronize restarts of several servers. Oh, wait, we now ship activejob.

Managed to work around threads eaten by websocket connections in unicorn? Nah, puma is default now. Oh, and here's ActionCable, so you may need to fix all your server magic.

Rails is opinionated. And that is good at times. But it also means a lot of work on annoying plumbing having to be rebuilt for a new, or shifted opinion. Work on plumbing, that is not work on your actual business core.


> Finally tamed the timing of releasing that db migration without downtime? Here's sidekiq, requiring you to synchronize restarts of several servers

To be fair, this was already an issue whenever you have more than one instance of anything. Whether it's an extra sidekiq or two web servers or anything else, you have a choice of: stop everything and migrate, or split the migration into prepare, update code, finalise migration.


Yes. It was.

But async workers have the tendency to be busy on long running processes, whereas a web server typically has connections that last at most seconds. Their different profile makes restarting just a tad harder.


We've deployed with Capistrano to Ubuntu hosts for wow... 13 years now? It works super well, but our DevOps folks want to move to a push button approach within GitLab (rather than me typing one easy command, but hey).

I'm super stoked about the included Docker config and all the blog posts it will shortly inspire. Finding best-practices for Docker based deployments has been anything but fun. I'm still not sure how we'll implement the equivalent of `cap production deploy:rollback` and the like with Docker. Not that we use that basically ever, but knowing it's available is great.


I think this is one of the things that Java really solved even though the actual implementation is a bit weird.

Being able to generate a war or jar as a released binary is something that would be cool to see in the ruby world.


No, it's not. I've observed the same in the Python community where there is a bit of a disconnect between developers and operations people. Docker has been the obvious way to package up code for close to a decade now. I believe, I started using it around 2014 or so and I actually dockerized a jruby app at the time as well (sinatra not rails).

With Ruby and Python, most instructions for getting something going are just a series of "install this or that", "modify this file over there", "you could do this or that", "call this, then that, and then that", etc. These instructions tend to be developer focused. Virtual environments are usually left as an exercise to the reader, and failing to use those leaves you with a big mess on your filesystem. Just pretend your production server is a snowflake developer laptop and you'll be fine seems to be the gist of it. Except of course that doesn't quite work like that anymore in many places, and you need to take some steps to prevent that.

I spent some time facepalming my way through the Apache Airflow (python) documentation trying to figure out a sane way to get it onto a production environment. As it turned out, that involved jumping through quite a few hoops. My conclusion was that whoever wrote it was not used to dealing with production environments.

So, good that they are tackling this in the rails community. Stuff like this should not be an afterthought. With docker, you don't really need any virtual environments anymore. And you can also use them for development. That actually simplifies getting started instructions for both developers and operations people. Just use this container for development and run this command to push your production ready image to your docker registry of choice. No venvs, no gazillions of dependencies to install, etc.


If you control your own server, it's very easy to deploy a Rails application. The downside is that you need to keep up with patching, ensuring that the disk doesn't become full, etc. etc.

Deploying Rails to PAAS solutions is also easy but used to be expensive with Heroku. I very recently started using DigitalOcean Apps for a personal project and it's been very easy, while costs look acceptable.


I recently deployed a Rails app to Railway and I’m already looking for alternatives. Their Heroku migration guide directly states that…

> We auto-magically add, configure, and deploy your services described in the procfile.

https://railway.app/heroku

…but that’s an outright lie and they caveat that statement by saying they only support single processes which completely defeats the entire purpose of Procfiles.

https://docs.railway.app/deploy/builds

Maybe Fly does this better and I’ll give them a try.


I'm not sure how it's harder than python + django, node.js or php... I used to deploy with capistrano, then we moved to deploying from a single rpm file, and now we just use our pipeline to prepare pods and push them to k8s - continuous deployment, direct from the gitlab pipeline... not sure how it can be easier than that.


I've never experienced issues supporting rails deployments, and in fact find them quite easy to roll out, just as any other service written in any other language. I prefer Kubernetes though, which provides abstractions.


I've never used it, but I'm curious what makes it difficult?


Try Hatchbox https://hatchbox.io/

Makes deployment super easy.


Thanks @aantix :D


Nope! You're not alone. This has been an issue in the Rails community for a while. It's why DHH started working on it, which Sam Ruby noticed and moved forward in a really big way.

It's still going to be somewhat challenging for some folks as this Dockerfile makes its way through the community, but this is a really small step in the right direction for improving the Rails deployment story. There's now at least a "de facto standard" for people to build tooling around.

Fly.io is going to switch over to the official Rails Dockerfile gem for pre-7.1 apps really soon, so deploying a vanilla rails app will be as simple as `fly launch` and `fly deploy`.


I have my own local development Rails setup and template files that could be dropped in any project with minimal changes (mostly around configuring the db connection)

- https://gitlab.com/sdwolfz/docker-projects/-/tree/master/rai...

Haven't spent the time to document it. But the general idea is to have a `make` target that orchestrates everything so `docker-compose` can just spin things up.

I've used this sort of thing for multiple types of projects, not just Rails, it can work with any framework granted you have the right docker images.

For deployment I have something similar, builds upon the same concepts (with ansible instead of make, and focused on multi-server deploys, terraform for setting up the cloud resources), but not open sourced yet.

Maybe I'll get to document it and post my own "Show HN" with this soon.


I do miss the pre-docker days of using capistrano to deploy rails projects. Most deploys would take less than two minutes in the CI server and most of that was tests. The deploys were hot and requests that happened during the deploy weren't interrupted. Now with Docker I'm seeing most deploys take around ten minutes.

The downside of capistrano was that you'd be responsible for patching dependencies outside of the Gemfile and there would be occasional inconsistencies between environments.


This experience is mostly about how you see docker used, not any specific property of it. Both Capistrano and Docker can be used for deploys with or without interruptions. Both can be immediate or take minutes. The tool itself won't save you from bad usage.


Why are you using Docker then? I have little experience with Ruby, but for PHP projects we're still using ye olde way. Deploys are hot and do not interrupt requests, just as you described (it pretty much comes down to running `composer install && rsync` in CI — there are more complicated solutions like capistrano, which I've also used, but they don't seem to provide enough features to warrant increased complexity).


I hear you, I love(d) capistrano and do miss it quite a bit.

That said, the cap approach was easy for a static target, but a horizontally scalable one (particularly with autoscaling) was an utter nightmare. That's where having docker really shines, and is IMHO why cap has largely been forgotten.


I still use Capistrano and have full Load Balancing and Auto-Scaling implemented via the elbas gem. I would be happy to share my configuration if anyone needs it.


Thank you! That would be neat


Thanks but I already have my own containers and infrastructure for dev and prod Rails.

It's fine for people who are just starting out or want a repeatable environment.

Also, if you want stability and fewer headaches long-term:

- use a RHEL-derived kernel and customize the userland (container or host); quay.io has a good CentOS Stream 9 image. Ubuntu isn't used at significant scale for multiple reasons, and migrating over later is a pain.

- consider podman over docker

- use packaging (nix, habitat, or rpms) rather than make install (and use site-wide sccache)

- container management (k8s or nomad)

- configuration management (chef) because you don't always have the luxury of 12factor ephemeral instances based on dockerfiles and need to make changes immediately without throwing away a database cluster or zookeeper ensemble

- Shard configuration and app changes, with a rollback capability

- Have CI/CD for infrastructure that runs before landing

- Monitoring and alerting

- Don't commit directly to production except for emergencies. Require a code review signoff by another engineer. And be able to back out changes.

- Have good, tested backups that aren't replication

- Don't sweat the small stuff, but get the big stuff right that doesn't compound tech debt


This is really cool. Is the Rails server production ready? I was always under the impression you had to run it with Unicorn or similar, although I haven't been following Rails development recently.


The default Gemfile now bundles Puma in, so yeah it's production ready


Puma is certainly fine as the app server - but normally you'd still have a proxy/load balancer/tls terminator/"ingress server" in front. Something like traefik, nginx, haproxy or caddy.

If (one of) the front-facing servers does regular http caching (a good idea anyway to play nice with rails caching[1]), you can probably "serve" the static assets straight from rails and let your proxy serve them from cache (if you don't have/need a full cdn).

[1] https://guides.rubyonrails.org/caching_with_rails.html#condi...
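
A minimal nginx sketch of that setup (ports, paths, and cache sizing are placeholders):

    proxy_cache_path /var/cache/nginx/rails keys_zone=rails_cache:10m max_size=1g;

    server {
      listen 80;

      location /assets/ {
        proxy_cache rails_cache;
        proxy_cache_valid 200 7d;   # fingerprinted assets are safe to cache for a long time
        proxy_pass http://127.0.0.1:3000;
      }

      location / {
        proxy_pass http://127.0.0.1:3000;
      }
    }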


Yes, usually you still serve your asset (images,...) directory through nginx or something similar, though.


This has been my experience too, and I've never even used Rails at scale. Puma will struggle to serve "lots" (tens on a single page) of (for example) static images (though the actual user experience varies across browsers - safari does okay but firefox chokes and chromium is in the middle). This is with puma on a decent intel prosumer CPU (an i7 iirc) using x-sendfile through nginx. So puma isn't even actually pushing the image bytes here.

I replaced it with nginx intercepting the image URLs to serve them without even letting puma know, and it was instantly zippy. I still use sendfile for when I'm doing dev and not running nginx in front of it, and I'm not happy with that kind of leaky abstraction, but damn, the benefits in prod are too difficult to ignore.
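
Roughly what that interception looks like, as a sketch (paths are placeholders):

    # inside the server { } block: nginx serves these straight from disk, puma never sees the request
    location ^~ /uploads/ {
      root /var/www/myapp/public;
      expires 30d;
    }

For the responses Rails does handle itself, `config.action_dispatch.x_sendfile_header = "X-Accel-Redirect"` is the usual setting when nginx sits in front, so nginx rather than puma pushes the file bytes.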


Serving the assets directly with the Rails app, but with a CDN caching layer in front, is also common, and is what heroku for instance recommends.


Will it be called "Ruby on Whales"? Joke aside, it's trivial to write your own Dockerfile and this still is what nontrivial apps will do, due to customization.


It's trivial to write a Dockerfile, it's not trivial to write a good one.


> RAILS_SERVE_STATIC_FILES - This instructs Rails to not serve static files.

...but... it's set to True....


I saw that too -- I think it's a typo in the OP, that the code is what they intended to recommend, and the comment is wrong.

Standard heroku Rails deploy instructions are to turn on `RAILS_SERVE_STATIC_FILES` but also to put a CDN in front of it.

My guess is this is what fly.io is meaning to recommend too. But... yeah, they oughta fix this! A flaw in an otherwise very well-written and well-edited article, how'd it slip through copy editing?


Open a PR! It will fix the problem if it's a bug/oversight, or, if it's intended to be that way, force somebody to comment on it for a future person who is puzzled by this choice.


It's intended. Yes serving static files from Ruby is slower than from nginx or whatever, but you'd need to embed nginx in the image and run both processes etc.

The assumption here is that there is a caching proxy in front of the container, so Rails will only serve each asset once, and performance isn't critical.


I am not sure which they intended, but the problem is that the comment doesn't match the code:

> RAILS_SERVE_STATIC_FILES="true"

> RAILS_SERVE_STATIC_FILES - This instructs Rails to not serve static files

That is not what it does when you put `="true"`, nope.

In fact, both settings are common. On heroku you typically use `RAILS_SERVE_STATIC_FILES="true"` but put a CDN in front. But other people set it to false and have e.g. nginx serving them. If this is what fly.io means for you to do... where is the nginx? It's not mentioned in the tutorial.

Whichever they intend to do, their code does not match their narrative of what it does.


I was answering to the person saying to open a PR (on Rails I presume).

That comment isn't in the Rails dockerfile, it was added by the OP.

https://github.com/rails/rails/blob/4f3af4a67f227ed7998fed57...


I've found the "Docker for Rails Developers" book by Rob Isenberg [1] to be a great resource for getting started using Rails and Docker. It's only a couple of years out of date at this point but should still be highly relevant for anyone trying to get started. The only issue I've had with Rails and Docker is serving the container on a 1GB d.o. droplet - the node-sass gem is usually a sticking point for a lot of people, and I believe the 1GB droplet to be just too small to host a dockerized rails app. But the benefits of using docker for development are still overwhelmingly worth the effort of containerizing things.

It's super cool rails 7.1 is including the Dockerfile by default - not that rails apps need more boilerplate though..

[1]: https://pragprog.com/titles/ridocker/docker-for-rails-develo...


This is long overdue. Rails got very nice updates in the past years to make it easier to handle JS and other assets.

Deploying it was still always a hassle and involved searching for existing Dockerfiles and blog posts to cobble together a working one. At the beginning I always thought I was doing something wrong, as it's supposed to be easy and do everything nicely out of the box.

And dhh apparently agrees (Now at least: https://dhh.dk/posts/30-myth-1-rails-is-hard-to-deploy) as there's now a default Dockerfile and also this project he's working on, this will make things a lot nicer and more polished: https://github.com/rails/mrsk


Great name. I wonder if Maersk (the Danish shipping container company) will note the hat-tip, and if it will annoy their lawyers.


I have a proposal out for making the deployment story even easier by providing RubyGems a way to describe the packages they depend on to operate properly at https://community.fly.io/t/proposal-declare-docker-images-de...

The idea is I could run a command like `bundle packages --manager=apt` and get a list of all the packages `apt` should install for the gems in my bundle.

Since I know almost nothing about the technicalities and community norms of package management in Linux, macOS, and Windows, I'm hoping to find people who do and care about making the Ruby deployment story even better to give feedback on the proposal.


One problem you're likely to run into is that systems using the same packaging lineage cut the same dependency up in different ways. The "right name" for a dependency can change between Ubuntu and Debian, between different releases of Ubuntu, and different architectures. It very quickly gets out of hand for any interesting set of dependencies. Now it might be that there's enough stability in the repositories these days that that's less true than it was, but I remember running into some really annoying cases at one point when I had a full gem mirror to play with.

This is one of those problems that sounds easy but gets really fiddly. I had a quick run at it from a slightly different direction a looooong time ago: binary gems (https://github.com/regularfry/bgems although heaven knows if it even still runs). Precompiled binary gems would dramatically speed up installation at the cost of a) storage; and b) getting it right once. The script I cobbled together gathers the dependencies together into a `.Depends` file which you can just pipe through to the package manager, and could happily use to strap together a package corresponding to the dependency list.

I've never really understood why a standard for precompiled gems never emerged, but it turns out it's drop-dead simple to implement. The script does some linker magic to reverse engineer the dpkg package dependency list from a compiled binary. I was quite pleased with it at the time, and while I don't think it's bullet-proof I do think it's worth having a poke at for ideas. Of course it can only detect binary dependencies, not data dependencies or anything more interesting, so there's still room for improvement.


Interesting.. I’ll be checking this out as I go deeper into this problem. Thanks for sharing.


This is a manual using that image with Redis and Postgres: https://medium.com/@gustavoinzunza/rails-redis-pg-with-docke...


This is really good news. And to be fair, the easiest way to become mainstream is to have a guide on how to deploy a Rails application in 1 click on any cloud hosting service.


I still use Capistrano. In fact, I like Capistrano so much that I have full Load Balancing, Auto-Scaling, and End-to-End encryption enabled for my projects on AWS via the elbas gem.

My primary use case for this is PCI Compliance. While PCI DSS and/or HIPAA do not specifically rule out Docker, the principle of isolation leans heavily toward requiring that web hosts run on a private virtual machine.

This rules out almost all Docker-based PaaS (including Fly.io, Render.com, AWS App Runner, and Digital Ocean), as these run your containers on general Docker hosts. In fact, the only PaaS provider that I can find advertising PCI compliance is Heroku, which now charges $1800+/month for Heroku Private to achieve it.

I would love to share my configuration with anyone that needs it.


Did you read the OP?

> Fly.io doesn't actually run Docker in production—rather it uses a Dockerfile to create a Docker image, also known as an OCI image, that it runs as a Firecracker VM


OK, I see now that fly.io uses Firecracker, not Docker (thanks for the catch!). And I see that flyio (and AWS App Runner) have updated their docs regarding PCI and HIPAA, as well, since I last looked at their site. Pretty smart actually.

By the same token, I think this reinforces the point that Docker itself is not considered PCI Compliant, unless we are simply treating its config files as a DSL. And in that case, if you want PCI Compliance and go with the Docker DSL, then you are locked into providers that offer this same "transmogrification" from Docker to Firecracker.

Happy to hear that there are still other Capistrano users out there! I will push up my config later today.


Fly.io will convert your docker image to run on Firecracker[^1], a microVM engine based on KVM.

[^1]: https://firecracker-microvm.github.io/


If you could share your configurations that would be great! (see email in my profile). I used Capistrano for many, many, years but haven't kept up with it in quite a while. I've had to tackle HIPAA deployments in k8s and it's quite the ordeal for a small team. I miss the days of "cap deploy".


Will do! I will post a link later today.


For some more (too many?) in-depth tricks for rails and docker, see also:

"Ruby on Whales: Dockerizing Ruby and Rails development"

https://evilmartians.com/chronicles/ruby-on-whales-docker-fo...

Previously posted to hn - but without any comments.


I use FROM scratch. The app itself is built with a binary packer that is copied in.

Still, I'd much rather use Nix/Guix over Dockerfile.


This is awesome and I'm glad Rails is adding something official here. Unfortunately, if you want to use the Dockerfile in development with docker-compose, you will need to make some changes/additions. Most notably: only precompile assets if deploying, not during development.
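
One way to handle that, sketched with a build arg (the arg name and the dummy secret trick are assumptions, not part of the official file):

    ARG RAILS_ENV=production
    ENV RAILS_ENV=${RAILS_ENV}

    # Skip the slow asset build for development images
    RUN if [ "$RAILS_ENV" = "production" ]; then \
          SECRET_KEY_BASE=dummy bundle exec rails assets:precompile; \
        fi

Then docker-compose can pass `args: { RAILS_ENV: development }` under the service's `build:` key.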


Though this is a great step... you still have to edit the Dockerfile if you are using Postgres and Redis.


Deploying with Cloud66 is a lot easier. Just git push.

I can choose my hosting provider and switch between them.


What kills rails for me at the moment is the time it takes to start up. I'd love to be able to use it on top of things like Cloud Run, where resources ramp down to zero when there are no requests, but the startup time makes this very difficult.


(I have not actually used this myself). The folks over at CustomInk maintain Lamby, a project to run Rails in a quickly-bootable Lambda environment. Might be worth checking out, if you otherwise do enjoy working with Rails: https://lamby.custominktech.com


I dislike that too. I've started using Sinatra for ruby apps instead of rails. You end up writing a lot more boilerplate but startup times are near instant and the API is great. Also it's highly stable. I haven't had to update my Sinatra apps (beyond updating dependent gems) in many years.


I don't think Rails is fit for this. It is a full app, so I think by design it does not fit into Cloud Run's architecture of cloud functions.

May I suggest you take a look at Roda or Hanami?


While I've not tried it personally, Django can be run like this. It being a "full app" doesn't preclude it from having a fast enough startup to allow for cloud function deployment.


You might try caching bootsnap in the Docker image. I didn’t realize this was possible until I saw it in the Dockerfile that Sam and DHH cranked out recently.
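
If I remember the generated Dockerfile correctly, it comes down to roughly these steps (treat this as a sketch, not the exact file):

    # Warm bootsnap's caches at image build time instead of on first boot
    RUN bundle exec bootsnap precompile --gemfile
    RUN bundle exec bootsnap precompile app/ lib/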


It's been a while since I've done any Ruby/Rails development, but just curious why they chose to use a Debian/Ubuntu based image in the default Dockerfile instead of an Alpine based image?


Alpine has some razor edges in it. I would never default to it. Always test your app thoroughly. musl doesn't implement everything that glibc does and some of the differences can cause big problems. This is not purely theoretical. I once spent a week debugging a database corruption issue that happened because of a difference in musl's UTF-8 handling.

Use Alpine liberally for local images if you like, but don't use it for production.


"Use Alpine liberally for local images if you like, but don't use it for production."

We take the exact opposite approach: default to Alpine based images, only use another base OS if Alpine doesn't work for some reason. The majority of our underlying code base isn't C-based, so maybe that's why Alpine has been successful for us, but as always, everyone's situation is different and YMMV.


The dockerfile is optimized for "works for the largest number of people out of the box". There was debate over using the ruby vs. ruby-slim image. Ultimately it was decided to go with the larger image to maximize compatibility.

That said, keep using Alpine! There’s no reason for folks to stop doing what they’re already doing if it’s working for them.

The new dockerfile is meant more for people who are just getting started that aren’t familiar with Docker or Linux.


I assume because Debian/Ubuntu is more likely to work out of the box. I tried to use Alpine but ran into various issues in our sad big corporate setup. Additionally, ruby provides base docker images for these too.


As someone who doesn’t know rails at all, what’s the innovation here? Surely Rails has had Dockerfiles written for it before?


How on earth did Rails not have an official Dockerfile until now? Have people been deploying like it's 2010?


I use dokku. Works really well.


IIRC, Dokku can only manage 1 server, so it's essentially useless for anything except small side projects that don't need to scale horizontally.


Maintainer of Dokku here:

We support Kubernetes and Nomad as deployment platforms, and folks are welcome to build their own builder plugins if they need to support others.


How do you define small side projects? One potent server is enough to serve several thousand requests per second…


Small side project, meaning anything that's fine with occasional downtime.

You should run at least 2 servers for redundancy, regardless of size. You just can't lean on a single server, even if you can squeeze thousands of RPS out of it (big doubt).

It will inevitably fail, and you will have downtime.


But how does this work when talking to dev databases on my machine?


The Dockerfile is not optimized for a development environment. It might work for some cases, but this default file is really all about “how can the probability of a successful production deployment be improved out of the box?”

Getting a Docker compose dev env working is another can of worms. Maybe I should write about that next?


Dockerized applications can still reach services on the localhost, but you may want to take a look at docker compose so you get your application and backing systems in one place.

It makes your local development environment incredibly resilient.


Do you have an example of how that would work? I unsuccessfully spent quite a while trying to get a docker container running rails to talk to a docker container running postgres. And I wanted the postgres container to persist to the host's disk so I could save state between runs. Maybe that wasn't the best way to do it, though?


DB data can be stored in a volume and persisted.
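
A minimal compose sketch of that (names and credentials are placeholders):

    services:
      db:
        image: postgres:15
        environment:
          POSTGRES_PASSWORD: password
        volumes:
          - pgdata:/var/lib/postgresql/data   # survives container rebuilds
      web:
        build: .
        depends_on:
          - db
        environment:
          DATABASE_URL: postgres://postgres:password@db:5432/myapp_development

    volumes:
      pgdata: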

There are a lot of dockerfile / compose examples on github.

* https://github.com/docker/awesome-compose

* https://github.com/jessfraz/dockerfiles



