Launch HN: Depot (YC W23) – Fast Docker Builds in the Cloud
224 points by jacobwg on Feb 22, 2023 | 92 comments
Hey HN! We’re Kyle and Jacob, the founders of Depot (https://depot.dev), a hosted container build service that builds Docker images up to 20x faster than existing CI providers. We run fully managed Intel and Arm builders in AWS, accessible directly from CI and from your terminal.

Building Docker images in CI today is slow. CI runners are ephemeral, so they must save and load the cache for every build. They have constrained resources, with limited CPUs, memory, and disk space. And they do not support native Arm or multi-platform container builds, instead requiring emulation.

Over 4 years of working together, we spent countless hours optimizing and reoptimizing Dockerfiles, managing layer caching in CI, and maintaining custom runners for multi-platform images. We were working around the limitations of multi-platform builds inside GitHub Actions via QEMU emulation when we thought, "Wouldn't it be nice if someone just offered both an Intel and an Arm builder for Docker images, without having to run all that infrastructure ourselves?" Around January of 2022 we started working on Depot, designed as the service we wished we could use ourselves.

Depot provides managed VMs running BuildKit, the backing build engine for Docker. Each VM includes 16 CPUs, 32GB of memory, and a persistent 50GB SSD cache disk that is automatically available across builds—no saving or loading of layer cache over the network. We launch both native Intel and native Arm machines inside of AWS. This combination of native CPUs, fast networks, and persistent disks significantly lowers build time — we’ve seen speedups ranging from 2x all the way to 20x. We have customers with builds that took three hours before that now take less than ten minutes.

We believe that today we are the fastest hosted build service for Docker images, and the only hosted build service offering the ability to natively build multi-platform Docker images without emulation.

We did a Show HN last September: https://news.ycombinator.com/item?id=33011072. Since then, we have added the ability to use Depot in your own AWS account; added support for Buildx bake; increased supported build parallelism; launched an eu-central-1 region; switched to a new mTLS backend for better build performance; simplified pricing and added a free tier; and got accepted into YC W23!

Depot is a drop-in replacement for `docker buildx build`, so anywhere you are running `docker build` today, you replace it with `depot build` and get faster builds. Our CLI is wrapping the Buildx library, so any parameters you pass to your Docker builds today are fully compatible with Depot. We also have a number of integrations that match Docker integrations inside of CI providers like GitHub Actions.

We’re soon launching a public API to programmatically build Docker images for companies that need to securely build Docker images on behalf of their customers.

You can sign up at https://depot.dev/sign-up, and we have a free tier of 60 build minutes per month. We would love your feedback and look forward to your comments!




Congrats on hitting the front page. As someone perhaps one step away from your target market, can you please explain how the value prop here really works? You charge $49/month for 50GB of disk cache, 32GB of RAM and 16 vCPUs along with a bit of custom tooling on top. For 44 EUR/month I can get a dedicated Hetzner machine with a terabyte of disk, 64 GB of RAM and 6 dedicated cores (12 with SMT), and no user or project limits. Because it has so much more disk and RAM, and because builds tend to be disk/ram/bandwidth limited, performance is probably competitive.

You say that this is meant to solve problems with CI being ephemeral. Maybe I'm old fashioned, but my own CI cluster uses dedicated hardware and nothing is ephemeral. It could also use dedicated VMs and the same thing would apply. We run TeamCity and the agents are persistent, builds run in dedicated workspaces so caching is natural and easy. This doesn't cost very much.

When you add more features I can see there's some value there (SBOMs etc.), but then again, surely such features are more easily handled by standalone tools than by renting infrastructure from you.


So we have the concept of a "project", which in retrospect isn't the best name and is way too vague. :) But on our end, a "project" equates to one or two cache SSDs + one or two EC2 instances we're running, depending on whether you've asked for a single platform build or an Intel+Arm multi-platform build.

We do charge $0.05 per minute of build time used, but in theory that $49/mo plan gives you access to up to 20 build machines, if you're building 10 projects at once.

That said, if you already have your own dedicated build cluster / CI setup, you may prefer to just use that! Depot is effectively doing that kind of thing for you if you don't already have your own hosted CI system or would prefer not to orchestrate Docker layer cache.

We will be expanding to more things like SBOMs, container signing, insights and analytics about what's happening inside the builds, but hopefully in more integrated ways, since we control the execution environment itself.


As someone who is not a user yet, $49 sounds like a rounding error in our dev budget, and if all I have to do is change a line in my CI script, that does sound very tempting. Standing up another machine and registering it as a runner sounds like way more work. It also incurs a recurring task to run updates there or cycle the VM to get a new base image. An hour of dev time is really very expensive.


I see. So it's $50/month plus $0.05 per minute of build time after the 60 minutes is up in that month. Let's say you need 10 hours of building per week, so that'd be another $120/month on top. I'm not sure how fast these builds would go with your setup; maybe 10 hours a week is a lot. But we're still talking like $170/month for something with user limits and fairly restricted resources. For less than that I can get a 16-core AMD Rome machine with nearly 8 TB of flash split across two drives, which should eat image builds for breakfast. The extra cost is a bit of Linux sysadmin, which can be fully automated (apt-get install unattended-upgrades and a little more on first install).

Clearly from other responses in this thread there are people who feel this is a good deal, so best of luck to you. But I'm kinda reminded here of 37signals saying they can save $7M over 5 years by leaving the cloud. It seems the goal here is to dig people out of performance problems they get by using one type of cloud service, by selling them another type of cloud service!


>But we're still talking like $170/month

Rounding error for most companies.

>The extra cost is a bit of Linux sys admin which can be fully automated

You are overweighting hard dollar costs and underweighting the value of engineering time. Maybe you're the world's greatest devops/platform engineer/sysadmin and once you wire everything up in under 5 minutes it will never need maintenance ever again, but for most everyone else, speeding up image builds by using a service that someone else thinks about and does maintenance on is absolutely worth it for $170/mo.


Yes, maybe. I do know Linux pretty well and don't consider sysadmin costs a big drain on my own company or time. I can see that it'd be much more expensive if you hire people who don't have much UNIX experience.

On the other hand, I've experienced first hand how cloud costs can explode uncontrollably in absurd ways. One company I worked at had a cloud cost crisis and they weren't even serving online services, just shovelling money into Azure for generic dev services like VMs for load tests, DBs for testing, super-slow CI agents, etc. They never managed to properly fix this because of the mentality you express here: a few hundred bucks a month here, a few hundred there, everyone gets access to spin up resources and it's all worth it because we're all soooo valuable. Then one day you realize you're inexplicably burning millions on subscription services and cloud spend, yet nobody can identify quite why or on what, or how to push costs down. Death by a thousand cuts, it was quite the revelation. Free cloud credits are murder, because they embed a culture of profligacy and "my time is too valuable to optimize this". By the time the startup credits run out it's too late.


I think the problem is that once you get in that mindset, you start forgetting that you need to optimize/fix your application.

Having network issues due to slow async calls? Just increase the instance size until the machine is so fast it completes everything before it becomes a problem. Now you are paying 10x more for something that would take a few hours of dev time to fix.


I think you're giving the worst-case scenario when citing large-scale cloud spend. A couple of hundred dollars a month for very fast builds is a good deal, and they can build on Intel and ARM, which is useful.


Worst case scenario, I mean, maybe? I don't have many data points. It does feel like cloud costs come up more often lately. I don't think there was anything particularly special about that company though. It felt like almost the default outcome of using modern development practices and giving everyone who "needed" it access to the Azure console.


I have accidentally spent far too much on Azure logging before, so I am a firm believer in giving as much (read-only) access as possible!


You should probably include the cost of your own salary in your calculations.


I'm running a bootstrapped startup so a special case, my salary is nearly nothing :) :( :)

At a bigger company you'd just ask a junior to set it up or maybe a sysadmin. You can probably contract to get part timers too. I guess I spent half a day or so setting up the CI cluster at the start and have barely touched it since. That wasn't much cost even if I was earning a big salary. Our builds can use caches to speed them up (not docker, other types of cache) and when we turned that on it was like a 3x speed win, so having persistent disks is definitely worth it for many types of program. Especially if you're brave and trust your build system to cache unit test results between builds!


The value prop is for people who do not want to maintain billing, infra operations, custom software, security, custom features, multiple platforms, and Docker expertise, when they don't need to. All of that is time consuming, difficult, and expensive to develop.

Just paying extra to have all of the above magically give you faster builds is easy, fast, and predictable, with guaranteed benefits. Totally worth it.

If you need a lot of milk every year, you could buy a cow, or you could just buy milk at the store. Most people agree the extra cost is worth it.


Billing? Custom software? Docker expertise? I don't quite understand. You have to pay for something no matter what, right, and you still have to write and test your Dockerfiles locally? Where's the custom software?

The alternative here is having CI workers that use regular disks that aren't wiped between builds, and constraining the jobs that benefit from caching to run on them. If you have CI already set up it shouldn't be that hard? People are acting like you need to pay an L7 Staff SWE $20k to set this stuff up; is that really a cost problem many companies are facing? It'd have been considered junior-level stuff not so long ago.

As for most people feeling it's worth it, I dunno man. Yes, if I read the comments people seem keen, but my post is voted to the top of this thread. It feels like a lot of people have doubled down on a functional-programming-like approach to server management and now have really slow infrastructure that spends lots of time doing redundant work and then throwing it away. Yes, you can do that and it even has some advantages, like FP sometimes does, but you can also just accept that computers are, under the hood, stateful mutable machines and set things up to lean into that. That's what this startup is basically doing, right? Just seems like a 21st century problem somehow.


I think you're missing some of the perspective. It is actually much harder to make a working version of what this company is providing than you describe.

Just the act of running a single VM - to do it right - requires technical expertise (just because you find it easy doesn't mean it is, or that most people would do it right), in addition to maintenance tasks, operational overhead, etc. Deal with the extra infra costs in your corp cloud budget, write the extra software to handle advanced caching on auto scaling instances on multiple platforms, understand how Docker works under the hood (far fewer people know that than you assume)... It is extra work someone has to do that has no bearing on what someone actually wants to be doing, which is just running a Docker build faster.

I would have to assign two engineers to build and maintain this for a medium sized company, at $120K per employee, plus infra cost, plus maintenance, plus the lead time to build it, etc. And they'd probably do a crap job.

So, pay $50 a month for a working solution right now? To increase velocity of sw development, with no other changes? Sign my ass up. It's a tragedy that people don't understand the value here.


> I think you're missing some of the perspective.

Yeah that's why I'm asking. It's genuine curiosity so thanks for your answers.

Yes if you wanted to make the same product as these guys you'd have to spend the same amount or more, so sure, that'd be a poor use of money. No disagreement there. Productization is a lot of work.

But you wouldn't need a full product to solve this for your own use case! I guess what I'm struggling with is the apparent reluctance to fix this problem by just running ordinary computers. We're hackers, we're software developers, this is our bread and butter right? How can we as an industry apparently be forgetting how to set up and run computers? That's the message that seems to be coming through here - it's too hard, too much work, the people who can do it are too expensive. That'd be like chemists forgetting how to use a bunsen burner and needing to outsource it!? Computers are cheap, they're fast, they can basically maintain themselves if told to! To make your Docker builds faster you can just run them on a plain vanilla Linux machine that just sits around running builds in the normal way, the same way a laptop would run them, with permanently connected disk and cache directory and stuff.

I totally get it that maybe a new generation has learned programming with NodeJS on a Mac and AWS, maybe they haven't ever installed Linux before, in the way we all seemed to learn how to run servers a couple of decades ago. Times change, sure, I get that. Still, the results are kind of mind boggling.


Well not really. I’ve literally spent the past few weeks banging my head on these things.

Especially if you want/need multi-arch. That basically requires buildx, which doesn't cache locally by default. There's a half dozen types of caching to figure out. Then buildx is very buggy and needs QEMU setup even when building natively, otherwise you run into decade-old bugs doing things like running sudo in a Dockerfile.

It took a couple of weeks of on-and-off tinkering to get a stable Arm builder running on a Mac M1. Getting the GitHub Actions runner to run stably and not time out was a PITA. It required IT to tune CPU limits and page caching. Not fun.

We run native machines but I would’ve much preferred a cloud solution so I could do my actual job.


I wonder how much of this is specifically Docker related pain? (I try to avoid it) It's super fascinating to see how much ARM support is coming up in this thread. I guess ARM servers are finally happening, huh. Our CI cluster has an M1 Mac in it, it took about an hour to set up. But that's doing JVM stuff with no Docker or qemu involved, so multi-arch just isn't something we need to think about and it's no different to any other machine beyond being much faster.

For servers I'd have thought you'd make a decision up front about whether to use classical x64 machines or ARM, based on load tests or cost/benefit analysis. Then you'd build one or the other. It sounds like a lot of people are putting a lot of effort into the optionality of having both, and then they are using languages and tools that can't cross-compile or JIT compile. Are you using Rust or Go or something? Hmmm.


We've been a very happy customer at https://github.com/windmill-labs/windmill. All of our Docker builds are on Depot, and it replaced our fleet of GitHub runners on Hetzner :)


Thanks for the very kind words, we're super excited to be working with you all at Windmill!


Docker builds are not slow if you do them properly:

1. Order layers properly, so that the most frequently changing code appears towards the bottom (see the sketch after this list).

2. Use a builder pattern where you build in one image and then copy the output binary into a second image (https://ashishb.net/tech/docker-101-a-basic-web-server-displ...)

3. Use Docker layer caching on GitHub Actions or your favorite CI (https://evilmartians.com/chronicles/build-images-on-github-a...)
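
For points 1 and 2, a minimal sketch along those lines (assuming a Go web server; the image tags and paths here are illustrative only):

    # Build stage: dependency layers first, frequently changing source last
    FROM golang:1.20 AS builder
    WORKDIR /src
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    RUN CGO_ENABLED=0 go build -o /server .

    # Final stage: copy only the compiled binary into a small runtime image
    FROM alpine:3.17
    COPY --from=builder /server /usr/local/bin/server
    ENTRYPOINT ["/usr/local/bin/server"]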

IMHO, hadolint is a good Docker linter, but there is no tool on the market that helps people optimize Docker images.


This is good advice!

> Use Docker layer caching on GitHub Actions or your favorite CI (https://evilmartians.com/chronicles/build-images-on-github-a...)

Since the time that article was written, BuildKit added native support for interacting with the GitHub Actions cache, so it's even easier than they describe:

    - uses: docker/build-push-action@v4
      with:
        cache-from: type=gha
        cache-to: type=gha,mode=max
Kyle actually wrote a blog post with more details if you're interested: https://depot.dev/blog/docker-layer-caching-in-github-action...

What has to happen with cache-to and cache-from is that BuildKit has to create cache tarballs for each of the layers to be cached, then upload them to the remote cache destination. Then on the next build, when there's a cache match, BuildKit downloads the tarballs from the remote store and un-tars them so that it can continue the build.

This process can be slow, from creating the tarballs, to transferring them over the network, to unpacking them again. This is one of the reasons we created Depot in the first place. We had several commercial projects with highly optimized Dockerfiles, where saving and loading cache to GitHub Actions took several minutes, representing a significant amount of the overall build time. We also wanted to use some more advanced Dockerfile features like `RUN --mount=type=cache` that are entirely unsupported in GitHub Actions.

With Depot, builds re-use the same persistent SSD, so you get perfectly incremental performance, similar to the speed you'd get on a local machine, without any network transfers. So if you put effort into optimizing Dockerfiles as you describe, you actually get to keep all that time savings. Our app backend's CI Docker builds usually complete in about 20 seconds for instance.

> hadolint is a good docker linter but there is no tool in the market that helps people optimize docker images

We're just getting started on this, but we'd like to provide insights like this through Depot, since we control the build execution environment and can observe what happens inside the build.


Congrats on the launch!

We've been using Depot with Plane (https://plane.dev/). Prior to Depot, I had to disable arm64 builds because they slowed the build down so much (30m+) on GitHub's machines. With Depot, we get arm64 and amd64 images in ~2m.


I've had a few chats with Kyle and Jacob over the last few months.

They're incredibly knowledgeable about the subject and are making amazing strides for build speeds. I'd encourage anyone who doesn't believe these results and benchmarks to just try it out. They're completely real and it's delightful.


Congrats on the launch!

Sorry to ask this silly question, but since your team is an expert in the "fast Docker images" area, could somebody avoid the traditional `docker build` with say NixOS or Bazel and achieve the same results as Depot (aka, fast building with the output being an OCI/Docker image)? Is that what Depot is doing at a high level? Was this considered?

> Our CLI is wrapping the Buildx library

I'm surprised you're able to build Docker images faster than Docker using their code/libraries?


> could somebody avoid the traditional `docker build` with say NixOS or Bazel

Yes! You can think of an OCI image as a special kind of tarball, so things like NixOS and Bazel are able to construct that same tarball, potentially fairly quickly if they just have to copy prebuilt artifacts from the store.

Today we're running BuildKit, so we support all the typical Docker things as well as other systems that use BuildKit, e.g. Dagger, and I believe there are nix frontends for BuildKit. In that sense, we can be an accelerated compute provider for anything compatible with BuildKit.

> build Docker images faster than Docker

Today the trick is in the hosting and orchestration. We're using fast machines, launching Graviton instances for Arm builds (no emulation) or multiple machines for multi-platform build requests, orchestrating persistent volumes, etc. It's more advanced than what hosted CI providers give you today, and closer to something you'd need to script together yourself with your own runners. There are also some Docker build features (e.g. cache mounts) that _only_ work with a persistent disk.


> I'm surprised you're able to build Docker images faster than Docker using their code/libraries?

It's not a code/library problem. Knowing what Buildkit options to use is the easy part. It's almost entirely a storage infrastructure and networking problem as it has huge implications on whether or not you'll be able to easily cache build layers.


> It's almost entirely a storage infrastructure and networking problem as it has huge implications on whether or not you'll be able to easily cache build layers.

On a single machine, would NixOS/Bazel handle this better than Dockerfile/Docker/BuildKit?


Potentially - if Nix or Bazel has already built the binaries, and just needs to construct an OCI-compliant image tarball with them, that can be quite quick, similar to a Dockerfile with only COPY instructions. Nix and Bazel can also give you deterministic / reproducible images that take more effort to construct with Dockerfiles.

I've also seen people use Nix or Bazel inside their Dockerfile, like ultimately the build has to execute somewhere, be that inside or outside a Dockerfile.
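
As a rough illustration of that COPY-only case (the path is hypothetical, and it assumes the artifact is a statically linked binary built outside Docker):

    # out/app was already produced by Nix or Bazel outside Docker, so the image
    # "build" is just layer assembly; no compilation happens here.
    FROM scratch
    COPY out/app /app
    ENTRYPOINT ["/app"]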


FYI, the nix2container [1] project (author here) aims to speed up the standard Nix container workflow (dockerTools.buildImage) by basically skipping the tarball step: it directly streams the layers that haven't already been pushed.

[1] https://github.com/nlewo/nix2container


We at https://github.com/activepieces/activepieces have been using Depot.

Before Depot, we faced extremely slow builds for ARM-based images on GitHub machines. Using Depot helped us reduce the image build time from 2 hours (with an emulator) to just 3 minutes.

Emulator: https://github.com/activepieces/activepieces/actions/runs/39...,

Depot: https://github.com/activepieces/activepieces/actions/runs/40....


I guess this is a good point; we can all easily get a cheap VPS/EC2 instance to speed up our own Docker image builds.

But maintaining a matrix of ARMv7, ARM64, and x86/x64 instances is much more work than a single powerful x64 instance.


It's great to see others invest in fast CI as well! Amazing product, quite similar to our stack. We're using Firecracker (fly.io Machines) and Podman to build docker containers. Our baseline is 13 seconds to deploy a change, including git clone and docker push (4s usually). Here's a link to a short video: https://wundergraph.com/blog/wundergraph_cloud_announcement#...

We're soon going to post a blog on how we've built this pipeline. Lots of interesting details to share on how to make Docker builds fast.


Super cool, if you'd ever like to chat, let me know, my email is in my profile. We have an API in private beta to dynamically manage namespaces and acquire BuildKit connections within those namespaces, designed for platforms that need to build images on behalf of their customers. Any feedback you might have would be awesome!


Happy Depot user here, our builds are now 10 to 20 times faster: https://twitter.com/matthieunapoli/status/162009074440824422...


If ARM continues to take off, this will be a pretty useful tool. I'm building Rust native binaries for one of my projects using buildx, but it's 1) way too slow using buildx emulation and 2) way too slow to build on the Pi itself.

In the end I created a hacky build process where I use a single container to build both the x64 and ARM versions serially, and then create multi-arch containers in a separate step. It was very painful to get the right native libraries installed, and it's not terribly easy to build these two platforms in parallel.

In short, having access to real ARM builders would be great, and persistent disks would probably boost my build performance quite a bit.

The dockerfile that I had to use: https://github.com/mmastrac/progscrape/blob/master/Dockerfil...

Example build run (~20 mins): https://github.com/mmastrac/progscrape/actions/runs/42285298...


Hey there! I'm the other half of Depot. This is an awesome tool you have built, I have actually been looking for something like this for myself as well.

Couldn't agree more with the pain points you mention; they were among the biggest things that led us to start Depot, and we're really excited about what we can do next.

If you ever want to do away with your hacky build process and try out Depot, we have a free tier now.


FWIW setup was pretty easy, but I got an RPC error halfway through my build. Not sure what happened here, though it looks like things were definitely building quicker!

  Error: failed to receive status: rpc error: code = Unavailable desc = closing transport due to: connection error: desc = "error reading from server: EOF", received prior goaway: code: NO_ERROR
  Error: failed with: Error: failed to receive status: rpc error: code = Unavailable desc = closing transport due to: connection error: desc = "error reading from server: EOF", received prior goaway: code: NO_ERROR
I didn't have a chance to clean up the hacks yet but I'll give that a shot and see if it clears up the error. It might just be how intense the Rust build process is for this project.


Hey, I'll look into this one - this kind of error usually means that the BuildKit server believes the build has finished, but the CLI missed the fact that the build ended. It _might_ be related to your Rust build, but I want to make sure there's not something happening on our end.


Great! My email is in my profile if you need to reach out about anything. The project is open-source, so feel free to dig around if needed.


> Building Docker images in CI today is slow. CI runners are ephemeral, so they must save and load the cache for every build.

>...persistent disks significantly lowers build time

Does this mean your solution places specific caches, like bazel, node_modules, .yarn, and other intermediary artifacts onto a shared volume and reuses them among jobs?


Yes, all the layer cache is saved on a persistent volume that's reused between jobs. In that respect, it's very similar to running `docker build` on your local laptop, where each build is incremental. But in Depot's case, that incremental experience is shared for all CI builds, and between all users running `depot build` on their local devices.


It sounds like the SSD provides the layer cache with `--cache-to=type=local`? Does Depot support connecting additional VMs to the volume if someone wants to scale up? I've been meaning to look into the new S3 cache to solve this issue.


Exactly, we effectively persist BuildKit's state directory, so it doesn't even need to tar and export any of the internal state or layer cache, it gets the same state volume for the next build.

Today we're using vertical scaling, so we run BuildKit on larger EC2 instances, tune `max-parallelism`, and let BuildKit handle processing multiple builds / deduplicating steps across builds / etc. What our website calls a "project" basically equates to a new EC2 instance (so two different projects are fully isolated from each other).

We'd like to expand into horizontal scaling, probably with some kind of tiered caching, so the builders would be able to use local cache on local SSDs if available, but fall back to copying cache from S3 if not available locally.


Awesome. How does this work in a distributed sense? For example, for multiple parallel builds where each build is building a different git sha?


Today we're vertically scaling builders, so multiple builds run on the same large EC2 instance. The instances support processing multiple Docker builds at once, and thanks to BuildKit can even deduplicate work between multiple parallel builds, so like if two simultaneous builds share some steps, it can handle coordinating so those steps run just once.

We have plans to expand to more horizontal scaling with tiered caching, so we can keep the speedups we see today but further increase potential parallelism.


Yes! Love the vertical scaling approach. Pinning similar builds to the same host doesn't get enough love.


Yes, BuildKit allows for this with the `--mount=type=cache` functionality. [0]

Now as an end user you still have to add this to your Dockerfile, but if subsequent builds are able to continually use this cache, build times will drastically improve.

- [0] https://docs.docker.com/build/cache/
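
A rough sketch of what that looks like in a Dockerfile (the base image and cache path are just placeholders; adjust for your toolchain):

    # syntax=docker/dockerfile:1
    FROM python:3.11-slim
    WORKDIR /app
    COPY requirements.txt .
    # The cache mount persists pip's download cache between builds,
    # but only if the builder's BuildKit state survives across builds.
    RUN --mount=type=cache,target=/root/.cache/pip \
        pip install -r requirements.txt
    COPY . .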


--mount=type=cache is awesome!

One of the reasons we created Depot is that cache mounts aren't supported in GitHub Actions, since each CI run is entirely ephemeral, so the files saved in a cache mount aren't saved across runs. BuildKit doesn't export those cache types via cache-to. There are some manual workarounds creating tarballs of the BuildKit context directory, but we wanted something that just works without needing to save/load tarballs, which can be quite slow.


How do you invalidate the cache / do a clean build? I've previously seen errors sneak in to docker builds due to cache - where perhaps an upstream docker image was gone, or other dependencies had changed (most likely to happen if a license is changed, or something is yanked due to a security issue).


We have two ways at the moment:

1. If you need an uncached individual build, you can pass the `--no-cache` flag to build everything from scratch

2. We have a "clear all cache" button in our UI and a CLI command to wipe the entire project's cache disk.


Satisfied Depot customer here! Jacob was super responsive when I had questions about how to integrate with "docker buildx bake". The product works great, it has cut our Docker image build times from 15 minutes previously on GitHub Actions down to 3 minutes on Depot.


This has been an amazing tool and has been speeding up our builds. What is normally a long process is done in seconds. We spend more time waiting on the repo (side-eye at Bitbucket) than on Depot! Great work Kyle and Jacob!


Congrats on the launch.

We've been using Depot for the past two months, and without changing anything the builds became faster (compared to our CI).

Good luck Depot team and keep up the good work!


Been using this for several months now and it's helped improve our build pipelines significantly! We are planning to integrate it with more of our tooling soon.


OMG, you guys came at a good time. My entire team was having a cathartic groanfest over how slow the build phase of our CI had been getting. Just sent this to our devops guy to try it out and see if it improves things. Our CI builds typically take around 8-12 minutes; would love to cut that down.


Thank you for the comment and sharing your own experience! We definitely resonate with that feeling. We were having to wait for multiple 20 minute builds at our previous job when we got tired of it and started building Depot.

We should definitely be able to help here.


Congrats y’all! Great bunch of humans building cool things.


This is super cool! Do you have any plans to support on-demand image building like the now defunct https://ctr.run? It seems like with your speed it's kind of a perfect match.


Thank you for the very kind comments! It would be really cool to do something like ctr with what we have with Depot today. Did you have a part you really enjoyed with it that would be a must have for you?


Big fans of Depot here at Wistia, keep it up!!


Do you have a timeline for supporting Google Cloud and their Arm-based Tau VMs?


We'd like to expand to support GCP as well, yes. Are you looking to run the Depot runners in your own GCP account?


Potentially. We currently use Jenkins build workers and GCP's Artifact Registry as a docker build cache, which works... ok.


Nice, yeah let me know if you want to chat further, feel free to send an email to jacob@depot.dev or the address in my profile. It would be really useful for us to know more about how you have things structured in GCP, etc.


Would like to have the integration with GCP, especially with Cloud Build (https://cloud.google.com/build)!


This looks interesting! Congrats on the launch. Question, though, why is it asking for credit card information for the free tier? Would be nice to try it first without giving my card info.


Hey! This is cryptominer protection - we used to have an option to access a trial without a credit card, and the cryptominers did indeed sign up. :) Requiring a credit card for the free tier is a speed bump for that; on the free plan it subscribes you to a $0 product in Stripe.


I have to imagine that _any_ service that is providing CPU/compute (even if it is for building things) can be abused by cryptominers, hence the additional verification.


AFAIK most orgs that use something like Azure DevOps pipelines for builds will deploy a VM Scale Set Agent Pool with the runner image. This provides layer caching and incremental builds. Ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/age...

What’s the advantage of your platform over this?


I haven't used Azure DevOps scale sets, but it looks similar, assuming it is orchestrating a persistent cache disk.

If you're not using Azure DevOps, or even if you are, you can use `depot build` from any CI provider or from your local machine, anywhere you'd run `docker build` today, and the speed and cache are shared everywhere. And you don't need to configure any infrastructure. We also orchestrate both Intel and Arm machines depending on what type of build you need to do (or both for multi-platform). That can be important if you need to run on Arm in production, or if you have developers using new MacBooks.

We're working on more Docker-specific features as well: we plan to directly integrate things like SBOMs, container signing, and build analytics and insights for what happens inside the Docker build into Depot. There are some interesting things we can do having control of the execution environment.


Any chance of offering GitHub Runners as a tweak of the underlying “hard parts”? I’ve had really good luck with this type of offering from BuildJet, except that they don’t offer any ability to have persistent volume between runners. So all of our builds are fast _except_ the ones that build containers, but it’s too much of a pain to have some stuff in GitHub Actions with 3rd party hosted runner and then a totally different system for container builds/test workflows.


We actually briefly entertained the idea of adding GitHub Runners as well. :) Perhaps someday, but we have a long way to go on just Docker / containers.

Just to note, you can totally use Depot within your GitHub Actions runs, even if those runs are happening inside self-hosted or BuildJet-hosted runners. You might get the best of both worlds that way, having your builds and tests outside of Docker run on BuildJet, and Docker builds accelerated with Depot.


Thanks! Maybe it would be worth adding a docs example of a proper GH workflow using Depot?

It’s a decent option, but it’s still more complicated (multiple vendors, configuration, etc)

You know what, it could be really nice if y’all could create a simple Depot GH Action that would make things more efficient (for example, pass in the repo token so that Depot could pull the repo itself vs. first pulling it from GH into the workflow runner and then shipping it over to Depot).


The docs can be improved, but here's the GitHub Actions guide: https://depot.dev/docs/integrations/github-actions

The most important bit though is that we have a `depot/build-push-action` that implements the same inputs as Docker's `docker/build-push-action`, so just swapping that line and adding a project ID and access token are the majority of what you'd need to do:

      - uses: depot/setup-action@v1
      - uses: depot/build-push-action@v1
        with:
          project: <your-depot-project-id>
          token: ${{ secrets.DEPOT_TOKEN }}

          # Whatever other inputs:
          context: .
          push: true
          tags: |
            ...
I think that's along the lines of what you're describing as a Depot GitHub Action: https://github.com/depot/build-push-action.


Thanks! I missed seeing that on your site.


Congratulations on the launch. I am currently using GitHub Actions and docker/setup-qemu-action@master, then just calling docker buildx with the platform arg. This works, except the builds do take a while since it’s running arm64 emulated on Intel (Microsoft, booo).

What happens when GitHub adds native arm support though? Seems like a big value add of your service is immediately displaced and additionally can use self-hosted runners with GitHub to solve caching.


> What happens when GitHub adds native arm support though?

That will make it much faster to build arm images on GHA natively - in that scenario, Depot should still be several times faster like we are on Intel today, primarily due to how we're managing layer cache to avoid needing to save it or load it between builds (cache-to / cache-from), as well as just having larger runners and more sophisticated orchestration. We can take advantage of BuildKit's ability to share cache and deduplicate steps between concurrent builds for instance.

We're also expanding Depot in a few different directions, including along the security path with container signing and SBOM support, as well as some upcoming build insights and analytics features. The goal is that it's always super easy to `depot build` from wherever you `docker build` today, and that Depot provides the best possible experience for container builds.


I saw this while checking out Moon yesterday and it looked great. I spun up my own GH runner because arm64 builds were taking up so much time and eating into my credit.

But the pricing hurts for people who need more than 60 minutes of build time (which is pretty easy to go through in one month) and would be using it for personal projects, like myself.

I could certainly see myself paying X/min over 60, but $49+5c/min for personal stuff is a hard no.


Hey! Thank you for sharing this feedback, it's incredibly helpful and we are so grateful for it. We thought a lot about this when we decided to add a free tier and have some different ideas on what we want to do next in this realm.

For now, if you want to sign up for our free tier, we can flip your org to the type of structure you're talking about: 1 project, 1 user, first 60 build minutes free, and 5 cents/minute after that.


> And they do not support native Arm or multi-platform container builds

What's the issue with https://docs.docker.com/build/building/multi-platform/? I only just learned about this today, but I've already got it building cross-platform images correctly in GitHub Actions.


Hey, we support that exact thing, the `depot` CLI wraps buildx so we support the same `--platform` flag.

The difference between Depot and doing it directly in GitHub Actions is the "native CPU" part - on GitHub Actions, the Arm builds will run with emulation using QEMU. It works, but it's often at least 10x slower than it could be if the build was running on native Arm CPUs. Builds that should take a few minutes can take hour(s).

For multi-platform, if you want to keep things fast, you need a simultaneous connection to both an Intel and an Arm machine, and the two work in concert to build the multi-platform image.

There are workarounds with emulation, or pushing individual images separately and merging the result after they're in a registry. But if you just want to `build --platform linux/amd64,linux/arm64` and have it be fast, we handle all the orchestration for that.


How do you deal with so many base images only supporting a single architecture? We’d love to build multi-arch containers but don’t want to maintain all of our base images either.


What images support only a single architecture? The ones I use (alpine, ubuntu) work fine.

If you're already building images (because that's what we're talking about) what difference does it make at which base image you start?


My immediate reaction is: does this offer a feature incentive to use it when I already have this in AWS CodePipeline+CodeBuild?


Hey thank you for the great question! I think the short answer is that if you already have all of this wired up in CodePipeline+CodeBuild and it works for you, then you're probably set.

But there are a few steps to get there in that setup. I believe you have to have three CodeBuild projects, one for each architecture, and then the manifest merge. So it works, but it's a bit of config to stitch together.

With Depot, you would just install our `depot` CLI in your config and run `depot build --platform linux/amd64,linux/arm64` instead of `docker build`. We handle building the image on both architectures in parallel and can push the merged result to your registry. We can even run the builders in your own AWS account so you maintain total control of the underlying infra building your image.

We are working on other features for Depot that would go beyond the speed & multi-platform capabilities. We want to surface insights and telemetry about image builds that could help them be smaller, faster, and smarter. We are also thinking about things in the container security space such as container signing, SBOMs, etc. Happy to answer more questions about any of this!


Thanks, guys, for building Depot; it makes things easier and faster for us (AWS Graviton users).


Sounds like this could solve the woes I’ve been having building x86 images on my M1. Docker emulation via qemu is still really buggy.


Yup, we can definitely help with this. It was one of the pain points that inspired Depot, as we found that QEMU was just too slow inside of things like GitHub Actions, where we wanted to build for both architectures for local development and production.


Lovely design... but on my 14" laptop, the page literally takes up the whole screen.


I registered using google auth and still have to confirm my email, is that correct?


Congrats on the launch!



