Dagger: a new way to build CI/CD pipelines (dagger.io)
397 points by shykes on March 30, 2022 | 259 comments



There's so much more to CI/CD than the build definitions (e.g. dashboarding, log viewing/retention, access control, manual interventions, secret management, test reporting, etc.) and while some of your points resonate very strongly with me (e.g. local builds), I can't help but wonder what the endgame is here?

You've raised $27m to what? Give away a free tool for engineers to define their builds in? While they continue to use and pay for another service to actually run the builds? I assume you intend to replace the CI service at some point and move up the stack to monetize?

Without more transparency it's easy to imagine something like...

Step 1. Drive uptake of your tool by selling people on the pitfalls of "CI lock-in"
Step 2. Introduce your own CI solution, which people can now easily switch to
Step 3. Lock people in


Good question. No, we don't intend to replace the CI service. We think of CI as infrastructure, and there are plenty of great infrastructure providers out there. And you are absolutely correct that for Dagger to succeed, it has to remain agnostic to infrastructure providers, which means we cannot build an infrastructure provider business ourselves. If the experience of hosting Dagger becomes so bad that it affects the developer experience, we might ship such a feature as a stopgap and even charge for it - but it is not a strategic priority. The sooner other infrastructure providers offer a fantastic hosting experience, the better. That is not where we see the biggest opportunity for Dagger.

However, there is a great opportunity to help businesses manage their software supply chain. What is running where, and how did it get there? Who is authorized to run which pipelines on which machines, and with which inputs? What is the diff between this dev environment and staging? Staging and production? Etc. Keep in mind this is not limited to your production pipeline; there are staging environments, development environments, all running pipelines all day long. It's hard to simply know everything that is going on.

Each time Dagger runs your pipeline, it can produce telemetry about every aspect of your supply chain. Git SHAs, docker image checksums and cryptographic signatures, specific versions of kubernetes and cloudformation configurations, before and after templating, test results, monitoring status... It also integrates with your secrets management backends and your developers' local source code. Basically every node in your supply chain can be a node in your DAG if you want it to. The logical next step is to give you a centralized place to send all this telemetry, and tools to extract insights from it. You could also perhaps manage the configuration of your various Dagger engines in one location.

Another product that is often requested is a visual DAG debugger. When a pipeline breaks, you want to know why, and staring at your CI logs is definitely not the best experience for that. With a web UI, there's a lot we can do there.

The business opportunity boils down to this: if CI/CD pipelines are software, there ought to be a platform for developing that software, and an ecosystem of developers creating value around that platform. Dagger aspires to create that missing developer ecosystem. If we succeed, there will be no shortage of business opportunities. If we fail, none of the other features will matter.


Hey Solomon,

A question coming from a sales guy among this crowd... Who does this impact the most, and what outcomes does it help them achieve? At the end of the day, you need to pay the bills for you and your team. Who is going to sign on the dotted line and tell you, "Yes, I need to invest $100k in this because it solves a major pain!"?

Whilst I can see this as a nice to have, I'm having a hard time understanding who your target market is going to be...

At the end of the day, I presume this is going to end up becoming a full-fledged company, with its sales team, marketing and what not. What are you thinking about in terms of revenue channels so far?


As someone who does a bunch of CI/CD work, the answer is: the supply chain and the security therein is a major focus of enterprises right now. Security folks as well as ops folks are keen on this. The big question is whether those two groups can convince the developers that this is a good idea (though the reverse should be fairly easy at a certain company size).


That's fine, I don't deny that; companies like CircleCI and co. are doing really well. But Docker is also critical to a lot of enterprises, and yet the business model was not as strong as initially thought.

I'm curious to understand what the pricing is going to be and whether it's going to be well received or not (and right now, there is nowhere on dagger.io's website where pricing is mentioned).


> Each time Dagger runs your pipeline, it can produce telemetry about every aspect of your supply chain.

To me, that still doesn’t seem to get to the core of what value it brings to the table. I understand that Dagger can do all this, and that businesses would like to know what runs where and how everything interacts, but… it doesn’t explain the CI / build pipeline angle to me.

How does knowing the telemetry around my build pipelines translate to better software, or cost savings, or other improvements?

If there’s a registry to exchange “components” of a build pipeline, what does that bring me that a regular Python / Java / etc. package can’t?

Don’t get me wrong, I think there are plenty of problems with CI (I have a ~200 job CI pipeline on fire in front of me), I just can’t seem to connect the dots here. :)


> there is a great opportunity to help businesses manage their software supply chain

Yes, very much. There are so many layers and components, with intricate relations between them, that go totally ignored today, at least in most places, because managing them is an insane amount of work. Only BigCos can afford dedicated teams for 's/w supply chain management', considering the cost parity with returns. However, the solution that works for a BigCo on this front doesn't necessarily work for SMEs & startups. That gap isn't small, if I'm right.

> Another product that is often requested is a visual DAG debugger. When a pipeline breaks, you want to know why, and staring at your CI logs is definitely not the best experience for that. With a web UI, there's a lot we can do there.

Yes. This definitely helps. But more than a viz DAG element, people look for an early warning of a failure. The most common build-failure reasons (other than failed tests) are expired creds used somewhere in the pipeline, provisioning failures/timeouts, or a problem in some dependent module totally outside the org's control (some OSS dep). People seem to be bothered as much about how to squash 'em as about where to squash 'em. Locating the part where the pipeline broke is only half the job; actionable insight into how that pipeline can be healed is the hard part. And considering the diversity of the ecosystem, that's gonna be a wild ride.

BTW, are you folks hiring? "DevOps OS for enterprises" seems very very enthralling, esp for an old toolmaker.


IMO the best open source infra lock-in strategy is kube

It took a while for EKS/AKS to catch up with GKE, and things like ingress still vary in DX across clouds

'the same everywhere but it only works here'

(no comment on morality of this, quality of kube generally, or whether this team will do the same. docker IMO missed the chance to be a cloud host or a standard interface)


Completely off-topic, but k8s actually removes a lot of the vendor lock-in. Imagine having to migrate a k8s-based app from AWS to Azure vs a pure AWS-based setup provisioned with CloudFormation. Sure, it'll involve some work, but I know which I'd pick. I've personally run workloads originally intended to be deployed on GKE on on-prem OpenShift clusters with very few changes, other than indeed the mentioned ingress stuff.

That was also the initial goal of k8s - make it universal so it's adopted as a standard by everyone else, making yourself competitive against the AWS juggernaut. And it worked.


Can you elaborate on your concerns? I've not seen them occur in practice at all.

Sure each provider does things differently, but they still work with primitive Kubernetes manifests at the end of the day.

A migration from one to the other is nothing more than changing some annotations or potentially transforming the "shape" of some lists or maps.


concern: like every system that we think of as cloud software, the 'cloud platform integration' half of it never gets open sourced

same as like, aws extending mysql and postgres to provide RDS features -- they don't open source those pieces, so 'stock oss' DBs don't have fancy RDS features like backup + restore

with kube, platform-integrated components like network, ingress + storage 1) require custom work on each cloud, so new features won't be available on all clouds at the same time.

But also 2) that means some components, ingress specifically, have different interfaces on different clouds when they do finally get implemented.


I didn't spend a lot of time with the site, but I interpreted Dagger as the Terraform of CICD definitions....


Yes, that's a reasonable approximation.


> You've raised $27m to what? Give away a free tool for engineers to define their builds in? While they continue to use and pay for another service to actually run the builds? I assume you intend to replace the CI service at some point and move up the stack to monetize?

Assuming that dagger does take over the world, this has the docker story of "this is a widely used tool that a bunch of people are using and we can't monetise without resorting to lock-in", which is a huge bummer.


I’m highly jealous of teams that can containerize all their build tools.

If you deal with proprietary toolchains that are tens of gigabytes (Windows WDK, Xilinx Vivado/Vitis) it’s just untenable, and that’s before even mentioning licensing. Even Azure doesn’t have a great solution for WDK development. It’s hard to feel like we’re not being left behind.

Bind mounting the tools into the container is an option but at that point you’re using the container just because that’s what your CI expects, not because it’s any better or more reproducible than a raw Jenkins shell script.


In my personal experience, Windows development is absolutely awful. Nothing about it is ergonomic, and the management of the development environment is just painful. Obviously you don't always have a choice, but creating a nice development experience is clearly not a priority for Microsoft.

One really basic example: I was deploying files to a Windows Server box using scp, but my internet connection died midway through the transfer. All subsequent deployments would fail until I could connect via RDP to kill the old SSH process holding an open file handle.

Another example: I bought into the WSL hype, hoping it would solve all my problems. But you can't easily interact with files on the system outside of the Linux subsystem, which made it more or less useless for my purposes. If I wanted to interact only with Linux files, I would have used a Linux-based server.

Maybe I could get better at PowerShell, but it felt overly verbose while also being less composable / powerful. The simple POSIX shell primitives were sorely missed. Running things on startup was also insane: I think I needed to add a triggered job that ran on user login, and then configure the host to log in a specific user when the host starts. Very crazy...


It usually helps to adopt Windows development practices instead of trying to cram UNIX workflows into Windows.

Who in their right mind uses ssh/scp for Windows development other than connecting to UNIX boxes?


I use GitLab for CI/CD; a gitlab-runner runs on a Windows-based system (legacy, i.e. not .NET Core, so it can only build with msbuild.exe). Now I have to copy this release to another Windows-based system. Are you calling me stupid for using scp via OpenSSH? I'd like to hear an alternative. It took me like 5 minutes to set OpenSSH up.


Windows has a mechanism for sharing folders; the protocol is SMB (CIFS is a dialect of SMB). You mount the source and destination folders on the same machine (it doesn't matter which one mounts the other), and then copy the file locally on the machine where they're both mounted.
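
For example, a minimal sketch from a PowerShell prompt (the host and share names are just placeholders):

    # map the remote share, copy the build output, then disconnect
    net use Z: \\target-host\release-drop /persistent:no
    Copy-Item -Recurse .\Release\* Z:\
    net use Z: /delete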


That's my primary issue with Windows workflows, though. Unless you really convert over to using pretty much only MS tools, it is extremely hard to use them. Meanwhile the "Unix" workflow tends to be a collection of many open source tools with lots of competing solutions.


What’s the alternative? I was deploying from a Mac, so connecting to a Unix box is exactly what I was doing.



I was deploying to a fleet of servers. Having every engineer add every server via the macOS sharing UI or using RDP to manually connect to each host doesn’t seem scalable. Maybe remote PowerShell sessions would work, but I’m not even sure if there is a PowerShell client for Mac, and it’s also not clear if remote PowerShell sessions would fix the open file handle issue.

Experienced Windows devs I talked to said they used Packer from Hashicorp to entirely recreate their server image whenever they wanted to deploy. This process takes hours, but that was the best I found.


> Who in their right mind uses ssh/scp for Windows development other than connecting to UNIX boxes?

Exactly. That's like complaining that I can't RDP into a linux box to install the toolchain!


I love unix tools (grep, sed, cut, etc.), and while there are some good sub-systems (msys2, cygwin), they might be a bit heavy. For that I use the Windows version of busybox - https://frippery.org/busybox/ - and I make sure my scripts are not using the more powerful features of said tools (grep especially), such that the versions in busybox work. Great, and it's also possible to port some of that back to Linux (but I mostly use it to build something, or extract some data, when I want to share the .bat file with others - one day, when I get better at PowerShell, I'll try more there).


I use scoop to install this sort of tool in Windows.

iwr -useb get.scoop.sh | iex # install scoop

scoop install coreutils vim nano [...] # yay


Nice! I'll try it out tomorrow finally.. after giving up on choco, and possibly on winget.

But in my case I wanted to leave something small (and busybox.exe is that small) /portable - for others to use (without the requirement to install scoop).


So scoop is awesome so far - the only thing I'm missing (right now) is being able to specify a specific bucket for some actions (but I found workarounds).


> But you can't easily interact with files on the system outside of the Linux subsystem, which made it more or less useless for my purposes.

It’s possible in file explorer via \\wsl$, but that is not always supported by applications so it’s not 100%.


ls /mnt/c

Oh, hey, there's the C drive in WSL! ;)

ln -s /mnt/c/Users/Somejerk/Documents ~/documents

Oh, man, a shared documents folder!

Granted, access to those files is slow as dirt because of the Plan9 filesystem, and there are some weird bugs where a process sometimes loses access to $cwd if it's not a native wsl filesystem (also a Plan9 bug reported to MS over a year ago). But it's tolerable when interoperability is necessary.

It also facilitates using the same filesystem under multiple WSL instances.


100% this.

I feel like there's a ton of innovation in the cloud/container CI/CD space, but next to nothing elsewhere. In fact, some of the innovations in CI/CD make things _more_ difficult for those of us developing in other environments (such as game development).

There's a lot of low hanging fruit for improving things elsewhere.


Exactly. I manage a CI infrastructure for a mobile game company (28 Macs, a few Linux and Windows VMs). I still can't use Docker because our main development box is still macOS. We are slowly moving this to Linux. The reasons why we are still using Macs as the main build machines are manifold. I know that the Android SDK and Unity run on Linux, but our whole company came from an iOS-first model and still uses macOS as the primary development box.

But even Docker would not help 100%. I place a strong emphasis on the tools running locally as well. And we have a mix of build and development tools. What I mean is that one and the same basic script should be usable both during CI and, with other parameters, during development.

Jenkins is our build executor. The whole build is set up with Gradle, as 5 years ago it had some very nice properties over other tools (self-bootstrapping, robust plugin system, lots of libraries available). We built everything around this and only use Jenkins pipelines to kick off said Gradle jobs. But I would prefer a nicer solution.

Ah, and we manage the machines with Ansible (Mac, Windows and Linux). I guess Red Hat had not thought about the fact that someone would put the multi-platform claims to the test. I can say it is kind of a nightmare to code playbooks and roles against three different OS types.


Do you work in game development? I would love to hear a little bit more about your experience. I play and follow Apex Legends and I'm always so curious about how bugs and regressions seem to make it into every one of their patches. As well as tons of new information that gets data-mined.

To me, it's like they don't have branches (tons of new code not accessible in the game is released in a patch) and they don't have unit tests that catch bugs (a certain ability has 2 modes, with 2 different activation times; in the most recent patch, the activation times ended up the same), but it's possible that developing a game is far different from developing a SaaS product from a coding perspective. Or it could be that the studio just has weird practices.


> To me, it's like they don't have branches (tons of new code not accessible in the game is released in a patch)

This is likely by choice; feature flags have a fair few advantages over feature branches and are generally far less of a pain point long term.


> This is likely by choice

It's likely because of inertia. I don't work for EA, but the majority of games studios use Perforce for source control which has... awful support for branches. They've got streams which are a huge improvement, but still nowhere near as flexible or easy to use as branches in git.


> As well as tons of new information that gets data-mined.

Client side encryption is basically impossible; the decryption key has to be in the executable you ship somewhere, or at least sent to the client after a connection with the server. Perhaps some encryption tiering system could work to keep unreleased locked content locked for longer but I don’t think anyone has gone through that trouble yet.


Yes! I’m responsible for a game development CI/CD pipeline that needs to run on windows and I feel like it’s harder than it needs to be in 2022.


You can add Apple toolchains to that list also


preach


I've been in your shoes before at a previous gig where we had a highly successful desktop application that ran on Windows. Containerization was an utter nightmare. It's brutal seeing posts like this too, because you feel like the whole world is leaving you behind.

It's made me super grateful to be where I am now, and able to even view tools like this as an option.


I saw a lot of projects build Windows artifacts with GitHub Actions.

It doesn't seem too hard.


Okay, here’s an SDK I use. It’s 16GB.

https://docs.microsoft.com/en-us/windows-hardware/drivers/do...

Show me how to use this with GitHub actions, if it’s not too hard.


I would use a self-hosted runner for that [1]. You can setup the SDK in that machine and it will be available for any jobs that end up running there.

Depending on your requirements/scale, the runner(s) can be a VM on your main machine, a cheapo dedicated server (Hetzner/OVH), or even autoscaled spot instances in your preferred cloud.

It is less pure than using GH's runners and having an end-to-end setup/teardown for your whole toolchain, but it would work just fine. Definitely better than without any CI.


In practice, this is what I’m doing with Jenkins. But my point is that all of these container-first CI solutions lack relevancy for my team.


Hi everyone, I'm one of the co-founders of Dagger (and before that founder of Docker). This is a big day for us, it's our first time sharing something new since, you know... Docker.

If you have any questions, I'll be happy to answer them here!


The carbon footprint of the cloud has exceeded the footprint of air travel. A move away from monolithic statically compiled binaries to constellations of microservices (usually bloated docker containers) is a significant part of the problem.

Docker's explosive growth is partly due to the convenience of the abstraction it provides, abstracting the entire linux userspace, putting even OS-wide package managers and language-specific package managers inside another box. This usually breaks any caching / code sharing that the now containerized packages had, resulting in the bloat. The docker image is portable, yes, but disk and RAM efficiency of the systems people are building are awful. It has been the norm for every little microservice to add a few GB of bloat to the overall software system. A dev writes "RUN pip install pytorch" and you have CICD servers pulling down 2GB of pytorch to build the container, every time the software is built, probably forever. Meanwhile species are going extinct and a lot of people are starting to wonder if it's ethical to work in technology at all.

What can your team do to reverse this tragedy of the commons? Can you come up with some equally ergonomic tool that can migrate the container ecosystem on to something that has a solid foundation with good caching?


> What can your team do to reverse this tragedy of the commons?

It's tragedies all the way down :-)

Making a successful tool in a competitive space is hard enough. Asking a creator to somehow factor in environmental impact isn't going to work. A creator that places additional constraints on themselves will more likely lose out to a competitor that doesn't.

This is the kind of thing a carbon tax is perfect for. Cost optimization is infused into how businesses and individuals operate. Tax the things you care about and watch all that machinery get to work!

Right now, people use Docker (and dynamic languages, bloated JS libraries, etc) because burning electricity is the cheapest route. (Sort of... the market's obviously not perfect.) Make electricity expensive enough and we'll get more efficient systems.

And of course taxes aren't the only solution. For example, Google Flights now shows the carbon cost of each flight option, which I think might actually move the needle. But that category of solution might only work in an industry with similar characteristics, e.g. infrequent but large purchases, mature industry that can define measurement standards.


> Asking a creator to somehow factor in environmental impact isn't going to work.

That attitude doesn't leave a great impression for me. I take your points about how difficult it is, but I think we can all do better than throw our hands up in the air. For instance, you could talk about how easy containerization of CI/CD makes it easier to move your pipeline where impact is lowest. Or that you can control your own impact rather than leave it up to the whim of someone like CircleCI.

There's no silver bullet with environmental impact, which is why we all have to collectively apply whatever wins we can, wherever we can.


I haven't fully formed this thought yet, but your request might result in a net loss of "good" people. The people who actually consider your request will probably fail to launch their business due to those environmental constraints. If you take 10 people who run a business as usual and compare them to 10 who do it in an environmentally conscious way (with environmentalism not being the bizmodel in the first place), I'd think you'd have an equal or smaller number of environmental businesses succeeding, which actually ends up hurting your cause more than helping.

I think the ideal way is to let a startup thrive any way possible, and once they're no longer just trying not to starve, begin environmental changes.


> For instance, you could talk about how easy containerization of CI/CD makes it easier to move your pipeline where impact is lowest. Or that you can control your own impact rather than leave it up to the whim of someone like CircleCI.

Oh yeah -- I'm not at all saying Docker is all bad.

* Like you mentioned, increasing interoperability allows the market to be more efficient.

* Docker continued the path that VMs started towards making strong isolation even more efficient and accessible.

* Layers and caching are obviously good for resource consumption.

It's just that the original comment seemed to try shaming the Docker creators about what they've built. All they did was try and make something better. And if they didn't, someone else would have.


It was meant as genuine constructive criticism and a plea to do better. They have more traction than most people to morph Docker into something that comes close to its predecessors in efficiency. Somehow replace their fragile layer caching strategy, based around diffs of the entire filesystem, with something that understands the OS and language-specific packaging systems being used inside the containers, and can therefore at ~least cache the package downloads.

We desperately need a better, more efficient package manager, and Docker has been a huge setback; it has become normal for CI/CD to rebuild your image every time you touch the code, and pull down practically an entire Linux distribution every time. I know you can do better with Docker caching if you put enough consistent and dedicated engineering effort into how you structure things, set up local apt and pypi mirrors, etc... but the wasteful behavior is the default, and none of the (admittedly very small number of) organizations I've worked at have had the organizational capacity to really get past it.

I don't know if we need what they're building, but we absolutely need a much more efficient new version of Docker with an easy migration path. Indeed they may be the ~only people with the traction to resolve this, since in the current climate, as soon as someone introduces a new package manager with more efficient dependency resolution or caching (i.e. nix, poetry, cargo...), people just stick it in their Docker container and break the caching anyway.
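
(To be concrete, BuildKit's cache mounts are the kind of opt-in effort I mean; a minimal sketch, where the base image tag and file names are just placeholders:)

    # syntax=docker/dockerfile:1
    FROM python:3.10-slim
    WORKDIR /app
    COPY requirements.txt .
    # Keep pip's download/wheel cache in a BuildKit cache mount, so a code-only
    # change doesn't re-download multi-GB packages like pytorch on every build.
    RUN --mount=type=cache,target=/root/.cache/pip \
        pip install -r requirements.txt
    COPY . .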


One of the big points Bill Gates makes in his recent book on climate change is that once the genie is out of the bottle in terms of lifestyle, there's no going back. It's naive to expect people to willingly reduce their energy use enough to make an impact, and immoral when you consider all the people in poor villages that don't even have electricity yet.

The solution is, in a nutshell, to electrify everything, and push to make electricity clean and plentiful. Anything else is doomed to fail because we can't beat climate change by reducing our carbon footprint; we have to eliminate it entirely.

From that perspective, cloud energy usage is not a problem, since it's already electrified by nature. Now we just need to stop emitting carbon in order to make electricity (among other things).


> a lot of people are starting to wonder if it's ethical to work in technology at all

Can you share some evidence to support this claim? Who thinks we're better off addressing climate change, etc. with less technology?


“A move away from monolithic statically compiled binaries to constellations of microservices (usually bloated docker containers) is a significant part of the problem.”

Untrue, in every way.

Why did you say this?


It won't be true in every shop, but I do this professionally and it's been my firsthand experience. A native statically compiled binary containing just the functions that actually get called will usually be... 10-100 MB. Ungroomed Docker images are ~10GB-20GB, same as you'd have on the root partition if you sat down and brought up a linux workstation or server node manually, and this is not a coincidence. Sure, docker avoids duplicating the linux kernel, making it more efficient than an old school VM, but these days all the ~other software bloat dominates the kernel in size. Most companies do not have a 100 person team of engineers dedicated to optimizing their image build and management workflow, and pruning what goes into their containers.


Hi! I've browsed the docs quickly, and I have a few questions.

Seems to assume that all CI/CD workflows work in a single-container-at-a-time pattern. How about testing when I need to spin up an associated database container for my e2e tests. Is it possible, and just omitted from the documentation?

Not familiar with cue, but can I import/define a common action that is used across multiple jobs? For example on GitHub I get to duplicate the dependency installation/caching/build across various jobs. (yes, I'm aware that now you can makeshift on GitHub a composite action to reuse)

Can you do conditional execution of actions based on passed in input value/env variable?

Any public roadmap of upcoming features?


> Seems to assume that all CI/CD workflows work in a single-container-at-a-time pattern.

Dagger runs your workflows as a DAG, where each node is an action running in its own container. The dependency graph is detected automatically, and all containers that can be parallelized (based on their dependencies) will be parallelized. If you specify 10 actions to run, and they don't depend on each other, they will all run in parallel.
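
For illustration, a rough sketch of a plan with two independent actions (abridged; package paths and definition names follow the 0.2-era docs from memory and may not be exact):

    package ci
    import (
        "dagger.io/dagger"
        "universe.dagger.io/alpine"
        "universe.dagger.io/bash"
    )
    dagger.#Plan & {
        actions: {
            // shared base image (a hidden field, not an action of its own)
            _base: alpine.#Build & {packages: bash: {}}
            // lint and test share no dependency edge, so Dagger runs them in parallel
            lint: bash.#Run & {
                input: _base.output
                script: contents: "echo 'run your linter here'"
            }
            test: bash.#Run & {
                input: _base.output
                script: contents: "echo 'run your tests here'"
            }
        }
    }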

> How about testing when I need to spin up an associated database container for my e2e tests. Is it possible, and just omitted from the documentation?

It is possible, but not yet convenient (you need to connect to an external Docker engine, via a Docker CLI wrapped in a container). We are working on a more pleasant API that will support long-running containers (like your test DB) and more advanced synchronization primitives (wait for an action; terminate; etc.)

This is discussed in the following issues:

- https://github.com/dagger/dagger/issues/1337

- https://github.com/dagger/dagger/issues/1249

- https://github.com/dagger/dagger/issues/1248

> Not familiar with cue, but can I import/define a common action that is used across multiple jobs?

Yes! That is one of the most important features. CUE has a complete packaging system, and we support it natively.

For example here is our "standard library" of CUE packages: https://github.com/dagger/dagger/tree/main/pkg

> For example on GitHub I get to duplicate the dependency installation/caching/build across various jobs. (yes, I'm aware that now you can makeshift on GitHub a composite action to reuse)

Yes code reuse across projects is where Dagger really shines, thanks to CUE + the portable nature of the buildkit API.

Note: you won't need to configure caching though, because Dagger automatically caches all actions out of the box :)

> Can you do conditional execution of actions based on passed in input value/env variable?

Yes, that is supported.

> Any public roadmap of upcoming features?

For now we rely on raw Github issues, with some labels for crude prioritization. But we started using the new Github projects beta (which is a layer over issues), and plan to open that to the community as well.

Generally, we develop Dagger in the open. Even as a team, we use public Discord channels (text and voice) by default, unless there is a specific reason not to (confidential information, etc.)


Thank you for the detailed response. I appreciate you taking the time. One last question/note.

> Note: you won't need to configure caching though, because Dagger automatically caches all actions out of the box :)

Is this strictly because it's using Docker underneath and layers can be reused? If so, unless those intermediary layers are somehow pushed/pulled by the dagger GitHub action (or any associated CI/CD tool equivalent), the experience on the hosting server is going to be slow.

Sidenote: around 2013 I worked on a hacky custom container automation workflow within Jenkins for ~100 projects, and spent considerable effort setting up policies to prune intermediary images.

Thus, on certain types of workflows without any pruning, a local development machine can be polluted with hundreds of images, unless the user is specifically made aware of stale images. Does/will dagger keep track of the images it builds? I think a command like git gc could make sense.


> > Note: you won't need to configure caching though, because Dagger automatically caches all actions out of the box :)

> Is this strictly because it's using Docker underneath and layers can be reused?

Not exactly: we use Buildkit under the hood, not Docker. When you run a Dagger action, it is compiled to a DAG, and run by buildkit. Each node in the DAG has content-addressed inputs. If the same node has been executed with the same inputs, buildkit will cache it. This is the same mechanism that powers caching in "docker build", but generalized to any operation.

The buildkit cache does need to be persisted between runs for this to work. It supports a variety of storage backends, including posix filesystem, a docker registry, or even proprietary key-value services like the Github storage API. If buildkit supports it, Dagger supports it.

Don't let the "docker registry" option confuse you: buildkit cache data isn't the same as docker images, so it doesn't carry the same garbage collection and tag pruning problems.
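
For reference, this is the same remote-cache mechanism that plain buildx exposes (shown here with buildx rather than Dagger's own CLI; the registry ref is a placeholder):

    # persist BuildKit cache blobs in a registry between CI runs
    docker buildx build \
      --cache-to   type=registry,ref=registry.example.com/myapp/buildcache,mode=max \
      --cache-from type=registry,ref=registry.example.com/myapp/buildcache \
      -t myapp:latest .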


> Don't let the "docker registry" option confuse you: buildkit cache data isn't the same as docker images, so it doesn't carry the same garbage collection and tag pruning problems.

IIRC doesn't buildkit store its cache data as fake layer blobs + manifest?

I don't see how it can avoid the garbage collection and tag pruning problems since those are limitations of the registry implementation itself.


You still need to manage the size of your cache, since in theory it can grow infinitely. But it’s a different problem than managing regular Docker images, because there are no named references to worry about: just blobs that may or may not be reused in the future. The penalty for removing the “wrong” blob is a possible cache miss, not a broken image.

Dagger currently doesn’t help you remove blobs from your cache, but if/when it does, it will work the same way regardless of where the blobs are stored (except for the blob storage driver).


Is there a task runtime stat for a blob pruning task?

This sounds like memoization caching: https://en.wikipedia.org/wiki/Memoization

> In computing, memoization or memoisation is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.

Re: SBOM: Software Bill of Materials, OSV (CloudFuzz), CycloneDX, LinkedData, ld-proofs, sigstore, and software supply chain security: "Podman can transfer container images without a registry" https://news.ycombinator.com/item?id=30681387

Can Dagger cache the (layer/task-merged) SBOM for all of the {CodeMeta, SEON OWL} schema.org/Thing s?


Are you guys aware of Nix, both the language and the build system? Nix at its core is a build system, but the community has pushed the boundary of what a "build" means so hard that Nix can now also be used as a single definition language for everything in a CI/CD pipeline (with a canonical collection of "building blocks" in nixpkgs), from (reproducibly) building artifacts, to running automated testing/integration tasks, to automatically delivering the artifacts to whatever the "infrastructure" is. After all, in a very general sense the whole CI/CD pipeline could be seen as just another build artifact, which I think resonates a lot with your idea. How do you think your project and Nix would overlap and/or complement each other?


Thanks for answering Qs. Does this compete directly with Tekton ( https://tekton.dev/ ), or do you imagine a way the two could interoperate? Why choose Dagger over Tekton to power pipelines?


You can (and people do) run Dagger on top of Tekton, in the same way that you might run a Makefile or shell script on top of Tekton. The benefit is that you are less tied to a particular runtime environment. The same Dagger pipeline can run on Tekton, Jenkins, or your laptop. This makes local debugging and testing in particular much easier.


So basically, if I want to not write Jenkinsfiles but still use my company's existing Jenkins installation, I can use Dagger?


Yes :) You can write one last Jenkinsfile that runs Dagger, then do everything inside Dagger. Then you can run the exact same Dagger configuration on another CI system, or on your laptop. All you need is a Docker-compatible runtime (we run everything on buildkit under the hood).
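
Concretely, that one last Jenkinsfile can be as small as something like this (a sketch; "build" stands in for whatever action your Dagger plan actually defines):

    pipeline {
        agent any
        stages {
            stage('Dagger') {
                steps {
                    // hand the whole pipeline over to the Dagger plan in this repo
                    sh 'dagger do build'
                }
            }
        }
    }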


> You can write one last Jenkinsfile that runs Dagger

I'm very confused by this sentiment. This approach loses the best of existing CI tooling does it not?

Jenkins users lose understanding of what's actually being executed, what stage they're in, and how long it took. It might provide convenience, and the (great) benefit of running locally the same as in your CI environment, but it seems to me this would make it difficult for devs to easily understand where/why their build failed, since it just has one megastep.


I assume if the jenkins-step failed, you'd click a link to the dagger UI to see which dagger-step failed. Alternatively, never open jenkins at all and instead keep a tab open with the dagger UI.


Yes, that’s right. Especially since you’re probably running the same dagger actions all day long in development, upstream from CI.


I guess I'm a little confused about where the line is here...

Where is dagger UI and how does it relate to your CI? I don't see it in docs or cli help. Sounds like Dagger UI in this context (above) is providing little value beyond logging if it's not doing workflow execution.

I ask not to talk down on the product, but because I'm actually quite interested. Local execution, plus containerized execution sounds awesome. Just trying to understand the vision.


It can be a “megastep” in Jenkins, but it doesn’t have to be. It could be one individual step that happens to run on Dagger. Both work equally well.

In the “megastep” approach, it boils down to which tool can provide the most useful information. Jenkins is more mature but Dagger has more information about the DAG. So in some cases developers might actually prefer using Jenkins as a “dumb” runner infrastructure. It depends on the situation.


Man, I am SO excited for that! Kudos!!


You should be able to drastically simplify your Jenkinsfile(s) and have them just invoke Dagger. The issue you may run into is when you have different Jenkins nodes for different types of work. You could always invoke Dagger on each of these, depending on your setup and needs. Where there is a will, there is a way, with Jenkins :]


What's the monetization strategy going to be?


There will be an optional cloud service (not yet available). Its features will be based on requests from the community. Some problems just can't be solved with a command-line client. For example: visualization of your pipelines; a centralized audit log of all jobs run across all machines; centrally managed access control and policies; etc.

We will not rely on unusual licences to restrict competitors from running Dagger as a service. We don't need to, since `dagger` is a client tool. We do encourage cloud providers to run buildkit as a service, though :)

Generally, we take inspiration from Red Hat for their balancing of open-source community and business. They are very open with their code, and tightly control how their trademark is used. You can clone their IP and use it to compete with them - but you have to build your own brand, and you can't confuse and fragment the Dagger developer community itself. We think that is a fair model, and we are using it as a general guideline.


> We will not rely on unusual licences to restrict competitors from running Dagger as a service.

Your "Trademark Guidelines" appear to contradict you:

> Third-party products may not use the Marks to suggest compatibility or interoperability with our platform. For example, the claims “xxx is compatible with Dagger”, “xxx can run your Dagger configurations”, are not allowed.

> but you have to build your own brand, and you can't confuse and fragment the Dagger developer community itself

If I do an incognito Google search for "dagger", the first result is the Wikipedia page for the knife, and the second result is for Dagger, the dependency injection tool. By naming this "Dagger" you're confusing not just your own developer community but the pre-existing one as well.


> > We will not rely on unusual licences to restrict competitors from running Dagger as a service.

> Your "Trademark Guidelines" appear to contradict you:

They do not. Software licenses and trademark guidelines are two different things. Some commercial open-source vendors have changed their licenses to restrict use of the software in various ways - typically to limit competition from large cloud providers. We don't do that, and have no intention to. Our license is OSI-approved and we intend to keep it that way. That is what I am referring to.

> but you have to build your own brand, and you can't confuse and fragment the Dagger developer community itself

This is the intent behind the language in the trademark guideline which you quoted: you can redistribute and modify our code. But if you distribute a modified copy, call it something else.

> > Third-party products may not use the Marks to suggest compatibility or interoperability with our platform. For example, the claims “xxx is compatible with Dagger”, “xxx can run your Dagger configurations”, are not allowed.

> By naming this "Dagger" you're confusing not just your own developer community but the pre-existing one as well.

I disagree. Dagger has existed in private beta for over a year, thousands of engineers have been given access, and I can't remember a single instance of any of them being confused by the name. We have registered the trademark, and nobody has raised an issue.


> > > We will not rely on unusual licences to restrict competitors from running Dagger as a service.

> > Your "Trademark Guidelines" appear to contradict you:

> They do not. Software licenses and trademark guidelines are two different things. Some commercial open-source vendors have changed their licenses to restrict use of the software in various ways - typically to limit competition from large cloud providers. We don't do that, and have no intention to. Our license is OSI-approved and we intend to keep it that way. That is what I am referring to.

I'm glad the product is open source, but that provision isn't in the context of source code, it is a top-level item listed on that page. That's why I interpreted "unusual licences" to generally mean a sort of "legal acrobatics".

When you're threatening people with legal action you need to be clear, and right now the text on that page is not, according to what you're saying here. I doubt many people are going to be searching Hacker News comments for the true intent behind these guidelines.

> I disagree. Dagger has existed in private beta for over a year, thousands of engineers have been given access, and I can't remember a single instance of any of them being confused by the name. We have registered the trademark, and nobody has raised an issue.

I don't think that really addresses the point. Dagger (as started under Square) is nearly ten years old, and Google's 2.0 fork is from 2016. It's used by thousands of published Maven artifacts, countless applications, and tens of thousands of developers (at least). This is the first time I've heard of your project, but that's bound to happen in tech. Whether you registered it or not without complaint doesn't much matter either, the issue is being raised here, now that you've publicly launched.


> When you're threatening people with legal action you need to be clear, and right now the text on that page is not, according to what you're saying here.

That's good feedback, thank you. We can try and make it clearer as long as it remains legally correct and enforceable. Do you have specific feedback on which parts you found unclear, and why?


How is it possible to restrict someone from making a factual statement like "X is compatible with Y"?


If Y is a trademarked term, you are free to not allow statements about your brand.


"It is perfectly acceptable and within the bounds of the law to use another's trademark in advertising, provided certain standards are met. The advertisement must be truthful and the use of another's trademark must not give a false impression of connection, approval or sponsorship by the owner of the other mark."

https://www.gfrlaw.com/what-we-do/insights/beyond-brand-x-us...

So as long as they don't imply endorsement, "I'm compatible with X" seems fine.


Congratulations! I know exactly how this tool will benefit us DevOps Engineers, just as I knew when you did a demo of Docker at PyCon 2013. Wishing you and your team the best!


Congrats on getting this far with the new venture and good luck. Wishing you all the success in the world!


Congrats on launching!

How mature is this? We have a 20-person team and we're prototyping different options for our next CI/CD pipeline (currently Heroku). Is this ready for production workloads?


We consider Dagger to be beta-quality. It definitely has bugs, and APIs are still moving, though we make a big effort to limit breaking changes, and help developers migrate when breaking changes do occur. We aim to be able to guarantee complete peace of mind and continuity for production pipelines, but can't make that guarantee yet.

That said, one nice aspect of Dagger is that you don't need to rip and replace your entire system: you can use it in a localized way to solve a specific problem. Then expand over time. It's similar to writing a quick shell script, except it's easier to reuse, refactor and compose over time.


Dagger is already a popular dependency injection framework, so why choose a name that will be confusing to people who will likely use both of these frameworks in their projects?


And the reason they're both called Dagger is a play on using a DAG, a directed acyclic graph, to model dependencies. Stealing that wittiness and pretending it's their own is pathetic.


I believe the required creativity for that is not that high as it doesn’t seem too far fetched. I’d rather call it a piece of convergent evolution.


These comments are extremely uninteresting. If you care so much, please just create an issue on their bugtracker or something.


"If you have any questions, I'll be happy to answer them here!"


Except this question.


Your Windows instructions/process needs work.

> curl https://dl.dagger.io/dagger/install.ps1 -OutFile install.ps1

This uses the `curl` alias, which is really `Invoke-WebRequest`. It also incorrectly assumes I haven't fixed this dumb Microsoft mistake by aliasing `curl` to the actual curl.exe.

> [Windows] We try to move the dagger binary under C:\Windows\System32

Ack, please don't do this! This is similar to installing something under /usr/sbin/. Malware is the only modern thing that would attempt to deploy to that folder.

> but in case we miss the necessary permissions, we'll save everything under <your home folder>/dagger > C:\<your home folder>\dagger.exe

I'm glad you have a non-admin fallback, but also: yuck. I don't want this polluting my home folder (more importantly: I don't want 100's of other things like this also polluting my home folder).

The "Windows way" is to install system-wide to %ProgramFiles%\dagger\ (eg c:\Program files\dagger\dagger.exe), or to install to %LocalAppData%\dagger\ (eg: c:\Users\shykes\AppData\Local\dagger\dagger.exe). The latter is kind of the closest equivalent to $HOME/.dagger on linux. Add whatever folder to the user's PATH environment variable to make it easy to run.
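
A rough sketch of that non-admin variant in PowerShell (assumes dagger.exe has already been downloaded; paths are illustrative):

    # copy the binary into a per-user install dir and put that dir on the user PATH
    $dest = Join-Path $env:LOCALAPPDATA 'dagger'
    New-Item -ItemType Directory -Force -Path $dest | Out-Null
    Copy-Item .\dagger.exe $dest
    $userPath = [Environment]::GetEnvironmentVariable('Path', 'User')
    if ($userPath -notlike "*$dest*") {
        [Environment]::SetEnvironmentVariable('Path', "$userPath;$dest", 'User')
    }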

Honestly, providing just the .zip is better: then Windows users can muck up their own system however they like. Alternatively, package it with something like Scoop [2] which is a fairly popular dev tool, and provides a fairly easy way to get a sane install with updates, versioning and path stuff all handled.

[1] https://docs.dagger.io/

[2] https://scoop.sh/


Thank you for the feedback! I referenced it in an issue here: https://github.com/dagger/dagger/issues/1946


EDIT: You can now directly install dagger with scoop by running

  scoop install dagger
I have opened an issue and PR with scoop [0], see also [1]. You can directly install dagger with scoop meanwhile by using

  scoop install https://gist.github.com/vardrop/a25e0c8e2dc055f86a3ff4dd7a7de309/raw/0b7f1e29454d7d2cbe1ae9d6c807ddfcdacb7feb/dagger.json
[0] https://github.com/ScoopInstaller/Main/issues/3460

[1] https://github.com/dagger/dagger/issues/1946#issuecomment-10...


> This is similar to installing something under /usr/sbin/

As someone who's trying to get to grips with the Linux filesystem conventions, would you mind elaborating on a) why that's wrong, and b) what you would suggest instead? This reference[0] suggests that `/usr/sbin` is for "general system-wide binaries with superuser (root) privileges required" (and `/usr/bin` for those that don't require root privileges). I've therefore been using them in my homelab for binaries like the Cloudflare Tunnel Client[1]. Where instead should I have installed it?

* If to a "well-known location" that is commonly-used by convention, how should I find out what that is?

* If to a custom location of my choosing, how should I communicate its location to scripts/tools that _use_ that binary? I see later in your comment that you suggest "Add whatever folder to the user's PATH environment variable to make it easy to run.", but that doesn't seem like a scalable solution for a multi-user environment?

[0] https://askubuntu.com/a/308048

[1] https://github.com/cloudflare/cloudflared


Generally, `/usr` is for stuff packaged by your distribution, while `/usr/local` (so, `/usr/local/bin` and so on) is for your own custom stuff. Both the `/usr` and `/usr/local` equivalents will be on your $PATH by default in most distros.

For stuff that isn't just a self-contained executable, consider installing it to the folder `/opt/$MY_APP` and either symlinking the main binary into `/usr/local/bin` or putting a wrapper script in there (if the thing doesn't like being symlinked outside of its primary install dir). The wrapper can be as simple as:

    #!/bin/sh
    exec /opt/my-app/bin/my-app "$@"
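
And when the binary is happy being symlinked, the whole install can be as simple as this (names illustrative; assumes the tarball contains a single top-level directory):

    # unpack the app under /opt and expose its entrypoint on the default PATH
    sudo mkdir -p /opt/my-app
    sudo tar -C /opt/my-app --strip-components=1 -xzf my-app.tar.gz
    sudo ln -s /opt/my-app/bin/my-app /usr/local/bin/my-app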


Thanks! This helps a lot!


/usr/sbin is a legacy artifact that shouldn't be used. /usr/sbin is usually just symlinked to /usr/bin


This was not a helpful reply because it only told me what not to do, without providing a better alternative. The sibling comment is much more helpful.


OT but is Scoop competing with Chocolatey? I'm having trouble seeing the difference and I've never heard of Scoop before


There's a good comparison on the Scoop wiki [1].

Chocolatey is a mixed bag. Since it is basically just wrapper scripts around upstream installers, it very much depends on what the upstream installer does. To me, it acts less like a package manager (apt) and more like a rudimentary installer runner.

This causes all kinds of annoying usability issues. Chocolatey doesn't know if you uninstall via Add/Remove programs, or update via standalone or self-update mechanism, and will just show incorrect install and/or version info, and fail to upgrade properly. A lot of packages don't even install the version that Chocolatey says, but instead just install the latest at time of install.

Scoop is way different, and has none of these problems. Funny enough, they describe it as "not a package manager" but it feels way closer to apt to me. Everything is essentially a portable version of the app, and it puts them in ~/scoop/apps/[name]/[version], creates a junction (symlink) to ~/scoop/apps/[name]/current, and adds executable "shims" to ~/scoop/shims (which is in the user's path). There's no Windows "uninstall" entry, no versions to be desync'd, and no hidden garbage that can sneak in.

[1] https://github.com/ScoopInstaller/Scoop/wiki/Chocolatey-Comp...


I gave up on chocolatey (though some software from Google relies on it to build). I have been using WinGet for a while, but it feels a bit underwhelming, and at work several folks have already recommended scoop - I guess it's time!


I played with a similar idea a while ago: https://github.com/ecordell/cuezel/ (cuezel as in: "Bazel but with CUE"), but I was never sure that what I was doing was in the spirit of CUE.

CUE pushes nondeterminism into "_tool.cue"[0] files that are allowed to do things like IO and run external processes. Tool files scratch a similar itch to Makefiles, but they lack an integrated plugin system like Bazel (hence why I played with the idea of CUE + Bazel).

With Dagger you seem to be restricted to the set of things that the dagger tool can interpret, just like with my Cuezel tool you are limited to what I happened to implement.

In CUE `_tool` files you are also limited to the set of things that the tool builtins provide, but the difference is that you know that the rest of the CUE program is deterministic/pure (everything not in a _tool file).

There's clearly value in tooling that reads CUE definitions, and dagger is the first commercial interest in CUE that I've seen, which is exciting.

But I'm most interested in some CUE-interpreter meta-tool that would allow you to import cue definitions + their interpreters and version them together, but for use in `_tool` files to keep the delineation clear. Maybe this is where dagger is heading? (if so it wasn't clear from the docs)

[0]: https://pkg.go.dev/cuelang.org/go@v0.4.2/pkg/tool


I know exactly what you mean. Earlier versions of Dagger actually took this "embedding" approach: any CUE configuration is a valid Dagger configuration, with an extra layer of annotation that describes how to make that configuration runnable. What we learned is that this model is thoroughly confusing to almost everyone except hardcore CUE enthusiasts.

Now we use CUE in a more straightforward way: as a superior replacement for YAML. There's a (simple) schema that all Dagger plans must follow; beyond that, you can import any CUE package and apply any definition into your plan. But you don't need to go upstream and annotate these packages with additional Dagger metadata.

I'm not sure if my explanation is clear - even I get confused by this embedding business sometimes.


In the examples it looked as though there is still a CUE unification loop happening during dagger processing:

  deploy: netlify.#Deploy & {
    contents: build.contents.output
  }
It looks like dagger is using cue as a bit more than a YAML replacement; it hydrates cue values as it runs - which is cool! - but that's the part that seemed at odds with CUE's philosophy of pushing nondeterminism into clearly marked files.


Yes, we use CUE as more than just a YAML replacement. In particular we use CUE references to construct the DAG and "compile" it to buildkit.

And, yes, Dagger will gradually fill the missing values in the CUE tree during runtime. Essentially resolving the DAG on the fly. It is pretty cool :)

We have discussed this topic at length with the CUE developers. Our conclusion is that CUE's deterministic core is what matters, and the `_tool.cue` pattern is more peripheral: more of a reference for how other tools might use cue for non-deterministic operations. It's not realistic for CUE to be both a ubiquitous deterministic language, and a successful non-deterministic tool. Its priority is clearly the former, and we're focusing on the latter.


Not to be confused with https://dagger.dev/


When I read the headline, my first thought was dagger dependency injection got a CI/CD feature ... which is ridiculous of course. I think this naming will cause confusion, especially since dagger DI is a large project from google.


Also not to be confused with https://dagster.io/


+1 what a strange choice to deliberately conflict with an existing project


There are only so many words.


We could limit our word space to, say, bird names (like they do internally at Twitter) and still wouldn't have used most of them.

There's a funny paradox: ask people to "name 10 white things", and they do it SLOWER than if you make the task harder by asking to "name 10 white things in your fridge".


We've really jumped the shark with people not understanding the word DevOps, haven't we? When the Docker people don't get it, we should probably give up.

Fine. "DevOps" now means "some something development something something servers something something operations". Are you happy now, tech world? You've made Patrick Debois cry.


It's simple.

Before we had developer and operations teams.

Now operations write yaml and we call them devops.


I gave up when a friend defined DevOps as "the entire software development lifecycle". There is a trend of this sort of regression to the mean. Vendors want to jump on a keyword, so they fight to expand the definition to include their thing. Pretty soon everything is "DevOps".


Honest question: how would you define it? This question has come up in my team and everyone has a different definition for it.


It has a definition in the same sense that "Agile" and "Lean" and "Six Sigma" have a definition. The definition isn't important, it's the group of ideas that the word refers to.


This seems interesting.

But, I wish there was some code to show me what makes it so radically different. It seems like this is targeting developers (or is it devops team?) and I'm excited about the new language here, but I don't see any examples of code. Code engages both my head and heart.

I am reusing a lot of code in my CI jobs. I have an upload script that I reuse. I have a DEB package script that I reuse across many projects. So, that assertion rings false to me, and seems to indicate there is an unhealthy wall between devops and the developers that prevents shared code. Maybe I misunderstand.

The thing that always bites me is that I have trouble debugging a job that depends on the artifacts from a prior job. My e2e job is flaky, and I'm loath to fix it, because I have to re-run the entire pipeline, test -> build -> installer, etc. to get the final artifact used in the e2e job. I've not figured out a way with "gitlab-runner exec" to run a job later in the pipeline and somehow pass in my own artifacts locally. This would be something (albeit very specific to gitlab) that would make me very excited.


> But, I wish there was some code to show me what makes it so radically different.

Sorry to be that guy but maybe try the docs page...?

https://docs.dagger.io/

https://docs.dagger.io/1205/container-images


Thank you.

It isn't hard to find, but my point was that if you say "using an intuitive declarative language" then a developer will get excited by a code snippet that shows that intuitive declarative language. It wasn't there, and I think their post could be improved by having less fluffy language and more code, if they are targeting me, that is.

This link does show the code, but it took a few clicks to get there from your links:

https://docs.dagger.io/1202/plan

At first glance I'm not in love with the language. When I look at the first example, there is a lot I have questions about.


The language is CUE, which I think will see mass adoption in config / DevOps in the coming years. So regardless of what you think of the language today, it is likely to become important and part of your life in the not too distant future.

https://cuelang.org | https://cuetorials.com

Dagger builds on top of CUE and the (DAG) flow engine therein
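If you want a quick taste before clicking through, the validation use case on the CUE homepage boils down to something like this (illustrative field names, nothing Dagger-specific):

  // schema.cue
  #Service: {
    name:     string
    port:     int & >0 & <65536
    replicas: int | *1   // optional, defaults to 1
  }

  service: #Service

Running `cue vet schema.cue config.yaml` then fails with a precise error if the YAML breaks any of those constraints.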


Oh wow, the validating yaml example on the front page looks super useful. Might try and play with this over the next few weeks!


Once CUE clicked for me, there was no going back


That's great context. That should be added to that doc link right at the top; it would make me feel much safer about investing the time to learn it!


CUE has a good pedigree: its creator, Marcel, wrote the prototype for Borg (k8s), worked on both Google config languages, and worked on the Go team. CUE is how he thinks those Google config languages should have been designed.



There's some code examples in their docs: https://docs.dagger.io/1202/plan


I'm not touching anything Docker anymore.

Here's the scenario: you're the unfortunate soul who received the first M1 as a new employee, and nothing Docker-related works. Cue multi-arch builds; what a rotten mess. I spent more than a week figuring out the careful orchestration that any build involving `docker manifest` needs. If you aren't within the very fine line that buildx assumes, good luck pal. How long has `docker manifest` been "experimental?" It's abandonware.

Then I decided it would be smart to point out that we don't sign our images, and so I had to figure out how to combine the `docker manifest` mess with `docker trust`, another piece of abandonware. Eventually I figured out that the way to do it was with notary[1], another (poorly documented) piece of abandonware. The new shiny thing is notation[2], which does exactly the same thing, but is nowhere near complete.

At least Google clearly signals that they are killing something, Docker just lets projects go quiet. I'm now looking at replacing everything I've done for the past 4 months with Podman, Buildah, and Cosign.

How long before this project ends up like the rest of them? Coincidentally, we were talking about decoupling our CI from proprietary CI, so seeing this was a rollercoaster of emotions.

[1]: https://github.com/notaryproject/notary [2]: https://github.com/notaryproject/notation


I can sympathize, but want to clarify that, although I did found Docker, I left 4 years ago. Dagger is not affiliated with Docker in any way.

Also: Dagger can integrate with Podman, Buildah and Cosign :)


Are you me?

I just started a new job a few weeks ago, and guess what, I'm the first one (in my small group of ~5 or so) to have an M1. First it was just figuring out what the issue was, then trying UTM and manual port forwarding and getting horrible performance, then Colima which seems passable, but the default 2GB of RAM is useless for anything like kafka, so I end up having to allocate half the RAM of my machine just to docker containers.


I find it hilarious that you both seemingly blame Docker for these issues.


The whole M1 thing is definitely Apple's fault, but it did expose me to a huge amount of bitrot in Docker tooling.


I don't blame anyone for the issue. Well, maybe Apple itself, but even then blame is pretty harsh. But for someone coming from 12+ years of software development on Windows, with occasional Mac usage personally, suddenly doing development on a Mac was a big shock (mostly in a good way).


You might also want to try Rancher Desktop. At the time I was sorting this out, RD had some annoying bugs (so we are using colima for now), but it has come a long way and seems to handle networking a bit better than colima.

I plan to post a blog about the whole lot. Shoot me an email, or a tweet, or something and I'll ping you when I get round to it.


I'd recommend renting / buying a powerful (x64?) machine accessible over the internet.

Ever since I did that docker on Mac became bearable. That coupled with vscode remote development even allows me to easily mount volumes.

Plus I'm now less concerned about malicious 'npm post install scripts' which could potentially nuke all my data.


After reading this entire post, I’m still left wondering what problem this solves for me, beyond fluffy promises of ‘everything is going to be better’.

At the very least I’d want to see a comparison with what we have now, to show me how this is better.

I get that I can try to explore more, but if I don’t get a compelling reason to do so after reading the introductory post, I’m not very motivated to do so

Any motivation I do have completely hinges on the words ‘from the creators of docker’, not on the merits of this particular product itself.


That's fair, it can be hard to find the right balance of high-level explanation and technical detail. We tried to solve this by tailoring different parts to different audiences:

* The blog post is more high-level. It describes the very real problem of devops engineers being overwhelmed with complexity, and the promise of a more modular system, but does not provide lots of details.

* The dagger.io website does provide more technical detail. For example it talks about the 3 most common problems we solve: drift between dev and CI environments; CI lock-in; and local testing and debugging of pipelines. It also features animated code samples.

* The documentation at https://docs.dagger.io goes into even more detail and walks you through a concrete example.

We do feel that we can do a better job explaining the "meat" of the tool. But we decided to launch and continue improving it incrementally, in the open. If you have any specific suggestions for improvements, please keep them coming!


I run a CI application for Laravel developers (Chipper CI).

It turns out, the gap between "works locally!" and "works in CI!" is not negligible, especially when you're not sure "about all that server stuff".

Getting this working locally with a fast cycle time, and then being able to easily move that into a CI environment of your choice sounds exciting to me.

Furthermore, the majority of our customer support is "I can't reproduce this locally but it's broken in CI". Everyone blames the CI tool, but it's almost never the CI tool - just drift between environments. A way to debug locally is a killer feature.

Is it worth an entire, funded company? I'm not sure, but I'm excited for them to exist!


Always nice to have new players in the space, but that gap isn’t even addressed here.

Same old problems with configs/secrets, integration with internal/external services, and the details required by your cloud provider.

This is the sort of solution I have regularly been hired to untangle, after a company entrenches itself.


It mostly solves this problem:

- write code

- run tests

- commit code

- update CI

- commit

- CI broken

- update CI

- commit

- CI broken

- update CI

- ...

The workarounds for this are generally awful.

For Jenkins, you stage your own instance locally and configure your webhooks to use that. It's exactly as terrible as it sounds, and I never recommend this approach.

For Travis and Concourse (I think), you can use their CLI to spin up a runner locally and run your CI/CD yaml against it. It works "fine," as long as you're okay with the runner it creates being different from the runners it actually uses in their environment (and especially your self-hosted runners).

In GitHub Actions, you can use Act to create a Dockerized runner with your own image which parses your YAML file and does what you want. This actually works quite well and is something that threatens Dagger IMO.

Other CI systems that I've used don't have an answer for this very grating problem.

Another lower-order problem Dagger appears to solve is using a markup language to express higher-level constructs like loops, conditionals, and relationships. They're using CUE to do this, though I'm not sure if hiring the creator of BCL (Borg Configuration Language) was the right move. BCL was notoriously difficult to pick up, despite being very powerful and flexible. I say "lower-order" because many CI systems have decent-enough constructs for these, and this isn't something I'd consider a killer feature.
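For reference, this is roughly what loops and conditionals look like in plain CUE (nothing Dagger-specific; whether this is easier to pick up than YAML templating is exactly the open question):

  environments: ["dev", "staging", "prod"]

  deployments: {
    for env in environments {
      // one entry per environment, generated by a comprehension
      "\(env)": {
        replicas: *1 | int   // default of 1, overridable
        if env == "prod" {
          replicas: 3        // unifies with the default; prod gets 3
        }
      }
    }
  }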

I _also_ like that it assumes Dockerized runners by default, as every other CI product still relies on VMs for build work. VMs are useful for bigger projects that have a lot of tight-knit dependencies, but for most projects out there, Dockerized runners are fine, and are often a pain to get going with in CI (though this has changed over the years).


My "workaround", if you can call it one, is to design things so they don't need the CI/CD server to get a build/test/deploy feedback loop. I should be able to do any stage of the pipeline without the server, and thus no code is committed until I know it is working. The pipeline is basically a main() function that strings together the things I can already do locally. If I need anything intelligent to happen at any stage of the pipeline, I write a tool to do it using Go or Python or something that I can write tests for and treat as Real Software. After fighting with this for many years, this approach has worked best for me.

I didn't dig deeply into the docs, but Dagger appears to be doing a multi stage pipeline locally. If that is the case, I wouldn't want that either. I use Concourse, which has very good visualizations of the stages, and if I used Dagger there, it would consolidate those stages into one box without much feedback from the UI. Also, with Concourse you can use `fly execute` to run tasks against your code on the actual server, without having to push anything to a repo.


Jenkins lets you replay a Pipeline, with changes, which is massively useful — removing the need to change things locally and commit.


Concourse has `fly execute` which makes the commit-push-curse problem go away. It's had it since 2015 or so.


Concourse also has `fly hijack`, which is the baddest/funniest command of the decade. It's also very nice to use, instantly logging you into the remote container of a failed build so you can poke around and see what actually went wrong and try to run it interactively before fixing and re-executing. Much better than poking at things in the dark until you hit another issue...


> every other CI product still relies on VMs for build work.

Gitlab CI has dockerized runners? Works great!


My main takeaway was the ability to debug a build or deploy pipeline locally.

That would come in handy once or twice per month.


This seems similar to what https://earthly.dev is doing?


Yes, I believe that is a fair comparison. Earthly is more focused on builds, whereas Dagger has a wider scope: build, test, deployment, any part of a CI/CD pipeline really. But the overall philosophy is the same: run everything in containers. The choice of buildkit as a runtime is also a key similarity. One big difference is that we use CUE as a configuration language, and Earthly uses YAML.

We have a lot of respect for (and common friends with!) the Earthly developers. I am confident we can help each other build even better tools, and grow the buildkit ecosystem in the process.


Another *monster* difference is that Dagger is (at least currently) Apache 2: https://github.com/dagger/dagger/blob/v0.2.4/LICENSE but Earthly went with BSL: https://github.com/earthly/earthly/blob/v0.6.12/LICENSE

That means I'm more likely to submit bugs and patches to Dagger, and I won't touch Earthly


I can't speak to Earthly's choice of license (and am not familiar with BSL). But can confirm that we have no intention of changing licenses, and if we did, it would be for another OSI-approved license.

Our monetization model follows the same fundamentals as Red Hat before us: open code, strictly enforced trademark rules. "You can modify, use and redistribute the code at will. If you distribute a modified version, please call it something else."


I recently converted my company's build process to Earthly. I find its syntax to be much easier to grok than CUE. They've also extended/added Docker commands that shore up some of the pain points of working with Dockerfiles.

>Dagger has a wider scope: build, test, deployment, any part of a CI/CD pipeline really

I don't see any reason this can't go into an Earthfile. We have all of these parts in our Earthfiles.

The one common pain point that neither Dagger nor Earthly has solved for me is unifying machine parallelization with DAG parallelization. According to this comment[1], it seems like Dagger doesn't have that goal.

For example, we only run our +deploy target if +build, +test and +lint pass. We parallelize each of those targets across workers in Github Actions. I don't know what the solution is to this problem but I know this was annoying to have to handle with Github Action's workflow syntax and horrible to debug locally.

[1] https://news.ycombinator.com/item?id=30859864


Earthly doesn’t use YAML but a syntax which is similar to Dockerfile with dashes of Makefile. I’m finding it extremely pleasant to write.


I stand corrected! Earthly developers if you read this, I'm sorry for misremembering. I would correct my original post if I could.


I've been using Earthly for about 6 months.

Earthly uses Dockerfile style syntax so I don't have to learn a new language, I can leverage my existing knowledge.

Another advantage is that in Earthly I can spin up a docker compose stack within my pipeline so that I have selenium, envoy and postgres running for integration testing.

You can see my integration tests here https://github.com/purton-tech/cloak/blob/main/Earthfile#L14...

Is that possible in dagger?


Thanks for the info, that is wonderful to hear! Best of luck to both you and Earthly, much needed work you're doing!


> Dev/CI drift: instead of writing the same automation twice - once in a CI-specific configuration, and again in a Docker compose file, shell script or Makefile - simply write a Dagger plan once, then run it in all environments.

Working on developer tooling, a lot of times I would hear from people that they wanted CI and dev to be 100% the same and wanted a simple "run all CI locally" command to pre-check before posting.

Unsure how Dagger is handling this, but here are my concerns with the scenario I described:

- CI normally divides things up into multiple jobs for speed which breaks the shared "do everything" command

- Commands need to scale down to fast iteration as people fix things

- Generally people get the best integration by using the underlying wrapped tools directly due to pre-commit or IDE integration


> I would hear from people that they wanted CI and dev to be 100% the same and wanted a simple "run all CI locally" command to pre-check before posting.

This is exactly what developers should want. It's the most efficient workflow for a dev, because we then don't have to think at all. This is a huge dev efficiency anti-pattern: thinking your code is good to merge, pushing the change, and then finding out 10-20mins later that CI isn't happy for some reason that wasn't natural to check locally.

The thesis in the following is the way: https://gregoryszorc.com/blog/2021/04/07/modern-ci-is-too-co...


I don't understand why folks can't write everything in a makefile and then call make targets in CI.


That is a common pattern. Dagger is essentially a more modern iteration on that pattern.


It's mostly for speed, if the tasks are defined separately then a smart CI system can cache the results and avoid rerunning some of them.


This is the way. Writing your CI in groovy is a fast track to hell.


I hate makefile syntax even more than I hate yaml


>Write your pipeline once, run anywhere

Ha, finally. The timeline for a pipeline that I wrote recently looks like this:

  1. Write local test/deploy script
  2. Promote scripts to hosted CI system
  3. Local scripts rot
  4. CI system down, need to use local scripts again
  5. Re-write scripts to be current
  6. Force CI system to use my local scripts


This bait and switch in the docs when trying to see an example for anything other than Github feels a bit off to me:

> If you would like us to document GitLab next, vote for it here: dagger#1677

If you don’t have an example for a specific tool, just don’t add it to your documentation.


Disagree, it's a perfect way to learn about interest from customers by counting votes or just traffic to that page. This way they can learn which things to implement first with very little impact to you, while giving you a way to help them know you want a Gitlab integration. What's the alternative, you want to have to email them to ask about it or wonder if they're considering it vs Bitbucket or a bunch of alternatives?


To clarify, you can run Dagger on Gitlab today. It just requires some manual configuration that we would like to automate away, for convenience. We have only done this for Github so far, and would like to do it for more.

We will look for a way to make this more clear in the documentation.


What about Bitbucket?


The only dependency for running Dagger is buildkit - which itself requires a docker-compatible container runtime. This could be containerd, runc or of course Docker engine. If your environment can run Docker containers, it can run Dagger.

There's another aspect which affects performance: cache persistence. Buildkit needs a place to persist its cache across runs. By default, it relies on local storage. If that is wiped between runs (common in CI environments), everything will still run, it will just be slow - possibly very slow. Luckily, buildkit supports a variety of cache backends, including a Docker registry. In practice, persisting the cache is the most labor-intensive part of integrating in your CI, even though it's technically not a hard dependency. This is the part that we want to automate away for users, but it requires additional engineering work - hence our asking the community for input on what to prioritize.


How would this handle CI/CD jobs which need to be run on a mac?

From what I understand, everything needs to be a container, and that doesn't work for xcode.


Hell yes. A build system that I can test locally without hacks (like act) is literally the dream.


What is the benefit of using a new bespoke syntax vs. running docker containers directly? Drone.io does this very well.


One way or the other, you will have to write a declarative configuration describing which containers to run, and how to interconnect them. Then you'll want to add templating to that configuration, perhaps a schema verification tool, some sort of test scaffolding, etc. Our philosophy is that, if you're going to do all those things, it's better to use a dedicated platform with a complete syntax, API and integrated tools, instead of cobbling it together yourself.
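For comparison, here is roughly what two interconnected containers look like in a Dagger plan (a simplified sketch; the exact field names of the universe docker package may differ slightly):

  actions: {
    _base: docker.#Pull & {
      source: "golang:1.18-alpine"   // hypothetical base image
    }

    test: docker.#Run & {
      // this reference is the "interconnection": it makes `test` depend on
      // the pull, and buildkit can cache both steps independently
      input: _base.output
      command: {
        name: "go"
        args: ["test", "./..."]
      }
      // a real plan would also mount source code in via client.filesystem
    }
  }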


During the past few months I wrote a CI/CD pipeline using GitHub actions, Terraform, and Kubernetes. I'm not too sure Dagger would have saved that much time for me unless there was already a fully setup service already being sold.

The main issue I see so far is that I'd also want to bring up a CI instance of the service the same way as a new prod (backup) cluster. All of that is written in Terraform already (and GitHub Actions). Why rewrite it at this point? If I need stats I could probably push them to Prometheus.

So yeah, ha, sounds like just writing something like Dropbox would be easy on Linux. But it could be great if I wouldn't need to drop down to Bash Run all the time, since a lot of official or community extensions would need to be created: https://docs.dagger.io/1202/plan#plan-structure. Somewhat reminds me a bit of Earthly: https://earthly.dev/.


Glad someone mentioned GH Actions. Recent SaaS issues aside, one of the value-props to me with GH Actions is that I can leverage it in a multi-cloud environment if I'm using GitHub for SCM, which my organization is. There are some other benefits I specifically care about that others might not as well.

Infra is still largely a separate concern (as in, it doesn't matter what I use, I have options). I suppose I need to look into Dagger more to understand the value-prop.


Interesting, I should check out whether this works with GitLab CI, as GitLab CI is quite a pain to debug (I guess all CI environments are).


Come on guys...there is an extremely well known and widely used Java library called Dagger.

Call yours Cinquedea, Jambiya, Anelace, or Rondel if you really want a knifey name.


Seems like more stuff that Nix solves out of the box


Is there a thing that Nix can't solve?

This is a CI/CD system. It's not made to configure a single machine from coded configuration, but to build, test, publish software for multiple targets and then manage deployment to different clusters.


Nix builds, tests and publishes more than 80,000 packages, continuously on every commit, for multiple target platforms, via one monorepo... without Docker.

It’s not just for building machines, that is NixOS and its module system, which is a library you can use with the build tool.

It’s possible to build your own Nix-based monorepo internally at your company, too, and still ship containers in production (that are also built by Nix, again, without Docker).

Dagger is not a CI provider, it is a build tool and task runner. Integrating Nix with CI is the same deal as integrating Dagger with CI.

Enter my biased opinion, as someone leading a team through solving the problems this tool purports to solve, but by using Nix: this tool will suffer from the same fatal flaws that Docker does, by being built on its foundation (Buildkit). It is abstracting dependency management at the application build level, whereas Nix solves it at the fundamental system dependency level.

I would like to be proven wrong, so best of luck!


Congratulations! This looks amazing. I think I am the target audience and I cannot wait to try this.

One very important thing for my use case is being able to run steps in parallel distributed across multiple agents. Is it capable of this?


Yes :) We rely entirely on buildkit (https://github.com/moby/buildkit) to run the pipelines (`dagger` itself acts as a "compiler" and buildkit client). So anything you can do with buildkit, works out of the box with Dagger. That includes clustering.

There is a lot of ongoing work in that area in the buildkit ecosystem. For example Netflix has a distributed buildkit farm, which I believe relies on containerd storage backends to share cache data. There are also integrations to run buildkit nodes on a kubernetes cluster.

Dagger itself is obviously quite new, but buildkit is very mature, thanks to its integration with 'docker build'.


I’m ready to be an early (&& paying) adopter of this today, but I have real work to get done so it would be extremely helpful if you could describe what doesn’t work well about your solution today.


Our team has had great success with GitHub Actions and Environments for CI/CD. One nice thing going that route is that the build-related code is contained within repositories. A large number of developers are already familiar with GitHub, which makes onboarding new team members easier. I don't see anything too compelling with dagger.io that is missing with GitHub. You can even use act to test workflow changes to builds locally.


I'm in the same boat, but I do think there's a prospect of Dagger being a superior option in the long term, if...

1. They invest in building out their catalog of actions to compete with GitHub's. I maintain a few GitHub Actions and despite the GitHub catalog's depth, it's still lacking in many ways and GitHub don't appear to invest in it too much: a "maintainer fund" and creative poaching from Dagger could rapidly bring them up to par. A few million of their raise, well deployed, could crush GitHub's catalog.

2. They invest in tight integrations with platforms. GitHub Actions is great because of composability, yes, but also the deep integration with GitHub itself. Being able to run Dagger on GitHub Actions is one thing, but being able to leverage deployment environments cross-platform would be another.

3. GitHub Actions is great, I am a fan of it, I'll speak highly of it often, but the codebase... it is bad. If Dagger can build out a platform that competes with GitHub Actions on functionality, and it has a pleasant codebase, they'll make huge gains from community participation. Contributing to GitHub Actions is painful.

So, I agree with you today, but a year from now, I could see a very different situation and I am optimistic.


Also I'm sure he's on their radar given he works for Docker, but they should spend whatever it takes to hire github.com/crazy-max


Fun fact, Crazy Max is the author of the Github Action for Dagger :) https://github.com/dagger/dagger-for-github


How well does act work nowadays in practice? I was automating multiple PHP, Ansible, and Nodejs related projects last year and act failed (can't remember the exact errors now) for each project at some step.


My experience trying to get vscodium to build using act was similarly "oh no," which I think is a cat-and-mouse pitfall that's found in every emulator

The patch I made to act was bigger than I thought the act project would accept, so I just worked around it with some well placed docker volumes and running GH actions "by hand"


I support the effort to build a platform-agnostic CI/CD pipeline solution, but I don't want it in the form of yet another platform. Rather it needs to be a protocol that any platform can tie in to. I'm especially wary since this is another VC-backed effort that will eventually need to be monetized in some shape or form.

Additionally, as someone else here has already mentioned, my mind first went to Dagger, the dependency injection tool (https://dagger.dev). That tool in particular was named as a play on DAG (directed acyclic graphs), whereas in this case I don't think it would apply since there may be instances where you'd want cycles in a pipeline.

On a whim, I clicked on "Trademark Guidelines" (https://dagger.io/trademark) and from that page alone I would recommend avoiding this based on the aggressive language used to try and claim ownership of generic words. According to their own language, it seems I'm violating their guidelines by writing this comment.

> Our Marks consist of the following registered, unregistered and/or pending trademarks, service marks and logos which are subject to change without notice: Dagger; Blocklayer; and other designs, logos or marks which may be referred to in your specific license agreement or otherwise.

> Blocklayer does not permit using any of our Marks ... to identify non-Blocklayer products, services or technology

Which would include Dagger, the dependency injection tool.

Other sections of note:

> Do Not Use As Nouns

(This one just reads amusingly to me, for some reason.)

> Do Not Create Composite Marks

This section seems to suggest that you can't use "dagger" in any shape or form, even as a smaller part of some other word or body of text.

> Websites And Domain Name Uses

>

> ... Any principal or secondary level domain name should not be identical to or virtually identical to any of our Marks.

>

> The website owner should not register any domain name containing our Marks and should not claim any trademark or similar proprietary rights in the domain name. For example, “daggertech.com”, “dagger-group.com”, “Meetdagger.com” are not allowed. Any violation of this policy may result in legal action.

>

> The website should otherwise comply with domain name registry policies and applicable laws regarding trademark infringement and dilution.

This would technically include dagger.dev, which again refers to the dependency injection tool.

---

Full disclaimer that I'm not a lawyer and there could be totally reasonable explanations for these provisions, but they certainly look scary to a layperson such as myself. All in all, the founders seem to be taking a pretty arrogant approach here, but it unfortunately seems to be a common one. I'm choosing not to support it, however.

---

EDIT: formatting


So if it is a "devops engineer" why not call them just "ops" if they don't do software engineering?..


The name war continues unabated.

I'm a "devops engineer." Besides pipeline code and maintaining/building out infrastructure, I also write CLI applications to handle complex infrastructure tasks. At what point is a "devops engineer" not a software engineer?

[p.s., I prefer the term "platform engineer," personally]


So you spend all your time looking after dev infrastructure? Can I ask how many devs there are where you work?


Somewhere around 75, if I had to guess.


fair enough, thanks. Just yourself or a team?


It’s just software engineering with more obstinate, less tolerant components and without an IDE.


> without an IDE.

There's an IJ plugin for cuelang but I'm not at my desk to know if it would be helpful for use with Dagger or not


Historically the devops name comes from the methodology where "dev" and "ops" teams got merged so there'd be at least 1x dedicated ops person for each team of developers, only focussed on that team's operational needs.

Point was to break out of the Developers versus Operations silos and actually be more productive as a single team.

Back in the silo'd days... the ops teams would just flat out refuse to release any code on a Friday. Which doesn't work if you're twitter. Hence the switch.

A lot of my time as a "devops" engineer is reiterating that I can fix and automate as much as you want me to, but if your developers don't think about the infra the code eventually runs on in prod then your software is just gonna keep breaking --- "but it worked fine on my local machine".

So, while I agree the "Devops Engineer" name is a bit ridiculous, and not in the spirit of the original methodology, half of my job is writing software that tries to help developers not break production and the other half is attempting to change the culture, even slightly, towards "oh, wait, will this run in production?".


Because the point of their role is to not do the same manual work over and over but to gain leverage over time by developing software to automate ops


Not to be confused with Dagger, the dependency injection framework used in Android apps [0]. That's DI, not CI!

[0] https://developer.android.com/training/dependency-injection/...


CI/CD is transforming right now, but I don't think dagger is solving those things, or I don't understand it well enough yet.

GitHub Actions is a game changer. CI/CD finally has a proper UI where it belongs: on the repository.

GitHub Actions are also much easier than GitLab runners. The basic actions are great and probably solve 90% of all normal use cases.

Then we/I see a big trend of moving away from self-built pipelines. Provide standard features through convention over configuration: the docker image build action checks whether it needs to run in the pipeline and runs isolated by default.

Buildpacks provide something like this, but they are too clumsy/complex.

Then you have the real issue no one is solving:

- fast sync between stages and builds. 1 GB of source code, Maven cache, etc. still takes effort to move around quickly. Preferably a fast filesystem that lets you share snapshots you can mount over the network (it's not a big issue, but it puts a latency limit on how fast you can deliver build results, and I like it fast)

- standard building blocks with a retry mechanism. When a 3h build fails, restarting is annoying as hell. Pipelines have very little resilience.

- unified CI/CD output: lots of plugins support Jenkins. Replacing Jenkins needs a new UI (GitHub has it now), which is still not unified. There is an ongoing effort to unify on GitHub through GitHub Checks.

I think in the next few years GitHub will have solved those issues for us. They do exactly what you are looking for. Would love to work with them.

They even inject the repository secret for you now. They are slow (hello, personal access token alternative, we are waiting...) but steady.

Btw, before someone says GitLab: the Auto DevOps thing was shitty and badly supported. A default case with Java and Postgres was/is not fixed after 3 years.

And those vulnerabilities. The last few years? Nope nope nope.


Have you given IDEs any thought?

I feel like it isn't write once run anywhere until I can use the same configuration inside the IDE, from the command-line, and in CI.

The IDE would need to understand what the dependencies are in order to provide things like auto-completion, debugging and syntax-highlighting.


Yes, IDE integrations are a great idea and we plan on building those too. What IDE would you personally want to integrate, and what actions would you want it to support?


I'm happy to see dagger, because this is something I have been working on since 2016, and I ended up creating

https://github.com/ysoftwareab/yplatform

It uses well-tested software (GNU Make, Bash, Homebrew, Docker) and there's not much to learn really.

  * Incubated at Tobii (eyetracking, Sweden),
  * it has been used for several projects of different sizes,
  * tested on 12+ CIs !
  * tested on macOS, Linux and Windows (WSL) !
  * tested on 8+ Linux distros !
  * comes with prebuilt Docker images !
Early integration with VSCode. And much more. Just read the README :)

Happy to help anyone integrate it.


Paul Graham just highlighted it on Twitter: https://twitter.com/paulg/status/1509209579414040588


harness.io was by far the best CD experience I ever had, and it allowed me to do exactly what I wanted. I can't stand the gitops way of defining every change through a PR and having an operator/controller sync the changes. It's a lot more intuitive for me to say "everything in master ends up in prod" and define a pipeline that moves artifacts between environments. The only other software that seems like it does the same thing is Spinnaker, which I have no desire to try self-hosting.


What highly critical comments so far. Every stipulated problem resonates so strongly with me. I don't really care what the solution is or will be; they are addressing the right problem imo, which is overlooked in the CI/builder space.

So many days are wasted for me because of the ridiculous back and forth with the build server when adding or changing automation. My experience is mostly with GitLab, and I dare say it's an absolute business strategy for them to lock you into the build server. They've had the option to use local build runners, but that one lacks fundamental functionality, which results in you having to change the YAML file locally, temporarily, just to extrapolate templates and such. It's horrible. There are so many tickets asking for feature parity between this local runner and the actual runner on the build server; they're simply ignored or false promises are made. Afaics, GitHub Actions won't do any better.

I do expect Dagger integrates well with Gitlab such that I can download logs, artifacts and such from the Gitlab UI. But I'm afraid if something like Dagger catches on, the existing CI platforms will gatekeep functionality through pure yaml instead of allowing API calls from your pipeline. That would mean something like dagger will have to rely on compiling to a backend language, probably YAML. However, when did the 'one-ring-to-rule-them-all' approach ever work?

So, basically my question is, what's the incentive for the current CI providers to play nicely? And isn't it trying to follow multiple moving targets?


> Run on any Docker-compatible runtime.

so does its worker run on a native machine, because we need to build docker images using CI/CD, which we can't do easily within a container?

also, is there a gui?


> so does its worker run on a native machine, because we need to build docker images using CI/CD, which we can't do easily within a container?

`dagger` is a client tool, which uses buildkit as its worker. Buildkit itself can run on a native machine, or inside a container. Either way, building docker images is one of the most common actions performed by Dagger, and it is very well supported.


> also, is there a gui?

Not at the moment, but we plan on offering one (optionally) as a web service.


I'm getting "AmongUs" vibes from this site. Its called 'Dagger', and the illustrations are an anonymously clad astronaut running around a space station.


I remember reading about composable pipelines a while ago. I remember thinking this is going to take the devops world by storm, hopefully Dagger brings us closer to that dream.


>.. by composing reusable actions, using an intuitive _declarative language_ and API. Actions can safely be shared and reused thanks to a _complete package management system_ ..

That sounds horrible to me. Yet another abstraction layer and set of tools to do something that should be simple. Shell scripts inherently aren't bad. And I really don't see why they'll need more maintenance than using this new tool.


"A developer’s dream is a devops engineer’s nightmare"

Well, the opposite is very much true as well.

A devops engineer's dream is usually a developers nightmare.


That is sadly often true today, but not inevitable! When you are overwhelmed with complexity, a natural defense mechanism is to restrict choice. But with better tooling, you can better manage and reduce complexity, which allows you to say "yes" to developers without compromising on reliability and security. That is what we are trying to enable with Dagger.


I think the opposite cannot be true, because developers come physically before operations.

Isn't it actually devops engineers' job to make it so?


So much for "devops" merging together development and operations.

"A developer’s dream is a devops engineer’s nightmare" underines the failure of devops.


I really wish you would have stuck with a language that is higher in popularity. Python, JavaScript or such.


We did. Each action can be written in the language of your choice (I personally use bash) then wrapped in a declarative CUE config (typically quite short).

Then you compose declaratively in cue.

Here is an example of a package to deploy to netlify: https://github.com/dagger/dagger/tree/main/pkg/universe.dagg...
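The wrapper itself tends to be short. A stripped-down sketch of wrapping a plain shell script (abridged, so field names may not be exact):

  import (
    "universe.dagger.io/alpine"
    "universe.dagger.io/bash"
  )

  actions: {
    _img: alpine.#Build & {
      packages: bash: {}   // an alpine image with bash installed
    }

    hello: bash.#Run & {
      input: _img.output
      // the script is ordinary bash; CUE only supplies the declarative wrapper
      script: contents: """
        echo "hello from a plain shell script"
        """
    }
  }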


Have you heard of or explored https://github.com/aws/constructs (related: https://github.com/aws/jsii and https://github.com/aws/aws-cdk)?

This is what CDK uses for declarative modeling, but it gives you the opportunity to use languages/tooling that most devs are already familiar with. CDK8s already uses it as a replacement for yaml (technically, the yaml becomes an implementation detail rather than actually being replaced)


Just another CI/CD pipeline...

Not saying we should write one [1] but CI/CD are a commodity these days.

[1] https://aosabook.org/en/500L/a-continuous-integration-system...


One of the big benefits of using a cloud build server like CircleCI is that it allows me to parallelize tasks. For example:

- Build code and lint code at the same time

- Run slow end-to-end tests in parallel

Is parallelization possible with Dagger, even in principle?


I think this comment from the OP answers your Q? :)

https://news.ycombinator.com/item?id=30859125


Thanks for the pointer! I still don't understand 100%. The link states that tasks are parallelized automatically. At the same time, Dagger AIUI is intended to run on top of existing build servers like Jenkins, Circle CI or Github Actions. But then I would assume that some sort of integration between Dagger and the build server needs to be in place so that tasks are parallelized on multiple worker machines (rather than all running in parallel on a single machine). If everything is running inside of a single CircleCI job, that job doesn't have enough cores to run all the e2e tests in parallel.

I guess my question is how this integration works in practice and what kind of complexity it generates.


Good question. Let me provide a more detailed answer in terms of concurrency and parallelism. To paraphrase Rob Pike's excellent explanation:

- Concurrency is the breaking down of a program into pieces that can be executed independently.

- Parallelism is the simultaneous execution of multiple things (possibly related, possibly not)

Dagger is designed to be highly concurrent with minimal development effort. Compared to an equivalent configuration in a traditional CI system, your Dagger pipelines will be more concurrent, and require less lines of code.

Because it is highly concurrent, Dagger can be parallelized with relatively little effort. But as you pointed out, you still need to configure parallelization by setting up multiple nodes, etc. Dagger uses buildkit as an execution engine, so parallelizing Dagger boils down to parallelizing buildkit. There is a lot of work in this area (one benefit of building on an existing, mature ecosystem). For example, here is a Kubernetes deployment: https://github.com/moby/buildkit/tree/master/examples/kubern...

Note that, because of its highly concurrent design and because of built-in caching, a single node may get you further than you might think. For example, in the e2e testing example: instead of running every test for every new commit, like many CI systems do, Dagger will automatically cache tests with unchanged inputs. So, for example, a change in the documentation will not trigger the API tests; a change in the iOS app source code will not trigger the Android tests; etc.

It's not uncommon for CI runs to become faster after switching to Dagger, without any additional parallelism. As it turns out, most CIs are very wasteful and leave plenty of low-hanging fruit. Dagger helps you pick it :)


I wonder if there is still opportunity to stop saying “CI/CD”.

Here is a talk that I gave last year: https://youtu.be/BJrxYuqG64Q

“Why saying ‘CI/CD’ is not enough”


I agree and I think we will get there. I see CI/CD as one of several “micro-categories” that no longer makes sense standalone, and will gradually merge into a single mega-category.

Those include, off the top of my head: continuous integration, continuous deployment, configuration management, infrastructure management, PaaS, IT automation.


Been using Drone CI for several years; it has a CLI and a command to run locally. Drone Runners allow executing builds on different platforms. CUE is interesting, but I can’t see any major advantages to switching to Dagger.


We run most of our CI locally already using vscode devcontainers and precommit.

It seems like that is a DIY dagger. Since most of our precommit hooks are just docker containers themselves. I guess dagger does add parallel builds.


Yes, Dagger can be integrated in a devcontainers workflow. How much or how little is done by Dagger is up to you.


Example of integration of Dagger in Kraken CI: https://lab.kraken.ci/runs/4393/jobs


Congrats on the launch.

Hopefully this will be the last pipelining language I'll learn.


We should definitely escape from YAML, but this language isn't it


Full YAML is crazy, but the StrictYAML subset I can definitely live with.

I regularly go over the main alternatives (TOML, JSON, XML), and they all have serious warts for config IMO.

Better formats exist (such as HOCON) but are much less popular. For a public tool I'd still stick with a YAML subset that everyone knows and can work with a minimum of fuss.


I agree everyone knows YAML, but it's only really great as a config language, not as a general-purpose turing-complete programming language. The answer here is more likely to be something like what Pulumi did


Pulumi is great and we think of it as complementary to Dagger: one is focused on managing infrastructure, the other on deploying applications on top of the infrastructure.

They are more similar in their choice of language than you might think:

* Dagger lets you write actions in any language: Go, Python, Shell, it doesn't matter. Then it runs them in containers, wrapped in a declarative configuration. This allows cross-language interoperability.

* Pulumi also supports existing languages, although it is more deeply integrated, with language-native SDKs (the tradeoff being less supported languages). Eventually they realized that they needed an additional layer to bridge language siloes, otherwise the Pulumi ecosystem is really an archipelago of incompatible components: Pulumi/Go, Pulumi/TS, etc. To solve this Pulumi introduced... a declarative configuration wrapper.

In summary: the best design for this problem requires two layers. One layer for leveraging the existing programming languages with maximum compatibility. And a second to add a declarative configuration model and reduce ecosystem fragmentation. Pulumi and Dagger are both converging on this model, each for their own domain of application.

I personally believe we will see many Dagger + Pulumi integrations in the future :)


This would be wonderful but isn't what I gather from the docs. I only really see that the pipelines are written in CUE.


How would Dagger compare to Toast? https://github.com/stepchowfun/toast


It seems very similar. The main difference (from a cursory look at their docs) is that toast uses YAML as a frontend, and Docker Engine as a backend. Whereas Dagger uses CUE as the frontend, and Buildkit as a backend.



To clarify, this post is unrelated to Dagger.jl which is a parallel computing package for Julia.


What if we call it "democratize CI/CD" ;)


shykes, can you explain (or have the folks working on the website explain) how this is different or better than just using Jenkins?


Sure, it's better for 3 reasons:

1. You can use the same pipelines in development and automation. Dagger runs on Jenkins, but it also runs on the developer's laptop. This solves a common problem which is drift between the automation available in dev and CI.

2. You're not stuck with Jenkins. If you want to migrate a Jenkins-native pipeline to another CI, you have to rewrite everything. Dagger on the other hand can run on any major CI. This makes migrations much easier, as well as supporting teams with heterogeneous CI setups (which is quite common in larger organizations).

3. You can debug and test your pipelines locally. In theory this is possible with some CI systems. But in practice, the experience is very different. You can actually iterate on your Dagger configuration like you would on regular code: make a change, run, try again, run again. It's quite fun and addictive.

EDIT: there is a 4th reason, which is that Dagger uses buildkit under the hood (the hidden backend for "docker build"). So every action in the DAG is automatically cached when possible. This means that your pipeline is tuned for optimal caching out of the box: no manual tweaking necessary.


Thank you, I appreciate the brief summary very much.


Sigh ... this page is unreadable in Firefox on iPad, and unreadable without JavaScript. Either no text, or black on black


Could you share more details please? It would be awesome if you could open an issue on the dagger repo: https://github.com/dagger/dagger/issues.

Thanks :)


Sorry about that. We'll fix it asap.


Nice choice of naming, really no research done on it?

https://dagger.dev/

https://developer.android.com/training/dependency-injection/...

Now good luck searching for anything about using Dagger CI/CD for Android development.


Congrats on the launch - best of luck!


How is this different from buildkite?


TLDR this basically looks like portable GitHub Actions + Workflows if I understand correctly.


Yes that is a good summary. It has other advantages beyond its portability - but that is the central feature.

Also, importantly, it can run on Github Actions itself!


I thought that maybe the dependency injection tool was expanding into new areas.. https://dagger.dev/


What an unfortunate name collision:

https://github.com/google/dagger

Is it some weird SEO niche-squatting technique? IMHO we see it happen way too often


Worst company name I've experienced in years. Absolute embarrassment. How do I introduce an app named "Dagger" into my stack without sounding caustic or ignorant to my staff? How do you make a service called this without feeling caustic or ignorant? And you have investors?

Are you serious?

Did you think Mailgun was a good name?

edit: Figures Paul Graham is pimping this trash. Good work Hykes, you found a way to float in rich-man-land for another 5 years. I'll never be there but I'm perfectly content living a normal human life. Try it sometime, it may give you some insight into developer tooling and how to profit off of it.


It’s not the name that’s the problem with your introduction it seems.


It's not my introduction we're critiquing here. I don't give one fuck what Paul Graham invests in and I can't wait until he realizes nobody younger than him who isn't desperate to be a CEO-bro gives a shit about his opinion.



