One thing that I still don't understand about GitHub Actions is how much computing power gets wasted downloading and reinstalling the same dependencies over and over. A build that takes a couple of seconds on my local machine can easily take several minutes in GitHub Actions.
I prefer GitHub Actions to basically every other build system, especially because of the plugins, but yeah. I think caching can be turned on... but it has tons of inconsistencies related to packages and deps. This video really highlighted just how brittle the whole thing is, even behind the scenes:
In Azure DevOps pipelines there's a fun issue where a dependency pulled from nuget.org comes down very quickly, but pulling from your own artifact repositories hosted in the very same Azure DevOps instance takes 40-80 seconds. I'm not the one paying the extra bills when my builds take 5 to 6 times longer than they should, but it's still frustrating.
And Microsoft has effectively zero incentive to fix this, because it still technically works and they get to charge my employer for all that extra build agent time. Here are a few examples of people reporting similar issues and getting zero help:
The issue in this case isn't caching: even when I'm not actually pulling any packages from our private NuGet repo, simply having that repo listed in the project's nuget.config file is enough to cause the slowdown.
Yeah. We have a similar thing going on with self-hosted GitLab at my work. (And there, there isn't even an incentive for it to be slow.)
When I make an edit and rebuild on my computer, it takes a few seconds to rebuild because most things are cached.
In CI it takes 20 minutes on the MR branches of our repo. 30 minutes on master. Because a whole bunch of crap is downloaded from scratch and rebuilt from scratch.
Our infrastructure guys did set up something that is able to cache stuff that some of the repos use. But with the limited access I have, I haven't been able to figure out how to make use of it for the repo I work on. And they don't have time to look into it for me either.
Considering GitLab is a very k8s-native app, its way of running a build and moving caches into it, versus moving the container over to where the caches are, is very annoying. I've had issues open for years to update the cache key with a CSI mount and let k8s move the pods to where the mount is, which is fast. As it is now, it pulls everything from S3.
Yeah, GitLab has pretty good caching support, and it's basically a must for bigger jobs, especially on stuff like C++ apps. It's pretty brittle sometimes, and in my experience it won't be as fast as actual local dev, but it's still good. You can even make it use a remote cache, IIRC.
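For reference, a minimal sketch of per-job caching in .gitlab-ci.yml, keyed on the lockfile. The image, paths, and commands here are illustrative (a Node project); adjust for your own toolchain:

```yaml
# .gitlab-ci.yml sketch: per-job cache keyed on the lockfile
build:
  image: node:20
  stage: build
  cache:
    key:
      files:
        - package-lock.json   # cache is invalidated when the lockfile changes
    paths:
      - .npm/
  script:
    - npm ci --cache .npm --prefer-offline   # reuse the cached download directory
    - npm run build
```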
On free runners I'd regularly hit the 1 hour limit. It took some time to set up self-hosted runners through EC2 and really optimize caching, but it's now down to 10-20 minutes, depending on the repo. Most of that improvement is because of the self-hosted runners; the free ones are painfully slow.
Still, on my local machine these jobs would take 5 minutes, so it's not perfect. And as the build gets more complicated and more stages are added, the problem compounds, since the initial download is the slowest part.
Last time I tried to add caching to my actions, I spent several hours and still couldn't get it to work properly in the end. I really wonder how many computing hours have been wasted because the default, and the option most likely to work, is to not cache anything.
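For anyone else fighting with this: the smallest setup that tends to work is actions/cache keyed on the lockfile. A sketch for an npm project (the paths and key are illustrative, not from any particular repo):

```yaml
# Sketch of a workflow using actions/cache for an npm project
name: build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Cache npm's download directory, keyed on the lockfile
      - uses: actions/cache@v4
        with:
          path: ~/.npm
          key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-npm-
      - run: npm ci --prefer-offline
      - run: npm run build
```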
On CircleCI it's also easy to add using save_cache and restore_cache; the docs have examples for different popular frameworks. There are a couple of advanced features, and I have been able to cache most build artifacts.
They also cache the Docker images without you having to do anything, but it really only works if you use their base images. With custom images the cache hit rate doesn't look great in my experience, though I have no stats to back that up.
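A minimal sketch of the save_cache/restore_cache pattern mentioned above, again for an npm project (the image tag and cache key prefix are illustrative):

```yaml
# .circleci/config.yml sketch
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/node:20.9   # image tag is illustrative
    steps:
      - checkout
      - restore_cache:
          keys:
            - deps-v1-{{ checksum "package-lock.json" }}
            - deps-v1-            # fall back to the most recent cache on a miss
      - run: npm ci
      - save_cache:
          key: deps-v1-{{ checksum "package-lock.json" }}
          paths:
            - ~/.npm
workflows:
  main:
    jobs:
      - build
```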
I use persistent runners, but we had to add additional steps (docker prune, mvn -U, docker image ls, etc.) to keep the runner healthy and to be better able to debug issues.
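Roughly, the kind of housekeeping steps I mean look like this (a sketch only; the exact commands and where you hang them will vary by setup):

```yaml
# Steps fragment: housekeeping appended to jobs on a persistent runner
- name: Show images for debugging
  if: always()
  run: docker image ls
- name: Prune stale Docker resources
  if: always()
  run: docker system prune -af --volumes
- name: Force snapshot refresh so Maven doesn't serve stale deps
  run: mvn -U -B dependency:resolve
```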
Given GitHub also owns npm these days, at least some of the constantly reinstalled dependencies are just getting copied from the equivalent of "next door".
"super" seems a bit generous. Actions runners have always felt slow to me. Fast enough to get the job done for CI/CD but for a batch job running it locally would be faster.
This idea does leech free compute in an API-agnostic way, though. If it could be tied together with a GCP free tier, an AWS free instance, etc., I wonder whether you could cobble together enough free resources to run everything you wanted.
Alex's product vision is fantastic. I hope https://github.com/self-actuated gets noticed by more folks out there who are hitting the limits of GitHub's hosted runners.
Seems like a solid tech approach too. I'm surprised there isn't more of this around GHA. It really feels like Microsoft calibrated it to be exactly good enough, yet everyone seems to have their own pile of workarounds in published actions, an even bigger pile in their infra, and then some more in the workflow yamls themselves, to get everything actually working, especially when you need to support GHA runners, self-hosted runners, ARC runners, `act` runners, etc. If they can foster an OSS community around easy self-hosting and also offer a well-priced hosted runners product, I'd pay 'em.
A self-hosted macOS runner will be more economical in the long run, if you have a spot you can hook it up at; or, if you're fine doing things less than legally, you can use https://github.com/sickcodes/Docker-OSX.
I'm sure legality depends on jurisdiction, too. If you acquired the software legally and you need to keep it running in a VM, I'm sure it's legal at least in some places.
But yeah, just drive-by-downloading macOS to your Windows box is probably not quite on the up and up.
Highly recommend everyone check out the self-hosted runners. GitHub has made them crazy easy to set up, all your actions are free, and you can have a local cache. And you get all the benefits of the Actions ecosystem. Throw a used Mac mini or old PC or whatever at it.
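Once the runner is registered on the box (GitHub's settings UI hands you the config.sh/run.sh commands to paste), pointing a job at it is just a label change. A minimal sketch, with illustrative labels and build command:

```yaml
name: build
on: push
jobs:
  build:
    # Target the registered machine by label instead of a GitHub-hosted image
    runs-on: [self-hosted, linux, x64]
    steps:
      - uses: actions/checkout@v4
      - run: make build   # toolchains and caches on the box persist between runs
```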
I truly don't understand why this isn't more widely discussed (I've seen several "GH Actions Gotchas" articles where this isn't mentioned). Many of the community actions also seem to be designed to run as short jobs to paper over missing features (for ex: https://github.com/dorny/paths-filter ), which end up eating an enormous amount of your minutes budget.
That's not what dind is. Rather, there is a docker daemon running inside the container, and the containers it hosts are nested inside its cgroup in the host kernel. The result is very close in feel to docker in its own VM.
Furthermore, nesting can be done inside one of the payload containers, creating a turducken. E.g. you can run k8s in a container, with the k8s nodes implemented as nested containers and the cluster pods as doubly nested "pigeon" containers.
I haven't tried more than three levels but in theory more should work.
docker-in-docker doesn't run a docker daemon in the container, it just bind-mounts the host's docker socket inside the container, and the docker client talks to that. Any containers you launch from within docker-in-docker are siblings, not nested.
What is Docker in Docker?
> Although running Docker inside Docker is generally not recommended, there are some legitimate use cases, such as development of Docker itself.
> ...If you are still convinced that you need Docker-in-Docker and not just access to a container's host Docker server, then read on.
This makes it pretty clear that it's a different copy of the docker daemon (which eg. allows you to test changes to docker itself) and specifically says it's different from "just access to a container's host Docker server".
> you need to expose your docker socket to the container
I always thought this was a hard limitation, but I deployed some self-hosted GHA runners in Kubernetes this week and to my surprise that setup came with an option to run the full docker daemon inside of a container - so apparently it is possible.
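For reference, the option I stumbled on is roughly this in the newer actions-runner-controller (gha-runner-scale-set) Helm values; the org URL and secret name below are placeholders, and the exact keys may differ in other ARC setups:

```yaml
# values.yaml sketch for the gha-runner-scale-set chart
githubConfigUrl: "https://github.com/my-org"      # placeholder org
githubConfigSecret: "arc-runner-github-secret"    # pre-created Kubernetes secret
minRunners: 0
maxRunners: 5
containerMode:
  type: "dind"   # runs dockerd inside the (privileged) runner pod
```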
If you're running a full docker daemon, then you'll be running as a privileged container, which is about the same or worse in terms of security. Anyone's workload can compromise the host, and likely the cluster.
Rootless containers are a lot of work and do not support many scenarios that you're going to need.
MicroVMs give you the same experience as GitHub's hosted runners: a full system and kernel, do what you will. You can even launch a nested VM.
> There's something persuasive about running jobs and I don't think it's because developers "don't want to maintain infrastructure".
I remember taking a small university course on Ethereum and getting an introduction to smart contracts, trustless environments, and so on. We then heard about a couple of example projects, and were finally asked for our own ideas.
Now, after learning a bit more, I'm pretty sure none of the ideas presented by the lecturers, me, or the other students really benefitted from the trustless environment, which is mostly what you'd use Ethereum for, and arguably what you pay (a lot) for during contract execution. Yet there were so many ideas about what could be done using smart contracts that were really cool projects on their own.
I think a big world computer with nodes that can perform calculations, react to user input, be called from other nodes, and exchange tokens and information, is somehow an incredibly natural abstraction that humans can work very well with. So, agreed.
The company I co-founded (Terrateam) develops a Terraform/OpenTofu GitOps CI/CD product that focuses on GitHub. We use GitHub to run the operations, for a number of reasons, but we treat it just like ephemeral compute. We initiate the run on a basically near-blank image with our runner on it, and it "morphs" the image into what we need for that run. It works really well, except that GitHub Actions can be slow, and I think they haven't quite figured out how to do reliable operations for it yet.
The major thing I don't like about the API is that when you initiate a GHA run via the API, it does not give you an ID that you can use to track it. So if you initiate a run and it either never runs or fails for some reason before reaching any code you put on the image, there is no good way to track that.
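One hack that can help (it's a workaround, not something the API gives you directly): pass your own correlation ID as a workflow_dispatch input and surface it in run-name, then poll the list-runs API for a run whose name contains it. A sketch of the workflow side, with the input name being my own invention:

```yaml
name: dispatched-op
# Surfacing the caller-supplied ID in the run name makes the run findable
# later via GET /repos/{owner}/{repo}/actions/runs.
run-name: dispatched-op ${{ inputs.correlation_id }}
on:
  workflow_dispatch:
    inputs:
      correlation_id:
        description: "Caller-generated ID used to locate this run later"
        required: true
        type: string
jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Running operation ${{ inputs.correlation_id }}"
```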
If you're a fan of elegant prose, I can't recommend The Epicurean Dealmaker[1] enough. Sadly, the blog hasn't been updated in 8-ish years now. But I suspect that most of the information is still probably reasonably accurate.
Ironically the fastest path to getting everyone connected was to have them talk to cloud data-centers/neo-mainframes.
So it justifies the mainframe, but if you look at why we all went back to the mainframe in the end, it was, ironically, to connect many computers together.
There's also the overwhelming issue of power and control. Moving computing back to the mainframe allows totalistic control over computing by the service provider. This is a good way to make money. But is it good for the world? And what would the world look like if we had reliable, fast, trustable interconnected systems, instead of cloud mainframes/data keeps?
I'm forgetting which books, but some of the books about early computing talked about protests against computerization and against mass data ingestion (probably, among others, What the Dormouse Said?). For a while the personal computer was a friendlier, less scary mass roll-out of computing, but this cloud era has not seen many viable alternatives for staying connected while keeping computing personal. RemoteStorage was an early attempt, and Tim Berners-Lee's Solid seems like a very reasonable take, as does going full p2p with dat/hyper and that world: none of these have the inertia where others can follow suit. The problem is much harder, but I think it's more path dependence and perverse incentives; breaking out will be found to be quite workable, good, and validated, but there's gross inaction on finding the moral, open, protocols-and-standards-based ecosystem alternatives for connecting ourselves together as we might.
Hosting from home seems absurdly viable for many. I have a systemd timer that keeps a UPnP-IGD NAT hole punched so I can ssh in, and that has absurdly good uptime. My fiber-to-the-home connection would stand up to quite a lot of use.
Past that, a VPS can be had so cheaply. If we have good software, the computing footprint ought to be tiny.
One real challenge is scale-out. Ideally, for p2p to really work, some multi-tenant systems seem required, so we can effectively co-host. I loved the Sandstorm model, but didn't actually use it, and I think there's further refinement possible.
Ideally, IMO, I could host like 10 apps, but if you want to use one, you spin up your own tenant instance. The lambda-engine/serverless/FaaS thing wouldn't actually spin up new runtimes; it'd use the same FaaS instances, but be fed your tenant context when it ran and only be able to access your tenant's stuff. That way, as a host, I kind of know and can manage what runs, but you still have a lot of freedom to configure your own instance.
Then we need front ends that let you traffic-steer and anycast in fancy ways, so you can host your own but have it fall back to me, or have 10 peers helping you host and weight between them.
Operationalizing what we have already and finding efficient wins to scale is kind of cart-before-the-horse, since the fediverse et al. are so new, but I think the deployment/management model that lets us scale our footprint beyond ourselves is a crucial leap. And I think we are remarkably closer than we might think; the jump into a bigger, more holistic pattern is possible if we leverage the excellent serverless runtimes and operational tools that have recently emerged.
This is really cool, Alex: the ability to run arbitrary jobs on GitHub Actions, which you've wrapped into a convenient API that (I guess) automatically runs on the 3000 free minutes that every GH account gets on the default Ubuntu runners every month.
Super cool, I love this idea!
Less general than you, but I also had a notion to do kewl shit on GH actions, and I figured out how to turn it into a "personal ephemeral VPN-like" using BrowserBox.
Basically, you:
1. fork/generate the BrowserBox project into your own account
2. enable Actions and issues on the fork, and
3. open a new issue from the "Make VPN" issue template which triggers the action to create your remote browser and gives you a login link.
And voilà! In a few minutes you get an up-and-running remote browser/private VPN that runs for 15 minutes or so (to conserve minutes, but you can tweak the value in the actions yaml!).
Even cooler, the action job will post comments guiding you through any setup steps you need, and then post the login link for your BrowserBox instance into the comments section of the issue you opened! Here's the action yaml that I use to do this^0
One hassle I still haven't avoided is the necessity of using ngrok for this. Sure, you can get around it using a Tor hidden service, which we also support, but that requires the user to connect via a Tor browser.
Ngrok is required to create the tunnel from the BrowserBox running inside the GH Actions Runner, to the outside world.*
Seems like a lot of steps, right? So I added a "conversational" set of instructions auto posted by the runner as issue comments to guide you through it.
* Technically, this is probably possible by using mkcert on the IP address of the runner, and posting the rootCA.pem as an attachment to an issue comment. You then need to add it to your trust store and you're good to go, but ngrok is the easier way: sign up to ngrok, get your API key, add it to the Repository Secrets in settings and hit the "Make VPN" issue.
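(Not the actual yaml referenced above; just to illustrate, the trigger-and-comment pattern boils down to something like this sketch, assuming the issue template puts "Make VPN" in the title:)

```yaml
name: make-vpn
on:
  issues:
    types: [opened]
jobs:
  respond:
    # Assumption: the issue template sets a "Make VPN" title
    if: contains(github.event.issue.title, 'Make VPN')
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // Post a guidance comment back onto the issue that triggered the run
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              body: "Setting things up; your login link will be posted here shortly."
            });
```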
I know this is tangential to the linked article, but why the "Open" in OpenFaaS if the community edition is so limited? The first payment tier is 1000 dollars. Is it open in the sense that you can send pull requests and inspect the code?
The approach the author describes is batch processing, not time sharing. The whole point of time sharing was to allow users to work with the computer interactively rather than having to submit a job and wait for results.
Wouldn't Google Colab then be closer to both "time-shared" and "supercomputer"? Jupyter notebooks are certainly interactive, and Colab's purpose is to allow often low-powered clients to offload ML training/inference to a GPU-enabled server, so kind of like the time-shared systems of old?
This reminds me of my dad talking about how amazing it was when time-sharing became a thing at UWaterloo. He'd talk about how batch-processing was such a pain when you'd discover some tiny error and have to get back in line, and how with time-sharing you could show up at 3am and have an entire PDP to yourself.
My PhD advisor stayed up late at night (in the '60s) to get exclusive access to an IBM mainframe that was normally used at UC for payroll. When I joined the lab the convention was to go find a machine that was only lightly loaded (manually; there was no batch queue, just telnet to a cluster of machines) and start your multi-month simulation and hope nobody else popped onto the machine to steal half your processor.
Speaking of mainframes... I ran across this video on the latest workflow for COBOL / CICS development on your IBM mainframe. You have the old 3270 emulation way, yes, but then we move into 2010-era technology: a custom version of the Eclipse IDE that submits the jobs for you, and then into modern times with Visual Studio Code integration (using something called Zowe CLI):