One thing that always confuses me about the business factors in this kind of announcement, where they say that the vast majority of users won't be affected:
> We evaluated CI/CD minute usage and found that 98.5% of free users use 400 CI/CD minutes or less per month.
Okay, so just that 1.5% of free users, each using at most 1600 minutes more per month than the 400 under the new limits... that's enough cost to actually matter and make this change worth it?
Or they anticipated that number going up if they didn't make the change?
It seems odd to me to say "This hardly affects anyone at all, almost everyone can keep doing exactly what they are doing for the same price (free in this case)", AND "this was necessary for the sustainability of our budget."
What am I missing?
These are carefully constructed statistics: truthful, but built to mislead.
Of all the free users, how many of them even have a repository? Out of all the users who have a repository, how many of them actually make any use of CI/CD?
They are saying "1.5%" to make it sound small, but those 1.5% could account for a significant portion of the total CI/CD minutes used.
This seems... ambiguous, actually. I for one automatically took "free users" to mean free users of the CI/CD feature, not free users of GitLab. Meaning, the set of people who use > 0 CI/CD minutes. I'm actually surprised if that's not what they mean, given the entire point of the discussion is about how this affects users of the CI/CD feature. It's one thing to state things clearly and let the reader make incorrect inferences just due to natural assumptions, but writing the text itself ambiguously to induce that effect is a little different.
More importantly, it's not the same 1.5% each month. So if they're going to stay with GitLab, they'll need to switch over to paid, even if they're in the 98.5% in other months.
Oh, that's even more devious if true! Although I'm not sure about this one: it might not be exactly the same 1.5% of users every month, but I'd guess most users' CI/CD usage stays relatively constant or increases at a gradual, relatively constant rate; it doesn't jump up and down unstably month over month. So I'd guess which users are in the 1.5% is relatively constant, right? Although you're right that the % of users in a given year that went over in any month must be somewhat more than 1.5%.
Man, I was expecting some sort of business/budget answer that I wouldn't have insight into, not being in this kind of business. I was not expecting straight up "misleading statistics, that number probably doesn't mean anything like it seems." :(
This is exactly it. 1.5% of users does not equal 1.5% of CI/CD minutes. And such messaging isn't always dishonest. If you have 99 users using 100 minutes each and 1 user using 10 million minutes, setting the max limit to, say, 1000 is a net benefit for the system and its users, at the expense of the one outlier.
Yes, but in this case one user couldn't use 10 million minutes, they were already capped at 2000. Now they are capped at 400 instead. So that limits how extremely disproportionate the high percentile use could be. As I said, that 1.5% of users could have been using at most 1600 more minutes a month than the new max.
This sort of thing happens a lot when you first set up a free plan. Some part of it turns out to cost more money than you expected, and it's a small but growing segment. If you're going to have to pull back on it eventually, why wait? Waiting might just annoy twice as many people when you inevitably have to introduce stricter limits later.
Except in this case individual users will also grow. You'd have to do something like grant them their current usage plus some buffer (without letting them know ahead of time maybe, since otherwise this is gameable).
Why do you need a buffer? If you’re grandfathering in old and limiting new users more, what is gameable? Besides a small portion of people registering accounts in the short time between announcement and implementation?
The math works out if you presume that 20% or more of that 1.5% convert into paying customers. If I were using GitLab that much and could pay $20 to support GitLab but wasn't already a paid subscriber, I'd probably think about it. (I'm thinking about it now, and I don't even use GitLab.)
Worth thinking about this in the context of fat-head vs. long-tailed distributions, or less formally "whales". A very small number of users (the "head" or "whales") can account for most of the area under the curve, which in this case is the cost of delivering free build minutes.
For a toy example, imagine one user uses 2k minutes, 10 use 100 minutes, and 1000 use 1 minute. In this case you have "Over 99% use 100 minutes or less", but 50% of your cost is going to the one 2k minute user.
I have no idea if this is their exact curve, just showing how a fat-head distribution could explain what they are saying. I'd expect the "demand" (i.e. minutes used if they were not constrained / all paid for) to be a power law distribution.
Also worth noting that there's a sort of bimodal selection effect going on here -- it's unlikely that you use exactly 2k minutes/month, since you'd be hitting your limit and that would be disruptive. So the closer you get to using 2k minutes, the more likely you are to pay for more than the free tier. So I'd expect this pricing change to also force some users that were previously on 400 minutes +- 100 to have to upgrade too; this will impact some of the 98.5% of free users that are using <= 400 _on average_ per month.
Don't forget the fact that you have the option of running pipelines on your own machines. I do this for all my projects, and only really use shared runners by mistake when I forget to set it up.
I am a little dismayed to find out that open source projects no longer get unlimited minutes, though.
What position do you think I took? I just asked some questions out of curiosity, I did not mean to take any position.
Do you have a blanket position against being curious about the business models or policies or statements of companies offering free stuff about their free stuff? Does that apply to facebook and google too?
OK. But they're saying the number of free users that would have to switch to a paid account is 1.5%, right?
Which doesn't necessarily seem like enough to justify a disruptive change that might scare customers... but I don't really know; I'm hardly an expert and have no experience in this kind of business. Your hypothesis is that it is enough?
Their existing policy has never made sense to me. You can only give away so much stuff on a free plan. I don't think CI/CD is cheap. Buying extra minutes for $10 (not per month) is completely reasonable. At some point I feel uncomfortable using a free plan that is obviously unsustainable.
Business models which leverage network effects often subsidize things to build out a network. I wouldn't feel bad about it.
That said, this move seems 100% reasonable. I care about having a free tier. If they were killing the free tier, I'd be sad. But if I'm not paying anything, I'm okay being required to make my CI/CD pipeline efficient for my benefactor. I'd even take less than 400, gladly.
GitLab earned their market position by offering free private repos. They leveraged that position to get the funding to massively ramp up their enterprise features while GitHub was burdened with thousands of open source users. Now GitLab is cashing in and I don't blame them. Their product is excellent. It's basically an all-in-one package for small to mid tech orgs.
Given the rise of Sourcehut as some people's new goto and GitHub finally offering a container registry, it seems like a good time to pull that trigger too. On the whole I like GitLab as an all-in-one solution.
I love Sourcehut, but Sourcehut's only competitive right now in the indie/free software enthusiast market. I doubt Gitlab views them as a major threat.
Much like SourceForge, Bitbucket is no longer the same sort of competitor or threat it was a couple of years ago. The self-hosted version they sell (Stash?) could be, though.
It's not the best comparison. Gitlab is VC-backed, whereas Sourcehut aims for slow, sustainable growth. Plus, the UX decisions of Sourcehut are so drastically different from Gitlab/Github that it doesn't threaten existing customers very much.
Relying on an obviously unsustainable free plan to build something always felt icky to me, because it will inevitably break, and then you have to deal with the fallout: a project of yours that may now itself be unsustainable, with your investment in it possibly sunk, leaving you worse off than if you had never started the project in the first place.
Paradoxically, a lower free tier makes me a lot more likely to use GitLab CI now, since I know that they know their costs and limits, and that from now on they aren't eating a cost that would mean a future drop of the hammer at some undetermined time.
In general, in any SaaS business, the free plan is usually written off as part of the marketing budget.
The idea is that instead of paying Google/Facebook to display ads in the hope that they will convert people as leads and, hopefully, down the line as paying customers, it's usually much, much cheaper to provide a free plan instead, which works as a "trial" of the final product.
Beyond the sales factor, it also gets you real users early on, who can provide invaluable feedback and help you prioritize the parts of the software that have real demand versus what you imagine people would want.
You can also run your own CI/CD runner for free and use their platform for coordination. I do this and it works well, as easy as running a docker image. Runs on my homelab.
Can you share a bit more about this? I'm interested in setting this up myself. A link to the docker image, or a name I can google?
To be clear, you store all your code on Gitlab's servers (i.e. not self-hosting git instance) but just "outsource" the CI/CD work to your homelab? That's my ideal.
You run an agent called the GitLab runner. You configure it with a token from your Gitlab.com group or repo (depending on where you want the runner to be available). You tag the runner and reference it by tagging CI jobs you want to run on it. The runner polls for outstanding jobs and then executes them. It is extremely flexible in how it does this. In our case, the runner manages a fleet of executors using docker+machine. I believe this is similar to how Gitlab does it internally.
I did macOS runners this way, combining the runner with the VirtualBox executor. Best part is once you set up the base VMs (one per macOS version), the runner is smart enough to take a snapshot right after boot and restore that on subsequent runs, which not only makes job runs stateless, but also gets job-worker boot down to about 2s.
The link has already been provided.
The runner is only an administrative process. It can indeed be put into a Docker container. The instructions are easy to follow, the image is built by GitLab and updated regularly. If you choose the alpine variant instead of latest it's much smaller. We have done that for 2+ years and not noticed any limitations.
The runner will start jobs that do the real building (or whatever you have coded in your CI). Again you can choose to have jobs executed in docker containers. Also documented by gitlab, easy to set up. Of course you need to provide a suitable Docker image where your build can work. Nobody can do that for you. In simple cases you can pull something existing from Docker hub without further additions.
I think a good free tier still makes sense as long as it seems sustainable overall. For example, if a lot of people are willing to pay for at least a basic plan, I feel a little better.
I think for CI/CD having a generous free tier is great because it makes it easier for people to get started and really dig into a project, not to mention the obvious benefit to open source that works as a continuous PR machine. Practically everyone knows what Travis CI and CircleCI are.
I still agree that it can be unnerving at times. I worry about services that seemingly offer no paid tiers. Like, draw.io. Thankfully draw.io actually seems to be sustainable, but you wouldn't guess it based on their very unobtrusive app!
You can pay for Discord. Admittedly, I do. It's not that Discord is perfect, I have a lot of personal gripes with it. But it's still significantly better than where I came from (Skype) and I use it a lot so it seems fair enough. Discord Nitro also thankfully pivoted from being a games service and the features it does provide are nice to have. (Larger file uploads, better stream quality, cross-server emoji.)
Is it? $15-30/head sounds like a lot, but these are employees you're paying $30k+ in salary alone; if the addition of Slack makes them even 1% more efficient, that blows past the $15/head each month. Forget the tech companies with employees that easily crest $200-300k in costs after insurance and other benefits.
There's a lot of audits and regulation, in addition to tighter security, that Slack needs to prove to its enterprise customers that they can trust their employees blasting confidential information on it every day of every year.
You are comparing efficiency like in "with Slack" and "without Slack", but in fact it is a "with Slack" vs "with another messenger". On a previous job we used Telegram for work communication. It has bots and stuff, and it's blazing fast. And it's free.
Nitro is for individual accounts and provides features for you as a user, boost is for the server and provides features for every user of the server.
Maxing out a server takes 30 boosts but the level 3 perks seem pretty… thin on the ground:
- +100 emoji (from 150 to 200)
- 384Kbps audio (from 256)
- 100MB uploads (from 50)
- custom URLs
Only the third one is somewhat useful, but 15 boosts for that doesn't really seem worth it.
As to price, a boost is $5, so a level 3 server is indeed $150 (level 2 is half that). However, Nitro ($10) provides 2 boosts and 30% off all boost purchases, meaning you can max out a server for $108, or $55.50 for a level 2. Nitro Classic is only $5 and also provides 30% off boosts, but doesn't include the free boosts, so it comes out at $110 to get a level 3 server on your own.
>Buying extra minutes for $10 (not per month) is completely reasonable.
I agree that the free tier is moderately reckless, as it invites people like me, who bookmark https://free-for.dev, to devour your service with no gain.
The pricing question though is tough. How much would an instance in AWS cost you for 4000 minutes? Two dollars? Pretty sweet markup if you can find a buyer.
I am not sure whether their CI/CD has been a main driver, but their investment rounds have proven that they have built up a very positive image. So in a way, attracting users for free can bring money to the company.
But you are right: such a model is not sustainable for very long.
I've never really understood "minutes" as a unit of build work.
What kind of server are we talking about? What CPU? How much RAM? How fast is the storage access? Is my instance virtualized? And if so, do I have dedicated resources?
I have a build that takes around 70 minutes on an 8-core i9 with 32 GB of RAM and M.2 SSDs. What does that translate into for Gitlab "minutes"?
We define Pipeline minutes as the execution time for your pipelines. You bring up an interesting point, though. So today, for our Linux Runners on GitLab.com, those Runners are currently offered only on one machine type, Google Compute n1-standard-1 instances with 3.75GB of RAM. Our current Windows Runners on GitLab.com are Google Compute n1-standard-2 instances with 2 vCPUs and 7.5GB RAM.
In the future, for Linux and Windows Runners, we will offer more GCP machine types. For our soon to launch macOS Build Cloud beta, we are planning to start with one virtual machine size and then possibly offer different machine configurations at GA.
And yes - the virtual machine used for your build on GitLab.com are dedicated only to your pipeline job and immediately deleted on job completion.
Finally, the only way to know how long your current build job will take on a GCP n1-standard-1 compared to the 8-core machine is to run the job and compare the results. I assume that your 8-core machine is probably a physical box, so you should of course, get better performance than a 1-2 vCPU VM.
AWS or GCP charge for VMs in actual clock (wall) time though, right?
That your load may spend more or less time waiting on IO instead of actually using the CPU... I would not expect that to affect your charge. Which is the main difference between wall time and actual CPU time, right?
What would be the most immediately understandable way to present that? Suppose they added new, faster servers in the future; what unit would make the most sense to offer that won't change in the future?
Probably could just include a * that points to the infrastructure it runs on with a link to the update history of that infrastructure. It only becomes an issue when the infrastructure isn't standardized, but that is just exposing the underlying issue where the same build won't take equal time due to a difference in infrastructure.
A generic "build credit" term could be better, maybe - with some details about what one credit gets you. Maybe one build-credit is normalized to "One minute on an 'n1-standard-1' with 4GB of RAM and 40GB of storage."
Under a system like that, users could maybe choose between a couple of different worker types. Or if there's only ever the one type, periodically the 'n1-standard-1' could be swapped out for whatever is the latest-greatest for the same price.
I mean like other public cloud providers it would make sense to have instances and per minute pricing for an instance. If there is only one instance type that's fine.
What about if there's one instance type, but sometimes that instance gets upgraded so that the same things take less time? Is there a unit that would make more sense than "minutes", and be stable over time?
For instance, "time to compile XYZ well-known project"?
Time to compile is a hard-to-gauge metric. I'd rather they just be transparent that the instance type has changed.
It’s presumably in a big standardized DC. They don’t have a continuum of instance configurations, they probably upgrade rarely and systematically. If they are mid upgrade just have 2 instance types available then sunset the older one. Since the upgraded instance is a new instance type it can have new (or same) pricing. In addition, they could publish benchmarks for each instance type if they want.
It is literally what we see with cloud providers having v1/v2/v3 names for some instance types.
That would probably be 280 minutes of "build time" to do your build in 70 minutes. My math is: they are going to call your 8 core CPU 16 vCPUs, because it's something cloud providers can charge money for but Hyperthreading/SMT doesn't really speed up builds that much. It does do something, but 2 threads scheduled on the same core is not going to be 2x faster than one thread being scheduled on that core. (It sure will use 2x the RAM, though, while it sits there waiting to execute instructions. They will charge you for that too!) Then, because they are running on a 64 or 128 core CPU with the same TDP as your consumer chip, things are going to be really clocked down -- your desktop may boost to 5GHz during the build (assuming you got a good cooler and overrode the time limits for boosting; who doesn't, though?), this thing will be running at 2.5GHz. So you will need twice as many actual CPU cores to get the same performance.
I am being very pessimistic with these numbers, but I am continually amazed at how slow computers in the cloud are compared to my desktop. And when you're being charged by the minute, there is no incentive to make the computers faster, of course -- the business incentive is to make them slower! Buyer beware. (To be fair, they are getting a lot better Wh/build out of their system than you are. If you were paying for the electricity and cooling and got paid no matter how slow the build was, you'd make the same decision.)
In case anyone is wondering about GitHub (not GitLab), they're planning to add multiple runner sizes[0]. Unfortunately it was moved from Q4 2020 to 'future' so there's no expected release time.
Either they went the cheap route and stuck it on some price efficient EC2 instances, or they went the vogue-but-expensive route of lambdas for "rapid processing and ease of development"
Honestly not that big of a deal given that (I would assume) most probably run their own GitLab runners. I personally use the free tier and have a bunch of my own runners.
For smaller projects that aren't ready to start spending yet, it's pretty trivial to spin up your own runners on a server. Not sure how it scales, but GitLab has pretty solid guides on how to make one. It took me maybe an hour the last time I looked at it, worth checking out. Definitely easier to drop $10 than maintain that though; I'm a fan of GitLab's CI/CD infrastructure.
This thing of spending like an hour setting up a runner is one of the things I wanted to address with https://boxci.dev - A CI service I’ve built with a similar bring your own runners model but where “setting up the runner” consists of just installing a package, literally done in seconds :-) You should check it out.
To be fair, setting up a runner for Gitlab CI is very simple too. We self-host GitLab and we only have one runner on duty, but some projects have many stages that can be executed in parallel. I start a docker runner on my laptop before pushing new commits, which cuts the CI wait time by half.
$4/mo for 2000 minutes is reasonable. GitHub Actions does 2000 minutes for free/3000 minutes for $4, wonder if they'll drop their free tier a bit without the competition.
At $4/month you can get a server from just about anyone and run 700+ hours of CI/CD... this doesn't sound like a great deal to me unless your CI/CD can't exist within ~2GB of RAM and the CPU limits. Or unless GitLab runners have 16+GB RAM and lots of CPU for large jobs.
Today, for our Linux Runners on GitLab.com, those Runners are currently offered only on one machine type, Google Compute n1-standard-1 instances with 3.75GB of RAM. Our current Windows Runners on GitLab.com are Google Compute n1-standard-2 instances with 2 vCPUs and 7.5GB RAM. In the future, for Linux and Windows Runners, we will offer more GCP machine types. For our soon to launch macOS Build Cloud beta, we are planning to start with one virtual machine size and then possibly offer different machine configurations at GA.
So as you get going initially with GitLab SaaS, you don't have to set up your Runners for your first CI/CD jobs. Then, depending on your requirements, and as your use cases evolve, you can easily set up your own Runners but still benefit from the included minutes.
CI/CD is mostly burstable workload. You are not going to run it 24/7, but when you need to build/test/deploy a commit, you'd prefer to do it as fast as possible.
“We want to reduce cost and make more money, therefore today we reduce the number of free minutes from 2000 to 400 for free accounts. There are options to buy more minutes. kthxbye.”
I work at a medium-ish enterprise and the comms team forbids engineering from sending out notices to customers without clearing it past them first. Not sure if it's the same for GitLab, but this sounds like something similar...
> Does anyone else think this is a Gitlab campaign against overuse of monomorphization in Rust projects?
No, not really. I mean, in healthy projects build times are dwarfed by the time it takes to run tests. In web development projects even the delivery and deployment steps dwarf build times.
> in healthy projects build times are dwarfed by the time it takes to run tests
I don't doubt that's often true but many Rust projects may be outliers here. A full, non-incremental build of a Rust project involves building all of its dependencies. This can add significant amounts of time if a project uses a big framework like Actix-web, which adds many dependencies.
My tests however run very quickly, ~1ms each. So running thousands of tests only takes a few seconds, even on relatively slow gitlab runners.
You should be caching the builds of your dependencies. This is very easy with cargo and GitLab. I think encouraging people to optimize this is a reasonable cost to push onto the free users.
Is there an equivalent of `ccache` for Rust? For C++ it's been a total lifesaver, I've introduced it in multiple organizations for massively reducing (average) build times, even by sharing the cache on an NFS drive between multiple machines.
Note the caveats though. In particular it can't cache Serde, which is easily the slowest popular crate to compile. Also, the heavy use of static dispatch and LTO in Rust means a lot of the actual compilation happens in the final crate, which is usually the one you modified.
Plus Rust by design avoids all sorts of bugs, leading to fewer tests imo. Python, for example, you really want a lot of testing, maybe even contracts, but with Rust I find that many of those tests are irrelevant.
> Plus Rust by design avoids all sorts of bugs, leading to fewer tests imo.
This assertion about the amount of tests makes no sense at all. Tests are not about language features. Tests are about checking invariants, checking input and output bounds, and checking behavior. Tests focus on the interface, not the implementation. Tests only work if test coverage is high.
It makes total sense. Rust statically enforces many things you'd have to explicitly test for in other languages, for example types. So you don't need to write as many tests.
GitLab Product Manager here - We’re working on ways to run fewer tests OR only the necessary tests earlier in a pipeline so you get to a result in fewer minutes. The first project towards this in the product is https://docs.gitlab.com/ee/user/project/merge_requests/fail_... which we hope to bring down to the Core tier soon.
I think most test runners have a --stop-on-failure feature, but using it has the downside of not giving you the complete list of failing tests.
If you haven't touched your gitlab pipelines for a few months, check out DAG pipelines [0] - I got my web project deployment pipeline down from 25 to ~12 minutes by running tests sooner.
That page links to https://about.gitlab.com/solutions/open-source/program/ for determining if your project can have more free minutes. It sounds like that says if you have a public repo you get a bajillion (50,000) free minutes. That seems like a crazy good deal.
The feature that sold me GitLab was the ability to run your own runners. It's basically 10x (source needed) cheaper than other hosted CI/CD services. Most basic setup is like 3 commands.
GitLab's SaaS offering is an excellent product. It is absolutely worth paying for. Their free tier was and still is very generous. I hope this isn't a signal that they are having trouble. I've kind of naturally shifted 90+% of my work over to GitLab over the past year. I love it and want to stay!
400 minutes is plenty to decide if you need to pay for the service, and they need to stay in business. Eminently fair, and still far better than the typical 1 month free.
The effect of investment money drying up is very widely distributed. Even if none of those companies are looking for money right now, when it gets tight it's good practice to reduce your spending so you need less of it.
Those things the OP is talking about are all investment, so it's natural that they get cut.
Gitlab might also be looking for where they can grow during the pandemic.
1.5% of 6m existing free tier users is 90k accounts who now need to reduce usage, pay Gitlab, or move to another platform. Only one of these options is fast!
I don't think this is the only reason they are doing this, but it does sweeten the pot.
Does anyone know how to see historical runner usage on GitLab? I'm digging into the interface, and I only see current month usage, no ability to go back to previous months. With this reduction news, I thought the interface would have updated with better details.
Can anybody provide more info about how to set up your own runners? I have a VPS which runs dokku - can I spin up some containers there? Or can I just run it locally on my laptop (usually my laptop is connected to the internet when I push my changes and trigger CI so it could run there?)
edit: I had a look at the docs but they're quite overwhelming, with lots of options. I run Linux on my laptop. If the setup is too complicated I'll just purchase some minutes and call it a day.
I see a lot of comments calling GitLab devious or citing statistical lies. The basic point is that GitLab is a business competing in an area where competitors have infinite pockets. GitHub can give users 2000 minutes and even make it free for teams because Microsoft has a huge war chest. GitLab, on the other hand, is valued at 1/1000 of that and has to be profitable to survive.
I wish they would make their merge requests work like GitHub's pull requests. I always miss the messaging when there are merge conflicts. Also it would be nice to have something similar to status checks with annotation support, like GH.
Now I am going to see if they fixed the bug where copy-to-clipboard stopped working a few months ago. Why would I want some JSON blurb instead of the branch name when copying it?
Do they have metrics on how many free tier accounts actually use the CI/CD feature and of those how many exhaust the former 2000 minute quota?
I am asking because my use is definitely in the minority of the user base, which is just slapping projects into a managed git repo that is not owned by Microsoft.
This was before github decided to allow private repos for free.
At the end of the day it will all come down to a very simple thing: pay up or hit the highway. There is only so much charity a company can afford if it is there to make money rather than vaporize investments.
CI pricing (gitlab, github actions, circleci etc.) is all extortionate. When they price by 'user seat' (gitlab) or 'credit' (circleci), comparing pricing is like trying to pick a cell phone plan.
AWS Code Build will always be an order of magnitude cheaper, it's just slightly harder to set up but it works very well. It's unclear how all these other services will ever compete with that.
For example, to run a CI server on Gitlab for a team of 8 that never spun down, it would cost $492 per month on their 'shared' runners. On AWS Code Build, you get a DEDICATED ec2 instance for $223 per month and only pay for what you use when it's running.
The setup time isn’t cheap, though, and the UI is... confusing.
If you’re using a dedicated ec2 instance, most providers (gitlab, buildkite, github, etc) will let you connect it as an agent for free.
IMO using your own runner is a better way to go in general because the standard ones tend to be very underpowered and you can get much faster builds without spending much.
At that point different providers are largely competing on price and UX (imo Buildkite have the best developer UX and the time saved as a result is well worth the price).
(Not affiliated with buildkite other than as a user).
You can install GitLab Runners on EC2. You can even install GitLab Runners on a Raspberry Pi, or use some old computer and put it in a closet somewhere (uptime not guaranteed though).
They sort of hang themselves with this -- gitlab runners will occasionally hang and just run for the full 1 hour max, making it very easy to max out your hours. Presumably they don't fix this so people max out their hours and have to buy more, but it also hurts the bottom line of their free tier. I see this happen at least 2x a month on our project, and it doesn't seem to correspond with any particular docker image.
Metering of services like this only makes sense for organizations that don't have the time or expertise or desire to maintain the open source, self-hosted version themselves.
Raspberry Pis and similar devices will some day hopefully rid the world of this tyranny of having to trust a website like this, pay them for eternity, hope they don't go down, hope they don't raise prices (oops), and hope they don't obfuscate pricing like, ermmm, well, every cloud provider has.
I'm suggesting that $4 per month, where I have to estimate the number of requests I will actually make, is not that good of a value compared to self-hosting on a Raspberry Pi.
The devices will pay for themselves within the first year, and generally my philosophy is to avoid building your operation around 50 services cobbled together, because there is a possibility you will spend more dev time trying to understand a service's idiosyncrasies than actually just rolling your own.
Build vs Buy is a very individual decision that should be based on your team and your product.
At my last employer, we used the free tier of CI/CD through CircleCI, as it was sufficient and easy to spin up for testing a couple of small internal libraries we needed to hook things together with a SaaS product we were using. We weighed the benefits and came up with a number that balanced the estimated implementation and running cost of self-hosting against the free tier offering and the estimated cost of implementation there.
Once you factor in engineering costs and the additional server to maintain, it made sense to go hosted for us. But every team is different, and hardware cost isn't the only thing. You need to consider the running cost of utilities, maintenance, and in the case of larger equipment, even cooling costs.
That said, for personal projects, yeah, I just kick things onto my home file server, since it's running anyways, and normally has nearly no load other than managing my ZFS and occasional backup operations.
Fair enough, I know I self-host everything. I would just suggest that a Pi is probably a bad example, since it's ARM and you would want your CI/CD to run as close to production (x86) as possible.
Cool :) If I'm testing OpenJDK code on linux, it should be the same whether that linux device is running on arm or x86 right? I imagine C would maybe be different but I would hope that linux abstracts those differences for interpreted languages
> You could barely rent a dedicated server for that amount of money
You can get a Ryzen 5/64G for like $40/month on providers like hetzner. While I get your point about maintenance, it's not right to say that dedicated servers are that expensive (you also can't compare the performance of 2vCPU/4G with a dedi but that's beside the point)