A lot of discussions about how git repos are supposed to be small are totally missing the point. This storage quota applies to everything, including release artifacts, containers, etc. Forget containers or CI artifacts on every commit; let's look at a very common scenario: using goreleaser to build binaries and deb/rpm/etc. packages for multiple architectures on every release. A moderately sized Go project can easily consume 50-100MB or more per release that way, which gives you at most 50-100 releases across all your projects.
Using hosted GitLab for open source projects is looking less and less appealing.
Edit: An open source program that upgrades the quota is mentioned elsewhere in the thread: https://about.gitlab.com/solutions/open-source/ I don’t use hosted GitLab for my open source work, so no idea how many people get approved.
Hi, GitLab team member here. The GitLab for Open Source program provides open source projects with Ultimate benefits and higher limits. More info in previous comment: https://news.ycombinator.com/item?id=32387621
Why do you have to “apply” for these benefits? Shouldn’t providing the source publicly in the open (ie not a private repository) by definition be enough for “open source”?
I guess this is the next step to reduce costs after the brakes were put on the "let's delete old OSS repositories" leaked plan.
For comparison, I think GitHub just has a cap of 100MB on any single individual file, plus:
> We recommend repositories remain small, ideally less than 1 GB, and less than 5 GB is strongly recommended. Smaller repositories are faster to clone and easier to work with and maintain. If your repository excessively impacts our infrastructure, you might receive an email from GitHub Support asking you to take corrective action. We try to be flexible, especially with large projects that have many collaborators, and will work with you to find a resolution whenever possible.
Which is a bit wishy-washy, but sounds like there's room for discretion / exceptions to be made there rather than a hard cap at 5GB.
The difference being that the value Microsoft gets from GitHub isn't its revenue, it's its influence. Whereas Gitlab is just another corporate software suite.
Well, Github (free or not) had no CI until Github Actions, which came after the acquisition. Integrations with things like Travis were already there and free.
> I think GitHub just have a cap of 100MB on any single individual file
There's a 100MB limit on the size of a push, which also limits the size of any single git object (i.e. file) to 100MB. However, GitHub supports LFS for large files, and their documentation says to use LFS for files over 100MB:
If you build an image for testing on every commit and don't have a retention policy set up, you could be using a massive amount of space without realizing it. I can see why they did this.
Also, for some things there is no retention policy configurable at all -- eg pipelines and their associated stdout logs appear to not be auto-cleaned-up at any expiry date (no default, nothing configurable). If you want to get rid of those you need to script it via the API, it seems: eg https://gitlab.com/eskultety/gitlab_cleaner
(I just ran that on the QEMU project and reduced the usage from 295GB to 165GB by deleting pipelines older than 1 Jan... so that's a lot of low-hanging logfile fruit gitlab could be auto-deleting.)
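For anyone who wants to script it without pulling in that tool, a rough sketch against the pipelines API looks like this (project ID, token and cutoff date are placeholders; note that deleting a pipeline also deletes its job logs and artifacts, so double-check the cutoff first):

PROJECT_ID=12345678            # placeholder: your numeric project ID
TOKEN="glpat-your-token"       # placeholder: a personal access token with api scope
CUTOFF="2022-01-01T00:00:00Z"  # delete pipelines last updated before this date
API="https://gitlab.com/api/v4/projects/$PROJECT_ID/pipelines"

# Loop until nothing older than the cutoff remains; one pass is often not
# enough because the listing shifts as pipelines get deleted.
while :; do
  IDS=$(curl -s --header "PRIVATE-TOKEN: $TOKEN" \
        "$API?updated_before=$CUTOFF&per_page=100" | jq -r '.[].id')
  [ -z "$IDS" ] && break
  for ID in $IDS; do
    curl -s --request DELETE --header "PRIVATE-TOKEN: $TOKEN" "$API/$ID"
  done
done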
Gitlab didn't even have retention policies until sometime in 2020. I have a six year old project that's consuming something like 4TB of space in their container registry.
Yeah tbh we are leaving the age of abundant cheap money where everything is unlimited and unpriced. Having people pay for their usage will get them to actually start cleaning up all their junk build artifacts rather than just eating the loss on storing petabytes of old docker images which have no value.
Not op, but a simple node application can very easily produce container images that are well over a gigabyte. Build and store those container images on every push and boom, your usage can explode.
I wouldn't say easy, because regexes are needed for complex policies (e.g. retain -prod images for 6 months, -staging for 24 days), but they are powerful and not that complicated.
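For what it's worth, these policies can also be set through the projects API instead of clicking through the UI. A rough sketch, where project ID, token and the regexes are placeholders and the attribute names are my best understanding of the current API (double-check the docs):

$ curl --request PUT --header "PRIVATE-TOKEN: $TOKEN" \
    "https://gitlab.com/api/v4/projects/$PROJECT_ID" \
    --data 'container_expiration_policy_attributes[enabled]=true' \
    --data 'container_expiration_policy_attributes[cadence]=7d' \
    --data 'container_expiration_policy_attributes[keep_n]=10' \
    --data 'container_expiration_policy_attributes[older_than]=90d' \
    --data 'container_expiration_policy_attributes[name_regex]=.*' \
    --data 'container_expiration_policy_attributes[name_regex_keep]=.*-prod'

Roughly: keep the 10 most recent tags plus anything matching -prod, and delete everything else older than 90 days on a weekly run.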
GitLab team member here. The impacted users are notified via email, and in-app notifications will begin 2022-08-22; so far we've contacted 30,000 users. Only GitLab SaaS users are impacted - the limits are not applicable to self-managed users.
We're affected by this change, and have been trying to get support in forums, but can't get hold of anyone.
Build Artifacts are listed as part of what contributes to the quota (which is fair enough), but there's no way (that I could find in the docs) to manage build artifacts that are stored.
I suspect we have a large historic storage, which we don't use / need, but there's no way to browse this, no way to verify it, and no way to delete what we don't need.
I'm dreading getting a big ole bill in a few months for storage we had no way to opt out of.
GitLab Support Engineering Manager here. These limits will initially remain soft limits, as they are now. Impacted users will be notified and will have about two months to take action before enforcement starts. After that point, the limits will be enforced.
For some reason I had to run this multiple times to completely remove everything; each run removes about half of all the pipelines and only when there were about 20 remaining did the script remove everything.
What's the difference between self-managed and SaaS users? To me, GitLab is a SaaS?
I mean if I'm self hosting on digitalocean doesn't that just mean that I'm using your FOSS and doing all the work myself and completely separate from gitlab anyway other than the common codebase?
Sorry, I'm not a guru of gitlab and don't know all the common parlance.
> Also if you use your own runner, cloning the repo to the runner will also be included in your bandwidth limit.
You make it sound like it's horrible. I'd offer a different take: sure, it can be a huge negative if done recklessly on GitLab's part. But if done correctly, it can actually be a good thing. I think CI pipelines doing a full clone from scratch on every build (or npm installs or other bootstraps) are extremely wasteful in terms of resources, so I'd be glad if this reduces that significantly.
Thanks for the suggestion of adding a Git cache for GitLab Runner. A similar mechanism already exists: the Git strategy can be set to fetch, which reuses a local working copy and is faster, falling back to a clone if one doesn't exist yet. All benefits and limitations are documented [0].
To help prevent unneeded traffic when a new pipeline job is executed, GitLab Runner on GitLab.com SaaS uses a shallow Git clone by default, pulling only a limited number of commits from the current head instead of the full history. [1]
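To pin this down explicitly rather than relying on the defaults, the strategy and depth can be set in .gitlab-ci.yml; roughly like this, where the depth value is just an example:

variables:
  GIT_STRATEGY: fetch   # reuse the runner's existing working copy and only fetch new commits
  GIT_DEPTH: "20"       # shallow fetch: only the 20 most recent commits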
There is a feature proposal [2] to add support for partial clone and sparse-checkout strategies that have been added in more recent Git versions. Recommend commenting/subscribing.
Please note that for GitLab.com SaaS shared runners, the traffic limits do not apply, since that traffic is mostly internal cloud traffic. [3]
If you start from scratch, regardless of history, you're fetching at least the size of the repo. In many cases, that's hundreds of MBs for every build and I'm a huge fan of building for every commit. I find that a colossal waste.
I have a very old fashioned and "not recommended" setup with Jenkins that only pulls new commits, because it works out of a persistent working directory. Works wonders for a Django/ES6 project and sips bandwidth. I wish more of the modern containerized, start-from-scratch and so on setups would work in a similar way wrt the bandwidth they use.
We understand people don’t want their quotas being filled by something outside of their control. For Open Source projects, we have GitLab for Open Source which contains higher limits from the Ultimate tier (250GB Storage, 500GB transfer/month). In addition, we intend to look into other ways to address your concern such as counting only your own traffic or allowing you to limit external traffic.
I love the fact that they haven't thought out a lot of how this needs to work. You can't even decide if it's worth sticking with GitLab because they don't even know how they are going to handle half of this stuff.
I appreciate their open discussion and iterative approach. There’s no need to get your feathers ruffled. These measures mainly impact users freeloading their services, so maybe have some empathy for the company providing them?
> These measures mainly impact users freeloading their services
Does it? We use GitLab at work, and we have to use local runners (for regulatory reasons + GitLab runners don't support what we need anyway).
Unlike other CI/CD providers (e.g. TeamCity), GitLab Runners don't have a local git cache. So if you need to do a clean clone for an important build (GitLab runners don't clean up well after themselves), you need to re-clone the repo from GitLab.com.
With a 1:2 ratio for storage vs. bandwidth (10GB storage, 20GB bandwidth per month), and assuming 1GB in the latest commits and a 2GB total repo size (e.g. a vendored dependency that doesn't change frequently):
- 2 runners
- Clean once a week
- 5% of bandwidth per shallow clone
- 40% of bandwidth per month on CI/CD alone (2 runners x ~4 clean clones each x 5%).
Leaving space for 6 clones by developers (hope you don't upgrade your dev machines often).
If you're using a submodule in multiple projects you're going to tear through your bandwidth.
Like I said, it /mainly/ impacts freeloaders. Having a 1+2 GB git clone is not a common use case. That being said, it appears that you would not be affected:
> Transfer is the amount of data egress leaving GitLab.com, except for:
> Paid plans only: self-managed runner transfer and deployments. This is determined by transfer authenticated by either a CI_JOB_TOKEN or DEPLOY_TOKEN
Thanks for sharing your feedback. I have added more insights into caching and checkout strategies to reduce traffic and speed up job execution in GitLab Runners in this comment: https://news.ycombinator.com/item?id=32408960
Am I reading it right that the original Free Tier had a quota of 45,000GB? That seems absurdly high and not very sustainable (hence the change I assume).
I'm reading that as a "stop-the-bleeding" number; presumably there is someone out there with a 44TB repository and they want to impose initial quotas that don't actually impact anyone immediately.
I guess someone has been backing up their movie collections to Gitlab or something.
One case I've seen on our local Gitlab server is someone in data science/HPC (accidentally?) adding the output of a solver to their repo. Easily hundreds or thousands of gigabytes of data.
I had to learn svn surgery because someone imported a 1GB archive to test the ol' signed 32 bit file size bug, got yelled at, and deleted it. Well, except it's still in the repository, mate, it's just hidden.
I replaced it with a fixture that was 2GB of space characters, which compressed down to about 3KB. I know there's a canonical file bomb zip file that's under 1K but there's clever and then there's clever.
No, originally there was no quota enforced to begin with if my memory serves me right. The limits discussed here are likely meant to gradually tighten up the limits, rather than immediately locking out projects that exceed these limits.
I might be wrong about that but I'm pretty sure that I saw that limit (100 MB) when I was reading about GitLab plans in the past. They just didn't enforce it for some reason.
IMO GitLab does the wrong thing. They should have enforced those limits from the beginning. And if they didn't, they should've eaten those expenses or at least grandfathered old repos.
> And if they didn't, they should've eaten those expenses or at least grandfathered old repos.
Why should they eat the expenses of abusive users? Besides, they clearly have been eating them, and are finally taking action to prevent it. Your entire post comes across as extremely entitled to their money.
No, 45,000GB wasn’t a quota. At the moment, the 5GB limit is a soft limit, and the rollout plan included tiered enforcement. This is an internal implementation detail of our technical rollout plan.
Git excels at tracking human keyboard output. A productive developer might write 100KB of code annually so a git repo can represent many developer years of collaborative effort in just a few MB. That is, unless you require git to track large media files, third party BLOBs, or build output.
However, sometimes tracking these things is necessary, and since there isn't an obvious companion technology to git for caching large media assets ("blob hub?") or tracking historical build output ("release hub?"), devs abuse git itself.
I wish there were a widely accepted stack that would make it easy to keep the source in the source repo, and track the multi-gb blobs by reference.
Git LFS is a pretty widely accepted stack for managing binary blobs by reference in git: https://git-lfs.github.com/
The plugin is installed out of the box in many git distributions now. Many hosts support it today, including Gitlab, which is relevant to this article's discussion: https://docs.gitlab.com/ee/topics/git/lfs/
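For anyone who hasn't used it, the workflow is roughly as follows (the file patterns and file names are just examples, and this only affects new commits; rewriting files already in history needs git lfs migrate):

$ git lfs install
$ git lfs track "*.psd" "*.mp4"
$ git add .gitattributes
$ git add artwork.psd demo.mp4
$ git commit -m "Track large media via LFS"
$ git push origin main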
You mean something like git Large File Storage? It comes with git for Windows by default and every Linux distro I know has it in its repos. MacOS also has it in Homebrew.
Git's delta compression is quite effective. Unless you start checking in large binaries, most companies will never get close to 5 GB. (Even 1 GB is a pretty substantial amount of code.)
GitLab team member here. Forks of projects get deduplicated, so only the changes you make will contribute to your storage consumption as long as the fork relationship is maintained.
Update: our team recently identified an issue impacting how we calculate storage which results in forks being counted towards Usage Quota. This will be addressed before we begin enforcement.
Artifact size depends on how much garbage from the project infrastructure gets included in the artifact. Simple things like not having a fully populated ignore file will cause extra files to end up in the project or the artifact.
People who don't care about file size don't care about file size. If the artifact is ridiculous, sometimes the repo is ridiculous too. Therefore if you take a bunch of projects with outsized artifacts, you are going to have above average repository sizes as well.
Do personal accounts really have 5 gigs of code??? Unless you have a lot of models/artifacts or images, plain-text code should be well under 5GB for hundreds of projects.
Where this will really hit is the docker image registry. The lazy CI implementations will just tag a new image on every single commit and leave them sitting there forever. Hundreds of megs each, completely useless.
In a way I'm almost glad people will have to start cleaning up after themselves.
I don't disagree with that, especially given the assertion that artifact storage from pipelines does not count towards the 5GB; it just feels like a tiny amount for 2022. Just bumping it to something like 20GB would make me feel better, even though it doesn't really matter.
What's the point of tapering it down in stages like this? Between the October 19th quotas and the October 20th quotas, if you wait until the last minute, you have 24 hours to move 37.5TB of data. Then 4 more days to move another 7TB; does that actually help anyone? The proposition of getting that much data out of it at that speed seems a bit unrealistic. Why not just say "the quota will be 5GB on November 9th" and be done with it?
The phased enforcement of the limits is part of the technical rollout plan for this change that was added to our docs. Related comment: https://news.ycombinator.com/item?id=32387597.
The communication sent to impacted users via email and future in-app notifications includes only the applicable enforcement dates and limits.
- If I tag a docker image with multiple tags, and then push it to Gitlab, each tag counts towards the storage limits even though SHAs are identical. eg 100MB container tagged with "latest" and "v0.5" uses 200MB of storage.
- The storage limit is not per repository, but per namespace. So 5GB free combined for all repositories under your user. If you create a group, then you get 5GB free combined for that group. Does this include forks? Does this include compression server side?
- The 10GB egress limit per month includes egress to self-hosted Gitlab Runners in free tier. Consider this with the 400 minutes per month limit on shared runners.
These limits feel less like curbing abuse and more like squeezing to see who will jump to premium while reducing operating costs. Is this a consequence of GitLab hosting on GCP, with the associated egress and storage costs? Is this a move to improve financials / justify a market cap with fiscal storm clouds on the horizon? Is this being incentivized by $67m in awarded stock between the CFO and 2 directors?
> To celebrate today's good news we've permanently raised our storage limit per repository on GitLab.com from 5GB to 10GB. As before, public and private repositories on GitLab.com are unlimited, don't have a transfer limit and they include unlimited collaborators.
> If I tag a docker image with multiple tags, and then push it to Gitlab, each tag counts towards the storage limits even though SHAs are identical. eg 100MB container tagged with "latest" and "v0.5" uses 200MB of storage.
Any "duplicated" data under a given "node" (be that the root namespace, a group or a project) counts towards the storage usage only once. So images latest and v0.5 would only represent 100MB in their namespace registry usage, not 200MB.
> Does this include forks?
The registry data is not copied/duplicated when one forks a project. So this is not applicable. But even if it was, as long as the fork and the source are under the same root namespace, any "duplicated" registry data across the two would only count towards the storage usage once.
> Does this include compression server side?
Yes, the measured size is the size of the compressed artifacts on the storage backend.
We should be updating the docs shortly to make these answers more transparent!
I didn't even know there was no storage limit - that seems like an immediate way to get your platform used to store non-code data in very large quantities.
5GB isn't much different than the storage limits of other services, but their storage pricing is atrocious. I've seen the writing on the wall for a while and watched as GitLab went from being the cool open source alternative to GitHub to becoming a bloated oversized mess. I know several popular open source projects were offered premium tier upgrades for free. I am curious to see if these changes, especially transfer limits, will impact them enough to move away.
My Qt/C++ cross-platform FOSS Wallpaper Engine project[1] currently uses 47GB of storage. This is because I compile for every platform and store the artifacts for 4 weeks. Not sure what I will do in the future, because having older builds around to try out without recompiling is always nice.
47GB would be well under a dollar per month for object storage in Backblaze B2. And most repos won't have anywhere near that much storage. It isn't zero, but afaik, GitLab's main competition doesn't have a similar limitation.
GitLab for Open Source provides OSS projects with Ultimate tier benefits, and includes 250GB of storage and 500GB transfer/month. Please apply to join the program here: https://about.gitlab.com/solutions/open-source/
And note that even if you're just "a random someone with a bunch of open source repos", you almost certainly already qualify for this program:
---
In order to be accepted into the GitLab for Open Source Program, applicants must:
- Use OSI-approved licenses for their projects: Every project in the applying namespace must be published under an OSI-approved open source license. [1]
- Not seek profit: An organization can accept donations to sustain its work, but it can’t seek to make a profit by selling services, by charging for enhancements or add-ons, or by other means.
- Be publicly visible: Both the applicant's GitLab.com group or self-managed instance and source code must be publicly visible and publicly available.
That isn't Gitlab saying those are the requirements for being "open-source", it's just their requirements for who they are willing to give a generously large amount of free services to.
I think it's quite fair to say you're not going to give free services out to open source projects that are seeking to fundraise beyond covering their costs.
Having said that: if you run a public open source project that has requirements beyond what a regular free user gets, then why not apply to the open source program[1] (which you almost certainly already qualify for) so you're not space constrained?
Quoting the requirements:
---
Who qualifies for the GitLab for Open Source Program? In order to be accepted into the GitLab for Open Source Program, applicants must:
- Use OSI-approved licenses for their projects: Every project in the applying namespace must be published under an OSI-approved open source license. [2]
- Not seek profit: An organization can accept donations to sustain its work, but it can’t seek to make a profit by selling services, by charging for enhancements or add-ons, or by other means.
- Be publicly visible: Both the applicant's GitLab.com group or self-managed instance and source code must be publicly visible and publicly available.
Thanks for sharing this tool to help clean up the Git history. Please be aware that it will rewrite the history, which could be very impactful to existing branches, merge requests and local clones. A similar approach is described in the documentation: https://docs.gitlab.com/ee/user/project/repository/reducing_...
Seems like a buried lede here is that limits also now apply to paid accounts. Just checked my team’s name space: we have 700GB of storage used, and gitlab is going to start charging us $0.50/month/GB for everything in excess of 50 GB. On top of the hundreds of $/month we’re already paying in per-seat pricing. That seems absurdly expensive.
Thanks for your feedback. I'd suggest starting with an analysis to identify which type of storage is consuming the most. Maybe CI/CD job artifacts are kept forever and need an expiration configuration [0], or container images in the registry need cleanup policies [1]. The documentation linked in the FAQ provides more analysis tips and guides to help. [2]
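As a concrete illustration of the expiration configuration, something like this in .gitlab-ci.yml keeps job artifacts for a bounded time (the job name, script and duration are illustrative):

build:
  stage: build
  script:
    - make dist
  artifacts:
    paths:
      - dist/
    expire_in: 2 weeks   # artifacts from this job are removed automatically after two weeks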
If you need additional help with analyzing the storage usage, please contact the GitLab support team. You can also post cleanup questions on our community forum. [3]
I'm assuming this is only for the repos they host and not the self-hosted solution. It's insane that anyone uploads terabytes of data to GitLab - is there an actual valid use case that isn't illegal content or a weird backup choice? Is there some big-ass GIS open source project out there that could use the attention of GitLab before they nuke some vital data somehow?
The quota includes not just the git repo, but also other services that gitlab offers. In particular the container and package registries are easy to setup in a way that accumulates a lot of data (e.g. by building and tagging a docker image in every commit and not cleaning up old images).
In most cases this will be a misconfiguration, and all the quota enforcement does is force people to configure their projects a bit more carefully.
As far as I know, the quota itself isn't new, it just wasn't enforced in the past.
GitHub limits the package registry to 500M for free accounts, so 5G doesn't seem so bad. Git repos themselves usually don't take up that much space; all my 135 projects on GitHub are ~250M combined (quick count, could be off a bit, but roughly on that order). Even larger repos at $dayjob with daily commits typically take up a few hundred M at the most.
Basically you need to either be in the top 0.1% of highly active accounts or do something specific that requires a lot of disk space, but for most people 5G doesn't seem so unreasonable.
I'm actually somewhat curious how much of an impact this will actually have - I think I've only seen a 5 GB+ repo once or twice and they were not really source code, but mildly "abusing" github CDN/releases for downloads.
At least gitlab is not deleting any data, just rejecting pushes if you're over the limit.
Anybody found out which projects on gitlab exceed the 45 TB limit?
I'm curious what kind of project would even need such a repository size. From a distant view this sounds like heavily mismanaged build artifacts in the project's git history; or abused storage for free CDN of video data or similar.
Maybe a super active repository with a large build matrix? For instance, a repository with 10,000 commits, 25 artifacts per build, and 200 MB/artifact will take 50 TB. It is still ridiculous though.
This doesn't affect me, but a better way to handle this would be to sell extra storage at, say, double GitLab's cost. Digital Ocean sells 250 GB object storage at $5/month and $0.02/GB beyond that.
Yeah that really hurts. A single Unity project can easily reach that in a short amount of time. When I get that warning email in a few weeks, I'll be shopping around.
So, we either go to Github, where our licenses are abused for their shitty ML.
Or we pay $20/month to Gitlab. And I can't figure out how the quotas will intersect with "professional", if at all.
For us Open Source devs, neither is a good option. Although I have heard good things about sr.ht / sourcehut. And for the service, it appears to be fair https://sourcehut.org/pricing/
Why do you need more than 5gb for a git repo? IMHO, any repo above 100mb should have an exceptional reason for being so big, and if it's just code a normal repo is more like 1 to 10 mb in size, max.
You don't even need to pay gitlab, free tier can do for most stuff and there's a generous sponsorship for Open Source.
I don't get all the hate Gitlab is receiving these days.
I do hardware hacking, presentations, code, pictures, and full media to reproduce what I've shared.
FreeCAD files get big. And to accommodate easy printing, STLs are also needed.
KiCAD for board layout and schematics can also get larger. And remember, these also have 3d board components too.
Presentations are naturally larger.
Full high-rez pictures eat storage like you wouldn't believe.
Code is small, thankfully.
One such device I have created is hovering around 4.5GB for a full reproduction of the current snapshot. And if you do the command to pull the whole history (and not current), it's around 10GB. And, I'm not sure if GL is counting the whole history, or the current? And are they pruning old after a certain date?
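(For reference, a rough way to compare the two locally, where <repo-url> stands for the repository URL and the first command is run inside an existing clone:)

$ git count-objects -vH          # size of the full local history in .git
$ git clone --depth 1 <repo-url> tip-only
$ du -sh tip-only                # roughly the size of just the current snapshot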
> And, I'm not sure if GL is counting the whole history, or the current? And are they pruning old after a certain date?
GitLab stores the full Git history, with every data revision reachable as a commit. To reduce repository storage [0], older Git commits can be pruned, but this rewrites the Git history and is an invasive change [1]. To store larger files, it is recommended to use Git LFS [2]. Alternatively, you can use the generic package registry to store data [3]; a sketch follows below.
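As a rough sketch of the generic package registry route, uploading a file is a single authenticated PUT (project ID, token, package name, version and file name are all placeholders):

$ curl --header "PRIVATE-TOKEN: $TOKEN" \
    --upload-file ./photos.zip \
    "https://gitlab.com/api/v4/projects/$PROJECT_ID/packages/generic/media/1.0.0/photos.zip"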
DefinitelyTyped/DefinitelyTyped:
$ git clone https://github.com/DefinitelyTyped/DefinitelyTyped
$ du -sh DefinitelyTyped/.git
850M DefinitelyTyped/.git
home-assistant/core:
$ git clone https://github.com/home-assistant/core
$ du -sh core/.git
380M core/.git
The above also only takes repo size into account, while the GitLab limit applies to everything, including release artifacts, CI build artifacts, hosted containers, etc., many of which GitLab currently provides poor or even no support for purging.
I don’t think using the Linux kernel, which is the OG repo that git was created for, is a fair comparison. Though, I guess since it’s barely 3GB, it’s proof that you almost never need >5GB.
It's not just Git. Gitlab hosts a container registry for every repo, so if you build and store a new container image regularly then you could easily top 5GB.
Git works perfectly fine without a web interface like that. It's not popular nowadays, but you can just keep the primary repository on your own server and merge patches by email.
Except that not all contributors to a project signed the ToS, since you can upload an existing project with existing non-GitHub contributors. One person uploading a project does not override or change the license of that project. So MS is already not relying on the ToS for legal protection, which means they feel they can legally train on any OSS project.
Well, at least in the US training an AI will probably fall under Fair Use. In the EU there is an explicit copyright exception for data mining. So I don't think there's a legal obligation for Microsoft to only train within the bounds of public GitHub repos.
Hi there, in another reply I mentioned GitLab for Open Source provides Ultimate features and higher limits to Open Source projects for free: https://news.ycombinator.com/item?id=32387621
Yes, and even the Ultimate plan limits will be significantly impacted. If they are not already self hosting now is a good time to look into it or explore other lightweight options like Gitea.
Yup, it's looking to be the only platform where projects can realistically grow.
But after Copilot I still don't want to use it; on the contrary, I'm switching to GitLab, because all the free stuff on GitHub is just unrealistic and is there to play the long game.
Also, I don't want to invest my time and effort into a proprietary platform like GitHub Actions, whereas GitLab CI and most other core features of GitLab are open source.
If GitLab hadn't added the 5-person limit per group, most people would be fine, but now there's the upcoming bandwidth limit, the 5-person limit, and many others.
It's almost impossible to grow on the platform except with the OSS program, which requires a legal agreement with each org member.
Reading between the lines, it also says GitLab is going to enforce a 10GB limit on paid tiers.
> Namespaces on a GitLab SaaS paid tier (Premium and Ultimate) have a storage limit on their project repositories. A project’s repository has a storage quota of 10 GB.
Even though it's not mentioned as a change or in the timeline, that limit does not currently exist.
Playing devil's advocate: sometimes your build is non-reproducible, annoying to build, or both (for instance, needing a proprietary tool which can only run on a particular developer's laptop because the license is tied to that particular hardware, and which crashes half of the time for no particular reason). Keeping the build artifacts in the repository means you can reproducibly obtain that exact artifact, even years into the future.
With this change, the 5 user limit[1], and original intent to delete dormant repositories[2][3], it seems as though GitLab is no longer able to support the free side of its business. GitLab has been touted as more OSS-friendly than GitHub, but a large part of the OSS ecosystem depends on free repositories. With these changes and this trajectory, I can't see myself putting another OSS project on GitLab.
It's a shame it's come to this, but I'm confident GitLab didn't make this choice lightly. It must be done in order for them to stay afloat.
Thank you GitLab team for your efforts. I hope you guys are successful in your future endeavors.
I think a lot of the problem is how they communicated it. It isn't like "Well hey, we have a problem here"; it's just "Hey, here's this super complicated solution to a problem you didn't know existed, and you can take it or leave it".
I also posted about issue trackers on gitlab.com not allowing search without signing in a while back: https://news.ycombinator.com/item?id=32252501