Gitlab Critical Security Release (about.gitlab.com)
260 points by sidcool on Feb 26, 2022 | 147 comments



I still wish they would accommodate security-conscious users who avoid running JavaScript by practicing progressive enhancement. IIRC all of the tickets about it have been rejected, though. I note the UK government uses this web development methodology.

https://en.wikipedia.org/wiki/Progressive_enhancement
https://gdstechnology.blog.gov.uk/2016/09/19/why-we-use-prog...
https://news.ycombinator.com/item?id=12538144


Gitlab's refusal to get their act together over this is why I stopped sending them money.

It's ridiculous.

Almost everything that doesn't require logging in works on GitHub without JavaScript. There is no excuse for GitLab's situation here.


I think the excuse is economic. That's a pretty big investment of time to please what is probably a very small subset of their users. That JS isn't exactly hostile.


And yet GitHub and Gitea had no trouble making this "big" investment?

I call BS. Overreliance on JS is an architectural flaw, not a cost-saving measure.


> Overreliance on JS is an architectural flaw, not a cost-saving measure.

While I (somewhat) agree, I don't think this is a charitable take.

Relying on heavy JS libraries with a large attack surface would be an architectural flaw. Using JS for your front end is an architectural choice. Probably not a choice that you approve of, but one that they've made nonetheless.

Of course, one can talk of a possible middle ground in the form of interesting projects like LibreJS, but those haven't exactly gained much popularity either, unfortunately: https://www.gnu.org/software/librejs/

Sadly, the people who actually turn off JS (and the reasons for doing so, as well as any sort of developer advocacy around it) are a minority [1][2], and are therefore largely ignored and unheard, even if the arguments are sometimes pretty sound (especially things like battery usage).

That position has largely lost out, and I guess most of us now just need to deal with the consequences in our daily lives. Using JS frameworks can also just be easier, but even when it isn't (e.g. React and Redux), we just have to bite the bullet and deal with it.

Sources (though there are probably better ones):

  [1] https://w3techs.com/technologies/details/cp-javascript
  [2] https://stackoverflow.com/questions/9478737/browser-statistics-on-javascript-disabled
Curiously, I actually migrated to Gitea (and Nexus/Drone CI) for slightly different reasons, about which I wrote on my blog: https://blog.kronis.dev/articles/goodbye-gitlab-hello-gitea-...


> Overreliance on JS is an architectural flaw

I don’t buy this. There are lots of valuable things you can do in a browser with only JavaScript.


Jesus, the number of remotely exploitable bugs in GitLab over the last few months is astonishing. At this point it's madness to even consider running a publicly-reachable GitLab instance...

And that's actually pretty sad, because Gitlab is the only open-source alternative that can keep up with Github feature-wise.


> Gitlab is the only open-source alternative that can keep up with Github feature-wise.

I think it's a bit the opposite: GitHub seems to have started adding most of these features as a response to competition with GitLab. Not that GitHub hasn't pulled ahead in some ways now.


We've talked about this a lot internally. They make a big deal about releasing on the 22nd of every month, but they usually have to turn around and release a patch shortly thereafter. 14.8 is a bit of an extreme: 14.8.0 on the 22nd, 14.8.1 (bug fix) on the 23rd, then 14.8.2 (security) on the 25th.

We'd personally prefer to see a better release when it's done, vs. them keeping this farce of a 'release streak' going and then asking customers to install another upgrade within days.


Seems like you could just wait a few weeks before upgrading to a new release?


When I maintained a GitLab server I did exactly this, upgrading almost a month behind their schedule.


The problem with this approach is being vulnerable to major zero-days (like the ~30k-server hack in Nov 2021) every month.


I've proposed that other customers who want fewer releases track one minor version behind for the experience you are looking for. You stay within security-fix range and up to date, and you jump to the latest patch release (and none should follow unless it's a security release).


I always ran 1 (sometimes 2) point releases behind and never had an issue upgrading. Omnibus on Ubuntu.


I know Gitlab takes security seriously and I think part of why we hear so much about it is because they're so transparent.


That's the rub, isn't it? I feel some of the least secure places are the ones which never mention (or realize) that they have a security problem.

I don't have any particular opinion of GitLab, but it does seem that acknowledging fault is more valuable than the alternative. If I were to attack a service, I'd probably tend to avoid the one that actively updates its security regularly.


> I feel some of the least secure places are the ones which never mention (or realize) that they have a security problem.

It is an age-old conundrum that people seem to struggle with a lot. As part of my previous profession, I've security-reviewed hundreds of open-source libraries. They fall into three categories:

1. The libraries/applications that either have a really security-conscious developer behind them or had a rocky enough past that they were given an extraordinary amount of attention security-wise. These account for about 1% of software.

2. The "normal" libraries/applications that nobody care about with regards to security. As long as the code works everyone keeps using it. They are often insecure by default, but the lack of attention by SME means they wont be judged. They account for 80% of software.

3. The horrid ones. Built-in code execution as a feature. Authentication systems with more critical bugs than you can possibly imagine. We never talk about these because... well, we could spend months on polishing a turd. Nobody dares to publicly speak up and say "don't ever use X, Y and Z" for fear of the repercussions. They account for the rest of software.

GitLab used to be in category 2, but moved into category 1 about two years ago when security professionals started giving it attention.

As to your feeling: 99% of software has security issues, some worse than others. We rarely talk about them.


How many people would even want to host a public Gitlab instance? Gitlab.com is free, works well, and even gives you some free CI minutes.

I’ve been running a private instance for about a year and am absolutely in love with it. Gitlab CI is the killer feature IMO, and self hosting it means I never have to worry about usage limits.

But if all you need is basic git hosting with an issue tracker, I don’t see a reason to use Gitlab over something like Gitea.


I run a Gitlab instance. It wouldn't be a pain except that user spam is nonstop.

How does gitlab.com deal with this? Or do they just put up with rando users signing up and spamming the snippets/issues/etc.?


GitLab employee here.

You can disable signups, or require all users to be approved by an administrator, if that works for your instance. https://docs.gitlab.com/ee/user/admin_area/settings/sign_up_... There are more ways, like limiting signups to specific domains.

Future spam-detection ideas are shared in https://news.ycombinator.com/item?id=30479511


Thanks.

> You can disable signups,

I want potential GSoC participants to be able to sign up.

> or require all users to be approved by an administrator, if that works for your instance.

I don't have time to hand-separate the Indonesian casino spammers from the potential GSoC participants. And I do mean "by hand" -- the GitLab UI requires me to click a button to open up a secondary menu, then choose add or delete, then wait for the user screen to reload.

At least when sifting through an email spam folder back in the 90s I could press the delete button multiple times in a row. Even that would be a relatively usable solution.


Thanks for the additional context. Agreed, manually approving and filtering is not efficient here. Spamcheck, suggested in https://news.ycombinator.com/item?id=30480296, should be the path forward.


The API is good enough to write some Python code that does this way faster. Some autoclassification based on keywords helps a lot too.

I made some scripts to do this, but would have to extract them from beside the user data in the repo.
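
For illustration, a rough sketch of that idea with curl and jq against the GitLab REST API (the host, token variable and keyword list here are placeholders - the commenter's actual scripts were Python - so treat it as a starting point, not a drop-in tool):

  HOST=https://gitlab.example.com
  # newest accounts first; admin token required
  curl -s -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
    "$HOST/api/v4/users?order_by=id&sort=desc&per_page=100" |
    jq -r '.[] | [.id, .username, (.bio // "")] | @tsv' |
    grep -Ei 'casino|slots|betting' |
    cut -f1 |
    while read -r id; do
      echo "blocking user $id"
      curl -s -X POST -H "PRIVATE-TOKEN: $GITLAB_TOKEN" "$HOST/api/v4/users/$id/block"
    done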


GitLab employee here.

We have internal tooling that we're working on incorporating into GitLab itself to help with this: https://about.gitlab.com/blog/2021/08/19/introducing-spamche....


Hello, do you have any plans/desire to support ActivityPub federation on Gitlab? It's a killer-feature-to-come for Gitea and certainly would help dealing with spam, as admins could allowlist trustworthy instances on an opt-in basis, enabling easy cooperation across related communities.


I don't think it's currently scheduled: https://gitlab.com/gitlab-org/gitlab/-/issues/30672


Yeah, I saw that issue two years ago. It's sad nothing has moved here, whereas the forgefriends project (ex-fedeproxy, not directly related to forgefed) has been super active in the past year (check out their monthly reports) in this area of forge interop.

EDIT: someone on that issue summarized the problem pretty well:

> It's really annoying how fragmented GitLab is right now. I have a dozen accounts on a dozen instances. This feature, combined with OAuth login to other instances, would make it like there is one big GitLab we all use!


Sorry, I'm not sure I understand. How does that help "dealing with spam?"


Because once you have federation you can either use an operator/domain web-of-trust, or you can use allow/denylists on your instance. That's how email and XMPP are kept mostly spam-free (on a self-hosted server most spam - if not all - I receive is from gmail addresses, not from self-hosted servers, which are easily denylisted if they start sending spam).

In particular, if an instance or specific repository concerns only people from specific projects/instances, it would be easy to allowlist those specific instances and not have to deal with spam at all.


> on a self-hosted server most spam - if not all - I receive is from gmail addresses, not from self-hosted servers, which are easily denylisted if they start sending spam

And most - if not all - potential GSoC contributors are from gmail addresses. So again, I don't understand how this could be a general solution to spam.


I don't think you're getting my point. I'm not advocating denylisting gmail.com because it produces spam (although this has tempted me on more than one occasion); I'm saying that fighting spam in federated environments has decades of experience behind it, with various techniques that work well. Open nodes (e.g. remailers) have a terrible reputation and are denylisted pretty much everywhere, but specific communities/servers can maintain a decent reputation as long as they have some form of moderation/co-optation. By opting into federation, GitLab could support various advanced workflows depending on your threat model:

- a new organization using your project? maybe grant their whole gitlab instance "issues" read/write access to the project

- publishing FLOSS in a "community" setting where random people submitting contributions is not expected? maybe we can check the PGP WoT before deciding whether to accept that PR

- running a federation of organizations, some of whom may run their own instance? allowlist all the instances so they can interact across instances

- running a public forge like gitlab.com, codeberg.org, or chapril.org? maybe maintain an allowlist of servers who ask for it and pledge to fight spam

- feel adventurous? set up an entirely public instance and help catch spam and report it to denylists

All of this is already possible at the email level, but, as you pointed out, pointless there, since the trustworthiness of the mail server is not correlated with the trustworthiness of the forge.


Sounds like a potential solution.

When will it ship?


Looks like it shipped in 14.8 (4 days ago)

https://docs.gitlab.com/ee/user/admin_area/reporting/spamche...


Wait a sec... this is from the feature request[1]:

> Just because I don't think I said it explicitly anywhere above: Because we are using an obfuscated, non-free component (the preprocessor), we can't include spamcheck in CE (users of CE expect no proprietary code to be included in the package), but only in EE.

So... is it available in the current version of gitlab-ce or not? I don't want to waste time trying to get it running only to find out you've only made it available for enterprise editions and gitlab.com.

1: https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/6259


Non-free obfuscated code cannot be included in the community edition unfortunately. https://gitlab.com/gitlab-org/gitlab-foss/-/blob/master/LICE... The architecture in https://gitlab.com/gitlab-org/spamcheck#architecture-diagram shows the spam detection, where the ML training models remain obfuscated to not give spammers an advantage.

You can run EE without a license; it provides the same features as CE. Maybe that is an option for you: https://docs.gitlab.com/ee/update/package/convert_to_ee.html I've created an MR to help clarify the docs: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/81751


If I didn't care about the open source license I'd simply use github. (Which, unfortunately, may be the only solution that doesn't continue eating more and more of my time.)

Anyhow, this sounds like a death knell for gitlab-ce. My GSoC use case isn't fringe (there are 100s of GSoC orgs), and Gitlab wouldn't have spent money on the ML approach for EE if it weren't generally important.


Oh wow, thanks!

I'll have a look.



> only open-source alternative that can keep up with Github feature-wise.

Imo they're still leading Github in some areas. My favorite Gitlab exclusive feature is a very simple one: folders inside of projects. Usually Github has followed a pattern of catching up to Gitlab, so I think it's probably only a matter of time before they add this feature (if they haven't already, it's been a while since I checked).

Actually I take that back, my favorite feature is that I can host my own private instance. Gitlab gets even better if you have admin access to the system. And that's one feature Github will probably take a while to copy, if ever.


> And that's one feature Github will probably take a while to copy, if ever.

Eh, you've been able to self-host GitHub for longer than GitLab has been around; it's just not free to do so.


> it’s just not free to do so.

My general rule of thumb is that if a product lists its price as "Contact our sales team", then it's effectively unavailable to me as an individual. So I guess if we're talking about what exactly the Github "feature" is that MS won't copy, it wouldn't be that there's an enterprise option, but that Gitlab is free and open source and practical for me as an individual to install. Obviously if you pay Microsoft enough money they'll do whatever you want (up to and including, I guess, buying Github itself as a company from them. Always has been an option, it's just not free to do so.)


Pricing isn’t hidden: https://github.com/pricing

$21 per user per month.


Well that confirms I can't afford it!

Interestingly, this page isn't linked to from https://github.com/enterprise (except in the footer), where the calls to action are to either "start a free trial" or to "contact sales".


It's also prominently displayed in the header.


Only if you're not logged in. If you're logged in it's not there at all.

Why does this feel like people are trying to turn this into some sort of gotcha? It wasn't even my main point. The point is that GitHub enterprise is not an option for individuals.


Hence the title "Enterprise"...


Exactly. But the original argument was "well ackshually, you can self host Github, you just have to pay for it". Technically true, practically not.


While not practical for you, it's not priced out of reach for every single small team or individual user. If I recall correctly, there's still no minimum seat requirement. You're moving the goalposts from your original comment :).

Original comment from you:

> Actually I take that back, my favorite feature is that I can host my own private instance. Gitlab gets even better if you have admin access to the system. And that's one feature Github will probably take a while to copy, if ever.


Why are people being so pedantic about this? What is going on here?

I never claimed it's "priced out for every single small team or individual user ever". You even quoted it right there:

> my favorite feature is that I can host my own private instance.

I have 33 people on my personal GitLab instance. It would cost me over $8k per year to run a GitHub Enterprise instance with that many users, and I don't have the kind of money to do that. Until I can do that with GitHub for $0, my point stands.


Yeah, enterprise pricing typically works that way. It's designed for businesses, not individuals. Smaller teams pay 3.33 cents a month based on the pricing page that was pointed out to you and individuals can sign up for free. I also find the copy claim to be entirely ironic.

They're both commercial companies chasing dollars. The OSS sales model is simple: get companies/people hooked on OSS, have them reach the boundaries of the OSS product, and then sell them a commercial license. Most people can and will just move to the better enterprise offering, which is probably why I see so little GitLab out in the wild. Based on what I'm seeing/hearing from MSFT, GitHub is basically a free product that now comes with your MSFT enterprise agreement. If you agree to spend enough Azure or Visual Studio dollars, MSFT agrees to pony up GitHub licenses. GitLab can't compete on that level, and the offering alone isn't compelling enough for your average Fortune 500 when it comes to spending money.


I had to upgrade a slightly older, internal-only GitLab instance for a company a while ago. I was shocked by how many upgrade path dependency problems there were. Ended up having to roll back to a backup and do the upgrade in a stepwise fashion through various intermediate steps according to their long document on the process, including a fair number of manual commands and migrations and a lot of Googling for obscure error messages.

I enjoy GitLab and it's the only real option for open self-hosting, but it made me miss the days of having GitHub Enterprise.


Yeah. I just booted up an old VM to update it. It was on v13 and told me I needed to upgrade to 14.0 first, then 14.8. I did that. Now it's totally broken with crappy error messages and I'll get to waste a few hours today trying to fix it.

I switched to Gitea a long time ago and have no regrets.


Literally going through this right now. Performing the upgrade from 13 to 14, following the required upgrade path.

In the past, you could go from your current version -> the latest Z (patch) release -> the X.0 release of the next major -> the latest X.Y.Z of that major. As long as gitlab completed the db migrations and came up, you were (relatively) ok to continue to the next upgrade.

Turns out 14 introduces a brand new upgrade failure case. Starting with 14, upgrades can include async db migrations that must complete before you continue upgrading. But once you start upgrading, it's too late, things are hosed, so now I'm falling back to a backup and starting over.
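
For anyone else on this path, the overall shape on an Omnibus/apt install looks roughly like this (version numbers illustrative; the background-migration check is the one suggested in GitLab's upgrade docs - a sketch, not a paste-ready script):

  # hop through the documented upgrade stops one at a time
  for v in 13.8.8 13.12.15 14.0.12 14.8.2; do
    apt-get install -y gitlab-ce=${v}-ce.0
    # on 14.x, wait for background migrations to drain before the next hop
    while [ "$(gitlab-rails runner 'puts Gitlab::BackgroundMigration.remaining')" != "0" ]; do
      sleep 60
    done
  done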


I am not even sure most people would need all the features. Nice to have, yes, but I have been running Gogs for at least two years and never felt I needed more. This is for personal usage, though.


CI + runners + the embedded registry is a really nice add-on that I couldn't do without.


Agreed! However, the amount of maintenance you need to do because of the vulnerabilities is disappointing, especially given how contrived the upgrade paths are: https://docs.gitlab.com/ee/update/#upgrade-paths

Thus, it might be a better idea to look into a less popular stack that's less likely to be targeted, simply because it isn't such a juicy target.

For example:

  Code: Gitea/Gogs/GitBucket
  CI: Drone/Jenkins (okay there are probably better options than Jenkins, to be honest)
  Registry: Nexus/Artifactory (not just for containers, they support most formats and have better control over cleanup of old data so you don't have to schedule GitLab cleanup yourself)
Of course, at the end of the day all of those still have an attack surface, so I'm leaning more and more into the camp of exposing nothing publicly, since it's a losing battle.


What maintenance?! I just bump the docker-compose.yml version numbers and stop/start the service. It's very painless... My cellphone has more frequent updates than GitLab does.


If you do that with minor versions, you should generally be fine. When you need to upgrade across major versions, you'll most likely be met with the following if you haven't been following the updates closely:

> It seems you are upgrading from major version X to major version Y.

> It is required to upgrade to the latest Y.0.x version first before proceeding.

> Please follow the upgrade documentation at https://docs.gitlab.com/ee/update/index.html#upgrading-to-a-...

In addition to that, you should NEVER just bump versions without having backups (which you've hopefully considered), so there is probably another step in there: either validating that your latest automatic backups work, or just manually copying the current GitLab data directory into another folder in the case of an Omnibus install, or doing the same for all components in the more distributed installation type.

Disclaimer: this has little to do with GitLab in particular, but is something you should consider with any and all software packages that you upgrade, especially the kind with non-trivial dependencies and data storage mechanisms, like PostgreSQL. Of course, you can always dump the DB, but it's easier to back up everything else as well by taking the instances offline and making copies of all container volumes/bind mounts.


* I never skip (minor) versions.

* I have automated backups created every two days, stored in S3; I've done a full restore twice in 5+ years of uptime.

* I run Gitlab at home and at work.

None of the points touch on a maintenance burden ... Just saying. Skipping versions while updating any software is just being a lazy sysadmin and praying it works. Typically, skipping major versions during upgrades comes with breaking changes, so operator beware.


> Skipping versions while updating any software is just being a lazy sysadmin and praying it works. Typically, skipping major versions during upgrades comes with breaking changes, so operator beware.

It's nice that GitLab actually prevents you from doing that, gives you messages that it's unsupported, and directs you to their documentation, which describes the supported upgrade paths...

However, at the same time one cannot help but wonder why you can't go from version #1 to version #999 in one go. Most of the software at my day job (at least the software that I've written) absolutely can do that, since the DB migrations are fully automated - even if I had to push back a lot against other devs going: "Well, it would be easier just to tell the clients who are running this software to do X manually when going from version Y to Z."

But GitLab's updates are largely automated (the Chef scripts and PostgreSQL migrations etc.); it's just that for some reason they either don't include all of them or require that you visit certain key points throughout the process, which cannot be skipped (e.g. certain milestone releases, as described in their docs).

Of course, I acknowledge that it's extremely hard to sustain backwards compatibility, and I've seen numerous projects start out that way only for the devs to give up on the idea at the first sign of difficulty, since it's not like they care much for it and it doesn't always lead to a clear value add - it's a nice-to-have, and they won't earn any less of a salary for making some ops person's life harder down the line.

I also have automated backups with BackupPC; however, I expect software to remain reasonably secure and stable without having to update that often - props to GitLab for disclosing the important releases, but I'm migrating over to Gitea for my personal needs as we speak, even if having someone else manage a GitLab install at work is still like having a superpower (with GitLab CI, GitLab Registry etc.).

I actually wrote an article about how really frequent updates cause problems and lots of churn: https://blog.kronis.dev/articles/never-update-anything (though the title is a bit tongue in cheek, as explained by the disclaimer at the top of the article).


Your db migrations may support updates from #1 to #99, but your OS does not directly support updates of MySQL 5 to MySQL 8 without issues. For example, there are plenty of deprecated my.cnf configuration values. Similarly, APT on Ubuntu will prompt you on how to handle a my.cnf that differs from the distribution release when upgrading versions. Oftentimes this is more painful than minor version updates.

I think the version milestones in GitLab are akin to dependency changes for self-hosted GitLab. An example is the GitLab v9 (?) upgrade to Postgres v11, I think; it was opt-in for a prior version of GitLab, then required at that version milestone. It's difficult to make db migration scripts for GitLab, as in your example, when they may depend on newer Postgres idioms not available in the legacy db version. So you can't simply support GitLab updates from version X to Y due to underlying dependency constraints...

Thanks for the insightful discourse.


> Your db migrations may support updates from #1 to #99, but your OS does not directly support updates of MySQL 5 to MySQL 8 without issues.

That's just the thing - more software out there should have a clear separation between the files needed to run it (binaries, other libraries), its configuration (either files, environment variables or a mix of both) and the data that's generated by it.

The binaries and libraries can easily have breaking changes and be incompatible with one another (essentially treat them as a blob that fits together, though dynamic linking muddies this). The configuration can also change, though it should be documented and the binaries should output warnings in the logs in such cases (like GitLab actually already does!). The data should have extra care taken to make it compatible between most versions, with at least forwards-only migrations available in all other cases (since backwards-compatible migrations are just too hard to do in practice).

Alas, I don't install most software on my servers anymore, merely Docker (or Podman, basically any OCI-compatible technology) containers with specific volumes or bind mounts for the persistent data. GitLab is pretty good in this regard with its Omnibus install, though there are certainly a few problems with it if you try to do too many updates or have a non-standard configuration.

I actually wrote more about it, and why I just migrated away from GitLab to Gitea, Sonatype Nexus and Drone CI, on my blog: https://blog.kronis.dev/articles/goodbye-gitlab-hello-gitea-...

Of course, I'll still use GitLab in my company, because there it's someone else's job to keep it running, with a hopefully appropriate amount of resources to keep it that way with minimal downtime and all the relevant updates. But at the same time, for certain circumstances (like my memory-constrained homelab setup), it makes sense to look into multiple lightweight integrated solutions.

You can actually find more information there about what broke for me while doing updates in particular: seemingly something cgroups-related, with Gitaly-related stuff not having the write permissions it needed inside the container, which later led to the embedded PostgreSQL failing catastrophically. In comparison, right now I just have Gitea for similar goals, which is a single binary that uses an SQLite database, as well as the other aforementioned tools for CI and storage of artefacts, which are similarly decoupled.

It's probably all about constraints, drawbacks and finding what works for you best!


> how contrived the upgrade paths are: https://docs.gitlab.com/ee/update/#upgrade-paths

Thanks for your feedback, agreed. I've created an issue https://gitlab.com/gitlab-org/gitlab/-/issues/353862 - please add additional thoughts and suggestions there as well. Thanks!


Well, I don't think there's actually that much that can be done here, since that page does contain adequate documentation and a linear example migration path:

  8.11.Z -> 8.12.0 -> 8.17.7 -> 9.5.10 -> 10.8.7 -> 11.11.8 -> 12.0.12 -> 12.1.17 -> 12.10.14 -> 13.0.14 -> 13.1.11 -> 13.8.8 -> 13.12.15 -> 14.0.12 -> latest 14.Y.Z
It's just that the process itself is troublesome, in that you can't just go from, let's say, 11.11.8 to 14.6.5 in one go and let all of the thousands of changes and migrations be applied automatically with no problems, as some software out there attempts to (with varying degrees of success).

Of course, it's probably not viable due to the significant changes that the actual application undergoes, and therefore one just needs to bite the bullet and deal with either constant small updates or fewer, longer updates for private instances.

But hey, thanks for creating the issue and best of luck!


Side question: is the gitlab docker registry really that useful?

The fact that any non-protected runner can push to it makes it useless for storing images to be used in other CI pipelines (unless I missed something, in which case I would be really glad).


The registry is fine for our use case, which is to manage all the dev artefacts on a private repo before CI might push a release to a production registry.

> any non-protected runner can push to it

My understanding is the job token inherits its permission set from the user causing the job to run. If the user has `write_registry` to a project (developer up), then the job does. Do you see more access than that?

Access can be limited to specific projects by setting a scope [0], but your description sounds like it's access within the project that is the issue.

0: https://gitlab.sam3.io/help/ci/jobs/ci_job_token#configure-t...


The whole point of having protected runners is to do things that developers are not allowed to do. If any developer can push images to the registry without any review/approval, and those images are used in other CI pipelines, that's a problem for us.

Having a separate production registry is good indeed, but for images to be used for CI itself, having something self-contained within gitlab would have been nice.


> it's madness to even consider running a publicly-reachable GitLab instance...

why would you want it to be a public instance?


That seems pretty reasonable for any open source project to want.


The dreaded CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:N strikes again.


Out of curiosity, what are some other noteworthy projects that were affected by this CVE?


This is a vulnerability scoring system; it indicates the severity level across a number of different metrics.

https://www.first.org/cvss/calculator/3.0
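
For the curious, the vector unpacks field by field. A throwaway decoder, sketched in shell (metric names per the CVSS v3.0 spec):

  vec='CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:N'
  echo "$vec" | tr '/' '\n' | while IFS=: read -r m v; do
    case "$m" in
      AV) echo "Attack Vector:       $v (N = Network)" ;;
      AC) echo "Attack Complexity:   $v (L = Low)" ;;
      PR) echo "Privileges Required: $v (L = Low, i.e. any logged-in user)" ;;
      UI) echo "User Interaction:    $v (N = None)" ;;
      S)  echo "Scope:               $v (C = Changed)" ;;
      C)  echo "Confidentiality:     $v (H = High impact)" ;;
      I)  echo "Integrity:           $v (H = High impact)" ;;
      A)  echo "Availability:        $v (N = None)" ;;
    esac
  done

In other words: reachable over the network, easy to exploit, needs only a low-privileged account and no user interaction, and compromises confidentiality and integrity beyond the vulnerable component itself.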


Got it - so it's a classification system; I wasn't aware of that.

Thanks for the reply, I learned something today. :)


If you're going to run GitLab, I would suggest using the Docker image and docker-compose to run it. Just did this upgrade: two commands, took 30s.
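
Presumably something like this, assuming a docker-compose.yml with the image pinned to a specific gitlab/gitlab-ce tag that you bump first:

  # after bumping the image tag in docker-compose.yml:
  docker-compose pull
  docker-compose up -d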


What are the system requirements for a personal setup? I mean

> 4GB RAM is the required minimum memory size and supports up to 500 users

How far down does it scale for just a handful of users, though? (Typical free VPSes are about 1GB, and are normally used to run other things as well.)

EDIT: found it, unfortunately not as much as I would have hoped (but not unexpected)

> Minimum 2GB of RAM + 1GB of SWAP

> You should be able to run GitLab with up to 5 developers with individual Git projects no larger than 100MB.

https://docs.gitlab.com/omnibus/settings/memory_constrained_...


It looks like gogs only requires 512MB. It might be a better choice for resource constrained environments.

https://github.com/gogs/gogs/issues/5487


Am I missing something?

In my experience, updating GitLab involves looking at either the Docker Hub page with the tags or that GitLab release page that I'm trying to find right now, and then painfully going from minor to minor.

If you update every week you're probably fine, but if you skip a version some database migrations might not work and then the real fun begins.


My only annoyance with GitLab on that front is the lack of tags for "14.8-ce" to enable use of watchtower to keep the minor/security patch up to date, like we have on other images.

I definitely do not want to use "latest"; in the past there have been updates or intervention required between major versions.


> We strongly recommend that all GitLab installations be upgraded to one of these versions immediately.

Sounds like an RCE vulnerability. Interesting recent fate of a Ukrainian company.


I doubt it's related. DZ left GitLab a while back, and even then his involvement had been limited for quite some time. Knowing GitLab and its history of security vulnerabilities, the timing is probably just a coincidence.

There has also never been any evidence of e.g. state actors sneaking in backdoors and what not. The code review process is quite extensive (probably a bit too much even), so it would be difficult to sneak in vulnerabilities deliberately. It's much more likely these issues are caused by growing complexity and the amount of functionality that GitLab supports.


> Knowing GitLab and its history of security vulnerabilities, the timing is probably just a coincidence.

You're probably right - unless Ukraine has more GL instances than most countries, in which case I'd be a little less likely to believe it's just a coincidence. Russia is deploying lots of attacks now, so regardless of where the software was developed, if it's used in Ukraine at all, it's more likely that we will find out about its vulnerabilities in the coming days.


It's only important when GitLab is used in government infrastructure, though. I imagine that most GitLab instances are civilian, in which case attacking them doesn't help Russia much.


Not if they're instances related to critical infrastructure, like utility software vendors. Or just generally trying to sabotage their economy by hurting commercial entities.


Because US gov operations are totally divorced from the civilian sector?


The US government is currently very much not involved. And sure, the Ukrainian government will have a GitLab somewhere, but their time is limited and GitLab does not seem like a great-value target.


Is code search still a hot mess for Gitlab?


I don't know if this is what you're asking, but they have an opt-in integration with Sourcegraph and it's handy since code-aware navigation is almost always more effective than search: https://docs.gitlab.com/ee/integration/sourcegraph.html#enab...

It's also unclear whether your bad experience was with their SaaS offering or with a self-hosted setup.


Are you using the proprietary global/advanced code search powered by Elasticsearch?


I reported a 2FA bypass vulnerability (a TOTP code could be reused), and it was marked as low severity and took them nearly a year and a half to fix.

They didn't even bother creating a CVE for it.

It doesn't seem like they take security very seriously.


Oh no, an attacker with the victim's credentials AND a valid TOTP code that was just used can access the victim's account...

Seems low priority to me too

(Calling that a 2FA bypass is really disingenuous)


Allowing reuse of a TOTP sounds exactly like a 2FA bypass to me.


A 2FA bypass would mean you can completely bypass the 2FA protections. This attack doesn't allow that - you still need to steal the OTP somehow. It's only notable in situations where you can steal OTPs, but for some reason only after they have been used. That does not seem like a likely scenario, so I'd say it is low priority.

Still, it's definitely an embarrassing flaw and probably trivial to fix, so taking over a year to fix it is not great.


I think the main point stands though, and the OP was spinning things quite hard with the whole “they don’t take security seriously.”

I mean okay, when I file a bug report and it's marked as low sev, it makes me salty too, but then I don't go on forums to spread FUD about the team.


* It was initially closed as "not applicable". I had to insist that it was a vulnerability.

* It was originally scheduled to be fixed within about 90 days, which was reasonable, but they kept delaying it more and more.

* They took 4 months to notify me that they've fixed it. That's 21 months in total from opening to closing it.

* They miscategorised the severity as low, when the exact same vulnerability elsewhere was rated medium. It's quite feasible for a determined attacker to set up a camera to record a monitor, and it doesn't require any special exploit code or tools. Exploiting it gives you access to the "crown jewels".

* They didn't open a CVE. Probably didn't issue a security bulletin to their customers, but I didn't check.

* They don't commit to fixing security issues in a timely manner.

* They didn't make the effort to fix the issue themselves; it was incidentally fixed when they eventually updated a dependency which had been unmaintained for years.

* The fix was as simple as pointing to a patched fork of the dependency (there was an unmerged PR), not something that requires more than a year to fix.


People like you are why companies hate bug bounty programs. Complaining about a bug marked low severity when your stated attack vector requires installing a physical camera.


Then again, how long does a fix realistically take for a security-relevant issue? Even if it's low or minor, more than a year is long.


Are you saying that a valid TOTP code can be reused within its validity period? What’s the proposed threat model here (how is an adversary using this to inflict harm)?

Given the engineering effort of tracking used tokens and the relatively low exploitability, it seems odd to generalize to the entire organization based on this.


There can be several threat models:

* Industrial espionage and state actors - somebody setting up a camera to record the monitor and reusing the token

* Phishing site - the token can be reused and the user then redirected to the original site. There wouldn't even be an error.

* Keylogger

* Somebody logging in while sharing the screen during a Zoom call

* Somebody standing over the shoulder, etc.

Considering this could be prevented with a single if statement, and how important 2FA is in protecting accounts, that's certainly my impression.

The password can be found out through other means (e.g. password reuse, phishing, a keylogger, etc.).

For comparison, the exact same vulnerability was rated as medium severity (5.3/10) - high impact, but difficult to exploit

https://nvd.nist.gov/vuln/detail/CVE-2015-7225


Really? A single if statement is all it would take? More hyperbole from you...

(How do you log used codes, check if a code was previously used, and clean up old used codes in a single if statement?)


It already recorded in the database when the TOTP was last used, but it allowed the same code to be reused during the grace period (30 seconds later).


Allowing authentication not only within the original time frame but one interval before and after is by design: https://en.wikipedia.org/wiki/Time-based_one-time_password#S... .


The security issue is with allowing reuse, not with allowing use in the previous and next time frames.


Not OP, but yes really.

Check the last login/session/whatever for that account and if it was within the period of the TOTP that was submitted, force a relog.
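
As a toy sketch of that "single if" (the real fix lives in GitLab's Ruby TOTP validation; this just shows the idea of burning the 30-second window after a successful login):

  step=$(( $(date +%s) / 30 ))                    # current TOTP time-step
  last=$(cat last_otp_step 2>/dev/null || echo -1)
  if [ "$step" -le "$last" ]; then
    echo "reject: a code from this window was already accepted" >&2
    exit 1
  fi
  # ...verify the submitted code itself here, then burn the window:
  echo "$step" > last_otp_step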


> Are you saying that a valid TOTP code can be reused within its validity period? What’s the proposed threat model here (how is an adversary using this to inflict harm)?

Allowing re-use violates the RFC:

   Note that a prover may send the same OTP inside a given time-step
   window multiple times to a verifier.  The verifier MUST NOT accept
   the second attempt of the OTP after the successful validation has
   been issued for the first OTP, which ensures one-time only use of an
   OTP.
* https://datatracker.ietf.org/doc/html/rfc6238#section-5

This is actually the only "MUST NOT" in the entire RFC (besides the definition of the term in §2).


I would think the "OTP" would be the giveaway, but hey, I'm no CTO.


Theoretically it means that if someone has your password and has MITM'd you or has a keylogger on you, they can log in as you in the 30-second window.

That said, someone with the level of access required to exploit this vulnerability isn't going to be stopped because GitLab patches it. There are plenty of other things they could do with that kind of access.


This is why threat vector analysis is so important, because in your case there is no additional vector. If someone has MITM'd you, they can just intercept your token before it reaches GitLab.

And to your other point, you're right, if your adversary already has a keylogger running on your device you're pretty much screwed in any case.


It enables phishing where you're actually logged in after submitting your credentials. You don't need a MITM from a network perspective.


Separately, if the attacker has the password and a MITM for the TOTP code, then this allows them to log in without being detected.


Valid TOTPs should still only work once when implemented well. And yes, it's not easy to exploit. The idea is that something like malware could sit and intercept a successful login, then initiate its own session by re-using the MFA code before it expires.


Yeah, but malware in that position can also just steal your session cookie.


Malware can also just steal your session cookie.


It takes about 30 minutes to set up your own git server. You can easily create your own git hooks to send email notifications on "git push", or to trigger CI or deployment. I get that GitLab is tremendously valuable for bigger teams, but smaller groups can easily do without.

This isn't the first time gitlab has security issues, and it won't be the last. It's just not worth it for us.

https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-...
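
For reference, the basic version of that is something like the following (assuming sshd is already running; the repo name, address and recipient are placeholders, and the hook needs a working local MTA and a chmod +x):

  # on the server: a dedicated user and a bare repo
  sudo useradd -m git
  sudo -u git git init --bare /home/git/project.git

  # contents of /home/git/project.git/hooks/post-receive -- mail on every push
  #!/bin/sh
  while read -r old new ref; do
    git log --oneline "$old..$new" | mail -s "push to $ref" team@example.com
  done

  # on each client:
  git remote add origin git@server.example.com:project.git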


You could make this argument about basically any service, and it comes up on HN without fail anytime something like this happens: "Just self-host, it isn't hard."

And sure - self-hosting might not be terribly complicated, but that's one more thing you've now got to worry about and keep track of. One more thing that you've got to manage and keep up to date. There is a lot of value in convenience, ease of access, and simple scalability.


This service update only applies to people who self-host. Servers managed by GitLab are patched by GitLab staff. If you're going to self-host, I think you should at least *consider* self-hosting something with a tiny attack surface instead of a complicated product with many dependencies and "casual" security practices.


You're in a thread about an update to the self-hosted Gitlab offering. They aren't saying "use this instead of the service", but "if you self-host you can also self-host a simpler thing", so your response really doesn't fit the topic at hand.


There is a point where "simpler" is so laughably inadequate when compared to the alternative that dismissing the comment makes sense. You lose code review, integrated CI, and will need quite a lot of time to manage this entire solution anyway.


If you want to make a different argument against the comment, feel free to do so, but I'm not sure how that's relevant to my comment or the comment I replied to?


I'd love to see the same effort that goes into avoiding managing a service go into contributing better tooling to that service, or to the OS package/configuration, so it's not 30 minutes of work. I'd love it if it worked more like this: apt install myservice and maybe one or two tweaks/decisions to configure for use. A lot of the issues here are that dynamic-language-based services (Python/Ruby/JS) don't play nice with OS-provided interpreters. This is a problem that really needs to be solved if we'd like to continue to have nice things.


The tooling is already there. All the "not playing nice" issues are largely solved by using Docker. Installing a local Docker instance of GitLab is literally four commands: mkdir three directories and a docker run to get it started.

See https://docs.gitlab.com/ee/install/docker.html
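
Roughly, per the linked docs (paths and ports illustrative):

  sudo mkdir -p /srv/gitlab/config /srv/gitlab/logs /srv/gitlab/data
  docker run -d --name gitlab --restart always \
    -p 80:80 -p 443:443 -p 2222:22 \
    -v /srv/gitlab/config:/etc/gitlab \
    -v /srv/gitlab/logs:/var/log/gitlab \
    -v /srv/gitlab/data:/var/opt/gitlab \
    gitlab/gitlab-ce:latest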


> All the "not playing nice" issues are largely solved by using Docker

Not really. Now any integration has to be done inside the container or with other containers... which adds lots of extra steps. I do agree using Docker makes a simple deployment easier, though.


In this case, though, GitLab is a self-hosted product.


It takes just 30 minutes to clean your office. You can go to Home Depot and buy all the supplies for cheaper than what it costs for a janitor in a month.

Accept that your time is limited and valuable. You might drive a higher ROI by focusing your time on the core business rather than on ancillary tasks.


Sure... and yet I would happily spend an entire day learning about git to avoid 30 minutes cleaning: the premise that time is fungible is simply incorrect.

(Regardless: this "30 minutes" is of course an estimate, and isn't "30 minutes longer". I do demos in college classes on how to set up personal git servers--given your existing ssh and web server setup--with nothing more than a git init --bare and renaming a single default hook, and so setting up GitLab to me is definitely going to take way more time as it probably is going to require learning some new config file format and figuring out if it has some kind of database dependency and the such. But the reality is that time is NOT fungible so GitLab might be worth setting up no matter how hard it is vs. using git as intended.)
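
Concretely, the demo amounts to something like this (assuming SSH access and a web server already serving ~/public_html; the renamed hook just runs git update-server-info, which is what makes plain-HTTP clones work):

  git init --bare ~/public_html/project.git
  mv ~/public_html/project.git/hooks/post-update.sample \
     ~/public_html/project.git/hooks/post-update
  # push over ssh:       git remote add origin user@host:public_html/project.git
  # read-only over http: git clone https://host/~user/project.git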


It only takes 15 minutes to order a new prefabricated office!


For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.


Exposed without encryption; at least use SFTP and rclone mounts in this theoretical use-case scenario.


I'm pretty sure this is a reference to the famous Dropbox comment[0], not an actual proposal.

[0] https://news.ycombinator.com/item?id=9224


This comment drastically undervalues the features that both Gitlab and GitHub provide, even for very small teams.


Oh geez, I've been hand-wringing about which self-hosted git thing to put on my personal VPS to replace Phabricator, and this is such an obvious answer. Thanks!

EDIT: replace isn't the right word for most phab features (code review, issue tracking, wiki); I just mean replace for repo sharing with push and pull.


Personally I hope Phorge gets enough momentum to live on as a community project https://we.phorge.it/source/phorge/


My team went to GitLab after Phabricator maybe two years ago. Works great here.


Thanks for sharing :) GitLab team member here.

Phabricator has many great features which inspired GitLab. I've shared some of them in this blog post: https://about.gitlab.com/blog/2021/08/13/five-great-phabrica...


> It takes about 30 minutes to set up your own git server.

It takes about 50 seconds to set up and run a local Docker instance of GitLab CE.

The value for the time and money (it's free) is insane. You don't just get git, but a fully featured issue management system, a nice UI, CI/CD integration, Kubernetes hooks, etc.

If you want to run your own repo I can't think of anything which is even remotely as valuable. You can even throw it on a Raspberry Pi and have a home-lab code repository setup that would have made major corporations envious just a few years ago.

If you're using just plain command line git to manage your code projects you're lingering in the past, which is perfectly fine, but these days you can do way better for less work. It's as if you were using bvi (binary vi) to edit your photos when GIMP is freely available. (I hope I get extra credit for not using a car analogy :)

To be clear I'm not dissing command line git - I use it every day. But if you're managing multiple code projects and need to keep track of issues and run automated tests then Gitlab gives you an amazingly powerful tool set for making your life easy.


I think the difference is that the gitlab instance will probably require care and feeding beyond letting the OS "apt update; apt upgrade" once a night.


There are 100 different features in Gitlab, and only one of those is keeping your code in version control. It's valuable for any size team. Why would you want to waste a small team's time trying to DIY everything when they can just install one solution that does everything for them for free?


What about the cost of maintaining the server? I've had poor experiences with maintaining my own VPS in the past. Basic advice is to set up fail2ban, and I'm unable to ascertain whether that's adequate. We're not all walking around with that knowledge, nor do I know how to easily acquire it.


Either go with a hosted solution or take the time to learn linux server admin. Linux hasn't changed much in the past 15 years, and I don't think it will change much in the coming years either. With web stuff you have to keep up with new developments, but linux is very stable and boring by comparison. There's a learning curve, but taking the time to understand how linux really works is totally worth it.

You'll want to follow some linux hardening guides, set up a firewall, set up fail2ban, set up full disk encryption, backups, and some other things.


The parent suggestion of just running git over SSH has a small attack surface; it's not like HTTP, accepting anyone through the front door. Set `PasswordAuthentication no` in `/etc/ssh/sshd_config` and I don't think you even need fail2ban. But you could put a rate limit on new connections to port 22, or leave the fail2ban setup as an additional guard, and of course you want to block all the other ports you don't need.
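
e.g. something like this - the iptables pair is the standard "recent" module recipe; names and thresholds to taste:

  # /etc/ssh/sshd_config
  PasswordAuthentication no

  # drop a source IP that opens 4+ new connections to port 22 within 60s
  iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
    -m recent --name SSH --set
  iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
    -m recent --name SSH --update --seconds 60 --hitcount 4 -j DROP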


It doesn’t though. I need a secure, maintainable, audited, backed-up solution which would pass a SOC2 audit.


Any SOC2 auditor (or other security auditor) who will sign off on self-hosted GitLab but not self-hosted git should have their accreditation pulled!

Git's attack surface and trusted computing base are a subset of GitLab's. Even if GitLab's security were perfect, it would still be no better than git.


Gitlab and Github do way way way more than just run git.


Smaller groups can do without code review, CI/CD and issue tracking? I mean I guess if your "smaller group" is literally 1 or 2 people.


Your setup could also have a security issue.


What's the attack surface of a git Linux account that doesn't have a shell and can only log on through SSH with a key, from whitelisted IPs in a firewall? Compare that attack surface with that of a complicated web app like GitLab.
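
Pretty small. For reference, such an account is typically set up like this (git-shell refuses interactive logins; the authorized_keys options and the from= allowlist tighten it further - paths vary by distro):

  sudo useradd -m -s /usr/bin/git-shell git
  sudo -u git mkdir -p /home/git/.ssh
  echo 'from="203.0.113.0/24",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... alice' \
    | sudo -u git tee -a /home/git/.ssh/authorized_keys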


Sure, but you could control access to the Web app, too? E.g. allow only certain VPN accounts to access it? There are lots of ways to manage attack surfaces.


How are you going to do code review and manage that discussion?



