
GitHub repos of mine are seeing upticks in strange PRs that may be attacks. But the article's PR doesn't seem innocent at all; it's more akin to a huge dangerous red flag.

If any GitHub teammates are reading here, open source repo maintainers (including me) really need better/stronger tools for potentially risky PRs and contributors.

In order of importance IMHO:

1. Throttle PRs for new participants. For example, why is a new account able to send the same kinds of PRs to so many repos, and all at the same time?

2. Help a repo owner confirm that a new PR author is human and legit. For example, when a PR author submits their first PR to a repo, can the repo automatically do some kind of challenge such as a captcha prompt, or email confirmation, or multi-factor authentication, etc.?

3. Create across-repo across-organization flagging for risky PRs. For example, when a repo owner sees a PR that's questionable, currently the repo owner can report it to GitHub staff but that takes quite a while; instead, what if a repo owner can flag a PR as questionable, which in turn can propagate cautionary flags on similar PRs or similar author activity?






GitHub needs to step up its security game in general. 2FA should be made mandatory. GitHub "Actions" are a catastrophe waiting to happen: very few people pin Actions to a specific commit; they use a tag of the Action that can be moved at will. A malicious author could instantaneously compromise thousands of pipelines with a single commit. Also, PR diffs often hide entire files by default - why!?!
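
For anyone who wants to pin today, a rough sketch of the workaround (using actions/checkout purely as an example): resolve what the floating tag currently points to, then reference that commit instead of the tag.

    # Resolve the commit the floating tag currently points to
    git ls-remote https://github.com/actions/checkout refs/tags/v4
    # For an annotated tag, the line ending in ^{} is the underlying commit.
    # Then, in the workflow, pin the full 40-character SHA instead of the tag:
    #   uses: actions/checkout@<full-sha>   # v4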

Maybe accounts should even require ID verification. We can't afford to fuck around anymore, a significant share of the world's software supply chain lives on GitHub. It's time to take things seriously.


The rampant "@V1" usage for GitHub Actions has always been so disturbing to me. Even better is the fact that GitHub does all of the work of showing you who is actually using the action! So just compromise the account and then start searching for workflows with authenticated web tokens to AWS or something similar.

It's probably already happening.


Not that long ago, Facebook was accidentally leaking information through their self-hosted runners, via a very common mistake people make. https://johnstawinski.com/2024/01/11/playing-with-fire-how-w...

That's the second time for PyTorch, to the best of my knowledge. I know someone who found that (or something very much like it) back in 2022 and reported it, as I had to help him escalate through a relevant security contact I had at Meta.


Exactly.

It simply should not be allowed to do this. Nor maintain Actions without mandatory 2FA. All it takes is one account to be compromised to infect thousands of pipelines. Thousands of pipelines can be used to infect thousands of repos. Thousands of repos can be used to infect thousands of accounts... ad infinitum.


2FA matters very little when you have never-expiring tokens.

2FA also matters little if the attacker has compromised your machine. They can use your 2FA-authenticated session.

Only once… but if they can get your forever token… that's not the same.

Once is enough.

And thanks to the likes of Composer and similar tools, devs end up making non-expiring tokens to reduce annoyance. There needs to be a better system; having to manually generate a token for tooling can be a drag.

GitHub specifically recommended that you have a v1, v1.x and v1.x.x

When you go from v1.5.3 to v1.5.4, you make v1.5 and v1 point to v1.5.4.
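
In plain git, that release flow is just moving tags by hand; a sketch of the maintainer's side (nothing GitHub does for you):

    # Cut the new patch release
    git tag v1.5.4
    # Move the floating tags onto the same commit
    git tag -f v1.5 v1.5.4
    git tag -f v1 v1.5.4
    # Publish; the floating tags must be force-pushed because they already exist upstream
    git push origin v1.5.4
    git push -f origin v1.5 v1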


The point is that any of those tags can be replaced maliciously, after the fact.

If tags are the way people want to work, then there needs to be a new repo class for actions which explicitly removes the ability to delete or force push tags across all branches. And enforced 2FA.

Using a commit hash is the second most secure option. The first (in my eyes) is vendoring the actions you want to use into your user/org's namespace. Deciding when and whether to sync or backport upstream modifications yourself protects against these kinds of attacks.

However, this does depend on the repo being vetted ahead of time, before being vendored.
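
A minimal sketch of the vendoring route with the gh CLI (the org and action names here are made up):

    # Fork the upstream action into your own org without cloning it locally
    gh repo fork upstream-org/some-action --org my-org --clone=false
    # Review a specific commit of the copy, then reference it in workflows as:
    #   uses: my-org/some-action@<reviewed-commit-sha>
    # Pull in upstream changes deliberately, re-reviewing each time, instead of tracking a moving tag.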


Sorry I followed up to this point - how can this be done?

From the GitHub UI, very simply. Go to a repo you administer, open the /tags page, and each tag has a "..." drop-down menu with a delete option. Then upload a new tag by that name.

Tags are not automatically updated from remotes on pull (they are automatically created locally if it's a new tag). This doesn't mean that the remote can't change what the tag points to, only that it's easy to spot.
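
A sketch of both halves from the command line: how a publisher (or an attacker with their account) retargets a tag, and how a later fetch on your side makes the move visible rather than silent.

    # Publisher side: retarget an existing tag
    git tag -f v1 <new-commit>
    git push -f origin v1
    # Consumer side: a subsequent fetch (Git 2.20+) refuses to move a tag you already have
    git fetch --tags
    #   ! [rejected]  v1 -> v1  (would clobber existing tag)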

Edit: and to be clear, for many years after release, this was the recommendation from the Visual Source Safe team (yes, that team developed GitHub Actions) for managing your actions: tell people to use "v1", then delete and re-create the tag each time.


Ah - is the problem a malicious administrator of the repo you're pulling from?

Yes, exactly that. Or anyone who hacks their GitHub account.

And even if you pin your actions, if they're Docker actions the author can replace the Docker image that sits behind that tag:

https://github.com/rust-build/rust-build.action/blob/59be2ed...
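
If you do depend on a Docker-based action or image, pinning by digest rather than tag closes that particular hole. A rough sketch; the image name is just an example:

    # Find the digest behind the tag currently in use
    docker pull ghcr.io/example-org/builder:v1
    docker inspect --format '{{index .RepoDigests 0}}' ghcr.io/example-org/builder:v1
    # Prints something like ghcr.io/example-org/builder@sha256:...
    # Reference the image@sha256:... form from then on; a digest can't be quietly retargeted.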


Also the heuristic used to collapse file diffs makes it so that the most important change in a PR often can't be seen or ctrl-f'd without clicking first.

Blame it on Go dependency lists and similar.

What do you even review when it's one of those? There are thousands of lines changed, and they all point to commits in other repositories.

You're essentially hoping it's fine.
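
At minimum you can swap eyeballing the diff for a couple of local checks; a sketch (govulncheck needs a separate install):

    # Verify downloaded modules still match the hashes recorded in go.sum
    go mod verify
    # Ask why a changed module is in the build at all
    go mod why -m <changed-module>
    # Optionally scan the resulting dependency tree for known vulnerabilities
    go install golang.org/x/vuln/cmd/govulncheck@latest
    govulncheck ./...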


Shipping code to production without evidence that anyone credible has reviewed it is, at a minimum, negligence.

You're claiming here that you do a review of all of your dependencies?

For security critical projects, of course. I even reproducibly bootstrap my own compilers and interpreters.

I've always considered the wider point to be that viewing diffs inline has been a laziness-inducing anti-pattern in development: if you never actually bring the code to your machine, you don't quite feel like it's "real". Even if it's not a full test, compiling and running it yourself should be something that happens; if that feels uncomfortable... then maybe there's a reason.

2FA is already mandatory on GitHub.

Seems I missed that change, thanks.

It only happened in the last month or so I think.

Nah. A year maybe?

Six days for me:

>Your account meets this criteria, and you will need to enroll in 2FA within 45 days, by November 8th, 2024 at 00:00 (UTC). After this date, your access to GitHub.com will be limited until you enroll in 2FA. Enrolling is easy, and we support several options, starting with TOTP apps and text messages (SMS) and then adding on passkeys and the GitHub Mobile app.

I think the exact deadline depends on the organisation. I know that I only enabled 2FA for my throwaway work account (we don't use github at work, and I didn't want to comment using my personal one) last week.


Lucky you :D

I was talking about non-work accounts that don't belong to organizations. Mine got forced to use 2FA a long time ago.


For my personal account it was only in the last month but I think I'd been getting warnings for a while.

What's next, checking that Releases match the code on GitHub?

With what, a reproducible build? Madness! Madness I say!

Having a reproducible build does not prove that the tarball contains the same source as git.
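
Checking that part is mostly mechanical, though; a sketch of comparing a release tarball against the tag it claims to come from (file names and version are made up):

    mkdir from-git from-tarball
    # Export the tagged tree straight from git
    git archive v1.2.3 | tar -x -C from-git
    # Unpack the published tarball (most wrap everything in one top-level directory)
    tar -xzf project-1.2.3.tar.gz --strip-components=1 -C from-tarball
    # Anything generated or injected at release time shows up here
    diff -r from-git from-tarball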

SLSA aims to achieve this, though, right? Specifically going from level 2 to level 3.

TL;DR: Why not add a capability/permissions model to CI?

I agree that pinning commits is reasonable and that GitHub's UI and Actions system are awful. However, you said:

> Maybe accounts should even require ID verification

This would worsen the following problems:

1. GitHub Actions are seen as "trustworthy"

2. GitHub Actions lack granular permissions that default to "no"

3. Rising incentives to attempt developer machine compromise, including via $5 wrench[1]

4. Risk of identity information being stolen via breach

> It's time to take things seriously.

Why not add strong capability models to CI? We have SEGFAULT for programs, right? Let's expand on the idea. Stop an action run when (see the sketch after this list):

* an action attempts unexpected network access

* an action attempts IO on unexpected files or folders
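
Even without a new CI product, a crude deny-by-default egress policy is already expressible on a Linux runner; a sketch (assumes root and iptables, and the allowed range is only an example of an allowlist entry):

    # Default-deny outbound traffic, then allowlist what the job legitimately needs
    iptables -P OUTPUT DROP
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A OUTPUT -p udp --dport 53 -j ACCEPT                      # DNS
    iptables -A OUTPUT -p tcp --dport 443 -d 140.82.112.0/20 -j ACCEPT  # e.g. a GitHub range
    # Anything else an action tries to reach fails instead of exfiltrating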

The US DoD and related organizations seem to like enforcing this at the compiler level. For example, Ada's got:

* a heavily contract-based approach[2] for function preconditions

* pragma capabilities to forbid using certain features in a module

Other languages have inherited similar ideas in weaker forms, and I mean more than just Rust's borrow checker. Even C# requires explicit declaration to accept null values as arguments [3].

Some languages are taking a stronger approach. For example, Gren's[4] developers are considering the following for IO:

1. you need to have permission to access the disk and other devices

2. permissions default to no

> We can't afford to fuck around anymore,

Sadly, the "industry" seems to disagree with us here. Do you remember when:

1. Microsoft tried to ship 99% of a credit card number and SSN exfiltration tool[5] as a core OS component?

2. BSoD-as-service stopped global air travel?

It seems like a great time to be selling better CI solutions. ¯\_(ツ)_/¯

[1]: https://xkcd.com/538/

[2]: https://learn.adacore.com/courses/intro-to-ada/chapters/cont...

[3]: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...

[4]: https://gren-lang.org/

[5]: https://arstechnica.com/ai/2024/06/windows-recall-demands-an...


When I saw the screenshot I almost laughed out loud at the thought that anyone would say this is innocent looking.

It looked like a PR stunt

Yeah, the guy is literally named evildojo.

And then a 666 to boot, I mean gosh. Bad news.

> 2. Help a repo owner confirm that a new PR author is human and legit. For example, when a PR author submits their first PR to a repo, can the repo automatically do some kind of challenge such as a captcha prompt, or email confirmation, or multi-factor authentication, etc.?

Just do a blue checkmark thing by tying the account to a real-world identity (eIDAS etc). It's not rocket science; there are a gazillion providers that offer this sort of ID check as a service, GH would just need to integrate it.


No, this is the exact opposite of what we want. The ability to maintain pseudonymity for maintainers and contributors is paramount for personal safety. We must be able to keep online and meatspace personas separate without compromising the security of software. Stay wary of Worldcoin as the supposed fix for this.

Ah yes, I'm sure it's completely impossible to game these services by printing a fake ID at home and showing it to the webcam /s

But GitHub gets a higher valuation by having X number of active users. The last thing they want is to make that number drop!

By the way, on GitHub you can also buy stars for your project from fake accounts.


Step 1: Automatically reject PRs from usernames like "evildojo666"

Your username would suffer from this policy, as would anyone describing themselves as a hacker.

Why though


