This has been a frustrating thing for us on GitHub, not because of crypto, but because of credential theft. It's hard to make it so only approved people can run actions, and to lock down what exposed actions' runtimes can do.
Until it's default-deny for ~all capabilities, with say RBAC for enabling them, which are basic security principles, they're pretty scary for public repos. It's even scary for private ones, as you might want, say, a contractor or intern to be limited. That they have people publicly dedicated to whack-a-mole response, but seemingly not to security fundamentals (or if they do, they're not following them / not empowered to), is frustrating. Look at the GHA permissions panel and then think like an attacker or defender: it's scary.
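To make "default-deny plus RBAC" concrete, here's a minimal sketch of the check I have in mind; every name here is hypothetical (none of this is a real GitHub API), the point is just that a capability is unusable unless a grant explicitly names it:

```python
# Hypothetical sketch of default-deny RBAC over CI capabilities.
# None of these names correspond to a real GitHub API.

GRANTS = {
    # (role, workflow) -> capabilities explicitly enabled
    ("maintainer", "ci.yml"): {"network:registry.npmjs.org", "secrets:read"},
    ("contractor", "ci.yml"): {"network:registry.npmjs.org"},  # no secrets
}

def is_allowed(role: str, workflow: str, capability: str) -> bool:
    """Default deny: a capability is usable only if a grant names it."""
    return capability in GRANTS.get((role, workflow), set())

assert is_allowed("maintainer", "ci.yml", "secrets:read")
assert not is_allowed("contractor", "ci.yml", "secrets:read")   # denied by default
assert not is_allowed("anonymous", "ci.yml", "network:registry.npmjs.org")  # unknown role: denied
```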
The environment variable stuff a while back was understandable, but not the big issue. Since basically any user can submit a PR that makes existing actions that run code (e.g., run tests) run something else by editing the action or the code, any public user can burn repo $, play around in repo-exposed runners (e.g., get into corp sandboxes), and, if the repo uses continuous deployment or service integrations, get into their production systems, wherever they publish packages, etc. GitHub is API-exposed and git commits can be deleted, so they can even cover most of their tracks. This is SolarWinds all over again, except now a bot can run it automatically against ~everyone!
GHA is both one of my favorite things about GitHub and one of the scariest. Maybe now that they have access to some of the best security engineers and researchers in the world, they can fix this...
Yeah, and afaict actions on fork PRs are off by default. This is akin to fixing a sinking ship by plugging your fingers into holes as they appear, not how you build secure systems: default deny on all logical + physical capabilities, central authorization policies for grants, defense in depth for layering those.
Very clearly we can still see the battle between the cathedral and the bazaar. I think the idea of blocking all network traffic from the CI should be entertained. It would be harder now than it would have been twenty years ago, because more and more stuff assumes you can download whatever from wherever in your CI; the art of making an air-gapped software development lifecycle is evaporating. Maybe GitHub can offer per-repo, per-CI/CD whitelisted network access, so you can whitelist npm and I can whitelist the Go module proxy, and no one whitelists the mining endpoints.
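A sketch of what that per-repo allowlist could look like as an egress filter; the repo names and URL patterns are illustrative, and this is not an existing GitHub feature:

```python
import re

# Hypothetical per-repo egress allowlist; patterns are illustrative.
ALLOWLIST = {
    "my-org/node-app":   [r"^https://registry\.npmjs\.org/"],
    "my-org/go-service": [r"^https://proxy\.golang\.org/"],
}

def egress_allowed(repo: str, url: str) -> bool:
    """Default deny: block any URL no pattern for this repo matches."""
    return any(re.match(p, url) for p in ALLOWLIST.get(repo, []))

assert egress_allowed("my-org/node-app", "https://registry.npmjs.org/left-pad")
assert not egress_allowed("my-org/node-app", "https://evil-pool.example/miner")  # mining endpoint: denied
```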
I'm optimistic here. This stuff probably feels intimidating to most PM-types, though interestingly, it's a wonderful era for security engineers. We're in a new era of reified / virtual / software-controlled everything. There are multiple layers of virtualization already at play, each supporting all sorts of controls nowadays. Hypervisors, VMs, OS / Docker, and networking layers can all be instrumented now -- GHA certainly does tricks here to speed things up, save money, and isolate tenants. However, they don't expose those controls to apps, so we're all an action away from a remote code execution doing... well... anything.
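For a taste of the controls those layers already support, here's a minimal sketch of running an untrusted CI step inside Docker with all capabilities dropped and networking removed. The image and command are placeholders, and this is not how GHA actually wires its runners:

```python
import subprocess

# Minimal sketch: run an untrusted CI step in a locked-down container.
# Image/command are placeholders; GHA's real runner plumbing differs.
subprocess.run(
    [
        "docker", "run", "--rm",
        "--cap-drop=ALL",    # drop all Linux capabilities
        "--network=none",    # no network egress at all
        "--memory=512m",     # hard memory ceiling
        "--cpus=1",          # CPU quota
        "--read-only",       # read-only root filesystem
        "ubuntu:22.04",
        "sh", "-c", "make test",
    ],
    check=True,
)
```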
I'm not sure about cathedral vs bazaar. I think it's more like disneyification: instead of handling this stuff, they want it to be a few buttons, and as soon as it's not, it's your problem. Except, by the nature of how actions are meant to be used, you hit that point pretty fast. The other side of the spectrum is something like AWS thrusting a giant ball of IAM at everyone... but there is plenty of middle ground.
RE: artifactories & repos, GitHub wants you to use those, as it's part of their monetization strategy -- an onramp for a few paid areas, plus less public internet traffic cost for them. Likewise, as most package managers now have enterprise stewards or B2B investors backing them, the harder / more core parts of the tech are in place for locking down software deps in most ecosystems.
I was excited when MS bought GitHub, primarily because they could accelerate GH to support more of the software dev lifecycle. On the security side, I like that they've pushed on auditing, but what could be a massive business for them and a boon to the community has instead felt quite slow and small. The dev org is clearly great; the focus just seems lopsided wrt the security approach.
I just meant the people saying it's crazy to allow anons to do something that runs CI vs. the people saying it's crazy to make me look at a PR before CI has blessed it. The prior assessment of an anon PR's value is -1 for the former, the Cathedral folks, and +1 for the latter, the Bazaar folks. I don't know if the FSF even now has CI, but I think you have to sign copyright-assignment paperwork before they consider your PR, more or less. Meanwhile, random GitHub repos will take even my PRs.
If I could wave my wand, no builds or deploys would need to talk to the internet. We want all of that to be deterministic and based on resources we control. But that battle is lost. Whitelisting software repos and so on still seems winnable, though.
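Even without full air-gapping, determinism is recoverable per dependency by pinning hashes; a minimal sketch, with a made-up lockfile format and a truncated placeholder digest:

```python
import hashlib

# Made-up lockfile mapping artifact names to expected SHA-256 digests
# (the digest here is a truncated placeholder, recorded when the dep was vetted).
LOCKFILE = {
    "left-pad-1.3.0.tgz": "2a4c...",
}

def verify_artifact(name: str, data: bytes) -> None:
    """Refuse any downloaded artifact whose hash doesn't match the lockfile."""
    expected = LOCKFILE.get(name)
    actual = hashlib.sha256(data).hexdigest()
    if expected is None or actual != expected:
        raise RuntimeError(f"untrusted or tampered artifact: {name}")
```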
It also mirrors the “unit tests / CI should be self-contained with network deps mocked” vs. “it isn't tested till it's integration tested” debate. The mock/unit-test folks don't need network access in CI. Even the clever integration testers can spin up a DB and a local Hadoop cluster during the tests. Only the busy, harried integration testers need the real system to test against during CI. (Disclaimer: I have written tests of all three kinds.)
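For the first camp, the standard trick is stubbing the network boundary so CI needs no egress at all. A minimal sketch with Python's unittest.mock, where `fetch_user` is a hypothetical function under test:

```python
import unittest
from unittest import mock

# Hypothetical code under test: fetches a user record over the network
# via an injected http_get callable.
def fetch_user(http_get, user_id):
    return http_get(f"https://api.example.com/users/{user_id}")["name"]

class FetchUserTest(unittest.TestCase):
    def test_fetch_user_without_network(self):
        # The network dependency is a mock, so CI needs no egress.
        http_get = mock.Mock(return_value={"name": "ada"})
        self.assertEqual(fetch_user(http_get, 42), "ada")
        http_get.assert_called_once_with("https://api.example.com/users/42")

if __name__ == "__main__":
    unittest.main()
```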
Ah, right. I'm in the camp of "Why not both?" for OSS public PRs: make it safe to run CI on external PRs by external users. To support the disneyification of security / smart defaults, make an easy button that infers the policy per workflow, e.g., based on the last X runs. However, instead of today's unsafe and chaotic set of checkbox rules & enforcements, tie that to an RBAC policy over user/role x workflow x capability, and cover the usual suspects of physical + logical resources that virtual & data envs typically protect.
Ex: I'd expect the typical inferred public-PR CI policy to be no creds/keys, a fixed set of network URL regexes (package deps like https://npm/@trusted_org/\*), upper bounds on CPU/memory/network/disk/etc., and ~no use of internal GitHub APIs. There are probably surprises like `apt-get update`, which in turn are probably addressable with some common special cases. Likewise, for failure modes, as long as no creds are present and resource quotas are in place, most orgs are probably OK making network read violations WARN instead of HALT.
That probably covers the 90-99% case when making public PR workflows much safer.
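Roughly what such an inferred policy and its failure-mode switch might look like as data; everything here is a hypothetical shape, not a real GHA config schema:

```python
# Hypothetical inferred policy for public-PR CI; not a real GHA schema.
INFERRED_PUBLIC_PR_POLICY = {
    "secrets": [],                                            # no creds/keys at all
    "network_url_allow": [r"^https://npm/@trusted_org/.*"],   # inferred from the last X runs
    "limits": {"cpu_cores": 2, "memory_mb": 4096, "disk_mb": 10240},
    "internal_github_apis": "deny",
    "on_network_read_violation": "WARN",                      # vs "HALT" for high-trust jobs
}

def network_violation_action(policy: dict, has_secrets: bool) -> str:
    """WARN is only safe when no creds are exposed and quotas are enforced."""
    if has_secrets or not policy.get("limits"):
        return "HALT"
    return policy.get("on_network_read_violation", "HALT")
```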
For internal teams and higher-trust actions (CD, issue bots, ...), I'd expect the same but different. Currently, I'm not really sure how to do something like "add a contractor but keep them away from most things" except by setting up a second repo with just CI. If there were RBAC and policy inference, however, that'd be all of 2 minutes.
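Under the same hypothetical RBAC model sketched earlier, that 2-minute change is a single narrow grant:

```python
# Hypothetical continuation of the earlier GRANTS sketch: scope a contractor
# to CI only, with no CD, secrets, or internal-API capabilities.
GRANTS = {
    ("maintainer", "deploy.yml"): {"secrets:read", "deploy:prod"},
    ("contractor", "ci.yml"):     {"network:registry.npmjs.org"},  # CI only
}
# Everything not listed stays denied, so the contractor never touches deploy.yml.
```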