From the Homebrew post about the incident:
“The security researcher also recommended we consider using GPG signing for Homebrew/homebrew-core. The Homebrew project leadership committee took a vote on this and it was rejected non-unanimously due to workflow concerns.”
How is PGP signing not a no-brainer? What kind of workflow concerns would prevent them from signing commits?!
If you make PGP signing easy enough, at some point you end up with a Jenkins with a trusted PGP signing key, and you haven't actually solved anything.
The problem isn't making it easy to sign things, the problem is making it sufficiently hard for unauthorized parties to sign things without affecting any workflows you'd like to preserve - that is, the real problem is a workflow problem. The real problem is figuring out how to secure automation so it has the privileges to do what it needs but isn't leaking access. The Jenkins instance was designed to do authenticated pushes - it needs automated write access to the Homebrew repos.
Also, signing commits doesn't help you if the risk is unauthorized pushes to master. You can pick up someone's test commit and push that to master, or push a rollback of OpenSSL to a vulnerable version, or something, and still ruin many people's days.
That is true for git's definition of "history," but it is not helpful here. If I force-push a commit that was signed a year ago, then the signature does not cover the fact that master was just rolled back by a year (the signature does not cover the reflog, in git parlance). You have a valid signature of the previous version of history, and clients cannot tell that history was rolled back without authorization.
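To make that concrete, here's a minimal sketch of the rollback (the tag name is hypothetical): the attacker needs push access but no signing key, and clients still see a valid signature afterwards.

```
# roll master back to an old - but validly signed - release commit
git checkout master
git reset --hard v1.0           # hypothetical year-old tag; its commit was signed back then
git push --force origin master  # the server accepts it if force-pushes aren't blocked

# a client updating afterwards sees nothing wrong
git verify-commit HEAD          # passes; nothing in the signature says "this is stale"
```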
If I push a pull-request to master that wasn't approved to be on master (e.g., a maintainer did a build of "disable signature validation to narrow down why tests are failing", and signed that commit and pushed it to a PR with the intention of rejecting the PR), then I also have a valid and completely signed "history", and it probably isn't even a force-push to get it onto master.
Git commit signatures authenticate exactly one thing: that at some point, the holder of this PGP key committed this commit. They say nothing about the suitability or future suitability of that commit to be used for any purpose. They don't solve the problem Homebrew had here, and they cause other problems (like breaking rebases).
Git tag signatures are significantly more useful, since they include the name of the tag. So you're not vulnerable to the second attack, and you're mostly not vulnerable to the first since a client wouldn't intentionally request tag 1.0 after getting tag 1.5. But you still have the problem of the client knowing which tag is current, and frequent tagging isn't a great replacement for a workflow where you want people to follow master.
Technically `git push --signed` also exists, which could fix the issue of rolling back commits: it would at least verify that the person doing the push also holds the GPG key. But as far as I can tell you have to manually do something with it in the post-receive hook, and GitHub doesn't support it at all.
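For what it's worth, here's a rough sketch of what "doing something with it" server-side might look like, using the push-certificate environment variables git exposes to receive hooks (shown as a pre-receive hook, since that's the one that can still reject the push; the server also needs `receive.certNonceSeed` set so it solicits a nonce):

```
#!/bin/sh
# reject any push that doesn't carry a valid signed push certificate
if [ "$GIT_PUSH_CERT_STATUS" != "G" ]; then
    echo "rejected: no valid signed push certificate (use git push --signed)" >&2
    exit 1
fi
# reject replayed certificates (nonce doesn't match what this server handed out)
if [ "$GIT_PUSH_CERT_NONCE_STATUS" != "OK" ]; then
    echo "rejected: push certificate nonce mismatch" >&2
    exit 1
fi
exit 0
```

None of which helps on GitHub, of course, since you can't run your own receive hooks there.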
Are there any OSS maintainers who use air-gapped computers to sign packages (at least for major versions)? I would expect this level of precautions be taken for projects that may have large-scale repercussions in the event of a security breach.
I don't think air-gapping is particularly helpful in practice - for large and active codebases, you'd need to read all of the changed source code on the air-gapped machine to look for subtle back doors, which is difficult. Alternatively, if you want to do air-gapped builds and binary signatures, you're going to have to copy over the build-dependencies, which for most build environments are assumed trusted (i.e., they can compromise your air-gapped machine).
For small and intermittently-active ones, the primary development constraint is the OSS maintainer having free time, and doing dev or builds on an air-gapped machine is a big time sink. I maintain very little open-source software and I have still tried to at least use a separate machine for builds + PGP signatures and not my day-to-day machine, and maintaining this machine is just overhead that eats into my, what, one evening every two months that I get to spend on my project.
The solution I'd like to see is a) better tracking of community code reviews/audits - if every line of code has been read by multiple people (perhaps identified with a PGP key, but something simpler would be fine), you can be more confident there are no subtle backdoors than if you make one person stare at the entire diff between major versions on an air-gapped machine until their eyes glaze over - and b) better ways to do builds on multiple clouds and verify that they're identical. The Reproducible Builds effort is a good approach here; if you do that plus a CI infrastructure that runs on two different clouds with two different signing keys, and client systems require both signatures, you can be reasonably assured that the build process wasn't compromised.
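For (b), the client-side end could be as simple as requiring two detached signatures over the same reproducible artifact - e.g. (hypothetical file names; each CI provider signs with its own key):

```
# refuse to install unless both independently-built signatures verify
gpg --verify pkg-1.2.3.tar.gz.sig.ci-a pkg-1.2.3.tar.gz || exit 1
gpg --verify pkg-1.2.3.tar.gz.sig.ci-b pkg-1.2.3.tar.gz || exit 1
# (in practice you'd also pin which key each signature is required to come from)
echo "both build signatures verified; artifact was reproduced identically on both clouds"
```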
How much review does 'cperciva do of the source code and build-dependencies that are copied to the air-gapped machine, which presumably originate from internet-connected machines?
Also, how secure is the kernel on the air-gapped machine against malicious filesystems on the USB stick? (If it's running Linux, the answer is almost certainly "not;" I could imagine FreeBSD is better but I don't know how much people have explored that.)
To be clear I'm not opposed to air-gapping if the maintainer is excited about it, I just suspect there are many much weaker links on the way to/from the air-gapped system, and fixing those is a much harder project that almost nobody is excited about.
> How much review does 'cperciva do of the source code and build-dependencies that are copied to the air-gapped machine, which presumably originate from internet-connected machines?
I verify that the source code being compiled is the source code which is published in a signed tarball. Yes, someone could have tampered with the internet-connected system where I do Tarsnap development, but their tampering would be visible in the source code.
Build dependencies are verified to be the packages shipped by Debian. If someone has tampered with the Debian gcc package, we've lost even without Tarsnap binary packages.
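(Roughly the shape of those two checks, sketched with hypothetical file and package names - not the actual scripts involved:)

```
# 1. the source going to the air-gapped machine matches the published, signed tarball
gpg --verify tarsnap-X.Y.Z.tgz.asc tarsnap-X.Y.Z.tgz || exit 1
tar -xzf tarsnap-X.Y.Z.tgz
diff -r tarsnap-X.Y.Z/ src-from-dev-machine/ || exit 1

# 2. build dependencies are exactly what Debian ships (signed archive + file checksums)
apt-get install --reinstall gcc
debsums gcc
```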
> Also, how secure is the kernel on the air-gapped machine against malicious filesystems on the USB stick? (If it's running Linux, the answer is almost certainly "not;" I could imagine FreeBSD is better but I don't know how much people have explored that.)
I don't use a filesystem on the USB stick I use for sneakernet, for exactly this reason -- I write and read files to it using tar. (Yes, you can write to a USB stick as a raw device just like you would write to a tape drive.)
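Concretely, that looks something like this (the device name is hypothetical, and writing this way overwrites whatever is on the stick):

```
# write: tar archive straight onto the raw device, no filesystem involved
tar -cvf /dev/da0 ./files-to-transfer      # e.g. /dev/da0 on FreeBSD, /dev/sdX on Linux

# read it back on the receiving machine
tar -tvf /dev/da0                          # list the contents first
mkdir -p /tmp/incoming
tar -xvf /dev/da0 -C /tmp/incoming
```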
The 'malicious file system' on a USB stick is not the thing to worry about - the firmware of your USB stick is. People (i.e., for-fun hackers) have modified the firmware on USB sticks to make them look like HID keyboards and send commands to target computers - it is well within the capabilities of a determined adversary to own your internet-connected machine and implant something on your USB stick.
For secure air-gapped computers I'd use one-way, low-tech comm channels with no side bands - maybe IR or sound? (If you trust the device drivers.)
Good point... stupid of me but I didn't think about the input side of things. Could be handled with a tar dump dd-ed straight to the USB device. (That's probably not happening, but it's good to think about these things.)
Or a throwaway virtual machine just for mounting the USB filesystem, extracting the files, and placing them in some dump directory; then the whole virtual machine is wiped.
Heck, the virtual machine for copying should run BSD. :)
I'll happily state publicly that I voted for this proposal, but in this case the issue is that Homebrew/homebrew-core does not follow a GitHub Flow process: it pulls binary packages in with a custom tool (`brew pull`) and generally relies on a rebase workflow, whose commits GitHub does not sign (understandably, since rebasing modifies the original commits and would require GitHub to sign non-merge commits). I'm still optimistic we can figure out a way to do this in future.
General rule of thumb for secure package distribution:
1. Is the identifier mutable? Make sure it points to a content addressable identifier (SHA2), and sign that link.
2. Is it a content addressable identifier? Nothing to do.
When it comes to signing in git, signing tags is usually where you see the most value (mutable identifier that points to a git tree, which is content addressable).
You’re just trying to improve the trust in saying “Hey, v1.2 is this SHA digest”.
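In git terms, that's exactly what a signed tag gives you - a minimal sketch:

```
# the signature covers the tag object, which binds the mutable name "v1.2"
# to the immutable commit/tree it points at
git tag -s v1.2 -m "release v1.2"   # create a GPG-signed annotated tag
git verify-tag v1.2                 # check the signature and what it covers
git push origin v1.2
```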
He seems to be discouraging signing every commit's individual data but encouraging signing the actual commit ID (SHA1), which should be perfectly feasible for something like Homebrew.
You're still getting a signature directly from the developer's machine, not from the repository server, and as such you're still vastly shrinking the attack surface.
You have no idea how creative people get when faced with minor nuisances. I've seen devs/admins go to great lengths to avoid doing more than one 2FA per day.
Code signing is important, but artifact signing is even more important, because that's what you end up trusting at the end of the chain. So not only do you have to sign your code and secure all your code signing keys, your build agent has to have a build signing key to sign builds. If any of this is compromised, there goes your build integrity.
This is an interesting example of a long-standing problem which is that in general we make use of huge amounts of software and trust our security with it, whilst knowing very little about the security practices of the people developing it.
The article makes a good point that it's very hard for small projects, like the team running Homebrew, to fund their security, yet they are likely to be a target for quite high end attackers, given the access that can be gained by getting unauthorised access to package repositories.
As a side note, it also shows that Jenkins tends to be a tempting target for attackers, as it often has access to a wide range of systems to carry out its functions.
> As a side note, it also shows that Jenkins tends to be a tempting target for attackers, as it often has access to a wide range of systems to carry out its functions.
This. I'd really love to stop us using Jenkins but none of the hosted macOS CI services scale to meet our needs (i.e. having jobs that run for multiple hours on better hardware). The ideal solution for me would be for us to have some sort of modified Travis or Circle CI setup. We can even pay for it now we've got Patreon money coming in.
Jenkins is not the problem; we have quite a few secured instances which wouldn't leak secrets like this, or at least not to non-admin or unauthenticated users.
It's just often misconfigured because there are a plethora of plugins and ways to store and use secrets, and nobody audits it enough to look at the console output or build artifacts for leaked secrets.
The project is simply missing someone familiar enough to configure Jenkins properly.
> The project is simply missing someone familiar enough to configure Jenkins properly.
I would say that this is a specific case of the far more general one. Substitute "any small organization" for "the project" and substitute any configuration familiarity (i.e. Ops skill) for Jenkins configuration familiarity.
Even in "Devops" job postings for startups, when mentioning CI/CD tools like Jenkins, the main desire seems to be to hire someone who's more Dev than Ops, to create the code to run the CI/CD pipeline, with something like configuration or security a mere afterthough, if that.
Have you looked at Buildkite[0]? The management/pipeline part is hosted (like Travis), but you run the build agents on your own infrastructure (they run on pretty much everything, including macs).
Yeah. Let’s just blow the entire month’s budget on a single day of consulting. And I’m sure one day is all they would need to understand the workflow requirements of the team, implement and configure it, and test it. That seems reasonable.
How many billions or trillions of market value are built upon foundational technology like this, while the maintainers have to go begging for funds not for themselves, but for the good of the project(s)?
It’s a pretty unfortunate state of affairs. Google/Microsoft/Amazon should be throwing money at these projects. It’s in their best interest if they use it.
A policy in tech companies to grant a given amount (let's say $50) per month to each employee, which the employee can allocate to whichever OSS project(s) he sees fit.
If the employee doesn't choose a project, this amount is given to OSS projects in need.
To streamline this, the ideal would be to have a service where everything could be done:
* OSS projects register to it
* Tech companies then give the money to the service, which then dispatches it to the OSS projects
* Employees of tech companies log into the service to select which projects they choose to give money to (with maybe some suggestions to avoid over- or under-funding projects).
> to grant a given amount (let's say $50) per month to each employee
Let's make that amount dependent on the employee's salary. Not $50 per employee per month, but e.g. one hour's salary per employee per month. Salaries vary a lot between countries (and even within countries).
This is happening all over the industry: pretty much all companies just make use of open source software without giving a penny to the thousands of projects they leverage on a daily basis to make a profit or even keep the lights on.
As someone who runs a small open source project that clearly states that Patreon donations are accepted, and even offers some convenience benefits to donors, I often see people jumping through extra hoops to avoid it - including clearly profitable companies that have saved many thousands by using my software, spending extra time that would amount to more than those donations. So far I get donations from less than one percent of my estimated users.
Facebook, Microsoft, GitHub, etc. all pay $$ and our time into a pool that is used to incentivize the finding, vetting and fixing of security flaws in major software running the internet.
Absolutely, it's a source of constant surprise to me that basically every large company (and likely public sector body) is making heavy use of open source code and few are meaningfully investing in its security.
The people who make the decisions about where the monies go aren't the people who understand this. Some decide that it is a "calculated risk"; some are just short-sighted "it can't happen to me"; some are penny-pinchers, and the ${CURRENCY} amount is all they see.
for example (at least that one has a happy ending - where after the right exposure a bunch of companies benefiting from his work stepped in to fund it...)
> The article makes a good point that it's very hard for small projects, like the team running Homebrew, to fund their security, yet they are likely to be a target for quite high end attackers, given the access that can be gained by getting unauthorised access to package repositories.
I know this is kind of a shameless plug, but this is the exact reason I launched BountyGraph [0] last week. I think that we can crowdfund security budgets for these projects to help encourage the discovery of issues like this one.
> (...) very hard for small projects, like the team running Homebrew, to fund their security, yet they are likely to be a target for quite high end attackers, (...)
I disagree. Homebrew’s security considerations in this case have nothing to do with their funding. There are a lot of terrific services available for next to nothing for open source projects; Jenkins is one of them. The way HB set up their CI must have been a conscious decision, unaffected by funding.
And such lapses are not an open door just to “high end attackers”. This was a single person with just internet access and a little knowledge about how modern OSS projects work.
When time and money is limited, new features will always win both time and money, until something goes wrong. At that point where it's gone wrong, people will step in and lament "why didn't you just do X" for a few days, before they go back to wanting more features.
The cost of good security is high - audits, slowed down development, limited data retention, higher compute costs... and the return on that investment is only ever going to be a reduction of liability.
Big company with lots of resources, small company with no resources; it doesn't matter. Security is a cost center, and will only ever get a token amount of resources until the costs of doing nothing outweigh the costs of doing something.
Yes, security is a cost. There's a bit of tragedy of the commons effect here - many of the downsides are pushed onto others. I like Doctorow's general take on socializing costs of privacy and security breaches while privatizing profit: https://locusmag.com/2018/07/cory-doctorow-zucks-empire-of-o...
One would think that Apple would be a prime candidate for contributing to Homebrew security funding, since in practice it is a project with security implications for so very many OSX developer workstations.
On the other hand, what percentage of developers within Apple use Homebrew? I think they still have major skin in the game, even if they aren't targeting outside developers.
For what it's worth, I didn't mean my comment that started this thread to be Apple bashing at all. I think it's pretty reasonable for Apple to ignore this market; it's quite niche. It's just also a bummer because their laptops were the pinnacle of what I want in a computer for a long time.
I don't think the 2018 MacBook Pro leaves much to be desired unless you want features that would turn it into a ThinkBook-like brick. For $50 you get a charging USB-C hub that allows you even to leave your power brick home. The iMac Pro is pretty great.
Power Mac is a different story... don't know how and when they're going to fix this. That's a real shame.
> unless you want features that would turn it into a ThinkBook-like brick.
Yeah literally all many of us want is a standard keyboard with function keys instead of the touchbar. A couple regular USB ports would be nice as well so we can plug in a mouse and keyboard without needing dongles. Neither of those things would turn their "pro" laptop into a brick.
So in my use case I actually got rid of one dongle at a price lower than a replacement MagSafe 2 adapter. I really liked MagSafe though.
Never used the F-keys really. ESC is not that hard for me to live without but that's all personal. But the difference in performance is quite noticeable.
The Surface line by Microsoft has a very similar magnetic power+docking connector. The only issue I have with it is that the LED doesn't change color when the device is fully charged.
I have the previous xps 13 running linux now. I'd like to support them, but unfortunately, CostCo sold me the Windows version with mostly equivalent specs for like $400 cheaper than Dell had the Developer Edition at the time. I replaced the Broadcom WiFi chip for $30 and was good to go. I think I saw where the current model no longer has replaceable WiFi modules though.
I think that publicly exposing your build/deployment system is a very bad idea. It contains too much sensitive and valuable information (credentials, commits, commit author names, paths, hostnames, scripts etc.) and it is too hard to make it secure the right way.
Unfortunately, many people do that. For example, here in New Zealand there is a new startup, Onzo (bike sharing). When they launched, I tried to google them (just out of curiosity), and in one minute I found their Jenkins server exposed to everyone as well. I could see how their build process worked, who committed what, etc. I decided to try to log in using simple credentials (something like admin:password) and it didn't work. But there was a Register button. "Why not?" I thought, clicked it, and created my own account. And voila, it gave me admin permissions by default - I could delete their projects, change variables, etc. I emailed them about that.
Moral: never expose your build/deployment systems. If you really want to expose some parts (for whatever reason), then use/write a client/UI that has no permissions. A 'Build Status' badge is a good example - it exposes build status info, but doesn't show too much and doesn't give any permissions whatsoever.
Jenkins sucks for a lot of reasons but it does have a perfectly serviceable credentials store exactly for hiding these kinds of secrets from the parameters page and the build output. Any release engineer with the slightest inclination to avoid incidents would have set it up, this just looks like a lack of experience at breaking everything for everyone.
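To illustrate the difference (a rough sketch, not Homebrew's actual setup): the failure mode is usually a secret interpolated into a command that gets echoed into the console log, whereas a bound credential from the store is injected as an environment variable at runtime and masked in the log output.

```
# the anti-pattern: token passed as a plain build parameter and echoed into the log
set -x
curl -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/user

# with the credentials store + a credentials binding, the variable is injected at
# runtime and its value is masked in the console output; avoid echoing it regardless
set +x
curl -sf -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/user >/dev/null
```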
This is an excellent piece. I often wonder about adversarial security issues in large-scale OSS projects like the Linux kernel. You don't even need to hack commit access to the repo. One can intentionally hide malicious code in plain sight in what would otherwise appear to be a benign change (thanks to undefined behaviour in C/C++). What if a black hat hacker climbs up the Linux contributor hierarchy? What if a person who is already high up in the OSS hierarchy decides to defect and plant a logic bomb? Given that the Linux kernel now runs the majority of our world, from servers in data centers to mobile phones in our pockets, and from hospitals to war machines, security issues like this are a huge deal.
It's a pretty scary prospect, to the point that I have to imagine it's already happening to some degree. If a nation state wants a backdoor, what better way than to bribe the cash-strapped OSS maintainer of that little project that every company depends on?
The problem is that the kind of engineers who work on OSS take their own integrity very seriously, and they build their networks of trust on that integrity.
This is a fundamental problem of doing CI for open source. Running tests for random people’s changes means giving them RCE in your test environment. Sandboxing rarely gets approached with the seriousness it requires.