At some point, someone somewhere has to have access to infrastructure, and be able to deploy it (if not, then generate a token or set of credentials that _can_).
CircleCI's business model is hosting CI runners for you, so of course they need to be able to decrypt the data, and if you have access to deploy new infrastructure, you likely have permission to read encryption keys (or deploy new infrastructure that can read said keys and then use those machines to get the keys).
What CircleCI _have_ done is set up enough logging and auditing that they were able to figure out who was compromised, how they were compromised, the time frame and the resources they accessed, which IMO is about as much as you can ask for.
> At some point, someone somewhere has to have access to infrastructure
They could run everything in Nitro Enclaves or similar, which would require multiple people to deterministically compile and sign any new software, and to release secrets into the enclaves.
I design quorum controlled infrastructure for a living, mostly in fintech where no single human can ever be trusted. You 100% can run infrastructure that, barring a platform 0day, prevents any single human from having access to the memory of customer workloads and secrets. Customers likewise would encrypt any secrets or code directly to keys that only exist in the enclaves.
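A minimal sketch of what "encrypt secrets directly to keys that only exist in the enclaves" could look like, using Python's `cryptography` package. The key names and the secret are made up; in a real deployment the private key would be generated inside the attested enclave and never leave it, whereas here both halves are generated locally so the example is self-contained:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Stand-in for a keypair that, in practice, is generated inside the enclave
# at provision time and whose private half never leaves it.
enclave_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
enclave_public_key = enclave_private_key.public_key()

# Customer side: encrypt directly to the enclave's published public key.
ciphertext = enclave_public_key.encrypt(b"DEPLOY_TOKEN=abc123", OAEP)

# Only code holding the private key (i.e. running inside the enclave) can recover it;
# no operator with shell access to the host learns the plaintext.
plaintext = enclave_private_key.decrypt(ciphertext, OAEP)
assert plaintext == b"DEPLOY_TOKEN=abc123"
```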
CircleCI had negligent security design, but all its competitors are just as bad, to be fair.
Building with security in mind makes you last to market, which is unforgivable in our industry. Getting hacked however is just considered a cost of doing business.
There is no evidence GitHub has any multi-party accountability for sysadmins or enclaves for secret management. You enter secrets into the GitHub Web UI in plaintext, which means at least some employees can access them in plaintext.
GitHub/NPM have historically failed to support supply chain integrity practices in their public offerings such as hardware anchored code signing, signed code reviews, reproducible builds, multi-party approvals, etc. It is reasonable to expect they are not doing any of that internally either.
Assume any secret you give GitHub will become public knowledge and act accordingly.
The good news is there is never a reason to trust a VCS or CI system with high value secrets. They should never ever need any power beyond running tests, accessing a test environment, or sending notifications.
CircleCI had sufficient logging; however, they failed to do fraud analysis to detect irregular access. Fraud analysis can be basic: just a threshold on the number of encryption keys accessed each day. For this particular employee, there would have been a spike.
They also point their finger at anti-virus for not detecting the malware. That is a lame excuse. Professional malware developers will check if their product goes undetected by the major anti-virus providers. Anti-virus should not be relied upon to protect against sophisticated attacks.
They didn't blame the anti-malware. In the blog post[0], all they say is
> This machine was compromised on December 16, 2022. The malware was not detected by our antivirus software
That's not blaming; that's telling people that their antivirus didn't detect it. If that line wasn't there, people would be asking why they didn't use X antivirus, which would probably have detected it.
> CircleCI have sufficient logging, however, failed to do fraud analysis to detect irregular access.
This is a significant move of the goalposts. Of course CircleCI messed up, but to go back to the OP's point of "nobody should have that much access to production", well that's just not true.
I can't judge if their employees need that much access to production.
However, if they need this, you need a detection mechanism. It doesn't have to be advanced; a threshold on the number of production keys accessed per day is sufficient. It should raise the alarm to the manager in question, who can then confirm if this is expected behaviour given the tasks assigned to the employee in question.
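A hedged sketch of how basic that detection could be, assuming an access log of (employee, day, key) events is already available; the threshold, names, and log format here are made up:

```python
from collections import defaultdict
from datetime import date

DAILY_KEY_THRESHOLD = 25  # tune to what a normal day's workload looks like

def flag_irregular_access(events, threshold=DAILY_KEY_THRESHOLD):
    """Return (employee, day, count) for anyone touching more distinct keys
    in a day than the threshold allows."""
    keys_touched = defaultdict(set)
    for employee, day, key_id in events:
        keys_touched[(employee, day)].add(key_id)
    return [
        (employee, day, len(keys))
        for (employee, day), keys in keys_touched.items()
        if len(keys) > threshold
    ]

if __name__ == "__main__":
    # A compromised session pulling hundreds of keys stands out immediately.
    events = [("alice", date(2022, 12, 19), f"key-{i}") for i in range(400)]
    events += [("bob", date(2022, 12, 19), "key-1")]
    for employee, day, count in flag_irregular_access(events):
        print(f"ALERT: {employee} accessed {count} distinct keys on {day}; "
              f"confirm with their manager that this matches assigned work")
```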
Not necessarily. AWS engineers cannot access your encryption keys unless you give them explicit permissions. They even offer Nitro Enclaves, where AWS engineers can never access your keys.
Giving access to customer data to your staff by default is a design decision and can be avoided. However, in the end, it is a cost-benefit analysis where you have to decide how much you care about your customer's security. I have worked for enough start-ups/growth companies to know the value put on customers' security is sometimes shockingly low.
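For illustration of the Nitro Enclave point above: a sketch (expressed as a Python dict) of the kind of KMS key policy that restricts decryption to code running in an attested enclave. This assumes the documented `kms:RecipientAttestation:ImageSha384` condition key; the account ID, role ARN, and image measurement are placeholders, not a working configuration:

```python
import json

# Placeholders; a real policy would use your account, role, and the measured
# hash of your signed enclave image.
ENCLAVE_IMAGE_SHA384 = "<enclave-image-measurement>"
APP_ROLE_ARN = "arn:aws:iam::111122223333:role/app-instance-role"

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDecryptOnlyFromAttestedEnclave",
            "Effect": "Allow",
            "Principal": {"AWS": APP_ROLE_ARN},
            "Action": "kms:Decrypt",
            "Resource": "*",
            # Decrypt is only allowed when the request carries a Nitro Enclave
            # attestation document whose image measurement matches the build
            # the team signed off on -- not merely when the role asks for it.
            "Condition": {
                "StringEqualsIgnoreCase": {
                    "kms:RecipientAttestation:ImageSha384": ENCLAVE_IMAGE_SHA384
                }
            },
        }
    ],
}

print(json.dumps(key_policy, indent=2))
```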
Not all attacks are targeted and AV can detect abnormal behaviour. Regardless, having the operating system do proper sandboxing is much more valuable than trying to fix it after the fact with AV.
They can use hardware wallets for digitally signing operations, keep encryption keys offline, and split the keys so that no single person can access them (see the sketch below).
There are lots of tools for key management, but lots of companies don't care about it.
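To illustrate the "no single person" property, here is a minimal n-of-n XOR split in Python. A real setup would more likely use a threshold scheme such as Shamir's secret sharing, or HSMs/hardware wallets with quorum policies, rather than this toy:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, holders: int) -> list[bytes]:
    """n-of-n XOR split: all shares are required to rebuild the key, so any
    single share (or any strict subset of shares) reveals nothing about it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(holders - 1)]
    final_share = reduce(xor_bytes, shares, key)
    return shares + [final_share]

def recombine(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

if __name__ == "__main__":
    master_key = secrets.token_bytes(32)  # e.g. an offline AES-256 root key
    custodian_shares = split_key(master_key, holders=3)
    assert recombine(custodian_shares) == master_key
    print("key rebuilt only when all 3 custodians contribute their share")
```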