NPM makes an interesting trade-off: under the current scheme, updates that improve the application's security posture are accepted by default. Those are far more common than malicious updates. Would pinning dependencies lead to larger problems? Imagine if a log4j-like issue showed up tomorrow; most NPM-managed software would just need a reinstall to pick up the fix.
> Those are far more common than malicious updates
Are they though? It seems to me that security updates that actually affect a given individual or company are few and far between.
Example: I use a library that both reads and writes a particular file format. In the past few years, it has literally had hundreds of potential vulnerabilities fixed in its parsing code. I only use it to write files, so most or all of its vulnerabilities are of no importance to me.
How often do log4j-level RCEs (or of equal severity) happen? And of those, how many occur in JS packages hosted on NPM?
In my opinion, when I'm using code from an untrusted source that neither I nor anyone else has carefully reviewed, it is less risky to wait to patch until I'm sure a change will improve my security posture than to YOLO everything into production.
But there is no right answer to this problem. We are all unknowingly running vulnerable code, and will likely, at some point, make an update that introduces more vulnerable code. Given the current state of our industry, we will all be owned at some point. It becomes a blame game: "you didn't apply the security patches?!" versus "you deployed code you didn't review?!".
The way you do it is simple. Take Ruby gems: in development you update as often as possible. When you see the lock file has changed, you check that you're happy with the upgrades and commit the lock file. When you don't want to keep a newer version, you change the Gemfile (package.json) to point to a version you're happy with, ideally with a comment describing why you're pinning and what you'd have to do to adopt the newer version. In CI/CD you exclusively use the lock file. This workflow is very difficult to emulate with npm/yarn.
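A rough sketch of that loop with Bundler (`somegem` in the comment is just a placeholder):

```sh
# Dev: update freely, then review what the lock file says changed
bundle update
git diff Gemfile.lock

# Unhappy with a bump? Pin it in the Gemfile with a note, e.g.:
#   gem "somegem", "~> 1.2.0"  # pinned: 2.x changes the API; revisit after refactor
# then re-resolve and commit both files
bundle install
git add Gemfile Gemfile.lock && git commit

# CI/CD: refuse any install that would change the lock file
bundle config set frozen true
bundle install
```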
Can you describe why this workflow is difficult? I'm probably missing or misunderstanding something, as we literally use this workflow at work, and we've never had dependency issues with our npm projects...
Edit: npm has lock files and a way to install from the lock file (the simplest way being `npm ci`).
npm ci is relatively new; before that it was hard to get npm to precisely respect the lock file. There's also an issue if your devs don't all settle on either yarn or npm, since each has its own lock file.
I do agree that it's relatively new, but not _that_ new. I feel like it's been around for at least a couple of years (but my sense of time since the COVID pandemic started has been unreliable).
FWIW: Yarn has had lock files for a longer time. I know it's not technically npm, but they share the ecosystem.
Well, I think it was introduced in npm 6. Either way you slice it, six major versions is way too late for this basic functionality.
Yes, but that is the issue with yarn: it has its own separate lock file, which is why, back when I did devops for Node apps, I required the devs to all agree on either npm or yarn, but not a mix.
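One blunt way to enforce that in CI, as a sketch (the check itself is just an assumption about how you'd wire it up):

```sh
# Fail the build if both lock files are present, i.e. someone mixed npm and yarn
if [ -f package-lock.json ] && [ -f yarn.lock ]; then
  echo "Found both package-lock.json and yarn.lock; pick one package manager" >&2
  exit 1
fi
```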
The gap in this workflow is that you have to go out of your way to get a diff, never mind a diff of what actually ships in the .gem (rough sketch below). Glancing at changelogs on GitHub only reviews changes from good actors.
In practice most of us just update, often with live reload running, and move on.
We need mandatory peer review of updates before they're distributed in the first place.
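For reference, here's roughly what "going out of your way" looks like with RubyGems (`somegem` is a placeholder; the commands are standard `gem` subcommands):

```sh
# Download both versions of the packaged gem, unpack, and diff the actual contents
gem fetch somegem -v 1.2.3
gem fetch somegem -v 1.2.4
gem unpack somegem-1.2.3.gem
gem unpack somegem-1.2.4.gem
diff -ru somegem-1.2.3 somegem-1.2.4
```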
Diffs are easily in the 10k-line range, if not 100k. No company but the proverbial FAANGs can cope with that. Exploits are hard to spot by design and may span multiple commits and even major versions. Only an automated process can handle that scope.
I'm not saying you should look at any diff other than that of the lock file itself. From there, you can figure out the potential impact, run more complete tests, and apply other mitigations. In the most famous instances (left-pad and the incident in this story), the problem would have been immediately obvious. Just because it's still possible to fall victim to sabotage doesn't mean you shouldn't be doing this.
So they would just have to be redeployed? I find it insane that you would put unreviewed dependencies into production. Wouldn't it be better to update them explicitly and then redeploy? I don't see the benefit, and I see huge downsides.
I'm not 100% sure how npm handles the following, but Composer (the PHP package manager) lets you lock a specific version for a particular dependency; then, when you're ready to upgrade and test, you manually change that pinned version to get the next one. Thus if Log4j happened, your project wouldn't automatically pull in the fix, but you would know about it through the media etc. and would then go into your project and change the version to the one with the fix.
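For illustration, an exact pin in composer.json might look like this (the package and versions are just examples):

```json
{
    "require": {
        "monolog/monolog": "2.3.5"
    }
}
```

When the fix lands, you'd bump the pin to the patched release (say "2.3.6") and run `composer update monolog/monolog` to pull in exactly that change.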
npm has lock files, and the documentation tells you why they exist and when to use them (e.g. `npm ci` being the shortest/easiest way to avoid this incident).
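Concretely, the split looks something like this (standard npm commands, nothing project-specific assumed):

```sh
# Dev machine: take upgrades deliberately, review the diff, commit it
npm update
git diff package-lock.json
git add package-lock.json && git commit -m "accept dependency bumps"

# CI/CD: install exactly what package-lock.json records; `npm ci` errors out
# if the lock file and package.json disagree, instead of rewriting the lock
npm ci
```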