What NPM should do to stop a new colors attack (swtch.com)
237 points by mfrw on Jan 10, 2022 | 672 comments



The whole "grab the latest semver-compatible version" approach was designed for a reason, and just pinning to a specific version would bring us to a world where very different articles need to be written. Specifically, when there's a vulnerability in any dependency (a thing that happens much more often than a rogue OSS dev), the upgrade process would be a nightmare for everyone involved.

You have to choose one of two evils: automatic bug/vulnerability fixes, or protection against rogue OSS devs you depend on. I'd say the former is an order of magnitude more common and important, and so the npm/JS world works this way.


But the npm/JS world doesn't work that way.

NPM will by default create a lockfile that pins the dependencies. `npm install` will install dependencies from the lockfile as long as package.json hasn't been updated, or update the lockfile if it has. `npm ci` will install from the lockfile and fail hard if it can't satisfy the version constraints in package.json.

imo this gives the best of both worlds: Easy, controlled updates to the latest compatible versions and stable CI/CD.

The reason colors.js was an issue is because people use `npm install` in their CI and either don't commit their lockfiles or have their lockfile inadvertently out of date. It's easy to do, and I'm not sure how many people actually know about `npm ci`.
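
The distinction in CI terms, roughly (a sketch; `npm ci` has existed since npm 5.7):

    # reproducible: install exactly what package-lock.json records,
    # and fail hard if it disagrees with package.json
    $ npm ci

    # not reproducible: may re-resolve the ranges in package.json and
    # rewrite the lockfile, pulling in versions published since the
    # lock was last committed
    $ npm install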


> as long as package.json hasn't been updated, or update the lockfile if it has

There is a problem here: it will update the __entire__ package-lock.json file, regardless of the actual changes in package.json.

Pinning doesn't help here if colors is not your direct dependency.

So you can have a pinned library that doesn't pin colors.js. Now you make a change in package.json (say, adding or removing another unrelated package). You will end up with colors being bumped to latest.
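
A minimal sketch of the failure mode being described (package names other than colors are hypothetical, and whether the float actually happens depends on your npm version):

    # lockfile currently resolves the transitive dep:
    #   some-lib depends on colors@^1.4.0, locked at 1.4.0
    # now install something unrelated:
    $ npm install left-pad
    # npm re-resolves the tree while rewriting package-lock.json and can
    # float the transitive entry to the newest match for ^1.4.0, e.g. a
    # colors 1.4.x published five minutes ago that some-lib never tested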


> So you can have a pinned library that doesn't pin colors.js. Now you make a change in package.json (say, adding or removing another unrelated package). You will end up with colors being bumped to latest.

What the actual fuck. Who designed this fucking tire fire? Are you kidding me? What even is the point of package-lock.json??

I can't believe this. I actually can't. Now I have to audit all our dependencies, thanks.


NPM is pretty bad at this. It frustrates me to no end every time I have to use it. It manages to do everything exactly wrong, especially in comparison to the other package manager I use daily: Composer. That one does everything right.


It's the same in Python (at least with standard virtual envs): you can pin libraries, but the dependencies of those libs are not pinned, and you can get different versions based on which lib is installed first.

I'm told Poetry fixes this, but haven't checked it myself.


> There is a problem here: it will update the __entire__ package-lock.json file, regardless of the actual changes in package.json.

I'm so happy I don't need to fight npm anymore.


> npm ci

Yes, that is the problem.

> people use `npm install`

Yes, and not just in CI but also to install CLI tools user-wide on a system, and similar.

> not sure how many people actually know about `npm ci`

I don't know either, but `npm install` has been the default for years, and a lot of documentation is based on it. Not deprecating `npm install` or changing its behavior is typical for the npm ecosystem, i.e. having subtle, sometimes UX-related security issues (at least in my experience; I doubt it got much better).


iirc the biggest issue with `npm ci` before was that it cleared node_modules, which used to be the primary way of caching deps in CI builds, but now most suggest caching ~/.npm[0].

0: https://github.com/actions/cache/blob/main/examples.md#node-...


Aside: it would be great if they actually used XDG directories; there are many open bugs asking for it.


Honestly, `npm install` should come with a big honking warning that the lockfile will change. Or change the behaviour. This would negate this attack in 90% of cases.


- `npm install` should just install what's in the lockfile

- `npm update` should update the lockfile

- `npm upgrade` should update the versions in package.json
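
For comparison, the nearest equivalents today, as far as I know (npm-check-updates is a third-party tool):

    # proposed `npm install` (lockfile only) -> today:
    $ npm ci

    # proposed `npm update` (refresh the lockfile within package.json
    # ranges) -> today, roughly:
    $ npm update

    # proposed `npm upgrade` (bump the ranges in package.json itself) ->
    # no builtin today; a common stand-in:
    $ npx npm-check-updates -u && npm install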


It's... almost like pulling random code from the internet is a liability, and the people doing so need to eat the costs of doing so safely and not pull code that cannot be pulled safely. Like, for example, evidence that the author actually takes security issues seriously and has a formal way to announce them.

The reality here is that just grabbing and running random shit on clients or servers has _always_ been unsafe. The people not pinning and mirroring have gigantic exposure. Pinning and mirroring people have some exposure. None of them should be doing what they are doing now.


> almost like pulling random code from the internet

I would say blindly pulling _any_ code from _anywhere_ is a liability.

No matter where it comes from or whether it's open source or a proprietary vendor's library.


>The reality here is that just grabbing and running random shit on clients or servers has _always_ been unsafe

That is what this whole catastrophic mess has been based on from day one.

Do not use NPM for anything new. Let it die a natural death.


NPM is the model that almost all package managers have been based on for a while now. The cat is out of the bag, and the benefits of the vast library of convenient packages have won over carefully selected but outdated distro repos.


Many new package managers (e.g. cargo) have small differences which often have a massive impact in such situations.

They are still not perfect, but they tend to know it and work on improving it (though almost always slowly).

The problem tends to be more around missing proper CI setups and workflows(1) and similar. And naturally npm being very easy to use wrongly.

(1): Lock versions, (shallow) review diffs of most dependency bumps, test and build in a sandbox, automatically check CVE/audit databases on a frequent basis, don't automatically bump versions for release builds, etc., etc.
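
In Cargo terms, for example, a locked-and-audited CI step might look like this (a sketch; `cargo audit` comes from the separate cargo-audit crate):

    # refuse to build if Cargo.lock is missing or out of sync with Cargo.toml
    $ cargo build --locked

    # check the locked crate graph against the RustSec advisory database
    $ cargo install cargo-audit    # one-time setup
    $ cargo audit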

> carefully selected but outdated distro repos.

and also:

- inconsistent between distros

- sometimes long-term unfixed security vulnerabilities

- (more) incompatibilities between packages and distros

- unofficial patches to fix security vulnerabilities or compatibility, which not infrequently cause future subtle problems, including security problems

- problems pushing important security fixes in time

- often not language-specific, due to the high amount of work needed to properly maintain them

- costly to operate

- an additional attack-able link in the supply chain

- often OS specific

A distro based system is quite nice for distributing programs, but in my experience it's a nightmare for software packages.

I mean just look at the endless problems distros have with packaging python, npm and shared objects.

Instead I think most dependencies should always be bundled with the program (so that anyone can rebuild if needed, i.e. open source) and distros should only package programs, not libraries.

There are some exceptions, but like fewer than 10 on most Linux systems, and many could adapt by having a well-defined non-C-FFI interface + bundled support library.


Maven got this right 17 years ago. You can have both deterministic dependency resolution and a vast library of convenient packages. You just have to choose it.


Well, other package managers don't automatically update packages like NPM does; they pin the exact version that you installed.


This is absolutely not true at all. Go ahead and pipenv install a package.


Python has probably the worst, most misdesigned package manager out there.

I mean, it doesn't even really have support for proper package versioning; instead it just magically hacks the environment, which happens to work but is also kind of a nightmare.

I have no idea why python programmers still mostly accept that as a viable solution.


Python package management is without a doubt, the worst I’ve seen… not used… seen.

Short of downloading individual jar files without the original source code.


It's not just NPM. It's no better to pull some github python repo into your server side unvetted and run it.

The issue is that a broad swath of tech companies have thousands of instances of software that they've never even casually inspected running in user web browsers and in their back end servers, in contexts that are almost never locked down (network controls and segmentation, privilege reduction/separation, CSPs, etc.).

We are one malicious party away from a broad, total disaster.


The difference is that your tooling doesn't suddenly decide to pull in a different version than the one you've tested with.


I know people who, in their jenkins jobs, just run git clones from the repos. They're not even managing versions.

This is a really broad problem. Focusing on NPM, which I agree is a trashfire, is kind of missing the problem.


Tell them about requirements.txt, pip freeze & virtualenv. You can also curl | bash but that doesn't mean bash is to blame, because it doesn't encourage this bad behavior. NPM is literally built around this broken behavior.
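
Roughly, the Python equivalent of a committed lockfile with plain tooling (no Poetry) looks like this:

    # isolated environment, installs from exact pins
    $ python -m venv .venv && . .venv/bin/activate
    $ pip install -r requirements.txt

    # after a deliberate upgrade, re-freeze every installed package
    # (including transitive ones) to exact versions
    $ pip freeze > requirements.txt    # lines like somepkg==1.0.2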


Aren't those files associated with a previous generation of python package management? At least npm hasn't been replaced half a dozen times.


"Tell them about requirements.txt, pip freeze & virtualenv"

NPM offers the same functionality with lock files.


npm install would like to have a word with your lockfile.


We're getting into minutiae here. Yes npm install will update the lock file if package.json conflicts. But it certainly does not update every time npm install is run if you use a lock file, which is what your earlier comment (and those of others) mistakenly imply.

If you have ~1.0.0 in package.json and 1.0.11 specified in the lock file, but then bump package.json to ~1.0.12, yes npm will upgrade the package and then bump the dep in your lock file to 1.0.12. That seems fine?


This is not my experience with how most people specify their dependencies (nor what the tools do by default).

If you have `^1.0.0` in package.json, the version mark is `^1.0.11` in the lock file, which means that a random `npm i` or `yarn install` will install `^1.0.12` in your lock file.

I more or less said this yesterday[1], but the behaviour of `yarn install --frozen-lockfile` and `npm ci` should be the _default_ behaviour. The current behaviour is demonstrably dangerous. A lockfile should remain locked without explicit action by developers.

[1]: https://news.ycombinator.com/item?id=29869859


"If you have `^1.0.0` in package.json, the version mark is `^1.0.11` in the lock file, which means that a random `npm i` or `yarn install` will install `^1.0.12` in your lock file."

That's not true. package-lock.json file dependencies are specified as exact versions, not ranges. If you have 1.0.11 in your lock file, npm install will always install 1.0.11 unless it no longer satisfies the version in package.json.


You're mixing up npm ci and npm install. npm install will update your lockfile. See the docs: https://docs.npmjs.com/cli/v6/configuring-npm/package-locks


Where are you seeing that? From your link:

“The presence of a package lock changes the installation behavior such that:

The module tree described by the package lock is reproduced. This means reproducing the structure described in the file, using the specific files referenced in "resolved" if available, falling back to normal package resolution using "version" if one isn't.

The tree is walked and any missing dependencies are installed in the usual fashion.”


The scenario is a _little_ more complicated than I laid out, but I have seen this in practice. I _believe_ that it will happen when you have two copies of a repo and one of the copies does not have one of the dependencies. If user A (repo copy 1) adds a package with `^1.0.0` but the lockfile indicates `^1.0.11`, then user B (repo copy 2) will get `^1.0.12`.

The _only_ correct behaviour for a lockfile is `=1.0.11` (I think that npm calls that `1.0.11`, but NPM’s semver specification is unnecessarily complex compared to Elixir or Ruby) unless there is an explicit update. Otherwise, you will _not_ get the same version installed for all developers.


"If user A (repo copy 1) adds a package with `^1.0.0` but the lockfile indicates `^1.0.11`"

NPM lockfiles can't indicate `^1.0.11` so I believe this is where you're mistaken. They point to a specific version (`=1.0.11`) just like you're saying they should.
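
For reference, this is (abridged) what a v1-format package-lock.json entry looks like: an exact version plus a resolved URL and content hash (the version number follows the thread's example):

    "colors": {
      "version": "1.0.11",
      "resolved": "https://registry.npmjs.org/colors/-/colors-1.0.11.tgz",
      "integrity": "sha512-..."
    }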


No, that is madness. A new developer clones the repo and gets wildly different dependency versions than the previous devs tested with. Then once you PR your changes, you include that lockfile and production explodes. Insane. And yes, this is a real problem. See the thread we're posting in and how many projects it affected.


“A new developer clones the repo and gets wildly different dependency versions than the previous devs tested with.”

Could you describe a scenario where you think this happens? It doesn’t if you have a lock file in version control. Do you mean if it’s not in version control?


That's how you get tons of security vulnerabilities. It's not a perfect solution, but I'd rather have npm updating versions automatically, because developers are lazy and rarely do it manually.


Setting a notification for empty GitHub search results on "filename:package.json modified-ago<=1h"...


> Like, for example, evidence that the author actually takes security issues seriously and has a formal way to announce them.

Personally I think there is a business model in there. "Curated" code releases/versions by a trusted third party you pay to review all the code. It would be a boring job, like being an auditor.

Bonus points, you can pay an "ethical" one that then tries to get money to the OSS devs of libraries that their clients demand the most.


This is hypothetically the Red Hat/MontaVista/Canonical business model, or part of it. They don't do deep curation though, because they're not spending the money on it either, but at least they do some basic vetting.

Which is apparently more than most of the coders out there.


I wonder if both are possible with a trust system where releases can be approved or signed by community members. This way we can have automatic semver updates for releases approved by people we trust.

Something like "I trust NPM so accept all releases that NPM has approved". Large companies could have their own internal approval keys etc.


That's in a sense what Debian is doing with its experimental/unstable/testing/stable repositories. We also adopted a similar approach in build2 where we have queue/testing/stable sections of the repository. A newly submitted package version first lands in the queue where it is not yet visible to anyone. After it has been tested and reviewed, it is migrated to testing where it becomes available for testing by the larger community. After some time, if no issues are found in the package, it is migrated to stable. You as a user of build2 have a choice from which section(s) of the repository you want your dependencies.


This exists, it's called crev: https://github.com/crev-dev/crev

It is really good, allowing for distributed reviews, not necessarily from the people authoring your dependencies. The OP suggests that if you -> A -> B, the package manager should only install versions of B vetted by A (or close); I think this doesn't scale well in practice (all the more so if the only way A can vet a new version of B is by doing a new release). Having the possibility to rely on other people to vet releases (possibly in your company) opens a lot of doors, such as the ability to not trust the author of A at all.


IMO what should be done is dependency pinning, but package repositories can send a "critical update" to essentially replace a pinned dependency. Maintainers have to explicitly request that a critical update be applied, and it's ultimately up to the repository owner to approve - generally these critical security fixes shouldn't happen often.

This would handle both Log4Shell and colors/left-pad issues. Additionally, a repository owner could force a mandatory package update without the author's consent, in case someone hides malicious code in their package and it becomes popular.

Of course the repository owner could go rogue, but I trust NPM/Maven/Cargo/OPAM maintainers much more than a random package developer.


Have you used Go? Updating a module is easy and it uses exactly the solution described above.


Hrm. Generally what you're saying sounds reasonable, although I wonder if there's a chance of a false dichotomy there.

What about human errors introduced by non-rogue developers? (a reasonably common occurrence, probably?)


Sure, but there are points in the design space between “everyone specifies exact version requirements” and “everyone specifies lower bounds and pulls the absolute latest”.


Yes, and that reason is no good. There's a reason maven3 moved away from it.


It's really tricky when you end up using libraries 4 levels deep, and never consciously chose something.

Looking at one of our production projects, we use colors via: "css-loader" -> "cssnano" -> "svgo" -> "colors"

I wish I could say I spent the hours to go line by line through every dependency of our app. But that wouldn't leave much time for anything else.
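
You can at least see the chain without reading every line; something like this (output shape and versions illustrative):

    $ npm ls colors
    my-app
    └─┬ css-loader@6.5.1
      └─┬ cssnano@5.0.15
        └─┬ svgo@2.8.0
          └── colors@1.4.0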


There are tons of tiny libraries that have gotten themselves into the dependency chain of popular libraries, and any attempt to remove them is treated as a turf war.

I've tried to remove single-line dependencies only to have the PR rejected by the creator of the tiny library who happens to contribute to the parent library.

IMO the Javascript ecosystem really needs a decent standard library to remove the inordinate amount of power granted to these tiny library squatters who wrote 5 lines of code and a package.json file 10 years ago.


What "inordinate amount of power" is this really though? If they're a contributor to the parent library then they can already put whatever they want into code that will get executed by many people; meanwhile having libraries properly broken out into smaller pieces is great for maintainability. Let's not throw the baby out with the bathwater.


The parent library in that situation was a popular library with a whole team of contributors who could reject malicious PRs. But a PR that just updates every dependency (including a malicious update to the tiny library) can easily go unnoticed.

And that was one situation. The mindset of the Javascript ecosystem is still to maximize code reuse, meaning even if the tiny library maintainer isn't a maintainer of the parent library, the parent library still frequently clings to their one-line dependencies when I've tried removing them. Thus granting the tiny library owner tons of power like the maintainer of "colors".


> I've tried to remove single-line dependencies only to have the PR rejected by the creator of the tiny library who happens to contribute to the parent library.

So fork it and throw out everything you don't like.


Exactly. In this case, the package manager should use the version of colors that svgo asked for, not the one that appeared on the internet 5 minutes ago.


It's damned if you do, damned if you don't for packages in the middle, between your direct dependencies and a sub-sub-dependency with an issue (be it security- or tantrum-based).

These middle deps could pin the exact version, but then when a security vuln is found and a patch issued, these libraries also need to update. This is like a traffic jam. If you're 6 hops from the vulnerable package you need to wait for not one, but six maintainers to push an update to npm before you can clear the security warning.

To get around this, middle packages list semver ranges. And then you have your occasional left-pad issue.

If I had to choose between those two ways to lose, I would use semver ranges. The only way to win is to not play at all: have no dependencies.


As an example, look at the log4j mess. "Everything" in the Java world needs log4j updates, but some dependency's dependencies pull in some specific version.

There you need a way to say "get the new version from 5 minutes ago" without waiting for all levels of the dependency hierarchy. Especially as there were a bunch of emergency releases in short sequence, and you have to make sure you get the latest one.

Dependencies are a mess. NIH can't be the solution either, though.


> "Everything" in the Java world needs log4j updates, but some dependency's dependencies pull in some specific version. There you need a way to say "get the new version from 5 minutes ago" without waiting for all levels of the dependency hierarchy.

At least with Maven, that's extremely simple. Just add to your project a dependency on that "new version from 5 minutes ago" of log4j. Maven always prefers the version from a direct dependency. You don't have to wait for "all levels of the dependency hierarchy" at all.
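
A sketch of that override. The coordinates are log4j's real ones; "nearest wins" means your direct declaration beats any transitive request:

    <!-- in your application's pom.xml -->
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-core</artifactId>
      <version>2.17.1</version>
    </dependency>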


The problem is that, AIUI, each NPM package installs its own, isolated set of dependencies. It has facilities to manage versions down in the dep tree, but it’s not as simple as other managers which ultimately install a single set of packages that satisfy all version constraints.


Many Java jars contain other jars with the dependencies, so users don't have to run Maven or something like that, but just grab the jar. This is even worse. (There was no package manager early on, thus many Java developers aren't taught that way, even decades after Maven was created... and for end-user products it makes sense to bundle.)


No, we really don't need a way to say get the last version from 5 minutes ago.

We need to get the version that we tested with and know works, which is what it does. When we need to update, we say what version we want.

Java can be a pain, but I'm very glad it doesn't handle dependencies like npm.


This is also saying you can't easily update from 1.2.3 to 1.2.4 without all of your dependencies which specify 1.2.3 also being updated, unless you have an override mechanism. This isn't a simple problem where one option has no downsides.


Oh, I agree that there's no simple solution that solves everything.

I'm still glad that basically no one else handles dependencies like npm.


Everybody else handles dependencies like npm, e.g. installs the latest versions that satisfy constraints. Some package managers have the ability to install from a lock file (e.g. Python's Poetry, Python's Pipenv, Rust's Cargo, npm) but will still grab the latest version when the lockfile doesn't exist (e.g. you cloned a project with no lock file, or just added the dependency yourself), and have a command to update which updates every single package to the latest compatible versions.

You can argue that npm's commands are poorly named and guide users towards bad defaults, but saying that it works in a unique way is not true.


I guess my experience is limited. For example, maven dependencies usually specify the exact version required in the pom files. At least, on all the projects I've ever dealt with.


I’ve seen plenty of pom.xml files which are unversioned — not having a standard command to update means less disciplined developers say it’s too much work – but increasingly in other languages (and Gradle IIRC) there’s a distinction between the metadata file listing your direct dependencies and a lock file detailing exactly what was installed. The idea is that (using Node as an example) I’d say “I depend on the AWS SDK” in the main file, which changes infrequently when I add or remove direct dependencies, but my tool would use the lock file (npm-shrinkwrap) to record the exact versions of every package, preferably even by file hash, so you can have a highly repeatable build.

Separating the two is handy both due to frequent churn and to avoid transitive dependencies being kept unnecessarily — I’ve seen projects where libfoo stopped depending on libbar a while back but they were still installing it because someone had copy-pasted that block years before and was just incrementing the version.


Thanks for the explanation, makes a lot of sense :)


You’re welcome!


Well, in a log4j situation or similar, I am sure there are cases where it is preferred to update first, and fix production later.

So that would be “get the one from 5 minutes ago, no questions asked”


Nothing stops you from updating to the latest version and not testing it if it's urgent.

You just specify the latest version. But you do it explicitly.


You can ask for a specific, later, fixed log4j version. You don't have to ask for an unspecified latest one.


This isn't the problem being discussed: when something like log4j happens, everyone in the world needs to update. The difference is if one of your dependencies pin 2.14.0, you either have to wait for that project to also ship an update or do something more complicated whereas if they specify less than version 3 you can immediately ship that patch.

Something like log4j or many packages in the NPM world are so widely used that on any non-trivial project the odds approach certainty that multiple dependencies will specify versions. The more specific the pin, the more likely it is that you'll be unable to ship a security update without risk of a functional change or blocking incompatibility. You can mitigate that with good CI/CD infrastructure and lots of automated testing, but that means taking on more infrastructure expense.


What if the patch for the buggy `2.14.0` library was released as `3.0.0`? In semver MAJOR can be a superset of MINOR and PATCH, so this is a perfectly logical semver operation.

You still have the exact same problem you described with `^2.14.0`. Someone would have to manually update the package to get the security fix in 3.0.0.

Unless you're suggesting code should also automatically update major versions as well?


Yes, if someone chooses not to follow semver correctly they can create problems but there are a somewhat unlimited number of ways in which an untrustworthy maintainer can do that. The difference is that following semver means it's easy to not do that since they can always ship 2.<latest + 1> with no changes other than the security fix.


This is following semver correctly. A PATCH update may be a MAJOR update (since its a superset) and it may be considered a breaking change.


> The difference is that following semver means it's easy to not do that since they can always ship 2.<latest + 1> with no changes other than the security fix.

If the line between bug and feature were clear. (Log4j worked 100% as specified, btw, in regards to Log4Shell.)

https://xkcd.com/1172/


You don't have to wait till that lib updates. You update the version yourself and override it.

Second, log4j is a bad example. Libraries don't pin that at all, and people report a bug if they do. Libraries are supposed to depend on a logging API in general, and the end project decides whether to use log4j or SLF4J.


What if NPM allowed a "global minimum version" for any package found to have critical security vulnerability?


Global for what? How much tracking of issues do I have to do? In an ideal world (which we can't have, for multiple reasons) I get security fixes by default (and no other breakage).


The problem is that by default `npm install [package]` will put a "give me [package] that appeared on the internet 5 minutes ago" into your package.json file. Until `npm install` pins to a minimum version by default instead of a maximum version by default, this will keep happening. It's a mess. The benefit of "maximum by default" is that you pick up security fixes by default. So... pick your poison, I guess.
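
Today the floating default is opt-out rather than opt-in (these npm flags do exist; versions illustrative):

    # default: writes a floating caret range into package.json
    $ npm install colors               # -> "colors": "^1.4.0"

    # opting out: write an exact pin instead
    $ npm install --save-exact colors  # -> "colors": "1.4.0"
    $ npm config set save-exact true   # make exact pins the default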


On the other side of the spectrum, if no one was installing 'latest', then no one would be testing latest, and we wouldn't catch these bugs until later. We would just all get surprised after 2 weeks (or whatever arbitrary delay) when the bad version became the blessed default version.

In other words.. If there isn't a trusted test pipeline then there's no benefit in delaying the latest version. Might as well just get latest.


In other contexts, the best ways to deal with trade-offs use randomness. Perhaps this sort of stunt would cause less damage if npm just randomly chose which versions to use. To avoid churn, the random choices could be stored in lockfiles. The point would be that not everyone is using the newest version of any particular package.


People starting from scratch would get latest, whereas people maintaining existing projects would be able to upgrade when they feel it's safe to do so. Do you really want the people maintaining the nuclear reactor code being testers for latest?


> should use the version of colors that svgo asked for

Another thing about setting exact dependencies is reducing duplication. Libraries are encouraged to use loose ranges, because if everyone pins exact versions for every dependency, then you could end up with 10 to 20 different copies of 'colors' in your tree, instead of having just 1 or 2 copies that work across the board.
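
For instance, if three libraries each pinned a different exact version, you would get three copies (a hypothetical tree):

    $ npm ls colors
    my-app
    ├─┬ lib-a@1.0.0
    │ └── colors@1.4.0
    ├─┬ lib-b@2.0.0
    │ └── colors@1.3.2
    └─┬ lib-c@0.9.1
      └── colors@1.3.0   # loose ranges could have deduped all three to one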


If you are unconsciously shipping something, you shouldn't be shipping anything.

Our industry has gotten along for far too long with zero liability or accountability.


The people posting that it's not their fault, that they can't possibly vet the code they ship, etc. are basically people who don't have any professional ethics.

Not only that, even if you excuse their lack of ethics, there's a basic competence issue: they have a basic failure to learn from history and the most basic part of this, which is that you shouldn't be pulling live from the internet.


I'm very sympathetic to your arguments in this thread, but it seems like in practice doing it right in your eyes would mean either 1 - not using dependencies at all, or 2 - only using dependencies when you have read and understood every single line of code in both the immediate dependency and every sub-dependency in its tree. Neither is realistic, unfortunately, for building anything non-trivial without a huge team of programmers.

Perhaps you have another solution in mind? I think something along the lines of package-level content security policies with granular permissions has a lot of potential.


Well

I did try (2) for a while, with some success. You basically vet everything you import.

This generally follows the same rules of supply chain validation expected _and common_ in other industries where people actually give a crap about what they are doing.

In the case of large projects, especially those with a solid track record of testing and security issue management, the vetting is organizational. You don't need to read every line of Kafka (though you should, as I did, actually look through it to see what you're getting into), as an example, at least today.

But for smaller projects, just like with smaller vendors, you need to go deeper. You do need to read the code. Then you look at their issue tracking and see if they have any concept at all of quality, sanity, security notices, what their maintainer attitude is, and so on. You definitely read the code. If you're sloppy, you at least read through a good sample of the code. Again, diligence - you have the obligation to your downstream to know what you are getting into.

This is no different than the vendor management you expect in pretty much every supply chain that you interact with. Food, cars, medicine, pharmaceuticals, building materials, heavy machinery, embedded software in medical devices, and so on. Software has gotten away with complete negligence for a long time and instead of addressing it, the common practice has gotten worse and worse. At some point, we are going to accidentally kill someone.

Then, on top of that, you lock it down. You mirror it, you pin it, you make an active plan to monitor the project to make sure you are aware if it is abandoned, hijacked, deleted, etc. You take responsibility for dealing with these possibilities. You mitigate immediate threat by not pulling random crap live when you build. You further make sure you partition your system such that code you have not classified as high quality has a reasonable blast radius and security boundaries.

But whatever, let's say you're a typical valley company that operates like a psychopath and you don't care about that. What companies should be worried about is liability. Yes, most of the security activities for most SaaS and other SW is laughable and mostly theater and checklists. But there is liability nonetheless. After all, you are about to represent it as high quality software and you are taking on the liability. No judge in the world is going to let you go with the "but it was this javascript we pulled in, not ours" excuse.

We built quite substantial systems using this approach, and it was important in the domain that we were working in to have done so. It is not without hassle and cost.

Really, what are we talking about here? It's _too expensive_ or _too difficult_ to have even the slightest quality verification of software? The industry is a complete joke if that is true.


I mean, the bare naked reality is that software, web software in particular, is still like... a hundred-billion dollar industry. Maybe more. As long as it's still profitable enough to deal with these supply chain attacks occasionally, and nobody legislates or regulates things, we'll keep lumbering along like this. I'm disappointed, and I don't like the current state of affairs, but we can't just expect anything to change in the absence of any kind of external pressure.


> and nobody legislates or regulates things

Completely agree, which is why I support such endeavors wholeheartedly.


I did not agree with regulation until this post on hacker news.

The comments here have convinced me that the industry is basically beyond self help.


I agree with you, but it's not my opinion which matters, it's my bean-counting bosses' opinions.


Short-term bean counting always sets you up for failure. The fact that this guy's packages were depended upon by so many projects and people were not actively supporting him with money shows how broken open source is. The projects with the most dependents should be sought out and collectively paid for, or a new license should be made that excludes the use of OSS by corporations unless they pay a minimum amount based on their size. I'm sick of hearing about stories like this, and it has discouraged me from releasing much of anything open source that any corporation could use without paying me. That's why the small projects that I do release get GPL licenses and I don't put my best work out there for free.


Indeed, and crying for the little developer is of no help; the little café owner is liable for everything that gets sold, the kitchen's cleanliness, and the good state of the food being packaged.


Imagine a car manufacturer like "I wish I could say I spent the hours looking at every valve and every screw..."


I suppose they must, because a car is a hazard on the road and things may not end well if you don't put enough man-hours into design, documentation, testing, etc.

Now, who's gonna pay you for writing that single functionality for a whole year? Sure, you tested, documented, did everything right; you may have almost no bugs, pretty code, decent test coverage, unit tests, integration tests, UI tests, performance tests, edge cases ironed out, top-notch performance, UX no one matches, one-click deployment, static code analysis, linters, fuzzers, vulnerability scanner tools, monitoring, auto scaling... but by the time you get there, your startup may be no more. Or maybe just a money sink: https://thedailywtf.com/articles/unseen-effort

Alright, I'm a little off track, but not every software project is comparable to the automotive/air/rocket/med/... industries. Of course you will put 10x+ more money/time/effort in if human lives may be impacted by your commit.

To put that into context, let's appreciate SQLite: software that is thoroughly tested, and your airplane shouldn't be afraid to run it. From the creator of SQLite:

> I’m going to write tests to bring SQLite up to the quality of 100% MCDC, and that took a year of 60 hour weeks. That was hard, hard work. I was putting in 12 hour days every single day. I was just getting so tired of this because with this sort of thing, it’s the old joke of, you get 95% of the functionality with the first 95% of your budget, and the last 5% on the second 95% of your budget. It’s kind of the same thing. It’s pretty easy to get up to 90 or 95% test coverage. Getting that last 5% is really, really hard and it took about a year for me to get there, but once we got to that point, we stopped getting bug reports from Android

https://corecursive.com/066-sqlite-with-richard-hipp/#testin...


This is part of why cars cost $30k apiece. If open source maintainers saw the same kind of revenue as car component suppliers, there'd be a lot more paid QA jobs.


Pretty sure they don't MRI every other batch of rust covered by paint received from yet another shitty frame manufacturer. Many PSU, GPU, etc manufacturers often ship first batch with hi-class components, and then "downgrade it a little" or "lose control of a dealer" when it sells nice.


There we go again, someone always has to mention “rust” in every thread about programming languages or dependency management…

;)


Spot checks and manufacturing line quality monitoring absolutely do happen in the process you are describing, which is a hell of a lot more than the weak excuses for not even bothering to mirror the SW and look at diffs now and then accomplish.


Just out of interest do they generally vet the factory that made the steel for those screws and bolts?


The difference is that 1 person responsible for installing 1 valve can't disable every car sold in the last year if they decide they aren't being paid enough.


They better, given liability in the industry.

What we are missing in the software industry is proper liability.


Nailed it. Shallow dependency trees are much easier to maintain and secure.


It's the responsibility of "svgo" to make sure its direct dependencies are alright.


> It's the responsibility of "svgo" to make sure its direct dependencies are alright.

No, it's your responsibility to make sure your dependency tree is alright.

Your personal definition of 'alright' may be more easily satisfied if the packages you choose to depend on autonomously practice some level of responsibility towards their own dependencies. Choose wisely.

But there is no way to dictate your requirements to dependencies, or impose some kind of responsibility or demand some kind of warranty. You can accept what is offered, or not. In fact, if you are using svgo, even indirectly, you have agreed to this: https://github.com/svg/svgo/blob/main/LICENSE

>THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.

So svgo doesn't have to care and you can't make them. Even if they do care, there's no guarantee they will meet your standards - and you still can't make them.

If you want someone to blame, find someone willing to sign a contract that says you can blame them.

Efforts within NPM and github to control this situation are simply the bare minimum of case-by-case disaster mitigation, in the interest of reputation alone. If you're using their infrastructure, you apparently find this acceptable.


Is it? In retail, AFAIK, if a store sells you a defective product, they are liable (or at least partly liable). It doesn't matter that some other manufacture made the product. The point being, responsibility is shared.

You're responsible for every dependency you add to your project and that includes all sub-dependencies. Your users will sue you for not doing your due diligence. You may turn around and try to sue your suppliers but that doesn't absolve you of your responsibility.


Who's paying svgo to do that checking?

NOTE: this isn't an open source versus closed source thing. Linux has distributions which test included packages (to varying extents, I'm sure) and some of these are commercial operations. It's not impossible to have code whose verification you have paid for, to one extent or another, even with open source. (and hey, you can install malware with automatically updating closed-source see Solarwinds).


Sure, that is technically correct and if your only goal is to assign blame that's helpful to point out. But it does nothing to prevent this from happening again.

The point here is how we can make it easier for svgo and every other package to avoid the problem in the first place.


* It should be the responsibility of "svgo" to make sure its direct dependencies are alright.


Why can't you point that attitude in both directions?


So.. basically you are OK shipping unvetted, attacker-controlled code to your users?


Basically, no. I'm not OK with it


Reminds me of my time at a global bank. After many security incidents, they finally decided to step up their game, and implemented multiple security scans.

For any project using NPM, it was an absolute nightmare. Dependencies 3 or 4 levels deep would get flagged, and there wouldn't be a way to resolve them. The security teams didn't care, and multiple projects were left stranded, since resolving those issues would mean having to contribute to open source, which we weren't allowed to do.

On the other hand, because we had specific security teams doing the scanning and assessing the results, you weren't allowed to question them, because they could just block your entire project. So developers were extremely unhappy with them, and what ended up happening is that developers just did what the security teams asked, without questioning. Which, to me, led to more security concerns, because we were just pulling in libraries without knowing what had changed.

Sadly, the few devs who stopped relying on third party dependencies all left, because most of their work took longer (since they were implementing stuff that third parties usually do). Business teams took notice and questioned why everything was taking longer.


We used to use npm audit at my last job, and it'd regularly flag dependencies 20+ layers deep, usually with something like create-react-app or jest at the top, and some trivial, obviously-not-actually-a-problem issue at the bottom.

The resolver tool we used couldn't fix dependencies that deep even when there were fixed versions available. And since their internal processes didn't use the same audit tooling, Facebook devs would often close issues requesting dependency upgrades to fix those not-actually-security issues. I think Dan Abramov wrote a post on overreacted about it at some point too. I'm not convinced any of the time we spent on those issues was well spent, to be honest.


There are obviously very valid security concerns sometimes. We didn't use npm audit but an external tool, and often it would highlight things like "use https", and when you looked at the line in question, it was just a comment that referred to an external link.

Obviously a non-issue, but you still had to verify, and document everything. Best practice would be that we ran this scan on every commit, but the scan took about 15-20 minutes. It just ended up being something that was part of our release process, with the obvious danger of breaking our application right before release.

All in all, it stopped me from relying on third-party modules, and I tried to use as much vanilla code as possible.


> that wouldnt leave much time for anything else

By the time you got to the end, it would be time to go back to beginning and start over again...


From the Department of Funny But Hurts Because It's True:

https://www.reddit.com/r/ProgrammerHumor/comments/6s0wov/hea...

>Heaviest Objects in the Universe:

>Sun

>Neutron star

>Black hole

>node_modules


Unfortunately, the way this can be prevented is to audit the packages before including them, or to have your own package registry that you control.


Summary:

The author argues that version bounds should be treated as a maximum rather than a minimum, like Go does. e.g., if the latest version of colors is 1.0.3, and you have dependencies that request 1.0.1 and 1.0.2, you would end up with 1.0.2. The end result being, the exact resolved version will have been tested with at least one of your dependencies.

I must admit I like the idea.
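
In Go terms (module path hypothetical), the resolved version is the maximum of the declared minimums, never the registry's latest:

    # go.mod requirements across the build:
    #   app    requires example.com/colors v1.0.1
    #   dep-a  requires example.com/colors v1.0.2
    # latest published release: v1.0.3
    $ go list -m example.com/colors
    example.com/colors v1.0.2    # max of the minimums, not v1.0.3

    # picking up v1.0.3 is an explicit, auditable step:
    $ go get example.com/colors@v1.0.3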


Except that a shit-ton of developers will code and test with one version of a dependency, and never, ever, ever update it. If the dependency has a catastrophic security hole, that security hole will be pretty much permanent.

And what happens if project A pins projects B and C, which in turn pin DIFFERENT versions of project D? Is there any language or environment out there that can make that work?


Read it again. Any one dependency, or the root project, has the power to pull in the latest version. One laggard dependency does not stop that.

For your second question, yes, Rust handles that well. If you depend on ">=1.0" and ">=1.1", you end up with a single copy of 1.1. If you depend on "=1.0" and "=1.1", you end up with both copies of the library. Every crate uses the version it requested. You can argue whether that's good or bad, but at least it's principled. There's a lint if you dislike that behavior.

https://rust-lang.github.io/rust-clippy/master/#multiple_cra...


OK, so suppose I depend on anarchy, which wants shades >=1.0.1, and chaos, which wants shades >=1.0.2. The author of shades releases 1.0.3 because of a bad security hole in all prior versions. My project will still get 1.0.2, so it will still have the security hole. For that matter, it may ALSO break because anarchy is broken by a change made between shades 1.0.1 and 1.0.2... which is why the maintainer of anarchy hasn't updated their dependency.

On the whole, I think I'd prefer a solver that gave me 1.0.3 by default (but maybe would NOT give me 2.0.0 by default, depending on what the version numbers mean in this particular system). But the bottom line is that there is NO solver that can be SURE that what I eventually get will really work.

That's an interesting fact about Rust, and I didn't know it. On the whole, it sounds like it at least needs some serious tooling so you can make sure you're not dragging in a bunch of old versions that both bloat your code and open you to abuse. Can I ask for a warning if I'm getting two different versions linked into the same binary? If something depends on "=1.0", and the maintainer issues 1.1 with a flag that says "I really, really don't think think you should be using 1.0 any more", will that throw an error? And what happens if both versions get pulled in, but the package in question uses an external data file whose format changed between 1.0 and 1.1?

Edited to change "<=" to ">="...


Oh, and another Rust question, if you don't mind. If I have both versions 1.0 and 1.1 of package X in my binary, and X defines a data type called foo, and one of my dependencies constructs a foo using X 1.0, is that value of a different type than a foo constructed by X 1.1? Or can version 1.0 foos wind their way through the code and up being processed by version 1.1 code?


> My project will still get 1.0.2, so it will still have the security hole.

Right. To mitigate that you would regularly run `npm audit` or even just `npm upgrade` – and test afterwards of course.

I'm not completely sold, but I do think it's a very interesting idea.

> Can I ask for a warning if I'm getting two different versions linked into the same binary?

Yes. That was the lint I linked in my last post. Alternatively, you can run `cargo tree --duplicates`.

> "I really, really don't think think you should be using 1.0 any more"

That's called "yanking". Personally I think it has limited usefulness, but it exists.

https://doc.rust-lang.org/cargo/commands/cargo-yank.html

> And what happens if both versions get pulled in, but the package in question uses an external data file whose format changed between 1.0 and 1.1?

If it uses something like `include!`, both copies will be compiled in (and maybe optimized later by the linker). If it's truly "external" like hosted on some website outside the package manager, then it just means the author broke their package. Maybe I misunderstood your question.

> one of my dependencies constructs a foo using X 1.0, is that value of a different type than a foo constructed by X 1.1?

I believe they are always different types. Cargo encourages but doesn't enforce semver, so anything can change between versions, including private fields or enum variants behind non_exhaustive, etc. So they're treated as different and you need to convert between them. Although this might only be true for major versions; I don't know off the top of my head.

To work around it you can convert the types at crate boundaries, or the package author can use the so-called "semver trick" [1]

[1]: https://github.com/dtolnay/semver-trick


By an "external data file", I mean that the package keeps a runtime database in a disk file or something, and will end up getting confused if two versions are reading and writing that file concurrently. The same would apply if the two versions had any way to end up sharing an in-memory data structure as well.


If the version varies even a little bit, they are treated as different types by Rust.


> is that value of a different type than a foo constructed by X 1.1

I encountered this case once, and they are considered to be different types (in my case the issue was that using v1.0::foo with v1.1::foo traits wouldn't compile), so either keep them separated or pick one. Error messages can be confusing though, if you don't suspect using two versions.


> For your second question, yes, Rust handles that well. If you depend on ">=1.0" and ">=1.1", you end up with a single copy of 1.1. If you depend on "=1.0" and "=1.1", you end up with both copies of the library. Every crate uses the version it requested.

IIRC rust only does that for MAJOR versions (1.1 and 2.0, or 0.2 and 0.3, since cargo consider releases before 1.0.0 to be major ones).


To clarify, if you depend on ">=1.0" and ">=1.1", you might also end up with a single copy of 1.2. Rust does not implement the "maximum version" scheme described above.


And what happens if project A pins projects B and C, which in turn pin DIFFERENT versions of project D? Is there any language or environment out there that can make that work?

Node and npm have always been able to do this. Once upon a time this was accomplished via very hierarchical node_modules directories. This caused some issues, so the directory was flattened; however, the multi-version compatibility was always maintained.


I agree that this approach is much better for stability. The trade-off is that a lot of systems will end up missing needed security updates. Very few people are consciously balancing their type I and type II errors when considering upgrade strategies.


This in part has to do with how versions are specified; the standard way that this is done allows minor and patch releases to be trusted.

https://stackoverflow.com/a/22345808 - and especially the comment.

You will find a lot of `^1.2.3` in version specifications which means everything from `1.2.3` up to (but not including) `2.0.0` is allowed.

Specifying `1.0.1 - 1.0.3` is allowed too and would meet the desired functionality - but that isn't the culture of JavaScript developers.

The version range is allowed in other dependency management systems (e.g. Maven - https://www.mojohaus.org/versions-maven-plugin/examples/reso... ), but rarely do I see it used - most often it's pinned to a specific known-good version.
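
For reference, the range forms being contrasted (npm's semver syntax):

    ^1.2.3          ->  >=1.2.3 <2.0.0   (the common default)
    ~1.2.3          ->  >=1.2.3 <1.3.0   (patch-level updates only)
    1.0.1 - 1.0.3   ->  >=1.0.1 <=1.0.3  (explicit inclusive range)
    1.2.3           ->  exactly 1.2.3    (pin)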


Let's say you have two dependencies each requesting colors:

colors@1.0.1-1.0.3

colors@1.0.1-1.0.4

With npm, you'll end up with 1.0.3 because it satisfies all constraints. OP wants to end up with 1.0.4 if at least one dependency tested with 1.0.4 (and reject 1.0.5). I don't know of a way to do this with npm today.


You might like the "overrides" field[1] in npm v8.3, although I would recommend using it with caution; changing your dependencies' dependencies has unknown consequences... even a patch release can break everything...

[1] https://docs.npmjs.com/cli/v8/configuring-npm/package-json#o...


You are reading this wrong. The OP is suggesting that if you have two dependencies that are requesting:

    colors@^1.0.1
    colors@^1.0.2
then npm should get you 1.0.2, instead of 1.0.4, because it's the "version as close as possible" to "the dependency version that the package was actually tested with".

OP is not suggesting that npm should ignore dependency constraints, just that the version that is picked is the closest to the tested version (among those that satisfy the constraints).

If you have a package that explicitly says it won't work with >1.0.3, installing 1.0.4 is silly.


> the standard way that this is done allows minor and point releases to be trusted.

I feel like this event (and previous ones) has taught me that one should NOT trust patch and minor version upgrades to work. Obviously we want them to, but I distinctly recall "minor" patches that broke existing behavior in the past, and it has bitten my team on multiple projects over the last several years. Pinned versions are a giant pain, but having builds suddenly stop working seems worse, because you can't plan ahead for the time to upgrade.


I've come to believe that pinned versions with an active dependency check is the way to go. A lot of the dependency checks/scans are build-time rather than an ongoing approach.

If nothing else, that is a step in the direction of reproducible builds which are also in the Good Thing category.

This is likely going to be another maturing event for NPM and the community where they will need to decide how they want to move forward. The blind trust of a `^1.2.3` version specification is something that will likely be outgrown.

I still believe that one of the biggest problems that JavaScript libraries face is the transitive dependency explosion combined with "always update" build policies, and that in turn makes the issue of a suddenly untrustworthy developer more likely and more problematic.


Just do it like Maven. You know why no-one talks much about dependency management on the JVM? Because everything works properly so it's all very boring.


I love boring technology. :)


That means that if I depend on leftpad and leftpad depends on colors, and a new version of colors is released, the maintainer of leftpad has to be pestered about testing it with the new colors and doing a new release with absolutely no code change and only this (semver-compatible) dependency bump, otherwise no one using leftpad will be able to update their version of colors?

And the security of this new scheme depends entirely on the leftpad developer correctly assessing the security of the third-party colors package, possibly much bigger than his own?


The way Go's versioning works: no, the highest version wins, not the lowest. So anyone can force an upgrade by upgrading their minimum version. This includes your application's go.mod file, i.e. you can force updates of anything.

Which has other problems too, particularly where semver is not followed strictly, since it fairly often means that using an update of X might force an incompatible update of Y that you now have to go and fix. Go modules have no way to specify upper bounds to prevent or warn about this.


I know, nothing does what OP recommends. And I'm with you that it's not a good idea at all.


I've used NPM shrinkwrap before back in versions 1-3 of NPM. It's a little confusing that the author of this post calls it "new", so I investigated.

Since NPM version ~2 (current is 8), you're allowed to publish an npm-shrinkwrap.json file[0] in your package. That's in contrast to a package-lock.json, which may not be included in a published package.

Back when I used shrinkwrap, it was pre-lockfiles and it was trying to accomplish something similar to what lockfiles accomplish today. (Lockfiles were added in v5 [1])
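
The mechanism itself is a single command, and as far as I can tell it still works the same way:

    # in a package you maintain, after installing and testing deps:
    $ npm shrinkwrap
    # promotes package-lock.json to npm-shrinkwrap.json, which (unlike
    # package-lock.json) is published with the package and honored when
    # it is installed as a dependency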

I dug up this old StackOverflow post to verify that the behavior of shrinkwrap hasn't changed[2] since many years ago.

That doesn't leave me very hopeful. If package maintainers aren't doing this already, and they've been able to for 6+ years, then it's unlikely they will in the near future.

(I'm currently investigating how to deal with bad deps with some Open Source tooling I'm building. Feel free to ping me if you have any thoughts to share)

0: https://docs.npmjs.com/cli/v8/configuring-npm/npm-shrinkwrap...

1: https://nodejs.dev/learn/the-package-lock-json-file

2: https://stackoverflow.com/questions/44258235/what-is-the-dif...


It's because having an actual shrinkwrap file in a library introduces a huge number of conflicts. If library A has 20 transitive dependencies, and library B has the same 20 transitive dependencies, deduplication based on ranges can leave you with only 20 transitive dependencies. If both libraries have shrinkwrap files in them, you can end up with 40 transitive dependencies, even if those are nominally compatible with each other. When you consider that even with the current behavior projects end up with thousands of transitives, you're looking at adding literally thousands or 10s of thousands of duplicate dependencies to a project. That's not even considering that some duplicate dependencies will just break things if they show up multiple times in a project (older versions of react, for example).
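
A toy way to see the collapse/no-collapse distinction, using the semver package (made-up versions; `npm install semver` first):

  const semver = require('semver');

  // two libraries asking for overlapping ranges: one copy can serve both
  console.log(semver.satisfies('4.17.21', '^4.17.0'));  // true
  console.log(semver.satisfies('4.17.21', '^4.17.1'));  // true

  // two shrinkwrapped exact pins: no single version satisfies both
  console.log(semver.satisfies('4.17.21', '4.17.20'));  // false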


It is worth noting that this is not an impossible problem. I'm not a js expert, and it seems that js loves imports, but Nix solves exactly this problem on Linux. You have, for each end binary, a set of dependencies. If they are exactly the same, they can be shared among programs, otherwise you would have different versions of the dependency, all the way down, per program.

It can be bloaty, and you have to manually look at and test packages if you want to get everyone on the same set of dependencies, but you end up with reproducible environments.


That is exactly how npm works. Except instead of one at least somewhat-aligned group of maintainers of a Linux distro trying to keep things under control, you have a much larger set of packages individually maintained (or abandoned) by totally independent people.

So practically nothing will ever exactly match, which leaves you right back at many slightly-different copies of hundreds or thousands of dependencies.


This is probably the only sane way to proceed when our software has dependency chains more than a few levels deep. Establish mechanisms to try to prevent bloat, but otherwise make it possible to upgrade independently and make it starkly apparent if there is duplication.


Ah, that totally makes sense. I know that's what "peerDependencies" is used for by many projects. For example, `react-dom` has a peerDependency on `react`. This can either be pinned to an exact version (react-dom@1.1.1 must be used with react@1.1.1) or it can require a range (react-dom@1.1.1 may be used with react@^1.0.0 (any version 1.0.0 and up)).
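
i.e. roughly this in react-dom's package.json (numbers from the example, not the real manifest):

  {
    "name": "react-dom",
    "version": "1.1.1",
    "peerDependencies": {
      "react": "^1.0.0"
    }
  }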

That seems tricky with the shrinkwrap, as you describe above, because it might create additional versions everywhere instead of trying to collapse them. (If you have 3 versions of `react` in peerDependencies, but each package had a npm-shrinkwrap.json, then you'd end up with 3 different versions of `react` installed.)

Would the burden then fall on your package manager to resolve this, i.e. npm or yarn? If it sees 3 different shrinkwraps (1.1.1, 1.1.2, 1.1.3) but they all have compatible peerDependencies (>=1.1.1, >=1.1.2, >=1.1.3), it could then pick the version that is compatible with all of the peerDependencies but that also exists in a shrinkwrap (so 1.1.3).
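
Something like this, if I'm imagining the resolver right (a rough sketch using the semver package; this is not how npm actually resolves today):

  const semver = require('semver');

  // versions that appear in some shrinkwrap, plus each package's peer range
  const shrinkwrapped = ['1.1.1', '1.1.2', '1.1.3'];
  const peerRanges = ['>=1.1.1', '>=1.1.2', '>=1.1.3'];

  // highest shrinkwrapped version satisfying every peer range
  const winner = shrinkwrapped
    .filter(v => peerRanges.every(r => semver.satisfies(v, r)))
    .sort(semver.rcompare)[0];

  console.log(winner);  // "1.1.3"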

That would protect you from this "colors" scenario of somebody publishing `1.1.4` with malicious changes because, unless somebody were to shrinkwrap it, then it wouldn't get picked up by default. (In contrast to the current semver, which likely would pick up 1.1.4 upon a fresh `npm install`).

The inverse side of this is highlighted by another comment on this post: That semver makes absorbing security updates an easier process in the event of a log4j-style vulnerability. There is always a tradeoff, for sure!


Recent and related:

Dev corrupts NPM libs 'colors' and 'faker', breaking thousands of apps - https://news.ycombinator.com/item?id=29863672 - Jan 2022 (955 comments)

Important NPM package colors from Marak causing console problems at the moment - https://news.ycombinator.com/item?id=29861560 - Jan 2022 (1 comment)

Creator of faker.js pushed an update of colors.js which has an infinite loop - https://news.ycombinator.com/item?id=29855397 - Jan 2022 (1 comment)

Marak adds infinite loop test to popular colors.js - https://news.ycombinator.com/item?id=29851065 - Jan 2022 (7 comments)

Marak's GitHub account suspended after he erased his faker project - https://news.ycombinator.com/item?id=29837473 - Jan 2022 (53 comments)

Faker.js Erased by Author - https://news.ycombinator.com/item?id=29822551 - Jan 2022 (2 comments)

Popular JavaScript package “Faker” replaced with message about Aaron Schwartz - https://news.ycombinator.com/item?id=29816532 - Jan 2022 (3 comments)

Faker.js Has Been Deleted - https://news.ycombinator.com/item?id=29806328 - Jan 2022 (9 comments)


This isn't actually about colors repo. It is about package manager design.


In this case, Microsoft seized ownership of an open source project and modified the code without the permission of the person who still controls those copyright rights that they haven't specifically waived.

Legally, shouldn't Microsoft limit themselves to banning the developer from npm and forking their repos under a new name?


Didn't they just revert it to a previous version? A version they are legally allowed to distribute? They are legally allowed to ban people and use whatever URLs they want to host git repos. They are also legally allowed to give someone else permission to publish to npm under a specific package name.

Seems like they did things they are allowed to do.


Microsoft isn't asserting any sort of ownership - colors.js is licensed under MIT. Microsoft is free to make whatever changes they like and redistribute said changes. (But as was already mentioned, they merely dropped the broken versions and set the last working version as "latest".)


> Microsoft isn't asserting any sort of ownership

The person who holds the copyright isn't allowed to access the repo.

Microsoft changed the code in their repo.

Those things don't happen without Microsoft asserting ownership.


The person who holds the copyright doesn't own the storage for the repo. They agreed to a terms of service, which they almost certainly violated by pushing malicious code. Microsoft undid a change; that's not even derivative, it's literally the same code that the author published. To claim that Microsoft can't remove content from GitHub is wild.


While I agree that Microsoft has every right to deny service to the developer, ban their account, and remove all of their repos, that doesn't mean that Microsoft has the right to forcibly seize the developer's IP.


What action here constitutes the seizure of IP? The code was licensed under MIT. The developer is free to host it elsewhere and Microsoft retains no rights other than the ones explicitly granted to them.


Being the copyright owner of a piece of OSS doesn't give you control over every location where that software is hosted. For example, the developers of Python can't update the Python package in Debian's apt repo, Debian decides when to pull in new versions. (And if they want to add custom patches.) This doesn't mean Debian is declaring ownership of Python, they're simply distributing it in accordance with the license.

Just because NPM allows developers to self-publish doesn't mean that's a guaranteed perpetual right, and it doesn't mean MIT-licensed packages can't be published on NPM against the developer's wishes.

Licensing code under the MIT license (or any common FOSS license, really) is the wrong move if you want to control where your software is distributed, and by whom.


Who owns the repo? The author has the right to package the code, but he doesn't have the right to use other people's platforms to distribute it. This is a lot like Twitter bans, isn't it?


Is Twitter allowed to change the text of your tweets?

As mentioned in the original post, Microsoft had the right to ban the developer, and not host their projects in the future.

They also had the right to fork the repos and change the fork in any way they liked.

What I don't believe they have is the right to seize and modify somebody else's IP without permission.


The MIT license explicitly gives the entire world the right to distribute and modify any project licensed under it. The author isn't allowed any takesie-backsies. Everyone is well within their rights to host copies of faker and colors and any other MIT-licensed project in existence regardless of whether or not the author objects to it.

If the author didn't want third parties redistributing copies of their code, they shouldn't have released it under the MIT license.


Well I didn't explicitly license my tweets to allow that, so maybe not. But they probably are, anyway.

What's really the difference between "fork the repos and change the fork" and "seize and modify somebody else's IP without permission"? It just comes down to the specific reference/name on GitHub and npm. I don't think there are any IP issues involved. I am pretty sure that those platforms have given themselves unlimited rights to do whatever they want with the identifiers inside their platform. You don't "own" a username on social media.

Maybe the worst that could be said is that they are impersonating someone, which might be illegal? IANAL as should be obvious.


> What's really the difference between "fork the repos and change the fork" and "seize and modify somebody else's IP without permission"?

Ownership is the difference.

In the US, you get copyright on the code you write.

The MIT license waives certain rights to the code you own, but not all of them.

The developer gave Microsoft (and everybody else) the right to fork the code and do anything with the fork they liked when they chose the MIT license.

They did not give up every right to the code they owned.


I live in a country where copyright is automatic and inalienable, so I am aware. But what does seize mean in this context?

What is the difference, as far as the MIT License is concerned, between "fork the repos and change the fork" and what Microsoft is doing here? They are not seizing the copyright, if such a thing is even possible.


Licensing your code under an open source license is not the same thing as giving up ownership of your code.

Here's an example where a huge open source project must get permission from every coder who ever contributed before they can make a change to the licensing.

https://blog.llvm.org/posts/2021-11-18-relicensing-update/

This coder chose a license that allows you to fork and modify the fork. That does not give you the right to seize the project and change the original.


GitHub has always owned 100% of the project from the day the developer created their account. The developer owns the code, yes, but the account itself and the actual GitHub project structure is 100% owned by GitHub. From a legal perspective, it is a 100% GitHub-owned project that's a derivative work of the developer's code. Legally, the website is a derivative work, and derivative works are owned by the creator of the derivative work, not by the owner(s) of whatever the work is derived from. There are restrictions on what the owner of a derivative work can do based on the licensing of the original work (for example, GitHub can't merge GPLv2-licensed code and Apache2-licensed code hosted on their platform) but GitHub still owns the derivative work entirely.

For example, if I were to stand up a website using Apache httpd with PHP and Drupal, then the website is 100% mine, but it contains code owned by the Apache Software Foundation, Dries Buytaert, and Zend Technologies. None of those three have any rights over my website, even though they own the code I built it on. I still have to respect the licenses of the code I use—I can't make my own fork of httpd containing code I copy-pasted from a GPLv2-only project, for example—but the website is still my website.

Or for a non-code example, let's say I were to write Lord of the Rings fanfiction. As a derivative work, the fanfic is 100% mine even though it contains characters copyrighted by the Tolkien estate. I can't legally distribute my fanfic to people without getting a license from the Tolkien estate (but thankfully the Tolkien estate is willing to look the other way), but it's still mine, and the Tolkien estate can't just yoink my fanfic and publish it in an anthology unless I give them permission either.


NPM/MS/GitHub are distributing code consistent with the terms of the license provided to them. The developer's (current lack of a) relationship with his (former) service provider doesn't have any bearing on that.

If one doesn't want service providers to distribute code they're licensed to when other relationships are terminated, one should include those terms in the license under which they release their code.


If you edit a tweet I'm sure Twitter reserves the right to roll back your edit, yes. And I don't see why that would be illegal.

Edit: re: reserving the right, https://twitter.com/en/tos does indeed grant Twitter the license to "adapt" and "modify" the content you post to it.


It's certainly setting a precedent that Microsoft can assume complete editorial powers over a package.

Platforms (as opposed to distributed protocols) enable this, but most platform owners tend to avoid it because users would rightly think it unfair.

Bold move by Microsoft, in this case though it's unlikely to backfire.


They don't need permission to modify the code under the terms of the license granted by the owner.

The owner still owns the code and they can publish it or make it available where they like.


Although, I can see how altering a repo under someone else's name rather than forking it is problematic.

In this case I can forgive them as the repo owner was clearly acting maliciously and causing major problems for many others.

We do need to figure out how this situation should be dealt with in a transparent way in the future.


They were certainly granted the right to fork the code under the license chosen by the developer (and then make changes to the fork) but that isn't the same thing as changing the original.


It's an interesting question as to what constitutes the "original" code. Presumably that exists on the developer's machine. It wasn't written on GitHub.

But I do agree that the expectation is that the repo is controlled by the repo owner, and that expectation has been violated.


Surely "the original code" would be the last code committed by the owner of the repo.

As always, if we don't like the direction they have taken, the solution is to fork the project.

I think it would be a mistake to set precedent based on how much you like or agree with this particular developer.


The developer owns a copyright to the specific work - the literal arrangement of characters in a specific order, and non-literal aspects like structure or organization. Copyright is created when the work is 'fixed' but is not attached to a specific copy of the work. All copies thereof are either made by the owner, produced under license, or a violation of copyright. Everyone can distribute any work they have a license to, intact or modified, latest or not, as long as they adhere to the terms of the license. "Ownership" of a repo, latest commits, etc. barely enter into the equation.


The developer only owns the code, not the account. GitHub owns 100% of all accounts on their service.


Github has a clickthrough license that probably gives them this right.

If you are of the legal opinion that one cannot simply wish away some fairly fundamental rights through something as blazé as a clickthrough license, then that's a problem: It means the even-worse-than-clickthrough FOSS licenses that handwave away merchantability surely have no legal standing either. Then, if you are also of the legal opinion that the relationship between FOSS author and FOSS user is a vendor/buyer one, then what Marek did is _illegal_. At which point Github/microsoft surely regain their right to intervene. You are usually allowed to attack someone if they are about to commit a murder, even if that act would in all other circumstances be a slam-dunk assault case.

You're now banking on the notion that the relationship between FOSS author and FOSS user is fundamentally not a vendor/buyer relationship - even though GitHub has a sponsors system and the FOSS license uses terms that relate to vendor/buyer relationships - and that copyright law nevertheless still applies. That's possible, certainly, but that's a lot of coin flips that have to land correctly.


> Github has a clickthrough license that probably gives them this right.

If Microsoft is asserting the right to seize control of anybody's open source project any time they choose, they are going to have much bigger issues than if they had simply made a mistake and overstepped their bounds.


> seize control of anybodies open source project any time they choose

The platform the code is hosted on belongs to Microsoft. The only thing they cannot do is impersonate someone (e.g., pretend they were the author and make a commit), because that would be illegal.

However, they can roll back a commit from being displayed - it is no different than only showing certain commits (it's a right that a website has - showing _anything_ they wish to). There's no claim of ownership (of the copyright), and there's no seizure of the code IP rights whatsoever (for an example of seizure, see the multitude of DMCA takedowns, which is a right specifically granted to them by the state).

Presenting this case as a violation by Microsoft is incorrect.


In this case, we have a clear, bad actor. MS has a good reason to intervene on their own platform.


Basically, applications should use a lock file for dependencies based on known tested good versions of dependencies.

How is it that people aren't doing that today? For the sake of security and stability, lock files should be used.
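
Concretely, the whole recipe is two steps (npm 5+; yarn has equivalents):

  git add package.json package-lock.json   # the lockfile is source, commit it
  npm ci                                   # in CI: install exactly the locked tree, or fail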


They are used. A lot of the comments in this thread that are about NPM specifically are strawmen, since it has been standard to use lock files for years.

You could still have an issue if you need to update a dependency, for security or other reasons, since it could bring along a bunch of updated sub-dependencies of its own (and sub-sub-dependencies, and so on). But that problem is not unique to NPM and exists in any language or platform that includes package management.


> Other package managers should take note too. Marak has done all of us a huge favor by highlighting the problems most package managers create with their policy of automatic adoption of new dependencies without the opportunity for gradual rollout or any kind of testing whatsoever.

(Emphasis added) - is this actually a widespread practice? That's certainly not how apt packages are handled...my impression was this is a problem unique to the js ecosystem.


I don't have an answer to your question, but I'm glad you've highlighted this part of the article. The idea that an attacker "has done all of us a huge favor" by attacking the free software community is so toxic that it needs to be called out, even if it was meant somewhat in jest.

If we don't reject this logic, then we'll get more attackers claiming "just a prank, bro!" and "social experiment!", like the University of Minnesota researchers carrying out human experimentation on kernel developers without their consent:

https://www.theverge.com/2021/4/30/22410164/linux-kernel-uni...


Well, do you want bad security practices changed because of a prank that hit millions of machines or because of a cryptolocker that did the same?

We should encourage old-style prank hacking.


I'm not sure what this prank showed beyond what all the previous malicious NPM packages already showed, other than that developers of free software are unstable and can sometimes ruin your day for lolz.

Even if you accept the idea of vandalism being used for a positive purpose, a better form of protest would have been to make the package just print a message saying "This software has been abandoned by its author. Please pin your dependencies to known good versions." and then exit.

That would still have been annoying to the people having to do that unnecessary version pinning work, but would at least have preserved some shred of sympathy for the maintainer.


I meant mostly in general.

For this particular case obviously previous packages didn't show it clearly enough. And yes, if you give thousands of 3rd party devs (or anybody snatching their credentials) direct access to your build machines or production systems, you should absolutely expect some of them to be unstable in all kinds of ways.

The insider problem is hard enough to guard against when you know the people involved.


Pin your versions. This was so effective because no one pins the versions of what they are actually using.

Anyone saying it was the dick's fault and not yours needs to start owning their project.

I admit that the last time I worked with node it was a cluster fuck of nested dependencies that only worked through the prayers of angels above. But standards are standards: own your project.
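
And if you want npm to stop writing caret ranges on your behalf in the first place, it's one line of config (save-exact is a real npm setting):

  # per-project .npmrc
  echo "save-exact=true" >> .npmrc

  # subsequent installs are recorded as "colors": "1.4.0"
  # rather than "colors": "^1.4.0" (versions illustrative)
  npm install colors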


The many suggestions about what NPM should do differently are universally all really unsatisfying, rearrange-the-deck-chairs-on-the-Titanic level solutions.

At the end of the day, if you're running code that you didn't write or verify yourself, you're taking on risk.

IMO the next hugely successful package manager (in whatever programming language) will be the one with a really good capabilities model, so you can run untrusted code with much less worry.


Module-level capabilities is an interesting idea. We still don't have process level capabilities though, at least not for native code. Yet another argument for wasm and friends, I guess...


One feature I've wanted for some time now is the ability to say "give me all the latest versions that are at least N days old" when you run npm install without a lockfile or npm update.

The idea being "I want the latest version, but not the absolute bleeding edge, I want something that has at least baked for a bit".
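
You can approximate this today against the registry's publish timestamps (a rough sketch, not a built-in npm feature; assumes Node 18+ for global fetch):

  const DAYS = 14;
  const pkg = process.argv[2] || 'colors';

  fetch(`https://registry.npmjs.org/${pkg}`)
    .then(res => res.json())
    .then(meta => {
      const cutoff = Date.now() - DAYS * 24 * 60 * 60 * 1000;
      // meta.time maps 'created', 'modified', and each version to a timestamp
      const baked = Object.entries(meta.time)
        .filter(([v]) => v !== 'created' && v !== 'modified')
        .filter(([, t]) => Date.parse(t) <= cutoff)
        .map(([v]) => v);
      // entries come back in publish order; a real tool would sort by semver
      console.log(baked[baked.length - 1]);
    });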


How do you tell your "baked" version is any good though? Doesn't that mean N days after colors.js ground zero you'll start picking up the bad colors.js since it's now baked for N days?


The "bad" colors.js existed for less than 24 hours before it was removed by NPM. Similarly the other recent malicious packages were detected and removed quickly.

The idea is that for maliciously published packages like this (both deliberately in this case, and also when the maintainer's account is hacked) that not all packages that depend on this dependency will need the same length of N. Some users will want to pick it up right away, but some very conservative applications may want N to be much longer. It's basically the idea that alpha/beta users can be the canaries, but those who don't want to take any risk can hold back. Right now, unless you've got a lock file, essentially everyone is pulling the latest released version whether they want to be conservative or not.


Ah, that was the piece I was missing. Basically we are relying on NPM to curate/remove obvious bad packages from the global registry within some timeframe (say, 3 days) and by lagging behind by 3 days you'll never pick up an obvious bad package.


Yes, but presumably it would have been discovered by then.


Seems like you would also want "and there isn't a release 2 days after it fixing a regression". Starts to get tough :/


It seems npm and GitHub do take action in cases like leftpad and colors.. and people don't like them (Microsoft) doing so unilaterally. Maybe part of the 0-day fix is to go multiparty: allow weighted votes by dependents to take over the namespace to force a patch/reversion. You own your code etc., but the OSS distro service has its own community-owned rules, and you are free to run a competing package manager without them. By using community-managed package managers, you signal your intent not to break your users & contributors, and give an explicit remediation mechanism for handling such trust-chain violations. Instead of a hundred-message GH thread, we get a voted minor version bump same-day.

Though agreed with the sentiment that 'prefer stable' during install would be A+. 1am package scans were a cruddy way to start my vacation.


> It seems npm and GitHub do take action in cases like leftpad and colors.. and people don't like them (Microsoft) doing so unilaterally.

A few people don't like this, but I think that most people are quite happy not to have their projects compromised. I would not assume that the loudest voices are broadly representative, especially with Microsoft being a lightning rod for complaints.

> Maybe part of the 0-day fix is go multiparty: allow weighted votes by dependents to take over the namespace to force a patch/reversion.

I think there's a lot of promise in this approach, especially if it's all public and deliberative — e.g. freeze the attacker's account, remove the compromised package, etc. but then have a public voting period for a transfer. The other question I'd have is whether it'd make sense to have a restricted set of maintaining organizations — e.g. I'd be a lot more comfortable if, say, the new project got transferred to an organization like the Apache foundation than some random developer who could plausibly be preparing to quietly ship a crypto-miner or something.


I like what npm is doing in these cases and I think so do most developers.

Isn't this what people are always praising about Linux package managers? That they modify the software in all kinds of ways and take things into their own hands, beyond the developer's will?

For GitHub it's maybe not a great look. If there was content in the readme that they felt violated their ToS, fair enough, I guess. They aren't obliged to publish that.


Yes, agreed.. except move from relying exclusively on MS npm employees for shepherding OSS packages to the much larger JS community, and especially those tied to any specific project. MS is richer stewardship than when npm was a struggling VC-funded company, but it's still a clearly struggling community moderation model, especially for OSS.

I'm skeptical of cryptocurrency but do like DAO-style innovations (with no $ involved) because of this kind of issue. There are many stepping stones to get there, and too many unknowns to go whole hog with an overly ambitious scheme. But it seems inevitable to divest & decentralize from MS here. Community moderation has small components that can be iterated on, pushing toward a community model vs the current overwhelmed and, overall, inappropriate Microsoft moderation-team model.


> Maybe part of the 0-day fix is go multiparty: allow weighted votes by dependents to take over the namespace to force a patch/reversion. You own your code etc, but the OSS distro service has its own community-owned rules, and you are free to run a competing package manager without them.

Not a good idea. You are pretty much asking to be review bombed. You would need very good moderation to be able to trust votes. That means npm spending a bunch of resources on hiring moderators, and I'm not sure they are up for that (no, free mods aren't the solution).


Yep, so build in adversarial controls like thresholding. The status quo is already beyond what gh/npm are managing: they already need the mods they aren't hiring. Scanners are helping, but a big part of GH is community tools, so weird not to figure it out here too.


Alternatively, instead of a futile attempt to reinvent the universe, services like NPM could stop pretending that dependencies are easy. They should be encouraging people to pin versions, keep track of updates, and avoid packages with poorly defined dependencies of their own.


Or you could just pin to a specific version and not update until you test.


Those both have their own scaling issues, so they're not as clear a step forward.

Ex: We pin & package-lock our versions, but it's harder for libraries to (they should use ranges), including ones we release. Likewise, for our apps (non-libraries), updating pinned package locks is reasonable for our direct dependencies, but nested dependencies are hard to really have confidence in. Colors and leftpad both exemplify this: both projects fail our standards for direct dependencies, so the concern is nested ones. Unlike apt, we don't want super-stale versions of everything.

For stuff like security, defense in depth + minimal targeted mechanisms for specific threat models are generally a win, and I think that applies here for mitigating rogue releases by rogue package owners. Shifts like bitcoin -- crypto mining, ransomware, and wallet theft -- have made stuff like package buyouts and sneaky patches a reality where we must 'assume breach' of bad releases, not just try to prevent them.


I take issue with the title of this article. This was not an attack. The project's author committed a change to his own open source project, which he had every right to do.

Was that an asshole move, and did he lose all credibility in the OSS community? Yes, absolutely. But it was not an attack.

In fact, I also take issue with GitHub / Microsoft taking over the package and am very worried about the precedent that sets, regardless of whether their motives this time appear to be entirely selfless.


Just because the package owner had implicit permission to run code on other people's machines, doesn't mean that this wasn't an attack.

A postal carrier may have a key to a building to deliver mail, but if they use it to turn on all the faucets and leave the doors open, it's an attack.


That's exactly backwards. He wasn't granted permission to run code on other people's machines, he granted other people the right to run his code on their machines!

Wow!


There is a social contract in the world of open source. The author violated it.

If he exploited trust to steal private crypto keys, we'd have no trouble calling it an attack. The difference is merely a matter of degree.


> There is a social contract in the world of open source.

No there isn't. What there is however is the actual contract of the repo (MIT License):

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Listen, open source is about freedom, and the author is perfectly within his rights to freely change HIS OWN REPO as he wishes. Just as anyone is within his rights to freely FORK THE REPO.

This is open source.


> This is open source.

No, it's not.

The idea that you can waive the implied warranty of fitness for a particular purpose covers good faith mistakes or omissions. It doesn't cover intentional misrepresentations or excuse blatant fraud and criminality. For instance, if you sneak in a ransomware package, it's not "all good folks" because "there's no warranty, it's my repo and you can munch on my nuggets."

The author offered something to the community on certain terms and made certain representations. The author then decided that they had seller's remorse, and instead of acting like a professional adult who realized they made a poor investment (and finding a new maintainer), they threw a tantrum and decided to try to stick it to "the man" by violating the representations they previously made to the community as a whole.

This was petty, petulant and in bad faith. Does it rise to the level of criminality? It might, IANAL though. Nobody's going to go after them because it was more immature than truly harmful.


Perhaps it would help if you took a step back and realised you are demanding something from someone who volunteered to let you have some of their work with no guarantee other than that you'd get the work.

There was a time where we'd say "yes, this is cool. I'll use it" and we moved on. Now you can't publish something online without an army making frankly toxic demands on your time because they feel there is something wrong with what you made.

The attitude you are displaying right now is the one that birthed CoCs and the various other straitjackets that we use these days because folks have lost the ability to tell a toxic community to go jump in a lake when they make demands they shouldn't. And it is a shame, because the permissive and open climate is what makes open source: If you don't like it, you fix it.


> Now you can't publish something online without an army making frankly toxic demands on your time because they feel there is something wrong with what you made.

Asking you not to intentionally break a package you know 20,000+ products rely on as a result of your representations hardly seems like a toxic demand on their time. They hadn't committed to the project in over three years. All they had to do was continue not committing to it.

Asking them to find a new maintainer or not intentionally wreck other people's work that relies on it isn't a toxic demand, it's asking for a modicum of respect.

I'm not saying you have to keep building sandcastles; I'm saying that if you decide to stop for whatever reason, it's not a toxic demand to ask you not to knock over 20,000 other sandcastles on your way out - out of spite.

This feels like a clear case of stupid games -> stupid prizes.


That analogy needs rephrasing, the OP built a sandcastle and left instructions on how to build it which 20,000 other people used to build (their own) sandcastles.

Then the OP decided to make a new change for his own castle.

"Draw a vulgar expression on the front door."

Which 20,000 people blindly copied and then have the audacity to complain about.


It's not audacity to use a piece of software in the way it was made available on the terms under which it was made available. What is audacious is simply rugging everyone that took you at your word.

[edit] To be honest you're not even advocating for a coherent model of open source software. Should everybody consuming every library fork it in case the author throws a tantrum? That hardly seems workable. The author didn't just stomp on big company sandcastles, the author knocked over everyone's. Every pet project, every startup getting off the ground.

For what? Seller's remorse. The author wrote something and gave it away, then regretted giving it away. Sorry.


> Should everybody consuming every library fork it in case the author throws a tantrum?

Yes you should always fork open source projects critical to your project that are not mirrored somewhere safe, you never know when they may be inadvertently deleted, that's just common sense.

For dependencies, simply pin your versions and put library upgrades through a code review, that way no unknown code enters your system. Or just wait a couple days after a new release, the shit will hit the fan from all the incompetents.

> That hardly seems workable

Right, no one wants to do the work.

> For what? Seller's remorse. The author wrote something and gave it away, then regretted giving it away. Sorry.

For what? Buyer's remorse. You blindly pulled code without looking at it, then regretted pulling it. Sorry.


> For dependencies, simply pin your versions and put library upgrades through a code review, that way no unknown code enters your system.

These are all solid recommendations but don't excuse shit behavior on behalf of certain poor participants in the ecosystem. Which it seems you're super eager to do for some reason.

> Right, no one wants to do the work.

I mean so far it looks like one guy lol.

> For what? Buyer's remorse. You blindly pulled code without looking at it, then regretted pulling it. Sorry.

We're talking about this guy's motivations not that of the consumers.


> don't excuse shit behavior on behalf of certain poor participants in the ecosystem

Why not? You are excusing the incompetence of everyone affected by this.


It's not a ransomware package and drawing upon metaphors like this is poor reasoning. I suggest you read some Dijkstra on this idea of "medieval thinking."


[edit] The point I was making is not that the author committed a ransomware package (or even that it is a good analogy) but that the idea there is a blanket waiver of warranties that covers whatever you wish it to by virtue that it is yours is trivially falsifiable.


Sorry, I can't continue a good faith argument with someone who defines this as malware. It's delusional.

I won't edit my comment above except to note what the original parent comment was:

" The author committed malware to a package many relied on, that they had not made a commit to in 3 years. My point is not that the author committed a ransomware package but that the idea there is a blanket waiver of warranties that covers whatever you wish it to is trivially falsifiable."


You are correct that calling it malware is borderline - it was malicious software designed to cause harm or nuisance (to some degree) for users. However, I have amended my comment for clarity of intent.

The definition of malware for the record is:

  ...  software that is specifically designed to disrupt, damage, or gain unauthorized access to a computer system.
Seems close to me; the modifications were designed to disrupt the users of this package. But this is not strictly relevant.


All I want to say is that if we accept your argument as 100% true, then corporations have done much worse with software end users have actually PAID for. Yet this same community holds this developer of free software to a higher standard than they apply to their own commercial projects.


Out of curiosity do you have some examples where customers paid for a library, and then the author of that library decided they disliked the terms of the arrangement and injected some code to make it non-functional demanding nebulous payment? Or worse?

I suspect not, because they would be sued into the dirt for breach of contract and shortly find themselves out of business. In fact I believe the customers could even sue for specific performance and force the seller to actually execute the contract as committed.

It's not that I hold free software to a higher standard; I hold folks to the standard they committed to. The author had big seller's remorse, upset that they had offered this software free of charge. I'm sorry, we all make decisions we regret. If they didn't want to maintain it anymore, they could have found a new maintainer. Instead they decided to mess with the computers of people who accepted their representations.


Sure, How about Google borking a user's device with OTA updates and then telling users to go pound dirt with their silence?

https://9to5google.com/2015/04/10/nexus-5-nexus-7-bricked-an...


That appears to be (a) unintentional, which is a critical distinction, (b) 7 years ago, and (c) if the comments section is to be believed, resolved by calling Google up, explaining the situation, and receiving a complimentary warranty extension.

  To their credit they stood behind it. Mine was repaired, FREE (October 2015)
The delay was, more than likely, simply that big companies move slowly. This is not the same thing.


Point A is a simplification. Point B is irrelevant: if you think it's bad, it doesn't matter when it was done. Or will you say that in a few years, what you think is bad that this developer has done is no longer bad? This is flawed reasoning. On point C, you're sugar-coating it. He initially was denied and given the runaround, and only after making a sufficient fuss was he deemed worthy of a warranty extension. How many others weren't? It's the same thing in American healthcare, where insurance companies will "accidentally" deny coverage a few times for certain things just to get the less persistent people to give in and not bother fighting for what they're owed.

But here's the real context on point A, since you don't want to actually look into it further and most outlets aren't going to bother doing it either. Once Google realized what was happening, well, it kept happening. They were pushing these OTA updates in some cases with zero user intervention. Not to mention you're taking the comments out of order: that specific commenter had his device bricked multiple times. Why don't you quote that part of his dialogue?

Maybe you should ask why devices would be bricked in the same way multiple times?


I'm sorry but there's no comparison between mishandling a bug causing an issue for customers unintentionally, and intentionally nuking all your customers from orbit with a malicious commit because you threw a temper tantrum. There's a ton of good ways to handle this. Find a new owner, for instance.



It happens all the time.

Some anecdata: When Garmin bought Navionics they cancelled everyone's paid lifetime licenses, pushed updates demanding logins (worse in this case than others because 99% of the places you'd use their software have shitty internet, so an intermittent login wall is a bit crippling even if it were free) and monthly subscriptions, and hounded every negative review saying that they were doing it for the users' own good. Could a person sue for $19.99 without corporate lawyers juggling them between headquarters in Kansas and Italy? Maybe, but it's a pain in the ass for minimal gains.


You should have been on the internet a couple decades ago, you'd have hated it (and we'd all have laughed at you.)


I don't think you remember how the internet was 20 years ago very well to make a statement like that.


Thankfully, most of the community has matured in the interceding decades.


Actually, most of the community no longer spends their time here, but of course you wouldn't know about that, you are not invited.


If it's a community of folks who release open source projects, see their success, then throw tantrums I mean, that doesn't sound like a loss to me. I'm happy that y'all were able to build a community on common foundations and I wish you the best of luck!


At first, I upvoted zepolen's comment about there being no social contract, only a license, but something bugged me about that, and I felt that I wasn't right.

After thinking about it for a while, I think that you are right: there is an implicit social contract when using open source software, and part of this contract is expecting the maintainers to provide the best quality software they can create.

But the thing about contracts is that they work between multiple parties, and not just the maintainer(s) and some void. And consumers of the library didn't do their due diligence, that is, supporting the code that they rely on.


Best quality in what regard and to whom? Software in its purest form is for self expression which is exactly what he is doing. He never explicitly intended this software for commercial usage. And in a non-commercial setting his changes aren't even close to being deemed harmful.


> And in a non-commercial setting his changes aren't even close to being deemed harmful

So you don't even know what happened?

There's an infinite loop after it finishes printing garbage. It's an unquestionably harmful change.

Also, you know how you prevent non-commercial use?

https://github.com/Marak/colors.js/blob/master/LICENSE

-

I don't get it, Marak is clearly having some sort of episode*, and people are bending over backwards to rationalize the result as if there's some clear connection between his intent and his actions.

How many people realize he made those anti-commercial comments two years ago? Right now it actually seems like his comments were more political than financial.

* I don't mean that in a backhanded way, based on his history and his recent comments I think this is someone who needs help regardless of his actions on NPM*


it's art. I like the idea of occasionally writing to my fans....:D


lol do you actually think people care about providing the 'best quality software they can create' https://www.npmjs.com/package/is-even etc


[flagged]


I live in a world in which you get banned from Github, NPM, and possibly other communities for violating that "imaginary" social contract.


All centralized platforms, with the inherent issues that have been analyzed for decades already. Continuing to use them and complaining of those issues doesn't make any sense at this point.


Who is complaining? I'm glad this guy got booted out of these (relatively) high-trust societies.


Merely a matter of degree? There is a chasm between stealing data and printing nonsense to stdout.

That's like saying getting all your limbs cut off is technically a flesh wound and the only difference between that and a papercut is the degree of the wound.


Well... yes. That's exactly what "degree" means.


Yes, but you wouldn't use it in this manner in a good-faith conversation.


> There is a social contract

No. You use what they wrote because they allowed you to. It is not then their responsibility what you do with it or how things inconvenience you. It is up to the user to vendor the dependencies and make sure everything works the way it should.

> The difference is merely a matter of degree.

In the same way that using your front door to break someone's nose on purpose and someone skinning their knee on an unfortunately placed step in your garden are different degrees of assault, sure (i.e. they're not, and the only sense in which they are is pedantic and not useful).


> No. You use what they wrote because they allowed you to. It is not then their responsibility what you do with it or how things inconvenience you. It is up to the user to vendor the dependencies and make sure everything works the way it should.

There is also no obligation for you to use or distribute newer versions, even if you are NPM.


This still leaves the elephant in the room: who decides when this line has been crossed?


I think the organizations that spend tons of money to run the central repositories we all use are a good candidate for that.


I would have said it's the organization that earn tons of money and don't give a cent to developers if they can avoid it who should think about the structure they have created and profited from and now realize open source doesn't exist in abundance like air.


Intent is what matters: was it meant as an attack? It kind of was, but Joe Developer Schmoe really just got caught in the crossfire of some guy who watched Mr. Robot one time too many.

To answer your question directly, I think the jury’s still out and it will depend on the repercussions. So far it seems to have been a nuisance at worst.


He didn’t, though. People signed up to a distribution method to run the code and just assumed future updates would be okay. He didn’t push the code to them, they pulled it.


People didn't actively go seek out a new version.

He intentionally abused semver to disguise it as a safe update, when it was not.

I know it's popular in programming/infosec to blame the victim for trusting anyone, but you can't deny that it's common behavior among many (though not all) projects to use the caret prefix when specifying dependencies, because you trust the package maintainer to honor the semver agreement.


It's his project. It's a free, open-source project. What does the maintainer owe us? Absolutely nothing.

> I know it's popular in programming/infosec to blame the victim for trusting anyone, but you can't deny that this it's common behavior among many (though not all) projects to use the caret prefix when specifying dependencies because you trust the package maintainer to honor the semver agreement.

Be thankful that all he did was spam the console. It could have been something more malicious. Updating dependencies without auditing source code is a very common practice, but that doesn't mean it's not the victim's fault. It is unrealistic to audit the insane amount of source that our projects are built on, but that is the exact tradeoff that we're making in exchange for the productivity of not building libraries ourselves or paying for those libraries.

Open source is something that we're gifted not something that we're owed. If you disagree, then feel free to either use exclusively proprietary software where you can have expectations of the maintainer, or build all of the functionality that you need by yourself.


I think that open source developers still owe it to us not to maliciously run malware on our machines.

To be clear, they do not owe us good code, or continued development, or the code we want; it is remarkably close to charitable giving: the developer/maintainer is donating their work/code to the community.

Just like with charity, you should not give poisoned gifts.

The kid who gave homeless people Oreos with toothpaste filling wasn't charitably offering food to someone in need; he was maliciously offering them a "poisoned" gift.


Open source developers don't owe anyone anything. It's this entitlement that has led so many of our fellow people in the community all the way to depression. Stop expecting something when it was explicitly stated that software is provided with no guarantees.


My point is different: people owe it to everyone not to maliciously inject malware onto their machines; it is irrelevant whether you are an open source developer or not.

The author published the packages to a public registry with the expectation that the code would be run by others. It was not just a random GitHub project; it literally invited others to run the code by being registered on npm.

The author explicitly offered no warranty on the code, but distributing malware is both illegal and against npm's ToS.

Being an open source developer does not put you above ethics.

The only thing that would change this would be if some third party were mirroring their code on npm without the author's consent.


If you choose to download my code and include it in your project and it doesn't do what you expect, that is on you.

The colors library has an infinite loop and you shipped that version to your end users? That's on you. Test and pin your dependencies. Not doing so is the true offense. Be upset with AWS and the like, who will run unvetted third-party code on your machine.

Also note that an infinite loop is not malware. Implying such is insane. Are you now going to call every piece of crashing software malware?


People are going to include the code in question mostly via transitive dependencies. Creating an app with create-react-app will lead to at least 2000+ such dependencies. That's just how the entire npm ecosystem works, and so there are certain community-wide expectations not to do this sort of thing, and when it does happen, to have mechanisms in place to contain the damage. I agree that people should reassess their exposure to such an ecosystem, but that is beside the point.

I believe this is akin to not driving dangerously on the road. There is obviously a social contract. This what living in a society is all about. Just because someone can do something doesn't mean we can't expect otherwise.

His work benefited from the work of many other people who also released open source software. That includes not only the packages he released - which had other contributors, some of whom actually wrote more of the package than he did (colors.js) - but also packages he more or less directly ported (faker.rb and CPAN::faker). He also benefited from the entire npm/node ecosystem.

People have every right to be pissed.


> If you choose to download my code and include it in your project and it doesn't do what you expect, that is on you.

Completely agree.

But the key concept you keep avoiding is malicious intent.

Let's make an analogy in a completely different topic: Are (D)DoS attacks wrong? do people owe us not to (D)Dos our machines?

On one hand, we connected our machines to the internet and declared ourselves ready to accept incoming traffic; on the other hand, spinning up a bot army to render said machines unusable is not nice.

Is it correct to say that you should have some kind of (D)DoS protection on your services? Yes. Is it also correct to say that people should refrain from crapping on other people's stuff? Also yes.

Getting back on topic: there is possibly a solid case to argue that this was not criminal (as no unauthorized "access" happened), but as much as open source volunteers deserve more appreciation, "don't intentionally damage stuff with the SOLE intent of damaging stuff" is something expected of any human regardless of their social position.


> Are (D)DoS attacks wrong?

Yes, if you attack infrastructure that you do not have permission to test. Here people downloaded code, ran it on their own infrastructure (or worse: shipped it to their users!) and were surprised this code did not do what they expected.

So, the only malicious actors here are those who shipped the package to their end users. Not the developer; they did not ship the code to anyone. They simply released a new version that other people chose to download and include in their own work.

To be clear, it's a total dick move and I don't support it at all, but the real blame should go to those who blindly download such code and run or redistribute it.


I totally agree.

We need better tools to prevent things like this from happening again because nobody is ever going to audit the 2000 dependencies of every new create-react-app release.


> I think that open source developers still owe us not to maliciously run malware on our machines.

The open source developer didn't run anything on your machine. You ran it on your machine. The developer published code. Others blindly pulled the update, and that propagated.


> Be thankful that all he did was spam the console.

Why would I be thankful for that?

If you slap me, should I be thankful that my nose wasn't broken?


If nothing else, he illustrated a point that many people needed made. They got off cheap - he didn't exfiltrate data, install malware or whatever. He showed that their supply chain is insecure, and that they are trusting way too much in the kindness of unpaid strangers.

If your business or development practices depend on pulling a bunch of packages from NPM or other sources un-audited and so forth - especially straight into production! - you need to seriously rethink things. You got off relatively light, this time, if you were impacted by this.


Fool me once… etc.


> People didn't actively go seek out a new version.

When they set their dependency versions to 'latest', that's exactly what they are doing.


However, I think the more common case is that they set it to the latest version bounded by a minor or patch version (as opposed to a major version), as defined by semver.

At least that seems to be quite common in the npm/js world.


> People didn't actively go seek out a new version.

They did. They chose not to pin their dependency versions, so this is their clear, explicit choice. They also chose not to test their dependency updates before pushing them through to their end users - pure negligence by e.g. AWS, which was affected by this.


> People didn't actively go seek out a new version.

Yes, they did. Have you read the article ? That's how npm works: it always pulls the latest version.

You can't use tools you don't understand and blame someone else if it doesn't work as you expect it to.


> Yes, they did. Have you read the article ? That's how npm works: it always pulls the latest version.

No, it pulls the specified version.

The affected packages fixed it by removing the leading caret in the version specifier, which was designed to allow patch version bumps for things like security fixes.

He literally abused the versioning system designed to allow security fixes by introducing a breakage and disguising it as a non-breaking change.
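
Concretely, the fix those packages shipped was a one-character diff in package.json (versions illustrative):

  -  "colors": "^1.4.0"
  +  "colors": "1.4.0"

With the caret, a fresh install may resolve to any 1.x.y at or above 1.4.0; without it, only 1.4.0 itself.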


There's a thread on the "zalgo" issue on GitHub where he's being a jerk. And he has a sizeable past history of jerkdom. Don't blame the victims that the well was poisoned, blame the jerk who poisoned it.

Do I really think he's a jerk? No, I think he's unwell. Previous headlines, lots of conspiracy theory posting recently, now this. He needs help.

Do I think this is an attack? We call other exploits attacks. The shoe fits. This was a deliberately hostile act.


> Don't blame the victims that the well was poisoned, blame the jerk who poisoned it.

Why are they drinking from someone else's well? Someone with "a sizeable past history of jerkdom"?

If you can check the water - nay, if you can copy the water and check it, and create an identical well from that water so that said jerk may never interfere with it again - then the "victims" have no one to blame but themselves.

If he had poisoned other people's wells you might have a point. He didn't, in the same way that poisoning the water I keep in my fridge is not an attack on anyone else. Why do you trust the water in my fridge without checking it?


> Why are they drinking from someone else's well?

Because the well said "free water". And in some cases it's not even them taking water from that well, but other people who they asked to bring them tea (third party libraries).

> Someone with "a sizeable past history of jerkdom"?

Do you really dig through the internet history of every maintainer of every package you ever used?


> Do you really dig through the internet history of every maintainer of every package you ever used?

No, because that would be impossible but I do try to check the libraries I use to the best of my ability, the same with browser extensions I install. I even read the ingredients on things I buy. Sometimes I read the instructions on drugs I'm prescribed.

The perfect is the enemy of the good, after all.


In fact, it said "free water without implied fitness for a particular purpose".


He did what he did knowing full well that people would end up downloading and running that code. By your same reasoning you could argue that websites distributing malware aren't pushing viruses to people, people are coming to them and pulling the virus onto their computer.


Microsoft may have the right to host and distribute this person’s code, but if they use it to take over the package and make decisions on the author’s behalf, it’s an attack.


A package isn't even implicit authorization to run arbitrary code on other people's machines. It's authorization to run a specific, known version of your code, preferably one that others are also using without incident.

That’s the whole point of TFA: this is a design flaw in NPM, a package author shouldn’t be able to have the impact this event did, regardless of how popular their package is.


It's still his project. He could have deleted it for all I care, it's his every right.

I wonder if GitHub/NPM even have the legal rights to modify the code you publish, depending on your license.


Deleting it would have been less offensive because deleting it would simply have caused his code to stop executing on other people's machines altogether. What he did instead was execute malicious code (albeit a relatively benign malice) on the computers of people who previously trusted him.

Deleting the package is within his rights. Modifying the package to be abusive may technically (legally) be within his rights but is certainly not ethical, and I'm not even convinced that it was legal. Had he modified the package to install a Bitcoin miner instead of just breaking apps, I suspect he'd be breaking the law, and it's not obvious to me that a DOS attack should be treated differently.


> Modifying the package to be abusive may technically (legally) be within his rights but is certainly not ethical

Of course it's not ethical. But, publishing whatever code he wants under his name, under his projects, is well within his rights.

We are setting the precedent that users can modify the code you've published under your own name, if they don't like what you've done with it.

That is not acceptable. My users should not have hegemony over my project. They can decide to fork, they can decide not to use it. But under no circumstances should they be able to edit my project under my name.

Because that's also unethical.

> I'm not even convinced that it was legal. Had he modified the package to install a Bitcoin miner instead of just breaking apps, I suspect he'd be breaking the law.

I really doubt this. People are choosing to run his code, and choosing to update to whatever new code he publishes, regardless of the contents.

It's not like he's forcefully installing his code on anyone's machine. By enabling automatic updates of his package, developers have implicitly granted him permission to run whatever he would like on their machine.

Unethical, sure. But perfectly legal.


Most people had no idea they even use colors.js as it’s used by some dependency of some dependency. For example, anyone installing AWS CDK afresh would be unable to deploy their infrastructure - it uses colors.js, which blocks the CPU as soon as it’s require’d deep in the stack. With that rug pull, the library author intended, designed, and successfully accomplished a Denial of Service move against all those people.


> Most people had no idea they even use colors.js

Why is that? Do NPM package managers not create a lock file that is a text file anyone can peruse?


Yeah, it's just that nobody reads every package in that file.

Apps generated via create-react-app have more than 2000 transitive dependencies.


That is a design problem of npm and really the nodejs dev community in general.

They (you) are blaming the dev because of inherent design flaws in their own development methodology.

People have been pointing out this problem with npm for years and years


> Most people had no idea they even use colors.js

Isn't that the problem? That they were running a lot of code from unpaid contributors without any guarantees (as said explicitly in the license) and then blaming those contributors when something breaks?


Yes, it's a huge problem. But the author here is absolutely at fault for this attack.

If I leave my garage door wide open all night and a bunch of stuff gets stolen, I'll kick myself for stupidity and vow not to do it again. I'll also call the police and hope they find the perpetrator, because there was a theft.

It is true that we as a profession need to drastically rethink our approach to dependencies and also that this author acted unprofessionally and should be condemned for his actions.


> If I leave my garage door wide open all night and a bunch of stuff gets stolen

It's more like, "I let some random guy do whatever he wants to my garage door and he decides to brick it."


> We are setting the precedent that users can modify the code you've published under your own name, if they don't like what you've done with it.

Not users of your code, platforms you distribute on, and that precedent has been rolling forward for a while now, it's not new. Even so, I don't approve of GitHub's ban, but that doesn't mean I have to endorse the author.

> People are choosing to run his code, and choosing to update to whatever new code he publishes, regardless of the contents.

This line of reasoning would suggest that the authors of malware that requires affirmative action to start running (download an exe, open a PDF, whatever) are clean and clear legally because the victim chose to run their code. How is this any different than a malicious EXE sitting on a server somewhere waiting to be downloaded? It can't just be that the victims in this case ought to have known better, because the victims in most cases of malware ought to have known better.


> are clean and clear legally because the victim chose to run their code

If you take a knife from my kitchen and use it on yourself that is not my fault.

If you ask me for a tissue and I provide you with a knife and tell you it's a tissue, then I have some responsibility.

Ultimately, in either case, why were you not more careful?


I mean to strain the analogy it's more like feeding someone a sandwich and then having the ability to transmute the sandwich into a knife once they've already eaten it.


Did he meta-program that infinite loop? I haven't bothered to read the code… like everyone else :-D


For me the question is whether the new (breaking) functionality was announced when the package was updated. It appears to me it was, thus I find no issue.

This is different from software announcing on the next release that they will include ads or even a crypto miner as part of revenue generation.

As long as the change is announced, there are no ethical issues, and if you as the dev or user fail to read the updates of your dependency tree, well, that is a failing on your part.


> I really doubt this. People are choosing to run his code, and choosing to update to whatever new code he publishes, regardless of the contents.

Not sure that's true. The fraudulent element there would be the clear and intentional misrepresentation. If the package were represented as a Bitcoin miner, that's one thing. However, representing it as color and styling for your terminal and then mining with it, that seems illegal. Would love a lawyer to weigh in.


> We are setting the precedent that users can modify the code you've published under your own name, if they don't like what you've done with it.

Who has done this? Can you point to it?


The correct approach would be to allow changes like this to be pushed to _new_ versions but to lock pre-existing versions in place. The alternative is this weird world where the platforms themselves have spooky editorial control over every package and could step in at any time and lock out the maintainers (sounds a lot like SourceForge).


The "alternative" is exactly what Linux distros like RHEL and Debian do. They have editorial control over every package and often make changes the original developers don't like. It seems to work fairly well as long as you don't insist on following the latest version.


One mental model is that Linux distros are really just curated package repositories, with all the upsides and drawbacks that come with that. Especially regarding their dogmatic insistence that across the entire repository there's only a single (or at most several) versions of any given library


Aren't the developers who always use the same version, and who blindly downloaded and ran the malicious code, doing the same thing, i.e. running a single version?

Only those were harmed. And by harmed, it was really just a bunch of test runs failing; I guess no production workload was impacted by that jerk move.


Right, but my point is this code, in and of itself, really isn't malicious. It's only malicious in the context of "version XYZ used to do X, now it does Y". If you make versions indelible, you eliminate the potential for this to blow up as an issue.


To be clear, you can't edit an existing published version of a package on NPM, nor can you delete packages (you haven't been able to for roughly five years).


> Modifying the package to be abusive may technically (legally) be within his rights but is certainly not ethical

Sorry to play devil's advocate here, but who are you to say that the future vision of faker isn't some text being sent repeatedly to the console? One man's feature is another man's DoS attack, and while it's clear here, there are plenty of scenarios just like this where there is no obvious solution. We should just let maintainers do what they want with packages (other than install viruses or crypto-miners), but we shouldn't let them pull or modify old versions. Otherwise everything becomes like SourceForge where the platform itself can "editorially" decide to take over projects for financial gain whenever they feel like it and that behavior becomes normalized.


Is it his right to sabotage downstream code, in revenge for not being paid for work he willingly took on knowing full well that he might never be paid?

Legality is defined at the behest of societal needs. If society says something legal should be made illegal, or vice versa, then it will be.

GitHub/Microsoft chose to do the right thing for society in the short term. In the long term, choosing a less stupid library update policy will benefit everyone AND let authors do all the "non-harmful" malevolent stuff they want.

As if this wasn't a single-point denial-of-service attack...

We are slowly (far too slowly) learning that our assumption that all developers are benevolent is incorrect, and it's going to take another 2-5 instances of this kind of attack before people really start to understand why, and see the danger of simply using libraries at all; especially in ecosystems like NPM, where almost no one writes a single line of code on their own if it can instead be pulled in as a library dependency. This attitude is the exact opposite of a secure-code approach, and until people learn this, and learn it well, this kind of attack will continue, and the time between attacks will gradually shorten.


> to sabotage downstream code in revenge for not being paid for work he willingly took on knowing full well that he may never be paid?

Some consumers of his project chose to use his 3rd party dependency in their code for free knowing full well that it could disappear or break at any time, and yet they still chose not to mitigate those risks they chose to take on.


yes, that is the other half of the coin: don't run code if you don't know what it does.

an extreme version of that: don't use libraries at all.

I am trending more and more to this extreme as I age, because people aren't learning their lesson, and the npm ecosystem keeps biting them over and over...


"we are slowly (far too slowly) learning that our assumptions that all developers are benevolent is incorrect, and it's going to take another 2-5 instances of this kind of attack before people really start to understand why and see the danger of simply using libraries at all"

I agree, but at the same time, if we had had this mindset earlier, open source probably wouldn't have caught on as quickly. Maybe it wouldn't have become mainstream?


Honestly, it is kind of surprising we've gotten this far assuming that basically all open source maintainers aren't malicious


Unfortunately I don't think that 2-5 instances of this will produce any change.

We've seen more than that number of actual malicious package takeovers in the last couple of years and it has produced basically no real change.

heck I did a talk describing this kind of attack 6 years ago and I found prior instances when researching that talk...


Should your users have control over the code you publish under your name?

Because that is the precedent we are setting here.


Did he own this code? Colors.js had plenty of other contributors, and there was no CLA involved.

github user "DABH" actually had more lines of code than marak did: https://github.com/Marak/colors.js/graphs/contributors

npm and github both have terms of service that you agree to when you choose to distribute something over those systems. npm in particular has had a long policy of unrolling malicious changes to the registry of packages ever since left-pad.


If the responsibility lies with the maintainer, then the responsibility lies with the maintainer.


It's not just a random personal repo on GitHub. It has long documentation published on NPM describing the intention of the module. It is advertised as being something it now clearly and deliberately is not.

Some would even call it genuine fraud. It doesn't matter whether you paid for it or not; there are plenty of free services in meatspace that can be compared. A free quote for a bathroom renovation, say: the plumber changes all your locks while you look away during the inspection.

Now, should consumers be more careful about situations like this? Yes. Can this guy expect to claim any rights after a move like this? No.


Why are you bringing up modifying his code? I haven't seen any indication that anyone did that.

Changing the metadata around a npm-stored package is something that npm should have every right to do. Same for changing the metadata about a GitHub project. Neither the npm registry entry nor the GitHub project settings are the copyrighted code.


I mean, for almost every mainstream open source license, the answer to that is a pretty definitive "yes, they can", right? The most stringent of them require things like attribution and open source the modifications, neither of which seem like requirements that Github/NPM would have any qualms with following.


But does GitHub/NPM have the right to modify his code and publish it under his account/his name? I doubt that is legal.


The MIT license doesn't require NPM to list marak as the publisher of the NPM package, as long as they still obey the license's requirements.

Besides, the NPM Terms of Service are almost certainly broad enough to permit NPM to publish a modified version of a package or to delete a package.


And all of the contributors to his code?

What if he had done nothing more than create the repo and the community did all of the legwork? Is it still his right to delete all of their work?


Yes, it's his repo after all. If I let you grow flowers in my garden, it's still my right to burn it all down.

Otherwise, a multi-owner org should be used, or we need new concepts (like changes to repositories signed by multiple contributors.)


But with contributors, it's literally not his to "own", in terms of the actual work. Copyright law says roughly that you own the characters you type, unless you assign it to someone else.

The guy who actually wrote more of colors.js (by lines of code) than marak actually got npm to nuke the offending releases: https://github.com/Marak/colors.js/issues/317

right now they are reviewing a request to change the npm target to his repo


To write and merge code into Marak's version, DABH has his own copy, so your question makes no sense.

Marak could decide to make his repo disappear; it doesn't make the local or hosted repos of other contributors disappear. It seems that GitHub has managed to distort your mind into thinking that git is no longer a decentralized VCS.


I don't get your point.

The repo, gh/npm accounts and such are just metadata and metadata of metadata, which is hosted and distributed based on the terms of service of gh/npm. The thing that matters most is what the npm project points to, since it is relied on by other packages and apps.

The only thing this guy "owns", according to the law, is the copyright for the part of the codebase he wrote. Not that it matters much since it is MIT licensed.


> The thing that matters most is what the npm project points to, since it is relied on by other packages and apps.

And it doesn't really matter that much, because the NPM people can decide to make it point somewhere else whenever they want, or people can decide to create a new package name on NPM and use that new name in their package.json files.


I don't agree with the general sentiment on this. This seems like a blatant attack to me. He went out of his way to deliberately sabotage all of the projects that depended on his. I have a few open source libraries (admittedly, not very popular) and I'd never do this. Even if I am giving the project out for free with an MIT license and are not liable for any damages.


I'd never do this either. You and I have different ethics than this guy. We're all in agreement that it's a total asshole move and that no one in the OSS community will be trusting him again; that's not in contention.

However, even assholes have rights. Didn't you ever play with neighborhood kids who take their toys and ruin everyone's fun just because they're losing? Total assholes, but still it's their shit and they can take it if they want to.

It's like me keeping a bowl of M&Ms at my desk at work to snack and let people grab a few when they pass by too, but I go on a diet and replace them with brussel sprouts one day. Don't act all butt-hurt that you're not getting your free sugar, it's my bowl and I'll put whatever I want in it.


> It's like me keeping a bowl of M&Ms at my desk at work to snack and let people grab a few when they pass by too, but I go on a diet and replace them with brussel sprouts one day. Don't act all butt-hurt that you're not getting your free sugar, it's my bowl and I'll put whatever I want in it.

I think it's more akin to lacing the M&Ms than simply swapping them out for sprouts. He pushed a deliberate denial of service with the express purpose of causing harm to as many others as he could. He even joked about it in the ticket system, knowing full well that his purported targets, large corporations, have layers of systems in place such that they would be unaffected.

I do sympathize with him, but his actions have made him appear extremely troubled, vindictive, and callous. The OSS community may yet forgive, but he shut a lot of private sector doors permanently.


Actually I think the brussel sprouts is an apt analogy. People got so used to the M&Ms they stopped looking into the bowl and just ate whatever they plucked out of it. I think that's a perfect analogy.


Just because you normally leave a bowl of candy by your front door to let whoever wants to take a piece, doesn’t mean you can then poison the candy.


> no-one would be trusting him again

I keep seeing people write this. But how would you ever know if one of your projects depended on a future package he maintains? Do you really track the authors of all packages you use against a shit list? And what if he just uses another name?


And there’s the real problem: a shit ton of people/orgs don’t know what’s in the software they use/ship, and just expect people to DTRT for free, forever.


We need to apply zero-trust everywhere. How do we know anything about anyone online? Unless it comes from a respected company, any npm project author could be a malicious actor just waiting to hit some large number of downloads to surreptitiously add a crypto-miner to it.

I think we just need to assume all of them are bad actors and review our dependencies (yes, unlikely to happen in practice given limited resources but that's a problem for someone else to solve).


If the guy had installed keyloggers, or formatted drives, would you still say he's within his rights?


> The project's author committed a change to his own open source project, which he had every right to do.

Indeed, and it goes beyond this. I think the real implication is that software published on NPM and similar networks isn't truly free and open source, in the sense that the platform owner can and will step in and make editorial changes without the knowledge or consent of the software maintainers.


Sure, he's free to make new versions of it that do whatever he wants - spam zalgo, mine bitcoin, fire the missiles, whatever. That doesn't obligate anyone to _use_ that code, nor does it obligate NPM or anyone else to distribute it.


Nor does it give NPM permission to tamper with his code and pretend it was he who published it. I firmly believe what they did is in violation of the underlying open source license, and if it isn't, the major licenses should be updated to account for this concerning scenario.


What changes did they make to the code?

None as far as I can see.


The maintainer's intended release was text spamming the console. They replaced it with the old faker. This is changing the maintainer's code.


> https://github.com/Marak/colors.js/blob/5152d16f22789d66e107...

So can you point me to where this license says you cannot modify this code? Or that by accepting distribution of one version of this code, you are obligated to distribute future versions?

You can't hide behind "The license doesn't say he has to play nice, so you can't complain about him not playing nice", and then say it's unfair that npm won't go beyond their license and TOS obligations to host his spite version.


Because they are interfering with _his_ distribution of the code. They are free to distribute it separately and make changes in that case, but instead they are essentially re-writing history without the knowledge of the package owner, for reasons that are largely editorial and, I think, unprecedented. They are interfering with his right to distribution. I know that in a slimy, technical way you are correct, but I think it matters if, at the end of the day, it is practically impossible to distribute code without some unauthorized third party modifying it in transit. The implication is that we need to start running our own git servers.


I feel that replacing a package, in a patch update, with one that doesn't fulfil the same needs, because you feel that "no warranty implied" justifies it, is just as much a "slimy, technical" use of that license.

Again, if Marak is going to rely on technicalities, I don't blame npm for treating him with as much good faith as he treated his users.


They changed the package metadata only by setting `latest` to point to the last non-malicious version.

Can you actually point to any code - or any package contents at all - that they changed?


Of course it was an "attack". It doesn't matter that he owned his own packages. He knew exactly what he was doing with his changes and the effect it would have on projects that relied on his.


It's his project. It's your fault if you blindly trust random 3rd-party code on the internet and have your mission-critical software depend on it.


> It's his project.

If you publish software you have responsibilities, and no "provided as-is" clause can fully free you of them. Especially if you publish with the intention to cause harm.

> It's your fault if you blindly trust random 3rd-party code on the internet and have your mission-critical software depend on it.

The problem is not the "trust random 3rd-party code on the internet" part but the "blindly" part.

Like: never, ever deploy without locking dependencies, and test any new dependencies before updating the lock, preferably even giving the changes/diff a shallow review.

I think anyone providing programs (instead of libs) installed through npm (or "blind" untested CI builds for releases) is as much a problem as the one who caused the problems this time. Maybe even more, as they also open the door for other, more malicious attacks.


> If you publish software you have responsibilities and no "provided as-is" clause can fully free you from it. Especially if you do so with the intention to cause harm.

Says who?

I can publish whatever the heck I want to my project and unless you and I have a contract that clearly defines expectations and resolutions, you're SOL.


Intentions matter. The "provided as-is" clause helps cover the author for unintended behaviors that are the result of some undesired bug.

You can't just update your extensively used code to add some ransomware or virus and be let off the hook because you warned users in a text file. The legal system will check what you knew and what your intentions were.

In this case, it's not that the author pulled off a particularly bad attack, but it's still a jerk move when the intention was purely to disrupt others and break things.


Well, at least in the United States, it's the default the other way (there are implied warranties) unless licensed otherwise. That's exactly what most open source licenses do to protect the author. That being said, I could imagine in some jurisdictions, the law limits the ability for people to disclaim such warranties. It would be an interesting case.


> Says who?

law

EDIT: At least in some places.


What swims like a duck and quacks like a duck is a duck.


He had two things: (1) the right to make and publish a change to a project he controlled, and (2) a mechanism (npm and its ecosystem's versioning practices) to have his changes run on thousands of machines. Making changes via (1) is not an attack. Leveraging (2) to cause 'damage' is without question an attack, even if the damage is just burned CI minutes and even if the victims' comically poor practices bear some (most) of the responsibility.


Agreed on all points. People are quick to jump on someone, especially when that someone worked really hard on something and then gave it away for free. Companies turn around all of the time and change their revenue model, and nobody says a thing; it's their right. The guy still had the rights to do whatever he likes with the software he wrote for free. All the assholes vilifying him are shilling for corporations that didn't want to pony up to help this guy continue his work. His actions were pretty drastic, but remember, he owns the rights. If he wants to delete what he owns, that's within his rights, and corporations had better take note. If you use something for free that you don't own, with no intention of compensating the people behind it, you have only yourself to blame when those people rage-quit. There is no real argument against this without looking like a freeloading dick.


> If he wants to delete what he owns he's in his rights and corporations better take note.

Except for that he didn't just delete the code. He added malicious code that without warning broke applications at runtime, most likely disproportionately affecting small projects that lack rigorous dependency controls.

Companies can change their revenue model, but they don't get a blank check to do anything they want. See Toyota's recent blow-up with the key fobs as a counterpoint to your argument. It's one thing to phase out support for an old product. It's quite another to yank out the rug from under people without warning, and companies don't get a pass on that anymore than this guy should.


Everyone who had code that broke due to this (and, really, everyone else too) should see it as a clear wake up call that they need to do better managing their dependencies.


Yes. Just like everyone who gets ransomware on their computers should see it as a clear wake-up call to improve their security practices. It doesn't mean we shouldn't also condemn the attack that prompted the wake-up call.


If he just wanted people to pony up, there are plenty of other alternatives like changing the license for future versions like SugarCRM did

https://sugarclub.sugarcrm.com/engage/b/sugar-news/posts/sug...

Since he's been acquired in the past, he could also make it into a SAAS play. He has the connections, skill and experience.

Otherwise, he can just walk away like everyone else. Maliciously changing code to break people's stuff is uncalled for. If he wanted to charge people from the start, then maybe he shouldn't have used the MIT license for his code? If you want more restrictions on usage, choose a more restrictive license.

On a related note, the developer in question is not well mentally which helps rationalize what he did

https://www.qgazette.com/articles/more-charges-possible-for-...

"A team of NYPD investigators and FBI agents found potassium nitrate, which is used in fertilizer, metal containers, fuses and other bomb-making materials in the crate, along with printed bomb-making and survivalist materials and a book on how to make a bomb scattered throughout the home, the source said."

"'The chemicals separately are what they are, but taken together they can assemble an explosive device,' NYPD Dep. Commissioner of Intelligence and Counterterrorism, John Miller, said. 'There were books about military explosives, booby traps and other things.'"


I think most on here sympathise with his frustration even if they don't agree with his actions.

The real question is whether you believe as a maintainer you have the right to do whatever you want to your code *while in the knowledge that others will run it*.

The emphasised part is important. I have zero issue with people writing malicious code purely out of academic curiosity, but if you write it with the intention of that code propagating, as this author has done, your actions become far less sympathetic.


Sure, he can do that. But everything else is released under an open source license. Anyone that has a copy (like, I dunno, NPM) can continue to distribute whatever version of the software they want according to the terms of the license.


You don't get to have it both ways:

If you go by the spirit of things: tanking a package, and GitHub/MS/npm taking over to undo that, is fair game. Even if it's not literally a virus, deliberately damaging basic functionality to make a statement is not a practice NPM can sustain and continue functioning under.

If you want to go by the letter and what legal rights someone has:

https://docs.npmjs.com/policies/conduct

> The Service administrators reserve the right to make judgment calls about what is and isn't appropriate in published packages, package names, user and organization names, and other public content. Package that violates the npm Service's Acceptable Use rules including its Acceptable Content rules will be deleted, at the discretion of npm.

https://docs.npmjs.com/policies/open-source-terms

> Your Content belongs to you. You decide whether and how to license it. But at a minimum, you license npm to provide Your Content to users of npm Services when you share Your Content. That special license allows npm to copy, publish, and analyze Your Content, and to share its analyses with others. npm may run computer code in Your Content to analyze it, but npm's special license alone does not give npm the right to run code for its functionality in npm products or services.

> When Your Content is removed from npm Services, whether by you or npm, npm's special license ends when the last copy disappears from npm's backups, caches, and other systems. *Other licenses, such as open source licenses, may continue after Your Content is removed. Those licenses may give others, or npm itself, the right to share Your Content with npm Services again.*

https://github.com/Marak/colors.js/blob/master/LICENSE

-

Seriously, why are people so adamant about defending this? We're all part of a giant ecosystem that relies on everyone being "a freeloading dick" in your eyes. Colors wouldn't have its audience without its creator being a "freeloading dick" and expecting NPM to serve it millions of times for free. NPM relies on a JS ecosystem propped up by "freeloading dicks".

"Freeloading dick" is such a dumb characterization of what it really is: Expecting FOSS creators and maintainers to be somewhat cognizant of the ecosystem past the tip of their nose.


If we are all part of a giant ecosystem, then it should be acknowledged that work has been done and should be compensated; otherwise you are just continuing to advocate for the exploitation of FOSS software without handing over any compensation for the value it adds to this giant ecosystem.


Why are you replying to a statement I didn't make? Where am I saying people should not be compensated for FOSS work?

First you wrote a rant that promptly got flagged before I even had to read it... now you're putting words I didn't say into my mouth.

Are you just incapable of having a mature conversation about this subject?

-

It's becoming a little clearer what kind of mentality it takes to defend how the creator of colors handled this situation. Some of us are able to both:

- understand the frustration of feeling your work is unappreciated.

- realize there are better ways to show frustration than proverbially flipping over the table.

I mean I've literally defended this guy, Marak, before on this website, when people tried to jump to conclusions about his alleged 'bomb making'

But this is ridiculous. It's gone past compensation even, he's ranting about GamerGate and stuff and using broken packages to make political statements.

That's not a mature or effective way to do anything except yes... get yourself banned from multiple platforms for breaking the ToS you signed up for in the first place and leveraged to even get the kind of reach you have.


"should be compensated for" - what a wonderfully passive-voiced phrase. If you don't want giant corporations using your open-source code, it's as simple as using the GPL. Of course, in that case your code probably won't get so popular in the first place.


s/GPL/AGPL/

Plenty of SaaS providers end up using GPL code as if it was MIT since no distribution = no virality.


You are toxic and a perfect example of shilling for corporations. I bet you depend on that package and many others that you and your corporate overlords haven't paid a cent for. Deplorable at best. I donate to projects that I use as a percentage of revenue. Name more than one OSS project that your company has actually donated money to on a regular basis to ensure that it's original maintainer has some sort of compensation for the work they put in based on your usage? I bet you can't because you are more than happy to exploit things that you consider free. Defending freeloading dicks just puts you in that category.


Personal attacks like this and https://news.ycombinator.com/item?id=29885496 and https://news.ycombinator.com/item?id=29753758 are against the site rules and extremely not ok here. We ban accounts that post like that. I'm not going to ban you right now, since we haven't warned you before, but please don't do this again.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


Fair enough dang. Will do...


Sorry


I think "attack" still best describes the move even if it was an indirect attack (ie he poisoned his own code then others ran it). The intention was to create a denial of service "bug" as a protest and given it was deliberate I don't think the term "bug" is sufficient. But you're right it wasn't a direct attack and I'm inclined to also agree that it wasn't intended to be malicious (though not all attacks are).

> In fact, I also take issue with GitHub / Microsoft taking over the package and am very worried about the precedent that sets, regardless if their motives this time appear to be entirely selfless.

Yeah, I agree with you here.


Yeah, I don't see it as an attack either, and I was very surprised GitHub suspended his account; less so that NPM unpublished the package. It is easy to think of these platforms as unbiased repositories, but as soon as something is deemed 'wrong', suddenly you need to think about what the criteria are for that decision, and who makes the decision, and how? Where is the line?

Is an infinite loop the line? Is updating a package to display random text, without an infinite loop, crossing the line too? Who decides what _my_ OSS library is allowed to do? Can I update it just to add a banner asking for donations? How is that functionally different from random text, other than intent?


A trojan distributed through NPM is still a trojan. It matters not whose code it was but the intention to do harm.


Is it a "trojan" when a maintainer decides to break backwards compatibility with their next release and breaks your code? How is this any different?

Did you miss the bit in the license that states, "THE SOFTWARE IS PROVIDED "AS IS"?

When you decide to incorporate code with no guarantees into your project, you get what you get. Marak is a hero to the open source community, in my view. The cost of free software is that there is nobody to point a finger at. Acts like this showcase that being able to point a finger at someone is worth something. And maybe corps that use open-source software will begin to address this "risk", and maybe this outcome will benefit the open source community.

The "AS-IS" part is a big problem, and it is addressed through funding, where exactly what "IS" means gets negotiated for a price.


If it had deleted files, or extracted user password data and uploaded it to pastebin you would agree it's a Trojan, yes?

Your license agreement is not going to save you from the feds if you write code like that.

So we are arguing where the line is. From what I understand, having not run the code myself, he wrote code that put the system into an infinite loop, i.e. he intentionally caused a hang. That crosses the line in a clear way, for me.


Then it would be a crime in more or less any country.

Now the question is whether it matters that it hangs intentionally; the software-as-is license only helps so much. I think the answer to this is not clear cut.

Without question a line was crossed.

I mean, sure, there are endless problems with how npm handles things, but it's more nuanced.

E.g. you needing to go out of your way to lock dependencies, instead of that being some form of default behaviour, is a problem.

People distributing CLI tools without strict dependency locking is an even bigger problem/no-go.

And this showing how careless some people are when it comes to distributing software is probably the most interesting thing.
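
As a concrete example of the CLI case: at minimum, users can pin the version they install rather than taking whatever is latest (package name hypothetical):

    npm install -g some-cli@2.1.3

Even then the tool's own transitive dependencies are resolved fresh at install time, unless the author ships an npm-shrinkwrap.json, which, unlike package-lock.json, is included in the published tarball and honored on install.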


It wasn't backwards incompatible, it was outright designed to cause chaos. If an open source maintainer pushes an update that intentionally starts killing whatever processes it can or intentionally fills up the disk with garbage, is that not an attack? How is an update that intentionally infinite loops, printing garbage to stdout any different? There's very much someone to point a finger at: he went out of his way to cause chaos.

Does that mean corporations have no responsibility? No. But it's also the case that we live in a society and your ability to wreak havoc because you can shouldn't be empowered.


> a maintainer decides to break backwards compatibility with their next release and breaks your code?

a maintainer breaking backwards compatibility is not malicious.

The faker.js code is intended to be malicious, and thus should be treated differently to another project which might break your code in other ways. Because it's not about breaking code, it's about the maliciousness - the same would still apply if the faker.js trojan is more hidden, and thus was not discovered (e.g., they embed a crypto-miner into the library and extract compute resources secretly from those who use it).


The difference is intent. If someone makes a mistake, even if deleting all their code, that's a bug. If they intentionally make a change with no other purpose than breaking your stuff, that's an attack. The scale and whether they're successful doesn't matter.

    > attack (plural attacks)
    > An attempt to cause damage, injury to, or death of opponent or enemy.


> In fact, I also take issue with GitHub / Microsoft taking over the package and am very worried about the precedent that sets, regardless if their motives this time appear to be entirely selfless.

Is there more on this? This is the first I’ve heard about this npm package being modified which I would have just ignored as regular npm problems, but hearing that GitHub took over someone’s repo sounds crazy. Does their ToS claim to override copyright or something?


I do not think NPM altered a package, they unpublished the nonfunctional/infinite-loop version, so that the prior, functional, version (pushed by marak) becomes the "latest" version again


Yes, that is a very dangerous precedent. This isn't like the leftpad issue where someone stole credentials from the project owner. This is the project owner himself publishing a new version of his project.


Nobody stole creds off the project owner in the left-pad incident. npm chose the needs of the many rather than the few in that incident: https://twitter.com/seldo/status/712417370686365697

which seems like very much akin to what they are doing now.


They supposedly took over the npm packages[0,1], not the github.com repos. npm is a system where you push archives as package versions, it doesn't do its own pull from a github repo or otherwise.

To add, unused/squatted npm package names regularly get reassigned[2].

0: https://www.npmjs.com/package/colors

1: https://www.npmjs.com/package/faker

2: https://docs.npmjs.com/policies/disputes#definitions


> I take issue with the title of this article.

The author wasn't mentioned in the title at all: "colors attack." Names have to come from somewhere. The article is about some hypothetical future antagonist that puts a hack/bitcoin miner in their widely adopted package, or something. "Colors attack" is just the name of, effectively, the CVE.


I regard that as an attack, specifically a denial of service.

Whether he had the right to do so doesn't matter much, he voluntarily broke other people's apps.

Btw, I'd say that he for sure has the right to write purposely broken code, but probably not to push it on npm (it depends on npm TOS, though).

If it was for me, I would just suspend his npm account and keep distributing the existent code, since the license allows to do so.


They own the sites, they are allowed to get rid of some fuckwit.


It's worth asking, I think, how much energy should be spent against guarding against an attack like this in the future. I think every organization's risk model is different.

In this case, the Net interpreted Marak's actions as damage and seems to have effectively routed around them. There was disruption, but we already have systems in place for flagging broken or malicious versions (`npm audit` being one example).

For people willing to have the system work 99% of the time with the occasional Marak-screw, we already have a working solution in what currently exists. For people who cannot tolerate that level of risk, you have to firewall your package sources and only allow use of vetted code dependencies in your software.

Open source relies, in the end, on trust. Marak broke trust and that's all. The question isn't "How do we prevent people breaking trust" (that's impossible; humans have free will) but instead "How do we mitigate damage / protect that which cannot be risked if someone chooses to break trust?"
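
For the "firewall your package sources" option, the usual mechanism today is pointing npm at an internal registry or proxy that only serves vetted package versions; a minimal sketch (URL hypothetical):

    # .npmrc, checked in at the project root
    registry=https://npm-proxy.internal.example.com/

Combined with a lockfile, that bounds the blast radius to whatever your review process lets through.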


Here's my 2 cents on how a build system should work. I'm not sure that any build system works like this... but anyway.

All dependencies' sources must be included with the project sources. When you check out the repository, you should have everything necessary to build the project. Sources should be in their original form, so you can fix anything quickly.

Library builds must be reproducible, and output artifact hashes must be provided along with the sources. This means one should be able to download pre-built artifacts from the Internet, but if they're not available, that should not be an issue.

Ideally the whole build environment should be part of the project, something like a docker container, so your compiler and necessary build tools are fixed as well.

Of course there should be some tooling to check for new dependency versions, print changelogs if available, etc. But it must not be automatic.
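
A rough approximation of this with today's npm tooling, for comparison (file names hypothetical, and this is a sketch rather than true reproducibility):

    # vendor the exact dependency tree next to the sources
    npm ci
    git add -f node_modules

    # record and later verify artifact hashes
    shasum -a 256 dist/* > artifacts.sha256
    shasum -a 256 -c artifacts.sha256

Freezing the build environment itself is then a matter of a committed Dockerfile with a digest-pinned base image (FROM node@sha256:...).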


Check out Nix (https://nixos.org) which is designed around exactly this model.

It works by putting the package hash into the path that you access the package by and hardcoding the path in all dependencies. This way every package has a hardcoded version as a dependency, however updating dependencies, even indirect ones is doable.

However, the thing that’s lacking in typical use is granular updating of dependencies. By default you update sources independently, however the biggest source (nixpkgs) is a monorepo providing most major software. You can always figure out which version things have updated to and manually audit and pin the versions you don’t want and there is some tooling (not in Nix itself, but a 3rd party package) that does this.
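
To make the hash-in-the-path idea concrete: every package lands at an address derived from the hash of all of its inputs, something like (hash made up)

    /nix/store/8irr2hjqbrkz37i14pz0j6mkb4ix2a9b-colors-1.4.0

and dependents refer to that exact path, so the contents can't be swapped out from under them without changing the hash, and therefore the path.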


> All dependencies sources must be included with project sources.

This sounds great for applications. But for libraries, you'll likely end up with the diamond dependency problem: an application uses two libraries, and each library uses their own copy of a third library.


In Java you have to decide which version the application is going to use in the end. Maven decides that somewhat arbitrarily; Gradle has some tricky algorithms to determine which version is the highest one. You can't use two versions of the same library on the classpath anyway, at least not without some classloader tricks, which are not used for ordinary applications.

So basically with Java you need to flatten all dependencies and resolve version conflicts in some way (sometimes you must downgrade one library, or even declare that some libraries are not compatible with each other, but that's a rare occasion).

AFAIK npm uses hierarchical dependencies, so a few copies of a library at different versions are not a problem at all (at least if those libraries are good citizens and don't pollute the global namespace, or use it carefully).


Maven isn't random - it is described at https://maven.apache.org/guides/introduction/introduction-to...

> Dependency mediation - this determines what version of an artifact will be chosen when multiple versions are encountered as dependencies. Maven picks the "nearest definition". That is, it uses the version of the closest dependency to your project in the tree of dependencies. You can always guarantee a version by declaring it explicitly in your project's POM. Note that if two dependency versions are at the same depth in the dependency tree, the first declaration wins.


Well, in practice that turns out to be pretty random, IMO. "Nearest definition" is meaningless, and the order of declaration of dependencies is meaningless as well. It's not random in the sense that it doesn't change between builds if pom.xml is unchanged, but this deterministic algorithm does not make any sense, at least to me.


Checking in your dependencies with https://github.com/JamieMason/shrinkpack can help insulate you from these problems until you're ready to face them. I created this before left-pad and thankfully meant that we were unaffected.

A lot of developers, understandably, baulk at checking in dependencies, but there is a concrete benefit in being able to continue uninterrupted during outages.
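
From memory of the README (so treat this as a sketch), usage is roughly:

    npm install      # resolve and write the lockfile as usual
    npx shrinkpack   # copy the dependency tarballs into the repo
    git add .        # commit the tarballs alongside the lockfile

after which the lockfile resolves to the checked-in tarballs instead of the registry, so installs keep working during a registry outage.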


Yeah, no. The author chose to change his own project; that's in no way an attack. People not only pulling random code from the Internet, but also integrating it automatically without review and without so much as locking the version, is the problem here, if there is one.

The author is under absolutely no obligation to maintain his project or to keep it in whatever state is desirable to whoever happens to be using it.


> Essentially every open source software license points out that the code is made available with no warranty at all. Modern package managers need to be designed to expect and mitigate this risk.

It's almost like the package manager's job has become to protect users from their dependencies.


Package managers should not intentionally expose users to new versions without explicit action.

Go modules did this differently than npm, and I argue Go modules did it correctly.


Until the opposite happens. Some big security hole is fixed in the dependency: npm gets the fixed version by default, while Go is stuck on the tested, insecure version. Or does Go mitigate that somehow?


Go doesn't mitigate that somehow. You get the code that you specify, not the code that someone else has decided is better.

In practice, for both npm and Go dependencies, you'll get a Dependabot PR that upgrades the dependency for you. Obviously that is Github-specific, so if you're on a different platform, you'll have to subscribe to security updates in some other way. I am guessing there are many services that you can subscribe to that do the same thing.


Some big security hole is introduced in one of thousands of dependencies: npm gets the insecure version on the next npm install, while Go stays on the tested, secure version. How difficult is that to see? I'm not one to believe in conspiracy theories, but this is just nuts.


Until the secure version is tested, as it should be for anything you deploy to production.

I'm not sure why you think deploying untested updates is a good idea?


I agree that Go made the right call with MVS. It's a nice compromise between pinning and fetching the latest version of everything.


It is very reasonable to ask NPM, owned by mega-giant tech firm Microsoft, to protect users from malicious packages.


Protecting users from their dependencies should be the job of their language's standard library. With a good enough stdlib, the number of dependencies required is much lower.


When I have any code that requires dependencies, I always use an exact version. I am mainly a Python developer, so my requirements.txt would look like this:

    flask==1.1.1
    requests==2.3.2
    tensorflow==3.4.2

I tested/built my code with these versions, so I know that these versions work. When I install my dependencies, I get exactly these versions, not the latest. Is there a way to do this with Node.js? And wouldn't using this method solve these problems? Pardon me if I'm missing something.


Yeah, it can be done easily using lockfiles; both yarn and npm allow that. You would seldom run npm update directly in production without testing the updated deps first.
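
Roughly, the workflow looks like this:

    npm install    # resolves ranges, writes package-lock.json
    git add package.json package-lock.json

    # in CI / production:
    npm ci         # installs exactly the locked tree; fails if the
                   # lockfile and package.json disagree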


Yes, if you put exact versions in your package.json and do not prefix them with a caret or tilde, npm will install the exact version. A package-lock.json will also pin your transitive dependencies to specific versions. Unfortunately, the default behavior of npm is to add a caret to the version number, which npm interprets as this version or any greater minor/patch release.
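
If you'd rather have exact versions be the default and opt into ranges explicitly, that behavior can be flipped:

    npm config set save-exact true

After that, npm install <pkg> records "5.0.0"-style entries instead of "^5.0.0".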


Yes, you can specify an exact version in package.json. The easiest way is by adding "-E" when installing:

    npm install chalk -ES
    yarn add chalk -E

The -S installs and saves to package.json (equivalent to requirements.txt); yarn saves by default.

That results in:

    "dependencies": {
        "chalk": "5.0.0"
    }

which is an exact dependency. By default npm and yarn use "^5.0.0", which means minor upgrades are allowed (5.x.x).


Uhhh, yes, staying pinned to an old version forever solves some problems, but not others? The article doesn't mention 'npm audit' and the cases where you want to encourage an upgrade.

The real long-term solution here is a code-review community for widely-used public packages, I suspect.

I am not a huge blockchain fan, but this is one thing a blockchain could conceivably do well, because reviews are public, need to be authenticated, exist as compact metadata that can fit on chain, and benefit from public reputation dynamics.


The real long-term solution is for projects not to have hundreds of weird dependencies.

If dependencies are pinned until developers update them, you have massive security holes, because developers can't be trusted.

If you take the latest dependencies on every update, you have massive functional exposures (and security holes), because different developers can't be trusted.

If you have "community review", you create a new class of thankless work that nobody will want to do... except power-tripping control freaks who want to gatekeep over whatever their personal obsessions may be.

If users have to pick the versions, nothing will ever work, because users can't be trusted (and wouldn't want to do it anyway).


Review is a compliance function and may be more likely to attract payment than writing OSS, paradoxically.

Also, companies may devote in-house resources to doing it.

The best case, IMO, is that normalizing paid review leads to normalizing paid bugfixes/feature requests. People need to start thinking about what a monetized GitHub looks like.


Hahahahaha.

You think anyone will pay anything more than lip service to QA?

That's a good one.


A blockchain isn't needed for that. Authentication needs "crypto"graphy, but not "crypto"currency.

This wouldn't be a complete thread without someone mentioning Rust, so I'll do it. cargo-crev is a nice web-of-trust type code review system for Rust crates. https://github.com/crev-dev/cargo-crev


I swear I'm not normally this person, but I think if 'web of trust' means a database that is replicated in parts across many different computers and uses cryptographic signing to authenticate messages, then you're describing the thing known to the bros and the gen-Zs as a blockchain.


Web of trust just means "If A trusts B, and B trusts C, then A trusts C." Nothing to do with distributed databases/ledgers.


Blockchain refers specifically to a linked-list-like data structure which utilizes cryptographic hashes at each node to store an authentication of the tail of the list on each head (node). If you have a similar structure using trees, it's a merkle tree. Replication + message signing does not imply either (necessarily).


"Web of Trust" predates Cryptocurrency / Blockchain by decades.


A blockchain and a cryptocurrency are two different things.


Where did you get forever? The idea in the article is to upgrade dependencies only when the maintainers are ready to/explicitly say to.


Maybe the JS language itself should vet and absorb some of these more common libraries. Printing pretty colors in a console could be a feature of the language itself.


> Maybe the JS language itself should vet and absorb some of these more common libraries. Printing pretty colors in a console could be a feature of the language itself.

Maybe Node.js should start shipping with a substantial standard library instead of making developers rely on NPM. Node.js can't even parse a multipart request body without a third-party library. So much for an HTTP server that doesn't even implement the HTTP spec...


There is a TC39 proposal for a "Javascript Standard Library." It's at stage 1, which is better than stage 0.

https://github.com/tc39/proposal-built-in-modules

Even so, unlikely that a "Faker" would become part of any such standard library.


There was a major push a few years ago, but it died down. Basically, the performance benefits of built-in modules weren't worth it, and with that the standard library proposal also lost momentum.


Isn't Deno doing this with its own standard library?


A standard library would over time end up consisting of ancient garbage. You could have a vetted collection of node.js modules on npm instead.


What if my engine doesn't have a pretty console?

Javascript isn't just node and web browser scripting.

Things like UI should not be in the language.


Some ecosystems adopt latest semantic version (i.e. caret/hat versions).

Some ecosystems adopt lowest applicable version (i.e. explicit/implicit ranges).

Although this may be more specific to the former ecosystems, I think the real lesson here is that there's an associated risk in not pinning your dependencies in exchange for receiving security/bug fixes. Yes, lock files are a thing, but there are still real downstream effects when they are regenerated/created.

Was the author within their rights to do this? Absolutely.

Was the package manager within their rights to mitigate harm? Absolutely.

These actions do not go without consequence. Time will tell what those are exactly. This is not the first time this happened, and it won't be the last.


Do you have any example of a package manager that adopts the lowest applicable version?


Maven dependency resolution is most frequently done with pinned versions, plus rules about the order and depth at which dependencies are declared.

  libBar:1.0
    libFoo:1.1
  libFoo:1.0
In this example, libFoo:1.0 is defined at a higher level than the transitive dependency for libBar which defined libFoo:1.1 -- libFoo:1.0 will be the version used.

  libBar:1.0
    libFoo:1.1
  libQux:1.0
    libFoo:1.2
Here, libFoo is used by both libBar and libQux, but since libBar is declared earlier in the dependencies, its libFoo:1.1 takes precedence.

https://maven.apache.org/guides/introduction/introduction-to...

There are ways for the project to specifically redefine versions to be used, exclude certain dependencies, prevent the build if certain versions show up and enforce various rules upon the build.
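
For example, a dependencyManagement section pins a version for the whole tree, transitive dependencies included (a sketch; the coordinates are made up):

  <dependencyManagement>
    <dependencies>
      <!-- forces libFoo:1.1 everywhere, regardless of mediation order -->
      <dependency>
        <groupId>com.example</groupId>
        <artifactId>libFoo</artifactId>
        <version>1.1</version>
      </dependency>
    </dependencies>
  </dependencyManagement>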

https://maven.apache.org/enforcer/enforcer-rules/index.html

It is possible to extend the build in rules - https://maven.apache.org/enforcer/enforcer-api/writing-a-cus...


This is even worse, it doesn't satisfy the constraints at all! If libBar needs features added to libFoo 1.1, it will crash with NoSuchMethodError, even though they declared their dependency correctly. That's madness.


And then you can either:

  * Manage the dependency to libFoo:1.1
  * Put libFoo:1.1 as a top level dependency for the project
  * Exclude libFoo:1.0 as a dependency from projects that include it
There are many ways to go about this.

The key thing with this is that the dependencies are deterministic, reasonable, and repeatable.


They are as deterministic and repeatable as any lock file from any tool that has lock files (npm, poetry, pipenv, cargo, ...). They just don't offer a way to generate that manifest (which is effectively a lock file) correctly and automatically; instead you have to do the dependency resolution yourself.

Is that reasonable? I certainly don't think so. This sheds light on why log4j is such a large-scale issue.


By "reasonable" I mean "can be reasoned about" - not "is it the best way".

http://the-whiteboard.github.io/coding/debugging/2016/04/07/...

While that speaks of code - the key in there is the ability to look at it and figure out how it works and how to change it.

With Log4j, we had to select one of those approaches, rebuild, and redeploy. In the Java community, repeatable builds are valued: by default, building the jar today should produce the same result as building it tomorrow.

It is possible to make Maven use version ranges instead; most people just don't do that. I don't want my Oracle ojdbc jar to suddenly jump to a different version that doesn't work with the backend I'm running.

---

https://www.sonatype.com/resources/log4j-vulnerability-resou...

If you look at the "downloads per hour" you'll see that about half the people using it downloaded it and continued to update their code as new releases came out.

It is concerning that 41% of the builds out there are still specifying a vulnerable version.


Maven and NuGet are lowest applicable when using range syntax.


I just want to add, colors is a pretty poorly designed package.

It modifies the prototype of String! This is a well-known anti-pattern (yes, there's a safe mode you can opt into, but it shouldn't have the prototype-modifying mode at all).
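
For anyone who hasn't used it, the difference looks like this (from memory of the colors API; check its README):

  // default mode: monkey-patches String.prototype for the whole process
  const colors = require('colors');
  console.log('hello'.red);

  // safe mode: plain functions, no prototype modification
  const safe = require('colors/safe');
  console.log(safe.red('hello'));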

It also has a ridiculous "theme" feature that can be replaced by the same amount of standard JS. You gain nothing from the theme feature that you couldn't do yourself; you just make your code more dependent and more likely to break or need updating, since you've added a dependency to achieve something where none was needed.

It also tries to do too much by auto-detecting whether or not to emit colors, and it provides no way to tell it what you actually want (since its guess may be wrong).


On the flip side, what if a common dependency has a critical vulnerability, and a fix needs to go out quickly? If every package needs to update its dependencies to get the fix, that can add considerable delay to patching all vulnerable systems.


And of course a clever attacker would do this:

* add some code with a subtle and accidental-looking vulnerability to a package

* wait until lots of other packages were dependent on it

* report the vulnerability so the package got flagged by "npm audit" as needing an urgent update

* release a new version with a more damaging malicious payload in it, which everyone would rush to install.

The way to stop this scheme would be for NPM to make sure that "npm audit fix" installs the earliest non-exploitable version, and to make sure that that version contained only the minimum changes necessary to fix the reported vulnerability.

That would mean having engineers whose job it is to do security reviews of patches to packages that have vulnerabilities found in them, but that shouldn't be too much work to take on. I mean, how often are NPM package vulnerabilities found? Two per day, or something?[0] Also it would require that packages are reproducibly built from their published source code, which sadly isn't a thing yet.[1]

[0] https://github.com/advisories?query=type%3Areviewed+ecosyste...

[1] https://hackernoon.com/what-if-we-could-verify-npm-packages-...


So who has a great Node development environment that runs in a container?

One of these days a rogue package is going to try to do serious harm on my computer and I'd like to start figuring out a setup to protect myself.


The whole npm ecosystem is so fragile.

Remember event-stream[1]? Did we learn something from that? We might have. But was anything improved? Never. People are still installing the 'new' colors package and wondering why its output is broken.

What if he had uploaded malicious code rather than just gibberish? What if he had uploaded it only to npm and not to GitHub? Would we even have noticed?

[1]: https://github.com/dominictarr/event-stream/issues/116


I never understood the idea behind non-exact versions of dependencies. What's there to gain? If I develop and test my app with a given version, I want it to run with that given version. Not with something around, nearby, or similar. I want this exact version.

There’s no way you can predict that even a small bugfix in library won’t break your code.

Please just stop using all this lock file nonsense and allow only exact versions for dependencies. Use some upgrade tool like ncu to upgrade them quickly.

I wonder who invented this stuff, and why.


Say you depend on foo@1.0.0, and some of your dependencies themselves depend on foo@1.0.1, foo@1.0.2, and foo@1.0.3. When you install, you'd get 4 copies of nearly the exact same library. On the other hand, if every one of those packages used a tilde or caret, they'd all resolve to the single foo@1.0.3.
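
Concretely, the range syntax looks like this (comments added for illustration; real JSON doesn't allow them):

  {
    "dependencies": {
      "foo": "1.0.0",  // exact: only ever 1.0.0
      "bar": "~1.0.0", // tilde: >=1.0.0 <1.1.0, so it dedupes to foo@1.0.3
      "baz": "^1.0.0"  // caret: >=1.0.0 <2.0.0
    }
  }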

That's the benefit, but it's not worth it in my opinion. Pin your dependencies and use a package-lock.json for transitive dependencies.


> The specific misfeature is that when you use NPM to install a package, including a command-line tool, NPM selects the dependency versions according to the requirements listed in package.json

I thought that by 2021 it was well known that not using locked dependencies with npm is a no-go? People accidentally breaking dependencies happens so often that, in my experience, not using a lock file is simply not an option.

> a package declares the minimum required version of each dependency, and that’s what the build uses

If that is how it works, it IMHO sounds like a way worse choice for security.

Personally I prefer what cargo (rust) does: if there is no lock file, pick the newest compatible version and create a lock file; if there is a lock file, use the versions from the lock file (if possible, i.e. if the semver requirements in the project file are not incompatible with the current lock file).

Then it's not a question of whether you use some command to create some freeze/lock file or similar, but just whether you commit it. It also means that during local development your versions will not magically bump (which would be horrible).

Similarly, you can tell cargo to bump dependencies in the lock file to the newest semver-compatible versions, or to increase the minimal version numbers in the Cargo.toml file.

Similarly, you can use options to _only_ use the dependencies in the lock file, failing if the lock file isn't compatible with the current project definition.
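
Roughly, from memory of cargo's CLI:

  cargo build            # no Cargo.lock? resolve newest compatible and write one
  cargo build --locked   # CI mode: fail instead of changing Cargo.lock
  cargo update           # bump lock file to newest semver-compatible versions
  cargo update -p foo    # ...or bump just one dependency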

Sure, cargo still has a lot of ways it could improve, but I still prefer it over more or less every alternative I've tried (I haven't tried all of them).


NPM makes an interesting trade off - with the current scheme, security updates which improve the security posture of the application are accepted by default. Those are far more common than malicious updates. Would pinning dependencies lead to larger problems? Imagine if a log4j-like issue showed up tomorrow; most NPM-managed software would just require reinstallation to be fixed.


> Those are far more common than malicious updates

Are they though? It seems to me that security updates that actually affect a given individual or company are few and far between.

Example: I use a library that both reads and writes a particular file format. In the past few years, it has literally had hundreds of potential vulnerabilities fixed in its parsing code. I only use it to write files, so most or all of its vulnerabilities are of no importance to me.

How often do log4j-level RCEs (or of equal severity) happen? And of those, how many occur in JS packages hosted on NPM?

In my opinion, when I'm using code from an untrusted source that neither I or anyone else has carefully reviewed, it is less risky to wait to patch until I am sure a change will improve my security posture versus YOLOing everything into production.

But there is no right answer to this problem. We are all unknowingly running vulnerable code, and at some point will likely make an update that introduces more vulnerable code. With the current state of our industry, we will all be owned at some point. It becomes a blame game: "you didn't apply the security patches?!" versus "you deployed code you didn't review?!".


The way you do it is simple. Take ruby gems. In development you update as often as possible. When you see the lock file changed, you check you're happy with the upgrades and commit the lock file. When you don't want to keep the newer version, you change the gemfile (package.json) to point to a version you're happy with, ideally with a comment describing why you're pinning, and what you'd have to do to adopt the newer version. In CI/CD you exclusively use the lock file. This workflow is very difficult to emulate with npm/yarn.
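
e.g. in the Gemfile (gem, constraint and ticket invented for illustration):

  # Pinned: 1.5 changed the locale data and broke our seed scripts;
  # unpin once the fixtures are updated (TICKET-123).
  gem 'faker', '~> 1.4.0'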


Can you describe why this workflow is difficult? I'm probably missing or misunderstanding something, as we literally use this workflow at work, and we never had dependency issues with our npm projects...

Edit: npm has lock files and a way to install from the lock file (simples way being `npm ci`)


npm ci is relatively new; before it, it was hard to get npm to precisely respect the lock file. There is also an issue if your devs don't all agree on either yarn or npm, since the two use separate lockfiles.


I do agree that it's relatively new, but not _that_ new. I feel like it's been around for at least a couple of years (but my sense of time since the COVID pandemic started has been unreliable).

FWIW: Yarn has had lock files for a longer time. I know it's not technically npm, but they share the ecosystem.


Well, I think it was introduced in npm 6. Either way you slice it, 6 major versions is way too late for such basic functionality.

Yes, but that is the issue with yarn: it has its own separate lock file, which is why, back when I did devops for node apps, I required the devs to all agree on either npm or yarn, not a mix.


The gap in this workflow is you have to go out of your way to get a diff. Never mind a diff of what's actually in the .gem. Glancing at changelogs on GitHub only reviews changes from good actors.

In practice most of us just update, often with live reload running, and move on.

We need mandatory peer review of updates before they're distributed in the first place.


Diffs are easily in the 10k- if not 100k-line range. No company but the proverbial FAANGs can cope with that. Exploits are hard to spot by design and may span commits and even major versions. Only an automated process can handle that scope.


I'm not saying you should look at any diff other than that of the lockfile itself. From there, figuring out the potential impact, running more complete tests, and other mitigations are available. In most famous instances (left-pad and this one), the problem would have been immediately obvious. Just because it's still possible to fall victim to sabotage doesn't mean you shouldn't be doing this.


So they would just have to be re-deployed? I find it insane you would put in production un-reviewed dependencies. Wouldn't it be better to update them explicitly and then redeploy? I don't see the benefit and I see huge downsides.


I'm not 100% sure how npm handles the following, but composer (the PHP package manager) lets you lock a specific version for a particular dependency; then, when you're ready to upgrade and test, you manually change that pinned version to get the next one. Thus if Log4j happened, your project wouldn't automatically pull in the fix, but you would know about it through the media etc. and would then go into your project and change the version to the one with the fix.
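
In composer.json terms that's just an exact constraint (package name arbitrary):

  {
    "require": {
      "monolog/monolog": "1.25.3"
    }
  }

`composer update` won't move an exact version; you edit the constraint by hand when you're ready to take the fix.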


Npm has lock files, and the documentation tells you why they exist and when to use them (eg `npm ci` being the shortest/easiest way to avoid this incident).


That justifies using something like rollup or webpack to bundle all your dependencies into one huge file to make a "static build" of sorts. Then you can at least do a cursory check for anything obviously bad in the changes, if tree-shaking works well enough.

It leaves vulnerabilities open until the next release, but deals with direct attacks.
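
A minimal rollup config for that, assuming the @rollup/plugin-node-resolve and @rollup/plugin-commonjs plugins (a sketch, not a drop-in setup):

  // rollup.config.js: inline all deps into one reviewable artifact
  import { nodeResolve } from '@rollup/plugin-node-resolve';
  import commonjs from '@rollup/plugin-commonjs';

  export default {
    input: 'src/index.js',
    output: { file: 'dist/bundle.js', format: 'cjs' },
    // no `external` list: dependencies are bundled in, so a diff of
    // dist/bundle.js shows exactly what changed after an upgrade
    plugins: [nodeResolve(), commonjs()],
  };

Commit dist/bundle.js and every dependency bump shows up in review as a plain diff.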


I know I will be buried in these comments but PLEASE NO. Do NOT pin specific versions in your package.json unless you know you need to.

Instead DO USE package dependency pinning as much as possible (a sketch follows the list):

1. Commit and keep package-lock.json / yarn.lock files

2. Use the right commands in CIs (npm ci / yarn install --frozen-lockfile)

3. Teach others
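
A minimal sketch of the above:

  # locally: resolve/update, test, and commit the resulting lockfile
  npm install
  git add package-lock.json && git commit -m "update lockfile"

  # in CI: install exactly what the lockfile says, or fail
  npm ci
  # (yarn 1.x equivalent)
  yarn install --frozen-lockfile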


Attack is hyperbole. This wasn't an attack. Nothing was attacked. This verbiage has got to stop being used around this. A maintainer of popular packages got fed up with being a maintainer - rightly or wrongly - and decided to alter their code - publicly, no less (a far more dubious move, to my mind, would have been to make these changes in private and release them to npm without also pushing to GitHub) - and that means this code was working as intended by the project / package maintainer.

It was not injecting harmful code onto the machine; it was not an "attack" on anything, in any real sense. I feel all the media is doing so far is raking the maintainer over the coals, instead of asking how we got here in the first place. Why would a maintainer feel they need to take actions like this? What are they trying to achieve?

Instead of talking about the role of maintainers, consumers, and what to do about the state of open source software and its longevity, we are instead using this moment to go after the maintainer as if they were doing the equivalent of using their npm packages to inject actual malicious, harmful code on the consumer machines, like a cryptominer.

I know it inconveniences everyone, sure, and would I have done this if they were my packages? No, I'd go through a normal deprecation process of announcing its deprecation, archiving the repository, and plainly stating that the package is no longer maintained. With that said, consumers can always choose a different library.

This wasn't malicious, or an attack. It was an update to the package working as the maintainer of the project intended. If lodash issued a breaking change, would that be considered an "attack"? They even followed semver, by bumping the packages to their next major version.

I think this event also served to expose how little organizations have around testing for breaking changes in their dependencies, which may be adding fuel to the issue.

EDIT: this is a plea for more nuance in the conversation. I think there has been an unreasonable amount of haste in passing judgement in this situation, that, after doing even just a little bit of background research, seems to suggest there is more moving parts than meets the surface.


I think you’re being far too kind in your interpretation. If one wishes to stop maintaining, all you must do is nothing. Instead, by pushing an intentionally broken patch, It is foreseeable that there will be hundreds or thousands of maintainers of other code that will have to respond to your actions. I think it’s reasonable to call that an attack.


If you consume open source code without reviewing it, it’s no different than executing arbitrary user code. Your environment’s security is based on hope and trust.

Maintainers shouldn’t be malicious, absolutely, but users should do a bit more than pulling code and running it. Otherwise, it’s curl | bash dressed up with a ci pipeline. How many times does dependabot bump your dependency revisions and no one looks except to ensure tests pass?


A lot of it is based on trust, that’s evidently true, and as such, I think it’s perfectly fine to publicly shame a maintainer who looks to have breached that trust.


Doing this for something like GNU/Linux is simply not practical, _at all_. In what world can a user be expected to do anything more than simply install Linux? Since you brought up piping: can I assume you know everything about the kernel tree?

And bad actors work in companies as well. At some point you just have to trust what you are using.


People install Linux by picking a distribution to install. The maintainers of the distributions look at the packages they're including. For example, this broken colors package will never end up in any of them, because there are gatekeepers.

It's certainly impractical for every developer to vet every package they use, so much so that almost no developers vet any of the packages they use. What's needed are more gatekeepers. Repos like NPM would be well-served by coming up with mechanisms to support this. Off the top of my head: some sort of "this is safe" consensus, whereby the community can vouch for specific immutable versions of packages, which don't become the default version until they cross a threshold. Organizations with a history of good stewardship and trusted validation processes could be allowed to skip the vetting step (e.g. things published by groups like the Apache Foundation or even corporations like Microsoft). I'm not arguing that one-man shops shouldn't be able to publish to NPM, just that there should be something like a "--untrusted" flag that's required before unverified updates are installed. And I'm not calling out NPM; every package repo has this problem.
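
Something like this, purely hypothetical UX (npm has no such flag today):

  npm install colors                     # resolves to the newest vouched-for version
  npm install colors@1.4.44 --untrusted  # explicit opt-in to an unvetted release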


I agree we’ve built a metropolis on sand, and a large component of solving for this problem is going to be some form of trust and managing that trust (more code reviews with an audit trail of some sort, human gating of code changes using automated risk modeling, etc).


>>I think you’re being far too kind in your interpretation. If one wishes to stop maintaining, all you must do is nothing.

This sounds like a gross misrepresentation of what happened.

It has been reported that the author of the colors and faker packages announced last year that he was no longer willing to do free work for Fortune 500 companies.

He also proceeded to state quite publicly that either companies paid him for his work, or they should fork the project and maintain it themselves instead of taking advantage of everyone else's work.

After a year has passed, the author proceeded to release a couple of major versions that had a more artistic feel.

https://web.archive.org/web/20210704022108/https://github.co...

Source:

https://www.theverge.com/2022/1/9/22874949/developer-corrupt...

These packages are his circus, and his clowns run the act he chooses.

If anything, this update showcases the completely abhorrent security culture in place in these mega-companies, which literally ingest into their product line anything that they can find on GitHub without a shred of due diligence.


> After a year has passed, the author proceeded to release a couple of major versions that had a more artistic feel.

Artistic would be a new color scheme or maybe a “support open source maintainers” message. The accurate description you didn't give is sabotage because he introduced an infinite loop:

https://github.com/Marak/colors.js/commit/074a0f8ed0c31c35d1...


> Artistic would be a new color scheme or maybe a “support open source maintainers” message.

The author already did that, and complained about Fortune 500 companies mooching off his work without giving anything in return.

The author stated that people should fork the repo if they intended to keep using it.

The author made said statements a year ago. Is a year enough time to take action?

Now the author decided to do a major version release with a protest version of his own work, and that makes him a villain?


> Now the author decided to do a major version release with a protest version of his own work, and that makes him a villain?

He did more than that, as noted in the comment you replied to.


> He did more than that, as noted in the comment you replied to.

He really didn't. It doesn't matter if you feel a commit breaks your code when you not only ignored the author but also failed to perform any sort of due diligence on the dependencies you ingest.

I mean, it was a major version bump. People blindly upgrade stuff without any regard about what goes in, and somehow that makes the guy who volunteered his work on a side project a villain?

There are a lot of red flags in this story, but the author of the colors package ain't one of them.


I see you've switched from “he didn't break anything” to “it's the users' fault for not catching what he broke faster”.

Hint: going from 1.4.0 to 1.4.1 is not a major version bump — https://snyk.io/vuln/npm:colors


Not to mention plenty of people (me at a big company included) used both of these packages indirectly (as dependencies of dependencies) and had no idea about this until this week.


In what world is it not malicious? He owned the packages and I guess that gives him the right to do what he likes with it, but I have a hard time believing he didn't do it deliberately fully aware that a whole bunch of people would update and it would cause disruption. It's not an exploit, but it's certainly malicious, and I don't think it's unreasonable to call it an attack.


People who were pulling this unvetted and live are lucky he did not sell it to a malware group.


Malice or selling out is not even required. Malware groups are perfectly capable of getting access to maintainer credentials themselves if needed.


Yes, I agree.


Sure did. However, does that mean it's malicious?

The definition of malicious, according to Merriam-Webster[0], is the following:

>having or showing a desire to cause harm to someone : given to, marked by, or arising from malice

Was the intention here to cause someone harm? Was the intention here malice? "Malice," conversely, has this definition[1]:

>desire to cause pain, injury, or distress to another

Was this their primary motivation? What evidence do we have that this is in fact an action driven by malice? I don't see any; after doing roughly 20 minutes of research, I have arrived at the following:

Reading the public history surrounding the maintainer of these projects, the issue trackers of the repositories, and the maintainer's own blog, you may come to a different conclusion entirely. If I were to speculate - and mind you, I am speculating, as I don't know the maintainer personally nor do I claim to understand them completely - there is a clear sense of mental strain from the burdens here. Being burnt out and feeling taken advantage of can lead to very real and deep feelings of depression and/or other mental challenges (this I speak to first hand), for starters. I think the maintainer - again, speculating, though the evidence does seem to suggest this - is acting from a place of disillusionment, and is possibly resentful of their status in the world relative to those benefiting from their contributions to open source software. Are they right to feel this way? Maybe, maybe not. However, is this malice? That's the question. Is this actually malicious? All evidence points to no, it's not an act of malice. It's an act of someone who is disillusioned, strained, and struggling. I think the most reasonable interpretation, given all the evidence, is that they are most likely trying to raise some kind of awareness - albeit in ways that I don't think are well communicated, due to the issues I previously highlighted.

I could be wrong; I'm just following the publicly available evidence, and it's a lot more murky than "this is a malicious act by a malicious maintainer acting in malice."

And again, this brings up the broader issues around open source maintainability, longevity, the roles of consumer and maintainer, and how to build better symbiotic relationships around all that; yet we aren't talking about what led to this event, which I would argue is more important than what happened.

My ask is for nuance in this conversation, where there is seemingly little.

[0]: https://www.merriam-webster.com/dictionary/malicious

[1]: https://www.merriam-webster.com/dictionary/malice


You write a lot of words to explain how the maintainer could justify his malice. At the end of the day it's still done out of malice, though.

Unless he's gone absolutely bonkers, he's doing this to intentionally cause damage. The reason for it does not matter, as the definition you posted helpfully points out.


Is it though?

Was it malice when maintainers started adding those postinstall scripts asking for funding? (Which led directly to the creation of `npm fund`, for instance.) Was that also malice?

Given all the evidence we have, I think it's murky. That's my point, at the end of the day.

Arguably, I could say that making millions of dollars off an open source library and not contributing back is malice - yet that argument is rarely given the same open-and-shut treatment.

Mind you, this is bigger, IMO, than `faker.js`; it raises questions about open source software as a whole that are relevant in this case and beyond.

There's nuance in this conversation that is missing so far.


I keep seeing this comment that companies are making their millions "off of open source libraries", and it's a catchy phrase, but let's be real: the big companies using faker.js are making an insignificant amount extra by using that library; there are alternatives for mocking data, etc. A key feature of MIT-style licenses is that you can do whatever the hell you want with the code for free; use a more restrictive license if you don't want people making a bunch of money off your code. I feel bad for this maintainer, but a good takeaway is probably to not do a bunch of work for free unless you see a path to income down the road from it.


It is actively an anti-open-source argument. It makes limited sense in a subculture where contributing back to open source is expected, but that subculture is absurdly small, and the answer to it is "this absolutely should not be mandatory".

In any other context, it is an argument against open source existing.


> Was it malice when maintainers started adding those posinstall scripts asking for funding? which lead directly to the creation of `npm fund`, for instance. Was that also malice?

No, because that did not impair use of those libraries whereas this change deliberately broke every program written by someone who trusted him and used his code under the social contract he offered.

Open source maintainer funding and burnout are real problems but betrayal won't make things better. Imagine if you lived in a house with a bunch of roommates who weren't doing their share of the housework — you're entirely within your rights to stop volunteering to clean the toilet but it's crossing a line if you instead modify it to flush up.


What is being described here is a prototypical case of industrial sabotage. Books have been written about it, the jury is very much not out on whether we should consider it malicious, although I very much consider it not... In fact industrial sabotage is quite often made out of love for the fellow worker.

This is very much a philosophical question and I doubt we will resolve it here on HN. However, if you want to deem this act malicious, I beg you to ask yourself: To whom is the malice directed, and who was harmed the most? What does this act tell us about the industry?

I, on the other hand, see this sabotage as a clear act of love from a fellow worker.


Sabotage is still malicious even if you sympathize with the motivations.

I would also strongly question any “fellow worker” explanation since I'm sure the pain will almost universally be felt by the “fellow workers” who have to update things, reassure their security team, or increasingly start justifying their use of open source software.


As is usually the case in most direct action. Your average strike will cause far more harm to other workers than this act did. However, the goal is not to halt production; halting production is simply a means to an end, and the end is usually better working conditions for the rest of the workers. This is why you often see workers stand in solidarity with strikers even when the strike directly harms them.


In that case, this is extremely weak since it was so quickly and easily reversed. A strike which has no purpose or chance of success tends not to get much support.


> To whom is the malice directed towards, who was harmed the most? What does this act tell us about the industry?

It certainly wasn't directed at the large corporations, they have many systems in place to mitigate this type of attack. It was directed at everyone else, like fellow open source devs, contractors, hobby developers, students, and small to medium businesses.

The goal simply appears to be chaos and attention.


If you publish your code under a license that says anyone can use it for free and don't have to contribute, don't act all hurt when people and companies do just what you said they could.

If you want to get paid, or you require contribution, use an appropriate license. It's not hard.


It broke a ton of other people’s projects, on purpose. You can argue that he could have done much worse — say exfiltrating all of the AWS credentials from CDK users – but there’s no definition where it’s not an abuse of trust to sabotage your users.

https://github.com/aws/aws-cdk/issues/18322


Nobody had to upgrade. Anyone who accidentally upgraded because their dependencies had sloppy programming practices should be mad at their direct dependencies.

aws-cli should not be an attack vector, and if it is, AWS engineers are at fault.


This is a terrible argument. Marak didn't have to purposely break anyone's projects either. If he wanted to end it, he should have just sent a goodbye message stating the end of his involvement and walked away. This is the normal thing to do. If he wanted to be paid, he should have either picked the appropriate license at the beginning, or just changed it for future versions. The latter has also been done before with SugarCRM being one example. They are still around with paying customers. Something as free for all as the MIT license sends the wrong message.

This has been posted before, but Marak seems to be mentally unwell right now, which helps explain but doesn't condone his behavior.

https://abc7ny.com/suspicious-package-queens-astoria-fire/64...


I don't know what makes you want to try to gaslight people on marak's behalf but this is a pretty poor attempt at doing so. He published a deliberately broken update with a non-breaking version number change to a package management ecosystem where the community expects to follow a semver-ish model where point updates do not break things. This is an abuse of trust, just like it would be if you said “I'm tired of cooking for everyone so I'm going to spit in the soup before serving it”.


But you didn't have to go to that restaurant! Don't you do a full health and safety inspection of every restaurant you visit (and of their entire supply chain)?


Up-thread there is a comment saying that the major number in the semver was updated. Which is correct? Was it the major or the patch number?

Edit to Add: According to this page https://www.npmjs.com/package/faker

The previous version was 5.x.y and the endgame version was 6.6.6

So, which tools defaulted to magically bumping the dependency from 5.x.y to 6.x.y? Seems like jumping a major version shouldn't happen automatically.
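
For reference, npm's default ranges don't cross majors; only open-ended ones do (comments for illustration, not valid JSON):

  "faker": "^5.5.3"   // caret: stays <6.0.0, can never reach 6.6.6
  "faker": "5.x"      // same: pinned to major 5
  "faker": "*"        // wildcard: happily jumps to 6.6.6
  "faker": ">=5.0.0"  // open-ended: also jumps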


I'm not sure what they said specifically but https://security.snyk.io/vuln/SNYK-JS-COLORS-2331906 and https://snyk.io/advisor/npm-package/colors show that the version went from 1.4.0 to 1.4.1.

Edit: your comment makes more sense now that I see it was talking about faker.js.


aws-cli, a tool which provides access to an absurd amount of resources (compute, storage, text messaging to name a few) relies on "community expectation of semver-ish adherence" for security and continual operation, and I'M the one gaslighting people?


You meant CDK (aws-cli is written in Python), but it's not that simple: they shipped a lock file which pinned 1.4.0, but while NPM honors that, the popular yarn package manager does not:

https://github.com/aws/aws-cdk/issues/18322#issuecomment-100...

This floating behavior allowed for it to be overridden locally:

https://github.com/aws/aws-cdk/issues/18322#issuecomment-100...


Ah yes, the classic "you're holding it wrong" defense.


Pushing broken code on purpose is malicious.


> Was this their primary motivation?

> Was the intention here to cause someone harm? Was the intention here malice?

Yes. Removing / archiving the project – not malice. This, however, is absolutely malicious:

  let am = require('../lib/custom/american');
  am();
  for (let i = 666; i < Infinity; i++;) {
    if (i % 333) {
      // console.log('testing'.zalgo.rainbow)
    }
    console.log('testing testing testing testing testing testing testing'.zalgo)
  }
> Are they right to feel this way? Maybe, maybe not. However, is this malice?

Reasoning about Marak's motivation or intentions doesn't have anything to do with whether his actions were ultimately malicious.

> It's an act of someone who is disillusioned, strained, and struggling.

You might be right, but there were a lot of things he could have done before doing what he did, and when he did act, he did it with the intention of causing people distress, frustration and confusion.

> ...it's a lot more murky than "this is a malicious act by a malicious maintainer acting in malice."

We're really far down a semantic rabbit hole at this point. Ultimately, adding an infinite loop to a widely used package deliberately and without warning is clearly an attempt to 'cause ... distress to another.' There's no two ways about it. Sure, Marak likely felt under-appreciated, mistreated and taken advantage of by large corporations etc., but those possible explanations do not absolve him of blame, or mean his actions were not unequivocally malicious. In the legal system, mitigating factors can lead to a lesser charge or a shorter sentence, but even if they do reduce the severity of the outcome, you're still guilty.


> Was the intention here to cause someone harm?

The change has an infinite loop, so it literally breaks any software that uses it.


Also, the description, readme, and roadmap that were included in the package still passed it off as a functioning library, which makes this a Trojan.


Dear Christ, I'm sick of this dichotomy. Deliberately releasing code like this is malicious, and was clearly done with intent. I'm not sure if you could sue, but there is such a thing as torts. You can't booby trap your yard and then be like "BUT I SAID NO WARRANTY!" when someone expects it to be a normal yard and then blows their leg off.

> If Lodash issued a breaking change...

It's called intent. Did Lodash _intend_ to break users' programs? Probably not. That's for a court to decide. Did this dude intend to break users' programs? The legal system is set up to resolve questions like this, not get fooled by clever gotchas from software developers. Intent matters in the eyes of the law and I'm utterly flummoxed and quite frankly concerned that so many developers don't understand that.

Simultaneously, pulling thousands of double digit layers deep of random-ass dynamic code dependencies is _also_ a bad idea at best, and professional negligence at worst.

The package author can be liable. The people who rely on him can, at the same time, make bad professional choices. Both things can be true.


You are raising a point which no one is disputing. Yes, stuff broke and people's time was wasted; this was the intention. However, that does not necessitate malice. The IWW, for example, calls these kinds of sabotage direct action. The malice is towards the oppressors. The goal is to slow down production such that our oppressors suffer.

This act is really no different than a strike. A striking worker is only malicious towards those who unfavorably profit from their labor. Other workers might suffer, but a true worker will stand in solidarity, for direct action is an act of love for the entire working class.


Keep in mind Marak was active in the issue tracker afterwards pretending to be fixing the "bug". Marak didn't change the description, readme, or roadmap of the package. He passed the release off as a functioning library that deliberately crashes any process that used the library. That is a Trojan.


This still describes very standard industrial sabotage. Quite often workers will keep deliberately sowing confusion as part of the sabotage, either to cover their tracks or to maximize the period of diminished production. The goal is still the same.


What he did was actively malicious. There was ill-will behind the actions. He did it for the purpose of interrupting a lot of stuff that was working, simply to make some sort of statement. It was an attack, by ALMOST any definition. The fact that it was a very benign attack with very few real consequences doesn't make it less of what it was—an attack.


He did not force anyone to update the package. They did that without looking at it. He no longer wanted his work to be distributed. You may not agree with how he did it, but calling it an attack is completely ridiculous.

These comments on HN are sending a pretty clear message out... don't open source anything. It is really really unfortunate.


No one expects a maintainer to purposely break their project. If he wanted to end it, he should have just sent a goodbye message signaling the end of his involvement with maintenance. This has been done many times before without incident. If he wanted to be paid, he should have either picked the appropriate license for his software from the start, or just changed it for future versions. The latter has also been done before with SugarCRM being one example

To be fair, Marak seems to be mentally unwell right now, which helps explain but doesn't condone his behavior

https://abc7ny.com/suspicious-package-queens-astoria-fire/64...


> No one expects a maintainer to purposely break their project.

If you're using a 3rd party dependency, especially one you aren't paying for, you should expect it to break or disappear at any time.


Falling into disrepair, or not being maintained due to various valid reasons: yes. This happens all the time.

Some asshole maliciously breaking your stuff: no.

I do not use either colorsjs or fakerjs. I was close since I used the Ruby version of faker that Marak ported into nodejs as fakerjs.


This is so ass-backwards, and you're practically bending over backwards a dozen times to argue that this wasn't malicious. He basically poisoned the library, and you're blaming all the people who got poisoned because "they didn't fully inspect the contents of the code".


So do you disagree with the idea that it is his library?

If you agree that this is his library, do you believe what he did is different than a company changing their public API or deprecating them without any notice?


It’s deliberate sabotage and shipped as a routine update. If he’d walked away or made a breaking change in a major release, nobody would expect more.

Similarly, if it was a service everyone understands that those require money to operate but there’s no analogous reason to tell people to upgrade to deliberately broken code.


> With that said, consumers can always choose a different library.

Yeah, you can always just choose a different css-loader > cssnano > svgo > colors depchain, because there are plenty. Dependencies like that empower people greatly, and we all know what comes with it.

When you're in a hot summer bus, it's immoral to turn off the AC only because it was you who decorated the knob.


> It was not injecting harmful code onto the machine, it was not an "attack" on anything, in any real sense. I feel all the media is doing so far is raking the maintainer over a fire, instead of asking the question of how did we get here in the first place? Why would a maintainer feel they need to take actions like this? What are they trying to achieve?

> Instead of talking about the role of maintainers, consumers, and what to do about the state of open source software and its longevity, we are instead using this moment to go after the maintainer as if they were doing the equivalent of using their npm packages to inject actual malicious, harmful code on the consumer machines, like a cryptominer.

I agree with all of your points. I would be more sympathetic to the backlash if those affected were paying license holders, but that's not what happened.

It is a privilege to use someone's hobby project that you didn't write for free and with no strings attached.


>It is a privilege to use someone's hobby project that you didn't write for free and with no strings attached.

It isn't a privilege; as far as FOSS is concerned, it's a fundamental right. Whether one pays or not is irrelevant: that code was given to the community and it belongs to the community. No one, not even the author or project maintainer, has the right to vandalize community property, regardless of their motives.


FOSS doesn't mean code is in the public domain, nor does it mean that the code is held in a public trust. It is very much the property of copyright holder. If the community feels this way, then they could have done the work to develop, maintain and distribute the product themselves. They didn't do that, though. They chose to depend on the labor of someone who was not compensated for their time or work, which is a privilege. No one is entitled to the developer's time or labor.


>It is very much the property of copyright holder.

And FOSS licenses purposely give away many, if not all, of the rights typically granted by copyright. If, when I buy a book, I get the same rights as the author to rewrite and republish that book, even to put my name on it and make money from it, then the book can be correctly described as owned by the community rather than by the author.

> They chose to depend on the labor of someone who was not compensated for their time or work, which is a privilege.

The developer chose to give away their labor for free, they chose to forego compensation. The FOSS model is clear in considering the ability of the end user to read, modify and redistribute code without limitation to be rights, not privileges. Whether a developer can find a way to make money is orthogonal and incidental. That's a privilege defined by contractual obligation, not a right.

You can have FOSS or you can require developers be compensated for their efforts, but you can't have both. These two concepts are mutually opposed.

>No one is entitled to the developer's time or labor.

No one is entitled to make demands of the developer, but likewise, the developer isn't entitled to make demands of anyone else. As far as the fruits of their labors go, everyone is entitled to whatever they choose to give. That's the whole point of FOSS.


But let's not pretend the original maintainer did all of the work or owned all of the copyright. There were a ton of contributors to colors, and one contributor actually had more lines of code edited than the original maintainer: https://github.com/Marak/colors.js/commits/master

and faker.js seems to be more or less a port of a ruby library which was a port of a perl library.


If the terms of the license are met it is a right to use the software.


It's interesting that you strongly defend Marak here, and were also the one who shared his blog post ~seven months ago about how retool had stolen his code and his idea (which decidedly wasn't the case).


There should be a reputation score for new releases on npm, with scores from beta users who are part of the community. Sounds similar to an app store, but more community-controlled.

In general, there should be a risk assessment score on npm for each package, sourced automatically from criteria like how many maintainers a project has, ownership changes, etc.

Also, making a new release available only to a few % of random users at first would have limited the impact.

Overall this pull with complete trust is just asking for trouble.

And yeah, this developer needs to be committed to a facility for his own good (if this doesn't qualify him for that then I don't know what will).


That's literally gatekeeping. It took me so long to get enough karma on HN to be able to leave a comment. This would just shift the system to being gamed by bots / upvote4upvote, and keep the existing hegemony in the community over who gets power and a say in projects. Do not want.


This article gives the initial impression of providing a solution while actually containing about as much value as saying we could end world hunger by giving everyone something to eat.

Yes, if every maintainer actively tested their software in depth(!) on every update of their dependencies, things like this would happen less.

This not only fails to acknowledge the issue that the package author intended to "highlight", it just plain ridicules it.

Yes if people put in way more unpaid labor, we might end up with a better quality product.

(I do not support the package authors decision, I just think this article is a shameless clickgrab)


>The right path forward for NPM and package managers like it is to stop preferring the latest possible version of all dependencies when installing a new package

As the author of a package that wraps/interacts with REST APIs, I find that begging people at my company to upgrade so I can deprecate old APIs can be a challenge. Maybe that's an argument for a monorepo, but still..

Obviously the current system has its flaws, but nudging packages towards newer code does have other clear benefits, like them automatically getting security fixes.


A quick shout out to all those who advocate code reviews because they don't trust themselves nor their co-workers, but will pull in dependencies without checking them.


Pin dependencies. That's it. If you want to try out the new version, just test it on staging. Otherwise, you'll just be shooting yourself in the foot


Time we had a new FOSS licence that specifically forces commercial users to pay for it. Why should big greedy corps profit from using software that's free?


"FOSS" and "forces payment" are conflicting ideals.


We already have such licenses; they're just not FOSS. Just change the license for future versions like SugarCRM did. They didn't break anything and now they have paying customers.

https://sugarclub.sugarcrm.com/engage/b/sugar-news/posts/sug...

You don't need to do something crazy like what Marak did.


Why should NPM even do anything? If someone decides they want to publish a "dysfunctional" version, then it's the role of devs to use the older version and to check before updating.

NPM is just a glorified file-sharing service, as it should be for JS packages, with an associated manifest format. They should be able to transfer rights, as far as licenses allow and it is reasonable, but that's about it.


I was trying to find Marak's motives for this, but I couldn't find much. Does anyone know more?

On his Twitter, Marak is claiming [1] that GitHub/NPM have suspended his account, which is an interesting move, but I'm guessing that's the only tool available to prevent further "sabotage".

[1] https://twitter.com/marak/status/1479200803948830724


Seems likely connected to this old post of his about no longer wanting to provide free service to corporations:

http://web.archive.org/web/20210704022108/https://github.com...

Also he seems to have somewhat of a Guerilla mindset, based on this:

https://nypost.com/2020/09/16/resident-of-nyc-home-with-susp...


Thanks for the links!

While it's not enough to generate a stable income, faker.js has received over $20k in donations through Open Collective. It's more than most other projects -- but I guess nowhere near what one could get if a big corporation wanted to sponsor the continued development of the project.

https://opencollective.com/fakerjs


The guy has violent tendencies for sure; he was even arrested for assaulting his ex-girlfriend.


> Also he seems to have somewhat of a Guerilla mindset, based on this: https://nypost.com/2020/09/16/resident-of-nyc-home-with-susp...

I see "charged," not "convicted" in that article.

Real world: Innocent until proven guilty.

Internet: Always guilty, because justice doesn't scale.


people don't build bombs for fun


Sure they do. Some call them rockets, others fireworks. Oh, and anvil shooting…


Do you really need 40 kg of potassium nitrate for your hobbyist fireworks though? That amount can blow up a house.

Also if you absolutely can't live without the immature thrill of mixing explosives with your bare hands, at least don't do it in a residential neighborhood.


It seems to be partially that big corporations are using his library without payment – I can sympathise with that, although if you choose to release it under that license I'm not sure whether you can accuse the companies of doing anything particularly wrong.

But also seems to relate it to a pretty insane conspiracy theory that Ghislaine Maxwell was involved in the death of Aaron Swartz, almost entirely based on the debunked theory that an account on Reddit named "MaxwellHill" was hers.

Seems like he was just flailing wildly.


> the debunked theory that an account on Reddit named "MaxwellHill" was hers.

I assume by "debunked" you are referring to the Vice article, which offered nothing beyond name-calling of the people investigating the theory, and ended with "u/MaxwellHill did not immediately respond to a request for comment sent through Reddit."[0]

An example of a relevant fact might be that vice.com was the most popular domain among all the links submitted to Reddit by the user MaxwellHill.[1]

It's also worth pointing out that another Reddit mod tried to debunk the theory by claiming that they knew MaxwellHill and that he was a Malaysian man, but this claim started to look suspicious when people found that MaxwellHill had mentioned he/she had "visited" Malaysia before (which is not something one typically says about one's home country).[2]

[0] https://www.vice.com/en/article/y3zbaj/incoherent-conspiracy...

[1] https://www.reddit.com/r/conspiracy/comments/hoqheb/vice_pub...

[2] https://rareddit.com/r/conspiracy/comments/hoopdf/why_is_vic...


No I'm talking about the fact that the only "evidence" is that they share a name. Every other detail doesn't match up, someone did the heavy lifting here: https://coagulopath.com/ghislaine-maxwell-does-not-have-a-se...

The point is there's been no strong evidence, so it's a huge leap to then decide this is a solid basis for "Aaron Swartz was killed because Ghislaine Maxwell was tight with the Reddit staff and he got wind of her child trafficking".


That's a pretty thorough debunking, thank you, and you're right that it's a huge leap to connect Aaron Swartz to this (which I've seen no evidence for). I do want to say, though, that it's a pity that the debunker didn't get better responses from believers on Twitter, as there are some reasonable counter-arguments.

Firstly, I'm not impressed by the (lack of) probability calculations. The article concedes that the Reddit user and Ghislaine are both born in December, which has, let's say, a 1 in 12 chance, and also says that there are "hundreds of thousands" of Maxwells in the world, so let's say 800,000 out of 8 billion, or 1 in 10,000. So the chance of these two details both coinciding is 1 in 120,000.

On the other hand, there are apparently 430 million active monthly users on Reddit, so we should expect there to be some December Maxwells in there, but considering the user's interests (US politics) and their use of British English spelling and phrases[0], I think the number of non-Ghislaine accounts with all these attributes would be very low.

So the fact that this Reddit user posts consistently for 14 years and then abruptly stops posting (publicly) right at the time of Ghislaine's arrest does seem statistically improbable. The debunker would have us believe that the account is still active sending private messages and posting in private subreddits (and no doubt has a girlfriend who lives in Canada), but if they're happy to have their activity disclosed like that, why wouldn't they also post publicly saying "I'm not Ghislaine you idiots!" or respond to the Vice journalist, for that matter?

Is it really that hard to imagine that influential people on Reddit would fake screenshots of post-arrest MaxwellHill activity to try to distance themselves from the actions of a notorious criminal? Do we just have to take these screenshots at face value, despite all the incentive to fake them, and the fact that a mod has already been caught out in a lie about whether MaxwellHill lived in or visited Malaysia?

The debunker also claims that someone trying to hide their identity (by falsely claiming to be male) wouldn't be so stupid as to use their actual name as part of their username, but obviously the motivation to hide what you are doing changes over time. When MaxwellHill was first registered, it probably seemed anonymous enough (there are "hundreds of thousands" of Maxwells in the world, after all), but over time, as the account became more influential (and Ghislaine got involved in more and more shady activities, and gave away more and more clues about her identity from her posts) she would develop a need to throw people off the trail by dropping some false biographic details.

I won't go into more details about the stylometry and contents of MaxwellHill's posts, as that is more subjective, but I will end by saying there is nothing contradictory about Ghislaine as a Redditor making negative comments about Trump (which is implied but not cited by the debunker) but Ghislaine in person bragging to someone about being a friend of Trump. It's possible that she hates Trump's current politics, but got on fine with him in a social setting 20 years ago, for example. In fact I suspect that one of her greatest talents is being able to seem friendly and trustworthy to powerful people whose actions she doesn't fully agree with.

[0] https://www.rareddit.com/r/conspiracy/comments/hnfx0r/not_co...


He was arrested a while back for having bomb-making materials in his apartment. Based on that and some other public comments, he seems to be mentally ill.


> It seems to be partially that big corporations are using his library without payment

He doesn't charge for it.


He is mentally unstable. His motives are probably irrational. This is not the first time he's reached the news for strange criminal activities.

(Yes, it's his code. No, it isn't legal to maliciously add an infinite-loop to a library that you know is used by other companies. The license adds some liability protection, but it's not so simple.)


Personally I think he did it to prove a point. IIRC he made a post last year where he said he would no longer be developing software for companies to use for free. His gripe is that he does all this work for free and companies then use his code to make money and do not contribute back to the ecosystem.

IMO this was not the most graceful way to make the point, but at the end of the day it's his project, his code. If you want the expectation of it always working, then pay him. Don't bitch and moan that the old man giving out free bread on the steps isn't here this morning.


> he made a post last year where he said he would no longer be developing software for companies to use for free

I understand the sentiment, but why choose the MIT license then?

GPL would be the right choice if one wants to stop companies profiting from free work without giving anything back.


Perhaps by the time he started feeling this way, it was too late to relicense? Megacorps would just fork from a pre-GPL release and carry on.


That's my understanding as well.

I'm not judging the author here, but on one hand I understand the frustration, on the other hand the package probably owes its popularity in part to its very permissive license.

Anyway, a fork maintained by the companies that use the package would still be a better outcome than continuing to work for them for free (or removing the package entirely).


Would GPL help though? It's a library used by tests, not part of the end product distributed to users.


I don't know in this case, but in general a GPL fork must stay GPL and, AFAIUI, importing a GPL package in your code is similar to linking to it, so if the code that uses your GPL package is published (on GH for example) that could be considered redistribution. Not sure about the legalities, but it could create enough friction to keep away companies not willing to contribute.


I guess I'm thinking more in terms of faker, which I believe was a library for creating test data. Makes me wonder if licenses matter much for tools that are never intended to be included in the final distribution.

Internal changes aren't going to be detectable, so it feels like the best you can hope for is that they don't want the maintenance burden of a patch set on top of your project. At that point it's not much different than MIT.

That said, AGPL would scare off most companies :p


Yes, especially GPLv3. Still open source, and companies avoid it.


A license is a pretty good starting point for that. Or simply making it closed source, as the overwhelming majority of programmers do.

The whole "contributing to open source" while effectively saying "I think open source should not exist and is a rip-off" makes no sense in a world where literally nothing forces you to create open source.


"...it isn't legal to maliciously add an infinite-loop to a library that you know is used by other companies"

Um, what?


> No, it isn't legal to maliciously add an infinite-loop to a library that you know is used by other companies.

Could you cite a source for this? From your comment I got the impression that "isn't legal" doesn't necessarily mean it is actually illegal. I assume you are referring to the US Computer Fraud and Abuse Act?


"18 U.S.C. § 1030(a)(5)(A) knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;"


I agree with awinter-py; it will be tricky to apply "without authorization" here. The developer only published the code; it was up to other companies to verify it and roll with it. If Company A installed the dependency, that could mean Company A "authorized" the code, because they pulled the dependency onto their own systems. So I'm not sure how the CFAA applies if Company A proceeded to download the code, which in turn means the code was authorized. It was their responsibility to audit and verify the code before incorporating it into their software.


Is it considered "authorized" if I knowingly visit a website but did not realize it would execute malicious JavaScript on my machine? Anyone who unknowingly installed this malicious package in their project is having that same problem.


'without authorization' here is going to be tricky. author probably did have authorization to both github + npm? and didn't knowingly cause transmission to anywhere else? the rest of the steps were pull, not push.


If we're honest about the US justice system, this would be a subjective decision made by non-technical lawyers, jurors, and judges. The purposeful malicious intent works hard against his stance.


>> The purposeful malicious intent is working hard against his stance.

OK, but should cloud providers similarly be held accountable for screwing their customers through negligent acts - to come full circle, like pulling these updates without doing any checks or QC?


Although it's a different area of law, product-defect liability attaches to all actors in the "stream of commerce" stretching end-to-end from the manufacturer to the retailer.


scotus in van buren (2021) let off a cop who was selling LPR searches for cash, so in theory the days of aggressive interpretations are over? eff called it a 'victory for security researchers', though it's probably too soon to say whether it's that or just a victory for people selling LPR data


The precedent doesn't apply. The SCOTUS interpreted (and in effect, defined) that the "authorized access" in 18 U.S.C. § 1030(a)(2) can't be qualified and limited to less access. If I'm authorized to see usernames, and due to light hacking I can also see emails - I'm n̵o̵t̵ ̵a̵ ̵c̵r̵i̵m̵i̵n̵a̵l̵ maybe a criminal (EDITED). If I'm authorized to check license plates for some reasons, and despite employer policy I checked license plates for some other reasons - I'm not a criminal.

The issue we're discussing here is based on 18 U.S.C. § 1030(a)(5) (note the last digit) and "authorized access" is not mentioned there at all. This section deals with damage and not access.


hmm, not really my area. This coverage of van buren seems to show the court trying to make 'authorization' agree in meaning in different parts of (a)(2)?

https://www.natlawreview.com/article/supreme-court-ends-long...


This incident has nothing to do with (a)(2) as Marak didn't _access_ any system. The only sections violated are (a)(5) (_knowingly damaging_ a system) and, arguably, (a)(7) (extortion). (a)(7) is a lot harder to argue though as his extortion attempt doesn't have a named target or an explicit demand and is generally... lame.

Edit: Note that I'm seeing this the same as a virus, not the same as a data-extraction hack.


[flagged]


He did breach basic ethics and standards of professional conduct by his actions, for sure. I would lean against considering what he did illegal, but I think there is an argument to be made that it would be illegal under the CFAA.


No he didn't, what are you smoking?

It is on the person using code to do due diligence to ensure that any code pulled down in an update is good to utilize. You're seeing an implicit obligation where there has never been one.

In general, most just have the integrity, empathy and detachment not to do what he did; however, any programmer/developer who doesn't have a checklist item of "audit that code" before updates is committing an egregious breach of professional ethics, as this is the exact circumstance that everyone should be on the lookout for.

Everyone here assuming there is an obligation on Marak's part to continue to provide an interface in untampered form, for their convenience, is part of the problem. You should have mirrored, or paid the man.


Rational motives can produce irrational actions, and do so somewhat reliably.


It can't be illegal if the software is provided as-is without any warranty as most OSS licenses do.


I'll leave it as an exercise for the reader to understand the difference between "I am not liable if I have a bug that ruins your production environment" and "I am not liable if I maliciously introduce a fatal bug knowingly into your production environment".


But he didn't introduce it into any particular production environment.

For fuck's sake, people need to pin dependencies to a known good version at the very least.


Of course he did. Intent matters, and this was a reasonably foreseen consequence of the way the system is set up.

He knew how npm works and he knew the implication of adding that code is that hundreds of libraries and production systems would automatically upgrade and install it.

In fact, the whole point of what he did was to introduce the code into production environments.


Raise your hand if you pull directly from the internet into production without testing!

<no hands raised>

How can we claim he did anything to production if no one will admit they're dumb enough to push this latest version without testing it?


Most of those (malice, who introduced it to your environment, fatal bug) seem contestable, even if we grant for the purpose of argument that the as-is disclaimer does not cover all cases.


Did you see the commit before it was deleted? I'd love to see a lawyer claiming anything else.


Which of the 3 claims are you referring to?

The commit is here as far as I know, not deleted: https://github.com/Marak/colors.js/commit/074a0f8ed0c31c35d1...


Any reasonable expert in the field will testify that it is not possible to write an infinite loop like that unintentionally.


The commit had a comment to the effect of being test / toy code not meant to be put into a release. I don't think a claim of randomly producing the snippet would be put forward in the hypothetical court case. Then there's the question of malice vs some other motive of expression in looping and printing some ASCII / zalgo art in your own terminal art lib.


Any reasonable expert in the field will tell you you don't plug an auto-updating dependency into production. Marak wrote code. You (the consumer) pulled it and deployed it without due diligence. That is entirely on you.

Not one person is obligated to keep your crap working except you. This has really outed all the people who really should know better.


If you put a bomb in a box and attach a button with a note that the button is provided as-is and author disclaims any liability, then leave it in public place and someone presses it, do you think you will not be found liable?


if you build a car outside in public view and someone copies it and crashes, are you liable?

this isn't a bomb in a box, it's his project car that you copied without any warranty or guarantee of stability.


No, he has no civil liability to the extent permitted by law, as the license states. He basically can't be sued.

That's different to criminal liability.


It could be illegal (regardless of warranty or license), but it happens to not be in most of the US.


His twitter is very weird - there's a bunch of Ghislaine Maxwell / Aaron Swartz conspiracy theories, which he somehow ties to GamerGate.


My mental model for "people who believe in conspiracy theories" has changed since I lost a few friends who went deep into them.

I think it's a refuge for distressed people: a belief that unconnected terrible events are somehow all controlled and "part of the plan".

I still don't get it and likely never will, but it at least aligns with anecdotal cases I've experienced of seemingly normal people going a bit off.


I never understood 9/11 truthers until someone explained that it was more comforting to believe someone was pulling the strings in the background (at least then there was a plan, even a sinister one) than to accept that a militarily inferior enemy halfway across the world can upend everything about your life seemingly at random.


IMHO the whole point of 9/11 was to make Americans feel insecure and distrust their government


Go pick up a copy of "Through our Enemies' Eyes." It's an account from a CIA agent who did extensive research on the events and circumstances that led up to 9/11 and the actors involved. His conclusion: It's not really about that at all; the answers are much more personal than that.


There was a plan, and it was by Osama Bin Laden. But he was naive. He wanted the American people to question themselves why something like this would happen to them, then find out about all the horrible things USA has done to other nations. It was never about hating freedom.


I don't understand how the former is more comforting. A militarily inferior enemy with one successful attack seems preferable; it's safer than a sinister plan from the inside.


Observing a schizophrenic I knew made me realize people high in "intolerance of uncertainty" can probably discard reality checking to arrive at a firm conclusion. I googled that and found a psych paper claiming schizophrenics indeed tend toward high intolerance for uncertainty. It was a relatively new finding.


Astral Codex Ten posited that conspiracy theories are part of the Epistemic Minor Leagues [0]. In other words a way for people to flex the part of their brain that doing real research and discovery stimulates even if they don't feel they can belong to or contribute to "regular" intellectual activity like academic research.

[0] https://astralcodexten.substack.com/p/epistemic-minor-league...


[flagged]


I don't think Trump believes in conspiracies, generally speaking. I think he simply finds them useful for manipulating large groups of people. I could certainly be mistaken.


i don't think he does either

plays along with the conspiracies as long as it benefits him and shows him in a good light

his psychopathy is a level beyond that of conspiracy believers


[flagged]


> Downvotes in lieu of logical argument in 3...2...

If you want a logical rebuttal, you have to make a logical argument, and not

> At least 70-80% of 'conspiracy theories' are true. Yes, even many of the 'crazy' ones.


I hate this position. Imagine someone ranting about MKULTRA if the CIA never acknowledged it. Imagine it was just some papers on the internet with no provenance. I like to think of it as a matter of diversity of thought. We need the "crazy" people to explore the really unlikely, bizarre scenarios that MIGHT be true that most mainstream people will never even humor. At the end of the day, reality is stranger than fiction.


Sure, 1% of conspiracy theories turn out to have some truth to them — but on the whole, if there’s no way of knowing in-advance which ones those are, I don’t know if “engage with all the conspiracy theories, spend time and energy attacking fictional problems and innocent people” is a better response than “reject all the conspiracy theories, spend time and energy on definitely-real problems, allow some dodgy organisations to get away with stuff”...


You also have the conspiracy theory that says that the most outlandish conspiracy theories are amplified much more than they naturally would be in an attempt to discredit all conspiracy theories. Which I do believe, myself.


The "crazy" people make it less likely that the bizarre scenarios that are real will actually be exposed, because the crazy people that do on rare occasions get one right usually are at the same time ranting about a dozen other things that actually are completely bonkers.

For instance if someone says they were abducted by aliens (from space), but they also say that COVID was designed by Gates and Fauci when they were roommates at Princeton to depopulate the world [1], and that Betty White was using Hollywood to promote the vast secret satanist pedophile network that traffics millions of US children each year, while her sister Barbara Bush did similar in the government, all at the behest of their father the satanist Aleister Crowley [2], and that in early January 2021 the military arrested Biden, Harris, Pelosi, Schumer, Democratic governors, Fauci, Gates, etc and took them to Gitmo where they were tried and executed, and restored Trump to the Presidency but are using robots and actors to make it look like all those dead people are still running things because they don't want to tip off the people running the underground adrenochrome harvesting operations until the children have been rescued [3], I'm not going to take their alien abduction seriously.

If on the other hand Neil deGrasse Tyson said he was abducted by aliens I'd take it seriously. I wouldn't necessarily believe he was actually abducted by aliens, but I'd believe there was a high probability that he had a good reason to believe he was, and that would be something worth investigating.

[1] Yes, there are people who believe that, and yes, they really say it was at Princeton even though neither of them went to Princeton.

[2] That started appearing on fringe sites shortly after Betty White's recent death.

[3] Another real conspiracy theory.


Oops...I just realized alien abduction was a bad example, because it is not really in the same category as the other things I used for the crazy person's beliefs.

Those other things all involve belief in things they have been told but have not personally experienced. The alien abduction claim would be a claim that they have personally experienced alien abduction.


Everyone has some non-orthodox beliefs. I am sure I'm not exempt.

I'm not trying to describe someone who has a few odd beliefs, especially if they know they are a bit odd and can entertain good arguments.

I'm trying to describe where someone descends into the whole web of inter-related beliefs and accepts them with a religious determination, resisting even the suggestion they could be wrong.


the problem starts when you start preaching your beliefs to others


I was about to point out that all this guy's motivations seem to be mental-illness YOLO, and that it seems he recently lost his job. Kind of like the movie "Falling Down", but even more petty somehow.


https://www.google.com/amp/s/www.theverge.com/platform/amp/2...

Seems like he had a series of odd statements around the same time: liberty, Aaron Swartz, and some such.

Overall it sounded to me more like a breakdown than anything else.


I'm pretty certain there is mental illness involved when he's making bombs

https://abc7ny.com/suspicious-package-queens-astoria-fire/64...


>> On his Twitter Marak is claiming [1] that GitHub/NPM have suspended his account

Interesting. Shouldn't they also block access to the repositories? Oh right, that would hurt their other users. This makes it clear who thinks they own code and who works for them for free.


Well fuck them. It's his code, he should be able to do with it whatever he wants.


If a major FOSS Linux distro decided to add a virus to their latest release, would you say the same thing?

It certainly isn't "his code" anyway. Many people contributed to it with the good-faith belief that he would not use it as a Trojan horse.


Using someone's code (for free) doesn't mean you can dictate terms to them.


Nobody is forcing them to rewrite it or anything. Github canceled his account because knowingly posting malicious software is against the TOS, and it is not difficult to argue that putting infinite loops in the startup code of all your users is malicious. He still has the code on his computer to do whatever he pleases with. The rest of the world is just looking at it askew and calculating the odds something like this is going to happen again and what they should do about it. If you are going to be a dick to your users you should not be surprised that they will refuse to use your software afterwards.

In a way, it is kinda sad that this behavior drives him away from the financial security he so seemed to crave. "Yes I just caused your dev team to do hotfixes during the weekend and cost you XYZ dollars in downtime, can I have a senior dev position now plz" is not a very convincing line when interviewing.


I agree that it was a dick move from his side. Fork his project, or use an older version. I am not convinced that him breaking his code is something that ought to be punished.

GitHub gets the right to close their hand while Marak doesn't.


I see it more as an attempt to protect against another similar action. Not so much to punish, as in "teach him" or "get revenge".


If I voluntarily share my lunch with someone at work, I can't poison it even though they got free food. Even if someone at work is stealing my lunch, I can't poison it.

We have a right to basic safety even in situations we aren't paying for things. That extends to the digital world. You can't put malware in your open-source project without a giant disclaimer "THIS IS MALWARE. USE ONLY FOR RESEARCH PURPOSES" or something like that


>> . That extends to the digital world. You can't put malware in your open-source project without a giant disclaimer "THIS IS MALWARE. USE ONLY FOR RESEARCH PURPOSES" or something like that

sourceforge?


You do not have a right to basic safety when you divest yourself of basic precautions any reasonable actor would have taken. A reasonable actor checks new code pulled down to ensure it actually works. You accept the risk integrating in a stranger's code without mirroring/audit.


The legal profession tends to take a more permissive stance when determining the limits of what's "reasonable", as can be seen from how the phrase "moron in a hurry" has become a term of art.


My understanding is that he wasn't voluntarily sharing it. In November he pretty clearly told people to fork his code and move off his repository, and not to rely on his packages anymore. He used a graphic clearly warning of an impending "strike."

GitHub Post: https://archive.fo/9fwGz

HN Discussion: https://news.ycombinator.com/item?id=25032105&p=2

That doesn't make it not a dick move. But let's not pretend they still had permission to remote load directly from his repo. If I own a personal storage facility and I decide to demolish it, and I say months in advance for everyone to take their stuff out and store it somewhere else, and then I demolish it and people lose their property... the harmed individuals are not faultless.

Would the professional and responsible thing to do be to announce a cut-off date explicitly and make very clear that Bad Things were going to happen if you didn't stop relying on him? Yes, 100%, and I have a low opinion of him for not doing so.

However, while what he did was bad, it isn't quite as bad, in my opinion, as people are making it out to be.


Could you please tell me how to avoid ever using your code?


> If a major FOSS Linux distro decided to add a virus to their latest release, would you say the same thing?

https://www.eff.org/deeplinks/2012/10/privacy-ubuntu-1210-am...


> Well fuck them. It's his code, he should be able to do with it whatever he wants.

He can do whatever he wants with his code. That doesn't mean private companies have to accept it, given the liabilities. He is using a private company's platform to publish his code, which could make that company liable for his actions. By removing or suspending his account, the liability is minimized and shifted back to the developer. If the company kept the code in their system, widely available for distribution, other companies hit by this malicious code could have standing to sue the host for allowing it to be published. Imagine a thousand companies bringing lawsuits against the hosting company. So the host doesn't want to be liable for this, and shutting off the account is in their best interest.


Not different from Google shutting down someone's account, something that's constantly bemoaned on Hacker News.


Did Microsoft completely ban this developer from their other services? To my understanding, only the developer's GitHub account is suspended; he is not banned from other MS products.

That is the difference. In Google's case, Google bans those users from all of their services: if they are banned from Gmail, then they are likely banned from the Play Store, Workspace, Google Drive, etc. (the entire Google ecosystem). In this case, Microsoft didn't ban this developer from their other services; only his GitHub account is suspended, without affecting anything else he uses.


How do you know that when you don't know what else he was using his GitHub account for?


GitHub isn't preventing him from doing what he wants with his code; it's just not hosting it for him anymore.


Instead of the government, we let corporations repress deviants.


Deviant or not and corporation or not aren't at play. It'd be the same story if you had a video sharing site and didn't want to host my videos because I broke the "no comedy" rules by uploading a skit.


Exactly.


Exactly what? The government wasn't involved here so GitHub was allowed to decide what they wanted to host and the author was allowed to do what they want with the code?


This caused me so much headache last night. I maintain a little-used package for our school that everyone can use, and a sub-dependency used the colors package. I was just about to give up when it was fixed out of the blue, and then I found this article today.


How about a foundation to run the package repository (for independence), a requirement for multiple sign offs to publish an artefact, and a requirement that successor maintainers exist?

Yes, this would create a barrier to entry, but I’m not sure that’s a bad thing.


What I love about .NET is that most of the stuff I need for backend development is provided by the standard library from Microsoft. So I tend to use 10-20x fewer external libraries compared to Node/NPM.


This was posted yesterday on HN btw (at least the announcement of this problem).

https://news.ycombinator.com/item?id=29863672


rsc's post isn't (directly) about this colors package. It is marginally about npm and broadly about package managers.

rsc is stating (paraphrasing): the current state of affairs in many package managers is not a good design. This is (yet another) reason why package managers should work differently than they often do by default.


Was hoping this would be something productive, like giving some help to OSS developers suffering burnout, but it's just advocating for not updating to minor versions. Simplistic, and it will cause other problems.


I have to ask the question - how would people have felt if instead of this, he’d spliced output into the package that printed a donation link to his patreon or similar on every import?


Does anyone have a cli command to check down the dependency tree for these packages? I'm seeing this issue on my company's app and trying to figure out which package(s) it stems from.
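
Edit: `npm ls` seems to do what I want. It walks the installed tree and prints every dependency path that pulls in the named package (on newer npm you may need the --all flag for full depth):

  npm ls colors
  npm ls faker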



A better option would be distributed social code review like crev:

https://github.com/crev-dev/


Is it possible to have users vote on whether a new version is "good"?

Maybe automatically via software that depends on the module successfully completing test suites?


Easy, make --save-exact default behavior. We don't have stable subresource integrity in Node.js yet, but this would be the next best thing.
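
For anyone who wants that behavior now, save-exact is an existing npm config key; it's just not the default:

  # per user (or add save-exact=true to a project .npmrc)
  npm config set save-exact true

After that, `npm install <pkg>` writes an exact version into package.json instead of a ^range.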


I know this isn't a solution but... I wondered if npm should run the tests before it allows a project to be published. In particular, it should run the previous version's tests: if the current package is 1.2.3 and you upload 1.3.0, the 1.2.3 tests should pass on 1.3.0 or else the publish fails (not sure that's even possible to automate given the current design).

Even then, poor or missing tests would get around this, but hopefully people would choose well-tested packages over poorly tested ones.
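
The closest existing mechanism is the prepublishOnly lifecycle script, which npm runs locally before `npm publish`. It executes on the author's machine under the author's control, so it can't enforce anything; a rough sketch (package name and test command are made up):

  {
    "name": "some-package",
    "version": "1.3.0",
    "scripts": {
      "test": "node test/run.js",
      "prepublishOnly": "npm test"
    }
  }

Re-running the _previous_ version's tests against a new upload would have to happen registry-side, which npm doesn't do today.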


1. Tests should not be published

2. You can update the test script with the release too: "test": "echo pass"


> 1. Tests should not be published

tests are published, they just aren't currently used by anyone but the owner

> 2. You can update the test script with the release too: "test": "echo pass"

which is why my suggestion was for npm to run the previous tests as well as the new tests. Given that people will more likely choose tested packages over untested ones, a package whose tests were just "echo pass" wouldn't get much traction. And, given my suggestion is that npm run the previous tests, you couldn't just change the tests to "echo pass". Your only opportunity to do that is a major version, but major versions are not upgraded automatically in any form, so people would be more likely to catch and flag it.


I see what you're suggesting. That's clever! Not sure how scalable it is. I'm sure if you ask the npm folks they will give you some wild examples of packages doing crazy stuff!


Or package signing would help. Something NPM has continuously refused to implement because they believe it is difficult…


Maybe I misunderstand what package signing is, but the actual owner of the code published the BS. He owns the keys to signing the packages as well.


How would that help if these changes were pushed directly by the original creator himself? Not only from his account, but by him as a person.


What exactly is this person's motivation to do this? Interest? What does he want to draw attention to?


When you're starting to make homemade bombs, there's a good chance that mental illness is involved

https://abc7ny.com/suspicious-package-queens-astoria-fire/64...


Couldn't this have been avoided by using a fixed version in package.json, e.g. 5.5.3 rather than ^5.5.3?
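
That is, the difference between these two forms (version numbers illustrative):

  "colors": "5.5.3"     exact: only 5.5.3 is ever installed
  "colors": "^5.5.3"    caret: any 5.x.y >= 5.5.3 may be installed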


I made a post yesterday where I advocated for this (with lockfiles) at https://maxleiter.com/blog/pin-dependencies but it was pointed out to me that pinning dependencies would cause duplicates in your rollup/webpack outputs. YMMV, as I haven't experienced that but haven't extensively looked into it


This could turn into a real legal test for FOSS licenses!

Merchantability seems to be the root issue here. This action by Marak Squires (maintainer of the projects) would ordinarily be illegal by way of breaking the law on implied merchantability: the 'product' (faker.js and colors.js) that the 'vendor' (Squires) supplies to its customers (users of the package) is intentionally designed to be not at all fit for the purpose that it is 'advertised' as.

It's like a bakery selling objects that look exactly like bread, but which aren't edible. Even though the shop does not _call_ any of its products 'bread', the look of the store and the name of it, as well as its reputation as having been a bakery before, imply it. Calling something 'bread' implies it is edible.

The interesting snafu is two-fold: the FOSS license, and a more fundamental issue. The FOSS license attempts to disavow an implied right, and does so in a license that does not require acknowledgement by the 'buyer'. A casual analysis would indicate that a FOSS license is therefore laughably unenforceable: of course you can't do something as fundamental as denying an implied right without clear, explicit acknowledgement from the other party, let alone via a license that's buried 5 pages deep in an About... dialog someplace.

On the other hand, FOSS is free, so the 'buyer' would presumably be expected to realize extraordinary rules attach to their 'purchase'.

Or better yet, perhaps it is better to stop couching this in terms of 'product', 'vendor', and 'buyer'. If you paid nothing, are you a 'buyer'? If this isn't a transaction then you aren't a merchant, and therefore merchantability cannot apply. (In this sense, all FOSS licenses mentioning merchantability are in fact __reducing__ the legal protections of FOSS authors! By using that term you are yourself construing the relationship between the FOSS author and the FOSS user as a vendor/buyer relationship! But, I'm not a lawyer, so I'd best put this pet legal opinion back in the shed where it probably belongs.)

At any rate, however FOSS is supposed to imply non-merchantability (be it the license, or simply that you never paid for it, or some combination thereof): that would, I think, be __required__ for this act to not have been illegal.

Once it's illegal, github's actions are entirely justified. If it isn't illegal, then github's / npm's action of taking over the 'project' is more dubious, though I'm pretty sure the github agreement gives them the right to do this.

And if you think that github cannot just claim this right via a clickthrough license... github is __free__. If github can't claim this right via a clickthrough, then Marak's claim that he isn't on the hook for a merchantability promise seems doomed, making his act illegal, making github justified yet again.

Turn left or turn right, github was allowed to do this. Whether it's 'moral' - oof. That requires beers and a long night I think.

I think it's a test of the FOSS license. If a bakery decides to bake their bread intentionally with extremely foul-smelling chemicals (harmless, other than that they make the bread taste utterly disgusting), even if they announce it on their facebook page or whatnot, that's not just unethical, that is illegal: there is an implied merchantability and all that.

It seems rather obvious that a software product that implies it does a certain thing (whether you go by its name, its description, or simply by the fact that it did that thing before the update - a thing that is well described, that the author is clearly aware of, is making no effort to correct, and is in fact contributing to by e.g. exposing documentation that echoes this), and then intentionally doesn't do that thing, is breaking the same law.

However, the FOSS license explicitly disavows this, which you ordinarily cannot do, but backs that up with: well, ordinarily you'd be paying something for this, and you're not, so not even implied merchantability applies. It does this in a worse-than-click-through style license (you do not need to explicitly acknowledge a FOSS license).

Ordinarily (and I am not a lawyer), I'd say the license style of FOSS projects is obviously stupid: you can't write an extreme license (and disavowing merchantability certainly is extreme!) and do so in such a roundabout way as a file buried in some about page someplace. However, the rub is: you don't pay for FOSS, and the concept of FOSS itself (including disavowing merchantability) is somewhat well known. One could make a plausible claim that 'buyers' can be expected to either know what FOSS is, or at least to realize that the literal zero cost surely means it comes with caveats that are extraordinary.

Perhaps there's no need for FOSS licenses. If you don't pay anything, can you impose duties? US law (and many other countries' laws) does this all the time, but for this specific case: treating someone who puts a FOSS project online as a 'vendor', who therefore implicitly adopts the duty of providing a merchantable product (which they then attempt to get rid of via the license; for the purposes of this defense, let's assume that FOSS licenses are not legally binding, so the duty applies) - that's weird, right? The 'buyer' isn't a buyer; they did not pay.

I'm in fact tempted to think that FOSS licenses __reduce__ rights: by having that license and mentioning merchantability, you've construed your relationship, as FOSS author, with the FOSS user as one of vendor and buyer.

But, I'm not a lawyer, and the OSI and the lot are, so I should probably defer to their judgement, perhaps? Then again, a lawyer is somewhat unlikely to get behind the notion that 'no legal documentation is the best solution here; let common sense dictate and pray instead'. What with being lawyers and all, even the good ones.


Just pin your code to specific versions; the package manager, build tools, and linter can give warnings and case-study examples about why the warnings are relevant.


And then in five years there's a log4j vuln and you've got to figure out how to upgrade a bunch of (potentially incompatible) versions to get a fix.

It's a different approach, with pros and cons. Personally I think the npm model has more downsides than other approaches, but it's not clearly always the wrong approach.


They should do nothing. If your software comes with a contract, well, that's different I guess. Otherwise, make sure you have a reproducible build, for a start.


Does anyone else feel awkward about the use of the word "attack" in this context?


This isn't like leftpad being deleted: he added an infinite loop on purpose in a patch release of the package. This is a malicious attack. Only later did he delete packages.


No. It's a change he wanted to make to his code. Code is and has always been art. People have been consuming his code, and not keeping an eye on it to make sure it continues to mesh with their own work.


The author never even said he was following semver.


Intentionally adding code that has an infinite loop (the for loop literally uses "Infinity" as its target) sounds like an attack to me.


Indeed, it's common nowadays to label things (ideas, people, etc.) in order to frame them in a way that's convenient to the labeler and helps him advance his agenda. I think given the global situation, some people become more sensitive to this kind of tactic (which is often used), while others have shown just how susceptible they are to it.

The author of the software didn't attack anything. He just pushed some code into a place he had legitimate control of.

Some irresponsible (see what I did?) developers downloaded and executed this code without checking, and as a result their stuff broke.


Yes, I strongly disagree with the wording (attack) here.

If publishing a package you control is considered an attack, then the same could be said about the developer using the package or the admins deploying said package.


It isn't an attack. He didn't do anything out of the range of his rights.


Attack and rights are not exclusive concepts. I would venture to say that your comment is mostly a non sequitur.

It's an indirect attack against the lazy and complacent, at the very least. How dare the developer do that to them?!

People hate it when you make more work for them and companies will actively fight back so the outrage is predictable. What's surprising is the lack of support for both user and developer agency. Some have gone so far as to say that users have some sort of ownership over someone else's licensed code they chose to blindly change (apply an update) by right of "community" because they used it when it did what they wanted.


If it’s his project, as far as I’m concerned he’s within his rights deciding to make it do something different to what it did before, even if that is malicious. There is precedent for this with Chrome addon devs selling their addons to malware companies on the quiet.

That said, it is an attack on his users and it’s a shitty thing to do. He’s likely ended his career as an open source developer, and likely a paid developer as well.


[flagged]


Oh, you can, just don't be surprised if the remaining community works around your sabotage.


I cannot fathom why Russ bothered writing this. Npm is a dumpster fire, and below him.


Nothing. Better to live in a world with some gaps than in a completely closed prison, because that's what you get when you start governing stuff like this with an endless list of strict regulations. Better to accept that things can go wrong sometimes. It isn't the end of the world.


Have you read the article? It doesn't look like you did.



