Backdoor in event-stream library dependency (github.com/dominictarr)
1034 points by cnorthwood on Nov 26, 2018 | 491 comments



I really have a hard time putting as much blame on the author as the people in that Github thread are doing. Maybe they could have handled this specific issue a little better, but the underlying problem is just one of the flaws in the open source community that everyone has to accept. Maintaining a project is a lot of work (even just having your name attached to a dead project involves work) and the benefit from doing that work can be non-existent. If the original author has no use for the project anymore and someone offers to take it over from them, why should the author be expected to refuse? Isn't adding another potentially unknown maintainer generally better for the community than a project dying?


Especially given that Dominic maintains hundreds of packages[1].

He's a good guy and a great developer, and he's always been supportive of others who want to help out. When working on Scuttlebutt[2] I got some flak for a change I made and he said this[3]:

> I argued against this before, but @christianbundy wanted to do it, so I figured okay do it then. Maybe it is annoying for everyone (in which case christian probably learns for experience) or maybe it turns out to be a good idea! in which case we all do. Either way, shooting down* something someone wants to do (that doesn't have a strong measurable reason) leads to a permission based / authority driven culture. And that isn't what I want. This may make ssb less stable... but I'd rather have less stability but more people who feel like they can work on it / are committed to it.

>* "shooting down" is not the same as negotiating a better solution that addresses everyone's needs. I think that's a good thing, plus being good at negotiating is a very good skill.

[1]: https://www.npmjs.com/~dominictarr [2]: https://scuttlebutt.nz [3]: http://viewer.scuttlebot.io/%25%2B2WIaJ%2BoRVURoTAke9YWuJzp%...


Dominic is wrong. If there's no authority, then there's nobody taking responsibility. This is a perfect example of how lack of organizational structure simply does not work in the real world. Dominic's other projects like scuttlebutt are likely doomed to fail as well because of his wrongheaded views about organization.

For a successful counter-example, one can look at the well-structured, hierarchical organization behind Linux. Lieutenants gain authority based on the merit of their contributions, and are responsible for reviewing the work of other developers. Authority works, has worked for thousands of years, and will continue to work for thousands more.


Linux has a giant user base, a giant installation base, and a giant pool of talented devs willing to take on unpaid work.

If this is an indictment of anything, it's an indictment of the entire NPM ecosystem -- it's been the wild wild west for years; haphazardly using whatever NPM install gives you is baked into the culture.

Sure, Dominic is an active participant in that culture, but it seems to me that it is impossible to have a largely unmoderated volunteer system with as many packages as are actively used without things like this happening.

Keep in mind, this is a case where the system worked, more or less -- an observant user caught the issue, and made a public issue of it. Who knows how many packages have slipped by like this?


> Linux has a giant user base, a giant installation base, and a giant pool of talented devs willing to take on unpaid work.

And also important: it even has a giant number of paid maintainers, for whom this is their main job.

For those people the incentive to continuously maintain things is different than for someone who gets nothing except more work out of it.


> Linux has a giant user base, a giant installation base, and a giant pool of talented devs willing to take on unpaid work.

Linux didn't always have a giant user base, and it wouldn't have gotten there without strong leadership having a sense of pride and responsibility.


"I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things)."


I'm curious if anyone knows how it really has developed. Has anyone documented the history? Do I understand correctly that right now it's a hierarchy with Linus at the top and levels of "Lieutenants" managing increasingly more detailed levels of subsystems? How was development organized before? Are the developers that are paid mostly on the top or the bottom of the hierarchy? Are the proportions of paid developers very distributed among different companies or does a major portion of them belong to one company?


> other projects like scuttlebutt are likely doomed to fail as well because of his wrongheaded views about organization.

Define failure. I don’t know Dominic and I haven’t looked into the Scuttlebutt project beyond being aware of its existence and what it is, but...

He talks about creating a community where anyone is welcome to contribute.

It is perfectly fine for an open source project to have the development process and the community as its raison d'être.

Just because a project doesn’t outcompete every single alternative doesn’t mean it’s failed.

Just because a project isn’t even used by more than a handful of people doesn’t mean it’s failed.

It all depends on what the goal of the project was in the first place, and what the goals of the contributors are.


A security hole and backdoor is always a failure in a software product. Having someone else control your system is just about the worst you can get. It has nothing to do with relative definitions because it always degrades every other objective the project could or does have. If a model of leadership leads naturally and often to security holes, it is time to reconsider the model. If it is a common library, then it is even worse. Open source that is used widely is _even_ more important to protect because the impact can be so much greater.


This sounds like "if it's not perfect, it's a failure."

I'm actually not aware of a single piece of software that didn't at some point have a production security vulnerability.

Facebook failed.

Google failed.

Amazon failed.


I don’t know, if I write open source software, the only definition of success I use is whether or not I had any fun writing it.

Anything else is gravy.


Lots of projects, including Windows and Linux, have had security holes that allow remote control of your system. A security flaw does not mean failure; it's always about weighing the cost of security flaws against the utility the software provides. Even a completely compromised system can provide utility to many users.


> He talks about creating a community where anyone is welcome to contribute.

Yes and in this case, that was exactly the problem.

> It is perfectly fine for an open source project to have the development process and the community as its raison d'être.

Conway's law is not an instruction manual.


The problem was that too few people contributed. A contribution to an open source project doesn't have to be a pull request. Glancing over the code you're pulling down rather than assuming the maintainer is infallible counts just fine. Very, very few people do that in the JS ecosystem though.

When a package gets 2m downloads a day and it still takes 2 and a half months to find a problem, a huge number of developers have failed to do their part.


> Dominic is wrong. If there's no authority, then there's nobody taking responsibility. This is a perfect example of how lack of organizational structure simply does not work in the real world.

Maybe, but the sort of people who are not jerks (i.e. who don't get too much pleasure out of having power) who are willing to lead for free are... very rare.

Fundamentally, non-commercial open-source needs to evolve organizational methods that require less leadership, just because there aren't a lot of good leaders willing to work for free.


But in the absence of a trusted, well-structured organization, authority rests on the shoulders of those using the library as a dependency.


I would rather see it as if contributors gain _trust_ based on their contributions, which is not necessarily the same as authority.


If a contributor's end goal is to publish a backdoor, then whether you make them wait 0 or 100 commits before trusting them doesn't change the end result.

In fact, if you had the energy to do the attack at all here (which took some work), having to fake trustworthiness doesn't require much more effort. Just look like a super enthusiastic contributor, put work into the readme, bike-shed over some issues every month, and bam.


What happened is that Dominic gave ownership to the only person who wanted it: "he emailed me and said he wanted to maintain the module, so I gave it to him. I don't get any thing from maintaining this module, and I don't even use it anymore, and havn't for years." At every single point, there was authority and responsibility. It is just that the authority turned out to be a bad actor.

It was a situation where the former authority was no longer interested in being the authority, since they gained nothing from it and had other work.


> that doesn't have a strong measurable reason

I think that's the key here - organization/hierarchy is important as a project scales up, but he didn't want to stifle a new contribution without good reason.


Scuttlebutt already works, people use it and are building on it.


> Lieutenants gain authority based on the merit of their contributions

Meritocracy is an outdated discriminatory practice.

https://postmeritocracy.org/

https://www.theguardian.com/commentisfree/2017/mar/20/merito...


Is there a showcase of open-source projects managed/developed according to this post-meritocratic system?


> Meritocracy is an outdated discriminatory practice.

Of course merit is judged subjectively (like anything else), but what exactly is the alternative?

In particular, I don't find anything actionable in that manifesto in regards to decision making.


To be clear, you're arguing that people should be judged on their identity and background rather than the merit of their contributions?


Skimming those links, I think the position is that people are already being judged on their identities before they're judged on the merit of their contributions, not that they should be.

The extent and congruence of the findings regarding, e.g., blinded vs non-blinded resumes alone should put the lie to the idea that a true "meritocracy" is something humans can actually meaningfully do at this point.


That's a contrived "no true Scotsman" right there. Also quite a few people argued that democracy should be replaced by dictatorship on similar grounds: "look at America, it's clear that democracy is shit".

Imperfect realization doesn't mean the ideal is invalid (except when that imperfection is inherent to the ideal). You can argue for more equal, more meritocratic community; tearing it down because it's not already perfect is disingenuous and destructive.


You are feeding a troll. Please don't.


> maintains hundreds of packages

Might be worth looking over some of these to see if he has also given access to them to some dodgy people, as most of them appear to be as old as event-stream and are also probably as unused by him. And if we are concerned, we should, he would advise, simply make a fork and write an email to npm to warn them. But he's a good guy, so don't blame him if you find that he's done this before in the other 422 packages.

Edits: I mean I do understand having software that's old and unmaintained, and it's fine and easy to hand off control to others - we developers are generally trusting of other developers. I wonder if he has also trusted others in this way before and that he has been exploited before without anyone knowing.


> And if we are concerned, we should, he would advise, simply make a fork and write an email to npm to warn them.

Well... yeah. If you don't trust him, don't trust him.

If you don't trust NPM's vetting, don't trust NPM's vetting.


npm does vetting?


NPM increased their efforts with regards to auditing. They realized it was a prominent attack vector and without them taking responsibility for some level of the problem they would be throwing their reputation down the drain. It isn't perfect, but it's a step to improving the situation.


Sweet! There are so many brilliant & creative people in the Node community. I'm positive there are some innovative ways to approach this problem.


> Especially given that Dominic maintains hundreds of packages

> maintains hundreds of packages

> hundreds

How can a single human reasonably and responsibly do this? This number alone demonstrates how sloppy and inept the Node.js community's practices are.


NPM packages are frequently tiny, single-purpose functions. The infamous left-pad module is only 47 lines of code. https://github.com/stevemao/left-pad/blob/master/index.js
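A micro-package like that is typically little more than one exported function. As a rough illustration (a sketch only, not the actual left-pad source), the entire package can look like this:

    // index.js - an illustrative left-pad-style micro-package:
    // pad a string on the left to a target length with a fill character.
    module.exports = function leftPad(str, len, ch) {
      str = String(str);
      ch = ch === undefined ? ' ' : String(ch);
      while (str.length < len) {
        str = ch + str; // prepend the fill character until long enough
      }
      return str;
    };

That is roughly the whole "project" that an enormous number of builds end up depending on.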


Node people closely follow the single responsibility principle that is prevalent in Linux. By your logic, Linux is sloppy and inept too.


It's not.

Not to that degree.

Unix frequently espouses "do one thing", but how much "one thing" is, is always open to interpretation.

And it's never been quite as small as e.g. left-pad. It's more something like `printf` or `cat`, which do "print a formatted string" and "concatenate files" as their "one thing", respectively.

Let me reiterate: The problem isn't "single responsibility", the problem is "single responsibility for half a thing".

The pieces are just too tiny, which leads to an explosion of packages, which leads to packages being badly maintained.


`cat` without any options support is possibly simpler than left-pad.

I think the big difference is that very few people set their systems to trust and receive updates of `cat` directly from the developers of `cat` (and same for all the other standard tiny utilities). Instead, they rely on a Linux distribution to vet the developers' code and to regularly pull in updates. Maybe there's room for a similar model of distributions vetting code in the npm ecosystem.


Do you really think cat is as simple as left-pad?

https://github.com/freebsd/freebsd/blob/master/bin/cat/cat.c

Also, cat is kind of an important primitive for building a functional system that uses files as one of its core abstractions. It makes total sense that it would (a) exist and (b) be well maintained by an authoritative and reliable source.

Left pad it is not.


> `cat` without any options

Interestingly, the `raw_cat()` function is 37 lines long.

https://github.com/freebsd/freebsd/blob/e511976db9f51bae1a59...


UNIX `cat`, unchanged from V2 through V6 until it was rewritten in V7, was 44 instructions.


The core utilities are provided as a suite. The cat utility is one of a hundred others, all packaged together. Even without distributions, you'd only need to vet one organization for those basic utilities.


Add to that the fact that while the Unix way works well for connecting certain types of programs through pipelines and/or shell scripts, those programs themselves are usually written in C or Perl, Python &c, which are languages that have big standard libraries. It might be reasonable to assume that while programs that conform to the Unix philosophy may constitute useful building blocks for end user or sysadmin tasks, more complex programs, even those "building blocks" themselves, are better off with languages that provide them with a rather large, well-thought-out toolset that's built as an integrated whole, i.e. a standard library.


I don't follow your comparison. What parts of the Linux ecosystem have a single person maintaining a large multitude of packages?

Looking at the kernel itself you see a hierarchy where people are only responsible for small segments of the kernel.

Looking at a standard distro like Debian, which makes this easily available at https://www.debian.org/devel/people, you'll see that most individuals are only responsible for a handful of packages, with the bulk having a team responsible for them.


I think with "single responsibility" he means the packages. So instead of complex packages that are hard to maintain they have lots of smaller packages that are easier to maintain. So a single person can maintain several packages.

Not saying that I think maintaining hundreds sounds like a good idea, just trying to point out the misunderstanding :) .


Handing over a popular software project to a random stranger you've never spoken to who asks to take it over is just irresponsible. Nothing is preventing that stranger from adding malicious code and potentially compromising millions of devices, which is exactly what happened.


Dominic is the same kind of random stranger to most of the people whose codebases he's in.


Sure, but he earned the trust of the community by building everything he did. It took time and effort, unlike that other stranger, which does give more credence to what he does.

Sadly, he didn't deserve that trust.


Have you ever created a package that people started to use, which you then had to maintain for purposes that didn't do anything for you and then no longer even need the package? Pretty much any rando offering to take over maintenance is going to be welcome to it compared to the dozens of requests and insults offered as bug reports.


Yes, I am in this situation right now, actually. I have a project with around 700 stars on GitHub with over a million downloads per month (according to PyPI) which I no longer have the time, interest, or willpower to continue maintaining.

I placed an open call to find a new maintainer months ago, and have received many requests. Every single request I've received has been roughly as shady as the request sent to event-stream. I've declined them all due to their shadiness. I know exactly what could happen (this situation right here) if I give it to the wrong person.


This is good maintainership right here!

For outsiders, it looks like the maintainer is greedy to keep the project to themselves. The bug reports pile up, and dozens of people are offering to maintain. And then insults start to come up, because it looks like you don't want to hand it over.

If you want to take over a project, you earn the trust of the current maintainer over time, be it through patches, reviews of existing PRs, finding security vulns, etc. Linux has gotten this right.


Thank you. Exactly.


Explain how you would be able to evaluate the trustworthiness of a person asking you to transfer ownership to them.


Actually, it's not as hard as you make it sound:

1. Don't transfer - mark your repo Abandoned, tell people to use that other person's fork if you have to.

2. Does that person have any other online presence? Long running?

3. Do they also put their face in front of the crowd? Talk at conferences?

This creates accountability - someone like that wouldn't pull off the same kind of injection, because he could be caught for it and his "brand" would be destroyed.


This is a reasonable response. I'm not sure why you're being downvoted, but I gave you an upvote.


At the very least, a few years of regularly contributing to Node projects on GitHub. The account he handed it over to has essentially zero history: https://github.com/right9ctrl. The one repo they have is just a copy of https://github.com/barrysteyn/node-scrypt.

I'm not against handing over projects, but some vetting has to be done.


If you can't evaluate the trustworthiness who you're handing the package over to, just don't hand it over. Mark it as deprecated and call it a day. This is Open Source 101.


GitHub has a Read Only option, and package managers of most languages have a way to convey that the project is abandoned or provided security fixes only.

You don't have to feel guilty about abandoning projects. Most of us do it, every day.


exactly, transferring a package to an unknown person is not only giving them access to the code but also to all the people who already trust that code


To be honest, that's kind of your fault for trusting that code. Heck, the commits were not even signed as far as I know; had they been signed, the change of "ownership" would have been clear.


Makes no difference if they were signed or not. He handed over the NPM package, not the GitHub repo. NPM doesn't show commit history or authors and has no connection to GitHub; you can easily have a GitHub repo for your NPM package with benign code in the GH repo and completely different, malicious code in the package.


> not the GitHub repo

He did that too actually.

> you can easily have a GitHub repo for your NPM package with benign code in the GH repo and completely different, malicious code in the package

This is one of the biggest issues that NPM has, along with not enforcing packages to be signed. If the package was signed this would not be an issue as people would see that the signer changed.


No. Transferring a project wholesale to an unknown maintainer is effectively a fork; I'd much rather have my dependencies die than start silently pulling in a fork. If I want to swap in right9ctrl/event-stream, I'll do it myself.


Why would someone update to a new version of a dependency if they don’t trust the new maintainer? Can’t you pin dependencies to a particular version with npm? In my world, you should have a really great reason to update a third party dependency—“there’s a new version” is not a sufficiently good reason.


> Why would someone update to a new version of a dependency if they don’t trust the new maintainer?

They didn't know there was a new maintainer, let alone whether they trusted them.

> In my world, you should have a really great reason to update a third party dependency—“there’s a new version” is not a sufficiently good reason.

Because new versions come out with bugfixes _and_ security patches all the time. In my world (which I'll be clear is not npm), I update all my dependencies to whatever backwards-compatible latest version there is, to take advantage of any bugfixes or security patches without having to spend time being aware of every possibly applicable bugfix or security patch. If I spend a day hunting down a bug that ended up having already been fixed in an upstream dependency, and I wouldn't have even been subject to it if I had updated... that was a wasted day.

Also if I regularly update, I get security patches without having to have spent time being aware of every one.

This all applies to indirect dependencies (dependency of a dependency) along with direct ones too.

Some of these issues are general to any kind of dependency system. But the particular balance of trade-offs changes in different ecosystems. One thing that seems to be somewhat special in npm is how _small_ and _numerous_ the dependencies are. The npm ecosystem is built upon a large number of very small and discrete dependencies. This has advantages, but the big disadvantage is that it becomes very difficult to 'manually' keep track of them, and easy for a bad actor to sneak something bad in.


Without locking in to a specific version you are trusting the maintainer in perpetuity. That trust extends to things like reviewing their own dependencies, reviewing pull requests, adding other maintainers, or just flat writing bad code. I don't blame the author any more in this instance than if any of those other possible examples occurred.

Security patches are part of my original point regarding why it is bad for a project to die. I have no data on this, but I have a feeling that there are a lot more unpatched security bugs in widely used dead projects than there are popular projects that are taken over by a malicious maintainer.

The simple fact of the matter is that you are responsible for all the open source code you run. If you allow the code to be updated automatically you are abdicating your responsibility to review that code as a trade for being relieved of the responsibility of keeping up to date with patches. You are the one liable when something goes wrong because you are the one who made that decision and misplaced that trust. That is an inherent agreement you make when you use open source code and is often spelled out explicitly in the license.


And yet, to produce the software that nearly any employer or client in 2018 wants at the price they want, requires using open source dependencies without manually reviewing every diff from every version released.

I don't really understand what you are trying to suggest.

Yes, whether we realize it or not, we are trusting the maintainers of our dependencies. Sometimes that trust is misplaced. That might lead us to try and avoid a particular dependency or maintainer in the future when it happens. And then it'll happen again.

We've built an ecosystem around using open source dependencies. Isn't that what the open source dream was? You might not like it, but not liking it doesn't make it realistic to develop software for money without using all sorts of open source dependencies for which you can't personally vouch for every line of code. I'm seriously not sure what it is you are trying to advocate for.


I am generally advocating for being an informed user of open source software. As for something more specific, I would echo the two suggestions from the original author in one of his Github comments:

>1. Pay the maintainers!! Only depend on modules that you know are definitely maintained!

>2. When you depend on something, you should take part in maintaining it.

You can't expect someone to maintain your code for you without you contributing anything in return and if something does go wrong in that situation you have no one to blame but yourself.


I suppose this is hard in NPM.

For example, if I have a project with 5 dependencies, this explodes to hundreds of tiny projects. These individual projects are almost always just a few lines of code, and there's no way anyone's going to become a donor for them.

Many major projects are well funded (webpack, etc), but all it takes is one of those tiny packages to ship malware.


That isn't a problem. You pay the maintainers of the packages you use, and they pass on some of that money to the maintainers of the packages they use. If a problem like this one happens they'll lose a lot of customers, so they're incentivized to audit the code they're pulling down.

You only need to worry about the level immediately below yours.


A lot of users had it locked to a semver major version or major.minor version, which isn't necessarily trusting the maintainer "in perpetuity", just the current development track.

Is a change between maintainers a semver-major breaking change? Several people have suggested that that is a baseline that npm could easily automate/enforce. That would have at least sent a community signal to re-review/re-audit the package in light of a new maintainer.


Does npm currently control versioning to any extent? A quick Google search seemed to show that they only make versioning recommendations with no real rules. If you are operating from the premise that the maintainer is potentially compromised, why would you trust them to stick to the semantic versioning spec?


Admittedly, npm can't control the "semantic" in semver, as that will likely always be a human judgment call, but npm has a lot of general control over package versioning. Their semver recommendation [1] is pretty strong in their docs, and it is embedded in the default version range specifiers: ^ (the current default) implies semver major, and ~ (a previous default) implies semver major.minor.
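To make those range semantics concrete, here is a quick sketch using the `semver` package (the same range logic npm's tooling is built on); the version numbers are made up for illustration:

    const semver = require('semver');

    // '^' allows any version that keeps the same major version
    semver.satisfies('1.3.0', '^1.2.3'); // true
    semver.satisfies('2.0.0', '^1.2.3'); // false

    // '~' only allows patch-level changes within the same major.minor
    semver.satisfies('1.2.9', '~1.2.3'); // true
    semver.satisfies('1.3.0', '~1.2.3'); // false

    // an exact version range pins a single release
    semver.satisfies('1.2.4', '1.2.3');  // false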

Other small ways that npm controls versioning is that it does not allow you to publish the same version number for a package [2], and `npm version` [3] is often the most common tool for incrementing version numbers, which itself provides a number of semver-focused shortcuts.

So yes, it's certainly a common expectation in npm that packages follow semver, largely due directly to npm's documentation and tools support.

> If you are operating from the premise that the maintainer is potentially compromised, why would you trust them to stick to the semantic versioning spec?

The suggestion was that in the current case the maintainer changed with no warning. One warning system that npm provides is semver major breaking changes. npm does have enough version control (for instance, the part not allowing previously submitted version numbers to be reused) that they could theoretically force all new versions submitted after a maintainer change to jump a semver major version. That would at least send a signal to the large number of developers that don't pay attention to the changelogs of minor and patch versions (and may naturally have a ^ or ~ scope in their package.json) to at least check the changelog for breaking changes. That's possibly the easiest "sufficient" fix for this problem of an otherwise unannounced maintainer change.

[1] https://docs.npmjs.com/about-semantic-versioning

[2] https://docs.npmjs.com/cli/publish.html

[3] https://docs.npmjs.com/cli/version


You publish the packages from a directory. While npm supports VCSs, it's not a requirement.


Interesting, thanks for the explanation. I was not familiar with the npm ecosystem. If the ecosystem has a cultural norm of "blindly" updating dependencies regularly, that could be the root problem. If you're going to live on the bleeding edge, rolling the dice over and over, you're going to have a bad roll once in a while.

Your world seems more sane, and I (also not in node, JavaScript, npm etc.) generally follow that too. Updating third party dependencies should be a rare thing, something you do only when you critically need to. If I were working on any sort of serious or commercial project, I'd expect to do my due diligence when considering updating a dependency, including examining the dependency's downstream dependencies. Do I really require the additional functionality or fixes from the new version? How mature/tested is the new version? Have there been any changes in the API? What do the release notes say? Are the trade-offs of updating worth it?

Just saying "YOLO, update my dependencies and go!" would give me severe anxiety.


You misunderstood me.

I don't work much with npm. I work mostly with ruby, using bundler for dependency management.

And I update my dependencies to new patch releases _all the time_ without reviewing the releases individually. I think most other ruby devs do too. My understanding of the intention of "semantic versioning" is to _allow_ you to do that.

My projects have dozens if not hundreds of dependencies -- don't forget indirect dependencies. (I think npm-based projects usually have hundreds+). How could I possibly review every new patch release of any of these dependencies every time they come out? I'd never get any software written. If I never updated my dependencies anyway, I'd be subjecting my users to bugs that had been fixed in patch releases, and wasting my time debugging things that had already been fixed in patch releases.

This is how contemporary software development using open source dependencies works, for many many devs.


Not just that, but the longer you wait the more you diverge from mainline. The more you diverge, the more painful re-integration will be.

By updating all the time you can address bugs and changes as simple fixes here and there. Wait too long and you run the risk of having a really painful merge/update cycle, most likely happening under duress because now you have to update because of some bug fix or security issue.

The more dependencies you have, the more critical it is to stay up-to-date, and thus this directly leads to all the craziness with NPM based projects whenever something goes wrong.


Security vulnerabilities due to not updating are more likely than those from a library being intentionally hostile.


And therefore you are ultimately responsible if any of those dependencies leaks your users data or compromises them. Just because that's how you choose to work doesn't absolve you of those responsibilities.


You can, but this package is very low-level, so chances are that it's not a direct dependency for many end-users. A lot of popular modules that may be using this module also don't pin dependencies or don't use lock files, so that's one problem. A second problem is that when an npm package is transferred to someone else and then a new version is published, there is nothing notifying you that the package is under new ownership. A third problem is that sometimes users might delete their lock files and re-install dependencies when they encounter some versioning issue and are looking for a quick fix, without realizing the implications of doing that.


You can, but it's opt-in. At least once with every new project I go "gah, dammit!" because I forget to check in a lockfile, but I sleep better at night.

I do not know if NPM lockfiles are transitive, though. Guess I'd better go looking...


As far as I know, you can pin your versions (something I always recommend), but those packages can always pull in other dependencies with version bumps. And NPM in particular encourages a massive plethora of micropackages, so the dependency tree gets extremely large.


Do they? I'm not 100 percent sure about the npm package lock (because I don't typically use it), but I'm fairly sure that yarn's lockfile will lock the full dependency tree, including the dependencies of your dependencies. I believe npm's package lock will as well, but I'd have to double check that.


npm's package lock will lockfile the whole dependency tree, but there are easily overlooked differences between `npm install` and `npm ci` on respecting locked dependencies for things like semver-patch level changes.

`npm ci` is so new a lot of projects aren't using it and are still using `npm install` everywhere. (To be fair, evaluating if most of my projects should switch to `npm ci` in more places is still on my own personal devops backlog. This thread has reminded me to bump priority on that.)


that assumes you know that the maintainer changed


A reasonable position -- and obviously better in this case -- but in general I'm not sure most people agree. The overwhelming majority of the time maintainers are not malicious.


the overwhelming majority of people are not thieves either, but we still lock our cars when we leave them in the parking lot


Sure, but we don't typically walk around the car and check that no one has slashed the back tires.


> I'd much rather have my dependencies die than start silently pulling in a fork.

Maybe, but Node had that issue too, no? Remember leftpad?

So, lots of people in the Node ecosystem don't agree with you.


No, that was a different issue. Left-pad was removed from npm, not just left abandoned and unmaintained. The people using it were happy to still use the (unmaintained) version of it.


well then lots of people in the Node ecosystem are wrong. If many people stopped hashing passwords, would you say that's a good thing to follow?


then we might want to question why the community sees silently pulling in updates as "best practice". Is it? Seems like a double-edged sword.


> Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

No. If it's someone involved in the project for years whom you feel you can trust then sure, go ahead, but if someone comes along being like "sup, commit rights plz" then something's just wrong with you if you agree. If that person were serious about it they could still fork it and maybe msg users of the old version about it.

Sure those people could still mess up by just switching over without checking the new fork, or having good reasons to switch like bug fixes or new features, but at least you as the author of the original lib did your part to prevent any messups.


Open-source projects greatly depend on reputation.

E.g. I can trust the people behind git: some of them may be poor UX designers, but they won't let out a version that eats my files, or is backwards-incompatible. If a completely different team forked git, I would be quite wary to use the fork outright for my daily work.

When all long-term maintainers stop maintaining a project, the last person leaving should turn off the light. The project should be officially stopped, because no people remain behind it.

Whoever wants to pick up the development should fork, and use a different name. This way, the forked project will accumulate its own reputation: of course, initially propped in major ways by the idea of "this is a fork of a well-known project".

At the very least, it would be a heads-up for everyone who depends on that project that a major change in it is happening.


Definitely. One of his coworkers has a great thread about this, including on how he has hundreds of libraries in use:

https://twitter.com/andrestaltz/status/1067157915398746114

Many, many people treat open-source projects like consumer products. Except the paying-money part, of course. That they're happy to leave out.

This is a systemic problem, and blaming one guy won't solve anything. Especially since so many are blaming him for not doing enough free work to help them with their paid jobs. If open-source libraries are truly valuable, we need to find a way to jointly pay maintainers for their labor. I can think of many options (a non-profit build with hefty donations from tech companies, governments giving out tax-funded grants, places like NPM and GitHub making crowdfunding 10x easier). But the ideas aren't the problem. It's getting one or more of them done.


> Many, many people treat open-source projects like consumer products.

To me, this line of argument completely misses the point.

It's entirely immaterial how a FLOSS project is treated. This problem is essentially an identity hijacking problem. The community trusted the old maintainer, but then he screwed up by enabling an attacker to essentially take over his identity and thus create and exploit a major vulnerability on his behalf.


It is not immaterial. Projects are run by human beings, and economics does not cease to exist once a computer is involved. People's expectations were out of line for what they were paying.

I also think the words "community" and "trust" are being badly abused here. A community is a relationship of mutuality. Most of the people affected here never did a thing. They were leeches, not participants. A healthy community would have looked at the guy laboring overtime for free and said, "Hey, buddy, let's split that load up." And most of the people who benefited from his work didn't even know his name. They didn't trust him; they trusted the system.

He owed those people nothing. He gave them more than that: a good-faith effort to find a new maintainer.


This, and then make sure the rewards trickle down to everyone involved: https://github.com/ktorn/vdp


The author's stance is that because it was a volunteer effort, he bears no responsibility for transferring ownership to an unknown third party who wants to commandeer the code used by millions of people. I believe this is false.

He also feels that there's nothing anyone can do about it except scramble to control the damage. I believe this is currently true.


You may not like it, but from a license standpoint, I think you may be incorrect.

As quoted elsewhere in this thread

" THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."


End-users are harmed and are not licensees, and so, even if the license disclaimers are effective (boilerplate disclaimers are often broader than what the law of some jurisdictions will give effect to), the claims they would have for negligence would not be covered (and would not be transferred to downstream maintainers, who likely have concurrent liability, absent an indemnity clause as well as a disclaimer in the license).

So, while I don't think the upthread claim was about the maintainer having legal responsibility, I don't think it would be entirely wrong, even with the license text, if it did.


The people with legal liability to the end-users, I expect, are the developers that made use of this library without properly vetting all changes to it.


Liability is very often not exclusive in law, and, in particular, tends to flow the whole way up supply chains. With nothing being sold you probably aren't dealing with strict product liability upstream, but exposure of end users is reasonably foreseeable so there's no immediately obvious blanket reason for ruling out upstream negligence liability to end users.

The idea that the party most proximate to end users is exposed to potential liability is true, but the idea that this must mean no one else in the chain is exposed is not.


You honestly think that the author of software released as open source is going to be liable for vulnerabilities in that software ... really?

If that were the case, you'd pretty much wipe out the software industry as it stands today :)

I'd be very interested in case law where you can see the users of a service (which might not even disclose what software they use) are able to sue the author of a package used as part of that service.


I'm pretty sure they'll be legally liable for intentionally inserted malicious code, no matter what the license text says.

I'm not talking about the original maintainer (who didn't introduce malicious code intentionally), but the person he turned it over to (who seems to have).

Legal liability and ethical responsibility are not always the same thing, although it's generally only the first that matters in court.


Oh sure, no one's arguing the liability of the criminal who put the backdoor in place.

But the point that I was referring to is the suggestion that the repo owner who handed it over could bear any liability for doing so; I'd suggest that's not probable/practicable.


I don't see how the text of the MIT license can be construed to indemnify a negligent developer but not a malicious one.


I am not a lawyer, but my understanding is that the text of the MIT license is a potential defense to a suit; it does not prevent one. As in: I try to sue you, you say, "But did you read the license?" I do, talk to my lawyer, and still decide to sue you. You tell the judge, "But look at the license!" And then it is up to the judge to decide whether it matters.

Therefore the issue isn't the license, it is the rules of the law in question under which the author is being sued.

Therefore your intent can matter. Whether a valid contract exists matters. Whether I can be expected to have read it matters. THAT THE INDEMNITY IS WRITTEN IN ALL CAPS MATTERS. (I'm not making that up - see https://law.stackexchange.com/questions/18207/in-contracts-w... to see that it does matter.)

The result? The indemnity in the contract can say whatever it wants and still only provides partial protection. The real rules are complicated and elsewhere in the legal system.


Well, let me put it to you this way. If a malware author installs a piece of software on your machine and it steals bitcoins from your machine, do you think they'd be able to argue to a judge that it was all OK 'cause of the software license...

Put it this way, I would not suggest relying on that defence in court.


Oh, no, sir. I didn't insert the backdoor. I gave the keys to this anonymous person on the Internet, and he inserted the backdoor.

That clearly absolves me of any responsibility, does it not?


> You honestly think that the author of software released as open source is going to be liable for vulnerabilities in that software

Where the type of harm that results is reasonably foreseeable and could have been prevented by reasonable care by the developer (or maintainer; different though often co-occurring roles), I don't see how the general law of negligence doesn't fit. AFAIK, negligence has no open source software escape hatch.


Do you think anyone would publish open source software if it was possible that they might be held liable by people who used services or software which included that code at any future date when they had no say in how their code was used??

Really you think that's realistic, given the astonishingly heavy presence of open source software?


IANAL but I see two issues here. First, you still have to show that he had the duty to act, which is quite problematic given that there was no relationship between the parties beyond an open source license which expressly disclaims any liability. There's no relationship between the end users and the library maintainer and for any specific instance of the harm, it's difficult to argue that the end user, whose connection to the library is merely that whoever wrote the software happened to use the library, is owed some duty by the library maintainer. Likewise, the idea that the library maintainer should have foreseen this harm, given that the library maintainer likely has no idea how the library is being used, seems far-fetched.

Second, since software engineering is not a licensed profession, for any related conduct to be seen as negligent, it has to be something that a reasonable person should be able to avoid and foresee that could cause specific harm. Even a relatively gross act of incompetence by any reasonable engineering standards likely does not meet this bar, given that there's no license required for someone to be in this situation and that it takes a lot of expertise to understand how specific bad practices could cause harm.


There's a difference between legal liability and moral liability.


I hope everyone talking about moral responsibility is donating to all the open source projects they depend on. If not, I find the moral preaching a bit one-sided.


You and I both know they aren't. It astounds me how many popular OSS projects are terribly funded despite how many people use them.


> There's a difference between legal liability and moral liability

Making business decisions on the hope that someone else's moral codes will perfectly align with your own is unscalable. That's why we have written laws, codes and contracts.


When that happens, pay people.

Seriously. Having a moral discussion in tandem with scale seems very poorly aligned with the interests, needs, and risks of pretty much everyone involved.


I don't even see any moral issues here. Is there any reason to believe the original author acted in bad faith? If you sell your used car and it gets used to rob a bank, did you act immorally?


If the car is sold, but still uses the same number plate, and is still attached to the name of the original owner, there is a problem.

Selling a car = shutting down the maintenance of the current project and pointing to a fork done by someone else.

What happened is just handing the car's keys to someone, without much notice.


When you sell your car there is generally a title transfer. A process which lets everyone know that the car is no longer yours.

I think the largest gripe here is that the original maintainer let the new, unknown maintainer commit to his repo and publish under the already established package name instead of making him fork it and publish as a new package.


There is nothing wrong with publishing a rewritten package with the same name under the full supervision of the original developer. Transferring control generally implies full trust, and Dominic hadn't established any trust with the new developer. He didn't even ask them for their real name!


If someone's determined, they can always create a legit looking GitHub account, submit a few PRs (they did, in this case), gain trust, and _then_ deliver the malicious code. It just takes time.

But this trust part seems to work pretty well. You need to be trusted to be a Debian package maintainer, and I volunteer as a Drupal code review admin where we require all contributors to have a real name, and there is a back and forth discussion for a few days until we mark the user as vetted.


If you run a business where you have convinced people to give you access to their house to do some chore and you sell your business and your copy of their keys to a criminal it could be morally problematic.

A car is merely a fungible vehicle; the customer would have been no better or worse off had the robber been driving a different car.

This would be an apt analogy for just giving / selling a code base.

Had it been distributed under a new account/name users could have decided to trust or not trust a new maintainer.

The dev allowed new people to trade under his name and rep; worse, he allowed them to delegate further to unknown others.

He is morally liable and ought to have known better.


He was doing this for free and releasing it under an open source license.

In your analogy the business would be performing the chores for free and telling the users that the business is not responsible for any damage related to the access granted by the key. I don't think most people would sign up for that without a business relationship.


> Had it been distributed under a new account/name users could have decided to trust or not trust a new maintainer.

This is the myth we keep telling each other but I don't seriously believe this is how open source works in reality.


> sell your business and your copy of their keys to a criminal

This implies the seller _knows_ the one they are handing over the keys to is indeed a criminal. In that case it is certainly morally problematic.

I see your point but I still think calling the maintainer's behaviour immoral is going a bit too far. Perhaps careless. Or maybe naive. But not more than that.


> He is morally liable

Google says "Definition of Liable: responsible by law; legally answerable".

If you claim he's not legally responsible but is "morally liable", where "liable" itself means "legally responsible", what in your world does the term "morally liable" mean, specifically? What does it mean you can do to him, or what does it mean you should do in future in response to this?


Google isn't the ultimate source of truth for the meaning of words. When someone says that another party is morally liable, they mean morally responsible: that he ought to feel responsible, act accordingly, and consider how his actions affect others in the future, lest he morally fail people again. Ultimately we are, and are often expected to be, our own harshest critics, and we ought not to limit our duty to others to the minimum that the law requires.


The meaning of words isn't what I want to focus on, but the irrelevance of any response to this event along the lines of "well he SHOULD feel bad".

If "he is morally responsible" leads only to "he should feel bad" and nothing more, then what does it matter if he is/isn't morally responsible?

Ok he (does/doesn't) feel (justifiably/unjustifiably) bad .. now what?


Nothing.


> If you sell your used car and it gets used to rob a bank, did you act immorally?

If he transfers it knowing other people are going to use it, and doesn't tell them, and then the new owner cuts the brakes, that's a problem. It's not just that it was sold, but that people continue to use it and weren't told. That's a different situation.


There's a moral liability to do your due diligence when using upstream software, too. Otherwise it's just whining about "the untrustworthy system I built upon is untrustworthy".


yep, and that's why I said "from a license standpoint". Software licenses aren't moral constructs, they are legal constructs.


Putting it on paper does not make it so. A single lawsuit and you could easily be out thousands even if you win.

The legal outcome could vary widely state to state and nation to nation.

If putting a blurb in a text file makes you feel safe about being sued for negligence you haven't considered all possible venues.

Here is an article about liability waivers

https://www.enjuris.com/blog/questions/liability-waivers/


So you reckon that people who publish open source software should be liable for flaws/vulnerabilities in their code?

It's a bold assertion, I'd be interested to see if you can provide any case law where it's been tested.


It depends. Laziness can actually be used against you, even in European countries.


But on this topic specifically, I've never heard of Open source software authors being held liable in any fashion for software they've released in any country.

I'd be interested to hear if such precedent existed.


This license is invalid in quite a few jurisdictions around the world.


> I believe this is false.

It's literally not, though. That's what the license says. It expressly disclaims any such thing.

If you want somebody to incur responsibility, you have to get them to take it on. You can't just demand it of them.

Fortunately, there is a good way to do exactly this. Did you bring your wallet?


If I thought I might be on the hook for damages caused by a mistake in how I run an open source project, I would never open source anything.


Yeah, no kidding.

That's why the idea of commercial support exists. You need to depend on it? Pay for it.


The other person set out to subvert node modules, the author was just a target interchangeable with 100,000s of other module maintainers. There are already documented cases of very popular node modules having their passwords compromised so this is probably a form of attack we will see grow significantly more prevalent since these modules can see database credentials, encryption keys etc.

This is also very similar to bad entities obtaining or acquiring browser extensions to discreetly poison with spyware and advertising, which has happened many times.


What was he compensated with in exchange for taking on responsibility? Without compensation the contract by which he takes on responsibility is not valid.


Node could implement something like an "ethical transfer of responsibility for packaging" clause for their code of conduct. I'd like to see that.

I'd also like to see something like what StavrosK mentioned in his comment[1] about https://codeshelter.co made a part of this. When a maintainer gets an email for a long-dormant project of theirs, the maintainer needs options. One of those should probably be to yield the package back to the community. A "code shelter" is one way of doing that.

Then the question of, "How do we vet maintainers at scale?" comes up. All I really know is it'd take a financial & human capital investment in Node community infrastructure to make it happen.

I think it's in Node's best interest to do so. These highly prolific maintainers like dominictarr are prime targets for black hats. Overworked, underpaid, huge product portfolio they manage. Who among us wouldn't be grateful for the interest & help?

So Node should invest in fixing this.

[1]: https://news.ycombinator.com/item?id=18534741


> If the original author has no use for the project anymore and someone offers to take it over from them, why should the author be expected to refuse?

People who use your project place their trust in you. If you pass that trust on to another developer without your users' involvement, then that developer's actions reflect on your reputation. Why? Because you've not given me the option of only trusting you. Your policy made it a package deal.

It's kinda like how I can't tell Bob about anyone's Christmas gift. He sometimes tells Fred the Loudmouth, who tells everyone and ruins the surprise. Bob never ruins the surprise directly, and he always asks Fred to keep it a secret, but it doesn't matter. I still can't trust Bob, even if Bob's only mistake is that he trusts in the wrong people.


This thread is an amusing rediscovery of the auditing process major companies use before they choose to include open source in their projects. How is it maintained? Who maintains it? What's the risk if it goes rogue? How are updates reviewed? If you think Google simply ingests any random update to their Node dependencies you're crazy.


> Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

I don't disagree with your overall premise: if you aren't actively maintaining a project for whatever reason (you don't currently have the interest in it compared to other projects, or can't justify the time that you could spend elsewhere instead) and someone else does have the time and the interest, the project should be allowed to live on through new maintainers.

But: just handing over admin access to a project that many rely on to an unknown entity is not a safe move, as this case proves. I understand the point of not wanting a permissions-based community and so forth, but (and call me cynical if you will) that is rather naive. The world is just too full of arseholes for that sort of idealism to be at all safe in practice.

Instead, let the new person/people create a fork and update your documentation to state that you are no longer actively maintaining your version and that people should consider moving to the new one instead. This way no one ends up on the new fork unknowingly. Of course, people might blindly switch over without verifying the new maintainers, which puts those people at the same risk, but at least they have to take action to move over rather than not knowing at all that anything changed, and people who are more sensibly cautious will hopefully monitor the changes more carefully than they would under the previous stewardship, so this sort of backdoor is less likely to go unnoticed.


I don’t think any blame is due to the author at all.

Volunteer project ownership is voluntary, and transferring to another volunteer is the only choice other than abandonment for a lot of volunteers. Package repositories don’t support monetization and there’s no pool of volunteers associated with the repository itself to take up maintenance of what otherwise would have been abandoned.

Would we have preferred if the original creator had simply deleted the repository altogether? The last time someone did that on NPM, it generated deafening howls of rage — but here we are today in the non-abandonment scenario, listening to renewed howls of outrage.


> Package repositories don’t support monetization and there’s no pool of volunteers associated with the repository itself to take up maintenance of what otherwise would have been abandoned.

Your statement contradicts itself: there is a pool of volunteers; they are the ones maintaining NPM packages to begin with. In fact, the author of event-stream gave control to another person because he believed the guy was an ordinary volunteer like himself. Unfortunately, the reality is a bit more complicated than that: people will voluntarily help you maintain your projects, but only if they share your goals. Ideally, one should ensure that the would-be maintainer has already invested real effort in the project they are trying to take over: reported issues, fixed bugs, added features, etc., over a considerable amount of time. In other words, ensure that they can contribute to the project in a significant way and move it forward. Otherwise, what is the point of transferring maintenance?!


We’re colliding on English imprecisions. I distinguish growth (all unnecessary work) from maintenance (only necessary work: fix serious bugs, no new features, minimize rewrites). My use is the latter, not the former.


There should be well known best practices to signal that a repository is not maintained. Example: archiving the repo (it becomes read only). Then somebody forks it and updates the NPM registry with the new repo.


If you no longer want to maintain a package, and you do not have a trusted source to hand it off to, the right thing to do is let it wither.

If the package remains relevant, someone will eventually fork it. The burden of trust is no longer on your shoulders.


The problem is that everyone is trying to push everything to one side. That's just not stable. Yes, the maintainer SHOULD (but doesn't have to) make a hand-off explicit at a minimum. Yes, people should take responsibility for ensuring their packages are (and remain) legit. But both of these can break down.

It's like litter. It is not realistic to have anyone clean up everyone's trash. It's also not realistic to expect that things remain clean if everyone only picks up their own trash. Everyone needs to clean up their own trash and a little bit more, to compensate for the burps in the system.


> Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

No, not if the project becomes malicious. I'd rather it died and I switched to an alternative I can trust.


Maybe a compromise would be some sort of obvious notification (via the website and also via the npm cmdline software) if a maintainer changed.


> Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

No. What does "a project dying" even mean? The code still runs, and will for the foreseeable future. If it were half complete, that would be bad, but this was clearly a complete solution. Let it remain complete. If anything, this was a quick way on Dominic's part to kill the project.


The NPM repository is not "the open source community". NPM is controlled by a commercial organization ("npm, Inc."), which is fully capable of establishing rules preventing package authors from selling to black hats (or even gifting to them for free). The author of the package could have formally given his GitHub repository to the new maintainer without transferring control of the NPM package. He didn't. Why? Presumably because there is nothing in NPM's ruleset preventing him from doing so. It does not matter whether he was bribed or not; there is an obvious, glaring hole in the notion that widely used digital assets may be covertly "gifted" to third parties.

This isn't the first time this has happened; the story of Google Chrome extensions being sold to hackers should have taught NPM, Composer, etc. a lesson. Maybe someone should finally sue them to drive the point home?


It's better if the new maintainer's intentions are altruistic. From @dominictarr, the maintainer:

> he emailed me and said he wanted to maintain the module, so I gave it to him. I don't get any thing from maintaining this module, and I don't even use it anymore, and havn't for years.

That's just plain irresponsible. He must have known how popular the library was. Taking ownership of browser extensions and common software libraries is becoming a very popular attack vector. Owners of these need to be a little more diligent when transferring ownership.

Perhaps there should be some sort of escrow jail for new maintainers where all changes are reviewed. Certainly better vetting needs to take place.


This is entitlement speaking, and it's clearly a solution that doesn't scale. Downstream must be responsible for only depending on software from reputable sources, there simply is no alternative.

I hate to have to do this, but the requirement runs right to the core of how this development model functions at all:

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
    FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
    DEALINGS IN THE SOFTWARE.
I'd suggest one problem here is a user interface / packaging model issue: end users / downstreams may override the version choices made by their upstream dependencies. In the case here, the reputability of that upstream varied over time. Permitting version locking allows a chain of reputability to exist, allowing a limited amount of trustworthiness to be imparted by an upstream's selection of dependencies ("I trust this guy's package, so I trust all their dependencies too").
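To make the version-locking point concrete, here is a minimal illustrative package.json fragment (the package names and versions are only examples):

    {
      "dependencies": {
        "event-stream": "3.3.4",
        "some-other-lib": "^1.2.0"
      }
    }

The exact version only changes when a human edits that line and (ideally) reviews the new release; the caret range lets any future 1.x publish of some-other-lib flow in on the next install, which effectively delegates the trust decision to whoever controls that package at publish time.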


The real problem is that, so far, open source has been mostly more responsible than closed source. Mostly. So many companies and people just go 'eh' and accept the defaults rather than really digging in. Which, in a real way, means they get what they deserve, but on the other hand, what is the alternative?

I mean, personally, the alternative I like is a sort of hybrid model, where you pay someone else to do the vetting, like RHEL, but nobody I support is willing to stay within the RHEL package world, or to otherwise pay to really vet the packages they use, so it's defaults all the way down, and it usually works just fine! Until it doesn't.


It always takes time for parasites to evolve in a new system.


> Downstream must be responsible for only depending on software from reputable sources, there simply is no alternative.

The alternative is to use distributions that do vetting and staging.


No, there is no alternative. You are responsible for what you ship. Even if you pay for the software, you should vet it as best you can as you're responsible.


How are you going to have the time and skill to vet Oracle anything?


"... as best you can..."

Monitor the network, make sure you know where the data is going. Things like that.


It is irresponsible if you believe--and I don't say this pejoratively or to caricature the argument--that somebody takes upon themselves a responsibility when they open-source software. I used to think this was the case, but less and less often I find this to be true. In a practical sense it becomes a free-rider problem and people with money aren't stepping up.

In the absence of accepted responsibility from (monied) consumers, responsibility ultimately has to be either affirmatively taken on (as a public service to the community) or paid for. Which is a sticky problem, because it's hard to pay for it, and it's invariably not presented effectively in terms of building a business case. Existing solutions, not to put too fine a point on it, suck at building that case. OpenCollective, the remnants of Gittip/Gratipay, etc.: incentives don't align to put money where it needs to go, and J. Random Consumer often suffers the most for it.

I have a pretty strong idea about how to solve it for-reals. At the moment, though, it's a time/money problem for me; I'm not in a place to chase the kind of (relatively small) funding that the problem probably requires. If anyone else is interested in this problem space, please feel free to email me directly; I'd be happy to chat with you and I'm very interested in seeing this done well.


So if I understand you correctly, he was supposed to have run some sort of background check with all the state agencies to figure out whether the person was a bad actor or someone who really relied on the module and wanted to maintain it. You can't protect against all possible outcomes from a situation like this. Seems like we're just looking for people to blame whenever there's an issue these days.


He should just not allow any party not known and trusted to distribute under the original product name.

Let it be known that it's discontinued and let the new maintainer trade on his own reputation.


Free lunch tasted bad? And let's not blame the free lunch people who are always on the hunt for people and ingredients needed to put together free lunches.

Paying for a better, more consistent lunch makes sense at some point, yes?

People are motivated to work, ingredients are vetted, better prepared, etc...

Not blaming anyone, just pointing out an obvious problem by analogy.


> That's just plain irresponsible. He must have known how popular the library was. Taking ownership of browser extensions and common software libraries is becoming a very popular attack vector. Owners of these need to be a little more diligent when transferring ownership.

I'm sure he would do it for $150k/year. He won't do it for $150k/year? Increase the price until he says "Yeah, sure, I will maintain it". Or maybe he will find someone else who would maintain it for that money.


How about just not transferring ownership of the original name? Let foo become foo-new, or go from john/foo to bob/foo, whichever is appropriate, thus ensuring that only those who affirmatively added bob/foo or foo-new are ever affected.

Vetting new maintainers sounds like hard work that you might not feel like doing for free. No problem: just don't do it, and don't give them the damn name.


That's fundamentally not how identification on the Internet works. You are whoever you say you are. It's absurd to tie identity to a physical person.


You are incorrect: you are whoever you can prove you are.

I can prove any of a number of identities scattered around the internet, most of which are in fact under my real name. Pretending that this is trustless is just not real.

For example: I know from a wide variety of sources that certain projects are trustworthy, even if I can only verify a pseudonym that is itself trustworthy, and I can infer that the author's other projects are trustworthy too.

Accounts, emails and domains are all useful tools, even if not perfect.

People don't normally put years into developing trust in order to distribute malware. It's normally a low-effort affair.

Not giving maintainership to random people who send you an email, and not selling projects to skeevy companies, seems like a good way to avoid 80% of issues, kind of like how washing your hands can prevent a lot of colds.


> Perhaps there should be some sort of escrow jail for new maintainers where all changes are reviewed

That wouldn't really solve the problem. Attackers would just have to wait a bit longer before they push malicious code.


Maybe it should be easier to give monetary rewards so that popular module maintainers get more motivation to care.


Claims here are that dominictarr maintains "hundreds of packages" even though that's too much work for one person.

If maintaining modules earned money, that would be much more incentive to "maintain" thousands of random things you never look at, and to hand over control while keeping it in your name (which he's also being blamed for).


Why refuse? Because you have no idea who the person is, they don't have a history, and it is possible they want to take over the project to insert exploits into it. An ignored project (un)maintained by an ethical person is better than a project just handed off to whomever. At the very least a slight background check should have been done to see if the user has a history of contributing and maintaining open source projects.


Allowing the first person to express interest to "adopt" your project, seems to me to have all the same potential bad outcomes as allowing the first person to express interest to adopt a child from an orphanage.

For children, that's much of the reason for Child Protective Services to exist: to regulate orphanages and adoption agencies such that they will thoroughly vet prospective adoptive parents; and to establish and regulate a fostering system—pool of known-good temporary guardians (foster parents) that can take care of children temporarily when no vetted-as-good outsider can be found to take them more permanently.

Now imagine a software org that acts on abandonware software projects the way a CPS-regulated orphanage acts on abandoned children. Start by picturing a regular software foundation like the ASF, but then 1. since all the software is currently abandonware, the foundation itself steps in and installs temporary maintainers; and 2. since the foundation's presence in the project is temporary, they seek to find (and vet!) a new long-term maintainership to replace the existing one.

Of course, that level of effort is only necessary if you care about project continuity. If it's fine for the project to die and then be reborn later under new maintainership, you can just let it die.


>If the original author has no use for the project anymore and someone offers to take it over from them, why should the author be expected to refuse? Isn't adding another potentially unknown maintainer generally better for the community than a project dying?

Depends on the project, I guess. Open source can always be reanimated.


This takes "charitable interpretation" to an extreme.


No problem with someone forking it based on the license... who needs original authors?


There is a very strong sense of entitlement in that thread.


I had this exact same problem from both sides (not working on a project any more and wanting to find someone to maintain it / wanting to maintain a project someone wasn't working on because I found it interesting). It's not always easy to find people who are interested, and, while giving maintainer access to someone you know very little about is usually fine and works out great, sometimes you get results like these.

In the end, I built something to "solve" this, a project called Code Shelter[1]. It's a community of vetted maintainers that want to help maintain abandoned FOSS projects, so if you want your project to live on but don't want to do it yourself, you just add it there and you can be reasonably sure that nothing like this will happen to it.

Of course, you have to get a large enough pool of trusted maintainers that someone will be interested in each project, but it's still better than blindly adding people. I can't fault the maintainer of that project, since trusting people is usually fine, but it's too bad this happened to him. The commenters are acting a bit entitled too, but I guess it's the tradeoff of activity vs security.

[1] https://www.codeshelter.co/


Why would the average joe trust something like this? Your FAQ says each maintainer is vetted and handpicked, but nothing about criteria or how they're picked.

Do you mind explaining this vetting process a little more? How can we be sure that something like this flatmap thing doesn't happen on codeshelter?


Sure! They're either people I know personally and trust (and hopefully people will trust me to do this transitively) or they are people who are already authors of popular libraries and have shown they are experienced in maintaining OSS projects and trustworthy (since they're already pushing code to people's machines).

Trust is definitely an issue here, and trust is something you build, so I hope we'll be able to build enough trust to let people not think twice about adding their projects.


To quote Linus: if you don't do security with a network of people you trust, you're doing it wrong.


There's a similar effort in the Python / Django world called Jazzband (https://jazzband.co/). This model will probably become more and more necessary as maintainers need to move on from projects for whatever reason. Having a safe place to transfer a project to with a formal process (announcement of the change, code review before acceptance, etc.) would certainly help combat this issue.


Yes, I was inspired by Jazzband, but Jazzband has two things that led me to develop Code Shelter: It's pretty specific to Django, whereas I wanted something general, and people have to move their projects to the Jazzband org, which many people don't like doing (because they understandably want to keep their attribution).

With Code Shelter you don't have to move the project anywhere, you just give repo admin access to the app and the app can add/remove maintainers as required.

There's obviously a corrective component as well, where maintainers who don't do a good job are removed, but this hasn't happened yet so it's not clear how it will be handled.


If you are a maintainer of a project that you want to move on from, what's the problem with adding this to the README: "This project is abandoned/no longer maintained.", and optionally "Here's a known fork, but I haven't vetted the code, so if you use the fork you do so AT YOUR OWN RISK: <url-to-the-actively-maintained-fork>"? Then, when someone asks you to transfer ownership, you just tell them that they can fork it. Is it because of the "namespace" issue in some package management systems (e.g. NPM), where the forks can't get the nicer name?


It's half the namespace issue (the release package name sometimes needs to be added) and half that maybe you haven't agreed with some fork that you will make it the official one beforehand. Maybe there isn't even a fork like that.

Besides, projects don't usually go from active to completely unmaintained. Adding it to the Code Shelter is a nice way to solve this when you see development slow down, because you basically have nothing to lose.


I really like this idea, thanks for sharing.


Thank you! I really hope it takes off, it's an effort from the community for the community.


NPM is just a mess, and so is the whole culture and ecosystem that surrounds it. Micropackaging does make a little bit of sense, at least in theory, for the web, where code size ought to matter, but in practice it's a complete shitshow: it leads to these insane dependency paths and duplication, and makes it impossible to actually keep abreast of updates to dependencies, much less seriously vet them.

The insidious threat is that these bad practices will start leaking out of Javascript world and infect other communities. Fortunately most of them are not as broken by default as JS, and have blessed standard libraries that aren't so anemic, so there is less for this type of activity to glom on to.


I don't see how this problem is limited to just NPM. This is a problem of any package manager. Or any software distribution system in general, really. Look at all the malware found on the Play Store and the Apple Store.

Unless you're willing to meticulously scrape through every bit of code you work with you're at risk. Even if you can, what about the OS software? How about the software running on the chipset? This is exactly why no one in their right mind should be advocating for electronic voting. There's simply no way to mitigate this problem completely.


NPM is quite a different software distribution mechanism than a typical app store.

In an app store, apps are self-contained packages with low to no dependency on other apps in the store, meaning that a single compromised or malicious app can only really affect that app's users. The OS may also isolate apps from one another at runtime, further limiting the amount of damage such an app can do (barring any OS bugs).

On the other hand, NPM packages depend on a veritable forest of dependencies, motivated by the "micropackaging" approach. Look at any sizable software package with `npm list`. For example, `npm` itself on my computer has over 700 direct and indirect dependencies, 380 of which are unique. That's bonkers - it means that in order to use npm safely, I have to trust that not a single one of those 380 unique dependencies has been hijacked in some way. When I update npm, I'm trusting that the npm maintainers, or the maintainers of their dependencies, haven't blindly pulled in a compromised "security update". And `npm` is in no way unique here in terms of dependency count.
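If you want a rough count for your own project, you can pull it out of the lockfile. This is only a sketch, and it assumes an npm 5+ package-lock.json sits next to the script:

    // count-deps.js: tally every package entry recorded in package-lock.json,
    // including nested (transitive) dependencies.
    const lock = require('./package-lock.json');

    function count(deps = {}) {
      let total = 0;
      for (const name of Object.keys(deps)) {
        total += 1 + count(deps[name].dependencies);
      }
      return total;
    }

    console.log(count(lock.dependencies) + ' packages recorded in the lockfile');

(The number will differ slightly from `npm list` output because of how npm hoists and de-duplicates entries, but the order of magnitude is the point.)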

So this problem is limited to NPM, as far as the potential impact of a single compromised package goes.


Most Linux distribution package managers also draw in dependencies automatically (though not as many as npm). You have to trust that all of them are vetted correctly.


The difference is that JavaScript lacks so much basic stdlib type functionality that it all has to be replaced with libraries. This dramatically increases your risks, since simple libraries will be created which become near standards, which in turn become dependencies of a huge swath of more complex libraries, meaning that any one of the dozens of crazy-common-but-should-be-in-the-stdlib libraries can be a target for hacking, social or otherwise. Also, it means that any web app no matter how trivial is likely going to itself depend on dozens or hundreds of libraries. Which means that even though the theoretical risks of depending on remotely sourced libraries are the same, the practical risks of establishing trust is exponentially harder for JavaScript than for nearly any other popular language out there.


I think part of the difference at issue is the number of distinct entities (users/organizations) you need to trust due to the acceptance of micropackaging.

With most programming languages, a small-to-medium project might pull dependencies from tens of entities, but with npm, even a small project can easily rely on hundreds or even thousands of entities.

It is easier to weed out disreputable entities when you are depending on fewer entities.


This; on a long enough timescale, any open-source package manager that maintains popularity will have this problem. npm has this problem now because JavaScript leaves a lot to be desired in terms of base convenience functionality (left-pad), and because of JavaScript's massive popularity that won't be going away anytime soon. This whole thing is hugely educational for people designing new languages with integrated package managers. However, while the lessons are pretty easy to grok, the solutions are going to be harder to come up with. I'm excited to see what kinds of stuff people come up with in response to this.


Unpinned dependencies are harmful.

If you aren’t reviewing the diffs of your dependencies when you update them, you’re trusting random strangers on the Internet to run code on your systems.

Espionage often spans multi-year timelines of preparation and trust building. No lesser solution will ever be sufficient to protect you. Either read the diffs, or pay someone like RedHat to do so and hope that you can trust them.


Code review can't catch determined malicious actors. It just isn't a viable protection against that kind of attack.

Take a look at the Underhanded C Contest for plenty of proof: even very senior developers, told up front that there is a backdoor in the code, often can't find it! And not all of the entries can be blamed on C being C; many of them would work the same in any language.

I don't know the solution, but shaming users and developers for not reviewing code enough sure as hell isn't it.

All that being said, reviewing changes in dependencies is still a good idea, as it can catch many other things.


There’s no shame in refusing to study dependency diffs. If you consciously evaluate the risk and deem it acceptable, I agree with your judgement call.

What I find shameful is the lack of advisory warnings about this risk — by the repository, by the language’s community, by the teaching material.

This should have been a clearly-known risk. Instead, it was a surprise. The shame here falls on the NPM community as a whole failing to educate its users on the risks inherent in NPM, not the individual authors and users of NPM modules.


Most malicious actors aren't determined, they're lazy and will take the path of least resistance. Code review will catch out those.


> you’re trusting random strangers on the Internet to run code on your systems

I mean that's pretty much how the world works. Even running Linux is trusting random strangers on the internet. Most of the time it works pretty well, but obviously it's not perfect. Even the largest companies in the world get caught with security issues from open source packages (remember Heartbleed?).


When I visit a random website, it is very hard for that website to compromise my computer or my private data. The only really viable way is a zero-day in my browser, or deception (e.g. phishing, malicious download).

When I install an app on my iPhone, it is very very hard for that app to compromise my phone or my private data.

In both of these cases, I can download and run almost any code, and be fairly confident that it won't significantly harm me. Why? Because they're extremely locked down and sandboxed with a high degree of isolation. On the other hand, if I install random software on my desktop computer, or random packages off NPM, I don't have such safety any more.

The prevalence of app stores and the Web itself speaks to the fact that it _is_ possible to trust random strangers without opening yourself up to a big security risk.


I think what you just described are platforms where you don't have to trust strangers.


Yes, and I think they meant to. Node isn't one of those.

Edit: partially because of design decisions around package permissions.


What does that have to do with development? My point was nearly everyone uses libraries that are written by strangers from the internet. It mostly works.


Lack of code signing is what's harmful here.

My maven projects' dependencies are technically all pinned, but updates are just a "mvn versions:use-latest-releases" away. But, crucially, I have a file that lists which GPG key IDs I trust to publish which artifacts. If the maintainer changes, the new maintainer will sign with their key instead, my builds will (configurably) fail, and I can review and decide whether I want to trust the new maintainer or not.

Of course, NPM maintainers steadfastly refuse to implement any kind of code signing support...


What if the maintainer had also given away the key used to sign the previous releases?

I know, it doesn't make much sense; why would anyone do that? But then again, I think that "why would you do that?!" feeling is part of what is triggering the negative reactions here. We just don't expect people to do things we wouldn't.


Even in package managers where signing support nominally exists, the take-up is often poor.

IIRC RubyGems supports package signing, but almost no one uses it, so it's effectively useless.

We're seeing the same pattern again with Docker. They added support for signing (content trust), but unfortunately it's not at all designed for the use case of packages downloaded from Docker Hub, so its adoption has been poor.


I think browsers show how to migrate away from insecure defaults successfully. The client software should start by showing big, obvious warnings. Later stages should add little inconveniences such as pop-ups and user acknowledgement prompts, e.g. "I understand that what I'm doing is dangerous, wait 5 seconds to continue". The final stage should disable access to unsigned packages without major modifications to the default settings.


Browser security has heavily benefited from the fact that there are a small number of companies with control over the market and an incentive to improve security.

Unfortunately the development world doesn't really have the same opportunities.

If, for example, npm started to get strict about managing, curating, and securing libraries, developers could just move to a new package manager.

Security features (e.g. package signing, package curation) have not been prioritised by developers, so they aren't widely provided.


Actually, when publishing to the biggest Java package repository (Sonatype), you NEED to sign your packages.

Also, you can't transfer ownership without giving away your domain or GitHub account. You can add others who can also upload under your name, but if an accident occurs, you're liable too.


Would you have distrusted this maintainer though? If someone takes it over and publishes what appear to be real bug-fixes, I'd imagine most people would trust them. The same goes for trusting forks, or trusting the original developer not to hand over access.


> Would you have distrusted this maintainer though? If someone takes it over and publishes what appear to be real bug-fixes, I'd imagine most people would trust them.

Quite possibly. But I'd make a conscious decision to do it, and certainly wouldn't be in any position to blame the original maintainer.


The sale of any business that makes use of cryptography will generally include the private keys and passwords necessary to ensure business continuity. Code signing would not necessarily protect you against a human-approved transfer of assets as occurred here, whether as part of a whole-business sale or as a simple open-source project handoff.


If you have tons of dependencies then it's not feasible to check every diff. You may be able to do it, or pay someone to, if you are a bigger organization, but a small shop or solo developer can't do this.


"If you have tons of depedencies then it's not feasible to check every diff."

Part of bringing in a dependency is bringing in the responsibility for verifying it's not obviously being used badly. One of the things I've come to respect the Go community for is its belief that dependencies are more expensive than most developers currently realize, and so library authors generally try to minimize dependencies. Our build systems make it very easy to technically bring in lots of dependencies, but are not currently assisting us in maintaining them properly very often. (In their defense, it is not entirely clear to me what the latter would even mean at an implementation level. I have some vague ideas, but nothing solid enough to complain about not having when even I don't know what it is I want exactly.)

I've definitely pulled some things in that pulled in ~10 other dependencies, but after inspection, they were generally all very reasonable. I've never pulled a Go library and gotten 250 dependencies pulled in transitively, which seems to be perfectly normal in the JS world.

I won't claim I have audited every single line of every single dependency... but I do look at every incoming patch when I update. It's part of the job. (And I have actually looked at the innards of a fairly significant number of the dependencies.) It's actually not that hard: malicious code tends to stick out like a sore thumb. Not always [1], but the vast majority of the time. In static languages, you see things like network activity happening where it shouldn't, and in things like JS, the obfuscation attempts themselves have a pretty obvious pattern (a big honking random-looking string, fed to a variety of strange decoding functions and ultimately evaluated; it has a very stereotypical look to it).

And let me underline the point that there's a lot of tooling right now that actively assists you into getting into trouble on this front, but doesn't do much to help you hold the line. I'm not blaming end developers 100%. Communities have some work here to be done too.

[1]: http://underhanded-c.org/


I'm not convinced that this incident argues in favor of Go's "a little copying is better than a little dependency", which I continue to strongly disagree with. Rather, it indicates that you shouldn't blindly upgrade. Dependency pinning exists for a reason, and copying code introduces more problems than it solves.


I don't think you have to get all the way to the Go community's opinions to be in a reasonable place; I think the JS community is at a far extrema in the other direction and suffer this problem particularly badly, but that doesn't mean the other extreme is the ideal point. I don't personally know of any other community where it's considered perfectly hunky-dory to have one-line libraries... which then depend on other one-line libraries. My Python experiences are closer to the Go side than the JS side... yeah, I expect Python to pull in a few more things than maybe Go would, but still be sane. The node modules directories I've seen have been insane... and the ones I've seen are for tiny little projects, relatively speaking. There isn't even an excuse that we need LDAP and a DB interface and some complicated ML library or something... it was just a REST shell and not even all that large of one. This one tiny project yanked in more dependencies than the sum total of several Perl projects I'm in charge of that ran over the course of over a decade, and Perl's a bit to the "dependency-happy" side itself!


I suggest a different narrative. That node.js achieved the decades-old aspiration of fine-grain software reuse... and has some technical debt around building the social and technical infrastructure to support that.

Fine-grain sharing, gracefully and at scale, is a hard technical and social challenge. A harder challenge than CPAN faced, and addressed so imperfectly. But whereas the Perl community was forced to struggle over years to build its own infrastructure, purpose-built infrastructure, node.js was able to take a different path. A story goes that node.js almost didn't get an npm, but for someone's suggestion "don't be python" (which struggled for years). It built a minimum-viable database, and leaned heavily on github. The community didn't develop the same focus on, and control over, its own communal infrastructure tooling. And now faces the completely unsurprising costs of that tradeoff. Arguably behind time, due to community structure and governance challenges.

Let's imagine you were creating a powerful new language. Having paragraph- and line-granularity community sharing could well be a worthwhile goal. Features like multiple dispatch and dependent types and DSLs and collaborative compilation... could permit far finer-grain sharing than even the node.js ecosystem manages. But you would never think npm plus github sufficient infrastructure to support it. Except perhaps in some early community-bootstrap phase.


Unfortunately, one of the things that makes JS dev pull in so many deps is that it lacks a decent standard library. Meanwhile, the Go standard library is amazing!


Most users of dependencies will never consider whether to trust those dependencies at all.

If you’ve considered the problem and decided to trust them as a compromise to reduce staffing and costs, that’s a fine outcome in my book.

How many health/medical startups are blindly trusting dependencies today, having never thought through the harm I describe?


That's why you lock your dependencies.


Hope you don’t start your project with the compromised version — and then lock it in.


Dependencies need bugfixes and you may even want to use new features, so locking is not a permanent solution.


If you're on point enough to know which features/bugfixes you're getting then you're probably doing enough to be safe already. Just don't go around running npm -u for no reason and you should be fine.

The only way to be truly safe from this attack vector is to own all of your dependencies, and nobody is willing to do that so we're all assuming some amount of risk.


That will work assuming you have audited the code already, but you will also have to audit every changed dependency (and every one of its changed dependencies, and so on) every time you bump a dependency!


I would argue that having tons of dependencies is a problem in and of itself. This has become normal in software development because it’s been made easy to include dependencies, but a lot of the costs and risks are hidden.


Maybe not, but you can avoid updating unless necessary. Assuming you only make necessary updates (and at least do a cursory check at the time) and vet any newly added dependencies as you go, you can greatly reduce your own attack surface. You're still probably vulnerable to dependency changes up the chain, but then at least you're depending on a community that is ostensibly trustworthy (i.e. if every maintainer at least feels good about their top-level dependencies, then the whole tree should be trustworthy).


I would caution one to not have tons of dependencies. More surface area in terms of the amount of 3rd party libraries/developers means more chances that one of them is not a responsible maintainer, as in this case. That increases the application's security risk.


Then let's hope you're not using Webpack, which alone has several hundred of them, and not small ones, mind you... super complex libraries, such that trying to "own" them well enough to securely review code diffs is completely infeasible.


The problem here is that the diff is not source, but "compiled" code. We ultimately come back to "Reflections on Trusting Trust" [1]

  [1]: https://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html


I'd love to do this, but how can you review thousands of lines of code that changed?


You can use smaller, more targeted libraries so that the changes you need to review are actually relevant to your project.


In the Node/NPM world, this is pretty difficult. There are many (many) small libraries and everything you depend on brings in many more.


That's a big and totally objective reason to abandon the Node.js/NPM ecosystem, like its original author did.

A language that doesn't have a decent standard library means that you'll have to use huge amounts of code written by random strangers, and the chain of dependencies will grow larger and larger.

In languages like Ruby and Python, you have a decent standard library, and then big libraries and frameworks that are maintained by the community, like Rails, Django, SqlAlchemy, Numpy, etc. That's healthy because it minimises or zeros the amount of small libraries maintained by a single guy, thus maximising the amount of code that you can trust (because you can trust the devs and development process of a popular library backed by a foundation or with many contributors).

With Node, almost every function comes from a different package. And there's no bar to entry, and no checks.

If Node.js is going to stay, someone needs to take on the responsibility of forming a project, belonging to an NGO or something similar, where the more popular libraries are merged and fused into a standard library like Python's. Personally, I'm not touching it until then.


You can't, you're forced to trust to some degree depending on factors specific to your project. If you're writing missile navigation code then you better check every last diff, but if you're writing a recipe sharing site then you don't have the same burden really.


Ideally you don’t pull in thousands of lines of code in unknown dependencies as a starting point


How does this work practically when the vuln exists only in the minified version of the code?



Most projects are already bundling and minifying code themselves. Any cookiecutter-type tool will set that up.


Unfortunately this isn't really doable in today's world of JavaScript development. If you want to use any of the popular frameworks you are installing a metric ton of dependency code. So not only do you have to somehow review that initial set of code, but you need to know how to spot these types of things. Then, once you complete that task, you now have to look at the diffs for each update. And there will be a lot of updates.

What you're suggesting is a great idea from a security perspective. But for typical workflows for JS development it just isn't practical.

Now, maybe this means we need different workflows and fewer dependencies. But it's so ingrained that I don't know that it's easy to fix or change.


But you also have to review the diffs of your dependency's dependencies. And their dependencies. And so on and so on.


Your direct dependencies are not the only problem.


That thread is a huge argument for paid software. It's mind blowing how folks expect people to maintain things for nothing and get mad when it doesn't work perfectly. Some silly choices were made by the original maintainer but give the dude a break. He doesn't owe you a damn thing.


Paid software most often contains open source components that are just as vulnerable as this one, and if you expect paid software to be better maintained, in terms of code, than free software... you are in for a surprise (hint: the corporate release cycle is a grinder).


I agree. The author released this library under MIT license which explicitly states that there is no warranty. Yet some people in the GitHub issue thread are acting as though the author owed something to them personally.


Do you know how much of your paid software contains a similar disclaimer that removes responsibility for damages caused by the software? Only you'd never have had a chance to look at that code and find this injection...


You're not wrong; quite right, in fact.

My point here is less about payment guaranteeing a lack of bugs or having someone to point a finger at and more at incentivizing the team building it to fix things quickly. Of course, there's always the <Insert Negligent, Well-Compensated Fortune 500 Company Here> problem, but that's a case-by-case issue.


Yes but then you have someone culpable for their mistake.


I agree.

If anything maybe those who depend on unchecked code so willingly have the burden of responsibility?


Get ready for more of these type of attacks as time goes by, and they won't all be as obvious as this one.

Supply chain attacks are increasingly being seen as a good vector, and there are thousands of unmaintained open source libraries buried in the dependencies of popular projects.

Also worth noting that package security solutions, which check for known vulnerabilities, are unlikely to solve this kind of problem unless they track "known good" package versions and alert when things change.


The (most recent) payload turns out to attempt to steal Bitcoin wallets:

https://github.com/dominictarr/event-stream/issues/116#issue...


Move fast and break things. I view this as an endemic fault of how npm, and to a larger extent javascript, culture works and thinks.

I believe npm to be a giant minefield of untrustworthy packages thrown together. No package is trustworthy, as the dependency chain runs too wide and too deep. And having unwieldy dependency trees is considered a feature of npm.


Unfortunately, whilst npm is the largest source of packages, it's not the only place where this problem exists.

Most programming languages have this issue to one degree or another, which is that they have a package management repository where there is no curation of content, and generally no package signing (there are some cases where that exists, but the take-up isn't always great).

The challenge is that this now seems to be embedded in the culture of a lot of development, and changing that culture would be very challenging, I'd imagine.


Yes. The fast-moving package-management dependency hell seems to be spreading, but I believe its root is in npm/JavaScript. If we see it elsewhere, it is because it was borrowed from JavaScript, which was seen as "the future" of software development.


The concept of code sharing—which is a good thing, this incident notwithstanding—goes all the way back to CPAN and Perl 5.


> The concept of code sharing—which is a good thing, this incident notwithstanding—goes all the way back to CPAN and Perl 5.

The concept is significantly older, but the implementation of a systematic facility for it rather than more ad hoc mechanisms may date from then. (It's a lot harder to sneak in and use an exploit in books of source code used in the 1970s-80s, though; convenience comes with a price.)


In case anyone is unfamiliar with Perl's version history like I was, Perl 5 and CPAN date to 1994.


This exploit is not NPM specific, could be any programming language. (Though NPM is a bit easier to exploit since there are so many packages and a large tree of deps)


The mindset where handing over a package with 2 million weekly downloads to a complete stranger is a totally OK thing to do IS npm-specific.


Would it be possible to put a compromised package in the Debian repository? Or has something like this happened in the past?


It's certainly possible, but to my knowledge it hasn't happened. This specific case, where a random person got the authority to publish new versions, would be prevented by Debian's organisational policies. Other distros are much less stringent and open to this kind of attack, though; Arch/yaourt, for instance, will happily install straight from GitHub, and this exact scenario could have played out there.

The gatekeeper model is a proven one, be it an organisation like debian, a paid curator like redhat or a locked down ecosystem like iOS.


This must be the fifth incident that could have been readily prevented if npm simply required signing for its packages.

Just do the same thing we have for SSH: on first connection (installation), ask if you trust the other side. That goes into your authorized_keys (lock file). If you connect again (update) and the key changed, print a big scary warning. No blockchain needed.
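A rough sketch of what the client side of that could look like, assuming the registry exposed a per-package publisher-key fingerprint (it doesn't today, so the data source, file name, and function here are hypothetical):

    // Trust-on-first-use for package publishers, SSH-style.
    const fs = require('fs');

    const TRUST_FILE = 'trusted-publishers.json'; // plays the role of known_hosts

    function loadTrust() {
      try { return JSON.parse(fs.readFileSync(TRUST_FILE, 'utf8')); }
      catch (e) { return {}; }
    }

    function checkPublisher(pkg, fingerprint) {
      const trust = loadTrust();
      if (!trust[pkg]) {
        // First install: pin the key (a real tool would prompt the user here).
        trust[pkg] = fingerprint;
        fs.writeFileSync(TRUST_FILE, JSON.stringify(trust, null, 2));
        return 'pinned';
      }
      if (trust[pkg] !== fingerprint) {
        // Publisher key changed since the version we trusted: big scary warning.
        console.warn('WARNING: publisher key for ' + pkg + ' has changed!');
        console.warn('  pinned:  ' + trust[pkg]);
        console.warn('  current: ' + fingerprint);
        return 'changed';
      }
      return 'ok';
    }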


This was, from the eyes of npm Inc. and everyone else a completely legitimate ownership transfer.

There was a "human error" (Domenic trusting the wrong person) that was completely unpreventable by technological means.

There was no hijacking involved.


There is also never hijacking involved when the SSH key of my server changes, but that doesn't mean I trust SSH Inc. to verify that for me.


In your metaphor this would be not one of your servers, but a customer’s one. Then the customer says: «oh yes, we changed the keys, it’s all ok».


A related question: I'm a NodeJS, Python and Java user/developer; can anyone explain why this problem appears more frequently in the npm ecosystem than in Maven, PyPi, etc?

Is it the Github-ization of package creation and maintenance? Is it the pervasiveness of left-pad style packages? Is it npm's governance? Is it the community's perspective and culture? Have the malicious just not made it to other repo systems yet?


1) Volume -- Javascript is the most popular language right now and it's not really close [1]. Attackers typically focus their efforts on systems that offer the most potential targets.

2) The language -- Javascript has relatively few convenience features in the standard library (at least compared to ruby, which is my point of reference) so more of those features are implemented in userland (and thus imported from npm). I think this is the biggest cause of massive dependency trees -- things like left_pad simply exist natively in many other languages.

3) The ecosystem -- for whatever reason, installing floating package versions was the default for a long time; floating package versions are obviously an order of magnitude more vulnerable to an attacker publishing a malicious update.

Edit: 4) I think it's also probably true that Javascript apps are particularly susceptible to this class of vulnerability, but that doesn't necessarily mean they're less secure overall. Javascript teams tend to move quickly and stay on the cutting edge, so they install lots of packages/updates and are thus vulnerable to malicious updates. Equifax's Java app was compromised for exactly the opposite reason -- they didn't install a security update to Apache Struts for months. That class of vulnerability is far less likely to happen on a fast-moving Javascript app. It's all a tradeoff; the important takeaway is to understand the failure modes for your particular stack and be vigilant against them (which in this case means using a lockfile and doing your best to audit new packages/updates).

[1] https://insights.stackoverflow.com/survey/2018#most-popular-...


- the pervasiveness of left-pad style packages
- npm's governance
- the community's perspective and culture

Also the low-effort nature of the JavaScript language itself: no strong typing, the ability to rip out or replace foreign methods, even redefine standard behavior, no private methods, ... If somebody enjoys living in such an environment, then they are more likely to also be OK with unmaintained packages, npm bumping your package versions without asking, npm allowing anyone to re-take a package name after it was deleted, developers transferring ownership without so much as googling the username... but hey, as long as they have cool emoji icons...


> Is it the Github-ization of package creation and maintenance? Is it the pervasiveness of left-pad style packages? Is it npm's governance? Is it the community's perspective and culture?

A nasty combination of all of those, probably some more.

I was very excited with Node.js when I first heard about it here on HN years ago. I enjoyed watching Dahl's presentation. But as soon as the crowd that went "it's JS, I can document.getElementsByClassName(...), so I can code servers" flocked in and npm became a thing, that was the end of Node.js as a nice, bright little thing.

I don't want to defend or encourage gatekeeping, but Node.js brought a lot of folks who did not know what they were doing into software development, and they did not stop to learn a thing or two before creating a monstrosity like npm or making things like left-pad and this possible and popular. And there weren't any seasoned leaders like the Python or Perl communities had, so all of that happened with no authoritative opposition. If it had been Node.lua or Node.pl, the world would've been a different place today for the software development community.


> I don't want to defend or encourage gatekeeping

Goes on to defend and encourage gatekeeping.


Yes, that's what I do, and that's what the word "but" after the part you quoted indicates.

If gatekeeping is bad, not checking who's passing through those gates when they're wide open is worse.


I wish JS/npm was more centralized... I wish there was some centralized authority who took some responsibility for maintaining the security and stability of core packages.

I think it'd make sense for npm to be this authority. They're a for-profit organization who benefits most from the success of the npm ecosystem. So why don't they try and take over the most popular projects like left-pad and event-stream? Especially the projects that are small and mostly unmaintained.


Lots of tiny dependencies.


Exactly. As a Java dev, I can't imagine creating a dependency for such a tiny thing. I would want a reputable library. My dependency tree can grow pretty big but not nearly as much as my typical node_modules.


The thing with Java, and most other popular languages, is that packaging up a unit that small is painful enough that you don't want to do it.

For instance, if I was going to do something similar to left-pad in a .NET way, I'd have to create a whole Visual Studio project and build a DLL for a single function, which would rightly be recognized as pants-on-head.


Also note the same user owns this library:

https://github.com/right9ctrl/node-scrypt

I would be very suspicious of that as well and audit anything that library has touched.


Good eyes, but this looks like it may be a red herring. That is a fork of the `scrypt` package, canonically found here:

https://www.npmjs.com/package/scrypt

With one maintainer, Barry Steyn, whose referenced repo is here:

https://github.com/barrysteyn/node-scrypt

Somebody would have to pick it up from a git URL, I think. (Maybe they could typosquat, but we'd have to find that first.)


Yeah, it looks mostly like a copy, and I saw that most commits are not by that user, but the user has made some commits, and since I'm not too familiar with how NPM packages work, I thought I'd drop a warning here.


I'm not a crypto guy but this change really makes me scratch my head: https://github.com/right9ctrl/node-scrypt/commit/52a1cb792bc...


That just looks like a convenience function to me. Rather than passing 1,2,4,8,16 in as the value of N, you pass in 0,1,2,3,4. Has the benefit that it's not possible to pass in a number that's not a power of 2 (which might not be valid, I don't know how scrypt works).
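In code, assuming that's all the change does, the transformation amounts to:

    // Caller passes the exponent; the library derives the scrypt work factor.
    const logN = 14;        // e.g. 14 ...
    const N = 1 << logN;    // ... gives N = 2^14 = 16384, always a power of 2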


IIRC scrypt is only defined for powers of 2 for N.


Yes, that looks dodgy - a 64-bit 'N' being renamed 'logN', and its value changed to `1` left shifted by its prior self? I think there's a good bet that's just zeroing it out.


It's just doing 2^logN...


Which ought to be just N - but notice the argument passed in is mysteriously renamed. It may be nothing - I'm not even sure what N is here - I just agree that it looks weird.



A funny thing is that on README.md, the `Install from source` section has this: `git clone https://github.com/barrysteyn/node-scrypt.git`.


Looks like the sleuths on github just found out it targets a bitcoin wallet called `copay-dash`

https://github.com/dominictarr/event-stream/issues/116#issue...


We may need permissions for our dependencies (cf. Android apps). Ryan Dahl already did this with Deno, specifically because he saw weaknesses in Node: network, environment variables, file system, sub-processes.

We may need reproducible builds and reproducible minification. If we want developers to audit their own dependencies, in case we deem that practical, packages cannot ship their own minified sources. Auditing the non-minified source is hard enough.

We may need (for-profit) organizations that audit packages and award their official seal which you can trust before you add or update a dependency.

We may need better standard libraries and fewer micro-packages.


> Now we have to run a background check when someone wants to help? The problem is in the tools.

I don't agree with this comment, really. But -- there's almost always room for improvement w/tools. IIUC this unpacks encrypted JS bytecode and then executes it? Is there any way to statically know whether this could happen?

    "var newModule = new module.constructor;"
    ...
    newModule['_compile']()
From what terribly little I know about Javascript it looks like it would be hard to create a lint/warning that finds this sort of thing without false positives. However, an audit tool that merely discloses this and any other things-that-deserve-greater-scrutiny would be valuable. If you could walk your dependency tree and see whether/when this changes, it would be very useful. Of course, to be useful you probably need some kind of dependency pinning too, which IIUC npm does not (yet) support?
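To sketch what I mean by a disclosure-only audit pass (a naive example, not a real tool; the patterns are just placeholders for "things that deserve a closer look"):

    // walk node_modules and report files containing patterns worth
    // a manual look -- a match is not proof of anything malicious
    const fs = require('fs');
    const path = require('path');
    const patterns = [/module\.constructor/, /_compile/, /\beval\s*\(/];

    function walk(dir) {
      for (const name of fs.readdirSync(dir)) {
        const full = path.join(dir, name);
        if (fs.statSync(full).isDirectory()) {
          walk(full);
        } else if (full.endsWith('.js')) {
          const src = fs.readFileSync(full, 'utf8');
          for (const re of patterns) {
            if (re.test(src)) console.log(full + ': matches ' + re);
          }
        }
      }
    }

    walk('node_modules');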


That doesn't look that far off from what you'll see in generated code, e.g. something like babel could easily output the lines you've pasted.

The rest of it is extremely dodgy, of course.


I feared as much would be the case. Even in non-generated code there are perhaps some instances where that code would make sense.

However, if you were to audit your dependencies and find a new case of this, you could examine it further to determine whether it passes muster. I suspect code like this is infrequently added to packages.


Eh, again, this kind of thing is common for compiler output, and the use of such compilers is very common.

I think you'd have more luck looking at static strings and warning on any string that has sufficiently high entropy. Sure, some of them will be hashes for verification, or compressed harmless data, but it seems like it would be rare enough to be auditable.
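A rough sketch of that heuristic (the threshold is arbitrary, and `literals` is a hypothetical array of string literals you've already pulled out of a file with a parser):

    // Shannon entropy in bits per character: random base64/hex blobs
    // score much higher than ordinary English text or identifiers
    function entropy(s) {
      const counts = {};
      for (const ch of s) counts[ch] = (counts[ch] || 0) + 1;
      let h = 0;
      for (const n of Object.values(counts)) {
        const p = n / s.length;
        h -= p * Math.log2(p);
      }
      return h;
    }

    const flagged = literals.filter(s => s.length > 40 && entropy(s) > 4.5);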


Maybe something that would actually run it in a sandbox and say, catch the calls to AES256 decryption and flag for human review would do the job?


npm now has a lockfile called package-lock.json (added fairly recently), in addition to package.json, which defines the entire dependency tree (not just the direct dependencies, as package.json does), contains package hashes, and will pin dependencies to specific versions for an application.
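For illustration, an entry in package-lock.json looks roughly like this (values trimmed/made up); the `integrity` field is what lets npm verify that the tarball it fetches for a pinned version hasn't changed:

    "event-stream": {
      "version": "3.3.4",
      "resolved": "https://registry.npmjs.org/event-stream/-/event-stream-3.3.4.tgz",
      "integrity": "sha512-<base64 digest here>"
    }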


This isn’t about Dominic, this is about using node for anything secure (for example, somehow people use node in cryptocurrency projects)

If you have any security requirements whatsoever, and you are using node, you need to do your own audits of the (hundreds of) packages you use. If that’s not practical then node isn’t practical


I guess it's not npm specific, because the same thing can happen in any other open source repo, can't it? Some guy takes over maintenance of a dormant package and then adds code which no one bothers to check.


Well, not really. Anyone can "take it over" in the sense of creating a new, maintained fork, but the various distro package managers have policies that I think generally require some review/approval process to transfer ownership of a package they publish. I don't know anything about NPM, but imagine it's a more lax process than something like Debian. It sounds like the maintainer here did something shady or simply reckless in effectively transferring ownership but not disclosing this to folks at NPM?


there's a split here.

Linux package managers (by and large) have a maintenance process.

Programming language package managers, by and large, do not

So this could happen on RubyGems, PyPI, NuGet, etc.


Thankfully, other languages really do require fewer lesser-known external packages.


I don't think most people on HN are seeing the real story here. This is not another harmless stunt like left-pad to make us all wag our fingers at the Node community. This is a coordinated, targeted attack on popular Bitcoin wallet software to steal private keys, using a popular open source library and NPM as the attack vector.

This is potentially the biggest Bitcoin heist since MTGox and the biggest news in Bitcoin all year. The last time BitPay got scammed they lost 5,000 BTC and that was back in 2015. I wouldn't be surprised if this is causing the current mass sell-off. Does anyone know how to find real-time transaction volume stats broken down by client, country, etc.?


Most people on HN may not be Bitcoin investors. More HN people are probably programmers who depend on open source.


Makes me wonder if there's a business opportunity. Make your own version of NPM, based off the real NPM, but with some level of auditing for package updates. Don't pass along suspicious commits from new contributors right away, that sort of thing. Charge companies a few hundred $ a month to use your site instead of NPM, and they can update packages at will without risk of picking up something nasty.

Then again, the fact that nobody has done this already suggests that it wouldn't work.



Schlep is precisely a business opportunity -- tedious drudgery that you either have to do regularly OR can pay someone else to handle and not worry about again.


I think GP was responding to the second paragraph, which suggested that this business might not be viable just because it hadn't yet been tried.


I think we're in agreement, sorry if that wasn't clear.


I never much liked dropping a link as a reply to a comment, because nobody can tell what you think about it.

But I will say that indeed almost all viable businesses involve some amount of schlep. But just because something involves a lot of schlep doesn't make it a viable business.


Would anyone want to support a third party which audits popular dependencies and digitally signs off on them after review? I currently audit source code for private companies; it would be cool to pivot and focus on open source projects.

Feel free to email me, jt3@justin-taft.com


This whole debacle has been really interesting.

I've personally met Dominic and chatted with him in the past (although I doubt he would remember me) and he's a great guy. I may not agree with his world views but he's respectable.

Just giving away a repo, and reading his description of it, sounds like something he would do, haha. I'm sure from his view, he's just been open sourcing experimental modules (however big or small) and that's all it is: fun

On the flipside, the inhouse CSS framework where I work actually pulls in event-stream as a sub-sub-subdependency which could have bitten us possibly if the attack wasn't so specific.

It's easy to blame Dominic, but ultimately, he's just a guy who made some code for fun and people started using it.

I'm sure when you're in a position like that, one of two things is likely:

1) You either don't look at download/usage analytics because why would you? It's one of many projects and you're not measuring their success by anything more than how you feel. People can use whatever code but it's not exactly a product being sold

or

2) You do look at the numbers perhaps but 2 million people? How do you even quantify that? It could be 1 million bots, 2 million hobbyists, 50,000 enterprises who don't cache internally and npm install over and over, or even all 3 combined!

Anyway, this whole event hasn't happened in a vacuum so at least it'll (hopefully) spark constructive discussion about the state of JS and companies/projects relying on possibly unmaintained sub-sub-sub... you know


Someone posted a list of every public package that depends on that one: https://github.com/dominictarr/event-stream/issues/116#issue...

A lot of very common packages...


This is one of the gaping security holes that open source has always had: you don't know where the code came from, and your best indication of whether it's safe is how popular it is. Npm and Maven libs are the scariest, since there are hundreds even for small apps.


Are those points not the same for closed source products? "You don't know where the code came from" and "your best indication if its safe is how popular it is."


NPM needs to make accessing the source code easier. It has always bothered me that the linked repository can be set to absolutely anything the author wants.


I think something like an npm diff command would be helpful. This would allow you to see the changes from a previous version that you just upgraded from. This would somewhat replicate the functionality that committing your node_modules directory to git would give you.


Not sure how that would help. Anyone can publish anything to npm, it doesn't even need a repository. So unless the source code itself was hosted on npm, and the entire toolchain was controlled by npm, there's not much to do.


Exactly, I think a source code browser on https://www.npmjs.com/ would be nice to have


IMHO there are only 2 long-term solutions to this type of problem: 1) Reduce the number of external dependencies as much as possible. 2) Pay authors to properly maintain the packages that you rely on. If you use them in commercial projects, it's the first and cheapest possible security measure you should invest in.


The best solution is a module system that obeys the principle of least authority. An event-stream library shouldn't be able to find your cryptocurrency wallet, or communicate back to its author either.


Wow, looks like it was also used in Microsoft and BBC News repos.. unless I read the updates incorrectly. Perhaps the attacker was targeting a specific user?


I think the underlying issue here is ownership of a package, that is the sensitive thing that was given away. This is why I prefer to publish scoped packages, e.g. `@dominictarr/event-stream` not `event-stream` onto npm, so that ownership doesn't need to be given away. If someone wants to give continuity to the project, it can be done under a different fork. Also, it's important to pin versions for immutability. Naive OSS software, with changing owners and evolving versions, is essentially "mutable", and a weak spot for attacks.
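Schematically, the difference for a consumer's package.json (hypothetical example):

    // unscoped name, floating range: a new owner's patch release flows straight in
    "dependencies": { "event-stream": "^3.3.4" }

    // scoped to the author and pinned: a change of maintainer means a change of
    // name, which every consumer has to opt into explicitly
    "dependencies": { "@dominictarr/event-stream": "3.3.4" }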


Actually, in such a case Dominic would have transferred ownership of the package anyway, probably retaining some power in the process, but the major harm would have been done already.


Another example of how social engineering is usually what gets you hacked as opposed to a software vulnerability


So, I'm not a javascript person. I don't know the ecosystem that well. I'm not really even a programmer except at gunpoint, just a sysadmin.

But even 5 years ago I remember thinking "400 dependencies for a project is just WAY too many" and imagining something like this. And now I see some of the big frameworks require literally thousands of packages. Is there something about Javascript that necessitates this kind of ultra-fine hairsplitting on packages, or could that be made better?


On the web, any dependencies have to be sent to the client as well, so making them small and having the build chain group them all up into one minified file is supposed to make things quicker for clients.

Also, languages like Python resolve calls at build time in a way that makes it easier to know what code is not used, or is never going to be used.

JavaScript's string-based introspection and runtime generation of code, which are the backbone of entire frameworks, also make it impossible to know what parts of included code are even used: just because no code calls out to it at build time doesn't mean the code isn't going to generate calls to it.


People surely aren't evaluating strings in production code, are they? The LISPer in me just died a little.


Is there a way to get stats on suspicious activity on NPM repos? I feel like this is a service that every node.js project is in need of after left-pad and now this.



Edit: Applies to any package manager, e.g. ruby gems, elixir hex packages, Python eggs.


You could run NPM Audit (https://docs.npmjs.com/cli/audit)


But this wouldn’t catch issues with pushing immediately after being granted access, new github accounts pushing changes to libs with thousands of dependents, libs whose dependencies were yanked from npm. Only published vulnerabilities, right?


My apologies, I assumed you meant ones with known vulnerabilities. Given OP's link that was a bad assumption to make.


check them into your git


How does that help? You still update from time to time to get fixes and new features, and then you can pick up some unwanted code which hasn't been discovered yet.


it enables you to run git diff after updating your npm dependencies, so you at least have a chance to detect something shady


In this situation, wouldn't it tell you the package had a version bump?


and/or use fixed versions with package-lock. Or private npm cache if you are concerned.


Are there actually any future plans for JavaScript to add a decent standard library?

Coming from other languages, the lack of built-in functions seems crazy, and the NPM situation just weird. An empty template I use for bootstrapping jquery-based web apps, based on using Gulp 4 for transpiling ES6, SASS compilation etc, has a node_modules folder with 23,268 Files across 3,080 Folders!


I was understanding until I read:

"he emailed me and said he wanted to maintain the module, so I gave it to him. I don't get any thing from maintaining this module, and I don't even use it anymore, and havn't for years."

in the thread. Then, I looked through the commits and realized how not ready @right9ctrl was to maintain a heavily used package. Basic linting and small, iterative changes are absent.

So, when I see Dominic-san just "handing" the package over, it seems careless. This is often why I try to write my own packages unless I absolutely need some sort of framework. It's trading one set of problems for another (the potential for bugs vs. a package as a security issue).


It looks like no one is even sure what the code actually does? Or am I misreading that thread.


Seems correct: the payload is encrypted with AES, and it seems that a package.json description is the passphrase, but the issue author hasn't been able to decrypt it so far (for reasons listed in the issue).
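The mechanism being described boils down to something like this (a simplified reconstruction based on the issue thread; the exact encodings are assumed, and `payloadHex` stands in for the encrypted blob shipped with the package):

    // the passphrase is the npm package description of whatever top-level
    // application is running the code, exposed by npm as an env variable
    const crypto = require('crypto');
    const key = process.env.npm_package_description;
    const decipher = crypto.createDecipher('aes256', key);
    const source = decipher.update(payloadHex, 'hex', 'utf8') + decipher.final('utf8');
    // only the targeted app's description yields valid JavaScript in `source`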


Looks like they just cracked it in the comments - a bitcoin wallet stealer for an app that had it deep in the dependency tree.


It's attacking a particular package -- some package out there depends on ps-tree directly or indirectly, and its description is the secret key.


Which means there is a finite list to check, unless the package is something internal and not on npm.


from the github issue it looks like a couple people are working on that using publicly available lists, and they've even found a few valid decryption keys, but none that turn the payload into executable javascript


It's AES: Anything the right length will decrypt to something.


Aha: It's:

    npm_package_description = 'A Secure Bitcoin Wallet';


Of course!


No one knows what it does but we do know that it seems to execute some encrypted bytecode. This seems remarkably unconventional and likely a good way to bundle malware.


You read it right


People have to understand that the Node.js ecosystem is different than in other programming languages/platforms. Modules are composed from smaller modules with a granularity so small that it is almost unique. When you're publicly publishing something low-level to npm you really should understand that the community may build on top of it to the point where it is used by many modules and as a result used by many end-users. You are not obligated to maintain it, but it's not really fair to be totally careless either. The community is like society: if you're in it, try to be a good citizen, or the community will move against you to preserve its ideals.


So much irony here. So, the community can be careless and just import whatever, but the maintainer can't pass maintainership on to someone else without it being dubbed careless? Shouldn't this whole situation be the fault of the community because they let in a bad actor? Society is great when everyone is benefiting, but something goes wrong and out comes the pitchforks.


Don't get me wrong, I believe the author's heart is in the right place. I find his willingness to be inclusive and get other people involved refreshing actually, but the problem is that he should know better. He's seemingly used npm enough to know that there is no mechanism within npm to notify downstream modules that he is transferring the package to someone else (and that there is now a new security risk that they must re-evaluate). The course of action taken was simply not thought through. I'll leave it to others to explain what the appropriate course of action should have been, but keep in mind that society is built on trust, otherwise you have nothing to build on. You're too busy doing every job instead of one that builds upon others.


This is how transitive trust works. You vouch for a bad guy - you lose trust yourself.

Far from being ironic, this is the normal functioning of the system.


This is one of the reasons npm packages with compiled, minified dist JS files are a bad idea: it makes hiding malicious code surprisingly easy.


I'm not sure that it would matter much in practice. I don't think anyone really reads the code for their transitive dependencies.


Anyone know how this was found and whether that process can be automated? There are likely a lot more packages that are infected with the same technique.

I looked at the linked commits: https://github.com/dominictarr/event-stream/commits/master

and it looks so surprisingly normal.

I am pretty sure I wouldn't have caught it even if I was code reviewing it.

Did someone heroically rerun every package's build process to see if the minified output was tampered with?


For what it's worth: it'd be undecidable to catch all vulnerabilities even if you very precisely define the semantics of your virtual machine, because it'd require knowing whether a certain line of code will be executed, which is known to be as hard as the halting problem. Practically, maybe? Maybe one can use ML to check a codebase for suspicious code. But in an ecosystem like npm, no matter what protection you have against hackers, someone will try very hard and inject vulnerabilities like this. You should defend yourself: use checksums for deps and execute only trusted code.


Not really asking for perfect/decidable.

Just wondering out loud on how to detect even one of the other active exploits that are surely in the wild. This one was live for 3-4 months before it got found by luck?

For example I would really appreciate if they added something to detect/warn when a package needs eval or http requests permissions (yes, I know npm packages don't have a concept of permissions).

Anyway your commonly repeated advice on how to defend yourself isn't practical (short of not using npm). The dependency that got me was nodemon. That ought to be "trusted code" with 17k github stars, 1 million downloads per week but apparently not. As I said already, even if I did check the commits to all my dependencies, I don't think I would've caught this one. It's a one line change in the minified build file of a transitive dependency nested 4 deep. All the commits around it are very normal looking.


I wasn't intending to say reading every single commit of your dependencies is practical. I was arguing that unless you do that, you get absolutely no guarantee of security. If you take code for granted because it has 17k internet points, you're putting yourself at risk. I can't see how this is not very simple.


It's "very simple" that you will also "get absolutely no guarantee of security" unless you personally rewrite every line of code you use in a provable/verifiable language, design and manufacture your own hardware, hide in a cave to avoid all social engineering, and so on.

Hopefully you see why your advice is similarly impractical.


Running the open source projects through organizations like Apache Software Foundation [1] would help. They can have processes for handling abandoned projects and also set guidelines that projects must follow when it comes to including 3rd party packages.

This is not just a security risk. There might be some business implications if somebody were to sneak, for example, some AGPL-licensed software into the dependency chain of a popular package with a more permissive license.

[1] https://www.apache.org/


I love reading these threads. People seem to expect way more out of open-source software than is actually guaranteed to them. It is a template of some computer code that may be relevant to your work; nothing more, nothing less. "I can't believe you let this happen to me!" Well, read the license: "THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED". If you don't agree to that, you have no legal right to use the code. Not sure what else there is to discuss.


> Not sure what else there is to discuss.

Morals and ethics.


Is it ethical to use code while not agreeing to the license it is provided under?


The license doesn't say "You can't call the author out when they make a bad decision"


I agree that this is a good discussion to have, but maybe we should include the responsibility that people who so easily include potentially untrusted code have in this?

This is free code; the responsibility to check it may need to shift socially onto those who use it.


What are the ethics? The developer isn't interested in maintaining it. Someone else was. So the developer gave it to that interested party.

I feel like 90% of the open source libraries I've written are maintained by someone other than me now. If that's considered unethical, next time I'll just not open-source it.


The MIT license isn’t as bulletproof as you think it is.


The license doesn't really matter here. I'm only quoting it to set expectations. People are showing up demanding some sort of quality standard, when they explicitly agreed that there was no such standard. A related issue is that NPM distributed malicious code to people. That sounds like a problem between people that downloaded the malicious version and NPM. Not sure how the author is involved.

To your point, if the author added the malicious code, I don't think the license would necessarily protect him. Intent is key. But he didn't.


> A related issue is that NPM distributed malicious code to people. That sounds like a problem between people that downloaded the malicious version and NPM. Not sure how the author is involved.

It's an author/maintainer issue (and not an NPM issue) because the maintainer explicitly gave the bad actor the right to publish via NPM's delivery channel. The maintainer did so in gross negligence, in my opinion.


If courts agree with you, the world of free software is over. It will be too risky to accept contributions. Github and NPM will be shut down over lawsuits.

For that reason, I think you are wrong.


It is risky. Ask one of the 17 law firms involved with Oracle v Google if open source licensing is too risky.


Might be unpopular, but I'm super okay with maintainers just disappearing and ignoring their projects. Stop putting so much weight on the original package namespace. I know it's a lot of work, but in my experience, when maintainers disappear, the community creates a fork that's better maintained and people migrate to that version.

I think if you want to stop working on a project, and it's on github, just leave a note in the README, archive it and move on with your life. Don't give the keys to someone on the internet.


Regardless of whether the author's to blame, the outcome would be no different if the author's Github account were compromised, or the author themselves were compromised or decided to act maliciously. In either case, users of the code also need to act responsibly. Are any downstream package authors that incremented the dependency, or failed to lock it to a specific version, any less culpable? (event-stream is being used by SuperPopularPackageX, which allows the compromised version)


Seems like we need a linter that can look for binary, hex, or otherwise encrypted-looking data in source code (I have no idea how you'd teach the linter to look for anything that "looks like gibberish"--I'd leave that to people smarter than me). From past experience with other popular apps that get backdoored with bitcoin miners, anything intentionally obfuscated in what is supposed to be an open source application is almost always suspicious.


You could measure the entropy of single lines and determine if the entropy is high, although you'd probably get a lot of false positives.


You can probably just measure the entropy of literals, but then an attacker might use reflection to construct arbitrary strings. Not an easy problem.


The problem with JS (Node and browser) is that the standard library is pretty much useless.

What JS/NPM needs is some sort of certification program for secure libraries that would fill the void of the lack of a standard library. NPM of course would give a warning when installing a non certified library.

This would be a tremendous effort, but it would really be the only way for NPM to survive in the long term. NPM is one of the reasons we've moved to Go for our server side code.


> NPM is one of the reasons we've moved to Go for our server side code.

It's one of the reasons, but far from the only reason. With Go, Python, Rust, Swift... is there any reason to use Javascript on the backend? Most attempts I see to "share" code between the browser and the server end up adding needless layers of abstraction and complexity.


I think SSR is probably the best reason for using Node.

Another one is making a GraphQL API that concentrates data for the clients from different places. It could be made in another language, but it's better if front end developers maintain that project.

For bite sized cloud functions Node is also not a bad choice.

But for pure backend projects, yeah, Node isn't the greatest choice these days.


The appeal of being able to use one language across the whole project, rather than one on the backend and a different one on the frontend, is very strong.


I know the pitch for this is very strong but I've never seen any benefit in real life. Javascript was designed to be a client side browser scripting language, it's not equipped with a standard library that competes with other languages that were designed from day one for the server or systems programming. The skill sets of front-end and back-end developers are also very different. In my experience, when you use Node you end up with front-end devs programming back-end servers very poorly or you end up with back-end devs forced to use Node/JS. I've never actually met a seasoned back-end developer who would choose JS for the domain given no constraints.


To the outsider, these threads assume so much context that they don't make sense. (Of course, if you are a committer or have intimate knowledge of these tools, these threads make perfect sense!)

Who, what, where, why, and how need to be answered.

Usually when there's a link like this on Hacker News, one of the highly moderated comments explains the situation in ways that don't require as much context as a typical committer would have.


The original maintainer of a widely used npm package had moved on and didn't have time to maintain the package anymore. Someone approached them asking if they needed someone to take the reins and maintain the package going forward. That person was a hacker who, after gaining publish rights to the package, installed a malicious dependency. Anyone who updated the original npm package within the last 3 months was hit by the attack. The details of the attack aren't extremely clear, but a few comments suggest the code was aiming to get bitcoin wallet credentials.


And even if you understand what's going from the OP, you're unable to get full context as Github hides the majority of comments in the issue. I've clicked "Load more..." 5 times and still over 100 comments are hidden. Incredibly annoying.


Scoping dependencies to their author by default would help (not eliminate) this problem.

For example, if this had been @dominictarr/event-stream the entire time, and he chose a new maintainer, that would then by definition change the name of the package to @right9ctrl/event-stream, requiring all users to intentionally change their dependency.

Yes, it's painful. But it prevents getting caught out by a malicious patch release.

That said, the whole github/npm relationship is broken to begin with. No simple mechanism exists that I'm aware of to verify that a tag as pushed to GitHub is exactly the same as what is published to npm. And due to various prepublish hooks, it can be very difficult to actually prove this. One solution would be GitHub support for building & publishing packages, with a Travis-like build log, so users can be assured that the package as it sits on npm actually was built from expected source.

@right9ctrl could have avoided committing this entirely and just published a malicious patch release. If he had done that for a few days, then published another that removes the injection, it might have gone unnoticed for months.


> One solution would be GitHub support for building & publishing packages

Except you can build packages using one of a million different tools. Unless you could only publish packages that were built using specific vetted tools (and vetted plugins for those tools), that wouldn't change anything. If I can change anything in the build pipeline, I can control the output.


Clever.

Something out there depends on ps-tree (directly or indirectly) and the description in its package.json is the AES key for this string.


Aha: It's:

    npm_package_description = 'A Secure Bitcoin Wallet';


This situation reminds me a lot of the hypothetical situation outlined in this post [1]. Definitely worth the read for anyone who hasn't already seen it.

[1] https://hackernoon.com/im-harvesting-credit-card-numbers-and...


All these arguments about moral/legal liability are just head-scratchingly nonsensical to me.

Look, if you're pulling external dependencies into your project from anywhere, and you don't version pin and hash compare those dependencies every time you build, then you get ZERO security guarantees. Simple as that. You might have some "in a perfect friendly world" expectations, but no guarantees, no recourse and no one else to blame when the world does not conform.

The notion that the author of the package could/should do anything to mitigate that for you is ridiculous. Ignoring many, many plausible coercion/honest mistake scenarios, just consider that a once-reputable maintainer can over time become malicious. That's it. If you have any actual real security requirements, then I'm sorry but you don't get to say "but but but DRY!!!" and call it a day. You are responsible for what software you run in your product.


Semver seems like a huge problem here; blindly pulling in changes is just asking for a lot of trouble, intentional or not.

I'd also say NPM's centralized naming structure is problematic: people don't want to update thousands of modules to some new latest version of X which was forked and is now maintained, so ownership gets handed off like this.


it's kind of impressive this compromise was discovered _at all_. Anyone know the story of how it was? There seems to be a lot of back story before the linked issue begins.

I'm interested in who noticed the compromise, when, and how. Because with open source, noticing the compromise is about the _best_ we can expect from a "worst case".



Interesting. If the malware hadn't buggily triggered a deprecation warning...


Makes you wonder how many backdoors like this are out there.


I'm also a little bit pissed at the author of this repo. But seriously, how can you vet everyone contributing to your unmaintained open source project?

Good security scanning and perhaps identity verification on Github could help. So at least you can ignore pull requests from authors that didn't verify their identity.


If you can't or don't want to maintain the project and can't find anyone trustworthy to take it over, just don't do anything. This effectively deprecates the project, while not harming its existing functionality.

This maintainer actively ceded control of his library to some random person.


Yeah this one is easy to spot, but what if someone makes valid contributions and for pull request #10 slips in a backdoor. Most open source projects are not able to check that anywhere near thoroughly enough.


The reasonable answer is that he should absolutely not have handed over publish capabilities to a completely unknown third-party. This seems obvious to me.


It's the internet... we're all unknown to each other, and many modern web apps are running a huge stack of unvetted code.

Honestly, a problem like this is overdue, especially for the NPM ecosystem with its propensity for a huge number of transitive dependencies, but also for all other major repos (nuget, cargo, CPAN, Docker, etc.). OS-based repos (Debian, Ubuntu, FreeBSD) might feel a little safer because it's harder to become a publisher, but it's not at all impossible. Perhaps the only reason it hasn't been a target before now is because there are easier avenues for cybercrime.


Then let me ask you, how long should have dominic let right9ctrl contribute to the project before trusting him and giving him publish capabilities? With hindsight, we know that right9ctrl is going to publish a backdoor the second he gets rights. How long do you make right9ctrl wait? And does that accomplish what you want?

If you think that ownership transfer should exist at all, then the attack vector still exists no matter how long you wait to trust right9ctrl.


If I give the maid a key to the house, there's always an attack vector, but that doesn't mean I just go hire some rando off Craigslist.

There are a number of factors that could be considered when giving somebody this kind of responsibility, including existing contributions to open source, contributions to the project at hand, and public profile. As far as I can tell, "right9ctrl" had none of these.


You can't vet the people, but you can vet the code. Of course, if you just give someone commit rights, then you can't. But generally speaking the maintainer understands the code they merge into a project, and that is the vetting that takes place.


Nobody's saying you should never accept a patch from an unknown person. "Of course, if you just give someone commit rights, then you can't." is the whole point!


If you're using auto-updating external packages, aren't you effectively giving the package author commit access to your derivative project too?


Sure, but then they have to actively invest time into the project... not everyone can afford that.


Then mark the project as abandoned and add a note suggesting the new volunteer’s fork.


Not sure how useful this would be to anyone, but I built this tool last year to learn about Go and to highlight issues with dependencies. It tries to enumerate the dependencies of a project and highlight concerning facts for each one, such as the number of collaborators, the age of the repository, or whether it has not been updated for more than 6 months. The tool currently supports go-dep, npm, pip, and ruby-gem, but ONLY supports the GitHub API. It can be found here if anyone is interested, or if you just want an idea for writing something similar for the community:

https://github.com/GovAuCSU/DRAT

Disclaimer: I am not a dev, just a pentester by trade so the code is probably ugly as fuck for many of you =))


PS: if you fork the project, it will show big security warnings for dependencies; that is because I have a couple of test dependency files for pip, nodejs, and rubygem for testing the crawler job.


I don't believe that Dominic (the former maintainer, who gave control of the package to a complete stranger) is the sole party to blame here. The biggest responsibility lies with the so-called "administration" of NPM, who have systematically failed to promote security and robustness in their package system.

The practice of handing control of open-source packages to new maintainers is old and well-established. You can go to Sourceforge and request to take control of any old, low-impact, unmaintained repository, and the Sourceforge administration will likely grant it to you after a considerable delay and some investigation (at least they used to do so in the past; not sure if they will continue to after all this mess). The responsibility to ensure that new versions of packages haven't been subverted by malicious actors has always lain with the people who brought those packages into distribution — the package maintainers. Unfortunately, NPM does not have any "maintainers" in the traditional sense — package authors can't be trusted to remain impartial, and the "administrators" just sit back, waiting for devs to fill their repository with quality software. There is zero oversight — you can do anything with your packages as long as it does not outright contradict local law, including openly selling them to hackers, openly incorporating backdoors, and even sabotaging the entire package ecosystem by unilaterally deleting hundreds of popular packages.

Covert transfer of control should not even be possible in a centralized repository like NPM. If you want to give your package to someone, that act should be registered with the repository administration, and future users of the package should be warned of it — just like users of services and goods are commonly informed when an existing company changes its organizational structure. It is one thing to privately give someone access to a Github repo. It is an entirely different thing to hand over a repository package, which is automatically distributed onto a large number of computers around the world. In the latter case, authors who failed to announce the change of ownership to repository maintainers should bear full monetary responsibility for their actions.



Can I ask a simple question, not being a javascript guy? Should this be considered a security vulnerability in need of mitigation, i.e. should I try to identify all the places these packages may be used in our production code, and... do something?


This is one reason why you don't depend on trivial libraries you can roll in-house and maintain yourself. Shame on people perpetuating the "depending on is-array or left-pad is totally fine" ideology.


We're going to see a lot more of this. Dependencies are a security risk. If you're writing a secure system that will be storing personally-identifiable information as GDPR defines it, then you need to audit every dependency that you include in your project (and every dependency that each of those dependencies include). And then again every time they change version.

The golden days of including any old gem or npm package that seems to solve the problem you're looking at are over.


Looks like the malicious commits were made with a fake user account. No identifying information, barely any repos, generic profile picture.


Is there a way to run npm and show what would be installed via an 'npm install', short of actually installing it? That, combined with a diff tool against package-lock versions, would limit the review list. AFAIK, the way it is now you can't tell if one thing changed or one thousand until after it has completed.


Just add the --dry-run argument.

    npm i --dry-run event-stream


We need to incorporate the fact that people will move on with their lives and transfer ownership of projects.

This could be solved in a few ways.

Maybe instead of signing over ownership, the protocol should be to force a fork and make a note in the README that there's a new repo whose author has not been vetted and should be used with caution.


Is one solution a system where our ssh keys participate in some sort of trust graph like SSL certificates do?

(I expect that question is very naive from the point of view of people who know this area, but I'd be interested to be pointed in the right direction.)


It would be nice if we could pin libraries to a particular maintainer, as well as a version.



I know I am going to be in the minority, but I think it's time for the JS community (and others) to reconsider whether Apache & Eclipse-style orgs with more "rigid" governance requirements should be used more for popular packages.


I see a way to solve this with technology and that is for tools to allow secondary developers to publish on the package repository, but after a mandatory N-day delay. The main owner can then always have a chance to review.


And then attackers would just wait that N days before pushing exploits.


No, I meant every pushed change has to wait N days before being published to users.


Doesn't make sense if the "owner" abandoned the project and N goes to infinity. Then we have the same problem. If you're producing open source software, you have ALL the right in the world to abandon a project for arbitrarily large N days. If you don't want vulnerabilities, don't run code from untrusted sources, simple as that. This whole discussion is a farce. The reason open source comes with a license attached to it is for situations like this.


It makes a lot of sense if you realize many people who don't actively spend time maintaining something might still have a desire to spend 10 seconds glancing at others' changes to make sure their own rears don't get burned.


Sure, but this fails if they don't want to spend that 10 seconds, which they totally are allowed to. Since there are hundreds of packages in your dep tree and it takes only one attacker for all your bitcoins to be stolen, your scheme is not enough.


A lot of butthurt people railing on the guy. Maybe he just doesn't care anymore about a dead package he has no intention of maintaining.

Blame yourselves devs, not the author who donated hours and hours of free work for your benefit.


It’s about ethically handing off a package. No one is forcing him to maintain it. But when you make a package used by millions of people, you become responsible for their safety. To then hand over a package to a hacker or unknown party is just morally wrong.

Just mark it as deprecated and call it a day, like a normal and responsible person.


No -- downstream devs are responsible for the safety of their own apps, users, and org.


Well, he didn't really hand it over to a hacker, did he?


His responsibility as a maintainer is to signal when there's a change in power, since the package is trusted via his credentials.


> the package is trusted via his credentials.

Somebody really needs to explain to me how this works. The dude's bio on github is "antipodean wandering albatross". Nothing against it btw.


>But when you make a package used by millions of people, you become responsible for their safety.

Says you! I don't agree with that. Author clearly doesn't either. You're painting it as though the author knowingly handed it over to some hacker when in reality he had 0 cares about this package and just handed it over to the first guy who asked. I would have done the same probably.


This is incredibly irresponsible.

If you don't have the care to maintain it or do any sort of vetting, you shouldn't have the care to do anything at all. Literally, leaving this unmaintained would have been a better solution.


A dead package with 2 million downloads a week is not very dead, no matter what the maintainer thinks of it.


I see a lucrative business opportunity in alerting users when pwned/vulnerable NPM packages are used in their projects. Maybe something like this exists already?


Sounds like Snyk:

https://snyk.io/


Dependabot does it on GitHub; there are lots of issues in other repositories referencing this issue (visible at the bottom of the thread).


GitHub does it for Ruby Gems.


They do it for NPM deps too.


STOP RELYING ON 10000000000000 PACKAGES FROM DIFFERENT PEOPLE! Create zero-dependency packages, or at least a package with dependencies you control!


If this is possible (a repo being maintained by someone other than the original owner), then you have to assume it's happening.

So how can you be mad?


This is bound to happen from time to time. We should invent some tools to nuke the affected code from orbit.


Tl;dr: event-stream repo was injected with an attack that crawls your dependencies trying to find “copay-dash”. It then attacks it to steal all your bitcoin. The attacker was given maintenance rights to the repo by simply emailing the owner, who gave the rights freely. The owner and npm didn’t do a background check. Because of the MIT license, the owner has no liability/responsibility for his actions.


"Because of the MIT license" is a little misleading. Any FLOSS license out there would disclaim the same sort of warranties.


Instead of blaming the original authors of the package, start contributing to OSS.


    $ npm ls event-stream flatmap-stream
    /
    └── (empty)


Boy am I gonna start versioning real soon.


The more I read about major security vulnerabilities that should not have passed inspection, the more I believe someone put them in intentionally or was told to.


it's like watching people learn about the warranty clause of software licenses in real time!


Hi all,

Here's a quick summary of the situation.

WHAT HAPPENED?

==============

A widely-used dependency was handed over to a different maintainer, who proceeded to add a malicious sub-dependency to it. The sub-dependency only contained the malicious code in a single release, and only in the minified version, likely to avoid detection.

WHAT DID THE MALICIOUS CODE DO?

===============================

The malicious code used the 'package description' to decrypt its payload; this was done to ensure that the malicious code would only run when the dependency was used in a specific application.

That specific application was the Copay Bitcoin wallet, from BitPay. The malicious code injects itself into the application, steals the user's Bitcoin wallet, and sends it off to a remote server. It's currently unknown who operates this server.

There may be other forks or projects with the same package description, "A secure Bitcoin wallet", that are also affected.

HOW DO I KNOW WHETHER I WAS AFFECTED?

=====================================

As a developer: Look in your `package-lock.json` or `yarn.lock` for an entry of the `flatmap-stream` dependency, specifically version 0.1.1. Unless you were developing on Copay, the code probably wouldn't have run, but you should still remove the dependency.

As a Copay user: No release of Copay included this malicious dependency. It's unclear whether copay-dash, a fork of Copay, did. Contact your wallet developer for more information.

HOW CAN THIS BE PREVENTED IN THE FUTURE?

========================================

This is probably the most crucial question here. There's already a "lol JS" bandwagon gaining steam in many places, but that's not really the issue here; nor are small modules the issue.

Dominic Tarr is an established contributor to the ecosystem and part of the community, not some random unknown party, so this issue would have existed regardless of the size of the dependency.

I would argue that the real problem here is of a social and partly-economic nature: There's no clear way for maintainers to find a good new maintainer for their projects.

This is not a problem that is unique to JS, either; such support structures are missing in most ecosystems.

As a developer community, we should probably have a serious discussion about the security risks introduced by lack of funding, and by the lack of a support structure for no-longer-maintained packages.

The latter can be solved on a grassroots level, without any payment being involved. The former is going to involve funding models of some sort, and I hope that this discussion can be had in a neutral manner, not as a marketing pitch for a specific startup.

Another important problem is that there's essentially no code review tooling. While in this specific case even a review would have been unlikely to catch the issue, it's pretty much impossible right now to review all of your dependencies in a project (in any language) without going crazy.

Possible solutions to that would include a review tracking system, that integrates with your package management and flags any new dependency in the tree as 'needs review' before accepting it.


nodejs and the javascript community continues to be a shit show run by amateurs.


There are a few issues with NPM that make this kind of thing especially easy/lucrative:

- An ecosystem of massive amounts of transitive dependencies increases the number of people you need to trust. If I wanted to attack a project that used NPM, their package.json dependencies would be a really good place to start. Find the least popular transitive dep they use and email the owner to see if you can be a contributor (repeat for all of their xdeps). If they don't immediately give you publishing rights like OP did, then show some chutzpah and make valid commits until they do. While this attack works on any programming language's dep system, it's easier the more transitive deps a project has. People ITT blaming the OP don't understand this attack always works on a long enough timescale. Do you think there isn't someone out there who would make high quality contributions to an xdep of primedice.com (online gambling site) for 5 years to finally get publish access?

- Anything may run during `npm install`. npm install supports an --ignore-scripts argument to not run any scripts during install. This should be the default (it's a one-line config setting; see the snippet at the end of this comment).

- Unqualified module names make it more desirable to "take over" a package than to just publish your own ("npm install <username>/event-stream"), so it contributes to an ecosystem of ownership transfer that's far less likely to exist on, say, https://package.elm-lang.org/ where everything is qualified by a Github username.

- NPM website doesn't show you source code. The github link on the project page is just a convention. I think the NPM website should have a light source code browser of whatever is in the tarball that you download and execute during `npm install`. Bonus points for reproducible builds from that source.

- Developers don't actually review every bit of code they use and execute, especially not transitive deps. And we certainly aren't going to bother to download the tarball from NPM and unpack it to inspect the code of every dep. Most people reading this don't even know how to do that.

I've thought of some ideas to help the situation, like creating a Github shield that verifies that a conventional build script like `npm run publish-build` reproduces the tarballed code on NPM, but then I would just be doing free work for the NPM organization, and it's still just a hack.
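For what it's worth, on the install-scripts point above: you can already flip that default per project or per user with one line of standard npm config (shown here for completeness):

    ignore-scripts=true

Put that in a project's .npmrc (or ~/.npmrc) and lifecycle scripts stop running on install; the trade-off is that packages with legitimate postinstall or native build steps then need an explicit `npm rebuild`.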


> People ITT blaming the OP don't understand this attack always works on a long enough timescale.

I'm a little dismayed by the great number of comments I had to read before someone pointed this out...


I don't see how the same thing couldn't happen with an elm package? The attacker can commit to the github repo.


Be specific: what exactly would you do in Elm to pwn someone? It would be a much more limited and a much more visible attack.

NPM modules don't even have source code on display. Someone has to download and check the tarball before npm install.

Also, Elm packages are qualified by a github username so there isn't an ecosystem of ownership transfer. No juicy name squatting. People just fork.

Finally, don't forget that my point is "there are a few issues with NPM that make this kind of thing especially easy/lucrative". That's a far cry from "everything else is bullet-proof" but it's tempting to argue with me as if I'm saying that.


+1 for source code browser on NPM


"Hey! This random piece of software I found under the table at the pub does not have my best interests at heart?! My feelings are hurt!"


Everyone is fired.

If you insist on pushing out hundreds of packages on npm, then drop support for them and hand over rights to random strangers, you are acting completely irresponsibly. You shouldn't be given a pass because you're doing it all voluntarily. Your professional reputation should suffer for it.


Easy for people who have zero packages and no pressure to say. Don't contribute to OSS and you have no problems.

If these huge companies profiting off OSS work actually contributed financially and with time, maybe maintainers would happily remain maintaining.


> Easy for people who have zero packages and no pressure to say.

If you have this "no warranty, no responsibility" attitude then how can you have pressure to maintain anything? How does that translate into pressure to hand off the project to strangers? Nobody asked for that.

> Don't contribute to OSS and you have no problems.

Yes, please don't contribute if you have this kind of attitude. It's harmful. If you can't maintain the package, deprecate it. Don't hand it over to an unvetted entity. If we can just agree on that M.O. we'll have a much better situation.

> If these huge companies profiting off OSS work actually contributed financially and with time, maybe maintainers would happily remain maintaining.

Ifs and buts and sugar and nuts. They don't contribute, they never have and they probably never will, you are responsible for your packages even if you are doing it voluntarily.


Dozens of emails / notifications, people pinging / bothering you on multiple platforms, multiple emails, etc.

It's stupid to blindly trust someone's code, regardless of who they are or whether they are the current maintainer; that is not how you build secure software.

If you want to start some organization which vets people for maintainers go for it, but don't expect maintainers to do it, I can guarantee you that thousands of maintainers do not. You're responsible for what ends up on your servers.


> Dozens of emails / notifications, people pinging / bothering you on multiple platforms, multiple emails, etc.

Two options:

1. Stop supporting the package, mark it as deprecated, ignore/delete the spam

2. Hand over project to a stranger, endangering all of your users

One of these options is irresponsible, the other one isn't. I'm not asking anyone to do something for me, I'm asking them to not do something for the sake of sanity.

> It's stupid to blindly trust someone's code, regardless of who they are of if they are the current maintainer, that is not how you build secure software.

I agree, but both of us know that pretty much the entire Javascript ecosystem is exactly that stupid. Let me ask you: What have you personally done to vet your dependencies? Have you used Babel or any of the other popular Javascript packages that have a huge dependency tree? If you have, chances are you are effectively blindly trusting all the developers in that tree. You're not checking every single commit that goes into it.

> If you want to start some organization which vets people for maintainers go for it, but don't expect maintainers to do it, I can guarantee you that thousands of maintainers do not.

I don't expect that they do; I expect that if they fuck up, their reputation takes a hit. That would be some incentive for not fucking up. Instead, there isn't even any consensus that this guy fucked up. He's taking no responsibility whatsoever.


Is anyone going to bother crawling through all dependents of this library, extracting the package.json descriptions to find the proper key to decrypt the string and find out which package was being targeted?


The aes256 key is 'A Secure Bitcoin Wallet'.


Well, it would not happen if people used Elm :)


This doesn't make sense because Elm is client-side only, and all code that ships is client-side executable.

Node, a general purpose language, has a scope that's much, much larger.


Well this undermines everything dominictarr has done for secure scuttlebutt and other projects, including datproject and its connection to Knight Foundation.

"Oops, I just gave the repo to an unkown dude", OK. Sure. Just replace "unkown dude" with "to my colleagues at 5 eyes secret service organization"

This kind of mistake is not a mistake, not from a dude like dominic.


Please substantiate your own open source projects at any and all sizes so that we may, too, cast aspersions on your relationships with governmental agencies. I'm sure that will improve things mightily.

This is a community problem based on insufficient incentives and the way that the software development community is content to allow individual labor to replace community efforts and you, as a member of that community, are kinda pissing in the pool right now. If you can look past the initial dismay, this is a time for the software development community to look in the mirror, not to pick up your torches. It's a project that he hasn't touched in years. Why should he care? What are you doing to make him want to?


antocv, like many of us values security over feelings. If you want to run maybe insecure code to help be supportive of someone, go for it. His point is valid though and you can ignore it at your own risk.


There's a significant difference between caring about security and one's reputation and jumping to conclusions as antocv has here. Dominic clearly handled this poorly, but there isn't a need to exaggerate what he "could" have done as a means of impugning his reputation further than what's been done here and now.


Exploring possible threat vectors is not "jumping to conclusions". Again, security is important to many people.


I understand the security risks involved. Security is part of my job and I have on more than one occasion been the guy unwinding the entire ecosystem to see what depends on the latest insecure library of the week.

And somehow, despite this experience, I am also blessed with the minimal capability to understand the swamp-ass social toxicity that's happening here. I am likewise blessed with the modicum of good sense to realize that this is not a problem that goes away by stuffing one person in the wicker man and setting it ablaze. Systemic problems have systemic solutions, and a real part of actual security is applying systemic solutions instead of looking for scapegoats that let you think you're a tough guy on the Internet.

Because it is about feelings. It's always about feelings when somebody fumfuhs about how serious and important it is that they, or a like-minded compatriot, get to be an asshole to an individual over a systemic problem. It's just about their feelings, given a place of pride because they're theirs. So maybe we shouldn't play that game at all, and maybe we should look at the systemic issues that actually matter.

Have a nice day.


Seriously, I don't get how the others in this thread don't get this. Nobody's talking about the legalese of the license - he's undermined his _own_ credibility, which is all that really matters in open source. If he's fine with people not trusting his packages in the future, that's fine, but THAT's the trade-off, regardless of how you license the code.


He's been too naive and credulous in judging a person based on limited information. His coding skills aren't in question — the software was fine while he was still in charge of it.

I suspect Dominic didn't see himself as the maintainer of the package, but as a maintainer of a copy of the package (like how git is designed to work). With this perspective, of course someone else can be a maintainer.

The problem is that NPM changed their upstream from Dominic to right9ctrl without checking. (Maybe because Dominic clicked a button — NPM is still responsible for accepting that decision or not.)


Not to blame NPM (they're great people), but I often get requests from random people, routed through NPM support, to hand over an existing name (saying they will bump the major version).

The centralized naming scheme is to blame here: it normalizes these "cute" names, and users have demanded them since day one. Names could/should probably have been scoped from the beginning, or people should just stop trying to capture fancy names.

Personally, I get tired of receiving emails about those kinds of things. You try to just ignore them; sometimes the request sounds reasonable, but still, can those people be trusted? Who knows.


> The centralized naming scheme is to blame here

It would have been better just to use URLs, so if you see "npm install https://github.com/dominictarr/event-stream" then it's clear you're trusting "github.com".

(Yes, DNS is centralised. That can be fixed independently.)
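npm can already express something close to that: a dependency can point straight at a git host and be pinned to an exact commit, which takes the registry's name mapping out of the loop. A sketch (the commit hash is a placeholder), in package.json:

    {
      "dependencies": {
        "event-stream": "github:dominictarr/event-stream#<commit-sha>"
      }
    }

You're still trusting github.com and the repo owner, of course, but the thing you're trusting is at least named explicitly, and a pinned commit can't silently change underneath you.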


> NPM support to use an existing name (saying they will bump major).

Sorry, could you please explain what this means? I keep seeing comments about bumping version numbers (e.g. right9ctrl did some shenanigans with versioning).


Ah, sometimes you get people asking to use an existing name, such as my "log" pkg on npm, but bumping from 2.0 to 3.0 so that existing dependents aren't broken. Assuming the person has no bad intentions, I think it's a perfectly reasonable request.

I ignored this particular request via email a few times, because I just really don't want to spend time dealing with Node stuff. You then get a request from NPM support asking the same thing, so I agreed to it. That's really all it takes to get access to a package.

I think from now on I'm just going to tell people to come up with a different name; this wouldn't be a problem if names were scoped by default.
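For what it's worth, the reason "I'll bump major" is supposed to make a handover safer is just semver range semantics: the caret ranges npm writes by default never cross a major version, so a new owner's 3.x is strictly opt-in for existing users. A quick check with the semver package illustrates it:

    // Caret ranges (npm's default) never cross a major version boundary,
    // so dependents declaring "^2.x" won't silently pick up a new owner's 3.0.0.
    const semver = require('semver');

    console.log(semver.satisfies('2.4.1', '^2.1.0')); // true  - minor/patch updates still flow
    console.log(semver.satisfies('3.0.0', '^2.1.0')); // false - a new major is opt-in

Which is also why a compromised patch release on the existing line, rather than a new major, is the dangerous case.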


> he's undermined his _own_ credibility, which is all that really matters in open source.

Even if I take your sentence dead seriously - that the code injection, the crypto-wallet stealing, and the disruption to downstream users don't matter, only Dominic's credibility matters - where does that get me?

Assume that Dominic was bribed to inject the code himself, with a share of the profits, and he did, maliciously and criminally, and he gets arrested, and people were still disrupted and money was stolen and insecurities were introduced and flamewars were had... and his reputation is now justly and fairly tarnished for all time...

What good does that do anyone? How does that prevent it from happening in the future, or change how we respond to it?


You forget how seriously lazy developers are. Remember left-pad?


How many people here seriously thought “Gee, I was trusting Dominic, but now he's broken my trust”?

So who were you trusting?

What's the risk for Scuttlebutt, Dat and the Knight Foundation? Dominic will be the only maintainer of a project and he'll foolishly hand over maintainership to a government spook?



