
In the past few weeks:

* NPM 6 stopped working with Node 4. Rather than actually fix it, they just left it broken for several days because it was already fixed in the upcoming release, and I guess they didn't want to do an emergency release: https://github.com/npm/npm/issues/20716

* There was a several hour period where you could publish packages, but then they would 404 when you tried to download the new version.

* They switched to using Cloudflare on Friday, and broke Yarn in the process: https://mobile.twitter.com/jamiebuilds/status/10001984632696...

* Somehow while switching to Cloudflare they blew away a bunch of the packages that had been published during the previously mentioned window. They also blew away all the versions of some packages. Last Friday night you couldn't 'npm install gulp'. Never gave any explanation for this: https://github.com/npm/npm/issues/20766

About a month ago there was a bug where npm would change permissions on a bunch of system files and render your entire system unusable if you ran it as root. But that was OK because the bug was in the "next" version of npm, not the one you'd get by running 'npm install -g npm'... Except there was also a bug in the current version of npm that installed the next version of any package by default, so it did in fact blow up a bunch of machines.

Now, apparently, they are a teapot.




And the real problem with all of this is that npm has become so ubiquitous that you can't get away from it if you're doing client side development (not even by switching to yarn).

You simply have to endure it, and accept that every now and again, for reasons entirely beyond your control (and at a quite possibly very inconvenient moment) it's going to break.

This really annoys me, but what can one do? At least it's fixed now.


> This really annoys me, but what can one do? At least it's fixed now.

Install your own repository manager? This is the standard in every company I've worked for so far, at least in the Java world. Artifactory supports NPM, so set it up as a proxy.


We use ProGet with npm, docker and nuget feeds. We only ever hit the internet when we install fresh dependencies. After that, they get cached on our local service for all developers and machines to consume. That, coupled with yarn itself, which caches dependencies locally as well so subsequent installs don't even go out to the network, has accelerated our build times considerably.


That's interesting - thanks. Of course, it has a price tag, but so does the rest of our pipeline, along with our time.


As far as the monetary price of implementing such a system, Nexus OSS[1] also supports NPM proxying and is free for basic usage.

[1] - https://www.sonatype.com/nexus-repository-oss


We have Nexus in a subnet for faster installing. We had to write scripts porting over lockfiles from npm to nexus and back.

This had to be added to a precommit hook to not break CI. Seriously, package.json should let you specify which endpoints to use, in order of preference, when they're available. As it stands, it's up to each dev team to handle.

My main concern is that it's brittle. Nexus caches exact versions and nothing more so we don't even have assurance that it will work nicely when NPM goes down.

On the other hand lockfiles are awesome. I missed them back in 2012... copying over node_modules on USB drives was not cool.


You can specify what registry to use with a simple project-based .npmrc file. We have ours point to our Nexus npm proxy.
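For anyone unfamiliar, a project-level .npmrc is just a one-liner (the hostname and repository path below are placeholders; adjust to your own Nexus install):

```ini
# .npmrc in the project root - npm resolves packages from this registry instead of npmjs.org
registry=https://nexus.example.com/repository/npm-proxy/
```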


That's what we eventually do, but the lockfiles don't care: a resource URL is a resource URL. We use yarn too, which IIRC only supports proxies via .yarnrc.

Do you have an externally available Nexus? Using ours through a VPN defeats the main purpose - fast(er) installs. That's why for WFH scenarios we have a script to switch between our proxy and NPM.
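For reference, yarn v1 reads its registry override from .yarnrc rather than .npmrc (hostname is a placeholder):

```ini
# .yarnrc in the project root
registry "https://nexus.example.com/repository/npm-proxy/"
```

A proxy/NPM switcher script can then just rewrite that one line.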


Our Nexus setup is internal only. For WFH, we have hundreds of folks using a corporate VPN which routes to our office, and then our office routes to our AWS VPC, which is where our Nexus installation lives. I set this configuration up and haven't had any real issues with it, nor do I see any reason to switch between a proxy and npm.

If a developer is using an older buggy version of npm that doesn't respect .npmrc and changes a lock file to point back to npmjs.org entries, we deny the PR and ask for it to be fixed. Right now that check is unfortunately manual, but there are plans to automate it. It can be easy to miss at times though, since GitHub often collapses lock files on PRs due to their size.

For us, the main purpose of using Nexus as a proxy is to maintain availability and to cache/maintain package versions. If you're using Nexus to make things faster, then you probably shouldn't be using it. If you want faster installs, look into using `npm ci`.


Nexus OSS can't be clustered / put in a highly-available install, which is a paid feature for Nexus.

To ensure that you're actually deriving benefit from your Nexus install, you have to block outbound connections to the NPM public registry from your CI build agents, with only the Nexus installation permitted to make such outbound connections. (If you don't firewall it off, you don't want to wake up one day and find that both origin and the proxy are erroring because your proxy never actually cached anything and you never tested it... right?) And as bad as NPM may be, there are real maintenance costs to running your own Nexus install (not least of which is managing updates that will take Nexus down, and communicating them to your dev team so that CI builds which error out while Nexus is down can be restarted when it comes back up), and thinking that you can do better than NPM is hubris. Running a private Nexus OSS install to increase availability at low cost (not zero - you still have to pay the infrastructure costs) is usually a false economy.

If you work for a company with enough operations and infrastructure resources that adding a clustered install is trivial, then you probably have enough resources to pay for an Artifactory license.

TL;DR - NPM has its faults, but it's still probably both more available and better updated than a proxy you run yourself, unless you have mature ops/infra teams.


This one is free, we use it and it works for our needs. We also have it integrated with our Active Directory: https://www.verdaccio.org/


We use Verdaccio and I can't imagine any serious dev shop not using some kind of proxy / private registry for NPM packages. It's really simple to set up and has served us well, aside from minor hiccups.


Sinopia was used at a previous job https://github.com/rlidwka/sinopia


Just to save other users some time, it seems like sinopia is no longer maintained, and doesn't work on Node 8

https://github.com/rlidwka/sinopia/issues/456

Verdaccio, mentioned in a sibling comment, seems to be the recommended replacement ...

https://github.com/verdaccio/verdaccio


The OSS edition of Nexus supports npm proxies. Sure there’s a little bit of setup, but it will more than pay for itself the first time an event like this one occurs.


Agreed. It's very easy to set up a private npm registry using Nexus OSS.


As others suggested, any serious deployment should be hosting their own registry / mirror or using a paid service. This also saves you in cases such as the left-pad issue, as your mirror would still have the package. It is unwise to rely on free third-party services which are out of your control, especially for something as important as deployment!

This problem is not limited to npm. I remember a few years back there were similar issues with RubyGems, where it'd go down leaving many developers unable to deploy.

Heck, how many projects do you think would be left unable to deploy if GitHub went down? I remember a few years back they'd have their occasional issues and Twitter would become a storm of angry developers.

For many projects not being able to deploy or develop at any moment, as well as dealing with left-pad style issues is implicitly accepted as a reasonable trade-off.


>This really annoys me, but what can one do? At least it's fixed now.

The weird thing is, open source development is as close to a free market as one can get. Frustration with NPM should result in multiple javascript package managers competing to undermine NPM's dominance, but the only alternative is one that uses the Node registry.


Vendor your dependencies, or host clones on infrastructure that you're responsible for.


Vendoring isn't ideal: it makes, for example, code-reviews a PITA, and you still have the problem of what to do when you upgrade or add a dependency. Then you're back to npm (or yarn). Granted, at much reduced frequency, but it's still there.

Maybe we'll go down the clone route if the cost of npm issues becomes too high relative to the hassle and expense of maintaining our own.


>Vendoring isn't ideal: it makes, for example, code-reviews a PITA, and you still have the problem of what to do when you upgrade or add a dependency.

The status quo in which anything that goes wrong breaks the entire universe because no one vendors is also not ideal.

However, consider the long list of disasters that have occurred with NPM recently, and how many of them would have been less disastrous had vendoring been the exception rather than the rule. Left-pad wouldn't even have been an issue, for example - builds simply would have been unable to update, but nothing live would have been affected.

To misquote Ben Franklin here, the tradeoff isn't between a greater ideal and a lesser ideal, but between security and convenience.


That's why I really like Yarn's "offline mirror" feature. It lets you cache the tarballs themselves locally. There's also a tool called `shrinkpack` that lets you do the same when working with NPM itself, but I'm not sure if it works right with NPM5/6.

I wrote a post a while back about using Yarn's offline mirror: http://blog.isquaredsoftware.com/2017/07/practical-redux-par...
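For reference, enabling the offline mirror in yarn v1 is a two-line .yarnrc change (the mirror path is just an example; use whatever directory you want to commit or share):

```ini
# .yarnrc
yarn-offline-mirror "./npm-packages-offline-cache"
yarn-offline-mirror-pruning true
```

After that, `yarn install` copies every downloaded tarball into that directory, and `yarn install --offline` installs from it without touching the network.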


One way to keep vendoring from causing nightmare code reviews is to keep updates to vendored components in their own release/branch/PR. Yes, there will be changes to your code to accommodate the updated packages, but it keeps it very focused so that the code review isn’t also trying to evaluate updates to business rules or functionality.

If you are updating components along with business rule updates, then, yes, it’s going to complicate code review, regardless of vendoring.


Vendoring dependencies typically entails committing at least 200-500 MB of packages into a git repo. No thanks. Availability should be easy to control with running your own Nexus or other internal registry. The rest (package versions, etc) can often be solved with an npm 5+ lock file.
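To illustrate why the lock file covers the versioning side: each entry pins the exact version, the resolved tarball URL, and an integrity hash. A hypothetical package-lock.json excerpt (integrity value elided):

```json
{
  "dependencies": {
    "left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "integrity": "sha512-..."
    }
  }
}
```

Point the `resolved` URLs at an internal registry and you get reproducible installs without committing node_modules.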


> you can't get away from it if you're doing client side development (not even by switching to yarn).

Explain yourself better or stop spreading this nonsense. What do they give you that you couldn't develop without them?


This is true. The case I encountered: you cannot install `aws-sam-local` with yarn; you need npm to do the install.


I don't think it's fair to blame them for breaking yarn. The way yarn set up their "mirror" URL and the way Cloudflare handles account security conflicted and broke yarn. I really don't see how any team could have seen that coming, or why they should be held responsible for people making creative use of their infrastructure.

Everything else is legit though.


Well, they could have thought, "Will this large infrastructure change affect any of the tools which depend on our infrastructure?", set up a small test system using Cloudflare, and then tested a few tools (like Yarn) against the new system.



