Possibly, if you have very few dependencies and those also have very few dependencies, it will work out OK. But in a lot of circumstances this is a recipe for shooting your foot off. For example, if you are building a server application in Node, the total number of included packages can reach into the thousands. Then, if you deploy to your server and for some reason it has to update via NPM, suddenly any and all of those thousands of packages can break, or interact poorly, or whatever. Since you have specified that anything between 1.0.0 and 2.0.0 is perfectly fine, you have absolutely no idea what secret sauce is necessary to get a working application again. So your website is down for hours while you struggle to figure out a working configuration.

/me displays closet full of T-shirts.




If you deploy unknown and untested packages to production, semver is not your problem. The utter lack of testing is. When you actually push to production, your packages should be locked to what you tested.

This doesn't prevent the use of version ranges during development or testing.
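A minimal sketch of what "locked to what you tested" looks like in the Node world, with a hypothetical some-library: alongside the loose ranges in package.json, the install step writes a lock file (package-lock.json on npm 5+, npm-shrinkwrap.json on older versions) pinning the exact tree. A simplified fragment:

    {
      "dependencies": {
        "some-library": {
          "version": "1.4.2",
          "resolved": "https://registry.npmjs.org/some-library/-/some-library-1.4.2.tgz"
        }
      }
    }

Production installs from that file, so it can only ever get the versions the tests actually ran against.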

Edit: To be clear, I do not work with Node. If this is standard practice for Node shops, that's terrible.


This often results in security patches not being pushed. IME most companies are awful when it comes to maintaining dependencies; things are all too often locked until someone adds a package.


Which is why I think development should be done without locked packages, locking only after successful testing, right before the production release.

Not taking security patches is bad. Taking down production is also bad.


> Taking down production is also bad

I won't say it's ever good, but for most software some downtime isn't the end of the world, and it's certainly better than leaving security exploits open.


I understand that view, but I think the reality is that for many customers downtime is worse than a potential security breach. Customers would rather have downtime than a definite breach, but the chance of a breach is not always as bad as definite downtime.

I'm not saying that view is necessarily good, but I think it's common. I also think it's quite common that people assume this is the client's view and act accordingly.

I definitely think skipping testing with the goal of getting security patches out sooner is a terrible plan, except maybe in the case of an active exploit. You can get both frequent patches and high availability.


There is also a lot of Node software running in environments where downtime is unacceptable. And uncontrolled dependency churn is a great way to break your CI and demoralize your team. I know I've spent way more time than I care to admit debugging failures in test and production because, for some reason, there was no subdependency freeze. It's the kind of problem that leaves you deeply frustrated.

This discussion is based on a false dichotomy. Yes, shops can be bad at updating dependencies, but that's a matter of culture. Technically, I shouldn't have to constantly scan my dependencies for updates, much less security updates; that's what CVEs and release notes are for. Refusing to freeze the dependency tree in CI and production for the sake of security updates is a bad pattern.


If you define your config to be:

    some-library>=1.0.0,<2.0.0
and your application breaks, then some-library's developer isn't following SemVer anyway, and you are in dependency hell. If some-library's developer doesn't care about compatibility, then it doesn't matter what versioning scheme you use, as you will have to whip up that secret sauce anyways.
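(In npm's package.json syntax, that same range is normally written with the caret shorthand; this is an exact equivalent, not new behavior:)

    {
      "dependencies": {
        "some-library": "^1.0.0"
      }
    }

For versions at or above 1.0.0, ^1.0.0 means precisely >=1.0.0 <2.0.0, i.e. any release that SemVer promises is backwards-compatible.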

However, your issue isn't a problem with SemVer, but a problem with NPM/Node. SemVer is working as intended, but getting the "secret sauce" to make everything work can be confusing as you try to figure out the differences between "devDependencies" and "peerDependencies". If every module just managed its own dependencies, it could be easier (on your development experience, but not on your hard disk and build times).


> If some-library's developer doesn't care about compatibility, then it doesn't matter what versioning scheme you use as you will have to whip up that secret sauce anyways.

It does matter. SemVer implies semantics; arbitrary versioning does not.


Your problem has nothing to do with semver. The solution to your problem is to use your package manager's dependency freeze utility (npm shrinkwrap in your case), and also to select your dependencies carefully, so you don't end up with thousands of subdependencies (I know some npm packages suffer from dependency incontinence - this is a good reason not to use them).


IME this problem is particular to NPM, due to JavaScript's historically anemic standard library, which made a culture of microlibraries inevitable. The lesson is that programming languages should provide rich operations on their standard types so that users can avoid having to transitively depend on ten different versions of left-pad.
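As it happens, the canonical example was eventually fixed at the language level. A small sketch (padStart landed in ES2017):

    // String.prototype.padStart (ES2017) makes the left-pad package redundant
    "5".padStart(3, "0");   // "005"
    "foo".padStart(5);      // "  foo" (pads with spaces by default)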


The problem might be particularly bad in NPM, but it is not unique to it.


Dependency hell isn't unique to NPM, but it's also not unique to semver. The primary driving feature of semver is that it provides the framework for a social contract between library author and library user about how automatic upgrades can be performed. This works great until the number of transitive dependencies is "in the thousands", which I've never seen on any project save for JavaScript projects.


> you have absolutely no idea what secret sauce is necessary to get a working application again

That's what .lock files are for.


This. You define your loose dependencies, then you compile them down to a lock file that you use to test your service, and you deploy with the completely locked-down set of dependencies. When you want to upgrade your deps, you re-run the compilation and get updated libraries, which you test again. If you have floating deps in production, you're doing it wrong.
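A sketch of that cycle with npm (assuming npm 5.7+, where npm ci installs strictly from the lock file and fails if it is out of sync):

    # resolve the loose ranges and write exact pins to package-lock.json
    npm install
    # test against the pinned tree
    npm test
    # commit the lock file so every environment resolves identically
    git add package-lock.json && git commit -m "update dependencies"
    # in CI and production: install exactly what the lock file says
    npm ci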


"if you are building a server application in Node and the total number of included packages can reach into the thousands"

You know, this should be a hint that the problem is perhaps not with the versioning system.


That speaks more to the insanity of the Node ecosystem than anything else.


Sounds like an excellent argument for not using Node.


> Then, if you deploy to your server and for some reason it has to update via NPM, suddenly any and all of those thousands of packages can break, or interact poorly, or whatever. Since you have specified that anything between 1.0.0 and 2.0.0 is perfectly fine, you have absolutely no idea what secret sauce is necessary to get a working application again.

That's not how you're supposed to use npm. You should never perform an npm update directly in production exactly because of what you describe.

The proper way to manage dependency updates in npm is to perform the updates in your build process when building your pre-production/staging build, and then use the shrinkwrap[1] command of npm to generate a file with all dependencies pinned for production (sketched below). This way, for a given production deployment, you know exactly which version of each dependency was used, and you can roll back easily if an update breaks something.

[1] https://docs.npmjs.com/cli/shrinkwrap
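A minimal sketch of that flow, using the shrinkwrap command linked above:

    # on the staging build: pull in the latest in-range versions and test them
    npm update && npm test
    # pin the exact tested tree for production
    npm shrinkwrap     # writes npm-shrinkwrap.json
    # on the production host: npm install now honors npm-shrinkwrap.json
    npm install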


That's a problem with Node, not with semver.

Node.js is the first time I've encountered a deployment problem whose resolution was "you have to upgrade your package manager to a beta version".



