The vgo proposal is accepted. Now what? (swtch.com)
158 points by stablemap on May 29, 2018 | 136 comments



I wish they had just copied Rust/Cargo. I remember reading a comment on GitHub somewhere from one of the Go maintainers who responded to someone expressing a similar sentiment, and his reply was basically that Go is somehow different from every other language and that they need to explore and find a unique custom solution for their particular use case. Has it ever been addressed anywhere why the tried-and-true "list of packages" + "lock file" paradigm is not good enough for Go?


There are a lot of suboptimal design decisions that are shared across the state of the art in package management (Cargo, Pipenv, npm, Yarn, etc.) that the Go team is not taking for granted.

One is reproducible builds. The standard answer is lock files, which are extra bloat and lead to merge conflicts. vgo is trying to avoid them with the "minimal versions" strategy (https://research.swtch.com/vgo-repro).
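Concretely, under vgo the go.mod requirement list alone pins the build; roughly like this (module paths and versions made up, and the early vgo prototype quotes module paths):

    module example.com/m

    require github.com/pkg/errors v0.8.0
    require golang.org/x/text v0.1.0

Given only these lines, MVS always resolves to the same versions no matter what has been released since, so there is no separate lock file to keep in sync.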

Another is shared (or recursive) dependencies. The semver answer to this is that if the versions match then they should be shared; if not, they should be duplicated. But what do you do if it's a singleton, listens on a port, or exposes a port? Right now, with npm for example, good luck with that; you're in for a world of pain. On the other hand, with vgo's "semantic import versioning" they're trying to make version interop more explicit.
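Concretely, semantic import versioning puts the major version into the import path, so two incompatible majors are just different packages that can coexist in one build (paths made up):

    import (
        foo "example.com/foo"      // v1.x
        foov2 "example.com/foo/v2" // v2.x: different import path, different package
    )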

Just because mainstream package managers work 90% of the time doesn't mean it's a solved problem. Kudos to the Go team for trying to advance the state of the art.


The "minimal versions" strategy basically just replaces the lockfile with your dependency list.

Two branches want to update the same dependency? With a lockfile, they both update the lockfile and will have a merge conflict (assuming the updates aren't identical). With "minimal versions", they both update the file that declares the dependency, and, you guessed it, have a merge conflict.


Minimal versions put the burden of backward compatibility on the maintainer of the dependency. Library maintainers are constrained to refrain from breaking backward compatibility at minor version changes, like Go itself does. In that case, you can merge, resolve the conflict by keeping the higher version number, rebuild, retest, and if all goes well, commit. With lockfiles, if branch X requires 1.4, branch Y requires 1.5, and 1.5 breaks 1.4, good luck with that. You'll need to update branch X to support 1.5.
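To make that concrete, the conflicted go.mod hunk would look something like this (module path made up):

    <<<<<<< branch-x
    require example.com/lib v1.4.0
    =======
    require example.com/lib v1.5.0
    >>>>>>> branch-y

and under MVS you resolve it by keeping the higher requirement (v1.5.0), then rebuild and retest.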


SemVer already solves that problem.


Most package managers ignore the lock files of dependencies when picking versions; they only use the lockfile of the topmost project.

The big difference with the MVS strategy in vgo is that the dependency list of a dependency is actually used to determine the version. If package A uses B, which was tested with v1.2 of package C, you will get v1.2 of C, even if there is a later version of C available.

In the typical package manager scenario, the dependency list of B may be pointing to an old version of C which does not even work with B; there is generally no way to ask: give me the latest version of C that was tested with B.
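A tiny worked example (module names and versions made up): suppose A's go.mod says

    require B v1.1.0
    require C v1.1.0

and B v1.1.0's go.mod says

    require C v1.2.0

MVS selects B v1.1.0 and C v1.2.0, the maximum of the stated minimums. Even if C v1.3.0 has been released since, it is not chosen until some go.mod in the graph asks for it.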


I think package managers should have an option to run a build of your package using the oldest versions instead of newest versions, just to validate that the build works, but that this shouldn't be the normal way to resolve dependencies.


> The standard answer is lock files, which are extra bloat and lead to merge conflicts.

Bloat how? A lockfile is a handful of bytes. A thousand lockfiles could fit in the space of a single Go hello world binary. And I've never heard of a merge conflict from a lockfile. What are the actual arguments against lockfiles?


There are situations where lock files, as implemented in some languages, can generate conflicts. Honestly I consider this a benefit, because I'd like to know about different expectations as early and explicitly as possible.

The bloat argument seems frankly absurd to me.


It's pretty easy to get a merge conflict from a lock file. If two developers add a dependency and then try to merge their changes together, it can happen easily.


In that scenario, you're going to get a merge conflict anyway in your manifest (e.g. in go.mod).


It's probably easier to manually merge the manifest (as they're much more human-readable) than a lock file though.


Lock files are easy enough to resolve that npm, in newer versions, will do it for you automatically, if you ask it to.


That's correct behavior though - you can't merge "upgrade A" and "upgrade B" in isolation with literally any confidence that it works.


More things, more state = bloat. Don't you agree that if you can do something without holding state it's invariably simpler? That's what MVS does.

There are also cases where your lock file and manifest are not in agreement, e.g. https://github.com/rust-lang/cargo/issues/4100


> Don't you agree that if you can do something without holding state it's invariably simpler?

Only if the two solutions under consideration are really solving the same problems. Sam Boyer alludes to this very thing in his discussion of MVS:

"If there are two algorithms that satisfy the same requirements, and only one is NP-complete, you pick the other one. That’s axiomatic. Moreover, if you have only an NP-complete algorithm for a particular problem, finding a less complex alternative that does the same job is an electrifying discovery. When such an alternative algorithm is proposed, however, the inevitable question to be answered is whether it actually does meet the original requirements. [...] But, in avoiding SAT, MVS also cuts out some of the complexities that I believe are essential to the domain. Being essential, the problems don’t go away when MVS ignores them. Instead, they’re redistributed into other, often less obvious places. If reading the vgo blog posts gave you a general sense of unease that you couldn’t put your finger on, that might’ve been you intuitively sensing some of these redistributions." https://sdboyer.io/vgo/intro/


> One is reproducible builds. The standard answer is lock files, which are extra bloat and lead to merge conflicts.

You can’t have your cake and eat it too


Rust/Cargo had the luxury of being a greenfield project that could adopt semver from the beginning, whereas Go made the mistake of starting out, and then going years, without any official package management solution. As a result, Go has a swathe of applications and libraries that use specific workflows as well as a mélange of community-developed package management tools such as godep, Glide and dep. The semver standard, in particular, has been inconsistently adopted by the Go community.

In other words, any new Go tool either has to support/import existing code, or has to wipe the slate clean and say that for a package to be importable it has to follow a new spec. dep decided on the former, and my impression is that this has had unfortunate consequences, because it inherits a lot of historical baggage.

We've been using dep for a while (having escaped the bugfest that is Glide, which used a very similar approach), and it's pretty evident that the solver is buggy and slow and also complicated enough that fixing issues like [1] can only be done by a select few that already understand the codebase. I'm not in a position to judge what the causes of all of these issues are, though I'd wager they're not entirely unrelated to the inherent complexity of SAT solving. The current dep issue tracker is full [2] of reports mentioning the solver, not to mention that dep currently has problems with known libraries such as the Kubernetes client [3] and Protobuf. (Google-related projects have historically used godep.) Again, possibly related to this specific implementation and not necessarily something that would apply to a hypothetical "Cargo for Go", but I don't know.

Any idea how Cargo compares to dep overall?

[1] https://github.com/golang/dep/issues/1306 — this one is a nightmare if you work on anything related to Kubernetes.

[2] https://github.com/golang/dep/issues?q=is%3Aissue+is%3Aopen+...

[3] https://github.com/golang/dep/issues/1207


Rust was actually around for a good while before Cargo was adopted. (In fact, there were two Cargos, the older of which bore very little resemblance to the Cargo of today.) Of course, Go was stable for longer.

I have to admit I'm a bit confused as to why the dependency resolution algorithm in dep is seen as slow. The speed of the solver is not a problem in any other package management system I've seen. If it is indeed the solver that is the problem (which, again, I'm skeptical of—I'd have to see profiling data to believe it), then it could just come down to optimization differences between rustc and Go 6g/8g.


> The speed of the solver is not a problem in any other package management system I've seen

This! I've never heard anyone complain about this aspect of a package manager, EVER. vgo seems to be optimizing for a problem no one has.


> The speed of the solver is not a problem in any other package management system I've seen

Really? It has been a large problem in Debian, for instance, and has enabled a lot of research (https://scholar.google.fr/scholar?q=debian+solver). One of the reasons for Fedora's yum -> dnf change was also a change of solver. It's a hard problem that affects a lot of people.


To clarify, I (and the OP) was specifically talking about programming-language-specific package managers.


Puppet dependency managers have also been an issue; it's certainly easy to solve this poorly.


You can't compare language and OS package managers. They are on a completely different scale.


Scala/sbt has particularly slow dependency resolution.


Go has always optimised for build speed. I guess they considered dependency resolution as part of the build process.

Which it technically is, I suppose, but when you're coding and iterating the code-build-run loop you generally don't need to add new dependencies each time. And that's when the build speed matters, of course.


And most CIs cache build dependencies these days, which is an easy workaround.


I once waited for a long time for an older Haskell dependency solver to contemplate a situation before it gave up.


I've seen aptitude get really confused about what to install, computing a solution for a few minutes.

Once.

10 years ago or so; I don't even remember.


I should clarify that I'm referring to language package managers. Their problem domains are significantly different than those of system package managers.


I'm curious, what makes the problem domains different?

I'm asking because I'm interested in "universal" package managers like nix.


Here [1] is the "dep ensure -v" output for a project of mine. It takes almost 12 seconds even when there are no changes to the actual file. I don't know why, or whether it's actually the solver (though the output seems to indicate it).

[1] https://gist.github.com/atombender/7c28f1d371fcb139e1e742a08...


As you can see from the output, it is not the solver per se, but the weird idiosyncrasies of Go imports and GOPATH layout. 'satisfy' and 'select-atom' and such are the solver bits and take about 20ms all together. A SAT solver is 20ms, MVS might be 1ms, but who cares about that difference, right?

The top 3 items there are slow because they're:

1. 'source-exists' (~6s) which will do network traffic to find if a project exists to be downloaded or is in the cache; it's network io heavy in most cases.

2. list-packages (~3s) which parses the downloaded source code for import statements to find further dependencies; disk-io heavy + go loader has to do some work

3. gmal - GetManifestAndLock (~2s) which looks for lock files, including of other dependency solvers; disk io mostly I think

Any system designed with the constraints that it cannot use a centralized registry/list, must be compatible with things not using this system (and so must parse their code), etc., will have these problems regardless of the algorithm.

Those steps are all doing network/disk-io/go-parsing, and none of that is SAT solving.

I don't think vgo has these problems because vgo is built by the go team and can dictate far more, such as the use of a centralized repo, that all dependencies must use vgo, etc.


Thanks for the explanation!

The fact that dep parses import statements (as does Glide) is something I've never liked. It means that if you run "dep ensure -add" on something not yet imported, it will complain, and the next "ensure" will remove it. This is never in line with how I actually work. I need the dependencies before I can import them! There's no editor/IDE in existence that lets you autocomplete libraries that haven't been installed yet.

It also means that "dep ensure" parses my code to discover things not yet added to Gopkg.toml. That's upside down to me. I want it to parse its lockfile and nothing else; the lockfile is what should inform its decisions about what to install so that my code works, my code shouldn't be driving the lockfile! If I try to compile my code and it imports stuff that isn't in the lockfile, it should fail, and dep shouldn't try to "repair" itself with stuff that I didn't list as an explicit dependency.

I'm sure there are edge cases where the current behaviour can be considered rational, but I don't know what they are. As you point out, dep has to do a lot of work -- but why? Running "dep ensure" when the vendor directory is in perfect sync with the lockfile should take no time at all, and certainly shouldn't need to access the network. Yet it takes the same amount of time with or without a lockfile.


Small note, this isn’t something that you’ve said, but since we’re comparing the two in this sub thread overall, Cargo doesn’t require a central registry either. You can pull straight from version control, and the lock file will even keep track of what HEAD is at the time, maintaining reproducibility. Or from the file system. Etc.

Thanks for your comments here, there’s a lot of stuff I wasn’t aware of. Very illuminating.


That's quite weird. When I run `rm Cargo.lock && cargo generate-lockfile` on the Servo repo (test performed on the cheapest VPS that money can buy) it exits near-instantly (after first spending three seconds trying to git-fetch new versions of the dozen custom dependencies that live on Github rather than crates.io). For reference, here's what Servo's dependency graph looked like two years ago (July 2016): https://dirkjan.ochtman.nl/files/servo-graph.svg ; the number of transitive dependencies is quite large and yet the runtime of version selection is negligible.


It's not strange at all for go.

Because third party go packages may not have a dep file, and because go programmers expect vendor directories to be minimal and not include unused imports, dep parses all of the go code of the project, and all the project's transitive dependencies.

It has to parse every .go file to find all 'import' statements, and it also has to find remote versions by making multiple network requests per dependency (typically 1 http-get + 1 git pull operation).

This is obviously going to be much slower than cargo where it's assumed every dependency is also using cargo and all needed information is present in metadata files... and there's one single fast api to download data from and cache (crates.io).

If cargo had to do the equivalent of `cargo check`-style parsing to find all 'extern crate' and 'use' statements before it could spit out a valid lock, and it couldn't use only 1 request to update all crates.io data, it would probably be closer to the speed of dep.

I think the speed difference is thus largely a result of go's lack of a central repository and lack of a unified packaging solution.
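For reference, the per-dependency HTTP GET is the ?go-get=1 lookup that maps an import path onto a repository via a meta tag like this (domain made up):

    <meta name="go-import" content="example.com/pkg git https://example.com/pkg.git">

Only after that can the tool run the git operations that fetch the actual code.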


Thanks for the insight, that's very helpful. That would confirm what I suspected: it's not the core solving algorithm that's slow. Rather what's slow is building the graph in the first place.


Yes, I had overlooked that Go probably doesn't have anything like the crates.io index (https://github.com/rust-lang/crates.io-index) to allow instant discovery of versioning metadata. And, AFAICT, even MVS would have the same problem here and would take the same time to resolve, since it still needs to access the network to fetch remote repos to discover versioning metadata; rather than pointing the finger at SAT solvers, it looks like vgo should be tackling Go's lack of a central package host (the vgo manifesto mentions "proxies", but it seems that those are just intended for solving the problem of persistent availability).


We use Chef (Ruby). Cookbook version solving has been a repeatedly painful experience, occasionally never finding a solution. When this happens, you get to manually dig around and find the culprit.


> The speed of the solver is not a problem in any other package management system I've seen.

The package manager in YaST (Suse Linux's sysadmin tool) was notorious for its slow solver (and slow everything-else, for that matter) around 2006, when I started using Linux. It improved a lot in the openSUSE 11.x series around 2007/8 when they switched from a homegrown solver to a standard SAT solver package.


I wish it were as simple as that. Instead, it's that Russ Cox is fundamentally against SAT solvers and has opted to take this alternative approach. I think they would have been better off iterating on the things that have worked for other languages and tools, and arguably a SAT solver is one of them.


I think Cargo-like is the direction things are headed even with vgo. In a way, though, I don't think the answer is that Go is "special," just that they wanted to explore the problem space of package management, dependencies, etc. in more detail before locking things down. It may have ended in a slightly weakened ecosystem temporarily, but I hope it helps to ensure that the semantics and behavior of Go's package management are level-headed and simple. Hopefully, by Go 2, a lot of the common concerns with the Go programming language will be answered...


The vgo proposal explicitly rejects the "Cargo way" [1]. There's no lock file, and the MVS algorithm requires, as far as I recall, that the go.mod file be modified whenever the developer wants to update to a new version.

[1] https://research.swtch.com/vgo-repro


Indeed, but it does seem like a central repository is possibly the direction it's headed, or at least away from source repositories. Personally, I won't be too upset by MVS if it's one of the only deviations, though I do agree with Sam Boyer on the matter, who I actually had the pleasure of speaking with a bit ago.


Cargo doesn’t require a central repository at all, to be clear.


I'd actually be interested to find out how Cargo's performance compares in a situation as TheDong describes where all of the dependencies are being fetched directly from git.

Part of me wants to say "well of course Rust's tool is faster" but it would be interesting to see just how much crates.io acts as a performance optimization for running builds, installing deps, etc.


Cargo performs dependency resolution, so fetching from git would (and does) significantly slow it down - it would need to perform git operations to get the tags for each dependency (quick) and then check out each tag to look for the subdependencies of that package at that version (slow).

Having a central registry is a must for package managers that perform version resolution and want to do so quickly, as it can serve them all the metadata they need to do that resolution.


My perception is that the initial clone is slow, but that’s it. A detailed comparison with actual numbers would be interesting!

When you depend on a git dep, you can say if you want a particular branch, tag, rev, whatever. So it’s only a clone + checkout. From there you read the Cargo.toml, same as anything else. That’s my understanding anyway, it’s been a while since I poked at the guts.
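For anyone following along, a git dependency in Cargo.toml looks roughly like this (pinning to a tag; a branch or rev works too):

    [dependencies]
    serde = { git = "https://github.com/serde-rs/serde", tag = "v1.0.27" }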


You’re totally right - I was thinking of Composer, which does flat resolution for git sources so has to go through the above. Cargo sidesteps that issue completely by taking the head commit or whatever’s asked for :-)


Cox discusses Cargo here, and why he doesn't like it: https://research.swtch.com/vgo-repro


> The lock file stops future upgrades; once it is written, your build stays on serde 1.0.27 even when 1.0.28 is released. In contrast, minimal version selection prefers the minimum allowed version, which is the exact version requested by some go.mod in the project. That answer does not change as new versions are added.

So with Cargo, you get the exact version you want, ie 1.0.27, and it won't automatically update when a new version is added. And with MVS you get the exact version you want, and it won't automatically update when a new version is added? ...either I'm an idiot, or Cox is using the word "contrast" here to mean "identically".

> Those choices are stable, without a lock file. This is what I mean when I say that vgo's builds are reproducible by default.

Yes, but Cargo uses a lock file by default, meaning that Cargo's builds are reproducible by default too?

I'm open to the idea that vgo/MVS is delivering something amazing here, but every writeup I've seen so far seems to have a miraculous ability to make it sound like a re-branding of the same features every decent package manager has had forever.


The key here is that Cargo ignores the lock files of the dependency, while vgo uses the go.mod files of the dependency.

So, if project A uses B, which uses 1.0.27 of C, then the lock file for A is locked to that version of C. Suppose B now releases a version that was tested with 1.0.28 of C, A will continue to be built with the older version because of the lock file, while vgo would (correctly) start using the new version because of MVS.


Umm. But we started with this:

> Those choices are stable, without a lock file. This is what I mean when I say that vgo's builds are reproducible by default.

You say:

> A will continue to be built with the older version because of the lock file, while vgo would (correctly) start using the new version because of MVS.

That seems to contradict Cox's assertion? If I get a newer version of C automatically when B updates, because I'm automatically getting a newer version of B, then I no longer have reproducible builds; the answer to what version of B I'm using would have "change[d] as new versions were added", which is the thing Cox is saying vgo prevents.

But my understanding of vgo is that this is actually wrong; I don't get the new build automatically at all; I get it when I update using `vgo get -u`. Which is...the same as using Cargo (Composer, Bundler, Yarn, etc.) right? I eventually run the appropriate update command, the solver runs, and I get the new version of B and C.

Ultimately it feels like a dependency management tool can either lock me in to my current versions until I manually trigger an update to get bug fixes, or it can transparently update things in the background as new compatible versions are released.

I take Cox to be asserting that vgo does the former and Cargo does the latter, you seem to be asserting that vgo does the latter and Cargo does the former, and my understanding is both do the former. It feels like this shouldn't be this confusing to explain what vgo is trying to do. :)


I was not clear. If you say vgo get B@latest in A, you will get the correct version of C. If you do not pull in the latest version of B, you will not get it.
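Concretely (versions made up): if A's go.mod requires B v1.1.0, and B v1.1.0's go.mod requires C v1.0.27, then A builds with C v1.0.27 even after B v1.2.0 (requiring C v1.0.28) is released. Only after something like

    vgo get B@latest

does A's go.mod move to B v1.2.0 and, via B's go.mod, to C v1.0.28.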

The thing I like about this is that for many transitive dependencies such as C, I do not want the absolute latest version; I would prefer the version of C that B was tested with at time of release. I can override this, of course, but this is the default behavior I like.


> while vgo would (correctly) start using the new version because of MVS.

I disagree with the “correctly” part. MVS seems to pick up a newer version mostly by accident. If you’re relying on a transitive dependency to trigger a security update, you are doing it very wrong.


> The lock file stops future upgrades; once it is written, your build stays on serde 1.0.27 even when 1.0.28 is released.

That means the only difference is that `vgo` doesn't require a lock file for reproducible builds. Now the question is: what's considered so terribly wrong/dangerous about having a lock file?


If there was a security fix in a package, MVS allows build infra to automatically upgrade (assuming imported packages are well maintained by their owners), whereas lock files require manual intervention.

Imagine the packaging and development ecosystem at internet scale.


I'm not sure what this is asserting. There's no association between MVS and build infra. If you have a security fix to apply, you'd run `vgo get -u` to upgrade all your modules; systems like Cargo that use lockfiles would run `cargo update`. This is just as automatic. In fact, vgo is less automatic here, because systems like Cargo allow users to decide when they want certain upgrades automatically, which means that new downstream users can get the security fix even without needing to explicitly upgrade. And vgo also requires more manual intervention to upgrade than traditional build systems, because if the upgrade requires a major version bump then you also have to edit the import statements in your source files as well. vgo is both less automatic and requires more manual labor; that appears to be an intentional choice on behalf of the vgo manifesto.


> if the upgrade requires a major version bump then you also have to edit the import statements in your source files as well

As per the vgo blog posts, this scenario is not an upgrade but a replacement of one package with another. A package is uniquely identified by its import path, so if the major version changes, the import path changes, and it is not the same package.


Thanks for posting this. It's much more fleshed out than my hazy memory of a GitHub comment, although I still don't agree that all this effort is worth the "simplification" of not having a lock file.


The aversion to lockfiles is especially head-scratching given that vgo has a concept of a go.modverify file (separate from the go.mod file) which is used to store version hashes, which is a function that lockfiles usually fulfill. I think the advantage is supposed to be that modverify files are optional, but in practice I don't see why anybody wouldn't want one given that it provides additional security for free.
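For reference, go.modverify holds one line per module version, pairing it with a hash of its file tree; roughly like this, if I remember the vgo posts right (hash value invented for illustration):

    rsc.io/quote v1.5.2 h1:3fEykkD9k7lYzXqCYrwGAf7iNhbk4yCjHmKBN9td4L0=

vgo checks downloaded modules against these lines, which is exactly the integrity half of what a lock file usually does.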


I believe this video of rsc explains some concerns about Cargo's approach and lock file issues: https://www.youtube.com/watch?v=F8nrpe0XWRg


Yes, it has!

Russ Cox talks about this, amongst other things, in this talk: https://www.youtube.com/watch?v=F8nrpe0XWRg

I am sure there are written versions of this somewhere...


> I wish they had just copied Rust/Cargo

Cargo is really a suboptimal design. With vgo Go advances the state of the art of dependency management.


For those who missed it, Sam Boyer (maintainer of dep) wrote a detailed post about why he thinks vgo (or rather minimal version selection) is inadequate [0].

The key argument is:

> With dep, it's usually easy to point to failures - they're explicit, verbose (and, currently, often difficult to understand), and printed out at the end of a dep ensure run.

> The primary failure mode in vgo, however, is silent false positives - a vgo {get,test,run,build} command changes your dependency graph, and exits 0. Maybe everything's fine, maybe it isn't, but it's incumbent upon you to take additional steps to understand that your build is broken.

I haven't been following this debate very closely, and this post doesn't make it clear - have these points been addressed? Is there still room to maneuver, or is the design mostly settled now? I know the post states that any design flaws will be fixed, but it sounds very much like the more typical lock-file + solver solution has been definitively decided against.

[0] https://sdboyer.io/vgo/failure-modes/


The concerns haven't been addressed. Sam's latest comment for context:

- https://github.com/golang/go/issues/24301#issuecomment-39254...


I just hope they figure out a solution which satisfies the community for the long run. Maybe they already have with vgo, but this is starting to feel very JavaScript-y, the way people have been pushed from $OLD_PACKAGE_MANAGER -> dep -> vgo.


Quite the contrary: NPM's reputation in these circles is less than stellar, and it's been the incumbent in the JS ecosystem ever since Node became a thing. Well, with a few attempts at competition along the way before those working with the browser jumped on board, but nobody is pushing anybody to use anything but NPM, and that attitude hasn't changed for years.

This story isn't unique to Go: both Python and Ruby have had their own ordeals with dependency management, there was an article about CMake just the other day...

Maybe Go's mistake is searching for the one true dependency resolution system and deprecating everything along the way until something good enough turns up. Maybe it'd be better if developers were encouraged to use dep while vgo is still in the proposal stage, so that there is an easy migration path from one standard to another, as opposed to immediately rendering everything obsolete.

I don't really know, neither do I know why Javascript is the scapegoat for this kind of shit, it's practically a rite of passage for a language gaining increasing mind-share.


> Maybe Go's mistake is searching for the one true dependency resolution system and deprecating everything along the way until something good enough turns up. Maybe it'd be better if developers were encouraged to use dep while vgo is still in the proposal stage, so that there is an easy migration path from one standard to another, as opposed to immediately rendering everything obsolete.

The odd thing is I get the feeling that their search for the one-true system is causing them to repeat every other package manager's mistakes. I'm likely misinformed, but I get the feeling that they think they can do the same thing other package managers have considered or tried, but it will work for them somehow. I don't get the feeling that they seriously considered the criticism of VGo's approach. It smells like hubris. Also, there's Cargo, which is lauded by all, but the proposal doesn't seem to consider that perhaps the Cargo folks had a reason to make their dependency resolution scheme complicated. Again, this smacks of hubris.

I'm happy to be persuaded otherwise, and it would really only take a link to a thread in which some vgo proponent thoughtfully addresses Sam's criticism and the "why not copy Cargo?" criticism (and no, the "Cargo's dep resolution scheme is overly complicated" rationale from the proposal doesn't count).


Feels like Go is trying to carve its own path, for better or worse, against conventional wisdom. Plan 9 was pretty much the same in a lot of ways, and it came up with some really interesting concepts (that sadly haven't panned out that well - I'd love to see a modern OS attempt at Acme; I found it incredibly intriguing, and I actually wonder if it could be revived in VR).

If they come up with something innovative with vgo then great. It's just a shame that the community is opting for a monopoly on the system before it even physically exists. They should be encouraging dep to thrive while doing this experimentation on vgo behind the scenes, because then at least they've got community alignment on dep and not dep, glide, godep, `git clone some-repo vendor/some-repo`, `go get` and whatever else you can do to pull in external code.


> If they come up with something innovative with vgo then great.

For sure, but I actually like Go and I'm unnerved that none of the concerns raised by folks like Sam (read "folks with experience in package management") are being addressed. This approach doesn't inspire confidence in vgo.


Had vgo not included MVS I think they likely would have. But there's a lot of disagreement over the design of vgo which could have been avoided had it just done what almost every other recent package manager has done. This is going to make it more painful than it ought to be, but ultimately since this will be the official solution I think it's likely that it will be adopted and that the churn will end since third party tools are no longer necessary.


Third-party tools will still be necessary, to work around shortcomings.

I imagine a tool to update all your imports to the latest release will be needed by the majority who don't need reproducible builds, so you get versions with known bugs fixed. MVS essentially pins to the oldest possible version, and having that entrenched in your software is going to mean big tech debt when you do need to update. Hope none of the known bugs are data-loss or security related, and good luck monitoring each and every one of your explicit and transitive dependencies for updates on a larger project. A tool to update your dependencies to the latest version when convenient, or for automated CI tests, seems required to me.


Look at https://research.swtch.com/vgo-tour (Upgrading).

vgo list -m -u will show you which of your dependencies have newer releases. Then you can upgrade one dependency with vgo get xxx, or all of them with vgo get -u.


Excellent, that seems to address my misgivings. Thanks.


It's worth pointing out that when working in a monorepo, there are no version numbers and no package system to tell you that upstream broke you, or you broke something downstream.

So how do you find bugs? You run tests.

The same can be true if you're working with package management. Most bugs aren't found by the dependency tracking system anyway. Having good tests will tell you about incompatibilities that upstream didn't even know about.

If you're not using exactly the same versions that the maintainer used, you need to run tests.

Maybe people currently don't run tests often enough? But this suggests a different approach to software robustness than comparing version numbers.


Testing and package management aren't mutually exclusive. If you're in a monorepo, all your code evolves in lockstep. A core tenet of versioned package management is to get reproducible builds that only need to break once you choose to evolve forward in time (i.e. upgrade).

Historically, most languages (C, C++, pre-Maven Java) haven't had package management at all, and so dependencies have typically been managed by vendoring the code (or JAR files). JAR files worked okay, but vendoring incurs maintenance overhead that isn't acceptable in today's environment. git submodules are theoretically a solution, but also high-maintenance.


Yes, you are right. And I believe vgo is supposed to guarantee reproducible builds just like other package systems.

However, when you upgrade a dependency, it's still possible that you're using a particular combination of library versions that have never been tested before.

Some incompatibilities can be prevented by looking at version constraints. But even if the package system fails to detect an incompatibility, you're not left without error detection; in the end, what matters is that the code compiles and the tests pass.


This methodology places way too much emphasis on the breadth of the tests and leads to a test-centric view. Say you had a dep that had an SSL vulnerability: most of the time you're not going to be checking for this type of thing at the level of your app, but you'd better ensure that you are using the version of the dep that has the vulnerability fixed.


I'm not sure they're entirely distinct. For example, if the SSL library exports a constant containing a version number, you could write a test asserting that it's not the bad version.
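For instance, something like this (the ssl import path and its Version constant are hypothetical):

    package app

    import (
        "testing"

        ssl "example.com/some/ssl" // hypothetical dependency exporting a Version constant
    )

    func TestNotVulnerableSSL(t *testing.T) {
        const bad = "1.0.1" // the release with the known vulnerability
        if ssl.Version == bad {
            t.Fatalf("built against vulnerable ssl %s; upgrade the dependency", bad)
        }
    }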

It's not as good as testing for the vulnerability, but then again no form of version number checking does that. (This is similar to the principle in web development that feature detection is better than version string checking. But sometimes version-checking is the best you can do.)

Checking version numbers in the package system allows for much faster backtracking, making it feasible to try many versions and select a combination that (hopefully) works. But verification can be done using testing.


They haven't been addressed, but then again they exist in `dep` as well and most other package managers, just in different forms.

Russ Cox had a good talk about this which is worth a watch: https://www.youtube.com/watch?v=F8nrpe0XWRg&feature=youtu.be...


>They haven't been addressed, but then again they exist in `dep` as well and most other package managers, just in different forms.

The whole premise of the blog post is that vgo has additional failure modes that other ones do not. Right at the top, the first item in the list of "what this post covers" is:

>MVS has all the same failure modes, plus more, minus pathological SAT.


To an outsider this may sound like there is some sort of process that resulted in this solution; there actually isn't any.

The committee is nothing more than a simple bureaucracy, to the point that it is almost a joke: Russ makes a proposal, the community is against it, and then it gets accepted by the committee.

It is all just a funny joke.


> To an outsider

> Russ

> throwaway

Something tells me you aren't really an outsider.

Anyways,

> community is against it, then it gets accepted by the Committee

Is this your primary gripe? Because this has happened pretty consistently in the history of almost all open source languages. Think of how many JSRs were hotly contested, only to be accepted. Open source does not mean democratic development, and thankfully so because you might have ended up with this https://i.redd.it/7t1p88ct13ez.jpg


FWIW, I read "To an outsider..." to mean "if you are reading this on HN and aren't involved in the community, this might seem like" as opposed to "I am pretending to be an outsider, let me tell you what about my perspective."


Who says the community is against it? I happen to like it.


Looking at the thumbs-up and the comments, the community seems very favorable toward vgo: https://github.com/golang/go/issues/24301

edit: and not everybody agreed at the time of dep's launch either, as it was not as simple as Go used to be.


Community is not against it as far as I can see. A minority of vocal people are.

There are a lot of people who liked the idea (including me), and those people do not write 30-page blog posts, because there is no need.


The “vocal minority” of the community against vgo as it stands today are also mostly the package management domain experts, who have put in the sweat equity.


Part of the "vocal minority" was also emotional, which doesn't help keep the focus on the technical arguments.


I first used $GOPATH and “go get”. Was amazed at how simple and easy it was and it “just worked”. Then you start to run into issues... so I started using Glide. Which worked pretty darn well. Then I switched to Dep because “it was the future” and it didn’t cause me to break out in hives. Now vgo... but Dep works for me so at the moment I don’t plan to switch until vgo is good and baked. Given Boyer’s concerns I hope he maintains Dep for a while as vgo is polished. Too bad this was not part of the vision at the start. Not sure where JS would be without NPM, etc.


> Not sure where JS would be without NPM

Where it was before NPM, i.e. where Go is today. No real versioning or discovery (granted Go has qualified URLs for discovery).


I still wonder why a solved problem took the Go community so long to solve, given that it's been solved already anyway.

That is, in more proper words: first of all, language-specific package management is mostly a solved problem. There are possible improvements, and maybe vgo realises some of them, but that's mostly a bikeshedding problem. What users need is to be able to declare what packages they need, in what version range. And their search for an alternative to fetching source repos is like searching for the cure to illnesses that already have proven vaccines: you just put up a server and fetch from there. Decentralisation? Put up mirrors.

Then the way this vgo thing happened is the opposite of nice. Tools already existed, and they had to conform to the restrictions of the project (like the, excuse me, idiotic idea of a $GOPATH); but then one of the Go deities comes around and goes: um, I deprecate all of you, I break the rules that you had to comply with, and because I-am-who-I-am, this is the way to go.

Now Cox's solution might indeed be better (though I think it's overkill, and I do agree with Boyer's articles that I've read), but this is not the way to run a community. From my PoV, this would not preclude me from using the language if it came up, but I'd definitely be reluctant to send patches to them. Communities with deities and dogmas are always unhealthy. Those that are also deep down in yak shaving and bikeshedding are even more so.


> Now Cox's solution might indeed be better (though I think it's overkill ...

vgo is actually much, much simpler than dep. The sheer number of words in Russ Cox's series of blog posts belies its simplicity. vgo doesn't need a SAT solver. If you look at many of the issues dep is struggling with, they're related to solving N libraries with transitive dependencies up the wazoo.

Cox's long treatise reflects the complexity of the problem space. Developers tend to brush off package management as being simple. But once you include range-based version constraints and transitive dependencies, it gets a bit messier. Look at NPM and Yarn; they're still struggling to get all the details right. On the other hand, there's Ruby's Bundler. It came out in 2009, RubyGems in 2004, and I've never had a single issue with the toolchain (other than messing up my own constraints). I don't know what kind of magic elixir they were drinking, but somehow those guys managed to nail it from day one.


> Look at NPM and Yarn; they're still struggling to get all the details right.

I think that's a bit unfair. NPM has been a horrible package manager in a multitude of ways since day 1. My default assumption is that if it gets something wrong, it isn't because the problem is hard, but because npm gets a lot of things wrong.

Yes, RubyGems got it right, but so did Composer. And Cargo. And every other language specific dependency manager I've used in over a decade. The lesson I'm drawing isn't that dependency management is uniquely hard, it's that npm is uniquely bad. :)


Comparison to NPM is also unfair because of the rather unique community around it.


Putting NPM on a pedestal of badness is unfair. NuGet deserves to be right up there next to it, but also holding the bouquet and wearing the tiara and waving to the crowd


Simpler does not mean better. ed(1) is definitely simpler than any text editor out there. DOS is simpler than any modern OS out there. Yet we use none of these because complexity is needed for sophistication.


>Simpler does not mean better.

Doesn't mean worse either. If it solves the same problem as dep in a less complex way, I'd say it's better; whether it actually does is something we'll see once it gets adoption.


I'm having a hard time reconciling the "Ruby's Bundler" + "solved problem" axiom with my experience with Chef. It is Ruby. Berkshelf is supposed to solve cookbook versions, but you can use Bundler around that. As a not-a-Ruby guy, it is all a bit confusing, actually. All I know is that Bundler and Berkshelf have caused me untold issues: never resolving dependencies, which leads to manually finding what version update broke what. When I heard about minimal version selection, my first thought was how it would have saved me so much time during my cookbook version wars.


I've not used Berkshelf, but I suspect your problems were related to Chef and the fact that they've built their own dependency system that also interacts with Bundler/RubyGems.

Anecdotally, my company has used Ruby since around 2004, and Bundler since its first release, and we never had any issues. That doesn't mean nobody has ever had any issues (clearly! [2]), but it generally seems like Ruby package management is a solved problem, and that it would be a good model for any dependency system to use.

Bundler does have one feature (or misfeature) that Russ Cox criticizes: "bundle install some_gem" can cause unrelated gems' minor (or patch) versions to be upgraded even if you don't tell it to [1]. I've never liked that, and would much prefer to use "bundle update" to perform explicit upgrades. But I don't think that behaviour is at all tied to its solver, or that MVS is needed to fix it.

[1] https://github.com/bundler/bundler/issues/5068

[2] https://github.com/bundler/bundler/issues


It seems a little unusual for Go, because they've generally gone with a good, simple, and well-proven model for things. That's sort of the whole appeal: the good kind of boring. I don't really see the point of the bikeshedding they're engaged in now, and they've made the upgrade path for everyone who's been working under dep, and Glide before that, more difficult than it needs to be. The rationale behind it isn't really very solid, IMO, either. But I suppose I am just a mere mortal, so how could I possibly understand?


vgo is exactly in the spirit of Go: they took a popular mechanism and simplified it.

vgo works 90% like other systems with one major simplification: it replaces a complicated system of declaring version dependencies which requires SAT solvers to implement with a much simpler, predictable algorithm that they believe will work just as well in practice.
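The core of the algorithm really is just a graph walk. Here's a rough Go sketch under simplifying assumptions (toy version comparison, requirement lists supplied as a function; not the actual vgo code):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // Module is one (path, version) node in the requirement graph.
    type Module struct{ Path, Version string }

    // buildList sketches minimal version selection: visit every module
    // version reachable through requirement lists, and keep, for each
    // module path, the maximum version anyone declared as a minimum.
    func buildList(target Module, reqs func(Module) []Module) map[string]string {
        max := map[string]string{}
        seen := map[Module]bool{}
        var visit func(Module)
        visit = func(m Module) {
            if seen[m] {
                return
            }
            seen[m] = true
            if less(max[m.Path], m.Version) {
                max[m.Path] = m.Version
            }
            for _, dep := range reqs(m) {
                visit(dep)
            }
        }
        visit(target)
        return max
    }

    // less is a toy comparison for versions like "v1.2.3"; the empty
    // string sorts lowest. The real thing parses semver properly.
    func less(a, b string) bool {
        if a == "" {
            return true
        }
        pa, pb := strings.Split(a[1:], "."), strings.Split(b[1:], ".")
        for i := 0; i < 3; i++ {
            na, _ := strconv.Atoi(pa[i])
            nb, _ := strconv.Atoi(pb[i])
            if na != nb {
                return na < nb
            }
        }
        return false
    }

    func main() {
        mods := map[Module][]Module{
            {"A", "v1.0.0"}: {{"B", "v1.1.0"}, {"C", "v1.1.0"}},
            {"B", "v1.1.0"}: {{"C", "v1.2.0"}},
        }
        list := buildList(Module{"A", "v1.0.0"},
            func(m Module) []Module { return mods[m] })
        fmt.Println(list) // map[A:v1.0.0 B:v1.1.0 C:v1.2.0]
    }

No version ranges, no backtracking, no SAT: the answer is fully determined by the go.mod files in the graph.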

As a bonus it also removes the need for lock files, which is what other systems require to fix the issue caused by SAT solvers (chosen versions of dependencies are unpredictable and will change when dependencies get newer versions).

As it is with engineering, the new method also has drawbacks, which is partly why the issue is so hotly debated.

As it is with Go, they believe the advantages of simplicity outweigh the drawbacks.

We'll have to wait and see until it's implemented and used to really decide. So far Go's strong focus on simplicity served the language pretty well.


The upgrade path for dep users is actually pretty easy, and they aren't done yet. vgo will read your dep lock file if it is present and use it to build its own go.mod.

There are some warts here and there, but again, it isn't even out yet and it still works pretty well. I feel like most of the complaints are coming from people who haven't actually given it a go. (pun intended)

Russ Cox has also been very responsive, tracking down those problems that do crop up and addressing them quickly. The only really big pain point is if you have a dependency that is > v1 and therefore requires new import paths. You can get around that with virtual versions, but that's where the main friction is.

But ironically that isn't the bit people are taking issue with vgo about (rather, it's the insistence on MVS).


Note that none of the vgo precursors (or currently vgo itself) have been official Go tools. dep, for example, is an experiment.

Tools have certainly existed, sure. And they continue to exist. But they have not been official Go tools.


The irony is that dep was an implementation of a widely used and well tested pattern, and that vgo is essentially a new paradigm which has been accepted after just a proposal rather than a full-blown experiment.

This particular problem has been obvious since Go was first open sourced, and the way the solution is being managed is a pain in the ass.


I don’t disagree on any particular point. From a community-oriented perspective it seems odd to me as well, but I’m completely unqualified to critique the proposal on its technical merits.

All I can say is A) I hope it’s a good technical approach and B) I hope it doesn’t result in some kind of massive chasm.


> but I’m completely unqualified to critique the proposal on its technical merits.

If you ever used a working, solid package manager, you're qualified to critique. I've used many languages and their respective package managers; they just worked and got out of my way, even when I used the language just to write a hello world and see what libraries are available. With Go, however, there is always a Schrodinger's package manager hanging around like some reality show, and when you finally observe it, what do you see? Something some guy likes better than what the community was about to universally adopt. All these shenanigans are off-putting. Especially when the blinking problem is fetching some tarballs and extracting files out of them, after maybe checking checksums. CPAN's been doing that since the good part of the 90's.


Besides the technical details of vgo vs. dep: the Go team just announced and accepted its own proposal, completely ignoring the community's solutions; dep was even called the official experiment, which is sad.


> Now what?

I keep using dep for as long as it's reasonable for me to do so, because I don't like MVS and I don't like how MVS has been basically forced upon us.


Out of curiosity, what are some of the reasons why you dislike MVS?


What is MVS?


Minimal Version Selection

https://research.swtch.com/vgo-mvs


The new dependency resolution algo in vgo.


Soon many packages will start adopting the module system that comes with vgo, and just like with aliases, the community means nothing to Go.


There is a lot of criticism of the proposal, with a lot of good reason. But I'm honestly very excited about the deprecation of GOPATH; that was one of the most annoying features of Go for me.


What does this mean for someone who is starting out with Go?


Nothing. Use dep now. In fact, since you are just starting out, see how far you can get using only the standard library. Later, there will be a seamless upgrade to vgo.


Seconded. Unlike Node/Ruby/most other modern languages, Go devs actively avoid including dependencies if at all possible.

And contrary to rumour, this is not because Go's dependency management sucks. It's more about the pursuit of simplicity, and avoidance of magic.

You can write pretty much anything using just the standard library (and some of the official packages, like the crypto ones). As the parent said, it's good practice to go as far as possible with just the stdlib.


So Go is about recreating the wheel, over and over again?


Not really. Generally, the wheel you need and the wheel I need are a bit different. Instead of using some bloated "all-wheel," developers choose their custom wheel.

Simple example: sync.Map. This is a general-purpose all-wheel, and as such, due to Go's type system, you have to use runtime type assertions against it. If I need a lockable map, I just make one and it is of the type I need, say map[string]*Foo. I just wrap it in a struct with a lock and I'm done.
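e.g., a minimal sketch of that wrapper (Foo standing in for whatever your value type is; needs "sync" imported):

    type FooMap struct {
        mu sync.RWMutex
        m  map[string]*Foo
    }

    func NewFooMap() *FooMap { return &FooMap{m: make(map[string]*Foo)} }

    func (fm *FooMap) Get(k string) (*Foo, bool) {
        fm.mu.RLock()
        defer fm.mu.RUnlock()
        v, ok := fm.m[k]
        return v, ok
    }

    func (fm *FooMap) Set(k string, v *Foo) {
        fm.mu.Lock()
        defer fm.mu.Unlock()
        fm.m[k] = v
    }

Fully typed, no interface{} assertions anywhere.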

For more complicated things, most everyone will pull in a package. I'm not going to waste some time making a Redis or Kafka package. For a web server, I may pull in a different muxer, but only if needed.


What is the Go alternative to generics?


specifics?

Joking aside, that's pretty much it. The usual method for go devs is to solve the specific problem in front of you (and accepting a certain amount of duplication) rather than creating generic solutions that have a wider scope than they need. Once it's all working, refactoring can often remove the duplication and provide a better solution than the generic one would have provided (because by then you know the problem domain better).

I've been coding in Go for a few years, and only a couple of times run into the "shit, I need generics here" problem.

And yes, I get that this means that the Go answer to generics is "you don't need generics" ;) Which sounds like such bullshit of course.


Use dep for now, it's the closest thing to a stable package manager for go. And you'll also have the smoothest path to a vgo migration in the future.


Nothing yet. The new tools should land in 1.11, but will be an experimental opt-in, with the aim of full support in 1.12.


How does Haskell's cabal compare to vgo?


Cabal uses a SAT solver. It's in the middle of a beta replacing its global package management with per-project package management by default (so-called new-style builds). There were some warts in the past when it couldn't manage different versions of the same package, I think because of a GHC issue, but that is in the past.

Most industrial users use stack or nix on top which reduce the use of Cabal's version solver.


I really do not like the “semver-like” versioning string requirement. My packages are already tagged with valid semver releases and now I need to change them by adding a “v” prefix.

If you don’t know what I’m talking about vgo requires the release to be tagged like “v1.0.3” which is not standard semver.


> vgo requires the release to be tagged like “v1.0.3” which is not standard semver

Are you talking about git tags? Tagging a release with a "v" followed by the version number has been done since the very first git repository.

I don't think semver has any official standard; the closest I can find is https://semver.org/spec/v1.0.0.html, which does say "When tagging releases in a version control system, the tag for a version MUST be “vX.Y.Z” e.g. “v3.1.0”."


Yes I am talking about git tags.

I purposely followed semver 2.0.0 which doesn't mention anything about a "v" prefix. Thanks for finding an old mention of version control tagging!

At the end of the day it's not a big deal. I will just duplicate the tags so it won't break any dep configurations.
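Something like this per release should do it (the ^{} peels annotated tags so the new tag points at the underlying commit):

    git tag v1.0.3 1.0.3^{}
    git push origin v1.0.3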



I've seen a few that use 1.0.3, which is also common among Python and other languages. One of the biggest new Go projects, Istio [1], uses the 1.0.3 style.

I'll switch over in the future and keep existing tags for dep users.

1. https://github.com/istio/istio/tags


This article is about the Go programming language (and environment).


As someone who just recently started using go, this seems really dumb.


The churn in dependency management is one of the worst issues with go. The other is not having a canonical GUI option, instead mostly just bindings to other toolkits with varying completeness and quality.

That said, I think it's worth sticking with go for the quality of the language itself. If you don't need a GUI (or plan to use a web GUI) it's a really nice balance of design decisions that let you actually get things done.



