It's clear to me that Go didn't get package management right; go get doesn't do package management at all. I think it's the most important thing the Go team has to solve today. Otherwise, people will end up using different, incompatible solutions.
Projects need a manifest so that dependencies are properly declared. People who say "just use make", or "just use git submodules", or "use X feature of your VCS" are just pushing for more fragmentation. I download a package, and now I have to check which VCS feature it uses, and whether it uses a shell script, make, or build tool Z to fetch dependencies. Do the people advocating these solutions really want a language where libraries aren't compatible with each other? And then some people say: just vendor and commit your dependencies. Fine, but say I find a bug in one dep. Patching all the different vendored copies of the same dep is just not manageable.
I think it's time the Go devs acknowledged that there is a serious issue here, instead of resorting to the usual "you don't need that in Go".
Godeps is a first step, but we need an official way to manage packages, so everybody is on the same page.
I would really like something like Composer for Go, with flat dependencies. Unlike NPM, which downloads the entire world each time one installs a library, flat dependencies keep APIs stable and reduce the amount of useless libraries and forks.
> I think it's time the Go devs acknowledged that there is a serious issue here, instead of resorting to the usual "you don't need that in Go".
I'm out of the loop, so honest question: does golang-dev still say that? I thought they were not in denial of the problem, but indecisive about the solution. Like with generics? Or do they consider this a non-issue?
I know Cheney recently came out with his own solution, so it can't be that bad, right?
> I know Cheney recently came out with his own solution, so it can't be that bad, right?
Yes, and so have others. What they really need to do is say "this is our blessed solution" and bundle it. Otherwise, there's just a bunch of competing solutions floating around and nothing is really going to take hold.
And yes, I'm well aware that Java doesn't ship a dependency manager, but Maven emerged as the de facto way to handle dependencies. I think that has a lot to do with developer attitudes. I see Go developers as much more "don't tell me what to do, I'll roll my own solution" than Java developers, and that's why I fear a third-party dependency manager won't win out in the Go world.
> now I have to check which VCS feature it uses, and whether it uses a shell script, make, or build tool Z to fetch dependencies
The core of my work is on the operations side rather than on development (although I do a good bit of that as well), so my perspective is that this initial overhead is a one-time cost for using a library, and that I can live with that.
> I think it's time the Go devs acknowledged that there is a serious issue here, instead of resorting to the usual "you don't need that in Go".
Encountering this, and a lot of "well, this is the way Google does it, so we should too", was a big part of what inspired this post.
Seriously. What I'd like from Go is a structure where there's a source directory for my libraries (including external ones), a directory for the produced binaries (if any), and a text file that describes all my dependencies (including version #s). Inside each library there could be a file that provides the version # of the library.
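Something like this, for illustration (all the names here are invented):

    myproject/
        src/                 <- my code, plus external libraries
            somelib/
                VERSION      <- version # of this library
        bin/                 <- produced binaries
        deps.txt             <- all dependencies, with version #s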
While building the program, Go should search in my $GOPATH for the specific libraries, and if they're not found, search the libraries included with my package.
Some of the issues they've talked about include "What happens when you can't `go get` a library because the internet dies, the repository moves, etc.?", which IMO is a good reason to include the libraries _with_ your Go package.
I'm awkwardly patting myself on the back here since I'm one of the co-authors, but I really wish more people knew about the package manager we have for Dart[1]. Really, though, I can't take credit, since we basically just do the same thing Bundler does.
It solves every single one of the problems listed here. It has a very simple workflow:
1. You make a pubspec.yaml file to list your package's immediate dependencies and the version ranges you allow for them (see the sketch after this list).
2. You run "pub get". It finds all of your transitive dependencies, picks versions that satisfy the constraints, and downloads them.
It also creates a pubspec.lock file that specifies which versions it picked for everything. You check that into source control and now everyone on your team will use the exact same versions of all of your package's dependencies.
3. When you want to get a new version of a dependency, you run "pub upgrade <package>".
4. To add or remove a dependency, just edit your pubspec.yaml file then run "pub get" again.
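For example, a minimal pubspec.yaml looks roughly like this (package names and versions invented):

    name: my_app
    dependencies:
      http: ">=0.11.0 <0.12.0"
      yaml: any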
We have a central repository where packages are hosted, and you can also pull in packages from Git or your local file system.
Packages do not at all interfere with each other. Two packages on your machine can use different versions of the same dependency without any problem.
Bundler is an absolutely brilliant package manager. My prediction is that every language either has a Bundler-style package manager or will eventually move to one.
Bundler is among the better package-dependency management solutions.
I do wish Bundler would simply be bundled into Ruby at this point; it's become so ingrained that there's no longer any reason to not make it a first-class citizen. The one big problem with Bundler is "bundle exec", which is required because project-local gems can conflict with system-wide gems. If all of Ruby honoured Bundler, we could let the Ruby runtime itself handle the gem isolation.
My main complaint about Bundler and RubyGems (and NPM for that matter) is that we're still unpacking packages with no good reason. RubyGems would be easier to deal with if you could just treat .gem files like Java does with JAR files. (The sore point is gems requiring compilation of binaries, but that's solvable.)
Actually, I have another complaint: Even after numerous security gaffes, gems still generally aren't signed.
There is a long-term plan, I believe, to merge all necessary Bundler features into RubyGems. Much of this has already taken place, as RubyGems has (generic) support for Gemfile and Gemfile.lock.
At least in development, if you use RVM gemsets you can avoid having to type `bundle exec`. In fact, I learned recently that nowadays you don't even need gemsets.
Interesting. That hack isn't RVM-specific (I use rbenv), but it looks like it has been superseded by the use of RUBYGEMS_GEMDEPS in RubyGems >= 2.4. If you do:
export RUBYGEMS_GEMDEPS=-
then all binstubs will apparently be looked up via the Gemfile.
One thing I've never been able to figure out about these package management systems: why do you specify a version range, and not just always specify a precise version? I don't like dependencies randomly upgrading themselves, and then you wouldn't need "pub upgrade" or "pubspec.lock".
> why do you specify a version range, and not just always specify a precise version?
To handle shared constraints.
Let's say myapp uses foo and bar, which both use shared_thing. To solve this, we need to pick a single version of shared_thing that both foo and bar work with. If foo and bar have narrow-to-the-point-of-precise constraints like shared_thing 1.2.3+bug-fix, then it's very likely that no version of shared_thing makes both foo and bar happy. The end result is no solution.
To accommodate that, packages are strongly encouraged to semantically version themselves. Then, when you depend on a package, you use as wide a range as you can. If foo wants shared_thing ">=1.2.3 <2.0.0" and bar wants ">=1.3.0 <2.0.0" then the solver can pick, say, 1.5.3, and both are happy.
Thanks for the explanation! I'm not sure I personally want to rely on semantic versioning, but if you have a hierarchy of dependencies like that I see how that would be pretty useful.
I use specific versions of libraries in my app. If two libraries depend on some third library it's rarely an issue, and when it is an issue I often have to resolve it manually anyway.
Ideally I think each library would have its own internal copy of any dependencies, but there are some issues with doing that at least in go.
> Ideally I think each library would have its own internal copy of any dependencies
That works if and only if the library never exposes the use of that dependency, but it can break in horrible ways otherwise.
For example, let's say library foo uses some hash_table package to define some hash table data structure. Foo returns hash tables from its public API. Now imagine your app uses foo, gets one of those hash tables, and passes it to bar.
Bar also uses the hash_table package, but--since dependencies are encapsulated--has its own copy of a different version of hash_table. How is that hash_table code going to handle being given a hash table that was created by a different version of itself?
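In Go terms, a rough sketch of why this breaks (all package paths hypothetical, assuming foo ships its own vendored copy of hash_table):

    package main

    import (
        "example.com/bar" // depends on the top-level hash_table
        "example.com/foo" // ships its own private copy of hash_table
    )

    func main() {
        t := foo.NewTable() // t's type comes from foo's copy of hash_table
        bar.Process(t)      // compile error: bar expects the other copy's
                            // Table type; the source may be identical, but
                            // the two copies are distinct packages to the
                            // compiler
    }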
In a dynamically-typed language like JS, it may work, but this seems super sketchy to me, which is why I think an app should only have a single version of any given dependency.
Many correlate "sane" with "statically typed". [1]
JavaScript can cope perfectly well with modules A and B relying on different versions of C. It'll often cope even if A calls B with an object it got from C, or from some other module broadly compatible with C. Static-typing proponents aren't comfortable with the idea, so it's not "sane" to them. Sane or not, many argue it's not just failing to slow down the Node community, but actively contributing to its explosive growth.
In C#, you get a MissingMethodException if A.DLL and B.DLL refer to different copies of C.DLL and A calls B with an object it got from C, even if the copies of C.DLL are identical.
1: Perhaps I should contrast with strong typing, not static typing. Meh.
Path rewriting means you're distributing modified copies of code, which puts you under significant legal obligation in some cases.
npm is amazing for JavaScript, but terrible for compiled languages, and especially terrible for security updates: the giant tangle of transitive copies of the same dependency at varying versions gets messy fast.
One is the shared constraints mentioned in another comment. The other is security updates. If you release Widget, which needs Gadget =1.2.3, which needs Tool =2.3.4, which needs ssl-wrapper =3.4.5, and you find out that ssl-wrapper 3.4.5 has a security issue (fixed in 3.4.6), then you can't just swap in the fixed ssl-wrapper. You'd have to wait for three different projects to update their dependency lists to get a secure system again.
Provided that everyone does anything even resembling semver, you could just use Gadget 1.2.>=3, Tool 2.3.>=4, ssl-wrapper 3.4.>=5 instead, and still be fairly sure the API won't suddenly change to something incompatible.
Bundler is great. I really wish it worked for most major languages. Switching between projects in different languages with different packaging systems is a bunch of weight I don't want to carry around, and time I don't want to waste.
It sounds like you somewhat 'ported' Bundler to Dart... what's your impression of how difficult it would be to extend a tool like this to work across many languages?
It seems like I'm always asking them all to do the same thing... go get this at that version from there, include anything it needs, and stick it somewhere this thing can find it.
> It sounds like you somewhat 'ported' Bundler to Dart...
More or less. Most of pub's code is devoted to stuff that's specific to Dart, but its underlying "philosophy" is Bundler's model.
> what's your impression of how difficult it would be to extend a tool like this to work across many languages?
This comes up a lot, but my hunch is that we're unlikely to see a successful language-agnostic package manager. The package manager has to be written in some language, and users of language X generally don't want to be told that they have to install language Y before they can download packages for X.
Also, most of the code in a package manager is fiddly details related to the specific language in question: things like how files are organized, running tests, etc. The amount of code you could share is actually pretty small.
One of the things Cheney says in his talk about gb is that nobody likes .lock files[1]. I see that Dart uses .lock files; Bundler also uses .lock files, as does Rust.
But other languages seem to get away without them. Clojure with Leiningen seems to do it nicely.
A slight tangent, but as someone who is sort of in the "respect but do not use" Clojure category, Leiningen seems pretty amazing. I really hope future language package managers look to it for inspiration; it seems to get practically everything right.
> One of the things Cheney says in his talk about gb is that nobody likes .lock files[1]
That may be true if by "nobody" he means Go users, but outside of the Go community things are different. I haven't heard many complaints about them in the Ruby or Dart ecosystems, and the value they provide (repeatable builds!) is very real.
It does, unless I'm really missing something. Since I'm not familiar with pub, can you walk me through what Bundler functionality you'd be missing? Maybe I just don't use Bundler correctly.
It's a workspace layer on top of the go tool, with added support for repository pinning. You 'wgo get' the repo you want, and 'wgo {save,restore}' will pin/fetch it in the future. Also, 'wgo FOO' will do 'go FOO' with GOPATH set for your workspace, so all the normal goodness is still there.
It also makes it so you don't have to manage the GOPATH env var anymore, which I always found to be a pain.
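So a session looks roughly like this (repo path invented):

    wgo get github.com/some/pkg   # fetch into this workspace
    wgo save                      # pin the current revisions
    wgo restore                   # re-fetch the pinned revisions later
    wgo build ./...               # plain 'go build' with GOPATH handled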
While I do agree that Go doesn't come with a built-in solution for version pinning, this hating on go get is misguided.
Most package management solutions that are popular in other ecosystems (pip for Python, npm for Node, CPAN for Perl, etc.) combine two functions: downloading code and versioning.
go get only does the first part: downloading the code. If you ask me, they solved that problem better than pip/npm/cpan by getting rid of the intermediary and going straight to the source.
go get has never claimed to be a versioning solution. I understand why people make shallow comparisons to pip/npm/cpan, assume that it does versioning, and complain that it does it badly (including this article), but the reality is that go get is not a versioning tool at all.
There are good reasons why versioning is not supported by default in Go: there isn't one way of doing it that is clearly superior to the others. That's why we have so many solutions (including the one advocated by this article) and none of them has become a de facto community standard.
In summary: stop complaining that go get doesn't do versioning. There are plenty of working versioning solutions to choose from, pick the one you like the best.
This is the problem with most of the Go community: blindness.
There are plenty of tools that work quite well: Bundler, Cargo (even with some young rough edges), and to some degree npm.
So there are tools and better ways to do it.
It is just that Mr. Pike doesn't like them and Google doesn't need them.
> In summary: stop complaining that go get doesn't do versioning. There are plenty of working versioning solutions to choose from, pick the one you like the best.
There's exactly one reason there are so many of them: none of them works well.
The exact opposite of what you said. You don't have such a mess of package managers in other languages.
> go get has never claimed to be a versioning solution. I understand why people make shallow comparisons to pip/npm/cpan, assume that it does versioning, and complain that it does it badly (including this article), but the reality is that go get is not a versioning tool at all.
The claim is that by not also solving versioning, go get is an incomplete tool. Stating that it was never intended to solve versioning is kind of beside the point.
> The claim is that by not also solving versioning, go get is an incomplete tool. Stating that it was never intended to solve versioning is kind of beside the point.
You've summed up the entire reasoning behind my post more succinctly than I could. Thanks.
> there isn't one way of doing it that is clearly superior to the others.
Absence of a clear winner doesn't mean doing nothing is the best strategy.
> There are plenty of working versioning solutions to choose from, pick the one you like the best.
Unfortunately, that doesn't compose.
The whole point of a package manager is to deal with acquiring dependencies and their transitive dependencies. If the ecosystem doesn't agree on a single package manager, you can't handle transitive dependencies.
What do you do when you want to use four packages, each of which uses a different package manager for its dependencies?
The problem is that there's no reason why it couldn't also do versioning; in fact, it would be sane if it did. The fact that it doesn't makes go get look like a glorified git clone.
I'd like to hear these "good reasons" why it doesn't, though. What are they, exactly?
Wouldn't a common standard on these things be brilliant and totally in line with go's overall philosophy?
> go get fetches dependencies, and their dependencies, and so on. But it doesn't help you to figure out what you've got. You can't do the equivalent of a pip list.
You can do go list -f '{{join .Deps "\n"}}'.
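And if you want everything under the current directory, deduplicated, something like this works:

    go list -f '{{join .Deps "\n"}}' ./... | sort -u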
Go has so few commands and options that it's easy to be familiar with all of them.
Why are the semantics of `go get` any better than those of `bundle install`?
There's obviously no way they can fix the behavior in a backwards-compatible way, because it is so fundamentally broken. And I think that doing something because of a pretty name is absolutely ridiculous when it comes to creating a technical product. You need names like "Heartbleed" if you want to talk to management, but go get is a fundamentally developer-focused tool, and thus its utility should take precedence over its naming.
I almost see its naming as a bad rap on Google's developers: it's an un-googleable phrase, and who cares if it sounds pretty when it's un-googleable and utterly terrible to boot?
True story: I was in a group project in a distributed systems class, and we had made a project that relied on a Go HTTP routing package (I think it was martini or something). For some reason my partner's computer wasn't able to present our project, so at the last minute we cloned it to my computer. Between the night before and then, the router had changed its API, forcing us to make some crazy last-minute adjustments. In retrospect, we probably could have manually reverted that one package's repo, but git is scary to do things with at the last second.
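(Looking back, the manual revert would have been just a couple of commands, assuming the package lived in the GOPATH; the path here is a guess at the one we used:)

    cd $GOPATH/src/github.com/go-martini/martini
    git log             # find the revision our code was built against
    git checkout abc123 # pin the package to that revision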
As I understood it, the original idea with go-gettable libraries was:
- All releases go on master. Dev goes on separate branches.
- Do not make backwards incompatible changes.
If you need to make big changes, make another project.
A lot of libraries ignored the advice. Whenever a library breaks its API (sqlx for example), I stop using the library. This has made go get usable for me.
What was really difficult for me to wrap my head around was how Go dependencies actually work, via GOPATH, etc. My first exposure was the bash scripts of a few projects that I was trying to build on Windows... and it was, let's say, an interesting first exposure.
I kind of wish that, if Go is going to use GitHub targets for dependencies, it would at least support semver pinning and require tags on GitHub to support it. There are other issues, but I think that would help a lot.
It's always going to be an issue...
----
Of all the developer package managers I've used, npm is probably the best... and even that has some issues.
1. Platform binaries are problematic
2. Nested hierarchies result in unexpected duplications
3. Nested hierarchies result in long paths (a Windows issue).
2 and 3 will be resolved with npm 3, but that brings some interesting breaking changes... beyond that, npm versions are generally pinned to the node/io.js version for most people.
As for 1, I'm hoping to see a consistent way of tagging modules with binary or build dependencies, so that there's a build platform in place for at least the common targets. Who knows if that will ever happen; it would likely be tied to paid accounts, which isn't exactly a bad thing here.
Like go get, but you specify exact versions, including transitive dependencies. One GOPATH per project. And most importantly, you don't have to use go get to install the tool in the first place.
(This isn't necessarily aimed only at the blog post, but guys and gals, there are other OSs supported by Go than Linux/OS X. Most of the time, just having "go get" work is enough. Trust me, I've done it a lot as an outlier: I use FreeBSD and Windows.)
What's wrong with using something like party[1], or nut[2]? Or just vendoring in a way that go build still works. Seriously, what's wrong with more source files in your project's source repository?
This way I can go get your command (package main).
Yes, I know about GNU make. The point is not GNU make; the point is having to figure out how to install a third-party program on my OS of choice to build an app that, given the programming language used, should build with a plain "go build".
Maybe once that new fancy OneGet is on Windows (and maybe even becomes as good as brew looks on OSX, which I've never used) it will no longer bother me.
> Makefiles, build: docker build
Ugh, more Linux-only. Please don't follow this.
Meh. I get your sentiment here, but I build and deploy on Linux, and I post about the stuff I work on. I don't post about the Windows stuff that I haven't worked on in years. And I assume a level of intelligence in my readers: they can translate whatever is generally applicable to their own platform, in the same sense that I don't grumble about Raymond Chen's `Old New Thing` not being directly applicable to my own work but still enjoy it.
Seems like the author is suggesting something very similar to what [gpm][1][2] provides. I use gpm for a few projects, along with a makefile that creates a GOPATH at $(pwd)/_build, and it works fairly well.
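Roughly like this, if you're curious (a sketch; it assumes gpm's usual Godeps file):

    GOPATH := $(shell pwd)/_build

    deps:
        GOPATH=$(GOPATH) gpm    # fetches the versions pinned in Godeps

    build: deps
        GOPATH=$(GOPATH) go build ./...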
Yeah, that plus gvp. But if you dig into both, they're just shell scripts under the hood (nice ones, though!), and given that I've already got a makefile or shell script to build the container, run tests, etc., adding a third-party tool is just one tiny bit more overhead.
"go get" works for downloading open source code to check into a monorepo [1]. This is how Google does it. But most organizations use git, which doesn't scale enough to support monorepos very well, and it's awkward if you're just using git.
Wouldn't it be easier and cleaner to use Android's repo tool [1] to pin down the versions and lay out your workspace?
At least repo lets you write down the dependency information declaratively instead of in makefile scripting snippets.
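For instance, a repo manifest is just declarative XML along these lines (names, paths, and revision invented):

    <manifest>
      <remote name="github" fetch="https://github.com/" />
      <default remote="github" />
      <project name="foo/bar" path="src/github.com/foo/bar" revision="v1.2.3" />
    </manifest>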
I think this is a perfect use case for `git subtree`[0]. It actually pulls everything in-repo, but can squash their history and do some other conveniences. I've always been surprised at its relative obscurity – maybe because of a terminology conflict with "subtree merges" – because it's so useful. I haven't actually used it for go dependencies, but it seems like it would be a good fit.
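For a Go dependency it might look something like this (the prefix and URL are just placeholders; adjust for your layout):

    git subtree add --prefix deps/github.com/foo/bar \
        https://github.com/foo/bar.git master --squash

    # later, to pull upstream changes into the same prefix:
    git subtree pull --prefix deps/github.com/foo/bar \
        https://github.com/foo/bar.git master --squash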
I really like the OP's structure and makefile, but it looks like a pain to work against Golang's grain here, even though the methodology makes a lot more sense to me.
Maybe I have misunderstood the documentation, but what I truly, truly don't get about Go's $GOPATH is that it wants (1) my project directory to have some kind of canonical path, and (2) to pollute my project directory with dependencies. I have done a bunch of Go development and I still don't get it.
So, for example, I have myproject, which I naturally want to organize in the obvious way (layout simplified, names invented):
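    myproject/
        main.go
        somepkg/
            somepkg.go
        README.md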
But under $GOPATH, I'm instead supposed to work within a jungle of dependencies and generated files, in the middle of which sits my project. At least Java, for all its many faults, has the good sense to let you store dependencies as JAR files nested in a subfolder, as opposed to turning your own project into a dependency.
I have tried fixing this by symlinking my project into $GOPATH/src/whatever, but Go doesn't let me run "go" commands on things outside the $GOPATH, which makes this rather painful to do in a shell.
Adding to the general confusion, Go doesn't manage canonical package paths, so github.com/zenazn/goji/main.go is in the package "goji" and github.com/zenazn/goji/graceful/*.go are in the package "graceful", but those package names only mean something to the importer. When "goji" wants to use the package "graceful", it imports "github.com/zenazn/goji/graceful", which, being a full URL, must be resolved from a root folder in $GOPATH/src.
At least Java (for all its... etc.) had the good sense to make dotted package names mirror the folder structure of a project, not a URL, thus staying internally consistent. Go wants everything to live in the same sea of things. Vendoring is just half the problem.
Instead of throwing away your project structure, try creating a "go" subdirectory in your project and setting GOPATH to point to it. Yes, you need a separate GOPATH for each project, but it's no worse than needing a separate Python virtualenv for each project.
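A minimal sketch of the idea (dependency path invented):

    cd myproject
    export GOPATH="$PWD/go"             # project-local GOPATH
    go get github.com/some/dependency   # lands in ./go/src/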
That's what I'm doing these days. But it's painful.
It means I have to run full commands such as "go install <my package>" or "go get <some package>". To install all dependencies, I have to do "go get <my package>".
If I want to work on a project that depends on other projects that haven't been pushed to master yet, I have to clone those projects and symlink them in manually. It's pretty painful.
Working with certain tools that assume a certain directory structure (protoc, for example, which computes paths relative to your Go root) is also painful.
"go build ." does work.
"No worse than virtualenv" isn't exactly a rousing endorsement. It's one of the poorer package-management systems out there. Go should be as simple and easy as Bundler. There's no excuse these days, I think.
Yeah, sorry. I have been trying different things. Not happy. But at least it seems that I'm not alone in being dissatisfied. It's surprising because Go does a lot of things right, and disappointing because it's such a brake on productivity.
I agree with your sentiments about Java. I wish I could just have my GOPATH contain 1) the source directory for my project, and 2) one or more directories with versioned Go source tarballs for my dependencies (gar files?).