Taking Go modules for a spin (cheney.net)
229 points by 101km on July 15, 2018 | 71 comments



I've been using go modules in my company for several months now. Everything has "just worked." It's at least 1 order of magnitude faster than dep.

It's worth noting that go modules use a novel dependency resolution algorithm which is extremely simple to reason about/implement, fast, and produces more reliable builds than npm/bundler/cargo. That's why I was excited about it, anyway. It removes the ever-present NP-complete assumptions in this space, so from a computer science perspective it's extremely interesting.
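
If you're curious, the core of minimal version selection is small enough to sketch in a few lines of Go. This is a toy illustration with made-up module names and integer versions, not the real implementation:

  package main

  import "fmt"

  // modver is a module at a version. Versions are ints for brevity;
  // the real algorithm compares semver tags.
  type modver struct {
      mod string
      ver int
  }

  // reqs lists the minimum requirements of each module version
  // (a hypothetical dependency graph).
  var reqs = map[modver][]modver{
      {"app", 1}: {{"a", 1}, {"b", 2}},
      {"a", 1}:   {{"c", 2}},
      {"b", 2}:   {{"c", 3}},
  }

  // buildList walks the graph from the root and keeps, for each
  // module, the highest minimum version anyone requested. That
  // "maximum of the minimums" is essentially the whole algorithm.
  func buildList(root modver) map[string]int {
      selected := map[string]int{root.mod: root.ver}
      var visit func(modver)
      visit = func(mv modver) {
          for _, dep := range reqs[mv] {
              if dep.ver > selected[dep.mod] {
                  selected[dep.mod] = dep.ver
                  visit(dep) // re-walk at the higher version
              }
          }
      }
      visit(root)
      return selected
  }

  func main() {
      fmt.Println(buildList(modver{"app", 1})) // map[a:1 app:1 b:2 c:3]
  }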


> novel dependency resolution algorithm

I've heard/read this, but I can't tell what is necessarily novel about it...to me, it reads like old-school/boring Maven transitive dependency resolution.

(Not holding out Maven as best practice; it's just what I know best in terms of pre-version-range, pre-lock-file dependency management, before those features became state of the art in ~2010.)

...that said, Maven does actually support version ranges; when I last used it ~10 years ago, either it didn't support them yet or we didn't use them, so perhaps that is why vgo seems so familiar. Or I just have a terrible memory.

Anyway, if anyone can correct me on my fuzzy assertion that "vgo is like maven w/fixed versions", I'd appreciate it!


Can you elaborate on why it’s easier to reason about than Cargo et al? I’ve heard a lot of theoretical criticism of Go modules for not taking Cargo’s approach so I’m surprised to hear an experience report to the contrary.


I can't do much justice to the topic in a HN post, but this post specifically describes Go modules' dependency resolution algorithm and compares it against SAT solvers like Cargo's: https://research.swtch.com/vgo-mvs. Click the "Go & Versioning" link in the header, and you'll find a whole series explaining the design decisions.


From cargo's source: "Actually solving a constraint graph is an NP-hard problem. This algorithm is basically a nice heuristic to make sure we get roughly the best answer most of the time." (https://github.com/rust-lang/cargo/blob/master/src/cargo/cor...)

vgo's more-constrained specification for dependencies means there is exactly one right answer, and it can be easily and quickly calculated by both computer and human.

Whether or not this will turn out to matter in practice is still an open question.


>Actually solving a constraint graph is an NP-hard problem

How big of a deal is this IRL? Assuming you have 1000 modules, how long should it take to solve the graph?


Not very. The parts that make it NP-hard are allowing libraries to specify maximum versions (and other more complex version ranges). Most of the time libraries use minimum constraints (~ or ^), which lets the heuristic work like Go's algorithm. For Rust, Node, and other languages, libraries can be imported twice as different versions (without requiring a major version renaming like Go), which also gives the heuristic an out: if it reaches a really complex case it can just give you both versions. Beyond that, package management is a barely disguised 3-SAT problem, for which we have good, fast solvers. There are definitely some edge cases, but when's the last time you ran any of the following package managers and worried about dependency solve speed? cargo, apt-get, npm (and yarn), dnf, zypper. I/O far and away dominates the profiles of these programs; solver speed is basically a non-issue in practice.
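
For reference, the common shorthand constraints expand roughly like this in npm and Cargo:

  ^1.2.3  means  >=1.2.3, <2.0.0  (anything compatible within major version 1)
  ~1.2.3  means  >=1.2.3, <1.3.0  (patch releases of 1.2 only)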


does rust have mutable package-level state like go?


It does. There are ways to mark a package as "only once" in the dep graph. For instance, C libraries are required to be marked in this way.

The only-once constraint also has a nice out for the SAT solver: if you reach a conflict, or something that can't be solved cheaply, you just make the user select a version that may not be compatible with the constraints. Bower, dep, and Maven work that way.


You anticipated where I was going, which is mutable state + multiple copies of packages seems like a recipe for trouble.

So, I'm not sure how happy I would be as a user if my package installer bailed out and asked me to choose!

Out of curiosity, how do you mark your package as "only once" in cargo? I tried googling, and didn't find an answer, but did find a bug where people couldn't build because they ended up depending on two different versions of C libraries!

It does make me wonder if MVS will solve real pain in practice. :-)


> So, I'm not sure how happy I would be as a user if my package installer bailed out and asked me to choose!

It's definitely not a great UX, but at the end of the day the problem can only be solved at the language level or by package authors choosing new names. For instance, in Java you can't import two major versions of a package. For minor versions, having to bail out has been incredibly rare in my experience. I only see it when there are "true" incompatibilities, e.g.

  foo: ^1.5.0
  bar: foo (<= 1.5)

> Out of curiosity, how do you mark your package as "only once" in cargo? I tried googling, and didn't find an answer, but did find a bug where people couldn't build because they ended up depending on two different versions of C libraries!

I think it's the `links = ""` key. It may only work for linking against C libraries at the moment, but cargo understands it!
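
For reference, it's a key in the package section of Cargo.toml (the crate name below is made up):

  [package]
  name = "foo-sys"    # hypothetical crate wrapping native library "foo"
  links = "foo"       # at most one crate in the graph may link native "foo"
  build = "build.rs"  # crates that use `links` must also have a build script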

> It does make wonder if MVS will solve real pain in practice. :-)

Not by itself; semantic import versioning is the solution to the major version problem, by giving major versions of a package different names. Go packages aren't allowed to blacklist versions, though your top level module is. This just means that package authors are going to have to communicate incompatible versions out of band, and that the go tool may pick logically incompatible versions with no signal to the user beyond (hopefully) broken tests!
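
Concretely, a v2+ module carries its major version in the import path, so v1 and v2 coexist as different packages. (Module paths below are made up for illustration.)

  // go.mod of the consuming module
  module example.com/myapp

  require (
      github.com/someone/widgets    v1.8.0
      github.com/someone/widgets/v2 v2.1.0
  )

  // in source files, they are simply two different imports:
  import (
      widgets   "github.com/someone/widgets"
      widgetsv2 "github.com/someone/widgets/v2"
  )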


> Go packages aren't allowed to blacklist versions, though your top level module is. This just means that package authors are going to have to communicate incompatible versions out of band, and that the go tool may pick logically incompatible versions with no signal to the user beyond (hopefully) broken tests!

Yeah, it seems if the Go system ends up not working out in practice, this will be why.

But because of the minimal nature of MVS, you won't run into this problem unless something else is compelling you to go to the broken version. And by the time that's happening, you'd hope that the bug would've been reported and fixed in the original library.

It'll be interesting to see how it plays out in practice.

(Also, if I have a library A that depends on B, which is being reluctant or slow about fixing some bug, I can always take the nuclear option and just fork B and then depend on that. Basically the explicit version of what Cargo would do through deciding it couldn't resolve w/o creating two versions. But I think the incentives might be set up right that the easy/happy path will end up getting taken in practice.)


Packages are made up of modules, and modules can have global state. But doing so directly is unsafe, specifically because it can introduce a data race. Rust also does not have “life before main”, so it doesn’t get used in the same way as languages that do. I’m not sure if Go does?


Even if it's not just "mutable" state, there are a lot of undesirable situations for multiple package imports in rust:

1. Singletonish things like global allocators, rayon-core, etc

2. You may have made a package's static safe to mutate by guarding it with a mutex, but it could be a bad thing to have different versions of that mutex.

3. Compile time computed tables (unicode table, perfect hashes, etc) could be imported multiple times ballooning the binary.

4. ABI/type compatibility with any reexported types


Oh totally. Thanks for listing those out.


(I replied but the reply vanished. If it reappears, apologies for the dup.)

Yeah, go has a magic function `func init()` which gets called before main. (You can actually have as many inits as you want, and they all get called.)

Probably evil, though so far it hasn't hurt me in the same way as, e.g., c++ constructors have. Maybe because it's more explicit and thus you're less likely to use it in practice.
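
A minimal illustration (this compiles and runs as-is):

  package main

  import "fmt"

  // A file may declare any number of init functions; each runs once,
  // in order, after package-level variables are initialized and
  // before main is called.
  func init() { fmt.Println("first init") }
  func init() { fmt.Println("second init") }

  func main() { fmt.Println("main") }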


Cool, thanks!


Depends on the complexity. If the search space is 2^1000, then forget about finding an optimal result.


Is there a catch to vgo's approach? If not, why aren't the cargo people copying it?


The counterarguments are, basically:

1. vgo focuses on the wrong issue (if you're spending a ton of time resolving and re-resolving your dependency graph, maybe the issue is your build process).

2. vgo will get the wrong answers and/or make development much harder.

There's a long writeup of some of the ways vgo can go wrong here: https://sdboyer.io/vgo/failure-modes/ and some background here: https://sdboyer.io/vgo/intro/ among other places. And there was a lot of discussion here: https://news.ycombinator.com/item?id=17183101

I'd say there's about a 3% chance vgo ends up being a smashing success that revolutionises package management and gets copied by everyone else, a 30% chance that vgo works well for golang due to their unique requirements but has nothing to offer anyone else, and about a 67% chance it ends up being a failure and being scrapped or heavily revised to scrap the novel, controversial and (arguably) fundamentally broken ideas that set it apart from every other package manager.

But fundamentally, the reason the cargo people aren't copying it right now is that it doesn't even really claim to have advantages over cargo for rust. (There are some quirks in the golang ecosystem which mean you end up analyzing your dependency tree way, way, more than you do in basically any other common language. That makes speed important for golang, but for everyone else, it's almost meaningless.) "We make the unimportant stuff fast at the expense of getting the important stuff wrong" isn't very compelling. :)

Of course, the vgo people would phrase it as "we make the important stuff fast and we get the important stuff right", so...time will tell. But don't expect anyone to copy this quickly; it remains to be seen if it'll even work for golang, and it'd need to be a huge step up from the current state of the art to make it worth switching for other languages and ecosystems.


> There are some quirks in the golang ecosystem which mean you end up analyzing your dependency tree way, way, more than you do in basically any other common language.

What are these?


This comment from an earlier discussion here covers some of them: https://news.ycombinator.com/item?id=17185335

Basically, dep/glide do a bunch of stuff, including recursively parsing import statements because of How Go Works (tm). Other package managers don't, because they have lock files, and central repositories. Go expects you to just be able to throw a ton of raw code into your GOPATH and have it all magically fetched from github, which is super cool, but also very hard to do quickly, and not really something other languages are clamouring to support.

(A lot of attention has been focused on vgo's solver, and it is much faster, but the solver isn't what takes up all the time; the speedup from dep/glide to vgo seems to be almost entirely related to the changes in how dependencies are declared. Saving 10ms on a more efficient solver algorithm means nothing if the overall process is spending 12s grinding through slow disk and network access.)

And when you survey the language ecosystem, you see a lot of languages very enthusiastically committed to traditional package managers (with lock files) and centralised repositories. Cargo, composer, npm/yarn, bundler/ruby gems - recent history is full of languages happily moving in that direction. Go is an exception, and I don't see anyone actively copying that quirk any time soon.


The catch to the vgo approach is that it requires that no package in the ecosystem ever have even unintentional backwards incompatibilities, because you can't do anything other than specify minimum versions. Or, rather, it makes the resulting problems something that needs to be addressed outside the scope of dependency specification and resolution.

When you just decide not to address a significant part of the problem, the solution becomes simpler.


> unintentional backwards incompatibilities

You mean a bug? Because that's what that is and it is no different from any other bug, and like any other bug they are outside the scope of dependency specifications as they are unintended.


> like any other bug they are outside the scope of dependency specifications

Known relevant bugs in particular versions of dependencies are not outside the scope of what non-vgo dependency management solutions address.


Maybe they should be. If you can make it work, fixing the bug seems like the obviously superior solution compared to letting it fester and working around it locally with incompatibility declarations, slowly degrading the ecosystem up to the point where you have lots of little islands that can't be used together anymore in a sane manner.


> If you can make it work, fixing the bug seems like the obviously superior solution compared to letting it fester and working around it locally with incompatibility declarations

Fixing the bug creates a new version. Unless you are going to create the mess of unpublishing packages or replacing packages with different ones under the same version identifier (both of which are problematic in a public package ecosystem), the fact that maintainers should fix bugs that occur in published versions doesn't, at all, address the issue for downstream projects that is addressed by incompatibility declarations in a dependency management system, even before considering that downstream maintainers can't force upstream maintainers to fix bugs in the first place.


I’m no expert, and I might even be very wrong, but I read the post about it and it seems to hinge on only resolving a minimum version and assuming all future packages with the same import path are backwards compatible. If I’m reading it right, it basically treats path/to/package and path/to/package/v2 as entirely different packages.


You are correct. It bakes semantic versioning into the dependency system, making it a requirement rather than just a convention.


As a rule, Rust and Cargo developers aren't satisfied with something until it's difficult to explain and complicated to implement.


I am eagerly awaiting the arrival of the module system, but please note that it's still experimental, so it's not all fun and rainbows. As of now, the Go issue tracker still has 53 open issues with the "modules" label[1], and Go 1.11 should ship around August. The system works well for most cases, but there are still rough edges, especially around vendoring, undocumented or not-so-well-documented behaviours, and features which may or may not be "in scope" for the module system.

Overall, I am trying to be optimistic about the future of Go dependency management, but I am not planning to switch the projects I work on in my company from dep to Go modules until most of those rough edges are either smoothed, or officially recognised as "works as intended", with viable workarounds.

[1] https://github.com/golang/go/issues?q=is%3Aissue+is%3Aopen+l...


Agreed.

With that said, some non-negligible chunk of the issues you linked are cosmetic (better error messages), proposals, works-as-intended (docs could improve), or self-inflicted misconfigurations from running a Go beta. 53 sounds worse than it is.

I don't know why this had so much drama around it, to be honest. And yes, it is far from perfect; it will be approximately as crummy as python/ruby/node, which is still an improvement.


Given it's 18.04, you could've run `snap install go --channel=edge` to get Go from master.

(all praise mwhudson for maintaining the Go snaps -- `snap info go` for the whole story).


> `snap info go` for the whole story

It lists the channels and some versions, and has a short description; "This snap provides an assembler, compiler, linker, and compiled libraries for the Go programming language."

When you said "the whole story" I expected there to be some sort of story but I guess I might have misunderstood what you meant.


I've recently revisited asdf, and based on some testing, started moving my various compilers/interpreters to that. It's a "general" version manager that works like rbenv for "all" languages.

I'm not sure if I'd use it for deployment - but for development it's quite versatile.

https://github.com/asdf-vm/asdf


I recently started to learn Go and everything has gone smoothly... except the bizarre, unfriendly GOPATH and forced project structure.

It was not fun when VS Code (with the Go plugin) would automatically remove my imports every time I saved the file because it couldn’t find them.


Removing imports sounds like goimports behaviour; that's the formatter (run on save by most editor plugins) that also manages imports. And it's not just style: if you leave an unused import in and try to build the code, it won't compile anyway. For example, I added the net/http package to a file without using it and got this error when running `go build`:

    ./example.go:8:9: imported and not used: "net/http"
So VS Code's Go plugin removes the unused import because otherwise the code is invalid. I'm a vim user and `vim-go` has the same behavior.
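
A minimal file that reproduces it (a hypothetical example.go):

  package main

  import (
      "fmt"
      "net/http" // imported and not used: the compiler rejects this
  )

  func main() {
      fmt.Println("hello")
  }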


I like the consistent, sane project structure, but GOPATH is an odd duck to be sure.


I actually started using the GOPATH structure for all my repos after discovering three full checkouts of github.com/torvalds/linux [1] on my drive.

[1] I know it's not the official upstream remote, but I find this one the easiest to remember.
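
That is, every repo gets exactly one canonical location keyed by its import path, e.g.:

  ~/go/src/github.com/torvalds/linux
  ~/go/src/github.com/golang/dep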


I do the same, but I don't like that it is searched by default when building a Go program. It's too easy for the versions of dependencies in those directories to have changed. And even if you vendor, it falls back to GOPATH so you can still bring things into your build by accident.


I haven’t followed the history of the feature, but could anyone explain why we had to wait until Go 1.11 to be able to build a project from any location we want? I’m having a hard time believing it’s purely for technical reasons, but then I don’t understand what has changed that makes it a desirable feature now rather than before.


Go has always had a very strong dependency on GOPATH and the other path-based semantics that are quite core to the language's import system, so it's understandable that a large change would be quite hard to pull off (for political reasons if nothing else). Personally (having been burned by this mis-feature regularly ever since I started using Go almost 5 years ago) I don't really like this design, but to quote George Lucas "it's stylistically designed to be that way".


Go has always been extensively opinionated (gofmt is just one example of that). This is also the beauty of Go - everything is just as expected once you understand the "opinions".

The rather late support for "out-of-tree" building might be caused by a conflict between the groups pressing to move away from the $GOPATH approach (growing in size along with Go's popularity) and the Go dev team / early adopters, especially as the $GOPATH approach is a central part of Go.

While I can see the intention of Go modules, I believe that many people who belong to the first group migrated from other languages like JavaScript and expect to migrate their workflow as well. I highly encourage everyone to try the $GOPATH approach before rooting for either side.


>While I can see the intention of Go modules, I believe that many people who belong to the first group migrated from other languages like JavaScript and expect to migrate their workflow as well. I highly encourage everyone to try the $GOPATH approach before rooting for either side.

I've tried it. Can we now get real modules?


I'm having a hard time believing the Go core team changed its mind upon popular pressure only. Are you sure there wasn't a real-life painful scenario that appeared or grew in importance, and convinced them an alternative system had to be provided?


The recent(ish) interview with Russ Cox on the "Go Time" podcast [1] sheds some light on this, but I listened to it long enough ago that I can't remember the entire reasoning.

[1]: https://changelog.com/gotime/77


I can't speak to this with absolute certainty, but I have some speculation to offer. The story of GOPATH is tightly intertwined with the story of package management.

Go is a Google project, and Google has an unusual approach to package management: commit everything to the monorepo. The GOPATH is, in essence, a monorepo. If you want to change the API of a library, well, you can just change all its callers across your GOPATH, too. And so for a long time the Go team was unconvinced that package management was a problem.

For example, the Go FAQ [0] still has this to say:

> How should I manage package versions using "go get"?

> "Go get" does not have any explicit concept of package versions. Versioning is a source of significant complexity, especially in large code bases, and we are unaware of any approach that works well at scale in a large enough variety of situations to be appropriate to force on all Go users...

> Packages intended for public use should try to maintain backwards compatibility as they evolve. The Go 1 compatibility guidelines are a good reference here: don't remove exported names, encourage tagged composite literals, and so on. If different functionality is required, add a new name instead of changing an old one. If a complete break is required, create a new package with a new import path.

It is true that if you write perfectly backwards compatible code, then you don't have a versioning problem, but if you think that's a viable solution you're ignoring certain realities of software engineering.

It wasn't until early last year that Russ Cox [1] publicly declared that versioning was a problem and set out to introduce a package manager into the Go toolchain. As it turns out, GOPATH is entirely incompatible with the approach to package versioning that the Go team settled on. You simply can't have two versions of the same package in your GOPATH, unless you're willing to rename one and rewrite all the import paths. Given that public opinion had turned against GOPATH [2], it was finally time to do away with it.

So it took about a year and a half from the time the Go team admitted GOPATH was a problem to shipping a release that made it unnecessary. That's really not too bad. The frustrating part of this saga were the first seven years during which the Go team refused to admit there was a problem at all.

[0]: https://golang.org/doc/faq#get_version

[1]: https://research.swtch.com/go2017

[2]: https://github.com/golang/go/issues/17271


The Go team, for years, actually insisted that package management was something the Go community had to go and figure out. As you say, Google uses a monorepo where they check everything into a single tree. One of the core Go developers -- I forget who, unfortunately -- actually claimed at one point that they didn't want to design a package management system because they didn't know how; since Google used a monorepo, designing a real package management system not based on a monorepo would apparently be beyond them.

My personal theory is that what made Russ Cox cave in was his discussions with Sam Boyer. Cox thought Boyer was going down the wrong path, and thought he had a better solution. Unfortunately, the Go community didn't seem to have read the discussions the two were having, because pretty much everyone thought Dep (Boyer's tool) was blessed by the Go team and was going to be the official package management tool. I can forgive the drama if the end result is a real, non-Google package management system, though.

(While I didn't appreciate the drama, I'm somewhat relieved Dep is not going to be the official solution. Dep is okay when it works, but inherits pretty much all the warts of Glide, which Boyer also worked on. Glide has been an absolute nightmare to work with. Dep is in fact worse than Glide in some respects -- due to weaknesses in its solver, it's completely incompatible with certain significant community packages such as the Kubernetes client. Of course, Dep is not yet 1.0, but I would not say things were looking that promising.)


> My personal theory is that what made Russ Cox cave in was his discussions with Sam Boyer. Cox thought Boyer was going down the wrong path, and thought he had a better solution.

That's certainly my understanding of the situation. Matt Farina has a great commented history of dep and vgo [0] if you haven't already seen it. The comments are particularly enlightening.

Still, it's not clear to me what made the Go team get into the package management game at all. As you say, for years they were happy to leave that as a community problem. But something spurred them to declare that Dep was an "official experiment."

> Dep is okay when it works, but inherits pretty much all the warts of Glide, which Boyer also worked on.

Funny, I've had exactly the opposite experience. Glide caused us plenty of trouble at CockroachDB, but Dep has worked flawlessly, if slowly. I've also found Sam to be exceptionally friendly and responsive to feedback [1] [2].

[0]: https://codeengineered.com/blog/2018/golang-godep-to-vgo/#co...

[1]: https://github.com/golang/dep/issues/1927

[2]: https://github.com/golang/dep/issues/460


Indeed, as I said, Dep is okay when it works, until it doesn't. This [1], for example, is a blocking issue, and requires some manual editing of the lock file to get around. I've had other issues. The issues pale in comparison to the horribleness of Glide, but it's interesting just how these tools end up being so damn flaky.

[1] https://github.com/golang/dep/issues/1207


Ah, that is an unfortunate issue, and the error message is unreadable to boot.

> The issues pale in comparison to the horribleness of Glide, but it's interesting just how these tools end up being so damn flaky.

That's the crux of it, isn't it? The dozens of Go package managers that have come and gone over the years have provided us with substantial evidence that building a stable package manager requires several years of development. I think that's why I'm frustrated that the Go team hit the reset button again. Dep has accumulated plenty of bug fixes over the years to handle more and more of these edge cases, but vgo had to start from scratch.

On the bright side, vgo essentially can't fail.


Not only were these tools buggy, they didn't follow the simplicity that we like in Go. For example, gb was more in the Go philosophy for my taste. I was surprised that it was not chosen as the official experiment.


Personally, I never thought Dep was blessed by the Go team to the point that it "was going to be official", so I wouldn't be so fast to say "pretty much everyone". Notably, some loud people thought so; I've noticed that many quiet people quietly did not (as can be seen, e.g., in the numerous voiced agreements and endorsements of the vgo prototype on the mailing list). Again, I personally expected something very similar to what was pulled off in the end. Especially given that whenever somebody started claiming Dep "is going to be official", rsc/rob (I don't remember which) was quick to correct that it's not so, and that it's only an official experiment. Even Sam, after being corrected this way (AFAIR), was careful to speak only of an official experiment in public emails, the readme, etc.

I have numerous thoughts about developments like this one. Mostly, that I've seen a similar situation happen in numerous communities already, Go being neither the first nor the last: the steering committee has the last say, and may have different taste. I learnt to accept that their choice usually does have merit and usually actually ends up for the better. I learnt that it requires a lot of humility and sometimes gritting one's teeth, learning to let go of hurt feelings, and accepting that someone else may have reasons you still need to grow to understand. Personally, my own view is that for Sam this was probably the first time something like this happened, and he wasn't prepared for the hit. And I agree those never stop tasting bitter, given the work one has put into a project of this kind. Good will and hard contribution, de facto rejected in the end. A child being "lost". But that's not the whole truth, because the child is in this case reborn, though in a somewhat different shape. The experiment has served its purpose and brought a lot of value, a significant contribution. On the other hand, I do sometimes wonder: can such situations and misunderstandings be avoided somehow? Or is the world just not perfect enough? And by the way, I also think that Russ was actually taken by surprise by the extent of the reaction. I suppose that's why it took him so long to react, which let the situation and complaints get somewhat louder than necessary.

But that too is just my personal opinion. One of many in this somewhat unfortunate situation. I just wanted to let off some steam in the end, as I'm growing more and more tired of the recurring claims that "everybody is surprised". On the contrary, I'm personally one of the people looking forward to vgo, and strongly unconvinced by what dep has become.


What impression you had probably depends on whether you read the Go mailing lists or not. All of my colleagues, myself included, had gotten the impression, who knows how, that Dep was official. We don't follow the Go lists.

The Dep situation is very similar to that of Eric S. Raymond's attempt at replacing the Linux kernel's config tool. Instead of presenting a design proposal and discussing it in public, he pretty much finished the project on his own, perhaps thinking that a working version would lead to adoption by users and thus forcing the kernel team to accept something users liked. Or perhaps he assumed he had clout in the Linux community, which of course he didn't. Either way, this kind of brute-forcing just leads to wasted work and resentment.


That's a very interesting reply for me, thanks. Reflecting on it, I don't really follow the list either nowadays, due to not having enough time. But I feel I kinda do know the “who's who” of the community, and thus I sometimes just glance through some threads (e.g. on HN) looking only for what a core member of the Go team said. Recently, I repeatedly feel it's important to quickly find out who the “important people with power” are in an online tech forum. I don’t want this to sound like some kind of critique; I'm just trying to put down how I believe I came to the conclusion I expressed in the above comment, to try and better understand it myself.


+1 - I believe that it stems from the monorepo mindset.

That's fine and works perfectly well in the appropriate environment (e.g. large structured work environment with processes etc etc), but for my personal work I prefer to just checkout wherever the hell I want and go from there. Really looking forward to module support so I can use golang for some personal small-scale projects easily without having to go through a lot of the ceremony of setting go up on say a raspberry pi - just checkout and go (no pun intended) will be a breath of fresh air.


Can someone help me understand why people hate gopath? We have a script called govars.sh at the root of every project and it sets the gopath exactly the same way one would use virtualenv in Python. We remove the default one from .bashrc or .profile. This basically makes gopath disappear entirely - we never bother explaining to new devs what it "really" means. Just ask them to use our template project folder structure.
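
A minimal sketch of the pattern (illustrative only; the actual script surely does a bit more):

  # govars.sh - source this from the project root, like virtualenv's activate
  export GOPATH="$PWD/.gopath"     # per-project GOPATH instead of a global one
  export PATH="$GOPATH/bin:$PATH"  # so freshly built binaries are on PATH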


I have the same workflow, but like you said, gopath disappears. You just hate gopath too :-) I mean, with Go modules you will have the same workflow but without the need to adjust the env vars.


At least I have found it limiting when I want to have a mix of languages and am forced to have all the Go packages in the root folder or live with strange package names. "Hate" is used a bit too often for my liking; "nuisance" would be more appropriate.

You are right that it is solvable. And in many other languages, package management and the like have grown independently of the core language itself, from the community. So the attacks on the golang team seem a bit strange. Maybe they thought these solutions would come from outside the core language team.


In my experience, it's a real pain to get golang projects to build in Jenkins, and it's probably the same for any CI/CD setup, simply because of the gopath. Jenkins agents expect to set up their build directories in a very particular way, and so does Go with its GOPATH. Juggling it just right to get it to work is frustratingly difficult.

In fact, the one main reason I'm looking forward to this release is because of the elimination of GOPATH. It'll make my day-to-day operations at work FAR easier.


In GitLab CI, we run govars.sh, which not only sets the gopath but also adds the "bin" folder to PATH. Then when we run tests we know where to get our compiled binary from. Our govars also sets the path for finding our conf.yaml for env config. Didn't know it would cause so many issues in Jenkins!


I disagree that GOPATH is only appropriate for monolithic repos. I think it works nicely with distributed open source development and has some nice properties there that are lacking in "project-based" approaches.

It's true that it encourages working with all the code in GOPATH. This is a good thing. Your GOPATH is a view of the whole Go ecosystem. You fix a bug where it makes sense and it is picked up by all users. Sadly, vendoring already messed this up.

I think it's insanity when every program has a different idea of what code a given import path refers to (like is often the case with project-based package managers and vendoring as well). It's no fun to juggle the version differences in your head while working on multiple projects.

Go modules have some good ideas here. Semantic import versioning hopefully reduces the number of different versions you have to consider.

Doing the version selection not per-project but globally for all of GOPATH should still result in a working (but not necessarily reproducible or high-fidelity) build. It definitely reduces the amount of code you need to deal with.


Huh, vendor/ won't be honored unless you're in GOPATH? Anyone know any more about this? Seems like an odd choice.


See "go help modules".

  Modules and vendoring

  When using modules, the go command completely ignores vendor directories.

  By default, the go command satisfies dependencies by downloading modules
  from their sources and using those downloaded copies (after verification,
  as described in the previous section). To allow interoperation with older
  versions of Go, or to ensure that all files used for a build are stored
  together in a single file tree, 'go mod -vendor' creates a directory named
  vendor in the root directory of the main module and stores there all the
  packages from dependency modules that are needed to support builds and
  tests of packages in the main module.

  To build using the main module's top-level vendor directory to satisfy
  dependencies (disabling use of the usual network sources and local
  caches), use 'go build -getmode=vendor'. Note that only the main module's
  top-level vendor directory is used; vendor directories in other locations
  are still ignored.


Thx.

OK, so that makes it sound like it has nothing to do with GOPATH.

But interesting (annoying?) that vendor won't kick in unless you specifically ask for it.
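
Right; per the help text above, the vendor flow is entirely opt-in (using the vgo-era flag spellings quoted there):

  $ go mod -vendor            # populate ./vendor from your dependencies
  $ go build -getmode=vendor  # build using only the top-level vendor directory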


So far, I get a feeling that vendoring will be a second-class citizen in the world of modules. Unfortunately, I will not be surprised if the team considers removing the support entirely in a future release. I just hope that a viable alternative will be proposed.


Yeah, I feel like it's really handy to not be reliant on third-party servers to do a build.


Go is forever changing the pathing and configuration. Even here we have:

> Very nice, go build ignored the vendor/ folder in this repository (because we’re outside $GOPATH)

and

> Oddly these are stored in $HOME/go/src/mod not the $GOCACHE variable that was added in Go 1.10

Maybe Go 2.0 will be stable.


I mean, the point of modules is to get rid of vendor. And if you were building outside of GOPATH previously, you were likely broken (or using a specialized build tool that handled this for you). The second one is because it's a cache; storing them there isn't a breaking change, it's just a bit odd.


> you were likely broken (or using a specialized build tool that handled this for you)

Or you're using the isolated-GOPATH trick.

  $ mkdir -p .gopath/src/github.com/foo
  $ ln -s ../../../.. .gopath/src/github.com/foo/bar 
  $ GOPATH=$PWD/.gopath go install github.com/foo/bar
For example, this is one of my Go projects where you can unpack the release tarball anywhere and `make && make install` just works: https://github.com/sapcc/swift-http-import


Thanks, Dave!



