Using Go Modules (golang.org)
302 points by spacey on March 20, 2019 | 118 comments



I've been using Go modules, and I really like them. It's obvious a lot of attention was paid to backwards compatibility. Being able to vendor using mod is really awesome, as is seeing all your transitive dependencies automatically. It's very Go-ish to focus on minimizing your dependencies and actually think about what you're pulling in. I'm not yet sure how I feel about the v2+ stuff. Having a separate module for breaking changes is something I'll have to try more before I form an opinion, but I dislike the idea of embedding the version in the module path instead of expressing it in the mod file.
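For reference, the v2+ convention just means the major version becomes part of the module path; a hypothetical v2 go.mod would look roughly like:

    module github.com/example/mylib/v2

and consumers would import github.com/example/mylib/v2/..., so v1 and v2 can coexist in one build.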

I do think it's kind of odd that it still puts packages in GOPATH but doesn't let go mod projects live inside GOPATH. I organize all my code for all my languages using the Go repo layout as the directory structure, so I've had to maintain a separate tree for go mod projects, which is not ideal.

After using both dep and go mod, I felt that dep was the straightforward and obvious way to do it; it's how I would have done it. Go mod is much more Go-ish: it's opinionated and based on a philosophy that fits the language. It gives me hope that if they do add generics, they will make them uniquely Go as well.


It does not put modules in GOPATH, at least not where you'd normally find them. Pre-modules github.com/foo/bar would be in $GOPATH/src/github.com/foo/bar. Now they are in $GOPATH/pkg/mod/... (... being based on the URL with some handling for versions and characters that don't work well in filesystems, I don't know the exact details).
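(Roughly speaking, each module version gets its own read-only directory in that cache, e.g. for a made-up module:

    $GOPATH/pkg/mod/github.com/foo/bar@v1.2.3/

with some escaping applied for characters that don't work well on every filesystem.)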

The main problems I've had with go modules are fast-and-loose upstreams that happily rewrite history. go mod detects this and stops the world, and you have to manually edit go.sum.
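For context, each go.sum entry pins a hash for a module version plus one for its go.mod, roughly like this (module and hashes below are placeholders):

    github.com/foo/bar v1.2.3 h1:PLACEHOLDERHASH=
    github.com/foo/bar v1.2.3/go.mod h1:PLACEHOLDERHASH=

A rewritten tag means the recomputed hash no longer matches the recorded one, hence the hard stop.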

I also feel like "go get github.com/foo/bar" doesn't always get me the latest version. You see that there was a commit 1 hour ago, but then "go get" adds a version like 20181130-93874837 to your dependencies. I assume they know what they're doing and that I'm probably the one missing something.


> The main problems I've had with go modules are fast-and-loose upstreams that happily rewrite history. go mod detects this and stops the world, and you have to manually edit go.sum.

This is really irritating. I have the same problem with TLS: every time a malicious actor tries to manipulate my connection and steal my cookies I get an error page and can't do online banking anymore!

Sarcasm aside: this is not a problem with Go modules. This is a problem with the libraries you're using and you should avoid those projects until they get their act together.


Internal libraries are a huge pain to build now. I don't need the rigor of semantic versioning for this; it just gets in the way. My act is together: I have tests that confirm my libraries work, I have an idempotent versioning system that is automatic (the git SHA), and if something gets merged to master, that is my stamp of approval that it works.

Between the versioning behavior and trying to figure out how to document transitive dependencies correctly, go mod has made my life as a maintainer harder.


It sounds like the replace syntax might help here. It allows you to forgo semver+version control and depend on modules on your local disk. https://github.com/golang/go/wiki/Modules#can-i-work-entirel...
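A sketch with made-up paths, assuming ../mylib contains its own go.mod:

    module github.com/example/app

    require github.com/example/mylib v0.0.0
    replace github.com/example/mylib => ../mylib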


From the point of view of someone considering what to use, it's often of fairly minimal significance if a problem comes from the language ecosystem or from the language technology.


These two things are not comparable.


This prevents things like the Node.js package hijackings. I think it's a pretty big deal, tbh.


> The main problems I've had with go modules are fast-and-loose upstreams that happily rewrite history. go mod detects this and stops the world, and you have to manually edit go.sum.

This exists in other package management systems too. At least, it did when I used Composer and came across the same issue three or four years back. I think it's one of those issues that is quite rare, and even less likely in well-maintained packages.


Go gets the latest tag (if there are tags in the repo). You can force `go get` to check out a branch or a specific commit with @branchname or @commithash respectively. That's sometimes necessary when maintainers are lazy taggers.
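E.g., with a placeholder repo:

    go get github.com/foo/bar            # latest tagged release
    go get github.com/foo/bar@master     # tip of a branch
    go get github.com/foo/bar@2f1179d    # a specific commit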


That's what I meant, pkg is definitely in GOPATH. It's annoying because I can't put my code in src anymore, which is where I've been organizing it for years, but I still have to keep GOPATH around for bin and pkg.


What the blog post neglected to mention is that you can force module usage, even inside $GOPATH, by setting the environment variable GO111MODULE=on. I have a go command wrapper script called vgo that sets GO111MODULE=on, which I use when I want to use modules inside $GOPATH. More info here: https://github.com/golang/go/wiki/Modules
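Such a wrapper is tiny; a minimal sketch:

    #!/bin/sh
    # force module mode even when the current directory is inside $GOPATH
    GO111MODULE=on exec go "$@"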


You don't really need bin anymore; run go install or go build with an output target. I'm doing platform-independent stuff anyway, so I have a JSON file in my projects that describes which executables I want to release, where they live in the Go source, and what the filename will be when done. I then create shim scripts for all platforms (in my multi-platform distro), or could easily just make three different "distributions". Yes, I know this isn't using the tools idiomatically, but I've found it more useful since I work with people who are on multiple platforms.
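E.g., something along these lines, with made-up paths:

    # build one of the release binaries for a specific platform
    GOOS=linux GOARCH=amd64 go build -o dist/linux-amd64/mytool ./cmd/mytool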


Fair enough. I clarified because the days of editing code in some dependency of yours are over; it's not easily exposed for that purpose (but you can do it ;).


Which is, of course, totally unacceptable. This is a critical use case for doing deep-dive debugging or for code exploration. Fortunately for us there is the wonderfully named "gohack", which does easily expose your deps for editing (though not as easily as before): https://github.com/rogpeppe/gohack


I think "go get" will only grab the latest tag. If that commit from an hour ago doesn't have a new tag, it won't see anything to update.


I use go modules to avoid having to work out of $GOPATH (~/go). When I work on projects or components I have them all separate in different directories in different locations on the filesystem. Next they should put pulled git dependencies under ~/cache/go or something, not in a top level $HOME directory.


Making sure I understand this: I’m some independent developer working in many languages. I like every project I work on to be inside ~/work/ within a subdir I name based on the project.

I used to be annoyed that I had to put every go project into a dir 7-8 layers below that e.g ~/work/go/src/github.com/joshklein/project/cmd/hello_world.go, but now I can have ~/work/go_hello/src/main.go. Right?

I understand this isn’t the main point, but frankly it’s the thing I care the most about.


> I used to be annoyed that I had to put every go project into a dir 7-8 layers below that e.g ~/work/go/src/github.com/joshklein/project/cmd/hello_world.go, but now I can have ~/work/go_hello/src/main.go. Right?

yes


You can put your project in ~/work/my-go-project but then symlink it to ~/go/src/github.com/joshklien/my-go-project (assuming your $GOPATH is ~/go).

But yes, with modules you don't need to do this anymore.

go mod init github.com/joshklien/my-go-project


Not all go tools respect symlinks, since Rob Pike decided that symlinks should not be supported.

https://github.com/golang/go/issues/15507#issuecomment-24158...



Where in your GOPATH are you supposed to put your source code if you never intend to upload it? All the GOPATH tutorials I see say to put your GitHub path in the import path like you did, and never say what to do if it's not public.


If your project isn't intended to be uploaded (or is, but isn't intended to be consumed by other go projects as a go module), then you can make the project name whatever you want as far as I know.


I actually organize all my projects the Go way; there's nothing stopping you from keeping Haskell code in the same tree. The advantage is that if you are working on something like the kernel, you can maintain separate branches easily.

That said, go mod allows you to put your project anywhere outside of GOPATH, so that takes care of your question. The import path is virtualized in the mod file.


It breaks badly when you run into multiple systems that each want things done their way, like Go and ROS workspaces.


When I first started using go I also was a bit annoyed at the fact that I had to have everything in my gopath, particularly when I forked a repo as none of the paths would lead to my fork.

I've since started to also organize all of my code the gopath way and I just set $GOPATH to $HOME. When forking I just add a new remote and work out of the upstream repo, which seems saner now as it reduces object duplication.

I don't know if I will ever move away from organizing my code this way if I can help it.

> That said, mod allows you to put your project anywhere but in the GOPATH

I don't know if I've run into this yet, though the only project[0] I've used gomod with has a makefile so it might do something to handle it.

[0] https://github.com/ipfs/go-ipfs


I just create aliases wrk-projname that cd to the deeper directory. I've been following the pattern with ~/src/repo/user|group/repo (or project/repo for vsts).


That's how I understand it reading that page.


You can put any project anywhere you want.


Why is this inaccurate? Go modules allows you to put any Go project anywhere on your machine and have it work fine - what am I missing?


I'm not an active golang user, but something which gives me pause here is the insistence on a strict 3-number semver.

For a commercial entity shipping software which may include upstream components, it's important to have options for maintaining (and versioning) in-house forks of upstream components. This becomes a problem when you fork v1.2.3 from upstream and want to internally release your fixes, but now your internal 1.2.4 and upstream's 1.2.4 both have the same number but diverge in content.

I like the Debian solution to this, which permits freeform textual suffixes, so that in the hypothetical scenario above, you can release v1.2.3bigco1, v1.2.3bigco2, with the final number indicating the version of the patchset being applied onto the upstream release; then it's also clear what to do when you rebase your fork because you've maintained the integrity of the upstream version.


> I like the Debian solution to this, which permits freeform textual suffixes, so that in the hypothetical scenario above, you can release v1.2.3bigco1, v1.2.3bigco2, ...

Go also allows freeform textual suffixes; it just needs a `+` in there: v1.2.3+bigco1, v1.2.3+bigco2.


Oh wow, so they are not following semver? In semver everything after + is a build tag that does not "modify" the version (i.e. 1.2.3+a and 1.2.3+b are considered to both be the same version, just e.g. built at different times).


The blog post links to the Semantic Versioning spec, specifically item 9.

> A pre-release version MAY be denoted by appending a hyphen and a series of dot separated identifiers immediately following the patch version. Identifiers MUST comprise only ASCII alphanumerics and hyphen [0-9A-Za-z-]. Identifiers MUST NOT be empty. Numeric identifiers MUST NOT include leading zeroes. Pre-release versions have a lower precedence than the associated normal version. A pre-release version indicates that the version is unstable and might not satisfy the intended compatibility requirements as denoted by its associated normal version. Examples: 1.0.0-alpha, 1.0.0-alpha.1, 1.0.0-0.3.7, 1.0.0-x.7.z.92.

Item 10 explains how + can be used to add a build tag.

> Build metadata MAY be denoted by appending a plus sign and a series of dot separated identifiers immediately following the patch or pre-release version. Identifiers MUST comprise only ASCII alphanumerics and hyphen [0-9A-Za-z-]. Identifiers MUST NOT be empty. Build metadata SHOULD be ignored when determining version precedence. Thus two versions that differ only in the build metadata, have the same precedence. Examples: 1.0.0-alpha+001, 1.0.0+20130313144700, 1.0.0-beta+exp.sha.5114f85.

Instead of using v1.2.3bigco1 or v1.2.3+bigco1, we should use v1.2.3-bigco1. If we need to add a build tag we can use v1.2.3-bigco1+001.


Sounds pretty similar to the Debian scheme actually, which is pretty well thought out: https://www.debian.org/doc/debian-policy/ch-controlfields.ht...

Where semver is ivory tower idealism, Debian is the battle-tested, pragmatic reality.


This has never been a problem in Go: the repo path encodes the owner of a fork, and two separate repo paths with the same version are still completely different modules.

The nice thing about not doing what Debian does is that there is consistency. Freeform usually means diverging opinions, which is not the Go way.


I believe the parent's question was: yes, you will have different repo names, but what about sub-version pinning? I.e. the first patch of v1.2.3 is called v1.2.3.1 (or v1.2.3-1 or something), and then the fork itself keeps evolving and publishing v1.2.3-2, v1.2.3-4 and so on.

Patched versions are "sub" versions of the main version, and those sub versions also keep evolving for the same upstream version.


For a single module, there is a "replace" directive that you can use to point to a local fork:

https://github.com/golang/go/wiki/Modules#when-should-i-use-...

But if you have an internal version that's used in lots of modules, it's probably better to use an import path under your own organization, to avoid confusion.
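For example (hypothetical paths and version), the consuming module's go.mod could say:

    replace github.com/upstream/lib => github.com/bigco/lib v1.2.3-bigco1

which swaps in the fork everywhere that import path appears in the build.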


Does this account for the scenario where you have a dependency of a dependency? For example, if I'm patching a bug in something that several of my upstream dependencies import, do I have to fork/replace everything in that chain to get it to do the right thing?

Debian packages can Replace each other too, and that's certainly an option for managing this scenario, but I feel it's often more disruptive; it's a much more invasive change to have to go in and mutate the package name all over the place, and then it's harder to unwind it later when upstream merges and releases my change and I don't need my fork any more. Maybe this is better in go land?


Not an expert (I just read the docs), but if you only have one binary that you ship, or all your binaries are in the same module, then it looks like a single "replace" directive will do the right thing.

If your binaries are spread across multiple modules, each one would need its own replace directives, so you'd have a bit of duplication. (But maybe this independence is good, since you can upgrade them one at a time?)

If you're publishing a library and don't control the binaries (this is something your customers build) then it looks like "replace" isn't going to work (it's under their control, so they have to do it). If you need to control the exact versions used by your library, you'll want to look into vendoring.


No, you don't need to fork/replace everything in the chain. A `replace` in the top-level `go.mod` applies in all places where that package is used.

Doing a fork/replace on everything in the chain will have no effect; `replace` is only obeyed for the top-level `go.mod`, and `replace` directives in dependencies are ignored.


"the go command automatically looks up the module containing that package"

Looks up where? GOPATH? The tree below the current file? The current directory? What's the "current module", anyway? Where is that stored?

"go: downloading rsc.io/sampler v1.3.0"

Downloading from where? Github? From what repository? The page for "go get" now talks about "module-aware" mode, but doesn't make it clear how that works.

This is the usual headache with dependency systems. There's some specific directory layout required, and you have to know what it is. This article needs to be more explicit about that. Otherwise you end up depending on lore and monkey copying.


> from where?

Given that the import paths are URL-ish, I personally find it fairly obvious where things are coming from - far more so than any other language/dependency manager I've worked with. That's something unchanged with go modules vs GOPATH.


> There's some specific directory layout required, and you have to know what it is

No, you don't. If it works as advertised, why would you care where these modules end up when they are downloaded? There's currently no indication that you need to be aware of whatever directory layout the module system uses, as it takes care of it automatically.

Finally, the article does actually mention where the modules are downloaded:

> ... the downloaded modules are cached locally (in $GOPATH/pkg/mod)


> What's the "current module", anyway? Where is that stored?

The article uses "current module" like you might use "current git repository"; it's the module containing the directory that you are currently `cd`ed to. Just as git identifies the repo by looking for `.git` in successive parent directories, go identifies the module by looking for `go.mod` in successive parent directories.
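A hypothetical layout: any go command run from anywhere under ~/src/foo treats example.com/foo as the current module:

    ~/src/foo/
        go.mod                 # module example.com/foo  (marks the module root)
        main.go
        internal/util/util.go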

> > "go: downloading rsc.io/sampler v1.3.0"

> Downloading from where? Github? From what repository?

From the same place that non-module-aware `go get` downloads it. The only difference on the "remote side" of that from old `go get` is that it grabs the `v1.3.0` git tag, instead of git HEAD; if you don't specify a version it grabs the most recent `vSEMVER` git tag instead of git HEAD.

> > "the go command automatically looks up the module containing that package"

> Looks up where? GOPATH? The tree below the current file? The current directory?

Looks it up on the network, à la `go get`. Exceptions:

- The `replace` directive in your `go.mod` can manually override where it looks up a specific package.

- Telling Go `-mod=vendor` will have it look up all depended-upon modules in `vendor/`, rather than via the normal mechanisms; you can create the `vendor/` directory with `go mod vendor`.

It doesn't use the network every time; a module cache lives in `${GOPATH:-${HOME}/go}/pkg/mod/`; but you don't need to know that any more than you need to know that a build cache lives in `${GOCACHE:-${HOME}/.cache/go-build}`.
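For the `-mod=vendor` route, the flow is roughly:

    go mod vendor                 # copies all dependencies into ./vendor
    go build -mod=vendor ./...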

> There's some specific directory layout required, and you have to know what it is.

Not really; the only requirement is that you have a `go.mod` file that declares the module path for the directory it's in. So if I have github.com/lukeshu/foo, it just needs a `go.mod` saying

    module github.com/lukeshu/foo
instead of having the requirement that it be checked out to `$GOPATH/src/github.com/lukeshu/foo`. Beyond having the `go.mod` file, there aren't really any structure requirements.

> This article needs to be more explicit about that.

Does it, though? It's a blog-post, not the "normal" documentation. The full docs are much more explicit and nitty-gritty than a high-level blog post.


As far as I can tell, rsc.io/sampler isn't a git repository, so it's not doing the same stuff `go get` does. If I try to clone it I just get hit by the HTTP redirect to the docs for rsc.io/sampler.

I may be missing something here (it's been quite a while since I did serious Go stuff so `go get` may have changed too)

> Does it, though? It's a blog-post, not the "normal" documentation. The full docs are much more explicit and nitty-gritty than a high-level blog post.

It's a bit weird to have a blog post about the new module system with nothing about how modules can be published, a pretty common task.


> As far as I can tell, rsc.io/sampler isn't a git repository, so it's not doing the same stuff `go get` does. If I try to clone it I just get hit by the HTTP redirect to the docs for rsc.io/sampler.

Why would you try to `git clone` it directly instead of trying to `go get` it?

    $ GO111MODULE=off go get -v rsc.io/sampler
    Fetching https://rsc.io/sampler?go-get=1
    Parsing meta tags from https://rsc.io/sampler?go-get=1 (status code 200)
    get "rsc.io/sampler": found meta tag get.metaImport{Prefix:"rsc.io/sampler", VCS:"git", RepoRoot:"https://github.com/rsc/sampler"} at https://rsc.io/sampler?go-get=1
    rsc.io/sampler (download)
    created GOPATH=/home/lukeshu/tmp-go; see 'go help gopath'
    Fetching https://golang.org/x/text/language?go-get=1
    Parsing meta tags from https://golang.org/x/text/language?go-get=1 (status code 200)
    get "golang.org/x/text/language": found meta tag get.metaImport{Prefix:"golang.org/x/text", VCS:"git", RepoRoot:"https://go.googlesource.com/text"} at https://golang.org/x/text/language?go-get=1
    get "golang.org/x/text/language": verifying non-authoritative meta tag
    Fetching https://golang.org/x/text?go-get=1
    Parsing meta tags from https://golang.org/x/text?go-get=1 (status code 200)
    golang.org/x/text (download)
It's been that way since at least 1.2 (I'm pretty sure since 1.0, but I haven't verified that memory). The process that `go get` follows is described by `go help importpath`.

The TL;DR is that it fetches `https://${pkgpath}?go-get=1`, and looks for `<meta name="go-import" content="...">` to tell it where to `git clone` from (with some hard-coded hacks for common websites that don't support that, like github.com).
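In this case the page served at https://rsc.io/sampler?go-get=1 carries roughly:

    <meta name="go-import" content="rsc.io/sampler git https://github.com/rsc/sampler">

which matches the RepoRoot shown in the output above.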

> It's a bit weird to have a blog post about the new module system with nothing about how modules can be published, a pretty common task.

Because that hasn't changed. You still publish packages the same way you always have. The only difference in how you publish is that now "vSEMVER" git tags mean something.
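That is, cutting a release is still just pushing a tag (version made up):

    git tag v1.4.0
    git push origin v1.4.0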


Ah, I wasn't aware that the meta tag existed; everything about publishing packages seemed to point to using GitHub (or a self-hosted repository of one of the supported VCSes).

Thanks.

The example using a less-commonly-known package distribution method is probably causing the confusion in Animats' original comment.


rsc.io is a URL. It's being downloaded from there.


I've been using Go modules for a bit and I really like it. I can easily set up local dependencies using the `replace` keyword as well. For instance, all of the apps in `cmd` which need access to common packages are their own modules, and then I just add:

  replace pkg => ../../pkg
And I can refer to everything inside of `pkg` (`pkg` is its own module as well).


The meat:

> go mod init creates a new module, initializing the go.mod file that describes it.

> go build, go test, and other package-building commands add new dependencies to go.mod as needed.

> go list -m all prints the current module’s dependencies.

> go get changes the required version of a dependency (or adds a new dependency).

> go mod tidy removes unused dependencies.

And you can use some sugar to declare exact versions you want.
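E.g., to pin an exact version of a (placeholder) dependency:

    go get github.com/foo/bar@v1.2.3   # sets the require line in go.mod to v1.2.3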


The main thing that annoys me about Go modules is that it broke so much tooling. I still can't get GoCode to work properly, and there are 4 or 5 different forks of the repo all trying to add module support; it's a mess.


The VSCode Go plugin team has been doing a good job of staying on top of this[0], which points to a golang issue[1] where the Go team is tracking the tools that need to be updated for Go modules. So the Go team is aware of this and people are working to get all the major tools on board.

[0] https://github.com/Microsoft/vscode-go/wiki/Go-modules-suppo...

[1] https://github.com/golang/go/issues/24661


This hasn't been my experience. VSCode still regularly fails to resolve dependencies for me. Usually it's because I added a go.mod after opening VSCode and I just need to restart the latter, but sometimes even that doesn't work. Hard to say exactly what the cause is since debugging VSCode is quite difficult (or maybe I'm just looking in the wrong places).


This is now mostly fixed with "gopls". It uses go/packages under the hood, which is aware of both GOPATH and Go modules. gopls also has caching built in, and both code completion and jump to definition work very well.

If you're using vim-go, HEAD now has gopls support. I believe VSCode also recently added support for gopls.

Other than that, I agree that it broke the other tools, but most of them have migrated to the go/packages package, so it'll be better going forward.


It's not really fixed though. When using vim-go with gopls, I see exorbitant memory and CPU usage relative to what gocode used to use, and glacial lookup times compared to what gocode used to offer. Maybe it's perfectly adequate for you, and if so that's great! But that has not been my experience. Neither gopls nor any of the dozen forks of gocode I've tried since 1.11 has come close to the pre-modules experience for speed, accuracy and acceptable resource usage.

I tried to use the profiling flags to get some data to file reports, but ran out of yak-shaving time when I realised I was trying to flush out a bug in the tool command helpers in golang/go, where it would write empty profiles if the process was shut down the exact way vim was shutting it down. That may have been fixed since; I haven't had a chance to look back into it.


There's a reason why Go 1.12 still doesn't set GO111MODULE=on by default. They're waiting until less of the tooling is broken on modules.


Gocode works in vim but for some reason won't do autocomplete in Spacemacs's go-mode. I haven't found the time to look for a fix yet. There's a fork of Gocode that fixes it, but it makes vim's autocomplete very, very slow. As someone who switches between vim and emacs, I wish this mess would get sorted out soon!


HEAD of vim-go now has gopls support, which handles Go modules very well for code completion and jump to definition. Give it a try.


Oh, I'm already using vim-go, but didn't know it does code completion too. Thanks for writing such a great plugin!


What's the fork that fixes the problem? I haven't used Go since modules released, but I'm going to be needing it soon, and this is something I need.



I have been using https://github.com/stamblerre/gocode with company-mode in Emacs. It mostly works. (The limitation with gocode, it seems, is that your file has to be saved and the package you want to complete from has to be listed in the imports list in order for it to do anything. Not sure if that is what is broken with modules, or just how it is. I started using gocode after I started using modules.)


Preach, gorename doesn't work either and seems to be in a pretty uncertain state.

This issue is tracking compatibility of common tools https://github.com/golang/go/issues/24661


Can't you just keep using GOPATH until they're fixed?


You can, go mod is completely opt in.


I think the language server implementation (x/tools/internal/lsp) should get rid of the need for Gocode. But I assume it's a while away.


Google API's latest semantic version is behind the doc site (godoc.org/google.golang.org/api). Is this standard practice in Go?

I was coincidentally converting my Go project to use Go modules yesterday. I depended on a Google API package which was originally retrieved via 'go get', matched the docs, and worked fine since it pulled from HEAD. `go mod` did not work out of the box, as it required the latest semantic version (v0.2.0) of this Google API import. That version, however, is behind the documentation and broke my code.

I understand I can require a specific commit in the go.mod file, but the strings for a specific commit seem cryptic. Where can I look up the version hash that matches the doc site?


Ah. The trick is to substitute in the go.mod file's require block:

s/$SEMANTIC_VERSION_BRANCH/$BRANCH

In this case, $BRANCH == 'master'. 'go build' will resolve 'master' to a specific commit hash version.


you can also `go get google.golang.org/api@master`


I've returned to university recently. We're learning Make in one of my classes at the moment ("learning" Make seems so weird to me). When going over dependency graphs I couldn't stop thinking of how awesome `go mod tidy` is.


I haven't been using Go modules, to the point where I only have a vague notion of what they do.

I haven't felt the need for them.

At what point do they become necessary? Which pain-points should I be on the lookout for?


Probably at the point new versions of your dependencies break your code. If you have no external dependencies, you should be fine.

It also gives you the freedom to have your projects wherever you want.


Be warned that if you like all the tooling VSCode provides for Go, much of it is still not available if you are using Go modules. They are working on it, and there are some rough betas out there with limited functionality, but simple things like renaming variables, etc., are still not available.

I realize this is a tooling issue and not necessarily a language one, but I do feel like they should mention this. It makes working with modules pretty painful. (We are doing it, but man do I miss simple refactors / find-usages, etc.)


I agree. I tried Go modules a couple of months ago, but the VSCode tools were much slower and CPU usage increased when using the go mod extensions. I reverted to not using Go modules at the time; I'll try again soon to see if it has improved. I still like Go modules.


All I wanted was a way to safely and easily clean up my $GOPATH. I don't have a lot of disk space and $GOPATH takes up most of it.

This thing apparently doesn't solve my problem. `go mod tidy` simply removes a dependency from a module, but that dependency is still cached in a big arcane directory at $GOPATH/pkg/mod. Why?

I think we should all be using something like Nix for dependency management, which solves all of these problems, but it is so hard to set up!


Not having a lot of disk space is a minority case for developers now. I wouldn't hold my breath waiting for the Go authors to address it.

If you have a disk space problem, Nix would seem to be the opposite of the solution to that. One of the ways Nix does its magic is to chew through disk space a lot more freely than most distros do.


Also, there are a lot of people who think disk space is an issue. See https://pnpm.js.org/ for example.

(Ironically, pnpm doesn't have an auto cleanup feature.)


npm was especially broken with disk space, above and beyond what Go has ever had, so their problem was much worse.

And my point is not that disk space is never an issue. That's why I said it was a minority issue, not "not an issue". My point is more that Go is not a language about addressing every fiddly minority issue. It's definitely about the 20% that does 80% of the work. So waiting for a language led by a philosophy like that to address a disk space issue is probably not a good plan.

To be a bit more constructive, I'd observe I've had great experiences with cross-compiling. On the chance you're disk-space limited because you're developing right on a target resource-constrained device, you may be able to move your development to a more powerful system, even of a different architecture, and cross-compile fairly easily. I often have a workflow where I just "go build blahblahblah && rsync blahblahblah target:blahblahblah && ssh target blahblahblah" and I just press up & enter on a shell when I want to push & test the code. As long as you've got half-decent bandwidth to the target device, it's fine. There may be a couple of other buttons you may want to push to speed that up, because IIRC when cross-compiling it'll end up building the entire app, and you'll want to pre-compile things for the new arch.


But it has automatic and safe cleanup, right? So I can use only what I need at any point in time instead of having an opaque directory full of unused stuff I don't know if I can delete or not.


> but that is so hard to setup!

And sadly, to use. Nix is a great idea but it could do a lot to help itself out if it ever wants to become more than niche:

1. Use a more approachable package definition language; Starlark (https://github.com/google/starlark-go) or Lua would probably be great choices.

2. Make it reasonable to figure out what the "type" of a package dependency is so we can figure out how to use it and/or find its source code

3. Document package definitions. We document source code in typed languages; Nix expression language is untyped and generally less readable--why not document it?

4. Nix tools have a `--help` flag that only ever errors with "cannot find manpage". This is just user-hostile.

5. Using almost-JSON for the derivation syntax, but then providing a "pretty printer" that keeps everything on one line but with a few more space characters.

6. Horrible build output--everything for every build step (megabytes of useless gcc warnings) gets dumped to the screen. Contrast that with Bazel and company which only print errors.

Plus a long tail of other things I'm forgetting.


I didn't manage to use it enough to get to these issues.

I stopped at the point where I had to keep track of an enormous build.nix file in all my project directories that did a lot of magic if I wanted to use Nix for Go or JS package management.

I also completely failed to search for and install executable packages from npm, among other minor things I tried.

I did that after going through the complete Nix tutorial and managing to understand pretty much everything and love it.


Question: what happens if you want to organize your software into modules? So, for example, a main module (package main) and a secondary module (core). Do you init both as separate modules and then use the second as a dependency of the first one? Do they actually have to be published somewhere to be accessible in this way?


You could use an unpublished version of "core" using the "replace" directive, but this might not be a good idea. If you have two modules that are so closely tied together that they always need to be released simultaneously, it's probably better to make it a single module with multiple packages in it.
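That is, something like this hypothetical layout, with one go.mod at the root and plain packages beneath it:

    myproject/
        go.mod              # module example.com/myproject
        core/core.go        # package core
        cmd/app/main.go     # package main, imports example.com/myproject/core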


thanks!


Since you're calling main and core modules, then yes, you would init both. They would both need to be available via git source control, but otherwise go mod is going to go get the virtualized path that's written into the mod file, the same as if you had done it yourself with go get, so nothing has really changed.


There are at least two bugs in this tutorial:

- There's a duplicated section starting with: "Note that our module now depends on both rsc.io/quote and rsc.io/quote/v3:"

- In the example of upgrading a major version, Hello was renamed to HelloV3, but the caller isn't renamed. It should be "quoteV3.HelloV3".


In the "Upgrading dependencies" section when they use "go get", will the command be local to that module? I am used to "go get" working with the idea of a $GOPATH. I almost wish they introduced another sub-command to update a dependency.


Not ready for production yet; most of the CLI tools, such as errcheck, don't work: https://github.com/golang/go/issues/24661

Other than that it works like a charm. Here's a tutorial I wrote: https://getstream.io/blog/go-1-11-rocket-tutorial/


Do note that you can use errcheck with modules if you call it through golangci-lint.


How do you reference local go modules that are under development? The equivalent in node is `npm link`.


with a `replace` entry. e.g.

    replace (
        github.com/repo/module/path vX.X.X => /path/to/github.com/repo/module/path
    )
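For a local directory target, the right-hand side takes no version (and the left-hand version is optional), e.g.:

    replace github.com/repo/module/path => ../path/to/module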



Every mainstream programming language that came out in the last 20 years has already solved this: Java, JavaScript, Ruby, Python, Rust, you name it.

I am baffled as to why Go has not figured this out and is presenting recent developments as some kind of breakthrough - they are not.

To me, and I'm not trying to be disrespectful here, because of the shortcomings it has, Go is in the experimental/keep-an-eye-on bucket. It does some things really, really well, but it's far from the panacea most developers who use it think it is.


Go's modules are quite a lot nicer than Java's or Python's dependency management / build tools. JavaScript only recently got its act together. Can't speak to Ruby, but Rust is the only one that got it right the first time. Dependency management is only recently a "solved problem".

> To me, and I'm not trying to be disrespectful here, because of the shortcomings it has, Go is in the experimental/keep-an-eye-on bucket. It does some things really, really well, but it's far from the panacea most developers who use it think it is.

Go is definitely a production-ready language, and indeed it powers much of the most important software that has been written since it went 1.0 (particularly in the Cloud and DevOps spaces). It's not perfect, but it's really, really good and improving steadily.


> Rust is the only one that got it right the first time.

Not really the first time. They made attempts before, dumped them along with the person who created the earlier solution, and then developed the current solution.

To their credit they identified early that an official solution is essential.


> To their credit they identified early that an official solution is essential.

this made me smile.


Not sure what you're talking about; Java's dependency management system is strictly superior.

"Most important software" is quite a claim as well.


It could be, but also every time I start a Gradle build I can get up and go brew some coffee, drink some and then come back to see it's still building, so it's not without its flaws.


Ironically, I do the same with golang at my current employer. Build times are nowhere near what's hyped. They're much closer to Java's, and in fact, due to Java's incremental compilation, golang is often slower.


Go has incremental builds as well. I’ve been using it for years and build times are usually a couple seconds; a couple of minutes for huge projects. Are you sure that’s just build time and not some combination of downloading dependencies and/or running tests? How big is your project?


nah. we’ll agree to disagree on this one. most devops cloud stuff is written in python.

you know what’s written in go? terraform. i have yet to meet someone that has actually leveraged terraform in a production setting and does not think it should be banned. what else? k8s? the favorite poster boy of our generation. solving problems you don’t have and replacing figuring out how to deploy your stuff with the anguish of keeping the cluster up and up to date.


I'm curious as to how much golang is even used inside Google itself, since they actually have to write maintainable code that does nontrivial stuff, and not follow the latest fads


good news is that it’s buckets. bad news is that it’s C++ and Java


I never had any issues with the GOPATH and vendor approach, but I have experience with using a huge monorepo, so I never had expectations that it would be any different.

But go mod is great, and it is very different from cargo, pip, npm, etc. I don't think they are claiming it's a spiritual revolution, but it solves the problems that people who needed traditional package versioning were having, in an elegant and Go-ish way.

Go is also far from experimental. It has areas where it could be better, but it's a panacea compared to writing Java in an IDE with Maven for a living.


if you didn’t experience issues with “vendoring” you were either extremely lucky, working on a very small codebase, and/or on a very small team.

to me, just the idea of using a git submodule pointed at a repo that has all my dependencies is...

repeat after me: a source control system is not a dependency management system.


Maybe the first, but monorepos are used company-wide and contain millions of lines of code.

The monorepo approach and vendoring do work, and there are a lot of advantages to having everything work at each revision; you just need to be conscious of it.


hah. no. unless you have proper build/release/test/deploy state-of-the-art tooling the monorepo approach is a disaster. google and other bigco have it? do you?

there is a reason we modularize and version things in software.

in the context in which you pull in external libraries you don’t control, using a monorepo is madness.


given the option I will take java any day or night. we developers like to experiment with things and put them on our resumes, but once the kool-aid is gone very few things stand the test of time.

imho, go wouldn’t be a thing if it didn’t have [initial] backing from google.


Go grew as a result of the open source ecosystem that grew around Docker. Google has backed plenty of other languages that have gone nowhere, so I think it's a fallacy to claim that's why Go has risen in popularity. If anything it was containerization and maybe the fact that Ken and Pike were involved more than the Google brand.

Go is also just really great. You don't need an IDE or a big crusty language to write good software. Go is small, compiles quickly, runs efficiently, is easy to teach, and has great tooling that lets you get up and running in just a few minutes. The ecosystem is centered around open source, the Go project itself (including the name, logos, and specification) is open source, and the Go conference in Denver is fantastic.

Go is not kool-aid.


we’ll agree to disagree here. I believe it’s kool-aid and unless you use it in one of the few niches where it shines it’s a disaster waiting to happen.

the Google brand helped immensely and containers/docker are their own flavor of kool-aid. The art of actually thinking about how you’ll package and deploy your software is lost. Put it in a docker container and fuck it. Fuck security, fuck ever understanding what you’re running, ship it fast/ship it now. My test for docker and its own flavor of koolaid is to ask its supporters what a container is. 95% don’t know. Yep.


Not sure I would put Python or JavaScript forward as languages that have solved dependency management. Even Java has shortcomings in this department, which is why I'm baffled by your comment.


compared to go? it’s state of the art. npm is a huge success. pypi is real. for java developers maven central is living the dream.

go? pull shit from github and pretend that it’s acceptable and works


That's how basically the entire language is: they keep reinventing the wheel, only in a subpar way, market it as if it's some breakthrough, and people who don't know better drink the Kool-Aid. golang wouldn't have gone anywhere if it didn't have the Google name behind it. Did you even notice how verbose the testing code was in that post?


we should form a club or something. we’ll call it the gopher 8th light seekers and drink sparkling water + bitch about go while the rest invent the future. in go :)


hah. What kind of future are you talking about? :P It still seems that Java, C#, and Python are dominant in their respective areas, with golang relegated to devops stuff; those guys can deal with all the shortcomings of the language because they don't know better, but at least it's not infiltrating other areas. Ironic that the golang authors wrote it as a C++ replacement and it completely flopped there; it just shows how out of touch with reality they are.


the “future”. the one with flying cars and strong AI and aliens. where everything compiles instantly and we laugh at peasants that worry about dependency management and sanity.



