Get familiar with workspaces (go.dev)
167 points by rbanffy on April 11, 2022 | 96 comments



I'm trying to understand the right place to use workspaces.

For example, one of my repos contains a bunch of command line tools (under /cmd/toolA, /cmd/toolB subfolders) that make use of shared libraries. Right now, that's one module. But I suppose these could also be separate modules that are joined by a shared workspace.

They go into this a little bit under the "Work with multiple interdependent modules in the same repository" section. But what they tout as the benefit in that section isn't really a problem in my current workflow. What would be useful is to let my tools depend on different versions of libraries; I think this might be enabled by workspaces? (whereas right now they share a go.mod)


I think of workspaces as "a folder containing all of my projects".

Say I'm working on project A, project B, and misc-utils. Projects A and B include misc-utils. I put project A, project B, and misc-utils in the same folder along with go.work. Then, if I need to add something to misc-utils for project A, I don't have to re-publish misc-utils for it to affect project A; it picks up the change automatically.

Now you can already do this by editing the go.mod of project A so that it depends on the local version of misc-utils, but then you have to remember to revert to the upstream version whenever you publish project A.

In your case if the dependencies are already in your repo, then you're fine. A workspace is good if you have multiple repos which depend on one another, and you're working on all of them at the same time.
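A minimal sketch of that setup (all directory and module names here are hypothetical):

```shell
# Layout, with each project as its own module:
#   ~/code/project-a/go.mod    (module example.com/project-a)
#   ~/code/project-b/go.mod    (module example.com/project-b)
#   ~/code/misc-utils/go.mod   (module example.com/misc-utils)
#
# From ~/code, create the workspace:
go work init ./project-a ./project-b ./misc-utils

# The generated go.work then reads roughly:
#   go 1.18
#
#   use (
#       ./project-a
#       ./project-b
#       ./misc-utils
#   )
```

While the workspace is active, builds of project-a resolve example.com/misc-utils to the local directory instead of the published version.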


Sounds like virtualenvs in python. I'm excited to play with it.


I always thought venv solved the problem of having multiple dependency versions on the same system without conflicts. Go modules already handle that pretty well. This is specifically for cases where you have modules you're using in some code that you also want to actively develop, and you'd rather not constantly push code to a hosted VCS just to be able to `go get server.com/my/dependency@some/branch`. The primary use case modules solve is fetching code versions from remotes in a consistent and verifiable way.

Workspaces just simplify that workflow so that you can make changes locally in `a`, which is used by `b`, without ever pushing `a` and without needing them to be within the same module.


pip install takes a local path, not just a URL.

So long as `a` and `b` are both in the virtualenv (on the path somewhere), it doesn't matter where you got `a` from, and you don't need to push your changes globally. If you start tweaking `a` it won't affect anybody outside the virtualenv either.


Particularly useful when you use `-e`: the virtualenv will only reference your package, not copy the source. Very useful for development, and I learned it way too recently.
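For example (the package path `./a` is hypothetical):

```shell
# Regular install copies the source into site-packages:
pip install ./a

# Editable install only records a reference to ./a's source,
# so local edits are picked up on the next import:
pip install -e ./a
```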


> I think of workspaces as "a folder containing all of my projects".

This really reminds me of the approach that the Eclipse IDE takes.

Some loved it, some hated it - much like how I might want to organize my Go and other projects in rather arbitrary ways, e.g. in a bunch of subfolders where Git checkouts are done, like:

  ~/projects_work/project_{bar,baz}
  ~/projects_personal/2022/project_{baz_but_altered,other}
  ~/Downloads/temp_stuff/*


Think of a monorepo (multiple go.mod files).

As opposed to what you’ve referenced: a single repository that contains a single go.mod.


Yes, what you're describing sounds like what I (think I) want to convert my repo into. I'll experiment.


I'm looking forward to the improved editor experience as gopls is now workspace-aware.

There's a couple of repos at work which contain multiple Go modules (each in their own subdirectory, with their own go.mod files). It's a bit of a pain to get LSP working correctly for each individual module if I open the root directory of the repo in my editor.


> There's a couple of repos at work which contain multiple Go modules (each in their own subdirectory, with their own go.mod files).

There's almost no reason to do this. 1 repo = 1 module. Even for monorepos.


Can you elaborate? What if you have multiple commands (main.go) with differing dependencies?


They can (and usually should) exist in a single module. Each compiled artifact will link in only the dependencies it requires, not all of the dependencies of the parent module.
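A sketch of what that looks like in practice (layout and names hypothetical):

```shell
# One module, several binaries:
#   go.mod                   (module example.com/tools)
#   cmd/toolA/main.go        imports only what toolA needs
#   cmd/toolB/main.go        imports only what toolB needs
#   internal/shared/         library code used by both
#
# Each build links in only that binary's import graph,
# not every dependency listed in go.mod:
go build ./cmd/toolA
go build ./cmd/toolB
```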


I gather this is an easier way to work with dependencies that you might need to tweak or add some logging lines too or something?


Yeah, basically it lets you use a local version of a dependency rather than a published version. This way you can make changes to the dependency and test that they are compatible with your downstream project without needing to resort to hacky workarounds.
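Concretely, the pre-workspaces hack was a replace directive you had to remember to delete before release; with workspaces the override lives in a go.work file you can keep out of version control (module paths hypothetical):

```shell
# Old way, edited into project-b/go.mod (and easy to commit by accident):
#   replace example.com/misc-utils => ../misc-utils
#
# New way, in a go.work file next to the checkouts:
#   use (
#       ./project-b
#       ./misc-utils
#   )
```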


I don't know Go, but this sounds analogous to .net solution / project files - is that kind of what this is?


No. Imagine you have a nuget package your solution relies on. If you want to test a change to that nuget you would need to change the nuget reference to a local project reference or publish locally.

Imagine if you could create a file on the solution level that uses the source code in a directory instead of the nuget without affecting the csproj, sln or nuget references


Something like this would be a game changer for my workplace.

This particular issue is a huge bugbear for our team of 50-ish developers in a .NET development environment, working on a large product with a lot of shared code/models/etc. that often has to be updated at the same time as application logic.

It is astonishing to me that MS haven't addressed this issue with NuGet workflow and its hostility towards developers working on internal code.


You can use Directory.Build.targets and Directory.Build.props to have shared build targets/properties, including package references. When building, MSBuild will keep checking parent directories until it finds those files and imports them.

See: https://docs.microsoft.com/en-us/visualstudio/msbuild/custom...

I don't use them for package references myself, but this blog post has some information on how you'd do that: https://www.mytechramblings.com/posts/centrally-manage-nuget...


The links you've provided deal more with how to keep NuGet package versions consistent across multiple projects within a solution. While this does solve an issue, it is not the problem being discussed.

Package version consistency is certainly an issue of merit and deserves attention; however, for our company's use case it is a minor bugbear in comparison to the issue of package references (for build) vs source or project references (for development) in large codebases that lean heavily on shared dependencies.


Ah yes! I completely missed the parent's point. I would actually like something like that.

My current approach is to use a Directory.Build.props file in the folder I clone into (i.e. outside all of the repos), and use it to set the package output path to a local folder and timestamp the package version. That way every build creates a new package to test locally, but I still have to update the references in the dependent projects when testing (and build everything separately).

Not particularly happy with it, but I don't have to manage that many shared dependencies either.

I wonder if a precompile target has enough access to tinker with the references? I.e. remove items from the PackageReferences item group and create corresponding ProjectReference items. It would probably also need to call the MSBuild build task on those projects so they get a fresh build in case anything has changed in them too.

But yes, it would be great if Microsoft supplied something for that.


100% agree. These pain points eventually broke me and I just left .net for Go


So a way to easily temporarily override the dependency reference to use a local project rather than a NuGet package? That does sound quite useful. I think this can be done relatively easily in a csproj with conditional references though?


> So a way to easily temporarily override the dependency reference to use a local project rather than a NuGet package?

Yes. Great for debugging too.

> I think this can be done relatively easily in a csproj with conditional references though?

Maybe, but you’d have to do this manually for all the projects in a solution. This is built into the tooling and produces a .gitignore-able file


Project files are a semi-complex file format specifying a lot of configuration for a given project. One project file has no bearing on another.

Workspaces are a convention built into the language that requires no configuration. This convention applies broadly across projects equally, using identical rules for all projects. You need to download/checkout your repos in a certain way, but then they all play together.


Yes


It most definitely isn’t


Wow. That's not a small thing to internalize. I guess if you use predominantly Go, that's good and understandable, but if you have more languages in the mix (C/C++, Python, C#, Java, Rust), each of them has its own build ways and quirks (at least with Go/Rust/C# there is a single build option, AFAIK). And then the questions: do I need to clean now? Is this really going to pick up my latest change? Is the IDE going to see these new symbols, and the debugger, etc.?

I know people like it when their language comes with a build system, but that creates a cost when that language/runtime has to be used with others...


I don't see how workspaces makes that situation worse. Go has always had its own compiler, package layout, etc.


I think the whole thing is super easy to use, if only for Go. I've had to recompile and patch a few things, and even though I'm relatively unfamiliar with it, with my little knowledge I was able to get through. (My main languages are C/C++ and C#; I did Java (+ blaze) while at Google and never touched anything but Java there, though I saw the light of how easy it is to integrate between these languages, given enough time spent making those rules work. It's not always easy; I'm still waiting for external dart/flutter rules, which is yet another system with its own build mechanism.)


If you have a lot of different languages in the same repo (like a monorepo scenario), you might be better served using something like bazel. It'll abstract away all the language-specific build systems, for better or worse, and make it a bit more uniform to manage. It's not a panacea--basically everything now has a single annoying build system instead of everything having an annoyingly different native build system.


Well, I was hinting at that, or at the other similar systems (gn, buck, pants, etc.): you end up needing a language-agnostic (or rather, language-diverse) system that understands how these compilers/runtimes express output artifacts and actions. It's not always that easy (e.g. "virtual_includes" in bazel and `#pragma once` do not work well together), but it's a small price to pay (e.g. go back to #ifndef MY_HEADER_H #define).


This is a great improvement over the existing workflow.

Didn’t realize I could use this for protobuf libs pulling in from a common schemas repo


Is this similar to .xcproject / .xcworkspace?


Now that Go has generics (and the workspace handling mentioned here) I think it's going to get down to complaining about verbose error handling...


This reminds me of Dan Luu post about outages and post-mortems [1]:

> For more on this, Ding Yuan et al. have a great paper and talk: Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems. The paper is basically what it says on the tin. The authors define a critical failure as something that can take down a whole cluster or cause data corruption, and then look at a couple hundred bugs in Cassandra, HBase, HDFS, MapReduce, and Redis, to find 48 critical failures. They then look at the causes of those failures and find that most bugs were due to bad error handling. 92% of those failures are actually from errors that are handled incorrectly.

[1] https://danluu.com/postmortem-lessons/


I believe this, but I don't think it addresses whether implicit or explicit error handling is more likely to result in correctly handled errors. My opinion is that explicit is better than implicit (and I'm using implicit and explicit loosely here--the `?` operator is technically explicit, but I'm considering it to be implicit since it's easy to overlook a single character).


It's impossible to overlook that single character because your code won't compile otherwise. Unlike Go, you cannot accidentally ignore an error condition in Rust. There is no way to discard the error without explicitly doing so, and even then you still have to decide on a non-error return value to return (because Rust functions that fail provide an error or a value and not both, unlike fallible golang functions).

People who believe Golang is explicit and hard to ignore errors but think that Rust is implicit and easy to ignore errors have never used both languages, full stop.

I challenge you to provide me with an example of Rust code that ignores an error without explicitly and obviously doing so.


I felt the same way about Go until I tried using it in a large corporate setting. The linters will absolutely scream at you if you ignore the returned error, and you will need a //nolint comment to silence them, which immediately attracts the reviewer's attention. The net result is that errors are basically impossible to ignore in Go as well.

If you're writing alone in a vacuum without linters, then I agree that Rust is more explicit.


I have accidentally failed to handle errors in golang code in a large corporate setting that was run under multiple linters. This was "simple" and straightforward code, and the last time it happened was within the past month. I wish I remembered exactly what I did, but it was rebased over to be fixed.

If you need sufficiently advanced linters to catch every case, your error handling is not explicit. Especially if those linters are not currently sufficiently advanced.


> If you need sufficiently advanced linters to catch every case, your error handling is not explicit.

You're conflating "explicit" and "statically verified".

Agreed that static verification by default is ideal. Rust wins here.


> It's impossible to overlook that single character because your code won't compile otherwise

I was thinking about the case where there is a `?` in the code but the reader glossed over it, thinking it didn't return an error.

> Unlike Go, you cannot accidentally ignore an error condition in Rust. There is no way to discard the error without explicitly doing so, and even then you still have to decide on a non-error return value to return (because Rust functions that fail provide an error or a value and not both, unlike fallible golang functions).

I agree, I think Rust is strictly better in this capacity.

> People who believe Golang is explicit and hard to ignore errors but think that Rust is implicit and easy to ignore errors have never used both languages, full stop.

> I challenge you to provide me with an example of Rust code that ignores an error without explicitly and obviously doing so.

You seem to be unduly defensive. This was never my claim. I have used and enjoy Rust. We're all friends here :)


> I was thinking about the case where there is a `?` in the code but the reader glossed over it, thinking it didn't return an error.

The error is explicitly in the function prototype. Every code path that returns must ensure the result is an `Ok(T)` or an `Err(E)`. You just can't accidentally overlook this. Even if you do gloss over it when skimming, the error is handled and dealt with.

> You seem to be unduly defensive. This was never my claim. I have used and enjoy Rust. We're all friends here :)

I'm just tired of hearing that Golang's error syntax is explicit but Rust's is somehow implicit because it's fewer characters, even if they desugar to virtually exactly the same thing. Both are explicit. Golang's is verbose and (ironically) error-prone.


> The error is explicitly in the function prototype. Every code path that returns must ensure the result is an `Ok(T)` or an `Err(E)`. You just can't accidentally overlook this. Even if you do gloss over it when skimming, the error is handled and dealt with.

Some of us are mere mortals. We tire and make mistakes. Perhaps even unworthy of the mantle of "Rust programmer".

> I'm just tired of hearing that Golang's error syntax is explicit but Rust's is somehow implicit because it's fewer characters, even if they desugar to virtually exactly the same thing. Both are explicit. Golang's is verbose and (ironically) error-prone.

I already admitted sympathy to the viewpoint that Rust's error handling is technically explicit, and that Go's error handling would be improved by static verification that errors are handled. What more do you want from me?


I like Java-style checked exceptions and Rust's Result enums, and my ideal error handling would be more verbose compared to Go's error handling.


there’s really no reason this has to be 3 lines

    if err != nil {
        return err 
    }
i’m hoping they find a way to simplify this


Returning an unannotated error like this is an antipattern. Every error return should include an annotation via fmt.Errorf.


Why? Errors are annotated by definition. I wouldn't feel better about returning my own string versus the error which already includes a message.

How much additional context needs to be there, and how often does this convention end up simply duplicating the message with basically repetitive text?

e.g. `panic(err)` becomes "Problem with date formatting: invalid date format" or similar

I also think "antipattern" gets thrown around too often. This sounds more like a preference or convention.


I presume as a method of manually constructing a backtrace; as the same error may occur in multiple places, it's helpful to understand the context.

I too would label it a preference.

To get out of the error handling tedium in our platform I largely opt to panic instead whenever viable, which gives a nice trace for free. (I am but human, and the error handling particularly grates once you have gotten used to just typing `?`.)


I also panic whenever possible from the caller, in which case I get a pretty clear stack trace. I understand you don't always want that but imo I don't have a lot of cases where I'm ultimately throwing my hands up on a non-exception error.

I think V2 errors are taking more of a lead from the pkg/errors errors.Wrap approach anyway.


pkg/errors.Wrap is already accomplished in the stdlib via fmt.Errorf("annotation: %w", err).

Programs which use `panic` as ersatz error mechanisms are fundamentally broken. `panic` expresses an invariant violation that's much different than normal errors.


`panic` is not equivalent to `return err`. It expresses a much more fundamental problem than an error return, and subverts the ability of the reader to model execution control flow. `panic` should essentially never be used in application code, and when it is used it should almost always immediately terminate the program.


Programming is a vast field, "essentially never" is quite a strong statement.

For many errors, in many situations, terminating the process is quite reasonable.

In my particular situation, the greater system will restart failed processes and retry failed tasks. I find this useful, as in many cases my program can just die when something weird happens, simplifying its own logic.


This represents a false economy, or maybe a local optimum. It's lovely that your code can be simple in the sense that it can assume all kinds of invariants that, if violated, will simply terminate the execution, which can safely be assumed to start up again anew. But it's decidedly not lovely that you can no longer predict what effect an input will have on your code, and can't effectively reason about, well, anything beyond a trivial lifetime/callstack. If your process dies whenever something weird happens, it effectively becomes nondeterministic -- your greater system model has to assume it can die at any instant for any reason.


> your greater system model has to assume it can die at any instant for any reason

Correct. This is something I have to design for in the system anyway, because in practice anything can (and does!) die at unpredictable times. It's typically an inevitable fact of life that a machine/kernel/program will occasionally die, and your system has to survive that.


Of course it can, but the question is what this sort of termination represents. Hopefully, it represents a serious showstopper bug that gets fixed immediately! If your program is built such that call stacks don't have reasonably deterministic behavior, it's essentially impossible to build a usable model of the program as a maintainer.


I mostly agree, but I’m also comfortable using “panic” in an HTTP server to throw 500’s.


Goodness, I hope not! This breaks all your downstream middleware and whatever business logic is invoked by your handlers. If you encounter a 500-worthy problem it's an `error` return like anything else.

`panic` is not for business logic.


> `panic` is not for business logic.

Neither are 500 responses. A perfect HTTP server never responds 500 and there aren't any situations I've ever encountered where there's any valuable error recovery or business logic left to be done once the server has run into a 500-worthy issue.

IIRC, the HTTP server in the Go standard library recovers panics and issues 500 responses, which is what I would expect it to do.


500 maps to a set of business logic errors, sure. Why not?


Keep in mind that the usefulness of stack traces quickly breaks down in the presence of goroutines.


Why do you believe errors are annotated by definition? They aren't?

Panic isn't an ersatz error mechanism.


Pedantry on my part, but annotated does not mean including a stack trace, simply that it includes a description of the error. Given it's a non-nil check, there was never a guarantee it's valuable, but the error is itself at minimum an annotation.

panic brings important context, but you're right in that it's not an annotation in itself. It's a program flow mechanism, but I'd argue it's very often utilized as an error flow one.

My larger point was that this still doesn't feel like an "antipattern" and I see that word thrown around enough as a conversation stopper that I've become pretty cynical about it.


"Annotated" means given `err error`, you return `fmt.Errorf("annotation: %w", err)` — nothing more or less.


you only need to wrap errors that you did not raise yourself. if your codebase already annotates an error you hopefully are annotating well enough that it’s identifiable. library code you didn’t write is unknown and needs help. wrapping every error is even more verbose and repetitive


that doesn't add much value, but it makes it easier to identify the offending place where the error occurred. What would be great is a unified way to add context in a standard and automated manner, like a stack trace.


No idea what people have against newline characters, but `if err != nil { return err }` is valid Go code.


go fmt will change that, and code should be uniformly formatted.

Go proverbs: "Gofmt's style is no one's favorite, yet gofmt is everyone's favorite."


agreed, but then what’s the issue?


I struggled with this a lot. Drove me nuts we couldn't get decent code coverage because of error handling for errors I don't even know how to replicate in the first place.

Plus once your code throws one error, every bit of calling code also needs to handle that error. The problem cascades through a codebase quickly. Seemed like a huge violation of DRY principles.

Rob Pike (my understanding is that he's one of the main guys who created the language) has actually addressed this; it's a good read:

https://go.dev/blog/errors-are-values

Tl;dr: refactor your error handling and treat it like code; DRY and SOLID principles apply, etc. The article gives an example of handling your errors in one place rather than 20, by turning remaining operations into no-ops after an error occurs.
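The pattern from the post can be sketched with a toy version of its errWriter (the failing "empty write" rule here is invented for illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// errWriter sketches the "errors are values" pattern: each write
// becomes a no-op once an error has occurred, so the caller checks
// err once at the end instead of after every call.
type errWriter struct {
	err error
	out []string // toy destination standing in for a real io.Writer
}

func (ew *errWriter) write(s string) {
	if ew.err != nil {
		return // a previous write failed: do nothing
	}
	if s == "" { // pretend empty writes fail
		ew.err = errors.New("empty write")
		return
	}
	ew.out = append(ew.out, s)
}

func main() {
	ew := &errWriter{}
	ew.write("a")
	ew.write("") // first failure recorded here
	ew.write("c") // silently skipped
	if ew.err != nil { // the single check
		fmt.Println("write failed:", ew.err)
	}
}
```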

I don't actually agree with the choice, as it only takes one key library that returns errors on every call (like the one I'm dealing with now) for this to become a huge pain. I had to completely change business logic to implement his suggestion, which isn't always viable (and I'm finding out first hand that it wasn't completely viable for us). Also, a lot more boilerplatey, no-value-add code needs to be written.

I much prefer unchecked exceptions for the most part, but at least I can understand WHY error handling is the way it is in Go.


Approximately 0% of Go codebases do this in practice. I would be floored if someone provided an example of a large project that does.


In Rust, that could be a Result::unwrap() or propagated using the ? operator.


Explicit is better than implicit


Verbose and explicit are two separate things.

This is explicit and verbose. Explicit does not need to be verbose.


I think that was the comment about Java checked exception, it's one line and it's explicit.

Though personally I feel in this case defaulting to rethrowing uncaught errors is better. Since that's the 99% case. I'd rather it be zero line.


Unless something has changed, Java checked exceptions aren't explicit: it's generally not possible to tell whether a given function call can raise an exception without inspecting the signature of that function.

> Though personally I feel in this case defaulting to rethrowing uncaught errors is better. Since that's the 99% case. I'd rather it be zero line.

This is ambiguous with the "doesn't error at all" case. If you're looking at source code `foo()` you can't tell whether that's equivalent to `if err := foo(); err != nil { return err }` or just `foo()`. You have to check the function signature to see what the return arguments are (or in Java's case, whether or not it throws).


The error being part of the method signature seems very explicit, no? You will get a compiler error if you don’t handle it in some way in the client code.

Unchecked exceptions are not explicit.


I suppose it becomes part of the signature of the caller, so maybe? I guess I was thinking "locally at the call site, I don't see anything corresponding to `if err != nil { return err }`", but implicit vs explicit error handling probably isn't well-defined, so I suppose it's up to interpretation. I usually think of it more in a local context ("is it evident right at the call site") but the caller's function signature is certainly local-ish. (shrug)


At the call site you will get a compile error. And the signature of the method you call explicitly tells you what errors you need to handle.

Most IDEs will conveniently put a red squiggle line beneath the exact call site as well, and show you the compile error when you hover over it.

And if you choose to rethrow it, you will need to add to your method an explicit annotation.


Rust's Result type is way less verbose than Go's error handling though.

Checked exceptions would have been fine if they composed decently with rest of the language...


With Rust's result type, you either have to unwrap() them or explicitly handle them with match expressions, whereas if you really wanted to in Go, you could just choose to ignore errors.


Rust has extremely useful "?" operator.


Yep, I mentioned that up-thread. It's a very convenient feature.


There's much more to differentiate Rust and Go than support for generics and error handling verbosity. In my mind (and experience) they don't even occupy the same problem space.


I struggle to see what problem space is better for one or the other. I’ve replaced all my web service type stuff with Rust and couldn’t be happier. Also a lot of stuff I used to do in Python is more concise in Rust.

No, I hung up my Go boots in 2018, so I may be a little out of touch.


IMHO:

Rust integrates a bit better with existing software. Inclusion of Rust in Linux being an example. Go culture tends to value pure Go projects a bit more than Rust's, where Rust wrappers around C libs are more accepted and common.

Go is generally easier from a social point of view, there's not much to learn as it's largely a repeat of what people are familiar with. So introducing it in a workplace is easier as the on ramp is minor.

Ubiquitous greenthreading in Go seems to make the integration piece a bit harder, but the onramp a bit easier.


I was being cheeky. I love Go and have been too busy/lazy to dive into Rust but would like to. Yes, different problem spaces but they often get compared head to head because we need to have stuff to argue over.


I wonder how practical it would be to implement much of Rust's error-handling syntactic sugar in Golang. I'm sure a lot doesn't transfer over, but Rust seems to have learned a lot from other languages like Golang in this regard.


it's going to get down to complaining about introducing generics way too late.


Only for those who want to make a stink about it.


I find this goes against the Go simplicity. I continually find that Maven & Gradle are much easier to use than Go's package-management system for versioned dependencies.


When people release a 500+ page book to explain a build system, you know you're in trouble.

https://www.amazon.com/Maven-Definitive-Guide-Sonatype-Compa...

Go is way way easier to use than Maven and Gradle.

I think I use only 4 commands daily to manage dependencies in Go, it's that easy: https://github.com/golang/go/wiki/Modules#daily-workflow


This might be the first time I've heard someone say comprehensive documentation is a bad thing.

Go's module docs are 82 printed pages. Is that better or worse? https://go.dev/ref/mod


That's 36 pages printed (try printing it), and includes the grammar for the go.mod file, information about the ecosystem of services available (module proxy, sumdb) and their protocol definitions, how to access and publish private modules, etc. It doesn't skimp on thoroughness anywhere.


Every place I've worked, the Java build pipeline has always been cumbersome; it's the one thing that one guy knows, and you're afraid to change it because it's complicated.


The Sonatype Maven book isn’t actually that comprehensive, unfortunately.



