My team has ~200k lines of Go code and I'm exceedingly happy with the state of the codebase. Having previously maintained a C++ codebase of similar size, I can say that the pace of change is higher, the effort needed for large-scale refactorings is lower, and our ability to reason about the system is similar.
A few examples:
- Refactorings are simpler due to the use of consumer-side interfaces. Say you want to inject an in-memory cache above a backing store: you probably only have to change one constructor call and provide an implementation matching the 2-4 relevant methods. That's it (see the first sketch after this list).
- Tracing code is slightly worse, due to having to track callers through said duck-typed interfaces, but on the flip side multi-threaded code is sufficiently simpler to reason about that I call it a wash. Having previously had to do threading in the form of "control flow" state machines, and then fibers (which were better but not perfect, and still aren't widely available), Go's constructs are great: locks where appropriate, channels where appropriate, and overall very fast, clean code (see the second sketch after this list).
- Performance is good, and reliable. Not as good by cycle-count as C++ - and e.g. the comparable RPC libraries are definitely less mature than Google's very-well-kicked C++ libraries - but on the other hand it scales almost linearly. We started a system at ~5 cores/task under AutoPilot, and then when we next got around to adding more tasks it was peaking at ~60 cores/task at essentially the same per-core throughput. I've never managed to write a C++ server that can accidentally scale concurrency by >10x without hitting _some_ bottleneck.
- We use Go for ~everything. Server code, definitely Go. Client tools, also Go. Simple scripts, bash until they need their first 'if' or flag or loop, then Go too.
- I'd prefer real generics to interface{}, but it comes up rarely enough that it's no more than a minor annoyance.
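To make the first example concrete, here's roughly what that looks like - a minimal sketch with hypothetical names like Store and NewCachedStore, not our actual code:

// Store is declared by the consumer and lists only the methods it
// actually uses; any backing store with matching methods satisfies
// it implicitly.
type Store interface {
    Get(key string) ([]byte, error)
    Put(key string, value []byte) error
}

// cachedStore layers an in-memory cache above any Store. Not safe
// for concurrent use as written; a real version would add a
// sync.Mutex and some eviction policy.
type cachedStore struct {
    backing Store
    cache   map[string][]byte
}

func NewCachedStore(backing Store) Store {
    return &cachedStore{backing: backing, cache: make(map[string][]byte)}
}

func (c *cachedStore) Get(key string) ([]byte, error) {
    if v, ok := c.cache[key]; ok {
        return v, nil
    }
    v, err := c.backing.Get(key)
    if err == nil {
        c.cache[key] = v
    }
    return v, err
}

func (c *cachedStore) Put(key string, value []byte) error {
    c.cache[key] = value
    return c.backing.Put(key, value)
}

Injecting it really is the one-line change at the constructor call site: s := NewCachedStore(s).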
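And for the second point, the kind of thing that used to be a hand-rolled state machine is now just goroutines and a channel. Again a generic illustration, not code from our system:

// fanOut runs fn over every input concurrently and collects the
// per-input error over a channel, blocking until every goroutine
// has reported. A mutex-guarded map would be just as idiomatic for
// the aggregation step.
func fanOut(inputs []string, fn func(string) error) map[string]error {
    type result struct {
        in  string
        err error
    }
    results := make(chan result)
    for _, in := range inputs {
        go func(in string) {
            results <- result{in, fn(in)}
        }(in)
    }
    errs := make(map[string]error, len(inputs))
    for range inputs {
        r := <-results
        errs[r.in] = r.err
    }
    return errs
}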
I can't speak to the issues of package management - we dropped compatibility with Go's native package structure fairly early on and went all in with blaze/Bazel (http://bazel.io) to coordinate builds and dependencies and whatnot, and haven't had reason to try modules yet.
> Simple scripts, bash until they need their first 'if' or flag or loop, then Go too.
If you don't mind, can you give a little insight into what this looks like in practice? I'm not sure how to use a compiled language as a script. I've played with executing Go as a script using a shebang hack, but I somehow don't think this is how others are doing it.
For reference, the shebang hack I was using looked like this:
//usr/bin/env go run $0 $@; exit $?

package main

import "fmt"

func main() {
    fmt.Println("i am a script")
}
It looks more like Go code than a bash script - the tradeoff we settled on is that pretty much as soon as you need to add any sort of logic it's no longer really a simple "script" and you _know_ it's just going to grow into a monstrosity. Better to use a language with real functions, real error handling, that you can actually unit test, etc. In that sense, I guess you could say we write lots of little tools more so than we write scripts.
For something of this form, if the standard library has the functionality, we use it - os.Mkdir() instead of `mkdir` and so on. But to simplify shelling out, we have a little library that includes the interface
type Runner interface {
    Execute(dir, name string, args ...string) (*CmdOutput, error)
}
so it's easy enough to call miscellaneous programs and get the exit code / stdout / stderr. It also supports printing and executing a command, etc.
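The CmdOutput type and the implementation behind that interface aren't anything special; a plausible sketch using os/exec - the real library differs, and these field names are guesses - would be:

import (
    "bytes"
    "os/exec"
)

// CmdOutput holds what a caller cares about from a finished command.
// (Hypothetical field names; the actual type presumably differs.)
type CmdOutput struct {
    ExitCode       int
    Stdout, Stderr string
}

// execRunner is a trivial Runner backed by os/exec.
type execRunner struct{}

func (execRunner) Execute(dir, name string, args ...string) (*CmdOutput, error) {
    cmd := exec.Command(name, args...)
    cmd.Dir = dir
    var stdout, stderr bytes.Buffer
    cmd.Stdout = &stdout
    cmd.Stderr = &stderr
    err := cmd.Run()
    out := &CmdOutput{Stdout: stdout.String(), Stderr: stderr.String()}
    if exitErr, ok := err.(*exec.ExitError); ok {
        // The command ran but exited non-zero: report the exit code
        // rather than treating it as an error at the API level.
        out.ExitCode = exitErr.ExitCode()
        return out, nil
    }
    return out, err
}

A script-style tool then calls Execute with the working directory and argv, and branches on ExitCode / Stdout as needed.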
Iterating on a program of this form looks like `go run whatever --flag=value`, though your shebang hack looks like it'd also do nicely.