Really wish these HN links to download pages were HTTPS. Lots of people at GopherCon are on untrustworthy conference wifi and probably wouldn't notice a MITM'd download page pointing to non-HTTPS binary downloads.
From the changelog:
> Builds in Go 1.5 will be slower by a factor of about two. The automatic translation of the compiler and linker from C to Go resulted in unidiomatic Go code that performs poorly compared to well-written Go. Analysis tools and refactoring helped to improve the code, but much remains to be done. Further profiling and optimization will continue in Go 1.6 and future releases.
This seems like quite a drastic change. I thought compilation speed was one of the pillars of Go. In that light, why did they decide to release this anyway?
Compilation speed was just a gimmick anyway; the compiler is fast because it doesn't do many expensive optimizations. It's like bragging that a compiler is fast when you run it with -O0. There are languages that require a lot of work just to compile at all, like C++, but for most languages slow compilation is more a measure of how advanced the compiler is.
And it's not just the compiler: the linker didn't even know whether a package was used or not, which is why you have to manually remove unused imports (otherwise they would be linked in, bloating the binary).
You're more than welcome to write C in the fashion that the Go compiler enforces, and get much faster builds. Their compiler is young and immature relative to things like GCC, and stating that it may be missing expensive optimization passes is not "plain wrong".
That suggestion is not really practical, as you'd need to impose the same requirement on all your third-party libraries. While gcc will spend more time optimizing than Go, the main reason for the speed difference at the moment IS unnecessary work done by the preprocessor because of the primitive importing.
No, it's not, unless you count time spent compiling inlined functions and templates as "preprocessing time" (which it isn't—it's compilation time). Time the difference between clang++ on -O0 versus -O3 if you don't believe me.
Others already pointed out that paragraph 1 isn't really correct.
I need to point out that your second point, about the linker, is totally false. The linker does know about unreferenced code and drops it. The import strictness is a language requirement and comes from experience with large codebases at Google and elsewhere. Use goimports in your editor's save hook and don't think about it.
This is the nature of a priority. They are saying they value compilation speed more than the maintainers of C might, and are willing to sacrifice other things (like runtime speed) in that interest.
2x slower still means much faster than C++. Go has a strict release cycle[0], and at some point you have to make the switch if you want a compiler that's easier to develop. I'm sure it'll get much better soon.
The compilation time is slower, not the resulting compiled binary.
> On average the programs in the Go 1 benchmark suite run a few percent faster in Go 1.5 than they did in Go 1.4, while as mentioned above the garbage collector's pauses are dramatically shorter, and almost always under 10 milliseconds.
Python caches the bytecode in .pyc files, so the compilation happens only the first time you import a module. In a typical Python start-up only the main source file is byte-compiled.
They are porting the toolchain to be self-hosted, in Go only, and they are willing to trade off compilation speed, temporarily, for getting there at all. Machine-translated code will be full of low-hanging fruit for getting the old speed back.
"By default, Go programs run with GOMAXPROCS set to the number of
cores available; in prior releases it defaulted to 1."
This is exciting, but what are the implications? I remember hearing that some third-party code, and maybe even some stdlib code, behaved poorly on multiple processors.
With the new scheduler I believe this is much less of an issue. It is huge though because there might be a bunch of data races in the wild that were hidden until now and are about to see the light of day. Hopefully most people test with -race these days.
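For example (a minimal sketch, not from the release notes): an unsynchronized counter incremented from two goroutines is a data race that can stay hidden when everything runs on one core, but the race detector flags it immediately:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        counter := 0
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    counter++ // unsynchronized write: a data race
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter) // unpredictable with GOMAXPROCS > 1
    }

Running it with `go run -race main.go` (or testing with `go test -race`) reports the race.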
For high concurrency systems, I'd expect false sharing to start rearing its ugly head.
I only recently got knocked over the head with -race by the oh-so-diplomatic folks on IRC. It's very good. Might as well use it -- otherwise it's like driving with a broken check engine light.
Speaking of false sharing, would it be a good idea to take the Go implementation of Disruptor and use that for buffered channels?
> For high concurrency systems, I'd expect false sharing to start rearing its ugly head.
I'd be really surprised if that results in programs in parallel mode becoming slower than sequential (which is what you get with GOMAXPROCS=1). Cache effects are important, but not that important in a setting like Go, where few people use concurrent read-write data structures anyway (other than channels, of course) due to the lack of generics.
Much more likely is that some programs become slower due to the fact that GOMAXPROCS>1 requires more locks and atomic operations. That's not false sharing, though; that's just synchronization overhead.
> I'd be really surprised if that results in programs in parallel mode becoming slower than sequential
I'm going to guess that you haven't played too much with executing things in parallel? In Clojure groups, people have heard beginners express surprise about parallel mode programs running slower so often, it's somewhat shaped the reaction to such questions.
> few people use concurrent read-write data structures anyway (other than channels, of course)
One can still get in trouble with channels of pointers to structs.
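A rough sketch of that trap (names invented for illustration): sending a pointer over a channel shares the pointee, so the sender mutating it after the send races with the receiver's read:

    package main

    import "fmt"

    type Msg struct{ N int }

    func main() {
        ch := make(chan *Msg)
        m := &Msg{N: 1}
        go func() {
            ch <- m
            m.N = 2 // mutating after the handoff: races with the read below
        }()
        fmt.Println((<-ch).N) // may print 1 or 2; -race flags this
    }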
> Much more likely is that some programs become slower due to the fact that GOMAXPROCS>1 requires more locks and atomic operations. That's not false sharing,
Depending on how the locks and atomic operations are implemented and used, the slowdown with locks and atomic operations could be in large part because of false sharing. (I don't know the details for Go.)
I think you're overstating how much false sharing in particular causes parallel slowdowns. In the case of Clojure, for example, it's probably just straightforward synchronization overhead (cost of atomic ops), just as it is in Go.
False sharing is a very specific type of synchronization problem whereby data tightly packed in memory gets shared between multiple processors because the cache line size is larger than desired, even though logically the data in question are completely independent. This can happen, but it's nowhere near the most common type of synchronization-related performance problem in my experience. If your problem actually needs to share the data between multiple processors (e.g. with Go channels), then it's not "false" sharing anymore—it's "true" sharing.
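To make that concrete, here's a minimal sketch of false sharing in Go, assuming a 64-byte cache line (field names and iteration counts are arbitrary). Two goroutines each write only their own counter, but the counters sit on the same line, so the cores contend anyway; the commented-out padding separates them:

    package main

    import "sync"

    type counters struct {
        a int64
        // uncomment to push b onto a different 64-byte cache line:
        // _ [56]byte
        b int64
    }

    func main() {
        var c counters
        var wg sync.WaitGroup
        wg.Add(2)
        go func() { // writes only c.a
            defer wg.Done()
            for i := 0; i < 1e8; i++ {
                c.a++
            }
        }()
        go func() { // writes only c.b, logically independent of c.a
            defer wg.Done()
            for i := 0; i < 1e8; i++ {
                c.b++
            }
        }()
        wg.Wait()
    }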
I stand corrected. It's possible that other data might get accessed from another socket while going after a lock, but if every thread is just after the data protected by the lock, that's just sharing. The precise semantics got lost somewhere along the way, but I kept using the term anyway.
> I'm going to guess that you haven't played too much with executing things in parallel? In Clojure groups, people have heard beginners express surprise about parallel mode programs running slower so often, it's somewhat shaped the reaction to such questions.
pcwalton is one of the lead engineers of Servo, a Mozilla Research project to develop a more parallelizable browser engine in Rust. I'd say he's played around quite a bit with executing things in parallel!
What does lack of generics have to do with concurrent vs non-concurrent data structures? It seems like if the problem calls for it, concurrent data structures would be used regardless of generics.
Concurrent data structures (especially lock-free ones) are really hard to write. Specializing e.g. a lock-free FIFO queue to each type you want to use it with isn't feasible in practice. It's much easier to just protect a built-in slice or map with a mutex, and that's what I believe Go code generally does.
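That idiom, sketched (the type here is illustrative, not from any library):

    package main

    import "sync"

    // SafeCounter is the common Go pattern: a built-in map guarded
    // by a mutex, instead of a specialized concurrent data structure.
    type SafeCounter struct {
        mu sync.Mutex
        m  map[string]int
    }

    func (c *SafeCounter) Inc(key string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.m[key]++
    }

    func (c *SafeCounter) Get(key string) int {
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.m[key]
    }

    func main() {
        c := SafeCounter{m: make(map[string]int)}
        c.Inc("hits")
        _ = c.Get("hits")
    }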
> What does lack of generics have to do with concurrent vs non-concurrent data structures?
Generics allow building reusable forms of complex data structures, reducing the cost in developer effort of each specialized use. By not supporting generics, Go increases the cost of each specialized use of complex data structures, which narrows the range of circumstances where the cost will be justified by the benefit.
Whether or not "the problem calls for it" is always a cost vs. benefit question, and Go -- compared to languages with support for generics -- increases the cost of this particular solution.
I'm not familiar with Disruptor, but if you want an alternative to buffered channels, maybe mangos[0] would interest you; it supports inproc and IPC among other transports.
> The DNS resolver in the net package has almost always used cgo to access the system interface. A change in Go 1.5 means that on most Unix systems DNS resolution will no longer require cgo, which simplifies execution on those platforms.
This is great news if you're on one of the supported systems, use the net package, and don't like the libc dependency, or if you're doing a lot of concurrent DNS requests.
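The net package docs describe GODEBUG=netdns=go and GODEBUG=netdns=cgo for forcing one resolver or the other at run time. The concurrent-lookup win, sketched (hostnames are arbitrary): with the pure Go resolver a blocked lookup costs a goroutine, not an OS thread parked in cgo.

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    func main() {
        hosts := []string{"golang.org", "google.com", "example.com"}
        var wg sync.WaitGroup
        for _, h := range hosts {
            wg.Add(1)
            go func(h string) {
                defer wg.Done()
                // Each blocked lookup consumes only a goroutine
                // under the pure Go resolver.
                addrs, err := net.LookupHost(h)
                if err != nil {
                    fmt.Println(h, "error:", err)
                    return
                }
                fmt.Println(h, addrs)
            }(h)
        }
        wg.Wait()
    }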
I'm excited to see experimental support for vendoring. One nice thing about Go is that dependency management is part of the core language design, but it always fell flat when you needed a specific version of a library. Vendoring solves that huge issue; I hope it works as nicely as they claim.
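For reference, the experiment is gated by the GO15VENDOREXPERIMENT environment variable, and the layout looks roughly like this (package paths invented for illustration):

    $GOPATH/src/github.com/you/project/
        main.go                    # imports "github.com/some/dep"
        vendor/
            github.com/some/dep/   # with GO15VENDOREXPERIMENT=1 set,
                dep.go             # this copy is resolved first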
> The garbage collector is now concurrent and provides dramatically lower pause times by running, when possible, in parallel with other goroutines.
> By default, Go programs run with GOMAXPROCS set to the number of cores available; in prior releases it defaulted to 1.
But how important is this? It doesn't seem like it affects the average user:
> The compiler and runtime are now written entirely in Go (with a little assembler). C is no longer involved in the implementation, and so the C compiler that was once necessary for building the distribution is gone.
We're pretty excited about the compiler and runtime being in Go, because we publish binaries for many different operating systems. Previously that required installing a separate cross-compilation toolchain, but it no longer does.
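Concretely, a cross-build is now just a matter of setting GOOS/GOARCH (the output names below are arbitrary):

    GOOS=linux   GOARCH=amd64 go build -o myprog-linux .
    GOOS=darwin  GOARCH=amd64 go build -o myprog-darwin .
    GOOS=windows GOARCH=amd64 go build -o myprog.exe .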
Rewriting the compiler in Go was a precondition for improving garbage collection, according to Rob Pike. The old C codebase was complex and finicky, and contributors to Go increasingly know Go better than C.
So, that more complex garbage collector is a product of that less-sexy rewrite.
Why the downvotes? I think this is a valid point. For real-time work, e.g. video games, you want to render every ~15ms, and doing that with a GC is really NOT trivial. I'm not saying GC'd languages perform badly; they're actually great for throughput. But doing low-latency work is still hard.
The new GC reduces pauses to single-digit milliseconds. Rendering every ~15ms seems reasonable under those constraints. Plus I don't think a game engine will typically put a lot of pressure on the garbage collector.
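If you want to check that claim against your own workload, the runtime exposes recent pause times; a minimal sketch:

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    func main() {
        // ... run your workload here ...
        runtime.GC() // force at least one collection for the demo
        var s runtime.MemStats
        runtime.ReadMemStats(&s)
        // PauseNs is a circular buffer of recent stop-the-world pauses.
        last := s.PauseNs[(s.NumGC+255)%256]
        fmt.Println("last GC pause:", time.Duration(last))
        fmt.Println("total:", time.Duration(s.PauseTotalNs), "over", s.NumGC, "GCs")
    }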
I don't think the new GC precludes Go from being suitable for writing games. On the contrary, I think it is now more suitable, and look forward to using Go for writing games myself.
So I admire their goal of reducing latency (it's probably a latency-throughput tradeoff), and I keep getting told that real-time (which in my opinion requires low latency) is possible in GC'd languages; I just don't know how to do it reliably. But I hope I'm wrong and it is possible to do real-time with a GC, as a GC is often a prerequisite for high-level language features.
ISTR that producing a clean sine wave on a scope requires a real-time operating system, and it was one of the first tests for seeing whether your real-time Linux patches worked.
> Also in the crypto/cipher package, there is now support for nonce lengths other than 96 bits in AES's Galois/Counter mode (GCM), which some protocols require.
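The new entry point is cipher.NewGCMWithNonceSize. A minimal sketch (the all-zero key and nonce are for illustration only; never reuse a nonce in practice):

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "fmt"
    )

    func main() {
        key := make([]byte, 16) // all-zero key: illustration only
        block, err := aes.NewCipher(key)
        if err != nil {
            panic(err)
        }
        // New in 1.5: GCM with a non-standard nonce length
        // (here 16 bytes instead of the standard 12).
        gcm, err := cipher.NewGCMWithNonceSize(block, 16)
        if err != nil {
            panic(err)
        }
        nonce := make([]byte, gcm.NonceSize()) // illustration only
        ct := gcm.Seal(nil, nonce, []byte("hello"), nil)
        fmt.Printf("%x\n", ct)
    }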
Has anyone spotted the documentation for the new shared library system yet? Plugin-based architecture is one of the biggest things I've wanted in Go and it seems to finally be happening.
I played around with this on my own, and the shared library system only works with Linux binaries for now. I wasn't able to get it working on OS X or Windows; Go gives a very clear "platform target is not supported" message.
I think it is mostly a starting point, and not generally useful unless your only target is Linux (which may very well be the case unless you're working on desktop apps).
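For anyone who wants to try it on Linux, the commands are roughly these (per the release notes and design doc):

    go install -buildmode=shared std          # build the standard library as a shared library
    go build -linkshared -o hello hello.go    # link a program against it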
I've been playing around with it, here's a neat example that does shared library partial bindings for http.Server in C and Python: https://github.com/shazow/gohttplib
Is it possible to dynamically load libraries that have been compiled with -buildmode=shared at runtime, from within a running Go program? The design doc talked about it, IIRC.
According to the release notes [1], "For the amd64 architecture only, the compiler has a new option, -dynlink, that assists dynamic linking by supporting references to Go symbols defined in external shared libraries."...so I assume that if you want all of your Go symbols visible in your shared Go libraries, you can do it on amd64. I have not tested this myself.
The question, though, is how you actually load it. Can it be done without patching the runtime, or without running some C-based "bridge" that loads the shared object file? I'm talking about a plugin architecture, say, where I want to load an implementation of an interface from a .so file.
Based on a cursory read of the design document, this appears to be done not by putting all of the Go signatures in the library (in ELF headers somewhere, say) but by relying on hashes. From the design document [1]: "The shared library should include a hash of the export data for each package that it contains. It should also include a hash of the export data for each shared library that it imports. These hashes can be used to determine when packages must be recompiled. These hashes should be accessible to any build tool, not just the go tool." To me this basically says that as long as the hashes in the .so file match your binary, a dynamic load can trust the uses of the same interface known at compile time.
I have not yet seen the "plugin" package in https://tip.golang.org/pkg/ as promised under the heading of "A new package" in the design doc.
But I'm wondering: will this mysterious "plugin" package require changes to Go's core runtime, or can I implement whatever functionality it will expose right now? It looks to me like it can't be done without some C bridge/glue, short of changing Go itself.
The mysterious "plugin" package does not yet exist, and won't be in the 1.5 release. I would encourage you to try to implement it and send in the changes. The ideas in the execution modes doc are being steadily filled in by people who need them.
Do you have any pointers as to where I should start? I mean, I can see how I could probably do it with a C bridge, but I don't know the runtime well enough to attempt it without one. Glad it's written in Go now :)
Can anybody clarify what is meant by "Generate machine descriptions from PDFs (or maybe XML)" and "Read in PDF, write out an assembler configuration"? That sounds . . . ambitious.