Go 1.5 Beta (golang.org)
137 points by azylman on July 8, 2015 | hide | past | favorite | 80 comments



Really wish these HN links to download pages were HTTPS. Lots of people at Gophercon on untrustworthy conference wifi that probably wouldn't notice a MITM'd download page pointing to non-HTTPS binary downloads.


Yep, the link should be https://golang.org/dl/.

Work is ongoing to make golang.org be HTTPS only.


changelog: https://tip.golang.org/doc/go1.5

It looks like they've incorporated Google's trace viewer[0] into the go tool. Nice.

[0]https://github.com/google/trace-viewer


From the changelog: > Builds in Go 1.5 will be slower by a factor of about two. The automatic translation of the compiler and linker from C to Go resulted in unidiomatic Go code that performs poorly compared to well-written Go. Analysis tools and refactoring helped to improve the code, but much remains to be done. Further profiling and optimization will continue in Go 1.6 and future releases.

This seems like quite a drastic change. I thought compilation speed was one of the pillars of Go. In that light, why did they decide to release this anyway?


Compilation speed was just a gimmick anyway; the compiler is fast because it doesn't do many expensive optimizations. It was like bragging that using "-O0" makes a compiler so fast. There are languages that require a lot of work just to compile at all, like C++, but for most languages slow compilation is more a measure of how advanced the compiler is.

And it's not just the compiler, the linker didn't even know if a package is used or not, which is why you have to manually remove unused imports (otherwise they would be linked in, bloating the binary).


This is plain wrong; the main reason the compiler is fast, according to the authors[1], is that it handles dependencies more sanely than C/C++.

https://talks.golang.org/2012/splash.article


You're more than welcome to write C in the fashion that the Go compiler enforces, and get much faster builds. Their compiler is young and immature relative to things like GCC, and stating that it may be missing expensive optimizations is not "plain wrong".


That suggestion is not really practical, as you would need to impose the same requirement on all your third-party libraries. While gcc spends more time optimizing than Go does, the main reason for the speed difference at the moment IS unnecessary work done by the preprocessor because of the primitive importing.


No, it's not, unless you count time spent compiling inlined functions and templates as "preprocessing time" (which it isn't—it's compilation time). Time the difference between clang++ on -O0 versus -O3 if you don't believe me.


Others already pointed out that paragraph 1 isn't really correct.

I need to point out that your second point about the linker is totally false. The linker knows and does drop unreferenced code. The import strictness is a language requirement and comes from experience with large codebases at Google and elsewhere. Use goimports in your editor's save hook and don't think about it.
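For anyone who hasn't wired this up yet, the setup amounts to roughly the following (shown as a manual invocation; `golang.org/x/tools/cmd/goimports` is the import path current as of this release):

```shell
# Fetch the goimports tool into $GOPATH/bin.
go get golang.org/x/tools/cmd/goimports

# Rewrite files in place: adds missing imports, drops unused ones,
# and gofmt-formats. Editors typically run this on save.
goimports -w .
```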


It probably takes more effort to write a whiny blog post about the imports behaviour than hooking goimports on save.


This is the nature of a priority. They are saying they value compilation speed more than the maintainers of C might, and are willing to sacrifice other things (like runtime speed) in that interest.


2x slower still means much faster than C++. Go has a strict release cycle[0] and at some point you have to make the switch if you want a compiler that's easier to develop. I'm sure it'll get much better soon.

https://github.com/golang/go/wiki/Go-Release-Cycle


Fast builds are definitely a major feature. Go has attracted more Python developers than C++ developers.

http://commandcenter.blogspot.it/2012/06/less-is-exponential...

Go is great as a much faster Python, where speed matters. If Go's compiles get too slow, it becomes less attractive.


The compilation time is slower, not the resulting compiled binary.

> On average the programs in the Go 1 benchmark suite run a few percent faster in Go 1.5 than they did in Go 1.4, while as mentioned above the garbage collector's pauses are dramatically shorter, and almost always under 10 milliseconds.


I'm referring to compilation speed. In Python and Ruby, for example, there is no compile stage. This makes them attractive for rapid iteration.


Meh, most people will never notice it, and if it's an issue for someone, they can continue to develop with 1.4 locally until 1.6 comes out in 6 months.


There is a compilation-to-bytecode stage + lots of file look ups during start up of the Python interpreter and during every module import.


Python caches the bytecode in .pyc files, so the compilation happens only the first time you import a module. In a typical Python start-up only the main source file is byte-compiled.


How often do you compile code versus run it? Particularly in cases where speed matters?


Why do you think projects like LightTable and Apple's Playgrounds exist?

http://www.chris-granger.com/2012/02/26/connecting-to-your-c...

http://www.objc.io/issues/16-swift/rapid-prototyping-in-swif...

Rapid prototyping doesn't work as well if you have to wait for compilation.

JRebel exists for Java to improve redeployment times:

http://zeroturnaround.com/software/jrebel/


You could argue lots of long running server software has been compiled more times through development than it has been started in prod.


They are porting to a self-hosted, Go-only toolchain. And they are willing to trade off compilation speed, temporarily, to get there at all. Machine-translated code will be full of low-hanging fruit for getting the old speed back.


  "By default, Go programs run with GOMAXPROCS set to the number of
  cores available; in prior releases it defaulted to 1."
This is exciting, but what are the implications? I remember something once about some 3rd-party code, and maybe even some stdlib code behaving poorly on multiple processors.


With the new scheduler I believe this is much less of an issue. It is huge though because there might be a bunch of data races in the wild that were hidden until now and are about to see the light of day. Hopefully most people test with -race these days.
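As a sketch of the kind of bug -race is for: the mutex below makes the shared counter safe under GOMAXPROCS > 1, while deleting the Lock/Unlock pair and running with `go run -race` is exactly the sort of hidden race that would now start biting:

```go
package main

import (
	"fmt"
	"sync"
)

// count increments a shared counter from n goroutines. The sync.Mutex
// serializes the increments; without it, the unsynchronized total++ is a
// data race that the race detector reports.
func count(n int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	total := 0
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			total++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return total
}

func main() {
	fmt.Println(count(1000)) // 1000
}
```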


For high concurrency systems, I'd expect false sharing to start rearing its ugly head.

I only recently got knocked over the head with -race by the oh-so-diplomatic folks on IRC. It's very good. Might as well use it -- otherwise it's like driving with a broken check engine light.

Speaking of false sharing, would it be a good idea to take the Go implementation of Disruptor and use that for buffered channels?


> For high concurrency systems, I'd expect false sharing to start rearing its ugly head.

I'd be really surprised if that results in programs in parallel mode becoming slower than sequential (which is what you get with GOMAXPROCS=1). Cache effects are important, but not that important in a setting like Go, where few people use concurrent read-write data structures anyway (other than channels, of course) due to the lack of generics.

Much more likely is that some programs become slower due to the fact that GOMAXPROCS>1 requires more locks and atomic operations. That's not false sharing, though; that's just synchronization overhead.


> I'd be really surprised if that results in programs in parallel mode becoming slower than sequential

I'm going to guess that you haven't played too much with executing things in parallel? In Clojure groups, people have heard beginners express surprise about parallel mode programs running slower so often, it's somewhat shaped the reaction to such questions.

> few people use concurrent read-write data structures anyway (other than channels, of course)

One can still get in trouble with channels of pointers to structs.

> Much more likely is that some programs become slower due to the fact that GOMAXPROCS>1 requires more locks and atomic operations. That's not false sharing,

Depending on how the locks and atomic operations are implemented and used, the slowdown with locks and atomic operations could be in large part because of false sharing. (I don't know the details for Go.)


I think you're overstating how much false sharing in particular causes parallel slowdowns. In the case of Clojure, for example, it's probably just straightforward synchronization overhead (cost of atomic ops), just as it is in Go.

False sharing is a very specific type of synchronization problem whereby data tightly packed in memory gets shared between multiple processors because the cache line size is larger than desired, even though logically the data in question are completely independent. This can happen, but it's nowhere near the most common type of synchronization-related performance problem in my experience. If your problem actually needs to share the data between multiple processors (e.g. with Go channels), then it's not "false" sharing anymore—it's "true" sharing.


I stand corrected. It's possible that unrelated data might get accessed from another socket while going after a lock, but if every thread is just after the data guarded by the lock, that's plain (true) sharing. The precise semantics got dropped somewhere along the way, and I kept using the term anyway.


> I'm going to guess that you haven't played too much with executing things in parallel? In Clojure groups, people have heard beginners express surprise about parallel mode programs running slower so often, it's somewhat shaped the reaction to such questions.

pcwalton is one of the lead engineers of Servo, a Mozilla Research project to develop a more parallelizable browser engine in Rust. I'd say he's played around quite a bit with executing things in parallel!

https://github.com/servo/servo


What does lack of generics have to do with concurrent vs non-concurrent data structures? It seems like if the problem calls for it, concurrent data structures would be used regardless of generics.


Concurrent data structures (especially lock-free data structures) are really hard to write. Specializing e.g. a lock-free FIFO stack to each type you want to use it with isn't feasible in practice. It's much easier to just protect a built-in slice or map with a mutex, and that's what I believe Go code generally does.
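A sketch of that idiom, a built-in map behind a sync.RWMutex (the type and method names here are illustrative, not a library API):

```go
package main

import (
	"fmt"
	"sync"
)

// counterMap guards a built-in map with a RWMutex: readers can proceed
// in parallel, writers are exclusive. No generics needed, at the cost of
// re-writing this wrapper for each key/value type you care about.
type counterMap struct {
	mu sync.RWMutex
	m  map[string]int
}

func newCounterMap() *counterMap {
	return &counterMap{m: make(map[string]int)}
}

func (c *counterMap) Inc(key string) {
	c.mu.Lock()
	c.m[key]++
	c.mu.Unlock()
}

func (c *counterMap) Get(key string) int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.m[key]
}

func main() {
	c := newCounterMap()
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc("hits")
		}()
	}
	wg.Wait()
	fmt.Println(c.Get("hits")) // 100
}
```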


> What does lack of generics have to do with concurrent vs non-concurrent data structures?

Generics allow building reusable forms of complex data structures, reducing the cost in developer effort of each specialized use. By not supporting generics, Go increases the cost of each specialized use of complex data structures, which narrows the range of circumstances where the cost will be justified by the benefit.

Whether or not "the problem calls for it" is always a cost vs. benefit question, and Go -- compared to languages with support for generics -- increases the cost of this particular solution.


I'm not familiar with disruptor, but if you want an alternative to buffered channels maybe mangos[0] would interest you, it supports inproc and IPC among other transports.

[0] https://github.com/gdamore/mangos


Performance is supposed to be much improved: https://docs.google.com/document/d/1At2Ls5_fhJQ59kDK2DFVhFu3...


Obviously the new GC and GOMAXPROCS take the spotlight, I was just surprised about the new trace function.


Thank you...this is what should have been posted in the original link.


Of particular note, Solaris support in Go 1.5 is greatly improved thanks to the work of Aram Hăvărneanu; Oracle sponsored most of that work.

Notably, cgo is now supported on Solaris as of Go 1.5 beta.

The same should hold true of various OpenSolaris-based distributions such as Illumos, et al.


You're welcome. Mdb, DTrace, and SPARC64 support are coming soon too...


Please file bugs to add anything missing from the release notes.


> The DNS resolver in the net package has almost always used cgo to access the system interface. A change in Go 1.5 means that on most Unix systems DNS resolution will no longer require cgo, which simplifies execution on those platforms.

This is great news if you're on one of those systems, using the net package, and either don't like the libc dependency or are doing a lot of concurrent DNS requests.
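The release notes also describe a GODEBUG knob for forcing one resolver or the other at run time; a sketch (the server binary name is hypothetical):

```shell
# Force the pure-Go resolver (the new default on most Unix systems).
GODEBUG=netdns=go ./myserver

# Fall back to the cgo/libc resolver, e.g. if you depend on
# nsswitch.conf features the Go resolver doesn't handle.
GODEBUG=netdns=cgo ./myserver
```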


If you use Go and you can, please try it out and file bugs. http://golang.org/issue


I'm excited to see experimental support for vendoring. One nice thing about go is that the dependency management is part of the core language design, but it always fell flat when you needed specific versions of a library. Vendoring solves that huge issue, I hope it works as nicely as they claim.
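In Go 1.5 the vendoring behavior is opt-in via an environment variable; roughly (the dependency path below is hypothetical):

```shell
# Opt in to the Go 1.5 vendor experiment.
export GO15VENDOREXPERIMENT=1

# With the flag set, the go tool resolves imports from a vendor/
# directory next to your code before consulting GOPATH.
mkdir -p vendor/github.com/some/dep
go build ./...
```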


These are pretty big to me:

The garbage collector is now concurrent and provides dramatically lower pause times by running, when possible, in parallel with other goroutines.

By default, Go programs run with GOMAXPROCS set to the number of cores available; in prior releases it defaulted to 1.

But how important is this? It doesn't seem like it affects the average user:

The compiler and runtime are now written entirely in Go (with a little assembler). C is no longer involved in the implementation, and so the C compiler that was once necessary for building the distribution is gone.


We're pretty excited about the compiler and runtime being in Go, because we publish binaries for many different operating systems. Previously this required installing a separate cross-compilation toolchain, but it no longer does.
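With the toolchain in Go, cross-compiling reduces to setting environment variables per target; a sketch (output binary names are illustrative):

```shell
# Build the same package for three targets from one machine; no
# per-target C toolchain bootstrap is needed anymore.
GOOS=linux   GOARCH=amd64 go build -o myapp-linux-amd64
GOOS=darwin  GOARCH=amd64 go build -o myapp-darwin-amd64
GOOS=windows GOARCH=386   go build -o myapp-windows-386.exe
```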


Rewriting the compiler in Go was a precondition for improving garbage collection, according to Rob Pike. The old C codebase was complex and finicky, and, to an increasing extent, contributors to the Go language often know Go better than C.

So, that more complex garbage collector is a product of that less-sexy rewrite.


Great explanation, that explains a lot!


> The "stop the world" phase of the collector will almost always be under 10 milliseconds and usually much less.

Well we can toss out Go as a suitable language for real-time applications such as video games and audio processing.


Why the down-votes? I think this is a valid point. For real-time use cases, e.g. video games, you want to render every ~15ms, and doing that with a GC is really NOT trivial. I'm not saying GC'd languages perform badly; they're actually great for throughput, but low-latency work is still hard.


The new GC reduces pauses to the single-digit milliseconds. Rendering every ~15ms seems reasonable under those constraints. Plus I don't think a game engine will typically put a lot of pressure on the garbage collector.

I don't think the new GC precludes Go from being suitable for writing games. On the contrary, I think it is now more suitable, and look forward to using Go for writing games myself.
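One common way to keep GC pressure low in a frame loop is recycling per-frame allocations, for example with sync.Pool (available since Go 1.3); a sketch, with buffer size and contents purely illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// framePool hands out reusable vertex buffers so each frame's scratch
// allocation is recycled instead of becoming garbage for the collector.
var framePool = sync.Pool{
	New: func() interface{} { return make([]float32, 0, 4096) },
}

// renderFrame grabs a buffer, fills it, and returns it to the pool.
func renderFrame() int {
	buf := framePool.Get().([]float32)[:0]
	// ... fill buf with this frame's vertex data ...
	buf = append(buf, 1.0, 2.0, 3.0)
	n := len(buf)
	framePool.Put(buf)
	return n
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(renderFrame()) // 3 each frame
	}
}
```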


It is funny that just after I wrote this, I spotted another article on the frontpage: https://sourcegraph.com/blog/live/gophercon2015/123574706480

So I admire their goal of reducing latency (it's probably a latency-throughput tradeoff), and I keep being told that real-time (which in my opinion requires low latency) is possible in GC'd languages; I just don't know how to do it reliably. But I hope I'm wrong and it is possible to go real-time with a GC, as GC is often a prerequisite for high-level language features.


ISTR that producing a clean sine wave on a scope requires a real-time operating system; it was one of the first tests for checking whether your real-time Linux patches worked.


> Also in the crypto/cipher package, there is now support for nonce lengths other than 96 bytes in AES's Galois/Counter mode (GCM), which some protocols require.

This should be "96 bits".


Wrong binaries in go1.5beta1.linux-386.tar.gz; they don't work on linux/386. Looks like they are for amd64.


Thanks for the report. We're using a new tool to roll the release binaries, and you have discovered a bug. On it.


It's now updated. Please download the new file (sha1 70112cca6a7225bacac4a32ffab2d97bcd7613f4) and confirm that it works for you.


It works now. Thanks. Even tried cross-compilation, works too.


Has anyone spotted the documentation for the new shared library system yet? Plugin-based architecture is one of the biggest things I've wanted in Go and it seems to finally be happening.


I played around with this on my own, and the shared library system will only work with Linux binaries for now. I wasn't able to get it working with OS X or Windows, with the very clear message from Go telling me the "platform target is not supported."

I think it is mostly a getting started point, and not generally useful unless your only target is Linux (which very likely may be the case unless you're working on desktop apps).


The closest thing to documentation is still the design document: https://docs.google.com/document/d/1nr-TQHw_er6GOQRsF6T43GGh...

I've been playing around with it, here's a neat example that does shared library partial bindings for http.Server in C and Python: https://github.com/shazow/gohttplib


Is it possible to dynamically load libraries that have been compiled with -buildmode=shared in runtime from within a running Go program? The design doc talked about it IIRC.


According to the release notes [1], "For the amd64 architecture only, the compiler has a new option, -dynlink, that assists dynamic linking by supporting references to Go symbols defined in external shared libraries."...so I assume if you want all of your Go signatures visible in your shared Go libraries, you can do it on x64. I have not tested this myself.

1 - https://tip.golang.org/doc/go1.5#compiler
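For the build-tool side of this, the Go 1.5 shared-library flow on linux/amd64 looks roughly like the following (the -buildmode=shared and -linkshared flags are from the release notes; at the go tool level they drive the compiler's -dynlink for you; file names are illustrative):

```shell
# Build the standard library once as a shared object.
go install -buildmode=shared std

# Link a program dynamically against that shared standard library
# instead of statically embedding it.
go build -linkshared -o app main.go
```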


The question, though, is how you actually load it. Can it be done without patching the runtime, or without running some C-based "bridge" that loads the shared object file? I'm talking about a plugin architecture, say, where I want to load an implementor of an interface from a .so file.


Based on a cursory overview of the design document, this appears to be done by not putting all of the Go signatures in the library (e.g. ELF headers somewhere) but instead based on a hash. From the design document [1]: "The shared library should include a hash of the export data for each package that it contains. It should also include a hash of the export data for each shared library that it imports. These hashes can be used to determine when packages must be recompiled. These hashes should be accessible to any build tool, not just the go tool." To me this is basically saying that so long as the hashes of the .so file match your binary, your dynamic load should be able to trust the uses of the same interface known at compile time.

I have not yet seen the "plugin" package in https://tip.golang.org/pkg/ as promised under the heading of "A new package" in the design doc.

1 - https://docs.google.com/document/d/1nr-TQHw_er6GOQRsF6T43GGh...


But I'm wondering - will this mysterious "plugin" package require changes in Go's core runtime, or can I implement whatever functionality it will expose right now. It looks to me like it can't be done without some C bridge/glue while not changing Go itself.


The mysterious "plugin" package does not yet exist, and won't be in the 1.5 release. I would encourage you to try to implement it and send in the changes. The ideas in the execution modes doc are being steadily filled in by people who need them.


Do you have any pointers as to where do I start? I mean I can see how I can probably do it with a C bridge, but I don't know the runtime well enough to attempt to do it without it. Glad it's written in Go now :)


Maybe it can be done with some unsafe magic parsing ELF files or something?


Can anybody clarify what is meant by "Generate machine descriptions from PDFs (or maybe XML)" and "Read in PDF, write out an assembler configuration"? That sounds . . . ambitious.

https://talks.golang.org/2015/gogo.slide#22



Rob Pike talks about it here: https://youtu.be/cF1zJYkBW4A?t=29m6s


go1.5beta1.darwin-amd64.tar.gz doesn't work on Mac OS X 10.10.


It works on my 10.10 machine. Can you please file an issue?

https://golang.org/issue/new



Cool release! I really like all the improvements in tooling. Thanks!


Is this worth learning?



What does this have to do with Go?


Yes.



