Hacker News
Go 1.10 Release Notes (golang.org)
208 points by 0xmohit on Feb 16, 2018 | hide | past | favorite | 152 comments



One of the most important changes in Go 1.10: Go is now suitable for working with containers! Hooray!

What I mean here is that this problem is finally fixed: https://www.weave.works/blog/linux-namespaces-and-go-don-t-m... GH issue: https://github.com/golang/go/issues/20676 The fix itself: https://go-review.googlesource.com/c/go/+/46033 Relevant part of the release notes: https://golang.org/doc/go1.10#runtime


This is a very important fix for writing things like file servers (think smbd, sftp...), which need to use seteuid/setfsuid.


Quickly tested it on our OSS repo [1] and based on several averaged runs, 1.10 is both faster to compile (even not taking caching into account) and produces smaller executables, a welcome reversal of the past trend.

Compile times on Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz:

  Go 1.9.2     7.2sec avg
  Go 1.10      6.1sec avg       -15.3%
Binary sizes:

  Go 1.9.2     50,727,552 bytes
  Go 1.10      49,604,128 bytes -2.2%
   
[1] https://github.com/gravitational/teleport if you want to try it yourself, run `make clean` followed by `time make`


Do your executables run faster, or slower?



That would require more testing; I only had a few minutes to play with it, but yeah, I'm curious too.


> "If the environment variable $GOROOT is unset, the go tool previously used the default GOROOT set during toolchain compilation. Now, before falling back to that default, the go tool attempts to deduce GOROOT from its own executable path. This allows binary distributions to be unpacked anywhere in the file system and then be used without setting GOROOT explicitly."

At last! Yes!


IMHO, all programs ought to behave like this and be self-contained within a directory.


Standard practice back in the MS-DOS days, what Microsoft now calls xcopy deployment, because that is how we used to do it.


  The go test command now caches test results: if the test 
  executable and command line match a previous run and the 
  files and environment variables consulted by that run 
  have not changed either, go test will print the previous 
  test output, replacing the elapsed time with the string 
  “(cached).”
This is my favorite part of the release.


I don't understand the utility. Why am I running tests again if nothing has changed? Is this just to speed up multipackage testing use cases?


Atom's Go integration re-runs tests after you save a .go file. It's very convenient because you can see test failures and test coverage changes immediately.

But it's slow if you modified a file that is not involved in tested code paths.

This is just one concrete example. In general, as the code size grows, it very quickly becomes impossible to tell whether a given edit could possibly affect tests. So you want to run `go test` just in case, and making it faster is a good thing.


But this isn’t doing ‘intelligent’ caching, just like go doesn’t do intelligent test checking. If the test binary changes (I think that implies basically any change in the package) the tests rerun.


In general the bar for making changes is: is it better than it was yesterday?

Before 1.10 `go test` used to re-run tests unconditionally.

Since 1.10 it'll avoid re-running them in some cases.

Neither I nor the announcement claims "intelligence".

The rules are simple and effective which is the Go Way.

While you could try to make even better, more "intelligent" rules for determining whether to re-run the tests, those would quickly run into engineering complexity chasing diminishing returns. That is not the Go Way.


There are lots of simple and effective things on my list that would provide me more value than this.

Historically the argument against them has been about adding needless complexity. I wonder who was asking for this and how they got it prioritized.


It was requested by rsc in 2015: https://github.com/golang/go/issues/11193

It was implemented by rsc subsequently, after discussion on the issue: https://go-review.googlesource.com/c/go/+/75631

I'm not sure what you're trying to insinuate.


I’m not insinuating anything. I’m flat saying this seems like a low value feature. Other higher value features are denied on the basis of unneeded complexity.


I mean, the project lead wanted a feature and then implemented the feature, clearly it had value to him.


The difference is that this feature (ideally) only affects implementation complexity, not interface complexity. It can also be dropped again tomorrow without breaking the world.

This makes it wildly different from language changes, which require a lot more consideration.


Larger projects can now effectively be TDD'd. Or rather, you modify or fix a test in a test file and the tests compile and execute very fast. Even plain calls to `go run`, `go build` and `go test` will be significantly faster.

This is the kind of difference I'm getting:

  GOCACHE=off time go build
        16.36 real        37.02 user         6.43 sys

  GOCACHE=on time go build
         0.66 real         0.53 user         0.37 sys

Number of files, including vendored dependencies (some files may not actually be imported):

  find . -name '*.go' | wc -l
      2031


Those 2 runs are two consecutive runs where nothing changed in between, right? That's effectively just outsourcing the 'did I change anything' memory to the cache system?

I'm ok with that, I guess, and I suppose I see the benefit when running multi-package test runs where there aren't dependencies between them. But I've just not seen the utility in practice, and it seems like it can introduce bugs where the cache is used incorrectly.


You are right, no changes. With a change, most objects are still valid as cached, which can result in a similar speedup. I'm also wary of caching introducing bugs and intend to keep running CI without it as a safety net. I may even require a no-cache build before pushing commits.


I imagine if you're working on a project with 3 packages and only 1 package changed, then the test run will automatically skip the other 2 with no flags or changed command needed? Or perhaps if you're working on one large package and only one file changed?


Yes that’s my understanding.


Not a Go developer, but Bazel in Java does the same thing, and it's really helpful when writing test cases. Say I am editing two test methods within a test suite of 30 test cases; it will run only those two tests. For the other 28 it will show the cached output.


But that's the thing: this is not incremental compile/test. It's just checking the binary output (at least as far as I can tell).


That's different. Go will rerun all tests (for that binary)


What does "files ... consulted by that run" mean? Does it track file/directory operations that occur during a test?

I'm a heavy user of table-driven tests that typically use YAML files that set up the parameters for the test scenarios. These use filepath.Glob(), ioutil.ReadDir(), and so on. For "go test" to invalidate its cache correctly, it would have to intercept those calls and record the dependencies. That'd be awesome.


Only source files, unfortunately. You could use a caching build system like Bazel to work with data-driven tests, but most users don't need that functionality.

It would be nice if tests could give hints like t.DisableCache() or t.DependsOn(filename).


I think this is misinformation. go test does track files read during tests and records a hash of their contents.

Bug: https://github.com/golang/go/issues/22593 Commit: https://go-review.googlesource.com/c/go/+/81895


I looked at the code mentioned in the sibling comment. They do track file access.

Incredibly, they've modified the standard library (os.Exec(), Stat() and so on) to log the activity to an internal mechanism used by the testing package. I'm rather flabbergasted by the close coupling (it's a feature that would've been impossible for anyone outside the Go team to implement), though it's certainly a pragmatic solution.

Tracking seems to only occur within the current process, so if you call out to a subprocess to generate files, those files won't be taken into account, but I haven't gone through the code with a fine-tooth comb.


In other words, there are ways to 'fool' the dependency tree. My first thought was that this might be a contrived situation; then I realized I have exactly that in a project I'm presently working on. It does a lot of file system operations, and I use shell scripts invoked from test code to set up files for testing. I'll have to investigate how to force tests to run if I change test scripts. I suppose changing the Go test case to make it fail, then modifying the script to produce correct results (or a valid failure ;) ), would do the trick.

I guess changing the scripts would be detected by `go` so perhaps I don't have a problem. At present none of my scripts invoke other scripts.


Just read the scripts from your test function, then go will check their hashes and rerun the test if they changed.


One thing not mentioned in the release note is that the plugin package now supports macOS: https://godoc.org/plugin

Also, "[this package] has known issues" was removed from the doc.


I don't know why they left it out; it's the best news I could have hoped for from 1.10. All this after I set up a semi-complicated series of make targets to compile these plugins in Docker containers and use volumes to pipe them out.


It's buried in the release notes:

  Compiler Toolchain
   ...

   ... plugin now works on linux/ppc64le and darwin/amd64


Barely anybody uses them though.


They were previously linux only. When Windows support lands, there's a lot we'll be able to do in our open source product in terms of modularity. It's a pretty big deal for us.


Would anyone be able to explain to me what this means (from the release notes [1])?

> Go 1.10 is the last release that will run on OS X 10.8 Mountain Lion or OS X 10.9 Mavericks. Go 1.11 will require OS X 10.10 Yosemite or later.

Would Go programs just not compile on those OS versions, or not run entirely?

[1] https://golang.org/doc/go1.10


> Would Go programs just not compile on those OS versions, or not run entirely?

It means we are not going to test on those old systems anymore, so we make no promises about either.


Also, how is this not a breaking change? They talk so much about BC but cut minor releases that don't run on the same OS versions as the previous release? Same with FreeBSD:

> As announced in the Go 1.9 release notes, Go 1.10 now requires FreeBSD 10.3 or later; support for FreeBSD 9.3 has been removed.


This is what the Go 1 Promise has to say about operating systems. They very specifically call that out as an area where they do not guarantee compatibility.

https://golang.org/doc/go1compat#operating_systems


FreeBSD 9 was EOLed a while ago.


> Go 1.10 is the last release that will run on Windows XP or Windows Vista. Go 1.11 will require Windows 7 or later.

The OS X >= 10.10 and Windows >= 7 requirements of Go 1.11 are a good reason to maintain compatibility/support for 1.10 for some time. I don't have a sense for the impact to the older systems still in use.


Not run at all, I think. Go is more strongly coupled to operating systems than most languages because it implements a lot itself; e.g. it directly uses the (non-stable) syscall interface on OS X.


I have been using the 1.10 release candidate for a while, and I have been really appreciating the build caching. It makes running tests after a small change as much as 10x faster for me (especially on projects that import huge libraries, like the AWS library).


As someone with no skin in the game, it seems to me that the philosophical conundrum of Go is whether one needs a language designed to produce the kind of code consistency it gives Google-scale orgs, or whether you're better off with a more "dangerous" or harder-to-read language that may create more technical debt but also lets your devs forge their own path.

I keep looking at languages like Go and frameworks like Angular as being steered toward huge-org problems, but I wonder if startups are better off incurring the debt of riskier, less conservative languages and paradigms, and worrying about the technical debt only once they've reached the scale where a) it actually matters to your agility in delivering customer value and b) you have the capital to make it worth solving.

Maybe this is reading too much into the generics kerfuffle, but I would love to be enlightened if anyone can clarify this further.


My experience from a host of language experiments of late (ranging from F#, to Rust, to Nim, to C#, to Go, to C++) is that it just doesn't matter very much. F# has very powerful generics, and there are particular circumstances where that saves me a lot of typing. But you can still get it done in C# with less powerful generics, or Go with none.

F# has sum types, and replaces null with an option type - great ideas, but you can approximate these ideas in any language if you need to.

Similarly, Nim's metaprogramming has let me do some really clever things, but again, it just saves some typing.

I've found that language features and properties are often given more weight than they deserve. The really hard problems I've had when coding are just the actual problems. And while a particular language can be life-changing in particular problem domains, it's rare that they are life-changing in general.

Go is maybe the least interesting language of all the ones I've played with, but I currently prefer it over the others because:

1. stable, mature language and tools (unlike Nim)

2. fast compile times (unlike Rust)

3. compiles to native executables

F# might be my favorite if it had fast compile times and native compilation. Rust might be my favorite if I ever grok the borrow checker. But if someone said I had to use language X to do project Y I would almost always be fine with it, unless there were insurmountable performance problems caused by the language.


Go is like the Six Million Dollar Man language, compared to C. It's everything C can do, but much better.

Both C and Go are very, very simple languages; there aren't many keywords, and it's pretty easy to grok both at a functional level. It takes, of course, a bit to reach the second-level understanding you need to make full use of either language, but it's easy to hit the ground running in both.

We sometimes get stuck on one language being better than others because it has X feature or Y concept, and I've seen some people essentially root for one language over another. But at the end of the day, languages are tools to get your work done, and on balance the work is going to be hard whichever route you go. Use the one that suits the talents you have available and that broadly complements the needs that you have.


> It's everything C can do, but much better.

Are you sure about that? I don't think you'll be running Go on an AVR. Likewise, to this day Go still can't pin pointers, so interop with native C libraries is a royal pain (this is something that C# and Java both support).

People love to bag on C but there are domains where it can go that a GC'd language can't. I wouldn't be so quick to equate the two.


I love C. It's the language I used when I first "got" programming, and it's stuck with me, even though I don't have much professional use for it. I don't think you can count me among those who "love to bag" on it.

I don't think Go needs to be a like-for-exact-like analog of C to be a valid replacement. For many C programs, the memory management pattern they use is something that could be handled just as easily with GC, and done so in such a way as to lift a significant burden on the programmer.

I mentioned this in another comment, but definitely, if you need to manage your own memory—don't use Go. I must disagree that GC disallows Go as a replacement for the language. But I will respect your opinion if you think it does!


I agree that you can claim it as a replacement for some sub-domains. However, if I can't do embedded programming with it, it's not an all-up replacement.

Much of modern C usage is in this space, even if it's not as visible to the average developer.

GCs have a host of problems, like needing a 2x working set, pauses, and the fact that you can't control allocation locations for better cache usage. Go may be fast, but it will never unseat C/C++/Rust in performance, for the reasons above.


Yes, it is definitely true that coming to a nearly-complete understanding of Go is very, very easy compared to Rust, F#, or Nim.

I haven't used it with teams of people in any way yet, but I imagine it makes it somewhat easier to drop into code that isn't yours and understand it. There are fewer ways to create voodoo. Which can be a pro or a con, depending!


Everything C can do but better? Are you sure you are not confusing Go with Rust?

Go has a GC, not manual memory management.

Go can't interact performantly with other languages.

Go is like Java (not C). It's kind of nice, but deliberately sabotaged by (Big Company).


I'm pretty sure I'm not confusing it with Rust!

That said, I don't think there's anything more I can say which can add to the conversation. I'm happy to allow what I have said to stand.


Go can do some kinds of manual memory management, via globally allocated arrays or by calling OS APIs directly.

C was owned by AT&T, and is now owned by everyone who can afford a seat at the ANSI/ISO C meetings; apparently it is easy to forget this little fact.


> It's everything C can do, but much better.

No, go has a garbage collector. It is not a replacement for C. Languages like C++, Rust or Ada are.


Like I said: Go does everything C does, but better.

I suppose you could write your own malloc in Go... if you preallocated a bunch of bytes in a huge chunk, and wrote some functions to use that up. But that sounds like a pain.

If you really really specifically need to manage your own memory, then I agree, you shouldn't use Go. But that doesn't mean that Go is not a valid replacement for C. Do you think a majority of programs that are written in C are done so because they need to manage their own memory?


> Like I said: Go does everything C does, but better.

It doesn't do everything C does; it doesn't allow manual memory management in the first place.

So you can't pretend that a garbage collected language does everything a language with manual memory management does.

Manual memory management is a feature, and Go does not have it. Deterministic behavior is fundamental when writing hardware drivers or doing real-time programming; Go can't do that.


While it is not fully manual memory management, you can use sync.Pool, and it gives you better performance since you can reduce garbage generation.


What does deterministic behavior have to do with "better performance"?


As already proven by many attempts to implement a safe OS, a GC is not an impediment in a systems language; religion and disbelief against GC are.

Joe Duffy has put it quite nicely regarding how many on the Windows team saw Midori, even though it was proven to be running right in front of their eyes.

RustConf 2017 - Closing Keynote: Safe Systems Software and the Future of Computing by Joe Duffy

-- https://www.youtube.com/watch?v=EVm938gMWl0


> a GC is not an impediment in a systems language,

You'd have to define what you mean by systems language in the first place before making such a claim.


A programming language capable of building the whole stack, from the boot loader (usually written in Assembly) and drivers all the way up to user-space applications.

The only programming language for a single platform, besides a little Assembly for hardware integration.

The traditional CS definition of systems programming languages on OS research literature.


> [go is] everything C can do, but much better

I have to disagree with this one. I tried Go briefly, and what I found was that it provided a couple of relatively nice high-level features (multiple return values, nested functions, interface{} (although that doesn't count; it's basically void*)), but that ultimately it tried to constrict what I could do in a way that C didn't. Add to that that C somehow has much better metaprogramming than Go, even though literally every language ever made has better metaprogramming than C.


>Go is like the Six Million Dollar Man language, compared to C. It's everything C can do

No, it is not. Put macros, pointers, and memory without GC back in before claiming Go can do the same things C does.


>F# has sum types, and replaces null with an option type - great ideas, but you can approximate these ideas in any language if you need to.

Yes, I will approximate a Formula 1 car by connecting two bicycles side by side and fitting a lawnmower motor.


And your analogy would hold if C# was to Go what a Formula 1 car is to 2 bicycles.

If you insist on car analogies, Go is a good-at-most-things car.

It's not the fastest car, it's not the most gas-efficient car, it's not the cheapest car, it's not the best-looking car, it's not the most technically advanced car, but it scores in the top 10% on all of those fundamental characteristics.

Other cars are more bimodal. One car is very fast but also very expensive. Another uses the latest high-tech titanium making it very light but designers only included very small gas tank. C++ is like Homer Simpson's car: wants to have everything and ends up as a frankenstein.

And I'm already regretting this car analogy because it's so unnecessary.

Go gets top marks on things that matter.

Fundamentals like: generates fast code, compiles quickly, has garbage collector, array bounds checking, excellent concurrency, great standard library, lots of third party libraries, generated binaries use little memory etc.

Go might not have something that your favorite language X has but language X is very bad on at least one of those fundamentals.

It might be 10x slower, compile code 10x slower, rely on gigantic runtime, lack any reasonable concurrency etc.


> Fundamentals like: generates fast code, compiles quickly, has garbage collector, array bounds checking, excellent concurrency, great standard library, lots of third party libraries, generated binaries use little memory etc.

Why doesn't Java count?

* Because it doesn't have "excellent concurrency"? (I posit its concurrency is far better than C, Python, Ruby, JS, or pretty much any mainstream language)

* Or because "generated binaries use little memory"? (That's the stereotype and arguable, but Java is used as the primary user space language on low end, memory constrained mobile devices. AFAIK, Go can make no such claim.)

* Or something else?

FWIW, Java has other fundamentals like custom generic collections, standard package management system/conventions, nullable types/optionals, and a workable error strategy.


> F# has sum types, and replaces null with an option type - great ideas, but you can approximate these ideas in any language if you need to.

How? You can make up for it somewhat with linters, unit tests etc. but it's tiring and nowhere near as robust as a type checker that enforces null checks for you.


oh I agree, and understand. You lose compile time enforced type safety against nulls when you approximate it, and I do think it is the right approach to use option types instead of null, and that more languages should do this.

But it is just one parameter of a thousand affecting how easy it is to write good code, and it sometimes has runtime costs.


Can you please tell me how you implement, in Go, the exhaustive pattern matching you have in F#?


   if (...) {
       ...
   } else if (...) {
       ...
   } else {
       ...
   }
is pretty good 99% of the time


> Is that it just doesn't matter very much. F# has very powerful generics, and there are particular circumstances where that saves me a lot of typing. But you can still get it done in C# with less powerful generics, or Go with none.

Having fewer lines of code looks like a big win to me.


Expressiveness can be good, but at a conceptual cost, because you must now understand how those fewer lines of code do the same thing as a program in a less expressive language.

I'm glad it's a big win for you! But in the main, I think, it's a tradeoff. Expressiveness has a cost; simplicity has a cost.

Personally, I think it's best not to get too attached to specific concepts or languages, because times change, and needs change. If you're a good programmer, you can learn any language and any concept you need to.


Maybe it is sometimes? Maybe not? Nobody does a great job of quantifying this stuff. Sometimes a lot of lines where each line is very simple and clear is net easier to deal with than a few very short lines that are cryptic and whose details are hidden from you. Sometimes it doesn't matter either way, because once the code is done, it works forever and you never have to touch it again. Sometimes the languages that save you lines of code cost you at runtime, or compile time, which is a loss.

So I don't think it is always a big win, but maybe sometimes it is.


> F# might be my favorite if it had fast compile times and native compilation.

I am sure you must have heard of OCaml and ReasonML... why didn't you consider them?


I do!

OCaml has, for me, some tough-to-live-with syntax, plus the threading issue and a tiny ecosystem.

But yes it interests me! ReasonML fixes the syntax and maybe it will lead to a bigger ecosystem, that would be great.


I've tried ReasonML, but it's principally focused on JavaScript; support for native compilation is meager.


That is too bad. I would have assumed it transpiles to OCaml as step 1, and then you just use OCaml's compiler?


It does, but the tooling for Reason->OCaml->JS is better than the tooling for Reason->OCaml->Native. Or maybe the OCaml tooling is fine and the documentation is nil. In any case, most people I asked indicated it wasn't really mature yet.


Being currently involved in a startup using Go, I go back and forth with this in my mind on a regular basis. On one hand, development is much more arduous than I have experienced on other projects with more featureful languages. During development, I really find myself missing what those languages have to offer.

But on the other hand, when the day is done and I look at what has been created, I'm really impressed with the quality of the project. A good team can write good software in any language, but I do believe this language has helped this team write better software than they might have in another language.

I suppose that, like everything, there are tradeoffs. I'm not yet sure which tradeoffs are most important to me.


I typically summarize it as "go takes twice as long to write and half as long to debug. Since you spend a lot more time debugging code than writing new code, this is a good trade-off"


No.


What are you missing when programming in Go?


.map()

.filter()

Not having to write useless type casting boilerplate like this: https://github.com/majewsky/sqlproxy/blob/f5b297e7dce14c0453...

Not having to write `if err != nil { return nil, err }` every second line. (Or, in other words, Rust's question-mark operator.)


For me, an implicit way to convert between similar, but not identical, types. A real-world open source example I am aware of having a similar issue is Upspin[1]. There are ways to mitigate this, but you (well, I) cannot always control all the types third-party dependencies rely on.

There is a clear path to working around this case, but I find it to be a bit tedious. I miss languages where, through generics, duck typing, or similar, I can simply pass the application's structure to the third-party library and it "just works".

[1] https://github.com/upspin/upspin/blob/master/upspin/proto/pr...


I don't have a strong opinion about your premise about large vs small orgs. But what I will say is that I largely dislike go as a language, yet I still usually pick it for solving problems even if they are just for myself.

The reason for that is largely the 'come back to it' factor. I'm reasonably able to identify go code I wrote and put away. I can't say the same for other languages I've used professionally.


If you have enough irons in the fire, all of those different versions of "you" might qualify as a "large org," depending on your definitions of "large" and "org."


I'm suspecting (but can't back it up with empirics until I get a large team using golang), that because of the advance in popularity of golang and huge increase of golang codebases, the choice of golang is just as much a business decision as it is a programmer decision. If golang does prove to be a "get shit done" language, your VP is not going to be as concerned about what devs think if it can be used to get to market faster. Doesn't make any diff if it is goog or a 3 person shop.

I can give you some counter-testimonial for .NET from when I ran a 50-dev project. Skillsets were all over the place, the architects wrote boilerplate the devs didn't understand, and the codebase was messy, due in part to .NET drawing commodity developers and Anders making sure C# has 5 different ways to do everything.

Lang history driven in large part by new features is turning out to be bad for business, IMO. I want something solid for my team that's easy to code, easy to test, and easy to consistently hire for. And I want them all to be SME's on the code they write for their domain. What that gets me is a lot of devs who will effort point stories about the same, no matter what team they are on. My gut tells me I can get to that state of bliss with something like golang faster than I can .net. Lot of mitigating factors built into that I get it, but that's what my gut tells me.


I've found that no matter what language you choose, and no matter the size of the problem you are trying to solve, after about a week of writing code you will have to start refactoring parts of it. While up to that point a more dynamic or "risky" language may have given me some advantage in development speed, it can make it harder to change things without breaking parts of your application by accident.

Having a compiler that catches 80% of the errors you can make during big code changes starts to become a real advantage.

I don't think that Go is a perfect language, but for most of my use cases it is the most efficient one overall.


for me what is great about Go is it removes BS and incidental complexity, & is just really well-designed and pleasant to work with. i happily choose it for personal projects.

semi random, but for another example of someone enjoying the hell out of go for personal projects, take a look at:

https://github.com/fogleman

that guy's amazing, one of my programming heroes.


> for me what is great about Go is it removes BS and incidental complexity, & is just really well-designed and pleasant to work with. i happily choose it for personal projects.

Go was the first compiled language I learned coming from Python and scientific computing. And I felt the same way and was so delighted to write fast native multithreaded code without ceremony.

But then I learned some other languages that have elegant and powerful type systems (namely, Rust, OCaml and Haskell) and now I'm somewhat disgusted at my former self. Go could have been great, but it ain't. It's a blub language if ever there was one.

I know my opinion on these things doesn't matter. But I feel resentful of Go, I feel like I was duped.


For me Go is a smooth brain C with an enormous standard library, making it very pleasant for weekend side projects or quick and dirty "scripting". Not having sum types is a huge pain sometimes and generics would be nice but otherwise I wouldn't want the language specification to change too much. Tail call optimization would also be nice but that's on the implementation side. In particular, I wouldn't want Go to become more restrictive like Rust or Haskell.


I wanted to use Go for scripting, but working with the GOPATH weirdness made me hate my life, so I went back to Nim (which is perfectly fine, I just still envy the Go ecosystem).


This is the opposite of my experience. I picked up Go 5 years ago because it was the only (reasonably mature) language I could find whose build didn't require a Turing-complete programming language (Python, CMake, Gradle, etc) or a sufficiently complex configuration format that it may as well be a programming language (Ant, MSBuild, etc).

GOPATH took me a few hours to figure out at the time, but that was because the documentation was incredibly sparse at the time (this was before Go hit 1.0). Now it's just:

    mkdir -p ~/go/src/hello
    echo 'package main' >> ~/go/src/hello/main.go
    echo 'func main() { println("Hello world") }' >> ~/go/src/hello/main.go
    go build hello
    ./hello


You do not need any of this. You can use `go run` with a source file outside the GOPATH. Put the following in a file, make it executable and execute it:

  ///usr/bin/env go run "$0" "$@"; exit $?
  package main
  
  import "fmt"
  
  func main() {
  	fmt.Println("Hello World")
  }


My point was to show that GOPATH is easy; not the fastest path to hello world. :)


Go's type system is indeed atrocious, but I'd say that's the only major problem. For a high-uptime server, I'd probably prefer a BEAM-based language, and for numerical computation and data analysis I would prefer Julia (especially if it were more widely adopted), but as "a replacement for C" it's quite nice (you still need C for things like cross-platform libraries, NIFs, anything kernel).


I don't mind Go's type system. It's a pain for some things, but I haven't found a better one. The languages that have sum types and generics are too restrictive to be practical in virtually all of the software I write (Rust, Haskell) or they lack a decent ecosystem (ReasonML, OCaml, typed-racket). I would welcome a language with a better, practical type system. Bonus points if it shares Go's runtime.


OCaml's ecosystem feels much better than Go's. At least OCaml has a great package manager, a better compiler, a repl, and an allocation profiler. The only thing it lacks is a multicore runtime.


I disagree. Here are a few things about OCaml's ecosystem that are strictly worse than Go's, and which I find to be particularly frustrating:

1. Documentation. It's lacking or spread thinly across the Internet, and it's mostly low quality in my experience.
2. Windows support was dodgy last I checked.
3. Build tooling is byzantine and fragmented.
4. Standard libraries: the official standard library is limited and frustrating; other standard libraries are also limited and frustrating; and there is more than one standard library.
5. Devs seem more interested in adding language features than in making the language useful to people who want to write real applications.

> At least Ocaml has such a great package manager

As long as I'm consuming packages, it's great. Making packages is a nuisance. Go's dependency and distribution tooling isn't pretty, but at least it gets out of my way--`dep` does what I need it to do and I haven't had any problems. Still, both Go and OCaml's package management story is a far cry from Rust's Cargo.

> better compiler

What does this mean? Certainly not "faster" or "produces faster code" or "works better across platforms".

> repl

Granted, but I program in Python professionally and rarely touch the repl. I use it more often as a command line calculator than anything. If I were doing data science, I might feel differently, but for application development I don't miss it.

> allocation profiler

Go has had an (excellent) allocation profiler built into the toolchain for years. Besides, it's much easier to intuit about (and control) allocations in Go than it is in OCaml.

If I could borrow anything into Go from OCaml, it would be generics and sum types. The rest (syntax, runtime, tooling, libraries, etc) I'll leave be.


Go's dependency management story is not just "not pretty", it is a total disaster, as the recent story with a hijacked repo has shown. It does not allow proper versioning, repo management, total garbage. And how is making packages a problem? You just add an opam file and that is all. You can do versioning, pin any opam or git repo, or even some fs resource.

Yes, the OCaml compiler is light years ahead of Go's in terms of optimizations and code analysis (and even correctness [1]). It is still not as great as Haskell's, but it is very good, especially with the new lambda middle end. The Go compiler is straightforward as hell and can't do much of the inlining or optimization that OCaml can.

Windows support was always tier 1 in the compiler. Jbuilder also supports Windows and even cross-compilation with Windows as a target. Opam now compiles on Windows too.

As for libraries, opam now has thousands of them, and the quality of OCaml libraries is often much higher than the quality of libs in this other imperative language, due to the quality of the language itself.

>go standard library

lol no math for ints, no data structures beyond map, but crappy http and Json serialization.

[1] https://blog.janestreet.com/proofs-and-refutations-using-z3


> it does not allow proper versioning, repo management, total garbage.

This reads like you're trying to pass a stylistic objection off as a functional objection. If that's not the case, feel free to add substance.

> And how is making packages a problem? You just add an opam file and that is all. You can do versioning, pin any opam or git repo, or even some fs resource.

Ignoring the one-off syntax choice, opam files require you to bring your own installation and build scripts.

> Yes, the OCaml compiler is light years ahead of Go's in terms of optimizations and code analysis (and even correctness [1]). It is still not as great as Haskell's, but it is very good, especially with the new lambda middle end. The Go compiler is straightforward as hell and can't do much of the inlining or optimization that OCaml can.

I don't care about "light-years of optimizations" if the end result is still slower than Go.

> lol no math for ints, no data structures beyond map, but crappy http and Json serialization.

This is an enumeration fallacy. I can rattle off a laundry list of OCaml standard library problems as well. This isn't constructive discussion.

> As for libraries opam now have thousands of these and ocaml libraries’ quality is often much higher than the quality of libs of this other imperative language due to quality of language.

I find this not to be true, at least in the case of OCaml vs Go. OCaml libraries are typically overly-abstract or they prefer unintuitive operators and identifiers and generally take much longer to grok than the time they save to use.

Again, I agree that Go the language needs work, but the ecosystem is fantastic. Of course, improvements to the language could benefit the ecosystem (for example, sum types would bring sanity to JSON), but I can still be far more productive in Go than in OCaml.


>opam files require you to bring your own installation and build scripts.

What do you mean?

>if the end result is still slower than Go.

Benchmarks?

>OCaml libraries are typically overly-abstract or they prefer unintuitive operators and identifiers and generally take much longer to grok than the time they save to use.

Examples? I’ve never seen any of this in a whole mirage stack, lwt, angstrom/faraday, yojson or any other major ocaml lib.


> What do you mean?

    build: [
      ["./configure" "--prefix=%{prefix}%"]
      [make]
    ]
> Benchmarks?

I'm not asserting that Go is faster, I'm saying that you need to provide evidence that OCaml is faster if you want me to believe that OCaml's compiler is "light-years ahead of Go's in terms of optimizations". Perhaps you didn't mean "OCaml is faster", but only "OCaml is hard to very hard to make fast, but the OCaml compiler does a fantastic job despite". In which case you'll need to justify why that matters to me if it's not actually faster than Go (especially if the performance is worse and/or less intuitive).

> Examples? I’ve never seen any of this in a whole mirage stack, lwt, angstrom/faraday, yojson or any other major ocaml lib.

It's been a while, so the details are foggy. Looking at the RealWorldOCaml page for yojson, however, immediately takes us through an example with a dozen `|>` operators and functional combinators. Perhaps this is a documentation problem and there are more straightforward mechanisms, but that's no great comfort to me as a user. Something as common as JSON should be dead simple, at least in the main cases. And while I have lots of criticism for Go's JSON handling, it is dead simple in the main cases.

Note that this isn't a particularly good example of OCaml libraries being overly abstract; it's just the most concrete example I could provide in 5 minutes.


>build:

Nowadays it's just build: ["jbuilder" "build" "-p" name "-j" jobs ]

It's just one line of config, and it is worth it since it gives you the opportunity to use any build system you want to (or for some reason have to) use. Still much better than pulling "deps" from github repos directly.

>I'm saying that you need to provide evidence that OCaml is faster

>"OCaml is hard to very hard to make fast, but the OCaml compiler does a fantastic job despite". In which case you'll need to justify why that matters to me if it's not actually faster than Go (especially if the performance is worse and/or less intuitive).

Sure, you can manually write a ton of boilerplate for each numeric type in Go, just like it's done in gonum. I'm not sure that is good, since more code means more issues and bugs, and I definitely prefer to write or hack on something like this:

https://github.com/inhabitedtype/httpaf/blob/master/lib/pars...

instead of this:

https://github.com/valyala/fasthttp/blob/master/header.go

(and this is only a header)

and let the compiler do all the dirty work.

> with a dozen `|>` operators

how the hell is `|>` an obscure combinator? It's just a "unix pipe's" typesafe kin. I've never seen any usage of obscure combinators in ocaml beyond parser-combinator libs ofc but these usually are well documented and you wouldn't use them if you don't like combinators anyway.

>Something as common as JSON should be dead simple

It is dead simple: just add [@@deriving yojson] to your data structure and you'll get to_yojson/of_yojson functions. What could be more simple?


> Nowadays it's just build: ["jbuilder" "build" "-p" name "-j" jobs ]

> It's just one line of config, and it is worth it since it gives you the opportunity to use any build system you want to (or for some reason have to) use. Still much better than pulling "deps" from github repos directly.

I strongly disagree here. dep is rather ugly, but it works and it's easy to use. A language should have one build tool that works for 99% of use cases, and the rest of the tooling should support it by default. This is especially painful for beginners, amplified by the fact that the docs point you to Makefiles by default.

> Sure, you can write manually a ton of boilerplate for each numeric type in go, just like it's done in gonum. I'm not sure if it is good since more code means more issues and bugs, and I definitely prefer to write or hack something like this:

I'm confused; you're responding in the context of compilers producing fast code, but you seem to be advocating for a more generic type system than Go has, which I've already agreed about.

> how the hell is `|>` an obscure combinator? It's just a "unix pipe's" typesafe kin. I've never seen any usage of obscure combinators in ocaml beyond parser-combinator libs ofc but these usually are well documented and you wouldn't use them if you don't like combinators anyway.

I didn't claim it was obscure; I claimed that it's not useful--it's optimizing for fewer keystrokes and not readability. I can appreciate code-golf for its own sake, but I need my team to be able to jump into code quickly, and that often means that naive is better than clever and a little boilerplate is better than terseness. BTW, the HTTP parser that you shared is a good example of this. The Go version might be imperative, but at least it's clear.

> It is dead simple, just add [@@deriving yojson] to your data structure, and you'll get to\of_yojson functions, what could be more simple?

Like I said before; this might not be a good example, it's just the first one that came to my mind. Maybe the problem here is that the good documentation is so hard to find. Still, I've come across lots of unnecessarily abstract libraries before, I just don't recall the details.


>I claimed that it's not useful--it's optimizing for fewer keystrokes and not readability.

it makes code much more readable

    produce_collection ()
    |> Col.filter pred
    |> Col.map to_json
Only the pure logic of the program remains, no garbage like intermediate vars, loops, etc.

>The Go version might be imperative, but at least it's clear.

Depends on how you define clear. If a huge, manually optimized pile of imperative spaghetti is clear, then yes. My definition of clear presumes sophisticated abstractions that make hard things easier to reason about and comprehend, not a pile of hand-written state machines.

>Still, I've come across lots of unnecessarily abstract libraries before,

So many that you couldn't find an example?


Look, my remarks about OCaml were meant as constructive criticism, and you're plainly taking them personally. Frankly, such defensiveness doesn't make for very interesting conversation, so I'll duck out and leave you to the last word.


>but I haven't found a better one

search again


Hmm, still nothing.


i've played around a bit with haskell and ocaml, very interesting languages. i've seen people do stuff with haskell that's amazing, though more in the realm of almost mathematical exploration than large, end-user software. (maybe there is some, i just haven't run across it.) rust also looks extremely promising (shame about the slow compile times, though that's getting better).

but i wouldn't feel "duped".

those are much more ambitious languages than go; they're in a different space. go is simple, easy to learn, small...

(a lot what's nice about go is beyond the language too -- the whole ecosystem is just very pleasant to work with.)


To be fair, Go never promised you the future. The documentation and community is pretty oblique about their values.


I have no advice to offer, but for myself, I wrote a project of a few hundred LOC and it feels like a manageable version of Bash/Python/Perl etc. One benefit I see is that it can be used on Windows, Mac, and Linux machines without any special setup, once I generate a binary for the specific platform.

Since it is structured code, I keep adding more features without much breakage. I know people can do it in so many other languages and ways. My point is that Go can be used for a variety of personal or single-developer projects. It is more tuned towards making new/interesting things in a boring language vs learning an interesting language to reimplement existing things.


That was my opinion too. I even thought what a stupid name Go is, another Google thing shoved down our throats. But then I worked with it. Coming from node.js, where you install a few packages and watch npm report 1201 packages installed; from Java, where you make a change and wait 30 seconds to recompile just to see you got it wrong, get a huge stack trace, and need huge frameworks for basic web stuff; and from .NET Core, which I can't get to work on a Mac for the love of god, I can't think of a better language at this moment. It lets me compile to machine code without any VM or interpreter.


I wouldn't classify it as large vs small. It's a better or worse fit depending on how the team writes software. Here is one example:

Go has a simple syntax that rarely changes and has been backwards compatible since 1.0. It even has a built-in code formatter to keep things consistent. This means that you can write a service with the confidence that it will compile on the latest version of Go when you need to change it. Even if that is months or years later.

Additionally, the dependencies for Go programs are stored on disk, relative to the project, in a simple hierarchy. I can't think of a single other language or package manager that makes it easier to vendor all your dependencies. And I do so for most Go projects I work on (I mostly use it for standalone services, not libraries).

Here are the typical steps for reviving a project that I haven't touched for over year:

    git clone ...
    cd ...
    go test ./...
This can be very useful for some teams and utterly useless for others.


Elixir vendors dependencies as well by the way. It's very useful as you pointed out.

Additionally, I agree with your points about Go.


> backwards compatibility

Eh, you get that with most languages, though.


> vendor all your dependencies

this has pros and cons.


Yep. Vendoring isn't "good" or "bad" in general, it's "appropriate" or "inappropriate" for a specific project.


One refreshing thing with a language w/o classes and inheritance is that you aren't worried about having the design make use of those features. That's one problem I had with C#. I was constantly redesigning my class hierarchy instead of getting things done.


A big benefit of using Go for me is that it influenced how I write Java. Keeping the use of class inheritance to a minimum (by using composition) and having interfaces for nearly everything makes for a much more flexible code base.
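A hedged sketch of that style in Go (all names here are invented): a small interface plus embedding in place of a class hierarchy.

```go
package main

import "fmt"

// A small interface instead of a base class.
type Notifier interface {
	Notify(msg string) string
}

type EmailNotifier struct{ Addr string }

func (e EmailNotifier) Notify(msg string) string {
	return fmt.Sprintf("email to %s: %s", e.Addr, msg)
}

// Composition via embedding: OrderService *has* a Notifier;
// no inheritance hierarchy required, and the dependency is swappable.
type OrderService struct {
	Notifier
}

func main() {
	s := OrderService{Notifier: EmailNotifier{Addr: "ops@example.com"}}
	fmt.Println(s.Notify("order shipped"))
}
```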


I sort of get the comparison, but my experience between Go and Angular has been completely different. Angular reminds me much more of something like Spring. Go on the other hand, while not particularly exciting or expressive, is mostly out of my way and lets me just get shit done. It’s a bit of an overused point now, but it’s boring in the best way possible. That’s pretty unique IMO and is a much different experience than you get from working with Angular.


Can someone tell me what is going on here: the release notes [https://golang.org/doc/go1.10#language] state that

> A corner case involving shifts by untyped constants has been clarified, and as a result the compilers have been updated to allow the index expression `x[1.0 << s]` where s is an untyped constant; the go/types package already did.

But the docs [https://golang.org/ref/spec#Operators] state that

> `var u = 1.0<<s // illegal: 1.0 has type float64, cannot shift`


It's probably to do with the fact that one is a "number".

If I'm correct, that means in `x[1.0 << s]`, 1.0 is a "number", therefore the shift is valid, but in `var u = 1.0 << s`, it's a float64, and therefore the shift is invalid.

see also: https://blog.golang.org/constants


Ah, thanks.


@cshenton's answer is not quite right. The "var u = 1.0<<s" example in the Go spec is illegal because s is a variable there, so 1.0 is deduced as float64. However, in "x[1.0 << s]", s is a constant, so 1.0 is deduced as int.
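A small sketch of the two cases (the slice contents are arbitrary):

```go
package main

import "fmt"

const s = 2 // untyped constant

func main() {
	x := []int{10, 20, 30, 40, 50}
	// 1.0 << s is an untyped constant expression; an index must be an
	// integer, so 1.0 is deduced as int. Accepted as of Go 1.10.
	fmt.Println(x[1.0<<s]) // x[4]

	// var n uint = 2
	// var u = 1.0 << n // illegal: with a variable shift count,
	//                  // 1.0 defaults to float64, which cannot be shifted.
}
```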


> Options specified by cgo using #cgo CFLAGS and the like are now checked against a whitelist of permitted options. This closes a security hole in which a downloaded package uses compiler options like -fplugin to run arbitrary code on the machine where it is being built.

How strong is the guarantee of "no code execution during build"?

I'm currently working on a Go build system, and my understanding is that the Go compiler was designed for compiling untrusted code. You'd have to exploit the compiler in order to get RCE, right?

(I'm sandboxing it either way, but it's good to know for risk assessment)


Here's how it works.

`go build` on pure Go code executes only the compiler and linker of the Go toolchain.

If you add cgo to the mix, `go build` might invoke the system C/C++ compiler (gcc, clang, mingw) to compile the C parts of the code, and the system linker.

The issue was that gcc allows arbitrary plugins and if the right cgo flags were provided, gcc would load and execute such plugin, leading to executing whatever code was in the plugin.
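For illustration, the kind of directive involved looks like this (the package layout and file names are hypothetical; -fplugin is a real gcc flag that loads a compiler plugin):

```go
// A downloaded package's cgo preamble is build configuration:
// the #cgo lines become flags passed to the system C compiler.
package downloaded

/*
#cgo CFLAGS: -I/usr/local/include -O2
// Before Go 1.10, a malicious package could also smuggle in something like:
//   #cgo CFLAGS: -fplugin=./evil_plugin.so
// and gcc would load and run that plugin during `go build`.
// Go 1.10 checks cgo flags against a whitelist, so -fplugin is rejected.
#include <stdio.h>
*/
import "C"
```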

That hole was plugged in 1.10.

This is the whole story. There are no other vectors of executing code that is not part of Go toolchain.

Then again, what is your threat model?

If you worry that blindly compiling my evil package will execute my evil code during build, you should also worry that linking my evil package will execute my evil code when you run your compiled executable.


Threat model is building (but not executing) untrusted code.


i love golang


Have they fixed already the GOPATH crap?


>The GOPATH environment variable now has a default value if it is unset. It defaults to $HOME/go on Unix and %USERPROFILE%/go on Windows.

If this is what you are talking about - they 'fixed' it in 1.8 a year ago.


IMHO, fixing it would mean being able to build Go code in arbitrary directories without env hackery.


I think this is on the roadmap but I'm not sure the priority. Check out this proposal https://gist.github.com/sdboyer/def9b138af448f39300cb078b0e9...

EDIT: Found the proposal on the dep roadmap: https://github.com/golang/dep/wiki/Roadmap


They changed compilation caching so it no longer installs into $GOPATH/pkg, which feels like a good step toward getting rid of it altogether, eventually.


I am using dep with Visual Studio Code. My project builds from any directory even if it's not in the GOPATH.


In case you are wondering: "generics" is mentioned 5 times (now 6) in the comments right now.


That was the first thing you searched for too?

I just come here to eat popcorn and learn about type theory.



We've changed to that URL from https://blog.golang.org/go1.10.



Thanks! Done.


There are no significant changes to the language specification

This is my least favorite part of the release.


It is already turing complete! :D


This is my most favorite part of this release ;).


Does it have generics yet?


An honest question: why was this flagged?


No, still isn't Java.


> There are no significant changes to the language specification.

I'll continue hoping goderive becomes a language that displaces go.


Go++



