Down the Golang Nil Rabbit Hole (urth.org)
181 points by zdw on March 30, 2021 | hide | past | favorite | 157 comments



You can hit this bug without using reflect. It, to me, is Go's third biggest wart. I'm particularly fond of this bug, because I got bit by it and it crashed prod during an important sales demo.

    package main

    type Compressor interface {
        compress() []byte
    }

    type GZipCompressor struct {
        internal byte
    }

    func (c *GZipCompressor) compress() []byte {
        return []byte{c.internal}
    }

    func DoCompress(c Compressor) []byte {
        if c == nil {
            return []byte{}
        }
        return c.compress()
    }

    func main() {
        var c *GZipCompressor
        c = nil
        DoCompress(c)
    }
This is the "nil interface" trap and will crash with a nil-pointer panic even though we are "checking" for nil.
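For comparison, one way out of the trap is to make the method itself nil-safe. A nil pointer receiver is legal in Go, so the method can do the check that the interface comparison can't. A minimal sketch (identifiers renamed for clarity, not the original poster's code):

```go
package main

import "fmt"

type Compressor interface {
	compress() []byte
}

type GZipCompressor struct {
	internal byte
}

// compress is written defensively: a nil *GZipCompressor is a valid
// receiver in Go, so the nil check can live inside the method.
func (c *GZipCompressor) compress() []byte {
	if c == nil {
		return []byte{}
	}
	return []byte{c.internal}
}

func DoCompress(c Compressor) []byte {
	if c == nil { // only catches a true nil interface
		return []byte{}
	}
	return c.compress()
}

func main() {
	var c *GZipCompressor            // typed nil
	fmt.Println(len(DoCompress(c)))  // no panic: the method handles nil itself
}
```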


An example of this which I think every gopher hits at least once is the use of an error struct instead of an interface in a return value:

    func returnsNilError() (int, *MyErr) {
      return 0, nil
    }
    // case 1: no other use of error, nil works
    {
      i, err := returnsNilError()
      _ = i
      fmt.Printf("err is nil: %v\n", err == nil)
    }
    // case 2: someone adds a couple lines of code before that call...
    {
      _, err := os.Open("/tmp/file")
      i, err := returnsNilError()
      _ = i
      fmt.Printf("err is nil: %v\n", err == nil)
    }
    // Wait, what? Someone adding a line of code before your code changes how your nil check works?
Playground: https://play.golang.org/p/-t1Hq8WMF8l

This, I think, is the reason everyone uses "error" in their return values in go, even though using a specific type (like "*os.PathError") would make it much easier to know what possible error conditions a function could have.
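A minimal demonstration of why the `error` return type sidesteps the trap: `return nil` through the interface type produces a true nil interface, while the same nil through a concrete pointer type does not:

```go
package main

import "fmt"

type MyErr struct{ msg string }

func (e *MyErr) Error() string { return e.msg }

// Returning the concrete pointer type: nil here is a typed nil,
// which becomes non-nil once stored in an error interface.
func concrete() *MyErr { return nil }

// Returning the error interface: nil here is a true nil interface.
func viaInterface() error { return nil }

func main() {
	var err error = concrete()
	fmt.Println("concrete return == nil:", err == nil)             // false
	fmt.Println("interface return == nil:", viaInterface() == nil) // true
}
```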


Can you elaborate on how and why err==nil changes in your second case? A link to the docs would be helpful.

Thanks!


It's a combination of two things: first, type inference in short variable declarations (see the spec here: https://golang.org/ref/spec#Short_variable_declarations ), and second, nil interfaces vs. typed nils stored in an interface type (see this FAQ entry: https://golang.org/doc/faq#nil_error ).

Another way to explain the difference is the following:

    // case 1 is the same as
    var i int
    // inferred type of *MyErr because that's what returnsNilError returns
    var err *MyErr
    i, err = 0, (*MyErr)(nil)
    // err is (*MyErr)(nil)

    // case 2 is the same as
    // inferred type of error because os.Open returns that
    var err error
    _, err = os.Open("/tmp/file")
    var i int
    i, err = 0, (error)((*MyErr)(nil))
    // err is (error)((*MyErr)(nil))
Note well that "err" is a different type in those two cases. Type inference, plus the fact that short variable declaration re-uses an existing variable if the type is compatible, can result in previous code changing what type a variable is.

It's well known that an interface holding a typed nil is != nil (i.e. "var nilErr *MyErr = nil; var errIface error = nilErr" results in an 'errIface' that is not equal to an untyped nil).

I hope that explanation made sense to you.


Thank you!


I recommend writing `if _, err := ...; err != nil { ... }` wherever possible, to avoid the case above where you overwrite the existing error.

It is not possible to always do this; `if val, err := ...; err != nil { } else { use(val) }` is a Bit Much.


Sure, with = (in this case the shorthand :=), what happens before can change how it works.

The second := in case 2 should have raised a visual alarm. The fact that := assigned err a second time at all means inference could have happened, like

  err = (*MyErr)(nil)
To catch this, you could have written the nil check like:

  fmt.Printf("err is nil: %v\n", (*MyErr)(err) == nil)
and the compiler will croak. Or, to prevent this, replicate the inference explicitly:

  fmt.Printf("err is nil: %v\n", err == (*MyErr)(nil))


Back when we thought "Go 2" was going to have lots of breaking changes, I proposed making comparison of an interface to nil illegal [1]. It was more or less shot down, since `error` is an interface, and basically every library would need to be rewritten.

[1] https://github.com/golang/go/issues/30865


Well, instead, they could add a new mechanism that unifies nil checks while allowing the old mechanisms to continue working. Maybe generics will allow us to write a single “IsNil” function, though, which will be good enough for me.


They can't, because the two nils are distinct.

I hypothesize, but can not prove, that the root problem isn't that there are two nils. The root problem is that people incorrectly think nil is always invalid. It is not. This is perfectly legal (https://play.golang.org/p/71BNAyXIzRv )

     type Thing string

     func (t *Thing) DoSomething() {
         if t == nil {
             fmt.Println("nil thing")
             return
         }
         fmt.Println(string(*t))
     }
Nil pointers are perfectly legal values. You can call methods on them no problem. I have places where I use them; for instance, I have a memory pooling implementation where the nil pointer simply fills requests via "make" directly and does no pooling, so you can swap it out easily to test if it is causing some other bug.
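A minimal sketch of that pooling idea; the `Pool` type and its fields here are hypothetical illustrations, not the commenter's actual implementation:

```go
package main

import "fmt"

// Pool hands out byte buffers, reusing freed ones when it can.
type Pool struct {
	free [][]byte
}

// Get works on a nil receiver: a nil *Pool simply allocates via make
// and does no pooling, so callers can swap pooling off by passing nil.
func (p *Pool) Get(n int) []byte {
	if p == nil || len(p.free) == 0 {
		return make([]byte, n)
	}
	buf := p.free[len(p.free)-1]
	p.free = p.free[:len(p.free)-1]
	return buf[:n]
}

func main() {
	var p *Pool // nil: pooling disabled, but still a usable value
	b := p.Get(8)
	fmt.Println(len(b)) // 8
}
```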

So I think people think of there as being "two equally invalid values", but it's not true. There's the nil interface, which is pretty useless, but an interface containing a nil pointer in it is legal and may very well completely successfully implement the interface. Checking to see whether the interface "contains" a nil pointer is almost always an error. It is not something you should ever do when you have an interface value.

The real error occurs when you put the nil pointer in an interface value that can't correctly implement the interface, which is also nothing particularly special; putting anything in an interface value that can't implement the interface is an Already Lost situation. That the error then propagates around is bad, but the error occurred earlier and it's already too late to fix it by the time some other bit of code receives this broken value. Good code never has any reason to penetrate the interface and interrogate the underlying type for whether or not it is nil; if you are routinely encountering this problem, if you stop putting invalid implementations in an interface value, the problem will go away. I use all these features routinely and don't encounter this problem, because I know to never lie by putting something in an interface value that doesn't implement the interface.


> Checking to see whether the interface "contains" a nil pointer is almost always an error. It is not something you should ever do when you have an interface value.

So here's where I do that:

https://github.com/gwd/session-scheduler/blob/master/handle_...

The pattern is basically:

    var foo interface{}

    switch (something) {
    case A:
        foo = Foo()
    case B:
        foo = Bar()
    }
Foo and Bar both return different concrete types which are nil if the request can't be handled. I want to do something if either Foo or Bar return 'nil', and also if neither case A nor case B are taken. So I've got:

    if foo == nil || reflect.ValueOf(foo).IsNil() {
        // handle the error
    }

    // Pass foo elsewhere
I mean yeah, I certainly could arrange things differently; but in general I think that's not an unreasonable thing to expect to be able to do.

> The real error occurs when you put the nil pointer in an interface value that can't correctly implement the interface... putting anything in an interface value that can't implement the interface is an Already Lost situation.

First of all, that's not the case in my example above: nil can satisfy every method of `interface{}`.

Secondly, my compiler refuses to compile if I assign something other than a nil pointer that can't implement the interface. If putting nil into an interface is such a bad idea, the compiler should prevent you from doing it.


"First of all, that's not the case in my example above: nil can satisfy every method of `interface{}`."

Yes, that's true; in this case the lie is elsewhere. You said "interface{}", but you have a contract you expect the value to be able to keep. It's a contract the type system can't enforce, true. But it's a contract you're setting later in your RenderTemplate call, implicitly, and you're breaking it. The error occurs on the lines where you unconditionally set the "display" value to this contract-breaking value without checking whether it fulfills the contract you expect of it later.

(I have a number of places in my code where I document something like "this function takes an interface{} but this value is expected to be able to be passed to encoding/json to be turned into JSON without error". Yes, the type system can't enforce that and it would be nice if it could. Nevertheless, if someone else calls that function and passes it something with a channel in it or something, the bug isn't in the code using the interface{}, it's in the code that passed it something that broke the contract.)

You need to not slam a contract-breaking value straight into the "display" variable without checking it.

The simplest thing is using an intermediate value, which will be typed:

     // above the switch
     notFound := false

     // at the display line
     userDisplay := UserGetDisplay(user, cur, true)
     if userDisplay == nil {
         notFound = true
     } else {
         display = userDisplay
     }

     // use the notFound value here to decide to display the not found page.
The probably-better thing over all is to use polymorphism instead of a switch statement. I don't have a great Go-specific link, but here's a simple example: https://www.codementor.io/@uditrastogi/replace-conditional-s... I'd write a method that can return a notFound status or something instead, or an error I can turn into "not found", and never try to write any code that splits between different types with interface{}.

It'll probably make you happier in the long run if you learn to do that. I find it a very common pattern in HTTP handlers, because it's very common for the HTTP interface to simply not directly map to an object hierarchy cleanly.

"I mean yeah, I certainly could arrange things differently; but in general I think that's not an unreasonable thing to expect to be able to do."

Most languages pride themselves on handing you a ton of tools and telling you to do whatever you want. You then don't have to be an expert in each individual tool, just know how to use the toolbox to get the whole job done. Go hands you a smaller toolbox, and expects you to be an expert in each one. In my experience, it can generally get the job done just fine, but you need to be ready to use each tool very well.

I say this without sarcasm or harsh intent: If you're looking at my suggestions and you're saying "yeah, but I don't want to write that way", you're going to be fighting Go forever. There are in fact good ways to write that sort of code in Go. I do it all the time; I don't think I have a single instance of the pattern you're using here in my entire codebase. But you need to lean in to the tools and let them guide you, not try to force them to do what you want. If you don't want to do that... and that's perfectly valid, I'm not saying that's bad, but if you don't... you should probably stop using Go, to save your own sanity.

Also... to be clear... this is all a software engineering discussion moreso than a Go defense. It is a good idea in all languages to keep track of contracts and be sure not to put things in the variables that can't fulfill the contract. It is never fun to break a contract in some bit of code, and then try to pick up the pieces later. While Go's type system is certainly far from the strongest, there aren't any languages short of the dependently-typed languages that can express all the contracts of interest, if even they can. This is not a good pattern anywhere, you just have fewer tools to hack around it in Go.


> Most languages pride themselves on handing you a ton of tools and telling you to do whatever you want. You then don't have to be an expert in each individual tool, just know how to use the toolbox to get the whole job done. Go hands you a smaller toolbox, and expects you to be an expert in each one. In my experience, it can generally get the job done just fine, but you need to be ready to use each tool very well.

I disagree with the characterization that "Go expects you to be an expert in each one". That's what C does, with all its arcane implicit type conversion rules, memory models and UB traps lying everywhere. Coming from C and learning Go, it was refreshing, for instance, for integer overflow to be defined, and for it to be impossible to even compare `int32` to `int64` without a cast.

Which is why my actual proposal is along similar lines: The "magic" of comparing an interface to a bare `nil` contains a trap, so disallow it. If you ended up with something like this:

    if display == interface{}(nil) {
        ...
    }
It should be pretty obvious that if `display` is `(*UserDisplay)(nil)`, this comparison will be false. That would prompt you either to use reflection, or to use something else to track whether `display` contained something valid (e.g., a separate boolean variable, like you propose).

Furthermore, I do stand by the comment I made in the discussion of my proposal regarding comparing `error` to `nil`: It turns out, probably the single most common pattern in Go is in fact not safe. Anyone anywhere could accidentally return `(*MyError)(nil)`, and suddenly all the error checking code all the way up the stack is completely broken. Yes, as you say, doing so would clearly be a bug; but the whole point of having a type-checked language is to be able to prevent this type of bug from cascading throughout the system.

Golang is still, in fact, my go-to language for new projects. And in most things it's just great -- I know how to use the type system to prevent all sorts of dumb mistakes, and 75% of the time, if it compiles, it Just Works. It's because Go is so good at preventing stupid errors that I find the "interface trap" so frustrating.


You are still speaking as if an interface containing a nil is the same as a nil interface. They are not. (*MyError)(nil) as an error value is perfectly valid. It means there is an error, and whatever created that value is promising that you can call .Error() on the resulting value and get a string. If you get a panic instead of a string, the code that made that promise is where the bug is, not the code that called .Error()... and in Go, it is perfectly possible for that nil to produce a string perfectly validly. Calling methods on a nil is not invalid Go. That is a concept ported over from C++ or similar languages inappropriately. It is as wrong as worrying that you can't create a value on the stack in a function and return it. Go does not work that way.

Until you correct your understanding, you will continue to find Go frustrating. nil and invalid are not the same thing. This is not a matter of opinion; it is how the language works. As long as you persist in thinking otherwise, you will remain confused.

If you check whether err is nil by first comparing the interface value, and then penetrating the interface to see if the underlying concrete type is a pointer and that pointer is nil, that is a bug in the code doing so. You should never do that. It is never correct. It is objectively wrong to think of them as the same thing and to treat them the same, because they are not.

The correct solution is what I outlined; do not create the lying interface values in the first place. Then they can not propagate through your code and mess you up.

I'm not trying to debate whether Go is "right" or "wrong" here. It is perfectly acceptable to come to a correct understanding of how Go works and still think it is not the best way to build the language. Personally I'd still be happy to get non-nillable pointers, in the same way C# managed to add them later, so you can put me in that category myself. But you still have an incorrect understanding of how Go works, and I'm writing this not to defend Go but to try to save you that frustration. However, if you keep resisting the correct understanding, you will continue to be confused and frustrated and write bugs.


I bet there's a ton of Go code out there with methods that assume the pointer receiver cannot be nil without actually checking. The need for the nil checks, along with err != nil everywhere, makes for some incredibly verbose code.


I find that a bit pointless. Unless you can actually recover from it, it's better to just panic, since you clearly don't want a nil receiver.


Then we're back to the nil interface problem: how do you know whether thing you got is nil or not?


And I agree with the proposal committee's decision to decline it. For a language that prides itself on great backward compatibility, what you were proposing wouldn't fly.


The author did say "Back when we thought "Go 2" was going to have lots of breaking changes". It seems reasonable in that light.


Exactly; if we're going to "rip the bandaid off", just do it all at once. The idea seemed in line with other aspects of golang: for instance, you can't compare an int32 and an int64 directly -- the compiler will reject it if you don't cast the int32 to int64 before comparing. This is obviously because, as old-school C programmers, they recognized that automatic promotion is a trap that caused all kinds of issues. I'd argue that the "nil interface is not nil" trap is quite a bit worse.

As it turns out, a lot of the improvements they were thinking about could be done without breaking backwards compatibility; which of course has some huge advantages.


They weren't wrong. You might as well start an entirely new language for that change, even though it looks so small.


> It, to me, is Go's third biggest wart.

Out of curiosity, what are the first two?


Not him.

For me it's:

1. context.Context and putting everything under the sun there, untyped (which is a design fault, not just "people using it wrong")

2. reusing the looping variable in for loop

2.5. general awkwardness of go templating (I plan to write a whole article about this, but am too lazy to actually write it)


I struggle with context.Context as well. It really comes across as a design cop-out.

The language makes a big deal about memory management and data structure ownership, and as a result it’s sorta a pain to send stuff around on the stack, both at initialization and runtime. As a casual golang user, the context mess feels like an escape hatch to help with those problems. I’d prefer some GC compromises built into the language itself instead.


I'm confused about how a "GC compromise" would help with having to pass things around the stack. It sounds like you just want a request-scoped global or something, but I don't know what that has to do with the GC?

FWIW, I think request-scoped globals are terrible and is among the many things I hated about Flask. I think generics will largely solve this problem by allowing us to write generic middleware with a type-safe context and probably even functions to compose the middleware and automatically propagate the context variables.


context.Context is used in a number of situations, including for a "don't call it a request-scoped global". And whatever, web frameworks always end up with one of those. Would love to see languages do better, but ok.

My concern with its use in golang is that so much seems to get jammed into it. My experience with it is that it ends up being a holding pen for all the things that other systems do with initialization-time dependency injection / resource lookup sorts of things, but replaced with "oh just pass `ctx` as an argument everywhere".

From what I've seen of Go, it looks like that happens because of the state management idiomatic constraints of the language, which in turn seem GC-related.

In a garbage-collected language, I'd put a bunch of context-y things into some sort of request executor datastructure, the web framework would manage the initialization and retention of that, and request handlers would operate against those initialized data structures. That doesn't line up all that well with idiomatic Go, and so we end up needing to hide those sorts of things somewhere else. Enter context-abuse.

Of course, there are plenty of use cases where those sorts of patterns aren't relevant. Right tool for the job and whatnot.

Again, I'm a casual golang user at best -- I bet I'm missing some newer / better approaches, and would love to learn more about what they are.


Go has GC? I don't understand your message that much :)


Yeah but my experience with idiomatic go has been heavily value-biased, and a bunch of useful patterns for state management tend to lend themselves more towards a reference-oriented system. In my experience, this is outside the happy path, but necessary, and so people end up jamming all sorts of things into context.Context to get around the idiomatic paths.


My belief is that a) context should never have had a dual role of being able to hold arbitrary data (just add a map[interface{}]interface{} to the *http.Request ...) and b) it should not have existed as a means of work cancellation. Right now, one is forced to have a context argument just so that it can be passed from some upper level to a lower level that actually needs it, even if your function itself doesn't really need it. Since goroutines are built into the language, a means of cancelling them should've been built in as well.


But how do you cancel something without a handle to that something?

What (I think) you are suggesting implies globally scoped, mutable variables that affect control flow dramatically. Sounds really hard to reason about and very much anti-Go-explicitness. Context is the handle by which an outer context holds on to the nested ones.

The only non-bonkers way to have a handle like that would be to give every function a getContext function to query some kind of context tree and signal parent/children. This is totally against Go explicitness as well.

Maybe something like Scala's implicits to sugar it, but that's another not-go-looking pattern (although it would be nice)


The go statement could return a handle to the go routine. There can be a cancel magic function that cancels the go routine. You can regain the task local storage by using this handle as a key into a map of task data. This would have been a breaking change particularly in CGo threads but it’s a vastly simpler model for the user.


This is just hypothetical, but the go statement could be an expression that returns a cancel function. From there, the sky is the limit. And on the other side, have a runtime.Done function that returns a channel that will be closed when the said cancel function is called. There are a bunch of other runtime functions that are goroutine-specific already, so there is precedent.


I hated Go templating until I really dug into it and started loving how flexible and not-OOP it is.

It makes me want to port it to Rust, since so many of the major Rust templating libraries are Jinja copies or Handlebars copies.


What would you call fixing #2? "Loop's vars are const within loop's scope?"


No, it can just use a different variable each loop turn?

Go authors themselves admit it might have been a design mistake, but is unfixable without breaking compatibility.

https://golang.org/doc/faq#closures_and_goroutines

> This behavior of the language, not defining a new variable for each iteration, may have been a mistake in retrospect. It may be addressed in a later version but, for compatibility, cannot change in Go version 1.


> No, it can just use a different variable each loop turn?

How does that work? The call-frame needs to be of fixed-size, but it's rarely knowable at compile time how many iterations a loop will perform. Unless we begin heap-allocating loop variables or treat each loop iteration as a function call (both of which will kill performance, I think), I don't see how this could work?


You seem to have some fundamental misunderstanding about what costs memory. As the documentation says, the following works fine:

    for _, v := range values {
        v := v // create a new 'v'.
        go func() {
            fmt.Println(v)
            done <- true
        }()
    }
The scope of the variable has no effect on how many values may or may not be allocated.


> You seem to have some fundamental misunderstanding about what costs memory.

Probably, hence my "How does that work?" question.

I don't doubt your example--I know it works, but I'm struggling to understand how it manages to work, since (as I understand it) the closure presumably captures the environment, which presumably means taking the address to the new `v` which presumably is the same address for each loop iteration?


https://en.wikipedia.org/wiki/Funarg_problem

Whether a loop is involved is irrelevant; it comes up any time a value referenced by a closure might outlast the environment that created the closure. That can happen in loops, but also in if conditions or normal function calls. The value might be promoted to the heap, but in trivial cases like the above it could be copied directly onto the new goroutine's stack. Or, if escape analysis says the closure can't outlast the environment, it will use the same stack.


That makes sense, so most likely Go's compiler is promoting the new variable to the heap (I doubt it's smart enough to recognize that the new `v` is only used by the closure which in turn is only used by the new goroutine and thus safe to allocate `v` in the new goroutine's stack). In any case, thanks for the link, I didn't know there was a formal name for the problem.


So again: Allocations of regions of memory representing values, not variables, are what does or does not get promoted. If `values` is an array of strings, do you think the following makes a new string on the heap (that is, a new 16 byte heap allocation, internally pointing to the same variable-sized region of memory)?

   for _, v := range values {
        go func(v string) {
            fmt.Println(v)
            done <- true
        }(v)
    }
No, the 16 bytes will be copied onto the goroutine's stack.

Do you think it's smart enough to do the trivial transformation of the first into the second?


> So again: Allocations of regions of memory representing values, not variables, are what does or does not get promoted.

Right, I understand the difference between a variable and a region of memory.

> If `values` is an array of strings, do you think the following makes a new string on the heap (that is, a new 16 byte heap allocation, internally pointing to the same variable-sized region of memory)?

No.

> Do you think it's smart enough to do the trivial transformation of the first into the second?

I guess I'm surprised that the semantics for the `v := v` in a loop thing are "this will get allocated somewhere else such that it's guaranteed not to be stomped on by another loop iteration". For example:

    for _, v := range []int{2, 4, 6} {
        v := v
        mut := sync.Mutex{}
        go func() {
            mut.Lock()
            fmt.Println(v)
            mut.Unlock()
            done <- true
        }()
        go func() {
            mut.Lock()
            v = 1
            mut.Unlock()
            done <- true
        }()
    }
If it allocates `v` on the goroutines' stacks, then it won't catch the mutation, so presumably it allocates on the heap at least in that case. In whatever case, the semantics are surprising to me.


First, I consider warts of the language to be things that are side effects of the way the language is designed. This means that I would have loved for Go to have non-nullable types, Pike didn't bless Go with that feature. Same for generics and the other popular criticisms. That said the other 2 are

1. Error handling. Not only is `if err != nil` (or my preferred `if _, err := foo(); err != nil`) a lot of noise, errors are completely opaque and might as well be strings.

2. Dependency management. It was a mess for a long time; go modules fixed a lot of issues in its own way but still has problems (e.g. the "v2" workaround). My theory is they expected everyone, in some way, to "self-host" their modules (like how Java would have com.google.foo.bar, eventually everyone else would have "http://go.company/package"), but GitHub became the central repository of software for everyone and linking directly to packages was no longer sound.


Errors in Go have been queryable chains since 1.13; it's pretty seamless with `%w`. Rust has the `?` shorthand, which is very convenient, but also the annoyance of type matching errors, particularly across different libraries, to the point where third-party error handling libraries like `anyhow` are pretty important. Leaving nil itself out of the picture (obviously, I'd rather Go had Option types), I don't know that either language has a definitive edge here.

Errors in modern Go are definitely not just strings.

Before wrapped errors in Go, this would have been an easier case to make against the language.


Anyhow's tagline is literally "a better Box<dyn Error>", which gets you nearly the same result modulo some pretty printing.


Yes, Anyhow is almost as easy to use as Go's current error handling.


You're significantly more experienced than me, so I'll defer to you here. What you're saying doesn't make sense to me though.


You're probably more experienced than I am with Rust!


I might be misunderstanding go, but I think the big difference is you use anyhow to get looses error handling at the edges of binaries. It'd be weird to see in a published library, and unusual to see deep in an internal library that supports a binary. Whereas in go you do that sort of error handling everywhere.


I think %w makes sense in libraries, too. They're structured errors that behave like strings. It's a neat trick.


Most errors in Go are still just strings, potentially with some comparison possible with sentinel values like in the "io" package. As and Is give some nice tools, but it's still up to documentation to decide which one to use, and different packages have different conventions for returning error types vs error values.


Most errors are probably strings because there is usually nothing a caller can do to handle that particular error type specially. When one can actually do something with a certain error class, such errors are usually not plain strings.


#1 has improved a bit lately, but I did get some mileage out of making my own type which carries an optional HTTP status, a severity level, the core error and an optional payload; and it implements Error(). It allows for easy logging, and can be used to return to the http client with a variety of messages.


How are errors opaque? If you want a FooError:

   foo := &FooError{}
   if errors.As(err, &foo) {
       fmt.Println("foo error!")
   }


Lack of generics and clumsy error handling would be my guess.


In practice, those have honestly been the two largest assets to me. No generics means not worrying about a coworker going off the rails and creating a library where it's not needed (me being that coworker often in my past), and clumsy error handling means factoring said error handling into your APIs, making you much more deliberate about where and how errors are eaten. Having used Java extensively, those two 'warts' have honestly saved me a lot of heartache.


How much do your colleagues (ab)use goto statements that Go provides? :)


C++ has goto, Java has “break <label>“, nobody uses it but you see a lot of generics abuse.


I’ve seen break label a handful of times for nested loops. Whether that was the best choice is of course always up for debate. It’s definitely possible to replace and not a necessary language feature.

Maybe it’s a good thing it’s not widely known, generics are trivial to discover.


What can be classified as generics abuse in Java?


Using generics anywhere only a single type is ever used. The ambiguity really just hurts readability and creates a landmine for future engineers that think other types can be used willy-nilly and things will behave as they should. Especially if some optimizations get done underneath that assume or asserts some set of types, but doesn't enforce them at the API level.


Using java is abusive towards generics, they didn't deserve that?


I've literally never seen them. Not even once.


One of them has to be the for loop reusing value thing.

    func processItem(item *string) {
      if item == nil {
        fmt.Println("nil item")
        return
      }
      fmt.Println(*item)
    }
    
    stuffToProcess := []string{"one", "two"}
    wg := sync.WaitGroup{}
    for _, item := range stuffToProcess {
      wg.Add(1)
      go func(s *string) {
        time.Sleep(1 * time.Millisecond)
        processItem(s)
        wg.Done()
      }(&item)
    }
    wg.Wait()
    // What do I print?
Playground here: https://play.golang.org/p/mouvT1BkpNJ


gosec[1] will warn you about this, at least.

    [reuse.go:26] - G601 (CWE-118): Implicit memory aliasing in for loop. (Confidence: MEDIUM, Severity: MEDIUM)
    25:    wg.Done()
  > 26:   }(&item)
    27:  }
[1] https://github.com/securego/gosec


Easily enough fixed by item := item (that is, shadow the variable with something new that won't be overwritten)


Can't this be fixed by changing this to be passed by value?

https://play.golang.org/p/srAyKZN2NTH


Yes, it is odd to pass the address of a local variable from the enclosing scope like this to a go routine.


Well that's true in the sense that usually you'd just close over it, which is the classic broken case of closing over a mutable binding: https://play.golang.org/p/40FRmvhifBl

I assume GGP's contrived example is to show the underlying mechanics of the issue.


I think these two rightfully get highlighted, but I've found it difficult to organize complex Go code (though clearly some people manage), which I think is now my biggest issue with the language.

Its inability to deal with cyclic references, the lack of overloading, the lack of package-level visibility and the lack of static functions all contribute to a lot of ceremony.

You end up with functions like newUserWithPassword and NewRole (casing intentional), instead of User::new() and Role::new()

You could just have a NewUser that takes a NewUserOpts which exposes a fluent interface. But again, a lot of ceremony.

You can use a package per type, but there's still no overloading, and you'll need a third package to bridge the two if they reference each other.


> It's inability to deal with cyclic references, the lack of overloading, the lack of package-level visibility and the lack of static functions all contribute to a lot of ceremony.

After 4 years of Go, this is usually a design thing. Once I figured out how to separate and isolate domains of interest better, I stopped running into these issues entirely. My code, in and outside of Go, is much better for it. I also learned that much of "thread-safe" programming is taught this way, which makes sense.


F# has similar constraints, and gets similar complaints, and yet advocates (like I) think it’s a feature, not a bug. Most problems can be decomplected, and I like that Go enforces this particular constraint.


Do you possibly have any example projects you would be willing to share?

I've been getting into Go for the past couple of months, but I still struggle with this.


I never really hit this bug, but I realized that was because I was very rarely using pointer receivers. I would rarely use mutable objects because I found immutability easier to deal with, and if I didn't need mutability I would prefer to pass by copy. I'm not really sure what the performance implications were. It would be an extra allocation each time I passed my value object into an interface, but I probably wasn't doing that in a tight loop anywhere anyway. Passing by copy would probably also add a bit of overhead, although my structs were rarely more than a few members, so I suspect this cost was similarly marginal. I also think it helps that my mental model of interfaces is quite accurate--when I see a nil-comparison on an interface, I intuitively understand that it's operating on the interface and not the enclosed value. None of this is to excuse the use of `nil` in Go, only that it can be mitigated to a degree.


You should have been checking if the `c` pointer receiver in compress() was nil before dereferencing it though. That way even if you wrongly assumed that checking for interface nil was equivalent to checking for the underlying type's nil, your program wouldn't have crashed.


I wanted to argue with the commenter too, because you're right; you can and usually should nil check inside the method body. That said, it's very counterintuitive that you checked == nil and did not enter the if body. Further illustration:

https://play.golang.org/p/FUS2-CcHflX

The more I played with the code, the more I have to agree with the commenter. This is not obvious behavior.

You can also get into obscure behavior if your implementation does not have a pointer receiver. Take this code for example: https://play.golang.org/p/MjjGMAZL3-v How can I get a nil pointer immediately after I checked whether it was nil? This one, I do know has bitten me before. Infrequently, but it has.


Wait, am I misunderstanding, or are you saying the code would usually be written as:

    func (c *GZipComprosser) compress() []byte {
      if (c == nil) {
        return []byte{}
      }
      return []byte{c.internal}
    }
?

I don't think that's common at all (1) nor is it a sane burden to put on developers.

(1) https://github.com/golang/go/blob/master/src/bytes/buffer.go...


It depends on the context. My guess as to why bytes.Buffer doesn't perform the nil check is to not shield developers from the fact that they can't do very much with a nil pointer Buffer: there's no backing allocation, and the code should panic early to flag it. But if you decide that a nil pointer receiver is a valid state for the receiver to be in (especially if it is going to be a struct field, in which case the zero value is always nil), you should do the nil check. That's just life with Go; remember you are talking about a language that sprinkles `if err != nil` everywhere. Checking for a nil pointer receiver (in every method) is just par for the course.


If you add a nil check to every receiver method, you are potentially adding thousands of unnecessary branches to your program, and it hurts readability too.

The better way is what you described: treat the nil state as invalid and require the user to handle it (the user must initialize the value before calling your methods).


> Checking for nil pointer receiver (in every method) is just par for the course.

Or not.


It's pretty common. Common enough that it's built into the go tour: https://tour.golang.org/methods/12

Worth mentioning that though the stdlib is a great reference, you will definitely see multiple programming styles and different practices across the stdlib codebase, some of which are no longer considered best practice but also tend not to change at all due to the Go 1 compatibility promise.


Yeah, I've never seen this before. However, I _did_ learn that the receiver can be a nil pointer, which I've never thought of. It seems to me that this should fail spectacularly anyway, and shouldn't need to be handled there...


Got bitten by this too in my early days of learning Go. This was around v1.4, and to this day I haven't seen or had the need to pass a SomeInterface(nil) to any function or method call. To me it's more a consequence of the type-system design (in most languages you can't call a method on a nil/null object) than a feature.


I know why it crashes (now, I hit this bug almost 6 years ago), the problem is it isn't intuitive or idiomatic.

1. Checking for nil receivers is rare and hardly idiomatic. (Although I have seen methods that are designed to work with nil receivers)

2. That a typed nil stored in an interface compares unequal to nil (Compressor((*GZipComprosser)(nil)) != nil) is surprising.

A newbie looking at this code would not expect it to crash, which is why I consider it a wart. In this specific case, the code was structured such that if the compressor was nil then no compression would happen. Then in some hard-to-remember case, when you opted into gzip, the whole thing would crash. I think I had been writing Go for about 18 months at that point and that snippet got past review. Ultimately the design had problems (depending on `nil` to mean anything was probably dangerous, and it's clearer to give NoCompression its own type), so I'm not really arguing that the above snippet should work.

Once you understand why it's there it makes sense, but ultimately I think it's an unfortunate side effect of designing a language with nullable types.


> Checking for nil receivers is rare and hardly idiomatic.

I think newcomers are exposed to this behaviour now. For instance, it is step 12 in the tour of Go:

https://tour.golang.org/methods/12

I learnt Go before the tour was available and learnt a lot from going through the tour later on.


Oh wow, I hadn't seen that one yet. Gross, I don't want to have to check for nil receivers everywhere. I hope that's a wart that gets fixed.

In Java land at some point we were taught to never use null anywhere, instead use the null object pattern.

And in Go I'd like to extend that, never call a method on a nil receiver if you can avoid it. I think it should be up to the caller to do the nil check, IF the receiver can ever be nil.


> Oh wow, I hadn't seen that one yet. Gross, I don't want to have to check for nil receivers everywhere. I hope that's a wart that gets fixed.

Hahaha oh you sweet summer child. It's considered a feature.

And since typed nils will never get fixed despite being an actual wart (though the FAQ tells you you're holding it wrong you idiot: https://golang.org/doc/faq#nil_error) there's no chance nil receivers will get removed from the language.


this is probably a consequence of the language design.

you also end up with things like this: https://play.golang.org/p/wgaLGOqFDZ9


Checking for nil in the receiver function feels like an antipattern, like checking `this` for null in a C++ member function.


Yes. Even when I expect nil receivers because I'm adding a method to a map or slice type, checking for nil is usually the wrong pattern (vs. the two-value comma-ok lookup, len, etc.).


Yep, got bit by this exact behavior too, returning a typed nil error as the error interface. The check for err == nil said false, although we were returning a nil typed error...


Nils are typed zero values; that should be clear from the existence of nil slices. Don't check for nil on an interface type. Check in compress(), where a concrete type is established.


And implement that check in each and every function that has a receiver? That sounds like a recipe for bad performance (unnecessary branching), and it hurts readability too.


Not necessarily. It's suggested as a way to maintain the typing decision the original parent made, but which itself is undesirable. The pattern can be changed to,

  func main() {
      var c Compressor  // signal do nothing
      DoCompress(c)
      c = &GZipComprosser{}  // normal call
      DoCompress(c)
  }


Playground link for convenience https://play.golang.org/p/mEVfizXY_Mf


Things like that make me shake my head every time I hear "Go is easy to learn."

No. It's not. There are way too many dark corners with things lurking there to bite you. It just "looks easy" until you try to do something non-trivial.

Used to love it but the more I learn about it the more I fall out of love. I can't help to wonder, would it be so successful if it weren't for all the Google-hype and the push because of kubernetes?

I don't mean to bad-mouth it. It's just that I invested time and effort in what now seems to me a bad language. A good language, IMO, should get out of the way once you spend some time with it: be as transparent as possible so that you can focus on the real problem, and do that in a safe way. But here I am in 2021 having to worry about all the different kinds of nil, or having to decide which way to go about marshalling JSON (why ship even one inefficient way to do it? I know, it makes code reviews fun).

And don't get me started on all the conventions and whatnot that half the community sees as holy. If they are, why not put them in the compiler like they did with gofmt and be done with it? This whole thing started off on very good premises (cut down complexity, simplify, get unproductive arguments out of the way), but somehow it got stuck halfway there, in a grey zone that is fast turning it into the new Java. No problem with Java here, but if they wanted to create a new Java-esque ecosystem/community, why go to all that trouble? My guess is that they didn't.

Dunno...

Had I time and not job market restrictions I'd be looking to Elixir for concurrency and Nim for everything else. But things being what they are I'm stuck with the gopher.


Time, Enums, Final, Sets - all nightmares in Go and make me miss Java. These are the basics and they are hard or missing.


And now with records and pattern matching coming to Java, even more so.


I have the same feeling every time someone says C is "simple". It's simpler than C++, yes, but it's definitely not simple. There are a LOT of dark corners.


There are no hidden calls, mostly there is one way to do something. No exceptions, so you can see entrance and exit points of a function. There is no "abstraction" feature, so you tend to keep your code away from unnecessary abstractions.

You can find counter-examples, of course; bad programmers can create a mess from anything and everything.

My question is, as an example: how come the Linux kernel's code, nginx's code, or memcached's code is 100 times more readable than almost any project in another "better" language?

This is why people say C is simple. You have limited options and you tend to end up with a "simple, readable" solution.

No one says "C is perfect, you should use it for anything and everything, it handles all the issues of computers, no gotchas, no surprises, 3 year old kids should start with C programming."


Unless you're working with strings, which are a hell of a nightmare to use safely. It's become a lot better in terms of the stdlib, but even stuff like the supposedly safe vsnprintf may quickly ruin your day and open potential overflow exploits if you forget to account for the space of a null terminator or accidentally write the return value to a size_t instead of an int.

Error handling isn't obvious either. The linux kernel does it the right way with error code returns and results in pointers, but that's not the obvious way at all.


Until you try to do something like zero a buffer to clear sensitive data just before it goes out of scope, and find that there's no way to do so that compilers actually have to obey.


Does Go have more dark corners than Nim, or are you just more used to Nim's dark corners?

I picked up Go when C# was my most familiar language, having dabbled in many over 30 years, and Go was the easiest to learn language I've ever dealt with. Plenty of things I don't like about it (like that nil exists! option types please!) but ease of learning and reading other people's code was amazing.


> I can't help to wonder, would it be so successful if it weren't for all the Google-hype and the push because of kubernetes?

Yes - Go would have been successful without the hype. There is a lot of underlying thoughtfulness to the design and language ergonomics when it comes to working in teams. Go lets you write simple code in a simple way - and that's the bulk of the code I read and write. The more hairy bits can be a bit unwieldy in Go, but it's still serviceable.

As a recovering Perl programmer, Go has an anti-TIMTOWTDI philosophy, and I love it. It absolutely demolishes a lot of bike-shedding: I bet gofmt alone has saved humanity millions of work-hours. For any system that requires a modicum of maintenance in a team setting, Go edges out most other languages: Go code is darn readable. For writing once-off/one and done systems, Go loses its biggest advantage.


"Easy" is relative. Go has been one of the easier languages to pick up and...go. I can read most go code that isn't my own without too much trouble. Modern java is also good in this regard, and has the added advantage of so many existing libraries.

Every language has warts and tradeoffs, many which have nothing to do with the language itself. Can it be hired for? Is is supported? What's the ecosystem like? How easily can it be deployed in a container?


I'm hitting the same kind of problem writing serde code in go, trying to marshal/unmarshal data to/from structs. The biggest problem is how SLOW reflect is, requiring all sorts of complicated caching to ensure the least number of reflect calls are made in aggregate (have a look at the JSON codec in the standard library for an idea).

The other terrible part is io.Writer. Since these are interfaces, the compiler can't tell what the actual implementation will do with the []byte parameter, even though we humans know perfectly well. So if you try to write from a stack allocated buffer, the compiler will hoist it into the heap, every time (and allocations are one of the slowest parts of go). I started off by writing a complicated heap-based buffering mechanism until finally I said to myself "nah... it can't be THAT bad - they've got to have some kind of mitigation for this!" So I changed everything to very simple writer.Write, and the serialization performance dropped by more than half (from 1700 ns/op to 3500 ns/op). Allocations went from 5 per op to 37 per op!

Would be nice to have a "nostore" modifier for slices to tell the compiler that the receiver is not allowed to store it (thus preventing this constant stack-to-heap copying).


I've arrived at the opinion that any use of reflect is a red flag that the author is trying to do something that Go wasn't designed to do and the code will probably be extremely flaky.

It's not a 100% match, but it's a good heuristic in most cases.

As my experience with Go has increased, the number of times I need to touch reflect has decreased.


How do you marshal/unmarshal JSON?


You write a custom marshal/unmarshal method for every type, of course.


That's what it does already. But you can't dynamically create a value of a decoded type without reflection, nor can you examine an arbitrary object for encoding without reflection.


Wait. Is that what the built in utility does? I thought it was reflection based. How does it do this without generators?


It's a mix of cached builders and reflection. It uses reflection to build an appropriate builder object for that specific type. Note that it is still using reflection in the builder though.


I wasn't really referring to the std lib. Though, to be honest, given the amount of hassles I've had marshaling and unmarshaling json in Go, I could well include it ;)


There are very good reasons to pick Golang and "correctness" is not one of them. Go is very pragmatic and it seems like most people are very prepared to forgive a lot of corners like this for a language that is succinct, hard to be clever in, easy/fast to compile, and performant.

While we're here I'd love to talk about Go's lack of memory pressure control/feedback though. How people are working around it/using Go in production -- is everyone just right/over-provisioning servers? using `ulimit memlock`? Hoping on swap?

I ask because I just ran into this working on a tiny project whose only job was to serve a single file[0] (with few resources) and was immediately bit by the OOM killer (I used docker to artificially limit resources).

[EDIT] I can't help but add this, as the language hipster/elitist I am. Don't pick languages that have nil/null in them in 2021 (and after) if you can help it. Strictness (or "correctness") around values starts with nil/null/undefined. That said, if you actually want to produce valuable software and produce value, use whatever the fuck gets you to v1 without too much suffering.

[0]: https://gitlab.com/mrman/kcup-go/



Wow.

So I fully believe that Go will eventually solve this and be even more valuable, because if people are creating so much value with it right now it is likely to get exponentially better as the Golang team considers and takes on things they avoided in the past (ex. Generics).

I've glanced at those holes but I don't want to go down them. I haven't written a large personal project in Go in like 5 years, and I probably won't ever again, because Rust. Also, if I really wanted to be productive I'd pick TypeScript.

I do have one for you, in kind[0].

[0]: https://github.com/kubernetes/kubernetes/issues/53533


> I ask because I just ran into this working on a tiny project whose only job was to serve a single file[0] (with few resources) and was immediately bit by the OOM killer (I used docker to artificially limit resources).

Not a fan of Go but unless your memory limit is extremely low there's no reason for that to happen, the GC does work. So clearly there's some sort of memory leak in the user code, either data being appended to a global, or (more likely?) a goroutine leak of some sort.

Maybe check your globals (is there a global slice or map?), and whether all your goroutines will terminate on their own or are guaranteed to only be spawned once? Else it'd be a similar issue in a dependency, looking at kcup.go fasthttp would be the most likely culprit.

edit: looking at their bug tracker, it might also be something like https://github.com/valyala/fasthttp/issues/518: the request is not considered responded to so the goroutines never terminate. I don't know the fasthttp API at all tho, just looked for "leak" on their bug tracker and that stood out


So the thing is I limited the memory to 100MB (because how much memory could you need to serve 1MB, even if every go-routine stack is 2-5kb and the references are all to one piece of data?)

This was an issue, but I found that even with giving it more memory things didn't get much better -- the performance was still much worse. Weirdly enough, net/http did well memory wise (and reasonable performance wise, for Go), but I went to fasthttp since that was what was at the top of benchmarks for go, but the simplest you-would-think-this-would-work code didn't produce the results I would have expected.

[EDIT]

> edit: looking at their bug tracker, it might also be something like https://github.com/valyala/fasthttp/issues/518: the request is not considered responded to so the goroutines never terminate. I don't know the fasthttp API at all tho, just looked for "leak" on their bug tracker and that stood out

So I actually got some help from a reader which was pretty awesome (https://gitlab.com/mrman/kcup-go/-/merge_requests/7), the fasthttp code performed much better after that.

I should also mention that I got some advice that didn't pan out -- in particular using fasthttp.ServeFile (https://gitlab.com/mrman/kcup-go/-/merge_requests/9)


> I ask because I just ran into this working on a tiny project whose only job was to serve a single file[0] (with few resources) and was immediately bit by the OOM killer (I used docker to artificially limit resources).

This looks like some badly-written, memleak-inducing code. You can cause that in pretty much any language.

To Golang's defence - we use InfluxDB and Telegraf on hundreds of production hosts and I've never seen those processes grabbing unhealthy amounts of memory.


> This looks like some badly-written, memleak-inducing code. You can cause that in pretty much any language.

Could you point at the line that is an obvious memory leak? There are three versions right now:

v1 - net/http (no leak, but perf wasn't great in comparison)

v2 - fasthttp

v2.1 - fasthttp (+ []bytes)

v2.2 - fasthttp (mem leak fixed thanks to a contribution)

All of these versions are <50 lines of code. The issue was not the code I wrote, but my misuse of the underlying library (fasthttp in this case) -- but I don't think any part of the code I wrote stands out as particularly memleak-inducing. I'd like to think writing very similar code against net/http and fasthttp producing such different results is a bit of a problem, my golang code aside. The interfaces look the same, act the same, but seem to have wildly different consequences.

> To Golang's defence - we use InfluxDB and Telegraf on hundreds of production hosts and I've never seen those processes grabbing unhealthy amounts of memory.

This is reasonable, I expect InfluxDB to be very well written software, but I'd like to point out:

- the Go devs at InfluxDB are way better than me at writing go

- memory usage and management (and not blowing up your server) is a key feature of InfluxDB

- I was trying to serve... a single file

At this point I'm quite tempted to try this with PyPy + Falcon or NodeJS and see how similarly naive code would perform with 2 cores and 100MB. Those other languages have different characteristics (harder to deploy for example) but it feels like all I'd have to do is make the right library choice (which I tried to do for Go) to get some pretty decent performance (which I expected to get with Go).


You just admitted yourself in a reply to a sibling comment that there was a memory leak. What am I missing here?


> This looks like some badly-written, memleak-inducing code. You can cause that in pretty much any language.

I must have misunderstood you because it read like you were suggesting the code could be determined to be memory leak inducing at a glance — wanted to know which part of it looked like that if so.


> While we're here I'd love to talk about Go's lack of memory pressure control/feedback though. How people are working around it/using Go in production -- is everyone just right/over-provisioning servers? using `ulimit memlock`? Hoping on swap?

none of these in my case. i simply used go’s benchmark tool to figure this out. i kept running it and trying out stuff along the way. at the end of development i ended up with a very performance-focused codebase.

result: i now have a running golang app that’s been up for 2 years without a single restart, indirectly serving millions of users on just 4 cores and 4gb of ram. no memory leaks, no cpu issues. very impressive compared with the standard java solutions that that company usually pursues.

this being said, it really depends on your use case. this golang software was a proxy that was handling an LRU cache. definitely not rocket science.


> none of these in my case. i simply used go’s benchmark tool to figure this out. i kept running it and trying out stuff along the way. at the end of development i ended up with a very performance-focused codebase.

That sounds like "right-provisioning", or am I misunderstanding you?

> result: i now have a running golang app that’s been up for 2 years without a single restart, indirectly serving millions of users on just 4 cores and 4gb of ram. no memory leaks, no cpu issues. very impressive compared with the standard java solutions that that company usually pursues.

This kind of experience is exactly what I was expecting to get out of Golang, very easily. That said, 4 cores and 4GB of RAM is a lot of resources for a single binary, and there is also the magic of swap to save you if you ever get in trouble (the app would theoretically get slower to the point of becoming unresponsive). I'm not sure how complex this application is, and I'm certainly not arguing for Java (because that's both hard to optimize, tune and write in the first place), but I wonder if there's a difference in complexity here. Is it the usual 3-tier app kinda deal?

> this being said, it really depends on your use case. this golang software was a proxy that was handling an LRU cache. definitely not rocket science.

Ahh there it is, thanks for sharing this bit -- Go would function as an excellent LRU cache -- I assume this program is performing a redis like functionality for another app? Are you offering the LRU functionality itself to users? I am actually kind of interested in what it does now...


It sounds like you have a memory leak in your golang app. That shouldn't be happening. Time to bust out pprof.

I'd say though GC is what limits golang as a systems language. If you are doing lots of memory heavy things, I'd say golang may not be the correct language to use.

[1] https://github.com/prometheus/prometheus/issues/6934


Definitely a memory leak, but the code is like < 50 lines -- it's a memory leak (and/or my bone-headed misuse) in the underlying library, fasthttp.

I got some help with fixing it from a reader though:

https://gitlab.com/mrman/kcup-go/-/merge_requests/7

Again, the goal was to serve a single file -- I should point out that v1 was using net/http and it did quite well (memory scaled up and down as necessary), but the combination of resource constraints (on my side, I only gave it 100MB of RAM) and use of fasthttp that wasn't ideal/correct caused the issues. It went something like this:

v1 - net/http

v2 - fasthttp (memory leak but I didn't notice at this point)

v2.1 - fasthttp (use []bytes, still has the leak)

v2.2 - fasthttp (leak fixed thanks to Pavel)

All these solutions performed worse than their rust counterpart though, so I was surprised at how much work Go was making me do more than anything.

GC is certainly what limits golang as a system language, and if I think about it I am certainly standing on top of a lot of abstraction on the Rust side (hyper, tokio, structopt), but this exploration made me more of a rust fan than anything. The ~30 mins I spent figuring out how to efficiently pass the references was painful at the time but the payoff has been hours of not futzing with the code.


I've read your code and there is no reason to use all those libraries, it could have been done in a performant way with std libs. Why would you use fasthttp?


Did you read the post (there’s an attached blog post)? I was worried I wasn't giving Golang a fair shake because the performance (under constraint) was so bad in comparison. fasthttp did end up being faster, I just had to fall into a bit of a pitfall first.


There's a proposal in github about being able to set an upper memory limit. Right now, you'd have to live with the possibility of being evicted from a k8s cluster however. That being said, if you think your go process shouldn't be consuming "that much memory (user definition here)", it's time to start profiling. I've uncovered my fair share of pebcak memory leaks that way.


> There's a proposal in github about being able to set an upper memory limit. Right now, you'd have to live with the possibility of being evicted from a k8s cluster however.

You've activated my trap card! I actually have another issue with k8s -- their rejection of swap seems like a bad judgement (though I'm sure it helped them ship faster and accomplish other things). I just made this comment[0].

> That being said, if you think your go process shouldn't be consuming "that much memory (user definition here)", it's time to start profiling. I've uncovered my fair share of pebcak memory leaks that way.

Yeah, but I was trying to serve... a single file with contents in memory. I'm pretty biased for Rust, but at some level isn't it a bit ridiculous if I have to break out the profiler for that? I spent like 30 minutes figuring out, and ultimately searching for, the right combination of `move` and `async` to give the async closures the right reference, but that code is much easier to read (for me, since I have paid the Rust tax), much more correct, and performed very well without me having to pull out a profiler.

[0]: https://github.com/kubernetes/kubernetes/issues/53533#issuec...


Do you mean the language specification is succinct or that code in the language is succinct - I thought Go favoured the former over the latter?

Edit: No implied criticism of Go intended - while not to my personal taste it seems like a fine language that a lot of people clearly get a lot of value from.


> Do you mean the language specification is succinct or that code in the language is succinct - I thought Go favoured the former over the latter?

Succinct was the wrong word -- for complex things, Go is definitely not succinct in that sense; you're absolutely right. It's more succinct than Java for doing what I wanted to do (waiting for someone to drop a nice tight Jersey/JAX-RS example in 3... 2... 1...), but what I should have written was "simple" -- Go code is simple to read, and it's much harder to strangle the rest of your team (and your future self) with complexity.

> Edit: No implied criticism of Go intended - while not to my personal taste it seems like a fine language that a lot of people clearly get a lot of value from.

I would disagree on it being a fine language, but I have watched enough of Rob Pike's and the Go team members' public talks to know that this is exactly what they intended, and that's great. It's a pragmatic, powerful language with good performance, and people are deriving a TON of value from it.

To be fair I should also note that I am the exact kind of person who Go is not built for -- a yak shaver and amateur type theorist. I'd pick Go over Java in a heartbeat, but I think there are some other good choices out there if you're free to roam.


You describe yourself as a type theorist but would pick go over Java even though it lacks generics?

That doesn't seem to fit.


I’d pick Go over Java for other separate reasons than types.

If you're on the JVM and you want good types, I'd look at Scala and Kotlin and maybe Clojure before Java. Java's type system is pretty wasteful, the marriage of structure and functions is a big mistake, first-class functions didn't land until 1.8 IIRC, it's got nullable types but people rarely use Optional<T> as much as they should; the list is long. Go gets protocols and interfaces more right, in my opinion. Maybe I shouldn't blame Java for not being up on PL research that wasn't mainstream yet, but Haskell has been around since the '90s.

Java's type system is better than Go's (because Go's is almost nonexistent), but it's getting strangled by Spring soup and the unbelievable ceremony around everything, which makes me not want to pick it.

If you want good types period, Haskell or Rust.


I think that a proper understanding, and awareness of the import, of nil, null, None and other empty/bottom types and values is a very pressing need in programming languages and databases (see the intricate discussion of SQL NULL [1]).

It is wrong to dismiss this as mere "WTF" caused by pragmatic implementation goof-ups. There is a deep philosophical issue here, which needs to be one of the first issues tackled when implementing a system, rather than an afterthought.

An analogy I can think of is recursion: the base case needs as careful a thought as the inductive case; otherwise the code will go into an infinite loop. Empty values and types are probably the base case of a type system, and have to be handled meticulously and in a logically sound manner from a programming language theory perspective.

[1] https://en.wikipedia.org/wiki/Null_(SQL)


Too bad nil/null _aren't_ typically bottom types or "base cases". They are real runtime values: sentinels of some kind, rather than valid references to a value of some type. A distinct sentinel value that, for some usually historical reason, your compiler often thinks is OK to pretend is some interface, right up until it blows up at runtime.

I get the confusion, though, since the same compilers that make that mistake often attribute some bottom-type-like characteristics to nil/null to facilitate using it (e.g. considering it a valid possible value of every other expressible interface). Empty values and other sentinels are meaningfully different from a type-theoretic bottom type.

Real bottom types don't come up much in mainstream languages. TypeScript has `never`, Scala has `Nothing`, and both type systems have distinct representations of the various `null`s and other empty values. Such uninhabitable bottom types see most of their use in describing generic constraints in situations where variance comes into play.

I think the real "WTF" here is that the Go compiler quietly turns a simple `nil` written in your code into one of potentially many nil-like sentinels at runtime (contrary to what one may think), coerced depending on usage; moreover, a simple `==` comparison or reflective call alone is insufficient to detect all of them. The semantics of how such a sentinel is coerced (and reflected) are surely in the realm of language esoterica, not simple beginner knowledge.
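To make that concrete, here's a minimal sketch showing how a typed nil slips past a plain `==` check but can be caught via reflection (the `isNil` helper is my own illustration, not a standard API):

```go
package main

import (
	"fmt"
	"reflect"
)

// isNil is a hypothetical helper: it reports whether v is either the
// untyped nil interface, or an interface wrapping a nil pointer-like value.
func isNil(v interface{}) bool {
	if v == nil {
		return true // untyped nil: no dynamic type, no value
	}
	rv := reflect.ValueOf(v)
	switch rv.Kind() {
	case reflect.Ptr, reflect.Map, reflect.Slice, reflect.Chan, reflect.Func:
		return rv.IsNil() // inspect the wrapped value, not the interface
	}
	return false
}

func main() {
	var p *int
	var i interface{} = p // i now holds (type=*int, value=nil)

	fmt.Println(p == nil) // true
	fmt.Println(i == nil) // false: the interface carries a type
	fmt.Println(isNil(i)) // true: reflection sees the nil pointer inside
}
```

Note that even this helper only covers the pointer-like kinds; as the article shows, nils can hide at deeper levels still.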


Add in NaNs as well -- and then try to discuss the difference between "show 0%, show 0, show a blank, show a null, and what's a NaN" in the report to senior management.


Trying to use Go as a declarative language seems ill-suited to me. The language works best when you fully embrace imperative forms. You can use interfaces, composition, and function values for effectful stuff, but it's no DSL.


Yeah, I think the `detest` package is using the wrong tool for the job.

(And there are already testify and diff.)


I looked at the code and it did not feel good. Go's ecosystem focuses on line-of-sight coding; that is, the happy path has the least indentation, and the unhappy path (e.g. test failures in the naive/default unit-test approach) should be indented. This tool seems to use a lot of indentation.

I know it's a bit of an arbitrary thing, but it's a good rule of thumb. And it's consistent with the other 99% of Go code, which is really important.


Since this thread started discussing warts and has people versed in Go commenting, can anybody explain why we have https://golang.org/src/strings/compare.go?

Why is “symmetry with package bytes” so important that it is deemed a good idea to add a function that “no one should use” _and_ intentionally make that function slow?

Also, if that’s a good idea, why doesn’t https://golang.org/pkg/strings/#Compare shout out harder that that function shouldn’t be used?

(Came across that when googling whether string comparison in go uses locale)


Yikes. This is like JavaScript's null/undefined, but even worse because they don't even have distinct names. No thank you


nil is just the zero value for slices, maps, pointers, interfaces, functions, and channels.

the only slightly confusing part is that interface values are proxies that wrap a pointer and a type, so a nil pointer stored inside an interface yields a non-nil interface value.

Writing idiomatic Go, this does come up, but not nearly as often as when you do stuff like this.
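A minimal sketch of that proxy behaviour (type names are illustrative):

```go
package main

import "fmt"

type Compressor interface {
	Compress(p []byte) []byte
}

type GzipCompressor struct{}

func (g *GzipCompressor) Compress(p []byte) []byte { return p }

func main() {
	var g *GzipCompressor // nil pointer
	var c Compressor = g  // interface now holds (type=*GzipCompressor, value=nil)

	fmt.Println(g == nil) // true
	fmt.Println(c == nil) // false: the interface has a type, so it is non-nil
}
```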


> is just

Considering you replied to a JavaScript comparison I, as a JS dev, would expect to understand at least one word of what you just wrote. If this is the easy explanation I think Go is out of my league.


That’s the wrong attitude! Go is not out of anyone’s league. Certainly not someone who already can program.

Not understanding a comment on HN is not particularly surprising and need not be a proxy for ability. I think this is unfairly self critical. (At least for anyone else reading this, maybe the original poster meant it in jest).


Every time I see any Go code, it looks like the language was designed with the primary goal of being as different from everything else as possible. If there are X ways something is usually done (so you could draw on that knowledge), you can be sure Go uses something different.


Genuine question: what languages have you tried and are comparing Go against? I ask because, if anything, Go is as mediocre as they come with regard to its design. There's very little about Go that stands out as different. For many, that's its attraction: it doesn't try to do much differently and instead focuses on the basics. For others, that's what's off-putting about Go: it's basic and doesn't offer anything new or different.

Also, comparing Go to languages like JavaScript is never going to end well, because they're entirely different paradigms in almost every category aside from the C-like syntax (dynamic vs. AOT statically compiled, functional and object-oriented vs. imperative, loosely typed vs. strictly typed, single-threaded vs. multi-threaded (albeit with green threads)).


Just off the top of my head, some of the programming languages I've used professionally in the past 30 years: C, C++, C#, F#, Pascal, x86 assembler, D, Java, JavaScript. And I still stand behind what I said before: Go is trying to be as different as possible, for no good reason.


Well, if you have experience in C and Pascal, then Go really shouldn't feel that alien to you. The only things it brings that differ from them are more verbose error handling, green threads, interfaces, and channels. And of those, only the error handling is unique to Go -- everything else is borrowed from other languages.


But it's not really an empty slice (using the first example), because I can have either an empty slice or a nil slice, right? So in Rust or Scala terms, every instance of a type is really an Option of that type? It just seems like a totally unnecessary complication to me.


Null references: the billion dollar mistake.


Typed nils: when you think a billion dollar mistake is not enough money.


Off-topic, re: the OP's blog and domain name.

These are nice and subtle nods to the Book of the New Sun tetralogy. The House Absolute is the palace of the Autarch, the ruler of the Commonwealth, a southern-hemisphere nation in a far future when the Sun has dimmed.

(If my memory serves me well...)


In inheritance-based OOP languages, interfaces are an implementation detail of the class. In Go, they are proxies. You have to think about them differently.

Also, I agree with the other commenter that all this verbosity isn't very idiomatic. If you can't just write if a != expected { t.Errorf(...) } for each thing, then use testify, because they have definitely been thinking about this long enough to get it right:

github.com/stretchr/testify


testify gets basically everything about Go testing wrong. It even takes its arguments in the wrong order.

https://github.com/golang/go/wiki/TestComments#assert-librar...

https://github.com/golang/go/wiki/TestComments#got-before-wa...

They did at least:

https://github.com/golang/go/wiki/TestComments#compare-full-...

Although the diff-ing is significantly more limited than cmp.
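For readers who skip the links: the wiki's "got before want" convention reports the actual value first, whereas testify's `assert.Equal(t, expected, actual)` takes the expected value first. A rough sketch of the convention (function name is illustrative):

```go
package main

import "fmt"

func add(a, b int) int { return a + b }

func main() {
	got := add(2, 3)
	want := 5
	if got != want {
		// Convention: report the actual result first, the expectation second.
		fmt.Printf("add(2, 3) = %d, want %d\n", got, want)
	}
	fmt.Println("ok")
}
```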


Wow when did they add that to TestComments?

I really disagree though. The native testing framework is overly clunky and verbose. It also makes things harder to debug when they fail. Just look at the example it gives! You don't even know why the test failed, it combines 4 different tests in one line. Are you supposed to inspect the entire object that gets printed out to figure it out? How terrible.


Yeah, it's a really bad example. I would just use cmp.Diff, and if I only cared about 4 fields, I would delete the data from got.FieldIDontCareAbout before comparison. (You can also make cmp.Diff do this if you want to do it every time.)

I think that an evolution of tests would be:

   assert.Equal(t, got.Bar, want.Bar)
   assert.Equal(t, got.Baz, want.Baz)
then:

   if got.Bar != want.Bar || got.Baz != want.Baz {
       t.Fatalf("bar and baz:\n  got: %v\n want: %v", got, want)
   }
then:

   want := Thing{Bar: xxx, Baz: yyy}
   got.Quux = ""
   if diff := cmp.Diff(want, got); diff != "" {
       t.Fatalf("bar and baz (-want +got):\n%s", diff)
   }
(cmpopts.IgnoreFields and friends should also be considered for less misleading output, if you don't care about Quux)

Basically, cmp.Diff should be your go-to testing framework. It's easy to use and the output is excellent.



