Gotta love how the initial problem (a typed language in the 2000s having nil and empty interfaces) degenerates into workaround suggestions such as hacking nil and the interface system to give it a special type.
It is interesting how Go seems to be doomed to head down the same path as other languages it accused of being "bloated." No one intends to design a complicated language but the reality is that the problem space is complicated.
The only thing that isn't so forgivable with Go is that these aren't new problems this time around. Go is still struggling with challenges that were first seen a long time ago. It's a great experiment in designing for complexity vs. trying to avoid complexity.
> It is interesting how Go seems to be doomed to head down the same path as other languages it accused of being "bloated." No one intends to design a complicated language but the reality is that the problem space is complicated.
It's not surprising, it stems from the arrogance of Go designers who think they can eliminate complexity by deeming it irrelevant and making the user carry the weight of the complexity they refuse to deal with. Simplicity isn't hard, like Rob Pike says, it's a trade off.
If you're not going to have enums in your language for instance, you are forcing your users to implement their own, badly and in incompatible ways.
If you're not going to have generics, well you'll get this stupid situation where users are expected to "augment" the compiler with code generators, leading to an increase in complexity in building a program, or worse, ignoring compile time type checking since it's the path of the least resistance when dealing with generic container types.
The more experienced a developer I become, the more strongly (and negatively) I feel about nils and nulls and their ilk. I have sympathy for C.A.R. Hoare, who in 2009 apologised for the apparent invention of null references in ALGOL W (1965), calling them a "billion-dollar mistake". I've come to regard them as a data singularity, and when I design data structures and interfaces today I deliberately avoid/outlaw them: all my relational fields are NOT NULL, and I choose either meaningful defaults or EAV or equivalents instead; in method parameters I would rather something not exist than accept a null reference or value. And I believe that the resulting code is more modular, more easily refactored and more reusable as a result; errors are better handled; and the resulting data structures and calling arguments are more easily interpreted, more readily queried and destructured, and are (so far) proving generally better fitted to real-world domains.
Funny, I'm the opposite. The more experienced I've become, the more I've found that nil-punning is ultimately what I actually wanted.
And I'm all for the idea that relational fields should be NOT NULL. I also fear that this doesn't really work for backwards compatible thinking. If I serialized some data down to disk before a field existed, I don't expect it to be there when I check it later.
You can be tempted to think it should just be the zero value of the type you are using. Or you can add some extra boilerplate around accessing it. I think either works. Just make sure you aren't getting carried away. And try to handle anything that cares about the absence or presence of something at the layer where you get that something. Don't punt the decision down your codebase.
(That is, Optionals are great at the layer, don't pass them as parameters to inner code, though. Obviously, YMMV. And, quite frankly, probably will go further than mine.)
Agree completely. Google removed "required" fields from proto3 because they cause problems for compatibility and version skew. And even in proto2, which had "required" fields, people quickly learned to avoid them. Anything that goes on the disk or wire should have only "optional" and "repeated" fields (as a bonus, "optional" is encoded the same as "repeated" with zero or one values).
Real type safety is sum types. If I need to express something that is present or missing, I should use a Maybe monad.
Having an extra field for "missingness" is less safe because the type system won't enforce that it is either missing or set, you could have it set to a value but marked as missing which is still ambiguous.
If missing data is a valid value, pick a valid way to encode it. Null might work, but realize you could have two reasons for the value: actually missing, or just not collected or recorded.
I concede there may be no difference in those meanings.
My problem with null is that it doesn't nest. For example, if I do a DB search for a particular column of a particular row and get null, does that mean the row doesn't exist, or that it does but the column is empty? With optionals, you can distinguish between these with e.g. `Nothing` vs `Just Nothing`.
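A rough Go analogue of that distinction (the `lookup` helper and its map-backed "table" are hypothetical): returning a separate `found` flag alongside a nullable value recovers the outer level that `Nothing` vs `Just Nothing` gives you:

```go
package main

import "fmt"

// lookup is a hypothetical sketch: found reports whether the row exists,
// while the *string is nil when the row exists but the column is NULL.
func lookup(rows map[string]*string, key string) (val *string, found bool) {
	v, ok := rows[key]
	return v, ok
}

func main() {
	rows := map[string]*string{"alice": nil} // row exists, column is NULL
	v, found := lookup(rows, "alice")
	fmt.Println(found, v == nil) // true true: row present, column empty

	_, found = lookup(rows, "bob")
	fmt.Println(found) // false: no such row
}
```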
In databases NULL does exist; it is an explicit statement of having no contained value. (There is a container here, the contents were not specified. A distinct statement from /knowing/ the contents to be empty. (zero, zero-length string, etc))
Conceptually NULL or nil is an appropriate concept for results that have no meaning, such as if an error occurred or if a passed value is not required or valid. (Though some structures can contain data that is 'incomplete' or 'not checked' and thus while a valid structure might not be 'validated' in the sense of conforming to a more specific set of expectations.)
Sort of... and it seems to me that Go wants you to think this way about values within a struct (i.e. the "zero" values).
But isn't that a really clunky way of checking whether field is set? You can't just check field because the "zero value" could be a legit value, e.g. zero. So you have to first check field_is_set -- and now you have to make sure that's always correct and that nobody ever sets field by itself.
Or worse having to inspect a specific field value (or worse compare the whole thing to a reference object) to determine if the result is actually valid or not.
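A minimal Go sketch of that failure mode (the type and field names are hypothetical): nothing keeps the flag and the value consistent, so the ambiguous state is always constructible:

```go
package main

import "fmt"

// Count models an optional integer with a separate "set" flag.
// The type system does nothing to keep the two fields consistent.
type Count struct {
	Value int
	IsSet bool
}

func main() {
	// The ambiguous state the type system can't rule out:
	// a value is present, but the flag says it's missing.
	c := Count{Value: 42, IsSet: false}
	fmt.Println(c.IsSet, c.Value) // false 42 -- which field do you trust?
}
```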
The question I ask in these circumstances: Will a method/function //always// return valid work if the program continues to run?
How about a 'find' function of some type? Find the nth thing, find matches of X, etc.
That's one type of function that might return no answer.
If a list or set of some sort is expected I'm happy with a zero-length list in this case. However lists aren't the only time this happens. The most recent example to come to my mind is finding the Nth item in an arbitrary sequence. That item might be out of bounds (not exist). Nil is appropriate for that case.
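In Go, the comma-ok convention is another way to express "no Nth item" without overloading nil or a sentinel value (a sketch; `nth` is a hypothetical helper):

```go
package main

import "fmt"

// nth returns the n-th element and whether it exists,
// rather than signalling absence with a sentinel like nil or -1.
func nth(xs []int, n int) (int, bool) {
	if n < 0 || n >= len(xs) {
		return 0, false
	}
	return xs[n], true
}

func main() {
	xs := []int{10, 20, 30}
	if v, ok := nth(xs, 1); ok {
		fmt.Println(v) // 20
	}
	if _, ok := nth(xs, 5); !ok {
		fmt.Println("out of bounds")
	}
}
```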
Can't that be determined by looking at the row count? A row count of zero means the row doesn't exist. A row count of 1 with a null in the selected column means the row exists but the column is null.
I avoid the word "empty" when referring to anything SQL related, as it is ambiguous in three value logic.
SQL syntax, at least what I'm familiar with offhand, makes you explicitly say things such as WHERE (a.cola = b.colb OR a.cola IS NULL OR b.colb IS NULL) or similar syntax but with distinct variations for joining 'left' and 'right' tables on an expression (which, BTW, can be noticeably slower than the WHERE version, depending on which database you're using).
SELECT a.id, b.name
FROM a LEFT JOIN b ON a.id = b.id
If you get a NULL in the name field, you don't know if that's because there's no record in b for that id, or if there is a record in b for that id but it has a NULL name value. Sometimes that difference will be important.
NOT NULL bugs me too, but not so much because nulls are possible, more so that I think it should be inverted since that's the common case (at least for me).
Go is a lesson in how complexity can't be eliminated, only distributed properly from the beginning so that one doesn't have to hack it in later with messy special-casing that needs you to know how the compiler represents things under the hood.
What happened to "lightweight typesystem that reduces cognitive load"?
It's really easy to criticize where mistakes were made. The intention was to make a simple language and it worked. The idea resonates with many many engineers even ones such as I that love writing powerful pure fn code.
The intention was great and the result wasn't that great, but it still works pretty damn well. Go is an open language and they are asking for well-thought-out proposals on where and why the problems exist, followed by ideas and/or examples to make it better, so let's all try.
I think Maybe Types would be an amazing feature to add. Closed types would also be an outstanding win from a UX perspective. Neither of those concepts would add more cognitive load than they remove, in my opinion.
From what I've seen, this holds only as long as you keep the proposals minimal and restricted to aforesaid hacking around the limitations built into the language. I'm happy to be shown evidence to the contrary: have there ever been any proposals, reacted to in a not-completely-negative way, that were like "uh, maybe we didn't have the right idea about <something basic>, let's do this instead"?
I'll argue there won't be. Every community has a culture: Go's is delightfully warm, friendly, and inclusive, but also surprisingly distrustful of learning that there are easy-to-understand but powerful language features they could be using to write maintainable code without "getting a PhD in type theory from the nearest university" (to strawman a certain [type of] person [I've often encountered when arguing about these things]).
Go has done many things right (aside from the community, good concurrency and really fast compiles come to mind) but language design is not one of them.
Go has done nothing new for fast compile times, as any old-timer coder will remember from the Algol lineage of compilers, with Turbo Pascal for MS-DOS being a good example of how long ago such fast compile times were already known.
I'm sadly too young and ignorant of CS/technology history to be well-acquainted with the "old times". (It's something I intend to fix.)
Even so, I'm all for praising the good things that Go does: if nothing, because of the tremendous mindshare it's getting and the number of people it reaches.
This is just one example; there are plenty of other languages to choose from with a module-based compilation model. Only C and C++ toolchains have lousy build times, given their textual inclusion model.
> What happened to "lightweight typesystem that reduces cognitive load"?
Oberon is an example of a true lean programming language. The complete language reference takes up only sixteen A4 pages. The compiler OBNC implements the latest version of the language.
IMO the language made a mistake by allowing nil to satisfy any interface . When I write a function like
func DoStuff(i ILoveGoer) {
    i.LoveGo() // Panic on nil
}
it's hard to reason about because it doesn't look like you have a pointer; it looks like you definitely have a value. IMO a nil should not be allowed for an interface, so the only way to create an interface var would be in conjunction with assignment.
Nil interfaces don't necessarily panic when their methods are called, just when nil variables are dereferenced. This is because the receiver is just another argument. For example, if `i` were this implementation of LoveGoer it wouldn't panic on a nil receiver:
type JoeLovesGo struct{}

func (jlg *JoeLovesGo) LoveGo() {
    fmt.Printf("Joe Loves Go! jlg is %v\n", jlg)
}
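Put together as a runnable sketch (interface name borrowed from the comment above), the call goes through fine with a nil pointer in the interface, because the nil pointer is simply passed as the receiver:

```go
package main

import "fmt"

type ILoveGoer interface{ LoveGo() }

type JoeLovesGo struct{}

func (jlg *JoeLovesGo) LoveGo() {
	// jlg may be nil here; we only print it, never dereference it.
	fmt.Printf("Joe Loves Go! jlg is %v\n", jlg)
}

func main() {
	var p *JoeLovesGo // nil pointer
	var i ILoveGoer = p
	i.LoveGo() // no panic: prints "Joe Loves Go! jlg is <nil>"
}
```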
Or to encapsulate a nil in a Maybe monad, so that you only have to deal with it in contexts where you explicitly denote acceptance of nils. Then the type system won't let you get away with ignoring the possibility of a nil.
> The only other option is to not have nil values.
Rust has a bottom type (!)[0] without it implementing all traits by default, while using a different type (Result) for error propagation. Plus, having nil/null as a value of the bottom type violates some aspects of bottomness.
The problem is "null" being a value in all (non primitive) types, not only the bottom type. You specify "String", but you can have "null" too. When everything is optional, how do you specify that function foo really takes a String, not null? (https://stackoverflow.com/questions/4963300/which-notnull-ja...)
Emulating the sum type val|err with a product type (val, err) - where val and err are implicitly nilable - is one of Go's biggest design smells in the first place.
I see no reason that a good proposal and example implementation wouldn't be accepted for addition to go. The Either/Maybe monad is so powerful and is incredibly straight forward to use. So the argument from a language user perspective is already won, it's makes the intention of code much clearer and gives the type checker massive help in verifying that your intention is the only possibility at runtime.
I expect the implementation and grammar/syntax addition would be the most difficult part, as that seems to have been one of the main focuses during Go's infancy: make the language as easy as possible to parse/lex.
> I see no reason that a good proposal and example implementation wouldn't be accepted for addition to go.
I'm sorry to bring the age-old "but generics" thing up, but how do you even implement (what Haskell-alikes call) Functor without parametric polymorphism?
The only ways I see are
a) Elm-style List.map/Array.map/Maybe.map
b) Rust-style Functor/Monad operations on specific types like Result
Which of these do you think Go would be more receptive to?
I thought it did not need to be mentioned, but dynamically typed languages don't count for this comparison. Every value is like a union of every type, and compile time type checks are impossible.
It goes like this: the claim is that dynamically typed languages "don't count for the comparison" because "compile time type checks are impossible". Even if we suppose that nil values or union types are useless at runtime (they aren't), it is not true that dynamically typed languages cannot be analyzed statically. Not only are compile-time type checks possible, they are sometimes expected to happen and are already part of some languages' design.
Python added type hints recently, but in Common Lisp, there is not only a null type (which contains the nil value), but also an empty type: there is no practical use in defining a type for which there is no possible value at runtime, except if the language and its type system are designed to support static analysis.
It is expected that a compiler can optimize away things that are known in advance to be impossible, or help you detect errors statically.
(And that empty type is the one named by the symbol nil.)
The nil type is useful at run-time because it constitutes the bottom of the type spindle: just like in set theory the empty set is a subset of every set, including itself, the type nil is a subtype of every type, including itself.
This can be used at run-time; e.g. (subtypep nil 'integer) -> t.
We can't just exclude this value from the type domain on the grounds that it's static only. "Sorry, you don't get a bottom plug on your type spindle at run time ...". :)
I enjoyed the blog article and I would like to gently reiterate the notion that a _typed nil_ in Go 2 would change the semantic of _nil_, as seen in the example expression at the end of the article:
var b *bytes.Buffer
var r io.Reader = b
fmt.Println(r == nil)
We might need other expressions to capture the _nil_ type of the above assignment, but we should enable the _value only_ equality check with `r == nil`.
Yes. The interface holds the concrete type of the value; if there is no concrete type, it will be nil. So if you assign a nil to an interface-typed variable directly, you'll have a (nil, nil). If you first assign the nil to a pointer type *T and then assign/convert that to an interface type, you'll get (*T, nil). Here's a trivial demo:
var a interface{} = nil // (nil, nil)
var b *int = nil
var c interface{} = b // (*int, nil)
fmt.Println(a == c)
Of course most such cases are not that trivial; rather, they're cases where a function takes an interface-valued parameter and checks for (param == nil). If the caller passes in an actual object there's no problem, and a concrete value is no problem either, but if they extract the nil literal to a concretely-typed context (variable), things go pear-shaped to various levels of fuckedness (depending on what is done in the other branch).
And that's vicious because something as seemingly innocuous as "extract variable" on an immutable literal can break your code.
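A self-contained demo of the usual way this bites (names hypothetical): a function returns a concretely-typed nil error, and the caller's comparison against nil misfires:

```go
package main

import "fmt"

type MyError struct{ msg string }

func (e *MyError) Error() string { return e.msg }

// work returns a concrete *MyError: nil on success.
func work() *MyError { return nil }

func main() {
	var err error = work()  // boxes (*MyError, nil) into the interface
	fmt.Println(err == nil) // false! the interface carries the type *MyError
}
```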
Thank you for your answer. I was writing from the phone and didn't make myself clear. My question is: what are the legitimate use cases for interfaces that are half-nil?
Ignoring the compatibility guarantee for the sake of discussion, I feel that nobody would notice if the compiler tomorrow started short-circuiting the equality check of interfaces against nil to return true if either tuple value is nil. But maybe I'm missing some use-case.
> My question is: what are the legitimate use cases for interfaces that are half-nil?
I think that's two different questions:
* Is there a legitimate use case for nil not being nil? I don't think so.
* Is there a legitimate use case for having "typed nil" interfaces? Kinda, Go supports and encourages calling methods "with nil receivers", and doing that through an interface requires that the concrete type (the non-nil half) be conserved otherwise you can't dispatch the method call.
Yes it is. An interface is basically a struct of the concrete type and the value. While the outer interface value still has a type, if no value was assigned to it, the concrete type field is still nil.
My question misses the important words "a situation where it makes sense."
As I wrote below:
Ignoring the compatibility guarantee for the sake of discussion, I feel that nobody would notice if the compiler tomorrow started short-circuiting the equality check of interfaces against nil to return true if either tuple value is nil. But maybe I'm missing some use-case.
This may be the "computing industry sentence of the year", if they had an award for "sentence of the year", which I'm sure they don't. Whoever they are.
while nil is assigned to t2, when t2 is passed to factory it is "boxed" into a variable of type P; an interface. Thus, thing.P does not equal nil because while the value of P was nil, its concrete type was *T.
I wish people would quit the evangelical crap like that because you can find random bad design patterns and pitfalls in any language. At the end of the day general purpose languages have to fit a large criteria of needs for a wide criteria of developers while evolving and maturing along the process. So there will always be instances where a decision seems right at the time but later turns out to be bad. And even if you take Apple's approach of breaking your language with each iteration (as they do with Swift) you still end up with lots of wasted developer time porting your code with each new release.
Honestly if you aren't a skilled enough programmer to navigate the nuances of any particular language then you really are no better than kids playing in drag-and-drop environments like Scratch.
Does this look like a design mistake or a bit of oversight? There is no excuse why one should have to do
result, err := Foo()
if err != nil {
...
}
over and over again in a language designed after 2000. The usual argument is that Go's simplistic design "reduces complexity", yet explaining something as basic as why the error handling system doesn't behave the way one expects requires you to know how the compiler represents interface types.
The next decade will see Go adding in most or all the complexity real-world software asks for, without ever admitting that maybe it should've been supported from the beginning without the hacky workarounds.
> Honestly if you aren't a skilled enough programmer to navigate the nuances of any particular language then you really are no better than kids[...]
Even though the bit in italics is pretty much the opposite of the "Go/JS pitch", I'll bite.
Sure, learning to code at a high level means you need to take time to learn things (which is the opposite of the pitch). I just don't get why teaching yourself to "navigate the [brokenness]" of a language that was out-of-date the day it was released is preferable to learning to write in an expressive language that won't artificially handicap you or provide you with an arsenal of footguns.
In all the time you were casting to and from interface{}, you could be exploring and using powerful and practical new ideas that make it less likely you'll suffer for failing to check one of those "err"s.
> Does this look like a design mistake or a bit of oversight?
Honestly, I don't mind error handling in Go. I've used a wide variety of languages in production systems (I've lost count but it's more than a dozen) and I've found Go to be surprisingly good at giving detailed, context aware breakdowns of where issues arise and allowing me to easily handle them. Sure there are a thousand different ways to do this and Go picked the ugliest, but in spite of that I've found it to be surprisingly effective - even in the more complex projects I've written like murex (my alternative UNIX $SHELL) and the odd Linux FUSE file system I've written to scratch a particular itch.
In fact, for as many complaints like yours I've read, there are also as many seasoned developers complimenting Go's error handling. So I really think that particular issue is more a matter of personal taste than poor language design.
However I'm _not_ going to defend the nil thing nor how interface{}'s are (ab)used as an alternative to generics. Those _are_ just bad design choices.
But for all of Go's sins, I still find myself more productive writing Go code than I have been in any other language for a long time (probably since writing Pascal in Borland's Turbo Pascal back in the 90s). Hence why I defend Go. I can understand idealistic opinions about language design (Go is opinionated too, after all) but frankly what really matters is a developer's ability to get an idea into something executable. And I feel a lot of the complaints about Go really miss the point about how productive that language is to a great many people, without sacrificing too much control to be useful in a practical sense.
The error handling idiom in Go is one of its strongest properties. Forcing developers to accommodate the error path inline and at each callsite makes code resilient and robust. It would stand to benefit from a Maybe feature, but even as-is it's a huge net positive versus e.g. exceptions.
People who complain about it don't grok the Go ethos. That's fine, it's not for everyone.
I'll note that The Fine Article is about why one of those established idioms confuses newcomers. Even better, it bites people because the type of an interface needs to be nil for something to work. (Insert appropriate expression of astonishment here.)
Enforcing patterns hiding inadequacies in the implementation works less well than one would imagine, e.g. look at all the languages that Go sought to improve upon and the baggage of "design patterns", GoF-alikes, etc. that they brought with them.
result1, err := Foo1()
if err != nil {
    panic(err)
}
result2, err := Foo2(result1)
if err != nil {
    panic(err)
}
This might allow you to skip the nesting of blocks, but why would you do this when multiple languages exist where you don't have to thread around error messages everywhere? For instance,
result1 <- foo1
result2 <- foo2 result1
(Yes, that's Haskell syntax, but you can do things with a similar lack of pain/verbosity/error-prone-ness in many languages.)
I used to fight Go's error system by creating wrapper functions to mitigate the need for nested blocks - essentially trying to "Haskellify" the code a little (for want of a _very_ crude description). With time I realised I was spending more time overthinking solutions and pontificating instead of just writing code and handling errors. Since then I've given up fighting against the language idioms I personally disagreed with, and as a result I've come to appreciate some of the benefits they come with that I had previously overlooked. Ok, `err != nil` is about as ugly as it comes, but it's still effective in getting the job done.
At the end of the day it doesn't make any more sense to apply Haskell methodologies to Go than it does to complain that Haskell is missing some fundamental features of Go. They're distinctly different languages. But despite this I've noticed you spend a lot of time in various Go discussions on HN moaning that Go isn't more like Haskell.
I use Haskell examples because that's the language I'm most comfortable with, but, e.g. the Result type used in Rust is another example of how this can be done better.
Ergonomic error handling or generics/parametric polymorphism aren't "Haskell methodologies". Go is one of a very small number of languages that have been designed in the last decade and lack features like this.
The reason I participate in HN comment threads about Go is largely how entertaining I find comments strenuously rationalizing Go's inadequacies. In one (very recent but definitely memorable) case[1], I was told that
> You [should] first reconsider your need to make a generic data structure and evaluate on a case by case basis.
Rust is my favourite language, I beg to disagree that its Result or Option types are more concise or require less boilerplate. The only real differences are that Rust's are type checked and harder to use.
Sure, in an unrealistic subset of cases, try! can hide the mess. But those cases aren't representative, and neither is your comparison.
Any practical code using Results soon ends up wanting to mix the errors from multiple sources. This requires a lot of boilerplate effort to make everything interop, and the machinery to reduce this is both complex and not standardised. If you don't go the upfront boilerplate-and-machinery route, things look awful.
And of course, if you use something else, like an Option, you're back to
let foo = match bar() {
    Some(foo) => foo,
    None => return None,
};
Go is much more consistent, and less pathological.
Your example is especially disingenuous, though. For example, you chastise Go with
defer file.Close() // BUG!! this returns an error, but since we defer it, it is not going to be handled
but ignore the fact that this "bug" is nonoptionally hardcoded[1] into the Rust program. Which is it then?
Rust's error handling looks nice in fake examples, and manageable inside self-contained libraries. My experience of actually using multiple libraries is that Rust's error handling is a choice between tangly handling of nested errors or verbose attempts to early-exit.
I didn't say ergonomic error handling nor generics et al were Haskell methodologies. I'm saying you keep coming into Go threads just to troll that Go isn't as good as Haskell. This isn't even the first thread this week you've been making Haskell vs Go comparisons.
Again, at the risk of repeating myself, it's not Haskell, it's "many languages other than Go, of which Haskell is one I'll be using by way of example".
(I'm not sure why you made that edit, but I've made a mental note of what the last bit said. Arguing on the internet is a difficult, if useless, skill, and I'd hate to be tiresome.)
1. Your assumption is wrong, I definitely didn't mean panic.
2. I think the arrow means assignment in Haskell and you are just referring to monadic errors? To use them the same way proper error handling is done in Go, you would just have more nesting and multiple unwraps, which is marginally different than Go syntax (but definitely with more compiler checking.)
The arrow syntax in Go is used by channels.
Apologies. In any case, monadic errors in Haskell allow you to make "failing early" automatic. Even a simple use of optional types can make a difference. For instance, when you're handling the error the same way at every step, like here:
a, err := squareRoot(x)
if err != nil { handle(err) }
b, err := log(a)
if err != nil { handle(err) }
c, err := log(b)
if err != nil { handle(err) }
you can just use an optional ("Maybe") type: give squareRoot and log types like
squareRoot :: Double -> Maybe Double
and then do
a <- squareRoot x
b <- log a
c <- log b
If the computation of a fails, the whole computation fails. The compiler takes care of all the error-checking plumbing. I think the ergonomics of this common kind of situation are really suboptimal in Go, which to my knowledge doesn't support anything remotely similar.
That's right. But I prefer to decorate each error as it comes back from the callee, writing what I was trying to do that failed. This gives a human readable trace of the problem, and also a unique signature for the error itself.
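In Go, that decoration is typically a `fmt.Errorf` at each call site; a minimal sketch (`squareRoot` and `compute` are hypothetical):

```go
package main

import (
	"errors"
	"fmt"
	"math"
)

// squareRoot is a hypothetical callee that can fail.
func squareRoot(x float64) (float64, error) {
	if x < 0 {
		return 0, errors.New("negative input")
	}
	return math.Sqrt(x), nil
}

func compute(x float64) (float64, error) {
	a, err := squareRoot(x)
	if err != nil {
		// Decorate with what we were trying to do when it failed.
		return 0, fmt.Errorf("computing sqrt of %v: %v", x, err)
	}
	return a, nil
}

func main() {
	_, err := compute(-1)
	fmt.Println(err) // computing sqrt of -1: negative input
}
```

Each layer that re-wraps the error this way builds the human-readable trace the parent comment describes.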
> But I prefer to decorate each error as it comes back from the callee
That's trivially feasible and still shorter than the Go version:
a <- decorate (squareRoot x)
b <- decorate (log a)
c <- decorate (log b)
Outside of the do context, your return value is just that, a value, you can manipulate it using the language's regular tooling. And you can decorate the do context itself if you want the same decoration for all calls in the block.
Stuff like null or nil is a problem that's been identified for decades, and there are relatively widely-used type system approaches that solve it very nicely (sum types). It's not okay in a modern programming language to repeat the mistakes of decades-old ones in the name of "simplicity".
Checking the value of a property [edit: return of a method] after you've nil'ed the parent object is enough to raise an exception in most languages. So yes, I'd say that's an edge case. Where Go gets it wrong here is that because nil isn't really `nil`, you get a silent `false` rather than an obvious crash + stack trace. But regardless of the bad design of Go around the usage of "nil", the code would have failed in pretty much any other language anyway.
You're not "checking the value of a property after you've nil'd the parent object", you're checking if you were given a nil. This issue can occur for any function which takes an interface-typed parameter. That's usually how it happens: somebody passes in a `nil` which comes from a pointer-typed variable: https://play.golang.org/p/ADTvLDDrw6
> But regardless of the bad design of Go around the usage of "nil", the code would have failed in pretty much any other language anyway.
No, it would not. In Java, null is null whether it's typed as a concrete reference, as an array or as an interface.
> You're not "checking the value of a property after you've nil'd the parent object", you're checking if you were given a nil.
Sorry, it's a method not a property, but I think my point remains valid with regards to the example in that article. Just to be clear, I'm not trying to defend nil here, but I do think it's important to understand the issue, because I think the author's code would have failed regardless of the language. So let's break the code down: first they create a struct exposed via an interface:
type T struct{}
func (t T) F() {}
type P interface {
F()
}
func newT() *T { return new(T) }
Then they create an initialized variable of that type:
t := newT()
t2 := t
...and conditionally set that variable to nil:
if !ENABLE_FEATURE {
t2 = nil
}
Then they check the value returned from a method of the struct - bear in mind this is after the struct has been `nil`ed.
If there's a likelihood that they could be working with nil interfaces, then they should first check the value of the interface before calling its methods. Most OOP languages would raise an exception / print a runtime error (in the case of JIT dynamic languages) or downright crash if you tried to access methods or properties of a nil / null / whatever type. So I'm not defending Go's behavior, but their example is peculiar to say the least.
That all said, I do feel your examples are a lot more relevant to this discussion than the one that prompted the discussion to begin with.
>Most OOP languages would raise an exception / print runtime error (in the case of JIT dynamic languages) or downright crash if you tried to access methods or properties of a nil / null / whatever type.
Most OO languages won't report an interface as non-null if its value is null. Go will.
Indeed - I wasn't defending Go's behavior here. I was just replying to your question about whether it is an edge case or not. Since the code would be broken in any OOP language I do consider this to be an edge case. But that doesn't mean I like nor would defend Go's nil type system.
> Sorry, it's a method not a property, but I think my point remains valid with regards to the example in that article.
Not really, Go supports (and encourages properly handling) nil method receivers.
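The canonical illustration is a method written to be callable on a nil receiver, e.g. the length of a nil linked list (a sketch; the `List` type is hypothetical):

```go
package main

import "fmt"

type List struct {
	next *List
	val  int
}

// Len is deliberately safe to call on a nil *List:
// a nil list simply has length zero.
func (l *List) Len() int {
	if l == nil {
		return 0
	}
	return 1 + l.next.Len()
}

func main() {
	var l *List // nil, but Len still works
	fmt.Println(l.Len()) // 0

	l = &List{val: 1, next: &List{val: 2}}
	fmt.Println(l.Len()) // 2
}
```

Dispatching `Len` through an interface requires the interface to remember the concrete type `*List` even when the pointer inside is nil, which is exactly why the "typed nil" exists.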
> Most OOP languages would raise an exception / print runtime error (in the case of JIT dynamic languages) or downright crash if you tried to access methods or properties of a nil / null / whatever type.
Setting aside the fact that this is not quite true[0], there is a gulf between "failing" as a clear runtime (or compile-time) error at the point of an incorrect invocation, and "failing" by silently yielding a nonsensical state and possibly but not necessarily faulting at some other point later on. PHP gets regularly and deservedly panned for the latter.
[0] nil is a message-sink in obj-c (any message can be sent to nil and will be a no-op returning nil), and you can make Ruby or Smalltalk behave that way (or any other) as their nil is a regular object with a normal type which you can go and extend
> there is a gulf between "failing" as a clear runtime (or compile-time) error at the point of an incorrect invocation, and "failing" by silently yielding a nonsensical state
Of course. But then I also made that point too. Frequently in fact and in the very post you're replying to as well. Plus also in the other reply that echoed the same point you're raising here. I'm not justifying Go's behavior here. Absolutely not! It is unexpected and bad. But we have already established and agreed on that point so moved onto another question regarding whether the authors example is an issue that is likely to arise often. I was attempting to explain why I felt it was a poor example and not trying to justify Go's behavior - which at risk of repeating myself: we all already agree is bad.