> Everyone who wanted to do basic things like read input or write output would have to understand how option types work, in addition to a discussion of what templated types are
I don't necessarily think that's a bad thing. Something that may be null is already an option type: it's "some T or null". Having the compiler force you to check which it is, is no different from checking for the presence of null. There is literally no difference in complexity - it's actually simpler, because you don't have the cognitive overhead of remembering where a given variable has and hasn't been null-checked. (After a null check, your variable has magically changed type to "some T, definitely not null".)
x = getsomething
if x != nil
    dosomething(x)

and

x = getsomething
if x.hasvalue
    dosomething(x.value)
The programmer alone has to remember this - remember to check, and remember the status of each variable that can be null. Was x checked before? Scroll up to see...
Add 10 variables and some nested ifs and ask the programmer which variables can and can't be null at any given place. This is the job of a compiler, not a programmer. And if my experience is any guide, the less experienced programmers, which are the ones we are discussing now, would be happy to have 100 locals or 12 levels of nested ifs in a method. They need this added help more than the people who ask for it.
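To make that bookkeeping burden concrete, here is a small illustrative Go sketch (all types and names invented for the example) where nothing but the programmer's memory tracks which pointers have been checked:

```go
package main

import "fmt"

type Item struct{ Name string }

// use stands in for any function that assumes non-nil arguments;
// it would panic on a nil Item.
func use(items ...*Item) {
	for _, it := range items {
		fmt.Println(it.Name)
	}
}

func process(a, b, c *Item) {
	if a != nil {
		if b != nil {
			use(a, b)
		}
		// Was c ever checked? The compiler accepts this function either way;
		// only the programmer's memory guards against a nil dereference here.
		if c != nil {
			use(c)
		}
	}
}

func main() {
	process(&Item{"a"}, nil, &Item{"c"})
}
```

With an enforced option type, forgetting the `c != nil` check would be a compile error rather than a latent panic.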
Note that there are absolutely zero gains with the second approach if the compiler doesn't enforce it.
And I very much don't envision that the Go compiler would be modified to make sure that `x.value` is never referenced unless the presence of that value has been confirmed by an earlier statement.
> Note that there are absolutely zero gains with the second approach if the compiler doesn't enforce it.
The compiler enforces it by the fact that you can't pass the return value from one thing (which might not be a valid value) into the next function, whose argument must be a valid value. The objection that "but I can unwrap/call value without checking, so it's useless" is a commonly raised one - but it's a false one.
With nulls:

var x = get_something
do_something(x) // ok even if x is null

With option/maybe:

x = get_something
do_something(x) // compiler error

x = get_something
do_something(x.value) // ok, *conscious action by developer*

x = get_something
if x.hasvalue
    do_something(x) // ok
The difference in safety made by having to do a conscious unwrap IS the whole point of the feature. This works exactly the same everywhere: you can always `.unwrap()` in Rust, for example. Having to unwrap, or unwrapping without checking first, isn't wrong - in fact it's often right. Unwrapping the value first means "extract the value if it exists, or otherwise panic right here". This decision to "assume success or otherwise panic" is a clear intent that the next developer can read.
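As a concrete sketch of that "check vs. conscious unwrap" distinction in Go itself - using the generics syntax Go eventually shipped in 1.18, with all names invented rather than taken from any standard library:

```go
package main

import "fmt"

// Option is an illustrative option type: "some T" or nothing.
type Option[T any] struct {
	value T
	ok    bool
}

func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }
func None[T any]() Option[T]    { return Option[T]{} }

// Get is the checked path: it returns the value and whether it was present.
func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

// Unwrap is the conscious action: extract the value or panic right here.
func (o Option[T]) Unwrap() T {
	if !o.ok {
		panic("Unwrap on empty Option")
	}
	return o.value
}

func main() {
	x := Some(10)
	if v, ok := x.Get(); ok {
		fmt.Println(v) // the checked branch
	}
	fmt.Println(x.Unwrap()) // "assume success or panic": intent visible to readers
}
```

The point of the thread is that `Unwrap` is greppable and deliberate, whereas a forgotten nil check is invisible.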
In ML, if you have a type “foo”, then you can also use the type “foo option option option” without creating any new nominal datatypes. In particular, using a value of type “foo option option option” is just as easy as using a value of type “foo” or “foo option”:
case wrapped_foo of
    NONE => ...
  | SOME NONE => ...
  | SOME (SOME NONE) => ...
  | SOME (SOME (SOME foo)) => ...
On the other hand, with your proposed alternative, you have to create a wrapper struct for every nullable kind of thingy:
type MaybeFoo struct { foo *Foo }
type MaybeMaybeFoo struct { foo *MaybeFoo }
type MaybeMaybeMaybeFoo struct { foo *MaybeMaybeFoo }
And, as if that weren't offensive enough, you have to manually pack those pointers into structs and then unpack them back.
1) You're not "abstracting away" anything like that. If you're talking about operations you can abstract over `a option`, that has nothing to do with nesting.
2) I would consider explicitly nested option types to be a code smell. If `foo option option` happens incidentally via a module functor or something, fine, but if you ever write that by hand, you probably want to explicitly flatten that to a 3 or 4 case sum type with explicit constructor names.
3) Nested structures like this are very unlikely to have synthetic, abstract names like MaybeMaybeFoo. Instead they are more likely to have operational, domain-specific, concrete names. These structs are likely to _already exist_, since Go lacks type inference for function parameters: you're already going to have to create types in order to talk about these things.
4) You're "manually" packing/unpacking by writing nested (SOME (SOME ...)) constructor calls and matching them in your pattern match. The difference between &Foo{&Bar{...}} and (Foo (Bar ...)) is irrelevant. Meanwhile, pattern matching has a syntactic advantage over a `== nil` check, but again, I'd argue that intentional nesting of numerous expected-absent values is a code smell. If you really need that, you'd probably employ the null object pattern, which is easy to support in Go because methods can be called on nil instances. I'll say this: I've never had to do that in tens of thousands of lines of production Go.
1) When I make an abstract type “foo” whose underlying implementation (hidden to clients) is “bar option”, I'm abstracting over an option type.
2) What if, in client code, I want to make a value of type “foo option”, completely unaware that foo's underlying implementation is “bar option”? Is it still a code smell? Do I have to know what abstract types defined by other people are implemented as options, so that I don't wrap them in options of my own?
3) Meh.
4) My pattern matching is completely safe. OTOH, with your proposal, you can't safely unwrap those nested structs of nullable pointers in a single step, because you need to unwrap the first layer just to check whether the second layer is safe to unwrap.
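The two-step unwrap being described can be sketched in Go using the MaybeFoo wrappers from upthread (field layout assumed from that example):

```go
package main

import "fmt"

type Foo struct{ N int }
type MaybeFoo struct{ foo *Foo }
type MaybeMaybeFoo struct{ foo *MaybeFoo }

// innerFoo shows why the layers can't be unwrapped safely in one step:
// the outer pointer must be checked before the inner one may even be touched.
func innerFoo(m *MaybeMaybeFoo) (*Foo, bool) {
	if m == nil || m.foo == nil { // first layer
		return nil, false
	}
	if m.foo.foo == nil { // second layer, only reachable after the first check
		return nil, false
	}
	return m.foo.foo, true
}

func main() {
	wrapped := &MaybeMaybeFoo{foo: &MaybeFoo{foo: &Foo{N: 7}}}
	if f, ok := innerFoo(wrapped); ok {
		fmt.Println(f.N)
	}
	if _, ok := innerFoo(&MaybeMaybeFoo{}); !ok {
		fmt.Println("empty outer layer")
	}
}
```

A single ML pattern like `SOME (SOME foo)` performs both checks at once, exhaustively; the Go version has to sequence them by hand.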
---
2) The problem exists whether or not there exist non-disjoint types. Oftentimes I do want to use nested types, because that's the way the data is naturally structured, i.e., that's what shortens the description of the operations that act on the data. Flattening the nested type into a single sum with 4 constructors would merely impose on me the burden of undoing the flattening every time I want to operate on this sum.
3) It's abstraction for the sake of making code easier to reuse and verify.
1 & 2) Read my comment again. My remark about functors (in the ML sense) addresses this point. I'm quite well versed in how type abstraction works. Your comment about being aware of underlying type details is completely nonsensical with respect to nesting. The problem you're worried about applies to non-disjoint union types, i.e., dynamic typing, not Go's typical use of type-safe pointers. Nil is type-safe in Go; there's no risk of confusing a nil *Foo with a nil *Bar.
3) You say "meh", but abstraction for the sake of abstraction is exactly what drives working programmers away from so called "better" languages.
The only type I'd make (if it weren't in the standard library) is the regular Option<T>. If I have an instance of Option<T> and the method takes a T, then I'd have to check and unwrap it. I can't see the need for creating multiple structs, assuming you can nest Option<Option<T>> when required (and heap allocation is OK)? Perhaps I misunderstood something.
If I have an instance of Option<Option<T>> then I'd have to unwrap it more than once. However, if the language has no concept of such automatic unwrapping, the chance that you would ever end up with a nested option is pretty slim. Basically, the monadic use of these types is an effect of the language working that way, not the other way around. In e.g. C#, where I use option types extensively, I've yet to see a wrapped one. In F# it's different.
In C# there is a feature in which nullables (e.g. int? and int?? or int???) actually work similarly because there are "lifted operators" which sort of half-achieve this, e.g.
int? x = 10;
int? y = 20;
int? z = x + y;
but to be honest this to me feels mostly quirky because it doesn't extend to much else in the language.
> Basically, the monadic type use of these is an effect of the language doing that, not the other way. In e.g. C# where I use option types extensively I'm yet to see a wrapped one.
And C# pays dearly for it. Why doesn't C# have anything like C++'s <algorithm>, or Rust's composable iterators, or Haskell's plethora of composable abstractions (foldables, traversables, lenses, etc.)?
> In C# there is a feature in which (...) but to be honest this to me feels mostly quirky because it doesn't extend to much else in the language.
Agreed. Hardcoded special cases are always a bad idea.
---
Sorry, HN won't allow me to make a new post because “I'm submitting too fast”, so here goes my reply:
C++ has a finer-grained hierarchy of iterator concepts:
(0) InputIterators correspond to C# IEnumerators.
(1) OutputIterators allow you to write to the sequence's current element.
(2) ForwardIterators allow you to dereference the current element arbitrarily many times before moving on to the next.
(3) BidirectionalIterators allow you to move both forward and backwards in the sequence of elements.
(4) RandomAccessIterators allow you to skip arbitrarily many positions in the sequence of elements in constant time.
And there are algorithms defined in terms of these refined iterator concepts.
> Why doesn't C# have anything like C++'s <algorithm>, or Rust's composable iterators, or Haskell's plethora of composable abstractions (foldables, traversables, lenses, etc.)?
I'm not very familiar with the details of Rust iterators yet, but there isn't much I'm missing from LINQ, at least so far (apart from the cost control available in Rust and C++, which is naturally omitted from C#, where it's accepted that operations such as iteration can heap-allocate).
What is the difference between <algorithm> and the similar LINQ (or rather the enumerable extensions enabling LINQ)?
More golang Stockholm syndrome. These arguments are always weird to me. Go has plenty of complex features that go developers don't seem to think cause too much cognitive burden (automatic gc, structural subtyping, etc.). Why is it that relatively simple and ubiquitous language features like exceptions and generics are just way too much and cause an unacceptable amount of complexity? I just don't buy it.
Turn the argument around. Are there any Java, C++, etc. developers arguing for the removal of these features from their languages? Are there people who say that even though these languages have generics you shouldn't touch them because they're bad? Do any devs with option/maybe types really want to go back to unsafe code that can fail if you forget to check for nil/null? Show me someone who hasn't drunk the go kool-aid who still makes these sorts of arguments and I might actually start listening.
I mostly agree with you, but there could be an argument made for C++ - the generics there are done through templates, and I've seen people froth at the mouth about the complexity of C++'s template metaprogramming.
I agree though with your initial reaction to Go and generics - it seems strange to argue against including generics due to complexity when the alternative is either type assertions from interface{} types, code generation through some sort of [preprocessor](https://github.com/cheekybits/genny), or copy-pasting code. All of these are more complex than a simple generics implementation! Golang could even include some sort of generics less complex than Java's, since it wouldn't have to worry about co- vs. contravariant types, and Java's generics aren't that hard to work with.
I agree that template metaprogramming can get overly complex. I think it would probably be a mistake to bolt a full C++ template metaprogramming system onto go. But just a standard generics system like Java's would go a long way to improving the language.
> Are there any Java, C++, etc. developers arguing for the removal of these features from their languages?
I will. Generics in Java are a net loss in my opinion. They provide marginal static safety, no additional dynamic safety, no performance benefits, and lead to substantially more complex APIs. I'd have been happier if they were never added. I would prefer a Go-like type-assertion construct or an analogous "occurrence typing" feature.
C#, on the other hand, has a sensible generics implementation that provides meaningful utility thanks to value types. However, I am not ashamed to admit that, in the absence of value types, I'll resort to Whatever<Object> plus some casts without a second thought if I find myself doing even the slightest bit of work to satisfy the compiler.
With the exception of value types, there is virtually no difference between Java and C# in terms of parametric polymorphism. Liking one and finding the other a net loss is a bit mystifying.
Besides, the introduction of generics added a tremendous amount of safety to what used to be typecast-littered Java code pre-1.5.
How is it mystifying? You said "besides the one difference you cited, they are not different". I explicitly listed that difference as being the thing that I feel justifies the feature's existence!
> the introduction of generics has added a tremendous amount of safety
It's added marginal _static_ safety. It added no dynamic safety. Static safety is welcome, but frequently not worth the lengths people go to achieve it. Hence my comment about instantiating generic types with Object in the event that the cost outweighs the safety.
> to what used to be typecast littered Java code pre 1.5
Occurrence typing would dramatically minimize the syntactic overhead to casting. There have been research variations of Java that accomplished this: instanceof checks insert implicit casts on branches where the instanceof check is true. Similar extensions have been explored for collections to further omit the instanceof checks without requiring explicit type parameterization. For example, if a collection is only written to privately, you can infer casts from what types are inserted in to the collection. This can achieve the same safety without massive complexity increases to sub-typing, static dispatch, reflection, type signatures, etc.
> There have been research variations of Java that accomplished this: instanceof checks insert implicit casts on branches where the instanceof check is true.
By the way, Go already has this. If you have a variable, say `foo`, with a generic type, say `interface{}`, you can say
switch foo := foo.(type) {
case MyFirstConcreteType:
    // foo is a MyFirstConcreteType instance here
case MySecondConcreteType:
    // foo is a MySecondConcreteType instance here
default:
    // foo is still the generic type here
}
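For concreteness, that switch can be exercised in a runnable form (the concrete type names are invented):

```go
package main

import "fmt"

type MyFirstConcreteType struct{}
type MySecondConcreteType struct{}

// describe narrows the interface{} value exactly as in the switch above:
// inside each case, foo has the concrete type of that case.
func describe(foo interface{}) string {
	switch foo := foo.(type) {
	case MyFirstConcreteType:
		_ = foo // foo is a MyFirstConcreteType here
		return "first"
	case MySecondConcreteType:
		_ = foo // foo is a MySecondConcreteType here
		return "second"
	default:
		return fmt.Sprintf("other: %T", foo)
	}
}

func main() {
	fmt.Println(describe(MyFirstConcreteType{})) // first
	fmt.Println(describe(42))                    // other: int
}
```

This is effectively occurrence typing for a single value; what it cannot do is track the element type of a collection, which is where the proposed collection extensions come in.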
> Similar extensions have been explored for collections to further omit the instanceof checks without requiring explicit type parameterization. For example, if a collection is only written to privately, you can infer casts from what types are inserted in to the collection.
This could be exactly what Go needs. Any link to some reference material on this topic?
I have a whole bunch of references related to dynamic languages and type inference, but can't find the specific one I'm thinking of on my hard drive. Sorry.
Yes, the Google C++ style guide (https://google.github.io/styleguide/cppguide.html) recommends avoiding exceptions as well as complex template metaprogramming, and if the latter has to be used, user-visible APIs should avoid templates if possible.
>recommends avoiding [...] template metaprogramming,
FYI... the Google style guide doesn't ban generics/templates. It's discouraging "template metaprogramming", which is a technique beyond parameterized types (generics) - e.g. using recursion in Turing-complete template programming to calculate compile-time values.
It uses generic type parameters. If you asked the author whether the library would have been easier and "less complex" to write if the generics language feature were removed from C++, I think we can be certain the author would say "no."
Even without asking the author, if anyone believes the library suffers "simplicity debt" because it uses generics, they can fork the code, rewrite it, and then prove to skeptics that the hashing library is much clearer and more maintainable without generics.
I didn't say Google banned templates - you quoted me yourself, and the quote states exactly what you're saying: that template metaprogramming is recommended to be avoided.
Go isn't against generics; the authors are open to discussing adding them if the right approach can be found to implement them without adding much extra complexity. Generics do add complexity - it is bearable in most cases, especially given the benefits, but it still exists.
The authors don't want to end up with something like C++, where you have to manually limit yourself to keep your code from becoming too complex and unmaintainable; at the same time, adding templates just to have them won't add any immediate benefit of large magnitude.
Well, your response was to grasleya and he wrote: "Why is it that relatively simple and ubiquitous language features like [...] generics are [...] an unacceptable amount of complexity?"
[...] Are there any Java, C++, etc. developers arguing for the removal of these features [...]?"
And your reply was: "Yes, the Google C++ style guide recommends avoiding [...] template metaprogramming,"
... and therefore, that makes it look like you're conflating "generics" with "template metaprogramming".
If your intention was to respond with a random tidbit unrelated to the proposed generics for Go, it means you're just introducing a non-sequitur and confusing readers trying to reasonably follow the context of the thread. The op (grasleya) wasn't talking about "template metaprogramming" to compare to "generics" in Go.
Google recommends avoiding exceptions for a completely different reason, though: Google has a massive C++ codebase that was built when C++'s particular implementation of exceptions had huge technical issues that made them dangerous to use. Now that Google has a bunch of code that was written as if exceptions didn't exist, they have decided that it will be easier for Google to just continue pretending that exceptions don't exist, rather than having to rework (and possibly break) a bunch of already-working code.
So that is the point. Go's design doesn't allow these complexities, so there will be no confusion about when one should use them and when not. This is somewhat limiting, of course, but it turns out it's not the end of the world; the benefits may outweigh the limitations. If you really need those features, C++ is always available to you.
Go allows a wide range of complex metaprogramming through reflection, struct tags, ... To say that Go is devoid of that kind of garbage is lying.
Go also has a bunch of weird rules regarding type conversion and assertion, and its type system isn't covariant... Go has its share of problems - so much so that the Go maintainers themselves keep introducing type-unsafe APIs in their own standard library. Go has a type system problem, period.
Finally, not all platforms supported by Go are first class. Windows doesn't support Go plugins or Go shared libraries, AFAIK, for instance.
It certainly has problems; no sane person would seriously argue that it is literally and objectively perfect. Reflection is complex and the unsafe package is a hack - that's exactly what is stated in the official documentation of both, and both are recommended to be avoided if possible. No one claims otherwise.
Yet code in Go tends to be easy to read, uniform, and very well suited to working on in a team. No megabytes-long style guide is required, and in most cases there is no problem understanding code written by another person, because of how simple the language is.
> code in Go tends to be easy to read, uniform, very well suited to work on in a team
This was not my experience working on a moderately sized Go codebase. In fact, it was one of the messiest agglomerations I've ever had the displeasure of working with (and I worked at Twitter when it was still a monorail). Everyday code that is simple in most other languages was a drawn-out mess in Go. The type system was an obstacle to be worked around. Metaprogramming (via go generate) was a joke.
That statement would be more credible if you identified that unholy mess of Go code so that we could form an independent opinion.
Or if you showed a few examples of "code that is simple in most other languages was a drawn-out mess in Go" or how Go's "type system was an obstacle to be worked around" when compared to type system of other mainstream languages like C++/Java/C#.
As it stands, you're in a minority. The consensus is that Go codebases are among the most readable.
Go codebases are readable in that you can figure out what a given line is doing - but when 3 out of 4 lines of code are duplicated error handling, it's difficult to figure out what the "success case" of even relatively simple functions is.
> As it stands, you're in a minority. The consensus is that Go codebases are among the most readable.
Your statement is as unsubstantiated as his. But at least the parent isn't arrogant enough to claim he represents the majority. Go doesn't magically make code more readable than any other language. It's just something some gophers like to think, since they often start projects from scratch.
> The consensus is that Go codebases are among the most readable.
Among whom--other gonuts/gonads/gophers? In my neck of the woods a Go application over a few thousand lines is already a yellow flag, and a Go application, period, requires defending the choice over TypeScript or Java. (Difference being, of course, I'm not holding up my experiences as objective.)
I find the statements about the type system very credible. I've been programming go full time for nearly 3 years and not a single day goes by that I'm not bitten by it.
As for whether Go makes it easier or harder to maintain a manageable codebase, I don't know that it matters one way or another. I've written bad code in other languages and in Go. I will say, I've never seen a large Go codebase, and I suspect if I did it would suffer very much from the things that people complain about with the Go language.
> This is somewhat limiting, of course, but it turns out it's not the end of the world; the benefits may outweigh the limitations.
The situation is not really benefits vs. limitations. The style guide is put forward as a general guide. It's a proven way of writing code inside Google that most people agree on and think is acceptable.
It's about establishing common ground on how to use a language, rather than finding a sweet spot in a trade-off. Benefits vs. limitations is part of the equation, but it's put in a context that only makes sense inside Google.
Therefore it's not self-evident that the benefits-vs.-limitations argument automatically applies to general use cases outside of Google.
In other words, I don't like it when people use Google's (or any other large organization's) approach to automatically prove that those rules make sense in different settings.
I can buy using generics sparingly (especially in the form of complex template metaprogramming). But never using it at all? Can you imagine convincing the C++ community to give up Boost because its use of templates is too complex?
They also seem to admit that if they were to do it over again and start from scratch they'd use exceptions:
> Things would probably be different if we had to do it all over again from scratch.
This is when exceptions were not actually used in the code. The bloat comes merely from enabling the C++ compiler flags that generate RTTI and exception-support code.
> It seems that thinking on exceptions in low-level languages is firmly in the "no way" camp.
Except Go isn't particularly close to the metal anyway. The only way Go is low level is in the same way Java is low level: it's very limited in terms of the programmer's ability to create abstractions.
> If exceptions are so great, why Rust and Swift aren't using them?
Technically speaking, Rust does have exceptions with its unwinding feature. The panic! macro throws an exception, and catch_panic allows you to catch it. It is implemented exactly the same way as C++ exceptions, with the accompanying code bloat and performance pessimization, and you have to think about exception safety exactly the same way when writing unsafe code.
Rust has unwinding, which indeed opens a whole can of worms, but to say that it has exceptions is to miss the point. Rust has nothing anywhere near close to first-class resumable exceptions as surfaced in, say, Java. Not only is the `catch_unwind` function (it's not called `catch_panic`, btw) deliberately difficult to use (specifically to deter people from using it as a general error handling mechanism), the language deliberately defines an alternate compilation mode where unwinding does not exist and hence any attempt to "catch" it will fail. As a result, I have never once seen Rust code APIs that use `catch_unwind` as a mechanism for error handling; the purpose of that function is to prevent unwinding from crossing FFI boundaries.
> If exceptions are so great, why Rust and Swift aren't using them?
Because they made a mistake by not including them?
More seriously, you are approaching the argument the wrong way. You don't judge a feature by how popular it is, but by analyzing objectively whether the presence or absence of that feature leads to higher quality code.
Why would anyone try to convince Boost users to give up Boost? On the other hand, when designing a new language, it is reasonable to consider ways of reducing code cost and complexity by avoiding such features. A large fraction of C++ developers won't understand how half of the Boost libraries work at all, and that is the type of complexity Go tries to avoid. There are tradeoffs, that is undeniable, but it is a valid approach to address this exact issue.
I think the popularity of Go as a language (despite not having these features) is an argument that they're not actually needed.
If they were really needed, then you'd have significant numbers of people trying Go and then leaving because it isn't expressive enough.
Not that that doesn't happen, but we're seeing the language gain mindshare as more people appreciate the simplicity, partly due to not having these features.
I think it's fair to argue that these features shouldn't be added to go. Not because they're bad features, but because they'd basically require a rewrite of the stdlib and, as a result, all go code.
So, generics and exceptions would improve go. It just wouldn't be go anymore.
I assumed that was what the term "Simplicity Debt" was referring to.
Heh, perhaps the right answer is "Let Go hit its natural limits and kill itself" rather than trying to get its stewards to fix it. I suppose that might teach a whole generation of programmers what happens to codebases over time when you don't have the ability to build robust abstractions (or teach the rest of us something if it actually works out). It's just scary to actually let this go (pun intended) because the more successful Go becomes, the more likely each of us is to have to work professionally in a language without e.g. parametric polymorphism.
> Are there any Java, C++, etc. developers arguing for the removal of these features from their languages?
No, but I'll bet money that there are Java, C++, etc. developers who have abandoned those languages for Go. Developers often vote with their feet before trying to change a standard.
Personally, I think checked exceptions are sometimes a headache in Java and I've worked in places where the default when encountering a checked exception was to catch and throw RuntimeException with a helpful error, which is pretty similar to Go's error/panic model.
Checked vs unchecked exceptions is kind of a separate thing from exceptions vs no exceptions. There are plenty of modern languages without checked exceptions but few with no exceptions period.
There is a valid argument about whether exceptions are the best way to signal errors or not. But most modern alternatives are some form of capturing errors directly in the type system, so that return types can carry error information (option or error types etc), and so that callers are forced to check for errors.
Oh, I'm firmly in the monadic error return camp. It's just a misnomer that Go doesn't have exceptions. It has both exceptions and, by convention, error returns. It's sort of the worst-case scenario.
I will say that there is some good post-compile tooling around making sure that error handling isn't skipped, but it's definitely a weakness of the language.
- C++'s std::array is an example of marrying fixed sized arrays with templates.
- I'd say keep the built-in types and their behavior unchanged, but offer a way of providing those behaviors for user types. I guess the Deletable interface is one idea, but maybe the "delete" override should have nothing to do with interfaces. I'm thinking of something like func (t Type) operator delete(k KeyType)? I guess in Go a function is an interface, but I wouldn't start with the interface here.
- Allowing e.g. delete or slicing on user types does not need to imply that those types are polymorphic with the built-in types. This might be a little confusing, but I think it's still worthwhile to draw that line. I don't think a function that takes a Deletable needs to accept both the built-in map type and user types.
- range support requires some sort of iterator. It could be done through coroutines, like Python's generators, or something more like C++ iterators? Coroutines could be a cool addition to the language :)
I think you'd want to try and "contain" the change as much as possible. More syntactic sugar than fundamental changes and as little change as possible to the built-in types. Otherwise you end up with a completely new language.
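For the delete case specifically, a user-defined map type can already expose deletion as an ordinary method today; the open question above is only whether the built-in delete syntax could be made to dispatch to it. A minimal sketch (all names invented):

```go
package main

import "fmt"

// CountingMap is an illustrative user-defined map wrapper. Deletion is a
// plain method; an "operator delete" hook would let the built-in
// delete(cm, k) syntax call this instead.
type CountingMap struct {
	m       map[string]int
	deletes int
}

func NewCountingMap() *CountingMap {
	return &CountingMap{m: make(map[string]int)}
}

func (c *CountingMap) Set(k string, v int) { c.m[k] = v }

// Delete removes a key and counts only deletions that actually removed something.
func (c *CountingMap) Delete(k string) {
	if _, ok := c.m[k]; ok {
		delete(c.m, k)
		c.deletes++
	}
}

func main() {
	cm := NewCountingMap()
	cm.Set("a", 1)
	cm.Delete("a")
	cm.Delete("missing") // no-op, not counted
	fmt.Println(cm.deletes)
}
```

The sugar would be purely syntactic: `delete(cm, "a")` desugaring to `cm.Delete("a")`, with no polymorphism between CountingMap and the built-in map implied.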
And how many decades did it take for C++ to support std::array?
What people seem to ignore is that templates are a bottomless pit of complexity.
Swift is on version 3 and they still haven't finalized the semantics of generics. Not for lack of trying: Swift was trying to design and implement generics from day 1, and they still haven't finished.
That's how complicated designing generics is.
I prefer a stable language with a fast compiler to the Swift/Rust situation, with the language taking years to mature and a very slow compiler. Judging by the popularity of Go, so do many other people.
While those languages will probably mature after a couple more years, I don't see much hope for making their compilers as fast as Go's, regardless of the effort. See C++ compilers.
> Right now, to understand how io.Reader works you need to know how slices work, how interfaces work, and know how nil works. If the if err != nil { return err } idiom was replaced by an option type or maybe monad, then everyone who wanted to do basic things like read input or write output would have to understand how option types or maybe monads work in addition to discussion of what templated types are, and how they are implemented in Go.
I can't really follow the argument here: how would using option types replace the existing idiom? Instead of err being a pointer, err is an option type, and the only thing that changes is that instead of writing if err != nil, you write if err. And if comparing nil vs. option types works in isolation, then I also don't see where the jump to "in addition to what templated types are" comes from, or the connection to slices and maps.
In fact, if Go had option types, would it still need nil at all? By which I mean, on the user side of things (under the hood I suppose null will be used for implementing an option). Because if it doesn't - and I strongly suspect one doesn't need nil in a language without pointer arithmetic, which Go happens to be - then we can simply replace every nil with an option.
The discussion of how templates interact with existing features and complicate things is fairly clear, but the option types are treated as a lot more complex than they really are. In practice an option type is a compiler-enforced if-not-null check on pointers, because nil is a type instead of a value. That's about it. That level of compile-time enforcement also seems perfectly in line with the safety the Go authors want their compiler to provide, given how strict it is with errors.
You can easily use them without really understanding what a monad is - I'm living proof of that. The only extra thing you somewhat need to understand is sum types, which are a lot simpler. Then again, Go doesn't have them, and their usage overlaps with interfaces - how to deal with that might be a more useful discussion regarding option types in Go.
> A powerful motivation for adding generic types to Go is to enable programmers to adopt a monadic error handling pattern.
That has never been my impression. The main drive for generics is safer code, period. It doesn't necessarily translate to a different way of handling errors (Java and Kotlin mostly use exceptions despite supporting parametric polymorphism).
Rejecting generics because they would force a switch to monadic error handling is completely misunderstanding the value of generics, a sentiment that seems to be widespread in the Go community.
Rather than extending number-of-return-values overloading to user-defined methods, I’d say just deprecate it altogether. It might have made sense in a world where generic functions are magic and weird and so you want to minimize the number of them (but on the other hand don’t necessarily mind making them even more magic). But in a Go with generics, operations like “access value that may not be present” can just be regular named methods. That applies to both user-defined collections and the builtin ones. Would help pay down the debt...
(EDIT: usertype would be the templated type, so <userType> or whatever the syntax ends up for that. so the above would be the instantiation of the template for a particular type)
My point is that you don't really know whether the stdlib types actually have one method that returns 1 or 2 values, or two methods, one returning 1 value and the other returning 2, where the compiler chooses based on the call site which one to call. Just because [] looks like one method to the user doesn't mean it has to be one method.
Having two different functions is a way of dealing with this while minimizing change as it's just syntactic sugar and not a language change. Variable length tuple types are a much bigger change. They'd also presumably be a lot more expensive. Ideally a map type implemented by the user should be as performant as the built-in map type.
Tuples wouldn't be more expensive if implemented the way most languages do, where a tuple type has a fixed length and series of element types, e.g. '(Foo, error)'. In other words, structs with some syntax sugar.
But they're the wrong abstraction for this anyway. A better choice would be a type like maybe<Foo> (as mentioned in the post), that only lets you get at the contained Foo if it exists, rather than the current practice of returning a fake default Foo value in the case where it doesn't.
Having two overloaded functions would work, but it would increase the complexity of the language compared to dropping the overloading feature. Yes, for consistency you'd want to make that change to the builtin types as well, and that would be churn. But only in code that's already being churned: if generic collections are added (and maybe some immutability stuff), I think you'd want to inspect just about any code that uses slices or maps to see if a new collection type might work better or better follow the new idioms. (Mind you, I don't suggest actually breaking backwards compatibility, just leaving some of the existing stuff in a permanent supported-but-deprecated state. So "change everything that uses slices or maps" isn't as bad as it sounds - it'd be a recommendation, not a requirement.)
maybe<Foo>, or its C++ equivalent optional<Foo>, is more expensive. You now have either a bool + a value, or a pointer that can be nullptr. So you're consuming more memory at the very least, and if you want to panic on accessing a value that doesn't exist, that means an extra comparison as well. Granted, there are situations where the compiler can optimize this away in C++. I also find that starting to use optional in C++ code leads to it being used everywhere and for everything, which introduces new run-time failure modes that can't be detected at compile time and in general IMO messes up the code.
An alternative is to split find and access into two separate operations like C++ or to provide "in" like Python. Is there any language where a map/set access returns an optional/maybe?
FWIW I agree the current solution is clunky. Its clunkiness was evident prior to the template/generics question :)
I was envisioning that the maybe-returning version would be a separate method, equivalent to the current two-return-value variant, which of course already has that overhead.
Having the default return a 'maybe' would be a possibility; I'm pretty sure the practical performance difference would be completely negligible, given all the other stuff a map lookup has to do, but it might have worse ergonomics. In that case you'd probably want a builtin optional-unwrapping operator like some languages have, so it's not too verbose if you expect the element to be there.
> Is there any language where a map/set access returns an optional/maybe?
Swift is one. It has ! as an unwrap operator, along with other syntax sugar for optionals, so it's not verbose:
5> let q = ["a": "b"]
[snip]
6> q["a"]
$R2: String? = "b"
7> q["x"]
$R3: String? = nil
8> q["a"]!
$R4: String = "b"
9> q["x"]!
fatal error: unexpectedly found nil while unwrapping an Optional value
A maybe<T> in a language with halfway decent support for sum types is most likely a tagged pointer to a T, if it is reified at all.
A clever compiler is likely to do the same thing for a pair of (bool, T): represent the bool as a tag in the pointer, store the T. If the value is reified at all.
What new run-time failure modes do you get with optional? Is it just what happens when you blithely ignore the "nothing" possibility and attempt to extract the contained value "on faith" ?
How do sum types make multiple return less useful? Multiple return is more like returning a single value of a product type—like a tuple or, less sexily, struct—than like returning a single value of a sum type. What am I missing?
The difference in behavior between the one-return-value and the two-return-value version is that the former returns a zero value when the key does not exist in the collection. The one-return-value version is only there, basically, because you can use its return value inside an expression, whereas multiple return values have to be unpacked into variables first (unless you `return` them directly).
If Go had generics, the overloads could be merged into a single operator[] that returns a Maybe<ValueType>. Then that Maybe type would have an Or() method to supply a default, e.g.
>Right now, to understand how io.Reader works you need to know how slices work, how interfaces work, and know how nil works. If the if err != nil { return err } idiom was replaced by an option type or maybe monad, then everyone who wanted to do basic things like read input or write output would have to understand how option types or maybe monads work in addition to discussion of what templated types are, and how they are implemented in Go.
God forbid we'd all have to learn such a simple concept (in a language with several non-orthogonal concepts and special cases and all the channels coordination nuance that lack of generics prevents of abstracting in a standard library to boot).
>Obviously it’s not impossible to learn, but it is more complex than what we have today.
Well, adding a feature will always be more complex than not adding it. In the end, simple assembly-style instructions are the simplest, both to implement and in having fewer concepts for a developer to learn. But abstractions make things easier to write and read once one has learned their concepts. And it's 2017, even "lowly" JS developers understand such functional concepts today...
>What began as the simple request to create the ability to write a templated maybe or option type has ballooned into a set of question that would affect every single Go package ever written.
This is more like how molehills grow into mountains. People have been asking for generics as a standalone feature first and foremost -- orthogonal to replacing error handling. Now suddenly we can't discuss adding generics without taking into consideration the big burden of changing the error handling too?
>On the surface this sounds like a grand idea, especially as these types are leaking into the standard library anyway. But that leaves the question of what to do with the built in slice and map types. Should slices and maps co-exist with user defined collections, or should they be removed in favour of defining everything as a generic type?
Whatever the answer, the mere addition of the ability to have standard user-defined generic collections and collection libraries sounds awesome in itself, regardless of what would happen to the ill-thought-out maps and slices that special-cased genericity in the backend.
This doesn't seem like a discussion of how to best add generics, but more like analysis paralysis.
>David Symonds argued years ago that there would be no benefit in adding generics to Go if they were not used heavily in the stdlib. The question, and concern, I have is; would the result be more complex than what we have today with our quaint built in slice, map, and error types?
A better question is: would some more complexity hurt us, especially starting from an overly simplistic language, and even more so when said complexity actually unifies and harmonizes ugly special cases?
>People have been asking for generics as a standalone feature first and foremost -- orthogonal from replacing error handling.
There are many languages out there that have generics. Go is built on different principles. Just use something else; no need to force your ideas on people who don't want them.
Maybe go users should start going into every language discussion they can find and ask for generics to be removed.
First, Go wasn't designed with "avoid generics" as some kind of principle. That was the quick 1.0 version, but even before 1.0 its creators discussed (and usually postponed to some future date) the addition of generics. Russ Cox: "I don't believe the Go team has ever said 'Go does not need generics.' What we have said is that there are higher-priority issues facing Go."
Second, it's not some "non Go users" who ask for Generics in Go. Rather the opposite: non Go users could not care less. They have their generic C++, Rust, D, Nim, C#, Java or whatever. Generics are among the most asked features from actual Go users. Sure, there are old school Go users (and most of the Golang core team) who think they're just OK without it, or not worth the trouble. But a large majority of the Go community does ask for them. I didn't pull this out of my arse (besides it being obvious if you read Go blogs and forums).
Here from the official 2016 Survey Results: "When asked what changes would most improve Go, users most commonly mentioned _generics_, package versioning, and dependency management. Other popular responses were GUIs, debugging, and error handling." (emphasis mine).
If you can successfully argue that generics should be removed from other languages, then you should feel free to argue that.
The reason why nobody is arguing that is that it's hard to argue successfully, because simple ML-style generics are incredibly useful and have negligible if any drawbacks in virtually every statically typed language.
I would participate in such an argument only for a language that I am invested in. When it comes to Go, most people who argue about generics do not write Go and probably wouldn't touch it even if it had generics.
Also, an important issue with such arguments, is that generics is a feature, so obviously it is an easy side to choose for an argument. You can either have or don't have a feature. People will instinctively select the have option because common sense dictates that it is better to have something even if you don't use it. People want to quantify an argument and having is better than not having.
People don't want to argue about language design (there are few people on the planet who could really argue about language design anyway); they need language MHz to compare.
> When it comes to go, most people who argue about generics, do not write go and probably wouldn't touch it even if it had generics.
Although I have no more proof of the contrary position than you have for this assertion, I must still insist that it is bizarre. Generic types are not some bleeding edge piece of CS research, of interest only to academics. We have already fought this war in Java, for heaven's sake. Even despite the problems with Java generics, I don't think many people would want to go back.
There is of course a complexity cost, but there is also a cost to not implementing generics: code is more brittle and more reliant on casts and runtime assertions. Is this really so awesome?
Not the gp, but I think my own experience may also be relevant.
For many years since the birth of Go it was impossible to create something like a thread-safe reusable hashmap[1], which is pretty annoying when all your code is multi-threaded in goroutines. And creating such a hashmap was impossible due to the lack of generics. If you wanted to build your own concurrent hashmap, you had basically 3 bad options:
- you use `interface{}`, which is bad because of the dynamic dispatch (slow)[2] and it's also not type-safe.
- you use code generation, which is bad because I haven't left the JavaScript world to start using babel for Go …
- you create a specialized hashmap for every type you need, which basically means copy-pasting the same piece of code over and over and changing the type signatures, which is error-prone and a hassle to maintain.
[1] this is eventually gonna get better in the next release, since a concurrent hashmap will be included in `sync`.
I too feel like containers are the main area where it is painful. The sort of built-in generics we have in Go are useful in most circumstances, but it'd be nice to have custom containers which could take any type, instead of having to specify them separately. Errors is the other area where it feels like they could help (though the article points out possible problems with this).
That said your third approach (make custom concrete types as required) has not been a huge problem for me in practice, and is probably the best choice for now.
>I would participate in such an argument only for a language that I am invested in. When it comes to Go, most people who argue about generics do not write Go and probably wouldn't touch it even if it had generics.
Citation needed.
>Also, an important issue with such arguments, is that generics is a feature, so obviously it is an easy side to choose for an argument. You can either have or don't have a feature. People will instinctively select the have option because common sense dictates that it is better to have something even if you don't use it. People want to quantify an argument and having is better than not having.
This non-argument could be applied against adding ANY feature at all.
The idea that there are more enlightened users who care about "language design" and the unwashed masses who want to add everything and the kitchen sink to Golang and spoil it does not hold water (Besides, even if it was true, people who care about language design do not particularly see 90% of Algol 68 + CSP as anything to write home about).
People don't ask for all kinds of BS to be added to Go -- they ask for certain things that are actual pain points.
And the list the 2016 Survey post gives is quite sensible (Generics, better error handling, better packaging, etc) and specifically applicable to Golang pain points.
I remember reading a comment from one of the Go developers that the lack of generics is not some big design decision; it just didn't make it into 1.0.
The more defining features are goroutines and the type system. (Structs, embedding, interfaces, ...).
Go has multiple built in generics anyway. (Slices, maps, channels), so it's not like it could be some big principle to not have them.
Generics would take away the ridiculous amounts of boilerplate that go requires, and make the language much, much nicer to use.
And safer too. The amount of `interface{}` in a lot of go code is such a smell.. always makes me wonder what the point of using a statically typed language is if you use interface{} and type casting all the time.
I certainly appreciate the desire to avoid two main ways of doing things.
Curious about the race conditions being common. Is this from go encouraging go routines? Seems most programs don't need or use multiple threads for a single task.
Basically all these articles about Go make it sound like it was designed to be some kind of teaching language with a low conceptual barrier to entry, except Google intends it to be used in real settings, which is the problem.
Go is designed to minimize the cost of bad programmers. It's a decent idea, and likely scales better than languages that try to maximize the value of great programmers (lisp,Haskell,etc).
The part of this argument I don't really understand is that it seems like the cost of bad programmers is higher, not lower, with a "simple" language where it's _normal_ to have type-unsafe code and mutable state everywhere.
Go is type safe (unless you use "unsafe" package, that is, but it's no different from Rust).
Or show me how you would subvert Go's type system and tell me which language you're comparing it to that wouldn't allow similar type system subversion. And no moving goal posts, please. "Type safe" has a certain meaning.
The value of immutability is so overblown, as shown by literally every mainstream language being mutable by default.
I'm not cherry-picking. This is just the first search result. If you understand how immutability works from first principles, it just does much more work.
Ultimately I wouldn't describe Go as "minimizing bad programmers" but as "minimizing the damage that well-intentioned but ultimately misguided good programmers can do to the codebase".
For example I believe that your enthusiasm for immutability is well intentioned but I would rather not take 30x perf hit for theoretically safer code.
Bad programmer will write a bug that can be fixed.
A misguided good programmer will write a compile-time parser that only he can understand (possibly only for a week after writing it), will balloon compile times, produce incomprehensible error messages, and you won't be able to convince him that a standard recursive-descent parser is 10x simpler to write and read.
Go prevents mis-guided smart programmers from inflicting too much pain on everyone else.
It's been seven years since then: that was GHC 6.13, and GHC 8.2.1 is being released soon. Performance is even better now as long as laziness doesn't bite you (and it bites much less often than people think).
> you won't be able to convince him that a standard recursive-descent parser is 10x simpler to write and read
I don't know what a "compile-time parser" is, but someone used to a good parser combinator library is going to be difficult to convince.
It reads like the formal spec of the language. Hand-writing recursive descent parsers isn't fun (given a sufficiently complex grammar) and expecting people to accept it as the One True Practical Non-Ivory Tower Way (and expressing dismay at not being able to convince them otherwise) is suggestive of monoculture, the kind that prides itself on copying code and using wrappers over the two or three first-class data structures that exist in the language for everything.
Go is designed to be a language with a low conceptual barrier to entry; that's quite evident from its design. That doesn't mean it's just a "teaching language", or if it does, you'll have to explain that to all the people happily using it in production.
Some of those things are a direct result of Go's simplicity.
Good tooling in particular is available because Go syntax is deliberately simple. It's simple for humans and simple for tools.
Go shipped with a Go parser in the standard library from day 1. Go syntax hasn't changed since that day.
Having a Go parser in the standard library means that building tools like go get, linters, auto-formatters, code coverage tools etc. is very simple.
Contrast that with C++ or Rust or Swift.
A C++ parser alone (clang) is probably a bigger project than the whole Go compiler. Rust and Swift don't even have a stable syntax yet and as far as I know they don't have the equivalent of Go's ast package.
If Go didn't aggressively tame complexity then it would have ended the same way as Rust or Swift. Sure, it would have generics, but also slow compilation time, language that takes years to reach stability, tooling that is exponentially more difficult to write etc.
That's engineering: making decision about what is more important and accepting that less important things have to be dropped.
If features are most important, then you accept complexity. See Rust or Swift.
If simplicity is most important, then you accept less features. See Go.
I for one am glad that I have a choice.
I would rather have Go than a third language driven by the same ideas and priorities as Rust or Swift.
"Google language" is Dart and it does have generics. So a low conceptual barrier seems a reasonable explanation for Go's success. Rust has all the features you listed and of course generics. Both started around 2009, so they are about the same age as Go, but not as popular among developers.
>"Google language" is Dart and it does have generics.
Only Google never did a good job of promoting it, and it's a dynamic language meant as a better JS, so not the same market as Go.
>Rust does have all features you listed and of course generics. Both started around 2009 so it is about same age as Go but not as popular among developers.
Rust took until 2-3 years ago to finalize its design and be stable. Go was already stable for years by that point.
And Rust comes with an uncommon, and hard to grasp at first, memory management model which stops many in their tracks -- it's not the presence of generics (or traits, or option types or whatever) that's the issue.
I like programming in Go, but I don't understand the downvote on your comment. And I agree that Rust adoption is slowed more by the memory management model than type parametricity.
Of course you can find people using all manners of languages/systems in production. That doesn't mean they are well designed. And I don't think it's necessary for me to reiterate at this point why Go is poorly designed.
It's also a MUCH younger language than those you listed. That said, I don't think Go or any other of the 'new languages on the block' will reach anything near the wide use of languages like C++ or Java, at least not in my lifetime.
Well, Java had that wide an adoption (in the enterprise) already 10 years after it was introduced. It appeared in 1995 -- and by 2005 it was already the de-facto enterprise backend language.
Ditto for C#, but of course C# was pushed by MS as the native solution for Windows, a platform that already had 95% market share.
1995-2005 was the time when the traditional IT industry grew to such humongous proportions. The global delivery model, offshore development and so on were developed during that time. So either Java/C# came at the right time, or the IT industry grew because of them. Lots of new software and users were created that did not exist before, rather than a wholesale rewrite of pre-existing software -- otherwise e.g. COBOL would have been gone from the banking sector.
So asking Go to be as popular as Java in 10 years is holding it to an expectation that past languages weren't held to.
What articles are you talking about? This couldn't be further from the truth. Go is a practical language meant to be used in production settings. It's not a Haskell. For example, its support for protocol buffers is second to none imo.
Well, it wasn't. Haskell was designed primarily as a test bed for writing language features to support PhD theses. The great success of Haskell is the amount of new language ideas it generated, which we now see in many other places.
Being a real-world language was absolutely one of the design goals of Haskell. See point 1 of section 2.4 in "A History of Haskell: Being Lazy With Class"[1]. It succeeded at being suitable for writing large systems, too. It's easily the least-bad language I've used for writing software where garbage collection is acceptable. It hits a sweet spot between expressiveness, maintainability, and performance that I haven't experienced from any other language.
Question: Have you used Haskell to write real software? I find that people who have used it tend to have much more precise criticism than "isn't designed for writing real software". It certainly has lots to criticize, just like every other language. But most of the criticisms I see of it seem to have gone through 20 rounds of telephone, with basically no relation to the reality of using Haskell.
The title: "Go at Google: Language Design in the Service of Software Engineering"
It puts building production software as the first most important goal.
It then goes in depth about how design decisions were driven by things the authors learned from Google's vast experience of writing production software.
It methodically describes pain points of large-scale software production and how Go's design decisions are trying to address those pain points.
Unlike Haskell, Go wasn't designed for teaching. It wasn't designed for research.
Go was designed for writing production software that needs to be reliable.
>So building production software is stated as 3rd most important goal.
Ehrm, no? It is to be taken with no implicit ordering, i.e. teaching == research == applications, more or less.
All of these priorities are weighed in when there are talks in the community about removing or adding features.
Haskell is a lot better for writing _reliable_ software. It just has a higher barrier to entry, if you come from an imperative background (not much if it's your first language).
>Unlike Haskell, Go wasn't designed for teaching. It wasn't designed for research.
Several times I've heard mentioned that the strength of Go was the simplicity and on boarding new developers, along with them not getting confused about what code means. I'll agree it certainly wasn't designed for research though, since it seems to want to ignore the progress in CS over the last 20 years.
Haskell is very serious about stability, they don't just add or drop features without a long deprecation cycle. Most experimental things are language extensions, which are entirely opt-in.
The main problem Haskell has had for adoption (IMHO) was the lack of good _pedagogical_ material that would hold the user's hand and teach them the concepts of the language - there have been superb efforts at fixing this in recent years, such as the Haskell Book etc.
Well, all that really says is that the marketing for Go was successful, and that Go is sufficient to build said infrastructure (just as the languages preceding Go were).
No, it's meant as a warning against the trappings of explosive popularity. A language like C++/Java/JS is difficult to modify because of how entrenched the usage of the language becomes. In that sense avoiding "success at all costs" (which is how it should be parsed) allows "agile" development of the language.
I think the people who make and use Go have had a different experience than you, and judge a language to be suited to production with different criteria. Simplicity, great concurrency (that is, way better than most others even if not the best), speed of compiling, static binary, garbage collection.
I'm not OP, and I don't necessarily agree, but Haskell can produce static binaries, has GC, and has a great story for concurrency. It uses a M:N threading model similar to Go.
But this is an argument more about community than about language design. Granted there are a lot of problems with the Haskell community but deifying poor design decisions as "simplicity" is generally not one of them. For the horror that is string types, though, maybe.
The size of community is a direct result of the popularity of the language.
Haskell had almost 20 year head start over Go. That Go already surpassed it in the number of libraries speaks to the fact that people are picking Go over Haskell.
Following Occam's Razor, people pick Go over Haskell because they find it a better language -- where "better" for them is probably different from how you define it, given that you think Go is "poorly designed", that a "poorly designed" language can't possibly be "better" than even a mediocre one, and that (I gather) you place Haskell a bit higher than "mediocre".
> Go is a prime example of the "worse is better" doctrine, that's why it's popular
Also known as making trade-offs; a universal engineering theme. Engineering projects that refuse to make trade-offs don't fare too well - you have to pick a point that's acceptable to you on a strength/weight graph or you will be forced to use exotic materials that tend to come with their own cost (yet another trade-off: cost/strength). This is fine in a prototype or in a research paper, but when it comes to production, you will not win the fight against reality.
Edit: For programming languages, I imagine the graphs are roughly features vs. complexity (directly correlated) and then complexity vs. hiring pool size (inversely correlated). You have to pick a point on both graphs, and Google picked values some people on HN disagree with, and they find that offensive for some reason.
>This is fine in a prototype or in a research paper, but when it comes to production, you will not win the fight against reality.
Indeed, that's the argument I'm making. Where's the disagreement?
>For programming languages, I imagine the graphs are roughly features vs. complexity (directly correlated)
You know Simple Made Easy is a ubiquitous recommendation here and I don't really have anything to add to it. In Haskell as in Lisp all the complex "features" are described in terms of far simpler features. Languages that build in these features explicitly are complex, languages that let you choose don't have to be.
This is why C++ is complex and Common Lisp is simple, even though they have a pretty similar feature set all things considered.
As far as I can tell Go was designed by people who think C got most stuff right, and they intended to make a language that was hard to screw up in. I think they failed here though because neither option types nor generics are that hard to understand and eliminate huge classes of errors.
As I understand it in Go it's pretty common to cast stuff from interface{}. This is a bad pattern because it eliminates type safety. Somebody wrote on HN a while ago, and I'm inclined to agree, that you either have a static type system with generics or you have a dynamic type system. Go is no exception.
I've heard from multiple people teaching Haskell that it's much easier to grasp for people with no prior programming experience compared to those with one or more languages under their belt. It makes sense on some level - core Haskell is about taking plain data and transforming it into something else. Much easier to explain than public static enterprise beans. I think that learning Java/Python/JavaScript/C#/PHP as a first language burdens you with too much baggage, while a lot of what Haskell teaches can be successfully applied regardless of what language you end up working with.
It's truly unfortunate that Haskell got the reputation of being mainly about monoids in the category of endofunctors, and zygohistomorphic prepromorphisms. The core principles are simple, solid, and lend themselves well to writing production code.
Uh, what? Learning Haskell covers a huge surface area, many things you learn are applicable to tons of other languages. How does that make it a poor teaching language?
I'm interpreting "teaching language" to mean the first language students learn. How can one explain the need for monadic IO properly without explaining how regular printf and the like work? Also, Haskell is quite scary to the beginner (although I personally find it much simpler/more consistent than Java and C++).
If it's (say) the third language they learn, then Haskell is absolutely perfect for the exact reason/s you have stated. Personally, I actually learnt Haskell by accident (Read SPJ's book about lazy functional language implementation, realized SPJ started Haskell then learnt more from there).
That's only if you're used to imperative languages. Of course even in functional land a slimmed down language like Scheme is probably better for teaching.
I don't necessarily think that's a bad thing. Something that may be null is already an option type. It's "some T or null". Making the compiler force you into checking which it is, is no difference from checking for the presence of null. There is literally no difference in complexity - it's actually simpler because you don't have the cognitive overhead of remembering where a certain variable has and hasn't been null checked (After a null check, your variable has magically changed type to "some T, definitely not null".)
x = getsomething
if x != nil
dosomething(x)

and

x = getsomething
if x.hasvalue
dosomething(x.value)

The programmer alone has to remember this - remember to check, and the status of each variable that can be null. Was x checked before? Scroll up to see... Add 10 variables and some nested ifs and ask the programmer which variables can and can't be null at any given place. This is the job of a compiler, not a programmer. And if my experience is any guide, the less experienced programmers, which are the ones we are discussing now, would be happy to have 100 locals or 12 levels of nested ifs in a method. They need this added help more than the people who ask for it.