Go Contracts – Draft Design (googlesource.com)
249 points by dkarbayev on July 29, 2019 | 199 comments



This is too similar to interfaces for me and seems like it would only add to the complexity of the language, documentation, and compile time for a small increase in generic code.

One of Go's core strengths is simplicity. I've personally witnessed both C and Python programmers jump right into Go projects with minimal assistance. Learning new packages is absurdly easy because the code is so simple and documentation practically generates itself. I worry that widespread use of this feature would negatively impact the language in that regard.

Lacking a true "generics" feature has had an influence that people tend to overlook, which is that the Go community is less reliant on dependencies. One of Go's proverbs is "A little copying is better than a little dependency" [0]. The ecosystem is better documented and healthier because of it, avoiding the require("left-pad") world of the dependency abyss.

The language lends itself to being used for smaller, more concise code bases where things are written with specific and narrowly-defined purpose. It implicitly discourages "Architecture Astronauts" [1] and that strength shouldn't be overlooked or sacrificed for a few new container options.

[0] https://go-proverbs.github.io/

[1] https://www.joelonsoftware.com/2001/04/21/dont-let-architect...


Missing generics certainly had some influence on golang, but I'm not sure I understand your point. How does adding generics lead to a reliance on dependencies, or the absence of generics reduce reliance on dependencies?

The relative lack of "dependency hell" in golang compared to other languages is because of the great standard library. Golang also never had an official/proper (imo) solution for package management until just this past year or so (although people worked around that a bit)


I think the argument is that generic code, being, well, generic, promotes reuse at an abstract level, rather than a concrete level. Abstract code can be more difficult to understand than specialized code that does only what is needed in a particular use case.

I like the generics proposal, but it seems like a valid concern.


To clarify further, I think the concern is that people will start adding dependencies for things that could be implemented concretely in a dozen lines or so, e.g., a generic slice.Map function or even a linked list.
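For concreteness, here is roughly the kind of code in question -- a concrete, string-only map helper that takes a minute to write and copy where needed (a sketch, not code from the proposal):

    // mapStrings applies f to each element of in and returns the results.
    // A non-generic stand-in for a would-be generic slice.Map.
    func mapStrings(in []string, f func(string) string) []string {
        out := make([]string, len(in))
        for i, s := range in {
            out[i] = f(s)
        }
        return out
    }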

Note that this comment is not advocating for or against this proposal or generics in general; it just took me a few re-reads to understand the concern and I hope to save others some time. :)


I hope that if this design is accepted they would also include some common generic types/algorithms in the standard library instead of leaving it to a 3rd party. For example an updated sync.Map (that doesn't use interface{}), some things from SliceTricks [1], or maybe the Set example (although I don't really have a problem with map[thing]struct{}, Set is arguably more clear).

IMO, a standard library that judiciously used generics would be almost pure upside for most Go programs. Being able to write math.Sin(x) on a float32 instead of float32(math.Sin(float64(x))) just seems like an improvement in every way.
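As a sketch of what that could look like under this draft (the contract name here is made up, and the draft syntax doesn't compile with any released Go):

    contract floating(T) {
        T float32, float64
    }

    // Sin accepts float32 or float64 and returns the same type,
    // hiding the float64 round trip from the caller.
    func Sin(type T floating)(x T) T {
        return T(math.Sin(float64(x)))
    }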

[1] https://github.com/golang/go/wiki/SliceTricks


Agreed. I think this would go a long way toward addressing the original concern.


That's exactly what needs to happen. We will never arrive at a more powerful platform by self-gratification and refusing to stand on each other's shoulders. One well-tested reusable implementation is dramatically better than a hundred incompatible bespoke ones that happen to work and a hundred more that are subtly broken.


They're saying they think every project duplicating code that could be a shared generic dependency is actually a virtue.


>One of Go's core strengths is simplicity.

I also remember reading someone say their "Software has increased complexity due to Go's Simplicity" (or something similar along those lines).

It really is a balancing act. And it seems there is no one-size-fits-all solution.


> Software has increased complexity due to Go's Simplicity.

Which way do you mean this? The way I read it was that because Go was simple, we were able to build more complex systems. But you seemed to indicate that was a bad thing?

Yes, making simple whatever can be made simple is the way you want to do things, but some things simply are not simple, and thus require complex software.


No, that the simplicity of the language forces complexity in the code written in it.


I have thought about this for about a day, and I can't agree with you.

Every time I have seen complex Go code and investigated how it got that way, it was simply the result of one of the following: 1) writing Go code before fully understanding the language; 2) trying to write Go code like it was some other language.

I have seen some crazy complex Go in my days, and after spending hours and hours of trying to figure out what the code was doing -- I have always been able to simplify the program to a nice concise bit of code -- and I can assure you there are much better programmers out there than me.

Often the complexity comes from people trying to implement generics in some crazy roundabout way, or from prioritizing dependency injection or code coverage testing to an extreme.

My favorite example of this is https://github.com/rdadbhawala/dutdip -- and here is a good talk on it https://www.youtube.com/watch?v=HBLJV0cPRBM

The code base that implemented this in production is an amazing mess, I can assure you of that.


For example, check the Kubernetes talk from FOSDEM.


That's not been my experience of Go. Complexity is contagious - if your language is complex your code tends to be complex. Go's language design intentionally avoids complexity and Go code is some of the most readable code I've ever seen.


"Complexity is contagious - if your language is complex your code tends to be complex."

Going from assembly language to Go (increasing language complexity) decreases code complexity.

Stated a different way, would removing a feature from Go (say Goroutines) necessarily make a Go codebase simpler? If so, then I have to question why they added it in the first place.

Maybe your statement could be defended given enough context and caveats, but the overall relationship between language and code complexity is clearly more... complex... than you imply.


In general, I have a lot of sympathy for your concern--I've used Go and other languages with generics, and the difference is noteworthy. I personally think generics is worthwhile on balance, but I respect that others don't share this opinion. I'm glad you brought it up because most conversations about generics outside of the Go community pretend as though there is no tradeoff at all. That said, what sort of generics proposal would be sufficiently powerful (or perhaps that's not what you meant by "different from interfaces"?) to alleviate your concern? And note that this proposal is quite a lot more powerful than interfaces because interfaces aren't generic over multiple types, while contracts are.


Left-pad had nothing to do with dependencies being bad and everything to do with the fact that you could delete dependencies from NPM.

I predicted at the time that people would take away the wrong conclusion—"dependencies are bad"—from the left-pad incident. Sadly that prediction seems to have come true.


I don't really see how that's an unfair conclusion to draw.

The more dependencies you have, the more likely you are to get one that's going to cause a problem down the line, be it due to conflicts with other libraries, subtle bugs that take years to manifest, or someone deleting something from upstream package manager (as was the case with left-pad).


"Subtle bugs that takes years to manifest" are way more likely to occur with code you write yourself because you didn't want to import someone's production-hardened code than with popular dependencies.

Take crypto code. Which do you trust: a homebrew implementation of AES-GCM because nobody wanted to add a dependency, or libsodium?


I feel like crypto is an outlier; you should definitely use well-vetted libraries for that for a whole plethora of reasons.

However, I think the reason that people got upset over the left-pad thing was that it was a library most engineers could write fairly easily, without adding in another failure point.

If you rely on the language/platform's built-in libraries, typically the amount of wheel-reinventing you are doing boils down to writing simple wrappers, without adding in another external point of failure. These libraries are typically more production-hardened than something you'd find on NPM or Maven or Nuget.


Or take leftpad. The point is there is a spectrum along which the cost/reward calculation varies. Absolutely don't roll your own crypto and absolutely roll your own leftpad. Maybe this is a bad conclusion, but I think it's important to address that not all dependencies are crypto, at any rate.


Another example is internationalization. Most of the reason why software is bad for people who use non-Latin text is that people keep writing homebrew text code instead of using dependencies. This has ugly cultural implications, and it's entirely because of the "copying is better than dependencies" philosophy.


> Or take leftpad.

Leftpad was a problem with a particular package management system, not with code reuse. It's true that avoiding code reuse is a way to avoid package management problems, but it's kind of like the way decapitation is a way of avoiding mental health problems.


As pcwalton already said, the problem with left-pad was with npm and not the concept or the usage of dependencies per se.


This thread is debating that premise. Your argument is ultimately circular: “the problem with leftpad was npm because the problem with leftpad is npm”. pcwalton may well be correct (we often disagree and frustratingly he’s probably right more often than me) but not because of this circular argument.


It isn't as simple as being the wrong (or the right) conclusion. It needs to be seen in context. In the context of who is saying it, in the context you understand it and in terms of what actual code we're talking about.

What I mainly took away from the left-pad incident was that if people will add dependencies for code that is so simple that it takes less than a minute to write(1), then what else will they add dependencies for? I was a bit surprised at the sheer poor engineering judgement involved in adding massive amounts of dependencies so carelessly, given how fragile they proved to be in NPM.

Raise your hands, those of you who check all dependencies before every release. Did you look at all the change logs? Did you check for issues, security vulnerabilities, or whether it may be time to upgrade to a newer version (or not)? Did you follow all the transitive dependencies and do the same? Do you routinely evaluate each dependency with an eye to replacing it? No? Yes?

Using known-good implementations that you know to be maintained is good. Overdoing it creates new problems. Making a culture of it creates a community of fragile, ratty code.

Dependencies are bad if you overdo them because with scale problems tend to morph into new kinds of problems.

(1) You usually do not need to do all of what left-pad does. Usually you need one specific case. And once you have done it a few times (for a few different cases), it becomes like a kata you can write in seconds. It is much faster than adding a dependency. Not least because adding a dependency actually means you should spend enough time reviewing the dependency - in depth the first time, and perhaps just checking the change logs or commit logs on subsequent versions.
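For example, the usual specific case -- pad a string on the left with spaces to a given width -- is only a few lines of plain Go (a sketch of the kind of kata being described):

    // padLeft pads s on the left with spaces until it is at least width
    // bytes long (good enough for the common ASCII case).
    func padLeft(s string, width int) string {
        for len(s) < width {
            s = " " + s
        }
        return s
    }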


If all else is equal, then additional dependencies are a bad thing. Otherwise the only reason to not depend on everything would be code size... And if it's a compiled language with even just a minimum of optimizations then even that wouldn't be an issue.

As with most things - it is a trade-off.


That’s a very slippery slope argument that could apply to almost anything. Fortunately, most people realise the benefits of generics - and it has nothing to do with dependency management (which is already a mess by the way).

I’m more worried about eccentricities in the implementation that we’ll get out of the golang team. Then again, maybe we’ll be pleasantly surprised.


I agree with your position, but you’ve missed the point. Everyone recognizes the benefits of generics; the debate is about whether they’re worth the cost. Not everyone can even bring themselves to admit there are costs.


Agreed. What I don’t get is why people try to change Go all the time instead of simply using different languages for different problems. If you absolutely need generics in a project, there are a ton of languages out there that will fit your bill.


This proposal is by one of the core Go maintainers, and the others have never come out against generics; they have just expressed their concerns about how to implement them in Go while maintaining compile and execution speed. It's a bit over the top to suggest that they quit using Go just because you disagree with them.

I primarily program in Go. While I think often HN makes too big a deal of Go's support for generic types currently being (more or less) limited to slices, maps, and channels (those go pretty far for a lot of programs, especially when combined with interfaces), I don't get the passionate anti-generics attitude this post seems to have attracted either. I'm wary of architecture astronauts just like everyone else, but I think at least the Go team could probably use generics judiciously in the standard library in a way that would benefit most programs (like using sync.Map without casting to interface{}, or doing math on float32 without needing to do float32(math.Sin(float64(x))), etc).


Yeah, for me Go is a specialized tool mostly for dealing with IO. The entire standard library and all of the builtin types are really designed for IO-centric applications which is why it's a natural fit for servers/microservices. Small, narrowly defined applications that mostly coordinate via sockets and serialization and it does those things really well.

In those worlds you're more likely to move data between logical units in JSON or Protobufs (both of which Go is good for) than to need proper generics so I don't understand the push from this subset of the community.

As far as container types and data structures, look at the number of well built storage solutions that don't seem encumbered by the lack of generics at all like: boltdb, bleve, BadgerDB, InfluxDB, etc.


Generics help with reducing code duplication. This doesn't really depend on what you want to do exactly, more on how complex your code base is.


Microservices? I'm sorry, but what do you mean? Don't you use business logic in your microservice, or do you just do I/O without actually doing something with the input? Generics are for that part.


What I really like about Go is the consistency of their tooling. gofmt, godoc, the GOPATH convention, single binary deployments. I like all of that.

Maybe we could port some of those ideas to other languages so that Go programmers will feel at home :)


Static compiling, common across all OSes before it became an option alongside dynamic linking.

Fast compile times, already being enjoyed by many with Turbo Pascal for MS-DOS, or Object Pascal for Mac OS, among other languages with similar support for modules.

Godoc, javadoc did it first.

Gofmt, indent was one of the first and every IDE has done it for ages.

Other languages already have those ideas.


I'm sympathetic to many anti-generics positions, but this argument is lazy.

You rarely know all of your requirements up front, and walking back a language change a year or more into your project is almost always prohibitively expensive. At that point, you find yourself with a handful of equally-bad language-hybridization options. One of Go's objectives is scalability--as Go projects mature, they should not find themselves boxed in by the language. In the case of generics specifically, you have a slightly better option than language hybridization or changing languages outright which is generating generic code, but that's only slightly better (because now you need code generation built into your CI/CD pipeline which is inherently worse than a naive generic type system implementation and it also imposes a high cost on all clients of your generic code, since they will also have to adopt your same generic build system as the standard Go toolchain isn't your-generic-build-system-aware).


Sometimes we don't get to choose project requirements, nor the projects.


That’s not the language’s fault, nor is it a reason to downscale what makes it great.


It is, because its designers created a language in the 21st century while ignoring what has been done since 1976.

The generics proposal hints that the design is reminiscent of Ada generics. Well, its first standard was released in 1983.


Maybe because they're working at companies that have a "one language to rule them all" approach, and that language happens to be Go?


My guess is because they didn't choose the language. Many businesses prescribe the language(s) they use.


It will take a language deciding to remove generics before I believe this is anything but stubbornness.


... but you can't remove major features while retaining compatibility. So for the most part you can only add stuff.

So, people switch to new simpler languages. The way you remove things from a language is to create a new one.

Then these new simpler languages become popular, and then people want their critical favorite bit added. And eventually the language is no longer simpler. And people switch to another new language, and the cycle repeats.

Because we couldn't convince people to just use the existing complex language if they want that. Because, really, this is all a struggle to influence other developers. These days it's not enough to work the way that is best for you, you need to find ways to harness a critical mass of other developers, and at the same time contain the mess and damage that a huge number of miscellaneous developers out in the world can do to your ecosystem.

It kinda sucks. Where possible, I like to use older, less popular, more minimal: libraries, frameworks, tools.


Lots of languages have breaking changes between major versions.


Is it lack of generics that has resulted in fewer dependencies, or the incredibly full featured stdlib?


a bit of both?


It seems like a proposal that overlaps with interfaces a lot.

- Interfaces gather methods. The concrete type referenced through an interface doesn't need to know about the interface. Interfaces can be used to decouple operations from the details of the thing being operated upon.

- Contracts gather methods. The concrete type referenced through a contract doesn't need to know about the contract. Contracts can be used to decouple operations from the details of the thing being operated upon.

Disjunction: Interfaces are resolved at runtime and result in indirect calls. Contracts are resolved at compile time and their contract-nature is lost in object code, they just become direct calls. Oh and contracts can do operators.

It feels to me like the risk here is not more syntax but less orthogonality. There will be two features competing to be the standard way to do the contract-like things.


This is because interfaces and contracts are conceptually similar. They both give you a way to abstract over a concrete implementation of some set of operations. That in turn lets you use multiple different types that support those operations.

However, they operate at different times. Interfaces are a runtime abstraction and contracts are at compile time. Interfaces are dynamic and contracts are static. This manifests itself in important ways:

* Contracts give you a way to enforce homogeneity where interfaces permit heterogeneity. Say you want a function that takes a list of objects that you can call `Foo()` on. If you define that function as accepting a list of some Fooer interface, then the list you get at runtime may contain a mixture of all sorts of different types of objects, all of which implement Fooer. If your function accepts a list of a generic type implementing a Fooer contract, then at runtime you get a list of objects all of the same type, one which supports Fooer. (See the sketch below.)

* Because of the above, there is a runtime overhead to working with interfaces. When you pass a concrete type to something expecting an interface, that value needs to be boxed. When you invoke a method on it, you need some amount of dynamic or virtual dispatch at runtime. With contracts, you can generate code at compile time that calls directly into the chosen type because you know at compile time exactly which concrete type has been chosen.
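A small sketch of that difference, using a made-up Fooer method set (the second function uses the draft's syntax, which no current compiler accepts):

    type Fooer interface{ Foo() }

    contract fooer(T) {
        T Foo()
    }

    // Interface version: fs may mix many concrete types, and every
    // call to Foo goes through dynamic dispatch.
    func CallAllIface(fs []Fooer) {
        for _, f := range fs {
            f.Foo()
        }
    }

    // Contract version: every element of fs has the same concrete type T,
    // so the compiler can emit direct calls to that type's Foo.
    func CallAllGeneric(type T fooer)(fs []T) {
        for _, f := range fs {
            f.Foo()
        }
    }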

So I think you're right to point out the similarities. But I think having both, and having them be similar, is a good thing. It lowers the cognitive load while giving users more things they can express. I don't think the features will compete any more than interfaces and generics compete in other languages.

There are analogues of this elsewhere. C++ basically recapitulated this same history decades ago. Initially it only had virtual methods but no generics. You would see collection libraries that tried to define collections of arbitrary types using interfaces, but it never quite worked right. A list of ints isn't really a subclass of List<T> even though there is some polymorphism going on. Templates resolved that.

Likewise, Java added generics because interfaces were not sufficiently expressive.

Rust has traits and trait objects. The two features are very similar and reuse a lot of mechanism, but also let you do different things. Traits are basically like contracts and are instantiated and specialized at compile time with no runtime overhead. Trait objects are like interfaces or vtables. They give you runtime polymorphism at the cost of some performance.


> However, they operate at different times.

This seems like it would be solvable by simply allowing interfaces to constrain generic parameters. Something like:

    func foo(type A Comparable, B OtherIface, C /* unconstrained */)(a A, b B) C {
        ...
    }
You'd need to allow for type parameters on interfaces, of course:

   func lookup(type A Indexable(B), B Hashable)(set A, idx B) {
       ...
   }
But there's no need, IMO, to separate interfaces and contracts.


How do you do something like a graph, though, which involves two types (nodes and edges)?


    Graph(N,E)
What's the problem you see?


afaik there's no such thing as an interface for multiple types in Go; i.e. you can have (the equivalent of) `Foo implements Comparable` but not `Bar, Baz implement ConvertFromTo`. `Graph(N, E)` mentions two types, so you can't really express it as an interface, at least not in current Go.


But this is a single interface, with two parameters. It's effectively just a function that returns an interface type.

     Graph(N, E)
specialized with

     Graph(int, int)
is roughly equivalent to:

     interface Graph_int_int


ah, that makes sense! but what about something like

    ConvertFromTo(F,T)
? i mean, we need a single method

    func convert(f F) T
but where should it go (which type should implement it)?


The issue there has nothing to do with generics. You need overloading for what you want (and, IMO, overloading is a bad idea). Don't write code that way. Since conversion is rarely a generic pattern anyways, you just pass a conversion function in.

     func processStuff(type U, V)(a U, b V, convert func(U) V){
         bleh(a, convert(b))
     }

     processStuff(123, "foo", strconv.Itoa)
Adding the parts needed to do generic conversion functions is not a great idea. That way lies madness and SFINAE.


Duh, I missed your comment about giving interfaces parameters. Nm!


I agree. The way I think of this is that generics are compile-time polymorphism, and interfaces are run-time polymorphism. It seems natural to me that the language constructs to manage those things would be similar.


A couple of additional notes:

* Regarding performance: a compiler could devirtualize the virtual/interface calls if it can prove there are no necessary dynamic dispatches

* contracts additionally differ from interfaces in that the former are generic over multiple types while interfaces are only generic over one type.


This overlap also struck me as something worth considering.

Just to add some more data points to your disjunction section:

When talking about composite types using interfaces or type variables, the difference becomes more apparent:

   func Concat(type T stringer)(ts ...T) string
vs

  func Concat(ts ...fmt.Stringer) string
In the type-parameterized case, once you pick the concrete type, all the elements of the slice must be of the same type.

On the other hand this means that you can use existing slices. One of the most common questions I found when teaching Go to newcomers is "why can't I convert a slice of X where X implements Y into a slice of Y"?

i.e.:

    type sint int
    
    func (s sint) String() string { return fmt.Sprint(int(s)) }
    
    func demo(vs []fmt.Stringer) {
     for _, v := range vs {
      fmt.Println(v.String())
     }
    }
    
    func main() {
     a := []sint{1, 2, 3}
     s := make([]fmt.Stringer, len(a))
     for i := range a {
      s[i] = a[i]
     }
     demo(s)
    }
with contracts you can just pass a concrete instance of the composite type:

    func demo(type T fmt.Stringer)(vs []T) {
     for _, v := range vs {
      fmt.Println(v.String())
     }
    }
    
    func main() {
     a := []sint{1, 2, 3}
     demo(a)
    }
So, even putting aside runtime optimizations, there are perfectly valid use cases where you either want e.g. "a slice of any value as long as each value implements a given method" or "a slice of a concrete type which implements a given method".

The former allows you to lift your value into the abstraction (e.g. you can pass your implementation of an io.Writer anywhere some code expects one). This is generally unidirectional; e.g. the external code doesn't give you back an instance of your own type (because in order to do it, you'd have to employ a runtime type assertion to get it back).

The latter allows you to consume some abstraction on top of your data. This code can produce new values of your types (without hacks like factory interfaces) and the compiler knows about that.


Yeah, it looks like "contracts for homogeneous, interfaces for heterogeneous / pluggable" is a useful distinction.

But what worries me is that it's an extremely subtle distinction that requires understanding the implementation. They are still extremely similar, and their dissimilarity seems to be of the sort that will bite people. Beginners definitely, and sometimes even pros.


I wonder: why not just accept interfaces as contracts (a subset), e.g. literally allowing

    func Foo(type T fmt.Stringer)(T) T
and allow runtime values of contract types, e.g.

    contract SignedInteger(T) {
        T int, int8, int16, int32, int64
    }

    func Foo(x SignedInteger) bool { return x > 0 }

i.e. decoupling the question of runtime boxing vs. compile-time type parameter resolution from the "interface specification language".

I do understand that the need for the new "contract" syntax becomes apparent when you have contracts over multiple type variables, but that doesn't mean there is no overlap in the degenerate single-type-variable case. Perhaps embracing that overlap would help people understand more quickly where the actual difference is (in the way types and function signatures use them, rather than what operations they describe).


Every object-oriented language that has generics has ended up with separate constructs and overlapping use-cases for compile-time dispatch vs. runtime dispatch. C++ has both templates and virtual methods, and there are many problems where you could comfortably use either. Likewise, in Java you can choose between using a generic class or defining a class hierarchy that depends on dynamic dispatch for behavior.

The overlap seems to be a bit of "essential complexity" - if you support both compile-time and runtime dispatch, then the programmer has to choose which to use, and there are many cases where either would work.


Hmm, I’m not sure that’s quite a fair representation of Java. Contracts are interfaces in Java. Using generics just gives you compile-time type-safety (partial safety, anyway) without having much effect on how things work at runtime.

Using interfaces for generic type constraints feels very natural, even though there are some rough edges you can bump into when trying to do overly clever stuff.

It’s surprising to me that Go wouldn’t just use interfaces as “contracts” in the same way. Why add a whole new class of entities?


Because they have different semantics. Contracts are generic over multiple types and as /u/munificent mentioned elsewhere, they allow you to write code that enforces homogeneity. You can write a generic function that takes a slice of elements that have a Foo() method and all instances in the slice will have the same concrete type. This would allow the compiler to easily specialize those function calls, so no virtual function overhead. This means we could have an efficient, straightforward sort implementation that performs as well as a hand rolled implementation.
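Something like this, sketched in the draft's syntax with an abbreviated, made-up contract:

    contract ordered(T) {
        T int, int64, float64, string // abbreviated list of orderable types
    }

    // Sort is a plain insertion sort; the < below compiles to a direct,
    // type-specialized comparison rather than a call through an interface.
    func Sort(type T ordered)(s []T) {
        for i := 1; i < len(s); i++ {
            for j := i; j > 0 && s[j] < s[j-1]; j-- {
                s[j], s[j-1] = s[j-1], s[j]
            }
        }
    }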


Java compilers have been doing devirtualization for years with JIT and AOT tooling.


Agreed that devirtualization exists and improves performance, but it doesn’t address the semantic problem in that interfaces are about heterogeneity (and thus dynamic dispatch) while contracts are about homogeneity (which naturally lends itself to static dispatch). Additionally, devirtualization is a lot harder to implement (at least with any sort of useful comprehensiveness) as the compiler needs to prove to itself that the concrete type never varies which implies plumbing information across compilation units and (I think) breaking independent compilation of compilation units or at least making compilation dependent on some earlier whole-program-analysis pass.


Contracts support multiple "receivers". If I want to write a function that converts between two arbitrary types, I can write a `func convert(Type1 Type2 Convertible)(a Type1, b Type1){...}`, where Convertible is a contract with one method that takes a Type1 and returns a Type2. There's no way to do the same thing with interfaces, as an interface can only reference a single type, a single receiver.
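A sketch of what that could look like in the draft's syntax (the names here are made up, not taken from the proposal):

    // Convertible relates two type parameters: a From value knows how
    // to convert itself into a To value.
    contract Convertible(From, To) {
        From ConvertTo() To
    }

    func convertAll(type From, To Convertible)(in []From) []To {
        out := make([]To, 0, len(in))
        for _, v := range in {
            out = append(out, v.ConvertTo())
        }
        return out
    }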


And on the flip side, interfaces allow heterogeneous lists; []io.Reader can have several different distinct types that implement io.Reader in them, whereas the contract version would be "a slice of a type that implements io.Reader" but it would have to all be the same type.

I won't deny they're not orthogonal, nor that we're probably going to see some confusion about which to use when from beginners, nor that there are probably interfaces in the wild that really ought to be contracts (I'm pretty sure I've got a couple myself, though I won't fully check until the proposal is stabilized), but neither do they quite stand in for each other. Perhaps if this had been in the language from day one, more work would have been put into finding a way to make just one "interface/contract" feature with some kind of (probably confusing) parameter in it that lets it serve both purposes, but doing that today doesn't seem to be on the table.


Rust and Haskell both use something called existential types to implement their versions of Go's interfaces. An existential type is basically a type meaning "any type that implements the given type-class/trait/concept", so you can use it to create an array of different types that all satisfy the same contract. Type-classes and traits are roughly Haskell and Rust's equivalents of Go's contracts. Interestingly, Go contracts actually support a feature missing from Rust: multi-param contracts; contracts that refer to two or more types.


... and Rust does unify both of these use cases with traits; that is, what you're referring to is a "trait object", which is the analogue of Go's interfaces. The difference is when you use a trait to bound a generic, in which case they get monomorphized and become "direct calls" in the OP's parlance.

I haven't re-read it recently, but https://softwareengineering.stackexchange.com/questions/2472... did contain a pretty great explanation of how these things relate.

It's also worth noting that we also saw some confusion from people not realizing these were two use cases for the same feature, and so added some syntax to differentiate the two.


"An existential type is basically a type meaning "any type that implements the given type-class/trait/concept", so you can use to create an array of different types that all satisfy the same contract."

I can't speak for Rust, but when I was first learning Haskell I was bitten by believing it worked that way, but it doesn't:

    Prelude> :t sum
    sum :: (Num a, Foldable t) => t a -> a
    Prelude> sum [1::Int, 2::Int]
    3
    Prelude> sum [1::Float, 2::Float]
    3.0
    Prelude> sum [1::Float, 2::Int]

    <interactive>:25:16: error:
        • Couldn't match expected type ‘Float’ with actual type ‘Int’
        • In the expression: 2 :: Int
          In the first argument of ‘sum’, namely ‘[1 :: Float, 2 :: Int]’
          In the expression: sum [1 :: Float, 2 :: Int]
You can kinda make it work by wrapping it behind yet another type that basically boxes the different underlying types, but that causes its own problems (including, for instance, the fact that no amount of boxing will make this example work; sum is also demonstrating why Haskell won't do this). Basically in Haskell, either do it a different way (and there's usually a different way that will make it work better), or go to full on heterogeneous lists that don't even need the values to be the same type like HList, but unless I've missed an implication of one of the more recent type extensions, you can't have a simple container like this that has heterogeneous types based on their interface. (I know you can make complex ones, because I have myself, but not the way I'm fairly sure you meant.)


In Rust, you do need some kind of indirection here; it is often described as a Box, but it can be any kind of pointer type.


> whereas the contract version would be "a slice of a type that implements io.Reader" but it would have to all be the same type.

From what I understand, you could still pass a []io.Reader to a contract version of the function. A slice of interfaces would still implement the contract (and is the same type, i.e. io.Reader). However, you could also pass a []*os.File, which is not possible to do with the interface version of the function.
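Roughly (draft syntax; the contract name is made up):

    contract reader(T) {
        T Read(p []byte) (n int, err error)
    }

    // Both of these calls would be fine with the contract version:
    //   ReadAll([]io.Reader{...}) // T = io.Reader, a heterogeneous slice
    //   ReadAll([]*os.File{...})  // T = *os.File, a concrete type
    func ReadAll(type T reader)(rs []T) { /* ... */ }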


Sorry, yes, in Go, they could all be "an interface value that is io.Reader" as well. But that is still a subtly different flavor than what the current []io.Reader means.


If interfaces could have type parameters, then you could just do something like:

func convert(Type2, Type1 ConvertibleTo(Type2))(a Type1, b Type1){…}

However, that wouldn’t work as well if you wanted multiple methods with different receiver types, e.g. if you wanted to require convertibility in both directions.


"Specifying a required method looks like defining a method in an interface type, except that the receiver type must be explicitly provided."

This seems fine to me, I don't feel that these features compete in the same way you're suggesting. With interfaces, the receiver type is not constrained; with contracts, it is explicitly provided. Contracts are resolved at compile-time while interfaces are resolved at runtime. That doesn't seem like a stretch, but I'll have to dig in more with the proposal.

If anything, extending the range of const compile-time capabilities is a good thing.


I can't help but agree. Go, to me, was known for its shorthand cleverness in design - a capital letter instead of public, for replacing while, etc. It was a breath of fresh air. This overlap -feels- a bit wrong when viewed in that context, in my opinion.


Some people here seem to not like the similarity to interfaces, but I think it should be even more like interfaces. In fact, I like the interface syntax even better than the contract syntax. And the type parameter list pollutes the otherwise very clean syntax, in my opinion.

So you might ask what I am suggesting. AFAIK there are two major problems preventing interfaces from being used as generics:

1. There is no way to require the methods of builtin types aka operators.

2. Compile-time type safety is very limited

The reason why interfaces have no access to operators and other language features is that the language is kinda rough around the edges. And I really don't want to hurt anybody with this statement. I love the language, but I think there are some things that aren't perfect.

For example, just imagine every array/slice had a method (e.g. Position(int)) that was the equivalent of the square brackets, and the square brackets were just some syntactic sugar like

  arrayA[0] == arrayA.Position(0)
Same thing for operators like +

  x := 1
  x + 2 == x.Addition(2)
It would eliminate a lot of cases that make contracts necessary. In essence, it would mean that you could actually address built-in types in interfaces.

The second problem is related to the type parameter list. In my opinion, it would be better to simply put those types on extra lines like:

  type T Generic
  func Print(s []T) {
    // function body
  }
I mean, there should be no problem in making them available for more than one function and it would definitely clean up the function signature and look more Go-like.


Agreed, though I think the syntax (too many parens) could be made more readable and Go-like by emphasizing the generic aspect and making it stand out visually -- leaving no doubt that you're in a section that's generic, and perhaps dissuading overuse by making it noisy. Perhaps something like:

  generic func(T, U) Map(slice []T, mapper func(T) U) []U
For "contracts":

  type Set generic interface(T comparable) {
    Push(T)
    Pop() T
  }
Every time you want to refer to a generic type you'd have to use the generic keyword, which encourages use of concrete types:

  type IntSet generic Set(int)
This is more or less how Modula-3 does it. Generics have dedicated syntax with a "generic" keyword.


Agree & Voted.

Another alternative that I would like is to have angle brackets < > for better readability, so that definition and call would look as follows:

  // 1
  func Print2<type T1, T2>(s1 []T1, s2 []T2) { ... }
  Print2<int,int>( ... )
  
  // 2
  var v Vector<int>
  
  // 3
  func (v *Vector<Element>) Push(x Element) { *v = append(*v, x) }
Sorry, if it gives you C++ nightmares.


That looks much better! Somewhat like associated types of rust.


If you poke around the files ending with .go2 here[0], you can see some examples of contracts/generics being used. This linked review is a prototype of the implementation for the proposal.

[0] https://go-review.googlesource.com/c/go/+/187317


This is why even the most basic form of generics would be amazing for reducing boilerplate: https://go-review.googlesource.com/c/go/+/187317/3/src/go/pa...


I haven't been following these proposals and I want generics of some kind in Go (I've had my fill of code generation), but good God does the proposed syntax of using parentheses for yet another thing (in addition to method receiver, parameters, return types, logical grouping, arguments) make the language suddenly feel cluttered and hard to read.


Honestly asking what "generics" offers over passing and handling interface{}.

  func myBlah(thing interface{}){
    switch thing.(type) {
      case int:
        // handle my int
      case string:
        // handle my string
      default:
        // default handling
    }
  }
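For reference, the closest draft-syntax counterpart I can see would be something like this (a sketch only; the draft syntax doesn't compile today):

    // With interface{}, the element type is erased: the caller has to
    // remember what it put in, and mistakes only show up at run time.
    func reverseAny(s []interface{}) []interface{} { /* ... */ }

    // With a type parameter, the element type is preserved, misuse is a
    // compile-time error, and there is no per-type switch to keep extending.
    func reverse(type T)(s []T) []T {
        out := make([]T, len(s))
        for i, v := range s {
            out[len(s)-1-i] = v
        }
        return out
    }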


See [0], which is a code-generated piece of the ActivityStreams spec. As it is RDF-based, it is an open-ended set. Not worth doing by hand.

[0] https://github.com/go-fed/activity/blob/master/streams/gen_t...


Eventually it will morph into Lisp


Not sure why becoming way more simple and readable is a bad thing.


And it is basically notation noise. The idea is to inform the compiler as to which type names are generic.

   // proposed
   func Print2(type T1, T2)(s1 []T1, s2 []T2) { ... }
vs

   // less noise
   func Print2(s1 []'T1, s2 []'T2) { ... }
This proposal does -not- seem to be informed by the same 'outer space' mindset that gave us the Go language. A reminder that mere capitalization in the Go language informs visibility and access.

Beyond mere syntax issues, the semantics of generics itself is adding huge complexity to what is a very capable and accessible 80% language.


interesting

> func Print2(s1 []'T1, s2 []'T2) { ... }

where would you add the claim that T1 and T2 must obey some named "contract"?


They'd both be T1 in that case. T1 T2 means they're different contracts.


I don't think the parent implied that they should obey the same contract.

He asked where you'd denote the contracts each type must obey (e.g. the focus being: in which place, assuming, I guess, that you intend to stray from the current proposal of defining that).


Oh, oops, yeah, I misread "some" as "same". Denoting what contract they obey is just the name, isn't it? T1 and T2 are names of contracts.

I guess what you can't do by just using the contract name is say that two arguments must be of the same type. Nevermind.


> Oh, oops, yeah, I misread "some" as "same". Denoting what contract they obey is just the name, isn't it? T1 and T2 are names of contracts.

No, T1 and T2 name two generic types to be used in the body, the question is, where do you specify what contract they fulfill?

Eg if we convert an example from the doc to your suggestion:

    func SetViaStrings(type To, From viaStrings)(s []From) []To {
becomes

    func SetViaStrings(s []’From) []’To {
but how, or where, do you suggest we specify that From and To implement the viaStrings contract?

It doesn’t make sense (under this proposal) to replace them with a contract name directly, because the contract in this case takes two type arguments with different expectations of each. You’d need to change the whole proposal such that each contract constrained only a single type, and each function argument could specify potentially a different contract, rather than the current proposal, in which you can only specify one contract that constrains all of the types involved.


I really don't think generics (or contracts) will benefit the language. Most people who miss it are just missing the collection frameworks from Java/C#, and there are solutions to this anyway. (go:generate being one of them if you really need collections for X types).

I did miss generics when I first started writing Go, but after doing so for about a year at work and 2 years before that, I don't find myself missing them at all.


I have the exact same feeling. I was using generics extensively in C# particularly. I don't miss them at all after having spent a lot of time working with and understanding Go. I can't help but feel it will just make the language far less attractive in the long run.


Yeah I believe so as well. All languages having the same features would also be a downside. Just make a choice and stand by it. (Like Haskell, they are not pressured into doing things just because Java does so).


> Like Haskell, they are not pressured into doing things just because Java does so

I do not mean this as a criticism of Haskell, but Haskell has a lot of GHC-specific language extensions:

https://downloads.haskell.org/~ghc/latest/docs/html/users_gu...

I am not sure it qualifies as making a choice and standing by it.


I agree. I am not missing generics at all. It's a pain once in a while, but it passes. But addressing that minor downside in no way, in my mind, justifies destroying what actually makes Go an effective language.


It is interesting to compare the mostly negative reception here with the more positive one over at reddit.com/r/golang [1].

Granted, that community on average probably contains more "Go fans". There was no lack of criticism of the try proposal over there, though.

1: https://www.reddit.com/r/golang/comments/cifdwf/the_updated_...


I read through this yesterday and I'm fairly confused about all that.

1. This is a long proposal. Excluding the implementation/issues/comparison parts, it clocks in at 7700 words. Compared to the 27k-word long Go spec, this is epic. I know the proposal is more verbose than the eventual spec (if accepted), but still...

2. There are some valid points in Nate Finch's criticism of the previous proposal (https://npf.io/2018/09/go2-contracts-go-too-far/). Be it philosophical arguments or just plain syntax ones - like writing `func (...)(...)(...)` in function declarations.

I don't know. I'd like some form of generics in the language, but I'm not sure if complex designs like this are likely to be embraced by the community.


I think the complexity / length is a good indication as to why there's resistance to adding generics to Go; frequently, Java generics are cited since (and this is off the top of my head) they almost doubled the size of both the Java spec and its implementations (compiler, runtime). That is a lot of complexity to manage, and runs counter to what I believe is the current mindset of the Go language.

And in Java it didn't stop there; the people (person?) behind generics moved on to create Scala, a language with even more features and a notoriously complicated 25-step compilation process (see https://typelevel.org/scala/docs/phases.html) (compare with Java's six phases https://www.javatpoint.com/compiler-phases and Go's... three? Not sure, I found https://getstream.io/blog/how-a-go-program-compiles-down-to-...).

edit: Actually most of my knowledge comes from https://news.ycombinator.com/item?id=9622417, I had bookmarked it.


> And in Java it didn't stop there; the people (person?) behind generics moved on to create Scala, a language with even more features and a notoriously complicated 25-step compilation process

What does this have to do with anything? Scala is complex to compile for reasons far, far beyond generics. Are generics somehow guilty by association because an engineer who was once involved in an implementation of them went on to build some other complex thing?


I'm afraid that's not the case. Generics are very easy to implement under the following reasonable conditions:

1. you don't care about performance (you simply 'box' everything),

2. you don't care about executable size (you simply specialise every generic definition to their concrete use cases -- this is what C++ compilers do),

3. you don't do reflection on types in generic definitions.

As "cinnamonheart" mentions in a sibling post, ML's generics are straightforward, and remain, despite hailing from the 1970, even today a shining example of programming language design. Unfortunately, Scala had to violate all three points above to maintain compatibility with Java, and the JVM.


> And in Java it didn't stop there; the people (person?) behind generics moved on to create Scala,

Java and Scala are two very different languages; it doesn't make sense to compare them. What you're suggesting is basically a slippery slope fallacy.


I don't think generics necessarily need to have a complex specification. Standard ML's specification, despite parametric polymorphism and parameterised modules, is incredibly small and easy to understand.


The compiler should do as much work as it possibly can. The alternative is making me do it at 0.0001 MHz because I'm made of meat.


I agree that the proposal is far from succinct.

But it is not really written as a technical proposal, more as an explanation with plenty of detail and abundant examples. The important information is kind of drowned out by the noise.

But other than that it is not really that complex. Generics aren't easy, but they are not something that needs to be invented. Implementation options and tradeoffs are well understood.

The duplication introduced by having both contracts and interfaces is unfortunate though.

Regarding syntax:

No language has really managed to provide a really good syntax for generics, imo. The default <T> is often ugly. Haskell chose to have them declared in a separate function header which is much nicer, but also annoying when having to modify two lines, etc.

I think this is a reasonable tradeoff. With special syntax highlighting for `(type T)` it will be easy enough to parse visually.

The proposal does contain some particularly unfortunate examples though.


It doesn't have contracts, but Zig's syntax seems pretty good:

https://ziglang.org/documentation/master/#comptime


The draft design is long, however, I don't think by itself that's an indication of complexity, and this would be a fraction of the size in the actual spec. This is tutorial-like in places, with many code examples (more than a third of the doc), and a whole section (somewhat less than a third) on issues and discarded ideas.


> but I'm not sure if complex designs like this are likely to be embraced by the community.

Spoiler alert: It won't.

If you think the Go community had a strong reaction with `if err != nil`, just wait for this one.


This proposal is so hilariously complex relative to everything else in Go, I have to wonder if the Go team has already decided that they aren't adding Generics; they release a galaxy brain generics proposal which even the strongest proponents of the feature have to admit is way too much, it inevitably gets struck down, and now they can say "look team, we tried and you said you didn't want generics."

Genius level play from Ian and Robert.


Perhaps a 3-part Medium post or a series of 10 tweets would be more suitable for the most discussed (lack of) feature in years?


I don't like this at all. This is a classic example of a feature where, in the right hands, it's very powerful, but in the wrong hands it will ruin the simplicity of the logic it touches. Go's magic was really in its ability to be placed in the wrong hands and still end up with a serviceable program.


I find the position that the real solution to incompetence is to just give people blunt knives to be utterly unsatisfactory. If someone cannot understand how exceptions work for example I have a hard time seeing how removing this massive burden from their minds will result in good outcomes. Being easy to use for people that probably would be better off doing something else is not a solid design principle.

Java went down this route, ah, we will remove operator overloading, that will make it simpler, because nobody can ever write a function named equals which does not do what you expect it to... and the end result is horrible and does not prevent people from doing amazingly dumb things.


To quote Rob Pike:

>The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

If Google, with its intensive algorithmic interviews, can't manage to hire people capable of understanding something like:

  func max(T: Comparable)(a, b T) { 
    if a < b { 
      return b 
    } else {
      return a }
    } 
  }
And why we'd prefer that over having to rewrite the function for int8, int16, int32, int64, uint8, uint16..., then what hope is there for anybody else?


I don't think that's what Rob Pike was saying. It is more along the lines of, if you hire smart people who don't know C++ and then teach them C++, it is still a very long learning curve since the language has so many gotchas. (And you can make the same argument for Haskell.) You spend a lot of time on becoming a language expert rather than learning about the domain you're actually interested in.

It's better to have a language that can be picked up easily and that you can be competent in without needing to be an expert. And this is something Go is known for.

The example you gave seems reasonable for any Googler to understand (once the errors are fixed). The contracts proposal does make Go a bigger language, but it seems worth it.


It makes me sad that people think the problem with C++ is that it's powerful, not that it was saddled with backwards compatibility with an ancient language and then has accrued over thirty years of its own baggage on top of that.


If this is too complex for them to understand I would be really worried about the "good software" they are building. I agree hiring good people is hard, but structuring your process for people that cannot understand the concept of Go Contracts may not be the optimal solution to this problem - at least in my view it is not.


> If this is too complex for them to understand I would be really worried about the "good software" they are building.

Go have a look at all the Android architecture issues, rebooted at every IO, build performance problems and constant regressions in Android Studio.

Or since we are on a Go thread, what the Kubernetes code actually looks like (they did a talk about it at FOSDEM).


What's the source of this quote?


rob pike lang.next 2014

it's the talk that starts with sawzall


The funny thing is: your code has a syntax error (two, actually).

It's dumb to nitpick, but it's symbolic of the problem; even if you're so sure that you have no mistakes, that what you've written is well designed and will work well... you're probably wrong. Even the simplest blocks of code will have issues, and the language should do everything it can to help this.

No one likes to be told that all they deserve is a blunt knife, but frankly, it's more true than it isn't. I'll gladly wield that blunt knife if it means I get to build awesome products that don't break at 2am.

(Edit) I made a mistake in my analysis of your code; there's actually three errors. Everyone makes mistakes. Our tools shouldn't encourage us to make more.


Now imagine if I'd copy-pasted that code for every builtin int and float type; there'd be ~ 36 errors.


That's a fair point, but it harkens back to the argument we see a lot for generics; "we'd have to repeat that 5 times for int8, int16, int32, int64, and just int, not to mention floats!".

I think there's a problem of complexity with that core argument. Why are you replicating this function 5 times? Why not just use `int`? Or just `float`? In other words, you're creating a solution that is (probably) too complex, and to manage that complexity you're asking for more language complexity. How about we attack the root of the issue: why is the solution so complex? Can we reduce that complexity?

More often than not, we can. Developers tend to over-engineer things, and Go's type system is a forcing function toward simplicity. Every once in a while, we run into a problem that would benefit from generics. But its very rare. The cost of "having" generics in every situation, including many where you don't need them, far outweighs the cost of being forced to grab `interface{}` or `reflect` in the very few situations where you don't have them. The problem is: It's not as obvious that this cost is higher. That's the challenge the Go team has had over the past few years; convincing us, who hold a loaded gun pointed straight at our foot, to not pull the trigger and turn this language into Java2.


Would the mistakes compile?

In many cases having a simpler, less powerful language increases the amount of code you have to write. In this case it would be one place to fix; if the function were implemented for n different types, then a mistake would have to be fixed n times. If Go Contracts reduce duplication, they help eliminate an antipattern.


As a reliability engineer I disagree. It's nice to know that your code has few antipatterns in it by design. "Blunt knives" is a really bad way to think about it. You have plenty of very sharp knives, and they are easy to use and have clearly defined scope and purpose.

A knife that's sharp and unwieldy makes me nervous even in the hands of an expert. Keep unwieldy constructions out of Go!


> It's nice to know that your code has few antipatterns in it by design.

But does making the language incredibly simplistic really do this?

The way I see it some devs will have more antipatterns than other devs - and this will be the case even if you make the language as simplistic as possible. The solution here is not to make the language simpler, but to rather look at why people are using anti-patterns.

> "Blunt knives" is a really bad way to think about it.

Is it? Most of these things result in having to write more code and don't really prevent you from doing something stupid. I'm not even sure it reduces the surface area of all possible stupid things in a meaningful way.

> Keep unwieldy constructions out of Go!

What makes a construct unwieldy?


"Most of these things result in having to write more code and does not really prevent you from doing something stupid."

I see this accusation a lot, but it does not manifest in my code after years of usage (and, yes, I know Haskell and am pretty good with it, so it is not a lack of knowledge of the possibilities).

If you write Go in Go, it's a pretty good language for production code. It doesn't give you all the goodies, but it's got some stuff of its own that's pretty cool, like its baby structural typing (I think it is still massively underestimated how useful this is in practice over languages that require you to declare conformance to an interface; this has major second- and third-order consequences for code bases), the syntactic ease of composition, and the select statement (channels on their own, meh, but put them in a select statement and they're very nice). On the whole I'd say it's usually a bit less slick than Python but noticeably more pleasant to work with than Java.
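
A tiny sketch of those two features, with made-up names, for anyone who hasn't used them:

    package main

    import "fmt"

    type Stringer interface{ String() string }

    type Temp float64

    // Temp satisfies Stringer just by having the method; there is no
    // "implements" declaration anywhere (the "baby structural typing").
    func (t Temp) String() string { return fmt.Sprintf("%.1f degrees", float64(t)) }

    // select blocks until whichever channel is ready first.
    func firstReady(a, b <-chan string) string {
        select {
        case s := <-a:
            return s
        case s := <-b:
            return s
        }
    }

    func main() {
        var s Stringer = Temp(21.5)
        fmt.Println(s) // prints "21.5 degrees"
    }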

If you, on the other hand, want to write Haskell in Go, or C++ in Go, or Javascript in Go, you're in for a bad time. Certainly from this angle Go is a nightmare. It's not a good idea in any language, but it's fatal in Go.

(I'll also say it's hard to put a finger on all the details why, but it's a really easy language to refactor in. I've been handed 5,000-ish lines of "sorta OK code" from various code bases in Perl, Python, Go, and Java in my job, and of the four, Go is by far the easiest to go into and brush it up to a higher quality level.)


I'll also say it's hard to put a finger on all the details why, but it's a really easy language to refactor in

I'd be interested in some data on exactly what refactoring operations are most common in your practice. Back in the Smalltalk Refactoring Browser heyday, "Extract Method" was by far the most common operation. I'm not so sure that's going to be the case for golang. My situation might be idiosyncratic, but I find myself thinking, I did this with a struct, but this really should have been an Interface!


In these particular bits of code, the "brushing up" I'm referring to is removing global variables and init functions (or local equivalents) and putting them into structs with New functions (in Go, local equivalents in the other languages), adding the local equivalent of dependency injection (even with the framework the code was using, most difficult in Java) and making it so development test code was not directly hitting production infrastructure with that (!). Mostly I wasn't doing "extract method" types of refactoring in these particular cases.

Go isn't necessarily the easiest in any particular category, but it probably wins globally. Python is the easiest to refactor some code without making any modifications to some code that uses it, with all of its magic methods and such, but if you keep doing that to a given code base, after a few rounds of that you get into the sorts of situations where you really have no idea what "a.b = c.d(e.f, f.g())" is really doing. On the other hand, you end up missing that stuff when you're trying to refactor the Perl, which is nowhere near as good at that stuff. Java's the worst one, and I have to admit I ended up giving up even trying to clean it up; even with IDE support it takes a lot more cognitive effort than Go, the changes run deeper and spread farther, and in the end, you only end up with as much as you would have gotten with Go, not anything particularly better.

It's funny; on the one hand I'm all about how the cognitive affordances of programming languages produce different results, on the other hand, when it really comes down to it, an awful lot of code is written in a way that means those affordances don't even matter because it's just too darned easy to assume the entire rest of your infrastructure is up and it's OK to bring in production resources because they're there and they're going to work... and what can your language really do about that? Who cares if that code is written with mutable or immutable variables? We lost at a much deeper level than that.


Java reversed that route around Java 5, when they added a crapton of language features (generics, annotations, autoboxing, enumerations, varargs, foreach) that considerably increased its complexity and meant Java was no longer just a very simple OOP language. Before that, Java had very minimal changes in the language itself, and all enhancements were on the library side.

All the forces around Go (and especially outside it, but still influencing it) seem to push it along the same path. So far the Go designers are resisting bloating the language, but I wonder how long that will be the case. There is this mindset that a language can never be good enough and stick with its existing features; instead it must always get new features - which ends in inevitable bloat. This sort of featuritis is like locusts moving from one project to another.


I keep hearing this myth about how Java pre-5 used to be a pristine language, until it was touched by annotations and generics - and I doubt it, because I was there.

Before annotations, Java libraries were basically using gigantic XML files to configure things like dependency injection and routing. I don't like the way Java libraries use exceptions at all, but you can't seriously say you want to go back to the early J2EE days. Look at the mess that was required to create an Employee entity: https://docs.oracle.com/cd/A97688_16/generic.903/a97677/ejb1...

Before generics, foreach, and autoboxing, Java code was littered with ugly code like this:

  List incrementIntList(List input) {
    List result = new ArrayList();
    for (int i = 0; i < input.size(); i++) {
      Object obj = input.get(i);
      if (obj instanceof Integer) {
        Integer boxed = (Integer) obj;
        obj = new Integer(boxed.intValue() + 1);
      }
      result.add(obj);
    }
    return result;
  }
While in Java 5 you could finally write:

  List<Integer> incrementIntList(List<Integer> input) {
    List<Integer> result = new ArrayList<Integer>();
    for (Integer i : input) {
      result.add(i + 1);
    }
    return result;
  }
With Java 8, we got something arguably even better:

  List<Integer> incrementIntList(List<Integer> input) {
    return input.stream().map(i -> i + 1).collect(Collectors.toList());
  }
I fail to see how the first version is more readable. Yes, the Java compiler had to become more complex to accommodate this, but every other Java codebase in existence became considerably simpler. I can't see how this is not a win.

The forces pushing for the same kind of ergonomic improvements in Go are not outside forces, but very much the Go designers themselves. This draft was written by Ian Lance Taylor and Robert Griesemer, and it looks like a complete overhaul of the contract mechanism proposed by Russ Cox. These people are part of the Go core team.


I wasn't referring to libraries, I was specifically referring to the language itself, and the language was simpler. Also, I never worked with J2EE; in fact, I worked with the complete opposite: J2ME. While it had a lot of pains, these were largely with the implementations, not the language nor the APIs, both of which were very simple. Though even before J2ME I mainly dabbled with applets, which again were mostly simple (that was Java 1.1, btw, thanks to MS Java).

Also, I've written a lot of Java in the past, most of it during the pre-1.5 days, and I do not remember ever writing anything like the example you used as an ugly one; that looks contrived. For starters, I think I've only stored multiple object types in an ArrayList a very few times; most of the time it only stored a single type, so I didn't need to check the instance. Furthermore, these ArrayLists were most often hidden behind a higher-level API, so you'd get "addFoo(Foo foo)" and "removeFoo(Foo foo)" that would add the object to the list (and often do additional bookkeeping, like set some flag or call a method in foo).

And of course I do not remember ever needing something like incrementing an integer list.

Sure, you can make a mess of a codebase, but that doesn't mean you have to. Just because someone (apparently a lot of people) believes that a SomethingFactoryFactory is a good way to approach things, it doesn't mean that you have to follow it. Often a plain old "new" is just enough.

Also, when I refer to outside forces I do not refer to whoever writes those drafts; I refer to those who ask for the features that cause the drafts to be written.


It is an interesting problem to try to balance out. Java and C# are two very similar languages, but the latter decided to add many more features into the language. It would be nice to do a study to see how people write code in those languages, and examine existing code bases.

I don't think operator overloading is the best example to give here. It can mask things to make it less clear what's going on. I think Kotlin strikes a good balance, where they overloaded math operators for things like `BigInteger` without going overboard.

Other than that, Java has way more features than anything golang even hopes to offer. Golang literally dumbed down the language to such an extent that many people find it frustrating to work with. Whereas Java is getting some very cool features in the near future (pattern matching, record types, etc.)


> I don't think operator overloading is the best example to give here. It can mask things to make it less clear what's going on. I think Kotlin strikes a good balance, where they overloaded math operators for things like `BigInteger` without going overboard.

What Kotlin does (AFAIK) is map operators to functions: https://kotlinlang.org/docs/reference/operator-overloading.h...

    Kotlin allows us to provide implementations for a predefined set of operators on our types. These operators have fixed symbolic representation (like + or *) and fixed precedence. To implement an operator, we provide a member function or an extension function with a fixed name, for the corresponding type, i.e. left-hand side type for binary operations and argument type for unary ones. Functions that overload operators need to be marked with the operator modifier.
Say I have a custom type that I want to be able to compare to other instances of the same type (>, <, ==, etc). Java gives you the Comparable interface and then you use the compareTo method. What prevents me from just also using this in a way that makes things less clear?

If I need to support addition for my types (+), I have to make some method, add(), but I can also similarly make an add method that does not do what someone expects it to. Sure, I should not, but you should also not use operators in that way, and you should not do many other dumb things.

As long as a language is Turing complete, people will be able to do dumb things with it. The solution to this problem is not to make the language more verbose or blunt. The solution is to make the language more conducive to static analysis.


> I find the position that the real solution to incompetence is to just give people blunt knives to be utterly unsatisfactory.

I always think about it like this: When designing a tool, you can make one that prevents behavior from users on the low end of the bell curve, but only by also preventing behavior on the high end as well. I think about this a lot in the context of teaching. The kind of classes that ensure no child gets left behind also tend to prohibit any child from racing ahead.


> I always think about it like this: When designing a tool, you can make one that prevents behavior from users on the low end of the bell curve, but only by also preventing behavior on the high end as well.

If both tools are Turing complete then I'm not sure you can - and I would like to understand why people think it is possible - I have seen incompetent people do horrendous things in every language I have ever used. I think this whole avenue of trying to dumb down languages is a dead end, and I think a much better option is to focus on making languages that allow more static analysis.


> If someone cannot understand how exceptions work for example I have a hard time seeing how removing this massive burden from their minds will result in good outcomes.

Most software doesn't need scientists, just bricklayers. And they must lay those bricks fairly easily and without making it undecipherable for their supervisors.


I've seen the wrong hands do terrible things with combinations of `interface{}` and reflection!


This has been a story in programming for literally decades.


I still think that, for the most part, Go doesn't need contracts (yet). I think it's common, coming from the object oriented world, to expect polymorphism to be intrinsically tied to method overloading. But it doesn't have to be.

Of the 12 examples at the end of the draft, there is only one real contract used, `comparable`, which could be easily done without contracts (but with generics) by passing in an extra argument `lessThanOrEqual func(T, T) bool`.

The other kind of contract is harder to do without contracts, as it is the contract which combines all numeric types. However, most of the examples would be possible to write, albeit a little more messily, with a parameter `toDouble f(T) double`. Alternatively, we could go the route of baking in a few special contracts into the language, such as a numeric contract, just like how map and slice are baked in polymorphic structures.
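
As a rough sketch of that approach, using the draft's type-parameter syntax but no contract at all (the function and parameter names here are mine):

    // Max over any element type, with the ordering supplied by the
    // caller as a plain function instead of demanded by a contract.
    func Max(type T)(s []T, less func(T, T) bool) T {
        m := s[0]
        for _, v := range s[1:] {
            if less(m, v) {
                m = v
            }
        }
        return m
    }

    // Max([]int{3, 1, 4}, func(a, b int) bool { return a < b })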

What do we gain from writing code this way? It leaves the Go language with more freedom from the future. Everyone agrees Go needs polymorphism. So add polymorphism. But as we can't agree on contracts, we can just leave it out. We'll get 90% of the benefits upfront, and then we can wait and see what are the actual edge cases we run into with contracts, and decide where to go from there.


I think it's common, coming from the object oriented world, to expect polymorphism to be intrinsically tied to method overloading. But it doesn't have to be.

Please provide some examples. For example, I don't think polymorphism in direct variable references is all that good.

It leaves the Go language with more freedom from the future. Everyone agrees Go needs polymorphism. So add polymorphism.

This statement confuses me. Go has polymorphism.


If you go and look at the Map/Reduce/Filter, Channels, Containers, and Append sections of the examples, there isn't any use of contracts. All that is required for those sections to work is to declare a type parameter, and then allow that parameter to be filled by any type, but still typecheck that all of the variables with the same parameter are the same type at the calling site. There's no need to have a contract for which methods can be called on the type, because we don't need to call any methods or access any fields. In my experience, this fills most use cases. It allows generic data structures, and generic functions over those structures.
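
For example, a Map over slices needs nothing beyond the type parameters themselves; roughly (in the draft's syntax, close to its own Map example):

    // No contract: T1 and T2 can be anything, and the call site is
    // still fully type checked.
    func Map(type T1, T2)(s []T1, f func(T1) T2) []T2 {
        r := make([]T2, 0, len(s))
        for _, v := range s {
            r = append(r, f(v))
        }
        return r
    }

    // lengths := Map([]string{"go", "contracts"}, func(s string) int { return len(s) })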

If you need a little bit of behavior for those types, such as for a TreeMap data structure or a Sort function, Go's higher order functions make this possible without contracts, by passing in functions. The draft's Containers section uses this approach to construct a TreeMap. The SliceFn function, described in the Sort section, also uses this approach:

    func SliceFn(type Elem)(s []Elem, f func(Elem, Elem) bool) {
        Sort(sliceFn(Elem){s, f})
    }
Although this does use method overloading via contracts in implementation (via Sort), it doesn't need it. I expect that Go's existing interfaces (which yes, are a form of polymorphism, mea culpa) + unbounded parametric polymorphism are enough to solve 90% of Go's current issues.

The more complex cases are still possible to handle with parametrically polymorphic structs containing functions. Yes, that's bad code, but we'd be able to look for this use case, judge how often it occurs, and use its use cases as data to inform further discussions on contracts.

The only case I struggle to easily deal with is the numeric case. My solution doesn't allow generic numeric functions, although I think there's room to explore that in more detail.


If you go and look at the Map/Reduce/Filter, Channels, Containers, and Append sections of the examples, there isn't any use of contracts.

I see. I'm of the opinion that the Go team might be more cautious and introduce a more limited form of generics first, which seems in line with what you're saying.

In my experience, this fills most use cases. It allows generic data structures, and generic functions over those structures.

How would one make something like ConcurrentMap? Perhaps some operations on a map could be exposed and "overridden?"


It would be simple to write a ConcurrentMap.

What is more difficult is writing a function which accepts both ConcurrentMaps and TreeMaps. In Java, they would both inherit from Map. In this proposal(/Rust/Haskell) they would implement the same contract(/trait/interface). In my proposal... there isn't an elegant solution.

In some cases, you can write the function to be generic over an insertion function `insert func(T)`, if all you need is insertion, and then you can pass `treeMap.insert` as the parameter. If you really truly need to be generic over the entire `Map` interface, that is one of the cases that I claim only takes up 10% of the cases. In that case, you could define a type which contains all of the functions a Map needs, and then you could construct that and manually pass it in.

This can be viewed as a desugaring of contracts. One implementation that contracts could have would be to, at runtime, construct and pass a value containing the functions that the contract specified are possible.
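
Concretely, something like this is what I mean; the "contract" becomes an ordinary parameterized struct of functions that the caller fills in by hand (all names here are illustrative, sketched in the draft's syntax):

    // MapOps is a hand-rolled stand-in for a Map contract.
    type MapOps(type K, V, M) struct {
        Insert func(m M, k K, v V)
        Lookup func(m M, k K) (V, bool)
    }

    // CountPresent works with any map-like M for which the caller
    // supplies the operations explicitly.
    func CountPresent(type K, V, M)(m M, keys []K, ops MapOps(K, V, M)) int {
        n := 0
        for _, k := range keys {
            if _, ok := ops.Lookup(m, k); ok {
                n++
            }
        }
        return n
    }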


It's "typeclass", not "interface". Though it works similar and is, in fact, just a way to define a constraint.

As for your described way of desugaring the contract, it looks way too cluttered. It's exactly the reason why Reader and ReaderT exist in Haskell. Even though the proposal's contract syntax doesn't look very beautiful, it's still drastically better than keeping track of drilled structs of functions; the latter will become a mess really, really fast. So having a way to conveniently constrain the accepted types while having full-blown parametric polymorphism is essential for a modern language. Not sure exactly why anyone would be against that.


Sigh, I really need to read my posts before submitting. You're right, of course it's typeclass.

The whole point is to implement the bare bones of a parametric polymorphic system without constraining further development of the language. Additionally, it discourages complexity and requires explicitness, two goals of Go.

As far as my description of desugaring the contract, that is literally how Haskell implements typeclasses.


Well, yeah, Reader is that kind of thing.

I guess that the complexity people are concerned about is that once you implement parametric polymorphism, HKTs are inevitable, and that probably requires some brain flexing in order to get the meaning of a structure, which defeats the main purpose of being explicit and simple. Pretty much a deadlock. I now understand why the community is so at odds with parametric polymorphism as a concept, but I'm also sure that simplicity of use and convenience of writing a robust solution are the two things that require it. You can't keep your head in the sand and avoid HKTs when you want flexibility, and you can't allow that type of polymorphism while striving for what Go is trying to do.

While understanding both standpoints, though, I still hope that Go will get parametric polymorphism, as it's just so much more pleasant and convenient than carrying around a mess of structures to be explicit no matter the costs.


> Go has polymorphism

Polymorphism is often distinguished into ad-hoc polymorphism (interfaces) and parametric polymorphism (generics) – I'm guessing GP was referring to the latter?


You're right, in my head I was writing parametric polymorphism, and it just didn't make its way to my fingers somehow.


> Everyone agrees Go needs polymorphism.

Huh? This couldn't be further from the truth.


You're right, I overstated that. It might be more appropriate to say "The core development team has made it clear that they intend to add parametric polymorphism in some form".


The proposal has a very short statement about implementation. If you think about this very carefully, you realize that part of Go's compilation speed comes from being able to compile a single package into code without having to leak out all of your abstractions.

If you have contracts/generics then you need to have un-compiled code as part of your exports. Which is a huge break from the current approach.


>We believe that this design permits different implementation choices. Code may be compiled separately for each set of type arguments, or it may be compiled as though each type argument is handled similarly to an interface type with method calls, or there may be some combination of the two.

If the latter approach is taken then no un-compiled code needs to be exported.


Yeah, I'm very interested in how they plan to integrate this into separate compilation. My understanding from C++ and C# is that this is a hard problem.


In Rust, we end up storing the necessary metadata inside of the pre-compiled library, so that when you use it from another library, the compiler knows how to do the right thing.


D, Delphi, Eiffel compile just as fast, just to cite three possible examples with AOT compilation to native code as their default toolchain.


Why “Contracts” and not “Generics”? To make it less contentious, maybe, or to manage expectations?


Probably because they don't want people talking cross-purposes or confusing their proposals with what is implemented in other languages, as they call out in the intro:

"As the term generic is widely used in the Go community, we will use it below as a shorthand to mean a function or type that takes type parameters. Don't confuse the term generic as used in this design with the same term in other languages like C++, C#, Java, or Rust; they have similarities but are not the same."


Those four languages all have completely different implementations of parameterized types, but calling them all “generics” doesn’t add any extra confusion.

To the contrary, it’s actually helpful to use the same word for something with roughly the same uses (eg generic type-safe container classes) even if the details are very different.


A contract is just the feature that defines the abilities of the genericized types.

A contract in Go enables generic programming, or as others state it, "generics".

In C++20 there are concepts, which are very similar in constraining the generic type before template expansion: https://en.wikipedia.org/wiki/Concepts_(C%2B%2B)


Contracts in programming languages have a very precise meaning.

https://en.wikipedia.org/wiki/Design_by_contract


I don't particularly disagree. My point was more that given what the authors wrote in the quoted sentence, I thought that - rightly or wrongly - was likely the reasoning behind the naming decision.


Indeed, this could end up being confusing. Kotlin is adding a feature to the language with the same name (https://blog.kotlin-academy.com/understanding-kotlin-contrac...) and it has nothing to do with generics. Kotlin contracts allow functions to be annotated with information about e.g. the nullability, the type, or the number of times a thing will be called. The compiler can then use this to infer that code which otherwise would not have compiled is actually safe without that check. The main use case seems to be smart casts of nullable types to non-nullable, or of abstract types to concrete types.

Go uses curly braces. Lots of similarly braced languages (Java, Kotlin, Scala, Typescript, ...) have generics and type parameters. Maybe generics is not a bad name for this.


Maybe someone really wants to bother Bertrand Meyer.


golang likes to reinvent the wheel and call it something else. We've already seen it with exceptions (which they called "panics"). Contracts already mean something else (see contracts in C++, C#, D, etc.).

What they're proposing here are basically trait bounded generics. They just don't want to use the word "generics" because of the following attitude:

"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."


Man you Go haters LOOOVE to trot out that quote from 2014.

But you know what, go is easy to learn, and that's been awesome in my experience. I can (and have) hired people who have never written go, and they're productive in a week or two.


That's not really a good metric to target. They can be productive in a week, and because the language is very weak and verbose, you'll end up with a messy code base. I see this all the time at an employer.


....in Go?


I think that's an unfortunate way to see it. Ignoring the possible resistance to using more recognised naming conventions, and having gone around the houses with languages for good and bad, I'd go more along the lines of "software engineering is hard; the tools to do it well don't need to be".


Reality is complex, and having weak tools and languages just pushes complexity further down the line. It's quite evident from the extremely verbose, messy code bases that come out of using golang in even slightly complex domains.

The vast majority of the claims behind golang (e.g. "large scale development", etc.) are not only baseless, but practical review of golang code bases shows that it's actually a step backward, and those claims do not manifest.


Searching “trait bounded generics” reveals nothing generic. Only Rust results.

Did you mean ad-hoc polymorphism?


See how Rust or C# do generics, as well as Java or Scala (I think this feature is also called "Concepts" in C++, but I'm not sure it supports it yet).

You specify a type parameter, and then statically write out that it has to adhere to a certain interface/trait. This analysis is done at compile time, not runtime as with virtual dispatch.


They’re not the same thing. Contracts are a means to define constraints for parameterized types.


The counterpart of "Contracts" in C++ is "Traits" not "generics".


This is really bike-shed-y, but:

I'm disappointed they didn't go with «» or similar to set apart type parameters. The answer in the doc is that they "couldn't bring ourselves to require non-ascii", but there's an easy way to handle this without _requiring_ non-ascii characters: let "(type …)" and "«…»" be syntactically identical, and have gofmt change the former to the latter. Basically everyone uses gofmt all the time, so all the code everyone reads would use the more visually distinctive syntax, but it would be easy to type using ascii only.


Why not just << and >> then?


That doesn't work for the same reason that < and > don't work, right? They are existing operators in the language, so the parser would need unbounded lookahead. Using a different set of characters is unambiguous (and additionally is more compact).
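
The commonly cited example (paraphrasing from the generics discussions, not quoting the draft):

    a, b = w < x, y > (z)
    // Is this "a, b = (w < x), (y > (z))", two comparisons, or
    // "a, b = w<x, y>(z)", a call to a generic w instantiated with
    // x and y? The parser can't tell without type information or
    // unbounded lookahead.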


I think the examples at the end of the document really show the power of the proposal.

But, there are some aspects that will make code appear more complex or challenging to read. It will certainly make it more challenging for beginners.

The appearance of the code with type switches, and the issue with the idea of an Iterator and doing something two different ways, were the only parts that stood out to me besides the increased complexity.


"Although functions may have multiple type parameters, they may only have a single contract."

Anyone else find this limitation a bit disappointing? Seems like a somewhat arbitrary restriction that limits the usefulness of this feature. I hope it doesn't take another 10 years for this to be changed...


It looks like contracts are composable the same way interfaces are, so I don't know how much of a limitation this will be in practice.


So I am a bit unclear on this from the proposal.

Composing contracts is a slightly more verbose fix for allowing multiple contracts for a given type (just make a composed contract and specify that).

Using composed contracts, this is allowed:

    func Foo(type T PrintStringer)(s T) {...}

I read this as, while a function can have multiple type parameters, only one contract can be specified in total.

Not allowed (function uses "setter" and "stringer"):

    func Bar(type B setter, S stringer)(box B, item S) {...}

Maybe I'm misunderstanding though.

In other languages with parametric polymorphism, the real re-use comes in by allowing functions like Bar to be used for any combination of "constraint implementing" types.


> Maybe I'm misunderstanding though.

As I read it, while you can't do:

    func Bar(type B setter, S stringer)(box B, item S) {...}
directly, you accomplish the same thing via

    contract SetterStringer(B, S) {
        setter(B)
        stringer(S)
    }
    func Bar(type B, S SetterStringer)(box B, item S) {...}
so in practice it's basically the same thing, you just have to explicitly specify the contract the function conforms to via a composition of the two other contracts.


This is a good point.

So maybe my concern is more a verboseness issue rather than expressiveness.

That said, it would be nice if there was some commentary on whether an implementation like Swift’s was considered and ruled out for some reason. As it reads now, I stand by my original comment that this restriction seems a bit arbitrary.


I’d be very interested in that as well, and I agree it seems verbose simply for the sake of maintaining a one-to-one func-contract correspondence.

Maybe there’s some trade-off I’m not seeing here, though.


I think you're correct and I misread that part of the spec. That does seem to be an important limitation.


I cannot fully express the frustration I deeply feel with people constantly proposing changes to Go that will turn it into something that is no longer Go!

I know where this specific proposal is coming from, but I feel that Robert and Ian are being pushed by the constant noise made by people coming from other languages, people who, like those who made the horrible "try" proposal, seem to be trying really hard to ruin Go by turning it into yet another complex monster.

Not a day goes by without someone making a new proposal that is trying to change the very thing that made Go so unique and lovable!

All the proposals that have been made so far exist in several of the other popular programming languages. Use one of those if you really want the added complexity - leave Go just the way it is!


I am totally against generics; if you want generics, go to Python or somewhere else that allows them. The beauty of Go is that it is static, and generics are not strict. I agree with wybiral [1]. This only serves to add complexity that gains very little, and opens Go to being less strict and therefore less reliable and more problematic at compile time.

[1] (https://news.ycombinator.com/item?id=20555477)

edit: changed "strict" to "static"


Took me a while to realize that this is not about "contracts" the way this concept is generally understood, but Go's versions of generics.


I really see no reason for these. The difference could be easily seen in the types themselves. I've posted that elsewhere, but you could have

  func thing(stuff _.T, stuff2 _.k)

or

  func thing(stuff g.K)

And so on.

Also want to point out that there is a LOT of Golang code that is being shipped and used in the world and none of it uses generics. Do we really need them?


A lot of assembly code was being shipped before we had better languages. That's no reason to be stuck with a type system of the previous millennium.


I honestly think generics are like OOP: vastly overestimated.


> Type assertions and switches

I find it especially non-orthogonal that you can type switch on generic parameters as though they are interfaces. If I have a method that takes interface parameters, should I just always use generic (unboxed) parameters instead?
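
Roughly what I mean; the explicit conversion to interface{} below is ordinary Go, and my reading of the draft is that the switch also works on the type-parameter value directly, which is the part that feels non-orthogonal:

    func describe(type T)(v T) string {
        // Convert the type-parameter value to interface{} and switch
        // on the dynamic type, just as with an interface parameter.
        switch interface{}(v).(type) {
        case int:
            return "an int"
        case string:
            return "a string"
        default:
            return "something else"
        }
    }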


Why not extend the interface? It would be less of an interface and more of a generic, but it may provide an easier upgrade path, wouldn't it?


There's a whole section in the document literally titled "Why not use interfaces instead of contracts?"


How (and when) does a Draft Design become a Proposal to be discussed officially on GitHub?


tbh I just want sum types but whatever



