I wish I understood how people can say 'this will result in so much more weird boilerplate garbage code' as it solves a problem that has resulted in an absolutely ludicrous amount of weird boilerplate garbage code. You're not trading off generic code vs non-generic code. Problems are generic. For solving generic problems, you're trading off the generic code that best expresses them, for codegen, dynamic downcasting, or Ctrl-V. The first two are not simpler (in the Go way) than generics, and the third is an antipattern in every programming environment.
I’m a big fan of Go adding generics, but I can see some of what people are worried about. I don’t think it’s “business logic that would benefit from being made generic”, it’s that generics allow people to write new data structures that aren’t currently reasonable to write in Go. For example:
- Sets
- Trees/graphs
- Stacks/queues
- List-like data structures with different performance characteristics than slices
- Monadic data structures you find in functional languages, like Option (for dealing with the lack of a value), Either (one of two possible values), Future (basically either a success or an error, computed asynchronously), etc.
I’m sure we’ll see popular libraries providing all of the above, and more. Generics will make for a wider variety of programming styles in Go, which has downsides as well as upsides.
Go will likely get a bit more functional/declarative/high-level, a bit less imperative/procedural/low-level. It’s not about to become Haskell or Scala, but maybe a bit more like TypeScript or statically typed Python, a bit less like C.
Spending lots of effort designing your type system to match your problem domain (or rather, what you think is your problem domain) is a pitfall which was made less likely by "classic" Go because the available types were very straightforward. That also made it easier to refactor your code when you inevitably realized your initial approach didn't match the problem domain that well after all (or the problem domain changed, as it tends to do when you maintain an application over several years). And I totally understand that people are worried that having generics will change this.
Because generics rarely help with that. I love generics for writing libraries and for giving me libraries which are easy to use but my business logic is almost never generic. Most problem domains I have worked in have not been very generic.
The downside is they’ll make Go code less uniform. Do you represent nullability with a nil-able pointer, or an option monad? To transform a list, do you loop over a slice, or map/flatMap/filter over a more functional data structure? etc.
Pre-generics, Go has been a very limited language. On the downside it’s verbose and error-prone, on the upside it’s very uniform. I think generics will make it less verbose and error-prone, but also less uniform - stylistic differences from programmer to programmer will be greater, similar to other high level languages.
> Pre-generics, Go has been a very limited language. On the downside it’s verbose and error-prone, on the upside it’s very uniform.
Except that this kind of uniformity inherently tends to disappear in larger projects, to be replaced by increased complexity. Java was also designed to be a "simple" language, one trivially understandable by hordes of low-skilled code monkeys, and yet look at the added layers of complexity in many Java enterprise frameworks. More full-featured languages, when properly designed (especially, avoiding the feeping creaturitis of something like Scala), make it easier not harder to manage inherent complexity at a larger and larger scale of development. But Go is now basically stuck with its constrained feature set, and Go developers cannot easily avail themselves of these tools. This is why so many devs criticize this particular design goal; it's pretty much incoherent.
In my experience, that happens when people consciously manage the way they use their language, or have the trained instinct to do so unconsciously. At a certain scale choice of language doesn't really influence this all that much (as long as the language isn't Perl I guess, I've never seen that done cleanly.)
I've worked on really neat big Scala codebases and really terrible big Go codebases – a determined developer can make a mess regardless of language, and for some cases it's actually easier to end up in that place with Go than with a more expressive language with a more comprehensive stdlib. Go isn't well-equipped for a number of things, and if you have to do those a lot, generated code and workarounds like the pseudo-sets people build from maps really become liabilities.
From what I've seen as far as complex codebases are concerned, I'd say Go can be very readable for small-ish programs that lend themselves well to Go's opinionated standard library and way of doing things, but I'd definitely want to use something more expressive at scale or for anything that doesn't fit into Go's limitations well.
I couldn't disagree more. Everything I've read in Java was a huge messy pile of spaghetti code. C++ and C can definitely become unreadable very quickly. Rust, on the other hand, I find harder to mess up, although there are still lots of ways to do it. And then there's Golang: I've read so much Golang code that I'm surprised when I run into something that isn't readable. Happened to me once, obviously the code had been written by Java developers (factories lol)
> when properly designed (especially, avoiding the feeping creaturitis of something like Scala)
What I find interesting here is that Scala has the reputation of being a complex language, while its syntax and grammar are among the simplest and most straightforward there is: https://images.app.goo.gl/K1NTGUP4G9T6wfY48
I believe that this is what "scalable" means in the eyes of its creator: it offers very powerful abstractions to be adequate for mundane tasks (e.g. file-level scripting) up to complex systems and problem spaces.
Yes, as a consequence, people have created libraries and codebases with Scala that are inadequately complex and abstract for the problem at hand, but that's not something one should blame on the language but on the project/company culture. Same is true for any language: C++ templates can be abused in ludicrous ways, some of Java's design patterns make you wish inheritance would be forbidden, python's module imports potentially overwriting globals and other footguns.
No language can be dumbed-down so much that the resulting codebase becomes immune to some amount of "abstraction management", not go, not even python. Once this is understood and accepted, it becomes obvious that it's not worthwhile to castrate a language on the basis of "simplicity", because eventually this complexity will be warranted at some point in the future or for some aspect of the project.
I think the go designers did the right thing here, I don't think this will bring dramatic changes in how the language is used, and will actually help with "managing abstraction" where it's due.
Java code got complex after adding generics, not before. In a big Go codebase, even if it’s complex, the styling and patterns are the same, so it’s easier to follow. That might change to some degree with the addition of generics.
That's not true. Java was notorious for AbstractFactoryProducerArtifactFactory and awful inheritance-based patterns long before that.
You can't dumb down a language and expect the complexity of the problem space to go away (developers will have to cope with it one way or another), and you can't expect the programming language alone to be the sole judge of what's an acceptable level of abstraction (or architects would have been made redundant by now).
Wish I could upvote this comment more. This is the best summarized take on the introduction of generics in Go. It’s exciting, but frightening at the same time. Will this destroy the simplicity and the readability of Golang?
I'm kind of not sure what's necessary that wasn't there before. Go has:
- List
- Array
- Dictionary
- Set (map with empty value)
- Stack (easily emulated with list)
- Queue (channel)
Other than the ones mentioned before, the only ones I've ever used were a priority queue and sorted map (AVL-tree), and even those were rare. Other than these I used others that were so application-specific, that it made no sense to be generic. Based on this there's no strong case for why generics have to be in the language. Not hating, but I'm not convinced either way.
Sets tend to have methods like difference, union and intersection - with a map[T]bool, if you want to find the difference, union or intersection of 2 sets, you have to do all of that with much more verbose (and error prone) loops. Also, the type being a set carries a semantic meaning, with a map[T]bool it’s not obvious from the type alone that we only care about the keys, not the values.
There is a slight difference between the two. With a map[T]bool, you can do something like:
if myset[thing] { ... }
With a map[T]struct{}, you end up with:
if _, ok := myset[thing]; ok { ... }
Sometimes, the saving in space is TOTALLY worth the extra verbosity. Sometimes, the slight clarity from "non-existent keys return the zero value" wins.
I also don't want to be hunting down whoever creates a map[T]bool and doesn't leave a comment explaining it's a set ;)
Hopefully we won't have to worry about it for much longer regardless, since I imagine we'll be using proper sets (with methods) some point in the near future.
I would say that "exists in the set" is determined only and exactly by "mySet[thing]" returning true, not that there happens to be a key in the map.
But, that's one of those "requires extensive documentation" (and ideally, wrapping in a custom type and provide methods for checking and manipulating the set(s)).
How? Why? If you use the map[T]bool, you obviously need to pay attention to the bool value, even in a range. Whereas in map[T]struct{} you don't need to pay attention to the values.
In either way, even with that type of data type underlying a set, I would (probably) provide a functional API to interact with it.
Yes, as I said, if you use a map[T]bool, you need to use something like:
for k, v := range m {
    if v {
        // k is in the set
    }
}
But it also allows you to use "if m[key] { ... }". It is literally a trade-off for what convenience you want. And in both cases, you should (probably) wrap an abstraction around the raw map.
And now with generics we have a way to make this decision/optimisation once, and have everyone else benefit from it without having to be aware of the details!
There's a lot of developers that misuse/shouldn't use any powerful feature. If the solution is to give them a nerfed language, then you also nerf it for the users that would use the feature well in productive, useful ways.
Yes, it’s a trade-off. In an organization with thousands of developers the right trade-off is likely different than an org with 20.
What I don’t understand is why people insist on making Go like every other language that already does what that are looking for. You want a language with generics, purely functional blah blah blah, great there are tons of choices, use one of them. Why insist on reducing the diversity in language choice, and trade-offs?
This is what policies are for, not entirely new languages: if you don't like generics and don't trust the end developers at your organization to not use them in stupid ways you should maybe (I say "maybe" as this problem just seems so lame of a problem to have: if your army of end developers can't be trusted then you should fire them and hire some real developers) have a way to turn off generics that isn't "use a language that doesn't support them at all" (which I will note is itself nothing more than a policy decision anyway).
I’m not talking about generics specifically. I’ve been coding for decades. Go is by far the easiest language I’ve used when it comes to jumping into an arbitrary code base and making sense of it.
Again I ask, why insist that go become the same as all the other languages when you can just use one of those? Why demand less diversity?
Which part of this is not generalizable to literally every conceivable complaint about a language whether it makes sense or not? Why would I want diversity between languages where you do have to copy-paste lots of boilerplate, and languages where you don't? If there was a good, simple-in-the-Go-way solution to this problem other than generics, feel free to suggest it, but repeating 'why care about anything ever' is just pure noise.
As I've said, this isn't about generics specifically. There are a whole lot of us who find Go code bases far easier to read and understand than most other languages. We believe there is value in the language's simplicity, and we want to keep it that way. We've written Go for years, and haven't missed many of these features you want. In fact, we think there is a decent chance that Go code bases are easier to jump into and understand BECAUSE of its simplicity.
You have many many choices that work the way you want. Go use them, why try to dictate to us that we can't have the choice we want? We disagree about the value of these things, and their trade-offs. That's what diversity is for, you have languages that work your way. Why insist that one of the few that doesn't has become like all the others?
You are posting a criticism of the feature, in response to a comment that is an answer to that criticism.
If your code did not use generics, it can continue to not use generics. If you did not interact with any libraries whose need for generics was apparent, you can continue to do that too and the libraries you were using aren't going anywhere.
But just because you don't interact with a problem doesn't mean it doesn't exist, and the problem has historically been solved with dynamic downcasting, code generation, or copy-pasting, all of which result in codebases far harder to read and understand than most other languages. For these problems, you are not trading off generic code vs simple code, you are trading off generic code vs incomprehensible code. It is great that you have never needed a tree-map instead of a hash-map, but many have.
Go's simplicity is in many ways it being the spiritual successor to C, and even C has generics nowadays because they are an enormous value add.
You’ve missed the point. There are trade-offs that you are ignoring. There is no hard science that proves either of our opinions are right. I believe diversity is the better path in the face of such uncertainty, I don’t understand why you are against diversity here, and you continue to not directly answer that question.
“ If your code did not use generics, it can continue to not use generics. If you did not interact with any libraries whose need for generics was apparent, you can continue to do that too and the libraries you were using aren't going anywhere.”
This shows a profound lack of understanding of software development at scale.
Of course I'm not directly answering a polemic gotcha. If I work in Go, and I have a pain point in Go, and I want that pain point to be addressed, that's not being 'against diversity' or being 'for diversity'. It is being against wasting lots of time for nothing. Why are you against productivity? You continue to not directly answer that question.
Unlike you, I’ve answered your question many times. You are only looking at the benefit for a narrow use case without accounting for its holistic costs. We have a difference of opinion about those costs, and there is no hard science that can prove either of us right. I have acknowledged your point of view, and the possibility that it could be right. You have not reciprocated in any way, or even acknowledge that you understand the point I’m making.
Diversity doesn't have anything to do with holistic costs. You shift between whether you're talking about holistic costs and whether you're talking about diversity depending on which part of the point you're making.
Can't you and yours just freeze at Go 1.17? That way, you get your diversity. Pretend every code base that's using Go 1.18 is using Java or something. A lot of us who write Go every day think generics aren't complicated and are tired of repeating ourselves.
This shows a profound lack of understanding of software development at scale. Also, as I've said many times in the thread above, I'm not talking about this specific implementation of generics.
> why insist that go become the same as all the other languages
You're arguing a strawman. No one insists that Go become the same as all the other languages (which are themselves nowhere near the same as each other). People are just asking for generics.
C++, Java, C#, and Haskell all have generics and are still wildly different languages even with respect to their approach to generics.
Policies do work if you have a process to enforce those policies. Without policy how do you define the process?
"Good intentions never work, you need good mechanisms to make anything happen."
http://nickfoy.com/blog/2018/4/7/good-intentions-dont-work
I also predict it will be practically useless since those five libraries that you absolutely depend on will all be rewritten to take advantage of generics, and then you're left with the lose-lose situation of "maintain a fork of the last non-generics version myself" or "get stuck on the last non-generics version and never receive security updates again".
We change the language; people who formerly enjoyed the language now have to put up with a language they rather not.
Their only option is to quit programming since there are no programming languages they enjoy left.
This. I joined a Java- and Scala-centric team, yet Go is continually harder to avoid through no fault of our own, so I’m glad we’ll at least have a less obstinately broken dialect.
I've been coding for decades. I've written large scale systems in functional languages. I was a believer, but after many years of experience, I didn't find it to add a whole lot of value. I didn't find that there were fewer bugs as promised, it wasn't faster to develop and ship features, and most importantly, it was hard to jump into other peoples code and make sense of it.
You might have a different opinion, that's fine, you have tons of languages that work the way you want. That's what diversity is for. Yet so many of you want to remove that diversity for a reason that I don't understand. There is no science that has proved which of us is right, so the best option in my opinion is to let diversity flourish. I can't understand why you all are against that.
We must be talking past each other, and maybe we don't disagree all that much. I'm not talking about generics in this form, and I think the Go team did a good job in their approach and implementation. However, you can read in this thread many people that are demanding more, specifically with the generics implementation. It's not good enough because you can't create monads, do purely functional programming, etc.
I'm hopeful that this implementation is hitting the sweet spot, and that the Go team won't chase the demands to make Go like many of the other languages. I think the biggest win for Go is not having class based inheritance. Fortunately, I haven't seen anyone demanding that.
The syntax isn't the only driver of language popularity. Ecosystem, tooling, major sponsors, large precedent projects, etc. all enhance a language's adoptability, and big shops want to pick a language with low risk and a large supply of coders.
That was one of the goals and virtues of Go imo; the language was nerfed. It made the hard things easy, and prevented the really hard things from being implemented in silly ways.
It guarantees that when people do things in silly ways, their silliness is splayed across so many characters and lines that you'll never have the time to change an appreciable portion of them.
Languages are very powerful in shaping how programmers, new and experienced, express themselves. Limiting expressivity is how we got, e.g., from assembly programming to structured programming. Same with memory safety, garbage collection, etc.
Goroutines and channels seem comparatively simple compared to the representational symbolic changes involved with generics, which make the code look and read more like Greek. The beauty of Go is (was) largely in its simplicity and python-esque (or maybe better than python) readability.
With generics, reading and reasoning about code becomes [even] more challenging.
Personally for me, at this juncture I'm finding that for new projects I might just as soon spring for Java or maybe even prefer it to Go. For the straightforward mvn-style dependency management, if nothing else.
It's all moot now, though. There's no going back, this puppy is cooked!
I spent 4 months in '21 rewriting a golang backend to properly use channels for concurrency.
The previous developers did misuse it and left a steaming pile for me to find.
However contracting pays well, so I don't care.
You might want to reflect on your reasons for jumping ship because your tooling gained a new feature. Sounds rather irrational to me.
Exactly. I work with rust and ocaml all day long, but reading a Golang codebase in contrast has always been such a pleasure because it’s just so easy to read. I hope generics don’t change that. There was really nothing like Golang out there.
Go already had generic structures anyways. Go maps supported generics. Or did you always use map[interface{}]interface{} everywhere for the purity of your anti-generics position?
The built-in types that supported generics were limited to the essentials; slices, arrays, maps, etc. Those basic building blocks were obviously sufficient to allow Go to reach the level of success it has. It's not an argument to allow for unlimited generics.
Problems are not generic. "I need a binary tree over arbitrary comparable types" is not a problem. Problems are expressed in terms of domain concepts that are unique to the problem space, the organization, the business need. Some _implementation details_ or maybe patterns that can be used to solve those problems may be generic. That's separate.
That’s true, a binary tree isn’t a problem, it’s a tool for solving a certain class of problems. Since you can’t build reusable tools in pre-generic Go, (and, to be honest, only a limited set of tools with Go’s new generic system), programmers are forced to reinvent half-baked ad-hoc solutions all over their code base. I once read an article about a C codebase that contained dozens (!) of specialized linked-list implementations. Large Go codebases, from everything I’ve seen, aren’t all that different.
Specialized collections are not necessarily bad. ClickHouse has dozens of hash table implementations, each tuned to a specific use case. It's what you do when you need something to go very fast.
Several specialised implementations of a concept with different algorithms is quite different to several implementations of the same algorithm because the language does not support their reasonable reuse. Rust gives you “specialised collections” via monomorphisation.
And that is the exact joy of using a language like C or Go; you need to sort your things, just add in two pointers and make it a list you sort on. Eight or now sixteen extra bytes and you have a good tuned data structure. I don’t want to use a bunch of generic structures, I want to use the ones that solve my particular use case best.
I always say "Just right is all wrong". The time and standardization cost is often just too much for the small gains. I'd much rather have everything be reusable, abstracted, and one size fits all, aside from the tiny fraction that matters enough to optimize.
Good points. But for myself, I like it when the structure of the code and data structures exactly fit the need. Like the old quote, “show me your header files and hide the C files, and I will understand your program.”
"I need a priority queue for X" isn't generic, and "I need a priority queue for Y" isn't generic either. But well if you're doing it a lot, would be nice to not just reproduce the same bugs over and over again, yeah?
Everyone's worried about Go turning into Java and I can't speak too much to the utility (I have written maybe 50 pages of go in my life) but anyone who uses Python's standard lib can point you to 10 different things that are nice to have bug-free versions of and that "just work" thanks to Python's "generics".
It depends on the level of abstraction you're addressing. One level may be "i need to store things with a quick search function", another may be "i need a storage of ordered names and expiry date for things", etc until you get to "I need a binary tree which orders by comparable types".
Seems like you are not aware of the kind of problems they solve. Systems engineers have been writing generic programming code for years to build databases, message queues, file systems, etc where you do need data structures that support arbitrary types/functions.
It’s not a satisfying answer, but the truth is that there just isn’t that much Go code out there suffering from a lack of the kind of basic generics being introduced. It exists, but it’s not the norm.
Instead, what is likely to happen is, a lot of problems that generics can be applied to, they more often will be, simply because the option is on the table, regardless of whether it really improves anything. Go taught me to stop worrying and trust the For loop, and now we’re probably going to have dozens of utility libraries with Map and Reduce routines that people will use for extremely simple loops that don’t need them, probably resulting in slower code that takes longer to compile and needs a slower, more complicated optimizer to make up for it. In my opinion, the antithesis of what makes Go great.
Not saying generics is all bad, or that reduce is bad, or anything like that. But once a programming language has a new way to do something, you can be damn well sure people will use it, often even to their own disadvantage. Go doesn’t have pattern matching control flow, so the best degree of soundness you can pull out of an Optional type is ugly visitor-pattern type interaction. And yet, I’m worried routines will superfluously return Optional types anyways, just because you can, even though it’s not really any better than just returning a type and a boolean.
And yes. Actual code that actually needs generics for real will benefit, undeniably. Data structure and generic algorithm implementations will benefit greatly. The ugly sort package can finally be cleaned up. That’s great. But, most Go programs simply aren’t suffering from a lack of basic generics. Operative words: most, and basic generics. Many programs simply don’t need any data structures other than a basic growable vector type and a basic hashtable, and Go has both and they are built-in and work generically. Yet the more complex problems that C++ templates and Rust generics solve will probably not be aided at all by Go generics, which are simply much more limited in scope.
And despite that, this is quite an increase in compiler complexity. I know many are thinking “how hard could it be? Tons of languages do it today”—but Go isn’t really like all of the other languages. The simpler compiler is a huge benefit to the toolchain and the whole ecosystem. Arguably, Go’s simplicity enabled them to experiment with things that wouldn’t be very easy to do in other languages, with regards to its GC and threading model. They’ve got it figured out of course, but it really isn’t a simple walk in the park. Any single aspect has a lot going on.
My hope is that the relatively mature ecosystem of Go will somewhat encourage people to keep their code simple and use generics sparingly, but with such a major shift in the language it’s hard to imagine what is “idiomatic” won’t change over time. I feel Go’s brutalist simplicity is its strong point and that it will just be an inferior option if you try to write code in it like C++ or Rust.
I really, really try to ignore my own impulses on Go simplicity, because I don't understand it and lots of people who say they do understand it are prone to saying the same things which are different from the things I say, so there is clearly an Outside View to take. But I cannot help but view this as a case of special pleading. You have this great spiritual successor to classic C, and then there's also channels tacked on, with native syntax, that are the least pain-in-the-ass solution to standard problems like iterators. So you get lots of people using channels like iterators. This is a problem at several levels, but I never see anyone complaining about it - not even Go's detractors, who are instead focused on where the stdlib shunts its complexity to. Repeat for a few more idiosyncrasies. Even the basic collections are guilty of this - how many people use an explicitly deduplicated array, or a hash table to struct{}, when what they want is a set?
The problem you're describing already exists, there's no preventing it or getting rid of it, and generics are tame by comparison - at least they semantically mean exactly what the developer intended them to mean, don't come with non-obvious performance downsides, and don't clobber existing concepts like error handling.
As for optionals vs separate bools, love 'em or don't, but yes, there is a downside, which is in my view possibly Go's only regression from C: no longer being able to stick the function call in the if-statement directly without additional fanfare.
I don’t think Go channels really panned out well. However, I do think that Go’s thread model and scheduler have turned out great. I think channels could’ve made sense, but they wound up being both more complicated to use and less performant than I think was hoped for. However… channels aren’t a solution for iterators. They’re a solution for communicating between threads. Go, like C, simply doesn’t have anything geared towards solving the problem iterators solve. Superficially they can fulfill some similar needs, but personally I feel they’re both practically and philosophically not in tune with that use case, even though you can certainly make it work in some cases to surprisingly decent effect.
Now as for the rest. Generics don’t automatically make code slower. In fact, in some places, the practical effect should make code faster; for example, the sort package currently defeats escape analysis even though it shouldn’t, and a monomorphized version should not suffer that consequence. Also, obviously, the fact that it removes runtime dispatch will also lower overhead. I understand that.
However, generics DO have some consequences:
- Every part of the toolchain—the compiler, linters, etc.—needs to understand generics and handle them properly. This is a non-zero cost; generics-heavy code will have measurably slower compile times. Go compilation times are fast, and that’s a feature.
- Generics improve the ergonomics of certain patterns that are absolutely slower. It is totally possible for someone to write a “map” function today for every type they could ever want, but it’s not really practical, and since it forces you to just write the loop out anyway, it really begs the question why you would bother in most cases. With generics, it should be relatively easy, but without aggressive optimization, it will be less efficient than the simple For loop. The map call and closure would need to be inlined for the optimizer to be able to get back to the same point. Having to do this, again, will slow down compilation more.
Honestly, you’re right; it’s far from the end of the world. However, I am pessimistic because to me, it’s unclear the costs of generics will be outweighed by the benefits of generics, but once the cat is out of the bag, it’ll be hard to ever go back on it.
Personally, I believe there are other improvements that would’ve been more valuable to consider first, like, personally, sum types and basic pattern matching. Sum types feel like something that, while it would have costs, has a potential to benefit a wide variety of programs and make certain code less error prone in a way that I don’t expect generics to.
As far as I know, that’s only for interfaces, and it’s more like a union and not a sum type. I’m talking about sum types for structs, like Rust has with enum.
(I realize I’m replying ridiculously fast here, but I just happened to click over to the comments in the first minute. Dumb luck.)
Of course, I could just use Rust, but it’s much harder to make idiomatic Rust libraries than it is idiomatic Go libraries, and I think that fact stunts the ecosystem a bit.
Go may encourage you to use a map to an empty struct for a set, or a weird if statement syntax that contains a full statement instead of a more elegant looking match or for expression, but I find its little quirks to be manageable and without a terrible amount of consequence. Generics will probably bring some good, like the end of needing weird slice tricks, but I’m still concerned it won’t be overall worth it.
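For readers unfamiliar with the quirks mentioned, both fit in a few lines — the map-to-empty-struct set, and the if statement that embeds a full statement before the condition:

```go
package main

import "fmt"

func main() {
	// A set as a map to the empty struct: the values occupy zero bytes.
	set := map[string]struct{}{}
	set["a"] = struct{}{}

	// The "weird if statement syntax": a statement, then the condition.
	if _, ok := set["a"]; ok {
		fmt.Println("a is in the set")
	}
}
```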
I don't know who is saying it, but you can certainly find GitHub repositories of people who have been playing around with generics creating exactly that. Presumably it's people being silly, or playing around to see how far they can push the feature under experimental conditions, but developers have also been known to do unfathomable things in real code, so I can kind of understand the sentiment. Channels led to a lot of weird and garbage code when they were new and shiny, when developers felt they should use them for the sake of using them, so the idea isn't unprecedented either.
Generics is the big news, but the fuzzing support is amazing too. I added a fuzz test to an app I have in about 5 minutes. It was no harder than adding a normal test.
I expect to see a huge surge in the use of fuzzing techniques, which will benefit projects across the board.
I'm very happy to see fuzzing added to Go, and I don't write Go at all (or ever plan to). Bringing concepts like this to the masses is a win for everybody.
I don't consider those kinds of people engineers at all. They're people who have learned a syntax and some patterns and, thanks to their human brain, can adapt them slightly to certain situations. But they're not engineers.
Engineers constantly ask why, explore the unknown, and are aware of almost all possible solutions available to them. Using this knowledge, they select the correct tools and build the solution.
What you've described are skilled labourers.
I'm harsh with my judgement because I believe our industry has a lot of "statically minded" individuals who stay in their lane, never going outside their boxes to explore what else is out there to help them solve the engineering problems they're faced with... and it's pissing me off :)
Engineers are people who can create a solution to a problem in an efficient and effective way. Specialists who know every detail are craftsmen who can create the best possible solution but they aren't necessarily good engineers.
> Engineers are people who can create a solution to a problem in an efficient and effective way.
How do you create an "efficient" and "effective" solution if you're not aware of as many options as possible, i.e. fuzzing for security vulns?
How is your solution "effective" if I can throw a bunch of Unicode at an input field and crash your application because the "engineer" didn't know what fuzzing is?
I've seen a common pattern of specialist devs who can write loads of code and complex tests, but never actually talk to a user or check if the software is doing what the users really want.
Sure is a lot more effective if they're good at communicating and actually meet business needs...
Not every application needs to have 99.9999999% uptime. There is plenty of honest to goodness productive output coming out of "shit" codebases that just tackled one use-case after another, never giving a care about some weirdo dumping in a bunch of unicode.
I might use fuzz testing to test some validation logic I will add later on, things like strings having to match a certain pattern. Regex is Klatchian to me, but the previous guy left behind a big set of expressions that are currently used in validation; using fuzz testing I should be able (if I understand it correctly) to find gaps in those expressions.
I knew I needed some extra sanity checks around input parsing and around handling shorter-than-expected buffers. Fuzzing the function that parsed input surfaced panics and other errors that I was able to fix by adding bounds/sanity checks in the right places.
This will no doubt be overshadowed by Generics, but Workspaces[0] are going to be incredibly useful for companies using a monorepo with shared packages.
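For reference, a workspace is driven by a go.work file at the repo root listing the modules to resolve locally (the module paths here are hypothetical):

```
go 1.18

use (
	./services/api
	./shared/logging
)
```

With this in place, the api module's imports of the shared package resolve to the local directory instead of a published version, with no replace directives cluttering each go.mod.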
Finally! Helps with both monorepos and multirepos, as illustrated in the tutorial you've linked.
(After going through a bunch of associated docs, I'm under impression that their quality is going down, slowly. At least when comparing it with the stdlib docs' quality.)
In celebration of the generics release, I was playing around with supporting Optionals via generics. If anyone's interested, I'm happy to make this a real project.
Use `New` instead - it's the idiomatic name for a constructor in Go. Drop the `Optional` part - the module name suffices - the caller will see: `opt.New`. Personally I just call the module `optional`, I think it's clearer: `optional.New`.
func (o *Optional[T]) Unwrap() (T, error) {
1. `Unwrap` reminds me of Rust's `.unwrap()`, which panics. Seems confusing.
2. There's no error here, the situation is equivalent to a missing key in a map - it's enough to return a bool.
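Putting the suggestions together, a minimal sketch of what the API might look like — names as proposed above; this is illustrative, not the linked project:

```go
package main

import "fmt"

// Optional holds a value of type T plus a validity flag.
type Optional[T any] struct {
	value T
	ok    bool
}

// New is the idiomatic constructor name; callers see optional.New(v).
func New[T any](v T) Optional[T] { return Optional[T]{value: v, ok: true} }

// None is the empty Optional.
func None[T any]() Optional[T] { return Optional[T]{} }

// Get mirrors a map lookup: the zero value and false when empty.
func (o Optional[T]) Get() (T, bool) { return o.value, o.ok }

func main() {
	a := New("hello")
	if v, ok := a.Get(); ok {
		fmt.Println(v) // hello
	}
	_, ok := None[string]().Get()
	fmt.Println(ok) // false
}
```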
It is just an assumption on my part; I figure that the cost of pointer chasing is generally going to outweigh any disadvantages. It can also help enable optimizations; for example, slices and maps that do not contain pointers do not get scanned by the GC ([1]).
I fully agree with your approach. The important thing is that the user now has the choice to use a pointer or not: they can always use a pointer to the optional if they want to use pointers.
Pointers to pointers are generally something you try to avoid if possible.
Agreed on performance being up in the air. Tbh, I like the non-pointer option because using a pointer was causing the implementation to feel a bit weird. This cleans up a bit of the handling that was annoying (for example, in the old implementation Make(&"something") didn't work, because you can't take the address of a constant without first making a variable).
distinguishing `Some(null)` and `None` is often considered a feature of Optional ;)
to use a tired example: when getting a value out of a map via some `myMap.get(key)`, you may want to distinguish
"not present" = `None`
and
"present, with value null" = `Some(null)`
the right solution is to just not have nulls in the first place, then there's no problem ;)
This looks kind of cool, but I think there's no version of this as a library that could be considered idiomatic(TM), after all "a little copying is better than a little dependency"
You can, and in fact that's how idiomatic Go has handled this up until now. But it only works in that specific case, and even then there's a small issue with readability - if you're not already familiar with this pattern, it's not immediately obvious that the string is strongly coupled with the bool. An Optional is very explicit about that.
The problem becomes clearer once you venture out of this exact case. How would you embed this pattern in a struct?
Like so?
type A struct {
value string
valueOK bool
}
Or maybe with a pointer?
type B struct {
value *string
}
This option is the most common one in Go code today.
You can perhaps see the issues already - these are awkward, neither properly communicates that what we want is optionality.
But the real issue is that now that these are in a struct, nothing prevents us from accessing the value without checking if it's valid first:
a := A{value: "", valueOK: false}
// oops
fmt.Println(a.value)
a := B{value: nil}
// oops
fmt.Println(a.value)
This wasn't an issue in your example - if we attempt to access only the value and omit assigning the bool, the code will not compile.
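To spell out that compile-time check, with a hypothetical lookup function returning the value-plus-bool pair:

```go
package main

import "fmt"

// lookup returns a value plus a validity flag - the pre-generics idiom.
func lookup() (string, bool) { return "hit", true }

func main() {
	// v := lookup() // does not compile: assignment mismatch (2 values, 1 variable)
	v, ok := lookup() // the bool must be bound (or discarded explicitly with _)
	if ok {
		fmt.Println(v) // hit
	}
}
```

Once the pair moves into a struct, that forcing function disappears, which is the point being made above.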
I've used optionals in many languages, and this is the first I've heard of it. What else do you really need except map and... flatMap (aka `then` aka `and_then` aka `>>=` aka `bind`).
You misunderstood - I saw them in codebases that were new back when .NET 2.0 was new.
To be more precise, it was when .NET 3.5 was new (2008). That brought a lot of visibility to Nullable<T> because it was featured prominently in LINQ. But ?. didn't come until C# 6 (2015). I was wrong about ??, though - that one was added along with Nullable itself.
My main fear with the introduction of generics was the lack of stdlib support. I know they want to play it safe and are planning to change this in future versions, but the last thing I want is that I have to download some github.com/foobar/... library for every common sense generic type I might want to use.
Generics will drastically improve datastructure libraries.
That said, for people worrying about overcomplication, fortunately methods can't have type parameters. That means ergonomic monads are not possible to implement, and we'll most probably not see the whole functional story play out in Go.
I'm curious why you say fortunately? I find it quite a downside that method type parameters are missing. In fact it's quite confusing, because there is nothing that indicates a constrained interface can't be used in place of a regular one (until you try it). Personally, I think they should be included simply because they are a natural consequence of the generics syntax, and it's much more confusing without them. Also, they increase clarity and type safety, which are two of Go's main advantages.
Also, type parameters were supposed to be included, but were dropped because it made the implementation more complex.
This is quite tricky to implement properly if the compiler doesn't do some form of specialization. (Basically, you need a different vtable entry for each instantiation of a generic method, or you need a runtime lookup to find it). But given that the Go compiler is doing complete specialization, I'm surprised they left this out.
Generic methods (on classes) are the way to emulate first-class polymorphism (roughly, passing around type constructors) without explicit support in the language. It's a limitation that will eventually catch up with you.
Yes. I'm keeping up with the developments in functional languages, and many of them have lots of ways to do the same thing, with the "recommended" way changing once every 2 years.
I intentionally choose Go where idiomatic code 5 years ago is mostly the same as idiomatic code now. Where no matter who writes the code, it ends up being fairly similar. Where I can easily dive into any open source library, understand the code base in 10 minutes, and make the fix I need. Where there's little-to-no bike shedding.
I understand others might have different preferences, but as another comment mentioned, there are a bunch of languages satisfying those preferences very well (which I like very much as well overall, but I don't like using them for serious day-to-day coding). No reason to try to make everybody happy with a single language.
Besides, I also think monads aren't a very good abstraction to use in most of your day-to-day code, so I'm happy I can avoid code using them.
> Besides, I also think monads aren't a very good abstraction to use in most of your day-to-day code, so I'm happy I can avoid code using them.
On the other hand, I think people are stuck in Monads all over the place without even realizing it's a Monad or having the language capabilities to work with them ergonomically.
Definitely! I'd be worried though that functional developers coming to Go (for whatever reason) would try to write Go functionally. That will fortunately be very hard thanks to this limitation.
Nothing specific against functional programming per se, but as you said, Go's not the language for it.
None with the same performance characteristics, ease of use, and ecosystem size. I'm reasonably competent with Rust, but sometimes I don't want to bash my head against the borrow checker or litter my code with copy/clone. Go hits a nice sweet spot, my main quibbles are that there's no native iterators/comprehensions or sum types with compiler checked exhaustive matching. With generics we can get libraries that allow for more functional style programming, and golangci-lint has a linter for exhaustiveness.
Insanely fast compiler. Great type system. Functional programming at its best but still relatively pragmatic and easy to mix in some imperative code if need be.
It is a bit more complicated than Go and the ecosystem is maybe a bit more patchy but other than you will have a good time.
The compiler is fast in that it compiles code quickly, just like Go, and the execution speed of the produced programs is similar to Go's (depending on what abstractions you use).
It seems there are a bunch of languages in the fast but not quite as fast as C/C++ category: Java, C#, Go, OCaml, SBCL (Lisp), Haskell
Most JVM languages (Kotlin/Scala/Clojure) including Java :) In fact Kotlin/Java has better peak performance than Go. The main advantage for Go is the reduced memory footprint.
The thing I hate about the JVM is that I'm always trying to tune it. So many options and knobs to turn. With Go I just execute the program; no tuning necessary.
There are many benchmarks showing Java faster than Go. The trick is picking one that people agree is representative.
It shouldn't be surprising though. "Peak performance" doesn't include start-up time, and ignores memory. If you discount both those, there's no reason that JITted native code wouldn't be faster than a runtime that includes goroutine scheduling.
Scala is pretty close to Go in terms of popularity and I would argue that it has a much bigger ecosystem due to the JVM. It is also pretty much a typed python in terms of ease of use, and the JVM has stellar performance — other than small running scripts, for many kind of workloads Java’s state of the art GC will have better throughput than Go’s.
Scala is on its way to dying, actually. I worked for a couple of years with a lot of services in Scala; they were all replaced by regular Java or C# over the years. Scala missed the train, and it's more and more difficult to find people who want to work with the language.
It's a stagnant language that will not get more popular, it peaked.
> It's a stagnant language that will not get more popular, it peaked.
I can't speak to whether it's "stagnant", but at least as a percentage of questions on Stack Overflow, the claim that "it peaked" checks out. The peak for Scala was late 2016. Go surpassed it in 2020.
It just got a huge revamp with Scala 3; please do check it out if you haven't yet. It fixes some pain points and adds many exciting features, e.g. it can be null-safe with a single compiler flag (basically, type T will not be able to represent a null, and a Java method that returns T will have a signature of T | Null). In great hands it can be insanely productive and well-maintainable, but of course I know that handing it to developers who treat the project more as a learning/experimenting exercise can cause quite some damage, due to the language's power.
They just released Scala 3 this year, so hardly stagnant. The community seems to be promoting more pragmatic functional approaches recently, which I think is going to make Scala much more popular in the future.
I understand not keeping up with it, but "it's on its way to die" is just spreading FUD. It's more popular than ever. Disney+, Lego, Canada Post, basically every large bank; plenty of people are using it and adoption is increasing.
What metric do you use for popularity? I'm more involved in the systems side so I'm definitely over exposed to Go projects. I would have considered Scala to be at least an order of magnitude less "popular" than Go. Kafka is the only open source project in Scala I can recall running into but there's probably a whole enterprise world I don't see.
For open source projects, it’s very popular in the “big data” world. Kafka, as you mentioned, but also Spark, Flink, Summingbird, Scalding, etc.
And yeah, it’s a pretty popular backend language at some large companies. Twitter, LinkedIn, Netflix, the Guardian, Starbucks, AirBnB, Coursera, AT&T, etc. all make significant use of Scala.
Go is certainly bigger, but Scala has plenty of adoption too.
A small note on Redpanda: it's not a `rewrite`. It is an entirely new storage engine — different consensus protocol, different on-disk format, different internal RPC, no external dependencies, and so on. In fact, when I started it, it was using flatbuffers as the API. What we did later to make money is add a Kafka-API translation layer, but in terms of code size it's a very small part. The way I'd recommend thinking about it is as a new storage engine that is compatible with Kafka clients. https://github.com/redpanda-data/redpanda/tree/dev/src/v/kaf... - this is basically all the Kafka-related code; the rest is net-new design.
Rust has first-class functions. Not only that, but it also has immutability by default, and ADTs with pattern matching. If OCaml is functional, why wouldn't Rust be?
Because of lifetime of captured variables in closures, mostly.
You can't really pass around functions freely like you would in Haskell/ML unless you start Box-ing/Rc-ing everything. But then it's not really idiomatic Rust anymore, IIUC.
Rust doesn't really have first-class functions - you can't take an arbitrary snippet of code and turn it into a function value, because sometimes it will no longer pass the borrow checker.
Well, one big one is lack of tail call elimination. If you can't write even a simple tail-recursive algorithm in the language, it's not a functional programming language.
A bit offtopic here, but it's a bit weird to see this argument that using .clone() in Rust is "littering" the code. The feature is there for a reason, and using it will still give you vastly more efficient code than in most other languages; quite possibly including Go.
Sure it very possibly might outperform Go, and almost undoubtedly outperform python and ruby, but it might also make refactoring for further performance down the road very difficult. When I reach for Rust, it's because I want maximum performance right off the bat. Otherwise I'd just use go, or even java/c#.
For me the most puzzling aspect of Go is channels. Ada tried CSP (Communicating Sequential Processes, which Go channels are based on) in the eighties, and people quickly realized that it led to bad performance and was unsuitable for a lot of useful cases. So Ada got standard mutexes and signals.
So why CSP, which does not allow one to implement priority delivery or multicasting, and makes cancellation much harder compared with a normal polymorphic message queue per thread? Moreover, Go crippled CSP even further, as one cannot select on a channel and a file descriptor at the same time.
Fortunately Go does provide mutexes and signals, so one can ignore channels unless required by a Go API. Still, without proper sum types, polymorphic and type-safe message queues are not possible.
> Still without proper sum types polymorphic and type-safe message queues are not possible.
This isn't really true.
You can define a channel where the message type is an interface, and then you can use a type switch on the receiving side to downcast that interface to various concrete message types.
What you can't do without sum types is ensure that you don't need a "catch-all" branch for if/when a value is sent down the channel that isn't one of the concrete message types you expect, but it will still be type safe and polymorphic.
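A small sketch of that pattern, with made-up message types:

```go
package main

import "fmt"

// message is the interface all channel payloads implement; the marker
// method keeps unrelated types from satisfying it accidentally.
type message interface{ isMessage() }

type ping struct{ seq int }
type text struct{ body string }

func (ping) isMessage() {}
func (text) isMessage() {}

func main() {
	ch := make(chan message, 2)
	ch <- ping{seq: 1}
	ch <- text{body: "hello"}
	close(ch)

	for m := range ch {
		// The type switch on the receiving side downcasts to concrete types.
		switch m := m.(type) {
		case ping:
			fmt.Println("ping", m.seq)
		case text:
			fmt.Println("text", m.body)
		default:
			// The catch-all branch the comment mentions: nothing stops a
			// new message type from arriving unhandled.
			fmt.Println("unknown message")
		}
	}
}
```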
That’s a weird definition of type safe… with very very low standards.
What OP probably means is that there’s no way to build a channel transmitting a list of various types while having the compiler warn you if you’ve forgotten a possible type, or if you’re trying to unwrap into an impossible one.
> That’s a weird definition of type safe… with very very low standards.
Thanks.
> What OP probably means is [...]
I already completely addressed this point in my previous comment. Your comment doesn't appear to add anything new to this discussion. What OP actually sounded like they were saying is that you would have to do an ad-hoc union type with a struct containing the fields of the various things you would want to send, and having to hope you don't access the wrong fields at the wrong time. That would be type unsafe.
I've written a lot of Rust professionally over the years, among other languages. I'm very familiar with what strong type systems look like. Being told my standards are "very very low"... such a great way to keep this conversation constructive. Type safety is a spectrum, and what I described in my comment above is far from "very very low standards" when talking about practical applications. It's not describing the gold standard -- I'll be the first to say that I wish Go had proper sum types -- but some people apparently forget what a lot of other popular languages deal with in terms of type safety, leading to hyperbolic statements about other people's standards.
What I have done in the past, when I've needed to pass various different types across a channel, is to look at what the receiver of these messages actually needs to do, factor that out into an interface, make a "chan myInterface", and then make sure that every data type I need to pass across actually implements that interface.
The main thing I am working on (slowly, because time and stuffs) is a client library for an async-protocol messaging/forum platform, where each request will (eventually) receive a response. I send the request, then construct a request-specific data type to pass across to the listener, together with the ID of the request, so that when the response returns it can dispatch to the proper decoder for the response.
That actually seems easier to describe in code, than in words, but this margin is too short for that.
Go has shared memory and mutexes, but using them means giving up on memory safety because the whole rigmarole that Rust does to preserve safety in concurrent code is totally missing in Go. So that's a meaningful reason to stick to CSP - sure, it might create easy deadlocks but at least it won't totally crash your app with a potentially exploitable fault.
Go could have provided message queues with priorities, Erlang style, or something closer to the native solutions in Linux/Mac/Windows that are known to work. They are safe, allow one to avoid deadlocks just as channels do, and are much more flexible.
And with a mutex one can build those as necessary, even if the interaction with channels is ugly.
Hm, is this pattern the best Erlang has, and the VM does not optimize it to avoid the need to lock the queue twice? If so, then you are right that double-select is equivalent, even if it is rather messier in Go than in Erlang, with the need to provide two channels.
> Go could have provided message queues with priorities Erlang style
Erlang doesn't have priorities; you can fake them with selective receive, but IIRC selective receive is considered a code smell and something that should generally be avoided or minimized.
(You can also have a separate process act as a priority queue channel by, well, storing messages in an actual priority queue structure and sending from its head, and that's a little more robust.)
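The "double-select" mentioned earlier in the thread can be sketched like this in Go — a non-blocking variant; the function name and shape are made up:

```go
package main

import "fmt"

// next drains the high-priority channel first, and only falls through to
// low priority when high is empty. The second select re-lists high so it
// still wins a tie if a value arrives between the two selects.
func next(high, low chan int) (int, bool) {
	select {
	case v := <-high:
		return v, true
	default:
	}
	select {
	case v := <-high:
		return v, true
	case v := <-low:
		return v, true
	default:
		return 0, false // both empty
	}
}

func main() {
	high := make(chan int, 1)
	low := make(chan int, 1)
	low <- 1
	high <- 2
	v, _ := next(high, low)
	fmt.Println(v) // 2: high priority delivered first
}
```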
Avoid channels except for straightforward producer-consumer workflows. Use mutexes and other traditional concurrency primitives for other stuff. Channels are a huge shotgun with caveats.
The thing about channels is, while they might be slow for tight-inner-loop compute code, they let you scale across all the cores very linearly. And, given a channel to flow the work through, you can manage the number of workers at each stage however you like: growing with more load, dropping work with more load, applying back pressure with more load. All concisely and explicitly. And a lot of network server things are classic producer/consumer things, so it’s great to have a solution for that where you get concurrency without any thread worries.
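That producer/consumer shape can be sketched in a few lines; the function, workload, and counts here are purely illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// squareSum fans n jobs out to `workers` goroutines over a channel and sums
// the results. Scaling across cores is just a matter of the worker count.
func squareSum(n, workers int) int {
	jobs := make(chan int)
	results := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- j * j
			}
		}()
	}

	// Producer: feed the jobs, then close results once all workers finish.
	go func() {
		for i := 1; i <= n; i++ {
			jobs <- i
		}
		close(jobs)
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(squareSum(5, 4)) // 1+4+9+16+25 = 55
}
```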
Go uses escape analysis to ensure memory safety of pointers. Taking a reference of a local variable means it might end up on the heap, and calling new might allocate on the stack. It all depends on if Go can determine that the lifetime of the pointers is less than the stack frame.
It's good if you don't want to care about those details, it's bad if you really care about those details for performance reasons.
It's pretty straightforward: knowing when to use the stack vs. the heap can improve your performance. Having both paradigms available in the language means you can start out relying on the GC for everything. Later, when you're ready, you can look to make more intelligent use of the stack.
Hits the sweet spot for me between the one extreme of having to malloc/free everything myself and the other extreme where everything is immutable and garbage collected and you don't know what causes allocation.
The former is a nightmare for security and productivity, the latter is a nightmare for performance.
Java can decide to heap-allocate or not heap-allocate at runtime based on what optimizations the JIT has performed so far, and so it can change over the course of a program's run. I don't think you can get more implicit than that.
Why would Java ever heap-allocate a local, regardless of optimizations?
It can decide to heap-allocate or stack-allocate objects (when you new them) based on escape analysis, which is a JIT optimization. But that's very different from heap-allocating variables.
Then I'm not even sure what you're talking about when you say "heap-allocate even a local variable" if you don't actually mean the value of that variable. If you draw that distinction, "variables" are never allocated; spaces for values are.
Regardless, whether something is a local or not in Java still varies depending on the JIT's decisions. (Or rather, the JIT does not see local/non-local variables, only lifetimes, and whether the lifetime may extend past the JIT's current view.)
In Java, the value of the variable of a class type is not an object - it's a reference to the object. Those references are always stack-allocated. Variables of primitive types like ints are also, of course, stack allocated.
In Go, if you have a local of type int, and that local is captured by a closure, it will be allocated on the heap to extend its lifetime. Java doesn't need to do this because it requires captures to be effectively final. On the other hand, C# does the same thing as Go.
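A concrete example of that Go behavior; building with `go build -gcflags=-m` should report something along the lines of "moved to heap: n" for the captured local:

```go
package main

import "fmt"

// counter returns a closure that outlives this call, so the local n must
// be moved to the heap to extend its lifetime past the stack frame.
func counter() func() int {
	n := 0 // escapes to the heap: captured by the returned closure
	return func() int {
		n++
		return n
	}
}

func main() {
	c := counter()
	c()
	fmt.Println(c()) // 2
}
```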
You don't have to think about stack vs. heap allocation in Go unless you're writing performance sensitive code. You could pretend that Go heap allocates everything and still read and write Go code without any problems.
I think it's at least partially about value vs. (mutable) reference semantics in Golang's case, and the hope is that the compiler will know whether to heap- or stack-allocate based on escape analysis.
I have been using the RC at work lately, and I have to say: The generics implementation is quite nice, and RUTHLESSLY slashes boilerplate and copy-pasta.
This is going to turn Golang from "that one boilerplate-y language without generics" into my favorite language.
I've been using the https://github.com/samber/lo library, and it is very nice to be able to do "map/reduce/etc..." on golang structs.
I would really enjoy a chaining library, and some tooling to make type coercion a bit cleaner.
I don't mean to disparage the wonderful work of the lo developers but, this is in many ways what I feared generics would bring. If you aren't going to lean into the simplicity and single way of doing something, you are likely better served in another language.
I am probably an old man yelling at clouds and I hope those who like this style get tons of value from it. I just don't see the benefit of trying to retrofit some of the behavior of other perfectly useful languages into Go. JS/Java/C#/Python/Ruby is a fine language. Let each of them have their strengths. I feel like trying to bolt things together this way only serves to take away what is special and valuable in Go.
I should probably shut up and just be happy lots of other people are using a language I enjoy :)
> I don't mean to disparage the wonderful work of the lo developers but,
> this is in many ways what I feared generics would bring.
What is, anything in particular about that "lo" library? Just the way it uses generics, or more that it's yet another library to learn to be effective? Can't argue with the latter, but as far as the former goes, I think that looks quite reasonable. It's very explicit; there's no magic going on. I think there's definitely a bar for "so much abstraction that it is discouraging to developers not into that sort of thing", and IMHO this doesn't cross it. Monads, sure, that's a tall order for many folks. But this is pretty tame.
But I am a fan of abstraction so I'm not the best judge.
Efficient map/filter/reduce chaining basically requires a JIT to fuse the operations and/or a GC prepared to deal with huge amounts of garbage. In Go, filter(h, map(g, filter(f, a))) will be immensely slower than the equivalent for loop.
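To illustrate with minimal Filter/Map helpers (sketches, not from any library): the chained version does a pass and allocates an intermediate slice per stage, while the fused loop does one pass and one output slice.

```go
package main

import "fmt"

// Filter allocates a fresh output slice per call.
func Filter[T any](xs []T, f func(T) bool) []T {
	var out []T
	for _, x := range xs {
		if f(x) {
			out = append(out, x)
		}
	}
	return out
}

// Map allocates a fresh output slice per call.
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

func main() {
	a := []int{1, 2, 3, 4, 5, 6}

	// Chained: three passes, two intermediate slices plus the result.
	chained := Filter(
		Map(
			Filter(a, func(x int) bool { return x%2 == 0 }),
			func(x int) int { return x * 10 },
		),
		func(x int) bool { return x > 20 },
	)

	// Fused: the equivalent single loop, one pass, one output slice.
	var fused []int
	for _, x := range a {
		if x%2 == 0 {
			if y := x * 10; y > 20 {
				fused = append(fused, y)
			}
		}
	}
	fmt.Println(chained, fused) // [40 60] [40 60]
}
```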
Great point. Although, that sounds like a typical example of "let's write it the ergonomic but less performant way first, then profile and refactor as needed". That seems very tractable for that sort of refactor, especially with type safety.
Also I would not be surprised to see JIT-like behavior from go tooling, first party or otherwise, if that sort of approach takes off.
And now we introduced 2 new problems:
1. We have 2 ways of doing stuff (boring loop vs map)
2. We have a new hidden way to introduce performance degradation by importing a library that uses map
You might argue that you should know what you're importing, but in practice, it can hurt the overall ecosystem. I'm still supporting the introduction of generics, but the tradeoff should be clear to everyone.
> If you aren't going to lean into the simplicity and single way of doing something, you are likely better served in another language.
Generics were widely asked for by the community for many reasons, one of the main things being reducing boilerplate and having basic polymorphic functionality that other languages provide.
I'm not trying to disparage you but: You don't know who I am, or my capabilities. Please don't be condescending to me and assume I just "don't get it" or that I am too lazy to learn the "go" way of doing things.
I am seeing that a lot in this comment section. You are judging others and coming to conclusions because we like something you do not like. I can totally understand your perspective, because I also see the danger of introducing more "power" into a language that was elegant and simple.
I respect your disagreement with introducing generics to Golang. However, please extend us the courtesy of assuming we are at least remotely competent.
Man, I totally missed the mark if this is how you interpreted my comment. I never meant to imply any competence level. Certainly didn't mean to direct any judgment at you. Simply trying to express the issue I had with the lo package.
I'm really happy we have gotten generics. I think some of our lower level tooling will be much easier to build. I know lots of the community will likely be happy to have non reflect based ORMs come out of that work. I have a coworker who has expressed interest in building a new json:api parsing library with generics. Happy we have it.
The map/reduce/filter and some of the other _ style helper functions from lo was what I was trying to speak to. Those idioms just don't seem to fit well in the language. The clarity of the single way to do something in Go has been immensely refreshing. I appreciate that so often Go code looks like Go code. I also don't find myself pondering which iterator to use or chain together to accomplish something. I just quickly go to a for loop and move on. In Ruby/Java/JS/Python I often find myself deciding which chainable to use instead of just solving the problem and moving on. The language is certainly not without its warts or shortcomings. To me it feels like if those kinds of options are things you really enjoy using in a language, it would feel more natural to use a different tool. Sacrificing that clarity seems to remove some of the biggest reasons to use the language as opposed to something with fewer limitations.
Sorry if I came across as condescending. Not a stereotype of the community I intended on perpetuating.
It's worth remembering that Java and C# didn't have generics initially. And when they did get them, there was a lot of pushback along the same lines; sometimes almost word-for-word about lost simplicity etc.
18 years later, the C# and Java developers are doing fine.
It's worth remembering that C# did have them pre-release; Microsoft decided to release .NET 1.0 without them to avoid postponing the release date until they were fully done.
This is clearly described in the HOPL papers regarding generics in .NET.
I find that the function boilerplate of Go is too long because there's no equivalent of an arrow func. I can't see those kind of libraries taking off until there's an easier way to write a closure.
So, I don't want to tear too much into someone's side project, but a lot of this is exactly what people are worried about.
As an example, let's take the `Chunk` function, since that's something I know I have three implementations that differ only in type in one project, pre-generics:
result := make([][]T, 0, len(collection)/2+1)
length := len(collection)
len(collection)/2? If you're splitting a list of size M into chunks of size N, you know exactly how many chunks you need a priori (it's ceil(M/N)), and it's not M/2+1.
	for i := 0; i < length; i++ {
		chunk := i / size

		if i%size == 0 {
			result = append(result, make([]T, 0, size))
		}
But worse: You allocated a whole new slice to store the chunk! What a waste.
But, you know, the real frustrating thing in the end is that it allocated anything at all. Chunking can also happen iteratively on the source slice:
func Chunk[T any](s []T, size int) (chunk []T, remaining []T)
And then it will be zero-alloc.
But once you see this, you realize we could write something that works on indices:
func (len int, size int) (split int)
And get nearly the same effect, without any generics.
Go < 1.18 forces you to factor down to the last one if you're pursuing highly abstract code (or you write it inline if you're not, and it's just as fast). It's not pretty, and you need one more line of boilerplate to slice the slice, but it is fast. Go 1.18 has a nicer option that gets rid of the line of boilerplate, but it's no longer the easiest solution to reach for.
I'm happy to see generics in golang. It will simplify a ton of my higher-level code like REST endpoints, and be a minor boon to low-level code. But this library, as useful as some parts may be, is also a great example of why they're a dangerous tool.
Actually, I take that back, I will shit on a side project. `lop.PartitionBy` strongly suggests they're just copy/pasting whatever implementation they can to show off generics, without making sure it makes sense at all.
I've found generic code much easier to read once I started playing with it myself. It might be worth implementing a generic sorter (or some other easily generic construct) just to get more familiar with the new syntax.
I have already started having trouble understanding open source code using generics. I can see the upsides of having generics, but have lost the enthusiasm with which I used to browse Go code. I am still a junior engineer, so maybe it will get better with time.
Every abstraction tool will get misused, especially early on. I bet the Go community will figure out ways to mitigate that in the long run, given its culture around readability.
Also, I think that a lot of supposed "misuse" that people are complaining about is actually using the right tool for the job, and the complaints stem from treating the tool itself as inherently too complicated (and thus any use of it as automatically suspect) because it's new and not conceptually trivial.
There was a lot of that back when Java and C# first got their generics.
Interesting. I'm curious how code generation is/was used to work around the language not having support for generics? Or was this something specific to GraphQL?
The kubernetes api (more specifically the k8s apimachinery) code uses codegen to generate code that converts custom CRDs to Go structures, rather than making you work with raw JSON. It's one of the more opaque bits of writing Go code that works with k8s.
> Existing libraries resorted to code generation a lot.
guess what generics are in a lot of languages? code generation! the files just aren't saved to disk.
IIRC, I heard on a podcast that it was implemented this way for Go, at least for the prototype, and it seems likely to me that it's also done this way in production, just not with textual code.
Personally, I'm waiting until 1.19 to start seriously exploring generics. By then, the dust will have settled a bit, folks will have some good lessons learned to share, and there may be more packages in the stdlib for generics.
I feel like a lot of people will wait (although perhaps not up until 1.19) for the simple reason that tooling support is still not quite there yet. See [1] and the related issues. Static code analysis is reasonably popular in Go projects, and just disabling all failing tools until the tooling is there is not something most people would agree to, I think.
With that said, of the tools that I use about ⅔ are already reasonably compatible. Unfortunately, the other ⅓ are among the most essential ones.
Please stop posting this on every comment you agree with, as you're not adding anything useful to the discussion, while also sending alerts to people for no good reason.
I don’t see Go ever making headway in data science. For one, it’s not fast enough to implement algorithms directly, and it lacks a decent FFI; it would be burdensome to create the metric ton of new libraries that would have to be put into place. And there are Python libraries for basically everything already, from numeric computing to Bayesian time series to Spark to Torch and TensorFlow.
Plus Go’s kind of weird, relative to the languages that many data scientists have experience with.
Sure, it’s much faster than Python, agreed. But is it fast enough to implement highly optimized numerical algorithms in a way that can compete with Fortran or C? That I’m not so sure of, and if not you’re stuck with the crappy FFI.
The general excitement about generics on this thread makes me uneasy.
Please keep in mind that generics are a pretty big hammer - they can add a lot of complication if not used with caution. Only use a language feature if you absolutely must.
I see it the other way around: generics reduce complication and allow for code that's a lot more elegant and simple than without. It definitely adds complexity on the compiler side of things, but using them in C#, TypeScript, and Java has been only a plus. Unless you count that time I tried to do stuff with generic interfaces in Entity Framework Core, but that was EF Core's fault as opposed to generics'.
As with everything else, sometimes this is true, sometimes it isn't. In languages with pervasive generics, figuring out how things work can mean following several extraneous layers of indirection as any opportunity to parameterize something is always taken. But writing the same stupid loop to delete something from a slice is also unclear and complicated. The point isn't that generics are bad, just that they should be used judiciously.
...and that time when people reinvented boolean expressions and their combinatorics under the disguise of Predicate<T> and Matcher<T>, making test errors hard to comprehend for very little additional benefit. etc etc
There is also the problem that people don't generally have a good grasp on how to use type bounds[1] correctly... I believe Go avoids the worst of it through its simple type system, but still... this complicates a language significantly.
That goes against the definition of the word complication. To complicate something is to combine and intertwine it with other concerns. To fold them together is to complicate them.
To generify a function is to complicate it with the ability to accept multiple types rather than just one.
There are totally great use cases for generics but all the cases I’ve seen are in library code not in general application programming which is where most developers spend most of our time. But of course the shiny toy can be hard to resist and so generics are often overused and abused.
As a general rule, if you're referencing the dictionary definition of a word to make your point, you're just playing semantic games.
You know what people also find complicated? Hundreds of lines of code being repeated with superficial edits because of golang's lack of ability to abstract higher-level ideas. It's a stupid toy example, but for a very large number of people
nums.take(20).select(&:odd?).reduce(&:+)
is less complicated than
sum := 0
for i, v := range nums {
    if i > 20 {
        break;
    }
    if v % 2 == 1 {
        continue;
    }
    sum += v;
}
The former is less complicated for those people (including me) because it expresses high-level intent rather than low-level implementation. Multiplied over a large code-base, the ability to express intent adds up to an enormous saving in mental overhead rather quickly. You can always drill down and focus on implementation where it matters, but with golang there's almost zero ability to separate a program's detailed implementation from a high-level description of what you're actually trying to accomplish.
The Ruby version of this is also less bug-prone (did you spot the bug in the go example?). And it's also easier to apply to new contexts: the former works automatically for infinite enumerations.
There are two bugs in the code, and your (intentional?) use of non-idiomatic Go is the cause of one of them. Since you don't mention it in your follow-up comment I assume you didn't mean to write it in the first place.
Still, there is a middle ground that remains both efficient and readable:
sum := 0
for _, v := range take(20, nums) {
    if v % 2 == 1 { // or if odd(v), if you like
        sum += v
    }
}
I'd venture most of the clarity comes from the generic take (specifically, being able to implicitly take min(len(nums), 20)), not the generic filter/reduce. The second point of clarity comes from Ruby's dynamic method dispatch and large method set on numeric types; no thanks. Only as a third-tier aspect does select/reduce come into play, and I think that one is much more questionable (e.g. reduce or fold, and either way how do I explain this name to new programmers? -- and &:+, what a messy identifier).
I legitimately cannot spot the bug. Can you write out the equivalent Ruby code in Go pre-generics? I really did try but I honestly cannot understand what your Ruby code is supposed to do. I don't know what select and take are supposed to do, and I can guess at reduce but the symbols in the method are utterly arcane to me. Does Take grab exactly 20 items or does it do something like nums[:21]? Depending on how Take is implemented it could be an off-by-one error in your loop? Usually in this case I'd check for equality instead; that's the only thing that looks off to me in your Go code.
In my opinion the Go code is dead simple and unambiguous with no nuance hidden away in methods whose exact semantics I certainly don't have memorized. I'd have to look at 3 different function signatures to figure out what those Ruby methods do every single time I was reviewing code like this.
The code is equivalent, except the golang version accidentally loops an extra time.
`&:meth` is just Ruby syntax for "a function that calls the `meth` method on its argument". So `&:foo` is shorthand for `func(x) { x.foo }`.
`take(n)` just returns the first `n` elements.
`select(fn)` returns only the elements for which the function passed to it returns true.
`reduce(fn)` combines the elements by repeatedly calling the provided function with the accumulated result so far and the next element.
Thus `take(20)` gets the first 20 elements, `select(&:odd?)` iterates over every odd value in what's left, and `reduce(&:+)` adds every remaining element together.
> In my opinion the Go code is dead simple and unambiguous with no nuance hidden away in methods whose exact semantics I certainly don't have memorized.
These functions and Ruby's `&` syntax took a few sentences to explain. They are fundamentally not that complicated, although nobody would expect you to understand what they did before you've seen them just like any other function call.
You didn't know what they do, which is fine. But now you do, and it is completely normal for people to assume that you should be capable of working with them in the same way that someone would expect you to write 2 * 5 instead of writing 2 + 2 + 2 + 2 + 2. Writing the full `for` loop in my example is the equivalent of the latter, with the equivalent pitfall that it's really easy to accidentally write an extra sixth addition when you only meant to put five of them.
> In my opinion the Go code is dead simple and unambiguous with no nuance hidden away in methods whose exact semantics I certainly don't have memorized. I'd have to look at 3 different function signatures to figure out what those Ruby methods do every single time I was reviewing code like this.
No, you wouldn't, for the exact same reason you don't look up the documentation for `for` or `range` or `if` or `*` every time you use them. They are simple, straightforward abstractions that any programmer will have more or less fully internalized given fifteen minutes of playing around with them, and which you will likely use multiple times a day in any language that supports them.
I can assure you that bordering on 0% of programmers who regularly use languages with these functions did not understand the example.
There was a time when I, too, did not know what to make of functions like these. I saw someone write a similar comment and I thought "that's totally inscrutable". And then I learned what those functions do and I now use them almost literally every single day. They are amongst the most useful and universal abstractions I have ever come across.
I think you overestimate my ability. I've spent > 10 years programming in languages that have features like this (take and select excluded I think? Or maybe they had different names?) and I've just never internalized functions that are generic like this. Usually what I end up doing is opening up a REPL or making a toy program so I can iteratively see what happens to the collection. This probably makes me a bad programmer but I'm just being honest: It's too much cognitive overhead for me when I'm trying to spend the rest of my brain power on keeping track of the actual problem I want to solve -- which usually means interpreting code written by 30 other engineers over long period of time (read: it's a mess).
I was reflecting on this and the one language where I don't feel this way is SQL. I'd consider it my strongest language, and it's an extremely functional language. I have no trouble deciphering complex SQL, but it takes me an enormous amount of time to figure out what happens in those long function chains in general purpose programming languages. I'm not sure what it is about SQL that makes me feel this way, but maybe it gives some hint why our brains seem to work in different ways?
I think you underestimate your ability. As a programmer, you've already learned and internalized abstractions that are far more complicated than anything here.
`select` is often aliased to `find_all` and that's what it does: finds all elements matching what's passed to it. `take` is sometimes aliased to `first`. Expanded ever so slightly:
nums.first(20).find_all { |n| n.odd? }.reduce { |sum, n| sum + n }
This reads: take the first twenty elements, find all the ones that are odd, and reduce by adding them together. The only "clunky" bit here is the word "reduce" which IMO there isn't an equivalent common English word for that gives a good intuition for what it does.
It might not look like it if you haven't internalized these abstractions, but they reduce cognitive load. Dramatically. You don't have to read through a complicated set of conditionals and control flow statements in an explicit loop, you can read the bits entirely linearly and (almost) in straight English. Once comfortable with these, you can quickly glance at pretty much any expression like this and know immediately both what it does and have extreme confidence it doesn't contain bugs, because there just isn't anywhere for bugs to be.
Don't sell yourself short. Take the time to learn these, and you will be a better programmer for it.
Perhaps it comes down to a difference in where we spend most of our time? I'll give you an example.
I once spent a full day debugging a problem that came down to the implementation details of .zip. The author had assumed that .zip would add extra null elements to the output array if the inputs didn't match in length, which is sadly not the behavior of our programming language. We determined this was the bug after breaking out the REPL and running line by line because it was hard for us to visualize exactly what was happening in functional methods like these (there were more around the .zip call). We ripped out the .zip and turned it into an explicit for loop because we wanted the behavior for the case of "these arrays are different length" to be extremely obvious to the reader. The author and myself probably learned that zip had this behavior at some point, but its terseness hid a ton of nuance in code review that we decided we cared about later on in a way that explicit looping did not.
So, I get that internalizing functions like this can reduce cognitive load in some cases and it's certainly shorter. However, I spend a large percentage of my time looking at code where small semantics like the above matter a great deal. What happens if the length of this array is less than 20? What is the default return value if there are no items: None or 0? When I loop over something, it's often extremely important -- something that operates on an absolute ton of elements, altering its behavior is a big change in business logic type of stuff. Too often we've run into edge cases like the above that just flat out need to be explicit.
I think if I were working on smaller teams/codebases with more homogeneous experience levels I might feel differently. On my current team I will take the tradeoff of "the average function takes a bit longer to parse" if it means that everyone can reason about any given bit of code without trouble. We're trying to minimize how bad things can be, e.g. never have multiple programmers sitting around a computer trying to figure out what the bug is in a nested list comprehension with gratuitous use of function chaining. We use Go over other languages nowadays because we believe that for our team, explicitness results in the lowest cognitive burden on a global level. I think it's fine that other languages make other choices -- sometimes I program in Haskell for fun -- but if I come back to something I wrote long ago, I always break out the manual to remind myself what exactly certain expressions do.
> We ripped out the .zip and turned it into an explicit for loop because we wanted the behavior for the case of "these arrays are different length" to be extremely obvious to the reader.
This could have been accomplished by just extending the shorter array to the length of the longer one with no loss of clarity (and likely greater clarity, as now I don't have to read your custom implementation of `zip` every time I read this call site).
The broader point is that you found a bug where someone used a function incorrectly and instead of fixing the usage of it, you simply wrote the function inline. This same story could have been with any function call, but for some reason it seems you think that iterator methods are special and different somehow? Any function can be called incorrectly, but the solution isn't to just universally replace function calls with inline equivalents. You had a bad experience with not understanding one of these types of functions, so instead of taking a moment to internalize what they do, you decided to swear off of them entirely? I honestly, genuinely cannot understand this perspective.
> What happens if the length of this array is less than 20?
In 100% of implementations I've ever encountered, it returns fewer than 20 elements. If you want exactly 20, call `take` and then pad its length with whatever-valued elements you need. Explicit.
> What is the default return value if there are no items: None or 0?
Up to you! Pass the default return value as the first argument to the `reduce` method. Explicit.
These aren't deep and particularly confusing semantics around these methods. These are just garden-variety "I instinctively avoid these functions so I don't know the basics of how they work" types of questions. Making the answers to these questions explicit does not require splatting out the entire contents of their function definitions inline. That's not explicit, it's verbose.
> the average function takes a bit longer to parse
Code is read dozens if not hundreds of times more often than it's written. Code must be written to minimize the effort needed to understand it. The entire point of functions is to assist with this. The entire point of these specific iterator functions is that they do a phenomenal job of this, to the point where virtually every single programmer who works in languages with these idioms will understand what you mean when you say you're mapping an array.
You're absolutely capable of the same, but for some reason you've decided that these functions are magic and scary and should be avoided. They're not, and regularly avoiding them actively decreases the clarity of your code and is far more likely to increase your bug count than decrease it.
> if it means that everyone can reason about any given bit of code without trouble.
Where is the floor on this? One engineer decides that `map` or `select` or `all` isn't worth bothering to learn, so nobody gets to use them? What if they decide a `for x := range y` is too much work, does everyone go back to `for x = 0; x < y.len(); x += 1`?
These functions are basic. They aren't fancy functional magic that only Haskell wizards will ever hope to comprehend. They are used in an enormous variety of languages where their users overwhelmingly find them to be a net increase in clarity while eliminating the possibility of entire classes of common derpy bugs, no differently than `for x := range y`.
Have you thought about the number of allocations and the amount of copying your chained generic map-reduce thing would cause? Go for loops are good because they are simple, even if you have to use more than one line.
Generally direct refutation of a central point is a constructive argument. Here’s another example of direct refutation:
You’ve given a strawman argument: specifically, you’ve given one implementation which has abstracted away the details (we don’t see the code for take, select, and reduce). That’s just an arbitrary decision you’ve made; the equivalent Go example you could have posted might be:
You’ve presented these different levels of abstraction and then argued against a point that wasn’t made. A strawman argument.
In the interest of steelman-ing your argument, the interesting difference would be in the comparison of implementations of take() or select() or reduce() - but ruby is a dynamic language so there’s not really a comparison to be made.
We can still say how we might approach Take() or Select() or Reduce() or Sum() though, if we need them to be generic over argument types. In the absolute worst case (so not using go generate to help us here, or an interface, or the new generics functionality) we would have repeated definitions of these functions. Code that any junior developer will be able to safely reason about and change. Code that has utterly obvious risks (you might introduce a differing behaviour in one implementation of Take(), for example), so obvious that it’s trivial to defend against with nothing more than generative testing. Again, painfully simple code. Zero cleverness. Any developer of any experience level can quickly make a valid change.
> Generally direct refutation of a central point is a constructive argument.
Selecting your own definition for a word and basing an argument around the definition you chose for it is not direct refutation of a central point. Again you appear to just want to play games where you get to declare yourself the Internet Argument Winner and pat yourself on the back instead of actually giving a shit about the perspectives of those who disagree with you.
> You’ve presented these different levels of abstraction and then argued against a point that wasn’t made. A strawman argument.
Those functions don't exist in go, and up until generics were just added, they couldn't be without copy/pasting their implementation for every single array type you wanted to implement them for. You're essentially making my point for me in that the only way these functions can be written now without resorting to copy/paste in every project that wants to use them is thanks to generics.
I presented this argument because it is a common refrain in the Go community that functional iterators like map, filter, reduce, and take are unnecessary and add complexity. Far from a straw man, this is a direct example of a case where people have pleaded for generics while people like yourself have argued that they increase complexity. You can find examples of those perspectives right here in this comment section.
And this is just one example of an area where golang has historically foisted complexity onto its users rather than solve it internally.
I think the code you linked shows exactly why generics are needed for these kinds of methods. By introducing a single generic, you could reduce it from 500 lines to 20, and make it work for all types, not just the built-in ones. It would also remove the need to have different method names.
I think that makes it less complex, as the developer doesn't need to think about the exact underlying type of the array when calling Take (which is irrelevant to its implementation).
> in the worst case we would have repeated definitions of these functions. Code that any junior developer will be able to safely reason about and change
It's also code that many junior developers will forget to change in all the places when they fix a bug in one of them.
>> Code that has utterly obvious risks (you might introduce a differing behaviour in one implementation of Take() for example) - so obvious that it’s trivial to defend against with nothing more than generative testing.
Really obvious risks are usually easier to handle than more obscure ones.
Whether or not you like the Ruby version is beside my point, which is that which one of those is "more complicated" is a matter of perspective. The Ruby one is almost strictly a wrapper around the golang one so it does add absolute complexity. But the golang one is relatively more complex, because the Ruby version uses higher-level equivalents, and those abstractions are good enough that I don't ever have to actually reason about what happens underneath the covers.
Put another way, even the golang version is an abstraction around an absolutely massive amount of hardware and electrical engineering complexity that you virtually never have to think about. The absolute complexity of what happens when you compile and execute those instructions is extreme to the point that no single human or even room of humans collectively fully knows what's going on. And yet we manage just fine.
Yep, the `userNames` thing was my own dumb mistake while editing in a comment box and was not intended as a reflection of the language. The capacity/length issue is the actual bug I was intending to include.
But... also it's a mistake that's not even possible in the Ruby example due to not having to handle the "irrelevant" internal details of appending to the array, so maybe there's a point to it after all.
I grant you that the API for capacity and length when creating a slice in Go is bad because of this exact mistake, and I believe a few of the Go authors said they regretted it and would like to change it. However, it's odd to classify this as an advantage of Ruby, because pre-allocating memory for a large slice like this has been one of the biggest single performance advantages when we moved from Python to Go, and it's effectively a one line change. For known-to-be-tiny slices we don't even bother.
and it's an optimization the ruby version doesn't necessarily have. you could have just as easily written:
var names []string
for _, user := range users {
    if user.isAdmin {
        names = append(names, user.Name)
    }
}
and hilariously it likely still would be faster than ruby. I've managed large golang and ruby projects, and the difference in maintenance work between the two is insane, and it's not a good look for ruby.
The thing that worries me is people trying to reconstruct features from other languages with generics. One of the things I like about Go is that no matter whose code you're looking at, it's almost certain that you're already familiar with any feature you'll see, because the set of features is relatively small.
I highly recommend checking out the Type Parameters Proposal[1]. It's surprisingly easy to digest and has more realistic and complex examples than the quickstart guide or any other blog post out there.
My favorite feature is a tiny, couple line bug fix that I pushed hard to get included during the feature freeze. Full details here, but a summary is below: https://github.com/golang/go/issues/51127
For the past ~8 years, many hundreds of people reported on GitHub - and likely many multiples more have encountered and not reported - a program that didn't work with the error "cannot unmarshal DNS..."
This error seems to be caused by two things:
1. DNS proxies not adhering to the DNS spec. This would be due to VPNs with integrated DNS proxies such as Cisco AnyConnect, internet connection sharing on Windows (which affects DNS resolution in the Windows Subsystem for Linux), or even just incorrectly implemented recursive resolvers. Specifically, if these servers receive a DNS message that utilizes RFC 1035, section 4.1.4 "message compression" (an extremely simple compression scheme), they may retransmit the message without message compression, which means the proxy doesn't keep responses 1:1 in size but almost always increases the size of DNS responses.
2. Go's built-in DNS resolver (in the net package) enforcing a strict 512-byte limit on responses. This is the same behavior as musl (and I would implore a musl maintainer to increase this limit), but not that of glibc or most OS distributions' default DNS behavior.
I read every single GitHub issue related to this and found the frustration of both end-users and OSS software maintainers overwhelming. End users lacked the tools to diagnose why the software didn't work, and maintainers couldn't reproduce the issue.
This bug affected major projects: Mesos, Docker, Consul, Terraform, and more. For nearly a decade. Sometimes (as in Mesos) these were projects where they controlled the DNS server and client, so they could implement DNS message compression and solve it. In other cases, such as with Docker or Pulumi, where software runs on end-user machines, maintainers may have had to "close, can't repro" issues. The error was inscrutable and otherwise very skilled maintainers wrote this off as a fluke or user error.
And these are just the tools typically used by skilled technical users, usually engineers. It's hard to estimate, but easy to underestimate, how many bug reports and issues end-users encountered where a cryptic error message, if shown at all, never made its way to GitHub.
Software that works is better than software that - sometimes cryptically - does not. And when things just work, well, very few people using Go will realize that this fix prevented them from experiencing an afternoon or a day or more of frustration; but I will :) and I think that's what makes contributing to OSS awesome.
This has been an open issue since 2015. It's a pain because every single tool built with Go cross-compilation fails to use the proper DNS resolver and thereby doesn't work over work VPNs.
That's tools like kubectl, vault, concourse (fly), and many other binaries that get built on CI; unless the company builds on macOS builders and uses CGO_ENABLED=1, DNS resolution is just broken.
Very interesting and very helpful. TLD-specific resolvers on Mac not working is the missing piece I needed to understand why our team builds binaries on Mac and cross-compiles to the other two.
Sure, but there's usually some purported rationale. E.g. with direct syscalls it was to avoid FFI overhead. I'm curious as to what it was here; presumably trying to keep everything async?
I read through the whole issue thread. I admire your firmness on pushing through while at the same time maintaining professional and analytical tone in your responses. To me it is a good example how I'd imagine most open-source issue discussions should go. I understand both sides, and trade offs taken. Nice work.
Although the "your best bet to get a desired change through isn't via a private call with a maintainer." comment directed at you made me chuckle.
Any idea why they have such a low byte budget? testing.Fuzz would be so much more readable. I'll accept test.Fuzz if 9 characters is the limit for any compound identifier. Is this sort of single letter class naming common in the standard library?
Not in the standard library in general, but in the testing package it's the convention. There's testing.B, testing.M, testing.T, and testing.F, plus the testing.TB interface.
it's super common everywhere, also in third party libraries (`bson.M` is what exactly?). it's one of the reasons that makes go a very displeasing language to work with.
This is the part anyone writing performance-critical code will find troubling:
> The compiler can choose whether to compile each instantiation separately or whether to compile reasonably similar instantiations as a single implementation. ...
I mean in Java vs. C# fights people still bring up that C# generics are reified while Java generics rely on casting and erase type information at runtime, so probably not.
When you hear most critics of Go (or any language, for that matter), they talk as if Go is merely an alternative syntax for their favorite language. Of course they're bothered by missing features and unfamiliar ways of doing things. As tired an analogy as it may be, programming languages are like natural languages. Trying to learn Japanese by translating sentences, idioms, and proverbs from English word for word will only end in frustration and confusion.
If you read the history of Go, the whole point behind its creation (as with many other languages) was to start with a clean slate so new norms and ways of doing things could form, in service of large distributed teams and long-term maintainability. Lack of generics, as unrelated as it may seem, is actually the result of those and other goals. Specifically, the Go authors avoided complicating the language, runtime, and compiler early on, and thereby avoided harming the long-term maintainability of both the language and the code written for it (the v1 compatibility promise), before having a good grasp of how generics should be implemented in this new language, or whether they should be at all. Copying so-called tried and tested implementations from existing languages would make little sense, as generics are deeply intertwined with the rest of a language.
TL;DR approach learning programming languages like you would approach learning a new natural language with the purpose of living among its native speakers.
Most critics of Go are well aware of its history and purported design goals. The problem is that the set of features that actually are in Go (like channels) versus those that weren't (like generics) or still aren't (like enums) doesn't really make sense when taking those design goals at face value.
For example, generics. Like you say, the original claim was that they didn't want to do them because they thought that more time is needed to figure out how to do them "right". And so they waited for a decade, and ended up implementing them in a way that's not fundamentally different from most other languages that had them all along. What, exactly, was gained here to justify the productivity lost while waiting?
I will also add that to someone who knows a few PLs, Go is a very boring language in a sense that there's very little new there, neither in terms of individual parts, nor in terms of how they're combined together. So the notion that this combination is somehow so unique that its evolution is like walking on untested ground doesn't pass the smell test.
> What, exactly, was gained here to justify the productivity lost while waiting?
The goal isn't to wait long enough to come up with something novel. That's what research projects are for, and Go is on the exact opposite of the spectrum. If after deliberation, it turns out that a mostly similar solution is the way to go, then that's doing it "right".
Also, the lost productivity that keeps getting mentioned is overblown. The minority who actually need it (a subset of those who ask for it) are, well, a minority, but a loud one. I have developed in Go for 8 years. I only ever reached for generics twice, and in both cases, copy-pasting the implementation for different types added insignificant effort.
> Go is a very boring language in a sense that there's very little new there
You hit the nail on the head! It's boring. That's what you need when you're targeting large distributed teams and want your code base to survive decades and still be readable and navigable by pros and amateurs alike. You don't want clever or smart. You don't want cutting-edge, or novel for novelty's sake. You don't want magic. You don't want to have to keep the context of 20 files in your head to understand one line of code. You don't want a syntax that can be interpreted in 10 different ways in different contexts. You want the sweet spot between simple and practical.
Back to:
> Most critics of Go are well aware of its history and purported design goals.
Your comment about Go being boring and its lack of novel features, is clearly demonstrating that you've either missed or misunderstood the design goals. And many similar comments on other parts of the language from others I've interacted with is what I was basing my parent reply on.
Right, the design goals say it's a language for people not clever enough to deal with programming languages.
"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike 1"
"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical. – Rob Pike 2"
Approaching 50 here, and no major issues dealing with project deliveries across Java, C#, JavaScript, TypeScript, C++, Transact SQL, PL/SQL, PowerShell, bash, ....among plenty of other stuff, naturally depending on project specific workloads.
Maybe it is some devs that can't fathom how to transport water buckets to the top with all those stairs.
> Approaching 50 here, and no major issues dealing with project deliveries across Java, C#, JavaScript, TypeScript, C++, Transact SQL, PL/SQL, PowerShell, bash
Very commendable — I don't touch JS that I didn't write. Forget C++ altogether :D
I didn't mean my comment as a negative — in fact, I see all those attributes as essential so one doesn't produce junk at a fast rate just cause s/he can code for 20 hour straight and refactor endlessly. Of course I know devs older than me who are still in mass production mode and don't deliberate much. In fact, I often have to ask people to slow down and seek simpler and lazier solutions for their sake and mine further down the line.
It's not boring in a sense of being boring to learn, read, or write. Ada or Java are languages that are "boring" in this sense, but which nevertheless offer far better abstractions than Go.
Go is boring in a sense that it's not innovative. Go's claim to innovation is that the feature set is tuned for new users, but that's not actually the case.
Certainly a better take than 'I don't like generics, it's somehow less complicated to downcast interface{} everywhere'. It was very interesting reading the justification for not having generic methods, and how that clashed with my assumption of what generics should be, or for that matter what methods should be. Go is the spiritual successor to C, IMO - for all that people use it nowadays for its low-level capability, people forget that its original purpose was to do what every other language of its day did, but way, way more simply.
> in service of large distributed teams and long term maintainability.
Except those didn't really manifest. The opposite was actually true, golang is worse for large teams and maintainability compared to languages like Java and C#.
> Go authors avoided complicating the language, runtime, and compiler
Which ended up complicating end users' code. There is no way around it, complexity has to exist in one space or the other. They just made writing the compiler easier, which made writing and reading golang code much harder, and ended up with a weak language.
Years of experience working on several large golang projects, and seeing what sorts of issues and bugs programmers face when working on them. Things they wouldn't have encountered in Java or C#, for instance.
Furthermore, the burden of proof is on the golang authors making the claim. I have not seen any rigorous studies that underlie the design decisions they claim will result in more maintainable large code bases.
> How do you figure that? I see large projects like Kubernetes and Docker thriving with contributions.
That's despite golang, not because of it. C is quite bad for large-scale projects, yet Linux has a lot of contributions because it is a large project (a tautology of sorts).
> Some users, sure. Majority of projects, as evident in Go annual surveys clearly shows they are doing OK without it.
A biased survey that mostly pro golang people would take anyway.
A request/suggestion for the Go team: include instructions for how to use the Windows zip! It's not a big deal but I had to figure it out by `ls`-ing around.
It's not a feature, and it's not a good idea to refer to people with @nicks, because Twitter's @name convention is much stronger and the two don't overlap perfectly (I am not, for instance, @tptacek on Twitter). I use 'nick to refer to people here, in keeping with our lispy ethos, but really anything other than @nick is fine.
It's worth noting that on IRC it refers to someone being an op, and that it's not used for pinging people. (I sometimes see people who are used to other chats put the @ at the start of an IRC message)
On IRC, typical protocol is to just use people's nickname, with a colon after it if you're replying specifically to them. "@" is uncommon for tagging people.
I’m surprised Go didn’t add real enums before generics. They seem much more orthogonal to the type system, and they'd blend well with the language. I guess there's a caveat I’m not aware of.
I hope this gets merged into the upcoming Yocto `kirkstone` LTS release branch, since that branch will use the same pinned version for years to come. The previous `dunfell` LTS release branch provided Go 1.14, which can no longer build the latest versions of many projects.
Not for me. There is no ?: ternary operator. Having to write 6 additional lines of code for each tiny conditional sucks. It's not frequent; I think I've only had a single case in a few years that was hard to work around, but when it happens, it's not fun.
    err := h(cond(x) ? f(x) : g(x))

vs

    var temp T
    if cond(x) {
        temp = f(x)
    } else {
        temp = g(x)
    }
    err := h(temp)
Hopefully, this can be improved with generics; e.g. it should now be possible to write a `func Coalesce[T any](args ...T) T` function to simplify common scenarios.
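With the 1.18 syntax, a minimal sketch of such helpers (If and Coalesce are illustrative names, not standard library functions):

```go
package main

import "fmt"

// If is an illustrative generic stand-in for ?: . Unlike a real ternary,
// both arguments are evaluated before the call, so it can't guard against
// e.g. a nil dereference in the untaken branch.
func If[T any](cond bool, a, b T) T {
	if cond {
		return a
	}
	return b
}

// Coalesce returns the first argument that isn't its type's zero value.
func Coalesce[T comparable](args ...T) T {
	var zero T
	for _, a := range args {
		if a != zero {
			return a
		}
	}
	return zero
}

func main() {
	fmt.Println(If(len("go") > 1, "big", "small")) // big
	fmt.Println(Coalesce("", "fallback"))          // fallback
}
```

The eager-evaluation caveat is the usual argument that a library helper can't fully replace a language-level ternary.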
Not only that, but despite all of the other syntactic sugar Go is lacking (usually sorely, such as a “try” error handler), the switch statement is really just an “if” statement in disguise.
    var someVar, anotherVar string
    // ...
    switch {
    case someVar == "whatever":
        fmt.Println("Tell me how, exactly,")
    case anotherVar == "nope":
        fmt.Println("this compiles to a jump table?")
    default:
        fmt.Println("Spoiler: it doesn't.")
    }
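Spelled out, that tagless switch is just an if/else chain: the cases are arbitrary booleans over two independent variables, evaluated top to bottom, so no single jump table can cover them. A sketch as a testable function (classify is an invented name):

```go
package main

import "fmt"

// classify rewrites the tagless switch above as the equivalent if/else
// chain. Each case is an arbitrary boolean evaluated in order, which is
// exactly why the compiler can't lower it to one jump table.
func classify(someVar, anotherVar string) string {
	if someVar == "whatever" {
		return "Tell me how, exactly,"
	} else if anotherVar == "nope" {
		return "this compiles to a jump table?"
	}
	return "Spoiler: it doesn't."
}

func main() {
	fmt.Println(classify("whatever", "")) // Tell me how, exactly,
}
```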
> the switch statement is really just an “if” statement in disguise
If the switch is over a fixed set of strings, then can't the backend generate a perfect hash function ala gperf and then proceed to use the computed hash value to implement a jump table?
Go currently uses a binary search for integer switches (and I believe serial comparisons otherwise). Jump tables are in development and likely to appear around 1.19 or 1.20. As you say, there's a lot of edge cases to consider when figuring out whether a table or comparisons are the better choice.
Note that my example “switches” over two independent variables. Even if the Go compiler did generate a table, for this example it would really be generating two tables: and then we're back to the “how is this better than an `if`?” question.
I love stuffing things into one-liners. It's not hard to see what is happening if you understand it well enough.
The reason is I prefer to move my eyeballs rather than scroll the screen. Too many lines and you forget what preceded what you're seeing, or have to keep scrolling to make sense of everything.
If you only had a single case in a few years, then why would you need a ternary operator? Your anecdotal data experience is actually confirming their decision to leave this out of the language.
I can't find that code, but IIRC it was me working with the AWS SDK, which infamously exposes all sorts of stuff as *string and *int because there is no sane way to have Maybe<string> in Go. So I had a whole screen of those `if resp.foo != nil { v = *resp.foo } else { v = fallback }`, sometimes deeply nested (for resp.foo.bar[idx].baz, because there's no null-chaining operator either). Yes, a purist may argue that's the AWS SDK's problem and they should've gone with something else, and that Go's decision not to have ?: still stands. Sadly, I didn't have the luxury of waiting another decade until Amazon maybe sorts it out, yet I still wanted to write something I could read back.
I don't disagree typical use results in poor readability. I'm not gonna put those ?:s just about everywhere. Yet I do think that there are sometimes infrequent exceptions where ternary would've been objectively better.
In what way? Almost every other major language has them without major issues.
Go devs saying generics are a bad thing reminds me of people in Oregon freaking out over being allowed to pump their own gas while the other 49 states have been doing that for decades.
never once was it said by the go devs that generics are bad. it was asserted that generics are complicated to get right and they wanted to take their time.
Is it a worse programming experience to understand an esoteric library's use of [T U], or to need to choose between codegen vs interface{} for the most common use cases? The ability to misuse a feature should not be understated, but I think people have been jumping over the missing stair of interface{} for so long that they've forgotten how much of an incredible pain it is.
I think about Go with pleasure until I remember importing. The miseries I've experienced with modules, the way imports are satisfied... just so hard to understand. Built-in support for SCM sites like GitHub (yuck!). Just struggling to put code in a place where some other bit of my code can use it. The C/C++ preprocessor is an abomination, and Go seems to have made something that solves all the problems I had with it but ends up much worse to use.
Really? That’s honestly not an issue I have. My day to day dependency management story is just using go get and go mod tidy. It just works fine, and I work with a fair share of services and dependencies.
With SCM support baked in, I can't add support for our company git server, since I'd need permissions to make it return certain headers, which I'm never going to get.
Now my code has references to github all over it and no chance to replace that dependency without a massive code change - something that may not be even practical.
The "replace" directive .... ha ha ha. Never got it to work for me - a total joke. Obviously it must work in some simple situation but apparently not mine.
The misery of trying to fork a library or just the subcomponent of that library that I need to change and use.
The strange special directories (internal, vendor, ....)
WTF is a "package" really and how does that relate to where you put code? It just seems to be a complete disconnect with where you put things or how you import them and is very confusing.
executables having to be alone in their own cmd directory.
Essentially that I hit mysterious import related problems that require continual careful reading of the docs to work out what I'm falling foul of.
Working out what is the real difference between GOPATH and GOROOT - you can read the docs many many times without understanding the straightjacket that the wise Go developers think should fit everyone and so you get problem after problem until you start to understand why "computer says no."
Creating read-only files in your home account so that you cannot "blow away" (without some trouble) the caching that is hiding or causing your latest module importing problem.
Sorry, the whole business is a great non-joy to have to deal with.
glad other people see that as a problem as well. it irritates me to no end esp considering possible solutions are fairly straight forward but people can't get buyin from the golang team.
for the record it stopped working in go 1.15. 1.14 was the last version it worked in.
Works with github, doesn't work (for no reason that you can understand at first glance) with the company bitbucket server. There's no programmatic or configuration mechanism I can use to fix it - it needs changes on the server which I cannot make. etc etc.
This is very exciting! Generics will be helpful for some, I'm sure, but my reading of the winds is that people will find other idiosyncrasies of Go to latch onto and complain about. It seems to me the next object of hatred is the lack of sum types.
I would like to understand a bit more about where a lot of the Go criticism comes from. Of course some amount of it comes from direct frustrations people have with the language design, but I suspect that doesn't account for all of it. It seems to me that the intensity with which some people fixated on the absence of generics cannot be explained just by frustration with writing non-generic code, which by all accounts was annoying but not overwhelmingly so.
So, for those of you who are willing to explore the part of this that goes beyond a simple rational analysis and criticism of language design: what's bugging you?
Sometimes I think the core irritation with Go is its simplicity. The suggestion that easy-to-learn languages can be effective too is somewhat humiliating. We get attached to the pride of mastering more complex languages, and the implication is that some of the struggling we went through was unnecessary, that folks who won't struggle as much might be effective too, and that maybe our pride was misplaced.
There's also a suggestion, in the fact that Go has opinions, that other opinions might be "wrong". I think this may bug people who hold those other opinions or see merit in them, a lot. There's also a bit of anti-expertise sentiment: "who are you, Go creators, to say you know better than me how to design this program/engage in software development?"
In any case, it's something I've been trying to figure out for a while and I don't think I have a complete explanation still. Curious to see what others think.
> Generics will be helpful for some, I'm sure, but my reading of the winds is that people will find other idiosyncracies of Go to latch onto and complain about. It seems to me the next object of hatred is the lack of sum types.
I'm a big Go proponent, and I'm pretty "meh" on generics. They'll make some code easier to read and write, but they'll make a lot of code a lot harder to read (because contrary to popular belief, "readability" doesn't always increase with abstraction or terseness).
That said, sum types would genuinely be helpful. The workarounds are error-prone (making sure all cases are handled falls to the code) and inefficient (unnecessary allocations if the implementation is backed by interfaces, or wasted memory if it's a struct with a field per variant). Note that simply supporting sum types doesn't guarantee they'll be implemented efficiently; I'm afraid the Go community might settle for an interface-based implementation rather than a union-based one.
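For readers unfamiliar with the workaround being described, the interface-based encoding looks roughly like this sketch, and it exhibits both failure modes mentioned above: no exhaustiveness checking, and every value boxed behind an interface:

```go
package main

import "fmt"

// Shape is meant to be "Circle or Square", but nothing stops a third type
// from implementing the interface, and the compiler won't flag a type
// switch that misses a case.
type Shape interface{ isShape() }

type Circle struct{ Radius float64 }
type Square struct{ Side float64 }

func (Circle) isShape() {}
func (Square) isShape() {}

func Area(s Shape) float64 {
	switch v := s.(type) {
	case Circle:
		return 3.14159 * v.Radius * v.Radius
	case Square:
		return v.Side * v.Side
	}
	return 0 // unreachable only by convention, not enforced by the type system
}

func main() {
	fmt.Println(Area(Square{Side: 3})) // 9
}
```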
On balance, however, Go is still the most productive language I've ever used (and by a healthy margin). People make way too big a deal about type system minutiae (having a basic type system is important, but going too far beyond that can be counterproductive) and they are far too forgiving of things like bad tooling, poor performance, or impoverished standard libraries.
I think people are going to go off the rails with functions that abstract over for loops. I see it all the time in Javascript code, where people iterate over an Array multiple times because that's what the API makes easy.
For example, say you want to separate a slice into two slices, one that contains matches and one that contains everything else. Right now, you'd just write:
    var matches, mismatches []whatever
    for _, x := range xs {
        if x == condition {
            matches = append(matches, x)
        } else {
            mismatches = append(mismatches, x)
        }
    }
Filtering the slice twice through a generic helper is twice as slow, but it's fewer lines, so I have a feeling that people will have an uncontrollable desire to use it as often as possible. I hope that my feeling is wrong.
(Do note that slices.Search is not a real function yet.)
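For reference, the two-pass version being compared might look like this (Search here is a filter-style sketch under the name used in this thread; nothing like it shipped in the standard library):

```go
package main

import "fmt"

// Search is a hypothetical filter-style helper. Each call walks the slice
// once, so building matches and mismatches this way iterates twice, which
// is the "twice as slow" being discussed.
func Search[T any](xs []T, keep func(T) bool) []T {
	var out []T // starts with capacity 0, so appends may reallocate
	for _, x := range xs {
		if keep(x) {
			out = append(out, x)
		}
	}
	return out
}

func main() {
	xs := []int{1, 2, 3, 2}
	matches := Search(xs, func(x int) bool { return x == 2 })
	mismatches := Search(xs, func(x int) bool { return x != 2 })
	fmt.Println(matches, mismatches) // [2 2] [1 3]
}
```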
My experience writing Scala was that there's a modest list of ways people iterate over collections that covers 99.9% of cases. For this example, many languages have a Partition function.
This is a perfect example of the fundamental and probably irreconcilable disconnect between Go advocates and detractors. To me, your "bad" example is much better: not only is it less code, it directly expresses the intent rather than making the reader infer it from the sequence of manual loops and ifs.
While I agree that it'll probably increase, for non-performance-sensitive stuff (or small lists) that doesn't really seem like a problem. And for cases where it is, the answer is the same as it has always been: profile and optimize.
Though now it's much easier to make a "matches, mismatches := slices.Split(xs, func(x whatever) bool { return x == condition })" helper, so that improvement even reduces the number of lines further.
This is a wild guess, but maybe the Search function could do a bit better than expected if it were smarter about allocation size than the plain version you posted. From what I understand, the initial capacity of a slice is 0, which means you get a few reallocations with the "naive" version. Search, on the other hand, could allocate a slice with a capacity of half the length of xs, so you'd need one reallocation at worst.
This, mixed with the "Partition" function proposed by sibling comments (which would only iterate once) could lead to code that's faster than the regular approach, while also being shorter to write.
On the other hand, it's not as free as it seems, as now developers have to know the Partition function, and how it works. It may also have pitfalls that I haven't considered. As always, it's hard to find the perfect sweet spot between how much stuff developers should know and how much stuff should be explicit in the code all the time.
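A single-pass Partition of the kind proposed above might be sketched like this (Partition is hypothetical, not a standard library function):

```go
package main

import "fmt"

// Partition splits xs into elements that satisfy pred and elements that
// don't, in one pass. The capacity hints discussed above could be added
// here via make([]T, 0, len(xs)/2); this version stays naive.
func Partition[T any](xs []T, pred func(T) bool) (yes, no []T) {
	for _, x := range xs {
		if pred(x) {
			yes = append(yes, x)
		} else {
			no = append(no, x)
		}
	}
	return yes, no
}

func main() {
	evens, odds := Partition([]int{1, 2, 3, 4}, func(x int) bool { return x%2 == 0 })
	fmt.Println(evens, odds) // [2 4] [1 3]
}
```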
Sum types and pattern matching would be great, and I think they would be much more useful than generics. Unfortunately, Go lacking generics has become a legitimate meme. An article evangelizing .NET recently popped up here and annoyingly used Go’s slowness to get generics as a legitimate point.
I don’t think generics will greatly damage what Go has going for it, but I hope it also doesn’t block the way for better error handling, sum types, and other things that could potentially improve code robustness.
I guess I disagree about the legitimacy of the meme or any point that could be made from Go's "slowness to get generics". The only people who think that generics will be a slam-dunk net benefit for Go are the folks who can't articulate the costs associated with generics. The actual net benefit is so small and the error margins are so wide that it is entirely reasonable that Go didn't rush generics, instead prioritizing larger and more certain gains.
I mean legitimate meme as in, it is a point that is widely assumed true at this point, not as in it is actually accurate in its assessment. I never really thought Go needed generics.
What do you build where you see that "healthy margin"? I build mostly web apps and API services (boring business stuff). Reaching for Go or PHP is about the same (read JSON, business rules, database). For this stuff PHP was hardly better than CGI Perl, except for application performance, where PHP was better than CGI and Go is better than both. What am I missing in how I use the tools, such that I don't see that margin? Maybe I'm measuring the wrong thing?
I've only worked in PHP once, in college, so take the following with whatever level of credibility you wish. I generally work in infosec related projects that often come at strange intersections, such as embedded systems and web applications or real time operating systems in safety critical situations which must deal with data from unreliable or weakly secured sources (802.11, LoRa etc.) So, mostly my experience with Go is the need to write one-off tools, which usually provide either a CLI or web-ui, to inspect things embedded devices are doing as a sort of "black box."
I'm sure generics are going to be useful in some instances. Also of note 1.18 is bringing built in fuzzing, which is nice.
Where I'm getting my margin of benefit over writing tooling in say Python or Ruby, or other languages that fit this domain:
1. Fewer choices: The build system, module layout, and networking APIs don't lead me down the road of trying to figure out which library is best, thereby wasting time learning different libraries. I have never had any desire to google around for external libraries etc. Everything seems to just work fine.
2. Clear design intent: Once you've figured out what you're supposed to do with language idioms, like passing around ReadClosers or whatever it becomes super easy to read code written by other people who've also come to understand the intent. I don't notice this until I try to go back to doing something in Python and remember how vastly many approaches folks take to the same problem (like how should a database object be passed around.)
3. Works for my workflow: I spend most of my life in terminals fussing with serial communication, or fighting with mqtt or an APK that won't load like I expect, etc. The tooling with Go for nvim is incredible. I've found that now that I've gotten fairly fluent in the language I will often crack off small scripts in Go to automate dumb things I don't want to do in Bash or whatever, this is maybe an anti-pattern, depending on who you talk to.
Summary: My big margin over other languages just comes from consistency of design and interfaces and the fact that Go integrates with my workflow. It's very easy to figure out how to do a thing the first time I'm working on it. It's also easy to throw together a one off tool in an afternoon, use it for two weeks, push it up to my github and forget I did it until the next time I need it.
> I've found that now that I've gotten fairly fluent in the language I will often crack off small scripts in Go to automate dumb things I don't want to do in Bash or whatever, this is maybe an anti-pattern, depending on who you talk to.
I do the same. Most everything you need for quick scripting against services and the like is built in. No hunting down libraries. Then the build output is a fully contained binary that can run on my m1 mac, x86 server, whatever.
> It's also easy to throw together a one off tool in an afternoon, use it for two weeks, push it up to my github and forget I did it until the next time I need it.
And when you do go back to it, the code is easy to read and understand.
Comparing Go and PHP, as a compiled language Go's performance is undeniably better. Plus it has a cleaner design without all the cruft that accumulated in PHP over the decades (in PHP's favor however, you have utility functions that make it really easy to accomplish basic tasks, such as `json_decode` and `file_get_contents`, which are not so straightforward in Go).
One example I'm currently confronted with: I'm writing a long-running backend task in PHP, and since the rest of the backend is based on Laravel, it's using Laravel too. The script doesn't really do much, and as far as I can see there should be no memory leaks, but unfortunately it can still only run for ~ 3 hours before it quits because of running out of memory. I'm sure it will be real fun debugging that. Makes me want to switch to Go...
It's entirely possible that for your purposes, Go doesn't yield much of a benefit. I see that "healthy margin" fall out of a bunch of different aspects rather than one big aspect:
1. Working with basic static type support helps me to not worry about silly errors which helps me move faster. Beyond that, it helps team members understand each other's code correctly--when working with dynamically typed languages, type documentation would often be incomplete or become outdated/incorrect and I would spend a lot of time trying to understand the correct type. Moreover, static typing enables a lot of IDE tools that save a ton of time (e.g., goto definition). This is probably more of an advantage for more complex applications rather than simpler CRUD apps.
2. Performance. I've worked on a lot of Python projects where we've spent a lot of time optimizing, and all of the options for optimizing Python have hidden pitfalls (e.g., the typical "just rewrite the slow parts in multiprocessing/C/etc" often ends up slowing your program down because the costs of marshaling data exceed the gains, never mind the cost of maintaining that code). Idiomatic, unoptimized Go is already 10-100X faster than Python, and you can pretty easily squeeze more performance out of it via parallelism or moving allocations out of tight loops. If application performance isn't important to you, then this probably won't be particularly compelling.
3. Deployment artifacts. By default, Go compiles to single, reasonably small, static binaries so you can pretty easily build and distribute command line tools to your entire team, you can build tiny Docker images (tiny images mean shorter iteration loop in a cloud environment), etc.
4. Simpler Docker development workflow. With dynamic languages, there's not a great way to iterate. Normally, you'd want to mount your development directory in a volume, but Docker Desktop (or whatever it's called these days) will eat all available CPU marshaling file system events across the VM guest/host boundary, so you end up having to do increasingly convoluted things.
5. Employing and onboarding. Anyone from any language can be productive in Go in a few hours. No need to restrict your hiring pool to "Go developers". Moreover, for other languages, it's not sufficient to merely know the language, you may also have to know the framework, IDE, build tooling, etc that the team uses. The ramp up time can be significant, and this can be a meaningful problem in an industry where people change teams or even companies after a few months or years.
This list is pretty narrowly fixated around dynamic languages because I've done more Python development than I have Java/etc and also because your point of reference is PHP. It's hard to compare Go generally to all other languages.
It was fun to read your response compared to mine :) You mentioned some things I missed, and we had pretty different reasons. Just shows a well made tool carries a variety of values for a variety of users.
> It seems to me that the intensity with which some people fixated on the absence of generics cannot be explained just by frustration with writing non-generic code, which by all accounts was annoying but not overwhelmingly so.
You speak for yourself here! Miles of copy-pasted code, slightly altered who knows how, impossible to refactor without either using code generation or eschewing type safety altogether: that was my experience. It's impossible to write entire classes of general-purpose libraries with type safety! It's just beyond belief to me that "copy paste" is an actual, bona fide best practice in the community, or that a `map` function is inexpressible!
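For what it's worth, with Go 1.18's type parameters that `map` function finally is expressible. A minimal sketch (the `Map` name here is just illustrative, not a stdlib function):

```go
package main

import "fmt"

// Map applies f to every element of a slice and returns the results.
// Before Go 1.18 this could not be written with type safety.
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func main() {
	labels := Map([]int{1, 2, 3}, func(n int) string {
		return fmt.Sprintf("#%d", n)
	})
	fmt.Println(labels) // [#1 #2 #3]
}
```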
> There's also a bit of an anti-expertise "who are you, Go creators, to say you know better than me how to design this program/engage in software development?"
Go the compiler/runtime is super impressive! Cross-compiling small binaries with a switch is awesome, so kudos! Go, the language, on the other hand, sucks; it actually sucks so much that, historically, we couldn't even write libraries to work around the things that suck. That is frustrating!
> I would like to understand a bit more about where a lot of the Go criticism comes from.
Go is really a souped-up version of C. It's a design rooted in the 70s with some fixes to make it a good language for writing small networked apps.
Insofar as that goes¹, the language is fine.
However the creators of Go responded to critiques of the language in a patronizing manner and talked down to their own programming community at Google as being unable to handle complex languages.
Basically the Go community got the exchange off on the wrong foot. Personally, I sense egos were at stake. They created a language which is simple by design, but then still wanted to claim a sort of opinionated superiority for it.
If they'd said, "oh go is just a simple little language we enjoy using for such and such, perhaps you will too" then people who don't like Go wouldn't have felt provoked.
I’m glad you brought this up, because I agree that Rob Pike's quote about Go being simple for average programmers has been very provocative: it’s been interpreted as “Google devs are too stupid for a good language like Haskell, so if you use go it’s because you’re stupid too”.
I’m partial to a different interpretation, that’s more like “go doesn’t require as much thinking as Haskell, so you can use your thinking for the problem you’re trying to solve instead of the language”.
With that interpretation, go is better not just for stupid programmers, but even for the smartest programmers.
That line of thinking only works up to a certain point. I could say assembly language is simpler, now you don't have to think about functions or basic blocks, you can save your thinking for the problem you're trying to solve! But sometimes pushing complexity into the language instead of onto the users is the better way to go. Higher-level abstractions make things easier. I would rather spend my thought cycles on the business problem, than on reimplementing sum types, or casting interface{} everywhere, or propagating errors by hand.
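To make the `interface{}` point concrete, here's a hedged sketch of the two styles side by side (Go 1.18+; `oldMax`/`Max` are hypothetical names, not library functions):

```go
package main

import "fmt"

// Pre-generics style: accept interface{} and downcast at runtime.
// A type error surfaces only when the program runs.
func oldMax(a, b interface{}) interface{} {
	if a.(int) > b.(int) { // panics if a or b isn't an int
		return a
	}
	return b
}

// Generics style: the same idea, but checked at compile time,
// and it works for any of the listed types.
func Max[T int | int64 | float64 | string](a, b T) T {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(oldMax(3, 5))  // 5, but only safe by convention
	fmt.Println(Max("a", "b")) // b, and Max(3, "b") won't compile
}
```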
Certainly, whether or not go has made the right trade offs on simplicity is arguable. I’m not certain whether those features you mentioned would be worth adding to the language or not, but I do know that my experience interacting with Go code has been much easier than my experience interacting with Java code. Haskell & Rust have been fun for me to play with, but they were also harder for me to work with.
Anyways, I can’t give the final word on this question, but those are my two cents.
> I’m partial to a different interpretation, that’s more like “go doesn’t require as much thinking as Haskell, so you can use your thinking for the problem you’re trying to solve instead of the language”.
I think you're being too charitable here. Rob Pike is a very smart and articulate guy. If he had wanted to say "go doesn’t require as much thinking as Haskell, so you can use your thinking for the problem you’re trying to solve instead of the language", then he would have. Instead he took a dig at Google employees. Make of that what you will, but I don't think it's a case of him meaning anything other than exactly what he said.
> Basically the Go community got the exchange off on the wrong foot.
Indeed. Then there was that whole controversy over the naming collision with the Go! language. Basically there's an unspoken etiquette in the PL community that there should be zero name conflicts between languages, as there are enough good names out there, it really shouldn't be an issue. The Go! language had been around for a long time, and I realize it was an obscure, little known language, but that's true for 99% of languages out there. So the idea that Google can just invent a language out of the blue with the same name as yours, after you've been using the name for a decade, and then just use their size and clout to essentially destroy your language is.... well that's unsettling to people in the PL community. The way they handled the situation didn't exactly win over hearts:
> The naming similarity is unfortunate. However, there are many computing products and services named Go. In the 11 months since our release, there has been minimal confusion of the two languages, so we are closing this issue. Status changed to Unfortunate.
> However the creators of Go responded to critiques of the language in a patronizing manner and talked down to their own programming community at Google as being unable to handle complex languages.
Can you provide examples of that? Because the closest thing I can remember is Go creators saying that C++ had too many features interacting in weird ways, and that they wanted to avoid that. Which is a perfectly normal design goal.
> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
This attitude, I think, is the reason most experienced developers I've worked with are put off by the language. You don't have to be a "researcher" to understand sum types or generics. There's a large part of the industry that thinks we should limit ourselves to concepts that can be understood by a beginner, and I think it's holding the industry back. I can't think of another industry where people think this way.
That seems like a pragmatic and honest way of looking at software engineering, consistent with Russ Cox's later thoughts on the difference between programming and software engineering[1]. It sounds to me less like “this is a language for beginners and underperformers” and more like “this is a language using which large teams of programmers with varying levels of experience can be productive together”.
> This attitude, I think, is the reason most experienced developers I've worked with are put off by the language.
This is not my experience. Of the six team leads with whom I've discussed the language (after they've had some experience with it) only one was put off by it. The rest were either glad that “something simpler is finally here” or were of the opinion that Go is “just another language”.
(Interestingly enough, the one person who was not impressed was a big fan of purely functional programming, while all others were C++/Java/Ruby kind of people. Which is consistent with my own personal observation that fans of purely functional languages tend to dislike Go, for the most part.)
> Which is consistent with my own personal observation that fans of purely functional languages tend to dislike Go, for the most part.)
It's not only your personal observation (though I would remove the "purely" from "purely functional languages"). A good indication of that: in the Go 2020 survey, the 6 most popular responses to "Which critical language features do you need that are not available in Go?" are features that functional programming languages have. All of these features are in ML and its descendants (SML, OCaml, Haskell, Scala, etc).
Of the top 6 features in this chart, I see only two that are associated with functional languages: the vague "functional language features" catch-all, and ADTs. The rest is commonly found in mainstream imperative languages today.
My point is that functional languages have all these features, even if some are found in non-functional languages these days. Even functional languages themselves are often not purely functional: you can do OO in Scala, and imperative and OO in OCaml. OCaml is actually a good example to talk about, as its recent developments are all about adding multicore support, better concurrency and better tooling, things that Go was very good at from the start.
In a way, you can see that the people using these seemingly very different languages are converging on the same "ideal". And that's not a surprise: both languages have a very Unix-y origin, and an origin in Pascal/Modula. Go added CSP to that; OCaml added ML. These days OCaml is adding things to better handle the CSP part, and some people are asking for Go to better handle the ML part.
As a fan of pure functional programming who very much dislikes Go, I can vouch for this statement. But in the spirit of giving credit where due: one thing the pure functional crowd can learn from Go is the value of making really good tooling and of genuinely trying to make easy things easy.
> There's a large part of the industry that thinks we should limit ourselves to concepts that can be understood by a beginner
I've never fully understood this weird obsession either. You'd never hear a group of master craftsmen like plumbers or masons talking about making their tools and trade more "beginner friendly". They naturally expect beginners to learn the trade and eventually become masters themselves.
The cynic in me thinks it's large corporations that are pushing the whole "beginner friendly" narrative as a way to keep employees both 1) lower-skilled and 2) productive. If you help beginners develop into intermediate and then advanced programmers, guess what? You have to pay them more.
> If you help beginners develop into intermediate and then advanced programmers, guess what? You have to pay them more.
That doesn't seem like a good argument with regards to Go specifically, since Go is among the most high-paying technologies, at least according to Stack Overflow surveys[1]. At the same level as “LISP” (which Lisp is it, StackOverflow?) and only slightly below Rust and Scala, both of which are way more complex languages.
I like go just fine, and I'm learning it as we speak. But I think the high pay scale has more to do with *where* it's being used more than anything else.
Go is very popular in SV, whereas most fortune 500 and other legacy companies are java world.
My issue is more with this current obsession that everything must be beginner friendly. The fact is beginners don't stay beginners very long, so it's a dumb group to optimize for.
I don't think this is a very good analogy. The craftsmen I know use basic, simple tools that work well. They just handle them masterfully. The more complex tools are often for beginners who need the hand holding.
Sorry, I can't find any, since Google results seem to get worse year over year.
Basically I gleaned this impression from the Go message groups more than 5 years ago.
Go is pretty nice, but IMHO, it's no masterpiece.
There's a reason we keep copying C like syntax, and it's not just familiarity.
C could be criticized on many points, but in my opinion it was a brilliant case of language design: only 30-odd keywords, a stdlib that was a separate thing (not common at the time), and great as a systems programming language.
If you think of C of the 1970s as a sort of souped up macro assembler, you're not far from the truth, and it was wildly successful, also, because of what it left out.
Go might have had similar ambitions, but the execution was not thought out as well. If you want to design a truly great language, it turns out it's not enough to merely leave things out.
I got the sense that the Go people wanted to be taken seriously on the same level as great achievements like C and UNIX, but the creators quickly saw through the incoming critiques that they hadn't pulled this off.
You know when you create something and then receive criticism? If the criticism is clearly wrong or due to a misunderstanding, you're typically not upset about it. You explain the misunderstanding, and typically the person critiquing says, "ah ok..." and moves on.
But the critiques which really get to you are usually the ones which you know deep down are right. I think maybe this is why the creators of Go seemed so touchy about it.
EDIT: And for the record I program in Java and Rust mostly, a bit of Python too. I hate C++ (a sprawling mess) and also don't give a crap about Haskell or monads.
> I got the sense that the Go people wanted to be taken seriously on the same level as great achievements like C and UNIX, but the creators quickly saw through the incoming critiques that they hadn't pulled this off.
One of the creators of Go is Ken Thompson.
"I did the first of two or three versions of UNIX all alone. And Dennis became an evangelist. Then there was a rewrite in a higher-level language that would come to be called C. He worked mostly on the language and on the I/O system, and I worked on all the rest of the operating system."
But with generics, methods and interfaces, proper strings, maps etc., first-class functions, built-in concurrency, a module system, automatic memory management, a well-rounded standard library and so so so much more.
Not even that -- it's a souped-up version of Oberon with these features. (Although of course, out of those, Oberon had already had a module system and automatic memory management.)
Go has a lot more in common with Alef or Limbo than it does with C, but close enough. It feels like it's been forever stuck in the early 1990s, and now that it's gotten generics it has perhaps reached the late 1990s.
I like parts of Go. It's refreshing. I'm excited for this release. What I don't like is how its weak parts are consistently defended instead of having its shortcomings be acknowledged.
Before any rational analysis and criticism can begin, we need to look at the context of these arguments. I'll do that through some of the comments on this post.
- Issues people bring up are dismissed as non-issues or as personal preference / opinionated, regardless of their impact.
- Critics are asked for rational analysis, but proponents are allowed to use something as airy as "simplicity", a word that has essentially lost all meaning in programming.
- Missteps by representatives / stewards of Go are given the most charitable interpretation, critics less so.
- Finally, in so many cases, counterarguments to critics are dripping in condescension.
I don't think folks are going to get rational analysis. I also don't think they actually want it, whether they realize it or not.
In case anyone is interested, here are my technical critiques of Go:
- Boilerplate increases the surface area that a bug can hide in. The fact that most of the boilerplate is around error handling is especially worrying. Yes, the flexibility of "Errors are values"[1] is nice. But I also don't know any languages where errors _aren't_ values, so the main value add seems to be reduced boilerplate compared to individual try/catches around each function call.
- Go manages to repeat the Billion Dollar Mistake[2]. Things like methods working on nil receivers are cool, but not worth the danger or messiness.
- Even worse, for a language that claims to value simplicity, the fact that nil sometimes doesn't equal nil[3] is... honestly, I can only consider that a bug.
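A minimal reproduction of that quirk (hypothetical names; this is the well-known "typed nil in an interface" behavior):

```go
package main

import "fmt"

type myErr struct{}

func (*myErr) Error() string { return "boom" }

// mayFail returns a nil *myErr wrapped in the error interface.
// The interface value is NOT nil: it carries the type *myErr
// with a nil pointer inside.
func mayFail() error {
	var e *myErr // nil pointer
	return e
}

func main() {
	err := mayFail()
	fmt.Println(err == nil) // false, even though the pointer is nil
}
```

The fix is to return a literal `nil` on the success path, but the compiler does nothing to steer you away from the trap.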
> But I also don't know any languages where errors _aren't_ values
(look, i know you understand how exceptions work, please bear with me)
yes, Exception objects are technically values, but you don't return them to the caller; you throw them to the, uh, catcher. basically, you get a special, second way of returning something that bypasses the normal one! but in the EaV approach, errors just are returned like every other thing.
the uniformity of EaV comes in handy when doing generic things like mapping a function over a collection - don't have to worry if something will throw, because it's just a value! and that lets you go to some pretty powerful places w.r.t abstraction: see e.g. haskells `traverse`.
but yeah, EaV needs some syntactic sugar to reach ergonomic par with exceptions, otherwise you get if-err-not-nil soup :P
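A sketch of that soup, for anyone who hasn't waded in it (the `sum` helper here is just an illustration):

```go
package main

import (
	"fmt"
	"strconv"
)

// sum adds up a slice of numeric strings. Every fallible call gets
// the same three lines of plumbing — the "if-err-not-nil soup".
func sum(fields []string) (int, error) {
	total := 0
	for _, f := range fields {
		n, err := strconv.Atoi(f)
		if err != nil {
			return 0, fmt.Errorf("bad field %q: %w", f, err)
		}
		total += n
	}
	return total, nil
}

func main() {
	fmt.Println(sum([]string{"1", "2", "3"})) // 6 <nil>
	fmt.Println(sum([]string{"1", "x"}))      // 0 and a wrapped error
}
```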
Right, so the concept in Go is misnamed. It's not Errors are Values, it's Errors Have Normal Control Flow.
Returning errors makes many things more manageable, definitely. But where they really shine, like in the mapping example, isn't possible in Go. Unless I'm mistaken with how go generics work.
(By the way, I'm a huge fan of how error handling works in Rust and other related functional languages. Definitely not advocating for the classic way of doing Exceptions).
> But where they really shine, like in the mapping example, isn't possible in Go.
oh yeah, definitely! Go's version of EaV with multiple returns is pretty lackluster compared to a proper Result type. afaict it's kind of "the worst of both worlds" -- all of the boilerplate of plumbing errors manually w/ none of the benefits.
Definitely. But Go's boilerplate for error handling is an overcorrection from implicit propagation. As folks have mentioned, Rust's `?` or Swift's `try` strike a nice middle ground.
That's what I'm saying - any language with exceptions can be treated as if it was all Rust-style Result<T, E>, but with implicit ? after every expression. Well, and E is an open variant (like e.g. extensible variant types in OCaml), unless the language has checked exceptions like Java.
This comment has the tone of being genuinely inquisitive, yet it mostly comes up with negative and depraved reasons as to why someone might dislike $lang.
- Rational reasons (frustrations with the language design) are briefly acknowledged but then for some reason left by the wayside
- It is suggested that people don't have principled objections to $lang; they will "latch onto" something else if $misfeature is fixed
- Simplicity, a universally praised quality, might be the problem
- ... Because it can "humiliate" someone who has wasted time on needless complexity
- "$lang has opinions", which all languages have whether they acknowledge them or not. Why should this bug people especially when it comes to $lang?
- "Anti-expertise" for some reason, even though creating any "enterprise-ready" language takes expertise and many years of effort. (This also seems to be at odds with the "anti-simplicity" point?)
- ... But aren't then people who argue for $lang also anti-expertise? They are after all taking a stance on something that they are not experts at; they should take a neutral stance and defer to the experts
I would say that it is rude to suggest that someone might not really be arguing against X and instead ask what they are really upset about, since the follow-up questions and implications are absolutely never flattering. They inevitably just dive into the gutter with suggestions like "maybe simplicity humiliates you".
This needs to be higher up, and summarizes my main cultural oppositions to golang. I'll also add:
- Extensive laundry lists of frustrations, sharp edges, antipatterns, and ways the language makes it trivial to write incorrect code are dismissed as nits or minor annoyances.
- Missing features that have no way of being worked around are dismissed off-hand as unnecessary complexity.
- The language authors repeatedly ignore mistakes learned from other language projects and rush headlong right back into them, with predictable consequences.
- Verbosity is mistaken for "simplicity", even though bugs scaling roughly linearly with lines of code is quite literally one of the few reliable bits of data we actually have in this industry.
You may think these reasons are negative or depraved, but they are none the less legitimate reasons that motivate real people. Why should we pretend they can't possibly be a factor?
I work at a Go shop that was a Python + RoR shop when I joined. I don't dispute Go's benefits at our scale. As a matter of personal preference, I don't enjoy Go.
Go is Blub (http://www.paulgraham.com/avg.html). I think the industry is in a place where Blub makes sense! Lots of VC money is floating around, and schools are training a lot of people to hire with that money. Commercially relevant ideas most often succeed by raising a ton of money, then hiring a ton of people to realize the vision. In the Blub essay's terms, a modern company doesn't want to beat the average, they want to reliably hit the average with the 10s or 100s of people they're hiring.
This resembles the conditions at Google when Golang was conceived, a lot more than the "4 people want to max their individual leverages, whilst eating rice and beans to extend their runway" world the Blub essay was inspired by.
Go wins because it's generally simple and consistent. People like simplicity rather than kitchen-sink languages. And this doesn't only apply to the grammar and type system, but also to the tooling--Go doesn't require you to:
* Pick a testing framework / runner
* Learn a DSL to manage dependencies
* Learn how to configure the build system to ship static binaries
* Debate code formatting rules
* Build a CI pipeline to build and publish source code packages
* Build a CI pipeline to build and publish documentation packages
> Go wins because it's generally simple and consistent. People like simplicity rather than kitchen-sink languages.
Rust often gets accused of being a "kitchen sink" language, but the Rust folks have tried doing the simple thing and it came with severe limitations - Rust 0.x was pretty much indistinguishable from a glorified Go. It even had green threads and GC!
It's no coincidence that they, much like Go, provide a test framework, build system, dependency management, CI, code formatting etc. out of the box. Because these things meaningfully improve productivity.
> Rust often gets accused of being a "kitchen sink" language, but the Rust folks have tried doing the simple thing and it came with severe limitations - Rust 0.x was pretty much indistinguishable from a glorified Go. It even had green threads and GC!
Rust is certainly a big language, but its features work together toward a coherent philosophy/paradigm, so it isn't really what I think of as a "kitchen-sink language". C++, Java, C#, etc have accumulated features from different languages and paradigms based on what was fashionable at the time rather than based on an overarching philosophy or paradigm.
That said, it isn't that Rust's "bigness" absolved it from limitations--it just opted for different limitations: notably a less productive programming model. That tradeoff makes sense in the context of Rust's charter--to maximize safety and performance--but it's not an optimal tradeoff for general-purpose software development (at least not as it exists today, where productivity is king).
> It's no coincidence that they, much like Go, provide a test framework, build system, dependency management, CI, code formatting etc. out of the box. Because these things meaningfully improve productivity.
Agreed. I think Rust gets a lot of tooling and ecosystem stuff right (because these don't require it to compromise on its charter).
I don't know if it's that people like simplicity so much as that simple things should be done simply. If I'm in C and I want some user input, and I want to concatenate that input to another string, I have a minimum of four hours of research ahead of me to avoid adding a terrible vulnerability to my application right off the bat. User input should be simple in the simple case, with complexity hidden behind weird toggles.
Git is great with this. You can get by just fine in 99% of cases with git add, commit, and push and those three could be explained even to a child. However, you can also fine-tune your pipeline just the way you like with Git's endless tweaks.
In fact, Go was designed to enable average programmers to be productive on large systems. You might call that "Blub"; I don't.
In fact, I claim that Paul Graham is completely wrong in that essay. To see why, think about Lisp and Haskell. When Lisp users look at Haskell, they know they're looking down, and they know why. "How can you get anything done in Haskell? It doesn't even have macros." But when Haskell users look at Lisp, they also know that they're looking down, and they know why. "How can you get anything done in Lisp? It doesn't even have a decent type system."
This situation - both languages certain that they're looking down when they look at each other - shows the problem in Graham's analysis. He assumes that languages can be placed on a one-dimensional axis labeled "power". They can't be, and the Lisp/Haskell issue proves it. That renders Graham's analysis invalid.
Instead, it might be better to think of languages as living in a multi-dimensional space. Think of the program you're trying to write as having a vector in that space (not so much the program itself, but what makes it difficult to write). Pick the language that extends the farthest in the direction of the vector defined by the program.
What Go did is recognize that "ability to get a large team of average programmers to maintain and extend a large codebase for decades" was one of the dimensions of the space. That makes it "Blub" by Graham's definition, but I don't think that's meaningful. Instead, if that's what your program needs, pick a language that does that.
> In fact, Go was designed to enable average programmers to be productive on large systems.
The fallacy in that argument is the assumption that effective programming "in the large" requires more features, not fewer. Sure, there are highly dynamic programming languages that cannot meaningfully support programming beyond the scale of a "personal side project". Ironically for Paul Graham's "Blub" argument, Lisp is quite clearly one such language. The same applies to languages such as Python, Ruby or ECMAScript, which would have been among the main alternatives to Go when it was first released.
And one can also fairly argue that Go has in fact made it simpler and easier to "program in the large" compared to C++, Java or C#, which were the other major alternatives to Go around that same time. So Pike's argument is not wrong as far as it goes. It is merely of limited applicability, because the jury is very much still out wrt. more recent and arguably higher-level languages like ReasonML, Kotlin, Dart, Erlang and yes, Rust. It's silly to conflate these languages with the likes of Python or C++, that is indeed the kind of argument that "Blub" was written for!
When compared to, say, OCaml, Go definitely lacks some of the niceties of a type system (even now, with generics). With OCaml I use recursion in almost every case where I need to reduce/map something. Pattern matching plus type constructors are VERY powerful and produce very safe code.
That said, I really like Go as a language. It lacks many higher-level abstractions, and the coding style is very procedural. It's boring, and it stops you from being too clever. Simply put, I get shit DONE in Go without much thinking. Sometimes I work for hours in OCaml to get a semi-"easy" function to work correctly. With Go, I just write and things usually work.
Go has very good concurrency primitives for 95% of use cases. Ad hoc concurrency in OCaml is still a pain (the new effect system WILL make this better), as you need to pick and choose between Lwt and Async and then pray that libraries support the one you chose.
For me OCaml is still a language (that I love) for "being smart about it". And good OCaml code is truly beautiful and safe. Go, on the other hand, is about getting shit done. I can be productive in Go and then refactor later.
I really like that Go has such a good core that comes with the essentials, so you rarely need third-party dependencies (I would not write a database driver myself).
Go sits in a well-designed spot: it has the right tools and brings good-enough safety to the average app.
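The "good concurrency primitives" claim is cheap to demonstrate: goroutines and channels are in the language itself, with no library to pick. A small sketch (`sumSquares` is just an illustrative name):

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares fans each job out to its own goroutine, collects the
// partial results over a buffered channel, and sums them.
func sumSquares(jobs []int) int {
	results := make(chan int, len(jobs))
	var wg sync.WaitGroup
	for _, j := range jobs {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results <- n * n
		}(j)
	}
	wg.Wait()
	close(results)
	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(sumSquares([]int{1, 2, 3, 4})) // 30
}
```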
Quite frankly, Paul's article makes little sense to me. Unless you're trying to code in something like Brainfuck, the choice of language A vs language B is of little concern, in my experience. I will of course exclude cases where a language is simply and obviously unsuitable for the domain: writing a low-level video codec in Python is definitely not the best idea. The quality of the programmers, on the other hand, matters more.
Paul is famous for his essay about Lisp - arguing that his startup beat other startups because Lisp is such a better language than C++ or Java which his competitors were using.
It's unclear to me whether this is really the meaningful reason Paul's company succeeded (I'd say obsession with programming languages can be a counter-indicator of productivity), but it clearly resonated with a lot of people, since that essay is arguably the on-ramp into Paul's fame. A lot of programmers read that essay back in 2001, and his popularity as an investor who understood engineers arguably catapulted Y Combinator.
I think that whatever his achievements, they are the result of his business savvy and ability to execute. Praising Lisp just shows that it is his favorite language and that he's using his clout to promote it; nothing wrong with that, of course. But as I've already said, I believe he could have used plenty of other languages with exactly the same outcome, business-wise.
I don't think he was obsessed with languages. He picked the one he knew best.
I also think the whole Blub thing becomes less and less relevant with every passing day. Every language has evolved significantly over the last 20 years - since he wrote that article - much less the last 26 years since he started Viaweb.
I personally prefer FP languages. I don't think they give me magical superpowers... okay, well, maybe BEAM languages in certain circumstances. But I also know people that work magic with C and do things I could never dream of.
I would suggest that the core complaint is the lack of ability to create abstractions, which leads to writing a lot of boilerplate. That in turn makes Go code quite hard to read for anyone who thinks about programming from a top-down perspective and doesn't, for example, want to have to decipher loop indexes to determine that the code is doing a .filter().map() operation.
I disagree: Go is easier to read than most modern languages. It's easier because it has few keywords and did not add major features in the last 15 years.
I can't say the same for Java / C#, Rust / C++, etc.
I think that's a valid perspective. But there are many of us who don't find Go harder to read than those languages (well, maybe not C++). I think we must recognise that to some extent simplicity is subjective. What is easy to one person isn't necessarily easy to another and vice-versa.
Personally, I think in abstractions. So a language that represents those directly is much easier for me to reason about than one that makes me constantly translate those low-level procedural code and back again.
The key abstractions that I want access to are standard ones like "map", "filter", "reduce", and data structures like "Option", "Result", "Future", "ConcurrentHashMap".
I suspect that this is like arguing that if the Latin alphabet is more practical than the Chinese script, we should just jump straight to binary symbols. The fact that binary symbols would make writing impractical again doesn't negate the fact that the Latin alphabet is more practical than the Chinese script.
> So, for those of you who are willing to explore the part of this that goes beyond a simple rational analysis and criticism of language design, whats bugging you?
I think the reason Go gets a lot of criticism is that at a language level it misses a lot of constructs and features. So for everything it doesn't have, you'll find someone who is used to leveraging those constructs or features when they program and who favors that style; they will be bothered that Go doesn't have them, as it makes it less enjoyable for them to use Go.
Now normally this wouldn't be an issue, you'd say, just don't use Go. But here's the thing, the best part about Go isn't the language, but the compiler and runtime. That's the combination which attracts a lot of criticism, because everyone would like to be able to use the Go compiler and runtime since it creates beautifully efficient, small and low memory binaries with cross-compilation, and it has a good set of libraries.
I'd say on that front Go is unparalleled honestly, I can't think of any other compiler that can produce binaries for various targets that easily. Its cross-compilation is simply excellent.
That means everyone would want to use Go, but not because of the language, but because of the compiler and runtime. As all these people come to Go, they're forced to use the language, and very quickly realize this doesn't have the features or constructs that they'd want to program with, and there goes the criticism.
The trade-off is just hard to swallow, if you come from a more expressive and powerful language, with more features or styles, the Go language will be a harsh reality of how minimal it all is, so much so that they decide not to use Go.
So what's happening is a bunch of people are waiting for Go to have feature X,Y,Z before they jump back to Go, and that creates loud voices, and the reason is that they want to use Go the same way they currently program, but have access to its awesome compiler and runtime.
> I'd say on that front Go is unparalleled honestly, I can't think of any other compiler that can produce binaries for various targets that easily. Its cross-compilation is simply excellent.
Zig, perhaps? My impression is that it's similarly capable.
> since it creates beautifully efficient, small and low memory binaries with cross-compilation, and it has a good set of libraries.
Other than good cross-compilation, I really don’t see any of that. Even niche languages that have been around forever, like D, hit this exact trio. Also, the performance isn't that good to begin with; for more complex scenarios, the comparatively dumb GC of Go will be a bottleneck compared to something like Java.
Cross-compilation is a big deal I'd say, I wouldn't brush it off as insignificant.
Even if I ignore cross-compilation, what other language has easy and fast native compilation, that produces relatively performant and low memory programs, is properly garbage collected, and has a good library ecosystem as well as a good concurrency story?
What people don't realize is that the simplicity of the language also contributes to the compilation speed, so adding features to the language at the same rate as other projects (that I don't want to mention here) would jeopardize one of Go's major advantages.
Our dev team is made of meat. Forcing us to read and write boilerplate is not only demoralizing and error-prone but also millions of times slower than having the compiler generate that code.
This might have been a good tradeoff when they could barely fit a C compiler in a PDP-7, but that was half a century ago.
The syntactic choice of using square brackets instead of angle brackets for generics appears to be due to optimising for parsing. User clarity is decreased to allow for parser optimisation, IMHO.
Angle brackets require unbounded parser look-ahead or type information in certain situations (see the end of this e-mail for an example). This leaves us with parentheses and square brackets. Unadorned square brackets cause ambiguities in type declarations of arrays and slices, and to a lesser extent when parsing index expressions. Thus, early on in the design, we settled on parentheses as they seemed to provide a Go-like feel and appeared to have the fewest problems.
As it turned out, to make parentheses work well and for backward-compatibility, we had to introduce the type keyword in type parameter lists. Eventually, we found additional parsing ambiguities in parameter lists, composite literals, and embedded types which required more parentheses to resolve them. Still, we decided to proceed with parentheses in order to focus on the bigger design issues.
Why not use the syntax F<T> like C++ and Java?
When parsing code within a function, such as v := F<T>, at the point of seeing the < it's ambiguous whether we are seeing a type instantiation or an expression using the < operator. Resolving that requires effectively unbounded lookahead. In general we strive to keep the Go parser efficient.
Why not use the syntax F[T]?
When parsing a type declaration type A [T] int it's ambiguous whether this is a generic type defined (uselessly) as int or whether it is an array type with T elements. However, this could be addressed by requiring type A [type T] int for a generic type.
Aside: it isn’t clear why not just make whitespace before the angle bracket significant - their examples of parsing problems only arise if the code is not formatted “correctly” using gofmt. I think making the user type a space before greater-than would fix the parsing problem, and that seems like a reasonable compromise to reduce confusion between Go and most other languages. Square brackets usually mean indexing or arrays, and Go confusingly breaks that pattern.
F<T> versus f < t >
are syntactically different to read anyway (whitespace is significant to programmers and writers for syntax, for example).
How does using [] for generics decrease user clarity? They're both just grouping characters. If the argument is that [] is easy to confuse with indexing, then by the same token <> is also easy to confuse with comparison operators.
One could also argue that [] is the better choice precisely because it is similar to indexing a map - a generic type can be seen as a map of types to other types.
If you look at some of the contortions that e.g. C# has to go through when evolving the language, due to numerous ambiguities between <> used for types and <> used in expressions, I think it's still a very sensible choice.
Square brackets in go have their own parsing issues (see my original comment). I haven’t found much discussion on how any parsing issues were resolved (presumably any issues needed resolution for backwards compatibility?).
As an outsider it appears to me that the choice for square brackets has been made mostly for stylistic non-technical reasons, but perhaps politically that would be difficult to rationalise to the user-base! Making whitespace significant before comparison/shift operators (matching the gofmt layout) surely solves the stated practical issue of lookahead parsing; although I admit many language geeks would say significant space characters were an ugly solution.
To agree with you: there is a really good discussion on the downsides of <> here —https://lobste.rs/s/fmcviy/language_designers_stop_using_for — certainly square brackets are used for generics in other languages (Scala, Python, Nim, Eiffel).
Regardless, the point is moot, given that the syntax is now concrete!
> then by the same token <> is also easy to confuse with comparison operators
Not really, given that comparison/shift operators are not used in types, whereas [] is used in types. Perhaps go-lang wanted to reserve angle brackets for constraint operators a la Scala <: et al. </wink>.
Disclaimer: I have little experience with generics or language design. I am just very curious if there is a divergence between the stated reasons for the design, versus any unstated reasons.
I have an irrational dislike of go because for every X it's missing the community's answer is "you don't need X" (until they decide you do). Package management, generics, sum types, decent error handling, etc. It's an arrogant approach that rubs me the wrong way. It reminds me of "You're holding it wrong" and seems ingrained in the go ethos.
That's not to say I'm not productive in go, but I could be vastly more productive.
I think my biggest non-specific, slightly silly irritation when using Go is just how annoying I find it to write clear code in it, because the focus on simplicity feels like it too often comes at the expense of clarity, correctness, and expressiveness.
I don't like that I'm not able to express simple constraints that I find a really important for improving code quality in other environments – things like "this field cannot be nil and it's a programming error if it is", or "all possible values of this field must be handled".
There are lots of other bits, but I'd say the general pattern is being quite frustrated about having to do and remember things that the computer should be handling for me – such that working in Go always feels like having a small rock in my shoe.
It's a shame, because as a language—and wider ecosystem—it gets so much stuff really right.
This is my critique of go too. It's really easy to write code that works well for all of the cases I can think to test. But it is strictly impossible to write code where I can make guarantees of robustness.
These little warts that prevent that are everywhere: lack of sum types (no Option / Result), half-assed enums, no real ability to make things const, inability to express exhaustive switch statements, having to downcast to `interface{}/any`, nil, typed nil, and on and on and on.
The longer I develop software, the less I care about being able to write software that works and the more I care about being able to write software that doesn't fail. The former is table stakes, and I can throw together something that works in virtually any language. The latter is what not only prevents me from spending nights and weekends fixing problems, but also allows me to finish software and truly move on to new projects without having to constantly play whack-a-mole fixing new issues that keep coming up.
> So, for those of you who are willing to explore the part of this that goes beyond a simple rational analysis and criticism of language design, whats bugging you?
I coded primarily in Go at work for several years, although that was several years ago. I understand that package management and now generics have changed in major ways since then. But at the risk of being out of date, here are some language-level complaints I had at the time, focusing less on convenience/taste and more on things that I felt made it harder to write correct code:
- Nil pointers. Go doesn't have any built-in way of distinguishing between pointers that can be nil and pointers that can't.
- Two kinds of nil. Go distinguishes between typed and untyped nil pointers.
- Automatic zero-initialization of omitted fields. Adding a new field to a struct has a tendency to make existing callsites incorrect without producing any compiler errors.
- Mixing declaration and assignment on the left side of the := operator. Copy-pasting a line from an outer scope to an inner one silently changes assignments into shadowing declarations.
- Calling append() without capturing its return value is always wrong. This mistake is easy for linters to catch in simple cases, but aliasing creates more complicated cases.
- It's easy to make an implicit temporary copy of an object without realizing it, such as by using a value method receiver or by looping over a slice of values.
These are examples of things that made me feel like I was "fighting the language" to try to write correct code. I'd appreciate feedback about any of these that might have changed in the last few years.
I would agree with you in the general case, but I have yet to hear any sort of sane explanation for why it is the way it is - and it's not like a typical tradeoff where something arguably worse would have been chosen otherwise; it's just a small semantic change to a keyword.
It enables one to conditionally set up cleanup within the function. If it was block scoped, you couldn't do something like `if shouldDoSideEffect { defer something() }`
If it was scope-based, there would be just as many people complaining that it isn't function based.
That's slightly different, though. The first is "when this is true, earlier in the function, queue up running something at the end of the function." This is "at the end of the function, if this is true, do something."
They _should_ be identical, unless shouldDoSideEffect changes.
Yes, mutability is bad. One shouldn't change that. But avoiding "you're holding it wrong" footguns was the original goal, right?
Don't get me wrong, having both would be nice. Maybe something like `after` for function scope and `defer` for block scope. But that starts to erode the "simplicity" of Go, I guess.
But with block-scoped defer you'd have to write something like:
if condition_A {
    acquire_A()
} else if condition_B {
    acquire_B()
} else {
    acquire_C()
}
defer func() {
    if condition_A {
        release_A()
    } else if condition_B {
        release_B()
    } else {
        release_C()
    }
}()
Maybe the syntax here could be better with some kind of defer-if added to the language, but it's the non-locality of the cleanup that's the real downside. The whole point of the defer keyword is to let you put cleanup right next to resource acquisition, to make code easier to audit and maintain, and we lose that here. (Also this version might not even be correct, if the conditions change their values over time.)
To be honest, I think Rust's combination of destructors and move semantics is really the ideal way of doing cleanup, with C++ coming pretty close too. But given a defer keyword, I can see situations where having it block-scoped would be painful.
> So, for those of you who are willing to explore the part of this that goes beyond a simple rational analysis and criticism of language design, whats bugging you?
Error handling is a big one for me. This could be generalized to lack of expressivity; it takes an unreasonably large amount of code to accomplish anything. Generics will help with the general case, but they don't do anything for error handling.
Lack of sum types, as you already mention, are another.
Go really likes forcing you to either write a ton of tests or discover your errors in production. More expressive features (sum types, etc) effectively mean that compiler provides those tests to me for free. _Can_ I write them myself? Yes, obviously, but that brings us right back to having to write way too much code to accomplish anything.
Farther down the list is escape analysis. Rather than just giving me control over whether allocation happens inline or indirect, if I have some performance critical submodule, I have to sit there and box with escape analysis; this is not only time consuming, but extremely brittle.
> Error handling is a big one for me. This could be generalized to lack of expressivity;
this has been shown over and over to be incorrect. go programs are within the ballpark of other languages in terms of LOC. usually the examples given are niche use cases that impact an extremely small amount of code.
golang is looking for ways to make it more ergonomic, it just hasn't found a decent path forward yet. interestingly, your sum types complaint might allow for it eventually.
> Go really likes forcing you to either write a ton of tests or discover your errors in production
this is just not true compared to most languages. evidence please. given the prevalence of dynamic languages in the wild today it's most likely the opposite. i know my golang tests tend to be fewer in number than java or any dynamic language.
>Farther down the list is escape analysis. Rather than just giving me control over whether allocation happens inline or indirect
also not true, golang absolutely gives you the ability to control this. but it's guarded by rules. rules that prevent you from screwing up and causing memory issues. this is a good thing.
> It seems to me the next object of hatred is the lack of sum types.
Sum types may be coming in a close future release, since the notation
type T interface {
    int | string
}
that can currently only be used for constraints on type parameters can be “easily” made to be a runtime thing as well. I cannot find the GitHub issue right now (perhaps it wasn't an issue, but a comment in a very long thread, and GitHub is being annoying with those), but it seems like nobody is currently opposed to that, and the only reason it didn't make it into 1.18 is that it's already a pretty big language change.
type Number interface { int | int64 | int32 }
var n Number
The default value for n has to be nil, rather than 0, because Go can't choose between the types (int 0, int64 0, etc.). And of course, you can't do n + x (assuming x is also a Number) because x might be a different kind of number. Still, it's a pretty good simplification from the current restrictions, and it builds well on existing concepts.
I'm not the target market for your question, but I stopped using Go mainly because I felt constrained. I prefer functional programming style and that was very difficult in Go. More than that, I hated having the language dictate to me where my code had to live (this has since been fixed). I also disliked the lack of a package manager (also since fixed). I do dislike the verbosity of Go programs (some of which is forced by the language, some of which is self-inflicted by devs writing very long functions), although this is more of a style choice, and style is something you can get used to.
I don't dislike Go though, and I still use it occasionally. I love the go formatter, and I like the built in channels (although now that I've experienced Elixir/Erlang I think the actor model is pretty great).
I think of it as a flavor of ice cream: some people will like it, others won't. Even if I hated Go, I still believe that the diversity in languages is a good thing and I have yet to see a language that didn't contribute to overall progress.
> I'm not the target market for your question, but I stopped using Go mainly because I felt constrained. I prefer functional programming style and that was very difficult in Go.
Yeah, Go is opinionated. I like functional programming style too (because I like being abstract), but it's hard for me to deny that it's easier for people to read code which is more concrete and standardized (fewer ways to express the same idea) even if there is more boilerplate.
> More than that, I hated having the language dictate to me where my code had to live (this has since been fixed). I also disliked the lack of a package manager (also since fixed).
These things have been fixed for a long time. It sounds like you tried Go when it was quite young.
> I think of it as a flavor of ice cream: some people will like it, others won't.
I agree. To expound on that, I'll also posit that people have different goals in choosing a programming language. Some people want a language that makes them feel clever / express themselves aesthetically and others want a ruggedly practical language / Get Shit Done (I definitely fall into both camps). Go is squarely in the latter camp.
Yes absolutely, I did most of my Go programming in the 1.5 days, so (as I mentioned) some of my dislikes have been fixed.
> but it's hard for me to deny that it's easier for people to read code which is more concrete and standardized (fewer ways to express the same idea) even if there is more boilerplate.
Yes, absolutely agree. When working on a team, especially a big team, that's a huge benefit that shouldn't be overlooked.
> I would like to understand a bit more about where a lot of the Go criticism comes from
Because I loved the language so much, but I feel like it decided to be on the wrong level of abstraction, making many things uselessly frustrating and verbose. What can be very readable code in other languages can be hard to read in Go because of its verbosity. And the lack of generics was very limiting for a while, and many discussions with the core team turned into "what are your use cases that really need generics". So I felt like the language wouldn't evolve the way I wanted, despite its redeeming qualities.
> What can be a very readable code in other languages can be hard to read in go because of its verbosity
I suspect that a lot of people conflate "readable" and "terse" or "abstract". For example, beyond one or two chained methods, a foos.map(...).reduce(...) style quickly becomes unreadable while the equivalent for loop is still easily understood (this is a large part of the reason why complicated list comprehensions are discouraged in many parts of the Python world). It's also why it's often easier to understand error paths in Go than in a language which makes heavy use of exceptions despite differences in boilerplate.
I see where you're coming from with those super smart one-liners (guilty of it when I started programming). But sometimes you can do something nicely in one line instead of three or four, and it makes it easier to parse.
I agree, but those one-liners win by only a little bit when they do win (e.g., it's only a little easier to read foos.map(...) than it is to read the equivalent for loop) but they lose by a lot when they lose (it's a lot harder to read .map(...).filter(...).flat_map(...).reduce(...) than the equivalent for loop). Generics tend to win by a lot in very specific, rare cases--so I appreciate that Go will have better support for those cases, but there's no way the community will restrain themselves outside of those cases (no language community has succeeded in this capacity to date).
I don’t know, I have seen my fair share of indecipherable for loops depending on some side effect shared by two-three loops, where you don’t even stand a chance to understand it.
A more robust type system can help you deal with coordination between various nooks of a codebase.
For example, let's say our platform supports multiple authentication methods. If we need to add a new auth method, how do we know all the spots we need to update? TypeScript can handle this using a sum type:
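The kind of discriminated union being described can be sketched like this; this is a hypothetical reconstruction (the `AuthMethod` variants are invented for illustration):

```typescript
// Each auth method is one variant of a discriminated union.
type AuthMethod =
  | { kind: "password"; username: string }
  | { kind: "oauth"; provider: string }
  | { kind: "apiKey"; key: string };

function describe(auth: AuthMethod): string {
  switch (auth.kind) {
    case "password":
      return `password login for ${auth.username}`;
    case "oauth":
      return `oauth via ${auth.provider}`;
    case "apiKey":
      return "api key";
    default: {
      // If a new variant is added but not handled above, this
      // assignment stops compiling: `auth` is no longer
      // assignable to `never`.
      const unreachable: never = auth;
      return unreachable;
    }
  }
}

console.log(describe({ kind: "oauth", provider: "github" }));
```

Adding a fourth variant to `AuthMethod` turns every non-exhaustive switch into a compile error, which is the "static analysis tells us when we missed a spot" point below.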
With the above approach, static analysis tells us when we missed a spot. That adds a ton of safety and expedites feature work (since engineers don't need to manually find all the spots that need updating).
I did not find this enlightening (and I’ve read it before). This criticism is fixated on what I would consider minor annoyances in using Go, and doesn’t explain why they’re such a big deal.
I think Go's criticisms come from elitism more than anything--folks can't flex their algebraic programming muscles and instead have to do error checking inline or do some WET things due to lack of generics like some amateur.
To me, Go is never the wrong choice, but it may not be the best choice for some projects.
I don't think Go's criticism only comes from advanced type folks.
People criticize that handling errors is manual, annoying and error prone.
People criticize that you can't implement your own new data structures generically: nobody wants to use a non-generic data structure, and Go's own built-in data structures are generic, so users complained that they couldn't do the same.
People criticize that the lack of higher level list comprehension or stream-like API for data manipulation is tedious. Implementing a filter as a for loop every time gets repetitive.
People criticize that duck typed interfaces can make it hard to track what implements what, and if it makes logical sense or not.
Etc.
Those are things Java and Python programmers are used to, for example, so it's not like the criticism is just from Haskell folks.
I'm a Go proponent, but I also feel this. Sometimes I want to be very abstract and clever. I've written a static site generator in Rust, not because a Go version would be too slow or error prone, but because I liked the idea of being gratuitously abstract and efficient. I feel clever and I like the aesthetic of very abstract programs. But this is almost always at-odds with the objectives of software development--very abstract/terse code is often much harder to understand than concrete/verbose code. There are significant material advantages to being able to onboard a developer with zero language experience to a project in a matter of hours rather than weeks or months.
Agreed. I don't think the hate is that hard to understand. If you're coming from a more expressive language (which is most of them), it's a bit of a shock. I got over it and learned to love it, but I've seen others that don't, and I understand.
I'm not sure it's elitism to be looking for type-system features that are well-over a decade old and generally non-controversial and considered vast positives.
I don't think they necessarily need to go crazy in this regard(as simplicity is one of Go's best strengths), but I don't think it's elitism.
> I would like to understand a bit more about where a lot of the Go criticism comes from. Of course some amount of it comes from direct frustrations people have with the language design, but I suspect that doesn't account for all of it. It seems to me that the intensity with which some people fixated on the absence of generics cannot be explained just by frustration with writing non-generic code, which by all accounts was annoying but not overwhelmingly so.
> In any case, it's something I've been trying to figure out for a while and I don't think I have a complete explanation still. Curious to see what others think.
Couldn't the same be said for every programming language, ever? People often spend more energy complaining relative to the pain that they went through; I don't think there is anything specific to Go about this. Just look at every discussion about C, C++, Java, JavaScript, Python, Ruby or even better, PHP.
The silly things that bothered me the last time I tried to use it: 1) lame un-opinionated formatter (no line wrapping); 2) obtuse-but-mandatory directory structure for modules once you move beyond a single file (maybe this has improved?); 3) embarrassingly large binaries.
I like the simplicity of the language. What I did not like was the complete denial of functional patterns for problem solving. The language has first class functions but language's "best practices" and the core library are imperative and all 3rd party libraries (rightly) follow suit.
I appreciate that the language designers had a specific vision for their language. I have much respect for them. They have a lot of experience and expertise. I prefer functional patterns so I decided to discontinue go and learn a different language.
For some, generics will be enough. For others, it won't. It's the nature of people rather than a person - there's lots of us and we all have our own preferences.
Personally, lack of generics never kept me out of Go. I did a lot of programming in it for about a year when 1.0 first dropped. I think it is a fine language, both with and without generics.
I do prefer simpler languages though - those that give me broad tools to accomplish many tasks rather than many tools to accomplish specific tasks.
> I'm sure, but my reading of the winds is that people will find other idiosyncracies of Go to latch onto and complain about. It seems to me the next object of hatred is the lack of sum types.
This is a needlessly dismissive perspective. Put another way, now that one of golang's most prominent deficiencies has been addressed, people will switch their focus to other areas of frustration.
> Sometimes I think the core irritation with Go is its simplicity.
The core irritation (well, my core irritation) around Go is that this simplicity just kicks the cans of complexity and verbosity downstream onto its users, and—in the worst cases—hides the fact that this complexity exists, so encourages writing simple, obvious, and subtly incorrect solutions. Subtly incorrect is by far the worst kind of incorrect, as it results in code that works for all the obvious test cases but breaks at 2AM in production. Go doesn't feel simple to me. It feels half-assed.
Let's take a recent example I had to work with: truncating a string to the first 63 characters. Easy, right?
s = s[:63]
Simple, obvious, and subtly incorrect. Why? UTF-8. Strings in go are just a microscopic wrapper around immutable byte arrays. I could almost consider this okay if there were some type in the language that wrapped a byte array and an encoding to provide a real string type. But not only isn't one included, a reliable one can't even exist. The range keyword iterates over bytes for byte arrays and over UTF-8 code points for strings and there's no way to express anything different. String straddles this uncomfortable middle where it's both a byte array and a UTF-8 string and which one it is depends implicitly on which operators you use on it. Strings are "just" a byte array, except they're supposed to contain UTF-8, but not only is there no way to enforce that they do, the language makes it trivial to accidentally corrupt any UTF-8 string you do have (e.g., through indexing). If you're writing a function and somebody passes you a string, you have no way to know or enforce whether it's intended to be bytes or ASCII or UTF-8 or any encoding whatsoever without iterating through to check. There is no type that actually encodes an enforced requirement of a valid UTF-8 string, nor can one even be written as a library.
So the way you're "supposed" to truncate a string is:
s = string([]rune(s)[:63])
At least I think that's the magical incantation. Do you do this everywhere? Do your coworkers? Yeah, I didn't think so.
Compare this to Rust and Ruby, which take two diametrically opposite (but both IMO reasonable) approaches. In Rust, strings are UTF-8. You cannot construct a String that does not contain a valid UTF-8 byte sequence (at least not without unsafe). You cannot perform safe operations on a string that result in an invalid byte sequence. If you are handed a string, you know that it contains valid UTF-8 bytes. You can iterate over either the bytes or the characters, but you have to explicitly decide which. In Ruby, strings are byte array / encoding pairs. It could be UTF-8, UTF-16, Shift-JIS, EBCDIC, or any other encoding you want it to be. When you're handed a string you can generally assume it's been parsed and validated under that character set. Indexing, iteration, and other operations are all encoding-aware. If you want the bytes, you can choose to access them.
The string equivalent in golang is, to put it bluntly, half-assed. It's mostly just a byte array but some keywords assume it's UTF-8. If you're given a string you generally assume it's UTF-8 but there's no reasonable expectation that it's valid since the language makes it ridiculously easy to violate the encoding rules. Go strings aren't even a decent building block for an encoding-aware type since range can't be made to work on other encodings. For a language written by none other than Rob Pike himself, I simply cannot fathom how golang arrived at this design.
Keep in mind this is just one of a myriad places where the language punts complexity to the user but, at best, doesn't give them the tools to actually reliably handle it and, at worst, pretends like the complexity doesn't even exist so it's not even obvious something needed special care in the first place. Simple, obvious, and subtly incorrect.
None of your proposals truncate a Unicode string to the first 63 characters. Go and Rust are at least honest about this. Ruby lies about what you're indexing and makes you pay O(n) to do it. (And no, that's not the way you're "supposed to" truncate to a codepoint offset in Go, but it's also true that there's no function for it in the standard library - in part because that's rarely what you really want to do, and such a function would get abused.)
> When you're handed a string you can generally assume it's been parsed and validated under that character set
But it might not have been. It's a weakass guarantee, the same Go gives you, except in Go it's UTF-8-or-garbage, vs. Ruby's could-be-anything-and-could-still-be-garbage.
> this simplicity just kicks the cans of complexity and verbosity downstream onto its users
I still think that append() is the arch-example of that. I can't think of another high-level PL that lacks an atomic add-item-to-collection operation (either in the language or in stdlib) and instead requires the user to coordinate all the moving parts themselves.
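For illustration, a minimal sketch of the moving parts append makes the caller coordinate: the result must be reassigned, and slices that share a backing array can clobber each other.

```go
package main

import "fmt"

func main() {
	xs := []int{1, 2, 3}

	// append may or may not allocate a new backing array; the caller
	// must capture the return value or silently lose the update.
	xs = append(xs, 4)
	fmt.Println(xs) // [1 2 3 4]

	// The classic pitfall: appending through a second slice header.
	// ys shares xs's backing array and has spare capacity, so this
	// append writes in place and overwrites xs[2].
	ys := xs[:2]
	_ = append(ys, 99)
	fmt.Println(xs) // [1 2 99 4]
}
```

The overwrite in the second half is guaranteed here because the backing array already holds four elements, so cap(ys) exceeds len(ys); whether a given append aliases or reallocates depends on capacity, which is exactly the coordination burden being described.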
> Generics will be helpful for some, I'm sure, but my reading of the winds is that people will find other idiosyncrasies of Go to latch onto and complain about. It seems to me the next object of hatred is the lack of sum types.
Lacking generics is not in the same league as lacking sum types or other features. Only a few major static languages have sum types, whereas most major static languages have generics.
I doubt that sum types are going to be a big target, given the lack of support for them in many other mainstream languages. The lack of generics stood out so much precisely because Go was the only mainstream statically typed PL without them by choice.
people hate Go because Go is unapologetically designed from the framing of programmers being members of a labor class having differing levels of expertise. It's not the design itself; it's the design goals themselves that people are opposed to.
This Rob Pike quote gets bandied around in these discussions:
> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
Ten years ago when I learned Go at the Recurse Center, I was early in my career and I picked Go as my language of study because I believed it was a good bet for my career prospects. I said to myself "I bet this is the next Java, and I bet that people who learned Java very early did very well for themselves". This bet proved correct; Go has been enormously useful to me in my career. Many people who go to RC pick languages like Haskell or a Lisp instead; things that are more interesting and challenging. While Haskell is well known for being mentally stimulating and very enriching, it is also well known for not seeing much industry adoption. You can sub in a handful of other languages here; basically, think of any very beautiful cutting-edge experimental language that doesn't have a lot of industry jobs. I'll stick to Haskell for consistency in this post, but you could just as easily say Lisp or Scala or arguably Rust these days; it's not about a specific language, it's about a specific grouping of languages that prioritize the beauty of the language and the effectiveness of the solo developer.
People get upset at Pike's framing that Go should be learnable by people with less-than-expert experience. As someone who sees technology as a vehicle of class mobility, I see this as a great and noble design goal, because I think class mobility is fundamentally good. Ask yourself this: why do people consider it bad that a language acknowledges that it should help not only people with lots of experience, but also people with very little experience, and teams with mixed levels of experience?
My experience of programming Go for ten years and for being in these spaces for that time frame is that criticism of Go almost uniformly comes from people who can afford to invest heavily in things that have a low expected ROI; great and beautiful languages that don't have a lot of job prospects. Most people on Earth cannot afford the time commitment it takes to learn programming at all, let alone learning aspects of programming that they can't monetize.
I have never met a Go programmer that is bothered by the existence of Haskell. I have met many, many Haskell programmers that are bothered by the existence of Go. Ask yourself why that is. Ask yourself why someone would be so angry about something so wildly successful and useful to people. It's not even remotely a two-way street; the hatred goes one way.
This leaves us with the final problem: why can't people acknowledge that different people have different goals, and that those people's goals are just as legitimate as their own? Why are other people's goals threatening? It's because the existence of Go and the massive success of Go causes people to confront some aspect of class consciousness. It is impossible to witness the enormous success of Go and not confront the idea that programmers are laborers, and for some people, the idea of being a member of the labor class is simply not acceptable; criticizing Go thus becomes not merely an act of design criticism, but also a vehicle through which the critic can express something about their own socioeconomic class. There are many valid criticisms of Go, but there is also a category of Go criticism that exists more as a form of class signaling than anything else, and in these public forums, those two forms of criticism often get entangled in complex and opaque ways.
> This leaves us with the final problem: why can't people acknowledge that different people have different goals, and that those people's goals are just as legitimate as their own? Why are other people's goals threatening?
Because swaths of code written in a language where it's easier to make mistakes affects all parties and entrenches said language.
Imagine a book on programming in an unreadable language like Befunge were given out to low-income schools across the world, those people ended up organizing, and by sheer number and force cranked out software the industry depended upon.
Is it unethical to see the downside, point it out, but appreciate what was accomplished?
I think about the dilemma of socioeconomics preventing language choice from being a consideration and language mattering for writing better software a lot though.
> I have never met a Go programmer that is bothered by the existence of Haskell. I have met many, many Haskell programmers that are bothered by the existence of Go. Ask yourself why that is.
This is easy.
Go programmers are more likely to claim language choice doesn't impact software much.
Haskell programmers are more likely to claim language choice does matter a large amount.
That means there's not as much reason for Go programmers to criticize other languages because they don't see it as mattering much.
I think "language doesn't matter" is a hypocritical position that people make in bad faith though.
That said, Go programmers definitely criticize other languages, including Haskell, for not conforming to Go's brand of "simplicity".
Counter point: Generics make me far more inclined to use Go regularly. If error handling is improved to be safer and more ergonomic, it'll be incredibly compelling for me.
Generics were the biggest feature keeping me from using the language, and now that's a done deal. I'm excited.
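For a taste of what landed in Go 1.18, here's a small type-parameterized function. The Ordered constraint is hand-rolled to keep the sketch self-contained; Go 1.21+ ships an equivalent as cmp.Ordered.

```go
package main

import "fmt"

// Ordered is a hand-rolled constraint; Go 1.21+ provides cmp.Ordered.
type Ordered interface {
	~int | ~int64 | ~float64 | ~string
}

// Min works for any ordered type, checked at compile time:
// no interface{}, no reflection, no code generation.
func Min[T Ordered](a, b T) T {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(3, 5))     // 3
	fmt.Println(Min("b", "a")) // a
}
```

Passing a type outside the constraint (say, a struct) is a compile error rather than a runtime panic, which is exactly the safety the pre-generics workarounds gave up.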
Can you point me to a survey that shows that a majority of developers would like to try Go? If not, how can you be so sure that those who avoid it because of <insert missing feature> are a minority?
As a counterpoint, here's a survey that suggests that only 14.54% of developers have any interest in learning Go. How many of those who don't are turned off by lack of features? I don't know, but I'm not sure how you could either.
Personally I considered it something of a feature that Go repelled people who prioritized "writing in their personal style" or "showcasing their powers of abstraction" above all other concerns. The result was standard, readable, boring code which made Go a very productive tool.
No one meaningfully "requires generics", people are just reluctant to set their ego aside and learn a different approach. Some use cases may benefit from generics, but even then "require" is too strong.
Honestly, one of the biggest things that drives me away from Go isn't the lack of features, it's this condescending attitude that comes from so many people who espouse Go. I don't need to invest in a language whose community is so hostile.
If you wanted to be persuasive or helpful, you could try explaining what the other approaches to work around lacking generics might be, rather than just insulting people for being unable to figure out on their own how to avoid copy/paste spamming their codebase.
> Honestly, one of the biggest things that drives me away from Go isn't the lack of features, it's this condescending attitude that comes from so many people who espouse Go.
I could say the same about many of Go's critics, but I don't because that wouldn't be constructive. Rather than taking undue offense at my comment, why not articulate a counterexample to prove that there are valid reasons why a person (rather than a use case) might require generics?
> If you wanted to be persuasive or helpful, you could try explaining what the other approaches to work around lacking generics might be, rather than just insulting people for being unable to figure out on their own how to avoid copy/paste spamming their codebase.
Because these are already well understood and have been debated to death. Instead of .map().filter().flat_map().reduce() you use a simple for loop. Instead of `LinkedList<ItemType>` you use `type List struct { Item ItemType; Next *List }` (the recursive field must be a pointer). The thing we generally aren't agreed on is whether those extra characters are actually the end of the world, weighed against the benefits of simpler, more concrete, more standard code.
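As a concrete sketch of that trade (the function name and example pipeline are mine), here's the plain-loop version of a filter-then-map chain:

```go
package main

import "fmt"

// evensDoubled is the for-loop equivalent of
// xs.filter(x => x % 2 == 0).map(x => x * 2).
func evensDoubled(xs []int) []int {
	var out []int
	for _, x := range xs {
		if x%2 == 0 {
			out = append(out, x*2)
		}
	}
	return out
}

func main() {
	fmt.Println(evensDoubled([]int{1, 2, 3, 4})) // [4 8]
}
```

The loop is longer but concrete: one pass, one allocation pattern, and no intermediate collections, which is roughly the argument made for it.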
Anyway, I wasn't "insulting" anyone, I was describing Go critics' objections in roughly their own terms (i.e., in so many conversations I've had with Go critics esp over generics, the conversation eventually terminates at some variation of "Go forces me to think about programming differently than I'm used to").
> in so many conversations I've had with Go critics esp over generics, the conversation eventually terminates at some variation of "Go forces me to think about programming differently than I'm used to"
At some sufficiently large number of people, you need to acknowledge that how they think isn't wrong, the language is.
This is really vague and unhelpful. I’m going to interpret this to mean you’re not interested in actually explaining the problem, sorry if I’m mistaken.
There's not really any further way to "actually explain" that, often, when a person says something, context matters and it isn't universally applicable.
I'm not even sure why I'm having to explain this. It's a pretty obvious thing.
Obviously its not obvious to me. I've had enough of your games, if you're interested in an actual productive conversation, feel free to reply, otherwise I want nothing to do with you.
For what it’s worth, this is the read I’ve gotten from this thread as well. I’m not sure how else to interpret the parent’s remarks—his responses seem evasive rather than clarifying.
Yea I noticed that in your interactions with them too. Also while I’m here I want to compliment you on your articulations of the strengths of go throughout this comments section! They’ve been very well put/compelling and helped me understand better how to express the things I like about go in the face of its detractors, which is a constant struggle for me.
> there’s pretty wide consensus even among Go detractors about these qualities
[citation needed]
> there are plenty of people willing to think differently and reap the benefits
"Reap the benefits" is, again, subjective. It's something you like, not something objectively beneficial or worth the trade-offs in all, most, or even necessarily many projects.
This is precisely the unduly arrogant attitude amongst Gophers that others have already mentioned several times in this thread.
You’re welcome to disbelieve me on the consensus. I’m not trying to persuade you of anything.
> "Reap the benefits" is, again, subjective. It's something you like, not something objectively beneficial or worth the trade-offs in all, most, or even necessarily many projects.
Of course, productivity isn’t for everyone. Sometimes for personal projects I’ll opt for things that exercise my creativity and cleverness over the purely productive or pragmatic.
> This is precisely the unduly arrogant attitude amongst Gophers that others have already mentioned several times in this thread.
Hah! This made me laugh. Yes, I disagree with you therefore I must be arrogant. (:
lol...you even take criticism arrogantly. Impressive. It wasn't a personal attack, it was a statement of fact, but thanks for making my point even more clearly for me - not only in your response to that, but in your assertion that your opinion based on your experience is fact. Appreciated.
Restated: You aren't arrogant because you disagree with me. You're arrogant because the things you say and the way you say them are arrogant.
Considering the alternative is often either "copy and paste a bunch of code with a type changed in a bunch of places and commit the result" or "vtable dynamic dispatch", it's perfectly fair for a person to say that they require that feature.
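The dynamic-dispatch alternative looks something like this hypothetical pre-generics container: every value goes in as interface{}, and every caller must downcast on the way out.

```go
package main

import "fmt"

// Stack is a pre-generics "container of anything": it stores
// interface{} values, trading compile-time safety for runtime checks.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

// Pop returns the top element; it panics on an empty stack
// (kept minimal for the sketch).
func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s Stack
	s.Push(42)
	s.Push("oops") // nothing stops mixed types from going in

	_, ok := s.Pop().(int) // caller must downcast and handle failure
	fmt.Println(ok)        // false: the top element was a string

	n, ok := s.Pop().(int)
	fmt.Println(n, ok) // 42 true
}
```

Nothing here fails at compile time; the type error only surfaces as a failed assertion at the use site, which is the trade being criticized.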
Generics also aren't the only useful feature that Go lacks (or, as of today, lacked).
You still don't "require generics", you might require more performance than idiomatic Go offers, but in this case you're also excluding everything slower than C/C++/Rust irrespective of whether or not the language supports generics.
> Generics also aren't the only useful feature that Go lacks (or, as of today, lacked).
Agreed, but it's foolish to frame language debates purely around the presence or absence of useful features. Many languages have too many features including a great many misfeatures which degrades the overall experience, and your analysis doesn't account for that. Further, there are costs associated with things like generics which your analysis similarly ignores. Further still, it ignores things like tooling, ecosystem, learning curve, etc. This is how you end up with C++.
> You still don't "require generics", you might require more performance than idiomatic Go offers, but in this case you're also excluding everything slower than C/C++/Rust irrespective of whether or not the language supports generics.
Performance only drives one of the two alternatives.
> Agreed, but it's foolish to frame language debates purely around the presence or absence of useful features. Many languages have too many features including a great many misfeatures which degrades the overall experience, and your analysis doesn't account for that. Further, there are costs associated with things like generics which your analysis similarly ignores. Further still, it ignores things like tooling, ecosystem, learning curve, etc. This is how you end up with C++.
This isn't an analysis. It's a counterpoint to a single individual on a social media website.
Go's ecosystem and learning curve are great. The best thing I can possibly say about its tooling is "it's better than C and C++". Seriously, how many dependency management paradigms have we gone through so far, and the best thing we can come up with is `go mod`?
> Performance only drives one of the two alternatives.
Right, so use dynamic dispatch by default.
> Seriously, how many dependency management paradigms have we gone through so far, and the best thing we can come up with is `go mod`?
Agreed that the road to go mod has been tumultuous, but go mod is in the top tier of dependency managers, which is perhaps a sad indictment of dependency managers. Most languages still don’t offer reproducible builds, and several mainstream languages require that you generate your list of dependencies in an imperative DSL—or at least these build systems are the most popular, which brings us to another problem: some languages don’t even have a single standard tool! From dependency managers, let’s look at build tools: how many build completely static binaries by default? How many make you script the build in an imperative DSL? How many punt altogether? How many languages compile to native code in seconds or faster? How many languages can trivially cross-compile virtually any package? How many languages still make you spin up a CI job to build and publish source code packages? How many make you spin up a CI job to build and publish documentation packages? How many support testing out of the box? Benchmarking? Fuzzing? Formatting? Profiling?
As far as I can tell, Go trounces virtually every other language on tooling (Rust seems to do pretty well). One area where Go doesn’t do as well is debugging—I know Go has delve, but I understand it to be limited (haven’t tried it in a long time though, to be honest).
You can dig a hole of any size with a shovel, given enough time. But I wouldn't fault people who refuse to dig a large hole with a shovel after they've seen an excavator at work.
This analogy just reduces down to “I think generics are more fit-for-purpose”, which is just another way of saying “I think generics are better”. And I certainly think there’s a fair amount of merit to generics, but the folks who have seriously programmed with and without generics are much less fervent supporters of generics (if they support them at all) than the generics-advocates who have only used languages that make heavy use of them. I won’t try to analogize this to power tools or cars or anything else, because such analogies are never enlightening and usually at least a little corny.
Or put another way, 1/10th the surface area of code to write, maintain, review, and bug check. And much less interface{} abuse.
That's taking "tenfold" at face value. I don't see that happening personally. The Go community has enough of a nucleus of devs with a certain mindset that I don't see crazy Haskell levels of abstraction taking over. What I do see is more type safety coming to replace interface{} dynamism.
Or an expectation of competence from developers, expecting them to consider the tradeoffs of their design - in other words, to do their damn jobs - as opposed to the language babysitting them away from tools they might use incorrectly.
There’s nothing wrong with a language being strict with what you can do to force good programming styles. An expectation of competence is nice if you can afford it, but it’s also nice to have languages that mean you don’t have to take that risk. It’s why I liked Go and why I disliked working in Haskell. Look at Lisp - large teams get tied in knots because of the freedom. Go was nice because it just stopped you doing ‘smart’ things. Now it has generics you may as well use Rust or something else.