Hacker News
Go is about to get faster (dominictobias.medium.com)
212 points by is0tope on Jan 23, 2022 | 171 comments



I don't believe Go is about to get faster, at least not in an especially unique way compared to past releases.

Historically, the Go compiler has seen performance improvements every release, so in that sense Go has always been getting faster.

When it comes to generics, the benefits are not as straightforward as simply recompiling with a new toolchain. I just rebuilt one of our larger services with Go 1.17 and compared it to 1.18, and the benchmarks come out roughly the same after accounting for variance. There is a slight improvement, but what's interesting is that we don't use generics anywhere.

My employer has a blanket ban on generics in Go until at least the next release. I know others that do too. There is also nothing in our code base that is screaming to be rewritten with generics. We collectively thought about it and no one could come up with anything worthwhile. Internally, we're still not sure when they're even warranted. Sure, there are a few three-line for-loops that could be removed, but none of those are in a hot path. Yawn.
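For what it's worth, this is the kind of three-line for-loop in question, with a hypothetical generic `Keys` helper that would replace it (a sketch, not code from any real codebase):

```go
package main

import "fmt"

// Keys is a hypothetical generic helper that replaces the classic
// three-line loop for collecting the keys of a map.
func Keys[K comparable, V any](m map[K]V) []K {
	out := make([]K, 0, len(m))
	for k := range m { // this is the loop generics let you write once
		out = append(out, k)
	}
	return out
}

func main() {
	m := map[string]int{"a": 1}
	fmt.Println(Keys(m)) // [a]
}
```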

If Go generics radically change the Go ecosystem overnight, they will ruin the language. The ecosystem prefers stability and compatibility over features, and it's pervasive and good. I update Java and JavaScript dependencies regularly and it's a fucking nightmare compared to Go.

Lastly, Go has attracted a large community of developers who eschew unnecessary abstractions. It's lovely and refreshing. I can't say the same about Rust or Swift or Scala, where complexity is sometimes celebrated as simplicity when it's really just convenience wearing lipstick.


Java was released in 1996. It got generics in 2004, 8 years later. I don't remember anyone panicking and yelling from every corner that the world is ending. Everyone understood the benefits and it was a much awaited feature. And yes, after Java got them, it became easier and more typesafe to use. End of story. And by the way, Java too was a very simple language and generics didn't ruin it.


I never liked generics in Java - or basically the stuff they added together with them, like annotations and autoboxing - i felt they complicated and "ruined" the original language, which was conceptually very simple. But by the time Java 5 became mainstream (i was working on desktop software so i had to keep compatibility with older runtimes anyway - not that i minded at all) i had already moved off Java aside from some minor projects of my own (but i dropped even those after Oracle's acquisition of Sun as i didn't feel like relying on it). I do remember getting annoyed with NetBeans and Eclipse "helpfully" adding little squiggles everywhere in my code about how i could "improve" it with ranged fors, replacing typecasts in container usage with generics and adding those ugly @Override annotations (which i manually removed whenever NetBeans added them on its own when it was creating code via the GUI editor). I guess i could disable them but as i wrote, i moved on after that anyway.

Of course i never panicked or yelled about it nor really even bothered mentioning it (aside from on HN now and then, usually in Go posts since the whole "generics in go" remind me of that time :-P). But me not doing that doesn't mean i didn't think of it and i'm guessing others did too.

My guess is that nowadays you see more complaints about generics in Go because Go attracted a higher percentage of people who liked its simplicity than Java ever had (and more people in absolute numbers in general, as i'd bet that there are way more programmers nowadays than in the late 90s/early 2000s, so you're bound to hear more voices). After all, in terms of features as a language, Go didn't have anything to offer over Java, whereas Java had a lot to offer back in the late 90s/early 2000s.

(amusingly the language i use most nowadays is Free Pascal which is a hodgepodge of arbitrary features and a far cry from the original Java simplicity - though i do often find annoying how much of a "mess" it is)


> eschew unnecessary abstractions

The problem is, no one can agree on what the 'necessary level of abstraction' is. Where you stand on that scale determines whether you love or hate Go as a language.


I also do hope the eco-system stays as dependable and conservative as it is. There's no practical need to overhaul it, and in the end, it's about what works. Don't mess it up.

I do have some use for generics in my code: I've got a few conversion functions too many, but since they're narrowly typed, I don't expect a speed improvement, just a shorter file. Perhaps some of the (un)marshalling can be done more efficiently, too. Nothing worth losing any sleep over, so waiting another version sounds like a good strategy. I'll just play around with it a bit.


If I understand correctly, what you’re saying is “Go won’t immediately get faster with this release, it will get faster with the next and subsequent releases”. Whether or not your employer uses generics, it’s very likely large parts of the standard library will get rewritten with them soon, and if OP is correct, that will greatly improve the performance of certain operations.


> it’s very likely large parts of the standard library will get rewritten with them soon

It's not. The Go 1 backwards compatibility promise means existing interfaces won't change. Take the math package, for instance: I'm not aware of any planned breaking changes -- if you are, please share. If they were to add support for non-float64 operations, they would likely be separate functions or a new package. Thus, without sweeping code changes, nothing changes.
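To illustrate, new generic APIs can be added alongside the existing ones without touching them. This `Min` is a hypothetical sketch, not a planned stdlib addition:

```go
package main

import "fmt"

// Min is a hypothetical generic function. It could live next to
// math.Min(float64, float64) (or in a new package) without changing
// any existing signature, preserving the Go 1 compatibility promise.
func Min[T int | int64 | float64](a, b T) T {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Min(3, 5))     // works on int
	fmt.Println(Min(2.5, 1.5)) // and on float64
}
```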


The Go team has been discussing converting some functions to generics _if_ they can maintain the compatibility guarantee. Without verifying, I believe this was discussed in Go Time ep 216.


This article is about optimized algorithms. Those could be written as specialized implementations without generics, but with generics people can build reusable libraries.

Did you even read the article?


Not really fair. You could already get this performance, when you cared, by writing the specialized datastructure for the specific types you cared about. So sure, generics let you write type-agnostic libraries that don't pay a boxing/unboxing penalty, but that's not really news!


I do a lot with specialized data structures for things like kNN or sparse adjacency matrices, in multiple different contexts. The fact that Go now has generics means I might actually consider it as a language, because I don't need to mess around with e.g. codegen to have a maintainable base implementation.


Yeah no argument generics are very useful! But still, it's a bit of a clickbait headline -- Go isn't getting faster, it's just getting more ergonomic to write fast code. :-)


Maybe I'm alone in this, but codegen is fine? After all, that's what generics are at some primitive level. The Go compiler is so bloody fast that recompiling is practically free. And it's very easy to write a code generator any number of ways.


But isn't maintaining codegen more effort (and more brittle) than just maintaining a generic implementation? Maybe it's a matter of taste, though.


Codegen breaks debuggers.


kNN's and matrices? Go doesn't have operator overloading; are you sure you'd be happy using it for that?


Deques are great. I think there are two ways I'd consider designing this library differently:

1) The resizing seems worse to me than a linked list of fixed-size ring buffers which use a sync.Pool.

2) (more out there) Some of the time when I've implemented this sort of thing, it's been to allocate memory for unmarshaling. It might be nice to have a mechanism to store values and allocate pointers.
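Very roughly, a minimal int-only sketch of that first design (a linked list of fixed-size ring segments recycled through a sync.Pool); a real implementation would be generic and support pushes/pops at both ends:

```go
package main

import (
	"fmt"
	"sync"
)

const blockSize = 64

// block is a fixed-size segment; exhausted blocks are recycled
// through a sync.Pool instead of the deque resizing and copying.
type block struct {
	buf        [blockSize]int
	head, tail int // head: next pop index, tail: next push index
	next       *block
}

var blockPool = sync.Pool{New: func() any { return new(block) }}

// Deque is a FIFO queue built as a linked list of fixed-size blocks.
type Deque struct {
	front, back *block
	size        int
}

func (d *Deque) PushBack(v int) {
	if d.back == nil || d.back.tail == blockSize {
		b := blockPool.Get().(*block)
		b.head, b.tail, b.next = 0, 0, nil
		if d.back == nil {
			d.front, d.back = b, b
		} else {
			d.back.next = b
			d.back = b
		}
	}
	d.back.buf[d.back.tail] = v
	d.back.tail++
	d.size++
}

func (d *Deque) PopFront() (int, bool) {
	if d.size == 0 {
		return 0, false
	}
	b := d.front
	v := b.buf[b.head]
	b.head++
	d.size--
	if b.head == b.tail { // block exhausted: return it to the pool
		d.front = b.next
		if d.front == nil {
			d.back = nil
		}
		blockPool.Put(b)
	}
	return v, true
}

func main() {
	var d Deque
	for i := 0; i < 100; i++ { // spans two blocks, no copying
		d.PushBack(i)
	}
	v, _ := d.PopFront()
	fmt.Println(v, d.size) // 0 99
}
```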


Compiler question: what is the most worthy replacement for the Dragon Book, these days?


Engineering a Compiler 2nd Edition by Keith D. Cooper and Linda Torczon is highly regarded.


You may also find https://craftinginterpreters.com/ useful.


Go Faster.

Real missed opportunity here


Whaddya know :-o

It turns out that that complex technology was warranted?!?


I think that some of the concerns around generics, (like mine) are based on the over-use of generics. But, I am very happy for the addition of generics as they'll probably do more "good" than "bad". We'll just have to be pragmatic.

Edit: Reminds me of this funny video https://youtu.be/-AQfQFcXac8?t=63


Why is this a concern with generics, but not with e.g. if statements? They too can be over-used (i.e. over-nested)


If statements are not an abstraction though.

But I do actually try to minimize them, to the extent that's reasonably possible. And will especially try to minimize "else if" and "else", when reasonable, since I find it increases the cognitive load when reading something.

Interfaces are a better comparison. Interfaces are very useful and I've often used them with great success, but I generally see if other, simpler "more direct" solutions work before using interfaces, as those solutions are typically less abstract and easier to understand. I've definitely seen some code with "interface overuse" where it could be quite hard to find out what "foo.InterfaceMethod()" actually does.


If statements are an abstraction over conditional jumps.


Your example here is a little on the absurd side don’t you think? While the lack of generics in a language can be annoying for some people it’s not a showstopper for most use cases. Whereas removing conditionals would make it a non-starter for most people. And the fact that Go does now have generics should demonstrate that the language is trying to be pragmatic rather than just trolling the community.

It’s worth noting that Go vetting tools do capture a few instances of “misusing” ‘if’ statements so taking your comment sincerely (which is more than it deserves imo), your worst case scenario is already covered.


> Your example here is a little on the absurd side don’t you think? While the lack of generics in a language can be annoying for some people it’s not a showstopper for most use cases. Whereas removing conditionals would make it a non-starter for most people.

I don’t want if-statements at all in my code, if-expressions at the most, but they are strictly worse than pattern matching anyways.


Congratulations on being the exception that demonstrates why I said “most people”.

The great thing about our current era is that you have more choices in languages and compilers than you have hours in your life to learn them all. So you can be as fussy as you want and there will still be an ecosystem out there for you.

There weren’t nearly this many choices when I started out in IT. And most of the options weren’t even free.


> Congratulations on being the exception that demonstrates why I said “most people”.

You said it would be a showstopper for most people, suggesting that a lack of if-statements is a fundamental problem that needs to be worked around.

I just showed that there exist serious, non-absurd arguments against if-statements, namely that they are strictly inferior to if-expressions, which again are strictly inferior to pattern matching, since it significantly reduces boolean blindness.

At some point (or rather, for over a thousand years) “most” people computed π by polygon approximation, but it turns out that method has severe limitations that more modern methods avoid. Should we still have chosen the method that “most” people were familiar with, over applying infinite series to the problem?


My company has choices; I don’t. I joined an explicitly Java-centric team (even Scala for some use cases) yet Go is getting harder to avoid.


It’s never been a better time to be a developer with regards to choice of employment and pay so boohoo for your entitled first world problems. Go cry me a river.


Not giving the programmer a choice and forcing them to be pragmatic was one of the main selling points of Go. Allowing generics at all will likely make them seep into the ecosystem over time and destroy part of that value proposition.


The claim that Go 'forces people to be pragmatic' applies a pretty strong value judgement to the way Go does things. You could say "Go forces people to do things the way Go wants, and this way is wonderful for many use cases" (I don't disagree), but whether that way is pragmatic or not depends on the use case.


There is nothing pragmatic about Go's approach to error handling.

It's the equivalent of putting your head in the sand and pretending nothing else in the world exists.


Programming is much more about understanding guarantees and curating the error path than it is about pulling a ticket from Jira and cranking out a happy path as agilely as you can.

Error handling in other languages is like confetti. “Pop! Surprise!” A disorganized game of “52 pickup” using an under designed grab bag of control structures. No thanks.


Lack of adequate control structures, or for that matter, programming language support, is almost never the reason why code has inadequate error handling. The main reason is that developers often leave error handling as an afterthought because it distracts them from thinking about the main flow, and it's hard to program when you are constantly distracted. That's why you have overbroad exception handling and the consequent improper handling. The problem is with the brain, not with the programming language.


> developers often leave error handling as an afterthought because it distracts them from thinking about the main flow

Go asserts that "error handling" and "main flow" (a.k.a. "happy path") code paths are equivalent, and that you can't ignore the bad stuff as you program against the good stuff. Error handling _cannot be_ an afterthought -- this is the path to fragile, even broken, code. If a programmer can't adapt to this model, then they're gonna have a bad time with Go, because with Go it's non-optional!


No, it really doesn't do that. Result types which force you to check for an error to get the non-error return value from a function, or Swift `try` which doesn't even let you call a function which may fail without writing some sort of error handling do that. Go leaves it up to the programmer to remember to check for errors, and lets you silently discard the error result.


> No, it really doesn't do that. Result types which force you to check for an error to get the non-error return value from a function, or Swift `try` which doesn't even let you call a function which may fail without writing some sort of error handling do that. Go leaves it up to the programmer to remember to check for errors, and lets you silently discard the error result.

The compiler is one way that language invariants can be expressed and enforced. But it's not the only one -- another "layer" is convention and code review.

Concretely, something like

    v, _ := f(...)
would _never_ pass code review at any reasonable organization.

If you use the compiler to express invariants like this, then the language unavoidably becomes more complex. In some contexts, that makes sense! In others, it doesn't.


Relying on code review to catch bugs is just leaving it up to a different programmer. Code review is wonderful, but expecting it to catch every occurrence of even just the categories of bugs easily caught by code review is wildly overoptimistic.

Ignoring the second return value is also the easy case to catch. Ignoring an error-only return from a function doesn't produce a warning, and won't stand out in code review unless you adopt a standard of never writing void functions.


> [...] unless you adopt a standard of never writing void functions.

To be fair, that is a good principle to follow ;) Non-void, especially pure, functions are much easier to understand in the context of an entire codebase.

> Ignoring an error-only return from a function doesn't produce a warning,

It does if you use the standard set of linters, which with Go you almost certainly do. Go has a strong linting culture, in part due to one being built-in. By the way, in vscode said standard set of linters is one click away in the config.

---

There are only 2-3 cases that I know of where failing to handle an error passes by silently. They are edge cases, not something you'll run into writing everyday code.


> Relying on code review to catch bugs is just leaving it up to a different programmer.

Go is a language for programming "in the large" -- it explicitly defers a lot of stuff to layers "above" the compiler, like code review. If you don't buy that as legitimate, that's fine!


Deferring things to other layers such as code review is the exact opposite of Go making it non-optional to explicitly handle errors. Idiomatic Go has developed very rigid patterns around how to handle errors specifically because, if you want to write happy-path code which ignores errors, the language itself does nothing to discourage you from doing that, and in fact makes that the easiest thing to do.


I really don't understand the point you're making.

> if you want to write happy-path code which ignores errors . . .

. . . it's obvious that's what you're doing in code review, and it's trivial to correct. Right?

---

> if you want to write happy-path code which ignores error

If you're doing this consistently you're going to have a lot of `value, _` i.e. underbar assignments in your code, and `_` isn't passing any code review process in any reasonable Go shop I've ever seen.


To loop back to the original claim I disagree with:

> Go asserts that "error handling" and "main flow" (a.k.a. "happy path") code paths are equivalent, and that you can't ignore the bad stuff as you program against the good stuff. Error handling _cannot be_ an afterthought -- this is the path to fragile, even broken, code.

Go doesn't assert anything about error handling. Go itself is extremely unopinionated about error handling. If you want to do something with the errors it returns you can, but if for some reason you want to "ignore the bad stuff as you program against the good stuff" it won't try to get in your way.

Idiomatic Go never just drops errors on the floor and ignores them, but it does involve a lot of just bubbling errors upwards without spending any time thinking about what happens in the error path. It's very easy to just write `v, err := foo(); if err != nil { return err }` without considering if everything's actually in a sensible state if you return here, or if bubbling the error up is even the correct thing to do. It _is_ the correct thing to do so often that anyone reading the code is also going to just glance over it.

If a programmer doesn't spend time thinking about error handling, Go does nothing to encourage them to do so. If they do, Go offers very little to help them do so. It certainly compares well in that regard to C and languages where every expression can throw an exception, but that is a very low standard.


> Go doesn't assert anything about error handling. Go itself is extremely unopinionated about error handling. If you want to do something with the errors it returns you can, but if for some reason you want to "ignore the bad stuff as you program against the good stuff" it won't try to get in your way.

The compiler will let you do this, but reasonable code review won't.

> Idiomatic Go never just drops errors on the floor and ignores them, but it does involve a lot of just bubbling errors upwards without spending any time thinking about what happens in the error path. It's very easy to just write `v, err := foo(); if err != nil { return err }` without considering if everything's actually in a sensible state if you return here, or if bubbling the error up is even the correct thing to do. It _is_ the correct thing to do so often that anyone reading the code is also going to just glance over it.

Not true? Keeping error handling in the same frame of reference as happy-path code makes programmers consider the two things equivalently, as peers. `if err != nil { return err }` is _not_ idiomatic Go! At a minimum you annotate the error. And that's often enough to be the right thing to do.

> If a programmer doesn't spend time thinking about error handling, Go does nothing to encourage them to do so.

Simply lifting error-path code into the same visible control flow as happy-path code is by itself a huge win for making programmers think about error handling. If you don't agree, fine. Many many others do.


Relying on "convention and code review" is still leaving it up to the programmer(s).


Of course! But software engineering in the large -- which is the domain that Go targets -- is ultimately not a technical domain, but a social one; the programmers who maintain the project are the ultimate authority. That's fine!


Didn't you just claim:

> Go asserts that "error handling" and "main flow" (a.k.a. "happy path") code paths are equivalent, and that you can't ignore the bad stuff as you program against the good stuff.

But it seems that Go relies on a good code reviewer for error handling.


> Go asserts that "error handling" and "main flow" (a.k.a. "happy path") code paths are equivalent,

Yes...

> and that you can't ignore the bad stuff as you program against the good stuff.

No? I don't claim that -- at least in the sense that the compiler permits you to ignore errors if you want to. But that's not some death blow against the language. The compiler is one of many mechanisms that can be used to assert code quality.


Most (all?) languages, Go included, also leave it up to the programmer to remember to use the result value, so I can't really sympathize with your point.


The happy path is more important; it needs to be stated clearly and concisely, and reviewed and maintained more often. The error path is pretty much just “can we retry at some level, or just give up and tell the user nothing happened?” Even if you believe they are equally important, they are very different, and intermixing them makes each harder to follow.


They are different, and I'm biased towards a belief that error paths are more important, but I agree that intermixing them makes each harder to follow. This is a limitation of the medium - text, and different languages attempts at solving this limitation through more text (try/catch, on error, etc.) have mostly failed, IMO.

Go's method of treating them like any other value is the most honest approach I've seen. No one freaks out when a FizzBuzz program does different things with different categories of integers.


The happy path and the sad path are equally important. Shuffling the error path to some second-class zone is the root cause of a huge class of bugs and reliability issues. Error-handling code is equally important to business logic and inter-mixing them is necessary to write reliable software.


It sounds like you're referring to exceptions. While I would take exceptions over the stringly-typed nightmare that is Go errors, you should know that there are far better error handling schemes out there.


Go errors are not stringly typed. You can assert that the error itself, or any error it wraps, is a particular value or type through errors.Is and As.

If your past experience involved people comparing error.Error() strings I’m sorry. That’s awful and I can see why that would be a nightmare


Go asserts that errors are the same as any other type of value, and don't deserve special accommodation at the language level. There's nothing objectively bad or wrong about this position.


Being pragmatic could also mean using generics when appropriate.

Not having generics isn't a pragmatic choice per se


Exactly the same can be said about checked exceptions.


C++ used to have opt-in checked exceptions ("dynamic exception specification"), which were deprecated in 2011 and removed in 2017. That this once-standard feature was removed from C++ says a lot.


Definitely nothing bad can ever happen with complicated programming techniques.

https://pbs.twimg.com/media/C6l_OwSXAAA_Fmo.jpg:large


Yeah, ramp it all the way up to 11, C++. Or as they call it, "strawman".

I don't think many people would call Java 1.5 or C# 2.0 super complicated.

Do you know what Java 1.5 and C# 2.0 had in common, that Go didn't? Generics.

For reference:

- Java 1.5 - release date: September 30, 2004

- C# 2.0 - release date: January 22, 2006

That's ~18 years ago and 16 years ago, for whoever's keeping track.


Generics ended up making Java's type system unsound. Not a huge deal in practice, but the following paper sheds some light on just how complex Java's type system really is once you start poking around a bit:

https://dl.acm.org/doi/pdf/10.1145/2983990.2984004


Well, it's even worse than that, they caused a breaking change in Java and 2 parallel type systems, if you can call them that, with primitives.

They did the same for C#, where 2.0 was a breaking change from 1.1.

I still think Go should have bitten the bullet earlier. Though I'm still glad they're doing it now. Just that now there's a ton more code and they're going to divide the ecosystem until adoption is widespread, from the stdlib to every minor corner of the ecosystem that needs it.


> Well, it's even worse than that, they caused a breaking change in Java

What breaking change? They implemented type erasure specifically not to break existing code.


Bad wording, sorry. It was breaking for C#; for Java it wasn't breaking, just ecosystem-dividing. For example, Hibernate still doesn't use generics fully, from what I remember (they had their half-baked system from before).

Adding them later is such a big change that touches so much code you can date it into pre generics/post generics:-)


I think generics on their own are very simple, bordering on second nature. Their interaction with traits (or "interfaces" as some languages call them) can make them complicated. I guess Go needed to find a way to make that interaction work.

By generics "on their own", I mean essentially the Hindley-Milner type system, which supports generics and can do static type inference in linear time: https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner_type_sy...

Somehow, the "generics wars" in Go made generics into the scapegoat of programming language complexity, when perhaps it was Go's "interfaces" feature all along. Maybe the enemy was here the whole time!
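For what it's worth, in Go the two features literally meet: interfaces double as type constraints, which is where the interaction gets interesting. A small sketch with a custom `Number` constraint:

```go
package main

import "fmt"

// Number is a constraint interface: it describes a type set
// (~int means "int or any type whose underlying type is int")
// rather than a method set. This is the generics/interfaces
// interaction Go had to work out.
type Number interface {
	~int | ~int64 | ~float64
}

func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // 6
	fmt.Println(Sum([]float64{0.5, 1.5})) // 2
}
```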


oooh fun, this one is mine :) I've had this happen.... three times? In my nine years writing Rust. It exists, but you aren't forced to even use things like this; this is because this particular library is trying to stretch the type system as far as it can go.


Is there anything in Go that will help prevent those things that definitely can never happen?


Wishful thinking :-D


People used to blame Java for that.


My thoughts exactly. The whole article sounds like: “Whoa, so it really IS useful!” Duh!


It's a shame that this generics debate is framed as a "go programmers discover generics".

That has never been the source of contention, and the benefits were noted from the initial discussions of the language before it reached an audience. Like all things, it's a trade-off. Will you be providing more value with generics, or are other features a higher priority? That's obviously a matter of perspective.

Go has many advantages over other languages, and strict typing, generics and raw speed have never been part of that. If those things are paramount for you, there are better languages suitable.

Go has been fantastic for the problem space I'm targeting. Fast enough, efficient enough, portable enough, simple enough that hiring is easy. This might not be true for what you work on, but I'm not writing a video decoder.


> go programmers discover generics

It kind of was though.

It was endless arguments from Go users that generics aren't needed and can be solved through a combination of cut/paste, code generation or simply dismissing its usefulness altogether. And on the other side the rest of us trying to explain the many, many use cases that demand generics.


The main use case of generics is generic containers, and the two most popular generic containers - lists and maps - have always been part of the language (including their built-in generic functions). So in most cases, in a typical web project, you could indeed get away without generics entirely. Only in a few cases would you be forced to use codegen/reflection/duplication: when you have to use complex algorithms which lists/maps can't satisfy.

However, the Go team and the community had always been open to the idea of adding generics to the language (the official FAQ already talked in 2013 about potentially adding generics in the future); they just weren't sure how to implement them properly without overcomplicating the language (which is advertised as being simple), and generics can easily add a ton of complexity. So the introduction of generics was postponed because there were other more important features to iron out, considering that slices/maps already covered most cases.

I think the breakthrough happened with the realization that code is more often read than written, so it's OK to make the language a little more complex for a library's author; most users won't see a considerable increase in complexity.
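As an example of the kind of container that previously needed codegen/reflection/duplication, a minimal generic `Set` sketch, written once for all comparable types:

```go
package main

import "fmt"

// Set is a hypothetical generic container built on the built-in map,
// the kind of type that used to require one copy per element type.
type Set[T comparable] struct{ m map[T]struct{} }

func NewSet[T comparable]() *Set[T] { return &Set[T]{m: map[T]struct{}{}} }

func (s *Set[T]) Add(v T)           { s.m[v] = struct{}{} }
func (s *Set[T]) Contains(v T) bool { _, ok := s.m[v]; return ok }
func (s *Set[T]) Len() int          { return len(s.m) }

func main() {
	s := NewSet[string]()
	s.Add("a")
	s.Add("a") // duplicates collapse
	s.Add("b")
	fmt.Println(s.Len(), s.Contains("a"), s.Contains("c")) // 2 true false
}
```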


> It was endless arguments from Go users that generics aren't needed

There was (and still is) a small minority of Go users that insist generics are bad, but in my observation it's nowhere near the majority view. As a concrete datapoint, the GitHub issue on this[1] has 2k upvotes and 150 downvotes. A sizeable minority, but a minority nonetheless.

As is often the case, "the Go community" is far from homogeneous, and it's hard to really get an idea what "the average Go programmer" thinks, and easy to criticise the most extreme elements.

Certainly for the Go team itself the position was always that 1) generics are useful, but 2) we're not quite sure how to best implement them. Some people disagree with that viewpoint too, which is fair enough, but it's really quite a different one.

[1]: https://github.com/golang/go/issues/15292


Not sure what you've been reading, but I've been reading lots of Go users who don't like generics arguing with lots of other Go users who do like generics. It's not a "those Go users" vs "us sensible people" flame war, it's a multifaceted debate with lots of sides, all of which contain some Go users.


I think you're misrepresenting the debate.

I've never seen anyone argue that generics aren't a useful concept, but I've seen plenty argue that they aren't worth the cost they incur. That is quite a different debate than the one you are outlining.


> plenty argue that it isn't worth the cost in incurs

The other "plenty" were arguing that it was worth the cost.

So color me surprised that this side was right.


There is no such thing as being right in such matters. Simplicity vs. expressivity is always a subjective trade-off. Otherwise, Zig could not possibly be a good language or Rust would obviously be a complete failure, for example, both of which seem like narrow-minded things to say. Sometimes more simplicity, sometimes more expressivity is better. It depends on for whom and for which tasks.


Golang competes in exactly the same space as Java and C#. With a small twist, but the same space.

That put a huge burden of proof on the Golang designers as the design space had been studied extensively for 3+ decades. And they skirted around that burden of proof for a while, I never liked their reasoning.

They could have just come out and said they didn't like generics and didn't need them for their use cases.

Zig and Rust are in whole 'nother ballgame so there's no point involving them here.


That's a bit of an old-school language flame war response. It seems obvious that there is no objective criterion for determining when a language is simple enough and when it isn't. Not everybody uses Go for the same hypothetical "design space", and the implicit claim in your post that the same criteria do not apply to Rust or Zig is also dubious, as if they couldn't possibly work as languages in the same "design space." We're talking about general purpose languages. To give you an example, Rust is overengineered and way too complex for my needs, it would seriously hamper productivity. It's important to be aware that other programmers' mileages differs.

The bottom line is that the feature set that counts as "simplicity" heavily depends on subjective programmer preferences, their tasks, and the intended application domains, and it's never fruitful to argue otherwise. Programming languages are mere tools, nothing else. Choose the one appropriate for your goals.


> We're talking about general purpose languages.

Well, kind of. But in practice, you have to agree that even theoretically "general purpose languages" either fall or get pushed into niches.

For example, Ruby. In real life, mass usage, Ruby is at this point a one-trick pony: web apps with Rails. While Rails was all the rage, Ruby tried and did break out into DevOps, but all those efforts have kind of fallen by the wayside (Chef - a shame, really - Puppet, etc).

Another example, Rust is a general programming language, but few people will actually use it for Line of Business GUI apps, for example. It's just too hard to get started with it and there's no point, you gain too little from its advantages to work so hard through that learning curve.

Same thing for Java and C#. Most companies that will hire you to write something in those languages will probably do it for backend services, plus Java Android apps because Google chose it for that (and then chose Kotlin) and C# Windows apps (but how many people are really writing those these days?). For Go almost everything I've seen is also for backend services.

So in day-to-day life they are direct competitors. Kudos to Go that it got this far and we're putting it into the same league with Java and C#, it's quite an achievement!

I do agree with your principle but in practice most projects are brownfield and your tools have already been chosen by someone else :-)


In fairness, “all languages are tools; choose the most appropriate one for your task” is also an old-school language flamewar response.


"This is encouraging news for a language already known to be unusually both fast and easy"

Hmm it's not known for that, is it? Go is known to be quite slow due to a weak compiler and a GC that optimizes for latency over throughput. And is Go really easier than a language like Java or in the dynamic world, JavaScript? It sure does seem to have a lot of weird edge cases and potholes to trip over.

Anyway I see a bunch of comments saying generics shouldn't make anything faster. They can make code go faster in combination with specialization and value types. It's one of the reasons C++ can go very fast whilst still supporting higher level abstractions, albeit at the cost of generated code size. It's also the reason they're adding value types and specialized generics to the JVM. I don't know if Go is doing C++/Rust style specialization or not, but it at least has the potential to do so.


> Hmm it's not known for that, is it?

It’s a hugely generalised statement so the real answer is “it really depends on your context”

Is it fast compared to other easy to learn languages? Definitely. Is it fast compared to other systems languages? Lol no. But is it fast to compile compared to other systems languages? Generally, yeah.

Is it easy to learn compared to other systems languages? Very much so. Is it easy to write complex low level systems logic? No. Is it easy to write dynamic code? No.

The problem with generalised statements about general purpose languages is that the scope is so wide that people will cherry-pick whatever context they want to suit whatever argument (for or against) they choose.


> Is it easy to learn compared to other systems languages? Very much so.

I think this is debatable and depends on your background.

If you come from a Java/C background then many things are unintuitive e.g. error handling, dynamic interfaces, package/library management, whitespace parsing, style enforcement etc.

Golang is easy to learn right now because it's basic like say Java 1.2. But since it likes to ignore what other languages do I wouldn't say it's an inherently easy language.


> If you come from a Java/C background then many things are unintuitive e.g. error handling, dynamic interfaces, package/library management, whitespace parsing, style enforcement etc.

I’d argue that doesn’t make Go hard to learn, that makes Go hard to adapt to for people who are used to and like Java. Ie You’ve still learnt the language of Go even if you haven’t warmed to its idioms.

But semantics aside, your comment here beautifully illustrates my point about how generalised statements can be skewed to suit any argument. Take your last paragraph where you used the term “easy” in a multitude of ways, each carefully selected to promote your existing opinion of Go:

> Golang is easy to learn right now because it's basic like say Java 1.2. But since it likes to ignore what other languages do I wouldn't say it's an inherently easy language.

This isn’t an impartial point you’re making. It’s an opinionated one. Which is fine in itself because you’re entitled to opinions but it does demonstrate the pointlessness of arguing about generalised statements where everyone is going to infer their own context based on their own biases. Ie Bob might find language X “easy” under a different context than Sally finds it difficult. But those context are not equivalent despite the adjective they’ve used being the same.


Even then, it's remarkably easy to read Go programs. This is very important.


I do code reviews for Go every day and it is easy but also frustrating and tiresome.

The ridiculously antiquated error handling and the absence of modern FP constructs (e.g. filter/map) mean that 1 line of Scala/Swift/Rust/Java etc. code = 10 lines of Go.


> e.g. filter/map means that 1 line of Scala/Swift/Rust/Java etc code = 10 of Go.

Honestly, fuck this noise. I hope this never makes its way to Go. Densely packing logic is not inherently better. I've done enough Scala/Rust/Java reviews and those are frustrating and tiresome.


On the contrary I hope generics in Go mean that bug hives of mutation in loops get replaced ASAP with higher order functions. I for one have plans to work through several code bases and do just this starting the second Go 1.18 lands.


There's nothing "antiquated" about Go's error handling. It expresses a position that you might not agree with -- that errors are no less important than happy path code -- but that's not right or wrong, it's just a position.


The lack of filter/reduce and other higher-order functions is precisely what keeps me out of Go. I don't want to be restricted to what I learned about programming in my first year. Go forces me to code like a first-year college student.


This was one of my huge struggles with Go before I just out and out gave up. So much 'good' Go code at my office was just CS101 spaghetti.


Not to mention, you don't gain much performance for it either. I can understand C or Zig's choice of tradeoffs, with simplicity being paramount when you're going close to the metal, but for Go's performance, Elixir, which is my daily driver, gives me comparable performance while giving me higher-order functions, built-in message passing, parallelism and actual error handling tools.

Compared to GenServers, Tasks and Agents, goroutines and JavaScript async are toys, like Fisher-Price vs fischertechnik.

Go feels like a bunch of unneeded compromise for people needing to work with junior programmers.

That said, if I needed a quick and dirty low-level application for controlling an OS process or a lambda function on the cloud, I could see Go's benefits. I could certainly build a PoC faster than I would be able to in Rust, and the overhead of the runtime/startup is much smaller than Elixir's.


Go is incredibly hard to read. And that's intentional: everything from error handling to flow control to business logic looks exactly the same. On top of that, naming conventions and package constraints tend to produce absurdly long and verbose (and therefore hard to read) programs for what should be trivial.


Obviously false (shrug)


Yes, I saw that internally at Google in the early days of Go: people often said that Go was going to replace either Java or C++, depending on context. So I felt it was going to take a niche somewhere in between those two.


Go isn't even easy to learn compared to high-level languages, let alone systems languages. It is the only language I ever gave up on: after 3 months of struggle I concluded I'd rather work with anything else.


You’re conflating “easy to learn” with “easy to program for”, sigh. And that wasn’t even the point of the post, so you got that part wrong too. In fact, did you even bother to read the thread before jumping in with that post?

The point wasn’t whether Go is easy or not, it’s that “easy” means different things to different people (like here where you conflate “easy to learn” with “easy to use” thus “easy” meant something different to you vs the rest of the English speaking population). So arguing about a generalised term with zero context is stupid. Just as stupid as the editor wars of the 90s and Linux DE wars of the 00s. I guess some people just like to browbeat their peers into believing personal preference isn’t a thing.


Wow, which language did you find easier to learn than Go? HTML doesn't count.


This is going to sound weird but I picked up both Rust and Erlang far faster than Go.

Edit: my point isn’t that others shouldn’t enjoy it but it being easy is not universal or to be taken as given.


>Hmm it's not known for that, is it? Go is known to be quite slow due to a weak compiler and a GC that optimizes for latency over throughput.

Lol what? Go is generally 'the' tool my friends and I reach for when we want to do some heavy lifting without getting bogged down in the weeds of things. Sure, your mileage may vary.

And in your view Go might be slow compared to... what? But Go is definitely NOT slow for any sort of general computing. Faster than NodeJS (ok, I'm cherry-picking now), mostly on par with C# or Java depending on your benchmark-magic of choice. Super easy to do multicore and concurrency with it, and definitely faster to compile than Rust.

Does it have warts? Is it not the best language? Absolutely! But it's definitely not known to be "quite slow" or to have a "weak compiler". To be honest, this is the first time I've heard that exact claim, "quite slow" :/


Of course it's faster than JavaScript. I meant slow compared to comparable languages like C++ or Java (statically typed with high effort compilers).


It's not slow compared to Java or C++! Show me benchmarks where it's slower and I will show you benchmarks where it's faster!


>> Go is known to be quite slow

I understood it was comparable to Java or C#, both of which are extremely fast - both in terms of execution speed and more so in terms of developer productivity.

The benchmarks I’ve seen generally back that up

https://plummerssoftwarellc.github.io/PrimeView/report?id=da...

9th overall - faster than C++, for example, in Dave Plummer's primes benchmark. I quite like that benchmark because it's really just a bunch of memory operations, a decent view of the compiler's ability to generate decent code. Although my concern with this benchmark is that not all languages got the same optimisation effort. Looking at git commits, it seems like Rust got more optimisation effort than almost every other language combined.

Also I don’t like the unfiltered view of that benchmark - Lisp comes out as the fastest language of all, but that’s because languages like C++ and Rust can only really pre-calculate results at compile time (constexpr etc.), whereas that Lisp implementation is doing everything before compilation. Unfair, because other languages don’t have a programmable precompilation reader phase.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

In the area of Java, Swift etc. I don’t like these benchmarks at all though. The C# one for example often relies on C libraries or vector intrinsics etc., so it’s not really like for like.

https://www.techempower.com/benchmarks/#section=data-r20&hw=...

Between Java and C#


> … not really like for like…

Except when it is: this C# program seems to have been transliterated from the Java program —

mandelbrot C# .NET #5

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

mandelbrot Java #2

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

And in the listing —

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


That highlights my point pretty well - you've chosen one of the C# impls that doesn't use anything more than a few compiler hints, but it's slower than the Java solution.

However if you look at the overall results, C# is about 25% faster on Mandelbrot, and when you peek under the covers - https://benchmarksgame-team.pages.debian.net/benchmarksgame/... - you find it's using AVX vector instructions.

Not like for like in the summary view which is why i think those results are misleading.


> … slower than the java solution.

!!

4.10s Java #2

4.11s C# .NET #5

----

> … overall results, c# is about 25% faster on mandlebrot…

You say "overall results" but the webpage says —

"Always look at the source code.

These are only the fastest programs."


You're missing the point: JavaScript is easier, but in the class of languages Go is in, it's a night-and-day difference.

Rust, C/C++, Java etc. Compared to these languages, Go is much easier to learn and use. Java can be put to the side; the real comparison is with C/C++ and Rust.

By taking some trade-offs on the performance side, Go has a much better UX than Rust/C. Literally open up a few tabs, go to GitHub and compare a page of Go code vs Rust vs C/C++.


Go is fast.

Generally speaking, you can expect a well-written Go program to be no more than 50% slower than well-written C. Normally the Go program will only be 5% or 10% slower.

Saying this another way, a carefully written C program that takes 100 seconds to run can be converted to Go and might take 150 seconds to run, in a bad case. In a typical situation the Go program will take 110 seconds to run.

Of course, a bad programmer can make anything slow.


I'm not sure how you get from a 50% slowdown to "fast". That's my point - in the space of ahead-of-time compiled, statically typed languages, Go produces code that runs significantly slower than other languages. Even if you include JIT-compiled languages it's not that fast. It's an intentional choice, so I really don't know why this is proving controversial - Go's designers were always crystal clear that their priorities are:

1. A fast AOT compiler (which therefore, cannot spend much time optimizing things)

2. A low latency GC (which therefore, sacrifices throughput)

3. Ease of use (which therefore, means avoiding complicating the language with optimization related features)

Not included: raw performance.


Because there's a point where additional performance doesn't matter.

Using C instead of Go is a mistake for all but the most performance critical applications. Use C for small pieces of code that need maximum performance. Use Go everywhere else.

If you're writing a compiler, use Go: it's easily fast enough and very programmer-efficient. If you're writing SIMD convolution code, use C. For example, here is my compiler (written in Go) that generates AVX-512 neural net convolution code in C: https://NN-512.com

For almost all purposes, Go is a better choice than C. Go is almost always fast enough by a wide margin, and the resulting program is easier to write and higher quality.


> I'm not sure how you get from 50% slowdown to fast.

A mere 50% slowdown over C given Go's ergonomics? That's amazing.

Although, you and the parent are talking about different things. He didn't say "Go is 50% slower than C" he said well written Go is up to 50% slower than well written C. I'd expect average Go code to be 100%+ slower than average C code, which is likely what you're talking about.

> That's my point - in the space of ahead-of-time compiled, statically typed languages Go produces code that runs significantly slower than other languages.

That's not the space Go is in. You forgot a huge factor - memory management. Go has a GC and therefore is not anywhere near the same space as C/Rust/C++. Thus, such comparisons are inherently flawed - Go isn't competing on speed with C/Rust/C++, but with Java/C#.


I didn't forget memory management, point 2 in my list is about the tradeoffs involved in GC. What you and 37ef_ced3 are giving me here is a list of reasons why it's OK for Go to be slow or at least slower than C++, which is fine. Go inhabits a particular point in the design space and it balances performance against other factors - great. My beef was with the statement that Go is known to be "unusually both fast and easy". It's not known for that. It's known to be a sort of middle-ground jack of all trades. If someone told me they were using Go because it was fast, I would think they didn't know much about programming languages, or I'd have to read a lot of context into what they were saying (maybe they only had experience with Python before, as is the case for many Go programmers).


I don't think your point is controversial, it's just missing context.

Having a language that can hold its own weight in the AOT, statically typed world while also having a UX that's comparable to JavaScript is insane.

Rust and C/C++ are a last resort; you'd only touch them if you absolutely need to.

To have a language with better UX than Java, C#, Python etc. that greatly expands how far you can go without resorting to C/C++ is super kewl!


> Go is known to be quite slow due to a weak compiler and a GC that optimizes for latency over throughput.

I don't know how you're measuring "slow" but from where I sit in the software ecosystem Go is typically one of, if not _the_, fastest language available. It's entirely correct to optimize for latency over throughput, for my use cases.


It's been a long time since I developed in Go, but at the time I thought it was a better Python. As easy as Python, but faster. At least for backend stuff.

Does it have a lot of weird edge cases? Probably yes. I'm no expert in Go, but I don't think it has more or less than any other language.


Generics shouldn't make something faster... If they do, then there is a bug or missed optimization in the compiler.


~~If the type of something is "any", then that means your types are only truly known at runtime, and you therefore have to check the type of everything while the program is running. Am I off here? I don't use Go.~~

Maybe it's not so much runtime type checking, as the fact that a type of "any" means that the compiler doesn't know anything about what objects are in the deque. If you're using arrays to represent your deques, then this means you can't actually store the entries of the deque in the array, but only pointers to those entries, no? I think the keyword for what I'm saying is boxing/unboxing.


> I think the keyword for what I'm saying is boxing/unboxing.

This is correct. When a function/struct accepts an interface value, it should be able to work with values of any underlying type. Underlying types can have vastly different sizes (from 0 to N bytes), and at the same time, we want to have a single machine code implementation per function, and a single memory layout per struct, known at compile time, to avoid massive code bloat (which we would have if we tried to account for all possible sizes). The simplest way to do it is to make an interface value store a pointer to actual data - pointers are always of the same size, no matter what the underlying type is. A pointer means "escaping to the heap", i.e. we can't safely store data on the stack anymore, because the pointer can be saved somewhere else and used later, when the original stack frame is long gone, which would be unsafe (we'd access dangling/overridden data).

So the main overhead is additional GC pressure because we have to allocate on the heap. Type checking has its overhead, too, but IIRC it's just a conditional branch.
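That heap escape is easy to observe with `testing.AllocsPerRun`. A minimal sketch I put together (the `pair`/`makePair` names are mine, not from the thread; alloc counts assume a reasonably current Go compiler):

```go
package main

import (
	"fmt"
	"testing"
)

// pair is larger than a machine word, so storing it in an interface{}
// forces the runtime to copy it to the heap and keep only a pointer
// in the interface value.
type pair struct{ a, b int64 }

// sink is package-level so the compiler can't optimize the store away.
var sink interface{}

// makePair keeps the value opaque so the compiler can't constant-fold
// it into static read-only interface data.
//go:noinline
func makePair(a, b int64) pair { return pair{a, b} }

func main() {
	p := makePair(1, 2)
	allocs := testing.AllocsPerRun(200, func() {
		sink = p // boxing: heap-allocates a copy of p
	})
	fmt.Println("allocations per boxing:", allocs)
}
```

On current compilers this reports one allocation per boxing, which is exactly the extra GC pressure described above.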


I’m also not much of a Go programmer, but looking at the benchmark code[1] it doesn’t actually use the values after they are retrieved, so if the compiler is smart it may never have to check the type at runtime. It does, however, have to wrap the primitive in the `any` type, which seems to be what’s making it slow.

Question for Go programmers: does using the `any` type result in an additional memory lookup if the wrapped type is a sized primitive like an int, or is the value stored directly alongside the type information?

[1] https://github.com/sekoyo/deque/blob/86df0003850acaf3039c2d6...

Edit: I wrote that before you added the second paragraph, which sounds right to me.


> does using the `any` type result in an additional memory lookup if the wrapped type is a sized primitive like an int, or is the value stored directly alongside the type information?

The answer to this question is complicated because it's changed several times.

Prior to Go 1.4, it stored word-sized or smaller values directly without allocation.

Go 1.4 started a big GC rework, part of which required fields not contain possibly-pointer-possibly-other-values. https://github.com/golang/go/issues/8405

Obviously this made people unhappy, so it kicked off several rounds of optimizations to claw back common cases over several versions, the most recent of which was 1.15's introduction of 256 interface constants which are automatically used for any numeric value that fits in a single byte (including booleans). 1.9 also made constant strings preallocate matching constant interfaces if it sees them passed as an interface.

This is also why there's so much benefit to inliner improvements; any inlining improvement translates rapidly into devirtualization improvements.
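A sketch of that 1.15 static-table optimization in action, assuming Go >= 1.15 (the `opaque` helper is mine, there only to defeat constant folding, since constants converted to interfaces get static backing data anyway):

```go
package main

import (
	"fmt"
	"testing"
)

var sink interface{}

// opaque defeats constant propagation so the conversions below really
// happen at runtime.
//go:noinline
func opaque(x int) int { return x }

func main() {
	small := opaque(7)    // fits in a byte: served from the runtime's static table
	big := opaque(100000) // doesn't fit: boxing must heap-allocate

	fmt.Println("small:", testing.AllocsPerRun(200, func() { sink = small }))
	fmt.Println("big:  ", testing.AllocsPerRun(200, func() { sink = big }))
}
```

The small value should show zero allocations per conversion and the large one should show at least one.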


Yes, interfaces involve a memory indirection.

https://research.swtch.com/interfaces


This article is outdated; the second thing it says under "Memory Optimizations" does not happen anymore, and that's what's being asked about.


It depends on the semantics of 'any'. The compiler could look at your code, figure out all types ever used with that 'any', and then optimize each case. The types could be known at compile time in some cases, if the semantics of 'any' allow for such a check and do not simply tell the compiler not to optimize for the types of that 'any'.


Generics absolutely (can) make things faster, if the collection is generic in the first place: the lack of generics would require RTTI and possibly boxing, the article specifically mentions the use of `interface{}` so that seems overwhelmingly likely.

Hell, the first benchmark just pushes a bunch of `int`s onto the deque: with generics it just does that, while without them it has to pack each int into an interface{}, doubling its size and typically forcing a heap allocation. The performance difference may well just come from the increase in allocations (and reallocation), cache pressure, that sort of thing. Nothing complicated, really.

> missed optimization in the compiler.

Go's compiler is pretty simplistic and not a heavily optimising one, so it'd be unsurprising if type erasure hindered it further.
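To make the contrast concrete, here's a minimal sketch (my own toy container, not the article's deque) of the boxed pre-1.18 shape next to the generic one:

```go
package main

import "fmt"

// Boxed version: every element is wrapped in an interface{} (type
// word + data word), and non-tiny values escape to the heap.
type BoxedStack struct{ items []interface{} }

func (s *BoxedStack) Push(v interface{}) { s.items = append(s.items, v) }
func (s *BoxedStack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

// Generic version (Go 1.18+): the backing slice holds the values
// directly, so no boxing, no type assertion on the way out, and
// better cache locality.
type Stack[T any] struct{ items []T }

func (s *Stack[T]) Push(v T) { s.items = append(s.items, v) }
func (s *Stack[T]) Pop() T {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	b := &BoxedStack{}
	b.Push(42)
	fmt.Println(b.Pop().(int)) // caller must assert the type back

	g := &Stack[int]{}
	g.Push(42)
	fmt.Println(g.Pop()) // already an int
}
```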


It’s the other way around. The use of opaque references instead of generics means that the compiler does not have type information to make optimizations. The classic example is qsort in C vs std::sort in C++ - the generic version of the algorithm in C++ is faster because the compiler has the type information required to optimize it properly.


There are compiler optimizations that infer/invent types at compile time.


The way I understood it the “slowness” came from using interface{} in the queue implementation. With generics you get a performance boost from annihilating the run-time overhead. Yes?


> With generics you get a performance boost from annihilating the run-time overhead. Yes?

I don't know whether creating an interface{} has a runtime cost (though it may if the type pointer has to be looked up at runtime?), however there's also a memory overhead: I didn't look at everything, but the first two benches (pushfront and pushback) use `int` values, which are one machine word (64 bits on most platforms). But interface{} is 2 words (128 bits).

This doubles the memory requirements, meaning the backing buffer may well need either more reallocations, or to make larger allocations & have to span multiple cache lines.


I think you’re on to something. Just reread this article: https://blog.gopheracademy.com/advent-2018/interfaces-and-re... And there’s definitely an overhead associated with the interface type due to run-time reflection. I guess generics do away with this.


Do you even know how generics work? Any implementation of generics should at least remove the type checks that a dynamic (interface{}) implementation has to incur.


It will if it means that the compiler can generate code which uses static dispatch rather than dynamic dispatch - it can be an improvement over using interface{} which I understand is the common way of dealing with cases that need generic behaviour in Go (with alternatives being code generation and simply copy-pasting).

As far as I understand, with interface{} the compiler can't just decide to generate static dispatch based code because there's not enough information to generate that in a generalisable way (the code can depend on information that is only available at run-time rather than compile-time) and it would be bad for programmers who expect dynamic dispatch to be used because they're optimising for something else e.g. binary size or compilation speed.

Obviously generics wouldn't be a performance improvement over using, say, code generation or doing a bunch of copy-pasting (and, given Go's lack of function overloading, a bunch of annoying renaming and some logic to dispatch to those methods which actually might cause more of a performance hit...) but those approaches tend to cause other problems.
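A sketch of the two code shapes (hypothetical `Max`/`maxAny` helpers, not from any real API): the generic version is specialized per type argument, while the interface{} version must recover the concrete type at runtime.

```go
package main

import "fmt"

// Generic version (Go 1.18+): Max is instantiated for each type
// argument, so the comparison compiles to a direct, statically
// dispatched operation on unboxed values.
func Max[T int | float64](a, b T) T {
	if a > b {
		return a
	}
	return b
}

// Pre-generics version: arguments are boxed into interface{} and the
// concrete type has to be recovered with a runtime type switch.
func maxAny(a, b interface{}) interface{} {
	switch x := a.(type) {
	case int:
		if x > b.(int) {
			return x
		}
		return b
	case float64:
		if x > b.(float64) {
			return x
		}
		return b
	}
	return nil
}

func main() {
	fmt.Println(Max(3, 5), Max(2.5, 1.5))
	fmt.Println(maxAny(3, 5))
}
```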


These generics are not the way Java generics work. The compiler essentially generates specialized code for each type (broadly similar to C++/Rust monomorphization, though Go groups instantiations by "GC shape", so e.g. all pointer types share one instantiation).

You can have a look at the assembly code generated by a Go program using a slice of interface{} vs a slice of a specific type on Compiler Explorer.

Edit: Removed C ... somehow have a habit of writing C/C++ together :)


C doesn't have reified generics. You could put C# in that list though.


Yup... old habit of writing C/C++ :)


Not for a given development effort and API complexity. Sure you can reimplement a list sort for each of your types.


Why?


Something fishy here. A generic data structure is exactly the kind of code that you wouldn’t expect to perform better with generics. At most you’re getting the benefits of unboxing and the absence of a single conditional jump for the type check when you convert back from interface{}. But that shouldn’t make much difference to a series of tests that just push and pop items from a queue. I suspect that any realistic code using this data structure would see no tangible performance benefit. These tests are probably just picking up the cost of an extra allocation required to box a primitive type as an interface.

If you’ll excuse the blog spam, I did a dive into the performance of interface{} here: https://adrummond.net/posts/gogenerics

(The blog post was written before it was certain that Go would get generics.)


> At most you’re getting the benefits of unboxing

Which is.. 3x? or 10x?

> These tests are probably just picking up the cost of an extra allocation required to box a primitive type as an interface.

I've seen unboxing deliver 100x speed ups cause it allowed the compiler to auto-vectorize multiple loops, fuse them, optimize stuff better etc. Stuff that it can't do if it doesn't know the memory layout of things, or if there are "optimization barriers" inserted in between all these steps (`interface {}`).

---

TBH it's 2022 and it seems that "Go" devs are just seeing fire for the first time in their lives. Yeah, fire heats. And yeah, this type of stuff is why people don't put Go in the same bag of languages as C++ or Rust.

Go feels fast if you are coming from Python, but if you actually look at what the hardware can deliver in terms of FLOPs and bandwidth, Go programs run at a tiny fraction of hardware peak (and no, just because your task manager shows Go threads using 100% of the CPU doesn't mean that it is achieving 100% of peak CPU perf).


Exactly. Go was (still is, tbh) slow. That is why these improvements are possible. Fast is relative, ofc, but if you are going to introduce the words "low level" in your description then don't be surprised when C/C++/Rust rear their ugly (but very fast) heads.


> Go was (still is tbh) slow.

"Slow" isn't an objective metric. Slow how?


It's less that it isn't objective but more that it's relative.

As I mentioned in my original comment once you start throwing around the expression "low-level language" or "systems language" you invite comparisons to C/C++/Rust.

Thus you will end up with a list of points like this:

It makes less efficient use of hardware due to poor vectorization, and it lacks a JIT that could take advantage of information only apparent at runtime. Runtime-informed escape analysis in particular would be good. I imagine that Go on Graal would be a very, very fast language if time was invested there.

Also, until recently generics weren't a thing, so high-performance generic data structures were impossible without code-gen. They were possible to write, but slow as balls due to interface{} and reflection.

Its garbage collector is mediocre for latency, nothing impressive. But it's straight dog shit for throughput. Definitely not -fast- compared to JVM/Hotspot/ZGC.

So, on the good side of Go it provides easy/good access to primitive types and primitive arrays of said types so as long as you know what you are doing you can generally massage the compiler into doing something not completely stupid.

That said if I want to write really fast code in something that isn't C/C++/Rust I would much prefer Java or C#... or more logically if FAST is the primary concern then I would probably pick C++ or Rust.


> It's garbage collector is mediocre for latency, nothing impressive.

Go's GC is best-in-class for latency. What makes you think otherwise?


It's on par with other latency-optimized GCs for pure pause time, but its architecture pays a gigantic throughput cost, making it only a mediocre GC overall. ZGC puts out the same pause numbers with monstrous throughput in comparison.


Even in these microbenchmarks it’s not 10x.

Boxing can be faster or slower depending on the context. For example, if the objects you are storing are large, unboxing them (rather than storing pointers to them) could well make the queue perform worse, as extending the arrays will require more memory to be copied.

There’s no doubt that the unboxing permitted by (Go’s) generics will sometimes lead to better performance. But I think that the article is exaggerating.

> TBH it's 2022 and it seems that "Go" devs are just seeing fire for the first time in their lives.

Come on, let’s keep this patronizing and flamey kind of commentary off HN. I for one am a language nerd who’s tried plenty of languages with various forms of generics. I’ve even been paid to write Haskell code! That said, I was pretty happy with Go without generics.


You can just verify using any language with powerful optimizations and whose generic system supports both boxed and unboxed generics, e.g., Rust.

The difference between passing an unboxed generic and triggering monomorphization vs passing a boxed type-erased generic is night and day in terms of performance.

Boxed generics generate one version of the code that need to work for all types, independently of layout, and dispatch via the generic ABI (aka `interface {}` in Go).

Unboxed generics generate one version of the code per type, removing memory allocations, and allowing the compiler to optimize the function for each type and each layout independently.

This increases code size and compilation times, but in many cases allows dozens of compiler optimizations that aren't possible for the boxed generic case. The code produced by these optimizations allows others to run, etc. And a 100x improvement in perf isn't uncommon (I've seen improvements and regressions an order of magnitude larger than that from switching back and forth between boxed and unboxed generics in Rust).


We can already see the comparison in the benchmark in the original blog post.

I edited my previous post to point out that I’m familiar with Rust and other trendy languages. Please don’t assume that I’m not just because I don’t also hate Go.

The speed up (or even slow down!) from boxing is highly dependent on the nature of the code, the size of the relevant type, and the usage patterns for the data structure.

In the context of a generic data structure, unboxing is not likely to unlock a huge number of optimizations. This is in contrast to e.g. generic code for performing matrix operations, where unboxing could make a huge performance difference.

My original post said “A generic data structure is exactly the kind of code that you wouldn’t expect to perform better with generics.” You then responded by talking about performance improvements deriving from autovectorization, which are relevant to generics in general but not to generic data structures like a deque.


> In the context of a generic data structure, unboxing is not likely to unlock a huge number of optimizations.

Aside from the optimisation of "Not having to allocate and deallocate memory", you can unlock the optimisations of "removing one step of pointer chasing" and "better memory locality" - arguably the top two things one should be thinking about when writing fast code.

If your datastructure has to access the memory, boxed values hurt you in two ways. One is that on each access you have to chase a pointer to who-knows-where. There's also no reason to believe that this value is going to be stored next to other values in the datastructure.

This hurts you in two ways - the obvious one is that the extra pointer hop is another chance to miss cache. The less obvious one is that loading one element does nothing to bring nearby elements into the cache. If your datastructure tends to access nearby items together (say a btree), this is extremely valuable for performance. Even better, if you start loading some sort of metadata stored near elements (again, like a btree might have), the prefetcher might already start loading your data.

You also get inlining benefits in this case - a virtual call out to some comparison function might compile down to a single instruction.

You bring up a good point, that there are cases where boxing matters like having extremely large elements (you still get inlining with generics though). It also matters less for datastructures that don't actually inspect their elements in any way. But in my experience, if you look at something like a map type datastructure, you don't see keys being 1KB objects - they tend towards things like strings, integer IDs, or fairly small structs consisting of the above.

Benchmarking these effects can be a little tricky - small microbenchmarks can reside entirely in L1 or L2 and get extremely unrealistic allocation patterns (only thing allocating == nice, contiguous, allocations), so a realistic benchmark has to apply some degree of cache/allocator pressure that's relevant to your workload.


> In the context of a generic data structure, unboxing is not likely to unlock a huge number of optimizations.

This is not true; the difference between a Vec<Box<T>> and a Vec<T> is huge. You can't easily vectorize a Vec<Box<T>> because the elements can be in different memory locations, and this is important because... people loop over all elements in data structures super often.

And not only vectorization, but removing the memory indirection would make much better use of caches, depending on the size of T, multiply your memory BW by a big factor, etc.


Hopefully you’re not looping over all the elements in your deques super often, or there wouldn’t be much point in using a deque!

We’re talking past each other here. Yes, unboxing will sometimes give a performance improvement. But I’m skeptical that any realistic code using the deque structure in the blog post would see much of a gain. To repeat, even in the micro benchmark in the blog post, the performance gains are only around 3x. In most realistic application-level code, the code paths that lead to insertion of a new element will also do some allocation, so that the cost of boxing is lost in the noise.

Vectorization strikes me as a fairly niche case. It’s revealing that you use Vec as an example. In any case where autovectorization is going to give a useful performance improvement, you probably can just use a Vec. And of course, the Go equivalent of an unboxed Vec in Rust doesn’t require generics anyway. So I doubt that there is a large body of existing Go code that will suddenly become ripe for autovectorization once data structure libraries switch over to generics.


> Hopefully you’re not looping over all the elements in your deques super often, or there wouldn’t be much point in using a deque!

I do this all the time: one thread fills the deque, while another thread consumes as much of it as possible.

A modern deque is just a contiguous (mirrored) array in virtual memory. From the point of view of the algorithms, it looks just like a Vec.


If I understand the scenario correctly, you're pushing items onto the end of the queue and consuming items from the beginning. You don't really need two arrays in that case. A single array can be efficiently extended at one end and shrunk from the other end. (Admittedly, the way slices work in Go makes it awfully easy to leak memory if you do that in a naive way. But that's a separate issue from generics.)

Edit: I forgot to mention that channels are also a good fit for this sort of use case in Go.


There is a single contiguous memory allocation, which mirrors itself.

One thread produces elements and pushes them at the tail (e.g. I/O bytes, in batch), and one thread consumes as many elements as possible in batch from the other end (e.g. all bytes available, in batch).

The VA mirror is required to allow processing all elements in the deque as if they were adjacent in memory, instead of having to "chunk" them depending on how the deque currently wraps.

This is the library I am using, the README contains an explanation : https://github.com/gnzlbg/slice_deque


It's a cool library. What I'm not quite getting is why your use case requires a deque rather than just a regular queue. If you just used a regular queue then you wouldn't need any virtual memory tricks to keep things contiguous while keeping the required big O characteristics.

Enqueue by appending to a slice; dequeue by incrementing the index of the oldest element. To avoid space growing indefinitely, copy the backing array to a smaller array every time the backing array becomes k times larger than the queue. If you're not enqueuing items at both ends of the queue, that's all you need for an efficient queue that maintains contiguity.


> In the context of a generic data structure, unboxing is not likely to unlock a huge number of optimizations.

Reading between the lines here from your other comments, I think you might be excluding simple arrays here, because in Go you could already have monomorphized functions over unboxed arrays of elements without the new Generics feature? (I say "Generics feature" because that's what is new; I'd say Go already had support for "generics" special cases for arrays.)

Because generics over simple arrays is somewhere that unquestionably, vastly improves performance, in most normal situations. That's the impetus for "data oriented design", and the reason for, e.g. pandas using Arrow as a backing store, or why Clickhouse absolutely flies, or the basis for q/kdb: data locality to take advantage of CPU cache lines.

I think that in many situations, iterating over an array of contiguous data is faster than a more "optimal" algorithm (from a Big O perspective) juggling pointers. I've seen a couple of talks on this, I think one by Godbolt and one by Stroustrup, and they were kind of counterintuitive to me (what do you mean iterating over an array is faster than having a map?), but very thought provoking.


Yes, exactly.

I can see why people say that Go already had a sort of built-in generics feature. However, I think the same could be said about any language that allows you to define arrays with different element types (e.g. C). In common parlance, a language has generics iff it permits you to explicitly abstract over types.

I wish it had occurred to me earlier that C vs. C++ is a good test case here. If generics really led to huge performance improvements in typical application-level code, we'd expect C++ programs to run much faster than C programs. Typically, they don't. Of course there are exceptions, such as code that benefits from libraries like uBLAS.

(I'm aware that you can fake generics in C using macros, but there are also comparable code generation tools for Go.)


Let me wade in: I think what you're saying is plausible, because the CPU cache should be an important variable here. Boxing should cause cache misses.


It's not only the CPU cache, it's the whole CPU.

Having one function per type results in more instructions in total, because you end up with more copies of the function, but fewer instructions when dealing with one type, like in a collection of values of one type only.

Fewer instructions is a linear speedup. A single specialized copy of the code enables inlining and constant propagation for type sizes, which enables vectorization of loops. Vectorization alone can buy you up to 32x on modern hardware if the code is FLOP-limited.

Avoiding pointer indirections also trashes the cache less, and the bandwidth of caches is orders of magnitude higher than the bandwidth of RAM (multiple TB/s vs the low 100s of GB/s).

And caches have lower latencies so CPU threads wait less on memory.

So your speedup ends up: fewer instructions * more FLOPs * more BW, and that can easily be 10x * 32x * 20x ~= 6000x in the worst pathological case.

A 1000x speed up / slowdown per element due to switching between boxed and unboxed generics is less common than a 100x speed up in Rust, but it does happen.


If only they'd had them from the beginning and not bolted them on


Then we probably wouldn't have had Go for the last decade, and we wouldn't have all the real world use cases to guide a correct implementation of generics. "Get it completely right the first time" has never happened in the history of software.


Unboxing is a huge optimisation though isn’t it? No need to allocate when inserting a new item into the queue (beyond the standard allocations which happen when growing a slice), fewer pointer indirections when reading the queue, easier garbage clean up at the end (a single deallocation of a slice).

I think (but am not sure) that the Go compiler can add some small primitive types and structs (precisely: those that fit in a machine word) to a []interface{} by storing them inline, removing the need for an allocation. So perhaps you will only see major speed increases when using larger structs.


With larger structs it’s a double-edged sword. Unboxing them also means that the deque implementation has to copy a lot more memory when extending the underlying slices.

Unboxing is typically an optimization, for sure. I guess I disagree that it’s a ‘huge’ one except maybe in the case of arrays/slices (where you can do it anyway in Go without generics).


That's an incredible memory you have. Word-sized values can be inline, but they still have the type overhead. I only know this since I just checked.


All those times in the table seem... Slow...

For example, BenchmarkPushFront-10 takes 9 ns... That should consist of one write to memory (to update the pointer in the object). With decent compiler optimizations and a modern out-of-order CPU, we ought to expect one of those per clock cycle, so we ought to be seeing performance around 0.5 ns. Perhaps even faster if the compiler manages to vectorize stuff (although I'm not aware of any compiler that can vectorize memory allocations).


Well we don't know what is being pushed or what was the function call overhead. Edit: the -10 suffix can mean there is a 10x loop in the benchmark?


The -10 indicates the number of CPUs used.[0] But the `ns/op` should be per iteration of the benchmarking loop.

[0] https://blog.logrocket.com/benchmarking-golang-improve-funct...


Skimming the code it looks like one operation is 10 push front, not just 1.



