I don’t think Go channels really panned out well. However, I do think that Go’s thread model and scheduler have turned out great. I think channels could’ve made sense, but they wound up being both more complicated to use and less performant than I think was hoped for. However… channels aren’t a solution for iterators. They’re a solution for communicating between threads. Go, like C, simply doesn’t have anything geared towards solving the problem iterators solve. Superficially they can fulfill some similar needs, but personally I feel they’re both practically and philosophically not in tune with that use case, even though you can certainly make it work in some cases to surprisingly decent effect.
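To make the mismatch concrete, here's a minimal sketch of the channel-as-iterator pattern (the function name `ints` is just for illustration). It works, but every traversal costs a goroutine and a channel, and if the consumer stops ranging early, the producer goroutine blocks forever:

```go
package main

import "fmt"

// ints uses a channel as an "iterator" over 0..n-1. Each call spins
// up a goroutine, and each element crosses a synchronization point.
func ints(n int) <-chan int {
	ch := make(chan int)
	go func() {
		defer close(ch)
		for i := 0; i < n; i++ {
			ch <- i // if the consumer abandons the range early, this blocks and the goroutine leaks
		}
	}()
	return ch
}

func main() {
	sum := 0
	for v := range ints(5) {
		sum += v
	}
	fmt.Println(sum) // 10
}
```

A dedicated iterator construct would avoid both the scheduling overhead and the leak-on-early-exit hazard; channels were simply designed for a different job.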
Now as for the rest. Generics don’t automatically make code slower. In fact, in some places, the practical effect should make code faster; for example, the sort package currently defeats escape analysis even though it shouldn’t, and a monomorphized version should not suffer that consequence. Also, obviously, the fact that it removes runtime dispatch will also lower overhead. I understand that.
However, generics DO have some consequences:
- Every part of the toolchain—the compiler, linters, etc.—needs to understand generics and handle them properly. This is a non-zero cost; generics-heavy code will have measurably slower compile times. Go compilation times are fast, and that’s a feature.
- Generics improve the ergonomics of certain patterns that are absolutely slower. It is totally possible for someone to write a “map” function today for every type they could ever want, but it’s not really practical, and since it forces you to just write the loop out anyway, it raises the question of why you would bother in most cases. With generics, it should be relatively easy, but without aggressive optimization, it will be less efficient than the simple for loop. The map call and closure would need to be inlined for the optimizer to get back to the same point. Having to do this, again, will slow down compilation more.
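For illustration, here's a sketch of that trade-off using Go 1.18+ type parameters (`Map` is a hypothetical helper, not a stdlib function). Unless the compiler inlines both the `Map` call and the closure, the indirect call per element costs more than the hand-written loop:

```go
package main

import "fmt"

// Map is the generic helper that generics make practical to write once.
// Without inlining, f is an indirect call on every element.
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

func main() {
	doubled := Map([]int{1, 2, 3}, func(x int) int { return x * 2 })

	// The equivalent plain loop, which the compiler already handles well:
	src := []int{1, 2, 3}
	plain := make([]int, len(src))
	for i, x := range src {
		plain[i] = x * 2
	}

	fmt.Println(doubled, plain) // [2 4 6] [2 4 6]
}
```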
Honestly, you’re right; it’s far from the end of the world. However, I am pessimistic because, to me, it’s unclear whether the costs of generics will be outweighed by the benefits, and once the cat is out of the bag, it’ll be hard to ever go back on it.
Personally, I believe there are other improvements that would’ve been more valuable to consider first, such as sum types and basic pattern matching. Sum types feel like something that, while it would have costs, has the potential to benefit a wide variety of programs and make certain code less error prone in a way that I don’t expect generics to.
As far as I know, that’s only for interfaces, and it’s more like a union and not a sum type. I’m talking about sum types for structs, like Rust has with enum.
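To show the gap: a common Go workaround is a "sealed" interface with one struct per variant (the type names here are made up for the example). The unexported method keeps outsiders from adding variants, but unlike a real sum type, the compiler never checks that a type switch handles every case:

```go
package main

import "fmt"

// shape is a sealed interface standing in for a sum type:
// only types in this package can implement the unexported method.
type shape interface{ isShape() }

type circle struct{ radius float64 }
type rect struct{ w, h float64 }

func (circle) isShape() {}
func (rect) isShape()   {}

func area(s shape) float64 {
	switch s := s.(type) {
	case circle:
		return 3.14159 * s.radius * s.radius
	case rect:
		return s.w * s.h
	default:
		// with a compiler-checked sum type, this arm would be
		// provably unreachable; here it's a runtime failure.
		panic("unhandled shape variant")
	}
}

func main() {
	fmt.Println(area(rect{w: 2, h: 3})) // 6
}
```

With something like Rust's `enum`, forgetting a variant is a compile error rather than a latent panic, which is exactly the kind of error-proneness reduction I mean.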
(I realize I’m replying ridiculously fast here, but I just happened to click over to Comments in the first minute. Dumb luck.)
Of course, I could just use Rust, but it’s much harder to make idiomatic Rust libraries than it is idiomatic Go libraries, and I think that fact stunts the ecosystem a bit.
Go may encourage you to use a map to an empty struct for a set, or a weird if statement syntax that contains a full statement instead of a more elegant looking match or for expression, but I find its little quirks to be manageable and without a terrible amount of consequence. Generics will probably bring some good, like the end of needing weird slice tricks, but I’m still concerned it won’t be overall worth it.
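For anyone who hasn't hit them, the two quirks I mean look like this, a minimal sketch:

```go
package main

import "fmt"

func main() {
	// The idiomatic Go "set": a map whose values are the zero-size
	// empty struct, since there's no set type in the language.
	seen := map[string]struct{}{}
	seen["a"] = struct{}{}

	// The if-with-init-statement quirk: a full statement before the
	// condition, where other languages might use a match expression.
	if _, ok := seen["a"]; ok {
		fmt.Println("a is in the set")
	}
	fmt.Println(len(seen)) // 1
}
```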