Go 1.22 (go.dev)
419 points by bestinterest 7 months ago | 160 comments



I've been mostly writing Typescript for the past 3 years - and recently started writing code in Go. Initially I was a little apprehensive: lack of array functions, a slightly less flexible type system, etc.

But after spending some time writing Go, I now had to re-initialise a Typescript project for a small-ish team (4-5 devs), and the amount of time spent was striking: linting, selecting the correct library for server routing, the correct server, coding standards, basic error handling and enforcing it with a custom error or Result type to get out of nested try/catch hell (which still loses the majority of errors), setting up testing and mocking, setting up Prisma and what not - and after all that the PRs are still hit and miss, some OK, some making use of weird JS functions..

Don't get me wrong, I really do like Typescript. But I gotta say, after all of that it's just great using a language with a fantastic standard library, proper type safety, and some coding standards built in. It's obviously not without quirks, but it's pretty decent - and great to see that routing has now also moved into the standard library, another bit you don't have to worry about. Can't wait for some map/filter/find slice functions though!


In my CTO newsletter I recently wrote about TS vs Go releases, with Go 1.22 as an example. TS gets more and more complicated with each release, catering to the power users. Go adds things that make it simpler to use, like the (previously missing) range over integers. It's like game sequels: they add more and more canon and game mechanics, until the sequels (or comics) need a reset to become accessible to newcomers again. Programming languages can't do this, so I'm happy that Go keeps this in mind.


All languages that keep evolving get more complex over time. The changes are always intended to make programs written in the language simpler.

Go moves at a pretty slow pace, adding only minor new features (and thus minor complications) in most releases. Even in 1.22, they are previewing a new feature, range-over-functions, which seems to be basically C#/Python's iterator functions - a feature which will, of course, complicate the language, but make certain programs simpler.

As a general rule, the more features a language has, the shorter a program that implements a particular algorithm can be, but the harder the language is to learn, and the bigger the chance that it will be misunderstood. There are exceptions where certain features make languages more verbose (e.g. access modifiers), but typically only in minor ways.


"range-over-functions"

Yes, it's something people coming to Go would have assumed already worked - until they looked at all the range cases and found it didn't.


range-over-functions is the experimental new feature where a function can generate a sequence by executing a bit at a time, i.e.

  s := []string{"hello", "world"}
  for i, x := range slices.Backward(s) {
    fmt.Println(i, x)
  }

  // Backward returns an iterator that yields the indices and elements of s in reverse.
  func Backward[E any](s []E) func(func(int, E) bool) {
    return func(yield func(int, E) bool) {
        for i := len(s)-1; i >= 0; i-- {
            if !yield(i, s[i]) {
                return
            }
        }
    }
  }
I don't think anyone expects this to work in Go as it is today, it's just a new feature that will make the language more complex, but it will make certain kinds of programs simpler to write.

I should also note that the official name is "range-over-function iterators", I called it by a wrong name earlier.


Yes, it might depend on where you come from. As someone with decades of Java experience - not a fancy language over most of its lifecycle - I was mystified why there was no iterator support as in Java's for loops.


Oh, now I understand what you mean - you're thinking of this as a way to do `for (T x : collection)` in Java.

I see this as more like the `yield return` functions of C#, which I definitely wasn't expecting. That can of course also be used to implement an iterator for a collection, but it seems much more general.

Given that before generics Go had exactly 3 types of collections, and that those were all iterable with range, I guess I never thought about this thing missing from the for loop.


Iterators are not only about collections. It's not as powerful as a yield built on continuations, but you can return whatever you like.

~15 (?) years ago I wrote a "famous" blog post on how to use Iterator as a poor man's Maybe/Option.

   for (String s : myOption) {   // myOption being an Option<String> that implements Iterable<String>
     // Executed on Some
   }


An idiomatic way to do this now is to start a goroutine and range over a channel it writes to. Less ergonomic and more error prone, but it works.
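
A minimal sketch of that idiom (names are illustrative):

  // backwardChan yields the elements of s in reverse over a channel.
  func backwardChan[E any](s []E) <-chan E {
    ch := make(chan E)
    go func() {
      defer close(ch)
      for i := len(s) - 1; i >= 0; i-- {
        ch <- s[i]
      }
    }()
    return ch
  }

  // for x := range backwardChan([]string{"hello", "world"}) {
  //   fmt.Println(x)
  // }

The catch is that if the caller stops ranging early, the sending goroutine blocks forever and leaks - that's the error-prone part.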


The thing with Typescript is that it is only a fancy JavaScript linter, so the only way to justify new releases is to keep building up the type system; there is nothing else to add, since language features that aren't type-system related are supposed to come from JavaScript's evolution.

So they either say they are done, or keep adding type theory stuff until it implodes, I fear.

Actually, I am looking forward to the type annotations now on the JavaScript roadmap being good enough for general use cases.


Is there a simpler JS linter that does 90% of what basic TypeScript does? I mostly use simple types (basic types, Promises, arrays, and a bunch of interfaces) and I find TypeScript valuable for that. It saved me a few times from accidentally treating a Promise<Whatever> as Whatever, for example - and other things.

But I heard an interesting argument: It's not TypeScript vs. vanilla JS; it is TypeScript vs. whatever else full-blown linting/IDE comfort you can get by still writing vanilla JS with no transpile step.


I guess the recent movement started by some projects to go back to JSDoc type annotations kind of answers that.


Do you know if the JavaScript type annotations proposal is progressing? I didn't hear anything after the initial proposal.


They held a meeting a few months ago so it's alive but probably still years away.

https://github.com/tc39/proposal-type-annotations/issues/184


I don't have the source at hand, but I remember seeing that they wouldn't support it until it had progressed as a JavaScript proposal. Their reasoning was that the decorators API is really weak and will likely change, meaning a complete rewrite of the TypeScript decorator implementations.


Yes, but TS users can stay on the "type-newbie" path, which is still a huge improvement over vanilla JS and doesn't take much effort. What I've had issues with is devs who came from the vanilla JS world and love it, so they go out of their way to avoid utilizing more complex types when they would add no-cost safety (aside from the initial minutes or hour spent learning the feature).


And that's dangerous; give people a lot of advanced options and they will inadvertently use them, and nobody will dare to touch the result, and it'll cause a lot of headaches, etc etc etc. Scala made this mistake as well. Go is the antithesis to TS and Scala, and I hope they keep it up.

I also hope but doubt that they will do something few other languages dare: remove features.


Go certainly has a very different philosophy, but I don't think it's necessarily superior. Typescript is not as academic as Scala, but it gives power to developers who are willing to put in the effort to learn. With that power come greater efficiency and type safety.


> TS gets more and more complicated with each release, catering to the power users.

My understanding about the use of advanced/more expressive TS features is that it's OK not to use them, and not to spend time on them for most products. But if you are writing a library or framework in TS, go ahead, especially since they are meant to improve the experience of consumers.


Every feature that makes a language more complicated will eventually hit you. It might make the TypeScript compiler slower, make code harder to refactor, slow down your LSP, etc...


One person on the team will use them, if you don't put up a linter that prevents usage.


They just keep adding new features for fear of losing their position, because they can't decorate the release notes with bug fixes - MS doesn't give high marks for bug fixes. Thus the bugs keep growing.


I feel very similar to your experience.

What made me stay in go is its amazingly unified build toolchain. The things you can do with go:embed and go:generate blow my mind in every other project.

The golang.org/x packages are another great thing: pretty much every internet-RFC-related implementation is available there, ready to use.


Can you give some examples of how you are using go:embed and go:generate?


We use go:generate to generate services and types from protobufs.

    //go:generate protoc --go_out=src/generated/ protos/service.proto
Our CI pipeline is a dockerfile that looks vaguely like this:

    FROM golang:1.21 as build
    # (copy the sources in, then:)
    RUN go generate ./...
    RUN go build ./...
    RUN go test ./...

    FROM scratch
    COPY --from=build ...
The CI steps are: docker build <XXXX> && docker push <XXXX>

We have a GoLand project template that has options for generate, build, and test that match what we do in CI, rather than having that _one_ edge case that is the difference between `make build` and `go build`. That difference has caused more than one outage in my career.


One example that comes to mind is building a single-binary full-stack application.

You can use whatever frontend framework you want, and just embed the html/css/js/asset files inside the binary with go:embed. In the case of a dynamic set of files, you can also write a Go utility to generate the embed directives with go:generate.

In addition to the ease of distribution (no more assets/ directory - just a single binary executable!), it also speeds up the application, as it no longer has to perform file system reads in order to serve a webpage.
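
A minimal sketch of that pattern (the dist directory and port are placeholders):

  package main

  import (
    "embed"
    "io/fs"
    "net/http"
  )

  //go:embed all:dist
  var assets embed.FS

  func main() {
    // Serve the embedded files from the site root instead of under /dist/
    // (error handling omitted for brevity).
    sub, _ := fs.Sub(assets, "dist")
    http.Handle("/", http.FileServer(http.FS(sub)))
    http.ListenAndServe(":8080", nil)
  }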


A good example of a Go project using embed to pack its html/css/js assets in a single binary is PocketBase:

https://github.com/pocketbase/pocketbase/blob/master/ui/embe...


Last I checked, AdGuard Home also did this.

https://github.com/AdguardTeam/AdGuardHome


I've used go:generate to generate a set of structs based on an XSLT document. That said, since XML is fairly unpopular these days, the generator was still a bit buggy.

And I've used go:embed to include .sql files for database migrations and querying. I should really spend some time on this POC I made (sqlite, goose for migrations, and an implementation of temporal tables) and publish it as a demo.
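
The migrations part is only a couple of lines (directory name is illustrative); tools like goose can then read the files from the embedded fs.FS:

  package db

  import "embed"

  // Embed every .sql migration so the binary carries its own schema history.
  //go:embed migrations/*.sql
  var migrationFS embed.FS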


Not OP, but I've used embed to ship compiled eBPF code with a project - it helps to only ship the binary. Same thing for shipping the Swagger static HTML to host an OpenAPI server.


> can't wait for some map/filter/find slice functions though

Till that day comes, you could use the "lo" library (inspired by lodash). It's my go-to Swiss army knife for Go projects.

https://github.com/samber/lo


On the other hand, I advise you NOT to use this kind of library and write simple, fast go code most of the time, with the occasional generics helper. Why the hell would I clutter my code with, for example: https://github.com/samber/lo?tab=readme-ov-file#fromentries-...


I've had many cases in the past (not in Go) where I've had to make use of that exact same function (in TypeScript, in F#, and in C#). It is actually quite useful when doing any amount of data manipulation (a map/filter/reduce chain that often ends up as a list of key-value pairs, which then gets turned into a map/dictionary of sorts).

At least in my job(s) over the years, turning a flat list of DB records into a more complex, nested (potentially on multiple levels) data structure before handing it off to the front-end is very common. I've seen it done with "simple, fast code" (although not in Go specifically, but in other languages), and it very quickly turned into huge messes of long nested for loops that were very difficult to read. LINQ, Lodash, Java's streams... I sincerely can't understand how Go developers live without them. They make me a lot more productive both at reading and writing code.


Part of the issue is that Go has a variety of design choices / limitations that conspire to produce different design patterns in this area than what you might see with e.g. Java.

For example: let's say we want to implement something akin to Java's Comparator interface.

Java allows interfaces to be extended with default implementations. It also allows methods to specify their own generics separate from the entire interface / class.

Thus the "comparing()" method can take in a Function<T, U> that extracts a value of type U from T that is used for comparison purposes. The return type is Comparator<T>.

(Generics simplified a bit, there are other overloads, etc.)

There's also thenComparing(), which allows chaining Comparator instances and / or chaining Function<T, U>.

As a consequence, one can use .thenComparing() to build up a Comparator from the fields on a class pretty quickly. Especially with lambda syntax.

Go doesn't support methods having different type parameters than the overall interface / struct.

Go also doesn't have default implementations. It doesn't allow function or method overloading.

Go does have first class functions, however.

To build the equivalent capability, you'd most likely build everything around a comparison function (something like func(a, b T) int for a type parameter T) and write a bunch of functions to glue them together / handle useful operations.

That impacts the readability of a long chain of calls, especially since Go doesn't have a lambda syntax to make things a bit tighter.

Getting rid of the limitation on method-level generics would make this _significantly_ more ergonomic.
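
To make that concrete, here's a rough sketch of the function-based equivalent (the ByKey/Then helpers are made up; cmp is the standard library package from Go 1.21):

  import "cmp"

  // A comparison function plays the role of Java's Comparator.
  type CmpFunc[T any] func(a, b T) int

  // ByKey builds a comparison function from a key-extraction function.
  func ByKey[T any, K cmp.Ordered](key func(T) K) CmpFunc[T] {
    return func(a, b T) int { return cmp.Compare(key(a), key(b)) }
  }

  // Then uses the second comparison as a tie-breaker for the first.
  func Then[T any](first, second CmpFunc[T]) CmpFunc[T] {
    return func(a, b T) int {
      if c := first(a, b); c != 0 {
        return c
      }
      return second(a, b)
    }
  }

  // Usage with slices.SortFunc:
  //   slices.SortFunc(people,
  //     Then(ByKey(func(p Person) string { return p.Name }),
  //          ByKey(func(p Person) int { return p.Age })))

Since these have to be free functions rather than methods, the call site reads inside-out instead of as a fluent .thenComparing() chain.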


You wouldn't write that exact function. You'd have some complicated pipeline that ended up in a map; and it might be easier to follow the logic using map / filter / fromentries.


I strongly agree. Map / filter isn't included, but a fair number of the various utilities are included in the standard library in the `slices` and `maps` packages.

`context` also helps solve a bunch of the channel related use cases in a more elegant (IMO) way.

There are only a handful of things I wish were included, such as a "Keys()" function for maps.
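
In the meantime a generic stand-in is only a few lines - this mirrors what golang.org/x/exp/maps offers:

  // Keys returns the keys of m in unspecified order.
  func Keys[M ~map[K]V, K comparable, V any](m M) []K {
    keys := make([]K, 0, len(m))
    for k := range m {
      keys = append(keys, k)
    }
    return keys
  }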


FYI - “Keys()”, “Values()” and others have been pulled because they’re likely to be implemented using the new range-over-function paradigm.

They were included in the experimental golang.org/x/exp packages.


Unrelated, but this link puts me into an infinite refresh loop on two mobile browsers on iOS (Firefox and DDG)


Klingonization of go code. I don't like it.


Only if you're willing to take the cost associated with that; it adds a "DSL", an extra language to the language, reducing its goal of simple and readable code. Compare also with using a testing library that adds human-readable assertion phrases (expect(x).toBe(y) etc).


Why not use the generic-based built-in slices?

https://pkg.go.dev/slices
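
A quick sketch of what's already in there (standard library since Go 1.21):

  package main

  import (
    "fmt"
    "slices"
  )

  func main() {
    nums := []int{3, 1, 4, 1, 5}
    slices.Sort(nums)                     // [1 1 3 4 5]
    fmt.Println(slices.Contains(nums, 4)) // true
    fmt.Println(slices.Index(nums, 4))    // index of the first 4
    fmt.Println(slices.Max(nums))         // 5
  }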


Yeah, it's quite productive in a counterintuitive way: not having a ton of features just removes a lot of tiny decisions you'd have to make in a richer language.


I've worked for a while at a client using Typescript. After a while I started calling it "Tricks-Driven Development": every time I had to do something I'd read the docs, and then someone would point out some trick not in the docs that could be used instead.


Go is beautiful, productive, readable. I love Go. It's my "C with niceties".


I feel the same way. I was recently evaluating TypeScript and Go for a small project at work, having little experience with either. I went with Go almost entirely because I’d made decent progress in solving the problem by the end of my timeboxed investigation period. After a similar time with TypeScript, all I’d really done was get bogged down trying to work out what tooling I should be using.

Compilation, testing, and automatically formatting TypeScript all have multitudes of options with their own pros and cons. All that stuff is just built into the Go toolchain. Yeah it’s not always perfect but it’s more than good enough and, importantly, it’s ubiquitous. There’s nothing to think about or argue over.

That said, I’ve been wanting to try Deno. My understanding is that it takes a much more Go-like approach to running TypeScript.


Yeah someone I know has been banging on about go, too. I have a personal project I want to do so maybe I'll pick go for it.

The main pro and con of the JS ecosystem is that it's so flexible; you can mostly do things how you want, and there are so many libraries in each space all competing. But because you can do anything how you like, it becomes hard to decide which library you want to use (or you pick a different one every project), and every dev has a different way of accomplishing the same thing.

I used to write internal libraries and frameworks a lot in my career, and one of my mantras was to use TS to try to lock devs into doing things a certain way, but there's only so far you can go.

My main worry with Go is: is everything built in? Are there multiple, say, web server libs like with nodejs, or only a single option? How are deficiencies addressed if there are fewer options and fewer ways of doing things?


Go is the language for getting sh*t done.


Best summary of Go. That should be the headline of the Go website.


Interesting ...

>> The amount of time spent on things such as linting, selecting the correct library for server routing, the correct server, coding standards,

I don't fully agree here. Those points are pretty straightforward, and coding standards (not formatting - for TS, Prettier is pretty standard, btw) need to be defined even in Go projects. Also, Deno has much of the setting up solved.

>> basic error-handling and enforcing it with a custom error or Result type to get out of this nested try/catch hell which still loses the majority of errors.

Fully agreed. Maybe I have a big lack of understanding of error handling in NodeJS, but how on earth do I find out which functions can throw errors and what errors they throw? If someone could enlighten me I'd be really grateful. To be on the safe side I would need to run my whole code in a try..catch block. How do I decide how to handle different errors if I don't know which errors can occur? On the other hand, just yesterday I had to debug a Rust panic in a smallish code base. Even with stacktraces turned on it took half an hour to find out where the error occurred. Still, Go's and Rust's error handling is much better. IIRC, in Java you see all the types of exceptions that can occur in the docs of a function?

>> Setting up testing and mocking. Setting up Prisma

Again, not a big thing IMO. If you like an ORM, Prisma is one of the best.

>> Don't get me wrong, I really do like Typescript.

Yeah, that's the thing. Typescript is such a fantastic language. Writing Go feels like using a hand axe. Typescript's null handling alone makes it 10 times better (I hope everyone is using it, but that underlines your point regarding the conventions needed ...). I recently found a lib that gives us compile-time-checked pattern matching like in Rust.


I got to spend a couple years writing Dart (not Flutter) and found it to be the best of both worlds. Such an underappreciated language.


Yeah, Dart is way better / easier to debug than Typescript, in my opinion.


Interesting, Dart for backend? Do you build APIs? How is the ecosystem? I've heard good things about Swift as well, but I have concerns that it's too niche for backend stuff.


I was working on Google Assistant at the time; the dialogue manager was written in Dart. I was mostly writing libraries and platform code...so APIs in a sense.

The ecosystem isn't huge outside of Flutter, of course.


The problem with Typescript is the lack of convention and the emphasis on configuration; the reverse is what made Go a great language.


Other replies miss the point - the problem doesn't lie with Typescript itself exactly. Setting up a nodejs/js project with all of the fixings (linting, Typescript, spell checks, builds if needed, etc) is quite tedious.

Sure you can accept some template project or CLI tool to kickstart things if just starting out, but at some point you will need to tweak the configuration and there is an enormous realm of options.

I'm surprised no one mentioned this already, but a runtime like Deno goes to great lengths to solve a lot of these pain points. You get testing, linting, bundling, and Typescript out of the box with sane settings. If Deno worked better with gRPC I'd probably be using it right now in my work projects!


> can't wait for some map/filter/find slice functions though!

Just use a for loop. 90% of the time the code will actually be clearer, and faster.

The hoops people jump through to avoid 2 extra lines of code are mind-boggling.
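
For what it's worth, the longhand version of a filter really is just a couple of extra lines (a sketch):

  // keepEven is the "write the loop yourself" version of Filter.
  func keepEven(nums []int) []int {
    out := make([]int, 0, len(nums))
    for _, n := range nums {
      if n%2 == 0 {
        out = append(out, n)
      }
    }
    return out
  }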


Deno fixes most of the issues you're describing.


I love go. It's just so simple.


If you find the official release notes a bit dry, I've made an interactive version: https://antonz.org/go-1-22


Cool, that does help. It wasn't immediately obvious what the problem with the for-loop variable sharing was, but seeing it run and give unexpected results helped. :)


Thank you this is awesome. I find it so much quicker to read and understand things like this with examples.. and these are runnable / editable! Sick.


The Compact and Replace examples leave me puzzled, are they correct?


They are correct.

> Functions that shrink the size of a slice (Delete, DeleteFunc, Compact, CompactFunc, and Replace) now zero the elements between the new length and the old length.


Oh, it's great.


really cool


I've been writing Go for 9+ years, but for the last 4 years I had to write a lot of Dart (for Flutter). I consider these two languages to be at opposite ends of the complexity spectrum: Dart tries to add and implement every feature possible, while Go is the opposite.

Two observations:

1) I'm spending a lot of time fighting the multiple ways to init stuff in a class (i.e., declare the variable and set the value). Depending on the final/const/static/late prefix, there are multiple ways to init it in the constructor or factory or the initState() method of Flutter's StatefulWidget, and god forbid you refactor any of that - you'll be forced to rehaul all the initialization. Dart's getter feature (which makes functions look like variables) also adds confusion, especially in a new codebase. I constantly find it embarrassing how much time I need to spend on something as straightforward as variable/field initialization. I often find myself missing the simplicity of Go.

2) Compared to Go, Dart has all the juicy stuff like maps/streams/whatever for packing everything into a single line. It's very tempting to use those for quickly creating one-liners with hard-to-understand complexity. Sometimes I even start feeling that I miss those in Go. But when I get to debugging them, or, especially, when junior developers look at these write-only one-liners, with an array of ugly hacks like .firstWhereOrNull or optional '(orElse: () => null)' parameters, they are very confused, especially when cryptic null-safety or type errors stop them. Rewriting those one-liners as a simple Go-style for loop is such a relief.


Tip about Dart: don't use initState for stuff that doesn't directly concern UI (setting the hint of a text field, for example).

Most of my Flutter pages rely either on having very few things to do, or having a MyPageController object such that the parent creates a controller, initializes it however it likes and the child page's behavior is dependent on that controller. A typical example of this would be the parent being a page containing a list, and the user wanting to edit a list element. Create a controller, give it the element, and send the controller to a child page where the user does the editing. On return, the parent can look at the element or other variables/callbacks in the controller to decide what should change in the UI.

This also allows finely-controlled interactions between widgets without having to deal with InheritedWidget or the like. Of course you should use a state management library with this, even though a lot of the time I don't.


Some people joke that state management in Flutter is like HTTP muxers/routers in Go. But I think it's much worse :)

Granted, UI state handling is not an easy task, and it's not directly related to the topic of language complexity. A few years ago I wrote an article about a thought experiment of Flutter being implemented in Go. It's a bit naive, but still fun to think about. [1]

[1] https://divan.dev/posts/flutter_go/


I agree, Go's simplicity is the best thing about it. I feel the same way about ranges: I probably end up typing the same number of characters, but spreading it over three lines makes it feel "longer" than a comprehension like "forEach(f)". But then I write the range longhand, and it's no big deal.

Speaking of initialisation though, I do wish Go had an idiomatic way to initialise struct fields to specific values. I don’t care about the lack of constructors; mostly, I just wish I could have bools initialise to true sometimes.
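
The usual workaround I've seen is a constructor function with the defaults spelled out (a sketch; the names are made up):

  type Config struct {
    Verbose bool // want this to default to true
    Retries int
  }

  func NewConfig() Config {
    return Config{Verbose: true, Retries: 3}
  }

It works, but nothing stops a caller from writing Config{} directly and silently getting false.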


Sure, it's always a tradeoff. Yet my pet peeve is that people rarely talk about the social aspects of the programming language. It's called "language" for a reason. We express our thoughts using this language ("I intend this code to do such and such"), and we expect other people to be able to understand what we intended, and we want to make sure that they understand exactly as we want them to.

I judge languages on their ability to collectively construct mental maps in the brains of the developers who work on the same project. If they all read the same code, will they be able to understand the task and intent of that code without additional explanations? How cognitively hard would it be?

Gottfried Leibniz was obsessed with finding a Universal Language, which was exactly about this - making communication clear and free of misunderstanding. I feel like Dart's (and most other languages') approach is the opposite of that. Creating multiple ways to express the same intent is a sure way to introduce misunderstanding and fracture the speakers of that language into dialects and groups. Go, on the other hand, is really good at making this "reconstructing the mental map" part a joy.


I don't know Dart at all, but I used Java from version 1.0, and watched as it morphed and morphed again - from for-loops, to collections with iterators, then "upwards" to list and map comprehensions, closures, function pointers. My younger colleagues were writing code I could barely understand; having left that world several years ago, I still find idiomatic modern Java difficult to mentally map to intent, as you put it. The feature set is undisciplined.

So I completely agree with you. I think it's unfortunate that some people appear to mistake simplicity of construction for simplicity of thought. Go's ingenious simplicity - its elegance - is a virtue. Unfortunately it also reflects what Dijkstra said: "Why has elegance found so little following? ... Elegance has the disadvantage, if that's what it is, that hard work is needed to achieve it and a good education to appreciate it".


My experience with Go is quite the opposite. Python may be way slower to run, but it maps to my intent extremely well. In Go, every time you try to find an item in a slice, or convert a slice to a map for faster search, or whatever, one piece of intent turns into a whole paragraph of boilerplate or a completely ad-hoc helper function.

Reading Go feels like legalese.


initState is an override on StatefulWidget; it's no different from any other framework's lifecycle events, and it's a Flutter thing, so it's not really something you would confuse with a constructor or anything. I can understand not knowing whether you should create a normal constructor or a factory, I guess, but getting confused about initState is not a reason Go is superior.


We should be optimizing for expensive experts’ ability to communicate concisely with each other. Keeping the power tools of syntax out of novice hands just deters them from developing expertise.


> When io.Copy copies from a TCPConn to a UnixConn, it will now use Linux's splice(2) system call if possible, using the new method TCPConn.WriteTo.

Interface upgrades, yet again, transparently giving us more zero-copy IO. Love how much mileage they’re able to get out of this pattern in the io package.
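
For anyone who hasn't seen the pattern, a simplified sketch of what io.Copy does internally (not the actual stdlib source):

  import "io"

  // copyStream checks for optional fast-path interfaces before falling back
  // to the plain buffered read/write loop - the "interface upgrade" pattern.
  func copyStream(dst io.Writer, src io.Reader) (int64, error) {
    if wt, ok := src.(io.WriterTo); ok {
      return wt.WriteTo(dst) // e.g. TCPConn.WriteTo can now use splice(2)
    }
    if rf, ok := dst.(io.ReaderFrom); ok {
      return rf.ReadFrom(src) // e.g. sendfile(2) when copying a file to a socket
    }
    // ...otherwise fall back to the generic read/write loop...
    return 0, nil
  }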


The Go standard library is absolutely littered with these kinds of tweaks. Not necessarily bad, but it does make integration with non-standard-library types less satisfying, because such special-casing doesn't scale. Still within expectations, though.


> Enhanced routing patterns

> This change breaks backwards compatibility in small ways, some obvious—patterns with "{" and "}" behave differently— and some less so—treatment of escaped paths has been improved. The change is controlled by a GODEBUG field named httpmuxgo121. Set httpmuxgo121=1 to restore the old behavior.

That's a great enhancement now that the future of Gorilla Mux is shaky. But why doesn't that go against the Go 1 compatibility promise?

> It is intended that programs written to the Go 1 specification will continue to compile and run correctly, unchanged, over the lifetime of that specification.


The way they route around the Go 1 compatibility promise is to gate these backwards-incompatible changes on the value of the go directive in go.mod. If it says 1.22 or later you get the new library behavior, otherwise you get the old one. We'll see how well this ends up working in practice.
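
Concretely, the switch lives in the go directive (module path here is made up):

  // go.mod
  module example.com/app

  go 1.22 // opts into the 1.22 loop semantics and the 1.22 GODEBUG defaults (e.g. httpmuxgo121=0)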


This mimics Perl's "use 5.x", although the latter is scoped per module. Perl's backward compatibility track record validates the soundness of this approach.

Perl can also selectively enable features, in a way not dissimilar to Python's "from __future__ import X", except the latter is about forward compatibility with a future default, whereas Perl is all about backwards compatibility as a sane default.

I guess Go does it at the module level because it needs a global view of features, whereas Perl can dynamically alter itself (including its parser) live.


Usually the Go team scrapes GitHub and open-source programs to see how people use something before breaking it; I suspect they found little usage of { and } in HTTP handler paths. They also provide a way of opting out of the new behaviour, so they don't force you to change anything in your code (but yes, it does require you to set a new env var).

The change to the for-loop semantics is another example in this release; it effectively is a breaking change.

All Go programs continue to compile and run, though with minor behavioural changes. I think Go took a pragmatic approach, and that was one of the reasons for its success.


I don't see the point of using semver (or at least telling us about the same guarantees) and then not making use of it.

If there were 10 breaking changes we should be at 11.x now, not at 1.x with 20 environment variables.


From a purist perspective, you're right - the contract has been broken, and a major version should've been incremented.

However, Go has always been more of a pragmatic than a purist language. For example, they analyzed tons of code and found that much of it had bugs caused by the `for` loop sharing a single mutated variable across iterations. So they changed the `for` loop (in a technically backwards-incompatible way) in order to make all those bugs disappear. In a way, they modified the formal contract to make it more aligned with the de-facto contract that users were expecting.
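
The canonical example of the bug class they were targeting:

  package main

  import (
    "fmt"
    "sync"
  )

  func main() {
    var wg sync.WaitGroup
    for _, v := range []string{"a", "b", "c"} {
      wg.Add(1)
      go func() {
        defer wg.Done()
        // Before 1.22 all three goroutines shared one v and often printed "c" three times;
        // with go >= 1.22 in go.mod, each iteration gets its own v.
        fmt.Println(v)
      }()
    }
    wg.Wait()
  }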

I personally think that kind of pragmatism beats purism any day. Maybe the fact that I've never personally been affected by Go backwards incompatibilities also plays a role... But I've yet to find a single person who has :)


This has nothing to do with pragmatism. Just update the version number. This is the pragmatic way.


I agree this is a (rare) mistake with Go. If they're going to break versioning using go.mod, they should at least break it in a way that makes it better. I'd much rather have my code fail to compile when we change go.mod to 1.22 than have to detect subtle runtime issues like this.


Because then they would lose their pithy advertising slogan. Minor or not, it is a non-compatible change.


> In Go 1.22, each iteration of the loop creates new variables

This was previously discussed here:

https://news.ycombinator.com/item?id=33160236 - Go: Redefining For Loop Variable Semantics (2022)


Perhaps I'm a dinosaur but I don't like the range-over-function addition. I don't think it adds enough convenience to justify the complexity it adds to the language, and the functional style feels at odds with Go's explicit, imperative (albeit verbose) and feature-lean style, which I think was one of its major strengths.

For the same reason I think the range-over-integer feature is a misstep. Go's lean feature set and minimal mental load have always been major strengths and differentiators of the language.


At the moment this feature is optional / experimental and opt-in; if they make it standard, I hope they add an opt-out mechanism of sorts, so that developers are discouraged from using it where it's not commonplace.

I find that one big problem in software development is keeping developers from adding complexity, or "flexing". Especially developers earlier in their career, present company included, tend to overcomplicate a solution instead of just solving the problem in a straightforward, albeit inelegant and wordy, fashion and moving on.


Yep, one of the big wins for Go was preventing this by just not supporting complicated, "clever" code. It doesn't have a ternary expression, "i++" is not an rvalue, you cannot pack a huge amount of work into a single line, you can't overload operators or even functions and so on.

The opposite extreme is arguably C++, which I personally quite like (probably because I use it only for solo projects and don't try to collaborate with anyone), but I can't deny that it's an awkward, gnarly monster of a language. It'd be awful to see Go end up like that.


I also think this feature feels premature. Ideally, it should have been introduced after lambdas and generic parameter packs. The generics support in Go is not sophisticated enough to support this at the moment, leading to some strangely imposed limitations.


> Ideally, it should be introduced after lambdas

Like syntactic sugar over `func`? Since func can already be anonymous and passed around just fine, I don't expect them to add additional syntax for functions.


I'm ambivalent as well. It doesn't even save lines of code compared to passing an inline function.


The addition of `sql.Null[T]` is great. I'll probably start using that in new projects. In current stuff, I'm relying on sqlboiler's null [0] whose API is very similar — it works the same way as `sql.Null` will, but has an additional `IsSet() bool` method that tells you whether or not the value was ever explicitly set, to help you distinguish "intentionally null" from "null due to uninitialization". (Sounds nice, but I've never once used it.)
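
For anyone curious, here's roughly how the new generic type reads in a scan (a sketch; the table, column, and db handle are made up):

  var nickname sql.Null[string]
  err := db.QueryRow(`SELECT nickname FROM users WHERE id = ?`, id).Scan(&nickname)
  if err != nil {
    // handle err
  }
  if nickname.Valid {
    fmt.Println("nickname:", nickname.V)
  } else {
    fmt.Println("nickname is NULL")
  }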


Curious when you'd need to know whether a null is intentional or not. There's no corresponding type like that in the DB.


For example, when you are using a partially populated record to update the database. If a field is intentionally null, it means the column should be updated to null. If it's not set, the update statement should not touch it.


This is a great explanation. Sqlboiler allows you to send an update that will infer which values have changed and should be written — I bet that’s related to this. TIL!


Thanks. That makes sense. I don't have that pattern in my code but I can see that some might.


Looking forward to revisiting the stdlib mux and possibly removing chi. I really appreciate seeing this stuff move into the stdlib.


Came here to say the exact same thing. chi has been really great for a couple of projects - in fact, it's easy to forget it's even there. Moving this into the stdlib means it will always be maintained and ensures that approach is used in many Go programs.


For those using Go for production, do you get to switch to the latest versions quickly, or do you get stuck in older releases?

In the open, a lot of projects seem to avoid newish features. I like to use `any` (from 1.18, ~2 years ago) where before we had to use `interface{}`, even if I'm not using generics (although I've been told the latter is more "idiomatic" :-/).


At Square, we typically wait until the first point release before updating the default version in our Go monorepo. Individual application owners are free to upgrade sooner. This time, we had some wiggles, so the Go team beat us to 1.22 before we could upgrade our default to 1.21 :-)

Waiting is more or less just customary at this point; Go releases seldom break things.


Benjamin Wester (whom I just found out reads this and saw my comment, so now I don't feel weird randomly mentioning by name!) is the true hero of our monorepo's upgrades.


We usually wait to experiment with new features until we find a good problem that would fit, but we upgrade compiler version pretty much right after they release. For instance we waited on actually using generics for about 6 months until we were able to experiment a bit and make sure they were really useful, the devx was good, and build/test speeds weren't significantly impacted.


> build/test speeds weren't significantly impacted

it must be getting a little repetitive to test every version and find that Go continues to build crazy fast every time :)


I remember early in the go release cycle (talking around go1.4->later era) when I would rebuild our application with a new version and marvel at how much faster they made it. GC was another thing which was crazy, they halved GC pause time for like 5 or 6 releases in a row. like 1s -> 500ms -> 250ms -> 125ms etc etc etc.


The test doesn't take much time :)


At my $DAYJOB devs are free to jump on latest builds for containers we operate (SaaS / API services).

But we also deliver end-user apps with Go, and those have to stick to 1.20 for compatibility with older client OSes. 1.21 made significant cuts and so we will likely be on 1.20 for some years to come.


Which older OSes? The last 1.20 dot release was earlier today, some years from now it will be stale and probably insecure, though that matters less for internal apps than for internet-exposed ones.


Windows 7, Server 2012, macOS 10.13 + 10.14, and so on. Together it makes up nearly 10% of our user base.

We have a very slow-moving customer base and their choice of OS is out of our control. At this time last year there were still more Windows 7 users than Windows 11 in telemetry.

Those old OSes are leaving security support, but Windows Server 2012 actually still gets updates from Microsoft (ESU) until 2026, so I'm surprised you dropped it already.

Yes, Go1.20 will get insecure, but until that usage drops, not much can be done.

Well, one thing: in the past, in order to maintain support for Server 2008, for a long while we built the app with both Go1.10 and Go1.(newer) and switched at install time. I don't recommend it! Every year was more difficult as the open source packages drifted away from compatibility and our build system had to still support GOPATH. Eventually we finally abandoned Go1.10!


I thought the 1.21 cuts were for out of support Windows versions only? Eg Windows 7.

You'll also have to factor in that 1.20 won't receive security updates any more.


Server 2012 was dropped in https://github.com/golang/go/issues/57004 and it's still in security support. My numbers are somewhat similar to @olivielpeau in that thread.


Our CI builds and tests using the 'latest' container for Go, and we have a fairly robust test suite so it essentially auto-updates as things go and breaks the pipeline if anything doesn't work, at which time we either decide to pin the latest working version while we update or hold off on the update to fix breaking changes.

Very rarely, like once every few years, there's some subtle edge case bug that shows up with a third party package and the new version, but honestly we'd never catch that if we were manually upgrading anyways, as by definition these bugs avoided our test suite.


I switch right away.

If you're worried about supporting older Go compilers, you could always have build conditions for older versions that define missing things like:

//go:build !go1.18
// +build !go1.18

type any = interface{}


We try to keep up to date. We'll be on 1.22 by the end of the month, give or take (all going well)


At work, we upgrade when there's a compelling reason (structured logging in 1.21 came close; we haven't finished migrating from our old logger yet). Before deploying that version, we typically bump versions in all our Go repos and run the tests in those repos. Having the same version across apps gives some people on our team a bit of comfort (with Go, it's not as important as some other runtimes).


Usually we switch after the first point release.


I switched right away. All of our images are tagged anyway so rolling back is hella easy.


I don't think that's necessarily true; most of the standard library docs have switched over to using `any`, even when generics are not involved.


Upgrade our binaries immediately, then bump go.mod `go` directive as we touch things.


I’m super excited to see range over functions - iterators. It’s one of the only things I miss from Java.

It took me a few reads to understand how they work, but in typical Go style (once I got it) it was so much simpler than the equivalent in most other languages I've used. Looking forward to it being promoted out of experimental.


You find

  func Backward[E any](s []E) func(func(int, E) bool) {
    return func(yield func(int, E) bool) {
        for i := len(s)-1; i >= 0; i-- {
            if !yield(i, s[i]) {
                return
            }
        }
    }
  }
simpler than C#'s

  IEnumerable<T> Backward<T>(T[] s) {
    for (int i = s.Length - 1; i >= 0; i--) {
      yield return s[i];
    }
  }
I think that's quite hard to defend, honestly. Granted, Java doesn't have anything comparable, so I'm not sure what you're comparing this feature to.


Obviously your C# code is simpler; yes, I’m comparing to Java. I haven’t seen this style of iterator previously.


Compared to implementing the equivalent Iterator in Java, yes, the Go code is much simpler. But there are better solutions than Go's in several other languages - C#, Python, TypeScript, C++, even in Rust (experimentally).

They all use the same or very similar syntax to my C# sample above, and have the same semantics (plus or minus some details about memory allocation and exception safety). Go, as usual, came late to this party, and is being needlessly different and clunkier.


I love how easy upgrading Go has become. Change 1.21.6 to 1.22.0 in your go.mod, done.


What does this have to do with upgrading? Your go.mod just specifies the minimum version you need to build the code.

If you do not use new things available in 1.22, your go.mod may say 1.21 (or even 1.13, if you are not using generics and can't be bothered to move off a few deprecated things, like ioutil.ReadAll(), which has been deprecated since 1.16).


> What does this have to do with upgrading?

Go automatically downloads the appropriate toolchain based on the version specified in a project's go.mod file. Once Go (>= 1.21) is installed on a machine, there is no need for manual updates anymore. https://go.dev/doc/toolchain


Sorry not a native speaker.

My understanding of "upgrading" a toolchain or dependency in a project is: changing to a new version of the toolchain or dependency and making the project work again. With 1.22, memory consumption should be lower and PGO should give better performance, from my understanding.

When I change the minimum version in my project to 1.22.0, it will download 1.22.0 and use it for compilation. I would call that "upgrading".

  ~/Development/inkmigo$ go version
  go version go1.22.0 linux/amd64


No need to be sorry. Not a native speaker either.

I get your point. I'm just too used to the fact that Homebrew updates packages on the machine and services are built during CI/CD pipelines inside a container.



That's fine if you don't have any downstreams. Painful when upstreams change this for no functional gain.


Range over function is actually good. Now we can implement proper iterators for collections. In particular, it should be possible to have abstractions like clojure-style lazy sequences.


It’s good to see them starting to make for-range loops work on user-defined ADTs.


Still holding out hope for a "go run" flag to easily run module programs with replacements in go.mod

      go run k8s.io/kubernetes/cmd/kubectl@v1.28.2


Yes. Many times I've forgotten to remove my local "replace" in go.mod before git pushing. Would be nice to have a runtime replace for local dev.


If you use go.work (workspace) you don’t have to touch your go.mod file. Just don't commit go.work


Don't get me wrong, I enjoy Go and use it professionally with few complaints.

But it's kinda funny seeing several of the examples people used for Go's easy-to-pick-up simplicity being added to the language, like generics and apparently generators, etc.

But I'm still fine with it as long as there is no extreme hidden logic on the level of operator overloading, crazy constructor chains, etc.


Is the memory issue reported with Go 1.21 on Linux resolved in Go 1.22?

https://github.com/golang/go/issues/64332


> "For" loops may now range over integers

The future is now, old man!


      for i := range 10 {
What was the syntax for this before?


     for i := 0; i < 10; i++


I must be getting old, because I almost wish they didn't add this: it's slightly ambiguous whether it's ranging over [0, 10) or [1, 10], and anything ambiguous is probably going to haunt me at 3am some day.


> anything ambiguous is probably going to haunt me at 3am some day

Man, I hope you never wondered how errno behaves with cgo, or missed updating an initialization of that struct you recently added a new field to; that's what haunts me at 3AM.


I always just assume it is consistent to whether arrays start at [0] or [1] on whichever language I am working on.


It iterates 10 times starting with 0, so it seems pretty clear that the result is [0, 10).


Interestingly, this does not raise any error; it just has no effect.

for i := range -10 { panic(i) }


"yes, please run this loop minus 10 times" - statements dreamed up by the utterly Deranged


Back in my days, you had to do it this way

  for i := 0; i <= -10; i++ {
    panic(i)
  }


Back in my day you would do

    for (size_t i = 0; i < -10; i++) {
        panic
    }
And it would panic 18446744073709551606 times.


Sure, presumably for the same reason a C-style for loop wouldn't do anything weird there either.


Thanks. The new syntax emulates the python `for x in range(10):`.


Wish you could do

  for i := range [3:10] {


I think you could write something using the experimental Rangefunc feature that does that.


You can, which makes me wonder why they added the range-over-int functionality when it could have been a function in the proposed iter package:

   for x := range iter.N(10) { ... }


Better yet, you can name it something like `iter.ZeroTo(10)` and it immediately clears up questions about whether it begins from 0 or 1.


My guess is that it really doesn't save that many characters compared to

    for i := 3; i < 10; i++ {}

I didn't count but it might actually be longer.

range 10 is at least shorter


I'm not sure what the syntax is now; the release notes say "see the spec for details[link]", but in the link I don't see anything in there about using integers as a range expression.


  for i := range [10]struct{}{} {


While the phrasing is funny, it's really more "we now have a direct equivalent to 10.times in ruby"


Algol 68 had a loop where pretty much any part was optional, so both "TO 10" and "FOR i TO 10" (if you needed a loop variable) were possible.

(Back when Go came out, there were some Algol 68 comparisons, IIRC)


At the risk of sounding like a massive fanboy (which I, unashamedly, am):

    - Range over Integer
    - Direct support for methods and wildcards in the path directly in `net/http`
    - More free optimization
    - Improved tooling
Sorry again for the fanboyism, but Go really is the gift that just keeps giving :D
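
The routing bit in particular is pleasantly small to use now; a minimal sketch (the path and port are made up):

  package main

  import (
    "fmt"
    "net/http"
  )

  func main() {
    mux := http.NewServeMux()
    // Go 1.22: methods and path wildcards are understood by the standard mux.
    mux.HandleFunc("GET /items/{id}", func(w http.ResponseWriter, r *http.Request) {
      fmt.Fprintln(w, "item", r.PathValue("id"))
    })
    http.ListenAndServe(":8080", mux)
  }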



