[flagged] I Want Off Mr. Golang's Wild Ride (2022) (fasterthanli.me)
71 points by wheresvic4 on July 25, 2023 | 37 comments



The April 2022 update at the end of the article has a good short summary:

> If you're looking to reduce the whole discourse to "X vs Y", let it be "serde vs crossing your fingers and hoping user input is well-formed". It is one of the better reductions of the problem: it really is "specifying behavior that should be allowed (and rejecting everything else)" vs "manually checking that everything is fine in a thousand tiny steps", which inevitably results in missed combinations because the human brain is not designed to hold graphs that big.

My pet theory is that this corresponds to the "two cultures" of software engineering: do you value up-front work and abstraction to reduce cognitive load and debugging, or would you rather (try to) pay more attention and spend more time debugging to reduce how much you have to learn and think up-front? Go seems pretty firmly in the latter camp. That's exactly why I am not interested in the language either, despite the various things it gets right.


I think this is true generally, but I’d narrow this particular trade-off down further: a complex language can do more validation that otherwise has to be done manually. As such, it’s a trade-off between language complexity and boilerplate.

It’s easy to argue against boilerplate: just find an example where a particular language feature shines and show that it reduces the amount of code. Less noise reduces the risk of human oversight. Arguing against complexity is much harder, similar to arguing against a suffocating bureaucracy: the cost is much more deferred and global, and is often only evident when it’s too late. Death by a thousand paper cuts.

And even with the most elaborate type acrobatics, we still have to validate things “manually”, so we need a “standard process” anyway. So while reducing boilerplate can be worthwhile, it’s not sufficient.

In order to argue for language complexity, imo, you need truly compelling and recurring real-world examples. Rust strikes many heavyweight birds here, but has some sources of immense complexity that yield very meager returns, like the async Send, Sync, and Pin gymnastics and function coloring. Go, on the other hand, suffers from some very repetitive boilerplate, like the `if err != nil` three-liner, which was elegantly solved in Rust with `?`.
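
For anyone who hasn't written Go, a minimal sketch of what that three-liner adds up to (function name and file path made up):

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // readConfig shows the repetitive pattern: every fallible call is
    // immediately followed by the same manual check-and-return.
    func readConfig(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, fmt.Errorf("open config: %w", err)
        }
        defer f.Close()

        data, err := io.ReadAll(f)
        if err != nil {
            return nil, fmt.Errorf("read config: %w", err)
        }
        return data, nil
    }

    func main() {
        if _, err := readConfig("app.toml"); err != nil {
            fmt.Println(err)
        }
    }

In Rust, each of those checks collapses into a single `?`.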


This is interesting. I think, though, that the first option eventually leaks, and that (unfortunately) you will have to spend time debugging things, sometimes at great cost for really sophisticated abstractions. The trade-off and the tipping point are clearly not obvious, though!


You can't get away from some debugging, but I've certainly seen massive differences in how much debugging I've had to do between different codebases!


This is a very interesting discussion. I agree with you. I’m an application developer, but I’ve had to debug what I consider to be very hard issues (network driver bugs, kernel bugs, compiler bugs, hardware bugs, etc.), every time in what I thought were perfect abstractions. I mean, the compiler should compile properly, the kernel shouldn’t lie, and my memory stick should behave. The thing is, it eventually fails. So I have a maybe abnormally high level of appreciation for debuggable systems :-)


I’m not sure exactly what he’s talking about with Serde; in Go, when you deserialize data into a struct, it is strongly typed. It’s not up to the dev to check whether you received a string or an int.


I have not done deserialization in Go, but my understanding of what he meant, from context, was that because everything has default values, it is possible to end up with a struct even if the JSON (or whatever) isn’t actually well-formed. I don’t know how true that is or to what extent it causes issues.
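
If I understand correctly, the behavior would look something like this (a made-up sketch, so take it with a grain of salt):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type User struct {
        Name string `json:"name"`
        Age  int    `json:"age"`
    }

    func main() {
        // "age" is missing entirely, but Unmarshal still succeeds and
        // leaves Age at its zero value, indistinguishable from an
        // explicit "age": 0 in the input.
        var u User
        if err := json.Unmarshal([]byte(`{"name":"alice"}`), &u); err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Printf("%+v\n", u) // {Name:alice Age:0}
    }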


Not to my knowledge. For example, with JSON data, if the JSON is malformed it will error out, and if you have extra fields you can error out as well: https://pkg.go.dev/encoding/json#Decoder.DisallowUnknownFiel...

For input that does not have all the fields of the destination struct, you can use other libraries that check that for you, or use pointers.
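
Rough sketch of both options (made-up field names):

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    type Config struct {
        // A pointer field stays nil when the key is absent, so a
        // missing value can be told apart from the zero value.
        Port *int `json:"port"`
    }

    func main() {
        dec := json.NewDecoder(strings.NewReader(`{"prot": 8080}`))
        dec.DisallowUnknownFields() // reject keys that Config doesn't declare

        var c Config
        if err := dec.Decode(&c); err != nil {
            fmt.Println("error:", err) // json: unknown field "prot"
            return
        }
        if c.Port == nil {
            fmt.Println("port not provided")
        }
    }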


> For input that does not have all the fields of the destination struct, you can use other libraries

Right, this is what I was thinking, but the fact that this is the case is exactly the point being made here, and it's consistent with the rest of the post. The default option doesn't really check things for you.


Unless you deserialize into a type whose properties are interface types, in which case you're definitely checking...
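
e.g. something like this, where every access forces an explicit check (just a sketch):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // With map[string]any every field access needs a type
        // assertion, so the validation is explicit at each use site.
        var m map[string]any
        if err := json.Unmarshal([]byte(`{"port": "8080"}`), &m); err != nil {
            fmt.Println("error:", err)
            return
        }
        port, ok := m["port"].(float64) // JSON numbers decode as float64
        if !ok {
            fmt.Println("port is missing or not a number")
            return
        }
        fmt.Println("port:", int(port))
    }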


I don't see how creating abstractions reduces cognitive load. You create new concepts before you even need them, and once you notice that the abstraction was wrong, it's much harder to reverse.


Abstractions reduce cognitive load because they let us consolidate multiple bits of information into a single concept.


I think the fundamental disconnect here between people who appreciate abstraction and those who are seemingly in perpetual fear of it is that we rarely guide engineers on how to make good abstractions.

You hear complaints ad nauseam about how abstractions are always leaky, they are confusing, etc. And yet here I am typing this into a text box displayed in a web browser running on top of an operating system and countless libraries, rendered from a mix of HTML, JavaScript, and CSS source which was delivered over TLS-wrapped HTTP using TCP sockets routed over an IPv4 connection which was repeatedly translated back and forth into Ethernet frames, all on a machine which schedules the execution of binary blobs compiled from a multitude of languages to the AArch64 instruction set onto any number of virtual cores while carefully managing resources like permanent storage, processor cache, random-access memory, guided by electrical signals from a keyboard which are decoded into meaningful glyphs according to my configured layout and character set, all displayed upon an OLED film based upon an HDMI-encoded signal transmitted over a bundle of wires.

Which is to say we spend every day comfortably resting upon a truly mind-boggling tower of abstraction layers which—for the most part—work pretty well. So not only clearly can it be done, but it also must be done in order to provide anything like the computing experiences we expect and rely on in day to day life.

Rather than shy from abstraction because bad abstractions are bad, we should spend more effort on learning how to design and promote good abstractions, since they are something upon which our entire profession is inescapably built.


I have created and maintained a 130k SLOC Go codebase over the last four years or so, and there are a number of things about Go that irritate the shit out of me. I still get bitten by taking the address of a loop temporary surprisingly often, for example, even though I am acutely aware of the issue. Or the fact that I haven't really found any use for generics yet, because (as best I can tell) it's impossible to specialise, and pretty much every potential use case I've found for generics so far seems to boil down to lots of shared functions and a few specialised functions.
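
For anyone who hasn't hit the loop-temporary footgun, it looks roughly like this (at least on the Go versions I've been using, where the range variable is a single variable reused across iterations):

    package main

    import "fmt"

    func main() {
        nums := []int{1, 2, 3}
        var ptrs []*int
        for _, n := range nums {
            // n is one variable reused on every iteration, so every
            // appended pointer aliases the same address and ends up
            // pointing at the final value.
            ptrs = append(ptrs, &n)
        }
        for _, p := range ptrs {
            fmt.Println(*p) // 3, 3, 3
        }
        // The usual fix is to shadow the variable inside the loop:
        //     n := n
        //     ptrs = append(ptrs, &n)
    }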

None of those real irritations are addressed in this article, which seems to be an extended complaint that Go isn't as portable between Linux and Windows as the author would like it to be. If you're unfamiliar with Go and attempting to evaluate the language, there's not much in here that I could recommend one way or the other. You would be better off spending the time doing the Tour of Go, then reading one of the various "pitfalls" articles.


> seems to be an extended complaint that Go isn't as portable between Linux and Windows as the author would like it to be

I don't think that's true. The author is using that lack of portability as an example of what they think is wrong with Go:

> And they're symptomatic of the problems with "the Go way" in general. The Go way is to half-ass things.

> The Go way is to patch things up until they sorta kinda work, in the name of simplicity.

And I get the point they're making. The issue isn't that file permissions don't work consistently cross-platform in Go, it's that they can't work consistently cross-platform in anything but rather than add complexity Go has papered over the cracks. If that's an acceptable approach in a core file API, who knows how often it's employed elsewhere in the runtime? The following section of the post shows this mindset as implemented in monotime and the many potential footguns the official implementation introduces.


> The issue isn't that file permissions don't work consistently cross-platform in Go, it's that they can't work consistently cross-platform in anything but rather than add complexity Go has papered over the cracks.

Interestingly enough, C of all languages had enough sense to not require permissions as an argument for fopen(3).


Like with all languages, you can make a mess or make something wonderful. It's also really hard to remove your own bias from the project itself and the drama (or lack thereof). I happen to love Go; the honeymoon started in 2016 and hasn't stopped for me yet. Today's commits: https://github.com/andrewarrow/settle-down/tree/main/app

Start at welcome_controller.go and follow the flow. Notice there are no structs for the sake of structs; I make heavy use of map[string]any, which serializes to JSON so nicely without any `json` struct tags.
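
i.e. something along these lines (a generic sketch, not the actual app code):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // A map[string]any marshals straight to JSON with the map
        // keys as field names, no struct definitions or `json` tags.
        page := map[string]any{
            "title": "Welcome",
            "items": []any{"a", "b", "c"},
            "count": 3,
        }
        out, err := json.Marshal(page)
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println(string(out))
    }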


Previous discussion with 400 comments:

https://news.ycombinator.com/item?id=31191700


Ah, Go. I wrote a whole networking stack in it a few years back. I admit, I did get an incredible amount of performance from a very limited bit of hardware with it. However, it was dependency and module hell. Even the Go expert couldn't get it figured out enough to make it compile every time.

I'm really glad I got off that ride.


I wonder if proper vendoring support helps with this or not; it's relatively recent in Go.


Doesn't Rust suffer from the same, way-too-wide dependency web?

I feel like it's an unfortunate consequence of having a good package manager, which encourages the proliferation of too many very small dependencies.


I'm new to Rust, and I noticed this just the other day.

I was playing with some Rust-based blockchain system, and the number of external dependencies pulled in (recursively) to build the code truly amazed me.

I don't really know, but I'm guessing that a similar C++ project would have required many fewer dependencies.

Maybe a C++ code base would tend to rely on a small number of big libraries?

Or maybe a C++ code base would (for some reason) be more inclined to use binary libraries provided by already-installed Debian packages?


As a C++ dev, I generally see few but often large dependencies. Boost used to be a large one, but it's less relevant these days with C++11 and onward. There are others like Qt, Eigen, GoogleTest, FFmpeg, Intel's oneAPI, etc.

Sometimes I see single-header libraries you can just drop in; other times I see large CMake projects that you add as a dependency to compile when your project compiles.

I've never seen the kind of dependency chaining that languages like JS or Go show.


Actually, Go does not suffer from that problem since it has a wide standard library; the original post took a deliberate example of something that is not common.


This doesn’t need to be the case. It didn’t happen with Perl (CPAN) and Java (Maven), for example.


Do those allow multiple versions of the same dependency? With npm and cargo (as opposed to composer, etc.) you are never forced to resolve those types of conflicts so you can just keep installing dependencies forever without ever having to trim the tree.


The more I read about Rust, the more I've come to respect it. I still think:

* It's a specialized tool not suitable for all (or even most) projects

* It needs a bit more time to both develop as a language and ecosystem

But it's clear that it's not half-assed and a lot of thought went into it.

Interesting to hear about Go's development as well. It was starting to pick up when I was in college, and now it's had a few more years. Bit disappointing to see it's so messy.


I'm surprised the author didn't talk about error handling in Go.

That is, by far, my biggest pet peeve with the language.

Most languages have try/catch patterns, but Go opted for multiple return values and a discrete error type... without pattern matching!

At least 20% of the Golang code I look at is the `if err != nil` pattern, which is a crazy amount of repetitive boilerplate. I don't think the must pattern is a good alternative in many cases either.
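
For context, the "must pattern" here is a helper along these lines (a sketch, using generics), which just converts the error into a panic, so it only really fits initialization-type code:

    package main

    import (
        "fmt"
        "strconv"
    )

    // Must turns a (value, error) pair into value-or-panic, trading
    // the per-call check for a crash if the error is non-nil.
    func Must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        n := Must(strconv.Atoi("42"))
        fmt.Println(n) // 42
    }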


I see "How to complain about Go" (2015) [1] needs an update.

[1] https://medium.com/@divan/how-to-complain-about-go-349013e06...


Great article.

This opinionation runs deep in the Kubernetes ecosystem as well (one of the biggest, if not the biggest, Golang projects out there).

Here is an example: https://github.com/kubernetes/kubernetes/issues/53533


Meta: does flagging and vouching work differently for links than for comments?

This account has enough karma for flagging, and I can see the title of this submission is prefixed with [Flagged] but I don't have an option to Vouch.

How does that work? I flag/vouch pretty rarely so I'm not always sure.


I think vouch not appearing might be a bug when somebody flags and then unflags (I accidentally flagged, then unflagged, and it didn't revert the "flagged" state; not showing vouch could be related). I've emailed dang.


I knew it was about Rust before I even opened this.


More like a multiplatform Go vs Rust comparison. Tbh I wouldn't pick Go if the software needs to run on Windows.


(2020)


(2022) with the update


Mostly this article seems to be whining about Go's file handling being more based around Unix-y stuff, and not really being suitable for Windows file systems.

So what? If you don't use it as intended, it might not do what you want. Don't use it on unsupported niche operating systems.



