
No, it's about not being lied to when looking up a work of fiction.

Revisionism of historical facts and artwork is one of the oldest forms of political manipulation and has never served a good purpose, no matter how well intentioned. If you want to alter a story, make it clear that you altered it; don't replace the original with your version and then lie to people.


The article is talking about the Disney adaptation of The Little Mermaid. I don’t think anyone went to see that assuming that it was a 100% faithful adaptation of the original text (insofar as such a thing exists in this case) so I don’t see that anyone is being lied to.

Next stop: GitHub git will be enhanced and only fully work with Windows Explorer git

Don't forget Microsoft Account for the best experience!

> Agree that gnome still looks like Apple envy.

As opposed to Windows envy that apparently drives pretty much all the others, including every single version of KDE.


KDE follows established UX patterns, some of which in fact predate Windows 95 and come from another once-famous DE that had a three-letter name ;)


I sometimes test new Wayland features in KDE since they tend to be more on the bleeding edge when it comes to Wayland protocol support, but visuals and UX are stuck in Windows XP times and I don't enjoy it at all.

Some 15 years ago Gnome rewrote things from scratch and that made people angry, but looking back at it now, it seems like it was worth it.

I use both MacOS and Gnome, and I can easily say that Linux with Gnome is years ahead of MacOS. Nothing beats the polish of Gnome and its extensions. It is less cluttered, snappy and doesn't get in the way.

The only thing keeping Gnome back is its developer documentation. It is hard to find good detailed information about available APIs; everything is hidden somewhere deep in C code which is not searchable without a Gnome GitLab account. If they want to see the year of the Linux desktop with Gnome at the center of it, they have to improve a lot in the developer documentation area.


Following that logic, regulated industries would be going after anything resembling Microsoft Office with a flamethrower. It would be product suicide for any piece of software, such as Microsoft Office or Microsoft Windows, to offer even optional AI capabilities.


Yes, and the Fortune 500 et al. are all telling Microsoft that they will be forced to do anything required to protect their businesses, including ceasing all business with Microsoft.

Microsoft needs to tell their shareholders to fuck off and quit backseat driving, but Satya Nadella is just yet another CEO who trades profits today for the end of the company tomorrow.


The mistake you're making is assuming that large companies make logical decisions.


I wouldn't be surprised if the person voicing those announcements would be happier to be called female than gender-neutral.


She's trans, so probably yes. But in an interview she says she is very happy to have been chosen as the gender-neutral voice of Berlin's subway.


In the US. The DMCA is only relevant because GitHub is operated by a US company. Nintendo would have to try to make the same arguments under EU law, e.g., if the software in question were hosted in the EU. That might be more complicated.


Go is a very opinionated language from its inception. We could probably argue for all eternity about code formatting, for instance. But Go went and set it in stone. Maybe it's part of good engineering to keep things simple and not allow hundreds of ways to do something. Maybe the people who use Go are the ones who just want to write and read simple and maintainable code and don't want it to be cluttered with whatever is currently the fashion.

You could look at Lisp. It's kind of the opposite of Go in this regard. You can use whatever paradigm you like, generate new code on the fly, use types or leave them. It even allows you to easily extend the language to your taste, all the way down to how code is read.

But Lisp might violate your set of absolutes.


We have completely lost the plot by assuming that just because there are disagreements on some things, any choice is as good as any other. Go is opinionated, and its opinion is wrong.

Not having exceptions (but then having them anyway through panic, but whatever) is a choice - but the other reasonable alternative is the Maybe monad. What Go did is not reasonable. I might be okay if they had been working on getting monads in, but they haven't.

I have a specific hatred for Go because it seems perfectly suited to make me hate programming: it has some really good ideas, like fast compile times, being able to cross-compile for any platform, and being a systems language without headers.

But then I try to write code in it and so. much. boilerplate.


> But then I try to write code in it and so. much. boilerplate.

If boilerplate is cause for you to dislike the language, fine.

But your unnecessarily strong language "Go is opinionated, and its opinion is wrong", "What Go did is not reasonable", "I have a specific hatred for Go" speaks more about you than Go.

Go's choice of trade-offs was practical and reasonable, engineering-wise.

Your opinion is entirely wrong.


Thanks for your response AnonymousPlanet. I agree there is value in the pursuit of a minimal set of features in a PL which brings many benefits. And of course the opposite - an overly feature packed and/or extensible PL as a core feature has tradeoffs. Over this range of possibilities my preference probably falls somewhere in the middle.

I see an effect where languages whose primary goal is a particular set of language design choices (such as strict memory safety over all else) grow a cult following that enforces said design choices. Maybe in the pursuit of an opinionated language, even if the designers are reasonable at the language's inception, the community throws out logic and "opinionated" becomes an in-group/out-group tribal caveman situation.


> Maybe in the pursuit of an opinionated language, even if the designers are reasonable at the language's inception, the community throws out logic and "opinionated" becomes an in-group/out-group tribal caveman situation.

I think you've got this backward. It's not that the particular choices are important. It's a thing happening on a higher meta level than that.

Some programming languages are, by design intent, "living" languages, evolving over time, with features coming and going as the audience for the language changes.

"Living" languages are like cities: the population changes over time; and with that change, their needs can shift; and they expect the language to respond with shifts of its own. (For example: modern COBOL is an object-oriented language. It shifted to meet shifting needs of a new generation of programmers!)

If you were able to plot the different releases of a living language in an N-dimensional "language-design configuration space", these releases would appear to arbitrarily jump around the space.

Other languages, though, are, by their design intent, "crystallized" languages — with each component or feature of the language seeing ever-finer refinement into a state of (what the language maintainers consider) perfection; where any component that has been "perfected" sees no further work done, save for bugfixes. For such languages, creating a language in this way was always the designers' and maintainers' goal — even before they necessarily knew what they were creating.

"Crystallized" languages are like paintings: there was an initial top-down rough vision for the language (a sketch); and at some early point, most parts of the design were open for some degree of change, when paint was first being applied to canvas. But as the details were filled in in particular areas, those areas became set, even varnished over.

If you plot the successive releases of a crystallized language in design configuration space, the releases would appear to converge upon a specific point in the space.

The goal with a crystallized language is to explore the design space to discover some interesting point, and then to tune into that interesting point exactly — to "perfect" the language as an expression of that interesting concept. Every version of the language since the first one has been an attempt to find, and then hit, that point in design space. And once that point in design space is reached, the language is "done": the maintainers can go home, their jobs complete. Once a painting says what it "should" say, you can stop painting it!

If a crystallized language is small, it can achieve this state of being entirely "finished", its spec set in stone, no further "core maintainer" work needed. Some small crystallized languages are even finished the moment they start. Lua is a good example. (As are most esolangs — although they're kind of a separate thing, being created purely for the sake of "art", rather than sitting at the intersection of "work of art" and "useful tool" as crystallized languages do.)

But large crystallized languages do exist. They seek the same fate as small crystallized languages — to be "finished" and set in stone, the maintainers' job complete. They just rarely get there, because of how large a project it is to refine a language design.

You might intuitively associate a "living" language with democratic control, and a "crystallized" language with a Benevolent Dictator For Life (BDFL) figure with an artistic vision for the language. But this is not necessarily true. Python was a "living" language even when it had a BDFL. And Golang is a "crystallized" language despite its post-1.0 evolution being (essentially) directed by committee.

---

The friction you're describing comes from developers who are used to living languages, trying to apply the same thinking about "a language changing to serve the needs of its users" to a crystallized language.

Crystallized languages do not exist primarily to serve their users. They exist to be expressions of specific points in design space, quintessences of specific concepts. Those points in design space likely are useful (esolangs excluded); but the expectation is that people who don't find that point in design space useful, should choose a different language (i.e. a different point in design space) that's more suited to their needs, rather than attempting to shift the language's position in design space.

Adding a bridge is a sensible proposal for a city. You can get entire streets of buildings torn down to make way for the bridge, if the need is there. But adding a bridge is not a sensible proposal for a (mostly-finished) painting. If you want "this painting but with a bridge in it", that's a different painting, and you should seek out that painting. Or paint it yourself. Or paint the bridge on some kind of transparent overlay layer, and hang that overlay in front of the painting.

Conveniently for this discussion, Borgo here is exactly an example of a language that's "someone else's painting, but now with the bridge I wanted painted on an overlay in front of it." :)


Best comment in the thread. I think defining certain languages as "crystallised", rather than "set", explains well how the underlying structure has taken a specific shape based on specific tenets. Well said.


> Go is a very opinionated language from its inception.

True.

> We could probably argue for all eternity about code formatting, for instance. But Go went and set it in stone.

This is part of the story that Rob Pike uses to justify how opinionated Go is, but it's a bit stupid given that most languages do fine and I've never seen any debates about code formatting after the very beginning of a project (where it's always settled quickly in the few cases where it happens in the first place).

The real reason why Go is opinionated is much more mundane: Rob is an old man who thinks he has seen it all and that the younger folks are children, and as a result he is very opinionated. (Remember his argument against syntax coloring because "it's for babies" or something.)

It's not bad to be opinionated when designing a language, it gives some kind of coherence to it (looking at you, Java and C++), but it can also get in the way of users sometimes. Fortunately Go isn't just Rob anymore and isn't impervious to change, and there are finally generics and a package manager in the language!


Rob Pike... and Ken Thompson, and Robert Griesemer.

Firstly, Ken Thompson is a master at filtering out unnecessary complexities and I highly rate his opinion of the important and unimportant things.

Secondly, the Go team were never against generics, the three early designers agreed the language needed generics but they couldn't figure out a way to add it orthogonally.

Go has gone on to be very successful in cloud and networked applications (which it was designed to cater for), which lends credence to the practicality of what the designers considered important, HN sentiments notwithstanding.


> Secondly, the Go team were never against generics, the three early designers agreed the language needed generics but they couldn't figure out a way to add it orthogonally.

This is a PR statement that was introduced only after Go generics landed; for years generics were dubbed "unnecessary complexity" in user code (Go has had generics from the beginning, but only for internal use in the standard library).

> Go has gone on to be very successful in cloud and networked applications (which it was designed to cater for), which lends credence to the practicality of what the designers considered important

Well, given that the k8s team inside Google developed their own Go dialect with pre-processing to get access to generics, it seems that its limitations proved harmful enough.

The main reason why Go has been successful in back-end code is the same as the reason why any given language thrives in certain environments: it's a social phenomenon. Node.js has been even more successful despite JavaScript being a far from perfect language (especially in the ES 5 era where Node was born), which shows that you cannot credit success to particular qualities of the language.

I have nothing against Go, it's a tool that does its job fairly well and has very interesting qualities (fast compile time, self-contained binaries, decent performance out of the box), but the religious worship of "simplicity" is really annoying. Especially so when it comes up in a discussion about error handling, where Go is by far the language that makes it the most painful, because it lacks the syntactic sugar that would make errors-as-return-values bearable (in fact the Go team was in favor of adding it roughly at the same time as generics, but the "simplicity at all cost" religion they had fostered among their users turned back against them and they had to cancel it…).


70% of CNCF cloud tools are built with Go; Kubernetes is just one of many. Also, since Kubernetes originally started as a Java project, you should consider whether the team was trying to code more with Java idioms than with Go ones.

Nodejs has been more successful than Go in cloud?


> 70% of CNCF cloud tools are built with Go; Kubernetes is just one of many.

Yes, that's what's called an ecosystem effect. But k8s has been the biggest open source codebase for a while, so it's far from insignificant.

> you should consider whether the team was trying to code more with Java idioms than with Go ones.

Turns out generics, the "Java idiom" in question, were eventually added to Go after many years, so maybe they were in fact useful and it's not just k8s devs who were idiots following "Java idioms"…

> Nodejs has been more successful than Go in cloud?

Nodejs has been more successful than Go in pretty much everything except orchestration tools (because of the ecosystem effect mentioned above), which is a tiny niche anyway. Go is a very small language in terms of use compared to Nodejs or PHP, which are arguably languages with a terrible design.


> I have nothing against Go, it's a tool that does its job fairly well and has very interesting qualities (fast compile time, self-contained binaries, decent performance out of the box), but the religious worship of “simplicity” is really annoying.

Typical gatekeeping of the gatekeepers of simplicity, and I'm pretty sure you code 23.5 hours a day in Haskell


> Typical gatekeeping of the gatekeepers of simplicity, and I'm pretty sure you code 23.5 hours a day in Haskell

I've no idea what you mean, you should keep your argumentation simpler ;)


Damn that’s a comeback that’s not complicated


Seriously, if you feel patronised by how someone designs a programming language, it might be best to move on. It's obviously not for you. Especially when you feel compelled to make bad-faith assumptions and resort to ageism over it.

For those who want to feel the wind of coding freedom blow through their hair, I can recommend spending some time learning Lisp. It offers the most freedom you can possibly have in a programming language. It might enlighten you in many other ways. It won't be the last language you learn, but it might be the most fruitful experience.


Most people who brag about Lisp's (Common Lisp's) superiority never actually used it. It is not as impressive as many legends claim.


Doesn't ring true; why would a non-user of Common Lisp evangelize it?

Are there online examples? Can you point to someone's blog where they are proselytizing regarding Common Lisp, but it's obvious they don't have experience in it (perhaps betrayed by technical problems in their rhetoric).


Can you name a language that provides more freedoms? I used Lisp as an example for that side of the spectrum because I'm familiar with it, having used it for many years in the past. But maybe there are better examples.


What kind of "freedom", precisely, are you talking about? Freedom to write purely functional programs? Well, then you need Haskell or Clojure at least. Freedom to write small, self sufficient binaries? Well you need C or C++ then. CL is a regular multiparadigm language with a rich macro system, relatively good performance but nonexistent dependency management, too unorthodox OOP, with no obvious benefits compared to more modern counterparts, and a single usable implementation (SBCL). If I want s-expressions based language I can always choose Scheme or Clojure, if I need modern flexible multiparadigm language I'd use Scala


All of them. You can do imperative, functional, and OOP programming in Lisp. As for small libraries, it's because cruft is an actual hindrance in Lisp. It's like Unix tools: you can do a lot of stuff with them, but a more integrated tool that does one thing better will fare worse at others. A big library brings a rigid way of thinking to Lisp's flexible model. Dependency management? Think of it like the browser runtime, where you can bring the inspector up and do stuff to the pages. It's a different development model where you patch a live system. And with the smaller dependency model, you may as well vendor the libraries if you want reproducibility. Unorthodox OOP? CLOS is the best OOP model out there.

The thing is that Common Lisp has most of what current programming languages are trying to implement. But it does require learning proper programming and being a good engineer.


You should probably reread what I wrote, and lay off the patronizing attitude. "It is just better, you do not get it" won't work here. Yes, you can do functional in Lisp, as you can even in C, but why? The support for functional style is laughable compared to Haskell or even Clojure. CL advocates fanatically fail to accept the bitter truth: CL is a dead language with a once-great set of features which are now present in many, many other languages.


> most languages do fine

No, they don't. Most languages turn dealing with code formatting into an externality foisted upon either:

• the release managers (who have to set up automation to enforce a house style — but first have to resolve interminable arguments about what the given project's house style should be, which creates a disincentive to doing this automation); or

• the people reviewing code in the language.

In most languages, in a repo without formatting automation, reviewers are often passed these stupid messy PRs that intermingle actual semantic changes with (often tons of) random formatting touch-ups — usually on just the files touched by the submitter's IDE.

There's a constant chant by code reviewers to "submit formatting fixes as their own PR if there are any needed" — but nobody ever does it.

Golang 1. fixes in place a single "house style", removing the speedbump in the way of automating formatting; and 2. pushes the costs of formatting back to the developer, by making most major formatting problems (e.g. unneeded imports) into compile-time errors — and also, building a formatter into the toolchain where it can be relied upon to be present and so used in pre-commit hooks, guaranteeing that the code in the repo never gets out of sync with proper formatting.
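
(A tiny sketch of point 2: a file like this, with a deliberately unused import, will not even compile in Go.)

    package main

    import (
        "fmt"
        "strings" // compile-time error: imported and not used: "strings"
    )

    func main() {
        fmt.Println("hello")
    }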

"Getting in the way of the users" is the right thing to do, when "the users" (developers) fan in 1000:1 with code reviewers and release managers, who have to handle any sloppiness they produce.

(Coincidentally, this is analogous to other Google thinking about scaling compute tasks. Paraphrasing: "don't push CPU-bottlenecked workloads from O(N) mostly-idle clients, out to O(log N) servers. Especially if the clients are just going to sit there waiting for the servers' response. Get as much of the compute done on the clients as possible; there's not only far more capacity there, but blocking on the client blocks only the person doing the heavy task, rather than creating a QoS problem." Also known as: "the best 'build server' is your own workstation. We gave you a machine with an i9 in it for a reason!")


> No, they don't.

Really, they do: there are millions of us coding in those other languages just fine, automatic formatting has been a thing for decades, and I'm not aware of a single language out there that doesn't have such a formatting tool.

The only exception with Go is that you cannot change the default settings. But that's it. In any other language you can use a code formatter with the default settings and the “speedbump in the way of automating formatting” you talk about doesn't exist anywhere but in your mind.

> where it can be relied upon to be present and so used in pre-commit hooks

You know that a failing git hook aborts the commit? So that with any language, if the formatter isn't installed in the machine, the commit cannot be performed, which means that the formatter can actually be relied upon anyway. In practice, the hardest part is making sure people all have the git hook installed (that's not that hard but that's the hardest part).

As I said before, Go has many useful properties, but automatic formatting is definitely not what makes Go relevant, and the endless stream of Gophers who argue this are just ridiculing themselves in front of everybody else.


> You know that a failing git hook aborts the commit? So that with any language, if the formatter isn't installed in the machine, the commit cannot be performed, which means that the formatter can actually be relied upon anyway.

When making a trivial fix PR to an upstream FOSS project, if I find that a missing third-party linter install has force-rejected my commit (that I know has correct syntax)... then I just give up on making that PR. I can't be assed to install some random linter. (Third-party linters have a history of being horrible to install†.)

Small amounts of friction can be enough to shape behavior (see https://en.wikipedia.org/wiki/Nudge_theory.) Aggregated over a large project's entire community, this can make an appreciable difference in code quality over time.

† Mind you, a linter that exists as a library dev dependency of the project is fine, too. I had to pull the deps to build and run the tests, so the linter will be there by the time I attempt to commit. It's just linters that are their whole own projects that give me a jaw-ache.

> and the endless stream of Gophers who argue this are just ridiculing themselves in front of everybody else.

I don't even use Go! I mainly write Elixir, actually. Which also has a built-in auto-formatter.

To me, the nice thing about the formatter being built into Elixir (and of-a-piece with the compiler), is that when I use macros, the purely-in-memory generated-and-compiled code can be inspected in the REPL, and shows as formatted (because it passes through the auto-formatter), rather than looking like AST gunk. Without having had to pay that auto-formatting cost at compile time (because that would also be a cost you'd pay at runtime codegen time, which you might do a lot of if you've built a domain-specific JIT on top of the macro system.)


It's easy for programmers to focus on the technical details and forget the big picture. The technical aspects of automatically formatting code are relatively easy to solve. The difficulty is in the social parts. That's what Go solved by bundling gofmt with the language.

As a result, almost all Go code out there is formatted the exact same way and nobody has ever had to have the dreaded code formatting discussion about Go at their company. Eliminating such bikeshedding for every user of the language is a solid win.

That's why all the languages that came after Go have adopted the same approach, e.g. Rust and Zig. Python's Black formatter was directly inspired by gofmt as well.

What is provided by default really matters.


Ironic given how much effort is going into Bazel remote build executors.


Snarky response: that's more steps toward the long-held dream of the Google operations department: to be able to just issue all devs cheap commodity Chromebooks, because all the compute happens on a (scale-to-zero) Cloud Shell or Cloud Workstation resource.

Actual response:

• For dev-time iteration, you want local builds; for large software (e.g. Chrome), you make this work by making builds incremental. So it takes a few hours to build locally the first time you build, but then it's down to 30s to rebuild after a change.

• But for building releases, you can't rely on incremental builds; incremental builds (i.e. building on top of a cache from previous arbitrary builds) would be non-deterministic and give you non-reproducible builds, exactly what a release manager doesn't want. So releases, at least, are stuck needing to "build the world." You want to accelerate those — remote build infra is the way to go. Remote, distributed build infra, ideally (think: ccache on Flume.)

These remote/distributed builds do still cohere to the philosophy in the abstract, though — a remote build is not the same as a CI build, after all; the dev's own workstation is still acting as the planner and director of the build process.


Appreciate a proper response to my throw away comment :)

> incremental builds (i.e. building on top of a cache from previous arbitrary builds) would be non-deterministic and give you non-reproducible builds

Isn’t this exactly what Bazel solves?


It tries, but it's really more of an operational benefit (i.e. works to your advantage to enable build traceability and avoid compile-time Heisenbugs, when you the developer can hold your workstation's build-env constant) than a build-integrity one (i.e. something a mutually-untrustworthy party could use to audit the integrity of your build pipeline, by taking random sample releases and building them themselves to the same resulting SHA — ala Debian's deterministic builds.)

Bazel doesn't go full Nix — it doesn't capture the entire OS + build toolchain inside its content-fingerprinting to track it for changes between builds. It's more like Homebrew's build env — a temporary sandbox prefix containing a checkout of your project, plus symlinks to resolved versions of any referenced libs.

Because of this, you might build, then upgrade an OS package referenced in your build or containing parts of your toolchain, and then build again, and Bazel (used on its own) doesn't know that anything's different. But now you have a build that doesn't look quite like it would if you had built everything with the newest version of the package.

I'm not saying you can't get deterministic builds from Bazel; you just have to do things outside of Bazel to guarantee that. Bazel gets you maybe 80% of the way there. Running the builds inside a known fixed builder image (that you then publish) would be one way to get the other 20%.

I have a feeling that Blaze is probably better for this, though, given all the inherent corollary technologies (e.g. objFS) it has within Google that don't exist out here.


> gives some kind of coherence to it (looking at you, Java and C++)

I have never done any real programming in Java itself, but the parts of Java world that I learned while writing some Clojure circa 2015 felt pretty coherent. Now I'm curious what I missed.


It baffles me that so many developers are unable to use pre-commit hooks for their code formatting tools, which have existed since the 1990s, to the point that gofmt became a revelation.


That's hardly the point. The point is that there is a single format for the language itself and you don't have to argue about spaces vs tabs vs when to line break, whether you want trailing commas and where to put your braces.

You can format on save or in a pre-commit hook. But the fact that the language has a single canonical format makes it kind of new.


Yes, because there is no one in the room able to configure the formatting tool for the whole SCM.

A simple settings file set in stone by the CTO, such a hard task to do.

The fact that this is even seen as a novelty only confirms the target group for the language.


> A simple settings file set in stone by the CTO, such a hard task to do.

And then you have a 100 companies with 100 CTOs resulting in 100 different styles.

With Go there is only one style everywhere.


Most people only care about the code of their employer.


Many shops have to write and submit patches to upstream projects. Some shops have to maintain their own "living fork" version of an upstream project.


Yeah, and apparently use Notepad, since they are unable to have a configuration file for formatting.


Very few employers do 100% of the code in-house, everyone uses libraries and code from the internet.

Which will have a different style you need to contend with.

But with Go every single sane piece of code you find will be formatted with gofmt and will look mostly the same.


> A simple settings file set in stone by the CTO, such a hard task to do.

It does seem to be a hard thing to do. Working across dozens of enterprise shops over the last 15 years, I have not seen such a setting done or dictated at all. So the whole codebase used to be a mishmash of personal styles.


A clear management failure then.


Any CTO who is aware of the impact that having an incoherent programming style can have on employee productivity, is likely going to arrive at the conclusion that the most efficient way to set such policy is to "outsource" it to the programming language, by requiring projects to use an opinionated language.

Then again, any such CTO is likely also going to be someone who tends to think about things like "the ability to hire developers already familiar with the language to reduce ramp-up time" — and will thus end up picking a common and low-expressivity opinionated language. Which usually just means Java. (Although Golang is gaining more popularity here as well, as more coding schools train people in it.)


It is going to be a very clueless CTO, if they aren't aware of tooling that is even older than themselves.


IMHO it's not about the standards in your company, it's more about being able to parse any random library on GitHub etc with your eyeballs.


I use compilers and IDEs for that.


I am so tired of reading Java/C++/Python code that just slaps try/catch around several lines. To some it might seem annoying to actually think about errors and error handling line by line, but for whoever tries to debug or refactor, it's a godsend. Where I work, a try/catch around more than one call that can throw an exception, or one that includes arbitrary lines that don't throw the caught exception, is a code smell.

So when I looked at Go for the first time, the error handling was one of the many positive features.

Is there any good reason for wanting try/catch other than being lazy?
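
For illustration, a minimal sketch of what I mean, with each fallible call handled on the line where it happens and wrapped with context (the Config type and JSON format are made up for the example):

    package config

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type Config struct {
        Addr string `json:"addr"`
    }

    // Load handles each error where it occurs and wraps it with context,
    // instead of one broad try/catch around the whole function body.
    func Load(path string) (*Config, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading config %s: %w", path, err)
        }
        var cfg Config
        if err := json.Unmarshal(data, &cfg); err != nil {
            return nil, fmt.Errorf("parsing config %s: %w", path, err)
        }
        return &cfg, nil
    }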


>the error handling was one of the many positive features.

sounds good on paper, but seeing "if err!=nil" repeated a million times in Golang codebases does not create a positive impression at all



Yes but the impression is largely superficial. The error handling gets the job done well enough, if crudely.


The ability to quickly parse, understand and reason about code is not superficial, it is essential to the job. And that is essentially what those verbose blocks of text get in the way of.


As an experienced Go dev, this is literally not a problem.

Golang code has a rhythm: you do the thing, you check the error, you do the thing, you check the error. After a while it becomes automatic and easy to read, like any other syntax/formatting. You notice if the error isn't checked.

Yes, at first it's jarring. But to be honest, the jarring thing is because Go code checks the error every time it does something, not because of the actual "if err != nil" syntax.


Just because you can adapt to verbosity does not make it a good idea.

I've gotten used to Java's getter/setter spam, does that make it a good idea?

Moreover, don't you think that something like Rust's ? operator would be a perfect solution for handling the MOST common type of error handling, aka not handling it, just returning it up the stack?

  val, err := doAThing()
  if err != nil {
    return nil, err
  }
VERSUS

  val := doAThing()?


I personally have mixed feelings about this. I think a shortcut would be nice, but I also think that having a shortcut nudges people towards using short-circuit error handling logic simply because it is quicker to write, rather than really thinking case-by-case about what should happen when an error is returned. In production code it’s often more appropriate to log and then continue, or accumulate a list of errors, or… Go doesn’t syntactically privilege any of these error handling strategies, which I think is a good thing.


This. Golang's error handling forces you to think about what to do if there's an error Every Single Time. Sometimes `return err` is the right thing to do; but the fact that "return err" is just as "cluttered" as doing something else means there's no real reason to favor `return err` instead of something slightly more useful (such as wrapping the err; e.g., `return fmt.Errorf("Attempting to fob trondle %v: %w", trondle.id, err)`).

I'd be very surprised if, in Rust codebases, there's not an implicit bias against wrapping and towards using `?`, just to help keep things "clean"; which has implications not only for debugging, but also for situations where doing something more is required for correctness.


Well we are in a discussion thread about a language that does just that :)

I see two issues with the `?` operator:

1. Most Go code doesn't actually do

    return nil, err
but rather

    return nil, fmt.Errorf("opening file %s as user %s: %w", file, user, err)
that is, the error gets annotated with useful context.

What takes less effort to type, `?` or the annotated line above?

This could probably be solved by enforcing that a `?` be followed by an annotation:

  val := doAThing()?("opening file %s as user %s: %w", file, user, err)
...but I'm not sure we're gaining much at that point.

2. A question mark is a single character and therefore can be easy to miss whereas a three line if statement can't.

Moreover, because in practice Go code has enforced formatting, you can reliably find every return path from a function by visually scanning the beginning of each line for the return statement. A `?` may very well be hiding 70 columns to the right.


For the first point, there are two common patterns in rust:

1. Most often found in library code, the error types have the metadata embedded in them so they can nicely be bubbled up the stack. That's where you'll find `do_a_thing().map_err(|e| Error::FileOpenError { file, user, e })?`, or perhaps a whole `match` block.

2. In application code, where matching the actual error is not paramount, but getting good messages to an user is; solutions like anyhow are widely used, and allow to trivially add context to a result: `do_a_thing().context("opening file")?`. Or for formatted contexts (sadly too verbose for my taste): `do_a_thing().with_context(|| format!("opening file {file} as user {user}"))?`. This will automatically carry the whole context stack and print it when the error is stringified.

Overall, what I like about this approach is the common case is terse and short and does not hinder readability, and easily gives the option for more details.

As for the second point, what I like about _not_ easily seeing all return paths (which are a /\? away in vim anyways), is that special handling stands out way more when reading the file. When all of the sudden you have a match block on a result, you know it's important.


It might just be me, but I find both of those to be massively less readable. More terse is not the same as more readable (in fact, I find the reverse).

I'm a huge fan of keeping things simple; my experience has shown me that complex things have lots of obscure failure points, while simple things are generally more robust.


You always have the option of using a match block if you don't like those chained calls. But I do agree, it's a bit bolted on and kinda ugly.

> More terse is not the same as more readable (in fact, I find the reverse).

I generally agree, but I also find that "all explicit" also hinders readability because it tends to drown the nitty-gritty details. As always it's a matter of balance :) And I think that neither go nor rust are great in this matter as one is verbose and the other falls in the "keyword soup" with the chain call, the closure, and the format macro. I'm pretty sure something in between could be found.


Actually this is precisely the same cadence as in good old C. As someone who writes lots of low-level code, I find Go's cadence very familiar and better than try-catch.


The idea that error handling is "not part of the code" is silly though. My impression of people that hate Go's explicit error handling is that they don't want to deal with errors properly at all. "Just catch exceptions in main and print a stack trace, it's fine."

Rust's error handling is clearly better than Go's, but Go's is better than exceptions and the complaints about verbosity are largely complaints about having to actually consider errors.


> The idea that error handling is "not part of the code" is silly though. My impression of people that hate Go's explicit error handling is that they don't want to deal with errors properly at all. "Just catch exceptions in main and print a stack trace, it's fine."

I'm honestly asking as someone neutral in this, what is the difference? What is the difference between building out a stack trace yourself by handling errors manually, and just using exceptions?

I have not seen anyone provide a practical reason that you get any more information from Golang's error handling than you do from an exception. It seems like exceptions provide the best of both worlds, where you can be as specific or as general as you want, whereas Golang forces you to be specific every time.

I don't see the point of being forced to deal with an "invalid sql" error. I want the route to error out in that case because it shouldn't even make it to prod. Then I fix the SQL and will never have that error in that route again.


The biggest difference is that you can see where errors can happen and are forced to consider them. For example imagine you are writing a GUI app with an integer input field.

With exception style code the overwhelming temptation will be to call `string_to_int()` and forget that it might throw an exception.

Cut to your app crashing when someone types an invalid number.

Now, you can handle errors like this properly with exceptions, and checked exceptions are used sometimes. But generally it's extremely tedious and verbose (even more than in Go!) and people don't bother.

There's also the fact that stack traces are not proper error messages. Ordinary users don't understand them. I don't want to have to debug your code when something goes wrong. People generally disabled them entirely on web services (Go's main target) due to security fears.
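
(To make the input-field example concrete, a minimal Go sketch; the input string is made up, and a real GUI would surface the message in the UI instead of printing it:)

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        input := "12x" // pretend this came from the integer input field
        n, err := strconv.Atoi(input)
        if err != nil {
            // the "invalid number" path is right there at the call site
            fmt.Printf("%q is not a valid number, please try again\n", input)
            return
        }
        fmt.Println("got", n)
    }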


> But generally it's extremely tedious and verbose

Is it? In my experience it's very short, especially considering you can catch multiple errors. Do my users really need a different error message for "invalid sql" vs "sql connection timeout?" They don't need to know any of that.

> There's also the fact that stack traces are not proper error messages

I would say there's not a proper error message to derive from explicitly handling sql errors. Certainly not a different message per error. I would rather capture all of it and say something like "Something went wrong while accessing the database. Contact an admin." Then log the stack trace for devs


> Do my users really need a different error message for "invalid sql" vs "sql connection timeout?"

Yes! A connection timeout means it might work if they try again later. Invalid SQL means it's not going to fix itself.

But in any case, the error messages are probably the minor part. The bigger issue is about properly handling errors and not just crashing the whole program / endpoint handler when something goes wrong.

> I would say there's not a proper error message to derive from explicitly handling sql errors. Certainly not a different message per error. I would rather capture all of it and say something like "Something went wrong while accessing the database. Contact an admin." Then log the stack trace for devs

Ugh these are the worst errors. Think about the best possible action that the user could take for different failure modes.

"Contact an admin" is pretty much always bottom of the list because it rarely works. More likely options are "try again later", "try different inputs", "clear caches and cookies", "Google a more specific error".

Giving up on making an error message because you only have a stack trace and don't want to show it means users can't pick between those actions.

If you have written a "something went wrong" error I literally hate you.


> "Contact an admin" is pretty much always bottom of the list because it rarely works. More likely options are "try again later", "try different inputs", "clear caches and cookies", "Google a more specific error"

You're totally misunderstanding what I'm saying. If I have an error the user can act on, I'll make that error message for them. If they can't act on it, I will make a generic catcher and ask them to contact an admin because that's the only thing they can do. It is not my experience that any of these things you've written (try again later, try a different input) are applicable when an error comes up in my apps. It's always an unexpected bug a developer needs to fix, because we've already handled the other error paths. And the bug is not from "not explicitly handling the error."

> Think about the best possible action that the user could take for different failure modes.

What if contacting an admin IS the best possible action? Which is what I'm referring to.

In the case of invalid sql, your route should crash because it's broken. Or catch it and stop it. It's functionally the same thing.

You seem to be under the impression that having exceptions means people can't handle errors explicitly? It just prevents the plumbing of manually bubbling up the error. It means you can do so MORE granularly. Also, there are some errors that are functionally the same whether you handle them explicitly or not. There are unexpected errors, and even Golang won't save you from that. Golang doesn't even care if you handle an error. It will compile fine. Even PHP will tell you if you haven't handled an exception.

> If you have written a "something went wrong" error I literally hate you.

Lol.


> You seem to be under the impression that having exceptions means people can't handle errors explicitly?

Not at all! It's possible, but it's very tedious, and the lazy "catch it in main" option is so easy that in practice when you look at code that uses exceptions people actually don't handle errors explicitly.

> It means you can do so MORE granularly.

Again, it doesn't just mean that you can; it means that you will. And for proper production software that's not a good thing.

> There are unexpected errors

Only in languages with exceptions. In a language like Rust there are no unexpected errors; you have to handle errors or the compiler will shout at you.


That has nothing to do with having exceptions, Rust just has a good type system (something Go doesn't have).

But again, handling an error doesn't necessarily prevent bugs. Just because you handled an error doesn't mean the error won't happen in Prod. It just means when it does, you wrote a message for it or custom behavior. Which could be good, or it might be functionally as effective as returning a stack-trace message. It depends on the situation.

For what it's worth, I've never seen people not handle errors that the user could do anything with. If it's relevant to the user, we handle it.


> That has nothing to do with having exceptions

It absolutely does. Checked exceptions sort of half get there too but they are quite rarely used (I think they are used in Android quite well). They were actually removed from C++ because literally nobody used them.

> handling an error doesn't necessarily prevent bugs.

I never made that claim.

> I've never seen people not handle errors that the user could do anything with.

We already talked about "something went wrong" messages. Surely you have seen one of those?


> We already talked about "something went wrong" messages. Surely you have seen one of those?

My point is that "something went wrong" messages are for errors the user CANT and SHOULDNT do anything with.


> In a language like Rust there are no unexpected errors

What? Of course there is. Rust added panic! exactly because unexpected errors are quite possible.

Unexpected errors, or exceptions as they are conventionally known, are a condition that arises when the programmer made a mistake. Rust does not have a complete type system. Mistakes that only show up at runtime absolutely can be made.


> What is the difference between building out a stack trace yourself by handling errors manually, and just using exceptions?

You cannot force your dependencies to hand you a stack trace with every error. But in languages that use exceptions a stack trace can be provided for "free" -- not free in runtime cost, but certainly free in development cost.


This one frustrates me a lot. Not getting a proper trace of the lib code that generated an error makes debugging what _exactly_ is going on much more of a PITA. Sure, I can annotate errors in _my_ code all day long, but getting a full trace is a pain.


Sure, I just don't think it's that significant. Humans don't read/parse code character-by character, we do it by recognizing visual patterns. Blocks of `if err != nil { }` are easy to skip over when reading if needed.


I agree, though I was really surprised to learn this when reading Go code. Much easier to skip over than I was expecting it to be


I find that knowing where my errors may come from and that they are handled is essential to my job, and missing all that info because it is potentially in a different file altogether gets in the way.


> sounds good on paper, but seeing "if err!=nil" repeated a million times in Golang codebases does not create a positive impression at all

Okay, but other than exceptions, whats the alternative?


> other than exceptions, whats the alternative?

This may be a crazy/dumb take, but would it be so wrong to allow code outside the function to take the wheel and do a return? Then you could define common return scenarios and make succinct calls to them. Use `returnif(err)` for the most typical, boilerplate replacement, or more elaborate handlers as needed.


The ? operator in Rust?


More than just that, Result in general also prevents from accessing the value when there is an error and accessing an error when there is a value.


The absence of that safeguard in Go is a feature. It's used when the error isn't that critical and the program can merrily continue with the default value.

Of course, this is also scarily non-explicit.


Good point.

I only briefly tried Rust and was turned off by the poor ergonomics; I don't think (i.e. open to correction) that the Rust way (using '?') is a 1:1 replacement for the use-cases covered by Go error management or exceptions.

Sometimes (like in the code I wrote about 60m ago), you want both the result as well as the error, like "Here's the list of files you recursively searched for, plus the last error that occurred". Depending on the error, the caller may decide to use the returned value (or not).

Other times you want an easy way to ignore the error, because a nil result gets checked anyway two lines down: Even when an error occurs, I don't necessarily want to stop or return immediately. It's annoying to the user to have 30 errors in their input, and only find out about #2 after #1 is fixed, and #3 after #2 is fixed ... and number #30 after #29 is fixed.
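
(One way to keep going and still report everything in current Go is to collect and join; a rough sketch with made-up fields, using errors.Join from Go 1.20+:)

    package main

    import (
        "errors"
        "fmt"
    )

    type Form struct {
        Name string
        Age  int
    }

    // validate keeps checking after the first problem so the user sees all
    // of the errors in one pass; errors.Join returns nil if the slice is empty.
    func validate(f Form) error {
        var errs []error
        if f.Name == "" {
            errs = append(errs, errors.New("name is required"))
        }
        if f.Age < 0 {
            errs = append(errs, fmt.Errorf("age %d must not be negative", f.Age))
        }
        return errors.Join(errs...)
    }

    func main() {
        fmt.Println(validate(Form{Age: -1}))
    }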

Go allows these two very useful use-cases for errors. I agree it's not perfect, but with code-folding on by default, I literally don't even see the `if err != nil` blocks.

Somewhat related: In my current toy language[1], I'm playing around with the idea of "NULL-safety" meaning "Results in a runtime-warning and a no-op", not "Results in a panic" and not "cannot be represented at all in a program"[2].

This lets a function record multiple errors at runtime before returning a stack of errors, rather than stack-tracing, segfaulting or returning on the first error.

[1] Everyone is designing their own best language, right? :-) I've been at this now since 2016 for my current toy language.

[2] I consider this to be pointless: every type needs to indicate lack of a value, because in the real world, the lack of a value is a common, regular and expected occurrence[3]. Using an empty value to indicate the lack of a value is almost certainly going to result in an error down the line.

[3] Which is where there are so many common ways of handling lack of a value: For PODs, it's quite popular to pick a sentinel value, such as `(size_t)-1`, to indicate this. For composite objects, a common practice is for the programmer to check one or two fields within the object to determine if it is a valid object or not. For references NULL/null/nil/etc is used. I don't like any of those options.


> that the Rust way (using '?') is a 1:1 replacement for the use-cases covered by Go error management or exceptions.

It is a 1:1 replacement.

I think you're thinking of the case when you have many results, and you want to deal with that array of results in various ways.

> Result implements FromIterator so that a vector of results (Vec<Result<T, E>>) can be turned into a result with a vector (Result<Vec<T>, E>). Once an Result::Err is found, the iteration will terminate.

This is one such way, but there are others - https://doc.rust-lang.org/rust-by-example/error/iter_result....

This doesn't handle every case out there, but it does handle the majority of them. If you'd like to do something more bespoke, that's an option as well.


> Is there any good reason for wanting try/catch other than being lazy?

It's the best strategy for short-running programs, or scripts if you will. You just write code without thinking about error handling at all. If anything goes wrong at runtime, the program aborts with a stacktrace, which is exactly what you want, and you get it for free.

For long-running programs you want reliability, which implies the need to think about and explicitly handle each possible error condition, making exceptions a subpar choice.
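
(For what it's worth, the closest Go gets to that script style is probably a small must helper that panics; a sketch, with a made-up input file:)

    package main

    import (
        "fmt"
        "os"
    )

    // must aborts with a panic (and a stack trace) when err is non-nil,
    // which is roughly the "just let the script die" behavior described above.
    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        data := must(os.ReadFile("input.txt"))
        fmt.Println(len(data), "bytes")
    }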


The huge volume of boilerplate makes the code harder to read, and annoying to write. I like Go, and I don't want exceptions per se, but I would love something that cuts out all the repetitive noise.


This has not been my experience. It doesn’t make the code harder to read, but it forces you to think about all the code paths—if you only care about one code path, the error paths may feel like “noise”, but that’s Go guiding you toward better engineering practices. It’s the same way JavaScript developers felt when TypeScript came along and made it painful to write buggy code—the tools guide you toward better practices.


> The huge volume of boilerplate makes the code harder to read, and annoying to write

That may be superficially true, but don't forget our brains are structured to optimize away repetitive work and boilerplate. We are so used to things like "strcpy" and "string_copy" that even if they are repeated a billion times they can be processed fast.


The example in the article is a good one. Result and Optional as first class sum types


That just changes the boilerplate from if's to match's.


See the example with the `?` operator: https://github.com/borgo-lang/borgo?tab=readme-ov-file#error...

The main benefits of a Result type are brevity and the inability to accidentally not handle an error.


Yes, but that isn't necessarily a feature of option types. Is it the case that similar sugar for the tiresome Go pattern couldn't achieve similar benefits?


Perhaps, but there have been several proposals along those lines and nobody seems capable of figuring out a sensible implementation.

A funny drawback of the current Go design that a Result type would solve is the need to return zero values of all the declared function return types along with the error: https://github.com/golang/go/issues/21182.
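
(For anyone who hasn't hit it, a tiny made-up illustration of that zero-value shuffle:)

    package main

    import (
        "errors"
        "fmt"
    )

    type User struct{ Name string }

    // Only the error matters on the failure path, but zero values for the
    // other two results still have to be spelled out by hand.
    func findUser(id int) (User, bool, error) {
        if id <= 0 {
            return User{}, false, errors.New("invalid id")
        }
        return User{Name: "gopher"}, true, nil
    }

    func main() {
        fmt.Println(findUser(-1))
    }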


exactly.. yes, I understand why ? is neat from a type POV since you specifically have to unwrap an optional type whereas in Go you can ignore a returned error (although linters catch that) - so at the end of the day it's just the same boilerplate, one with ? the other with err != nil


> Is there any good reason for wanting try/catch other than being lazy?

In a hot path it’s often beneficial to not have lots of branches for error handling. Exceptions make it cheap on success (yeah, no branches!) and pretty expensive on failure (stack unwinding). It is context specific but I think that can be seen as a good reason to have try catch.

Now of course in practice people throw exceptions all the time. But in a tight, well controlled environment I can see them as being useful.


> In a hot path it’s often beneficial to not have lots of branches for error handling.

This is true but the branch isn't taken unless there's an error in Go.

Given that the Go compiler emits the equivalent of `if (__unlikely(err != nil)) {...}` and that modern CPUs are decently good at branch prediction (especially in a hot path that repeats), I find it hard to believe that the cost would be greater than exceptions.


Yes, it's the ability to unwind the stack to an exception handler without having to propagate errors manually. Go programs end up doing the exact same thing as "try/catch around multiple lines" with functions that can return an error from any point, and every caller blindly propagating the error up the stack. The practice is so common that it's like semicolons in Java or C, it just becomes noise that you gloss over.


Go programs generally do not "blindly propagate the error up the stack". I've been writing Go since 2011 and Python since 2008, and for the last ~decade I've been doing DevOps/SRE for a couple of places that were both Go and Python shops. Go programs are almost universally more diligent about error handling than Python programs. That doesn't mean Go programs are free from bugs, but there are far, far fewer of them in the error path compared to Python programs.


This matches my experience _hard_; there is simply no comparison in practice. Go does it better nearly every time


The difference is that all code paths are explicitly spelled out and crucially that the programmer had to consider each path at the time of writing the code. The resulting code is much more reliable than what you end up with exceptions.


I understand your sentiment. The debate of error codes vs exceptions will be debated until the year 3000, and further.

One point to consider with exceptions: It is "impossible" to ignore an exception. The function implementation is telling (nay: dictating[!] to) the caller: You cannot ignore this error code. At the very least, you must catch, then discard.

Another point that is overlooked in these discussions: Exceptions and error codes can, and do, peacefully co-exist. Look at Python, C#, and Java. In the standard library for all three, there are cases where error codes are used and cases where exceptions are thrown.

Another thing about exceptions, especially in enterprise programming: you can add a human-readable error message. That is not possible when only returning error codes.

EDIT

Inspired by this comment: https://news.ycombinator.com/item?id=40220147

I forgot about exception stack traces, including "chained" exceptions. These are incredibly powerful when writing enterprise software that commonly has a stack 50+ levels deep.


> One point to consider with exceptions: It is "impossible" to ignore an exception. The function implementation is telling (nay: dictating[!] to) the caller: You cannot ignore this error code. At the very least, you must catch, then discard.

Error returns are no different, assuming a proper implementation like the Result type in Rust. The difference is, unhandled error returns are found at compile time but unhandled exceptions only show up at runtime, when it's too late.

> Another point this is overlooked in these discussions: Exceptions and error codes can, and do, peacefully co-exist.

Both Go and Rust have panics, which are basically exceptions that are generally not supposed to be caught. They are used for unrecoverable cases like running out of memory or programmer mistakes. There's otherwise no reason to mix the two.
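As a hedged sketch of that split (mustPositive is a made-up helper): panics carry the "should never happen" cases, and recover is only used at well-defined boundaries rather than as a general try/catch.

    package main

    import "fmt"

    // mustPositive is a hypothetical helper: a programmer mistake triggers a
    // panic rather than an error return, mirroring how Go reserves panics for
    // unrecoverable situations.
    func mustPositive(n int) int {
        if n <= 0 {
            panic(fmt.Sprintf("expected a positive value, got %d", n))
        }
        return n
    }

    func main() {
        // recover exists, but idiomatic Go only uses it at boundaries such as
        // keeping a server goroutine alive, not as everyday control flow.
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            }
        }()
        mustPositive(-1)
    }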

> Another thing about exceptions, especially in enterprise programming, you can add a human readable error message. That is not possible when only returning error codes.

I don't really know what you mean; it's equally possible in both cases. If anything, the error return implementation that Go uses is probably the best out there when it comes to error messages. Most look like:

    return nil, fmt.Errorf("opening file %s as user %s: %w", file, user, err)
Whereas most exception code will just dump a stacktrace since that's the default.


Do you really do that in practice, or do you just blindly go 'if err != nil return nil, err'?

Because fundamentally the function you called can return different errors at any point. So if you just propagate the error, the code paths are in fact not spelled out at all, because the function one level above in the hierarchy has to deal with all the possible errors from two calls down, which are not transparent at all.


In Go, no one really blindly returns nil, err. People very clearly think about errors: if an error may need to be acted on up the stack, people will either create a named error value (e.g., `ErrInvalidInput = errors.New("invalid input")`) or a named error type that downstream users can check against. Moreover, even when propagating errors many programmers will attach error context: `return nil, fmt.Errorf("searching for the flux capacitor %s: %w", fluxCap, err)`. I think there's room for improvement, but Go error handling (and Rust error handling for that matter) seems eminently thoughtful.
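A minimal sketch of that pattern (findFluxCapacitor and ErrInvalidInput are made up for this example): the sentinel error survives the added context and can still be tested with errors.Is.

    package main

    import (
        "errors"
        "fmt"
    )

    // ErrInvalidInput is a named (sentinel) error value that callers can
    // check for with errors.Is, even after layers of wrapping add context.
    var ErrInvalidInput = errors.New("invalid input")

    // findFluxCapacitor is a hypothetical function used only for illustration.
    func findFluxCapacitor(name string) (string, error) {
        if name == "" {
            return "", fmt.Errorf("searching for the flux capacitor %q: %w", name, ErrInvalidInput)
        }
        return "found " + name, nil
    }

    func main() {
        _, err := findFluxCapacitor("")
        fmt.Println(err)                             // searching for the flux capacitor "": invalid input
        fmt.Println(errors.Is(err, ErrInvalidInput)) // true: the sentinel survives the wrapping
    }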


Coming from dotnet, I rather like the Go pattern as you've described it. I would normally catch an error and then write out a custom message with relevant information anyway, and I hate the ergonomics of the try{}catch(Exception ex){} syntax. And yes, it is tempting to let the try block encompass more code than it really should.


Yeah, I was pretty skeptical when they added the error wrapping stuff to the standard library, and it still feels a little too squishy, but in practice it works very well. I prefer Go’s error handling even to Rust’s much more explicit error handling.


I don't see how it's possible to do it blindly unless the code gets autogenerated. If you're typing the `if err != nil` then you've clearly understood that an error path is there.

There's no requirement for the calling function to handle each possible type of error of the callee. It can, as long as the callee properly wrapped the error, but it's relatively rare for that to be required. Usually the exact error is not important, just that there was one, so it gets handled generically.


Their point was that writing `if err != nil { return nil, err }` does the same thing that stack traces from exceptions do, but with even less information. And if that's most of a Go codebase's error handling, it's not a compelling argument against exceptions.


Try-blocks with ~one line are best practice on code bases I have worked with. The upside is that you can bubble errors up to the place where you handle them, AND get stack traces for free. As a huge fan of Result<T, E>, I have to admit that that's a possible advantage. But maybe that fits your definition of lazy :).


> try/catch for more than one call that can throw an exception or including arbitrary lines

You generally need to skip all lines that the exception invalidates. That's why it's a block or conditional.


I agree, I don't really understand everyone's issue with err != nil: it's explicit, and linters catch unhandled errors. Yes, the ? operator in Rust is neat, but you end up with a similar issue of just matching errors throughout your codebase instead of doing err != nil.


The problem is that you're forced to have four possible states:

1. err != nil, nondefault return value

2. err != nil, default return value

3. err == nil, nondefault return value

4. err == nil, default return value

when often what you want to express only has two: either you return an error and there's no meaningful output, or there's output and no error. A type system with tuples but no sum types can only express "and", not "or".
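A small sketch of the point (lookup is a made-up function): the compiler accepts all four combinations, and only convention says which two are meaningful.

    package main

    import (
        "fmt"
        "strconv"
    )

    // lookup returns the classic (value, error) pair. The type system allows
    // four combinations, but only two are intended: (value, nil) on success
    // and (zero value, non-nil error) on failure. Nothing stops a function
    // from returning the other two states; only convention rules them out.
    func lookup(s string) (int, error) {
        n, err := strconv.Atoi(s)
        if err != nil {
            return 0, fmt.Errorf("lookup %q: %w", s, err) // zero value + error
        }
        return n, nil // value + nil error
    }

    func main() {
        if n, err := lookup("42"); err == nil {
            fmt.Println("got", n) // got 42
        }
    }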


This is true, but not a problem. Go's pattern of checking the error on every return means that if an error is returned, the error is what matters and the other values are ignored. Allowing routines to return a result as well as an error is occasionally useful.


I mean, I wish Go had sum types, but this really isn’t a problem in practice. Every Go programmer understands from day 0 that you don’t touch the value unless the error is nil or the documentation states otherwise. Sum types would be nice for other things though, and if it gets them eventually it would feel a little silly to continue using product types for error handling (but also it would be silly to have a mix of both :/).


Yeah, also you almost always need to annotate errors anyway (e.g., `anyhow!`), so the ? operator doesn’t really seem to be buying you much and it might even tempt people away from attaching the error context.


You can print or log the stack trace of the exception in Python.


I don't understand what you think pip, npm etc. have to do with dynamic linking. Those libraries get loaded by the respective interpreter; the linker doesn't come into play at all. And security-wise, npm, pip and the like are nightmares waiting to be exploited xz-style. No static linking will save you from that.


It maps.

If you throw a nodejs or python package into apt, you can either include all of its js/python dependencies in the package itself or make separate apt packages for each of the project's dependencies and have the apt package depend on those packages. The difference is pretty similar to static vs. dynamic linking. Including the dependencies gives the package author explicit control over the versions of all of its dependencies, but depending on external apt packages means you can install security updates to shared libraries system-wide (think log4j or xz). And as I understand it, that is what Debian prefers.

In JavaScript projects it's even become common practice to use a bundler to "compile" server-side JavaScript (with all of its dependencies) into a single large, unreadable .js file. Doing that can reduce memory usage and dramatically improve startup time, and the bundler usually performs dead code elimination. It's more or less identical to static linking, except the resulting artifact is another JavaScript file.


Ah, it was an analogy. That makes sense, thanks!


Not only that. There are people who go to great lengths to make sure that native apps don't work properly across desktop environments even on the same OS. They also call out anyone who dares to complain about it.

