NilAway: Practical nil panic detection for Go (uber.com)
241 points by advanderveer 7 months ago | 242 comments



I do like the approach of static code analysis.

I found it a little funny that their big "win" for the nilness checker was some code logging nil panics thousands of times a day. Literally an example where their checker wasn't needed because it was being logged at runtime.

It's a good idea but they need some examples where their product beats running "grep panic".


Actually, if we were running into cases where a panic is actually happening in production but isn't being logged, then the first thing to note is that we'd need to improve our observability. The issue might or might not be recoverable, but it should be logged. If nothing else, it should show up as a service crash somewhere within those logs, which is also something that service owners monitor and get alerts on.

The advantage of NilAway is not just detecting nil panic crashes after the fact (as you note, we should always be able to detect those eventually, once they happen!), but detecting them early enough that they don't make it to users. If the tool had been online when that panic was first introduced, it would have been fixed before ever showing up in the logs. (Presumably, at least! The tool is not currently blocking, and developers can mistake a real warning for a false positive; false positives do exist, for a number of reasons, both fundamental and related to features still being added.)

But, on the big picture, this is the same general argument as: "Why do you want a statically typed language if a dynamically typed one will also inform you of the mismatch at runtime and crash?" "Well, because you want to know about the issue before it crashes."

Beyond not making it all the way to prod, there is also a big benefit to detecting issues early in the development lifecycle, simply in terms of the effort required to address them: 'while typing the code' beats 'while compiling and testing locally' beats 'at code review time' beats 'during the deployment flow or in staging' beats 'after the fact, from logs/alerts in production', which itself beats 'after the fact, from user complaints after a major outage'. NilAway currently works at the code review stage for most internal users, but it is also fast enough to run during local builds (that currently requires all pre-existing warnings in the code to either be resolved or marked for suppression, though, which is why this mode is less common).


That makes sense. I hope my first comment came across as I intended, which wasn't as criticism of the product but a suggestion about how to talk about the value of the product.


No worries, that's how I understood it too :) Just adding a bit more context on why we feel the approach does beat "grep panic" for people checking out this thread.


Just tried this out on some of my own code and it nails the warts that I had flagged as TODOs (and a few more...). The tool gives helpful info about the source of the nil, too. This is great.


Building a type checker on global inference is the kind of thing that sounds romantic in academia - "no type definitions and yet get type checking!" - but ends up being a nightmare to use in practice.

Nilability of return values should be part of a function's public interface. It shouldn't come as a surprise under certain circumstances of using the code. The problem with global inference is that it targets both the producer and the consumer of the interface at the same time, without a mediating interface definition deciding who is correct. If a producer starts returning nil and a consumer five levels down the call stack happens to be using it, both the producer and the caller are called out, even if that was documented public API before, just never exercised. Or vice versa.

For anyone who had the great pleasure of deciphering error messages from C++ templates, you know what I'm talking about.

I understand the compromises they had to make due to language constraints and I'm sure this will be plenty useful anyway. It's just sad to see a language often called modern and safe having these idiosyncrasies and needing such workarounds.


> Building a type checker on global inference is the kind of thing that sounds romantic in academia - "no type definitions and yet get type checking!" - but ends up being a nightmare to use in practice.

Hi! I use global type inference and I love it.


I got a nil pointer deref panic trying to use this tool:

$ nilaway ./...

panic: runtime error: invalid memory address or nil pointer dereference [recovered]

panic: runtime error: invalid memory address or nil pointer dereference

[signal SIGSEGV: segmentation violation code=0x2 addr=0x0 pc=0x100c16a58]


Is there any movement in the language spec to address this in the future with Nil types or something?


I really love Golang and how it focused on making the job of the reader easy. But with today's modern programming languages, the existence of null pointer dereference bugs doesn't really make sense anymore. I don't think I would recommend that anyone start a project in Golang today.

Maybe we’ll get a Golang 3 with sum types…


If you squint really hard, the work on generics is a step toward the future.

If you don't squint, then I don't think so.


With generics, can you not make a NonNil<T> struct in Go, where the contents of the struct are only a *T that has been checked at construction time to not be nil, and doesn't expose its inner pointer mutably to the public? I would think that would get the job done, but I also haven't really done much Go since prior to generics being introduced

Otherwise, since pointers are frequently used to represent optional parameters, generics + sum types would get the job done; for that use case, it's one of two steps to solve the problem. I don't foresee Go adding sum types, though.


Every type in Go has a zero value. The zero value for pointers is nil. So you can't do it with regular pointers, because users can always create an instance of the zero value.
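
A rough sketch of what that wrapper looks like, and where it breaks down (type and function names are made up, imports elided):

    type NonNil[T any] struct {
        p *T
    }

    // NewNonNil is meant to be the only way to build a NonNil.
    func NewNonNil[T any](p *T) (NonNil[T], error) {
        if p == nil {
            return NonNil[T]{}, errors.New("nil pointer")
        }
        return NonNil[T]{p: p}, nil
    }

    func (n NonNil[T]) Get() *T { return n.p }

    // The loophole: the zero value bypasses the constructor entirely.
    func demo() {
        var n NonNil[int]
        _ = n.Get() // nil, despite the type's name
    }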


This is one of those things which feels like just a small trade off against convenience for the language design, but then in practice it's a big headache you're stuck with in real systems.

It's basically mandating Rust's Default trait or the C++ default (no-argument) constructor. In some places you can live with a Default but you wish there wasn't one. Default Gender = Male is... not great, but we can live with it, some natural languages work like this, and there are problems but they're not insurmountable. Default Date of Birth is... 1 January 1970? 1 January 1900? 0 AD? Also not a good idea, but if you insist.

But in other places there just is no sane Default. So you're forced to create a dummy state, recapitulating the NULL problem but for a brand new type. Default file descriptor? No. OK, here's a "file descriptor" that's in a permanent error state, is that OK? All of my code will need to special case this, what a disaster.


> Default Gender = Male is... not great

    enum Gender {
        Unspecified,
        Male,
        Female,
        Other,
    }

    impl Default for Gender {
        fn default() -> Self {
            Self::Unspecified
        }
    }
or:

    enum Gender {
        Male,
        Female,
        Other,
    }
and use Option<Gender> instead of Gender directly, with Option::None meaning what we would otherwise mean by Gender::Unspecified


I think they are talking about the cons of Go allowing zero value. Rust doesn’t have that problem.


    type Gender int
    const (
        Unspecified Gender = iota
        Male
        Female
        Other
    )
Works the same way. Declaring an empty variable of the type Gender (var x Gender) results in unspecified.


…and now you need to check for nonsense values everywhere, instead of ever being able to know through the type system that you have a meaningful value.

It’s nil pointers all over again, but for your non-pointer types too! Default zero values are yet another own goal that ought to have been thrown away at the design stage.


> instead of ever being able to know through the type system that you have a meaningful value.

That's... not what I'm looking for out of my type system. I'm mostly looking for autocomplete and possibly better perf because the compiler has size information. I really hate having to be a type astronaut when I work in scala.

So, I mean, valid point. And I do cede that point. But it's kind of like telling me that my car doesn't have a bowling alley.


Making an analogy between a car with a bowling alley being as useful as having the ability to know you have a valid selection from a list of choices does not exactly reflect well on your priorities.


I take it you use enums fairly regularly?

I don't really use them that much, so they're superfluous for the most part. Sort of like a car having a bowling alley. I mean, I'll take them if it doesn't complicate the language or impact compile time, but if they're going to do either of those, I'd rather just leave them.

Adding default branches to the couple of switch statements, and returning errors for values outside the set in the couple of spots with custom JSON parsing, doesn't seem like a bad tradeoff.


It's not optimal but one can still implement enums in userland using struct and methods returning a comparable unexported type.
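
One reading of that pattern, sketched (package and identifiers invented for illustration):

    package gender

    // The only field is unexported, so other packages can compare and
    // switch on Gender values but cannot forge new ones like Gender{99}.
    type Gender struct{ v uint8 }

    var (
        Unspecified = Gender{0}
        Male        = Gender{1}
        Female      = Gender{2}
        Other       = Gender{3}
    )

    func (g Gender) String() string {
        switch g {
        case Male:
            return "male"
        case Female:
            return "female"
        case Other:
            return "other"
        default:
            return "unspecified"
        }
    }
The zero value (var g gender.Gender) still silently equals Unspecified, though, so the caveat from upthread applies.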


More like a car without ABS and TC. It's cheaper and you can just drive more carefully, but you're more likely to crash.


I think that's a bad analogy because I actually use ABS and TC. I don't really use an enum heavy style of programming. Maybe twice per project or so. Putting a default branch in a couple switch statements also seems to get me back the same safety (although I'd rather detect the error early and return it in a custom UnmarshalJSON method for the type).

I also imagine I'd use a bowling alley in my car ~2 times per year, tops. So that seems like a better analogy to me.

edit: I guess I should bring up that I don't use go's switch statement much either, and when I do, 99% of the time I'm using the naked switch as more of an analog to lisp's cond clause.


> I think that's a bad analogy because I actually use ABS and TC. I don't really use an enum heavy style of programming.

The point the others are making is you are using a less safe type of programming equivalent to driving without ABS and TC.

From your point of view, "TC and ABS is useful so I use it unlike enums".

From their point of view though, you are the person not using ABS and TC insisting that they offer nothing useful.


To continue the analogy, since I don't use enums, I'm also the person who isn't driving the car.

Can you explain how owning a car without ABS and not driving it is less safe?

Edit: Wait, or am I not using brakes? I think the analogy changed slightly during this whole process.


> Can you explain how owning a car without ABS and not driving it is less safe?

I don't know that it is, I'm speaking based on the implication in this thread that no-enum and no-ABS people are overconfident to a fault.

I do believe that those who claim they don't need enums, static typing, etc are probably overconfident and have a strong desire or need to feel more control.

I'm not sure though, at least for GC'd languages, how enums sacrifice control.


There are still people that swear that they can brake better than ABS. Until they do a side-by-side test.

Re: custom UnmarshalJSON implementation - you still have to remember to do it, and do it for every serialized format (e.g. sql).

A default case in a switch only solves, well, switching. If a rogue value comes in, it will go out somewhere else, e.g. from json to sql or wherever the data is moving.


> A default case in a switch only solves, well, switching. If a rogue value comes in, it will go out somewhere else, e.g. from json to sql or wherever the data is moving.

I mean, yeah, but eventually you have to do something with it. And the only useful thing you can really do with an enum is switch on it...


But it does not work. It looks like it would, with the indentation and the iota keyword, but it's just some constants that do not constrain the type. There will be incoming rogue values, from json or sql or something else.

    var g Gender // ok so far.
    if err := json.Unmarshal([]byte("99"), &g); err != nil { panic(err) }
    // no error and g is Gender(99)!
Now you must validate, remember to validate, and do it in a thousand little steps in a thousand places.
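
For instance, something like this for JSON alone (a sketch reusing the Gender/iota declarations above; encoding/json and fmt imports assumed):

    // UnmarshalJSON rejects anything outside the known set instead of
    // silently storing Gender(99).
    func (g *Gender) UnmarshalJSON(data []byte) error {
        var v int
        if err := json.Unmarshal(data, &v); err != nil {
            return err
        }
        if v < int(Unspecified) || v > int(Other) {
            return fmt.Errorf("invalid Gender value: %d", v)
        }
        *g = Gender(v)
        return nil
    }
And then a near-identical method again for sql, YAML, and every other format the value passes through.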

Go is simple and gets you going fast... but later makes you stop and go back too much.


Default gender male is not how this works in practice. Instead, you define an extra “invalid” value for almost every scalar type, so invalid would be 0, male 1 and female 2. Effectively this makes (almost) every scalar type nullable. It is surprisingly useful, though, and I definitely appreciate this tradeoff most of the time.

(Sometimes your domain type really does have a suitable natural default value, and you just make that the zero value.)


Great, now you’ve brought the pain of checking for nil to any consumer of this type too!


This is a thread about Go, not about Rust. There is a bunch of interesting computer science in this post, and if interesting new computer science is a baby seal, Rust vs. Go discussions are hungry orcas.


I write a decent amount of go - this isn't a defence of the current situation.

> All of my code will need to special case this, what a disaster.

No, your code should handle the error state first and treat the value as invalid up until that point, e.g.

    foo, err := getVal()
    if err != nil {
        return
    }

    // foo can only be used now
It's infuriating that there's no compiler support to make this easier, but c'est la vie.


Man, if only, over the 30-odd years of PL research leading up to Go, somebody had come up with a way to do it better.


Given the choice between an (objectively) theoretically superior language like Haskell or Rust, and a language that prioritises developer ergonomics at the expense of PL research, I'll take the ergonomics, thanks.

We have a 1MM line c++ codebase at work, a rust third party dependency, and a go service that's about as big as the rust dependency. Building the Rust app takes almost as long as the c++ app. Meanwhile, our go service is cloned, built, tested and deployed in under 5 minutes.


Other than compiling a bit faster, what ergonomics does Go give you that Rust doesn't?


> Other than compiling a bit faster,

It's not "a bit faster", it's "orders of magnitude faster". We use a third-party rust service that we compile occasionally; a clean build of about 500 lines of code plus external crates (serde included) is about 10 minutes. Our go service is closer to 5 seconds. An incremental build on the rust service is about 30s-1m; in go it's about 5 seconds. It's the difference between waiting for it to start, and going and doing something else while you compile, on every change or iteration.

> what ergonomics does Go give you that Rust doesn't

- Compilation times. See above.

- How do I make an async http request in rust and go? In go it's go http.Post(...) (see the sketch after this list)

In rust you need to decide which async framework you want to use as your application runtime, and deal with the issues that brings later down the line.

- In general, go's standard library is leaps and bounds ahead of rust's (this is an extension of the async/http point)

- For a very long time, the most popular crates required being on nightly compilers, which is a non-starter for lots of people. I understand this is better now, but this went on for _years_.

- Cross compilation just works in go (until you get to cgo, but that's such a PITA and the FFI is so slow that most libraries end up being in go anyway), in rust you're cross compiling with LLVM, with all the headaches that brings with it.

- Go is much more readable. Reading rust is like having to run a decompressor on the code - everything is _so_ terse, it's like we're sending programs over SMS.

- Go's tooling is "better" than rust's. gofmt vs rustfmt, go build vs cargo build, go test vs cargo test
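
To make the Go side of the async point concrete, a minimal sketch of the fire-and-collect pattern (inside a function, net/http imported, URL is a placeholder):

    type result struct {
        resp *http.Response
        err  error
    }

    ch := make(chan result, 1)
    // Fire the request without blocking the caller.
    go func() {
        resp, err := http.Post("https://example.com/api", "application/json", nil)
        ch <- result{resp, err}
    }()

    // ... do other work ...

    r := <-ch // collect the result only when it's actually needed
    if r.err == nil {
        defer r.resp.Body.Close()
    }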

For anything other than soft real time or performance _critical_ workloads, I'd pick go over rust. I think today I'd still pick C++ over rust for perf critical work, but I don't think that will be the case in 18-24 months honestly.


And yet it's a productive language with significant adoption, go figure.

They made trade-offs and are conservative about refining the language; that cuts both ways but works well for a lot of people.

The Go team does seem to care about improving it and for many that use it, it keeps getting better. Perhaps it doesn't happen at the pace people want but they always have other options.


> And yet it's a productive language with significant adoption, go figure.

Perl was also a successful language with significant adoption. At least back then, we didn’t know any better.

In twenty years the industry will look back on golang as an avoidable mistake that hampered software development from maturing into an actual engineering discipline, for the false economy of making novice programmers quickly productive. I’m willing to put money on that belief, given sufficiently agreed upon definitions.


I can't agree with the definitions you're insinuating. To suggest that the creators of Go do not know what the difference is between "writing software" and doing "software engineering" is plainly wrong. Much of the language design was motivated by learnings within Google. What other "modern" language has more of a focus on software engineering, putting readability, maintainability and stability at the forefront? It's less about novice programmers and more about artificial barriers to entry.

Modern PLT and metaprogramming and more advanced type systems enable the creation of even more complex abstractions and concepts, which are even harder to understand or reason about, let alone maintain. This is the antithesis of whatever software engineering represents. Engineering is almost entirely about process. Wielding maximally expressive code is all science. You don't need to be a computer scientist to be a software engineer.


The simple existence of the Billion Dollar Mistake of nils would suggest that maybe Rob Pike et al are capable of getting it wrong.

> Much of the language design was motivated by learnings within Google.

And the main problem Google had at the time was a large pool of bright but green CS graduates who needed to be kept busy without breaking anything important until the company needed to tap into that pool for a bigger initiative.

> What other "modern" language has more of a focus on software engineering, putting readability, maintainability and stability at the forefront?

This presupposes that golang was designed for readability, maintainability, and stability, and I assert it was not.

We are literally responding to a linked post highlighting how golang engineers are still spending resources trying to avoid runtime nil panics. This was widely known and recognized as a mistake. It was avoidable. And here we are. This is far from the only counterexample to golang being designed for reliability, it’s just the easiest one to hit you over the head with.

Having worked on multiple large, production code bases in go, they are not particularly reliable nor readable. They are somewhat more brittle than other languages I’ve worked with as a rule. The lack of any real ability to actually abstract common components of problems means that details of problems end up needing to be visible to every layer of a solution up and down the stack. I rarely see a PR that doesn’t touch dozens of functions even for small fixes.

Ignoring individual examples, the literal one thing we actually have data on in software engineering is that fewer lines of code correlate with fewer bugs, and that fewer lines of code are easier to read and reason about.

And go makes absolutely indefensible decisions around things like error handling, tuple returns as second class citizens, and limited abstraction ability that inarguably lead to integer multiples more code to solve problems than ought to be necessary. Even if you generally like the model of programming that go presents, even if you think this is the overall right level of abstraction, these flagrant mistakes are in direct contradiction of the few learnings we actually have hard data for in this industry.

Speaking of data, I would love to see convincing data that golang programs are measurably more reliable than their counterparts in other languages.

Instead of ever just actually acknowledging these things as flaws, we are told that Rob Pike designed the language so it must be correct. And we are told that writing three lines of identical error handling around every one line of code is just Being Explicit and that looping over anything not an array or map is Too Much Abstraction and that the plus sign for anything but numbers is Very Confusing but an `add` function is somehow not, as if these are unassailable truths about software engineering.

Instead of actually solving problems around reliability, we’re back to running a dozen linters on every save/commit. And this can’t be part of the language, because Go Doesn’t Have Warnings. Except it does, they’re just provided by a bunch of independent maybe-maintained tools.

> enable the creation of even more complex abstractions and concepts

We’re already working on top of ten thousand and eight layers of abstraction hidden by HTTP and DNS and TLS and IP networking over Ethernet frames processed on machines running garbage-collected runtimes that live-translate code into actual code for a processor that translates that code to actual code it understands, managed by a kernel that convincingly pretends to be able to run thousands of programs at once and pretends to each program that it has access to exabytes of memory, but yeah the ten thousand and ninth layer of abstraction is a problem.

Or maybe the real problem is that the average programmer is terrible at writing good abstractions so we spend eons fighting fires as a result of our collective inability to actually engineer anything. And then we argue that actually it’s abstraction that’s wrong and consign ourselves to never learning how to write good ones. The next day we find a library that cleanly solves some problem we’re dealing with and conveniently forget that Abstractions Are Bad because that’s only something we believe when it’s convenient.

Yes, this is a rant. I am tired of the constant gaslighting from the golang community. It certainly didn't start with "generics are complicated and unnecessary and the language doesn't need them". I don't know why I'm surprised it hasn't stopped since then.


I doubt we can have a productive debate here as you're harping on issues I didn't even reference. Compared to peers, the language is stable, is readable and maintainable, full stop.

- The language has barely changed since inception

- most if not all behavior is localized and explicit, meaning changes can be made in isolation nearly anywhere, with confidence, without understanding the whole

- localized behavior means readable in isolation. There is no metaprogramming macro, no implicit conversion or typing, the context squarely resolves to the bounds containing the glyphs displayed by your text editor of choice.

The goal was not to boil the ocean, the goal was to be better for most purposes than C/C++, Java, Python. Clearly the language has seen some success there.

Yes, abstractions can be useful. Yes, the average engineer should probably be barred from creating abstractions. Go discourages abstractions and reaps some benefits just by doing so.

Go feels like a massive step in the right direction. It doesn't have to be perfect or even remotely perfect. It can still be good or even great. Let's not throw the baby out with the bath water.

I think for the most part I'm in agreement with you, philosophically, but I don't get the hyperfocus on this issue. Most languages you consider better I consider worse, let's leave it at that.


> Go feels like a massive step in the right direction

I agree. Go has its warts, but given the choice between using net/http in go, tomcat in java, or cpprestsdk in c++, I'll pick Go any day.

In practice:

- The toolchain is self contained, meaning install instructions don't start with "ensure you remove all traces of possibly conflicting toolchains"

- it entirely removes a class of discussion of "opinion" on style. Tabs or spaces? Import ordering? Alignment? Doesn't matter, use go fmt. It's built into the toolchain, everyone has it. Might it be slightly more optimal to do X? Sure, but there's no discussion here.

- it hits that sweet spot between python and C - compilation is wicked fast, little to no app startup time, and runtime is closer to C than it is to python.

- interfaces are great and allow for extensions of library types.

- it's readable, not overly terse. Compared to rust, e.g. [0], anyone who has any programming experience can probably figure out most of the syntax.

We've got a few internal services and things in Go, and we use them for onboarding. Most of my team have had PRs merged with bugfixes on their first day of work, even with no previous go experience. It lets us care about business logic from the get go.

[0] https://github.com/getsentry/symbolicator/blob/master/crates...


> In twenty years the industry will look back on golang as an avoidable mistake

And here is my opinion:

I think in 20 years, Go will still be a mainstream language. As will C and Python. As will Javascript, god help us all.

And while all these languages will still be very much workhorses of the industry, we will have the next-next-next iteration of "Languages that incorporate all that we have learned about programming language design over the last N decades". And they will still be in the same low-single-percentage-points of overall code produced as their predecessors, waiting for their turn to vanish into obscurity when the next-next-next-next iteration of that principle comes along.

And here is why:

Simple tools don't prevent good engineering, and complex tools don't ensure it. There are arches built in ancient Rome that are still standing TODAY. There are buildings built 10 years ago that are already crumbling.


> I think in 20 years, Go will still be a mainstream language. As will C and Python. As will Javascript, god help us all.

And yet the mainstream consensus is that C and JavaScript are terrible languages with deep design flaws. These weren't as obvious or avoidable at the time, but they're realities we live with because they're entrenched.

My assertion is that in twenty years, we’ll still be stuck with go but the honeymoon will be over and its proponents will finally be able to honestly accept and discuss its design flaws. Further, we’ll for the most part collectively accept that—unlike C and JavaScript—the worst of these flaws were own goals that could have and should have been avoided at the time. I further assert that there will never be another mainstream statically-typed language that makes the mistake of nil.

For that matter I think we’ll be stuck with Rust too. But I think the consensus will be that its flaws were a result of its programming model being somewhat novel and that it was a necessary step towards even better things, rather than a complete misstep.


> And yet the mainstream consensus is that C and JavaScript are terrible languages with deep design flaws.

Oh, they all have flaws. But whether these make them "terrible" is a matter of opinion. Because they are certainly all very much usable, useful and up to the tasks they were designed for, or they would have vanished a long time ago.

> and its proponents will finally be able to honestly accept and discuss its design flaws

We are already doing that.

But that doesn't mean we have to share anyone's opinion on what is or isn't a terrible language, or their opinions about what languages we should use.

And yes, that is all these are: opinions. The only factually terrible languages are the ones no one uses, and not even all languages that vanished into obscurity are there because people thought them to be "terrible".

Go does a lot of things very well, is very convenient to use, solves a lot of very real problems, and has a lot to offer that is important and useful to us. That's why we use it. Opinions about which languages are supposedly "terrible" and which are not, is not enough to change that.

A new language has to demonstrate to us that its demands on our time are worth it. It doesn't matter if it implements the newest findings about how PLs should be designed according to someone's opinion, it doesn't matter if it's the bee's knees and the shiniest new thing, it doesn't matter if it follows paradigm A and includes feature B... the only thing that matters is: "Are the advantages this new thing confers so important to us, that we have a net benefit from investing the time and effort to switch?"

If the answer to that question is "No", then coders won't switch, because they have no reason to do so. And to be very clear about something: the only person who can answer whether that switch is worth it for any given programmer, is that programmer.


Perl had less competition and also suffered from being more of a "write-only" language.

Lisp, Haskell, OCaml all likely tickle your PL purity needs, but they remain niche languages in the grand scheme of things. Does that make them bad?

I think Go will be the new Java (hopefully without the boilerplate/bloat). It's good enough to do the job in a lot of cases and plenty of problems will be solved with it in a satisfactory manner.

Language wars are only fun to engage with for sport, but it's silly to get upset about them. Most languages have value in different contexts and I believe the real value in this dialog is recognizing when and where a language works and to accept one's preferred choice may not always be "the one".


One answer would be to provide something like a GetPointer() method which, if the inner pointer is nil, creates a new struct of type T and returns a pointer to it.
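
Sketched, assuming a wrapper along the lines of the NonNil idea upthread (names invented):

    // Assumes: type NonNil[T any] struct { p *T }
    //
    // GetPointer never hands out nil: if the inner pointer is nil, it
    // lazily allocates a zero-valued T and returns a pointer to that.
    func (n *NonNil[T]) GetPointer() *T {
        if n.p == nil {
            n.p = new(T)
        }
        return n.p
    }
The trade-off is that it swaps a nil panic for silently operating on a zero value.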


Turns out worse isn’t better after all. Who could have guessed?


You can make such a type and it works well in practice.

First we define the type, hiding the pointer/non-existent value:

    type Optional[Value any] struct {
        value  Value
        exists bool
    }
Then we expose it through a method:

    func (o Optional[Value]) Get() (Value, bool) {
        return o.value, o.exists
    }
Accessing the value then has to look like this:

    if value, ok := optional.Get(); ok {
        // value is valid
    }
    // value is invalid
This forces us to handle the nil/optional code path.

Here's a full implementation I wrote a while back: https://gist.github.com/MawrBF2/0a60da26f66b82ee87b98b03336e...


Even if this were possible, it wouldn't be idiomatic.


Without intention to offend. It's Golang, the language that famously ignored over 30 years of progress in language development for the sake of simplicity.

What answer do you expect?


Hey, can you please not do programming language flamewar (or any flamewar) on HN? We're trying for something else here: https://news.ycombinator.com/newsguidelines.html.

Partly this is out of memory of the good/bad old newsgroup days where this kind of thing somehow worked ok, until it didn't, but it definitely doesn't work on the sort of forum that HN is. We'd like a better outcome than scorched earth for this place.


Wonderful job.

I am toying around with a similar project, with the same goal, and it is DIFFICULT.

I'll definitely get to learn from their implementation.


Very interesting work. I wonder what difficulties were encountered. Aliasing? Variable reassignment wrt short declaration shadowing?

Hopefully with time, when exploring union types and perhaps a limited form of generalized subtyping (currently it's only interface types) we'll be able to deal with nil for good.

Nil is useful, as long as correctly reined in.


> Nil is useful, as long as correctly reined in.

A good way to rein in behaviour is with types. If you need Nil in your domain, great! Give it type 'Nil'.


Yes that's part of it. It will probably require a nil type which is currently untyped nil when found in interfaces.

The untyped nil type is just not a first-class citizen nowadays.

But with type sets, we could probably have ways to track nillables at the type system level through type assertions.

And where nillables are required, such as map values, it would be feasible to create some from non-nillables then (interface{T | nil}).

But that's way ahead still.


It's really easy to access a field through a struct pointer without first checking that the pointer is non-nil. Would be interesting if go vet or test checked this somehow.
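
e.g. (made-up types):

    type Config struct{ Timeout int }

    type Server struct {
        cfg *Config // may legitimately be nil
    }

    func timeout(s *Server) int {
        return s.cfg.Timeout // compiles fine, panics at runtime if cfg is nil
    }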


Plenty of Go commentary in this thread, but can I just say I'm glad to have learned about nilness? I suffered through a few nil pointer dereferences after deploying, and having this analyser enabled in gopls (off by default, for me at least) is a nice change.

Tested via vim and looks good!


I tried it but got too many false positives to be useful.


I tried it and got a lot of false positives, but there wasn't so much output that I couldn't quickly pick out the interesting cases. This is very cool.


We'd be interested in the general characteristics of the most common ones you are seeing. If you have a chance to file a couple issues (and haven't done so yet): https://github.com/uber-go/nilaway/issues

We definitely have gotten some useful reports there already since the blog post!

We are aware of a number of sources of false positives and actively trying to drive them down (prioritizing the patterns that are common in our codebase, but very much interested in making the tool useful to others too!).

Some sources of false positives are fundamental (any non-trivial type system will forbid some programs which are otherwise safe in ways that can't be proven statically), others need complex in-development features for the tool to understand (e.g. contracts, such as "foo(...) returns nil iff its third argument is nil"), and some are just a matter of adding a library model or similar small change and we just haven't run into it ourselves.


In one case, it couldn’t tell that a slice couldn’t go out of bounds because I was iterating through it backwards instead of forwards. In another case, I had a helper method on a type to deal with initializing a named map type, but it couldn’t see that and thought the map was going to explode from being nil. Those are two false positives I remember off the top of my head. I can look it up again later.


Does a false positive mean:

- You're confident that a flagged value is actually non-Nil?

- A value was Nil but you prefer it that way?



cool... so what is the best linter / correctness checker at the moment?

I have some code that eventually core dumps and honestly I don't know what I'm doing wrong, and neither do any golang tools I've tried :(

maaaaaybe there's something that'll check that your code never closes a channel or always blocks after a specific order of events happens...


I don't think a pure Go program can core dump, unless you use Cgo (wrongly) or unsafe. It can only panic.


Races between goroutines can corrupt memory. E.g. manipulate a map from two goroutines and you can wreck its internal state.
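
A sketch of that failure mode (snippet, sync imported; the runtime usually catches it with "fatal error: concurrent map writes", but the check is best-effort):

    m := map[int]int{}
    var wg sync.WaitGroup
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for j := 0; j < 1000; j++ {
                m[j] = j // unsynchronized writes from two goroutines
            }
        }()
    }
    wg.Wait()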


Can this actually manifest? Even without the -race flag I think maps are a special case which will panic with a concurrent mutation error if access isn't synchronized.


> Can this actually manifest?

Yes. Per rsc (https://research.swtch.com/gorace)

> In the current Go implementations, though, there are two ways to break through these safety mechanisms. The first and more direct way is to use package unsafe, specifically unsafe.Pointer. The second, less direct way is to use a data race in a multithreaded program.

That races undermine memory safety in go has been used in CTFs: https://github.com/netanel01/ctf-writeups/blob/master/google...

These are not idle fancies, there are lots of ways to unwittingly get data races in go: https://www.uber.com/blog/data-race-patterns-in-go.


It's interesting that the 2010 article you linked suggests they might consider improving this, but nope: Go 1.0 and the Go people use today basically take the same attitude as C and C++, albeit with a small nuance.

In C and C++ SC/DRF (Sequentially Consistent if Data Race Free) turns into "All data races are Undefined Behaviour, game over, you lose". In Go SC/DRF turns into "All data races on complex types are Undefined Behaviour, game over, you lose". If you race e.g. a simple integer counter, it's damaged and you ought not to use it because it might be anything now, but Go isn't reduced to Undefined Behaviour immediately for this seemingly trivial mishap (whereas C and C++ are)


Go doesn't go out of its way to make weird things happen on UB like C compilers these days tend to, but once you corrupt the map data structure, weird things can happen. Trying to contain that explosion isn't necessarily "better", as it would make maps slower / take up more memory / etc.


> Go doesn't go out of its way to make weird things happen on UB like C compilers these days tend to

The reason C compilers “tend to go out of their way to make weird things happen” is that they optimise extremely aggressively, and optimisations are predicated upon the code being valid (not having UB).

Go barely optimises at all, and does not have that many UBs which could send the optimiser in a frenzy.


Kind of ironic that the raison d'être of Go is a memory-safe language for concurrent programming, but you can easily footgun yourself into doing something memory-unsafe using concurrency...


Go generally doesn't use shared memory for concurrency, or at least it's been considered an anti-pattern: https://go.dev/blog/codelab-share


That is dishonest.

Implicitly shared memory is literally the default behaviour of the language, and you have to be careful to keep that controlled or contained.

Pretty much as in every other shared-memory concurrency language.

The quip about sharing memory by communicating is cute, but it's just that: the language does not encourage, let alone enforce, it.

In fact it went out of its way to remove some opportunities e.g. because `go` is a statement there is no handle which could communicate the termination and result of routines.


Yes, but at the same time shared memory concurrency is not considered an unsafe usage of Go either...


This is a CTF challenge where attackers control the code that's running, isn't it?


Another example: thread A toggles an interface variable between two types, thread B calls a method on it. You can get the method of type X called with a receiver of type Y.
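
A sketch of the racy pattern (it won't reliably reproduce the type confusion, but go run -race flags it):

    type A struct{ n int }
    type B struct{ s string }

    func (a A) Describe() string { return "A" }
    func (b B) Describe() string { return b.s }

    type Describer interface{ Describe() string }

    func main() {
        var d Describer = A{}
        go func() {
            for i := 0; i < 1000000; i++ {
                if i%2 == 0 {
                    d = A{i} // an interface value is two words (type, data)...
                } else {
                    d = B{"hi"}
                }
            }
        }()
        for i := 0; i < 1000000; i++ {
            _ = d.Describe() // ...so a racy read can pair A's method with B's data
        }
    }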


I've had that, and it did panic.


There are ways a Go program can hit a fatal error: running out of heap or stack, corrupting variables with racing writes, deadlocking, misusing reflect or unsafe, and so on.


I’ve seen it happen before because the stdlib actually directly just makes POSIX syscalls for a lot of things by default instead of using the native Go implementations and so you’re implicitly reliant on C code


I'm not sure if that was the best example to showcase NilAway. I understand there's a lot of context omitted to focus on NilAway's impact, but why is foo returning a channel to bar if bar is just going to block on it anyway? Why not just return a *U? If foo's function signature was func foo() (*U, error) {}, this wouldn't be a problem to begin with.


Not the point.



I have been thinking about this problem for a long time as well.

But I think that focusing on nils is a wrong analysis. The problem is the default zero-values dogma, and that is not going to change anytime soon.

Sometimes you also need a legitimate empty string or 0 integer, but the language cannot distinguish it from the absence of value.

In my codebase, I was able to improve the readability of those cases a lot by using mo.Option, but that has a readability cost and does not offer the same guarantees as a compiler would. The positive side is that I get a panic and a clear stack trace whenever I try to read an absent value, which is better than nothing, but still not as good as catching those cases at compile time.
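
Roughly, it looks like this (sketch, assuming samber/mo's Some/None/Get/MustGet API; imports elided):

    type User struct {
        // None distinguishes "no nickname set" from a legitimate empty string.
        Nickname mo.Option[string]
    }

    u := User{Nickname: mo.None[string]()}

    if nick, ok := u.Nickname.Get(); ok {
        fmt.Println("nickname:", nick)
    } else {
        fmt.Println("nickname absent")
    }

    // u.Nickname.MustGet() would panic here with a stack trace,
    // which is the runtime-only guarantee described above.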

No amount of lint checkers (however smart) will work around the fact that the language cannot currently express those constraints. And I don't see it evolving past its current dogmas, unfortunately, unless someone forks it or creates something like TypeScript for Go.


It's not a dogma, it's a breaking change to the language. Removing default zero-values is effectively a different language. Literally none of my code over the past several years (which otherwise all still works perfectly as-is) would work.

The Go team is very careful to avoid breaking changes (cue all the usual Well Actually comments regarding breaking changes that affected exactly zero code bases) and rightfully so. Their reputation as a stable foundation to build large projects upon has been key to the success and growth of the language and its ecosystem.

I have about a million and one other issues I'd like to see resolved first that don't involve breaking changes. It's a known pain point, the core maintainers acknowledge it, but suggestions to fundamentally derail the entire project are ludicrous.

Focusing on nils is fine. NilAway is fine. It's a perfectly reasonable approach and adds a lot of value. This solves a real problem in real code bases today. There is no universe wherein forking to create a new language creates remotely equivalent value.


I didn't say that we should remove default values; that is a wrong interpretation of my message.

For example we could have a new non-nilable pointer type (that would not have any default value), or an optional monad natively in the language (or any other thing in-between, there are many possibilities). That would allow the compiler to statically report about missing checks, without breaking backward compatibility.

But we all know that it's not going to happen soon because while not breaking any existing code, it goes against the "everything has a zero-value" dogma. That was the meaning of my message.


You are wrong regardless: there is no such dogma. There are numerous ongoing proposals discussing how to accomplish this. You're welcome to contribute. It took me a minute to find these proposals, as examples:

https://github.com/golang/go/issues/57644

https://github.com/golang/go/issues/19412

I interpret your comments as propagating FUD in bad faith.


Sum types are a different issue (even if somewhat related) than what I am talking about.

Even if sum types were introduced, it would not help with nil values because - as you said - backward compatibility won't be broken.

If I had the luxury of spare time to contribute, I would probably spend it switching away to another, richer language instead, because it would be cheaper, solve more of my problems, and do so with a higher degree of certainty. And that's not even mentioning the attitude and toxicity of the community compared to most other languages when facing criticism and ideas.


What toxicity? This doesn't track for me.

> backward compatibility won't be broken.

I'm not sure what your point is anymore. You are clearly divested. Your assessment is totally unfounded. What are you trying to accomplish here?


Personal attacks are not welcome and against HN guidelines.

You can disagree with me and criticize my points, but I do not feel like it's being done in good faith or leading to anything constructive.

So I'm going to stop the discussion here and let the readers judge by themselves what I meant, and what to conclude about it.


I mean no ill intent, I genuinely have no idea what you're arguing towards. You repeatedly claim dogma and even toxicity (!) where I can find no evidence of either -- that certainly doesn't feel like good faith to me. It feels like FUD.


> Nil panics are found to be an especially pervasive form of runtime errors in Go programs. Uber’s Go monorepo is no exception to this, and has witnessed several runtime errors in production because of nil panics, with effects ranging from incorrect program behavior to app outages, affecting Uber customers.

Insane that Go had decades of programming mistakes to learn from but it chose this path.

Anyway, at least Uber is out there putting out solid bandaids. Their equivalent for Java is definitely a must-have for any project.


> Insane that Go had decades of programming mistakes to learn from but it chose this path.

Yup, every time I write some Go I feel like it's been made in a vacuum, ignoring decades of programming language research. null/nil is a solved problem by languages with sum types like haskell and rust, or with quasi-sums like zig. It always feels like a regression when switching from rust to go.

Kudos to Uber for the tool, it looks amazing!


> ignoring decades of programming language research

True, and because of this, the language can be learned over a weekend or during onboarding, new hires can rapidly digest codebases and be productive for the company, code is straightforward and easy to read, libraries can be quickly forked and adapted to suit project needs, and working in large teams on the same project is a lot easier than in many other languages; the compiler is blazing fast, and its concurrency model is probably the most convenient I have ever seen.

Or to put this in fewer words: Go trades "being modern" for amazing productivity.

> It always feels like a regression when switching from rust to go.

It really does, and that's what I love about Go. Don't get me wrong I like Rust. I like what it tries to do. But I also love the simplicity, and sheer productiveness of Go. If I have to deal with the odd nil-based error here and there, I consider that a small price to pay.

And judging by the absolute success Go has (measured by contributions to Github), many many many many many developers agree with me on this.


It's funny how I always hear the point about new hires for Go. My team is a Rust shop at $DAYJOB that I created basically from scratch, so I had to onboard every new hire on the codebase. It's amazing how confident they are due to the compiler having their back, and how confident I am their code won't blow up that much in prod.

> code is straightforward and easy to read

I have to disagree. I don't want to read 3 lines out of four that are exactly the same. I don't want to read the boilerplate. I don't want to read yet another abs or array_contains reimplementation. Yes it's technically easy to read, but the actual business logic is buried under so much noise that it really hinders my capacity to digest it.

> the compiler is blazing fast

much agreed, that is my #1 pain point in rust (but it's getting better!)

> and its concurrency model is probably the most convenient I have ever seen

this so much. this is what I hate the most with go: it pioneered a concurrency model and made it available to the masses, but it has too many footguns imho. this is no surprise other languages picked channels as a first class citizen in their stdlib or core language.

> Go trades "being-modern" for amazing productivity.

I don't think those two are incompatible. If we take the specific point of the article, which is nil pointers, Go would only have to import the sum types concept to have Option and maybe Result as a bonus. Would this translate to a loss of productivity? I don't think so. (oh and sum types hardly are a modern concept)

Also, there may be a false sense of productivity. Go is verbose, and you write a lot. Sure if you spend most of your time typing then yes you are productive. But is it high-value productivity? Some more concise languages leave you more time to think about what you are writing and to write something correct. The feeling of productivity is not there because you are not actively writing code most of the time. IIRC plan9 makes heavy use of the mouse, and people feel less productive compared to a terminal because they are not actively typing. They are not active all the time.


> Also, there may be a false sense of productivity. Go is verbose, and you write a lot. Sure if you spend most of your time typing then yes you are productive. But is it high-value productivity? Some more concise languages leave you more time to think about what you are writing and to write something correct. The feeling of productivity is not there because you are not actively writing code most of the time. IIRC plan9 makes heavy use of the mouse, and people feel less productive compared to a terminal because they are not actively typing. They are not active all the time.

This is my sense. "False sense of productivity" is an accurate statement - I've also found that it seems to be for a very specific (and not necessarily useful) definition of "productive", such as LOC per day.

It's not as bad as dynamic languages like Python, but very frequently Go codebases feel brittle, like any change I make might bring down the whole house of cards at runtime.


> It's funny how I always hear the point about new hires for Go. My team is a Rust shop at $DAYJOB that I created basically from scratch, so I had to onboard every new hire on the codebase. It's amazing how confident they are due to the compiler having their back, and how confident I am their code won't blow up that much in prod.

Same. Started a company, onboarded just about everyone to Rust. It went very well.


>It's amazing how confident they are due to the compiler having their back, and how confident I am their code won't blow up that much in prod.

I get what you're saying, and I'm glad you are having such a good experience with it. Disclosure, I am not talking down to any language here...in fact I actually like Rust as a language, even though I don't use it professionally.

I am just saying that Go is incredibly easy to learn, and I don't think there are many people who disagree on this point, proponent of Go or not.

> I have to disagree.

We'll have to agree to disagree then :-) Yes, the code is verbose, but it's not really noise in my opinion. Noise is something like what happens in enterprise Java, where we have superfluous abstractions heaped on top of one another. Noise doesn't add to the program. The verbose error handling of Go, and the fact that it leaves out a lot of the magic from other languages, doesn't make it noisy to me.

> I don't think those two are incompatible.

Neither do I, but that's the path Go has chosen. It may also have been poorly worded on my part. A better way of putting it: Go doesn't subscribe to the "add as much as possible" - mode of language development.

> But is it high-value productivity?

Writing the verbose parts of go, like error checking, isn't time consuming, because it's very simple...in fact, these days I leave a lot of that to the LLM integration of my editor :-)

Is it high value? Yes, I think so, because I don't measure productivity by number of lines of code, I measure it by features shipped and issues solved. And that's where Go's ... how do I say this ... obviousness really puts the language into the spotlight for me.


I honestly have started to wonder if the popularity of languages like Go and JavaScript is due to the lack of features in the base language. JavaScript in particular has had an incredible amount of effort invested in creating fairly limited, scattershot, and duplicative support for features that are "just part of the language" in Kotlin, or Rust, or even Java. It makes a really rich field for people who are interested in compilers, but did all this really make Uber a better company? Would all that effort have been better spent solving problems core to their business?

At my employer we have a pattern of promoting people who have done things like write a proprietary application gateway. The dev got a couple promotions and moved on to another company and we got stuck maintaining a proprietary application gateway with a terribly messy configuration and poor observability.


> it pioneered a concurrency model and made it available to the masses

Isn't it basically just what Cilk did, but with fewer features?


I think that it’s not just about something being possible with a tool, but people actually using it.

Go, for whatever reasons, has gotten people using it.


This is the core of the problem. Of course you can learn the language in a weekend, but you're bound to make the same mistakes developers have been doing for decades.

This may be ok, as you say, if you allow errors here and there because you are fine dealing with those problems. But at the other end, it may be a user that is affected by the error. Which may be ok as well, but why should it be? We lament the quality of software all the time.

Compare this to other engineering fields: unless you study the knowledge of those who came before you may not even be allowed to practice in the field. I would not want to use a bridge built by someone who learned bridge building in a weekend.

Software is different though, it's rarely a matter of life or death. Given that, maybe it's ok to not have the highest quality in mind, because the benefit of productivity far outweighs the alternative.

I'm torn.


Go is just making a certain set of tradeoffs. If you try to fix all the "mistakes developers have been doing for decades", you get Rust. And considering that Rust is already Rust, there is not much point in trying to make Go another Rust.

The line has to be drawn somewhere. I think everyone has certain things they'd put on the other side of that line, and strict nils are probably at the top of the list for many, but overall it's good that the Go team is stubborn about not adding new stuff. If they weren't, maybe there would be better nil handling, better error handling, etc. but compiles would also get slower and the potential for over-engineering, which Go now discourages quite effectively, would increase. At a high level, keeping Go a simple, pragmatic language with a fast compiler is more important than any particular language feature.


The argument is that Go is making a _wrong_ set of trade-offs.

It was designed, specifically, as per Rob Pike, for _bad_ developers. Developers who couldn't be productive at Google because they weren't properly taught at unis [0].

Then it caught momentum and then here we are, discussing a bad language designed for bad developers as if there is nothing better we can do with our lives.

[0] https://news.ycombinator.com/item?id=16143918


If you think developers at Google are bad and weren't taught fundamentals then we live in different universes.

Pike's point is that peak PLT is too lofty to be productive or even useful for folks who are actually technically competent and literate relative to the rest of the industry. No one will get anything done if they're spending all their time teasing an advanced type system into inferring the required program.


How does it apply to this specific issue? What is so lofty and unproductive in proper null handling?


I did not suggest it applies to this specific issue, I replied to a comment containing inflammatory remarks, but I'll bite: To answer that you need to first produce the minimum change to the language that provides this functionality.

A solution might be optionals, which might require sum types, which might require generics (which Go just learned), which most definitely requires a more complex type system, which almost certainly involves longer compiler times.

Is that all worth it? I don't know. The Go team certainly didn't think so.

Languages that I'm aware of that do solve this are Scala, Rust, Kotlin to some extent, Haskell... languages which do not have a reputation of being stable, easy to learn, easy to read and understand, compile quickly, etc.


Thanks, I understand it was more a general inflammatory conversation; that's why I didn't like it and was wondering whether it could be grounded in this specific topic. Although I mainly had in mind what kind of "teasing an advanced type system" would be needed in this case to cause a loss of productivity, slower compilation is also relevant.

Though the reputation of those other languages mainly stems from their embrace of way more advanced concepts rather than from null handling via optionals, I think that this specific concept makes it easier to learn/read/understand (though not compile quicker).


> ...languages which do not have a reputation of being stable, easy to learn, easy to read and understand, compile quickly, etc.

Kotlin is definitely the odd man out in that list.


Nothing, the question is whether it is important enough to be included in the language.

Just because person A thinks this is hugely important, doesn't mean person B has to agree, or that B is a bad developer.


> It was designed, specifically, as per Rob Pike, for _bad_ developers.

Mind showing us the source for that?

Go wasn't made for incompetent developers. I'm fairly certain that people who land a job as devs at Google are pretty competent.

Go was made to facilitate rapid onboarding, easy digestion of large codebases, and working efficiently in large teams where people are guaranteed to have widely different educational backgrounds, experiences and ideas about programming.

That's why the language has to be simple, obvious, and be focused on readability. That's also why Go is strongly opinionated.


Here you go: https://www.youtube.com/watch?v=uwajp0g-bY4

I think Go could still work without completely neutering the type system.

Algebraic Data Types and pattern matching are not difficult... python already has them.


I don't think anyone is suggesting that Go should be like Rust. It's too late for that. We're suggesting that people should just use Rust (or Haskell, or F#, or any other robust functional programming language) instead.


> We're suggesting that people should just use Rust (or Haskell, or F#, or any other robust functional programming language) instead.

And how well did that work out for Haskell?

https://gist.github.com/graninas/22ab535d2913311e47a742c70f1...

Just because one person thinks Functional Programming is the right way to do it, doesn't mean another person has to agree. The same goes for every paradigm, and language feature under the sun. Different people want different things, different projects have different needs.

No single language gets everything right, no single paradigm solves every problem, no feature is a "must have" in every language. A functional approach might be great or a productivity killer depending on the use case. A GC may be the best thing in the world or a performance nightmare. OOP may be a really good idea or a path to unmaintainable crap depending on the implementation.

There are no silver bullets.

The only thing that is ABSOLUTELY certain: When people get told "Our way is better, you should use our way", despite the fact that there are no silver bullets, people will resist. And that resistance can lead to languages vanishing into obscurity.


> And how well did that work out for Haskell?

You know this post is speculative fiction, right? It's actually about what could kill Haskell, not what could kill Rust?


> It's actually about what could kill Haskell, not what could kill Rust?

Here is the articles title: "What killed Haskell, could kill Rust, too"

So no, it's not about what could kill Haskell. In 2022, ~0.3% of all code pushed to github was Haskell. To put that number into perspective: vimscript was ~0.25%


Yes, I can read the title thanks. The article is about Haskell. The first sentence makes it clear it's speculative fiction presented as if from the year 2030. The article draws a hypothetical analogy between what could happen to Rust in 2030 and what is contemporaneously happening to Haskell.


Yes, I read the article, thanks.

Yes, it is speculative fiction, and for good reason: the analogies are pretty clear.


And to return to the topic at hand, despite that article pointing to some weakness in the Haskell community, Haskell is thriving, so "We're suggesting that people should just use Rust (or Haskell, or F#, or any other robust functional programming language) instead" seems like reasonable advice to me.


> Haskell is thriving

https://news.ycombinator.com/item?id=38360177

So, according to what metric is Haskell "thriving"?


I'm not sure what you mean. Are you saying that something that occupies 0.3% of an ecosystem can't be thriving? "Thriving" is not the same concept as "widely used" or "popular"!


> And judging by the absolute success Go has (measured by contributions to Github), many many many many many developers agree with me on this.

Yeah, I truly hate this field


PHP was once an extremely popular language too...


Famously something you could learn in a weekend. It allowed you to start being productive right away, even if you were writing terrible insecure code!


> And judging by the absolute success Go has (measured by contributions to Github), many many many many many developers agree with me on this

You can't assume that other devs' opinions / preferences are identical to yours just because they use the same language; there are other important factors in play (e.g., if your company is using Go, then you'd be more productive in it and more likely to choose to contribute in it, even if Go is less productive as a language)


It could have done both though. It could have had explicitly nullable types like Kotlin/C#, or sum types like Zig/Rust/Swift. That wouldn't make the language more complex to learn.


By definition, every bit added to a language makes it more complex to learn.

Sure, it could be done. Lots of things could be done to Go. The people who invented it are among the most brilliant computer scientists alive. It's a pretty sure bet that they know about, and in great detail, every single thing people complain Go doesn't have.

So every thing that is "missing" from Go isn't in it for a reason.

"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." -- Antoine de Saint-Exupéry, Airman's Odyssey


Not at all. Otherwise brainfuck would be the simplest language to learn. How do you currently represent a type that is A or B in Go? You have to use an interface. That’s much more complex than using a sum type would be.
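To make that concrete (a hedged sketch with made-up Circle/Square types, just showing the conventional interface encoding): the compiler never learns that the set of cases is closed, so every type switch needs a default that can still be hit by nil or by an unexpected implementation:

    package shapes

    // Shape is "Circle or Square" emulated with an interface; the
    // compiler never learns that the set of implementations is closed.
    type Shape interface{ isShape() }

    type Circle struct{ Radius float64 }
    type Square struct{ Side float64 }

    func (Circle) isShape() {}
    func (Square) isShape() {}

    func Area(s Shape) float64 {
        switch v := s.(type) {
        case Circle:
            return 3.14159 * v.Radius * v.Radius
        case Square:
            return v.Side * v.Side
        default:
            // Unreachable in theory, but the compiler can't prove it:
            // s could be nil or some unexpected implementation.
            panic("unhandled shape")
        }
    }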


Well, Brainfuck is simple to learn. The entire specification fits comfortably on a single page. Simple to learn doesn't automatically imply simple to use for any given purpose. The same is true for Go.

> You have to use an interface. That’s much more complex than using a sum type would be.

More complex how and by what metric?


> The entire specification fits comfortably on a single page

But to understand the specification and how it can be used to actually program, you need to have at least a cursory understanding of Turing machines and related theory, which isn't necessary to learn Java or Python.

Under your definition, the conceptually simplest language is something like SUBLEQ (the specification is only a single line!), but in this case, being able to implement the language and learning the language aren't the same thing. Learning a language generally means learning to use it for real purposes.


It's definitely not as simple as "More features = harder to learn".

Removing footguns (nulls are a footgun, race-able concurrent APIs are a footgun) can make a language easier to learn even though this may introduce new features (in this case sum types) to solve the problem.


It is not like Oberon, Plan 9, Inferno and Limbo were such huge commercial successes.

Had those brilliant computer scientists not been employed at Google, it would have been another Oberon or Limbo.


Yes, having a good backing by a huge entity is a bonus.

But it's not a guarantee for success either. Google+ anyone?


Indeed, yet it helps a lot, as proven by all wannabe C and C++ successor languages.

The only ones that are actually taking off have the backing of major corps, even if that backing is only putting money into the project.


> True, and because of this...

This is a false dichotomy. One does not imply the other.

Go is also not a simple language. It is deceptively difficult with _many_ footguns that could have easily been avoided had it not ignored decades of basic programming language design.

Many things also aren't straightforward or intuitive. For instance, this great list of issues for beginners: http://golang50shad.es/


I'm sorry but nearly all of them are along the lines of "I came from language X and in X we did it this way, but Go's syntax is different". That's not a footgun.

You know what's a footgun? Uncaught exceptions popping up in places far away from where they were created at which point you have very little context to deal with it robustly. Use after frees. FactoryFactoryFactories.


I don't have too much of an opinion on either side here, but as a developer who works full time in Go (and has for >6 years), all these things exist in Go.

Uncaught exceptions -> panics, like what this nil catcher is aiming to solve

Places far away -> easy goroutine creation with no origin tracking makes errors appear sometimes very far away from source

Use after free -> close after close

FactoryFactoryFactories -> loads of BuilderFunc.WithSomething

Lots of other pains I could add that are genuinely novel to Go also, but funny that for everything you mentioned my head went “yep, just called X”


> I'm sorry but nearly all of them are along the lines of "I came from language X and in X we did it this way, but Go's syntax is different". That's not a footgun.

You're right, I meant to link that in reference to how Go can be difficult to learn despite how simple it seems. Not sure how I a sentence.

The overview of that site explains its purpose/necessity quite well. Some things are footguns, many are just confusing time-wasters. Nevertheless, they are frustrating and hamper the learning process.


> Nevertheless, they are frustrating and hamper the learning process

But that is the learning process. What else is there to learn in a language if the syntax doesn't count? They're all Turing complete and all of them can do everything. All we need to do is learn the exact magic words.


I never said otherwise. My point is that Go is far harder to learn than they're implying. It certainly can't be learned over the weekend — well, maybe it can be, but the code you end up writing will inevitably be full of resource leaks, panics, nil pointer issues, improperly handled errors, etc. You may be able to put together some basic logic, but you are far from understanding the language.

I don't think it's honest to parade Go as a language that's the paragon of simplicity that's easy to learn when that's simply not true. I also don't think it's honest for people to argue that addressing any of Go's countless warts would somehow make the language more complex or harder to learn.


I agree that it's very unlikely for someone to learn Go in a week and start writing flawless code.

But Go's real strength is in its readability, not writability. I think it's very much possible to learn Go in a week, then read clean Go code like the standard library and understand exactly what's going on. At least that's my interpretation of what it means for a new grad to be productive in Go in less than a week. Nobody is expecting someone new to write production-grade libraries with intricate concurrency bits in their first week, but they're already productive if they can read and understand it.

As a rule of thumb we spend 10x more time reading code than we do writing it (code reviews, debugging, refactors). So why not optimise for it?


As a fairly experienced engineer, I have to say that reading go code is what gives me the most pause. I find the amount of visual noise and lack of useful abstractions makes it so that to be efficient at reading code, I have to trust that the loop or the error handling code is doing what I expect. The issue with go is that the primitive operations are written `for i := 0; i < 10; i++` instead of `map` and `x, err := foo(); if err != nil {...}; bar(x)` instead of `y := bar(foo()?)`, which requires either presuming, or spending the mental energy ensuring the primitive was written correctly every time it is used.

I generally do the second, because doing the first is extremely tiring when reviewing code, but I dislike it immensely.


> This is a false dichotomy. One does not imply the other.

No it isn't, and yes it does. By definition, the more features I add to something, the more complex it becomes. So yes, Go achieves its simplicity precisely by leaving out features.

> this great list of issues

I just picked three examples at random:

"Sending to an Unbuffered Channel Returns As Soon As the Target Receiver Is Ready"

"Send and receive operations on a nil channel block forver."

"Many languages have increment and decrement operators. Unlike other languages, Go doesn't support the prefix version of the operations."

All of these are behavior and operators that are documented in the language spec. So how is any of these a "footgun"?


> No it isn't, and yes it does. By definition, the more features I add to something, the more complex it becomes. So yes, Go achieves its simplicity precisely by leaving out features.

More complex for whom? Not having generics made the compiler simple, but having to copy and paste and maintain identical implementations of a function (or use interface) adds more complexity for users.

Similarly, adding a better default HTTP client arguably makes Go more complex, but the "simple" approach results in lots of complexity and frustration for users.

> All of these are behavior and operators that are documented in the language spec. So how is any of these a "footgun"?

Perhaps I could have been clearer. I didn't mean that the entire list was of footguns, just that there are lots of confusing and unintuitive things beginners need to learn.

Some actual footguns off the top of my head:

- using Defer in a loop (see the sketch after this list)

- having to redeclare variables in a loop

- having to manually close the body of a http response even if you don't need it
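For the first item, a minimal sketch of why defer in a loop bites (the deferred calls only run when the surrounding function returns, not at the end of each iteration):

    package fileutil

    import "os"

    // processAll shows the footgun: each deferred Close runs only when
    // processAll returns, not at the end of its loop iteration, so every
    // file stays open until the whole loop has finished.
    func processAll(paths []string) error {
        for _, p := range paths {
            f, err := os.Open(p)
            if err != nil {
                return err
            }
            defer f.Close() // piles up; wrap the loop body in a helper instead
            // ... read from f ...
        }
        return nil
    }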


Not only that but those behaviors are patently not footguns or unreasonable in any way


> "Send and receive operations on a nil channel block forver."

I literally cannot imagine a worse behavior than my program blocking forever. Of all the things my program can do, short of allowing remote code execution, blocking is literally the worst one I can think of.


Goroutines blocking is normal procedure in a Go program. It also isn't a problem, unless my code is broken and allows all Goroutines to block simultaneously...in which case the runtime automatically terminates the program with a deadlock panic.

The behavior of nil channels always blocking is on purpose, and tremendously useful in functions where we receive from multiple channels via `select`. It allows the function to easily manipulate its own receive behavior simply by setting the local reference to the channel to `nil`.

Since `selects` can also have `default`, the resulting functions don't have to block either.
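A small sketch of that pattern (hedged illustration, nothing Uber-specific): once a channel is drained you set it to nil, and because a nil channel blocks forever its select case simply stops firing:

    package chanutil

    // merge forwards values from a and b to out until both are closed.
    // When one channel is drained it is set to nil; since receiving from
    // a nil channel blocks forever, that case simply stops being selected
    // instead of spinning on zero values from a closed channel.
    func merge(a, b <-chan int, out chan<- int) {
        for a != nil || b != nil {
            select {
            case v, ok := <-a:
                if !ok {
                    a = nil // disable this case
                    continue
                }
                out <- v
            case v, ok := <-b:
                if !ok {
                    b = nil // disable this case
                    continue
                }
                out <- v
            }
        }
        close(out)
    }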


> No it isn't, and yes it does. By definition, the more features I add to something, the more complex it becomes.

Yes, and go opts to include features that unnecessarily increase complexity in this manner, such as nil values.

> All of these are behavior and operators that are documented in the language spec. So how is any of these a "footgun"?

By this logic, no language with a spec can have footguns. C and C++, notorious for their footguns, both specify their behavior in the spec, so do they not have any footguns?


We've found that Go projects beyond something like 50k LoC become impossible to maintain.

“Optimising your notation to not confuse people in the first 10 minutes of seeing it but to hinder readability ever after is a really bad mistake.” — David MacIver

Go is a simple language that anyone can pick up in a weekend, but productivity plateaus once you’re doing anything that requires hard constraints or complex systems (the same is true for JS, Python, and other scripting languages).


As someone who is ~3 months out of uni, I don't really understand this amount of concern towards "new hires". I've interned at a place where I was thrown head-first into an industrial-grade Java codebase. The initial few weeks were rough. There were complex class hierarchies, patterns that weren't documented but had to be followed, interfaces from 10 different packages, really hard-to-grok names, magical DI, Lombok, and complex mock testing, all while 10 different linters and code coverage tools yell at you. But I cut through the complexity and finished my task in the end, not through any particular skill but by being stubborn and willing to learn.

None of the Google fresh hires I know personally are stupid. They are talented people who could be just as productive in a C++ or Java codebase. Maybe even better when you have features like Java's streams or C++ templates to throw at non-trivial problems. They might need more time, but it's something easily budgeted for. If new hires have to be productive from the first day, that's a problem the company has created and not the employee. If other languages have too many ways to do something, just enforce only using a few of them, teams have and continue to do that.

I use Golang in my current job. The library ecosystem seems fine. But even as a "new hire", the language frustrates me sometimes. Go's concurrency is "easy", but has a minefield of problems. Just off the top of my head, for-loop semantics [which to Golang's credit, is being fixed but it is absolutely a breaking change], just being able to copy a mutex by accident. These are bugs I've written and not had fun tracking down. In a year I'll have all these footguns memorized, but I could also have spent a year getting better at any other language. Even at my experience level, the Rust compiler gives me enough grief for me to know that when it's happy, whatever I've written will work. Nothing about Golang gives me that confidence.
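For anyone who hasn't hit the loop-variable footgun mentioned above, a minimal reproduction under pre-Go-1.22 semantics (where the loop variable is shared across iterations):

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        for _, name := range []string{"a", "b", "c"} {
            wg.Add(1)
            go func() {
                defer wg.Done()
                // Before Go 1.22 every goroutine closes over the same
                // 'name' variable, so this typically prints "c" three times.
                fmt.Println(name)
            }()
        }
        wg.Wait()
    }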


> There were complex class hierarchies, patterns that weren't documented but had to be followed, interfaces from 10 different packages, really hard to grok names, magical DI, Lombok, complex mock testing all while 10 different linters and code coverage tools yell at you.

And this is exactly what Go avoids, in my opinion and experience.

Unless someone shoehorns Go into an Enterprise Java style (which, sadly, is possible, and sometimes done), the problems you listed with the Java codebase either don't exist in Go, or are orders of magnitude less painful to deal with, even in large Go codebases. Plus, the toolchain is pretty obvious, because most of it ships with the language.

And while my argument mentions new hires specifically, because the impact is most visible with them, this is just as important for mid and senior level developers; yes, Go is sometimes more verbose than its contemporaries (although enterprise Java and lots of C++ code still run circles around Go in that regard), but it is also obvious. There is little magic, there is little action at a distance, and the opinionated style of the language discourages superfluous abstractions.

I have used quite a few languages so far in my career. Go is the first where I was able to comfortably read and understand std library code within the first week of learning the language.

> Just off the top of my head, for-loop semantics [which to Golang's credit, is being fixed but it is absolutely a breaking change],

It is technically a breaking change. Practically, it isn't, because there simply are no examples of production code in the wild that rely on this unintuitive behavior (As mentioned multiple times in the discussion on Go's issue tracker, the dev team did their research on that), and code that implements the (very easy) fix, will continue to work after the upcoming change.

> Nothing about Golang gives me that confidence.

My experience is different. I know that most of the problems the Rust compiler complains about will be handled by the fact that Go is GC'ed, and most of the rest I avoid by relying on CSP as my concurrency model (Can't accidentally copy a mutex if there's no mutex ;-) )


If a tool is any good I’m going to be using it for years. I’d much rather spend more time studying reusable ways the language can help me solve a problem with less code; that’s what’s productive. A quickly learned language is like a nearly empty toolbox.


From witnessing so many HN flamefests, between Go and Rust, it seems there are people who are genuinely more productive with Go than Rust, and vice versa. Not saying people usually lie when they claim to be more productive with one, but rather that their judgement is very subjective and not scientific. I do wish Go didn't have the data race bugs, though. It greatly weakens Go's "fearless concurrency" selling point for me. To me, Rust doesn't always make concurrency "fearless" in terms of complexity, but at least I'm not fearing actual memory safety bugs in a random library. There's unsafe, but I think the design tradeoff there is quite reasonable and workable.


Dart made the same nullable mistake but actually managed to fix it, which is quite impressive.

Go is just obstinately living in the 90s. I guess that's not really a surprise. It's pretty much C but with great tooling.


Dart has a great write up on how they fixed the null problem by adding non-nullable types: https://dart.dev/null-safety/understanding-null-safety


Several languages have "fixed the null problem" after the fact, though usually as an opt-in, e.g. TypeScript, C#.

The problem with Go (for this specific issue; there are lots of problems with Go) is that they have wedded themselves extremely strongly to zero-defaulting; it's absolutely ubiquitous and considered a virtue.

But without null you can't 0-init a pointer, so it's incompatible with null-safety.

I think C# pretty much left the idea of everything having a default behind when they decided to fix nulls. Though obviously the better alternative is to have opt-in defaulting instead.
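To make the zero-defaulting point concrete (a hedged illustration with a made-up User type): every Go type must have a zero value, and for a pointer field the only candidate is nil:

    package main

    // User's zero value must be constructible, so Manager necessarily
    // defaults to nil; there is no way to declare "pointer that is
    // never nil", because such a type would have no zero value.
    type User struct {
        Name    string
        Manager *User
    }

    func main() {
        var u User         // legal: every field gets its zero value
        _ = u.Manager.Name // compiles fine, panics with a nil dereference
    }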


This is a good point. Null pointers are the billion-dollar mistake, but the real billion-dollar mistake is having "zero values" in your language. In addition to the problems with null pointers, zero values make loose constructor semantics like C# and Java tempting, where objects can exist in a not-fully-initialized state, leading to lots of room for confusing bugs. Without zero values to fall back on as a crutch, the language design is forced to tighten that up so that objects are either completely initialized or completely uninitialized, like in ML or Rust†, which is a much cleaner semantics. (The funny thing is, Go has the tools to get rid of zero values by virtue of not having constructors, but it chose not to use them for that!)

† Strictly speaking, objects can be partially initialized and partially uninitialized in Rust, but this is harmless as the borrow checker statically ensures that uninitialized fields of objects are never accessed.


Yeah technically you can always default init fields to None, but then you have to type everything as an Option and the ergonomics are so bad you're really incentivised not to do that. In that case you bundle that in a transient structure (a builder) and you get rid of it afterwards.


Or you use MaybeUninit, which I think is a great example of how unsafe Rust exposes you to potential memory unsafety but grants you control when you need it.


While this is a great piece of engineering, and will certainly deliver a huge amount of value to any project, the fact that a whole new tool had to be built (and will have to be maintained) to address serious, fundamental shortcomings in the language is really quite sad.


The go version NilAway isn’t as good as the java version NullAway yet. But the team working on it is very responsive and eager to improve.

For java projects I think NullAway has gotten so good that it really takes the steam out of the Kotlin proponents. Hopefully NilAway will get there too.


It's worth bearing in mind that some of these runtime panics would have happened anyway even if the code had been implemented in (e.g.) Rust. Ugly real world code tends to make quite frequent use of unwrap() or equivalents. For example: https://github.com/search?q=repo%3Arust-lang%2Frust+.unwrap%...


Rust is a lot better in this aspect, but this is a symptom of not having proper code review and standards. Do not forget that in some scenarios using unwrap is totally fine if a panic is acceptable. The same could be said for JavaScript: how many times have we not wrapped JSON.parse inside a try/catch? More than we would like to admit. I really appreciate that Rust "forces" you to handle all execution paths.


My link is to the rust repo on github. Does the Rust project not have proper standards for Rust code?


> Do not forget that in some scenarios using unwrap is totally fine if a panic is acceptable.

Looking through the first few pages, most of these panics are easy to audit, and are infallible or in contexts where it doesn't matter (internal tooling, etc). That's a pretty stark difference to every single reference being a potential landmine.


Yes, you are probably less likely to get a panic caused by a nil reference in Rust than in Go. My point is that the equivalent software written in Rust (or most other languages with option types) would probably have had at least some of these very same bugs.


It's also worth reading the examples you post

Like, one of the first files has only .unwraps in the comments (like a dozen of them in a file), some are infallible uses, some are irrelevant-to-runtime tooling, etc.

But anyway, "some" is a lot smaller than "all". Just like some of memory safety issues would also have happened since you can still use unsafe in Rust, yet it's still a big step forward in reducing those issues in the ugly real world


It's a list of all instances of ".unwrap()" in the project, so of course it includes instances irrelevant to my point. Seems uncharitable to assume that I haven't looked through it on that basis.


My basis is the %, not the simple fact that it has irrelevant instances.

So let me charitably ask directly: have you looked through all the examples at least on the first couple of pages? And if you have, what % of instances is relevant to your point?


I think this is covered in my reply to shakow. Unless Rust is absolutely riddled with bugs, it’s obviously going to be hard to find uses of unwrap that are definitely bad. The point is that there’s no way to easily assure yourself that all the uses of unwrap are definitely good.

It would be similarly difficult to trawl through the source of the Go compiler and find definitely bad instances of pointer dereferencing. So does that mean that it’s not actually a problem in Go either?


Then please pinpoint some problematic ones, so that not every reader has to delve into pages to continue the discussion.


The point is precisely that it's not always easy to figure out which instances are problematic.

If you think about it a bit, given that bugs are relatively rare in a mature project, it's going to be difficult to find a use of unwrap that's definitely bad.


You can `grep unwrap` for Rust, you need an entire SAT solver for Go.


That's almost literally what I just did, but what use is it? No-one is really going to go through all those unwrap calls and check them.


But... they could. And also, they do? It's not uncommon.

All of those could be checked or irrelevant, I have no idea.


What do you think the borrow checker is?


A completely unrelated construct?


I don’t see this as a band-aid. It’s doing proper type checking (static analysis) and that seems quite promising?

Getting good type errors without requiring type annotations seems like a win over languages that are annotation-heavy. Normally I’d be skeptical about relying on type inference too much over explicit type declarations, but maybe it’s okay for this problem?

This is speculative, but I could see this becoming another win for the Go approach of putting off problems that aren’t urgent. Sort of like having third-party module systems for so many years, and then a really good one. Or like generics.


I guess you could call it a bandage? The point wasn't to bad-mouth it or undersell it - bandaids are awesome. The point is that we need some external thing to patch holes in the underlying system.


Agree, this is dumb. This should be part of the compiler if the language definition can't simplify this for the user.


"The Go monorepo is the largest codebase at Uber, comprising 90 million lines of code (and growing)"

Is this just a symptom of having a lot of engineers who keep churning out code, of Golang being verbose, or of something else? I have a hard time wrapping my head around Uber needing 90+ million lines of code(!). What would some large components of this codebase look like?


Imagine a multidimensional matrix with various payment methods, local regulations, cloud providers, third party dependencies, web/mobile platforms, etc. Then also add more dimensions for internal things like accounting, hiring, payroll, promotions, compliance, security, monitoring, etc. Then double it for Uber Eats or whatever.

There's a lot of overlap and some invalid combinations, but you're still left with a huge number of combinations where Uber must simply work. And every time you add a new thing to this list, the total number of combinations grows polynomially.

(Also, Go is slightly more verbose than most languages. I think that's a feature and not a bug, but it's one more reason.)


Dang, a little more verbose? Understatement of a lifetime. It's fine if you like it, whatever, but it is quite a bit more verbose than many languages that I've used. My number 1 qualm with Go is that such simple building blocks require a bunch of redundant boilerplate. You're welcome to disagree with my opinion here.

A lot of people seem to gravitate toward languages with a less dense cognitive load. I have learned to love Kotlin, but it's also a super dense set of syntax that powers its very expressive language.


Uber is famous for NIH syndrome. You can tell by their open source projects they've basically built every part of their infra from scratch. So it's not just the application code but everything else that helps run it.


I personally find uberFX to be a fantastic project. You don't have to use it to write Go, but it certainly provides a great framework for organizing code so that writing tests is as easy as it can be.


Basically exactly this. Also, a lot of their "in production" open source projects are not "in production" but were generated and released as part of their broken promotion process.


Either genius or madness, you be the judge!


When you have thousands and thousands of engineers, and they are evaluated by how much code they produce, and they need to justify their job and continued employment, you end up with a 90M line codebase.


It is the nature of large systems to grow. As software engineers we build libraries to build libraries, we build tools on top of tools to check our tools.


still, having 3x the lines of code compared to the linux kernel is... weird.


From what I've heard from ex-FAANG, I'd wager that a significant portion of the Go is code-generated for things like RPC definitions or service skeletons.


They use bazel so generated rpc code is produced on the fly and is not checked in


It amazes me that in 2023 this is not a solved problem by the design of the language. Why doesn't Go adopt the "optional" notion of other languages, so that if you have a variable you either know it is not null or know that you must check for nullness? The technology exists.


I write a lot of Go and used to write a lot of Swift. Swift is what you’ll consider a modern language (optionals, generics, strong focus on value types), while Go is Go.

I appreciate both languages, and of course Swift feels like what you’d pick any day.

But, after using both nearly side by side and comparing the experience directly, I’ve got to say, I’m so much more productive in Go, there’s SO much less mental burden when writing the code, — and it does not result in more bugs or other sorts of problems.

Thing is, I, of course, am always thinking about types, nullability and the like. The mental type model is pretty rich. But the more intricacies of it that I have to explain to the compiler, the more drag I feel on getting things shipped.

And because Go is so simple, idiomatic, and basically things are generally as I expect them to be, maintenance is not an issue either. Yes, occasionally you are left wondering if a particular field can or cannot be nil / invalid-zero-value, but those cases are few enough to not become a problem.


As a single developer? Yes. As part of a team? Give me explicit type checks, please.


This is a popular view, but, again, does not match my experience. I have only lead small teams (say 3 to 10 people) of either senior or very intelligent and motivated middles, but for those, limitations of Go are not a problem in any shape or form. Comparatively, we had significantly more mess (and debates) in Swift.


There's much I don't love about Rust, but I feel golang could steal the ? operator and keep the spirit of go.

Effectively,

instead of

    result, err := doSomething()
    if err != nil {
        return nil, err
    } 
you'd get the same control flow with

    result := doSomething()?


Try (the current incarnation of the ? operator) is actually a very clever trait which does rather more than that.

Types for which Try is implemented can Try::branch() to get a ControlFlow, a sum type representing the answer to the question "Stop now or keep going?". In the use you're thinking of where we're using ? on a Result, if we're Err we should stop now, returning the error, whereas if we're OK we should keep going.

And that's why this works in Rust (today), when you write doSomething()? the Try::branch() is executed for your Result and resolves into a Break or a Continue which is used to decide to return immediately with an error or continue.

But this is also exactly the right shape for other types in situations where failure has the opposite expectation, and we should keep going if we failed, hoping to succeed later, but stop early if we have a good answer now.


A big problem with Try is the function signatures...excuse me, I would like a <<T as Try>::Residual as FromResidual<Result<T, !>>::Output, please. Yes, that is a caricature and I don't know the proper signatures, but c'mon. Read the discussion for the Try v2 RFC if you want a better idea.

...and then they add more syntax sugar to partly sweep the complexity under the rug. I like Rust as much as the next person, but I'm apprehensive about how this will play out.


That won't work because in Go you often need to wrap errors with additional context.

I have worked with Rust Option/Result types and found them extremely unergonomic and painful. The ?s and method chains are an eyesore. Surely PLT has something better for us.


In Rust you can wrap a context by chaining a context method on before the ?, at least with libraries like anyhow.


What does PLT mean in this context? I wasn't able to $searchengine it


Programming Language Theory


Do go errors or rust options include stack traces?


The same reason you can't get map keys without a library or looping yourself. "Simplicity" (for go maintainers).


It's because it's golang and that's how they opinionatedly want it. (Seriously, the community is incredibly stubborn and controlling)


There are several language design problems solved in the 20th century that Go designers decided to ignore, because they require PhD level skills to master, apparently.

Hence why the language is full of gotchas like these.

Had it not been for Docker's and Kubernetes' success, it most likely wouldn't have gotten this far.


And now they're stuck, since they doubled down on not making any language changes for the 2.0 release.

They made the language easier and quicker to write a compiler for, but harder to write programs in, and it doesn't look like that will change in Go 2.0.


At least many CNCF projects are now adopting Rust, Java, C# and even to a lesser extent C++.


There's nothing better than a panic in production caused by a third party library.


This whole thread is about the money Über has spent to work around panics in Go.


Speaking from personal experience, I selected Go for a project because it is high-perf, automatically uses all cores with goroutines, and is type checked.


> type checked

Kinda...


It is a type safe language, not exactly sure what you're hinting at here.


This entire post is a big workaround go's insufficient type system because nil is not modelled in it. That's not safe.


I agree I probably should have said strongly typed instead of safe, as yes, if you dereference a pointer to nil you are going to crash. That being said, I do think "possesses an untyped nil" is a pretty far cry from "not type checked at all". It's certainly much safer than languages like C or C++ which allow type punning, or Java, where both nullables and runtime exceptions associated with types are generally a more pernicious problem.


Also possible in Go via unsafe.


Which does what it says on the tin :)


That's what the `func foo() (*T, error)` pattern is for. It's actually better than syntactic sugar for optional values because now you also have a descriptive reason for why the value is nil.

But if you really cannot afford to return more than one bit of information, do `func foo() (*T, bool)`.
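A hedged sketch of that convention (the findUser helper and usersByID map are made up for illustration); note that, as discussed elsewhere in the thread, nothing forces the caller to check the error before dereferencing:

    package main

    import "fmt"

    type User struct{ Name string }

    // usersByID stands in for whatever storage a real lookup would use.
    var usersByID = map[string]*User{"42": {Name: "Ada"}}

    // findUser follows the (*T, error) convention: exactly one of the
    // two results is meant to be meaningful, but only by convention.
    func findUser(id string) (*User, error) {
        if u, ok := usersByID[id]; ok {
            return u, nil
        }
        return nil, fmt.Errorf("user %q not found", id)
    }

    func main() {
        u, err := findUser("42")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println(u.Name) // nothing forced us to check err before this
    }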


> a descriptive reason for why the value is nil.

Result<T,E> does this. I forget exactly why Result is actually different from, and in fact superior to, `func foo() (*T, error)` but IIRC it has to do with function composition and concrete vs generic types.


Result<T,E> is in one of two states: It either has value of type T, or error of type E.

(*T, error) is either T (non-nil, nil), or error (nil/undefined, non-nil), or both (non-nil, non-nil), or neither (nil, nil). By convention usually only the first two are used, but 1) not always, 2) if you rely on convention, why even have a type system? I have conventions in Python.

Leaving aside pattern matching and all other things which make Rust way more ergonomic and harder to misuse, Go simply lacks a proper sum type that can express exactly one of two options and won't let you use it wrong. Errors should have been done this way from the start, all the theory was known and many practical implementations existed.


I don't know much Rust, but wouldn't the analogy to (*T, error) be Result<Option<T>, E>, which has 3 states? Or is that not a common construct?

Because *T could be nil or non-nil, it seems like the analogy would be a nullable type in the Result<>. In Go, (T, error) would only have the states (non-nil, nil) and (non-nil, non-nil) if T is not a pointer. Still, the Result type seems better to me because the type itself is encapsulating all of this (and the error I guess cannot be null).


As others have mentioned, Result is a sum type so you either have a T or an E, there's no situation in which you can get both or neither.

The second part is that it's reified as a single value, so it works just fine as a normal value e.g. you can map a value to a result, or put results in a map, etc... , language doesn't really care.

And then you can build fun utilities around it e.g. a transformer from Iterator<Item=Result<T, E>> to Result<Vec<T>, E> (iterates and collects values until it encounters an error, in which case it aborts and immediately returns said error).


Don't rely on half-remembering how specific languages implement things; try to internalise the fundamentals. Go functions tend to return a tuple, which is a product type, while Rust's Result type is a sum type. A product type contains both things (a result and an error), while a sum type contains a result or an error.


90 million lines of code to .. call a cab?

Genuinely curious what all that business logic is for.


I was working on a Grab competitor; you would be surprised at the number of subsystems running there.

There are entire teams that are working on just internal services that connect some internal tools together.

There was also very little effectiveness or efficiency in the era of cheap capital, so tons of talent was wasted on nonsense. Uber built their own Slack for a while!! (before just going to Mattermost)

People always ask who actually makes money on Uber... I think it's not the cab drivers and not the investors; the ones who make money are the programmers. It's a transfer of money from the Saudis to programmers.

Well it was, anyway.


And billing, and reporting, and regulatory compliance, and inventory management, and abuse detection, and routing, and operations, and...


Still, as a reference, the Linux kernel is around 30 million lines of code, if I'm not mistaken. Most probably they have their reasons, but it smells weird to me.


It can be difficult to understand why big companies write so much code, but it becomes obvious once you're inside one: business software can be arbitrarily complex, because businesses can be arbitrarily complex. The guys in suits earn their paychecks by constantly coming up with new things the business could be doing.

"All" a kernel does (for some very large value of "all") is schedule userspace programs and manage the system's physical resources (memory, disk, devices). You can reach a point where a kernel is done, in the sense that it meets those basic needs with an acceptable level of performance. Kernel developers don't make extra money for every new feature they add - if the system is good enough, then it's good enough.


Microsoft Word is no small piece of software. It's probably around 10 million lines of code.

As for "per locality business rules differ that's why so many lines of code.." seems like you can have a policy engine+DSL (JSON or YAML or custom policy language and engine) thus your code base shouldn't balloon to almost 100 million limes of code...


A lot of it will be location based. It has come up before here in the discussion of why there is so much in the app. They have to cater for all the different rules in every jurisdiction.


Well, it is in Go, so it has to be verbose


Uber does way more than calling a cab, however, I was also surprised by the number of lines of code


how in the hell does uber need engineering problems solved?

mad


[flagged]


Please don't attempt to start language wars on threads here. They're a curse; they grow like kudzu and take over the whole thread. This is interesting computer science, and in the ecosystem of Hacker News, superficial bickering is its top predator.


I don't really buy the usefulness of trying to statically detect possible nil panics. In their example of a service panicking 3000+ times a day, why didn't they just check the logs to get the stack trace of the panic and fix it there? I don't see why static analysis was needed to fix that panic happening at runtime.

What I would really like golang to have is a way to send a "last gasp" packet to notify some other system that the runtime is panicking. Ideally, at large scale it would be really nice to see what is panicking where and at what time, with stack traces and maybe core dumps. I think that would be much more useful for fixing panics in production.

There was a proposal to add this to the runtime, but it got turned down: https://github.com/golang/go/issues/32333 Most of the arguments against the proposal seem to be that it is hard to determine what is safe to run in a global panic handler. I think the more reasonable option is to tell the Go runtime that you want it to send a UDP packet to some address when it panics. That allows the runtime to not support calling arbitrary functions while panicking, as it only has to send a UDP packet and then crash.
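Short of a runtime hook, a rough per-goroutine approximation is possible with a deferred recover (hedged sketch; the collector address is made up, and it only covers panics on goroutines you explicitly wrap, which is exactly why people wanted the runtime to do it):

    package main

    import (
        "fmt"
        "net"
        "runtime/debug"
    )

    // lastGasp sends a best-effort UDP datagram describing the panic to a
    // collector, then re-panics so the process still crashes as usual.
    // collector.internal:9999 is a made-up address.
    func lastGasp() {
        if r := recover(); r != nil {
            if conn, err := net.Dial("udp", "collector.internal:9999"); err == nil {
                fmt.Fprintf(conn, "panic: %v\n%s", r, debug.Stack())
                conn.Close()
            }
            panic(r)
        }
    }

    func main() {
        defer lastGasp()
        var p *int
        _ = *p // deliberate nil dereference to demonstrate
    }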

I could see the static analyzer being useful for helping prevent the introduction of new panics, but I would much rather have better runtime detection.


Because they want to find the code paths before deploying the code. Surely they have error logging or tracing and can see why it panics.

I tried this with a medium-sized project and it flagged some unexpected code that could panic 3 functions away from the nil.


Can we not link to scammy engineering blog articles with ads for scammy restaurant apps on top please?

Link to the source, or better yet, never link at all to anything related to Uber.


For a language that's more focused on backward compatibility than on features, this is the best and most practical way to reduce some of Go's loopholes. The problem with using developer tooling to solve innate language problems is that the tools lack awareness.

I do recommend the Go team find a way for these tools to run before the code compiles; having go build run through these tools first would go a lot further than just using scripts.



