Show HN: Error return traces for Go, inspired by Zig (github.com/bracesdev)
133 points by abhinavg 9 months ago | 84 comments



The logo is awesome. Has an artist been hired to create it?


Thanks! No, just one of the maintainers with a free evening.


I read the README but I'm not understanding yet why the return path might be what you want. The stack trace at error creation sounds more useful.


The stack trace and return path are pretty similar if the flow goes through function calls on a single goroutine, but if errors propagate across goroutines or across different stacks (e.g., via channels), then the stack trace can miss some useful details.

Here's an example that compares them: https://pkg.go.dev/braces.dev/errtrace#readme-comparison-wit...

Since the HTTP library uses a separate goroutine to create connections, the stack trace at creation time doesn't have details about the user request that triggered the connection.
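
A minimal sketch of that failure mode (hypothetical names; the real net/http internals are more involved). The error is created on a background goroutine, so a creation-time stack trace shows only dial, not the request path:

  package main

  import (
      "errors"
      "fmt"
  )

  // dial stands in for a connection pool creating connections on a
  // background goroutine, the way net/http does.
  func dial(errs chan<- error) {
      // A stack trace captured here shows only dial, not the
      // request handler that ultimately receives the error.
      errs <- errors.New("connection refused")
  }

  func handleRequest() error {
      errs := make(chan error, 1)
      go dial(errs)
      // The error crosses goroutines via the channel; a return trace
      // appended at each return site would still record this path.
      if err := <-errs; err != nil {
          return fmt.Errorf("handle request: %w", err)
      }
      return nil
  }

  func main() {
      fmt.Println(handleRequest())
  }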


While at first glance this package seems a bit overkill, I think the idea is interesting and something that could be built upon for Go.

I also felt that Go errors were too bare-bones, so I developed a small package (https://github.com/Vanclief/ez) based on an awesome post that I saw here once. I use this package in all Golang code I touch.


We actually did something similar in Go here: https://github.com/Bedrock-OSS/go-burrito

I guess the difference we're trying to make is that we really wanted errors that are understandable by users. Each time the error is returned, we try to wrap it with information about where and why.


(Aside about Zig, sorry. Although this applies to Go as well, I think?) Urgh I am so keen to switch to Zig but their attitude towards having vector operators just completely kills the viability for me as a graphics programmer.

I've asked in their Discord, and Andrew Kelley himself passed on commenting (I know his stance: every C++ dev wants their favorite feature), but the reality remains that it's infeasible to do with a DSL, so it's just the wrong language for writing graphics code.


I was watching a recent talk about the new Mojo language [1]. There is a section on SIMD and how they treat scalar values as a special case of vector operations (around the 33 minute mark).

It does seem that tensors are one of the core abstractions for modern ML systems. I've heard people joke that AI is just matrix multiplication. Python is such a flexible language that creating abstractions around its syntax was super easy. I believe that was part of the reason Python has come to dominate this space (not the only reason obviously).

I felt the same as you, though as a distant admirer of Zig. I totally understand that operator overloading can be misused. But I have an intuition that at least providing operators for linear algebra (and probably complex/quaternion) is a really important feature for any language from this point in history going forward.

1. https://www.youtube.com/watch?v=SEwTjZvy8vw&ab_channel=LLVM


Julia has good ideas, and there are interesting trade-offs here as well. ISTM the deep problem is that there's so much room for optimization, under composition of operations or with invariants not captured in types, that the typical LinAlg library will never capture it all. But I agree most languages should be anticipating that users need something helpful here.


  providing operators for linear algebra (and probably complex/quaternion) is a really important feature for any languages from this point in history going forward.
This is why I have started using Fortran for writing AI model inference code. It natively handles array manipulation like python and has a `matmul` intrinsic and compiles into fast code. There are some rough edges, but it's great as a low level matrix programming language which was its original point.


>I've heard people joke that AI is just matrix multiplication

Not sure where the joke is, today's deep learning based AI models are literally matrix multiplications with learned weights.


That's one of the downsides of a BDFL. Zig is like 98% amazing, and 2% strange decisions that I think could have been avoided if the creator had more experience with languages other than C and Javascript.

I'm still a happy Zig user though, and hey, there's still time before 1.0.


He's actually fluent in a lot of languages[0]. Just somehow apparently not much sympathy for numerical computing, or UI code (I'm fairly sure you want 2d vectors there too), or plotting, or ...

[0] https://andrewkelley.me/post/not-a-js-developer.html


Not supporting operator overloading is not a "strange decision": it's a widely hated feature outside of very specific domains (like maths and graphics), and every shop I worked at which bothered with a C++ guideline banned it.


You guys in the C++ world are "interesting".

C# has operator overloading, and during my whole career I have never seen it abused so hard that people needed to ban it, let alone write guidelines against it that lots of shops adopt.

I barely see anyone use it outside of really good cases like graphics.

The only interesting case was using the "/" operator for path combining, so "home" / "folder1" performs Path.Combine("home", "folder1")

but still, that was a PoC or a lib, not even prod.

So, is it about community, some culture or actually what?


I think it's mainly that C++ devs are very performance sensitive (why else would you subject yourself to this torture?), so they hate it when your '+' is suddenly O(n^2) and comes with side effects.


C++ operator overloading gets banned because C++ programmers invariably abuse it and cause problems, even the standards committee can't help themselves, which is how I/O Streams are a standard library feature.

C++ also allows you to overload short-circuiting operators, but of course your overload can't short-circuit, so you just silently destroyed an important feature. Why?

As others have pointed out, several languages have been able to provide this feature without causing half the mess and disappointment. Ten years ago if you said move assignment semantics are a bad idea you might persuade people because the C++ move semantics are messy, but hey, turns out a fresh language is able to just provide the destructive move developers actually wanted (but couldn't pull off for C++) and that's really nice.


> even the standards committee can't help themselves

  cout << "foo" << endl;


Banned for in-house use, sure. Require an extra review before including a library, sure. But don't tell me you refuse to use Eigen and force everyone to write their own matrix multiplication routines that don't have operator overloading...


That is why I specifically mentioned math and graphics as exceptions. Honestly, I still like an explicit DSL model better for these than trying to build a DSL on top of your existing language, C++/Lisp style.


But you have to use operator overloading to use smart pointers.

This isn't a pedantic nerd snipe; the point is that operator overloading really is indispensable in certain contexts.


It is strange that we stop at scalars for algebra in most languages. Odin's approach is almost more frustrating: it allows for quaternion types, but not Clifford algebra, which can elegantly describe them and much more (though I know it's a comically niche request at this point in time).


The main thing that stands out about this comment to me (agreeing 100% about algebraic types) is the "at this point in time" bit, such boundless optimism :D


Agreed. Further, with today's parallelism it should be trivial, but like you said, most languages stop at scalars and put the onus on you. I get it, higher order math is an implementation detail, but to not provide the proper tools for those implementations is frustrating.


Odin is a language in a similar space that seems to have first-class support for matrix and vector operations, as well as built-in support for various graphics APIs [0].

Seems like there are a bunch of interesting low-level languages gaining steam at the moment.

[0] https://odin-lang.org/news/major-graphics-apis/


Does Odin have support for async/concurrency?


Doesn’t this open github issue imply that vector operators are still being considered? https://github.com/ziglang/zig/issues/7295

Or is it that the three years it’s been around indicate it will never progress?


This is far better than my attempts at initiating the discussion, thanks for linking!

Also it seems like even the discussion of Geometric Algebra has been covered before 3 years ago: https://github.com/ziglang/zig/issues/7295#issuecomment-7389...

Unless it's actually possible after all this time (even in some random beta) to do vector code in infix notation, it doesn't feel like it's going to happen :(


Didn't Zig "just" decide to permanently stay irrelevant for performance-critical code by replacing LLVM with a yet-to-be-written homegrown optimiser and code generator? Don't get me wrong, LLVM has lots of warts, but a good multi-architecture optimiser and code generator is a larger project than the frontend and standard library for any reasonable language.


No. They are turning LLVM into an "optional dependency", or plugin, or call it what you want. Day-to-day/debug compilation will be done with their own compiler, and if you want, you can cut a release with LLVM to get all the optimizations when you're done, or at any time really.

Zig won't ship with LLVM as part of the standard download, but I imagine it will be easy to get Zig+LLVM working.


What's really sad is that not having overloading "on" by default is a good idea, but that doesn't mean the language has to stop there; it's really simple to have a small language feature which says that

a #everything_until_the_next_space b is rewritten as #everything_until_the_next_space(a,b)

So you'd have:

- a #foo b #foo c is foo(foo(a,b),c)

- a #foo b #bar c is forbidden; use parentheses.

- and maybe #: for the inverse composition, where a #:foo b #:foo c is foo(a, foo(b,c))

It's explicit, no hidden overloading, yet it's "short" enough to be usable: (a #* b) #+ c isn't that much worse than a*b+c and is much more readable than madd(mmul(a,b),c).

If there is no implicit conversion, name clashes shouldn't be too bad (#+ could be usable both for matrix and for graphics libraries).


Am I crazy or is there nothing stopping us from writing functions that work with vectors? I don't really understand why this is a big headache for graphics programming other than it not being your preferred syntax.


I'm a C++ dev, could you explain a bit of what is missing in Zig for vector operations? Does Zig not have operator overloading?


Nope, no operator overloading. As I understand it, the philosophy is to not hide O(n) behavior behind notation that looks O(1).


And what's funny about that stance is mathematical operators aren't actually O(1).

https://en.wikipedia.org/wiki/Computational_complexity_of_ma...

You don't want to know what the innocent-looking `*` in this function compiles to:

  fn square(num: u10000) u10000 {
    return num * num;
  }


Is the underlying algo not O(1)? Under what pairs of two u10000 numbers would you get different execution time?


Sure, if you ignore the n in n=10000, multiplication is O(1). But the same is true for e.g. heapsort--all lists of 10000 items take asymptotically the same amount of time to sort.

But that's a strained interpretation of complexity, and not very useful.


A heapsort is designed to take a variable number of items at runtime for given code. The n is fixed at compile time in the multiplication examples and is invariant at comptime (in zig).


It's mildly offensive that u10000 is a builtin type but vec2f is not (also note that I am not asking for general operator overloading!). Which has dedicated CPU instructions across all modern architectures, what's the relative usage frequency, what's the cost/benefit of each, etc etc... :( And we all know why u10000 is in there: because you have to have arbitrary-width integers when writing a compiler, so they figured eh, we'll just expose that because we have it already. Absolutely transparent compiler-programming (cf. rendering-programming) bias, almost nobody else needs that.

And yeah, I get that Add(Add(Add(a, b), Mul(c, 3)), d) is possible, but come on... imagine if you had to write your normal "a + b + c * 3 + d" with ints/floats like that! What's that, suddenly people care, but nobody cares if it's not their field...

Whatever, I will continue to look longingly at Zig for all the bits of C/C++ (and apparently Go, to try to bring it back to the original topic) it solves, while it's missing the trivial and absolutely critical single feature needed to enable an entire class of performance-critical programming.


> It's mildly offensive that u10000 is a builtin type but vec2f is not

u10000 exists only because we want u3, u7, and u13 as builtin types.

u3, u7, and u13 are useful for embedded and systems programming.


> And we all know why u10000 is in there: because you have to have arbitrary-width integers when writing a compiler

You don't need them as a built-in type to write a compiler. They're there because LLVM was the original backend and you essentially get them for free (in the sense that the backend code generation is already handled for you, so why not include them).


> It's mildly offensive that u10000 is a builtin type but vec2f

Zig does have a builtin vec2f; it is spelt @Vector(2, f32).


@Vector is counterproductive for cases like vec2f, since it tries to force data into SIMD registers even when it would be more efficient to leave it in normal registers. In fact I've wondered in the past if Zig might end up slower than C for graphics code, if people misuse @Vector to get normal math operators back. For that reason I never use @Vector unless I already know what SIMD intrinsics I'd be writing in C.


Strange.

How is that different from say, a function call? A function may look like a single O(1) operation from the input/output/name, but actually do something much more complex. That seems like the same thing to me, and very common. (and frankly I'm not sure that could even be avoided)


I didn't mean to volunteer to defend this choice, but without investigating a function you can't really support an opinion about its runtime. A language can make such promises about its basic syntax however.

Perhaps I'll rephrase how I understand the philosophy: if it's a function call, it should look like a function call. Operator overloading breaks that.

That said, this isn't my hill to die on.

Edit to clarify my final sentence there: I have zero interest in debating this any further. Pixelpoet, if you're going to be so fussy, go read Harvey and van der Hoeven and stop trying to win language fights, they're tedious.


I appreciate your being such a good devil's advocate; however, as ForkMeOnTinder points out, there is already the super complicated [0] overloaded O(N^1.58+) operator for arbitrary-precision integer multiplication, easily argued to be vanishingly less useful than simple vector operators, particularly if they were well expressed at the IR level and well mapped to modern hardware/instructions.

The ask isn't for general operator overloading (I'm also in favour of not having that), rather just not stopping native algebraic type support at scalars; the C function atan2(y, x) basically just wants to give you the complex argument, for example. Really it would just do so much to unify and simplify everything, besides being able to write vector stuff in a sane way; if every rando has to write their own vector and complex number classes, I'm much less likely to vouch for its correctness.

[0] I've recently been looking into Karatsuba multiplication to reduce it from O(N^2) to O(N^1.58): https://en.wikipedia.org/wiki/Karatsuba_algorithm


To me it does seem a weird hill to die on. If I'm using an operator on a non-primitive/non-trivial type - I'm going to consider the possibility that it does something more complex. It would be strange not to.


Can you explain why we should use this over https://github.com/pkg/errors?


The README covers the idea behind errtrace in more detail, but the primary difference is in what is captured:

pkg/errors captures a stack trace of when the error occurred, and attaches it to the error. This information doesn't change as the error moves through the program.

errtrace captures a 'return trace'--every 'return' statement that the error passes through. This information is appended to at each return site.

This gives you a different view of the code path: the stack trace is the path that led to the error, while the return trace is the path that the error took to get to the user.

The difference is significant because in Go, errors are just plain values that you can store in a struct, pass between goroutines etc. When the error passes to another goroutine, the stack trace from the original goroutine can become less useful in debugging the root cause of the error.

As an example, the Try it out section (https://github.com/bracesdev/errtrace/#try-it-out) in the README includes an example of a semi-realistic program comparing the stack trace and the return trace for the same failure.
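
Here's a minimal sketch of the wrapping pattern from the README (readConfig/loadApp are made-up names; see the docs for the exact formatting support):

  package main

  import (
      "fmt"
      "os"

      "braces.dev/errtrace"
  )

  func readConfig(path string) ([]byte, error) {
      data, err := os.ReadFile(path)
      if err != nil {
          // Each Wrap records this return site in the trace.
          return nil, errtrace.Wrap(err)
      }
      return data, nil
  }

  func loadApp() error {
      if _, err := readConfig("app.conf"); err != nil {
          return errtrace.Wrap(err) // appends another return site
      }
      return nil
  }

  func main() {
      if err := loadApp(); err != nil {
          // Per the README, %+v prints the accumulated return trace.
          fmt.Printf("%+v\n", err)
      }
  }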


How does one go about implementing something like this? I am quite curious.


Judging by the project, it's implemented by instrumenting the source code; either manually modifying error returns with a wrapper function, or by running source files through an automated tool that will find and modify the return statements for you.
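
Roughly, the rewrite looks like this (a sketch; doWork is a hypothetical stand-in, and Wrap is the wrapper from the project's README):

  package demo

  import (
      "errors"

      "braces.dev/errtrace"
  )

  func doWork() error { return errors.New("boom") } // hypothetical

  // Before instrumentation: the return site goes unrecorded.
  func before() error {
      if err := doWork(); err != nil {
          return err
      }
      return nil
  }

  // After the tool rewrites the file, each error return is wrapped:
  func after() error {
      if err := doWork(); err != nil {
          return errtrace.Wrap(err)
      }
      return nil
  }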


I suspect it is based on https://pkg.go.dev/runtime#Caller
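
runtime.Caller is enough for a bare-bones version. A sketch of a homegrown single-frame wrapper built on it (not necessarily how errtrace does it):

  package main

  import (
      "errors"
      "fmt"
      "runtime"
  )

  // wrap annotates err with the file:line of its caller -- the same
  // primitive a return-trace library could build on.
  func wrap(err error) error {
      if err == nil {
          return nil
      }
      // skip=1 reports the caller of wrap, i.e. the return site.
      _, file, line, ok := runtime.Caller(1)
      if !ok {
          return err
      }
      return fmt.Errorf("%w\n\t%s:%d", err, file, line)
  }

  func fetch() error {
      return wrap(errors.New("boom"))
  }

  func main() {
      fmt.Println(fetch())
  }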


At the very least:

>This repository has been archived by the owner on Dec 1, 2021. It is now read-only.

It's largely complete so it is essentially fine at the moment, but it won't be adapted to future language or community changes. A future landmine.


It's archived mainly because it's been superseded by fmt.Errorf() with the %w directive. Go 1.20 also introduced errors.Join() and multi-%w, which github.com/pkg/errors lacks, so using it for green-field projects is very ill-advised.
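
For reference, those stdlib features look like this:

  package main

  import (
      "errors"
      "fmt"
  )

  var ErrNotFound = errors.New("not found")

  func main() {
      // %w (Go 1.13+) wraps while keeping the chain inspectable.
      err := fmt.Errorf("loading user: %w", ErrNotFound)
      fmt.Println(errors.Is(err, ErrNotFound)) // true

      // errors.Join (Go 1.20+) combines multiple errors into one.
      joined := errors.Join(err, errors.New("cache miss"))

      // Multi-%w (also Go 1.20+) wraps several errors at once.
      both := fmt.Errorf("a: %w; b: %w", ErrNotFound, joined)
      fmt.Println(errors.Is(both, ErrNotFound)) // true
  }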


Except for the stack trace part, which is a gigantic reason why it was popular.

tbh I'm not sure what the current popular option is for wrapping with stack traces.

Re join: it doesn't need to be a joiner error; it has no need to do anything for that. Just join with the stdlib, then wrap.


It’s pretty easy to write your own Errorf() wrapper or some sort of WithStack() that stores runtime/debug.Stack() output. github.com/pkg/errors offers more flexibility in formatting though.
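
A sketch of such a homegrown WithStack (the names here are invented, not pkg/errors' API):

  package main

  import (
      "errors"
      "fmt"
      "runtime/debug"
  )

  // stackError pairs an error with the goroutine stack captured
  // at wrap time.
  type stackError struct {
      err   error
      stack []byte
  }

  func (e *stackError) Error() string { return e.err.Error() }
  func (e *stackError) Unwrap() error { return e.err }

  // WithStack attaches the current stack unless one is already present.
  func WithStack(err error) error {
      if err == nil {
          return nil
      }
      var se *stackError
      if errors.As(err, &se) {
          return err // keep the innermost (earliest) stack
      }
      return &stackError{err: err, stack: debug.Stack()}
  }

  func main() {
      err := WithStack(errors.New("boom"))
      var se *stackError
      if errors.As(err, &se) {
          fmt.Printf("%v\n%s", se, se.stack)
      }
  }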


Well, sure, but by that metric this library is even easier to build yourself since it's only gathering a single stack entry per wrap. And it has no backwards compatibility to worry about.

"You can build X by hand too" has little to do with why people choose to create or use libraries.


> "You can build X by hand too" has little to do with why people choose to create or use libraries.

In my experience, "you can build X by hand" is the Go community's preferred approach.


It's been said that "Go" stands for "Go implement it yourself"


My implementation is about 120 lines, plus about 100 lines of tests. It's not hard, but not trivial either. Doing that for every single project is rather tedious. Especially for prototyping I just quickly want something that 'just works' without fuss.


It would be great if the internals added %W as an option that would also include the relevant trace. Shouldn't be unreasonable or impossible.


Even this package is not needed anymore


How do you get stack traces of errors?


The same way you get stack traces of names, email addresses, random numbers, etc. It is funny how the word 'error' leaves some suddenly forgetting how to write software.


go is step by step reinventing exceptions


Exceptions typically unwind the stack, producing a stack trace similar to what certain Go error types already do. Go's panic does actually unwind the stack. In a sense, Go has had Java/Python style exceptions, from the beginning, through panic and recover. This project distinguishes itself from that pattern in the README.

As far as error handling is concerned, errors as values is the modern thinking. Go is not behind the times here. If you squint, the `(T, error)` return type is very similar to Rust's `Result`, and the `if err != nil` idiom is basically Monadic control flow.


> If you squint, the `(T, error)` return type is very similar to Rust's `Result`

This requires the kind of squinting where 9 x 9 = 81 is basically the same as 9 + 9 = 18, right? I mean, they're roughly the same symbols, albeit one at a slightly different angle, and in a different order...

Result is a sum type, as are a lot of key things in Rust. Take the Rust type Result<bool,()> - this has three possible values: Ok(true), Ok(false), Err(()). The analogous Go product type has four possible values: (false, nil), (true, nil), (false, err), and (true, err).
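
Concretely, a sketch enumerating the representable states on the Go side:

  package main

  import (
      "errors"
      "fmt"
  )

  func main() {
      // The Go product type (bool, error) admits all four combinations;
      // nothing in the type system rules out (true, non-nil error).
      states := []struct {
          v   bool
          err error
      }{
          {false, nil},
          {true, nil},
          {false, errors.New("err")},
          {true, errors.New("err")}, // representable, though unidiomatic
      }
      for _, s := range states {
          fmt.Println(s.v, s.err)
      }
  }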


The Go type has the implicit restriction that if the error is non-nil, then the result is meaningless. So really it has 3 meaningfully distinct values.


That depends on perspective.

An idiomatic Go function will ensure that T is always useful, regardless of the error state. At the very least it will return the zero value for T, in keeping with the Go maxim: make the zero value useful. From the perspective of writing the function returning (T, error), there will always be four states, with T and error being independent of each other. Anything else is faulty design.

Unfortunately, some early Go code written before things were well understood left T to be undefined given certain error states, so one cannot assume that T will be valid in all cases for all functions. This does mean that the caller's perspective can only reliably consider three states absent of diving deeper (e.g. reading the documentation).


> implicit restriction that if error is true, then the result is meaningless

By 'implicit' I assume you mean 'by convention'? I say this because unless I've misread the go spec (and that's a distinct possibility), function returns are either a single value or a tuple and there is no specification on tuple returns and mutually exclusive values.

FWIW, I mostly like Go and work with it practically every day. I absolutely loathe its ideas on error handling. I have quite strong feelings on the subject that aren't fit for polite discussion.


> I absolutely loathe its ideas on error handling.

Its idea is simply that error is just state like any other. I find that to be quite reasonable – and something I miss dearly when I work in other languages, which get overly fancy trying to hide that fact. There is nothing special about errors. The machine certainly has no concept of errors. Why does it need special constructs?

Bad practices like assuming T and error are dependent do break the ideas, but a language can only hold the hands of poor developers so much. Someone determined enough to write bad code will do so in any language.

Realistically, if there is a handling problem, it is a problem for all values of all kinds. Every single problem one can point to about handling errors is also a problem when handling names, email addresses, random numbers, etc. To single it out as an error handling problem specifically is flawed.

> I have quite strong feelings on the subject that aren't fit for polite discussion.

I'd love to hear more. You won't hurt my feelings. Getting worked up about a programming language discussion is illogical.


> To single it out as an error handling problem specifically is flawed.

Sure, product types are the wrong shape for several different problems, not just error handling. Go doesn't have any other shape of user defined types so... too bad you're just stuck with something the wrong shape. Errors are an example where it's more obvious to more people.

Also, the machine actually does know about errors. For example, the x86-64 architecture defines several kinds of faults for which provided handlers will be executed. Linux deliberately makes it impossible to run such fault handlers after every other way to reboot has proved ineffective, then trips a fault, because it knows an x86 CPU which can't run the fault handlers despite a fault will give up and reboot, which is what Linux was trying to achieve anyway. ARM has error interrupts; POWER has layers of nested error handling.


> product types are the wrong shape for several different problems

While that is true, the shape has no bearing on the handling problem. Go could have every type shape-defining feature ever conceived and it would still have the very same handling problem – for errors and everything else.

> Errors are an example where it's more obvious to more people.

In my experience, the vast majority of bugs I encounter in the real world are related to the handling problem of non-error values. That is where it is most obvious, and not just in Go. I expect errors only get talked about because it is fashionable to repeat what someone else said, not because anyone put any thought into it.

> Also the machine actually does know about errors

Only within the confines of one human interpretation of how the machine functions, just as is the case for code. The machine itself has no such concept. It doesn't have awareness that a given state is an error or a rainbow.


Go basically has exceptions with panic & recover. And it's perfectly simple to add stack traces to errors, should you want it.
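
For example, a recover-based boundary that turns a panic into an error carrying a stack trace (a sketch):

  package main

  import (
      "fmt"
      "runtime/debug"
  )

  func mustParse() {
      panic("bad input")
  }

  // do converts a panic back into an error with a stack trace --
  // the closest Go gets to try/catch.
  func do() (err error) {
      defer func() {
          if r := recover(); r != nil {
              err = fmt.Errorf("recovered: %v\n%s", r, debug.Stack())
          }
      }()
      mustParse()
      return nil
  }

  func main() {
      fmt.Println(do())
  }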


That completely misses the fact that you need the providers of code several layers away from you to make those choices, and they frequently don't.

I can add stack traces and raise panics all day long in my code and it will never help me trace a deep error in my system. The collective blindness to that in the go world is staggering.




Exceptions are only better than C style error return numbers but worse than pretty much every other model of error handling we have today.


There is an opposite opinion on this


I just picked up Go and thought I might use it for Advent of Code, to help gain some familiarity. The error handling kind of threw me off, though. I am coming from typescript most recently, and I think crashing on unhandled errors is a good thing. I have bad memories of perl doing unintended things after failing to catch an error.

But I am willing to go out of my comfort zone. I think I'll probably have to find some linter configuration that will squawk when an error is ignored, though. Just for my own sanity.


The "standard linter" in Go is https://golangci-lint.run/ , which includes [1] the absolutely-vital errcheck which will do that for you.

For an Advent of Code challenge you may want to turn off a lot of other things, since the linter is broadly tuned for production, public code by default and you're creating burner code and don't care whether or not you have godoc comments for your functions, for instance. But I suggest using golangci-lint rather than errcheck directly because there's some other things you may find useful, like ineffassign, exportloopref, etc.

I highly recommend using Go with linters... but then, I highly recommend using any language with all the lint-like support you can get from the very beginning of any project, so I wouldn't read too much into that.

[1]: https://golangci-lint.run/usage/linters/


Based on what, exactly? I do think that e.g. Rust's error handling is quite okay with proper algebraic data types, but (checked) exceptions are similarly safe and ergonomic to use. The worst is without doubt the C style, whose legacy Go happily extends by doing the exact same thing, but natively in the language.


Checked exceptions are not ergonomic.

Whatever other benefits they might have are lost when every Java developer rethrows every checked exception as a more ergonomic RuntimeException.


> Checked exceptions are not ergonomic.

It's about the same ergonomics as Rust error handling. It's just that many people don't like to check exceptions, similarly to how many people write Python code without explicit types.


Still better than losing an error case. But Java's solution is not the most ergonomic; no one was talking about that implementation.


What checked exception implementation do you have in mind then?



