Both this and Rob's article seem to be hiding an inefficiency: if one of the early functions in the chain (hint) fails, the rest of them are still called, just to do nothing. It also hinders understandability and debugging, since you're continually bombarded with "if(err)" after the first failure.
I've thought about this problem too (error handling in general), and with that hint of this being a "chain", arrived at this solution which I think is extremely concise and elegant:
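Something along these lines (a minimal sketch, since the original snippet isn't shown here; the step functions are hypothetical and report success as a bool, so the && chain stops at the first failure):

    package main

    import "fmt"

    // hypothetical steps that report success as a bool
    func step1() bool { return true }
    func step2() bool { return false } // fails; the chain stops here
    func step3() bool { fmt.Println("never reached"); return true }

    func main() {
        // the whole chain, broken only by short-circuit evaluation
        ok := step1() &&
            step2() &&
            step3()
        if !ok {
            fmt.Println("a step failed")
        }
    }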
Of course this can be modified suitably if your error vs. success values are different, but the structure is the same: a chain of actions one after another, broken only by short-circuit evaluation. It's surprising that this pattern isn't seen more, because I've shown this to a few others and the response has almost always been an initial puzzlement followed by "wow, that's extremely neat" or "why didn't I think of that?" This is certainly an example of "use the language".
Yes, the pattern isn't useful for function calls that return results that are used as parameters in later function calls, which tends to happen all the time.
Real world scenario? I would have wrapped this in a custom http.HandlerFunc that did the authentication portion... similar to this (obviously not showing the auth portion, but setting headers)...
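Something like this (a sketch with made-up names; withCommonHeaders is hypothetical, and the auth check is elided as described):

    package main

    import "net/http"

    // withCommonHeaders wraps a handler so shared headers (and, in the real
    // version, the authentication check) live in one place
    func withCommonHeaders(next http.HandlerFunc) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            w.Header().Set("Cache-Control", "no-store")
            // auth check would go here
            next(w, r)
        }
    }

    func main() {
        http.HandleFunc("/resource", withCommonHeaders(func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte(`{"ok":true}`))
        }))
        http.ListenAndServe(":8080", nil)
    }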
I would like to counter with my C++ way of doing error handling without exceptions:
    if (!func1()) return false;
    if (!func2()) return false;
    if (!func3())
    {
        err = get_error();
        ...
    }
    if (!func4()) return false;
Each function sets a global exception-like error object which can be inspected by the caller if the function signaled an error (like returning false in this example). Of course you need thread-local variables for this. Not sure if this is a problem for Go or in general...
Yeah me too!
But I wonder how it'll hold up in some actual code scenarios where not everything is chained neatly in a function. And I could rewrite that, but that might be worse.
If you want to do something with the intermediate results (e.g., the functions return more than just an error), this will not be possible.
    resourceName, err := connection.getName()
    if err != nil {
        // handle
    }
    fst := strings.Split(resourceName, ".")[0]
    sendTime, err := connection.findSendtime(fst)
    if err != nil {
        // handle
    }
    // do something with sendTime and call a new func... you get the point
EDIT: Fix some mistakes in my example code. Obviously fake example though :)
I've been writing Go for over 6 years. Error handling at the source of failure is a feature, not a bug.
Rob Pike's example code is bad and I would not allow it through a code review. Why? It makes the code lie. It appears as though the writes cannot fail, but in fact they can, and later writes become no-ops. On top of that, it means that you lose the context of which write failed. Did you fail writing the header, the body, or the footer? You can't know anymore.
Code lies all the time. This is called an abstraction, and is very useful. The abstraction that Rob's code provides is that it makes you think of writing not as doing something with the OS, but as merely pushing to a buffer, with flush being the call that actually commits the transaction. Therefore, flush is the call that can fail. Is this literally what is happening? No, but it is a much easier way to think of things, and makes the code more readable.
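To make the abstraction concrete (a minimal runnable sketch using the standard library's bufio.Writer, which retains the first write error and reports it from Flush):

    package main

    import (
        "bufio"
        "log"
        "os"
    )

    func main() {
        w := bufio.NewWriter(os.Stdout)
        w.WriteString("header\n") // think of these as pushes to a buffer...
        w.WriteString("body\n")
        w.WriteString("footer\n")
        if err := w.Flush(); err != nil { // ...and Flush as the commit that can fail
            log.Fatal(err)
        }
    }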
>Did you fail writing the header, the body, or the footer? You can't know anymore.
You are writing to the same stream in both cases. The error wouldn't be caused by the header or footer data, but by some property of the stream, like it being closed, or the disk running out of space. Knowing whether the failure occurred while you were writing to the header or footer is unnecessary information, and therefore obfuscatory.
I'm not sure that's the argument you want to use it as. Dijkstra is surely not arguing that abstraction should not lose information (that would actually not be abstraction at all); he is saying that the important thing about it is that it "finds" the pertinent information (i.e. by throwing away the irrelevant details). He's not redefining the term as something incompatible with its usual meaning, but proposing a perspective, taking the information loss as a given.
I think the name "abstraction" is unfortunate in Dijkstra's sense. The notion of abstraction he's talking about is not about the process of going from less to more abstract. The abstraction is the more abstract. In fact, ignoring the path by which we arrived at the abstraction, it has absolutely no intrinsic relation to the original, more real, less abstract model.
Maybe we're just trying to say the same thing here.
> Knowing whether the failure occurred while you were writing to the header or footer is unnecessary information, and therefore obfuscatory.
So are you making a general case for exceptions or are you just asserting that there exist cases where we don't care where errors happened? There are absolutely such cases. When writing quick and dirty scripts I absolutely value that they crash when they couldn't find a file or similar.
But for more complex systems, I've never found a good use for such abstractions. In a complex system, you need to see what happens; out of sight, out of mind. There is no way to build a complex system on top of exceptions and still have a good understanding of its failure modes.
I would go further and state that it's extremely uncommon to be able to handle an exception in an appropriate way. The most common uses of them are to either 1) handle the error immediately (for example FileNotFound), in which case the exception handling just adds unnecessary syntactic variety, or 2) just let the program die. Coincidentally, golang has that too.
There may be rare situations where you can handle a series of statements with a single exception handler, like several write-file statements in a row or such, and that might seem cleaner than repeating the error handling. But actually these cases are just a code smell. You could rewrite them using a for loop and factor the repetitions as actual data, adding the error handling only once (inside or after the for loop). In other words, exception handling is just a way to paper over that code smell. It makes hiding bad code easier, so leads to bad code.
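To illustrate that refactor (a minimal sketch; the write steps are hypothetical names, the repetition becomes data, and the error handling appears exactly once):

    package main

    import (
        "errors"
        "fmt"
    )

    // hypothetical steps standing in for the repeated write-file statements
    func writeHeader() error { return nil }
    func writeBody() error   { return errors.New("disk full") }
    func writeFooter() error { return nil }

    func writeAll() error {
        // the repetitions factored as actual data
        steps := []func() error{writeHeader, writeBody, writeFooter}
        for _, step := range steps {
            if err := step(); err != nil { // handled once, inside the loop
                return err
            }
        }
        return nil
    }

    func main() {
        fmt.Println(writeAll())
    }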
That type of attitude I would not allow through an employment review. Programmers hold widely varying opinions on style. Enforcing your design principles onto others during code review is not only arrogant, but you also can't prove in any formal way that your "design" is better, especially on a controversial design such as this.
It's totally on point. I've been writing a fair bit of Go lately and consequently am firmly in the Generics! Now! camp. The pace of writing in this language is almost snail-like, and a lot of that is due to re-implementing common idioms like popping and pushing from/to a stack and then debugging the little typos that creep in around the edges. It's freaking annoying, and it's a shame because the language is otherwise nice to work with. If it weren't for the testing workflow I'd have given up on Go entirely, just because of how annoying it is to re-implement the basics, over, and over, and over...
I don't know. I've worked in three different languages for years on end, and they all have different pain points. C++ is too complicated, and generics are part of that complexity. Java has an ok basic language, but the ecosystem is weird and fad-prone, jumping from beans to patterns to annotations to injection, searching for the One True Way. Go is a little too simple, which sometimes leads to rewriting fairly straightforward stuff that a few clever general mechanisms would let us write once.
This experience makes me suspect that there will always be some pain point with the language. We'll never be happy; that's impossible. The only thing we can do is choose what type of unhappiness we are willing to live with.
The thing I love about Go is its fundamental clarity. It's very upfront and literal. I find it easy to understand what is happening in any particular bit of code. And I suspect that whatever complexities we add would compromise this clarity in the name of brevity. I'd spend less time being bored, and more time being puzzled or incredulous. And fundamentally, I'd rather be bored.
> The thing I love about Go is its fundamental clarity. It's very upfront and literal. I find it easy to understand what is happening in any particular bit of code.
When reading a Go codebase I'm not familiar with, I very often have a hard time figuring out which interfaces go with which structs. As in "Oh, this function accepts interface Foo, which is implemented by which structures?", and then I have to go on an adventure throughout the codebase to figure out what structures go in there. Really annoying.
In a language whose intention is to be explicit and easy to read (as opposed to write) I can't understand for the life of me why the authors chose to make interface implementations implicit. It seems like a decision that pretty much only has downsides and which is contrary to the overall aim. I just don't get it.
The one reason that comes to mind is that doing it implicitly lets you use structures in terms of interfaces even when those structures come from codebases you don't control.
Suppose Bob's codebase has a structure implA that implements interface A, and you want to use it in terms of its interface. If the language requires Bob to declare implA as an implementation of A, you have to persuade him to do so in his codebase. But if implicit implementation is enough, then you can just use implA directly as an A with no changes by Bob.
But yes, this is a sore point. It would be convenient to be able to declare structures as implementations of particular interfaces, even if the current implicit behavior were preserved.
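A sketch of that implicit satisfaction (all names here are hypothetical; ImplA stands in for a type from Bob's codebase that never mentions our interface):

    package main

    import "fmt"

    // interface A lives in our codebase
    type A interface {
        DoThing() string
    }

    // ImplA stands in for a struct from Bob's codebase; it never declares
    // that it implements A, yet it satisfies the interface implicitly
    type ImplA struct{}

    func (ImplA) DoThing() string { return "done" }

    func main() {
        var a A = ImplA{} // works with no declaration by Bob
        fmt.Println(a.DoThing())
    }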
> Suppose Bob's codebase has a structure implA that implements interface A, and you want to use it in terms of its interface. If the language requires Bob to declare implA as an implementation of A, you have to persuade him to do so in his codebase.
Not necessarily; in some languages you can add an interface to a type even if the type is foreign and you only control the interface. I believe this can be done in Swift, Rust, F#, and quite likely some others...
I'm a fan of doing the first pass for most things in a scripted language. I mean, it may not be the least bytes or fastest code, but the time to done seems to be a lot better, and usually performs well enough a second pass isn't needed.
I really appreciate what I've seen of both Rust and Go myself. They both feel more approachable than the C/C++ legacies imho. Though I haven't gone much farther than general tutorials with either.
When I had the opportunity to choose, I felt I could find useful ways of doing things. The problem was that I often had to accommodate myself to someone else's choices that had been made years before, and I had to understand what the heck they had done before I could make useful upgrades. And with many ways of doing things, some of which are hugely reliant on really non-obvious behavior controlled by cryptic settings in control files or annotations, that can be hard.
C# has the feature-bloat of C++, except with a garbage collector. (I wrote C# for 9 years and even when writing it 40 hours a week, I still had trouble keeping up with all the features that continually came out)
> I still had trouble keeping up with all the features that continually came out
Not that I ever had this sentiment, but even if so: why would you try to do that? Most new features are tiny enhancements you can live without, and the big changes (generics, LINQ) you cannot do without, so they are forced on you anyway. 'Trouble' sounds like you were actually bothered by it, which seems a bit over the top?
C# has become quite a big language although it manages to hide the complexity quite well compared to C++. There is very little in the language that will trip you up, unlike C++.
The only problem I have with it is it is still really Windows only, hopefully that will improve with .Net Core 3.
I have been writing stable and robust C# software on Linux and Mac OS X for many years now, so not sure what you mean. Starting with Mono and now solely .NET Core 2 & 3. I know many people who do that as well. And our deployments (all our prod/test/staging servers) went from Windows-only 15 years ago to Linux-only four years ago.
Maybe you are talking about WPF / desktop only; there are other options for Mono, but yes, there you would be right. However the trend is, unfortunately, toward browser interfaces (Electron etc.), and those you can already do on Linux/Mac with .NET Core.
> My experience with .Net Core 1 and 2 was that it felt unfinished
Ah, curious what made you feel that. We ported massive code bases of ASP.NET and command-line tooling over to .NET Core since version 1 and ran into few issues. It's a much better experience now, but it never felt unfinished to me.
I have to admit that I have been writing software for a long time and one of the things I automatically do is abstract (not too far, just far enough) the underlying implementation of whatever I make/made. So our old ASP.NET code was very easy to port for that reason; I never use internals directly and still do not. For instance MVC looks more or less the same anywhere so I just use plain old C# classes as controllers so they can be reused by apps, other (non ASP.NET) frameworks, commandline, tests etc. It adds a thin layer below them so they work but it saves a lot of time and with the coming of .NET Core it proved smart once again.
In the early days, my impression was that the build tools were totally overhauled every few months, and not always in compatible ways: JSON projects, XML projects, dotnet-cli.
There are a number of differences, including generics done well. The team behind C# seems really engaged and transparent. The latest C# 8 brings non-nullable reference types, similar to how Kotlin does it.
I don't agree with this, and I find the pace of writing in Go to be quite quick. I've recently been working on a server that performs arbitrary work pre-defined in DAGs, with an API, full tests and several fancy features. The initial bulk of the work took me about 3 weeks and the only dependency is a test mocking library.
I've been working with Go full time for 2+ years. After forcing myself to understand its philosophy and idioms, I surprised myself with not missing generics at all.
The language seems to have found a nice spot, hence its success.
Strangely, I feel the day Go has generics is the day its attractiveness will start to fade away.
Do you really need to debug errors from implementing a stack? If you mean compile-time errors, I would look into getting your editor to report them while you type.
People constantly screw up the simplest and easiest things, especially when there are hundreds of simple and easy things in a program. And Go makes sure that you have to write a hundred simple and easy things every time you work with it to solve problems.
You won't find hundreds of purported simple/easy things that are exclusive to Go code.
There are probably fewer than 10 cases where I implemented some simple thing (like a set of strings) that Java gave for free because of a library function that used generics.
More importantly, this will take me a minute to read and two minutes to make sure it’s doing the right thing every time I encounter it somewhere new.
I don’t think I will ever understand why so many Go advocates seem so openly hostile to encapsulation. If Go had used C-style for loops, I swear there would be people here defending the decision saying:
    sum := 0
    for (i := 0; i < len(a); i++) {
        sum = sum + a[i]
    }
“See? Only four lines. What’s the big deal? It took me literally thirty seconds to write.”
Is that an O(n) implementation of pop? I hope you realize my point...
I looked it up and it's not but the point is that you shouldn't have to worry about these things, including implementation details of trees and hundreds of other data structures.
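For reference, the slice-backed idiom under discussion looks something like this (a minimal sketch; correct, but re-typed and re-verified for every element type):

    package main

    import "fmt"

    func main() {
        // the usual slice-backed stack idiom: amortized O(1) push and pop,
        // re-implemented by hand in the absence of generics
        stack := []int{}
        stack = append(stack, 1, 2, 3) // push
        top := stack[len(stack)-1]     // peek
        stack = stack[:len(stack)-1]   // pop
        fmt.Println(top, stack)        // 3 [1 2]
    }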
So, basically, textual, non-syntax-aware macros? It's not uncommon to use such in C to define generic data structures and operations, but it's very painful to write and maintain... and besides, it has been widely considered a pain point in C for decades. It shouldn't be necessary in a modern well-designed PL.
Error handling has to be one of the worst things about Go. Even though IDEs can highlight where you've forgotten to bind an error returned from a function to a variable, they don't prompt you if you then forget to return that err back up the call stack; it's easy to just log the error and forget to actually return it.
The only way I've found to avoid RSI and make error handling just about tolerable is to use an IDE that supports code autocompletion - e.g. type `err` and hit tab and it should replace it with an `if err != nil {}` block. The other thing is to use pkg/errors[1] and wrap the error so you've actually got a stack trace if you want to log the error further up the call chain.
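For example (a minimal sketch; doThing is a hypothetical stand-in, while errors.New and errors.Wrap come from pkg/errors and record stack traces):

    package main

    import (
        "fmt"

        "github.com/pkg/errors"
    )

    // doThing is a hypothetical stand-in for any failing call
    func doThing() error {
        return errors.New("underlying failure")
    }

    func run() error {
        if err := doThing(); err != nil {
            // Wrap annotates the error and records a stack trace
            return errors.Wrap(err, "running the thing")
        }
        return nil
    }

    func main() {
        fmt.Printf("%+v\n", run()) // %+v prints the recorded stack trace
    }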
Error handling is a solved problem - exceptions, or like the article says monads if the language is functional. The fact that Go supports neither out-of-the-box is a frustrating waste of time.
Monads are not a functional concept. For example, an imperative language like Scala (which, being in the ML family of languages, has a powerful and first-class functional sublanguage) has monads that work perfectly fine with imperative computation. Indeed, I find it a good programming style to let trivial, ubiquitous monads, like printing / IO / global state, be handled by the built-in primitives, while higher-level, unusual, application-dependent effects are implemented explicitly using monads.
Monads are essentially a form of overloading / re-defining composition operators, e.g. the semicolon operator in traditional imperative languages. In languages like Haskell and Scala, types (and higher kinds) are used for this purpose, hence it is often thought that monadic programming is restricted to statically typed languages with powerful types and doesn't make sense for dynamically typed languages. Nothing could be further from the truth. All we need is a 'hook' into the sequencing mechanism(s) the language provides, at least in principle. Admittedly, the outcome is not nearly as pretty as in statically typed languages. See [1, 2] for discussions.
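A rough Go illustration of that "overloaded semicolon" idea (Result and Then are hypothetical, not a standard API; Then only runs the next step if no error has occurred yet):

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    // Result is a hypothetical error-carrying value
    type Result struct {
        val string
        err error
    }

    // Then is the overloaded "semicolon": it skips f after the first failure
    func (r Result) Then(f func(string) (string, error)) Result {
        if r.err != nil {
            return r
        }
        v, err := f(r.val)
        return Result{v, err}
    }

    func main() {
        r := Result{val: "hello"}.
            Then(func(s string) (string, error) { return strings.ToUpper(s), nil }).
            Then(func(s string) (string, error) { return "", errors.New("boom") }).
            Then(func(s string) (string, error) { return s + "!", nil }) // skipped
        fmt.Println(r.val, r.err)
    }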
What you write is technically correct for a sufficiently relaxed definition of the word "monad", but omits a very important point: when something of type Integer is evaluated in a Haskell program, the evaluation is guaranteed not to have side effects. The language and the standard library have been carefully arranged so that that is the case. If you want the evaluation to have a side effect, you have to change its type to `IO Integer`. Kiselyov's approach to monads in Scheme in contrast make no such guarantee and cannot be modified to make the guarantee. The situation with monads in Haskell is analogous to the situation around data hiding in certain "pure" object-oriented languages which have been carefully arranged so that the only way to mutate the state inside an object is to send a message to the object. Both guarantees make it easier to reason about programs. (Another example of a language that has been carefully arranged to make a guarantee is how Rust guarantees against use-after-free errors.)
That is really all I have to say. The rest of this comment is designed only to prevent my being dismissed by a reader because what I wrote above is not precise enough.
When I referred above to monads in Haskell it would have been more precise for me to write "the IO monad and if you are into monad transformers, user-defined monads with the IO monad in its implementation". Monads have multiple uses in Haskell. (e.g., the List monad is involved in the implementation of list comprehensions.) It is only the IO monad (and maybe the relatively obscure ST monad, but every use of the ST monad can be transformed into a use of the IO monad with some loss of conciseness and maybe readability) that is involved in the guarantee about side effects.
When Haskell and its standard library were being carefully arranged, the designers had to make some choices as to what is considered a side effect and consequently what is in the IO type. For example, a loop does not have to have `IO` in its type signature in Haskell, so looping endlessly is not considered a side effect, even though maybe it should be. In fact, it is possible to define a language similar to Haskell where a loop must have `IO` in its type signature: specifically, you get rid of tail recursion, with the result that the only way to write a loop is to use the fixed-point operator (i.e., the Y combinator, which, since you got rid of tail recursion, you have to provide as a primitive of the language), and then instead of giving Y the type (a -> a) -> a you give it the type (IO a -> IO a) -> IO a. There is some ambiguity, in other words, in what is considered a side effect, but that ambiguity does not undermine my point, because any skilled definition of side effect in Haskell will provide some guarantee.
I agree monads can be understood in multiple ways.
But so can side-effects -- as you point out yourself ("designers had to make some choices as to what is considered a side effect"). Why should Haskell's definition of side-effect be the be-all-and-end-all of analysis on what counts as an effect? I do think that Haskell's choice of what counts as an effect is wrong, both from a theoretical POV and pragmatically. I'm happy to defend this claim, but HN is probably not the right venue for this. (Just two hints: (α) if you think about types from the POV of guaranteeing linearity by types, then non-termination is clearly an effect, and not one that should be IO; (β) Scala's approach to monads doesn't guarantee effect freedom, yet is widely used.)
I consider the (co-)monad abstraction to be orthogonal to effect definitions, with the purpose of (co-)monads being to allow abstracting out certain boilerplate to do with composition.
Conditions can be signalled, handled & resignalled like exceptions, except that they can be optionally ignored and handlers can optionally choose to restart. An exception unwinds the stack, while a condition is handled prior to unwinding, which means that the available recovery strategies are more numerous.
Exceptions are an extremely useful and widely used tool for dealing cleanly with certain classes of errors (through a non-local jump out of a context to a dynamically bound target); let's call them simple errors. Maybe something that can be oversimplified like this:
    def main(...)
      try
        ...
      catch
        case e => print "Something's wrong, aborting"
(Note that in Go's original main use case (writing low-level servers), such simple error handling is rarely what's needed.) If you need restarts etc., then standard exceptions are not the right tool. In my experience, it's worthwhile in such situations to stop thinking in terms of errors and instead see the behaviour you are tempted to classify as erroneous as part of the normal algorithm, using control constructs other than exceptions.
They've been perfectly fine for the majority of popular languages and are a vast improvement over manually mimicking them the way most people do with Go.
The design pattern espoused here is dated. The emerging consensus for exception-free Go error handling entails automatic dispatch of errors to local handlers. See the golang wiki[1].
I've drafted a menu of possible requirements for Go 2 error handling[2] because the Go team's Draft Design document omitted a requirements section.
This kind of reminds me of the story of promises in JavaScript, which are almost but not quite monads. Among other things, this has made implementing cancellation difficult. In addition, if they were monadic, the async/await syntax could have been trivially generalized to arbitrary monads. That would have been nice.
Also, IxJS and RxJS provide some really nice sugar for working with higher-level abstractions around "pull" and "push" flows. The protocols associated with `Symbol.Observable` and `Symbol.asyncIterator` have small surface areas, so it's not difficult to hand-write implementations (using generators and async generators) to adapt code that wouldn't otherwise fit in the flow.
A little off-topic: am I the only one who thinks Go is becoming a verbose language?
I have followed some of the creators for years because of their take on succinctness and simplicity in the software realm, and yet here we are with Go becoming somewhat verbose to write, in my opinion, as opposed to C or Python, or even Rust, if you will, for example...
I find the language mostly concise, i.e. new types / funcs / etc are about as small as they can be (without type inference), but the type system tends to result in an absurd amount of error-prone duplication and/or wrappers. And the wrappers that can clean up the duplication take a fair amount of work each time, and must be repeated each time, so they really only exist when there are dozens of places where it's useful, rather than some smallish number - everywhere else you just `if err != nil { return err }` and that just keeps accumulating.
Compare the 3 approaches; I'd rather go with the first one. Yes, it's verbose, ugly, repetitive. But it's dead simple and easy to understand. The other two approaches just make the code harder to reason about once it gets bigger.
N years later, maybe we'll have the handle/check proposal[1] implemented. Which I'm actually fairly optimistic about - much of my error handling would be simplified by it, since in many cases (90%+?) any error can be treated as fatal.
There are, fortunately or unfortunately, a lot of very good counter-arguments and alternate proposals and specific implementation debates. So I'm not sure how long it'll take, and as much as I Want It Now™ I do think it's important to be careful for the long-term health of the language.
FWIW, persistence of I/O errors is precisely how stdio works in C. When fwrite encounters an error it will continue to return failure (for the same FILE handle) until invoking clearerr. ferror is used to query the error value.
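In Go terms, that's essentially the errWriter pattern from Rob Pike's article; a minimal runnable sketch, with the final check playing the role of ferror:

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // errWriter mirrors stdio's sticky error: after the first failure,
    // later writes are no-ops and the retained error is inspected once
    type errWriter struct {
        w   io.Writer
        err error
    }

    func (ew *errWriter) write(buf []byte) {
        if ew.err != nil {
            return
        }
        _, ew.err = ew.w.Write(buf)
    }

    func main() {
        ew := &errWriter{w: os.Stdout}
        ew.write([]byte("header\n"))
        ew.write([]byte("body\n"))
        if ew.err != nil { // the ferror analogue: query the error at the end
            fmt.Fprintln(os.Stderr, ew.err)
        }
    }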
And I dislike the persistent-error setup in Rob Pike's example for the same reason as disliking global errs in C: they're easy to forget, especially because they're abnormal. And history keeps showing these things being forgotten in large numbers, so I don't see why it wouldn't repeat with Go.