
Some people need to use Rust in places where exceptions aren't allowed (because the unwind tables and cleanup code are too big). Those people include virtually all browser vendors and game developers.

Furthermore, exceptions have this nasty codegen tradeoff. Either you make them zero-cost (as C++, Obj-C, and Swift compilers typically do), in which case throwing an exception is very expensive at runtime, or you make them non-zero-cost (as Java HotSpot and Go 6g/8g do), in which case you eat a performance penalty for every single try block (in Go, defer) even if no exception is thrown. For a language with RAII, every single stack object with a destructor forms an implicit try block, so the non-zero-cost approach quickly becomes impractical.
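To make that concrete, here is a minimal Rust sketch (the names are illustrative, not from any real codebase): any value with a destructor that is live across a call that can unwind forces the compiler to emit cleanup code for the unwind path, exactly as if the call were wrapped in a try block.

  struct Guard(&'static str);

  impl Drop for Guard {
      // Must run on normal exit *and* during unwinding, so every call made
      // while a Guard is live needs a landing pad (or equivalent bookkeeping).
      fn drop(&mut self) {
          println!("cleaning up {}", self.0);
      }
  }

  fn might_unwind(fail: bool) {
      if fail {
          panic!("boom"); // stand-in for any callee that can unwind
      }
  }

  fn work(fail: bool) {
      let _g = Guard("file handle"); // live across the call below
      might_unwind(fail);            // implicit "try": unwinding must drop _g
  }

  fn main() {
      work(false);
  }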

The performance overhead of zero-cost exceptions is not a theoretical issue. I remember stories of Eclipse taking 30 seconds to start up when compiled with GCJ (which used zero-cost exceptions) because it throws thousands of exceptions while starting.

The C approach to error handling has a great performance and code size story relative to exceptions when you consider both the error and success paths, which is why systems code overwhelmingly prefers it. It has poor ergonomics and safety, however, which Rust addresses with Result. Rust's approach forms a hybrid that's designed to achieve the performance of C error handling while eliminating its gotchas.
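As a rough sketch of what that hybrid looks like (the ConfigError type here is invented purely for illustration): the error is an ordinary value, the success path is a predictable branch, and no unwind machinery is involved.

  #[derive(Debug)]
  enum ConfigError {
      Missing,
      Malformed,
  }

  // The error is just another return value: no unwind tables, no landing pads,
  // and the caller decides what to do with it.
  fn parse_port(s: &str) -> Result<u16, ConfigError> {
      if s.is_empty() {
          return Err(ConfigError::Missing);
      }
      s.parse::<u16>().map_err(|_| ConfigError::Malformed)
  }

  fn main() {
      match parse_port("8080") {
          Ok(p) => println!("listening on {}", p),
          Err(e) => println!("bad config: {:?}", e),
      }
  }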




> Some people need to use Rust in places where exceptions aren't allowed (because the unwind tables and cleanup code are too big). Those people include virtually all browser vendors and game developers.

Kernel developers are another good example.


The NT kernel uses exceptions all over the place, in the form of SEH. Accessing an unmapped user pointer? Exception. NTFS log error? Exception. There's no fundamental incompatibility between ring 0 and exceptions.


I'm not sure it's a positive example.


I actually prefer NT exceptions to __copy_from_user for obtaining values from userspace. With SEH exceptions, the kernel can just access user pointers directly and handle possible errors in a natural way. This pattern is especially nice because NT allows ring-0 code to call system call functions directly, and in that case, an input pointer might be a kernel pointer, meaning that the SEH block does no harm and everything Just Works.

__copy_from_user feels very awkward by comparison.


It sure is a positive example. NT kernel is very successful and well respected.


The argument made by others--that you are working from a mental model of exceptions as made popular by Java as opposed to as a generic language feature for removing boilerplate (essentially syntax sugar for a specific kind of error monad)--should be well examined (and I don't want to detract from it, as it is really important and seems to have colored your perception quite badly :( enough that it seems to have led you astray even in this comment... one of the big issues with Java exception performance is the required reification and storage of stacktraces, a language quirk that gcj must honor). But I am going to bring up a related point about error-handling performance: C's error handling is expensive (in addition to being verbose and unsafe, Rust sadly only trying to work on the latter) because it requires functions that could possibly return an error but normally don't (which certainly should be "all functions that claim to return errors", though I will argue is actually the set of "all functions", but this is a longer argument) to have to go out of their way, moving around values in registers at minimum, to indicate "success".

C++ zero-cost exceptions make successful code run fast--faster than the equivalent successful C code--and in the cases where the compiler infrastructure (or more likely in 2015, cultural baggage, as it really doesn't take much effort to bootstrap exceptions) or memory constraints prevent them from being used, the great thing about the C++ mindset is that they don't cost you anything to ignore: even in a program that has exceptions elsewhere, if your code doesn't need unwind semantics (and a lot of code doesn't), you don't take any overhead; and, as kernel and game developers also often build their own data structures--not to avoid exceptions, but for other important pragmatic reasons (including but not limited to virtual memory restrictions, control over non-determinism, and extreme case-specific optimization)--it is easy (and, I would argue, strongly preferable) to free most developers from frustrating error handling boilerplate while still seeing the developers who somehow need to avoid these features having no issues doing so (as we do in the C++ community).


Sometimes you want the error case to be fast. That's actually what everyone who is bringing up this argument is implicitly acknowledging by saying that you shouldn't use exceptions for all errors. In that case you need an alternative mechanism for error handling, which is readily provided by Rust's facilities. It is not true that Rust only cares about performance of errors and not ergonomics, and I don't know where that idea came from--that's why we have the try macro and unwrap!

Rust does follow the model of "exceptions for truly exceptional cases"--that's what panic is for. It's implemented with unwinding under the hood, just as in C++, and you can change it to abort if you want.


C++ allows either error cases or non-error cases to be fast, depending on your use case. Rust essentially leads to code where the successful case (which really is almost all code; even in your Eclipse situation, which I will again stress is a flawed datapoint, there is also a ludicrous amount of successful code) is both verbose and slow. What the comments here were indicating is that with a library that has been well thought out, certain kinds of activities that are known to lead to errors that developers need to check in the moment are implemented with case logic while most errors can be left implicit. No: unwrap and try! are the enemy, not the solution. This is like arguing that C++98 solves the ergonomics of types by providing typedef: a better solution is C++11's auto, and the correct solution is inference.

When I have seen Rust code, it is always littered with error propagation logic, which seems to be due to a failure by language designers to realize that most use cases for errors should be using a mechanism like panic, in turn leading to panic being pretty useless: as it can only throw a string, it is impossible to provide the kind of rich context that would be needed 50 levels up the call stack (maybe all the way up in main) to report the error to the user (which ranges from descriptive information to technical detail). This is what most, and I would argue almost all (but going this far seems to require an hour of your time and a whiteboard, as people have been so trained to try to handle their errors constantly :/), use cases for errors are: to store and provide rich context on the error, whether in a log for an administrator or for the operator.

This is what a command line tool does with almost all errors, this is what a graphical application does with almost all errors, and this is what a website does with almost all errors. So while it would be great to try to sell someone on using panic, as in reality almost all errors in their program should be using panic, that means that all those errors have to, specifically at the point of the error or very near to it, be translated all the way down to a useless string... this will remain in the code for maybe a day before you sigh and get angry at how rigid your error reporting is and in the process of fixing it demand 1) more structured context or 2) the ability to annotate the error with more context at a few key places on the call stack.

To do either of these things in Rust requires that you switch to the painfully verbose (and unfortunately slow!) case-based error handling mechanism, which then forces you to throw down unwrap calls and try! expansions everywhere in your program even if you have very little logic that is even capable of failing in any normal sense, as error propagation in Rust is manual :/. I would love to use panic in my website or tool, and "omg this failed in a way I can't recover" is absolutely the semantics I want for all of my errors, but as far as I've been able to see--even in code from Yehuda, who sadly moved to Portland, making it difficult for me to sit down with him at a whiteboard for an hour :/--to be able to report (not handle: I don't want to handle panic, just report it) these errors in any reasonable way always leads to not being able to use panic, despite it being the more correct tool.


Zero-cost exceptions are occasionally required for performance, yes. They usually aren't--I think the strongest argument for them is in parsers, where you may be performing tons of I/O that could potentially error and don't want to bloat the icache with result handling. Either way, for two reasons (performance, and making sure you can catch exceptions at C FFI boundaries) Rust provides a `catch` in unsafe code. What it discourages is using them for semantic reasons.

I wish you would be more realistic about the tradeoffs of exceptions, though. The win in compile time alone from not having landing pads everywhere is substantial, and the "error handling boilerplate" you discuss isn't generally agreed to be boilerplate.


A big argument for using systems programming languages is that they are fast: you are writing code that is compiled once and run thousands of times per second on millions of computers. Why would I care so much about compile-time performance? If I need a faster compiler mode during development (though in my experience, the pain of C++ compile times is due to its multi-stage transformation and lack of separate compilation for templates, not its exception handling... I don't find my projects that turn off exceptions to compile much faster), the compiler can also implement a really slow mechanism for exceptions as an alternative.

I also maintain that requiring the developer to do manual error propagation to get errors from the point of failure to the point of reporting to the administrator or the operator (which is what happens to almost all errors, whether you are writing a command line tool, a graphical application, or a website) is "boilerplate". At the simplest, without changing the error model, littering the code with calls to try! that push the error upwards is automatable (and I have a friend working on a compiler plugin to accomplish exactly that).

An example: I like statically-typed languages, but as demonstrated by languages with type inference, having to declare every type every time it is used is "boilerplate", especially if types are long and unwieldy. C++98 has typedef, which lets me take C++'s long and unwieldy type definitions and make them shorter. If you are used to typing them out manually, every single time, some might even claim this is "ergonomic". However, C++11's auto is much better, and as demonstrated by languages like OCaml and Haskell even that isn't as good as it can get. C and Go have long and unwieldy error propagation (checking flags and manually returning new ones). Rust provides macros and tooling for this that make it safer (no more "goto fail;" or "Rage Against the Cage" exploits) and easier (like typedef!), but that doesn't mean it isn't still "boilerplate".
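As a hedged illustration of that point (standard-library calls only, nothing project-specific): the same propagation written out by hand and then collapsed by the macro; try! was the spelling at the time, and the later ? operator expands to essentially the same match.

  use std::fs::File;
  use std::io::{self, Read};

  // Long-hand propagation: check each call, return the error, continue otherwise.
  fn read_config_manual(path: &str) -> Result<String, io::Error> {
      let mut f = match File::open(path) {
          Ok(f) => f,
          Err(e) => return Err(e),
      };
      let mut s = String::new();
      match f.read_to_string(&mut s) {
          Ok(_) => Ok(s),
          Err(e) => Err(e),
      }
  }

  // The same function with the propagation boilerplate collapsed.
  fn read_config(path: &str) -> Result<String, io::Error> {
      let mut s = String::new();
      File::open(path)?.read_to_string(&mut s)?;
      Ok(s)
  }

  fn main() {
      println!("{:?}", read_config_manual("Cargo.toml"));
      println!("{:?}", read_config("Cargo.toml"));
  }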


I mean, sure. But I've personally benchmarked a lot of code that uses Result and noted where it caused performance issues, if it did, fixed the code so it didn't anymore, and moved on with my life. Most of the time, it wasn't even close to being a bottleneck--most of the time the branch is correctly predicted and the time is just spent copying bytes for the `Ok` value, which is basically a nonissue if you can keep your error word-sized unless you're calling a tiny function in a tight loop that LLVM can't inline for some reason. OTOH, landing pad generation in Rust probably doubles its compile time, which is already pretty slow (which is one reason I personally will probably just compile with panic! -> abort, when that option becomes available).
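For what "word-sized" buys you, a small sketch (the ReadError enum is made up for illustration): a fieldless error keeps the whole Result small enough that returning it is basically a register move plus a well-predicted branch.

  use std::mem::size_of;

  #[allow(dead_code)]
  #[derive(Debug, Clone, Copy)]
  enum ReadError {
      Eof,
      Interrupted,
      InvalidData,
  }

  fn main() {
      // A fieldless error enum keeps the Result tiny...
      println!("{} bytes", size_of::<Result<u32, ReadError>>());
      // ...while a heap-owning error grows every return value that carries it.
      println!("{} bytes", size_of::<Result<u32, String>>());
  }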

I don't consider the error handling boilerplate because I do frequently want to recover, even from supposedly fatal errors. For example, if I'm handling an iterator of things, and some of them fail, I still usually want to keep processing them before I return the error. Exceptions lock you into a very rigid pattern of error reporting that makes that much more difficult to deal with. Because libraries don't assume that I want to fail when they do, it can be easy and fast for me to do this, even for pathological inputs; that wouldn't be the case if everyone used exceptions.
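A hedged sketch of that "keep going, then report" pattern (the parsing here is just a stand-in for whatever per-item work might fail):

  fn main() {
      let inputs = ["1", "2", "oops", "4", "also bad"];

      let mut values = Vec::new();
      let mut errors = Vec::new();

      // Process every item even when some fail; decide afterwards what to do
      // with the failures instead of unwinding at the first one.
      for s in &inputs {
          match s.parse::<i32>() {
              Ok(n) => values.push(n),
              Err(e) => errors.push((*s, e)),
          }
      }

      println!("parsed: {:?}", values);
      if !errors.is_empty() {
          println!("{} item(s) failed: {:?}", errors.len(), errors);
      }
  }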

I agree with you that Rust could do better at providing options for error reporting with panic. I believe there's an RFC about that right now, please comment on it!


I have been a games developer in a past life (my primary focus for over three years), and I am unconvinced I or any of the other people I worked with would have been willing to tolerate a low-grade performance degradation distributed around all of our code if we had access to programming languages that did not have that issue and weren't infinitely worse (and while I really wish I could use Rust, it just isn't infinitely better in a way that seems relevant for either the hardcore or casual gaming markets). Also, it sounds like there is something wrong with the Rust compiler if this landing pad generation really doubles the compile time... a poor compiler should not influence the language design in such a negative direction.

Solving problems like your iterator issue should not be difficult with exceptions: in fact, frankly, I don't see why it is any harder than with Result: your worst case is that you start with a primitive, "antitry!" (which should be called "try!", as it makes more sense than what "try!" currently does for that word) which performs an operation and returns a Result of either its value or its failure. Even if you do this "frequently", I would be shocked if this was remotely as common as the code that was assuming success and just wanted to fail if something fails. And, as long as you aren't trying to "handle" those errors (and it doesn't sound like you are, you just kind of want to fail out with a list of failure reasons) there is no semantic complexity either. Exceptions are the flexible primitive, not Result: simulating Result from exceptions requires a tiny finite amount of code, while simulating exceptions using Result requires modifications to an arbitrarily large amount of code.

As for fixing panic, what really sucks is that so much library code is going to be written using Result now when much of it should have been written with panic, both in the core library and from third-party developers, meaning that the boilerplate is going to be somewhat inescapable at the lowest levels of the program... I think the thing I like most about programming in Python vs C++ is that I don't have to wrap every single API I use to make it handle errors in a sane way. I don't think that fixing panic at some point in the future is going to work as there seems to be such awkward community momentum against exceptions and behind Result, when the real RFC needs to be "when you write your library, try to use panic, not Result".


> Also, it sounds like there is something wrong with the Rust compiler if this landing pad generation really doubles the compile time... a poor compiler should not influence the language design in such a negative direction.

Um, actual, non-trollish question: do you know how exceptions are implemented? Because I can't think of a way around this in a language with destructors, short of some sort of "doesn't throw" effect (which Rust doesn't currently have) or aborting on every exception. Do you have any pointers? Similarly, I don't understand how your "antitry" solution would be efficient on pathological input--"zero-cost" exceptions are far from that when the exceptions actually happen (and can be made to happen by malicious users). You also haven't addressed how I'm supposed to know to use antitry in the first place, which is one of my biggest issues with unchecked exceptions--again, the only alternative I see is a "throws" effect. All of these are language level concerns, not implementation-level ones.

> Exceptions are the flexible primitive, not Result

Catchable exceptions require coding transactionally if you want to use them everywhere and maintain strong exception safety. There's an enormous cost associated with that too; and worse, it can't be checked by the compiler.

Again: I have actually benchmarked a lot of code that uses Result, explicitly checking to see if it was a performance issue. It's almost never a performance issue. So performance-related arguments for why exceptions should be used everywhere are going to fall on deaf ears here. I am far more interested in seeing whether we can eliminate panics altogether for most of the common cases (e.g. with compile-time checked range types).


> C's error handling is expensive (in addition to being verbose and unsafe, Rust sadly only trying to work on the latter)

Did you read the OP? I realize the post is long, but the short story is that error handling in Rust can be concise and ergonomic. It's only verbose if you're doing explicit case analysis everywhere. I've found error handling in Rust to be the exact opposite of frustrating. It's pleasant and explicit.


I consider having to unwrap or try! on essentially every function call to be verbose. I did not read all of the text, but I did look at all of the code examples, as I was hoping this was an article about how to do it better than Rust seems to consider idiomatic, but was disappointed when even the examples at the end didn't seem to have any "magic". I then scrolled back to the table of contents and realized "oh, this is just a manual". Can you point to somewhere in the documentation where it shows error propagation (which is the key use case, not "handling") being as easy as it can be in C++ or Python? FWIW, "implicit, but correct" should be the goal, not "explicit" (in the same way that types should be inferred when possible, not manually specified: a world where obvious things must always be explicit leads to Java).


It will never be as easy as in Python because Python uses unchecked exceptions. (Frankly, I hate error handling in Python.)

I don't know Java or C++, so I can't help you there.

> I consider having to unwrap or try! on essentially every function call

They aren't on every function call. Only on functions that can return errors. The type signatures will tell you which.

> to be verbose.

I don't. In fact, I love how errors are handled. It's explicit, concise and ergonomic. We'll just have to disagree, I guess.


I also hate error "handling" in Python, but I thankfully long ago realized that what I want is error propagation and reporting, and almost all of my use cases for "handling" errors were mistakes: what I wanted was a mechanism similar to panic that could propagate rich error context (as opposed to just strings), and that is what unchecked exceptions are. Most of my Python code that tried to "handle" errors now gets removed and replaced with better semantics: the kind of failure semantics that one might expect to get from panic, but can't, as panic loses too much context to be useful for reporting (forcing developers to use case logic).

Rust programs are littered with calls to unwrap and expansions of try! for the sole purpose of doing manual error propagation, which is both verbose in the code and slow at runtime (as the successful case must constantly produce and then check error results), just so the error can get to the top level somewhere and be reported to someone (which in turn drags a surprisingly large amount of your code into the domain of "functions that can return errors"). None of that boilerplate should be required: even with an insistence on the mechanism, it is such a clear case of boilerplate that a friend of mine finally ended up working on a compiler plug-in whose purpose was to just add calls to try! around anything that seemed to require those calls.


I like explicit error handling. I don't want errors bubbling up automatically.

As I said, we'll just have to disagree.


Clearly ;P. I just hope you realize that you sound a lot like the people working in languages like C or Java or Go that insist that adding abstraction and automation is bad as they want everything to be explicit and transparent :(.


That's absurd. I wrote the OP, and the entire article is about using abstraction to make error handling more convenient. Here's an excerpt from my post that describes my true feelings on the matter (because we don't all fit nice and neatly into pre-conceived buckets):

> My advice boils down to this: use good judgment. There’s a reason why the words “never do X” or “Y is considered harmful” don’t appear in my writing. There are trade offs to all things, and it is up to you as the programmer to determine what is acceptable for your use cases. My goal is only to help you evaluate trade offs as accurately as possible.


Maybe this is going to be "way too Hacker News", but to me this is the "blub paradox": C developers could claim it is absurd to say their arguments for explicit and transparent logic are somehow against abstraction and automation because they, unlike their assembly brethren, opted for structured control flow and rigid function conventions, and Java developers can say the same about the way they abstracted explicit polymorphism. That doesn't change the form of the argument they are making about going even further.

Having to litter code with unwrap and try! just because somewhere much lower an error is used and much higher an error is reported does not seem beneficial unless the alternative is to do it in a way that is even more manual and less safe, but that's not how it works in the languages I use on a daily basis. Yes: C and Go make dealing with errors so painful and dangerous that Rust is absolutely amazing in comparison... but I don't program in C or Go and I would hope we aren't using them as the benchmark when we have C++.

I encourage you to read my example with type declaration that I wrote elsewhere in this thread.

https://news.ycombinator.com/item?id=9552612


Recognizing trade offs is not the same as the blub paradox. Note that I responded that way because you literally accused me of not liking abstraction at all: "you sound a lot like the people working in languages like C or Java or Go that insist that adding abstraction and automation is bad as they want everything to be explicit and transparent." If you have a more nuanced argument to make, then do so, but I'm not going to let you paint me into a box.

See pcwalton's comments in this thread on exceptions.


Read that quote again: I say "add" because I mean it; I would never argue that a C developer does not like abstraction at all because that is obviously false (functions, control flow, text macros). You can't say that I "literally" said you don't like abstraction at all, as that's not what I am arguing: if anything, I am arguing that, due to the blub paradox, seeing the difference between the automation you encourage and the automation you have a distaste for is complex at best and impossible at worst; I look at your article and, as I have every other time I have examined Rust, am left wanting for more abstraction and automation when it comes to errors.

Your argument that you do not want error propagation to be implicit is strange to me, as a developer who is used to this being automatic, because it does not affect safety to automate and is effectively nothing but boilerplate. There is nothing different in the form of your argument against exceptions from the form of the arguments made by developers of C or Go or Java against adding features like templates or optionals or macros. It isn't that Java developers hate abstraction in all forms (that would be an absurd argument that I would need to pull from an AbsurdArgumentFactory ;P), it is that the form of argument that we often hear from all of these developers about the features we like in languages like C++ or Rust or D has the same core structure :(.

On pcwalton's notes on exceptions, I already responded to him in my comment that started my involvement in this thread, and went further in my first reply this morning: his mental model seems stuck in the implementation of exceptions as defined by Java and as used by Java developers. His datapoint for why they are so slow is from an old story about gcj: the Java exception model requires a stacktrace reification, and you would not be able to replicate that kind of performance loss in C++, even though they share the same implementation of propagation. His API issues are all (as argued by others) based on the awkward way that Java developers have made exceptions used for things that aren't really errors.

Have you developed in Erlang? This is a language that considers errors and failures to be the core problem faced by developers, and which has structured almost all other decisions surrounding that core premise. After spending a lot of time coding in Erlang a few years ago, it fundamentally changed the way I think about errors, and that was after over 20 years of software development in languages like C++ and Java. When I was (comparatively) green and naive, back in 2001, I even made an argument for checked exceptions to Microsoft in C# (it was partly my argument for RAII in that same thread that got us the using syntax sugar for IDisposable): that is how far I had to come in the subsequent decade.

Erlang has the right abstraction: a mechanism for exceptions that is designed somewhat similarly to panic, in that it encourages you to structure your code in ways that isolate failures behind processes and avoid ever "handling" errors, but which lets you propagate and augment rich error context through multiple process layers implicitly, making error propagation automatic and without boilerplate. Once you "get it", you can do this easily in any language that has unchecked exceptions (even Java, using RuntimeException as the new exception root, but you have to really hate Java to pull this off ;P), as function calls can be mentally modeled as calls through process linkages (though you sadly then can't use the standard libraries provided by any of these languages, as they are all built with a broken model :().

Given how many other amazing things that Rust got uniquely right, it is somewhere between disappointing and devastating that error propagation, something which is so important, only seems to be competing with languages (like C, Go, or Java) where errors are almost unworkable :(.


> Have you developed in Erlang?

Yes.

> Erlang has the right abstraction

There is no right abstraction. If we can't minimally agree that there are trade offs at play here, then we can't have a productive conversation.

> and avoid ever "handling" errors

I want to explicitly handle my errors.

Just because you had great internal progress does not mean you arrived at some objectively correct solution. It obviously works well for you, but everyone is not you and not everyone has the same requirements or preferences as you.


The arrow I am providing for progress is one towards abstraction and automation; so, fine: we are right back to where we were before, with you making an argument that is of the exact same form as those people from C, Go, and Java who argue against adding abstraction and automation to keep things explicit and transparent, the people you were so painfully opposed to being lumped together with, and you seem to have exactly as little concrete rationale as they do for making that "tradeoff". It isn't clear there is any tradeoff in play here, and you haven't tried to demonstrate one.

My argument is "given that there is absolutely no demonstrated benefit to doing so, and even some clear downsides (performance of normal successful code), being forced to litter my code with tons of boilerplate--something you can clearly see if you glance almost anywhere in the code for Cargo--is a major and depressing step backwards". Your argument seems to still be "I don't want that much automation, I like things being explicit, and of course you should look again at the comments from pcwalton you already responded to that complain about Java (really just Java) being slow"... :/.


Explicitness has benefits.

Honestly, please just leave me alone and stop twisting my words. We can't even agree that there are trade offs involved here. There's no point in continuing.


> Furthermore, exceptions have this nasty codegen tradeoff

In OCaml (a language with blazingly fast exceptions), the worst that can happen for each try-block is a register spill or memory store [1]. For a managed language, that's excellent.

The only thing I don't like about OCaml exceptions is they are unchecked -- which is strangely out-of-place, given the ML emphasis on type safety.

The resistance towards checked exceptions doesn't make sense, either: used properly, checked exceptions could be virtually indistinguishable from typechecked return values. They also enable some really elegant control-flow constructs, although that may take some discipline to use properly in a large project.

[1] http://stackoverflow.com/questions/8564025/ocaml-internals-e...


> In OCaml (a language with blazingly fast exceptions), the worst that can happen for each try-block is a register spill or memory store [1]. For a managed language, that's excellent.

OK, but that's OCaml, a language with no value types (other than 31-bit ints) or RAII. So you don't need custom cleanup code in particular, since the runtime tagging tells you everything you need. That isn't an acceptable approach for Rust, which uses unboxed types with custom cleanup code everywhere. You would need an approach like the one Go uses to implement defer, which involves pushing and popping custom handlers onto a linked list all the time.

> The resistance towards checked exceptions doesn't make sense, either: used properly, checked exceptions could be virtually indistinguishable from typechecked return values. They also enable some really elegant control-flow constructs, although that may take some discipline to use properly in a large project.

We could have used checked exceptions, I guess, as they're isomorphic to ADTs, but I don't think there's much of a benefit when checked exceptions still require you to annotate signatures or catch the exception. Also, without subtyping of exceptions, the annotation burden would likely be worse. And the automatic wrapping and unwrapping of values in exception types would probably make systems programmers unhappy that extra indirections and data structures are being conjured up in ways that aren't immediately obvious.

Finally, checked exceptions have this annoying property whereby the easiest thing to do to handle them is to write "try { ... } catch (Exception e) {}", and programmers often pick the easiest thing to do. Compare to Rust's ".unwrap()", which does the sensible thing if you don't want to bubble the exceptions up to your caller, and is much less typing.


>Compare to Rust's ".unwrap()", which does the sensible thing if you don't want to bubble the exceptions up to your caller, and is much less typing.

Can you register a global panic handler in Rust apps yet? Because that's usually what you want to do in fail-fast scenarios for error logging etc.


> that's OCaml, a language with no value types (other than 31-bit ints) or RAII

Hence the "managed language". Such features would certainly be more difficult to do in Rust, and agreed that exceptions probably aren't the ideal error-handling mechanism given the goals of Rust.

I guess the point was that exception tradeoffs can be mostly overcome by a well-designed implementation, given the appropriate language. (Not saying other languages have bad implementations, although the entire OCaml runtime is a work of art that's very nice in more ways than just exceptions...)

> checked exceptions still require you to annotate signatures or catch the exception

Could the signatures be inferred with the exception type(s)? Not catching them at some point in the call stack would result in a compile error; no reason it has to be at the caller. Probably tricky for libraries, maybe having a notion of scope for exceptions would be useful. Although I'll admit I don't know the first thing about the nuances of type inference.

> the automatic wrapping and unwrapping of values in exception types

I could be missing something, but how would this be worse than one or maybe two dereferences at catch-site(s) and nothing in between? Is the problem with exceptions containing large unboxed values that have to be propagated up the call stack? Since call stacks can be arbitrarily long, this seems like another good reason to limit the "scope" for exceptions somehow.

> easiest thing to do to handle them is to write "try { ... } catch (Exception e) {}"

True, generic pattern matching has the same problem with the 'default' case (e.g., match _ -> ...), which is often a code smell. Moreover, the notion of a syntactic try-block seems like something that a "sufficiently smart" compiler could do away with, given enough inference: just match on the "returned" exceptional values [1]. But then again, what do I know about type inference...

[1] OCaml has a pretty nice feature which approximates this: https://blogs.janestreet.com/pattern-matching-and-exception-...


TBH, I can count on one hand the number of times I've seen an empty catch block without a justifiable comment. I think it's a straw man.


I've worked as a consultant on a ton of projects with teams of massively varying skill levels, and I too can count the empty catch blocks on one hand, provided that hand has hundreds of fingers.


This is a strong argument for not becoming a consultant.


I read it as a strong argument that consultancy, as a business, won't be going away any time soon.


I linked the study elsewhere, but unfortunately not: https://www.usenix.org/system/files/conference/osdi14/osdi14....


Checked exceptions don't work very well with generic higher order functions like map and fold, which is probably why OCaml chose unchecked exceptions. Even Java has backed away from checked exceptions in its streams api, so you see things like UncheckedIOException in the Java 8 standard library now.

If instead the results of the operations are fully encoded into the return value, then there is no problem with how to type map, fold, etc., because results are all handled uniformly by the accumulator (for fold) or the transformation function (with map), just as any other sort of value would be.
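A rough sketch of that uniformity in Rust (plain standard-library calls): map's signature knows nothing about errors; the Result flows through as an ordinary value and collect does the accumulator's job.

  fn main() {
      let ok = ["1", "2", "3"];
      let bad = ["1", "x", "3"];

      // collect folds an iterator of Results into a single Result<Vec<_>, _>:
      // all Ok values, or the first Err encountered.
      let parsed: Result<Vec<i32>, _> = ok.iter().map(|s| s.parse::<i32>()).collect();
      println!("{:?}", parsed); // Ok([1, 2, 3])

      let parsed: Result<Vec<i32>, _> = bad.iter().map(|s| s.parse::<i32>()).collect();
      println!("{:?}", parsed); // Err(..)
  }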


I don't really see the semantic distinction between checked exceptions and Either / Result style return values with monadic composition of code. There's a large difference in syntactic overhead, and there are some implications in how costs appear to be apportioned, but they are more or less isomorphic after exception-style has been converted to continuation passing style.

Map, fold etc. work just fine with typed exceptions if you parameterize map, fold etc. by the exception type. The problem comes with runtime polymorphism; runtime polymorphism is a firewall to static type flow. Runtime polymorphism is also fairly popular with Java developers.

(Note that e.g. a monadic map is implicitly parameterized by the "exception" type via a type argument of the return type (Either, Result, whatever you like) of the transform passed in.)


> "There's a large difference in syntactic overhead"

Well that's rather in line with what I meant by "don't work very well".

BTW, do you know of any languages/libraries that chose to define map/fold (or similar) to be parameterized over exception types like this? That sounds interesting enough that I'd like to see examples of what it would look like in practice.


> Checked exceptions don't work very well with generic higher order functions

Can you elaborate?

The only problem I can see is if the language forces the exception to be handled by the caller. That doesn't have to be the case: the language could require that the exception be handled somewhere in the call stack, which would fix this problem.


>The performance overhead of zero-cost exceptions is not a theoretical issue. I remember stories of Eclipse taking 30 seconds to start up when compiled with GCJ (which used zero-cost exceptions) because it throws thousands of exceptions while starting.

Then maybe the solution is to not have it throw thousands of exceptions while starting?

And have them be, well, exceptional?


APIs have to be designed to allow that in the first place. For instance, the easiest way to check whether a file exists and open it if so is to try to open it and handle the error, because "look before you leap" can be racy. But in exception-based languages, the only way to do that is typically to catch an exception, because the core file I/O APIs throw exceptions on error.


They do throw exceptions, but should they? Is trying to open a file that doesn't exist really an extraordinarily special condition? No, that happens all the time. File systems are known to be messy, untidy places that constantly have problems.

Opening a file should optionally return a file, and if it didn't work return an error value instead. This can be solved with multiple return values (like Go), or an easy to use option type (like Rust). Even C solves this by returning NULL and setting the global errno. But since Java doesn't have either, it uses exceptions for completely mundane cases which then makes their performance much more difficult to tune since they're everywhere.


Sure, but why would there be thousands of files missing for example?

I can understand something like an unmounted drive or a network being down leading to a slow Eclipse launch due to thousands of exceptions.

But those 2 are exceptional circumstances in themselves, and I think it's OK for exceptions to slow down a program in this case.

But would those 0-cost exceptions be much of a burden in a normal run of a program?

I don't see how checking for thousands of missing file references (or anything similar) is representative of what a program does in normal runs.


Consider, for example, checking for robots.txt and favicon.ico. Any search engine or browser is going to generate tons of 404 requests for these resources. If your HTTP library throws exceptions on 404, you're eating an unnecessary cost of stack unwinding on almost every navigation.

IIRC the Eclipse case was similar, checking for optional plugin metadata files or something like that.


An integer parsing function should not throw an exception on failure to parse (see .NET's Int32.TryParse()); and an HTTP library should not throw an exception on 404. These are spurious examples typically inspired by Java's dreadful (especially early) API design.
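For comparison, a small sketch of the same idea in Rust, where the non-throwing form is the default rather than a bolted-on TryXXX variant:

  fn main() {
      // str::parse returns a Result; a fallback is one combinator away.
      let parsed: Result<i32, _> = "forty-two".parse();
      println!("{:?}", parsed); // Err(..)

      let with_default = "forty-two".parse::<i32>().unwrap_or(-1);
      println!("{}", with_default); // -1
  }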

A REST API library might throw an exception for a 404. But even then it's debatable, because network errors are very common, so they're a poor fit for exceptions. Networks are laggy, so they're also a poor fit for synchronous APIs. Many procedural programming techniques are on shaky ground when they try to paper over the network.


TryParse is a hack around exception performance. In fact, all the TryXXX things usually are. It's purely an unfortunate hardware/performance decision that impacts library design. In F#, I'm happy to write:

  let i = try int s with _ -> -1
OCaml does some similar things, I think. You might use it to attempt to get an item from a dictionary, and return a default if not found. It's easy and elegant.

Saying "exceptions are for exceptional cases" is meaningless wordplay. Call them faults, traps, errjumps, whatever and it breaks down (faults are for faulty cases?). Library design should trump all, and sometimes the better way is to throw in many cases.


No, it's more than that. Exceptions are a useful signal that something has gone wrong with the application. If you can get into that mode of thinking, other benefits flow out of it. You don't get in the habit of catching low down in the stack unless it's a deliberate retry mechanism. You can rely on your debugger breaking on exception meaning something, rather than having to set detailed and careful exception breakpoints. When you use exceptions as a glorified alternative function return value, you risk creating accidentally broad exception firewalls - if you don't catch just the right subset of exceptions, you start swallowing genuine errors.

My advocacy of exceptions for exceptional situations is a cultural one, and the primary aim of the culture is so that you don't catch exceptions. You should almost never catch them. They should almost always bubble up. Only the server handler dispatch or UI event loop should catch, most of the time.

And you can only make that work if the library doesn't throw exceptions for situations where you are likely to want to handle them.

I tend to think of errors as falling into three classifications: (1) incorrect user input, (2) unexpected external condition revealed through a system call, and (3) programming error.

You almost never want to catch (3) - null pointer exceptions, overflow, division by zero, index out of bounds, etc. In some apps, you probably want to terminate. You don't want to catch the exception, you don't want to catch, wrap and throw the exception, because these just obscure the nature of the problem.

You might occasionally want to catch (2) to perform a retry or use an alternative strategy. Depending on how expected the error is, it might make sense to have two different styles of invoking the operation, one that throws, one that doesn't throw. Some code just cares about the happy path and doesn't mind blowing up on failure, other code needs to try harder.

But (1), user error, requires design. It means the user - normally the guy who, however indirectly, is paying for the code - has made a mistake, and that's probably our fault; we didn't do enough communication. We need to work hard to correct it. Exceptions come with technical messages. Technical messages are not user friendly. Throwing an exception would just mean we'd have to catch and translate it. It creates work. It doesn't help. Transporting the message up the stack automatically doesn't help either; context that could communicate the problem to the user is lost. Exceptions simply aren't the right tool. The affordances are wrong, automatic unwind is wrong, and the instincts they inculcate about catching, break everything else.


Yeah, this is interesting. Exceptions come with 2 different moral guidelines that actually contradict each other. The contradiction is not obvious on the surface:

(1) Exceptions can make your contract with your calling code nicer by offering an alternative way of unrolling the call stack. That way, your caller can deal with anything unexpected in a more convenient way.

(2) But the price of being able to do so is expensive. The whole system is designed and optimized for "normal" procedure calls making normal procedure calls and returning normally. Exceptions go against that grain and are very expensive. So only use them for "truly exceptional" situations.

The problem is, if you try to take advantage of #1, you run afoul of # 2.

TryParse vs Parse is a perfect example. The calling contract of Parse is so much cleaner because it takes advantage of # 1. But having an exception occur is too heavy because it runs afoul of # 2.

Problem is, an exception is almost _always_ too heavy. So it is an illusion (delusion) to think you can ever freely enjoy the benefits of # 1. If you need a guideline like # 2 at all, then # 1 is a lie.


So if you aren't using exceptions for all errors, then you need an error handling strategy for those non-exception errors such as parsing failures and 404s. Which brings us back to Rust's strategy: the Result type.


Those languages have choices: you can use exception handling or not, depending on the API context. Exceptions should be used for exceptional errors. For example, even in the file exists case, you can have an API to check existence (Java and C# do!), but maybe the file is deleted between that check and the open call...

Now without exceptions, you still have an existence check AND have an error-encoding result on open that is rarely in failure mode. Very annoying. You must combine the two calls even if separating them makes more sense.


Sorry... I want to make sure I have this right. Is your argument that exceptions would be a better idea really that the version with sum types doesn't let you get away with the common antipattern of checking for file existence before opening it? Because what you should actually do in that situation (not, I stress, just in Rust, but in any language!) is just try to open the file, which makes the check-and-open operation atomic and results in a single error call. In other words, this is an example of how the Result type guides you towards the correct solution--which is pretty much the strongest reason I can think of for including a feature in a language.


I'm saying, for the file example, you might want to check existence first anyways regardless as part of feedback to the user. You might not even want to open the file, just present it in a list of files that could be open.

I get it that Rust is designed for the command line where that doesn't really occur.


I find this example extremely unconvincing. If you're not talking about an immediate check (for example, you have a list of files in a sidebar), it's quite possible that it hasn't been refreshed for a long while; there's no particular reason to assume that the file is still there. And even if it is, there are many other errors that can occur when you open a file (insufficient permissions, for example). In that situation, I don't see why it's better to fail with an (unchecked) exception, which the developer is going to have to remember to catch and try to tie back to the originating event (which might be quite tricky, given how much I/O occurs in something like an editor), than to require the developer to handle the potential failure immediately (by popping up a dialog box saying the file was deleted, for example).

In any case, what you're talking about is hardly the common case to optimize for. In basically every language I've ever used, the majority of files were opened without any user interaction, in which case Rust's API is absolutely the correct approach.

That is not to say that there aren't times where you have to do a redundant check, but generally speaking they only occur when you have control over the entire system under consideration and statically know (perhaps can even prove) that the error can't happen. In that situation, you should of course unwrap(), as there is no good way to recover from a logic bug in the existing program.


> If you're not talking about an immediate check (for example, you have a list of files in a sidebar), it's quite possible that it hasn't been refreshed for a long while; there's no particular reason to assume that the file is still there.

Exactly. It is probably still there, just someone could have deleted it in the few seconds or so that it took for the user to make an action. UIs are filled with a bunch of invariants that are probably true but not necessarily so. 99% of these programs will just exception out if say that file is deleted between the non-atomic time it takes to do verification and action. And there is nothing considered very wrong with that: you screw with the environment of an application while its running, bad things are expected to happen.

Now, a language like Rust expects that everything is atomic in the normal case. But since users aren't very batchy, that means re-verifying things over and over again. It is not the overhead that is the problem here, the fact that the check is redundant is not the issue. The fact that the programmer is pestered into handling these cases is a huge issue: Java/C# will just exception out and that's the end of it (you don't really want to bother trying to handle that file being deleted in the few seconds it took for the user to make a decision, or if you deal with it, deal with it at a very coarse granularity).

Rust is not designed for writing interactive programs, it is designed for batchy systems code where "files are opened without any user interaction." But let's not discount why existing languages that are designed for writing user facing code make the decisions that they made.


I think we've hit on our point of disagreement:

> 99% of these programs will just exception out if say that file is deleted between the non-atomic time it takes to do verification and action. And there is nothing considered very wrong with that: you screw with the environment of an application while its running, bad things are expected to happen.

I would much rather strive to always do the right thing, if possible, because I do consider that wrong. The pervasiveness of the `Result` type in Rust makes it much easier for me to do so; the cost to you is that you must write `unwrap()` instead of nothing. On the other hand, if unchecked exceptions are the error handling default, I have no recourse in order to discover what possible errors can occur from a function call other than to read the function's source code, while you just save typing `unwrap()`. To me, Rust's solution seems like an eminently reasonable tradeoff.


I get that philosophy, it works wonderfully in batch non-interactive environments. But when almost everything is occurring non-atomically, you wind up throwing unwrap everywhere since nothing is actually guaranteed, the state of the world can change between any operation, even if that is unlikely.

I've specialized in very interactive systems so my world is quite messy. When you run a compiler (and the program being compiled) while the user is editing it, there are lots of transient error conditions to deal with. I just couldn't imagine writing such a system in Rust, error propagation and resolution is a much more global affair that can't tunnel explicitly through function signatures. But then, I understand that Rust wasn't designed for my problems.


Yet, you still end up seeing stories like https://www.coffeepowered.net/2011/06/17/jruby-performance-e...


"should" is a fine word, but there's also the reality that using exceptions in non-exceptional conditions is what they did.


Java is a poor example of good exception practice, and isn't a strong argument against them, IMO. Java went all-out on a particular (flawed) exception model, and pushed it as far as it could - too far, too soon.

I'm a much bigger fan of Delphi's exception model, for example. You would not normally see any exceptions on an app startup, not least because the default setting in the debugger is to break on exception and it would drive the developer up the walls.


If exceptions were instead named traps or faults, or even just errors, what would the prescriptive advice be?


I don't think this argument makes much sense. If Rust decided to go all exception-happy, it could have an option to codegen all function calls as if they were wrapped in try! - or at least, all calls that can throw, a flag which can be either specified in declarations or stored in metadata, though not without some complexity in the latter case in order to handle separate compilation correctly.

I disagree with the GP about whether Rust's error system is elegant, but syntax isn't that tightly bound to codegen. Indeed, I'd be interested to see an optional pass someday that does the opposite, converting Result returns to exceptions at the LLVM IR level.


> If Rust decided to go all exception-happy, it could have an option to codegen all function calls as if they were wrapped in try!

That's what I mean by overhead for even the non-throwing case.

> - or at least, all calls that can throw, a flag which can be either specified in declarations of stored in metadata, though not without some complexity in the latter case in order to handle separate compilation correctly.

That's checked exceptions, and I don't think they'd be a good fit for Rust for a few reasons that I've outlined in comments downthread, but briefly: (a) the annotation burden would be just as bad; (b) automatic unwrapping and wrapping of function result values could be surprising for systems programmers; (c) checked exceptions like Java don't allow for the nice .unwrap() syntax; (d) checked exceptions, as an out-of-band effect type, don't play as well with higher-order functions as just using the normal return value. But yes, it is possible.


> Some people need to use Rust in places where exceptions aren't allowed (because the unwind tables and cleanup code are too big).

Can you describe that scenario in more detail? What happens in case of panic?

> I remember stories of Eclipse taking 30 seconds to start up when compiled with GCJ (which used zero-cost exceptions) because it throws thousands of exceptions while starting.

I guess many Java developers assume that it's cheap to throw exceptions, and GCJ can't easily change that. In a new language, you wouldn't have that problem. If you had zero-cost exceptions in Rust, and told people that they have the same performance characteristics as C++ exceptions, I think most people would understand.


> What happens in case of panic?

Process abort.

> I guess many Java developers assume that it's cheap to throw exceptions, and GCJ can't easily change that. In a new language, you wouldn't have that problem.

No, as I described below, you still do if the core library APIs throw exceptions on error. You have no choice but to catch exceptions if you want to do simple things that programs need to do.


If Rust had zero cost unchecked exceptions, I imagine their usage would be more like C++ than Java. You wouldn't throw on IO errors. You would throw on divide by zero, arithmetic overflow, array index out of bounds...


In fact that's exactly why Rust has panic, which is implemented via zero-cost unwinding under the hood. (Panics can be disabled and turned into an abort if you want to, but only in LTO right now. Improving the support for abort-on-panic is a wanted feature.)


> throwing an exception is very expensive at runtime

They shouldn't be that expensive. It'd help if libgcc didn't take a mutex around the exception dispatch (because of the very unlikely possibility of a concurrent dlclose) so that different threads could do exception lookups in parallel.

(I once tried replacing that mutex with an rwlock. It didn't help very much.)


It's still important to remember that zero-cost exceptions are a bit of a misnomer. They may not have runtime costs in case no exception is thrown, but can hinder optimisation by the compiler.



