Scope exit in C++11 (2012) (the-witness.net)
39 points by noch on Feb 6, 2018 | 49 comments



What everybody seems to miss (including in the other comments here) is the fact that no kind of hackery with destructors will ever make them equivalent to scope(exit) or finally{}, for the simple reason that destructors invoked via exceptions can't throw their own exceptions. And this makes sense, because destructors are meant to release resources, whereas finally/scope(exit) are meant to execute after prior operations in a LIFO manner. Not all operations that need to occur in a LIFO manner are inherently tied to resource acquisition. More fundamentally, one is inherently about data, and the other is inherently about control flow... simply put, they're different tools with different use cases, even though people don't bother to make the distinction. Nevertheless, if people insist on using the same hammer for both nails, the language needs to address this fundamental issue before people can claim it's possible to do scope(exit) or the equivalent in C++.


> destructors invoked via exceptions can't throw their own exceptions

`finally`, in languages that have it (and presumably scope(exit) too), faces fundamentally the same problem with exceptions that destructors do: if an exception is already propagating and you throw another one, what happens, given that you (presumably) can't have two exceptions in flight? C++ says tough / don't do that / you're terminated. Other languages just forget about the other exception, and you lose the context (which can be a pain to debug; though I think more recent versions of Python, for example, will attach the original exception as a "context" to the new one), but your program continues execution.
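For concreteness, a minimal sketch of the collision (illustrative code, not from the article):

  #include <stdexcept>

  struct Guard {
    // noexcept(false) is needed at all, since C++11 destructors default to noexcept
    ~Guard() noexcept(false) { throw std::runtime_error("cleanup failed"); }
  };

  void f() {
    Guard g;
    throw std::runtime_error("original error"); // unwinding destroys g...
  } // ...~Guard() throws a second exception while the first is still propagating -> std::terminate()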

Languages that expose a result object, e.g., Result in Rust, force you to deal with the issue yourself, essentially, by deciding at the site what to do with it. (The downside being the increased amount of code, though to some extent I believe the language can supply tooling to help there; Rust is again a good example. The upside is you get to choose what happens.)

(I see these more as a tradeoff, that different languages have different opinions about.)


Java has allowed you to chain exceptions from the beginning, and that is a core part of debugging.

In Python this is a very recent construct and few libraries use it, unfortunately.


This is my big gripe with exceptions and why I'm happy that Rust doesn't have them (yes, I know there are panics, but that's a separate thing).

Every C++ project I've worked on has turned them off, both for this and for the RTTI savings (less of an issue these days).


To be fair, Rust doesn't solve the errors-in-destructors thing any better: Drop::drop can't return anything.


It's not that hard to work around that: you just pass a reference in on creation that gets filled in on drop().

The exception case is much nastier and a lot less explicit.


"Just" passing a reference elides how clunky and annoying it is: depending on the code, it requires extra nested scopes, along with remembering to actually handle the error. Even the case of propagating the error upwards requires a bunch of annoyance:

  let mut failed = Ok(());
  {
    let object = create_it(..., &mut failed);
    // ... 
  } // destructor runs
  failed?;
C++11 introduces a pile of tools for first-class manipulation of exceptions as values (the <exception> header), which means that even when using exceptions, the handling in a destructor can be fairly similar to your suggestion for Rust (e.g. wrap the contents of the destructor in a try/catch, and then fill in an exception_ptr& using current_exception()). The main difference is type erasure and allocation in C++.
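A rough sketch of what that can look like (Flusher and flush_everything are illustrative names, not real APIs):

  #include <cstdio>
  #include <exception>
  #include <stdexcept>

  // Hypothetical cleanup step that can fail; stands in for fflush()-style work.
  void flush_everything() {
    if (std::fflush(nullptr) != 0) // flush all open output streams
      throw std::runtime_error("flush failed");
  }

  struct Flusher {
    std::exception_ptr* error; // slot the caller owns
    ~Flusher() {
      try { flush_everything(); }
      catch (...) { if (error) *error = std::current_exception(); }
    }
  };

  int main() {
    std::exception_ptr err;
    {
      Flusher f{&err};
      // ... normal work ...
    } // destructor runs here
    if (err) { /* log it, return an error code, or std::rethrow_exception(err) */ }
  }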

In any case, in either language, an object storing a reference to fill in later restricts how the object can be used, since it can no longer escape from the stack frame which owns the data to which the reference points.


Except every exception in C++ is unchecked. Literally every API call can throw the entire world at you and you won't know it until you run the code under each permutation. This includes library code that you may not have the source for.

Compared to the above I at least know from an API signature that there's the possibility of an error.


You're jumping around: exceptions being checked or not is orthogonal to returning errors from a destructor. It's true that being unchecked may make it harder to know when you need to, but the point stands that the ability to return errors from a destructor is not a way in which Rust's error handling is better than C++'s.

Additionally, the C++ solution for returning errors from destructors is also explicit in the signature.

(To be clear, I'm very firmly a fan of Rust, but I'm also a fan of accurate representations of it.)


Fine, I'll bite then (although I think my point still stands).

According to the spec, current_exception() returns a smart pointer whose storage isn't specified. Part of that is implied allocation (it can return bad_alloc, for instance), which means you may be making a heap allocation for something that could have been handled on the stack. That's going to rule out cases like the embedded space and gamedev, where allocations at runtime are a big no-no.

I'm no stranger to C++, but I've been bitten so many times by exceptions: libraries that didn't say they'd throw them (but then did), ABI pollution, and having to get new binary drops from vendors because turning them on bloated RTTI, which caused binary sizes that didn't fit. They really have no place if you plan to use the language at any sort of scale.


I think you're still looking at too much of the picture: this discussion is specifically about error handling in destructors (and how this affects finally/scope(exit)). There are numerous reasons why the checked, explicit, first-class, typed error handling of Rust is better than C++'s exceptions, but "able to use an outpointer to return an error from a destructor" isn't one of them:

1. using the outpointer is clunky and error prone

2. C++ can do the same thing (both with exceptions enabled, and with them disabled, the latter of which is pretty much the same as Rust)

(The current_exception stuff even makes it possible to handle it fairly generically in C++, so even calling arbitrary code with exceptions-on can be the normal code wrapped in catch_and_write_exception(&exceptionSlot, [&] { ... });, which works for any library code, whether it throws or not.)
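A hedged sketch of what such a helper could look like (catch_and_write_exception is the name used above; the body here is a guess):

  #include <exception>
  #include <utility>

  template <typename F>
  void catch_and_write_exception(std::exception_ptr* slot, F&& body) noexcept {
    try {
      std::forward<F>(body)();
    } catch (...) {
      *slot = std::current_exception();
    }
  }

  // Inside a destructor:
  //   catch_and_write_exception(&exceptionSlot, [&] { /* code that may throw */ });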

> According to the spec current_exception() returns a smart pointer which the storage isn't specified

Exceptions often allocate anyway; current_exception() isn't creating a new problem.


> the latter of which is pretty much the same as Rust

If exceptions allocate that's not the same as Rust, full-stop. I'm sorry if it seems like I'm being pedantic about this point but it's a huge issue of contention for those of us who used C++ to squeeze every ounce of performance out of a platform.

Yes, I can manipulate a value through pointers/refs to return a result in both, but in the Rust case I can guarantee that the value lives on the stack by passing in a reference, because A: I know the full type up-front (no virtuals unless I use Box<Trait>, no std::exception* opaque types) and B: exception_ptr makes no such claims.

> Exceptions often allocate anyway; current_exception() isn't creating a new problem.

For those of us who ship codebases with exceptions disabled by default it definitely is a problem. Unchecked exceptions make this a problem which pollutes every one of your destructors and forces you to pay the above cost.


> I'm sorry if it seems like I'm being pedantic about this point but it's a huge issue of contention for those of us who used C++ to squeeze every ounce of performance out of a platform.

You're being pedantic about something I'm not saying.

I 100% agree with what you're saying in that comment, but that isn't what this thread is about. It has now become clear that your "big gripe" with exceptions is not the inability to throw them out of destructors (which is what causes the problem with scope(exit)), but all the other problems with them.

As my previous comment said, there's lots of reasons why Rust's approach is better (I'm intimately familiar with Rust, and also work on a large C++ codebase... with exceptions disabled ;) ), but that's not what we're talking about here.

> For those of us who ship codebases with exceptions disabled by default it definitely is a problem

No, you're misreading me again: it isn't a new problem. As in, the problems with current_exception() are just the normal problems with exceptions, they aren't something specific to that function. If exceptions are okay (which they're often not; yes, I know, I get that, but that's not the point here), then using current_exception to return errors from destructors is fine.

My point is Rust and C++ both handle returning errors from destructors equally well/badly. (Your point is you, in general, prefer Rust's error handling to C++ exceptions, which is a different point.)


This is your biggest gripe with exceptions? So you're saying you'd be using them if only the language supported a native finally/scope(exit) that avoided this particular edge case?


I'm not sure I understand what you mean.

For your "release resources" point: fclose() can fail, yet in fstream::~fstream() that must be ignored. So that's not great.

For your "execute after prior operations in LIFO manner" point... what is the purpose except to clean up resources? If it isn't about scoped resources, then what is it about, given that defer(...) is explicitly about scope? I mean, if "defer" wasn't about that, we'd just write a few more statements doing the thing and not bother with the whole defer(...) thing?!?


> For your "release resources" point: fclose() can fail, yet in fstream::~fstream() that must be ignored. So that's not great.

No, fclose() cannot fail to close the file. The return code merely tells you whether any pending write operations have failed in the meantime, which is unrelated to the releasing of the FILE. The FILE will be closed either way. The C standard is explicit about this: "Whether or not the call succeeds, the stream is disassociated from the file and any buffer set by the setbuf or setvbuf function is disassociated from the stream (and deallocated if it was automatically allocated)." http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf#pa...

> For your "execute after prior operations in LIFO manner" point... what is the purpose except to clean up resources?

Well, fflush() is an easy example. Or, any logging that must occur at the end of a function call regardless of its success or failure. Or, really, anything else that isn't releasing a resource...


Hang on, isn't fclose() tasked with flushing buffered I/O, or are we assuming entirely synchronous I/O? Well, I suppose fsync() absolves POSIX of that, but in practice? fstream::~fstream() has to flush the I/O (at least at the runtime library level), and what if that fails?

> The FILE will be closed either way.

That doesn't matter. It only matters in a resource-accounting way -- the resource will be freed, but that program does the wrong thing (partial write, 50-50 whether write happens, whatever).

> > For your "execute after prior operations in LIFO manner" point... what is the purpose except to clean up resources?

> Well, fflush() is an easy example. Or, logging that must occur at the end of a function call regardless of its success or failure. Or, really, anything else that isn't releasing a resource...

fflush() can also fail (see above). Logging can also fail, disastrously, if it's going through a serial port -- though it technically doesn't necessarily break the program's semantic meaning, nobody wants to wait N hours for the next log statement to be sent before continuing with the program's execution.

Do you see, now, what I mean?

(EDIT: I think HN screwed up the quoting on my reply, but I think I've fixed it. Please yell if I quoted you or me(!) badly.)


> Hang on, isn't fclose() tasked with flushing buffered I/O

You're not reading carefully. I never said fclose() flushes anything. Please go back and read what I wrote.

>> The FILE will be closed either way.

> That doesn't matter. It only matters in a resource-accounting way. The resource will be freed, but that program does the wrong thing (partial write, 50-50 whether write happens, whatever). fflush() can also fail. Logging can also fail, disastrously [...]

Yes, you are only confirming what I wrote. Again, please go back and read what I wrote; it seems you are not reading carefully and are very confused about what I have been saying (and about what you yourself were asking).

Here, I'll quote myself again. I said: "Destructors are meant to release resources, whereas finally/scope(exit) are meant to execute after prior operations in a LIFO manner."

You were surprised these are distinct categories and wanted examples of the operations that would need to occur in a LIFO order but which are not releasing any resources. I cited flushing and logging as easy examples of such operations. You then proceeded to object that fflush and logging can... fail? Yes, I agree; of course they can -- indeed that was half of my entire point as to why they are inappropriate for destructors/RAII, but appropriate for finally/scope(exit).

To repeat even more explicitly: both flushing and logging would need to occur in finally{} blocks, as they are not releasing any resources. In addition, neither of them is appropriate for a destructor, because these operations can fail, and destructors cannot propagate the corresponding exceptions. By contrast, closing a file needs to occur in a destructor, because it is releasing a resource (which inherently cannot fail). And indeed, this implies fclose() would need to be called in a destructor and have its return value ignored, as it always succeeds in closing the file.


> You're not reading carefully. I never said fclose() flushes anything. Please go back and read what I wrote.

I did read, thanks.

AFAICT, the POSIX standard mandates that fclose() must flush pending output, so what you said is immaterial.

That is, it can fail to flush in exactly the way I mentioned, and so any destructor-like behavior is faced with a decision as to "what to do".

Now, you're technically right in that the RESOURCE representing the file will be closed, but my counter-point was that it doesn't matter. Either the destructor swallows the error yielding silent incorrect behavior, or the destructor must throw and abort the program.

Neither is very helpful, though I guess aborting the program doesn't play havoc with the program's meaning.

My question is really: What does scope(exit) actually do if a fflush() fails, for example? Any non-built-in equivalent of "scope(exit)" in C++ would have to use a destructor to function reliably, I think, so what to do?

Does it throw? When does it throw? Is it allowed to throw if we're already unwinding the stack due to another exception?

This is horribly complicated... and that's my overall point, really. Just the fact that a resource was technically freed is moot if the program does the wrong thing.

The horrible complicatedness of all of this is an indicator that something is deeply wrong.

(Incidentally, in Java the try-with-resources statement actually seems to do at least a semi-sane thing which is that an exception during a "finally" clause attaches that exception to the already "active" exception in case it happens during unwinding from a thrown exception. Or the other way around, can't be arsed to look it up. The point is that unwinding happens, but errors are not lost. I'm not sure if it actually does the right thing wrt. file descriptors, but one hopes so.)


That’s funny, because I really dislike “finally,” because it isn’t much better than doing all the cleanup yourself.

The design document for Java try-with-resources says that Sun’s own programmers working on the standard Java libraries got it wrong a third of the time. I find it interesting that C#, Java and Python have each created some kind of automatic cleanup mechanism even though they already had “finally.” That suggests to me that “finally” has some big drawbacks.


The drawback is that the finally block is far away from the set-up block, which makes it painful to maintain and allows them to get out of sync pretty easily. It's a syntactic issue rather than a semantic issue.

(This is only referring to use cases other than resource release.)


That is probably the most well-known problem, but it’s not the only one.

Another issue is that people generally don’t test failures as much as they should. They’ll generally get “everything must be rolled back” right, but mess up “only some things should be rolled back.”


Andrei Alexandrescu has a fun talk about implementing scope_exit using a new C++17 function: std::uncaught_exceptions (which is a tweak to std::uncaught_exception). Fundamentally, it doesn't really solve the destructor throwing exceptions issue though.

https://www.youtube.com/watch?v=WjTrfoiB0MQ
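A rough sketch of the idea from the talk (this is not Alexandrescu's exact code; names are illustrative, and it assumes C++17):

  #include <exception>
  #include <utility>

  // Runs f only if the scope is being left because a *new* exception is propagating.
  template <typename F>
  struct scope_fail {
    F f;
    int count = std::uncaught_exceptions(); // snapshot at construction
    ~scope_fail() {
      if (std::uncaught_exceptions() > count) f();
    }
  };

  template <typename F>
  scope_fail<F> make_scope_fail(F f) { return {std::move(f)}; } // relies on C++17 copy elision

  // Usage:
  //   auto guard = make_scope_fail([&] { rollback(); });

Note that f() still runs inside a destructor, so if it throws you are back to the termination problem.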


I don't think I understand, you can throw from a destructor. However, in C++11 it gets tricky because destructors are implicitly declared noexcept, so if you want to do so you need to mark them noexcept(false). As for implementing scope(exit), yeah, apparently this isn't possible.


> I don't think I understand, you can throw from a destructor

Read what I wrote carefully:

>> destructors invoked via exceptions can't throw their own exceptions

i.e. if you are in a block where an exception is thrown, and a destructor is called during the exception's propagation, then that destructor cannot throw its own exception, as it will terminate the program. (This is unrelated to noexcept.) See here for more info: https://stackoverflow.com/a/130123


Ah I see, like:

  struct B { ~B() noexcept(false) { throw ...; } };
  struct A { B b; ~A() noexcept(false) { throw ...; } };

I guess the saving grace is that most C++ devs are aware of what could throw in a destructor, because it is rare and C++11 made this more difficult. Just don't, and design around it. If the destructor fails, use it as a choice to terminate or ignore.


Yeah. But when people start writing wrappers like SCOPE_EXIT that call user code inside destructors, suddenly the users of those wrappers don't have any way to know they can't throw exceptions.


scope_exit is just not very idiomatic C++. If used, it's for quick'n'dirty, I'm-too-rushed-to-wrap-it-in-a-class times.

Its use in C++ implies that you are worrying about something at the wrong abstraction level, too.

With that though, I would think that the post-C++11 default of implicit noexcept is proper. Crash hard when an exception is thrown, and maybe even static_assert on the result of noexcept(on_exit_cb) so that it gives a compile error; lambdas can be noexcept, and C functions cannot throw (thinking of fclose and friends).
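A hedged sketch of that static_assert idea (type and helper names are mine, not a standard facility; assumes C++17 for guaranteed copy elision):

  #include <utility>

  template <typename F>
  struct noexcept_scope_exit {
    static_assert(noexcept(std::declval<F&>()()),
                  "scope-exit callback must be noexcept");
    F f;
    ~noexcept_scope_exit() { f(); }
  };

  template <typename F>
  noexcept_scope_exit<F> on_scope_exit(F f) { return {std::move(f)}; }

  // Compiles:       auto g = on_scope_exit([&]() noexcept { std::fclose(fp); });
  // Compile error:  auto g = on_scope_exit([&] { might_throw(); });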


The people who use SCOPE_EXIT are responsible for making sure no exceptions are thrown (well, that any exceptions that get thrown are swallowed before they trigger std::terminate).

If that’s not possible, they have to use a different approach.


This type of construct is unnecessary in C++11; the presence of RAII containers and structures totally obviates it. C++ provides a language-level construct for logic performed at the end of object lifetime (i.e., when value types fall out of scope or when heap-allocated objects are deleted): the destructor!

This person is describing an error-prone reimplementation of std::unique_ptr with a custom deleter. I could understand the value of an article like this if the use of C++11 were not allowed, but given that the article is written with C++11 in mind, I fear the author has totally missed the point of RAII-style resource management that the newer language-level constructs provide.
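For reference, a minimal sketch of that approach (the surrounding function is made up):

  #include <cstdio>
  #include <memory>

  void read_header(const char* path) {
    // fclose is the "scope exit" action; unique_ptr runs it on every path out of the scope.
    std::unique_ptr<std::FILE, int (*)(std::FILE*)> file(std::fopen(path, "rb"), &std::fclose);
    if (!file)
      return;
    // ... use file.get() with std::fread, etc. ...
  } // std::fclose(file.get()) runs here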

Deterministic destruction and associated logic, as well as the RAII strategies that follow, are perhaps the greatest strength of C++. Languages like Rust have gone on to improve this even further by eliminating some of the greatest sources of error. Capabilities like this don't seem accurately described as C++ learning from D; indeed, if D provides hooks for running logic when names fall out of scope, that seems more accurately described as vice versa.


That's if you program "object oriented". In my experience object orientation in C++ often means something entirely different than in a GCed language like Java or C#. It's totally reasonable to program data-oriented in C++, especially for time critical applications like games. Unfortunately, this rules out many abstractions and mechanisms that make sense in a modern C++ application.


It's nice to see some D ideas make it into C++.

I think that D is kind of like Mercurial: both less popular than some other wildly popular alternative, but we still hope that at least some of their ideas trickle back into the more popular sibling.


C++ is like English - widely used, and steals like hell from other languages.

D is probably more like, hmm, Esperanto? Carefully designed but few people know about it.

Haskell would probably be Lojban...


While I'm a C++ fan, I'd readily concede it's much closer kin to (verbose) German than (laconic) English.

Java's even more verbose, so it probably has the stronger claim on German,* which I guess makes C++ Danish and Perl English. :)

* https://en.wikipedia.org/wiki/Andy_Bechtolsheim


I’m pretty sure D got the idea from C++: http://www.drdobbs.com/article/print?articleId=184403758&sit... (note that there are three pages).


This pattern was possible with a scoped class in C++98.


I absolutely agree. C++ is at least a decade behind D. Literally.

The kinds of features that eventually hit C++, D actually had for at least ten years. I think of D as a clean break: a major version upgrade with an API break. Whereas C++ has committees that move incredibly slowly, D has two architects who are decades-long professionals and considered experts in their fields. It has focus, and it's not concerned with making code from the 70's compile on a modern compiler.

I simply cannot enumerate all of the features from D that I use every day and that have become second nature. Until I try to code in C++ again and I'm like, "The !@$@!ing order of declarations matters?! I can't make circular references without manually writing forward declarations?!" It feels like banging rocks together to make a 747.


I think that's mainly because C++ is an ISO standard used all over the world by big business and governments. So its standards are released less often and with a great deal of testing and care. I bet you will be able to specify -std=c++Whatever for the next 50 to 100 years, maybe longer.


This is pretty similar to std::lock_guard (and similar RAII types). Except instead of unlocking (or tearing down a resource, etc.) upon destruction, it invokes a lambda.


The site doesn't load for me. Here's an archived copy:

https://archive.is/srQvw


Wow, so much debate over the specific manner in which to solve a problem. More needed on the which problems to solve, I think.


(2012)


You can do the same thing with no C++11 features other than C99 variadic macros. (IMO cleaner, too).

  #define SE_CAT_(a,b) a##b
  #define SE_CAT(a,b) SE_CAT_(a,b)  // indirection so __LINE__ expands before pasting
  #define SCOPE_EXIT(...)\
      struct SE_CAT(SE_,__LINE__){~SE_CAT(SE_,__LINE__)(){__VA_ARGS__;}}SE_CAT(se_,__LINE__)


No, you can't generally put anything useful in there. The whole point of a lambda is the [&]. If you can't capture local variables then it's practically useless.


Alexandrescu managed to do it in 2000: http://www.drdobbs.com/article/print?articleId=184403758&sit... (note that there are three pages).

The version in Facebook Folly was cleaned up because of C++11, but this wasn’t a new technique five years ago.


I'm curious: is __VA_ARGS__ part of the standard, or just widely used?


It's been part of the standards since C99 and C++11.


It is.


Yeah but that’s kinda cheating



