I hate setjmp/longjmp and have never needed it in production code.
Think about how it works: it copies the CPU state (basically the registers: program counter, stack pointer, etc). When you longjmp back, the CPU is set back to the call state, but any side effects in memory etc. are unchanged. You go back in time, yet the consequences of prior execution are still lying around and need to be cleaned up. It's as if you woke up, drove to work, then longjmped yourself back home -- but your car was still at work, your laptop still open, etc.
Sure, if you're super careful you can make sure you handle the side effects of what happened while the code was running, but if you forget one you have a problem. Why not use the language features designed to take care of those problems for you?
This sort of cleanup can work with a pool-based memory allocator: if everything allocated since the setjmp came from one pool, the recovery path can free the whole pool in one sweep.
The failures happen three ways. First, you forget something, and so you have a leak. Second, you haven't registered a use properly, so you have a dangling pointer. Third, by going back in time you lose access to (and the value of) any prior and/or partial computation.
If you use this in a library, and the setjmp and longjmp are contained entirely within a single invocation, you can sometimes get away with it. But in something like a memory allocator, where the user makes successive calls, you can't be sure what dependencies on the memory might exist unless you force the user to do extra work. If your library uses callbacks you can be in a world of hurt.
Trying to keep track of all those fiddly details is hard. C++ does it automatically, at the cost of possibly doing more work (e.g. deallocating two blocks individually rather than in one sweep -- but that language has an allocator mechanism precisely to avoid this problem). The point is the programmer doesn't have to remember anything to make it work.