I see one major difference: a Haskell program with optimization mistakes will run slowly and waste memory, its performance will improve as you fix them, and eventually you can stop because it's sufficient. A C++ program with any memory mistakes whatsoever will blow up randomly, and will continue doing so after you give up trying to make it perfect (which is hardly ever feasible) and ship unstable crap like the rest of the industry.
I was bitten by space leaks once or twice, and my programs were still useful to my colleagues despite those errors.
One great example was a model of a CPU. A space leak made it slow down quadratically: simulation_speed = O(1/simulated_time^2). It took a second for 5,000 cycles and a couple of hours for 100,000. But all the interesting effects of CPU execution could be observed within 5,000-20,000 cycles - the inner loops of various use cases - so you only had to wait about a minute to see what was good and what was not.
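This isn't the original model, but for anyone who hasn't hit this before, the classic shape of such a leak is a lazily accumulated simulation state, where every simulated cycle adds another unevaluated thunk. A minimal sketch (the SimState type and the numbers are made up for illustration):

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- Hypothetical, simplified stand-in for a CPU-model state: a cycle
-- counter and one accumulated statistic.
data SimState = SimState { cycles :: Int, stat :: Int }

-- Leaky step: plain foldl never forces the fields, so 'stat' grows into
-- a chain of unevaluated (+) thunks -- memory grows with simulated time.
stepLazy :: SimState -> Int -> SimState
stepLazy (SimState c s) input = SimState (c + 1) (s + input)

runLazy :: [Int] -> SimState
runLazy = foldl stepLazy (SimState 0 0)

-- Fixed step: foldl' forces the state each cycle and the bang patterns
-- force the fields, so the heap stays flat.
stepStrict :: SimState -> Int -> SimState
stepStrict (SimState !c !s) input = SimState (c + 1) (s + input)

runStrict :: [Int] -> SimState
runStrict = foldl' stepStrict (SimState 0 0)

main :: IO ()
main = print (stat (runStrict [1 .. 1000000]))
```

Building with -rtsopts and running with +RTS -s makes the difference visible: the leaky version's maximum residency grows with the cycle count while the strict one stays flat (though at -O2 GHC's strictness analysis can quietly rescue toy cases like this, which is part of why real leaks are sneakier). A heap full of thunks also makes every GC pass more expensive, which is one way a leak like this turns into worse-than-linear slowdown.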
That's not entirely true. A Haskell program can also fail in unpredictable ways thanks to lazy evaluation and memory exhaustion. It's probably still easier to pinpoint the source of the problem than it is to track down memory corruption in C++.
Yeah, it's true that performance can be so poor that the Haskell program can't get any useful work done. I was trying to express the spectrum from "perfect" to "tolerable" to "pretty bad" (and "unusable" does belong at the far end), where all the old unchecked languages only offer a cliff between "perfect" and SIGSEGV (or "random output" if you're really unlucky).
In most cases, tracking down memory corruption is pretty easy in C++. The simplest cases are solvable using gdb, and about 99% of the rest can be easily solved using valgrind.
Unless you're running 64-bit, a program's address space is not only finite but relatively minuscule (at most 4 GB on a 32-bit system, and often only 2-3 GB of that is actually usable), and exhausting it through wasted memory is definitely a possibility.