Overcommit doesn't mean that malloc never fails. malloc will fail if you ask for an allocation that won't fit in the virtual address space (for instance because of your rlimit settings, or if you ask for a 3 GB chunk on a 32-bit system).
A lot of folks are pointing out that malloc can fail, which is true, but the important part is that there are situations where your application will just abort randomly in the middle of nowhere (i.e. not during malloc) and there's nothing you can do about it. There are also situations where malloc fails and returns null, but since some failures never surface through malloc at all, handling the null return in the cases that do isn't a complete solution. No language or stdlib can solve this problem 100%.
At the bare-metal level it helps to have this, but there you're probably better off not using the stdlib anyway.
Bullshit. Every part of the C++ standard library (which is _much_ bigger than Rust's) can gracefully propagate resource exhaustion errors, including memory allocation failure, to callers. That Rust can't is a design flaw.
> there are situations where your application will just abort randomly
That's sure as hell not the case in any environment I choose to use.
> can gracefully propagate resource exhaustion errors, including memory allocation failure, to callers
only on malloc. If the kernel overcommits, your process will abort when you try to use the memory, possibly way after the malloc and there's nothing you can do about it. That's the point being made here.
> That Rust can't is a design flaw.
(This is false, see Steve's reply above about this)
The world is not Linux. I happen to believe that overcommit in the Linux kernel is a disgrace. It is, however, at least possible to disable it. It's not possible to retroactively add real exceptions to Rust, or to change the signatures of all memory-allocation functions to return Result.
Rust is supposed to be a general-purpose systems programming language, not a Linux programming language. Windows does not overcommit. A correctly configured Linux system does not overcommit. Lots of embedded systems don't (and can't) overcommit. Are you saying all of these people should avoid Rust's standard library?
> > That Rust can't is a design flaw.
> (This is false, see Steve's reply above about this)
It's clear that my opinion differs from that of many Rust developers and users. I still think I'm correct, that these developers and users are misguided, and that as Rust attempts to fill more niches, experience will show that my position is the correct one.
All I can say is that I personally will not use any language that bakes cornucopian assumptions about memory into its core library. I know that you say it's possible to just avoid the stdlib --- but the temptation to use it will be irresistible, and once somebody succumbs to the temptation, the entire program is now capable of aborting irrecoverably.
I will stick with other languages. Modern C++ is safe and expressive enough, and it correctly reacts to resource exhaustion.
Sure, but if linux has this issue, then C++ programs on linux will also have this issue, and the language can't solve that. That's all my point was.
> or to change the signature of all memory-allocation function to return Result.
When custom allocators part 2 happens, you can. I've already argued the "real exceptions" part above.
> Rust is supposed to be a general-purpose systems programming language, not a Linux programming language. Windows does not overcommit. A correctly configured Linux system does not overcommit. Lost of embedded systems don't (and can't) overcommit. Are you saying all of these people should avoid Rust's standard library?
No. My point was simply that no language has a complete solution to this problem.
Most people don't need to worry about OOM; abort-on-OOM is the expected behavior. For the people who do, there is a mechanism to handle it, as explained above. I can't help it if you have an ideological issue with that mechanism. But ultimately, it works and can be used.
While quotemstr's reaction is over the top, I do see the need to have a memory allocation approach that can handle OOM gracefully. Many types of software that could benefit from Rust's compile-time safety will want to allocate right up to the limit of available memory, such as audio/video processing software where more memory equals more simultaneous effects and less I/O.
I am endlessly frustrated by poorly designed audio software that aborts without saving if an OOM occurs. At the very least, a process should have the opportunity to save its state to disk, or ideally continue operating at a reduced capacity (e.g. a video codec might use fewer reference frames) after freeing some resources.
> I do see the need to have a memory allocation approach that can handle OOM gracefully.
Do you find anything wrong with inserting an allocator that panics on OOM (IIRC the default one aborts on OOM) and using `std::panic::recover` to catch the panic? This is the same as throwing and catching an exception. Note that `recover()` is designed to be exception safe by default.
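As a sketch of the catch-the-panic approach: `recover` was the pre-1.9 name of what stabilized as `std::panic::catch_unwind`, so modern code looks like the following. The `panic!` here is just a stand-in for an allocator configured to panic on OOM (with the default abort-on-OOM allocator, and with `panic = "abort"`, there is nothing to catch):

```rust
use std::panic;

fn main() {
    // catch_unwind establishes a boundary that stops an unwinding
    // panic, analogous to a catch block for an exception.
    let result = panic::catch_unwind(|| {
        // Stand-in for an allocation path that panics on OOM.
        panic!("allocation failed");
    });
    match result {
        Ok(_) => println!("allocation succeeded"),
        Err(_) => println!("recovered from allocation failure"),
    }
}
```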
(There soon will be a way to make std heap APIs like box and vec use Result, which might be neater)
> Sure, but if linux has this issue, then C++ programs on linux will also have this issue
Why even bring Linux into the discussion? A Rust program running on Windows has the same problem.
> No. My point was simply that no language has a complete solution to this problem.
A correct C++ program running on Windows will not spuriously abort. Neither will a C++ program running on a Linux system configured not to overcommit. That some Linux systems can be configured to kill processes at arbitrary times is not an excuse for Rust to be sloppy with memory allocation.
C++ and many other languages do, in fact, have complete solutions to this issue, and that Rust does not is a serious deficiency, one serious enough to prompt me to prefer other languages despite Rust's other advantages.
Can you point to an example of such a "correct C++ program" that is larger than some sample code or a toy program demonstrating this technique?
I'm just wondering if this argument is all hypothetical, or if there are any teams of C++ programmers who are actually disciplined enough to be able to handle this case in practice, in large scale software. I know that in most code that I've seen, the only use of std::bad_alloc has been to log the error and abort.
...and the same is true of Rust, just with different defaults. Switch allocation failure from an abort to a panic and catch the panic just like you would in C++.
What's incomplete about that? It's not like C++ can't be switched in the other direction with -fno-exceptions or equivalent.
Not true. For example, it can fail if there is no big enough chunk of virtual address space available (i.e. your heap is fragmented enough and your attempted allocation is big enough). I've even seen 64-bit processes manage to do this, by mmapping lots of multi-GB things at once and then trying to do large allocations.
Even with overcommit, malloc will fail if you go over the overcommit ratio. In addition, the process can die later, when it starts using the fake pages the kernel gave it. But yes, malloc can fail on Linux.
You're correct that it's the default behavior, but it can be turned off.
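For reference, the knob in question is the `vm.overcommit_memory` sysctl (requires root; values per the kernel's overcommit-accounting documentation):

```shell
# 0 = heuristic overcommit (the default), 1 = always overcommit,
# 2 = strict accounting: commit limit = swap + overcommit_ratio% of RAM
sysctl vm.overcommit_memory=2
sysctl vm.overcommit_ratio=80
```

With mode 2, allocations that exceed the commit limit fail up front in malloc instead of succeeding and blowing up later.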
Also, if you're using cgroups (or anything that leverages them, like containers) you can put a limit on the memory resource, and then malloc will fail. Which is common enough, and only becoming more common as people colocate a lot of disparate workloads on nodes (using the Kubernetes scheduler).