I had a famous machine learning professor send me a C program that crashed in init on a 32-bit machine because it allocated an 8GB array...
It was particularly strange to set a breakpoint at the beginning of main() with gdb and see the program never got there. Oddly enough, he never actually used the 8GB array, even though he had no problem allocating the array on the POWER workstation he was using.
Not sure what OS he used, but some operating systems (Linux, for example) overcommit memory. You can allocate an 8GB array just fine, and as long as you don't use it, no actual memory gets used.
That’s why it worked on his machine. It didn’t work on yours because of the lack of address space.
Not if you tell it not to. This is a common configuration on servers and other applications where requiring overcommit to function is considered a bug.
The typical cause of this problem is not that the program runs out of memory but that it overflows its stack. In this case, of course, 8GiB would wrap right back around to the start — but I would think that would result in failing to generate an executable, not generating a crashing executable, unless the compiler was super slapdash.
Compilers rarely have any idea how large your stack space is. Quite often it's determined at runtime in various ways, especially in multithreaded environments.
(There are some surprising exceptions such as PIC hardware stacks where you might be allowed exactly 8 call frames, and your whole program's call stack must be a DAG with no recursion)
Okay, but in this particular case, we're talking about an 8GiB array on a 32-bit platform. Its size truncated to 32 bits is 0x00000000. If the compiler doesn't detect that, it's a compiler bug.
Same thing happened to me in C++ with gcc, making an array too big on the stack. Or maybe that was a dynamically sized array. Can't even remember the context now; Project Euler, maybe.
The code in the example asks for a top-level array of length (uint) -1 with all elements initialized to 1. Overcommit would never help, because where this blows up is not in running the program, but in compiling it.
Even if that weren't true, the only way overcommit ever helps is if you don't initialize or otherwise touch the memory you allocate.
I'm not sure what happens if you try to write zeroes into overcommit-allocated memory, but I would bet it breaks, because it'll page-fault and Linux will allocate it.
I was of course referring to the example in the post I was replying to. If someone allocates an 8GB array but never uses it, that doesn't do anything. The code you're referring to explicitly initializes that array.