Independently of the operating system, your 70 GB process might have an actual working set of 70 GB, while the 55 GB one might have had a working set of 1 GB.
Furthermore, your VM was most likely running in a shared environment, with your swap partition on shared, not particularly fast I/O, etc. You may not even have had 64 GB of RAM (despite your VM claiming the contrary). Your Mac, on the other hand, has what it says it has.
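The working-set point can be demonstrated with a short Python sketch (standard library only; the 256 MiB size is arbitrary): a process can map far more memory than it ever touches, and only the touched pages count as resident.

```python
import mmap
import resource

# Map 256 MiB of anonymous virtual memory. The kernel hands out
# zero-filled pages lazily, so almost nothing is resident yet.
size = 256 * 1024 * 1024
buf = mmap.mmap(-1, size)

# Touch a single page: the working set grows by one page, not 256 MiB.
buf[0] = 1

# ru_maxrss is the peak resident set size (KiB on Linux, bytes on macOS).
rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"mapped {size // 2**20} MiB, peak RSS roughly {rss_kb // 1024} MiB")
```

On Linux the reported RSS stays a couple of orders of magnitude below the mapped size, which is exactly the gap between "the process has 70 GB" and "the process uses 70 GB".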
This is the difference. Linux traditionally uses a swap partition: you size it when you partition your drive, it's unusable for anything else (so you want to keep it small), and once you fill it up, boom, you're out of memory and the OOM killer starts shooting seemingly random processes. (Linux can also use swap files, but those don't grow on demand either.)
macOS uses dynamic swap files on your boot drive (technically it's multiple files) that start at 0 bytes and grow until your boot volume runs out of space.
I wrote a small program to do some image processing and had a memory leak that leaked uncompressed bitmaps. After a couple of minutes of browsing HN, I wondered "why is that process so slow?", went to check, and it turned out it had burned through something like 200 GB of memory (mostly swap, obviously) on my 16 GB Mac without me even noticing. But it did actually complete without me bothering to fix the bug, so problem solved.
This also means macOS has a lot more leeway to, say, swap out background tabs you haven't touched in a week in favor of live data, since it can grow that swap file full of stale garbage.
That’s not what OP is saying. If your program gets shot by the OOM killer, your machine doesn’t lock up; OP is complaining that their machine locks up. What you’re describing is a reasonable scenario on Linux too, one I’ve seen many times. On a Mac things will go wrong when you run out of disk in general instead of running out of swap, but your process will still fail eventually; computing resources are finite. Sure, there’s a bit more leeway (assuming you’re not short on disk space…), but given how the problem is described, it’s not at all clear that’s the issue.
I’m an HPC engineer specializing in performance work, I’ve been using Linux since 1996, and I also have a Mac. I like both. I understand these are not technical arguments, but my point is that there are no easy answers when it comes to performance work. You need to measure, know exactly what’s going on, and compare apples with apples; only then can you draw conclusions. In this particular case, there simply aren’t enough details to prove anything.
I guess I've just been unlucky, because the OOM killer has always made bizarre decisions on my machines and killed something more critical than the runaway process (or just not worked at all; not sure what the deal was with that, I never got logs). Every time I looked it up online, the replies were just "you need enough RAM to run your process, idiot; you should never need swap, swap is bad."
If the normal case is that the OOM killer just kills the out-of-control process, then I forfeit my argument.
Yes, that’s how it’s supposed to work. Supposed to, because as you’ve noticed, there’s a long list of OOM-related patches in the kernel, because it doesn’t always kill what it should kill!
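For what it's worth, Linux does expose the heuristic behind those decisions: every process has an `oom_score` in procfs (higher means more likely victim), and you can bias it via `oom_score_adj`. A read-only sketch (standard procfs interfaces; the `head -5` cutoff is arbitrary):

```shell
# List the five processes the OOM killer would most likely pick first.
# Some pids may vanish mid-loop, hence the 2>/dev/null.
for p in /proc/[0-9]*/oom_score; do
  printf '%s %s\n' "$(cat "$p" 2>/dev/null)" "${p%/oom_score}"
done | sort -rn | head -5

# To bias the killer away from a critical process (needs root):
#   echo -1000 > /proc/<pid>/oom_score_adj
```

Setting `oom_score_adj` to -1000 makes a process effectively unkillable by the OOM killer, which is how people protect things like sshd or a database from getting shot instead of the actual runaway process.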