While the JVM has an excellent garbage collector (maybe the best on the planet), its performance degrades roughly linearly with the amount of memory a process is using.
If you go into any Java shop (which is where I live) and try to run a highly available system with more than 8GB of RAM in a server instance, you'll be lucky to pull it off.
Cheap servers ship with 256GB of RAM; if you can't get to 12GB without OOMs and 30-second GC pauses, you will never get a process that scales to that hardware.
A full stop-the-world GC pause on a 100GB heap would run over a minute, probably more.
And that's a shame, because the only antidote is to over-distribute and over-cluster, which not only slows everything down by 1000X compared to a single server; the engineering and operational costs of splitting everything into tiny 4GB buckets are also unacceptable.
I think the industry should rethink GC to get past this boundary. Walls at 4GB and 8GB are embarrassing compared to the hardware available today.
I would rather not build a server that can't use all the memory I want it to without falling over.
My minimum required RAM metric is 256GB for a process. That could even be a fairly small (code-wise) system that just needs a lot of RAM, in which case manual memory management is a piece of cake.
We run JVMs - running complex webapps on Tomcat - in production with 28 GB heaps. We don't have OOMs or chronic GC pauses. We don't reboot in between fortnightly releases.
We aren't running machines with 100 GB heaps, but I can tell you from personal experience that it's possible to exceed 8 GB without difficulty.
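For what it's worth, a deployment like that typically leans on a concurrent collector and a fixed-size heap. A sketch of the kind of flags involved (the values and the choice of CMS are my assumptions, not the poster's actual configuration):

```shell
# Illustrative HotSpot flags for a large (e.g. 28 GB) Tomcat heap.
# All values here are assumptions, not the poster's real config.
#
# -Xms == -Xmx        fixed heap, avoids pauses from heap resizing
# UseConcMarkSweepGC  concurrent collector, trades throughput for
#                     shorter stop-the-world pauses
# UseCompressedOops   32-bit references work for heaps under ~32 GB
# GC logging          lets you watch for creeping pause times
java -Xms28g -Xmx28g \
     -XX:+UseConcMarkSweepGC \
     -XX:+UseCompressedOops \
     -verbose:gc -XX:+PrintGCDetails -Xloggc:/var/log/gc.log \
     -jar myapp.jar
```

The compressed-oops point is one reason 28 GB is a sweet spot: above roughly 32 GB the JVM falls back to 64-bit object references, and a chunk of the extra heap is eaten by pointer overhead.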
I doubt you'd have to look far to find people who were running with substantially bigger heaps on stock Sunacle/OpenJDK JVMs.
I won't discredit your experience; however, I have been a consultant in several 64-bit shops, on Linux and Solaris, with 64GB servers that could not scale above 8GB due to OOMs.
Also remember that if you are running that 28 GB heap on Linux, you are living in the world of memory overcommit, which means that if you actually tried to use all of that memory, the kernel's OOM killer would probably take your process down.
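For reference, the kernel's overcommit policy is a standard Linux sysctl and easy to check:

```shell
# Linux memory-overcommit policy (standard sysctls under /proc):
#   0 = heuristic overcommit (the default)
#   1 = always overcommit, never refuse an allocation
#   2 = strict accounting (commit limit = swap + overcommit_ratio% of RAM)
mode=$(cat /proc/sys/vm/overcommit_memory)
echo "vm.overcommit_memory = $mode"
```

Under modes 0 and 1, a large heap reservation can succeed without being fully backed by RAM. One real HotSpot mitigation is `-XX:+AlwaysPreTouch` together with `-Xms` equal to `-Xmx`, which touches every heap page at startup so the commitment is tested before the server takes traffic, rather than faulting in pages under load.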
Again, if you have done it, great; no shop I have worked in has been able to do it with app servers and keep their servers up reliably. It's a constant source of pain, and I think JVM GC is not sufficient to scale to the level of current hardware.
Well, they did release it as open source in a "Managed Runtime Initiative", but no one did anything with it and the site is now offline.
It's a set of patches to Linux that cheaply implement the memory-management tricks (e.g. batched operations on big pages without redundant TLB clearing), and a set of patches to the HotSpot JVM, which is where I think most people had trouble digesting it. See e.g. http://en.wikipedia.org/wiki/User:AzulPM/Managed_Runtime_Ini... for more details.
Ideally it would have gotten enough traction for the Linux patches to be accepted into the mainline kernel. Ah well, perhaps someday.