I'm the author of this post. Note that it's entirely adapted from the talk given by Attila Szegedi (http://www.infoq.com/presentations/JVM-Performance-Tuning-tw...), who deserves all the credit. I just filled in some of the gaps and reworked it so it flows in textual form.
I'll try to update the post with any comments I get here. It's definitely not ground truth as I was a bit fuzzy on some parts of the presentation.
Great article. Note that the point about Strings using 2 bytes per char isn't always true. The latest HotSpot VM has an option called "-XX:+UseCompressedStrings" (see
http://www.oracle.com/technetwork/java/javase/tech/vmoptions...), with which the VM will try to store strings as ASCII whenever possible to save memory.
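For reference, it's passed on the java command line like any other -XX option (sketch; the jar name is just a placeholder, and the flag is only available in certain HotSpot releases):

```shell
# Enable compressed strings (HotSpot-specific; check your JVM version supports it)
java -XX:+UseCompressedStrings -jar myapp.jar
```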
Nothing is wrong with byte buffers. Use them where appropriate. The advice is to stop when you find yourself implementing a full-blown memory manager / quasi-malloc in user code on top of byte buffers...
Full context as explained in the talk: the service used to have a stop-the-world GC pause of 2 minutes every hour. After they implemented byte-buffer-based slab allocation, the pause is only several seconds, and happens once every three days. The service runs on 200 nodes, with redundancy. Restarting the process on one of them, in a slow roll that finishes in under 3 days, works around the unresponsiveness window (a planned shutdown is easier to manage than an unpredictable pause).
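To make the idea concrete, here's a minimal sketch of what byte-buffer slab allocation looks like (my own illustration, not the talk's actual code): one large off-heap slab is carved into fixed-size slices, so the data inside never becomes individual objects the GC has to trace.

```java
import java.nio.ByteBuffer;

public class Slab {
    private final ByteBuffer slab;
    private final int sliceSize;
    private int offset = 0;

    public Slab(int slabBytes, int sliceSize) {
        // allocateDirect places the memory outside the Java heap,
        // so the GC never scans the slab's contents
        this.slab = ByteBuffer.allocateDirect(slabBytes);
        this.sliceSize = sliceSize;
    }

    // Hand out the next fixed-size slice, or null when the slab is exhausted
    public ByteBuffer allocate() {
        if (offset + sliceSize > slab.capacity()) return null;
        ByteBuffer dup = slab.duplicate();
        dup.position(offset).limit(offset + sliceSize);
        offset += sliceSize;
        return dup.slice();
    }

    public static void main(String[] args) {
        Slab slab = new Slab(1024, 64);
        ByteBuffer slice = slab.allocate();
        slice.putLong(42L);
        System.out.println(slice.getLong(0)); // prints 42
    }
}
```

The real thing of course needs freeing/recycling of slices, which is exactly where you start sliding toward the quasi-malloc the parent comment warns about.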
It's totally a workaround and not a solution. Attila follows up this example with an anecdote about asking Oracle folks when they are going to have a true pauseless GC, and they responded with "not that big an issue, really, everyone finds a workaround..." So, this is an example of a workaround. A pretty good one once you realize it's not "the one" machine, it's "some one" machine out of 200.