Hacker News

I would gently suggest that you are laboring under incorrect assumptions. There is nothing "lightweight" about JVM threads; they are standard native threads on normal platforms, and on Linux a thread context switch has roughly the same performance overhead as a process context switch. (There is a difference, and if you study the kernel source I'm sure you can divine it, but you will also quickly realize that it is a rounding error compared to the first cache eviction of the new context.) The memory overhead of a process in Linux is literally measured in tens of kilobytes.

I also suggest you look into preforking and copy-on-write and ensure that you are clear on how Linux works with regard to memory usage; modern Linux systems do not necessitate "burning RAM" to use multiple processes (the fork paradigm is the standard Unix approach for a reason; it only makes sense to make it performance-friendly). I would also note that while Ruby or Python can do this, Java, in standard configurations, cannot, were one to desire the ability to do so (and I've had reasons to run JVM applications in a multiprocess mode before).
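To make the preforking point concrete, here is a minimal sketch (a hypothetical worker loop, not Unicorn's or any real server's code): the parent loads data once, and each fork(2) child reads it through copy-on-write pages, so nothing is physically duplicated until a page is written.

```ruby
# Hypothetical preforking sketch. The parent builds a large structure once;
# each forked child sees it through shared, copy-on-write pages.
big_table = Array.new(100_000) { |i| i * 2 }  # loaded once, in the parent

pids = 2.times.map do
  fork do
    # Reads go through the parent's pages; no copy is made for a read.
    exit!(big_table[42] == 84 ? 0 : 1)
  end
end

statuses = pids.map { |pid| Process.wait2(pid).last.exitstatus }
puts statuses.inspect
```

Each child exits with status 0 only if it could read the parent's data, so both workers ran against the same shared pages.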

I don't dislike Java, don't get me wrong. I write a great deal of Kotlin. But accuracy is important.




I know they're native threads; they're lightweight compared to Unix processes. You're trying to change the subject to CPU performance during a context switch, when what we were discussing is memory consumption.

The memory overhead of an MRI Ruby process running on Linux is much higher than the overhead of a native JRuby thread; they are apples and oranges. Surely we agree on that point, because you can check it via a simple ps aux command.

Perhaps I am mistaken, but from what I've read, to run Unicorn following their recommended guidelines you need to spawn at least one MRI process for every CPU core. And if your application blocks on IO rather than CPU during its request cycle, which is common, then you will need more processes than CPU cores in order to handle concurrent HTTP requests. At that point you're wasting many orders of magnitude more memory than JRuby for every "unit of concurrency", so to speak.
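For reference, that worker count is something you set in Unicorn's config file. A hypothetical configuration following the "one worker per core, more if IO-bound" guideline might look like this (the io_bound_factor value is an illustrative assumption, not a Unicorn recommendation):

```ruby
# config/unicorn.rb -- hypothetical example, not a recommended setting.
require 'etc'

io_bound_factor = 2  # assumption: requests spend most of their time waiting on IO
worker_processes Etc.nprocessors * io_bound_factor

# Load the application in the master before forking, so workers share its
# memory via copy-on-write instead of each loading a private copy.
preload_app true
```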

By the way, Kotlin looks interesting; thanks for pointing it out, I was not familiar with it.


Attempting to cite `ps`, which includes the unique pages as well as all shared pages used by the process--i.e. pages created by its parent to store such trivialities as the entirety of the Ruby runtime and the classes loaded therein--makes me question your understanding of the Linux process and memory model. Do you understand how fork(2) works on a modern Linux machine with regard to memory page sharing from parent to child? Suffice to say that nobody running a hojillion processes in Unicorn or another forking process worker--and I should point out that I have written one of these in Ruby, this isn't Unicorn magic--is suffering under the tyranny of large amounts of duplicated data. (I'm sure there are people who spin up X of a process from the jump, but they are swimming upstream and are not the norm in my experience.)
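To illustrate the accounting problem with `ps`: its RSS column counts every shared page in full for each process, so summing RSS across forked workers counts the runtime once per worker. On Linux 4.14 and later, `/proc/<pid>/smaps_rollup` also reports PSS (proportional set size), which divides each shared page among the processes mapping it; summing PSS across workers is the honest total. A quick way to compare the two for the current shell:

```shell
# RSS counts shared pages in full; PSS splits them among all sharers.
# For a freshly forked worker, Pss is typically far below Rss.
grep -E '^(Rss|Pss):' "/proc/$$/smaps_rollup"
```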

The difference in memory usage between a forked process on a shared-page/copy-on-write OS and a thread is infinitesimal. You'd have better luck arguing about having a JIT and better perf (and there you'd be incontrovertibly correct, but I don't think it really matters for 95% of everything).


Well, it looks like you're right; I wasn't aware of how efficient COW forking is. I stand corrected.



