
That's true for GC vs. manual heap-based memory management, but most GC languages don't do stack allocation at all, or do it only for primitive types, and stack allocation is much, much faster than any sort of heap-based memory management.



All these GC languages (reference counting is GC too, just in case) do allow stack allocation of any user-defined type.

Mesa/Cedar, Modula-2+, Modula-3, Oberon, Oberon-2, Active Oberon, Component Pascal, Eiffel, D, Swift


Listing largely obscure languages and repeating them multiple times doesn't do much against a claim of "most".


Java, Go, and C# all do stack allocation, either implicitly via escape analysis (Java), explicitly via value types (C#), or both (Go). I don't know that this is "most" (perhaps by market share), but these are certainly three of the most popular languages in this space.
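
As a rough illustration in Go (hypothetical file and type names; the exact -m output wording varies by compiler version), you can ask the compiler what its escape analysis decided:

    // escape.go - a minimal sketch of Go's escape analysis.
    // Build with: go build -gcflags='-m' escape.go
    package main

    type point struct{ x, y int }

    // p never outlives sumLocal, so the compiler keeps it on the stack.
    func sumLocal() int {
        p := point{1, 2}
        return p.x + p.y
    }

    // The returned pointer outlives the call, so p is moved to the heap;
    // the -m report says so ("moved to heap: p" in recent versions).
    func newPoint() *point {
        p := point{1, 2}
        return &p
    }

    func main() {
        _ = sumLocal()
        _ = newPoint()
    }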


That answer is fair, and it sounds like btmorex is completely wrong. Thank you.


Well, the Algol-derived ones might be a bit obscure for those who haven't learned the history of computing, but they are quite well known.

I can list other ones that are actually quite obscure.

In any case, I also had D and Swift on the list, which are quite current.

As for the rest, I don't have anything to add to weberc2's answer.


The unfortunate part is finding good IDEs with solid debuggers and auto-completion.


Go has excellent auto-completion support, even for vim. Its debugger (delve) is also decent, though not graphical. There are not many languages with better tooling than Go, in my experience.


FYI, delve is integrated into a lot of editors, and basically works exactly like Visual Studio when used in VS Code (I used Visual Studio for C++ & C# for 13 years before moving to Go).


Yes, but that is not relevant for those who live in vi and Emacs.


"stack allocation is much, much faster than any sort of heap-based memory management"

No, it's not. For short-lived objects, at least on the JVM, allocation is a pointer bump, and collection is free, because unreachable objects are not even walked. Stack allocation doesn't beat that by much.


> stack allocation is much, much faster than any sort of heap-based memory management.

Is that true? I thought nursery allocation was usually as fast as, if not faster than, stack allocation.


The allocation speed will be close: a bit worse, since the pointer to the end of the current allocation buffer usually isn't in a register, and a function call and a range check are required. However, the overall cost of handling that memory from start to finish is significantly higher than for the stack, even if it gets clobbered in the first nursery collection.

Not because that cost is very high, but because stack allocation/deallocation is so very simple.
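
To make that concrete, here is a toy sketch (Go, purely illustrative; a real nursery lives inside the runtime, not in user code) of the bump-pointer fast path: allocation is an offset bump plus a range check, and collecting a nursery full of dead objects amounts to resetting the offset.

    // bump.go - an illustrative bump-pointer allocator, not a real GC nursery.
    package main

    import "fmt"

    type nursery struct {
        buf []byte
        off int // next free byte
    }

    // alloc is the fast path described above: one add plus one range check.
    // A real runtime would trigger a minor collection on overflow instead of
    // returning nil; if nothing survives, "collection" is just off = 0.
    func (n *nursery) alloc(size int) []byte {
        if n.off+size > len(n.buf) {
            return nil // slow path: evacuate survivors, reset, retry
        }
        p := n.buf[n.off : n.off+size]
        n.off += size
        return p
    }

    func main() {
        n := &nursery{buf: make([]byte, 1024)}
        fmt.Println(len(n.alloc(16)), n.off) // 16 16
    }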


Range checks can be done by the MMU; they're basically free.


That's fair. Does Go do this? Or any other somewhat mainstream language? Any thoughts on how arenas (Rust) compare to GC and manual allocation for speed?


Arenas trade some granularity and flexibility for speed and fragmentation-free allocation; they're a great choice for bulk data that you can precompute the size of and want to iterate over very quickly, and they're also easy to reason about. You can do many tasks using only arenas and stack allocations, and it'll zip along very quickly. If needed, you can flag liveness to get very efficient reuse of short-lived objects. They're less ideal if you are gradually adding more data over time, and you want to retain a consistently low usage, since you end up with "stairsteps" of latency when it copies into a bigger buffer, and once you have the big buffer, it's quite complicated to shrink it down again.
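
A minimal sketch of that growable-array flavor of arena, in Go (hypothetical names; Go 1.18+ for generics):

    // arena.go - a toy typed arena backed by a growable slice; illustrative only.
    package main

    import "fmt"

    type arena[T any] struct{ items []T }

    // alloc hands out an index rather than a pointer, because append may move
    // the backing array when it doubles - the "stairstep" mentioned above.
    func (a *arena[T]) alloc(v T) int {
        a.items = append(a.items, v)
        return len(a.items) - 1
    }

    func (a *arena[T]) at(i int) *T { return &a.items[i] }

    // reset reuses the buffer for the next batch of short-lived objects;
    // shrinking the big buffer again is the awkward part.
    func (a *arena[T]) reset() { a.items = a.items[:0] }

    type particle struct{ x, y, vx, vy float64 }

    func main() {
        var a arena[particle]
        i := a.alloc(particle{x: 1})
        a.at(i).vx = 0.5
        fmt.Println(*a.at(i))
        a.reset()
    }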

malloc()-style allocation gives you precision over how much you want, and this is most interesting to a memory-constrained system trying to allocate amongst many different processes (the original Unix use case). But willy-nilly use of malloc() and free() leaves you with lots of fragmentation, as well as a larger attack surface for memory errors. What the actual allocation algorithm does is out of your hands, too, at least if you're depending on the OS allocator (you can always go write your own and supply it with a giant heap to munch on, and this may occur when you need tuning for individual allocation scenarios).

In the right scenario, a GC won't do too much differently from a manual allocator (there are lots of optimizations that could bring the allocation to same-or-negligible runtime), but as we all know, right scenarios are something you can't always count on. A GC does, however, greatly simplify the memory management of a long-lived process, since it can do nifty things like automatically compact the heap.

IME, a mix of stack, GC, and some arenas in the form of growable arrays is absolutely fine for the needs of most applications. Quite often this last requirement creates a sticking point, though, where the language disallows value semantics for arrays of objects; then you can no longer assume they're allocated linearly, plus the GC is taxed with additional references to trace. In those cases, if the language offers them, I use arrays of primitive values as crude containers for an equivalent optimization. Either way, it's very annoying and creates some awful code, because those huge batches that I'd like to put in arenas also tend to be the bottleneck for the GC.


C# & Go both have stack allocation options. The JVM is supposedly getting them sometime soon.


I'm under the impression that Java has done escape analysis for a while now. It just hasn't had value types, which, as I understand it, just introduce a semantic for stack allocation.


Actually, I have seen presentations mentioning that Graal is much better at it than HotSpot.


stack allocation: 20 years in the making.


Malloc is O(n²), so it totally depends on how well you can avoid gradual allocation.


In most C libraries, malloc() is O(log n), e.g. when implemented with balanced trees.


Argh. True, sorry for the brainfart.



In Java, heap allocation is a single instruction, most of the time.


Do you mean a single JVM instruction? Or?


No, he means a single CPU instruction. That's not quite fair; I don't think it actually is a single instruction, more like a few instructions in the best case and a very large number of instructions on the slow path.



