This is a memory leak in a GC language as well, depending on what happens to foo in ...code that uses foo... Maybe it becomes a field of some class, maybe it gets pushed onto some queue, maybe it gets captured in a lambda, etc. You won't be able to tell just from looking at this code in this one place, especially if the pointer was passed as an argument to a function.
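
For illustration, a minimal Java sketch of the lambda case (all names here are made up): the listener list outlives the call, so the captured array stays reachable indefinitely even though nothing will ever use it again.

    import java.util.ArrayList;
    import java.util.List;

    class LeakSketch {
        // Long-lived and never pruned: anything reachable from here stays live.
        static final List<Runnable> listeners = new ArrayList<>();

        static void use(byte[] foo) {
            // The lambda captures foo, pinning it for the listener's lifetime.
            listeners.add(() -> System.out.println(foo.length));
        }

        public static void main(String[] args) {
            use(new byte[64 * 1024 * 1024]); // 64 MB, now effectively unreclaimable
        }
    }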

It is my experience that when people work with GC languages, they treat the GC as a black box (which it is) and simply won't bother investigating: do they have memory leaks? Of course not, they're using a GC after all; all memory-related problems are solved, right? Right... With code that relies on free(), I can use a debugger and check which free() calls are hit for which pointers. Better yet, I can use an arena allocator where appropriate and not bother with free() at all. With a GC I'm just looking at some vague heap graph. Am I leaking memory? Who knows... "Do those numbers look right to you?"

Memory management issues are usually symptoms of architectural issues. A GC won't fix your architecture, but it will make your memory management issues less visible.

It is my experience that most memory problems in C come from out-of-bounds writes (which include a lot more than just array accesses), not from anything related to free(). A GC doesn't help here.




Per Wikipedia:

> In computer science, a memory leak is a type of resource leak that occurs when a computer program incorrectly manages memory allocations in a way that memory which is no longer needed is not released.

In most of your scenarios, e.g. "pushed to some queue", the object in question is still needed. Hence, this is not a leak. Presumably, the entry will eventually be removed from the queue, and GC will then reclaim the object.
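
To make that concrete, a small Java sketch (hypothetical names): once the consumer polls the entry and drops its own reference, the object becomes unreachable and is fair game for the GC.

    import java.util.ArrayDeque;
    import java.util.Queue;

    class QueueSketch {
        public static void main(String[] args) {
            Queue<byte[]> queue = new ArrayDeque<>();
            queue.add(new byte[1024]);  // still needed: reachable via the queue
            byte[] item = queue.poll(); // consumed; the queue slot is cleared
            item = null;                // last strong reference dropped...
            // ...the array is now unreachable, so the GC may reclaim it.
        }
    }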

At any rate, I think we're moving past the point of productive discussion. My experience in practice is that memory leaks are more common / harder to avoid in non-GC languages. Of course a true memory leak (per the definition above) is possible in a GC language, but I just don't see it much in practice. Perhaps your experience is different.


Are there any languages that provide memory-leak safety, so that the compiled binary is guaranteed to be free of memory leaks?


What does "memory leak free" mean? A leak is a semantic notion, defined by what the program still intends to use.

If I have a queue with elements and I won't be accessing some of them in the rest of the program, is that a leak? Also, remember that certain GC'd languages intern strings; is an interned string a memory leak once the program will never use it again?
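
To make the interning question concrete, here's a hand-rolled intern pool in Java (a sketch; the JVM's real string pool behaves differently): the pooled string stays reachable forever, even if the program never looks it up again. Whether you call that a leak is exactly the semantic question.

    import java.util.HashMap;
    import java.util.Map;

    class InternSketch {
        // Hand-rolled pool with no eviction (not the JVM's actual pool).
        static final Map<String, String> pool = new HashMap<>();

        static String intern(String s) {
            return pool.computeIfAbsent(s, k -> k);
        }

        public static void main(String[] args) {
            intern("used-exactly-once");
            // Still reachable via `pool`, never used again. Leak or not?
        }
    }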


Well, you would need a way to "tell" the compiler more about your intentions and maybe even explicitly declare rules for what you consider a memory leak in certain scenarios.

To provide an example from a different domain: programs written in Gallina (Coq's specification language) are strongly normalizing, which implies that they always terminate.
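
For instance, in Lean 4 (not Gallina, but a closely related dependent type theory) the checker only accepts definitions whose termination it can verify:

    -- Structural recursion on Nat: accepted, termination is evident.
    def sumTo : Nat → Nat
      | 0     => 0
      | n + 1 => (n + 1) + sumTo n

    -- A definition like `def loop (n : Nat) : Nat := loop n` is rejected
    -- unless you opt out with `partial`, so plain definitions can't run
    -- forever.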


I have at times in the past made the (somewhat joking) observation that "memory leak" and "unintentional liveness" look very similar from afar, but are really quite different beasts.

With a "proper" GC engine, anything that is no longer able to be referenced can be safely collected. Barring bugs in the GC, none of those can leak. But, you can unintentionally keep references to things for a lot longer (possibly unlimited longer) than you need to. Which looks like a memory leak, but is actually unintentional liveness.

And proving the difference between "cannot be referenced" (a property that can in principle be checked by consulting a snapshot of RAM at an instant in time) and "will not be referenced" (a larger set: everything unreachable will never be referenced again, but some reachable things may also never be referenced, depending on the code) feels like it requires solving the halting problem.

And for free()-related problems, I've definitely seen code crash with use-after-free (and double-free).


> I have at times in the past made the (somewhat joking) observation that "memory leak" and "unintentional liveness" look very similar from afar, but are really quite different beasts.

Both situations prevent you from reusing memory that, at that point, isn't being used for anything useful. The distinction is formally valid, but from a practical point of view it sounds rather academic.

> And proving the difference between "cannot be referenced" (a property that can in principle be checked by consulting a snapshot of RAM at an instant in time) and "will not be referenced" (a larger set: everything unreachable will never be referenced again, but some reachable things may also never be referenced, depending on the code) feels like it requires solving the halting problem.

As usual, looking for a general solution to such a problem is probably a fool's errand. It's much easier to write code simple enough that it's obvious where things are referenced. Rust's lifetime semantics help with that (and if your code isn't all that simple, the overload of punctuation will make that apparent). If not Rust, then it would at least be good to be able to check the liveness of an object in a debugger. In C you can check whether a particular allocation was undone by a free() or equivalent. I'm not aware of any debugger for e.g. Java that would let me point at a variable and ask to be notified when it's garbage collected, but it sounds like something that shouldn't be too hard to do if the debugger is integrated with the runtime.
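
Not a debugger feature, but Java can get close in code: the java.lang.ref.Cleaner API (Java 9+) runs an action once an object becomes phantom reachable, i.e. once the GC has decided it's gone. A sketch:

    import java.lang.ref.Cleaner;

    class GcNotifySketch {
        static final Cleaner cleaner = Cleaner.create();

        public static void main(String[] args) throws InterruptedException {
            Object tracked = new Object();
            // The action must not capture `tracked`, or it would never run.
            cleaner.register(tracked, () -> System.out.println("collected"));
            tracked = null;     // drop the last strong reference
            System.gc();        // only a hint; collection isn't guaranteed
            Thread.sleep(1000); // give the cleaner thread a chance to fire
        }
    }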



