> I'm not a "professor" but as a software engineer with 35 years in this industry I can say that new languages should avoid GC's

With respect, and with much less experience than you, I really don't think so. I believe the majority of languages are better off being managed. Low-level languages do have their place, and I am very happy for Rust, which does bring some novel ideas to the field. But that lower level of detail is very much not needed for the majority of applications. Also, ARC is much, much slower than a decent GC, so from a performance perspective as well it makes sense to prefer GC'd runtimes.




ARC is in fact faster than GC, and even more so on M1/M2 chips with the Swift runtime. There were benchmarks circulating here on Hacker News; unfortunately I can't find those posts now. GC requires more memory (typically double that of an ARC runtime) and is slower even with the extra memory.


How can doing more work, synchronously, be faster than a plain old pointer bump plus some asynchronous work done on another thread? Sure, it does take more memory, but in most cases (OpenJDK for example) allocation is simply a thread-local arena allocation that is literally an integer increment, plus an eventual copy of the live objects to another region. You couldn't make it any faster; malloc and ARC are both orders of magnitude slower.
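To make the "pointer bump" concrete, here is a minimal sketch of arena/bump allocation, the idea behind a JVM thread-local allocation buffer (TLAB). It's illustrative Java, not OpenJDK internals, and the names are made up:

    // Minimal bump-pointer arena: the allocation fast path is one bounds check
    // and one integer add. A real TLAB would request a fresh buffer (or trigger
    // a GC) instead of returning -1.
    final class BumpArena {
        private final byte[] chunk;
        private int top = 0;

        BumpArena(int sizeInBytes) {
            chunk = new byte[sizeInBytes];
        }

        /** Returns the offset of the new "object", or -1 if the arena is exhausted. */
        int allocate(int bytes) {
            if (top + bytes > chunk.length) return -1;
            int result = top;
            top += bytes;   // the entire cost of the fast path
            return result;
        }
    }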

ARC, while it can elide reference-count operations in certain cases, will still in most cases have to issue atomic increments/decrements, which are among the slowest operations on modern processors. And on top of that, it doesn't even solve the problem completely (circular references), mandating a very similar solution to a tracing GC (ref counting is in fact a form of GC: tracing looks at the live edges between objects, ref counting looks at the dead edges).
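For contrast, here is a hedged sketch of what plain reference counting has to do on every retain/release: an atomic increment or decrement on a shared counter. This is illustrative Java, not any real runtime's implementation; the class and method names are made up, and the comment notes why a cycle never gets freed:

    import java.util.concurrent.atomic.AtomicInteger;

    // Illustrative manual reference counting, not a real runtime's scheme.
    final class RefCounted<T> {
        private final AtomicInteger count = new AtomicInteger(1);
        private final T value;

        RefCounted(T value) { this.value = value; }

        T get() { return value; }

        void retain() {
            count.incrementAndGet();    // atomic add on the hot path
        }

        void release() {
            if (count.decrementAndGet() == 0) {
                // clean up `value` here; this point is never reached for two
                // objects that retain each other (a reference cycle)
            }
        }
    }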


I'm not familiar with the details, but it is said that Swift's ARC is several times faster than Obj-C's; somehow it doesn't always require atomic inc/dec. It also got even better specifically on the M1 processors. As for GCs, every cycle carries the overhead of walking over the same objects that are not yet disposable.

Someone also ran tests: for the same tasks and on equivalent CPUs, Android requires 30% more energy and 2x the RAM compared to iOS. Presumably the culprit is the GC.


That's a very strong "presumably", and it is based on a very niche use case: mobile devices.

It is not an accident that on powerful server machines all FAANG companies use managed languages for their critical web services, and there is no change on the horizon.


It might be because on the server side they usually don't care much about energy or RAM. The StackOverflow dev team has an interesting blog post somewhere where they explain that at one point they figured out C#'s GC was the bottleneck, and they had to do a lot of optimization, at the expense of extra code complexity, to minimize the GC overhead.
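Their changes were in C#, but the general pattern is the same in any managed language: allocate less on the hot path, for example by reusing a buffer instead of creating a fresh one per call. A hedged Java sketch with made-up names, not StackOverflow's actual code:

    import java.nio.charset.StandardCharsets;

    // Reuse one scratch buffer per (single-threaded) formatter instead of
    // allocating a new builder per call, trading a bit of code complexity
    // for lower GC (or RC) pressure.
    final class LogLineFormatter {
        private final StringBuilder scratch = new StringBuilder(256);

        byte[] format(String user, long timestampMillis) {
            scratch.setLength(0);   // reset the reused buffer instead of `new StringBuilder()`
            scratch.append(timestampMillis).append(' ').append(user).append('\n');
            return scratch.toString().getBytes(StandardCharsets.UTF_8);
        }
    }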

It is actually quite rare that companies think about their infrastructure costs; it's usually just taken for granted. Plus, there aren't many ARC languages around.

Anyway I'm now rewriting one of my server projects from PHP to Swift (on Linux) and there's already a world of difference in terms of performance. For multiple reasons of course, not just ARC vs. GC, but still.


With all due respect, (big) servers care about energy costs a lot, at least as much as mobile phones do. By the way, out of the managed languages Java has the lowest energy consumption. And RAM draws the same energy whether it is filled or not.

Just because GC can be a bottleneck doesn't mean it is bad, or that the alternatives wouldn't have an analogous bottleneck. Of course one should try to decrease the number of allocations (the same way you have to with RC as well), but there are certain allocation patterns that simply have to be managed. For those, a modern GC is the best choice in most use cases.
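One example of the kind of allocation that has to be managed: a cached value handed out to arbitrary callers, where no single owner knows when the last user is done with it. A hedged Java sketch with made-up names:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Entries are shared between the cache and any number of callers. With a
    // tracing GC an entry is reclaimed automatically once the cache has evicted
    // it and the last caller drops its reference; with manual memory management
    // you would need an explicit policy for who frees it and when.
    final class SessionCache {
        private final Map<String, byte[]> entries = new ConcurrentHashMap<>();

        byte[] get(String key) {
            return entries.computeIfAbsent(key, k -> load(k));
        }

        void evict(String key) {
            entries.remove(key);    // callers still holding the entry are unaffected
        }

        private byte[] load(String key) {
            return new byte[4096];  // placeholder for real session data
        }
    }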



