
I'm glad to see that this language has a garbage collector, at a time when "lightweight" languages increasingly forgo GC and memory safety. Even in the Harvard architecture of WebAssembly, which mitigates many of the security problems that a lack of memory safety causes, memory safety is the right choice.



Thanks. I feel I have gotten tons more done over the years by having GC. It's just so much more productive. For the really nitty-gritty low-level details of a particular platform, like directly calling the kernel to set up signal handling and memory protection, do I/O, etc., Virgil has unsafe pointers. On targets like the JVM or the built-in interpreter, where there are no pointers, there is no Pointer type.

I am also working on a new Wasm engine, and I have a neat trick (TM) where the proposed Wasm GC features are implemented by simply reusing the Virgil GC. So the engine is really a lot simpler: it doesn't need a handle mechanism, it doesn't have a GC of its own, and the one Virgil GC has a complete view of the entire heap, instead of independent collectors that need to cooperate.


Will your new Wasm engine be written in Virgil?


I am curious about your comment. What new languages have you seen that are not memory safe by default?


Not GP, but the following come to mind: Jai, Zig, Odin, and Hare, all of which aspire in one way or another to be a modernized take on C. There is also a larger class of languages I call "safe-ish" as they are generally safe for single-threaded programs but can exhibit undefined behavior from data races; this includes Swift and Go, and likely other newer languages inspired by those.


> There is also a larger class of languages I call "safe-ish" as they are generally safe for single-threaded programs but can exhibit undefined behavior from data races; this includes Swift and Go, and likely other newer languages inspired by those.

Go does have a garbage collector though, so maybe the conflating of GC with safety (not by you, but earlier in the thread) is a bit misleading.


GC is not the only way though; see Rust and Swift, both very safe languages.


For what it's worth, the person you're replying to is one of the heavyweights of the Rust community.


Not only is he a "heavyweight of the Rust community", he was literally one of the main designers of the language at Mozilla (he's still contributor #6 by commits[1] despite not having worked on it for the past 7 years!)

[1]: https://github.com/rust-lang/rust/graphs/contributors


I don't understand: how does that invalidate my response to that person's comment about GCs?


By analogy, you are telling a mathematics professor that "2 + 2 = 4".

EDIT: It wasn't me downvoting you, FYI, but go ahead and downvote me. :)


This is becoming a pointless meta-discussion, but the parent comment didn't indicate in any way that I was talking to a "professor". The comment said, great that there are more languages with GC. I disagree with that, whoever may say it.

I'm not a "professor" but as a software engineer with 35 years in this industry I can say that new languages should avoid GC's (as in, generational and related) and stick to either ARC or Rust-like compile-time memory management.

Just because the original comment is by, let's say, a prominent figure, doesn't make it right.

P.S. I rarely downvote out of disagreement, only for comment quality.


> I'm not a "professor", but as a software engineer with 35 years in this industry I can say that new languages should avoid GCs

With respect, and with much less experience than you, I really don't think so. I believe the majority of languages are better off being managed. Low-level languages do have their place, and I am very happy that Rust brings some novel ideas to the field. But that low-level control is very much not needed for the majority of applications. Also, ARC is much, much slower than a decent GC, so from a performance perspective as well it would make sense to prefer GC'd runtimes.


ARC is in fact faster than GC, and even more so on M1/M2 chips and the Swift runtime. There were benchmarks circulating here on Hacker News, but unfortunately I can't find those posts now. GC requires more memory (normally double that of an ARC runtime) and is slower even with more memory.


How can more, synchronous work be faster than a plain old pointer bump plus some asynchronous work done on another thread? Sure, it does take more memory, but in most cases (OpenJDK for example) it is simply a thread-local arena allocation where allocation is literally an integer increase, plus an eventual copy of the live objects to another region. You couldn't make it any faster; malloc and ARC are both orders of magnitude slower.
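For illustration, here is a minimal Swift sketch (toy code, not OpenJDK's actual implementation; the BumpArena type is made up) of what that thread-local bump allocation amounts to: the fast path is one bounds check plus one integer increment, and reclamation is deferred to a later copy of the live objects.

    // Toy illustration of TLAB-style allocation -- not a real collector.
    final class BumpArena {
        private let base: UnsafeMutableRawPointer
        private let capacity: Int
        private var offset = 0

        init(capacity: Int) {
            self.capacity = capacity
            self.base = UnsafeMutableRawPointer.allocate(byteCount: capacity, alignment: 16)
        }
        deinit { base.deallocate() }

        // Fast path: one compare and one add. When the arena fills up,
        // a real GC would copy the live objects to another region and
        // reset `offset` -- that is the asynchronous part.
        func allocate(_ bytes: Int) -> UnsafeMutableRawPointer? {
            guard offset + bytes <= capacity else { return nil }
            let p = base + offset
            offset += bytes
            return p
        }
    }

    let arena = BumpArena(capacity: 1 << 20)
    let obj = arena.allocate(64)   // "allocation" is literally an integer increase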

ARC, while it can elide reference-count operations in certain cases, will still in most cases have to issue atomic increments/decrements, which are among the slowest things on modern processors. And on top of that it doesn't even solve the problem completely (circular references), mandating a very similar solution to a tracing GC (ref counting is in fact a form of GC: tracing looks at live edges between objects, ref counting looks at dead edges).
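As a rough illustration of the cycle problem (a made-up Node type, nothing runtime-specific): ARC only frees these two objects because one edge is declared weak; with two strong references the cycle would never be reclaimed, which is exactly the case a tracing GC handles for free.

    // Illustrative only: a reference cycle ARC cannot reclaim by itself.
    final class Node {
        var next: Node?        // strong reference (atomic retain/release)
        weak var prev: Node?   // weak: breaks the cycle, doesn't bump the strong count
        deinit { print("Node freed") }
    }

    var a: Node? = Node()
    var b: Node? = Node()
    a!.next = b                // a -> b (strong)
    b!.prev = a                // b -> a (weak)
    a = nil
    b = nil                    // both deinits run; with a strong `prev`, neither would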


I'm not familiar with the details, but it is said that Swift's ARC is several times faster than ObjC's; it somehow doesn't always require atomic inc/dec. It also got even better specifically on the M1 processors. As for GCs, each cycle carries the overhead of going over the same objects that are not yet disposable.

Someone also conducted tests: for the same tasks and on equivalent CPUs, Android requires 30% more energy and 2x the RAM compared to iOS. Presumably the culprit is the GC.


That's a very strong "presumably", on a very niche use case of mobile devices.

It is not an accident that on powerful server machines all FAANG companies use managed languages for their critical web services, and there is no change on the horizon.


It might be because on the server side they usually don't care much about energy or RAM. The StackOverflow dev team has an interesting blog post somewhere where they explain that at one point they figured C#'s GC was the bottleneck, and they had to do a lot of optimizations, at the expense of extra code complexity, to minimize the GC overhead.

It is actually quite rare that companies think about their infrastructure costs; it's usually just taken for granted. Plus, there aren't many ARC languages around.

Anyway, I'm now rewriting one of my server projects from PHP to Swift (on Linux), and there's already a world of difference in terms of performance. For multiple reasons of course, not just ARC vs. GC, but still.


With all due respect, (big) servers care about energy costs a lot, at least as much as mobile phones. By the way, of the managed languages, Java has the lowest energy consumption. RAM takes the same energy whether filled or not.

Just because GC can be a bottleneck doesn't mean it is bad or that alternatives wouldn't have an analogous bottleneck. Of course one should try to decrease the number of allocations (the same way you have to in the case of RC as well), but there are certain allocation types that simply have to be managed. For those, a modern GC is the best choice in most use cases.



