> Basically you only use GC if you declare something using a GC type.

So similar to C# (2000)? A useful feature to be sure, but not a major innovation.

> The standard library uses seqs in various places so if you fully turn the GC off using the compiler switch --gc:none you'll get warnings for things you use that will leak. There's no GC 'runtime' stuff that you need though.

Running with the GC off (and accepting the leaks) was already a standard managed-language technique though.

> Nim's GC is thread-local (so no stop-the-world issues)

Well no wonder it has nice properties if it avoids all of the hard problems! What happens when you pass references between threads?

> only triggered on allocate, and has realtime support via enforcing collection periods.

Your link describes the realtime support as best-effort, and implies that it doesn't work for cycle collection. So honestly this doesn't seem to offer much over e.g. the tuneable GC of the JVM (which admittedly made a massive mistake in choosing defaults that prioritised batch throughput rather than latency).

I do appreciate the information, and hope this isn't coming off as overly confrontational. But honestly it sounds like Nim is overselling things that are mostly within the capabilities of existing managed languages (maybe even behind them if the "GC" is only thread-local and/or not cycle-collecting).




So, the full gist is: Nim uses automatic reference counting with cycle detection. If you want to, you can disable the cycle detection, perhaps only temporarily. The compiler flag for turning GC off doesn't actually turn off all GC, IIRC. It still does automatic reference counting, and it can still do cycle detection, it's just that you need to initiate it manually.
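
To make that concrete, here is a minimal sketch of my own (assuming the default refc GC and the GC control procs in the system module):

  type Node = ref object
    next: Node

  proc makeCycle() =
    var a = Node()
    a.next = a              # a reference cycle: refcounting alone can't free this

  GC_disableMarkAndSweep()  # keep refcounting, but pause the cycle collector
  for i in 0 ..< 100_000:
    makeCycle()             # cycles accumulate while it's paused
  GC_enableMarkAndSweep()
  GC_fullCollect()          # manually trigger a full, cycle-collecting pass
  echo GC_getStatistics()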

The language does have pointers, and will let you do any C-style memory unsafe whatever you want to do with them. However, it doesn't have any library calls that I'm aware of that are equivalent to C's malloc() and free(). You'd have to supply your own.

There are also ambitions of introducing real non-GC, memory-safe memory management, probably something along the lines of how Rust does it. Those haven't come to fruition yet, though.

So, long story short, yes you can completely disable GC, but I think that its capabilities on that front are somewhat overstated.


> However, it doesn't have any library calls that I'm aware of that are equivalent to C's malloc() and free(). You'd have to supply your own.

The malloc/free equivalents are in the system module: https://nim-lang.org/docs/system.html#alloc%2CNatural
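
For instance, a tiny sketch of my own using those procs (the memory is untouched by the GC):

  var p = cast[ptr int](alloc(sizeof(int)))  # manual allocation, like malloc
  p[] = 42
  echo p[]
  dealloc(p)                                 # manual deallocation, like free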


This is the owned reference memory management: https://nim-lang.org/araq/ownedrefs.html


As shown by Mesa/Cedar, a systems language with RC coupled with a cycle collector can go a long way - far enough for a full graphical Xerox PARC workstation.


I wouldn't say separating the GC at the type level is a major innovation, but as you say it's useful. I don't think Nim really sells itself on a groundbreaking GC implementation either. However, it does give you a fast GC with enough flexibility should you need it - for example, you can switch to the Boehm GC, which is not thread-local.

GC'd types are deep-copied when sent over channels between threads, or you can use the usual synchronisation primitives and pass pointers.
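
For example, a rough sketch of my own assuming the stock Channel/Thread APIs (compile with --threads:on):

  var chan: Channel[string]

  proc worker() {.thread.} =
    echo "got: ", chan.recv()

  chan.open()
  var t: Thread[void]
  createThread(t, worker)
  chan.send("hello from the main thread")  # deep-copied, so each thread's heap stays local
  joinThread(t)
  chan.close()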

As you say, thread-locality avoids the hard problems and this is a good default - I would argue that most of the time you want your data being processed within a thread and the communication between them to be a special case.

Certainly, there's a lot of talk of adding some sugar to threading, and Nim does offer some interesting tastes, such as the parallel statement: https://nim-lang.org/docs/manual_experimental.html#parallel-...
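
A hedged sketch of my own, following the documented parallel/spawn pattern (compile with --threads:on):

  import threadpool
  {.experimental: "parallel".}

  proc slowSquare(x: int): int = x * x

  proc main() =
    var results = newSeq[int](8)
    parallel:
      for i in 0 .. results.high:
        results[i] = spawn slowSquare(i)  # disjoint writes, checked by the compiler
    echo results                          # all spawns are joined when the block ends

  main()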

The performance of the default GC is good to very good; the JVM is almost certainly better in most cases, but that's comparing apples to oranges - it's a different language.

Nim's GC pressure is much lower in most cases, not least because everything defaults to stack types, which not only don't use the GC but tend to be much better for cache locality. Using ref types is not required unless you use inheritance, and since the language encourages composition over inheritance (despite providing full OO capabilities), you find inheritance isn't as needed as in other languages.
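
For example, a small sketch showing the default value types versus traced refs:

  type
    Point = object        # value type: lives on the stack or inline, no GC
      x, y: float
    Shape = ref object    # traced reference: allocated on the GC'd heap
      origin: Point       # composition: the Point is stored inline
      name: string

  var p = Point(x: 1.0, y: 2.0)        # stack allocation
  var s = Shape(origin: p, name: "a")  # heap allocation, managed by the GC
  echo s.origin.x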

Plus, dropping down to pointer level to avoid the GC without disabling it isn't really much different from using refs:

  type
    DataObj = object
      foo: int
    Data = ptr DataObj    # untraced pointer, invisible to the GC

  proc newData: Data = cast[Data](alloc0(DataObj.sizeOf))  # manual allocation
  proc free(data: Data) = data.deAlloc                     # manual deallocation

  var someData = newData()
  someData.foo = 17
  echo someData.repr
  # the echo outputs eg: ptr 000000000018F048 --> [foo = 17]
  someData.free

All this means that you can 'not use the GC' whilst not disabling it. I am a performance tuning freak and I still use GC'd seqs all the time, because the performance hit is actually using the heap instead of the stack and, worse, the actual allocation of heap memory - regardless of the language. The GC overhead is minuscule even with millions of seqs and would only come into play when allocating memory inside loops. At that point, it's not the GC that's an issue, but the allocation pattern.

Again though, it's nice to be able to drop down to pointers easily when you do need every clock cycle.


That is the gist that the anti-GC crowd doesn't get.

Languages like Nim give you the productivity of relying on the GC's help, with language features for performance freaks available when they really need to make use of them.

Java not offering a Modula-3 like feature set has tainted a whole generation to think that GC == Java GC.

EDIT: grammar errors


> has tainted a whole generation to think that GC == Java GC

We can extend this pattern:

> has tainted a whole generation to think that OOP == Java OOP


From that point of view, C# also had hardly anything new to bring to the table, given Algol 68, Mesa/Cedar or Modula-3, if we start counting GC-enabled systems programming languages.


I think the biggest issue is that most equate GC with Java/Smalltalk-style GC, instead of Modula-3/C#-style GC.


Doesn't .NET (and C# with it) have stop-the-world GC, very similar to Java?

Or do you mean something else?


The CLR was designed for multiple languages' execution models, including C++.

As for C#, besides the GC, you get access to off-heap unmanaged allocations, low-level byte manipulation, value types, inlined vector allocations, stack allocation, struct alignment, and spans.

All GCs eventually have to stop the world, but they aren't all made alike, and it is up to developers to actually make use of the language features for writing GC-free code.


> All GCs eventually have to stop the world

Not entirely true. Erlang's BEAM definitely doesn't need to (unless you define "world" to be a single lightweight process). Perl6's MoarVM apparently doesn't need to, either.


Yes, I do define it like that; there is always a stop. Even pauseless collectors actually do have a stop, even if only for a few microseconds.

Doing it in some local context is a way to minimize overall process impact.

Just like reference counting as a GC algorithm does introduce pauses, especially if implemented in a naive way. More advanced implementations end up being a mini tracing GC algorithm.

Regardless, having some form of automatic memory management in systems languages is something that should have become standard by now.


There's always a stop, yes, but there's not always a stop of the whole world, which is my point. "Stop the world" implies (if not outright definitionally means - explies?) that the execution of all threads (lightweight or otherwise) stops while the garbage collector runs - e.g. that of (the official/reference implementations of) Python and Ruby.

Erlang doesn't require stopping the world because every Erlang process is isolated from the others (no shared state at all, let alone mutable shared state). I don't know off-hand how Perl avoids it.


> All GCs eventually have to stop the world

Not at all. Nim is an example.



