Hacker News

Prior art like Wirth's languages or Modula-3 added a minimal set of features, such as exceptions or some OOP, on top of simple, efficient systems languages. They usually have a GC, though many prefer reference counting for efficiency. This seems like that kind of thinking in an interpreted language.

Just going by what's on the front page; I didn't dig deep into it or anything. Interesting project.




Reference counting isn't more efficient than GC; usually the value of reference counting is that it's deterministic and you can bind finalizers to objects (with a traditional tracing GC, finalizers run only when the GC runs, which may never happen). The canonical example is a file object that closes its operating system file when the object is no longer used; this is nice in theory, but as the Python folks found out, not a good substitute for properly closing files.
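A minimal sketch of that file example, assuming CPython: dropping the last reference does flush and close the file immediately thanks to refcounting, but that's an implementation detail, which is why the explicit `with` idiom won out.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# Relying on the finalizer: under CPython's refcounting, the file is
# flushed and closed as soon as the last reference is dropped...
f = open(path, "w")
f.write("hello")
f = None  # refcount hits zero; close happens right here (CPython only)

# ...but the portable, reliable idiom is to close explicitly:
with open(path) as g:
    assert g.read() == "hello"
```

On a tracing-GC Python (e.g. PyPy), the `f = None` version may leave the file open, and buffered data unflushed, for an arbitrarily long time.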


The other recurring example for me was OpenGL resources like textures (this was in ObjC). Again, it turned out not to work so well once I got into more complicated examples, because of ordering issues: my GLWindow instance would get deallocated, and my refcounted GLTexture class would then get finalized after the underlying CGLContext was gone, or was just not the active context. I started coming up with more elaborate schemes to deal with it, but then it no longer felt elegant.
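The ordering hazard above can be sketched in Python (class names mirror the anecdote but are illustrative, not a real GL binding): the texture's finalizer needs a live context, yet nothing guarantees the context outlives the texture.

```python
import weakref

events = []

class GLContext:
    """Stands in for a native context (e.g. CGLContext)."""
    def __init__(self):
        self.alive = True
    def destroy(self):
        self.alive = False
        events.append("context destroyed")

class GLWindow:
    """Owns the context and tears it down when the window goes away."""
    def __init__(self):
        self.ctx = GLContext()
    def __del__(self):
        self.ctx.destroy()

class GLTexture:
    """Refcounted wrapper whose cleanup needs a live context; it holds
    no owning reference, which is exactly the ordering bug."""
    def __init__(self, ctx):
        self.ctx = weakref.ref(ctx)
    def __del__(self):
        ctx = self.ctx()
        if ctx is None or not ctx.alive:
            # In real GL code this is where glDeleteTextures would run
            # against a dead (or wrong) context.
            events.append("texture finalized late")

win = GLWindow()
tex = GLTexture(win.ctx)
del win  # window deallocates, tearing the context down first...
del tex  # ...so the texture's finalizer has nothing valid to clean up
```

The "more elaborate schemes" typically amount to giving the texture an owning reference to the context, which keeps cleanup valid but complicates the ownership graph.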

There is one other aspect of RC systems that I have come to appreciate but have never seen anyone else write about: they play well with each other. It's fairly trivial to create foreign-object wrappers for Python and Objective-C, in both directions, that each call the appropriate refcount operations of the foreign system. You can then have allocated data structures referring back and forth across the language bridge, and when you abandon the root, everything deallocates properly and deterministically in both runtimes. I don't have much experience with GC runtimes and their FFIs, but from what I've read I suspect this might be trickier to reason about. This is an obscure point, but one that I did find interesting in terms of language interop.
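A toy sketch of such a bridge, with everything modeled in Python for demonstration: `ForeignHandle`, `retain`, and `release` are assumed names standing in for the foreign runtime's real refcount operations (e.g. Objective-C's retain/release), not an actual FFI.

```python
class ForeignHandle:
    """Hypothetical Python-side proxy for an object owned by another
    refcounted runtime."""
    def __init__(self, foreign_obj):
        self._obj = foreign_obj
        self._obj.retain()       # take +1 in the foreign runtime
    def __del__(self):
        self._obj.release()      # dropping the proxy drops that reference

class ToyForeignObject:
    """Stand-in for the foreign runtime's refcounted object."""
    def __init__(self):
        self.refcount = 1
    def retain(self):
        self.refcount += 1
    def release(self):
        self.refcount -= 1

obj = ToyForeignObject()
proxy = ForeignHandle(obj)
assert obj.refcount == 2  # the bridge took its reference
del proxy                 # CPython's refcounting runs __del__ immediately
assert obj.refcount == 1  # foreign reference released, deterministically
```

The symmetric wrapper on the foreign side would hold a Python reference (`Py_INCREF`/`Py_DECREF`) the same way, which is what lets cross-runtime structures unwind cleanly from either end.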


Oh yeah, my mistake. I should've said predictable, not efficient. I remember people liked it to avoid long pauses coming out of nowhere.


With (naive) ARC you can still get long pauses "seemingly out of nowhere", e.g. when a large object tree goes out of scope.
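This pause is easy to observe in CPython, whose refcounting frees an entire structure in one synchronous cascade the moment the last reference goes away (actual timings will vary by machine):

```python
import time

# Build a large structure of a million small objects.
tree = [[i] for i in range(1_000_000)]

start = time.perf_counter()
tree = None  # last reference dropped: the whole deallocation cascade runs here
pause = time.perf_counter() - start
print(f"freeing took {pause * 1000:.1f} ms")
```

A tracing GC can spread (or defer) that work; naive RC pays it all at the point where the root is released.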


That's not out of nowhere, since you have exact control over when it goes out of scope.


> Reference counting isn't more efficient than GC

Isn't it more efficient in terms of memory usage?



