
Superficially this language appears to be very similar to Swift. Beyond the syntax, it also has first-class refcounting, C language bindings, and no external runtime (it compiles straight to a binary).

I wonder, does Vala have a stable ABI, or native compatibility with other higher-level languages like C++ or ObjC? These are other difficult challenges which Swift attempts to tackle (and depending on who you ask, with varying levels of success).

In any case this is an interesting language. Thanks for sharing




Vala predates Swift by many years. It's meant as a high-level language for the C GObject system, to which it compiles.


> Superficially this language appears to be very similar to Swift

It's almost a clone of C#; it's tightly coupled with the GObject system instead of .Net.


Oh, got it. So like C# but compiling to native, a la Swift?


There are more languages than Swift...


Didn't you watch the latest WWDC? Apple invented programming languages.


What does .NET JIT emit? What does NativeAOT compile .NET applications to? :)


(I’ve worked on an AOT compiler for .NET and I’m a JIT expert.)

Even if you compile MSIL to native, you’re compiling to the GC’s ABI, which is definitely very different from what Vala and Swift do.


.NET follows the platform calling convention. A GC reference assignment to a memory location does involve going through a write barrier (the main user of which is the concurrent mark phase of the GC), but otherwise it's just the plain Windows or System V ABI for the respective ISA.
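For intuition, a card-marking barrier is conceptually just the store plus a flag update. A toy C sketch (the card size and table here are invented; this is not the actual CoreCLR barrier):

```c
#include <stdint.h>
#include <stddef.h>

/* Toy card-marking write barrier: the heap is divided into 512-byte
 * "cards", and every reference store marks its card dirty so a
 * concurrent marker knows which regions to rescan. All constants
 * are invented for illustration. */

#define CARD_SIZE 512
#define NUM_CARDS 1024

static unsigned char card_table[NUM_CARDS];

static void write_barrier(void **slot, void *value)
{
    *slot = value;  /* the actual reference store */
    card_table[((uintptr_t)slot / CARD_SIZE) % NUM_CARDS] = 1;  /* mark card dirty */
}
```

The extra store is cheap in isolation, which is why the rest of the ABI can stay plain Windows/System V.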

Practically speaking, you cannot call .NET methods directly unless they are annotated with [UnmanagedCallersOnly], which is necessary to ensure the GC is in a consistent state, module initializers have run, etc. This is a concern for NativeAOT libraries, as you don't have to explicitly call their entrypoint before calling into them, AFAIK.

This, however, is true for most languages that are not C. It is also a constraint for Swift, which has its own reference counting and ABI (the Library Evolution ABI), and likely for Vala, assuming it is reference counted.

The runtime vs. runtime-less argument is not exactly helpful in this context - there are """runtime"""-heavy interpreted languages like Python, Elixir or JS; there is Java, which assumes the JVM but is already lower level; and then there's .NET, which under the hood looks a lot like a strange flavour of C++ when you e.g. inspect the AOT binaries with Ghidra.

Fun fact: native profilers work with NativeAOT applications transparently on all platforms. You can hit "sample" in Activity Monitor on macOS and it will show you a fully symbolicated trace if symbols are available. Just recently, I used Samply to perform multi-threaded profiling and it worked just as well as it would for something written in Rust, if not better.


Swift and Vala and any other eagerly reference counted language don’t have to worry about native C code squirreling away a reference to a GC-managed object and failing to respond to a GC marking callback (either because the mechanism doesn’t exist or because it isn’t used correctly).

That’s an enormous difference in ABI.


How would C track references for heap-allocated data originating from (A)RC-based language?

I don't think what you say about .NET matches reality. You are not supposed to pass managed objects across FFI (you just can't; the runtime won't let you unless you do unsafe cast shenanigans). The correct way is to either marshal the data or use a blittable representation (plain structs can be passed as is, either by reference or by value; same goes for arrays/pointers of primitives). On the rare occasion where unmanaged code needs to keep a GC object alive, it is passed via a special GCHandle, but that's the exception, not the rule.

Swift has its own high-level representation for such objects, which is very much not FFI compatible. ARC in Swift is a feature that requires compiler involvement to work correctly, and its structs are not FFI compatible unless they are "trivial". Swift also has its own, more advanced Library Evolution ABI, which is strictly not the C ABI you expect from a regular library and has its own calling convention and semantics.

Overall, there seem to be strange misconceptions about how these languages work so it would be best to check with the documentation first:

.NET P/Invoke (FFI):

- https://learn.microsoft.com/en-us/dotnet/standard/native-int...

- https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...

Swift non-C ABI:

- https://github.com/apple/swift/blob/main/docs/LibraryEvoluti...

- https://www.swift.org/blog/library-evolution/

Swift ARC:

- https://docs.swift.org/swift-book/documentation/the-swift-pr...

- https://github.com/apple/swift/blob/main/docs/ARCOptimizatio...

(I don't actually know if any other platform, besides Swift itself, implements Swift ABI support, but .NET is going to have it in .NET 9 - likely the first non-Swift platform to do so.)


.Net was built for interop (mostly with COM+, but that's close enough to the C ABI once you get a function pointer), and it's pretty good at making it easy. GCHandle and pinning in general aren't incredibly rare; heck, merely passing a byte array or string to a native function involves pinning. There's also all the heavy lifting it does for you: it doesn't look like you need to pin byte arrays because the JIT does that for you. .Net's safety invariants aren't that hard to uphold either (any more than C's).

I have lost all love for the platform, but I still have to hand it to Microsoft: no FFI has come anywhere close to .Net in the 25-odd years of its existence.

> How would C track references for heap-allocated data originating from (A)RC-based language?

Intentionally designed FFI interfaces don't, at all. C either delegates allocation to the host language, or the host language needs to let C know when it's done with things. I think Lua is an example of the former (it has been a while); the Vulkano crate is a living example of the latter.
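A minimal C sketch of the latter pattern, with invented host_retain/host_release functions standing in for whatever retain/release entry points the host RC language actually exports (implemented locally here so the sketch is self-contained):

```c
#include <stdlib.h>

/* Sketch of the "host tells C, C tells host" pattern: the RC-based host
 * language exposes opaque handles plus retain/release entry points, and
 * the C side treats them as explicit ownership operations. All names
 * are invented; this is not any real language's FFI. */

typedef struct host_object {
    int refcount;
    int payload;
} host_object;

static host_object *host_new(int payload)   /* host allocates, rc = 1 */
{
    host_object *o = malloc(sizeof *o);
    o->refcount = 1;
    o->payload = payload;
    return o;
}

static void host_retain(host_object *o) { o->refcount++; }

static int host_release(host_object *o)     /* returns 1 when freed */
{
    if (--o->refcount == 0) { free(o); return 1; }
    return 0;
}

/* C code that squirrels away a reference must say so explicitly: */
static host_object *stash;

static void c_keep(host_object *o) { host_retain(o); stash = o; }
static void c_done(void)           { host_release(stash); stash = NULL; }
```

There is no marking callback to miss: every stored reference is visible in the count, which is the eager-RC advantage discussed above.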


Pinning is always done for heap-allocated GC memory, yes. Otherwise, the fixed statement does nothing for pointers/byrefs that originate from the stack, and it should also be a no-op for e.g. NativeMemory.Alloc-returned pointers.

On the other hand, GCHandles[0] are rare and only ever needed when you have a complex object lifetime where, for example, the object reference needs to be passed back from unmanaged code and the object needs to survive in the meantime. The unmanaged code cannot interact with the object itself because it is not representable and would have an arbitrary layout.

Today, there is support for multiple calling conventions and ways to interact with FFI. [UnmanagedCallersOnly] exports in NativeAOT-compiled libraries are C exports in the same way they are for dynamic (or static) libraries compiled with GCC. Various flavours of function pointers have existed for a long time, yes. The most recent one allows casting pointers directly to the desired unmanaged function signature within unsafe code, or creating one given the aforementioned [UnmanagedCallersOnly] annotation on a C# method.

[0] https://learn.microsoft.com/en-us/dotnet/api/system.runtime.... (note the specific use case; it's not the bread and butter that regular marshaling and pinning are)


Swift’s reference counting is exactly what C code on Apple’s platforms was already doing.

And Vala’s is exactly what GObject code was already doing.


That's not C but Objective-C. From a quick skim of the spec[0], it relies on macros and the same autorelease pool.

There is no free lunch in software engineering, only difficulties with accepting reality.

[0] https://clang.llvm.org/docs/AutomaticReferenceCounting.html


It's both C and Objective-C.

Objective-C's refcounting is exposed to C via CFRetain/CFRelease. Long before there was ARC (the thing you cite), the bulk of the NeXTSTEP and then Apple frameworks were written with Objective-C code manually doing [obj retain]/[obj release] and C code manually doing CFRetain(obj)/CFRelease(obj). There was some doc somewhere telling you the protocol.
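The protocol boils down to manual ownership bookkeeping. A toy sketch in C of that shape (all names invented here; this is not the real Core Foundation API):

```c
#include <stdlib.h>

/* Toy illustration of the ownership convention CF-style code follows:
 * a *Create function returns a +1 reference the caller must release,
 * and MyRetain/MyRelease mirror the shape of CFRetain/CFRelease.
 * Names and layout are invented for illustration. */

typedef struct { int refcount; } MyRef;

static MyRef *MyCreate(void)       /* "Create rule": caller owns +1 */
{
    MyRef *r = malloc(sizeof *r);
    r->refcount = 1;
    return r;
}

static MyRef *MyRetain(MyRef *r)   { r->refcount++; return r; }

static int MyRelease(MyRef *r)     /* returns 1 when the object is freed */
{
    if (--r->refcount == 0) { free(r); return 1; }
    return 0;
}
```

Any C code in the process can participate in this protocol without a runtime getting involved, which is the point being made about interop.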

Later, ARC came around to automate it for Objective-C code that opted in. ARC carefully mimicked the semantics of the protocol in that doc.

And then later, Swift came along and mimicked the semantics of ARC.

There is a free lunch for languages that use reference counting. That free lunch doesn't exist for GC'd languages, which introduce a semantics that is different from the ones that C code already uses.

Sorry to burst your bubble.


If reading documentation and looking into implementation details doesn't help, I welcome you to measure ARC overhead and observe the numbers with your own eyes at least once, especially in a multi-threaded scenario. You might also still read one of the previously posted links; surely something will work eventually.
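Most of the multi-threaded cost comes from the atomic increment/decrement each retain/release pair performs on a shared count; when several threads hammer the same object, those RMW operations serialize on its cache line. A toy C11 sketch of that contention shape (thread/op counts invented; not a calibrated ARC benchmark):

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

/* Each simulated "retain"/"release" is an atomic read-modify-write on a
 * single shared counter, the worst case for eager RC under contention.
 * Constants are arbitrary; this only demonstrates the access pattern. */

#define THREADS 4
#define OPS     100000

static atomic_int refcount = 1;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < OPS; i++) {
        atomic_fetch_add(&refcount, 1);  /* retain  */
        atomic_fetch_sub(&refcount, 1);  /* release */
    }
    return NULL;
}

static int run(void)
{
    pthread_t t[THREADS];
    for (int i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    return atomic_load(&refcount);       /* back to 1 if balanced */
}
```

Timing the contended version against a single-threaded loop makes the overhead visible directly.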


I’m not saying ARC is fast. Reference counting is slower than GC if what you want is throughput, unless you somehow manage to do very coarse-grained RC, which is what you get in most C++ code (but then it’s far from safe).

The GC<->legacy-C interoperability problem is unavoidable. I’ve been around the block on that problem as someone who writes GCs for a living. It always devolves to some sort of ugliness.

Anyway, the fact that Vala uses GObject RC semantics is a point in favor of Vala. Folks who go that route, as Swift also did, should just be honest with themselves about the tradeoff: simpler interop, worse throughput.


Oh neat, I’ve never heard of NativeAOT. I’ve only used Unity’s AOT compiler “Burst” for C# (I assume that is something different?).

Cool stuff.

edit: Actually, I meant to say Unity’s IL2CPP, which transpiles IL to C++. Burst is a different tool with similar goals: it compiles IL straight down to native code via LLVM.


To be precise, "runtime" is an umbrella term. Swift, C++ and Rust usually have a """runtime""" just as much, which includes, but is not limited to:

- Automatic or semi-automatic memory management

- Threadpool and, optionally, async abstraction implementation

- APIs to introspect type system properties / reflection

Swift very much has all these. And so does .NET.

As for Unity, it has diverged and lags significantly behind "vanilla" .NET in features, language versions and performance, so the experience of using it won't translate to what is considered "normal" C#/.NET today.



Vala is pretty old these days. My understanding is that they built it because Mono became popular, but there were patent and licensing concerns with it.


That's how I remember it, too. I still miss F-Spot.


> compiles straight to binary

I checked it ten-ish years ago; I think it transpiles into C (with GObject). Still no runtime, though.



