There are languages with similar type safety that compile faster, because their authors focused on offering several ways to compile the code.
I will admit I never understood why Ada never took off (is it only because of the tooling being proprietary for a long time?), but C# is not as memory safe as Rust.
The only memory safety that C# lacks in comparison with Rust is one special case: data races on in-process data.
For everything else regarding concurrent access to shared data out of process, they are on the same footing.
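To make that special case concrete, here's a minimal C# sketch (names made up for illustration) of a data race that compiles and runs fine, but whose direct Rust equivalent would be rejected at compile time without a Mutex or an atomic:

    using System.Threading.Tasks;

    class Counter
    {
        static int _count; // shared mutable state, no lock

        static async Task Main()
        {
            var tasks = new Task[4];
            for (int i = 0; i < tasks.Length; i++)
                tasks[i] = Task.Run(() =>
                {
                    for (int j = 0; j < 100_000; j++)
                        _count++; // data race: non-atomic read-modify-write
                });
            await Task.WhenAll(tasks);
            // Typically prints less than 400000 because increments are lost.
            System.Console.WriteLine(_count);
        }
    }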
Ada didn't take off for several reasons: the price of compilers (on UNIX it was an additional SKU on top of the respective developer SDK, which already offered C and C++ in the box), 1980s mainstream hardware that wasn't able to cope with it, and most OS vendors outside UNIX deciding to migrate from their own toolchains to C and C++, so again additional money on top of the OS SDK.
>> I will admit I never understood why Ada never took off (is it only because of the tooling being proprietary for a long time?)
Early Ada compilers had expensive licensing and required expensive hardware to run.
By the time GNAT was added to GCC, C++ had already taken over most of the spaces that were not Ada-exclusive (meaning the safety-critical / defense / aerospace niches where Ada either was once mandated or has thrived despite the original high costs).
It does have GC but that is not the point being made.
I believe the point the author is making is that other languages provide better safety than C and have faster compile times than Rust, therefore Rust should be able to improve its compile times.
If those languages achieve their memory safety at runtime, like C# does with GC, then that becomes relevant to the point of the compile time performance. The C# compiler has to do less as that complexity has been pushed elsewhere with different tradeoffs.
GC is not a roadblock for OS kernel work. Smalltalk is itself an OS and had GC from the start. IBM's i (the descendant of OS/400) most likely has GC as part of its kernel. Lisp machines had hardware-assisted GC.
What you may want to try to avoid is complex and non-deterministic GC, which makes it harder to reason about.
It is certainly a religion: most people who advocate against it hardly ever learned to use a profiler, or bothered to learn that not all GC languages are alike, and that many of them offer the same capabilities as C and C++ for low-level coding.
It's not. I collaborate on a C# game engine, and what I've seen is basically a lot of skirting around the GC, because its impact shows up in the profiler. It has gotten to the point where arrays and stackalloc are preferred over List, and HashSet was basically forbidden in the hottest paths.
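To illustrate the kind of substitution I mean, here's a rough sketch (illustrative names, not actual engine code) of the List-to-stackalloc change:

    using System;
    using System.Collections.Generic;

    static class HotPath
    {
        // Convenient but allocation-heavy: every call puts a List on the managed heap.
        static int SumDoubledWithList(ReadOnlySpan<int> input)
        {
            var buffer = new List<int>();
            foreach (var x in input)
                buffer.Add(x * 2);
            int sum = 0;
            foreach (var x in buffer) sum += x;
            return sum;
        }

        // Allocation-free: the buffer lives on the stack, so the GC never sees it.
        // Caution: only safe for small, bounded input sizes.
        static int SumDoubledWithStackalloc(ReadOnlySpan<int> input)
        {
            Span<int> buffer = stackalloc int[input.Length];
            for (int i = 0; i < input.Length; i++)
                buffer[i] = input[i] * 2;
            int sum = 0;
            foreach (var x in buffer) sum += x;
            return sum;
        }
    }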
Except most of the C# ecosystem relies on classes and GC. A lot of these problems are caused by the overuse of GC-ed classes, and by their ease of use.
My take on this is a little different from the previous comments. I tend to side with your position over the default position that GC is workable as a language's base assumption. I think the problem really is an expressiveness issue. Imagine a language conceptualized so that memory can be managed either manually or via GC, and where that choice is easy to make (i.e. it should not take much work to designate some code path as using a GC'd strategy); the price, I'd say, is that for implementation and performance reasons you would give up being able to easily abstract over the memory strategy.
tl;dr: I'd really prefer the option to determine when I'd like GC, as opposed to dodging the collector to avoid performance hits; hot code can default to manual. Probably not a realistic ask, but it could work.
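For what it's worth, C# can already approximate that split: classes are GC'd by default, and a hot path can opt into manual allocation with NativeMemory (.NET 6+, requires unsafe code to be enabled). A minimal sketch, with made-up names:

    using System.Runtime.InteropServices;

    static class ManualHotPath
    {
        static unsafe void Process()
        {
            // Opt this buffer out of the GC: allocate and free it manually.
            int* buffer = (int*)NativeMemory.Alloc(1024 * sizeof(int));
            try
            {
                for (int i = 0; i < 1024; i++)
                    buffer[i] = i;
                // ... hot loop over buffer, invisible to the collector ...
            }
            finally
            {
                NativeMemory.Free(buffer); // manual: forget this and you leak
            }
        }
    }

The catch is the one described above: nothing in the type system tracks which strategy a given value uses, so the choice doesn't compose or abstract well.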
> I’d really prefer the option to determine when I’d like GC, as opposed to dodging the collector to avoid performance hits
Fair enough. If SS14 used D, maybe they wouldn't have these problems.
D and early Rust (pre 0.2, like alpha alpha) had that. The problem is you split your community in two. You get a version of "what GC-color is your function/lib?".
It's a tradeoff for some domains: allow no GC bypass and you're going to run into a nigh-insurmountable performance cliff.
Allow GC as opt-in and you run into the issue of splitting your APIs in two.
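You can see that API split in miniature even in today's C#: once some callers must avoid allocation, interfaces tend to grow a second, buffer-based variant of everything. An illustrative, made-up example:

    using System;
    using System.Collections.Generic;

    public readonly record struct Point(int X, int Y);

    public interface IPathfinder
    {
        // GC variant: convenient, allocates a fresh list per call.
        List<Point> FindPath(Point from, Point to);

        // No-GC variant for hot paths: the caller supplies the buffer,
        // and the return value is the number of waypoints written.
        int FindPath(Point from, Point to, Span<Point> buffer);
    }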
> "This would be a lot easier if we didn't have malloc()/free()", basically.
Well, not really. It's just that some libraries like YamlDotNet copy waaaay more than needed, and it shows in serialization. They mostly minimized calling YamlDotNet, but the true solution would be a zero-copy parser.
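For comparison, the difference between a copying and a zero-copy scanner looks roughly like this; a hypothetical sketch, not YamlDotNet's actual API:

    using System;

    static class Scanner
    {
        // Copying: Split allocates a new string for every field.
        static void ParseCopying(string line)
        {
            foreach (var field in line.Split(','))
                Consume(field.AsSpan());
        }

        // Zero-copy: each field is a slice of the original buffer, no allocation.
        static void ParseZeroCopy(ReadOnlySpan<char> line)
        {
            while (!line.IsEmpty)
            {
                int comma = line.IndexOf(',');
                var field = comma < 0 ? line : line.Slice(0, comma);
                Consume(field);
                line = comma < 0 ? ReadOnlySpan<char>.Empty : line.Slice(comma + 1);
            }
        }

        static void Consume(ReadOnlySpan<char> field) { /* parse in place */ }
    }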
Another issue was that HashSet operations like Clear had a huge impact.
I think they replaced those with arrays.
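The likely shape of that swap (illustrative, not the actual SS14 code): HashSet<T>.Clear has to wipe its internal storage, so its cost scales with the set, while "clearing" a reused array is just resetting a count:

    using System.Collections.Generic;

    sealed class Scratch
    {
        // HashSet: Clear() wipes internal buckets; cost grows with the set.
        readonly HashSet<int> _seen = new();
        public void ResetSet() => _seen.Clear();

        // Array + count: clearing is an O(1) field write, no GC pressure.
        readonly int[] _buffer = new int[256];
        int _count;
        public void ResetArray() => _count = 0;

        public void Add(int id)
        {
            // Tradeoff: unlike the set, this doesn't deduplicate.
            if (_count < _buffer.Length)
                _buffer[_count++] = id;
        }
    }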
A third issue was something about flecs and archetype ECS. I don't know if it was a jest, but they mentioned changing the GC layout or rewriting the GC.
Point is, they now face a steep performance cliff. The only way out of it is through sheer effort.
So no, it's not anti-GC religion. Some domains and GC mix really badly.
Ada, Delphi, OCaml, C#/F# (.NET Native / Native AOT), D, Nim,...