A Comparison of Rust and Zig (expandingman.gitlab.io)
86 points by sundarurfriend on Jan 1, 2022 | 37 comments



Regarding lifetimes: they're severely overused by people new to Rust.

Types like struct <'a,'b,'c,'d> are not for storing data, but are temporary views into data stored elsewhere. If such a struct didn't have to be temporary, or weren't a view into pre-existing data that can't be moved, then it wouldn't need the lifetime annotations. Such lifetime salad can happen in very specific scenarios, but it's rarely needed in most programs.

It's very unfortunate that Rust's feature for temporary, scope-limited loans, AKA references, looks so similar to the general-purpose features that other languages have for referring to objects or storing data by reference. It's not the same thing!

Rust has a few other constructs for referring to objects, like Arc, which is a shared reference that isn't temporary, or Box, which is a pointer for storing data by reference but doesn't have a lifetime attached. When you start taking ownership into account when designing your data structures, it often turns out that Rust references are not the right tool to use. In 99% of structs references are a mistake (but they exist, because there is that 1% of legit cases).
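To make the distinction concrete, here's a minimal sketch (the struct and field names are made up for illustration) contrasting a lifetime-carrying view with the owning alternatives:

```rust
use std::sync::Arc;

// A temporary view: it borrows data owned elsewhere, so it carries a lifetime.
struct NameView<'a> {
    first: &'a str,
    last: &'a str,
}

// Owning alternatives: no lifetime parameters needed.
struct OwnedName {
    boxed: Box<str>,  // unique ownership, data stored by reference on the heap
    shared: Arc<str>, // shared ownership, not tied to any scope
}

fn main() {
    let full = String::from("Ada Lovelace");
    let view = NameView { first: &full[..3], last: &full[4..] };
    assert_eq!(view.first, "Ada");

    let owned = OwnedName {
        boxed: Box::from("Ada"),
        shared: Arc::from("Lovelace"),
    };
    // `owned` can be returned or stored anywhere; `view` cannot outlive `full`.
    assert_eq!(&*owned.boxed, "Ada");
    assert_eq!(&*owned.shared, "Lovelace");
}
```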


> Types like struct <'a,'b,'c,'d> are not for storing data, but are temporary views into data stored elsewhere.

Thanks. As someone that still has a hard time with appreciating the complexity of Rust and its lifetimes, this explanation of yours is illuminating and should be prominently featured in the documentation. It makes a lot of sense and answers a lot of doubts I had.


As someone who has tried to like Rust repeatedly, I find a collection of "gotchas" like this extremely off-putting. Is it better for people to unlearn a common thing to do it the "rust way"? Or for Rust to make itself a bit more ergonomic to prevent a common mistake?

I understand the importance of thoroughly learning one's tools, but this seems a lot like the ol' "you're holding it wrong" argument.

I will also acknowledge the possibility (and likelihood) that I don't understand Rust well enough to dissect the nuance here, but I'll also point out that I really didn't have to dig in that far to gain a working understanding of Go.


I'm definitely a massive fan of some of zig's ideas

I like how flexible structs are, and the inclusion of structural subtyping for them. I do wish C's unnamed fields made an appearance, however.

The way type constructors work is also really great. It's just "yeah it's literally a constructor that returns a type, like the same syntax and everything"

I'm a little concerned about how long zig is willing to defer semantic analysis and possible difficulties with symbol binding in generic code (the binding rules end up being closer to C macros than C++ templates I think). But, the language doesn't have overloading, which makes the symbol binding problems much, much, much less severe.

It has a lot of the same qualities that make me like C. It can represent the data structures I want to write and the code I want the compiler to generate, it makes using its escape hatches ergonomic, and it's obvious what's going on, especially to unfamiliar folks looking at zig code in a debugger. Also, embedding a full C compiler into the zig compiler is absolutely the correct way to do FFI. C++-level compatibility with C without all the baggage (it's kinda amazing how annoying C's conversion rules are once you add overloading and templates tbh).

It really is _staggering_ how long the compiler will defer actually doing sema and generating real code though, so we'll see.


> The aspect of rust about which I am most deeply dubious is the heavy use of macros: they are a herald of rust's pedantry ("If you don't use these macros you are going to have to write a shit-ton of boilerplate code.")

The most-used, most-loved macros in Rust have nothing to do with Rust's "pedantry", but rather the programmatic derivation of business logic that would have been annoying to write.

Serde's Serialize and Deserialize macros are amazing for creating complex and correct logic for shuffling data into and out of various serialization formats.
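As a rough, hand-rolled illustration of the kind of repetitive code a derive macro replaces (this is plain stdlib, not serde's actual generated output, and `Point`/`to_json` are made-up names):

```rust
// Manual serialization boilerplate: roughly what `#[derive(Serialize)]`
// would let you delete.
struct Point {
    x: i32,
    y: i32,
}

impl Point {
    // Hand-written JSON encoding; with serde this whole method disappears
    // behind a single derive attribute, and stays correct as fields change.
    fn to_json(&self) -> String {
        format!("{{\"x\":{},\"y\":{}}}", self.x, self.y)
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    assert_eq!(p.to_json(), "{\"x\":1,\"y\":2}");
}
```

Multiply that by every struct and every format (JSON, TOML, bincode, ...) and the value of deriving it becomes obvious.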

Structopt's (and now clap's) macros for command line argument parsing make writing CLIs a piece of cake. Arguably this is just another form of deserialization. Still great, and unrelated to the fact that Rust is verbose and explicit.

I guess you could argue that something like the PartialEq derivation macro is only necessary because Rust forces you to explicitly declare how user-defined types compare to each other (<, >, =, etc.) But I think this is super reasonable, and if you accept that, then the macro just provides a shortcut to opt into the code that most people want, which is that structs will compare using the comparison functions of their inner fields.


GC, manual malloc/free, or lifetimes and macros: choose one. Rust would be miserable to use without macros. There's so much ceremony and friction to ensure memory safety that you need a powerful macro system to cut through it sometimes.

If you don't like the macros then you can always use a managed GC language that gives the same safety guarantees at a performance cost.


Wait, most uses of macros in Rust don't have much to do with memory safety. RAII and borrow-checking don't rely on macros at all. Most features of, e.g., serde would work the same way in a garbage-collected language.

The primary exception that comes to mind is pin_utils::pin_mut, which is a macro primarily because of the somewhat circuitous route by which async came to Rust. If the language had been designed to support async from the beginning, I suspect that it probably would have been built-in syntax (assuming it didn't avoid the need for such an API entirely by using a different design).


Try to use Gtk-rs without the clone macro, and it will be lots of fun handling callbacks and widget relationships.


I wouldn't call this kind of macro idiomatic in non-GTK code. Reference counting is mostly used only occasionally in Rust code, and so the boilerplate cost of adding another local variable when giving a closure its own reference-counted pointer is outweighed by the maintainability cost of introducing a macro.

gtk-rs is different because it pervasively uses Rust bindings to a non-Rust-based memory management system based on reference counting and interior mutability, kind of like if everything was wrapped in Rc<RefCell<T>>. In that scenario, yeah, adding another local in front of every closure capture sounds like a pain. If most Rust code were like this, I suspect that something like glib::clone would be added as built-in syntax.
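The boilerplate in question is the "extra local per capture" pattern; a minimal, GTK-free sketch of what glib's clone macro abbreviates (the callback here is just a plain closure, not an actual GTK signal handler):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared, interiorly-mutable state, GTK-style: Rc<RefCell<T>>.
    let counter = Rc::new(RefCell::new(0));

    // The "extra local" boilerplate: clone the Rc so the closure owns
    // its own handle instead of trying to borrow `counter`.
    let on_click = {
        let counter = Rc::clone(&counter);
        move || {
            *counter.borrow_mut() += 1;
        }
    };

    on_click();
    on_click();
    assert_eq!(*counter.borrow(), 2);
}
```

One of these blocks per closure is tolerable; one per signal handler in a large GTK app is where the macro starts to pay for itself.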


For instance, one of the most popular crates is the lazy_static macro, without which it is hard to define global state. I also tend to define quite a few helper macros in my Rust code to keep things clean.


once_cell does the same with a closure.

Improved const evaluation has also eliminated some uses of lazy_static.
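For what it's worth, the standard library has since absorbed this pattern too: std::sync::OnceLock (stable since Rust 1.70) covers many lazy_static uses with no macro at all. A minimal sketch (the `config` function and its contents are made up):

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// Lazily-initialized global state without lazy_static:
// the closure runs once, on first access.
fn config() -> &'static HashMap<&'static str, i32> {
    static CONFIG: OnceLock<HashMap<&'static str, i32>> = OnceLock::new();
    CONFIG.get_or_init(|| {
        let mut m = HashMap::new();
        m.insert("retries", 3);
        m
    })
}

fn main() {
    assert_eq!(config()["retries"], 3);
}
```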


Yeah, derive macros in rust are very similar to Lombok and other annotation-processing code generators in Java. Any of the trait function implementations could be written manually, but it saves work and makes it more robust to use the derive macro (or Lombok).


Macros get better as more people use them anyway; you build up a community knowledge of what the hell they generate. It turns out that unhygienic macros are really rather useful, if a bit dangerous.


> The problem [with self-referential structs] seems to be that there is no way to promise rust that you will deallocate the entire struct properly.

That's not the problem. The problem is that Rust doesn't have move constructors. A move is just a memcpy. If a struct contains pointers to itself (in this case the references are just pointers) then memcpying it will invalidate the pointers.

I agree it's one of the biggest annoyances with Rust. For example a lot of parsing libraries return a parse result that contains pointers to the `&[u8]` that you gave it. If you want to return the parse result up the call stack past where the input comes from you end up with a self-referential struct and it's pretty much impossible.
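A minimal sketch of the pattern (with a made-up `parse` function): the borrowed result is fine as long as it stays below the input on the call stack; bundling the two together is where safe Rust says no.

```rust
// A hypothetical parse result that borrows from the input buffer.
struct Parsed<'a> {
    word: &'a [u8],
}

fn parse(input: &[u8]) -> Parsed<'_> {
    let end = input.iter().position(|&b| b == b' ').unwrap_or(input.len());
    Parsed { word: &input[..end] }
}

fn main() {
    let input = b"hello world".to_vec();
    let parsed = parse(&input); // fine: `input` outlives `parsed`
    assert_eq!(parsed.word, &b"hello"[..]);

    // Bundling `input` and `parsed` into one struct so you can return both
    // up the stack would make a self-referential struct, which safe Rust
    // can't express; the usual escape hatch is to copy the data out:
    let owned: Vec<u8> = parsed.word.to_vec();
    assert_eq!(owned, &b"hello"[..]);
}
```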

There are complicated solutions but frankly they're all so complicated I've always given up and just copied the data or rearchitected the program.


Man, I am a young junior developer, so I am not aware of all the history of programming, but this past decade really seems like a renaissance of low-level programming (at least in terms of new languages).

On another note, as a fan of both C and Rust, I am excited to see new languages that might improve upon ones that I already love.

My big question for Zig is, is this going to be like the D programming language? From what I was told D was better than C++, but wasn't enough of an improvement to fully take hold.

I do like the idea of an improved C though. For as good as Rust is, it certainly can take a while to code up a project; I would prefer C to fire things up quickly, and having some modern improvements in a language like C definitely grabs my attention.


D is chock-full of very good ideas, but original-D has a GC, so it's not really a competitor to C++. DasBetterC has no GC, but it's quite new, so it must compete for mindshare with Rust, Zig, V, Odin..


It was like this all along, until everyone decided it was all about Java and scripting languages.

Go into online archives and read magazines like Byte or DDJ; they are full of ads for tons of languages.


I do a lot in the PL space, and Zig has me pretty excited. I think Zig, paired with memory tagging for memory safety, could be a killer combination one day.


Have you looked at ParaSail? It's designed by the guy who led the Ada 9x ('95) design team, and it's unique in being pointer-free (not just as in Ada or Fortran, where pointers are discouraged; the language literally does not have them). Instead of pointers it has growable types based on memory tagging. It's specified such that each sub-expression can be safely executed in parallel.


ParaSail development stagnated after he joined AdaCore; has there been any progress since then?


Given that memory tagging for memory safety has been designed for C and C++, already exists for SPARC ADI, and is getting adoption across ARM-based devices, especially iOS and Android ones, what is the improvement Zig brings to the table in that regard?


I can imagine Zig using memory tagging at a more granular level: imagine changing a 16B union's tag when assigning it. C and C++ only do it per-allocation IIRC.


It has nothing to do with C or C++, rather OS support.


What I'm saying is, a new language can use memory tagging more effectively, if it decides when to scramble a tag.


Hardware memory tagging is done via OS APIs, the language has nothing to do with it.

The OS decides when and what.


Was on my phone before, but now I can give a more in-depth reply =)

So far, memory tagging is only done per allocation from malloc or alloca. The compiler backend inserts special new instructions (such as STG) to scramble the tag for a specific 16B "block". [0]

I propose that the zig compiler _frontend_ can also insert some STG instructions. Whenever the user does an = operation to replace one union member with another, we should do an STG on the union's containing blocks. Suddenly, we are much more memory safe, pretty much closing the gap with Rust.

Indeed, only some CPUs support memory tagging, and that's unfortunate. However, that doesn't stop us from using it where it is supported ;) if for nothing else, its help as a debugging tool would be amazing, like a much faster ASan or valgrind.

[0] https://developer.arm.com/-/media/Arm%20Developer%20Communit...



Thanks for sharing these! I see what you mean now. And it's especially unfortunate that Android needs an entirely different OS build.

iOS works a bit more like I expected, though they don't list the instructions we might use to scramble the tags for a block of memory like the ARM paper's STG instruction does. I wonder if that might be possible one day.


Traits support inheritance,

    trait Widget {
        fn draw();
    }

    trait Menu : Widget {
        fn on_select();
    }
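To be concrete about what the supertrait buys: any implementer of Menu is obligated to also implement Widget, and Menu code can rely on Widget's methods. A self-contained sketch (with &self and bodies added so it actually runs; the concrete type is made up):

```rust
trait Widget {
    fn draw(&self) -> String;
}

// Supertrait: every Menu must also be a Widget.
trait Menu: Widget {
    fn on_select(&self) -> String {
        // Menu code may call Widget methods, which are guaranteed to exist.
        format!("selected {}", self.draw())
    }
}

struct MainMenu;

impl Widget for MainMenu {
    fn draw(&self) -> String {
        "main menu".to_string()
    }
}

impl Menu for MainMenu {} // legal only because Widget is implemented above

fn main() {
    assert_eq!(MainMenu.on_select(), "selected main menu");
}
```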
Most package managers are older than cargo, so it is a bit hard for it to have had an influence on them.

Unless zig makes it a compiler error to forget to use defer, forgetting to free memory is still a possible scenario, and even then it does very little to prevent a use-after-free.


The default Zig allocator will actually make it very hard to have an unnoticed use-after-free. It doesn't re-use the virtual address space; it will discard the page entirely and get a new one, which makes it far more likely that a use-after-free will hit a segmentation fault.


Basically the same as debug allocators for C and C++, what is the improvement there?


Wouldn't that require allocating a full page for a 1-byte object? That would be rather wasteful, both in space and in munmap(2) overhead.


Supertraits aren't inheritance!


Yes, they are; brush up your knowledge of OOP, there are plenty of SIGPLAN papers available.


> I personally really dislike the degree to which rust and zig force the programmer to worry about error handling. The way I see it, things should not error, if they do, stop everything and print a stack trace, please don't bother me with it.

Then just .unwrap() everything?


Personally, I really like error handling in Rust. It makes me think about handling deviations from the happy path much more than exceptions ever did.

As you said, you can choose to .unwrap() everything and that's fine if it's what you want, but it forces you to make a conscious decision. I like that explicitness.
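A minimal sketch of both choices (the `parse_port` function is made up for illustration):

```rust
use std::num::ParseIntError;

fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // "Just crash" style: .unwrap() turns the error into a panic with a
    // message, and a backtrace if RUST_BACKTRACE=1 is set.
    let port = parse_port("8080").unwrap();
    assert_eq!(port, 8080);

    // Explicit style: the Result type forces a conscious decision.
    match parse_port("not-a-port") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("bad port: {e}"),
    }
}
```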


Or use `anyhow` and `?` everything.



