
Do I understand it correctly that upgrading to a new rust version is mostly implementing new best practices and new features, instead of needing to "fix" your code, as rust is backwards compatible?

I've only used rust nightly for my own projects and didn't give too much thought to rust versions




For normal projects yes.

And you can use clippy to tell you about changes you should make.

For example, in my projects I run this in the CI pipeline:

  cargo clippy --all-targets --all-features

and

  cargo fmt --all --check

In addition to the regular test and build steps.

This means both that I follow clippy recommendations and cargo fmt in the first place, and that my CI tells me about any clippy changes I didn't notice myself, as well as any formatting I'm not following. In my main IDE I auto-format the code, of course. But sometimes I make small changes in vim and don't run the format step myself, so it's nice to have for that reason as well.

For the integration of Rust into the Linux kernel I imagine it’s a bit more convoluted.


For Rust on Linux it's a bit more involved because they use nightly features, which can change from day to day. In practice there's an implicit set of tiers, with features that rarely change and features that frequently change in bursts. I would like it if we formalized that distinction a bit. We already mark when a feature is very likely to change (unstable_features), but not when it is very close to being stabilized.


Yes, generally. For example, recently David Tolnay shared the amount of burden Meta has when upgrading the compiler: https://old.reddit.com/r/rust/comments/19dtz5b/freebsd_discu...

> I estimate it's about ½ hour per 1 million lines, on average.

That being said, Rust for Linux isn't using stable Rust, so they have a higher burden than projects that do.


Rust-for-Linux made it more complicated for themselves, because they chose to enable unstable/experimental features of the compiler without waiting until they’re released, so they don’t get the stability and compatibility guarantees that normal Rust projects get.


The alternative was to do the same and also not merge what they had. Stuff like custom allocator support is not optional.


I was confused when reading this because I was pretty sure that using other allocators had been supported for a while in Rust. From refreshing myself on the details, it seems that replacing the default allocator is stable (https://doc.rust-lang.org/std/alloc/trait.GlobalAlloc.html), but the API for arbitrary allocators (which includes stuff like being able to do "zero-size" allocations) is not yet stable (https://doc.rust-lang.org/std/alloc/trait.Allocator.html). I guess if there were ever a project that needed fine-grained control over how allocators work, it would be the kernel.
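To make that concrete, here's a minimal sketch (stable Rust) of swapping the global allocator via GlobalAlloc. MyAlloc is just an illustrative type that forwards to the system allocator, not anything the kernel actually uses:

  use std::alloc::{GlobalAlloc, Layout, System};

  // Illustrative allocator that simply forwards to the system allocator.
  struct MyAlloc;

  unsafe impl GlobalAlloc for MyAlloc {
      unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
          System.alloc(layout)
      }

      unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
          System.dealloc(ptr, layout)
      }
  }

  // Every Box, Vec, String, etc. in the program now allocates through MyAlloc.
  #[global_allocator]
  static GLOBAL: MyAlloc = MyAlloc;

  fn main() {
      let v = vec![1, 2, 3]; // allocated via MyAlloc
      assert_eq!(v.len(), 3);
  }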


There's also https://docs.rs/allocator-api2/latest/allocator_api2/ -- I end up using this more often than I end up using custom allocators, simply because it has a stable version of the currently-unstable functions for constructing uninitialized Vec<MaybeUninit<u8>>s.


More importantly, allocator_api adds try_new, which means you can handle allocation errors.
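For example (a nightly-only sketch, since allocator_api is still unstable):

  #![feature(allocator_api)] // unstable as of this writing

  fn main() {
      // try_new returns Err(AllocError) on allocation failure instead of aborting.
      match Box::try_new([0u8; 4096]) {
          Ok(buf) => println!("allocated {} bytes", buf.len()),
          Err(err) => eprintln!("allocation failed: {err}"),
      }
  }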


Ah, that is important. I didn't even notice that wasn't possible in the GlobalAlloc API, but you're definitely right that it's not.


GlobalAlloc can be used for fallible allocations: its functions just return a null pointer on failure, as with malloc() and realloc() in C. The main limitations are around the safe heap data structures in the standard library, which don't stably expose any fallible APIs except for Vec::try_reserve().
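For example, on stable Rust:

  fn main() {
      let mut buf: Vec<u8> = Vec::new();
      // try_reserve surfaces allocation failure as an error instead of aborting.
      match buf.try_reserve(64 * 1024 * 1024) {
          Ok(()) => buf.resize(64 * 1024 * 1024, 0),
          Err(err) => eprintln!("could not reserve buffer: {err}"),
      }
  }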


Hmm, I'd expect that would mean it's possible to add those APIs today, rather than requiring the `Allocator` trait. Is the idea that the Allocator is a parameter (maybe a generic one) when calling `try_new`, so they don't want to stabilize anything now?


The Allocator type is an unstable parameter on the heap type; Vec<T> is unstably Vec<T, A>, Arc<T> is unstably Arc<T, A>, and so on. (The allocator "A" defaults to Global, which is an Allocator that forwards to the registered GlobalAlloc.) I think the Linux kernel also wants an Allocator trait for other reasons than fallibility, such as allocating different kinds of objects on different heaps.
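A nightly-only sketch of what that extra parameter looks like in practice:

  #![feature(allocator_api)] // unstable: the A parameter isn't visible on stable

  use std::alloc::Global;

  fn main() {
      // On nightly, Vec<T> is really Vec<T, A = Global>; the *_in constructors take an allocator.
      let mut v: Vec<u32, Global> = Vec::new_in(Global);
      v.push(1);
      assert_eq!(v, [1]);
  }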


Rust is backwards compatible when you stick to stable features, but the kernel uses unstable features that can and do incur breaking changes.

https://github.com/Rust-for-Linux/linux/issues/2


It seems prudent to limit rust usage in the kernel until that list can be burned down to zero. It makes sense that you need to at least get rust in the kernel to find out what missing features you need to have implemented and stabilized, but excessive use will make folks' lives painful as they try to track upstream rust releases.


Please bear in mind that Linux has used non-standard GCC extensions to C for decades as well. The tradeoffs here are their call to make.

Besides, at this stage, it makes perfect sense for Linux to use unstable Rust features. It was one thing to say Rust should be great for writing kernels; it's another to actually get feedback on how it needs to be better, and that's only possible if the potential improvements are motivated by those who need them and incubated without the constraints of backwards compatibility or the risks of locking in permanent tech debt.

Rust's unstable feature concept was designed for exactly this kind of freeform evolution and it's working exactly as intended. As for the specific tradeoffs being made in Linux, its contributors are in a much better position to weigh those than we are.


What you propose is exactly what's been done by the kernel. They are integrating the language in a non-mandatory way, to both exercise the kernel side and the language itself. The unstable features haven't been stabilized because either they have open questions on their implementation (and having a customer using them helps define them) or no-one has cared enough to complete them (and having a customer using them gives them the extra push). Either way what's happening now is exactly the process you are proposing.

The article is about updating the Rust version the kernel targets where a feature they use (offset_of) was stabilized.
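For reference, offset_of! now works on stable (I believe since Rust 1.77); before that, code needed a #![feature(offset_of)] gate on nightly. A quick sketch, where Inode is just an illustrative struct and not actual kernel code:

  use std::mem::offset_of;

  #[repr(C)]
  struct Inode {
      ino: u64,
      size: u64,
  }

  fn main() {
      // Byte offset of the `size` field within Inode.
      assert_eq!(offset_of!(Inode, size), 8);
  }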


If you ignore dependencies and stick to stable features, yes.

If you include dependencies, then it can happen that a dependency relies on unstable features. In that case you might have to upgrade the library version (if it supports the new compiler version). The library might have changed its API by then, which would force you to change your code.

Except for the above use case, upgrades to the latest version of the compiler have been painless for me.


Dependencies are not allowed to use unstable features with the stable compiler either. The only exception is the standard library, which uses numerous unstable features even with a stable distribution of Rust.


Worth briefly explaining the rationale for this (stdlib gets to use unstable features)

Rust's stdlib is maintained alongside the rest of the language and by the same broad team, so if you're tweaking unstable feature X, you are also responsible for making sure the stdlib's uses of feature X get sorted out. I'm not sure whether Rust's internal policies mean you shouldn't land a change to the main tree without accompanying stdlib patches, or whether you're only required to give the stdlib maintainers adequate notice, but either way a stable release isn't going out the door incompatible with its own standard library.

This couldn't really work with 3rd party libraries.


There's a second category of unstable features to mention here.

Some of the features are essentially perma-unstable, because they're exposing some compiler intrinsics for the library to be able to use. This is the equivalent of things like __builtin_* for C compilers.
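A rough nightly-only sketch of what that looks like on the Rust side (core_intrinsics is explicitly marked as internal and not meant for general use):

  #![feature(core_intrinsics)] // perma-unstable: raw compiler intrinsics
  #![allow(internal_features)]

  fn main() {
      let x = 40 + 2;
      // likely() is a branch-prediction hint, roughly __builtin_expect in GCC/Clang terms.
      if std::intrinsics::likely(x == 42) {
          println!("expected path");
      }
  }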



