> Since most of our CVE type problems are usually programming mistakes nowadays, the lack of review could contribute to an increase in programming fault type bugs which aren't forbidden by the safer memory model.
That's an odd statement. What CVEs aren't due to programming mistakes? I'm not sure if the majority of CVEs for the Linux kernel come down to memory safety, though I would not be surprised, but certainly a huge number are.
> without a clear and obvious benefit beyond promises that can only truly be fully fulfilled with a whole kernel written in Rust.
That's not true, really. You don't need a completely safe kernel to have an improvement to safety. If every device driver were memory safe tomorrow we'd be better off.
That said, I think this will be an interesting, possibly losing, battle. The Linux Kernel is extremely monolithic, it has a lack of testing and code review, decades of dug-in investments, and a strong history of not prioritizing security or even considering it to be a legitimate goal. Fixing that seems like it will itself take decades, whereas the current approach really feels like it's trying to get it done ASAP.
If they can do it, cool. As a Linux user I'll possibly benefit. I'm curious to see how it plays out.
The question whether a logic bug can be turned into an exploit depends on the domain of the program. An image format decoder written in safe Rust might yield a pixel in a different color due to a logic bug, but that's the worst impact it can have. In software reasoning about security constraints however, a logic bug can mean the user now has root privileges, can write to /etc/shadow as an unprivileged user, or the TLS certificate is considered valid for the domain even though it isn't.
Rust's safety promises don't prevent logic bug CVEs in the kernel. But I think it would be a major improvement if the kernel were written in mostly safe Rust.
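The distinction can be made concrete with a contrived sketch (all names invented for illustration): the following compiles as 100% safe Rust, yet contains exactly the kind of privilege-check logic bug that no memory-safety guarantee can catch.

```rust
// Contrived sketch: 100% safe Rust, zero memory-safety issues, and a
// privilege-escalation logic bug the compiler cannot see.
#[derive(PartialEq)]
enum Role {
    Admin,
    User,
}

fn can_write_shadow(role: &Role) -> bool {
    // Logic bug: the comparison is inverted, so ordinary users pass
    // the check and admins are denied. The borrow checker is happy.
    *role != Role::Admin
}

fn main() {
    assert!(can_write_shadow(&Role::User)); // unprivileged user gets through
    assert!(!can_write_shadow(&Role::Admin));
}
```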
This is not a matter of opinion. The collection of CVEs reported (which represent an unknown fraction of faults that could in principle be found) clearly identifies which ones are memory-usage bugs of a type that could be prevented by a compiler. My reading is that the large majority are not.
If the demonstrated complacency of Rust coders toward potential bugs is factored in, and inexperience with the language among potential reviewers, use of Rust in the kernel could actually increase the number of exploitable faults. I do not assert this would certainly occur, but no one can demonstrate it would not.
> If the demonstrated complacency of Rust coders toward potential bugs
lol what?
This is so very the opposite of my 6+ years of experience in Rust, where the community has a serious focus on testing, and great tooling support for it as well.
I honestly can't take anyone seriously who thinks that adding Rust to the kernel will increase security bugs. I just can't imagine they know anything about Rust, security, or the kernel.
I judge based on the tide of opinion that washes up, everywhere Rust is even mentioned, that bugs in Rust code are physically impossible. That is the whole value proposition of the language, as presented. Anyone hinting that bugs are still possible in Rust gets downvoted to oblivion (as here).
I do not doubt that there are Rust coders who are especially vigilant about avoiding bugs, but they are certainly, as in all times and places, the exception.
> that bugs in Rust code are physically impossible
This isn't a real thing either. I see it asserted all the time on HN and it's hilarious.
> Anyone hinting that bugs are still possible in Rust gets downvoted to oblivion (as here).
90% of the time it's because the person is making a stupid point. 10% of the time it's the community being annoying.
So just to reiterate, you're basing this on being barely an observer on forums, and I'm basing this on multiple years of professional rust development.
There's an effect here similar to Amdahl's law. Only after programmers come to their senses and take care of that 70% do we get to the remaining part of the pie and its subdivisions, and can start thinking about the actually interesting part of engineering secure systems.
The overwhelming prevalence and impact of memory-safety bugs has held back development of solutions for the remaining, nontrivial problems for decades, by consuming so much of the attention.
These companies apply things like sandboxing and fuzzing to reduce the incidence of memory-unsafety bugs, and yet they're finding that a majority of their security bugs are still memory unsafety. If you can't find memory unsafety in your C++ code, it's because your code isn't worth attacking.
Shellshock would be a good candidate: Bash is designed to be able to pass around some amount of shell scripting in environment variables, which obviously leads to some pretty severe security issues if attackers can control environment variables (say, CGI scripts). So you can argue that the problem here is a design mistake rather than a programming mistake.
> > Since most of our CVE type problems are usually programming mistakes nowadays, the lack of review could contribute to an increase in programming fault type bugs which aren't forbidden by the safer memory model.
> That's an odd statement. What CVEs aren't due to programming mistakes?
I guess they meant as opposed to higher level design mistakes.
Is that really true? I mean, my first reaction is that the linked article/thread is a direct counterpoint: all the review and testing the Rust facilities are going through.
Are these changes not going through substantively the same process as other proposed changes to the kernel?
The fact that a process is burdensome does not mean it is necessarily effective.
Look at the seq_file thing Qualys discovered the other day. The overflow was obvious if you thought about it, and all Qualys did was think about it. But the bug was present since 2014.
Linus's law is empirically untrue for security bugs - many eyes don't actually spot them. Moreover, we have computers, which are good at doing repetitive and detail-oriented tasks with 100% accuracy. Why not use them?
No, what we have here is definitely more of an exception - sort of like an RFC for larger changes that could take months or years to play out. The vast majority of code changes do not go through this.
The ton of driver code that piles up in the Linux kernel every minute doesn't go through Torvalds. It is delegated.
And my review of a few short driver commits is enough to tell me that those delegates do not perform a satisfyingly thorough review by any means.
Heck, I saw patches of just a couple dozen lines which exhibited bad copy-paste errors anyone could have spotted without prior knowledge. You don't need to know what the code does to spot some of them, nor what the driven device does: purely formal errors, with bad macro definitions for example. This kind of stuff wouldn't even pass the first internal review where I worked, which just looked at formal appearance (then there were more in-depth reviews, and then an external review, but we'd make as sure as possible that our code was clean before it went out).
So first you have people (employees of company A) who send code to a public, external project without having done a proper internal review. Then you have someone else (an employee of company B) who claims to have reviewed the commit but hasn't done it properly, or at all. And then possibly a third someone who validates this but doesn't actually check either. It has become a job, a task like any other, with the same proportion of people doing a sloppy job as quickly as possible so they can go home early or slack off that you'd find in any other position in the world.
No. Security is important and definitely considered.
Some years back there was a viral blow-up where Linus basically said, "security is important but there are lots of things that are important." A lot of people in the security field decided that meant "not even considered" even though that's ridiculous. Linus has always had a pragmatic, holistic approach to the kernel, and many specialists hate that because they think their field is the most important and all others should be second.
If security wasn't even considered, would Linux really have become the de-facto base on which high security orgs (banks, 3 letter gov agencies, etc) deploy? I doubt it very much.
As a security engineer who has seen egregious security malpractice on the part of developers, I fully agree that there can be a real problem with that. However I think it's silly to suggest that the Linux kernel has a history of not even considering security.
You've built a straw-man for my statement and you've completely mischaracterized Linus.
Linus had multiple statements that won him pwnies, but what I'm referring to is decades of mailing list posts where he's insulted researchers, or decades of him and Greg rejecting CVEs and hiding vulnerabilities, etc. This has persisted even today, mostly from Greg, but in a less public way than it once was due to cultural shifts.
Make no mistake, Linus and Greg have always had a hostile relationship with security researchers.
> many specialists hate that because they think their field is the most important and all others should be second.
Another straw man. I never said anything like this; that security should be the number one priority or that anything else shouldn't be a priority.
> would Linux really have become the de-facto base on which high security orgs (banks, 3 letter gov agencies, etc) deploy? I doubt it very much.
Is this satire? Are we really going with "Banks deploy Linux... therefore it's secure"? Did you know banks also run Windows XP on their ATMs?
Linux has had external contributions to security, yes. Much of that has been despite upstream, and with immense work across decades to get upstream to play ball.
> However I think it's silly to suggest that the Linux kernel has a history of not even considering security.
Sorry but the only way for this to be the case is to simply not know the history of the Linux kernel.
Do a quick search of the terms: Linus Torvalds security. Pick any one of the results. While some of his points with regard to utility vs security seem to have merit... when you switch out "secure" with "correct" the problem becomes pretty obvious.
I think there's a strong argument to be made for Spectre being a programming mistake, with the programming in question being Intel and AMD's proprietary microcode formats. We'd consider a similar timing/information channel in C to be a programming mistake, so it's not clear why we should exclude one in a lower-level representation.
> with the programming in question being Intel and AMD's proprietary microcode formats
No, as far as I know, the design mistakes which led to Spectre (and other similar vulnerabilities) are not in the microcode; they are at an even lower level, in the hardware structures which execute both simple instructions (which are decoded directly, without going through the microcode engine) and microcode instructions. Most of what the microcode "fixes" for Spectre and similar do is flip a few "chicken bits" (to disable or bypass some of the hardware structures) and provide extra semantics for a few of the complex instructions (which go through the microcode engine) like LFENCE and VERW; these changes do not actually fix the problem (which is in the physical hardware), but instead give software ways to work around the issue.
You should argue instead that the programming in question is the VHDL or Verilog (or other proprietary language) which was used to generate the hardware.
And, in any case those are also not coding bugs either, but architectural design bugs. Any hypothetical smarter, more suspicious HDL would have been wholly unable to prevent them, because the hardware is working exactly as designed and specified. The designers actually knew all about the flaws, they just thought they didn't matter.
With all its build options and modules, the traditional monolith-microkernel distinction doesn't really apply. Do you mean something else by "extremely monolithic"?
It's not so much the Linux kernel that's monolithic, but the Linux source code. Since drivers get mainlined, they become part of the Linux stability guarantee. When a significant fraction of the source code is drivers for specialized devices (e.g. AMD GPUs), the development process takes on the characteristics of an extreme monolith that requires coordination of many different teams.
No, Windows has by far more investment into security and a far better culture. Linux has a multiple-decades-long history of saying that security is not important and has had to be dragged kicking and screaming to modernity.
SELinux is just an LSM built decades ago, I wouldn't say that somehow proves that upstream cares about security.
With all due respect, Linus could learn some manners:
> I think the OpenBSD crowd is a bunch of...
Apart from that, if there is no test coverage it is difficult to talk about security or reliability (I don't know myself if there are tests and how good they are, I'm just assuming GGGP is right).
I assume this was after the linked post then, and it's good to hear that. The problem is that these posts stay online for a loooong time. Thanks for answering.
Actually, not really; one has to look to Android if one wants a Linux kernel with all the proper security knobs actually turned on and configured appropriately.
An OS where the Linux kernel is actually an implementation detail as far as userspace is concerned.
> In general, I would avoid exposing Rust subsystems to C. It is possible, of course, and it gives you the advantages of Rust in the implementation of the subsystem. However, by having to expose a C API, you would be losing most of the advantages of the richer type system, plus the guarantees that Rust bring as a language for the consumers of the subsystem.
This feels like putting the cart before the horse: "we shouldn't integrate Rust into the kernel in a modular sort of way because it's not as optimal as more thorough type-aware integration."
Like, cross that bridge when you come to it, eh? The kernel is currently comprised of hundreds if not thousands of components, talking to each other via C API/ABIs. And talking to hardware is almost entirely ABI. That is how the kernel do.
Would we benefit from more type knowledge across boundaries? Absolutely, but this is hard (especially given how different the c and rust approaches to allocation are), and shouldn't stand in the way of progress.
I agree. I've migrated multiple large codebases from a weakly-typed language to a strongly-typed language, and in my experience this is the only way to do it. And it's also not as bad as the OP makes it sound: you get your stronger guarantees gradually, and that's fine.
One gotcha, though: a stronger type system encourages the removal of runtime checks. When doing a partial conversion, I've found it's best to leave any existing runtime checks in place, even if the type system makes them "redundant", because it doesn't really until the calling code has also been converted. Once it has, you can go back and remove them.
(Clarification that these codebases were obviously not as large as Linux, and I don't mean to imply that the job was anywhere near the same scale as this one would be, I just think some of the same lessons are broadly applicable)
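The "leave the runtime checks in" advice can be sketched in Rust with a hypothetical validated newtype (all names here are invented). The type system makes the inner check look redundant, but it only becomes truly redundant once every caller has been converted to go through the validating constructor:

```rust
// Hypothetical sketch of keeping runtime checks during a partial
// conversion to stronger types.
#[derive(Debug, Clone, Copy)]
struct UserId(u64);

impl UserId {
    // The validating constructor converted code is supposed to use.
    fn new(raw: u64) -> Option<UserId> {
        if raw == 0 { None } else { Some(UserId(raw)) }
    }

    // Escape hatch still used by not-yet-converted calling code
    // (e.g. values arriving over FFI), which bypasses validation.
    fn from_raw_unchecked(raw: u64) -> UserId {
        UserId(raw)
    }
}

fn lookup(id: UserId) -> Option<&'static str> {
    // The type system says this check is "redundant" -- but only once
    // every caller goes through UserId::new. Until then, keep it.
    if id.0 == 0 {
        return None;
    }
    Some("alice")
}

fn main() {
    let good = UserId::new(42).unwrap();
    assert_eq!(lookup(good), Some("alice"));

    // An unconverted caller can still smuggle in an invalid value;
    // the retained runtime check catches it instead of misbehaving.
    let bad = UserId::from_raw_unchecked(0);
    assert_eq!(lookup(bad), None);
}
```

Once `from_raw_unchecked` has no remaining callers, the inner check can be deleted safely.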
Exposing a C API/ABI is also the only supported way of making separate Rust crates that can be truly interoperable beyond the constraints of a single rustc build. It's not a limit to high-level interfaces because one can always build those as higher-level layers within a single crate, and there are tools (bindgen and cbindgen) to make this workflow a bit easier.
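As a rough illustration of what that boundary costs (names and logic invented for this sketch): the C ABI flattens a Rust slice into a raw pointer plus a length, so the type system's guarantees stop at the boundary and must be manually re-established on the Rust side.

```rust
// Minimal sketch of a C-callable function, the kind of boundary
// bindgen/cbindgen help maintain. In a real crate this would be marked
// #[no_mangle] and built as a `staticlib`/`cdylib` so C can link it.
pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
    // The richer type system (slices, lifetimes) is gone here; safety
    // is re-established manually, then everything below is safe Rust.
    if data.is_null() {
        return 0;
    }
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(b as u32))
}

fn main() {
    // Exercised from Rust here for brevity; a C caller would see
    // `uint32_t checksum(const uint8_t *data, size_t len);`.
    let buf = [1u8, 2, 3];
    assert_eq!(checksum(buf.as_ptr(), buf.len()), 6);
    assert_eq!(checksum(std::ptr::null(), 0), 0);
}
```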
What do kernel subsystem interfaces usually look like? Is there a lot of shared ownership of objects with complex lifetimes (i.e. not just allocated on startup and never freed)? Do APIs often require consumers to check invariants themselves? If so, there could be a lot of safety lost at the interface.
The best situations for using Rust modules from C are simple APIs that allow the complexity and danger to be strongly encapsulated.
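A toy contrast of the two interface styles, using a bounds check as a stand-in for whatever invariant a real subsystem would require (this is a sketch, not kernel code):

```rust
// Style 1: the invariant is the caller's problem (C-like interface).
// SAFETY contract: `idx` must be < table.len(), or this is UB.
unsafe fn get_unchecked_entry(table: &[u32], idx: usize) -> u32 {
    unsafe { *table.get_unchecked(idx) }
}

// Style 2: the invariant is encapsulated; misuse is reported as a
// recoverable error instead of undefined behavior.
fn get_entry(table: &[u32], idx: usize) -> Option<u32> {
    table.get(idx).copied()
}

fn main() {
    let table = [10, 20, 30];
    assert_eq!(get_entry(&table, 1), Some(20));
    assert_eq!(get_entry(&table, 99), None); // checked, not UB

    // Style 1 pushes the safety obligation onto every call site:
    assert_eq!(unsafe { get_unchecked_entry(&table, 2) }, 30);
}
```

An interface full of style-1 contracts keeps the danger at every call site; style 2 is where a Rust module's safety actually survives the boundary.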
- Working in Rust is frustrating rather than fun, and contrary to C, you are limited by the language/compiler in what you can do.
- Building/compiling the kernel is not trivial, and you will add a huge new dependency that you have to deal with to build the kernel for whatever target.
Let's suppose you want to build for MIPS; then you need Rust to support MIPS.
As an example, there is a common Python package that decided to start writing its module in Rust instead of C. Now a lot of users of the module are pissed off, with good reason, because you can't build/install the module anymore on slightly older or unconventional distributions.
Deeply subjective. Rust has been the most loved language on Stack Overflow for 5 years in a row now.
> add a new huge dependency
Sure, but setting up Rust is much much easier than GCC with all the trimmings.
> As an exemple, there is a common package in python that decided to start having their module in Rust instead of C. Now a lot of users of the module are pissed off, with good reason, because you can't build/install the module anymore in a little bit older or non conventional distributions.
Assuming you are referring to pyca, you are mistaken and there has been a lot of misinformation about the change. Rust support is needed to build the module, but not install it. Pyca works just fine for users without rust and works everywhere rust does. Users on niche CPU architectures which haven't been sold commercially for 15+ years were the only ones impacted.
The problem with pyca cryptography was that Python users are not in the habit of using lockfiles, which meant reinstalling venvs picked up more recent versions of transitive dependencies. That, and they made the change in a minor update, so non-wheel users got caught out.
That people weren’t version-pinning critical dependencies was the most eye-opening thing about that whole affair. The tools to make this easy have been available and well-used for years, so I don’t have a lot of sympathy for them.
Well, people think they are pinning their critical dependencies by using a requirements.txt file. But normally the transitive dependencies are not listed there, and any time you rebuild a venv those unpinned versions can change.
You probably know this but for people reading along who think using requirements.txt is the same thing: it is not.
How lockfiles work: you define your dependencies in a file like pyproject.toml or a Pipfile (similar to a Cargo.toml). You then use pipenv or poetry or pants to compute all the dependent versions of your dependencies and transitive dependencies, and that result is saved in a lockfile. Any time you need to remake a venv for local dev, rebuild a Docker container, or install deps for CI, it uses the same locked versions from the lockfile. Only when you decide to recompute the dependencies do the transitive dependencies change in the lockfile.
Sadly, a standard lockfile proposal (PEP 650) was rejected, held back by pip being woeful:
> Additionally, pip would not be able to guarantee recreating the same environment (install the exact same dependencies) as it is outside the scope of its functionality.
Well then, maybe fix it? Because clearly it’s an issue? A good chunk of that explanation really reads like “ehhhh, can’t really be bothered fixing this”, which makes sense given the Python devs approach to the last couple of Python versions: no fixes for anything important, just more half-baked features nobody asked for.
>Python versions: no fixes for anything important, just more half-baked features nobody asked for.
Oh god, tell me about it! 'Hey guise, I heard pattern matching in Rust and Scala and Haskell is popular! Let's add it to Python, but with no compile-time checks to make sure matches are exhaustive!'
Some excellent and smart devs who I really do respect worked really hard to deliver a complete dog-shit feature while pip has languished for almost a year with a broken version resolver [1]. It's so frustrating. :( :( :(
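For comparison, this is what exhaustiveness checking buys in Rust (a toy example, not from any real codebase): the compiler verifies at build time that the match covers every variant, so adding a new variant later turns every incomplete match into a compile error rather than a silent fallthrough.

```rust
// The compiler verifies this match is complete. Add a `Refunded`
// variant to the enum and this function stops compiling until it
// is handled -- no wildcard arm, no silent misbehavior at runtime.
enum PaymentState {
    Pending,
    Settled,
    Failed,
}

fn describe(s: &PaymentState) -> &'static str {
    match s {
        PaymentState::Pending => "pending",
        PaymentState::Settled => "settled",
        PaymentState::Failed => "failed",
        // Omitting any arm here is rejected at compile time.
    }
}

fn main() {
    assert_eq!(describe(&PaymentState::Failed), "failed");
    assert_eq!(describe(&PaymentState::Pending), "pending");
}
```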
> Users on niche CPU architectures which haven't been sold commercially for 15+ years were the only ones impacted.
This is kind of the root of the problem: when you are kind of a hobbyist dev wanting to work with the latest shiny new version of everything, then everything is fine.
But when you try to do different things that are not mainstream, or you're doing embedded work, then you understand why sometimes you have to keep old software or hardware.
Rust, like Go, is made for a connected world, where you are always upstream, always connected to get the latest versions, and it's OK to make breaking changes to the language every few years.
Did you ever try to build a Linux from scratch, trying to fit some build or runtime constraints? Then you learn the real cost of each added dependency and its complexity.
Imagine you had that many dependencies to build a kernel: that much memory, CPU, and storage used. Now you need to add Rust, its dependencies, and other tools related to it. Each dependency possibly doesn't support your system or configuration, or requires its own library dependencies in a new version that maybe conflicts with older versions already used by the system, and that you can't change without breaking other existing programs on the system. And maybe you can fix those other applications to support the new version of the library, but you just wanted to update your kernel, not spend 10 days reworking your system unexpectedly because a stupid update required it.
It’s worth pointing out that a lot of those “weird” old chips weren’t formally supported anyway; the fact that the software worked on them was a convenient fluke, not by design or intention.
This is the magic of Linux so far: being so versatile and able to support so much hardware.
But that being said, most of the time issues come not from really exotic chips but from little variations in configuration or system library versions.
Let's say you want to cross-compile from x to y and want code to use a specific memory space. This is usually when things start to get messy.
Here's an example of how you can lose a lot of time and go crazy when you just wanted to compile something for your case:
That particular metric is derived from "people who use the language outside of work and wish they could use it more at work." The survey doesn't explicitly say "which language do you love?".
Maybe the Python number is smaller because those people are working, right? All the Rust programmers are just working on a fun program at home, while Python is a serious language now and you're working the 9-to-5 with Joe Coder?
Haskell is 51.7% though. So, I guess when I wasn't looking Haskell really exploded in boring office environments and I shouldn't expect to see any more hobbyist Haskell projects everywhere... or your hypothesis was just wrong.
Maybe it's just that Rust is new and exciting, let's look at what people want to use that they don't now, surely that'll be Rust too and we'll know it's just Hype.
Huh, that chart is dominated by Python. 30% of programmers not doing Python wanted to start.
> why does Rust report 86.1% love while Python only got 66.7% ?
I mean, because Python is in practice a boring office language where the vast majority of devs have to maintain Joe Coder's 2009 Django set of custom attributes.
> Huh, that chart is dominated by Python. 30% of programmers not doing Python wanted to start.
Because people believe that if they learn Python they'll land a cool ML job which pays 100k more than what they have? Like, my girlfriend, who literally does not know anything about programming, asked me if I could teach her Python because she saw an ad about it (for a "land a CS job in 3 months" type of thing). That necessarily causes some inertia for Python; not enough to be more hyped than Rust, but enough to influence results.
Anecdotes about what non-programmers believe aren't very applicable to Stack Overflow's survey of programmers. Still though, you end up with a weird conclusion where you believe "hype" drives the Loved statistic (one based on people's real experience) but not so much the Wanted statistic (based on what they heard) when by definition that isn't how hype works.
Think about the 2016 movie "Deadpool". Deadpool is a one joke character. There are no major Marvel characters in the movie, the stakes are low, there is no connection to the larger Marvel Cinematic Universe storyline. Reynolds has played this exact character before, in a movie which nobody liked. There was inevitable fan hype before it came out, "the merc with a mouth" sells comics and those fans are going to see the movie whatever, but fans don't know anything right? But, both critics and audiences seem to have liked this movie, a lot more than its studio expected. It made a lot of money and got pretty great reviews from most quarters. Neither of those things is hype, that's called success.
The Loved result looks exactly like what I see what I talk to people about Rust. Those who haven't heard of it of course aren't looking to write Rust, for those who've only heard of it, it's on their "things to check out" list with Go, and maybe Swift but it doesn't jump out at them. But among those who've written Rust you see a spike of enthusiasm, "Hey this is really good".
When I learned Go, I filed the acquired skills away. "This may be useful in some future scenario, but I have meanwhile ceased to be employed writing TLS bit-banging code for which I thought Go would be the best option, so, never mind now"
But when I learned Rust the first thought was "I should write more Rust". I immediately rewrote the smallest interesting C project I ever published and pushed that to GitHub. Then I wrote a bunch of code to check some of my intuitions about Rust's safety when used by people who are unreasonable (misfortunate, a collection of perverse implementations of safe Rust traits). Then I started writing a triplestore, which I'd done twice before in C -- a friend and colleague left programming and went into management after his third one, so cross fingers that doesn't happen to me.
Could also be referring to the cryptography library which added a Rust dependency, which caused pain for people running ansible, and other downstream users.
> Could also be referring to the cryptography library which added a Rust dependency, which caused pain for people running ansible, and other downstream users.
This isn't true for the overwhelming majority of deployments, since pyca/cryptography was/is distributed as a pre-built wheel. There is no runtime dependency on Rust in pyca/cryptography; the only downstream change is that packagers are required to have a Rust toolchain.
Just because there are wheels doesn't mean you'll never need to install Rust to install Cryptography. Just this week I got a "you need Rust" build error inside of a Docker container due to Pip not being able to find a wheel for the specific Python version used by the container. Fortunately, the version of Cryptography that was being pulled in supported that environment variable to use the C version instead, so I was spared from having to do a ton of work.
For now. One day I'll wake up and a future version of this container will refuse to build because whatever library pulled pyca/cryptography in got upgraded and now needs the Rust-only version.
The latter point is an important one. Rust as a language for libraries cannot work the same way as Rust as a language for applications. For the latter it is OK to depend on the Cargo toolchain and be opinionated about things like dynamic linking. For the former you ideally want support in any common compiler (clang, GCC) and as few dependencies and constraints as possible.
It's also a problem for application themselves IMO. Cargo combined with dependency pinning brings most of the disadvantages of similar environments with centralized package handling: the ease of adding package dependencies increases the number of dependencies themselves very rapidly. Overly-narrow version pinning forces per-package lockstep updates of the various dependencies, which in turn means multiple versions of the same package will get pulled two-or-three levels deep. This ensures each single rust package or update you build is almost a guaranteed rebuild-the-world even with a shared cache dir. And we're not touching the problem of building projects where rust is only part of it: annoying if you want to link other stuff into rust, even more so if you want to do the opposite.
I'm following a couple of projects that transitioned to Rust, and my experience as a contributor has not been stellar. A minimal Rust project can take hundreds of MB of disk space just in dependencies, and double that for build space. The solution for some has been providing build bots, but that doesn't help me as a contributor, since I need to be able to rebuild from source.
This has almost the same effect on me as large projects do: I only contribute if I have a large vested interest in the package; otherwise I just stay away because it's time-consuming.
True in principle. But once you divorce yourself from Cargo, almost all resources and advice when it comes to building Rust programs go out the window. I love the language, and I love the community, but the attitude of "rustup nightly and Cargo, or bust" is a bit terrifying.
As a noob, I had to wade through endless "but don't do that, just get the latest from Cargo!!!!" when I asked for advice on how to use my system-provided Rust packages for my project.
For what it's worth, system-provided can be arbitrary and vary widely between systems.
Moreover, in the Python world a distinction is made between "software that runs your system" and "software that you use for development"; maybe Rust people think similarly.
> For what it's worth, system-provided can be arbitrary and vary widely between systems.
Sure. That's inherent with software.
> Moreover, in the Python world a distinction is made between "software that runs your system" and "software that you use for development"; maybe Rust people think similarly.
Is there any sort of detailed documentation on how to use rustc directly in more complex ways? I imagine that any beginner's text will mostly cover Cargo rather than special usage scenarios such as this one.
Oh, it's actually easy to find on https://www.rust-lang.org/learn in the second section of the page. Most likely reason for me not having found it is that the last time I looked at that page, I wasn't looking for the rustc book specifically, and that now that I wondered where rustc's user-level documentation was, the book doesn't appear when I googled simply "rustc documentation". You have to google specifically "rustc book" and that didn't occur to me, as I was expecting some kind of manpage instead.
Thanks for clearing that up. Say, you know a lot of things about the rust ecosystem. Do you have any insight into how hypothetical rust driver code would integrate with the rest of the kernel build process? I imagine it would have to use llvm-rustc, or is gccrs ready for the job?
Would it be emitting objects that gcc/ld would link against?
Well, this is a working patch set, so, yes, though I have no direct involvement and haven't read all of it yet.
> I imagine it would have to use llvm-rustc,
Yes, it uses upstream rustc.
> is gccrs ready for the job?
Not yet. They're making great progress, but major chunks of the language are still missing. They'll get there.
Using upstream rustc isn't a blocker for new code implementing drivers, but it is a blocker for getting into older drivers, or the core kernel. The blocker is platform support; at least, that's the current largest one, and either rustc will gain it or gccrs will become good enough to compile this codebase. We'll see :)
> Would it be emitting objects that gcc/ld would link against?
Yep, it emits output that looks like any C compiler's output, you link 'em together and you're good.
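As a hedged sketch of what that looks like (the function name is invented): a Rust function exported with `#[no_mangle]` and the C ABI shows up as an ordinary symbol that gcc/ld can resolve when the crate is built as a `staticlib` or `cdylib`:

```rust
// Hypothetical sketch: built with `crate-type = ["staticlib"]` (or
// "cdylib"), this exports an unmangled symbol any C linker can resolve.
#[no_mangle]
pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Callable from Rust as well; from C you'd declare
    // `int32_t rust_add(int32_t, int32_t);` and link as usual.
    assert_eq!(rust_add(2, 3), 5);
}
```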
If you manage to compile the kernel with clang, in theory you can even get cross-language LTO; this is working in Firefox, but I'm not sure if anyone's tried it with the kernel yet or not.
> If you manage to compile the kernel with clang, in theory you can even get cross-language LTO.
Note that there is still a bunch of unsolved issues [1] in LLVM that block building all of the Linux kernel. The effort had stalled for some years but recently gained steam again, though a lot of time was lost in the meantime.
Even if you're compiling dynamic libraries/.so's with a C ABI to be consumed through a C FFI from another language? That seems to be fairly common use case these days, and I don't see those issues there (unless I've missed something, which is of course very possible).
Rust itself largely supports this (via the 'cdylib' crate type), however for many Rust crates it either does not make sense to export a pure C API/ABI (i.e. the "crates" are purely generic code that's going to be instantiated with build-specific types, so there's no predefined API beyond that single build) or they just don't bother to enable that use case.
Well, sure, language-level API (with a type system identical to Rust's) will be a problem, but as long as there's only one compiler, there doesn't seem to be a problem -- yet. I'm mildly wondering if this isn't a chicken and egg problem of sorts.
Exactly, it's subjective fun. Some people like languages where compiler errors point you in the right direction, expressive features like ergonomic strings, closures, syntactic macros, etc.
Other folks are masochists and get their jollies from cryptic errors, segfault debugging, poring over valgrind traces, and of course manually managing memory. If you aren't suffering, are you really programming?
I tease, but not entirely. Sometimes I do in fact enjoy the challenge of C programming. But it's squarely type 2 fun.
The amount of segfault debugging and memory management is roughly linearly dependent on how much you pretend you're programming not in C but in an object-oriented language.
Those languages give you syntax and/or runtime tools to get away with badly structured programs (allocating/deallocating stuff like mad, lots of implicit behaviour). C is not like that. It wants you to think about and learn how to structure programs (this is transferable knowledge, i.e. it gets much better with time).
Based on Debian's popcon numbers (https://popcon.debian.org/), Rust has support for 99.99% of users. The reality is that if you're using architectures at that level of rarity, no one actually supports your architecture, so you're reliant on locally patching software and hoping things work.
(Note that MIPS is a Tier 2 support for Rust, which is a commitment to keep it building but does not obligate running tests for every checkin).
Indeed. The vast majority of computing environments are covered by existing rustc support. However, people in weird retrocomputing environments are more or less existentially threatened by Rust.
In my personal experience (since I wanted to see how big of a problem this is), I looked into bringing up Rust as a cross-compiler for Mac OS 9. This requires a compiler that can emit PowerPC machine code, as well as a toolchain that can handle XCOFF objects and classic Mac OS's strange resource formats (if you ever wondered why Win32 has resources, that's why). Retro68 provides such a toolchain (albeit GCC based), and I wrote a rustc target file to make it spit out XCOFF objects in PowerPC format.
Then I got hit with a bunch of LLVM assertions about unimplemented functionality in its XCOFF generator and gave up.
Less anecdotally, the ArcaOS people (responsible for trying to keep IBM's freakshow fork of Windows and DOS alive) and TenFourFox both have abandoned attempts to maintain Firefox forks for OS/2 and old Mac OS X (respectively), specifically because of the Quantum update making Rust a requirement to build Firefox.
I heard Rust did merge in a GCC backend, which might help some of these retrocomputing projects... but there are platforms out there where the primary (or only) development environment is a proprietary C compiler. (e.g. Classilla uses Metrowerks to provide old Mozilla on Mac OS 9) I'm starting to wonder if some kind of "Rust to C" backend might be useful for these cases...
Linux also can't abandon hardware support for some of these weird environments, either. So until and unless Rust-with-GCC can compile on every environment Linux does, we aren't going to see anything more than Rust drivers.
> Linux also can't abandon hardware support for some of these weird environments, either.
Can't because why though? I agree it shouldn't abandon them just to get more Rust, but there are other reasons some of the crustier less used platforms go away.
> Linux also can't abandon hardware support for some of these weird environments, either.
But why not? Are we obligated to support everything forever? If it’s hardly being used, and is starting to get in the way of safety and correctness improvements, why can’t we drop something old, arcane and unused?
Counterpoint: not all of this is actually going unused. Yes, Debian popcon is going to show the vast majority of users on x86 and ARM; but that is primarily consumer use cases.
When you get into embedded, you will start to see all sorts of weird arcane setups that actual businesses rely upon. Case in point: this commercial kitchen appliance that is actually a DOS PC built with modern parts. [0]
Not to mention the startlingly high number of large businesses running off of IBM server hardware. Much of that is actually legacy stuff that's been rebranded and massively upgraded over the years. A company that bought into System/360 in the 80s or AS/400 in the early 90s will almost certainly have backwards-compatible zSeries or System i hardware running literally 30-40 year old programs.
Point is, there's lots of business critical crap running on things other than x86 or ARM. I only used retrocomputing as an example because I had a good anecdote for it. Businesses treat computers as if retrocomputing was also somehow mission critical and they pay handsomely for the privilege.
> will almost certainly have backwards-compatible zSeries or System i hardware running literally 30-40 year old programs.
But like, isn’t that a them problem? If you want to calcify your compute layer, don’t be surprised when the rest of the industry moves on, and possibly does things that aren’t compatible anymore? If they want to keep running that software, I think it’s their responsibility to either evolve their software to keep up, or deal with the fact that they’ll have to run their own old version/fork when the time comes.
If they contribute to the kernel, I would have thought their perspective would have been represented on one of these threads by now, as they seem to get posted at pretty much every event hahaha. Maybe they do and I totally missed it as well.
But it is used by retrocomputing enthusiasts. Linux has been supported by them for the platforms they care about. They gladly accept the bar of "now you have to support Linux yourself", since C compilers already exist and are supported by someone else. With Rust becoming a build-time dependency, things suddenly turn into "you're not getting any Linux until you port rustc to your platform and then make sure it's working there at all times".
So everyone is not getting a more secure/safe system because a small minority wants to run Linux on old computers?
That does not sound fair to me. Why can't they just use old versions of Linux instead?
In what way are you more limited by the compiler in Rust than with C? Just write "unsafe" and you're off to the races.
Writing correct C is very hard and most definitely not fun. It's like juggling with 7 balls, and if you drop one you'll be shot. C is defined for a weirdo abstract machine that doesn't match what computers really do, and when people apply their intuition and knowledge about computers to their C programs "because C is low-level", it's a crapshoot whether they will trigger undefined behavior and the compiler goes off the rails with wild optimizations.
If I designed a low-level language I would enable such optimizations by making it easy to communicate your precise intent to the compiler. Not by making the standard a minefield of undefined behaviors.
> Just write "unsafe" and you're off to the races.
I'm pretty sure I mention this in the LWN comments, but, since it gets repeated so often the contradiction might as well be repeated as well:
No. Unsafe Rust only gets to do three things that aren't related to the "unsafe" keyword itself. It can dereference raw pointers, it can access C-style unions, it can mutate global static variables.
That's everything. Your C program is free to define x as an array with four elements and then access x[6] anyway - but Rust deliberately cannot do that. Not in Safe Rust, but also not in Unsafe Rust either. Writing "unsafe" doesn't mean "Do this anyway" it only unlocks those three specific things I mentioned, and so sure enough x[6] is still not allowed because that's a buffer overflow.
In fact by default the Rust compiler would warn you, if you write unsafe { foo[z] = 0; } that unsafe isn't doing anything useful here and you should remove it. That array dereference either is or, if z is small enough, is not, an overflow, and either way unsafe makes no difference.
> Your C program is free to define x as an array with four elements and then access x[6] anyway - but Rust deliberately cannot do that. Not in Safe Rust, but also not in Unsafe Rust either.
Woot?
    fn main() {
        let a = [0, 1, 2];
        let _b = [42, 42, 42];
        println!("{}", unsafe { a.get_unchecked(6) });
    }
Calling get_unchecked on this array ends up as get_unchecked on the slice containing that whole array, which ends up as as get_unchecked on the slice index, and in the end it is...
Dereferencing a raw pointer. One of the three specific things I said unsafe Rust can in fact do.
This is not a "funny syntax" thing, a[6] is the idiomatic and obvious way to express this in Rust, and, it isn't allowed because it's a buffer overflow. Whereas a[6] is also the idiomatic and obvious way to express this in C and the result is Undefined Behaviour.
The claim was about `x[6]`, which does not appear in your program. The point is that `[]` is always bounds-checked, and the bounds-checking cannot be opted out of even with `unsafe`.
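A small sketch of that point (not from the thread): `[]` panics on an out-of-bounds index instead of reading memory, and the checked alternative `.get()` returns an `Option`:

```rust
fn main() {
    let a = [0, 1, 2];
    // Checked access: out of bounds yields None instead of UB.
    assert_eq!(a.get(6), None);
    // `[]` with an out-of-bounds index panics at runtime
    // (with a constant index the compiler rejects it outright,
    // which is why the index is hidden behind black_box here).
    let i: usize = std::hint::black_box(6);
    let result = std::panic::catch_unwind(move || a[i]);
    assert!(result.is_err());
}
```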
By doing it explicitly, in an operation that is easy to grep for, or, in the case of a binary library, whose symbol can be searched for during the linking phase.
Something that is impossible to validate in C, unless one is using a custom compiler, like Apple is doing for iBoot firmware.
> By explicitly doing it so, in an operation that is easy to grep for, or in the case of a binary library, search for the symbol during the linking phase.
Most of the time, these operations will be inlined, so they will already be gone by the time it gets to the linker. The compiler phase is the latest point where they are still visible.
>No. Unsafe Rust only gets to do three things that aren't related to the "unsafe" keyword itself.
>[...]
>Your C program is free to define x as an array with four elements and then access x[6] anyway - but Rust deliberately cannot do that. Not in Safe Rust, but also not in Unsafe Rust either.
>[...]
>In fact by default the Rust compiler would warn you, if you write unsafe { foo[z] = 0; } that unsafe isn't doing anything useful here and you should remove it. That array dereference either is or, if z is small enough, is not, an overflow, and either way unsafe makes no difference.
Since the conversation is claiming that you can do everything in Rust that you can do in C, I want to provide some counter-nuance. :) I am guessing what people actually mean is that all the operations you want to do can be done via unsafe Rust somehow, and yes, you can do that. But also yes, it is not literally "just write 'unsafe'". You do need to use raw pointers.
For instance, if you want to overflow a buffer intentionally,
    fn main() {
        let mut a = [1, 2, 3, 4];
        let b = [5, 6, 7, 8];
        let ptr: *mut i32 = &mut a[0];
        unsafe { ptr.add(5).write(999); }
        println!("{:?}", b);
    }
(Note that this is not just extremely platform-specific and compiler-specific about whether a is in front of b or vice-versa, it is straight-up Undefined Behavior because you write past the end of an object... but the equivalent C code is also Undefined Behavior, and subject to the same LLVM optimizations. So if you were happy with the corresponding C code, this is the equivalent Rust.)
If you really, really want, you can write your own UnsafeSlice type that does the unsafe stuff internally and exposes the standard indexing operator, which would make foo[z] actually accept arbitrary indices just like in C. But you shouldn't. https://play.rust-lang.org/?version=stable&mode=debug&editio...
(Among other things, a code reviewer should be suspicious of your use of "unsafe" in the internals of a thing without stating why the higher-level abstraction is safe, and in fact the abstraction is wildly unsafe here, so it's bad style to write code that launders the unsafety, so to speak. In the Rust for Linux patches, there are "SAFETY" comments above each use of "unsafe" defending their logical safety.)
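A minimal sketch of that convention (the function here is invented for illustration): the `// SAFETY:` comment above the `unsafe` block states why the operation cannot actually violate memory safety:

```rust
// Hypothetical example of the SAFETY-comment convention used in the
// Rust for Linux patches: each unsafe block justifies its invariant.
fn first_or(v: &[i32], default: i32) -> i32 {
    if v.is_empty() {
        default
    } else {
        // SAFETY: `v` is non-empty (checked above), so index 0 is in bounds.
        unsafe { *v.get_unchecked(0) }
    }
}

fn main() {
    assert_eq!(first_or(&[7, 8], 0), 7);
    assert_eq!(first_or(&[], 0), 0);
}
```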
UnsafeSlice is a terrible idea, but let us at least give it the normal ergonomics of a wrapper type so we can say UnsafeSlice(&mut a) rather than needing curly braces to make one :)
You don't need any more than that to match what C offers, C as in standard C, not the folklore of C as a portable assembler. In fact you're freer in Rust than in C, because there is a simple, defined way to type-pun memory in Rust.
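One hedged example of that defined type punning: reading a float's bit pattern, where the tempting C pointer cast `*(uint32_t *)&x` violates strict aliasing:

```rust
fn main() {
    let x: f32 = 1.0;
    // Well-defined in Rust; the C pointer-cast equivalent is UB
    // (you'd need memcpy or a union there).
    let bits: u32 = x.to_bits();
    assert_eq!(bits, 0x3f80_0000);
    // The reverse direction is equally well-defined.
    assert_eq!(f32::from_bits(bits), 1.0);
}
```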
> Your C program is free to define x as an array with four elements and then access x[6] anyway
It's free to do anything, but you can't be sure that it will do that, because of the utterly weak specification.
Note that the C version of this might compile, but the result is undefined, so it could be what you intuitively thought would happen, or anything else.
> - It is not fun but frustrating to work in Rust, and contrary to C, you are limited by the language/compiler on what you can do.
That depends on the beholder.
Developers that grew up in Algol derived languages like Modula, Pascal and Ada, feel Rust brings fresh wind into systems programming, with safety considerations we thought it were lost forever and only partially covered by C++.
Then there are the others like Kernighan, that feel that languages like Pascal, sorry Rust, is programming with a straightjacket and better not change anything.
I suppose that's why they're writing drivers, not crypto libraries.
The sample in the article is Binder. Which is only needed for platforms Android supports. Which are all Tier 1 & 2 Rust targets. (As is MIPS too).
If you wrote a driver for HW that only appears on 1 or 2 CPU architectures then targeting is less of an issue. I would not be surprised if lots of drivers in Linux only work on x86 anyway.
> you are limited by the language/compiler on what you can do
You can translate most C to Rust automatically (https://c2rust.com/) and there's nothing that I'm aware of that can't be done in Rust via unsafe and transmute. (technically some things like specific label jumps can't be translated, but all of those can be rewritten to other constructs) Do you have some specific cases in mind?
That's "translate c to rust" in the same way as translating English to Japanese by looking up the kanji for an English word, and replacing it word by word. Why not just generate bindings at that point?
I'm neither recommending to use it, nor saying it's a good quality result. I'm addressing the "you are limited by the language/compiler on what you can do" part, which for real code is not the case in my experience.
Ah, yeah I see your point. I suppose that's a useful shim to having full Rust interop with a pre-existing C codebase as you convert, or if you have a mature lib you just want to include wholesale in Rust.
But yeah bottom line, nowhere does Rustc "stop" you from doing things. Just strongly discourage :)
Here’s a blog series about rewriting some classic C in Rust, first unsafely and then safely, and getting some performance wins along the way: http://cliffle.com/p/dangerust/
> It is not fun but frustrating to work in Rust, and contrary to C, you are limited by the language/compiler on what you can do.
People who have had prior exposure to C tend to find Rust frustrating.
People who have not had prior exposure to C tend to find it fun, and an average programmer of this sort can fearlessly write bare-metal code that beats the code of the best C programmers writing in C in safety and rivals it in performance.
Systems programming has been fundamentally changed by Rust. It's become as accessible and democratized as web programming, no longer the sole province of a cadre of elite C programmers.
Indeed it was (/s); that's why not even Servo has shipped. Stop preaching programming languages without results. Because of their benevolent dictators (Linus Torvalds for Linux, Rich Hickey for Clojure, Andrew Kelley for Zig, Guido van Rossum for Python), the development process of these has been less democratic, and this is precisely why the results are so good. A good design is not a democratic consensus. Look how Rust and C++ ended up: a big pile of complexity. Even Scala 3 was saved by an intervention from Martin Odersky to clean up the language, with huge backlash from the community.
Servo was not meant to ship (at least by Mozilla, when there was paid staff working on Servo), it was a research vessel and Rust components now shipping in Firefox (Stylo, WebRender) started life in Servo.
The FUD around Rusting the kernel reminds me of similar sentiments surrounding autonomous vehicles. Better != perfect; it's an evolutionary step. Even if there are unintended negative consequences of Rust code, if they are less frequent than the rate we deal with bugs today, then it's worth using.
Rust in Linux has one known, constantly visible negative consequence: it introduces another language to the stack. Furthermore, Rust is quite different from C. This makes the whole massively more complicated and increases the amount of knowledge needed to understand it.
The same effect applies every time a new language is introduced, if it doesn't completely replace the previously used language. In this case, Rust won't.
Zig might be a better fit, given how much more similar to C it is.
> Rust in Linux has a known constantly showing negative consequence
It'd make your argument a lot stronger if you actually listed some of these negative consequences. "Another language in the stack" and "different from C" is just complaining about change because it is change.
What negatives have already been shown that isn't just about "This isn't C"?
Mixing any two languages in any single code base creates significant friction at the boundaries, and adds new degrees of complexity in major areas (build system, tooling, debugging...). If we're talking about a project as complex as a production OS kernel, this kind of a decision should never be taken lightly. It's a much smaller step from 2 to 10 than from 1 to 2.
> It's a much smaller step from 2 to 10 than from 1 to 2.
But here, you're already starting with 2: C and assembly. Besides inline assembly, a small but very important part of the Linux kernel is written in assembly on every architecture: the system call entry point (entry.S) and the kernel entry point (head.S). And if you consider each architecture's assembly as a separate language, it's more like 10 languages than 2 languages. I'm always impressed whenever I see changes to for instance signal handling or thread flags which touch not only the common code in C, but also the entry point assembly code for each one of the many architectures Linux supports; whoever does these changes needs to not only know the assembly language for all these architectures, but also have at hand all the corresponding tooling and emulators to compile and test the changes.
You do have a point, however (as you noted) the lowest-level bits of an OS kernel are practically impossible to build (and subsequently, maintain) without precise control over the machine code; you can't even start a hobby OS kernel project without relying on assembly. It's a part of the deal; a pure-assembly kernel is more feasible than one without any. You also (as you pointed out) still have to be mindful about the C-asm boundary; the integration doesn't come free.
The story here is pretty different: integrate a new, high-level language into a 30 year old, 30mil SLOC, production code base, that billions of people rely on every day, AND actually extract some value from that work.
A very obvious one is that by adding another language, you are adding more complexity.
It's not as if C is going to disappear from the kernel as it's something like 25 million lines of C code, and if Rust was to be supported, the current C experts who are maintaining various subsystems will now also have to become Rust experts, so that they can effectively accept or reject code contributions in that language.
Personally it just seems illogical; better to make a new kernel in Rust if you really want to use that language than to convert small parts of a HUGE C kernel. Google has been pushing for the inclusion of Rust into the kernel; it's weird that they are not writing their own shiny new Fuchsia OS kernel in Rust, instead of C++.
It's funny how frequently people bring this up, but the truth is simple; check here [1]. The Zircon kernel is not new [2], it has been in development for a while now. By the time they started to work on the microkernel, Rust 1.0 was really new, so they would have had to implement several things from the ground up. There's an implementation of Zircon in Rust called zCore [3], but I don't know how stable and feature-complete it is.
Why is it a bad thing that Rust requires you to learn more things? As 'mjg59 pointed out recently, the kernel dev community intentionally asks you to learn more things unrelated to your code as a means of keeping the "bar" high and fielding only committed contributors. Isn't it all the more reasonable to ask people to learn a programming language? https://twitter.com/mjg59/status/1413406419856945153
Rust isn't terribly hard to learn, especially for a kernel developer with a good understanding of C and of memory. You can pick up the basics in probably an hour. A lot of its design choices match approaches the kernel already takes (traits are like ops structs, errors are reported via return values, etc.)
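As a rough, invented illustration of that parallel: a trait can play the role a C ops struct (like `file_operations`) plays, with errno-style integer errors carried in the return value. The names and error code below are made up for the sketch:

```rust
// Hypothetical sketch: a trait standing in for a C ops struct.
const EINVAL: i32 = 22; // invented errno value for illustration

trait FileOps {
    // Errors are plain return values, errno-style, as in kernel C.
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, i32>;
}

struct ZeroDev; // behaves like /dev/zero

impl FileOps for ZeroDev {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, i32> {
        if buf.is_empty() {
            return Err(EINVAL);
        }
        buf.fill(0);
        Ok(buf.len())
    }
}

fn main() {
    let mut dev = ZeroDev;
    let mut buf = [0xffu8; 4];
    assert_eq!(dev.read(&mut buf), Ok(4));
    assert_eq!(buf, [0, 0, 0, 0]);
}
```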
And Rust is a language that plenty of college students pick up for fun. Professional kernel engineers should be able to learn it just fine. Frankly the hardest thing about Rust is that it makes you think deeply about memory allocation, concurrent access across threads, resource lifetimes, etc. - but these are all things you have to think deeply about anyway to write correct kernel code. If you have a good model for these things in C, you can write the corresponding Rust quickly.
In fact, learning Rust and thinking about Rust's concurrency rules has made RCU a lot easier for me to understand. RCU is famously a difficult concept, but the kernel uses it extensively and expects people to use it. So "requires little knowledge and is easy to understand" is not an existing design goal of the kernel - but having people pick up Rust might help there anyway.
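A toy example of the kind of rule involved (names invented): the compiler refuses unsynchronized shared mutation across threads, so you reach for `Arc<Mutex<...>>` (or atomics) and the locking discipline becomes explicit in the types:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical sketch: shared mutable state must go through a sync type.
fn count_with_threads(n: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..n)
        .map(|_| {
            let counter = Arc::clone(&counter);
            // Without the Mutex (or an atomic), this closure would not
            // compile: the compiler rejects unsynchronized shared mutation.
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    assert_eq!(count_with_threads(4), 4);
}
```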
(Zig seems like an entirely reasonable choice too. Send in some patches! :) )
> Rust isn't terribly hard to learn, especially for a kernel developer with a good understanding of C and of memory.
I'm not so sure. I program in C for a living (embedded, for almost 20 years) and believe me that I tried learning Rust, but when I see something like:
_dev: Pin<Box<Registration<Ref<Semaphore>>>>
I cannot even imagine the knowledge code like that might require, its implications, the result, the reason why it was written like that. It's confusing. It seems like something trying to work around a language limitation. Not nice at all.
Adding Rust complicates things, but Rust makes writing correct code easier, which is no small feat in the kernel world. The added complexity may be big, but it's a one-time cost compared to the stream of Rust code that one can hope for.
Rust is known to be hard to learn (YMMV), but C is even harder. If things go according to plan, someday for some use-cases you'll be able to contribute kernel code in pure safe Rust without having to learn C. In the meantime, adding Rust doesn't seem to be such a big ask when you consider what the kernel already has besides C: assembler, the C preprocessor (yes, it's actually a different language independent from C, and some kernel macros are really complicated), the BPF and io_uring APIs (essentially their own DSLs), and a myriad of other inner-platform curiosities you might need to deal with depending on the kind of kernel work you do.
Concerning Zig, the cons may be smaller than Rust's, but so are the pros. IMHO it's not worth it in the current context (I like Zig, but it seems "too little, too late" to me). But there's no telling until somebody puts in the work for a "$OTHER_LANGUAGE in the kernel" RFC like is currently happening for Rust.
What Zig shares with C is orthogonality, with a large power-to-weight ratio, meaning it's a small language grammar with powerful range.
But Zig also improves on C's safety in many ways, not least checked arithmetic enabled by default in safe release modes, along with bounds checked memory accesses, function return values that cannot be ignored, checked syscall error handling, explicit allocations, comptime over macros, a short learning curve and remarkable readability.
It's hard for systems programmers not to appreciate any of these qualities in isolation.
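For comparison, Rust makes similar checks available explicitly in its integer API (a small sketch, not tied to any kernel code):

```rust
fn main() {
    // Overflow is detected instead of silently wrapping.
    assert_eq!(i32::MAX.checked_add(1), None);
    assert_eq!(100i32.checked_add(1), Some(101));
    // Or clamped at the type's bounds when that's what you want.
    assert_eq!(250u8.saturating_add(10), 255);
}
```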
We, the global community of software developers, are in the process of putting C out to pasture, with Rust as the de facto front runner as a successor. At this point it becomes a question of either admitting Rust into the kernel or, eventually, using another kernel written in Rust.
C++ was hampered by the same safety problems C was. And Java had a VM and GC, which cripple performance and determinism. Rust solves both those issues.
We need to keep in mind that there are two camps in the autonomous vehicle discussion. We have companies that use a ton of hardware (radar, lidar and many cameras), and then we have the others that want to move fast and break things using only cameras, a GPU and many beta testers. It is normal that the approach of brute-forcing it with a ton of data gets a ton of criticism.
My other criticism is the bad statistics used. It is as if I created a "robot athlete" and compared its stats against the average of all athletes, including young children and people with physical problems. You should compare self-driving against cars with the exact same safety features and the same driver demographics. Bonus if you calculate all deaths caused by illegal driving and then ask the giants why they don't put the money into solving speeding and drunk/tired driving first; I bet an ANN would work better on that problem.
Rust in Linux seems to me a waste. IMO the Unix philosophy is great, but it needs a better implementation, one based on present-day hardware and expectations.
> Bonus if you calculate all deaths caused by illegal driving and then ask the giants WTF not put the money into first solve the speeding and drunk / tired driving , I bet ANN work better on this problem.
Why would they want to do that though? No one would buy a car with those features. I guess you could get a few people to buy one if the insurance was way lower, but you certainly wouldn't get decent market penetration.
So say someone builds a system that you install in cars that monitors how you drive; it can detect whether you respect speed limits, and whether you drive normally or have risky behavior. Few will want this in their own car, but they will want it in other people's cars, so I can see these possibilities:
- a law that requires it in all cars
- a law that requires it in new cars
- a law that requires it only for new drivers (less than 2 years) and for people caught speeding/drunk.
- a law that will make certain roads only available for self driving cars or for people with this safety system. It would be a compromise between AI drivers camp and people that still want to drive themselves.
Maybe you will say something about privacy; this system can be implemented with no connectivity with which to spy on you. It could be read out only when you want to pay your insurance or in case of an accident.
But AI can also be implemented in a different way: you could have quality cameras recording traffic and have the AI detect who is using a phone, who is not paying attention, who is moving in an erratic pattern, so it would be an advanced speeding camera.
The argument is that the AI driver camp will demand that humans be removed from the road, because their AI driver is better than the average. To prevent you from losing the right to drive, this average needs to be improved, and that can be done by removing the bad drivers; a good place to start is the ones that do not respect the rules. Otherwise you might object to installing a safety system in your car, but then the big companies will lobby hard and you will have to use the Tesla/Google or Apple AI-powered cars; then not only will the government know all about your movements, but the ad companies will know it too.
Then maybe the Rust community would be better served by not hyper-evangelizing it so much and at the same time bashing other languages?
There's a perception issue here: people think that Rust people (not necessarily the maintainers or official evangelists, but the community at-large) think Rust is as close to perfect as you can get because of its safety features, because these people talk about Rust like it's a universal problem solver.
I kind of agree, that is why you tend to see for-and-against comments from my side.
Despite the plus sides, there are lots of incumbents, certain domains are better served by managed languages, and regardless of our wishes C and C++ have 50 years of history.
Even if we stop writing new code in those languages today, Cobol shows us how long we would still need to keep them running anyway.
Microsoft, for example, despite their newly found love with Rust, is full of job ads for C++ developers in green field applications.
This points to the larger perception issue that "anybody who advocates for Rust is part of the Rust community and/or knows Rust well". But there are many Rust evangelists who obviously don't know much about Rust (this is not Rust-specific, it's a common issue in tech). This kind of "positive FUD" is ultimately harmful, as outsiders understandably get tired of the hype and start ignoring any pro-Rust argument, good or bad.
In my experience, the community of actual Rust users is much more level-headed. While most do love the language and the "this aspect of Rust is irrefutably better than the equivalent in $OTHERLANG" opinion occasionally pops up, the community seems pragmatic and well aware of Rust's cons. Case in point: the "should I use Rust" questions on the rust subreddit don't get dogmatic answers, and often result in "Rust isn't ideal for your use-case" advice.
It seems pretty clear that rust in linux would reduce certain kinds of run-time bugs. What isn't so clear is whether, overall, rust improves linux or not.
There's bad to be weighed against the good. Adding complexity strains and breaks processes, slowing development. Among other things, this means additional bugs and lets them survive longer, so it's not even clear rust is a win purely from a bug perspective.
> Decades of promises of self-driving cars. And still nothing able to drive without a driver.
Waymo have driven 20 million miles autonomously since 2009, and as of late 2020 claim 74,000 of those were done completely driverless. They're still a far cry from being common, but they're here and they're impressively safe.
The standard economic analysis of cooperatives is that, being run by their employees, they find it hard to embrace disruptive change (e.g. change which involves firing people). Hence, you probably buy your food from a shareholder-owned business, not an employee-owned cooperative.
Linux is a bit like a cooperative in that decisions are made by the software engineers, not by shareholders or managers. In particular, most Linux contributors are probably heavily invested in C. If Rust gets the big boost of being allowed into the kernel, that could make C - and their own skills - be perceived as less valuable, maybe even obsolete eventually.
I'll do some economic imperialism here, and claim that Rust's technical merits or otherwise are a second-order issue. Linux developers have an interest in keeping C dominant, so that will probably happen.
The average Linux kernel developer is probably confident in 5-10 languages, and I can’t imagine they would really have trouble picking up Rust.
That kind of argument might hold for the average junior javascript dev, but kernel developers are normally an experienced bunch and C is rarely learnt today as a sole language.
Yes, I'm sure they're extremely smart, but equally, they are deeply expert in C, and it is not that easy to become deeply expert in Rust (or any other language). Even smart people can't just pick up new tools in a day.
The way Rust is used in the kernel is much more constrained in scope and APIs than normal userland Rust, and hence easier to learn. A lot of the added Rust APIs are also just high-level abstractions of concepts/code that already exist in the kernel.
I think pragmatism wins. It seems very likely to me that a safer alternative with similar characteristics to Linux would attract most of the users and contributors, and thus render Linux irrelevant. Once upon a time there seemed to be no practical way that could be done, but today not so much.
This is in some sense a co-operative, but it's first and foremost technical. A co-operative farm feels a duty to consider Jim's irrational dislike of corn when deciding what the farm should grow. On the one hand, this land is very suitable for corn, there is a need for it, and they have all the equipment, on the other hand, Jim hates corn. But the technical focus means that although Jim's dislike of corn is respected in the LKML, it doesn't override the technical decision that clearly we should grow corn.
Among non-experts you can end up with lengthy arguments about how "good" C programmers don't make the mistakes Rust defends against. Those arguments won't last five minutes on the LKML because everybody there considers themselves a "good C programmer" and has made those mistakes.
One big technical obstacle to Rust was allocation. In C all your allocation is explicit, so that's kernel policy. A line of code that seems to simply assign a to b, for example, must not in fact secretly allocate resources. The ordinary day-to-day Rust you write does have some implicit allocation, but the Rust-for-Linux people landed changes so that you can have core and much of alloc in Linux but lose features that have implicit allocation.
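As a sketch of what that "no implicit allocation" policy looks like in practice: ordinary Rust's `Vec::push` may allocate behind your back and abort on failure, while the fallible-allocation style the kernel requires makes the allocation request explicit and lets the caller handle failure. This is a userland illustration in stable Rust (the `append_fallible` helper is a made-up name, not a kernel API):

```rust
use std::collections::TryReserveError;

// Explicit, fallible allocation: the caller sees and handles the
// allocation failure instead of the program aborting inside `push`.
fn append_fallible(v: &mut Vec<u32>, x: u32) -> Result<(), TryReserveError> {
    v.try_reserve(1)?; // the only point where allocation can happen
    v.push(x);         // guaranteed not to allocate now
    Ok(())
}

fn main() {
    let mut v = Vec::new();
    append_fallible(&mut v, 7).expect("allocation failed");
    assert_eq!(v, [7]);
}
```

Kernel code can't just unwind or abort on out-of-memory, which is why APIs with hidden infallible allocation had to be cut from the `alloc` the kernel uses.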
Unless there's a hitherto unforeseen technical blocker, Rust in Linux seems inevitable at this point. They have a list of unstable features they're relying on; expect that over the coming months items either get crossed off (having meanwhile become stable) or get removed from Rust-for-Linux in favour of an alternative. Unlike Firefox, I'm guessing Linus has no appetite for living on the bleeding edge, so I think once Rust-for-Linux is just stable Rust, it'll make its way towards the Linus tree.
C was born to make UNIX portable (V4), so any UNIX clone will have a symbiotic relation with C, which kind of boils down to your line of argumentation.
I see Rust having more success in OSes, and platforms, that aren't so reliant on being UNIX clones.
If there is industry backing, it will happen. The coming generation of programmers (and hence the future of the kernel) are nowhere near as enthused to program in C as they are in Rust, so this transition is inevitable given time, especially if the kernel is not to slowly die out over the decades, stuck permanently with a fossilised language.
Every now and then you see companies make insanely big bets that radically change a fundamental technology because of an unaddressed need.
I'm increasingly convinced that a good chunk of MacOS/iOS security problems are due to the simple fact that it is very difficult to write correct system-level code in C.
If (and it is a big if) the performance of Rust can be in the same neighborhood as C, and these kinks can be worked out, then as an architect, I think it might be time for an ideological come-to-Jesus moment/Protestant Reformation of the kernel.
In my opinion, you should learn Rust even if it doesn't get adopted everywhere. My experience is that learning the coding style enforced by the borrow checker (having to think about whether a piece of data is "owned" or "borrowed", whether it's an exclusive or shared borrow, whether it's safe to send or access it from another thread) makes you a better programmer in other languages.
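The distinctions the borrow checker forces you to articulate can be shown in a few lines. This is a toy sketch (the function names are made up for illustration):

```rust
fn shared_len(s: &String) -> usize { s.len() } // shared borrow: read-only access
fn exclusive_clear(s: &mut String) { s.clear() } // exclusive borrow: may mutate
fn consume(s: String) -> usize { s.len() } // takes ownership: caller gives `s` up

fn main() {
    let mut s = String::from("hello");
    assert_eq!(shared_len(&s), 5); // any number of shared borrows may coexist
    exclusive_clear(&mut s);       // but only one exclusive borrow at a time
    assert!(s.is_empty());
    let _n = consume(s);           // `s` is moved; using it afterwards won't compile
}
```

Asking "who owns this, who may mutate it, who may only read it" is exactly the discipline that transfers to C, C++, or any concurrent code, even without a compiler enforcing it.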
Rust helps a lot with memory or concurrency issues. But not the rest. And in my experience, although the concurrency/memory issues are surely the most time-consuming bugs, the "rest" is what takes 90% of the time...
We can hope such bugs are caught in review, or that the code happens to do the right thing by accident; or use static analysis and hope it triggers in that case; or use a language with clearer types, where it's much easier to make the type system prevent the issue.
Ada can do that, but C++ arguably makes things even worse than in C: apart from supporting a subset of C's misbehavior, it introduces implicit-by-default converting constructors. Those make API misuse very easy.
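A sketch of what "clearer types" buys: in Rust the newtype pattern makes conversions explicit, the opposite of C++'s implicit-by-default converting constructors. The `Millis`/`Seconds` wrappers and `sleep_for` function below are hypothetical names, not a real API:

```rust
// Distinct wrapper types stop a caller from passing seconds where
// milliseconds are expected: the kind of misuse an implicit
// conversion would silently allow.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Millis(u64);
#[derive(Debug, Clone, Copy, PartialEq)]
struct Seconds(u64);

impl From<Seconds> for Millis {
    fn from(s: Seconds) -> Millis { Millis(s.0 * 1000) } // conversion is opt-in
}

fn sleep_for(d: Millis) -> u64 { d.0 } // stand-in for a real sleep API

fn main() {
    let d = Seconds(2);
    // sleep_for(d);  // rejected at compile time: Seconds is not Millis
    assert_eq!(sleep_for(d.into()), 2000); // caller converts deliberately
}
```

The same pattern works for privilege levels, validated vs. unvalidated input, and the other security-relevant distinctions the parent comments describe.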
To each their own. I find Rust a huge step up from C and even C++...especially C++. I've been writing Rust for months and C++ for years. I still feel more comfortable and confident my Rust code has fewer bugs and does what I intend vs C++.
It should be noted: memory only in cases where automatic memory management cannot be afforded, and concurrency only where in-memory data structures are concerned.
Compared to plain C, you have ADTs which help from day to day programming. You can bring them to C (https://github.com/Hirrolot/datatype99) but I don't know if the Linux guys would allow it.
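A minimal example of the kind of ADT meant here: one enum with data-carrying variants, exhaustively matched, versus the tagged-union-plus-manual-discipline idiom C requires. The `Event` type is invented for illustration:

```rust
// An algebraic data type: each variant can carry its own payload.
enum Event {
    KeyPress(char),
    Resize { width: u32, height: u32 },
    Quit,
}

fn describe(e: &Event) -> String {
    // The compiler rejects this match if any variant is left unhandled,
    // unlike a C switch over a tag field.
    match e {
        Event::KeyPress(c) => format!("key {}", c),
        Event::Resize { width, height } => format!("{}x{}", width, height),
        Event::Quit => "quit".to_string(),
    }
}

fn main() {
    assert_eq!(describe(&Event::KeyPress('a')), "key a");
    assert_eq!(describe(&Event::Resize { width: 80, height: 24 }), "80x24");
}
```

In C you get the same shape from a tag enum plus a union, but nothing checks that the tag and the accessed union member agree, or that every tag is handled.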
I would say that 90% of the bugs are just logic issues, but there is a heavy tail of devious bugs that will take 90% of the time, even if they don't surface right away.
Although I still haven't really played around with Rust yet myself (still trying to grok the new C++20 features in recent weeks), the fact it puts your code into a (sometimes admittedly annoying) sandbox to prevent these bugs could be a godsend.
In general I find I'm productive in Rust; however, I do agree that there are some features that produce more confusion and debugging than others. For me it seems to be async and dynamic dispatch that are tricky.
I understand what the kernel devs are afraid of. Rust needs to demonstrate beyond a shadow of a doubt that it's a good replacement for C in the kernel, or they risk alienating people who don't want to learn another language, and losing those developers to FreeBSD or something.
The kernel has accumulated a lot of technical debt; Linus speaks openly about it. Mixing the new flavor into a codebase that is 95% (?) C, used (and abused) with warts and all, does not strike me as a good idea. The interaction between assembly, C and Rust is not nearly well enough understood to start.
Now, if it does go ahead, it will be a great platform to figure out the edge cases, the bugs, and all the things nobody thought about.
I am in favor of Linux.NEXT: write a new kernel in Rust (if it's the best choice), take all the learnings from Linux and all the research in operating systems that has occurred since, and make Linux.NEXT.
Steal from Linux, Windows NT, Qubes, OS/400, Solaris, Minix, z/OS, Plan 9, Fuchsia, BeOS/Haiku, ESXi, QNX, and of course TempleOS.
Create an operating system suitable for 2020, given the way technology, infrastructure, connectedness, and pervasiveness have changed: privacy-conscious, and built for the hostile world of the Internet.