>I find it distressing that you are complaining about new users of your
code, and then you keep bringing up these kinds of complete garbage
arguments. Honestly, what you have been doing is basically saying "as a DMA
maintainer I control what the DMA code is used for". And that is not how any of this works.
I appreciate that his anger is still there, it's just worded differently, in a more modern (softer) attack. Is this still how we control developers in 2025? Yes! Deal with it or go fork it yourself. The day this goes away is the day Linux begins to die a death of 1,000,000 cuts in code quality.
I've never seen anyone talk about Linus' bilingualism concerning his anger issues. I'm bilingual, and sound much harsher and meaner in English than in my native tongue. Could that be an element of the problem?
Could be, but having seen a couple of his crashouts, I don't really feel like buying that. According to his Wikipedia page, he's also been living in the States for about 25 years, and been a citizen for 15. If he couldn't achieve appropriate mastery of the English language with so much time spent collaborating with others using it essentially exclusively, and having lived for decades in a country where it is the native everyday language, I'd find that legitimately more impressive than the opposite.
This isn't mastery of the English language, he's mastered it fine. This is mastery of American working culture, which is optional. The reason he grates on Americans so much is that he's direct with his feedback, which Americans take as anger.
To me (a Greek), what he says is fine. He never gets personal, he talks about the arguments/points/issues, not the person. "complete garbage arguments" is a judgement of the arguments, not of the person, and it's fine to tell someone their arguments are garbage, if that's what you think about them.
Americans/Canadians/the English will interpret that as anger, but, in other cultures, it's just voicing one's opinion.
Well I'll be damned then, cause (old example that came my way) ...
> Kay, I'm f*cking tired of the fact that you don't fix problems in the
code *you* write, so that the kernel then has to work around the
problems you cause.
... this is getting personal, this is being knowingly crass, this is universally offensive (albeit fairly mild), and I have extremely serious doubts that in Greek being told that people are fucking tired of your behavior is "just voicing one's opinion".
You also have to appreciate that this is from an email. You don't type out an email like this and hit send without being really quite content with it. I cannot personally imagine sending an email (or even a text message) like this to anyone at work, for example, and there have been times where they definitely would have deserved this and then some.
There are communities and workplaces where this type of communication is the norm, yes, and it used to be especially common in the past (and from what I know, still often is in blue collar jobs). This is true both in and outside of America.
I'm not from America, I'm from Central-Europe. I know what being "direct and upfront" is like, and this is not that. This is just being a twat. Language like this is never productive or acceptable in any culture, and it's not some American mind-virus spreading around the world that made people figure they can and should afford to have standards regarding their interactions with one another.
If anything, the trendy bit here is the opposite, where people seem to make a sport out of mischaracterizing being crass as telling hard truths. Just as cynicism is often mistaken for intellectualism, this too is utterly misguided, and Linus is being washed of his personal faults out of respect, not because he's not at fault in reality.
I'm a Central European as well, and I not only think Linus's tone is okay, I'm actually more comfortable around people like him. I find this watch-your-words culture repressive, and being with people who use and tolerate the "bad words" is liberating.
Ultimately that's for everyone to decide on their own. I personally feel the exact opposite way: although having to mind my speech gets really quite annoying at times, and there are communities that for whatever reason really overdo it, I really do not wish to participate in communities where disregarding it, in the form of a quasi-rebellious attitude, has taken deep cultural root.
The best of both worlds for me is when I can naturally just mingle with others without this issue cropping up, because the chemistry just works out on its own. I miss and greatly cherish those people I've met over the years who I was able to collaborate and spend time with like that. Instead of subscribing to either extreme, that's the kind of interaction I'm actually wishing to experience more of.
Regrettably, I feel this has been becoming less and less likely over time, which I attribute to the growing distrust and hostility among people at large, as well as the countless other (what I perceive as) dysfunctional social dynamics on the internet these days (e.g. mainstream political activism and counteractivism on community spaces of all kinds). And then who knows.
I don't think anyone here is saying that the problem is the "bad words" themselves.
I'd wager that a lot of people here are probably fine with someone shouting "motherfucker" at a problem on their screen, or saying that something is "fucking stupid", but draw the line at shouting or saying this to a person... especially in a public email, and even more especially when you're in a position of power over them.
As a Southern European living in Central Europe, and having worked for several years at Nokia, I also don't have an issue with Linus.
In fact I do happen to fall into traps in Germany and the UK, where being more frontal at work turns out badly, because apparently we have to be all smiles and complain in a positive way, with euphemisms.
A technique that I have improved over the years: respond with the feedback level expected by the target culture, when I know their background beforehand.
"I'm fucking tired of the fact that you don't fix problems in the code you write" doesn't make any judgement about the person, it makes a factual judgement of what they did. He didn't say "you're a bad coder", he said "you create bugs and then don't fix them".
Sure, he could have omitted the word "fucking" there, which doesn't add much, but "I'm tired of the fact that you don't fix the problems in your code" is really good, direct, honest feedback.
I understand that the person receiving it might feel bad, but they might also feel bad with "I've noticed that sometimes you tend to leave some bugs unfixed, even if perhaps they were caused by your own commits". When you give feedback, you should steer away from making personal judgements, focus on the facts, and deliver the feedback calmly, and Linus' sentence hits two of the three, which isn't too bad.
Anglo cultures really do tend to walk on eggshells to avoid hurting feelings, which other cultures might find tiring and somewhat disingenuous.
> "I'm tired of the fact that you don't fix the problems in your code" is really good, direct, honest feedback.
I disagree. I don't see why Linus being tired or not is technically relevant at all. Leading with it makes it sound like he's looking for the guy to stop tiring him, rather than actually remediate his lapses in self-review.
Being courteous is not about avoiding a negative experience at all costs, it's about being considerate of it, keeping in mind that it exists. If you know you're going to tell the guy they're causing issues, then not making that about how fucking tired that makes you feel shows that you're not trying to mess with their head, but trying to actually address the issue. It's specifically to avoid the ambiguity on whether he has a problem with the person or what they're doing, since when insulted, people tend to reasonably assume they're being found to be problematic.
I really don't think this is all that culture-specific, or that this is just some freak cultural mismatch that has been going on for decades. Not ethnic-cultural, at least.
Maybe it is not exactly culture-specific, but most of your post definitely sounds to me like "walking on eggshells" stavros mentioned.
Tbh I agree that your citation of Linus' email was an example of a not exactly productive conversation. But preventing that kind of emotional venting takes effort, same as not feeling insulted when your work is criticized. And who should put in the emotional effort is imo at least partially influenced by cultural expectations.
I don't think these are mutually exclusive, and I also don't think that this is a scenario where that Kay person just needed to put in effort to not feel insulted when their work was criticized, because it was them and their behavior that was criticized, not their work.
If wanting others to not say things like how they're "fucking tired of someone else or their behavior" is making them walk on eggshells, and can remain elusive to them for decades out of cultural differences, then clearly I'm taking crazy pills, because that's just about the most outlandish proposal I've ever come across.
That could be, but it could also be the other way around. I would swear much, much more in my native language compared to English. You never know, maybe Linus has always toned it down.
That's a good point, and I'm surprised it doesn't come up more often. Not just the language itself, but the use of it: how many of us read it in the light of American norms taught along with English, like not cursing and not being too direct, when Linus has just always communicated that way, and there's no ego or kindness at play. He shares what he sees and thinks as directly as he sees it and moves on, and that in itself doesn't necessarily mean anger.
> I've never seen anyone talk about Linus' bilingualism concerning his anger issues. I'm bilingual, and sound much harsher and meaner in English than in my native tongue. Could that be an element of the problem?
> My own introduction to speaking French as an adult was less joyous. After reaching out to sources for a different article for this magazine with little success, I showed the unanswered emails to a friend. She gently informed me that I had been yelling at everyone I hoped to interview.
> Compared with English, French is slower, more formal, less direct. The language requires a kind of politeness that, translated literally, sounds subservient, even passive-aggressive. I started collecting the stock phrases that I needed to indicate polite interaction. “I would entreat you, dear Madam ...” “Please accept, dear sir, the assurances of my highest esteem.” It had always seemed that French made my face more drawn and serious, as if all my energy were concentrated into the precision of certain vowels. English forced my lips to widen into a smile.
It probably takes a person with a direct approach to bullshit to keep a free software kernel/OS project on track. Linus Torvalds for Linux, Theo de Raadt for OpenBSD, both known for their lack of tolerance for bullshit, both heading projects which managed to stay on their respective tracks for decades. In medicine 'gentle healers make stinky wounds' and the same is true for software development.
That's a huge sacrifice when speaking of him, which we must appreciate. But to be honest, I must agree with his point of view.
The golden rule when developing projects is to stick to one technology (or the smallest number possible), otherwise you'll end up with software for which you need to hire developers for different languages, or accept developers who won't be experts in some of them. I am working on a project that, up until a year ago, had been partly written in Scala. All the Java developers who didn't know Scala were doomed to either learn it painfully (through errors and mistakes) or just ignore tasks concerning that part of the system.
You're right that this is generally a golden rule. But rules can have exceptions, and this seems to be one of them; the Linux kernel is now so large and complex, and C so obviously outdated now, that it's worth the pain to start writing drivers in Rust. And because of the modularity of the kernel, and the care taken to make Rust binary-compatible with C, this looks to be actually practical, as individual subsystems will be either entirely Rust or entirely C, particularly when new drivers are involved.
Yes, and that API is needed for drivers written in Rust. So it's not like core parts of the kernel are being now written in Rust, it's still just specific Rust drivers.
It was my understanding the request was to have the core maintainer take on the additional task of supporting the rust API on top of the existing wrapper.
I could be wrong, but a branch or fork seems like the easiest solution. =3
Your understanding is completely wrong. There was no such request, and that concern was almost immediately addressed by spelling out very directly that no such expectation existed.
In general, mixing languages has serious long-term consequences. Unless you have done refactoring of large legacy systems... you probably wouldn't understand fully.
Perhaps ask why folks didn't create an isolated kernel branch for such a massive refactoring?
"Rust doesn't support all the architectures that gcc does"
Assuming a bootstrap compiler can even bring up Rust or the cargo monstrosity. People will just switch to other options, and maybe someone will get around to a Rusty Linux Fork if the working system needs entertaining drama.
Except... Most companies offer products built in multiple languages. Google notoriously has a multilingual monorepo and a uniform build system. Every company of non-trivial size uses multiple languages. Even Asahi Linux does!
Polyglot projects are harder, but definitely not as hard as OS development in C.
The majority of companies doing web are doing "something plus JavaScript".
And then if there's some AI in the mix, there's also almost always Python. Often maintained by a super small team.
What about apps? Java and Swift. The migration to Swift included a lot of Obj-C and Swift living side by side for a while. Same with some apps migrating to Kotlin.
In general... it is often a skill issue with labor pools, as firms have accepted that Java/JavaScript are almost universally known by juniors.
Apps are in user space, and are often disposable code after 18 months. Projects like https://quasar.dev/ offer single-code-base, auto-generated app publishing options for Android/iOS/macOS/Win11, but most people never hear of it given that a macOS build host is required for full coverage.
The talk pokes fun at the ivory tower phenomenon, and how small-firm logical assumptions halt-and-catch-fire at scale. =3
How can it possibly be an issue with the labor pool when JS is the only truly supported web language and Python is the main language for ML tooling?
I think that you are handwaving away differences in runtime environments. Almost all languages are Turing-complete. Many of them, if used in the wrong context, will get you stuck in a Turing tarpit.
Languages succeed or fail on the strength of their runtimes. If you seriously think that mixed-code codebases are trash after 18 months, then I think I'm wasting my time - that statement is so fundamentally detached from reality that I don't even know how to start.
"that statement is so fundamentally detached from reality that I don't even know how to start."
It is the statistical average for Android, but again it depends on the use-case situation. Apps are a saturated long-tail business that fragmented decades ago.
Python sits on top of layers of C++ wrappers for most ML libraries, and doesn't do the majority of the heavy computation in Python itself (that would be silly). Anyone dealing with the CUDA SDK for PyTorch installs can tell you how bad it is to set up due to versioning. That is why most projects simply give up and use a Docker image to keep the poorly designed dependencies operational on a per-project basis.
"then I think I'm wasting my time"
Then just let the community handle the hard part, and avoid having to figure it out yourself:
Sorry, but apps being disposable and "JS-only frameworks" existing doesn't change reality. Nor does juniors knowing only one language.
Single-language codebases are the exception rather than the rule in a lot of industries, which includes apps and web. Ditto for operating systems: the Linux kernel might be mostly C, but Windows and macOS both mix C and C++ in the kernel, and a lot of userland is in C#, or in a mix of Swift and Objective-C, respectively.
"Explanations exist; they have existed for all time; there is always a well-known solution to every human problem -- neat, plausible, and wrong." ( H. L. Mencken )
Probably conflating popularity with good design. The talk covers the financial incentives for naive code choices, and perceived job security in ivory towers.
It might work for you, but in general is bad for the firm long-term. Best of luck =)
"Single-language codebases are the exception rather than the rule in a lot of industries"
And indeed >52% of those Polyglot projects end up failing in more than one way, and are refactored or abandoned. I didn't like that the talk rips on Elixir/Phoenix either, but factually his insights were not inaccurate.
On average it takes around 2 to 3 years to understand why... As it takes time to see why Tacit knowledge is often ephemeral. Tantalizing bad ideas reverberate on the web, and the naive recycle them like facts. YMMV and it usually doesn't matter if you have under 38k concurrent users... and spend $3m/month in transfer fees to AWS. Perhaps your part of the world works differently than mine. =3
Google does not cancel projects because they're technically unwieldy, and if you ask pretty much any ex-Google engineer what they think about internal tooling they will generally praise it because it's very good.
Complexity is a consequence of dealing with the real world. An OS talks to hardware, which might have errata or just straight up random faults.
Google's initial technological success was due to the design assumption that individual systems were never reliable and should fail gracefully. It was, and still is, considered a safe assumption...
Indeed, on average 52% of all IT projects at any firm end up failing, or were never used as designed. What exactly does a glorified marketing company have to do with successful OS design? The internal Android OS they planned to replace Linux with was canceled too. "All software is terrible, but some of it is useful" =3
I would suggest actually reading what engineers who have worked at the company say about it. Many of the statements you've just made are irrelevant or factually incorrect (e.g. Fuchsia might be cancelled, but Google remains one of the major contributors to Linux and is certainly not a "marketing company.")
Drivers are hardware specific, and so it's not a problem that Rust has support for a subset of the platforms that the kernel does. If you want to write a driver for a platform Rust doesn't support, just write it in C.
Drivers are usually hardware specific but are not always platform specific. For example, all the USB device drivers (but not host drivers) from x86 work just fine on ARM and RISC-V. And the kernel isn't just device drivers, but also stuff like networking: higher-level abstractions are decoupled from the platform quite well.
I don't think there is a practical intersection of platforms not supporting Rust and running a USB host, but by the time Rust creeps into something like network subsystem code, you will either not have networking (which was working before) or you'll have to make yourself a Rust compiler.
This can mean dropping platform support. Maybe those are dead platforms, but somebody will make the argument that adding rust made things objectively worse.
Sure. The point is, this is for new code where you know Rust supports the platform. It “creeping in” to something Rust can’t support won’t happen, because then the rust-less build will break.
You are right that there are other policies that could work in a different way, but these policies were chosen specifically so that Rust doesn’t make the kernel sacrifice old platforms.
'It “creeping in” to something Rust can’t support won’t happen'
The exact same argument could be made for future architectural changes, as OpenBSD/FreeBSD/NetBSD build requirements will now beat Polyglot Linux in every metric of porting complexity moving forwards.
Agreed, that is why "Exception being when there is really no way around C,..."
The first startup I worked at, back in the crazy 2000's, we were replicating AOLServer in our own way, and all the Tcl extension modules were written in C, for multiple reasons, including that writing portable C across several UNIX variants was already complicated enough, e.g. the HP-UX aC we were using was still not C89 compliant, no need to add C++ into the picture.
The hobby kernel I work on is only 68KiB in size, and won't work with most von Neumann machines. Probably a waste of time, but a fun odd architecture given the relative simplicity. =3
And yet, the powers that be understand that, and have reasoned that the upside of keeping new code with bugs out of the kernel is way too attractive to ignore.
At my last job at a FAANG we had an Android app in Kotlin, and in all their wisdom the management decided to jump on the hip new thing, React Native, and start coding new/certain features in React Native.
Multiple years later, what was the state of things? We had a portion of the codebase in Kotlin with dedicated native/Kotlin developers, and a portion of the codebase in RN with dedicated RN/JS developers.
Any time there's a bug it's a constant shuffle between the teams over who owns it, which part of the code, native or JS, the bug is coming from, and who's responsible for it. A lot of the time nobody even knows, because each team is only familiar with part of the app now.
The teams silo themselves apart. Each team tries its best to hold on to the codebase: the native team tries to prevent the JS team from making the whole thing JS, the JS team tries to convert as much to JS as possible. The native team argues why JS features aren't good, the JS team argues the benefits over writing in native. Constant back and forth.
Now, no team has a holistic view of how the app works. There's massive chunks of the app that some other team owns and maintains in some other language. The ability to have developers "own" the app, know how it works, have a holistic understanding of the whole product, rapidly drops.
Every time there's a new feature there's an argument about whether it should be native or RN. Native team points out performance and look-and-feel concerns, RN team points out code sharing / rapid development benefits. Constant back and forth. Usually whoever has the most persuasive managers wins, rather than on technical merit.
Did we end up with a better app with our new setup, compared to one app, written in one language, with a team of developers that develop and own and know the entire app? No, no I don't think so.
Feels like a pretty parallel situation to Rust/C there.
Other than the choice problem of deciding what language to build new features in (which needs a clear policy), I don’t see why maintaining a mixed language codebase HAS to be terrible.
In my current job, also at FAANG, my team (albeit SRE team, not dev team), owns moderately sized codebases in C++, Go, Python and a small amount of Java. There are people “specialised” in each language, but also everyone is generally competent enough to at least read and vaguely understand code in other languages.
Now of course sometimes the issue is in the special semantics of the language and you need someone specialised to deal with it, but there’s also a large percentage which is logic problems that anyone should be able to spot, or minor changes which anyone can make.
The key problem in the situation you described seems to be the dysfunction in the teams about arguing for THEIR side, vs viewing the choice of language as any other technical decision that should be made with the bigger picture in mind. I think this partly stems from unclear leadership of how to evaluate the decision. Ideally you’d have guidance on which to prioritise between rapid development and consistency to guide your decisions and make your language choice based on that.
As your codebase scales beyond a certain point, siloing is pretty inevitable and it is better to focus on building a tree of systems and who is responsible for what. However that doesn’t absolve especially the leads from ONLY caring about their own system. Someone needs to understand things approximately to at least isolate problems between various connected systems, even if they don’t specialise in all of them.
Out of the “FAANG” list, we can rule out Apple for obvious reasons and Amazon because it’s evident from the poor usability of their mobile apps that they have zero native code.
Does Google use RN? Seems unlikely with their Flutter stack.
Was this at Meta? I doubt the iOS FB app and Insta are using RN so that must leave FB messenger?
> React Native is used in multiple places across multiple apps in the Facebook family including a top level tab in the main Facebook apps. Our focus for this post is a highly visible product, Marketplace.
While I think your points about some of the difficulties that arise in multi-language/framework projects is fair, I sort of roll my eyes whenever someone frames Rust as something like the "hip new thing".
The Linux kernel's first "release" was in 1991, hit 1.0 in 1994, and arguably the first modern-ish release was in 2004 with the 2.6 kernel. Rust's stable 1.0 release was in 2015, roughly a decade ago. There are people in the workforce now who were in middle school when Rust was first released. Since then, it has seen 85 minor releases and three follow-on editions, and it has built both a community of developers and gotten institutional buy-in from large orgs in business-critical code.
Even if you take the 1991 date as the actual first release, Rust as a stable language has existed for nearly a third of Linux's public development history (and of course it had a number of years of development prior to that). In that framing, I think it's a little unfair to put it in the "hip new thing" box.
I've been doing this for over 20 years and it's the first I've heard of this "golden rule". I guess we've all been doing it wrong...writing our backends (pick your poison), frontends (TS/JS) and queries (SQL) in a variety of languages forever.
I've mostly seen language mixing in frontend. Backends seem to end up either being completely ported to a new (compatible) language, or the experimental new languages get ported back. Perhaps frontend developers are just more versatile because they have to be, with frameworks and the base spec constantly shifting under their feet.
Even many backend devs seem to shy away from things like SQL because they're not too comfortable with it. Which isn't bad per se, it's very easy to make a small mistake in a query that crushes the database, just a personal observation of mine.
In 30 years as a backend engineer I’ve never worked in a single language codebase.
The idea that there is some rule that you don’t mix languages seems like absolute nonsense. If someone suggested to me that it was _possible_ I’d be extremely curious what wild tradeoffs they were making to get there.
I think it makes sense to have a preference for a single language in code bases when all of your developers only have one language in common and are not interested in learning any more languages in the future. That doesn't necessarily make it a golden rule.
However, in my work I've seen plenty of developers with all manner of interests and experiences align only on one or two languages, and if that's your company's talent pool, single language code bases seem like a good idea.
Of course this skips over all the usage of scripting languages (makefile/bash/Python/XML) which in my experience are seen as quirks of build tooling rather than part of the product.
There's also complementary vs competitive: C++/Python (PyTorch, ITK, ROS) or Go/JS (the default web stack) aren't going to quibble over what belongs in what language; React/Swift or C/Rust codebases have no such natural partition.
And if you look how that mess started out you had cross site scripting on the frontend because html allowed you to inject more javascript from everywhere and SQL injection on the backend because you had to translate your input from one language to another with tools that went out of their way to interpret data as commands.
The modern web is a gigantic mess with security features hacked on top of everything to make it even remotely secure and the moment it hit the desktop thanks to electron we had cross site scripting attacks that allowed everyone to read local files from a plugin description page. If anything it is the ultimate proof how bad things can go.
I have worked on lots of cross-language codebases. While it's extremely useful to have experts in each language or part, one can meaningfully contribute to parts written in other languages without being an expert. Certainly programmers on the level of kernel developers should readily be able to learn the basics of Rust.
There’s lots of use cases for shared business logic or rendering code with platform specific wrapper code, e.g. a C++ or Rust core with Swift, Kotlin, and TypeScript wrappers. Lots of high level languages have a low level API for fast implementations, like CPython, Ruby FFI, etc. The other way around lots of native code engines have scripting APIs for Lua, Python, etc.
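For illustration, here's roughly what the Rust side of such a shared core tends to look like (the function name and signature are made up): a plain C-ABI export that the Swift/Kotlin/Python wrapper can call through its normal FFI mechanism.

    // Hypothetical core function exported with a C ABI, so Swift, Kotlin
    // (via JNI/NDK), or Python (via ctypes/cffi) can call it.
    #[no_mangle]
    pub extern "C" fn core_checksum(data: *const u8, len: usize) -> u32 {
        // SAFETY: the caller must pass a pointer valid for `len` readable bytes.
        let bytes = unsafe { std::slice::from_raw_parts(data, len) };
        bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
    }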
I don't know if it's a golden rule or just common sense, where applicable.
If our testing framework is in Python, writing a wrapper so you can code tests for your feature in Perl because you're more comfortable with it is the Wrong way to do it, imo.
But if writing a FluentD plugin in Ruby solves a significant problem in the same infra, the additional language could be worth it.
I’d argue that number of languages is less critical than how well-supported/stable the languages/frameworks chosen are, and whether the chosen tools offer good DX and UX. In simple terms… a project using 5 very-well-supported languages/frameworks (say, C, Rust, Java, Python, modern React/TS) is a lot better off than one with 3 obscure/constantly-shifting ones (say, Scala, Flutter, Groovy).
Anyway, I’m a bit of a Rust fanboy, and would generally argue that its use in kernel and other low-level applications is only a net benefit for everyone, and doesn’t add much complexity compared to the rest of these projects. But I could also see a 2030 version of C adding a borrow checker and more comparable macro features, and Rust just kind of disappearing from the scene over time, and its use in legacy C projects being something developers have to undo over time.
C takes backwards compatibility quite seriously, so anything it adds has to be opt-in (not that it stops some people from trying to propose seriously breaking changes, but c'est la vie).
Something like a borrow checker can be added (and there are people on the C committee willing to see it). However, Rust's "shared xor mutable" rules are probably too strong for C, so you'd need to find some weaker model that solves a good deal of memory safety issues (maybe expressing pointer ownership and some form of invalidation is sufficient, but this is just some spitballing by me). The focus by the interested people right now is mostly around bounds-checking rather than a borrow checker.
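For readers who haven't bumped into it, a tiny sketch of what "shared xor mutable" means in practice; the commented-out line is exactly the kind of thing the borrow checker rejects:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];   // shared borrow of v
        // v.push(4);        // rejected: cannot mutate v while a shared
        //                   // borrow of it is still alive (error E0502)
        println!("{first}"); // shared borrow used here, so it is still alive
    }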
In theory, they could. In practice, I would be shocked. As an example, Rust's memory safety rules are fundamentally built on top of generics, which C does not have. So in order to copy what Rust does, they'd need to do that first, and that's a massive change to the language.
C++ does have generics already (well templates, but you know) and so it'd be an easier lift there, but it's still a lot of work. https://safecpp.org/draft.html explains how this would be possible.
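A minimal sketch of what "built on top of generics" means here: lifetimes are themselves generic parameters, so even a simple signature like the one below has no direct equivalent in C.

    // 'a is a generic lifetime parameter: the result is tied to the inputs,
    // and every caller is checked against that contract.
    fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
        if a.len() >= b.len() { a } else { b }
    }

    fn main() {
        let s = String::from("borrow");
        println!("{}", longest(&s, "checker"));
    }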
As I understand it, doing it while maintaining compatibility with old code is not possible. You'd have to add new required syntax for annotations and such.
Not really possible, I think. C is a language that's basically built on memory aliasing and pointer arithmetic: every variable is represented as a location in memory, but there is no guarantee that each memory location is only represented once (there can be many variables pointing to the same memory address). The Rust borrow checker needs pretty much the opposite guarantee: every declared variable has full control over its memory location, and if two pieces of code need to share memory there needs to be an explicit protocol for delegating access.
And it's not like pointers are a rare occurrence in C. This mechanism is used pretty much everywhere: accessing array values, parameter pass-by-reference, function output parameters, string manipulation. There's no concept of function purity either, so no way to guarantee in the function definition that a function cannot outstep its bounds. Sure, there are certain safe coding conventions and rules about what you can or cannot do in a C program, but fundamentally proving that a certain memory location is only accessed through a certain variable is only possible by just running the program exhaustively -- or by confining yourself to a subset of C.
But when you only allow a subset of C, it's no longer "C with a borrow checker", especially given the ubiquitous use of pointers for standard language features. It quickly becomes "we hobbled C because we need more guarantees". To take a quote from the D manual [0], to guarantee memory safety you need to disallow:
- Casts that break the type system.
- Modification of pointer values.
- Taking the address of a local variable or function parameter.
> C is a language that's basically built on memory aliasing
Additionally, C does actually have aliasing rules, but many projects, including the kernel, turn them off. Linus in particular does not think they're worthwhile.
These rules are different from Rust's; Rust also rejected these particular rules.
> To take a quote from the D manual [0], to guarantee memory safety you need to disallow
Just to be clear, this is the list for D, not in general. Rust is fine with you taking the address of a local variable or function parameter.
If you're referring to e.g. gcc's -f[no-]strict-aliasing option, then that's more about type compatibility than about limiting the scope of memory aliasing in general. If you mean something else, I'm interested to hear more.
> this is the list for D, not in general
Yes, I know. But it's the first authoritative source I could think of on memory safety in C-like languages. I don't think the list is wrong for C proper, just probably not exhaustive.
> Rust is fine with you taking the address of a local variable
Yes! But circling back to the earlier point: in Rust you can do this specifically because the language has well-defined lifetime semantics on variables and ownership. And as such, Rust can guarantee that a) a pointer to a memory location does not outlive the scope of the memory allocation itself, and b) when two variables/pointers refer to the same memory location, there is a compiler-enforced protocol for accessing and mutating that memory.
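To make (a) concrete, a toy example: borrowing a local is fine, while the commented-out function is the variant the compiler rejects, because the reference would outlive the local it points to.

    fn double(x: &i32) -> i32 {
        *x * 2 // taking and using the address of a caller's local is fine
    }

    // fn dangle() -> &i32 {
    //     let local = 42;
    //     &local // rejected: the reference would outlive `local`
    // }

    fn main() {
        let n = 21;
        println!("{}", double(&n));
    }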
It's an interesting discussion. There's always a divide when you slowly migrate from one thing to another.
What makes this interesting is that the difference between C code and Rust code is not something you can just ignore. You will lose developers who simply don't want to, or can't, spend the time to get into the intricacies of a new language. And you will temporarily have a codebase where two worlds collide.
I wonder how in retrospect they will think about the decisions they made today.
Most likely Rust will stay strictly on the driver side for several years still. It's a very natural Schelling fence for now, and the benefits are considerable, both in improving driver quality and making it less intimidating to contribute to driver code. It will also indirectly improve the quality of core code and documentation by forcing the many, many underspecified and byzantine API contracts to be made more rigorous (and hopefully simplified). This is precisely one of the primary things that have caused friction between RfL and the old guard: there are lots and lots of things you just "need to know" in order to soundly call many kernel APIs, and that doesn't square well with trying to write safe(r) Rust abstractions over them.
I don't think changing to Rust code completely is attainable. I guess some older or closer-to-the-metal parts will stay in C, but parts seeing more traffic and evolution will be more rusty after some time, and both will have their uses and their islands inside the codebase.
gccrs will allow the whole thing to be built with the GCC toolchain in a single swoop.
If banks are still using COBOL and FORTRAN here and there, this will be the most probable possibility in my eyes.
> I guess some older or more closer to the metal parts will stay in C
I suppose the biggest reason is that C programmers are more likely than not trained to kinda know what the assembly will look like in many cases, or to have a very good idea of how an optimizing compiler will optimize things.
This reminds me I need to do some non-trivial embedded project with Rust to see how it behaves in that regard. I'm not sure if the abstraction gets in the way.
After writing some non-trivial and performance-sensitive C/C++ code, you get a feeling for how that code behaves on the real metal. I have that kind of intuition, for example. I never had to dive to the level of generated ASM, but I can get ~80% of theoretical IPC just by minding what I'm doing in C++ (minimal branching, biasing branches towards a certain side, etc.).
So, I think if you do the same thing with Rust, you'll have that intuition, as well.
I have a friend who writes embedded Rust, and he said it's not as smooth as C, yet. I think Rust has finished the first 90% of its maturing, and has the other 90%.
I write embedded Rust full-time and can say there's nothing that I can do in C that I can't do in Rust. Sure, the C tools/frameworks are a lot more mature, but a combination of the PAC for register access (maybe a bit of community-maintained HAL) and a framework like RTIC is pretty much all I need.
I am not convinced, given the amount of heavy lifting that the Rust type system does, that rusty Rust is nearly as brain-compilable as C. However, you can write the equivalent of C in many languages, and Rust is one of them. That kind of code is easy to compile in your head.
It's not brain-compilability, it's getting used to what that specific compiler does with the code you brain-compile.
So, I have a model for my code in my brain, and this code also has a real-world behavior after it's compiled by your favorite toolchain. Do both enough times, and you'll have a feel for both your code and the compiler's behavior with your code.
This feeling breaks when you change languages. I can brain-compile Go, for example, but the compiler adds other things like GC and escape analysis (moving local variables to the heap if they would otherwise be accessed after the function returns). Getting used to this takes time. Same for Rust.
> I suppose the biggest reason is that C programmers are more likely than not trained to kinda know what the assembly will look like in many cases, or have a very good idea of how an optimizer compiler will optimize things
This is the only way Hellwig's objection makes any kind of sense to me. Obviously, intra-kernel module boundaries are not REST APIs, where providers and clients would be completely separated from each other. Here I imagine that both the DMA module and its API consumers are compiled together into a monolithic binary, so if assumptions about the API consumers change, this could affect how the module itself is compiled.
I've done a non-trivial embedded project in Rust (quadcopter firmware). The language doesn't get in the way, but I had to write my own tooling in many areas.
Many people still have the mistaken belief that C is trivial to map to assembly instructions and thus has an advantage over C++ and Rust in areas where understanding that is important, but in practice the importance is overstated, and modern C compilers are so capable at optimising at high optimisation levels that many C developers would be surprised at what was produced if they looked much further than small snippets.
Like, half the point of high-level systems languages is to be able to express the _effects_ of a program and let a compiler work out how to implement that efficiently (C++ famously calls this the as-if rule, where the compiler can do just about anything to optimise so long as it behaves, in terms of observable effects, as if the optimisation hadn't been performed; C works the same). I don't think there are really any areas left from a language perspective where C is more capable than C++ or Rust at that. If the produced code must work in a very specific way then in all cases you'll need to drop into assembly.
The thing Rust really still lacks is maturity in embedded settings, and by that I mostly mean toolchains for embedded targets being fiddly to use (or nonexistent), and some useful abstractions not existing for safe Rust in those settings (though it's not like those exist in C to begin with).
Often the strong type system of C++ means that if you take C code and compile it with a C++ compiler, it will run faster. Though part of the reason it is faster is that C++ allows the compiler to make assumptions that might be false, so there is a (very small, IMHO) chance that your code will be wrong after those optimizations. C++ often has better abstractions that, if you use them, will allow C++ to be faster than C can be.
If Rust doesn't also compile to faster code than C because of its better abstractions, that should be considered just a sign of compilers needing more work on the optimizer, not that Rust can't be faster. Writing optimizers is hard and takes a long time, so I'd expect Rust to be behind.
Note that the above is about real-world benchmarks, and is unlikely to amount to even a 0.03% difference in speed: it takes very special setups to measure these differences, while simple code changes can easily be worth several-hundred-percent differences. Common microbenchmarks generally are not large enough for the type system to make a difference, and so they often show C as #1 even though on real-world problems it isn't.
Rust is a systems programming language by design; bit-banging is totally within its remit, and I can't think of anything in the kernel that Rust can't do but that C could. If you want really, really tight control of exactly which machine instructions get generated, you would still have to go to assembler anyway, in either Rust or C.
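As a rough sketch of what register poking looks like in Rust (the address and bit layout below are invented; on real hardware they come from the datasheet): volatile accesses give you the load/store-exactly-here behavior, and anything tighter than that is inline asm in either language.

    use core::ptr::{read_volatile, write_volatile};

    // Hypothetical memory-mapped control register and bit, for illustration only.
    const CTRL_REG: *mut u32 = 0x4000_0000 as *mut u32;
    const ENABLE_BIT: u32 = 1 << 3;

    /// Set the enable bit without disturbing the others.
    /// SAFETY: the caller must ensure CTRL_REG really is a mapped MMIO register.
    unsafe fn enable_peripheral() {
        let val = read_volatile(CTRL_REG);          // volatile: never elided or merged
        write_volatile(CTRL_REG, val | ENABLE_BIT); // classic read-modify-write
    }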
That is the exact reason why it was created in the first place, as a portable macro assembler for UNIX, and it should have stayed there, leaving room for other stuff in userspace, like Perl/Tcl/... on UNIX, or Limbo on Inferno. As the UNIX authors revised their ideas of what UNIX v3 should look like, there was already a first attempt with Alef on UNIX v2, aka Plan 9.
Or even C++, which many forget was also born at Bell Labs in the UNIX group, the main reason being that Bjarne Stroustrup never wanted to repeat his Simula-to-BCPL downgrade ever again. Thus C with Classes was originally designed for a distributed computing research project at Bell Labs on UNIX, Stroustrup being determined not to repeat the previous experience, this time with C instead of BCPL.
I'm not sure what you mean by "leaving place for". There was a place for Perl and Tcl on Unix. That's how we wound up with Perl and Tcl.
If you mean that C should have ceded all of user-space programming to Perl and Tcl, I disagree strongly. First, that position is self-contradictory; Perl was a user-space program, and it was written in C. Second, C was much more maintainable than Perl for anything longer than, say, 100 lines.
More fundamentally: There was a free market in developer languages on Unix, with C, Perl, Awk, Sed, and probably several others, all freely available (free both as in speech and as in beer). Of them, C won as the language that the bulk of the serious development got done in. Why "should" anything else have happened? If developers felt that C was better than Perl for what they were trying to write, why should they not use C?
"Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization...The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue.... Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels? Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve. By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are ... basically not taught much anymore in the colleges and universities."
-- Fran Allen interview, Excerpted from: Peter Seibel. Coders at Work: Reflections on the Craft of Programming
C's victory is more related to there not being any other compiled language in the box than to any marvelous technical capability of its own; so, in worse-is-better fashion, use C.
Even more so when Sun started the trend of making UNIX development tooling a paid extra, and it only contained the C and C++ compilers; for additional compilers like Fortran and Ada, or an IDE, it was a bit more extra on top.
Other UNIX vendors were quite fast to follow suit.
But I've seen that quote before (I think from you, even). I didn't believe it then, and I don't believe it now.
There is nothing about the existence of C that prevents people from doing research on the kind of problem that Fran Allen is talking about. Nothing! Those other languages still exist. The ideas still exist. The people who care about that kind of problem still exist. Go do your research; nobody's stopping you.
What actually happened is that the people who wanted to do the research (and/or pay for the research) dried up. C won hearts and minds; Fran Allen (and you) are lamenting that the side you preferred lost.
It's worth asking: even if Ada or Algol or whatever were extra cost, why weren't they worth the extra cost? Why didn't everybody buy them and use them anyway, if they were that much better?
The fact is that people didn't think they were enough better to be worth it. Why not? People no longer thought that these automatic optimization research avenues were worth pursuing. Why not? Universities were teaching C, and C was free to them. But universities have enough money to pay for the other languages. But they didn't. Why not?
The answer can't be just that C was free and the other stuff cost. C won too thoroughly for that - especially if you claim that the other languages were better.
Worse is better, and most folks are cheap: if lemons are free and juicy sweet oranges have to be bought, they will drink bitter lemonade no matter what; eventually it will taste great.
Universities are always fighting with budgets, some of them can't even afford to keep the library running with good enough up to date books.
> What actually happened is that the people who wanted to do the research (and/or pay for the research) dried up. C won hearts and minds; Fran Allen (and you) are lamenting that the side you preferred lost.
Eh, sort of. The rise of C is partially wrapped up in the rise of general-purpose hardware, which eviscerates the demand for optimizers to take advantage of the special capabilities of hardware. An autovectorizer isn't interesting if there's no vector hardware to run it on.
But it's also the case that when Java became an important language, there was a renaissance in many advanced optimization and analysis techniques. For example, alias analysis works out to be trivial in C: either you obviously prove pointers don't alias based on quite local information, or your alias analysis (no matter how much you try to improve its sensitivity) gives up and conservatively puts them in the everything-must-alias pile; there isn't much of a middle ground.
Directly programming hardware with bit-banging, shifts, bitmasks and whatnot. Too cumbersome in ASM to do in large swaths, too low level for Rust or even for C++.
Plus, for that kind of thing you have "deterministic C" styles which guarantee things will be done your way, all day, every day.
For everyone answering: this is what I understood from chatting with people who write Rust in amateur and pro settings. It's not coming from some "Rust is bad" bias. The general consensus was that C is closer to the hardware and allows handling the hardware's quirks better, because you can do "seemingly dangerous" things that the hardware needs done to initialize successfully. Older hardware is finicky, just remember that. Also, for anyone wondering: I'll start learning Rust the day gccrs becomes usable. I'm not a fan of LLVM, and have no problems with Rust.
Two reasons I can think of off the top of my head.
The assembly output from C compilers tends to be more predictable by virtue of C being a simpler language. This matters when writing drivers for exotic hardware.
Sometimes, to do things like make a performant ring buffer (without VecDeque), you need to use unsafe Rust anyway, which IMO is just taking the complexity of the Rust language without any of the benefit.
I don’t really think there’s any benefit to using C++ over rust except that it interfaces with C code more easily. IMO that’s not a deal maker.
> The assembly outputted from C compilers tend to be more predictable by virtue of C being a simpler language.
The usual outcome of this assumption is that a user complains to the compiler developers that it doesn't produce the expected assembly code, and the complaint is ignored because no particular assembly output was ever guaranteed.
This is especially true for the kinds of implicit assembly guarantees people want when working with exotic hardware. Compilers will happily merge loads and stores into larger load/stores, for example, so if you need to issue two adjacent byte loads as two byte loads and not one 16-bit load, then you should use inline assembly and not C code.
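The same caveat exists in Rust, and the lightest-weight escape hatch short of inline asm is a volatile access, which the compiler may not merge or elide; a small sketch:

    use core::ptr::read_volatile;

    /// Issue two separate byte loads from adjacent addresses.
    /// Volatile reads may not be fused into a single 16-bit load.
    /// SAFETY: `p` and `p.add(1)` must both be valid for reads.
    unsafe fn two_byte_loads(p: *const u8) -> (u8, u8) {
        (read_volatile(p), read_volatile(p.add(1)))
    }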
I'm not saying every C compiler is always perfectly predictable, but by virtue of it being a simpler language it should always be more predictable than Rust, barring arcane optimizations.
I do agree that if someone actually cares about the assembly they should be writing it by hand.
> I’m not saying every C compiler is always perfectly predictable
No C compiler is predictable. First, there is the compiler magic of optimization.
Then you have Undefined Behavior, which in C is almost a guarantee that you'll experience inconsistent behavior between compilers, targets, optimization levels, and the phases of the moon.
In Rust, use .iter() a lot to avoid bounds checks, or if you want auto-vectorization use a lot of fixed-length arrays, and look at how LLVM auto-vectorizes it. It takes getting used to, but hey, so does literally every language if you care about the SOURCE -> ASSEMBLY translation.
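For example, a sketch of the two styles meant here; the indexed form carries bounds checks the optimizer has to prove away, the iterator form has none to begin with:

    // Indexed loop: each `data[i]` carries a bounds check unless the optimizer
    // can prove `i < data.len()` on its own.
    fn sum_indexed(data: &[u32]) -> u32 {
        let mut total = 0u32;
        for i in 0..data.len() {
            total = total.wrapping_add(data[i]);
        }
        total
    }

    // Iterator version: no indexing, so no bounds checks to elide; with a
    // fixed-length array (&[u32; 16]) LLVM vectorizes this shape very readily.
    fn sum_iter(data: &[u32]) -> u32 {
        data.iter().fold(0u32, |acc, &x| acc.wrapping_add(x))
    }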
> The assembly outputted from C compilers tend to be more predictable by virtue of C being a simpler language.
That doesn't seem to be true, not in the presence of UB, different platforms and optimization levels.
> Sometimes to do things like make a performant ring buffer (without vec dequeue) you need to use unsafe rust anyway, which IMO is just taking the complexity of the rust language without any of the benefit.
If you write a data structure in Rust, it's expected to wrap the unsafe fiddly bits into a safer shell and provide unsafe access as needed. Sure, the inner workings of Vec, VecDeque, and Ring Buffers are unsafe, but the API used to modify them isn't (modulo any unsafe methods that have their prerequisite for safe access stated).
The idea is to minimize the amount of unsafe, not completely eradicate it.
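A toy sketch of that shape, assuming a fixed capacity: the uninitialized storage and index bookkeeping are the unsafe fiddly bits, and the public methods are the safe shell (a real implementation also needs a Drop impl so unread elements aren't leaked).

    use std::mem::MaybeUninit;

    pub struct Ring<T, const N: usize> {
        buf: [MaybeUninit<T>; N],
        head: usize, // next slot to pop
        len: usize,  // number of initialized, not-yet-popped elements
    }

    impl<T, const N: usize> Ring<T, N> {
        pub fn new() -> Self {
            Self { buf: std::array::from_fn(|_| MaybeUninit::uninit()), head: 0, len: 0 }
        }

        /// Safe: refuses to overwrite when full, so no live slot is clobbered.
        pub fn push(&mut self, value: T) -> Result<(), T> {
            if self.len == N {
                return Err(value);
            }
            let tail = (self.head + self.len) % N;
            self.buf[tail].write(value);
            self.len += 1;
            Ok(())
        }

        /// Safe: only ever reads slots that `push` initialized.
        pub fn pop(&mut self) -> Option<T> {
            if self.len == 0 {
                return None;
            }
            // SAFETY: `head` indexes a slot initialized by `push` and not yet
            // popped; the head/len bookkeeping above upholds that invariant.
            let value = unsafe { std::ptr::read(self.buf[self.head].as_ptr()) };
            self.head = (self.head + 1) % N;
            self.len -= 1;
            Some(value)
        }
    }

Callers only ever see push/pop; the single unsafe line carries its justification next to it, which is the whole point.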
Rust does OK at this but typically works better with some tooling to make register and bit-flag manipulation look more like normal Rust functions. chiptool and svd2rust do this for microcontroller code using all Rust. The only asm needed is going to be at boot, to set up enough to run Rust (or C).
> I wonder how in retrospect they will think about the decisions they made today.
The decision was not made today, what happens today (or, rather, a few days ago) is Linus calling out a C maintainer going out of his way to screw rust devs. Rust devs have also been called out for shitty behaviour in the process.
The decision to run a Rust experiment is a thing that can be (and is) criticized, but if you allow people to willfully sabotage the process in order to sink the experiment, you will also lose plenty of developers.
Well it's a middle ground between two other realistic extremes, those being "subsystem maintainers must understand and support the Rust bindings to their APIs" and "subsystem maintainers can veto the introduction of Rust bindings to their APIs".
As a C maintainer, you should care how the other side of the interface is implemented even if you're not actively involved in writing that code. I don't think it is reasonable, for software quality reasons, to have a policy where a maintainer can simply pretend the other side doesn't exist.
That's up to the maintainer; if they don't have any knowledge of Rust, then it's better they don't get involved anyway. They're still responsible for designing the best possible C interface to their subsystem, which is what most of the kernel will be interacting with. It puts the burden firmly on the shoulders of the Rust advocates, who believe the task is manageable.
As for your concern about code quality, it's the exact same situation that already exists today. The maintainer is responsible for his code, not for the code that calls it. And the Rust code is just another user.
>They're still responsible for designing the best C interface to their subsystem as possible, which is what most of the kernel will be interacting with.
What if you're in a world where Rust code is either a significant or primary consumer of your interface ... surely as the API designer, you have to take some interest in how your API is consumed.
I'm not saying you become the owner of Rust bindings, or that you have to perform code-reviews, or that you have veto power over the module .. but you can't pretend Rust doesn't exist.
Giving good feedback about Rust<>C bindings requires knowing Rust well. It needs deep technical understanding of Rust's safety requirements, as well as a sense of Rust's idioms and design patterns.
C maintainers who don't care about Rust may have opinions about the Rust API, but that's not the same thing :)
There are definitely things that can be done in C to make Rust's side easier, and it'd be much easier to communicate if the C API maintainer knew Rust, but it's not necessary. Rust exists in a world of C APIs, none of which were designed for Rust.
The Rust folks can translate their requirements to C terms. The C API needs to have documented memory management and thread safety requirements, but that can be in any language.
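As a sketch of how such documented requirements get consumed on the Rust side (the C functions and their rules below are invented for illustration): the documented contract becomes the SAFETY comments, the ownership rule becomes a Drop impl, and the threading rule is reflected in which auto-traits the wrapper ends up with.

    use core::ffi::c_void;

    // Hypothetical C API, with its contract stated in C terms:
    //   foo_alloc() returns NULL or a buffer that must be released exactly once
    //   with foo_free(); the buffer must not be used from two threads at once.
    extern "C" {
        fn foo_alloc(len: usize) -> *mut c_void;
        fn foo_free(p: *mut c_void);
    }

    /// Safe wrapper encoding those rules: single ownership gives
    /// "freed exactly once", and the raw-pointer field keeps the type
    /// !Send/!Sync, matching "no concurrent use".
    pub struct FooBuf(*mut c_void);

    impl FooBuf {
        pub fn new(len: usize) -> Option<Self> {
            // SAFETY: per the documentation, foo_alloc has no preconditions.
            let p = unsafe { foo_alloc(len) };
            if p.is_null() { None } else { Some(FooBuf(p)) }
        }
    }

    impl Drop for FooBuf {
        fn drop(&mut self) {
            // SAFETY: self.0 came from foo_alloc and is freed exactly once here.
            unsafe { foo_free(self.0) };
        }
    }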
That puts far too many chefs in the kitchen and worse(!) dilutes your time and understanding of the part of the code you know well. You need to trust your fellows in other areas of the code to make good decisions without you, and focus on what you know. Let other people do their own job without micromanaging them. Spend your time in your own lane.
Sometimes the other team proves incompetent and you are forced to do their job. However that is an unusual case. So trusting other teams to do their job well (which includes trying something you don't like) is a good rule.
The API is the contract boundary. As long as it is well documented and satisfies its postconditions, it can be implemented in anything. Computing thrives on layers of abstraction like this.
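As a small illustration of that point (an entirely made-up example, nothing kernel-specific): the caller below is written against a documented contract and genuinely cannot tell which implementation sits behind it.

    /// Contract: `checksum` returns the byte-wise sum of `data`, wrapping on
    /// overflow. Any implementation satisfying that postcondition is
    /// interchangeable from the caller's point of view.
    trait Checksummer {
        fn checksum(&self, data: &[u8]) -> u8;
    }

    /// One implementation: a plain loop.
    struct Simple;
    impl Checksummer for Simple {
        fn checksum(&self, data: &[u8]) -> u8 {
            data.iter().fold(0u8, |acc, b| acc.wrapping_add(*b))
        }
    }

    /// Another implementation: same contract, different internals.
    struct Chunked;
    impl Checksummer for Chunked {
        fn checksum(&self, data: &[u8]) -> u8 {
            data.chunks(4)
                .map(|c| c.iter().fold(0u8, |acc, b| acc.wrapping_add(*b)))
                .fold(0u8, |acc, s| acc.wrapping_add(s))
        }
    }

    /// The consumer depends only on the contract, not the implementation.
    fn verify(c: &dyn Checksummer, data: &[u8], expected: u8) -> bool {
        c.checksum(data) == expected
    }

    fn main() {
        let data = b"hello";
        let expected = Simple.checksum(data);
        assert!(verify(&Simple, data, expected));
        assert!(verify(&Chunked, data, expected));
        println!("both implementations satisfy the same contract");
    }

Swap in a third implementation written in another language entirely and, as long as the postcondition holds, the caller never notices.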
Sure, and that's ideal for the maintainers that are willing to do that (and there are several), but for the C devs that just don't care and can't be forced to care, this is a pragmatic compromise. Not everyone has to be involved on both sides.
Yes. This is exactly what it is. It is a "pragmatic compromise" to side-step major internal cultural and philosophical issues (not technical issues). You're basically telling a number of C maintainers that they can simply pretend Rust doesn't exist, even if it may be the case that Rust code is the primary consumer of that API. That's a workable solution, but it isn't an ideal solution - and that's a little sad.
You should care that it is usable, but how they use it should not concern you. If someone wants to use the USB driver to interface with a coin motor and build vibrating underwear, that's none of your business. Your concern is whether your driver works to spec and can be interfaced with.
So if someone wants to write software in Rust that just uses the DMA driver, that should be fine. Linus is entirely in the right.
Yes. And that involves not completely ignoring an entire universe of consumers of your API, *as a general policy*. This is especially true with modules that may have Rust code as the primary consumer of the API.
I admit I don't know what "not ignoring Rust code" means for a maintainer in practice, and I agree it shouldn't mean that the C maintainer code-reviews the Rust bindings, or has veto power over the entire Rust module, or vets the architecture or design of the Rust module, or is on the Rust module mailing list. But it also shouldn't be that, as a *general policy*, the C maintainer takes no interest in how the API is consumed by Rust and, worse, pretends Rust doesn't exist.
>So if someone wants to write software in Rust that just uses the DMA driver, that should be fine.
I think there's a fundamental disconnect here and I'm not sure if I quite see it.
It seems to me as if you're speaking about a hypothetical scenario where Rust needs something from the interface that isn't required by other languages. And you can't articulate what that might be because you can't think of an example of what that would look like. And also, in this scenario, Rust is the primary user of this driver interface.
But if that's the case, it's getting really close to "if things were different, they'd be different". If that's not the case, then I don't understand your case.
There's nothing wrong with the interface. Rust can use it just fine; it doesn't do anything C code wouldn't. They're not even asking for anything, from what I can see. The person who maintains the DMA driver doesn't want Rust _using_ his interface; he's rejecting PRs where Rust code interfaces with his driver.
The closest analogy I can think of is he wrote a book, but he doesn't want left-handed people to read it.
The API maintainer should only be concerned with how the API is consumed insofar as it is actually consumable and doesn't cause unintended side effects. And neither of those should be affected by the language used to consume the API.
Punching a hole through the interface is a bad idea, and probably the worst thing you can do. You should just ignore the other side of the interface, and if that doesn't work, fix the issue on that side (yourself, or by finding whoever owns the code) instead of trying to work around it from your end. And if a change always requires touching both ends, that's a hint the interface isn't designed properly (it glues the two ends together instead of separating them).
I hate it so much when people assume they're smart and work around an issue on the other end of an interface. It always ends up that you need to understand both the workaround and the original bug, or you can't even read the code.
I get the feeling that, no matter how slowly Linus goes, this is going to lead to a split. If Linus eventually pushes through Rust, the old guard will fork off a C-only version, and that won't be good.
Seems highly unlikely. Note that Hellwig is the only major remaining independent Linux kernel developer. All the rest have salaries paid by the Linux Foundation, Red Hat, Google, et cetera. They are highly unlikely to take an action that threatens their salary.
And Hellwig works as a contractor, he's not a volunteer in the same way that Con Kolivas was. Hellwig isn't truly independent either.
There is nothing ambiguous here; if anything, Torvalds is simply enforcing common sense: Rust devs cannot be divas, and C devs cannot be saboteurs.
If anything, the whole kerfuffle is astounding for the lack of common sense, and sense of camaraderie, among those kernel devs. It should not take a dictator to enforce the obvious, but in this case it seems like it does.
How is this an ambiguous stance? "Subsystem maintainers don't have to allow Rust in, but other subsystems can and will build their own bindings to your code" seems fairly clear-cut.
Linus said that non-rustacean C programmers cannot veto rust code, but he did not clearly state how it works going the opposite way. It was rustacean-proposed changes on the C side that led to this drama. I don't see much progress here.
I don't think that's accurate. It was adding Rust DMA code that was to be shared between Rust drivers that was the spark. The C code was unchanged AFAIK.
> It was rustacean-proposed changes on the C side that led to this drama.
Why would you say something like that?
From the e-mail [0] the article is based on:
> The fact is, the pull request you objected to DID NOT TOUCH THE DMA LAYER AT ALL.
> It was literally just another user of it, in a completely separate subdirectory, that didn't change the code you maintain in _any_ way, shape, or form.
I can see only one viable path for Rust folks: Fork the kernel and make whatever mods are needed. It's not Linux anymore, but that's how Linux started from Unix all those years ago.
No, that's not how Linux started. No fork from Unix, if that's the comparison you were making. Linux started as a completely independent project, a multi-tasking kernel printing AAA...BBB, and, as it progressed, working towards being basically Unix compatible. But it was not a fork from anything.
Yes, my comment was poorly phrased, but the spirit is valid. Linux could not have come from Unix, as Unix OSes were closed source at the time, so it was not a fork. However, Linux was intended as an open-source variant of Unix from its inception; Torvalds aimed for POSIX and ABI compatibility from the start, and it was one of his stated goals for developing the kernel.
I think that's not viable. To make that work you'd have to keep up with the kernel for years, probably more than a decade, to reach some kind of critical mass and become influential enough to be capable of separating from it and driving decisions that run counter to it. That's not even to mention the loss it would be to have these capable teams (rust proponents for the kernel and extremely experienced maintainers and contributors who want nothing to do with it) working in parallel at best and in partial opposition at worst, when they could work together.
This is much more of a manpower and money problem than it is a technical one. Of course it's possible to fork Linux and rewrite it in Rust. But who would spend all that time and energy doing that without the Linux Foundation's funds and expertise? You'd probably burn out within a few years, before ever substantially converting the code base.
What the Rust community is trying to do is antithetical to the whole free software movement. They want to impose a new language onto an existing body of maintainers who have limited incentives to change.
The "free" part in free software is not just free in beer, it's also free in freedom. That little bit gets forgotten. People work on it because they want to, not because they have to. If a developer does not want to use Rust, they can and should not be forced to. It does not matter if Rust is objectively safer, or better, or any of the purported arguments. Forcing it eliminates the freedom of choice.
The Rust folks should make their own kernel and OS. Let it compete directly with Linux. In open source, this is the way.
Since Con Kolivas resigned in 2007 there have been no volunteers making major contributions to the Linux kernel. Everybody is doing it as a job. So they are working on it because they have to, assuming they want to continue to get paid.