Linux kernel in-tree Rust support (kernel.org)
468 points by littlestymaar on July 11, 2020 | 491 comments



Rust in Linux will be fantastic except for compile time. Rust (and the world) needs a Manhattan Project to build a fast Rust compiler (where by "fast" I mean both efficient and scalably parallel when compiling a workspace with many crates and long dependency chains).

To put this in perspective, though: increasing the number of people paid to work on the Rust compiler by 10x would only mean hiring about 25 people. Compared to the size of the projects that are starting to depend on Rust, that's a rounding error.


GCC 10 supports Dlang directly. (The support is still immature and it's being updated to the newer version of Dlang.)

Dlang compiles quickly not because the language is simple, but because the compiler authors care about compilation speed:

Lexer skips four spaces at once https://github.com/dlang/dmd/pull/11095

Optimize core logic to 7x compilation speed of language feature https://github.com/dlang/dmd/pull/11303

Cache rarely used types to improve memory locality https://github.com/dlang/dmd/pull/11363


It's important to note that run-time tradeoffs were made in the interest of compilation speed. When it comes to optimizing code emitted from D source, you need to leverage the alternate implementations from GCC or the LLVM-based LDC...which don't compile nearly as fast as DMD, the reference D compiler.


For DMD's backend, that is true. It's like Rust's MIR project, but finished.

However, the commits I linked to describe front-end optimizations. LDC (the LLVM-based D compiler) and GCC's GDC share DMD's front-end.

I've seen some generated-code optimizations for DMD's back-end as well, which could yield lightning-fast compiles that still produce decent code. (Maybe in the future one could write a JIT that first compiles with DMD, then tiers up to LDC or GDC.)


You're right, I didn't distinguish between overall compile times and the front-end time taken, which is what you were focusing on. For the Rust toolchain, the (vast, IIRC) majority of compilation time is in LLVM's backend. So, to me, front-end optimizations are interesting, but not the highest priority for Rust specifically. DMD seems to be a great thing to compare against, so here's my follow-up question: what's the proportion of front-end to back-end time in DMD?


The D language uses GC, so it's unlikely to be acceptable for the Linux kernel. (That's not to say that you can't write a kernel in D, just that the Linux kernel maintainers don't want to write a kernel that requires GC.)

Rust's ownership model, which allows you to write straightforward, non-leaky, non-use-after-free-filled code without a GC, is quite complex at compile time. It's definitely possible to make it faster, but it's definitely a thing Rust does that D doesn't do.
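
As a minimal illustration of the kind of checking being described (toy functions, just a sketch):

    fn consume(v: Vec<i32>) {
        println!("consumed {} elements", v.len());
    }

    fn borrow(s: &str) -> usize {
        s.len()
    }

    fn main() {
        let data = vec![1, 2, 3];
        consume(data); // ownership moves into `consume`

        // Uncommenting the next line is a compile-time error (use after
        // move) -- this is how use-after-free gets ruled out with no GC:
        // println!("{:?}", data);

        let s = String::from("hello");
        let len = borrow(&s); // shared borrow: `s` stays usable
        println!("{} has length {}", s, len);
    }

All of that analysis happens at compile time, which is part of why it isn't free.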


D has a @nogc attribute that can be applied to functions and taints everything they call down the line. It's pretty neat. I'm a big fan of D, but I still think I wouldn't write a kernel in D. I find it much more suited to quicker, smaller programs, kind of like Python but with better typing. The script support for D is also really neat, and I've used that instead of Perl a few times.


When you turn off D's GC, then it's no different from C or C++. So yes, it can be done. At the last D conference, someone presented on building kernel drivers in D.


D is definitely trying to support it as well, how far they will get remains to be seen.


Nick Nethercote and others have done a lot of work with traditional profiling and optimization like that. They've done a great job, but for my project the fundamental problem seems to be limited parallelism, which I think requires more fundamental design changes (perhaps getting away from the crate-at-a-time approach of rustc).

Though TBH I'm not an expert on the compiler and I'm not confident I know the root problems or the solutions.


Performance of the generated code is always the #1 priority in both Rust and LLVM; performance improvements are rarely declined. That prioritization needs to change radically to get a much faster compiler.


Prioritization is not that straightforward. Every open source project has different sub-groups with different goals. While we do care a ton about generated code, there is also movement by other folks to work on the compiler speed problem. It has been a lot of work to come this far, and it will be even more work to get to a better place, but that work is underway. That's not even to mention that since rustc is written in Rust, work to make generated code faster can also make the compiler faster.

I'm referring to the rust-analyzer work, which is effectively an outside-in re-write of the compiler, if you squint at it right.

At the end of the day, rustc is a very large codebase, and has already undergone major architectural changes, and will be undergoing even more. It is not fast to steer a large ship in a new direction, but that is what is happening.

See https://pbs.twimg.com/media/ENC1ls6XYAEdKkv?format=jpg&name=... for a time I ran https://github.com/erikbern/git-of-theseus on the rustc codebase. Thank goodness Rust is such a good refactoring language.




You are talking about incremental improvements. Radical change is needed: not just using Cranelift for debug builds, but something like that for all builds. The current LLVM model does not work well for delivering a fast compiler (for optimized, non-debug builds).


You can deliver radical improvements in an incremental fashion. The move towards a query-based architecture, and the rust-analyzer work, are both fundamental, radical improvements.

If anything, the cranelift stuff is less radical. Doesn't require any architecture changes, "only" involves implementing the backend API.


then there's also ongoing work in rustc_codegen_cranelift, which [as of 2 weeks ago] can now compile rustc:

https://github.com/bjorn3/rustc_codegen_cranelift/issues/743...


Yeah to be clear, I am very excited about cranelift too!


Sure, that's seriously cool, but I don't see how it changes the material balance, so to speak... If you still want LLVM to codegen the same items, we don't have any radical departure at all, just a different way to do the same thing when we are release-compiling any particular crate from scratch.


Oh sure, if you're talking purely about a final release build, then yes. But most of the time, compiles are not final release build compiles. People don't generally complain about final release build times. They complain about the build time as they're developing.


I regularly need to run release builds while developing. Not as commonly as I need to do a debug build or a cargo check, but still often enough that the slowness is annoying.


Final release build times are up there too, as build automation systems are often at capacity.


For sure, and I don't mean to say that they don't matter. I have worked on codebases that only use release mode, for reasons, for example. I just hear fewer complaints about it, that's all I'm saying :)


Usually I've observed release is used by automated systems and debug locally. If you're not managing the automation or working closely with those people, you're less likely to see those complaints but they definitely exist.


What is wrong with having two versions: slower at runtime but super-fast compiles for debug, and the other way around for production?


Of course, that is the status quo for rustc (as it is for all production compilers I know of). The issues are that (1) debug builds are still slow enough to be annoying, and (2) they are sometimes _so much slower_ to execute that a debug build + test run is slower than a release build + run.

A lot of work is being done to address these issues. The big projects I'm aware of are a move to a query-based compiler (which should improve incremental build times) and the new cranelift backend, which should dramatically speed up debug build times.
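
As an aside, one partial mitigation for (2) already exists: Cargo profile overrides (stable since Rust 1.41, IIRC) let you fully optimize dependencies, which are compiled once and then cached, while keeping your own code on a fast-compiling setting. A sketch:

    # Cargo.toml -- trade a little compile time for much faster debug runs

    [profile.dev]
    opt-level = 1              # light optimization for your own code

    [profile.dev.package."*"]
    opt-level = 3              # fully optimize dependencies (built once, cached)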


The slowness of building crates makes an immense impact on ordinary rust project build times, but as the Linux kernel would just be using rustc directly, and Kbuild already parallelizes CC invocations, the relevant gains here would be single-TU compilation speed.


But the translation unit in Rust is the crate. Running `rustc` directly won't change that fact. Unlike in C, where you can compile TUs in any order since you've got interfaces in headers, in Rust you can't compile a crate until you've compiled all its dependencies--and if it's not Cargo that's going to sort that out for you, Kbuild will have to.


> in Rust you can't compile a crate until you've compiled all its dependencies

Since Rust 1.38 (https://blog.rust-lang.org/2019/09/26/Rust-1.38.0.html#pipel...) that's no longer the case: "To compile a crate, the compiler doesn't need the dependencies to be fully built. Instead, it just needs their "metadata" (i.e. the list of types, dependencies, exports...). This metadata is produced early in the compilation process. Starting with Rust 1.38.0, Cargo will take advantage of this by automatically starting to build dependent crates as soon as metadata is ready."


Every time a new Rust version comes out I compile my toy Gtk-rs application, ported from a C/C++ Users Journal article about Gtkmm.

The C++ version still wins by several minutes on an Asus 1215B.


Has the gap at least narrowed over time?


Definitely; the improvements to rustc are visible, so eventually it will get there.

And I test it on that hardware on purpose, because I should not have to buy a top class desktop to achieve usable compile times similar to other AOT compiled languages.

However, I also admit that what is missing is a backend that favours debug workflows; such backends do exist, they just aren't there yet.


Is this a fresh compile each time?


Yes, make clean all, cargo clean build.


If it does compile, could you try reporting results for multiple Rust compiles (e.g. the current one, and the ones from 6, 12, 18, 24 months ago)? (If you can, maybe do the same for C, but I don't know how easy that is.)


This is good but it doesn't work well enough in my experience. I still spend a lot of time waiting for dependencies to build before the crates that depend on them.


Given the modular nature of the kernel, how big will each "crate" be? Will they mostly end up interfacing with the rest of the kernel using the C ABI and not need to be compiled as a single unit?


The Rust compiler folks care immensely about compile speed, and the language is a factor in both cases.


I don't know about D, but a difference between Rust and Go, for instance, in this area might be that the Go language designers care about compilation speed, to the extent that it has influenced the language design, while in Rust that concern may be important in terms of implementing the compiler, but not enough to influence the language design.


It's not that Rust doesn't care about compilation speed; it's that final binary speed, zero-cost abstractions, and ergonomics lie higher in the priority list.

If we didn't care about ergonomics, you wouldn't be able to have diamond dependencies inside of a crate (like defining a struct in a file and its impl in a different one, or splitting impl items between two files; this is very handy but requires that the compilation unit be a crate).

If we didn't care about zero-cost abstractions and final binary speed, we wouldn't always monomorphize generics; we'd have a threshold past which we automatically use dyn traits (vtables) instead, like Swift does. But we want people to have full control over the behavior of their code, so you have to change your code in order for that behavior to change. This means that your code's performance characteristics won't change because of unrelated changes crossing some threshold, but it does mean that unless you opt into using a vtable, the compiler will spend a lot of time expanding code for every type used in generics.

This doesn't mean that we don't care about compile performance, that there aren't things we can do to improve the current situation, or that we don't actively pursue those actions (take a look at https://perf.rust-lang.org/ over larger periods of time and you'll see that for the most part things are getting significantly better).
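
To make the monomorphization tradeoff concrete, a toy sketch (an illustration, not compiler internals):

    use std::fmt::Display;

    // Monomorphized: the compiler emits and optimizes a separate copy of
    // this function for every concrete T it's used with. Fast at run
    // time, but more work for rustc and LLVM at compile time.
    fn print_static<T: Display>(value: T) {
        println!("{}", value);
    }

    // Dynamic dispatch: one compiled body; calls go through a vtable.
    // Rust makes you opt into this explicitly with `dyn`.
    fn print_dynamic(value: &dyn Display) {
        println!("{}", value);
    }

    fn main() {
        print_static(42);     // instantiates print_static::<i32>
        print_static("hi");   // instantiates print_static::<&str>
        print_dynamic(&42);   // one body, vtable lookup at run time
    }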


Yes, agreed fully. This is a good example of what I meant by the language being a factor :)


Walter Bright founded Dlang and continues to be the primary author of its compiler frontend. He is the sole author of dmc++/Zortech C++, the world's fastest C++03 compiler.


Absolutely true. I'm not sure what you're implying though, none of that contradicts what I've said in any way.


Are you suggesting to hire Walter to work on Rust compiler/language itself?


Such things are giving me hope that eventually all compilation to native CPU code will converge on a singular project. There's so much fragmented effort!


Slightly off topic: I see D brought up incredibly consistently when Rust is mentioned on HN. Is it just me? (It's usually by Walter themself, but I digress.)


Some of us believe in systems programming languages with some form of GC, so it usually pops up as an alternative.

For me, D is what .NET (and thus C#) should have been, if the Ext-VOS project (COM Runtime) had decided to go native instead of embracing J++ and then turning into .NET due to the lawsuit.

But unfortunately D is a tiny community that gets by on small contributions, and herding cats is hard.


A number of people (including myself) migrated from what was known as D1 onto Rust. Steve (Klabnik) was also one of the refugees. I can't speak for others, but Rust is kind of what I was expecting D1 to evolve into.


Wow, didn't know Steve was rocking D.


> but because the compiler authors care about compilation speed

Is this meant to imply that the Rust compiler authors don't care about compilation speed?


Everybody cares about compilation speed. What would be interesting is how much time, percentage-wise, is spent optimizing it. Does anyone have any hard data here?


We don't track time of who works on what, so there's no way to answer this specific question with hard data.


They didn't make it a priority until the last year or two; no efforts were made to improve it before then.


This is simply not true.

For example, here's the 2017 roadmap, which was authored in 2016 https://github.com/rust-lang/rfcs/blob/master/text/1774-road...

> The edit-compile-debug cycle in Rust takes too long, and it's one of the complaints we hear most often from production users. We've laid down a good foundation with MIR (now turned on by default) and incremental compilation (which recently hit alpha). But we need to continue pushing hard to actually deliver the improvements. And to fully address the problem, the improvement needs to apply to large Rust projects, not just small or mid-sized benchmarks.

We didn't do these sorts of yearly plans before that, but note that it isn't just talking about what should be done then, but also about the work that was already being done on this previously.


The push for Rust is coming from above, top-down. Watch what programmers do, not say. They prefer Go to Rust because it compiles fast.


That depends on the programmer. Personally, I will choose Rust over Go almost every time, because I think it has a better type system, and is in many (but not all) ways more ergonomic. And that is more important to me than compile time, although I concede that other programmers (especially those with more experience with interpreted languages) prioritize compile time over other language aspects. Besides which, my experience has been that when incrementally compiling during development, Go compilation isn't actually that much faster.


Every language that goes all-in on type safety (Haskell, OCaml, F#, Scala) ends up being too hard to actually use in day-to-day programming. But people publicly say that they prefer it so as to not signal that they themselves find it too difficult. In an ideal world, I'd be writing everything in Coq, but I still live on earth and have deadlines, so I'll "do that some other day". All the code ends up being in some practical language, but when I get a developer survey I'll say I love the type safe hippie language.


I think the difficulty with those languages comes more from the pure functional paradigm than type safety. And I do use Scala as a day-to-day language. Also, I'm not saying I prefer Rust over Go after reading some documentation and playing around with toy examples. I've written real code in both, and prefer the Rust experience. I think Go's main advantage is that it is easy to learn. That is a real advantage in some cases (like onboarding new employees/contributors), but for me and many programmers I know it is not a sufficiently compelling reason to give up a more powerful type system.


Exactly this! I use Go because it's easy when I sometimes need to provide small binaries to colleagues to help with their day-to-day work. And those are colleagues who don't have a Python environment installed or anything, which I'd be opting for in small "workflow improvement" cases like this. So whip up a small Go program, compile to Windows, and make a colleague happy. But when I build larger codebases with my developer colleagues (or private projects at home) I really prefer Rust as it gives me so much more trust in the code that I might have to maintain for years and where the linecount is not just <200, but actually thousands and thousands.


> Every language that goes all-in on type safety (Haskell, OCaml, F#, Scala) ends up being too hard to actually use in day-to-day programming.

F# is the most pleasant, practical day-to-day programming language that I have personally used. Scala is also quite nice.


D was dead by all practical measures since its inception. It was never revolutionary enough to warrant the huge costs of migrating there. This is where Rust comes in...


TypeScript allows you to compile without type checking, at the sacrifice of correctness. If you have a slow computer or a huge code-base, you could consider using the type checker only for tests, pre-checkin checks, or CI. (Not that TypeScript is slow; this is just a conceptual example.)

Would something similar be possible for Rust, so you still have the ability for correctness for CI and release builds, but allow for fast compilation when necessary or desired?


Type checking doesn't seem to be a major bottleneck at all. The major issues seem to be

1) Technical debt in rustc producing large amounts of LLVM IR and expecting LLVM to optimize it away

2) generic monomorphization producing vast amount of IR

IIRC.


TypeScript would have a far easier time in that scenario because much of its syntax is simply JavaScript with extra types attached. So it could (I don't know the implementation details at all) 'just' strip away all the TypeScript syntax and give you your JavaScript files.

Rust has the issue that there's a lot of type checking done even in basic scenarios: when setting a variable like `let alpha = some_function(abc);`, you have to do some form of basic type checking just to get the type of `alpha` (since that affects how it's stored on the stack). The simple case of `fn some_function(abc: u32) -> i16;` would be 'easy', but more complicated scenarios like generics would make your life harder.

I'm sure there are parts of the checking that could eventually be written so that they can be turned off, but I don't think it would provide nearly as much benefit as TypeScript's unchecked compilation. Personally, I think it would be better to spend that time on just making the compiler faster. This is perhaps easier than I've made it sound, but these are my assumptions as to why it wouldn't work that well.


> Would something similar be possible for Rust

Yes. The `mrustc` alternative implementation doesn't do things like borrow checking, it assumes the source is "correct".

> [skipping typechecking would] allow for fast compilation when necessary or desired?

The real expenses in a "release" compilation pipeline are in the optimisation passes and codegen, so the gains are mostly in avoiding optimisations (debug build) or avoiding codegen entirely (`cargo check`).

For instance on the latest ripgrep head on my machine:

* cargo check takes ~200s user

* cargo build (debug) takes ~300s user

* cargo build --release takes ~900s user


I’ve heard someone say about the rust compiler that it will never allow compilation without its safety checks in place because it would decrease faith in any rust binary.


You could always replace the fn body of large parts of your codebase with unimplemented!() (or use the nightly-only everybody-loops flag), but I don't think this will have as big a benefit as people would expect. cargo check, though, is probably something more people should know about.
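
For illustration, a stub like this (hypothetical function) still lets the signature and every caller type-check, while leaving codegen almost nothing to do:

    #![allow(dead_code)]

    // Hypothetical stub: callers still type-check against the signature,
    // but there's almost no code to generate or optimize.
    fn expensive_analysis(_input: &[u8]) -> Vec<u8> {
        unimplemented!() // panics if actually called
    }

    fn main() {}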


I assume dependency trees in the kernel would be much more shallow.


That might be true for drivers.

Outside the kernel, I don't think it's going to be true for Android or Windows.


But this discussion is mostly about the kernel, isn't it? I'm not sure why throwing out Rust weaknesses in general is appropriate when we're talking about a specific use case such as kernel drivers and other modules, which do tend to be quite a bit more shallow than most large C/C++ projects I've been on.


That's a fair point. I do think it's appropriate to point out that build times are a problem for a lot of projects and may affect the kernel so careful attention needs to be paid to that issue.


I know a lot of people will disagree with me, and I know that the rust compiler has room for improvement, but compile time isn't really that bad for rust when you consider what it actually does for you.

Assuming no unsafe blocks, rust's type system eliminates entire classes of memory safety errors. It fundamentally eliminates data races. It eliminates large swaths of logical errors, due to its use of algebraic data types, and strong type enforcement. No null pointers, explicit error handling, and no undefined behavior.
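
To make the last two points concrete, a toy sketch:

    use std::num::ParseIntError;

    // No null: absence is an explicit Option, and the compiler forces
    // both cases to be handled before the value can be touched.
    fn first_even(values: &[i32]) -> Option<i32> {
        values.iter().copied().find(|v| v % 2 == 0)
    }

    // Explicit error handling: fallible operations return Result, and
    // the `?` operator propagates errors instead of silently dropping them.
    fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
        let n: i32 = s.trim().parse()?;
        Ok(n * 2)
    }

    fn main() {
        match first_even(&[1, 3, 4]) {
            Some(v) => println!("first even: {}", v),
            None => println!("no even numbers"),
        }
        println!("{:?}", parse_and_double("21"));  // Ok(42)
        println!("{:?}", parse_and_double("abc")); // Err(..)
    }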

It boggles my mind that people will balk at compile times for rust, but then not even bat an eye at their C++ testkit that runs for 8+ hours, primarily checking for errors that rust would never let you make in the first place.


Part of what makes rustc's slow compile times so frustrating is that the slow compile times are mostly unrelated to all of the fancy compile-time analysis you listed. You can see this in action if you run `cargo check`. That will run all the special static analysis (typeck, borrowck, dropck, etc.) and complete in a very reasonable amount of time.

To a first approximation, rustc is slow because it generates terrible LLVM IR, and relies on a number of slow LLVM optimization passes to smooth that IR into something reasonable. As far as I can tell this is entirely a result of not having the necessary engineering resources to throw at the problem; Mozilla has a limited budget for Rust, and needs to spread that budget across not just the compiler toolchain, but the language and ecosystem.

Brian Anderson's been working on a blog post series that covers Rust's slow compile times in more detail, if you're curious: https://pingcap.com/blog/tag/Rust


Yeah, I realize that Rust has a lot of room for improvement. I'm just saying that compared to the alternative, you get a relatively inefficient compiler that, despite its inefficiency, still gives you a more efficient development process. You get instant feedback, which you can't get from a testkit, and you can eliminate massive amounts of testing.

If the focus is entirely on reducing compile times, it can very quickly lead down a path that limits runtime performance. There is so much potential for increased runtime performance in strongly typed languages, due to the guarantees made by the type system. It allows for analysis and reasoning that not just enables micro-optimizations but also macro-optimizations across entire codebases. Entire theses have been written describing optimizations that rely on immutability alone, and we haven't even touched so many other possibilities that rust's design enables (such as the ownership system).

I'm fairly familiar with the project to work on the LLVM-IR code emitter. If the focus is on reducing the complexity and cruft of the LLVM-IR emitted, that's a win all around (because you get faster code and faster compile times). I just hope that reducing compile times doesn't become the core objective, because that eliminates a lot of future potential.


Note that Rust doesn't really generate worse LLVM IR than clang does, given the same input. It's just that as a language, Rust needs MIR optimizations more than C and C++ do, given the increased abstraction (largely necessary for safety).


I think your and OP's perspectives are not exclusive:

A. Yes, as you say, taken from a distance and considering the whole development lifecycle from idea to shipping, "compile time isn't really that bad for rust when you consider what it actually does for you".

B. However, -during- development, you want compile speed, regardless of all the benefits. I don't know about you, but I have a hard time rationalizing "it's okay, rustc is doing many things" at that moment. I just want it to be faster, and as OP mentioned, I hope there's a future where companies / sponsors get to "hiring about 25 people" to crunch and improve things noticeably (and I will do more than hope, I already donate money to Ferrous Systems to improve rust-analyzer, and will donate if a "make rust faster" fund asks for it). Your argument sounds to me a bit like "who cares about making Python faster? It's just glue". To which I answer: maybe (though not always), but even then I'll take faster glue anytime :) .

To sum up: rust being already awesome and being "fast given all the thing it does" doesn't mean it shouldn't be faster, and doesn't mean speaking about it is futile.


I'm not opposed to faster compile times, and there is plenty of room for improvement, as I have already stated.

Generally speaking, if you are using incremental compilation, Rust is fast enough for development. It could be better, especially on large projects, but I haven't found it to be an issue generally. There are a bunch of improvements that could still be made. The LLVM IR issue mentioned above is one of them. But also, why do we have to rely on a code emitter at all? In development mode, the type checker should be incremental, synchronous, and fast...but the code emitter could be relatively slow as long as it is asynchronously coupled to the type checker.

But relentlessly pursuing the path of compiler performance can be a bad route to take, because it invariably leads to stagnation in compiler capabilities. I'd gladly accept a 10x increase in `opt-level=3` compile times if it meant 10% faster runtime performance. I'd gladly accept a 10x increase in `opt-level='z'` compile times if it meant a 50% decrease in binary sizes. We can't get to those capabilities if those possibilities are squashed by concerns about compile times.


> Rust in Linux will be fantastic except for compile time. Rust (and the world) needs a Manhattan Project to build a fast Rust compiler (where by "fast" I mean both efficient and scalably parallel when compiling a workspace with many crates and long dependency chains).

A basic technique to parallelize sections is to split a project into modules. Is this not an option for the Linux kernel?


Rust has great support for modules (crates in Rust lingo). It can compile modules in parallel when they don't depend on each other.

The problem is that when you have module A depending on module B, rustc doesn't do a good job of compiling A and B in parallel. In contrast, in C/C++ you can handcraft your header files to ensure that A and B compile in parallel.


Actually, rustc produces crate metadata (those .rmeta files), which is all that's needed to start compiling dependent crates. Producing this metadata is pretty quick compared to producing the finished crate.


Right, but still, compiling long dependency chains is slow :-(.


In C and C++ it is also quite common to use binary dependencies, something that cargo still doesn't do.

However, C++ is also now looking at parallelization opportunities for modules, which are in a similar situation to crates.


Every time you compile a rust project after an initial compile, cargo does use "binary" dependencies (bitcode dependencies).

It is much better at this than C++.

For example, change a trait in your Rust library, and recompile the tests, and compare that with changing a type in, say, range-v3 (or any other C++ header-only library).

In C++, recompiles take pretty much the same time as clean builds (for range-v3, exactly the same time). In Rust, they are infinitely faster (100000x faster, depending on the project, but essentially almost instant, in contrast with initial compiles).

So for development at least I'll pick Rust over C++ every day. I only do an initial compile once, but I edit and recompile a lot, and that's the bottleneck that I really care about.

(Same with C, btw: add a new API to a commonly used C header and re-compile, and you end up re-compiling the world, while in Rust you just re-compile the part of the crate where the new API was added, and since no downstream crates use it, that's pretty much it - instant change).


https://github.com/StanfordSNR/gg - should be almost the same as the GCC/LLVM thunk extractor. You have to pay for the borrow checker, but LLVM IR optimization passes should be the same complexity.


> increasing the number of people paid to work on the Rust compiler by 10x would only mean hiring about 25 people.

Citation needed?

I know at least 20 people being paid to work on the Rust compiler, which cranks your 10x number from 25 to 200. And this is only the people I can come up with off the top of my head.


How much of the needed speedup is engineering and how much is research?


Depends on what you mean by "research". If you mean "problems with a high degree of novelty" then quite a lot of research, I think. Rust is unusual: it requires a lot of tricky type inference and static analysis from the compiler, and doesn't make developers structure their code with header files etc to make separate compilation easier; BUT unlike most languages with those properties, people are writing large real-world projects in it.


My understanding is that Rust's slowness is mainly from code generation and optimization. `cargo check` performs full parsing and type-checking, and runs reasonably quickly.

The problem starts when Rust throws a ton of IR at LLVM. The current focus is on adding optimizations in Rust at higher level (MIR) so that it would cause less work for LLVM.


That is a problem.

There is another major problem, which is that rustc/cargo do not do a good job of compiling crates in parallel when one depends on the other.

There may well be other problems too.


Imagine you have 16 cores available. Allowing parallel compilation where it is sequential at the moment, you get 16 times speedup on such a machine. 32 cores machine -- 32 times speedup etc. Or even 10 such machines -- 320 times speedup.

On the other side, imagine you can even reduce the "ton of IR" tenfold but if the compilation is sequential, the tenfold speedup is fixed even if you have 32 or 320 cores.

In general, allowing for parallel compilation is a much more important topic, and it's worth even redesigning the language to make that more achievable.

Which also doesn't mean that producing a "ton of IR" is a good thing. But working on allowing maximum parallelism is more fundamental. Also, discovering algorithms to make all compiler passes as effective as possible is important for something to reach wider usability. But Rust must also achieve its promise of being "safer" in the scenarios where C is otherwise used, and I time and again see a lot of "unsafe" keywords in the sources whenever I see some implementation of anything fundamental.


Modulo the necessarily sequential parts (as mentioned by rayiner), you are correct in terms of absolute speed achievable. But you achieve that speedup at a roughly linear increase in $$$ cost. When you have idle cores sitting around, maybe it was opportunity cost you were already paying, but that's bounded.

On the other hand, reductions in the amount of computation needed mean you get a faster build for the same resources, or a similar speed build for cheaper.

Depending on your circumstances, one of these things (speed vs. cost) might be much more important than the other. We cannot forget about either.


Except Amdahl’s law.
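
For reference: if a fraction p of the work can be parallelized, Amdahl's law caps the speedup on n cores at

    S(n) = 1 / ((1 - p) + p/n)

so even with p = 0.9, no number of cores gets you past a 10x speedup.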


Replace "faster" with "more incremental".

Trying to get a program to quickly do the exact same thing over and over again is a colossal waste of resources.


When you’re trying to balance a bad budget, you can’t dismiss many of the cost centers and still succeed.

You can prioritize, but that makes the rest a matter of “when” not if.

It can be hard to tell from the outside if someone is avoiding a problem because they don’t want to solve it, or because they are hoping for a more inspired solution. Tackling a problem when all the options are bad may block another alternative when it does finally surface. I’d lump this in with irreversible decisions and the advice to delay them to the last responsible moment.

Who knows what internal plumbing has to be moved around to make the compiler more incremental, parallel, or both.


Yes, let's keep all options on the table.

But trying to constant-factor optimize O(code I didn't change) is a ridiculous approach that needs justification, and the burden of proof is on those who advocate it. Yet this seems to be the default thing people beg for.

Switching to incremental is hard programming, and that's why it hasn't happened overnight, but at least it is an algorithmically sane approach.


Yes, we should have a national politburo assign funds to Rust development. Although most of private industry (proles) disagrees, us in the Administration believe that Rust is superior. The proles--ahem I mean programmers with jobs--keep causing problems by using Unsafe technologies and being too dumb to use them. Rust should be funded by The People. It is a Public Good after all because using anything else would be an act of defiance against the will of The People.


> increasing the number of people paid to work on the Rust compiler by 10x would only mean hiring about 25 people. Compared to the size of the projects that are starting to depend on Rust, that's a rounding error.

And are you going to pay for the "rounding error", or are you just expecting someone to pay millions a year for your idea?


It's not my idea. This is well-known.

As for who pays for it: that's a tricky issue! But if we get into a situation where (for example) 10,000 developers spend an hour a day waiting for builds, we know 50 developers could fix the issue in a year, but that doesn't happen because we can't figure out how to structure it economically --- that would be a failure of organizational imagination.


This is such a low-effort sneer-comment; it's a dismissive way to attack and put down any idea that includes any amount of payment. It's the discussion equivalent of "whoever smelt it, dealt it": if someone mentions an idea which costs money, demand as a first response whether they are going to pay for it (assuming they aren't, instant dismissal), or accuse them of holding an entitled and unfair expectation of others, when neither need be the case.

"That volcano looks like it's becoming active, maybe some sensors could give us an early warning of problems" - Are YOU going to pay for your little "idea"??? Then WHO IS?

"That tree is getting dangeoursly high, it's at risk of falling down in a big storm this winter, maybe we could get it cut down before that" - Are YOU going to pay for it?!

"Dumping raw sewage in the river is making people sick, treating it first would take a small amount of space and a small fraction of the council's existing budget" - and you want to TAX ME for YOUR clean river, I suppose?!

As if OP is the only person who would benefit, as if automatically assuming the worst possible intent for who would pay for it, and as if "who would pay" is the first and only thing worth demanding an answer to, before even having a discussion about whether it's worth doing at all - and which of many ways it might be done.


If the idea involved more than "hire 25 people" then, sure, I might've said something better. But just saying that more people would solve the problem: is that even an idea?


I think OP meant that companies like Amazon and MS, whose projects are starting to depend on Rust, should be putting in some resources. Although to be fair, Microsoft and Amazon do pay for their infrastructure, if I am not wrong.


Right on both counts.


I’m not sure why everyone cares so much about compile times compared to the only thing that matters - runtime performance.

For the lifetime of a binary, compilation time is like 0.000001% of its existence.

I’m more than happy to wait longer compared to other languages (even zero time at all with scripting languages) while cargo does its thing.


Business reason: When you're in the business of building software, people are your biggest expense. You want to waste their time as little as possible.

Dev reason: tight feedback loops are encouraging, and make it easier to build momentum. I hate being deep in a problem then having to "wait" for a compile.


But compile times have a massive effect on developer time, which has a massive effect on the productivity and the amount of work that can actually be done.

Comparing compile time to run time is rather strange, but compared as a percentage of a developer’s time, it’s very obvious why people are worried about it.


I think it's a bit of a shadow argument. So for business, the logic goes: Goal := save(Money). time(Developer) := costs(Money). Therefore, to achieve our Goal, we optimise developer time.

But more often than not, businesses don't know how their developers spend their time. So as such, they optimise for results. The business doesn't care how you spend your time, as long as by the end of Sprint/Deadline/Whatever you have a result and/or a reason why not. And I'd bet some money that the reason will very rarely be "my compiler took too long".

From a developer standpoint, I understand better (mainly because I'm a developer). But it still lacks a bit. I started using Rust and I have a long history with C. The first compile with Rust takes very long (because of fetching and building all the dependencies), and afterwards it is still slower than C, but it's okay. In return you get a lot of extra safety for your program; it's basically the time you'd have spent on your C program fixing all those nitty-gritty memory-safety issues, making sure there are no dangling pointers or free(NULL) errors. Of course with experience these get less and less frequent, but there's always risk. Nobody is perfect.

For me as a developer, it's more of a shift in my thinking - which is, to be fair, hard to enforce on people and can only come from oneself. But now that I have a compiler which supports me in handling memory and pointers, my mind is more at ease. It does the things I'd otherwise spend time debugging (most of them, not all), and in return for longer compile times, I save development/debugging time. At first it feels a bit less productive, since you are just sitting around watching your compilation (there's an XKCD for that: https://xkcd.com/303/), but you could also spend this time writing documentation, reading up on something, or just grabbing your coffee (or leaving the office and checking in tomorrow). Things you would do anyway; it's just a matter of how you mentally approach it.

I think the downside only comes when you want it to be negative, or when you're still transitioning.


> increasing the number of people paid to work on the Rust compiler by 10x would only mean hiring about 25 people

I don't think there are 25 people qualified to do that work looking for jobs. It's not something you can just throw money at, you need really qualified people to do that kind of work. Those people are in very high demand.


Note that that would be increasing the number paid; you already have many people who have the relevant qualifications, because they're already doing the work, just only in their spare time.

Second, the compiler team has put a ton of effort into mentorship infrastructure, and you can go from not knowing a ton to being a productive member of the team with their help. It "just" takes time and desire to do it.


Interesting. What sort of mentorship infrastructure? What sort of people are they targeting for mentorship?

I'd love to learn about compilers by working on an actual real world compiler, but I really only know high level details about how different components fit together. I also don't have a formal CS education, although I am currently working as a senior level data engineer. Not sure if that is the sort of person they are looking to mentor.


rust-lang/rust on github, go to issues, and click on the "easy" and/or "mentor" labels. That will show you all the easy tasks with a mentor available.

Just find one that you'd like to work on, write that you are going to work on it, and start doing the work, and asking lots of questions.


Or on some other project.

Sure, there are probably some projects for which if the primary author leaves, their coworkers will drop it or rewrite it in another language, but for others the management will realize what a liability it was to have all of their eggs in that one basket and they’ll have to hire 2 to replace them.

If you steal 25 to work on the kernel, you will likely see 25 open reqs to replace them. And more people will work with languages that they have seen reqs for.


There are a lot of people out there who can do this kind of work. I know a lot of such people. Many of them don't have traditional CVs.

Yes, most of them are employed. Some are under-employed. Some are well employed but might be keen to work on a project this important.


There are certainly way more than 25 skilled programmers who are experts in compilers that could be hired easily by offering enough money (on the order of several hundred thousand dollars per year).


> a Manhattan Project to build a fast Rust compiler

Or work on getting what Rust gives you in a different language that isn't Rust (maybe entirely new, maybe not[1]).

One underdiscussed thing about Rust is that it's so clearly a byproduct of the C++ world. And the most disappointing thing about Rust is that it probably won't ever be seen as a prototype that should be thrown away.

1. http://mbeddr.com/


If you want to convince people that Rust should be thrown away in favour of mbeddr or something else, you need to make an argument based on specific design flaws of Rust, not just tar it by association with C++.

That argument would have to not just explain why an improved language is needed, but also why Rust can't evolve into that language via the edition system. It would also have to convince people that whatever benefits the improved language brings justify moving away from the existing crate ecosystem, tool ecosystem, and developer pool, and building new ones (or how you would reuse/extend/coopt them).

(In reality I think the influence of Haskell and even node.js were as important to Rust as C++.)


    >not just tar it by association with C++.
To build on this and your last paragraph: I think it's arguably a feature of Rust that it combines:

- A syntax that's highly familiar to the legions of C++ programmers already out there

- A bunch of features C++ has had in the works but that aren't quite there yet (specifically, a proper module system comes to mind)

- Features from ML/OCaml/Haskell, like a first-class option type, first-class (powerful) pattern matching, and an emphasis on immutable-by-default variables

Having this without the cruft of the decades of backwards-compatibility that C++ has had (not without good reason, granted) makes Rust the hard refresh of C++ that I and hopefully others have been waiting for for a while.


.NET has had these features for decades. It compiles really fast, is much easier to use than C++, and has awesome IDEs.

C interop is pretty much same as in Rust.

GC is not an issue in modern .NET even for resource-demanding apps running on slow ARM CPUs. Here's an example: https://github.com/Const-me/Vrmac/tree/master/VrmacVideo BTW it consumes non-trivial amount of C APIs like V4L2, ALSA, and audio decoders.


The reasons .NET never really competed with C++ for the longest time were a handful:

- First-party support for .NET only existed on Windows systems until 2015ish, with the release of .NET Core

- Third party support across platforms via Mono never really picked up steam for a number of reasons

- Early on, .NET code, much like JVM code, wasn't in the same league as C++ in terms of performance

- I'd like to be proven wrong on this one, but .NET remains a poor framework to do systems programming in. GC is IMO still a factor because even though modern mark-and-sweep GCs are impressively performant, they still add a degree of unpredictability to your runtime, and sometimes that's not acceptable — so reference counting or something even more basic is preferred, if only for the consistency.


> .NET remains a poor framework to do systems programming in

Have you clicked that link? Essentially, I’ve re-implemented parts of ffmpeg’s libavformat and libavcodec in C#. On a Raspberry Pi, the software uses 15% of CPU time while playing 12 mbit/sec 1080p video, on par with VLC, while using 20% less RAM than VLC. If that’s not systems programming, then what is?

BTW, I don’t think I allocate any objects on heap while playing the video, therefore no GC is involved, and no degree of unpredictability is introduced. Wasn’t hard at all, just recycling buffers between frames.


Object recycling often works fine to avoid GC issues for simple stuff, but in more complicated applications you really do need to dynamically allocate and free memory and GC is going to get involved.


> for simple stuff, but in more complicated applications

The media engine part of the library has 45k lines of code; this is mostly due to the complexity of the V4L2 API, and of the MKV and Mpeg4 containers. I wouldn’t call that “simple stuff”.

> you really do need to dynamically allocate and free memory

Memory allocation in .NET ain’t limited to GC.

First of all there’s stack. I use Spans + stackalloc in quite a few places there. Modern .NET even has these `ref structs` where the compiler guarantees they are never accidentally boxed into the heap.

Another thing, no one stops you from using malloc/free, like I did in that class: https://github.com/Const-me/Vrmac/blob/master/VrmacVideo/Aud...

Finally, the mere presence of GC doesn’t affect performance in any way, only usage of GC does. Even for latency-sensitive applications like that media engine it’s OK to allocate stuff on managed heap, as long as it only happens on startup, or otherwise rarely. I allocate quite a lot of stuff on the heap, megabytes, when opening a media file.


Funny, before your comment showed up, I was actually going to edit in an addendum to link to

https://robert.ocallahan.org/2018/10/the-costs-of-programmin...

... to pair with the mbeddr link.

You seem to be at a place where you think of Rust as already entrenched, but I'm at a place where I still see Rust as a proposition—one that could lead to that place, or not if we think better and nix it (following the reasoning above).


I don't think Rust is going away. You may not like it; it may not be adopted by the Linux kernel. But it is widely used in real codebases (including, say, Firefox). It's not brand new. And the article linked above discourages creating new languages; valid or not, that horse is well out of the barn.


The post as written, and whether intended or not (admittedly by current signs, I guess it'd be foolish to wager anything other than "not"), ends with an observation that's still highly relevant here but that you're neglecting to account for.

> I hope people consider carefully the social costs of creating a new programming language especially if it becomes popular, and understand that in some cases creating a popular new language could actually be irresponsible.

And being widely used is not the same thing as being entrenched, not the same thing as being unable to say, "nice prototype; let's wrap this up now and do it for real this time".


> ends with an observation that's still highly relevant here but that you're neglecting to account for.

No, that's kind of the sentence I was responding to. Rust is already popular; that horse is out of the barn.

> And being widely used is not the same thing as being entrenched, not the same thing as being unable to say, "nice prototype; let's wrap this up now and do it for real this time".

I don't care to argue about the first clause, but I don't understand how you can argue the latter. Who is going to say "nice prototype; let's discard this?" Do you expect the Rust-using projects to just fold? Do you expect Rust developers to just disappear? Again, it sounds like you don't like Rust, which is fine, but as someone external to the community, why do you think anyone would listen to you about spinning down Rust?


As someone who's spent the last few years writing a 170K line Rust project ... I consider it entrenched!

I stand by that blog post, though I recognise it's controversial. I think Rust clearly qualifies as "worth the cost of a new language". There simply wasn't, and still isn't, any other serious contender for safe GC-free systems programming. New contenders will arise but in the meantime lots of projects are doing safe systems programming that without Rust would have been done in C or C++ and would have had to be unsafe.


Even if the benefits are "worth the cost of a new language", is using the new language worth the cost of it being Rust?


You have not made your argument clear.

Are you questioning whether Rust is simply not good enough to be worth using by anyone? Lots of people, including me, have judged that is worth using for us, both before we tried it and after we've put Rust projects into production.

(BTW once again you have implied there is something deeply wrong with Rust that cancels out its benefits, without saying what that might be. That is annoying.)


You're not seeking answers; in place of anything that would lead to a better understanding, you're asking the questions that make it easier to defend the position you've staked out. (Great job, Brendan!)

If someone released a memory-safe, highly concurrent, systems-oriented version of INTERCAL, you recognize that someone might look at that and say, "Gee, I don't think that's really worth using", and the comment's lack of detail doesn't nullify the substance of what makes the comment true or not? And an opponent could approach in the same way you are now, with the primary goal of "beating" the other participant—by making it burdensome to respond—and by all appearances do so?

It's not my job to painstakingly prepare a perfect, incontrovertible response for a hostile adversary, particularly since absolutely nothing would come of that time investment. If your idea of success is the sound of all your colleagues telling yourselves that you won the conversation and you're on the right track and nothing needs to change because there really aren't any issues... then, hey, that's just fine.


You asserted that Rust should be thrown away, but decline to provide any reasons why you think that. So yeah, in the absence of any supporting argument, I reject your claim, and so would any other reasonable person who doesn't already believe it. You have added nothing to the discussion.


> You asserted that Rust should be thrown away, but decline to provide any reasons why you think that.

That simply isn't true.

> I reject your claim

That's fine.


• Rust is an ML-family language dressed in a C-family syntax to look palatable to existing systems programmers. It's a byproduct of the C++ world only to the extent that it has to work on the CPU architectures and operating systems influenced by C and C++.

• Rust is mainly based on established research and existing languages from 80s and 90s. It's a distilled version of these, not a prototype.

I recommend checking out the first pitch deck for Rust: http://venge.net/graydon/talks/intro-talk-2.pdf


RAII-based memory management without GC, and the concept of ownership vs. borrowing are very C++-inspired concepts. The borrow checker, which is the main innovation of Rust, is automated checking of things C++ programmers spend a lot of effort thinking about.

I think the influence of C++ is much more than just the syntax.


Isn't this more of an artifact of our memory architecture and the way we (de-)allocate memory? In that sense, the C++ flying-by-the-seat-of-your-pants approach and Rust's borrow checker solve the same problem, and I'm sure a lot of C++ insight has made it into the ideas behind the borrow checker, but I'm not sure that makes Rust in a way derivative of ideas extant in C++-land.

To me, the borrow checker just feels like an additional type layer I have to reason and think about. In that sense, my thinking when reasoning about lifetimes in Rust runs a lot more along the same lines as in Haskell than anything I would be doing in C++ (which I refuse to code in; I'm not capable of writing secure C/C++ code).


I think the programs that fit neatly into Rust's borrow checking patterns are the same ones that fit neatly into C++(11)'s approach to managing object lifetimes (move/value semantics).

The question is what happens when programs don't fit those patterns. Inevitably in any large real world application, there are some objects with lifetimes that need to be managed carefully and don't match conveniently with some sort of scope in a program.

With C++, this is where one would use std::shared_ptr, or something else. In rust, there's Rc. But either way, you've now stepped into a territory where object lifetimes and bugs can be messy.
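
A minimal sketch of that territory on the Rust side (toy example): shared ownership via Rc, with mutation requiring an explicit opt-in through RefCell, which moves the exclusive-access check from compile time to run time:

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // Shared ownership: freed when the last Rc is dropped.
        let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
        let alias = Rc::clone(&shared);

        // Mutation through a shared handle needs RefCell: the aliasing
        // rule is now checked at run time, and a violation panics
        // instead of corrupting memory.
        alias.borrow_mut().push(4);

        println!("{:?}", shared.borrow());         // [1, 2, 3, 4]
        println!("{}", Rc::strong_count(&shared)); // 2
    }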


I do not find reference counting problematic in practice. In Rust you have to go out of your way to introduce interior mutability in refcounted values, which discourages problematic patterns.

I find that in most cases where ownership patterns don't fit the Rust model, you can abstract the problematic parts into some kind of collection type and hide them in its own crate --- or better still, find an existing crate that does what you need.

The only remotely common case that still doesn't get handled well, in my experience, is the self-referential struct pattern --- when a value needs to contain references to other parts of the same value.


> The only remotely common case that still doesn't get handled well, in my experience, is the self-referential struct pattern --- when a value needs to contain references to other parts of the same value.

There's a basic reason for that, namely you can't memcpy a self-referential structure while keeping desired semantics. Rust now has a Pin<> feature to mark data that should not be moved in memory, but its use is still unintuitive and IIRC requires unsafe. In many cases one should perhaps avoid references to self-addresses entirely; often they can be replaced by indexes and/or offsets.


> There's a basic reason for that, namely you can't memcpy a self-referential structure while keeping desired semantics.

You actually can! For instance, consider a struct with two fields: a Vec<T>, and a &T which references one of the elements of the Vec. Moving the struct (which does the memcpy) does not reallocate the Vec, so the reference in the &T would still be valid. But as far as I know there's no way to represent that in safe Rust; the &T in the struct requires a lifetime, and there's no way to represent "the lifetime of the other field" or even "the lifetime of the struct itself" (something like &'self T).

Yeah, you can use an index in my example (replacing the &T by a usize, and looking it up in the Vec every time), but it's not a perfect replacement. A reference would always be valid (and always point to the same element), while an index can become invalid if the Vec is truncated, or point to the wrong element if an element is inserted or removed before it in the Vec.
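
A sketch of that index-based workaround (hypothetical names, with the caveats above):

    // An index sidesteps the borrow checker's inability to express
    // "a reference into my own other field", at the cost of the
    // invalidation hazards described above.
    struct Selection {
        items: Vec<String>,
        chosen: usize, // index into `items`, standing in for a &String
    }

    impl Selection {
        fn current(&self) -> Option<&String> {
            self.items.get(self.chosen)
        }
    }

    fn main() {
        let sel = Selection {
            items: vec!["a".into(), "b".into(), "c".into()],
            chosen: 1,
        };
        println!("{:?}", sel.current()); // Some("b")
    }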


Right, I know what the reason is, but that doesn't stop it from being a problem.


Maybe that has more to do with almost everything new having been garbage collected for the last few decades (and that is such a tragedy) and less to do with C++.

It is a low-level programming language after all. Not being explicit about memory would be a bad thing (it almost always is).


I was under the impression that RAII specifically (as opposed to manually calling malloc and free by hand) was an innovation of C++, but I could be wrong.


Ada and Object Pascal also have it.


Don't know, but the concept is very useful and has found its way into many languages, such as the using statement in C#.

It is just that it is typically more useful/powerful in languages that lack a GC (it doesn't necessarily need to be that way, but that's my impression).


C# using is Common Lisp's with-....


> RAII-based memory management without GC, and the concept of ownership vs. borrowing are very C++-inspired concepts.

Why are you crediting C++ for an idea that is probably much older than it?


Because it was C++ that made it mainstream.


I'd love to know where C++ got it from, but I can't find it. Any ideas?


I think that's how it started out (and the compiler was first written in OCaml), but the language has largely abandoned that heritage: the C++ camp is squarely in charge.

In the beginning, I feel like Rust aimed to be a general-purpose language, but not anymore. Really the only reason why anyone would consider Rust for a project is if GC is unacceptable. And the amount of complexity in the language just to avoid that is truly staggering.

There aren't that many situations where GC can't be tolerated (and I think some would argue there are zero situations), so I don't actually see Rust as a particularly useful language, and certainly not one in the ML tradition.

Moreover, most zero-allocation C code is a total lie. These code bases are littered with arena-based allocators that burden the project with all the problems of GC and none of the benefits.

GC works for operating systems, it works for hard real-time systems, it works for mostly everything except severely constrained embedded systems.


We have a 170K line project written in Rust (https://pernos.co). I've written a lot of C++ in the past, but also Java and other things (I've been programming for nearly 40 years now), so I'm familiar with developing both with and without GC. Pernosco is mostly a server app that could use GC. I'm happy with the choice of Rust.

> And the amount of complexity in the language just to avoid that is truly staggering.

In terms of day-to-day coding this isn't my experience. Once in a while I have tricky ownership problems to resolve, but most of the time I get by just fine without thinking much about the ownership model.

In exchange we don't have to think at all about GC tuning or object recycling, we get RAII, we get to use affine types to encode typestates, and we get easy parallelism. I'm happy about this trade.

As you can see when I started this thread I am far more concerned about build times than I am about the cognitive overhead of ownership and lifetimes.


I have to say my experience with the language has just been very different from this. It absolutely feels like a general-purpose language to me. I actually prefer Rust over Python, as I find the language simpler to use and anywhere from 10-100x faster (if not much more than that in some cases). cargo is much better than pip, and the Rust compiler is much smarter about guiding me than something like mypy.

Maybe I'm unusual in some way, but I do not find Rust hard to use once you've learned the language. You say the complexity to avoid a GC is staggering, but it seems straightforward to me. You have to appease the borrow checker, and you need to learn about some tools like Rc and Arc for shared ownership, and Cell and RefCell for interior mutability. Other than that, the language has been an absolute pleasure and dream to use.

I tend to write Rust the way I write Haskell, not the way I write C++. I start by sketching out all the types that I need to model the problem space at hand. This can be done almost 1:1 with Haskell in many cases. Then I start filling in the implementation, getting me from "input" types to "output" types and only very rarely running into any kind of issue with the borrow checker. Of course, instead of using immutable data structures (as is common in Haskell), I use mutable ones in Rust. The compiler guides me the entire way, and often it feels like an entire program writes itself.
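
As a toy illustration of that types-first workflow (all names here are invented): define the types first, then let the compiler guide the implementation from "input" types to "output" types.

    // Model the problem space with types first...
    enum Event {
        Login { user: String },
        Logout { user: String },
    }

    struct Summary {
        logins: u32,
        logouts: u32,
    }

    // ...then fill in the function that gets from "input" to "output";
    // the exhaustive match means the compiler flags any missed case.
    fn summarize(events: &[Event]) -> Summary {
        let mut s = Summary { logins: 0, logouts: 0 };
        for e in events {
            match e {
                Event::Login { .. } => s.logins += 1,
                Event::Logout { .. } => s.logouts += 1,
            }
        }
        s
    }

    fn main() {
        let evs = vec![
            Event::Login { user: "alice".into() },
            Event::Logout { user: "alice".into() },
        ];
        let s = summarize(&evs);
        if let Some(Event::Login { user }) = evs.first() {
            println!("{}: {} logins, {} logouts", user, s.logins, s.logouts);
        }
    }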

As for GCs, I almost never want one. Performance matters in almost everything I write these days, and faster is always better. Rust is so easy to write once you learn it that I see no reason to accept a GC in 2020 and beyond for any software where performance matters. I do not write software for severely constrained embedded systems. I mostly write real-time backend services that process data or handle large numbers of clients. I could tolerate a GC in much of this, but there's no reason to in a world where Rust exists, and everything is faster, more responsive, and considerably easier to reason about as a result. A few years ago, I would've used a mixture of C++ and Go. Now, I'd just use Rust for all of it.


> There aren't that many situations where GC can't be tolerated (and I think some would argue there are zero situations)

Those people would be wrong. As an extreme example: GPU programming, where GC is essentially impossible due to the architecture.


Can you elaborate on the underdiscussion of Rust as a byproduct of the C++ world?

I'm into Rust, so I guess I assumed it was common knowledge that it was originally written to be like OCaml with better concurrency, and that one of the early goals was to replace C++ in Mozilla code.

But, language-wise, I don't even see that much similarity with C++ except for the philosophy of zero-cost abstraction and the trivial syntactic similarity of generics using <>.


Rust certainly has C++'s attitude to performance, for example 'zero-cost abstractions'. That precludes GC. But this is more of an indirect influence: targeting the same niche.


> That precludes GC

This isn't the case, there are tracing garbage collectors implemented as libraries [1][2], and there is consideration being made for supporting tracing GC in the language and stdlib [0]. As well as reference counted GC having been available in stdlib for a long time [3] (similar to C++'s std::shared_ptr)

[0] https://github.com/rust-lang/rfcs/blob/master/text/1398-kind...

[1] https://manishearth.github.io/blog/2016/08/18/gc-support-in-...

[2] https://boats.gitlab.io/blog/post/shifgrethor-i/

[3] https://doc.rust-lang.org/std/rc/index.html


It precludes pervasive or mandatory GC.


I can elaborate, but first: are you a C++ programmer?


I guess some other people here are assuming malintent with your question, assuming that the answer might be condescending or dismissive depending on how I answer. I'll assume that's not the case and that you simply want to know so you can answer with relevant examples, etc.

I did write C++ code for about 7 years. It was mostly C++98 and C++11.

Since then I've also done a fair bit of Rust. I'm also very familiar with Swift, Kotlin, Java, some lisps, JS, Go, and a few others. So I have enough breadth in language experience for any compare/contrasts you'd like to do.


> I'll assume that's not the case

Thanks, that was actually the right answer! (And those assuming otherwise are not just getting the wrong answer, but I'll point out are breaking the rules[1] here, too.)

The C++-to-Rust background is what I suspected—but the reason had nothing to do with being dismissive about not having the right buzzwords to prove you're smart and well-rounded. (This isn't a job interview, folks, geez.) My instinct was a presumption of deep familiarity.

When I mention Rust having the marks of being from the C++ world, I'm referring to a property of the language that it shares with Swift. When you say you don't see much similarity, my suspicion is that your first consideration is language semantics and the new affordances, and so when you think of Rust and C++, you're seeing foremost all the ways that they're different, rather than seeing the things that they share. I saw where Hejlsberg once said that he doesn't get why people draw comparisons between C# and Java, and then gives a list of reasons—where those reasons show the same focus, i.e. he focuses on the ways that they differ (and maybe in his case, all the things that he "fixed" with C# in comparison). One of your sibling commenters writes that "Rust is an ML-family language dressed in a C-family syntax", but that's a little off the mark. Rust certainly has a C++-style syntax, but aside from curly braces, idiomatic Rust looks alien when compared to C.

1. https://news.ycombinator.com/newsguidelines.html


> When I mention Rust having the marks of being from the C++ world, I'm referring to a property of the language that it shares with Swift.

But what property is that?

> I saw where Hejlsberg once said that he doesn't get why people draw comparisons between C# and Java [...]

That's very amusing!

> Rust certainly has a C++-style syntax, but aside from curly braces, idiomatic Rust looks alien when compared to C.

For sure. But I'm still not sure why you say it's from C++ land. It's definitely more like C++ than C, but JavaScript is also probably more like C++ than C. That doesn't mean JavaScript is actually anything like C++, really.

One thing that I thought of that Rust definitely does share with C++ (and C), and that other languages (Swift) don't, is that you don't have to decide at class definition whether a type will be heap allocated, or used as a reference vs. a value, etc. That's decided at the use site.
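
A small sketch of what that looks like in Rust (type and names invented for illustration): the same type can live on the stack, on the heap, or behind a reference, chosen at each use site rather than in the type's definition.

    struct Config { verbose: bool }

    fn print_cfg(c: &Config) {
        println!("{}", c.verbose);
    }

    fn main() {
        let on_stack = Config { verbose: true };           // plain value
        let on_heap = Box::new(Config { verbose: false }); // same type, heap-allocated
        print_cfg(&on_stack);
        print_cfg(&on_heap); // Box<Config> derefs to &Config at the call site
    }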


C++ programmer of ~10 years here (Google Chrome, mostly).

Rust feels very very much like C++ to me. All the memory ownership stuff is basically common patterns in modern C++ pulled down into the compiler and enforced.

Also arguing with the compiler about types feels very familiar from C++ :-/.


That's a good point. The ownership patterns are basically compiler-enforced best-practices of C++. Fair enough.


Sure, so, what is the actual relationship you see between C++ and Rust?

It sounds like you're arguing that idiomatic Rust looks alien compared to C, which I'd agree with - I'd also argue it's alien compared to C++. But I assume you disagree with that?


Not to throw a mean comment, but I think maybe it can help you in your life to get some external feedback:

I hope you don't have this kind of attitude in real life, because it instantly makes you look like a pretentious prick and lose any credibility. Your original comment made me curious, and this one loses any interest I had in what your point of view actually is.

(FYI: HN is (mostly) a developer community with usually knowledgeable people working in the field. If you don't think explaining yourself here is worth it, I have no idea where you think you're going to have the opportunity to give your opinion.)


> i hope you don't have this kind of attitude in real life [...] you look like an pretentious prick

Yeesh. I don't even know what attitude you're referring to. You've set out to give some life advice, but do you know how real life conversations work? Can you understand that my response will differ greatly depending on what the answer to that question is? It's pretty normal for dialogues to be... dialogues, which includes checking in with each other instead of monologuing based on a lack of shared agreement.

> if you don't think explaining yourself here is worth it

Once again, what are you even talking about? You're responding to something you've imagined I've said or something you want to attribute to me, not what I've actually said or am thinking.

Can you wait for ragnese's reply and then my response before going off the deep end?


From the guidelines:

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

Ironically, by assuming bad faith and a condescending tone in the comment you responded to, you've made a legitimately condescending comment.


Personally, I think the put-down was quite on point. I don't see the point of flagging it, that just leaves the content to imagination.


Anyone can see flagkilled comments, or any [dead] comments, by turning 'showdead' on in their profile.

Just be warned that most of what you'll see is comments by banned users. Occasionally we get emails from people who don't realize that and somehow assume that moderators are condoning the worst dreck that appears there.


> I don't even see that much similarity with C++ except for the philosophy of zero-cost abstraction

That is not really an explicit goal of C++, which is part of the problem Rust is trying to solve.


Rust is actually a byproduct of OCaml and to a lesser extent Haskell. This becomes obvious the more you use it, especially if you’ve written substantial code in either of those languages before. To really drive this home, I’d wager that you have a better chance of translating OCaml or non-fancy Haskell straight into Rust than you do C++.


> I’d wager that you have a better chance of translating OCaml or non-fancy Haskell straight into Rust than you do C++.

I found that translating OCaml code to Rust is frustrating, because you cannot easily and cheaply replicate the functional approach. It's much more Rustic to take an object by mutable reference and modify it in-situ than it is to take an immutable reference and return a new object. The reason is that in OCaml, you have structural sharing, i.e., multiple pointers pointing into the same data structure, which is a big no-no in Rust, so if you want to replicate the functional OCaml code, you begin allocating a new, complete data structure every time, and that's costly.
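
A small sketch of the contrast (illustrative only): the functional style returns a fresh value, which in Rust means a full copy since there is no structural sharing, while the idiomatic version mutates through a &mut reference.

    // OCaml-ish: take the data immutably, build and return a new value.
    // Without structural sharing, this copies the whole Vec.
    fn push_functional(v: &[i32], x: i32) -> Vec<i32> {
        let mut out = v.to_vec(); // full allocation + copy
        out.push(x);
        out
    }

    // Rustic: take a mutable reference and modify in situ.
    fn push_in_place(v: &mut Vec<i32>, x: i32) {
        v.push(x);
    }

    fn main() {
        let mut v = vec![1, 2, 3];
        let v2 = push_functional(&v, 4);
        push_in_place(&mut v, 5);
        println!("{:?} {:?}", v, v2);
    }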


Well, of course. Rust is not designed around immutable data structures. There are going to be differences because it's simply not the same language. This [0] example, written by a notable Haskeller, illustrates what I mean pretty well and also touches on what you just pointed out. It's also very consistent with my own experience.

[0]: https://donsbot.wordpress.com/2020/07/04/back-to-old-tricks-...


> And the most disappointing thing about Rust is that it probably won't ever be seen as a prototype that should be thrown away.

Can that ever happen with a language that lots of people use?


JavaScript has entered the chat...


You said two different things about Rust but you did not substantiate anything. What do you mean by

"Rust is so clearly a byproduct of the C++ world"?

Furthermore, in which way is Rust defective to the point that it should be seen as a prototype that should be thrown away?

mbeddr seems really nice and interesting, but it is not just a language. It is a set of integrated tools around one.

Rust shares goals with C++, i.e. it wants to be a high-performance and systems language like C++, but I find the differences to be quite substantial and worthwhile.


These threads always devolve into "Rust is too slow to compile", written by developers (or enthusiasts) who have never written no_std code in production. I've written and shipped firmware for embedded devices written in Rust, yes, still using cargo and external crates, and had zero issues with compile time because the nature of the dependencies in the deps tree is different and very carefully curated.

Anyway, I really just wanted to point out that from the mailing list we have Linus and Greg endorsing this experiment/effort from the Linux side and a commitment from Josh on behalf of the Rust team to grow the language itself with the needs of the kernel in mind. That's quite impressive and more than I could have hoped for.

I've actually played with writing kernel code in Rust - for Windows/NT, however - and it's quite weird to be able to use such high-level type constructs in code where you typically chase pointers manually and wouldn't be surprised to see statically allocated global variables used to monitor reference counts.


Linus, who normally holds that C has its quirks but is great, and who is anti-C++ in general, is endorsing Rust?

Why is that? Did he ever give his reasons?


Linus has commented on Rust twice (that I'm aware of).

First, back in 2016:

Q: What do you think of the projects currently underway to develop OS kernels in languages like Rust (touted for having built-in safeties that C does not)?

A: That's not a new phenomenon at all. We've had the system people who used Modula-2 or Ada, and I have to say Rust looks a lot better than either of those two disasters.

I'm not convinced about Rust for an OS kernel (there's a lot more to system programming than the kernel, though), but at the same time there is no question that C has a lot of limitations.

https://www.infoworld.com/article/3109150/linux-at-25-linus-...

Then second, much more recently:

“People have been looking at that for years now. I’m convinced it’s going to happen one day. It might not be Rust, but it’s going to happen that we will have different models for writing these kinds of things.” He acknowledges that right now it’s C or assembly, “but things are afoot.” Though he also adds a word of caution. “These things take a long, long time. The kind of infrastructure you need to start integrating other languages into a kernel, and making people trust these other languages — that’s a big step.”

https://thenewstack.io/linus-torvalds-on-diversity-longevity...

I was once in the same room as him at a Linux Foundation event, and really wanted to ask him about it, but also didn't want to be a bother.

Note that the C++ opinion everyone cites is from 2007. I don't know how he feels about C++ today. It seems he's using it for some things at least. I know I've changed a lot since 2007, and so has C++.


I don't know Linus's reasons specifically, but our presentation at Linux Security Summit last year laid out why we think that Linus's past objections to C++ don't apply to Rust. See slides 19-21 of https://ldpreload.com/p/kernel-modules-in-rust-lssna2019.pdf .

His previous objections were:

    In fact, in Linux we did try C++ once already, back in 1992.

    It sucks. Trust me - writing kernel code in C++ is a BLOODY STUPID IDEA.

    The fact is, C++ compilers are not trustworthy. They were even worse in 
    1992, but some fundamental facts haven't changed:

     - the whole C++ exception handling thing is fundamentally broken. It's 
       _especially_ broken for kernels.
     - any compiler or language that likes to hide things like memory
       allocations behind your back just isn't a good choice for a kernel.
     - you can write object-oriented code (useful for filesystems etc) in C, 
       _without_ the crap that is C++.
In brief, Rust does not rely on C++-style exception handling/unwinding, it does not do memory allocations behind your back, and its OO model is closer to the existing kernel OO implementation in C than it is to C++'s model. (There are other good safe languages besides Rust that I personally like in general but do not satisfy these constraints for this particular use case.)


Is the following code from page/slide 64 supposed to be a talking point?

  FFI: calling C from Rust
  extern {
   fn readlink(path: *const u8, buf: *const u8, bufsize: usize) -> i64;
  }
  fn rs_readlink(path: &str) -> Result<String, ...> {
   let mut r = vec![0u8; 100];
   if unsafe { readlink(path.as_ptr(), r.as_mut_ptr(), 100) } 
  ...
The example doesn't bother using r.capacity(), which is exactly the kind of poor code hygiene leading to overflows you would see in bad C code--i.e. rather than using the sizeof operator, manually passing a literal integer, simple macro, or other ancillary variable that hopefully was actually used in the object's declaration.

Also, notably, the presentation says one of the desirable characteristics for a kernel language is "Don't 'hide things like memory allocations behind your back'". I'm not very well versed in Rust, but when I write a small test program using vec![0u8; 100], the buffer is heap allocated. Which would be a hidden allocation in my book. I realize these are just introductions, but if you're pitching Rust code for the kernel, it seems disingenuous not to show the style of Rust code, presumably much more verbose, which would actually be used in the kernel. Constructs like boxes can't really be used, at least not easily. C++ was disliked because the prospect of fiddling and debugging hidden allocators is nightmarish. And to the extent Rust's type system relies on boxes and similar implicit allocation for ergonomics, some of the type safety dividends might be lost. How much would be lost is hard to judge without showing the kind of actual code that would be required when used in Linux. Toy Rust OS experiments don't count, because they can just panic on OOM and other hard conditions, and don't need to wrestle with difficult constructs like RCU.


`Vec` is always heap-allocated; it's not hidden if you know that. It's intentional that you have to write `vec![0u8; 100]` to get this, rather than just, say, `[0u8; 100]` (which will give you a stack allocation).
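
A minimal contrast of the two spellings mentioned above:

    fn main() {
        let heap_buf: Vec<u8> = vec![0u8; 100]; // heap allocation, opted into via vec!
        let stack_buf: [u8; 100] = [0u8; 100];  // plain array, lives on the stack
        println!("{} {}", heap_buf.len(), stack_buf.len());
    }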

It's true that the example should use capacity() rather than hardcoding 100.

If you want a stack allocation, here's an example:

https://play.rust-lang.org/?version=stable&mode=debug&editio...

It's about the same length as the original, but it's hard to make a direct comparison. On one hand, the original snippet is totally wrong: it doesn't properly handle the nul terminator of either the input or the output. I fixed that, but at the cost of making the code a bit more verbose. On the other hand, the original snippet does try to parse the result as UTF-8, which is not what you want, so I removed it (it would add 2 lines or so).

That said… it's not true that avoiding heap allocation is what would actually be done in the kernel. Not for file paths, which is what readlink typically returns. Since kernel stack sizes are so small, paths have to be allocated on the heap. It's only in userland that you can plop PATH_MAX sized buffers on the stack.

On the other hand, if you were really doing heap allocation in the kernel you would need to handle out-of-memory conditions. Rust's standard library doesn't support recovering from OOM at all, but it's not like you'd be using the standard library in the kernel anyway. In contrast, Rust the language is actually quite good at this. A fallible allocation function would return something like Result<String, OOMError>; the use of Result forces you to handle the error case somehow (unlike C, you can't just assume it succeeded), but all you need to do to handle it is use the `?` operator to propagate errors to the caller.
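
To sketch what that could look like (fallible_alloc and OomError are hypothetical names, not a real kernel API): the Result type forces the caller to acknowledge the failure case, and `?` propagates it upward.

    struct OomError;

    // Stand-in for an allocation that can fail instead of aborting.
    fn fallible_alloc(len: usize) -> Result<Vec<u8>, OomError> {
        if len > (1 << 20) { Err(OomError) } else { Ok(vec![0u8; len]) }
    }

    fn read_path() -> Result<Vec<u8>, OomError> {
        let buf = fallible_alloc(4096)?; // on OOM, returns Err to the caller
        Ok(buf)
    }

    fn main() {
        match read_path() {
            Ok(buf) => println!("got {} bytes", buf.len()),
            Err(OomError) => println!("out of memory"),
        }
    }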


I realize Rust-the-language can handle OOM, and that with Result types it could prove cleaner and safer than in C. But it's rare to see examples of this, and the wg-allocator working group still seems to have a lot of unfinished business.


Yeah, someone reported a bug on that slide before: https://www.reddit.com/r/rust/comments/dc5kky/writing_linux_... That slide is not intended to be debugged production code, certainly. :)

Rust Vecs are, by definition, heap-allocated (just like C++ ones). See my reply to a sibling comment for what I mean by hiding memory allocations.

Here's my plan for RCU: https://github.com/fishinabarrel/linux-kernel-module-rust/is...


> In brief, Rust does not rely on C++-style exception handling/unwinding, it does not do memory allocations behind your back

It kind of does a little bit. Panicking is implemented via the same machinery. But of course code is not expected to panic unless things are terribly wrong (i.e. kind of like the existing kernel panic thing). It also does sometimes allocate things, but it is true that Rust is a lot more explicit about this - i.e. you have to call clone() or Box::new() etc.


`no_std` environments are different. They don't allow allocations unless you explicitly add an allocator. You also have to define the panic handler yourself.

Generally `no_std` libraries will not allocate or panic themselves. There is still the possibility of panicking (e.g. out of bounds array access) but there are alternatives that don't panic if that's a concern even with a custom handler.
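
A minimal sketch of what a no_std crate looks like (shown as a library crate; a real kernel module would wire this up differently):

    #![no_std]

    use core::panic::PanicInfo;

    // With no_std you must supply the panic handler yourself; a kernel
    // would route this into its own panic machinery rather than spin.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    // No heap: this slice sum uses only core, no allocator required.
    pub fn sum(xs: &[u32]) -> u32 {
        xs.iter().sum()
    }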


Yeah, and it shouldn't be too hard to hook up existing nightly Rust to use the kernel's panic functionality; it already supports user-overridable panics for no_std, IIRC.


How does C++ do memory allocations behind your back?


Consider this code:

    #include <iostream>
    #include <string>
    
    void p(const std::string &x) {
        std::cout << x << std::endl;
    }
    
    int main() {
        p("hello");
    }
Neither p, which takes an already-allocated std::string, nor main, which uses a string literal, explicitly/obviously requests dynamic memory allocation. Yet it happens: the call implicitly constructs a temporary std::string from the literal, which can heap-allocate.

In userspace, that's not a huge problem - even if allocation requires asking for more memory from the kernel, which requires paging out some dirty files to disk, which blocks for a few seconds on NFS, that's fine - the kernel will just suspend the userspace process for a few seconds while all that happens. Inside the pager or NFS implementation, though, an unintentional memory allocation can deadlock the whole system.

In Rust you'd have to write .to_owned() to do the equivalent, and it would be obvious that doing it in code that might be responsible for freeing up memory is a mistake.
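
For contrast, a rough Rust transliteration of the C++ snippet above, with the allocation point spelled out:

    fn p(x: &str) {
        println!("{}", x);
    }

    fn main() {
        p("hello");                 // no allocation: &str borrows the literal
        let s = "hello".to_owned(); // the allocation, written out explicitly
        p(&s);
    }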


Constructors and copy constructors, mainly.

Something as innocuous as `T x;` or `T x = y;` will allocate in C++. In Rust, the equivalents are `let x = T::new();` and `let x = y.clone();` which are much more obvious as potential allocations.


> Something as innocuous as `T x;` or `T x = y;` will allocate in C++

I think saying these will allocate is wrong. They can allocate, but only if T allocates and manages memory, which depends on what T does.

I am skeptical of the more general claim. Anything but a truly cursory skim of the code is going to tell you that these lines are constructing a new object, regardless of the syntax, and to actually know whether it allocates memory, you need to know more about T, which is true in either Rust or C++.


Unless someone implements Copy on a type that allocates in Rust (which you shouldn't do; Copy is specifically for types that are cheap to copy), you really won't get implicit copies, though.

That means in any reasonable code, `let x = y` won't allocate in Rust while `T x = y` could in C++. `f(x)` in Rust won't either for the same reason. And that's on top of how C++ will implicitly take a reference, so even if you know x is a heavy object, you don't know if `f(x)` will be expensive (or if `f(move(x))` would be helpful).

It's just not exactly equivalent. C++ is more implicit and has less baked-in good practices like `Copy` vs `Clone`. The indirection is often shallower (less often I have to inspect a function prototype or type I'm using for specifics), and I find that very useful.


Minor clarification: afaik it's impossible to implement Copy such that it would allocate. Unlike Clone, the Copy trait is only a marker trait which doesn't have methods, so you cannot customize its behavior. It will always be a bitwise stack copy.


This is not minor. In Rust "x = y" specifically only copies or moves bytes, no magic involved, no code is run except something like memcpy.

- If the type isn't marked Copy, it's a bytes move (which will very probably be optimized out).

- If the type is marked Copy (which is possible only if all its subtypes are also marked Copy, which allocating types are not) then it's a byte-for-byte copy.

As soon as you need something more involved, then you have to implement (or derive for simple types) Clone.


The real key here is that something that allocates will want to Drop said memory, and you can’t implement both Drop and Copy.
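
A small sketch tying these points together (types invented for illustration): Copy is only derivable when every field is Copy, which rules out heap-owning (and therefore Drop-implementing) types, so their copies stay explicit.

    #[derive(Clone, Copy)]
    struct Point { x: i32, y: i32 }   // all fields Copy, so Copy is allowed

    #[derive(Clone)]
    struct Buffer { data: Vec<u8> }   // owns heap memory (Vec implements Drop),
                                      // so adding Copy here would not compile

    fn main() {
        let a = Point { x: 1, y: 2 };
        let b = a;                    // plain bitwise copy; `a` stays usable
        let v = Buffer { data: vec![0; 16] };
        let w = v;                    // just a move; `v` can no longer be used
        let u = w.clone();            // explicit, potentially allocating copy
        println!("{} {} {} {}", a.x, b.y, w.data.len(), u.data.len());
    }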


How do you write OO code in C?


This is a really in depth read about OO design patterns in the Linux Kernel using C: https://lwn.net/Articles/444910/

The easiest to understand example (for me) is that you can manually implement polymorphism support using function pointers that take in a struct instance as an argument. Polymorphism is fundamentally just a method call that's dependent on which target object it's being called on, so this pattern allows you to pass in the target object (as a struct since it's C) explicitly.

From the article:

> Some simple examples of this in the Linux kernel are the file_lock_operations structure which contains two function pointers each of which take a pointer to a struct file_lock, and the seq_operations vtable which contains four function pointers which each operate on a struct seq_file.
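
To make the pattern concrete, here's a rough transliteration (kept in Rust for consistency with the other examples here; all names are invented) of the kernel-style "ops struct" of function pointers that gets handed the target object explicitly:

    struct FileOps {
        read: fn(&File) -> usize,
        write: fn(&File, &[u8]) -> usize,
    }

    struct File {
        name: &'static str,
        ops: &'static FileOps, // the manual "vtable"
    }

    fn ramfs_read(f: &File) -> usize {
        println!("read from {}", f.name);
        0
    }

    fn ramfs_write(_f: &File, buf: &[u8]) -> usize {
        buf.len()
    }

    static RAMFS_OPS: FileOps = FileOps { read: ramfs_read, write: ramfs_write };

    fn main() {
        let f = File { name: "hello", ops: &RAMFS_OPS };
        (f.ops.read)(&f);           // dispatch through the function pointer,
        (f.ops.write)(&f, b"data"); // exactly as the C pattern does
    }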


The huge and important difference is that when you use this pattern in C, the compiler has no idea what you are doing and can't check anything, nor can it reduce the cost of this dispatch mechanism. A compiler for a language where this is a feature can check what you are doing and de-virtualize virtual dispatch. In C you pay the highest possible cost for polymorphism while enjoying the fewest benefits.


> The huge and important difference is when you use this pattern in C the compiler has no idea what you are doing and can't check anything

Function pointers exist in C and are type-checked by the compiler just fine.

> A compiler for a language where this is a feature can check what you are doing and de-virtualize virtual dispatch.

Virtual dispatch is pretty much only used in the kernel in places where virtualization is necessary. Devirtualization is pretty rare in C++ in practice (I don't believe I've ever seen a "virtual final" method, and my day job is on a large C/C++ codebase).


You don't need "virtual final" for devirtualization, the compiler will speculatively devirtualize in some cases.


> Function pointers exist in C and are type-checked by the compiler just fine.

This amount of type safety is almost useless. Many of the function prototypes, for example in inode and dentry ops, have the exact same signature. If you accidentally swap inode.rmdir with inode.unlink, the compiler isn't going to say anything. And it won't be caught in code review either, to the extent that Linux even has a code review culture.

> I don't believe I've ever seen a "virtual final" method, and my day job is on a large C/C++ codebase

It's common in my experience, for performance-sensitive code.


I don't think that kind of optimization is usually important in a kernel, especially one architected so heavily around dynamically loaded modules. Plus, on the convenience side, you can use models other than C++'s (e.g. some function pointers at the instance level) in ways that don't look completely different when reading the source.

Devirtualization is mainly useful to cope with some of C++'s self-inflicted wounds. Linux obviously never had them in the first place.


I'm really not a compiler expert, but is this optimization common in C++, for example? I thought all virtual methods incurred the cost of a vtable lookup.


Just slapping the virtual keyword on some function doesn't do anything. Overrides may or may not happen via vtable. Often compilers can figure out how to dispatch at compile time and don't need the vtable.


C++ was originally a precompiler that generated C code. So it is definitely possible.

Generally:

- an object is a struct

- a method is a function taking a struct (the object) as an argument

- inheritance is done by having a struct that has its parent as the first element

- polymorphism is done by implementing the vtable manually; these are function pointers

- constructors are functions returning an object (so, a struct)

It is inconvenient but possible. OO is a concept that is not dependent on the language, though some languages make it easier than others.


These seem like the statements of a person who knows next to nothing about C++.


Why is it weird? It has been done before in Ada, Object Pascal, C++, Mesa/Cedar, Modula-2, Lisp, Oberon, Sing#, System C#, Objective-C,....


Right, and it would be "weird" if I were using one of those, too. You don't run across drivers written in any of those languages on a daily basis.

I didn't mean it with any negative connotation, btw.


In Linux yes, however Arduino, MBed, Android, iOS/macOS and Windows drivers tend to be written in a C++ subset, and Swift, Java and .NET are also an option for userspace drivers.

I got your point, and naturally we are in the context of the Linux kernel, but I like to make the point that not every OS constrains itself to C; others have been embracing more productive options for a while now.


The title is not very accurate; this is a thread about a discussion of this topic that will happen at the upcoming Linux Plumbers Conference in late August.


I see what you mean. I just posted it here with the same title that was used on /r/Linux, and that I found accurate (for the same reasons chrismorgan laid out in a sibling comment), but now I agree with you that it could cause some confusion.

Maybe a moderator could rename the post to “discussion about Linux kernel in-tree Rust support” or something like that?


The title is perfectly accurate; it’s an email thread about Linux kernel in-tree Rust support. Sure, you could misconstrue such a title as implying that the Linux kernel supports Rust in-tree already if you wanted to, but half the titles on a site like this could be similarly misconstrued.


> The title is perfectly accurate, it’s an email thread about Linux kernel in-tree Rust support.

The title the person you are responding to is complaining about is not the title of an email thread but the title of a Hacker News post.


Unless there’s a strong reason not to, the Hacker News post title should match the email thread title. It does match, and I see no even slightly compelling reason for it to deviate.


Immunant folks wrote a blog post [1] about automating conversion of Linux kernel drivers from C to Rust by using their tool c2rust[2].

[1] https://immunant.com/blog/2020/06/kernel_modules/

[2] https://github.com/immunant/c2rust


I don't see them discussing what I view as the biggest question of such conversion: is the converted code safer than C, or is it a direct translation with all the issues that we were trying to fix by using Rust? (This came up, IIRC, with automatic C to Go; it worked, but was only useful as a first step because it gave you unsafe unidiomatic Go code)


It is currently a direct translation into unsafe.

They are also interested in "unsafe to safe refactoring tools" in my understanding, but they're not there yet.



Oh awesome! Thank you!


Pleasantly surprised by Linus response. IIRC his attitude to C++ was that it should be refused if only to keep C++ programmers out.


Rust is a well-designed, modern, and concise language with a sound type system, while C++ isn't. The difference is huge and obvious.


Except that one of Linus's most vocal objections to C++ was operator overloading and how basic, seemingly native things like + can actually do a lot of hidden stuff unknown to the programmer. He must have softened on this, since Rust offers the same facilities for operator overloading.

https://doc.rust-lang.org/stable/rust-by-example/trait/ops.h...


Operator overloading in Rust is less flexible than in C++, so there are fewer possibilities to mess it up.

Overloading is done indirectly via specific named traits which enforce method signatures, invariants, immutability rules. Assignments and moves can't be overloaded. Borrow checker limits what indexing and dereferencing overloads can do (hardly anything beyond pointer arithmetic on immutable data). Comparison operators can't be overloaded separately and can't return arbitrary values (like C++ spaceship operator). Generic code has to explicitly list what overloads it accepts (like C++ concepts), so even if you create a bizarre overload, it won't be used in generic code.
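
For illustration, here's what overloading `+` looks like: you implement the Add trait, whose signature is fixed for you (Vec2 is an invented example type).

    use std::ops::Add;

    #[derive(Clone, Copy, Debug)]
    struct Vec2 { x: f64, y: f64 }

    // Overloading `+` means implementing a named trait with a fixed
    // signature, rather than defining a free-form operator function.
    impl Add for Vec2 {
        type Output = Vec2;
        fn add(self, rhs: Vec2) -> Vec2 {
            Vec2 { x: self.x + rhs.x, y: self.y + rhs.y }
        }
    }

    fn main() {
        let a = Vec2 { x: 1.0, y: 2.0 };
        let b = Vec2 { x: 3.0, y: 4.0 };
        println!("{:?}", a + b);
    }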


Linus’s point was also that C++ makes extensive use of function/method name overloading (a form of ad-hoc polymorphism) and this can make code hard to read. Not only is this not possible in C, but the feature tends to be overused in C++.


Rust generally uses function/method polymorphism based on traits (aka type classes), which is far less prone to overuse and abuse than overloading in C++.


Is this form of polymorphism done at runtime, thus incurring a similar overhead to C++'s virtual?


There are two ways of doing it. The way most people use most of the time does not; it's statically dispatched. The other way is dynamically dispatched, though there are some differences from the way that C++ virtual works. They should be roughly the same in that case, though.


It depends; you have access to both capabilities. By default, if you're only dealing with an impl'd trait, it will use static dispatch. If you are passing around that same object as a trait object, it will use dynamic dispatch. You can also impl traits for other traits, which are then always used as trait objects (vtable-based) but have some restrictions on what they can look like. If you rely on the Deref trait to make it look like you have inheritance (in reality it is composition), it will behave like the latter, using dynamic dispatch.


You can pick whether you want to do this at run-time or compile-time. The default is compile-time.
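
A minimal sketch of the two options (illustrative types): the generic version is monomorphized at compile time, while the trait-object version goes through a vtable like C++ virtual.

    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle { r: f64 }

    impl Shape for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }

    // Static dispatch: monomorphized per concrete type, no vtable.
    fn area_static<S: Shape>(s: &S) -> f64 { s.area() }

    // Dynamic dispatch: &dyn Shape carries a vtable pointer.
    fn area_dyn(s: &dyn Shape) -> f64 { s.area() }

    fn main() {
        let c = Circle { r: 1.0 };
        println!("{} {}", area_static(&c), area_dyn(&c));
    }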


> Not only is this not possible in C ...

There are lots of object systems in C that have polymorphism via function pointers.


The Linux kernel is full of those.


I agree with this critique, but a good IDE helps, as does lint rules preventing egregious misuses (or even outlawing operator overloading entirely). As one example, many Java/C# linters prevent the use of non-curly-braced single line control statements entirely. The language allows them, but many codebases do not.


Could you give me a pointer to a discussion of type soundness in Rust? I recently watched a video, perhaps three years old, about Rust where Simon Peyton Jones asked about this, and Niko Matsakis answered that it was ongoing work back then.

Couldn't find anything proper by googling.


See the work of the Iris [1] project. It's open-source [2] and based on Coq; see the Rust-related work [3] in particular.

[1] https://iris-project.org/

[2] https://gitlab.mpi-sws.org/iris/iris/

[3] https://gitlab.mpi-sws.org/iris/lambda-rust


Linus nowadays also uses Qt, so.


I would assume he sees a difference between the requirements for kernel code and a dive-tracking GUI.


His attitude towards C++ also changed when using it for his side project [1].

Initially he started with C and GTK+ and later migrated to C++ and the Qt framework.

[1] https://subsurface-divelog.org/


I think that was after others took over the UI development. The back end of that program was also still C, as far as I remember from their presentation, and the move was mostly motivated by the GTK community, the documentation, and different priorities on cross-platform support.


Yes, you're correct. Linus still works on the C backend of the application and sometimes has to connect his parts with C++ code.


Did he have an "attitude" about C++ in general? I thought he only commented on it with respect to operating system development. He did make much more general statements about Java, though.


I thought the famous comment on C++ was in the context of git, not operating systems?



Did he talk about his thoughts precisely ?


I think I watched an interview with him where he talked about the Subsurface project. In that interview he said that C++ is not all that bad, but he still prefers C because he's so used to it and will continue to write C code forever. I don't remember the title of the video though.


I love how the bugtracker on that is "check out our mailing list" :D


Perhaps the possibility of rust improving kernel security/robustness makes the idea of rust integration seem like it carries its own weight, where C++ has more downsides (perceived or real) and fewer upsides.

Rust's history/origins - loosely, being designed to allow replacing Mozilla's C/C++ with safer Rust that performs well - feel like a good fit for kernel drivers even if the core kernel bits will always be C.


Yes, very much so. Not only in the context of Rust, but for the insight to fail fast, integrate early, and do work in the open, instead of hidden work that fails after a long time, once revealed.


So this surprises me. Obviously this is an early discussion about a potential topic, but the general consensus seemed more positive than I expected.

I thought I'd remembered reading something (maybe from Linus) that seemed very against having Rust in the kernel. Can anyone find a source for that? I searched a little and can't.

(Caveat: I obviously realise that Linus isn't endorsing Rust in the kernel, and is only saying something bounded: that if we have it, it shouldn't be completely hidden behind some config options. But that doesn't match my memory.)


Maybe you're remembering Theo's comments about Rust in OpenBSD?

https://marc.info/?l=openbsd-misc&m=151233345723889&w=2


The sad part is that Theo's answer on the thread seems to be one of the most polite and realistic ones.


This was interesting to read:

> Such ecosystems [Rust] come with incredible costs. For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.


This may be solved now that rustc can bootstrap itself via Cranelift.


But is that a huge issue? One could compile on a better machine with an i386 target, no?


According to the link above, in OpenBSD the policy is that 'base builds base'. Rust can't be included in the base on i386 without violating that policy.


And this is a reasonable and understandable policy. It just means that Rust in its current form is not a good fit for the OpenBSD base system.


Shame I hadn't seen that thread. He's right that most of the Rust projects "replacing" standard Unix utils are not feature-equivalent and differ in intent, but that doesn't mean there aren't other efforts to do exactly what he is asking.

E.g. here's tac (admittedly not cat, but hey, ./goo | tac | tac), published a few months before his email: https://github.com/neosmart/tac


I think you're right actually!

Thanks for finding it for me.

