Making Rust binaries smaller by default (kobzol.github.io)
248 points by todsacerdoti 8 months ago | hide | past | favorite | 194 comments



My go to reference when I want to reduce rust binary size is the excellent https://github.com/johnthagen/min-sized-rust, a set of guidelines on how to reduce size with explanations of the consequences


It's really a shame that Rust includes the stdlib piecemeal in binary form, debug symbols and all, in every final binary.

I do love Rust but binary sizes have always annoyed me greatly, and I've always had this nagging feeling that a portion of programmers don't take Rust seriously because of it. And I have actually witnessed, several times in the last 2-ish years, older-school programmers berating and ignoring Rust on that basis alone (so the author is quite right to call this out as a factor).

Looking at the https://github.com/johnthagen/min-sized-rust repo, final binary size of 51 KB when compilation / linking / stripping takes stdlib into account (and not just blindly copy-pasting the 4MB binary blob) is acceptable and much more reasonable. I wouldn't care for further micro-optimizations e.g. going to 20KB or even 5KB (further down the README file).

I also don't use nightly in my Rust work so I guess I'll have to wait several more years. :(


> so I guess I'll have to wait several more years

A feature that's landed in Rust nightly will be part of the next beta release (at most 6 weeks away) and then the following full release (exactly 6 weeks away).

For this feature in particular, rustbot added a tag of 1.77.0, which is releasing on 21st March 2024, less than 2 months away.

It's possible you've confused this with more complex features that stay on nightly for a long time while they are tested. This is not one of those features.


Some features remain in nightly for years.

A very relevant example is `build-std`, which builds the standard library (and does LTO) instead of copying a pre-built one. This feature has been on nightly for at least a couple of years.


Are you sure? If so then this is awesome news, but I'm a bit confused; the commit in that min-sized-rust repo adding `build-std` to the README was merged in August 2021: https://github.com/johnthagen/min-sized-rust/pull/30

Are you saying that the feature still hadn't "landed in Rust nightly" until recently? If so then what's the difference between a feature just being available in Rust nightly, vs having "landed"?

The Cargo doc page on `build-std` says that it is an "unstable feature" which will only end up in stable once it is stabilized: https://doc.rust-lang.org/cargo/reference/unstable.html#buil... and Kobzol's linked post above indicates that `build-std` is "sadly still unstable."


Yes, I am sure this is going to be a part of Rust 1.77.0 and it will release on 21st March. I say that because of the tag in the PR (https://github.com/rust-lang/cargo/pull/13257#event-11505613...).

I'm no expert on Rust compiler development, but my understanding is that all code merged into master is available on nightly. If it's not behind a feature flag (this one isn't), it'll be available in a full release within 12 weeks of being merged. Larger features that need a lot more testing remain behind feature flags. Once they are merged into master, they remain on nightly until they're sufficiently tested. The multi-threaded frontend (https://blog.rust-lang.org/2023/11/09/parallel-rustc.html) is an example of such a feature. It'll remain nightly-only for several months.

Again, I'm not an expert. This is based on what I've observed of Rust development.


Ah I see, I misunderstood: you were talking about `strip = "debuginfo"`, while I thought you were talking about `build-std`. Thank you for the link/clarification!


Oof, here I have mistaken the simple strip-debug-info default flag for the `build-std` work, which is much more complex. Sorry.


I indeed am confusing it with those features that seem to be in nightly forever. Thank you, compiling with beta is much more acceptable for me. (EDIT: or simply waiting for stable 1.77)


But why? I mean, I'm also obsessed with byte size from time to time and I do often optimize for size when it's doable, but in practice anything below 1 MB seems small enough that you don't need to optimize further. There are so many low-hanging fruits when it comes to Rust binary size...


I don't have a horse in this race, but a programming language that prides itself on "zero-cost abstractions" and then generates a Hello World program that is 90% wasted space doesn't leave a great impression.


Most debugging information is "useless" until it is desperately needed, and this "wasted space" was no exception. So as always, it's another trade-off: will you "waste" some bytes to ease yourself in the future? Cargo already decided that uninformed people don't want it for release mode, but it was found that this decision was not uniformly enforced and some debuginfo was still left there. So a new option was added and the default was changed. Happy ending!
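For reference, the option in question (which existed before the default changed) can be set explicitly in Cargo.toml; `strip = true` would additionally drop the symbol table:

```toml
[profile.release]
# Strip debug info from release binaries. Backtraces lose line numbers,
# but unwinding still works; this mirrors the new default behavior.
strip = "debuginfo"
```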


True, but it sounds (nearly?) "useless" based on the description provided by the author:

> For example, one thing that was noted is that if we strip the debug symbols by default, then backtraces of release builds will... not contain any debug info, such as line numbers. That is indeed true, but my claim is that these have not been useful anyway. If you have a binary that only has debug symbols for the standard library, but not for your own code, then even though the backtrace will contain some line numbers from stdlib, it will not really give you any useful context [...]


That's true, which is why it's now being stripped.


I don't judge by this but I have met programmers who do (as mentioned upthread) so yeah, I agree. It's a signal that is useful to many, and that signal should start being optimized sooner rather than later.


I don’t see how zero cost abstraction has anything to do with binary size. It’s about runtime efficiency of language features.

Optimizing binary size is a worthy and important goal because some people have use cases where it matters. If you are not one of those people, why fret about it?


> I don’t see how zero cost abstraction has anything to do with binary size.

But binary size is absolutely a cost. Whether or not it's a cost that matters to you is a different question, but it's still a cost.


> "zero-cost abstractions"

Zero cost in terms of CPU and RAM usage during execution time.


Program size affects disk/file system load time, relocation time, cache utilization, zram speed, and probably many things more.

Nothing is zero-cost if you look closely enough.


I don't disagree with your premise but bigger binaries lead to significantly more CPU cache churn. (EDIT: or not, see elsewhere in this thread for clarifications)


I work in embedded software and we fight for every megabyte of NAND flash. Multiple smaller executables add up, too.

Reducing the binary size is usually more important than performance (and often more important than memory safety, if we are totally honest...).

Personally, I hate bloated software just as much as slow software. It's crazy what you can do with 64 kilobytes, let the demoscene blow your mind: https://www.youtube.com/watch?v=ZfuierUvx1A

Would be awesome if compilers could do the same :)


In fairness, this is just doing something automatically that you could have done manually anyway. It's a good change because defaults matter, but it's always been possible to strip debug symbols from Rustc-generated binaries, and all the guides I've seen to reducing binary stuff have mentioned this already.

It's also important to remember that in embedded software, you're probably working in a no_std environment anyway, which means that the Rust standard library doesn't need to be included in the binary, which will already significantly reduce the size of your binary.

A lot of this is about tradeoffs, and I think the article does a good job of explaining what tradeoffs are relevant here. Yes, binaries should be as small as possible, but shipping multiple compiled standard libraries is also not ideal. The Rust team seem to have gone for a good default for people who are beginning, with lots of ways to tailor the build process for people who don't need debugging information, or who are willing to accept longer install/compile times in exchange for smaller, more optimised binaries.


I'm not criticizing Rust, I'm criticizing statements that doubt whether the effort to reduce the binary size is worth it.

We currently have some nasty C++/boost monstrosities, if Rust can deliver a better development experience AND reliable stack traces AND smaller binaries, it would be HUGE for embedded.


Stripping debug stuff has always been possible, yes, but that doesn't entirely address the problem of the stdlib being included piecemeal. I want a compiler that throws away everything that's not used in the final binary.

If that means that 90% of the stdlib is unused (which is likely true for many small projects) then it should not be included in the binary.


Isn't that the linker's job? To throw away unused functions?

I remember this being the standard decades ago. It might not make sense in certain situations (i.e. with reflection), but that shouldn't be the default case anyway. What happened?


Yes, but I was left with the impression that including the stdlib in its entirety prevents proper tree-shaking / dead code elimination. Could be wrong.


While I'm not a demoscener, I know enough about how they are generally made and have my own share of award-winning small programs. So I'm confident that they don't make a good example of non-bloated software.

Every demo is really amazing when you encounter it for the first time, but AFAIK 64K demos were hottest around 2000, when Farbrausch revolutionized the scene with .fr-08: .the .product [1]. Since then, 64K turned out to be too large because most 64K demos can be divided into multiple parts---engine, data and compressor---and each part can be individually developed. There are many 64K demos but far fewer engines and only a handful of compressors at this level. It also means that there are only a handful of people who can actually make engines and compressors. It's not fit software; it's rather unhealthily thin software. They are still awesome but they can't be a model.

[1] http://www.theproduct.de/


> I work in embedded software

But embedded Rust usually does not use stdlib. This thread started with:

> It's really a shame that Rust includes the stdlib piecemeal in binary form, debug symbols and all, in every final binary.

Which is not the case for embedded.


We are shipping a full blown custom Linux distribution, embedded software is more than just microcontrollers. Binary size is still just as relevant.


Are you confusing "embedded" for "microcontrollers without an operating system"? There are plenty of embedded CPUs that can run Linux. Buildroot even exists to make it easier to create distributions for these types of devices. For example, I would consider the reMarkable 2 to be an embedded device, but it runs full-blown Linux (to the point of having a bloated Qt stack).

That's probably an extreme case though, as nobody in their right mind would actually try to run Qt on an embedded device with any reasonable limit on resources. I'm sure rM only does it because they can afford to be wasteful - they have hundreds of megabytes of space for the operating system, and all the latency-sensitive stuff is definitely not built on Qt.

Reason I know this is because they offer SSH access to the device and an SDK.


You can't be everything to everyone. Packaging considerations for niche use cases rightfully take a back seat to debuggability and packaging considerations in modern web deployments and consumer devices.


You can always do a multi-call binary (busybox-style) so that you only get one copy of the stdlib. Coupled with build-std (and other min-size things), it should bring rust code size to a manageable level.
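To make the busybox idea concrete, here's a minimal sketch (the applet names are hypothetical): the binary inspects argv[0] and dispatches, so symlinking `hello` and `bye` to one executable gives you two "tools" with a single copy of the stdlib:

```rust
use std::env;
use std::path::Path;

// Pick a subcommand based on the name the binary was invoked as.
fn applet_for(invoked: &str) -> &'static str {
    match invoked {
        "hello" => "hello applet",
        "bye" => "bye applet",
        _ => "unknown applet",
    }
}

fn main() {
    // argv[0] is the path the program was invoked through; a symlink
    // named `hello` pointing at this binary selects the hello applet.
    let argv0 = env::args().next().unwrap_or_default();
    let name = Path::new(&argv0)
        .file_name()
        .and_then(|s| s.to_str())
        .unwrap_or("")
        .to_string();
    println!("{}", applet_for(&name));
}
```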


Out in JavaScript land no one cares about file size or wire time any more (but they should) --

Over in C, plenty of people work on things where file size matters. It is a big deal. System constraints (embedded), wire constraints (far-away systems where internet is slow and ephemeral), legacy systems where everything is going to be slow even moving that fat file around and you have to be present to update (ATM, ticket, vending machines)... The list goes on.

So it may NOT matter for the desktop, or mobile, or servers, but that's a tiny fraction of the computing out there.


I do have a lower threshold for JS apps, which is about 200 KB. This difference accounts for the fact that websites can be streamed while executables can't. I'm not saying that we shouldn't optimize for the size---rather I'm saying the size optimization is not worthwhile under some threshold.


Many MMOs do actually download updated client executables on the fly in the launcher, and that shouldn't make the player wait more than necessary. Even the Steam client has such an automatic update process (and at 200 MBytes this is already in "very annoying" territory IMHO).


I mean, yes! Games are on the opposite end of the spectrum, where uncompressed assets fly around and 100 MB sounds very small. These are what I refer to as "low-hanging fruits"---I expect a moderate effort can halve the total size in most cases. (I did work on asset file formats back when I was a gamedev.) A 400 KB executable is not really a big deal compared to that.


I don't have a hard limit for JS...

What I have is a wall for functionality.

Loading a 700 KB React blob (compressed, mind you) so I can read a web page is a hard no. You want to give me a rich GUI to do data editing in a browser? Bring it on, I'll take the download.

I know that Rust and TinyGo swap code here and there; binary sizes getting smaller on Rust might make me give it a poke for some of the more "cute" embedded/IoT things I like to play with... (another place where small matters!)


I agree with the threshold argument but I guess it differs for all of us. To me 4MB Hello World program is hilariously bad. I want Rust to compete with C. Not be better or even equal to C when it comes to binary sizes, but I want it to be close.


I agree that 4 MB is bad, but I felt 400 KB is fine given all circumstances.

If Rust really wants to minimize any overhead in spite of the necessity of backtrace supports, there are indeed many ways to minimize the cold section of executable. Even a simple compression will work---especially given that we already have a copy of miniz-oxide there! So try that if you are motivated, and I would more than welcome that effort, but Rust has way, way more important things to do than that.


I am 100% sure there are much more important issues right now and I sympathize with the lack of bandwidth. The one thing I am claiming is that some lower-hanging fruit (if it is indeed that; it might be super complex in fact) carries significant positive public-relations / image potential which should IMO be picked / harvested.

(EDIT: I am talking about the `build-std` work here, not the default strip debug info flag.)


> Out in javascript land no one cares about file size or wire time any more

Which is one of the reasons why I tend to leave JS disabled in my browser. Too many web devs have no care or concern for these sorts of things, which often makes JS a real resource drain.


What? JS is the only ecosystem I know of that really cares about file size, to the point that it's expected to see the size of a library in release form (minified, gzipped) on its homepage. That's vanishingly rare with any other language, including C.


Depends on what you're aiming for.

If we talk about ultra-low-power platforms, e.g. energy-harvesting IoT devices, 1MB is still quite a lot.

If we are going to argue that Rust can compete with C/C++, it needs to have similar performance, also regarding binary size.


The C/C++ standard library doesn't work in that environment anyway---you typically need a freestanding mode. The Rust equivalent is `#![no_std]`, which works well for popular boards.


To be technical, Rust deliberately splits out three elements: core (which is necessary for the Rust language and is always provided), alloc (which needs a heap allocator; not applicable if you don't have or don't want one), and then std (which expects an operating system, with features like files, sockets, threads, and knowing what the time is).

C++ just labels a few specific features as available in a "freestanding" C++ standard library if you have one (on an embedded platform presumably you do).

This makes it very easy to know what you're getting in Rust's stdlib under #![no_std], because it's all of core, so e.g.

https://doc.rust-lang.org/core/primitive.slice.html#method.s... vs. https://doc.rust-lang.org/std/primitive.slice.html#method.so...

At first glance those are identical, but no: std re-exports every feature core has, but it also gains features. For example, the stable sort function family only exists in std because it uses a temporary allocation, whereas the unstable sort (i.e. equal items may be re-ordered) provided even in core doesn't do that.

Figuring out what you get in your stdlib with C++ often comes down to suck it and see.
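The core/std sort split described above can be seen directly (a sketch; both calls are stable APIs): `sort_unstable` is provided by core and needs no allocator, while the stable sort family allocates a temporary buffer and so only exists once you have alloc/std:

```rust
fn main() {
    // Unstable sort: in-place, no allocation, available via core
    // even under #![no_std].
    let mut a = [3, 1, 2, 2];
    a.sort_unstable();
    assert_eq!(a, [1, 2, 2, 3]);

    // Stable sort: preserves the order of equal elements, but uses a
    // temporary allocation, so it requires alloc/std.
    let mut v = vec![(3, 'a'), (1, 'b'), (3, 'c')];
    v.sort_by_key(|&(k, _)| k);
    // 'a' stays before 'c' among the equal keys.
    assert_eq!(v, vec![(1, 'b'), (3, 'a'), (3, 'c')]);
}
```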


Mostly because CPU caches matter a lot. Executing 7-8 different Rust binaries in a loop, each north of 10 MB, is bound to be less performant than corresponding smaller programs.

Never had the time or dedication to actually verify this but I've been bitten by programs and OS-es that trash the cache too much and I've seen humanly perceivable lags because of it. But maybe in this case I am overreacting.


Would debug symbols ever be loaded into CPU cache? They'd be in RAM, sure. But why would the CPU ever load them into cache? The way I'm imagining this, and I could be wrong, is that the debug symbols are to one end of the binary. They wouldn't be loaded into cache unless they're going to be used imminently.

Another way of saying this is that the change to strip automatically is a small win for disk space and RAM consumption, but I don't think it's going to improve performance dramatically? I could be wrong about this though.


No, it's more likely that you are correct because it has been a long time since I worked that close to the metal and my info is outdated or downright wrong at this point.

Thanks for the nuance. If debug symbols indeed never go into the CPU cache then my remark is completely irrelevant.


I think most debug info is in its own section. On a real OS the pages are loaded lazily so no (significant) performance hit.

Still, I hate big Rust binaries as much as the next guy.


That won't fit into an ESP32, Arduino, or embedded hardware with similar restrictions, leaving the space for C and C++, or the Basic and Pascal compiler vendors still enjoying that space, like Mikroe.


> in practice anything below 1 MB seems small enough that you don't need to optimize further.

I think that it depends on what sort of machine you're aiming for the binary to run on. I develop for a few platforms where a 1MB executable size would be completely unacceptable.


Adds up with larger applications using a bunch of distinct binaries or many libraries.


> (and not just blindly copy-pasting the 4MB binary blob)

Where is this 4 MB claim coming from? I just built a hello world on macOS in release mode, with no special flags, and the result was 400 KB. That's absolutely larger than it should be, but it's a lot smaller than 4 MB.

Is it really that much worse on linux?


Macho executables don't contain debug info. Just references to .o files.


You are looking at the aftermath. Also, you could have manually put `strip = "debuginfo"` to avoid the 4 MB binary even before that; the issue is about the default when no `strip` option was given.


No, this is using stable Rust 1.75. The binary is not stripped at all, and it's still 400 KB, not 4 MB.


Actually my issue is with including the stdlib in its entirety, sorry for not clarifying.

Indeed we could always strip the binaries and I've done so on every Rust project I worked on.


You are aware that the stdlib is linked with --gc-sections, which tells the linker to drop unused code, right? The entire stdlib is not included in Rust binaries unless you use the entire stdlib. However, as this article shows, that does not apply to debug info.


Actually I was not aware, thanks for clarifying!

Hrm, I read the comments of another user who contributes to Rust and it seems there is various formatting / panic / abort / symbol and location translation code that contributes to the size and currently there is not much that can be done about it.

Oh well.


I was wondering if dead code elimination also removes the debug symbols related to the dead code?

The ~ 4 MiB of debug symbols the article talks about are for the whole libstd or just the portions that actually ended up in the binary?

I think the new default makes sense but I'd love to have the option to build a lean but debuggable release binary with just the needed symbols.


You can't really use dead code elimination on debug symbols, because you don't know which symbols you will need. You would need to know where and how your program will crash, or whether the user will want to use a debugger on it.


I don't understand. If a function isn't in the binary I will never need its name, so it does never have to be in the debug information. Likewise for variables.


I think linkers don't go this far to rewrite the debug info. If libstd is used, libstd's debug info is used.

The second problem is dynamic dispatch. The overhead in "hello world" is from printing and panicking machinery (it handles closed stdout), and these features use vtables, so it's even harder to precisely analyze what's actually used and what isn't.


So what's in the remaining huge 415KB hello world program? Can we get rid of 90% of it a couple more times?


Buffered I/O and synchronization primitives to avoid corrupting the buffer. The underlying vector implementation and memory allocator. Panic support and backtrace printer, which includes a Rust-specific name demangler and a not-so-small DWARF parser in Linux. Path support because backtraces would include source file paths. Zlib-compatible decompressor because ELF allows compressed sections (!). Then you have several formatters that are often pretty large (e.g. f32 and f64 would add at least 30 KB of binary, and some depend on Unicode grapheme clusters). They are all essential for edge cases that can happen even for such a simple program.


> several formatters that are often pretty large (e.g. f32 and f64 would add at least 30 KB of binary …)

I know floats are full of scary corner cases, but… Assuming tens of bytes per if statement, hundreds of corner cases just in formatting? Is it really that bad?


A naive implementation would be small, but a correct implementation (with some concrete definition of "correct") will need a sizable data table. I know this because I wrote that part of code [1] and somehow it is still in the std even though other algorithms now exist [2].

[1] https://github.com/rust-lang/rust/pull/24612

[2] https://github.com/rust-lang/rust/issues/52811


Why does it need to parse ELF?


The stack frame is a list of return addresses which have to be translated to a file name and line number. Such debug information is within a specific ELF section in the predefined format. In this simple case you may be able to hard-code the offset to the section (and guarantee that it was never compressed), but any additional C or Rust library will break this assumption, so a general parser has to be included.


Sorry if this is dumb but if the idea is stripping debug stuff, why would a parser for translating return addresses into file names and line numbers be useful?

EDIT: oh, ok, so I guess it's because strip is "debuginfo" here, rather than "true".


There is some work underway to enable removing the backtrace generation/parsing from Rust binaries. It's hardcoded for now though.


32 KB with the following configuration, on a Linux target. It's not the default, you're using nightly, it's more complex to build, and there are tradeoffs. You can even go lower than that if that's your thing:

  [profile.release]
  strip = true
  opt-level = "z"
  lto = true
  codegen-units = 1
  panic = "abort"
  cargo +nightly build -Z build-std=std,panic_abort -Z build-std-features=panic_immediate_abort --target x86_64-unknown-linux-gnu --release

As mentioned in another thread, I've simply followed https://github.com/johnthagen/min-sized-rust


These look like sensible defaults to me for release mode. What are the tradeoffs?


Comparing with the current Cargo default [1]:

• `panic = "abort"` means that any panic terminates the program. This is not always desirable because you may want to catch and recover from panics, particularly in long-running servers.

• `strip = true` means that anything depending on DWARF would no longer work. Backtraces won't work, but also unwinding will no longer work (so this is disastrous if you haven't set `panic` above). The actual proposal has `strip = "debuginfo"` instead, so unwinding will work while backtraces won't.

• `codegen-units = 1` sets the number of codegen units (CGUs) that the LLVM codegen phase can compile in parallel. A single CGU will significantly increase the compilation time while allowing a bit more optimization. Otherwise this is okay.

• `lto = true` enables Rust-specific link-time optimizations across crates. The actual benefit depends on the set of crates linked, but it is slow enough that many large projects wouldn't want it. It does benefit small programs like the "Hello, world" program the most, though.

• `opt-level = "z"` is the same as C/C++ `-Oz`, and the same pros and cons apply.

[1] https://doc.rust-lang.org/cargo/reference/profiles.html#rele...


All makes sense, but I don't agree with:

> This is not always desirable because you may want to catch and recover from panics, particularly in long-running servers.

IMHO a panic implies that execution cannot continue under any circumstances, and even any attempts for a graceful shutdown might be futile (if recovery is possible it shouldn't be a panic but done through regular error handling).

For a server process the best reaction to a panic would mean abort and clean restart.

> A single cgu will significantly increase the compilation time, while allowing a bit more optimization.

Increased build time is acceptable for release mode IMHO.


> IMHO a panic implies that execution cannot continue under any circumstances, and even any attempts for a graceful shutdown might be futile (if recovery is possible it shouldn't be a panic but done through regular error handling).

Rust panic is just a C++ exception in its implementation, and not every C++ programmer would terminate a process when an exception is thrown. Of course Rust panic is more resilient, because Rust provides memory safety and the logic error can be reasonably bounded as a result.

> Increased build time is acceptable for release mode IMHO.

I think the slowest possible configuration is at least 3x slower than the default, and that's too slow to be acceptable for most people. But you can always tune them up if you want---please note that this issue is all about defaults.


> For a server process the best reaction to a panic would mean abort and clean restart.

That's often infeasible, and creates a major DoS/reliability problem.

The server may be processing hundreds of different requests at the same time, and panic=abort will kill all of them. That creates a visible failure to other unrelated clients of the server, not just the offending request.

If you try to fix that and retry aborted requests, you'll retry the panic-inducing request and cause another failure (and it'll take several restarts to bisect out the offending request).

Plus a restart may be costly, require loading data, warming up caches, etc.

It's just way cheaper to catch a panic and return 500 to the offending request. Rust guarantees to panic before anything terrible memory-corrupting happens. Even if you don't trust it and would prefer to restart anyway, you have an option to gracefully hand over the traffic before the restart.
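A sketch of that pattern (the request handler here is hypothetical) using `std::panic::catch_unwind`, which only works under the default `panic = "unwind"` strategy: each request is isolated, so a panicking request fails alone while the rest succeed.

```rust
use std::panic;

// Handle one "request"; out-of-bounds indexing panics.
fn handle_request(idx: usize) -> usize {
    let data = [10, 20, 30];
    data[idx] // panics if idx >= 3
}

fn main() {
    // A server loop can wrap each request in catch_unwind and turn a
    // panic into an error response instead of killing the process.
    let results: Vec<Result<usize, ()>> = [1usize, 99, 2]
        .iter()
        .map(|&idx| panic::catch_unwind(|| handle_request(idx)).map_err(|_| ()))
        .collect();

    assert_eq!(results[0], Ok(20));
    assert!(results[1].is_err()); // the panicking request failed alone
    assert_eq!(results[2], Ok(30)); // later requests still succeed
}
```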


>For a server process the best reaction to a panic would mean abort and clean restart.

I suppose you could have a panic in one thread whilst others are still running fine. You may just want to shut those other threads down gracefully.


I remember my disappointment when I found out panics could be “caught” and that I was still living in the world of exceptions


Panics have to be catchable because it's undefined behavior for Rust to unwind through C code, so a Rust library that exposes a C API has to stop panics at the FFI boundary and propagate errors in a C-compatible way. It's not "catching" in the Java sense; nobody is using catch_panic to implement resumable error-handling.


You aren't. They're only caught in special cases, e.g. if a thread panics and you want to propagate that to other threads, to catch panics in unit tests, to avoid unwinding across FFI, etc.

You shouldn't use them as a general exception mechanism. They aren't the same, even if under the hood they both use stack unwinding.


Naively, what is the difference between panics and exceptions? Is it the philosophy on when/how to use them?


Yes, and also exceptions can have arbitrary value associated with them. You aren't going to get far using panics as a general error handling mechanism because there's only one type of panic.


Oh no, I didn't know this. I'm adding panic = "abort" to all my projects. If I wanted exceptions I wouldn't be using rust.


That's actually not too bad if you know you don't need to recover from panic. I kinda hope that `panic = "abort"` is indeed default unless overridden by dependencies, say, tokio...



opt-level = "z" may be slightly slower than O2/O3. lto = true and codegen-units = 1 make the final linking unbearably slow for large programs.

These are still okay, but then `panic = "abort"` shakes a lot of code off by making the final executable unable to print a nice stack trace (even without symbols) and instead immediately traps/abort()-s.

Edit: and GP also used `panic_immediate_abort`, which also removes the dependency on the std::fmt formatting machinery, because it now silently abort()s without even printing an error string.


> These are still okay, but then `panic = "abort"` shakes a lot of code off by making the final executable unable to print a nice stack trace (even without symbols) and instead immediately traps/abort()-s.

I just tried this. The stack traces on panic seem more or less the same with or without panic = "abort" in Cargo.toml.

For example, this program:

    fn main() {
        let v = vec![1, 2, 3];
        v[99];
    }
Compiled with panic="abort" outputs this stack trace:

    $ RUST_BACKTRACE=1 cargo run --release
      Compiling rust-panic v0.1.0 (/Users/seph/temp/rust-panic)
        Finished release [optimized] target(s) in 1.82s
        Running `target/release/rust-panic`
    thread 'main' panicked at src/main.rs:4:6:
    index out of bounds: the len is 3 but the index is 99
    stack backtrace:
      0: _rust_begin_unwind
      1: core::panicking::panic_fmt
      2: core::panicking::panic_bounds_check
      3: <alloc::vec::Vec<T,A> as core::ops::index::Index<I>>::index
      4: rust_panic::main
    note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
    [1]    95405 abort      RUST_BACKTRACE=1 cargo run --release
Weirdly, in this test if I don't strip the binary, I get a larger binary size with panic="abort" than when I leave that out. That is surely due to a bug somewhere.


You're not adding the +nightly switch and the required flags to compile std. Just setting panic=abort in Cargo.toml isn't enough.

One can get binaries pretty damn small (low-mid tens of kilobytes for a basic cli program doing something like hashing of a file).

Problem I've found with manually compiling std (which has ancillary benefits of being able to compile to a specific uarch) is it can break the compilation process when bringing in third-party deps. The config.toml (stored in $PROJECT_ROOT/.cargo) overrides cargo's behaviour for all dependencies as well - which may break those compilations.

Tbh, it's one reason I don't particularly rate the rustc+cargo toolchain - but most people writing regular applications just want to be able to do `cargo build -r` and not care about binary size, uarch optimization or custom llvm/rustc optimizations (PGO etc.).


> `panic = "abort"` shakes a lot of code off by making the final executable unable to print a nice stack trace (even without symbols) and instead immediately traps/abort()-s.

I believe it does print backtraces then terminate, since the backtrace is printed via panic hooks, which happen before the actual unwinding.


Is the backtrace performed by the OS and that's why it still works? And the debug info needs to still be provided, just not an unwinding mechanism?


Additionally, panic = "abort" disables stack unwinding, and therefore running destructors when a panic occurs, so it's a big change to the semantics of the program.

It's comparable to C++ land -fno-exceptions (not exactly, but similar).


Abort panic strategy is common in firmware and osdev, for anyone wondering if there are other use cases.


IMHO when a piece of code decides to panic it should only happen when execution cannot continue under any circumstances, and in such cases all bets are off whether any cleanup code would actually still work. A hard abort might indeed be the best option.


That seems overly pessimistic (or perhaps overly optimistic about code that does not panic). Panics mostly exist to guard code from entering into such unrecoverable states, eg writing past the end of an array.


For me the difference would be: if the write past the array has been detected before it happens (via a range check) it would be a regular error which can be handled, while a panic would be "oops somebody else has written past the end of the array" (e.g. hitting a canary check), which should abort immediately because at that point it's no longer guaranteed that any recovery code would even work or wouldn't just make things worse (e.g. trying to flush data from memory back to disk, but that data might have been corrupted too).
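That distinction maps directly onto Rust's slice API: `get` surfaces the range check as an ordinary value you can handle, while indexing panics on failure. A minimal sketch:

```rust
fn main() {
    let v = [1, 2, 3];
    // Checked access: the failed range check is a plain value to handle.
    match v.get(99) {
        Some(x) => println!("got {x}"),
        None => println!("index out of range, handled gracefully"),
    }
    // By contrast, `v[99]` performs the same check but panics on failure.
}
```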


You have a good point and lock poisoning had the same reasoning, but wasn't really popular [1]. (In hindsight the poisoning itself was a great idea but should have been decoupled from locks.) Compared to other languages with similar constructs, Rust panic is probably regarded as less fatal because an incorrect but safe Rust code tends to have a limited reach.

Ah, and it should be also noted that some non-fatal signals were also delivered only via panic. The best-known example is a memory allocation failure, which is recoverable in Rust but needed unwinding for a long time. Nowadays you have an unsafe but non-unwinding alternative.

[1] https://blog.rust-lang.org/2020/12/11/lock-poisoning-survey....
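One stable, non-panicking allocation API worth noting here is `Vec::try_reserve` (stable since Rust 1.57), which reports allocation failure as a `Result` instead of invoking the allocation-error handler:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();
    // try_reserve returns Err on allocation failure instead of aborting.
    match v.try_reserve(1024) {
        Ok(()) => println!("reserved 1024 bytes"),
        Err(e) => eprintln!("allocation failed: {e}"),
    }
}
```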


In Rust, out of bound array accesses are detected before they happen and handled by panicking.


opt-level z would not be a reasonable default: you should normally optimise for speed, not binary size.

lto and codegen-units=1 have a huge compile time cost. For release for general distribution you should tend to favour them, but the release profile isn’t just about that activity (especially because debug/opt-level<2 is often just too slow to use while developing). You commonly want to create another profile for production distribution.

Abort on panic changes runtime behaviour by stopping you from catching panics, which will completely break some programs, and harm the failure mode of others, so that e.g. one defective route on a web server will suddenly take the entire website down for everyone (or, if you have a supervisor that can restart the server, at least disrupt it for everyone).
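A sketch of that behaviour difference: under the default unwind strategy, `std::panic::catch_unwind` lets a supervisor (e.g. a web server's per-request handler) contain a panic, whereas under panic = "abort" the same panic would kill the whole process.

```rust
use std::panic;

fn main() {
    // With the default panic = "unwind" this recovers; compiled with
    // panic = "abort" the process would die inside the closure instead.
    // (The default panic hook still prints the message to stderr.)
    let result = panic::catch_unwind(|| {
        let v = vec![1, 2, 3];
        v[99] // out-of-bounds index panics
    });
    assert!(result.is_err());
    println!("recovered, server keeps running");
}
```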


> you should normally optimise for speed, not binary size

IME optimize for speed vs size usually isn't as clear cut as the name says though, sometimes smaller code does indeed run faster, but in most cases I've seen there's not much of a difference between -O3, -O2, -Os and -Oz.


Weirdly, most of the time I find that -O3 produces smaller binaries than -Oz.


Mostly the formatting machinery from stdlib. You can go sub-100 KiB or even less without it, but unless you target embedded, there's really no reason to do that, IMO.


Initial binary size, like that of a hello world, is a great indicator of how much abstraction the language has that you're also paying for.

For example, Java has to set up a constants pool, parse it from the classfile, run init and cinit, resolve references, allocate a frame, and so on, just to get to the entrypoint.

Compared to C without stdlib which needs to basically just run a few bytes to syscall write().

There are obviously massive advantages to Java for which these steps are needed, but you do pay for it.


I'm not trying to be rude, but what does your comment have to do with the actual article linked. I suspect you're commenting based solely on the title? If you read the article it mentions 90% reduction if you remove the debug symbols from the rust std lib that get bundled in the final hello world binary. A conversation about Java's size and abstraction is quite irrelevant to the article.


The article talks about how "hello world" binary size can be part of the first impression someone new to your language has. Java is a nice illustration of that. Rust probably isn't, because many of Rust's abstractions are zero-cost, so it's not as obvious a comparison.


My comment is pretty unrelated to the content of the article, but more has to do with what others said here in the comments


It’s funny, because your claim that it’s a great indicator is contradicted by the article we’re discussing. Rust was widely considered to be “only pay for what you use” despite having 4MB hello world programs. And now that it’s 400KB that continues to be true. So in this case, 4MB wasn’t a good indicator was it?


It's funny that it was already considered to be “only pay for what you use” when the binaries contained half of H. P. Lovecraft's œuvre.

See issue #13871

https://github.com/rust-lang/rust/issues/13871

I exaggerate; the point is that we've come a long way and are still getting better. Different people look at different metrics, and the more popular the language becomes, the bigger the variety of metrics that come into focus.


Should've used that as a ballast instead [1].

[1] https://web.archive.org/web/20230708052006/https://www.gamed... (search for "The Programming Antihero")


It's still a great indicator, it's just that indicators don't mean very much sometimes.


Out of curiosity, I looked at a basic hello world in C and compiled it naively using gcc on Ubuntu. By default the executable produced was 15960 bytes, but when adding the -Os option (from the man page: "-Os Optimize for size") to enable many optimisation flags including ones for code size, the executable actually came out slightly larger, at 15968 bytes.

  #include <stdio.h>
  int main() { printf("Hello, World!"); return 0;}
Of course, if I had read more than 3 words in the man page, the answer was easy to understand: "-Os Optimize for size. -Os enables all -O2 optimizations except those that often increase code size", so you can't really get a lower size than the default using only the optimization system. There's also "-finline-functions" included in -Os, but it won't help you in a Hello World.


This program is not the same as the Rust version. A more faithful version, assuming glibc, would look like this:

    #include <stdio.h>
    #include <stdlib.h>
    #include <execinfo.h>
    #include <errno.h>
    
    void print_backtrace(void) {
        void *traces[50];
        char **symbols;
        int num_traces, i;
    
        num_traces = backtrace(traces, sizeof(traces) / sizeof(*traces));
        strings = backtrace_symbols(traces, num_traces);
        if (!strings) return;
        for (i = 0; i < num_traces; ++i) {
            fprintf("%d: %s\n", i + 1, strings[i]);
        }
        free(strings);
    }
    
    int main(void) {
        static const char FMT[] = "Hello, World!\n";
        static int EXPECTED = (int) (sizeof(FMT) - 1);
        int ret = printf(FMT);
        if (ret < 0) {
            fprintf(stderr, "printf failed: %s\n", strerror(errno));
            print_backtrace();
            return 1;
        }
        if (ret != EXPECTED) {
            fprintf(stderr, "printf failed: only %d characters were written\n", ret);
            print_backtrace();
            return 1;
        }
        return 0;
    }
While this is still substantially different (for example, Rust's I/O buffering is different from C), this should be enough to demonstrate that this comparison is very unfair.


Building your code (I fixed a few typos, strings -> symbols, fprintf(" -> fprintf(stderr, ") with

    gcc -s -Os -fuse-ld=lld a.c && ls -al a.out
leads to a 5496 bytes ELF though. Which is not much larger than just printf("Hello World!\n"), see sibling comment.

I think the point is C "cheated" by including a lot of goodness (format, backtrace etc.) in the shared library so it does not have to be copied into each binary.


Thank you for the actual testing (and sorry for the typos...). Yes, the C version is small because it dynamically links to libc.so in this case, and I meant to compare against a statically linked version. I primarily wrote this example as a response to the claim that an "equivalent" C program with musl is very small (it isn't really equivalent).


The problem is musl does not support execinfo. For a simple hello world, I managed to statically link with musl to get something with 7888 bytes.


    $ gcc -Os a.c && ls -al a.out && size
    -rwxr-xr-x 1 user user 15952 Jan 24 17:49 a.out
       text    data     bss     dec     hex filename
       1316     584       8    1908     774 a.out
    $ gcc -s -Os a.c && ls -al a.out && size
    -rwxr-xr-x 1 user user 14472 Jan 24 17:50 a.out
       text    data     bss     dec     hex filename
       1316     584       8    1908     774 a.out
    $ gcc -s -Os -fuse-ld=lld a.c && ls -al a.out && size
    -rwxr-xr-x 1 user user 4552 Jan 24 17:50 a.out
       text    data     bss     dec     hex filename
       1199     528       1    1728     6c0 a.out


    zig cc -Os -target x86_64-linux-musl hello.c -o hello
...which basically calls Clang under the hood, but comes with out-of-the-box cross-compilation support for Linux and musl, creates a 5136-byte executable.


Debug symbols increase the binary size and don't really have anything to do with the abstraction level of the language. My understanding is that you also don't really "pay for" the debug symbols at runtime (until you generate a stack trace and need them). There could be some nuance that I'm missing there.


More abstraction generally means more functions which means more debug symbols. For example, iterating over an array in Rust involves iterators, slices, options. There are lots of function calls, which all contribute debug info. In C, iterating over an array is a for loop and pointer arithmetic, no function calls, so much less debug info.


But that's abstraction that you're actually using. The parent to my comment was implying that the debug symbols are an indication of abstraction that you're paying for whether you use it or not (in contradiction to the "zero cost abstraction" mantra/motto of C++ and Rust).

You're perfectly welcome to just use a raw for loop with numeric indexes in Rust, in which case you wouldn't have those extra function calls for iterators, etc. Likewise, if you implemented an iterator abstraction in C, you'd end up paying a similar cost. So, that's not what the grandparent comment was talking about.
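To make the comparison concrete, both styles compile to the same result in Rust; the iterator chain goes through more stdlib functions (and thus contributes more debug info), while the raw loop uses only indexing:

```rust
fn main() {
    let v = [1, 2, 3, 4];
    // Iterator style: goes through Iterator::sum and friends.
    let a: i32 = v.iter().sum();
    // Index style: just a counter and bounds-checked indexing.
    let mut b = 0;
    for i in 0..v.len() {
        b += v[i];
    }
    assert_eq!(a, b);
    println!("{a}");
}
```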


It'd be nice if the default was external symbols rather than none at all


I'd find that a bit counterintuitive. To me, --release is an alternative for --debug. Debug implies debug stuff is in there, therefore the alternative removes it.

But I do find it useful now and then to have debug symbols in production, so that monitoring and telemetry gets some context. But to me, it's rather logical that I need to add this to the build.

I would, however, love it when it's trivial to get these symbols set up external.


GNU binutils and LLVM both have support for debuginfod server topologies now, it would be great to see Rust tools get on the train.

https://github.com/llvm/llvm-project/commit/36f01909a0e29c10...

And naturally we already have a port: https://crates.io/crates/debuginfod-rs


It's great to see rust maturing. That people care about binary size and work to improve it is a good sign.


It's not a great sign that this bug existed for 7 years, people noticed, asked about it, wrote tickets, everybody agreed that it should be fixed but then nobody took the time to actually do it (and the eventual fix is a rather crude hack by just stripping the output binary instead of preventing that debug symbols slip into a release binary in the first place).

Looks more like a systemic issue in the Rust development process to me tbh.

What's more shocking though is that even after a 90% size reduction, a vanilla hello world is still 415 KBytes. That's about 10..100x bigger than I would expect from a low level "systems programming language".


A vast majority of that 415 KB is due to the backtrace support, which is amazingly complicated. (See my other comment for specifics.) It pulls in some more machinery from std, and a "simple" hello world can always panic if stdout is closed or the like, so that part of std cannot be easily removed unless you are fine with useless backtraces.

Also, Rust has no direct platform support unlike C. So everything has to be statically linked to be portable. A statically linked glibc is indeed much larger than that (~800 KB in my machine). Conversely, you can sacrifice portability and link to `libstd*.so` dynamically to get a very small binary (~17 KB in my machine, both for C and Rust).
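The dynamic-linking variant mentioned here corresponds to rustc's `-C prefer-dynamic` codegen flag (a real flag; the resulting binary then needs the matching `libstd-*.so` on the loader path, so it only runs where the same toolchain is installed):

```shell
rustc -O -C prefer-dynamic hello.rs   # links std as a shared library
```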


For a statically linked libc I would expect that only the stdlib code that's actually used is included in the program (even without LTO).

For instance a statically linked C hello world for Linux via MUSL (cross-compiled with `zig cc -Os -target x86_64-linux-musl hello.c -o hello` because I'm currently on a Mac) is just 5 KBytes.


Musl doesn't have anything like Rust backtraces so it's not a fair comparison. By the way, glibc does have one [1], and I would be surprised if adding a backtrace printer doesn't significantly increase the binary size.

[1] https://www.gnu.org/software/libc/manual/html_node/Backtrace...


> nobody took the time to actually do it

Because in the end almost nobody actually cares about it enough to create a fix.

Small binaries are great, but people care mainly about how fast it compiles and how fast it runs, and in the few cases where the binary size is important it was already possible to shrink it significantly (more than with this new change). In my entire life I have never heard a user complain about the size of the binary, what people really care about is efficiency/speed at runtime. That's why people regularly mention VS Code using Electron, but not that its installation package alone has >500MB.


What bothers me more than the actual binary size (yes ok, the remaining 400 KBytes may actually be justified as pointed out by @lifthrasiir) is that the additional 3.5 MBytes baggage of stdlib debug info was just dead weight. They could just as well have included 3.5 MBytes of noise in each executable.


As pointed out by the author, people who care could already strip the debug info.

C was created in a time when binary sizes used to be important for every program. These days, it only matters for a small subset of use cases.

If anything, the fact that no one fixed it for this long is an indicator that it doesn't matter much for the kinds of things people typically build with rust.

It's nice that there's a fix now, but would I care if Firefox or Chrome binary was 4MB bigger?


If you read the article you would know that this is just about changing defaults. You already could achieve this by editing your project's Cargo.toml. As is documented and discussed in several places over the years (google "shrink rust binary size").

The `strip` was added to rust nightly in 2020.

1: https://github.com/johnthagen/min-sized-rust

2: https://kerkour.com/optimize-rust-binary-size

3: https://rustrepo.com/repo/johnthagen-min-sized-rust

4: https://sing.stanford.edu/site/publications/rust-lctes22.pdf

5: https://arusahni.net/blog/2020/03/optimizing-rust-binary-siz...


> but then nobody took the time to actually do it

That happens all the time, it's called prioritizing. If you don't let people prioritize, they will burn out and leave the project. That's not what you want.


If people only pick "interesting" problems to work on and ignore fundamental (but boring) issues like this, then those issues will just pile up and frustration and burnout will grow even more in the long run. At some point you'll have to stop feature development for a while and put all hands on cleaning up the accumulated cruft.


The issue was initially filed in 2017, but was partially solved by the introduction of `strip = true` in 2020 (nightly) or 2022 (stable). The OP wrote a concrete and comprehensive proposal to fully solve the issue in late 2023. Unlike your claim, people did periodically revisit this issue and even largely solve it.


This is not a fundamental issue, it's just a kind of boring papercut that Rust has many examples of. If you need to do a whole lot of "foundational" yak shaving to ultimately accomplish something else that's clearly worthwhile, the Rust folks are quite comfortable with that: it's what the WG's and initiatives are for.


This isn't a fundamental issue, but it is very, very boring.


I applaud this initiative. yet I wonder - the ideal situation would be to not throw away the debug info (ever), but rather it should be put into an external file. even Linux supports this for many years and nowadays we even have direct support with dwarf5/dwo. has Rust taken any action in that direction?

after all: without debug info (which you do not want to ship to customers necessarily), you cannot do profiling or debugging in any meaningful way...


Cargo does support `split-debuginfo` [1] but it still has some rough edges in my experience. I do think that is the ultimate way to go in the future.

[1] https://doc.rust-lang.org/cargo/reference/profiles.html#spli...
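For reference, `split-debuginfo` is set per profile in Cargo.toml; "packed" puts the debug info into a single separate artifact (a .dwp, .dSYM, or .pdb depending on the platform):

```toml
[profile.release]
debug = true
split-debuginfo = "packed"   # other accepted values: "off", "unpacked"
```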


Splitting the debug info from the executable is supported on the 3-big-OSes but only enabled by default on Windows (and maybe macOS?)


Many Linux distributions have debuginfod servers that supply split debugging symbols on demand. So one could argue that it's implemented by default on Linux too.


> I can imagine a situation where a seasoned C or C++ programmer ... make fun of it on the forums

Is this really a thing? Do C folks make fun of Rust?


What is this “C programmer” you speak of?

Anyone who’s been doing this for long enough is a polyglot.


That mirrors my experience. I've hardly ever met really experienced people that are hardstuck on language flaming. Programming languages are tools, with general and situational up and downsides.


Same.

Reddit gave me some interesting insights into people like this. Whenever I come across a very radical opinion, I check their whole profile. It is often someone without much if any experience, and clearly doing it just for the sake of tribalism.


People can be quite tribal. Especially the ones who haven't taken the time to learn what the other tribe is up to and why.


Yes, we do, in some contexts.

Because despite its complexity, slow compile times, lack of a specification or more than one viable implementation and bloated binary size, people still tout it as a C replacement.

For application programming, Rust is fine, but for embedded and systems programming, nearly any of those on its own can be enough to eliminate Rust as an option, depending on the situation.

Except for complexity, they are all solvable and being worked on, for this very reason. But Rust can never replace C completely, because it's not simple, and there is a sizeable portion of primarily C programmers who are minimalists.


For what it's worth, I was able to write Rust for the 8-bit AVR microcontroller platform: 256 bytes of RAM and flash for 1024 CPU instructions.

In this comment I describe what I did to remove the bloat: https://news.ycombinator.com/item?id=36394426

It's not the default and it took me some searching and reading to get there. I wonder if Cargo could have some vastly different default per target. As everybody said in this thread, defaults matters.


1024 instructions? At that point I'd probably just use assembly. At that size, it would still be manageable.

Hmm, maybe that's the fundamental difference in thinking; the people I'm thinking of would reach for the simplest tool that can feasibly handle the job, even if it's somewhat harder to use.


In those 1024 instructions I was able to fit a decoder for standard 433 MHz remote controls, with learning of remotes (stored in EEPROM), a button, a rotary encoder input, and a dozen LED outputs through a 16-bit shift register. It also controls a digital potentiometer, used to adjust a power supply output voltage driving a 20-meter LED string.

I am sure a pure assembly implementation would have saved some room in the FLASH, but the generated machine code wasn't that bad for what I can tell with my limited experience anyways. And it sure was nice to write it in Rust. Except a codegen bug in the version of Rust at the time ;) That one was painful!


You forgot the funniest part, the programming socks.


> In fact, this new default will be used for any profile which does not enable `debuginfo` anywhere in its dependency chain, not just for the release profile.

won't this destroy stack trace support?


> To reduce download bandwidth, it does not come in two variants (with and without debug symbols), but only in the more general variant with debug symbols.

Why not instead only without?


If you have them, but don't want them, you can throw them out. If you don't have them, but want them (as does anyone who doesn't build in release mode when developing), you're out of luck.

Sure, you can rebuild the standard library, but it's a lot simpler to strip the output than set things up so the first time you want debug symbols, it has to rebuild std, cache it somewhere, not rebuild it again on the next build, but be sure to invalidate that cache the next time std gets updated.

And in general, not wanting debug symbols is the last step in development, before making your first release. Before then, you pretty much always want those debug symbols, except for when you're benchmarking binary size or something.

So if they were going to ship std without debug symbols, they'd probably just be better off not shipping a prebuilt std at all, as pretty much everyone would end up having to build it on first run of 'cargo build' anyway. (Which is maybe fine, actually?)


not a rust programmer. can someone explain why this is in the final binary and not in a shared library?


The Rust compiler statically links everything into a single binary. This can be changed, but is generally discouraged as there are no ABI stability guarantees at this point.


You can use the C ABI if you want something stable, and there are custom crates that will help with translating from the Rust to the C ABI. (This also helps because the resulting shared objects can potentially be FFI'd with from any language, not just Rust. They might as well be plain C libraries as far as anyone is concerned, only with the usual Rust safety requirements.)


Also, even in "the" C ABI provided by Rust out of the box there's a certain amount of "well, this is probably what your C compiler does here, but there's no requirement" rather than an actual hard ABI document. A lot of the "we're ABI stable" claims in C and C++ are really "we daren't change anything or stuff breaks", which isn't so much an ABI as it is paralysis, and Rust is confident it doesn't want to do that.


> A lot of the "We're ABI stable" claims in C and C++ are "We daren't change anything or stuff breaks" which isn't so much an ABI as it is paralysis

For the C++ standard library, maybe, but for pretty much all other libraries that provide ABI compatibility it's a conscious and properly followed decision.


> "We're ABI stable" claims in C and C++ are "We daren't change anything or stuff breaks" which isn't so much an ABI as it is paralysis and Rust is confident it doesn't want to do that.

For C++, yes sure, but I thought C usually has a pretty well specified ABI on most platforms, no?


I think it's true for common enough platforms, but C is available in too many platforms so I'm not sure it's generally true.


"Rust tries to appeal to programmers coming from many different backgrounds, and not everyone knows that something like stripping binaries even exists."


At this moment, I am compiling GCC 10.5 on a RPi Model B with 256M of RAM. The root filesystem is on a ramdisk embedded in the kernel. The SD card from which I booted is read-only.

The compilation runs not from the SD card but from a chroot to a large external drive plugged into the RPi via USB.

Once I have GCC and rest of the toolchain built, I then use it to compile custom kernels with embedded ramdisks.

https://en.wikipedia.org/wiki/Self-hosting_(compilers)

Wish I had a computer with 32GB RAM but I don't.

I use strip -s every day.


It's not ideally written but it's safe to assume if someone compiles a quick test program for that purpose and it ends up being 4MB, they might just move on to other options instead of starting to think about how they can manually reduce the binary size.

When evaluating languages, you rarely look for good-enough results to work around.


My machine has 32GB of RAM, why should I care about binary size? For embedded, sure. But most people don't write for embedded.

If it was like 1GB, sure. But it aint.


The other replies cover "the why" adequately already but as a tangent:

Every company makes some effort to be green and eco friendly these days, yet we're perfectly happy wasting billions of compute cycles every second because "who cares" and "we can"... Same mindset.


Because your employer might want the deployed final binary work fast on a 1 vCPU container with 1GB of RAM.


Yes. The statement I agree the most with in the blog post is "defaults matter".


> I can imagine a situation where a seasoned C or C++ programmer wants to try Rust, compiles a small program in release mode, notices the resulting binary size, and then immediately gives up on the language and goes to make fun of it on the forums.

While I'm not a seasoned C or C++ programmer I have definitely done this to a few new languages when playing around with them. My thought was "If helloworld.rust is this large, it'll be huge once I've actually written more code".


That’s almost never the case, though. Imagine if a C binary statically linked in libc. Adding a page full of more C to its code would only grow it a tiny amount.


A statically linked C hello world is just 5 KBytes though (with MUSL on Linux). Static linking doesn't mean that the entire library is included in the binary, only code that's actually used (details vary depending on how the library was created, and how the program was linked, but those 5 KB are with defaults).


I would hope that it could statically link libc and exclude the unreachable portions.


Rust can't do this very well because it doesn't build std as part of the project (that's still an unstable option); it links pre-built object code.


C doesn't either, but statically linking still pulls only the object files which are actually needed into the executable.


I think that's the main reason why every stdlib function lives in its own source file in MUSL:

https://git.musl-libc.org/cgit/musl/tree/src/stdio

Not sure if this can easily be replicated in Rust though and LTO must be used instead for dead code removal.


A Rust crate is a single translation unit. It's the precise equivalent of a C source or object file.


Link-time optimization should have been beneficial for this case... unless you absolutely needed quite a bit of `std` to print backtraces.


That's ignoring the difference between fixed costs and marginal costs.

When the program grows, the fixed costs stop mattering.


Yup same. Also I hate the "just strip it" advice. I don't know who on earth strips random binaries and has them actually work fine afterward 100% of the time. Stripping after linking frequently breaks binaries I try it on.


All binaries on a classic Linux distro are stripped (you can use 'file' to quickly check some), so your issues might be worth reporting


Stripped via -s or via strip?


I never understood why strip's "--strip-unneeded" flag wasn't the default. Leaving that out can often enough give you a binary that just doesn't work.


Yup that's exactly what I mean. Nobody ever mentions that flag when recommending stripping, and it's frustrating when you realize it's broken. It's been a few years, but even when I finally discovered and tried that flag, it still didn't quite do the right thing. Either it failed to strip stuff that -s stripped, or it still produced a binary that didn't work... I forget what exactly the issue was. I just remember failing to find an alternative to cc -s that was guaranteed to work.


They're really gonna panic when they deploy their first qt app then.


Statically linked Qt 4.x wasn't actually so bad. But that's not compatible with the LGPL license for closed-source apps, you had to get a commercial license. The problem with DLL based Qt apps is that you ship a ton of code that's never called.


I was just thinking about the size of the app if you include all the needed libraries for even a basic gui app :) . I guess it was a bit of a tangential comment.


[flagged]


I have implemented the fix and got it approved in the span of about 3 weeks (and that was over the Christmas!). Doesn't seem that bad to me, given that it's a change that will affect pretty much any Rust user by default.


I was curious about how an implementation for this would work and browsed the PR. Cool stuff, thanks for putting the effort in!

I noticed a very tiny typo which doesn't affect behaviour I think. https://github.com/rust-lang/cargo/pull/13257/files#r1464506...


Even if just focused on time alone, that's 3 weeks and all the years when you kept rediscovering it

Then there is the fact that it required establishing the whole working group

All for a simple fix that positively affects pretty much any Rust user


It didn't really require establishing a working group :) That is just an effort to improve Rust binary sizes in general.

In the end, changes in Rust can take a very long time (years) sometimes. It's maintained and developed by volunteers, and it has been growing exponentially these past few years, so not everything can move forward as fast as we would like.


That's not "process overhead" it's just a bit of feature/code review. Which is necessary if you want to ensure that you're making consistently high quality software.


Low priority issue easily fixed by googling first if you actually care.

And it isn’t like release builds with debug symbols are a bad thing. Getting good bug reports matters, too.


As the article points out, having debug symbols for only the stdlib is not very useful.


Except it is a bad thing, you just need to increase the low priority of reading the article



