Hacker News
Why Rust in Production? (corrode.dev)
152 points by zaphodias 10 months ago | hide | past | favorite | 69 comments



This is such a well written post. No fluff, gets straight to point while still offering some context, easy to skim through and get the main points. Bravo, top notch technical communication.


This is a very good article that doesn't shy away from downsides. Below are my personal not-so-humble opinions.

> Rust has a great developer experience. Its type system is very powerful and allows you to encode complex invariants about your system in the type system.

Usually this means: we have three people doing type-level magic, no one understands it, and when they inevitably quit, no one can figure out how it works or why it takes weeks to add a small change.

> Related to the previous point, the Rust community is still relatively small. It is hard to find developers with professional Rust experience.

This directly correlates with what was written previously: "Many developers enjoy working with Rust. It is the most admired language for the 6th year in a row". Enjoyment often comes from early adopters, and people trying the language for some side projects.

I'll admit, however, that Rust seems to have crossed the chasm in the adoption cycle.

> Rust has a famously steep learning curve. It is a complex language with many advanced features.

Combined with "It is hard to find developers with professional Rust experience" and "mostly training their developers on the job", stay away from it unless you know exactly what you are doing.

> more than 2/3 of respondents are confident in contributing to a Rust codebase within two months or less when learning Rust.

This is a huge amount of time. However, it is unclear whether this is due to the language, to being thrust into a new domain/role/job, or both.


You can skip type system magic, and have people just keep introducing bugs that it would prevent over and over, writing more tests trying to prevent it, never quite succeeding, but overall "feeling productive" and "being agile" along the way.

Having said that - yeah, making good apis and abstractions that prevent mistakes takes time and some skill, and pays off gradually proportional to the scale and longevity of the project. And for certain things is not worth it. Being able to make a good judgment if enforcing something at the compile time / runtime / at all is worth it is part of the skill.
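To make the trade-off concrete, here is a minimal sketch (with hypothetical names) of the cheap end of "enforcing something at compile time": a newtype whose constructor is the only way to obtain a value, so the invariant cannot regress elsewhere in the codebase.

```rust
// Hypothetical example: make "empty username" unrepresentable,
// so no downstream code ever needs to re-check it.
struct Username(String);

impl Username {
    fn new(s: &str) -> Option<Username> {
        let s = s.trim();
        if s.is_empty() { None } else { Some(Username(s.to_string())) }
    }
}

fn greet(u: &Username) -> String {
    // Any Username that exists has already passed validation.
    format!("hello, {}", u.0)
}

fn main() {
    assert!(Username::new("   ").is_none());
    let u = Username::new("alice").unwrap();
    assert_eq!(greet(&u), "hello, alice");
}
```

This is the low-magic end of the spectrum; the complaints above are about taking the same idea dozens of levels deeper.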


> You can skip type system magic, and have people just keep introducing bugs that it would prevent over and over, writing more tests trying to prevent it, never quite succeeding, but overall "feeling productive" and "being agile" along the way.

There's a middle ground, and I was specifically responding to the quoted bit: "Its type system is very powerful and allows you to encode complex invariants about your system in the type system."

Once people start "encoding complex invariants in the type system", it becomes an unholy mess only one or two people on the team understand: usually an ad-hoc, unspecified, poorly-thought-out type-level DSL dozens of levels deep.


> You can skip type system magic, and have people just keep introducing bugs that it would prevent over and over

Personally, I found even Haskell's type system easier and nicer to use than fiddling with _lifetimes_ in Rust types. Everything else in Rust falls nicely into place, but lifetimes are painful, magical, and very annoying.
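For readers unfamiliar with the pain point: lifetime annotations show up whenever the compiler cannot infer which input a returned reference borrows from. A small illustrative sketch:

```rust
// The `'a` annotation ties the returned reference to `haystack`,
// telling the compiler it does not borrow from `needle`.
fn first_match<'a>(haystack: &'a str, needle: &str) -> Option<&'a str> {
    haystack.split_whitespace().find(|w| w.contains(needle))
}

fn main() {
    let text = String::from("alpha beta gamma");
    assert_eq!(first_match(&text, "et"), Some("beta"));
    assert_eq!(first_match(&text, "zz"), None);
}
```

Simple cases like this are inferred or mechanical; the pain starts when lifetimes interact with structs, traits, and closures.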


The survey toward the bottom was interesting; ownership and lifetimes show up as the hardest things to learn for new users. And those are exactly the things that GC solves, or that have historically been involved in the majority of security vulnerabilities.


The actual data toward the top was also interesting; the cost of GC is high and Rust eliminates it.


The cost of GC is only high in the way the cost of living in modern society is. Sure, you can technically live in the forest and build/hunt/harvest everything yourself, but unless you have very specific reasons for that, having a house and going to the supermarket is the sane default option.


Easy to learn, safe, fast: pick two


That also depends on GC, and the language, and the implementations of the GC.

E.g. in Erlang GC is mostly negligible [1] (but you can put undue pressure on it, and on the underlying OS), there are realtime GCs for Java, etc.

But yeah the cost of GC is never zero.

[1] citation needed


> That also depends on GC, and the language, and the implementations of the GC.

Nothing depends on no GC.

I'm not anti-GC. I earn most of my living using GC languages. But there comes a time when the cost of power and hardware outweigh the value of GC. When that time comes Rust is an excellent way to solve it. I am, however, very "anti" the "no such time exists" position. That has never been and will never be true.


A smaller amount of time than the equivalent in C/C++, probably.


> Usually this means: we have three people doing type-level magic, no one understands it, and when they inevitable quit no one can understand how this works and why it takes weeks to add a small change.

This is how it is in all typed languages I've used. There will always be trivial propositions which the compiler cannot check and bending around that often means ridiculous type magic. Such is the life of library & framework authors.


Are there actual jobs in Rust? This is not snark, btw. I love Rust and I am currently learning it. Genuinely curious, since I have not seen a lot of listings for it, if any.


Not a direct answer, but I do get a decent amount of contracting work for Rust. Mostly for 2-3 year old projects where the original Rust developer has moved on and they need a small fix or feature change.


Do you have any leads you'd be comfortable with sharing?


I mostly work with local clients born out of my networking.



Plenty in crypto. A few others.

Not enough of them pay well enough; it reminds me of node.js in the early days.

I'm active in the community and do Rust events, but I can't find anything that pays more than JS consulting with untouchable companies.

I guess at some point I'll have to decide between money and my soul


My current and previous jobs have been in Rust. They're definitely out there.


Recently I decided to try Rust at work as well, after using it a little as a hobby, at first just to replace a basic shell script. While reliability, ergonomics, and other positives either do not beat Haskell (which I use for most programs, except a few small shell scripts or [PL/pg]SQL functions) or do not matter here, I similarly ran into that "immature ecosystem" issue: apparently people are still supposed to run a nightly build or rustup, rather than a compiler from a stable system's repositories, let alone libraries. That was the case when Rust was really new, which was understandable, but it is odd to run into it now, and, as the article mentions, even with basic libraries: I ended up using eprintln! instead of a logging library (fortunately I used it with systemd, which picks up stderr output, and did not really need to set syslog levels or additional fields), and env::args instead of an argument-parsing library.

Mostly agreed with the conclusion, too: the language still looks good, especially as a C alternative, and hopefully it will become usable in a more stable setting. Gradually trying it out does not feel like a pivotal decision, though; that sounds overly dramatic.
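For what it's worth, the hand-rolled approach described above (std::env::args plus eprintln!, relying on systemd to capture stderr) can look something like this sketch; the flag name is made up:

```rust
// Minimal stand-in for an argument parser and a logging library:
// systemd's journal picks up stderr, so eprintln! lines suffice.
fn verbose_requested(args: &[String]) -> bool {
    args.iter().any(|a| a == "-v" || a == "--verbose")
}

fn main() {
    let args: Vec<String> = std::env::args().skip(1).collect();
    if verbose_requested(&args) {
        eprintln!("verbose mode on");
    }
    eprintln!("processing {} argument(s)", args.len());
}
```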


Using Rust without using the features provided by Cargo is akin to buying an electric car and asking for all the electronics to be removed.

Rust libraries will never be shipped by your distro’s package manager. Maybe a few exceptions for things that also have C bindings, but it will never be the default.


Many Rust libraries are shipped by distro package managers, right now. But in general, those packages are intended to be used to build the Rust programs that are packaged in the distro, not for general use. So it is going to be a painful way to try to write programs using Rust's ecosystem, as it is with basically every other non-C-or-C++ language.


It’s painful with C and C++ as well; the toolchain/libraries a distributor provides are almost never the ones you want to develop with, as they are often ancient.


> I similarly ran into that "immature ecosystem" issue: apparently people are still supposed to run a nightly build or rustup, but not a compiler from stable system's repositories, let alone libraries.

For what languages/ecosystems is this not the case? Even with C/C++ you probably shouldn't be relying on your distro's package management for library and toolchain support.

Also, you do not need nightly unless there is some very shiny and new feature that you need that isn't on stable yet.


With C, I comfortably rely on the distribution's package management, even though it is not a rolling release or anything of that sort (Debian stable). I am actually not sure how that would be a problem with C, as long as one does not try to make things incompatible intentionally: the language and the major libraries' APIs are pretty stable there.

With Haskell (used for most tasks at work) I get the tools (GHC and Cabal) and most of the dependencies from Debian stable repositories, loading missing ones from Hackage (but slowly moving towards not relying on Hackage at all), and keeping sources compatible with package versions from Debian repositories from version 9 to 12 (4 major versions, 6 years apart). With shell scripts, I stick to POSIX; with SQL ones, I avoid extensions and also do fine.


We have very different experiences, I haven't worked on a production C/C++ project in the last decade that didn't vendor dependencies somehow. Debian stable is especially unreliable. In fact I can't think of a time I haven't had issues when working on a team greater than 1 or shipping builds because distro packaged libraries aren't reliable.

Also, why is it immature for Rust to ship a toolchain and package manager through a sidechannel, but not Haskell?


> why is it immature for Rust to ship a toolchain and package manager through a sidechannel, but not Haskell?

Haskell also feels less mature to me than C, with fewer POSIX functions readily available, but as mentioned above, with Haskell (in my experience; experiences indeed seem to differ) it is "one or two dependencies have to be pulled from Hackage across multiple projects and system versions", while with Rust it is rather "I failed to pull common logging and argument-parsing dependencies on a single up-to-date stable system, and apparently one is supposed to install and update its compiler, package manager, and libraries separately from the rest of the system". Some use ghcup or Stack, which also aim to work without a system package manager, but at least a system package manager is a viable option there.


You didn't have an up-to-date system if you were using Debian stable. They do not keep their packages up to date.

It just seems weird to blame Rust for a problem you had with your package manager, when every modern ecosystem I can think of eschews distro package managers because of these problems.


This appears to highlight differences in our perspectives: while working with (and supporting software for) the 4 most recent major Debian releases, the latest stable one feels fresh to me, especially when it is updated to the latest available package versions (and minor release). The packages there are not supposed to be cutting edge, but "stable".

I also don't mean to blame Rust's ecosystem (let alone the language itself) in the sense of complaining about it, though talking about this feels that way, so I thought it might be useful to clarify: it appears to aim at less "stable" and more "cutting edge" (or "experimental") versions and practices than "stable" system distributions do, and than more mature ecosystems tend to. Likewise, I wouldn't call having a slightly older compiler version in system repositories a problem with the package manager: it is a useful approach for getting a stable system, relatively well tested and with predictable changes. Not every system has to be this way, but in some cases it is useful, and some people (myself included) generally prefer those. Unfortunately the two fail to play together smoothly at the moment, but I view that as an issue arising between them and their approaches, not as a problem of either one.


Let me ask a more foundational question then, what is the virtue of a "stable" system?

In my mind, it's that updating a dependency doesn't break existing installations, or knowing that an existing install isn't going to get borked by an update.

And this is not something that is applicable to ecosystems like Rust, where it's not really possible to break a Rust program because another Rust program needs a newer version of the same dependency that happens to be incompatible with the older version. In fact, you can compile one Rust program that links against multiple versions of the same dependency at incompatible versions without issue.

So the entire notion of the Debian model of package management doesn't really apply to Rust, and there's not any benefit to keeping an older version of the toolchain around. There are only negatives.

And Rust has strong stability guarantees. A newer toolchain will not break compiles of older code. Nor will Cargo's cache break compiles with an existing lockfile because another package needed different incompatible versions of the same dependency. It's designed to be stable from first principles, in a way that C and C++ builds are not.

This is kind of why you're only going to have a bad time if you want to use the system package manager to manage your Rust packages. It's not built for the same problem domain, and it's overconstrained for the design space.


> Let me ask a more foundational question then, what is the virtue of a "stable" system?

Not introducing breaking changes too often (so that configuration and custom software have to be adjusted relatively infrequently), while applying security patches and other bugfixes, and being well tested to work together: those are the first virtues I think of. "Breaking changes" here includes changes to configuration, file formats, command-line interfaces, notable behavior, and library interfaces. Well, I think it mostly amounts to "knowing that an existing install isn't going to get borked by an update".

Supporting multiple dependency versions and stability guarantees must indeed help avoid breaking changes in libraries, assuming the latest compiler version (or a dependency restriction: either compatible packages in system repositories, or such dependency resolution in Cargo), though probably not so much with fixes (I imagine there is less motivation to maintain stable branches and backport fixes then) or with integration testing. Besides, not all software is written in Rust: other packages and ecosystems must be taken into account as well. More varied ecosystems can likely be handled more smoothly with NixOS or GuixSD, but I am not yet risking using them on anything that should be reliable (maybe it is time to look into those again, though). Still, this kind of poor compatibility with stable systems does not seem necessary, especially for a language that is supposedly a safer alternative to C, while C is about as well integrated into, and compatible with, common (POSIX) systems as it gets. Then again, it was noted above that our experiences with C differ, but this is how I see it.


Feel free to use stable, plenty of libraries target stable.

At the beginning I was using stable without many problems.

I switched to nightly to access the latest language features, and I have no reason not to use it (I've never encountered a nightly bug in 4 years).

Haskell is my favourite language (if we forget the Prelude, exceptions, strings, and a few other warts), but Rust wins hands down on pragmatism and getting things done. The tooling (cargo vs. Stack) is way better, and there are more production-ready libraries to do cool things.


> env.args instead of an argument parsing library.

I'm sorry do you not use https://github.com/clap-rs/clap ?


This is the one I tried to use, but failed to, since one of its dependencies required a rustc version newer than what I have here (1.63.0), and cargo was not able to pick the latest compatible version, but suggested that I do that manually instead. This surprised me as well, since I keep hearing about cargo being nice, yet resolving dependencies is something other package managers (and Cabal in particular) tend to do. I tried to find the last version of that package that supports my compiler version, failed to do it quickly (how is one supposed to go about such a task, by the way?), and gave up.


It appears both clap 2 and clap 3 have packages in bookworm. Were you trying to use them from crates.io instead of apt? Clap is on version 4 there.

> how is one supposed to go about such a task, by the way?

There is not great tooling for this, because most people use the latest stable when starting a new project, and then rust being backwards compatible means things Just Work into the future. A vanishingly small number of folks use builds significantly older than that, since upgrading is generally trivial, and newer compilers have faster build times and better error messages.


Oh, I have not tried using it from the system repositories: I tried that with another library (in a hobby project) before, had issues with it, was told and generally gathered that it is easier to pull dependencies with cargo, so I went straight to that this time. I see that librust-clap-dev would pull in 157 other packages, though, but I will look more closely into it: I would actually prefer getting dependencies from the system repositories, thanks for pointing it out.

As for newer compilers, I prefer to depend on the system repositories and package manager for updates, and to stick to stable branches, so that things do not change too often. I see the appeal of rolling-release distributions and cutting-edge software, but I do not feel comfortable using them for things that are supposed to be reliable.


So to be clear, I do not recommend that you use the rustc provided by Debian for general development. But if that’s what you want to do, mixing a rustc from Debian with packages from the general ecosystem is going to give you the worst of both worlds. I would ignore crates.io if I were trying to do what you’re trying to do. You’ll have less access to packages overall, and be using older versions, but they’re at least known to work with each other and with that specific rustc version.


I'm fascinated by that description since it's the total opposite of my experience using it. I'd love to see what happened if you have a repo. I'm simply intrigued at this point.


They are using a rustc from August of 2022. If you installed that version, made a new project, and asked for the latest clap, it would not shock me at all if rustc were too old.

EDIT: from clap itself:

> We will support the last two minor Rust releases (MSRV, currently 1.70.0)

So yeah, 1.63.0 is going to be quite old.


I see!


Steps to reproduce, once you have cargo and rustc from Debian stable repositories: cargo init && cargo add clap && cargo run. The following happens:

    error: package `anstream v0.6.4` cannot be built because it requires rustc 1.70.0 or newer, while the currently active rustc version is 1.63.0
    Either upgrade to rustc 1.70.0 or newer, or use
    cargo update -p anstream@0.6.4 --precise ver
    where `ver` is the latest version of `anstream` supporting rustc 1.63.0


For what it's worth, you can use https://lib.rs to see the earliest version a crate supports. For Rust 1.63.0, you will have to rely on Clap 4.0.32 from December 2022, instead of anything newer (which only supports 1.64.0+): https://lib.rs/crates/clap/versions

For anstream, the situation seems more difficult for you, as it doesn't seem that there's any (non-yanked?) release that supports <1.64.0: https://lib.rs/crates/anstream/versions
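Given those version numbers, one way to stay on the old toolchain would be to pin clap to that last compatible release in Cargo.toml (a sketch; whether its own dependency tree resolves on 1.63.0 would still need checking):

```toml
[dependencies]
# Pin exactly to the December 2022 release mentioned above;
# `=` prevents cargo from selecting newer releases with a higher MSRV.
clap = "=4.0.32"
```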


Thanks! I was looking for a page like that on docs.rs, crates.io, and possibly on lib.rs as well, but missed it; that is likely to be useful in the future. Actually now I see it on crates.io as well, <https://crates.io/crates/anstream/versions>. Or maybe I saw it before, and just have not found versions <= 1.63.


Rust looks nicer and nicer. Is anyone familiar with the RAM/memory requirements compared to C?

Every microcontroller project I've worked on, as we approach maturity, goes through a round of freeing up RAM and code space: usually deleting strings, removing debug functionality, shortening datatypes, etc.

Can I write Rust code with the same footprint as C code?


My employer does embedded Rust. We built our own little OS, Hubris, as the foundation of those projects. A no-op task in Hubris ends up at like 100 bytes, last I measured. https://hubris.oxide.computer/

You still have to, like, actively think about binary size in order to get things to be that small. But the smallest x86_64 binary rustc has ever produced is 137 bytes. How small your program ends up is up to you, not Rust.

EDIT: Oh yeah, I left a similar comment with more details about a year ago: https://news.ycombinator.com/item?id=34032824


You might enjoy this blog post: https://darkcoding.net/software/a-very-small-rust-binary-ind...

It illustrates the steps to take Rust from 3.6MiB executable to 400 bytes, by stripping more and more things away, and changing things.
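The usual starting point in such write-ups is the Cargo release profile; these are standard Cargo.toml keys for trading speed and debuggability for size:

```toml
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization across crates
codegen-units = 1   # less build parallelism, better optimization
panic = "abort"     # drop the unwinding machinery
strip = true        # strip symbols from the final binary
```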


Also see min-sized-rust repo[1], which also has this blog post at the bottom (along with others)

[1] https://github.com/johnthagen/min-sized-rust


thank you! I am currently enjoying it


You definitely can use Rust in an embedded space. I rather liked having traits and proper types instead of a giant mess of integer constants. In general I think a lot of the so-called bloat you see with Rust binaries is due to a combination of not using a shared standard library and generics; the former is a non-issue in an embedded context and the latter is well under your control.

Sure, ELF includes a lot of fluff but you're not deploying ELF on a microcontroller.


yeah I think I just need to write some stuff to figure it out.

I was mostly trying to figure out a 1:1 comparison. For example, if I write a Feature Control module in C and in Rust, using the same design, are the outputs similar?

It seems like either no one has a good sense of that comparison, or it's a bad comparison and I don't understand why.

What I'm trying to avoid is having a space-saving task be "rewrite rust module X in c to save code memory"


IMO it would be good to familiarize yourself with Rust's quirks/features (check out the Rust book) and then poke at some of the embedded-specific groups. Once you get a handle on the common types and patterns, it's probably easier to find the information you're looking for, e.g.:

https://github.com/rust-lang/rust/issues/46213

In general Rust chases a "zero-cost abstraction" ethos, so a lot of the type-system niceties have no code or memory cost, because the heavy lifting is done at compile time, not run time. For instance, using specific traits for each GPIO pin ensures you won't accidentally apply some meaning meant for another pin, but the compiler will compile it right down to integers anyway.

Things like Option (how Rust deals with "null" values) are enums and usually incur a one-byte penalty, but in some cases they can be compiled down to no overhead at all.
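The "no overhead at all" case is the niche optimization: when a type has an invalid bit pattern (such as a null reference), the compiler reuses it to encode None. This can be checked directly:

```rust
use std::mem::size_of;

fn main() {
    // A plain enum tag costs an extra byte (plus padding, for larger types)...
    assert_eq!(size_of::<u8>(), 1);
    assert_eq!(size_of::<Option<u8>>(), 2);
    // ...but Option<&T> is pointer-sized: the forbidden null-pointer
    // value is reused to represent None.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());
}
```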


> it's a bad comparison and I don't understand why

Different languages are different, and so it's tough to compare. You don't generally write Rust code in the same way you write C, and so a "using the same design" constraint on a comparison means that some folks will not consider it to be an accurate one.

In general, similar constructs have the same amount of overhead, that is, zero. Rust has additional features that are useful but can increase code size if you're not careful; getting that back down doesn't necessarily mean abandoning Rust, but it may mean "writing the Rust closer to C". I am thinking of generics specifically here: monomorphization can blow up your code size, but you can either not use generics or use a few techniques to minimize it.
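A sketch of the generics trade-off being described: a generic function is monomorphized (one machine-code copy per concrete type it is used with), while a trait-object version keeps a single copy at the cost of a vtable call.

```rust
use std::fmt::Display;

// Monomorphized: the compiler emits one copy per concrete T used.
fn show_generic<T: Display>(x: T) -> String {
    format!("{x}")
}

// Dynamic dispatch: a single copy, called through a vtable,
// which is one technique for keeping code size down.
fn show_dyn(x: &dyn Display) -> String {
    format!("{x}")
}

fn main() {
    assert_eq!(show_generic(42), "42");
    assert_eq!(show_dyn(&42), "42");
}
```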


I used Rust for my latest embedded project based on an MSP430 from Texas Instruments. The specific model has 128 bytes of ram, and it fits there quite neatly. Same for the flash.

This was quite the great experience!


That’s 128 KB, right?


Nope, the B was not a typo. It has 128 bytes of RAM.

The specific model is an MSP430G2231 [1] with 128 Bytes of RAM and 2kB of flash.

1: https://www.ti.com/lit/ds/symlink/msp430g2231.pdf


In the MSP430 family? Not likely.


Going off topic, but the size of programming-language communities is a good source for shutting down the Kotlin folks on how the language is taking over the world.

17.5 million users versus about 6 million, basically the ones being driven by Google to Kotlin on Android.

Back on topic: I feel Rust is a great language for production in scenarios where having automatic memory management is forbidden, like high-integrity computing, GPGPU code, microcontrollers, and such.

Everywhere else, a mix of automatic memory management, value types, and eventually linear typing is a much more ergonomic approach.


The benchmark link doesn't work for me. (Blame my adblocker??) The graphics in the article are interesting but don't include a way to identify which are Rust/Java/Go/Python. Maybe you're supposed to assume they come in the order given, but the x-axis is time... could definitely be clearer. (Also apparently not from the OP, just the cited article.)


Also, this kind of throughput on Java 11 in 2021, while Java 17 was already out, is a bit meh.

Nowadays, with Java 21 and virtual threads, results might be quite different (and the default GC has changed as well).

I agree with the graphs being unreadable. The tables in the benchmark article are easier to read.


Also, not using ZGC when latency is the focus just strikes me as unserious.

"Benchmarked on JDK 11, with G1"


The first link is missing a "7" at the end, that's why it 404s. The second one works though: https://medium.com/star-gazers/benchmarking-low-level-i-o-c-...

"Rust, Go, Java, Python" in order, left to right, yes.


There's an oddity in that article...the C++ p99.9 is lower than its p99. Am I crazy or should that not be possible?


That is confusing to me as well, yes.


Author here. The link is fixed thanks to M3t0r on GitHub: https://github.com/corrode/corrode.github.io/pull/6 The source code of the blog is open source. Contributions like these are very welcome.


I had to laugh at the "great developer experience". Meaningless error messages, unreadable syntax, forced structures, no explanation of why the solution to certain errors is importing some lib.

You write backend code as if you were writing frontend code, which is not enjoyable. You always have to think about who owns what.

And the whole crate terminology is just stupid. What am I, a dock worker or software designer?

No, to each their own but Rust does not have a great developer experience.

Compile times are long.

What do you gain in performance compared to Go? Not a whole lot, and you pay with wasted time, a.k.a. increased development time and a more complicated thought process.

It's not for me, and when this blog says great DX, that means it's a propaganda post. In fact the whole article reads like it's trying to convince someone to do something they don't want to do.


> And the whole crate terminology is just stupid. What am I, a dock worker or software designer?

Of all the reasons to complain about a programming language, the package manager not being elitist enough is certainly an original one.


> Meaningless error messages, unreadable syntaxes, forced structures, no explanation why the solution to certain errors is importing some lib.

The compiler has pretty good error messages, `rustc --explain` exists, "unreadable syntax" is a common complaint from people who expect everything to be C-like or Python (or who are just trolling), and "forced structures" is explained by Rust being a statically typed programming language.

> You write backend code as of you were writing frontend code, which is not enjoyable.

No idea what's this about.

> You always have to think about who owns what.

If your code is well structured, this is a non-issue. The types of people who complain about ownership are the ones who write messy code with chaotic inter-dependencies and unclear semantics.

And besides, you have to do that anyway in e.g. C++, only there it isn't enforced, which leads to programmer error, which is worse. Or you can use reference counting if you have a skill issue.
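For completeness, the reference-counting escape hatch mentioned here: Rc gives shared ownership when threading a single owner through the code is more trouble than it is worth.

```rust
use std::rc::Rc;

fn main() {
    let shared = Rc::new(vec![1, 2, 3]);
    // Cloning an Rc copies a pointer and bumps a counter, not the data.
    let a = Rc::clone(&shared);
    let b = Rc::clone(&shared);
    assert_eq!(Rc::strong_count(&shared), 3);
    assert_eq!(a.len(), b.len());
}
```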

> Compile times are long.

Compared to what? They aren't much longer when comparing apples to apples: C++ with static analysis tools and Valgrind will have pretty much the same compile times. Again, a bold general statement that doesn't really say much.

> What do you gain in performance compared to Go? Not a while lot, and you pay with wasted time aka increased development time and more complicated thought process.

Performance in what? For non-compute-intensive tasks you can obviously pick any language you are comfortable with; there's no point using C++ to serve a static site when a simple Python server will do. In benchmarks, Rust just murders Go, so if you are talking about raw performance, you are objectively wrong.

> that means it's a propaganda post

Based on your wacky arguments, I'd say that your comment is in fact propaganda.


> What am I, a dock worker or software designer?

You really must hate this thing we use instead of VMs nowadays.



