Taking Rust everywhere with rustup (rust-lang.org)
259 points by steveklabnik on May 13, 2016 | 87 comments



Rust's support for cross compilation, both in the compiler and the tooling (cargo, multirust/rustup) is amazing. Last week I was working on a hardened system project. It involved building a system consisting of just a kernel and a single process. At first I was going to target a unikernel, so I pulled the "x86_64-rumprun-netbsd" target, read some docs on rumprun, and in about 30 minutes I had a Rust project running as a unikernel, all well supported by Rust and its tooling.

For various reasons I had to drop the unikernel and switch to Linux combined with the program (I set the program as init, so Linux boots, runs the program, and that's that). That was super easy as well. I grabbed "x86_64-unknown-linux-musl", and the Rust program runs like a champ as the init process on this Linux "system" with no libraries to speak of.
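
For anyone wanting to replicate the musl part, the flow from the blog post is roughly this (target name as above; rustup downloads the std build for the target, then cargo links against it):

    $ rustup target add x86_64-unknown-linux-musl
    $ cargo build --release --target=x86_64-unknown-linux-musl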

I'm also immensely happy that I added the .cargo/config "target" option to Cargo's code. "cargo build" is much nicer than "cargo build --target=name_of_the_target_which_I_will_never_remember_x86". Cargo's code was fairly simple to dive into, and Rust's team has made the contribution process smooth and bulletproof (thanks to all their GitHub bots and unit/integration tests). I think that pull request was resolved in about a week (mostly due to waiting for the team to meet and confirm that they wanted to add that flag to the config file).


What is the new target config option you mention? Link?


It's mentioned in the blog post being discussed, which links to the standard Cargo documentation: http://doc.crates.io/config.html


If I set

    [build]
    target = "x86_64-unknown-linux-musl"
in Cargo.toml and run cargo build --release

    unused manifest key: build.target
What am I doing wrong?


.cargo/config is different from Cargo.toml
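
i.e., something along these lines (same [build] table, just in the other file; .cargo/config is looked up from the project directory and its parents):

    # .cargo/config (not Cargo.toml)
    [build]
    target = "x86_64-unknown-linux-musl"

Then a plain `cargo build` picks up the target.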


Oops, you're right.


> Rust has already been ported to Emscripten (at least twice), but the code has not yet fully landed. This summer it’s happening though: Rust + Emscripten. Rust on the Web. Rust everywhere.

I've been following the various RFCs and PRs for emscripten support for a while now, and it's been cool to see some slow but steady progress. It's very nice to hear the Rust team acknowledge that they are actively moving towards making the web a first-class target in the near future.

> Rust is uniquely-positioned to be the most powerful and usable wasm-targetting language for the immediate future. The same properties that make Rust so portable to real hardware makes it nearly trivial to port Rust to wasm.

As a web developer I couldn't be more excited to be using Rust right now. There's been a good amount of movement among JavaScript developers toward type systems and more explicit typing in JavaScript code. I find it's a much nicer development experience to have sane guarantees about the types in your JS code. I see Rust as the big next step for the web—a language that is still a pleasure to develop in, but with excellent static typing and tight, targeted builds without the bloat and buildup I'm accustomed to seeing with current transpile-to-JS strategies.


> I see Rust as the big next step for the web—a language that is still a pleasure to develop in, but with excellent static typing and tight, targeted builds without the bloat and buildup I'm accustomed to seeing with current transpile-to-JS strategies.

I think this will depend on the output size of the compiled file. I've always gotten very large js files out of emscripten, which are fine for games or media decoders, but would be undesirable for (e.g.) a vdom implementation that's part of a larger framework.


Both Rust and emscripten do produce notably large output. On the Rust side it's true that the emscripten port won't use jemalloc, but instead the emscripten malloc (which presumably is better tuned for its environment). Also the wasm format's main design goal is to reduce binary size.


WebAssembly will have a binary format that is 20-30% smaller than its gzipped asm.js counterpart[1], for context :)

[1] https://github.com/WebAssembly/design/blob/master/Rationale....


Agreed. I'm moderately optimistic though - statically-linked Rust binaries have always tended to be on the chunky side, but AIUI a lot of that footprint is jemalloc, which I'd hope wouldn't be included in an Emscripten build.


> There's been a good amount of movement among JavaScript developers toward type systems and more explicit typing in JavaScript code.

This is key to me. I'm not really interested in bolt-on additions or slight variations to JavaScript that leave me still grating against the edges of the language. Whether it's Rust or something else, I want to use a language that's well designed, or at least consistent, in the browser.


One of the fundamental problems is that there isn't a "the browser".


Try Elm or Purescript :)


> I see Rust as the big next step for the web—a language that is still a pleasure to develop in, but with excellent static typing and tight, targeted builds without the bloat and buildup I'm accustomed to seeing with current transpile-to-JS strategies.

Maybe this will happen, but it won't be through Emscripten. Emscripten is cool and all, but it produces really poor code size-wise. Maybe if you lack JavaScript expertise you'll use this as a stop-gap, but for a production system that needs to scale I can't see it happening.

Now maybe if it can be compiled into the WebAssembly intermediate representation? That seems like a good route to take.


That's addressed in the article. The LLVM WebAssembly backend is on the way and Rust will be able to take advantage of it relatively easily, it seems. Emscripten is more of an intermediate step to get Rust working on the web in the meantime—I doubt it's going to be the long-term solution for Rust on the web.


Great to see all this progress :) . I have three questions about Rust static binaries on Linux through musl, and how they compare to Golang static binaries:

1. At low level (binary format, system calls), is a Rust static (through musl) binary "as static as" an equivalent Golang binary?

2. The article says "for technical reasons, glibc cannot be fully statically linked". Does this apply to Golang too? Is Golang using musl on Linux as well? If not, why not / what is their strategy/tooling to produce static binaries?

3. From my own biased observation/reading of Golang writings and usage of Golang binaries, I feel static-by-default is hugely appreciated, for the portability. Did Rust consider it?


1. Yes, as you can see from the output of ldd: "not a dynamic executable".

2. Go does not use glibc or musl, they implemented all of that stuff themselves. cgo will link to glibc or musl though.

3. By default, all Rust code _is_ statically linked; it's only the libc that's not. Using musl just gets you the last step of the way.
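
To check point 1 yourself, a quick sketch (paths assume the "hello" example from the blog post):

    $ cargo build --target=x86_64-unknown-linux-musl
    $ ldd target/x86_64-unknown-linux-musl/debug/hello
            not a dynamic executable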


1. I'm not certain on the precise binary layouts etc, but I believe the answer is "yes, except Go binaries can't be stripped". In terms of practical usage, they'll run on the same machines.

2. This is a long one.

The statement you've quoted is false. However, it's both a) a common misunderstanding and b) a sort-of-ok simplification (the full story is much more involved). I'd personally alter it to say "it is technically challenging to statically link glibc". You can read more about the whole musl Rust origin story at [0]. The 'problem' with glibc is that it uses NSS, a feature that allows you to dynamically load libraries installed on the system to change how some libc functions work (if neither you nor your libraries use these functions, static linking with glibc works). For example, musl will look up users from /etc/passwd, whereas with glibc an admin can install an LDAP library, change a config file, and all programs using glibc will magically use LDAP. But you can disable NSS in glibc at compile time [1], which then allows you to truly statically link with glibc.
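
To make the NSS part concrete, glibc decides which modules to load by reading /etc/nsswitch.conf; a typical LDAP setup looks something like this (illustrative, entries vary per system):

    # /etc/nsswitch.conf
    # glibc dlopen()s libnss_<module>.so for each listed module at runtime,
    # which is exactly what a fully static binary can't do
    passwd: files ldap
    group:  files ldap
    hosts:  files dns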

Go is interesting because it doesn't use libc at all (when using the standard compiler)...except when doing some network things that NSS is useful for, in which case it does link against it. People who want totally static binaries pass a few additional flags to say "don't use NSS, use the Go implementation of these features" (rough sketch after the links below)...and you then end up with bugs like [2].

3. Yes [3], but you lose (by default) a) the ability to use shared libraries from the system, b) NSS. Given that one angle Rust is pushed from is "C/C++ replacement", not being able to link to system libraries without using arguably cryptic command line arguments would be a bit sad. But I'm ambivalent about this.

[0] https://internals.rust-lang.org/t/static-binary-support-in-r...

[1] https://sourceware.org/glibc/wiki/FAQ#Even_statically_linked...

[2] https://github.com/docker/docker/issues/1715

[3] (specific post from [0]) https://internals.rust-lang.org/t/static-binary-support-in-r...
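
(Re the "additional flags" sketch promised in point 2: from memory they're along these lines, though the exact incantation varies by Go version:

    $ CGO_ENABLED=0 go build -tags netgo

CGO_ENABLED=0 avoids linking libc at all, and the netgo tag forces the pure-Go resolver.)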


> Go is interesting because it doesn't use libc at all (when using the standard compiler)...except when doing some network things that NSS is useful for, in which case it does link against it. People who want totally static binaries pass a few additional flags to say "don't use NSS, use the Go implementation of these features"...and you then end up with bugs like [2].

Oh god, I completely forgot about that bug. For what it's worth, the technical reasons why Docker had to be linked statically are no longer valid and so it should be able to fix that issue. Unfortunately, "os/user" is lacking some things we need within runC and Docker.


Docker is available statically linked now, yes, and it works fine. That issue is still open though; you have to use the dynamically linked version if you want to use those features. Docker is not required as a trampoline any more.


Heh, I know, I'm a long-time contributor. And I'm the one that actually removed .dockerinit. ;)


I'm sure I'm not the only person who works in an environment where there are many thousands of Linux and Unix boxes, and they are all configured to use LDAP for /etc/passwd, /etc/group, etc. If a program uses NSS, it will just work; if it tries to read /etc/passwd for itself, it won't find most of the users (including likely the user it is running as).

One solution might be if the statically-linked program included an nscd client, since most people using NSS+LDAP will be using nscd for caching.


> One solution might be if the statically-linked program included an nscd client, since most people using NSS+LDAP will be using nscd for caching.

No, that's not OK either; many people use other NSS modules, including mdns (for .local names), resolve (for local resolved support), resolving "localhost" (via "myhostname"), and resolving local container hostnames.

If you want to resolve any of the things NSS supports, use NSS.


Why can't nscd support those other NSS modules too? (Even if it doesn't work with them at present, could it not be extended to do so?)

Rather than having NSS plugins loaded into every process which needs name services, why not centralise all name services in a daemon (whether nscd or sssd or something else)? Then client processes don't need to actually load the NSS code into their own address space, they can just speak a simple protocol over IPC to access this functionality out-of-process.


Android does something much like that I believe.


In case you're binary-size-obsessed like me, here are some stats on that statically-linked musl "Hello, World" binary:

    $ size target/x86_64-unknown-linux-musl/debug/hello
       text	   data	    bss	    dec	    hex	filename
     351471	  10256	  10064	 371791	  5ac4f	target/x86_64-unknown-linux-musl/debug/hello
Here is a "Hello, World" C program statically linked against musl:

    $ size test
       text	   data	    bss	    dec	    hex	filename
       3380	    248	   1232	   4860	   12fc	test
Looking at a size profile, about 25% of the static Rust binary comes from jemalloc, which is strange since musl doesn't seem to use jemalloc?


Rust chooses to call jemalloc for allocations by default[0], as it is generally faster than the system allocators, especially as we can go beyond the plain malloc/free interface. This choice is independent of the libc used, which Rust mainly uses for syscalls.

[0]: http://doc.rust-lang.org/book/custom-allocators.html


Would switching to alloc_system then use the musl one, leading to a smaller binary?


That's the intention, yes.
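
For reference, opting into the system allocator on nightly looks like this (taken from the custom allocators doc linked above):

    #![feature(alloc_system)]
    extern crate alloc_system;

    fn main() {
        let a = Box::new(4); // allocated via the system allocator (musl's, here)
        println!("{}", a);
    }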


> rustup is a toolchain manager for Rust

I'm hoping this follows the "functional" paradigm, where a compile command is invoked, and the compilation result depends solely on the arguments of the command, and not on configuration files and such. This makes it much easier to script things, to use alternative build systems, and, in general, to keep things predictable.

I'm saying this because I'm detecting a trend in a different direction, where tools are making things more confusing instead of more clear.


Rustup manages installing rustc, cargo, and toolchains: it's not what you use to compile. Cargo does have a single Cargo.toml that manages per-project config stuff. But keeping it in a file is part of repeatable builds...


My opinion is opposite of the GP: why isn't this just in Cargo? I'm locking all of my other dependencies, why not the language?


This question comes up a lot, and we've debated it a fair bit.

There are a few key arguments for keeping them separate from my perspective:

First, they have distinct audiences. Cargo is a tool for building Rust programs no matter how you obtained the compiler. rustup is a tool fundamentally for installing the official Rust binaries. So if Cargo contained rustup, that would be a large chunk of features that have a very unclear role when Cargo is distributed by Linux distributions, and would probably need to simply be compiled out.

That last point about compiling out rustup also points to the fact that Cargo's and rustup's features are completely orthogonal. There's no technical reason to combine them.

Of course, the practical reason to combine them is that one tool is conceptually simpler than two. Even I have found myself accidentally typing `cargo update nightly`, `rustup build`, etc.

Finally, releases of Cargo today are paired with releases of rustc. Distributing the Rust installer with Cargo would necessarily change that relationship. That is, there would be one global Cargo that is used with every revision of rustc. This isn't necessarily a bad way to arrange the tools, but it is a big change from today that would require significant effort to move to.


Having had this same conversation in the past:

I want to say thanks for all the hard work!

Regardless of anyone's rationalizations about 1 or 2 or X tools, having great tools exist is most important!


Cargo cannot control versioning of Cargo, or at least, an external tool that handles versioning everything seems conceptually cleaner to me than making Cargo version both Cargo and rustc. It's also pretty similar to tools in other ecosystems.

There is ongoing discussion about possibly adding configuration to Cargo packages with a minimum rustc version for the package; the details have not all been worked through yet.


Yes, coming from NixOS, this "foo install me tools; bar build me project" idiom is quite primitive. I have a long-open RFC that I hope will get cross-compilation working the right way: https://github.com/rust-lang/rfcs/pull/1133

That said, rustup is still a fine, easy way to get compiler builds as opposed to standard library builds. I view it as convenient duct tape, analogous to Haskell's stack in that regard.


> I'm saying this because I'm detecting a trend in a different direction, where tools are making things more confusing instead of more clear.

Rust tools in particular, or tools in general? Just curious; the ease of use of Cargo is one of the things that initially drew me to Rust, and rustup seems pretty cool too (just used it to install Rust nightly on a new machine) but I have not done any serious work with Rust yet.


Should I be using rustup instead of multirust, then?


Yes, it is its successor.


Oh. This should really be clearer on these pages. I was using multirust-rs after having used rustup and had no idea this was related/a successor.


So, to lay it all out there:

In the beginning, there was multirust. This was a bash script, or rather, a set of them, I guess. It didn't really work on Windows, though, being a bash script.

Then came multirust-rs. We already have a programming language that we like to write that supports the platforms Rust supports: Rust! So, port multirust to Rust.

Then, it was decided that multirust-rs would be the right path to move forward generally, so a plan was formed to figure out just exactly how we'd want such a tool to truly work, rather than just copying what multirust did, and so this became rustup.


Is the plan to start advocating for using rustup on the homepage?


Eventually, yes.


> In the beginning, there was multirust

Well, in the beginning, there was rustup, but a different rustup that didn't manage side-by-side installs :P


I found the sources to rustup's setup tool but not rustup itself which gets installed by rustup-init. Can you provide a link to the right repository?


https://github.com/rust-lang-nursery/rustup.rs is the repo.

The setup tool and the tool itself are the same binary. The rustup install is basically copying rustup-init.exe to `~/.cargo/bin/rustup.exe`.


Thanks, need to take another look then.


Sure, but last I heard rustup wasn't recommended yet (i.e. it was still beta, missing some features, etc). Is it recommended over multirust now?


I do recommend using rustup to all Rust developers now, despite still having reservations about its done-ness, and not being ready to commit to putting it on the main website (I guess that's something of a contradiction).

This blog post marks the beginning of the phase where we will seriously try not to break things on upgrade - I consider it 'in production' now.

Even now I think it provides the best installation experience and I have much higher confidence in its reliability than the older multirust shell script.

That said, if you are on Windows you might want to be more cautious. Just this week I broke rustup's networking on Windows, and there are periodic reports of intermittent self update failures (though these don't cause data corruption).


This blog post is sort of like a bigger beta announcement, to get things in front of a larger audience and make sure things are still good. It will be the recommended way to install Rust soon, but not quite yet.


This sounds like a huge boon to game development! With the piston.rs ecosystem doing a stellar job of centralising Rust gamedev libraries, Rust seems primed for use in game tech.

One thing I'm not hearing much about yet is iOS though. C# relies on Xamarin. Java's best bet is probably Intel's Multi-OS (since RoboVM is no more). What does Rust/Rustup have planned?


You can cross compile to iOS today, but you lose access to native GUI stuff IIRC. But games don't usually use that anyway.
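
If anyone wants to try, it should be roughly this (untested by me; aarch64-apple-ios is one of the tier 2 std builds discussed elsewhere in the thread):

    $ rustup target add aarch64-apple-ios
    $ cargo build --target=aarch64-apple-ios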


The article says this about a musl-using statically-linked rust binary:

> take that hello binary and copy it to any x86_64 machine running Linux and it’ll run just fine.

Is that true? I would have assumed that musl has some kernel ABI version requirements, and won't run on older kernels (older than whatever that specific musl library was built to support).


https://www.musl-libc.org/faq.html says 2.6 officially, due to some threading stuff on earlier versions. It should work on anything later, though.


Yes, that's an understatement. It's very uncommon to find pre-2.6 x86_64 systems.


This says that musl will build and run on Linux 2.6.x, but it says nothing of the binary compatibility between systems.

If I build musl and statically link my program to it on one version of the Linux kernel, and then I copy that to an older version of the Linux kernel (both 2.6 or greater), I'm extremely skeptical that it would just work (especially for any two arbitrary versions above 2.6).


My understanding is that the kernel goes to pretty great lengths to not break ABI compatibility, so this should be fine.

I just tried it out with a server I have lying around: https://gist.github.com/steveklabnik/0b2736642ddd4669260bd7f... Compiled on 3.16.0-4-amd64, ran on 3.2.0-4-amd64, no issues


I expected it to work for most cases - especially trivial ones like that.

Where I was skeptical was around new features added to the kernel. I do not know how careful musl is when it is built on a kernel which includes a new feature (say something like F_DUPFD_CLOEXEC, added in 2.6.24), but is then copied and run on an older kernel (2.6.10) which would return -EINVAL from that system call.

A quick glance at the musl sources shows that in this particular case musl is careful to notice the -EINVAL and emulate it (via F_DUPFD and F_SETFD FD_CLOEXEC).

But without seeing documentation from musl that this is guaranteed for everything that could be added in later kernels (I'm not even sure everything added could be emulated easily, but perhaps it could be), I'd still be skeptical of the claim that musl and Rust could be statically linked on any version of Linux after 2.6 and run on any other version of Linux after 2.6.

It's not really about the kernel ABI being compatible (programs built on older kernels run on newer ones quite nicely). It's about newer features from newer kernels still being supported (or their absence detected and their features emulated) on older kernels. That's a property of musl, not of the kernel.


Gotcha, that does make sense. I tried looking around for more information about specifics here, but couldn't really find anything. I did think that musl was attempting to deal with this, but without something explicitly saying it, I am wondering if that's something that I misunderstood, or just heard somewhere.


For libc functions (man 3) it will emulate if the syscall doesn't work. If you are making a system call directly (man 2) it is up to your code to do a fallback, more or less.
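
To make that concrete, here is the shape of the fallback for the fcntl example upthread, as a hypothetical Rust rendering using the libc crate (musl's real emulation is C; this just illustrates the pattern):

    extern crate libc;

    use std::io;

    // Duplicate `old_fd` with close-on-exec set, falling back to
    // F_DUPFD + F_SETFD on kernels too old to know F_DUPFD_CLOEXEC.
    fn dup_cloexec(old_fd: libc::c_int) -> io::Result<libc::c_int> {
        unsafe {
            let fd = libc::fcntl(old_fd, libc::F_DUPFD_CLOEXEC, 0);
            if fd != -1 {
                return Ok(fd);
            }
            if io::Error::last_os_error().raw_os_error() != Some(libc::EINVAL) {
                return Err(io::Error::last_os_error());
            }
            // Older kernel: emulate the CLOEXEC variant in two steps.
            let fd = libc::fcntl(old_fd, libc::F_DUPFD, 0);
            if fd == -1 {
                return Err(io::Error::last_os_error());
            }
            libc::fcntl(fd, libc::F_SETFD, libc::FD_CLOEXEC);
            Ok(fd)
        }
    }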


First, my example was fcntl(2), which musl is emulating. So that doesn't fit your "rule".

Second, is your "rule" documented anywhere for musl? That's what I'm looking for - an indication by the musl project that such a thing is supported (binary compatibility from newer to older Linux kernels, and if so, what version range).

Without such a project-sanctioned statement, I'm afraid I stand by my skepticism of the blog's assertion that the statically-linked rust-musl binary can be copied to any Linux 2.6.x machine (or newer) and just run without any issues.

I would agree that any statically-linked rust-musl binary could be built on a Linux kernel and then run on a later Linux kernel, but earlier? Well, that's up to the musl project to say.


This is definitely a big step forward. I had become spoiled by Go's cross compile capabilities, which Rust had paled in comparison to. Prior to this, it was a pain having to get the right toolchain for each platform. With this, I can download the toolchain as needed for the target platform, and build against it.


Do these changes require/imply upstreaming rust's llvm changes? Is there any resistance for llvm to take the changes necessary?


Today Rust only has a few patches to LLVM (https://github.com/rust-lang/llvm/tree/rust). One is for Android stack overflow detection, using a mechanism that upstream isn't fond of (split stacks); the other is an optimization not required for correctness.

The Android patch can be eliminated someday with stack probes (though it's not clear upstream wants those either).

Emscripten support though - at least in the short term - is going to bring in the emscripten LLVM 'fastcomp' backend (https://github.com/kripken/emscripten-fastcomp) that translates LLVM IR to JS. This patch has no hope of being upstreamed and will impose a significant maintenance burden on both Rust and Emscripten as long as it's in tree. This solution should be short-lived though, as we will transition to either the upstream LLVM->wasm backend or a new Rust MIR->wasm backend.

So I think the answer to your question is 'no'.


I've been told that the LLVM branch I linked to is 'the most wrong branch you possibly could have linked to'.

This is the right one: https://github.com/rust-lang/llvm/tree/rust-llvm-2016-03-13

Patches are mostly backported from upstream or optimizations.


To those in this thread, note that brson is the author of this blog post, as well as the primary maintainer of rustup, and has had his eyes on a web target for Rust for some years now.


We generally try to upstream our patches, and they've been generally accepted. IIRC, our current fork is very close to LLVM head. We also do support building with stock LLVM, though it may have bugs for the stuff that wasn't patched at the time that release was made, of course.


Easy management of cross compilation is a major plus. Thumbs up!


I wonder when Rust will be natively available on arm64.


Excitingly, today! If you run the rustup install script on an arm64 device it should "Just Work", but if it doesn't end up detecting arm64 you can download rustup and/or the compiler manually.

* rustup - https://static.rust-lang.org/rustup/dist/aarch64-unknown-lin...

* rustc+cargo - https://static.rust-lang.org/dist/rust-nightly-aarch64-unkno...

Currently arm64 isn't a Tier 1 platform for us, however, so you may hit some bumps along the way. Please feel free to file issues so we know what to fix if you do!
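
For reference, the install script mentioned above is the usual one-liner from the blog post; it detects the host triple:

    $ curl https://sh.rustup.rs -sSf | sh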


That is great news indeed! Been waiting forever to try it out on my arm64. Thanks!



The lack of C toolchain management yields some errors when trying to cross compile between OSX and linux (both directions).

See https://github.com/rust-lang-nursery/rustup.rs/issues/462 and https://github.com/rust-lang-nursery/rustup.rs/issues/463

Has anyone been able to cross compile between these systems using rustup?


> rustup install stable-aarch64-apple-ios

> info: syncing channel updates for 'stable-aarch64-apple-ios'

> error: target not found: 'aarch64-apple-ios'

What is the process for toolchain stabilization? I know that iOS support is unofficial right now, but I would love to be able to run the above command at some point.

I can see that official support for more platforms might hurt development speed (more tests to pass!), but eventually all platforms will be officially supported, right?


Try `rustup target add aarch64-apple-ios` and see how far you get. It's a tier 2 platform so it usually has builds available. But the iOS targets are some of the least tested for Rust. I have no idea if they work!

The process for adding tier 2 platforms isn't well defined but is closely related to ease of automation. If it can be done in a Docker container then it is simple to build std and ship it. Right now we're enthusiastic about this level of support - we can provide builds of std for lots of things. As long as somebody cares just enough to keep std building, which is pretty easy to do.

Whether those builds work at all is a different matter! We don't run tests on most tier 2 platforms. It's a big commitment to do so, even more to keep the tree green.

Personally, I really want to say that in time Rust will have perfect test and build automation on all platforms that matter even a little bit. All platforms are on a slow treadmill to perfection. They all come with maintenance burden, but the more successful the project is the more maintenance (and perfection) we can afford. Hm, I hope that's how it works...


I am confused. A few months back everyone was talking about WebAssembly. Now Rust on the Web. Do they conflict with each other?


Rust would compile to WebAssembly. As would other languages. LLVM will support a target that will enable this in the future.


Absolutely. Rust does have some advantages here, in my understanding, since it has no GC. I haven't been following the latest wasm stuff, but GC integration is coming later, right?


The MVP for wasm will not include any built-in support for garbage collection, no.


I believe languages with heavier runtimes would need to compile those runtimes to wasm as well, which could include a GC. Needless to say, languages like C++ and Rust will have an advantage here.


Are musl builds of the rust+cargo toolchain available via rustup? For running rust+cargo on Alpine.


There are still some issues with a host musl rustc, so not yet, but we would like there to be.


Is there a tracking bug to follow?


My knowledge was apparently outdated! So, the bug I was thinking of was https://github.com/rust-lang/rust/issues/28667 , which has apparently since been closed. The Alpine tracker also links to this bug: https://bugs.alpinelinux.org/issues/3949

And in fact, https://doc.rust-lang.org/book/advanced-linking.html#linux has some instructions for building your own rustc in this way.

I am not sure why we don't currently distribute them today, though... I will ask around.

EDIT: Alex says:

  we don't have an issue for it 
  specifically, no, but I think all alpine users were 
  trying to use the llvm on the system

  b/c we don't work well with building an llvm against 
  musl right now
So, yeah, some small things to work out with regards to Alpine.


I've done that once, but since building rustc requires a specific nightly, I'd need a glibc environment each time I want to build rustc. Replacing the glibc builds with a static musl build (not targeting musl, but built statically against musl, with no glibc requirement) would give us a single binary that works on Alpine and Ubuntu. If that's not an option, I'd welcome a musl variant that rustup installs automatically on Alpine, the Void Linux musl edition, or any other musl-based Linux environment.


And please add a fix for the OpenSSL OS X issue out of the box.



