Not in the same league as the mp4 parser shipping to all Firefox users, but GeckoDriver [1], a Mozilla-authored standalone binary for interacting with Firefox via the WebDriver protocol (e.g. using Selenium) is also written in Rust and shipping, possibly to as many as hundreds of users ;)
Overall the experience of using Rust for that project has been pretty great; the original requirements for a language were "able to produce static executables with no external runtime requirement, not prone to C-style memory safety issues, and accessible to a team with mostly Python backgrounds". That leaves quite a few options — Go for example — but being a Mozilla project taking the opportunity to use Rust seemed like it would align better with initiatives like those discussed in the article.
Apart from the general niceness of Rust-the-language, the experience of the Rust ecosystem has been really nice. Not only were there easy-to-install, well-documented packages to do much of the heavy lifting (HTTP server, command-line argument handling, etc.), it was also quite straightforward to set up Travis builds using cargo to compile releases for linux64+musl and OS X, and to cross-compile releases for linux-armv7hf (added because someone asked about running on a Raspberry Pi and it turned out to be trivial) and win64. The only wrinkle so far has been the difficulty of cross-compiling to win32, meaning we might actually need to set up AppVeyor or similar.
Obviously this is a much smaller project than the original post in terms of the number of users, and rather different as it is much higher level and could comfortably be written in Python or similar, except for the distribution requirements. However I think the use of Rust has been an unmitigated success, and I would be tempted to use it for a lot more projects that I would never consider writing in C/C++.
I don't think I have any specific resources to recommend, sorry.
I wrote most of the initial implementation of geckodriver and mostly learnt Rust from reading the official book, writing some small patches for Servo, and working on this project (of course); pretty much the same advice you would get coming from any other language.
I just chatted with one of the other major contributors to the project who said that he was able to learn on the job by reading the existing code, looking up concepts in the book or via web searches, and asking questions when necessary; although some parts of Rust undoubtedly have a significant learning curve it is empirically possible to contribute to an existing project without a significant amount of upfront study. This arguably isn't too different from learning most languages, although you are going to need to read up on the rules around e.g. references and borrowing sooner than you would need to look something up if you took the same approach to writing your first Python.
The type system and borrow checker mean that the compiler will tell you if you're doing something wrong which is both a blessing and a curse; it can be dispiriting to get dozens of compile errors when you are getting started, but once you have satisfied the compiler it's possible to be more confident that your code won't break in ways that are relatively common in Python (e.g. missing or broken code for error handling).
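A tiny, hypothetical example of the kind of error newcomers hit early (modern Rust; not code from geckodriver):

```rust
// Mutating a Vec while a reference into it is still live is rejected
// at compile time; this is one of the first errors most newcomers see.
fn main() {
    let mut names = vec![String::from("alice")];
    let first = &names[0];               // immutable borrow of `names`
    // names.push(String::from("bob")); // ERROR: cannot borrow `names`
    //                                  // as mutable while `first` lives
    println!("first: {}", first);        // last use of the borrow
    names.push(String::from("bob"));     // fine: no outstanding borrows
    assert_eq!(names.len(), 2);
}
```

Uncommenting the push makes the compiler reject the program, which is exactly the "blessing and a curse" dynamic: annoying at first, but it rules out a use-after-reallocation bug that would be a silent hazard in C++.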
So I'm not sure that I answered your question, but in practice it didn't seem to be a major problem. Of course in a different environment — one where people were less enthusiastic about learning a technology their colleagues were raving about, for example — you might have a different experience.
Not OP, but first few chapters of http://rustbyexample.com/ seem much less steep than The Book (to me). Would also appreciate other guides... (coming from C / Java / PHP / Python myself)
I've noticed people coming from high level languages tend to struggle a bit with the borrow checker, did you guys experience this? How did you deal with it?
Yes, the borrow checker is undoubtedly one part of Rust that's unfamiliar, especially if your background is mostly in GC'd languages where you can treat ownership and lifecycle concerns with impunity. So there is a learning curve which, when you get your code to pass all the compilation phases only for borrowck to point out that your design is fundamentally unsound, can feel like a learning wall ;)
But at the end of the day the borrow checker is enforcing a relatively simple set of rules that you can learn and, with experience, intuit. So after a while the number of mistakes you make goes down, along with their severity. And there is payoff too in the ability to do things that would be impossible in Python and challenging in C e.g. write a copy-avoiding parser in performance critical code (not something too relevant to geckodriver, but useful for a so-far-prototype project to replace the log parsing on Mozilla's CI system — used to extract the reasons a job failed, and responsible for about 80% of the CPU time on that server — with a Rust alternative).
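For what it's worth, the copy-avoiding idea can be sketched in a few lines: the parsed fields are borrowed slices into the input rather than fresh allocations, and the lifetime annotation is exactly what the borrow checker verifies. (The field names here are made up for illustration, not taken from the real log parser.)

```rust
// Fields borrow directly from the input line: no per-record allocation.
// The lifetime 'a ties each field to the original buffer, and the borrow
// checker guarantees the buffer outlives every record parsed from it.
struct LogRecord<'a> {
    level: &'a str,
    message: &'a str,
}

fn parse_line<'a>(line: &'a str) -> Option<LogRecord<'a>> {
    // Split on the first space only: "ERROR the rest of the message"
    let mut parts = line.splitn(2, ' ');
    let level = parts.next()?;
    let message = parts.next()?;
    Some(LogRecord { level, message })
}

fn main() {
    let input = "ERROR intermittent timeout in test_foo";
    let rec = parse_line(input).unwrap();
    assert_eq!(rec.level, "ERROR");
    assert_eq!(rec.message, "intermittent timeout in test_foo");
}
```

In Python every field would be a new string object; in C the same zero-copy design is possible but nothing stops a dangling pointer if the buffer is freed too early.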
Even if Servo doesn't pan out as anything more than an experiment (though I'm optimistic that it will), I think Rust will bring real benefits to Firefox by enhancing security of isolated components like this mp4 parser.
> I wonder if Firefox will slowly become written in mostly Rust.
I doubt that will ever happen. Small parts of Firefox, yes, but the browser is enormous. I think I once read that even Servo, which is a showcase for Rust, has more C/C++ code in it than Rust code, largely because it uses Firefox's JS engine.
"core project" is a nebulous term. Stuff is in that repo when it doesn't make sense to break it off as a crate; and a lot of our code is broken off as a crate. But yes, the servo team rarely hacks directly on the C++ components. (we hack very often on Rust out of tree things though)
The C++ code is in servo/mozjs and servo/skia (though that dependency doesn't get used on a default run now IIRC), as well as some other scattered deps and native linkages (fontconfig, harfbuzz, openssl, etc)
We've split the code up in crates so servo/servo is only a fraction of the story. Things in servo/servo are usually components which don't make much sense as something you'd independently use, and are very servo-specific and/or tightly coupled. But a lot of our Rust code is outside the tree, and any C++ modules we use are too.
Also, the vast majority of code in servo/servo is actually HTML/JS test code, vendored in tree from w3c/web-platform-tests.
I built Servo last week, I ran "git fetch upstream && git reset --hard upstream/master && git submodule update" and then "tokei .". So, unless Servo puts its cache in the dir, I would guess not, no.
I think by default servo does put the cache in the dir. So yeah, this would be an accurate estimate.
Though not all that rust code is written specifically for servo, and a lot of that C/++ code is winapi and skia. winapi is autogenerated, and skia isn't used by default anymore iirc.
SpiderMonkey is probably a long-term target for a Rust replacement or oxidization over time, but considering it is a JIT, there are certain classes of issues Rust couldn't help with, since native code generation is inherently unsafe.
No, but to be safe enough it has to be more safe than the Rust compiler, because the latter doesn't get run on untrusted code (with the result automatically executed)[1]. If bounds checks exist in the compiler IR, they're subject to optimization, which is very helpful for performance but also risky, as incorrect optimizations can easily cause memory unsafety. Optimizer bugs in modern backends are rarely encountered in practice, but from a security perspective, that's like saying your C++ program never crashes in practice: it helps, but it doesn't prove the absence of bugs that can only be triggered by pathological inputs; such bugs in fact tend to be quite common.
I've never tried to find an optimizer bug in LLVM, but I have found more than one in V8, so I have some idea what I'm talking about.
[1] More specifically, this doesn't happen in situations where correctness of the generated code is relied on to provide safety guarantees. There are several websites that will compile and run Rust code for you, but none of them try to ban unsafe code, or filesystem/syscall access for that matter, at the language level; rather, their security model relies entirely on the OS sandbox the process runs in. Google's PNaCl uses (or used to use?) LLVM on untrusted code, but AFAIK the output of LLVM, the machine instructions, are still run through the NaCl validator, so getting LLVM to miscompile something wouldn't accomplish much. (NaCl also runs both LLVM itself and the untrusted code in an OS sandbox.)
It seems safe to say that a JIT-trace -> Rust -> machine code compilation pipeline will probably never be fast enough to satisfy the requirements of a high-performance JIT compiler.
Going the long way is just a 'proof of concept' sort of thing. You could design a high-performance JIT around equivalent safety mechanisms, and even prove the tricky parts.
JavaScript's memory model is incompatible with that of Rust anyhow. You would want something like typed assembly language (Google this--it's a fertile research area). Very researchy though, with uncertain payoff.
But note that a lot of security problems are not in the jitcode but rather in C++ implementations of JS objects and in the compiler itself.
BOOM! Typed assembly language is exactly what I was going to recommend! TALC assembly, Chlipala's Bedrock, and Microsoft's CoqASM are Google keywords to use for anyone following along.
CakeML or Verisoft's C0 could be useful for assembly generation, but I'm not as sure there. Tough constraints in JIT. Edited to add Myreen's JIT that I just remembered.
Not that I'm aware of. They might privately license it if asked. I mainly bring it up as something worth cloning by a FOSS team, given there are plenty of details in the paper. Meanwhile, look up Magnus Myreen's publications and software, as they're on a roll with verified everything.
Typed assembly language would be an excellent addition to the Rust ecosystem -- there are still segments of software which should (or must) be implemented in assembly, so anything that can help make assembly easier to verify would be helpful to the ecosystem.
The people who would work on these two things are pretty disjoint, or at least, I'm not aware of any SpiderMonkey people working on our wasm support. So it's not really an either-or kind of proposition.
I wonder if you could ship a fully featured javascript engine written in WebAssembly. Then servo could just include that and it would interpret/JIT the rest of the JS.
> real benefits to Firefox by enhancing security of isolated components like this mp4 parser
Now if only there was a way to isolate stuff at runtime (i.e. a sandbox) to better take advantage of operating system mitigations. I hope that with e10s now (almost) shipping we will start to see some progress towards this. IIRC Mozilla has said they would start to roll out security features (related to e10s) in stages. I'll believe it when I see it but it's nice to finally see this from Firefox since it's been far too long without a defense in depth approach to security from them.
I tried Rust on my old Core 2 Duo; it feels alpha-level. I thought they'd be closer to a "wow" effect from parallel tree rendering, but not in my four-minute test: crashes and memory exhaustion.
I still wish for a safe parallel renderer to emerge, so good luck to the team.
AHHH, telemetry! I feel like there is a group of people who freak out at any form of telemetry; I just wanted to highlight them using it to track bugs. It's super beneficial for any kind of change, especially something like switching an entire language for a component.
"Freak out" is a bit strong, but it's not illogical to dislike telemetry. It's an extra bit of tracking being broadcast by your devices, and as we know from the Snowden docs, the NSA uses collected windows error reporting (telemetry) to help target their exploits.
That someone is worried about these issues does not imply that they don't understand the value of telemetry data for developers. It's just a question of tradeoffs, and it's a decision the user should make.
I don't dislike error reporting in programs. I dislike it when programs call home without asking permission.
IIRC the telemetry involved here was only run on the nightly/developer edition versions. Or something like that.
The restrictions on telemetry on release versions are pretty stringent. It's possible this was measured on release, I'm not aware of the details, but I suspect it wasn't.
Firefox telemetry is opt-out on the Nightly, DevEdition, and Beta channels, but (mostly) opt-in on the Release channel. The opt-in telemetry from the Release channel has limited use because it is not representative of all users.
Here is a list of Firefox's telemetry measurements. Those that are opt-out on the Release channel are tagged "releaseChannelCollection": "opt-out". Adding a new opt-out measurement requires an additional privacy review.
This makes telemetry effectively useless for developers though, unless a big intrusive "Click here to opt in" button is presented during startup. Nobody is going to enable it of their own accord.
I find your comment disturbing because I feel that "Opt-Out" features have become a real problem. The reasons for setting something as opt-out are often self-serving and, even when not, it is still an affront to informed consent (since as we all know that most people will take the defaults).
I think a big portion of those people come from the fact that it's essentially impossible to verify what each and every piece of telemetry does. And ultimately, you as a user don't directly gain anything from enabling it / leaving it enabled, so it's perfectly reasonable to stay ignorant in this case.
Does this imply anything regarding rust compiler package for distros? To support the Firefox build, all distros will have to have good rust compiler packages now, right? (I'm sure the status is already not bad, I haven't checked recently..)
If it becomes something dependable, infrastructure-wise, like Java, this might mark the beginning of a more serious uptake.
Now that Rust can properly bootstrap from previous versions, I can see most distros getting it packaged soon(TM). Unfortunately the Fedora 25 change window ends today, and since rust/cargo will require new packaging guidelines, unless someone really steps up to the plate (I don't have time today) it won't make it in until F26.
SBT (scala) bootstraps itself in the same way. I think we're going to be seeing more of that going forward.
For the produced binary packages (rpms, debs) it won't matter that much; although it will reduce build dependencies for source packages and give more consistent builds, even if your distribution comes with an _ancient_ version.
That's not a complete comparison though -- sbt only downloads the platform-independent jars and minuscule launcher scripts. It's still on the user to install a JRE globally.
An analogue for rust would be something that detects the platform at runtime and then downloads a statically linked (?) binary that works on that platform. It can also exploit the ivy cache as the Scala compiler is just a library as far as SBT is concerned.
Isn't that a deal breaker? I would assume that downloading and checking in the binary for a whole compiler for 3+ platforms is untenable.
Distros are okay with a single starting binary for compilers. Gcc does this, for example. Rust would publish a single bootstrap binary, and distros would build the next compiler with that (and publish), and build the next one with the one that they just built, and so on. Multiple platforms would just involve cross compiling the first time.
The problem, historically, with Rust is that each release was built with the latest nightly, so you had to bootstrap each new version. Now they've changed to using the previous compiler release to bootstrap, so you can bring a binary in for the first build of a package then immediately rebuild with your bootstrapped binary and continue on from there for eternity.
It matters a lot for many distributions, as it means we can strip a huge binary blob from the source packages. There's nothing PREVENTING Fedora from building this way, but it's a lot easier to get the package through review if it can be bootstrapped with a binary once and then previous packages can build it.
6 months, all that needs to get in is a proposal (first alpha is a month away so there's plenty of time to flesh it out). We can still package rustc without needing a change, but cargo would be delayed if there isn't a proposal on the wiki before the end of day.
Yeah, there was a bugzilla for the packaging, 'changes' are done on the wiki and are for things more than 'just' packaging up software (where guidelines are needed, in this case).
Someone got a draft up last night [0], and discussion is happening on -devel. If things go well this may make it in F25.
Will Fedora ever rename/relogo Copr? Surely it was a joke to use the Greek word for faeces (κόπρο) and a logo of a side-on view of a sphincter pushing out a fresh coil.
We put in a lot of work for reproducible builds, I'm not aware of anything obvious that would make it not so right now. If we're not, please file bugs! We've fixed them in the past.
Great, because Rust is not yet supported on all of the platforms that Firefox itself runs on (never mind packaged), and that would put many in an unfortunate position.
Well, LLVM doesn't work well on SPARC yet, so anyone on a SPARC-based system would be affected (yes, I know the number of people running Firefox on SPARC is relatively small, although it certainly numbers in the thousands).
And until recently, as another example, rust didn't quite work yet on Solaris (amd64); thankfully, a community member stepped up and has been actively working on that port.
1) Making the "cargo vendor" story work better. rust-url has a bunch of dependencies, and you have to get them all in-tree.
2) More security review & planning. URL parsing is scary! And we'd want to ship & run it alongside the C++ one to check for places where rust-url is not fully web compatible, but there are major privacy issues in reporting back anything more than "1 failure," even for users who have explicitly opted in to reporting back data.
But the team is definitely working on both of these pieces and I'd hope to see it in the near future. No timeline / release number promises, though, right now :-)
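The "report only a failure count" idea from #2 might look something like this sketch. `parse_with_cpp` and `parse_with_rust_url` are hypothetical stand-ins for the two real parsers (the actual integration would call into Gecko and the rust-url crate); the point is that telemetry only ever sees the counter, never a URL:

```rust
// Hypothetical stand-in for the existing C++ URL parser.
fn parse_with_cpp(url: &str) -> bool {
    !url.is_empty() && url.contains(':')
}

// Hypothetical stand-in for rust-url.
fn parse_with_rust_url(url: &str) -> bool {
    !url.is_empty() && url.contains(':')
}

fn main() {
    let urls = ["https://example.com/", "not a url", "ftp://host/file"];
    let mut mismatches = 0u32;
    for u in &urls {
        if parse_with_cpp(u) != parse_with_rust_url(u) {
            // Only this counter would ever be reported, never `u` itself,
            // which is the privacy constraint described above.
            mismatches += 1;
        }
    }
    println!("mismatches: {}", mismatches);
}
```

Running both parsers on every URL and comparing only a boolean outcome sidesteps the privacy problem, at the cost of giving developers no reproduction case when the parsers disagree.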
It would be cool for #2 if, when a difference was detected, Firefox would try to generate a general case (or a minimal case), substituting out sensitive information. I guess sort of like fuzz testing...
Another issue is that determining what counts as sensitive information is a bit complicated… But there is the option of asking the user to edit the URL to find an anonymous-enough form of the bug trigger. Maybe after doing some basic fuzzing (like replacing runs of alphanumerics with random runs of alphanumerics of the same length, if possible).
Some people were talking about the patch in the bug tracker, but it never landed, as far as I know. It's still intended to eventually, but (at least when I've talked about it on here) it's been "there's a patch" not "this is in-tree".
I followed Rust at the beginning and I was pleased with the design goals of the language. However, I wasn't thrilled with the pre-1.0 documentation, and even after 1.0 there were breaking changes to the language.
Hopefully this is a sign of Rust's maturity. I'll have to look at it again:>)
Which post-1.0 breaking changes were you worried about? This is technically correct, but we've managed it in a way that's been hopefully painless to deal with, and the community survey seemed to agree.
The compiler has been fantastic emitting warnings about future language changes and I have appreciated it. The Rust compiler's messages (in general) are some of the highest quality I've seen, actually. The Rust Devs have done a superb job at documenting the changes. I'm more complaining that breaking changes to a non-beta language leave a bad taste in my mouth. Sure, they do have to happen sometimes, but it seems that this should be rarer than it has been so far with Rust.
So I haven't worried about anything per se, but breaking changes are mildly irritating to me. Without a doubt the utility value of some of these changes has been worth it (specifically RFC 1214, which was ironically one of the most major and definitely necessary). I'm not involved enough to be able to speak about other changes.
EDIT: I should also add that the software I work on professionally doesn't use any language that Rust competes with directly. Rust doesn't offer me professional utility so it's easy for me to complain about one thing and ignore all the benefits of Rust. If I were in a situation where I was contemplating using C++, Rust, or maybe Go, I think I would still choose Rust. It's far ahead of its competitors and gets a lot of things right.
Cool, thanks. I mostly want to make sure we don't have any blind spots here; this is an issue we care about deeply. I think it's also important to remember that even languages which are known for "no breaking changes" do introduce breaking changes. Take Java for example, known for being an exemplar in this space.
> However, implementation of some Java SE 8 features required changes that could
> cause code that compiled with Java SE 7 to fail to compile with Java SE 8.
Especially in a statically-typed language, technically breaking changes are a fact of life. The key is to make them as minimal as possible, so that you don't struggle to update.
(Oh, and glad to hear you like the errors: we're actually working on making them even better. Elm is really leading the way here...)
Between Rust's and Elm's error messages, Go's focus on compilation speed, and even GCC's improvements in diagnostics, I'm really enjoying this focus on developer UX I see. It's made me more UX-conscious in my own tools.
I'm curious, do you have a language in mind that has been better than Rust as far as breaking changes go after the 1.0 stable release? An example? Rust has been far above most languages/libraries that I've found in dealing with breaking changes. The complaint here seems pretty empty to me. (Complaining for the sake of complaining)
I think that Java has (on the whole) been very stable, but as steveklabnik pointed out, it also has had breaking changes. But considering Java's age and the quantity of these changes, I'd say it's been exceptionally good (especially since there's some fairly hefty pre-1.5 stuff that will still compile and run). Clojure, Go, JavaScript, and Scheme come to mind.
I don't think it's fair to compare a language's breaking changes to a library's. I expect different levels of stability depending on the library, but with languages I expect the same level of stability. Rust __has__ been very good at dealing with breaking changes, yes. I'm stating that I don't like breaking changes in a language. Breaking changes in a post-1.0 language are something that should be exceedingly rare.
It might appear differently because we're very up front about any change that might possibly break any code, even theoretically. We don't make any changes that we think actually break code, except for blatant bug fixes.
Go has made changes post-1.0 that were more aggressive than anything Rust has done, such as changing the size of int.
>Go has made changes post-1.0 that were more aggressive
>than anything Rust has done, such as changing the size of int.
Changing the size of int shouldn't be a breaking change in Go since pointer arithmetic isn't allowed and everyone should use fixed-size variables when you're relying on this behavior.
Even if this isn't true, Go's release note says:
>The language allows the implementation to choose whether
>the int type and uint types are 32 or 64 bits. [1]
The only thing that changed was the implementation, not the language.
> Changing the size of int shouldn't be a breaking change in Go since pointer arithmetic isn't allowed and everyone should use fixed-size variables when you're relying on this behavior.
All of the "breaking changes" in Rust have been of this form.
We're very up front about any possible breakage. When we talk about "breaking changes", we mean "a change that shouldn't be breaking, but could be breaking if code was relying on this bug/implementation-specific behavior". Those types of changes—changes that might cause breakage in practice but do not change the language definition—are often not considered "breaking changes" in Go. But in Rust we often call them "breaking changes", because we are concerned first and foremost about the practical considerations of changes we make, not just whether we are technically allowed to make them per the letter of the law.
> Changing the size of int shouldn't be a breaking change in Go since pointer arithmetic isn't allowed and everyone should use fixed-size variables when you're relying on this behavior.
Theoretically one should be using fixed-size variables, but people may not realise they're relying on a particular behaviour. One of the main points of using a language like Go over, say, C is that humans are imperfect and so having the computer assist is good, and this imperfection penetrates into all aspects of the software process. With a tasty name like "int", and it being generally the default type, I'm sure a lot of code uses it where fixed-size types would be better.
> The only thing that changed was the implementation, not the language.
This is somewhat irrelevant: it still results in code not compiling or possibly changing behaviour, because people in fact have to use an implementation of Go, they don't write in an abstract idealised version of it. Rust's breaking changes are generally the same sort of thing (undocumented implementation details, changing the implementation to match the long-stated desired behaviour and bug fixes), but they still result in people's code not compiling or changing behaviour and so need to be handled as such.
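To make the int-size hazard concrete in Rust terms (a sketch, not from any real codebase): `usize`/`isize` are Rust's platform-sized analogues, and anything that bakes their width into a serialized format changes meaning when the target changes, while fixed-width types do not:

```rust
use std::mem::size_of;

fn main() {
    // Platform-sized: 8 bytes on a 64-bit target, 4 on a 32-bit one.
    // Code that serializes a usize directly produces incompatible
    // formats on different targets, i.e. the "implementation detail"
    // leaks into observable behaviour.
    println!("usize is {} bytes", size_of::<usize>());

    // Fixed-width types keep the format stable across all targets.
    let len: u64 = 42;
    let bytes = len.to_le_bytes(); // always exactly 8 bytes
    assert_eq!(bytes.len(), 8);
}
```

This is the same trap as Go's `int`: nothing in the language definition changes, but code that quietly assumed one width breaks when the implementation picks another.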
I'm sorry to say, but you must be delusional. Can you list these "aggressive" breaking changes? There have been only 7 releases since go1 and I fail to find any breaking change in the language specification. On the other hand, each Rust release has a fat list of "BREAKING CHANGES". Many Rust packages only work with specific Rust versions (i.e. nightly). The Rust ecosystem (std lib, tools, etc.) is also way behind Go in terms of stability. Why do you need rustup if Rust is so backward compatible? I'm not saying that Go is a better language than Rust, but its ecosystem and dev experience is definitely superior.
> Can you list these "aggressive" breaking changes?
Changing the size of int. Changing methods to introspect the type of their arguments and do things differently. And so on.
> There were only 7 releases since go1 and I fail to find any breaking change in the language specification. On the other side each Rust release has a fat list with "BREAKING CHANGES".
Because we have a very specific definition of "breaking change" that is primarily concerned with the practical effect of changes we make. Go does not consider these changes "breaking". Using Go's definition (changes to the language definition), we have no "breaking changes".
If we were to change the size of int (something we will not do, by the way, due to the practical effects of making such a change), then we would list it under "breaking changes", even if we were technically allowed to do it. That's because we care about the practical effects of our changes, not just the letter of the language definition.
> Many Rust packages only work with specific Rust versions(i.e. nightly).
Because they are explicitly opting into unstable features that are carefully marked as such. We can't stop packages from doing that. Nor can any other language.
> The rust ecosystem(std lib, tools etc) is also way behind Go in terms of stability.
The parts of the Rust standard library that are marked stable have remained completely backwards compatible, in both interface and implementation.
> Why do you need rustup if Rust is so backward compatible?
Because it's nice to keep your compiler up to date and to target different platforms?
You are right about the compiler changes, but even so you can't compare one or two compiler breaking changes with Rust, which has language changes as well. Compiler changes are the norm in Rust.
>> The parts of the Rust standard library that are marked stable have remained completely backwards compatible, in both interface and implementation.
Rust doesn't have backwards incompatible language changes.
> This looks like a breaking change on a stable API.
No. It is a method addition. It is breaking only in the sense that code that didn't explicitly invoke the previous "as_ref" method might call this new method instead. It's the moral equivalent of:
    type A struct { ... }
    type B struct {
        A
    }
    func (a *A) Foo() {}
and then a later version of Go adds a method:
    func (b *B) Foo() {}
Such that code that called Foo() on an instance of *B might call the new method instead. Go can make those changes.
Compiler changes are the norm everywhere? I'm not sure what you're trying to say here.
> This looks like a breaking change on a stable API. Am I wrong?
Yes and no.
Rust's policy on breaking changes is that changes that can be fixed by properly qualifying an implicit path are not breaking. Otherwise, adding any method to anything would be a breaking change. In this case, you can use the UFCS syntax to disambiguate.
So it's an "allowed" kind of breakage because not allowing this means freezing the stdlib.
(Go doesn't have this issue due to lack of generics, overloading, and interface-based overloading. Edit: actually, go does too, due to inheritance, but that is easier to avoid and isolate. In rust you can always write client code that breaks if the stdlib adds a method, anywhere. This is true for most typed languages).
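A minimal sketch of the kind of breakage being described, using made-up trait names: once two methods with the same name are in scope for a type, the unqualified call stops compiling and must be rewritten with fully qualified (UFCS) syntax. Adding a same-named method to the stdlib can force exactly this rewrite in client code:

```rust
// Two traits happen to define a method with the same name.
trait Loud { fn describe(&self) -> String; }
trait Quiet { fn describe(&self) -> String; }

struct Thing;

impl Loud for Thing {
    fn describe(&self) -> String { "LOUD".to_string() }
}
impl Quiet for Thing {
    fn describe(&self) -> String { "quiet".to_string() }
}

fn main() {
    let t = Thing;
    // t.describe(); // ERROR: multiple applicable items in scope
    // Fully qualified syntax disambiguates:
    assert_eq!(Loud::describe(&t), "LOUD");
    assert_eq!(<Thing as Quiet>::describe(&t), "quiet");
}
```

This is why "adding a method" is only a breaking change in the narrow, disambiguation-fixable sense: no behaviour changes, but some callers have to spell out which method they meant.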
Anything that has the chance of practically breaking things is still run through crater (which tests impact on the ecosystem) and as you can see that PR had minimal impact.
Steve has pointed out that go did have this issue as it has a limited level of auto-deref, which it has changed in the past: implementations performed two levels of auto-deref when executing methods when the spec only requires one, the implementations were changed to only allow a single auto-deref: https://golang.org/doc/go1.4#methodonpointertopointer
Right. However, as I mentioned in the edit, you can still be careful about avoiding breaking changes through inheritance and autoderef in the evolution of Go's stdlib. It forbids very specific types of methods from being added, and if you avoid that, you can continue to add methods as if nothing is wrong.
Rust (and C++, because SFINAE, and many other languages), on the other hand, technically has a breaking change each time any method is added to any public type in the stdlib. It's always possible that the client lib was using a trait method of the same name, and now has to fully qualify the method call.
https://doc.rust-lang.org/reference.html , which is accurate, but not always 100% up to date with the latest RFCs. There's also work on a formal, proven specification of the memory model, but that's not done. It'll be a while.
That's understandable. I've just tried to prove the point that Rust is still a language in flux compared with Go, hopefully making the Rust team aware why some developers hesitate to use Rust on new projects.
> I've just tried to prove the point that Rust is still a language in flux compared with Go, hopefully making the Rust team aware why some developers hesitate to use Rust on new projects.
That's not what I've seen from your comments. Instead I've seen some confused arguments about what "prose only" means (anyone in the PL field would consider both Rust and Go's documentation "prose"), combined with incorrect statements about both Rust and Go and a completely baseless assertion that Rust is "a language in flux".
Rust uses a much stricter definition of breaking change than does Go. As discussed elsewhere in this thread, Go changed the size of integers. While this is technically allowed by the language (it wasn't previously specified), and it shouldn't break conforming code, it can break code that depended on the previous size.
The Rust maintainers would have considered this a breaking change. The Go maintainers did not. This isn't to say either side is right or wrong, just that they are measuring different things.
Additionally, the Rust maintainers have been exceedingly cautious whenever making these types of changes. They literally download, compile, and test all published crates to look for indications that such a change might actually break existing code. In the very few cases it has, they've worked with crate authors to incorporate fixes.
The very low bar Rust sets for determining what is a breaking change directly reflects the extreme regard they have for this issue.
> Why do you need rustup if Rust is so backward compatible
To cross compile. To have quick toolchain updates. To get bleeding edge compiler improvements (e.g. speed) quickly. To test out new features. To help find bugs in the compiler.
One very common use case of rustup is to use clippy. Clippy is a developer tool which hooks directly into the compiler and uses all sorts of private APIs, an inherently unstable thing. It only works on nightly. Lots of people write their code to work on stable, but want to use this tool so they use rustup. Note that no language has a stable way of hooking into the compiler.
Very few rust packages only work with nightly. Care to provide some examples?
> Why do you need rustup if Rust is so backward compatible?
For one, testing on various versions of Rust. For example, I have a kernel project that's pinned to a particular nightly version, while the rest of my projects build on stable. Rustup makes this Just Work.
Well, that's my point! You shouldn't need to test various versions of Rust if there is a strong backward compatibility policy. I might be mistaken, but my feeling is that most of the Rust devs are using the nightly version, hence the need for a tool to debug/test different versions.
He didn't say he was "testing" with nightly: some experimental features only work on nightly (which, being experimental, the features may change in breaking ways, but that's why people have to opt-in to using a nightly) and so if one of your project needs one of these features, you can use rustup to get nightly for just that project and the stable releases for the rest of your work.
A staged release cadence with different levels of surety gives people the ability to play with features as they're developed to make sure those features solve the problems they're trying to solve (in the best way) by giving time for real-world experimentation and feedback. A feature can graduate from nightly-only to stable, and it then has a strong backwards compatibility requirement. The nightly experimentation period is valuable to get those features perfected before people can start relying on them more broadly.
What if I would like to guarantee this property for my own code? As well as testing each nightly as they come out, in case something accidentally breaks, so it can be fixed before a release? This tooling assists greatly with that.
(And dbaupp is correct that it's not always about testing; not all of the OS dev features are in stable yet, so nightly is the only option for that kind of project.)
Right, but humans are fallible. Bugs happen. It's a good idea to test early and often, just to make sure: Travis runs are extremely cheap. Better to catch accidents before they make it into an actual release. More testing doesn't hurt anyone.
I would like the code developed now to work with all subsequent rust releases until 2.0 so that I can take advantage of improvements to the compiler and std libraries without any additional effort.
A small effort may be required if there were security/critical bugs.
> I would like the code developed now to work with all subsequent rust releases until 2.0 so that I can take advantage of improvements to the compiler and std libraries without any additional effort.
> Although we expect that the vast majority of programs will
> maintain this compatibility over time, it is impossible to
> guarantee that no future change will break any program.
It's extremely similar to our attitude, and that of Java, etc:
> Of course, for all of these possibilities, should they arise,
> we would endeavor whenever feasible to update the specification,
> compilers, or libraries without affecting existing code.
This isn't a bad thing! My point is as I said below: every language includes some small breaking changes, even if the goal is to have very, very few of them.
Sure. My point is that this is similar to what we guarantee: the compiler used to compile some code, and now it won't. As they say, "It has therefore been disallowed in Go 1.4, which is a breaking change, although very few programs will be affected." Our breaking changes have been similar.
Rust has RFCs, and many of the "breaking changes" were in places where the implementation didn't follow the RFC. Others were in things that were never intended to compile. It's effectively the same thing.
If you click on the image to see the interactive graph, under "Advanced settings" there's a switch "Date range variable: filter submissions by build date range or submission date range", which is set to "build date range". So those "3 dates" correspond to three different build dates.
The telemetry link in the article points to Firefox 45 data. Curiously, the error rates appear to be going up: Firefox 46's error rate is 0.10% and Firefox 47's error rate is 4.35%.
Wouldn't that make sense because FF 46 was not an ESR, 47 is the latest, and most users have been pushed to 47? More users are triggering more bugs in the parser.
Also, 45 ESR doesn't include the Rust code on Windows (I heard that's landing in 48, if a Mozillian could please confirm that), so that's a large userbase to not have included in testing.
Performance can be considered a consequence of security/language safety. Rust code is free from data races and developers can utilize concurrency and all available CPU cores without fear.
Interesting question. One assumes they've achieved at least parity with the legacy code, so comparing these in detail would yield some useful insights.
Rust was supported from the start by Mozilla in order to enable the construction of an experimental browser engine (ie. Servo) that was completely separate from Gecko and Firefox. Afaik, there was never any explicit goal at the beginning to rebuild things in Firefox - this is just the result of how successful the language project has been.
I see it more like the Chesterton's Fence argument. Flash exists, and continues to plague us, because it met a need for developers that wasn't met in other ways. If that need still exists, rather than remove Flash (and the capability with it), replace it with the same capability but with a much reduced attack surface.
I think that would be an excellent project for someone; if they could both show all of Flash's capabilities implemented in Rust and show that the result was safer, it would be an excellent endorsement of Rust. It could also illuminate end-user features which can never be made safe. Also good for the overall body of web knowledge.
> Flash exists, and continues to plague us, because it met a need for developers that wasn't met in other ways.
Emphasis on "met". I've yet to come across a function that can be built in Flash, but not in HTML 5. In fact, I'm not using flashplayer at all anymore and I don't suffer. (There are a few video sites that are still Flash-only, but `mpv --ytdl` works around that very nicely.)
Come to think of it, this is actually not true. At work, I have to use Flash Player for exactly one thing: Adobe Connect. I wonder why they didn't move away from Flash yet. ;)
Not sure if this is the case for adobe connect, but other, similar products (such as blackboard's collaborate) are trying to move across to webRTC. This is complicated because Apple doesn't support webRTC yet, which I'd imagine is why Adobe hasn't moved across just yet.
IIRC, technically alternate (rendering) engines are allowed but you can only execute downloaded code using the system's webkit/JSC (and if you didn't you'd be restricted to a straight interpreter).
I presume not because IIUC the actual video decoding would be handled by iOS's built in webkit. However I would expect that some rust components would make it to the iOS Firefox in the future.
Am I reading either the headline or the announcement wrong?
As I read it, they are not shipping Rust but a component written in Rust.
> For this reason, Ralph Giles and Matthew Gregan built Mozilla’s first Rust
> media parser. And I’m happy to report that their code will be the first
> Rust component shipping in Firefox.
Was it inconceivable for you, after reading the article, to understand what they meant? Is it so shocking a headline that multiple people have commented that "well REALLY it's just compiled code from Rust, not the runtime/source code/etc.". Which one of those possible interpretations makes the most sense?
It's just shocking to me this type of meta-discussion about the phrasing of an announcement headline is the bulk of top comments.
I'd say it's nerdview that's going to be read by nerds, and that unlike the examples in that lovely post, it will be understood by the majority of people who read it.
Sure, it wasn't egregious, but I think the insider perspective idea has lots of explanatory power when it comes to the confusion Noseshine expressed and the surprise wyager responded to the confusion with.
Well, obviously we can disagree and I could be wrong, but I think Noseshine's response was a "man bites dog" situation. After thinking about it, I'd still expect the intended audience to see it.
What does "man bites dog situation" mean? Looking at the Wikipedia explanation for the phrase in the context of journalism it supports my question about the title: Extreme and rare events are more likely to be reported. Obviously shipping "Rust code" is much less noteworthy than "shipping Rust" would be.
Sort of offtopic, but.. since Manish has already clarified the intent below... I am just curious - would you have drawn a similar conclusion from the headline if the language in question was not Rust, but a more popular/stable one, like say JavaScript/C? "Shipping C in Firefox"? (asking purely from a linguistic point of view)
Err, I didn't mean to say that Rust is unstable in the literal sense. Apologies! Just meant to highlight its "newness" in the given context (i.e., relative to other languages) :)
What does "replace" have to do with it? Why do you invent the most illogical option possible - that I never uttered - to find a counter argument?
Assuming that they ship another language in addition to JavaScript would not be completely out of the question. While it seems that with WebAssembly that is no longer necessary, who knows what Mozilla, in search of future funding, may come up with to open new markets. They also tried creating their own OS for mobile; shipping Rust itself to further its adoption, for some reason inconceivable to me at the moment, is no less improbable.
So yes, I wasn't sure what to expect from the headline - it said they are "shipping Rust" after all, so something crazy was certainly within realm of possibility.
> While it seems that with WebAssembly that is no longer necessary
I believe they have mentioned that a WebAssembly target for Rust compilation is something they are working towards, and IMO that would be the ideal way to deploy Rust for a website (if it's a compiled language, there's no need to deliver it uncompiled).
The headline is really wrong. Mozilla is shipping compiled and executable code to us. Now, if it's Perl, I can confidently say I'm shipping Perl code to you.
Their source contains the actual Rust code and it's part of the build process. Distribution build maintainers will compile that Rust code when making their packages. I run Gentoo, so that process happens when I upgrade Firefox.
[1] https://github.com/mozilla/geckodriver