I tried Rust about a month ago. The language itself is amazing, the pattern matching is super expressive, the borrow checker is incredible in the kinds of errors it can pick up on, and rust-analyzer is leagues beyond where RLS was. But... the compile times are an absolute non-starter for me. I'm the kind of guy that likes to re-run his code continually to see if it validates to what I expect it to be doing. In Rust, this kind of workflow just doesn't work at all. The long compile times completely kill any enjoyment or flow that I normally get out of productive programming. Even worse, I found myself anxious about introducing new dependencies, because each new crate would add a few more seconds to the compile time, which immediately translates into less coding enjoyment.
I keep reading in Rust surveys that Rustaceans just don't care that much about compile times enough to prioritize improving them. I've often wondered how this can be possible, given that to me it's such an obvious glaring issue that all the other cited problems are distant distant seconds at best. I have a theory: there must be two groups of engineers. One group loves fast compile times and quickly validating hypotheses. The other group must value thinking about their code a lot more than running it, and so compile times aren't that important. My guess is that, while the second group hangs around and loves Rust, Rust has completely driven away the first group (including myself) to the degree that they don't use the language enough to even fill out the surveys.
Anyways, I know it has a wide swath of use cases, mostly in systems programming. I'm just bummed that if I ever do any of them, I won't really enjoy it. :-(
[EDIT:] Gotta go to sleep, it's far too late here. I really appreciate all the thoughtful replies. Rust's amazing community is another reason it annoys me that I can't fully get into the language as I'd like to.
> I keep reading in Rust surveys that Rustaceans just don't care that much about compile times enough to prioritize improving them.
I am interested in how you got that impression, because at least in our official surveys, it's often one of the most-requested improvements to Rust, and it's something that we're constantly working on improving. Still a ton of work to do though!
For what it's worth, my workflow is closer to yours than "compile twice a day." And if you're using rust-analyzer, by default, it compiles the code every save!
Here's the exact thing that I checked when making this conclusion a few weeks back, the most up-to-date state of the compiler roadmap that I could find: https://rust-lang.github.io/compiler-team/minutes/design-mee... I see 16 top-level goals on here for next year (under the Goals section). The only thing that seems related to compile speed is to continue working at incremental compilation (no new initiatives?). And even there, the only action item for the entire year is to create a working group.
I love rust-analyzer! But I'm not sure what you mean by "it compiles the code every save." I love how fast it can type-check my code, but I was unaware it could actually compile my code into something I can run? I thought it was just an LSP provider.
Ahh I see. Yes, so that's meeting minutes, so they're very inside baseball, and so it would make total sense that you would get this impression from this.
Before we get into that, to answer the other question:
> But I'm not sure what you mean by "it compiles the code every save."
Rust-analyzer (by default) will run "cargo check" on save. cargo check does everything except codegen, so it won't give you something you can run, but it does invoke the full set of compiler analyses and everything else.
Now, what it does with the type-check stuff is the key to understanding what you're missing from the compiler team roadmap, funny enough. So rust-analyzer is actually going to end up merging with the rustc codebase eventually, if all things go to plan. Basically, rust-analyzer is slowly re-implementing the compiler from the outside in. It's doing this because the best way to get the largest win on compile times is to completely re-architect the compiler. This comment is too long, so I won't get into too many details, other than to say that it's going to be less like the Dragon Book and more like C#'s Roslyn, if that means anything to you.
It's doing this by taking advantage of a process called "librarification," which is extracting stuff from the compiler into re-usable libraries. You can see this on the notes when they talk about chalk; chalk is the next-generation trait resolution system. This is integrated into rust-analyzer, and into the compiler. So slowly, bit by bit, things are being re-architected and integrated in a way that will be much, much better in the future.
So this is a massive, massive project that touches everything, and so there's nothing spelled out in the minutes that says "this is for compile times" because the folks involved already have this context, basically.
And so yeah, that's the big project. There are also contributors who, while this larger work is going on, are working on individual PRs to make things faster. See https://blog.mozilla.org/nnethercote/ for one of the largest contributors in this regard, that talks about the work he's doing. But that doesn't appear on this either; there's no need for a plan or coordination here.
... does that all make sense? I am thinking maybe it would be good for us (to be clear, I mean the Rust project, I am not on the compiler team and am not doing this work) to like, actually write a blog post on all of this...
Mostly, that makes sense. I think the thing that doesn't make sense to me is that rust-analyzer seems to me to be an LSP - my mental model of it has nothing to do with codegen. And I thought that codegen and linking were the slow part in Rust (cargo check runs so fast!), so how could those gains be brought back to rustc? I'd bet other people have this misconception, too - maybe that should be highlighted on the rust-analyzer readme on Github.
If rust-analyzer will eventually introduce those gains to rustc, that is fantastic news, and I'll be watching it with great interest! The improvement from RLS to rust-analyzer is exactly the type of improvement that brought Rust from the "fascinating tech demo" to "I could actually see myself using this day-to-day."
And yeah, I'd love to read a blog that went into more detail.
Yeah, I read through that too. He's my hero! :-) But the feeling I got when reading it was: "fast compile times" really need to be part of the DNA of a language. Let me explain what I mean. Some languages, like Go and TypeScript, have this in their DNA, and that means that every design decision and every addition to the language is considered seriously through the lens of compile time speed, and vetoed if it were too costly. But with Rust, the fact that there's one guy writing a blog post about some incremental wins he managed to chalk up just... doesn't seem like it's part of the DNA. If it is, why is it just one guy, and why do a lot of his changes seem more like incremental wins than the big sweeping changes I'd expect to be necessary? I could definitely be wrong here (sounds like I am and rust-analyzer is that big sweeping change).
Thanks for all your great responses, by the way. I really appreciate it!
> Thanks for all your great responses, by the way. I really appreciate it!
Any time. :D
A couple more brief comments:
> rust-analyzer seems to me to be an LSP
Remember that LSP is a protocol, something/someone has to actually figure out the answers. Like, the LSP says "please draw the squiggles here", but something has to actually say "line 1, column 10, please". Doing that involves semantic analysis, which is what a compiler does.
> And I thought that codegen and linking was the slow part in Rust
It is the slowest part of the current architecture, but that doesn't mean that the current architecture is the best possible one.
The RLS invoked the compiler, and then examined its output to say "line 1, column 10, please". And you said you saw the improvement with the architectural switch to rust-analyzer. Same thing.
> "fast compile times" really need to be part of the DNA of a language.
You are correct that, when push comes to shove, if there's a tension between, say, runtime speed and compile-time speed, Rust will choose runtime speed. Rust will not have compile speed as high up on the list of concerns as Go does. But it is important enough that we don't let major regressions happen, and actively pursue improvements where possible. https://perf.rust-lang.org/ for example, tracks this data over time for this kind of reason.
> why do a lot of his changes seem more like incremental wins than the big sweeping changes I'd expect to be necessary?
Well, again, it's not always either or. He's doing the incremental thing, and others are doing the big sweeping changes thing. His improvements land nearly every release, but the bigger projects take a lot longer. They work in tandem to make things better than they were before.
One of the first computer systems I used frequently was Windows ME. This system used to crash (BSOD) extremely frequently (depending on what you were doing, it could be as frequently as every 20 minutes), and any unsaved work would be lost.
I have therefore developed a reflex where I hit Cmd-S every time I finish typing. I actually have to put conscious effort in to stop this when using software where saving is slow.
I don't have that much to add to the discussion, really. I just wanted to ruminate on something:
> I've often wondered how this can be possible, given that to me it's such an obvious glaring issue that all the other cited problems are distant distant seconds at best.
It's amazing how little I understand your point of view. And I'm being genuine and not critical of you at all.
To me, all of the features of the language, from move semantics, to ergonomic sum types, to nicer error handling (to me. I know it's debatable), and great runtime performance are so important, and bring me so much more peace and joy while working that compile times are literally not even on my radar. I simply don't care if they ever get better (assuming that effort is being put elsewhere. I guess if the language is "done" then work on the compile times is appreciated).
And I've certainly done real work in languages that compile fast (Go, Java) and languages that don't even compile. They always made me so much less happy because of the languages themselves just not clicking with my mental model of problem solving. Again, compile-run feedback loop just had nothing to do with it.
All that said, I use Rust Analyzer for code completion and it's generally fast enough. But it still sometimes has a noticeable stall. That's not my favorite, and it's kinda related to compile times.
Just really amazing how different people using the same tool have such wildly different modes of interaction and perceptions. Cool stuff. Cheers!
Believe me, I love the same things you love about rust. Move semantics, sum types, error handling are all amazing, to say nothing of the ridiculous perf!
I think our difference is I spend a lot of time on graphics and games, where there's a lot of manual tweaking that you have to do. (Is 20px far enough? 25px? Is 1 second long enough for the explosion? Maybe 0.5s instead?) There's just no way to test it outside of rerunning the app.
The time to compile these small web apps is not worth worrying about after the initial compile. I'm not sure what he's getting at to be honest. While it's certainly not instant, it's not that bad.
In dynamic languages like JavaScript or Python it's necessary to continually run your code in order to validate that the API (of the standard library or of dependencies) was used correctly, that the types match, etc.
Rerunning your code continually is no longer necessary in a language like Rust, because the compiler already does that for you.
People complaining about slow compile times in static languages like Rust, Haskell, or Scala miss the forest for the trees, which is that the compiler eliminates entire bug categories, logic you'd otherwise need to check at runtime, either via unit tests or at least by running the code locally. Compile times can always be better, but the compiler works as a theorem prover and the slowness is entirely justified, overall yielding a better ROI.
N.B. I'm not saying that with Rust you don't need to run your code or have unit tests. There's only so much a compiler can prove. But you no longer need to do it as often.
You can't switch to a language like Rust and expect it to feel like Python, and if you do, then the experience is going to be horrible, because Rust is a very different language.
> In dynamic languages like JavaScript or Python it's necessary to continually run your code
No it's not. It's just preferable to do your work in smaller batches so that your feedback loop is fast and you know exactly which change broke things, rather than making 10 changes and then having to figure out which of those changes 4 bugs relate to. And it's better to discover a flaw in your implementation early rather than late, which is also easier when working in small increments.
You get used to it and adapt. That might sound bad but comes with benefits as well. You will train yourself to think and be thorough to get it right the first time without it slowing you down.
Getting feedback quickly can be useful (especially when trying something new), but not needing it is liberating (especially when trying something new).
For me the problem would be learning how something does or does not work, even with a compiled language.
The reason I hate Spring for example (not that I've worked with it extensively) is that the documentation gives you zero feel for what you expect the behaviour to be. It will compile, but until you run it and check what it's doing, very often you'll be developing with no idea of what you're actually building.
Especially if you're trying to get specific behaviour you want to see (e.g. figuring out why it won't send mail, or why it doesn't ignore a certain JSON key or whatever), it still takes many runs to get right.
Short compile times are beneficial when development involves a lot of trial-and-error.
The parent comment was insinuating that, when developing in Rust, one can leverage its type system to replace this trial-and-error with static verification. And, indeed, if what you're worried about is a dangling or null reference error, the Rust type system has a rich language for specifying expected behavior so you don't have to run your code to check it.
If, on the other hand, your job description includes the nebulous mandate to "make it look right," there isn't any feasible way to design your types so that Rust can check that. For example, sometimes I have to do something like "make sure the dialog box is wide enough that it doesn't truncate the title string on any relevant platform." The Rust type system has no way for me to query several versions of macOS and Windows about when they start truncating dialog-box title strings and size my dialog that way, so here I am, still performing this task by trial-and-error, waiting forever to compile my code each time.
Nothing about this is Rust-specific by the way. I'm similarly bearish on statically-verified cross-platform dialog boxen in Haskell. And that's one of my dumber "make it look right" tasks - I work on CAD software so most of them involve 3-D graphics and/or computational geometry.
And, returning to the parent comment, waiting for C++ to compile in order to test things is a giant fucking time-sink and drain on my productivity. It can't really be automated, or, if it can, it's way beyond the reach of this organization's know-how.
In summary, I call bullshit on static verification replacing snappy trial-and-error in the UI / Graphics space. In practice, you're left shifting UI / Graphics customization out of the host language and into "content" so that you can tweak it without re-compiling. But this is another massive fucking time and complexity sink that only exists because at some point you let your compile-times get so far out of control that your people couldn't do their job ("make it look right") in a reasonable amount of time.
I see, the type system cannot catch logic bugs, specifically with an applied example of the visual aspect of an application. Got it. Thank you for expanding on it!
In theory and to some degree in practice. The compiler won't fix the code that compiles but runs incorrectly.
Besides, there are statically typed languages where compile time is not a problem; see D.
Type checking and parsing aren't the bottleneck in Rust compilation; code generation is. LLVM is particularly heavy, and while it generates optimized code, it is quite slow.
The Go authors didn't pick LLVM because of compile-speed reasons (as well as complexity), and that turned out to be a worthwhile tradeoff.
Well, worthwhile in terms of compile times, which makes sense knowing google’s codebase. Most people aren’t compiling massive dependency trees that can’t fit on any single computer when they push a commit.
Anyway in C++ land massive compile times are just as much of a problem. Fast code is expensive. Go’s a lot of things, but being good at generating fast code ain’t one of them (looking holistically anyway).
Go's authors didn't pick LLVM because they weren't sufficiently familiar with it to implement what they needed (e.g. segmented stacks). See https://news.ycombinator.com/item?id=8817990 for the full explanation.
I said the type system is not the bottleneck; codegen is.
Even if you count the size of code after monomorphization, I am pretty sure the Go compiler compiles it much faster, because the compiler is not in the benchmarks rat race of adding one optimization pass from every academic paper in the world for diminishing returns. That would be unnecessary for a development compiler.
The Go compiler could have an optimized, slower build option in an ideal world, but that's a different matter altogether.
If you're casting to interface{} non-stop, you are doing something against the grain of the language. It works, but there's probably a better way of getting the result you are looking for.
As a datapoint for your hypothesis, when working in a typed language I will build my code only a few times a day.
I much prefer working from a logical and thoughtful approach rather than iteration. At the point where I start a build I am already reasonably confident that it will do what I want it to.
There are some bugs where I will need to re-build several times consecutively but these are relatively rare for me (I work on REST API systems - so nothing too crazy).
That's really interesting. Compiling only a few times a day is mind boggling to me - in a normal day I'd probably average hundreds to a thousand compilation cycles!
The main thing that I use compiling for is to validate little off-by-one things. Like, is substring() exclusive on the second parameter? What about range syntax and the slice operator? What if I wrote + 1 instead of - 1 somewhere, or did < instead of <=? I could spend a few minutes combing documentation, or just compile it and check immediately. Well, assuming compilation is fast anyways.
> The main thing that I use compiling for is to validate little off-by-one things. Like, is substring() exclusive on the second parameter? What about range syntax and the slice operator? What if I wrote + 1 instead of - 1 somewhere, or did < instead of <=? I could spend a few minutes combing documentation, or just compile it and check immediately. Well, assuming compilation is fast anyways.
I'm a really big fan of reading the docs (or, where the docs are insufficient, the source), and will always have docs.rs open alongside my editor. While the examples you provide are easily validated by testing (and I'd probably use `evalr` in `##rust`, or some similar thing, for them), many subtleties can exist in more complex functions, so not reading the docs/source for unfamiliar functions feels like programming-by-guessing to me.
Perhaps my examples were a little too trivial. I do a decent amount of game/graphics coding, so there's a lot of "hmm, does this look good 20 pixels over? How about 18?" and I don't know how you'd get around that without recompiling.
(OK, you could read preferences from a file, but then you'd have to optimistically write every value you'd ever want to recompile to a file. I've tried this, but it's far too much overhead.)
Why not have controls in your program to do these immediate, non-logic changing modifications? It can report the ideal value through a debug interface. No recompiles needed.
I'd also recommend getting used to serialisation/deserialisation. Serde for Rust makes this remarkably easy. Writing every setting to a file is simple if the compiler can do it for you, rather than you walking the long way around.
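Roughly like this, to sketch the idea (assumes `serde` with the derive feature and `serde_json` as dependencies; the field names and file name are made up):

```
use serde::{Deserialize, Serialize};

// Tweakable values live in a file, so they can be edited without recompiling.
#[derive(Serialize, Deserialize, Debug)]
struct Tweaks {
    walk_speed: f32,
    explosion_duration_secs: f32,
    dialog_width_px: u32,
}

fn load_tweaks(path: &str) -> Tweaks {
    let text = std::fs::read_to_string(path).expect("couldn't read tweaks file");
    serde_json::from_str(&text).expect("couldn't parse tweaks file")
}

fn main() {
    // Re-read on startup (or on a file-change notification) to pick up edits live.
    let tweaks = load_tweaks("tweaks.json");
    println!("{:?}", tweaks);
}
```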
When you're doing any sort of non-trivial gamedev, graphics, or physics simulation work, there are so many instances when you have to get a bunch of magic numbers by trial and error, and it's usually all over your code. If you try to "refactor" these cleanly into a separate JSON file (hint: it NEVER is that clean), it still takes time and effort away from you to write the boilerplate code, which seriously kills your momentum for experimenting and iterating with your code. It's kind of a niche domain-specific thing which most gamedevs and graphics programmers would sympathize with.
There are also a shit-ton of minor logic-changing modifications when you're writing prototype gamedev code (for example, moving an if statement here or there to tweak the physics of your platforming). These cannot be easily represented in config files, and this is why people attach lightweight scripting languages like Lua to their game engines (so that you don't have to wait for those atrocious C++ compile times when tweaking your gameplay code).
(Ironically, serde is known for its atrocious compile times, which really makes the situation even worse. Because of this some frustrated Rust gamedevs wrote their own serialization libraries such as nanoserde (https://github.com/not-fl3/nanoserde)... but you get the point.)
FWIW, I am with you in the camp that cares about compile times. I use C++, I care deeply about compile-time correctness (and have "as close to Rust as I can get" abstractions for tons of things such as small locks, but then take advantage of the almost-dependent typing provided by C++ to go even further than you can in Rust to prove correctness of my buffers), and I envy people who get to code in Haskell or Idris. I am not a "dynamic typing" addict by any means.
But I work on network protocols and file formats and parsers and video transformers and user interfaces (often command line or web) and a long time ago games ;P... I make a small change and then I want to see the behavior difference. I almost always type code that compiles, but that doesn't mean it does what I want when interacting over the network: Rust doesn't provide anywhere near powerful enough type abstractions to actually prove my code is correct... it only proves that my code can't crash and will avoid undefined behavior.
What is so frustrating is that there is nothing about Rust that makes it impossible to make fast. With C++ I get to manually make tradeoffs in my code on a file-by-file basis with an assumption that my project will be built out of thousands of mostly-independent fragments that I am able to compile incrementally or in parallel. I can very rapidly isolate individual functions I am working on for fast iteration without having to restructure my project, I can tell the compiler to instantiate templates in different ways to reduce dependencies, and I have designed a system to let me do parallel compiles on AWS Lambda (I haven't yet gotten my build environment to be ready for a -j1000 parallel build, but I am somewhat close).
I have been integrating some rust libraries into my codebase (I haven't even gotten to the point of really coding in it omg) and Rust is just so painful and frustrating to work with... and it really does just seem to be, as you pointed out, a matter of priorities and interest: there is this myth that it is some kind of tradeoff on the type system, but it isn't; it is just that the people who work on rust seem to not work on the same kind of projects that we do, at least in the same way, and so have made a bunch of trade offs like "I would rather never have to type a prototype than save any time compiling" and "I would rather my build system never have to consider dependency management than save any time compiling" and "I would rather provide the highest possible quality resulting binary than save any time compiling" (which is at least one I can appreciate, but only once a month when you cut a release... not the hundreds of times a day that I recompile my C++).
> "I would rather never have to type a prototype than save any time compiling" and "I would rather my build system never have to consider dependency management than save any time compiling" and "I would rather provide the highest possible quality resulting binary than save any time compiling"
Rust outputs prototype information in .rmeta artifacts that are generated as one of the first steps in compilation, so this is not a compile-time issue. AFAICT, dependencies are also tracked, and debug vs. release switches are provided that also control things like binary optimization.
The real "problem" for compile times in Rust is that Rust makes it idiomatic to write code that's slower to compile. That's really all there is to it. If you were to literally write Rust like it's C, you'd find that there's no real overhead introduced by Rust per se.
(This is not to say that improvement is not possible, of course. Even what's "idiomatic" can be tweaked over time to reduce the amount of excessive, duplicated work that the build system has to do. Newer features like const generics will probably make this feasible in the future, and improvements in the compile workflow itself will do the rest.)
Yes, for example I can easily do multiple UI changes in UWP, with C++/WinRT or C++/CX, in a fraction of the time that it takes to build Rust applications.
I am quite curious how usable Rust/WinRT will turn out to be for WinUI work.
> I do a decent amount of game/graphics coding, so there's a lot of "hmm, does this look good 20 pixels over? How about 18?" and I don't know how you'd get around that without recompiling.
I know exactly what you mean. Front-end development can be this way, too.
I think it's OK (and probably true) to say that you shouldn't use Rust for cases like this right now.
Yeah. It's a bummer because Rust would be a fantastic language for gamedev otherwise, and a lot of people are interested in it for this reason. It's got everything else you'd want out of a good gamedev language - no GC, smart memory model, blazing perf, really expressive...
> I think it's OK (and probably true) to say that you shouldn't use Rust for cases like this right now.
I think that's true, but only because of the slow compile times. Which is why they're so frustrating. Rust would otherwise be an excellent language for these use-cases.
I see where you're coming from. The good news is that compile times are something we are certain will get better. Rust's maintainers know it's a high priority, but even if they didn't, we would still benefit from increasingly powerful machines used for graphics/gaming development.
That’s the sort of thing where you often use another language suitable for hot reloading for such customisation or rapid-feedback alteration. As a couple of examples I’m aware of:
• The Azul GUI toolkit uses CSS for styling of its widgets, and hot-reloads stylesheets.
• The Mun programming language is designed for the sort of niche Lua is often used, with hot-reloading scripting-like functionality to augment your Rust code. (Well, Rust is the main host language at present, anyway.)
> The main thing that I use compiling for is to validate little off-by-one things. Like, is substring() exclusive on the second parameter? What about range syntax and the slice operator? What if I wrote + 1 instead of - 1 somewhere, or did < instead of <=?
Rust has a testing facility that can be used for these things. By default, all "example" code that's included as part of a doc comment ends up in the test suite. And because test cases are small and self-contained, they're also very quick to compile.
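For instance, something like this (a rough sketch; the values are made up, and the same snippet works as a doc-comment example) pins the "is the end exclusive?" question down once and re-checks it on every `cargo test`:

```
#[cfg(test)]
mod tests {
    #[test]
    fn range_and_slice_bounds() {
        // The end of a `..` range is exclusive...
        assert_eq!((0..3).collect::<Vec<_>>(), vec![0, 1, 2]);
        // ...and so is the end index of a slice.
        let s = "hello";
        assert_eq!(&s[1..3], "el");
        // `..=` is the inclusive form.
        assert_eq!((0..=3).collect::<Vec<_>>(), vec![0, 1, 2, 3]);
    }
}
```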
I have to say, I really hate this way of working, which might be the reason why I prefer statically typed languages like Rust.
I think the main reason why I dislike working with dynamically typed languages is that most of them make the reasoning of code a lot harder. If you're able to reason about the code at hand, then there's a lot less need to run the code for every small change.
I've used Nim just a bit on Windows/MingW (I'm from the Python world). The very short program I made (about 200 lines, mostly math stuff, no framework imports) compiles in 2-3 seconds. It's already too long for me to iterate (it's not a rant against Nim, it's just that it's too long for my way of working; the language is nice and gives good results). Also note that Nim itself compiles fast; it's the generated-C-to-native compilation step that takes 90% of the time.
Wow, one of our solutions is nearly 100 projects (legacy) and the compile time for the whole solution is sub-5 minutes. I'd love to see what kind of solution would take 40 minutes to compile! (or maybe I don't :)).
That being said, C# is clever enough to only need to recompile the assemblies affected by your code change, so often you can get away with 10 second compile times even for large solutions.
IIRC our solution was around 450 projects. Visual Studio just wouldn't open the whole solution.
So you had to work in individual projects at a time, slowly going through and changing stuff project by project.
VS wouldn't even build it either, really. You had to build via a batch file that did various MSBuild magic. I would make changes, set off a build and go to lunch, then come back and fix the errors.
Once you checked the code into source control it would trigger a build which would sometimes take upwards of an hour :( I hated that code base.
It got even worse when they added Coded UI tests. Wait an hour for a build and then a random Coded UI test would fail, and the advice from the people who wrote the Coded UI was "just run another build!". Yeah, flaky tests on a code base where a build takes an hour to run...
I've had a reviewer in a (scientific) journal tell me some paper is relevant, and thus I need to cite it, because it "appears on their top google results".
People should be educated on the huge bias introduced by personalized services like google.
I'd love a REPL that could somehow load the state of my entire app so I could test stuff out, but I've never found a language that could do that and also had reasonable type safety guarantees.
With Jetbrains IDEs (IntelliJ, Webstorm, CLion) you can run a "sketch" that can "see" any module in your project. I use that a lot in Java/Groovy/Kotlin/Rust and it feels much superior to a REPL because I can write not just one-liners, but larger amounts of code, with all the features of the IDE like auto-completion, inline docs, syntax checking etc.
A test suite facility can also be used like this. And Rust adds any "examples" you write in the code (with proper doc formatting) to the test suite, as mentioned elsewhere.
A test suite has a different use case... REPLs/Sketches are useful for exploration... both of your own design and of libraries you're using. Tests can be used for this as well, but it's not as convenient when you're very "early" in your exploration or have no code yet to test at all.
Some of the best solutions in that space right now seem to be things like Lisps with optional typing added. You get a great set of REPL support, but you can also build components and systems into easy wrapper scripts to load them as needed.
As a spec user in production, I must say it's very nice usability-wise and can help alleviate most of the issues coming from a lack of a type system.
But god it's slow... Some static analysis support would also be hugely appreciated. Not being able to check everything doesn't mean you shouldn't attempt to check something! Many specs are type predicates anyway.
Clojure has a Java type system, which can be partially enforced statically with clj-kondo.
Clojure spec is great at system boundaries, but it's hard to describe it as a type system; it's a predicate system that can define very arbitrary constraints, mostly at runtime.
If you have an editor with syntax checking such as VS Code with rust-analyzer, it checks syntax in the background as you type, saving you a lot of compile cycles. It doesn't yet scale very well to huge projects, but it's getting better every week - rust-analyzer is under very heavy development.
Assuming you take ~4 seconds from wanting to compile til you have your answer, that’s 60 minutes a day for a thousand compiles. An hour. Doesn’t seem very efficient to me.
Not OP, but these things don't have to be synchronous. I sometimes have entr running in a separate terminal, which tests and builds on every save. I don't wait for it to happen, but I do see it if it errs.
As one more datapoint, I do this (=barely ever "compile" / run) even in dynamically typed languages. My primary is Python.
Definitely a leftover from when I was a kid, in the early 90s, without ready access to computers. I remember lying in a hospital bed and filling pages and pages with C64 programs, using pen and paper.
This kinda forced me into the paradigm of "think through invariants and structure first". To this day, typing code out on a computer (incl. compilation) is almost mechanical, not a vital part of the design process.
That's spot on. I mostly work with C# and F# in my day-to-day job, and I build my code only a few times a day.
This is also true when I work with Java, Angular and TypeScript.
When I pick a new feature to implement I design that on paper with pencil and then mostly translate that to code. Mostly I can code for hours without compiling, since this is where typed languages have their strength.
I work in C#, Java, and Go and I mostly use a combination of these modes. I will usually take some time to design a feature and write it out almost entirely before even the first compilation. Then, I will usually start a compile-run-debug-edit cycle to fix all of the off-by-one type errors, cover missed cases, the occasional wrong assumption, etc.
I don't think typed vs untyped languages really makes a huge difference if you're designing code this way. The difference comes when the compiler can actually verify your design, if you don't compile it doesn't really matter what the compiler can check.
I mostly work with C#, and occasionally I might only end up compiling the project a couple of times in a day, but I'll probably be building a test project several times in the meantime (e.g. after adding each test).
Alternatively, if I'm building a web GUI, I'll probably be building quite frequently, making sure that data is bound correctly and that the UI looks as expected.
I agree that whenever I have to "adjust" something in HTML or CSS, it's always a struggle with multiple tries and a write-run-write-run cycle.
But again, this has nothing to do with how you design and write code in a static language.
As a counterpoint, I'm mostly working in typed, compiled languages, and I compile and debug-step through my changes in much smaller increments (usually a couple of minutes). But this only works because part of the work is to always fight 'compile time bloat'.
It is easy to let a project slowly slide into a state where this workflow is no longer possible, especially in high-level languages like modern C++ or Rust, when even incremental builds take so long that it throws you out of "flow".
But anyway, I think this sort of working style doesn't have to do with a language's type system, but is a personal choice and one isn't necessarily better than the other, but certain personalities might be attracted to certain language communities, and thus directly influence priorities (e.g. a large part of the 'modern C++' community seems to think that compilation time and good runtime performance in debug builds are not high priorities, which from my point-of-view is entirely irrational).
I have seen a few, often genius, programmers do that, but not everyone can hold a mental map of the code like that. We just need to run and fix part by part... maybe it's a matter of practice, but that's the way several programmers' workflows are.
+1 for that data point. The type system allows me to be much more explicit with regard to what I expect my code to do. And Rust-analyzer helps me spot those parts that don't fit together. But I typically turn it off, until I am actually done with defining all data types.
I also use assertions and unit tests, because the type system still has its limitations (or some things would be too awkward to express), but I can move forward a long way without actually compiling a project.
> I keep reading in Rust surveys that Rustaceans just don't care that much about compile times enough to prioritize improving them.
Confusing since this is, to my knowledge, a top 3 issue just about every year.
That said, I think Rust programmers dislike compromise (maybe to a fault). Rust is naturally a 'have your cake and eat it too' language and the community very much likes to be best-in-class, so compromises on runtime performance to improve compile time (which is one way you can go today) are often not seen as viable by a large part of the community.
For what it's worth, I think I've called out compile times as probably one of my top issues with Rust, working on a fairly large and quite dependency heavy codebase. It isn't a deal breaker for me by any means, but it's a pain.
>compromises on runtime performance to improve compile time (which is one way you can go today) are often not seen as viable by a large part of the community.
Typically this tension would be resolved with o-level flags (O1, O2, O3...).
I think the tricks are more like "use dynamic dispatch instead of static dispatch".
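For example (a rough sketch of that trade-off):

```
use std::fmt::Display;

// Static dispatch: monomorphized, so a separate copy is generated and optimized
// for every concrete `T` it's called with. Faster at runtime, more codegen work.
fn print_static<T: Display>(value: T) {
    println!("{}", value);
}

// Dynamic dispatch: one compiled body, resolved through a vtable at runtime.
// Less codegen, a small runtime cost.
fn print_dynamic(value: &dyn Display) {
    println!("{}", value);
}

fn main() {
    print_static(42);
    print_static("hello");
    print_dynamic(&42);
    print_dynamic(&"hello");
}
```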
O1 vs O3 is probably a fair way to do things. Some people will do that for debug builds, to get a mix of reasonable performance (for tests) and reasonable compile times. I'm not sure how meaningful the difference is going to be between O2 and O3, it's been a long time since I'd looked into it, and back then O3 wasn't really a thing people did.
> I'm the kind of guy that likes to re-run his code continually to see if it validates to what I expect it to be doing.
I used to be the same way, but the more I practiced holding the state in my head while I worked, the better I got at it.
> One group loves fast compile times and quickly validating hypotheses.
You just don't need it. As you make changes, the delta between expected behavior and actual behavior continues to grow. You begin to develop an intuition for what kind of things might not work as expected, and you can conceptually model around those things and continue to build your logic and make your changes.
An IDE helps immensely with type error minutiae. I'm using CLion and it's pretty great. For the few cases it can't figure out the types for you, you can run "cargo check".
I'm confident that Rust has made me a better engineer. I don't need to continually save and evaluate and I can do deeper work with more uninterrupted flow.
Give it a try! I think you'll start to acclimate.
Edit: Why the downvotes? I stated my personal experience as it relates to OP's observations. I improved in how I approach problems after exposure to Rust.
I've always wondered if the difference is the kind of work that we do. For instance, I do a lot of graphical/games stuff, and there's no resolution in my head high enough to know that 200 pixels to the right is too far (or not) without recompiling the app and actually looking at it. Or if that shade of blue is the right one.
Still, I appreciate your optimism. I'll give it a shot! Especially for things that aren't graphical.
I think the difference is that you're using the language as a prototyping/design language: there, shortening the feedback loop is absolutely critical. There are lots of things that are not prototyping though - when I touch a graphical project I typically am working from a diagram that specifies all the color values and has a layout labeled with dimensions.
For a contrast, I don't notice compile times at all in just about any language for the most part: when you reach the "few million lines of code" size, there's no language that is going to be instant; and for embedded/systems work, committing a change can involve things like creating builds for N different architectures and running a test suite that may have to control hardware [run times of the test suite are measured in hours to days]. At this scale, anything that the compiler catches, even if it takes an hour to compile, saves an order of magnitude more time later in the process.
It's legitimately a hard problem to design a language that works well for both projects in the few tens of thousands of lines of code size -- i.e. tossing together a quick prototype -- and scales to millions of lines of code or larger.
> I've always wondered if the difference is the kind of work that we do. For instance, I do a lot of graphical/games stuff, and there's no resolution in my head high enough to know that 200 pixels to the right is too far (or not) without recompiling the app and actually looking at it. Or if that shade of blue is the right one.
I get that there're some differences in the games domain, but I don't quite get your examples, because changing the position or color of objects seems like a data and not a code change. If you have these properties as data you might even be able to change them on the fly, in the running game, which will speed up your iteration quite a bit more than any faster compile times would.
>I used to be the same way, but the more I practiced holding the state in my head while I worked, the better I got at it.
Doesn't that partially defeat the point of having a powerful compiler? The more work the compiler does, the less we have to keep track of in our head (and conversely, much of the pain of coding in a language like Python is the amount of stuff we need to mentally keep track of).
In a modern, well-supported programming language's arsenal, there are many tools that work to take some of the pains of coding away. The compiler is but one of them.
Unexpected behaviour and errors can be caught by means other than constant recompilation, and tools exist to do just that. In IDEs, these tools tend to become invisible, seamless, and can be relied upon without much configuration; for those accustomed to less integrated environments, the manual configuration required to get these tools up and running becomes a hassle.
I suppose using those tools efficiently and constant recompiling are different ways to solve the same problem, but I wouldn't say it's solely the compiler's job even in the latter case. For languages that aren't super-fast for compiling like Rust, Swift, etc. there may be much more reliance on those other tools.
> The more work the compiler does, the less we have to keep track of in our head
There’s two sides to this: on the one hand, you don’t have to keep as much info in your head because the compiler will tell you if you ask it. On the other hand, when you do try to keep state in your head, the compiler is able to double-check that for you so that you don’t have to do it perfectly.
With a language like Python, you’re working without a safety net, so you make small, cautious, steps in order to not fall. A strongly-typed language like Rust enables you to make larger, bolder, changes in one go, confident that the little mistakes will get caught.
Using Rust heavily for ~4 years here. I usually compile a few times a day, so while I do hear people talk about compile times a lot, it's just never really been a problem for me. I'll often write hundreds of lines of code, and sometimes build the "outline" of a whole app or library, before compiling. I haven't developed that workflow in response to slow compile times, it just feels natural to me.
People seem desperate for one language to rule them all. If you don't need a binary, blazing fast multi-threaded speed, and memory safety then you probably don't need Rust. "Need" being the big thing here. Do you need these things or are they just nice-to-haves in your head? A scripting language and a web browser can solve a surprising amount of application requirements these days, and it's only getting better with time.
If you can afford a garbage collector, then you can also just use something like Haskell or OCaml or Python, which also gets you memory safety. No need to think about lifetimes or different types of smart pointers. Not saying that it’s a lot of mental overhead, but it’s definitely a trade-off.
Agreed. I am not saying that Rust is best for everything. I think that languages are just tools, and you should use the best tool for the job; there should not be fanboyism in this. The bigger your toolbox the better.
I just don't agree with statements about memory safety and speed. The thing is much more complex. I think simply stating "Use the best tool for the job" is much better than showing bad examples.
Because everybody needs memory safety and wants speed. Nobody wants to write unsafe slow programs (well, I HOPE). Nobody wants to drop either of those requirements. When you drop those requirements, it's because of tradeoffs (I think memory safety shouldn't ever be dropped; yes, use a GC language if you want to, but in 2020 memory safety should be a very hard requirement).
Well, I like an instantaneous edit-compile loop too, but I was never bothered by Rust compile times.
Once you get your cache warm, it's pretty much instant. If not, run `cargo check` instead of `cargo run`. You can also use rust-analyzer for an in-editor <1s feedback loop.
Copying this from someone else I replied to: No, it's an issue for warm caches as well. I had a 10 second compilation cycle to add a comment to a file in a project with a couple hundred lines of code and like 4 lines in my cargo.toml. 10 seconds! For a few hundred lines! Maybe that doesn't sound insane, but extrapolating out, that's at least 100x worse than the languages that I'm used to.
If you were using `cargo run` or `cargo build` instead of `cargo check`, these 10 seconds could be spent linking the final binary and not running the compiler frontend. For that matter, 10 secs is not unheard of for large libraries with a lot of debug symbols.
That's why you should use `cargo check` instead: it will only run the Rust compiler frontend, skipping LLVM codegen and linking.
Static checking is just a part of correctness verification (pretty large chunk but not enough).
It can't catch many logic bugs, eg forgetting to update a variable's value. The type system can't help without non-straightforward techniques, and at that point it is diminishing returns.
A powerful type system shouldn't be an excuse for not being able to iterate fast.
> One group loves fast compile times and quickly validating hypotheses
In Rust, it’s sufficient to type check (if it type/borrow checks, it will probably work) and Rust can type check in real time via rust-analyzer. This is much faster feedback than running your program or even its tests.
Of course, Rust’s type/borrow checker is also choosier than many others, and you still spend more time fighting with errors that don’t actually improve your code’s correctness (e.g., borrow checker errors are rarely indicative of a bug in a single-threaded context). So Rust’s iteration loop is quite fast but it’s development velocity is still relatively low (even when adjusting for quality), but it’s improving all the time.
On the other hand, I've seen a few Java applications that leaked memory (and one case of leaking file descriptors) because developers had mostly gotten away with not thinking about resource ownership. Removing data races isn't the only advantage of refusing to generate binaries when the programmer appears to need to think a bit more about ownership.
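To sketch what "thinking about resource ownership" buys you (a toy example; the file name is made up):

```
use std::fs::File;
use std::io::Read;

fn read_config() -> std::io::Result<String> {
    let mut contents = String::new();
    {
        // The handle is owned by this scope...
        let mut f = File::open("config.toml")?;
        f.read_to_string(&mut contents)?;
    } // ...and the descriptor is released here automatically when `f` is dropped.
    Ok(contents)
}

fn main() {
    match read_config() {
        Ok(c) => println!("read {} bytes", c.len()),
        Err(e) => eprintln!("couldn't read config: {}", e),
    }
}
```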
As someone who regularly uses Delphi, famous for its compile speed, I can totally see where you're coming from. Rust looks really interesting to me, but I've been holding off getting into it, because I would likely end up feeling the exact same way.
I don't use Rust (super interested in rectifying that someday though, especially to replace the ruby we use in our server) but I have the same compile time issue with bytecode on Android.
I used to be the kind of dev who compiles every ten minutes. The thing is, on Android compile times are just getting longer and longer. Anecdotally, this is not because the tooling team is not interested in build performance; on the contrary, they continuously work on it. However, they are currently losing the battle: the median app complexity is growing way faster than they are improving build times.
I mostly got used to it. I can write code for a whole day and only compile once or twice. The more I am accustomed to our ecosystem and codebase, the less I feel the need to compile frequently.
The only exception is for graphics code. The only way to know if something looks correct and good is often to compile.
For that reason, while I still love writing polished animations, I dread having to compile 10 times in a row to get it just right.
`lld` is incredibly fast and allows medium crates to compile in under a few seconds for me. For people who don't know: if you're on Linux, put this in the `~/.cargo/config` file, and install `lld` (the LLVM linker) on your system.
```
[target.x86_64-unknown-linux-gnu]
rustflags = [ "-C", "link-arg=-fuse-ld=lld" ]
```
`-Clinker=clang` also works. I think you need a recent `gcc` (8 or newer) for `-fuse-ld=lld`. For `linker=clang` you probably need `clang` installed.
Could someone who is familiar with Rust clarify something for me? If I am using an IDE such as IDEA's Rust plugin, and I write some function which contains a type error, do I have to compile in order for the IDE to tell me there is a type error, or can I rely on automatic type-checking like I can in e.g. Java? In other words, is Rust slow to validate code, or is simply slow to compile code into a runnable binary? If it is the former then I can totally understand why Rust would turn certain programmers away. If it is only the latter, then I find it slightly harder to understand.
> In other words, is Rust slow to validate code, or is simply slow to compile code into a runnable binary?
The latter. Validation/type checking is super quick. It might slow down a little bit if you do a lot of compile-time processing via macros, etc. IDE support is also improving a lot, and many Rust devs use an experimental component known as rust-analyzer to do the IDE-based validation you're talking about.
It's interesting - i very frequently recompile, running tests, etc - and i just don't have a problem with Rust's compile times. Maybe you want it more immediate than i do?
The project i'm working on now takes ~8s to compile. Perhaps too slow, but i guess it just doesn't bother me. Though i definitely want to see it improve for wider adoption, as it's clearly an issue for many people - I just have difficulty feeling the pain in this case, i guess.
edit: I imagine i compile once every 5 min of code writing. Varies by problem solving of course.
Yes, 8s would be too slow for me. I'd usually set up a full-stack front+backend dev loop that hot-reloads both parts, and an 8s delay interrupts flow. Compare that with 1s (or less) for Go or even Java (with a lightweight framework).
> I'm the kind of guy that likes to re-run his code continually to see if it validates to what I expect it to be doing. In Rust, this kind of workflow just doesn't work at all.
Interestingly, my case is the exact opposite. When I am coding in Typescript or Python I'm out of place, since I got used to first write the types to make sure everything is aligned, _and then_, start writing logic...
Even in Typescript this workflow is not easy to do, since its compiler is nowhere near as powerful.
Can you share how long your compile times are? And what you feel are "usable" compile times?
I see comments a lot about "this is too slow" but rarely are numbers provided, neither the actual times nor the expected times.
You may have something set up wrong and are seeing unusually long compile times. Or your expectations may not be realistic. Or you're just an outlier whose work triggers some pathological compile issues.
Sure. In an empty project, I saw around 0.7s compile times. This is ideal. But I kept developing and added around 300 lines of code and a few (3?) dependencies to my cargo.toml, and I was seeing compilation times that were exceeding 10 seconds to, say, add a dbg! message.
10 seconds is basically as slow as I'd accept. I wouldn't love it, but I'd suffer through it if I had to. But what I really didn't like was that my feeling was it wasn't the bottom - adding more deps or more lines of code looked like it would continue to increase the time without bounds.
I like doing a lot of game dev and graphical work, and it often requires really rapid iteration to test small changes. (How fast should the AI walk? Should he turn around at this point? Does this logic "feel right" like this, or do I need to trigger it conditionally?)
No, it's an issue for warm caches as well. I had a 10 second compilation cycle to add a comment to a file in a project with a couple hundred lines of code and like 4 lines in my cargo.toml. 10 seconds! For a few hundred lines! Maybe that doesn't sound insane, but extrapolating out, that's at least 100x worse than the languages that I'm used to.
(I know that compilation speed is a Hard Problem, I know that I'm comparing apples to oranges, I know that Rust does fancy magic that other languages could only dream of. But everything that slow compiles buy me won't bring back my flow state.)
If it is a project you’ve recently built, it shouldn’t matter what’s in your cargo file, since none of that will change from build to build. I wonder what’s broken. How long does it take to build a new empty project?
One think that’s helped me stay “in the flow” is switching to rust-analyzer instead of the current official RLS. Much faster, I leave format on save turned on, so `cargo check` output is usually ready in about a quarter of a second after save.
A new empty project compiles in under a second. However, adding dependencies will add some arbitrary amount of time to this. Additionally, every hundred lines of code seems to add some amount of time between half a second and a whole second. I did some reading and learned that generics are particularly expensive to compile, so I wonder if that accounts for the variance. I can't explain why cargo deps add time, though.
I LOVE rust-analyzer. It's actually what got me to reconsider Rust after I tried a year or two ago, and it is SO good.
Let me put it a stronger way: if you're seeing your cargo dependencies recompile every time, it is a bug. Please file one upstream. These bugs do happen, but they are bugs.
I don't literally see my deps compile. It's just that compile time gets slower. (I wonder if it's related to the linker?) Is that still a bug? Would be happy to report it if so.
If the slowness you're seeing is caused by the linker, which can very well be the case, consider using LLD, as you can see significant speed improvements in the linking step.
I am less sure, but I would think that's expected, given that it has to link everything together into the final binary. You're gonna end up giving the linker more work to do, as you suspected.
What?
I have a ~1k LOC program which consists of 4 modules, 3 of them all importing the 4th and in total ~200 dependencies. With a warm cache my program takes ~2s to compile. Cold-cache is about 18s
Rust compilation times as they stand today are actually worse than C++. C++ also offers flexibility to an engineer to reduce compilation times significantly via use of techniques like shared headers, etc.
In fast-compiling languages, it's viable to have recompile-on-file-save or even continuous recompilation. Modern (and they aren't that modern anymore) IDE's will show red squiggles under code with compilation errors without requiring any action at all.
Yes, but the programmer knows what they added, and I was asking why they wanted to build again if all they added was a comment? If this was in a CI context then 10 seconds seems unimportant; compile times of 10 seconds are usually only complained about in the context of programmers who've gotten used to tiny edit-compile-test cycles.
"there must be two groups of engineers" -- possibly, but I'm both groups - with Go i'm in the obsessive build group, to the point that people hate pair programming with me, and with Rust I compile less, review more. I'm certainly not driven away.
I find that the strong type system and editor plugins like Rust Analyzer have me covered for “does this do what I expect?”, which means I do way less write-run-check than I would in, say, Python.
UI code absolutely requires fast compile times, ideally hot reload, but it wouldn't hurt to have a compiler that compiles code in the background and executes unit tests for it.
I hardly know Rust. What about using many small .rs files so the compilation units are quicker? For a change of one line of code, only that .rs would need to be compiled.
Furthermore, most libraries make the stylistic decision of using generics and trait bounds instead of trait objects, which potentially generates faster code at the cost of slower compilation. This also has the consequence of requiring a recompilation of every dependant of that item: if you modify a root item that uses generics, you get a nice recompilation of your entire tree. Edit: working on dependency leaves, though, is mostly painless. If you don't work on fundamental libraries or frameworks of your system the experience becomes much better, although linking can take quite a while depending on your project.
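For illustration, a minimal sketch (my own, not from the thread) of that trade-off: the generic version is monomorphized into a separate copy for every concrete type it's used with, while the trait-object version compiles once and dispatches through a vtable at runtime.

```
use std::fmt::Display;

// Generic version: a fresh copy is generated for every concrete T it is
// instantiated with, which tends to be faster at runtime but gives the
// compiler (and every dependant that instantiates it) more work.
fn print_generic<T: Display>(value: T) {
    println!("{}", value);
}

// Trait-object version: one compiled body, dispatched dynamically; generally
// cheaper to compile at the cost of a vtable call.
fn print_dyn(value: &dyn Display) {
    println!("{}", value);
}

fn main() {
    print_generic(42);      // instantiates print_generic::<i32>
    print_generic("hello"); // instantiates print_generic::<&str>
    print_dyn(&42);         // the same single body for every type
    print_dyn(&"hello");
}
```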
> I keep reading in Rust surveys that Rustaceans just don't care that much about compile times enough to prioritize improving them. I've often wondered how this can be possible
I don't have a good idea about the compilation speeds of C++, so I don't doubt you on that point.
That's hardly the only other AOT language competing with Rust, though, and while some of them are simpler on a language level (i.e. C), their compilers do a fair share of work in the optimizing stage. There are also several languages that do quite a bit of heavy lifting during compilation; Zig and Nim, for example, have extensive compile-time features. And Nim does two passes, since it compiles to C first (by default) and then that's compiled to machine code.
On balance, I don't think you can wriggle out from the fact that Rust compiles slowly compared to its competitors.
Even the Rust team admits this and is working on improving it.
If you want to use rust, and can achieve fast enough performance by simply buying some newfangled threadripper machine with a boat load of ram, what difference does it make?
The OP sounded like they wanted to use rust except for this one issue of compilation being too slow for their development style.
How fast is fast enough and is that achievable just by throwing some money at the machine?
> I'm the kind of guy that likes to re-run his code continually to see if it validates to what I expect it to be doing
This honestly is a bad habit. Though it does depend quite a bit on the type of work you are doing; for some things it is necessary, especially if the work is inherently fiddly.
Rust is a language to which the programmer must adapt his way of working and thinking. If the programmer is too set in his way, his Rust journey will be unsuccessful. If the programmer is adaptable, he will come out in the end a better programmer, because the Rust way is actually the right way.
I think I actually went a couple of weeks once before trying out my program for the first time and there was no big deal to get it to actually work in the end.
I'd like to hear some justification for this rather than a blanket assertion. Just because our work styles differ doesn't mean that my style is necessarily wrong.
> Rust is a language to which the programmer must adapt his way of working and thinking.
I mean, Rust is actively attempting to improve compile times. It's not like devs made them intentionally long to cultivate good programming habits or something. So I'm not sure how this is relevant.
The most time wasteful part of programming is debugging. Writing slowly and thinking through your options and double checking what you wrote saves time. Especially since you are doing this while writing and you have the complete picture in your head. Obviously it does not catch everything. But one should strive to minimize detective work.
After I've been programming for an entire day I will rely on the compiler to tell me when I'm wrong about an assumption. I'm honestly trying to conserve mental energy, so I've learned that it's better to do it from the start and avoid keeping track of so much stuff in my head. The compiler tells me something's wrong somewhere, and I go fix it and move on. That's not detective work; at least it doesn't feel like it to me since it's so quick.
Turning off parts of your brain is generally not a great idea when the goal is to write code that works. A better way to conserve mental energy would be to take a break, go for a walk, or just context switch for a while.
I'm building with Rust, Rocket, Diesel, and agree with much of the original article. I can also add more: in my direct personal experience, the lead people creating Rocket and Diesel are superb about responsiveness, ongoing communication, and enabling people (e.g. me) to help diagnose issues and fix them. I'm continually thankful for the quality of participation among the people in the ecosystem.
On the technical side, there are definitely some learning curves among Rocket, Diesel, and other tools. A typical example is that different crates have their own implementations of concepts such as a UUID, and the developer must handle conversions, and also ensure that various dependency versions all align, and also can't (yet) easily use the current UUID crate, and also can't (yet) build using stable Rust. All of these aspects will be fixed soon, likely within a month or so as the crates stabilize.
If you try Rust, I highly recommend trying rust-analyzer, which provides near-real-time code advice as a plugin to most major editors.
Does having to develop against rust nightly over stable not worry you when attempting to productionize a service? I understand new features get shipped behind feature flags/language pragmas, but it seems like a massive looming risk.
I believe Diesel has been on stable for a long time now, so for this use case I don't think anyone will need to worry about using the nightly releases.
One more thing: I build my final binaries with stable (when possible), but develop with a recent nightly. There are some features that require nightly (custom testing frameworks come to mind) but don't affect the final binary (only tests), and I get the new goodies (better diagnostics, speed-ups, fixes) up to 12 weeks ahead of schedule.
Rustup makes having nightly and stable in the same machine painless, and I got in the habit of running cargo +stable build and cargo +nightly test. BTW, you don't have to do this, it just makes my experience a bit nicer.
Not OP, but: it does not for me, fwiw. However, we release often, so I'm not releasing a binary into a distributed ecosystem where I expect it to live for years and years. In that scenario I would be wary of non-stable, since a looming issue in nightly could persist for an uncontrolled amount of time.
However thus far we have never had to push a release due to a bug found in nightly. We also have never had a build break because of nightly, or even upgrading.
The only breakage I'm aware of was, oddly, a semi-recent Rustup change. Our CI was not pinning the Rustup version, and the argument defaults changed, so the downloaded Rustup did not have the components needed for our CI setup.
We will, however, migrate to stable once Rocket becomes stable. Thus far we've not had any trouble on nightly. I can only assume this is a result of the Rust team's excellent QA/tests/care/etc., combined with the language itself making it very easy to write error-free code.
Developing based on Rust nightly has caused some learning curve gotchas, such as discovering a POSIX bug in the Rust install script, or needing to write a custom Docker Alpine Rust nightly container, or needing to write small system scripts to ensure that nightly is the same version on our various build machines.
So far we've seen about 2/3 of nightlies fail for our code. The failure is totally obvious, so we revert to the previous nightly that we know works.
Depending on the maturity of the nightly features you rely on, for CI there is an env variable that makes the stable compiler think it is nightly. We use this flag at my day job to stay on the latest stable while enabling a handful of quasi-stable nightly features (mainly custom testing frameworks, and getting Rocket to play nice). This is a double-edged sword: it is the same as a pinned nightly in the sense that it is a static target, but it differs in that bugs affecting stable and beta have been backported, whereas an arbitrary nightly has no such assurances; on the other hand, bugs affecting nightly-only features aren't backported to the stable release and are only fixed on nightly.
Also, for the love of all that is holy, do not publish a crate relying on that flag to enable nightly features; it breaks the language's stability assurances and, by extension, the ecosystem.
There is an advocate on our team that wants to migrate our web service from Nodejs to Rust.
While I am not a huge fan of TypeScript, at least libraries are readily available and generally easy to use. On the other hand, being able to show that we can do monitoring, user auditing, ORM, opentracing, and gRPC-web with Rust is non-trivial.
Now that I think of it, being able to do a "hello world" on any language is pretty simple. But having all the tools around it to build a production level service is a different story.
Rewriting is very expensive, and the advocate will need to make a VERY strong case for Rust if they're asking your company to invest in replacing it - and alternatives have to be suggested as well, including other languages and a good list of things wrong with Node / TS. Consider developer availability as well.
I don't think a strong enough argument can be made. I'm sure you CAN write web services in Rust, and that in very specific cases it'll have some benefits over Node, but honestly very few people work in an area like that.
Disclaimer: I've settled on using Go to rewrite an existing application. The original app was written in PHP; nothing wrong with PHP per se, but the existing codebase is a mess and the PHP version is hard to keep up to date because of LTS versions of operating systems + very slow and careful updates at our customers (it's network infrastructure). For me, switching to a compiled language that produces a self-contained executable was a compelling argument. I have to admit that I do kinda pine for something like Java again though.
Swift / Vapor is amazing as well. They just released version 4, which streamlined and tidied up lots of things; can't recommend it enough. There's just something solid about Swift's strictness and compile-time checks that makes it easy to be sure you're handling all possible code paths, and you can be reasonably confident it works and won't break all the time. Also very lean on dependencies, mostly unopinionated, and performance/mem-usage is top tier too. The only con is probably that you're bound to Xcode (and therefore macOS) for development; I guess you could try to set it up in VSCode, but I haven't heard of it and the experience will probably not be as good.
In this case Swift is being used on the server, so that could be an officially supported platform[0] which if I recall correctly is macOS, Ubuntu, CentOS, Amazon Linux 2, and Windows as of Swift 5.3.
I get that, but it still doesn't explain why the above poster used the MacOS market share as a detractor in this case. If you value languages that don't dictate hardware/software for you, then Apple was probably never a serious contender for you.
Yeah. I am being very cautious about their desires. I learned that the best way to discourage someone from taking a large project is to show how much work it would be. Just shutting down developers is not nice.
I agree too. Rewrites are a scary thing. The best approach is to do some tests with a few new microservices and see how it feels to code from 0 to production, learning the toolchain, etc.
That over-engineering is what saves us from clunky solutions that are good for hello world applications and HN-like posts, but fail when doing deployments across heterogeneous platforms in Fortune 500 IT departments with endless numbers of external contractors and technology stacks hardly seen elsewhere.
And if I have to pick between managing WebSphere containers and taking care of k8s, I will rather pick WebSphere.
The learning curve should be beaten after a few weeks and is definitely lower than learning rust. On the other hand, the productivity and expressiveness return on investment is huge.
Spring is arguably the state of the art framework on the server and is integrated with so many powerful technologies.
> On the other hand, the productivity and expressiveness return on investment is huge. Spring is arguably the state of the art framework on the server and is integrated with so many powerful technologies.
Having worked with Spring for years (and still working with it, unfortunately), I have found exactly the opposite.
It does way too much dynamically, making debugging hard and rendering the type-system almost useless.
Sure, it comes with a lot of libraries for handling a large variety of tasks, but they all seem to be half-assed, and much of the time I either have to work around their limitations or write something myself anyway.
I really don't see why the same thing couldn't have been achieved by writing a bunch of useful libraries that don't depend on a dependency injection framework (which I also don't find much value in).
Finally, it takes forever to start up, making testing a pain. It even means that JUnit tests are slow.
It's like any framework: once you learn its idioms, you can be insanely productive. Unlike other frameworks, enterprise Java is all about flexibility, so in Spring land it's often incredibly easy to tune or replace lower-level components like connection pools, etc.
As such, I'd describe it as a good combination of highly dynamic architecture with lots of manual control. Of course, all of this is enabled with copious amounts of magic, which is usually why people don't like Spring.
Rust is much too low level for most glue/web services. Unless you have a specific high performance requirement (and Go doesn't meet this), there's no real strong case for migration here.
Rust itself is not inherently "low level" per se. But others are probably right that the whole web services ecosystem for Rust is rather half-baked at this time, the OP notwithstanding.
But I don’t? The borrow checker takes care of that? String vs &str is trivial to get ones head around, usually it’s really easy to decide whether you’d like to pass a reference or ownership, and worst comes to worst, sprinkling some copy/clone etc to get things sorted quickly still yields a binary that’s faster and more robust than something I can whip up in Python...
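To illustrate the point, a small sketch (my own, not code from the thread) of the usual decision: borrow a &str when the callee only reads, hand over a String when it keeps the data, and clone when both sides need ownership.

```
// Borrow when the callee only needs to read the data...
fn shout(name: &str) -> String {
    format!("HELLO, {}!", name.to_uppercase())
}

// ...take ownership when the callee keeps it around.
struct Greeter {
    name: String,
}

fn make_greeter(name: String) -> Greeter {
    Greeter { name }
}

fn main() {
    let name = String::from("ferris");
    println!("{}", shout(&name));             // pass a &str view, keep ownership
    let greeter = make_greeter(name.clone()); // clone when both sides need it
    let _mine_too = make_greeter(name);       // or hand over ownership entirely
    println!("{}", greeter.name);
}
```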
The borrow checker only checks, it does not solve the problem. In other languages the problem does not even exist to begin with.
It is not a trivial problem to solve (as you claim), otherwise we would have never needed the borrow checker to avoid memory bugs, nor higher level languages to speed up development by avoiding the problem altogether.
If you are going to end up sprinkling clones, heap allocating and reference counting, then you could have used C#, Java or JS to begin with which are plenty fast with their JIT/VMs and not think about memory at all.
Finally, comparing against Python is a very, very low bar for performance.
> In other languages the problem does not even exist to begin with
I am going to disagree here, because I've run into my share of memory issues in Python and C#/F#, and I'm sure by this point, everyone is well acquainted with Java's memory issues.
> It is not a trivial problem to solve (as you claim), otherwise we would have never needed the borrow checker to avoid memory bugs, nor higher level languages to speed up development by avoiding the problem altogether.
I'm not claiming that memory management is a trivial problem, I'm saying the borrow checker takes care of enough and the compiler/clippy hints when I do something wrong help me fix it easily enough. I write code slightly slower than I would in Python, but at the end, what I get from the Rust code is something that is more robust and more hardware efficient.
> If you are going to end up sprinkling clones, heap allocating and reference counting, then you could have used C#, Java or JS to begin with which are plenty fast with their JIT/VMs and not think about memory at all.
Rust's type system is enough to make me want to use it over dotnet. JS is a language with...some issues...that is fortunate enough to have a nice JIT; I don't consider it a serious choice for anything except web front-ends. I find C# needlessly convoluted and I dislike all the implicit mutability, but those complaints are very subjective.
The difference is that even if I have some clones and ref counts, they're rare, and the resulting binary is still outrageously fast, and has clear indicators of where to come back to and improve so as to not need the clone/reference counting/etc.
> Finally, comparing against Python is a very, very low bar for performance.
I compare against Python because that's the other language I do most of my work in.
You were talking about the borrow checker, which is mainly about memory safety, not memory limit issues.
In Python, C#, Java, JS... you are memory safe without dealing with memory management nor a borrow checker.
There are many languages running on top of those VMs for all kinds of tastes (OOP, functional, strict type systems, loose ones...). Claiming Rust leads to more robust software than any of those is an exceptional claim, but even if that were true, the key is the development cost.
A typed language is a typed language, and there are other languages that are easy to get performance out of. I'm not a Rust dev, and I'm highly skeptical it will be used outside of Firefox and a few niche projects after this initial hype train dies off. What other features would make me pick Rust over golang or one of the interpreted languages?
> a typed language is a typed language
Well yeah, but not every type system is equal. For example I vastly prefer Rust's type system to C's because of Options instead of null and enums as sum types.
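A quick sketch (illustrative only) of what that buys you in practice: absence shows up as an Option in the signature, and a match over a sum type must cover every variant or the code doesn't compile.

```
// A sum type: a Shape is exactly one of these variants, and `match` must
// handle all of them, so a forgotten case is a compile error.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

// Option instead of null: the possible absence of a value is in the signature.
fn first_even(values: &[i64]) -> Option<i64> {
    values.iter().copied().find(|v| v % 2 == 0)
}

fn main() {
    println!("{}", area(&Shape::Rect { w: 2.0, h: 3.0 }));
    match first_even(&[1, 3, 4]) {
        Some(v) => println!("found {}", v),
        None => println!("no even number"),
    }
}
```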
> what other features would make me pick rust over golang
Generics, iterators, pattern matching, etc. There's lots of features Rust has that golang doesn't; that's not necessarily a good thing but for what I do it is. IMO the only good thing about golang's featurelessness is the compile times and the standard library.
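As an illustrative sketch (my own example), something like this is everyday Rust but awkward to express in Go without generics: a function generic over any iterable of any summable type, plus composable iterator adapters.

```
// A generic function over anything iterable that yields a summable type.
fn total<I, T>(items: I) -> T
where
    I: IntoIterator<Item = T>,
    T: std::iter::Sum<T>,
{
    items.into_iter().sum()
}

fn main() {
    let ints = vec![1, 2, 3, 4];

    // Iterator adapters compose lazily, with no intermediate collections.
    let sum_of_even_squares: i32 =
        ints.iter().copied().filter(|n| n % 2 == 0).map(|n| n * n).sum();
    println!("{}", sum_of_even_squares); // 20

    // The same generic function works for integers, floats, and so on.
    println!("{}", total(ints));          // 10
    println!("{}", total([1.5f64, 2.5])); // 4
}
```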
As for interpreted languages, IMO it's just better to be able to catch errors at compile time.
> Well yeah, but not every type system is equal. For example I vastly prefer Rust's type system to C's because of Options instead of null and enums as sum types.
Fair enough. But I'm not advocating using C here either.
> As for interpreted languages, IMO it's just better to be able to catch errors at compile time.
Just because you have a garbage collector doesn't mean you don't have to worry about memory management. I've seen too many problems pop up because people don't understand how memory is managed in their GC'd language.
This isn’t true: in any language with one, you can ignore the garbage collector and still get work done. It may not be the most efficient, but you still get work done. Let’s get a fresh-out-of-bootcamp grad in here and throw two languages in front of them if you want to test this.
You may be able to get work done, but I've seen actual bugs because people didn't understand how memory was managed. For example, not realizing that passing an object to a function was passing a reference, and not a copy. These are things that are explicit in Rust.
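A small sketch (illustrative, not anyone's production code) of what "explicit" means here: the signature itself says whether the callee mutably borrows, takes ownership, or gets a clone.

```
#[derive(Clone, Debug)]
struct Settings {
    retries: u32,
}

// Takes a mutable borrow: the caller will see the change, and the signature says so.
fn bump_retries(settings: &mut Settings) {
    settings.retries += 1;
}

// Takes its argument by value: the caller either gives it up or clones explicitly.
fn consume(settings: Settings) -> u32 {
    settings.retries
}

fn main() {
    let mut settings = Settings { retries: 1 };
    bump_retries(&mut settings);                // mutation is visible at the call site
    let total = consume(settings.clone());      // an explicit copy, not a hidden alias
    println!("{:?} {}", settings, total);
}
```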
This won't result in any security-related bug; you'd be updating the referenced version instead of a copied version. Both testing and use of the written code will show this "bug", if it's in fact a bug for this specific codebase. So now the question is, does Rust's difficult learning curve warrant removing this "maybe" bug? There are other things to consider as well: memory fragmentation, performance, etc. Have you measured the performance of code that both copies and updates?
But this is still using a hammer to screw in a nail. Rust is a systems language; it’s a junior dev move to force it into a web server. Use Go or TypeScript for this, not Rust. Just like I wouldn’t write C++ for a backend unless I’m trying to shave off some nanoseconds.
Actually the OP only wrote that the current state of the ecosystem is surprisingly mature, but he doesn't recommend writing anything serious in it yet.
Personally I don't see the point of implementing a typical web application in Rust - the performance improvements you get will be lost on IO-bound applications, but you'll still be saddled with the complexity of the memory management. I'd rather suggest rewriting VS Code or the Slack client in Rust (i.e. apps which currently use Web technologies on the desktop) - those would definitely benefit more from increased performance and reduced memory footprint...
> the performance improvements you get will be lost on IO-bound applications
Performance starts mattering even in IO-bound applications as soon as you're trying to seriously scale out. Especially when running on a cloud-based platform. As for "the complexity of memory management", people like to bring this up about Rust but OP suggests that it's not a huge concern with the language.
I do agree that rewriting stuff like Electron-based apps should be a priority, and that Rust can help this via easy bindings to native OS and GUI platforms.
Generally when I’ve successfully advocated for new languages or tools at work, it’s been by gradually introducing the language in newer and fairly self-contained projects where the risk is relatively low. This provides a safe space to explore all the concerns you just mentioned without compromising any working code.
If you have some interest in Rust, that’s what I’d personally recommend instead of migrating an existing service to a different language.
I'd say there is value in learning for the sake of learning.
Even if you later on decide that the Rust implementation is not production-ready, you can draw conclusions from the project, and the next time someone considers Rust for a bigger project, there is a member on the team who can provide insight into potential issues. Of course, the project would have to be sufficiently small to not waste too much time.
Your development cost will skyrocket.
Even if you ignore everything that the ecosystem may not offer you at this point (which is a big deal), you will have a hard time finding experienced Rust developers who like to build web services with it.
I come from a C# background, but to list a few hurdles that I have already dealt with in TypeScript/Node.js:
1. C#'s AsyncLocal made it simpler to trace SQL queries back to a request.
2. We chose to use hapi.js some 2 years ago because we found the interface superior to Express, but I did not expect the sole developer of hapi.js to decide to stop working on the project [1], which left me scratching my head about what to do.
3. TypeScript's interfaces sometimes feel too much like just "suggestions", and every once in a while I run across scenarios that are not fully supported. One that I really dislike is that Sequelize's "where" options have no type support for the model. [2]
4. Sometimes libraries do not use async correctly, which breaks stack traces.
5. Sometimes TypeScript's generic errors can be as bad as C++.
My favorite feature of TypeScript is being able to just define types/objects in-line, but I actually like to stay on the side of caution and stability on a large project with many people.
I can feel the pain on all 5 points. One thing I did notice, though, is that NodeJS has such a huge breadth of packages that it's _very_ hard to actually pick something good. But there is almost always an alternative that's maybe not as popular, but a lot more "solid".
For example we used SOHU-Co/kafka-node for a while as a kafka client, until we hit some bugs that made us dig through its internals and we realised it had some deep issues. We then switched to kafkajs which turned out to be much more mature and polished, even though it was less "popular".
Sequelize in particular I think was developed in an era before TypeScript was a thing, so it follows the ideals of that time, more in line with Ruby and being easy to use and malleable. We switched to using slonik for our query needs, with a more declarative and static approach, skipping ORMs and query builders altogether - just raw, strictly typed queries. I think in the end it's a better approach for our needs.
I guess what I'm trying to say is that TypeScript was built to be able to handle _all_ of the weird and wonderful world of JS from its most amateurish and fun, to its most solemn and strict. And it's just a matter of picking up where on the spectrum you want to operate and make your dependencies match that vibe. It's limiting and freeing at the same time.
I feel like I have been to hell and back with ORMs, between Sequelize, Entity Framework, Hibernate, SQLAlchemy, etc.. and frankly, I think they just cause more headaches than solve problems.
I would love to have strongly typed SQL queries, but I have found that Dapper [1] fills a special place in my heart.
I've been following https://github.com/adelsz/pgtyped for a while. It should give you TS types from SQL files (and even SQL template literals) directly, though I haven't used it in prod. Might be worth a look.
Not the OP, but my take is that TypeScript has done great things for improving JS, but it's still JS and carries a lot of its baggage with it. There is only so far you can take it while maintaining compatibility, and the type system is overly complicated in places because of this. It also doesn't really give you the same guarantees that something like Rust does; I don't believe the two type systems are that comparable. On top of that, unless you use all-TS packages, types are maintained and installed separately, adding to an already tricky-to-maintain dependency graph that comes with Node development.
As far as I know, you need grpc-web only because there is no direct grpc implementation for javascript. For rust, c++ etc. you would use grpc natively.
grpc-web is attractive for our use case because it would take away manually implementing REST interfaces and the occasional type mismatches that come with them. The less room for mistakes, the better.
.NET Core just announced last month that gRPC-Web support is stable [1]. It would remove a huge amount of boilerplate in setting up services on both the client and server side.
Some people also recommend using Envoy [2], but usually the fewer cogs the better.
I would wait and see whether GraalVM would not help significantly with performance. After that I would try running the web service with Deno. Then I would rewrite in Go, then I would rewrite in Rust. That is not to say I think Go is better than Rust in every dimension (I prefer Rust for most things), but I think the Go community have really optimised for this one single usecase.
I took an example of a server straight from the Rust async book, and to my surprise, it performed very well for a long time serving real internet requests till I actually had to modify it:
In my opinion, the section the author called "the ugly" makes it clear that it's not that surprisingly good as an application server language. File upload is a basic feature that has long been solved, yet the Rust code contains a lot of boilerplate compared with Python.
Remember the mantra from the 2010s during the dynamic typing craze? Rust is not about optimizing developer time, it's about guaranteeing safety for base systems software.
All this permanent reasoning about ownership, borrowing and so on makes it the greatest contender against C++ (hence Mozilla and Microsoft support) but in terms of productivity you'd be best served with a higher level, GC-collected platform for an application.
The point of the article is that lack of good support for file upload IS a productivity hit, but ownership reasoning IS NOT a productivity hit. And file upload can be solved by just more code. This indeed matches my experience.
Ownership reasoning is not a productivity hit in a web server, where the lifetime of pretty much everything is a single request which is handled and then disposed of. This is pretty much the simplest case for ownership. In a different application (say, one running a GUI with many objects which live for indeterminate amounts of time and can be shared across views), ownership reasoning can be extremely complicated.
I’m not saying having to reason about ownership is bad, I’m just saying that this is not a good test for whether ownership reasoning is difficult or not.
The thing with ownership reasoning in Rust is that you can opt out of it whenever it makes sense to do so. If you really have "objects which live for indeterminate amounts of time and can be shared across views", that's not an increase in complexity; you just acknowledge that reasoning about ownership and sharing at compile time is not going to be feasible, write Rc<RefCell<…>> (with a documentation comment to that effect) and move on. All it takes is knowing where to add a tiny bit of boilerplate.
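For the record, a minimal sketch (illustrative only) of what that opt-out amounts to: Rc gives shared ownership, RefCell moves the aliasing check to runtime, and the rest of the program carries on as before.

```
use std::cell::RefCell;
use std::rc::Rc;

// Shared, mutable state whose lifetime we've chosen not to track statically.
#[derive(Default)]
struct Model {
    clicks: u32,
}

fn main() {
    let model = Rc::new(RefCell::new(Model::default()));

    // Two "views" hold the same model; neither is the single owner.
    let view_a = Rc::clone(&model);
    let view_b = Rc::clone(&model);

    view_a.borrow_mut().clicks += 1;
    view_b.borrow_mut().clicks += 1;

    println!("clicks = {}", model.borrow().clicks); // clicks = 2
}
```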
Yes, but you still can't write garbage-collected code, which is tremendously useful in many circumstances. E.g. when using closures you really don't want to be thinking about memory allocation, and closures have proven a very useful concept. There are countless reasons why the availability of a GC is a productivity booster.
So I wouldn't describe Rust as a language that fits all domains and/or programmers well.
> when using closures you really don't want to be thinking about memory allocation
You don't have to think about this with Rust - and you don't need a GC either. The borrow checker will make sure that your closure doesn't outlive the variables it captures, which is what you need for correctness in this case.
> The borrow checker will make sure that your closure doesn't outlive the variables it captures, which is what you need for correctness in this case.
No, the borrow checker will merely give you an error when your closure may outlive the captured variables. This is often not useful, and is an impediment to being productive. As a programmer you don't want to be solving the same boring problem of memory management over and over again, unless perhaps when you're doing really low-level stuff and there is no other option.
How is it "not useful"? It's generally quite easy to resolve the ensuing lifetime problems: either use .clone() to copy the underlying values, or use Rc<>/Arc<> to provide shared control of the lifetimes involved. And it's far from a "boring problem"; quite often, being aware of how and why lifetime and mutability interact at a "low" level can inform higher-level design as well.
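For instance, a minimal sketch (illustrative) of the clone/Arc pattern for a closure that must outlive the current scope:

```
use std::sync::Arc;
use std::thread;

fn main() {
    let config = Arc::new(String::from("timeout=30"));

    // The closure must own what it captures to outlive this scope, so we hand
    // it its own Arc handle instead of a borrow of `config`.
    let for_worker = Arc::clone(&config);
    let worker = thread::spawn(move || {
        println!("worker sees {}", for_worker);
    });

    println!("main still sees {}", config);
    worker.join().unwrap();
}
```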
Reference counting doesn't solve all memory management problems. With closures you often end up with circular references.
And yes, in probably 99% of cases you can actually find a way out of a memory management problem in Rust if you think hard enough.
But my point is that quite often you don't want to think about memory management. Rust doesn't help here. Rust advertises that it solves memory management problems, but in reality that only holds up to a point. So use Rust for your OS or your low-level server, but don't think it is a panacea.
> With closures you often end up with circular references.
In that case, you can manually lift the data object involved into a function argument, as opposed to a variable capture - with its ownership being thus managed explicitly. This is generally an improvement in design.
Of course, Rust does only solve memory management problems "to a point"; there are cases where fully general GC is pretty much a necessity. But even most uses of closures - a fairly high-level language feature, all things considered - don't require this in many cases.
Not really. Reference counting doesn't prevent reference loops, and reference loops which aren't reachable from code are memory leaks - which is why garbage collectors do sweeps to detect unreachable memory. Rust doesn't have any kind of automatic cleanup of unreachable memory if you bypass the borrow-checker.
Rust is a language for memory management. There is no way to write code in Rust without thinking about memory.
Honest question: how many hours would you say it took you to grok the ownership/borrowing stuff and then reason about it in a natural way so you were as productive as with the language you were coming from?
> And file upload can be solved by just more code.
Yes, but it's a papercut. How many other papercuts are there, compared to more developed ecosystems for this domain? That's not an easy question to address.
Or take the ORM as an example. Doesn't even begin to compare with 2012's ActiveRecord from Rails land. Not to be surprised, as Rails used a lot of dynamic magic that is simply not doable in Rust.
Granted, it's more than enough for a lot of use cases, but it wouldn't seem to me like the most appropriate approach for, say, an e-commerce platform in terms of maintainability and time to market. Of course, the performance speedup would be huge on the other hand.
Conversely, active record can’t generate the SQL you can learn in a few minutes in a database class. I’d much rather have highly precise sql serialization and deserialization code and emphasis on minimal client overhead in a systems language than the ability to rapidly spike a data model and query it in the space of a PowerPoint slide.
But then you have also stated the point: systems language, which is the perfect use case for Rust. The original article talks about a "server language", as in application server, and my whole point was that for building business applications this is a weakness, not a strength.
I don’t see what excludes building an application server with a systems programming language. Business applications have maintenance costs that the ORM backloads; it’s not like all businesses need to do this. There’s nothing inherently “business logic friendly” about an ORM.
On the set_expires thing, the docs are at https://docs.rs/rocket/0.4.5/rocket/http/struct.Cookie.html#..., and “Tm” is a clickable link. Rocket 0.4 is using version 0.1 of the time crate, and the snippet in this article that looks into the time crate docs is looking at completely the wrong thing, time::Time::now from 0.2, which gets the time of day (which also, I think, helps explain the deprecation reason, because time of day in UTC is extremely seldom useful and can easily be obtained otherwise, while a full date + time in UTC is useful), rather than time::now from 0.1, which gets a timestamp.
It's sad that we still have to make async I/O explicit in the code to obtain some efficient concurrency in 2020.
Async/await is a huge improvement over callback hell, but this doesn't fix everything. The "function color" problem still exists [1], and seems to be more than binary in rust. This quote from the article is incredibly sad: "each async library, comes its own ecosystem of libraries, which only work with that async library".
Rust is ground breaking in some areas, but also completely lacks innovation in others.
But is it actually _necessary_ to resort to async I/O in Rust, given that the type system appears to make thread-based concurrency safe ?
> This quote from the article is incredibly sad: "each async library, comes its own ecosystem of libraries, which only work with that async library".
Already in Rust many libraries can be written completely agnostic of the underlying executor. Some cannot; we have a bit more interface work to do, but there's nothing inherent about this, it's solely a standardization issue that's being worked on. Sounds like they ran into the latter more than the former, which is unfortunate, but should be better in the future.
> But is it actually _necessary_ to resort to async I/O in Rust, given that the type system appears to make thread-based concurrency safe ?
> This quote from the article is incredibly sad: "each async library, comes its own ecosystem of libraries, which only work with that async library".
Having to write async versions of the sync APIs doesn't need to be the case: https://docs.rs/smol/0.1.18/smol/ lets you use blocking APIs in async contexts without blocking (by dispatching them to different threads). What the ecosystem is seeing is a few different projects trying things out in different ways to explore the design space. This is not necessarily a bad thing.
> Rust is ground breaking in some areas, but also completely lacks innovation in others.
Other than the borrow checker Rust is a very boring language for language designers. That is intentional :)
> But is it actually _necessary_ to resort to async I/O in Rust, given that the type system appears to make thread-based concurrency safe ?
async/await lets the compiler perform some extra inferences on how you're using your data, which makes writing an `async fn foo();` much easier than `fn foo() -> impl Future;` if you are dealing with borrowed data. Of course, the same underlying threading primitives are still available to use.
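A rough sketch of the difference (illustrative; it assumes the `futures` crate solely for a minimal block_on executor): with `async fn`, the compiler works out that the returned future borrows the argument, whereas the hand-written `impl Future` form has to spell the lifetime out.

```
// Assumes the `futures` crate is available, only for its minimal executor.
use std::future::Future;
use futures::executor::block_on;

// The async fn form: the borrow of `text` is inferred for you.
async fn count_words(text: &str) -> usize {
    text.split_whitespace().count()
}

// The hand-written equivalent: the lifetime relationship must be spelled out.
fn count_words_desugared<'a>(text: &'a str) -> impl Future<Output = usize> + 'a {
    async move { text.split_whitespace().count() }
}

fn main() {
    let text = String::from("the quick brown fox");
    println!("{}", block_on(count_words(&text)));
    println!("{}", block_on(count_words_desugared(&text)));
}
```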
I would love to see some quality benchmarking of a non-trivial threaded solver in Rust.
I strongly suspect it will perform "well enough" (hopefully much better than async in Python, and hopefully JavaScript too), which is good enough for me to be able to escape async.
One of the core facets of Rust is that you don't pay for what you don't use. If everything was async by default then code that wasn't async would pay the cost of it. Async isn't free.
Why wouldn't you just use OCaml, Haskell, F#, or Scala, where you've got much more mature web framework options? Don't get me wrong, Rust is fine, but if you don't need its memory management then why make trouble for yourself?
Rust is interesting in that it may eventually end up as a very useful language for WASM work.
Rust has the advantage that when compiled to WASM it won't require a runtime library. So the prospect of Rust as a fast, low overhead, use anywhere language is tempting.
Since Rust is a relatively new language it's worth checking to see how it's maturing over time against small low risk projects exactly as the author has done.
Rust may not be a perfect fit right now but that may change. So it's very valid to test one's assumptions from time to time.
Or to put it another way, working in one of the languages you've mentioned wouldn't answer the question "how suitable is Rust for web server work right now?"
> The project is simple enough that I can't imagine being too limited by any language's ecosystem. And I've been itching to write something substansive [sic] in Rust...
I think it's also a bit of a chicken and egg problem: If no one uses Rust for web servers then its ecosystem won't ever mature. If there are people willing to deal with some rough edges, that's great and will help the ecosystem mature.
I've taken the deep dive into Rust API service development the past few weeks. I really enjoy the experience of developing in Rust, but not so much the experience of using Rust web frameworks, somewhat in line with the author.
In particular, even having a relatively small set of feature requirements it has been difficult finding a web framework that supports: Middleware, Websockets, easy routing, and async handlers.
Actix probably supports all of those, but I don't much like development in it and prefer to avoid the drama of Actix. Surprisingly, none of the other libraries I've looked at support all of those (except possibly Gotham, but I ran into other issues with it).
I'm not sure what my takeaway is, other than that web frameworks still have a ways to go, and development on each of these web frameworks seems slow, I think because of the lack of companies picking them up. I also get the impression that people are somewhat obsessed with doing it the "Rust way", rather than just getting a simple completed web framework out the door.
However, I love working in Rust, and think I may reach a point where I'm actually more productive in it than something like Typescript. There's just a large learning curve. But it's fun.
I'm glad to see this post here. Just because I hear/read a lot of people outright dismiss Rust's potential here with arguments that kind of miss the point:
1. You don't need CPU performance for most web stuff. It's all IO bound.
2. Related to #1, garbage collectors are fine and you don't need the Rust model.
3. Rust is so hard to learn that it isn't worth it unless you need the performance.
There's so much more to Rust than the performance. It hits a really nice sweet spot between being expressive and letting you take as much control as you want/need.
Sometimes GC'd languages are a pain in the butt because I just want a damned destructor and I would like to be able to guess when/if it's gonna run.
And, frankly, Rust isn't that hard. It's an imperative language. It's not Haskell.
Can someone elaborate on why #2 is wrong on the server level? My systems are not performance bottleneck'd and as a Java dev I'm more worried about the potential for memory bugs (which I admit I will create) if I stray away from automatic garbage collection.
It's not wrong. I didn't mean to imply that. I just mean that saying non-GC languages are about performance overhead of the garbage collector is missing the point.
Sometimes, it's convenient and easier to reason about stuff when you can predict when something is actually dropped from memory.
And in Rust, you wouldn't create those memory bugs you're worried about. (Well, you can, but you have to go out of your way to take the safety off :)) That's kind of Rust's whole "shtick"
Depends what we're talking about. A "true" memory leak (as in allocated memory that is not referenced) is not very likely.
A reference cycle with `Arc`s can happen though.
But if you're talking about Java and mediocre devs (I'm in the club- don't worry), I feel like there are no shortage of ways to make bugs with null, concurrency issues, etc, that Rust completely eliminates. You can make reference cycles in Rust, though.
Also, in Rust code you often don't have references to references to references the way you do in Java, so it's just not an issue that I'm aware of having had yet.
That SO answer restates what the parent said: you can leak data either explicitly (Box::leak/std::mem::forget) or by creating reference cycles when using Arc/Rc.
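A minimal sketch (illustrative) of both kinds of leak mentioned here, a reference cycle and an explicit Box::leak:

```
use std::cell::RefCell;
use std::rc::Rc;

// A node that can point at another node: two of these referring to each other
// form a cycle that reference counting alone will never free.
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(Some(Rc::clone(&a))) });
    *a.other.borrow_mut() = Some(Rc::clone(&b));

    // Both counts are 2; when `a` and `b` go out of scope they drop to 1,
    // never to 0, so the allocations are leaked (safely, but leaked).
    println!("a: {}, b: {}", Rc::strong_count(&a), Rc::strong_count(&b));

    // Explicit leaks are also possible without cycles:
    let leaked: &'static mut Vec<u8> = Box::leak(Box::new(vec![0u8; 16]));
    println!("leaked {} bytes on purpose", leaked.len());
}
```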
I'm working with Rust as a backend right now and can echo the Diesel experience. The issue is that there is not really much of a choice: it is either Diesel or nothing, so you have to stick with that. Another problem is that most OSS projects in Rust are maintained by 1-2 people who are not working on the project full time. That's not the case for JavaScript/Node, as there are orders of magnitude more projects and people working in those platforms.
His last example could be improved a lot with the "?" operator. That would remove lots of wrapping and simplify the code greatly. So it's not that bad really.
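The article's example isn't reproduced here, but a generic sketch of the difference looks roughly like this: every fallible step matched and wrapped by hand versus `?` propagating errors up (converting via `From` along the way).

```
use std::fs;

// Without `?`: each fallible step needs explicit matching and error wrapping.
fn read_port_verbose(path: &str) -> Result<u16, Box<dyn std::error::Error>> {
    let raw = match fs::read_to_string(path) {
        Ok(contents) => contents,
        Err(e) => return Err(Box::new(e)),
    };
    match raw.trim().parse::<u16>() {
        Ok(port) => Ok(port),
        Err(e) => Err(Box::new(e)),
    }
}

// With `?`: errors propagate automatically, converted into the boxed error type.
fn read_port(path: &str) -> Result<u16, Box<dyn std::error::Error>> {
    let port = fs::read_to_string(path)?.trim().parse::<u16>()?;
    Ok(port)
}

fn main() {
    println!("{:?}", read_port_verbose("port.txt"));
    println!("{:?}", read_port("port.txt")); // Err(...) if the file is missing
}
```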
I've built a few web services in Rust, and I've been incredibly pleased by the existing ecosystem as well as how easy it is to develop concurrent solutions for CPU-intensive tasks.
https://vo.codes is a text to speech service written in Rust and it performs really well. Any service failures are simply my poor proxy implementation - the core TTS service itself scales very predictably.
Reading this, I can't help but be lost trying to come up with any alternative solution for this specific use case that would be worse than what they describe.
A bunch of mostly static blogs can be hand-edited HTML, or some ancient perl script, or any of the existing static site generators.
If it absolutely needs to involve server-side logic, it could be completely accomplished within a single call to `rails init`, and the resulting project would incorporate a few $billion worth of collective experience commonly referred to as "best practice". It would be far easier to modify and far less likely to contain vulnerabilities. Performance would be worse by a factor of maybe 100x, or "why are you asking these weird questions?" when converted to the real world, according to a representative survey of end users.
There's a good argument for replacing individual components on, say, the critical path for rendering people's twitter feed with lower-level implementations.
But it strikes me as unlikely that such endeavours would care about the ability of their ORM to quickly generate the migration scripts to drop or add some database columns.
So I can't quite see the benefit of cramming the rails model into Rust? Rails is spectacular in how it allows you to quickly iterate, adapt your data model, try some ideas, and so on. Those are qualities that just logically do not transfer to a world of static typing and manual memory management.
Not the point of the article, so sorry about the offtopic, but the workflow described in the first chapter feels a bit off. If they wanted dynamism, then writing a server is the way to go, but it seems that the easier fix to their problems as described there would have been setting up an automated CD pipeline that builds and deploys their blog every time they make a change.
Why wouldn't you want a garbage collected language for developing web services? Developer time is at a premium, lots of RAM, likely a single environment managing multiple requests so the GC gets a global overview, and because of network variability you're unlikely to be doing anything with realtime constraints.
A simple web server barely needs GC - allocate anything and everything needed during a single request from the same pool, and free the pool once the request is handled. The lack of GC can hurt developer time for other kinds of work, where the lifetime of data structures are not scoped to “within a request”.
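A toy sketch of that request-scoped pattern (illustrative only): everything allocated while handling the request is owned by locals and freed when the handler returns, with no collector involved.

```
// A toy handler: all allocations made while serving the request are owned by
// values local to this function, so they are freed when it returns.
fn handle_request(body: &str) -> String {
    let words: Vec<&str> = body.split_whitespace().collect(); // request-scoped buffer
    let mut response = String::with_capacity(body.len());
    for w in words {
        response.push_str(w);
        response.push(',');
    }
    response // ownership moves out; everything else is dropped here
}

fn main() {
    println!("{}", handle_request("tiny demo request"));
}
```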
GC pauses in some languages are not good (Go put a lot of effort into fixing this). That's about it, but Rust is good for reasons other than its lack of GC. The type system is very robust, which means fewer runtime errors. The standard library is pretty great, even if parts of it are third party de facto standards (serde, crossbeam, etc.). It's much more expressive than Go. Much faster than Python.
I started learning Rust and using it for hobby projects in 2014, but I have to agree: for full-blown product development it's just not there yet. Static typing your code up to the point where you can assume it's correct if it compiles, and managing memory manually without the need for a garbage collector - these two things alone just light up my inner nerd with excitement. But as an engineer who has to deliver a product on time and in a maintainable shape (which means "hiring considerations"), I just can't excuse using Rust instead of NodeJS or Python. Web servers that work on a request/response model are ideally suited for garbage collection, and just as OP says, they're IO bound, not CPU bound. All these wonderful things that Rust offers are just not that important in the real world.
But if I ever have to write a small (in terms of requirements and potential LOC, not load), atomic, CPU-bound microservice that will probably not require a lot of maintenance work, Rust will be my first choice.
I think Rust is past the hiring hurdle and you would probably get the opposite effect - I suspect you would get more higher quality applicants just looking for a chance to work with it. Seen a similar story with a team choosing RoR over Clojure years back.
Higher quality, but all seniors with salary expectations to match. Using a senior developer to maintain a couple of services is both a misuse of his salary and a good way to bore him to death.
People are downvoting you, but you are correct. Rust is ideal for CPU-bound problems where performance is critical. For typical web services, not so much. It is hard to justify the extra development cost, even if hiring were easy (which it isn't in the majority of the world).
I find the title surprisingly annoying. What the heck is "surprisingly good"? As compared to what? (I know, I know; reading the article would've clarified). And what the fuck is a "server language"? The title made me want to not read the article just to come to know of yet another new developer come in contact with something they haven't used before and make obvious or irrelevant pronouncements like they discovered something new. Or maybe I am just growing old :-).
What about server debugging? Python, as an interpreted language, has a real advantage here: you can easily patch in production, for example to debug or hot-fix. You would need to rebuild and upload a binary in the case of Rust.
I don’t think I’ve ever seen anyone try live-patching a Python server. Modifying the code and restarting the server, sure. Modifying Django templates and having the updates show immediately, sure.¹ But modifying the code and restarting the server is logically equivalent to modifying the Rust code, rebuilding it and restarting it. Any finer live-patching in Python is risky, largely only safe to do at defined boundaries. Modifying code is a hazardous operation, because it depends on how things use it. Patching module.function only helps places that import module and call module.function, for example, not places that use `from module import function`. So, simplifying drastically, you can mostly only really modify singletons, and singletons that have been designed to be modified in this way.
In Rust, you can define such boundaries, and manipulate things like configuration when you’ve decided you want them to be modifiable, perhaps within a debugger, or more likely via some exposed API (maybe a web API).
To be sure, Rust is less flexible: all such extension points must be designed in, rather than often working by accident (though the Python way is very likely to blow up in your face from time to time).
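To make that concrete, a minimal sketch (my own, with hypothetical names) of such a designed-in extension point: a setting behind a lock, changed only through an explicit API such as an admin endpoint.

```
use std::sync::{Arc, RwLock};

// A deliberately designed "live" setting: anything that should be tweakable at
// runtime goes behind a lock and is exposed through an explicit API.
struct Config {
    log_verbose: bool,
}

type SharedConfig = Arc<RwLock<Config>>;

fn set_verbose(config: &SharedConfig, value: bool) {
    config.write().unwrap().log_verbose = value; // e.g. called from an admin endpoint
}

fn handle(config: &SharedConfig) {
    if config.read().unwrap().log_verbose {
        println!("verbose: handling request");
    }
}

fn main() {
    let config: SharedConfig = Arc::new(RwLock::new(Config { log_verbose: false }));
    handle(&config);
    set_verbose(&config, true); // the designed-in boundary for "hot" changes
    handle(&config);
}
```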
But in the end I don’t think it’s such a big difference.
(On reflection, I suppose I have seen a debugger used in the scope of a particular request within Django, to give you a pdb prompt instead of just the 500 error page. But that’s then just used for inspecting what’s broken, and most such brokennesses would have been caught by the Rust compiler. Still, something equivalent to that could be nice to have in a Rust development web server; I don’t believe any such thing exists at present, but it could in theory and might be interesting to make. It would still definitely be more limited than pdb. Maybe when we have a Miri-powered REPL something interesting will happen in this space. It’s not a fundamentally impossible space.)
¹ If that works, by the way, you should set up the cached template loader, as it’ll speed template rendering up a lot. That used to be something you’d have to do manually, but now the default template loaders configuration includes caching if debug is False.
This really only works in the case that you have one server running a python app that you can ssh into. Any deployments more complicated than that and it starts to break down.
Debugging and hotfixing in production is not generally a good idea. Updating a site with significant traffic will usually involve pushing some type of artifact through a build system and deploying it to multiple servers, and that’s true whether it’s a python script or rust binary. As far as an actual debugger, I believe rust generally encourages gdb. If your concern is about Rust compile times, that’s a different but valid question.
I've been writing Python for a few years now (at my job), and most of the situations where I've used breakpoint() to debug something would literally not have been possible in Rust because they would have been compile time errors instead of runtime errors.
I really don't understand the downvotes :) Yes, I've been working on Python applications and live-patching code for emergencies; our app was deployed and running on 64 different machines. I'm not talking about coding in production or whatever; of course, once the workaround is done on production, a proper solution is implemented and deployed.
Why restart, when you can reload the new code of the app while it's running? No service interruption; it worked for us for years! But yeah, I agree, better to have 4 eyes while patching on prod :)
I can't confirm the motivation behind Ada++, but the actual Ada 2012 standard, and in particular the Ada SPARK (2014) subset, is in many ways objectively superior to Rust with regards to memory safety.
One major issue with Ada is that commercial grade compilers are not cheap and for the most part the language was unable to get rid of the stereotype of being an Aerospace/Defense language only.
More importantly the previous discussion on Ada SPARK 2014 'safe pointers' may also be an interesting read for proponents of a Substructural Type System:
There is so little information on the page you linked that I doubt this project is production-ready in any shape or form. Ada webdev is already tough as it is; I doubt that project can compare to Rust.
It is an Ada compiler with modified syntax. The GNAT Ada compiler is used by defense contractors, air traffic controllers, and NVIDIA for some of their internal research.
I had a similar idea for my blog [1], with almost identical goals, but I wrote it in Go. Rust is on my to-learn list, but I find it harder to read, judging from the code samples in the article and what I saw on other websites. I guess I will stick to Go for business logic. When would you guys choose Rust over something else, like Go?
I've thought about getting back into game development. Nothing fancy, just some 2D experiments. I haven't looked into Vulkan/OpenGL bindings for Go in a while, but I remember that they weren't really recommended. Maybe I'll give Rust a shot one day to do that.