I've taken a non-technical lesson from these blog posts. It's a simple one - their very existence has made Rust compilation times less of an issue to me.
For most people, knowing that the relevant people (in this case the Rust team) agree with your problem and are doing something about it significantly reduces the pain of the problem, because you know it's getting better. You need both parts though - the Rust team could be quietly working on performance for years and double the compile speed, and I probably wouldn't notice. This blog series is excellent marketing for some forward progress, and means that I do notice, and feel better about it.
The downside is that it is a very common marketing trick. Promise something so that customers feel good about it.
I know the Rust contributors are doing their best, and that this blog post is not a marketing ploy, but in the end, the result is what matters.
My take on this is that I am fine with compile times as long as some effort is being made to keep them near optimal.
But I really do not want the Rust team to feel so much pressure to improve compile times that they end up killing other features or designing the language differently just for that (like Go did, for instance). Or even making the compiler so complex that adding new features becomes harder for them.
I really like Rust as a language that is more complex to write and to compile but in return gives you robustness and performance. Compilation times be damned. I am not writing Rust to compile fast!
I want two different languages. I want one that compiles fast so I can run my unit tests fast. I want one that takes however long needed to get the fastest binary. These need not share the same compiler so long as both compile the same source to the same result.
> I want one that takes however long needed to get the fastest binary.
Year after year, people keep saying this or things like it. But it's worth remembering that it is not the compiler that makes your code fast; it's how you structure your data and its transformations. In 2014, Mike Acton taught us that the best compiler optimisations can only account for 1-10% of the performance optimisation problem space: "The vast majority of [performance] problems, are problems the compiler cannot reason about." https://youtu.be/rX0ItVEVjHc?t=2097
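To make that concrete (my own illustration, not an example from the talk): a compiler will never rewrite your array-of-structs into a struct-of-arrays, even though that layout change often dominates cache behaviour. A minimal Rust sketch, with made-up types:

    // Array-of-structs: updating positions drags the unused `health`
    // field through the cache alongside the fields we actually touch.
    struct Particle { pos: f32, vel: f32, health: f32 }

    fn update_aos(particles: &mut [Particle], dt: f32) {
        for p in particles {
            p.pos += p.vel * dt;
        }
    }

    // Struct-of-arrays: the same transformation walks two dense arrays,
    // so each cache line is full of data the loop actually uses.
    struct Particles { pos: Vec<f32>, vel: Vec<f32>, health: Vec<f32> }

    fn update_soa(ps: &mut Particles, dt: f32) {
        for (p, v) in ps.pos.iter_mut().zip(&ps.vel) {
            *p += *v * dt;
        }
    }

The compiler can vectorise either loop; the layout decision, which is where most of the win lives, is one only the programmer can make.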
Is that really true in all cases though? It's certainly true for C, C++, and other compiled languages with manual memory management, because the compiler can't re-order things past a memory load. I think it's less true for where a language like Rust can get to (even if it's not there yet). Safe Rust guarantees invariants about aliasing and memory that let the compiler reason more about what the code is doing and rearrange instructions more than would be possible in C or C++.
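A minimal sketch of what those invariants buy (the function name is made up; this is the standard no-aliasing example):

    // `dst` and `src` cannot alias: `&mut` guarantees exclusive access,
    // so the backend is told `noalias` and may read `*src` once and
    // keep the value in a register across the writes.
    fn add_twice(dst: &mut i32, src: &i32) {
        *dst += *src;
        *dst += *src;
    }

The equivalent C function taking two plain pointers must reload `*src` after the first write, because the write through `dst` might have changed it (unless the programmer promises otherwise with `restrict`).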
> [...] because the compiler can't re-order things past a memory load.
Re-ordering of reads and writes is not really the problem either, even if we assume that one could automatically safely reorder reads and writes arbitrarily to yield significant optimisations. Fundamentally, the compiler does not know the semantics of your data and the relationships between pieces of data and the logic of their transformations. Knowledge of your specific problem and the specific data you are transforming yields the majority of the powerful optimisation opportunities. Similarly, a garbage collector doesn't know the meaning of your data or your algorithms.
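As a concrete (invented) example: no optimiser can discover that a slice you scan linearly is in fact kept sorted by the rest of the program, yet that single piece of domain knowledge licenses an asymptotically better algorithm:

    // The compiler sees an O(n) scan; at best it can vectorise it.
    fn find_linear(ids: &[u64], target: u64) -> Option<usize> {
        ids.iter().position(|&id| id == target)
    }

    // Only the programmer knows `ids` is always sorted, which justifies
    // an O(log n) binary search no compiler could infer on its own.
    fn find_sorted(ids: &[u64], target: u64) -> Option<usize> {
        ids.binary_search(&target).ok()
    }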
> Is that really true in all cases though?
Consider: Given a set of data, if a compiler or garbage collector understood the data sufficiently, then there would be no need for you to exist to write the program, because the compiler could simply write the data transformations itself and generate the program.
However, one is encouraged to do experiments for oneself. Concrete profiling data, not abstract theory, is what matters when evaluating the cost-benefit of any approach.
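In that spirit, here is a crude way to check a claim like the one above for yourself (a real harness such as criterion would give far more trustworthy numbers; this single-shot timing is just a sketch):

    use std::hint::black_box;
    use std::time::Instant;

    fn main() {
        // Sorted by construction; worst-case target for the linear scan.
        let ids: Vec<u64> = (0..10_000_000u64).collect();
        let target = 9_999_999u64;

        let t = Instant::now();
        black_box(ids.iter().position(|&id| id == black_box(target)));
        let linear = t.elapsed();

        let t = Instant::now();
        black_box(ids.binary_search(&black_box(target)).ok());
        let binary = t.elapsed();

        // `black_box` keeps the optimiser from deleting the lookups.
        println!("linear: {linear:?}, binary: {binary:?}");
    }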
> The downside is that it is a very common marketing trick.
"The less you intend to do about something, the more you have to talk about it." However this maxim is not applicable here because compile times actually have been improving.
This. Don't ever dumb down a language just to make tools faster. Anything the tools don't help with, I have to do with my brain, which is made of meat and runs at 0.0000001 GHz.
There is definitely a marketing angle to these posts. Rust has a reputation for slow compilation, and one of my goals is to show that (a) people working on Rust care about compile times, and (b) improvements are being made.
I'll let you judge whether the marketing is backed by actual substance. It's a long-running blog post series; here are links to all the posts.
I hope you're right and it gets better in the long term - the blog post says "since my last post, the compiler is probably either no slower or somewhat faster for most real-world cases".