I really want to like Rust; I like the intention of what the language is trying to do. But in a universe where languages like Smalltalk, Eiffel and Lisp exist, it just feels like a step backwards. Surely the computer should be doing the heavy lifting? What happened to my bicycle for the mind? Just my opinion. I admire the work going on in Rust but the complexity makes me sad.
My experience with Rust is that it tries to squeeze every last drop of performance, but instead of relying on simplicity (C) or unsafety (C++; the reason Mozilla started it was to reduce C++ memory-related bugs), it introduces language complexity (lifetimes, "prove that your code is safe, or fall back to runtime overhead such as a Mutex") and a little bit of runtime overhead (explicit Option<T>/enum checks). In my mind, it doesn't play in the same league as high(er)-level garbage-collected languages. It was made for robust low-level applications, which is why it, and not Lisp, is being considered for the Linux kernel.
Concatenating strings in a GC language is easy: you just "+" them together, and the cleanup happens eventually. Concatenating in low-level languages means you have to think about the memory allocation(s) taking place, and format!("{}{}") is a lot more powerful for anything beyond just adding two strings. What if you want to prepend "https://"? In GC languages, this might mean an extra string being allocated and discarded; in Rust, the prefix is a constant hard-coded into the binary. It just happens that the language has attracted a lot of people who are used to higher-level abstractions, due to its relatively pleasant syntax, type system and abstractions, ecosystem and performance.
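For illustration, the prepend case looks something like this (host is a hypothetical variable, purely for the sake of the sketch):

fn main() {
    let host = String::from("example.com");
    // the "https://" literal lives as a constant in the binary;
    // format! allocates one new String for the combined result
    let url = format!("https://{}", host);
    println!("{}", url);
}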
The idea was to copy the slices model from memory-safe languages; but this being C++, where C's influence means performance trumps safety, string_view couldn't be made as safe as the idea it was copying from.
Another example is span: while C#'s Span and Microsoft's gsl::span do bounds checking, ISO C++20's std::span naturally leaves it as the developer's responsibility.
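For contrast, a tiny sketch of how Rust handles the same situation: plain indexing is bounds-checked at runtime, and the checked accessor turns the failure into a value you have to handle:

fn main() {
    let v = vec![1, 2, 3];

    // plain indexing is always bounds-checked: `v[10]` would panic instead of reading out of bounds
    // let oops = v[10];

    // the checked accessor returns an Option, so the out-of-range case must be handled
    match v.get(10) {
        Some(n) => println!("{}", n),
        None => println!("index 10 is out of range"),
    }
}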
This is why, while I toy with Rust, I can only see it replacing C and C++ (long term) in the context of kernels, device drivers and real-time scenarios where any kind of memory allocation is a no-go (MISRA style).
For anything else, it isn't worth the productivity drop compared to automated memory management (in whatever form) coupled with value types.
If you don't allocate memory in the first place, though, Rust also loses a lot of its appeal, because dynamic memory management is often the root cause of memory corruption issues in C and C++ (while working with an upfront-defined static memory layout avoids some typical footguns, thanks to things like stable memory locations throughout the lifetime of the application). I'm aware that Rust also helps to make accessing "static" memory safe, but Rust's safety guarantees matter much more with complex memory management patterns.
While it is true that dynamic memory management in C and C++ is a source of memory corruption issues, the other great source is precisely the reluctance of many C and C++ programmers to use dynamic memory allocation instead of buffers of statically-defined sizes. E.g.
1. Functions which convert a number to a string, where the target string is a byte buffer passed into the function as a pointer argument. This allows C developers to pass pointers to statically-sized arrays of bytes, which leads to buffer overflows. (Inb4 someone shows up with "but that's the programmer's fault for not using an array that's big enough".) Example: strcpy.
2. Functions which return pointers to static storage, which makes them thread-unsafe, and even in single-threaded cases is error-prone, when you call one such function multiple times in succession (aliasing). Example: gmtime.
Both cases are very prominent in the standard library.
If instead functions allocated memory dynamically, then they would avoid buffer overflow, because size would be handled exclusively within those functions, allocating as much memory as is needed, no more, no less; instead of forcing the programmer to worry about the size at every call-site. They could also be easily made thread-safe and less error-prone (eliminating aliasing), provided that the allocator is thread-safe.
Another issue with the first case is that it leads to uninitialised variables. Often a pattern of "bool NumToString(int n, char* dst, int size);" is employed instead of "Optional<std::string> NumToString(int n);". This invites the possibility of using "dst" without checking the return value, which says if the conversion succeeded. In this case dst is most likely uninitialised, and definitely incorrect. An "Optional<std::string>" solves both issues.
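For comparison, here's roughly what that Option-returning shape looks like in Rust (num_to_string is hypothetical, and decimal conversion can't actually fail, so the Option is purely illustrative):

// the function owns and sizes the allocation; the caller never supplies a buffer
fn num_to_string(n: i32) -> Option<String> {
    Some(n.to_string())
}

fn main() {
    // the caller can't touch the result without handling the failure case first,
    // and there is no destination buffer that could be left uninitialised
    match num_to_string(42) {
        Some(s) => println!("{}", s),
        None => eprintln!("conversion failed"),
    }
}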
All true, but on the other hand you have much more dangerous (because hidden) memory corruption problems in C++, caused by the implicit dynamic memory management in the C++ stdlib: iterator invalidation, or dangling std::string_views. And IMHO those cases where C++ suddenly pulls the rug out from under your references are much more dangerous than working with a static memory layout in C.
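For what it's worth, the dangling-view case is exactly the kind of thing the Rust borrow checker refuses to compile; a minimal sketch (deliberately broken code, rejected with E0597):

fn main() {
    let view;
    {
        let s = String::from("hello world");
        view = &s[1..4]; // error[E0597]: `s` does not live long enough
    } // `s` is dropped here, so `view` would dangle
    println!("{}", view);
}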
The C stdlib string API is rubbish of course (along with most other C stdlib APIs). But at least it's quite simple in C to ignore the stdlib (and use better 3rd-party libs instead) without losing important language features.
The experience with CVEs in MISRA C(++) vs Java and Ada embedded deployments proves that there is still room for improvement by pushing C and C++ completely out of the picture.
Rust also helps prevent many types of concurrency bugs, and the type system is much more expressive than C's or C++'s, which can lead to cleaner code as well as preventing bugs like null references.
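On the null-reference point, a tiny sketch of what a "maybe absent" value looks like when it's in the type (find_user is a made-up function, just to show the shape):

fn find_user(id: u32) -> Option<String> {
    if id == 1 { Some("alice".to_string()) } else { None }
}

fn main() {
    // the compiler won't let you use the String without dealing with the None case
    match find_user(2) {
        Some(name) => println!("found {}", name),
        None => println!("no such user"),
    }
}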
> But since it’s trendy people try to apply it to everything.
When people do things you don't understand, don't just assume they're doing it because it's fashionable. Sure, they might be, or they might have their own reasons.
I use Rust in a couple of places outside of its original "systems programming" niche, where, yeah, I don't really need to track every allocation, I don't care about GC pauses, I don't need to ship one single binary, and the overhead of a runtime wouldn't bother me. Things like Web servers for side projects, or small scripts to do a task I need to automate.
However, I found that:
• The effort it took to bring Rust out of its niche, and the time it took to learn the domain-specific libraries for my use cases (Rocket for Web stuff; duct for shell script stuff), were less than I thought;
• The amount of knowledge I needed to retain to use a programming language effectively — the components of its standard library, common third-party helper libraries, how to navigate the documentation, how to fix mistakes, how to avoid traps and pitfalls, how to structure your program, how to handle differences between language versions — was much larger than I thought!
So I stick with Rust for non-systems tasks because the benefits outweigh the detriments for me.
(Granted, I was only able to do this because I already knew my way around the language and the borrow checker; if you already know, say, Python, you can make this exact same argument in reverse. But then you need to know how to wield Python for low-level programming as well as high-level programming.)
I read an article a few years ago called Java for Everything[1] (which was discussed here on HN[2]) that makes the same point, only with Java. If I had to pick an "everything language", I don't think it would be 2014-era Java, but the article did sell me on the benefits of having an "everything language" in the first place, and I feel the same benefits apply here with Rust.
Some go as far as saying that "Rust is pretty accessible to developers coming from scripting languages (JavaScript, Python, etc)." [1].
Meanwhile in the real world:
"I find myself hanging for long periods of time on borrow checker errors. One of the errors has stopped my progress dead for a week now, I swear it worked a week ago and then Rust decided that a borrow I was doing was no good." [2]
The borrow checker is deterministic, so the commenter in [2] is mistaken about it “deciding” that a previously-OK borrow was suddenly no good.
I do recall the steep learning curve and the frustrations with the borrow checker, but I haven’t had it impede me at all in probably two years now and I’m writing Rust most of the day every day.
Also, most of the knowledge I gained by learning to work with it is general knowledge that made me a better programmer, not specific knowledge of how to “work around” the borrow checker.
Well. I think they're very different languages for different purposes. Rust is trying to replace C/C++, and in those languages you do have to pay a lot more attention, otherwise you'll have the occasional use-after-free or unchecked memory leak, etc. They can be extremely problematic, and for a human it's impossible to catch everything.
Rust helps lift that mental load, because if you're in safe Rust then you know the compiler will catch these problems for you. It also makes it quite explicit when you're on the stack or heap.
I also love how the explicit mut keyword makes functions very easy to reason about. In Go I find pointers being used for three purposes: to act as a reference, to act as an optional (nullable), or to act as a mutable parameter. It's impossible to tell which by looking at the function signature, and I can't tell you how many bugs that's caused in our codebase.
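For illustration, those three intents each look different in a Rust signature (Config and these functions are hypothetical):

struct Config;

// read-only borrow: the function cannot mutate what it's given
fn read_config(_cfg: &Config) {}

// explicit mutable borrow: both the signature and the call site say `&mut`
fn update_config(_cfg: &mut Config) {}

// an optional parameter is an Option, not a nullable pointer
fn maybe_config(_cfg: Option<&Config>) {}

fn main() {
    let mut cfg = Config;
    read_config(&cfg);
    update_config(&mut cfg); // the `&mut` at the call site is what makes mutation easy to spot
    maybe_config(None);
}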
In my experience Lisps are orders of magnitude more complex than Rust, and require extensive debugging to be certain that the code works. Rust is mostly a compile-and-run experience; idiomatic Rust is hard to screw up.
I think Rust was targeted at C and C++. Do you like any more recent languages, or do they all feel like they went the wrong way? I find Julia interesting, but I haven't delved deep.
The bicycle you pick depends on the problem you're trying to solve. You can divide up all programming languages into two categories:
1. Those that allow you to state your problem in terms of ideas. You write your code, the compiler or interpreter does some optimisations, and the end result runs fast enough for your use case.
2. Those that allow you to state your problem in terms of machine instructions. You demand a certain level of performance, or want every memory allocation to be explicit. The compiler or interpreter still does some optimisations, but instead of inserting more instructions implicitly if it has to, you'd rather your program straight-up fail to build.
The choices made by a language's designers will make it seem unnecessarily complicated if you're looking at a (2) language when you have a (1) problem, and will seem like it's making too many assumptions for you if you're looking at a (1) language with a (2) problem.
For example, let's go back to string concatenation, which, as you saw in Rust, looks like this:
format!("{}{}", x, y)
And, yes, in other languages it looks like one of these two things:
x + y
concat(x, y)
If all you want is to end up with a "String" that's made up of those two other strings in it, this is good enough. I've done this bajillions of times over the course of my career. (Truly, I am a string-concatenating expert.)
But a String in Rust, under the hood, contains a pointer to a heap-allocated buffer. And getting one of those requires a memory allocation, and Rust makes those explicit. So the method of concatenating two strings you choose depends on what machine instructions you pick:
• If you don't really care, or if you need a new heap-allocated buffer, then `format!("{}{}", x, y)` will give you what you want.
• If `x` is already a heap-allocated String, and you know for a fact that you aren't going to need to use it again, then you can re-use the existing buffer with simply `x + y` (or something like `x.push_str(y)`).
• If `y` is a heap-allocated String but `x` isn't, you can re-use that existing buffer with `y.insert_str(0, x)` (but this requires it to copy the bytes in the buffer to make room for the string you're inserting).
• If neither is a heap-allocated String, and you don't want to allocate any more memory, you'll have to skip concatenation altogether and do something else (for example, if you're just writing the concatenated string somewhere, and don't need a buffer to put it in, then you can do `write!(somewhere, "{}{}", x, y)` which won't allocate).
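Pulling a few of the options above into one runnable sketch (x and y are just &str literals here, so take it as illustrative rather than representative):

use std::fmt::Write as _; // needed so write! can target a String

fn main() {
    let x = "foo";
    let y = "bar";

    // option 1: allocate a fresh String for the result
    let a = format!("{}{}", x, y);

    // option 2: if you already own a String you no longer need, `+` reuses its buffer
    // (here one is allocated first just so the snippet is self-contained)
    let owned = String::from(x);
    let b = owned + y;

    // last option: write! formats directly into an existing target (here a String,
    // but it could be any fmt::Write or io::Write sink) without building an intermediate result
    let mut out = String::with_capacity(a.len());
    write!(out, "{}{}", x, y).unwrap();

    assert_eq!(a, "foobar");
    assert_eq!(b, "foobar");
    assert_eq!(out, "foobar");
}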
> Surely the computer should be doing the heavy lifting?
I find that the Rust compiler does an excellent job of doing the heavy lifting for me: I don't have to worry about iterator invalidation or use-after-free, surprise memory allocations or numeric type conversions, or unexpected performance drops when I make a small change. But this is all because I'm expecting a type (2) language, rather than a type (1) language.
It's the last part I'd like to focus on, because you may have read the above list and thought "a compiler could choose the appropriate optimisation for me!". And, well, it probably could! After all, if the only difference between the first option `format!("{}{}", x, y)` and the second option `x + y` is whether the string `x` gets used after concatenation or not, a compiler could certainly check whether this happens and optimise out the allocation if necessary.
But then you need to worry about keeping this optimisation when the code changes. Sticking with the example, suppose you have code like this in some hypothetical (1) language that allows you to express ideas but optimises them the best it can, where `x` and `y` are Strings still:
var frobozz = x + y;
And then someone on your team makes this change:
var frobozz = x + y;
/* skip a couple dozen lines of code */
var gunther = x + z;
A code change that starts using `x` a second time now results in an allocation being introduced a couple dozen lines above where the code change was, because it can no longer be optimised out. If this is still good enough for you, because you're solving a (1) problem, then there's nothing wrong with this. But if you're solving a (2) problem, then the language has kind of... let you down.
So don't be sad! It just depends on what problem you want the language to solve for you.
The only fallacy is when one then goes to Godbolt and reaches the conclusion that, regardless of the language, the generated assembly can end up the same, depending on the author's proficiency with the language features at their disposal and the compiler backend being used.
Isn't it true that Rust is meant to solve problems in spaces where those languages really can't play? I'll admit to plenty of ignorance about all three of those languages, so I'm genuinely curious what you think.