"Another sensible use of panic!, even in library code, is in situations where it's very rare to encounter errors, and you don't want users to have to litter their code with .unwrap() calls."
Say you're implementing quick-sort, and you're checking the pivot value like this:
if array.len() >= 2 {
    let pivot = array[array.len() / 2];
    ...
}
Those brackets could panic if the index were out of bounds! Of course you can tell locally from the code that it can't be out of bounds. But if you're really not allowed to have panics (and, equivalently, square-bracket array accesses, since those can panic), then you have to write this instead:
if array.len() >= 2 {
    let pivot = match array.get(array.len() / 2) {
        Some(pivot) => pivot,
        None => return Err(InternalQuicksortError("bad index somehow")),
    };
    ...
}
You do this all over your codebase and it gets hard to read very quickly!
There are way too many replies here which seem like they fundamentally don't understand that you can't actually write an algorithm to solve the problem they imagine is easy.
Rust does have Results that it knows can't fail, for genericity reasons: these are Infallible results. For example, try_into() in a case where into() would have worked has the error type Infallible. Rust uses an empty type for this (equivalent to, but for now not literally identical to, ! aka Never), so the compiler knows the type can't be reified and needn't generate code for the never-happens error path.
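For instance, here's a minimal sketch using the std blanket impl of TryFrom (which has Error = Infallible whenever Into would have worked):

use std::convert::Infallible;

// The blanket impl `TryFrom<U> for T where U: Into<T>` sets Error = Infallible,
// so widening a u32 to a u64 "can fail" only at the type level.
fn widen(n: u32) -> u64 {
    let r: Result<u64, Infallible> = u64::try_from(n);
    match r {
        Ok(v) => v,
        Err(never) => match never {}, // Infallible has no values; no code is generated here
    }
}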
But there's a crucial distinction between "Rust's compiler can see this is Infallible" and "I can show it won't fail", and Rice's theorem means that gap is insurmountable even just in principle.
I’d be curious to see whether the optimizer could detect, in this instance, that the code is in fact unreachable. It seems to me it should detect it with ease.
I will concede that the “you should never get here” type errors are tempting to panic on. But I have seen so many instances where the “you should never get here” code does execute. Memory corruption, access by other processors, etc., could make this check (which should never happen!) actually fail at runtime. If one of those happens in code I wrote, I would rather be alerted to the error, without crashing, if possible.
A lot of the appeal to “panic on error” IMO is to reduce developer cognitive load. As a developer, it’s inconvenient to consider all the errors that can actually happen when programming. Allocations, array accesses, I/O, and other errors programmers want to ignore, do happen. It’s annoying to think about them every time we write code, but I’d rather have something that forces me to consider them, rather than be bit by something that I designed out of my consideration.
This preference might change depending on what I’m writing. For a one-off script that I need to spin up quickly, I don’t really care. For code running on a device I may not get to service for 5 years, my opinion would be different.
Wait hang on: your code hit a case that you know means something like memory corruption happened, and you want it to keep executing? If there's memory corruption, I want the program to stop ASAP so that it does as little damage as possible!
FYI Rust panics and illegal `[]` accesses will tell you the file & line number where it happened.
Some people do indeed want this as far as I know. I believe it is what the Linux kernel wants for example. I'm not aware of any other projects that want the same thing.
> A lot of the appeal to “panic on error” IMO is to reduce developer cognitive load. As a developer, it’s inconvenient to consider all the errors that can actually happen when programming. Allocations, array accesses, I/O, and other errors programmers want to ignore, do happen.
Rust has the ? operator to address the common pattern of returning a failure condition to the caller. There's effectively zero cognitive load involved, since compiler hints take care of it when writing the code.
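For example, a minimal sketch (the function and path are hypothetical):

use std::fs;
use std::io;

// `?` returns early with the error, converting it via From if needed;
// the happy path reads straight through.
fn read_config(path: &str) -> Result<String, io::Error> {
    let contents = fs::read_to_string(path)?;
    Ok(contents)
}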
> I’d be curious to see if the optimizer couldn’t detect in this instance that that code is, in fact, unreachable.
Probably, but that should be a job for the type checker not the optimizer. Ideally, there should be a way of creating statically verified asserts about program values, and plugging them in as preconditions to function calls that require them for correctness.
If only there was some way to propagate the range checks...
// Hypothetical API in the spirit of branded indices:
// bounds() hands back proof-carrying index values, not raw usizes.
let (lo, hi) = array.bounds();
match lo.strictly_below(hi) {       // Some((lo, hi)) only when lo < hi
    None => array,                  // 0 or 1 elements: nothing to sort
    Some((lo, hi)) => {
        let pivot_index = lo.avg_with(hi); // guaranteed to be in bounds
        let pivot = array[pivot_index];    // never panics, but the index type is opaque, not usize
        // ...rest of the qsort here
    }
}
You're still using `array[index]` syntax though. That comes with a panicking branch. The optimizer might elide it, but it might just as well do that in the original code.
I do not believe you've rebutted the main point here. At best what you've done is show a better way of guaranteeing that indices are correct. But you still need to deal with the fact that index notation comes with a panicking branch.
Not necessarily? Indexing is overloadable, so nothing stops one from writing an implementation that uses an unsafe block without any checks or panics whatsoever (because there are static guarantees that those would be unnecessary).
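A minimal sketch of that idea (all names here are hypothetical, and a production version would also need to brand each index to the slice it was checked against, as in the capability-style work mentioned below):

use std::ops::Index;

// An index that can only be constructed by code that has verified the bound.
struct Checked(usize);

impl Checked {
    fn new<T>(i: usize, s: &[T]) -> Option<Checked> {
        (i < s.len()).then_some(Checked(i))
    }
}

struct NoPanicSlice<'a, T>(&'a [T]);

impl<'a, T> Index<Checked> for NoPanicSlice<'a, T> {
    type Output = T;
    fn index(&self, idx: Checked) -> &T {
        // SAFETY: a Checked is in bounds by construction (modulo the branding caveat above).
        unsafe { self.0.get_unchecked(idx.0) }
    }
}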
Apologies for the verbose response, but I just don't see how to avoid it because people constantly get tangled in knots in this sort of discussion.
I think this is continuing to miss the forest for the trees. The original code snippet could also just use `unsafe` to elide the panicking branch and it would still be just as correct while satisfying the requirement that there are no panicking branches.
But what's actually happened is that you've traded "panic when a bug occurs" with "undefined behavior when a bug occurs." Both things are valid choices depending on the circumstances, but the latter is not a realistic escape hatch out of the discussion at hand: whether one should completely elide all panicking branches.
The comment that kicked off this sub-thread:
> IMO (systems programming background), the litmus test for panic! vs. Result doesn’t exist. Don’t panic.
The "Don't panic" advice is correct but ambiguous on its own. If it's referring to the behavior of a program, then yes, absolutely, don't panic. Or as I like say, "if a program panics, it should be considered a bug." But if it's referring the source code, as in, there should be no panicking branches, then that is absolutely wrong. Neither the standard library nor any popular Rust library follows that practice. And they shouldn't.
I interpreted "Don't panic" as the latter meaning because of the claim that "panic! vs Result doesn't exist." That only makes sense in the latter interpretation. In the former interpretation, one does need to contend with a "panic vs Result" choice, as it's often a balance between convenience and how it's usually used. For example, indexing a slice with a `usize` panics.
So either the former interpretation was meant and the poster is confused (or just misspoke), or the latter interpretation was meant and is divorced from reality.
I'll post a link to my blog on this topic yet again, because I'm pretty sure it will clarify the situation and my position: https://blog.burntsushi.net/unwrap
Yes, indeed. And it's not even a novel idea, my example is lifted straight from one of the references of the paper you've linked (namely, Kiselyov & Shan, "Lightweight Static Capabilities", 2007).
I personally think that taking an optimization (e.g. boundary checks elimination) and bringing it — or more precisely, the logic that verifies that this optimization is safe — to the source language-level is a very promising direction of research.
How does anyone on the stack above handle this error? i.e. what can be done programmatically to deal with it?
I claim this is far worse than a panic in this case... you've now introduced a code-path variant that folks up the stack will think is "legitimate", caused by some input or something in the environment that could be changed to make it go away. But that's not the case here: the only thing that can be fixed is the code in question, so that it never reaches this region when the array length is less than 2. It's a plain old "bug", and the only cure (other than more cowbell) is to modify the source and recompile.
> How does anyone on the stack above handle this error?
For a web application, you respond to the request with 5xx error.
Applications do many things. A degraded state (e.g. a single endpoint being broken) is often preferable to a full crash. It's very little effort to opt in to panics by calling unwrap(). It's a lot of effort to recover from panics. Let me, as the caller, make that decision.
Some technical limits, a lot of cultural opposition. While it is possible to have a panic handler, it's generally viewed as a bad idea. Not everything is UnwindSafe, there are limits to how much a panic handler can do, concerns about memory leaks.
Again, the caller can turn it into a panic easily enough if that's what they want. Leave the decision in their hands.
You can convert the Option to a Result where the None becomes the error and then propagate from that.
This is basically how you are using this Option, by the way, and while that's possible, you have to consider why the original author decided to return an Option and not a Result: it's because they want you to handle the None case in a way that doesn't treat it as a failure (i.e. an error), but instead as a non-failure possibility.
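A sketch of that conversion (the function is hypothetical):

// Option -> Result: None becomes the error, and `?` propagates it to the caller.
fn middle(v: &[i32]) -> Result<i32, String> {
    let m = v.get(v.len() / 2).ok_or_else(|| "no middle element".to_string())?;
    Ok(*m)
}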
Maybe my mind has been poisoned by a few years of Scala, but that just looks like normal code to me. Bundle up those errors and return a useful message to the user about why it failed, call it a day.
The point is that the code literally cannot panic under any circumstances. Just because there's a call to `unwrap()` or `[]` doesn't mean it's actually possible to trigger that panic. I made that more explicit in the code, in case that wasn't clear.
If there's any question as to whether a line might actually panic or not, then absolutely, generate helpful error messages. But when you can tell through simple local reasoning that the panic will never happen, you can just use square brackets. It's OK.
> when you can tell through simple local reasoning that the panic will never happen
A problem is that you may be able to tell now, but as code changes and functions grow and split, that reasoning may grow less simple and less local without any changes of those lines in particular.
The odds of that, and the degree to which it matters, are of course both spectacularly situation dependent.
The point is that the language has decided that some kinds of builtin operations can panic, in the interest of developer ergonomics. The alternative is that every indexing operation returns a Result, or you write 4 lines of code instead of 1 for every indexing operation. So the notion that the programmer should never write "panic!" in their code does not fully address the underlying problem, if the reason for not writing "panic!" is "library code should never panic".
Panic is not a rust-specific term, it generally refers to software halting intentionally because something has gone so wrong that you can’t reasonably recover. For example, some safety invariant is violated or core data is corrupted. Kernel panics (also called bug checks or BSODs in Windows, same concept) exist in all OSes by necessity.
When linux panics, it halts the machine - for fundamental things like data corruption detected, and we can't continue to operate and risk corrupting the disk. Just like Windows has the BSOD.
if !cond {
    // I ad-libbed the exact error message
    panic!("Assertion failed: {cond} was false, line/file information, etc.")
}
panic!(message) is "either throw an exception containing message, or print the message + maybe a backtrace and abort the process".
You can use compiler flags to guarantee that it aborts instead of throwing an exception; you can't use compiler flags to guarantee that it acts as an exception, since sometimes Rust will just abort the process regardless (for example, if you panic!() while you're already unwinding because of a prior panic!()).
If nothing catches the exception (and that's usually the case, by convention), the runtime will print the message, maybe a backtrace, and kill the thread that panicked.
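For reference, the abort behavior is opted into via Cargo's profile settings:

# Cargo.toml: make panics abort the process instead of unwinding.
[profile.release]
panic = "abort"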
If you are building application software like a web framework, panicking and then dealing with the issue in a global exception handler that returns a 500 is often the easiest. There's no point in littering the code with match Result => err return http.status500 like bad Go.
This is sometimes called the diaper (anti)pattern - because it catches all the poop. It works great - so long as the only exceptions your program ever throws are ones you are deliberately throwing to the global error handler. Given that, go for it. ;)
(That would be the point of exceptions: they make the correct thing happen the majority of the time, and it's why I think they are superior. Checked exceptions, that is. They are completely analogous to Result types, but are properly supported by the language and can have as small or as large a scope as needed. I'm really hopeful that they will have a resurrection in languages with effects.)
You have a point but so does the OP. The classic example that follows the OP's recommendation is allocation failures. Of course, not everyone is happy that String or Vec will panic on allocation failure and wish allocating methods returned a Result instead. But changing all allocating APIs to returning a Result would make others unhappy given how rarely a program can truly recover from an allocation failure.
BTW, overcommit/oomkiller aside, recovery from allocation failure in Rust can be easy and reliable.
There is an assumption carried over from C that code paths handling OOM errors are untested and likely broken. However, Rust doesn't have manually-written error handling paths like that. It has `Drop` which is always correctly inserted by the compiler, and regularly tested on normal function exits.
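For what it's worth, fallible allocation is available on stable Rust via try_reserve; a minimal sketch (function name hypothetical):

use std::collections::TryReserveError;

// Report OOM as a Result instead of aborting the process.
fn zeroed_buffer(len: usize) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve_exact(len)?; // Err on allocation failure
    buf.resize(len, 0);          // cannot fail: capacity is already reserved
    Ok(buf)
}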
Just to jump in from the peanut gallery: this kind of discussion, about the "right" way to do things in a complicated language with lots of choice, prompting digressions about conventions and mildly-kludgey third party "soft" solutions to what would seem to be generic problem...
...is exactly where C++ was in 1995 when Meyers published Effective C++.
The specifics of the argument notwithstanding, the very existence of this discussion validates the need for this book. And it also says some somewhat awkward things about the direction in which Rust is evolving.
> "the very existence of this discussion validates the need for this book."
Or it could also mean that some folks involved in this discussion haven't yet read the chapter[1] in "The Book" that explains when to panic and when not to[2]?
Not at this level of pontification. Python and C and Java and .NET have achieved much success without this kind of "the community can't decide on the right way to do this very fundamental thing[1]" kind of disconnect, largely by scoping themselves such that the tradeoffs are made internally to the language runtime and not a subject for debate.
And, fine, Rust is more ambitious, just like C++ was. But that too has tradeoffs in language complexity and evolutionary cruft, something that is often cited as an advantage vs. C++. But as we're seeing here, it's not really. It's just that C++ is a few decades farther along on the same curve.
[1] In this case, literally, how to handle a runtime error.
This is absurd. The community has decided. Go look at any popular Rust library. The panicking strategy will be the same in all of them: if a panic actually occurs at runtime, it's considered a bug. But libraries still use unwrap in the same way asserts are used.
I've asked many people to show me real Rust code that does it differently, and I haven't found any takers.
The confusion here is mostly in terminology and in the "peanut gallery."
This has literally been a solved problem since Rust 1.0. I know because I was there and published a blog post on exactly this topic.
The joke is that there is no correct answer, and there "should" be, which is why Rust needs books now. Things always get worse and not better over time in this space.
> The litmus test for panic! vs. Result<T,E> is not rarity of occurrence, it's whether the condition represents a programming bug or a recoverable error.
Because that exact same conceptual separation worked so well for Java with checked vs unchecked exceptions...
Everyone just started using unchecked exceptions because they provide an easier coding experience.
Also, there's legitimate situations where the library cannot make that decision, but rather the user. As someone else says in another example, who are you to say that my request to allocate too much memory is a panicable offense? What if my program can, in fact, recover from that situation?
At least in Java, catching checked vs unchecked exceptions is the same if you do choose to handle them.
> > The litmus test for panic! vs. Result<T,E> is not rarity of occurrence, it's whether the condition represents a programming bug or a recoverable error.
> Because that exact same conceptual separation worked so well for Java with checked vs unchecked exceptions...
> Everyone just started using unchecked exceptions because they provide an easier coding experience.
Indeed. That's a problem with exceptions and checked exceptions and the choices in Java around that, not the idea that operational and programming errors might be handled differently.
> Also, there's legitimate situations where the library cannot make that decision, but rather the user.
What you're describing is that different components may choose to treat these separately. That's fine -- there doesn't need to be an objective answer. You can have one library say "allocating too much memory is not an error we care to deal with and we treat that as a programmer error". And people using that can be told that's a limitation. That's fine. All components have limitations. Another library that implements the same thing can say "we treat allocating too much memory as a recoverable operational error and express it in this way". It's all tradeoffs.
That also doesn't mean _all_ such cases are subjective. Another comment mentioned an index out of bounds in quicksort. That's completely within the control of the implementation and in most contexts there's no reason that should ever be a recoverable error.
> Everyone just started using unchecked exceptions because they provide an easier coding experience.
I think that's true, but I think there's some ambiguity about what sort of "easier coding experience" mattered. I feel like it's often portrayed as "too lazy to write the annotations", but in practice the language to describe exceptions is often too limited to express what we'd want. As a trivial example, we'd like to be able to say "map() can throw anything its argument can throw". That we can't say that means a programmer working with checked exceptions needs to either catch and suppress things they shouldn't be suppressing in that function passed to map(), or build clever (and possibly inefficient) workarounds to extract the exception(s?!?) anyway. All of that has drawbacks enough that the downsides of unchecked exceptions may be the correct choice, not merely the lazy choice.
> Everyone just started using unchecked exceptions because they provide an easier coding experience.
This is because checked exceptions, contrary to Result, are much more limited/annoying e.g. they don't compose well with lambdas, can't be generic, and don't offer those nice functional combinators.
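A small illustration of that composability (the trivial map() case checked exceptions can't express):

use std::num::ParseIntError;

// The closure's error type flows through map() and collect() generically;
// collect() flips an iterator of Results into a Result of a Vec.
fn parse_all(items: &[&str]) -> Result<Vec<i32>, ParseIntError> {
    items.iter().map(|s| s.parse::<i32>()).collect()
}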
In Java and many other languages, the expectations are declared at the callee site, which is a completely wrong place to do it. Only the caller knows whether the situation is expected or not.
Let's say I'm trying to get the element at index 20 of some array-like data structure.
Let's say that returns a `Result<T, E>`.
Let's add another twist: I've already checked that the length is > 30 for other reasons in that same block. Therefore, I know (if borrowing mechanisms ensure the absence of concurrency issues) that getting the 20th element will not cause an error. In this case I can use
my_obj
    .element_at(20)
    .expect("Unexpected missing element at index 20 even though number of elements is > 30")
As the caller, I know something is very wrong if this situation ever occurs, and I can make the choice (it doesn't work for all types of programs) to abort immediately.
edit: see other comments in the thread for a more believable quicksort example, where you are getting the element at `len/2` as pivot.
That was a fairly clear-cut situation. You can also use this in softer situations, e.g. a program that just can't run if its data file (large language model files, for example) can't be found in `$PWD`, because, say, the Docker image is always built to include it and you wrote the program to run only within that image. As the caller, it's up to you to decide whether this situation is expected / recoverable or not. (Yes, you might criticize that rigid choice, but if the choice is correct, so is the behavior of aborting when the condition is not satisfied.)
You can also decide that you, as the immediate caller, don't know whether the error is expected, and "propagate" the error further by making the function return a Result<T, E> and using the `?` operator.
Rust is one of the rare languages that gets this right. (Swift does too [1] - i am not quite aware of others)
Checked exceptions aren’t really compatible with generics. You can’t pass a lambda and have the compiler infer its exception list now applies to the caller.
Aye: in Java, this inevitably leads to strange catch Exception/RuntimeException statements all over the code. In practice, there are very few (or no) reasons to order a program crash. Programming errors are inevitable in a large code base; we learned the hard way in kernel development that the Blue Screen of Death is not the correct outcome for an error.
After writing primarily no-stdlib C for 15 years, I have to say that I find Rust just as ugly and cumbersome as C++ (not debating its safety guarantees). It seems like languages that add sufficiently advanced type/macro systems always spiral into unwieldy beasts where you spend a bunch of your time arguing with the type system and writing blog posts about some new piece of type theory to solve a problem you would never have had with C. People just get greedier about deriving code and wanting more "magic" until every program only makes sense to its author.
I don't think I will ever like kitchen sink languages. Experience has taught me that the most effective tool is the simplest one, which for most use cases today would be Go. For systems programming I just shudder to think how convoluted and hard to read things will become when we take already extremely complex code written in the simplest terms in C and port it to Rust.
> I have to say that I find Rust just as ugly and cumbersome as C++
Ok, so you don't like it aesthetically. Do you have problems writing unmaintainable software in it? What about incorrect software? Is there any specific feature in the language that you object to that will cause confusion and lead to people writing bugs?
C is a very beautiful and "simple" language, people also write a lot of security vulnerabilities with it.
Beauty is a useless concept in a programming language. Most of the time it just reflects someone's bias towards what they are familiar with. A lot of people find Python "beautiful"; I'm unfortunately very familiar with Python, and as I've become more and more familiar with it I find it uglier and uglier.
Same with C, I remember all the many hours I've spent in valgrind debugging problems other people have made for me and I find it ugly too.
When I look at a programming language, I don't think about aesthetic "beauty" or even "simplicity".
I think, does this programming language allow me to represent the concepts I want to represent accurately and correctly?
If it is not memory safe, does not support static typing with algebraic data types, and does not have null safety it does not meet those minimum requirements and is not suitable for use.
Edit: I want to add, it's not just accuracy and correctness that are important. Performance is very important too and many functional languages absolutely flounder because of strict immutability and the adoption of patterns that have terrible memory access patterns.
> Beauty is a useless concept in a programming language.
Maybe not aesthetic beauty, but readability in many contexts matters more than performance. Code that isn't readable hides bugs and can't be maintained (FYI, Rust doesn't stop you from writing bugs into your code). A language like C or Go, where you can fit most of the syntax and mechanisms into your head, is simply easier to read and reason about than a language where you need a PhD in type theory to understand one signature.
> If it is not memory safe, does not support static typing with algebraic data types, and does not have null safety it does not meet those minimum requirements and is not suitable for use.
You'd better stop interacting with technology then, because the vast majority of it is still running something compiled from C. We're talking about control systems, life saving technology, technology that's running in outer space.
> FYI Rust doesn't stop you from writing bugs into your code
You are committing the perfect solution fallacy [0]. Rust won't make your code bug-free, but don't conflate some number of bugs with a reduced number of bugs; a reduction is still a meaningful outcome, otherwise we'd still be writing in assembly.
> We're talking about control systems, life saving technology, technology that's running in outer space.
Indeed, that's why we should use more memory safe languages, because we are dealing with critical systems. Just because they were written in C does not mean that they should continue to be. It's like digging a hole with a stick, and when someone suggests a shovel or backhoe, you mention that all previous holes were made with a stick. There is no relationship between the previous work and what should be done in the future.
> You are committing the perfect solution fallacy.
Rust is also not a perfect solution. You have to prove that the benefits of solving for the bugs Rust can prevent outweigh the downsides of asking a bunch of C experts who are currently developing kernel subsystems in C to stop what they are doing and rewrite millions of lines of C in a language that has very low marketshare in the space and is far, far more complex as an abstraction over assembly. People write systems software in C not because they have a fetish for bugs, but because we do in fact still have to stay close to the hardware and not get lost in a minefield of abstractions.
C is successful despite all the bugs, not because of them. If there are newer methodologies that prevent bugs, one should use them, rather than sticking to a "tried and true way" while suffering through segmentation faults. Already parts of the Linux kernel as well as Windows are being written in Rust, so if it's good enough for them, it should be good enough for those C experts. They also don't have to rewrite millions of lines; they can incrementally improve the codebase.
All this to say that even in your comment above you're still committing the perfect solution fallacy. You are still talking about how Rust is not perfect even though no one said it was, and you mention trying to convince all of those C experts to rewrite millions of lines of code when, again, no one said they have to.
> Already parts of the Linux kernel as well as Windows are being written in Rust, so if it's good enough for them, it should be good enough for those bunch of C experts.
The C experts I was referring to are the subsystem maintainers and none of them are currently working on migrating to Rust AFAIK. So far the work in Linux with Rust has been a few driver rewrites. Keep in mind that there are subsystems like eBPF, io_uring, etc. that are under rapid development and are completely in C. I think if you really believe that kernel code should be written in Rust, you should start submitting patches instead of telling people that they have some sort of moral obligation to use Rust when you aren't the one that has to do any of the work.
You aren't being charitable. Please stop fussing; it's rather immature. People have addressed your points, please actually respond to theirs.
It's not helpful to keep saying "look, there are still some people writing C" and use that as an argument that C has somehow solved the problem whereby C programmers, regardless of how good they are, ultimately write bugs that lead to takeover of the program counter. Rust makes the world better. If you don't want to pay the semantic price of that then fine, but it means you're okay with still writing horrible bugs. Some of us aren't. I promise Rust's semantics are better than C++'s, even though Rust has its rough edges and could be improved.
What would be helpful to your cause is to mention the examples of people starting to add tooling to C to solve problems people are solving with Rust, but without the semantic price of Rust.
> Rust doesn't stop you from writing bugs into your code
Obviously. It's not about Rust specifically. It's about accurately representing state which is extremely difficult to do in programming languages that do not have algebraic data types.
In Golang, how do I represent that a type is either type A or type B, and then in a type-safe way perform logic if it's type A or perform logic if it's type B? Algebraic data types also allow for things like the Result and Option types.
Throw in the borrow checker which prevents a host of aliasing and concurrency bugs (not all). You are more likely to write correct Rust than other languages.
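For comparison, the Rust version of "either type A or type B" (a hypothetical Shape type):

// A sum type handled exhaustively; the compiler rejects a missing arm.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}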
> You'd better stop interacting with technology then, because the vast majority of it is still running something compiled from C.
Thank you for another lesson I already know. I clearly meant I don't think it's suitable for use in new programs. People will still use it regardless as they are free to do so.
> extremely difficult to do in programming languages that do not have algebraic data types.
And this is why people like me avoid ML based languages, type astronauts, and shiny new toys. You're misrepresenting inconveniences as fatal flaws when we've been successfully running all of modern society on kernels written in C for fifty years. Most of the kernel subsystem maintainers aren't even considering Rust as anything more than an experiment at this point. No one cares about ADTs and the perfect representation of state transitions in the type system when we have actual problems to solve, and we can and have been solving them in C.
> successfully running all of modern society on kernels written in C for fifty years.
No we haven't. C is successful despite its major flaws (seg faults based on memory errors are not mere "inconveniences"). Microsoft mentioned for example that around 25-33% of all bugs in Windows were based on memory errors. They are currently cutting those down to 0% via Rust.
> And this is why people like me avoid ML based languages, type astronauts, and shiny new toys.
There's nothing new or shiny about the ML family. And the growing interest in ADT's is about their very real advantages in representing state and solving real problems in a less error prone manner than C
Don’t argue with him. He is just an old man doing what all old men do: finding excuses not to learn new stuff and convincing himself that he didn’t have to.
Yeah I find it really hard to see how they're being charitable. Just an old fart arguing for the continued relevance of curmudgeon-y technology instead of actually responding to the points being argued.
What people who still care about writing C are doing is starting to use tools that bring some of what Rust offers to C, because they acknowledge that avoiding entire categories of bugs is actually really good, whichever language you use. But... no mention of that here, it seems.
C is not what I would call a very readable language. I don’t think you could drop someone who newly learned C into glibc, for example, and have them be effective.
Because of the use (or abuse) of macros, different C codebases can look very different, and to read one you have to figure out all its quirky macros. If people stick to standard C with few macros and verbose variable names, it's really not bad.
It's nearly impossible to actually parse anything more complex than a pointer to a struct in C, because the syntax wasn't written for humans; it's a relic of using a convenient recursive-descent parser that we end up with declarations like `void* (*id)(void*)`...
Are you familiar with the lengths people go to to make C work in the "life saving technology" contexts?
Your argument is essentially: My thinking is clear and correct, I just can't explain to the compiler why it's correct, so just trust me.
People _very_ easily delude themselves into thinking their thinking is clear. When they are forced to spell it out in excruciating detail it often turns out otherwise.
Readability is a function of familiarity though. I have been programming in Rust for > 5 years, I find it to be an incredibly readable language, because I am so familiar with it.
It isn't a question of aesthetics, the primary pragmatic issue with languages that have many features is how hard it can be to read a new code base. Tools like macros and traits can save you a lot of typing but when someone else comes into your code, it is much harder to come to terms with it than simpler languages like C or Go where the set of features can all be held in your head and the actual code is explicitly stated rather than generated by macros and generics and such.
This is a natural tradeoff in language design, do you deal with verbosity and boilerplate because the language has few features, or do you deal with the cognitive overhead of understanding all the features?
I find Rust extremely easy to read. In comparison, I was trying to read a Go codebase and found it hard to read. Of course, I've spent multiple years writing Rust and almost no time writing Go, and I don't know the Go programming language at all. It was very easy to "learn", of course. But I didn't actually learn it, given that when I started reading code bases I was interested in, I started to see weird things like

    var _ litestream.ReplicaClient = (*ReplicaClient)(nil)

which I couldn't find an explanation for using normal means, so I asked ChatGPT and it told me it's an
"interface compliance check" "This line is a way of ensuring that a certain type (*ReplicaClient) implements a certain interface (litestream.ReplicaClient)."
For some reason this pattern was left out of the official documentation. Am I missing other patterns? I find Traits much easier to understand.
Also, like many others I give coding interviews and people writing Go always seem to run into null pointer errors. I'm not making that up. (I have no sample for Rust programmers)
I think there’s this concept in language theory that those speaking or writing strive for more complex concepts as it matches better their thoughts, while those receiving strive for more simplicity as it takes more energy to make sense of complex concepts (think about it as adapting the language to your audience).
But I think the main problem here is the famous debate of statically-typed languages vs dynamic ones. As statically-typed languages add more and more features to be more expressive, they have to work around the compiler limitations and create very bizarre syntaxes if they want to keep performance.
But maybe this is a tooling problem? What if we could see the code in a simpler way even if the actual code involves macros and templates?
Swift has just implemented macros that Xcode expand to show the generated code in a collapse/expand way, making it easy to understand what it does.
This. Being able to expand macros lets you learn what it does in context, and once you're more familiar with the abstraction it gives you the benefit of hiding the details and highlighting the moving parts within an otherwise static structure.
> C or Go where the set of features can all be held in your head and the actual code is explicitly stated rather than generated by macros and generics and such.
The C preprocessor is Turing-complete, and a lot of C code abuses it to implement things a lot more complex than just "macros and generics and such".
Because of that, C with its preprocessor literally has an infinite "set of features", a set almost guaranteed not to fit in anyone's head, both through its own code and its libraries' code. I don't think you can call something with arbitrary Turing-complete preprocessing "understandable".
But when all the features of the language fit in your head, it usually means the code you're reading keeps repeating itself for lack of more convenient features (yes, Go, I'm talking about your error handling).
The needs of our programs are complex, so in the end this complexity has to be somewhere, and it is either in the language itself, or in our code…
> After writing primarily no standard library C for 15 years, I have to say that I find Rust just as ugly and cumbersome as C++ (not debating its safety guarantees).
The safety guarantees are the whole point compared to C. So what’s the point of complaining about how things are cumbersome when you just elide that whole side of the debate?
I could complain about how C is too cumbersome with the proviso that I don’t care about portability or efficiency. But then what would the point be?
The point of higher-level languages is and was productivity. Now some languages have Turing-complete type systems, and I have seen some clever usage of types where even the author might not know how it works once enough time has passed without touching the code base.
There's definitely tradeoffs to some things, and I think in particular Rust's static guarantees can cause additional friction, but in general I've found ML derived languages to be extremely practical because it's a good local maximum on the graph in terms of complexity vs runtime safety (where languages like Java add too much friction without enough benefit compared to something like Python)
I've found in software development the 80/20 rule is extremely true, but when you need something from that last 20%, it can really mess up your architecture trying to work around it. To me this is why others love LISP so much, and I appreciate F#/OCaml for getting me as close to that potential as possible while keeping static analysis reasonable. Clever use of type systems is great for RAD and making sure critical components make certain classes of bugs a compile time error, and it can turn into undecipherable magic, but the additional information and guarantees advanced type systems provide allow me to focus on my immediate problem instead of the interplay with a larger system.
I would describe it as a language where there is more than one strong idiom
For C and Go, there really is only one prevailing idiom on how to write code (for C one could argue that static vs. dynamic allocation are two idioms, and multiple ways of doing concurrency)
For C++, whenever you have the chance to abstract some code into an interface you'll have the choice to create a template or use polymorphism, concurrency can now be done with an event loop, co-routines, threads...
Rust has the same bug/feature where there's more than one idiomatic way of doing things.
“For C…, there really is only one prevailing idiom on how to write code”
You can look at single projects that have changed the way they write C code over time, introducing new idioms all the time. OpenSSL is a perfect example of that in regards to their work to reduce memory-related bugs. C’s idioms have changed so many times over its life that it’s hard to keep track, or even to read code from one era to the next.
I would think you are referring to implementations of the one prevailing idiom as opposed to seeing multiple idioms for the concept. The idiomatic way to represent a growable list in C is to make a list (using some implementation) and put it in a structure with bookkeeping data, then using that list by passing it (or a pointer to it) to plain old functions to use it.
Of course I'm talking about an idiomatic implementation, what else would I be talking about in this context?
> The idiomatic way to represent a growable list in C is to make a list (using some implementation) and put it in a structure with bookkeeping data, then using that list by passing it (or a pointer to it) to plain old functions to use it.
That sounds like a long-winded way to say "there's no one idiomatic growable list in C".
Having more than one way of doing things is good when it's a consequence of a positive, well-curated language evolution. I think that's Rust's case. It doesn't mean that it will become more readable ("beauty"?) in the future; there's a lot of line noise already: <'a>, foo::bar, #[derive(...)], etc.
Or, ehem, JS. I think the JavaScript -> ES6,7,8,9 evolution is a good example of positive change: the language improved immensely at the cost of having even more idioms, TIMTOWTDI, and transpilers like TS. The old parts are now marked as bad practice and optionally linted to hell. You still have to put up with W3schools-level code once in a while though.
Perl and C++ to me would be some bad examples. TIMTOWTDI hampered their evolution way too much. Perl froze for 10 years+. Python was much braver: their folks evolved (from 2 to 3) by breaking compatibility in order to remain true to its Zen.
Zig and Go are examples of rigidity, the one-way-only extreme, which is a pain sometimes but, in Zig's case, maybe a good practice for a young language project. Readability in Zig is excellent and I like the choices with the @ functions, e.g. @as or @ptrToInt(), instead of piling up parens and asterisks. Definitely a case study of readability and evolution for the future to come.
I feel like people have been saying this about Go for a while and I don't see how it's true. The only real idioms out of Go are explicit error handling, types implementing interfaces, and the formatter. Otherwise, you can kinda do whatever you want in Go, and I have yet to feel at home reading a Go codebase.
I don’t think your definition is badly chosen. In fact I think it is a pretty good demarcation between language ideals. However, I would define ‘kitchen sink language’ as one that has a bit of everything in the language itself and has the syntax to ‘support’ those bits of everything. Like a language that has lambdas, functions, methods, type classes, value types, box types, Types of types, macros, lifetimes, templates, multiple looping constructs… the list can go on. I think the real exemplars for ‘kitchen sink’ in this conversation is to differentiate between Go and C++. The amount of concepts in C++ is staggering and Go is in contrast brain dead simple. I think Rust is absolutely on the C++ side of this divide and is primarily what GGP was getting at in their comment.
Immediately put off by an egregious error in the Aggregate Types section in the very first chapter:
> Unlike the bool version, if a library user were to accidentally flip the order of the arguments, the compiler would immediately complain:
error[E0423]: expected function, found macro `print`
--> use-types/src/main.rs:89:9
|
89 | print(Output::BlackAndWhite, Sides::Single);
| ^^^^^ not a function
|
help: use `!` to invoke the macro
|
89 | print!(Output::BlackAndWhite, Sides::Single);
| +
which is a compiler error for calling the wrong function, not swapping the arguments.
Was this book not proof read at all? Is it AI generated? I'm not sure I can trust anything in it any more if something this basic was allowed through.
As a heuristic, you should treat obvious errors in the body of a technical text as a red flag, but don't be so harsh on mismatched examples or diagrams. That stuff is notoriously easy to mess up and doesn't tell you anything useful about the amount of thought or revision that went into the writing of the piece.
A more constructive thing to do than blaring the "FAIL" klaxon at the first such mistake is to just let the author know.
Speaking as someone who writes and puts stuff online, there are probably approximately 1.2 zillion subtle errors.
The best thing you can do is point them out so the author can fix it for posterity.
In a classic publishing environment, sure, edit the shit out of it before launch. But this isn't that environment. The fastest way to get quality, free content out there is to publish something and then have everyone tell you what's wrong, fix it, and move on.
Working together this way, we produce libraries of material for anyone to use.
This is really weird code because `print!` in Rust always takes a format string (similar to C's printf but as a macro), but I don't see one here. So it's not just a single-character typo.
It does indeed need to be proofread and that is annoying. It seems to me like the author just copy-pasted the completely wrong compiler output. Because in that section, he's trying to show how by having two distinct types for the arguments, the incorrect order would lead to a compiler error, as opposed to the case where both args were of type `bool` (and that this helps you avoid hard-to-spot mistakes at the point of invocation)
I agree LaTeX has its uses, but... markdown can also do this trivially. See for example the Rust Book [1], and `cargo test` automatically runs example code in markdown doc comments.
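For instance, a doc comment like this (crate and function names hypothetical) gets its example compiled and run by `cargo test`:

/// Adds one to the input.
///
/// ```
/// assert_eq!(my_crate::add_one(1), 2);
/// ```
pub fn add_one(x: i32) -> i32 {
    x + 1
}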
After glancing through this, I can confidently say this is the “Rust, The Good Parts” I’ve been waiting for. I really appreciate the simple explanation for all “whys”.
Most folks like me that are doing application programming don’t need Rust’s full feature set. You can benefit from type safety/performance/etc even with a subset of the language.
And in doing so, you are greatly enhancing the readability and on-boarding experience for those new to Rust. This realization isn’t new, it was/is widely regarded as a best practice for C++ (readability over conciseness).
> Most folks like me that are doing application programming don’t need Rust’s full feature set. You can benefit from type safety/performance/etc even with a subset of the language.
I do think Rust suffers sometimes from guides putting all the complication upfront. When I first started playing with Rust, lifetimes confused me… so I used Rc<> everywhere I’d need to use a lifetime, and everything worked great. Then, later, I got to grips with how lifetimes actually work.
The vast majority of Rust applications aren’t so performance centric that a little reference counting would kill them. But you rarely see it recommended because it’s not the Rust Way.
The first thing I tell people is ".clone() it, Box it, Arc it. You can make it faster later."
Experienced devs see an Arc<RwLock<T>> and are tickled by the "obvious" performance impact, but then they try to use higher ranked lifetimes before fully internalizing the rules and get frustrated. The Rust Way there is to look at how you can reframe the problem to minimize the pain, because that pain is a symptom of an ownership web that's too complex.
If you use the wrapper types while you learn the language, it makes it easier to focus only on learning the lifetime rules, instead of trying to learn the whole language at once.
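A sketch of that learning-mode style, assuming state shared across threads:

use std::sync::{Arc, RwLock};
use std::thread;

fn main() {
    // Clone the Arc freely; take brief locks for mutation. Optimize later.
    let shared = Arc::new(RwLock::new(Vec::<i32>::new()));
    let handle = Arc::clone(&shared);
    thread::spawn(move || {
        handle.write().unwrap().push(1); // exclusive lock, released at end of statement
    })
    .join()
    .unwrap();
    assert_eq!(*shared.read().unwrap(), vec![1]);
}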
How does software get slow enough that it's one of our favorite things to complain about? There are multiple causes, of course, but I think one of them is death by a thousand cuts, that is, lots of little performance compromises that ostensibly don't matter. I don't always do everything in the optimal way, but I appreciate that Rust at least gives us the possibility of having safety without compromising speed, and I think it's good to push harder in that direction.
Here’s another way to frame the original argument:
A less performant solution in a subset of Rust will still be miles ahead of optimized Python or Ruby, while being as readable. An equivalent C++ solution on the other hand could be less readable and unsafe.
Also, limited use of borrows is not that hard to grasp. Mutable references however make my head hurt. More so when they’re used by someone who doesn’t understand their purpose well.
One mutable reference passed down a chain of functions is okay (like a shared context/state object/db connection/etc), but more than a few make my head hurt.
You can still make logical mistakes with shared context, so in general - more pure functions = easier to spot logical mistakes (as most mutations will be concentrated in one place). And this is a general rule applicable to all programming languages.
I don't think we disagree here. I agree that not using Rc<> is the optimal position but I find myself wondering how many Rust beginners got tangled up in lifetimes, gave up and went back to a garbage collected language. I think it's fine to make an on-ramp that's a little shallower.
I'd suggest using `.clone()` over `Rc` since it'll be easier. Rc is an optimization to make `.clone()` cheap, but the downside is it makes mutability more complex.
Your statements seem contradictory. "bajillions of layers of bloat" sounds like exactly the situation of "many micro-decisions" that degrade performance.
That's probably a side effect of the borrow checker and its lifetimes being the interesting new thing which is mostly unique to Rust. The counted references Rc<> and Arc<> are boring, they exist in plenty of other languages (for instance, std::shared_ptr in C++). It's the same thing with Haskell and monads: every Haskell guide is going to focus on monads.
Oh, I agree. But the result is that I often see people say "don't use Rust for general purpose programming, it's too complicated" and in my experience it's actually a great general purpose programming language. You just have to reach for the tools given to you rather than push through the hard way.
+1, this is my understanding as well regarding Rc. I can’t recall where I read it, but using Rc is advised against for “not being the Rust way”. Do you have any references in docs or code with Rc usage that I can read?
There seems to be a heavy emphasis on explaining high level Rust concepts using parallels in C++ which overall leads to a weaker explanation for most items. Perhaps this should be titled: Effective Rust coming from C++.
This is what it was like to read any Kotlin book for the past few years. The general assumption was, "we all know Java, and we're all using IntelliJ... so this is analogous to....". It's useful for Java developers of course. It's not as useful to those that were new to Android development.
I believe the reasoning here is that it's often easier to teach by analogy. Rust is fairly difficult to approach as, "My First Programming Language". On the other hand, C had to do exactly this back in the day. Every book had to explain the stack and the heap so that pointers would make sense.
> I believe the reasoning here is that it's often easier to teach by analogy.
The problem is less the analogy and more the analogy to C++. Which is fine if you’re targeting C++ devs, and C++-oriented Rust guides are certainly in demand (I’ve seen requests for them several times on /r/rust). It’s more problematic if you’re billing the guide as more general.
IMO (and this is certainly not shared by everyone), Rust is best understood as being a better C++. That’s it. It’s the best language for when you want to use a mainstream language to write large systems in native code and can’t tolerate a garbage collector.
Trying to shoehorn Rust into every niche is making the language lose focus.
Saying it's a "better" C++ is not very accurate. It's a very different beast IMO, and not suitable for all applications. There are areas where it shines, and areas where C++ would be a better choice.
If you want to call it 'C++-like' that's fine, there are similarities.
It’s hard for me to imagine starting a new project in C++ today instead of Rust, unless I needed interoperability with the pre-existing C++ ecosystem. What are some examples of areas where C++ is better?
There are other reasons, but having access to the mountain of existing C++ libraries (and C libraries) is no small thing.
The sheer number of hours that can be saved by using existing, known, well-understood, well-developed libraries and tools is enormous.
It will be another decade or two before Rust hits that point, if it does at all. It just takes time.
Also: Rust is a bit lower-level/aimed more at systems programming and forces thinking more about memory and structure than C++ does. That has advantages but isn't always great for higher-level applications. I can whip out a basic GUI app using Qt, but that would take ages in Rust.
> There are other reasons, but having access to the mountain of existing C++ libraries (and C libraries) is no small thing.
That's true. But a lot of code has a shelf life.
People used to point to CPAN as a huge benefit for using Perl. Most don't any more; while all that code is still out there, a lot of it doesn't really do much that's useful today. It was built for another time.
Some of the existing C/C++ code will rot in the same way.
I think code that is continually edited and expanded over many years, with lots of changes to requirements, if managed poorly, can have a shelf life.
But there's plenty of older code that will continue to work just fine essentially forever.
I use plenty of older code that hasn't been touched and is fine. I also use libraries that have been continually improved and have been fine. Qt isn't perfect, but it's been around since '95 and I think that the modern state of it is pretty great overall.
I agree with your general point, but its use case extends beyond that specific case in a few areas. Eg, embedded, where the analogy fails. Rust is a great fit for embedded. More generally, it's a fit for any case where you need fast code, low-level code, bare-metal code, or standalone executable.
The author dedicates a small portion of each section to highlighting analogous concepts in C++, but I don't think C++ -> Rust programmers are the only audience that benefit from this book.
Thanks! I've been writing quite a lot of Rust lately but have been mostly siloed on my own, so I'll definitely read through and see how wildly un-idiomatic my code is.
Even just scanning through it I saw `Avoid matching Option and Result` and the `if let` stuff that I have NOT been using and it looks much more concise already.
Pedantic clippy will suggest those changes, amongst others. I wouldn't recommend running pedantic clippy in CI, but it is useful to run it against your app occasionally.
Occasionally matching Option or Result is actually the most readable way to express some logic. But it's definitely not the first thing you should reach for.
When I first started using Rust I did make the mistake of matching Option and Result everywhere and when I learned better I did cut out quite a bit of verbosity and unnecessary nesting by going back and refactoring those in code I had written. But that doesn't mean you should never use match with them. I thought I should share this experience for anyone new to the language reading this.
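For anyone new to this, a small before/after sketch:

fn main() {
    let maybe = Some(5);

    // Full match, verbose:
    let doubled = match maybe {
        Some(n) => Some(n * 2),
        None => None,
    };

    // Usually shorter with a combinator:
    assert_eq!(doubled, maybe.map(|n| n * 2));

    // Or `if let` when only one arm matters:
    if let Some(n) = maybe {
        println!("{}", n * 2);
    }
}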
If you construct a Vec with capacity 0 via Vec::new, vec![], Vec::with_capacity(0), or by calling shrink_to_fit on an empty Vec, it will not allocate memory.
So an empty vec![] is just a struct on the stack; very cheap to make, and easy for the compiler to optimize out if it can see that it's not used in some paths.
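You can check this yourself:

fn main() {
    let v: Vec<u8> = Vec::new();
    assert_eq!(v.capacity(), 0); // no heap allocation until the first push
}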
> Needlessly allocates a `Vec` if `self.payload` is `Some`.
Empty vecs don’t require a heap allocation, so your original code is actually fine. In release mode it should compile to exactly the same instructions as your last example.
No, because it requires an &Vec<_>, and that doesn't implement Default, for good reason. Just ask yourself: where would the default empty vec live so that you could create a reference to it?
When using unwrap_or(&vec![]), it lives in the enclosing stack. Without the reference, you could use unwrap_or_default().
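A sketch of the two variants (field and function names hypothetical):

// Borrowing: the empty vec must live somewhere in the enclosing scope.
fn count_borrowed(payload: &Option<Vec<u8>>) -> usize {
    let empty = Vec::new();
    payload.as_ref().unwrap_or(&empty).len()
}

// Owning: unwrap_or_default() needs no reference at all.
fn count_owned(payload: Option<Vec<u8>>) -> usize {
    payload.unwrap_or_default().len()
}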
I lack the Rust experience to comment on your suggestion, but it does make me wonder if the author has provided a means for feedback in order to make this "More Effective Rust". (No pun intended if you're familiar with Meyers' follow-up book.)
> When it encounters the question mark operator (?), the compiler will automatically apply any relevant From trait implementations that are needed to reach the destination error return type.
I thought that `?` only unwrapped `Option` and `Result` and was not overloadable. Is [the Rust doc on `?`][1] leaving something important out, or is this a mis-statement?
So whether it’s important to mention depends mostly on how important you think try_fold and try_for_each are, and whether you’d miss the information if you ever needed them, given that it’s clearly mentioned in the ControlFlow docs.
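For reference, a small sketch of the try_fold/ControlFlow combination being alluded to (first_over is a made-up example):

    use std::ops::ControlFlow;

    // try_fold short-circuits as soon as the closure returns Break.
    fn first_over(limit: i32, xs: &[i32]) -> ControlFlow<i32> {
        xs.iter().try_fold((), |(), &x| {
            if x > limit {
                ControlFlow::Break(x)
            } else {
                ControlFlow::Continue(())
            }
        })
    }

    // first_over(3, &[1, 2, 5, 7]) == ControlFlow::Break(5)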
> I thought that `?` only unwrapped `Option` and `Result` and was not overloadable.
This is still true, although the experimental Try trait in nightly does provide this. `From` is applied to the error type of the returned Result, i.e. Result<T, String> => Result<T, MyError>.
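A minimal sketch of that From conversion in action (MyError and parse_i32 are illustrative names):

    #[derive(Debug)]
    struct MyError(String);

    // This From impl is what `?` invokes to convert the error type.
    impl From<std::num::ParseIntError> for MyError {
        fn from(e: std::num::ParseIntError) -> Self {
            MyError(e.to_string())
        }
    }

    fn parse_i32(s: &str) -> Result<i32, MyError> {
        // s.parse() yields Result<i32, ParseIntError>; on the Err path,
        // `?` calls MyError::from before returning early.
        let n: i32 = s.parse()?;
        Ok(n)
    }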
I was never much impressed with that article (given its title, I say—given its title) from the first time I saw it, probably around 2010, and it has aged poorly. Some of my complaints about it:
• It tells a verbose story, rather than just telling you what you need to know succinctly. (Seriously, the 3,600-word article could be condensed to under a thousand words with no meaningful loss—and the parts that I consider useful could be better expressed in well under 500 words. As a reminder of where I’m coming from in this criticism: the title of the article is “The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)”, so I expect it to be, y’know, absolute-minimumy.)
• It spends way too much time on history that even in 2003 wasn’t particularly relevant to a lot of people (though yeah, if you were working in the Windows ecosystems he was working in, you would benefit from knowing about some of it), and which is now (twenty years later) completely irrelevant to almost everyone.
• It portrays UCS-2 and UTF-16 as equivalent, which is disastrously wrong. (As in, “if you do that, you stand a good chance of destroying anything beyond U+FFFF”.)
• As far as I can tell (I was a young child at the time), its chronology around UTF-8 and UCS-2/UTF-16 is… well, dubious at best, playing fast and loose with ordering and causality.
• Really, the whole article looks at things from roughly the Windows API perspective, which, although what you’d expect from him (as a developer living in Microsoft ecosystems), just isn’t very balanced or relevant any more, since by now UTF-8 has more or less won.
• It doesn’t talk about the different ways of looking at Unicode text: code units (mostly important because of the UTF-16 mess), scalar values, extended grapheme clusters, that kind of thing. A brief bit on the interactions with font shaping would also be rather useful; a little bidirectional text, a little Indic script, these days even a bit of emoji composition. These days especially, all of this stuff is far more useful than ancient/pre-Unicode history.
• The stuff about browsers was right at the time, but that’s been fixed for I think over a decade now (browser makers agreeing on the precise algorithms to use in sniffing). (He’s absolutely right that Postel’s Law was a terrible idea.)
I jumped in and found a section about the borrow checker. At the end, this book explicitly recommends using smart pointers (Rc and RefCell) to build your internal data structures and pass them around, instead of writing convoluted ones and fighting the borrow checker.
I have never seriously coded in Rust, but this is what I observed as well. I mean, how much overhead does Rc have compared to the real borrow checker, coming from the Java or JS world? What are your opinions, as pro Rust devs, about the use of Rc in the real world?
If you're using Rc you probably mean Arc. And most of the time that you mean Arc you mean Box.
If you're using RefCell you probably have a bad data model. The use case is interior mutability. This turns out not to be that useful in practice if you've designed your data structures well. Occasionally you need it in the design of some internals, but it's rare (and rare to expose).
If you're fighting the borrow checker you're not writing idiomatic Rust.
> I mean, how much overhead does Rc have compared to the real borrow checker, coming from the Java or JS world?
Negligible, but the real question is "why do you need Rc in the first place?"
There are certainly areas where idiomatic Rust needs to fight against the borrow checker. If that wasn't the case, things like `ouroboros` and `pin_project` wouldn't be as widely used as they are.
Day-to-day Rust users shouldn't care too much about pin_project, since the big use case for Pin is hand-rolled futures and executors. Most folks just use adaptors from the futures crate and one of a handful of executors in the ecosystem.
I've never actually heard of that other crate. It looks like most of its downloads come from its direct/indirect dependents like glium. I'd say if you're writing self-referential data structures you're not writing idiomatic Rust.
Got to disagree here: a typical use case, based on my experience, is a parser holding a &str as well as an AST that makes zero-copy references into the string.
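A sketch of the shape in question, with made-up names. The lifetime-parameterized version is expressible; it's the variant where one struct owns both the source String and the AST borrowing from it that plain lifetimes can't handle, which is the gap crates like ouroboros paper over:

    // The AST borrows slices of the input; no self-reference is needed
    // as long as the source outlives the AST.
    struct Ast<'src> {
        tokens: Vec<&'src str>,
    }

    fn parse(source: &str) -> Ast<'_> {
        Ast { tokens: source.split_whitespace().collect() }
    }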
I'm like a "medium at Rust" person and I very rarely use Box or RefCell or Rc. I wonder what I'm missing due to not really understanding them/when to use them. I don't really use lifetimes either. Maybe I'm not that "medium"...
My personal opinion on these is, it's probably good you're not using them all over: use only when needed, and be clear if you really need them.
Basically:
Box is used rather frequently; it basically corresponds to unique_ptr in C++, and is basically a way of a) having something be a passable heap-allocated thing and b) holding dyn implementations of a trait. Excessive instances of Box<dyn ...> in your code are often a sign or code smell that you're bringing patterns from object-oriented languages (polymorphism via traits) into what's really a non-OO language; there's a sketch of the contrast after this rundown. See also the Effective Rust entry about preferring generics over trait objects. Note that at one point Box actually had its own special syntax (the ~ sigil); this was dropped. I am not a Rust core expert, but to me this feels like maybe a mistake...?
Rc is sort of the "out" for when the borrow checker is giving too many problems for a given access pattern and you want to share a piece of state more conveniently within one thread. It's not used very often, you'll see more uses of Arc.
Arc is generally used for shared state across threads. It can be combined with Mutex to create safe shared mutable state.
RefCell's use is basically pushing borrow checking to runtime instead of compile time. Its main use is the so-called "interior mutability pattern": https://doc.rust-lang.org/book/ch15-05-interior-mutability.h... -- basically allowing a mutable borrow in an immutable context (e.g. through an immutable self reference). So really, you shouldn't be reaching for this often. This is more something that data structure library designers might use in a "trust me, I know what I'm doing" way, or for, say, incrementing a usage counter or something that you know isn't really mutating meaningful state and that you want to do behind an immutable reference. Similar to some uses of const_cast in C++, I guess.
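To make the Box<dyn Trait> vs. generics contrast above concrete, a minimal sketch (the trait and function names are made up):

    trait Draw {
        fn draw(&self);
    }

    // Dynamic dispatch: each item is boxed and called through a vtable.
    fn render_all(items: &[Box<dyn Draw>]) {
        for item in items {
            item.draw();
        }
    }

    // Static dispatch: monomorphized per concrete type, no Box required.
    fn render<T: Draw>(item: &T) {
        item.draw();
    }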
You only need the Arc<Mutex if you have multiple threads touching something. And don't worry, the compiler will let you know you gone-messed-up before you get there by angrily telling you about your lack of Send and Sync all over.
Honestly, if you are in a multithreaded world, you are going to make your life a lot easier by reducing your shared mutable state and instead using channels (crossbeam or mpsc) to communicate and coordinate. This will reduce your need to toss things in Arc<Mutex, and in fact reduce a lot of borrow checker issues generally.
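A minimal sketch of that channel style using std's mpsc (the workload is made up):

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel();

        // One worker owns the mutable state; everyone else sends messages.
        let worker = thread::spawn(move || {
            let mut total = 0u64;
            for n in rx {
                total += n;
            }
            total
        });

        for i in 1..=10u64 {
            tx.send(i).unwrap();
        }
        drop(tx); // closing the channel ends the worker's receive loop

        assert_eq!(worker.join().unwrap(), 55);
    }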
Rc for if you have some piece of data shared between (non-concurrent) components, that is maybe hard to manage with borrowing? I personally would try to fix this rather than rely on Rc, but there are legit cases.
RefCell for if you want to manage the borrowing at runtime instead of at compile time. Basically, it will panic at runtime if a mutable borrow overlaps with any other borrow.
And it allows you to borrow something mutably (borrow_mut()) even if you only have an immutable reference to the surrounding thing. People use this for the "interior mutability pattern". For example: I have an internal counter I want to increment every time you call this function, and that's a mutation, but the surrounding ref to the struct is immutable and I want to keep it that way. I could keep the counter in a RefCell and then borrow it mutably as a RefMut, even though the context I'm doing it in is immutable. (A sketch of exactly this follows below.)
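A minimal sketch of that usage-counter case (Service is a made-up name; for a plain counter Cell<u64> would also do, but RefCell is the general tool):

    use std::cell::RefCell;

    struct Service {
        calls: RefCell<u64>,
    }

    impl Service {
        // Takes &self, yet the call count still gets updated.
        fn handle(&self) {
            // borrow_mut() panics if another borrow is active: it's the
            // compile-time check, moved to runtime.
            *self.calls.borrow_mut() += 1;
            // ... actual work ...
        }
    }

    fn main() {
        let s = Service { calls: RefCell::new(0) };
        s.handle();
        s.handle();
        assert_eq!(*s.calls.borrow(), 2);
    }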
You use `Rc` when a value can have more than one owner. You use `RefCell` when you need mutable access that the borrow checker can't verify (aka interior mutability: the borrows never actually overlap, but that can't be proven at compile time, so the check moves to runtime).
Does anyone know how to convert these e-book web templates to EPUB? There are a heck of a lot of Rust books with these templates and it would be great to turn them into EPUBs. There are a couple of FF add-ons for this but they are just meh.
I disagree with many of these: I've enough experience writing, reading, and maintaining Rust code to know that some of these sound good in theory, but are harmful in practice.
It's nice to use the ? operator, but if you also want to include logging/metrics, I really recommend using `match` to make it clear what each branch does.
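Concretely, a hedged sketch of what that looks like, assuming the `log` crate's macros (the function is made up):

    fn load_config(path: &str) -> Result<String, std::io::Error> {
        match std::fs::read_to_string(path) {
            Ok(contents) => {
                log::debug!("loaded config from {path}");
                Ok(contents)
            }
            Err(e) => {
                log::warn!("failed to read {path}: {e}");
                Err(e)
            }
        }
    }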
> Embrace the newtype pattern
A nice pattern for certain use cases, but often abused by programmers. Out of the box, a newtype is less useful than the type it wraps, because it has none of the underlying type's methods. This results either in a ton of boilerplate code that just does `self.0.underlying_method()` (sketched below), or in programmers making the wrapped value public, which defeats the purpose.
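The boilerplate in question, sketched with made-up names:

    struct UserId(String);

    impl UserId {
        // Every method you want from the wrapped String has to be
        // re-exposed by hand.
        fn as_str(&self) -> &str {
            self.0.as_str()
        }
        fn len(&self) -> usize {
            self.0.len()
        }
    }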
"Embrace" to me means "use it when possible", but my experience says that it should be like spice, a little bit can improve the dish, but too much can ruin it.
> Consider using iterator transforms instead of explicit loops
Similar comment to the point about avoiding matches for Option and Result. For simple use cases, a .map() or .filter() can be good and easy to read, but I've also seen monstrosities with .skip(), .filter_map(), .chunks(), etc. that become a lot harder to decipher than the equivalent for-loop. We also tend to try and write those in an FP style with no side effects, and sometimes that can discourage people from including logs and metrics in places where they would be useful because they would tarnish the purity of the code.
What's more, in Rust closures are more finicky than in GC'ed languages like OCaml or F#: some iterator methods take their closure parameters by ownership, others borrow, and it's very often an unnecessary mental burden.
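Roughly where the line sits for me, with made-up examples:

    fn main() {
        // This reads fine...
        let evens: Vec<i32> = (1..=10).filter(|n| n % 2 == 0).collect();

        // ...but this is already harder to follow than the equivalent
        // for-loop, and real-world chains get much worse.
        let picked: Vec<i32> = (1..=100)
            .skip(3)
            .filter_map(|n| if n % 7 == 0 { Some(n * 2) } else { None })
            .take(5)
            .collect();

        assert_eq!(evens.len(), 5);
        assert_eq!(picked.len(), 5);
    }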
> Don't panic
Good idea in general, but panicking is a great tool when the program is in a state that was not predicted by the programmer or cannot be recovered from. "In general, avoid panicking" would be better advice.
> Minimize visibility
It's been accepted since the advent of OOP that we should aim to minimize the visibility of the data in our structures. Every Java IDE defaults to making fields private and auto-generating getters and setters. But more often than not, when programmers hide all the internal details, they limit what other programmers can do. Yes, other programmers could use the API wrong and create bugs in their programs, but they could also use it in ways that the original programmer could not have anticipated and make something amazing out of it.
And when other programmers complain that they are limited in what they can do, the answer is always "we'll add more stuff (traits, plugins, closure parameters, etc.) to help you do more", increasing the complexity of the code.
> Listen to Clippy
Some of the warnings from clippy are good (e.g., when it reminds you to use .unwrap_or_else() instead of .unwrap_or()), but a lot of the lints are opinions made into code, and people view you very suspiciously if you disable any of them, which drives the general style of Rust toward the style of the people who successfully argued for their opinions to be part of clippy.
For example, I like putting return statements in my code; yes clippy, I know that the value of a block is the value of its last expression and thus it's unnecessary to put the return, but I want to make it more explicit that we are exiting the function here, why are you complaining about that?
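For what it's worth, the lint in question is clippy::needless_return, and it can be allowed where you disagree with it:

    // Silence the lint for one function (or use #![allow(...)] crate-wide).
    #[allow(clippy::needless_return)]
    fn answer() -> i32 {
        return 42;
    }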
> Good idea in general, but panicking is a great tool when the program is in a state that was not predicted by the programmer or cannot be recovered from. "In general, avoid panicking" would be better advice.
The disclaimer at the top of the item agrees:
> The title of this Item would be more accurately described as: prefer returning a Result to using panic! (but don't panic is much catchier).
Later on:
> If an error situation should only occur because (say) internal data is corrupted, rather than as a result of invalid inputs, then triggering a panic! is legitimate.
Most of your criticism boils down to "general idea is fine, but there are important special cases". Which is expected, to be honest. It's difficult to give general guidance that's correct for everyone.
> A nice pattern for certain use cases, but often abused by programmers. Out of the box, a newtype is less useful than the type it wraps, because it has none of the underlying type's methods. This results in either having a ton of boiler-plate code that just does `self.0.underlying_method()` or in programmers making the wrapped value public, which defeats the purpose.
e.g. https://www.lurklurk.org/effective-rust/panic.html
The litmus test for panic! vs. Result<T,E> is not rarity of occurrence, it's whether the condition represents a programming bug or a recoverable error. A good treatise on this topic here: https://joeduffyblog.com/2016/02/07/the-error-model/#bugs-ar...
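A minimal sketch of that litmus test (both functions are made up):

    // Invalid input is a recoverable error: return a Result.
    fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
        s.parse()
    }

    // A violated internal invariant is a bug: if this fires, the program
    // itself is wrong, and panicking is legitimate.
    fn midpoint(sorted: &[i32]) -> i32 {
        assert!(!sorted.is_empty(), "invariant violated: empty slice");
        sorted[sorted.len() / 2]
    }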