Rust 1.46 (rust-lang.org)
391 points by pietroalbini on Aug 27, 2020 | 150 comments



The most exciting component of this release is the const fn improvements. With loops and if available, you can now do non-trivial computation in const fn for the first time. It reduces the gap between Rust's const fn and C++'s constexpr to a large degree. Ultimately, the miri execution engine that this feature builds upon supports a much larger set of features than even constexpr supports. It's been a project spanning years to get it merged into the compiler, used for const fn evaluation, and to stabilize its features (this part is far from over).
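For a flavor of what this enables, here's a minimal sketch (the function and numbers are just for illustration) that uses `while` and `if` inside a const fn and is evaluated entirely at compile time:

    // `while` and `if` in a const fn are stable as of 1.46.
    const fn collatz_steps(start: u64) -> u64 {
        let mut n = start;
        let mut steps = 0;
        while n != 1 {
            if n % 2 == 0 {
                n /= 2;
            } else {
                n = 3 * n + 1;
            }
            steps += 1;
        }
        steps
    }

    // Computed by the compiler, not at runtime.
    const STEPS: u64 = collatz_steps(27);

    fn main() {
        println!("{}", STEPS);
    }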

In addition to the linked examples, I have some code of my own which is made simpler due to this feature: https://github.com/RustAudio/ogg/commit/b79d65dced32342a5f93...

Previously, the table was present as a C-style array literal; now I can remove it once I decide to require the 1.46 compiler or later versions for the library.

Link to the old/current generation code: https://github.com/RustAudio/ogg/blob/master/examples/crc32-...


Dlang has had CTFE for a long time. The engine just needs to be rewritten to reduce memory usage. https://tour.dlang.org/tour/en/gems/compile-time-function-ev...


> supports a much larger set of features than even constexpr supports

This sounds promising. Can you give examples? I don't know Rust at all, and the reason I like C++ is its metaprogrammability.


Basically, the interpreter interprets rustc's internal IR, so it can theoretically support the entire language. That's not a good idea for various reasons, though, so its capabilities are effectively on an allowlist that we expand over time as we're sure we want to enable a given feature.


That seems like a really good design, as opposed to having an AST interpreter like I might do otherwise. But would this indeed support a "much larger set of features" than constexpr as was claimed?


I think that the earlier const evaluator was an AST interpreter? It's been a long time and I don't work on this code, so I'm not 100% sure.

I don't know constexpr well enough to comment on that claim.


It has been a while, and I’m remembering from a discussion of new features in C++20, but I recall that constexpr isn’t capable of fully supporting memory allocations made during evaluation, while Rust’s const fn will eventually be able to.


I think some clarification is warranted here, because Miri is intended to be used for more than just constant evaluation. In particular, Miri wants to be able to dynamically verify `unsafe` code in order to detect undefined behavior, which means that it will eventually want to support the entire Rust language. However, just because Miri supports any given Rust feature does not necessarily mean that that feature will be made available for constant evaluation; consider features that are non-deterministic (like RNG) or architecture-dependent/nonportable (like floating-point operations), and how const-evaluating such things could potentially cause memory-unsafety to occur when runtime output and const output differ.


I believe in a talk[1] it was mentioned that Rust's const eval will support heap-allocated values (accessed as references). A quick search suggests that C++20 will also support this, although it may be safer in Rust as it can give a stronger guarantee that the static memory won't be written to.

[1]https://youtu.be/wkXNm_qo8aY?t=888


I wonder why && and || are not allowed in const functions?

> All boolean operators except for && and || which are banned since they are short-circuiting.

I guess I'm missing something obvious but why does the short circuiting break const-ness?


That was a restriction from before 1.46 stabilized control flow in const functions. Now that we have worked out the details around `if`, we can also stabilize `&&` and `||`.

(I'm a little surprised they weren't stabilized at the same time! Edit: they were! I just didn't look closely enough.)


> (I'm a little surprised they weren't stabilized at the same time!)

They were stabilized at the same time, see the release announcement.

https://blog.rust-lang.org/2020/08/27/Rust-1.46.0.html


Short circuiting introduces conditional branching. If you call a function on the right hand side of a || or && it might or might not be executed depending on the value of the left hand side.

Until this version of rust, all conditional branches were banned from const functions.

I guess to keep things simple they just banned any feature that might cause branching.


Why is branching a problem? Since `if` is now enabled in a const fn, it's trivial to rewrite && with & and if.
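For illustration, a tiny sketch of that equivalence (both of these compile on 1.46):

    // `a && b` is conceptually just sugar for this, so once `if` is allowed...
    const fn and_via_if(a: bool, b: bool) -> bool {
        if a { b } else { false }
    }

    // ...there's no reason left to ban `&&` itself, and 1.46 indeed allows it.
    const fn and_direct(a: bool, b: bool) -> bool {
        a && b
    }

    const A: bool = and_via_if(true, false);
    const B: bool = and_direct(true, false);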


`if` wasn't enabled before, so `&&` wasn't enabled. Now that `if` is enabled, `&&` now works too.


Ahh, that makes a lot of sense. If you're going to have the compiler insert the result of a function, having conditional branching seems a bit gnarly, I guess?


For the history of this feature request, see https://github.com/rust-lang/rust/issues/29608 as a starting point.


It's conditional based on what's ultimately constant data, so you end up with predictable output regardless.


That's a weird restriction, disallowing logical operators. AFAIK C++ allows this for constexpr functions - as long as it can be evaluated at compile time.


I’m learning rust right now and there is a lot to like. Steady updates like this are also very motivating. The ecosystem feels very sane - especially compared to npm. Top notch Wasm support, cross compiling is a breeze.

That said, coming from a FP background (mostly Haskell/JS, now TS) Rust is... hard. I do understand the basic rules of the borrow checker, I do conceptually understand lifetimes, but actually using them is tricky.

Especially in a combinator world with lots of higher order functions/closures it's often completely unclear who should own what. It often feels like my library/DSL code needs to make ownership decisions that actually depend on the usage.

Anyways, I guess this gets easier over time, right? Should I avoid using closures all over the place? Should my code look more like C and less like Haskell?

[edit] great answers all, providing useful context, thanks


> Anyways, I guess this gets easier over time, right?

Yes.

> Should I avoid using closures all over the place?

Not necessarily.

> Should my code look more like C and less like Haskell?

Yes. Others sometimes don't like to hear this, but IMO, Rust is not at all functional. Passing functions around is not ergonomic (how many function types does Rust have again? Three?). Even making heavy use of Traits, especially generic ones, is difficult.

Rust is very much procedural. Java-style OOP doesn't work because of the borrowing/ownership. And FP style function composition doesn't work without Boxing everything. But then you'd need to be careful about reference cycles.


> how many function types does Rust have again? Three?).

Depending on what you meant, there are more than three:

  * There are 3 traits, used by closures depending on their needs:
    * Fn(Args) -> Output
    * FnMut(Args) -> Output
    * FnOnce(Args) -> Output
  * *Every* `fn` is its own type (`fn() {foo}`)
  * Function pointers (`fn()`), which is how you pass the above around in practice
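To make the distinction concrete, a minimal sketch (the helper functions and values are made up):

    fn call_fn(f: impl Fn() -> u32) -> u32 { f() }
    fn call_fn_mut(mut f: impl FnMut() -> u32) -> u32 { f() }
    fn call_fn_once(f: impl FnOnce() -> String) -> String { f() }

    fn double(n: u32) -> u32 { n * 2 }

    fn main() {
        let x = 1u32;
        let mut count = 0u32;
        let s = String::from("hi");

        // Only reads `x` -> implements Fn (and FnMut, FnOnce).
        println!("{}", call_fn(|| x + 1));
        // Mutates `count` -> FnMut (and FnOnce), but not Fn.
        println!("{}", call_fn_mut(|| { count += 1; count }));
        // Gives away `s` when called -> FnOnce only.
        println!("{}", call_fn_once(move || s));

        // A plain `fn` item coerces to a function pointer.
        let fp: fn(u32) -> u32 = double;
        println!("{}", fp(21));
    }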
> Rust is very much procedural.

I think this is like saying Python is very much procedural: true, but loses some nuance. Rust has some attributes of OOP, some attributes of FP. Some constructs from OOP and FP are made harder once you involve borrowing. Saying it is procedural conjures images of Pascal and K&R C in people's minds. To bolster your argument, though, I mostly use method chaining for iterators but every now and then I need to turn it into a `for` loop to keep the lifetimes understandable for the compiler, myself and others.


I realized after I wrote the comment that I was really referring to the closure traits when I said that. And I really should have said "kind" instead of "type" because, like you said, every different function has its own type.

But anyway, I don't really disagree with your point about categorizing languages as OOP, Procedural, or Functional.

But honestly, in this case, I think it's pretty damn clear that Rust is procedural WAY more than it's either OOP or FP. (Note: By OOP, I mean Java-style with tall ownership hierarchies and object-managed mutable state, not necessarily caring about inheritance. And definitely not referring to Alan-Kay-Style-OOP a la Lisp and Smalltalk).

Scala can be looked at as FP and/or OOP. C++ can be looked at as Proc and/or OOP. Python, IIRC, can kind of do all of them, but I don't remember it being easy to make copies/clones in Python, so FP is questionable.

Have you ever tried to write two versions of a complex async function in Rust? One with async and one with Futures combinators? Due to ownership, the Futures combinators approach very quickly devolves into a nightmare. The language doesn't "want" you to do that.

What about function composition? Very awkward to do with matching up the different Fn traits.

And deeply nested object hierarchies are a no-go, too, because of the inability to do "partial borrows" of just a single field of a struct.

I mean, yes, it's not C because it has a real type system and generics. But... it's pretty much C in that you just write functions and procedures that operate on structs.

EDIT: Perhaps my "hardline" approach on calling Rust procedural is in response to people who have come to Rust from non-FP languages, see `map`, `filter`, and `Option` and start calling Rust functional. That's not functional programming! Ask someone who does OCaml to try out Rust and see if they call Rust functional afterwards.


>And deeply nested object hierarchies are a no-go, too, because of the inability to do "partial borrows" of just a single field of a struct.

I'm not totally sure if this is what you mean, but FYI you can borrow multiple fields mutably using destructuring:

    struct Foo(u8, u8);
    
    fn main() {
        let mut bar = Foo(1, 2);
        let Foo(ref mut x, ref mut y) = bar;
        *x += 1;
        *y -= 1;
        println!("{} {}", bar.x, bar.y);
    }


The problem comes when someone else needs to also borrow the Foo, even immutably. In Java-style OOP, you typically have "objects" that own other objects, all the way down. And you manage state internally.

So it often comes up that you might call several methods in a given scope. If even one of those mutably borrows one field of one sub-object, then you can't have any other borrows of that object anywhere else in that scope.

Newbies from other languages trip on that often enough that I used to see questions about it in r/rust fairly frequently.
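A minimal repro of the kind of conflict they hit (names made up; this intentionally does not compile):

    struct Inner { x: u8 }
    struct Outer { inner: Inner, name: String }

    impl Outer {
        // Mutates only one field of one sub-object...
        fn bump(&mut self) { self.inner.x += 1; }
        // ...but any `&self` method borrows *all* of `Outer`.
        fn name(&self) -> &str { &self.name }
    }

    fn main() {
        let mut o = Outer { inner: Inner { x: 0 }, name: "demo".into() };
        let n = o.name();  // immutable borrow of the whole `o`
        o.bump();          // ERROR: cannot borrow `o` as mutable while `n` is live
        println!("{}", n);
    }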


Just `let x = &mut bar.0` will work, but this "intelligence" is confined to the body of a single function. Rust possesses the somewhat curious property that there are functions that are intended to always be called like this

  foo(&bar.x, &bar.y, &bar.z)
which cannot be refactored to

  foo(&bar)


This is a complex topic, but what it boils down to is that the function signature is the API. If you borrow the whole thing, you borrow the whole thing, not disjoint parts of it.

This is also why it's okay in the body of a single function; that doesn't impact a boundary.

We'll see what happens in the future.


My preferred solution to this is to make partial borrows part of the method definition syntax, to make it clear this is part of the external contract of your API. I also lean towards mimicking the arbitrary self type syntax, landing on something along the lines of

  impl Foo {
      fn bar(self: Foo { ref a, ref mut b, .. }) {}
  }
Where that signature tells the borrow checker that those two fields are the only ones being accessed. Nowadays this method would have to be &mut self, which heavily restricts more complex compositions, as mentioned in this thread.


On one hand this looks/feels absolutely useful and a kind of "must have" feature, yet the boilerplate seems excessive. But at the same time it's absolutely something that should be described somewhere as metadata for a function. It seems to me this should be something that the compiler figures out and mostly elides, like lifetimes. (But this should be present as part of the API, should be easily discoverable, should be there at compile time when linking against crates, etc.)


The problem with having the compiler figure it out is that the signature now becomes dependent on implementation details that can't be seen from the signature, and could be accidentally changed. This information must be an explicit part of the signature, so that it's properly visible and spells out what the function can do without having to read the body.

That said, I think putting it in the argument list like that is a terrible idea. It would add far too much clutter. What if it was put into the where section, sort of like this:

  impl Foo {
      fn bar(&mut self, other_param: &Data)
        where self: borrows{ a, b, .. },
              other_param: borrows{ a, b, ..},
      {
        // Function body
      }
  }
In one sense, it sort of fits because it's providing a "bounds" of sorts on how the reference can be used, similar to the way a trait bound would for a type. If no reference binding is provided it would default to borrowing all the fields, which is the current behaviour.


> But honestly, in this case, I think it's pretty damn clear that Rust is procedural WAY more than it's either OOP or FP. (Note: By OOP, I mean Java-style with tall ownership hierarchies and object-managed mutable state, not necessarily caring about inheritance. And definitely not referring to Alan-Kay-Style-OOP a la Lisp and Smalltalk).

> Scala can be looked at ...

Interesting perspectives, and I largely agree with all of them.

Related: I heard someone else say that while Clojure and Erlang embrace immutability for concurrency, Rust shows that you can "just mutate". It's still safe for concurrency (due to its type system).

Rust seems to be one of the only languages that embraces the combination of algebraic data types + stateful/procedural code.

But I've also found this in my Oil project [1], which is written with a bunch of custom DSLs!

I wrote it in statically-typed Python + ASDL [2], so it's very much procedural code + algebraic data types. Despite historically using an immutable style in Python, this combo has grown on me. Lexing and parsing are inherently stateful, and use a lot of mutation.

----

On top of that, my collection of DSLs even translates nicely to C++. Surprisingly, it has some advantages over Rust! The model of algebraic data types is richer ("first class variants"), described here:

https://news.ycombinator.com/item?id=24136949

https://lobste.rs/s/77nu3d/oil_s_parser_is_160x_200x_faster_...

[1] https://www.oilshell.org/

[2] http://www.oilshell.org/blog/tags.html?tag=ASDL#ASDL


> Related: I heard someone else say that while Clojure and Erlang embrace immutability for concurrency, Rust shows that you can "just mutate". It's still safe for concurrency (due to its type system).

Yes! I will repeat a sentiment I articulated on Reddit about that. Even after having used Rust on a handful of small-to-medium sized projects since 2016, I never realized that I could loosen/abandon my immutability fetish that I had been trained to love over the years of working with C++ and Java. C++ needs it for concurrency, and Java needs it for concurrency and because every method can mutate its inputs without telling you. Rust doesn't have either of those problems. Having immutable-by-definition objects in Rust isn't really that useful (unless, of course, the thing is naturally, semantically, immutable anyway, like a Date IMO). It was an eye-opening epiphany and I'm excited for my next Rust session to see how my "new worldview" pans out. :)


Yes I'll be interested to see how it turns out. Any blog posts / writing on the procedural viewpoint will be appreciated.

Does Rust have something like C++'s const methods? Where you can have a method that mutates a member, but doesn't logically mutate from the caller's perspective?

It seems like you could be prevented from having races on individual variables, but still have races at a higher level.

Like on database cells. I guess no language will help you with that, and that's why Hickey wrote Datomic -- to remove mutability from the database.


> Does Rust have something like C++'s const methods? Where you can have a method that mutates a member, but doesn't logically mutate from the caller's perspective?

Yes! "Interior mutatbility" is the term to search for. In Rust, you'd wrap the field in a RefCell<T>. Many connection-pool implementations use interior mutability to manage the connections transparently to the caller.

Interior mutability is basically what, e.g. OCaml, does by default. In Rust, it's opt-in.

Yeah, DB ops are always a sticking point for figuring out how to write my APIs.
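For illustration, a minimal sketch of the pattern (type names made up): the method takes `&self` but still updates internal state through a RefCell.

    use std::cell::RefCell;

    struct Cache {
        // Interior mutability: callers only see an `&self` API,
        // but the hit counter can still be updated.
        hits: RefCell<u32>,
    }

    impl Cache {
        fn new() -> Self {
            Cache { hits: RefCell::new(0) }
        }

        // Note: `&self`, not `&mut self`.
        fn get(&self) -> u32 {
            *self.hits.borrow_mut() += 1;
            *self.hits.borrow()
        }
    }

    fn main() {
        let c = Cache::new();
        println!("{} {}", c.get(), c.get()); // 1 2
    }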


I think that Rust is often assumed to be functional because it has ADTs and nice pattern matching, both of which have historically been a feature specific to FP. Just goes to show how fuzzy our definition of FP really is...


Agreed. Same with "OOP". Who the hell knows what people really mean when they say that.

Those people who think that "FP" means "type system like Haskell" are wrong, though, IMO. It precludes languages that are much more function-based, such as Clojure, Schemes, Elixir.


Usually when the term OOP is used, it actually means ClassOrientedProgramming (C++/Java/C# etc)


Which I also don't understand. Is that a style of overusing classes where functions would suffice (FooHelper)? Or is it something about the language? Because almost all popular languages have classes. Rust and Go call them "struct", but it's the same thing. Swift has "class" and "struct", but they're both the same thing as a C++ class.


Rust structs are not classes. Rust puts structs inline (on the stack), classes are virtual. Since Rust is a systems programming language, it's an actual distinction that makes a difference


I'm not sure I follow. Both C++ and Rust allow us to put structs/classes on the stack or the heap. Rust has trait objects which use vtables. Are Rust traits the same as "classes" then?

When people say "OOP" or "class-oriented-programming" are you saying that they're referring to implementation details such as memory allocation?


I'm not sure what you mean by "classes are virtual", but if it's virtual dispatch, then that's completely orthogonal to allocating objects on the stack vs the heap.


Functional in my mind more or less means working with immutable values.


> how many function types does Rust have again? Three?

It has to, right? ATS has many function types as well, plus stack-allocated closures (I think Rust has that too??)


That's correct: Rust's closures do not heap-allocate unless you box them, like any other struct, because closures are sugar for a struct + a function.

(and yes, there are three types of closures, because they need to know if they take said struct by reference, by mutable reference, or by owner.)


It does have to, because of the way mutation and ownership work. Which is great! But it makes functional programming awkward. The language does not "steward" you toward function composition.


>the borrow checker, I do conceptually understand lifetimes, but actually using them is tricky.

I've been using Rust for a little over a year, almost daily at work, and for several projects. I have a pretty good intuition about how the borrow checker works and what needs to be done to appease it. That said, I don't think I'm any closer to understanding lifetimes. I know conceptually how they are supposed to work (I need the reference to X to last as long as Y), but anytime I think I have a situation that could be made better with lifetimes, I can't seem to get the compiler to understand what I'm trying to do. On top of that, very little of my code, and the code I read, actually uses lifetimes.


I've been writing Rust code since before the 1.0 days, and I still can't understand lifetimes in practice.

When the compiler starts complaining about lifetimes issues, I tend to make everything clone()able (either using Rc, or Arc, or Arc+Mutex, or full clones).

Because if you start introducing explicit lifetimes somewhere, these changes are going to cascade, and tons of annotations will need to be added to everything using these types, and their dependent types.
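For what it's worth, the shape of that escape hatch looks something like this (a sketch with made-up types): shared ownership via Rc instead of a borrowed `&'a Config`, so no lifetime parameter leaks into the struct's signature.

    use std::rc::Rc;

    struct Config { name: String }

    // No `<'a>` here; `Rc` carries shared ownership instead of a borrow.
    struct Worker { config: Rc<Config> }

    fn main() {
        let config = Rc::new(Config { name: String::from("demo") });
        let a = Worker { config: Rc::clone(&config) };
        let b = Worker { config: Rc::clone(&config) };
        println!("{} {}", a.config.name, b.config.name);
    }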


I think you're really intended to do the latter rather than the former. I mean, Rust lets you do either–it gives you the choice if performance isn't your concern–but usually it's better to not clone everything.


Nah, either is fine. Rust gives you the tools to do both for good reason. Which is right for you completely depends.


I find that lifetimes are ok, albeit annoying sometimes, especially the cascade part, as you mention. The one thing I can't get to stick in my brain is variance. Every time, I need to go back to https://doc.rust-lang.org/nomicon/subtyping.html#variance


It's worth noting that every reference in Rust has a lifetime, but the compiler is usually smart enough to infer it. What you are talking about is explicit lifetimes.


As someone matching a similar description to yours - i find my lifetime understanding... moderate. Complex lifetime usage can still tweak my brain - notably how i can design it. But simple lifetime usage is intuitive.

A simple example i often run into is wanting to do something with a string, without taking owned parts of the string. Very intuitive how the str matches the lifetime of the owned value.

On the otherhand, the other day i was trying to write a piece of software where:

1. I wanted to deserialize a large tree of JSON nodes. I had the potential to deserialize these nodes without owning the data - since Serde supports lifetimes, i could deserialize strings as strs and hypothetically not allocate a lot of strings.

2. In doing that, because a tree could be infinitely large i couldn't keep all of the nodes together. Nodes could be kept as references, but eventually would need to be GC'd to prevent infinite memory.

3. To do this, i _think_ lifetimes would have to be separate between GC'd instances. Within a GC'd instance, you could keep all the read bytes, and deserialize with refs to those bytes. When a GC took place, you'd convert the remainder partial nodes to owned values (some allocation) to consume the lifetime and restart the process with the owned node as the start of the next GC lifetime. ... or so my plan was.

I have, i think, just enough understanding of lifetimes to _almost_ make that work. I _think_ some allocations would be required due to the GC behavior, but it would still reduce ~90% of allocations in the algorithm.

Unfortunately, i got tired of designing this complex API and just wrote a simple allocation version.

Conceptualizing allocations and the lifetimes to make it work are.. interesting. Especially when there is some data within the lifetime that you want to "break out of" the lifetime, as in my example (where i had a partial node, and i made it owned).

I still think i understand enough to do it - it'll just take a fair bit of thinking and working through the problem.
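For reference, the zero-copy shape from point 1 looks roughly like this (a sketch, assuming serde with the derive feature plus serde_json; the types are made up): string fields borrow from the input buffer instead of allocating.

    use serde::Deserialize;

    #[derive(Deserialize)]
    struct Node<'a> {
        // `&str` fields are borrowed straight from the input text.
        name: &'a str,
        children: Vec<Node<'a>>,
    }

    fn main() -> Result<(), serde_json::Error> {
        let data = r#"{"name": "root", "children": [{"name": "leaf", "children": []}]}"#;
        let root: Node = serde_json::from_str(data)?;
        println!("{}", root.children[0].name);
        Ok(())
    }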


These kinds of optimizations are incredibly painful in Rust. One common suggestion is to sidestep the issue and store an offset + length in the nodes, then use that to look up the value in the original string when you need it.
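Something like this (a rough sketch, names made up):

    // Instead of `name: &'a str`, store where the value lives in the
    // original buffer and resolve it on demand.
    struct Span { start: usize, len: usize }

    struct Node { name: Span }

    fn resolve<'a>(src: &'a str, span: &Span) -> &'a str {
        &src[span.start..span.start + span.len]
    }

    fn main() {
        let src = String::from("hello world");
        let node = Node { name: Span { start: 6, len: 5 } };
        println!("{}", resolve(&src, &node.name)); // "world"
    }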


I've tried and tried but I've never found a situation where explicit lifetimes was the answer. It's almost always more complex than that. I mean, everywhere that is complex enough that implicit lifetimes don't work is also too complex for explicit lifetimes and almost always required Rc or Arc to solve it. Maybe I'm missing something, but it seems like there are so many other missing topics the Rust Book could be spending time on that would be more effective than teaching about explicit lifetimes.


There are a lot of situations where explicit lifetimes are the answer, TBH. However, you have to have a very good working model of lifetimes in order to get the annotations right.


I wrote a parser that needed it. But yeah for the most part whenever explicit lifetimes came into the picture it means that I have made some sort of mistake and need to rethink my approach.


I don't have time for an exhaustive answer, so I'll give you some rules of thumb when using Functional-style combinators:

* If you need to keep the input unchanged, you must use either a reference to each item (.iter()) or a copy of each item (.iter().cloned())

* If you don't need the input ever again, you should move the items (.into_iter())

These rules follow for each step of the chain.
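A tiny illustration of those two rules (made-up data):

    fn main() {
        let names = vec![String::from("a"), String::from("bb"), String::from("ccc")];

        // Still need `names` afterwards -> borrow each item with .iter().
        let lengths: Vec<usize> = names.iter().map(|s| s.len()).collect();

        // Done with `names` -> move the items out with .into_iter().
        let shouty: Vec<String> = names.into_iter().map(|s| s.to_uppercase()).collect();

        println!("{:?} {:?}", lengths, shouty);
    }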

I very very often write very Functional code in Rust and I find it natural and easier to reason about than imperative-style code. The example I could find the fastest: https://github.com/thenewwazoo/aoc2019/blob/master/src/day10...

Edit: another example (this one uses types that are Copy so the copies are implicit) https://github.com/thenewwazoo/cryptopals/blob/master/src/tr...

Another edit: I am not a Functional programmer, and have never known Haskell or any Lisp. Erlang is as close as I've ever gotten. I've found Rust to be a fantastic language for writing Functionally.


I've been using .to_owned() liberally, that often does the trick in the first instance, albeit at the potential cost of a copy.


This is a perfectly reasonable solution. You might be leaving performance on the table but

1) if performance isn't a measurable problem for you, then there's no point in eking the last bit of performance from these allocations

2) it simplifies the code itself

3) sometimes clones are actually efficient, people forget to make their small ADTs Copy

4) if you're learning the language this lets you delay the moment when you have to fully understand the way lifetimes actually behave in complex cases, which means that when you do do that you will have a better grasp of the rest of the language and will be able to form a better mental model of how it fits with other features


> I do understand the basic rules of the borrow checker

It ends up being doable. I dabbled in ATS, developed Stockholm syndrome, and now Rust ain't too bad.

Higher-order functions are difficult in Rust or with linear/affine types in general. Haven't looked at what Rust does recently.

> Should I avoid using closures all over the place? Should my code look more like C and less like Haskell?

When in Rome do as the Romans :)

Anyway, there's some fun imperative programming stuff you can do in Rust that is fickle in Haskell (or OCaml/Standard ML).


Ah, if you’re making dsl code or functional combinators, you usually want to ‘move’ your values instead of ‘borrowing’ them.

example:

    fn add(mut self) -> Self { self }

    fn add(self) -> Self { self }

instead of:

    fn add(&mut self) {}

    fn add(&self) {}

With this, you will be able to ‘store’ closures easily and apply them later. No more fighting with the borrow checker over where to borrow as mut or not. You will also avoid a few copies.
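To make that concrete, a small sketch (all names made up) of a builder-ish struct that stores closures by taking `self` by value:

    struct Pipeline {
        steps: Vec<Box<dyn Fn(i32) -> i32>>,
    }

    impl Pipeline {
        fn new() -> Self {
            Pipeline { steps: Vec::new() }
        }

        // Takes `self` by value and returns it, so callers can chain
        // calls without juggling mutable borrows.
        fn add(mut self, f: impl Fn(i32) -> i32 + 'static) -> Self {
            self.steps.push(Box::new(f));
            self
        }

        fn run(&self, mut x: i32) -> i32 {
            for step in &self.steps {
                x = step(x);
            }
            x
        }
    }

    fn main() {
        let p = Pipeline::new().add(|x| x + 1).add(|x| x * 2);
        println!("{}", p.run(3)); // prints 8
    }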


This echoes my experience with learning Rust over the past few weeks (coming from Elixir).

There is a lot to like, and I understand lifetimes conceptually, but it's hard.


If you're coming from elixir, and not doing this for work, I highly suggest zig; zig feels like elixir since both have the comptime concept.


Yep. I made my first Rust script last week and the amount of care required is similar to C++.

It is definitely not easier compared to C++, contrasting with D, which is easier than C++.

However, the program worked correctly on the first try, which I guess is also a consequence of the Rust model.


> The ecosystem feels very sane [] compared to npm

Now that's damning with faint praise.


Parts of Rust's ecosystem are just npm but with saner people using it at the moment.


And it's probably going to be a pretty big issue in a few years, IMO.


Sometimes you will be annoyed by changes, I guarantee it, BUT since 1.0 that's decreased a lot, and compared to npm it's night and day. You'll think you're dealing with C in relative terms of stability if npm is your baseline :D


So, I want to learn Rust. I am a C# / Python programmer, experienced.

Are there any particular set of problems that I can solve systematically, so that I can learn all the features of Rust?


https://doc.rust-lang.org/stable/book/ is not purely problems, but does have some problem chapters. (I am a co-author.)

https://doc.rust-lang.org/stable/rust-by-example/ is the "by example" introduction, which is all about sample programs, but feels a bit dated, IMHO. Still not incorrect, but not up-to-date.

You may also like the O'Reilly book, or Rust In Action, which use more fully-featured example programs more heavily than The Book does.


I was super impressed by the O’Reilly book, which throws you right in to writing a multithreaded Mandelbrot set plotter. It also goes through writing a multithreaded HTTP server. Pretty neat!


Meta-answer: my default when picking up a new language is usually to learn just enough to be able to start writing code, and then learn new things piecemeal as necessary to solve whatever thing I'm working on, and it sounds like you're hoping to do something like that here.

I found that approach for Rust in particular to not work well at all, and have colleagues who've reported the same. There are some fairly complicated, fundamental concepts that are unique to Rust that I think need to be tackled before you can really do much of anything (mostly borrowing and lifetimes), and that's not immediately obvious from starter programs -- because of lifetime elision, some early programs can look deceptively familiar, but there's a bunch of barely-hidden complexity there, and as soon as you start to stray from the tutorial path, you'll run headfirst into a wall of compiler errors that you're not yet equipped to understand. For Rust I'd highly recommend just reading a book cover to cover first (either TRPL or the O'Reilly one), and then starting to write code.


One thing I'd be wary of is Googling error messages and taking answers from Stack Exchange. Rust has mutated (heh) a fair bit over the years and many SE answers to noob problems are obsolete and sometimes incorrect. At the very least check the datestamp on any answer and be wary of anything more than a year or two old. This goes double if the answer has some extremely awkward looking syntax with lots of modifiers and underscores sprinkled throughout. There's probably a better way to do it now, or an alternative solution that works better. Or maybe you're just trying to do something that Rust makes hard, like concurrent processing on a shared data structure.

The manual is safer even though it's harder to find your exact problem and solution, especially when you're just starting out.


I literally spend tens of hours a week on Stack Overflow ensuring this isn’t the case, or if it is that it’s clearly notated.

As always, feel free to drop into the Rust Stack Overflow chat room[1], or any of the official Rust discussion channels, and ping me or other Stack Overflow contributors to review and update answers.

1: https://chat.stackoverflow.com/rooms/62927/rust


You are awesome!


Seconded. I've been learning on SO heavily lately as I'm writing my first real Rust program (an IRC bot/client/server/not sure yet), and I was impressed by how many questions and answers had been updated with notes about things being potentially out of date. Not something that I think I've ever seen in the PHP land from whence I came.


Trying to implement anything in Rust will set you up for a crash-course. Even the simplest non-trivial programs will introduce you to the Rust borrow checker, a major feature absent in C# / Python.


Once you've learned the basics (plenty of links in the siblings, including the official Rust Book), this is a key (and entertaining!) unofficial resource that really hammered home for me the ways that Rust is different from the C family when it comes to working with references: https://rust-unofficial.github.io/too-many-lists/

It also taught me about Boxes and Rc's, which are essential for certain kinds of things, and which I don't remember being covered in the main Rust Book at all



Yeah, I figured they were probably in there somewhere. It's possible I read the book before they were added, or that I skipped them (I glossed over some of the final chapters), or it's possible I just didn't fully grasp how important they were until I followed the linked-list tutorial.

What I like about the latter is how closely it steps through the problem-solving process within the context of a very familiar task, teaching you at each stage 1) why the borrow-checker is upset and 2) what tool you need to apply in order to satisfy it. If the Book taught me "what is Rust and what are its features?", this taught me "how do I use Rust in practice?".


Oh yeah, don't get me wrong, the linked-list tutorial is amazing. :)


You might check out https://exercism.io/tracks/rust . Some are a little heavy in the math department but personally I've always found test-driven learning useful when learning a new language thanks to the instant feedback.



I am in similar boat. Python centric data scientist. Very tempted to try to learn Rust so I can accelerate certain ETL tasks.

Question for Rust experts: On what ETL tasks would you expect Rust to outperform Numpy, Numba, and Cython? What are the characteristics of a workload that sees order-of-magnitude speed ups from switching to Rust?


I'm far from an expert, but I would not expect hand-written Rust code to outperform Numpy. Not because it's Rust and Numpy is written in C, but because Numpy has been deeply optimized over many years by many people and your custom code would not have been. When it comes to performance Rust is generally comparable to C++, as a baseline. It's not going to give you some dramatic advantage that offsets the code-maturity factor.

Now, if you're doing lots of computation in Python itself - not within the confines of Numpy - that's where you might see a significant speed boost. Again, I don't know precisely how Rust and Cython would compare, but I would very much expect Rust to be significantly faster, just as I would very much expect C++ to be significantly faster.


I deal with a lot of ragged data that is hard to vectorize, and currently write cython kernels when the inner loops take too long. Sounds like Rust might be faster than cython? Thanks for the feedback.


Also it might take 20x less RAM compared to using Python objects like sets and dicts. In Rust there's no garbage collection, and you can lay out memory by hand exactly as you want.


Most likely, yes


Julia might be a better fit for this use case.

That way you leverage a more developed data ecosystem, can call python when necessary and avoid writing low level code.

Depends on the task of course.


I’m fascinated by Julia and have test driven it before but it didn’t click for me. Maybe I was doing it wrong and/or the ecosystem has matured since I last looked.

I guess I generally do like the pythonic paradigm of an interpreted glue language orchestrating precompiled functions written in other languages. I don’t need or want to compile the entire pipeline end to end after every edit, that slows my development iteration cycle times.

I just want to write my own fast compiled functions to insert into the pipeline on the rare occasions I need something bespoke that doesn’t already exist in the extended python ecosystem. It seems like a lower level language would be optimal for that?


If the dev cycle feels slow in julia, you can make it snappier with a tool like Revise.jl, it is quite handy.

If you just need to fill a small and slow gap maybe something like numba is also a good option to stay within python.

Going all the way to a low level language would require the compilation, the glue code and expertise in both languages. Probably that slows down the development pipeline more than the JIT compilation from julia or numba.

Anyway, any opportunity to learn/practice some rust is also great!


One thing that may help with the glue-code aspect would be a crate like pyo3[0], which can generate a lot of the details for you.

[0] https://crates.io/crates/pyo3


Column-wise map-reduce over large dataframes usually gives you a 1000x or so speedup.

With rust you can stream each record and leverage the insane parallelism and async-io libs (rayon, crossbeam, tokio) and a very small memory footprint. sure you have asyncio in python but that’s nowhere near the speed of tokio.
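For example, a parallel map-reduce over one "column" with rayon looks roughly like this (a sketch, assuming the rayon crate):

    use rayon::prelude::*;

    fn main() {
        // Stand-in for a numeric column.
        let column: Vec<f64> = (0..10_000_000).map(|i| i as f64).collect();

        // Map-reduce across all cores.
        let sum_of_squares: f64 = column.par_iter().map(|x| x * x).sum();

        println!("{}", sum_of_squares);
    }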


Thanks for the pointers, those crates seem great. The flaky multithreading libs are my least favorite part of python, and rust’s strength in this area seems very appealing.


I like to implement something I have already done before. In my case, the ID3 algorithm has a nice balance of challenge, experience and documentation available. You could try to write it for a specific case, where you structure your data, and then apply it to a generic case.


Try to do some graphics programming with the backend of your choice. There is also this cool nanovg port: https://github.com/cytecbg/gpucanvas. Run the demo in examples to see the nanovg demo.


Try to write an ETL pipeline in Rust.


Write a chip8 emulator in Rust.


If anyone wants to know more about const fns, see https://doc.rust-lang.org/reference/items/functions.html#con...

It is the Rust way of specifying a function as being _pure_. In other words the output is dependent only on the function arguments, and not on any external state.

This means they can be evaluated at compile time. I suppose in the future, it could also allow better compiler optimizations.


Never worked with Rust, but I am pretty sure that having a function be pure is not enough to evaluate it at compile time.


Yes, we don't actually use the "pure" terminology for this reason.


Why not? By definition the output does not depend on any runtime properties, so you should be able to compute it at compile time, right?


Pure functions are functions where the return value only depends on the function arguments. If the function arguments are not known at compile time, obviously you can't evaluate it at compile time. It would only be possible to do that when all the arguments are also known at compile time (constants).


But a const fn can also do that: if given non-constant parameters it will be evaluated at run-time. So I (having never used Rust before) still haven't seen the distinction between pure and const. What's an example of a function that is pure but cannot be evaluated at compile time with constant parameters?


The parent didn't specify calling with constant parameters, which makes a huge difference. To answer your question, basically anything the compiler doesn't know how to evaluate - which has been expanded in this release, but does not include everything still.


Looks like we have some terminology confusion. I read mijamo's question as being about the theoretical ability to evaluate at compile time (the value is knowable) not whether the compiler does do it, and that's what I meant in my comment too.

If you say that 'pure' functions are not compile-time-evaluatable because they may be given parameters that are not known at compile time, then you must also say that const fns are not compile-time-evaluatable. I think it's also clear that we mean for const fns to count, so the assumption that the parameters are known at compile time was implicit in the question.

Under those two assumptions: are pure functions evaluatable (in theory) at compile time (on values known at compile time)? As far as I can think, the answer is yes? In which case, I'm not entirely sure what the distinction between 'pure' and 'const fn' is supposed to be, except to separate out the subset of 'pure' functions that are actually evaluated in practice. Is there anything more to it?


I think that's it in a nutshell. You can't evaluate everything at compile time, even when you could theoretically. So you need some way to mark the subset of pure functions that can be evaluated at compile time, which is what const fn does. That way if a const fn only calls other const functions you know you can evaluate it. It's a convenient way of tagging functions.


That's what I thought I said in the original comment. I wonder what people disliked.


That makes sense, thanks.


Well, that is not the definition. The output of a pure function can depend on input arguments, and those arguments can definitely depend on runtime properties.

https://en.wikipedia.org/wiki/Pure_function


Rust has no notion of purity. This would require something like an effect system.

const functions can't directly do any IO or even allocation - at the moment.

But this can be easily circumvented, eg by using a proc macro that does IO.

Sidenote: even in Haskell the function signature doesn't guarantee purity, due to unsafePerformIO.


An interesting history note: Rust used to have an effects system which included actually being able to annotate a function as pure.


From way back in 2013, a HN thread https://news.ycombinator.com/item?id=6940624

Sadly it looks like the wayback machine does not have a copy of the original. Does anyone know how to get one?


Gmane was just a nice interface for reading mailing lists, in this case it was just referring to some thread on the old rust-dev mailing list, which is archived at https://mail.mozilla.org/pipermail/rust-dev/ . Sadly I can't tell exactly which thread it was, but I'm guessing it was https://mail.mozilla.org/pipermail/rust-dev/2013-January/002... .

(Niko also once wrote a blog post which gives an overview of the old purity system: https://smallcultfollowing.com/babysteps/blog/2012/10/12/ext... )


I knew it was, but couldn't find a rust-dev post with "pure" in the same time-frame. Thank you! Yours may be it, but I think est has a convincing case that it was the thread this one spun out from...

Thanks for that link to Niko's blog too!


Alright this sent me down a rabbit hole.

The URL is: http://thread.gmane.org/gmane.comp.lang.rust.devel/3674/focu...

I've also seen it without the trailing /focus.

The web interface of gmane.org is down, so the link is not available. Turns out though that the rust-dev mailing list archive is present on both mail.mozilla.org and mail-archive.com, so one only has to find the mail corresponding to the link.

https://www.mail-archive.com/rust-dev@mozilla.org/

https://mail.mozilla.org/pipermail/rust-dev/

Using archive.org's "find all archived websites with this prefix" feature one can obtain a total of three archived e-mails.

http://web.archive.org/web/*/http://article.gmane.org/gmane....

http://web.archive.org/web/20140723013539/http://article.gma...

http://web.archive.org/web/20140719142224/http://article.gma...

http://web.archive.org/web/20141225073140/http://article.gma...

Now, one searches for lines in those e-mails in on mail-archive.com and finds these corresponding links:

https://www.mail-archive.com/rust-dev@mozilla.org/msg06831.h...

https://www.mail-archive.com/rust-dev@mozilla.org/msg09516.h...

https://www.mail-archive.com/rust-dev@mozilla.org/msg10494.h...

Observe that the differences between the two IDs are different each time, namely decreasing: 53, 50, 45

So it's not a constant difference. What to do now?

Let's google the URL! It points towards this hn comment: https://news.ycombinator.com/item?id=7554676

It gives one piece of information: the e-mail was written by Graydon. Similarly, commenters in https://www.reddit.com/r/programming/comments/1t8y6g/why_rus... mention his name, making it very likely that the e-mail was written by him.

Another hint comes from the reddit thread you linked above: someone named maxcan stated they started the thread. Looking up their name plus "pure" gives only e-mails from a single thread, including an e-mail from Graydon: https://www.mail-archive.com/search?l=rust-dev%40mozilla.org...

This is the E-Mail:

https://www.mail-archive.com/rust-dev@mozilla.org/msg03913.h...

https://mail.mozilla.org/pipermail/rust-dev/2013-April/00392...

Also archived it, just to be sure:

http://web.archive.org/web/20200827181214/https://www.mail-a...

It covers precisely the topic you mentioned and is in a thread started by maxcan. I think it's the e-mail we are looking for.

To verify, the difference between the two IDs is either 239, or 58, depending on which of the two numbers in the URL point to the actual E-Mail, but 58 is more likely. The 0.7 release announcement for example has a difference of 57 and is quite close to both:

https://www.mail-archive.com/rust-dev@mozilla.org/msg04653.h...

https://news.ycombinator.com/item?id=5986985


Argh! you are a better sleuth than me. I was looking at that month's rust-dev archives and didn't realize the subject did not have "pure" in it, so I looked right over it.

This is! Thank you so much!


Glad to see `Option::zip` stabilized. I tend to write such a helper in many of my projects to collect optional variables together when they're coupled. Really improves the ergonomics doing more railroad-style programming.


Do you have an example? I'm having trouble understanding how zip would be used in practice.


You sometimes have two Options that must both be Some to have any effect, but other reasons prevent you from making an Option of a tuple of those two fields. Eg think of deserializing a JSON that contains optional username and password strings, but you need both if you are to use them to authenticate to some remote.

In that case, you currently have to write code like:

    if let (Some(username), Some(password)) = (username, password) {
        /* both are set */
    }
    else {
        /* at least one is not set */
    }
With zip this can be written as `if let Some((username, password)) = username.zip(password) {` In this case it doesn't look like a big difference, but it does allow you to chain other Option combinators more easily if you were doing that instead of writing if-let / match. Using combinators is the "railroad-style programming" that kevinastone was talking about. For example, you can more easily write:

    let (username, password) = username.zip(password).ok_or("one or more required parameters is missing")?;
You could of course still do this without .zip(), but it would be clunkier:

    let (username, password) = username.and_then(|username| password.map(|password| (username, password))).ok_or("one or more required parameters is missing")?;
The zip form does lose the information of which of the two original Options was None, so if you do need that information (say the error message needs to specify which parameter is missing) you'd still use the non-zip form with a match.


The zip solution however requires any reviewer to look up what it actually does, whereas the "if let" is more of a language fundamental and known to most reviewers.

Therefore I would actually prefer the long/verbose form without the zip.


I think `Option::zip(username, password)` makes it somewhat clearer what it does.


It's just the opposite. zip is a normal function written in normal code, so if the reviewer doesn't know what it does then they can just click through to the definition in their IDE. Whereas "if let" is some kind of special language construct that a reviewer has to look up through some special alternative channel if they don't understand it.


If I'm doing a code review I don't have an IDE. I only see text (yeah - a limitation of the tooling, but reality). I can search for the functions, but it's a hassle and I want to limit having to do it as much as possible. It's not even super easy in Rust, since some functions are defined on (extension) traits and you don't exactly know where to search for them if you don't have IDE support and are not already an expert.

"if let" is certainly a special construct - but it's also one that Rust authors and reviewers will typically encounter rather fast since a lot of error handling and Option unwrapping uses it. Knowing the majority of library functions will imho take longer.


Understanding - or looking up - library functions is something you're always going to have to do during code review (it's well worth getting IDE integration set up). zip is a very standard and well-known function (I used it yesterday, in a non-Rust codebase); it may well end up being better-known than "if let". Learning what a library function does is certainly never harder than learning what a keyword does and it's often easier (apart from anything else, you know that a library function follows the normal rules for functions, so if you can see how its output is used then you can often tell what it does. Whereas you can't rely on that kind of reasoning with language keywords).


The quality of life improvements to cargo look very nice, and I feel that rust wouldn't be remotely as successful without such a tool. I'm very glad I won't have to be manually picking target directories out of my borg backups anymore when I'm running out of disk space.


These const fn’s are cool, but won’t this also lead to long compile times down the road?


Yes, any time you move computation to compile time, it makes the compile time take longer. As always it is a tradeoff.

One thing that people may not realize, especially now that we have loops: you may expect this to hang the compiler:

    const fn forever() -> ! {
        loop {
            
        } 
    }
    
    static FOO: u32 = forever();
But it won't:

    error[E0080]: could not evaluate static initializer
     --> src/lib.rs:2:5
      |
    2 | /     loop {
    3 | |         
    4 | |     } 
      | |     ^
      | |     |
      | |_____exceeded interpreter step limit (see `#[const_eval_limit]`)
      |       inside `forever` at src/lib.rs:2:5
This does place an upper limit on any given const fn.


That's fantastic!

That could be a real hard-to-source build stall.


Nifty!


Yeah, but only if you use them to compute significant items during compilation.

The upside of course is that any computation you compute at compile time is a computation that you don't compute at runtime. For some applications this trade off is definitely worth the cost of admission.

At the end of the day it's a trade off that will have to be made in light of the scenario it's being used in. Being able to make that decision is a good thing.


Awesome! Any idea when relative links will be available in rustdoc? Seems like it's just on the edge of stabilizing (https://github.com/rust-lang/rust/pull/74430) but I'm curious how long it takes to see in a release after this happens.


There can be fuzziness here depending on exactly when it lands, but generally, if something lands in nightly, it'll be in stable two releases after the current stable.


> if, if let, and match

> while, while let, and loop

> the && and || operators

Common Lisp user here. Why just that? How come you can’t have the entire language as well as all your language customizations available at compile time for evaluation?


You can! Just not through `const fn`. Rust has macros, which at their limit are capable of running arbitrary Rust code that manipulates syntax and communicates with the compilation process, just like in a good old Lisp.

Why isn’t `const fn` like this too? One word answer: determinism. Rust takes type/memory/access/temporal safety very seriously, and consequentially you can’t use anything in a `const fn` that isn’t fully deterministic and doesn’t depend in any way on platform-specific behavior. This includes, for example, any floating-point computation, or any random number generation, or any form of I/O, or handling file paths in an OS-specific way, or certain kinds of memory allocation. The span of things possible in `const fn`s has been expanding over time, and will in the nearish future largely overtake C++’s direct counterpart of it (`constexpr` functions) in capability. But some things will intentionally never be possible in `const fn`s, for the reasons given above.


Is there a resource that explains what const functions are and why you would use them?



Can the Mozilla layoffs (on the Servo team) impact Rust's future? It's just a question to understand whether there are other big, healthy Rust projects out there. Just curious.



`const fn` improvements are amazing!

I can't wait for when we'll be able to `const fn` all the things. Regex, expensive constants that feel as though they should be literals, etc.


[flagged]


> entire rust team was recently fired from Mozilla

This is completely incorrect, verging on FUD. Mozilla had very few people working full-time on Rust; of the people who were laid off in the recent wave, the ones working on projects adjacent to Rust were working on Servo or WASM-related codebases. In particular, the person at Mozilla who was most central to Rust development, Niko Matsakis, is still employed there and still working full-time on Rust.


They're moving to a Rust Foundation with corporate sponsorship model. I think Rust will be fine, even Amazon expressed interest in sponsoring development.


Hear hear.

System languages don't grow on trees. Rust has had a lot of non-trivial effort put into it and is very usable right now. Somebody is going to see the value just lying around and is going to pick up the financial slack.

I see it as vaguely analogous to the current movie theater situation in the US. A lot of companies are seeing the end of their business, but all of those buildings are still sitting around waiting for someone to swoop in and buy them for fire sale prices.


What's baffling to me is how scott's comment is [flagged] and [dead].

You trully can't have unconfortable opinions about Rust here. That's blatant censorship.

Replying to you since I don't see a reply button to his comment.


It's a factually incorrect comment: the entire Rust team was not fired from Mozilla.

Does that mean it deserves to be flagged? I didn't flag it. But to be clear, while it does have some opinions, it is also plain incorrect in the facts it asserts.

You can also find, many, many, many comments critical of Rust that are upvoted, let alone not flagged. I wouldn’t extrapolate from a single comment.


This is a discussion forum. Users will exaggerate and get facts wrong. Some amount of lenience is expected before completely silencing their voice.

With that said, where can one read about who is in Mozilla's Rust team? Is that information even public?


Sure. As I said, I did not flag it.

> Is that information even public?

It is in a really weird space. However, a lot of the folks involved have chosen to talk about where they're at publicly, and Niko said he was not laid off.

Also, for what it's worth: there's a charitable and uncharitable reading of the comment.

Charitable reading: everyone who was at Mozilla who was paid to work on Rust was laid off.

Non-charitable reading: everyone who was paid to work on Rust was at Mozilla, and was laid off.

HN readers are supposed to follow the principle of charity, but it's quite possible that people either ignored that, or mis-understood the parent as saying the latter, in which case, it feels quite egregiously incorrect, rather than maybe slightly wrong.


Thanks for keeping our conversation civil despite my tone.


I didn't flag it, but I almost did.

The part about the rust team being fired was just incorrect, I downvoted it for that (burying incorrect information tends to avoid it spreading).

Combined with ranting about a spec and a single reference implementation, it looks a hell of a lot more like intentional flamebait than a misinformed user. They aren't related, are common points raised by known trolls on reddit, and are largely irrelevant. The most popular language in the world is probably Python, which has no formal spec. If I had been in a slightly less charitable mood I would have flagged it for this.


The Rust teams discusses their post-Mozilla future here: https://blog.rust-lang.org/2020/08/18/laying-the-foundation-... TLDR: they're feeling pretty good.



