The most exciting component of this release is the const fn improvements. With loops and `if` available, you can now do non-trivial computation in const fn for the first time, which closes much of the gap between Rust's const fn and C++'s constexpr. Ultimately, the Miri execution engine that this feature builds upon supports a much larger set of features than even constexpr does. It's been a project spanning years to get it merged into the compiler, used for const fn evaluation, and have its features stabilized (this part is far from over).
Previously, the table was written as a C-style array literal; I can remove it once I decide to require the 1.46 compiler or later for the library.
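As a concrete illustration of the kind of table this enables (a minimal sketch; the table contents here are made up, not the library's actual data):

```rust
// Build a lookup table of squares at compile time. `while` loops
// and mutation of locals in `const fn` are stable as of Rust 1.46.
const fn build_table() -> [u32; 16] {
    let mut table = [0u32; 16];
    let mut i = 0;
    while i < 16 {
        table[i] = (i as u32) * (i as u32);
        i += 1;
    }
    table
}

// Evaluated entirely at compile time; no runtime initialization
// and no hand-maintained literal.
const SQUARES: [u32; 16] = build_table();

fn main() {
    assert_eq!(SQUARES[5], 25);
}
```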
Basically, the interpreter interprets rustc's internal IR, so it can theoretically support the entire language. That's not a good idea for various reasons, though, so its capabilities are effectively on an allowlist that we expand over time, as we become sure we want to enable a given feature.
That seems like a really good design, as opposed to having an AST interpreter like I might do otherwise. But would this indeed support a "much larger set of features" than constexpr as was claimed?
It has been a while, and I’m remembering from a discussion of new features in C++20, but I recall that constexpr isn’t capable of fully supporting memory allocations made during evaluation, while Rust’s const fn will eventually be able to.
I think some clarification is warranted here, because Miri is intended to be used for more than just constant evaluation. In particular, Miri wants to be able to dynamically verify `unsafe` code in order to detect undefined behavior, which means that it will eventually want to support the entire Rust language. However, just because Miri supports any given Rust feature does not necessarily mean that that feature will be made available for constant evaluation; consider features that are non-deterministic (like RNG) or architecture-dependent/nonportable (like floating-point operations), and how const-evaluating such things could potentially cause memory-unsafety to occur when runtime output and const output differ.
I believe in a talk[1] it was mentioned that Rust's const eval will support heap-allocated values (accessed as references). A quick search suggests that C++20 will also support this, although it may be safer in Rust as it can give a stronger guarantee that the static memory won't be written to.
That was a restriction from before 1.46 stabilized control flow in const functions. Now that we have worked out the details around `if`, we can also stabilize `&&` and `||`.
(I'm a little surprised they weren't stabilized at the same time! Edit: they were! I just didn't look closely enough.)
Short-circuiting introduces conditional branching: if you call a function on the right-hand side of a `||` or `&&`, it might or might not be executed, depending on the value of the left-hand side.
Until this version of Rust, all conditional branches were banned from const functions.
I guess to keep things simple they just banned any feature that might cause branching.
Ahh, that makes a lot of sense. If you're going to have the compiler insert the result of a function, conditional branching seems a bit gnarly, I guess?
That's a weird restriction on not allowing logical operators. AFAIK C++ allows this for constexpr functions - as long as it can be evaluated at compile time.
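For what it's worth, a short-circuiting const fn now compiles on 1.46; a minimal sketch:

```rust
// `&&` and `||` in const context were stabilized along with `if`
// in Rust 1.46: the right-hand side only runs when the left-hand
// side doesn't already decide the result.
const fn in_range(x: i32, lo: i32, hi: i32) -> bool {
    lo <= x && x <= hi
}

// Evaluated at compile time.
const OK: bool = in_range(5, 0, 10);

fn main() {
    assert!(OK);
    assert!(!in_range(42, 0, 10));
}
```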
I’m learning rust right now and there is a lot to like. Steady updates like this are also very motivating. The ecosystem feels very sane - especially compared to npm. Top notch Wasm support, cross compiling is a breeze.
That said, coming from a FP background (mostly Haskell/JS, now TS) Rust is... hard. I do understand the basic rules of the borrow checker, I do conceptually understand lifetimes, but actually using them is tricky.
Especially in a combinator world with lots of higher-order functions/closures, it's often completely unclear who should own what. It often feels like my library/DSL code needs to make ownership decisions that actually depend on the usage.
Anyways, I guess this gets easier over time, right? Should I avoid using closures all over the place? Should my code look more like C and less like Haskell?
[edit] great answers all, providing useful context, thanks
> Anyways, I guess this gets easier over time, right?
Yes.
> Should I avoid using closures all over the place?
Not necessarily.
> Should my code look more like C and less like Haskell?
Yes. Others sometimes don't like to hear this, but IMO, Rust is not at all functional. Passing functions around is not ergonomic (how many function types does Rust have again? Three?). Even making heavy use of Traits, especially generic ones, is difficult.
Rust is very much procedural. Java-style OOP doesn't work because of the borrowing/ownership. And FP style function composition doesn't work without Boxing everything. But then you'd need to be careful about reference cycles.
> how many function types does Rust have again? Three?
Depending on what you meant, there are more than three:
* There are 3 traits, used by closures depending on their needs:
* Fn(Args) -> Output
* FnMut(Args) -> Output
* FnOnce(Args) -> Output
* *Every* `fn` is its own type (`fn() {foo}`)
* Function pointers (`fn()`), which is how you pass the above around in practice
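A small sketch of how the three traits and function pointers differ in practice (the helper names here are illustrative):

```rust
// The three closure traits, distinguished by how they capture:
fn call_fn<F: Fn() -> i32>(f: F) -> i32 { f() }                // shared access
fn call_fn_mut<F: FnMut() -> i32>(mut f: F) -> i32 { f() }     // mutable access
fn call_fn_once<F: FnOnce() -> String>(f: F) -> String { f() } // consumes captures

fn main() {
    let x = 1;
    assert_eq!(call_fn(|| x + 1), 2); // only reads `x`: implements Fn

    let mut count = 0;
    assert_eq!(call_fn_mut(|| { count += 1; count }), 1); // mutates: FnMut

    let s = String::from("hi");
    assert_eq!(call_fn_once(move || s), "hi"); // moves `s` out: FnOnce

    // A plain `fn` item coerces to a function pointer:
    fn double(n: i32) -> i32 { n * 2 }
    let ptr: fn(i32) -> i32 = double;
    assert_eq!(ptr(3), 6);
}
```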
> Rust is very much procedural.
I think this is like saying Python is very much procedural: true, but loses some nuance. Rust has some attributes of OOP, some attributes of FP. Some constructs from OOP and FP are made harder once you involve borrowing. Saying it is procedural conjures images of Pascal and K&R C in people's minds. To bolster your argument, though, I mostly use method chaining for iterators but every now and then I need to turn it into a `for` loop to keep the lifetimes understandable for the compiler, myself and others.
I realized after I wrote the comment that I was really referring to the closure traits when I said that. And I really should have said "kind" instead of "type" because, like you said, every different function has its own type.
But anyway, I don't really disagree with your point about categorizing languages as OOP, Procedural, or Functional.
But honestly, in this case, I think it's pretty damn clear that Rust is procedural WAY more than it's either OOP or FP. (Note: by OOP, I mean Java-style with tall ownership hierarchies and object-managed mutable state, not necessarily caring about inheritance. And definitely not referring to Alan-Kay-style OOP a la Lisp and Smalltalk.)
Scala can be looked at as FP and/or OOP. C++ can be looked at as Proc and/or OOP. Python, IIRC, can kind of do all of them, but I don't remember it being easy to make copies/clones in Python, so FP is questionable.
Have you ever tried to write two versions of a complex async function in Rust? One with async and one with Futures combinators? Due to ownership, the Futures combinators approach very quickly devolves into a nightmare. The language doesn't "want" you to do that.
What about function composition? Very awkward to do with matching up the different Fn traits.
And deeply nested object hierarchies are a no-go, too, because of the inability to do "partial borrows" of just a single field of a struct.
I mean, yes, it's not C because it has a real type system and generics. But... it's pretty much C in that you just write functions and procedures that operate on structs.
EDIT: Perhaps my "hardline" approach on calling Rust procedural is in response to people who have come to Rust from non-FP languages, see `map`, `filter`, and `Option` and start calling Rust functional. That's not functional programming! Ask someone who does OCaml to try out Rust and see if they call Rust functional afterwards.
The problem comes when someone else needs to also borrow the Foo, even immutably. In Java-style OOP, you typically have "objects" that own other objects, all the way down. And you manage state internally.
So it often comes up that you might call several methods in a given scope. If even one of those mutably borrows one field of one sub-object, then you can't have any other borrows of that object anywhere else in that scope.
Newbies from other languages trip on that often enough that I used to see questions about it in r/rust fairly frequently.
Just `let x = &mut bar.0` will work, but this "intelligence" is confined to the body of a single function. Rust possesses the somewhat curious property that there are functions that are intended to always be called like this
This is a complex topic, but what it boils down to is that the function signature is the API. If you borrow the whole thing, you borrow the whole thing, not disjoint parts of it.
This is also why it's okay in the body of a single function; that doesn't impact a boundary.
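A minimal sketch of the distinction, using a hypothetical `Point` type: disjoint field borrows are fine inside one function body, but a method call borrows the whole struct.

```rust
struct Point { x: i32, y: i32 }

impl Point {
    // The signature says "borrows all of self mutably" -- the
    // borrow checker can't see that only `x` is touched.
    fn bump_x(&mut self) { self.x += 1; }
}

fn main() {
    let mut p = Point { x: 1, y: 2 };

    // Inside one function body, disjoint field borrows are fine:
    let x = &mut p.x;
    let y = &p.y; // OK: different field
    *x += 1;
    assert_eq!(*y, 2);

    // But across the method boundary, the whole struct is borrowed:
    // let y2 = &p.y;
    // p.bump_x(); // error: cannot borrow `p` as mutable while `y2` lives
    p.bump_x();
    assert_eq!(p.x, 3);
}
```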
My preferred solution to this is to make partial borrows part of the method definition syntax to make it clear this is part of the external contract of your API. Also I lean towards mimicking the arbitrary self type syntax and land on something along the lines of
Where that signature tells the borrow checker that those two fields are the only ones being accessed. Nowadays this method would have to take `&mut self`, which heavily restricts more complex compositions, as mentioned in this thread.
On one hand this looks/feels absolutely useful and kind of "must have" feature, yet the boilerplate seems excessive. But at the same time it's absolutely something that should be described somewhere as metadata for a function. It seems to me this should be something that the compiler figures out and mostly elides, like lifetimes. (But this should be present as part of API, should be easily discoverable, should be there at compile time when linking against crates, etc.)
The problem with having the compiler figure it out is that the signature now becomes dependent on implementation details that can't be seen from the signature, and could be accidentally changed. This information must be an explicit part of the signature, so that it's properly visible and spells out what the function can do without having to read the body.
That said, I think putting it in the argument list like that is a terrible idea. It would add far too much clutter. What if it was put into the where section, sort of like this:
    impl Foo {
        fn bar(&mut self, other_param: &Data)
        where
            self: borrows { a, b, .. },
            other_param: borrows { a, b, .. },
        {
            // Function body
        }
    }
In one sense, it sort of fits because it's providing a "bounds" of sorts on how the reference can be used, similar to the way a trait bound would for a type. If no reference binding is provided it would default to borrowing all the fields, which is the current behaviour.
> But honestly, in this case, I think it's pretty damn clear that Rust is procedural WAY more than it's either OOP or FP. (Note: By OOP, I mean Java-style with tall ownership hierarchies and object-managed mutable state, not necessarily caring about inheritance. And definitely not referring to Alan-Kay-Style-OOP a la Lisp and Smalltalk).

> Scala can be looked at ...
Interesting perspectives, and I largely agree with all of them.
Related: I heard someone else say that while Clojure and Erlang embrace immutability for concurrency, Rust shows that you can "just mutate". It's still safe for concurrency (due to its type system).
Rust seems to be one of the only languages that embraces the combination of algebraic data types + stateful/procedural code.
But I've also found this in my Oil project [1], which is written with a bunch of custom DSLs!
I wrote it in statically-typed Python + ASDL [2], so it's very much procedural code + algebraic data types. Despite historically using an immutable style in Python, this combo has grown on me. Lexing and parsing are inherently stateful and use a lot of mutation.
----
On top of that, my collection of DSLs even translates nicely to C++. Surprisingly, it has some advantages over Rust! The model of algebraic data types is richer ("first class variants"), described here:
> Related: I heard someone else say that while Clojure and Erlang embrace immutability for concurrency, Rust shows that you can "just mutate". It's still safe for concurrency (due to its type system).
Yes! I will repeat a sentiment I articulated on Reddit about that. Even after having used Rust on a handful of small-to-medium sized projects since 2016, I never realized that I could loosen/abandon my immutability fetish that I had been trained to love over the years of working with C++ and Java. C++ needs it for concurrency, and Java needs it for concurrency and because every method can mutate its inputs without telling you. Rust doesn't have either of those problems. Having immutable-by-definition objects in Rust isn't really that useful (unless, of course, the thing is naturally, semantically, immutable anyway, like a Date IMO). It was an eye-opening epiphany and I'm excited for my next Rust session to see how my "new worldview" pans out. :)
Yes I'll be interested to see how it turns out. Any blog posts / writing on the procedural viewpoint will be appreciated.
Does Rust have something like C++'s const methods? Where you can have a method that mutates a member, but doesn't logically mutate from the caller's perspective?
It seems like you could be prevented from having races on individual variables, but still have races at a higher level.
Like on database cells. I guess no language will help you with that, and that's why Hickey wrote Datomic -- to remove mutability from the database.
> Does Rust have something like C++'s const methods? Where you can have a method that mutates a member, but doesn't logically mutate from the caller's perspective?
Yes! "Interior mutability" is the term to search for. In Rust, you'd wrap the field in a `RefCell<T>`. Many connection-pool implementations use interior mutability to manage the connections transparently to the caller.
Interior mutability is basically what, e.g., OCaml does by default. In Rust, it's opt-in.
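A minimal sketch of the pattern, using a hypothetical memoizing type (analogous to a C++ const method with a `mutable` cache):

```rust
use std::cell::RefCell;

// Logically immutable to callers, but caches computed values
// internally. Callers only ever need `&self`.
struct Fib {
    cache: RefCell<Vec<Option<u64>>>,
}

impl Fib {
    fn new() -> Self {
        Fib { cache: RefCell::new(vec![None; 64]) }
    }

    // Note: `&self`, not `&mut self` -- the mutation is interior.
    fn get(&self, n: usize) -> u64 {
        if let Some(v) = self.cache.borrow()[n] {
            return v; // cache hit
        }
        let v = if n < 2 { n as u64 } else { self.get(n - 1) + self.get(n - 2) };
        self.cache.borrow_mut()[n] = Some(v); // fill the cache
        v
    }
}

fn main() {
    let fib = Fib::new();
    assert_eq!(fib.get(10), 55);
}
```

`RefCell` moves the borrow rules to run time: overlapping `borrow_mut` calls panic instead of failing to compile, which is the trade-off you accept for the `&self` signature.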
Yeah, DB ops are always a sticking point for figuring out how to write my APIs.
I think that Rust is often assumed to be functional because it has ADTs and nice pattern matching, both of which have historically been a feature specific to FP. Just goes to show how fuzzy our definition of FP really is...
Agreed. Same with "OOP". Who the hell knows what people really mean when they say that.
Those people who think that "FP" means "type system like Haskell" are wrong, though, IMO. It precludes languages that are much more function-based, such as Clojure, Schemes, Elixir.
Which I also don't understand. Is that a style of overusing classes where functions would suffice (FooHelper)? Or is it something about the language? Because almost all popular languages have classes. Rust and Go call them "struct", but it's the same thing. Swift has "class" and "struct", but they're both the same thing as a C++ class.
Rust structs are not classes. Rust puts structs inline (on the stack), classes are virtual. Since Rust is a systems programming language, it's an actual distinction that makes a difference
I'm not sure I follow. Both C++ and Rust allow us to put structs/classes on the stack or the heap. Rust has trait objects which use a vtables. Are Rust traits the same as "classes" then?
When people say "OOP" or "class-oriented-programming" are you saying that they're referring to implementation details such as memory allocation?
I'm not sure what you mean by "classes are virtual", but if it's virtual dispatch, then that's completely orthogonal to allocating objects on the stack vs the heap.
It does have to, because of the way mutation and ownership work. Which is great! But it makes functional programming awkward. The language does not "steward" you toward function composition.
> the borrow checker, I do conceptually understand lifetimes, but actually using them is tricky.
I've been using Rust for a little over a year, almost daily at work, and for several projects. I have a pretty good intuition about how the borrow checker works and what needs to be done to appease it. That said, I don't think I'm any closer to understanding lifetimes. I know conceptually how they are supposed to work (I need the reference to X to last as long as Y), but anytime I think I have a situation that could be made better with lifetimes, I can't seem to get the compiler to understand what I'm trying to do. On top of that, very little of my code, and the code I read, actually uses lifetimes.
I've been writing Rust code since before the 1.0 days, and I still can't understand lifetimes in practice.
When the compiler starts complaining about lifetimes issues, I tend to make everything clone()able (either using Rc, or Arc, or Arc+Mutex, or full clones).
Because if you start introducing explicit lifetimes somewhere, these changes are going to cascade, and tons of annotations will need to be added to everything using these types, and their dependent types.
I think you're really intended to do the latter rather than the former. I mean, Rust lets you do either–it gives you the choice if performance isn't your concern–but usually it's better to not clone everything.
I find that lifetimes are ok, albeit annoying sometimes, especially the cascade part, as you mention. The one thing I can't get to stick in my brain is variance. Every time, I need to go back to https://doc.rust-lang.org/nomicon/subtyping.html#variance
It's worth noting that every reference in Rust has a lifetime, but the compiler is usually smart enough to infer it. What you are talking about is explicit lifetimes.
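A small sketch of the difference: the first two signatures mean exactly the same thing, and elision only fails once the compiler can no longer guess.

```rust
// Elided lifetime -- the compiler infers `<'a>(&'a str) -> &'a str`.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// The identical signature, spelled out explicitly.
fn first_word_explicit<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

// With two reference inputs, elision can't guess which one the
// output borrows from, so you must annotate.
fn longer<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    assert_eq!(first_word_explicit("hello world"), "hello");
    assert_eq!(longer("abc", "de"), "abc");
}
```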
As someone in a similar position to you, I find my lifetime understanding... moderate. Complex lifetime usage can still tweak my brain, notably how I can design with it. But simple lifetime usage is intuitive.

A simple example I often run into is wanting to do something with a string without taking owned parts of the string. Very intuitive how the `&str` matches the lifetime of the owned value.

On the other hand, the other day I was trying to write a piece of software where:

1. I wanted to deserialize a large tree of JSON nodes. I had the potential to deserialize these nodes without owning the data: since Serde supports lifetimes, I could deserialize strings as `&str`s and hypothetically not allocate a lot of strings.

2. In doing that, because a tree could be infinitely large, I couldn't keep all of the nodes together. Nodes could be kept as references, but eventually would need to be GC'd to prevent unbounded memory use.

3. To do this, I _think_ lifetimes would have to be separate between GC'd instances. Within a GC'd instance, you could keep all the read bytes and deserialize with refs to those bytes. When a GC took place, you'd convert the remaining partial nodes to owned values (some allocation) to end the lifetime, and restart the process with the owned node as the start of the next GC lifetime. ... or so my plan was.

I have, I think, just enough understanding of lifetimes to _almost_ make that work. I _think_ some allocations would be required due to the GC behavior, but it would still remove ~90% of the allocations in the algorithm.

Unfortunately, I got tired of designing this complex API and just wrote a simple allocating version.

Conceptualizing allocations and the lifetimes to make them work is... interesting. Especially when there is some data within the lifetime that you want to "break out of" the lifetime, as in my example (where I had a partial node and made it owned).

I still think I understand enough to do it; it'll just take a fair bit of thinking and working through the problem.
These kinds of optimizations are incredibly painful in Rust. One common suggestion is to sidestep the issue and store an offset + length in the nodes, then use that to look up the value in the original string when you need it.
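A minimal sketch of that suggestion, with hypothetical `Span`/`Node` types: the nodes carry no lifetime parameter, because they index into the source rather than borrowing from it.

```rust
// Instead of nodes holding `&str` slices tied to the input's
// lifetime, store (offset, len) into the original buffer.
struct Span { start: usize, len: usize }

struct Node {
    name_span: Span, // no lifetime parameter needed
}

impl Node {
    // Resolve the span against the source only when needed.
    fn name<'a>(&self, src: &'a str) -> &'a str {
        &src[self.name_span.start..self.name_span.start + self.name_span.len]
    }
}

fn main() {
    let src = String::from("{\"key\": \"value\"}");
    let node = Node { name_span: Span { start: 2, len: 3 } };
    assert_eq!(node.name(&src), "key");
    // The nodes own no borrowed data, so they can be stored and
    // moved around independently of any borrow of `src`.
}
```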
I've tried and tried, but I've never found a situation where explicit lifetimes were the answer. It's almost always more complex than that. I mean, everywhere that is complex enough that implicit lifetimes don't work is also too complex for explicit lifetimes, and almost always requires Rc or Arc to solve. Maybe I'm missing something, but it seems like there are so many other missing topics the Rust Book could spend time on that would be more effective than teaching explicit lifetimes.
There are a lot of situations where explicit lifetimes are the answer, TBH. However, you have to have a very good working model of lifetimes in order to get the annotations right.
I wrote a parser that needed it. But yeah for the most part whenever explicit lifetimes came into the picture it means that I have made some sort of mistake and need to rethink my approach.
Another edit: I am not a Functional programmer, and have never known Haskell or any Lisp. Erlang is as close as I've ever gotten. I've found Rust to be a fantastic language for writing Functionally.
This is a perfectly reasonable solution. You might be leaving performance on the table, but:
1) if performance isn't a measurable problem for you, there's no point in eking the last bit of performance out of these allocations;
2) it simplifies the code itself;
3) sometimes clones are actually efficient; people forget to make their small ADTs Copy;
4) if you're learning the language, this lets you delay the moment when you have to fully understand how lifetimes actually behave in complex cases, which means that when you do, you will have a better grasp of the rest of the language and will be able to form a better mental model of how it fits with other features.
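On point 3, a small sketch of what deriving `Copy` on a small ADT looks like (the types here are made up for illustration):

```rust
// For small, plain-data types, `Copy` makes "clones" free: the
// value is a trivial memcpy, with no heap allocation involved.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Direction { North, South, East, West }

#[derive(Clone, Copy, PartialEq, Debug)]
struct Cell { x: u8, y: u8, facing: Direction }

fn main() {
    let a = Cell { x: 1, y: 2, facing: Direction::North };
    let b = a; // copies; `a` is still usable afterwards
    assert_eq!(a, b);
    assert_eq!(a.facing, Direction::North);
}
```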
Ah, if you’re making dsl code or functional combinators, you usually want to ‘move’ your values instead of ‘borrowing’ them.
example:
    fn add(mut self) -> Self { self }
    fn add(self) -> Self { self }

instead of:

    fn add(&mut self) {}
    fn add(&self) {}
With this, you will be able to 'store' closures easily and apply them later. No more fighting with the borrow checker over whether to borrow as mut or not. You will also avoid a few copies.
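A hedged sketch of what this move-based style looks like when storing closures (the `Pipeline` API here is hypothetical, just to show the pattern):

```rust
// A builder that takes `self` by value: each step moves the value
// through the chain, so stored closures never fight the borrow
// checker over `&mut self`.
struct Pipeline {
    steps: Vec<Box<dyn Fn(i32) -> i32>>,
}

impl Pipeline {
    fn new() -> Self {
        Pipeline { steps: Vec::new() }
    }

    // `self`, not `&mut self`: the pipeline moves in and back out.
    fn then(mut self, f: impl Fn(i32) -> i32 + 'static) -> Self {
        self.steps.push(Box::new(f));
        self
    }

    fn run(&self, mut x: i32) -> i32 {
        for step in &self.steps {
            x = step(x);
        }
        x
    }
}

fn main() {
    let p = Pipeline::new()
        .then(|x| x + 1)
        .then(|x| x * 2);
    assert_eq!(p.run(3), 8); // (3 + 1) * 2
}
```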
Sometimes you will be annoyed by changes, I guarantee it, BUT since 1.0 that's decreased a lot, and compared to npm it's night and day. You'll think you're dealing with C in relative terms of stability if npm is your baseline :D
I was super impressed by the O’Reilly book, which throws you right in to writing a multithreaded Mandelbrot set plotter. It also goes through writing a multithreaded HTTP server. Pretty neat!
Meta-answer: my default when picking up a new language is usually to learn just enough to be able to start writing code, and then learn new things piecemeal as necessary to solve whatever thing I'm working on, and it sounds like you're hoping to do something like that here.
I found that approach for Rust in particular to not work well at all, and have colleagues who've reported the same. There are some fairly complicated, fundamental concepts that are unique to Rust that I think need to be tackled before you can really do much of anything (mostly borrowing and lifetimes), and that's not immediately obvious from starter programs -- because of lifetime elision, some early programs can look deceptively familiar, but there's a bunch of barely-hidden complexity there, and as soon as you start to stray from the tutorial path, you'll run headfirst into a wall of compiler errors that you're not yet equipped to understand. For Rust I'd highly recommend just reading a book cover to cover first (either TRPL or the O'Reilly one), and then starting to write code.
One thing I'd be wary of is Googling error messages and taking answers from Stack Exchange. Rust has mutated (heh) a fair bit over the years and many SE answers to noob problems are obsolete and sometimes incorrect. At the very least check the datestamp on any answer and be wary of anything more than a year or two old. This goes double if the answer has some extremely awkward looking syntax with lots of modifiers and underscores sprinkled throughout. There's probably a better way to do it now, or an alternative solution that works better. Or maybe you're just trying to do something that Rust makes hard, like concurrent processing on a shared data structure.
The manual is safer even though it's harder to find your exact problem and solution, especially when you're just starting out.
I literally spend tens of hours a week on Stack Overflow ensuring this isn’t the case, or if it is that it’s clearly notated.
As always, feel free to drop into the Rust Stack Overflow chat room[1], or any of the official Rust discussion channels, and ping me or other Stack Overflow contributors to review and update answers.
Seconded. I've been learning on SO heavily lately as I'm writing my first real Rust program (an IRC bot/client/server/not sure yet), and I was impressed by how many questions and answers had been updated with notes about things being potentially out of date. Not something that I think I've ever seen in the PHP land from whence I came.
Trying to implement anything in Rust will set you up for a crash-course. Even the simplest non-trivial programs will introduce you to the Rust borrow checker, a major feature absent in C# / Python.
Once you've learned the basics (plenty of links in the siblings, including the official Rust Book), this is a key (and entertaining!) unofficial resource that really hammered home for me the ways that Rust is different from the C family when it comes to working with references: https://rust-unofficial.github.io/too-many-lists/
It also taught me about Boxes and Rc's, which are essential for certain kinds of things, and which I don't remember being covered in the main Rust Book at all
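For reference, a minimal sketch of where `Box` and `Rc` come up (the classic recursive-list case the linked tutorial walks through):

```rust
use std::rc::Rc;

// A recursive type needs indirection to have a known size:
// `Box` gives single ownership on the heap.
enum List {
    Cons(i32, Box<List>),
    Nil,
}

fn sum(list: &List) -> i32 {
    match list {
        List::Cons(v, rest) => v + sum(rest),
        List::Nil => 0,
    }
}

fn main() {
    use List::{Cons, Nil};
    let list = Cons(1, Box::new(Cons(2, Box::new(Nil))));
    assert_eq!(sum(&list), 3);

    // `Rc` instead gives shared ownership via reference counting:
    let shared = Rc::new(vec![1, 2, 3]);
    let alias = Rc::clone(&shared); // bumps the count, no deep copy
    assert_eq!(Rc::strong_count(&shared), 2);
    assert_eq!(alias.len(), 3);
}
```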
Yeah, I figured they were probably in there somewhere. It's possible I read the book before they were added, or that I skipped them (I glossed over some of the final chapters), or it's possible I just didn't fully grasp how important they were until I followed the linked-list tutorial.
What I like about the latter is how closely it steps through the problem-solving process within the context of a very familiar task, teaching you at each stage 1) why the borrow-checker is upset and 2) what tool you need to apply in order to satisfy it. If the Book taught me "what is Rust and what are its features?", this taught me "how do I use Rust in practice?".
You might check out https://exercism.io/tracks/rust . Some exercises are a little heavy in the math department, but personally I've always found test-driven learning useful when picking up a new language, thanks to the instant feedback.
I am in a similar boat. Python-centric data scientist. Very tempted to learn Rust so I can accelerate certain ETL tasks.
Question for Rust experts: On what ETL tasks would you expect Rust to outperform Numpy, Numba, and Cython? What are the characteristics of a workload that sees order-of-magnitude speed ups from switching to Rust?
I'm far from an expert, but I would not expect hand-written Rust code to outperform Numpy. Not because it's Rust and Numpy is written in C, but because Numpy has been deeply optimized over many years by many people and your custom code would not have been. When it comes to performance Rust is generally comparable to C++, as a baseline. It's not going to give you some dramatic advantage that offsets the code-maturity factor.
Now, if you're doing lots of computation in Python itself - not within the confines of Numpy - that's where you might see a significant speed boost. Again, I don't know precisely how Rust and Cython would compare, but I would very much expect Rust to be significantly faster, just as I would very much expect C++ to be significantly faster.
I deal with a lot of ragged data that is hard to vectorize, and currently write cython kernels when the inner loops take too long. Sounds like Rust might be faster than cython? Thanks for the feedback.
Also it might take 20x less RAM compared to using Python objects like sets and dicts. In Rust there's no garbage collection, and you can lay out memory by hand exactly as you want.
I’m fascinated by Julia and have test driven it before but it didn’t click for me. Maybe I was doing it wrong and/or the ecosystem has matured since I last looked.
I guess I generally do like the pythonic paradigm of an interpreted glue language orchestrating precompiled functions written in other languages. I don’t need or want to compile the entire pipeline end to end after every edit, that slows my development iteration cycle times.
I just want to write my own fast compiled functions to insert into the pipeline on the rare occasions I need something bespoke that doesn’t already exist in the extended python ecosystem. It seems like a lower level language would be optimal for that?
If the dev cycle feels slow in julia, you can make it snappier with a tool like Revise.jl, it is quite handy.
If you just need to fill a small and slow gap maybe something like numba is also a good option to stay within python.
Going all the way to a low level language would require the compilation, the glue code and expertise in both languages. Probably that slows down the development pipeline more than the JIT compilation from julia or numba.
Anyway, any opportunity to learn/practice some rust is also great!
Column-wise map-reduce over large dataframes usually gives you a 1000x or so speedup.
With Rust you can stream each record and leverage the insane parallelism and async-IO libs (rayon, crossbeam, tokio) with a very small memory footprint. Sure, you have asyncio in Python, but that's nowhere near the speed of tokio.
Thanks for the pointers, those crates seem great. The flaky multithreading libs are my least favorite part of python, and rust’s strength in this area seems very appealing.
I like to implement something I have already done before. In my case, the ID3 algorithm has a nice balance of challenge, experience and documentation available. You could try to write it for a specific case, where you structure your data, and then apply it to a generic case.
Try to do some graphics programming with the backend of your choice. There is also this cool nanovg port: https://github.com/cytecbg/gpucanvas. Run the demo in examples to see the nanovg demo.
It is the Rust way of specifying a function as being _pure_.
In other words the output is dependent only on the function arguments, and not on any external state.
This means they can be evaluated at compile time.
I suppose in the future, it could also allow better compiler optimizations.
Pure functions are functions where the return value only depends on the function arguments. If the function arguments are not known at compile time, obviously you can't evaluate it at compile time. It would only be possible to do that when all the arguments are also known at compile time (constants).
But a const fn can also do that: if given non-constant parameters, it will be evaluated at run time. So I (having never used Rust before) still haven't seen the distinction between pure and const. What's an example of a function that is pure but cannot be evaluated at compile time with constant parameters?
The parent didn't specify calling with constant parameters, which makes a huge difference. To answer your question, basically anything the compiler doesn't know how to evaluate - which has been expanded in this release, but does not include everything still.
Looks like we have some terminology confusion. I read mijamo's question as being about the theoretical ability to evaluate at compile time (the value is knowable) not whether the compiler does do it, and that's what I meant in my comment too.
If you say that 'pure' functions are not compile-time-evaluatable because they may be given parameters that are not known at compile time, then you must also say that const fns are not compile-time-evaluatable. I think it's also clear that we mean for const fns to count, so the assumption that the parameters are known at compile time was implicit in the question.
Under those two assumptions: are pure functions evaluatable (in theory) at compile time (on values known at compile time)? As far as I can think, the answer is yes? In which case, I'm not entirely sure what the distinction between 'pure' and 'const fn' is supposed to be, except to separate out the subset of pure functions that is evaluated in practice. Is there anything more to it?
I think that's it in a nutshell. You can't evaluate everything at compile time, even when you could theoretically. So you need some way to mark the subset of pure functions that can be evaluated at compile time, which is what const fn does. That way if a const fn only calls other const functions you know you can evaluate it. It's a convenient way of tagging functions.
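To make the distinction concrete, here is a minimal sketch (the function name `square` is just an illustration): a `const fn` is evaluated during compilation when used in a const context, and evaluated at run time like any other function otherwise.

```rust
// A const fn is a function the compiler *knows* it can evaluate at
// compile time. The same function can still be called at run time.
const fn square(x: u32) -> u32 {
    x * x
}

// Used in a const context: evaluated during compilation.
const AT_COMPILE_TIME: u32 = square(7);

fn main() {
    // Same function, but the argument is only known at run time,
    // so this call is evaluated at run time.
    let n = std::env::args().count() as u32;
    println!("{} {}", AT_COMPILE_TIME, square(n));
}
```

The point of the `const fn` marker is exactly the tagging described above: inside a `const fn` you may only call other `const fn`s, so the compiler can guarantee const-evaluability transitively.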
Well, that is not the definition. The output of a pure function can depend on its input arguments, and those arguments can definitely depend on runtime properties.
I knew it was, but couldn't find a rust-dev post with "pure" in the same time-frame. Thank you! Yours may be it, but I think est has a convincing case that it was the thread this one spun out from...
The web interface of gmane.org is down, so the link is not available. Turns out though that the rust-dev mailing list archive is present on both mail.mozilla.org and mail-archive.com, so one only has to find the mail corresponding to the link.
Another hint comes from the reddit thread you linked above: someone named maxcan stated they started the thread. Looking up their name plus "pure" gives only e-mails from a single thread, including an e-mail from Graydon: https://www.mail-archive.com/search?l=rust-dev%40mozilla.org...
It covers precisely the topic you mentioned and is in a thread started by maxcan. I think it's the e-mail we are looking for.
To verify, the difference between the two IDs is either 239 or 58, depending on which of the two numbers in the URL points to the actual e-mail, but 58 is more likely. The 0.7 release announcement for example has a difference of 57 and is quite close to both:
Argh! You are a better sleuth than me. I was looking at that month's rust-dev archives and didn't realize the subject did not have "pure" in it, so I looked right over it.
Glad to see `Option::zip` stabilized. I tend to write such a helper in many of my projects to collect optional variables together when they're coupled. Really improves the ergonomics of doing more railroad-style programming.
You sometimes have two Options that must both be Some to have any effect, but other reasons prevent you from making an Option of a tuple of those two fields. Eg think of deserializing a JSON that contains optional username and password strings, but you need both if you are to use them to authenticate to some remote.
In that case, you currently have to write code like:
if let (Some(username), Some(password)) = (username, password) {
    /* both are set */
} else {
    /* at least one is not set */
}
With zip this can be written as `if let Some((username, password)) = username.zip(password) {` In this case it doesn't look like a big difference, but it does allow you to chain other Option combinators more easily if you were doing that instead of writing if-let / match. Using combinators is the "railroad-style programming" that kevinastone was talking about. For example, you can more easily write:
let (username, password) = username.zip(password).ok_or("one or more required parameters is missing")?;
You could of course still do this without .zip(), but it would be clunkier:
let (username, password) = username.and_then(|username| password.map(|password| (username, password))).ok_or("one or more required parameters is missing")?;
The zip form does lose the information of which of the two original Options was None, so if you do need that information (say the error message needs to specify which parameter is missing) you'd still use the non-zip form with a match.
The zip solution however requires any reviewer to look up what it actually does, whereas the "if let" is more of a language fundamental and known to most reviewers.
Therefore I would actually prefer the long/verbose form without the zip.
It's just the opposite. zip is a normal function written in normal code, so if the reviewer doesn't know what it does then they can just click through to the definition in their IDE. Whereas "if let" is some kind of special language construct that a reviewer has to look up through some special alternative channel if they don't understand it.
If I'm doing a code review I don't have an IDE. I only see text (yeah - limitation of the tooling, but reality). I can search for the functions, but it's a hassle and I want to limit having to do it as much as possible. It's not even super easy in Rust, since some functions are defined on (extension) traits and you don't exactly know where to search for them if you don't have the IDE support and are not already an expert.
"if let" is certainly a special construct - but it's also one that Rust authors and reviewers will typically encounter rather fast since a lot of error handling and Option unwrapping uses it. Knowing the majority of library functions will imho take longer.
Understanding - or looking up - library functions is something you're always going to have to do during code review (it's well worth getting IDE integration set up). zip is a very standard and well-known function (I used it yesterday, in a non-Rust codebase); it may well end up being better-known than "if let". Learning what a library function does is certainly never harder than learning what a keyword does and it's often easier (apart from anything else, you know that a library function follows the normal rules for functions, so if you can see how its output is used then you can often tell what it does. Whereas you can't rely on that kind of reasoning with language keywords).
The quality of life improvements to cargo look very nice, and I feel that rust wouldn't be remotely as successful without such a tool. I'm very glad I won't have to be manually picking target directories out of my borg backups anymore when I'm running out of disk space.
Yeah, but only if you use them to compute significant items during compilation.
The upside of course is that any computation you perform at compile time is a computation that you don't have to perform at runtime. For some applications this trade-off is definitely worth the cost of admission.
At the end of the day it's a trade off that will have to be made in light of the scenario it's being used in. Being able to make that decision is a good thing.
Awesome! Any idea when relative links will be available in rustdoc? Seems like it's just on the edge of stabilizing (https://github.com/rust-lang/rust/pull/74430) but I'm curious how long it takes to see in a release after this happens.
There can be fuzziness here depending on exactly when it lands, but generally, if something lands in nightly, it'll be in stable two releases after the current stable.
Common Lisp user here. Why just that? How come you can’t have the entire language as well as all your language customizations available at compile time for evaluation?
You can! Just not through `const fn`. Rust has macros, which at their limit are capable of running arbitrary Rust code that manipulates syntax and communicates with the compilation process, just like in a good old Lisp.
Why isn’t `const fn` like this too? One word answer: determinism.
Rust takes type/memory/access/temporal safety very seriously, and consequentially you can’t use anything in a `const fn` that isn’t fully deterministic and doesn’t depend in any way on platform-specific behavior. This includes, for example, any floating-point computation, or any random number generation, or any form of I/O, or handling file paths in an OS-specific way, or certain kinds of memory allocation. The span of things possible in `const fn`s has been expanding over time, and will in the nearish future largely overtake C++’s direct counterpart of it (`constexpr` functions) in capability. But some things will intentionally never be possible in `const fn`s, for the reasons given above.
Can the Mozilla layoffs (on the Servo team) impact Rust's future?
It is just a question to understand whether there are other big, healthy Rust projects out there. Just curious
> entire rust team was recently fired from Mozilla
This is completely incorrect, verging on FUD. Mozilla had very few people working full-time on Rust; of the people who were laid off in the recent wave, the ones working on projects adjacent to Rust were working on Servo or WASM-related codebases. In particular, the person at Mozilla who was most central to Rust development, Niko Matsakis, is still employed there and still working full-time on Rust.
They're moving to a Rust Foundation with corporate sponsorship model. I think Rust will be fine, even Amazon expressed interest in sponsoring development.
System languages don't grow on trees. Rust has had a lot of non-trivial effort put into it and is very usable right now. Somebody is going to see the value just lying around and is going to pick up the financial slack.
I see it as vaguely analogous to the current movie theater situation in the US. A lot of companies are seeing the end of their business, but all of those buildings are still sitting around waiting for someone to swoop in and buy them for fire sale prices.
It's a factually incorrect comment: the entire Rust team was not fired from Mozilla.
Does that mean it deserves to be flagged? I didn't flag it. But to be clear, while it does have some opinions, it is also plain incorrect in the facts it asserts.
You can also find many, many, many comments critical of Rust that are upvoted, let alone not flagged. I wouldn’t extrapolate from a single comment.
It is in a really weird space. However, a lot of the folks involved have chosen to talk about where they're at publicly, and Niko said he was not laid off.
Also, for what it's worth: there's a charitable and uncharitable reading of the comment.
Charitable reading: everyone who was at Mozilla who was paid to work on Rust was laid off.
Non-charitable reading: everyone who was paid to work on Rust was at Mozilla, and was laid off.
HN readers are supposed to follow the principle of charity, but it's quite possible that people either ignored that, or misunderstood the parent as saying the latter, in which case it feels quite egregiously incorrect, rather than maybe slightly wrong.
The part about the rust team being fired was just incorrect, I downvoted it for that (burying incorrect information tends to avoid it spreading).
Combined with ranting about a spec and a single reference implementation, it looks a hell of a lot more like intentional flamebait than a misinformed user. The two topics aren't related, are common points raised by known trolls on reddit, and are largely irrelevant. The most popular language in the world is probably Python, which has no formal spec. If I had been in a slightly less charitable mood I would have flagged it for this.
In addition to the linked examples, I have some code of my own which is made simpler due to this feature: https://github.com/RustAudio/ogg/commit/b79d65dced32342a5f93...
Previously, the table was present as a C-style array literal; now I can remove it once I decide for the library to require the 1.46 compiler or later versions.
Link to the old/current generation code: https://github.com/RustAudio/ogg/blob/master/examples/crc32-...
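For readers who haven't seen the pattern: a lookup table like this can now be built in a `const fn` using the loops and `if` stabilized in 1.46. Below is a hedged sketch using the common reflected CRC-32 polynomial (0xEDB88320) as an example - not necessarily the exact polynomial or layout the ogg crate uses.

```rust
// Sketch: build a CRC-32 lookup table at compile time with a const fn.
// Uses `while` loops because `for` is not available in const fn.
const fn crc32_table() -> [u32; 256] {
    let mut table = [0u32; 256];
    let mut i = 0;
    while i < 256 {
        let mut crc = i as u32;
        let mut j = 0;
        while j < 8 {
            if crc & 1 != 0 {
                // Reflected CRC-32 polynomial (illustrative choice).
                crc = (crc >> 1) ^ 0xEDB8_8320;
            } else {
                crc >>= 1;
            }
            j += 1;
        }
        table[i] = crc;
        i += 1;
    }
    table
}

// The whole table is computed during compilation; no runtime init needed.
const CRC_TABLE: [u32; 256] = crc32_table();

fn main() {
    println!("{:#010x}", CRC_TABLE[1]);
}
```

This replaces the hand-written array literal entirely: the generator code and the table can no longer drift apart.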