> but if it cannot be done for 1.0 then I would expect to see those changes shortly thereafter
Why is Rust in such a rush to hit 1.0? Recent improvements have massively improved it, but I'm afraid it will lock too soon and suffer for it.
Rust holds some promise for me, but things like simple buffer manipulation, strings, and slices are ugly and torturous in Rust, compared to Go where they are a pleasure to work with.
I don't know about "rush" but they should move quickly because Rust is in a fleeting sweet spot of the developer attention cycle where lots of people are eager to jump onboard and it's really good enough for that now.
As the Rust folks have often said, you're barking up the wrong tree if you're waiting for Go-level simplicity.
I honestly think that common string handling would be much improved if String could point to either &'static str or an owned buffer, with allocation on modification if necessary, and literals coerced to either &'static str or a String that points to the static buffer depending on type inference.
String would never allocate more than it does currently, but it would do it in less predictable situations, which some people probably wouldn't like. However, it would basically allow people who don't care that much about short strings to completely eschew all the .as_slice()s and .to_string()s that are currently ubiquitous in any Rust string handling.
> if String could point to either &'static str or an owned buffer,
That used to be the purpose of MaybeOwned, which was just replaced by CowString (a typedef of Cow with String as the owned type and str as the borrowed one).
On Cow, a deref returns an immutable slice (with no copy) so slice methods can be called directly on the CowString, `to_mut` copies the source data if necessary and returns a mutable ref, and `into_owned` copies the source data if necessary and returns it.
So if you don't really care what you get and whether it's going to be copied or whatever, ask for a CowString and go to town.
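For what it's worth, here's a rough sketch of that in practice (the ensure_trailing_slash helper is just invented for illustration, and I'm writing Cow<'a, str> directly rather than the CowString alias):

```rust
use std::borrow::Cow;

// Invented helper: borrows when no change is needed, and allocates
// only when the input actually has to be modified.
fn ensure_trailing_slash<'a>(path: &'a str) -> Cow<'a, str> {
    if path.ends_with('/') {
        Cow::Borrowed(path)              // no copy, just a borrow
    } else {
        Cow::Owned(format!("{}/", path)) // allocates only on this branch
    }
}

fn main() {
    let mut s = ensure_trailing_slash("usr/local/");
    // Deref gives an immutable &str with no copy, so slice methods
    // work directly on the Cow:
    assert!(s.starts_with("usr"));
    // to_mut copies the borrowed data into an owned String only if
    // necessary, then hands back a mutable reference:
    s.to_mut().push_str("bin");
    // into_owned likewise copies only if the data is still borrowed:
    assert_eq!(s.into_owned(), "usr/local/bin");
}
```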
Using the experimental slicing syntax, x.as_slice() can now be written x[], which is a lot better. I'm not sure what will happen to .to_string(), but it's a recognized problem.
Alternatively, since String implements Deref<str>, you can do &*x, and that works without enabling the slicing syntax (although the syntax is likely to be in 1.0).
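A quick sketch of those spellings, for the curious (takes_str is just an invented stand-in for any function that wants a &str; I'm writing the full-range slice as &x[..], modulo whatever the sugar finally ends up being):

```rust
fn takes_str(s: &str) -> usize {
    s.len()
}

fn main() {
    let x = "hello".to_string();

    // String derefs to str, so an explicit reborrow of the deref
    // gives a &str with no copy:
    let a = takes_str(&*x);

    // Full-range slicing does the same thing through the indexing sugar:
    let b = takes_str(&x[..]);

    // And in argument position, a plain &x usually works too, thanks
    // to deref coercion:
    let c = takes_str(&x);

    assert_eq!((a, b, c), (5, 5, 5));
}
```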
1.0 isn't a total freeze, just a promise not to remove base syntax or features. If you write code for 1.0 now, it'll compile on a future 1.0 release. They are still planning on adding many more things, both in terms of libraries, and language features.
The things you've mentioned as torturous are mostly sugar on-top of the already implemented primitives, and are even being worked on ([n..m] slice notation, to name one).
> If you write code for 1.0 now, it'll compile on a future 1.0 release.
Almost: it will compile on all future Rust releases to the end of time (unless Rust wants to commit suicide). Languages cannot ever make breaking changes once they reach a stable version (unless they have relatively little adoption). This is even more crucial for "low level" languages like Rust, as they are often used for projects that are meant to last for decades, and are often chosen by programmers who don't have the patience for this kind of thing.
> Almost: it will compile on all future Rust releases to the end of time (unless Rust wants to commit suicide). Languages cannot ever make breaking changes once they reach a stable version (unless they have relatively little adoption).
With proper tooling, one could automatically upgrade a 1.x Rust compatible codebase to be 2.0 compatible (if such a version ever exists in the future).
AFAIK (and it would be interesting to hear of counterexamples) that's never been done for a mainstream language before (of course, Rust is far from mainstream -- yet -- but if it never becomes mainstream then it can do whatever it wants, including breaking backwards compatibility).
Regarding 1.0, I thought about the same thing some time ago.
Rust has changed in so many areas in such a short time that a fast move to 1.0 strikes me as rushed.
Usually you want things to stay stable for a while so they can prove themselves.
If you switch to 1.0 soon after a big change, you might regret it when you discover a short time later that the recent change also had some shortcomings, but you can't fix it anymore due to backwards-compatibility guarantees.
I have always thought of Rust development as akin to simulated annealing. They could spend years still searching for the proper combination of language features that will give them a global optimum, but in that time their window of relevance will likely close. I'd rather see them have an impact on the industry with whatever local optimum they have now rather than quietly toil away searching for perfection as they spiral off into irrelevance.
The time for 1.0 is not now, IMO, but it is soon. At some point you just have to cross your fingers and launch, and learn to live with whatever mistakes you may have made. Experience has shown that the language is good enough in most respects, so even if they flub the last-minute things it won't be any worse than what any other brand-new language has had to live with.
Rust is sacrificing a lot for the sake of memory safety without garbage collection. The whole language is in some ways an experiment as to whether modern language design can ameliorate the costs of doing that. But for the kind of problem where Go would even be in the running, you don't need to pay those costs - maybe OCaml (or even Haskell) would deliver the benefits you want, without the complexity costs?
After a while, the mental cost associated with Rust isn't that huge. Rust becomes a fairly productive language (even if it has some rough edges). I will say, however, that how you think of programming and structuring your application is very different from other languages. Especially when it comes to multi-threaded applications and how you deal with ownership and safety. But in the end, I think it pays off pretty well.
Have you compared with OCaml or something like it (maybe even Haskell)? My feeling is that the overhead in Rust is lower than you might think, but it didn't seem like it would ever go away completely. But I never got very far into it.
I feel like I'm actually offloading lots of my cognitive overhead to the compiler compared to all the things I need to juggle mentally when working in C or C++. It's a relief rather than a burden. I still sometimes curse the borrow checker in the moment, but I know that the quality of the resulting code is worth it. Figuring out the puzzle up front is much more enjoyable than tracking down intermittent segfaults down the road.
> I feel like I'm actually offloading lots of my cognitive overhead to the compiler compared to all the things I need to juggle mentally when working in C or C++. It's a relief rather than a burden.
Sure, but I feel like Haskell is the same thing, only more so. And the invariants you keep track of - does this function access the database? could this function error? which audit events might happen in this codepath? - are IMO more useful than in Rust, where you spend the same effort tracking memory ownership. Which, sure, if you need that, it's better to have a compiler that can help you with it, but getting good performance without it is easier than many people seem to think.
I wish I could give a brief backstory, but the history of closures in Rust is a long and convoluted one, with many dead ends and failed experiments. :) Suffice to say, `proc` is a relic from a different era (call it "boxed closures 3.0") which always felt a bit out of place, and there was much rejoicing today at its imminent removal.
I think our final design (which might be charitably referred to as "unboxed closures 2.0") is actually really fantastic, and doesn't leave me with any of the lingering dissatisfied feelings that our previous designs did (at least, assuming that all the improved inference mentioned in the OP materializes). I look forward to playing around with them once they're more polished in a week or so.
> the history of closures in Rust is a long and convoluted one, with many dead ends and failed experiments.
I desperately hope that someday, when all the dust settles, someone writes an account of this and other long and winding paths Rust has taken through its evolution, so people in the future can get the benefit of this wisdom. An account of dead ends could make for fascinating reading.
The trick would be putting in just the right amount of detail. Enough so the reader can truly understand the reasoning behind why things worked (or didn't) without being so dense as to be unreadable.
So for what it's worth, this blog post is mostly a technical deep dive for people who are very interested in the full details.
From a UI perspective, I (and many others, including core team member Aaron Turon) have argued successfully that the vast majority of these details should be inferred from usage. So if you try to mutate something in a closure, the closure will adopt that requirement. If you want to capture a value by reference, you won't also be allowed to move the closure off of the stack.
In other words, the rules for closures will follow intuitively from the regular ownership rules, and you will never, in normal usage, have to be explicit about any of this.
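To make that concrete, here's a rough sketch of the kind of inference we're after (names invented, and modulo the exact surface syntax): the trait each closure gets falls out of how its body uses the captured variables.

```rust
fn main() {
    let name = "world".to_string();
    let mut count = 0;

    {
        // Only reads `name`, so it captures by shared reference and
        // gets the Fn trait: callable any number of times.
        let greet = || println!("hello, {}", name);
        greet();
        greet();

        // Mutates `count`, so it captures by mutable reference and
        // gets FnMut; note the binding itself has to be `mut`.
        let mut bump = || count += 1;
        bump();
        bump();
    }

    // Moves `name` out of its environment, so it only gets FnOnce
    // and can be called a single time. The `move` keyword isn't
    // strictly needed here; the by-value capture would be inferred.
    let consume = move || name;
    let owned = consume();

    assert_eq!(count, 2);
    assert_eq!(owned, "world");
}
```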
This blog post was intended to explain in excruciating technical detail how it all works under the hood, and could have been clearer about the intended usage.
As mentioned in the blog post, I'm hopeful that the awful |&mut:| and |&:| and |:| can be completely removed with greater inference capabilities. I'll be speaking with nmatsakis soon about exactly what needs to be done to make this a priority for 1.0. Likewise, the `move` keyword should be possible to infer in many cases (though I honestly don't mind it very much).
Finally, I'm actually not bothered about the type of a closure being written as `FnMut(blah)` instead of `|blah|`, because that syntax was always a bit line-noisy in type position anyway.
The bottom line is simply that closures in a low-level systems programming language present a daunting design challenge that is hard to appreciate until you've wrestled with the domain. There is no one-size-fits-all here that does not result in compromises that make closures essentially useless for this niche. The best that we can do is reduce down to as few knobs as possible (Rust closures are still a bit less expressive than C++ lambdas) and lean on inference to have the compiler Do What I Mean.
I had to read this 2-3 times to figure out what was going on, but I'm still not entirely sure what led to proposing this change. I guess I'll write what I got out of it and hope I'm crazy and get corrected.
It seems to me that we're restricting what a closure can do when it executes based on the type of closure that it is. Is this in the name of safety? If so, that seems like a good idea given Rust's goals. Right now it feels like a ton of mental overhead but I'll try and play around with it and see if I have an "A-ha!" moment.
Some of the blog post, mainly the part about using a wrapper function so the compiler can still inline monomorphized functions, makes me feel like this is a bit half-baked at the moment. I mean, the very nature of closures is that they give the programmer dynamic abilities.
I really enjoy Rust and the reason I enjoy Rust is the combination of low-level programming and safety. Maybe experiments like this are what gives us that. I hope I'm just confused right now.
edit: Oh, one benefit I see already is that since these are "unboxed" there won't be an allocation.
The implementation is definitely half-baked at the moment, but will be improving drastically over the next week or two. :)
As for the wrapper function presented in the post, it's only to demonstrate that closures are implemented via traits and act just like any other trait in that they can be used to facilitate virtual dispatch (though IMO you shouldn't strive to use traits in this way if you can help it, given that Rust gives you many tools to help you prefer static dispatch).
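Very roughly, and glossing over the surface syntax that's still in flux, the two options look something like this (apply_static and apply_dyn are invented names):

```rust
// Static dispatch: a fresh copy of `apply_static` is generated for each
// concrete closure type, so the call can be inlined away entirely.
fn apply_static<F: Fn(i32) -> i32>(f: F, x: i32) -> i32 {
    f(x)
}

// Dynamic dispatch: the closure sits behind a trait object, so the call
// goes through a function pointer in a vtable at runtime.
fn apply_dyn(f: &dyn Fn(i32) -> i32, x: i32) -> i32 {
    f(x)
}

fn main() {
    let double = |x: i32| x * 2;
    assert_eq!(apply_dyn(&double, 10), 20);
    assert_eq!(apply_static(double, 10), 20);
}
```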
Watching closures evolve in Rust over the past few years has actually been really interesting. We had an implementation that was "good" a few years ago, but was ultimately fundamentally inferior to the approach taken by C++ lambdas. The period since has been a fascinating exploration of ways to make closures safe, usable, and potentially zero-overhead in a language without garbage collection. For all I know it may be unprecedented.
Thank you for replying. I do not have any experience with C++, so that's why this could be so confusing to me. I'm a weekend wannabe systems programmer (it's therapy after a long week of Web development) and was attracted to Rust from Ruby because C scares me (this is where Go users laugh that Rust attracted a Rubyist instead of a C++ user). I'll have to research C++ lambdas some more, but given the safety, usability, and performance gains that you mention, this new implementation of closures seems impressive.
> I'm a weekend wannabe systems programmer
> and was attracted to Rust from Ruby because C scares me
I myself was attracted from Javascript for the same reason, you're among friends. :)
> this is where Go users laugh that Rust attracted a
> Rubyist instead of a C++ user
I don't think anyone's laughing at anybody. Different languages will appeal to different people, we don't need to begrudge anyone for that!
> I'll have to research C++ lambdas some more but given
> the safety, usability and performance gains
Performance of Rust closures should be the exact same as C++ lambdas. Safety-wise, you get all the usual guarantees of Rust. Usability-wise, the main nicety of Rust closures over C++ lambdas is that Rust doesn't force you to specify a capture clause, and instead infers how to capture all of the closed-over variables based on context. I think that Rust may actually be strictly less powerful than C++ here (as C++ allows you to explicitly specify the capture mode for each and every upvar if you like), but we believe that you'll have all the power you need in practice. Whether or not this turns out to be true will have to be determined by experience.
While it's less important in a language with immutability as the default, closures, especially in dynamic longhops, can be a good way of shooting yourself in the foot (e.g., capturing inside a loop a variable that is defined outside of it but modified at each iteration, and then wondering why you always get the value set at the last iteration). Capture lists are not always evil.
I would say that the people laughing may be a broader group than just "Go users," but my "Rust for Rubyists" was one of the very first community tutorials for the language, and Cargo takes a lot of inspiration from Bundler, given that both were written by Yehuda. :)
Rust actually has quite a few Ruby/Python people, for exactly what you say:
C doesn't scare me, but having used both dynamic languages like Ruby/Python and static languages with type inferencing and reasonably robust type systems (e.g., ML-family languages, Haskell), I find C-like languages that require a lot of type ceremony despite having fairly anemic type systems (and this is even more true of the whole C++/C#/Java family of static OO languages) to be quite annoying -- I don't feel like the type system is working for me, but that I'm doing extra work to make things easier for the compiler -- while Rust seems to be in a fairly reasonable place for a statically-typed language.
> I'm a weekend wannabe systems programmer (it's therapy after a long week of Web development)
I love the fact that systems programming is becoming attractive again!
> this is where Go users laugh that Rust attracted a Rubyist instead of a C++ user
The members of the Rust community have very diverse backgrounds, so don't feel bad. We have systems programmers, pure FP folks, game devs, folks from the dynamic/web crowd... this is an encouraging thing! You will not be alone in coming from a dynamic language - some of our best contributors are also experienced Python, Ruby or Racket developers.
The new proposal gives three different kinds of closures: immutable, mutable, and call-once. For each kind of closure, variables can be either borrowed by reference or moved into the closure. That gives six possible closure types under this proposal. Currently, only two of those six are supported (the current || and proc). So this proposal means closures become more powerful, and that two different, inconsistent kinds of closures can be unified.
While the internals are probably going to stay the same (the traits), syntax sugar is still very much being thought about.
Keep in mind that we expect that the "six types of possible closures" can be almost completely inferred at the use site, so people making use of APIs that accept closures should be able to be blissfully unaware of this distinction. Library authors will need to keep in mind the distinction between the three different closure traits when writing function signatures, but this is no different from how every other borrowed and owned thing in Rust works.
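To sketch what that looks like from the library author's side (function names invented, and modulo final syntax), the bound on the generic parameter is the only place the distinction shows up; callers just write || ...:

```rust
// Fn: the callback only needs shared access to its captures,
// so the library may call it repeatedly.
fn call_twice<F: Fn()>(f: F) {
    f();
    f();
}

// FnMut: the callback may mutate state it has captured.
fn call_twice_mut<F: FnMut()>(mut f: F) {
    f();
    f();
}

// FnOnce: the callback may consume its captures, so the library
// promises to call it at most once.
fn call_once<F: FnOnce() -> String>(f: F) -> String {
    f()
}

fn main() {
    let greeting = "hi".to_string();
    call_twice(|| println!("{}", greeting));

    let mut hits = 0;
    call_twice_mut(|| hits += 1);
    assert_eq!(hits, 2);

    let s = call_once(move || greeting);
    assert_eq!(s, "hi");
}
```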
This is awesome! It isn't trivial to understand but considering it's a high-level abstraction in a low-level language it's totally fine. Folks, there's no GC nor allocation going on there!
What about us simple mortals who want to learn this exciting language, which is still changing a lot even now? I've spent some time writing stuff in Rust, but it's a moving target with no documentation for the advanced parts. How do I figure out the advanced stuff on my own? Are there parts of the compiler I can use as a reference?
What can I do while they "rush" the language to the 1.0 stable release?
I mean, how do I get the most out of this situation if I want to be programming in Rust in the future?
There will be a 1.0-beta period where the language will have largely calmed down and is mostly just hammering out the bugs for a full 1.0 release. This will be coming within the next few months. If you'd like to learn Rust without suffering the massive breakage of our last-minute scramble, then I'd suggest you wait until then. :)
When we say "virtual dispatch" in Rust we mean the same thing as it means in C++, which is that there exists a vtable in which the function pointer is looked up at runtime from a known offset. It grants more flexibility and less binary bloat than static dispatch, but requires an additional pointer indirection and can present barriers to inlining (though LLVM is impressively smart with its devirtualization abilities at times).
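If it helps, here's a tiny sketch with an ordinary trait (Shape and Circle are made up) showing the two strategies side by side:

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { radius: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
}

// Static dispatch: monomorphized per concrete type, freely inlinable.
fn print_area_static<S: Shape>(shape: &S) {
    println!("area = {}", shape.area());
}

// Dynamic dispatch: `&dyn Shape` is a fat pointer (data pointer plus
// vtable pointer), and `area` is fetched from the vtable at a known offset.
fn print_area_dyn(shape: &dyn Shape) {
    println!("area = {}", shape.area());
}

fn main() {
    let c = Circle { radius: 1.0 };
    print_area_static(&c);
    print_area_dyn(&c);
}
```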
So does Rust still "have closures" or are they now just a specific implementation of a trait? Are there other features that could be implemented as traits?
Rust has closures, and they are just a specific implementation of a trait.
Many fundamental concepts in Rust, including arithmetic, comparisons, sendability, and smart pointers are formally traits (though the compiler may treat some of them specially, particularly arithmetic and comparisons).
This has been a common theme throughout Rust's history. Things that were baked-in language constructs have gradually migrated to become library things with sugar enabled via traits. This is leading towards a very simple, extensible language.
This is something which really excites me about Rust. Having constructs as fundamental as function application extensible via traits reminds me of Python's magic methods, but without the runtime overhead.
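For example, here's a rough sketch of the arithmetic case (the Meters newtype is invented): the `+` operator is just sugar for the Add trait's method, much like Python's __add__, but resolved at compile time.

```rust
use std::ops::Add;

// A made-up newtype; `+` on it is just a call to the Add trait's method.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct Meters(f64);

impl Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters {
        Meters(self.0 + rhs.0)
    }
}

fn main() {
    let total = Meters(1.5) + Meters(2.5); // sugar for Meters(1.5).add(Meters(2.5))
    assert_eq!(total, Meters(4.0));
    // PartialOrd (derived above) is what `<` desugars to:
    assert!(Meters(1.5) < Meters(2.5));
}
```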
Hi Steve, the pull request has not been merged yet as tests are failing. The blog post reads as if it's in master right now, which it does not seem to be.