When Erik Meijer worked with Gilad Bracha and others on the Dart team to bring async to Dart, he approached it with a very similar "let's look at the table of all the combinations and make sure we didn't leave any holes" mindset. In this case, the two axes (effects) he considered were asynchrony and iteration. So Dart supports all four combinations:
* Synchronous single-value functions are just normal function declarations. Within one, you use `return` to yield the one value.
* Synchronous multi-value functions are generators, marked `sync*`, which return an `Iterable<T>`. Within one, you use `yield` and `yield*` to emit values.
* Asynchronous single-value functions are marked `async` and return a `Future<T>`. Within one, you use `await` to pause on asynchronous operations and consume values. A `return` value is implicitly wrapped in a future.
* Asynchronous multi-value functions are marked `async*` and return a `Stream<T>`. Within one, you can use both `yield` and `yield*` to emit values and `await` to suspend and consume asynchronous values.
We also have an asynchronous `await for` statement that can be used to imperatively consume a stream.
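For comparison, here is a rough sketch of the same 2×2 grid in Rust terms. Only the synchronous generator case is spelled out, since `Stream` lives in the `futures` crate rather than the standard library, and Rust has no dedicated generator syntax on stable:

```rust
// Rough Rust analogue of Dart's 2x2 grid:
//
//            single value        multiple values
//   sync     fn() -> T           fn() -> impl Iterator<Item = T>
//   async    async fn() -> T     fn() -> impl Stream<Item = T>   (futures crate)

// The sync multi-value register, roughly what Dart's `sync*` generators
// give you with dedicated syntax:
fn counter(up_to: u32) -> impl Iterator<Item = u32> {
    let mut n = 0;
    std::iter::from_fn(move || {
        if n < up_to {
            n += 1;
            Some(n)
        } else {
            None
        }
    })
}

fn main() {
    assert_eq!(counter(3).collect::<Vec<_>>(), vec![1, 2, 3]);
}
```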
There's no third axis for fallibility, because Dart chooses to comprehensively use exceptions for operations that fail (which I quite like).
Overall, it's a really rich, comprehensive approach to the problem. I have mixed feelings about it. It really does cover all the bases. But, in practice, `async*` functions and `await for` are very rarely used, yet add a lot of complexity to the language and engineering burden to the implementations.
I believe you can use a backslash to escape stars, but either way: it took me a while to realize there were a ton of missing stars in this comment that had been parsed as italics (in case this doesn't get fixed before the edit window times out).
> There's no third axis for fallibility, because Dart chooses to comprehensively use exceptions for operations that fail (which I quite like).
Can you catch exceptions? Can you catch them when thrown iterating an Iterable, awaiting a Future, or iteratively awaiting values from a Stream?
It doesn't seem like you can get around needing fallibility. What if I have an asynchronous operation that can fail? What if I need a sequence of possibly-empty values?
> It doesn't seem like you can get around needing fallibility.
There’s no axis for fallibility because all operations are fallible. Yes, you can catch exceptions when iterating, awaiting, or iteratively awaiting. You have fallibility. What you don’t have is a way of marking operations as non-fallible.
As klodolph says, there's no axis for fallibility because the language doesn't offer different choices of fallibility strategy. There's just one: exceptions.
You can catch exceptions. If an iterator throws, you can catch that by surrounding the for loop in a try catch block.
Futures are sort of like Result types in that a Future can complete with either a value or an error. Exceptions inside asynchronous functions are automatically caught and wrapped into error futures, which then propagate out through the returned future object. When you await a future, if the future completes with an error, that then gets unwrapped back into a thrown exception.
> Futures are sort of like Result types in that a Future can complete with either a value or an error.
Incidentally, for curious readers, the first version of the Future trait in Rust always returned a Result<T, E> from its poll method. The second version decided to instead return T directly, since T could be Result<T, E> too, making things a bit more general, but also leading to a tiny bit of type tetris.
Doesn't mean that I'm saying anything about whatever Dart does here, just that I also think there's a connection here :)
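A minimal sketch of polling today's `Future` trait by hand, where the output type happens to be a `Result` (this is illustrative only — in real code an executor drives the future, and the no-op waker boilerplate is just to satisfy `poll`'s signature):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future whose output is itself a Result -- the "T can be Result<T, E>" point.
struct Ready(Option<Result<i32, String>>);

impl Future for Ready {
    type Output = Result<i32, String>;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Completes immediately; a real future might return Poll::Pending first.
        Poll::Ready(self.0.take().expect("polled after completion"))
    }
}

// A waker that does nothing, just enough to construct a Context.
fn noop_waker() -> Waker {
    fn noop(_: *const ()) {}
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Ready(Some(Ok(42)));
    match Pin::new(&mut fut).poll(&mut cx) {
        Poll::Ready(Ok(v)) => assert_eq!(v, 42),
        _ => panic!("expected ready Ok"),
    }
}
```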
While these points may hold true, I’ve often seen the statement thrown around in chats and various forums that, “dart sux.” I don’t understand the hate, nor the juvenile phrasing.
I will claim that the most important thing to understand about all three of the concepts in this article -- Fallibility, Asynchrony, and Iteration (as well as some that aren't discussed, but are to-me relevant, such as "Allocation: holds scoped resources" <- one I feel Rust and C++ reasonably-correctly assume every object / value might one day take part in) -- is that these are all monads (yes: including Iteration/List, which I think took me the longest to fully appreciate from the set of commonly-referenced monads); and, in a language with a more theoretically-sound core (aka, Haskell... and no: I am not a Haskell zealot by any means and in fact have wasted away most of my life for decades using C++ and Python), these "control-flow effects" (which is frankly a better name than "monad") are all supported using the same typeclass and rely on the same syntax, which not only ensures that every such "register" is correctly analogous with the others but even allows for end developers to extend the set of similar primitives without resorting to increasingly-fraught language modifications.
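(For what it's worth, Rust's `?` operator is a tiny built-in slice of that shared syntax: the same operator threads both the `Option` and `Result` effects, though on stable Rust users can't extend it to arbitrary control-flow effects the way a Haskell typeclass would allow. A quick sketch:)

```rust
// `?` early-exits on None for Option-returning functions...
fn first_char_upper(s: &str) -> Option<char> {
    let c = s.chars().next()?;
    c.to_uppercase().next()
}

// ...and on Err for Result-returning functions, with the same syntax.
fn parse_sum(a: &str, b: &str) -> Result<i64, std::num::ParseIntError> {
    Ok(a.parse::<i64>()? + b.parse::<i64>()?)
}

fn main() {
    assert_eq!(first_char_upper("abc"), Some('A'));
    assert_eq!(first_char_upper(""), None);
    assert_eq!(parse_sum("2", "40"), Ok(42));
    assert!(parse_sum("2", "x").is_err());
}
```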
This is something that came up constantly while Rust was working on `async`, and the main reason Rust doesn't use this approach is that the usual low-level concerns around allocation/memory layout get in the way of just plopping a `Monad` trait in there.
(There's a secondary reason which is that Rust does not have higher-kinded types and thus cannot even express `Monad` to begin with. It recently got GATs which can encode it, but that feature was designed as an intentional alternative to full complexity of higher-kinded types.)
FWIW, most of the arguments in that thread are "you can't add monads to Rust", which seems like a perfectly fine thing to say and is quite likely true. I am not saying "you should add monads to Rust" and I certainly am not saying "it would be trivial to just drop a Monad trait in right now why aren't they doing that" ;P.
I do believe, however, that there is a different design for a programming language that solves Rust's core needs (efficient and safe) and yet which supports something akin to generic "control-flow effects"... which, to be clear, might not fit the specific definition of Monad.
Maybe, for example, such a language simply doesn't support early return (a feature I can do without). Maybe it has linear types (which I maintain are missing from Rust, and yes: I have read the article that claims it is difficult and will note that that same article also firmly admits it is possible).
Maybe, instead of using Monad, the right interface for a control-flow effect is somehow designed around the concept of how optimizing compilers need to reify these effects into actual efficient code, or maybe the way to get them feels more like an unhygienic macro... the design space isn't limited here.
To that end, the one bit of this thread that I personally find valuable is the argument that borrowing is somehow totally incompatible with the idea of a yield point that is a function, which I find both surprising but also possible... I certainly haven't put a ton of thought into it and am so tired right now I likely couldn't even appreciate a trivial and complete proof of such :(.
Regardless: I also am not claiming that Rust shouldn't exist with its current trade offs (at least, for this reason... I do think its non-monadic design should have exceptions and am almost entirely sure they would be compatible with the rest of the language; I am extremely disappointed that they weren't provided).
However, to see this article talking about these three things and only using the word "monad" a single time buried in some table near the end was quite shocking as it is trying to lay out a ton of unusual terminology for something the author clearly understands deeply... it almost feels like they are actively avoiding speaking the word "monad" ;P.
Sure- I didn't read your comment that way, my point was primarily just that there is no clear path to such a language yet. It's an open question of whether it is possible and how to accomplish it, and TFA is already another very small step toward answering that question.
(I suspect their reason for avoiding the word "monad" was that it is totally alien and obfuscatory to a significant part of the target audience, and it's relatively easy for those who are familiar to connect the dots themselves, as you clearly did.)
Aren't effect types generally related to Lawvere theories, as opposed to monads? AIUI the two differ most prominently in the way they compose; composition of monads being very much ad hoc. I didn't read OP as relating to monads specifically.
I also think monads are elegant, and you're right that effects can all be modeled with monads, but I think the notion of a monad abstraction is a poor fit for a systems language, as I elaborated years ago in the twitter thread Rusky linked to.
I think you’re making a good point that I probably agree with, but I can’t tell, partially because your writing style is hard to follow. Your whole comment could be improved by separating it into multiple sentences and moving the asides out of the main body of thought.
It could certainly be improved for reading at a glance... but, if you just pay attention to the punctuation, I kind of think inlining the "asides" there is better than simply trying to tree them out (which would be the fast way to fix it). Like: this is a comment, not an article... I am lying in bed (deeply wishing I were asleep) trying to quickly type a train of thought into my phone (a smaller one than I normally even use) and am even feeling forced to do so under a time pressure (as once a post gets enough comments you won't ever find anything). Often, I will even type such a comment at first and then--after posting it--take the next 20 minutes to "clean it up" into some paragraphs (but am not going to do that this time as I am hurting my elbow). You are welcome to just ignore my comment for today; and, maybe one day, I will have more than a few minutes to allocate to writing yet-another-article-on-monads that no one will want to read (and likely won't help anyone understand... see "the monad tutorial fallacy", one of my favorite articles / concepts ;P).
I found this article enlightening; an example of where a little formalism/structure (in the notion of the four registers) really helps identify the missing pieces in the wider picture.
I did find the use of the term "register" slightly jarring given its somewhat overloaded nature between its natural language meaning, used here, and also the sense of "a specific place in which to store a datum". Reading the Wikipedia page on the term, I came across "diatype", which to me seemed to convey the same meaning without the potential confusion from the polysemantics of "register"...?
This dual meaning definitely crossed my mind as I was writing... it's unfortunate.
Whenever I encounter a dual like this I like to investigate the etymology. I assume the use in linguistics is by metaphor to its use in music, "the range of a voice or instrument." More broadly, the term register comes to us from the Latin for a list of items recorded, ultimately from a word for carrying. Maybe there's something to make of that in connection to each of these very different meanings, I don't know.
The problem with diatype is that register, although not an extremely common word, is I thought somewhat well known, whereas I've never heard of diatype. I've been known to overestimate how familiar words of this sort are to other people, though, so maybe they're equally obscure to most of my readers.
Funny... after I read about the registers of (natural) languages, that reminded me of the "registers" of pipe organs, which are quite similar in a way: it's still the same instrument, but sounds different depending on what register(s) you use. Except that in English these are called "stops" for some reason (as in "pull out all the stops" - which in the programming sense introduced in the article would mean a wild mix of programming styles, probably not something you'd want to see). Oh, well...
I immediately assumed the post would discuss some algorithm that rustc uses to allocate CPU registers, but I was puzzled because I thought that would be abstracted away by LLVM.
I don't know that it would have been better, but an option would have been to use one of the less ambiguous but less familiar terms, and define it in terms of the more familiar term early on. Something like, "diatype (register, in the sociological sense)".
> from the Latin for a list of items recorded, ultimately from a word for carrying. Maybe there's something to make of that in connection to each of these very different meanings, I don't know.
CPU registers carry values, musical registers carry tunes. (Or something very roughly along those lines, anyway; etymology is messy and complicated.)
It's right there in the header, "Without boats, dreams dry up."
The full quote is "In civilizations without boats, dreams dry up, espionage takes the place of adventure and the police take the place of pirates," from his work "Of Other Spaces, Heterotopias."
Programming languages already have dialects, but there is a distinction between dialects and registers, even in linguistics. They blur in some cases (AAVE for example), but dialects are usually thought of more as a variety of language used by people in geographic or cultural grouping; registers are used by users in different social situations. For example, when I give a presentation I use a very different register from when I have a drink with my friends.
That example actually highlights why I think register is a helpful name. Dialects vary across people, but registers vary across situations for each person.
In the programming language context, dialect can be applied on varying levels but usually signifies the former, where each individual or group has a persistent preference for some language or style. Within a single programming language, dialects are usually a bad thing because they risk splintering the community into mutually incompatible subgroups (e.g. scala fp styles, c++ boost).
Part of the reason why evolving a language is hard is because every time you introduce a new way to do something which could be done before, users have to choose. If that choice divides users into groups that persistently pick one over the other based on style or community affiliation, you’ve introduced a new dialect. If the choice flows more naturally from the situation the user finds themselves in, you’ve introduced a new register.
Like the subjunctive mood. I imagine it's not super helpful for English speakers, where we have exactly one verb that changes in the subjunctive mood.
In most languages with it, entering the subjunctive mood requires a prefixing context/preposition which is analogous to function coloring system/async keyword
The author mentions a pattern emerging, and I see it too, which, as he mentions, motivates keyword generics. But perhaps we should go all the way and have full algebraic effects in Rust? I know OCaml 5 has them, one of the first languages to do so, so perhaps Rust might adopt something similar. Then again, I'm not sure how it'd work with the borrow checker.
Isn't that what the "keyword generics" initiative is all about? The Rust dev team seems to be quite aware that algebraic effects in Rust might be desirable, but coming up with a good design might involve quite a bit of work. It took a long time for even the async MVP to shape up properly, and supporting full effects might be even more complicated.
Not exactly [0]. The keyword generics initiative is constrained, it is not a full effects system. It is akin to GATs versus HKTs, a limited form of a more full feature, but I still think the full versions would make the API cleaner than the more limited implementations now. Again, this is with me not knowing how it'd work with the borrow checker, but like saurik mentioned above, I think there is a way to have a Rust-like language that could embody all of these full-feature aspects cleanly.
The blogpost just says that they're trying to minimize surface area and not expose things like e.g. user-defined effects. That's normal for a MVP where we don't even know how such custom effects might interact with the rest of the language. If it turns out that they can be modeled "cleanly" as you say, they can be added quite easily in the future.
This helped congeal a thing I've been thinking about in my Rust usage for a long time, but didn't know how to express:
> In Rust, this distinction in register can look like this:
> - Will I use references and borrowing, or will I liberally use clone to avoid it?
> In general, Rust strives to have an obvious register in which to operate which is still going to be performant enough (and hopefully accessible enough!), while allowing users to switch register when they need to.
One of the biggest ergonomics problems in Rust, in my experience, is that for certain things (like references vs clones, or generics vs enums vs dynamic dispatch), it can be very hard to change registers once you realize you need a different one. Often it means making changes throughout your entire codebase.
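A tiny sketch of the kind of signature change involved (using a hypothetical `Post` type, just to illustrate):

```rust
struct Post {
    title: String,
}

// "Clone register": the caller gets owned data, no lifetimes in the signature.
fn title_owned(post: &Post) -> String {
    post.title.clone()
}

// "Borrow register": a borrow now appears in the return type, and every
// caller that stored the result as an owned String has to change too.
fn title_borrowed(post: &Post) -> &str {
    &post.title
}

fn main() {
    let post = Post { title: "registers".to_string() };
    assert_eq!(title_owned(&post), "registers");
    assert_eq!(title_borrowed(&post), "registers");
}
```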
I don't have an easy solution, but I feel like I at least have words for the problem now.
The strong typing in rust definitely helps with this, though. For most problems, you can encapsulate a refactor into something that, at each occurrence, either works automatically (with things like generics and deref) or fails to compile.
I think Rust is actually pretty good at this. If you do have to do a change that will affect your entire code base (or at least will be pervasive), the compiler will be able to tell you where things are wrong with a good degree of precision.
It'll tell you, yes, but you'll still have to make non-automatable changes in a whole lot of different places, in situations where other languages might not require any changes at all.
I'm still only really a rust dabbler, but I was very happy to see your blog, boats. I always found your writings, talks, and online meeting contributions to be very thoughtful when you were more involved. I hope the current leadership team will reach out to you from time to time, as you offered at the end of this piece.
Every time withoutboats drops something it’s crazy. I don’t know how people can know all that and have the intuition for it. Hard not to be a fanboy of the Rust community. :)
Awesome post! I hope things like that will move forward.
Which way to go is very context-sensitive. You may be better off with the default references, using clone everywhere, with Rc, with Arc, with RefCell, or whatever.
What is important to know is that your program is not automatically bad just because you decided not to use only references.
Really? I felt like I was just punting because I didn't take time to understand and resolve potential dependencies.
In my case the data was relatively small and I just wanted functionality. But I felt like if I could have worked out borrowing it would've generated more efficient code and less memory consumption.
> But I felt like if I could have worked out borrowing it would've generated more efficient code and less memory consumption.
That is possible! But while you're learning, this isn't the most important thing to focus on. And depending on what you're trying to do, using borrowing may be easy, or may be hard, and while you're trying to learn, you don't know enough to evaluate the tradeoff on that basis, so in my humble opinion, it is absolutely okay to clone, and is probably even what you should do in most cases.
Once you're feeling good enough about Rust to start digging deeper, working on trying to not do this is a valuable skill. But imho it's a waste of time when you're starting, and may lead you into a pit of despair.
The aside in the book says the opposite of what you claim it to be saying. Here, you suggest that liberal cloning is the way it should be; there, you say it's fine for beginner code, which is a very unsubtle nudge. Which is it?
I am a little offput by how aggro this comment is, but I really like you and so will give you the benefit of the doubt here.
I wrote this text years ago. It is possible I didn't do as good of a job as I could have. This was added directly in response to situations like the OP's: people saying that they felt bad using clone while they were learning Rust. And so the comment, coming early in the book, and as a text for learners, focuses on that case.
It's also the same as in this thread; I thought I was being pretty clear that I'm talking about people learning Rust?
However, just because I say one thing doesn't mean I'm implying something else; for non-beginners, I don't think clone is a moral issue either way. If you are trying to get ultimate performance, clone may be an issue. It also may not. Some clones are more expensive than others. Furthermore, not everyone needs to get ultimate performance, and that's okay too. It is up to you. The point is that I don't think this question is a useful one for someone to engage with as a beginner. It is a useful question for intermediate programmers to engage with, but so that they can make the right call for them, not because one or the other is the correct answer in all situations.
As a Rust programmer, I have the exact same lived experience of the language that the parent commenter does: that Rust can be relatively straightforward to write if you liberally clone things, but that the prevailing community norm strongly suggests that cloning is a code smell, or something you do in tooling-grade code.
The parent commenter was surprised that you suggested liberal cloning was the way Rust code should be --- that, like, the median Rust crate should probably be written in the "clone liberally" register. You said that was the reason you wrote the Rust book aside. But I read that book and that aside and reached the opposite conclusion, and I think people who follow your link will mostly see why.
You haven't exactly cleared things up here, either. "For non-beginners, clone isn't a moral issue". Well, that's not what you just said upthread. I'm pressing you on this because it would be helpful to get an actual clear signal on this. Should most code be written with liberal clones, or should most beginner code be written with liberal clones? Like the previous commenter, I just want to understand what you're trying to say.
(A sibling comment here says the exact thing that I'm saying creates the mixed message here: that cloning is fine... in beginner code, but not in serious code.)
This is not, as we both know, a minor distinction in Rust; it is maybe the biggest stylistic distinction in actually writing Rust? Moreso maybe even than async?
> that, like, the median Rust crate should probably be written in the "clone liberally" register.
That is not what I understand myself to have said. The parent said that they "do not write much Rust code." That is very different than "the median Rust crate."
> Should most code be written with liberal clones,
I think this question is a category error. There is not enough context to give an opinion. And I think that means you cannot make a broad generalization about clone; some code should have a bunch of clones, and some code should not, even if they're the same code, because they operate in different kinds of contexts or sure, registers, to sort of stick to the vibes of the original post.
If I am presented the exact same code by two people, and one of them started writing Rust yesterday, and says "what do you think of my code?" I would not even give consideration to the number of clones when attempting to give them my opinion about their code. If someone came to me after writing Rust for ten years and said "hey I am struggling to improve performance in this code" and said "what do you think of my code?" and I saw a bunch of clones, I would say "have you profiled? I would suspect that maybe clones are causing your problem but I wouldn't do anything until I had a flame graph or equivalent." Even then that is not a moral question or answer, it is an engineering one.
> or should most beginner code be written with liberal clones?
Even this isn't exactly what I meant, but is closer. I mean what I said in my parent comment to you, beginners should not feel bad if their code uses clone, whatsoever. That does not then imply that non-beginners should feel bad, I said absolutely nothing in this thread about anything other than beginner code (until the paragraph above in this comment and the paragraph in the comment you're currently replying to).
It’s weird to me the degree to which you’re pressing an offhand comment into being some kind of Statement.
I’d expect anyone writing Rust regularly to know that the impact of Clone is context-dependent, and whether or not you should be fine with using it liberally is also context-dependent. Big object? Hot loop? Avoid cloning. Not worried about performance? Clone all you want if it makes the code easier and faster to write. You can always optimize later.
Sometimes it’s easy to avoid clones, and if you’re writing a lot of Rust it often feels like you might as well, just in case. But at least for my team, “you could get rid of this clone” is generally a take-it-or-leave-it kind of comment in code review.
There’s no easy way to express “it’s a trade-off between performance and ease of use that you’ll get better at with time” in a way that is instructive in any meaningful way, which is why I imagine it’s more useful to push people to err on the side of what’s going to be faster to get them to the amount of experience they need to engage meaningfully with the question within their own context.
Well, if I can mostly ignore fussy ownership and borrowing stuff and just clone everything, all the time, getting rid of clones exclusively in places that I profile and learn to be in the hot path, and the resulting code will be idiomatic Rust, that would be welcome news to me. That's where I'm coming from.
My impression from Klabnik's most recent comment is that this isn't the case. Which is fine! It lines up with my previous assumptions about Rust idiom.
I'm not quite sure where the disconnect is either, but yeah, "should I clone everywhere" or "is cloning everywhere idiomatic" is just one of those models that is useful, but like all models, is invariably wrong in some cases.
Just as one example, consider a function I wrote. It accepts a &str as a parameter. Internally, among other things, it clones that &str into a String. If you asked me whether that was "idiomatic" devoid of any other context, I would say, "probably not? Why not just ask for the String or a Into<String> instead?" But it's not a certainty. Just a likelihood.
But now let's add context. The context is that that function I mentioned is Regex::new. Does that change my answer? You bet your ass it does. Because cloning that pattern string has now turned into a definitively negligible cost. (By many orders of magnitude.) And there is upside to accepting a &str: it's non-generic, a little easier to understand and is unlikely to ever run afoul of type inference. Those aren't huge upsides, but there is actually no downside here because of the context.
As others have pointed out, there is no one-size-fits-all answer to this question. There are models though, and those models (e.g., "don't worry about cloning while you're learning Rust") are very likely to serve you quite well. That might not be every path that everyone learning Rust always takes. But it's a decent bet in my experience.
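A minimal sketch of that signature trade-off (using a hypothetical `Pattern` type, not the real `regex` crate's internals):

```rust
// Hypothetical compiled-pattern type, for illustration only.
struct Pattern {
    source: String,
}

// Non-generic signature: easy to read, friendly to type inference, at the
// cost of one small allocation per call -- negligible next to compilation.
fn compile(pattern: &str) -> Pattern {
    Pattern { source: pattern.to_string() }
}

// Generic alternative: avoids the copy when the caller already owns a
// String, but adds a generic parameter to the public API.
fn compile_owned(pattern: impl Into<String>) -> Pattern {
    Pattern { source: pattern.into() }
}

fn main() {
    assert_eq!(compile("a+b").source, "a+b");
    assert_eq!(compile_owned(String::from("a+b")).source, "a+b");
}
```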
> My impression from Klabnik's most recent comment is that this isn't the case.
Okay last comment in this sub-thread because I have to get back to my job here, which hilariously is adding some boxing and possibly cloning in order to improve a codebase.
I feel like I'm going crazy. I feel like this:
> if I can mostly ignore fussy ownership and borrowing stuff and just clone everything, all the time, getting rid of clones exclusively in places that I profile and learn to be in the hot path, and the resulting code will be idiomatic Rust
is exactly what my last comment said. This whole chain, in my experience, is me saying "clone is not a moral issue" and you saying "why are you saying that people that use clone should feel bad." So I need to bow out. I wish I could figure out where this disconnect is, but it's clear that this thread is not achieving that.
Your comments and much of this thread have been helpful to my understanding. (Like the Rust book!--thank you!). I also appreciate your effort to be kind/respectful/understanding, elevating the conversation, where others might have responded with anger or disparagement. Thanks for the example.
There is one tangential thing that I’ve thought about before—it would be nice if there was some way to encode this into the library ecosystem (ie crates.io)
I have no idea how this would be realistically possible, though—it’s pretty clear that you can only rely on categories that are statically verifiable, like “no-std/no-alloc”.
But what mechanism could be used to allow library consumers to distinguish between things like “laser focus on high perf/low-alloc” vs “prioritizes API over perf-at-all-costs” vs “this package is an experiment/Baby’s First Crate/shitpost” ??
They’re all valid choices for different consumers or usecases.
Yeah, there is something in this space for sure. I think one of the issues is time; code can start off as a fun experiment, but transform into one that's not, which maybe just means that it is metadata associated with a specific release, yadda yadda, but there's a lot here to think about. For sure.
I recently have been accepting some PRs to some Ruby code that I wrote ten years ago that ended up being still in-use today. "Shitpost" or "baby's first crate" isn't accurate, but "hey I was very serious about this before but now it's not as high priority for me but that doesn't mean I abandoned it" is a tough thing to encode.
Absolutely true re evolution.
That's why it would have to probably be crate keywords rather than categories, I'm thinking.
This whole space about encoding library non-technical aspects like intent and status seems like it has a lot of possibilities. I wonder if there's prior art in other language ecosystems?
Maybe defining the interesting axes (for each, one for whether it is considered a goal, the other for whether it has in fact prioritized that goal in practice; not saying whether it succeeded in others' eyes), and for each axis give a number 0-10 (or whatever) rating it, in the author's view. Then maybe users of the library can give similar ratings as they compare it with others.
I think documentation is the right layer to express what is or is not a goal for the crate. Whether you met the goal is not something you as author can judge reliably.
https://crates.io/crates/misfortunate is not intended to be used seriously. A redditor pointed out that arguably my Double (a smart pointer where the mutable and immutable references point to separate things and you can swap their places) might actually be useful! But I don't want you to depend on misfortunate::Double in your real software project - if that is useful (and maybe, with careful documentation, it is) you should give it a name suiting its purpose and include it in a crate of stuff that's intended to be used, not in something like a practical joke or thought experiment.
I think the confusion comes from the fact that Steve would consider code written by someone who "hasn't written much Rust" to be "beginner code," so he meant exactly what the book says. I would agree that people getting the hang of Rust shouldn't worry too much about this, but that production code that deep clones buffers when it could use references is badly written (then again, it may not matter at all!).
> The aside in the book says the opposite of what you claim it to be saying. Here, you suggest that liberal cloning is the way it should be; there, you say it's fine for beginner code, which is a very unsubtle nudge. Which is it?
That's phrased way too harshly, considering you're talking to someone who has put significant efforts into making Rust accessible to others.
my interpretation of the “clone is the way to go” comment is that it’s in the context of the OP saying “I haven't written much Rust…”, which puts a pretty clear frame around the intended applicability of the statement no?
I don’t take it at all as being an absolute statement.
"Beginner code" is a bit misstated, yes. An abundance of .clone()s does suggest that the code path might be unoptimized, but unoptimized code isn't solely written by beginners. If you want to use Rust for quick prototyping, you're going to use .clone() a lot. It's a bit regrettable that the book doesn't clarify this more.
I tend to follow a "clone first, figure out lifetimes later if profiling shows that it would be worthwhile" rule. If I always did the optimal thing with lifetimes, I'd never actually finish anything.
Borrowing isn't implicit, as you have an &. But even beyond that, clone is not implicit because it may be an expensive operation, and so Rust wants to expose that to you. It avoids issues like this: https://news.ycombinator.com/item?id=8704318
The Copy trait is for cases where it's cheap enough to be implicit. Otherwise the philosophy of the designers was that things that may be expensive should be explicit.
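That distinction can be sketched in a few lines (the `Point` type here is just an illustration, not from any of the crates discussed):

```rust
// Point is Copy: duplicating it is a cheap bitwise copy, so Rust
// allows it to happen implicitly.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p; // implicit copy; `p` is still usable afterwards
    assert_eq!(p, q);

    // String is not Copy: duplicating its heap buffer may be expensive,
    // so Rust makes you write the cost out explicitly.
    let s = String::from("hello");
    let t = s.clone(); // explicit, potentially expensive
    // let u = s; // without .clone(), this would *move* `s`, not copy it
    assert_eq!(s, t);
}
```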
I'm undecided. I feel it's a bit like when you .unwrap() everything. Either I take the time to match all the potential results, or I do it quick and dirty. But also when it comes to cloning if you don't do it you get a lot of lifetime annotations.
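The unwrap-versus-match tradeoff in one small example (`parse_port` is a made-up helper, used only for illustration):

```rust
// A fallible operation: parsing a port number from user input.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    // Quick and dirty: .unwrap() panics if the input is bad.
    let port = parse_port("8080").unwrap();
    assert_eq!(port, 8080);

    // Taking the time to match both outcomes explicitly.
    match parse_port("not-a-port") {
        Ok(p) => println!("listening on {p}"),
        Err(e) => eprintln!("invalid port: {e}"),
    }
}
```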
The article talks about options (english meaning) as in the developer is choosing to model his program to use a combination of these registers. Unfortunately, that is exactly my problem. __There should be one-- and preferably only one --obvious way to do it.__
That is a useful design maxim, but the problem is that in a "systems language," it is difficult if not impossible, in my opinion, to achieve.
The closer you get to physical reality, the harder it is to fit the world into your model; you need to fit your model to the world. In Rust, unsafe is the way to manage this difficult set of tradeoffs.
It's exactly the problem that led us to Pin in designing async/await.
Futures need to own all of their state. If you want to use that state across multiple combinators (in async/await terms, "across an await point"), given that Rust doesn't have GC, you have to somehow make it available to both closures. This meant Arc<Mutex<_>> around things even though they were being used in sequence, because both closures needed to own the state and the compiler doesn't know the closures will only be called in sequence. It was a mess, and was a big hurdle for adoption of futures before async/await was added.
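A minimal sketch of that pre-async/await pattern: two steps that in fact only ever run in sequence, but because each closure must own its handle to the state, everything gets wrapped in Arc<Mutex<_>> anyway (the closure bodies are hypothetical, standing in for futures combinators like `and_then`):

```rust
use std::sync::{Arc, Mutex};

fn main() {
    let state = Arc::new(Mutex::new(Vec::new()));

    // Each closure must *own* a handle to the shared state, because the
    // type system can't express "these only ever run one after the other".
    let s1 = Arc::clone(&state);
    let step1 = move || s1.lock().unwrap().push("connected");

    let s2 = Arc::clone(&state);
    let step2 = move || s2.lock().unwrap().push("sent request");

    // At runtime they do run strictly in sequence; the Mutex is pure
    // overhead paid to satisfy the ownership rules.
    step1();
    step2();
    assert_eq!(*state.lock().unwrap(), ["connected", "sent request"]);
}
```

With async/await, the same state can simply live in local variables of one async fn, held across the await points, and the Arc<Mutex<_>> disappears.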
All those effects have the same issue, don't they? I've definitely run into situations where I moved from a combinator-based approach to a control-flow-based one for iteration because the combinator approach required multiple closures to borrow the same state mutably.
It's almost like there's also still some gap in the borrow checking, but it's not obvious to me how to address that.
I had been a little confused as I'd seen the combinators available in the futures crate (along with Stream), and it's a little tricky to figure out the status and maturity of techniques that are implemented there but not standardized.
This is exceptionally well communicated. Language design is really hard, and it's so easy to lose sight of the bigger picture when trying to find the next incremental step. This post helped me understand a lot of vague frustrations I've experienced in the past, and the bigger picture it outlines really resonates with me.
Hacker News attempts to prevent bot upvotes, last I heard. So even if there were, it would not work, and if it did, I'm sure the hacker news team would work towards fixing that.
This is simply a high-quality, very technical blog post from someone who's had a tremendous impact on a popular project. Hence upvotes.
I completely understand the feeling but the site guidelines ask us not to convert such feelings into HN posts:
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
I like the f's a lot actually but I think there's a clear problem with the spacing when a title is two lines, and the f's really exacerbate it. But I also don't want to fiddle with my CSS.
When Erik Meijer worked with Gilad Bracha and others on the Dart team to bring async to Dart, he approached it with a very similar "let's look at the table of all the combinations and make sure we didn't leave any holes". In this case, the two axes (effects) he considered were asynchrony and iteration. So Dart supports all four combinations:
* Synchronous single-value functions are just normal function declarations. Within one, you use `return` to yield the one value.
* Synchronous multi-value functions are generators, marked `sync*`, which return an `Iterable<T>`. Within one, you use `yield` and `yield*` to emit values.
* Asynchronous single-value functions are marked `async` and return a `Future<T>`. Within one, you use `await` to pause on asynchronous operations and consume values. A `return` value is implicitly wrapped in a future.
* Asynchronous multi-value functions are marked `async*` and return a `Stream<T>`. Within one, you can use both `yield` and `yield*` to emit values and `await` to suspend and consume asynchronous values.
We also have an asynchronous `await for` statement that can be used to imperatively consume a stream.
There's no third axis for fallibility, because Dart chooses to comprehensively use exceptions for operations that fail (which I quite like).
Overall, it's a really rich, comprehensive approach to the problem. I have mixed feelings about it. It really does cover all the bases. But, in practice, `async*` functions and `await for` are *very* rarely used but add a lot of complexity to the language and engineering burden to the implementations.
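For comparison, the same 2x2 table rendered in Rust terms (a rough sketch: stable Rust has no `sync*`-style generator sugar, so the iterator quadrant is written by hand, and the Stream quadrant has no built-in syntax at all):

```rust
// Sync, single value: an ordinary function.
fn answer() -> i32 {
    42
}

// Sync, multiple values: an Iterator plays the role of Dart's
// Iterable/sync* (no generator syntax on stable, so we build it directly).
fn naturals(n: i32) -> impl Iterator<Item = i32> {
    0..n
}

// Async, single value: an async fn returns a Future. It is inert until
// polled; actually driving it requires an executor, which std doesn't ship.
async fn fetch_answer() -> i32 {
    42
}

// Async, multiple values would be a Stream (e.g. from the futures crate);
// Rust has no async*-style sugar for producing one yet.

fn main() {
    assert_eq!(answer(), 42);
    assert_eq!(naturals(3).collect::<Vec<_>>(), vec![0, 1, 2]);
    let _future = fetch_answer(); // just a value; nothing runs until polled
}
```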