I feel like making one more note. I said that “mutable references” was a misnomer and that it’s actually about unique references, but even that’s a bit of a misnomer, because it’s not quite about uniqueness, but uniqueness of access. You can have multiple &mut borrows to the same thing, but only one of them is accessible at any given time:
let mut x = 1;
let y = &mut x;
let z = &mut *y;
*z += 1;
*y += 1;
assert_eq!(x, 3);
z and y both point to x, but only one is accessible at any point in time. Using y ends the z reborrow; if you swapped the two increment lines, it wouldn't compile.
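To make the ordering concrete, here's a sketch of the swapped version (the function name `swapped` is just for illustration), with the line that would break it commented out:

```rust
// Illustrates "uniqueness of access": using `y` ends the reborrow `_z`,
// so `_z` must not be used afterwards.
fn swapped() -> i32 {
    let mut x = 1;
    let y = &mut x;
    let _z = &mut *y; // reborrow of y
    *y += 1;          // using y ends the _z reborrow
    // *_z += 1;      // uncommenting this would not compile: the reborrow ended above
    *y += 1;
    x
}

fn main() {
    assert_eq!(swapped(), 3);
}
```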
My memory is fuzzy (this was quite some years back), but I have a vague feeling that this lack of precision in the use of the word “unique” was a factor in some baulking at the proposed change. (“You’re trying to fix something that we admit is strictly wrong, but you’re not even making it right!”)
The pre-NLL version would need some additional scopes. In some ways the current borrow checker is a lot friendlier (some things are possible today that weren't before), but it was also a simpler time, when one could easily picture the various lifetimes. Getting started with the language was harder, but I think internalizing the borrow checker was easier, because the rules were simpler and you were forced to learn them for anything more complex than 'hello world'.
let mut x = 1;
{
    let y = &mut x;
    {
        let z = &mut *y;
        *z += 1;
    }
    *y += 1;
}
assert_eq!(x, 3);
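As an aside on the "some things possible today that weren't before": the classic example is a shared borrow whose last use comes before a mutation. Under lexical lifetimes the borrow lasted to the end of the enclosing scope, so this (a minimal sketch) didn't compile; under NLL the borrow ends at its last use:

```rust
fn main() {
    let mut scores = vec![1, 2, 3];
    let first = &scores[0];        // shared borrow of `scores`
    println!("first = {}", first); // last use: the borrow ends here under NLL
    scores.push(4);                // OK today; pre-NLL this was rejected,
                                   // because `first`'s borrow lasted to end of scope
    assert_eq!(scores, vec![1, 2, 3, 4]);
}
```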
> Getting started with the language was harder, but I think internalizing the borrow-checker was easier,
So here's a funny thing: depending on what you mean, I don't think this is actually true. Let me explain.
The shortest way of explaining lexical lifetimes vs NLL is "NLL is based on a control-flow graph, lexical lifetimes are based on lexical scope." CFGs feel more complex, and the implementation certainly is. So a lot of people position this as "NLL is harder to understand."
But that assumes programmers think lexically. I think one of Rust's under-appreciated contributions is showing that programmers intuitively understand control flow graphs better than we may think, and may not intuitively understand lexical scope. Sure, by some metric NLL may be "more complex," but in practice people only report it being easier to learn and doing what they would naturally expect.
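A small sketch of what "based on a control-flow graph" means in practice: NLL tracks where a borrow is actually live along each path, so a branch that never touches the reference is free to mutate. A scope-based checker would reject this, because the borrow's lexical scope covers both branches:

```rust
fn main() {
    let mut x = 5;
    let r = &x; // shared borrow of `x`
    if x > 3 {
        println!("r = {}", r); // `r` is live only on this path
    } else {
        x += 1; // OK under NLL: `r` is dead on this path,
                // so the borrow doesn't block the mutation
    }
    assert_eq!(x, 5); // x > 3, so the then-branch ran
}
```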
Hey Steve :-) I've been following and using Rust since early 2013 (so starting around the same time you did, when I compare our contributions to the compiler) and back then I definitely did not find the lexical lifetimes hard to understand. I remember also noticing an increase in "dumb" lifetime-related questions after NLL landed, seemingly caused by a lack of understanding of how they work.
Perhaps it's all just confirmation bias on my end, but I think truly understanding lifetimes was easier the way I learned it back then. That said, I have never bothered to write documentation for the Rust project, whereas few Rust contributors can claim to be in the same league as you in that area. We probably have very different perspectives.
Oh totally, I know you :) It's interesting how our perceptions are different though, I think a lot of the "dumb" questions went away since NLL. I wonder if there's a way to quantify this.
So, I think this is the thing: I also think that it was easier to learn lexically, personally. I too was worried that it would make things harder. But, I just don't think that's been demonstrated to be true across most people. It is, however, only my gut-check feeling for what I've seen. I could be wrong.
(And, the borrow-check documentation is very minimal, and didn't really change with NLL, other than needing to add some println!s to make the examples not compile again. So it's certainly not because I wrote some amazing docs and explained it in a good way, hehe.)
I started in mid-2013 too, and my position is similar to yours. I'd characterise it like this: lexical lifetimes are easier to grok, but NLL turns out to be more practical. It does what people actually want (where LL didn't) often enough that it overcomes the greater conceptual complexity, because you end up having to think about the concepts less often.