
The push to eliminate lifetimes in favor of loans (with this proposal or with Polonius) in the pursuit of allowing a larger subset of correct programs to be expressed in Rust makes sense at first blush — of course we want to be able to prove more correct programs correct! Who wants to fight the borrow checker all the time over things we know are fine! — but I'm concerned that in the long run it will actually turn out to be a bad thing.

I vehemently disagree with the article that lifetimes are at all nebulous or hard to grasp. IMO they're a pretty straightforward concept, and they map really nicely onto single-ownership, move, RAII-based memory management and the underlying C-style memory management concepts. Loans, OTOH, feel perhaps less nebulous, but they map more poorly onto the most comprehensible and concrete ways to think about low-level memory management (and also less well onto concepts in other low-level languages like C++). Instead of seeing your program as a mostly 1D collection of mostly contiguous scopes that the program counter jumps around in, now you have to view it as a gigantic thicket of constraints.

So in essence, by making Rust's static analysis more powerful and less annoying at first, we're actually making it harder to fully grasp in the long run. It's sort of like the Haskell monad problem: the more powerful you make your compiler/language abstractions in order to prove more code, the less comprehensible everything gets. And I think in both cases, past a certain point, trading some power for long-term comprehensibility with more straightforward concepts is better.
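For reference, the canonical example of "things we know are fine" that today's checker rejects but a loan-based analysis accepts (the well-known NLL problem case #3) looks roughly like this sketch:

    use std::collections::HashMap;

    // Rejected by today's (NLL) borrow checker but accepted under a
    // loan-based analysis like Polonius: the shared borrow from `get`
    // is only live on the `Some` path, yet NLL extends it across the
    // whole function because it flows into the return value.
    fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
        if let Some(v) = map.get(&0) {
            return v;
        }
        map.insert(0, String::from("default")); // error under NLL today
        &map[&0]
    }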




I like Rust a lot right now, but with Polonius, some of the unnecessary and weird syntactic sugar that's being added, and the fact that instead of full coroutines encompassing both async and generators (like Kotlin has) we're getting neutered coroutines for generators while async remains a separate but similar concept, I think the Rust designers have been making a lot of missteps lately. I know nothing's perfect, but it kind of sucks.


> we're getting neutered coroutines to do generators and async is a separate but similar concept

I'm not sure what's neutered about Rust's current plans for generators, and they aren't separate from async; they're the foundation that async desugars to.

I'm also not sure what your objection is to Polonius, which, so far, is still just a strictly more permissive version of the borrow checker, with nothing new to learn on the user end. AFAICT the notation in this blog post is not actually proposing any new Rust syntax, and is instead proposing syntax in the formal language that is being used internally to model Rust's type system.


> I'm not sure what's neutered about Rust's current plans for generators

They're neutered because they can't suspend and transfer control to a function other than the one that called them ("Note also that "coroutines" here are really "semicoroutines" since they can only yield back to their caller." https://lang-team.rust-lang.org/design_notes/general_corouti...) and you can't pass values into resume and get them out from the yield statement in the coroutine (https://github.com/rust-lang/rust/issues/43122#issuecomment-...).

> and they aren't separate from async, they're the foundation that async desugars to.

Yeah, I just looked it up again, and I don't know why I had it in my head that they were separate. You're correct, they are the same thing under the hood, so honestly that eliminates my biggest problem with them.

> I'm also not sure what your objection is to Polonius, which, so far, is still just a strictly more permissive version of the borrow checker, with nothing new to learn on the user end.

The entire model is different under the hood, though, since it switches from lifetimes + borrows to loans, so to fully understand its behavior the user really would have to change their mental model, and as I said above I'm a huge fan of the lifetimes model and less so of the loans model.

It just feels much more natural to treat the ownership of a memory object, and therefore the span of time in your code that the object lives, as the fixed point, and borrows as wrong for outliving what they refer to, than to treat borrows as the fixed point and objects as wrong for going out of scope and being dropped before the borrow ends. The fundamental memory management model of Rust is single ownership of objects, moves, and scope-based RAII via Drop, so the lifetime of an object really is the more basic building block of the memory model, with borrows conceptually orbiting around it and naturally being adjusted to fit it, and the checker being a way to force you to adhere to that. The loan-based way of thinking would make more sense for an ARC-based language, where references really are more basic, because objects really do only live for as long as there are references to them.
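To put that in code, a minimal sketch of the lifetimes-as-fixed-point view, where the object's scope is the given and the borrow is what's at fault:

    fn main() {
        let r;
        {
            let x = 5; // `x` owns its value; its lifetime is this scope
            r = &x;    // error[E0597]: `x` does not live long enough
        }              // `x` is dropped here, while still borrowed
        println!("{r}");
    }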


> you can't pass values into resume and get them out from the yield statement in the coroutine

I think that the linked comment is out of date, and that this is supported now (hard to tell because it hasn't been close enough to stabilization to be properly documented): https://github.com/rust-lang/rust/pull/68524
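For the curious, a rough sketch of what the unstable surface looks like on nightly; the feature names have churned (generators were renamed to coroutines in late 2023) and the syntax is still shifting, so take this as illustrative rather than definitive:

    // Nightly-only; subject to change.
    #![feature(coroutines, coroutine_trait)]

    use std::ops::{Coroutine, CoroutineState};

    fn main() {
        // Resume arguments: the first call to `resume` binds the closure
        // parameter; each later argument becomes the value of a `yield`.
        let mut co = Box::pin(|init: i32| {
            let next = yield init + 1;
            yield next * 2;
        });
        assert_eq!(co.as_mut().resume(10), CoroutineState::Yielded(11));
        assert_eq!(co.as_mut().resume(3), CoroutineState::Yielded(6));
        assert_eq!(co.as_mut().resume(0), CoroutineState::Complete(()));
    }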

As for Polonius changing the underlying mental model, I think this is a natural progression. Rust 1.0 tried to present a simple lexical model of borrowing, and then enough people complained that it has long since replaced that simple model with non-lexical lifetimes, trading simplicity for "do what I mean". And since it's not allowed to break any old code, if you want to keep treating borrowing as if it had the previous model, that shouldn't present any difficulties.
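For example, this sketch compiles under NLL but was rejected by the 1.0-era lexical checker:

    fn main() {
        let mut v = vec![1, 2, 3];
        let r = &v[0];
        println!("{r}"); // last use of `r`: under NLL the borrow ends here
        v.push(4);       // fine since NLL; an error under the lexical model,
                         // where the borrow lasted to the end of the scope
    }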


> They're neutered because they can't suspend and transfer control to a function other than the one that called them ("Note also that "coroutines" here are really "semicoroutines" since they can only yield back to their caller." https://lang-team.rust-lang.org/design_notes/general_corouti...)

Huh? At first glance, wanting to do this sounds absolutely insane to me. As in, it sounds a lot like "imagine a function could return not just to its caller, but to other functions as well! So if you call it, you might not even end up returning to the same function you started in, but somewhere completely different".

What am I missing? It sounds like a straight-up goto. What's the simplest use case for these sorts of weird yields to other functions?


That's the essence of coroutines vs. subroutines. If you've ever used a Unix pipe, you've used a limited form of coroutines. "ps | grep foo" tells process A, the shell, to fire up processes B (ps) and C (grep), and tells B to send results to C, not to its caller A. B runs a bit and yields, returning some data via stdout. C runs a bit, reading the result via stdin, then yields to B and waits for B to return more data. Pipes can even be bidirectional, so it's possible for C to send results to B when it yields, but off the top of my head I can't think of a real example.
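A rough emulation of that shape in Rust, with threads and a channel standing in for real coroutine control transfer (the data here is made up for illustration):

    use std::sync::mpsc;
    use std::thread;

    // Emulating "ps | grep foo": the producer hands results to its
    // peer, not back to the function that spawned it.
    fn main() {
        let (tx, rx) = mpsc::channel();
        let producer = thread::spawn(move || {
            for line in ["root 1 init", "me 42 foo", "me 43 bash"] {
                tx.send(line).unwrap(); // like `ps` writing to the pipe
            }
        }); // `tx` is dropped when this thread ends, closing the channel
        let consumer = thread::spawn(move || {
            for line in rx { // like `grep foo` reading from the pipe
                if line.contains("foo") {
                    println!("{line}");
                }
            }
        });
        producer.join().unwrap();
        consumer.join().unwrap();
    }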


I suppose conceptually they're similar, but you're oversimplifying how Unix pipes work to the extent that it's apples to oranges compared to async/coroutines.


Do you have examples of "weird syntactic sugar"? I don't agree with every syntax decision in Rust, but my gripes have nothing to do with syntactic sugar, so I am curious what you mean.


“You have become the very thing you swore to destroy!”


You either die a hero, or live long enough to become the villain.


Counterpoint: my day-to-day programming is in GC languages. In those, how long something lives is basically anyone's guess; it could be the entire length of the program, or it could be two seconds from now.

This isn't something that really causes headaches; it's desirable. So long as there's a way to unwind things when you come into conflict with the borrow checker, I don't see the harm.

My assumption is that the lifetime explanation, while incorrect, will still work (I can't imagine it wouldn't, as that would break too much). So if you do run into these compiler problems, you can still revert to the old, simpler mental model and move forward.


Only if we are talking about GC languages without value type support.

Examples of languages with some form of automatic memory management and RAII: Swift, D, Nim, C#, Active Oberon, Modula-3, Delphi.


Every language I'm aware of has value types (though not necessarily user-defined value types). Typically, numerics or "primitives" are value types in GCed languages.

Those types don't have any sort of explicit lifetime that is different from a regular object type's. If you put an `int` on a `Foo` object, that `int` lives as long as the `Foo` object does, for example.

Being a value type simply means that the memory representation of the value is used instead of a pointer/object reference.

That means that even when you do have user-defined value types, the same rules apply. How long these things "live" depends entirely on the context of what they are associated with. If you have a `Bar` value type on a `Foo` object, then `Bar` will live as long as `Foo` does, which means until `Foo` is eventually garbage collected.
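The same point in Rust terms, since the rest of the thread is Rust-flavored: a value-type field is stored inline, so it simply shares its container's fate.

    struct Bar { n: i32 }
    struct Foo { bar: Bar } // `bar` is stored inline, not behind a pointer

    fn main() {
        let foo = Foo { bar: Bar { n: 1 } };
        println!("{}", foo.bar.n);
    } // `foo` goes away here, and the embedded `Bar` goes with it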


The languages I listed have classical C- and C++-style value types available to them, with stack allocation or global memory segments, and mechanisms for deterministic resource management.

Instead you decided to point out the philosophical meaning of value types.


I think he's pointing out that value types and "does this language require/support explicit lifetime management" are actually unrelated. Fortran has value types, but it doesn't have built-in memory management features. Perhaps you could substitute "has heap allocation" for "has value types"?


Fortran has had allocatables for 34 years.


What do you mean by

> The ones I listed have classical C and C++ like value types available to them

I don't really know what that means or how it's related to the discussion of lifetimes.


Deterministic resource management.


As I said above, value types are not deterministic resource management. Those are orthogonal concepts.

And in fact, confusing the two can lead to problems. C#, for example, does not necessarily store value types on the stack [1]; they can be allocated on the heap. C# doesn't even give a guarantee of how long a value type will live. Its only guarantee is the one I outlined earlier: that the thing is represented as a block of memory rather than a pointer, and that when you pass it around, it's done as a copy.

If your assumption is that "this thing is freed immediately when it's unused", that's a faulty assumption. Being a value type imparts no guarantee on how long something will live.

This is more than just a philosophical definition.

If you want deterministic resource management in C#, you use the `using` statement. In Java, it's try-with-resources. In C++, you rely on RAII. In C... good luck. None of those things have anything to do with value types, because lifetime and type aren't related in those languages.

> Once you abandon entirely the crazy idea that the type of a value has anything whatsoever to do with the storage, it becomes much easier to reason about it.

[1] https://ericlippert.com/2010/09/30/the-truth-about-value-typ...
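And for the Rust analogue of that list: deterministic cleanup comes from ownership plus Drop (RAII), independent of any value-type notion. A minimal sketch:

    struct Resource(&'static str);

    impl Drop for Resource {
        // Runs at a statically known point: the end of the owning scope.
        fn drop(&mut self) {
            println!("releasing {}", self.0);
        }
    }

    fn main() {
        let _a = Resource("a");
        {
            let _b = Resource("b");
        } // prints "releasing b" here, deterministically
        println!("after inner scope");
    } // prints "releasing a" here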


This is an ancient article. Pretty much only Rust has Drop on such strong steroids. Either way, `struct` in C# means something very specific, and it is the same as `struct` in C. You can cast a malloc'd pointer in C to a struct and use it as such, and you can do the same in C# (not that you should, but you can).

As for the article's contents: structs absolutely do go on the stack when you declare them as locals, except in select scenarios: async and iterator methods, both of which can capture variables that live across yields or awaits into a state-machine struct or class (a debug/release difference; there's also ValueTask, which does not allocate when it completes synchronously).

If you care about memory lifetimes, the compiler will helpfully error out when you return a `ref` to a local scope, violating the lifetime (it has rudimentary lifetime analysis underneath, hence the `scoped` and `unscoped` semantics).


> This is an ancient article.

It was posted yesterday, 4 March 2024.


I meant the Eric Lippert one :)

It talks about how the spec does not say where structs go. And yet all existing implementations today have more or less identical behavior (and by all I mean .NET (CoreCLR), Mono (the dotnet/runtime flavour), Mono (the Unity flavour), and .NET Framework).

With that said, nothing but respect for the article's content and, of course, Eric Lippert. For the context of this discussion, however, it may be misleading. C# has gained quite a few low-level primitives since it was written, too.


Ah! My bad for misunderstanding.


> C#, for example, does not necessarily store value types on the stack

It does if you use the right set of keywords; learn them.


... And memory/resource leakage is not an uncommon problem in those languages.


It is no accident that the language designers of D, Chapel, Swift, Haskell, OCaml, and Ada looked at Rust's lifetimes and the current state of the borrow checker and decided that, while a good idea, they would rather keep the productivity of automatic memory management with just enough lifetimes than adopt the Rust approach.

We should also take into consideration that while these concepts are somewhat hard to use in Rust compared to other languages, in Cyclone they were even more complex; Rust is actually already the more ergonomic version, despite its complexity.


I dream of the day someone takes up the effort and develops a garbage-collecting front end for Rust. The language in itself is really nice: it's functional and has nice algebraic sum types. I also like the syntax a lot. One could perhaps even reuse crates, since they've already been type- and borrow-checked by the Rust compiler.


But then why not use Scala, OCaml, Haskell, etc.? Rust is only interesting and novel in that it can target the very niche area where GCs are generally not allowed.


Indeed, I never get the point of wishing for something that already exists.

It usually feels like many Rust users have never seen other ML-lineage languages.


Try out Kotlin, or just spend more time with Rust to get comfortable with memory management. Once you get used to it, in so many cases the difference kinda boils down to wrapping certain code in a pair of curly braces to ensure things get dropped.

I use manual lifetime annotations very infrequently. I honestly used to use them more when trying to keep references in structs, but I usually find myself reaching for Arc<Mutex> now.
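A minimal sketch of that Arc<Mutex> pattern (the counter here is just for illustration):

    use std::sync::{Arc, Mutex};
    use std::thread;

    // Shared ownership without explicit lifetimes: the value lives
    // as long as any clone of the Arc does.
    fn main() {
        let counter = Arc::new(Mutex::new(0));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(*counter.lock().unwrap(), 4);
    }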


I agree.

The classic "they were so interested in whether they could, they never considered whether they should"



