
Correct, a key goal is to have no explicit region/lifetime annotations. There have been several papers on region inference after the originals by Tofte & Talpin, all attempting to refine the original analysis to infer shorter lifetimes: first by analyzing when a region can be safely emptied and re-used, then by abandoning the stack discipline, etc. Unfortunately, in my opinion none of these are viable for real programs. Although each infers shorter lifetimes in a few cases, the core problems remain unsolved: "collections will unify the lifetime variables of all elements in the collection" and "branching on a value and conditionally returning it extends its lifetime, even if the branch was not taken".
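To make those two problems concrete, here is a toy sketch in Rust terms (my own illustration; Rust just makes the unification visible as explicit lifetimes, where an inference-based system would perform it silently):

    // Problem 1: a collection forces every element to share a single
    // lifetime/region. Both arguments must live at least as long as 'a,
    // even if one of them could otherwise be freed much earlier.
    fn collect<'a>(short_lived: &'a str, long_lived: &'a str) -> Vec<&'a str> {
        vec![short_lived, long_lived]
    }

    // Problem 2: conditionally returning a value extends its lifetime to the
    // caller's region, even on the path where it is not returned.
    fn maybe_return<'a>(x: &'a String, fallback: &'a String, flag: bool) -> &'a String {
        if flag { x } else { fallback } // x must outlive the call either way
    }

    fn main() {
        let a = String::from("short");
        let b = String::from("long");
        println!("{:?}", collect(&a, &b));
        println!("{}", maybe_return(&a, &b, false));
    }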

An ideal solution to me needs to solve these problems. Since there is already a large body of research trying to address this on the static side and failing, I believe it needs to be solved with runtime checks. I'm still exploring the specifics, but it's worth mentioning that these checks would only be necessary to tighten existing lifetimes, so one can envision annotations or compiler options to elide them if desired. Lifetime inference in MLKit (and I believe in Ante as well) tends to speed things up by turning more dynamic allocations into stack allocations, so I believe there is some room for runtime checks without making the result more expensive than the version with dynamic allocation.
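As a rough, made-up sketch of the shape of the idea (not a description of MLKit, Ante, or any existing implementation): a value gets a stack/region slot by default, and a cheap runtime check decides whether it has to be promoted to the heap, the bet being that the branch is cheaper than unconditionally calling the allocator.

    // Hypothetical illustration only; the names and structure are invented.
    enum Slot<T> {
        Stack(T),     // value stays in the current region/stack frame
        Heap(Box<T>), // value was promoted because the check saw it escape
    }

    fn store<T>(value: T, escapes: bool) -> Slot<T> {
        // The "runtime check" is just a branch here; the hope is that this
        // branch costs less than always paying for a dynamic allocation.
        if escapes {
            Slot::Heap(Box::new(value))
        } else {
            Slot::Stack(value)
        }
    }

    fn main() {
        let local = store(42u64, false);   // stays in the frame
        let escaping = store(42u64, true); // promoted to the heap
        match (&local, &escaping) {
            (Slot::Stack(a), Slot::Heap(b)) => println!("{a} {b}"),
            _ => unreachable!(),
        }
    }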




Hey, thanks for calling my work "not viable!" (Co-author of "Better Static Memory Management" here)

Seriously, this looks promising and I'm very interested to see where it goes.


Ack, my apologies, I didn't intend to be so inflammatory! If memory serves, AFL was one of the first schemes (or perhaps the first?) to abandon the stack discipline, providing better inference than TT in quite a few cases. I remember the graphs in that paper giving me more hope that region inference could be a practical memory management scheme. Later, I believe the imperative regions paper improved on the AFL scheme a bit, at least in its single 'life' example, but without more test cases or graphs it is hard to assess that paper or how much that case matters in practice. I'm curious what you think the future of region inference research is, considering that most of the papers written after TT's retrospective paper seem to have moved in the direction of explicit regions rather than inferred ones.


Entirely joking about being offended, that work was a long time ago. I think it was an interesting exploration, but we were not able to get compelling results, mostly (I think) because we did not at that time have affine types (move semantics). Thus, if you put something into a container in one place and took it out in another, the analysis would have to infer a region spanning both, which might as well be the whole program in most cases.
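Roughly, the contrast in Rust terms (my own toy sketch, not from the paper): with moves, putting a value into a container and taking it out later just transfers ownership, instead of forcing one inferred region to cover every site that ever touches the container.

    // With affine types / move semantics the container owns its elements, so
    // inserting in one place and removing in another doesn't require inferring
    // a single region spanning both sites.
    fn produce(queue: &mut Vec<String>) {
        let s = String::from("hello");
        queue.push(s); // ownership of `s` moves into the container
    }

    fn consume(queue: &mut Vec<String>) {
        if let Some(s) = queue.pop() {
            println!("{s}");
        } // `s` is dropped (freed) here, independent of where it was created
    }

    fn main() {
        let mut queue = Vec::new();
        produce(&mut queue);
        consume(&mut queue);
    }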

These days I program in Rust and find Rust's approach to explicit regions to be a workable compromise, though reference-heavy types can get pretty ugly and hard to work with (and I'm pretty sure that variance of lifetimes is confusing to everybody who isn't a PLT theorist and some who are).
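For what it's worth, here is the usual toy example of the variance point (my own, nothing to do with the work above): shared references are covariant in their lifetime, while mutable references are invariant in what they point to.

    // A longer-lived &str can be passed where a shorter-lived one is expected.
    fn covariant<'a>(s: &'a str) -> usize {
        s.len()
    }

    // This only accepts exactly `&mut &'static str`, never `&mut &'a str` for a
    // shorter 'a, because otherwise a short-lived reference could be written
    // into a long-lived slot.
    fn invariant(slot: &mut &'static str) {
        *slot = "replacement";
    }

    fn main() {
        static GREETING: &str = "hello";
        println!("{}", covariant(GREETING)); // &'static str accepted as &'a str
        let mut slot: &'static str = GREETING;
        invariant(&mut slot);
        println!("{slot}");
    }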

The approach I personally find most interesting is the Lobster language. There (and I'm probably oversimplifying) the semantics are reference counting, but the compiler does analysis to remove a huge fraction of the RC operations. I believe the Perceus work is similar.
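A rough illustration, in Rust terms of my own, of what removing RC operations buys: when the analysis can prove a value is uniquely owned along a path, an incref/decref pair collapses into a plain move.

    use std::rc::Rc;

    // Naive RC semantics: every handoff bumps the count and drops it later.
    fn naive_handoff(data: Rc<Vec<u32>>) -> usize {
        let alias = Rc::clone(&data); // incref
        alias.len()
        // decref for `alias` and for `data` at end of scope
    }

    // What Lobster/Perceus-style analysis aims for: if the value is uniquely
    // owned along this path, the incref/decref pair disappears entirely.
    fn optimized_handoff(data: Rc<Vec<u32>>) -> usize {
        data.len()
        // single decref (or none, if ownership is passed further along)
    }

    fn main() {
        let data = Rc::new(vec![1, 2, 3]);
        println!("{}", naive_handoff(Rc::clone(&data)));
        println!("{}", optimized_handoff(data));
    }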

I'm happy to chat anytime. Recent work has been using somewhat exotic types provided by Rust (associated types, existentials, lots of inference through product types) to represent UI. So far I've basically been using what Rust gives me, but it's interesting to imagine what changes to the language/type system might buy you. For example, there are a few places in the code where there are downcasts, but I suspect that with a sufficiently strong type system you could prove those downcasts are infallible.
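A generic sketch of the kind of downcast I mean (not the actual UI code): a widget sits behind a trait object, and getting the concrete type back requires a downcast that the surrounding logic guarantees will succeed, but which the compiler still treats as fallible.

    use std::any::Any;

    trait Widget: Any {
        fn name(&self) -> &str;
        fn as_any(&self) -> &dyn Any;
    }

    struct Button {
        label: String,
    }

    impl Widget for Button {
        fn name(&self) -> &str {
            "button"
        }
        fn as_any(&self) -> &dyn Any {
            self
        }
    }

    fn main() {
        let w: Box<dyn Widget> = Box::new(Button { label: "Ok".into() });
        // Fallible as far as the compiler knows, even though the surrounding
        // logic guarantees `w` is a Button at this point.
        let button = w
            .as_any()
            .downcast_ref::<Button>()
            .expect("known to be a Button by construction");
        println!("{} / {}", button.name(), button.label);
    }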


I’m curious what the rationale is for not making regions explicit in the source code. It seems like a downside of region inference for a systems language is the unpredictability of the inferred regions.


I maintain a JIT compiler for a very high-performance Haskell-like functional programming language. The compiler automatically generates the minimum malloc/free calls, mirroring the hand-written code of an experienced C programmer. It works really well, and the generated machine code was shown to be faster than hand-written, optimised C++ code that people had been maintaining for years. The compiler code is unfortunately closed source and I have zero interest in writing academic papers, so I won't elaborate further. Just telling you this to hint that you could perhaps try something like that :) The execution flow analyser was a b** to get right but worth it. I also recommend checking out some of the recent Lean 4 papers; some of them are moving in the right direction.



