
The assumption here seems to be that the compiler/analyzer is only able to look at one function at a time. This makes no sense. Safety is a whole-program concern and you should analyze the whole program to check it.

If anything as simple as the following needs lifetime annotations then your proposed solution will not be used by anyone:

    const int& f4(std::map<int, int>& map, const int& key) { return map[key]; }




Whole-program analysis is not tractable (i.e., it doesn't scale). Rust has already proven that function signatures are enough, and that approach does scale: the analysis can be performed locally at each call site and doesn't have to recurse into callees.

Your function would look like this in Rust:

    fn f4<'a>(map: &'a Map<i32, i32>, key: &i32) -> &'a i32 { ... }
You don't need much more than a superficial understanding of Rust's lifetime syntax to understand what's going on here, and you have much more information about the function.
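
For concreteness, here is a minimal compiling sketch of that signature (assuming std::collections::BTreeMap stands in for the Map shorthand above, and treating a missing key as a bug rather than default-inserting the way std::map::operator[] would):

    use std::collections::BTreeMap;

    // 'a ties the returned reference to `map`, not to `key`.
    fn f4<'a>(map: &'a BTreeMap<i32, i32>, key: &i32) -> &'a i32 {
        map.get(key).expect("key must be present")
    }

    fn main() {
        let mut m = BTreeMap::new();
        m.insert(1, 10);
        // The borrow checker verifies this call site from the signature alone,
        // without looking into f4's body.
        let r = f4(&m, &1);
        assert_eq!(*r, 10);
    }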


"Whole-program analysis is not tractable (i.e., not scalable),"

The search term for those who'd like to follow up is "Superoptimization", which is one of the perennial ideas that programmers get that will Change the World if it is "just" implemented and "why hasn't anyone else done it I guess maybe they're just stupid", except it turns out to not work in practice. In a nutshell, the complexity classes involved just get too high.

(An interesting question I have is whether a language could be designed from the get-go to work with some useful subset of superoptimizations, but unfortunately, it's really hard to answer such questions when the bare minimum to stand a chance of success is 30 years of fairly specific experience, and by then that's very unlikely to be what that person wants to work on.)


Something I would like to know is how much lifetime annotation you can infer (recursively) from the function implementation itself. Compiler-driven, IDE-integrated, automatic annotation would be a good tool to have.

Some amount of non-local inference might also be possible for templated C++ code, which already lacks a proper separate compilation story.


At the limit, the answer is “between zero and completely.” Zero because you may only have access to the prototype and not the body, say if the body is in another translation unit, or completely if a full solution could be found, which is certainly possible for trivial cases.

The reason not to do this isn't impossibility, but other factors: it's computationally expensive, so if you think compile times are already bad, get ready for them to get way worse. Also, changing the body can break code in completely different parts of your program, since changing the body changes the signature and can now invalidate callers.
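
To make that last point concrete, here's a small Rust sketch (the function and names are made up) of how a body-derived signature would let an edit to one function's body break a caller somewhere else entirely:

    // Version 1: the body only ever returns a reference into `a`, so a signature
    // inferred from the body would be fn pick<'a>(a: &'a str, b: &str) -> &'a str.
    fn pick<'a>(a: &'a str, _b: &str) -> &'a str {
        a
    }

    fn main() {
        let a = String::from("long-lived");
        let r;
        {
            let b = String::from("short-lived");
            // Fine under version 1: the result does not borrow from `b`.
            r = pick(&a, &b);
        } // `b` is dropped here.
        println!("{r}");
        // If the body later also returned `b` sometimes, the inferred signature
        // would become fn pick<'a>(a: &'a str, b: &'a str) -> &'a str, and this
        // caller would stop compiling even though nothing here changed.
    }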


Translation units have long since stopped being a boundary for static analyzers, or even for compiler optimizations with LTO. They're a semantic/namespace concept only.


Sure, you can do some things sometimes. It still means that you need access to everything, which isn't always feasible. And as I said, it's just not the only reason.


> Whole-program analysis is not tractable (i.e., not scalable)

... in the general case. There are many function (sub-)graphs where this is not only tractable but actually trivial. Leave annotations for the tricky cases where the compiler needs help.

> fn f4<'a>(map: &'a Map<i32, i32>, key: &i32) -> &'a i32 { ... }

The problem is not that you can't understand what this means, but that it adds too much noise, which is both needless busywork when writing the function and distracting when reading the code. There is a reason why many recent C++ additions have been reducing the amount of boilerplate you need to write.

There have been plenty of attempts at safety annotations. There is a reason why they have not been adopted by most projects.


I think you might be surprised how rare explicit lifetime annotations actually are in Rust code.
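
For example, Rust's lifetime elision rules cover the common single-input and &self cases, so signatures like these need no explicit 'a at all (a minimal sketch):

    // Elided: the output borrows from the single reference input.
    fn first(v: &[i32]) -> &i32 {
        &v[0]
    }

    struct Config { name: String }

    impl Config {
        // Elided: the output borrows from &self.
        fn name(&self) -> &str {
            &self.name
        }
    }

    fn main() {
        let v = vec![1, 2, 3];
        let c = Config { name: String::from("demo") };
        println!("{} {}", first(&v), c.name());
    }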

But, to the point: it precisely isn't “noise”. It's important information that any competent C++ developer will immediately look for in the documentation or, worse, by analyzing the code. It's not distracting; it's essential.

Aside: I’m very confused by your assertion that C++ has been reducing boilerplate. It feels like every couple of years there’s another decoration that you need to care about. The only reduction I can think of is defaulted/deleted constructors and assignment ops? But that feels self-inflicted, from a language design perspective.



