
For hardcore users I don't think this tactic-suggestion feature is a good idea; for example, how does it deal with custom Ltac tactics (cf. Chlipala's Bedrock)? To prove me wrong, one could test the idea on, say, CompCert's development and measure how often the tactic actually used next is among the suggested ones.
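
To make that concrete, here is the sort of project-local composite tactic (in the spirit of Chlipala's crush; crush_lite is a made-up name, and this assumes a recent Coq with the Lia module) that such developments lean on. The "next tactic" in those proofs is usually this one, and no generic suggestion list will contain it:

    Require Import Lia.

    (* Hypothetical composite tactic: fires a fixed battery of
       simplifiers and closers in a single step. *)
    Ltac crush_lite :=
      intros; simpl in *; intuition (auto; try congruence; try lia).

    (* In a development built around such a tactic, proofs look like: *)
    Goal forall n m : nat, n = m -> n + 0 = m.
    Proof. crush_lite. Qed.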

On the other hand, one common problem in large proofs is having too many hypotheses in the context. One super nice extension would be to quantify the relevance of each one and color/display them accordingly, leaving the option to move the 'tolerance' threshold for display. This relevance metric would have to be aware of the lemmas available (if A -> B is proved by a lemma, and B is the goal, then A is relevant).
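
A toy version of the situation I mean (A, B, C and the lemma name AB are just placeholders):

    Section Relevance.
      Variables A B C : Prop.
      Hypothesis AB : A -> B.

      Goal A -> C -> B.
      Proof.
        intros HA HC.
        (* Context now holds HA : A and HC : C, goal is B.
           Because AB : A -> B is in scope, HA should score as highly
           relevant, while HC could be dimmed or folded away. *)
        apply AB. exact HA.
      Qed.
    End Relevance.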

My 2 cts.




Indeed, trying tactics in general can be unsatisfactory if you 1) don't let users enrich the set of tactics tried and 2) don't let users prevent some things from being tried.
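
One low-tech sketch of both points, without assuming anything about how the plugin actually works: keep the battery behind a single Ltac name that a project can redefine, adding its own tactics and dropping the ones it never wants tried (project_tactics and my_domain_tactic are made-up names):

    (* Default battery. *)
    Ltac project_tactics :=
      first [ assumption | reflexivity | congruence | auto ].

    (* Stand-in for a project's own automation (think Bedrock). *)
    Ltac my_domain_tactic := congruence.

    (* A project redefines the battery: it adds its own tactic and
       drops the ones it never wants tried. *)
    Ltac project_tactics ::=
      first [ assumption | my_domain_tactic | auto ].

    Goal forall n : nat, n = n.
    Proof. intros; project_tactics. Qed.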

For your other issue, I am thinking about ways to hide hypotheses in the editor without having to clear them in the actual code. This way they are still there if you need them, but they don't eat up your precious brain space while they are irrelevant (huh) to your current work.

Thanks for your 2 cts! Maybe I'll think about this threshold idea now!



