DeepMath – Deep Sequence Models for Premise Selection (arxiv.org)
79 points by gwern on July 4, 2016 | 14 comments



One of the authors here; didn't expect to see this pop up on HN. Questions welcome!


This is fascinating. If I had more time I'd formulate a more articulate question, but I'm going to need to sit with this paper for a bit. Amazingly, I've been fooling around on the edges of theorem proving for a while now and had yet to hear of the Mizar corpus. Really, really fascinating stuff.


You should read https://intelligence.org/2013/12/21/josef-urban-on-machine-l... and the citations! After that, you almost wonder why no one had applied NNs before.



Thanks for the link; started and bookmarked. Lots to get on top of in that interview!

Reading the original paper, I had the same feeling of "why has no one done this yet?", but I don't have the chops to ask that question properly yet.


How do you avoid accidentally encoding knowledge into your algorithm? You can't test your algorithm on a different sample of naturally occurring mathematics, since we have only one.


Information leaks are indeed the number one challenge when evaluating this type of approach. Our evaluation procedure does not rigorously guarantee against information leaks, but it does allow for an apples-to-apples comparison with previous methods (which had the same potential issue). And showing an advantage over these methods is what we were trying to achieve.

So in summary: it's hard to prevent, but we're doing our best.
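
To make the leakage concern concrete, here is a minimal sketch of one safeguard: splitting a dependency-ordered library at a single cut point, so that test conjectures and their proofs are never visible during training. The (name, premises) corpus format here is a hypothetical illustration, not our actual pipeline:

    def dependency_ordered_split(statements, train_fraction=0.9):
        # statements: list of (name, premise_names) pairs in library
        # order, where each statement may only cite earlier statements.
        # Cutting the ordered list at one point then guarantees that no
        # test conjecture (or its proof) was seen during training.
        cut = int(len(statements) * train_fraction)
        return statements[:cut], statements[cut:]

    # Usage, with statements already in library order:
    # train, test = dependency_ordered_split(
    #     [("thm_1", ["ax_3", "ax_7"]), ("thm_2", ["thm_1", "ax_2"])])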


- What kind of theorems were hard to prove using this system?

- Is there a limit on the size of the conjecture/formula?

- Would it be viable to use this method (or something similar) for faster formal software verification?


- The most interesting thing about this approach is that it managed to prove a significantly different set of theorems than previous state-of-the-art approaches. So theorems that were hard for this approach were not all hard for previous approaches, and conversely, this approach can prove theorems that were hard for previous ones.

- In this specific setup, yes: all sequences are truncated for practical reasons. In theory there is no size limit; in practice, however, long sequences will be more difficult to encode meaningfully (see the sketch after this list).

- Formal software verification is definitely one of the long-term goals of this project. I think software verification is one of the big challenges of our transition into an information society (as algorithms/AI take more control over our lives, as we start using smart contracts, etc.), and AI will help solve it. The current setup would not be of much help here, however.
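
A minimal sketch of the kind of truncation described above; the character-level tokenization, max_len, and padding scheme are illustrative assumptions, not the paper's exact setup:

    PAD = 0  # id 0 reserved for padding

    def encode_and_truncate(formula, vocab, max_len=256):
        # Map a formula string to a fixed-length list of integer ids.
        # Sequences longer than max_len are cut off (the practical limit
        # mentioned above); shorter ones are right-padded with PAD so
        # batches have a uniform shape. Unknown characters share one id.
        ids = [vocab.get(ch, len(vocab) + 1) for ch in formula[:max_len]]
        return ids + [PAD] * (max_len - len(ids))

    # Example: map known characters to ids 1..N.
    vocab = {ch: i + 1 for i, ch in enumerate("ab()=+*, ")}
    print(encode_and_truncate("f(a,b)=a+b", vocab, max_len=16))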


So you're limiting the scope of potential solutions to theorems, and then trying to eventually supplant a brute-force method with biologically informed cognitive methods, i.e. neural networks?

Sort of spamming number theory at theorems by discovering a corpus's bias?


What do you think is missing to capture the high-level structure of mathematical concepts?


How close is the state of the art here to discovering novel and interesting theorems?


I find it a bit counterintuitive that I can't query such a system interactively and ask it questions about a proof.

Is it safe to say that the architecture in this paper captures MML syntax and the "bounds" within which the software operates?


I've been waiting for this for a long time.

Hopefully, this will allow some currently unproven theorems to be proved automatically.



