
This is precisely how Google built AlphaProof! Read the article; Lean's role is quite critical to its success.

Privately, I think Lean could be incredibly powerful if baked deep into an ML's kernel/structure/training process. I think an AI proof of something like the Riemann Hypothesis may well be possible if you get enough resources behind it.




> This is precisely how Google built AlphaProof! Read the article; Lean's role is quite critical to its success.

I read the article. It doesn't say anything about generating new proofs to train on. It only mentions scraping GitHub for Lean theorems + proofs.


I'm not sure if you're confusing AlphaProof with LeanDojo here? AlphaProof generated its own proofs on 100M formal problems and did RL on this process.


Yes, I know AlphaProof did that. I wrote that it would be exciting "if they [LeanDojo] could integrate the reinforcement learning approach from AlphaProof".

This would give LeanDojo a lot more training data, and hopefully give us an open source proof assistant at IMO Silver level.


Sorry!


I want to see a system that automates formalization of the existing math literature. LLMs are supposed to be good at language, right? So get them to read math papers and fill in the blanks, spitting out formalized versions of the results in Lean. We have centuries of literature to process, and when we're done, all of mathematics will be formalized, ready to serve as training data for theorem provers moving into new mathematics.


The big problem with this is that many domains of math use hyper-specialized notation, novel terms, different styles, etc., and there's not much data to train on within any given niche.

For example, the IUT "proof" of the abc conjecture used completely novel systems to come to its conclusions, to the point that it took top number theorists a few years to even parse what it was saying, and only then could they find the faults in the proof.

I think the IUT proof would be a great adversarial example for any of these systems; if you can find the problem in the proof, you know you've hit on something pretty powerful.


> if you can find the problem in the proof

Is it still open whether there is a problem?

Last I heard about this, there was one guy saying there were problems, and the author dismissing that as "they just didn't understand it" without showing much interest in explaining it better...


I reckon the general consensus among mathematicians (as that is what counts) is that the abc conjecture so far has _not_ been proven. Mochizuki (and the school around him) seem to make up the majority of the people who believe his proof is correct. As you point out, Scholze has identified a supposed flaw in Mochizuki's argument, but anyone not already at the forefront of IUT/NT/the abc conjecture is probably incapable of telling whether this flaw is a true flaw or not. As Mochizuki refuses to elaborate (on this supposed flaw), consensus cannot be reached, and thus the abc conjecture remains open.


Given his behavior, I wouldn't believe anything from that guy unless it came with a formalized proof.


If you mistrust authors and can't actually rule out someone trying to cheat, checking a formalized proof beyond a reasonable doubt is still a lot of work and needs a huge amount of domain-specific knowledge. An author trying to cheat would try to make everything chaotic and hard to read, as well as hide subtle flaws in the theorem statements.


> if baked deep into an ML's kernel/structure/training process

Not sure of the feasibility of implementing Lean as a GPU kernel, if you meant that. Also, I'm not sure it makes sense to run Lean inside the (mostly matmul) training process. Using it to prepare some training data seems more realistic. But that seems to be what AlphaProof does in the reinforcement step, if I'm not mistaken.
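
To make that concrete, here's a rough sketch of what "Lean prepares the training data" could look like, in the spirit of AlphaProof's reinforcement step (expert iteration). `model.sample`, `model.finetune`, and `lean_check` are hypothetical stand-ins, not any real API:

    def expert_iteration(model, problems, lean_check, rounds=3, attempts=16):
        for _ in range(rounds):
            verified = []
            for thm in problems:
                for _ in range(attempts):
                    proof = model.sample(thm)          # propose a candidate proof
                    if lean_check(thm, proof):         # Lean acts as the reward signal
                        verified.append((thm, proof))
                        break
            model.finetune(verified)                   # train only on checked proofs
        return model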


I think that in order for it to truly find deep insights, it would need to do more than just generate training data. I'm also a believer that current AI approaches are nearing their limits as the human-generated feed dries up and we start using old models to train new ones.

Of course, what that really means and how you'd go about adapting your CUDA code / hardware, would have to be researched.


It could perhaps also be used to guide the sampling step at the end, no? Similar to those syntax-constrained samplers that ensure the LLM spits out e.g. valid JSON.
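
For reference, a minimal sketch of how such constrained sampling usually works: mask the logits of every token the grammar (here, hypothetically, a Lean parser) disallows, then sample from what's left. `allowed_token_ids` is a made-up callback, not a real library function:

    import numpy as np

    def constrained_sample(logits, prefix, allowed_token_ids):
        # Disallowed tokens get -inf so they receive zero probability
        mask = np.full_like(logits, -np.inf)
        mask[list(allowed_token_ids(prefix))] = 0.0
        masked = logits + mask
        # Numerically stable softmax over the surviving tokens
        probs = np.exp(masked - masked.max())
        probs /= probs.sum()
        return int(np.random.choice(len(logits), p=probs))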


Syntax constraints are usually expressible as grammars, but the language of math is often very unique and domain specific, which makes this kind of approach tricky to get right


Thankfully Lean exists, so the LLM can write that instead of the math syntax used in papers.


So yea that was my thought. Use it to spit out valid Lean syntax, and potentially also to backtrack if it outputs inconsistent or erroneous proofs.


It's fairly good at valid syntax already, and it has done backtracking for a long time, since it does tree search guided by its predictions of how likely a tactic is to end up finishing the tree leaf it's applied to.
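
For anyone curious, that guided search loop is roughly this shape: a best-first search over proof states, always expanding the state the model currently scores highest, with failed tactics pruned (which is where the backtracking falls out for free). A hedged sketch; `propose_tactics` and `apply_tactic` are hypothetical stand-ins for the policy model and the Lean interaction layer:

    import heapq

    def best_first_search(root, propose_tactics, apply_tactic, budget=1000):
        frontier = [(-1.0, 0, root)]   # (negated model score, tiebreak, state)
        counter = 0
        while frontier and budget > 0:
            _, _, state = heapq.heappop(frontier)
            budget -= 1
            if state.is_solved:
                return state
            for tactic, score in propose_tactics(state):
                child = apply_tactic(state, tactic)
                if child is not None:  # failed tactics are simply dropped
                    counter += 1
                    heapq.heappush(frontier, (-score, counter, child))
        return None                    # budget exhausted, no proof found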


It's possible that the hypothesis is independent of the existing axiomatic systems for mathematics and a computer can't discover that on its own. It will loop forever looking for a proof that will never show up in the search. Computers are useful for doing fast calculations, but attributing intelligence to them beyond that is mostly a result of confused ontologies and metaphysics about what computers are capable of doing. Computation is a subset of mathematics and can never actually be a replacement for it. The incompleteness theorem, for example, is a meta-mathematical statement about the limits of axiomatic systems that cannot be discovered with axiomatic systems alone.


> It's possible that the hypothesis is independent of the existing axiomatic systems for mathematics and a computer can't discover that on its own.

Humans have discovered independence proofs, e.g. Paul Cohen’s 1963 proof that the continuum hypothesis is independent of ZFC. I can’t see any reason in principle why a computer couldn’t do the same.

If the Riemann hypothesis is independent of ZFC, and there exists a proof of that independence which is of tractable length, then in principle if a human could discover it, why couldn’t a sufficiently advanced computer system?

Of course, it may turn out either that (a) the Riemann hypothesis isn't independent of ZFC (what most mathematicians think), or (b) it is independent but no proof of that independence exists, or (c) the shortest proof is so astronomically long nobody will ever be able to know it.

> The incompleteness theorem, for example, is a meta-mathematical statement about the limits of axiomatic systems that cannot be discovered with axiomatic systems alone.

We have proofs of Gödel's theorems. I see no reason in principle why a (sufficiently powerful) automated theorem prover couldn't discover those proofs for itself, and maybe even one day discover proofs of novel theorems in the same vein.


Bahaha it would be great if RH turned out to be a natural example of a theorem for which its independence is itself independent of ZFC. Do you know any examples of that?

I can probably cook some highly artificial ones up if I try, but maybe there's an interesting one out there!


> turned out to be a natural example of a theorem for which its independence is itself independent of ZFC. Do you know any examples of that?

Not aware of any myself, no. (But I’m far from an expert on this topic.)

It just occurred to me as a logical possibility.


No computer has ever discovered the concept of a Turing machine and the associated halting problem (incompleteness theorem). If you think a search in an axiomatic system can discover an incompleteness result, it is because your ontology about what computers can do is confused. People are not computers.


To be pedantic (mathematical?), computers can find any result that has a formalisation in a finitary logical system like first-order logic, simply by searching all possible proofs. Undecidability of FOL inference isn't relevant when you already know such a proof exists (it's a "semidecidable" problem).
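
In code, that semidecision procedure is almost embarrassingly simple. A sketch, where `enumerate_proofs` and `check` are hypothetical stand-ins for a proof enumerator and a proof-checking kernel:

    from itertools import count

    def find_proof(goal, enumerate_proofs, check):
        # Enumerate candidate proofs by increasing size; if a proof of the
        # goal exists this terminates, otherwise it loops forever.
        for n in count():
            for proof in enumerate_proofs(size=n):
                if check(goal, proof):
                    return proof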

I imagine that would be the main use case for heuristic solvers like this one: helping mathematicians fill in the blanks in proofs for stuff that's not too tricky but annoying to do by hand, rather than discovering novel, unknown concepts by itself (although I'm with the OP; I don't see why this is impossible a priori).


Because meta-mathematical proofs often use transfinite induction and associated "non-constructive" and "non-finitistic" arguments. The diagonalization argument itself is an instance of something that cannot actually be implemented on a computer, because constructing the relevant function in finite time is impossible. Computers are great, but when people say things like "The human mind is software running on the brain like a computer", that indicates to me they are confused about what they're trying to say about minds, brains, and computers. Collapsing all those different concepts into a Turing machine is what I mean by a confused ontology.

In any event, I'm dropping out of this thread since I don't have much else to say on this and it often leads to unnecessary theorycrafting with people who haven't done the prerequisite reading on the relevant matters.


> Because meta-mathematical proofs often use transfinite induction and associated "non-constructive" and "non-finitistic" arguments. The diagonalization argument itself is an instance of something that cannot actually be implemented on a computer, because constructing the relevant function in finite time is impossible.

Humans reason about transfinite induction using finite time and finite resources; the human brain (as far as we know) is a finite entity. So if we can reason about the transfinite using the finite, why can't computers? Of course they can't do so by directly reasoning in an infinite way, but humans don't do that either, so why think computers must?


You don't need to implement a diagonalisation in order to prove results about it; this is true for computers as much as it is for humans. There are formalisations of Gödel's theorems in Lean, for instance. Similarly for arguments involving excluded middle and other non-constructive axioms.
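
For a taste of what such a formalisation looks like, here's a minimal sketch (my own, in plain Lean 4, not quoted from Mathlib) of the diagonal argument, with subsets of a type modelled as predicates:

    -- No f : α → (α → Prop) can output the diagonal set fun x => ¬ f x x.
    theorem cantor {α : Type} (f : α → (α → Prop)) :
        ¬ ∃ a, f a = fun x => ¬ f x x :=
      fun ⟨a, ha⟩ =>
        -- Specialising the equation at a itself gives f a a = ¬ f a a,
        -- a proposition equivalent to its own negation: contradiction.
        iff_not_self (iff_of_eq (congrFun ha a))

Note that nothing here "runs" the diagonal function; the proof only reasons about it, which is the point.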

I hear your point that humans reason with heuristics that are "outside" of the underlying formal system, but I don't know of a single case where the resulting theorem could not be formalised in some way (after all, this is why ZFC+ was such a big deal foundationally). Similarly, an AI will have its own set of learned heuristics that lead it to more rigorous results.

Also agree about minds and computers and such, but personally I don't think it has much bearing on what computers are capable of mathematically.

Anyway, cheers. Doesn't sound like we disagree about much.


What is the basic intuition for how Gödel's theorems are proven in Lean?

I understood the OP's point to be that a diagonalization proof is impossible to carry out on a computer. (Did I get that right?)


You can absolutely formalize proofs using diagonalization arguments on a computer in just the same way you would formalize any other proof. For example here's the Metamath formalization of Cantor's argument that a set cannot be equinumerous to its power set: https://us.metamath.org/mpeuni/canth.html

In mathematics we often use language to talk about a hypothetical function without actually implementing it in any specific programming language. Formal proofs do exactly the same thing in their own formal language.

Although in the case of Cantor's diagonal argument, I don't know in what sense any function involved in that proof would even fail to be implementable in a specific programming language. Let's say I encode each real number x such that 0 <= x < 1 in my program as a function which takes a positive integer n and returns the n'th digit of x after the decimal point. In Python syntax:

    from typing import Callable, Literal

    # PosInt, Zero and One aren't built-in Python types, so define them here
    PosInt = int
    Zero = Literal[0]
    One = Literal[1]

    Number = Callable[[PosInt], Zero | One]

    Sequence = Callable[[PosInt], Number]
The function involved in the diagonalisation argument can then be implemented as follows:

    def diagonalize(sequence: Sequence) -> Number:
        def diagonal(n: PosInt) -> Zero | One:
            # Flip the n'th digit of the n'th number in the sequence
            if sequence(n)(n) == 0:
                return 1
            else:
                return 0
        return diagonal
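As a quick sanity check (a hypothetical demo using the aliases above), feeding it the all-zeros sequence yields the all-ones number, which indeed appears nowhere in that sequence:

    all_zeros: Number = lambda n: 0
    constant_sequence: Sequence = lambda n: all_zeros
    d = diagonalize(constant_sequence)
    print([d(n) for n in range(1, 6)])   # [1, 1, 1, 1, 1]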
The argument consists of the observation that whatever sequence you pass as an argument into "diagonalize", the returned number will not be present in the sequence: for every positive integer n, the n'th digit of the returned number differs from the n'th digit of the n'th number in the sequence, and hence the returned number is distinct from the n'th number in the sequence. Since this holds for every positive integer n, we can conclude that the returned number is distinct from every number in the sequence.

This is just a simple logical argument; it wouldn't be too hard to write it down explicitly as a natural deduction tree where each step is explicitly inferred from previous ones using rules like modus ponens, but I'm not going to bother doing that here.


People are not neurons.

Intelligent systems (once eventually devised) will use computing machines as the substrate to implement intelligence, in a similar way to how human intelligence uses a biological substrate to perform gazillions of individually unintelligent computations.

Don't confuse the substrate for the system.


Perhaps true of the class of problems that are undecidable in, say, the Peano axioms / ZFC. However, there are many things these axioms can prove that are still useful! For example, the multiplicativity of the totient function, applications of which power much of modern cryptography.
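
That multiplicativity (phi(m * n) == phi(m) * phi(n) whenever m and n are coprime) is easy to check numerically. A quick, deliberately naive illustration:

    from math import gcd

    def phi(n: int) -> int:
        # Naive Euler totient: count 1 <= k <= n coprime to n
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    assert phi(15) == phi(3) * phi(5) == 8    # multiplicative on coprime factors
    assert phi(7 * 11) == (7 - 1) * (11 - 1)  # the special case RSA relies on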

Riemann is so widely believed to be true that there are entire branches of mathematics dedicated to seeing what cool things you can learn about primes, combinatorics, etc. by taking Riemann to be true as an assumption.


Perhaps a simpler and more reachable approach at this point would be to use the mathlib documentation to fuel RAG (retrieval-augmented generation) on top of the fine-tuned/specialised model.
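
A minimal sketch of that idea, assuming some off-the-shelf embedding function `embed` (hypothetical; any sentence encoder would do): embed mathlib doc snippets, retrieve the nearest ones for a query, and prepend them to the prompt:

    import numpy as np

    def retrieve(query, docs, embed, k=3):
        # Cosine similarity between the query and each doc snippet
        doc_vecs = np.stack([embed(d) for d in docs])
        q = embed(query)
        sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
        return [docs[i] for i in np.argsort(-sims)[:k]]

    def build_prompt(query, docs, embed):
        context = "\n\n".join(retrieve(query, docs, embed))
        return f"Relevant mathlib docs:\n{context}\n\nTask: {query}"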



