
There is no danger. If the output of the neural net is wrong, Lean will catch it.

>what if it does because the LLM did not properly translate the mathematician's intent to Lean language?

How so? The one thing that definitely needs to be checked by hand is the theorem statement to be proven, which obviously shouldn't be provided by a neural network. The NN tries to find a proof, which Lean then verifies; fundamentally, the NN can't prove the wrong theorem and have it accepted by Lean.
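As a minimal Lean 4 sketch of this division of labour (the statement and names here are illustrative, not from the original comment): the human checks that the theorem statement matches the intended mathematics, while the proof body, which could be NN-generated, is what Lean's kernel verifies against that statement.

    -- Human-checked part: the theorem statement.
    -- If this statement mistranslates the mathematician's intent,
    -- Lean cannot notice; it only checks the proof against it.
    theorem add_comm_example (a b : Nat) : a + b = b + a := by
      -- Machine-found part: the proof. Lean's kernel verifies
      -- that it really proves the statement above.
      exact Nat.add_comm a b

So a bad proof is rejected, but a badly formalized statement would be "proven" faithfully; that is exactly why the statement itself has to be reviewed by hand.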



