
I'm not sure I agree with the assertion "you can be confident that a compiler will express your program in the way you want". That only holds if you program perfectly, which never happens in practice (after all, bugs exist). What really differs is how we verify correctness: we can read code to check it, but reading a text prompt carefully tells you nothing if the LLM was sloppy. For code that is easy to test (i.e. where passing a test suite convinces you the code works without your ever reading it), I can see how the argument that LLMs are compilers makes sense.
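For example, something like this (a minimal sketch using the hypothesis library; llm_generated and sort_items are hypothetical names for code the LLM produced and that we never read):

  from collections import Counter
  from hypothesis import given, strategies as st
  from llm_generated import sort_items  # hypothetical module holding the LLM's output

  @given(st.lists(st.integers()))
  def test_sort_properties(xs):
      out = sort_items(xs)
      # The output must be ordered...
      assert all(a <= b for a, b in zip(out, out[1:]))
      # ...and a permutation of the input.
      assert Counter(out) == Counter(xs)

If a suite like that passes, you have real evidence the code works regardless of how it was produced.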



I can absolutely write bugs. But I can be confident that there is a very well-defined set of rules for what the compiler must emit given the characters I wrote. That's not going to be the case for an LLM: you can express the problem you want it to solve perfectly, it can still spit out garbage, and that wouldn't even count as a bug in the system.


At this point, yes. But the goal of "automatic programming" against a formal specification has a long history. It doesn't seem impossible to me that programming eventually becomes closer to prompting + formal specification. The prompt could then be compiled and the result validated against the spec.
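As a toy sketch of that loop (everything here is hypothetical: llm_compile stands in for whatever turns a prompt into a callable, and the "spec" is just input/output pairs, where a real system might discharge proof obligations instead):

  def satisfies_spec(candidate, spec_cases):
      # Accept a generated function only if it matches the spec everywhere.
      return all(candidate(x) == expected for x, expected in spec_cases)

  def compile_prompt(prompt, spec_cases, llm_compile, max_attempts=5):
      # llm_compile is a placeholder for any prompt -> callable step.
      for _ in range(max_attempts):
          candidate = llm_compile(prompt)
          if satisfies_spec(candidate, spec_cases):
              return candidate  # validated against the spec
      raise ValueError("no generated candidate met the spec")

The point being that the validation step, not the prompt, is what would carry the correctness guarantee.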



