
That wouldn't be the case if ordinary compilers outputted buggy code 5% of the time like a SOTA LLM does. (I am being quite generous to LLMs here)



> 5% of the time

that would be a generous invented statistic even if it were only addressing the inherent stochastic nature of LLM output, but you also have to factor in training data that is poisoned or out of date

in my experience the error rate on LLM output is MUCH higher than 5%




