
> Anything more complex

For sure, LLMs aren't good at designing things, so accepting large pieces of generated code doesn't just risk subtle bugs - more likely, it's not going to work at all.

But anything complex is made of simple bits, and an LLM helps with those small building blocks, recognizing the patterns you're following and saving time typing. Figuratively speaking, an LLM won't write you a working DOOM engine, but it can spot when you're going for that fast inverse square root trick.

> I eagerly accepting that code completion that seemingly looked right

And of course one must proofread, and do it carefully. However, reading is faster than typing - especially when one already knows what they wanted to type and gets an autocompleted snippet that matches it precisely or very closely.

And, yes, if a snippet looks even slightly different from your vision, it's really important to double-check it (and maybe write a test) to make sure it does the right thing. Subtle bugs are possible (I had one story like that, when an LLM put a wrong variable in one place and I glanced over it without noticing), but they're not that frequent, and they're also possible in 100% handcrafted code.

LLMs are for the boilerplate.
