Hacker News

> The tools really only help with the accidental tasks

I don't think that's really the problem with using LLMs for coding, although it depends on how you define "accidental". If we take the opposite of "essential" (the core architecture, designed to solve the problem) to be boilerplate (code that has to be written as part of a solution, but doesn't itself define the solution), then the claim does apply.

It's interesting, and a little amusing, that on the surface a coding assistant seems like one of the things LLMs are best suited for, and as far as boilerplate generation goes (essentially automated Stack Overflow and similar-project cut-and-paste), they are. In reality, though, it's one of the things LLMs are LEAST suited for. Once you move beyond boilerplate/accidental code, the key skills in software design and development are reasoning/planning and experience-based ("inference-time") learning to progress at the craft, and those are two of the most fundamental shortcomings of LLMs that no amount of scale can fix.

So, yeah, maybe they can sometimes generate 70% of the code, but it's the easy, boilerplate 70%, not the 30% that defines the architecture of the solution.

Of course it's trendy to call LLMs "AI" at the moment, just as earlier GOFAI systems (e.g. symbolic problem solvers like SOAR, expert systems like CYC) were called "AI" until their limitations became apparent. You'll know we're one step closer to AI/AGI when LLMs are in the rear-view mirror and back to just being called LLMs again!



