LLMs don't replace developers. The difference is that a junior can reason, while an LLM fundamentally can't (although it can mimic reasoning by parroting, and sometimes does it really well).
LLMs are a good aide for developers of all skill levels, capable of giving ready answers to near-textbook questions (which juniors may not know, and more senior folks may just be too lazy to type out).
> LLMs are a good aide for developers of all skill levels
If you’re writing usual CRUD-type applications, that is.
Anything more complex, and these LLMs will make something up that looks very close to what you want, but will have subtle bugs in it that’ll take a lot of time to figure out.
Some time ago I was working on a "textbook" problem (as you say) involving a binary tree and sets, as part of a firewall management script I was writing; I eagerly accepted a code completion that seemingly looked right, and it caused a lot of traffic to be unintentionally blocked.
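Not the actual code, but a hypothetical sketch (invented names, simplified to a port check) of how that kind of completion can look plausible while inverting the logic:

```python
# Hypothetical firewall rule check, NOT the original script: the buggy
# version reads naturally but drops exactly the traffic it should allow.
ALLOWED_PORTS = {22, 80, 443}

def should_block_buggy(port: int) -> bool:
    # A subtly wrong completion: membership in the *allow* set is treated
    # as a reason to block, so the intended traffic gets dropped.
    return port in ALLOWED_PORTS

def should_block_fixed(port: int) -> bool:
    # The intended logic: block anything NOT on the allowlist.
    return port not in ALLOWED_PORTS
```

Both versions type-check, both look like firewall code at a glance; only a test (or an outage) tells them apart.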
For sure, LLMs aren't good at designing things, so accepting large pieces of code isn't just risky because of subtle bugs; more likely, the code won't work at all.
But anything complex is made of simple bits, and an LLM helps with those small building blocks, recognizing the patterns you're following and saving typing time. Figuratively speaking, an LLM won't write you a working DOOM engine, but it can spot when you're going for the fast inverse square root trick.
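For reference, the trick mentioned is the Quake III fast inverse square root; here's a minimal Python rendition (bit reinterpretation via `struct`), the kind of well-known small block a completer tends to recognize from the magic constant alone:

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) with the Quake III bit-level trick."""
    # Reinterpret the float's bits as a 32-bit unsigned integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # The famous magic constant yields a good first guess.
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One Newton-Raphson step refines the estimate.
    return y * (1.5 - 0.5 * x * y * y)
```

For example, `fast_inv_sqrt(4.0)` comes out around 0.499, within a fraction of a percent of the true 0.5.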
> I eagerly accepted a code completion that seemingly looked right
And of course one must proofread, and do it carefully. However, reading is faster than typing - especially when you already know what you wanted to type and the autocompleted snippet matches it precisely or very closely.
And, yes, if a snippet looks even slightly different from your vision, it's really important to double-check it (and maybe write a test) to make sure it does the right thing. Subtle bugs are possible (I have a story like that too: the LLM put the wrong variable in one place and I glanced over it without noticing), but they're not that frequent, and they're also possible in 100% handcrafted code.
I treat LLMs just as a fancier keyboard for near-boilerplate. When I give a prompt, I know what I want them to output; I cycle completions until I get a snippet close enough to my liking, then edit it to fit. Super helpful.
Another use case was learning a language and SDKs I'd never even dabbled with before (Swift and WatchKit). It generated awful architecture (which I think even a junior developer would recognize as smelly) and had limited knowledge of recent Swift language features (async), but it gave me just-good-enough samples to get started - better than a book could have. I can't say I learned Swift that way (didn't need to), but I made a tiny watchOS app for personal use in a single evening because the LLM gave me a good boost.
And then, LLMs can simply be better autocompleters - it feels nice when you get all the arguments correctly listed before you even type the first letter, because the LLM guessed them right from the names (it doesn't always happen, of course).
I can imagine that being too lazy and just blindly accepting LLM completions without thinking or double-checking can be a bad habit, but that sounds more like a risk for a junior than a senior.
My experience was the opposite: don't let the junior devs use AI, or they learn all kinds of bad habits (including relying on AI). A senior on the other hand might just get a good autocomplete.
I have great doubts about juniors' reasoning. Before, seniors asked juniors to fetch result sets from the database, write loops, if statements, API calls, simple interfaces, and maybe basic regexes. Not anymore: LLMs are replacing juniors right now. Thanks for all the downvotes. The solution for juniors? Learn how to use LLMs quickly, or head for the construction industry.