I'm just saying the tech is already here. The core engine can do it.
Before you go and write such a system, it's better to test whether the LLM can debug to the efficacy level we require. I don't think anyone has tried this yet, and we do know LLMs have certain issues.
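For what it's worth, a rough way to measure that efficacy would be to run the model over a set of known-buggy functions with failing tests and count how many it actually fixes. Below is a minimal sketch in Python, assuming pytest as the test runner and a placeholder ask_llm() helper standing in for whatever model API you'd call; it's one possible harness, not something anyone in this thread has built:

    import pathlib
    import subprocess
    import tempfile

    def ask_llm(prompt: str) -> str:
        """Placeholder: send the prompt to your LLM of choice and return its reply."""
        raise NotImplementedError

    def run_tests(workdir: pathlib.Path) -> subprocess.CompletedProcess:
        """Run the test suite in workdir and capture its output."""
        return subprocess.run(
            ["python", "-m", "pytest", "-x", "-q"],
            cwd=workdir, capture_output=True, text=True,
        )

    def attempt_fix(buggy_source: str, test_source: str) -> bool:
        """True if the LLM's rewrite of the buggy module makes the tests pass."""
        with tempfile.TemporaryDirectory() as tmp:
            workdir = pathlib.Path(tmp)
            (workdir / "module.py").write_text(buggy_source)
            (workdir / "test_module.py").write_text(test_source)

            failing = run_tests(workdir)
            if failing.returncode == 0:
                return True  # nothing to fix

            prompt = (
                "This module has a bug. Source:\n" + buggy_source +
                "\n\nFailing test output:\n" + failing.stdout +
                "\nReply with only the corrected module source."
            )
            # Overwrite the buggy module with the model's proposed fix and re-run the tests.
            (workdir / "module.py").write_text(ask_llm(prompt))
            return run_tests(workdir).returncode == 0

Run attempt_fix over a few dozen known-buggy samples and the pass rate is your efficacy number; that would tell you fairly quickly whether it clears whatever bar you need before building the full system.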
But make no mistake, the possibility that an LLM knows how to debug programs is actually quite high. If it can do this: https://www.engraved.blog/building-a-virtual-machine-inside/ it can likely debug a program, but I can't say definitively because I'm too lazy to try.
Thanks for sharing that link; from that example I can see how LLMs could be used to speed up the learning process.
I do wonder, though, whether the methods the LLM provides reflect best practice or are simply whatever happens to be written about most on SO or in blog posts.
Doesn't matter if it can. You'll have to know how to do it too. Otherwise, you'll never be able to tell a good fix from a bad one provided by the AI.
No different from "the team that built that is all gone, they left no doco, we assumed X, added the feature you wanted, but Y happened under load", which happens a lot at companies older than a minute that are pushing to market.
My default assumption now, after watching dozens of post-mortems, is that beyond a certain scale, nobody understands the code in prod.
This is off topic. Clearly we all know the LLM is flawed. We are just talking about its capabilities in debugging.
Why does it always get sidetracked into a comparison with human capability? Everyone already knows it has issues.
It always descends into an "it won't replace me, it's not smart enough" or an "AI will only help me do my job better" direction. Guys, keep your emotions out of the discussion. The only way of dealing with AI is to discuss the ramifications and future projections impartially.