The tool doesn't succeed at a reasonable task that humans can do. That's not PEBKAC, and warning people about it is a good thing.
This type of analysis is not outside the purpose of the tool. You're making excuses at this point. Do you really think it would be wrong to add that capability in the future?
It's a technical limitation, one that is far from obvious.
I think you think the LLM is a magic box with an intelligent being inside that can do whatever you want it to, somehow. It is software. It has capabilities and limitations. Learn them, and use it appropriately. Or don't; you don't have to use it. But don't expect it to just do whatever you think it should do.