A tool that forces me to shift from creating solutions to trying to figure out what might be wrong with some code is entirely detrimental to my workflow.
Is that your actual experience, or an expectation? If you're just making assumptions, I'd encourage you to give it an actual try. Talking about using Cursor is a bit like talking about riding a bike with someone who has never ridden one. (But yeah, it's totally not for everyone, and that's fine.)
Very early on I took pains to figure out which toolchains gave me traction and which produced wasted complexity.
I think much of the excitement about using LLMs to code exists because many teams are stuck in local optima where they have to use noisy tools, and there's a lot of de-noised output available to train LLMs on.
This is progress in searching for and mitigating bad trade-offs, not in advancing the state of the art.
Obviously, yes. For the exact same reason it's true for math homework, too!
Most code most people write is trivial in terms of semantics/algorithms. The hard bit is navigating the space of possible solutions: remembering all the APIs you need at the moment - right bits of the standard library, right bits of your codebase, right bits of third-party dependencies - and holding pieces you need in your head while you assemble some flow of data, transforming it between API boundaries as needed. I'm totally fine letting the AI do that - this kind of work is a waste of brain cycles, and it's much easier to follow and verify than to write from scratch.
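To make the point concrete, here's a minimal sketch of the kind of "glue" work being described: reshaping records as they cross an API boundary. All the names and fields here (`ApiUser`, `Contact`, etc.) are invented for illustration, not taken from any real API.

```python
from dataclasses import dataclass

# Hypothetical record shape returned by some upstream API
# (all names here are made up for the example).
@dataclass
class ApiUser:
    id: int
    full_name: str
    email: str

# Hypothetical shape the rest of our codebase expects.
@dataclass
class Contact:
    user_id: int
    display_name: str
    address: str

def to_contacts(users: list[ApiUser]) -> list[Contact]:
    """Typical glue transform: rename and reshape fields at an API boundary."""
    return [
        Contact(user_id=u.id, display_name=u.full_name.title(), address=u.email)
        for u in users
    ]

users = [ApiUser(1, "ada lovelace", "ada@example.com")]
print(to_contacts(users)[0].display_name)  # -> Ada Lovelace
```

Nothing here is algorithmically hard; the effort is purely in remembering which fields map to which, which is exactly the kind of bookkeeping that's easier to verify than to write.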
When I’m writing code, I often switch between a high-level mental description and the code itself. It’s not a one-way interaction: the more I code, the more refined my mental solution becomes, until the two merge. I don’t need to hold everything in my memory (which is why I have so many browser tabs open). The invariant is that I can hand over my work at any time and describe the rest of my solution. The other advantage is growing expertise in the technology itself.