Using the front end of these websites to work in your complex codebase is very challenging. I use the chatbots for higher-level questions about libraries and integrations, not for specific implementation details in my codebase. But without a data agreement in place, you shouldn't (or maybe can't) paste in code, and even if you could, it's an inferior way of providing context compared to better tools.
However, I do use Copilot + VS Code with Claude 3.5, and the "Edit with Copilot" feature, which takes my open files plus any other context I want to give it and drives changes to my files, has been surprisingly good. It's not really a time saver, in that the time I spend verifying, fixing, or enhancing the result isn't much less than writing it myself, but I still find benefits for brainstorming, quickly iterating on alternate ideas and smaller refactors, and overcoming the "getting started" hesitation on a seemingly complex change. It's at the point where it can add tests to files using my existing patterns that are well done and rarely need feedback from me. I have been surprised to see the progress, because for most of the history of LLMs I didn't find them useful.
It also helps that I work in a nodejs/react/etc codebase where the models have a ton of information and examples to work with.
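To give a concrete (hypothetical) sense of what "adding tests using my existing patterns" means, here is a minimal sketch assuming a Jest test file for a small utility. The functions and test names (formatPrice, parsePrice) are made up for illustration, not taken from my actual codebase; the point is that given an existing describe block, the edit feature tends to produce a new block in the same shape for a sibling function.

```ts
// Hypothetical example: an existing Jest pattern, plus the kind of test
// block the "Edit with Copilot" flow tends to produce for a sibling
// function. Names here (formatPrice, parsePrice) are invented.
import { formatPrice, parsePrice } from "./price";

// Existing test written by hand -- the "pattern" the model picks up on.
// Prices are assumed to be in integer cents.
describe("formatPrice", () => {
  it("formats whole dollar amounts", () => {
    expect(formatPrice(1200)).toBe("$12.00");
  });

  it("pads sub-dollar amounts", () => {
    expect(formatPrice(5)).toBe("$0.05");
  });
});

// Generated test in the same shape: same describe/it structure,
// same naming convention, same one-assertion-per-case style.
describe("parsePrice", () => {
  it("parses whole dollar amounts", () => {
    expect(parsePrice("$12.00")).toBe(1200);
  });

  it("parses sub-dollar amounts", () => {
    expect(parsePrice("$0.05")).toBe(5);
  });
});
```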
> But without a data agreement in place, you shouldn't (or maybe can't) paste in code
There's a checkbox you can toggle so that OpenAI doesn't use your code to train their models.
And I find the "chatbot" experience different and better than aider/copilot. It forces me to refocus on the really useful interfaces instead of just sending everything, and makes it better to verify everything instead of just accepting a bunch of changes that might even be correct, but not what I exactly want. For me, the time spent verifying is actually a bonus, because I read faster than I can type. I think of it as a peer programmer who just happens to be able to type much, much faster and doesn't mind writing unit tests or rewriting the same thing over and over.
The problem with reading vs. writing is building "true understanding". If you read code at a high level, build a complete mental model, and actually read every bit of it, then you're doing it right. But many folks see finished code, get "LGTM brain", and don't fully think through every statement on every line, which leads to a poor understanding of the code. This is a huge drawback of LLM-assisted coding: folks re-read code they "wrote" and have no memory of it at all.
In the edit experience I am using, the LLM provides a git-style changelog where I can easily compare before/after with a really detailed diff. I find that much more useful than giant "blobs" of code where minor differences crop up that I don't notice.
The other massive drawback to the out-of-codebase chatbot experience (and the Edit with Copilot experience IS a chatbot; it's just integrated into the editor, changes files with diffs, and has a UI for managing file context) is context. I can effortlessly load all my open files into the LLM context with one click. The out-of-editor chatbot requires either a totally custom LLM setup with various layers to handle your codebase context, or you have to manually paste in a lot of context. It's nonsense to waste time pasting proprietary code into OpenAI (with no business agreement other than a privacy policy and a checkbox) when I can get Copilot to sign a business agreement with strict rules about privacy and storage, and then add my open files to my context with one click.
Folks should give these new experiences a try. Having a Claude chatbot integrated into your editor, with the ability to see and modify your open files in a collaborative chat experience, is very nice.