The underlying model might be just as good, but how you interface with it seems to deliver wildly different results. I can easily get ChatGPT to give me entire blocks of code; Copilot just trickles out small bits. Copilot is also very distracting because I find myself constantly waiting to see whether its next suggestion will be helpful, whereas when I type into ChatGPT I know I'm going to get something in the direction of what I want.
For me they serve very different use cases. Copilot often saves me from writing mundane blocks of code, as it will frequently fill in what I'm looking for after I write the function name.
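For example (a made-up function, but typical of the kind of thing I mean), I type just the def line and Copilot usually proposes the whole body as one suggestion:

    # Made-up example: after typing only the signature, Copilot
    # typically suggests a body along these lines in one completion.
    def fahrenheit_to_celsius(f: float) -> float:
        return (f - 32) * 5 / 9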
If I'm still unsure how I might approach a certain problem, or if it isn't immediately clear to me how I'd write the function I want, I might type a prompt into ChatGPT and see what it comes up with. But it would really slow down my workflow if I had to prompt ChatGPT for every mundane function I plan to write.
I suspect the cost problem arose specifically from upgrading to GPT-4 from GPT-3: right there, their costs increased more than 5x. So before it was likely under $5/month per user on average, but once they switched to GPT-4, it jumped to around $20/month.
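Back-of-envelope, with made-up numbers chosen only to show how a 5x per-token price jump maps onto those monthly figures (none of these are real OpenAI prices):

    # All numbers are assumptions for illustration, not actual pricing.
    old_price_per_1k_tokens = 0.002        # hypothetical GPT-3-era rate
    new_price_per_1k_tokens = 0.010        # hypothetical GPT-4 rate, 5x higher
    tokens_per_user_per_month = 2_000_000  # hypothetical average usage

    old_monthly = tokens_per_user_per_month / 1000 * old_price_per_1k_tokens
    new_monthly = tokens_per_user_per_month / 1000 * new_price_per_1k_tokens
    print(f"${old_monthly:.2f} -> ${new_monthly:.2f} per user per month")
    # Prints: $4.00 -> $20.00 per user per month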
It's not GPT-4, it's gpt-3.5-turbo (or a variant thereof). Source: I'm sitting in the audience of a talk about it at the AI Engineering summit right now, and the speaker confirmed it as gpt-3.5-turbo, switched over from Codex.
Is that actually true? I remember Copilot X being the one containing the GPT-4 version, and I'm pretty sure that wasn't out back in April. I don't even think it was out this summer.
Interesting that you say that. I had the opposite impression, that it doesn’t seem to have improved much over time, but I think my perception might be influenced by my habit of ignoring large suggestions and only looking at the results when it fills in the rest of the line I was typing.
I’m also using it with IntelliJ instead of VS Code, so for all I know I could still be using an old version.