I've used ChatGPT a lot lately while developing some moderately advanced Python code with async, websockets, etc., and I've been a bit underwhelmed, to be honest. It always outputs plausible code, but almost every time it hallucinates a bit. For example, it invents APIs or function parameters that don't exist, or it mixes older and newer versions of libraries. Copy-pasting the code never just works; it usually fails on something subtle. Of course, I'm not planning to copy-paste without understanding the code, but I often have to spend a fair amount of time in the real docs verifying how the APIs are actually supposed to be used, and then I'm not sure how much time I saved.
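
To give a concrete example of the version mixing (a minimal sketch, assuming a reasonably recent release of the real `websockets` library): older releases passed the request path to the connection handler as a second argument, while current ones call the handler with the connection only. ChatGPT will happily generate the old signature alongside new-style code, and the result fails only at runtime:

    import asyncio
    import websockets

    # What ChatGPT often generates (the old, pre-10 handler signature):
    #     async def handler(websocket, path): ...
    # What recent versions of `websockets` actually expect:
    async def handler(websocket):
        async for message in websocket:
            await websocket.send(message)  # trivial echo

    async def main():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run until cancelled

    asyncio.run(main())

The generated code looks right at a glance; the mismatch only shows up when the first client connects.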
The second shortcoming is that I have to switch over to ChatGPT, and it's messy to give it my existing code when it's more than just toy code. It would be a lot more effortless if it were integrated like Copilot (if we ignore the fact that this means sending all your code to OpenAI...).
Still, it's great for boilerplate, general algorithms, and data translation (for small amounts of data). It's a great tool when exploring.