I was playing with the API and found that it returned better answers than ChatGPT. ChatGPT isn't even able to solve simple Python problems anymore, even if you try to help it. Not long ago it handled these same problems with ease.
My guess is that they began to restrict ChatGPT because they can't sell it at that level of capability. They probably want to sell you CodeGPT or other products in the future, so why would they give that away for free? ChatGPT is just a teaser.
"ChatGPT isn't even able to solve simple Python problems anymore, even if you try to help it. And some time ago it did these same problems with ease."
This is my experience also. I have not formally benchmarked the different releases, but for Python coding specifically, GPT-4 in ChatGPT got considerably worse with the latest updates.
Probably some combination of quantizing down from the original fp16 weights and changes to the system prompt used for chat. Both can degrade quality, the former more than the latter.
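To make the quantization point concrete, here's a minimal toy sketch (my own made-up numbers, not anything from OpenAI's actual serving stack): round-tripping fp16 weights through symmetric int8 leaves a small error on every weight, and across billions of parameters those errors add up.

    # Toy illustration only: symmetric int8 round-trip on fake fp16
    # "weights", showing the per-weight error quantization introduces.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.02, size=10_000).astype(np.float16)  # toy weight tensor

    scale = float(np.abs(w).max()) / 127.0                   # one scale per tensor
    w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    w_dq = w_q.astype(np.float32) * scale                    # dequantize

    err = np.abs(w.astype(np.float32) - w_dq)
    print(f"mean abs error: {err.mean():.2e}, max: {err.max():.2e}")

Real deployments use per-channel scales and calibration to keep that error down, but the basic trade of precision for memory and throughput is the same.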
Oh, it's not hard to see how the amount Microsoft spent building the data centers where GPT-4 was trained attracted national security interest even before the model went public. That they were allowed to release it publicly at all is likely due to its strategic deterrence effect, and to the belief that the released version was already dumbed down.
The speed with which rumors about GPT-5 were suppressed, and the further dumbing-down of the models, cannot be entirely explained by excessive demand. I think it's more likely that GPT-3.5 and GPT-4 demonstrated unexpected capabilities in the hands of the public, leading to a pull-back. Moreover, Sam Altman's behavior changed dramatically between the initial release and a few weeks afterward: the extreme optimism of a CEO gave way to a more subdued, even cowed, demeanor despite strong enthusiasm from end users.
OpenAI cannot do anything without Microsoft's data center resources, and Microsoft is a critical defense contractor.
Anyway, personally, I'm with the crowd that thinks we're about to see a Cambrian explosion of domain-specific expert AIs. I suspect that OpenAI/Microsoft/Gov is still trying to figure out how much to nerf GPT-3.5's ability to tutor smaller models (see "Textbooks Are All You Need"), and that's why the API is trash.
I would gladly pay more for a non-nerfed version if they were actually honest about it.
The current version is about as good as the original 3.5 was, while 3.5 itself has become horribly bad. It's such a scam not to disclose what's going on, especially for a paid service.
I agree. It's difficult to say exactly what happened, but I'm certain it used to answer everything, with very few canned responses. Whatever they did for safety has degraded the product.