
Using the API, I've been seeing this a lot with the gpt-4-turbo preview model, but no problems with the non-turbo gpt-4 model, so I'll assume ChatGPT is now using 4-turbo. The new model seems to have some kinks to work out: I've also personally seen noticeably reduced reasoning ability on coding tasks, increased context-forgetting, and much worse instruction-following.

So far it feels more like a gpt-3.75-turbo than a model truly at gpt-4's level. The speed and massive context window are amazing, though.




Yeah, I usually use gpt-4-turbo (I exclusively use the API, via a local web frontend: https://github.com/hillis/gpt-4-chat-ui, rather than ChatGPT Plus). Good reminder to switch to gpt-4 if I need to work around it. It hasn't bothered me much in practice, since ChatGPT is honestly good enough most of the time for my purposes.
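For what it's worth, switching between the two is just a matter of the `model` field on the request. A minimal sketch, assuming the v1-style `openai` Python SDK (the `chat_request` helper and the exact model-name strings are my own illustration, not anything from the frontend linked above):

```python
# Build chat-completion request kwargs, pinning the non-turbo gpt-4
# when the turbo preview misbehaves. Nothing is sent over the network
# here; the dict is passed to the client only when you choose to.

def chat_request(prompt: str, fallback_to_gpt4: bool = False) -> dict:
    """Return kwargs for a chat completion, selecting the model."""
    model = "gpt-4" if fallback_to_gpt4 else "gpt-4-turbo-preview"
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# To actually send it (requires OPENAI_API_KEY in the environment):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       **chat_request("hello", fallback_to_gpt4=True))
```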


This has been the case with gpt-3.5 vs. gpt-3.5-turbo as well. But isn't it kind of obvious, when things get cheaper and faster, that a smaller model is running things, with some tricks on top to make it look smarter?


I'd be willing to bet all they're doing behind the scenes is cutting computation costs with smaller model versions and following every business's golden rule: price discrimination.

I'd be willing to bet enshittification is on the horizon. You don't get the shiny 70B model; that's for gold premium customers.

By 2025, it's going to be tiered enterprise pricing.



