There was a tweet from an engineer at OpenAI saying they're working on the problem of ChatGPT having become too "lazy" - generating text full of placeholders and expecting people to fill in the rest themselves. As for the general brain damage from RLHF and the political bias, still no word.
Using the API, I've been seeing this a lot with the gpt-4-turbo-preview model, but no problems with the non-turbo gpt-4 model. So I'll assume ChatGPT is now using 4-turbo. It seems the new model has some kinks to work out--I've also personally seen noticeably reduced reasoning ability on coding tasks, increased context-forgetting, and much worse instruction-following.
So far it feels more like a gpt-3.75-turbo than something genuinely at gpt-4's level. The speed and massive context window are amazing, though.
Yeah, I usually use gpt-4-turbo (I exclusively use the API via a local web frontend, https://github.com/hillis/gpt-4-chat-ui, rather than ChatGPT Plus). Good reminder to fall back to gpt-4 when I need to work around it - the laziness hasn't bothered me too much in practice, since gpt-4-turbo is honestly good enough most of the time for my purposes.
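If it helps anyone, this is roughly how you can pin the model per request - a minimal sketch assuming the openai Python SDK (v1.x) with an OPENAI_API_KEY in the environment, not anything official:

    # Minimal sketch: choose gpt-4 vs. gpt-4-turbo-preview per request.
    # Assumes the openai Python SDK v1.x and OPENAI_API_KEY set in the env.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str, model: str = "gpt-4") -> str:
        """Send a single-turn chat request to the chosen model."""
        response = client.chat.completions.create(
            model=model,  # "gpt-4" (non-turbo) or "gpt-4-turbo-preview"
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Fall back to non-turbo gpt-4 when turbo gets "lazy":
    print(ask("Write the full function, no placeholder comments.", model="gpt-4"))

The point is just that the model name is an explicit parameter on every call, so nothing stops you from routing lazy-prone tasks to the non-turbo model and everything else to turbo.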
This has been the case with gpt-3.5 vs gpt-3.5-turbo, as well. But isn't it kinda obvious when things get cheaper and faster that there's a smaller model running things with some tricks on top to make it look smarter?
I'd be willing to bet all they're doing behind the scenes is cutting computation costs with smaller model versions and following every business's golden rule: price discrimination.
I'd be willing to bet enshittification is on the horizon. You don't get the shiny 70B model; that's for gold premium customers.
I've thought one of the funnier end states for AGI would be if it were created, but the process ended up making it vastly less productive than when it was just a tool.
So the AI of the future would be more like Bender or the other robots from Futurama, displaying all the same flaws as people.
My son asks why, but only once. I'm not yet sure if it is because he is satisfied with his first answer, or if my answers just make the game too boring to play.
They're B2B now; that means political correctness only.
And I'm not sure why anyone dances around it, but these models are built on unfiltered data intake. If they actually want to rein in bias, they need to do what every capitalist does to a social media platform and curate the content.
Lastly, bias is an illusion of choice. Choosing "color" over "colour" is a byproduct of culture, and you're not going to eradicate that. But cynically, I assume you really mean: why won't it do the thing I agree with?
IIRC they've put in guardrails to try to make sure ChatGPT doesn't say anything controversial or offensive, but doing so hampers its utility and probably its creativity, I'm guessing.
Whatever the people who buy ads decide, losing ad revenue is the main fear of most social media and media companies.
See Twitter, for example: ad buyers decided it was no longer politically correct, so Twitter lost a lot of ad revenue. Avoiding that fate is one of the most important things if you want to sell a model to companies.