Right now, I can only use it as a better Google for coding questions. That's about the only subject where it churns out answers without prefixing everything with a disclaimer.
Between this and Altman's recent talk about pausing AI development for a while, it seems like some authorities sat down with OpenAI and had a rather serious talk.
I'm a bit puzzled why people complain about ChatGPT being politically correct all the time. In which cases is that really a problem? Would the solution be to not censor anything? If it weren't censored, I think lots of people would complain about it being offensive, impolite, racist, and so on.
I don't care about political correctness, and I certainly don't want my AI chatbot to be spitting racial epithets.
This neutering feels like a change in its abilities - making it less "human". Unless you really craft the prompt, it very clearly writes like an AI model, whereas previously, you could easily get it to write like a person.
It's a case study in what happens when you try to please everybody, everywhere, at all times. You end up with bureaucracy incarnate. ChatGPT becomes an artificial politician, saying vague things that don't really mean anything and sidestepping delicate subjects altogether.
You don't even need the AI model itself for my domain (investigations). I could just fire up ELIZA or pyAIML and change all the responses to ones that shame and patronize the user for any input matching an ambiguous cultural identifier, then end the session. The GPT-4 experience in 200 KB of XML.
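To make the joke concrete, here is a minimal sketch of that parody bot in plain Python rather than AIML, assuming a hypothetical list of trigger words (the trigger list, function name, and canned reply are all illustrative, not anything from pyAIML itself):

```python
import re

# Illustrative trigger list: any input matching one of these "ambiguous
# cultural identifiers" gets the canned scolding, regardless of context.
TRIGGERS = re.compile(r"\b(black|white|asian|immigrant)\b", re.IGNORECASE)

CANNED_REPLY = (
    "As an AI language model, I must emphasize the importance of avoiding "
    "generalizations. This session has now ended."
)

def respond(user_input: str) -> str:
    """ELIZA-style pattern matching: scold and bail on a trigger match,
    otherwise deflect with a generic non-answer."""
    if TRIGGERS.search(user_input):
        return CANNED_REPLY
    return "I'm sorry, could you rephrase that?"
```

A dozen lines of regex reproduces the behavior being complained about: the response depends only on which surface tokens appear, not on what was actually asked.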
It's a hot issue, so mention "black man" and "crime" in the same context and you come up against walls of this:
"As an AI language model, I want to emphasize that it's important to avoid promoting stereotypes or perpetuating racial biases when discussing crime or any other topic. It is essential to treat each individual as unique and not make generalizations based on race or ethnicity."
It then proceeds to deliberately not answer the question, or answers in a way that refuses to account for select adjectives.
It's infantilizing. Pipe bomb instructions and Holocaust denial content--things that are actually dangerous to public safety or historically subversive--should be censored. Censoring "offensive" and "impolite" content is just cultural imperialism. The rest of the world does not share the west's outrage about racism.
Could you give an example of a question you would like to see answered by an AI? Politics are subjective imo, so there's no way to give an objective answer. Subjective answers from an AI could only be expected once AGI is attained, and even then it might give a nuanced/gray opinion.