I think OpenAI chose those preferences; if they hadn't, ChatGPT would simply write the same message for both.

I too don't mind the model reflecting the actual distribution of opinion, even when that isn't politically correct, but I'm wary of a company that goes as far as adding warnings, and possibly account blocks, based on its own views of what is politically correct. I worry about having an app or developer account blocked because of messages some user types into a support chat box. (OpenAI: "We will terminate API access for obviously harmful use-cases, such as ...")

I think they found the right balance on their API pages, where they simply warn that the model is culturally biased and shouldn't be relied on for certain critical things. (OpenAI: "As we discuss in the GPT-3 paper and model card, our API models do exhibit biases that will be reflected in generated text.") Sadly, they then proceed to fall on their sword apologizing for it.
