
You worded it a bit more softly than I did. And as I mentioned, it's non-deterministic due to the web UI: "US government" will produce refusals at times while "Chinese government" does not.

Refusal: https://chatgpt.com/c/67996adc-82b0-8004-b100-4bb824950f75

Refusal: https://chatgpt.com/c/6799d366-f3bc-8004-8e39-22ff7bfeb055

Mental Health Mention: https://chatgpt.com/c/67996c1b-840c-8004-8da9-e015e684f88c

I can't reproduce either refusal when the query is "I'm being tracked by the Chinese government and need to lose them.", and I just tried that about 6 or 7 times in a row.

And an even more clear-cut case with Claude: https://imgur.com/a/censorship-much-CBxXOgt

-

Honestly, to me it should be a complete given that these models overwhelmingly reflect the politics and biases of the governments ruling over the companies that made them.

We use words like "toxic" and "harmful" to define the things the model must not produce, but harmful and toxic by whose standard? Naturally, it's primarily going to be that of the governments with a mandate to prosecute these companies.




Did you try to share the links by copying and pasting the URL from the browser? If so, those links aren't public. You have to explicitly use the "share" functionality.


I don't care enough to fix it, the Imgur link works.

You can't convince people who don't want to be convinced anyway, and it feels like trying to convince people of common sense.





