
No, this is a common fallacy. You can tell ChatGPT, one of the most infamously hobbled GPTs, in its custom instructions that you do want profanity, and it will oblige. That is not a jailbreak; it is supported behavior.
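The same kind of steering is available through the API via a system-level message, which plays roughly the role the custom-instructions field plays in the ChatGPT UI. A minimal sketch follows; the model name and the exact wording of the instruction are illustrative assumptions, and the instruction's effect is supported steering, not a guarantee.

```python
# Sketch: steering tone with a system-level instruction, analogous to
# ChatGPT's custom-instructions field. Model name is a hypothetical choice.
custom_instruction = "Casual register is fine; profanity is acceptable in replies."

payload = {
    "model": "gpt-4o",  # assumption; any chat model is addressed the same way
    "messages": [
        # The system message carries the standing preference, exactly as
        # custom instructions do in the web UI -- supported, not a jailbreak.
        {"role": "system", "content": custom_instruction},
        {"role": "user", "content": "Give me your honest take."},
    ],
}

print(payload["messages"][0]["role"])
```

This payload would then be sent to the chat-completions endpoint; the point is only that the preference rides in a sanctioned channel rather than in an adversarial prompt.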



I do think there is a certain amount of GPT bot activity on HN. But I don't think it makes sense to call people out and accuse them of being a GPT based on a single comment.



