I'm trying to create a bot that joins my friends Telegram group and melds into the conversation as if it was a real person. A real person might be the most cute and fun enthusiastic person there is but sometimes it has bad days, or it tells inappropriate jokes, right? People are complicated. Not this bot! No matter what prompt I'm using (with the chat API) it won't lose the happy happy joy joy chatGPT attitude, won't tell inappropriate jokes, won't give advice on certain topics and in general won't talk like a real person, not because of technological limitations.. You can feel it when it's just nerfed.
Trying the same prompts that gave nerfed "I am just an AI I can't speculate about the future" bs on completion API gave somewhat better results, but most of the time they were flagged as breaking the guidelines which is a TOS breach if done enough times.
This can be solved other than open models. The same thing happened with stable diffusion. Good thing it's open so you can still use the pre-nerfed 1.6 models.
I know it might be edgy or unpopular but I don't think one entity should decide how we can use this powerful tool. No matter its implications and consequences.
Trying the same prompts that gave nerfed "I am just an AI I can't speculate about the future" bs on completion API gave somewhat better results, but most of the time they were flagged as breaking the guidelines which is a TOS breach if done enough times.
This can be solved other than open models. The same thing happened with stable diffusion. Good thing it's open so you can still use the pre-nerfed 1.6 models.
I know it might be edgy or unpopular but I don't think one entity should decide how we can use this powerful tool. No matter its implications and consequences.
FOS for the win.