
It's common for neural networks to struggle with negative prompting. Typically it works better to phrase expectations positively, e.g. “be brief” might work better than “do not write long replies”.
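For anyone curious what that looks like in practice, here's a rough sketch using the Anthropic Python SDK; the model name, prompts, and token limit are just illustrative placeholders, not anything from Anthropic's actual prompts:

    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Negative phrasing: the model has to infer the desired behavior
    # from a description of what it shouldn't do.
    negative = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=256,
        system="Do not write long replies.",
        messages=[{"role": "user", "content": "Explain TCP slow start."}],
    )

    # Positive phrasing: states the desired behavior directly.
    positive = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=256,
        system="Be brief: answer in at most three sentences.",
        messages=[{"role": "user", "content": "Explain TCP slow start."}],
    )

    print(negative.content[0].text)
    print(positive.content[0].text)

In my experience the positively phrased system prompt is followed more reliably, though results vary by model and task.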



But surely Anthropic knows better than almost anyone on the planet what does and doesn't work well to shape Claude's responses. I'm curious why they're choosing to write these prompts at all.


Maybe it would be even worse without it? I've found that negative prompting is often ignored, but far from always, so it's still useful.



