It's common for neural networks to struggle with negative prompting. Typically it works better to phrase expectations positively, e.g. “be brief” rather than “do not write long replies”.
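For instance, here's a minimal sketch of the two framings using the Anthropic Python SDK (the prompt wordings and model name are just illustrative):

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# Negatively phrased: tells the model what NOT to do.
negative_system = "Do not write long replies. Do not ramble or add filler."

# Positively phrased: states the desired behavior directly.
positive_system = "Be brief. Answer in one or two short sentences."

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model choice
    max_tokens=256,
    system=positive_system,  # the positive framing tends to steer more reliably
    messages=[{"role": "user", "content": "Explain what a system prompt is."}],
)
print(response.content[0].text)
```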
But surely Anthropic knows better than almost anyone on the planet what does and doesn't work well to shape Claude's responses. I'm curious why they're choosing to write these prompts this way at all.