The issue is that you can retrieve the prompt with even a low success rate.
You can make prompts where both the prompt itself and the answer is encrypted and GPT-3 struggles with this so the detector may decrypt the prompt or response to something else than what is answering the prompt.
You can make prompts where both the prompt itself and the answer is encrypted and GPT-3 struggles with this so the detector may decrypt the prompt or response to something else than what is answering the prompt.