I have a sneaking suspicion the "moat" of AI models will be the data used to fine-tune them. Prompts are, as you alluded to, inherently impossible to fully secure, and playing cat and mouse with all the ways they can be compromised wastes a lot of time that could be spent on more important things.