Prompt injection becomes not a problem if you write a restrictive enough template for your prompt with a a LLM template language, such as what Guidance from microsoft provides.
You can literally force it to return responses that are only one of say 100 possible responses (i.e. structure the output in such a way that it can only return a highly similar output but with a handful of keywords changing).
It's work, but it will work with enough constraints because you've filtered the models ability to generate "naughty" output.
You can literally force it to return responses that are only one of say 100 possible responses (i.e. structure the output in such a way that it can only return a highly similar output but with a handful of keywords changing).
It's work, but it will work with enough constraints because you've filtered the models ability to generate "naughty" output.