Ask HN: Best practices to control LLM responses with user queries?
4 points by cloudking 11 days ago | 4 comments
How do you control LLM output in your applications? Is it just a matter of a well-crafted system prompt, or are there other techniques? I'm building a query-based UX and I specifically want to make sure users cannot inject their own instructions into queries that would steer the LLM away from its intended purpose.





There is a misconception about applied LLM usage within the software engineering community: that one needs a large pipeline of wrapper functions to peer into and audit what goes into the LLM from the user, and what comes back out of the LLM before it is shown to the user, or before it is considered "safe" to use when no user is in the loop.

Nah, none of that is necessary with a different architecture. The line of reasoning most seem to be applying to LLM usage requires either another human or a secondary series of LLMs, as smart as or smarter than the LLM before it, to perform these sanitization tasks before and after that LLM operates on some unexamined data. That is never going to work, but it sure will eat up a good amount of time, during which people get paid. Plus, this sidesteps hallucinations, where most people just say "well, of course an expert has to validate the LLM's responses." But then why not have that expert do the work and drop the LLM? The expert has to validate it, meaning calculate the same thing, and if they don't, they are not being an expert; they are a lazy ass collecting a salary while a disaster builds.

The solution is that you do not control the LLMs, not at all. You educate the LLM user, and then pair the two to co-work together. That is the only way you get both the work at hand done, with verification baked into the process, and your staff elevated, rather than walking into the scenario the rest of the industry seems to be pushing, where the staff has every incentive to sabotage the entire AI effort at your company.


Of course, at a deep philosophical level, you are right! I often like to compare LLMs to typewriters. If we think the typewriter is responsible for the output, we're going down the wrong route.

That said, I am interested in wrapper applications because, as you know, many consumer-facing products are used by folks who don't need or want an education in the subject domain. GPT and RAG integrations could be helpful in generating the UI they need, but untempered access to an LLM would not be great in many software products, methinks.


Might want to check out https://www.lakera.ai/. You send every prompt to their API first and they deal with checking it for problematic content, etc.
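
Roughly, the "check first, then forward" pattern looks like the sketch below. I haven't verified their current endpoint or response schema, so treat the URL and field names as assumptions and check their docs before using this.

    # sketch of a pre-check wrapper; endpoint and response shape are assumptions
    import os
    import requests

    def prompt_is_flagged(user_prompt: str) -> bool:
        resp = requests.post(
            "https://api.lakera.ai/v1/prompt_injection",  # assumed endpoint
            json={"input": user_prompt},
            headers={"Authorization": f"Bearer {os.environ['LAKERA_API_KEY']}"},
            timeout=5,
        )
        resp.raise_for_status()
        # assumed response shape: {"results": [{"flagged": true, ...}]}
        return any(r.get("flagged") for r in resp.json().get("results", []))

    # only forward user_prompt to your own LLM call when this returns False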

I'm not affiliated, but I've enjoyed hacking their game https://gandalf.lakera.ai/ [0]

They also have a sandbox/free API if you want to test it out (they used to at least)

[0] previous hn discussion https://news.ycombinator.com/item?id=35905876


The simple answer is that you cannot do so with the LLM itself, which does not follow any instructions at all but just predicts, one token at a time, what an answer that looks like it follows the instructions might contain. That's very different from actually following instructions!

What you need is a "wrapper" application that audits and controls user input before submitting it to the LLM itself, and also examines and controls the output before it is shown to the user.
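
A minimal sketch of that shape, with the model name, the injection regexes, and the output-side check all standing in as placeholders rather than anything battle-tested:

    # screen input before it reaches the LLM, screen output before the user sees it
    import re
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = "You answer questions about our product catalog only."
    INJECTION_PATTERNS = [r"ignore (all|previous) instructions", r"you are now"]

    def looks_like_injection(text: str) -> bool:
        return any(re.search(p, text, re.IGNORECASE) for p in INJECTION_PATTERNS)

    def answer(user_query: str) -> str:
        if looks_like_injection(user_query):
            return "Sorry, I can't help with that."
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_query},
            ],
        )
        out = resp.choices[0].message.content
        # output-side check: don't show anything that echoes the system prompt
        if SYSTEM_PROMPT.lower() in out.lower():
            return "Sorry, I can't help with that."
        return out

Regex blocklists like this are trivially bypassable, which is why people layer on dedicated classifiers such as the Lakera service mentioned upthread, but the input-check/output-check shape stays the same.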

You've just discovered the real "moat" in building practical LLM apps that go beyond chat.

You might like this article [0] that I wrote about how LLM models are built and why this is necessary.

[0] https://medium.com/gitconnected/something-from-nothing-d755f...




