The prompting strategies in this post reminded me of a funny anecdote from this Thanksgiving. My older family members had been desperately trying to get ChatGPT to write a good poem about spume (the white foam you see in waves), and no matter how many ways they explicitly prompted it not to write in rhyming couplets, it dutifully produced a rhyming-couplet poem every time. There’s clearly an enormous volume of poems in the training data written in that form, and it was practically impossible to escape that local minimum in the model’s latent space, much like the half-full wine glass imagery. They only got the poem they wanted when they first prompted ChatGPT to reason through the elements of good poetry writing regardless of style, and then had it generate a prompt to write a poem following those guidelines. That produced a lovely poem on the first attempt!
It’s pretty well known at this point, but it seems like when it comes to prompting these models, telling them what to do or not do is less effective than telling them how to go through the process of achieving the outcome. You need to get them to follow steps to reach a conclusion, or they’ll just follow the statistical path of least resistance.
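To make that concrete, here’s a minimal sketch of the two-step pattern: first ask the model to reason about the process, then fold its reasoning into the actual request. The prompt wording and the stand-in `guidelines` string are illustrative, not taken from the original attempt; in practice `guidelines` would be the model’s reply to the first prompt.

```python
# Step 1 prompt: ask the model to reason about the process, not the output.
step_one = (
    "Reason through the elements of good poetry writing, "
    "regardless of style or form."
)

def step_two(guidelines: str, task: str) -> str:
    """Fold the model's step-1 reasoning into the actual request."""
    return (
        f"Following these guidelines:\n{guidelines}\n\n"
        f"Now {task}"
    )

# Stand-in for the model's step-1 reply, so this sketch is self-contained.
guidelines = "Favor concrete imagery; let form follow the subject."
prompt = step_two(guidelines, "write a poem about spume (sea foam).")
print(prompt)
```

The point is that the second prompt asks the model to follow a process it just articulated, rather than telling it what output to avoid.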
thanks for this comment! it clarifies the function of the llm well.
i.e., use it as a template-generating search-engine helper for most common things.
for uncommon things, you have to prompt-guide it to get what you want.
Edit: the poem: https://paste.ee/d/rIbLa/0