It used to be that you had to coax the LLM into solving the problem you wanted by first giving it examples of solving similar problems, like: """ 1 + 1 = 2 | 92 + 41 = 133 | 14 + 6 = 20 | 9 + 2 = """ -- that would be an example of 3-shot prompting (see the sketch below).
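
As a minimal sketch of what that 3-shot prompt might look like as an actual API call, using the OpenAI Python client (the model name is a placeholder and the setup assumes an API key in the environment):

    # 3-shot prompt: three worked examples followed by the real question.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    few_shot_prompt = "1 + 1 = 2\n92 + 41 = 133\n14 + 6 = 20\n9 + 2 ="

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": few_shot_prompt}],
    )
    print(response.choices[0].message.content)  # the model should continue with "11"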

With modern LLMs you still usually get a benefit from N-shot prompting. But you can now do "0-shot", which is just asking the model the question you want answered.
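
The 0-shot version is the same call with only the bare question (same assumptions as above: placeholder model name, API key in the environment):

    # 0-shot prompt: no worked examples, just ask directly.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "What is 9 + 2?"}],
    )
    print(response.choices[0].message.content)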




Thanks



