
Can I use this to make it reliably output code (say, JavaScript)? I haven't managed to do it with prompt engineering alone: it still adds explanations and apologies, and does other unwanted things like splitting the code into two files as markdown.



Here’s an approach to return just JavaScript:

https://github.com/williamcotton/transynthetical-engine

The key is the addition of few-shot exemplars.
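
For illustration, here's a minimal sketch of the few-shot-exemplar idea (not the repository's actual code). It assumes the pre-1.0 openai Python package, and the exemplar tasks and prompt wording are made up:

    # Minimal sketch of few-shot exemplars for code-only output.
    # Not the transynthetical-engine implementation; assumes the pre-1.0
    # openai Python package and an API key in the environment.
    import openai

    FEW_SHOT = [
        {"role": "system", "content": "You output only JavaScript. No prose, no markdown fences."},
        # Exemplar 1: question -> bare JavaScript answer
        {"role": "user", "content": "Write a function that doubles every number in an array."},
        {"role": "assistant", "content": "const doubleAll = (xs) => xs.map((x) => x * 2);"},
        # Exemplar 2: reinforces the same output shape
        {"role": "user", "content": "Write a function that reverses a string."},
        {"role": "assistant", "content": "const reverseString = (s) => [...s].reverse().join('');"},
    ]

    def generate_js(task: str) -> str:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            temperature=0,
            messages=FEW_SHOT + [{"role": "user", "content": task}],
        )
        return response["choices"][0]["message"]["content"]

    print(generate_js("Write a function that sums an array of numbers."))

The exemplars do most of the work: the model continues the established pattern of answering with bare JavaScript rather than prose.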


Here's a demo of some system prompt engineering that produced better results with the older ChatGPT models: https://github.com/minimaxir/simpleaichat/blob/main/examples...

Coincidentally, the new gpt-3.5-turbo-0613 model also follows system prompts better: with the demo above and some further prompt tweaking, it's possible to get ChatGPT to output code super reliably.
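
As an illustration of that kind of system prompt (the wording here is mine, not the exact prompt from the demo), a sketch against the pre-1.0 openai package:

    # Sketch of leaning on the stronger system-prompt adherence of
    # gpt-3.5-turbo-0613. Prompt wording is illustrative, not the
    # simpleaichat demo's exact prompt.
    import openai

    SYSTEM_PROMPT = (
        "You are a JavaScript code generator. "
        "Respond with raw JavaScript source only: no explanations, "
        "no apologies, no markdown fences, no splitting into files."
    )

    def code_only(task: str) -> str:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            temperature=0,
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": task},
            ],
        )
        return response["choices"][0]["message"]["content"]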


Not with this, but with a token-selection-restriction approach you can make an LLM produce output that conforms to an arbitrary formal grammar completely reliably. JavaScript, Python, whatever.
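
A toy sketch of the idea, using a random stand-in for the model's scores; real implementations (e.g. llama.cpp grammars, Outlines, guidance) apply the same per-step mask to the actual LLM logits:

    # Toy illustration of token-selection restriction: at every step the
    # "model" scores all tokens, but selection is masked to tokens the
    # grammar allows, so the output always conforms to the grammar.
    import random

    VOCAB = ["let", "const", "x", "y", "sum", "=", "1", "2", "42", ";", "hello", "sorry"]

    # A tiny formal grammar for `let <ident> = <number> ;`, as a state
    # machine: state -> (allowed tokens, next state).
    GRAMMAR = {
        "start":  ({"let", "const"}, "ident"),
        "ident":  ({"x", "y", "sum"}, "eq"),
        "eq":     ({"="}, "number"),
        "number": ({"1", "2", "42"}, "semi"),
        "semi":   ({";"}, "done"),
    }

    def fake_logits():
        """Stand-in for a language model: a score for every vocab token."""
        return {tok: random.random() for tok in VOCAB}

    def generate():
        state, output = "start", []
        while state != "done":
            allowed, next_state = GRAMMAR[state]
            scores = fake_logits()
            # The restriction step: only grammar-legal tokens are eligible.
            token = max(allowed, key=lambda t: scores[t])
            output.append(token)
            state = next_state
        return " ".join(output)

    print(generate())  # e.g. "let sum = 42 ;" -- never "sorry" or prose

Because the mask is applied before each token is chosen, the model physically cannot emit apologies or markdown, which is why this is reliable in a way prompt engineering alone isn't.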



