Can I use this to make it reliably output code (say JavaScript)? I haven't managed to do it with prompt engineering alone: the model still adds explanations and apologies, and does other unwanted things like splitting the code into two markdown files.
Coincidentally, the new gpt-3.5-turbo-0613 model also follows system prompts better: with the demo above and some further prompt tweaking, it's possible to get ChatGPT to output code super reliably.
Not with this, but using the token selection restriction approach you can make an LLM produce output that conforms to an arbitrary formal grammar completely reliably. JavaScript, Python, whatever.
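To make the idea concrete, here is a toy sketch of token-restricted decoding, not tied to any particular library: at each step, mask out every candidate token whose addition would stop the output from being a valid prefix of the grammar. The vocabulary, the regex "grammar" (a JS variable declaration), and the greedy pick-first decoder are all invented for illustration; a real implementation would apply this mask to the LLM's logits and use a proper parser instead of a regex.

```python
import re

# Toy vocabulary and "grammar" (both invented for illustration).
VOCAB = ["const ", "let ", "x", "y", " = ", "1", "2", ";", "hello"]
PATTERN = re.compile(r"(const |let )[xy] = [12];")

def is_valid_prefix(text: str) -> bool:
    # A string is allowed if some full grammar match still starts with it.
    # Brute force over the tiny language; a real system walks a parser state.
    for kw in ("const ", "let "):
        for var in "xy":
            for num in "12":
                if f"{kw}{var} = {num};".startswith(text):
                    return True
    return False

def allowed_tokens(prefix: str) -> list[str]:
    """The mask: only tokens that keep the output a valid grammar prefix."""
    return [t for t in VOCAB if is_valid_prefix(prefix + t)]

# Stand-in for sampling from masked logits: greedily take the first
# allowed token. Note "hello" can never be chosen -- the mask forbids it.
out = ""
while not PATTERN.fullmatch(out):
    out += allowed_tokens(out)[0]
print(out)  # -> const x = 1;
```

With real logits you would sample among the allowed tokens by model probability rather than taking the first, but the guarantee is the same: every reachable output is grammatical by construction.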