Hacker News
Ask HN: What ChatGPT prompt do you use to avoid code-writing hallucinations?
5 points by strimp099 4 months ago | hide | past | favorite | 8 comments
I use ChatGPT to write Python tutorials.

I often find myself debugging great-looking code that makes up methods or functions.

This is especially true when it's instructed to use somewhat niche libraries.

Is there a specific prompt technique you use to avoid hallucinations when writing code?




Here is Jeremy Howard's system prompt [0]. I often have success using a modified version of this.

> You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so.

>

> Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question.

>

> Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either.

>

> Don't be verbose in your answers, but do provide details and examples where it might help the explanation. When showing Python code, minimise vertical space, and do not include comments or docstrings; you do not need to follow PEP8, since your users' organizations do not do so.

[0] - https://m.youtube.com/watch?v=jkrNMKz9pWU


Architect the program yourself in your head. Then use GPT to write one function at a time. For obscure libraries, paste documentation directly into the context window and give links to example GitHub repos. Paste errors in to troubleshoot as needed. Architect the program as many small modules in their own files so it's easier to compartmentalize and let GPT work with smaller blocks of code.
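A minimal sketch of the "one function at a time, with docs pasted into context" workflow above. All of the names, the docs string, and the repo URL are made up for illustration; the actual API call to the model is omitted, this only shows how the messages might be assembled:

```python
# Sketch: build a per-function prompt that embeds pasted library docs,
# so the model works from real documentation instead of guessing.
# Every identifier below is illustrative, not a real library's API.

def build_prompt(function_spec: str, library_docs: str, example_repo_url: str) -> list:
    """Assemble chat messages for generating ONE function at a time."""
    system = (
        "You are writing a single Python function. Use ONLY the APIs "
        "shown in the documentation below; if something is not documented, "
        "say so instead of inventing it."
    )
    user = (
        f"Library documentation (pasted verbatim):\n{library_docs}\n\n"
        f"Example usage: {example_repo_url}\n\n"
        f"Write this function:\n{function_spec}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_prompt(
    "def load_table(path): parse the file and return a list of rows",
    "somelib.read(path) -> Reader; Reader.rows() -> iterator of tuples",
    "https://github.com/example/somelib-demos",
)
```

The point is that the niche library's real signatures travel with every request, so the model has something concrete to copy from.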


> I often find myself debugging great-looking code that makes up methods or functions.

Are you using an IDE? I do some training of models for a few companies, and yes, the models all do this a lot, but it's pretty obvious within 10 seconds of pasting the code into PyCharm.


I typically work within Jupyter Notebooks since it’s easier to build tutorials with text and markdown. But I agree: even looking at the code output I can tell almost immediately it’s made up.
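The "made-up method" check an IDE does can also be approximated mechanically in a notebook. A rough sketch (my own illustration, not from the thread): walk the generated code's AST and flag attribute accesses on a module that don't actually exist.

```python
# Sketch: flag made-up module attributes in generated code instead of
# eyeballing it. Only catches direct `module.attr` references.
import ast
import importlib

def undefined_attributes(code: str, module_name: str) -> list:
    """Return attributes of `module_name` referenced in `code` that don't exist."""
    mod = importlib.import_module(module_name)
    missing = []
    for node in ast.walk(ast.parse(code)):
        # Look for expressions shaped like `module_name.something`
        if (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id == module_name
                and not hasattr(mod, node.attr)):
            missing.append(node.attr)
    return missing

# json.dumps exists; json.to_yaml is a plausible-looking hallucination:
print(undefined_attributes("json.dumps(x); json.to_yaml(x)", "json"))  # ['to_yaml']
```

It's crude (no aliasing, no chained attributes), but it catches exactly the "great-looking code that makes up methods" case without leaving the notebook.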


I asked ChatGPT itself :D. It says the prompt should include something like:

"Please provide a Python solution using only established libraries documented on the official Python Package Index (PyPI). Avoid suggesting custom or non-standard libraries"


I just use my whiteboard honestly.

You mentioned you write tutorials, so it's probably better to just do it yourself.

If I misunderstood, please excuse me. I also find it nearly impossible to get correct results with GPT. It only works for really simple, mostly manual stuff.


Are you using 3.5 or 4? I use 4 for code-related stuff all the time, and it almost never does this.


4 is extremely stubborn, to the extent that I give up when it's getting things wrong. 3.5 gets things wrong more often but can be corrected more easily. Overall I prefer 3.5.



