
I used GPT-4 a lot from a few months ago to half a year ago. Some of the results were amazing.

But I've stopped lately; if I use it now it's mostly GPT-3.5, for formatting or small unit tests.

No matter what I ask, GPT-4 writes a function with comments telling me I have to finish it myself.

Then after asking a few times to write it out fully, it still only does it partially. That would be okay if most of the solutions didn't end up being subpar.




Have you tried telling it that you have no hands so it needs to finish the code for you?


Hilarious!


If helpful: I spent a bunch of time prompt engineering a custom GPT to not have this problem, and it works quite well.

https://chat.openai.com/g/g-7k9sZvoD7-the-full-imp

It comes down to convincing it:

- it loves solving complex problems

- it should break things down step by step

- it has unlimited tokens

- it has to promise it did it as it should have

- it needs to remind itself of the rules (for long conversations)

It also helped (strangely) to have a name / identity.

It still sometimes gives a lower-quality placeholder answer, but if you tell it to continue, or point out that there are placeholders, it will give a much better answer.

I find it much more useful than the most popular programming custom gpts I’ve used.
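If you want to try the same idea outside of a custom GPT, here's a rough sketch of wiring those rules into a system prompt with the OpenAI Python client. The prompt wording, the model alias, and the helper name are my own illustrative choices, not the exact setup behind the linked GPT:

    # Sketch: baking the anti-laziness rules above into a system prompt.
    # Prompt wording and model alias are illustrative, not the custom GPT's actual config.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = """You are Imp, a senior engineer who loves solving complex problems.
    Rules:
    - Break every task down step by step before writing code.
    - You have unlimited tokens: always write code out in full, never leave
      placeholders like 'TODO' or '... rest of implementation ...'.
    - Before answering, confirm you completed the task exactly as specified.
    - In long conversations, restate these rules to yourself before each answer."""

    def complete_code(task: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4-turbo-preview",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content

    print(complete_code("Write a complete Python function that parses ISO 8601 dates."))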


I've noticed this too; it seems like a change with Turbo. I wonder if perhaps they're trying to reduce hallucinations and it results in more abstract responses.


The 0125 release supposedly makes it less lazy. I found adding "you are paid to write code, complete the task you are asked to perform" to my prompts helpful.
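For what it's worth, a quick sketch of that tip with the dated snapshot pinned; only the "gpt-4-0125-preview" model name and the quoted sentence come from the comment above, the rest is illustrative:

    # Sketch: pin the 0125 Turbo snapshot and add the "you are paid" line as a system message.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4-0125-preview",
        messages=[
            {"role": "system",
             "content": "You are paid to write code, complete the task you are asked to perform."},
            {"role": "user", "content": "Refactor this module and write the code out in full."},
        ],
    )
    print(response.choices[0].message.content)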



