
I’ve also found the act of describing my problem to GPT4 is sometimes just as helpful as the answer itself. It’s almost like enhanced rubber duck debugging.



So true. I've written entire prompts with several lines' worth of explanation, only to realize what my issue was and never hit the "send" button. Guess I should do that more often in life, in general.


We need an inverse GPT4-style LLM that doesn't provide answers but instead asks relevant questions.


GPT4 can do that too. Just show it something (code or text) and ask it to ask coaching questions about it.


I have tried adding prompts like this and it works really well. "Rather than giving me the answer, guide me using questions in the Socratic method".
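You can do the same thing through the API. A minimal sketch, assuming the openai v1 Python client and access to a GPT-4 model; the prompt wording and the example user message are just placeholders:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The system prompt asks the model to coach with questions
    # instead of handing over the answer.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Rather than giving me the answer, guide me "
                        "using questions in the Socratic method."},
            {"role": "user",
             "content": "My script raises a KeyError and I'm not sure "
                        "why. Here's the relevant code: ..."},
        ],
    )
    print(response.choices[0].message.content)

Keeping the instruction in the system message (rather than the user message) tends to make it stick across a longer back-and-forth.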



