Fixing bugs in code it writes itself is very different to diagnosing (i.e. identifying, not fixing) bugs in my own code.

When stuck, I often paste code into ChatGPT and ask why it doesn’t work; it frequently helps me pinpoint the error and proposes a fix.

I have done the same thing. “I am seeing the following behavior, but I expected <X>. What am I missing?” That prompt has often quickly solved bugs that would otherwise have taken real time to track down. Very handy!


Can you share some examples of this? I haven’t had much luck with ChatGPT correctly identifying issues because (in my case, at least) they stem from other parts of a large codebase, and (last time I checked) I couldn’t paste more than a few kilobytes of code into ChatGPT.

One example is bugs caused by precondition violations, which ChatGPT can’t diagnose without also being given the code for all of the incoming call sites - which means you end up solving the problem yourself before you’ve even finished explaining the issue to ChatGPT (see the sketch below the footnote). So (to me, at least) my use of ChatGPT is more akin to rubber-duck debugging[1] than anything else.

[1]: https://en.wikipedia.org/wiki/Rubber_duck_debugging
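
To make that concrete, here’s a minimal sketch in Python (all names hypothetical): the function below is correct in isolation, so pasting it alone into ChatGPT reveals nothing - the defect is the violated precondition at a call site elsewhere in the codebase.

    import bisect

    def find_user(sorted_ids, target):
        # Precondition: sorted_ids must be sorted ascending.
        i = bisect.bisect_left(sorted_ids, target)
        if i < len(sorted_ids) and sorted_ids[i] == target:
            return i
        return -1

    # Elsewhere, far from the function, the precondition is violated:
    ids = [42, 7, 19]          # never sorted before the lookup
    print(find_user(ids, 7))   # prints -1 even though 7 is present

Shown alone, find_user looks fine; you only see the bug once you also have the call site, at which point you’ve already diagnosed it yourself.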


> Fixing bugs in code it writes itself is very different to diagnosing (i.e. identifying, not fixing) bugs in my own code.

yeah, we run the random number generator again and hopefully this time less buggy code pops out
