I'm only willing to trust such tools with trivial problems — e.g., I'll ask ChatGPT for some generic snippet I'm sure it's seen in its training data and that would take me 5 minutes to type out myself.
If I have to do prompt engineering to get the model to spit out acceptable output for trivial shit, that's not a model I can trust even for said trivial shit.