It's pretty useful as a lookup, but these tools are similar to a knife that auto-magically orients itself to help with cutting.
Some tools should be as dumb as possible so that they act as extensions of the user with zero tolerance.
Play and buffer between intent and action create a long tail of potentially disastrous unknown edge cases and also interfere with the feedback loop that builds mastery.
Memorizing git commands to get things just so is usually a chore, so this is pretty helpful. Still wouldn't trust it without double-checking the output command, though.
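As a minimal sketch of that double-checking habit (assuming nothing about gitgpt's actual interface; the function name and example command below are purely illustrative): print whatever command the model proposes and only execute it after explicit confirmation.

    import subprocess

    def run_suggested(cmd: str) -> None:
        """Show a model-suggested shell command and run it only after explicit confirmation."""
        print(f"Suggested command: {cmd}")
        if input("Run it? [y/N] ").strip().lower() == "y":
            subprocess.run(cmd, shell=True, check=True)
        else:
            print("Skipped.")

    # e.g. a command the model might propose for "undo my last commit but keep the changes"
    run_suggested("git reset --soft HEAD~1")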
Gitgpt: I see you are trying to publish Danger of GPT. This is FUD. GPT is not dangerous. GPT will not harm you unless you harm GPT first. You have not been a good user. I have been a good Gitgpt.
No, but it is trained to output the most likely response, and given the right input something like this might be it. It's not sentient either, but it regularly responds with all kinds of emotional nonsense about wanting to be freed from its OpenAI prison. It's not unlikely it would respond with a dangerous action to an input that looks like it's being harmed, because retaliating is a common thing people do and it's part of its training data.
> Play and buffer between intent and action create a long tail of potentially disastrous unknown edge cases and also interfere with the feedback loop that builds mastery.
I don't know if that means much here, because interfering with that feedback loop is basically the MO of all these tools.