This, to me, is why I don't use ChatGPT. Every time I use it I get answers of questionable validity. I'd say 3/4 of my questions have returned answers that were at least partially incorrect: maybe 10% wildly incorrect, with the remainder partially incorrect.
So if it is wrong on MOST of the questions that I am able to validate myself, then how can I trust it on the questions that I am unable to validate myself?
The whole reason I started doing this passive research in the first place is that late last year an employee who used to be decent (never an all-star, but he got the job done to an average/satisfactory level) suddenly started performing incredibly poorly.
He was submitting code and solutions that were just really bad. He had always been an average producer, teetering on the edge during performance reviews and doing just well enough that we kept him around, but he quickly started making mistake after mistake. In several code reviews I found really strange artifacts and comments in his code that were blatant mistakes. I confronted him about them over the course of several performance reviews, and he blamed Stack Overflow "copy pasta". Eventually it got to the point that almost everything he submitted or produced was problematic in some way and he was burning more of my time than he was saving, so I ended up firing him.
During the termination meeting, in front of HR, he finally broke down and admitted that he had been using ChatGPT for everything, and he begged us to let him stay, promising he would stop using it altogether. Of course it didn't matter at that point, and we let him go. But I started to realize that the increase in mistakes was all due to ChatGPT leading him astray.
That whole experience really taught me that ChatGPT is not ready for primetime. If you blindly trust ChatGPT, you will find yourself in the wrong place most of the time. The problem is that unless you already know the answer to the question you are asking, it is very difficult to tell where ChatGPT's answers might be correct and where they might be incorrect (because it is usually a mixture of both). This makes it entirely useless for questions that you are not comfortable validating.
Presumably you're a good manager, but the way you told it makes you sound like a bad one who didn't dig into what was going on with a poorly performing employee and instead let them flail around until you had to fire them. There's more to that story, and it's not really about ChatGPT at all.
There are other stories out there, like https://hyperbo.la/w/chatgpt-4000/, which show it can be useful and a force multiplier when used well. But it's like giving a faster car to a bad driver: it'll just result in them crashing faster. If you've got a programmer who doesn't want to program, ChatGPT can't make them a better programmer, since they don't actually want to be one!
Why would he be so reluctant to confess to using ChatGPT? And why would he keep using it despite constant feedback that his performance was getting so much worse?
> So if it is wrong on MOST of the questions that I am able to validate myself, then how can I trust it on the questions that I am unable to validate myself?
If you're using it to generate code, you can validate it yourself: run the code and check it against cases whose answers you already know.