> a non-deterministic tool that can hardly be reasoned about that will straight up hallucinate facts for any professional work
This sounds like a decent enough description of the human brain.
Don't get me wrong: I'm not for a moment suggesting that what we have today is anything approaching "Artificial General Intelligence", and I'm deeply worried about the inevitable massive damage AI is going to do to our world. But I do think it's a little funny that many people's specific objections to it amount to "it can do what people do". Yes. That's the idea.
The main concern with AI usage is not that it will write bad, buggy code or lie: we already do that plenty ourselves, so it's not a novel skill in the professional arena. The main concern is that we can scale that stupidity.
But that's precisely a problem of scale: individual usage isn't going to do you notable damage on its own.
> Yes, if you are constantly hopping from one framework of the month to the next big thing, you just don't have the time to learn anything in depth and then again ChatGPT can help.
I actually think the opposite is true. Having used ChatGPT quite a lot for work (by mandate - I wouldn't have chosen to either, but I'm glad now that I had to), I've found it's really very good at generating bad code and being confidently wrong (again, much like people): if you ask it about a subject you're not deeply knowledgeable about, it's going to lead you astray. The most astute way to use it is actually for going the last mile on something you're already very confident in, so that you can correct it as needed.