I haven't read the article (like many here), and I don't get why the writer would write this. LLMs/AI are not at a stage where they can replace people or their jobs (broadly speaking); they still have to be taught and engineered to solve your problems. For example, if I enter a prompt like "Write a JavaScript function that does XYZ for me." and don't get the desired result, it's unfair for me to conclude "ChatGPT bad, AI bad".
At this moment, you have to work within the limits: write better prompts, and use ChatGPT as a guide rather than as your personal robot. With that approach, I expect a lot of genuinely new capabilities to land in products. Use the tool well to get the most out of it.