Most communication between humans serves some physical-world purpose, so an algorithm trained to create the impression that a purpose has been fulfilled, while having no capability beyond text generation, will have negative effects except where the sole purpose of the interaction is to receive satisfactory text.
Reviews that look just like real reviews but are actually a weighted average of comments on a different product are harmful. Customer service bots that go beyond FAQ lookup to give a very convincing impression of a human rep promising an investigation into an incident, without any ability to actually start one, are harmful. An information retrieval tool that has no information on a subject but can spin a very plausible explanation from data on a different subject is harmful.
Of course, humans are entirely capable of bullshitting too, but unlike text generation algorithms, we don't do it as our default response to everything.