While I agree, I think autonomy to act is more relevant. If I'm dealing with a person who is only allowed to paste in some boilerplate platitudes (à la Uber and other bottom-of-the-barrel customer service orgs), does it matter whether it's a real person or a machine?
Those scripts might be frustrating, but that's a different problem. The good thing about boilerplate is that at least one human has looked at it, and the set of responses is known and generally auditable. Also, claiming to be a human when you are not one is just plain old misrepresentation, even if everything else works correctly.
Yes it does. BMO is a bank in Canada that has a virtual agent impersonating a human, and it is pretty convincing. It is unethical, creepy, and offensive.
Making people follow scripts is merely a bad practice insisted upon by misinformed or stupid management.
If it's limited to traditional customer service roles (which aren't allowed to deviate from a script), I honestly don't understand how this would be any more unethical or creepy than it already was to begin with. The experience is a lot like interacting with an LLM chatbot today, except somehow even more creepy because you know it's a human behind the phone.
I don't really have a stake in this because I know customer service is going in the shitter either way. But I do find it interesting how people perceive the progression.
In my experience a person is a better judge of knowing when to bail out and transfer.
I had to get on the phone with Walgreens the other day and got stuck in a phone tree loop of “sorry, I didn’t get that” with no option to bail out and talk to a real person.
So this is a "hack" to talk to a real person at almost every company - the Investor Relations contacts are always staffed, and usually bored. Remember, you are a concerned shareholder, and this issue matters to you. You may only own 1/50th of a share through an index fund, but that still makes you a shareholder.
I've never tried calling, but I've regularly emailed companies, and this always works.
I don’t know if that would’ve helped in this particular case, because this was a phone tree for the pharmacy at my location, and I needed to return a call from the pharmacist.
Impersonating without informing and intentionally trying to trick a person into thinking they are talking to a human is unethical and creepy. There is nothing wrong with using virtual agents, but there is a thick black line and companies are clearly willing to leap over it. These laws are necessary.
Also, there is no rule that customer service must follow a script. Some poorly run companies follow this awful practice, and it is extremely harmful to the customer experience and to agent engagement / work satisfaction. It's the result of incompetent management.
I don't really see a distinction in effect between forcing a human to follow a script and getting a chatbot to impersonate a human. How is it "bad customer service" on one hand and "unethical and creepy" on the other? How is it not just unethical, creepy, and bad customer service all around?
Of course, both are clearly unethical if they don't appear to be representing corporate interests, but I haven't run into this issue outside of robocalls (which are, I believe, illegal already).