Why not simply ban any impersonation of humans? For what honest, economically useful reason could someone want to use AI without disclosing it? Proposed regulations:
1. Notify all users that what is displayed is output from a computer program (see #4 for terminology).
2. Notify all users on whose behalf the computer program is operating: There must be a human or organization (e.g., corporation) named, the business equivalent of the 'beneficial owner'. For example, 'You are using the Customer Helper computer program by Amazon'.
3. Going further: Ban any representation that the computer output is human: No human names, no anthropomorphism, no metaphors like 'says' or even 'intelligence'.
4. Terminology describing the system must include 'computer', and use computer-related terms such as 'output'. (This one needs more thought regarding terminology.)
If that seems too far, why not? What are we hiding? And it would be inexpensive to implement. Also, consider that the biggest risk is manipulating people on an automated, mass scale, which depends significantly on the victims thinking they are interacting with a human.
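To show how inexpensive it would be: here's a minimal sketch of proposals 1 and 2 as a thin wrapper around a chat program. Everything here (DisclosedChatSession, Operator, the banner wording) is hypothetical, not any real API:

```typescript
// Hypothetical sketch: a wrapper that prepends the disclosures from
// proposals 1 and 2 to every chat session. Names are illustrative only.

interface Operator {
  name: string;        // the "beneficial owner", e.g. "Amazon"
  programName: string; // e.g. "Customer Helper"
}

class DisclosedChatSession {
  private disclosed = false;

  constructor(
    private operator: Operator,
    private generate: (prompt: string) => string, // stand-in for a model call
  ) {}

  respond(userInput: string): string {
    // Proposals 1 & 2: the first output names the program and its operator.
    const banner = this.disclosed ? "" :
      `[You are using the ${this.operator.programName} computer program, ` +
      `operated by ${this.operator.name}. All text below is computer output.]\n`;
    this.disclosed = true;
    return banner + this.generate(userInput);
  }
}

// Usage:
const session = new DisclosedChatSession(
  { name: "Amazon", programName: "Customer Helper" },
  (prompt) => `Computer output for: ${prompt}`,
);
console.log(session.respond("Where is my order?"));
```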
Especially in these early days, people are vulnerable - most people are not in the HN bubble and have no idea what they are dealing with.
I also think it addresses a clear risk: The chat programs I've seen are clearly designed to appear human, to the point of polite chattiness, apologies, etc. Think about it on a practical level: it's a fun gimmick, a way to show off capability, but otherwise superfluous. How does it benefit anyone? It doesn't increase productivity. It appears designed to manipulate or fool people into thinking they are talking with humans, as does much else about the design and language around it. That indicates a serious problem in the direction of the industry.
I think the simplest answer to this is that we have tons of existing anthropomorphic technologies that nobody actually believes are human: nobody thinks a PBX system is an actual human. The problem now is that LLMs have become believable.
I'm not sure I would go as far as to ban anthropomorphism, but basically everything else you said I agree with. We should just blanket ban computers impersonating people - this should include deepfakes and similar. The only exception is for cinematic productions, with actors who have pen-and-paper signed consents.
As dumb and over-regulated as it sounds, it really isn't that crazy, and could have positive side-effects. It's one of those rules that mainly makes bad behavior easier to catch. It's like lying on a bank document: it's illegal because it gives law enforcement a foothold for investigating more serious issues like laundering. Laws against impersonating a person would likewise help us get a grip on investigating phishing and other frauds, as well as address anti-consumer behavior.
It would also help with false advertising issues - "a computer cannot make promises like selling you a car for $1, but a person can. If a computer pretends to be a person, its hallucinated promises are legally binding, and the owner is liable". Same benefit applies to spam and fraud calls - simply pretending to be a person is enough of a crime to get a warrant to investigate the actual fraud.
This sounds good to me. It should not be legal to mislead consumers into thinking they are dealing with a human when they are not, whether explicitly or implicitly.
While I agree, I think autonomy to act is more relevant. If I'm dealing with a person who is only allowed to paste in some boilerplate platitudes (à la Uber and other bottom-of-the-barrel customer service orgs), does it matter whether it's a real person or a machine?
Those scripts might be frustrating, but that's a different problem. The good thing about boilerplate is that at least one human has looked at it, and the set of responses is known and generally auditable. Also, lying about being a human when someone is not a human is just plain old misrepresentation even if it works correctly.
Yes it does. BMO is a bank in Canada that has a virtual agent impersonating a human, and it is pretty convincing. It is unethical, creepy, and offensive.
Making people follow scripts is merely a bad practice insisted upon by misinformed or stupid management.
If it's limited to traditional customer service roles (which aren't allowed to deviate from a script), I honestly don't understand how this would be any more unethical or creepy than it already was to begin with. The experience is a lot like interacting with an LLM chatbot today, except somehow even more creepy because you know it's a human behind the phone.
I don't really have a stake in this because I know customer service is going in the shitter either way. But I do find it interesting how people perceive the progression.
In my experience a person is a better judge of when to bail out and transfer.
I had to get on the phone with Walgreens the other day and got stuck in a phone tree loop of “sorry, I didn’t get that” with no option to bail out and talk to a real person.
So this is a "hack" to talk to a real person at almost every company - the Investor Relations contacts are always staffed, and usually bored. Remember, you are a concerned shareholder, and this issue matters to you. You may only own 1/50th of a share through an index fund, but that still makes you a shareholder.
I've never tried calling, but I've regularly emailed companies, and this always works.
I don’t know if that would’ve helped in this particular case, because this was a phone tree for the pharmacy at my location, and I needed to return a call for the pharmacist.
Impersonating without informing and intentionally trying to trick a person into thinking they are talking to a human is unethical and creepy. There is nothing wrong with using virtual agents, but there is a thick black line and companies are clearly willing to leap over it. These laws are necessary.
Also: there is no rule that customer service must follow a script. Some poorly run companies follow this awful practice, and it is extremely harmful to the customer experience and to agent engagement and work satisfaction. It's the result of incompetent management.
I don't really see a distinction in effect between forcing a human to follow a script and getting a chatbot to impersonate a human. How is it "bad customer service" on one hand and "unethical and creepy" on the other? How is it not just unethical, creepy, and bad customer service all around?
Of course, both are clearly unethical if they don't appear to be representing corporate interests, but I haven't run into this issue outside of robocalls (which are, I believe, illegal already).
So your voice/text goes into the machine, the machine spits out a response, and a Real Person reads/copy-pastes the machine response. You will, in fact, be dealing with a human.
I think people using AI for customer service are probably doing it entirely wrong. AIs misunderstand and hallucinate, and your company is absolutely liable for the shit your AI says to your customers, as we recently discovered.
The AI and the dumb things it may say ought to be the property and responsibility of the customer's AI assistant, whereas companies provide a machine-parseable database of rules and intents to be consumed by the user's AI assistant.
The endpoints accessed by such intents are intentionally designed flows that ensure there is no misunderstanding nor undue friction in helping the user give you more of their money.
User to their assistant: I want to watch $SHOW. Assistant: I see that you can rent it on Amazon or add HBO so you can watch it.
User is shown a human-designed disclosure of the recurring cost, potentially with some surrounding ad supporting the value proposition, e.g. other shows that people who want to watch $SHOW might also enjoy, adding value to the pitch based on information the user willingly shared about their originating purpose.
User: I want to cancel my service
User is transferred to a queue to either talk with someone immediately or schedule a call, routing them to retention rather than general tech support.
If THEIR AI tells them that HBO is free or that they are entitled to a credit, well, THEIR AI told them that; the company never did. And the user gets the benefit of a singular interface rather than learning 95 different companies' interfaces.
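As a concrete illustration, here's a minimal sketch of what that machine-parseable "rules and intents" database might look like, consumed by the user's own assistant rather than by a company chatbot. The schema, field names, and endpoints are all invented for illustration:

```typescript
// Hypothetical sketch of a company-published intent database. The company
// never generates free-form text, so there is nothing to hallucinate; the
// user's assistant matches goals to intents and calls designed flows.

interface Intent {
  id: string;                    // e.g. "cancel_service"
  description: string;           // human-authored, auditable wording
  endpoint: string;              // intentionally designed flow, no free text
  requiredDisclosures: string[]; // costs/terms the assistant must show
}

const intentDatabase: Intent[] = [
  {
    id: "add_subscription",
    description: "Add a paid channel such as HBO to the account.",
    endpoint: "https://example.com/api/subscriptions/add",
    requiredDisclosures: ["Recurring monthly cost", "Cancellation terms"],
  },
  {
    id: "cancel_service",
    description: "Cancel the service; routes to retention, then scheduling.",
    endpoint: "https://example.com/api/service/cancel",
    requiredDisclosures: ["Effective date", "Any early-termination fee"],
  },
];

// Naive matcher standing in for whatever the user's assistant actually does.
function findIntent(userGoal: string): Intent | undefined {
  return intentDatabase.find((i) =>
    userGoal.includes(i.id.replace("_", " ")));
}

console.log(findIntent("please cancel service for me"));
```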
AI customer service might end up being a good thing. If a company faces a lawsuit involving systemically poor customer service, they can shrug it off by claiming individual employees made mistakes. If it's an AI, its behavior can be reproduced and judged.
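A rough sketch of what "reproduced and judged" could mean in practice, assuming exchanges are logged with everything needed to replay them (all field names here are made up):

```typescript
// Hypothetical audit log for AI customer service exchanges. With the exact
// model version, sampling seed, and full prompt on record, an auditor can
// rerun the model and check whether the logged output is really what the
// system produces.

interface ExchangeRecord {
  timestamp: string;
  modelVersion: string; // exact model build, so the same weights can be used
  seed: number;         // fixed sampling seed makes the output reproducible
  prompt: string;       // full input, including any system instructions
  output: string;       // what the customer actually saw
}

const auditLog: ExchangeRecord[] = [];

function recordExchange(modelVersion: string, seed: number,
                        prompt: string, output: string): void {
  auditLog.push({
    timestamp: new Date().toISOString(),
    modelVersion,
    seed,
    prompt,
    output,
  });
}

recordExchange("support-bot-2024-05", 42,
  "Can I get a refund?", "Yes, within 30 days.");
console.log(auditLog[0]);
```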