I agree with your cruise control analogy in a sense, but I think it's Air Canada that's misusing the technology, not the customer. If they try to replace customer service agents with chatbots that lie, they need to be prepared to pay for the results. I'm glad they're not allowed to use such unreliable, experimental technologies in their airplanes (737 Max notwithstanding).

There's absolutely a technology available to make a chatbot that won't tell lies: connect a simple text classifier to a human-curated knowledge base.
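
For concreteness, here's a minimal sketch of that idea, assuming a TF-IDF matcher as the "simple text classifier" over a hand-written FAQ. The questions, answers, and confidence threshold are illustrative placeholders, not any real airline's policy:

    # Sketch: answer only from human-curated text; refuse when unsure.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Every answer below was written by a person, so the bot cannot
    # invent a policy that isn't here. (Entries are made up.)
    FAQ = {
        "How do I request a bereavement fare refund?":
            "Submit the refund request form within 90 days of ticketing.",
        "What is the checked baggage allowance?":
            "One checked bag up to 23 kg is included on standard fares.",
    }

    questions = list(FAQ)
    vectorizer = TfidfVectorizer(stop_words="english").fit(questions)
    question_vectors = vectorizer.transform(questions)

    def answer(user_query, threshold=0.35):
        scores = cosine_similarity(
            vectorizer.transform([user_query]), question_vectors)[0]
        best = scores.argmax()
        if scores[best] < threshold:
            # Out-of-scope question: hand off instead of guessing.
            return "I'm not sure; let me connect you with an agent."
        return FAQ[questions[best]]

    print(answer("what's the baggage allowance?"))  # curated answer
    print(answer("can my dog fly in the cabin?"))   # hands off to a human

A bot like this can never promise anything a human didn't write down; the trade-off, as the replies below note, is that it can't answer anything else either.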




> If they try to replace customer service agents with chatbots that lie, they need to be prepared to pay for the results.

The result would be a de facto ban on AI chatbots, because nobody knows how to get them not to make stuff up.

> I'm glad they're not allowed to use such unreliable, experimental technologies in their airplanes (737 Max notwithstanding).

If you use unreliable technology in an airplane, it falls out of the sky and everybody dies. If you use it in a chatbot, the customer can e.g. go to the company's website to apply for the discount it said exists and discover that it isn't there, and then be mildly frustrated in the way that customers commonly are when a company's technology is imperfect. It's not the same thing.

> There's absolutely a technology available to make a chatbot that won't tell lies: connect a simple text classifier to a human-curated knowledge base.

But then it can only answer questions that are in the knowledge base, and customers might prefer an answer that's right 75% of the time and can be verified either way in five minutes to waiting on hold to talk to a human being because the less capable chatbot couldn't answer their question and the more capable one was effectively banned by the government's liability rules.


> The result would be a de facto ban on AI chatbots, because nobody knows how to get them not to make stuff up.

If, instead of a chatbot, this were about incompetent support reps who lied constantly, would you make the same argument? "We can't hire dirt-cheap, low-quality labor because as company representatives we have to do what they say we'll do. It's so unfair"


It isn't supposed to be a company representative; it's supposed to be a chatbot.

If Microsoft put ChatGPT on Twitter so people could try it, and everybody knew that it was ChatGPT, and then it started offering companies free Windows licenses, why should they have to honor that? It's obvious why it might do that, but the purpose of letting people use it wasn't so that it could authorize anything.

If the company holds a conference where it allows third-party speakers to give talks, and everybody knows the speakers are third parties and not company employees, should those guest speakers be able to speak for the company? Why would that accomplish anything other than the elimination of guest speakers?


> The result would be a de facto ban on AI chatbots

No, the result would be a de facto ban on using them as a replacement for customer service agents. I support that for the time being since AI chatbots can't actually do that job yet because we don't know how to keep them from lying.

They could put a disclaimer on it, of course. To be sufficiently truthful, the disclaimer would need to be front and center and say something like "The chatbot lies sometimes. It is not authorized to make any commitments on behalf of the company no matter what it says. Always double-check anything it tells you."


> No, the result would be a de facto ban on using them as a replacement for customer service agents.

But what does that even mean? If Ford trains a chatbot to answer questions about cars purely for entertainment purposes, or to get people excited about cars, a customer could still use it for "customer service" just by asking it questions about their car, which it might very well be able to answer. But it would also be capable of making up warranty terms etc., so you've just banned that thing and anything like it.

> I support that for the time being since AI chatbots can't actually do that job yet because we don't know how to keep them from lying.

It's pretty unlikely we could ever keep them from lying. We can't even get humans to do that. The best you could do is keep them on a script, which is the exact thing that makes people hate existing human customer service reps who can't help them because it isn't in the script.

> To be sufficiently truthful, the disclaimer would need to be front and center and say something like "The chatbot lies sometimes. It is not authorized to make any commitments on behalf of the company no matter what it says. Always double-check anything it tells you."

Which is exactly what's about to start happening, if that actually works. But that's as pointless as cookie banners and "this product is known to the State of California to cause cancer".


It's all in how it's presented, and it should not be up to the customer or end user to understand that technology running on the company's server, which might be changed at any time, might behave unreliably.

I expect something that's presented as customer service not to lie to me about the rebate policy. As long as what it says is plausible, I expect the company to be prepared to cover the cost of any mistakes, especially if the airline only discovers the mistake after I've paid them and taken a flight. Compensating customers for certain types of errors is a normal cost of doing business for airlines, and the $800 CAD this incident cost the airline is not an exorbitant amount. The safety valve here is that judges and juries do test whether a reasonable person would believe a stated offer or policy; I can't trick a chatbot into offering me a billion dollars for nothing and get a court to hold a company to it.

If Ford presents a chatbot as entertainment and makes it really clear at the start of a session that it doesn't guarantee the factual accuracy of responses, there's no problem. If they present it as informational and don't make a statement like that, or hide it in fine print, and it then says something like "the 2024 Mustang Ecoboost has more horsepower than the Chevrolet Corvette and burns less gas than the Toyota Prius", they should be on the hook for false advertising to the customer and unfair competition against Chevrolet and Toyota.

Similarly, if Bing or Google presents a chatbot as an alternative to their search engine for finding information on the internet, and it says "Zak's photography website is full of CSAM", I'm going to sue them for libel.


> The safety valve here is that judges and juries do test whether a reasonable person would believe a stated offer or policy; I can't trick a chatbot into offering me a billion dollars for nothing and get a court to hold a company to it.

Sure, but a billion people could each trick it into offering them $100, which would bankrupt the airline.

> they should be on the hook for false advertising to the customer and unfair competition against Chevrolet and Toyota.

But all you're really doing is requiring everyone to put a banner on everything that says "for entertainment purposes only". Because if something like that gets them out of liability, then that's what everybody is going to do. And if it doesn't, then you're effectively banning the technology, because "have it not make stuff up" isn't a thing they know how to do.


Courts probably aren't going to enforce any promise of money for nothing or responses prompted by obvious trickery, but they might enforce promises of discounts, and are very likely to enforce promises of rebates as the court in this case did.

If that means companies can't use chatbots to replace customer service agents yet, so be it.


> Courts probably aren't going to enforce any promise of money for nothing or responses prompted by obvious trickery, but they might enforce promises of discounts, and are very likely to enforce promises of rebates as the court in this case did.

But what does that matter? So someone posts on Reddit how to trick the chatbot into offering a rebate, and then 75% of their customers have done it by the time the company realizes what's going on, and now they're out of business.

> If that means companies can't use chatbots to replace customer service agents yet, so be it.

You're still not articulating any way to distinguish "customer service" from any other functioning chatbot. A general purpose chatbot will answer customer service questions, so how does this not just ban all of them?


And if I saw that disclaimer, I wouldn't use the tool. What's the point if you can't trust what it says? Just let me talk to a human who can solve my issue.


> What's the point if you can't trust what it says? Just let me talk to a human who can solve my issue.

That's the point of it -- you don't have to wait on hold for a human to get your answer, and you could plausibly both receive it and validate it yourself sooner than you could get through to a human.


> The result would be a de facto ban on AI chatbots, because nobody knows how to get them not to make stuff up

I think that banning lying to customers is fine.


ChatGPT is presumably capable of making something up about ChatGPT pricing. It should be banned?

