Hacker News

I really hate when companies hide behind chatbots. Like, NOOOO, your "AI" chatbot didn't help me - get me an actual person.



Kind of a weird tangent but I had a very interesting experience with one of these customer support chatbots recently. My ISP apparently now requires that all customers communicate with their chatbot as the only point of contact. So, naturally, I was pseudo-outraged because I know chatbots are just a money-saving gimmick that reduces workload by driving away 80% of support requests regardless of whether they actually solve someone's problem. And, being technical, my problem was obviously not going to be in the bank of stock answers or even understood by the bot. (I really wish I could remember the actual question)

Long story short, I posed the question to the chatbot in all its complexity, assuming it would be handed over to a human agent to read the transcript. The chatbot immediately understood the question and provided the exact response I needed.

That was the day I realized I have a deep-seated prejudice against chatbots that blinded me to the possibility that maybe, just maybe, they actually can help sometimes. And I haven't kept up with their technical advancement enough to be throwing around judgements about their abilities.

To be clear: I'm not arguing in favour of chatbots; just sharing a story.


I don't have any evidence this is the case, but my general assumption is that there are humans there as well.

Since people only get a chatbot, they ask simple questions the chatbot can answer, which weeds out a lot of support requests. As soon as the bot is stumped, it forwards directly to the pool of humans - a smaller pool than usual because there are fewer support requests.

The response goes back as though the bot did the thinking, which in some ways, it did - in the same way as if someone asked me a question I couldn't answer, I might google it, and then respond.

If this is the case, it may be slightly dishonest, but as long as people are getting the support they need, I don't necessarily think there's anything wrong with it.


>That was the day I realized I have a deep-seated prejudice against chatbots that blinded me to the possibility that maybe, just maybe, they actually can help sometimes.

Nice try, Skynet, but we're on to you.


I had a similar experience with an automated phone bot with my insurance company at one point. I had this very bizarre situation involving billing and a typically-not-covered medication in conjunction with a surgery. I figured that if I went as technically detailed as possible with everything, the bot would be confused and I would be transferred to a person, but the bot completely understood the question and answered it. No humans involved.


How do you know that "no humans involved"? Was there something that clued you in on the fact that the responses were not from a human being?


> Long story short, I posed the question to the chatbot in all its complexity, assuming it would be handed over to a human agent to read the transcript. The chatbot immediately understood the question and provided the exact response I needed.

How do you know it did? I.e., how do you know a human it was passed to didn't just pass your inverted Turing test?


In my case, I inferred due to the speed of the response. (It was even formatted fancy). So while it's conceivable that a human could have intervened, they would have had to be reading the conversation in real-time and ready to click a one-button response immediately which seems like it would defeat the purpose.

Perhaps the real question is: if a chatbot is powered by a human instead of AI, but I can't tell because the interface is consistent, is it not a chatbot?


> Perhaps the real question is: if a chatbot is powered by a human instead of AI, but I can't tell because the interface is consistent, is it not a chatbot?

The Mechanical Turk[1] was a hoax, not an early mechanical AI, so no. It's a chat interface -- perhaps with some pre-sorting and context-extracting preludes that save the human operator at the other end some time, but still just an interface -- between the human chat operator and you.

___

1: https://en.wikipedia.org/wiki/Mechanical_Turk


This is one of the big confounds: a lot of the companies which brag about AI prowess are relying on a bunch of generally not well paid humans to cover the gaps.


I share the sentiment. I feel like if you have a long FAQ or list of help articles, a chatbot can actually make a good search engine. Unlike conventional search engines, it won't trip over synonyms or phrasings not found as-is in the knowledge base.
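
A toy sketch of that idea (all names and data here are invented for illustration): a hand-written synonym table stands in for the learned language understanding a real chatbot would have, but it shows why such a bot can match an FAQ article even when the query shares no words with it, where a plain keyword search would come up empty.

```python
# Hypothetical FAQ "knowledge base": article title -> index terms.
FAQ = {
    "Troubleshooting your connection": {"connection", "connectivity", "offline", "router"},
    "Understanding your bill": {"bill", "billing", "charge", "invoice"},
}

# Stand-in for the bot's language model: maps user vocabulary to FAQ vocabulary.
SYNONYMS = {
    "internet": {"connection", "connectivity"},
    "down": {"offline"},
    "payment": {"bill", "charge"},
}

def search(query):
    terms = set(query.lower().split())
    # Expand the query with synonyms -- the step a naive keyword search skips.
    for term in list(terms):
        terms |= SYNONYMS.get(term, set())
    # Return the article sharing the most terms with the expanded query.
    best = max(FAQ, key=lambda title: len(FAQ[title] & terms))
    return best if FAQ[best] & terms else None

# "my internet is down" contains none of the FAQ's index terms verbatim,
# yet the synonym expansion still lands on the connectivity article.
print(search("my internet is down"))
```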


Chatbots aren't any more useful than a good search function on the documentation or the community message board.


I guess it depends on what you define as a chatbot. For example, semantic search that understands a natural language query, like google, is that a chat bot or a search function?


My last chat bot wanted to look up my account information prior to connecting me to the human representative. I gave it the account number, it looked it up. First question from the human: "What is your account number?" Immense waste of time and money.


Still doesn’t piss me off as much as automated phone support systems.

90% of the time they only offer options that I could easily go online to do. If I'm calling your phone number, it's because I have a problem that's not solved or clarified by your existing self-support systems.


I've frequently thought about how much net productivity loss automated phone systems cause for the economy. It seems like for every 10 minutes of my time wasted on one, the company I'm calling saves 1-2 minutes of customer service rep time.


They do that because a significant number of people will call rather than check the website first.

My issue is that they often don't give you the "none-of-the-above please let me speak to a real person" option--or they hide it.


Unfortunately I think they've found that most people don't (won't?) do that self-debugging.

Relevant xkcd: https://xkcd.com/806/


They could do a better job of separating the two.

Most companies would save their time and mine if they had a callback system which took as many details as possible up front and didn't have to ask my name and account details because I'm already logged in.

Even better if it could give me some warning about where I am in the queue and when the call might be coming (e.g. via push notification).

I'd 1000x prefer that over any oversold AI hooked up to an FAQ.

Unfortunately it seems call centres are driven by very traditional metrics which wouldn't lead anybody to set up a system like this.


> They could do a better job of separating the two.

When support is purely viewed as a cost, then this will never happen. 99% plus of your call volume may be for the obvious things. If you offer two options all of those people are going to be more confused than they already are and you will have some of them engaged by costly humans.

> Unfortunately it seems call centres are driven by very traditional metrics which wouldn't lead anybody to set up a system like this.

You are getting at the higher-level point here. Costs need to be minimized, so you go with the cheapest vendor available and then try to squeeze everything you can out of that. If you can send someone to an AI, again, even after you've pointed them in the direction of a human, there is the possibility of deflecting further cost. Depending on the scenario, these cost savings can rack up for both the company and the support vendor. All the while, the humans doing the support or creating the solutions forming the basis of the AI get treated pretty poorly.

Support at a particular scale will start to skew this way unless there are strong forcing functions in the organization. For example, sales need to be able to sell support which is backed up by solid people and keeps getting renewed. If you offer predominantly free support then you don't have much wiggle room. When PMs and devs only focus on new features and not on fixing real issues raised by customers, or, more importantly in many ways, proactively identified by support people, then you lose support people and make toil for those remaining.

Lastly, recognizing support people as an asset will result in better behaviors and attract more talent. Many times companies struggle badly with this and then decide to just outsource it. Promoting people from support into sales or deeper tech roles over the long term can also be pretty cost-effective versus hiring outside. Many folks on HN will have done support at one point and felt they could have contributed much more in other roles.


> When support is purely viewed as a cost, then this will never happen. 99% plus of your call volume may be for the obvious things.

Maybe, but that's not my experience. I worked at a telco, and as developers we had to sit in on support calls a few times to help identify areas that could be improved with minor effort. The majority of the calls I listened in on during a given day had to be assigned to an engineer. The remaining callers just wanted a better deal or help reading their bill.


> Still doesn’t piss me off as much as automated phone support systems.

Yes, the AI voice bot is marginally better because I can request "customer service" without waiting to discover the right numeric code. That's about the extent of that.


I accidentally discovered a cheat code a few years ago, interacting with one of those IVR voice systems, as I was getting frustrated with it and eventually exclaimed "fuck!" -- its response was brilliant: "Alright.. ok.. it looks like you're having troubles. Please hold while I transfer you to a human operator."


I’ve had that work a few times, but other times, I can verbally abuse the bot all I want to no avail.


It can be quite useful in a weird way. While everyone else is stuck on the chatbot going round in circles, usually typing something like "human" or "talk to human" will make the chatbot connect you with an actual human representative (or ask you a few basic questions first and then connect you).

I've used the trick on various large companies' websites when trying to get support and it seems to be quite 'universal'.


Reminds me of how I used to occasionally see people on the bus shouting into their phones "OP-ER-AY-TOR!" "REP-RE-SENT-UH-TIVE!" "HUE-MAN!"


It will be universal until people start using it, and then it will be removed, because the point of these systems is to keep you away from humans.


Yep. Hitting "0" or "9" used to directly connect you to a human CS rep across many large companies' phone systems. Then one day they all moved to obscure it behind several levels of number tapping.


Then https://gethuman.com/ was born


Honestly I've had plenty of real-world support people who were just as bad, if not worse, than the AI bots. Recently I had an experience with paid Microsoft support (for work) so bad that we just stopped even talking to them. It didn't use to be this way: if we had a data corruption issue with SQL we'd talk to an engineer who worked on SQL Server at Microsoft; now we talk to some third-party company's imitation of an engineer who is vaguely aware that SQL exists.


And this is why chatbots are very attractive to the executives who view support technicians as humans who exist simply to apologize to a customer and follow a script.


We have a chatbot on our marketing site but it says something like "I am a bot. Once I ask you a couple of questions, I can connect you to a human. Is that ok?".


I like that better than fighting with a bot to trigger an actual human getting online. However, a few short questions and then connecting to a human sounds like a form and a submit button.


But forms are so 2008. We need innovation for its own sake -- how else are the leads and techbros gonna justify their paycheck?


If the end result is getting to chat with a human, that's fine. But so many of these are just a different interface to search the FAQs, and the end result is to link to the FAQs. That's useless.


What happens if someone responds with, "No?"


'Ok, great! Let's get started! [Ten emojis]'

(Probably)


I actually work on a chatbot for a big company [1], and I feel like chatbots are substantially better when they are more targeted and less conversational. For example, I'm perfectly happy to use a chatbot and type "return something", since that's relatively easy to parse correctly, and once you're in the right flow it works just fine.

Where I feel like chatbots get bad is when they try super hard to fool you into thinking you're talking to a human. At that point, I totally agree, just give me a human.

[1] It's probably not too hard to find out which company, but I do ask that you do not post it here if you do.
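
A minimal sketch of what "targeted and less conversational" might look like (the intent names and trigger phrases are invented for illustration, not this commenter's actual system): rather than free-form dialogue, match the message against a small set of known intents and drop straight into the matching flow, handing off to a human when nothing matches.

```python
# Hypothetical intent table: flow name -> trigger phrases.
INTENTS = {
    "return_item":  {"return", "refund", "send back"},
    "order_status": {"where", "track", "status", "shipped"},
    "order_failed": {"failed", "declined", "didn't go through"},
}

def route(message):
    text = message.lower()
    # Score each intent by how many of its trigger phrases appear verbatim.
    scores = {
        intent: sum(phrase in text for phrase in phrases)
        for intent, phrases in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to a human when nothing matches at all.
    return best if scores[best] > 0 else "handoff_to_agent"

# "return something" is easy to parse correctly with this approach,
# which is exactly why narrow flows work better than open chat.
print(route("I want to return something"))
```

The appeal of this design is that a wrong match is cheap to escape (the user just rephrases or asks for an agent), whereas an open-ended conversational bot fails in ways that are much harder to recover from.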


If it's something direct like "return something" then what benefit is there over using the website's interface?

Is it just this interest in doing everything by whatsapp?


> If it's something direct like "return something" then what benefit is there over using the website's interface?

There's no "benefit" exactly, the item gets returned the same way regardless, but it's kind of nice that it's consolidated. The chatbot works as a bit of a "one stop shop" for a lot of administrative stuff like "where's my order" or "return something" or "my order didn't go through", stuff like that.

AFAIK we don't support doing anything through Whatsapp, just our site.


Same. But in fairness I like the latest AMZ chatbot. Not because it's smart, but because it doesn't try to be.

I wanted to return a package that for whatever reason was marked as not returnable, despite being sold by Amazon and having the typical return policy. I clicked to get help, confirmed the item, and it just said 'OK, I've refunded the amount of xx.xx'.


Bots do not serve ANY purpose in most interactions with people. They are capable of a limited set of tasks and should be used carefully. Mainly they piss people off, and if a bot could handle the interaction, so could a website.

Previously I worked for a company that took pride in not being like the big players and doing things the right way, but apparently that has fallen completely apart. I know it's not the same thing exactly, but it made me a little angry to see some of their web pages with a text saying: "Blip Blop, I'm a tiny bot and I've translated this page. I don't always get it right, but I'm learning". Just leave the reasonable English version or do a proper translation instead of this automated crap, and don't try to excuse bad translation and messy language with "I'm learning". If you KNOW the bot makes enough mistakes that you have to warn people, then maybe it's not ready yet.


Except the alternative is rarely the actual person you need.

Before chatbots it was endless phone trees. Before phone trees it was overseas operators rerouting you around departments. Before that it was unpaid interns putting you on hold until you got disconnected or gave up.

The game has always been to make it as hard as possible to reach the most costly level of support.


Chatbots are really hot in customer service and internal helpdesk applications, because there is that belief that they will offload interactions from hitting a real agent.

I'm skeptical, because the chatbots built to do that are often so bad that people just spam "agent" or "operator" or whatever they have discovered is the magic word to shortcut the bot, the same way that they do with voice phone trees.

You could probably build some decent chatbots if you had strong domain knowledge to draw on and skilled developers building them. But that's not usually the case; it's most often farmed out to a team attached to the Cognizant or TCS or Cap Gemini type outfit that is already handling that function, who are not terribly skilled, don't care, and are viewed as a cost center. So it is usually a poor result.


But then the real person functions like a bot because they’re following a script.



