I don't have any evidence this is the case, but my general assumption is that there are humans there as well.
Since people know they're only getting a chatbot, they stick to the simple questions a chatbot can answer, which weeds out a lot of support requests. As soon as the bot is stumped, it forwards the conversation directly to the pool of humans - a smaller pool than usual, because there are now fewer requests for them to handle.
The response goes back as though the bot did the thinking, which, in some ways, it did - in the same way that if someone asked me a question I couldn't answer, I might google it and then respond.
If this is the case, it may be slightly dishonest, but as long as people are getting the support they need, I don't necessarily think there's anything wrong with it.