Customers are less likely to become aggressive in response to a chatbot service failure if they are made aware early on that human intervention is available when needed, according to research by QUT.
Using data from 145 participants, associate professor Paula Dootson from the QUT Business School, co-author of the paper “Chatbots and service failure: When does it lead to customer aggression”, found that in a chatbot service failure context, telling a customer late in the service interaction that a human employee is available to help results in customer aggression.
“The capability for comprehending natural language and engaging in conversations allows chatbots not only to deliver customer services but also to improve customer experiences by lowering customers’ efforts and allowing customers to use their time more efficiently elsewhere.
“However, despite the economic benefits for companies using chatbots in service encounters, chatbots often fail to meet customers’ expectations, can undermine the customer service experience and lead to service failures.
Dootson's study, conducted with assistant professor Yu-Shan (Sandy) Huang from Texas A&M University-Corpus Christi, considers how artificial intelligence technology is changing the way services are delivered and introducing opportunities for new sources of service failure.
Dootson said: “There is still an historical expectation that real people will be available to assist customers if they encounter a technology-related service failure, but it remains unclear how the presence of human employees can influence customers’ responses to a service failure caused by chatbots.
“Our results indicate that disclosing the option to engage with a human employee late in the chatbot interaction, after the service failure, increased the likelihood of emotion-focused coping, which can lead to customer aggression.
“Unexpectedly though, we found that for customers who perceived a high level of participation, this positive relationship reversed: they were more likely to react with emotion and aggression when the chatbot service failed if they were offered the option to interact with a human employee early (compared to late) in the service interaction.
“This could be because customers with a higher level of participation often value relationship building during the service co-creation process, so they may be more likely to desire interacting with a human employee.
"So, the early disclosure of the option to interact with a human employee may signal that a service provider has the human resources to support customers but does not value the customers enough to begin the interaction that way.”
Dootson said the findings offered several practical implications for managing chatbot service encounters.
“Service providers should design chatbot scripts that disclose the option of interacting with a human employee early in the customer-chatbot interaction, thereby making customers aware of the possible human intervention prior to the occurrence of chatbot service failures.”
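The paper does not prescribe a particular implementation, but a minimal sketch of that recommendation might look like the following Python example, in which the chatbot’s opening message discloses the human-handoff option before any failure can occur (all class, method and message names here are hypothetical illustrations, not from the study):

```python
# Illustrative sketch only: a chatbot flow that discloses the human-handoff
# option in its opening message, ahead of any possible service failure.

class SupportChatbot:
    OPENING = (
        "Hi! I'm the support assistant. I can help with most requests, "
        "and you can ask to speak with a human agent at any time."  # early disclosure
    )
    FAILURE_REPLY = (
        "Sorry, I couldn't resolve that. Would you like me to connect "
        "you with a human agent now?"
    )

    def greet(self) -> str:
        return self.OPENING

    def handle(self, message: str) -> str:
        # Escalate immediately if the customer asks for a person.
        if "human" in message.lower() or "agent" in message.lower():
            return "Connecting you with a human agent now."
        # Placeholder for real intent handling; an unresolved request is
        # treated as a service failure and paired with the handoff offer.
        return self.FAILURE_REPLY


if __name__ == "__main__":
    bot = SupportChatbot()
    print(bot.greet())
    print(bot.handle("Where is my order?"))
    print(bot.handle("I want to talk to a human"))
```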