Q&A with Ian Jacobs of Forrester: Chatbots and Automation in Customer Care

Ian Jacobs, Principal Analyst at Forrester, was the guest speaker at an Astute webinar on October 17, 2019, entitled Chatbots and Automation in Customer Care: Improve Efficiency without Sacrificing Experience. In advance of the session, he answered four key questions about how organizations can enhance both customer and agent experience using this technology.

Q: As chatbots and self-service offload routine cases from agents, how can we better equip our team to handle the more complex issues that are left?

A: The first thing to do is recognize that the nature of the work you are asking agents to do is changing and changing quickly. Because those simple issues get handled by automation, the human agents not only need to deal with the more difficult cases; they must also deal with customers who have tried to self-serve and failed. Those customers have a latent level of frustration just waiting to burst forth if the agent sets one foot wrong.

So, to help these agents, you’ll need to use AI to augment their performance. There are a few approaches to consider when looking at augmentation. First, you could use AI to automatically route cases and predict case metadata such as tags, classes, and reason codes. This approach saves agents time with post-interaction wrap-up work, removing the more mundane aspects of their job and allowing them to focus on serving customers; it also helps resolve issues by creating the best match between agent and the work being asked of them. The second approach would be to use AI-enabled answers. When an agent receives a new question, they are presented with an AI-suggested response — these responses could take the form of the best knowledge article or nugget to solve the issue, or they could be actual conversational snippets that go straight to the customer. This approach saves time during the interaction and provides consistency in service across agents.

Q: What are some key considerations for implementing a chatbot that has a positive effect on the customer experience?

A: Assume that your chatbot will fail regularly. Yes, that does seem like a pretty negative way to create a better experience. But think about it this way: If you start from that point of view, you will focus on short-circuiting the processes that cause customers to become annoyed by bad experiences. How you handle those errors counts for a great deal. Some brands respond in a lightly humorous tone, something akin to “Gee shucks, I’m just a dumb chatbot. Can you explain that to me another way?” Whatever approach you use to handle these misfires, always consider how a customer can get from the chatbot to a live agent. Additionally, consider what context from the unsuccessful chatbot conversation you can provide to the agent so that the customer does not need to repeat their issue.
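
One way to avoid making the customer repeat themselves is to pass the bot’s partial understanding along with the escalation. The sketch below is purely illustrative; the field names and the escalate() helper are hypothetical, not part of any particular chatbot platform.

```python
# Illustrative sketch: package context from a failed chatbot conversation so the
# live agent sees what the customer has already said. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class HandoffContext:
    customer_id: str
    detected_intent: str    # the bot's best guess, even if confidence was low
    transcript: list[str]   # what the customer already told the bot
    failure_reason: str     # e.g. "low confidence", "unsupported topic"

def escalate(context: HandoffContext) -> None:
    """Hand the conversation to a live agent along with everything the bot learned."""
    print(f"Routing to live agent with context: {context}")

escalate(HandoffContext(
    customer_id="C-1042",
    detected_intent="billing_dispute",
    transcript=["I was charged twice for my order", "No, that's not what I meant"],
    failure_reason="low confidence after two clarification attempts",
))
```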

You’ll also want to set clear expectations with customers about the chatbot’s capabilities. This can take the form of an initial message that says something like, “Hi, I’m Botty, your virtual agent. You can ask me questions on topics like your account balance, interest rates, or the scores to the Manchester United game.” Other brands get more prescriptive by only providing a few options for the opening conversation: “Can I help you with: 1) account management; 2) payments; 3) new accounts; or 4) FAQs?” Either approach gives users a good sense of how broad the chatbot’s capabilities are. This lets them decide if the chatbot covers the ground they need or if they will need to find another way to get service.
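
As a small illustration, an opening message like the ones above can be driven by a simple configuration of supported topics. The greeting text and topic list below are hypothetical.

```python
# Illustrative sketch: generate an expectation-setting opening message from a
# configured list of supported topics. The bot name and topics are hypothetical.
SUPPORTED_TOPICS = ["account management", "payments", "new accounts", "FAQs"]

def opening_message(bot_name: str = "Botty") -> str:
    options = "; ".join(f"{i}) {topic}" for i, topic in enumerate(SUPPORTED_TOPICS, start=1))
    return f"Hi, I'm {bot_name}, your virtual agent. Can I help you with: {options}?"

print(opening_message())
```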

Q: What metrics should we be evaluating to measure the success of chatbots and self-service?

A: As with any initiative, chatbot metrics should ultimately be driven by what goals you have for the chatbot. Is it designed to drive increased conversion? Then you’ll need to look at commerce and experience-focused metrics. Do you dream of providing better-than-human-like customer service? Then the metrics should focus on effectiveness and experience. If you want your chatbot to do triage and routing, then you’d obviously care about intent detection and first-contact resolution rates at the agent level.

That said, there are some metrics that make sense no matter what you hope to achieve with the chatbot. Let’s start with the obvious: Is anyone using the chatbot? Usage metrics come in many flavors, but one that really gets to the heart of the quality of the chatbot is active users. Here you’re measuring return or frequent users during a specified time period. Essentially, this metric looks at whether customers find the chatbot appealing enough that they will come back and interact with it again and again.
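
As a purely illustrative example of measuring this, the sketch below counts active and returning users from a hypothetical session log for a given period.

```python
# Illustrative sketch: count active and returning chatbot users in a period.
# The session log and dates below are made-up sample data.
from datetime import date

sessions = [                      # (user_id, date of chatbot session)
    ("u1", date(2019, 10, 1)), ("u1", date(2019, 10, 9)),
    ("u2", date(2019, 10, 3)),
    ("u3", date(2019, 10, 5)), ("u3", date(2019, 10, 20)),
]

period_start, period_end = date(2019, 10, 1), date(2019, 10, 31)
in_period = [user for user, day in sessions if period_start <= day <= period_end]

active_users = set(in_period)                                          # used the bot at least once
returning_users = {u for u in active_users if in_period.count(u) > 1}  # came back during the period

print(f"Active users: {len(active_users)}, returning users: {len(returning_users)}")
```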

Some other key metrics to look at, illustrated in the sketch after this list, include:

  • Conversation turns. How many back-and-forths were there in the conversations? Caution: A high number could mean that the chatbot is handling somewhat complex tasks, or it could indicate that customers cannot get the chatbot to understand them and are likely frustrated. You’ll need to dig into this metric to take meaningful action on it.
  • Task or goal completion. This looks at whether the user’s request has been satisfied (e.g., was this answer helpful?).
  • Failure rate. This captures conversations where the chatbot could not complete the task or did not understand the inquiry, resulting in either user abandonment or escalation to an agent.
  • Some user satisfaction measure. Some brands survey for customer satisfaction, others for Net Promoter Score[1], and others for customer effort. Whatever you choose, recognize that a simple optional thumbs-up/thumbs-down after every answer in a conversation usually captures a higher percentage of user feedback than a post-interaction survey.
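
As a purely illustrative example, the sketch below computes several of these metrics from per-conversation records; the field names and sample values are hypothetical, not a real analytics schema.

```python
# Illustrative sketch: compute conversation turns, completion, failure, and
# thumbs-up share from per-conversation records. The sample data is made up.
conversations = [
    {"turns": 4,  "completed": True,  "escalated": False, "thumbs_up": 1, "thumbs_down": 0},
    {"turns": 12, "completed": False, "escalated": True,  "thumbs_up": 0, "thumbs_down": 2},
    {"turns": 3,  "completed": True,  "escalated": False, "thumbs_up": 2, "thumbs_down": 0},
]

total = len(conversations)
avg_turns = sum(c["turns"] for c in conversations) / total
completion_rate = sum(c["completed"] for c in conversations) / total
failure_rate = sum((not c["completed"]) or c["escalated"] for c in conversations) / total
votes = sum(c["thumbs_up"] + c["thumbs_down"] for c in conversations)
thumbs_up_share = sum(c["thumbs_up"] for c in conversations) / votes if votes else None

print(f"avg turns={avg_turns:.1f}, completion={completion_rate:.0%}, "
      f"failure={failure_rate:.0%}, thumbs-up share={thumbs_up_share:.0%}")
```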

Q: What are some best practices for evolving our chatbot capabilities to improve the experience for agents and customers?

A: The key best practice is to spend as much time choosing the right use cases upfront as you do on tasks such as vendor selection or development. Narrow and small use cases — especially ones that you have already proven can be automated by other means such as web forms or interactive voice response — will allow you to focus on the chatbot-specific education you need to undergo. If problems arise with the chatbot, you won’t need to worry about whether the use case is the problem, as you’ll already know that customers are willing to use self-service to accomplish whatever goal you’ve chosen.

Another fundamental best practice is to have your agents do a good deal of the ongoing training of the chatbot, even if they don’t know that is what they are doing. Asking agents whether the chatbot correctly determined the intent of an interaction that was escalated to them provides knowledgeable input that can be used to improve the chatbot’s performance. Similarly, in the agent augmentation example given above, by using machine learning, the agents can train the chatbot as they go about their workday tasks. When an agent accepts the input from a chatbot, the machine learning will take that as a good sign; if the agent rejects or customizes the input from the chatbot, the machine learning will take that into account when reprioritizing which knowledge or dialogue snippets to push to agents in the future.
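
A minimal sketch of that feedback loop follows; the snippet IDs, starting scores, and learning rate are hypothetical, and a real system would use a richer model, but the idea is the same: accepted suggestions rise in the ranking, rejected or rewritten ones fall.

```python
# Illustrative sketch: agent accept/reject signals nudge the scores used to
# rank suggested knowledge snippets. All IDs, scores, and rates are made up.
snippet_scores = {"kb_article_12": 0.50, "kb_article_47": 0.50}
LEARNING_RATE = 0.1

def record_feedback(snippet_id: str, accepted: bool) -> None:
    """Move a snippet's score toward 1 when accepted, toward 0 when rejected or rewritten."""
    target = 1.0 if accepted else 0.0
    snippet_scores[snippet_id] += LEARNING_RATE * (target - snippet_scores[snippet_id])

def rank_snippets() -> list[str]:
    """Order snippets so the ones agents actually use get suggested first."""
    return sorted(snippet_scores, key=snippet_scores.get, reverse=True)

record_feedback("kb_article_47", accepted=True)    # agent sent the suggestion as-is
record_feedback("kb_article_12", accepted=False)   # agent rewrote the suggestion
print(rank_snippets())
```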


Learn more about Astute's chatbot and digital self-service platform, Astute Bot.

[1] Net Promoter and NPS are registered service marks, and Net Promoter Score is a service mark, of Bain & Company, Inc., Satmetrix Systems, Inc., and Fred Reichheld.