Effortless interaction between humans and computers, so smooth that the humans don’t even realize they’re conversing with a machine. That’s the promise of chatbots. And yet, the reality today is rather less impressive.
While some will tell you that chatbots are going to change how you do business, there's a good chance that today's chatbots will leave customers disappointed the moment they present a complex query. Here's why.
Chatbots are failing the Turing Test
Context is everything. As humans, we behave according to the context of our current environment. Give someone a website to navigate and they’ll serve themselves. Give them a chat box and they’ll expect a conversational partner who has an implicit understanding of their circumstances. In particular, they’ll expect a shared cultural context.
Think about the common complaints about offshore contact centers: linguistic and cultural barriers make simple interactions hard. Now, consider how much more difficult those interactions are in a conversation where the other party isn’t from a different culture but instead has no culture and no understanding.
Tech magazine The Memo recently tested a chatbot provided by the UK’s National Health Service. Aimed at new mothers, the chatbot promised to offer advice for feeding newborns. Even simply phrased questions that were directly on-topic prompted the response:
“Sorry, I didn’t quite get that. I’m just a bot after all.”
It turned out that the “chatbot” was capable only of responding to a list of canned questions. The promise made implicitly by the chat format was of a free-flowing conversation. In reality, a frequently asked questions page would have served everyone better.
Organizations that offer chatbots are making a contract with their customers: if you use this service, it will interact like a human being. Fulfilling that contract requires at least two things:
- Every allusion, reference, and metaphor must be “understood” by the chatbot.
- It must have instant access to all the knowledge required to be expert in the topic of the conversation.
Computer scientist Alan Turing famously devised a simple test to determine a computer’s ability to behave intelligently. In it, two humans would sit in separate rooms and take part in a text-based chat with each other and with a computer. If the humans could not tell which of their conversational partners was a computer, then the AI was said to have passed the Turing Test. That was in 1950. Almost seventy years later, many so-called chatbots would not pass.
Natural language processing can do only so much
The first step in creating a convincing chatbot is to closely mimic how humans communicate. The area of computer science addressing that challenge is called natural language processing.
Computers are stupid. Or, more accurately, they have no imagination. That’s one of the really great things about them. They take instructions and execute those instructions precisely and quickly.
As we know, human language is imprecise. Think about the following sentence:
“The trophy doesn’t fit into the suitcase because it’s too small.”
A human will immediately understand that the object that is “too small” is the suitcase. That’s because, when reading the sentence, we imagine the world it describes. To do that we draw on our understanding of the world as we have experienced it; that model is what you might refer to as common sense.
Now let’s change a single word:
“The trophy doesn’t fit into the suitcase because it’s too big.”
Again, our model of the world described by the sentence informs us that the object that is “too big” is the trophy.
To a computer with no common sense, both sentences are not just imprecise; the pronoun can't be resolved at all. This type of sentence, where the meaning hinges on a single ambiguous pronoun and changing one other word flips whether that pronoun refers to the sentence's subject or its object, is known as a Winograd schema. It's such a tough problem in natural language processing that Nuance Communications sponsors an annual event, the Winograd Schema Challenge, in which AIs compete to resolve the ambiguity.
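To see why this is hard for software, consider a common baseline heuristic for pronoun resolution: bind "it" to the most recently mentioned noun. The sketch below is a toy illustration, not a real library; the function and variable names are invented for this example.

```python
def naive_resolve(sentence, candidate_nouns):
    # Toy heuristic: bind the pronoun "it" to the most recently
    # mentioned noun. It ignores the sentence content entirely,
    # so swapping "small" for "big" cannot change its answer.
    return candidate_nouns[-1]

nouns = ["trophy", "suitcase"]
small = naive_resolve("The trophy doesn't fit into the suitcase because it's too small.", nouns)
big = naive_resolve("The trophy doesn't fit into the suitcase because it's too big.", nouns)
# The heuristic says "suitcase" both times, but that is correct only
# for "too small"; for "too big" the referent is really the trophy.
```

No amount of syntactic cleverness fixes this: the two sentences have identical structure, so telling them apart requires knowing that big things don't fit inside small ones.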
This is just a glimpse into the challenges of having chatbots understand and use human language. While there are libraries to help with natural language processing, it’s far from being a solved problem. If your chatbot can’t understand your customers then you’re setting expectations that you can’t meet.
That Chatbot’s AI is actually a bunch of IF statements
Pretty quickly, any code gets to a point where a decision must be made between two alternatives.
If the user presses 1, turn the screen pink. Otherwise, turn it green.
In traditional software, the programmer must anticipate and prepare for every possibility that they want their code to handle. Computers are stupid, remember?
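The screen-color example above, written out as literal branching code (the function and return values are just for illustration):

```python
def handle_keypress(key):
    # Traditional branching: every outcome is spelled out in advance
    # by the programmer. Nothing here is learned or inferred.
    if key == "1":
        return "pink"   # the user pressed 1: pink screen
    return "green"      # anything else: green screen
```

Every behavior the program exhibits was anticipated and hand-written; an input the programmer never imagined simply falls through to the default.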
AI promises something different. With AI, the software learns for itself. The promise is that the programmer doesn’t have to anticipate all eventualities because the AI will learn by itself how to handle unfamiliar situations.
At the cutting edge of AI are projects where computers really are learning for themselves. iCub is a small robot that is able to learn simple tasks, such as recognizing objects. It doesn’t have free-flowing conversations about your checking account or make suggestions for which bouquet of flowers your mother would most like.
That’s because what many people call AI (in the context of chatbots) is, in fact, nothing more than a very fancy set of IF statements. Someone has anticipated the likely questions and programmed how to respond to them (that’s why automated FAQs are one use case where chatbots are very effective). That, mixed with the challenge of natural language processing, leads to stunted conversations that are far from the promise of artificial intelligence.
And that, perhaps, is the main cause of disappointing chatbot experiences: a chat window brings with it promises of free-flowing communication. It’s a forum for handling questions and situations that haven’t been anticipated.
At the cutting edge, there are chatbots that can maintain limited conversations and show glimpses of what will be available to businesses in just a few years. But are they ready to field complex customer service questions autonomously? In a world where customer service quality is a huge factor in influencing customer loyalty, maybe it’s not worth the risk with today’s chatbot technology.
Augment, don’t replace
Instead of asking chatbots to go beyond their capabilities, we can use AI to augment human contact center agents. Tools such as IBM Watson can already integrate with Nexmo’s Voice API, opening the possibility of an AI agent that assists rather than replaces people. That’s why AI features heavily in the future of the contact center, but the emphasis—for today—is on making human agents more efficient by anticipating and supporting customer needs. Rather than having customers converse with a chatbot about their checking accounts, they should have those conversations with human agents who have an AI assistant. The AI assistant can quickly provide them with answers and the agent can use their human experience to make the final decision on what to share with the customer.
Chatbots are fun to play with, but a hybrid approach, using AI to assist human agents, avoids entrusting precious customer relationships entirely to unthinking computers. However we look at it, the contact center of the future will be transformed by AI; we just shouldn't expect too much from chatbots today.
This post was written by Glen Kunene