How To Train Conversational AI: Getting Started

Post by Joann Klaas
Published: January 4, 2021

In this post, we examine how conversational AI is trained. It is often argued that conversational AI will replace the human workforce and cause unemployment across many sectors. It is true that an increasing number of companies are looking for more efficient ways to handle customer service, but that does not mean humans will be replaced in the near future. Behind every successfully automated customer service operation lies an active support team. The core intention behind automation is to make the business more efficient, and optimized solutions usually generate more business opportunities, which pushes companies to hire more people. This still-nascent industry has brought many new disciplines into the daylight.

In some ways, artificial intelligence resembles a human brain. Both require regular care, maintenance and energy. Both can make a certain number of decisions per unit of time and have limited memory (although AI has no problem memorizing or deleting data from easily extendable storage). Back in 2017, Google DeepMind’s AlphaGo proved that artificial intelligence can already outperform humans at narrow tasks such as strategic board games. Still, in terms of general intelligence, adaptability and abstract thinking, humans remain unbeatable. Ben Dickson suggests that we should stop comparing AI to human intelligence altogether. One thing is certain: the symbiosis between artificial and human intelligence is here to stay. AI tends to shine in performance stability and in specific tasks such as data processing. Dickson argues that if we manage to break a task down into data, AI can learn it.

One may assume that AI solutions are naturally less biased in decision-making than their human counterparts. Decision-making based on large amounts of data rather than on emotions or opinions does seem like the way to go. However, AI today is more an extension of someone’s vision and ideas than a completely autonomous thinking entity. Craig S. Smith writes that bias is an unavoidable feature of life and usually occurs when the view of the world is limited to that of a single person or group. In conversational AI, bias in decision-making can arise from several sources. First, problems may emerge around gender- or race-related questions. In addition, the dataset could be deliberately fabricated to serve someone’s agenda. It is probably impossible to completely prevent the misuse of artificial intelligence, since the problems stem from society itself. Still, to achieve the best results, multiple developers and trainers with different backgrounds should be involved.

Furthermore, AI is influenced by insufficient knowledge or by emphasis on the wrong data. Not every piece of received data is usable for improving the AI. Some phrases should be deleted or edited, including anything containing sensitive or personal information. For example, John Smith may have a problem with his bank account and share his personal credentials (including the account number) with the chatbot so that customer support agents can start working on his ticket. While detailed personal information may help human agents take immediate action, it provides no benefit to an anonymous, automated AI system. Moreover, sensitive information left in training data can become a security issue. Fortunately, it is possible to implement automated data-deletion routines that regularly scan datasets and erase sensitive data.
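As a rough illustration, such a routine could scan stored training phrases and mask anything that looks like an account number or an email address before the data is used. The patterns and function names below are assumptions made for this sketch, not part of any particular product; real deployments would need locale-specific rules or a dedicated PII-detection service.

```python
import re

# Hypothetical patterns for sensitive data (assumption for the sketch).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

def scrub(phrase: str) -> str:
    """Replace anything that looks like sensitive data with a placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        phrase = pattern.sub(f"<{label}>", phrase)
    return phrase

# The account number disappears, while the intent of the message stays intact.
print(scrub("My account DE89370400440532013000 is blocked, please help"))
# -> "My account <iban> is blocked, please help"
```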

In addition, unnecessarily long messages that contain irrelevant information can be shortened in order to maintain the AI’s intent-matching accuracy. Some customers are wordier than others and bring up every detail before and after the actual problem:

“Good morning! My name is Sara Smith. Can I ask you a question? Yesterday, I visited your wonderful office. The weather was also very nice! I wanted to learn more about mortgages. Unfortunately, I had to wait 40 minutes as the queue was long. However, eventually I got the answer and the customer service was splendid! Could you connect me with human again? I have some extra questions about the loan. You have a great company, keep up the good work!”

Here we can see many different intents in the phrase. Most of them are irrelevant, and the overall sentiment keeps changing. For example, the descriptions of the office and the weather are positive, the time spent waiting in the queue radiates negative emotion, and the sentiment then turns positive again with the compliments on the customer service and the company. Copying every single sentence from the conversation into the training data is therefore not the best idea. The key is to include only the relevant information fragments. In this case, the only important information is the question about loans and the wish to discuss the topic with a human agent. The final phrase in the training data should be something like this:

           “Could you connect me with human again? I have some extra questions regarding the loan.”
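In practice this trimming is often a manual editing step or handled by the platform’s own tooling, but the idea can be sketched with a simple filter that keeps only the sentences mentioning terms tied to the intents you actually train on. The keyword list and helper below are illustrative assumptions, not a prescribed method.

```python
import re

# Illustrative keywords for the intents this bot covers (assumption).
RELEVANT_TERMS = {"loan", "mortgage", "human", "agent", "connect"}

def trim_message(message: str) -> str:
    """Keep only the sentences that touch on a known intent."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", message) if s.strip()]
    relevant = [s for s in sentences
                if any(term in s.lower() for term in RELEVANT_TERMS)]
    return " ".join(relevant)

example = ("Good morning! My name is Sara Smith. Can I ask you a question? "
           "Could you connect me with human again? "
           "I have some extra questions about the loan.")
print(trim_message(example))
# -> "Could you connect me with human again? I have some extra questions about the loan."
```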

AI training can be divided into two stages: the initial stage before launch, and the period after going live. There are different data sources for training. Ideally, emails or conversations between live agents and customers should be used, but this is not always possible right away. During the initial stage there may not be many real customer enquiries, so some data may need to be generated. For starters, the company’s employees could write down the 10-20 most common topics and link the most typical questions to them, and the results could then be summarized for further analysis. If you understand how and what most of your customer base would ask, even a lack of starting data should not be a problem.
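A minimal way to capture that exercise is a plain mapping from topics to typical questions, which most conversational AI platforms can import in one form or another. The topic names and phrasings below are invented for the example.

```python
# Hypothetical seed data gathered from employees before launch.
seed_data = {
    "mortgage_info": [
        "What is the current mortgage interest rate?",
        "How much can I borrow for a house?",
        "What documents do I need for a mortgage application?",
    ],
    "card_blocked": [
        "My debit card is blocked, what should I do?",
        "I entered the wrong PIN three times and now the card does not work.",
    ],
    "talk_to_human": [
        "Could you connect me with a human agent?",
        "I want to speak to customer support.",
    ],
}

# Quick sanity check of how much material each topic has so far.
for intent, examples in seed_data.items():
    print(f"{intent}: {len(examples)} training phrases")
```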

In any case, the most important topics should be separated out and prioritized first. Methods like a Topic Rating Matrix can be used for topic discovery and prioritization. The good news is that once the bot is up and running, training becomes easier and more precise.
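The post does not spell out how such a matrix is built, but one common approach is to score each topic on a couple of simple axes, for example how often it comes up and how feasible it is to automate, and rank topics by the combined score. The topics, scores and weighting below are made up purely for illustration.

```python
# Hypothetical 1-5 scores per topic: how frequent it is and how easy it
# would be to automate; multiplying them is an arbitrary choice for the sketch.
topics = {
    "mortgage_info": {"frequency": 5, "automatability": 3},
    "card_blocked":  {"frequency": 4, "automatability": 5},
    "talk_to_human": {"frequency": 3, "automatability": 5},
}

def priority(scores: dict) -> int:
    return scores["frequency"] * scores["automatability"]

# Rank topics from highest to lowest priority.
for topic, scores in sorted(topics.items(), key=lambda t: priority(t[1]), reverse=True):
    print(f"{topic}: priority {priority(scores)}")
```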