Redefining customer support through conversational AI for Spa Ceylon - A Case Study

March 13, 2023


Introduction

Spa Ceylon is one of the world's largest luxury Ayurvedic wellness chains, with over 100 stores globally. For the past few years, Spa Ceylon has been experiencing tremendous growth, with an ever-increasing customer base spanning the world.


Ten years ago, e-commerce consumers were prepared to wait days for a response to an inquiry before finalizing a purchase, and customer support agents could leisurely sort through inquiries and answer each customer one by one. That luxury simply isn't available to modern e-commerce retailers like Spa Ceylon. The digital experience of today's consumers is shaped by services like search engines and social media, which put the information they need at their fingertips within milliseconds, and these consumers expect the same kind of swift response from the brands they interact with. For retailers like Spa Ceylon, maintaining this speed and efficiency of engagement is not only about matching customer expectations but also, to a large extent, about retaining customers in the long run by providing a superior consumer experience.


This case study is about how Spa Ceylon partnered with us at Rootcode AI to build their first conversational AI assistant, one that not only resolves repetitive inquiries but also proactively recommends products and answers dynamic product-related questions, just like a human support agent.

The inquiry overload challenge

One of the biggest challenges the support team at Spa Ceylon faced as a result of this rapid growth was the massive increase in the number of customer support inquiries arriving through their e-commerce website and other social media channels. With a customer support team of 15 agents, Spa Ceylon was receiving nearly 2,000 inquiries per day. This quickly became problematic: on one hand, customer inquiries were left unanswered, resulting in higher churn rates; on the other hand, support agents were overloaded answering a massive volume of repetitive inquiries. We call this problem the inquiry overload bottleneck.


The traditional solution for the inquiry overload bottleneck was simply to scale up the customer support team. But this generic solution presented three major drawbacks.


  • Scaling up the customer support team in proportion to the volume of inquiries received would skyrocket the operational costs of the business over time.
  • Each individual support agent can only handle one customer at a time, so customers still end up queued while waiting for an answer.
  • Newly recruited support agents cannot understand and resolve customer inquiries efficiently until they gain extensive knowledge of and experience with both the brand and its domain.

Considering these bottlenecks, Spa Ceylon wanted to take a different approach to solving the problem: a chatbot that can resolve common customer inquiries. Spa Ceylon partnered with Rootcode AI to build the solution. After our initial discussions and workshops with Spa Ceylon's leadership team to understand their domain, problems, and goals, we decided to build an end-to-end conversational AI assistant that can not only resolve basic FAQ inquiries but also engage in contextual conversations with customers to recommend products and provide suggestions proactively, so that human agents can focus on what they do best: conversations where human intelligence and understanding are essential.


How we built it


  • Challenge 1: Analyzing 10 years of conversation data in 10 days

Our first goal in the solution roadmap was to discover patterns in customer inquiries, identifying the different dynamic contexts around which customer conversations are centered and the most frequent types of conversation flows. To identify these factors, we acquired more than 10 years of Spa Ceylon's customer support conversations from all of their web and social channels. To make sure personal sensitive information was not exposed, every conversation was passed through an anonymization pipeline that masks such information. After anonymizing and exporting more than 100,000 conversations, we set out to perform an extensive exploratory analysis of the data to uncover the required insights within 10 days.
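
To illustrate the idea behind the masking step, here is a minimal sketch of how personally identifiable substrings such as email addresses and phone numbers could be replaced with placeholder tags. The patterns and the mask_pii helper are hypothetical stand-ins for this example, not the production pipeline.

```python
import re

# Illustrative masking rules; a real pipeline covers many more PII types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s\-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace personally identifiable substrings with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(mask_pii("Hi, you can reach me on +94 77 123 4567 or jane@example.com"))
# -> "Hi, you can reach me on <PHONE> or <EMAIL>"
```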


We first cleaned the data using classical pre-processing techniques and then applied unsupervised clustering algorithms like K-Means and DBSCAN to identify different conversation topics and segment them into clusters for further analysis. The segmented clusters were then analyzed in more depth using context-aware embedding models like BERT to identify contextual trends within conversation topics and segments.
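
As a condensed sketch of this step, the snippet below embeds anonymized messages with a BERT-family sentence encoder and groups them with K-Means. The model name, cluster count, and sample messages are illustrative assumptions rather than the exact configuration used in the analysis.

```python
from sentence_transformers import SentenceTransformer  # BERT-family encoder
from sklearn.cluster import KMeans

# A handful of anonymized example messages; the real analysis ran over 100,000+ conversations.
messages = [
    "What's the price of the Neem Tea Tree Cream?",
    "Do you accept cash on delivery?",
    "Can you recommend a shampoo for dry hair?",
    "Is my order shipped yet?",
    "I'm looking for a cream that helps with pimples.",
    "How long does delivery take to Colombo?",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(messages)                              # contextual sentence vectors
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(embeddings)

for label, message in sorted(zip(labels, messages)):
    print(label, message)                                          # messages grouped by topic cluster
```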


By the end of our analysis, we had derived 3 major types of dynamic conversations that the conversational agent has to handle, along with more than 30 frequently asked questions with static answers.


Dynamic Conversation Categories


A dynamic inquiry is one in which the customer asks open-ended questions about specific products or product categories, and the conversational agent has to respond using its knowledge of products and product category hierarchies rather than a fixed response.


1. Direct product inquiries: These are scenarios in which a customer wants to know more about a particular product. Good examples would be, "What's the price of the Neem Tea Tree Cream?" or "I want to know whether the Pure Aloe Shampoo is paraben-free."


2. Categorical product inquiries: These are instances in which the user is not sure about the exact product they are looking for but is trying to explore a particular category of products like "Haircare" or "Skincare". These inquiries are generally very non-specific and open-ended, so the conversational AI assistant has to ask the user several levels of questions in order to recommend the right set of products based on their interests. A good example would be, "I am looking for a good herbal shampoo, do you have any recommendations?"


3. Medical inquiries: Since most of Spa Ceylon's products have medicinal properties, some customers inquire about products that help with specific medical conditions such as eczema. The conversational agent then finds the products associated with the medical condition and recommends them to the user. An example would be, "I've been having pimples for a long time, can you suggest a good cream that I can use?"


Static Conversations


Static conversations are inquiries in which the user asks a frequently asked question that can be answered with a fixed response. A good example would be, "Do you accept cash on delivery?" Since all Spa Ceylon products can be ordered with cash on delivery, a default answer is provided to the user.
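
A minimal sketch of how such static intents can be served is shown below: a lookup table maps each FAQ intent to its fixed answer. The intent names and the second answer are illustrative assumptions for this example.

```python
from typing import Optional

# Fixed answers for static FAQ intents; intent names are assumed for illustration.
STATIC_RESPONSES = {
    "cash_on_delivery": "Yes, all Spa Ceylon products can be ordered with cash on delivery.",
    "store_locations": "You can find our store locations on the Spa Ceylon website.",
}

def answer_static(intent: str) -> Optional[str]:
    """Return the fixed answer for a static intent, or None if there is no static answer."""
    return STATIC_RESPONSES.get(intent)

print(answer_static("cash_on_delivery"))
```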


  • Challenge 2: Building a dynamic product knowledge graph

Support agents on Spa Ceylon's customer support team are generally experts on the brand's entire range of products and product categories. They have acquired this extensive product knowledge through a long period of learning and practice, which enables them to answer and resolve any inquiry about Spa Ceylon's wide array of more than a thousand products. For a conversational AI agent to engage in dynamic conversations and recommend products the way a human does, it needs to be able to look up a similarly rich, dynamic knowledge base.


Architecting and constructing this knowledge schema was one of the biggest challenges we faced, primarily because such an information-rich, dynamic knowledge base should not only contain information about a product's attributes, like its price and ingredients, but also represent the product's relationships to other products and their attributes. This enables the conversational agent to ask the user the right set of questions so that it can filter and recommend products that relate closely to their needs.


We researched various data structures that could be used to construct the knowledge base. The main constraints we had to consider when choosing the data structure were the complexity of representation and the speed of information recall.


The complexity of representation captures the inherent complexity of the data structure and how easily it can be understood by a human. This was an important factor because, when migrating product-related information from Spa Ceylon's product tables, the simpler the representation, the easier it is for both engineers and support agents to update the knowledge base with new information, for example when a novel product category is added. The speed of information recall, on the other hand, is the time the conversational AI agent needs to search the knowledge base and retrieve the information relevant to an inquiry. For example, if a customer says, "Can you recommend me a good cream for my dry skin", the conversational agent has to find "creams" associated with "dry skin" in the knowledge base to recommend the right products. If retrieving this information takes too long, the entire search becomes extremely slow and users are left waiting minutes for a response instead of seconds.


After our initial research, we decided to use a graph data structure as the foundation of our knowledge base. This lets us represent individual products as nodes and define their relationships with other products and attributes as edges within the graph. We then designed and engineered custom graph algorithms to migrate Spa Ceylon's product data from traditional relational databases into our graph database, which is accessed by the chatbot.
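
As a minimal sketch of the idea, the example below builds a tiny product graph with networkx and recalls products connected to a customer concern. The node names, attribute values, and relation labels are assumptions made up for this example, not the actual schema or data.

```python
import networkx as nx

graph = nx.Graph()

# Products, categories, and customer concerns all become nodes;
# attribute values here are made up for the example.
graph.add_node("Neem Tea Tree Cream", type="product", price=3200, category="Skincare")
graph.add_node("Pure Aloe Shampoo", type="product", price=2800, category="Haircare")
graph.add_node("dry skin", type="concern")
graph.add_node("Skincare", type="category")

# Relationships are edges labeled with the relation they represent.
graph.add_edge("Neem Tea Tree Cream", "dry skin", relation="recommended_for")
graph.add_edge("Neem Tea Tree Cream", "Skincare", relation="belongs_to")
graph.add_edge("Pure Aloe Shampoo", "Haircare", relation="belongs_to")

def products_for_concern(g: nx.Graph, concern: str) -> list:
    """Recall step: walk the concern node's neighbours and keep only products."""
    return [n for n in g.neighbors(concern) if g.nodes[n].get("type") == "product"]

print(products_for_concern(graph, "dry skin"))
# -> ['Neem Tea Tree Cream']
```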


  • Challenge 3: Building a conversational engine that can handle both static and dynamic conversations.

When we as humans engage in conversation, the language-processing regions of our brain become highly active, helping us articulate our ideas and understand the context of the conversation before responding. This enables us to respond by considering the entire context of the conversation instead of only reacting to what was said last. The example conversation below illustrates this perfectly.


Customer: Hey, I would like to order a Neem Tea Tree facewash.


Support agent: Hey there, unfortunately, we do not have any Neem products in stock right now. But we do have the Aloe face cleanser, which is similar. Would you be interested?


Customer: Yes, I would like that, but how much is it?


Support agent: It's 3,500 LKR. Would you like to confirm your order?


Customer: Great, sure!


In the above example, the customer uses words like "that" and "it", which do not mean anything specific on their own; nevertheless, considering the context of the conversation, the support agent understands that by "that" the customer means the "Aloe face cleanser" mentioned before. Although for humans this is trivial and does not even require conscious effort, embedding this type of contextual intelligence within the conversational agent's language understanding engine is vital for it to resolve customer inquiries intelligently without disrupting the conversation.



The greatest challenge in building such a context-aware conversation engine was that, on one hand, it has to retain the entire history of the conversation in memory and consider it before generating a response, while on the other hand it also has to connect to and traverse our knowledge graph to identify product relationships and ask the user questions that narrow down their recommendations.


The conversation engine we designed and developed resolves every dimension of the problem by using 3 major components - the NLU inference engine, the context cognition engine, and the response synthesizer.


NLU Inference Engine: The NLU (natural language understanding) engine uses a combination of transformer-based language models like BERT and custom intent classification models to classify the intent of an inquiry and then extract useful keywords from it, such as product names, categories, or even medical conditions.


Context Cognition Engine: The context cognition engine receives the classified intent and extracted keywords from the NLU inference engine, considers the context of the conversation by looking at each step of the conversation history, and then initiates the search through the knowledge graph to identify the right response.


Response Synthesizer: The response synthesizer uses custom language generation models to generate responses that would be dispatched to the customer.
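
The simplified sketch below shows how the three components could fit together around a shared conversation state, so that references like "it" resolve to the most recently discussed product. The product list, the rule-based intent logic, and the templated replies are illustrative stand-ins for the actual models described above.

```python
from dataclasses import dataclass, field
from typing import Optional

KNOWN_PRODUCTS = ["Neem Tea Tree facewash", "Aloe face cleanser"]  # assumed examples

@dataclass
class ConversationState:
    history: list = field(default_factory=list)   # full turn-by-turn history
    last_product: Optional[str] = None             # most recently discussed product

def nlu_inference(message: str) -> dict:
    """Stand-in for the NLU inference engine: intent classification + keyword extraction."""
    lowered = message.lower()
    intent = "product_price" if ("price" in lowered or "how much" in lowered) else "product_inquiry"
    products = [p for p in KNOWN_PRODUCTS if p.lower() in lowered]
    return {"intent": intent, "products": products}

def context_cognition(nlu: dict, state: ConversationState) -> dict:
    """Stand-in for the context cognition engine: resolve references such as 'it'
    against the conversation state before querying the knowledge graph."""
    product = nlu["products"][0] if nlu["products"] else state.last_product
    return {"intent": nlu["intent"], "product": product}

def response_synthesizer(resolved: dict) -> str:
    """Stand-in for the response synthesizer: templated replies."""
    if resolved["intent"] == "product_price" and resolved["product"]:
        return f"Let me check the price of the {resolved['product']} for you."
    if resolved["product"]:
        return f"Happy to help with the {resolved['product']}. What would you like to know?"
    return "Could you tell me a little more about what you are looking for?"

state = ConversationState()
for turn in ["Hey, I would like to order a Neem Tea Tree facewash",
             "Yes, I would like that, but how much is it?"]:
    resolved = context_cognition(nlu_inference(turn), state)
    state.history.append({"user": turn, "resolved": resolved})
    if resolved["product"]:
        state.last_product = resolved["product"]
    print(response_synthesizer(resolved))
```

In the second turn, "it" carries no product name, yet the engine still answers about the Neem Tea Tree facewash because the conversation state remembers what was discussed last.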


  • Challenge 4: Continuous autonomous retraining

Modern e-commerce brands like Spa Ceylon release new products frequently. Adapting to these novel categories of products is a challenge even for human support agents, who have to spend time learning about each new product category that is released. The same challenge applies to our conversational agent: the system's underlying models have to be retrained consistently to make sure they are aware of new and updated product information. If the models are not retrained consistently, their aggregate performance begins to degrade due to a phenomenon known as model drift. To avoid this steady degradation in accuracy, we designed and developed a continuous autonomous retraining pipeline that is triggered automatically whenever Spa Ceylon makes a change to their product table. The retraining pipeline consists of 3 main stages - data migration, dataset synthesis, and model retraining.


Data migration: This is the first part of the pipeline. Once a change is made in Spa Ceylon's product database, a webhook trigger starts a data migration process in which the newly added products and product categories are preprocessed and migrated into our graph knowledge base so that the conversational assistant can access them.


Dataset synthesis: Once the product data is migrated, the next task is to generate synthetic conversations involving the newly added products and product categories. To generate these synthetic conversations, we use custom deep-learning-based generative models that can produce hundreds of synthetic conversations around the newly migrated products.


Model retraining: This is the final part of the pipeline, in which the generated data is fetched and used to train the language models in both the context cognition engine and the NLU inference engine. Once the models are trained and ready to be deployed, they are swapped in for their older versions using a rolling update strategy, so that the older models can be replaced phase by phase without causing any downtime for users.
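
The schematic sketch below chains the three stages behind a single webhook handler. The function names, the payload shape, and the template-based synthesis are illustrative assumptions, not the production pipeline code.

```python
KNOWLEDGE_BASE = {}   # stand-in for the graph knowledge base

def migrate_products(changed_products: list) -> None:
    """Stage 1: preprocess the changed rows and upsert them into the knowledge base."""
    for product in changed_products:
        KNOWLEDGE_BASE[product["name"]] = product

def synthesize_conversations(changed_products: list) -> list:
    """Stage 2: generate synthetic training utterances for the new products
    (a generative model in production; simple templates stand in here)."""
    templates = ["What's the price of {name}?", "Is {name} suitable for sensitive skin?"]
    return [t.format(name=p["name"]) for p in changed_products for t in templates]

def retrain_and_roll_out(training_utterances: list) -> None:
    """Stage 3: retrain the NLU and context models on the new data, then swap the
    new versions in with a rolling update (both steps omitted in this sketch)."""
    print(f"Retraining on {len(training_utterances)} synthetic utterances...")

def on_product_table_change(payload: dict) -> None:
    """Webhook handler triggered whenever the product table changes."""
    changed = payload["changed_products"]
    migrate_products(changed)
    retrain_and_roll_out(synthesize_conversations(changed))

# Example trigger with a hypothetical new product.
on_product_table_change({"changed_products": [{"name": "Virgin Coconut Hair Oil"}]})
```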



Results


At Rootcode AI whenever we start a project we define a set of measurable quantitative and qualitative metrics to determine the success of the solution post-delivery. For this project we defined 3 major metrics to determine project success:


  • Average response time: After deploying the chatbot, Spa Ceylon's customer support team was able to bring their response time down from hours to seconds.
  • Response rate: Our conversational agent raised Spa Ceylon's average response rate from 78% to 100%, since the agent responds to every customer message instantly.
  • Number of escalated conversations: Only 15% of all conversations were escalated to human support agents after the conversational agent was deployed, so the customer support team can now focus on inquiries that actually require human interaction.


Conclusion


The inquiry overload bottleneck outlined at the start of this case study was one of the biggest bottlenecks Spa Ceylon faced while growing at a rapid scale. Our conversational AI agent not only made Spa Ceylon's customer experience faster and more seamless but also saved a great deal of the time customer support agents previously spent answering repetitive inquiries that could be automated. Importantly, the purpose of this chatbot was never to replace human support agents but to help them focus on inquiries that actually require human cognitive and emotional intelligence to resolve.


This project also made our team at Rootcode AI realize that the inquiry overload bottleneck Spa Ceylon faced, with customer support agents overloaded by a massive volume of inquiries, is not a problem unique to Spa Ceylon. After some research, we found that it is a general problem for nearly all e-commerce retailers with a high volume of customers, and we wanted to solve it in a way that lets every e-commerce retailer use the capabilities of conversational AI to optimize their customer support experience to the fullest. We built ConverseUp to help e-commerce businesses solve the inquiry overload bottleneck once and for all.


ConverseUp is a conversational AI platform through which e-commerce businesses can integrate their product databases from various e-commerce platforms like Shopify, WooCommerce, etc., and train and deploy their own conversational AI agents seamlessly, while providing a rich interface for human support agents to take control of conversations whenever they want. You can read more about our work on ConverseUp and how we are building an end-to-end conversational AI platform for e-commerce businesses over here.


Are you interested in using AI to solve your business problems? Let’s collaborate on your next project.

